Hi. Has anyone had any luck or does anyone have any info regarding this?
Looking to use external IR sensors to track real world objects - using the Oculus tracking camera or other if need be.
thanks
Hi!
I’m interested in doing object tracking too. Unless you invest in some expensive secondary tracking system, I doubt you will be able to use the Oculus tracking camera to track anything other than what it is intended to track (the HMD and hand controllers, or whatever else they officially add in the future), unless you reverse-engineer their IR LED technology and somehow fool their system into believing your object is, say, a hand controller. But even that might not be possible, because their tracking system relies not only on the camera but also on IMU sensors in the devices. A cheap alternative to a secondary external tracking system would be to attach a hand controller to the object itself, if the size of the object allows it. This is what some people have been doing with the Vive.
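If it helps, the attach-a-controller approach boils down to a fixed rigid-body offset: measure once where the object sits relative to the controller, then every frame rotate that offset by the controller’s orientation and add its position. Here’s a rough sketch; the function names, the pose values, and the mounting offset are all made-up assumptions, not any Oculus SDK API:

```python
# Hypothetical sketch: deriving an attached object's position from the
# tracked controller's pose plus a fixed mounting offset.

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q = (w, x, y, z),
    # using v' = v + w*t + q_vec x t, where t = 2 * (q_vec x v).
    w, x, y, z = q
    vx, vy, vz = v
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

def object_position(controller_pos, controller_rot, mount_offset):
    # Object position = controller position + rotated mounting offset.
    ox, oy, oz = quat_rotate(controller_rot, mount_offset)
    px, py, pz = controller_pos
    return (px + ox, py + oy, pz + oz)

# Identity rotation: object mounted 0.1 m along the controller's local z.
print(object_position((0.0, 1.0, 0.0), (1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.1)))
# prints (0.0, 1.0, 0.1)
```

You’d measure `mount_offset` once with the object strapped in place and then reuse it every frame.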
Hmm. It’s not a particularly elegant solution (using the existing controller itself), but it might do for testing purposes. In a final installation, though, I would need something more elegant. As for reverse engineering, I think I remember someone doing this with the DK2. I also wonder if the people from the Void might be willing to share their techniques.
Sucks, but I find it hard to understand why there shouldn’t be support for this! Or, do you know why it’s expensive to use a secondary tracking system? What about using a Kinect2 with cheap IR sensors from eBay :p. I thought I’d seen some cheap mo-cap techniques online somewhere using this method.
Yes, the Kinect is a possible solution, but you still have to figure out how to recognize the objects of interest in the point cloud, since I believe it only recognizes human shapes out of the box (which is why it’s been used as cheap human mocap). A computer vision algorithm could be worked out, but then you have other problems, such as occlusion (i.e. the user’s body getting between the Kinect and the tracked object).
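To give an idea of what that custom recognition step might look like at its simplest: capture a depth frame of the empty scene first, then flag pixels that got noticeably closer and take their centroid as the object. This is just a toy background-subtraction sketch with made-up frame data and threshold, not Kinect SDK code, and you can see how occlusion bites: if the user blocks the view, no pixels match and you get nothing back.

```python
# Toy sketch: locating a new object in a depth image by background
# subtraction. Depth frames are 2D lists of millimetre values; 0 = no reading.

def object_centroid(background, frame, threshold=50):
    hits = []
    for row in range(len(frame)):
        for col in range(len(frame[row])):
            bg, cur = background[row][col], frame[row][col]
            if cur > 0 and bg - cur > threshold:  # pixel moved closer
                hits.append((row, col))
    if not hits:
        return None  # object occluded or out of view
    r = sum(h[0] for h in hits) / len(hits)
    c = sum(h[1] for h in hits) / len(hits)
    return (r, c)

# Flat wall 2 m away; an object appears ~0.5 m in front of it.
background = [[2000] * 4 for _ in range(4)]
frame = [row[:] for row in background]
frame[1][2] = 1500
frame[2][2] = 1500
print(object_centroid(background, frame))
# prints (1.5, 2.0)
```

A real version would need clustering to separate multiple objects, noise filtering, and ideally a second Kinect at another angle to cover occlusion.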