
By joelongstreet
#186476
How does the Oculus Touch controller know the absolute position of your hand in 3D space?
By Heaney555
#186993
There are one or more (usually two) high-resolution, high-FOV, global-shutter IR cameras, connected via USB 3.0 to the same PC as the Rift. Either 60 or 120 FPS, not sure.

They can be mounted on your desk (a desk-lamp-style stand comes with it), on stands, or on the wall, but always pointing slightly inwards towards the main tracking area.

On the Touch controllers there is an array of IR LEDs (hidden under the housing in the consumer version) that flash in a specific pattern to identify the object. The IR cameras see this pattern, and computer vision algorithms fuse that optical data with the gyroscope and accelerometer data streamed from the IMU inside each controller to determine the position of the controllers down to sub-mm accuracy.
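To give a feel for the LED-identification step: each LED can blink a unique on/off sequence over a few frames, so a bright blob in the image can be matched to a specific LED. This is only a toy sketch; the actual Constellation encoding is proprietary, and the codes and function below are made up for illustration.

```python
# Hypothetical sketch: identify an LED by its on/off blink sequence
# observed over several camera frames. The real Constellation encoding
# is proprietary; these 3-frame codes are invented for illustration.

LED_CODES = {
    (1, 0, 1): "LED_0",   # each LED blinks a unique pattern
    (1, 1, 0): "LED_1",
    (0, 1, 1): "LED_2",
}

def identify_led(brightness_over_frames, threshold=0.5):
    """Map one image blob's per-frame brightness to an LED ID (or None)."""
    bits = tuple(1 if b > threshold else 0 for b in brightness_over_frames)
    return LED_CODES.get(bits)

# Example: a blob that was bright, dark, bright over three frames
print(identify_led([0.9, 0.1, 0.8]))  # -> "LED_0"
```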

The exact algorithms (called 'Constellation') are proprietary, and the general concept is quite involved, but effectively the accelerometer readings can be integrated once to estimate velocity, and integrated again to estimate the change in position. Unfortunately, that double integration means even the most minute measurement error accumulates rapidly over time (quadratically, for a constant bias), so the estimate drifts and needs to be corrected. The optical system of IR LEDs and cameras is what provides this correction.
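As a toy 1-D illustration of why the drift appears and how an absolute optical fix corrects it (a minimal sketch with a simple complementary-style correction, not Oculus's actual filter; a real tracker would use something like an extended Kalman filter in full 3D, and all the numbers here are made up):

```python
# Dead-reckoning by double-integrating acceleration, then correcting the
# resulting drift with an absolute optical position measurement.

def integrate_step(pos, vel, accel, dt):
    """One dead-reckoning step: acceleration -> velocity -> position."""
    vel += accel * dt
    pos += vel * dt
    return pos, vel

def fuse_optical(pos_imu, pos_optical, gain=0.1):
    """Pull the integrated position toward the (slower) optical fix."""
    return pos_imu + gain * (pos_optical - pos_imu)

pos, vel, dt = 0.0, 0.0, 0.001      # 1 kHz IMU samples (assumed rate)
accel_bias = 0.01                   # tiny constant sensor error, m/s^2

for i in range(1, 1001):            # one second of samples
    pos, vel = integrate_step(pos, vel, accel_bias, dt)
    if i % 10 == 0:                 # optical update at ~100 Hz (assumed)
        pos = fuse_optical(pos, pos_optical=0.0)

# Without fuse_optical, the bias alone drifts pos by ~5 mm after 1 s
# (0.5 * 0.01 * 1^2 = 0.005 m) and the error keeps growing with time;
# with the optical correction it stays bounded near zero.
print(pos)
```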

So the end result is 1:1, sub-mm accurate tracking with near zero latency.

Video on the Touch controllers: https://www.youtube.com/watch?v=s6BuN1uyq48
By Crappinni
#187088
To say a bit more about the algorithms that can be used to achieve this: the relative positions of the IR LEDs on the Touch controller are known, so given any one image of the controller, the set of possible relative poses between the camera and the controller can be narrowed down. With the movement of the controllers and additional information from the IMUs, a unique pose can eventually be determined. Subsequent pose estimates can then be computed as updates to previous ones.
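The optical half of this is essentially a perspective-n-point (PnP) problem: given the known 3D LED positions on the controller and where those LEDs land in the camera image, solve for the controller's pose relative to the camera. A sketch using OpenCV's generic solver follows; the LED coordinates, pixel locations, and camera intrinsics are placeholders, not actual Rift/Touch calibration values, and the real system does far more (LED identification, outlier rejection, IMU fusion).

```python
# PnP sketch: known 3-D LED positions on the controller + their detected
# 2-D image locations -> controller pose relative to the camera.
import numpy as np
import cv2

# 3-D LED positions in the controller's own frame (meters, placeholder values)
model_points = np.array([
    [0.000, 0.000, 0.000],
    [0.030, 0.000, 0.000],
    [0.000, 0.030, 0.000],
    [0.030, 0.030, 0.010],
    [0.015, 0.015, 0.020],
    [0.000, 0.015, 0.015],
], dtype=np.float64)

# Where those same LEDs were detected in the camera image (pixels, placeholders)
image_points = np.array([
    [320.0, 240.0],
    [360.0, 238.0],
    [322.0, 200.0],
    [361.0, 199.0],
    [341.0, 221.0],
    [331.0, 218.0],
], dtype=np.float64)

# Pinhole camera intrinsics: focal lengths and principal point (pixels)
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume lens distortion already corrected

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    print("rotation (Rodrigues vector):", rvec.ravel())
    print("translation (m):", tvec.ravel())
```

Each new frame can seed the solver with the previous pose, which is the "updates over previous poses" idea: only small corrections are needed once the pose is locked in.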