Gesture Tracker VR Help and Feature Requests

I used an algorithm I developed myself that doesn’t use any machine learning architectures. My technique doesn’t have any academic foundation; it’s just an idea I had that I tried and tweaked until it felt good to me. I was inspired by those children’s bead-and-wire toys. The wire represents the gesture path: as long as you’re pulling the bead in roughly the same direction as the current part of the path, the bead advances along the wire, and if the bead makes it to the end of the wire the gesture is completed. This doesn’t exactly describe the algorithm, but basically, if the dot product of the tracked motion vector and the vector for the part of the reference gesture where the “bead” currently sits is greater than the Acceptable Similarity parameter, the bead advances along the wire. Gestures are stored with their yaw rotation normalized around 0, so you can do the same gesture while facing any direction (I do my best to interpret the direction the user is facing using the rotations of the tracker component).
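To make the bead-and-wire idea concrete, here’s a minimal sketch in Python. Everything here is my own illustration, not the plugin’s actual code: the `Gesture` class, the `feed` method, and the `ACCEPTABLE_SIMILARITY` value are all assumed names, and I assume motion samples are already expressed in the gesture’s yaw-normalized frame.

```python
import math

ACCEPTABLE_SIMILARITY = 0.8  # assumed dot-product threshold, not the plugin's default


def normalize(v):
    """Return v scaled to unit length (zero vector stays zero)."""
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v) if mag else (0.0, 0.0, 0.0)


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


class Gesture:
    def __init__(self, segments):
        # segments: direction vectors describing the "wire", stored unit-length
        self.segments = [normalize(s) for s in segments]
        self.bead = 0  # index of the segment the bead is currently on

    def feed(self, motion):
        """Advance the bead if the tracked motion points roughly the same way
        as the current segment. Returns True when the bead reaches the end of
        the wire, i.e. the gesture is completed."""
        if dot(normalize(motion), self.segments[self.bead]) > ACCEPTABLE_SIMILARITY:
            self.bead += 1
            if self.bead == len(self.segments):
                self.bead = 0  # completed; ready to be performed again
                return True
        return False
```

So an L-shaped gesture would be two segments, and only a rightward pull followed by an upward pull walks the bead all the way along the wire.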

There’s a lot in the details, of course, but if you want to go that far I’d just look through the source. It’s not as mathematically rigorous as other methods, but it’s cheap: recognition is O(n) in the number of gestures. It also makes continuous recognition easy, since I just have to reset the “bead” back to the start of a gesture’s “wire” every time it’s determined the gesture was not being performed. Continuous recognition is somewhat more expensive, though: it uses additional memory Θ(n) in the number of gestures (realistically this will never be more than a few kilobytes), and no gesture can ever be ruled out (unlike during normal recognition, where most gestures are ruled out almost immediately), so its recognition is Θ(n).
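For the continuous case, a rough sketch of what I mean: every gesture keeps its own bead index (the Θ(n) extra state), and a bead whose segment stops matching is sent back to the start of its wire, so every gesture has to be checked on every sample. Again, all names here are illustrative, not the plugin’s API, and motion samples are assumed to be unit vectors in the normalized frame.

```python
ACCEPTABLE_SIMILARITY = 0.8  # assumed dot-product threshold


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def continuous_step(wires, beads, motion):
    """Feed one motion sample to every gesture.

    wires: list of gestures, each a list of unit direction vectors
    beads: per-gesture bead index, mutated in place (the Theta(n) extra memory)
    Returns the indices of gestures completed by this sample."""
    completed = []
    for i, wire in enumerate(wires):
        if dot(motion, wire[beads[i]]) > ACCEPTABLE_SIMILARITY:
            beads[i] += 1
            if beads[i] == len(wire):
                beads[i] = 0
                completed.append(i)
        else:
            # not performing this gesture right now: reset the bead so the
            # gesture can start over from the beginning of its wire
            beads[i] = 0
    return completed
```

Since the loop touches every gesture on every sample, each step really is Θ(n), unlike one-shot recognition where a mismatched gesture just drops out of consideration.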
