Had a chance to play with this Motion Controller binding last weekend when I was given an updated build for the HTC Vive Jam in London.
It applies well to the Hydra, PS Move, Touch, and Lighthouse controllers, or any setup where you track one point per hand. For the jam, I spent the first 30 minutes porting over my Hydra plugin, recompiling, and simply updating the positions of the Motion Controllers attached to my blueprint whenever Hydras were detected and moving. This let me test my early builds entirely with the Hydras, then walk over to the limited Vive kits and have everything work the very same, saving heaps of debugging time.
I am curious how this can be done in C++, so that the blueprint adaptation I used won't be needed. More importantly, I don't believe it addresses more complicated inputs such as the Leap Motion, or indirect positional input devices such as the Myo. That deserves more thought, and I should really get in touch on the subject. The jam was very informative, and some of the things I learned will be used in future updates; e.g. pulling events from components, similarly to how you do it for collision interaction, eliminating the requirement of adding an interface to your blueprint.
Overall, device-agnostic binding is something I deeply believe in, but I think it needs to go one step further and abstract a whole body skeleton, which every input device would fill as far as it can. From there you would extract the information you want, with convenience components for the most common configurations. That way it would cover input that doesn't fit the two-hand-controller paradigm, as well as full mo-cap suits.