Hi Guys!
A few months back we posted here with a survey asking for developers' feedback on a plugin that would use machine learning to drive more immersive, realistic animations in VR. We've been working hard since, and we have just finished an early beta of our plugin. We're looking for developers to help beta test it and give us early feedback before we release it here.
About
Our plugin is an Unreal Engine 4 plugin that uses machine learning to predict specific joint locations given inputs from the head-mounted display (HMD) and the left and right motion controllers (MCs). We use motion capture data collected from a Kinect 2 sensor to generate training data for our models. The predicted joint locations can in turn be used to animate the body in an immersive, realistic way. For example, knowing the location of the elbow joint lets you animate the entire arm realistically with inverse kinematics (IK) tools such as two-bone IK.
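As a rough illustration of what "predicting a joint location" means here (this is a toy sketch, not our actual model architecture), the prediction boils down to mapping the tracked HMD and controller positions to a 3D joint position. A minimal linear version might look like:

```cpp
#include <array>
#include <cstddef>

// 3-component vector standing in for UE4's FVector.
using Vec3 = std::array<float, 3>;

// Tracked inputs: HMD plus left/right motion controller positions,
// flattened into a 9-element feature vector. A real model would also
// use rotations and a richer regressor.
struct TrackedInputs {
    Vec3 hmd, leftMC, rightMC;
};

// Hypothetical linear model: predicted joint = W * features + b.
struct LinearJointModel {
    std::array<std::array<float, 9>, 3> W{};  // 3x9 weight matrix
    Vec3 b{};                                 // bias

    Vec3 Predict(const TrackedInputs& in) const {
        const std::array<float, 9> x = {
            in.hmd[0],     in.hmd[1],     in.hmd[2],
            in.leftMC[0],  in.leftMC[1],  in.leftMC[2],
            in.rightMC[0], in.rightMC[1], in.rightMC[2]};
        Vec3 out = b;
        for (std::size_t r = 0; r < 3; ++r)
            for (std::size_t c = 0; c < 9; ++c)
                out[r] += W[r][c] * x[c];
        return out;
    }
};
```

Training then amounts to fitting W and b so the predicted positions match the Kinect-recorded joint positions for the same tracked inputs.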
The plugin currently supports prediction of the left and right elbow, left and right shoulder, and mid-spine joint locations. You can use these however you'd like - it's useful to know more accurately where specific joints on your body are located! In our starter content, we've included an example pawn whose arms (the default UE4 mannequin's) are animated with two-bone IK to produce realistic-looking arm movement.
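To show how a predicted elbow can drive the arm, here is a minimal analytic two-bone IK solve that bends the elbow toward the predicted location, used as a pole hint. It's a simplified stand-in for UE4's Two Bone IK node, not the node itself:

```cpp
#include <array>
#include <cmath>
#include <algorithm>

using Vec3 = std::array<float, 3>;

static Vec3 Sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 Add(const Vec3& a, const Vec3& b) { return {a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
static Vec3 Scale(const Vec3& a, float s) { return {a[0]*s, a[1]*s, a[2]*s}; }
static float Dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static float Length(const Vec3& a) { return std::sqrt(Dot(a, a)); }

// Analytic two-bone IK: place the elbow so the upper and lower arm
// reach from shoulder to hand, bending toward the predicted elbow.
Vec3 SolveElbow(const Vec3& shoulder, const Vec3& hand,
                float upperLen, float lowerLen, const Vec3& elbowHint) {
    Vec3 toHand = Sub(hand, shoulder);
    float rawLen = Length(toHand);
    Vec3 axis = rawLen < 1e-6f ? Vec3{1.0f, 0.0f, 0.0f}
                               : Scale(toHand, 1.0f / rawLen);
    // Clamp the shoulder-hand distance to what the arm can reach.
    float d = std::clamp(rawLen, std::fabs(upperLen - lowerLen) + 1e-5f,
                         upperLen + lowerLen - 1e-5f);
    // Law of cosines: the elbow's projection along the shoulder-hand axis.
    float proj = (upperLen * upperLen + d * d - lowerLen * lowerLen) / (2.0f * d);
    float radius = std::sqrt(std::max(upperLen * upperLen - proj * proj, 0.0f));
    // Bend direction: component of the hint orthogonal to the axis.
    Vec3 toHint = Sub(elbowHint, shoulder);
    Vec3 perp = Sub(toHint, Scale(axis, Dot(toHint, axis)));
    float perpLen = Length(perp);
    perp = perpLen < 1e-6f ? Vec3{0.0f, 0.0f, 1.0f}  // degenerate hint: pick any bend plane
                           : Scale(perp, 1.0f / perpLen);
    return Add(shoulder, Add(Scale(axis, proj), Scale(perp, radius)));
}
```

Without the hint, the bend plane is ambiguous (any rotation about the shoulder-hand axis is a valid solve) - that ambiguity is exactly what the predicted elbow location resolves.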
One important part of our packaged product is an executable containing the Training Level. In this level, developers with access to a Kinect 2 sensor can train their own models. We want developers to train their own models primarily because different models will likely be needed for different experiences. For example, elbow locations in a seated experience might be completely different from those in a standing experience, due to the way the human body naturally positions itself. As another example, a single model might be trained to animate a specific action, such as holding a gun or performing a particular dance. Trained models are stored as simple text files, so they can easily be exported and shared amongst developers.
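A text-based model is trivial to parse, diff, and pass around. As an illustration only (the layout below is hypothetical, not our actual file format), loading such a model is little more than reading whitespace-separated floats:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical plain-text model: whitespace-separated floats,
// e.g. one weight row per line. Loading it needs no binary
// serialization, which is what makes sharing models so easy.
std::vector<float> ParseModelText(const std::string& text) {
    std::istringstream in(text);
    std::vector<float> weights;
    float v;
    while (in >> v) weights.push_back(v);
    return weights;
}
```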
Showcase
In the following video, we demonstrate the basic features of the Training Level. As you can see at points in the video, the model we've trained isn't perfect yet and still needs improvement, but it's already a lot better than standard two-bone IK!
At the end of the video, I train my own model using a few seconds of training data. You'll notice that the model is accurate for the movements I just made, but goes crazy when I move my arms in an entirely different way.
Showcase of the Training Level
If this sounds interesting to you, either post here or shoot me a DM with a way to reach you, and we'll send you a link to download the plugin. You don't need a Kinect to take advantage of the prediction (we've included a sample model for you to use), but to train your own models you'll need a Kinect 2 with a Windows adapter.
Thanks!
Team