[Feedback Wanted] ML Plugin for Immersive VR Animations

Hi Guys!

A few months back we posted here with a survey asking for developers’ feedback on a plugin that would use machine learning to predict more immersive, realistic animations in VR. We’ve been working hard since, and we’ve just finished an early beta of our plugin. We’re looking for developers to help beta test it and give us early feedback before we release it here.

About

Our plugin for Unreal Engine 4 uses machine learning to predict specific joint locations given input from the head-mounted display (HMD) and the left and right motion controllers (MCs). We use motion capture data collected with a Kinect 2 sensor to generate training data for our models. The predicted joint locations can in turn be used to animate the body in an immersive, realistic way. For example, knowing the position of the elbow joint lets you animate the entire arm realistically with inverse kinematics (IK) tools such as two-bone IK.
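To make the input side concrete, here’s a minimal sketch of gathering those three tracked poses each tick and handing them to a predictor. Everything here is illustrative, not the plugin’s actual API: `AMyVRPawn`, its `LeftController`/`RightController` components, and the `JointPredictor->Predict` call are hypothetical names.

```cpp
// A minimal sketch (hypothetical names, not the plugin's real API):
// each tick, read the HMD and controller poses and query a predictor.
#include "HeadMountedDisplayFunctionLibrary.h"
#include "MotionControllerComponent.h"

void AMyVRPawn::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // HMD pose in tracking space.
    FRotator HmdRotation;
    FVector HmdPosition;
    UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition(HmdRotation, HmdPosition);

    // LeftController / RightController are UMotionControllerComponents
    // created in the pawn's constructor.
    const FVector LeftHand  = LeftController->GetComponentLocation();
    const FVector RightHand = RightController->GetComponentLocation();

    // Hypothetical call: the trained model maps the tracked poses to
    // predicted joint positions such as the elbows.
    FVector LeftElbow, RightElbow;
    JointPredictor->Predict(HmdPosition, HmdRotation, LeftHand, RightHand,
                            LeftElbow, RightElbow);
}
```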

The plugin currently supports prediction of the left and right elbow, left and right shoulder, and mid-spine joint locations. You can use these however you’d like - it’s useful to know more accurately where specific joints on your body are located! In our starter content, we’ve included an example pawn whose arms (the default UE4 mannequin’s) are animated with two-bone IK to produce realistic-looking arm movement.
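If you’re wondering how a predicted elbow plugs into two-bone IK: UE4’s Two Bone IK AnimGraph node takes an effector location and a joint target location. Here’s a sketch of an AnimInstance exposing those two vectors - the class and variable names are illustrative, not necessarily what our starter content uses.

```cpp
// Sketch of a custom AnimInstance whose variables feed a Two Bone IK node
// in the AnimGraph (names are illustrative).
#include "CoreMinimal.h"
#include "Animation/AnimInstance.h"
#include "VRArmAnimInstance.generated.h"

UCLASS()
class UVRArmAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // Where the hand should end up: the tracked motion controller position.
    UPROPERTY(BlueprintReadWrite, Category = "IK")
    FVector LeftHandEffector;

    // The predicted elbow location; Two Bone IK uses this as its Joint
    // Target to decide which way the elbow bends.
    UPROPERTY(BlueprintReadWrite, Category = "IK")
    FVector LeftElbowJointTarget;
};
```

In the AnimGraph, the left arm’s Two Bone IK node would then read LeftHandEffector as its Effector Location and LeftElbowJointTarget as its Joint Target Location.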

One important part of our packaged product is an executable containing the Training Level. In this level, developers with access to a Kinect 2 sensor can train their own models. We want developers to be able to train their own models primarily because different experiences will likely need different models. For example, elbow placement in a seated experience might be completely different from a standing one because of how the human body naturally positions itself. As another example, a model might be trained for a single specific action, such as holding a gun or performing a particular dance. Trained models are stored as simple text files, so they can easily be exported and shared among developers.
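As a rough illustration of why plain-text models are convenient to share, a loader can be very simple. To be clear, this is not our actual file format - the sketch just assumes one weight value per line.

```cpp
// Purely illustrative loader for a plain-text model file, assuming one
// floating-point weight per line (not the plugin's real format).
#include "Misc/FileHelper.h"
#include "Containers/UnrealString.h"

bool LoadModelWeights(const FString& FilePath, TArray<float>& OutWeights)
{
    TArray<FString> Lines;
    if (!FFileHelper::LoadFileToStringArray(Lines, *FilePath))
    {
        return false; // File missing or unreadable.
    }

    OutWeights.Reset();
    for (const FString& Line : Lines)
    {
        OutWeights.Add(FCString::Atof(*Line)); // Parse one weight per line.
    }
    return OutWeights.Num() > 0;
}
```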

Showcase

In the following video, we demonstrate the basic features of the Training Level. As you can see at points in the video, the model we’ve trained isn’t perfect yet and still needs improvement, but it’s already a lot better than standard two-bone IK! At the end of the video, I train my own model using a few seconds of training data. You’ll notice that the model is accurate for the movements I just made, but goes crazy when I move my arms in an entirely different way.

Showcase of the Training Level

If this sounds interesting to you, either post here or shoot me a DM with a way to reach you, and we’ll send you a link to download the plugin. You don’t need a Kinect to take advantage of the prediction (we include a sample model), but to train your own models you’ll need a Kinect 2 with a Windows adapter.

Thanks!

Team


Sounds great. Now that Vive trackers are widely available, are there plans to expand it to use more tracking points?

Hey Team! This sounds fantastic! We’d love to work with a tool like this for our healthcare architectural visualization work, and we’re very excited to see how this idea continues to grow.

This sounds really interesting! Do you guys have any videos of the system in action?

We don’t have any plans yet to integrate Vive trackers, as we’ve still got work to do on the base functionality before we head down that route. However, the way we’ve written it, the plugin could be extended to use Vive trackers fairly easily - see the sketch below. That would open the door to predicting even more joints and would be especially helpful for animating the legs.
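For anyone curious, this is the kind of extension I mean: in UE4, a Vive tracker shows up as another motion controller source, so extra tracking points could be added with more UMotionControllerComponents. Treat this as a sketch - "Special_1" is one of SteamVR’s generic tracker slots, and the actual source name depends on your bindings; AMyVRPawn is just a placeholder class.

```cpp
// Illustrative only: register a Vive tracker as an extra tracked point by
// giving a UMotionControllerComponent a tracker motion source.
#include "MotionControllerComponent.h"

AMyVRPawn::AMyVRPawn()
{
    UMotionControllerComponent* WaistTracker =
        CreateDefaultSubobject<UMotionControllerComponent>(TEXT("WaistTracker"));
    WaistTracker->MotionSource = FName(TEXT("Special_1"));
    WaistTracker->SetupAttachment(RootComponent);
}
```

The tracker’s transform could then be appended to the model’s input features alongside the HMD and controller poses.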

We’re working on building a more complete set of videos, including tutorials if there’s interest, but in the meantime I’ll record a quick example of the system in action for you guys to see - I’ll post the link here when it’s ready.

Edit: The video of the basic Training Level features is now up - there’s a link in the original post.

Awesome, thanks for sharing so quickly!

Quick update for you guys! We’ve pushed a new beta version internally. The biggest improvement in this update is scaling the arm and third-person body animation to more closely match the user’s real body, which improves the look and feel of the plugin a lot! I’ve uploaded a new video showcasing the Training Level - the link is in the description.