With the Neo Kinect plugin (Code Plugins, UE Marketplace) you can use the Kinect v2 sensor's advanced capabilities within Unreal Engine, through easy-to-use, fully commented Blueprint nodes or directly through the C++ methods. Take a look at the [Quick Start Guide](http://files.rvillani.com/neokinect/NeoKinect-QuickStart.pdf) to see how it works and get an idea of what you can achieve with it!
Robust and fast
The plugin was created with performance and usability in mind, so you can track all 6 possible users, their faces, and enable all of the Kinect's frame types (color, depth, infrared etc.) at the same time with almost no performance hit. Sensor polling runs on its own thread, and a custom texture type was created just for the high-resolution real-time updates, while staying compatible with the Material Editor. If you need them, there are even functions to access the textures' pixel values.
No need for components
The sensor is unique, no matter how many Actors or Widgets are using it. So, instead of adding components or extending specific Blueprints, you just call functions, as you would with a function library. That way you can control the device from any Blueprint, including Widgets.
Besides access to the standard Microsoft Kinect API coordinate-remapping methods, the plugin also comes with other remapping features that facilitate AR applications, like getting the location of a joint in the Color frame without losing its depth information. Every location and orientation is adapted to Unreal's coordinate system, and joint transforms are compatible with the Engine's Mannequin character rig.
Fully production proven
I used Neo Kinect extensively (for more than a year) before releasing it to the public and have fixed every bug found so far, besides making a lot of performance improvements. It has been used in applications that run through a whole day without crashing, and it packages without problems.
- Tracking of up to 6 simultaneous users' skeletons, with 25 joints each
- User leaning angle, tracking confidence, body edge clipping, hand states
- Per Body found/lost events
- Location and orientation of up to 6 simultaneous users' faces
- Face points (left and right eyes, nose and left and right mouth corners) in 3D and 2D (Color and Infrared space)
- Faces bounding boxes in Color and Infrared frames space
- Expression reading (Engaged, Happy, Looking Away, Mouth Moved, Mouth Open, and Left and Right Eyes Open/Closed), plus whether users are wearing glasses or not
- Per Face found/lost events
- Global bodies/faces tracking events (found/lost)
- Init/Uninit sensor
- Get sensor tilt, ground plane normal and sensor height
- 3D camera location to Color texture (optionally with depth) and to Depth texture
- Find depth of a Color texture location
- Depth point to Color point and to 3D location
- Get each frame FOV and dimensions
- Toggle frames usage individually
- Sample a pixel value from the Depth frame and find the depth of a Color pixel
Network Replicated: No
Platform: Win64 only
Quick Start guide: NeoKinect-QuickStart.pdf
Example Project for Unreal Engine 5: NeoKinectExamples.zip.
- This example does not yet use the new demo room from UE5, only the new skeleton (the UE4 one is still there as well).
Example Project for Unreal Engine 4: NeoKinectExamples_UE4.zip.
- Updated AvateeringDemo Blueprint with joint smoothing for UE 4.26+ (replace the asset on the example project).
Q: The Avateering AnimBP doesn’t work for my skeleton. Orientations look wrong. How can I fix it? Can I use retargeting?
A: Retargeting won’t work for Neo Kinect, because that’s only for animation assets. Neo Kinect is getting data from Kinect and converting it to Unreal Engine coordinate space. Then, to make things easier for avateering, I compute the transforms that move the Mannequin skeleton correctly. For that to work, the bones’ orientations are very important.
If your skeleton has the same bone names but different orientations, you'll have to customize the Avateering AnimBP for your bones' orientations. The Mannequin's upperarm_r bone has X pointing opposite to the bone direction, Z pointing towards the skeleton's body and Y towards its back. To see those orientations, select the bone in the SK_Mannequin asset, press W to activate the translate tool and select the local transform mode in the top-right corner (first icon after the move/rotate/scale tools). If upperarm_r on your custom skeleton doesn't match that exactly, the orientation set in the Avateering AnimBP will look wrong.
But you can still compute correct orientations yourself.
Let's say your mesh's upperarm_r has X pointing forward, Z towards the bone direction and Y towards the character's ribs. This means that [X, Y, Z] on your mesh's upperarm_r map to [-Y, Z, -X] on the Mannequin. Therefore, to create the correct rotation, use the Make Rot from ZX node and pass it the correct axes from the Kinect rotation for that joint. That node comes in all combinations of axes; I chose ZX here because Z is the axis along the bone, and you should always pick that one first. The other axis doesn't really matter, as long as you pick the correct one from Kinect to convert. In this case, the Z axis on your mesh maps to -X on the Mannequin, so you get the forward vector from the Kinect Right Elbow joint rotation and multiply it by -1 to invert its direction. X maps to -Y on the Mannequin, so you get the rotation's right axis and multiply it by -1. That gives you the correct rotation for your mesh's upperarm_r coming out of the Make Rot from ZX node.
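For reference, here is that same remapping as a minimal C++ sketch. It assumes you already have the Right Elbow rotation from the plugin in Unreal space; the function and parameter names are illustrative, not plugin API:

```cpp
#include "Kismet/KismetMathLibrary.h"

// Hypothetical example: remap a Kinect joint rotation (already in Unreal
// space) to a custom skeleton whose upperarm_r axes are [X, Y, Z] =
// [-Y, Z, -X] relative to the Mannequin, as in the text above.
FRotator RemapUpperArmRotation(const FRotator& KinectElbowRightRotation)
{
	// Your mesh's Z (along the bone) maps to -X, so invert Kinect's forward axis.
	const FVector ZAxis = -UKismetMathLibrary::GetForwardVector(KinectElbowRightRotation);
	// Your mesh's X maps to -Y, so invert Kinect's right axis.
	const FVector XAxis = -UKismetMathLibrary::GetRightVector(KinectElbowRightRotation);
	// Equivalent of the Make Rot from ZX node: Z is matched exactly,
	// X is made orthogonal to it.
	return UKismetMathLibrary::MakeRotFromZX(ZAxis, XAxis);
}
```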
It takes manual work to get those right for all bones, but the plugin can't support every possible bone orientation, so I followed the Mannequin as the standard.
Q: How do I reduce jittering on my tracking?
A: For UE4, use the updated example AvateeringBP found here. It uses linear interpolation to blend the joints' transforms from one frame to the next, smoothing out big random changes in movement. The UE5 demo is already updated with that functionality.
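As a minimal sketch of that kind of smoothing (illustrative names, not the demo's exact graph; I use a spherical lerp for the rotation part):

```cpp
#include "CoreMinimal.h"

// Blend the joint transform shown last frame towards the latest Kinect
// sample. Call once per frame; higher Speed means less smoothing.
FTransform SmoothJointTransform(const FTransform& Previous,
                                const FTransform& Latest,
                                float DeltaTime,
                                float Speed = 10.f)
{
	const float Alpha = FMath::Clamp(DeltaTime * Speed, 0.f, 1.f);
	FTransform Result;
	Result.SetLocation(FMath::Lerp(Previous.GetLocation(), Latest.GetLocation(), Alpha));
	Result.SetRotation(FQuat::Slerp(Previous.GetRotation(), Latest.GetRotation(), Alpha));
	Result.SetScale3D(FMath::Lerp(Previous.GetScale3D(), Latest.GetScale3D(), Alpha));
	return Result;
}
```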
Q: My packaged game crashes on launch. How do I fix it?
A: Check if you copied the runtime face DLLs from the Kinect SDK's redist folder into your packaged game. Specific instructions are in the Quick Start PDF that comes with the plugin.
Q: Will Neo Kinect support Azure?
A: No. If I create an Azure plugin (no plans for it currently), that would be its own plugin since the API is different from Kinect.
Q: Kinect is no longer in production. For how long will Neo Kinect be supported?
A: I’ll keep it up to date with the engine for as long as there are people buying it, which means there’s still interest in using Kinect v2 with Unreal.
Q: Can I use Neo Kinect for mocap?
A: I wouldn’t recommend it for mocap. Not because of the plugin, but Kinect itself.
First, Kinect always assumes you're facing it, even when your back is turned to it, so knees and elbows come out looking very weird. And when you stand sideways, it gets really confused about the limbs it can't see.
Then there's the recording part. The plugin is made for runtime use. What I assume would be possible is to use the Take Recorder in Unreal to record skeletal movements that happen in game, so you could use the Neo Kinect Avateering example for that. But I still don't think you'd like the final animation quality, because Kinect is not that precise (no matter what MS tells you).
I’ve worked a lot with Kinect and it’s very nice for controlling things with gestures, but for mocap or AR applications, it’s not even satisfactory IMHO.
Q: Any advice to create a virtual dresser?
A: Yes. Prepare yourself because even with the features I added to the plugin just for that, it’s complex.
It's very important that you use a skeleton with the same rig as the UE Mannequin: not just the same bone names, but also their orientations. Otherwise the setup is harder (see the first question in this Q&A section).
You'll need to activate color-space joint transforms to have joint coordinates aligned with the color camera, right after you init the sensor on BeginPlay. That's done by calling the Set Use Joints Color Space Transforms node with Use checked. Once you've done that, you can use the Color Location and Color Orientation properties you get when you break a joint struct.
Scale the bones in the SkeletalMesh to the user's bones, as some people are taller or shorter than others. I did it by storing the original SkeletalMesh bones' lengths on BeginPlay, before any change was done to it. To get a bone's length from the Skeletal Mesh, call Get Socket Location on the two joints that make up that bone (like upperarm_l and lowerarm_l for the left upper arm) and save the distance between them. I suggest saving those values in an array ordered the same as the Kinect joints; you can get an index from any Kinect joint value using Joint to Index, to keep it consistent when passing the values to the AnimBP. A minimal sketch of this measurement step follows below.
Notice that for the array index I used the same joint used for the “upperarm_l” in the AnimBP.
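Here is that measurement step as a small C++ sketch using only engine calls (how you map joints to array indices via the plugin's Joint to Index node is up to your setup):

```cpp
#include "Components/SkeletalMeshComponent.h"

// Measure a bone's rest length as the distance between the joints at its
// two ends (e.g. upperarm_l and lowerarm_l for the left upper arm).
// Call on BeginPlay, before any scaling is applied to the mesh.
float GetMeshBoneLength(const USkeletalMeshComponent* Mesh,
                        FName StartJoint, FName EndJoint)
{
	const FVector Start = Mesh->GetSocketLocation(StartJoint);
	const FVector End = Mesh->GetSocketLocation(EndJoint);
	return FVector::Dist(Start, End);
}
```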
Then scale them each frame by sending the correct scales to the SkeletalMesh's AnimBP. The plugin gives you the user's bone lengths via Get Bone Length, which you can call on a tracked NeoKinectBody. What worked best for me was to scale only the axis along the bone (X for the Mannequin), so as not to change its thickness; then it's almost unnoticeable that you're scaling it. The math is Scale = UserBoneLength / OriginalMeshBoneLength.
Scaling logic added to BP_AvateeringDemo, from the demo project.
I advise you to lerp the scale value over time (Lerp(PreviousFrameValue, CurrentFrameValue, DeltaTime * Speed)), because Kinect will jitter it constantly.
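Combined with the formula above, a per-frame sketch (illustrative names; UserBoneLength would come from the plugin's Get Bone Length):

```cpp
#include "CoreMinimal.h"

// Per-frame bone scale: the target is the user's bone length over the
// mesh's rest length, lerped over time to hide Kinect's jitter.
float UpdateBoneScale(float PreviousScale, float UserBoneLength,
                      float OriginalMeshBoneLength, float DeltaTime,
                      float Speed = 5.f)
{
	const float TargetScale = UserBoneLength / OriginalMeshBoneLength;
	return FMath::Lerp(PreviousScale, TargetScale,
	                   FMath::Clamp(DeltaTime * Speed, 0.f, 1.f));
}
// Apply the result only to the bone's long axis, e.g. FVector(Scale, 1, 1)
// for the Mannequin, to keep the bone's thickness unchanged.
```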
Finally, you'll need to mirror the scene horizontally. The easiest way is to scale the Actor to -1 on the Y axis, but back when I did that, it would break cloth simulation. So what I did instead was create a post process that inverts the U axis of the rendered scene by passing it through a OneMinus node in the material.
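If cloth simulation isn't a concern in your project, the simple route is just the Y flip (a sketch; the post-process alternative is a material graph, so it isn't shown as code):

```cpp
#include "GameFramework/Actor.h"

// Mirror an Actor horizontally by flipping its Y scale.
// Note: the author found this used to break cloth simulation, so test it;
// otherwise use the post-process OneMinus approach described above.
void MirrorActorHorizontally(AActor* Actor)
{
	FVector Scale = Actor->GetActorScale3D();
	Scale.Y *= -1.f;
	Actor->SetActorScale3D(Scale);
}
```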