Neo Kinect - easy access to the Kinect v2 capabilities in your games

With the Neo Kinect plugin (available in Code Plugins on the UE Marketplace) you can use the Kinect v2 sensor's advanced capabilities within Unreal Engine, through easy-to-use, nicely commented Blueprint nodes, or directly through the C++ methods. Take a look at the [Quick Start Guide](http://files.rvillani.com/neokinect/NeoKinect-QuickStart.pdf) to see how it works and get an idea of what you can achieve with it!

Robust and fast

The plugin was created with performance and usability in mind, so you can track all 6 possible users, track their faces and enable all of the Kinect’s frame types (color, depth, infrared etc.) at the same time with almost no performance hit. Sensor polling runs in its own thread, and a custom texture type was created just for the high-resolution real-time updates, while remaining compatible with the material editor system. If you need to, there are even functions to access the textures' pixel values.

No need for components

The sensor is unique, no matter how many Actors or Widgets are using it. So, instead of adding components or extending specific Blueprints, you just call functions, as you would with a function library. That way you can control the device from any Blueprint, including Widgets.

Advanced Remapping

Besides access to the standard Microsoft Kinect API coordinate remapping methods, the plugin also comes with other remapping features that facilitate AR applications, like getting the location of a joint in the Color frame without losing its depth information. Every location and orientation is adapted to Unreal’s coordinate system, and joint transforms are compatible with the Engine’s Mannequin character rig.

Fully production proven

I used Neo Kinect extensively for more than a year before releasing it to the public, fixing every bug found so far and making many performance improvements along the way. It has been used in applications that run a whole day without crashing, and it packages without problems.
Technical Details

Body tracking:

  • Tracking of up to 6 simultaneous users’ skeletons, with 25 joints each
  • User leaning angle, tracking confidence, body edge clipping, hand states
  • Per Body found/lost events

Face tracking:

  • Location and orientation of up to 6 simultaneous users’ faces
  • Face points (left and right eyes, nose and left and right mouth corners) in 3D and 2D (Color and Infrared space)
  • Faces bounding boxes in Color and Infrared frames space
  • Expression reading (Engaged, Happy, Looking Away, Mouth Moved, Mouth Open, and Left and Right Eye Open/Closed) and whether users are wearing glasses
  • Per Face found/lost events

Sensor control:

  • Global bodies/faces tracking events (found/lost)
  • Init/Uninit sensor
  • Get sensor tilt, ground plane normal and sensor height

Remapping:

  • 3D camera location to Color texture (optionally with depth) and to Depth texture
  • Find depth of a Color texture location
  • Depth point to Color point and to 3D location

Frames (textures):

  • Get each frame FOV and dimensions
  • Toggle frames usage individually
  • Sample a pixel value from the Depth frame and find the depth of a Color pixel

Network Replicated: No
Platform: Win64 only

Quick Start guide: NeoKinect-QuickStart.pdf

Example Project for Unreal Engine 5: NeoKinectExamples.zip.

  • This example is not yet using the new demo room from UE5, only the new skeleton (the UE4 one is still there as well).

Example Project for Unreal Engine 4: NeoKinectExamples_UE4.zip.

FAQ

Q: The Avateering AnimBP doesn’t work for my skeleton. Orientations look wrong. How can I fix it? Can I use retargeting?
A: Retargeting won’t work for Neo Kinect, because that’s only for animation assets. Neo Kinect is getting data from Kinect and converting it to Unreal Engine coordinate space. Then, to make things easier for avateering, I compute the transforms that move the Mannequin skeleton correctly. For that to work, the bones’ orientations are very important.

If your skeleton has the same bone names but orientations are different, you’ll have to customize the Avateering AnimBP for your bones’ orientations. The Mannequin upperarm_r bone has X pointing opposite to the bone direction, Z pointing towards the skeleton body and Y towards its back. To see those orientations, select the bone in the SK_Mannequin asset, press W to activate the translate tool and select the local transform mode in the top-right corner (first icon after the move/rotation/scale tools). If upperarm_r on your custom skeleton doesn’t match that exactly, the orientation set on the Avateering AnimBP will look wrong.

But you can still compute correct orientations yourself.

Let’s say your mesh’s upperarm_r has X pointing forward, Z along the bone direction and Y towards the character’s ribs. This means that [X, Y, Z] on your mesh’s upperarm_r map to [-Y, Z, -X] on the Mannequin. Therefore, to create the correct rotation, use the Make Rot from ZX node and pass it the correct axes from the Kinect rotation for that joint. The node exists in all combinations of axes; I chose ZX here because Z is the axis along the bone, and you should always pick that one first. The other axis doesn’t really matter, as long as you pick the correct one from Kinect to convert. In this case, the Z axis on your mesh maps to -X on the Mannequin, so you need to get the forward vector from the Kinect Right Elbow joint rotation and multiply it by -1 to invert its direction. Your X maps to -Y on the Mannequin, so you get the rotation's right axis and multiply it by -1. The Make Rot from ZX node then outputs the correct rotation for your mesh’s upperarm_r.
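To make the axis mapping concrete, here is a minimal sketch of what a Make Rot from ZX-style node computes. This is plain standard C++, not Unreal code; the `Vec3`/`Basis` types and names are illustrative stand-ins, not the plugin's API. Z is kept as the primary (bone) axis and X is orthogonalized against it:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector math (illustrative stand-in for UE's FVector).
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 Cross(const Vec3& o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
    Vec3 Normalized() const {
        double len = std::sqrt(Dot(*this));
        return {x / len, y / len, z / len};
    }
};

// A rotation stored as three orthonormal basis axes.
struct Basis { Vec3 X, Y, Z; };

// "Make Rot from ZX" idea: Z is taken as-is (the axis along the bone),
// X is made orthogonal to it (Gram-Schmidt), Y completes the
// right-handed frame. Z dominates, which is why you pick the bone
// axis as the first input.
Basis MakeRotFromZX(Vec3 z, Vec3 x) {
    z = z.Normalized();
    x = (x - z * x.Dot(z)).Normalized(); // strip the component along z
    Vec3 y = z.Cross(x);                 // completes the right-handed frame
    return {x, y, z};
}
```

For the upperarm_r example above, you would feed this the negated forward vector as Z and the negated right vector as X, taken from the Kinect joint rotation.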

It will require manual work to get those correct for all bones, but I can’t have the plugin support every possible different orientation of bones. So I followed the Mannequin as a standard.

Q: How do I reduce jittering on my tracking?
A: For UE4, use the updated example AvateeringBP found here. It uses linear interpolation to interpolate the joints’ transforms from one frame to the next, smoothing out big random changes in movement. The UE5 demo is already updated with that functionality.

Q: My packaged game crashes on launch. How do I fix it?
A: Check whether you copied the runtime face DLLs from the Kinect SDK redist folder into your packaged game. Specific instructions are in the Quick Start PDF that comes with the plugin.

Q: Will Neo Kinect support Azure?
A: No. If I create an Azure plugin (no plans for it currently), that would be its own plugin since the API is different from Kinect.

Q: Kinect is no longer in production. For how long will Neo Kinect be supported?
A: I’ll keep it up to date with the engine for as long as there are people buying it, which means there’s still interest in using Kinect v2 with Unreal.

Q: Can I use Neo Kinect for mocap?
A: I wouldn’t recommend it for mocap. Not because of the plugin, but Kinect itself.

First, Kinect always assumes you’re facing it, even when your back is turned, so knees and elbows bend very strangely. And when you stand sideways, it gets really confused about the limbs it can’t see.

Then there’s the recording part. The plugin is made for runtime use. I assume it would be possible to use the Take Recorder in Unreal to record skeletal movements that happen in game, so you could use the Neo Kinect Avateering example for that, but I still don’t think you’d like the final animation quality, because Kinect is not that precise (no matter what MS tells you).

I’ve worked a lot with Kinect and it’s very nice for controlling things with gestures, but for mocap or AR applications, it’s not even satisfactory IMHO.

Q: Any advice to create a virtual dresser?
A: Yes. Prepare yourself because even with the features I added to the plugin just for that, it’s complex.

It’s very important that you use a skeleton with the same rig as the UE Mannequin: not just the same bone names, but also the same bone orientations. Otherwise the setup is harder (see the first question in this FAQ section).

You’ll need to activate color-space joint transforms to get joint coordinates aligned with the color camera, right after you init the sensor on BeginPlay. That’s done by calling the Set Use Joints Color Space Transforms node with Use checked. Once you’ve done that, you can use the Color Location and Color Orientation properties when you break a KinectJoint structure.

Scale the bones in the SkeletalMesh to match the user’s bones, since some people are taller or shorter than others. I did it by storing the original SkeletalMesh bone lengths on BeginPlay, before any change was made to the mesh. To get a bone’s length from the Skeletal Mesh, call Get Socket Location on the two joints that make up that bone (like upperarm_l and lowerarm_l for the left upper arm) and save the distance between them. I suggest saving those values in an array ordered the same as the Kinect joints; you can get an index from any Kinect joint value using Joint to Index, to keep it consistent when passing the values to the AnimBP.


Notice that for the array index I used the same joint used for the “upperarm_l” in the AnimBP.
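As a rough sketch of that measuring step (plain standard C++; `Location`, `StoreBoneLength` and the joint count here are illustrative stand-ins for the Blueprint calls, where you would use Get Socket Location and Joint to Index):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Illustrative stand-in for an FVector returned by Get Socket Location.
struct Location { double x, y, z; };

// Distance between two socket locations = length of the bone between them.
double Distance(const Location& a, const Location& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Kinect v2 tracks 25 joints per body; index the array the same way
// the plugin's Joint to Index node does, for consistency with the AnimBP.
constexpr int kNumKinectJoints = 25;
std::array<double, kNumKinectJoints> gOriginalBoneLengths{};

// Run once on BeginPlay, before any scaling is applied to the mesh.
void StoreBoneLength(int jointIndex, const Location& boneStart,
                     const Location& boneEnd) {
    gOriginalBoneLengths[jointIndex] = Distance(boneStart, boneEnd);
}
```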

Then, each frame, scale the bones by sending the correct scales to the SkeletalMesh’s AnimBP. The plugin gives you the user’s bone lengths via Get Bone Length, which you can call on a tracked NeoKinectBody. What worked best for me was to scale only the axis along the bone (X for the Mannequin), so its thickness doesn’t change; then the scaling is almost unnoticeable. The math is Scale = UserBoneLength / OriginalMeshBoneLength.


Scaling logic added to BP_AvateeringDemo, from the demo project.

I advise lerping the scale value over time (Lerp(PreviousFrameValue, CurrentFrameValue, DeltaTime * Speed)) because Kinect will jitter it constantly.
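The per-frame scale and the suggested smoothing can be sketched like this (plain standard C++; the function names are mine, not the plugin's, and the clamp is an assumption to keep the lerp from overshooting at low frame rates):

```cpp
#include <cassert>

// Per-frame bone scale: the user's measured bone length over the mesh's
// original length, applied only on the axis along the bone.
double ComputeBoneScale(double userBoneLength, double originalMeshBoneLength) {
    return userBoneLength / originalMeshBoneLength;
}

// The suggested Lerp(Previous, Current, DeltaTime * Speed) pattern,
// which damps Kinect's frame-to-frame jitter.
double SmoothScale(double previous, double current,
                   double deltaTime, double speed) {
    double alpha = deltaTime * speed;
    if (alpha > 1.0) alpha = 1.0; // clamp so we never pass the target
    return previous + (current - previous) * alpha;
}
```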

Finally, you’ll need to mirror the scene horizontally. The easiest way is to scale the Actor to -1 on the Y axis but, back when I did that, it broke cloth simulation. So what I did instead was create a post process that inverts the U axis of the rendered scene by passing it through a OneMinus node in the material.
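The OneMinus trick boils down to sampling the scene at u' = 1 - u. A tiny sketch of that effect on one row of pixels (plain standard C++; `MirrorRow` is an illustrative name, not anything from the plugin or engine):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Mirroring via the U axis, as the OneMinus post-process material does:
// each output pixel samples the input at the horizontally flipped
// coordinate u' = 1 - u, which for a discrete row is index N - 1 - i.
template <std::size_t N>
std::array<int, N> MirrorRow(const std::array<int, N>& row) {
    std::array<int, N> out{};
    for (std::size_t i = 0; i < N; ++i)
        out[i] = row[N - 1 - i];
    return out;
}
```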


Hello. I want to get a texture containing only body pixels: alpha = 1 where there’s a body, 0 otherwise.

I can’t believe the whole thread is gone!!
@VictorLerp any idea what happened here? Several messages, all images… everything’s gone but the OP.

Hi, RVillani! I’ve backed up a couple of pages of your thorough answers, so I can send them to you if you’d like. BTW, will you update the plugin to be compatible with UE 5?


That’s great! Yes, please and thank you :smiley:

I will, yeah. I intend to make it compatible with DX12 as well.


That’s good news, thank you!
I’ve PM’d you the original pages - drop me a note if you didn’t receive them.

Hi RVillani, I’ve been using your plugin in a lot of projects with great results, but I was wondering if you were planning on integrating support for the new Azure Kinect, or making another plugin dedicated to it :slight_smile: There currently is a plugin, but it hasn’t been updated since 4.25, and I’d love to work on the new hardware with a stable and supported plugin.
Thanks!


Hi, Linton!
I’m happy to know you like the plugin :smiley:
For now, I have no plans of making a plugin for Kinect Azure, though. I couldn’t get one in Brazil and now that I’m living in Canada, I would have very little time to work on it.
Cheers

Hi Rodrigo,

Thanks for your plugin, it looks amazing and robust.

I am about to buy it for a project using Kinect v2 for an AR experience where users take control of an invisible skeleton with Niagara particle systems attached to its joints (mostly the arms, since we want to turn the users into some kind of feathered person). The particles will be feathers.

We also would eventually like to bind wing models to their arms. The global idea is to turn them into some kind of bird.

The plugin looks it could exactly do what we need, but I have stumbled onto this comment of yours in the answers on the plugin’s marketplace page:

What do you mean? Because it sounds to me that what we need is mocap features and that your plugin provides exactly that :slight_smile:

Thank you!

Hi!
Sorry for the delay. I formatted my PC, and then my phone, recently, and still haven’t caught up with setting everything up as before.

If the user will always be facing the screen, it should work just fine for your purposes (and I’d like to see that, it sounds awesome!).

What I mean when I say Kinect is not good for mocap is that it’s bad with sideways and back-facing capture. So full mocap, where you can do anything and the software will record correctly, is not feasible with it.


Understood :slight_smile: thanks for your answer

Yes, our users will face the screen, so that should do it. Perfect. I’ll let you know when the project is completed and show you what it looks like.

Hello,
I’m seeing bad rotations on the Mannequin’s shoulders, and a bit on the elbows. To be clear, I already asked this question, and the author was kind enough to give me an approach to fix rotations using “Make Rot from ZX”, under the assumption that my character has different bone rotations than the UE4 Mannequin. But the problem is that I see the same issue in the example project (UE4) with the Mannequin itself (attached image).

It happens with any other character and with any user testing the game.

I would like to know if I’m the only one getting this issue. Maybe my Kinect is giving these weird rotations for the shoulders?

Could you please explain how to implement another skeletal mesh? I tried, but I can’t get it to work and I guess I don’t understand the workflow. The skeletal mesh ends up twisted and damaged every time. I tried DAZ characters and MetaHumans. Thank you!

Hi, in AR color mapping, can I map more than one person?

Hello, a question: in your example, the white Mannequin model’s movements are the opposite of what I need. How should I adjust this? (When I raise my left hand, the model also raises its left hand; I need the model to raise its right hand instead, to match the effect of a virtual fitting mirror.)

The plugin seems very complete, but how do we use it to make a virtual dresser? Do we have to put a plane with a color-video material behind the avatar, or do we have to use a widget or something else?

How do we get the result you showed in the presentation images?

Hello, how can I get a texture with only body pixels: alpha = 1 where there’s a body, 0 otherwise?

Hi, check the first item in the FAQ section of the first post. I added that section recently, with explanations for the most commonly asked questions, and yours is the very first.

Yes. The plugin works with up to 6 people at once.

That’s right. You need to flip your mesh’s Y axis scale (-1) or use a post process material to mirror the whole screen.