Kinect 4 Unreal

Thanks a lot @. The problem with Leap Motion is its narrow FOV; I need the hands to be tracked further out than it allows. I'm a Kickstarter backer of Perception Neuron, which seems like a great solution; it just comes with the burden of calibration, along with worrying about the handedness and hand size of various users. I'm starting to think I may be better off waiting for the Oculus Touch, whatever Valve has to offer, STEM, or some other hand-held tracking mechanism.

Thanks again for your input.

@ : Thanks for the help.

I don't think it's going to be solved with a simple shader, though. I need to find the exact position of the point in camera space. Z is already given in cm away from the camera, but X and Y are in pixels, and this is where coordinate mapping solves the problem: using the color data, it turns X and Y into cm (camera space) too.

By the way, even if you do use the shader, how are you going to access its data (the compiled and rendered result) from C++ or Blueprint (OK, Blueprint is a bit too ambitious, but C++)? If you know a generic way, please share it with me.

@plangton
Your case is indeed much more complicated than I thought. I don’t know how to get the data from the depth texture. I thought you were trying to make stuff in 3D appear as if it was behind stuff in real life (I did it for a virtual wardrobe with Kinect 1 so the person could be in front of the 3D objects).

I have an idea, though I don't know if it would work. You could "hack" into the depth texture by making a C++ function that accepts a UTexture2D as a parameter and exposing it to Blueprints, so you could pass the texture into your C++ function. From there you could read the pixel at the X and Y coordinate and calculate the X and Y in cm based on the camera's field of view and that pixel's distance from the camera. I don't think it's an easy formula, but I'm sure there's trigonometry to do it.
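Something along these lines might work on the C++ side (completely untested; it assumes the depth texture is an uncompressed BGRA texture whose mip data is readable on the CPU, and the function name is made up). You would declare it in a UBlueprintFunctionLibrary subclass with UFUNCTION(BlueprintCallable) so you can call it from Blueprints:

```cpp
#include "Engine/Texture2D.h"

// Read one pixel from a UTexture2D passed in from Blueprint.
FColor SampleTexturePixel(UTexture2D* Texture, int32 X, int32 Y)
{
	FColor Result = FColor::Black;
	if (!Texture || !Texture->PlatformData || Texture->PlatformData->Mips.Num() == 0)
	{
		return Result;
	}

	FTexture2DMipMap& Mip = Texture->PlatformData->Mips[0];
	const int32 Width = Mip.SizeX;
	const int32 Height = Mip.SizeY;
	if (X < 0 || Y < 0 || X >= Width || Y >= Height)
	{
		return Result;
	}

	// Lock the top mip for reading and index the pixel (assumes a BGRA8 layout).
	const FColor* Data = static_cast<const FColor*>(Mip.BulkData.Lock(LOCK_READ_ONLY));
	if (Data)
	{
		Result = Data[Y * Width + X];
	}
	Mip.BulkData.Unlock();
	return Result;
}
```

From the returned pixel you would then apply the FOV trigonometry mentioned above to turn the X/Y pixel coordinate plus depth into centimeters.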

It would be nice if the depth texture could be translated into a simple mesh for creating collision, just a thought.

I'm trying to implement Kinect movement and shooting in the Unreal 4 FPS example. Currently the player can move slightly left, right, forwards and backwards, but I really have to move around the room. I want it so that when I lean forward the player moves forward continuously, when I lean left the player strafes left continuously, and so on.

I’m using the blueprint from the Kinect4Unreal project setup video.

Can anyone help me get proper movement and mouse look using Kinect?

Hi Predalienator, you can use the Get Lean Amount node.

@Predalienator : Always bear in mind that the values you get from the Kinect are very small; you need to amplify them to fit your project. There are two common ways:
1 - The easiest: multiply the output by a constant, and make that value editable so you can tweak it outside the Blueprint as well.
2 - More involved: the Kinect sees a triangular arc. Work out how far away you are, then use the Kinect's horizontal field of view to calculate how wide the view is in centimeters at that distance, and convert that to the maximum left/right range you want in your project (see the sketch below).
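For the second option, the rough idea in code form would be something like this (untested; the FOV value and function name are my own assumptions, not values from the plugin):

```cpp
#include "CoreMinimal.h"

// Map the player's sideways offset (cm, from the Kinect) into the game's own range,
// based on how wide the sensor's view is at the player's current distance.
float MapKinectXToGameX(float PlayerXcm, float PlayerDistanceCm, float MaxGameOffset)
{
	const float HorizontalFovDeg = 70.6f;   // approximate Kinect v2 depth-camera FOV
	const float HalfFovRad = FMath::DegreesToRadians(HorizontalFovDeg * 0.5f);

	// Half of the visible width (in cm) at the player's distance from the sensor.
	const float HalfWidthCm = PlayerDistanceCm * FMath::Tan(HalfFovRad);
	if (HalfWidthCm <= 0.0f)
	{
		return 0.0f;
	}

	// -1..1 across the sensor's view, then scaled to the game's maximum offset.
	const float Normalized = FMath::Clamp(PlayerXcm / HalfWidthCm, -1.0f, 1.0f);
	return Normalized * MaxGameOffset;
}
```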

@ : Actually, I already followed something like the approach you mentioned. I am able to create a mesh based on the camera's depth image, with UV mapping, tangents and normals, and it is perfect. There is just one problem: to get X and Y to match, Kinect 2 has a CoordinateMapper class that converts X and Y from pixels to centimeters quite easily. Right now I am using a constant, and since I want to use this for augmented reality it has to match the exact place, or calibration will be cumbersome. The issue is that the CoordinateMapper class has many functions, but the ones I want are not exposed by the plugin.
Maybe, if there is no plan to include it, I should implement it myself :wink:

@plangton : The first Kinect had this kind of conversion in its C++ library. Either they thought no one would use it, or this version of the Kinect SDK doesn't come with it. Either way, I believe you can find a formula that, given the camera FOV and the position of something at the plane distance, projects its position as if it were nearer the camera, thus finding the XYZ in cm.
I have not searched, but I have an idea that may work. If you test it, let me know. Here goes:

Having the camera position and the pixel position, you can find the angle of that pixel in relation to the camera, since it is a percentage of the FOV angle. Then project a point at the distance of that pixel's depth value from the camera, at the calculated angle. That will be your final cm position.

Like: pX = 100, pY = 100. Depth value (D) = 100cm. Depth texture size = 512x424. Camera FOV = 84°.

If we consider the front of the camera to be 0°, we need to know how far off-center pX and pY are to get their angles. I'll re-center pX and pY as if the texture center were 0,0 to make things neater.
pX = pX - 512/2 = -156;
pY = pY - 424/2 = -112;

Percentage of pX and pY from the center:
pctX = pX / (512/2) = -0.609;
pctY = pY / (424/2) = -0.528;

Percentage in degrees of the camera’s FOV:
degX = pctX * (84°/2) = -25.578°;
degY = pctY * ((424/512)*84°/2) = -18.364°;

Having those angles, we can find the position of that pixel in centimeters by projecting a point from the camera at distance D along those angles. I'll write it in Unreal coords (X forward, Y right, Z up).
cmX = D * cos(degY) * cos(degX);
cmY = D * cos(degY) * sin(degX);
cmZ = D * sin(-degY);

I'm not sure whether you really need the minus sign on degY in that last line, but I think so, because on the texture Y grows down while in the world Z grows up.
Also, I’m assuming your camera is at 0,0,0 pointing straight at X. If not, you’ll have to add your camera’s world position to the final result and rotate the vector accordingly. In that case it’ll probably be easier to use these coords as relative to the camera in a camera’s child actor.

This is an approximate method, of course. I remember that in the first Kinect SDK these coordinate-conversion methods accounted for the difference in position of the Kinect's sensors and so on; it was a much more advanced calculation.
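If it helps, here is the same projection written out in C++ (an untested sketch that hard-codes the example's texture size and FOV; the function name is made up, and you would plug in your sensor's real values):

```cpp
#include "CoreMinimal.h"

// Convert a depth-texture pixel plus its depth value (cm) into Unreal-style
// camera-space coordinates: X forward, Y right, Z up, in centimeters.
FVector DepthPixelToCameraSpaceCm(float PixelX, float PixelY, float DepthCm)
{
	const float Width = 512.f, Height = 424.f;       // depth texture size from the example
	const float HorizontalFovDeg = 84.f;             // example FOV; use your sensor's value
	const float VerticalFovDeg = (Height / Width) * HorizontalFovDeg;

	// Offset from the texture center as a -1..1 fraction.
	const float PctX = (PixelX - Width * 0.5f) / (Width * 0.5f);
	const float PctY = (PixelY - Height * 0.5f) / (Height * 0.5f);

	// Angles away from the optical axis. Texture Y grows downwards,
	// so the world-space elevation is the negated vertical angle.
	const float YawRad = FMath::DegreesToRadians(PctX * HorizontalFovDeg * 0.5f);
	const float PitchRad = FMath::DegreesToRadians(-PctY * VerticalFovDeg * 0.5f);

	// Project a point DepthCm away from the camera along those angles.
	return FVector(
		DepthCm * FMath::Cos(PitchRad) * FMath::Cos(YawRad),  // forward (X)
		DepthCm * FMath::Cos(PitchRad) * FMath::Sin(YawRad),  // right   (Y)
		DepthCm * FMath::Sin(PitchRad));                      // up      (Z)
}
```

The result is relative to the sensor, so as noted above you would still add your camera's world position and rotation (or attach to a camera child actor) if the sensor isn't at the origin.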

Thanks Predalienator. This helped a lot. I was struggling with this and my own code was close, but after looking at yours I made a few adjustments and now it works great.

E

The calculation you mentioned is correct; however, we need the calibration data specific to the sensor we are testing with.
Microsoft provides this through the GetDepthCameraIntrinsics function on the CoordinateMapper class: ://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.aspx
So I still need access to this data, which the plugin does not currently provide, or if it does, I don't know how. The following is a link to the details of the calculation and the CameraIntrinsics data structure.
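For reference, this is roughly what using those intrinsics looks like against the native Kinect v2 SDK, going around the plugin entirely (an untested sketch: the radial-distortion terms are ignored, the helper name is made up, and the sign conventions may need checking against Kinect's camera-space axes):

```cpp
#include <Kinect.h>   // Kinect for Windows SDK 2.0

// Back-project a depth pixel into camera-space centimeters using the
// per-sensor calibration returned by GetDepthCameraIntrinsics.
bool DepthPixelToCameraCm(ICoordinateMapper* Mapper,
                          float PixelX, float PixelY, float DepthCm,
                          float& OutX, float& OutY, float& OutZ)
{
	CameraIntrinsics Intrinsics = {};
	if (!Mapper || FAILED(Mapper->GetDepthCameraIntrinsics(&Intrinsics)) ||
	    Intrinsics.FocalLengthX == 0.0f || Intrinsics.FocalLengthY == 0.0f)
	{
		return false;   // sensor not ready yet, or no calibration available
	}

	// Simple pinhole model: offset from the principal point, divided by the
	// focal length (both in pixels), scaled by that pixel's depth.
	OutX = (PixelX - Intrinsics.PrincipalPointX) / Intrinsics.FocalLengthX * DepthCm;
	OutY = (PixelY - Intrinsics.PrincipalPointY) / Intrinsics.FocalLengthY * DepthCm;
	OutZ = DepthCm;
	return true;
}
```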

I thought I would share two projects that I completed over the weekend using the plugin. Thanks Opaque Multimedia for making such a user friendly plugin. I was able to implement these based on the tutorial and intro project packages provided.
Control of a 3rd person avatar: the Unreal avatar follows the position of the person's movement in relation to the Kinect.
://youtube.com/watch?v=n764eI6scA4
Control of FPS with Kinect whole body movements (including rotation, head tilt, forward and backward movements). Also the character fires balls from each hand when the hand is opened.
://youtube.com/watch?v=_kgCtbPqlCg

This is cool - I'm quite interested in the 3rd person one. Are the legs jittery because they're out of frame? How difficult is it to drive the character with the plugin - is there much setup involved?

Dan

I found it quite easy to figure out, though I am still no expert. I wanted to simply control the whole-body position of the avatar with changes in my body position in 3D space, i.e. when I move forward the avatar moves forward. To smooth out some of the joints, I think I need to clamp some of the joint angles the avatar can express. Each socket on the avatar can rotate in any direction, so you get wonky knee angles when the Kinect misreads the body position, because the knee is a hinge (planar) joint and not a ball-and-socket joint like the shoulder or hip. This will take me a while to figure out, and as I dig into it more I'll post some updates. Does that make sense?

I have noticed that in the 4.8 version of the plugin there is no KinectPlayerController (KPC), or I just cannot find it. In the Introduction project, step #3 of the Installation/Setup (BP_Signpost_11) says to parent your PlayerController to the Kinect Player Controller (KPC). What do you do instead? Many of the other steps also refer to using the KPC. So could you please either clarify how to find the KPC or explain what to use instead? Thanks.

Also, I am wondering if I can use it to record animations for a game that does not itself use the Kinect.

When we moved to version 1.1, we moved the architecture over to a component-based system. This removes several steps that were previously necessary and brings the plugin more in line with the general direction of Unreal. Instead of performing any setup steps before getting into your blueprinting, you simply add a Kinect interface component and can then get straight into things.

When modifying the ABP_Avaiteering Blueprint I am unable to control the motion at the hand joint; it seems to be fixed. Further, when I try to extract Kinect information for the tip of the hand, there do not seem to be any values being tracked, even though it is an option to track this joint via the Blueprint node. Is there some way to resolve this so I can script wrist flexion and extension into the animation Blueprint? See the screenshot of the animation Blueprint below. You can also see a picture of the output of the joint rotation (and how it remains the same each tick) and how the right wrist is stuck in position. Any help would be much appreciated. Thanks,

Well, I have figured out how to get the hand joint integrated using the ABP_Avateerion_MacroSmooth animation Blueprint. Now my question is: when using that Blueprint, has anyone created angle clamps to control for the unrealistic movements that occur when the Kinect reports a large value? For example, to keep the wrists from spinning around 360 degrees. I have tried playing around with this, but it is difficult to discern which component of rotation I should clamp. Furthermore, it seems like the joint orientation is reported in absolute terms rather than relative to the current position; for example, if I clamp the rotation in one anatomical position, it affects a planar movement with the arms by your sides and a different planar movement with the arms out in front of you. Although I am not 100% sure on this. If anyone else has messed around with this, it would be interesting to see what your Blueprints look like for clamping, given what I have mentioned above. You can get a sense of the issue I am confronting via the video below.

://youtube.com/watch?v=ZcfWfWU3HTw
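One direction I have been thinking about (untested, and in C++ rather than Blueprint, with names I made up for the example) is clamping the joint's rotation as a delta from a reference pose instead of clamping the absolute rotation, so the limits follow the arm wherever it is:

```cpp
#include "CoreMinimal.h"

// Clamp a Kinect-driven joint orientation so it can only deviate a limited
// amount from a reference (e.g. bind-pose) orientation.
FRotator ClampJointRotation(const FQuat& KinectRotation, const FQuat& ReferenceRotation,
                            float MaxPitch, float MaxYaw, float MaxRoll)
{
	// Express the Kinect rotation relative to the reference pose, so the clamp
	// limits movement away from that pose rather than away from a world axis.
	const FQuat Delta = ReferenceRotation.Inverse() * KinectRotation;
	FRotator DeltaRot = Delta.Rotator();

	DeltaRot.Pitch = FMath::Clamp(DeltaRot.Pitch, -MaxPitch, MaxPitch);
	DeltaRot.Yaw   = FMath::Clamp(DeltaRot.Yaw,   -MaxYaw,   MaxYaw);
	DeltaRot.Roll  = FMath::Clamp(DeltaRot.Roll,  -MaxRoll,  MaxRoll);

	// Re-apply the reference so the result is an absolute rotation again.
	return (ReferenceRotation * FQuat(DeltaRot)).Rotator();
}
```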

Thanks,

I am trying to implement Avateering using the code given in the Introduction level but I’m getting this message. I don’t understand what it means by “Info Component Pose was visible but ignored”.

That note just signifies that no information is being passed into this node, and that it therefore assumes a neutral pose.

@: Guys, something is wrong with the plugin or Unreal on Windows 10, and I haven't managed to find out what yet. Did you have any problems? It's stuttering a lot, as if it were running at around 5 fps or so. The engine fps is fine, though. I also tested the Kinect with its SDK Browser and it's very smooth in all the demos.

My specs:
Windows 10 x64
Unreal Engine 4.8.3
i7 4710MQ @ 2.50 GHz
GeForce GTX 960M
8.00 GB RAM

Thanks in advance!