Advice on creating virtual dressers.
Prepare yourself: even with the features I added to the plugin specifically for this, it's complex.
It's very important that you use a skeleton with the same rig as the UE mannequin. Not just the same bone names, but also their orientations. Otherwise the setup is harder (see the first questions in this QA section).
You'll need to activate color-space joint transforms so that joint coordinates are aligned with the color camera, right after you initialize the sensor on BeginPlay. That's done by calling the Set Use Joints Color Space Transforms node with Use checked. Once you've done that, you can use the Color Location and Color Orientation properties when you break a KinectJoint structure.
Scale the bones in the SkeletalMesh to match the user's bones, since some people are taller or shorter than others. I did it by storing the original SkeletalMesh bone lengths on BeginPlay, before any change was made to the mesh. To get a bone's length from the Skeletal Mesh, call Get Socket Location on the two joints that make up that bone (like upperarm_l and lowerarm_l for the left upper arm) and save the distance between them. I suggest saving those values in an array ordered the same as the Kinect joints; you can get an index from any Kinect joint value using Joint to Index, which keeps things consistent when passing the values to the AnimBP.
Notice that for the array index I used the same joint used for upperarm_l in the AnimBP.
Then scale the bones each frame by sending the correct scales to the SkeletalMesh's AnimBP. The plugin gives you the user's bone lengths via Get Bone Length, which you can call on a tracked NeoKinectBody. What worked best for me was to scale only the axis along the bone (X for the Mannequin), so its thickness doesn't change; that way the scaling is almost unnoticeable. The math is Scale = UserBoneLength / OriginalMeshBoneLength.
Scaling logic added to BP_AvateeringDemo, from the demo project.
I advise you to lerp the scale value over time (Lerp(PreviousFrameValue, CurrentFrameValue, DeltaTime * Speed)) because Kinect will jitter it constantly.
Finally, you'll need to mirror the scene horizontally. The easiest way is to scale the Actor to -1 on the Y axis, but back when I did that, it would break cloth simulation. So what I did instead was create a post process that inverts the U axis of the rendered scene by passing it through a OneMinus node in the material.