I used the “Send to UE” Blender addon (on the LTS version of Blender) to send a character mesh (plus a deform-only rig and animation) to Unreal. On my first try, it split the character into separate meshes and only animated the eyeball mesh (the top mesh selected). When played, the movement looked like it could be accurate, but it sat at ground level instead of head height.
Next, I joined all of the character’s meshes in Blender and sent it that way. This gave positive results: the skeleton and the mesh were seemingly scaled correctly, but the animation, when viewed, was crushed down to roughly the size of a basketball.
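If it helps illustrate the symptom, here is a rough numeric sketch of what a stray, unapplied armature scale would do to the animation’s translations. The 0.01 factor is only an assumption (the usual Blender-metres to Unreal-centimetres mismatch), not something I have confirmed in this file:

```python
# Hypothetical illustration: an unapplied object scale left on the armature
# gets multiplied into every bone translation of the imported animation.
armature_object_scale = 0.01   # assumed stray scale (Blender m -> Unreal cm mismatch)
hip_height_m = 1.8             # example head-height translation authored in Blender

# If that scale is baked into the skeleton instead of applied before export,
# the imported translations shrink by the same factor:
effective_translation = hip_height_m * armature_object_scale
print(effective_translation)   # 0.018 -> centimetre-scale, i.e. basketball-sized
```

That would match a skeleton that looks correct in the viewport but plays its animation at 1/100 size.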
While snooping around, I discovered that on the animation skeleton screen the animation was performing roughly as it should, but it was working with a very tiny skeleton within that animation. I found the “Translation Retargeting” selections in the options, and discovered that if I manually changed every bone from its default of “Animation” to “Skeleton”, it basically repaired the animation, bone by bone.
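As I understand the “Translation Retargeting” options (this is my conceptual model of the behaviour, not Unreal’s actual code), each per-bone mode decides where that bone’s translation comes from at playback, which would explain why flipping bones to “Skeleton” repaired the pose one at a time:

```python
def retargeted_translation(mode, animation_translation, skeleton_ref_translation):
    """Conceptual sketch of per-bone Translation Retargeting in Unreal.

    "Animation": use the translation keyed in the animation sequence
                 (tiny, if the sequence came in at the wrong scale).
    "Skeleton":  use the skeleton asset's reference-pose translation,
                 keeping only the animation's rotations.
    """
    if mode == "Animation":
        return animation_translation
    if mode == "Skeleton":
        return skeleton_ref_translation
    raise ValueError(f"unhandled mode: {mode}")

# A bone whose animated translation imported at 1/100 scale:
print(retargeted_translation("Animation", 0.018, 1.8))  # 0.018 (crushed)
print(retargeted_translation("Skeleton", 0.018, 1.8))   # 1.8 (repaired)
```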
However, something had added a root bone at the base of her spine… it anchored the entire rig by its spine so that it couldn’t do any location transforms away from that point. This meant that in the dance animation she could no longer “dip” to the ground, resting on her heels; instead it retracted her legs up into the air. She also couldn’t do any movements in the dance that had her travel even slightly from that anchor point.
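The behaviour reads to me like the added root bone is pinning the hips: any world-space travel keyed on the hips gets discarded, and the legs fold to compensate. A toy sketch of that pinning (the function and values are mine, purely illustrative, not from the addon):

```python
def pin_to_anchor(keyed_hip_position, anchor_position):
    # With the spine anchored, the hip's keyed world translation is thrown
    # away and replaced by the fixed anchor point; a dip toward the floor
    # can then only be expressed by pulling the legs up instead.
    return anchor_position

# Keyed "dip" that lowers the hips from 0.9 m to 0.3 m:
print(pin_to_anchor(0.3, 0.9))  # hips stay at 0.9; the legs retract instead
```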
I was attempting to use her as a dancing background character. Am I required to use Blueprints and such to figure out how to manually “co-animate” all of her location translations in sync with the mocap, instead of just letting the mocap do what it was already doing in Blender? If so… what’s my next step? What’s the tree I should be barking up to get this going?