Combining Live Link-recorded face animation and body animation disconnects head from body

I’ve recorded a facial animation with head rotation using Live Link. It all seems to work fine. I’d like to include that animation in a level sequence alongside a separate body animation (e.g. an idle animation).

When I do this, they work fine separately, but when I try to combine them, the head disconnects from the body and stays floating:

I thought I might be able to fix it by deleting whatever curves from the facial animation that might be interacting with the body animation and overriding it. When I tried that, something strange happened:

Note the 1000+ curves being deleted. The face no longer animates, but the head still rotates when I press play. I tried to capture a frame where the head is still tilted to make it a bit more obvious.

This brings up the question: where is the head rotation even stored? How can I access it and/or delete parts of it?

Perhaps this is a red herring though. Does anyone know how I can combine these animations correctly in a sequence? I am attempting to use the UE5 MM_Idle animation for the UE5 skeleton. I followed this tutorial to get that set up.

Hi dellis23, did you find a solution to connect the head with the body?

Hey Marc. Nah, I’ve had this tab open refreshing multiple times daily hoping someone had the answer :slight_smile:

I’m guessing you have the same problem? Are you also trying to use a retargeted animation from the UE5 mannequin?

Yes, we have a UE5 MetaHuman and are trying to combine a body animation with Live Link face capture animations. We tried socketing the floating face to the head position on the skeleton. That got them into sync, but it was super messy.


Any updates?

Hi guys! In my case, the reason was that I was baking the Take Recorder result. If you use the Take Recorder facial animation without baking it, everything works perfectly, at least for me :slight_smile:

Interesting, thanks for the heads up. I don’t recall explicitly specifying anything with respect to “baking”. Is there a flag to turn that off? And you’re still able to record and play back the animation later?

There’s no flag; I actually did a baking procedure, so it’s not an option you tick but rather a step you omit. And yes, I was able to record and play back :slight_smile:

For the case that combines the MetaHuman head with a custom body: if the head floats above the body in Sequencer, you may not want to pin the head onto the neck_01 socket in the blueprint before the Live Link face capture; otherwise the face mocap animation will include that offset.
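To illustrate the point above, here is a minimal sketch in plain Python (not the Unreal API) of why pinning before capture bakes an offset into the recording: every recorded head position carries the constant socket offset, so playing the capture back unpinned leaves the head floating. The function name and numbers are illustrative, not part of any engine workflow.

```python
# Conceptual sketch: a pinned recording carries a constant socket offset
# in every sample; subtracting it per frame recovers the unpinned track.
def remove_pin_offset(recorded_positions, pin_offset):
    """Subtract a constant pin offset from every recorded position."""
    return [tuple(c - o for c, o in zip(pos, pin_offset))
            for pos in recorded_positions]

# Illustrative head-position samples, floating 15 units too high.
recorded = [(0.0, 0.0, 175.0), (0.0, 1.0, 175.5)]
print(remove_pin_offset(recorded, (0.0, 0.0, 15.0)))
# [(0.0, 0.0, 160.0), (0.0, 1.0, 160.5)]
```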

Hi. Can you explain what you mean by baking the Take Recorder result? I’m having this same issue and I can’t figure it out. Thanks.

Sorry to pile on with no solutions, but I’ve got the same problem. It looks like all the bones are following the sequenced animation (which only has animation on the face bones; the rest is in bind pose) instead of respecting the parenting from the hierarchy.

I’ve resolved this in my case (state machines):

Since the face doesn’t have the same skeleton as the body, we can’t simply attach the body as a Master Pose Component here. Instead, we need to add some logic to the Face_AnimBP animation blueprint.

  1. In the Event Graph of Face_AnimBP (it’s in the MetaHumans/Common/Face folder), get the owning actor of the face and cast it to the character blueprint of your MetaHuman:

Here I’ve attached it to Event Begin Play, which means it won’t work until you actually play the game. From the character blueprint, we drag out Get Body (almost at the very bottom of the list) and save it as a Body Skeletal Mesh variable.

  2. In the AnimGraph, I’ve added a blend between the copied Body Skeletal Mesh pose and the State Machine.

This means that whatever the state machine plays, the body skeletal mesh animation will be blended in. Since I don’t plan to add any animation to the root → head bones on the face, the body animation effectively overrides the face animation for matching bones.
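The override logic described above can be sketched in plain Python (not the Unreal API): for every bone both poses share, the body pose wins; face-only bones keep the face animation. Bone names here are illustrative, not the actual MetaHuman skeleton.

```python
# Conceptual sketch of the blend: body transforms override face transforms
# for matching bones; face-only bones (jaw, brows, ...) pass through.
def blend_body_over_face(face_pose, body_pose):
    combined = dict(face_pose)   # start from the face pose
    combined.update(body_pose)   # body overrides any shared bones
    return combined

# Illustrative per-bone rotations (degrees).
face_pose = {"head": (0, 15, 0), "jaw": (5, 0, 0), "brow_l": (2, 0, 0)}
body_pose = {"pelvis": (0, 0, 90), "spine_01": (0, 5, 0), "head": (0, 0, 10)}

print(blend_body_over_face(face_pose, body_pose)["head"])
# (0, 0, 10) -- the body's head rotation wins
```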


I don’t think it’s an ideal solution but I hope this helps!

OK :smiley:

There’s a much easier way. Instead of casting to the BP and all that, you can just insert a Use Cached Pose ‘BodyPose’ node near the end of the AnimGraph:

This cache is already set and used by default in this Anim BP, but the signal just doesn’t reach the Output Pose node. For Sequencer use, you can simply attach it to the final blend at the end, like so:

Added benefit is that it’s going to look ok in the editor too :wink:

I had the same issue combining a body animation with an Omniverse Audio2Face animation for the face. I ended up fixing it by going into Sequencer, right-clicking the Face skeletal mesh that the face animation was running on, and selecting Bake Animation Sequence;
Then, in the Animation Sequence Options popup (after selecting where to save the animation), make sure the “Evaluate All Skeletal Mesh Components” option is checked;
The newly created animation sequence for the face then contains all the body movement baked into it. I was able to replace the previous floating-head animation sequence in Sequencer with the newly baked one, and it worked fine after that. This is potentially helpful if you don’t want to use any blueprints, which was my situation since I just wanted to use Sequencer.
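The bake described above can be sketched conceptually in plain Python (this is not the actual Sequencer bake code): with “Evaluate All Skeletal Mesh Components” checked, every skeletal mesh component is sampled at each frame and the combined pose is written into the new sequence, which is why the baked face asset also carries the body movement. Track contents here are illustrative.

```python
# Conceptual sketch: sample every component per frame and merge the results
# into one baked track, as the "evaluate all components" bake option does.
def bake_all_components(num_frames, component_tracks):
    baked = []
    for frame in range(num_frames):
        pose = {}
        for track in component_tracks:   # face, body, ...
            pose.update(track(frame))    # later tracks override shared bones
        baked.append(pose)
    return baked

# Illustrative per-frame curves for two components.
face_track = lambda f: {"jaw": f * 0.5}                      # face-only curve
body_track = lambda f: {"pelvis": f * 1.0, "head": f * 0.1}  # body movement

baked = bake_all_components(3, [face_track, body_track])
print(baked[2])  # frame 2 contains both the face and the body keys
```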