How are the Metahuman facial controls linked to blendshapes in Maya?

Hello, I’m looking at the metahuman facial controls in Maya and trying to work out how everything is hooked up and how it works with Unreal. I’ve used Maya before for some basic animation, but I’m not a regular user.

If I move the left-eye control left and right, I can see that the eyeball moves, and the skin around the eye moves with it. After a bit of digging, I found out that the eyeball rotates via a driven key connected to the eye control (the digging involved looking at “CTRL_L_Eye” in the Node Editor, following a few links in the chain to “LOC_L_eyeUIDriver”, where I noticed some yellow keys in its Attribute Editor; I right-clicked, went to “Set Driven Key”, and could see “CTRL_L_Eye” in there).
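
For anyone else digging, here’s roughly the same check done from the Script Editor instead of the Node Editor (the locator attribute is a guess on my part; your keys might sit on a different channel):

```python
# Maya Python (maya.cmds): trace the eye control the same way, without the Node Editor.
import maya.cmds as cmds

# First hop: everything CTRL_L_Eye feeds into.
print(cmds.listConnections('CTRL_L_Eye', source=False, destination=True, plugs=True))

# For a driven attribute, ask Set Driven Key directly who the driver is.
# translateX is just a guess at which channel the yellow keys are on.
print(cmds.setDrivenKeyframe('LOC_L_eyeUIDriver.translateX', query=True, driver=True))
```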

However, I’m struggling to find out how the blendshapes are linked to the facial control rig. If I click on the head mesh and go over to the “blendshapes” tab, I can see that the “eye_lookLeft_L” blendshape moves when I move the control for the left eye. Obviously, all the blendshapes here also have yellow keys on them, but if I try to right-click and go to “Set Driven Key” again, there are no drivers in the list.

How are these blendshapes being affected, and how would I find that out by myself?
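
For reference, this is roughly what I’m trying in script form (“head_blendShapes” is a placeholder for whatever blendShape node actually owns the eye_lookLeft_L target), in case I’m simply querying the wrong thing:

```python
# Maya Python (maya.cmds): see what is writing into a blendShape target weight.
import maya.cmds as cmds

plug = 'head_blendShapes.eye_lookLeft_L'  # placeholder blendShape node + target alias

# Direct upstream connection into that weight. Set Driven Key only lists drivers when the
# input is a driven-key animCurve, so if something else is plugged in here it won't show up there.
print(cmds.listConnections(plug, source=True, destination=False, plugs=True))

# Walk further up the blendShape node's history to see the nodes feeding it.
for node in cmds.listHistory('head_blendShapes') or []:
    print(node, cmds.nodeType(node))
```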

2 Likes

The name you see there is not a blendshape, but rather a pose that is achieved simply by using joints.
If you enable joint visibility on the face, you’ll see that there are hundreds of them.
In Maya you create a 52-frame animation, and for each frame you have a shape (mouth open, left eye closed, and so on), all done using joints (or rather the facial controls that are driving the joints).
Once you do that, you export the animation from Maya, which you then use in Unreal to generate a pose asset.
The pose asset allows you to “translate” the 52-frame animation into the matching shape names, so that (for example) the data the iPhone is sending is translated to a pose based on the 52-frame animation you created, triggering that specific pose using joints.
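
As a rough sketch of that idea (the control names and values below are placeholders, not the real MetaHuman GUI controls), the pattern is just one shape keyed per frame:

```python
# Maya Python (maya.cmds): key one ARKit-style shape per frame using the facial controls.
import maya.cmds as cmds

# frame -> list of (control.attribute, value) that make up that frame's shape (placeholders)
poses = {
    1: [('CTRL_L_eye_blink.translateY', 1.0)],   # e.g. eyeBlinkLeft
    2: [('CTRL_R_eye_blink.translateY', 1.0)],   # e.g. eyeBlinkRight
    3: [('CTRL_C_jaw.translateY', 1.0)],         # e.g. jawOpen
    # ... continue up to frame 52, one frame per ARKit shape
}

all_plugs = sorted({plug for pose in poses.values() for plug, _ in pose})

for frame, pose in poses.items():
    posed = dict(pose)
    for plug in all_plugs:
        # key 0 on everything that isn't part of this frame's shape so poses don't bleed together
        cmds.setKeyframe(plug, time=frame, value=posed.get(plug, 0.0))
```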

If you select the Face and check the AnimBP, you’ll notice that after the “LiveLink” node there is a node called “mh_arkit_mapping_pose”, which uses a pose asset generated from the “mh_arkit_mapping_anim”.
If you open that animation, you’ll see that for each frame there is a pose.
The pose asset simply takes all those poses and assigns each one a matching name following the ARKit specification.
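
If you want to find those two assets quickly, something like this in the editor’s Python console will list them (the ‘/Game’ search root just means “the whole Content folder”; the actual paths differ per project):

```python
# Unreal Editor Python: locate the ARKit mapping assets in the project.
import unreal

for asset_path in unreal.EditorAssetLibrary.list_assets('/Game', recursive=True, include_folder=False):
    if 'mh_arkit_mapping' in asset_path.lower():
        asset = unreal.load_asset(asset_path)
        print(asset_path, type(asset).__name__)
```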

So tldr: no blendshapes are used, just joints that make a face pose, which is then translated to the same name as the corresponding blendshape.

3 Likes

Thanks for that info. Is there any way to get access to the elements you mention in Maya? I’ve exported a MetaHuman character to Maya via Bridge, but the file doesn’t seem to have the 52 animation frames keyed in. I’ve also looked at the Maya Pose Editor, and there are some “Pose Interpolators” listed, but clicking on them doesn’t seem to do anything (probably the Maya Pose Editor is something entirely separate from the Unreal Pose Asset; I know it’s used for correcting joint deformation in Maya).

Regarding the following:

“The name you see there is not a blendshape, but rather a pose which is achieved by simply using joints”.

What is actually connecting the facial control rig in Maya to the poses that you mention? Changing the rig controls obviously makes the face move, but I feel a bit in the dark about how this works. I wondered whether it’s got something to do with the Dna File Path, as breaking this seems to break the face rig. I also found a node called “CTRL_Expressions” with “Extra Attributes” that were linked to the facial control rig, and playing with them changed the expressions, but I couldn’t work out how these “Extra Attributes” were connected to the bones in the face.
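
In script form, this is roughly how I was poking at those Extra Attributes, just listing them and where they go (I’m not sure this is the right way to trace them any further):

```python
# Maya Python (maya.cmds): list CTRL_Expressions' user-defined attributes and their outputs.
import maya.cmds as cmds

for attr in cmds.listAttr('CTRL_Expressions', userDefined=True) or []:
    plug = 'CTRL_Expressions.' + attr
    outputs = cmds.listConnections(plug, source=False, destination=True, plugs=True) or []
    if outputs:
        print(plug, '->', outputs)
```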

My guess is that the DNA file is doing a lot of this work, and that it probably has something to do with making sure all the bones in the face are the correct length when MetaHuman Creator saves out a given character.

1 Like

The frame-by-frame animation is only available as a uasset in Unreal, but if you export it from there and import it into Maya onto the facial rig, it won’t do much, since that animation is joint-based, while you need to move the facial GUI controls.
However, in the UE4 animation you do see which controls are used (as a curve animation), so if you have a lot of patience, you can recreate the same animation in Maya by triggering those controls accordingly.
Not really sure about the Pose Interpolators, I’ve never noticed them.

In Maya you don’t have poses, you just have the controls that are moving the joints.
By moving those joints you create the poses, but they’re not stored or saved inside the controls; the only reason you see them as shapes inside UE4 is because of the 52-frame animation I mentioned, so each pose you create in Maya is “retargeted” in Unreal using the pose asset.
As far as I know, the DNA file stores a lot of stuff and makes sure that the rig won’t break.

Regarding Maya > UE4 for the facial rig, imagine you want to have lipsync inside Unreal.
In Maya you create a set of mouth shapes that correspond to vowels/consonants (ah, the, m, b, and so on), which you then export to Unreal.
In Unreal, if you want to create lipsync, you can use one frame of that animation you previously created in order to have a “ready-made” mouth shape, so you can build your own lipsync animation rather than creating the entire mouth shape in Unreal.
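
As a very rough sketch of the Maya side (placeholder control names again, and it assumes the fbxmaya plugin for the FBX export type):

```python
# Maya Python (maya.cmds): one mouth shape per frame, then export the keyed range as FBX.
import maya.cmds as cmds

visemes = {
    1: [('CTRL_C_jaw.translateY', 0.8)],            # "ah"
    2: [('CTRL_C_mouth_pucker.translateY', 1.0)],   # "oo"
    3: [('CTRL_C_mouth_press.translateY', 1.0)],    # "m" / "b"
}

all_plugs = sorted({plug for pose in visemes.values() for plug, _ in pose})

for frame, pose in visemes.items():
    posed = dict(pose)
    for plug in all_plugs:
        # zero everything not in this frame's shape so the shapes stay isolated per frame
        cmds.setKeyframe(plug, time=frame, value=posed.get(plug, 0.0))

cmds.loadPlugin('fbxmaya', quiet=True)
cmds.playbackOptions(minTime=1, maxTime=len(visemes))
cmds.file('C:/temp/mouth_shapes.fbx', force=True, options='v=0;', type='FBX export', exportAll=True)
```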

The core idea behind the way the facial rig is used in Unreal is that all the hundreds of joints you see are driven by the control rig, and in order to make things easier (to use the iPhone, for example) a simple 52-frame animation is used to create the required mouth shapes (basically the blendshapes from the ARKit setup) and use that as your source for the facial animation.
It’s basically using the data from the iPhone, but applying it to a different rig to achieve the same results as with blendshapes.

1 Like

@Enter_Reality

I know this is an older thread, but I just discovered it and your explanation of how the ARKit poses work was so helpful! I thought it was all based on sculpted blendshapes and now I get it. Thank you!

I’m having a hard time finding the “frame-by-frame” animation of all the 52 ARKit face poses in Unreal. I would like to export them and use them as guides in another program as I re-create the 52 poses for a cartoony character.

Would you happen to know where in Unreal I can access them and how I can export them?

Thanks!

The asset is called mh_arkit_mapping_pose and it’s included in all MetaHuman Face ABPs.
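
If you want those 52 frames outside Unreal, the animation the pose asset is generated from (mh_arkit_mapping_anim) is the one to export: right-click it in the Content Browser and use Asset Actions > Export, or script it; the path below is a guess, so copy the real reference from your own project:

```python
# Unreal Editor Python: export the mapping animation to FBX (asset path is a guess).
import unreal

anim = unreal.load_asset('/Game/MetaHumans/Common/Face/mh_arkit_mapping_anim')

task = unreal.AssetExportTask()
task.object = anim
task.filename = 'C:/temp/mh_arkit_mapping_anim.fbx'
task.automated = True                      # no dialogs
task.replace_identical = True
task.options = unreal.FbxExportOption()    # default FBX export settings

# If no exporter is set, Unreal tries to pick one that matches the asset type and file extension.
unreal.Exporter.run_asset_export_task(task)
```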

Perfect. Now I know what to look for. Much appreciated!