Facial animation transferred between VERY different MetaHumans looks equally NICE, why?

Hi guys!
Recently I sculpted several very different characters in MHC. I placed the feature points of each character as differently as possible: one has a chubby face and tiny eyes, while another has a Gollum-style look.

In my limited experience with ARKit-based characters, if I use the blendshape set & values of Mr. Chubby DIRECTLY on Mr. Gollum, the expressions break: they either lose their semantics or mess up the mesh.

However, in the MetaHuman case, when I use Live Link to project my ARKit data onto Mr. Chubby as well as Mr. Gollum, I didn’t observe any such problem! The animations look equally good, even though their meshes differ a lot.

So why can MetaHuman faces share facial expressions so decently? Is there some virtual animator doing the retargeting job, or what…

Well, if they are that different, then I’ll assume you are using morph targets.

One of the key properties of morph targets is that they are additive relative to the master mesh the target was made from. If vertex 128 has an XYZ position of, say, (10, 10, 10), then the target stores a + or - offset per axis, so the saved value could be (10+5, 10+5, 10+5) as the “relative” offset. This means you can create an injector shape that totally changes the shape of the character, while morph shapes like expressions, being additive, still behave in the expected manner.
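To make the additive idea concrete, here is a minimal sketch in plain Python (all names and the 3-vertex “mesh” are hypothetical, not MetaHuman or Unreal API): a morph target stores per-vertex deltas, and an injector shape plus an expression simply sum on top of the master.

```python
# Hypothetical miniature mesh: each vertex is an (x, y, z) tuple.
master = [
    (10.0, 10.0, 10.0),  # vertex 0 (the "vertex 128" of the example)
    (0.0, 5.0, 0.0),     # vertex 1
    (0.0, -5.0, 0.0),    # vertex 2
]

# Morph targets store offsets relative to the master, not absolute positions.
injector   = [(5.0, 5.0, 5.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]  # reshapes the character
expression = [(0.0, 0.0, 0.0), (0.0, -1.0, 0.0), (0.0, 0.0, 0.0)] # moves vertex 1 down

def apply_morphs(base, weighted_targets):
    """Additively blend weighted per-vertex deltas onto the base mesh."""
    out = [list(v) for v in base]
    for deltas, weight in weighted_targets:
        for vertex, delta in zip(out, deltas):
            for axis in range(3):
                vertex[axis] += weight * delta[axis]
    return [tuple(v) for v in out]

# Injector and expression stack additively: vertex 0 ends at (10+5, 10+5, 10+5).
posed = apply_morphs(master, [(injector, 1.0), (expression, 1.0)])
print(posed[0])  # (15.0, 15.0, 15.0)
```

Because each target only contributes an offset, the expression delta behaves the same whichever injector shape is active underneath it.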
This is not tech unique to Unreal; it has been around for ages, used for creating unique characters in, say, Final Fantasy: The Spirits Within, as well as in “Massive” crowd scenes in The Lord of the Rings, or in games like Assassin’s Creed.

As general info, Daz3D uses injectors as part of their Genesis 3-8 bases.


thanks for your reply!
So, if expressions are treated as an invariant additive across different characters while the offsets of the facial vertices vary, won’t there be mistakes?
For example, I have a character A whose upper eyelid is at y=10 and lower eyelid at y=9. The eye-blink expression tells character A to move his upper eyelid down, say -1 unit. Then I morph character A to make his eyes bigger, i.e., lift the upper eyelid from y=10 to y=15, so I get character B, a morph target that takes A as its master.
Now, applying the blink additively on B: upper eyelid y = 15 - 1 = 14, lower eyelid y = 9, so the eyelids no longer meet and the semantics of the eye-blink are lost. (This is what I encountered while using ARKit data.)
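My concern in numbers, as a tiny (hypothetical) Python sketch, assuming the blink delta authored for A is reused unchanged on B:

```python
# Character A: blink delta authored so the upper lid reaches the lower lid.
upper_a, lower_a = 10.0, 9.0
blink_delta = lower_a - upper_a   # -1.0

# Character B after the bigger-eye morph: upper lid lifted to y = 15.
upper_b, lower_b = 15.0, 9.0

# Reusing A's fixed delta on B: the lid stops at y = 14, well above y = 9.
closed_b = upper_b + blink_delta
print(closed_b, lower_b)  # 14.0 9.0 -> the blink no longer closes B's eye
```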
Tell me if I misunderstood, pls.

Well, there could be, but I would not consider it a mistake so much as a lack of consideration of the expected behaviour.

OK, so morph targets only record vertices that move, and zero deltas are ignored, so the target shapes are a lot smaller in size than the full vertex count of the master. The ideal is to create shape clusters as part of the desired target shape, so to duplicate the normal behaviour of an eye closing you need one shape for the top eyelid and one for the bottom.
Now, if you add a ready-made expression, then any shape added as a cluster will compound in the final output.
Expression sets are good for one-offs, but they are going to cause, as you say, mistakes in the final shape, since the same target vertices could be included within two different shape clusters.
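The compounding point can be sketched like this (hypothetical names and numbers, assuming each cluster stores sparse per-vertex deltas): when two active clusters both record the same vertex, their additive offsets sum, moving the vertex further than either shape intended on its own.

```python
# Two hypothetical shape clusters, stored sparsely as {vertex_id: y_delta}.
# Both happen to include vertex 7.
smile_cluster  = {7: -0.5}  # mouth-corner cluster pulls vertex 7 down
viseme_cluster = {7: -0.7}  # lip-sync cluster also moves vertex 7 down

def combined_offset(vertex_id, weighted_clusters):
    """Sum the weighted deltas from every cluster that records this vertex."""
    return sum(cluster.get(vertex_id, 0.0) * weight
               for cluster, weight in weighted_clusters)

# With both clusters fully active, vertex 7 moves by -1.2: the two shapes
# compound instead of producing either intended pose alone.
offset = combined_offset(7, [(smile_cluster, 1.0), (viseme_cluster, 1.0)])
print(offset)
```

This is why shapes should be built as non-overlapping clusters where possible: if no two simultaneously active clusters share vertices, the additive sums stay predictable.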
Personally, I don’t use MHC but Genesis 3 / Daz Studio instead, so I can’t really advise you, but in DS I have access to potentially thousands of shapes, so I can create an expression using dozens of clusters and still do, say, lip sync without the shapes compounding.

This explanation makes a lot of sense, thank you :wink:!