Thank you
I’m going to prepare a detailed tutorial, but for now, here are some additional tips:
1/
The whole blending magic happens in a Material Function called MF_AnimatedMaps - you’ll find it in the MetaHumans/Common/Face/MaterialFunctions directory. It in turn uses MF_HeadMask_01A, MF_HeadMask_02A and MF_HeadMask_03A. These take several parameters (e.g. head_wm1_normal_head_wm1_browsRaiseInner_L) that act as weights for the texture masks located in MetaHumans/Common/Face/Textures/Utilities/AnimMasks.
You can analyze those graphs and textures to get full control over the blended areas, but IMO it’s enough (and this is what I did) to plug some solid colors (such as red, green and blue) into CM1, CM2 and CM3 and see which areas of the face react to particular facial expressions. I’ve created a simple screencast below to show you what I mean.
Unreal blends both the normal maps (WM1, WM2, WM3) and the blood-flow maps (CM1, CM2, CM3), so they have to be mutually consistent. I just hand-painted these textures (the final ones, not the ones in the video :D) in Substance Painter, using the MAIN textures with some additional layers on top, based on photos of myself making these particular expressions.
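If you’d rather script the debug-color trick than swap textures by hand, here’s a minimal Unreal Editor Python sketch. The asset paths and the CM1/CM2/CM3 parameter names are placeholders - check your own MetaHuman’s face material instance for the exact names before running it.

```python
# Unreal Editor Python sketch: swap the blood-flow maps (CM1/CM2/CM3) on the face
# material instance for solid debug textures, so you can see which facial areas
# each map drives. Asset paths and parameter names are placeholders - adjust them
# to match your project and your MetaHuman's face material instance.
import unreal

FACE_MI_PATH = "/Game/MetaHumans/YourMetaHuman/Face/Materials/MI_YourMetaHuman_Face"  # hypothetical
DEBUG_TEXTURES = {
    "CM1": "/Game/Debug/T_SolidRed",    # hypothetical debug textures
    "CM2": "/Game/Debug/T_SolidGreen",
    "CM3": "/Game/Debug/T_SolidBlue",
}

face_mi = unreal.EditorAssetLibrary.load_asset(FACE_MI_PATH)

for param_name, tex_path in DEBUG_TEXTURES.items():
    tex = unreal.EditorAssetLibrary.load_asset(tex_path)
    # Set the texture parameter on the material instance (editor-only API).
    ok = unreal.MaterialEditingLibrary.set_material_instance_texture_parameter_value(
        face_mi, param_name, tex
    )
    unreal.log(f"Set {param_name}: {ok}")

# Refresh the instance so the change shows up in the viewport.
unreal.MaterialEditingLibrary.update_material_instance(face_mi)
```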
Keep in mind, though, that you don’t control the aforementioned material parameters (head_wm1_normal_head_wm1_browsRaiseInner_L, etc…) manually. I once thought I could, because mh_arkit_mapping_anim contains the corresponding curves and sets their values there, but they seem to be ignored. I believe the final calculation happens inside RigLogic (in the Post Process Anim Blueprint) and is based on the Control Rig values.
RigLogic itself is pretty complex and is based on several years of R&D by 3Lateral, a company acquired by Epic.
It provides controls based on FACS (Facial Action Coding System) and calculates joint transformations, blend shape weights, and shader multipliers for the animated maps (the ones described above).
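To make that concrete, here’s a purely conceptual sketch - this is NOT the real RigLogic API, and the control, joint and curve names are made up. It just illustrates the data flow: one FACS-style control value from the Control Rig fans out into all three kinds of outputs, and the animated-map weights are exactly the material parameters MF_AnimatedMaps consumes.

```python
# Conceptual illustration only - not the real RigLogic API. Control Rig / FACS
# control values go in; joint transforms, blend shape weights and animated-map
# weights come out. All names and numbers below are made up.
def evaluate_rig_logic(control_values):
    brow_raise_l = control_values.get("CTRL_L_brow_raiseIn", 0.0)  # hypothetical control name

    return {
        # Bone transforms driving the facial skeleton (illustrative values).
        "joint_transforms": {
            "FACIAL_L_InnerBrow": {"translate_y": 0.2 * brow_raise_l},
        },
        # Corrective / expression blend shape weights.
        "blend_shape_weights": {
            "head_browRaiseInner_L": brow_raise_l,
        },
        # Shader multipliers for the animated wrinkle / blood-flow maps -
        # these are the head_wm... parameters mentioned above.
        "animated_map_weights": {
            "head_wm1_normal_head_wm1_browsRaiseInner_L": brow_raise_l,
        },
    }
```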
There’s a new MetaHuman version (1.3), released yesterday, which gives you better control through the DNA Calibration Library, but frankly I haven’t checked it yet.
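Since I haven’t dug into it myself, treat this as an untested sketch based on the Python examples shipped with the open-source MetaHuman-DNA-Calibration repo: it loads a .dna file and lists its animated-map channels - the same head_wm… curves that drive the material parameters above.

```python
# Untested sketch based on the examples shipped with Epic's MetaHuman DNA
# Calibration library (the open-source "dna" Python bindings). It loads a .dna
# file and prints the animated-map channels, i.e. the curves that end up
# driving the wrinkle / blood-flow map parameters.
from dna import BinaryStreamReader, DataLayer_All, FileStream, Status

def load_dna(path):
    stream = FileStream(path, FileStream.AccessMode_Read, FileStream.OpenMode_Binary)
    reader = BinaryStreamReader(stream, DataLayer_All)
    reader.read()
    if not Status.isOk():
        raise RuntimeError(f"Error loading DNA: {Status.get().message}")
    return reader

reader = load_dna("Ada.dna")  # path to your MetaHuman's DNA file
for i in range(reader.getAnimatedMapCount()):
    print(reader.getAnimatedMapName(i))
```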
2/ If you animate your face with LiveLink, you’ll get some decent results, but:
- remember to test the LiveLink Face app under different lighting conditions, because the capture quality will vary
- when your iPhone/iPad gets hot (and unfortunately this happens sooner rather than later with LiveLink Face), the framerate drops to 30 FPS - don’t capture in that state, it’s better to wait for the device to cool down
3/ No matter how accurate the LiveLink data is, the lipsync won’t be perfect; you’ll need some manual work here. What you can do is bake your LiveLink animation (captured with Take Recorder) onto the Control Rig (Face_ControlBoard_CtrlRig) and then add an additive section for fine-tuning.