MetaHuman - Tips & Tricks

I’d like to share a couple of tips & tricks related to the Mesh to MetaHuman workflow for everyone who’d like to experiment with the technology a little deeper than the simple Promote Frame -> Track Active Frame -> Identity Solve -> Mesh to MetaHuman wizard-like flow. Read below to learn how to overcome MetaHuman Creator (MHC) limitations (e.g., my avatar has a haircut that isn’t available in the creator) and how to fix an Unreal bug related to ARKit Live Link.

First of all - don’t rely on MetaHuman Creator Sculpt/Move tools too much, as they’re tricky. I don’t use them at all because:

  • their flexibility is minimal,

  • MHC has a huge focal length (I’d assume the field of view is somewhere between 8 and 15 degrees) - far from what pictures taken with your phone usually have (see the quick calculation below).

Therefore - unless you’re a character artist with a trained eye who can capture a likeness from pictures with a different FOV, you’ll get into trouble: even if you achieve fairly good results from the front view, your MH won’t look like you once you rotate it. That’s why it’s better to make all tweaks in advance, before you send the mesh to MHC.
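To put rough numbers on the FOV mismatch, here’s a quick back-of-the-envelope calculation (the 36 mm full-frame-equivalent sensor width is an assumption, and the exact MHC focal length is only my guess):

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view for a full-frame-equivalent focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def focal_length_for_fov_mm(fov_deg, sensor_width_mm=36.0):
    """Inverse: the focal length that produces a given horizontal FOV."""
    return sensor_width_mm / (2 * math.tan(math.radians(fov_deg / 2)))

print(horizontal_fov_deg(26))        # typical phone main camera: ~69 degrees
print(focal_length_for_fov_mm(10))   # an MHC-like 10 degree FOV: ~206 mm equivalent
```

In other words, MHC shows your head as if it were shot with a long telephoto lens, while your phone photos are closer to wide-angle - which is why the proportions never quite match if you sculpt by eye in the Creator.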

Here’s my full workflow:

1/ Scan your face (I assume most of us don’t have sophisticated 3D scanners and use Polycam or similar software instead)

2/ Clean up your mesh (I’ve noticed it gives slightly better results) - smooth out bumps and sculpt missing parts (e.g., I had to sculpt my ears from scratch because they’re obscured by hair). You don’t have to worry about mesh topology at this stage

3/ Bake your albedo/diffuse map onto the modified mesh (this is important because the auto-trackers use this texture; don’t worry if it’s not very accurate - you’ll be able to tweak the markers manually)

4/ Import your mesh into Unreal, add a MetaHuman Identity asset, and click + Components from Mesh (this automatically adds the Capture Data, Face and Body parts, and a Neutral Pose)

5/ Set the viewport FOV to a small value (e.g., 5), position the camera to view your face from the front, then click Promote Frame and Track Active Frame

Points 6-12 are ESSENTIAL:

6/ The auto-trackers are not good enough if you strive for likeness. You should leverage more curves, but remember they’re not available until you click “MetaHuman Identity Solve” for the first time. So do that, then enable more curves (in the right-hand panel).

7/ To tweak the curves you can drag control points. You can add more control points (CTRL+LMB on the curve) or remove existing ones (CTRL+LMB on a control point). Their count doesn’t influence the final topology at all, so use as many as you need to follow the original shape (but not too many, because you want a smooth, organic shape). You can multi-select control points with a SHIFT+LMB drag, but the drag has to go left to right and top to bottom; otherwise, the points aren’t selected.

8/ Spend a lot of time on the eyes, as they are very important for likeness (I usually spend 20-30 minutes here). Switch between the mesh/template view frequently and make sure all eye markers and eyelids (and their widths/depths) are accurate. The “Eye crease” curve is tricky, as the fold doesn’t follow it exactly, so you have to tweak it, click Identity Solve, compare the template with the original mesh, and repeat (tweak, Identity Solve, compare…). You can press L + LMB drag to rotate the light.

9/ The auto-trackers (at least for me) place the brows way too high. Use the Brow curves to fix this.

10/ You’re not limited to a single frame - you can promote more. Just make sure you don’t use the same curves on different frames. This is especially useful if the auto-tracking doesn’t detect your ears accurately.

11/ Now for the MAGIC console command that will make a huge difference: mh.Identity.ExportMeshes 1

This will export both your scanned AND the conformal mesh (the retopologized one, with MH topology) to your “Saved” directory the next time you click MetaHuman Identity Solve. It lets you open both meshes in a DCC perfectly aligned (no more moving/rotating/rescaling). This is the perfect moment to bake an additional normal map. Because Polycam models have poor quality, it doesn’t make sense to bake the normal map from the scan directly, so what I usually do here is:

a) copy my conformal model,

b) shrinkwrap the copied model to the scanned model - you get better alignment while the surface stays smooth enough,

c) limit the shrinkwrap by weight-painting the vertices, because some parts would get broken otherwise (especially the eyelids and mouth),

d) bake a normal map from the copied, shrinkwrapped mesh onto the conformal one.

After this step, your model will look much better (although this normal map won’t be visible in MHC, we’ll use it later in Unreal)
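If Blender is your DCC, here’s a minimal sketch of steps b)-d). The object names (“Conformal”, “Conformal_Copy”, “Scan”) and the “ShrinkwrapMask” vertex group are just placeholders, and it assumes the conformal mesh already has a material with an image texture node selected as the bake target:

```python
import bpy

# Placeholder object names - rename to match your exported meshes.
conformal = bpy.data.objects["Conformal"]            # exported conformal (MH topology) mesh
conformal_copy = bpy.data.objects["Conformal_Copy"]  # duplicate of the conformal mesh
scan = bpy.data.objects["Scan"]                      # the scanned (Polycam) mesh

# b) Shrinkwrap the copy to the scan, c) limited to a weight-painted vertex
# group so the eyelids and mouth keep the clean conformal shape.
mod = conformal_copy.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
mod.target = scan
mod.wrap_method = 'NEAREST_SURFACEPOINT'
mod.vertex_group = "ShrinkwrapMask"  # weight-paint this group on the copy first

# d) Bake a normal map from the shrinkwrapped copy onto the conformal mesh
# (selected-to-active, Cycles).
bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.select_all(action='DESELECT')
conformal_copy.select_set(True)
conformal.select_set(True)
bpy.context.view_layer.objects.active = conformal
bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True, cage_extrusion=0.01)
```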

12/ When you’re ready, click Mesh to MetaHuman, open the Creator, and set the skin, eyes, brows, hair, etc. Don’t worry if you can’t find a perfect haircut - just select the one that resembles yours most closely. Don’t try to sculpt/move your custom mesh even if it seems misplaced - remember the focal length is really long and the perspective is different.

13/ Download your MetaHuman, but DON’T modify it in a DCC. Some tutorials show you how to tweak the blendshapes or resculpt the mesh, but don’t do it. It will get out of sync with the rigging system and facial expressions will look awkward. This is why it’s crucial to make all the mesh modifications at the beginning of the process.

14/ You can tweak your haircut by modifying the appropriate Groom asset (as long as you use grooms). It provides two groups: Group ID 0 refers to the sideburns, so if you want to get rid of them, simply set their Hair Width to 0. Group ID 1 refers to the rest of your hair. You can shorten it, change its scale at the root and the tip (to imitate density), and also update the physics if you need it. Do the same with the brows. BTW - the brows can also be rendered using Cards (in that case you can change the card mesh to modify the shape and density of your brows).

Even if your avatar looks good now, it will probably look strange as soon as you start animating it with ARKit (LiveLink), because the facial expressions will not resemble your own. There are a couple of steps to be taken to fix this:

1/ Things like the teeth/jaw position can be adjusted with the appropriate curves in the Anim Blueprint - open your MH Blueprint, select the Face component, find out which Anim Class it uses, and edit it. Open the AnimGraph, add a Modify Curve node with the appropriate curve pins, and set their values.

2/ You can make further adjustments to how your avatar reacts to your ARKit expressions. ARKit provides head and eye rotations plus 52 weights. The mapping is done in mh_arkit_mapping_pose, and for each of those 52 weights there’s a specific animation frame (in mh_arkit_mapping_anim). If, for instance, you don’t want your nasolabial fat pad to be that apparent, you can edit the appropriate frame. Keep in mind though - Unreal has a bug. The provided mh_arkit_mapping_pose is missing one Pose Name (MouthClose), so if you change the animation asset and click “Update Poses from Source Animation” in mh_arkit_mapping_pose, it’ll break. To fix this, you have to add the MouthClose pose name right after JawOpen (and before MouthFunnel). As soon as you do this, you can use “Update Poses from Source Animation” again.
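To make the fix unambiguous, this is the relevant fragment of the Pose Names list as it should look after the edit (only the neighbourhood of the missing entry is shown; the remaining names stay as they are):

```python
pose_names_fragment = [
    # ...
    "JawOpen",
    "MouthClose",   # the entry missing from the shipped asset - add it here
    "MouthFunnel",
    # ...
]
```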

3/ You should export at least 8 textures: FaceColor_MAIN, FaceColor_CM1, FaceColor_CM2, FaceColor_CM3, FaceNormal_MAIN, FaceNormal_WM1, FaceNormal_WM2, FaceNormal_WM3. These are the textures that get blended in for specific facial expressions (normal maps for skin folds and color maps for blood flow - e.g., parts of your face become reddish when you frown). I’ll write another post on how to use these textures and describe my whole workflow (especially the way I blend my photogrammetry face texture and the baked normal maps with the MH).


Very useful information. Looking forward to part II about the textures and your workflow using them.

Thank you for sharing this!

Thank you :wink:

I’m going to prepare a detailed tutorial, but for now, here are some additional tips:

1/

The whole blending magic happens in a Material Function called MF_AnimatedMaps - you’ll find it in the MetaHumans/Common/Face/MaterialFunctions directory. It further uses MF_HeadMask_01A, MF_HeadMask_02A, MF_HeadMask_03A. They use several parameters (e.g. head_wm1_normal_head_wm1_browsRaiseInner_L), which become weights for texture masks located in MetaHumans/Common/Face/Textures/Utilities/AnimMasks.

You can analyze those graphs and textures to get full control over the blended areas, but IMO it’s enough (and this is what I did) to put some solid colors (such as red, green, and blue) into CM1, CM2, and CM3 and see what happens to your face for particular facial expressions. I’ve created a simple screencast below to show you what I mean.
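If you prefer to generate those solid debug textures outside Unreal, a few lines of Python will do. This is just a sketch - it assumes Pillow is installed, and the resolution and file names are arbitrary:

```python
from PIL import Image

# Flat red/green/blue textures to plug into the CM1/CM2/CM3 slots,
# so you can see which expressions drive which map.
debug_colors = {
    "Debug_CM1.png": (255, 0, 0),   # red
    "Debug_CM2.png": (0, 255, 0),   # green
    "Debug_CM3.png": (0, 0, 255),   # blue
}

for filename, color in debug_colors.items():
    Image.new("RGB", (4096, 4096), color).save(filename)
```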

Unreal blends both the normal maps (WM1, WM2, WM3) and the blood-flow maps (CM1, CM2, CM3), so they have to be mutually consistent. I just hand-painted these textures (the final ones, not shown in the video :D) in Substance Painter (using the MAIN textures with some additional layers on top) based on photos of myself making those particular expressions.

Keep in mind, though, that you don’t control the aforementioned material parameters (head_wm1_normal_head_wm1_browsRaiseInner_L, etc.) manually. I once thought I could, because mh_arkit_mapping_anim has the specific curves and sets their values there, but they seem to be ignored. I believe the final calculation happens inside RigLogic (in the Post Process Anim) and is based on the Control Rig values.

RigLogic itself is pretty complex and is based on several years of R&D by 3Lateral, the company bought by Epic.

It provides controls based on FACS (the Facial Action Coding System) and calculates joint transformations, blend shape weights, and shader multipliers for the animated maps (the ones described above).

There’s a new MetaHuman version (1.3), released yesterday, which gives you better control through the DNA Calibration library, but frankly I haven’t checked it yet.

2/ If you animate your face with LiveLink, you’ll get some decent results, but:

  • remember to run the LiveLink Face app under different lighting conditions because the quality will vary

  • when your iPhone/iPad gets hot (and unfortunately, this happens sooner rather than later with LiveLink Face), the framerate drops to 30 FPS - don’t use it then, it’s better to wait :wink:

3/ No matter how accurate the LiveLink data is, the lip sync won’t be perfect. You’ll need some manual work here. What you can do is bake your LiveLink animation (captured with Take Recorder) onto the Control Rig (Face_ControlBoard_CtrlRig) and then add an additive section for fine-tuning.



I would love to see a video tutorial that shows in detail the process of adjusting the eyelids and the eyes in general to achieve likeness. This is the part I struggle with most of the time.

Are you planning to make a video tutorial showing the whole process of creating this face example? That would be really useful. Your explanations are clear, concise, and to the point, and geared toward people who want to push beyond the basics of MH. There really aren’t a lot of good MH tutorials that go into much depth, at least not that I could find.

I’ll make a tutorial as soon as I finish my project. Polycam has trouble with the eyeballs and their shape, but luckily MetaHuman can recreate the eyes pretty well. One important thing I forgot to add in my original description above (unfortunately I can’t edit it anymore) is to switch to Unlit mode when you set the curves. It’s much easier to see where to put the control points.

Thank you for sharing this insight, really looking forward to the tutorial! Cheers!

Really useful tips – thanks!

I discovered this after hitting a “curve outside of bounds” error or the like. For the first frame I turned almost everything off except the few features that Autofind gets right away, then created a new frame and placed the right ear. You can grab all the dots if you SHIFT+drag over them, even though it doesn’t look like anything is happening or the rectangle is off. Once I got one ear in place, the other ear magically snapped into place. It seems to get better at “guessing” after you add a few curves in the right place. But don’t try to do it all at once - do whatever is easiest to detect first, and the others will be easier to orient on later frames.

Sorry to resurrect an old thread, but is this still working for everyone? I’m on 5.2.1 and it gives me the three scanned meshes when I add the command and solve the identity but I don’t get a conformal mesh. :melting_face:

The conformal mesh is put in a directory now. Check your Saved folder for a new directory with the MetaHuman’s name. You’ll find the conformed mesh in there (as well as the .dna file).


Ah, great - good to know, thanks! I ended up downloading the previous engine version to get the conformal mesh, but this is handy for the future.

Well, since the thread has been resurrected, perhaps I can ask for some advice here as well.

I have used this export command successfully before, a little while back. I’m trying to do it again now, but I’m getting nothing from it. Since it seems to be working for others, I must be doing something wrong. Would someone be able to give precise instructions? Even if it’s just: open the Identity, run the command, press Solve, open folder X, done. At least then I’ll know I haven’t missed anything. I remember last time it was a little flaky to get working, but this time I’m really having no luck at all.

Thanks

Just trying to get some help on something, and based on this thread, you look like the right person to ask! Here’s my question…

I have a working test case, and I can map MetaHuman trackers to a normal face. However, I’m working with something more non-uniform, more like a skull, that doesn’t have the clear/normal facial features the auto-trackers pick up on, so I want to add them manually. (I realize I’ll have some tweaking to do on how the facial performance looks, since it’s not exactly uniform.) How do I move forward?

Thank you, any input or workflow ideas are super appreciated!

It’s not working for me either. Nothing happens in 5.4.