How to add new Facial BlendShapes to base Meta Human so that all metahumans can have them

I’m working on an application to have MetaHumans perform sign language. I’m not sure if it’s the same everywhere, but in my country sign language includes many specific facial expressions that are not possible to reproduce with the MetaHumans’ current blendshapes.
I know it’s possible to alter a MetaHuman to include new blendshapes or customize the mesh as we see fit, but that would mean I’d need to do it to every character I export from the MetaHuman app.
My question, however, is this: is it possible to extend the base MetaHuman with a new blend shape that would then apply to all the different characters created with MH?

And would it be possible to extend the control rig to include this animation?

Bump.
I basically have the same question, also concerning sign language

You can create your own facial expressions using the Face Control Board, which moves the skinned joints, so it has nothing to do with blendshapes.
You can also customize your Facial Control Board by adding a control specific to that new facial expression you created, then animate that expression manually on top of the facial tracking data (if you have any).


Hey, thanks for the reply! :slight_smile:
I know about the Facial Control Board, but as I’m a programmer rather than an animator, I’m not too familiar with everything that goes on inside the Unreal Editor. We first used ARKit to capture facial expressions from a person signing, and we now compute facial animation data from videos, at first still using the 52 blendshapes defined in ARKit, and we have now added custom ones to our computation of the facial animation data. That’s why I’m not looking at the Facial Control Board, but at how to extend the blendshapes already used.

Got it. If you’re using ARKit, you can add the blend shape in whatever DCC you want.
Then, in the AnimBP, use the Modify Curve node and add the blend shape you need (the image shows the JawOpen blend shape as an example), promote its value to a variable, and make sure Instance Editable and Expose to Cinematics are enabled.
You may also need to create the same variable in your BP and cast to the ABP to drive the variable you created, so that it’s visible inside Sequencer and you can manually set the value for that specific blend shape on top of the ARKit facial tracking.
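If you ever move this off Blueprints, a rough C++ equivalent of that exposed variable could look like the sketch below. The class and property names are placeholders for illustration only, and the Interp specifier is my approximation of the Expose to Cinematics flag:

```cpp
// FaceSignAnimInstance.h -- hypothetical AnimInstance base class for the face AnimBP.
// The AnimGraph's Modify Curve node would read this variable and write it into the
// matching blend shape curve, just like the promoted pin variable does in Blueprint.
#pragma once

#include "CoreMinimal.h"
#include "Animation/AnimInstance.h"
#include "FaceSignAnimInstance.generated.h"

UCLASS()
class UFaceSignAnimInstance : public UAnimInstance
{
	GENERATED_BODY()

public:
	// EditAnywhere ~ Instance Editable, Interp ~ Expose to Cinematics (Sequencer).
	UPROPERTY(EditAnywhere, BlueprintReadWrite, Interp, Category = "SignLanguage")
	float TongueLeft = 0.0f;
};
```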


Thank you for the explanation and the picture! This is super helpful to know! In case you have a bit more time, I have two questions for clarity and one for advice.

  1. Does it matter where in the AnimBP I plug in the Modify Curve nodes? For now I just put the one I created right before Output Pose, just as shown in your picture.

  2. If I create the same variable in my metahuman BP (I hope I understood it correctly that you meant this by “your BP”), do I cast the variable to the ABP like this?


    Or how do you mean it?

  3. Because I only know that we used the Live Link app on the iPhone, and apart from our mocap suit I don’t know of anything else we use as a source for Live Link, do you have any suggestions on how to go about using a DCC where I can add the custom blend shapes?

And thanks again for your time and help! I very much appreciate it and don’t want to bother you too much

  1. It depends on what you need to do:
  • If you’re trying to trigger that blendshape on top of Live Link real-time facial tracking, then you first need to use the LiveLink node, then the Modify Curve node, then the Output Pose.
  • But if you just have the facial mocap data imported (I guess either via FBX or inside Sequencer?), you can connect the Modify Curve node directly to the Output Pose node.
  2. Your BP is the one where the skeletal mesh (and its assigned AnimBP) lives, so in your case it’s your Metahuman_BP.
    Regarding the casting, you should also have a variable (maybe also called TongueLeft) and cast from the Metahuman_BP Event Graph towards the Face_AnimBP, setting the TongueLeft variable, so that the Metahuman_BP TongueLeft variable drives the Face_AnimBP TongueLeft variable (the C++ sketch after this list shows the same pattern in code).
    In your picture you’re driving the TongueLeft from the Face_AnimBP, so you’re doing the opposite of what I described above.
  3. Blender, Maya, or any other DCC allows you to add the blend shape to your MetaHuman.
    Export the head skeletal mesh from Unreal as FBX, import it into your DCC, add/create the blendshape, give it a proper name, remember to add it to the blendshape list, then export the skeletal mesh and reimport it into Unreal.
    Be aware that the face skeletal mesh has 800+ blend shapes (in the LOD0/Cinematic version), so it’ll take quite a bit of time to import into your DCC. You can export the FBX without blend shapes, and I think you should, although I’m not sure whether something breaks or doesn’t work as intended after reimporting; if I remember correctly, Unreal keeps the blendshapes within the asset even though they’re not included in the updated FBX you exported, so give it a go and see if it works.
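To make the casting in point 2 a bit more concrete, here is a minimal C++ sketch of the same Metahuman_BP → Face_AnimBP pattern. It assumes the hypothetical UFaceSignAnimInstance class from the earlier sketch plus a FaceMesh component reference; those names are assumptions for illustration, not actual MetaHuman API:

```cpp
// MyMetaHumanActor.cpp -- hypothetical actor standing in for Metahuman_BP.
#include "MyMetaHumanActor.h"
#include "Components/SkeletalMeshComponent.h"
#include "FaceSignAnimInstance.h"

void AMyMetaHumanActor::ApplyTongueLeft(float NewValue)
{
	// FaceMesh is assumed to point at the MetaHuman "Face" skeletal mesh component,
	// declared as a UPROPERTY USkeletalMeshComponent* in the actor's header.
	if (UFaceSignAnimInstance* FaceAnim =
			Cast<UFaceSignAnimInstance>(FaceMesh->GetAnimInstance()))
	{
		// The actor's value drives the AnimBP variable, which the Modify Curve
		// node then writes into the TongueLeft curve.
		FaceAnim->TongueLeft = NewValue;
	}
}
```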

Sorry for the late reply and thank you for answering all my questions! I had to pause this project to work on another, but will try this approach soon :slight_smile: