Hello,
I wanted to ask about the MetaHuman Facial Rig.
I created the facial rig and tech for Kingdom Come: Deliverance II. We had 110 blendshape expressions based on FACS (BrowRaise_L, mouthDimple_R, ...). The blendshapes were converted to skinning with SSDR (Smooth Skinning Decomposition with Rigid bones), using 350 facial joints directly on the face mesh. We then stored each expression inside a pose.XML per head: the translation and rotation of every facial joint for every expression.

To be able to play the same animations on multiple heads (with different proportions and unique expressions), we didn’t bake animation directly onto the face joints. Instead, we used pose driver joints to drive the facial joints, with every pose driver joint linked to an animation controller: 110 expressions/controllers = 110 pose joints. The pose joints drove all the skinned facial joints based on the pose.XML in CryEngine. It is a similar approach to having a .DNA format, although we did not use RBF. It was something cheaper: a hacked morph driver that, instead of driving every vertex, drives the facial joints.
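To make the pose-driver mechanism concrete, here is a minimal sketch of that kind of blend: per-expression joint deltas (as they might be parsed from a pose.XML) are scaled by controller values and accumulated onto the rest pose. All names and numbers are invented for illustration, rotations are omitted, and the real data layout will differ:

```python
# Per-expression joint translation deltas, e.g. parsed from a pose.XML:
# {expression_name: {joint_name: (tx, ty, tz)}}  (rotations omitted here)
pose_deltas = {
    "browRaise_L": {"brow_l_01": (0.0, 0.4, 0.1)},
    "mouthDimple_R": {"cheek_r_02": (0.2, 0.0, 0.05)},
}

rest_pose = {"brow_l_01": (0.0, 0.0, 0.0), "cheek_r_02": (0.0, 0.0, 0.0)}

def evaluate_face(controller_values):
    """Blend stored per-expression deltas onto the skinned facial joints.

    controller_values: {expression_name: weight in [0, 1]}, one value per
    animation controller / pose driver joint.
    """
    out = {joint: list(t) for joint, t in rest_pose.items()}
    for expr, weight in controller_values.items():
        for joint, delta in pose_deltas.get(expr, {}).items():
            for axis in range(3):
                out[joint][axis] += weight * delta[axis]
    return {joint: tuple(v) for joint, v in out.items()}

pose = evaluate_face({"browRaise_L": 0.5, "mouthDimple_R": 1.0})
# brow_l_01 -> (0.0, 0.2, 0.05); cheek_r_02 -> (0.2, 0.0, 0.05)
```

Because only the stored deltas differ per head, the same controller animation can be replayed on any head whose pose.XML was authored for its own proportions.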
I’ve already created a facial rig prototype in Maya with SSDR skinning, compatible with the MetaHuman Facial Rig GUI; that means I can use MetaHuman tech to generate facial animation, which I can then play on my custom rig. I have total control over the topology, and over which expressions and how many joints I want.
But I don’t have it connected to the Facial Control Rig inside UE, only in Maya. Would I be able to do that through the Pose Editor in the MetaHuman for Maya plugin? I am not sure that is the way, and I am also worried that I might run into legal issues, since I am basically recreating MetaHuman and building a lite version of it.
That is why I want to ask what the best way forward is: what are our options when it comes to optimization of MetaHuman? DNA Calibration was not updated to support MetaHuman characters from Unreal 5.6 / MetaHuman Creator, and it is said we should use the MetaHuman for Maya plugin instead. But the MetaHuman for Maya plugin does not support removing joints or expressions, or replacing higher LODs with lower LODs, does it?
And let’s say we might need 25 people in a scene with some facial animation; of course, not every character needs the highest LOD.
Thanks in advance
Hi Zdeněk,
I am not sure I fully understand what you are asking, so I will answer as best I can.
Pose Editor in MetaHuman for Maya enables editing of the MetaHuman body DNA file (RBF, SwingTwist, etc.). Depending on the edits made, the DNA may be used with either MetaHuman Creator or directly with RigLogic in Unreal Engine. I am not sure of the connection you are making here with the facial control rig in Unreal Engine.
You are correct that the tools in MetaHuman for Maya do not directly support the same operations as DNA Calibration; these tools are intended to ensure edits remain compatible with the current rig definition used by MetaHuman Creator and Animator.
However, the underlying Python API for DNA Calibration is still part of MetaHuman for Maya and can be accessed directly to make edits that are not supported by the user interface (such as joint removal). Although this is not yet documented, it should be as simple as updating the import statements to access the same API calls as before. The only exception is DNA Viewer, which has been replaced entirely by Character Assembler.
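As a rough illustration of what that direct API access might look like, here is a sketch in the style of the previously published DNA Calibration samples. The exact module paths inside MetaHuman for Maya may differ (those updated paths are precisely what is undocumented), so treat every name below as an assumption to verify against your installation:

```python
# Hedged sketch based on the public MetaHuman DNA Calibration samples;
# module paths inside MetaHuman for Maya may need updating.
try:
    import dna
    from dnacalib import DNACalibDNAReader, RemoveJointCommand
    HAVE_DNACALIB = True
except ImportError:  # library not on the Python path (e.g. outside Maya)
    HAVE_DNACALIB = False

def remove_joint(src_path, dst_path, joint_index):
    """Load a head DNA, remove one joint by index, and write the result."""
    stream = dna.FileStream(src_path, dna.FileStream.AccessMode_Read,
                            dna.FileStream.OpenMode_Binary)
    reader = dna.BinaryStreamReader(stream, dna.DataLayer_All)
    reader.read()

    # Wrap the reader so calibration commands can mutate the DNA in memory.
    calibrated = DNACalibDNAReader(reader)
    RemoveJointCommand(joint_index).run(calibrated)

    out = dna.FileStream(dst_path, dna.FileStream.AccessMode_Write,
                         dna.FileStream.OpenMode_Binary)
    writer = dna.BinaryStreamWriter(out)
    writer.setFrom(calibrated)
    writer.write()
```

In practice you would batch several commands (joint removal, LOD changes) over one loaded DNA before writing it back out.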
I hope that helps,
Mark.
Thank you for your answer.
So what do you think is the best approach for using the MetaHuman Animator while still having a game that runs efficiently with several characters on screen using facial animation?
What optimization steps would you recommend?
My idea was to use a full, unmodified MetaHuman head.dna for generating animation in MetaHuman Animator, and then play the generated animation on optimized head.dna in the game (with a reduced number of bones/expressions and lower LODs replacing the higher ones).
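As an illustration of why that split can work, the sketch below treats the solved animation as rig-agnostic control curves; the full and the optimized rig differ only in how many joints each control fans out to. All names and values here are invented:

```python
# One frame of solved facial animation, as control -> value.
solved_controls = {
    "CTRL_L_brow_raiseIn": 0.8,
    "CTRL_R_mouth_dimple": 0.3,
}

# Hypothetical control-to-joint mapping shared by both rigs.
mapping = {
    "CTRL_L_brow_raiseIn": ("brow_l_01", "brow_l_02"),
    "CTRL_R_mouth_dimple": ("cheek_r_01", "cheek_r_02"),
}

full_rig_joints = {"brow_l_01", "brow_l_02", "cheek_r_01", "cheek_r_02"}
optimized_rig_joints = {"brow_l_01", "cheek_r_01"}  # reduced joint set

def drive(rig_joints, controls, control_to_joints):
    """Fan control values out to whichever joints this rig still has."""
    frame = {}
    for ctrl, value in controls.items():
        for joint in control_to_joints.get(ctrl, ()):
            if joint in rig_joints:
                frame[joint] = value
    return frame

full = drive(full_rig_joints, solved_controls, mapping)       # drives 4 joints
lite = drive(optimized_rig_joints, solved_controls, mapping)  # drives 2 joints
```

The same control curves drive both rigs; the optimized head simply ignores controls and joints it no longer has, which is why solving on the full DNA and playing back on a reduced one is plausible as long as the control set stays standard.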
Yes, MetaHuman Animator will always solve animation to the standard set of facial controls, and so in that sense will always assume a MetaHuman character that has been created using MetaHuman Creator (and so a ‘full’ DNA for the head). If solving from a depth-based capture source, MetaHuman Animator also requires a MetaHuman Identity which has the same constraint.
Optimization may take a number of forms, and is heavily dependent on your specific context.
For general guidance, our tech dev rel team have compiled a set of animation performance tips and tricks that cover a number of common use cases we’ve seen when speaking to customers. For example, what we refer to as ‘cinematic’ MetaHuman characters come with 8 LODs, although an optimized variant is also available, while vertex animation is a popular technique we’re seeing for crowds numbering in the hundreds of low-detail characters. The same team also gave a talk at Unreal Fest Bali that is available on YouTube.
More specific rig optimization (such as removing bones) is not directly supported by MetaHuman Creator or the Expression Editor (in MetaHuman for Maya), and so would be something you could explore using the underlying DNA Calibration library (shipped as part of MetaHuman for Maya). We know other customers have had success here.
There is also an experimental plugin for Unreal Engine 5.7 called RigMapper that introduces new flexibility for cross-rig animation workflows, allowing motion to be transferred between different character setups. This could help when transferring animation from MetaHuman Animator to an optimized character; however, as an experimental plugin it is in the very early stages of development and not feature complete.
Mark.
Hello Mark,
Thanks for the insight!
I’m curious why a “full” DNA is needed for MetaHuman Animator if it solves to controls. We were planning to remove bones and re-solve the deformations to the remaining ones, but keep the controls as they are, so the controls and everything above them abstraction-wise would remain MetaHuman standard. Is that really a problem? If so, why?
Kuba
Yes, thinking about it some more, you are right. While the Identity (for depth-based animation) is a specific asset that is important to the solve, this is not the case for audio-driven, mono, or realtime animation. I haven’t tried this to be certain, however.
I’m also going to share this issue with a few other folks in the team to see if there is more we can add to the general guidance I have shared so far.
Mark.
Hi Jakub,
Have you profiled the original MH setup to see where the bottlenecks are with 25 MetaHumans on screen? What LOD are you expecting to use for your highest LOD and what’s your target platform and CPU budget for animation? I would usually recommend face LOD 2 as being the top in-game LOD unless the camera is getting very close to the face.
Much of the CPU time spent animating MetaHumans in their default setup goes to correctives and to synchronising the different mesh components, so some licensees have done things like merging clothing meshes onto the body and turning off correctives for background characters.
The evaluation of RigLogic on the face tends not to be the most expensive thing, even though there are a large number of bones at higher LODs, so I haven’t seen any other users trying to do what you’re doing. That’s not to say you’re going in the wrong direction, but it would be useful to see some captures of where the CPU time is being spent to make sure this path is going to give you the results you’re looking for.
I’m not very familiar with MetaHuman Animator myself, so I’ll try to find someone who can answer your questions about that.
Hope this helps,
Henry