What skeleton does MetaHuman use?

Hello,

I haven’t seen a straightforward answer to this. Which skeleton, precisely, does MetaHuman use?

I thought it used Manny/Quinn, but when I investigated the control rig, I found only three spine bones, as opposed to the current default set of five.

Clarity would be appreciated.

The “base” skeleton is very similar to Manny, but you have 5 spine joints instead of 3.
The main difference is the corrective joints on the torso, shoulders, elbows, hands, fingers, thighs, calves and feet… and also the hundreds of joints on the face rig.

You can just download a MetaHuman, select the body ( or whatever other SK you want ), then open the SK and open its skeleton, and you’ll see the entire structure.
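If you prefer to check through code, here’s a quick editor Python sketch ( assuming a MetaHuman actor is selected in the level, and the standard spine_* joint naming — both assumptions, adjust for your setup ) that dumps the bone list:

```python
import unreal

# Hedged sketch (editor Python): with a MetaHuman selected in the level,
# list the bones of its first SkeletalMeshComponent and count the spine
# joints. The "spine" prefix is an assumption based on UE naming convention.
actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]
comp = actor.get_components_by_class(unreal.SkeletalMeshComponent)[0]
names = [str(comp.get_bone_name(i)) for i in range(comp.get_num_bones())]
print(len(names), "bones total")
print([n for n in names if n.startswith("spine")])  # expect spine_01..spine_05
```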

2 Likes

Their own thing, with way more bones than anyone would ever need even for 8k cinematic work.

Granted, skeletons generally do not matter much to animations or mocap or much of anything… but MetaHumans take everything that is best practice and throw it in the trash bin… and have since inception, I’d say.

If you need to develop your own thing, do just that, unhindered by what Epic or MetaHumans do, which at this point is just truly bad/wrong for all purposes.
And after you have your thing working, you can set up some retargeting parameters to transfer things to/from MetaHumans as needed. (Assuming it’s animations, but it can even work to change skeletons later on…)

2 Likes

@Enter_Reality @MostHost_LA

First off, thank you so much for answering my question. It saves hours.

Second, I actually saw that there is a “body skeleton” with 5 spine bones, and then a “mesh” skeleton with 3. Not sure what the “mesh” does. Maybe it’s redundant or for debug? Or an artifact from the class default settings?

Third, my traditional knowledge of animating skeletons/meshes doesn’t seem to apply to MetaHumans (LeaderPose to combine head + body, AnimBP for passive animations, Sequencer + Control Rig poses for facial animations). Is this because I’m trying to use them in 5.1 and there’s a compatibility issue, or is there a different methodology for animating them? If so, could you point me to a good, comprehensive full-body + head MetaHuman animation tutorial? (I’ve searched and watched for hours, but none of the ones I’ve seen seem to work or be very good… again, maybe 5.1 is the reason.)

I don’t think there is a direct “free” pathway here.

As a studio, we spent a lot of money on several Rokoko suits, but as much as I would like to, there is just no way I could ever recommend anyone spend that kind of money (even more now that they’ve sent out a mailer saying prices are increasing) for such a low-quality end result.

An indie studio is definitely 100% better off spending the 2 or 3k needed to book a proper recording session at a proper mocap studio.
In fact, they probably end up saving time and money, and getting a much better end result.
Mocap studios have 2 things you don’t: experts with hands-on experience who can fix up most things just by looking at a rig, and the proper tools needed to get you the final FBX…

Back to doing it yourself…
We record performances in engine, extract to Blender (mainly because of BoneBreaker, an addon we created specifically to work with Unreal), edit and fix them up, and then import them into the final project.

This is done via Rokoko Live and the Live Link Face app, both of which use the Live Link plugin to communicate with the engine over the network.

There is a bit of delay between the actor’s performance and the recording on the Unreal side.

You have to have your assets ready to go to film properly. In particular, the actor’s face morph targets need to be created specifically to match the actor’s maximum range of motion.

The process for that is rather grueling, about 4 days of work between rigging the face and creating the poses, assuming you remembered to have the actor do all the needed poses so that you can mimic the final poses.

Tweaking facial expressions after the fact is generally not a good idea.
The best you can do is make them more fluid by removing keyframes between the minimum and maximum values so as to have really clean exports (since they are curves, and our ue4Curve tool still works, you can re-export from Blender without having them keyframed at every frame, providing a generally better end result).

Tweaking bone positions after the fact is rather standard: you rig the animation (again, we use our tools for it) and you mess with it to your heart’s content.
This too generally just involves removing hard keyframes to improve fluidity.
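As a rough illustration of that keyframe-thinning idea (a minimal sketch, not our actual tooling), something like this in Blender keeps only the endpoints and local extrema of each curve, assuming the recorded action is on the active object:

```python
import bpy

# Minimal sketch: on the active object's action, drop every key that sits
# between its neighbours, keeping endpoints and local extrema. Illustrative
# only; a real cleanup pass would be more careful.
action = bpy.context.object.animation_data.action
for fc in action.fcurves:
    kps = fc.keyframe_points
    # Collect frames of non-extreme keys first, then remove them, since
    # removing invalidates live keyframe references.
    doomed = [
        kps[i].co.x
        for i in range(1, len(kps) - 1)
        if min(kps[i - 1].co.y, kps[i + 1].co.y)
        <= kps[i].co.y
        <= max(kps[i - 1].co.y, kps[i + 1].co.y)
    ]
    for frame in doomed:
        for kp in kps:
            if kp.co.x == frame:
                kps.remove(kp)
                break
    fc.update()
```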

Definitely a lot of tweaking for things that the Rokoko suit fails at: touching yourself, touching your head/face, touching a specific target, being in a specific x/y/z location for syncs, finger positions, awkward bone rotations due to the sensors sliding around during the takes… etc.

Yes, in theory you can bring it all back in engine, apply it to a MetaHuman, and use the Control Rig stuff to tweak parts. I surely never even thought about doing it, but I do suppose that’s an option (if you want to ruin a good thing you put hours into :P).

That’s all I’ve got.
TL;DR:
Spend $2k on a mocap studio, save yourself 2 years of work.

PS: didn’t bother proofreading this… maybe later I will…

4 Likes

Interesting, but was this for me? :upside_down_face:

2 Likes

To reply to your questions:
To update my previous post, the skeleton of a MetaHuman does have 5 spine bones, plus all the corrective joints, but not the facial rig, which belongs to another skeleton.

The Mesh is the skinned character that uses the skeletal hierarchy from the Skeleton asset.
If you open up all the different body parts, you’ll see that each one has the same base skeleton, so the MetaHuman is built from different body parts that you can also change if you want, which is quite convenient: if you have a character where all the body parts ( face, boots, pants, and so on ) are a single mesh, it’s quite annoying to update.

Open a MetaHuman base BP ( the one that comes with the downloaded MetaHuman ) and look at the “body” skeletal mesh.
This is the “body” skeletal mesh ( note the top right selection ), and as you can see there are just the hands and the calves.


This is the Skeleton ( note the top right selection ), where I have a preview mesh called “f_med_nrw_body”.

If you’re not seeing 5 spine joints, I’m not sure if you’re looking at the Mannequin or something else.

Regarding the facial rig, the face has its own skeletal hierarchy, which does share joints with the body rig, so using the face as a child of the body, together with the Copy Pose from Mesh node inside the AnimBP, allows the face’s neck/head to inherit the animation from the body while the facial rig does its own thing.


The combining of the head+body is already done inside the AnimBP of the face, so if you decide to record mocap directly inside Unreal, you can use Take Recorder, and it’ll take care of recording your performance as a simple animation clip.
If you also want to record facial animation ( with an iPhone or Faceware/Facegood ), you can do that together with the body; the facial animation will be saved separately from the body animation, since it saves just the face joint animation.

You can then use Control Rig to bake the animation that you previously recorded onto the rig itself. It is a very simple process, so you could have two separate animations, one for the body and one for the face, both baked onto the controls of the Control Rig.

Honestly it would be better to tweak/clean up the mocap recording using an external DCC ( Maya, MoBu, whatever ), since as of now the tools available within Control Rig are not ideal for cleanup; same thing for the facial animation.

I do use Unreal as my “all in one” mocap recording tool, but I then export all the animations to MoBu, clean them up, and then reimport them into Unreal, using Sequencer to add the anim clips, audio and camera movement.

2 Likes

Thank you for the clarification.

I can confirm you’re spot on… the only issue I have is that during simulation the facial blueprint does not animate. It does in the editor, but the moment I simulate, no information is passed.

I bit the bullet, reinstalled 5.0.3, and learned that MetaHumans are not compatible with 5.1.

Ugh.

Odd issue.

However, during simulation or anything else really, facial capture is usually done with blend shapes nowadays. Not bones. So…
Are you expecting the bones to move? If so, that could be the issue, depending on what it is you are using.

Generally speaking, an animation is supposed to include ALL the data, so that a skeleton can play it and animate whatever parts. So in a setup like @Enter_Reality shared, you would also need to manually merge the head animation onto the base skeleton.
However, again, the bones don’t really matter unless you are using the bones for facial animation. The curves for facial mocap are probably recorded within the animation to begin with (depends on the setup really; on a modular character that’s not a given).
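If you want to check what your own take actually contains, a hedged editor Python sketch along these lines would list the float curves stored in an animation asset (the asset path is a placeholder, and you should verify these library calls against your engine version):

```python
import unreal

# Hedged sketch: inspect whether a recorded take carries facial curves
# alongside the bone tracks. "/Game/Takes/MyFaceTake_Anim" is a placeholder.
seq = unreal.EditorAssetLibrary.load_asset("/Game/Takes/MyFaceTake_Anim")
curve_names = unreal.AnimationLibrary.get_animation_curve_names(
    seq, unreal.RawCurveTrackTypes.RCT_FLOAT)
print(curve_names)  # facial curves (e.g. jawOpen, eyeBlinkLeft) if present
```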

1 Like

@Enter_Reality @MostHost_LA

Rockstars you two.

I got the character working in 5.0.3, so here’s hoping they update 5.1 ASAP. (I had to disable the IK Rig node in the AnimBP, but I’ll learn how to implement IK bones in the future.)

That said, is retargeting the only way around the VERY EXPENSIVE default MetaHuman meshes, or is there a way to cull the resources that they use? I would just retarget, but none of my current skeletons have face rigs, and I’m no modeler and have no plans to be.

(This is assuming it’s the meshes and not the myriad bones that cause the performance drop).

Inside the MetaHuman_BP there’s already a LOD manager ( called LODSync ) that takes care of handling polycount and shader complexity, and also manages the joint reduction.
Consider that for a project I did a while ago, I had a MetaHuman running on the Quest 2, so you can definitely get a performance boost if you know which settings to tweak.

You can also force the LOD manually if you need to ( by default the LODs are set automatically based on distance ), so that you gain performance in the editor.
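As a hedged sketch of that manual override in editor Python ( the actor-label check is an assumption, match it to your own naming ):

```python
import unreal

# Force every skeletal mesh part of the MetaHuman actors in the level to a
# cheap LOD while working in the editor.
for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    if "MetaHuman" not in actor.get_actor_label():
        continue
    for comp in actor.get_components_by_class(unreal.SkeletalMeshComponent):
        # SetForcedLOD is 1-based: 4 forces LOD 3; 0 restores automatic LODs
        comp.set_forced_lod(4)
```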

1 Like

That’s a wrong assumption: the main performance loss is the abnormal bone count. Tri count, materials used, etc. are essentially peanuts to a circus…

Peanuts? Not at all.

The LODSync already takes care of reducing the bone count automatically, and also drops some of the more “heavy” stuff, such as correctives ( completely useless for game characters, even in cutscenes ), hair strands ( also useless for game characters ), skin influences, facial hair, and so on, so unless you use LOD0, you already have a performance gain.

@Leomerya12 here is a comprehensive list of what each LOD does.

MetaHuman Scalability Specs

So, long story short: manually set LOD 3/4/5 if you want in-editor performance, then switch to LOD 0/1 if needed while rendering.

If performance is still bad, go back to the good old Mannequin setup, and if needed use a facial rig with blend shapes such as the one required for ARKit.
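For context on what “blend shape based” means in practice, here’s an illustrative Python sketch ( names and values are examples, not any specific app’s API ): ARKit-style capture drives named morph targets on the face mesh rather than bones.

```python
import unreal

# Illustrative only: an ARKit-style face rig is driven by ~52 named blend
# shapes. Given a face SkeletalMeshComponent, one capture frame is just a
# dict of curve weights; names and weights below are example values.
def apply_arkit_frame(face_comp, frame):
    for curve_name, weight in frame.items():
        face_comp.set_morph_target(curve_name, weight)

# e.g. apply_arkit_frame(face, {"jawOpen": 0.6, "eyeBlinkLeft": 1.0})
```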

2 Likes

Mesh, etc., compared to just the abnormal skeleton? That’s absolutely peanuts. Basically, you can test this by just converting the mesh(es) to static and throwing them in the level.
Sure, the skeletal mesh itself is slightly more costly to begin with (even with a single bone) than a static mesh, but doing this gives you a quick idea of just how much damage to performance the basic MetaHuman skeleton causes in comparison.

Glad that they seem to have some sort of setup with some modicum of performance consideration (via LOD)… but realistically, MetaHumans are always going to be junk…

1 Like

@Enter_Reality @MostHost_LA , your disagreement is very informative. Thank you! (I’m being cheeky, but also sincere.)

When MetaHumans were released I was still using my old workstation with a 1070, and the first time I dropped a MetaHuman in the scene I was shocked by the terrible performance; at the time the documentation was really outdated, so it wasn’t clear what the main issue was, or if they were designed that way.

I think that initially Epic promoted them way too much on the “look how detailed and amazing these characters are” side, rather than explaining that a MetaHuman can really be used as a template for a cinematic shot, a game character or a mobile character. Of course, still nowadays you see everyone doing closeup shots using MetaHumans, because they look very good, and maybe 2 people using them as game characters.

There’s just one video from Epic explaining the LOD setup, and I guess that many people just never really looked at it, mostly because as soon as you drop a MetaHuman into the scene, your workstation starts to melt.

Having said that, 3Lateral did an amazing job on the facial rig, and the fact that it scales/adapts itself across characters with different facial structures is simply amazing.
Also, the corrective joints on the body and the Pose Driver node established a more streamlined solution for body deformation, since clipping and bad deformations were usually an issue that most developers simply didn’t care enough to fix.

Before saying that this tech is junk, I would just take my time and give it an in-depth look at what something like this can be used for… oh, and it’s also free to use, by the way.

1 Like

Ugh, it’s going to take years for tutorials/workflows to come out on how to use these characters effectively.

I’ve Googled “Metahuman optimization” and nothing really comes up. If you know of anything, I’m all ears.

(The LODs help, but the resource pull is still pretty aggressive.)

High fidelity characters and performance are the last real hurdle I want to master before actual game dev.

Some studios are using them as base characters, mostly using the LOD setup to get them to work efficiently, performance-wise.
You could also do that, but you need to understand the under-the-hood setup; otherwise you have no idea what’s going on.

Thing is, is it critical for you to use a MetaHuman? Or better yet, what are the features that you would like to have from a MetaHuman on your character?

Because after you set your own rules for characters, you can simply get what you need from the available resources.

For example, you don’t need the super shiny, overcomplex facial rig, but you just want a simple blend-shape-based rig, and you eventually also want to use lipsync?
You can extract facial shapes in Maya by literally creating the pose, duplicating the face, deleting its history, and giving it a proper name; that’s it ( see the sketch below ).
Do you need all the various PBR maps from the facial rig? Or do you just want diffuse/normal?
They’re already available for you to use, and in any case you can just use a good old 512/1024 map if you don’t need anything super detailed.
Do you need proper body deformation? If yes, the current skeleton works fine; otherwise just delete all the corrective joints and update the skinning on the mesh.
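A minimal Maya Python sketch of that pose → duplicate → clean extraction ( mesh and shape names are placeholders ):

```python
import maya.cmds as cmds

# Sketch: with the face posed into an expression, duplicate it and delete
# its history so the deformed result is baked in. Run once per expression.
def extract_shape(base_mesh="face_geo", shape_name="jawOpen"):
    dup = cmds.duplicate(base_mesh, name=shape_name)[0]
    cmds.delete(dup, constructionHistory=True)  # freeze the deformed pose
    return dup
```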

Worried about too many characters on screen? Use Vertex Animation: in the Matrix demo they show a setup where they switch in realtime between skeletal characters and Vertex Animation characters.

As I said before, you can use a MetaHuman as a game character… but do you really need one?

2 Likes

I don’t intend to use the actual MetaHuman. For the purpose of learning (mocap, rigs, etc.), I’m using them to speed up the indoctrination process because they’re already set up.

It’s really hard to learn, and to keep wanting to learn, without seeing immediate results.

But again, the MAIN reason is the facial rig. I can animate the body day and night, but mocapping with dialogue is my new challenge.

My intention was to, at some point, buy a base facial rig that doesn’t have the absurd amount of detail that MetaHumans have.

I want to produce a game this century, so using as many reasonable presets as possible to expedite the process is my goal. I started learning code and programming the moment 5.0 came out, so I’m still fairly new… but I’ve put in a tremendous amount of hours.