Skeletal mesh and facial animations

Hello there!

I’m in the process of creating a character, and I was wondering about face rigging and animation.
What I would like to do is simple: most of my characters will be able to blink, talk, have basic expressions, and look angry or hurt. When they’re dead, their eyes are closed.

In UE4, it is good to have a main skeletal base for all characters. The problem is that I don’t know if the same applies to the face. Should I create the same “facial bones” for everyone (some eyelids, brows, and a jaw, for instance)? Or should I leave it empty, like the mannequin, and add features depending on the character?

I don’t know if I’m being clear right now. This is my first character creation for UE4, and I really want to make it right. I don’t even know if I should use actual bones, or if there is some kind of deformation process.

Thanks for reading!

You can do facial animation either by using a rig with facial bones or by using blendshapes.

If you choose to use facial bones, then yes, you should add them to the skeleton.
You could make different rigs for different characters, or make one shared skeleton with the same facial rig. It depends on how different your characters are from each other (i.e. you probably want a different setup if you have a human and a werewolf or a xenomorph alien, but across different humans it’s fine to use the same one).

Blendshapes are the most widely used method, but I personally prefer a facial rig.
I plan to allow customization of facial features, and that is best done with blendshapes. Once I get that going, I doubt that also using blendshapes for animation will play along well. Maybe it would work, but I don’t want to find out the hard way :slight_smile:
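For what it’s worth, the reason customization shapes and animation shapes can stack at all is that blendshapes are weighted additive deltas on the base mesh. A minimal sketch of that combination (the shape names, weights, and vertex numbers are all invented for illustration; this is not any engine’s actual data layout):

```python
# Sketch: every active blendshape adds its weighted per-vertex delta to the
# base mesh, so customization shapes and animation shapes stack additively.

def apply_shapes(base_verts, shapes, weights):
    """base_verts: list of [x, y, z]; shapes: {name: list of per-vertex deltas};
    weights: {name: 0..1}. Returns the deformed vertex positions."""
    out = [list(v) for v in base_verts]
    for name, delta in shapes.items():
        w = weights.get(name, 0.0)
        for v, d in zip(out, delta):
            for axis in range(3):
                v[axis] += w * d[axis]
    return out

base = [[0.0, 0.0, 0.0]]                  # one vertex, for brevity
shapes = {"wide_jaw": [[0.2, 0.0, 0.0]],  # "customization" shape
          "smile":    [[0.1, 0.3, 0.0]]}  # "animation" shape
print(apply_shapes(base, shapes, {"wide_jaw": 1.0, "smile": 0.5}))
```

The catch the poster is worried about: once both sets drive the same vertices, nothing keeps a customized face and an animation shape from fighting each other, since the deltas simply sum.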

I’m actually a little afraid of facial bones x) They work well for jaws and eye direction, but I have no idea how to “rig a smile”, for instance.
On the other hand, I was told that blendshapes could be somewhat costly. Is this true?

Regarding my characters, most of them will be humans, so they should all behave the same. I may have some smaller and/or chubbier figures though (not much more, but still). Can they still use the same skeleton?

Yes, blend shapes (morphs) can become very expensive. They can only be applied per character, and the set needs updating whenever additional shapes are added or required. That is to say, if the character needs to “act” in real time, the shapes have to be available as part of the character import. It’s fine for smallish tasks like eye blinks or mailbox character chatter. Another consideration is that, depending on how many characters you have, the memory footprint can get rather large.
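As a rough back-of-the-envelope check on that memory footprint, assuming each shape stores an uncompressed position delta and normal delta per vertex (real engines store sparse, compressed deltas, so treat this as an upper bound; the vertex and shape counts below are made up):

```python
# Rough upper-bound estimate of morph target memory: each shape stores a
# position delta (3 floats) plus a normal delta (3 floats) per vertex,
# and each unique character needs its own full set of shapes.

BYTES_PER_FLOAT = 4
FLOATS_PER_VERT = 6  # xyz position delta + xyz normal delta

def morph_memory_mb(num_verts, num_shapes, num_characters=1):
    per_shape = num_verts * FLOATS_PER_VERT * BYTES_PER_FLOAT
    return per_shape * num_shapes * num_characters / (1024 ** 2)

# A 15k-vert head with 50 shapes, across 10 unique characters:
print(round(morph_memory_mb(15_000, 50, 10), 1))  # ~171.7 MB upper bound
```

Engines prune vertices a shape doesn’t move, so real numbers are lower, but the “per shape × per character” multiplication is the part that grows on you.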

Still to be proven.

Traditionally, morph targets do not perform well in a game and have special requirements in UE4, I assume to maintain efficiency. The material setup has to know it’s being applied to a morph object or you will generate rendering errors, and in most engines I’m familiar with a morph object does not get hardware rendered (I don’t know whether this is true in UE4).

Overall morphs are very messy and difficult to animate.

Marker-based rigging (clusters, facial bones) uses the same animation pipeline as a run cycle, adapts to other input sources like markerless tracking or voice drivers, and can be driven procedurally just like any other bone in the rig. Since it is just translation data, you can apply the same rig naming convention to all of your characters and reuse the same dialogue and expression tracks, and since it’s translation data, it is hardware rendered.
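To illustrate why a shared naming convention makes those tracks reusable, here is a minimal sketch of applying one shared facial track to any rig that uses the same bone names (the bone names, rest positions, and offsets are invented for illustration):

```python
# Sketch: a facial track is just per-bone translation offsets keyed by name,
# so applying it to another character is a dictionary lookup by bone name.

def apply_track(character_bones, track_frame):
    """character_bones: {bone_name: [x, y, z]} rest translations (mutated).
    track_frame: {bone_name: [dx, dy, dz]} offsets for one frame."""
    for bone, offset in track_frame.items():
        if bone in character_bones:  # skip bones this character doesn't have
            rest = character_bones[bone]
            character_bones[bone] = [r + o for r, o in zip(rest, offset)]

hero  = {"jaw": [0.0, 0.0, 0.0], "lip_corner_L": [1.0, 0.0, 0.0]}
smile = {"jaw": [0.0, -0.2, 0.0], "lip_corner_L": [0.3, 0.4, 0.0]}

apply_track(hero, smile)
print(hero["lip_corner_L"])  # [1.3, 0.4, 0.0]
```

The same `smile` frame could be applied to any other character dict that follows the naming convention, which is the point being made about sharing dialogue and expression tracks.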

To put a number on it, I would say clusters outperform morphing 10 to 1 in all areas, and because it’s easy to set up, morphing is a trap: it takes a while before things get really messy. :wink:

Actually, blend shapes in UE4 are no longer an issue in terms of memory usage and you can use them however you want… A Boy and His Kite is the perfect example: if I’m not wrong, 3Lateral created a blend shape rig with hundreds of blend shapes, and Epic itself encourages developers to use blend shapes in their rigs…

Blend shapes are very easy to animate, and you can animate them directly inside UE4, which is something you can’t do with a joint-based facial rig… Also, don’t forget you can trigger normal maps dynamically, which is a built-in function inside UE4 :wink:

Anyway, it’s up to you which one to choose… If you like soft-selection modeling, go with blend shapes; if you’re good at skinning, try bones…
Or you can use bones with blend shapes for correctives, and you’re done.

Weeeell, it looks like I’m getting mixed reviews everywhere I ask ^^
Apparently morph targets are great for animation work. But since I’m making a video game and working alone, I won’t be able to recreate every facial expression for each of my generic characters. Instead, I’ll probably use face rigging with 3ds Max’s CAT tool. Thanks for taking the time to give your opinions!

I have another question for you, actually. Let’s say I have two human characters: the first is a girl, small, with large hips and a round face. The second is a taller man with broad shoulders and a square jaw. No additional joints, just different bone scaling and positions.
Can they both use the same base skeleton? With the same face bones? Or are size and proportions a problem?

In other words : could Trip and Monkey from Enslaved use the same skeleton base?

You can use a trick in order to accomplish that :wink:

Gears of War 3 2011 GDC paper

Old, but still pretty good if you want to reuse the skinning and joint information across multiple characters :slight_smile:

If you want all of your characters to share the same rig, then the base of each character should fit on that rig to start with. You can then create a component character for the 6-foot-3 character and another for the 5-foot-8 one, and scale the relative size using whatever you set up as the root object in the component Character BP. Straight answer: yes, you can use one rig.

Also, facial expressions are not the same as lip sync, and they only require a single pose frame, so you could make a set for each possible expression and add them as additives on a per-bone layer.

Quick test done using clusters and a single layer.

Still need to add the expression and eye layers, which I’m thinking of doing procedurally.

P.S. Since a cluster is just another bone it can be animated just like any other joint in Persona.

Nicolas3D > Thanks for the link! I’ll make sure to check this all.

FrankieV > That looks great! This is the kind of result I would like to have.
I’m still not really comfortable with the whole layer thing though. Is that the thing that allows the character to play several animations at the same time? I need to learn how to set that up.

If I understand correctly, you rigged the face on some cluster bones. But then, did you create some kind of looping animation? Or does it move randomly when sound is played?

P.S.: Also, how did you make only one layer? The minimum I find in most lip sync tutorials is three: one for the opened/closed jaw, one for widened lips, and one for narrowed lips ^^

Well, the how-to first starts with the Daz3D Genesis 3 alpha character model. What it can do would turn this post into a book, but bottom line, the framework would work with any character design we want to come up with, and using Daz Studio is ideal for our current content development pipeline.

The main feature of interest is that all of our player designs share the same identical rigging and naming convention out of the box, so our content creators can focus on character design rather than technical requirements, which is ideal for a small development team like ours. The two key features, though, are that it comes with as many morph targets as you need, as well as facial clusters, so you can use one, the other, or a combination of both. This works for us as it keeps the entire process contained within a single channel and it only has to be done once; by using clusters, processing and authoring facial animations is no more difficult than, say, a run or walk cycle.

For the test I first exported the base model and rig to 3ds Max to fix the spiking bug, then sent it over to MotionBuilder. To get the shapes I needed, I harvested them directly from Daz Studio as a take and snapped each frame to a character face. Once I had the base shapes, I wired the facial clusters to the voice device, set the input to a WAV file, and off it went. I did not do expressions, as the process is identical and I have not yet figured out how many layers I would need, or whether I just want the character to talk or to create a performance (to act); I can add as many layers as I need, or just the one.

I’ll get to layering in a moment, but the real question is not whether to use morphs or clusters; it’s what tools you have on hand (or can get inexpensively) to author animation data that can be imported into Unreal 4. With clusters you can author and add facial animations just as you would any other form of animation data. For morphs you would have to add (append) the blend shape additions to a matched character, as blend shapes only work per character. You could not take the boy from the Kite demo, for example, and apply his animation data to a different character that does not have the same blend shapes. With clusters you can, so be kind to your animator and use clusters: although the test above took just as much time to set up as it would have with morphs, it is a process that only needs to be done once and can then be applied to as many characters as you wish.

So in this case my tool of choice for authoring animation is MotionBuilder, and I could do it all on one layer or add as many as needed to account for a unique character.


Layering as a process is no more difficult than, say, layering in Photoshop, in that the progression is from the bottom up. In the case of Unreal 4, if you can do something simple like an aim offset, then you can do layering, as it’s just another additive; the animation moves from the bottom up, the same way layers do for an image. Super simple in concept: all you’re telling Unreal 4 is to take the transform info from the animation set, apply it from bone X up, and ignore the bones below.

So how many layers you would need is subjective, as cluster data does not have the same requirements as setting up additive morph targets. You would “have” to have the layers you suggested because of the additive nature of morphing. With clusters it’s just transform data that can be applied as an absolute or as an additive, so how it’s applied depends on what would be easier, with the ideal of only having to do the setup once.
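The “apply it from bone X up and ignore the bones below” idea can be sketched roughly like this (the hierarchy, bone names, and values are invented; this is not UE4’s actual blend code, just the concept):

```python
# Sketch of a per-bone additive layer: start from the base pose, and for
# every bone at or below a chosen branch point in the hierarchy (i.e. the
# branch bone and its descendants), add the layer's offsets. All other
# bones keep their base values.

HIERARCHY = {  # child -> parent
    "head": "neck", "neck": "spine", "spine": None,
    "jaw": "head", "brow_L": "head",
}

def is_descendant_or_self(bone, root):
    while bone is not None:
        if bone == root:
            return True
        bone = HIERARCHY.get(bone)
    return False

def blend_per_bone(base_pose, additive, from_bone):
    """base_pose/additive: {bone: translation value}. The additive layer
    only affects `from_bone` and everything hanging off it."""
    out = dict(base_pose)
    for bone, delta in additive.items():
        if is_descendant_or_self(bone, from_bone):
            out[bone] = out.get(bone, 0.0) + delta
    return out

base = {"spine": 0.0, "head": 0.0, "jaw": 0.0}
talk = {"spine": 9.9, "jaw": 0.5}  # layer keys spine too, but it's below the branch
print(blend_per_bone(base, talk, "head"))  # jaw gets +0.5, spine untouched
```

This is the same idea as UE4’s Layered Blend per Bone node: one branch filter, and everything outside the branch falls through to the base animation.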

Don’t just focus on what’s easier to rig; think of the animators. We lost so much time in our studio to bad facial controls (we tried both blends and bones).
Whatever you choose, animators should be able to prepare all the main letters (visemes) and load them into an animation with one click.

This is old, but I think it’s still an issue. I have worked out a way to use facial bone positions as easily as the Morpher in 3ds Max:

1. Create all your “bone morphs” on different keyframes, e.g. 0 = base, 1 = AA, 2 = EE, 10 = Smile, etc.
2. Clone the face bones, then delete all keyframes on the original face bones.
3. Use a plugin or script that works with poses, like Pose-o-Matic. It remembers bone positions for Smile or AA and turns them into an animatable slider, exactly like the Morpher but with lightweight bones and their relative positions instead of morphs! I know, right?!
4. Apply the pose script (or Pose-o-Matic) to the original head.
5. Go to frame 1, tell Pose-o-Matic you want to save a pose called AA, and select all the original face bones.
6. Using a simple alignment script (turned into a button), tell every original face bone to align with its animated clone (the cloned facial bones each end in 001). They will pop to the AA viseme positions.
7. After you click “save” in Pose-o-Matic, you now have a slider that moves all the original face bones to the AA positions/shape, but with true bone placement instead of a heavy Morpher. Once they are all aligned and set up, you can animate just as easily as with morphs.
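At its core, that kind of pose slider is just a linear blend of each bone from its base position toward the saved pose, the same math as a morph slider but on bone translations instead of vertex deltas. A hypothetical sketch (bone names and positions are made up):

```python
# Sketch of a "bone morph" slider: blend each face bone from its base
# position toward a saved pose (e.g. the AA viseme) by a 0..1 weight.

def pose_slider(base_pose, target_pose, weight):
    """Linear per-bone blend: weight 0 gives the base pose, 1 the target."""
    return {
        bone: [b + weight * (t - b) for b, t in zip(base, target_pose[bone])]
        for bone, base in base_pose.items()
    }

base = {"jaw": [0.0, 0.0, 0.0], "lip_corner_L": [1.0, 0.0, 0.0]}
aa   = {"jaw": [0.0, -1.0, 0.0], "lip_corner_L": [1.0, 0.2, 0.0]}

half_open = pose_slider(base, aa, 0.5)
print(half_open["jaw"])  # [0.0, -0.5, 0.0]
```

Because the output is plain bone transforms, the result exports through the normal skeletal animation pipeline, which is the whole appeal over a per-character morph setup.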
Also, I don’t think anyone has mentioned the third option of using a streaming point cache file from your animation package… that allows both morphs and bones in the source. And a possible fourth option: in 3ds Max you can drive a very complex mesh with a very simple one using Skin Wrap, so perhaps you could use morph targets on a simple mesh, then use it both as the high-res mesh’s LOD2 and as its animated driver! The small-footprint mesh would serve double duty and could potentially cut the number of animated verts for the higher-res mesh by something like 80%! ;")