Facial Animation via Faceware Tech - blend shapes, Alembic, or bones?

Hi guys, we are slowly dipping our toes into the UE4 environment, coming from a film background. Our current project is to take a character into UE4 and animate him via an Xsens mocap suit and a Faceware Tech facial rig.
We have the character set up as multiple meshes such as body, head, hair, eyeballs, eyebrows… and are using the Advanced Skeleton rig for body and face. It has a function to convert the body part of the rig to a UE skeleton automatically, which we got to work nicely. There is also an option to convert the face rig to a “simple rig via bones” for UE (since FBX does not support constraint-driven rigs).

Here are my questions:
If I want to record cinematic-quality face animation, but also later be set up for tons of voice tracks in gameplay mode, what's the best pipeline?

*Do I have to go Alembic so I can use blend shapes?
*Do I create the simple bone rig, and if so, how do I drive it?

*For example, if I want to use standard animations (either from the marketplace or my Xsens) and then use a separate FBX file to drive the bones for the dialogue and face, do these two animation files have to be combined into one animation first, or can they be separately attached and driven by game logic onto one skeletal mesh?
*Could I use AI to have my character run around and then attach an FBX animation to the head only?
*What is the preferred rig? (We are playing with Advanced Skeleton because it can convert to MotionBuilder for mocap sessions and also hand animation in Maya.)

I am a bit in limbo about which pipeline to put our cards on. Would love some advice.

P.S. We will also want to do the real-time UE setup eventually, using the IKinema and Faceware Live plugins.

I hope I am not too confusing; please let me know if I forgot to mention anything needed.

Well, character development within the UE4 space depends more on the ability to author animations in a manner that can be parsed by UE4 as an input than on the feature sets currently available in-engine to drive the kind of performance you are looking for. In this case UE4 can handle animated morph targets, shaping via bones (clusters), or a combination of both.
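For a rough idea of what the morph side looks like at the script level, here is a minimal sketch using UE4's editor Python (assuming the Python and Editor Scripting Utilities plugins are enabled; the actor and morph names are placeholders):

```python
import unreal

# Grab the skeletal mesh component of the selected actor (placeholder setup).
actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]
mesh_comp = actor.get_component_by_class(unreal.SkeletalMeshComponent)

# Morph target route: weight a named blend shape directly (0.0 to 1.0).
mesh_comp.set_morph_target("jaw_open", 0.6)

# The bone/cluster route lives in the Animation Blueprint instead
# (e.g. a "Layered blend per bone" node), so nothing is set here per frame.
```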

In my opinion.

Morphing is still the best way to get the level of performance and fidelity necessary to have a character “act” in a way equal to the performance of the artist as presented. Let's say you need to capture the performance of a Sigourney Weaver as a ten-foot-tall blue character: the best way is via morph shaping.

The negative is that the unique requirements can make the overall player-character development pipeline rather expensive, as you need to develop working assets per single hero-type character, and in general the animation data can only be applied to, and drive, a single target actor.

Bones or cluster shaping, on the other hand, is much easier to work with as far as authoring goes, and the animation data as authored can be reused across a lot more characters, but it does require a skilled animator to work harder to get the kind of performance out of the mesh that you would get directly from the performance capture source.

Here is a test I did in MotionBuilder using voice capture.

P.S. NOT WORK SAFE :wink:

The result is I was able to tweak the output to fit the dialogue, but it is still not even close to representing a “performance” by an Amy Poehler. :wink:

Experience-wise, though, the problem was solved less by just plugging in some technology and more by starting with a functional framework that everything which comes afterwards can and would work off of, whether we decided to work with morphing or clusters. The solution to our problem was going with the Genesis 3 product, available via Daz3d, which filled the need for a character framework that supports both the performance requirements and the best practices used for a video game.

The result of our choice is that we do not need to decide between morphing and clusters, as the ability to use one, the other, or both is already built into the asset.

Clusters can be applied by layering as part of the animation Blueprint, so since the example above is via cluster shaping, it does not matter whether it is applied as a single animation solution or via a per-bone layer. Morphing, being unique data, can only be recorded to the available target, and that target has to be available as part of the imported package. For perspective, cluster shaping is animated in the same manner as you would animate a run cycle, as transform data, whereas morphing requires the percentage change from one shape to another.

Sure, using clusters you can layer per bone so that the animation is only applied from the selected bone up. Using morphs, only the available shapes will animate. Opinion-wise, though, I would author facial animations off of the bind pose and use Story to test for performance.
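To make the data difference concrete: a bone/cluster track is transform keys, while a morph "track" in an imported AnimSequence is just a named float curve holding that percentage. A small sketch in UE4's editor Python (the asset path and curve name are placeholders, and the skeletal mesh must have a morph target of the same name for the curve to drive anything):

```python
import unreal

# Placeholder asset path to an imported facial animation sequence.
anim = unreal.load_asset("/Game/Anims/Face_Dialogue")

# Morph animation is a float curve, keyed as a percentage of the target shape.
unreal.AnimationLibrary.add_curve(anim, "browRaise",
                                  unreal.RawCurveTrackTypes.RCT_FLOAT)
unreal.AnimationLibrary.add_float_curve_key(anim, "browRaise", 0.0, 0.0)
unreal.AnimationLibrary.add_float_curve_key(anim, "browRaise", 0.5, 1.0)
```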

Check out the Genesis 3 solution. It might not be what you need, but opinion-wise it is the only off-the-shelf solution of studio-grade quality that will help you figure things out via discovery.

Thanks Frankie,
I was unable to open your YouTube clip; it says it's unavailable.
As for your reference to layering animation via clusters, but then rather authoring animation off of the bind pose and using Story, can you expand on that a bit? I might just be too new to UE for some of this to make sense. Would love a pointer to a tutorial :slight_smile:
What I am doing now is using the UE Sequencer to rough out our cinematics with our basic animations for all the general body moves. Now I want to go in and add facial animation on top of that, coming in via Maya from Faceware Retargeter, so most likely exported as FBX?
Without having to switch out my current animation sequence, just adding to it.
Thanks for the insights!

I agree with Frankie regarding his points:

Lately we were doing custom facial rigs to import into Unreal for a hero character, which also had the Dorito effect happening in the original setup so we could use morph targets to drive the clusters. However, we are now reconsidering our approach, switching to a full morph target approach and dumping the rig. A couple of reasons for this (note this is best practice for higher-quality facial animation work and for key hero characters):

1 - The number of bones we ended up with in the final facial rig was large, and even with those numbers it only got us through 75-80% of the original morph poses, which is a big deal for faces; for the extra bit you have to increase the bone count further, which creates more hassle for the animators, in this case us :). Also, coming from a film background, we enjoy using morph rigs for the most part, as they provide the best-practice and best-quality approach in my opinion. And making quick changes in ZBrush to the poses and loading them back in works fine for the workflow.

So FBX can export your morphs plus animations, and in UE you will end up with morph sliders for the head only, which is great because you don't have to import all the pieces and reassemble them again in the engine. (See the export sketch after this list.)

Lastly, if you do end up with a facial rig, you can have corrective morph poses for specific hard-to-get shapes, and maybe that will help solve some issues, but then again you will need a lot of poses for those correctives, so using a facial rig becomes questionable.

2 - So far, from our tests, UE was able to handle a good number of morph targets. Be mindful that this also depends on your head poly count, as I heard this stuff may take up memory later. How this matches up in performance against hundreds of bones plus corrective morphs for a facial rig is still a question mark, but at this stage we are focusing on what gives the best results, and if it performs, then it should be OK. We noticed from our research that even large studios have had similar problems with facial rigs; there was an article about Ryse using lots of corrective morphs to fix the issues.

3 - You will still need a limited custom facial rig regardless, and by this I mean you need something to drive the eyelashes, tear line, eyebrows, teeth, hair, etc. In our case we have a separate setup linked to the morph poses, which drives the bones that in turn drive those elements automatically, so as an animator I just need to focus on one type of slider for the face poses and not worry about how to move the rest separately. (See the driven-key sketch below.)
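Regarding the FBX export mentioned in point 1, something like this is roughly the Maya-side export step (a minimal sketch, assuming the FBX plug-in is loaded; the mesh name and file path are placeholders):

```python
import maya.cmds as cmds
import maya.mel as mel

# Select the head mesh that carries the blend shapes (placeholder name).
cmds.select("head_GEO", replace=True)

mel.eval('FBXResetExport')
mel.eval('FBXExportShapes -v true')                # include blend shape targets
mel.eval('FBXExportSkins -v true')                 # include skin weights
mel.eval('FBXExportBakeComplexAnimation -v true')  # bake the keyed morph weights
mel.eval('FBXExport -f "D:/export/head_dialogue.fbx" -s')  # -s = selection only
```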
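And for point 3, the morph-drives-bones hookup can be as simple as driven keys in Maya, roughly like this (a sketch; the blend shape node, shape, and joint names are placeholders, and the rotation values are arbitrary):

```python
import maya.cmds as cmds

# At zero weight the jaw joint rests...
cmds.setDrivenKeyframe("jaw_JNT.rotateX",
                       currentDriver="faceShapes.mouthOpen",
                       driverValue=0.0, value=0.0)
# ...and at full weight the teeth/jaw follow the open-mouth shape,
# so the animator only ever touches the blend shape slider.
cmds.setDrivenKeyframe("jaw_JNT.rotateX",
                       currentDriver="faceShapes.mouthOpen",
                       driverValue=1.0, value=-25.0)
```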

Regarding the body rig: when you speak of the Advanced rig, is this the ART tool for Maya? Sorry, I may not be able to help you here since I'm mostly a Max user.

You also don't need Alembic to export facial morphs; that's done through FBX. Alembic is more for pre-baked sims and other topology-changing animations.

However, from what I understand, you are shaping up to create many characters using the same motion and retargeting? If this is the case then of course you may find blend shapes like the ones described above to be a bit more trouble, but have a look at some tips and tricks from an old article here:

https://cdn2.unrealengine.com/Resources/files/Jeremy_Ernst_FastAndEfficietFacialRigging2-1007627780.pdf

The bind pose is the pose the character model was in when it was skin-weighted to the rig. This could be the T-pose or, in some cases, the A-pose.

Story mode is a feature in MotionBuilder where one can apply animations in a layered process, similar to how you can layer animations in Unreal 4. In Maya I believe the same can be done using animation layers.
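In Maya that layered workflow looks roughly like this (a minimal sketch; the layer and control names are placeholders):

```python
import maya.cmds as cmds

# Create an animation layer for the facial pass.
face_layer = cmds.animLayer("faceDialogue")

# Add the face controls to it; keys set on this layer sit on top of the
# base body/mocap animation and can be muted or weighted independently.
cmds.select("head_CTL", replace=True)
cmds.animLayer(face_layer, edit=True, addSelectedObjects=True)
cmds.animLayer(face_layer, edit=True, weight=1.0)
```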

The pipeline really depends on the purpose that needs to be served as to how best to “manage” animation resources, and animating for a video game is different from animating for the purpose of acting in a video game. If it's acting that you need, as in a cutscene, or where the action is scripted as in a quick-time event, then I would go with animating verbose.

Both the Matinee and Sequencer samples are very good examples of how to structure usable source assets; as all of the animations are done as a single sequence, you could script things out the same way you would script the action for any animated feature.

Now, if you did need to add facial expressions and dialogue after the fact, they can be added using per-bone layering in Unreal 4 for clusters. I'm not sure just yet of the process for morph shaping, but I'm assuming that the rig overrides shaping as it does in any 3D application.

We opted for cluster shaping as it’s easier to manage and our game does not require acting.

@K we are trying to use the Advanced Skeleton rig, here is a link,

to develop a process. We like it because it is very flexible and has conversion functions to go straight into UE, and also has conversions for mocap and a face rig.
@FrankieV - yes, it is currently for an acting/scripted cinematic workflow.

So we are able to get the animation in via blend shapes, and we agree with both of you that they give more control than bones. My questions now are:

  1. Can I export the animation data of the blend shapes (from Maya, via the blend shape sliders, 0 to 1) and re-apply it to the UE character mesh (Morph Target Preview under the character mesh, -1 to 1)?
    Otherwise, currently we have to re-export the complete mesh with the blend shape animation attached for each take. Or maybe retarget it from an FBX-imported mesh to my already body-animated mesh.
    I am trying to stick with the workflow of doing the body animation first and then adding the blend shape facial animation on top afterwards, potentially even via live punch-in on the Faceware tech.

  2. On top of #1, I would like to keep the bone part of the character rig in such a way that it fits the standard UE skeleton, so I can apply all the marketplace animations without retargeting.
    @FrankieV - you mentioned Sequencer samples?

From my experience with facial rigs and UE4 setups:

The best way to easily share facial animations between characters (all of them using a blend shape rig) is to build an additional, standalone rig which is driven by the blend shape values. Here is an example (using a lipsync setup):

By doing this you'll be able to drive the blend shapes in real time on a character and share the standalone rig in a few steps, rather than have the animations tied to the characters.

In addition to that, I also developed a setup in Maya which allows for easy rig sharing between different characters.
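The wiring behind that kind of sharing can be a handful of direct connections in Maya, roughly like this (a sketch; the driver rig, blend shape node, and shape names are placeholders):

```python
import maya.cmds as cmds

# The standalone driver rig outputs straight into each character's
# blend shape weights of the same name, so it can be re-attached
# to any head that exposes the same shape set.
for shape in ["mouthOpen", "smileLeft", "smileRight"]:
    cmds.connectAttr("faceDriver_CTL.%s" % shape,
                     "charA_faceShapes.%s" % shape, force=True)
```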

I'm currently working on a tech demo for my VR project, which will feature a full-body mocap suit combined with a VR headset; the player will interact with another human who's also wearing a mocap suit and a facial tracking camera to read facial movement using Faceware Live for UE4. I really can't wait to test everything together :slight_smile:

Since you're also talking about alternative solutions for the facial rigs, I strongly suggest you try Fabric Engine, which integrates a lot of useful stuff for game engines.

I have been learning the Faceware pipeline for our production and am having issues with the animation being imported into Unreal. Right now I am baking the retargeted animation onto the face mesh. When I import the asset into Unreal, I get something like 160 different individual animation sequences, each containing one track of the blend shape animation. So far I haven't had any success bringing my Faceware performances into Unreal, which is incredibly frustrating!