Facial Animation Sharing combined with FaceAR?

Hello there,

I’m fairly new to Unreal, and some things might be obvious and I simply might be overthinking them, so I thought I’d try asking some of you nice folks out there for some friendly advice.

With some of the new features that recently came out, namely Facial Animation Sharing and FaceAR, I’d love to try somewhat “combining” the two to create simple but, to a certain degree, believable facial animations and share them across different characters. That would give me the possibility to capture my face, record that animation, and use it on various faces.

I went through the documentation for both but still have some open questions, which make me wonder whether that approach could actually work.
On the one side - there is the Facial Animation Sharing:

So, to get that right in short:

  • All the faces have to have the same topology and the same facial rig (“Master” Skeletal Mesh)
  • That also means that, to share facial animation, these animations have to be achieved solely through bone/joint animation (resulting in the face’s deformations) and no morph targets
  • These animations then have to be baked and “translated” into animation curves to reuse them on different faces

Does that mean these shared animations don’t “support” morph targets, because I’d have to generate them for every face individually? At least that’s what I think…
but then this sentence from the documentation somewhat confuses me:

“One important caveat however is that your animation must not have any bone transform data within it. Any bone transform data, even with one mesh’s reference pose, won’t work for other meshes so it is important to remove bone transforms (keeping only curves) and start with each mesh’s own reference pose if you want to share the curve between different meshes. This enables you to share the facial curves between different faces.”
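The way I read that caveat, a “shareable” animation would keep only the named curves and drop any bone transform tracks, so each mesh plays the curves on top of its own reference pose. Here’s a tiny plain-Python sketch of how I picture the data (hypothetical layout, not actual UE types):

```python
# Hypothetical sketch: a "shareable" facial animation keeps only the named
# curve tracks; any bone transform tracks are stripped, because bone data
# recorded against one mesh's reference pose won't work on another mesh.

def make_shareable(animation):
    """Return a copy with bone tracks removed and curve tracks kept."""
    return {"bone_tracks": {}, "curves": dict(animation["curves"])}

anim = {
    "bone_tracks": {"jaw_joint": [(0.0, (0, 0, 0)), (1.0, (25, 0, 0))]},
    "curves": {"jawOpen": [(0.0, 0.0), (1.0, 1.0)]},
}
shared = make_shareable(anim)
print(shared["bone_tracks"])  # {} -- safe to share across meshes
```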

On the other hand, there is FaceAR:

If I got that right, this mostly makes use of morph targets (52 is the number it usually takes), but you can combine it with joint animation by creating and animating the facial animation with both blend shape and joint animation in, e.g., 3ds Max, and then “baking” the state into a combined morph target for UE.
Which essentially means it just uses morph targets, making it somewhat “useless” or “incompatible” for my idea, because I can’t directly control my “shared rig” (or better: the animation curves for the shared facial animations) with it.

Or did I just miss some detail, and I can actually simply “link” FaceAR’s blend shapes to the animation curves, telling it to use “curveA” when “BlendshapeA” is used?

I’d really appreciate any help and advice :slight_smile:
Thanks in advance and cheers!

I don’t have experience with the Facial Animation Sharing technique, but in order to share the same animation across different characters, all with blend shape rigs (so no joint-driven ones) and with different topology, I simply have a base mesh in the scene with FaceAR enabled. Then I save all the animation curves into float values, so that all the data is available at runtime.

Then, in my newly imported character, on Tick I get a reference to the FaceAR base mesh I previously set, and read the float values I need.

At the very beginning I used this setup just to remove the blend shapes I did not need, but you can use those values to drive anything you want, because they’re not tied to any specific blend shape: you decide what that 0 to 1 value will be used for, like overdriving a blend shape, reducing its amount, or just assigning those values to an audio waveform :wink:
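If it helps to see the idea as pseudocode, here’s a rough plain-Python sketch of that flow (all the names are made up; in UE this would be Blueprint/C++):

```python
# Hypothetical sketch: a FaceAR "base mesh" publishes its curve values as
# plain floats each tick, and any character reads them and decides what
# each 0-1 value drives (overdrive, reduce, or repurpose entirely).

class FaceARBaseMesh:
    """Stands in for the scene actor with FaceAR enabled."""
    def __init__(self):
        self.curves = {}  # curve name -> float in [0, 1]

    def update_from_capture(self, captured):
        # In UE this would come from the face-tracking animation curves.
        self.curves = dict(captured)

class TargetCharacter:
    """Reads the base mesh's floats on tick and maps them to its own use."""
    def __init__(self, base_mesh, remap=None):
        self.base = base_mesh
        self.remap = remap or {}  # curve name -> scale factor
        self.blend_shapes = {}

    def on_tick(self):
        for name, value in self.base.curves.items():
            scale = self.remap.get(name, 1.0)  # overdrive or reduce
            self.blend_shapes[name] = max(0.0, min(1.0, value * scale))

base = FaceARBaseMesh()
base.update_from_capture({"jawOpen": 0.5, "eyeBlinkLeft": 1.0})
char = TargetCharacter(base, remap={"jawOpen": 1.5})
char.on_tick()
print(char.blend_shapes)  # jawOpen overdriven to 0.75, blink stays at 1.0
```

The key point is that the target character never needs the base mesh’s topology; it only consumes floats.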

Hope that helps :wink:

Hey Enter Reality,

thank you very much for your help and advice!

I think that will definitely help regarding the transition from the FaceAR animation data to the target character’s face animation. Just to check if I got you right:

You bring in a base mesh with FaceAR enabled, capture the animations you want, translate the resulting animation curves into float values, and then simply “link” those values to the respective blend shape values of your target face, so that it basically matches/controls your target face. (As you mentioned, you could also link those values to other things, say an arm’s movement, for example.)

I can see that working with blend-shape-based facial expressions, as you can literally use the blend shape values of the FaceAR base mesh to control the target blend shapes… Thinking about it now, I feel a little stupid for how simple that part actually is…
Now it would be interesting how, or if, that could then be used for facial animation sharing… In my “simple thinking” I could imagine translating those float values to control the facial rig (joints) in pretty much the same way you would build a “standard face rig” with sliders.
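To make sure I’m picturing that right, here’s the kind of slider mapping I have in mind, as a tiny plain-Python sketch (the joint names and poses are made up):

```python
# Hypothetical sketch: treat each 0-1 curve value as a rig slider that
# linearly blends a joint between its neutral pose and an "extreme" pose,
# the same way a hand-built face rig slider would.

def drive_joint(neutral, extreme, weight):
    """Blend each component of a joint transform by the slider weight."""
    return tuple(n + (e - n) * weight for n, e in zip(neutral, extreme))

# e.g. a jaw joint: rotation (pitch, yaw, roll) in degrees
jaw_neutral = (0.0, 0.0, 0.0)
jaw_open = (25.0, 0.0, 0.0)  # fully-open pose authored by the rigger

print(drive_joint(jaw_neutral, jaw_open, 0.5))  # half-open jaw
```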

Cool, that sounds quite straightforward actually.

The only real question left for me now is basically how the facial animation sharing works in detail, as I can’t quite figure it out from the documentation… Is it joint-based animation, or blend shapes? Or both? Assuming that it needs a Master Skeleton, I guess it’s joint animation, but on the demo faces (in the documentation) it looks as if there are blend shapes integrated as well… and then there is that statement I quoted in the first post… I think I might just have interpretation issues here.

To give you an idea of what I’m trying to build/test: it’s actually a simple scene for VR. It’s you and one character you can interact/talk with. But this character might be exchanged, e.g. a skinny man instead of a fat man. I’d like to have something like a “character editor” to create your own variation of that character (I’d tackle this with morph targets as well). To save time and not have to animate every single face for every variation, I’d like to reuse animations across all of the character variations, and I think that could work via Facial Animation Sharing.

Once again: Thank you!

I think that the best solution for what you have in mind is to use blend shapes on both the body and the face, so that you can mix whatever you need but still have the animation playing as expected.

There is a great GDC paper by Jeremy Ernst that describes the method he used to efficiently have a base mesh that is then adapted to fit different characters, so the same topology for all the characters.

You can probably apply the same technique in order to have baked facial animation to apply to every character you need.

If this can help: a while ago I created what I called a “proxy rig”, where, together with the standard UE4 skeletal hierarchy, I also added 52 joints, each one named after the corresponding blend shape. You can see the entire pipeline in action in this video.

In short, the blend shapes inside UE4 drive the 52 joints in the skeleton on their Z axis, so that the facial animation can be exported using joints (since UE4 doesn’t export blend shapes), and then in Maya, using SDKs (set driven keys), I link the 0 to 1 Z axis values of the joints to the facial rig’s blend shapes, and there you go.
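As a rough plain-Python sketch of that round trip (hypothetical names; the real thing happens between UE4 and Maya):

```python
# Hypothetical sketch of the "proxy rig" round trip: each blend shape
# weight (0-1) is written onto a same-named proxy joint as a Z value,
# exported as joint animation, then mapped back (the set-driven-key step
# in Maya) to a 0-1 blend shape weight on the final rig.

def weights_to_proxy_joints(weights, scale=1.0):
    # UE side: blend shape weight drives the proxy joint's Z axis.
    return {name: w * scale for name, w in weights.items()}

def proxy_joints_to_weights(joint_z, scale=1.0):
    # Maya side: set driven keys map the joint's Z back to a 0-1 weight.
    return {name: z / scale for name, z in joint_z.items()}

captured = {"jawOpen": 0.6, "browInnerUp": 0.2}
exported = weights_to_proxy_joints(captured)
recovered = proxy_joints_to_weights(exported)
assert recovered == captured  # the round trip is lossless
```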

I guess you could apply the same technique as well, but in this case you can retarget the different characters using the same or a different skeletal hierarchy, and use the “proxy rig” technique to share the animations.

Hi Enter Reality,

thanks for your further answer and help! I haven’t been active for a while (a lot going on right now) and didn’t get the time to work more on that project. But I definitely want to continue with it, so thanks again!
I will let you know if I can get it to work as soon as I get back to it :slight_smile: