I’m fairly new to Unreal, and some of this might be obvious and I might simply be overthinking it, so I thought I’d ask some of you nice folks out there for some friendly advice.
With some of the features that came out recently, namely Facial Animation Sharing and FaceAR, I’d love to try combining the two to create simple but reasonably believable facial animations and share them across different characters. That would let me capture my face, record that animation, and use it on various faces.
I went through the documentation for both but still have some open questions that make me wonder whether this approach could actually work.
On the one side, there is Facial Animation Sharing. To sum it up briefly:
- All the faces have to have the same topology and the same facial rig (“Master” Skeletal Mesh)
- That also means that, to be shareable, the facial animation has to be achieved solely through bone/joint animation (which produces the faces’ deformation), with no morph targets
- These animations then have to be baked and “translated” into animation curves so they can be reused on different faces
Does that mean these shared animations don’t “support” morph targets, because I’d have to generate those for every face individually? At least that’s what I think…
But then this sentence from the documentation confuses me somewhat:
“One important caveat however is that your animation must not have any bone transform data within it. Any bone transform data, even with one mesh’s reference pose, won’t work for other meshes so it is important to remove bone transforms (keeping only curves) and start with each mesh’s own reference pose if you want to share the curve between different meshes. This enables you to share the facial curves between different faces.”
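If I understand that caveat correctly, the point is that a shared animation should carry only named curve values, and each face resolves them against its own reference pose. Here’s how I picture it, as a rough plain-Python sketch (nothing Unreal-specific; all names like “jaw_open” are made up by me):

```python
# Rough conceptual sketch (plain Python, NOT the Unreal API):
# a shared animation stores only named curve weights per frame.
shared_anim = [
    {"jaw_open": 0.0, "smile_left": 0.2},
    {"jaw_open": 0.6, "smile_left": 0.4},
]

def apply_frame(ref_pose, rig_mapping, frame):
    """Resolve curve weights against one mesh's OWN reference pose.

    ref_pose:    {bone_name: rest_value} for this specific face mesh
    rig_mapping: {curve_name: (bone_name, full_weight_offset)} -- how this
                 mesh's rig turns a 0..1 curve weight into a bone offset
    """
    pose = dict(ref_pose)              # start from THIS mesh's reference pose
    for curve, weight in frame.items():
        bone, offset = rig_mapping[curve]
        pose[bone] += weight * offset  # curves only add deltas on top
    return pose

# Two different faces interpret the same shared frame differently:
face_a = apply_frame({"jaw": 0.0, "mouth_corner_L": 0.0},
                     {"jaw_open": ("jaw", 30.0),
                      "smile_left": ("mouth_corner_L", 2.0)},
                     shared_anim[1])
face_b = apply_frame({"jaw": 5.0, "mouth_corner_L": 1.0},
                     {"jaw_open": ("jaw", 20.0),
                      "smile_left": ("mouth_corner_L", 1.5)},
                     shared_anim[1])
# same shared curves, but each face ends up with its own bone values
```

So the same 0..1 curve weights produce different absolute bone values per face, which (I think) is why any baked bone transforms in the shared asset would only ever be correct for one mesh.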
On the other hand, there is FaceAR:
If I got that right, it mostly makes use of morph targets (52 is the number it usually takes), but you can combine it with joint animation by authoring the facial animation with both blendshapes and joints in e.g. 3ds Max and then “baking” the combined state into a single morph target for UE.
Which essentially means it still just uses morph targets, making it somewhat “useless” or “incompatible” for my idea, because I can’t directly control my “shared rig” (or rather: the animation curves for the shared facial animations) with it.
Or did I just miss some detail, and I can actually simply “link” FaceAR’s blendshapes to the animation curves, telling it to use “curveA” whenever “BlendshapeA” fires?
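In case it helps to show what I mean by “linking”: conceptually I’m imagining a simple name remap, sketched here in plain Python (not the Unreal API; the curve names on the right are made up by me, and whether UE’s Live Link remapping actually works like this is exactly what I’m unsure about):

```python
# Hypothetical sketch: translating incoming FaceAR/ARKit blendshape
# names into my own shared-rig curve names. The ARKit names on the
# left are the standard blendshape names; the "curve_*" names are
# invented placeholders for my shared rig's curves.
ARKIT_TO_RIG_CURVE = {
    "jawOpen": "curve_jaw_open",
    "mouthSmileLeft": "curve_smile_L",
    "mouthSmileRight": "curve_smile_R",
}

def remap_blendshapes(frame):
    """Turn one captured frame {arkit_name: weight} into
    {rig_curve_name: weight}; unmapped shapes are simply dropped."""
    return {ARKIT_TO_RIG_CURVE[name]: weight
            for name, weight in frame.items()
            if name in ARKIT_TO_RIG_CURVE}

captured = {"jawOpen": 0.7, "mouthSmileLeft": 0.3, "tongueOut": 0.1}
rig_frame = remap_blendshapes(captured)
# rig_frame == {"curve_jaw_open": 0.7, "curve_smile_L": 0.3}
```

If something like this mapping is possible on the Unreal side, the captured blendshape weights could drive the shared animation curves directly instead of morph targets, which is what I’m hoping for.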
I’d really appreciate any help and advice.
Thanks in advance and cheers!