Blending lip-synch with face mocap on MetaHumans

Hi, I’ve been impressed with the auto lip-synch in 5.5 - it does a pretty good job from an audio file in next to no time. This of course opens up the possibility of using AI voices for characters. However, the top of the MetaHuman’s face is left relatively lifeless. As a partial solution I’ve used the Live Link importer to capture some generic expressions from my phone. It does a pretty good job of capturing ‘micro-expressions’ and subtle moves that give the character a bit more life.

I’ve been blending these tracks in Sequencer - in this test the lip-synch track has a weight of 1, and the facial mocap (FMC) 0.6. This is okay-ish, but because the two layers are blended as weighted absolutes, the FMC scales down the movement provided by the lip-synch performance, even though I’ve deleted the mouth keys in the FMC data. At first it seems like a small difference, but in fact it’s quite significant. Similarly, the upper part of the face would benefit from its layer being set to 1. (The audio track is pretty random and deliberately quite neutral, BTW.)
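To illustrate why the 0.6 layer dampens the mouth even with its mouth keys deleted, here’s a minimal sketch of the underlying math - plain C++, not UE API, and it assumes Sequencer normalizes the weights of absolute animation layers: at weights 1.0 / 0.6, a zeroed FMC mouth curve still drags the lip-synch value down to 1.0/1.6 ≈ 62.5% of its keyed value, whereas an additive layer would leave it untouched.

```cpp
#include <cstdio>

// Sketch of the two blend modes for a single facial curve value
// (e.g. jaw open), one value per Sequencer layer. Not UE API.

// Absolute layers: weights are normalized, so every layer dilutes
// every other layer, even where the second layer is flat (zeroed keys).
float BlendAbsolute(float lipSync, float wLip, float fmc, float wFmc)
{
    return (wLip * lipSync + wFmc * fmc) / (wLip + wFmc);
}

// Additive layer: the mocap delta is applied on top of the lip-synch,
// so a zeroed mouth region leaves the lip-synch performance intact.
float BlendAdditive(float lipSync, float fmcDelta, float wFmc)
{
    return lipSync + wFmc * fmcDelta;
}

int main()
{
    const float jawOpen = 0.8f;  // keyed by the lip-synch track
    const float fmcJaw  = 0.0f;  // mouth keys deleted in the FMC data

    // Absolute blend at 1.0 / 0.6: jaw drops to 0.5 (62.5% of 0.8).
    std::printf("absolute: %.3f\n", BlendAbsolute(jawOpen, 1.0f, fmcJaw, 0.6f));

    // Additive blend: jaw stays at 0.8.
    std::printf("additive: %.3f\n", BlendAdditive(jawOpen, fmcJaw, 0.6f));
    return 0;
}
```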
The ideal solution would seem to be an animation blueprint with a blend node, but of course a) there are blend shapes (curves), not just bones, to take into account, and b) there’s an unbelievable number of bones anyway! A per-region sketch of what I mean is below. (And let’s not get into mismatches between voice and character, visual surroundings and audio surroundings, echo etc. - that feels like a whole new can of worms!)
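For the bone side, a Layered Blend Per Bone node with a blend mask rooted at the upper-face bones is the usual Anim Blueprint tool, but the curves would still need their own per-region weighting. This is just a sketch of that idea in plain C++ rather than UE API - the curve names and the prefix-based mask are illustrative, not the real MetaHuman curve names: mouth/jaw curves come entirely from the lip-synch layer, everything else from the mocap.

```cpp
#include <cstdio>
#include <map>
#include <string>

// Sketch of per-region curve blending (plain C++, not UE API).
using CurveSet = std::map<std::string, float>;

// Region mask: 1.0 = take the lip-synch value, 0.0 = take the mocap value.
// In an Anim Blueprint this would live in a blend mask / blend profile.
float RegionWeight(const std::string& curve)
{
    // Mouth and jaw region belongs to the lip-synch layer...
    if (curve.rfind("mouth", 0) == 0 || curve.rfind("jaw", 0) == 0)
        return 1.0f;
    // ...everything else (brows, eyes, cheeks) to the facial mocap.
    return 0.0f;
}

CurveSet BlendByRegion(const CurveSet& lipSync, const CurveSet& mocap)
{
    CurveSet out = mocap; // start from the mocap pose
    for (const auto& [name, value] : lipSync)
    {
        const float w = RegionWeight(name);
        out[name] = w * value + (1.0f - w) * out[name];
    }
    return out;
}

int main()
{
    CurveSet lipSync = { {"jawOpen", 0.8f}, {"browRaiseL", 0.0f} };
    CurveSet mocap   = { {"jawOpen", 0.1f}, {"browRaiseL", 0.6f} };

    // jawOpen keeps the lip-synch value (0.80); browRaiseL keeps the
    // mocap value (0.60) - no dilution in either region.
    for (const auto& [name, value] : BlendByRegion(lipSync, mocap))
        std::printf("%s = %.2f\n", name.c_str(), value);
    return 0;
}
```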
If anyone knows of any techniques for layering / blending animation by region in MetaHuman faces, any help would be much appreciated. Thanks!