MH Animator - runtime lipsync from audio

"I would like to generate lipsync animation (MH data) for a MetaHuman face based on audio playback at runtime.

I’m wondering whether the new capabilities of MetaHuman Animator, specifically its runtime lipsync features, can be leveraged for this purpose.

Are there any known workarounds or techniques to connect in-game voiceovers to this system or its new features?

Also, there is a misleadingly named third-party commercial solution, MetaHuman SDK (metahumansdk.io), which can achieve this.

Hi,

At the moment we don’t directly support audio-driven animation at runtime. The ability to solve audio-driven animation in real time (released as part of Unreal Engine 5.6) is a step in this direction, but more work is required before it is available across all platforms and integrates with other runtime features. We do know of customers who have tackled this themselves for their specific use cases, building on the tools we provide.
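For anyone who wants to prototype this in the meantime, the pattern is roughly: tap the voiceover audio at runtime, run each audio block through an audio-to-curves solver, and hand the resulting curve values to the face’s AnimGraph. Below is a minimal C++ sketch of that wiring. Everything in it is hypothetical: FLipsyncSolver, SolveFrame, ULipsyncAnimInstance and CurvesForGraph are illustrative names, not MetaHuman or engine APIs, and the solver body is a trivial energy-based stand-in you would replace with a real audio-to-viseme model.

```cpp
// Hypothetical sketch only: none of these names are Epic APIs, and the
// UE 5.6 realtime solve is not exposed through an interface like this.

#include "CoreMinimal.h"
#include "Animation/AnimInstance.h"
#include "LipsyncAnimInstance.generated.h"

// Placeholder for an audio-to-curves solver you would build or license.
struct FLipsyncSolver
{
	// Consumes one block of mono float PCM and returns facial curve values.
	// This trivial stand-in just drives a jaw curve from signal energy; a real
	// solver would map audio features to the full viseme/expression curve set.
	TMap<FName, float> SolveFrame(TArrayView<const float> Samples, int32 /*SampleRate*/)
	{
		float SumSq = 0.f;
		for (float Sample : Samples)
		{
			SumSq += Sample * Sample;
		}
		const float Rms = Samples.Num() > 0 ? FMath::Sqrt(SumSq / Samples.Num()) : 0.f;

		TMap<FName, float> Curves;
		Curves.Add(TEXT("JawOpen"), FMath::Clamp(Rms * 8.f, 0.f, 1.f));
		return Curves;
	}
};

UCLASS()
class ULipsyncAnimInstance : public UAnimInstance
{
	GENERATED_BODY()

public:
	// Call this from your audio playback/capture path with each audio block
	// (e.g. from a procedural sound wave feed or a submix buffer listener).
	void PushAudio(TArrayView<const float> Samples, int32 SampleRate)
	{
		FScopeLock Lock(&CurveLock);
		LatestCurves = Solver.SolveFrame(Samples, SampleRate);
	}

	// Copy the most recent solve into a property the AnimGraph can read.
	virtual void NativeUpdateAnimation(float DeltaSeconds) override
	{
		Super::NativeUpdateAnimation(DeltaSeconds);
		FScopeLock Lock(&CurveLock);
		CurvesForGraph = LatestCurves;
	}

	// Latest solved curve values, exposed for the face AnimGraph.
	UPROPERTY(BlueprintReadOnly, Category = "Lipsync")
	TMap<FName, float> CurvesForGraph;

private:
	FLipsyncSolver Solver;
	TMap<FName, float> LatestCurves;
	FCriticalSection CurveLock;
};
```

In the face Animation Blueprint you would then read CurvesForGraph (for example, through a Modify Curve node) to apply the values to the MetaHuman’s facial curves; how you source the audio blocks depends on whether the voiceover is played back, streamed, or captured.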

We are aware of MetaHuman SDK; as you note, it’s a third-party product, so it’s not one we can give guidance on.

Mark.