Hey there,
I’m trying to stream facial animation from Omniverse Audio2Face to my custom 3D model in Unreal 5.3.
I’m struggling to find a way that works without using a MetaHuman facial rig. Is there any other way of streaming from Audio2Face to my custom 3D model?
Hi, you could take a look at the following plugin. It performs high-quality lip sync in real time from any audio data source with minimal latency, even on mid-range CPUs, and supports packaged projects: Runtime MetaHuman Lip Sync (AI for NPCs) (+ CC4, Genesis, ARKit, and more) | Fab
The plugin supports custom character rigs as well: How to use the plugin with custom characters | Georgy Dev Docs
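Whichever route you take, the core task for a non-MetaHuman character is usually the same: remap the ARKit-style blendshape curves that Audio2Face can output onto whatever morph target names your own rig uses, clamping weights to a valid range. Here is a minimal, engine-agnostic sketch of that remapping step; the ARKit curve names are real, but the target morph names in `CURVE_MAP` are hypothetical placeholders for your rig:

```python
# Sketch: retarget one frame of streamed ARKit-style blendshape weights
# onto a custom character's morph target names.
# CURVE_MAP target names ("Mouth_Open", etc.) are made-up examples --
# replace them with the morph target names on your own mesh.
CURVE_MAP = {
    "jawOpen": "Mouth_Open",
    "mouthSmileLeft": "Smile_L",
    "mouthSmileRight": "Smile_R",
}

def retarget(frame: dict) -> dict:
    """Translate one incoming frame of curves to the custom rig's names,
    dropping curves the rig doesn't have and clamping weights to [0, 1]."""
    out = {}
    for curve, weight in frame.items():
        target = CURVE_MAP.get(curve)
        if target is not None:
            out[target] = max(0.0, min(1.0, weight))
    return out

# Example frame from the stream: an unmapped curve is dropped,
# an out-of-range weight is clamped.
frame = {"jawOpen": 0.7, "mouthSmileLeft": 1.2, "browInnerUp": 0.3}
print(retarget(frame))  # {'Mouth_Open': 0.7, 'Smile_L': 1.0}
```

Inside Unreal, the per-frame application of the resulting weights would go through something like `USkeletalMeshComponent::SetMorphTarget` on your character, but the mapping table itself is the part you have to author for a custom model.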