Lip syncing in realtime from audio

I’m currently working on real-time procedural facial animation for my MetaHuman character, driven by audio or text input. Initially I tried the OVR Lip Sync plugin, which worked flawlessly in the editor but hit limitations at runtime because of its frame sequence requirements.

Many people suggest Audio2Face as an alternative, but it also requires Live Link and stops working once the game is packaged. I also explored the Convai and MetaHuman SDK plugins, but they lack offline support and are offered only as monthly subscriptions.

I greatly appreciate any assistance you can provide in resolving this challenge.

Thank you.


Hello, is there any progress? I’m looking for the same solution. 🙂


I recommend taking a look at this project: https://github.com/xszyou/Fay/tree/fay-assistant-edition. It uses the native OVR Lip Sync service and streams the generated lip data into Unreal over a websocket. The limitation is that OVR Lip Sync is only available on Windows. I hope it helps. 🙂
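To make the websocket approach concrete, here is a minimal sketch of how viseme data could be packaged for streaming to Unreal. This is not Fay’s actual wire format; `VISEMES` matches the 15 visemes the OVR Lip Sync docs describe, but `make_frame` and the JSON layout are illustrative assumptions:

```python
import json

# The 15 visemes produced by OVR Lip Sync, in its documented order.
VISEMES = [
    "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
    "nn", "RR", "aa", "E", "ih", "oh", "ou",
]

def make_frame(weights, timestamp_ms):
    """Pack one frame of viseme weights (0..1) into a JSON string.

    Hypothetical message shape: {"t": <ms>, "visemes": {name: weight}}.
    """
    if len(weights) != len(VISEMES):
        raise ValueError(f"expected {len(VISEMES)} weights, got {len(weights)}")
    return json.dumps({
        "t": timestamp_ms,
        "visemes": dict(zip(VISEMES, (round(w, 4) for w in weights))),
    })

# In a real pipeline you would run microphone audio through the native
# OVR Lip Sync library each audio tick, then push each frame to a
# websocket listener inside Unreal, e.g. with the `websockets` package:
#
#   async with websockets.connect("ws://127.0.0.1:9001") as ws:
#       await ws.send(make_frame(weights, now_ms))
```

On the Unreal side, a Blueprint or C++ websocket client would parse each frame and apply the weights to the MetaHuman’s matching morph targets every tick.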


You might want to look at NeuroSync Alpha when it arrives.

It’s shaping up really well: it runs in real time and works pretty seamlessly.

<3


Hi, any progress with this? I’m also interested.

Nope, still looking for it.

Come get some!


This one is non-commercial.

While it’s in alpha, yes.