I’m currently working on implementing real-time procedural facial animation for my MetaHuman character, driven by audio or text input. Initially I tried the OVR Lip Sync plugin, which worked flawlessly in the editor but breaks down at runtime because of its frame-sequence requirements.
Many people suggest Audio2Face as an alternative, but it depends on Live Link and stops working once the game is packaged. I also explored the Convai and MetaHuman SDK plugins, but neither supports offline use; both are offered only as monthly subscriptions.
I greatly appreciate any assistance you can provide in resolving this challenge.
I recommend taking a look at https://github.com/xszyou/Fay/tree/fay-assistant-edition. The project runs the native OVR Lip Sync service and streams the generated lip-sync data to Unreal over a WebSocket. The limitation is that OVR Lip Sync is only available on Windows, but I hope it helps.
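For a rough idea of what the Windows-side sender looks like, here is a minimal Python sketch: it packs one frame of OVR Lip Sync's 15 viseme weights into a JSON message and pushes it to Unreal over a WebSocket. The function names, message schema, and port are my own assumptions for illustration, not Fay's actual protocol; only the 15-viseme list matches what OVR Lip Sync really outputs.

```python
import json
import time

# The 15 visemes produced by OVR Lip Sync, in its standard order.
OVR_VISEMES = [
    "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
    "nn", "RR", "aa", "E", "ih", "oh", "ou",
]

def make_viseme_frame(weights, timestamp=None):
    """Pack one frame of viseme weights (15 floats) into a JSON message.
    The schema here is hypothetical; adapt it to whatever your Unreal
    WebSocket receiver expects."""
    if len(weights) != len(OVR_VISEMES):
        raise ValueError(f"expected {len(OVR_VISEMES)} weights, got {len(weights)}")
    return json.dumps({
        "type": "lipsync",
        "time": timestamp if timestamp is not None else time.time(),
        "visemes": dict(zip(OVR_VISEMES, weights)),
    })

async def stream_frames(frames, url="ws://127.0.0.1:9002"):
    """Send a sequence of viseme frames to Unreal.
    Needs the third-party 'websockets' package (pip install websockets);
    the URL/port is an assumption."""
    import websockets
    async with websockets.connect(url) as ws:
        for weights in frames:
            await ws.send(make_viseme_frame(weights))
```

On the Unreal side you would parse each JSON message and drive the MetaHuman face curves (or a morph-target mapping) from the viseme weights each tick.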