Is there any way to automate audio-to-lip-sync via Blueprints? I’ve used the full MetaHuman performance-capture lip sync setup and that works fine, but since I’m generating my audio from ChatGPT, I don’t have any pre-recorded audio.
If there were a node that allowed me to run ‘Process And Export To Anim Sequences’ (as seen in the screenshot below) on the newly generated audio, then find the resulting animation and apply it to the MetaHuman, I’d be golden, but no such nodes exist to my knowledge. Are there other ways to do this with Blueprints? I would really appreciate some advice.
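For reference, the “apply” half seems doable once an animation asset exists; here’s a rough C++ sketch of what I mean, assuming the generated facial animation is already a UAnimSequence and the face mesh component is named “Face” as in the default MetaHuman Blueprint. It’s the “process” step (turning audio into that anim sequence at runtime) that I can’t find any node for.

```cpp
// Rough sketch of the "apply" half only; the UAnimSequence still has to come
// from somewhere (that's the missing piece). The component name "Face" matches
// the default MetaHuman Blueprint, but verify it on your own character.
#include "GameFramework/Actor.h"
#include "Components/SkeletalMeshComponent.h"
#include "Animation/AnimSequence.h"

void PlayFacialAnimOnMetaHuman(AActor* MetaHumanActor, UAnimSequence* FaceAnim)
{
    if (!MetaHumanActor || !FaceAnim)
    {
        return;
    }

    // Find the skeletal mesh component that drives the facial rig.
    TInlineComponentArray<USkeletalMeshComponent*> MeshComponents(MetaHumanActor);
    for (USkeletalMeshComponent* Mesh : MeshComponents)
    {
        if (Mesh && Mesh->GetName() == TEXT("Face"))
        {
            // Switch the mesh to single-animation mode and play the sequence once.
            Mesh->PlayAnimation(FaceAnim, /*bLooping=*/false);
            break;
        }
    }
}
```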
Runtime lip sync generation at the quality of Audio to Lipsync would be amazing. There are some plugins that claim to do that with ChatGPT, but they all have kind of insane monthly costs.
Yeah, Convai (which I think is what you’re referring to) is a definite solution, but in my opinion the monthly costs are too high to be worth it in the long run.
I’ve seen these. Can either of them be called via Blueprints? I’m using audio that is generated on the spot, so it all has to be real-time; everything has to be done in Blueprints. (Thanks for the info, btw.)
Also, some plugins that do lip sync carry a “no AI usage” tag. Would this violate the tag? As far as I can tell, their licenses only state that no AI training was used.
They can be called from Blueprints. You might need to call some C++ functions for the live-streamed audio data. The function is called FeedAudio in OVRLipSync, as far as I can remember.
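Roughly, the bridge would look like the sketch below: a Blueprint-callable C++ function that converts 16-bit PCM (e.g. decoded from your TTS response) into float samples and hands them to the lip sync component. Note the class name, header, and FeedAudio signature here are assumptions from memory, so check the plugin headers before relying on any of it.

```cpp
// LipSyncAudioBridge.h: rough sketch of a Blueprint-callable C++ bridge.
// The commented-out FeedAudio call is from memory; the real class name and
// signature may differ, so check the OVRLipSync plugin source.
#pragma once

#include "CoreMinimal.h"
#include "Kismet/BlueprintFunctionLibrary.h"
#include "LipSyncAudioBridge.generated.h"

UCLASS()
class ULipSyncAudioBridge : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // Converts 16-bit PCM samples (e.g. decoded from a TTS response) to the
    // -1..1 float range a streaming lip sync component typically expects.
    UFUNCTION(BlueprintCallable, Category = "LipSync")
    static void FeedPcmToLipSync(UActorComponent* LipSyncComponent,
                                 const TArray<int32>& PcmSamples16Bit)
    {
        TArray<float> FloatSamples;
        FloatSamples.Reserve(PcmSamples16Bit.Num());
        for (int32 Sample : PcmSamples16Bit)
        {
            FloatSamples.Add(FMath::Clamp(Sample / 32768.0f, -1.0f, 1.0f));
        }

        // Hypothetical call based on the function name above; replace it with
        // the actual API exposed by your version of the plugin:
        // Cast<UOVRLipSyncLiveActorComponent>(LipSyncComponent)->FeedAudio(FloatSamples);
    }
};
```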
There’s also a sample project from NVIDIA Audio2Face.
In case anyone needs it, I recently created a plugin called Runtime MetaHuman Lip Sync that enables lip sync for MetaHuman-based characters across UE 5.0 to 5.5. It supports real-time microphone capture with lip sync, separate capture with lip sync during playback, and text-to-speech lip sync.