Can You Automate MetaHuman Audio To Lipsync With Blueprints?

Is there any way to automate audio to lipsync via Blueprints? I’ve used the whole MetaHuman performance capture lipsync setup and that works fine, but since I’m generating my audio with ChatGPT, I don’t have any pre-recorded audio.

If there were a node that let me run ‘Process And Export To Anim Sequences’ (as seen in the screenshot below) on the newly generated audio, then find the result and apply it to the MetaHuman, I’d be golden, but no such nodes exist to my knowledge. Are there other ways to do this with Blueprints? I would really appreciate some advice.
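To make the question concrete: the one part I can picture is a small C++ helper exposed to Blueprints that plays an already-exported anim sequence on the MetaHuman’s face mesh. This is just a minimal sketch, with a hypothetical class name, function name, and asset path, and it assumes the export has somehow already happened; triggering the export itself at runtime is exactly the piece I’m missing.

```cpp
// LipSyncHelpers.h -- hypothetical sketch, not a built-in API.
// Assumes 'Process And Export To Anim Sequences' (or something equivalent)
// has already written an anim sequence to a known asset path.
#pragma once

#include "CoreMinimal.h"
#include "Kismet/BlueprintFunctionLibrary.h"
#include "Animation/AnimSequence.h"
#include "Components/SkeletalMeshComponent.h"
#include "LipSyncHelpers.generated.h"

UCLASS()
class ULipSyncHelpers : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // Loads the exported anim sequence and plays it once on the given
    // face mesh (e.g. the MetaHuman's Face skeletal mesh component).
    UFUNCTION(BlueprintCallable, Category = "LipSync")
    static bool PlayExportedLipSync(USkeletalMeshComponent* FaceMesh, const FString& AssetPath)
    {
        if (!FaceMesh)
        {
            return false;
        }

        // AssetPath would look like "/Game/LipSync/GeneratedAnim.GeneratedAnim".
        // LoadObject only finds assets that already exist on disk/in the pak.
        UAnimSequence* Anim = LoadObject<UAnimSequence>(nullptr, *AssetPath);
        if (!Anim)
        {
            return false;
        }

        // Switches the mesh into single-node animation mode and plays once.
        FaceMesh->PlayAnimation(Anim, /*bLooping=*/false);
        return true;
    }
};
```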

Runtime lipsync generation at the quality of Audio to Lipsync would be amazing. There are some plugins that claim to do this with ChatGPT, but they all come with pretty steep monthly costs.

Yeah, Convai is a definite option (which is what I think you’re referring to), but in my opinion the monthly costs are too high to be worth it in the long run.

You can try OVRLipSync or NVIDIA Audio2Face.
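Both have Unreal integrations. If you instead end up running one as a local service (Audio2Face can run headless), the Unreal side is just an HTTP request you can wrap in a function and expose to Blueprints. Rough sketch below, assuming the service accepts raw WAV bytes; the URL and endpoint are placeholders, not a documented Audio2Face API:

```cpp
// Hypothetical sketch: POST generated WAV bytes to a locally running
// lipsync service from C++ that you can expose to Blueprints.
// Requires "HTTP" in your module's PublicDependencyModuleNames.
#include "CoreMinimal.h"
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"

void SendAudioForLipSync(const TArray<uint8>& WavBytes)
{
    // URL and endpoint are placeholders -- substitute whatever your service exposes.
    TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
    Request->SetURL(TEXT("http://127.0.0.1:8011/generate_lipsync"));
    Request->SetVerb(TEXT("POST"));
    Request->SetHeader(TEXT("Content-Type"), TEXT("audio/wav"));
    Request->SetContent(WavBytes);

    // Fires back on the game thread when the service responds; the payload
    // would carry viseme/blendshape data to drive the face at runtime.
    Request->OnProcessRequestComplete().BindLambda(
        [](FHttpRequestPtr Req, FHttpResponsePtr Res, bool bConnected)
        {
            if (bConnected && Res.IsValid() && Res->GetResponseCode() == 200)
            {
                UE_LOG(LogTemp, Log, TEXT("LipSync response: %d bytes"), Res->GetContent().Num());
            }
        });

    Request->ProcessRequest();
}
```

Keep in mind an HTTP round trip adds latency, so for truly real-time use the in-process plugin route is probably the better fit.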

I’ve seen these. Can either of them be called from Blueprints? I’m using audio that is generated on the spot, so it all has to happen in real time, and everything has to be done in Blueprints. (Thanks for the info, btw.)