LipSync with Metahuman

Hello, I would like to know which tools are available to take text or audio and, in real time/dynamically, make the MetaHuman move its mouth according to that text or audio.

Thank you in advance
David Bueno

MetahumanSDK, audiowave2, azure_viseme

This one was done with the ReadSpeaker plugin

Oh wow, you can still see my message. So at that time there must have been limited plugins or tools to use, and everything you needed you had to code yourself, right?

Would it be possible to have a look at your project? I'm really curious how your project has developed after two years.

Dear @HANJONECHAN, my project is evolving fast thanks to the new Convai platform. Here you can see one of my latest videos: https://youtu.be/fSh5bi49Igk

Can you share some insight on how to speed up lip-sync animation generation?

I have tried some ‘real time’ solutions, but they weren’t very practical or effective. The best solution for me so far has been iClone 8: it has excellent viseme detection based on audio, and if you also provide the written script, it’s even better. If you add facial mocap from your iPhone on top of that and blend the results, you get even more believable expressions. However, this is not a free solution, so it might not meet your needs.
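To make the “blend the results” part concrete, here is a minimal, tool-agnostic sketch (it is not iClone’s or Unreal’s API). It assumes both the audio-driven viseme pass and the iPhone mocap pass are exported as keyed blendshape curves; the curve names are ARKit-style examples chosen only for illustration:

```python
# Illustrative sketch only (not iClone's API): blend audio-driven viseme
# curves with facial-mocap curves using a simple per-key weighted average.

from typing import Dict, List, Tuple

Curve = List[Tuple[float, float]]  # (time in seconds, value 0..1)

def blend_curves(viseme: Dict[str, Curve],
                 mocap: Dict[str, Curve],
                 mocap_weight: float = 0.4) -> Dict[str, Curve]:
    """Blend two sets of facial curves sampled at (possibly different) times."""
    blended: Dict[str, Curve] = {}
    for name in set(viseme) | set(mocap):
        v_keys = dict(viseme.get(name, []))   # time -> value from visemes
        m_keys = dict(mocap.get(name, []))    # time -> value from mocap
        times = sorted(set(v_keys) | set(m_keys))
        blended[name] = [
            (t,
             (1.0 - mocap_weight) * v_keys.get(t, 0.0)
             + mocap_weight * m_keys.get(t, 0.0))
            for t in times
        ]
    return blended

if __name__ == "__main__":
    # Hypothetical, ARKit-style curve names used purely as an example.
    viseme = {"JawOpen": [(0.0, 0.1), (0.1, 0.8), (0.2, 0.3)]}
    mocap = {"JawOpen": [(0.0, 0.2), (0.1, 0.5), (0.2, 0.4)],
             "BrowInnerUp": [(0.0, 0.6), (0.2, 0.2)]}
    for curve, keys in blend_curves(viseme, mocap).items():
        print(curve, [(t, round(v, 2)) for t, v in keys])
```

In practice you would weight the mouth curves towards the viseme pass so the speech shapes stay crisp, and the brow/eye curves towards the mocap pass for expression.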

I suggest looking at MetahumanSDK: it generates text-to-speech inside Unreal really fast, and the face animation as well. They have good tutorials on their YouTube channel.
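For intuition only (this is not the MetahumanSDK API), a text-to-speech lip-sync pipeline boils down to mapping timed phonemes, returned by the TTS engine or a forced aligner, onto viseme keys that drive the mouth. A minimal sketch, with an assumed and heavily truncated phoneme-to-viseme table:

```python
# Conceptual sketch only -- NOT the MetahumanSDK API. It shows the kind of
# data flow such plugins handle for you: phoneme timings mapped to viseme
# keyframes that can then drive mouth blendshapes.

from typing import Dict, List, Tuple

# Tiny, illustrative phoneme-to-viseme table (real tables cover ~40 phonemes).
PHONEME_TO_VISEME: Dict[str, str] = {
    "AA": "V_Open", "EH": "V_Open",
    "B": "V_Explosive", "P": "V_Explosive", "M": "V_Explosive",
    "F": "V_Dental_Lip", "V": "V_Dental_Lip",
    "S": "V_Tight", "T": "V_Tight",
    "sil": "V_Rest",
}

def visemes_from_phonemes(
    phonemes: List[Tuple[str, float, float]]  # (phoneme, start_s, end_s)
) -> List[Tuple[float, str]]:
    """Turn timed phonemes into a sparse list of (time, viseme) keys."""
    keys: List[Tuple[float, str]] = []
    last = None
    for phoneme, start, _end in phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "V_Rest")
        if viseme != last:  # only key when the viseme actually changes
            keys.append((start, viseme))
            last = viseme
    return keys

if __name__ == "__main__":
    # Example timings for the word "map": M-AA-P, padded with silence.
    timed = [("sil", 0.00, 0.05), ("M", 0.05, 0.12),
             ("AA", 0.12, 0.30), ("P", 0.30, 0.38), ("sil", 0.38, 0.50)]
    print(visemes_from_phonemes(timed))
```

Plugins like MetahumanSDK do this mapping and the curve generation for you inside Unreal, which is also why supplying the written text alongside the audio tends to improve the timing.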

Hi David, I have done a similar project with Convai. The lip sync is OK, but how do you make the MetaHuman’s body move while it stands? I have used the idle animation from the base third-person game, but the head is detached, and despite the tutorials I am unable to reattach it…

Dear Diego, you can’t use the Manny animations directly; you must retarget them to the MetaHuman. You can follow, e.g., this video: https://youtu.be/xvHOamXuZDI?si=Ec3RY9E_jyiFOclB

Best Regards