How do I get started with Lip Synching?

The question says it all, really. :slight_smile:

I have a level populated with NPCs. The Player has to go around the level eavesdropping on conversations. What are the methods for getting their lips moving properly?

For what it’s worth, I’m currently using “Mixamo Fuse”-generated characters, if that helps.

Do the characters have a jaw bone? If not, you should go back to your 3D package and add one. I’m not sure exactly how to do it in Unreal, but you’ll need to read the volume of the speech audio files and rotate the jaw bone accordingly.
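Something like this rough, untested UE4 C++ sketch is what I mean. Everything here is an assumption you'd adapt to your own setup: the bone name "jaw" is a placeholder for whatever your skeleton calls it, a UPoseableMeshComponent is used so the bone can be rotated straight from C++, and the envelope-value delegate only fires with the newer audio mixer enabled:

```cpp
// Untested sketch: drive a jaw bone from the amplitude of the playing
// dialogue. Bone name "jaw" and the 20-degree range are placeholders.
#include "Components/AudioComponent.h"
#include "Components/PoseableMeshComponent.h"
#include "GameFramework/Actor.h"
#include "LipSyncNPC.generated.h"

UCLASS()
class ALipSyncNPC : public AActor
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere)
    UPoseableMeshComponent* Mesh;

    UPROPERTY(EditAnywhere)
    UAudioComponent* Dialogue;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // Envelope values arrive per audio buffer (needs the new audio mixer).
        Dialogue->OnAudioSingleEnvelopeValue.AddDynamic(this, &ALipSyncNPC::OnEnvelope);
        Dialogue->Play();
    }

    UFUNCTION()
    void OnEnvelope(const USoundWave* PlayingSoundWave, const float EnvelopeValue)
    {
        // Map amplitude (roughly 0..1) to a jaw-open pitch of up to ~20 degrees.
        const float JawPitch = EnvelopeValue * 20.0f;
        Mesh->SetBoneRotationByName(TEXT("jaw"), FRotator(JawPitch, 0.0f, 0.0f),
                                    EBoneSpaces::ComponentSpace);
    }
};
```

You'd probably want to smooth the envelope value over a few frames too, or the jaw will chatter.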

This, of course, won't look very realistic. If you need realism, you'll have to go back to your 3D package and either animate the mouths by hand or use a plugin like Voice-O-Matic, which analyzes speech audio files and blends between mouth shapes automatically; you'll still need to set up those mouth shapes yourself, though.
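If you go the mouth-shape route, the Unreal runtime side is just morph target weights. A minimal sketch, assuming you've authored and exported viseme morph targets; the names like "V_Open" are made up, use whatever you actually named yours:

```cpp
// Untested sketch: switch between a handful of mouth-shape morph targets
// ("visemes") on a skeletal mesh. The weight for the active viseme would
// come from your phoneme timing data (hand-authored or tool-generated).
#include "Components/SkeletalMeshComponent.h"

void ApplyViseme(USkeletalMeshComponent* Mesh, FName ActiveViseme, float Weight)
{
    // Placeholder names -- list whatever visemes your mesh exports.
    static const FName Visemes[] = {
        TEXT("V_Open"), TEXT("V_MBP"), TEXT("V_FV"), TEXT("V_WQ") };

    // Zero the others so shapes don't stack up, then weight the active one.
    for (const FName& Viseme : Visemes)
    {
        Mesh->SetMorphTarget(Viseme, Viseme == ActiveViseme ? Weight : 0.0f);
    }
}
```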

Thanks Jared.

So…does anyone have a 3DS Max subscription they could spare? :slight_smile:

No, but seriously, anyone?

Blender is free :smiley:

But does it have a Voice-O-Matic plugin? This lip-synching stuff seems far more work-intensive than anything else. That’s it! I’ll get everyone to wear helmets! :slight_smile:

S’okay. I’ll ask Epic if they’ll give me the money for a 3DS Max sub. They have $5m burning a hole in their pocket, I hear.

It’s not possible for me to love them more. :wink:

Blender interfaces with Papagayo for automated lip-sync: News & Blender 2_57 Lip Sync - YouTube
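For what it's worth, Papagayo can also export its timing as a Moho "switch" file, which (if I remember the format right) is plain text: a "MohoSwitch1" header line, then one "frame phoneme" pair per line. So another option is to skip baking keyframes in Blender entirely and parse that file at runtime, feeding the phonemes into something like the ApplyViseme sketch above. A rough sketch, with the file format and the 24 fps default being my assumptions:

```cpp
// Untested sketch: parse a Papagayo/Moho switch file into
// (timeInSeconds, phoneme) pairs. Format and frame rate are assumptions.
#include <fstream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

std::vector<std::pair<float, std::string>> LoadMohoSwitch(
    const std::string& Path, float FramesPerSecond = 24.0f)
{
    std::vector<std::pair<float, std::string>> Track;
    std::ifstream File(Path);
    std::string Line;
    std::getline(File, Line); // skip the "MohoSwitch1" header line

    while (std::getline(File, Line))
    {
        std::istringstream Parts(Line);
        int Frame = 0;
        std::string Phoneme;
        if (Parts >> Frame >> Phoneme)
        {
            Track.emplace_back(Frame / FramesPerSecond, Phoneme);
        }
    }
    return Track;
}
```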

But if you get some of that sweet nectar from Epic, just remember who helped you in this thread :eek:

Thanks, Jared. :slight_smile:

Hello guys. I managed to make a lip sync with Papagayo in Blender, but I can’t find a way to import the animation data into Unreal.
All it does is import the shape keys I created, not the keyframed data that Papagayo created in Blender. Is there some option I have to choose in the export?

Maybe this will help you