How to animate speech?

Just wondering if there is an easy way to animate speech for MetaHumans? Ideally I would like to just enter some text and have it create a realistic animation (with a set of different expressions). Is that possible?

It’s definitely possible in principle, but I don’t think the technology exists yet, especially for emotional expressions.

I recently made a lip-sync plugin and can share my pose asset with visemes if you’d like to do something with it (but not the plugin itself).

I’d be very interested in that, @YuriNK.

Here: GitHub - AntiAnti/MetahumVisemeCurves: Pose Asset with visemes for Epic's MetaHuman face skeleton

But, as I said, this is just a pose asset.

Thank you! I’ll have a look. Have you had a look at Omniverse Audio2Face? It does a pretty good job of interpreting the audio for lip sync.

Oh, Omniverse is really good; it’s next-gen for lip sync. My solution works the classical way: voice recognition engine → pronouncing dictionary → curves for visemes.

I have been working on taking a single audio file and, with a mix of blueprints and a little manual effort, having a MetaHuman perform the dialog.

Here is what I have so far.

Take a complete voice-actor audio performance and use an audio editor to create regions for each word. A region is nothing more than a start time and an end time, plus the region’s length.

For my example, I used Reaper and named each region with the standard English spelling of the word.

The regions are then exported to CSV and imported into UE as a data table.
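
Outside of UE, the parsing step looks something like this minimal Python sketch (assuming the Region/Marker Manager export has Name, Start, and End columns with times in seconds; the exact column names and time format depend on your export settings):

```python
import csv

def load_regions(csv_path):
    """Parse a Reaper region export into (word, start, end) tuples.

    Assumes 'Name', 'Start', and 'End' columns with times already in
    seconds; adjust the field names and time parsing to match your
    own Region/Marker Manager export settings.
    """
    regions = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            regions.append((
                row["Name"].lower(),   # the spoken word
                float(row["Start"]),   # region start time (s)
                float(row["End"]),     # region end time (s)
            ))
    return regions
```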

Using blueprints and a separate data table containing a dictionary of American English to IPA, the standard English words are broken down into IPA phonemes.
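
A rough sketch of that lookup step (the two-entry dictionary here is purely illustrative; the real table would be a full pronouncing dictionary, e.g. derived from CMUdict, which uses ARPAbet rather than IPA, so you would convert accordingly):

```python
# Illustrative word -> IPA entries; the real data table would hold a
# full American English pronouncing dictionary.
IPA_DICT = {
    "hello": ["h", "ə", "l", "oʊ"],
    "world": ["w", "ɜː", "l", "d"],
}

def words_to_phonemes(words):
    """Break standard English words down into IPA phonemes."""
    phonemes = []
    for word in words:
        phonemes.extend(IPA_DICT.get(word, []))  # skip unknown words
    return phonemes
```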

The phonemes are then sent to a custom version of the common MetaHuman face Anim Blueprint, which has a pose for each phoneme, created using the Modify Curve node and the MetaHuman CTRL curves that are already built into the face Anim Blueprint.
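
Conceptually, each phoneme pose is just a set of curve-name → weight pairs. The curve names and weights below are invented for illustration only; the actual poses are authored in the Anim Blueprint with Modify Curve nodes driving the MetaHuman CTRL curves:

```python
# Hypothetical phoneme -> pose table; curve names and weights here are
# placeholders, not the real MetaHuman CTRL curve values.
VISEME_POSES = {
    "ə": {"CTRL_expressions_jawOpen": 0.35},
    "oʊ": {"CTRL_expressions_jawOpen": 0.25, "CTRL_expressions_mouthFunnel": 0.6},
    "m": {"CTRL_expressions_mouthClose": 1.0},
}

def curves_for_phoneme(phoneme):
    """Return the curve-name -> weight pose for one phoneme."""
    return VISEME_POSES.get(phoneme, {})  # empty dict = neutral mouth
```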

A timeline, along with the region times exported from the audio file, is used to control the timing of the animation and sync it with the original audio file.
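
A simple way to derive per-phoneme timing from the region data is to spread each word’s phonemes evenly across its region; a crude first pass, but it gets the sync in the right ballpark:

```python
def schedule_phonemes(regions, ipa_dict):
    """Spread each word's phonemes evenly across its region.

    regions:  (word, start, end) tuples from the CSV export
    ipa_dict: word -> list of IPA phonemes
    Returns (phoneme, start, end) keyframes for the timeline.
    """
    keyframes = []
    for word, start, end in regions:
        phonemes = ipa_dict.get(word, [])
        if not phonemes:
            continue  # word missing from the dictionary
        step = (end - start) / len(phonemes)
        for i, ph in enumerate(phonemes):
            keyframes.append((ph, start + i * step, start + (i + 1) * step))
    return keyframes
```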

My IT support day job is starting soon, but I could get a short demo video uploaded tonight. If there’s enough interest, I could create a tutorial video.

NVIDIA Omniverse Audio2Face will basically transfer your speech onto a face mesh that they supply, and then you can transfer it to your MetaHuman. I haven’t tried it, as the Audio2Face app won’t launch for me. I’ve tried their other Omniverse apps, like Create and View, but like most other free programs (Quixel Mixer comes to mind; Unreal Engine is obviously excluded), they are so painfully slow and temperamental that they aren’t worth bothering with. I’d be interested to see if anyone has any luck with Audio2Face, as it looks good for inputting pre-recorded audio or live recordings and seems to do a pretty good job in the YouTube tutorials.

Not happy with the resulting video; the quality and sync are not good, but it’s time for bed.

Here is a slightly cleaner one after fixing an obvious fault with the “h” shape.