Subtitles-based Lip Sync

The plugin uses text information (subtitles) to generate lip sync animation for characters in real time. The audio envelope value is used to detect silent intervals, pause the animation during those intervals, and adjust the speed of the remaining animation.
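
Roughly, the pause logic works like this (a simplified sketch, not the plugin’s actual code; the names and thresholds here are made up for illustration):

```cpp
#include "Math/UnrealMathUtility.h"

// Simplified illustration of envelope-based pausing.
// SilenceThreshold, SilenceTime and PlayRate are made-up names for this sketch.
struct FLipSyncPauser
{
    float SilenceThreshold = 0.05f; // envelope level treated as silence
    float SilenceTime = 0.0f;       // how long the envelope has stayed below it
    float PlayRate = 1.0f;          // animation speed multiplier

    void Update(float EnvelopeValue, float DeltaTime)
    {
        if (EnvelopeValue < SilenceThreshold)
        {
            SilenceTime += DeltaTime;
            if (SilenceTime > 0.1f)
            {
                PlayRate = 0.0f; // pause the mouth during the silent interval
            }
        }
        else if (SilenceTime > 0.0f)
        {
            // Coming out of silence: speed the animation up a bit so the
            // remaining phonemes still fit into the rest of the audio.
            PlayRate = 1.0f + FMath::Min(SilenceTime, 0.5f);
            SilenceTime = 0.0f;
        }
    }
};
```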

Important Notes:

  1. The plugin requires the new experimental audio engine (it needs to be enabled manually in the engine’s **WindowsEngine.ini** or **MacEngine.ini** config file, details here; see the config sketch after this list).
  2. Only supports animation based on morph targets (blend shapes); support for animation curves is planned for the future.
  3. Animation quality isn’t as good as lip sync animation from FaceFX; to check it, try the executable demo.
  4. It also requires manual adjustments depending on the audio assets (whether they’re clean or noisy, loud or quiet), so please read the documentation. I’m going to improve this in the future.
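
For note 1: in the 4.1x engine versions this thread covers, the experimental audio mixer was typically switched on by adding the lines below to the platform engine config (for example Engine/Config/Windows/WindowsEngine.ini); the linked documentation has the exact steps for your platform:

```ini
[Audio]
AudioMixerModuleName=AudioMixer
```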

Documentation: https://drive.google.com/file/d/1GKX…JMDdzoqkk/view
Executable Demo: https://drive.google.com/open?id=1mu…0qMOZisnDUN2O6
Video tutorial: https://www.youtube.com/watch?v=MWsNb4kOaws

Great job! It’s interesting to see how it works with good quality audio and morph targets!)) Is it possible to save this animation to an anim asset?

Interesting question. I’ll try and let you know.

No, I can’t capture morph target animation.

I am still trying to get CC2 characters to work with this system, which would be a major bonus. The phonemes in CC2 are as follows, but in UE4 the morph targets look quite different.

The full pipeline is described here: Character Creator to Unreal Part 3: Facial Animation in iClone - YouTube

@Macw0lf
I believe phonemes in iClone are simply composite presets of morph targets. It’s no problem for me to add support for composite presets (see the sketch after the list below), but there are a few problems:

  1. the plugin doesn’t know how the lips should move for every single phoneme, and a solution hard-coded specifically for Character Creator would decrease flexibility for devs
  2. even if I wanted to add special support for CC, we don’t know the exact morph target presets used in iClone’s phonemes
  3. animating multiple morph targets in real time is bad for performance
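
To show what I mean by composite presets, here’s a simplified sketch (not the plugin’s actual code; FPhonemePreset and ApplyPhoneme are made-up names): a composite phoneme is just several morph targets applied together with fixed weights.

```cpp
#include "Components/SkeletalMeshComponent.h"

// A composite phoneme preset: several morph targets with fixed weights,
// e.g. { {"Mouth_Open", 0.6f}, {"Lips_Pucker", 0.3f} }.
struct FPhonemePreset
{
    TArray<TPair<FName, float>> MorphWeights;
};

// Apply the whole preset, scaled by Alpha (0..1) over the phoneme's duration.
// SetMorphTarget is the standard USkeletalMeshComponent function; note that
// every pair costs a morph target update per frame, hence the performance concern.
void ApplyPhoneme(USkeletalMeshComponent* Mesh, const FPhonemePreset& Preset, float Alpha)
{
    for (const TPair<FName, float>& Pair : Preset.MorphWeights)
    {
        Mesh->SetMorphTarget(Pair.Key, Pair.Value * Alpha);
    }
}
```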

I’ll add support for composite presets anyway, but that’s all. In theory, you’ll be able to use my plugin with these morph targets. In practice, it’s unlikely, because you don’t know how iClone mixes its morph targets to get these phonemes. The best solution IMO is to export all phonemes from iClone to FBX as an animation, import it into modelling software (3ds Max), collapse a separate mesh for each phoneme from the animation, and attach these meshes as additional morph targets to the main skeletal mesh. And, of course, remove all lip morph targets that aren’t necessary for emotions.

Thanks, that sounds like a better plan. I was worried about loss of accuracy using the iClone morph targets… so let me see if I can make headway with your plan…

Version 1.0.2

  • composite morph targets (using multiple morph targets for one phoneme)
  • built-in emotions system (by tags in subtitles; see the example below)
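
Purely as an illustration (check the documentation for the actual tag syntax the plugin expects), tagged subtitles could look something like this:

```
[happy] Nice to see you again!
[sad] I thought you'd never come back.
```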

Hey there Yuri,

What engine is used to produce the phoneme mapping from the audio input?

Thanks!

No, there is no audio recognition engine. The plugin just ‘reads’ the provided subtitles.
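
In other words, phonemes derived from the subtitle text are spread across the line’s display interval, and the audio envelope then corrects the timing. A simplified sketch of the idea (not the actual plugin code; GetPhonemesForText here is a toy, vowels-only mapping):

```cpp
#include "CoreMinimal.h"

// One line of subtitles with its display interval.
struct FSubtitleLine
{
    float StartTime = 0.0f; // seconds
    float EndTime = 0.0f;
    FString Text;
};

// Toy text-to-phoneme mapping (vowels only). A real implementation would
// use letter rules or a pronunciation dictionary.
static TArray<FName> GetPhonemesForText(const FString& Text)
{
    TArray<FName> Phonemes;
    for (TCHAR C : Text)
    {
        switch (FChar::ToLower(C))
        {
            case 'a': Phonemes.Add(TEXT("Phoneme_AA")); break;
            case 'e': Phonemes.Add(TEXT("Phoneme_EH")); break;
            case 'o': Phonemes.Add(TEXT("Phoneme_OH")); break;
            case 'u': Phonemes.Add(TEXT("Phoneme_OO")); break;
            default: break; // consonants skipped in this sketch
        }
    }
    return Phonemes;
}

// Spread the phonemes evenly across the subtitle interval; envelope-based
// pause/speed correction would then shift this timeline at runtime.
static void BuildTimeline(const FSubtitleLine& Line, TArray<TPair<float, FName>>& OutKeys)
{
    const TArray<FName> Phonemes = GetPhonemesForText(Line.Text);
    if (Phonemes.Num() == 0)
    {
        return;
    }

    const float Step = (Line.EndTime - Line.StartTime) / Phonemes.Num();
    for (int32 Index = 0; Index < Phonemes.Num(); ++Index)
    {
        OutKeys.Emplace(Line.StartTime + Index * Step, Phonemes[Index]);
    }
}
```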

Hi Yuri, I think the lip sync animation in your demo is brilliant :)

Are you planning on adding support for bone based face rigs?

(I’m currently using Daz 3D characters with a Blender-based Rigify body rig and a Pitchipoy face rig).

Also, could you post a youtube showing the workflow involved in creating these animations with your plugin?

Hello, I have purchased your plugin, but how do I record microphone audio in real time? Can you provide the project files from the video?

It doesn’t work with microphone input, because it requires subtitles to work. You should probably request a refund.
For real-time lip sync with a microphone, try the Oculus plugin (https://forums.unrealengine.com/community/community-content-tools-and-tutorials/85726-ovrlipsync-plugin-for-ue4)

  1. Bones - not soon.
  2. Yes, I’ll make a video in a few days.

Tutorial: https://www.youtube.com/watch?v=MWsNb4kOaws
I use a Daz3D-based head (Genesis 3), but with morph targets.

Greetings,
We have purchased and utilized your plugin. Works fine in the engine, but does not activate when creating a build for the Oculus Go. Wondering if an additional step is necessary to get functionality on Android devices.

Hi, I can’t seem to get this plugin to work. I followed the video tutorial, and the only major differences are that I’m using composite phonemes rather than simple morph targets (I do have the “Use Composite Morph Targets” flag enabled) and that I’m triggering the “Speak” function automatically after a delay rather than with a key press as is done in the video. Even so, audio doesn’t trigger with the “Speak” function, though audio works fine if I use the built-in “Play” function. I do have the experimental audio plugin enabled. I’m on version 4.19.2.

Hi everyone, Yuri,
I did some tests with the plugin; it looks useful for me, but I have several questions.
I’m using Wwise on my project, not the Unreal audio engine, and I wondered which functions you use relative to the sound source file. I already see that you detect the sound amplitude to scale the weight of the shapes, detect pauses, and apply some timing correction, but is there anything else?
Do you think it could be incompatible with the Wwise system?

@YuriNK I am relatively cheap and inexperienced, so I have been using Adobe Fuse CC and Mixamo (both free) for my characters and exporting animations directly into UE4. They come with built-in facial blendshapes/morph targets. I was wondering if you, or anyone, has been able to use this project in that pipeline. It looks very useful, but I just want to make sure I can use it first :)

The plugin requires separate blend shapes (morph targets) for phonemes, like here: https://pp.userapi.com/c850228/v850228076/9c448/p9bv_rL1JwY.jpg
Not necessarily the same list. 8-10 morph targets usually work well.
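
For example, a typical phoneme set could look like this (just an illustration, not a required list):

```
AA  EH  IY  OH  UW  F/V  M/B/P  L
```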