The plugin uses text information (subtitles) to generate lip-sync animation for characters in real time. The audio envelope value is used to detect silent intervals, pause the animation during those intervals, and adjust the speed of upcoming animation.
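To illustrate the idea of envelope-based pause detection (this is a minimal sketch of the general technique, not the plugin's actual code; the smoothing factor, threshold, and names are assumptions):

```cpp
#include <cmath>
#include <vector>

// Sketch: track a smoothed amplitude envelope and flag samples whose
// envelope falls below a threshold as "silent", so lip animation can be
// paused over those intervals.
struct EnvelopeFollower {
    float envelope = 0.0f;   // smoothed absolute amplitude
    float smoothing = 0.9f;  // closer to 1.0 = slower response

    // Feed one audio sample; returns the updated envelope value.
    float Process(float sample) {
        envelope = smoothing * envelope + (1.0f - smoothing) * std::fabs(sample);
        return envelope;
    }
};

// Mark each sample of a buffer as silent (true) or voiced (false)
// by comparing the running envelope against a fixed threshold.
std::vector<bool> DetectSilence(const std::vector<float>& samples, float threshold) {
    EnvelopeFollower follower;
    std::vector<bool> silent;
    silent.reserve(samples.size());
    for (float s : samples) {
        silent.push_back(follower.Process(s) < threshold);
    }
    return silent;
}
```

A real implementation would work on audio buffers delivered by the engine each frame, and the threshold would need tuning per asset (which is why clean vs. noisy audio matters, as noted below).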
Important Notes.
The plugin requires the new experimental audio engine (it needs to be enabled manually in the engine’s **WindowsEngine.ini** or **MacEngine.ini** config file, details here).
It only supports animation based on morph targets (blend shapes), but support for animation curves is planned for the future.
The animation quality isn’t as good as lip-sync animation from FaceFX; to check, try the executable demo.
It also requires manual adjustment depending on your audio assets (whether they’re clean or noisy, loud or quiet), so please read the documentation. I’m going to improve this in the future.
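For reference, on most 4.x versions the experimental audio mixer is enabled with config lines like the following (shown here for `WindowsEngine.ini`; check the linked details for your engine version, as the exact keys may differ):

```ini
[Audio]
AudioDeviceModuleName=AudioMixer
AudioMixerModuleName=AudioMixer
```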
Great job! It’s interesting to see how it works with good-quality audio and morph targets!)) Is there any possibility of saving this animation to an anim asset?
I am still trying to get CC2 characters to work with this system, which would be a major bonus. The phonemes in CC2 are as follows, but in UE4 the morph targets look quite different.
@Macw0lf
I believe phonemes in iClone are simply composite presets of morph targets. It’s no problem for me to add support for composite presets, but there are a few issues:
the plugin doesn’t know how the lips should move for every single phoneme, and a hard-coded solution specifically for Character Creator would reduce flexibility for devs
even if I wanted to add special support for CC, we don’t know the exact morph target presets used in iClone’s phonemes
animating multiple morph targets in real time is bad for performance
I’ll add support for composite presets anyway, but that’s all. In theory, you’ll be able to use my plugin with these morph targets. In practice, it’s unlikely, because you don’t know how iClone mixes its morph targets to get these phonemes. The best solution, IMO, is to export all phonemes from iClone to FBX as an animation, import it into modelling software (3ds Max), collapse a separate mesh for each phoneme from the animation, and attach these meshes as additional morph targets to the main skeletal mesh. And, of course, remove all lip morph targets that are unnecessary for emotions.
Thanks, that sounds like a better plan. I was worried about loss of accuracy using the iClone morph targets… so let me see if I can make headway with your plan…
Greetings,
We have purchased and been using your plugin. It works fine in the engine, but it doesn’t activate when creating a build for the Oculus Go. We’re wondering if an additional step is necessary to get it working on Android devices.
Hi, I can’t seem to get this plugin to work. I followed the video tutorial, and the only major differences are that I’m using composite phonemes rather than simple morph targets (I do have the “Use Composite Morph Targets” flag enabled) and that I’m triggering the “Speak” function automatically after a delay rather than with a key press as is done in the video. Even so, audio doesn’t trigger at all with the “Speak” function, though audio works fine if I use the built-in “Play” function. I do have the experimental audio plugin enabled. I’m on version 4.19.2.
Hi everyone, Yuri,
I did some tests with the plugin, and it looks useful for me, but I have several questions.
I’m using Wwise in my project, not the Unreal audio engine, and I wondered which functions you use relative to the sound source file. I can already see that you detect the sound amplitude to drive the weight of the shapes, detect pauses, and apply some timing correction, but is there anything else?
Do you think there could be an incompatibility with the Wwise system?
@YuriNK I am relatively cheap and inexperienced, so I have been using Adobe Fuse CC and Mixamo (both free) for my characters and exporting animations directly into UE4. They come with built-in facial blend shapes/morph targets. I was wondering if you, or anyone else, has been able to use this project in that pipeline. It looks very useful, but I just want to make sure I can use it first.