Hello indie developers. Please help me figure out how to do lip sync in UE4.
I have WAV files with speech recorded for my characters made in MakeHuman. How do I make them speak in UE? How do you do it?
Unfortunately I have no money for FaceFX and other non-free software. Maybe there are some tutorials?
Use drivers to get the mouth shapes… just put together your standard “ohh, ahh, eee…” etc., then animate between them using drivers… that will give you what you want, and it looks cleaner than mocap imo.
I have character models with animations… and the idea basically would be to set the head free.
The rest of the body plays its animation, but the head would be controlled by dialogue only.
It’s kinda hard to code… but I think I can manage that via sockets.
Morph targets / Blend Shapes and skin weight animation are two different beasts.
In theory, morph targets are a modification of the original mesh done by moving vertices inside the editor, while the “traditional” animation uses skin weight information to deform the mesh. So you can easily put morph target animation on top of the “standard” animation without any issues. There is no need to code it, you just need to manage the two kinds of animation in different ways (or just use Matinee for the lip sync/facial animation).
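Just to illustrate the layering point (not from the post above, and “JawOpen” is an assumed morph target name): a morph weight set from code is applied on top of whatever skin-weighted animation the same mesh is already playing.

```cpp
// Minimal sketch (UE4 C++): a morph target weight layered on top of
// whatever skeletal animation the mesh is currently playing.
// "JawOpen" is an assumed morph target name from the imported FBX.
#include "Components/SkeletalMeshComponent.h"

void OpenJaw(USkeletalMeshComponent* Mesh, float Amount)
{
    if (Mesh)
    {
        // 0.0 = morph fully off, 1.0 = morph fully applied;
        // the skin-weighted animation keeps playing underneath.
        Mesh->SetMorphTarget(TEXT("JawOpen"), FMath::Clamp(Amount, 0.f, 1.f));
    }
}
```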
Thanks to you guys I finally understood (at least theoretically) how to do it.
Maybe I was a little mistaken with the topic’s title. I don’t need exact lip sync or precise lip positions. I just need the characters to look like they are really speaking, nothing precise (it just has to look believable). Is there an even easier way to do it? :)
For now I’ve figured out that to make a character speak and walk/jump etc. I need to:
model a mesh
add biped\skeleton rig
add morph targets
export mesh as FBX with morph targets and skeleton
export rig animations as FBX (import them without the mesh later in UE)
animate the mouth by randomly changing morphs to fake the effect of talking (see the sketch after this list)
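As a rough sketch of that last step, assuming a single “MouthOpen” morph target exists on the imported mesh (that name is mine, not from the thread), something like this wiggles the mouth while a talking flag is set:

```cpp
// Rough sketch of "fake talking": while bIsTalking is true, drive an
// assumed "MouthOpen" morph target toward a new random value a few
// times per second.
#include "GameFramework/Actor.h"
#include "Components/SkeletalMeshComponent.h"
#include "TalkingFaceActor.generated.h"

UCLASS()
class ATalkingFaceActor : public AActor
{
    GENERATED_BODY()
public:
    UPROPERTY(VisibleAnywhere)
    USkeletalMeshComponent* Mesh;

    UPROPERTY(EditAnywhere)
    bool bIsTalking = false;

    ATalkingFaceActor()
    {
        PrimaryActorTick.bCanEverTick = true;
        Mesh = CreateDefaultSubobject<USkeletalMeshComponent>(TEXT("Mesh"));
        RootComponent = Mesh;
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        TimeToNextShape -= DeltaSeconds;
        if (TimeToNextShape <= 0.f)
        {
            // Pick a new random mouth opening roughly every 80-160 ms.
            TargetWeight    = bIsTalking ? FMath::FRandRange(0.1f, 1.f) : 0.f;
            TimeToNextShape = FMath::FRandRange(0.08f, 0.16f);
        }

        // Ease toward the target so the mouth doesn't pop between shapes.
        CurrentWeight = FMath::FInterpTo(CurrentWeight, TargetWeight, DeltaSeconds, 12.f);
        Mesh->SetMorphTarget(TEXT("MouthOpen"), CurrentWeight);
    }

private:
    float CurrentWeight   = 0.f;
    float TargetWeight    = 0.f;
    float TimeToNextShape = 0.f;
};
```

Toggle bIsTalking on and off while the WAV plays and it already reads as speech from a normal camera distance.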
If you don’t need super precise lip sync animation, you can even create the animations directly inside UE4 using Persona, by adding keys on the timeline of the morph target itself.
There’s a free tool called Papagayo that will analyze your audio files and show you which mouth shapes to use and when.
It might be of some use to you here.
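If you want to drive that from code rather than eyeballing the timings, Papagayo can export its result as a plain-text Moho switch file. As far as I remember the format is a header line followed by “frame phoneme” pairs, but double-check against your own export; here is a hedged parsing sketch:

```cpp
// Hedged sketch: turn a Papagayo "Moho switch" export (assumed format:
// a header line, then "<frame> <phoneme>" per line) into time-stamped
// mouth shapes you can map onto your morph targets at playback time.
#include "Misc/FileHelper.h"
#include "Containers/UnrealString.h"

struct FMouthShapeKey
{
    float Time;    // seconds
    FName Phoneme; // e.g. "AI", "O", "MBP", "rest"
};

bool LoadPapagayoExport(const FString& FilePath, float FramesPerSecond,
                        TArray<FMouthShapeKey>& OutKeys)
{
    FString Content;
    if (!FFileHelper::LoadFileToString(Content, *FilePath))
    {
        return false;
    }

    TArray<FString> Lines;
    Content.ParseIntoArrayLines(Lines);

    for (const FString& Line : Lines)
    {
        FString FrameStr, Phoneme;
        // Skip the header and anything that isn't "<frame> <phoneme>".
        if (Line.Split(TEXT(" "), &FrameStr, &Phoneme) && FrameStr.IsNumeric())
        {
            FMouthShapeKey Key;
            Key.Time    = FCString::Atoi(*FrameStr) / FramesPerSecond;
            Key.Phoneme = FName(*Phoneme.TrimStartAndEnd());
            OutKeys.Add(Key);
        }
    }
    return OutKeys.Num() > 0;
}
```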
I’m wondering about going another way. I’ve seen lots of videos on YouTube where people just put markers on their own face, film it with a webcam, track it, and transfer the result to the mesh… Tracking it is not a problem for me, but I don’t know what to do next, how to connect the tracking data with the character’s face (they used facial bones, right?)
Did you guys see any tutorial about this? Everything I found is about how to do it with a Kinect and other special hardware.
Maybe this way would fit me very well, because it provides a pretty decent result and animates the face’s emotions and mouth at once… Tons of saved time, right?
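For the “what to do next” part, one common approach (just a sketch, with assumed bone and variable names, not something from those videos) is to push the tracked values into a custom AnimInstance and let “Transform (Modify) Bone” nodes in the Animation Blueprint read them:

```cpp
// Hedged sketch: feed webcam tracking results into variables on a
// custom AnimInstance; in the AnimGraph, Transform (Modify) Bone nodes
// targeting e.g. an assumed "jaw" bone consume these variables.
#include "Animation/AnimInstance.h"
#include "FaceTrackingAnimInstance.generated.h"

UCLASS()
class UFaceTrackingAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // Written by game code each frame, read in the AnimGraph by a
    // Transform (Modify) Bone node on the jaw bone.
    UPROPERTY(BlueprintReadOnly, Category = "FaceTracking")
    float JawOpenDegrees = 0.f;

    // Same idea for the lip corners, consumed by bones or morphs.
    UPROPERTY(BlueprintReadOnly, Category = "FaceTracking")
    FVector2D LipCornerOffset = FVector2D::ZeroVector;

    // Call from wherever your tracking results arrive; both inputs are
    // assumed to be normalized 0..1 values produced by your tracker.
    void UpdateFromTracking(float NormalizedJawOpen, const FVector2D& NormalizedLipCorners)
    {
        // ~25 degrees of maximum jaw swing is a guess; tune to the rig.
        JawOpenDegrees  = FMath::Clamp(NormalizedJawOpen, 0.f, 1.f) * 25.f;
        LipCornerOffset = NormalizedLipCorners;
    }
};
```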
Exactly… they fired almost everyone at Faceshift, and everyone is ****** right now because lots of people bought the Intel sensor @ 60 fps for Faceshift, which was cheap and perfect for facial animation… and now you can’t get a license… great move, Apple, great move.
Using morph targets in a video game is not the ideal solution for lip sync animation, because of the resource load once you have a broad range of different characters.
Each character needs its own matched set of targets, so the total memory footprint grows with every character you add, possibly into the gigabytes.
Each character you add will also need “Used with Morph Targets” enabled in its materials, once again adding to the resource load.
Morph targets are not GPU-accelerated at the moment; that is still on the Trello roadmap as a future feature addition.
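To put a rough number on that (all figures below are assumptions: about 28 bytes per affected vertex per target, a 15,000-vertex face region, 30 visemes/expressions per character, 40 unique characters):

```cpp
// Back-of-envelope only, every number here is an assumption.
constexpr unsigned long long BytesPerDelta       = 28;     // position delta + normal delta + vertex index
constexpr unsigned long long AffectedVertices    = 15000;  // vertices touched by each target
constexpr unsigned long long TargetsPerCharacter = 30;     // visemes + expressions
constexpr unsigned long long UniqueCharacters    = 40;

constexpr unsigned long long TotalBytes =
    BytesPerDelta * AffectedVertices * TargetsPerCharacter * UniqueCharacters;
// 28 * 15,000 * 30 * 40 = 504,000,000 bytes, roughly half a gigabyte
// of morph data before textures, audio, or anything else.
```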
A better solution is to use clusters, joints, or markers as part of the rig setup.
The memory footprint is the same as for any other animation clip, and the clip can be used as an instanced copy across as many different characters as you want.
Because it’s transform-based animation, it is handled on the GPU.
No need to set any material requirements.
The animation can be authored to say “hello” in the same way you animate a run cycle, and it can be added to the graph just as easily as an aim offset, using a layered blend per bone.
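A minimal sketch of that setup, assuming the facial clip is authored as a montage in its own “Face” slot and the AnimGraph has a Layered blend per bone rooted at the head bone (both names are assumptions):

```cpp
// Hedged sketch: play a bone-based facial/dialogue animation as a
// montage in its own slot; the "Layered blend per bone" node in the
// AnimGraph keeps the body locomotion untouched below the head bone.
#include "GameFramework/Character.h"
#include "Animation/AnimMontage.h"

void SayHello(ACharacter* Speaker, UAnimMontage* HelloFaceMontage)
{
    if (Speaker && HelloFaceMontage)
    {
        // The montage asset is authored to use the assumed "Face" slot;
        // the blend per bone node does the rest inside the AnimGraph.
        Speaker->PlayAnimMontage(HelloFaceMontage);
    }
}
```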
I did this test a while back with a character model that was already set up using clusters, using MotionBuilder’s Voice device.
I did not do eye movement, blinking, or expressions, as those can be layered on and procedurally driven.
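For what it’s worth, here is a hedged sketch of the kind of procedural blinking that can be layered on top, assuming a “Blink” morph target exists (an eyelid bone works the same way):

```cpp
// Hedged sketch of procedurally driven blinking: a component that fires
// a quick close/open on an assumed "Blink" morph target every few seconds.
#include "Components/ActorComponent.h"
#include "Components/SkeletalMeshComponent.h"
#include "GameFramework/Actor.h"
#include "BlinkComponent.generated.h"

UCLASS(ClassGroup = (Custom), meta = (BlueprintSpawnableComponent))
class UBlinkComponent : public UActorComponent
{
    GENERATED_BODY()
public:
    UBlinkComponent() { PrimaryComponentTick.bCanEverTick = true; }

    virtual void TickComponent(float DeltaTime, ELevelTick TickType,
                               FActorComponentTickFunction* ThisTickFunction) override
    {
        Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

        USkeletalMeshComponent* Mesh = GetOwner()->FindComponentByClass<USkeletalMeshComponent>();
        if (!Mesh) { return; }

        TimeUntilBlink -= DeltaTime;
        if (TimeUntilBlink <= 0.f)
        {
            BlinkElapsed   = 0.f;                         // start a new blink
            TimeUntilBlink = FMath::FRandRange(2.f, 6.f); // next blink in 2-6 s
        }

        // A blink lasts ~0.2 s: the weight ramps 0 -> 1 -> 0 over that window.
        BlinkElapsed += DeltaTime;
        const float Weight = (BlinkElapsed < 0.2f)
            ? FMath::Sin(BlinkElapsed / 0.2f * PI)
            : 0.f;
        Mesh->SetMorphTarget(TEXT("Blink"), Weight);
    }

private:
    float TimeUntilBlink = 3.f;
    float BlinkElapsed   = 1.f; // start in the "finished" state
};
```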