What is the easiest way to animate a non-MetaHuman face? This is for a character which is not human, or even humanoid. It's a talking kitchen utensil. I guess Live Link and the new audio-to-facial-animation feature will only work with MetaHumans, or humans in general? Or can it be made to work here?
I’m trying to avoid having to hand animate, either by rig or by morphs etc. Some sort of lip sync would be great! Thanks.
Or is the best way just to use Blender for the talking parts?
P.S. Wouldn’t it be great if UE had an interactive community? The best piece of 3D software in the world has probably the least interactive community. Ask how to detangle a cassette tape, how to adjust a carburetor, how to get an old 1984 Mac running or even the best way to watch paint dry on any other forum and you get a dozen replies in a day. UE forum, most questions, crickets…
If you want to avoid hand animation and lean on lipsync tooling, you will likely need some way to interpret the data those tools provide. It's been a good while since I last looked at lipsync, but the way I've seen it done in a few places is to extract visemes and drive blends based on those.
I can’t speak to the audio to facial animation system, but Microsoft’s system appears to function this way.
Either way, to make use of automated lipsync you will likely need blendshapes or joint poses on the mesh that express the visemes in some form.
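Just to illustrate the idea, here's a rough Blender Python sketch of driving shape keys from timed viseme data. Treat everything in it as an assumption: the viseme names, the timing data, and the "UtensilFace" mesh name are placeholders, and whatever lipsync tool you pick will have its own output format you'd need to parse.

```python
# Hypothetical sketch: keyframe shape keys from timed viseme data in Blender.
# Assumes the mesh has shape keys named after each viseme ("AA", "OH", "MBP", ...)
# and that some external lipsync tool produced (time_in_seconds, viseme) pairs.
import bpy

FPS = 24  # match your scene's frame rate

# Example output from a lipsync tool; in practice you'd parse its file format.
visemes = [
    (0.00, "MBP"),   # closed lips
    (0.12, "AA"),    # open mouth
    (0.30, "OH"),    # rounded mouth
    (0.45, "MBP"),
]

obj = bpy.data.objects["UtensilFace"]          # hypothetical mesh name
keys = obj.data.shape_keys.key_blocks

for time_s, viseme in visemes:
    frame = int(time_s * FPS)
    for kb in keys:
        if kb.name == "Basis":
            continue
        # Drive the matching viseme shape to 1.0, everything else to 0.0.
        kb.value = 1.0 if kb.name == viseme else 0.0
        kb.keyframe_insert(data_path="value", frame=frame)
```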
Thanks. I was looking into some auto lip-sync add-ons for Blender. It would be great if the one in UE worked with non-MetaHuman characters, or even non-human ones. But since it doesn't, I think the easiest way will be to export the shots where there is face animation to Blender, do it there, and export from there to edit it all together in an NLE. It seems it would be easier than what you suggested? Or do you see advantages?
Fairly fresh topic, so here's the answer in terms of best practice for 2024/25:
Morph targets. And an iPhone to record them for free via Live Link Face.
Setting up the morph targets in Blender is done by creating 52 shape keys and sculpting each one with the appropriate mesh deformation that works for your actor.
The closer the expressions match your actor, the better/more natural the end result will seem. (For an inanimate object you probably don't need to match anyone that closely.)
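For reference, a minimal sketch of that setup step in Blender Python might look like the following; "UtensilFace" is a hypothetical mesh name and only a few of the 52 ARKit blendshape names are listed, so treat it as a starting point rather than a working add-on.

```python
# Rough sketch: add empty shape keys named after the ARKit blendshapes so that
# Live Link Face data has something to drive. Only a few of the 52 names are
# shown here; "UtensilFace" is a hypothetical mesh name.
import bpy

ARKIT_SHAPES = [
    "jawOpen", "mouthClose", "mouthFunnel", "mouthPucker",
    "mouthSmileLeft", "mouthSmileRight", "browInnerUp",
    "eyeBlinkLeft", "eyeBlinkRight",
    # ... remaining ARKit blendshape names ...
]

obj = bpy.data.objects["UtensilFace"]
if obj.data.shape_keys is None:
    obj.shape_key_add(name="Basis")        # base shape must exist first

for name in ARKIT_SHAPES:
    if name not in obj.data.shape_keys.key_blocks:
        obj.shape_key_add(name=name, from_mix=False)

# Each added shape key is still the neutral mesh; sculpt the deformation for
# each one with that shape key selected.
```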
Let me see if I can find a good ol' Blender tutorial on this…
This one is still relevant but outdated. Relevant for non-iPhone users.
This is the other one, which should still work, even though the script will likely need tweaking.
Assuming you are now familiar with the concept.
This is where you get your 52 poses from:
You have an actor make faces at the camera for a while, following the instructions for each shape.
Model each face shape to match.
And you end up with a near-perfect (recording can always be faulty) final rendering.
Thanks a lot. I will check all that. Especially the one for non-iPhone users, which is me.
But was I right in my assumption that the easiest way is to do the shots in Blender instead of in UE?
These seem quite involved. Since this is for a cartoon, I might not need the precision of motion capture or a video reference. I found some audio-only lip-sync add-ons which seem like they might do the job well enough. What do you think? Or can you recommend any simpler solution than the above? Thanks again.
It can be fairly involved, especially if you aren’t used to the workflow. Morphs are usually done in dedicated software, such as Maya or Blender, then imported. I think it may now be possible in the editor, but don’t quote me on that.
Lipsync should give you the basics, in that the character will follow the overall mouth shapes, but the rest of the face largely won't emote. You could manually animate those emotions (those morph targets and a control rig definitely help speed that up), or you can use performance capture techniques.
There should be a fair number of tools around for this, though. The shapes that MostHost mentions are based on the Facial Action Coding System (FACS). These are largely standardised, so many tools will support them. As an aside, a great reference for what each shape looks like on a human face is FACS – cheat sheet – Face the FACS. Stylised characters typically exaggerate these forms somewhat, but the core ideas remain the same.
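Purely as an illustration of that exaggeration point, you could scale selected recorded curve values before applying them as shape key weights. The names below are real ARKit blendshapes, but the multipliers are made-up numbers to tune by eye.

```python
# Illustrative only: exaggerate a stylised performance by scaling selected
# ARKit curve values before they are applied as shape key weights.
EXAGGERATION = {
    "jawOpen": 1.4,
    "mouthSmileLeft": 1.6,
    "mouthSmileRight": 1.6,
    "browInnerUp": 1.8,
}

def exaggerate(curves: dict[str, float]) -> dict[str, float]:
    """Scale each incoming blendshape weight, clamped to the 0..1 range."""
    return {
        name: min(1.0, value * EXAGGERATION.get(name, 1.0))
        for name, value in curves.items()
    }

# Example: one frame of recorded weights from a facial capture session.
frame = {"jawOpen": 0.5, "mouthSmileLeft": 0.3, "eyeBlinkLeft": 0.1}
print(exaggerate(frame))  # {'jawOpen': 0.7, 'mouthSmileLeft': 0.48, 'eyeBlinkLeft': 0.1}
```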