Hi there, I’ve been looking around for a solution that lets my character and NPCs have lip sync done at runtime from an audio file. The idea is that I already have morph targets set up for my visemes, and I want the mouth to animate based on the dialogue audio in real time, instead of having to pair up thousands of dialogue recordings with thousands of separate animations. If pre-animated lip sync is the only way, I will have to live with that, but it would be cool to have it done automatically.
I feel like this should be possible too… you can do it in other programs. Did you find a solution? For me, even a simple solution like "the louder the sound, the wider the mouth opens" would be better than nothing, even if they look like puppets.
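In case it helps anyone landing here later, a minimal C++ sketch of that "louder sound, wider mouth" idea, assuming you already get an amplitude/envelope value each frame (for example from an envelope follower on the audio component) and that your mesh has a mouth-open morph target. The "MouthOpen" name and the threshold numbers are placeholders you would tune for your own character and audio levels.

```cpp
#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"
#include "Math/UnrealMathUtility.h"

// Maps a loudness/envelope value onto a "mouth open" morph target.
// Call this once per frame with the latest envelope value.
void UpdateMouthFromLoudness(USkeletalMeshComponent* FaceMesh, float EnvelopeValue)
{
    if (!FaceMesh)
    {
        return;
    }

    // Tune to your audio: raw envelope values are usually well below 1.0.
    const float QuietThreshold = 0.01f; // below this the mouth stays closed
    const float LoudCeiling    = 0.25f; // at or above this the mouth is fully open

    // Remap [QuietThreshold, LoudCeiling] -> [0, 1], clamped at both ends.
    const float Alpha = FMath::GetMappedRangeValueClamped(
        FVector2D(QuietThreshold, LoudCeiling),
        FVector2D(0.0f, 1.0f),
        EnvelopeValue);

    // "MouthOpen" is whatever morph target / blend shape your character uses.
    FaceMesh->SetMorphTarget(TEXT("MouthOpen"), Alpha);
}
```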
I know this is very late, but you can check out the OVR Lipsync plugin. You can use audio files to generate OVRLipSync files that output visemes with correct timing, which you can then use to drive your morph targets. By default you need to generate the OVRLipSync file in the editor, but I think you can modify the code a bit to make it work at runtime.
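Not the plugin’s actual API, just a hedged sketch of the last step: once you have per-frame viseme scores (however the plugin hands them to you), applying them to morph targets is a simple loop. The morph target names below follow the 15-viseme Oculus naming convention as an example; use whatever names your mesh actually has, in the same order the scores arrive.

```cpp
#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"

// Example viseme morph target names (Oculus-style ordering); replace with your mesh's names.
static const FName VisemeMorphNames[] = {
    TEXT("viseme_sil"), TEXT("viseme_PP"), TEXT("viseme_FF"), TEXT("viseme_TH"),
    TEXT("viseme_DD"),  TEXT("viseme_kk"), TEXT("viseme_CH"), TEXT("viseme_SS"),
    TEXT("viseme_nn"),  TEXT("viseme_RR"), TEXT("viseme_aa"), TEXT("viseme_E"),
    TEXT("viseme_ih"),  TEXT("viseme_oh"), TEXT("viseme_ou")
};

// Applies one frame of viseme scores (0..1 each) to the matching morph targets.
void ApplyVisemes(USkeletalMeshComponent* FaceMesh, const TArray<float>& VisemeScores)
{
    if (!FaceMesh)
    {
        return;
    }

    const int32 Count = FMath::Min<int32>(VisemeScores.Num(), UE_ARRAY_COUNT(VisemeMorphNames));
    for (int32 Index = 0; Index < Count; ++Index)
    {
        FaceMesh->SetMorphTarget(VisemeMorphNames[Index], VisemeScores[Index]);
    }
}
```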
I guess the OVR plugin only works with Oculus platform games?
I used to use it with non-Oculus-targeted projects and it worked the way I intended. On the legal side, however, I am not sure whether the EULA for the OVR plugin requires it to be used only in Oculus-targeted projects in order to publish.
Hi, you can take a look at this plugin: Runtime MetaHuman Lip Sync/Lipsync (+ CC4, Genesis, ARKit, and more) | Fab
I’ve done it by using envelope following and FInterpTo to drive the mouth-open alpha on a MetaHuman. It’s cheap, and can be dialed in to look better than you’d think in certain contexts. You could probably drive different face controls using Get Magnitude For Frequencies on whatever sub-band you’re analyzing, off of Tick.
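For anyone trying this, a rough C++ version of that smoothing step, assuming you cache the latest envelope value and call this from Tick. The input range and interp speed are guesses you would tune by eye, and the returned alpha would then feed SetMorphTarget or whichever MetaHuman face control you decide to drive.

```cpp
#include "CoreMinimal.h"
#include "Math/UnrealMathUtility.h"

// Eases the current mouth-open alpha toward a target derived from the latest
// envelope value, so the jaw doesn't snap open and shut every frame.
float SmoothMouthOpen(float CurrentAlpha, float LatestEnvelope, float DeltaTime)
{
    // Remap the raw envelope onto a 0..1 target; these bounds are placeholders.
    const float Target = FMath::GetMappedRangeValueClamped(
        FVector2D(0.01f, 0.3f),
        FVector2D(0.0f, 1.0f),
        LatestEnvelope);

    // Higher InterpSpeed = snappier mouth, lower = mushier; purely a taste parameter.
    return FMath::FInterpTo(CurrentAlpha, Target, DeltaTime, /*InterpSpeed=*/12.0f);
}
```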