OVRLipSync Plugin for UE4

It is a very nice plug-in!
I was impressed!

By the way, can it lip-sync to a voice that's being played with the Spawn Sound at Location node, instead of the microphone?

If that's possible, you're God!

Update! I spent a few minutes messing with the code and have got it more or less working how I expect. It went from how it was, which was completely broken (random movement all the time, movement never matching the input), to where it is now, which is basic functionality, even if it's not as smooth as the Unity version. At this point things can progress, now that I'm seeing some correct results :slight_smile:

So anyone who tried it before, test it again with the latest commits; it should produce some decent-ish output, with the caveat that it doesn't go back to silence when mic input stops. I'll be working on that soon.

For now this functionality is only in the ovrlipsync-example repo, I’ll be putting it on the plugin-only repo soon here.

Eventually I’m going to have it support taking different sorts of input (SoundWave, direct buffer etc), but for the moment I’m focusing on getting the basic functionality with mic input working.

Thanks a lot again, n00854180t! Manual lipsync is painfully time-consuming, and the other solutions out there are just fiddly and simply not cut out for the Unreal Engine. Thanks again for putting in the time and effort :).

@n00854180t

Haven’t heard any news for a while :o Wondering if using a sound file as input (instead of the mic) is already implemented :o

Haven’t had any time to work on the lip sync lately.

I’ll try and push an update that will give you basic support. Basically: you’ll need to grab eXi’s Sound Visualization plugin in order to get the data out of the sound wave to pass into the lip sync plugin. I was going to just wait and make it so it would work without doing that, but it’s far easier to get it pushed out if I don’t have to mess with the soundwave code.
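To give a rough idea of what that would look like once the Sound Visualization plugin has handed you decompressed samples, here's a minimal sketch. This is not the plugin's actual API; the push callback just stands in for whatever audio-input entry point the lip sync component ends up exposing.

```cpp
#include "CoreMinimal.h"

// Rough sketch, not the plugin's real API: chunk an already-decompressed
// mono PCM buffer into ~10 ms frames and hand each frame to whatever entry
// point the lip sync plugin exposes for audio input (represented here by a
// callback, since the real function name may differ).
void FeedPCMInFrames(const TArray<int16>& MonoPCM,
                     int32 SampleRate,
                     TFunctionRef<void(const int16* Samples, int32 NumSamples)> PushFrame)
{
    const int32 SamplesPerFrame = SampleRate / 100; // ~10 ms per frame

    for (int32 Offset = 0; Offset + SamplesPerFrame <= MonoPCM.Num(); Offset += SamplesPerFrame)
    {
        PushFrame(MonoPCM.GetData() + Offset, SamplesPerFrame);
    }
}
```

The main point is that the mic path and the sound-file path can converge on the same per-frame feed, so the rest of the viseme logic doesn't care where the audio came from.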

:frowning:

Does eXi’s plugin work on Android ?

It should work.

If it does, then indeed it would be more practical to use that plugin than write whole new code.

Can you post a video of this in use?

Probably the earliest I could do that is this weekend.

That said, if you download the OVRLipSyncExample repo (rather than the one that has just the plugin), it has a fully ready-to-go example project with a mesh all set up - all you have to do is open it in the editor and hit play, then speak into the mic.

The real reason I'd want to use the plugin is that it selectively decompresses bits of the SoundWave rather than the entire thing at once (the default behavior), and it can also do so in packaged builds. So without recreating the parts of eXi's plugin that I wrote to do that, it wouldn't work very well on large files (it would blow your memory out fast).

Ahh, gotcha. Good deal!

Well, it’s been almost a month since my last post :slight_smile: Just wondering what’s new :o

I forked the repository, fixed the lipsync result, and ported the components from Unity, like the morph target and texture flip:

@windywang - Excellent work dude, thanks for fixing this up, I’ve been swamped on other projects.

@windywang any chance of adding support for audio files as input, in addition to the mic? (for example, to have NPCs talking)

If @windywang doesn’t get to it I’ll be adding that in myself soon here.

I have tried to do that, and it works. But there is no raw PCM buffer in UE4, so you need to decompress the audio data, and that takes a lot of memory! By the way, I use a fixed sample rate of 16000 in the plugin, so the SoundWave file must be 16000 Hz sample rate and 1 channel. The ProcessFrame function is designed so it can be fed from any audio data source; the next step is streaming decompression of the raw PCM data, so the memory footprint will be lower.
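For anyone whose source audio isn't already in that format, here's a naive sketch of converting an interleaved buffer to 16 kHz mono before handing it to ProcessFrame. The function name is invented for the example, and the nearest-sample resampling is deliberately simple; a real implementation should low-pass filter before dropping samples.

```cpp
#include "CoreMinimal.h"

// Naive sketch (invented helper, not part of the plugin): downmix an
// interleaved PCM buffer to one channel and resample it to the fixed
// 16000 Hz rate the plugin expects.
TArray<int16> ToMono16kHz(const TArray<int16>& InterleavedPCM,
                          int32 NumChannels,
                          int32 SourceSampleRate)
{
    const int32 TargetRate = 16000;
    const int32 NumSourceFrames = InterleavedPCM.Num() / NumChannels;
    const int32 NumTargetFrames =
        (int32)((int64)NumSourceFrames * TargetRate / SourceSampleRate);

    TArray<int16> Mono;
    Mono.Reserve(NumTargetFrames);

    for (int32 TargetFrame = 0; TargetFrame < NumTargetFrames; ++TargetFrame)
    {
        // Nearest source frame for this target frame (no filtering).
        const int32 SourceFrame =
            (int32)((int64)TargetFrame * SourceSampleRate / TargetRate);

        // Average the channels down to mono.
        int32 Sum = 0;
        for (int32 Channel = 0; Channel < NumChannels; ++Channel)
        {
            Sum += InterleavedPCM[SourceFrame * NumChannels + Channel];
        }
        Mono.Add((int16)(Sum / NumChannels));
    }
    return Mono;
}
```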

Wouldn’t it be better to record low-quality sounds just for the lipsync, to cut down RAM usage, and use normal sounds (or FMOD banks, depending on the platform, I guess) to play back the actual audio?

I was planning on using eXi’s Sound Vis plugin to do the decompression (so it only has to decompress a chunk at a time). The code in there that decompresses to PCM samples is something I wrote for the purpose of making the Sound Vis plugin workable in packaged games, and eXi did all the grunt work of polishing it up into the plugin.

If you’re really that pressed for performance, I would just record the resulting viseme values over time, then use those directly to drive the character. In the case of canned audio to be used for NPCs, there isn’t any real drawback to doing so.
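A minimal sketch of what that baking-and-playback idea could look like, with invented type and function names (this is not part of the plugin, just an illustration): record one keyframe of viseme weights per processed audio frame while the clip plays once, then at runtime sample the baked track instead of running lip sync at all.

```cpp
#include "CoreMinimal.h"

// Illustrative only; names are invented for the example.
struct FBakedVisemeKey
{
    float Time;            // seconds from the start of the clip
    TArray<float> Weights; // one weight per viseme / morph target
};

// Linearly interpolate between the two keys surrounding PlaybackTime.
TArray<float> SampleBakedVisemes(const TArray<FBakedVisemeKey>& Track, float PlaybackTime)
{
    if (Track.Num() == 0) { return TArray<float>(); }
    if (PlaybackTime <= Track[0].Time) { return Track[0].Weights; }
    if (PlaybackTime >= Track.Last().Time) { return Track.Last().Weights; }

    // Find the first key at or after PlaybackTime.
    int32 Next = 1;
    while (Track[Next].Time < PlaybackTime) { ++Next; }

    const FBakedVisemeKey& A = Track[Next - 1];
    const FBakedVisemeKey& B = Track[Next];
    const float Alpha = (PlaybackTime - A.Time) / (B.Time - A.Time);

    TArray<float> Result;
    Result.Reserve(A.Weights.Num());
    for (int32 i = 0; i < A.Weights.Num(); ++i)
    {
        Result.Add(FMath::Lerp(A.Weights[i], B.Weights[i], Alpha));
    }
    return Result;
}
```

Baking once per line of dialogue and saving the track in a data asset would let packaged builds skip both lip sync processing and audio decompression for NPC dialogue entirely.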