OVRLipSync Plugin for UE4

OBSOLETE! USE THE OFFICIAL OCULUS VERSION HERE: https://developer.oculus.com/downloads/package/oculus-lipsync-unreal/

OFFICIAL DOCUMENTATION: Not Found | Oculus


Old thread…

Example Project + Plugin Repo:

Plugin Only Repo:

Updates

Updated to windywang’s version, with fixed code and support for image-based visemes.

After messing with PocketSphinx’s phoneme recognition and not getting the results I wanted, I looked back into porting Oculus’ LipSync plugin for Unity over to UE4, since the Unity plugin is just a wrapper for a DLL and some examples of how to use it.

I’m happy to report that I’ve got a basic version of the OVRLipSync plugin working in UE4, and it’s ready for people to use. I’ve gone ahead and made an example project to show how to use the UE4 version of the plugin (quite straightforward, see example images below). The project has an example mesh to see it in action, and should work out of the box.

For those that just want a quick rundown on how to use it without downloading the example project, here are some screenshots of the VisemeGenerationActor derived blueprint class.

VisemeGenerationActor event graph setup:

SetMorphTargets function:

MorphIdxToNames array:

As long as the mesh has the appropriate morphs as listed above, it will work decently well. I’m sure there are things I’m doing wrong, and bug reports are welcome!
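For anyone who can’t view the screenshots: the MorphIdxToNames array just mirrors the standard Oculus viseme set. Here’s a minimal C++ sketch of that table (the morph target names on your mesh are assumed to match these; rename to taste):

```cpp
#include <array>
#include <string>

// The 15 visemes OVRLipSync reports, in order. Index 0 is silence.
// Your skeletal mesh needs a morph target for each (names may differ,
// as long as your lookup array maps index -> your morph name).
static const std::array<std::string, 15> kVisemeNames = {
    "sil", "PP", "FF", "TH", "DD", "kk", "CH",
    "SS",  "nn", "RR", "aa", "E",  "ih", "oh", "ou"
};
```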

Hey!

Nice work!

I downloaded the plugin and example file, but strangely she doesn’t do anything when I hit Play on the example map. Did I miss something?

Speak into your mic! :stuck_out_tongue:

I did try it, but it just won’t work. I also deactivated everything besides my mic, which is a USB mic. Are there any other requirements besides the plugin?

It would be nice if other users could check whether they have the same problem. I also tried standalone mode, a new editor window, and the selected viewport. I’m running it on Windows 10.

Not sure then, I’ll have to debug it.

@DarkGodsLair - DOH! Really really stupid of me, I know why it’s not working! I just checked it out on my end and saw the error message “Failed to create audio device”. That’s when it hit me: Voice capture isn’t enabled in the ini, doh.

The repo has now been updated to fix this. If you don’t want to re-download, you can simply go into the Config folder and edit DefaultEngine.ini and add these lines:



[Voice]
bEnabled=true


I should note that even though that will get it working, the results aren’t really up to snuff with the official Unity version just yet. Basically, it’s not entirely ready for production.

Haha, I know exactly how you feel. Sometimes I am so braindead that I forget even the simplest of things :D.

Thanks for looking into it. Will test it right away. You stated already above that this is a really basic implementation, so I don’t mind the limitations :).

EDIT: I can confirm that it is working :).

There’s something actually going wrong with it, despite it working right now - I think it’s due to not utilizing the frame numbers/frame delays, which from my tests were returning weird numbers, so I just skipped it. There’s also the issue that it doesn’t go back to silence/closed mouth sometimes after speaking.

Edit: Gotta be something else going on, the frame numbers I don’t think are the main issue.
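One idea I might try for the stuck-mouth problem (purely hypothetical, not in the plugin yet): gate viseme updates on a simple RMS energy check, so the mesh falls back to silence when the mic signal drops below a threshold. A rough sketch, assuming 16-bit PCM samples and a made-up threshold value you’d tune per mic:

```cpp
#include <cmath>
#include <cstdint>
#include <cstddef>

// Hypothetical tuning value -- adjust for your mic's noise floor.
constexpr float kSilenceThreshold = 0.01f;

// Root-mean-square energy of a buffer of 16-bit PCM samples,
// normalized to [0, 1].
float ComputeRms(const int16_t* samples, size_t count)
{
    if (count == 0) return 0.0f;
    double sumSq = 0.0;
    for (size_t i = 0; i < count; ++i)
    {
        const double s = samples[i] / 32768.0;  // normalize to [-1, 1)
        sumSq += s * s;
    }
    return static_cast<float>(std::sqrt(sumSq / count));
}

// If this returns false, drive all visemes to 0 (or the "sil" pose)
// instead of feeding the buffer to the recognizer.
bool ShouldDriveVisemes(const int16_t* samples, size_t count)
{
    return ComputeRms(samples, count) > kSilenceThreshold;
}
```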

I kind of got the feeling that the mic input always triggers/activates the lip sync, so the motion of the mouth doesn’t stop. Maybe imposing a mic input threshold would get rid of it, but I am mainly a blueprint guy, so take this with a grain of salt :).

Hey man. Great to see this come out :slight_smile:
I’ll play around with this over the weekend.
If you are having accuracy issues, perhaps you could feed a constant audio recording through as input to both Unity/UE, to log timestamps.
At least then, you’d be able to visualize the differences easily. Just an idea.

I think I know what’s going on. I suspect it’s to do with setting the main audio buffer to a static size, so it reads past the valid data into leftover bytes that don’t hold anything meaningful. I just have to test and upload the new version, probably tomorrow. Initially I was getting better results, more in keeping with the Unity version, so I think reverting that change should do the trick; we’ll see.
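To illustrate the suspected bug: a voice-capture API typically fills a fixed-size buffer and separately reports how many bytes are actually valid. Feeding the whole fixed-size buffer to the recognizer means it also chews through the stale bytes past the valid region. A hedged sketch of the fix (names here are made up, not the plugin’s actual code):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Copy only the bytes the capture API reported as valid, clamped to
// the buffer's physical size, before handing audio to the recognizer.
std::vector<uint8_t> TrimToValidAudio(const uint8_t* buffer,
                                      size_t bufferSize,
                                      size_t validBytes)
{
    const size_t n = (validBytes < bufferSize) ? validBytes : bufferSize;
    return std::vector<uint8_t>(buffer, buffer + n);
}
```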

Pretty cool stuff!

Thanks for sharing :slight_smile:

Edit: I was thinking of adapting the Kite Boy to be used as a test subject for the plugin, and sharing it once the morphs are done. Is there a sample mesh for Unity available? Just to have a look at the morph setup and avoid weird results :wink:

There’s a sample mesh in the example project right now (Oculus’ one from their original Unity plugin). On the Unity asset store, I know the Morph3D characters have the appropriate morphs already set up, but you need to go in and delete the secondary skeleton for the eyes before UE4 will import the FBX files for those (there are free “lite” versions of those characters).

hi, does this work if I want to use animations instead of morph targets?

It just spits out an array of viseme values - how to use them is up to you, animations can certainly work.

Nice, but how exactly would we switch between either anims or drawn phonemes (I am guessing it would be a sprite sheet or texture atlas with drawn expressions)?

@motorsep - Not sure - I didn’t really look into the image based version in the Unity plugin. I suggest taking a look at it and see how they do it there - you’d just want to copy the same approach.

The plugin itself is not set up to care about what you do with the values, so if you want to use animations, images or morphs, you just need to hook that up specifically. I used the morph targets because they are the easiest to plug and play.
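To make the “plug and play” part concrete: each frame the plugin hands you one weight in [0, 1] per viseme, and driving morphs is just copying each weight to the matching morph target. A minimal engine-free sketch (in UE4 proper you’d call USkeletalMeshComponent::SetMorphTarget with the name from your index-to-name array; the sink struct here is a stand-in for illustration):

```cpp
#include <cstddef>

// Stand-in for a skeletal mesh component; records the morph weights
// that would be pushed to the mesh each frame.
struct MorphTargetSink
{
    float weights[15] = {};

    void SetMorphTarget(size_t idx, float value) { weights[idx] = value; }
};

// Copy the per-viseme weights straight onto the morph targets.
// For animations or image atlases you'd instead pick the index with
// the highest weight and switch the anim/sprite accordingly.
void ApplyVisemes(const float* visemes, size_t count, MorphTargetSink& mesh)
{
    for (size_t i = 0; i < count; ++i)
        mesh.SetMorphTarget(i, visemes[i]);
}
```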

Very interesting!
I am making a game in which characters will talk, and I was searching for solutions. The only one I found that didn’t cost a fortune (and it’s free) was a Blender plug-in combined with the Papagayo software. Basically, the sound triggers rigged bones that move appropriately, forming Ah, O, Ie, etc. in the mesh, and then you import those as animations in UE4.
OVRLipSync is working on my PC, but unfortunately, whatever I say into the mic, the mesh plays morphs randomly, and also, as was mentioned, the last phoneme used stays stuck on the mesh’s face. Could this be affected by mic quality? (I used a poor headphone mic.) Also, how is it used in a production pipeline? Is there a Rec button in UE4 that converts my voice into an audio or animation sequence to be used on characters?
If you can make it work flawlessly I am willing to donate/pay for it! Especially if it’s by the 20th-25th of June, because of my deadline.

@n00854180t , this is awesome, I will test the project as soon as I have a little time. Thanks!!

Hey guys, thanks for your interest :slight_smile:

I didn’t get a chance to look into the problems with the plugin this weekend, as I was working on getting some stuff implemented for my game. I hope to fix up the problem this week, though, and will post updates when I get something.