Anyone using Audio2Face from NVIDIA to create lip-sync and facial animations from audio files? Having trouble importing the data into Unreal (4 and 5) and with animation playback


Has anyone been able to export Audio2Face cache data and then import it into Unreal? And if so, would you mind sharing the steps? I'm not an animator and my knowledge is somewhat limited, and I can't manage to import the data into Unreal 5 (I also tried Unreal 4, same issues).

Here are the steps that I am doing:

Create the cache data in Audio2Face and import it into Maya, where everything plays back correctly. From there I have tried two methods.

Method 1: My Maya scene contains a single facial mesh, and I import the cache onto it; it plays back in Maya. The facial mesh is rigged to one bone so that I can import it into Unreal. The import creates an animation asset, but when I play it back nothing moves; I can only see the frame that was exported. An animation curve is also imported. I've tried putting values into the curve, but the animation still doesn't play back (no movement at all).

Method 2: My Maya scene contains two meshes: a non-rigged mesh onto which I import the cached data, and a copy of the same mesh rigged to a single bone. The rigged mesh has one blend shape. After importing the cache data, both meshes play back the animation correctly in Maya. I export the rigged mesh with its one bone, but I get the same result in Unreal: no playback of the animation.

For both methods, in the FBX export options I make sure to bake the frames and export animations.
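In case it helps anyone reproduce the export step, this is roughly what the "bake frames + export animation" FBX settings look like when scripted from Maya's Python tab. It's only a sketch: the mesh/joint names and the output path are placeholders, not names from my scene. (Tests are omitted since this only runs inside Maya.)

```python
# Run inside Maya's Script Editor (Python tab).
# "faceMesh", "faceRootJoint", and the output path are placeholders.
import maya.cmds as cmds
import maya.mel as mel

cmds.select(["faceMesh", "faceRootJoint"], replace=True)

# Bake the animation over the playback range, mirroring the
# "Bake Animation" checkbox in the FBX export dialog.
start = int(cmds.playbackOptions(query=True, minTime=True))
end = int(cmds.playbackOptions(query=True, maxTime=True))

mel.eval("FBXResetExport")
mel.eval("FBXExportBakeComplexAnimation -v true")
mel.eval("FBXExportBakeComplexStart -v {}".format(start))
mel.eval("FBXExportBakeComplexEnd -v {}".format(end))
mel.eval("FBXExportShapes -v true")  # include blend shapes
mel.eval("FBXExportSkins -v true")   # include skin weights
mel.eval('FBXExport -f "C:/temp/face_anim.fbx" -s')  # -s = selected only
```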

I’m sure that I’m just missing something very simple, but I haven’t been able to figure it out yet.


If anyone is looking for a solution to this:

Just use the Alembic format to export from Maya and import into Unreal. Unlike a skeletal FBX export, Alembic carries the per-vertex cache animation directly, which seems to be why it plays back correctly. This solution was graciously provided by Ricardo.lpc.
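For anyone who wants to script that last step, the Alembic export can be run from Maya's Python tab via AbcExport. A minimal sketch, where the mesh path and output file are placeholders for your own scene (this only runs inside Maya, so no test is included):

```python
# Run inside Maya's Script Editor (Python tab).
import maya.cmds as cmds

start = int(cmds.playbackOptions(query=True, minTime=True))
end = int(cmds.playbackOptions(query=True, maxTime=True))

# -root takes the DAG path of the cached mesh's transform;
# "|faceMesh" and the output path below are placeholders.
job = ("-frameRange {} {} -uvWrite -worldSpace "
       "-root |faceMesh -file C:/temp/face_anim.abc").format(start, end)
cmds.AbcExport(j=job)
```

In Unreal, import the resulting .abc and set the Alembic import type to Geometry Cache so the vertex animation plays back.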