UE5.6 Audio2Face does not play audio or animate face

Summary: I have created a default metahuman from a MetaHuman Creator preset and imported the example sounds from NVidia's example project. I have followed all of NVidia's setup directions, and I can see log messages showing that audio is sent. However, I cannot hear the audio, and the metahuman's face never animates.

REQUEST: If anyone has gotten Audio2Face working with a locally running model, could you please direct me to your example project?

Setup: Unreal Engine 5.6.1, NVidia ACE Reference Plugin 2.5.0, NVidia Audio2Face Mark 2.4.0, Windows 11 v10.0.26100, GeForce RTX 5090 (driver v32.0.15.9174)

Cross-Posting: Nope… The NVidia developer forums helpfully inform me that I should either get Enterprise support (I'm not an enterprise customer) or log in to Discord. Discord helpfully informs me that I should go to the forums.

Background & Details:

My first attempt was to download NVidia's "Gaming Sample" (https://developer.nvidia.com/ace-for-games). The sample expects UE5.3, but the oldest version supported by the plugin is 5.4 and the oldest version supported by the models is 5.5. After updating the sample, I find that the character selection menu is empty… as are the others. There is also no option to run locally, and, as I mentioned above, I'm not an enterprise customer.

My second attempt was to simply add the plugins to a UE5.6 project. Following NVidia's directions (https://docs.nvidia.com/ace/ace-unreal-plugin/2.5/ace-unreal-plugin-animation.html), I modified the Face_AnimBP as directed and added it to my metahuman. (I did not configure spatial audio.) Then I added AnimateCharacterFromSoundWaveAsync to execute on the BeginPlay event of the metahuman actor blueprint. For testing I used the "Mark" audio samples from NVidia's sample project, which sound fine when played in the editor. Unfortunately, this did not run successfully, and it sometimes caused a segfault.

My third attempt began where my second attempt ended, by copying the plugins into my Unreal project so that I could debug and edit the source. In A2FLocal.cpp the A2E parameters were being applied to A2FCommonCreationParams (lines 302-304) when they probably should be applied to A2ECommonCreationParams. Fixing that allowed the allocation to proceed. I also switched from declaring the chain to use CIG in the PipelineCreationParams (line 322) to declaring this for each element of the a2f and a2e parameters individually. (I'm unsure if this helps.) With these changes, the blueprint executes without crashing… usually.

What I see in the log is:

LogACERuntime: [ACE SID 2] Started LocalA2F-Mark session
...
LogACERuntime: [ACE SID 2] Sent 560 samples to LocalA2F-Mark
...
LogACERuntime: [ACE SID 2] End of sample

As mentioned, no audio plays during this and the metahuman face does not move. The blueprint node then yields Success = false when the Audio Send Completed pin activates. Finally, when I stop playing, I see:

LogACERuntime: [ACE SID 2 callback] received 0 animation samples, 0 audio samples for clip on BP_Test_C None.None:None.None

In light of all this, my conclusion is that no animation samples are received. Setting a breakpoint in ACEAudioCurveSourceComponent.cpp inside ConsumeAnimData_AnyThread (line 336), I can confirm that no animation data is being received, despite data being sent.

In closing, I would like to say that I really do appreciate how much an independent developer like NVidia is able to do on a shoestring budget. I would encourage anyone reading this to donate a few dollars on their GitHub page so that they can afford the time to implement some unit tests.

Although I could not figure out how to get CIG (compute in game) to work, I was able to get this working with a local Docker container. For my purposes, this is sufficient.

Setup:

  1. Update NVidia drivers

  2. Install WSL https://docs.docker.com/desktop/setup/install/windows-install/?uuid=5A2054AF-CC02-4D94-99DB-AE147062405B#system-requirements

  3. Install Docker https://apps.microsoft.com/detail/xp8cbj40xlbwkx?hl=en-US&gl=US

PROBLEM: “Docker Desktop failed to start because virtualisation support wasn’t detected. Contact your IT admin to enable virtualization or check system requirements.”

SOLUTION: Need either Hyper-V or WSL https://docs.docker.com/desktop/setup/install/windows-install/?uuid=5A2054AF-CC02-4D94-99DB-AE147062405B#system-requirements

  • Check the WSL install via PowerShell: wsl --version

  • If WSL is not installed, run wsl --install (see the consolidated commands below)

  • Restart the computer
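
For reference, here is the whole WSL check in one place (run from an administrator PowerShell; wsl --version and wsl --install are the standard commands, though your output will differ):

# Check whether WSL is installed and report its version
wsl --version

# If the command is not recognized, install WSL, then restart before continuing
wsl --install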

  4. Get an NVidia API key from build.nvidia.com/nvidia/audio2face-3d (ignore the brief "restricted by organization" pop-up)

  5. Download NVidia's container using PowerShell. You do NOT need an enterprise account. (A quick check that the image downloaded follows these sub-steps.)

    1. docker login nvcr.io

    2. At the Username prompt, type exactly: $oauthtoken

    3. At the Password prompt, paste your API key

    4. docker pull nvcr.io/nim/nvidia/audio2face-3d:latest
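
To confirm the pull succeeded, you can list the downloaded image (docker images is standard; the tag and size you see may differ):

docker images nvcr.io/nim/nvidia/audio2face-3d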

  6. Make that API key an environment variable

    1. $env:NGC_API_KEY = "..."   (paste your API key between the quotes; a one-shot variant follows below)
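
As a small convenience, here is one way to combine the login and the environment variable so you only paste the key once (the key value is a placeholder; --password-stdin is a standard docker login option):

# Store the key for this PowerShell session (paste your real key between the quotes)
$env:NGC_API_KEY = "..."

# Log in to nvcr.io without typing the key at the prompt
# (single quotes keep $oauthtoken literal instead of expanding it as a variable)
$env:NGC_API_KEY | docker login nvcr.io --username '$oauthtoken' --password-stdin
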
  7. Launch the container

docker run --rm -it --name audio2face-nim `
--gpus all `
-p 8000:8000 `
-p 52000:52000 `
-e NGC_API_KEY=$env:NGC_API_KEY `
-e NIM_HTTP_API_PORT=8000 `
-e NIM_GRPC_API_PORT=52000 `
nvcr.io/nim/nvidia/audio2face-3d:latest
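
Before touching Unreal, it is worth confirming the container actually came up. docker logs is standard; the /v1/health/ready path is an assumption based on other NIM containers, so check the container's own documentation if it returns 404:

# Watch startup; the first run downloads the model and can take several minutes
docker logs -f audio2face-nim

# HTTP readiness probe on the port mapped above (health path assumed)
Invoke-WebRequest http://localhost:8000/v1/health/ready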

  8. Download the "Gaming Sample" project from: https://developer.nvidia.com/ace-for-games#section-get-started-with-nvidia-ace

  9. Open the project, update it to UE5.6, install all MetaHuman plugins, then restart the project

  10. Create a character to animate (these instructions are for when you are not using Quixel Bridge)

    1. In Content/MetaHumans create a folder named for your new metahuman

    2. Right-click to create a new MetaHuman Character, then in the editor select the Preset

    3. Click Create Full Rig

    4. Click Download Texture Source

    5. Select the Assembly panel and click “Assemble” (bottom green button)

    6. Add an idle animation (following NVidia’s instructions)

  11. Follow NVidia's setup instructions: https://docs.nvidia.com/ace/ace-unreal-plugin/2.5/ace-unreal-plugin-animation.html and https://docs.nvidia.com/ace/gaming-avatar/1.1/gaming-avatar-unreal-sample-project.html

    1. Face_AnimBP should already be modified in Gaming Sample

    2. Add an ACEAudioCurveSource component to your metahuman

    3. Add an Audio component to your metahuman and select one of the sample sounds

    4. Add the Face_AnimBP to the metahuman face skeletal mesh

  12. Play in editor

    1. Drop the character into the scene

    2. Click “Settings” in the left HUD panel

    3. Enter http://localhost:52000 for the Server URL

    4. The connection status should turn green (if it does not, see the checks after this list)

    5. Select your metahuman, and your audio clip

    6. Click play. You should see "Receiving audio data" and "Anim data sending" messages in the Docker log, and your metahuman should begin to speak!
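
If the connection status never turns green, or nothing happens on play, two quick checks from PowerShell (both commands are standard; the container name matches the docker run above):

# Confirm the gRPC port the plugin connects to is listening
Test-NetConnection localhost -Port 52000

# Confirm the container is receiving audio and sending animation data during play
docker logs --tail 50 audio2face-nim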