Summary: I have created a default MetaHuman from a MetaHuman Creator preset and imported example sounds from NVIDIA's example project. I have followed all of NVIDIA's setup directions and can see log messages indicating that audio is sent. However, I cannot hear the audio, and the MetaHuman's face never animates.
REQUEST: If anyone has gotten Audio2Face working with a locally running model, could you please direct me to your example project?
Setup: Unreal Engine 5.6.1, NVIDIA ACE Reference Plugin 2.5.0, NVIDIA Audio2Face Mark 2.4.0, Windows 11 v10.0.26100, GeForce RTX 5090 driver v32.0.15.9174.
Cross-Posting: Nope… the NVIDIA developer forums helpfully inform me that I should either get Enterprise support (I'm not an enterprise customer) or log in to Discord. Discord helpfully informs me that I should go to the forums.
Background & Details:
My first attempt was to download NVIDIA's "Gaming Sample" (the download on the ACE for Games page at NVIDIA Developer, which requires a login). The sample targets UE 5.3, but the oldest engine version the plugin supports is 5.4, and the oldest the models support is 5.5. After updating the sample, I found that the character selection menu was empty… as were the other menus. There is also no option to run locally, and as I mentioned above, I'm not an enterprise customer.
My second attempt was to simply add the plugins to a UE 5.6 project. Following NVIDIA's directions ("Character Animation (required for Audio2Face-3D, Animation Stream)" in the ACE Unreal Plugin docs), I modified the Face_AnimBP as directed and added it to my MetaHuman. (I did not configure spatial audio.) Then I added an AnimateCharacterFromSoundWaveAsync node executing on the BeginPlay event of the MetaHuman actor blueprint; a rough C++ equivalent of that wiring is sketched below. For testing I used the "Mark" audio samples from NVIDIA's sample project, which sound fine when played in the editor. Unfortunately, the session did not launch successfully, and sometimes the editor crashed with a segfault.
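For concreteness, this is approximately what my blueprint does, written as C++. Only the node name AnimateCharacterFromSoundWaveAsync comes from the plugin; the test actor class and the commented-out call signature are my assumptions, not taken from the plugin headers:

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Sound/SoundWave.h"
#include "MyA2FTestActor.generated.h"

// Minimal test actor mirroring my MetaHuman actor blueprint.
UCLASS()
class AMyA2FTestActor : public AActor
{
	GENERATED_BODY()

public:
	// One of the "Mark" sample sounds from NVIDIA's project, assigned in the editor.
	UPROPERTY(EditAnywhere, Category = "A2F Test")
	TObjectPtr<USoundWave> MarkSample;

protected:
	virtual void BeginPlay() override
	{
		Super::BeginPlay();
		if (MarkSample)
		{
			// In blueprint this is the AnimateCharacterFromSoundWaveAsync node
			// wired to BeginPlay; its Audio Send Completed pin later reports
			// Success = false for me. Hypothetical C++ equivalent (signature
			// assumed, not verified against the plugin):
			// UAnimateCharacterFromSoundWaveAsync::AnimateCharacterFromSoundWaveAsync(this, MarkSample);
		}
	}
};
```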
My third attempt picks up where the second left off: I copied the plugins into my Unreal project so that I could debug and edit the source. In A2FLocal.cpp, the A2E parameters were being applied to A2FCommonCreationParams (lines 302-304) when they probably should be applied to A2ECommonCreationParams. Fixing that allowed the allocation to proceed. I also switched from declaring the chain to use CIG in the PipelineCreationParams (line 322) to declaring it for each element of the A2F and A2E parameters individually. (I'm unsure whether this helps; see the sketch of both changes below.) With these changes, the blueprint executes without crashing… usually.
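For anyone comparing against their own copy of the plugin, here is the shape of my two changes as an excerpt-style sketch, not standalone code. The struct names A2FCommonCreationParams, A2ECommonCreationParams, and PipelineCreationParams come from the plugin source; the field names EmotionStrength and bUseCIG are placeholders I invented to illustrate the pattern:

```cpp
// A2FLocal.cpp, around lines 302-304. The A2E tuning values were being
// written into the A2F creation params; I moved them to the A2E struct.
// Before (suspected bug):
//   A2FCommonCreationParams.EmotionStrength = InParams.EmotionStrength;
// After (my local fix):
A2ECommonCreationParams.EmotionStrength = InParams.EmotionStrength;

// A2FLocal.cpp, around line 322. Instead of declaring CIG once for the
// whole pipeline, I declare it per element on the A2F and A2E params.
// Before:
//   PipelineCreationParams.bUseCIG = true;
// After:
A2FCommonCreationParams.bUseCIG = true;
A2ECommonCreationParams.bUseCIG = true;
```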
What I see in the log is:
LogACERuntime: [ACE SID 2] Started LocalA2F-Mark session
...
LogACERuntime: [ACE SID 2] Sent 560 samples to LocalA2F-Mark
...
LogACERuntime: [ACE SID 2] End of sample
As mentioned, no audio plays during this and the MetaHuman's face does not move. The blueprint node then yields Success = false when its Audio Send Completed pin activates. Finally, when I stop Play-In-Editor, I see:
LogACERuntime: [ACE SID 2 callback] received 0 animation samples, 0 audio samples for clip on BP_Test_C None.None:None.None
In light of all this, my conclusion is that no animation samples are received. Setting a breakpoint in ACEAudioCurveSourceComponent.cpp inside ConsumeAnimData_AnyThread (line 336), I can confirm that no animation data arrives despite audio data being sent. (A logging alternative to the breakpoint is sketched below.)
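If anyone wants to reproduce this check without a debugger, the breakpoint can be replaced with a one-line log at the same spot. The LogACERuntime category is the plugin's own; NumAnimFrames is a placeholder for whatever frame-count variable ConsumeAnimData_AnyThread actually has in scope:

```cpp
// ACEAudioCurveSourceComponent.cpp, inside ConsumeAnimData_AnyThread (around
// line 336). In my runs this never reports a nonzero count.
UE_LOG(LogACERuntime, Warning,
	TEXT("[A2F debug] ConsumeAnimData_AnyThread received %d animation frames"),
	NumAnimFrames); // placeholder: use the function's real sample-count variable
```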
In closing, I would like to say that I really do appreciate how much an independent developer like NVIDIA is able to do on a shoestring budget. I would encourage anyone reading this to donate a few dollars on their GitHub page so that they can afford the time to implement some unit tests.