Best approach for driving a MetaHuman with Audio2Face blendshape data in real time

Hi all,

I am a newbie to Unreal and MetaHuman. I have researched how to use NVIDIA's Audio2Face application to export USD files and import them into Unreal for animation, and I also learned about the LiveLink feature that can drive a MetaHuman without exporting files. However, we want to run Audio2Face in headless mode on a server and export blendshape JSON files; the clients will then download these JSON files to drive the MetaHuman.

I have noticed that the blendshape data exported by Audio2Face uses the ARKit set, while MetaHuman's default blendshapes are a different set, so the exported JSON is not directly compatible. Is this approach feasible? How do I use the JSON files to drive MetaHuman models at runtime, especially on mobile devices? Are there any examples or demos I can learn from?
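
To make the goal concrete, here is a rough sketch of what I imagine the client side would do with such a file. The field names (`facsNames`, `weightMat`, `exportFps`) are only guesses at the export layout and may differ in your Audio2Face version; the point is just that the client ends up with an ordered list of ARKit curve names plus one weight vector per frame that it can push into whatever drives the MetaHuman (for example a LiveLink source).

```python
import json


def load_a2f_blendshape_json(path):
    """Load an Audio2Face blendshape export.

    Assumed (hypothetical) layout:
      - "facsNames": ordered list of ARKit blendshape names
      - "weightMat": list of per-frame weight lists, same order as facsNames
      - "exportFps": playback frame rate
    Adjust the keys to match the actual export from your A2F version.
    """
    with open(path, "r") as f:
        data = json.load(f)
    names = data["facsNames"]
    frames = data["weightMat"]
    fps = data.get("exportFps", 30.0)
    return names, frames, fps


def frame_as_dict(names, frame_weights):
    """Map one frame's weights to {blendshape_name: weight} for downstream use."""
    return dict(zip(names, frame_weights))


if __name__ == "__main__":
    names, frames, fps = load_a2f_blendshape_json("a2f_export.json")
    print(f"{len(frames)} frames at {fps} fps, {len(names)} curves")
    print(frame_as_dict(names, frames[0]))
```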

I appreciate any help you can provide.

Bowie

Ah! A few months ago I had the same question. After digging into the Audio2Face source code, I found that Audio2Face ships a conversion map that converts the 52 ARKit blendshapes (ARKitFace) to the 72 MetaHuman blendshapes.
I haven't tried the .json export, only the .usd file, but with my Python script the conversion works fine for me.

For copyright reasons I can't upload my script, but you can find some useful information in your Audio2Face folder (see the rough sketch after this list for how such a map can be applied):

  1. \exts\omni.audio2face.exporter\omni\audio2face\exporter\scripts\exporter.py line: 2363,
  2. \exts\omni.audio2face.exporter\data\presets\a2f_mh_convert_data.json
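
To give you the idea, here is a simplified sketch of applying a conversion map to one frame of ARKit weights. The schema I assume for the mapping JSON (one weighted combination of ARKit curves per MetaHuman curve) is hypothetical; the real a2f_mh_convert_data.json has its own layout, so inspect it together with exporter.py and adapt the keys.

```python
import json


def load_convert_map(path):
    """Load an ARKit -> MetaHuman conversion map.

    Hypothetical schema: { "metahuman_curve": {"arkit_name": weight, ...}, ... }
    The real a2f_mh_convert_data.json differs; adjust this loader to match it.
    """
    with open(path, "r") as f:
        return json.load(f)


def convert_frame(arkit_weights, convert_map):
    """Turn a {arkit_name: weight} frame into {metahuman_curve: weight}."""
    out = {}
    for mh_curve, contributions in convert_map.items():
        value = 0.0
        for arkit_name, factor in contributions.items():
            value += factor * arkit_weights.get(arkit_name, 0.0)
        out[mh_curve] = value
    return out


# Example: one ARKit frame (52 curves, only a few shown) remapped to MetaHuman curves.
if __name__ == "__main__":
    convert_map = load_convert_map("a2f_mh_convert_data.json")
    arkit_frame = {"jawOpen": 0.4, "mouthSmileLeft": 0.1, "mouthSmileRight": 0.1}
    print(convert_frame(arkit_frame, convert_map))
```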

Besides, there is another asset in the UE Editor named mh_arkit_mapping_pose_A2F if you install the A2F LiveLink plugin.

Good Luck!

You might find our open-source audio2face, NeuroSync, helpful: GitHub - AnimaVR/NeuroSync_Player: The NeuroSync Player allows for real-time streaming of facial blendshapes into Unreal Engine 5 using LiveLink - enabling facial animation from audio input.

It does this locally :wink: