What is the best approach for driving MetaHuman with Audio2Face blendshape data in real time?

Hi all,

I am new to Unreal and MetaHuman. I have researched how to use NVIDIA's Audio2Face application to export USD files and import them into Unreal for animation, and I also learned about the LiveLink feature that can drive a MetaHuman without exporting files. However, we want to run Audio2Face in headless mode on a server and export blendshape JSON files. Clients would then download these JSON files and use them to drive the MetaHuman.
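For context, here is a minimal sketch of how I imagine the client side reading one of the downloaded JSON files in C++. The field names "facsNames" and "weightMat" are taken from the export I have on hand and may differ between Audio2Face versions, so please treat them as assumptions:

```cpp
#include "Misc/FileHelper.h"
#include "Dom/JsonObject.h"
#include "Serialization/JsonReader.h"
#include "Serialization/JsonSerializer.h"

// Load one exported clip: a list of ARKit-style blendshape names and a
// per-frame matrix of weights. Field names are assumptions from my export.
bool LoadBlendshapeClip(const FString& FilePath, TArray<FString>& OutNames, TArray<TArray<float>>& OutFrames)
{
	FString JsonRaw;
	if (!FFileHelper::LoadFileToString(JsonRaw, *FilePath))
	{
		return false;
	}

	TSharedPtr<FJsonObject> Root;
	const TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(JsonRaw);
	if (!FJsonSerializer::Deserialize(Reader, Root) || !Root.IsValid())
	{
		return false;
	}

	// Blendshape names, one per column of the weight matrix.
	for (const TSharedPtr<FJsonValue>& NameValue : Root->GetArrayField(TEXT("facsNames")))
	{
		OutNames.Add(NameValue->AsString());
	}

	// One row per exported frame, one float weight per blendshape.
	for (const TSharedPtr<FJsonValue>& RowValue : Root->GetArrayField(TEXT("weightMat")))
	{
		TArray<float> Row;
		for (const TSharedPtr<FJsonValue>& WeightValue : RowValue->AsArray())
		{
			Row.Add(static_cast<float>(WeightValue->AsNumber()));
		}
		OutFrames.Add(MoveTemp(Row));
	}
	return true;
}
```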

I have noticed that the blendshape data exported by Audio2Face uses the ARKit set, while the MetaHuman face uses a different default set, so the exported JSON is not directly compatible. Is this approach feasible? How can I use these JSON files to drive MetaHuman models at runtime, especially on mobile devices? Are there any examples or demos I can learn from?
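To make the question concrete, this is the kind of ARKit-to-MetaHuman remapping I think I would need each frame. The target curve names are placeholders I made up, and I am not sure SetMorphTarget is even the right entry point into the MetaHuman face rig (it may need to go through LiveLink or an Anim Blueprint curve instead), so this is only a sketch of the idea:

```cpp
#include "Components/SkeletalMeshComponent.h"
#include "Containers/Map.h"

// Placeholder mapping from ARKit blendshape names to whatever the MetaHuman
// face actually expects. I have not confirmed the real target names.
static const TMap<FName, FName> ArkitToMetaHuman = {
	{ TEXT("jawOpen"),      TEXT("CTRL_expressions_jawOpen") },   // placeholder target name
	{ TEXT("eyeBlinkLeft"), TEXT("CTRL_expressions_eyeBlinkL") }, // placeholder target name
	// remaining ARKit shapes would go here
};

// Apply one frame of remapped weights to the face mesh.
void ApplyFrame(USkeletalMeshComponent* FaceMesh, const TMap<FName, float>& ArkitWeights)
{
	for (const TPair<FName, float>& Pair : ArkitWeights)
	{
		if (const FName* Target = ArkitToMetaHuman.Find(Pair.Key))
		{
			// SetMorphTarget may not be the correct way to feed the MetaHuman
			// face rig; it just shows where the remapped value would land.
			FaceMesh->SetMorphTarget(*Target, Pair.Value);
		}
	}
}
```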

I appreciate any help you can provide.

Bowie