I am a newbie to Unreal and MetaHuman. I have researched how to use NVIDIA's Audio2Face application to export USD files and import them into Unreal for animation, and I have also learned about the LiveLink feature that can drive a MetaHuman without exporting files. However, we want to run Audio2Face in headless mode on a server and export BlendShape JSON files; the clients would then download these JSON files to drive the MetaHuman.
I have noticed that the BlendShape data exported by Audio2Face uses the ARKit set, while MetaHuman's default BlendShapes are a different set, so the JSON exported by Audio2Face is not directly compatible. Is this approach feasible? How do I use the JSON files to drive MetaHuman models at runtime, especially on mobile devices? Are there any examples or demos I can learn from?
Ah! A few months ago I had the same question. After researching the Audio2Face source code, I found that Audio2Face actually includes a conversion map that converts the 52 ARKit blendshapes (ARKitFace) to the 72 MetaHuman blendshapes.
I haven't tried the .json export format, only .usd files, but with my Python script the conversion works fine for me.
For copyright reasons I can't upload my script, but you can find some useful information in your Audio2Face install folder.
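To illustrate the general idea (this is not my script, just a minimal sketch): you load the exported JSON, then remap each ARKit blendshape name onto the corresponding MetaHuman curve name with a lookup table. The JSON keys (`facsNames`, `weightMat`) and the example mapping entries below are assumptions that may differ in your Audio2Face version; fill in the real table from the conversion map you find in your Audio2Face folder.

```python
# Minimal sketch: remap an Audio2Face ARKit blendshape JSON onto MetaHuman curve names.
# The mapping below contains only a few hypothetical placeholder entries; the real
# ARKit -> MetaHuman table should come from the conversion map in the Audio2Face folder.
import json

# Hypothetical partial mapping: ARKit (52 BS) name -> MetaHuman curve name.
ARKIT_TO_METAHUMAN = {
    "jawOpen": "CTRL_expressions_jawOpen",         # placeholder target name
    "eyeBlinkLeft": "CTRL_expressions_eyeBlinkL",  # placeholder target name
    # ... fill in the remaining entries from the conversion map
}

def remap_frames(a2f_json_path: str, out_path: str) -> None:
    """Convert per-frame ARKit weights into per-frame MetaHuman curve values."""
    with open(a2f_json_path, "r") as f:
        data = json.load(f)

    names = data["facsNames"]    # assumed key: list of ARKit blendshape names
    weights = data["weightMat"]  # assumed key: one list of weights per frame

    converted = []
    for frame in weights:
        curves = {}
        for name, value in zip(names, frame):
            target = ARKIT_TO_METAHUMAN.get(name)
            if target is not None:
                curves[target] = value
        converted.append(curves)

    with open(out_path, "w") as f:
        json.dump(converted, f)

if __name__ == "__main__":
    remap_frames("a2f_export.json", "metahuman_curves.json")
```

Once the weights are in MetaHuman curve names, the client only has to read the converted JSON frame by frame and set those curve values on the face animation at runtime.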