What is the best approach for driving MetaHuman facial animation in real time with audio2face blendshape data?

You might find our open-source audio2face tool, NeuroSync, helpful: GitHub - AnimaVR/NeuroSync_Player - The NeuroSync Player streams facial blendshapes into Unreal Engine 5 in real time via LiveLink, enabling facial animation driven by audio input.

And it does all of this locally :wink:
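To give a feel for the general idea, here is a minimal sketch of streaming per-frame blendshape weights over UDP. This is a toy illustration only: the frame format, subject name, and port below are hypothetical, and this is NOT Unreal's actual LiveLink wire format (NeuroSync_Player handles the real protocol for you).

```python
import json
import socket
import time

def make_frame(subject, weights):
    # Hypothetical JSON frame: a subject name, a timestamp, and a dict of
    # ARKit-style blendshape weights in the range [0, 1].
    return json.dumps({
        "subject": subject,
        "time": time.time(),
        "blendshapes": weights,
    }).encode("utf-8")

def send_frame(sock, addr, frame_bytes):
    # Fire-and-forget UDP send, one datagram per animation frame.
    sock.sendto(frame_bytes, addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Example weights for a few ARKit blendshape curves.
    weights = {"JawOpen": 0.42, "MouthSmileLeft": 0.10, "EyeBlinkRight": 0.0}
    frame = make_frame("MetaHumanFace", weights)  # "MetaHumanFace" is a made-up subject name
    send_frame(sock, ("127.0.0.1", 11111), frame)  # port 11111 is arbitrary
```

In a real pipeline the audio-to-blendshape model produces a weights dict like this per frame (typically at 30 or 60 fps), and the receiving end maps the curves onto the MetaHuman face rig.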