Can You Automate Metahuman Audio To Lipsync With Blueprints?

Yes, the lip-sync functions can be called from Blueprints, but for live-streamed audio data you might need to drop into C++. If I remember correctly, the relevant function in OVRLipSync is called FeedAudio.
There’s also a sample project for NVIDIA Audio2Face.