How can I use viseme data in JSON format for real-time lip sync on a MetaHuman (or any other character)?

I appreciate the AmazonPollyMetaHuman tutorial, which most people I ask for help are tempted to point me to when they read the question below, but it does not work with UE 5.

My question: How can I animate a MetaHuman (or any other character) using viseme data in JSON format for real-time lip sync, using only UE 5 native functionality, i.e. without paid plugins or third-party services that require registration?

My scenario: JSON containing visemes, plus WAV audio in binary format, is sent to UE. My custom Blueprint exposes a function `get_speech_frame_data()`, which returns the data for a single frame, and `get_all_speech_data()`, which returns the whole data set. I can refactor my C++ code to return the data in any format.

Please explain at least the approach and the high-level architecture. For example: should I use Live Link? Should I use the ARKit format? Control curves? etc.
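To make the data shape concrete, here is a rough, engine-free sketch of what I imagine doing with one frame from `get_speech_frame_data()`: turn the active viseme into an ARKit-style curve name and a 0..1 weight that I could feed into a Modify Curve node in an Anim Blueprint. The struct name, the curve names, and the Polly-viseme-to-ARKit mapping are all my own placeholder assumptions, not anything from the tutorial; the real mapping would have to be authored per rig.

```cpp
#include <algorithm>
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical shape of one decoded JSON viseme frame: a viseme id
// (e.g. Amazon Polly-style "p", "a", "O", "sil") and its timestamp
// in seconds relative to the start of the audio clip.
struct VisemeFrame {
    double timeSec;
    std::string viseme;
};

// Illustrative, deliberately partial mapping from viseme ids to
// ARKit-style curve names. A production mapping covers all visemes.
const std::map<std::string, std::string> kVisemeToCurve = {
    {"p", "MouthClose"},
    {"a", "JawOpen"},
    {"O", "MouthFunnel"},
    {"sil", "MouthClose"},
};

// Given the audio playback time, find the viseme frame that is active
// and compute a weight that linearly fades the curve out over the
// frame's duration (so consecutive visemes blend instead of popping).
std::pair<std::string, float> EvaluateCurve(
    const std::vector<VisemeFrame>& frames, double playbackSec) {
    std::string active = "sil";
    double start = 0.0, end = 0.0;
    for (std::size_t i = 0; i < frames.size(); ++i) {
        if (frames[i].timeSec <= playbackSec) {
            active = frames[i].viseme;
            start = frames[i].timeSec;
            // Last frame gets an assumed 100 ms tail.
            end = (i + 1 < frames.size()) ? frames[i + 1].timeSec
                                          : start + 0.1;
        }
    }
    const double span = end - start;
    float weight = span > 0.0
        ? static_cast<float>(1.0 - (playbackSec - start) / span)
        : 0.0f;
    weight = std::max(0.0f, std::min(1.0f, weight));
    auto it = kVisemeToCurve.find(active);
    return {it != kVisemeToCurve.end() ? it->second : "MouthClose",
            weight};
}
```

So at playback time t I would look up the active frame, get back something like ("JawOpen", 0.5), and write that value onto the corresponding animation curve each tick. Is that the right idea, or should the curve data instead be pushed through a Live Link source?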
Please enlighten me!
Many thanks.