Can we make an avatar connect to a third-party conversational AI (such as Alexa), and move its mouth in sync with the words generated in real time, so the lip movement matches what is being said?
You would need a 3D mesh with bones and animations for the phonemes. This can be done with Epic's MetaHuman.
The hard part is driving the MetaHuman to produce the correct phonetic face shapes for the dialog as it is generated. I don't know how that would be done.
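One common approach to the "phonetic face" problem is to collapse the phonemes coming out of the TTS step into a small set of visemes (mouth shapes), then key those onto the rig. Here is a minimal Python sketch of that mapping step; all of the names (`PHONEME_TO_VISEME`, `TimedPhoneme`, `visemes_for`) and the tiny lookup table are illustrative assumptions, not part of any real MetaHuman or TTS API:

```python
# Sketch: map timed phonemes from a TTS engine to viseme keyframes.
# The table and class names here are hypothetical, for illustration only.
from dataclasses import dataclass

# A heavily simplified phoneme -> viseme table. Real tables collapse
# roughly 40 phonemes into about a dozen visemes.
PHONEME_TO_VISEME = {
    "AA": "open",   "AE": "open",
    "B": "closed",  "M": "closed", "P": "closed",
    "F": "dental",  "V": "dental",
    "OW": "round",  "UW": "round",
}

@dataclass
class TimedPhoneme:
    phoneme: str
    start: float     # seconds into the audio clip
    duration: float

def visemes_for(phonemes):
    """Convert timed phonemes into (viseme, start, duration) keyframes."""
    return [
        (PHONEME_TO_VISEME.get(p.phoneme, "neutral"), p.start, p.duration)
        for p in phonemes
    ]

# Example: the word "map" -> M, AE, P
track = visemes_for([
    TimedPhoneme("M", 0.00, 0.08),
    TimedPhoneme("AE", 0.08, 0.12),
    TimedPhoneme("P", 0.20, 0.08),
])
# track == [("closed", 0.0, 0.08), ("open", 0.08, 0.12), ("closed", 0.2, 0.08)]
```

The resulting keyframe track is what would then be baked into an animation (or a USD file, as suggested below) for the rig to play back.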
What if we externally generated a USD file in real time that matches the voice, then pushed it to the game to be used for the next animation?
If the facial-animation AI were cloud hosted and fully automated in real time (after settings had been applied in advance), the USD file could be auto-generated using tooling from NVIDIA Omniverse or Reallusion.
The in-game side would then just need to read the USD file and translate it into facial movements.
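The in-game reader described here boils down to: load per-blendshape animation curves (as a USD reader such as `pxr.Usd` would expose them) and sample each curve at the current frame time. A minimal Python sketch of that sampling step, with illustrative curve data and function names (not a real USD or engine API):

```python
# Sketch: sample blendshape weight curves at a given frame time.
# Curve format and function names are hypothetical, for illustration only.

def sample_curve(keys, t):
    """Linearly interpolate a sorted list of (time, weight) keys at time t."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, w0), (t1, w1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return w0 + alpha * (w1 - w0)

def sample_face(curves, t):
    """Sample every blendshape curve at time t -> {shape_name: weight}."""
    return {name: sample_curve(keys, t) for name, keys in curves.items()}

# Example: a jaw-open curve that opens over 0.2 s, then closes again.
curves = {"jawOpen": [(0.0, 0.0), (0.2, 1.0), (0.4, 0.0)]}
weights = sample_face(curves, 0.1)  # halfway through the opening
# weights == {"jawOpen": 0.5}
```

Each frame, the engine would call something like `sample_face(curves, current_time)` and push the resulting weights onto the MetaHuman's facial blendshapes, so the mouth stays in sync with the audio clock rather than the render framerate.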