I have my own NPC character rig in Unreal Engine 5.2 that I would like to set up lip sync for. It's a custom model I imported from 3ds Max 2022. I came across NVIDIA's Audio2Face, and the results seem good enough for what I need.
I imported the character's face into Audio2Face and set up the Mesh Fitting and blendshapes without too much trouble, but now I'm not quite sure how to bring the result into Unreal and integrate it with my existing rig.
Is there a tutorial out there that covers this? Everything I've found so far seems to involve MetaHuman or Blender, neither of which I'm using.
Thanks.