Is there an Audio2Face tutorial out there that doesn't involve MetaHuman?

I have my own NPC character rig in Unreal Engine 5.2 that I would like to set up lip sync for. It’s a custom model I imported from 3ds Max 2022. I came across Audio2Face, and the results seem good enough for what I need.

I imported the character’s face into Audio2Face and set up the Mesh Fitting and Blendshapes without too much trouble, but now I’m not sure how to bring the result into Unreal and integrate it with my existing rig.

Is there a tutorial that covers this? Everything I’ve found seems to involve MetaHuman or Blender, neither of which I’m using.

Thanks.

Hi, you could take a look at the following plugin. It performs high-quality lip sync in real time from any audio source with minimal latency, even on mid-range CPUs, and supports packaged projects: Runtime MetaHuman Lip Sync (AI for NPCs) (+ CC4, Genesis, ARKit, and more) | Fab
It also supports custom character rigs: How to use the plugin with custom characters | Georgy Dev Docs
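
If you'd rather wire the Audio2Face export up yourself instead of using a plugin, the usual route is: export the solved blendshape weights from Audio2Face as a JSON clip, import your face mesh with its morph targets into UE as normal, then play the weights back onto the mesh with SetMorphTarget. Below is a minimal sketch of that playback step. The component and property names are placeholders I made up, and the JSON field names (exportFps, facsNames, weightMat) assume Audio2Face's blendshape export format, which may differ by version, so check them against your exported file.

```cpp
// BlendshapePlayer.h -- minimal sketch, not production code.
// Requires "Json" in your module's Build.cs dependencies.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "BlendshapePlayer.generated.h"

UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
class UBlendshapePlayer : public UActorComponent
{
    GENERATED_BODY()

public:
    UBlendshapePlayer() { PrimaryComponentTick.bCanEverTick = true; }

    // Skeletal mesh whose morph targets match the A2F blendshape names.
    UPROPERTY(EditAnywhere, Category = "A2F")
    USkeletalMeshComponent* FaceMesh = nullptr;

    // Absolute path to the JSON clip exported from Audio2Face.
    UPROPERTY(EditAnywhere, Category = "A2F")
    FString JsonPath;

    virtual void BeginPlay() override;
    virtual void TickComponent(float DeltaTime, ELevelTick TickType,
                               FActorComponentTickFunction* ThisTickFunction) override;

private:
    TArray<FName> ShapeNames;     // blendshape / morph target names
    TArray<TArray<float>> Frames; // Frames[frame][shape] weights
    float Fps = 30.f;
    float PlayTime = 0.f;
};

// BlendshapePlayer.cpp
#include "BlendshapePlayer.h"
#include "Misc/FileHelper.h"
#include "Dom/JsonObject.h"
#include "Serialization/JsonReader.h"
#include "Serialization/JsonSerializer.h"

void UBlendshapePlayer::BeginPlay()
{
    Super::BeginPlay();

    FString Raw;
    if (!FFileHelper::LoadFileToString(Raw, *JsonPath)) return;

    TSharedPtr<FJsonObject> Root;
    const TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(Raw);
    if (!FJsonSerializer::Deserialize(Reader, Root) || !Root.IsValid()) return;

    // Field names assume A2F's blendshape JSON export; adjust to your version.
    Fps = static_cast<float>(Root->GetNumberField(TEXT("exportFps")));
    for (const TSharedPtr<FJsonValue>& Name : Root->GetArrayField(TEXT("facsNames")))
        ShapeNames.Add(FName(*Name->AsString()));

    for (const TSharedPtr<FJsonValue>& Row : Root->GetArrayField(TEXT("weightMat")))
    {
        TArray<float>& Frame = Frames.AddDefaulted_GetRef();
        for (const TSharedPtr<FJsonValue>& Weight : Row->AsArray())
            Frame.Add(Weight->AsNumber());
    }
}

void UBlendshapePlayer::TickComponent(float DeltaTime, ELevelTick TickType,
                                      FActorComponentTickFunction* ThisTickFunction)
{
    Super::TickComponent(DeltaTime, TickType, ThisTickFunction);
    if (!FaceMesh || Frames.Num() == 0) return;

    // Advance playback and hold on the last frame of the clip.
    PlayTime += DeltaTime;
    const int32 Frame = FMath::Clamp(
        FMath::FloorToInt(PlayTime * Fps), 0, Frames.Num() - 1);

    // Drive the morph targets directly; names must match the mesh's
    // morph target names (rename in A2F or remap here if they differ).
    const int32 Count = FMath::Min(ShapeNames.Num(), Frames[Frame].Num());
    for (int32 i = 0; i < Count; ++i)
        FaceMesh->SetMorphTarget(ShapeNames[i], Frames[Frame][i]);
}
```

Note this only covers pre-baked clips that you play alongside the matching audio file; for lip sync driven by arbitrary audio at runtime, a plugin like the one above is the simpler route.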