I’m using NeuroSync in a conversational NPC project and have a few questions about latency in the TTS stage of the pipeline. Has anyone else used XTTS or a fully offline model? I’m seeing significant delays in audio generation.
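For context, here is a minimal sketch of the kind of streaming call I mean, using Coqui TTS's XTTS v2 `inference_stream` API (the checkpoint directory and reference wav paths are placeholders for your own files). Streaming chunks, rather than waiting for the full clip, is the usual way to cut time-to-first-audio:

```python
# Minimal sketch: streaming XTTS v2 inference via Coqui TTS so playback
# can start on the first chunk instead of after the whole clip renders.
# "xtts_dir" and "ref.wav" are placeholder paths, not real files.
import time
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("xtts_dir/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="xtts_dir")
model.cuda()

# Speaker conditioning can be computed once and reused across requests,
# which keeps it off the per-utterance critical path.
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
    audio_path=["ref.wav"]
)

start = time.time()
chunks = model.inference_stream(
    "Hello there, traveler. What brings you to the village?",
    "en",
    gpt_cond_latent,
    speaker_embedding,
)
for i, chunk in enumerate(chunks):
    if i == 0:
        print(f"time to first audio chunk: {time.time() - start:.2f}s")
    # chunk is a torch.Tensor of PCM samples; hand it straight to the
    # audio player / lip-sync consumer instead of accumulating it all.
```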
Hey, the following plugin might be better suited for minimal latency, since both the lip sync and the TTS run fully natively: Runtime MetaHuman Lip Sync (AI for NPCs) (+ CC4, Genesis, ARKit, and more) | Fab
See the demo project to test the latency yourself.