Bring your MetaHuman and custom characters to life with real-time, cross-platform lip synchronization!
Give your digital characters seamless lip sync that runs entirely offline, with no cloud services required. Characters respond naturally to speech input, creating immersive, believable conversations with minimal setup.
Quick links:
Packaged Demo Project (Windows)
Demo source files (UE 5.5)
Documentation
YouTube video demonstration
Discord support chat
Custom Development: solutions@georgy.dev (tailored solutions for teams & organizations)
Key features:
- Real-time Lip Sync from microphone input
- Offline Processing - no internet connection required
- Cross-platform Compatibility: Windows, Mac, Android, and Meta Quest
- Works with both MetaHuman and custom characters:
  - Popular commercial characters (Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo, ReadyPlayerMe)
  - FACS-based character models
  - ARKit blendshape standard
  - Preston Blair phoneme system
  - 3ds Max phoneme system
  - Any character with custom morph targets for facial expressions
- Multiple Audio Sources:
  - Live microphone input (via Runtime Audio Importer’s capturable sound wave)
  - Captured audio playback (via Runtime Audio Importer’s capturable sound wave)
  - Synthesized speech (via Runtime Text To Speech)
  - Custom audio data in float PCM format
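Here, "float PCM" means raw 32-bit float samples in the range [-1, 1]. A minimal sketch of producing such a buffer (plain C++, no Unreal dependency; the function name, sample rate, and tone are illustrative assumptions, not part of the plugin's API):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Generate one second of a 220 Hz sine tone at 16 kHz mono -- an example of
// the kind of raw float-PCM buffer the "custom audio data" input accepts.
// All samples stay within [-1, 1], as float PCM requires.
std::vector<float> MakeSineTonePcm(float FrequencyHz = 220.0f,
                                   int32_t SampleRate = 16000,
                                   float Seconds = 1.0f)
{
    const int32_t NumSamples = static_cast<int32_t>(SampleRate * Seconds);
    std::vector<float> Pcm(NumSamples);
    for (int32_t i = 0; i < NumSamples; ++i)
    {
        Pcm[i] = std::sin(2.0f * 3.14159265f * FrequencyHz *
                          static_cast<float>(i) / SampleRate);
    }
    return Pcm;
}
```

Any source that can be decoded to this format (WAV files, network streams, procedural audio) can drive lip sync the same way as microphone or TTS input.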
How it works:
The plugin processes audio input to generate visemes (visual representations of phonemes) that drive your character’s facial animation in real-time, producing natural-looking speech movements that closely match the audio.
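To make the audio-to-animation idea concrete, here is a deliberately crude stand-in (not the plugin's actual model): it maps the RMS energy of a float-PCM chunk to a single 0-to-1 "mouth open" weight, the same shape of output a real viseme predictor uses to drive morph targets each frame. The function name and the 4x gain are illustrative assumptions:

```cpp
#include <cmath>
#include <vector>

// Toy illustration only: a real viseme model classifies phonemes and outputs
// one weight per viseme. This sketch outputs a single jaw-open weight from
// chunk loudness -- silence maps to 0 (mouth closed), loud speech toward 1.
float ChunkToMouthOpenWeight(const std::vector<float>& Chunk)
{
    if (Chunk.empty())
    {
        return 0.0f;
    }
    double SumSquares = 0.0;
    for (float Sample : Chunk)
    {
        SumSquares += static_cast<double>(Sample) * Sample;
    }
    const float Rms = static_cast<float>(std::sqrt(SumSquares / Chunk.size()));
    // Scale (assumed gain of 4) and clamp to the valid morph-target range.
    const float Weight = Rms * 4.0f;
    return Weight > 1.0f ? 1.0f : Weight;
}
```

In practice the plugin evaluates this kind of mapping continuously on incoming audio chunks, so animation stays in step with live input rather than requiring pre-baked curves.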
Perfect for:
- Interactive NPCs and digital humans
- Virtual assistants and guides
- Cutscene dialogue automation
- Live character performances
- VR/AR experiences
- Educational applications
- Accessibility solutions
Works great with:
- Runtime Audio Importer - For microphone capture and audio processing
- Runtime Text To Speech - For synthesized speech generation