Three Advanced Models for Every Project Need!
Bring your MetaHuman and custom characters to life with low-latency, real-time lip sync! Now featuring three quality models to suit your project requirements:
- Mood-Enabled Realistic Model - Emotion-aware facial animation for MetaHuman characters with 12 different moods (Happy, Sad, Confident, Excited, etc.), configurable intensity, and smart lookahead timing
- Realistic Model - Enhanced visual fidelity specifically for MetaHuman characters with more natural mouth movements (81 facial controls)
- Standard Model - Broad compatibility with MetaHumans and custom characters (14 visemes)
Transform your digital characters with seamless, real-time lip synchronization that works completely offline and cross-platform! Watch as your characters respond naturally to speech input, creating immersive and believable conversations with minimal setup.
Demo projects:
Speech-to-Speech Demo (Full AI Workflow)
Speech-to-Speech Demo Project (Windows)
Speech-to-Speech Demo source files (UE 5.6) – Showcases the full speech recognition + AI chatbot + TTS + lip sync workflow. Requires this plugin + Runtime Audio Importer + Speech Recognizer + AI Chatbot, and optionally Text To Speech. The Standard model requires a small extension plugin (see here).
Basic Lip Sync Demo
Basic Packaged Demo Project (Windows)
Basic Demo source files (UE 5.6) – Showcases basic lip sync workflows (microphone input, audio files, TTS). Requires this plugin + Runtime Audio Importer; optionally Text To Speech / AI Chatbot. The Standard model requires a small extension plugin (see here).
Quick links:
Fab link
Product website
Documentation
Discord support chat
Plugin Support & Custom Development: solutions@georgy.dev (tailored solutions for teams & organizations)
Latest video tutorials:
High-Quality (Realistic Model) Tutorials:
Speech-to-Speech Demo (Full AI Workflow)
Lip Sync with Mood Control & Local TTS
Lip Sync with ElevenLabs & OpenAI TTS
Live Microphone Lip Sync
Basic Demo
Standard Model Tutorials:
Basic Setup
Key features:
- Real-time Lip Sync from microphone input and other audio sources
- Emotional Expression Control with 12 different moods and configurable intensity
- Dynamic laughter animations from detected audio cues
- Pixel Streaming microphone support - enable live lip sync from browser-based input!
- Offline and Native Processing - no internet connection required
- Cross-platform Compatibility: Windows, Mac, iOS, Linux, Android, Meta Quest
- Optimized for real-time performance on all platforms
- Works with both MetaHuman and custom characters (Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo, FACS, ARKit, custom morph targets)
- Multiple Audio Sources (see the sketch after this list):
  - Live microphone input or captured audio playback
  - Synthesized speech (TTS)
  - Audio files/buffers
  - Custom audio sources (including streaming)
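To make the "multiple audio sources" idea concrete, here is a minimal C++ sketch of a single PCM ingestion point. All names here (`ULipSyncGenerator`, `ProcessAudioSamples`) are illustrative placeholders, not the plugin's actual API; see the documentation for the real interface.

```cpp
// Illustrative sketch only: ULipSyncGenerator and ProcessAudioSamples are
// invented placeholder names, not the plugin's real classes or functions.
#include "CoreMinimal.h"

// Stand-in for whatever generator object the plugin actually provides.
class ULipSyncGenerator
{
public:
    void ProcessAudioSamples(const TArray<float>& PcmSamples,
                             int32 SampleRate, int32 NumChannels)
    {
        // The real plugin would turn this PCM chunk into viseme /
        // facial-control weights for the character's animation blueprint.
    }
};

// Microphone capture, TTS output, decoded audio files, and custom streams
// all reduce to chunks of float PCM, so one entry point serves every source.
void FeedAudioToLipSync(ULipSyncGenerator* Generator,
                        const TArray<float>& PcmSamples,
                        int32 SampleRate, int32 NumChannels)
{
    if (Generator && PcmSamples.Num() > 0)
    {
        Generator->ProcessAudioSamples(PcmSamples, SampleRate, NumChannels);
    }
}
```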
How it works:
The plugin internally generates visemes (visual representations of phonemes) or facial control data from the audio input. It offers three models: the Standard Model (14 visemes, works with all characters), the Realistic Model (81 facial controls, MetaHuman-exclusive), and the Mood-Enabled Realistic Model (emotional expressions with 12 moods, MetaHuman-exclusive).
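As a rough mental model of what that per-frame output looks like, here is a hedged C++ sketch. The enum, struct, and curve names are assumptions for illustration only; the plugin's real types and accessors may differ.

```cpp
// Hypothetical sketch: these names mirror the three models described above
// but are not the plugin's actual API.
#include "CoreMinimal.h"

enum class ELipSyncModel : uint8
{
    Standard,             // 14 visemes, MetaHumans and custom characters
    Realistic,            // 81 facial controls, MetaHuman-exclusive
    MoodEnabledRealistic  // Realistic output plus 12 selectable moods
};

// Per-frame output is conceptually a set of named weights (viseme or facial
// control values in [0, 1]) that the animation blueprint maps onto morph
// targets (Standard) or MetaHuman face controls (Realistic models).
struct FLipSyncFrame
{
    ELipSyncModel Model = ELipSyncModel::Standard;
    TMap<FName, float> CurveWeights; // e.g. a hypothetical "Viseme_AA" -> 0.7f
};
```

In practice you never assemble these frames yourself; the generator produces them from the audio stream and the supplied animation assets consume them each frame.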
Perfect for:
- Interactive NPCs and digital humans
- Virtual assistants and guides
- Cutscene dialogue automation
- Live character performances
- VR/AR experiences
- Educational applications
- Accessibility solutions
