Three Advanced Models for Every Project Need!
Bring your MetaHuman and custom characters to life with low-latency, real-time lip sync! Now featuring three quality models to suit your project requirements:
- Mood-Enabled Realistic Model - Emotion-aware facial animation for MetaHuman characters with 12 different moods (Happy, Sad, Confident, Excited, etc.), configurable intensity, and smart lookahead timing
- Realistic Model - Enhanced visual fidelity specifically for MetaHuman characters with more natural mouth movements (81 facial controls)
- Standard Model - Broad compatibility with MetaHumans and custom characters (14 visemes)
Transform your digital characters with seamless, real-time lip synchronization that works completely offline and cross-platform! Watch as your characters respond naturally to speech input, creating immersive and believable conversations with minimal setup.
Demo projects:
- Speech-to-Speech Demo Project (Windows) (NEW)
- Speech-to-Speech Demo source files (UE 5.6) (NEW)
- Basic Packaged Demo Project (Windows)
- Basic Demo source files (UE 5.6)
Quick links:
- Product website
- Documentation
- Discord support chat
- Plugin Support & Custom Development: solutions@georgy.dev (tailored solutions for teams & organizations)
Latest video tutorials:
High-Quality (Realistic Model) Tutorials:
- Speech-to-Speech Demo (Full AI Workflow) (NEW)
- High-Quality Lip Sync with Mood Control & Local TTS
- High-Quality Lip Sync with ElevenLabs & OpenAI TTS
- High-Quality Live Microphone Lip Sync
- Demo video showcasing the plugin’s capabilities
Standard Model Tutorials:
- Standard Live Microphone Lip Sync
- Standard Lip Sync with Local Text-to-Speech
- Standard Lip Sync with ElevenLabs & OpenAI TTS
General Setup:
Key features:
- Real-time Lip Sync from microphone input and any other audio sources
- Emotional Expression Control with 12 different moods and configurable intensity
- Dynamic laughter animations from detected audio cues
- Pixel Streaming microphone support - enable live lip sync from browser-based input!
- Offline and Native Processing - no internet connection required
- Cross-platform Compatibility: Windows, Mac, iOS, Linux, Android, Meta Quest
- Optimized for real-time performance on all platforms
- Works with both MetaHuman and custom characters:
  - Popular commercial characters (Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo)
  - FACS-based character models
  - ARKit blendshape standard
  - Any character with custom morph targets for facial expressions
- Multiple Audio Sources:
  - Live microphone input (via Runtime Audio Importer’s capturable sound wave)
  - Captured audio playback (via Runtime Audio Importer’s capturable sound wave)
  - Synthesized speech (via Runtime Text To Speech or Runtime AI Chatbot Integrator)
  - Audio files and raw audio buffers
  - Custom audio sources, including streaming (see the sketch below)
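
For the custom audio source path, audio just needs to arrive as normalized float PCM. The minimal C++ sketch below shows the typical int16-to-float conversion for a streamed chunk; the "ProcessAudioData" call and the "LipSyncGenerator" object at the end are placeholder names used purely for illustration, not the plugin's confirmed API (see the documentation for the actual entry point).

```cpp
// Illustrative sketch only. Converts 16-bit PCM from any custom source
// (network stream, decoder, game audio) into normalized float samples.
#include "Containers/Array.h"

TArray<float> PcmInt16ToFloat(const TArray<int16>& Pcm)
{
	TArray<float> FloatSamples;
	FloatSamples.Reserve(Pcm.Num());
	for (const int16 Sample : Pcm)
	{
		// Map the int16 range [-32768, 32767] to roughly [-1.0, 1.0].
		FloatSamples.Add(Sample / 32768.0f);
	}
	return FloatSamples;
}

// Hypothetical call site; "LipSyncGenerator" and "ProcessAudioData" are
// placeholder names for the plugin's streaming entry point:
// LipSyncGenerator->ProcessAudioData(PcmInt16ToFloat(Chunk), SampleRate, NumChannels);
```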
How it works:
The plugin analyzes incoming audio and internally generates either visemes (visual representations of phonemes) or facial control data, depending on the model you choose: the Standard Model (14 visemes, works with all characters), the Realistic Model (81 facial controls, MetaHuman-exclusive), or the Mood-Enabled Realistic Model (emotional expressions with 12 moods, MetaHuman-exclusive). These values then drive the character's facial animation, as the sketch below illustrates for a morph-target-based character.
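As a rough illustration of the Standard Model's output side, here is a self-contained C++ sketch that maps a 14-element viseme weight array onto a character's morph targets. SetMorphTarget is standard Unreal Engine API, but the viseme/morph names are invented for the example; in practice the mapping is configured through the plugin rather than hand-written like this.

```cpp
// Illustrative sketch only: drives placeholder morph targets from viseme weights.
// The names below are made up for the example, not the plugin's actual naming.
#include "Components/SkeletalMeshComponent.h"

static const FName GPlaceholderVisemeMorphs[14] = {
	TEXT("V_Silence"), TEXT("V_PP"), TEXT("V_FF"), TEXT("V_TH"), TEXT("V_DD"),
	TEXT("V_KK"), TEXT("V_CH"), TEXT("V_SS"), TEXT("V_NN"), TEXT("V_RR"),
	TEXT("V_AA"), TEXT("V_E"), TEXT("V_IH"), TEXT("V_OH")
};

void ApplyVisemeWeights(USkeletalMeshComponent* FaceMesh, const TArray<float>& Weights)
{
	if (!FaceMesh)
	{
		return;
	}
	const int32 NumVisemes = FMath::Min<int32>(Weights.Num(), UE_ARRAY_COUNT(GPlaceholderVisemeMorphs));
	for (int32 Index = 0; Index < NumVisemes; ++Index)
	{
		// Each weight in [0, 1] blends the corresponding mouth shape.
		FaceMesh->SetMorphTarget(GPlaceholderVisemeMorphs[Index], FMath::Clamp(Weights[Index], 0.0f, 1.0f));
	}
}
```

With the Realistic models, the output is 81 MetaHuman facial controls rather than visemes, so no manual morph mapping of this kind is involved.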
Perfect for:
- Interactive NPCs and digital humans
- Virtual assistants and guides
- Cutscene dialogue automation
- Live character performances
- VR/AR experiences
- Educational applications
- Accessibility solutions
Works great with:
- Runtime Audio Importer - For microphone capture and audio processing
- Runtime Text To Speech - For local (offline) text-to-speech
- Runtime AI Chatbot Integrator - For ElevenLabs, OpenAI, Google Cloud and Microsoft Azure text-to-speech
