In this session recorded at Unreal Fest Orlando 2025, Epic Games presents xADA, a model for generating expressive and realistic animation of the face, tongue, and head directly from speech audio.
The approach leverages state-of-the-art machine learning models to extract rich features from the audio input, which are used to automatically create face and body animation directly on a MetaHuman or MetaHuman-compatible model.
xADA supports two modes of operation: a fully automatic mode, and a customizable mode in which artists can specify or override emotion and/or blink timings.
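As a rough illustration of the difference between the two modes, here is a minimal Python sketch. Everything in it (`XadaRequest`, `solve_animation`, and the parameter names) is hypothetical, invented for this example, and does not reflect the actual xADA or Unreal Engine API:

```python
# Hypothetical illustration only: these names are invented for this sketch
# and are not the shipped xADA / Unreal Engine API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class XadaRequest:
    """Conceptual input to an audio-driven animation solve."""
    audio_path: str
    emotion: Optional[str] = None                         # None -> inferred from the audio
    blink_times_sec: list = field(default_factory=list)   # empty -> automatic blinks

def solve_animation(req: XadaRequest) -> None:
    """Pick the mode based on whether the artist supplied any overrides."""
    automatic = req.emotion is None and not req.blink_times_sec
    mode = "fully automatic" if automatic else "customized"
    print(f"Animating face/tongue/head from {req.audio_path} ({mode} mode)")

# Fully automatic: emotion and blink timing are inferred from the speech audio.
solve_animation(XadaRequest("line_042.wav"))

# Customized: the artist pins an emotion and explicit blink timings.
solve_animation(XadaRequest("line_042.wav", emotion="angry", blink_times_sec=[0.8, 2.4]))
```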
This session includes a live demo of xADA. Topics include how the technology behind audio-driven animation works, and how to use xADA in Unreal Engine for offline and streaming character speech animation.
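For the offline versus streaming distinction mentioned above, the sketch below contrasts the two call patterns in the abstract: offline solves work over a complete audio clip, while streaming solves emit animation incrementally as audio arrives. The function names are again hypothetical, not the real engine API:

```python
# Hypothetical call patterns only; function names are invented for illustration.
from typing import Iterable, Iterator

def animate_offline(audio_path: str) -> list:
    """Offline: the full clip is on disk, so animation for the whole
    performance can be solved in one batch (e.g. for cinematics)."""
    frames = []
    # ... run the solve over the complete audio file ...
    return frames

def animate_streaming(audio_chunks: Iterable[bytes]) -> Iterator[dict]:
    """Streaming: animation is produced incrementally as audio arrives,
    e.g. for live or text-to-speech-driven character speech."""
    for index, chunk in enumerate(audio_chunks):
        # ... solve animation for this chunk with low latency ...
        yield {"frame": index, "chunk_bytes": len(chunk)}

# Usage: a streaming source can be any iterable of audio buffers.
for frame in animate_streaming([b"\x00" * 1024, b"\x00" * 1024]):
    print(frame)
```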
To find out more about audio-driven animation for MetaHumans, check out our website: dev.epicgames.com/documentation/metahuman/audio-driven-animation
https://dev.epicgames.com/community/learning/talks-and-demos/MoK4/xada-expressive-audio-driven-animation-for-metahumans-unreal-fest-orlando-2025