Talks And Demos: xADA: Expressive Audio Driven Animation for MetaHumans | Unreal Fest Orlando 2025

In this session recorded at Unreal Fest Orlando 2025, Epic Games presents xADA, a model for generating expressive and realistic animation of the face, tongue, and head directly from speech audio.

The approach leverages state-of-the-art machine learning models to extract rich features from the audio input, which are used to automatically create face and body animation directly on a MetaHuman or MetaHuman-compatible model.

xADA supports two modes of operation: a fully automatic mode and a customizable mode where artists can specify and override emotion and/or blink timings.

This session includes a live demo of xADA. Topics include how the technology behind audio-driven animation works, and how to use xADA in Unreal Engine for offline and streaming character speech animation.

To find out more about audio-driven animation for MetaHumans, check out our website: dev.epicgames.com/documentation/metahuman/audio-driven-animation

https://dev.epicgames.com/community/learning/talks-and-demos/MoK4/xada-expressive-audio-driven-animation-for-metahumans-unreal-fest-orlando-2025

My Questions:

  1. Is there a supported approach to creating a custom Live Link Audio Source that feeds into xADA’s processing pipeline? If so, which classes/interfaces should I extend? (I’ve included a rough skeleton of what I mean at the end of this post.)

  2. Could you clarify the architecture of how audio flows from MetaHuman Audio Live Link Source → xADA processing → facial animation? Understanding this would help me determine if a custom source is feasible.

  3. For runtime SoundWave generation, what’s the recommended pattern for creating SoundWave assets from PCM data that xADA’s offline processor will accept? Should I use USoundWaveProcedural (a sketch of what I’m currently attempting is at the end of this post), or is there a better approach?

  4. Is there any roadmap consideration for supporting programmatic audio input to xADA? The use case of AI-driven characters is becoming extremely common, and native support would be incredibly valuable.
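
For context on question 1, here is a bare-bones skeleton of the kind of source I have in mind. It only uses the generic public ILiveLinkSource interface; the class name is mine, and the part I can’t fill in is which subject role / frame data type xADA’s audio pipeline would expect a source like this to push — which is really what I’m asking.

```cpp
// FMyAudioLiveLinkSource.h -- hypothetical skeleton of a custom Live Link source.
// This only shows the generic ILiveLinkSource plumbing; what I don't know is how
// (or whether) audio pushed from such a source can reach xADA's processing.
#pragma once

#include "CoreMinimal.h"
#include "ILiveLinkClient.h"
#include "ILiveLinkSource.h"

class FMyAudioLiveLinkSource : public ILiveLinkSource
{
public:
	//~ Begin ILiveLinkSource interface
	virtual void ReceiveClient(ILiveLinkClient* InClient, FGuid InSourceGuid) override
	{
		Client = InClient;
		SourceGuid = InSourceGuid;
		// Presumably this is where I'd start pushing audio-derived subject data to the
		// client, but I don't know which role/frame data xADA expects from an audio source.
	}

	virtual bool IsSourceStillValid() const override { return Client != nullptr; }

	virtual bool RequestSourceShutdown() override
	{
		Client = nullptr;
		return true;
	}

	virtual FText GetSourceType() const override { return NSLOCTEXT("MyAudioSource", "Type", "My Audio Source"); }
	virtual FText GetSourceMachineName() const override { return NSLOCTEXT("MyAudioSource", "Machine", "Local"); }
	virtual FText GetSourceStatus() const override { return NSLOCTEXT("MyAudioSource", "Status", "Active"); }
	//~ End ILiveLinkSource interface

private:
	ILiveLinkClient* Client = nullptr;
	FGuid SourceGuid;
};
```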
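
And for question 3, this is roughly what I’m attempting today: building a USoundWaveProcedural from 16-bit interleaved PCM generated at runtime (e.g. from a TTS service). The helper name and the PCM format are my own assumptions; whether xADA’s offline processor will accept a procedural wave like this, rather than a serialized SoundWave asset, is exactly what I’d like to confirm.

```cpp
// Hypothetical helper -- wraps runtime-generated 16-bit interleaved PCM in a
// USoundWaveProcedural. I don't yet know whether xADA's offline processing will
// accept a procedural wave, which is the point of question 3.
#include "Sound/SoundWaveProcedural.h"

USoundWaveProcedural* CreateProceduralWaveFromPCM(const TArray<uint8>& PcmData, int32 SampleRate, int32 NumChannels)
{
	USoundWaveProcedural* Wave = NewObject<USoundWaveProcedural>();
	Wave->SetSampleRate(SampleRate);
	Wave->NumChannels = NumChannels;
	Wave->SoundGroup = SOUNDGROUP_Voice;
	Wave->bLooping = false;

	// Duration of the buffer in seconds (16-bit PCM => 2 bytes per sample per channel).
	Wave->Duration = PcmData.Num() / (2.0f * NumChannels * SampleRate);

	// Queue the raw PCM for playback; QueueAudio expects interleaved 16-bit samples.
	Wave->QueueAudio(PcmData.GetData(), PcmData.Num());
	return Wave;
}
```

If the intended route is something else entirely, any pointers would be appreciated.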