Inside Unreal: MetaSounds and Quartz

Unreal Engine 5 introduces MetaSounds, a new high-performance audio system that provides audio designers with complete control over Digital Signal Processing (DSP) graph generation for sound sources. We’re excited to have Aaron McLeran present our new audio features live, and we invite you to join us!

Check out the Unreal Engine Twitch Page for the full UE5 EA Livestream schedule.

If you’re unable to make the livestream, all episodes of Inside Unreal can be viewed afterwards on-demand.

Thursday, July 22 @ 2:00PM ET - Countdown


Aaron McLeran - Lead Audio Programmer - @MinusKelvin
Victor Brodin - Product Specialist - @victor1erp

Unreal Engine 5 Early Access
Release Notes


I was waiting for this!

Could you show examples of how to replace SoundMix and SoundClass with MetaSounds, please?

Thank you very much @VictorLerp !


Can you animate things in Unreal to the rhythm of the music, like the Audio Analyzer Plugin – Parallelcube?


SoundMix/SoundClass are higher-level than MetaSounds – MetaSounds won’t replace those systems. Instead, today, SoundMixes and SoundClasses simply apply their logic (i.e. volume scaling and pitch scaling) to MetaSounds. They can also apply the legacy EQ, though I don’t recommend using that today, as Submix Effects are more effective for EQ.
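To make that concrete, here is a toy Python sketch (illustrative only, not the actual UE API; the class and names are invented) of how SoundClass-style scaling behaves: a sound's effective volume and pitch are the product of the scales along its parent chain.

```python
# Toy model of SoundClass-style hierarchical scaling (NOT the UE API).
# A child's effective volume/pitch is the product of every scale on the
# path from it up to the root of the SoundClass tree.

class ToySoundClass:
    def __init__(self, name, volume=1.0, pitch=1.0, parent=None):
        self.name = name
        self.volume = volume
        self.pitch = pitch
        self.parent = parent

    def effective(self):
        """Walk up the parent chain, multiplying the scales together."""
        vol, pitch, node = 1.0, 1.0, self
        while node is not None:
            vol *= node.volume
            pitch *= node.pitch
            node = node.parent
        return vol, pitch

master = ToySoundClass("Master", volume=0.8)
sfx = ToySoundClass("SFX", volume=0.5, parent=master)
print(sfx.effective())  # Master and SFX volumes multiply
```

The key property is that scaling only flows down the tree: a class can affect its descendants, but two unrelated branches can't modulate each other, which is the limitation the Audio Modulation Plugin addresses.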

What will be a replacement of those systems is the Audio Modulation Plugin. That’s a simpler and more powerful parameter modulation system (e.g. like a “sound mix” thing but more generalized) that will allow modulation of any number of parameters via an orthogonal mix matrix (vs. the SoundClass hierarchical graph).
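A minimal sketch of the mix-matrix idea (my own toy illustration, not the Audio Modulation Plugin's API; the bus and parameter names are made up): any bus can scale any parameter, and contributions from independently active buses stack multiplicatively, with no tree structure required.

```python
# Toy orthogonal mix matrix (NOT the Audio Modulation Plugin API).
# Each "bus" can modulate any subset of parameters; every active bus
# multiplies its scales onto the base values, independent of the others.

def apply_mix_matrix(base_params, active_buses):
    """base_params:  {param_name: base_value}
    active_buses: {bus_name: {param_name: scale}}"""
    result = dict(base_params)
    for scales in active_buses.values():
        for param, scale in scales.items():
            if param in result:
                result[param] *= scale
    return result

base = {"volume": 1.0, "pitch": 1.0, "lpf_cutoff": 20000.0}
buses = {
    "combat_duck": {"volume": 0.5},                     # duck during combat
    "underwater":  {"pitch": 0.9, "lpf_cutoff": 0.05},  # muffle underwater
}
print(apply_mix_matrix(base, buses))
```

Because the matrix is flat rather than hierarchical, adding a new modulation scenario is just adding a row, without restructuring a SoundClass tree.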


That is an interesting plugin – it seems like a lot of complexity to do something that is mostly already supported in UE4 via the Audio Synesthesia plugin and many native features – e.g. getting audio envelope data and FFT data is supported out of the box in UE Audio Components and Submixes. Audio Synesthesia supports more complex analysis (non-real-time, so it’s performant). For UE5, we’re adding more robust real-time analysis to Audio Synesthesia as well.

If you google UE4 Audio Synesthesia you’ll find lots of docs and tons of examples of stuff people are doing – no 3rd party plugin needed!
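For a rough picture of what that envelope data gives you, here is a toy envelope follower in pure Python (a conceptual sketch, not UE's implementation; the attack/release times are arbitrary assumptions): it smooths the absolute sample values into a curve, which is the kind of per-buffer amplitude value you would drive animation with.

```python
import math

# Toy envelope follower (NOT UE's implementation). Smooths |sample|
# with a fast attack and slow release, producing the sort of amplitude
# curve an Audio Component's envelope delegate hands you each buffer.

def envelope(samples, sample_rate=48000, attack_ms=10.0, release_ms=100.0):
    """Return one smoothed amplitude value per input sample."""
    attack = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env, out = 0.0, []
    for s in samples:
        x = abs(s)
        # Rise quickly toward louder input, fall slowly when it drops.
        coeff = attack if x > env else release
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

# A 100 ms burst of a 440 Hz tone followed by 500 ms of silence:
# the envelope rises during the burst and decays during the silence.
tone = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]
sig = tone + [0.0] * 24000
env = envelope(sig)
```

Feeding `env` (sampled per frame) into a scale or emissive parameter is essentially what the audio-reactive animation setups do.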


Livestream where we talk about it here:


Can I use this new sound system to record from a microphone and get the audio samples of anything that is coming through the microphone? Something like Unity's mic capture.

Or is there already a library I can use for that in UE 4.26?
(IVoiceCapture has noise suppression that I can’t turn off.)

I’m not sure of the use case here with all of this synthesis: is the team thinking composers will work in Unreal for their synthesis needs? I’m one, and also a programmer (but 99% of composers aren’t), so I can handle the BP-style programming. While it’s a neat facility, I’d rather not work in Unreal for this, but instead interact with my Moogs. It’s two kinds of thinking: when I’m composing I need to work with instruments, not software paradigms.

SFX then? Talking to my sound designer, the workflow there is to do the typical layering in a DAW (Nuendo), apply plugins, and so forth. When he needs synthetic beeps and boops he’ll also use whatever synths he’s familiar with.

Sample-level triggering is fantastic, especially since it is music-aware, so I’m trying to figure out how to use it, but the entire composition/audio world works with WAVs. And I’m certainly not trying to put down this work in any way; I’m just not sure how to use it. What’s the use case for the engine as a DAW (or really, as far as I can see, a synth DAW plugin)?


Looks awesome! However, it seems this is all applied to the sound source before it’s played? I’m wondering if we will one day be able to use a combination of DSP, material properties, colliders, and audio ‘rays’ (maybe similar to how light rays function) to simulate reverb that is responsive to the virtual environment – this would be the audio equivalent of post-processing. It would be amazing to build a room of custom size and shape and have audio ‘waves’ bounce off the walls, creating a reverb simulative of that space, governed by the material properties of the room, in real time. For digital musicians it would be amazing to experiment with practicing, recording, or playing live in a custom-shaped room, studio, or cave that does not, and could not, exist in real life.
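The geometry-driven reverb the post imagines is essentially what image-source acoustics models compute. As a conceptual sketch (this is not a UE feature, and all the names and the flat 0.3 absorption value below are assumptions), here is a first-order image-source model of a shoebox room: each wall contributes one mirrored copy of the source, whose delay comes from the extra path length and whose gain comes from the wall material's absorption.

```python
import math

# Conceptual first-order image-source sketch (NOT a UE feature).
# Each of the 6 walls of a shoebox room mirrors the source once; the
# reflection's delay is the image-to-listener distance over the speed
# of sound, and its gain combines wall absorption with 1/distance falloff.

SPEED_OF_SOUND = 343.0  # m/s

def first_order_reflections(room, src, lst, absorption):
    """room: (Lx, Ly, Lz) in meters; src/lst: (x, y, z) positions.
    absorption: {(axis, "lo" | "hi"): coefficient in [0, 1]} per wall.
    Returns [(delay_seconds, gain), ...] for the 6 first-order images."""
    out = []
    for axis in range(3):
        for wall, pos in (("lo", 0.0), ("hi", room[axis])):
            img = list(src)
            img[axis] = 2 * pos - src[axis]  # mirror source across the wall
            d = math.dist(img, lst)
            gain = (1.0 - absorption[(axis, wall)]) / max(d, 1e-6)
            out.append((d / SPEED_OF_SOUND, gain))
    return out

room = (5.0, 4.0, 3.0)
walls = {(a, w): 0.3 for a in range(3) for w in ("lo", "hi")}
refl = first_order_reflections(room, (1.0, 2.0, 1.5), (4.0, 2.0, 1.5), walls)
for delay, gain in refl:
    print(f"{delay * 1000:.2f} ms, gain {gain:.3f}")
```

Feeding those (delay, gain) taps into a multi-tap delay line gives early reflections that change as the room's shape or materials change, which is the "responsive reverb" idea in miniature; higher-order images and diffuse tails are where real implementations get expensive.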