Can we PLEASE get some more advanced audio please? Like custom convolution reverb etc?

One thing that NEEDS to be in this engine is reverb and to be able to put the reverb after the mix, but before the main output. Basically the same level of control as a DAW.
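For anyone curious what convolution reverb actually involves under the hood, here's a minimal direct-form sketch (my own illustrative code; real engines use FFT-based partitioned convolution to make this fast enough for real time):

```cpp
#include <cstddef>
#include <vector>

// Naive convolution, the core of convolution reverb: the output is the
// input signal convolved with a recorded impulse response of a real space.
std::vector<float> Convolve(const std::vector<float>& Input,
                            const std::vector<float>& ImpulseResponse)
{
    std::vector<float> Out(Input.size() + ImpulseResponse.size() - 1, 0.0f);
    for (std::size_t i = 0; i < Input.size(); ++i)
    {
        for (std::size_t j = 0; j < ImpulseResponse.size(); ++j)
        {
            Out[i + j] += Input[i] * ImpulseResponse[j]; // accumulate echoes
        }
    }
    return Out;
}
```

In a DAW-style chain this would sit on the mixed bus, before the main output stage.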

Have you looked into FMOD? I think it’s an engine partner.

Epic hired an audio engineer a few months back, so hopefully he is hard at work expanding the audio side of the house. They mentioned in the latest stream that they aren’t yet ready to talk about what is coming on that front. Hopefully we get some news soon.

I have always felt that audio is underappreciated in game development, and I am not even an audiophile.

BIG BUMP! for community interest in this.
Last year they said this summer would see audio improvements. Fingers crossed.

Is there an ‘audio’ version of UE4 on GitHub we can get?

Yeah, that would be most helpful. :slight_smile:

Hey Guys!

I’m Epic’s new audio programmer (I started just a few months ago) and can give you some insight into what we’re planning with audio. Don’t take this as an official announcement or anything – just an informal reply since you’re so interested in audio. As you can hopefully appreciate, writing an entirely new system that replaces an extremely complex and long-standing full-featured system used by lots of companies and games is an ambitious proposition.

First of all, I’m really excited you guys are excited about audio! As somebody who’s worked in AAA game audio now on all sides (composer, sound designer, and programmer), I totally agree with the sentiment that audio is often overlooked and under-appreciated. In my experience, the appreciation-deficit is not just in games, but in our culture!

So yeah, I’m with you about bringing UE4’s audio systems up to speed with state-of-the-art game audio tech. I’m still in the earlier stages of planning out a new roadmap and learning the capabilities of the current UE4 tech, which is why we haven’t talked too much about it yet. Nobody wants to make promises they can’t keep. I’m also not sure what Epic’s plans were for audio tech before hiring me, but I suspect a big part of their plan might have been to hire somebody like me (i.e. an audio programmer). :slight_smile:

In the shorter term, I’ve done a few smaller features for the existing audio systems. The upcoming 4.8 release has some handy bug fixes to issues I ran across (e.g. a fix to the random-weight-picking algorithm in the RandomNode in Sound Cues), support for a separate audio device per game world for multiple PIE instances (i.e. so each PIE session can have its own audio resources/reverb/etc.), and an integration of an HRTF-based 3D audio spatialization algorithm by Oculus as an optional plugin (PC only, unfortunately). There are a few more nice fixes coming down the pike, and I might still do a couple of simpler features, since Epic is shipping a game (Fortnite) with the existing audio engine and the audio team could use a few things to help them out.
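As an aside, the random-weight-picking logic mentioned above boils down to something like the following sketch (my own illustrative code, not the actual Sound Cue implementation): each child node gets a weight, and picks should land in proportion to those weights.

```cpp
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Weighted random selection: roll a number in [0, sum of weights),
// then walk the weights until the roll falls inside a child's slice.
std::size_t PickWeightedIndex(const std::vector<float>& Weights, std::mt19937& Rng)
{
    const float Total = std::accumulate(Weights.begin(), Weights.end(), 0.0f);
    std::uniform_real_distribution<float> Dist(0.0f, Total);
    float Roll = Dist(Rng);
    for (std::size_t i = 0; i < Weights.size(); ++i)
    {
        if (Roll < Weights[i])
        {
            return i;       // this child is chosen
        }
        Roll -= Weights[i]; // skip past this child's slice
    }
    return Weights.size() - 1; // floating-point edge case: fall back to last child
}
```

A child with weight 0 should never be picked, and a child with triple the weight should win roughly three times as often.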

For the longer term, the plan right now is to build a new platform-independent audio engine as a separate UE4 module (i.e. outside the “engine” module) that has near feature parity with the current UE4 audio system. The key problem with UE4 audio tech and feature development right now is that each platform has a very deep and totally different implementation. It’s almost as if UE4 has 7 very different audio engines. That means doing something really cool (like the convolution reverb you guys mentioned) would need to be done 7 different times! It’s not just that the APIs are different: developing on different platforms takes more time simply due to the different development environments, different testing procedures, hardware issues, and so on. It’s a pretty common problem in hand-rolled audio engines at game companies that ship their games on lots of platforms, and most eventually switch to a design like the one I’m describing.

So, with the above context in mind, what I’m working on now isn’t really that newsworthy or exciting: I’m writing a thin platform audio device interface that simply talks to audio devices on various platforms (e.g. querying speaker arrangements, device capabilities, and formats) and opens up an audio output stream that calls back into a common platform-independent mixing layer. I’ve implemented the interface for the XAudio2/WASAPI APIs (for Windows/Xbox One) and the CoreAudio API (for Mac/iOS). I will probably use the same device APIs we’re using right now for our other platforms (PS4, Android, Linux). Of course, one of the primary goals of the new audio engine is to make every platform UE4 runs on behave as closely as possible to the others (accounting for differences in CPU/memory, etc., they should sound the same, or at least very close!). This is really the only feasible way of building a platform-independent audio solution and, I believe, is the approach taken by anybody writing platform-independent audio.
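A thin platform device interface of the kind described might look roughly like this sketch (every name here is my own illustration, not actual UE4 code): the platform backend reports its capabilities and drives a render callback, and everything above that line is platform-independent.

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Simplified description of an output device's capabilities.
struct FAudioDeviceInfo
{
    std::string Name;
    int32_t     SampleRate   = 48000;
    int32_t     NumChannels  = 2;   // speaker arrangement, simplified
    int32_t     BufferFrames = 512;
};

// The common mixer fills OutBuffer with NumFrames * NumChannels interleaved samples.
using FRenderCallback = std::function<void(float* OutBuffer, int32_t NumFrames)>;

// One implementation of this per platform backend (XAudio2, CoreAudio, ...).
class IAudioPlatformDevice
{
public:
    virtual ~IAudioPlatformDevice() = default;
    virtual bool GetDeviceInfo(FAudioDeviceInfo& OutInfo) = 0;  // query capabilities
    virtual bool OpenStream(const FRenderCallback& Render) = 0; // start pulling audio
    virtual void CloseStream() = 0;
};

// A trivial backend for illustration: hands the callback one silent buffer.
class FNullAudioDevice final : public IAudioPlatformDevice
{
public:
    bool GetDeviceInfo(FAudioDeviceInfo& OutInfo) override
    {
        OutInfo = FAudioDeviceInfo{};
        return true;
    }
    bool OpenStream(const FRenderCallback& Render) override
    {
        float Silence[512 * 2] = {0};
        Render(Silence, 512); // a real backend calls this from an audio thread
        return true;
    }
    void CloseStream() override {}
};
```

The point of the design is that features like convolution reverb live in the callback side once, instead of being reimplemented per platform.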

So after handling the platform-device layer, it’s a matter of building up the new audio system with the same feature set as the current audio engine. The sorts of things that’ll need to be made anew include audio asset importing/exporting, audio asset playback (like our Sound Waves in the current audio tech), a voice manager (code that determines which sounds get to play and handles updating them), 3D spatialization, mixing, sound classes, etc.
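The voice-manager idea, i.e. deciding which sounds get to play when more want a voice than the hardware budget allows, can be sketched like this (illustrative names and priority metric, not UE4's):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A sound that wants to play this frame.
struct FActiveSound
{
    int   Id;
    float Priority; // e.g. loudness after distance attenuation
};

// Keep the highest-priority sounds, best first; the rest are
// virtualized or culled until a voice frees up.
std::vector<int> SelectVoices(std::vector<FActiveSound> Requests, std::size_t MaxVoices)
{
    std::sort(Requests.begin(), Requests.end(),
              [](const FActiveSound& A, const FActiveSound& B)
              { return A.Priority > B.Priority; });
    if (Requests.size() > MaxVoices)
    {
        Requests.resize(MaxVoices);
    }
    std::vector<int> Ids;
    for (const FActiveSound& S : Requests)
    {
        Ids.push_back(S.Id);
    }
    return Ids;
}
```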

Once everybody working in audio at Epic and I are confident that the new stuff is solid and can support the existing UE4 audio feature set (maybe with a few low-hanging-fruit additional features), I’ll pull the metaphorical tablecloth out from under the UE4 audio-tech dinnerware.

Once we have this more reasonable platform-independent foundation for audio tech, we’ll all be able to do some really exciting things without the huge overhead I mentioned above. I’d love to list the features that we’re thinking of working on, but again, it’s a bit too early for that now. Rest assured, really cool things like convolution reverb are on the list.

If you’re a coder, you can follow my progress on GitHub. Since I’m doing it as an independent UE4 module, we decided it’d be a good idea to implement the new audio engine in mainline so you guys can follow along.

Here’s a github link to the new audio module work-in-progress:

If you’re an audio programmer (or aspiring to be one) and looking to help out, you’re totally welcome to submit pull requests, bug fixes, suggestions, etc.

FMOD is not an official engine partner, but we work with them to help them with technical issues related to integrating their stuff with UE4.

Oh, ok. It was just advertised on their page:
Anyway, sounds great that you’re working on the sound engine.

Sounds great! Maybe by the time I’m ready for audio you’ll have worked everything out. Please keep Blueprints in mind when you’re making all these systems! :slight_smile:

This all sounds awesome. I watched the stream a few weeks ago, and it appears that in the long term we might see support for VST plug-ins, a mixer panel with proper monitoring, pseudo-mastering tools like spatialization monitoring, and so on.

If that all happens I will be very excited.

Thanks for the info, aaronmcleran.

VST support would be awesome.
If I were to dream, the future UE4 sound editor would be a little like NI Reaktor, or native support for something like libpd would be ace.
Something where we can not only play sound samples but generate them using waveforms, envelopes, etc.
We’d be able to manipulate sound using events/input from the game world: not just pitch and volume, but all parameters in the Sound Cue, be it reverb strength or LFO frequency or phase shift or whatever.
It really would bring the sound engine up to date and in line with UE4’s rendering capabilities if sound were not limited to just playing a .wav.
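To make the "generate, don't just play" idea concrete, here's a toy sketch of an oscillator shaped by an envelope, with pitch as a parameter the game could drive at runtime (not an engine API, just an illustration):

```cpp
#include <cmath>
#include <vector>

// Render a sine tone with a linear decay envelope. FrequencyHz could be
// set per-event from gameplay instead of being baked into a .wav.
std::vector<float> RenderTone(float FrequencyHz, float Seconds, int SampleRate)
{
    const int NumSamples = static_cast<int>(Seconds * SampleRate);
    std::vector<float> Out(NumSamples);
    const float TwoPi = 6.28318530718f;
    for (int i = 0; i < NumSamples; ++i)
    {
        const float Phase = TwoPi * FrequencyHz * static_cast<float>(i) / SampleRate;
        const float Envelope = 1.0f - static_cast<float>(i) / NumSamples; // linear decay
        Out[i] = std::sin(Phase) * Envelope;
    }
    return Out;
}
```

Swap the sine for other waveforms, or chain envelopes onto filter and reverb parameters, and you're most of the way to the Reaktor/Pd-style patching being described.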

good point here
I’m interested, too

Yeah, turns out, I was wrong! Heh. I guess “official” partner meant something different than what I thought. I’m new here!

Yeah, well, blueprints are the secret sauce of UE4. It’d be silly to not take full advantage of it for audio. You ever use Max or Pd? I basically lived and breathed that stuff for years. :slight_smile:

Again, as I said, I’m not going to promise anything at this point (as it’s way too soon), but yeah, these are the types of things that would be great to support in the new system.

Yeah man, I agree!

Blueprints are the reason I got into Unreal 4. Kismet was the reason for UDK. So yes! It is the secret sauce. The finest in the land if I do say so myself. :slight_smile:
I’m also glad to see you guys are working on the audio system. It would be nice to see it updated. :slight_smile:
And my Dad is an audio engineer, so he agrees with you about this sentence: ‘In my experience, the appreciation-deficit is not just in games, but in our culture!’
Hope to see more on the audio system soon. :slight_smile:

Never got into any audio programs, though it is good to hear that visual scripting is in wide use. :slight_smile:

To add to my wish for good Blueprint integration, I’d also like procedural Blueprint integration. Being able to string together music procedurally would be pretty amazing! :slight_smile:

And for that we would need sample-accurate (tick/frame-independent) timing, please :slight_smile:
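Sample-accurate timing essentially means scheduling musical events in samples instead of frames, so a beat at 120 BPM lands on exactly the same sample offset regardless of the game's tick rate. A minimal sketch of the conversion (illustrative, not engine code):

```cpp
#include <cstdint>

// Convert a musical beat position to an absolute sample offset in the
// audio stream. The audio thread can then trigger the event on exactly
// that sample, independent of game-frame timing.
int64_t BeatToSampleOffset(double Beat, double Bpm, int SampleRate)
{
    const double SecondsPerBeat = 60.0 / Bpm;
    return static_cast<int64_t>(Beat * SecondsPerBeat * SampleRate + 0.5); // round
}
```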


On my good old Amiga I had cool sample players for creating nice sounds.
Something like that, transformed into Blueprint technology, would be nice.
Audio visualization is OK now, but it could be pushed further too.
Amateur-friendly would be nice.
This is fresh, fully randomly generated audio.

The audio files are only for testing, and there are even some bad ones in there.

Aaron, do you have any plans to incorporate audio directly into gameplay, rather than just as a content feature?

For example, someone makes a sound into their microphone and this can be used to trigger gameplay events in blueprint.

Analyzing waveform sounds made by the player in real time (pitch, volume, etc) to trigger gameplay events based on that.

Any of this sort of thing would be useful for using Unreal 4 to do more unconventional, cutting-edge game design.
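As a rough illustration of what that kind of analysis could look like (toy code, not an engine feature): RMS for loudness, and zero-crossing rate for a crude pitch estimate, both computed from a buffer of microphone samples that gameplay could then threshold on.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Root-mean-square amplitude: a simple loudness measure.
float ComputeRms(const std::vector<float>& Samples)
{
    if (Samples.empty())
    {
        return 0.0f;
    }
    double Sum = 0.0;
    for (float S : Samples)
    {
        Sum += static_cast<double>(S) * S;
    }
    return static_cast<float>(std::sqrt(Sum / Samples.size()));
}

// Zero-crossing pitch estimate: a tone crosses zero twice per cycle,
// so crossings / 2 / duration approximates the fundamental frequency.
// (Only reliable for clean, roughly periodic input.)
float EstimatePitchHz(const std::vector<float>& Samples, int SampleRate)
{
    int Crossings = 0;
    for (std::size_t i = 1; i < Samples.size(); ++i)
    {
        if ((Samples[i - 1] < 0.0f) != (Samples[i] < 0.0f))
        {
            ++Crossings;
        }
    }
    const double Seconds = static_cast<double>(Samples.size()) / SampleRate;
    return static_cast<float>((Crossings / 2.0) / Seconds);
}
```

For example, "shout to open the door" is just `ComputeRms(MicBuffer) > Threshold`, and "sing a high note" is a range check on the pitch estimate.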

Yes please!