New Audio Features and Game Jam Kickoff - Live from Epic HQ

WHAT

On this stream, Sr. Audio Engineer Aaron McLeran cracks open his example projects to explore Spatialization, Occlusion, Focus, and more aurally pleasing features. These new controls give you the power to fine-tune when, where, how, and why your players experience sound. Audio designers and VR developers alike will want to tune in to see how 3D stereo is implemented and how it can create more immersive scenes.

At the end of the stream we’ll be kicking off the February UE4jam and announcing sponsored prizes, so make sure to stay for the whole thing!

WHEN
Thursday, February 11th @ 2:00PM ET

WHERE

WHO
Aaron McLeran - Sr. Audio Engineer - [@minuskelvin](https://twitter.com/minuskelvin)

  • Community Manager

Questions for Aaron? Let’s hear em!

Archive:

Will these features be in 4.11?

Most of them are and some are already in 4.10. I don’t believe anything we’re showing is past 4.11 based on what I’ve been told. You should have access to the features we’re showing off right now actually.

Awesome! Can’t wait for the stream. :slight_smile:

What kind of spatialization technology/library are you using?
Is it using Oculus Audio when available?

Cheers!

What about audio streaming support? Will we be able to read external files and play audio chosen by the user? What I mean is: will something like this be implemented directly in the engine, without having to write custom code?

I’ve been really enjoying testing out the new occlusion features, which are great - however, the current filter sweep when a sound begins to be occluded can be very jarring and noticeable (I managed to get something a lot more natural by having a reverb volume inside the occluded area and fast volume and filter interpolation times). Even so, for audio that has a lot of high-frequency content, like wind being cut off when entering a building or cave, it seems challenging to create a convincing transition into the occlusion. My question: are there plans to offer more advanced filter parameters/models, and the ability to draw in custom envelopes (similar to a timeline) that could really aid the transition, depending on the sound?

EDIT: used a much higher value for the filter cutoff, and extended the interpolation time, and got a nice natural result (quick test with a spectator pawn on YouTube). Really love this feature - it will make creating a believable world so much easier! It’s easy to go for an extreme effect (super lowpass + super quiet), but in real life things wouldn’t be so drastic. I think with subtle occlusion and well-placed audio clips with nice smooth falloff attenuation settings, UE4 can really provide super immersive audio experiences.

EDIT2: ****! Didn’t realise how useful the focus system was until I used it - having certain sounds pop out more makes a huge difference. Hats off to you Aaron, Christmas has come early!

Great! Looking forward to this.

Hello hello Aaron,

Interesting that the stream is just about 3D audio; I look forward to it.

I am very interested in 3D audio performance and how much impact it can have on a game’s FPS.
For example, what are the consequences of having a very long sequence (10+ minutes) in terms of memory, performance, etc.?
What are the best workflows, and what should we avoid, to make sure UE4 shines?

I’d be happy if you could talk about 3rd-party integrations too, like Wwise and FMOD and their 3D audio plugins, etc.

So happy that VR is bringing 3D audio interest to the light again!

Cheers,

Ol

Any plans for VST / DSP support etc?

Any support for ‘virtual’ sound channels? Right now, if you play a sound outside of the audible radius, it will start playing from the beginning when you enter the radius - not from its current play time. There are workarounds, but it’s a bit hacky.

Any Future plans for any kind of in-engine audio suite tools, such as a mixer, visualizers etc?

Oh yes, the visualizer… THAT would be so nice!
You know, just fixing the existing one would be good :slight_smile:

So I missed if it was answered: How/can you set up spatialization for VOIP?

[Question] What’s the workflow to start setting up spatialized audio for VR? I’m starting audio work on my VR game and confused about where to start. I’ve been reading the Oculus Audio documentation to set up Wwise, but is that even necessary? Also, are there different setups for using spatialized audio in a VR game intended for both Rift and Vive? Do I have to work with the audio plugins of each headset?

Yes, they will be in 4.11.

No, the features I talked about today are not in 4.10

We use the spatialization tech for each platform (XAudio2, NGS2, CoreAudio, OpenAL) to do spatialization. We also use the Oculus Audio plugin that ships with UE4 (which I implemented) to do HRTF/binaural spatialization. You have to enable the plugin before you can use it, though. Be careful about performance: HRTF filters can be expensive.
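For reference, here’s a minimal C++ sketch (not from the stream) of opting a sound into the HRTF path via its attenuation settings. The field and enum names come from the UE4 attenuation API (FSoundAttenuationSettings / ESoundSpatializationAlgorithm in current 4.x headers; the struct was called FAttenuationSettings in the 4.11 era), so verify them against your engine version, and remember the Oculus Audio plugin still has to be enabled for the HRTF path to do anything:

```cpp
// Sketch only: route a playing sound through HRTF spatialization by
// overriding its attenuation settings. Verify field names per engine version.
#include "Components/AudioComponent.h"
#include "Sound/SoundAttenuation.h"

void SetupHRTFSpatialization(UAudioComponent* AudioComp)
{
    FSoundAttenuationSettings Settings;
    Settings.bSpatialize = true;        // enable 3D spatialization for this voice
    Settings.bAttenuate = true;         // apply distance-based volume attenuation
    Settings.FalloffDistance = 3000.0f; // example falloff distance (cm)

    // Use the binaural/HRTF path instead of the default panner (the Oculus
    // Audio plugin must be enabled for this to take effect).
    Settings.SpatializationAlgorithm = ESoundSpatializationAlgorithm::SPATIALIZATION_HRTF;

    AudioComp->AdjustAttenuation(Settings);
}
```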

The new spatialization-related features will of course work in VR (as any of the existing spatialization stuff).

Yes, UE4 does support audio streaming. You have to go to the “experimental features” area of the project settings and enable showing the audio streaming options. Audio streaming is working but is still somewhat experimental, so use it at your own risk.
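As a rough illustration (an assumption on my part, not something shown on the stream), once the streaming options are visible you can also flag a SoundWave as streaming from code; USoundWave::bStreaming is the 4.x flag the asset checkbox maps to:

```cpp
// Sketch only: mark a SoundWave asset as streaming. Normally you would just
// tick the "Streaming" checkbox on the asset once the experimental option is on.
#include "Sound/SoundWave.h"

void EnableStreamingOnWave(USoundWave* Wave)
{
    if (Wave)
    {
        Wave->bStreaming = true;  // stream compressed chunks from disk instead of fully loading
        Wave->MarkPackageDirty(); // editor only: make sure the change is saved with the asset
    }
}
```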

We currently don’t have any built-in support for allowing users of games to play their own audio files (like user playlists in other games).

Yeah, this is basically a “bonus” feature I implemented that wasn’t requested for Paragon. I did it as an “epic friday” project a few months ago and pitched it as a general feature to be added for 4.11. It’s not the fanciest implementation, but it works pretty well and I think it’ll be useful for people, at least until we have time to implement an “advanced occlusion” system (or partner with an audio tech vendor, etc). It uses the pre-existing per-voice LPF that was already being used in the engine (previously modulated with parameters variously called things like HighFrequencyGain, etc). I think the most effective implementations will probably use the occlusion volume scaling parameter in conjunction with the LPF frequency value.
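As a hedged sketch of how that combination might look in practice (field names taken from the occlusion settings in the UE4 attenuation API added around 4.11; verify against your version), pairing the occlusion volume scaling with a fairly gentle LPF cutoff and a longer interpolation time gives the smoother result described earlier in the thread:

```cpp
// Sketch only: occlusion settings on a sound's attenuation, combining the
// volume scaling parameter with the LPF cutoff and a slower interpolation.
#include "Sound/SoundAttenuation.h"

void SetupOcclusion(FSoundAttenuationSettings& Settings)
{
    Settings.bEnableOcclusion = true;                   // trace from listener to source each update
    Settings.OcclusionLowPassFilterFrequency = 6000.0f; // gentle cutoff - avoid an extreme "super lowpass"
    Settings.OcclusionVolumeAttenuation = 0.7f;         // also duck the occluded sound slightly
    Settings.OcclusionInterpolationTime = 0.5f;         // longer interpolation = less jarring filter sweep
}
```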

Which reminds me: I forgot to mention in the Twitch stream that I went back and fixed the way the LPF filters were getting used for various features. Audio Volumes, for example, allow you to apply an LPF to sounds in the volume. The cutoff frequency was being set by a parameter labeled “HFGain”. This was a problem for a few reasons. The first is that the parameter was a number between 0 and 1 and was translated to a frequency in Hz using a strange conversion algorithm that resulted in a frequency spread of ~0 Hz to 6000 Hz. As a sound designer, figuring out what effect this parameter would have was very problematic. Furthermore, the HFGain value (which, again, wasn’t a gain but a cutoff-frequency value) would “mix in” with HFGain values from other systems. For example, if you wanted to filter a sound based on distance but also filter it as it went inside an audio volume, those values would multiply together! The resulting product of the HFGain values would then get translated to a cutoff frequency that was… probably very hard to understand.

Since multiplying frequencies that way doesn’t really make sense, I changed these systems to actually accept frequency values for LPF cutoff frequencies. The final LPF cutoff frequency actually applied to the sound is then the lowest LPF frequency of all the possible subsystems that might apply a filter. In other words, it doesn’t make sense to spend CPU resources on 5 LPF filters when really the final, lowest-frequency LPF is the one you’d actually hear.
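To illustrate that last point (this is an illustrative snippet, not engine source), the combination rule is simply “lowest cutoff wins”, so only one filter ever needs to run per voice:

```cpp
// Illustrative sketch of the "lowest cutoff wins" rule: each subsystem that
// wants to filter a voice (distance filter, audio volume, occlusion, ...)
// reports a cutoff in Hz, and a single LPF runs at the minimum of them.
#include "CoreMinimal.h"

float CombineLowPassCutoffs(const TArray<float>& CutoffsHz)
{
    float FinalCutoffHz = 20000.0f; // ~full bandwidth, i.e. effectively unfiltered
    for (float CutoffHz : CutoffsHz)
    {
        FinalCutoffHz = FMath::Min(FinalCutoffHz, CutoffHz);
    }
    return FinalCutoffHz;
}

// e.g. { 8000 (distance), 3000 (audio volume), 6000 (occlusion) } -> 3000 Hz,
// and only that single filter is applied to the voice.
```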

The stream, as you know, wasn’t just about 3D audio. It was sort of split between 3D-audio-related things (occlusion and stereo spatialization) and sound-control-related things: focus and concurrency groups. I feel that the biggest gains for game audio are often features that allow sound designers to better control what sounds play, when they play, and how. Spatialization is cool, but when it comes down to it, I feel like “sound design” is more important. You can have the most amazingly spatialized audio, but if it’s not the right sound (or is poorly sound designed), it doesn’t matter. Game audio engines really need to provide tools so sound designers can deal with the complexities and dynamics of interactive software.

So, the work with developing our own multi-platform mixer (rather than using all the various platform APIs) will really enable us to do exciting things like allowing users to write their own per-voice and submix effects.

Once you have this ability, wrapping VST plugins (or any plugin) should be pretty straightforward – it’d essentially just be writing the glue code between the audio effect plugin (which is simply a DLL) and the UE4 effect interface. So yes, eventually that sort of thing is something I want to support.

Imagine the workflow where, from a UE4 project, when you go to create a C++ class, you’ll be able to choose an IAudioEffect base class, which will create a subclass that implements the interface to define an audio effect. Then, automatically, this user-created audio effect would show up in the UI as an option for adding effects to sound cues or a (new) submix-graph editor. That’s the sort of long-term vision of where I want to take UE4 audio. I’m only now able to get back to focusing on making progress in that direction after my recent work to help ship Paragon (and Fortnite), so I’m a bit hesitant to make any promises in terms of timelines.
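To make that vision concrete, here’s a purely hypothetical sketch of what such an interface could look like - IAudioEffect as shown below is not a shipping UE4 API, and the names and signatures are illustrative, not Epic’s:

```cpp
// Hypothetical sketch only - not a real UE4 interface. A user-defined effect
// implements a small processing interface and could then be offered as an
// option on sound cues or submixes, per the vision described above.
#include "CoreMinimal.h"

class IAudioEffect
{
public:
    virtual ~IAudioEffect() {}

    // Called once before processing starts.
    virtual void Init(float InSampleRate, int32 InNumChannels) = 0;

    // Process one buffer of interleaved float samples in place
    // (per-voice or per-submix).
    virtual void ProcessAudio(float* InterleavedBuffer, int32 NumFrames) = 0;
};

// Example user effect: a fixed gain reduction.
class FMyGainEffect : public IAudioEffect
{
public:
    virtual void Init(float /*InSampleRate*/, int32 InNumChannels) override
    {
        NumChannels = InNumChannels;
    }

    virtual void ProcessAudio(float* Buffer, int32 NumFrames) override
    {
        const float Gain = 0.5f; // attenuate by ~6 dB
        for (int32 i = 0; i < NumFrames * NumChannels; ++i)
        {
            Buffer[i] *= Gain;
        }
    }

private:
    int32 NumChannels = 2;
};
```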