Audio Engine Updates Preview - Feb 2nd - Live from Epic HQ

Yes. 4.15 is shipping with the version we are using for Robo Recall, which is a PC-only title. The other platform backends are in the works as we speak. A few of the Epic platform guys are helping me out with that effort. My goal is to have all the backend platforms working by 4.16, which includes Android. And thus, when the Android backend is implemented, it’ll have all the features. That’s exactly the point of doing our own audio renderer! And yes, Android is a perfect example of the platform problem UE4 was facing. Not only does it not support spatialized audio (due to the limitations of the native Android audio API), it also doesn’t support pitch shifting! Or any effects, etc. Just bad. Especially for GearVR.

I mean… anything’s possible! But I’m not sure what you’re asking. I don’t deal with Sequencer myself, so I might have to defer to that team.

But yeah, we’ll be better positioned to more tightly integrate with other tools, including Sequencer.

I think I answered this – 4.16.

As for specifically Oculus, I’ll just need to get the Oculus Audio SDK libs for those platforms.

Not currently and not for 4.15. However, I might go ahead and add support for mic input with a synth component to show off at GDC. Should be easy enough to do. :slight_smile:

In my opinion, EAX support is in general dead in game audio and probably never coming back. For those who don’t know what that means, it’s “Environmental Audio Extensions” and has a long history in game audio but dropped out of favor somewhere in the mid-2000s.

Not only does it go against the general goals of a platform-independent audio renderer (what’s more platform dependent than a specific piece of hardware!?) but it’s also not really needed anymore. Most games are way more GPU bound than CPU bound. Audio is generally not as expensive as you might think, especially with modern CPUs. Furthermore, the PC industry in general has progressed to the point where the vast majority of people don’t really want to buy specific pieces of hardware to get specific feature support. They generally want their games to just work and sound the same on whatever they’re using.

And personally, I simply don’t like the way almost any hardware-accelerated audio system sounds. My feeling is that people’s fondness for EAX quality is mostly nostalgia. When EAX effects first came out, doing software reverb or other effects was pretty much an impossibility, and the “effect” was appreciated from a point of view of novelty. In no way did they ever sound “realistic”, nor did they do any kind of environmental modelling. Just check out retrospective YouTube videos showing A-B demos of games with and without EAX.

Here are some examples I found by googling:
https://youtube.com/watch?v=30fTc5t5QNU
https://www.youtube.com/watch?v=mYDmcR8gJyU
https://youtube.com/watch?v=Vmk3dFQHX0I
Then listen to the video Dan made with the new master reverb I wrote in the new audio engine.

That said, I think a hardware accelerator designed for general audio processing would be great. For example, one of the heaviest computations in audio for lots of cool effects is “convolution”. A hardware accelerated convolution operation would be fantastic. HRTF processing, IR reverbs, etc, could be done on hardware but the sound would still be the same on all platforms, including ones without the hardware acceleration. The hardware acceleration would simply make the math faster. Such a thing would be very cool and get you the best of both worlds.
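For context, here’s a minimal illustration (plain standalone C++, not engine code) of why convolution is so heavy: a naive time-domain implementation costs one multiply-add per output sample per IR tap, which is exactly the kind of math that FFT-based or hypothetically hardware-accelerated paths would speed up.

```cpp
// Naive time-domain convolution of a signal with an impulse response.
// Cost is O(N * M), which is why IR reverbs and HRTF processing benefit
// so much from FFT-based (or hardware-accelerated) implementations.
#include <cstddef>
#include <vector>

std::vector<float> Convolve(const std::vector<float>& Signal,
                            const std::vector<float>& ImpulseResponse)
{
    std::vector<float> Output(Signal.size() + ImpulseResponse.size() - 1, 0.0f);
    for (std::size_t n = 0; n < Signal.size(); ++n)
    {
        for (std::size_t m = 0; m < ImpulseResponse.size(); ++m)
        {
            // Every output sample accumulates one multiply-add per IR tap.
            Output[n + m] += Signal[n] * ImpulseResponse[m];
        }
    }
    return Output;
}
```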

BTW, here’s an interesting interview with Rich Heimlich about the history of audio in PC gaming, including lots of history about audio cards.

I think I missed it :(( Is the video of this great preview available anywhere, like YouTube?

So I missed the stream but do have a quick question: would it be possible to support IR reverbs in future iterations of the audio engine? This would probably require some fancy form of DSP…

Also, one really cool feature would be to bind MIDI controllers to parameters in-editor. I think there’s actually a plugin for MIDI devices already, though I’m not sure how easy it is for the editor to link up controllers to sliders, etc.

Missed the stream but I’m incredibly excited about the new audio engine and the improvements. So happy to see things rolling closer to physically based audio! The reverb ball demo sounded amazing! It will be super exciting to start getting a huge number of possible variations out of a small amount of input audio; immersion in games will rise dramatically!

One other thing I forgot to ask was could we get VoIP support for Android?

And so looking forward to 4.16 now! (proper android audio support!)

anyone know if this will allow music to be played between levels in a non-streamed world?

That would be nice!

Dunno if it was really answered or not, but we on my team are looking for some Audio Volume enhancements regarding switching between different sounds in a cue depending on whether you’re inside a small, medium, or large room, under a roof, or outside. We got it working when placing our custom-made Audio Volumes outside in levels, but when they intersect with buildings and the outside environment it gets very tricky.
To explain it better, it’s how BF3-BF1 use it: to play certain sounds with real recorded tails from inside rooms, or valleys, fields, etc. Not talking about computer-generated reverb effects.
We’re thinking of using physmats for this, but were hoping that UE4 would have something prebuilt for people who want this feature. At least it would open a new world for me as a sound designer.

Yeah, there will be better support for 3rd party audio extensions in a number of areas: HRTF/Spatialization, reverb, occlusion, etc. That includes IR verbs. I’m working with a plugin maker right now that I can’t name to get support for IR verbs, etc.

Re: MIDI.

This is an idea we had too, but the MIDI plugin is designed to be used in BP, which is not the same as working in the editor. It’d have to be a pretty big change to do MIDI in-editor – probably a totally new tool that has MIDI mappings to things. Not quite sure how it would exactly work. But it’s an idea.

There’s already support for this. Audio components can persist between level loads.
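For anyone looking for the code-side hook, here’s a minimal sketch, assuming the SpawnSound2D overload with the bPersistAcrossLevelTransition flag is available in your engine version (worth double-checking GameplayStatics in your build):

```cpp
// Sketch: spawn a 2D music component that survives a level transition.
// Assumes the bPersistAcrossLevelTransition argument on SpawnSound2D
// exists in your engine version.
#include "Kismet/GameplayStatics.h"
#include "Components/AudioComponent.h"
#include "Sound/SoundBase.h"

UAudioComponent* StartPersistentMusic(const UObject* WorldContextObject, USoundBase* Music)
{
    return UGameplayStatics::SpawnSound2D(
        WorldContextObject,
        Music,
        /*VolumeMultiplier=*/1.0f,
        /*PitchMultiplier=*/1.0f,
        /*StartTime=*/0.0f,
        /*ConcurrencySettings=*/nullptr,
        /*bPersistAcrossLevelTransition=*/true,
        /*bAutoDestroy=*/false);
}
```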

You’re talking about what I called “contexts” when I was a programmer on CoD (MW3 and AW). I had a feature where your variations had an optional user-defined context that could be dependent on the listener/source configuration relative to audio brushes/trigger volumes. That sort of thing should be done in a plugin/outside of the audio renderer. The audio renderer is lower level than such a feature. If I were you, I’d do it as a special trigger volume, and create a UObject/UStruct wrapper around USoundWaves with your enumerated context, which your custom trigger volume could use. I’d personally like to get more into writing utility plugins for various game-dependent features (not every game would need such a thing) – there’s some demand, for example, for dialogue management systems, procedural music and music-stitching utilities, etc. Such stuff will probably be added as optional plugins. In general, we’re trying to move more in that direction at Epic as we try to reduce the feature bloat of the core functionality.
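As a rough, hypothetical sketch of that idea (all names below are made up for illustration – this is not an existing engine feature):

```cpp
// Hypothetical sketch: a struct that pairs a sound wave with a user-defined
// context, and a trigger volume that declares which context it represents.
// Game code would query the volume the listener (or source) is inside and
// pick the variation whose Context matches before playing.
#pragma once

#include "CoreMinimal.h"
#include "Engine/TriggerVolume.h"
#include "Sound/SoundWave.h"
#include "RoomContextTriggerVolume.generated.h"

UENUM(BlueprintType)
enum class ERoomContext : uint8
{
    SmallRoom,
    MediumRoom,
    LargeRoom,
    UnderRoof,
    Outside
};

USTRUCT(BlueprintType)
struct FContextSoundVariation
{
    GENERATED_BODY()

    // The context under which this wave should be chosen.
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    ERoomContext Context = ERoomContext::Outside;

    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    USoundWave* SoundWave = nullptr;
};

UCLASS()
class ARoomContextTriggerVolume : public ATriggerVolume
{
    GENERATED_BODY()

public:
    // The context this volume represents (e.g. placed around a small room).
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    ERoomContext Context = ERoomContext::MediumRoom;
};
```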

This is actually technically not “physically based audio”. It’s a simple trick that I’ve found to be quite effective, borrowed from post-production sound design techniques. Basically, things that are supposed to be further away are more wet, things closer by are more dry. Makes a lot of sense, right? Anyway, doing it in a game engine automatically is pretty easy once you have a robust submix system where sounds can automatically “send” their audio to submixes in the same way you would with an aux channel on a mixing board.
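As a minimal sketch of that mapping (illustrative standalone code, not the engine’s implementation), the reverb send level can simply ramp up with listener distance:

```cpp
// Far sources get sent more to the reverb submix (wet), near sources less (dry).
#include <algorithm>

// Returns the reverb send level in [0, 1]. Assumes MaxDistance > MinDistance:
// below MinDistance the source is fully dry, beyond MaxDistance fully wet.
float ComputeReverbSendLevel(float DistanceToListener, float MinDistance, float MaxDistance)
{
    const float Alpha = (DistanceToListener - MinDistance) / (MaxDistance - MinDistance);
    return std::max(0.0f, std::min(1.0f, Alpha));
}
```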

Actually, one thing I’d really love that I forgot to ask about is a new way to do sound concurrency, based on distance to source (which updates continuously).

Basically I’m doing a vehicle-based MP game with RTS elements, so there are a lot of objects on the map and lots of long-range projectile-based weapons. All of these vehicles and projectiles are constantly moving and going in and out of audio range, but the issue is that with the current concurrency options, vehicles that are newer but further away don’t make any sound, or older ones stop making sound when newer ones are created, etc.

What would be nice is a concurrency group that constantly polls the sources of sounds and checks their distance from the listener. Sounds are then prioritized / faded in and out based on distance to the listener – regardless of how many of them there actually are in listenable range. E.g. say you have a group with an audible range of 2000 units that can play 10 sounds at a time: if I have 50 objects making that sound within, say, 1500 units, I want them to be prioritized based on distance. As I move around inside that radius and the distance between each unit changes, keep updating and using the shortest distance for sound sources.
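Something like this rough sketch is what I mean (all names made up, not an existing engine feature): poll the sources every update, sort them by distance to the listener, and only let the nearest N through:

```cpp
// Rough sketch of a distance-polling concurrency group: every update, sort
// active sources by distance and mark only the closest MaxVoices (within
// MaxAudibleRange) as audible. A real version would fade rather than toggle.
#include <algorithm>
#include <vector>

struct FDistanceConcurrencySource
{
    float DistanceToListener = 0.0f;
    bool  bAudible = false;
};

// MaxAudibleRange: e.g. 2000 units; MaxVoices: e.g. 10 sounds at a time.
void UpdateDistanceConcurrency(std::vector<FDistanceConcurrencySource>& Sources,
                               float MaxAudibleRange, int MaxVoices)
{
    // Closest sources come first, regardless of spawn order or age.
    std::sort(Sources.begin(), Sources.end(),
              [](const FDistanceConcurrencySource& A, const FDistanceConcurrencySource& B)
              {
                  return A.DistanceToListener < B.DistanceToListener;
              });

    int VoicesUsed = 0;
    for (FDistanceConcurrencySource& Source : Sources)
    {
        const bool bInRange = Source.DistanceToListener <= MaxAudibleRange;
        Source.bAudible = bInRange && (VoicesUsed < MaxVoices);
        if (Source.bAudible)
        {
            ++VoicesUsed;
        }
    }
}
```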

When can I change the volume on the Media Player on Android, and not affect the full game?

Missed the stream, but I have a question: will there be VST plug-in support?