Unreal Engine Livestream - Unreal Audio: Features and Architecture - May 24 - Live from Epic HQ

WHAT
Epic Games Lead Audio Programmer Aaron McLeran will describe the architecture of the multi-platform audio renderer, including the submix graph, source rendering, effects processing, realtime synthesis, and plugin extensions. He’ll demonstrate simple implementations of architectural features and walk through a couple of simple audio effect plugins and a synthesizer, followed by a general discussion of the future of Unreal’s audio system.
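
For readers unfamiliar with the terms above, here is a minimal, engine-agnostic sketch of how a submix graph hangs together. All names (`Effect`, `Source`, `Submix`) are illustrative and are not the actual Unreal Engine API – this is just the shape of the data flow: sources render buffers through per-source effects, submixes accumulate their sources and child submixes, run their own effect chains, and feed their parent on the way up to the output.

```cpp
// Conceptual sketch of a submix graph (illustrative names, not the Unreal API):
// sources render through per-source effects, submixes mix their inputs and run
// submix-level effects, and the result flows up to the parent/master output.
#include <memory>
#include <vector>

using AudioBuffer = std::vector<float>;

struct Effect
{
    virtual ~Effect() = default;
    virtual void Process(AudioBuffer& InOut) = 0; // in-place DSP
};

struct Source
{
    std::vector<std::unique_ptr<Effect>> SourceEffects;

    AudioBuffer Render(int NumFrames)
    {
        AudioBuffer Out(NumFrames, 0.0f); // a synth or decoded asset would fill this
        for (auto& Fx : SourceEffects)
        {
            Fx->Process(Out);             // per-source effect chain
        }
        return Out;
    }
};

struct Submix
{
    std::vector<Source*> Sources;
    std::vector<Submix*> Children;
    std::vector<std::unique_ptr<Effect>> SubmixEffects;

    AudioBuffer Render(int NumFrames)
    {
        AudioBuffer Mix(NumFrames, 0.0f);
        auto Accumulate = [&Mix](const AudioBuffer& In)
        {
            for (size_t i = 0; i < Mix.size(); ++i) { Mix[i] += In[i]; }
        };

        for (Source* S : Sources)  { Accumulate(S->Render(NumFrames)); }
        for (Submix* C : Children) { Accumulate(C->Render(NumFrames)); }

        for (auto& Fx : SubmixEffects)
        {
            Fx->Process(Mix);             // submix-level effect chain
        }
        return Mix;                       // feeds the parent submix or the output
    }
};
```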

WHEN
Thursday, May 24th @ 2:00PM ET - Countdown

WHERE
Twitch
YouTube
Facebook

WHO
Aaron McLeran - Lead Audio Programmer - [@minuskelvin](https://twitter.com/minuskelvin)
Tim Slager - Community Manager - [@Kalvothe](http://twitter.com/Kalvothe)
Amanda Bott - Community Manager - [@amandambott](http://twitter.com/amandambott)

If you have questions for our guests, feel free to toss them below and we’ll try to get to them on the stream.

Thanks much for the stream :smiley:

Question 1: I understand this is still a moving target – but will the official documentation be updated in the near future to cover the new audio engine? Looking here: Working with Audio | Unreal Engine Documentation. While I love experimenting, I’d love even more to have some docs with best practices and implementation details/features I might have missed.

Question 2: Any plans for an ugly-but-functional “mixing board” or some sort of UI to see/edit the entire audio setup in one place? Audio bits seem to be buried in a number of places… Particularly while PIE-ing, a single cohesive UI would be great for checking levels/clipping, adjusting/toggling effects, etc., for rapid iteration.

No rush on either of these, just curious if there’s a roadmap/timeframe (or if I’m missing the obvious).

Can this stream please talk about the pros/cons of using the Unreal audio system vs. middleware like FMOD or Wwise?

I can’t download Fortnite.

Unsupported OS.

Yesterday I was visualizing my studio and myself singing… anyone interested? And how do you get the colors there?

Looking forward to this!!

Like most things, I imagine it will be determined by what you’re trying to do, and how you’re trying to do it. You can probably only really make this decision for yourself, once you learn about the options available to you. That said, I’d like to know what seasoned developers think about the options that are available.

**QUESTION:** An engine built-in, cross-platform solution is needed to capture audio from the microphone on mobile (Android and iOS). Is this planned for inclusion in the near future? I believe this is basic functionality and hope it will be considered a priority.

Thanks for the stream! Looking forward to it.

‘Sounds’ good! :)

Just want to echo acatalept’s second question asking if there are any plans for an audio mixer panel. It’s purely a UX/UI thing, but the one Unity has is really nice, with the ability to call mix snapshots at runtime.
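
For context, a runtime mix snapshot is conceptually just a named set of bus gains that the live mix interpolates toward over a fade time. Here is a rough sketch under that assumption – the `MixSnapshot`/`MixState` names are made up for illustration and are not an Unreal or Unity API:

```cpp
// Conceptual sketch of "calling a mix snapshot at runtime" (illustrative names):
// a snapshot maps bus names to gains, and activating one blends the live mix
// toward those gains a little each tick until the fade completes.
#include <algorithm>
#include <map>
#include <string>

using MixSnapshot = std::map<std::string, float>; // bus name -> linear gain

struct MixState
{
    MixSnapshot Current;

    // Call every tick with Alpha = DeltaTime / FadeTime (clamped to [0, 1]).
    void BlendToward(const MixSnapshot& Target, float Alpha)
    {
        Alpha = std::clamp(Alpha, 0.0f, 1.0f);
        for (const auto& [Bus, TargetGain] : Target)
        {
            float& Gain = Current[Bus];   // starts a previously unseen bus at 0
            Gain += (TargetGain - Gain) * Alpha;
        }
    }
};
```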

Q: It seems anything is possible with interactive audio now. In what directions do you envision the editor and Blueprint ‘user’ experiences going, in the near and far future? Especially with regard to working with and creating interactive audio – but a bigger-picture view is welcome too.

I can’t wait

I have a question I hope he can answer: is driving gameplay with voice on smartphones going to be Blueprint-friendly soon, or is there still a long way to go before that feature is added?

Good questions – I’ll chat about them on the stream. 1) Docs are def bad. We have a plan for getting better docs out on the new audio engine stuff soon. It’s been delayed primarily because the new engine isn’t on by default yet. We want to wait until we’ve launched with it on Fortnite on our platforms before we do that. However, games are shipping with it. It’s hard to have docs that say “this only works in this one mode”, etc. Once it’s on, we’ll get cracking on docs.

2) We have plans for a more advanced mix system – I’ll try to remember to talk about it on the stream. It’s not quite the #1 priority yet, but it’s coming up. I agree visualizing mixing is important. I’m not convinced a DAW-like mixing console is the best way to do that (primarily because game mixes are more matrix-like than linear), but it may be. Or at least be part of a visualization solution.
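
To illustrate the “matrix-like” point: several mix modifiers can be active at the same time, each contributing a gain per sound class, so the effective level of a class comes from combining a whole column of entries rather than reading a single fader. The sketch below assumes a simple multiplicative combination purely for illustration – the engine’s actual combination rules may differ, and all names are made up.

```cpp
// Conceptual sketch of matrix-like mixing (illustrative, assumes multiplicative
// combination): each active modifier holds per-class gains, and a class's
// effective gain is the product of every active modifier's entry for it.
#include <map>
#include <string>
#include <vector>

using ClassGains = std::map<std::string, float>; // sound class -> gain

float EffectiveGain(const std::vector<ClassGains>& ActiveModifiers,
                    const std::string& SoundClass)
{
    float Gain = 1.0f;
    for (const ClassGains& Modifier : ActiveModifiers)
    {
        auto It = Modifier.find(SoundClass);
        if (It != Modifier.end())
        {
            Gain *= It->second; // each active modifier scales the class
        }
    }
    return Gain;
}

// e.g. a "Combat" modifier ducking Music to 0.5 and a "Dialogue" modifier
// ducking Music to 0.7 would combine to 0.35 on the Music class.
```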

@Minus_Kelvin How do Mixes behave when several of them are triggered at once/overlapping?

@Amanda.Bott What is the name of the short sci-fi film in the community spotlights (the last one)?

@Minus_Kelvin Do you have a link to that 2017 GDC example project you mentioned? I saw the GDC video, but I didn’t know that project ever got “officially” released.

Additionally, is there any way to make the Synthesis stuff easier to modify? For example, the Sound Cue has a dedicated editor where you can make adjustments and then hear them “on the fly,” as it were. I know that’d be hard to do for the Synthesizer, but it’s a pain to make an adjustment, open PIE, see how it sounds, close PIE, make another adjustment, open PIE, etc. Some way of at least being able to demo what the sound will sound like in the editor would be really helpful for tweaking and debugging, even if it’s not a full-on graph.

I’d love to have the PDF from the stream.

That PDF was just beautiful. Can I frame it and put it on my wall?