
Unreal Engine Livestream - Unreal Audio: Features and Architecture - May 24 - Live from Epic HQ

WHAT
Epic Games Lead Audio Programmer Aaron McLeran will describe the architecture of the multi-platform audio renderer, including the submix graph, source rendering, effects processing, realtime synthesis, and plugin extensions. He’ll demonstrate simple implementations of architectural features, walk through a couple of basic audio effect plugins and a synthesizer, and wrap up with a general discussion of the future of Unreal’s audio system.
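For anyone who wants a concrete picture of those extension points before the stream, here’s a minimal sketch of the synthesis side. It assumes the 4.19-era `USynthComponent` API; the class name `USineSynth` is made up, and the `Init`/`OnGenerateAudio` signatures have changed between engine versions, so check `Components/SynthComponent.h` in your build before copying anything.

```cpp
// Hypothetical SineSynth.h -- a bare-bones mono sine synth on top of
// USynthComponent (new audio mixer). Assumes the 4.19-era virtual
// signatures; later versions return values from Init/OnGenerateAudio.
// Requires the AudioMixer module in your Build.cs dependencies.
#pragma once

#include "Components/SynthComponent.h"
#include "SineSynth.generated.h"

UCLASS(ClassGroup = Synth, meta = (BlueprintSpawnableComponent))
class USineSynth : public USynthComponent
{
	GENERATED_BODY()

public:
	USineSynth(const FObjectInitializer& ObjInit)
		: Super(ObjInit)
	{
		NumChannels = 1; // mono output
	}

protected:
	// Called once when the synth starts; cache the output sample rate.
	virtual void Init(const int32 InSampleRate) override
	{
		SampleRate = (float)InSampleRate;
	}

	// Called on the audio render thread whenever the mixer needs samples.
	virtual void OnGenerateAudio(float* OutAudio, int32 NumSamples) override
	{
		const float PhaseDelta = 2.0f * PI * Frequency / SampleRate;
		for (int32 i = 0; i < NumSamples; ++i)
		{
			OutAudio[i] = 0.5f * FMath::Sin(Phase); // -6 dB sine
			Phase = FMath::Fmod(Phase + PhaseDelta, 2.0f * PI);
		}
	}

private:
	float SampleRate = 44100.0f;
	float Frequency = 440.0f; // A4
	float Phase = 0.0f;
};
```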

WHEN
Thursday, May 24th @ 2:00PM ET - Countdown

WHERE
Twitch
YouTube
Facebook

WHO
Aaron McLeran - Lead Audio Programmer - [@minuskelvin](https://twitter.com/minuskelvin)
Tim Slager - Community Manager - [@Kalvothe](http://twitter.com/Kalvothe)
Amanda Bott - Community Manager - [@amandambott](http://twitter.com/amandambott)

If you have questions for our guests, feel free to toss them below and we’ll try to get to them on the stream.

Thanks much for the stream :smiley:

Question 1: I understand this is still a moving target, but will the official documentation be updated in the near future to cover the new audio engine? Looking here: https://docs.unrealengine.com/en-us/Engine/Audio. While I love experimenting, I love even more having docs with best practices and implementation details/features I might have missed.

Question 2: Any plans for an ugly-but-functional “mixing board” or some sort of UI to see/edit the entire audio setup in one place? Audio bits seem to be buried in a number of places… Particularly while PIE-ing, a single cohesive UI would be great for checking levels/clipping, adjusting/toggling effects, etc. for rapid iteration.

No rush on either of these, just curious if there’s a roadmap/timeframe (or if I’m missing the obvious).

Can this stream please cover the pros/cons of using the Unreal audio system vs. middleware like FMOD or Wwise?

I can’t download Fortnite.

Unsupported OS.

Yesterday I was visualizing my studio and myself singing… anyone interested? And how do you get the colors in there?

Looking forward to this!!

Like most things, I imagine it will be determined by what you’re trying to do and how you’re trying to do it. You can probably only make this decision for yourself once you learn about the options available to you. That said, I’d like to know what seasoned developers think about the options out there.

**QUESTION:** A built-in, cross-platform engine solution is needed to capture audio from the microphone on mobile (Android and iOS). Is this planned for the near future? I believe this is basic functionality and hope it will be considered a priority.

Thanks for the stream! Looking forward to it.

“Sounds” good! :)

Just want to echo acatalept’s second question asking whether there are any plans for an audio mixer panel. It’s purely a UX/UI thing, but the one Unity has is really nice, with the ability to call mix snapshots at runtime.

Q: It seems anything is possible with interactive audio now. In what directions do you envision the editor and Blueprint ‘user’ experiences going, in the near and far future? Especially in regard to working with and creating interactive audio, but a bigger-picture view is welcome too.

I can’t wait!

I have a question I hope he answers: will driving gameplay with voice on smartphones be Blueprint-friendly soon, or is there still a long way to go before that feature is added?

Question #1

For people who are making rhythm games and real-time instruments, we are prone to latency issues.
In theory, the audio alone can be cut down to 28ms (which is great), but we also have to think about “input latency” (for both software and hardware).

Here’s what can add up in the stack…

Software (version 4.19)

  1. Audio Latency: 28ms (60fps + 512/22050 setup, w/ new audio mixer enabled)
  2. Input Latency: 33ms (60hz)

Hardware (w/ wide range of variance)

  1. Keyboard: 2.83 to 10.88ms (PS/2) ~ 18.77 to 32.75ms (USB)
  2. Speaker/Headset: Bluetooth is the worst…

= almost 100ms (PC can be the worst platform)
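A quick back-of-envelope sum of that stack (all figures are the estimates above, not measured engine values; note the buffer term alone is 512/22050 ≈ 23.2ms, with the rest of the 28ms presumably mixer/device overhead):

```cpp
// Summing the latency stack from the post above (poster's estimates,
// UE 4.19 at 60 fps with the new audio mixer, 512-frame buffer @ 22050 Hz).
#include <cstdio>

int main()
{
    // Software
    const double AudioMs    = 28.0;  // 512/22050 buffer (~23.2 ms) + overhead
    const double InputMs    = 33.0;  // ~2 frames of input sampling at 60 Hz

    // Hardware (USB keyboard, upper end of the 18.77-32.75 ms range;
    // wired audio output assumed ~0 ms -- Bluetooth would add far more)
    const double KeyboardMs = 32.75;

    const double TotalMs = AudioMs + InputMs + KeyboardMs;
    std::printf("input-to-sound latency: ~%.0f ms\n", TotalMs); // ~94 ms
    return 0;
}
```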

Are there any future plans (e.g. implementing “sub frame tick” or maybe something even more awesome) that we can look forward to?
Are there any other ways to improve audio response time? Can we really make (playable) real-time instruments someday?

Also, can you give us more options to experiment with (e.g. removing the lower cap on “callback buffer size”), or would that simply be a bad idea?

Question #2

For people who want to learn DSP (without any C++ knowledge, but familiar with playing around with instruments), can you give us advice on how or where to start? Maybe a learning path? What about Pure Data (which seems like a visual scripting language)? Also, what do you see in the future?

Thanks!

Good questions – I’ll chat about them on the stream. 1) Docs are def bad. We have a plan to get better docs out on the new audio engine stuff soon. It’s been delayed primarily because the new engine isn’t on by default yet. We want to wait until we’ve launched with it in Fortnite on our platforms before we do that. However, games are shipping with it. It’s hard to have docs that say “this only works in this one mode”, etc. Once it’s on by default, we’ll get cracking on docs.

2) We have plans for a more advanced mix system that I’ll try to remember to talk about on the stream. It’s not quite the #1 priority yet, but it’s coming up. I agree that visualizing mixing is important. I’m not convinced a DAW-like mixing console is the best way to do that (primarily because game mixes are more matrix-like than linear), but it may be, or at least be part of a visualization solution.
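To unpack “matrix-like”: in a game, every source or submix can send to many destinations with independent gains, so a mix snapshot is closer to a 2-D gain matrix than to a single row of console faders. A purely illustrative sketch (not engine code; all names are made up):

```cpp
// Illustrative only -- a game mix snapshot as an NxM gain matrix,
// Gains[source][submix], rather than one linear fader per channel.
#include <array>

constexpr int NumSources = 3;  // e.g. dialogue, weapons, music
constexpr int NumSubmixes = 2; // e.g. reverb, master

using FMixMatrix = std::array<std::array<float, NumSubmixes>, NumSources>;

// Accumulate one audio block: every source feeds every submix through its
// own gain, which is why one row of faders can't show the whole mix state.
void MixBlock(const float* const* Sources, float* const* Submixes,
              const FMixMatrix& Gains, int NumFrames)
{
    for (int s = 0; s < NumSources; ++s)
        for (int m = 0; m < NumSubmixes; ++m)
            for (int f = 0; f < NumFrames; ++f)
                Submixes[m][f] += Gains[s][m] * Sources[s][f];
}
```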

@Minus_Kelvin How do Mixes behave when several of them are triggered at once/overlap?

@Amanda.Bott What is the name of the short sci-fi film in the community spotlights (the last one)?

@Minus_Kelvin Do you have a link to that 2017 GDC example project you mentioned? I saw the GDC video, but I didn’t know that project ever got “officially” released.

Additionally, is there any way to make the Synthesis stuff easier to modify? For example, the Sound Cue has a dedicated editor where you can make adjustments and then hear them “on the fly,” as it were. I know that’d be hard to do for the Synthesizer, but it’s a pain to make an adjustment, open PIE, see how it sounds, close PIE, make another adjustment, open PIE, etc. Some way of at least being able to preview the sound in the editor would be really helpful for tweaking and debugging, even if it’s not a full-on graph editor.