Audio Engine Updates Preview - Feb 2nd - Live from Epic HQ

When is the next game jam?

I have seen the stream, but for some reason the VST questions were dodged 3 or 4 times. Anyway, I find the synth cool. Maybe someone could figure out a VST wrapper plugin for UE4. VST instruments would look like an expandable node with MIDI inputs and audio outputs, and VST effects would have audio inputs and audio outputs.

Can’t have VST support without a big licensing circus with Steinberg :confused:

cool, thanks

Sorry this one took so long to get up, coincidentally there was an issue with the archive’s audio track not working. Qlint fixed it and got it uploaded.

Follow-up Q&A

It won’t be like FMOD or Wwise. It’s still very UE4-y; the new features take advantage of UE4’s existing paradigms.

Yeah, you register a delegate to get notifications on playback progress.
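Roughly, binding it looks like this in C++ (a minimal sketch assuming the UAudioComponent::OnAudioPlaybackPercent delegate the new engine exposes; the exact name and signature can vary by engine version, and AMyActor/AudioComp are placeholders):

```cpp
// Sketch: get playback-progress callbacks from an audio component.
// Assumes UAudioComponent::OnAudioPlaybackPercent (new audio engine);
// the exact delegate name/signature may differ by engine version.
#include "Components/AudioComponent.h"
#include "Sound/SoundWave.h"

void AMyActor::BeginPlay()
{
    Super::BeginPlay();

    // Fires on the game thread as the sound plays (percent in 0.0 - 1.0).
    AudioComp->OnAudioPlaybackPercent.AddDynamic(this, &AMyActor::OnPlaybackPercent);
    AudioComp->Play();
}

void AMyActor::OnPlaybackPercent(const USoundWave* PlayingSoundWave, const float PlaybackPercent)
{
    UE_LOG(LogTemp, Log, TEXT("%s: %.0f%% played"),
        *PlayingSoundWave->GetName(), PlaybackPercent * 100.0f);
}
```

(Note that OnPlaybackPercent needs to be declared as a UFUNCTION for the dynamic binding to work.)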

Comparable. Out of the box, the new engine is cheaper since it processes sources in parallel. However, it opens up new and exciting features (effects, real-time synthesis, etc.), so it’s hard to compare directly. But since all the code will be inside UE4, optimizing and monitoring will be much more doable.

A major goal of the audio engine is to support procedural audio, not only in terms of logic but also in synthesis and effects processing.

HUGE IMPACT.

You can currently do odd things with the listener. I can look at ways to extend that. But it’s strictly above the level of the new audio engine.

Not out of the box, no, but you can easily add that functionality in a C++ component, etc.

VR and all of this are best friends.

The synthesized sounds are treated like any other sound source so all of the things work together.

Audio has always been ready for VR. The biggest thing is binaural (or HRTF) processing, and we’ve had that since before the new audio engine. However, the new engine’s architecture will allow for greater experimentation with respect to physical audio and environmental processing, both of which are big topics in VR audio.

No. A sound source is analogous to an audio track in a DAW. The closest thing you’ll see to something like that is in Sequencer, where you can arrange your audio and see the waveform rendered, etc. But no, game audio is fundamentally different from a DAW, so you won’t see game audio engines trying to replicate the behavior of a DAW.

You can do this now, but you have to code it in C++. You can load anything from disk if you want, get the PCM data yourself, and feed it to a USoundWave. I’d like to do some tutorials showing how this and other “advanced” topics might be done. I’ve also talked about putting together a utility plugin for extracurricular features like this that aren’t really appropriate as a general feature for all games.
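For anyone wondering what that might look like, here’s a rough sketch of the approach (the file parsing is up to you; CreateWaveFromPCM is an illustrative helper, and the direct SampleRate/NumChannels assignments follow the 4.16-era API, with newer versions using setters):

```cpp
// Sketch: wrap interleaved 16-bit PCM you decoded yourself (e.g. loaded
// from disk with FFileHelper::LoadFileToArray and parsed by hand) in a
// procedural sound wave that any audio component can play.
#include "Sound/SoundWaveProcedural.h"

USoundWaveProcedural* CreateWaveFromPCM(const TArray<uint8>& PCMData,
                                        int32 SampleRate, int32 NumChannels)
{
    USoundWaveProcedural* Wave = NewObject<USoundWaveProcedural>();
    Wave->SampleRate = SampleRate; // newer versions: SetSampleRate()
    Wave->NumChannels = NumChannels;
    Wave->Duration = PCMData.Num() / float(sizeof(int16) * NumChannels * SampleRate);
    Wave->bLooping = false;

    // Hand the decoded PCM to the engine; it's consumed as the wave plays.
    Wave->QueueAudio(PCMData.GetData(), PCMData.Num());
    return Wave;
}
```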

No. The audio visualization plugin will likely be deprecated soon. Its functionality will likely be incorporated more deeply into other plugins. I’m writing a “synthesis” plugin which will probably have support for FFT and envelope following on sound sources and submixes.

If you mean VOIP, that technically already exists. Some games are using it, but it’s not easy and it’s not set up to be a core feature of the game. You sort of need programmers to deal with it if you want to use it. I’d like to revisit VOIP and make it easier for people to set up and use.

The new synth component (a 4.16 feature, but I’ll be demoing it at GDC) is specifically set up to make dealing with procedural audio very easy. It currently supports mono or stereo float data. No need to deal with PCM data directly; it handles the format conversions for you.
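To give a ballpark of how little code that is, here’s a minimal sine-tone component against the 4.16-era USynthComponent API (signatures are approximate; in some engine versions OnGenerateAudio returns the number of samples written):

```cpp
// Sketch: a mono procedural sine synth. You fill a float buffer; the
// base class handles the PCM/format conversion for you.
#include "Components/SynthComponent.h"
#include "SineSynthComponent.generated.h"

UCLASS(ClassGroup = Synth, meta = (BlueprintSpawnableComponent))
class USineSynthComponent : public USynthComponent
{
    GENERATED_BODY()

public:
    USineSynthComponent(const FObjectInitializer& ObjInit) : Super(ObjInit)
    {
        NumChannels = 1; // mono float output
    }

protected:
    float Phase = 0.0f;
    float Frequency = 440.0f;
    int32 CachedSampleRate = 48000;

    virtual bool Init(int32& SampleRate) override
    {
        CachedSampleRate = SampleRate;
        return true;
    }

    // Runs on the audio render thread; write NumSamples float samples.
    virtual void OnGenerateAudio(float* OutAudio, int32 NumSamples) override
    {
        const float PhaseDelta = 2.0f * PI * Frequency / CachedSampleRate;
        for (int32 i = 0; i < NumSamples; ++i)
        {
            OutAudio[i] = 0.5f * FMath::Sin(Phase);
            Phase += PhaseDelta;
        }
    }
};
```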

There already is access to the audio engine in C++.

Yeah, the synth component is basically a wrapper around an audio component and a USoundWaveProcedural (which is a USoundBase type). What that means is that you get the power of all the other features. So short answer: yes.

This is already supported in the engine. It’ll just be sample accurate on all platforms.

Both. The 4.16 version will have “per source” (or insert) effects and submix (or bus) effects. Obviously source effects will be more expensive.

Yes.

If you know how to program one, you’ll be able to make one without dealing with the internal guts of the audio engine. I won’t personally make a Moog synth, as that would probably end up being a licensing issue. The synth I made is a pretty standard subtractive synth that you might find familiar, as there are probably a hundred variants out there. Basically, there are a few oscillator inputs (saw/sine/tri/square/noise) that feed into some filters (I implemented a standard state-variable filter and a ladder filter). It has a classic ADSR envelope for amplitude modulation and parameter modulation. I also implemented a modulation matrix, which allows you to map LFOs and the modulation envelope to any number of other modulatable parameters.
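Not the engine code, but for the curious, a single voice of that kind of subtractive chain boils down to roughly this (illustrative plain C++ with a naive saw, a one-pole lowpass standing in for the state-variable/ladder filters mentioned above, and a bare attack/release instead of a full ADSR):

```cpp
// Illustrative subtractive-synth voice: saw oscillator -> one-pole
// lowpass -> linear attack/release gain. Real implementations use
// band-limited oscillators, better filters, and a full mod matrix.
#include <algorithm>
#include <cmath>

struct Voice
{
    float SampleRate = 48000.0f;
    float Phase = 0.0f;        // oscillator phase in [0, 1)
    float FilterState = 0.0f;  // one-pole lowpass memory
    float Env = 0.0f;          // current envelope level
    bool  bGateOn = false;     // true while the note is held

    float Process(float FreqHz, float CutoffHz, float AttackSec, float ReleaseSec)
    {
        // Naive sawtooth in [-1, 1] (not band-limited).
        Phase += FreqHz / SampleRate;
        if (Phase >= 1.0f) Phase -= 1.0f;
        const float Osc = 2.0f * Phase - 1.0f;

        // One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
        const float A = 1.0f - std::exp(-2.0f * 3.14159265f * CutoffHz / SampleRate);
        FilterState += A * (Osc - FilterState);

        // Linear attack while gated, linear release when released.
        const float Step = bGateOn ? 1.0f / (AttackSec * SampleRate)
                                   : -1.0f / (ReleaseSec * SampleRate);
        Env = std::clamp(Env + Step, 0.0f, 1.0f);

        return FilterState * Env;
    }
};
```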

Aaron: No, but it will be easy for people to wrap VST with the UE4 audio effect API if they want to do it themselves and work with Steinberg directly. Audio plugin APIs are all pretty similar: audio stream in, audio stream out. The issue with being a VST host for us right now is licensing, etc. Technically it’s pretty doable.

Sorry, we just got bombarded with questions and were on a tight schedule. Sometimes we can’t answer everything in the timeframe.

No.

Not sure what this means. If you mean the direct audio output from the device, then yeah, such a thing will be very easy now. I might implement something like that just to demo the idea at GDC. I’m thinking of a BP function that grabs audio output from a submix (including the master submix) and feeds it into a USoundWave asset or something. Maybe direct to .wav?
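Purely hypothetical, but the kind of BP hook being floated might look something like this (neither of these names exists in the engine; this is just the shape of the idea):

```cpp
// Hypothetical sketch only: a Blueprint-callable function that records a
// submix's mixed output. URecordingLibrary and RecordSubmixOutput are
// invented names, not engine API.
#include "Kismet/BlueprintFunctionLibrary.h"
#include "RecordingLibrary.generated.h"

class USoundSubmix;
class USoundWave;

UCLASS()
class URecordingLibrary : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // Capture DurationSec of the submix's output into a new sound wave
    // (or potentially straight to a .wav on disk).
    UFUNCTION(BlueprintCallable, Category = "Audio|Recording")
    static USoundWave* RecordSubmixOutput(USoundSubmix* Submix, float DurationSec);
};
```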

Yes. There’s still no sub-game-frame timing support for audio, though I have an idea for how I might implement it that I want to try. But yeah, any events which occur at the same time on the game thread are guaranteed to happen at the same time on the audio render thread. What that means is that if your game has a pretty steady game thread tick, you can get timing accurate enough for a music game, as long as your musical timings are multiples of the game frame rate.

Anything’s possible. But no, not without a lot more work. DAWs are fundamentally different from a game’s audio engine; e.g. there’s no asset editing support, etc.

I’ve added hooks to make it very easy for 3rd parties to extend audio capabilities. We’ll be working with 3rd party vendors who have some pretty amazing occlusion solutions.

There’s plans to do more tools support in the coming year. I’m personally on the fence about the utility of a mixing board interface for game audio, at least in the way you would traditionally think of a mixing board, but it does look cool.

The audio engine does its mixing with 32-bit floats; the XAudio2 output device format is opened at whatever the device’s current settings are in the Windows OS (e.g. 24-bit, etc). Sources are still forced to be 16-bit, so it’s not a dramatic change in output quality, but I’d like to look at opening up the required import audio formats and the output/platform compression formats.

I’ve not yet looked into any detailed changes to how audio behaves in Sequencer, but since Sequencer plays sound waves and sound cues, those sound waves and sound cues will be able to use effects like any other audio file.

Not sure what this is asking, but yeah, audio volumes are a thing. I’d like to revisit that feature set once all the old audio engine code is removed; there’s a lot we can do with audio volumes. This is probably something I’d look into adding to a special audio plugin, basically an environmental audio plugin. We’re talking utilities for procedurally generating ambient audio, more robust connections to maps/volumes, better interaction with reverb and other effects, source-based reverb (vs. listener-based reverb), etc.

All of this stuff is exposed to BP. The synthesizer you saw in the preview is BP-controllable. But you have to write the synthesizer itself in C++ as a C++ component. I’d like to look into creating a GUI tool like the material editor for synthesis and effects, like an “audio shader” sort of thing. But that’s down the road.

Yes. You’ll have so much EQ.

You can do that, yes. Events that occur at the same time on the game thread will be guaranteed to happen at the same time in the audio render thread.
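For example (a minimal sketch; the actor and components are placeholders):

```cpp
// Sketch: two one-shots triggered in the same game-thread tick will be
// started on the same audio render buffer by the new engine.
void AMyDrumMachine::TriggerBeat()
{
    KickComponent->Play(); // both Play() calls land on the same game frame,
    HatComponent->Play();  // so they render sample-aligned with each other
}
```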

The audio renderer doesn’t change this part of UE4 audio yet. So we’re currently at the status quo. Once we remove all the old audio engine code, we’ll be able to open up format support more easily since there won’t be as much concern about platform dependencies, etc.

No. But I have written a synth component implementation that lets you load the PCM contents of a sound wave file and modify it like a wave table.

You’ve already got sound occlusion.

It’ll support all the platforms we currently support for UE4.

No sound wave editing yet. I’m not parsing loops or metadata in sound waves yet. We’ll look into more options for sound file imports, etc. in the future.

It already has “Sound FX” and “Music” categories.

Yes. But in C++. You’ll have to brush up on your C++ coding and DSP. But it’ll be very easy for you to add audio effects. 4.16 will come with a bunch of standard audio effects that I’ve implemented so you’ll have examples to learn from. I’m hoping to have some time to put together tutorials and hopefully get people excited about learning how to write DSP effects and synthesis.
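As a taste of the pattern, a trivial gain effect against the 4.16-era source-effect API might look roughly like this (class and struct names changed across engine versions, so treat these signatures as approximate):

```cpp
// Sketch: a source effect that halves the signal level. Runs per audio
// frame on the render thread; signatures approximate the 4.16-era API.
class FMyGainEffect : public FSoundEffectSource
{
public:
    virtual void Init(const FSoundEffectSourceInitData& InitData) override
    {
        // Cache anything you need here (sample rate, channel count, etc.).
    }

    virtual void OnProcessAudio(const FSoundEffectSourceInputData& InData,
                                FSoundEffectSourceOutputData& OutData) override
    {
        // Copy the input frame to the output frame at half volume.
        for (int32 Chan = 0; Chan < InData.AudioFrame.Num(); ++Chan)
        {
            OutData.AudioFrame[Chan] = 0.5f * InData.AudioFrame[Chan];
        }
    }
};
```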

Yes.

Not yet, but we’ve got conversations going with them (and others) for these technologies.

Audiokinetic manages their Wwise plugin. This is an announcement about the built-in audio engine.

I was wondering if there are any plans to include ambisonic support and decoding to binaural (B-Format to binaural). This would be very useful for VR. Otherwise, will it be possible to do partitioned convolution on multichannel signals?

Great work!

Would you be so kind as to point me to a working example of proper binaural audio in UE, please?

I spent weeks faking binaural audio to deliver our PSVR project last year…

Probably a bit of an edge case, but is it possible to direct different sounds to different devices? What I want in my case is to send music out of the engine to a different device/channel from my sound effects. Multi-channel output, really, but not surround sound.

Thanks for the great stream btw.

Hey, continuing on from RoadStar’s question.

Will we at some point get more documentation on using HRTF audio with VR (Oculus)? Info on where to get it is a little fragmented. I know you’ve answered some stuff, so I’m going to sift through, but:

Although HRTF has sort of worked for positional audio in VR, we found that when you rotate your head side to side, the audio would immediately cut to one ear once you were facing 100% in one direction. So: 100% left ear, 0% right ear, or the reverse. Are there settings I’m missing that would solve that?

How does this relate to the just-announced Steam Audio SDK?

https://valvesoftware.github.io/steam-audio

From the article: “Free Unreal and Unity plugin Steam Audio will let sounds bounce around virtual environments”

Whether or not these are completely separate things (the Steam Audio plugin vs. the UE4 “new audio engine”), this is all great news :wink: Thanks much to you and the rest of Epic for the much-needed audio love!

Really happy to see audio getting some love in UE! You guys are just killing it :smiley:

Thanks for sharing this interesting information.

I’m a little confused: is it possible to get sample-accurate timing now?

Can someone explain how I would approach this? I assume this would be done in C++. I’m not the best coder, so a little hint in the right direction would be awesome.
Thanks!

There is an entire subforum for the audio stuff now; I think it’s probably better to ask there.

In fact I think something similar has been discussed there recently: New Audio Engine: Quick-Start Guide - Audio - Epic Developer Community Forums

Edit: oops, I see it was you who asked that question there. But I’ll leave my reply here anyway in case it helps someone else.

I am also curious about this