New Audio Engine: Quick-Start Guide

Been having a lot of fun with the new audio engine so far, great work!

Was wondering if there was any way of getting our hands on the content-example-esque demo scene shown at GDC?

I get no audio when the game is packaged. Has anyone else had this issue?

Turns out this was due to needing UnfocusedVolumeMultiplier=1.0 in WindowsEngine.ini, as I was triggering the game from another app.
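In case it saves someone else a search, here’s a minimal sketch of the config change. I’m assuming the setting lives under the [Audio] section; double-check against your engine’s BaseEngine.ini:

```ini
; WindowsEngine.ini -- keep audio at full volume when the game window loses focus
[Audio]
UnfocusedVolumeMultiplier=1.0
```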

After adding a reverb effect to the master submix using AddMasterSubmixEffect, I can’t hear anything. Do I need to do anything else to get this working?

We would like to have quite a few tutorials, videos, blog posts, documentation, etc., but we’ll be holding off on most of that educational production at least until the new Audio Engine is no longer experimental.

We’re still shaking things out!

(gif: Gary Busey look-alike shake dance)

We’re not sure what version that will be yet, as release dates can fluctuate. We’ve been making great headway now that we have a new Audio Programmer joining Aaron: 2 times the power, 2 times the audio!

There is already a Master Reverb on by default; you can activate it by the traditional method. I would have to check with Aaron, but I believe that AddMasterSubmixEffect adds inline effects, which means you’ll need to manage dry/wet levels on your Submix Effect Preset. I also believe the reverb has no dry signal by default, because the expectation is that you’ll be using a Send for your reverb (as is traditional in audio production).
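If it helps, here’s a minimal sketch of what that dry/wet management might look like in C++. It assumes the USubmixEffectReverbPreset and UAudioMixerBlueprintLibrary::AddMasterSubmixEffect from the new audio engine; the include paths and the exact default levels are my assumptions, so verify against your engine version:

```cpp
#include "AudioMixerBlueprintLibrary.h"                  // path approximate
#include "SubmixEffects/AudioMixerSubmixEffectReverb.h"  // path approximate

void AddInlineMasterReverb(const UObject* WorldContext)
{
    // Create a reverb preset and give it an explicit dry level so the
    // unprocessed signal still passes through the inline effect.
    USubmixEffectReverbPreset* ReverbPreset = NewObject<USubmixEffectReverbPreset>();

    FSubmixEffectReverbSettings Settings;
    Settings.WetLevel = 0.5f; // reverberated portion of the signal
    Settings.DryLevel = 1.0f; // assumed to default to 0.0 for send-style use
    ReverbPreset->SetSettings(Settings);

    // Adds the effect inline on the master submix (new audio engine only).
    UAudioMixerBlueprintLibrary::AddMasterSubmixEffect(WorldContext, ReverbPreset);
}
```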

Full gain on both oscillators seemed to be the source of all my confusion, thanks!

Another quick question: can the Modular Synth Component be spatialized? Is checking “Allow Spatialization” in its settings, like any other sound-producing class, all I need to do? Do I need to use one of these new Submix classes or something? Many of the settings, like Spread and Osc Sync and the Delay settings, seem to imply a fixed stereo pipeline. I’ve got synthesizer functionality embedded inside an object that can also play Sound Cues at a location, so the synth notes would need to sound like they’re coming from the same spot in multiplayer. Thanks!

The Modular Synth Component has an Audio Component and plays through a SoundBase under the hood, which means you absolutely have spatialization options! :smiley: The output is stereo, so it will work the way spatialized stereo works in UE4.
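To make that concrete, here’s a minimal sketch of attaching and spatializing the synth from C++. The helper function and attenuation asset are my own illustration, and the include path and NoteOn signature should be checked against your version of the Synthesis plugin:

```cpp
#include "SynthComponents/EpicSynth1Component.h" // Synthesis plugin, path approximate

// Hypothetical helper: attach a spatialized synth to any actor.
void AttachSpatializedSynth(AActor* Owner, USoundAttenuation* Attenuation)
{
    UModularSynthComponent* Synth =
        NewObject<UModularSynthComponent>(Owner, TEXT("Synth"));
    Synth->RegisterComponent();
    Synth->AttachToComponent(Owner->GetRootComponent(),
                             FAttachmentTransformRules::KeepRelativeTransform);

    Synth->bAllowSpatialization = true;        // same flag as other sound sources
    Synth->AttenuationSettings = Attenuation;  // a USoundAttenuation asset you provide

    Synth->Start();
    Synth->NoteOn(60.0f, 100); // middle C at velocity 100
}
```

Since the component is attached to the actor, the synth notes should come from the same spot as the actor’s other sounds.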

Source Effect Presets Empty?

I followed the steps to use the new audio engine, and I can see all the new submixes and effect chain tools in my Sounds section. But when I try to create a new Source Effect Preset, the drop-down window that pops up shows up empty! Am I doing something wrong? I had it working previously on another project, and now it just doesn’t show up. Could it be a sign that I’m not really using the new audio engine? I’m using the -audiomixer command line argument on a shortcut to a duplicate of my engine launcher, as the guide seems to suggest, but it doesn’t seem to work. Any help would be greatly appreciated.

Make sure the Synthesis plugin is turned on: all the DSP effects are inside it.
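You can toggle it in the Plugins browser, or pin it in your project file. A sketch of the .uproject entry, assuming your project doesn’t already declare a Plugins array:

```json
{
    "Plugins": [
        {
            "Name": "Synthesis",
            "Enabled": true
        }
    ]
}
```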

Oh WOW! That was it! Thank you so much. I totally spaced on that one!

Hey,

I’m trying to build a VR step sequencer with Unreal Engine, using Wwise and the Oculus Spatializer for 3D audio. In the current state of the new audio engine, is there a way to play sound files with sample accuracy? Everything I’ve tried so far has depended on the frame rate, which obviously doesn’t offer very accurate timing for audio. But maybe I’m missing something.
I’d love to see a short walkthrough of how to achieve sample-accurate timing if this is possible.

Thank you for the great work on the new audio engine, the synthesis features are amazing!

So part of the challenge (regardless of what engine you use) is that logic and user interaction traffic through the game thread, and the game thread is synchronized with your frame rate.

We don’t have anything out of the box for scheduling inter-frame event calls.

If you wish to use Blueprints, you will need to adhere to the limitations of your frame rate tick.

With that said, in the new Audio Engine, if you make play calls for multiple audio files on the same frame, they will all start synchronized. If you mark them Virtualize When Silent, they will continue to track playback even when silent.
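As a minimal sketch of that pattern (the helper and the layer-muting logic are my own illustration; SpawnSound2D is the standard Blueprint/C++ call):

```cpp
#include "Kismet/GameplayStatics.h"
#include "Components/AudioComponent.h"

// Start several sound layers in one frame so the new audio engine begins
// them together. Assumes each asset has "Virtualize When Silent" enabled
// so muted layers keep tracking playback position.
void StartSynchronizedLayers(const UObject* WorldContext, const TArray<USoundBase*>& Layers)
{
    TArray<UAudioComponent*> Components;
    for (USoundBase* Sound : Layers)
    {
        // All play calls issued in the same frame start in sync.
        Components.Add(UGameplayStatics::SpawnSound2D(WorldContext, Sound));
    }

    // Mix by adjusting volume; a silenced layer virtualizes instead of
    // stopping, so it stays time-aligned when brought back up.
    if (Components.Num() > 1 && Components[1])
    {
        Components[1]->SetVolumeMultiplier(0.0f);
    }
}
```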

However, it’s important to appreciate that there are many threads at work.

You have the Game Thread, the Audio Logic Thread, and the Audio Rendering Thread. If you wanted, you could create an object in code that operates on the Audio Logic Thread and have visualizations wait for delegates from that thread. But we don’t have a walkthrough for that, and it’s not a trivial thing to build.

With that said, the Audio Logic Thread can tick faster than the Game Thread.

Or you can optimize performance on your game thread to ensure high framerates.

It’s not clear to me from the provided pictures how to modulate a value on a Source Effect Preset. How do I get a reference to the Source Effect Preset? Do I have to drill into it from an Effect Chain? Are Effect Chains and Source Effect Presets global instances, or are they modulated uniquely per instance, like Dynamic Material Instances?

Never mind, I didn’t see the Source Effect (Some Effect) Object type before, only the Source Effect (Some Effect) Preset type.

Yes, they are globally controlled at the moment, but we’re looking at ways we can modulate instances of the Source Effects without having to create a bunch of Instanced assets (like with Materials).
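As a concrete sketch of that global behavior, here’s what driving a preset from C++ might look like, using the Synthesis plugin’s bit crusher as an example (the settings field names and include path are from memory, so verify them):

```cpp
#include "SourceEffects/SourceEffectBitCrusher.h" // Synthesis plugin, path approximate

// Update a Source Effect Preset at runtime. Because presets are global
// objects right now, every active source referencing this preset hears
// the change at once.
void UpdateBitCrush(USourceEffectBitCrusherPreset* Preset, float SampleRate, float BitDepth)
{
    FSourceEffectBitCrusherSettings Settings;
    Settings.CrushedSampleRate = SampleRate;
    Settings.CrushedBits = BitDepth;
    Preset->SetSettings(Settings);
}
```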

Hi, another quick one for you. Is it possible to control which audio output device the Modular Synth Component plays through? Whenever I test my patch in VR, the rest of my audio (which is coming from a different engine entirely, never mind that) comes out of whatever my Windows system default is (my monitor), while just the Synthesizer component plays only through my Rift headphones. If I PIE, it comes out of the monitor as expected.

Thanks

Hey Mindridellc! Are you using the new Audio Engine or the old Audio Engine? (-audiomixer)

And you’re saying that when you do VR Preview, the synth is coming out of your Rift and when you do regular PIE it comes out of your Windows default?

And you say that when you do VR Preview, sounds being used by another engine come out of your Windows Default and NOT the Rift?

The expectation from Unreal Engine’s perspective is that if you’re doing a VR Preview, the audio should come out of your Rift’s headphones (or whatever VR device you have active), and that regular PIE should come out of your computer default OR whatever you’ve set as your default output in your Windows Project Settings.

This would be great. Right now I have a Blueprint Actor which uses the Effect Chain when one of its variables is set to one type, and bypasses the chain when that variable is set to another type. Swapping the chain out at runtime is not an option: since the Effect Chain is global, one instance of that object’s message to the chain overrides the other’s.