New Audio Engine: Quick-Start Guide

Fantastic presentation, very impressive.
Still, we can’t do much dynamic content with the timeline without sample-accurate timing. As a heavy REAKTOR user, I’m very excited. :)

Very nice!
But I have a problem. I’m using it in an Android app, and when I suspend the app the music keeps running in the background. I need to kill the app process to kill the music. Can anybody help?

Hi,

So, if I understand you correctly, it’s not really possible to manually specify the output device of the Unreal audio engine (new OR old) in VR Preview. I’ve booted my project both with and without the -audiomixer flag, and changing the device setting as you showed in your screenshot never stopped piping Modular Synth Component output and Play Sound 2D output to my Rift as long as I was previewing in VR.

Maybe it’s different with Vive, but this seems to be how the Rift behaves no matter what. Maybe I’ll have to use a virtual mixer like JACK or VoiceMeeter to intercept the Rift audio output on the way to the monitor output.

Hi Mindridellc,

So the Modular Synth should only work in the new Audio Engine, and after discussing this with @Minus_Kelvin, it looks like this is something that still needs to be worked out in the new Audio Engine. With that said, specifying the Windows Target Device in your Project Settings should override the HMD output in the old Audio Engine. Are you certain you have checked in the old Audio Engine as well?

For the Android and iOS .ini files, could I force mute if no headphones are detected? I saw a workaround for Unity, but I can’t get this to work with Unreal. I basically don’t want audio to play, or just want zero volume, if no headphones are being used. Any help would be super.

Hi there, quick question about patches:

I’ve wired a very simple patch with Source->envelope and Destination->gain and it works great: the envelope correctly affects the patched destination (in this case, overall gain). However, when I change the destination to Destination->osc 1 gain, it seems to have no effect. Is there something I’m missing? I’m seeing similar behavior with any of the individual osc parameters (gain, freq, etc.).

I’m trying to use osc 1 and osc 2 to make a sort of 808-sounding bass kick, where there’s the clicky sound (noise in osc 1) and the resonating bass sound (sine in osc 2). To do that, I’m trying to apply a different ADSR envelope to each oscillator for the two parts of the sound. I assume that changing the gain on osc 1 independently of osc 2 is possible; otherwise I don’t see why there would be a distinction between osc 1 and osc 2 in the patch destination dropdown.

I’m very new to synthesis but have been doing lots of outside reading to learn the basics. Is there one simple step that I’m missing, or perhaps a parameter that is obviously set incorrectly but that I wouldn’t know about?

Thanks,

  • Arj

Hey!
I just upgraded to 4.18 on Mac. When I play a working synth patch in the editor and then stop the game, the editor freezes. Am I doing something wrong, or is it a Mac-only problem?
Thanks!

After investigating, it appears to be due to the start/stop behavior. You need to use the Stop node if you want to avoid the editor freeze, which is tricky.

Hi Tomavatars! Thanks for the report. I’ll ask Ethan if he has an idea about this!

Hi Arj!

Yeah, the patch system can get a bit weedy. When I made my drum kit for our GDC floor demo, I conceded to using two synthesizers per kit piece. A bit pricier, but it was way easier to program.
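For anyone curious what the patch is expressing under the hood: an ADSR envelope scaling one oscillator’s gain, sample by sample. Here is a plain-Python sketch of that idea; it is illustrative only, not Unreal code, and every name in it is made up:

```python
import math

def adsr(t, attack=0.005, decay=0.05, sustain=0.3, release=0.2, gate_len=0.25):
    """Linear ADSR envelope value at time t (seconds)."""
    if t < attack:
        return t / attack                                   # ramp up
    if t < attack + decay:
        return 1.0 - (1.0 - sustain) * (t - attack) / decay  # fall to sustain
    if t < gate_len:
        return sustain                                       # hold
    return max(0.0, sustain * (1.0 - (t - gate_len) / release))  # release

def render_kick(sample_rate=44100, duration=0.5):
    """Low sine 'body' oscillator with its own envelope on its gain."""
    out = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate
        body = math.sin(2 * math.pi * 55.0 * t)  # low sine, roughly 808 body
        out.append(body * adsr(t))               # envelope scales osc gain
    return out
```

A per-oscillator envelope like this is exactly what a second synth component buys you when the single-synth patch routing doesn’t cooperate.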

Hi Rasamaya!

You will need to take advantage of some kind of device notification message. You will probably need to look into the APIs for the various devices, as they will differ.

You can create a mute button, though, and use the SoundMix system to set 0.0f volume on the Master SoundClass.
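The gating logic itself is trivial once the platform notification fires; here is a minimal plain-Python sketch (illustrative names only; in Unreal you would push a SoundMix that zeroes the Master SoundClass instead):

```python
def master_volume(headphones_connected: bool, user_volume: float = 1.0) -> float:
    """Return 0.0 when no headphones are present, otherwise the user volume.
    Call this from whatever device-change notification the platform exposes."""
    return user_volume if headphones_connected else 0.0
```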

Hey! First of all, awesome work. It’s amazing to see Epic putting more and more resources into audio development. I’m currently working on some kind of audio visualization, and for that I need to get the frequencies of the played audio. I’m basically trying to map my sound frequencies to color values. However, when I use the “Compute Frequency Spectrum” node (which I think was developed on an Epic Friday and isn’t documented at all), I get weird values I can’t really wrap my head around. So my question: is there a way, with either the new Audio Engine or older built-in stuff like the mentioned node, to get the frequency data of my sounds?

This is an inspirational addition to the engine. My mind is a raging torrent of imagination with what I could do with this.

http://www.ripcitybadboys.com/wp-content/uploads/2014/02/mind-blown-2.gif

We do have an implementation of KissFFT in the engine (which allows frequency-domain analysis), but a proper spectral analyzer hasn’t been implemented yet. It’s definitely something we want to get around to doing, though!
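For the curious, what frequency-domain analysis gives you can be shown with a naive DFT in plain Python. This is a demo of the math only (O(N²), no windowing), not how KissFFT or any engine node is implemented:

```python
import math

def dft_magnitudes(samples):
    """Magnitude of each DFT bin for a real signal (O(N^2), demo only)."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):  # real input: bins 0..N/2 suffice
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

# A pure tone lands in a single bin: 4 cycles per 64-sample window -> bin 4.
n = 64
tone = [math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
mags = dft_magnitudes(tone)
```

A real analyzer would use an FFT (N log N) plus a window function, which is what a KissFFT-backed node would do for you.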

I don’t remember the old visualizer, but I believe it’s spitting out non-normalized audio values. So you’ll probably want to take the absolute value of the output and scale it from the integer range to the float (0.0f to 1.0f) range.
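That normalization step (absolute value, then rescale from the integer sample range into 0.0 to 1.0) might look like the following, assuming signed 16-bit samples. A sketch only, not the visualizer’s actual code:

```python
def normalize_int16(samples):
    """Map signed 16-bit audio samples to 0.0..1.0 magnitudes."""
    return [abs(s) / 32768.0 for s in samples]
```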


Any way of getting a SynthComponent to output its audio through an ASIO audio device?

Hi,

Excited to get into the stuff in the new audio engine. I have a couple of questions about the best way to build a music system in BP that I think tie into that.

Currently we are on UE4.17 and planning to jump to 4.19 when it’s out. I note that timing stuff was covered in this thread back around post #73 from @drfzjd.

Probably the most critical timing issue for me is tracking the playback time of a music file and stopping it at designated “exit points,” where we then play/stitch an “ending stinger” Cue.

To track timing for the currently playing music Cue, we multiply the Cue’s progress percentage by its duration; for instance, 43% complete * 1:12.434. We have a binding on the audio component’s OnAudioPlaybackPercent event that multiplies the Percent float it outputs by the duration of the Sound Cue (On Audio Playback Percent | Unreal Engine Documentation).
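In other words, the bookkeeping is just this (a plain-Python sketch with hypothetical names; in Blueprint it is the OnAudioPlaybackPercent binding described above):

```python
def playback_seconds(percent: float, duration_seconds: float) -> float:
    """Elapsed playback time from a 0.0..1.0 playback-percent value."""
    return percent * duration_seconds

# 43% of a 1:12.434 (72.434 s) cue
elapsed = playback_seconds(0.43, 72.434)
```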

This brings me to my first question: Is this the most accurate way to monitor a music Cue’s time?

Also, I just watched the “Procedural Audio in the new Unreal Audio Engine” video from May of last year. At about 43 minutes in, Aaron mentions that he addressed some stuff where the old audio engine was not queueing up and executing events at the same time.

Next question: he mentions this was done for 4.16, but is it in the new audio engine that you have to enable, or part of the default one at this point?

Ultimately I’m hoping to be able to stop a track and play an ending stinger with <20 ms of latency, so not exactly “sample accuracy.” Still testing, but we may already be there. One thing that appeared to cause the ending stinger Cues to play late: the game requests a stop of the current Cue, and the next exit point is not far enough away. After some experimentation, it looks best to skip an exit point and go to the next one if it’s <0.5 seconds after the request.
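That heuristic (skip any exit point that lands less than 0.5 seconds after the stop request) can be sketched like so; illustrative Python, not engine code:

```python
MIN_LEAD = 0.5  # seconds of headroom needed to schedule the stinger

def next_exit_point(request_time, exit_points, min_lead=MIN_LEAD):
    """First exit point at least min_lead seconds after the stop request,
    or None if the track has no usable exit left."""
    for t in sorted(exit_points):
        if t >= request_time + min_lead:
            return t
    return None
```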

Final question(s):

If we switched to new audio engine now with 4.17:

  • Are things pretty much the same, stability-wise if we aren’t using any of the new plugins?
  • Will existing audio related BP or Sound Cue nodes change in functionality at all?

Thanks

What kind of C++ magic would it take to make this work? I know enough to cobble things together, and I’m planning out a visual installation using projection mapping in a VR cave, with jellyfish swimming around a tank. I want to drive the colors of the jellies from live audio (smaller jellies are mapped to higher frequencies, medium jellies respond to the mid-range, and large jellies respond to low frequencies). I have 4.19 set up now to work with Omnidome for projection mapping. Thanks!

No need for C++, really. I saw local mic capture with envelope following (amplitude, not individual frequencies) in the 4.19 changelog. You can use the older visualization plugin to get values for different frequencies, or set up your own little machine that does it with the tools and effects in the new audio engine.
What kind of audio are you going for to drive it? If it’s OS audio and the mic is working, you can always virtually route PC audio through to a mic “input” with programs like VoiceMeeter. Beware of conversion to mono and other mic eccentricities.
There’s probably already a better way to do all this, I forget…
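Once you have magnitude values for different frequencies, splitting them into low/mid/high buckets (one value per jelly size class) is simple summing. A plain-Python sketch follows; the band edges and bin layout here are assumptions for illustration, not anything the engine prescribes:

```python
def band_energies(magnitudes, sample_rate=44100, edges=(250.0, 2000.0)):
    """Sum spectrum magnitudes into (low, mid, high) bands.
    magnitudes[k] is assumed to be bin k of a spectrum spanning 0..Nyquist,
    so bin k corresponds to k * sample_rate / (2 * (len - 1)) Hz."""
    n = len(magnitudes)
    low = mid = high = 0.0
    for k, m in enumerate(magnitudes):
        freq = k * sample_rate / (2 * (n - 1))
        if freq < edges[0]:
            low += m       # drives the large jellies
        elif freq < edges[1]:
            mid += m       # drives the medium jellies
        else:
            high += m      # drives the small jellies
    return low, mid, high
```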

Sweet! It’ll be from a mic or from the output of a DAW. I’ll check out VoiceMeeter.