New Audio Engine: Quick-Start Guide

Hi,

I may have missed something, but is the new Audio Engine still in Beta? If so, is there any final date in mind?

Thanks,

Hey thank you for the reply!

As a quick hack I was able to do just as you suggested, using 2 different synths to make a kick.

When you say it gets a bit weedy, do you mean that the patch that I was trying to achieve isn’t possible due to one bug or another? Or are there further advanced steps required to patch individual oscillator gain/freq/etc? If it’s a bug, no big deal. But if you have a solution I would really appreciate it if you would share some details on how to make that work.

I’m nitpicking but really great work on this! I’m having fun building out wacky sound contraptions. Thanks a lot!

The advanced patches may have bugs; they're not well tested because of the sheer number of possible module combinations.

Hi Elvince, it is still in Early Access. We are currently working toward shipping it with one of our major titles; it's important to us to have road-tested it on an internally shipped title before officially switching over.

With that said, a few titles out in the wild are already using it, including the soon-to-be-released game, A Way Out.

I just started working with the synth modules again (both modular and granular), and I still get a Mac editor freeze when a Blueprint actor with a synth activated and started in it is in the scene. When playing in-editor in the viewport and stopping with the Esc key, the editor freezes. BUT when playing from a Blueprint window, it's fine. Which is very strange!

Thanks for the heads-up, ConcreteGames; I’ll see if I can repro this issue. Did you make sure you’re running the New Audio Engine?

Is the new audio engine already the default in 4.19?
Is it possible to make it run by default? The last time I tried it, I had to start the editor with -audiomixer and then open my project; I usually open projects just by clicking the project file (I tried putting the flag in the project-file shortcut, but it didn't work back then).

It is not on by default; we want to road test it on Fortnite before we consider it properly released. That's just our due diligence. Follow the instructions at the top of the thread for adding the config file; this will make sure it's always on for your project.
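For reference, the per-project enable is a platform-specific engine config override in your project's Config folder. On Windows, it looks something like the following (module names here are from memory of the 4.18/4.19 era, so double-check them against the instructions at the top of the thread):

```ini
; MyProject/Config/Windows/WindowsEngine.ini
[Audio]
AudioDeviceModuleName=AudioMixerXAudio2
```

On Mac the equivalent file is `Config/Mac/MacEngine.ini` with `AudioDeviceModuleName=AudioMixerCoreAudio`. Because these files live inside the project, checking them into version control enables the new engine for everyone on the team.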

Alright, so the -audiomixer parameter has no impact anymore?
I am working with a small team, and for some of them even setting the right ini value in the launcher engine might be a challenge. :rolleyes::stuck_out_tongue:

-audiomixer still works. But if you want it on by default for YOUR project, then you need to set up the config override. You can check ini files into your version control so that everyone gets them. I can’t imagine how you’re collaborating otherwise.

Good feature, but there are a lot of things in the guide that I don't understand, like the areas marked on the screenshots I attached.
The guide doesn't go step by step, I can't see the connections, and there is no screenshot showing a complete Blueprint setup like in the MIDI tutorial.
Sorry if my question is very basic; I've just never had any experience with sound work.
Please halp,
Kind Regards

The white connections are EXECUTIONS.

They come from EVENTS.

I would recommend a generalized Blueprints tutorial, something like turning a light on and off or opening and closing a door. The gist is that an event happens and it executes a series of functions and macros. These happen in a specific order, a sequence determined by the order in which your functions are connected. Sometimes there are macros that alter the execution pathway (like a Branch or a ForEach, etc.); in those cases, you might have several possible execution pathways. How you construct these pathways determines your game logic.

The question of which Event prompts a Note On for a Synth or a Set Frequency for a Filter is not one I can answer for you; that part is up to you.

Maybe the Note On plays when your BP interprets a MIDI Event, or maybe it plays when your character walks into a Trigger Volume. Maybe you set the Filter Frequency of your Source Effect when your player loses the game or when you enter a menu.

The point is, I don’t know what comes before those any more than you do.

Thanks,
I know Blueprints and such. But I don't know, for example, what Note On and Note Off are, or how the sound side of things works, etc. Google is silent on this matter.

Note On and Note Off are functions called on the Modular Synth Component, they mimic MIDI Note On and Note Off events, which have been a standard for hardware synth communication since the 1980s.

They start a synth note and end a synth note, respectively.

How do I complete the following step on a Mac? Do I use Terminal?

Launching the Editor with the New Audio Engine on via Command Line:

  1. Open the project using command line flag: -audiomixer
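On a Mac you can do this from Terminal by invoking the editor binary inside the app bundle and passing the flag along with your project path (the paths below are examples only; adjust them to your engine install and project location):

```shell
# Example paths -- substitute your own engine version and project.
"/Users/Shared/Epic Games/UE_4.18/Engine/Binaries/Mac/UE4Editor.app/Contents/MacOS/UE4Editor" \
  "$HOME/Documents/Unreal Projects/MyProject/MyProject.uproject" -audiomixer
```

Alternatively, `open` can forward flags to an app with `--args`, e.g. `open UE4Editor.app --args -audiomixer`, though invoking the binary directly also keeps the log output in your Terminal.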

Very easy: you only source-control the project itself, and everyone has to use the launcher engine (the packaged build is done from engine source, though).

Thanks for your answer. Sadly I still have problems with Steam Audio on the current launcher version, so switching to the engine source might be an idea now :smiley:

I just enabled this and it fixed all my android issues.

How would you get the UModularSynthComponent added to an Actor in C++? I’ve got everything set up in Blueprints already, but I wanted to bring the logic down to a lower level and the compiler can’t find the header file. I tried adding “Synthesis” to my PublicDependencyModuleNames in the .build.cs file, but it still has issues finding the header files.

I managed to get it to compile! I had forgotten to add it to the AdditionalDependencies inside the .uproject file. Adding "Synthesis" to AdditionalDependencies in the .uproject file and to PublicDependencyModuleNames in the .build.cs, plus including "SynthComponents/EpicSynth1Component.h", got it to compile properly.
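Putting the pieces above together, the actor-side setup might look roughly like this (a sketch only: class and file names are hypothetical, and this compiles only inside an Unreal Engine module with the "Synthesis" dependency in place):

```cpp
// MySynthActor.h -- hypothetical actor owning a modular synth component.
// Requires "Synthesis" in PublicDependencyModuleNames (MyProject.Build.cs)
// and in AdditionalDependencies of the module entry in the .uproject.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "SynthComponents/EpicSynth1Component.h"
#include "MySynthActor.generated.h"

UCLASS()
class AMySynthActor : public AActor
{
    GENERATED_BODY()

public:
    AMySynthActor()
    {
        // Create the modular synth as a default subobject of this actor.
        Synth = CreateDefaultSubobject<UModularSynthComponent>(TEXT("Synth"));
    }

    UPROPERTY(VisibleAnywhere, Category = "Audio")
    UModularSynthComponent* Synth;
};
```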

I’d also be super interested in getting answers to @pfaudio’s questions, as I’m in a similar spot: trying to play two sound cues back to back and getting ~18-20 ms of additional time (sound cue duration + said additional time) between starting a sound cue and receiving OnAudioFinished. I’m using a single AudioComponent on an otherwise empty actor, alternating between two Sound Cues when receiving the OnAudioFinished delegate in C++.

What is the recommended way of playing, e.g., a list of WAV files with as little delay as possible between one ending and the next one starting? Are Sound Cues still the way to go, or should I use a granular synth even though I’d play the source file start to finish?

I’m currently on 4.18.3 but could change to a later version if really required. I’m not using the new Audio Plugin right now, but would switch to it if required to decrease the current 20 ms delay.

An answer or a pointer in the right direction would be really appreciated.
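For context, the alternating setup described above might look roughly like this (a sketch with hypothetical names; it requires the Unreal Engine framework and is not standalone-compilable):

```cpp
// AMyPlaybackActor: alternates two Sound Cues on one UAudioComponent.
void AMyPlaybackActor::BeginPlay()
{
    Super::BeginPlay();
    // OnAudioFinished is a dynamic multicast delegate on UAudioComponent;
    // the bound handler must be a UFUNCTION for AddDynamic to work.
    AudioComponent->OnAudioFinished.AddDynamic(this, &AMyPlaybackActor::PlayNextCue);
    AudioComponent->SetSound(Cues[0]);
    AudioComponent->Play();
}

void AMyPlaybackActor::PlayNextCue()
{
    CueIndex = (CueIndex + 1) % Cues.Num();   // alternate between the two cues
    AudioComponent->SetSound(Cues[CueIndex]);
    AudioComponent->Play();                   // the reported ~18-20 ms gap shows up around here
}
```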