Unreal Engine Livestream - Unreal Audio: Features and Architecture - May 24 - Live from Epic HQ

I missed the stream. Is it possible to see it somewhere?

The most efficient way I’ve found to bulk-create and edit synth patches is to set up a MIDI keyboard input system (described elsewhere, I think in the sticky thread about the new audio engine) along with a UMG knob/slider control panel for making adjustments during PIE that persist in your modular synth preset bank. That last part took me a while to figure out, but you can pass-by-reference in Blueprint to make the presets ‘stick’. Eventually I added panel buttons for Create New Preset, Save Current Preset, Clone Preset to New Slot, and so on, but the quick-and-dirty alternative, if you don’t want to build a UMG panel, is to release mouse control from PIE and edit the preset bank array in a second editor window, hammering your MIDI keyboard after every tweak for an updated preview of the sound. And if you don’t have a MIDI keyboard, just set up a little looping melody with a Timer hooked up to NoteOn (rough sketch below). It’ll get irritating fast, but it’ll do the job :)
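In case it helps, here’s a rough C++ sketch of that last idea: an actor that owns a ModularSynthComponent from the Synthesis plugin and fires NoteOn from a looping timer, so you can hear preset tweaks without a MIDI keyboard. The class name, melody, and timings are placeholders of mine; only the NoteOn/Start and timer calls are stock engine API, and the include path assumes the Synthesis plugin layout, so double-check it against your engine version.

```cpp
// Sketch only: a preview actor that loops a short melody on a ModularSynthComponent.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "TimerManager.h"
#include "SynthComponents/EpicSynth1Component.h" // Synthesis plugin (UModularSynthComponent)
#include "SynthPreviewActor.generated.h"

UCLASS()
class ASynthPreviewActor : public AActor
{
	GENERATED_BODY()

public:
	ASynthPreviewActor()
	{
		Synth = CreateDefaultSubobject<UModularSynthComponent>(TEXT("PreviewSynth"));
	}

	virtual void BeginPlay() override
	{
		Super::BeginPlay();
		Synth->Start();
		// Fire PlayNextNote twice per second, looping forever.
		GetWorldTimerManager().SetTimer(MelodyTimer, this, &ASynthPreviewActor::PlayNextNote, 0.5f, true);
	}

private:
	void PlayNextNote()
	{
		// Simple four-note loop (MIDI note numbers). A positive Duration should
		// release the note automatically, so no explicit NoteOff is needed here.
		static const float Melody[] = { 60.f, 64.f, 67.f, 72.f };
		Synth->NoteOn(Melody[NoteIndex], /*Velocity=*/100, /*Duration=*/0.4f);
		NoteIndex = (NoteIndex + 1) % 4;
	}

	UPROPERTY()
	UModularSynthComponent* Synth = nullptr;

	FTimerHandle MelodyTimer;
	int32 NoteIndex = 0;
};
```

Drop one of these into the level while tweaking the preset bank and you get a constant, hands-free preview of the current patch.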

Setting up your own system like this is ultimately more flexible than the kind of built-in preview you describe, since you might - like me - end up with a synth that layers the Modular Synth, the Granular Synth and a custom ROMpler-style sample synth simultaneously, using a custom preset structure, and you’d want to preview the whole thing to balance it properly. And given that the Modular Synth is intended to be extended, or used as an example for people who want to design their own, I can’t see it ever getting what would have to be a highly specific and inflexible in-editor (non-PIE) preview.

@Logtrix Yes. Twitch
Later it should be available on YouTube. Not sure why it’s not there yet…

We haven’t implemented mic capture on mobile yet, though I agree it would be a nice feature. Unfortunately, it’s not considered a priority at the moment.

#1 – For the new audio engine, I’ve implemented a way to automatically schedule audio events from the game thread onto the audio render thread with sub-frame accuracy. However, this is only as accurate as the game thread and is not going to resolve your latency problems.

I have a design planned for an “audio scheduler” component that will let you schedule audio events at arbitrary rates and times on the audio render thread. Again, this won’t solve the issue of input/output latency.

#2 – Learning audio DSP is not too bad if you’re good at math. If you struggle with math (calculus, linear algebra), it’ll be a bit trickier to pick up. My background is physics, so studying audio DSP has generally been easier for me. There are a number of good books to get you started. If you’re new, I recommend starting with the two-volume text Musimathics (http://www.musimathics.com/). There are plenty of other good places to go: Pure Data (Pd) is a cool way to start, and Miller Puckette’s free book on DSP/electronic music in Pd (http://msp.ucsd.edu/techniques/v0.03/book.pdf) is worth checking out, though it will probably go over your head quickly unless you’re already familiar with the mathematical foundations.

Unfortunately, it didn’t get released. We intended to dust it off and get it into a presentable state, but we haven’t had the time (we’re a very small team spread across many projects here at Epic). We are working on some sample projects to show people now, and since it’s a popular request we’ll consider dusting off some of the 2017 demos and getting them working on later versions of the engine.

Good question: different mixes will scale together. Re-applying the same mix won’t double-trigger it, but it will update its timings.
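For anyone wondering what that looks like in practice, here’s a small sketch using the stock UGameplayStatics sound mix calls; DuckMusicMix and BoostDialogueMix are hypothetical USoundMix assets, not anything that ships with the engine.

```cpp
// Sketch: stacking sound mix modifiers vs. re-applying the same one.
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundMix.h"

void ApplyCombatMixes(UObject* WorldContext, USoundMix* DuckMusicMix, USoundMix* BoostDialogueMix)
{
	// Both modifiers are active at once; their sound class adjustments combine (scale together).
	UGameplayStatics::PushSoundMixModifier(WorldContext, DuckMusicMix);
	UGameplayStatics::PushSoundMixModifier(WorldContext, BoostDialogueMix);

	// Pushing the same mix again does not double its effect;
	// per the answer above, it just refreshes that mix's timings.
	UGameplayStatics::PushSoundMixModifier(WorldContext, DuckMusicMix);
}
```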

Hello! FYI the video isn’t linked. Also, did you upload the folders somewhere?

Can anyone explain how baking FFT analysis works in the new audio engine?
So far I’ve got the real-time FFT working: ‘Start Analyzing Output’ on a submix, then ‘Get Magnitude for Frequencies’ from that submix, as pictured in the attached image.
But I can’t get ‘Get Cooked FFTData’ to return true. What node or process is used to cook the FFT data?
I’ve tried digging and can’t seem to find anything, even after turning off ‘Context Sensitive’ to show ‘All Possible Actions’.

I figured it out. There’s a checkbox in the Sound Wave uasset itself for generating baked FFT data.
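For anyone else who hits this: once that checkbox (‘Enable Baked FFT Analysis’ in the Analysis section of the Sound Wave asset) is ticked and the wave is playing on an Audio Component, the cooked data can be queried at runtime. Here’s a minimal sketch; the frequencies are arbitrary and it assumes the 4.22-era UAudioComponent::GetCookedFFTData API, so treat it as a starting point rather than gospel.

```cpp
// Sketch: reading baked (cooked) FFT data from a playing Audio Component.
#include "CoreMinimal.h"
#include "Components/AudioComponent.h"
#include "Sound/SoundWave.h"

void LogBakedSpectrum(UAudioComponent* AudioComp)
{
	// Frequencies (in Hz) to sample from the baked analysis data.
	const TArray<float> Frequencies = { 100.f, 500.f, 2000.f, 8000.f };

	TArray<FSoundWaveSpectralData> SpectralData;
	// Returns true only if the currently playing wave actually has baked FFT data.
	if (AudioComp && AudioComp->GetCookedFFTData(Frequencies, SpectralData))
	{
		for (const FSoundWaveSpectralData& Entry : SpectralData)
		{
			UE_LOG(LogTemp, Log, TEXT("%.0f Hz -> magnitude %.4f"),
				Entry.FrequencyHz, Entry.Magnitude);
		}
	}
}
```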