Dynamic Audio Loading

@dan.reynolds Hello Dan and folks,

We are using UE4 for a live dance performance involving mocap and avatars of the dancers in a virtual space. It also involves capture and live processing of audio from the performance. For this we are making use of the new audio engine and the granular synth. Another program is receiving and recording the audio. Is it possible to dynamically load the audio, from memory or from file, into UE4 to make it available to the synth?

I would be happy to learn more about the synth or to see its source. I am having some small problems with audio hiccups while it plays, and I would also like to be able to control it precisely.

I don’t have much experience with UE4, but I have lots of experience with Max and with sound in general. There is a UE4 expert on the team.

Sorry for the double post.

Hi slo_burn, it sounds like an interesting project.

The granular synth loads its buffer via Set Sound Wave, but I don’t know how you plan on getting the audio cooked into UE4 as an asset.
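
For concreteness, driving it from C++ looks roughly like this; a minimal sketch, assuming a UGranularSynth component (from the Synthesis plugin) already on the actor and a made-up asset path:

```cpp
// Minimal sketch: load a cooked USoundWave at runtime and hand it to the
// granular synth. "GranularSynth" is assumed to be a UGranularSynth*
// component on this actor; the asset path is hypothetical.
#include "Sound/SoundWave.h"
#include "UObject/UObjectGlobals.h"

void AMyGrainActor::BeginPlay()
{
    Super::BeginPlay();

    // Synchronous load of a cooked wave asset (hypothetical path).
    USoundWave* Wave = LoadObject<USoundWave>(
        nullptr, TEXT("/Game/Audio/PerformanceClip.PerformanceClip"));

    if (Wave && GranularSynth)
    {
        GranularSynth->SetSoundWave(Wave);       // the "Set Sound Wave" call
        GranularSynth->SetGrainsPerSecond(40.0f);
        GranularSynth->Start();                  // USynthComponent::Start
    }
}
```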

I think you would be better off coding a Granular Delay Source Effect and working out real-time audio input. Does it need to be an asset at all?

The granular synthesizer is fundamentally not appropriate for real-time granulation of live audio input.

There is a new mic component implemented now in our master branch:

https://github.com/EpicGames/UnrealE…e/AudioCapture

It is a simple component which feeds audio from the default capture device into a game as a synth component. It’s not hooked up yet to “record” the audio to an asset, but you can apply arbitrary source effects to the audio.
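
Using it is basically just adding the component and starting it. A rough sketch (class and header names here assume the AudioCapture plugin’s UAudioCaptureComponent and may differ in master):

```cpp
// Rough sketch: feed the default capture device into the game.
// Assumes the AudioCapture plugin's UAudioCaptureComponent; names may
// differ in the master branch. "CaptureComponent" is a UPROPERTY on the actor.
#include "GameFramework/Actor.h"
#include "AudioCaptureComponent.h"

AMyCaptureActor::AMyCaptureActor()
{
    // The capture component is itself a synth component.
    CaptureComponent = CreateDefaultSubobject<UAudioCaptureComponent>(TEXT("AudioCapture"));
    CaptureComponent->SetupAttachment(RootComponent);
}

void AMyCaptureActor::BeginPlay()
{
    Super::BeginPlay();
    // Start streaming mic audio; any source effects set on this component's
    // effect chain are applied to the live input.
    CaptureComponent->Start();
}
```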

However, for this, you’ll want to write a grain delay effect (not a granular synthesizer), likely as a submix effect, though it could also work as a source effect. Grain delays have fundamentally different constraints and use cases than a granulator, and they are a bit trickier to write: you are granulating a live, delayed buffer as an effect, versus resynthesizing a single loaded buffer.
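
To make the difference concrete, here is the core of a grain delay in plain C++ (the engine glue is left out, since the exact effect signatures vary by version; you would call Process() from the effect’s audio callback). The history buffer keeps moving underneath the grains, which is exactly what a single-buffer granulator never has to deal with:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Plain C++ sketch of a grain delay core (mono). In UE4 you would wrap this
// in a submix (or source) effect and call Process() from its audio callback.
class FGrainDelaySketch
{
public:
    explicit FGrainDelaySketch(int32_t SampleRate)
        : Delay(SampleRate * 2, 0.0f)     // 2 s of live-input history
        , DelaySamples(SampleRate / 2)    // grains read ~0.5 s behind the write head
        , GrainLength(SampleRate / 10)    // ~100 ms grains
        , SpawnInterval(SampleRate / 20)  // new grain every ~50 ms
    {}

    void Process(const float* In, float* Out, int32_t NumFrames)
    {
        for (int32_t i = 0; i < NumFrames; ++i)
        {
            // Unlike a granulator's fixed buffer, the buffer itself is moving:
            // keep writing live input into the circular history.
            Delay[WritePos] = In[i];
            WritePos = (WritePos + 1) % Delay.size();

            // Periodically spawn a grain reading from a delayed position.
            // (A real effect would preallocate grains, not push_back on the
            // audio thread.)
            if (--SpawnCountdown <= 0)
            {
                SpawnCountdown = SpawnInterval;
                const size_t Pos =
                    (WritePos + Delay.size() - DelaySamples) % Delay.size();
                Grains.push_back({Pos, 0});
            }

            // Sum active grains, each shaped by a Hann window.
            float Sample = 0.0f;
            for (FGrain& G : Grains)
            {
                const float Phase = static_cast<float>(G.Age) / GrainLength;
                const float Window =
                    0.5f * (1.0f - std::cos(2.0f * 3.1415926f * Phase));
                Sample += Delay[G.Pos] * Window;
                G.Pos = (G.Pos + 1) % Delay.size();
                ++G.Age;
            }
            Grains.erase(std::remove_if(Grains.begin(), Grains.end(),
                             [this](const FGrain& G) { return G.Age >= GrainLength; }),
                         Grains.end());

            Out[i] = Sample;
        }
    }

private:
    struct FGrain { size_t Pos; int32_t Age; };

    std::vector<float> Delay;   // circular history of live input
    std::vector<FGrain> Grains;
    size_t WritePos = 0;
    int32_t DelaySamples;
    int32_t GrainLength;
    int32_t SpawnInterval;
    int32_t SpawnCountdown = 1;
};
```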

If you’re doing an installation rather than depending on a shipped product, you could theoretically write a submix effect which sends the audio of the submix buffer to Max via OSC. Then, in that application, you could do whatever effects/processing you want. If you have Ableton Live, you could write a Max patch which receives the sent audio streams (via OSC), then feeds the output into Ableton Live. At that point, you could just process the audio through the literal Ableton Grain Delay effect (and/or anything else).
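
The plumbing for that is just a socket owned by the effect. A bare-bones sketch using FUdpSocketBuilder to push raw float buffers to localhost; the port and address are made up, and real Max interop would want proper OSC framing rather than raw floats:

```cpp
// Bare-bones sketch: push raw float audio buffers out of the engine over UDP.
// Port/address are hypothetical; real use would add OSC framing for Max.
// Requires the "Networking" and "Sockets" modules in your Build.cs.
#include "Common/UdpSocketBuilder.h"
#include "Sockets.h"
#include "SocketSubsystem.h"

class FAudioUdpSender
{
public:
    FAudioUdpSender()
    {
        Socket = FUdpSocketBuilder(TEXT("SubmixAudioSend")).AsNonBlocking().Build();
        Addr = ISocketSubsystem::Get(PLATFORM_SOCKETSUBSYSTEM)->CreateInternetAddr();
        bool bValid = false;
        Addr->SetIp(TEXT("127.0.0.1"), bValid); // Max/MSP listening locally
        Addr->SetPort(9000);                    // hypothetical port
    }

    // Call from the submix effect's process callback with the current buffer.
    void Send(const float* Buffer, int32 NumSamples)
    {
        if (Socket)
        {
            int32 BytesSent = 0;
            Socket->SendTo(reinterpret_cast<const uint8*>(Buffer),
                           NumSamples * sizeof(float), BytesSent, *Addr);
        }
    }

private:
    FSocket* Socket = nullptr;
    TSharedPtr<FInternetAddr> Addr;
};
```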

Hi Guys, thanks for the responses.

Looks like things have been sorted out on this end. I should have described better that the audio was actually being recorded by another piece of software (a Python script), because that script is sending the audio to Watson for some speech-to-text analysis. So the granular processing would not have been live per se, but rather on very recently recorded audio. Someone here had written a component in C++ that retrieves the audio.
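
For anyone who finds this thread later: the rough shape of getting recently recorded PCM from memory into the engine is to queue it into a USoundWaveProcedural. A minimal sketch (our actual component differs, field/accessor names vary by engine version, and note the granular synth’s Set Sound Wave may not accept a procedural wave, so this only covers getting the samples in):

```cpp
// Minimal sketch: wrap raw 16-bit PCM pulled from another process in a
// USoundWaveProcedural so the audio engine can consume it.
#include "Sound/SoundWaveProcedural.h"

USoundWaveProcedural* MakeProceduralWave(int32 SampleRate, int32 NumChannels)
{
    USoundWaveProcedural* Wave = NewObject<USoundWaveProcedural>();
    Wave->SampleRate = SampleRate; // plain field in UE4 of this era; newer versions use SetSampleRate()
    Wave->NumChannels = NumChannels;
    Wave->Duration = INDEFINITELY_LOOPING_DURATION; // stream of unknown length
    Wave->bLooping = false;
    return Wave;
}

// Called whenever the retrieval component hands over a fresh chunk of PCM.
void QueueChunk(USoundWaveProcedural* Wave, const TArray<int16>& Pcm)
{
    Wave->QueueAudio(reinterpret_cast<const uint8*>(Pcm.GetData()),
                     Pcm.Num() * sizeof(int16));
}
```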

In the end I am programming the effects in Max for this edition because time ran out, but I am looking forward to getting to know the new audio engine.

BTW the audio glitching I mentioned was just CPU overload on an underpowered machine.

Thanks again!