Analysis and Delayed Playback

Hello there,

First thing: I hope you are doing well!

I have been experimenting for a while, so I will start with what I actually need, in case I have confused myself along the way.

Basically, I need to play a track while analyzing it. Ideally the analysis would have a look-ahead so the gameplay can prepare things. For example: (analyzing spectrum) ----- (2 seconds later) ---> sound is on the speakers.

I started using TimeSynthComponents in a hacky way: two tracks, one for analysis and one delayed for playback, with the analysis one muted. It works fine-ish, but I wanted something more data-driven (no need to create clips and so on) and, moreover, “streamable” in case I use other audio sources.

Enter the AudioMixer: I recently started experimenting with it and wrapping my head around all these (awesome) new concepts.

Experiment 1:

Since I would prefer to keep things simple, I currently have just an AudioComponent with a Submix taking care of the delay and the analysis. The analysis (GetMagnitudeForFrequencies) seems to be working fine, but for some reason I cannot send the sound only to the submix, so I end up with two playbacks.
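For reference, here is roughly what that setup looks like in C++ (simplified; the actor and member names are made up, and the exact StartSpectralAnalysis / GetMagnitudeForFrequencies signatures can differ a bit between engine versions):

```cpp
// Simplified sketch of Experiment 1: one AudioComponent sending to an analysis
// submix, polling magnitudes every tick. Class and member names are made up;
// the component's Sound and the submix are assigned in the editor.
#include "GameFramework/Actor.h"
#include "Components/AudioComponent.h"
#include "Sound/SoundSubmix.h"
#include "SpectrumAnalyzerActor.generated.h"

UCLASS()
class ASpectrumAnalyzerActor : public AActor
{
	GENERATED_BODY()

public:
	ASpectrumAnalyzerActor()
	{
		PrimaryActorTick.bCanEverTick = true;
		AudioComp = CreateDefaultSubobject<UAudioComponent>(TEXT("AudioComp"));
	}

	virtual void BeginPlay() override
	{
		Super::BeginPlay();
		if (AnalysisSubmix && AudioComp)
		{
			// Start FFT analysis on the submix and send the component's audio to it.
			AnalysisSubmix->StartSpectralAnalysis(this);
			AudioComp->SetSubmixSend(AnalysisSubmix, 1.0f);
			AudioComp->Play();
		}
	}

	virtual void Tick(float DeltaSeconds) override
	{
		Super::Tick(DeltaSeconds);
		if (AnalysisSubmix)
		{
			TArray<float> Magnitudes;
			// Argument order may differ depending on the engine version.
			AnalysisSubmix->GetMagnitudeForFrequencies(this, Frequencies, Magnitudes);
			// ...hand Magnitudes over to the gameplay side here.
		}
	}

	// Set these in the editor.
	UPROPERTY(EditAnywhere)
	USoundSubmix* AnalysisSubmix = nullptr;

	// Frequencies (in Hz) to query each tick.
	UPROPERTY(EditAnywhere)
	TArray<float> Frequencies;

	UPROPERTY(VisibleAnywhere)
	UAudioComponent* AudioComp = nullptr;
};
```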

Experiment 2:

Back to the hacky way: I tried using two AudioComponents with a delayed “Play”. Problem: when I set the output volume of the analysis AudioComponent to 0, its PlaybackTime is not maintained, so it is hard to monitor the real delay between analysis and playback.
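For completeness, this is roughly how that setup is driven (member names are made up; the two AudioComponents and the look-ahead value are set elsewhere):

```cpp
// Rough sketch of Experiment 2: AnalysisComp starts now, PlaybackComp starts
// LookAheadSeconds later via a timer. AnalysisComp / PlaybackComp / LookAheadSeconds
// are UPROPERTY members of this (made-up) actor.
void ADelayedPlaybackActor::BeginPlay()
{
	Super::BeginPlay();

	// Setting this to exactly 0 is what loses PlaybackTime for me; a near-zero
	// value (or the sound's "virtualize when silent" option) may behave differently.
	AnalysisComp->SetVolumeMultiplier(0.0001f);
	AnalysisComp->Play();

	FTimerHandle Handle;
	GetWorld()->GetTimerManager().SetTimer(
		Handle, this, &ADelayedPlaybackActor::StartDelayedPlayback, LookAheadSeconds, false);
}

void ADelayedPlaybackActor::StartDelayedPlayback()
{
	PlaybackComp->Play();
}
```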

Experiment 3:

It seems that using Sound Cues does the trick; however, do you have any advice on how to dynamically change a “WavePlayer” sound at runtime (C++)?
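What I have in mind (untested, and assuming USoundNodeWavePlayer::SetSoundWave is available in your engine version) is to walk the cue's node graph and swap the wave on any Wave Player node, something like:

```cpp
// Hedged sketch: walk a Sound Cue's node graph and swap the wave on every
// Wave Player node. This mutates the cue asset instance, so everything playing
// it is affected; the cue probably has to be (re)played to hear the change.
#include "Sound/SoundCue.h"
#include "Sound/SoundNode.h"
#include "Sound/SoundNodeWavePlayer.h"
#include "Sound/SoundWave.h"

static void ReplaceWaveInCue(USoundCue* Cue, USoundWave* NewWave)
{
	if (!Cue || !NewWave)
	{
		return;
	}

	// Depth-first walk starting from the cue's root node.
	TArray<USoundNode*> Stack;
	Stack.Push(Cue->FirstNode);

	while (Stack.Num() > 0)
	{
		USoundNode* Node = Stack.Pop();
		if (!Node)
		{
			continue;
		}

		if (USoundNodeWavePlayer* WavePlayer = Cast<USoundNodeWavePlayer>(Node))
		{
			WavePlayer->SetSoundWave(NewWave); // assumption: available in 4.2x
		}

		for (USoundNode* Child : Node->ChildNodes)
		{
			Stack.Push(Child);
		}
	}
}
```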

Wrapping Up:

Do you think one approach is more “the Unreal way” than the others? Is there another, more appropriate approach? Do you have any advice on fixing one of the experiments?

Thanks a lot for your help and sorry for this wall of text!

The new Synesthesia plugin does baked analysis, so you can set it up with your tracks beforehand, and have the whole analysis dataset available for level generation etc.

Hey, thanks for your answer! I really like your blueprint collection by the way.
Unfortunately, baked analysis cannot really work in my case, since I need the ability to import new content at runtime (chosen by the player), and ultimately to analyze streamable inputs such as YouTube, for example (not critical at the moment).

OK, it turns out to be quite simple, in fact.

In case someone wants to do this kind of thing:

All you have to do is add your submixes (analysis, delay, whatever) on the SoundWave itself and not on the AudioComponent.
That way you keep things simple and it just works.
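For anyone who wants to do the same from C++ instead of the asset's details panel, something along these lines should be equivalent (the SoundSubmixObject / SoundSubmixSends property names are from the 4.2x headers and may differ in other versions; the function itself is just a sketch):

```cpp
// Hedged sketch: set the submix routing on the SoundWave itself rather than on
// the AudioComponent. Roughly equivalent to filling the "Submix" and
// "Submix Sends" fields in the SoundWave asset's details panel.
#include "Sound/SoundWave.h"
#include "Sound/SoundSubmix.h"

void RouteWaveToSubmixes(USoundWave* Wave, USoundSubmix* DelaySubmix, USoundSubmix* AnalysisSubmix)
{
	if (!Wave)
	{
		return;
	}

	// Base submix the wave outputs through (the "Submix" field).
	Wave->SoundSubmixObject = DelaySubmix;

	// Extra send used only for analysis (the "Submix Sends" array).
	// FSoundSubmixSendInfo may live in SoundSubmixSend.h depending on the version.
	FSoundSubmixSendInfo AnalysisSend;
	AnalysisSend.SoundSubmix = AnalysisSubmix;
	AnalysisSend.SendLevel = 1.0f;
	Wave->SoundSubmixSends.Add(AnalysisSend);
}
```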

Cheers.