First thing: I hope you are doing well!
I have been experimenting for a while, so let me start with what I actually need, in case I have confused myself along the way.
Basically, I need to play a track while analyzing it. Ideally the analysis would have some look-ahead so the gameplay can prepare things. For example: (spectrum analyzed) -----(2 seconds later)-----> sound comes out of the speakers.
I started by using TimeSynthComponents in a hacky way: two tracks, one for analysis and one delayed for playback, with the analysis one muted. It works fine-ish, but I wanted something more data-driven (no need to create clips and so on) and, more importantly, "streamable" in case I use other audio sources.
Enter the AudioMixer: I recently started experimenting with, and wrapping my head around, all these (awesome) new concepts.
Since I would prefer to keep things simple, I currently have only an AudioComponent with a Submix taking care of the delay and analysis. The analysis (GetMagnitudeForFrequencies) seems to work fine, but for some reason I cannot send the sound only to the submix, and I am getting two playbacks.
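For context, here is roughly what I mean by "send only to the submix". This is a sketch, not working code: it assumes the `SoundSubmix` base-submix property on `USoundBase` (the property name varies across engine versions) and the `SetSubmixSend` call on `UAudioComponent`; `RouteThroughDelaySubmix` is a hypothetical helper name.

```cpp
#include "Components/AudioComponent.h"
#include "Sound/SoundBase.h"
#include "Sound/SoundSubmix.h"

// Sketch: my understanding is that SetSubmixSend only ADDS a send, so the
// sound still plays through its base (master) submix as well -- hence the
// double playback. Replacing the base submix instead should give one path.
void RouteThroughDelaySubmix(UAudioComponent* AudioComp, USoundSubmix* DelaySubmix)
{
    if (!AudioComp || !AudioComp->Sound || !DelaySubmix)
    {
        return;
    }

    // Replace the default routing: the sound now only reaches the master
    // submix through DelaySubmix (property name assumed for UE ~4.2x).
    AudioComp->Sound->SoundSubmix = DelaySubmix;

    // By contrast, this alone would duplicate the signal (dry + send):
    // AudioComp->SetSubmixSend(DelaySubmix, 1.0f);
}
```

If that assumption is right, the double playback in my setup would just be the dry base-submix path plus the send.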
Back to the hacky way: I tried using two AudioComponents with a delayed "Play". Problem: when I set the output volume of the analysis AudioComponent to 0, its PlaybackTime is not maintained, so it is hard to monitor the real delay between analysis and playback.
It seems that using SoundCues does the trick; however, do you have any advice on how to dynamically change a "WavePlayer" node's sound at runtime (in C++)?
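To be concrete, here is the kind of thing I am after. This is only a sketch, assuming `USoundNodeWavePlayer::SetSoundWave` and a walk over the cue's node graph via `FirstNode`/`ChildNodes`; `SetCueWave` is a hypothetical helper name, and since the cue is a shared asset, this would presumably affect every component playing it.

```cpp
#include "Sound/SoundCue.h"
#include "Sound/SoundNodeWavePlayer.h"
#include "Sound/SoundWave.h"

// Sketch: walk the cue's node graph and point every WavePlayer node at a
// new wave. Iterative traversal starting from the cue's root node.
static void SetCueWave(USoundCue* Cue, USoundWave* NewWave)
{
    if (!Cue || !NewWave)
    {
        return;
    }

    TArray<USoundNode*> Stack;
    Stack.Push(Cue->FirstNode);

    while (Stack.Num() > 0)
    {
        USoundNode* Node = Stack.Pop();
        if (!Node)
        {
            continue;
        }

        if (USoundNodeWavePlayer* Player = Cast<USoundNodeWavePlayer>(Node))
        {
            // Assumed to update the wave player's (soft) wave reference.
            Player->SetSoundWave(NewWave);
        }

        // Continue into the node's children.
        Stack.Append(Node->ChildNodes);
    }
}
```

Is something along these lines the intended way, or is there a cleaner runtime API for this?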
Do you think one of these approaches is more "the Unreal way" than the others? Is there another, more appropriate approach? Or any advice on fixing one of the experiments above?
Thanks a lot for your help, and sorry for the wall of text!