Hey everyone, I’ve been the main developer working on Audio Synesthesia. Super glad to see you’re all interested! I want to give you some quick notes on what’s available in 4.24 and what we have planned for the near future.
Audio Synesthesia In 4.24
Audio Synesthesia is in a pretty early beta stage. As it stands, 4.24 will support baked (aka non-real-time) analysis, and it will generate:
- perceptual loudness over time
- constant Q over time (Constant Q is like a spectrogram except that the frequencies are spaced more like notes in a scale)
- audio onsets over time
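For intuition on the "spaced like notes in a scale" point: constant-Q bins are geometrically spaced, so every octave gets the same number of bins, just like a keyboard. A minimal standalone C++ sketch (the function name and parameters are mine for illustration, not the plugin's API):

```cpp
#include <cmath>
#include <vector>

// Constant-Q bin center frequencies are geometrically spaced:
//   f_k = fMin * 2^(k / binsPerOctave)
// so each octave holds the same number of bins (12 per octave
// lines the bins up with semitones).
std::vector<double> ConstantQCenterFrequencies(double fMin, int binsPerOctave, int numBins)
{
    std::vector<double> freqs(numBins);
    for (int k = 0; k < numBins; ++k)
    {
        freqs[k] = fMin * std::pow(2.0, static_cast<double>(k) / binsPerOctave);
    }
    return freqs;
}
```

With `fMin = 55.0` (the note A1) and 12 bins per octave, bin 12 lands exactly on 110 Hz (A2), which is why constant-Q output reads more musically than a linearly spaced spectrogram.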
The analysis assets keep references to the sounds that created the underlying analysis data, but nothing forces them to be used together. In Blueprints you can pick whichever asset you’d like as a data source and sync it to whatever sound you’d like, or to no sound at all.
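Since nothing ties an analysis asset to a particular sound, "syncing" is conceptually just indexing a baked time series by a playback clock. A hypothetical plain-C++ sketch of that idea (the names and the frame layout are illustrative, not the actual Audio Synesthesia API):

```cpp
#include <algorithm>
#include <vector>

// Baked analysis is a series of frames sampled at a fixed analysis
// rate. To drive gameplay from it, look up the frame nearest the
// current playback time of whatever sound you chose to sync against.
float SampleLoudness(const std::vector<float>& loudnessFrames,
                     float analysisRateHz,
                     float playbackSeconds)
{
    if (loudnessFrames.empty())
    {
        return 0.0f;
    }
    // Round to the nearest analysis frame, clamped to the valid range.
    int index = static_cast<int>(playbackSeconds * analysisRateHz + 0.5f);
    index = std::clamp(index, 0, static_cast<int>(loudnessFrames.size()) - 1);
    return loudnessFrames[index];
}
```

The same lookup works whether the driving clock comes from the original sound, a different sound, or a plain timer, which is why the asset and the audio don't have to stay paired.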
Audio Synesthesia In The Future
Audio Synesthesia will absolutely support real-time analysis for any algorithm that can run in real time. That means loudness and constant Q will definitely be in there. Onsets may make it in too, with some modifications to the baked version.
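As a rough illustration of why loudness is a natural fit for real time: the core of a loudness meter reduces to per-block arithmetic over incoming samples. The real analyzer involves perceptual weighting, but a bare-bones RMS meter, sketched here with made-up names, captures the shape of the work:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Toy per-block loudness: RMS level of one audio block, in dB relative
// to full scale. Runs in O(block size), so it can keep up with the
// audio callback. (Illustrative only; perceptual loudness additionally
// applies frequency weighting.)
double BlockLoudnessDb(const std::vector<float>& block)
{
    double sumSq = 0.0;
    for (float s : block)
    {
        sumSq += static_cast<double>(s) * s;
    }
    const double rms = std::sqrt(sumSq / std::max<std::size_t>(block.size(), 1));
    // Floor the argument so silence doesn't hit log10(0).
    return 20.0 * std::log10(std::max(rms, 1e-9));
}
```

A full-scale block comes out at 0 dB; quieter blocks go negative, which is the usual convention for digital audio meters.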
It will also have some usability updates and new analysis algorithms.
On my short list of analyzers to add are:
- Beat Grid - This one is non-real-time only, but it’s great for syncing animations to musical clips. It will fire off regularly spaced Blueprint events, just as if you were tapping your foot to the music.
- Peak partial tracking - This will break a sound down into a collection of 100 or so tones, each with an amplitude and a trajectory.
- A regular old spectrogram
If there’s anything you’d like to see in future versions, let me know!