I’ve been messing around with the native sound visualization plug-in, trying to recreate a basic audio spectrum effect (you know, one of these things:)
Here’s my blueprint:
I can dynamically initialize n static components and scale their Z values based on the output of the frequency function. (The amplitude part is disabled for now, even though it still appears in the screenshot.)
So my first question is: why are most of the values negative? I’m not sure I understand the theory behind sound waves, because this output doesn’t make sense to me. I thought that if I passed in 64, the audible frequency range from 20 Hz to 20 kHz would be divided evenly into 64 segments, and each array element would hold the decibel level of its segment. Is that not the case? No matter what song I play, the only positive values seem to be at the first and second indices. Also, if you played the audio from this video, for example, you’d expect that as the frequency goes up, the values at the higher indices would rise too. That doesn’t seem to happen.
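To be concrete, here’s a quick sketch of what I *expected* the frequency function to be doing (function name and details are mine, not the actual plug-in API): DFT the current sample window, split 20 Hz–20 kHz into 64 equal bands, and report each band’s level in decibels relative to full scale. Note that under this model a band whose average magnitude is below 1.0 comes out negative in dB, so mostly-negative values wouldn’t actually be wrong:

```python
import math

def expected_spectrum(samples, sample_rate, num_bands=64,
                      f_min=20.0, f_max=20000.0):
    """Hypothetical sketch of what I thought the plug-in does;
    NOT the real implementation."""
    n = len(samples)
    # Naive DFT magnitudes for bins up to Nyquist (slow but clear).
    mags = []
    for k in range(n // 2):
        re = sum(samples[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(samples[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im) / n)

    band_width = (f_max - f_min) / num_bands
    out = []
    for b in range(num_bands):
        lo = f_min + b * band_width
        hi = lo + band_width
        # DFT bin k corresponds to frequency k * sample_rate / n.
        in_band = [m for k, m in enumerate(mags)
                   if lo <= k * sample_rate / n < hi]
        avg = sum(in_band) / len(in_band) if in_band else 0.0
        # dB relative to full scale: any average magnitude < 1.0
        # gives a negative number, hence mostly-negative output.
        out.append(20 * math.log10(avg) if avg > 0 else -120.0)
    return out
```

Under this model a 1 kHz sine tone peaks in the band covering roughly 957–1269 Hz (band index 3), and every value is ≤ 0 dB. Is the plug-in doing something like this, or am I way off?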
And my second question, likewise: I don’t quite understand the output of the amplitude function. The end result looks a lot like the video I linked above, but if I pass in 64 as the number of buckets, what do the index and value of each element in the output array represent?
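For comparison, here’s my best guess at what the amplitude bucketing might mean (again, a hypothetical sketch with names of my own, not the plug-in’s real code): chop the analyzed time window into 64 equal time slices, so the index is a time offset within the window and the value is the average absolute amplitude (loudness) of that slice:

```python
def expected_amplitude(samples, num_buckets=64):
    """My guess at the bucketing; NOT the plug-in's actual code.
    Index b = time slice b of the window; value = mean absolute
    amplitude of the samples in that slice."""
    n = len(samples)
    out = []
    for b in range(num_buckets):
        lo = b * n // num_buckets
        hi = (b + 1) * n // num_buckets
        chunk = samples[lo:hi]
        out.append(sum(abs(s) for s in chunk) / len(chunk) if chunk else 0.0)
    return out
```

If that’s right, the array is a time-domain envelope rather than a spectrum, which would explain why it looks like the video even though it isn’t split by frequency. Can anyone confirm?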