Output of Calculate Frequency Spectrum

I’ve been messing around with the native sound visualization plug-in, trying to recreate a basic audio spectrum effect (you know, one of these things):
[Image: spec.png]

Here’s my blueprint:

I can dynamically initialize an arbitrary number (n) of static mesh components and scale their Z values based on the output from the frequency function. (The amplitude node is disabled for now, even though it’s in the screenshot.)
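In case it helps, here’s roughly what that part of the blueprint would look like as C++ (a sketch with made-up names like `UpdateBars` and `Bars`, not my actual setup — it assumes the bar components were already spawned elsewhere):

```cpp
#include "Components/StaticMeshComponent.h"

// Hypothetical helper mirroring my blueprint logic; "Bars" would be the
// static mesh components I spawn dynamically, and "Spectrum" the array
// returned by Calculate Frequency Spectrum.
static void UpdateBars(const TArray<UStaticMeshComponent*>& Bars,
                       const TArray<float>& Spectrum)
{
    for (int32 i = 0; i < Bars.Num() && i < Spectrum.Num(); ++i)
    {
        // Clamp so negative spectrum values don't flip the bar inside out.
        const float ZScale = FMath::Max(Spectrum[i], 0.0f);
        Bars[i]->SetRelativeScale3D(FVector(1.0f, 1.0f, ZScale));
    }
}
```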

So my first question is: why are most of the values negative? I don’t know if I quite understand the theory behind sound waves, because this output doesn’t make sense to me. I thought that if I pass in 64, the audible frequency range from 20 Hz to 20 kHz would be evenly divided into 64 segments, and the value in each array element would be the decibel level of that frequency band. Is this not the case? No matter what song I play, the only positive values seem to be at the 1st and 2nd index. Moreover, if you played the audio from this video, for example, you’d expect the values at the higher indices to rise along with the frequency. That doesn’t seem to be the case.
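To make that concrete, here’s the mental model I had, sketched as standalone C++ (just my assumption of how the bins would be laid out, not the plugin’s actual code):

```cpp
#include <cstdio>

int main()
{
    const int   NumBins  = 64;
    const float MinFreq  = 20.0f, MaxFreq = 20000.0f;
    const float BinWidth = (MaxFreq - MinFreq) / NumBins; // ~312 Hz per bin

    for (int i = 0; i < NumBins; ++i)
    {
        const float Lo = MinFreq + i * BinWidth;
        // I assumed Spectrum[i] would be the dB level of the energy
        // between Lo and Lo + BinWidth.
        std::printf("bin %2d covers %7.1f Hz - %7.1f Hz\n", i, Lo, Lo + BinWidth);
    }
    return 0;
}
```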

And my second question is, likewise, that I don’t quite understand the output from the amplitude function. I noticed that the end result looks a lot like the video I linked above, but if I put in 64 as the number of buckets, what do the index and the value of each element in the output array represent?

Thanks!

Just guessing, but Spectrum Width is probably asking for the frequency width of whatever Sound Wave you’re passing it, so ~20 kHz in this case.

You’re receiving results relating to the digital range of the audio file. You might notice that the chart you linked begins at -60 dB and goes up to 0 dB. In digital signals, 0 dB is the strongest possible signal, or, to stretch the concept, the “loudest” signal that can be represented, with a power of 1. Anything above that is “clipped”, which produces the loud, distorted crackling you might hear when an audio file is over-compressed. -60 dB actually represents a power level of 10^-6, or 0.000001 units.
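If it helps to see the math, here’s a small standalone C++ sketch of the dB-to-linear conversion (the divisors 10 and 20 are the standard power/amplitude conventions, nothing specific to the plug-in):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const float Levels[] = { 0.0f, -6.0f, -20.0f, -60.0f, -144.0f };
    for (float dB : Levels)
    {
        // 0 dB is full scale (ratio 1.0); dB is a log scale, so power
        // drops by a factor of 10 for every -10 dB, amplitude for every -20 dB.
        const float PowerRatio     = std::pow(10.0f, dB / 10.0f);
        const float AmplitudeRatio = std::pow(10.0f, dB / 20.0f);
        std::printf("%7.1f dB -> power ratio %.3e, amplitude ratio %.3e\n",
                    dB, PowerRatio, AmplitudeRatio);
    }
    return 0;
}
```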

Every digital bit you add to a file gives about 6 dB of dynamic range, so a 24-bit WAV file has a noise floor of -144 dB; below that is essentially zero, because the audio signal can’t be distinguished from noise. So the loudness of the signal is really just the difference between the representable values. Something similar applies to the width of the signal, from 20 Hz to 20 kHz, with frequencies outside that range being cut off. That’s about the limit of my knowledge, so I hope that helps you understand.
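Here’s the same 6-dB-per-bit rule worked out in a quick C++ sketch (just the standard 20·log10(2) arithmetic, nothing engine-specific):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const int Depths[] = { 8, 16, 24 };
    for (int Bits : Depths)
    {
        // Each bit doubles the number of representable amplitude steps,
        // adding 20*log10(2) ~= 6.02 dB of range.
        const float RangeDb = Bits * 20.0f * std::log10(2.0f);
        std::printf("%2d-bit audio: ~%.1f dB of range (noise floor ~ -%.0f dB)\n",
                    Bits, RangeDb, RangeDb);
    }
    return 0;
}
```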

This actually makes sense. I should have paid more attention to the y-axis of the screenshot I posted myself… Seems like the axis starts at -60 dB.

As for the spectrum width, I was studying up a bit on sound theory, and it turns out octaves are a factor of 2 apart in frequency (if C1 were 64 Hz, then C2, one octave higher, would be 128 Hz, and so on). Again, you can see that the x-axis doesn’t scale linearly in my first screenshot. I should have paid attention to my own images…
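A quick C++ sketch of that doubling, using my hypothetical 64 Hz C1 (the real C1 is closer to 32.7 Hz, so the numbers are illustrative only):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Each octave doubles the frequency, so notes land at equal
    // steps on a log2 axis.
    const float C1 = 64.0f;
    for (int Octave = 0; Octave < 9; ++Octave)
        std::printf("C%d ~ %.0f Hz\n", Octave + 1, C1 * std::pow(2.0f, (float)Octave));

    // 20 Hz to 20 kHz spans log2(20000/20) ~= 10 octaves, which is why
    // spectrum plots use a logarithmic x-axis.
    std::printf("audible range: %.1f octaves\n", std::log2(20000.0f / 20.0f));
    return 0;
}
```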

That answers my questions, but I thought I’d chime in to share what I learned. Thanks!