Audio Visualization Plugin 101

I’ve come to realize that the Audio Visualization Plugin isn’t very useful for shipping games, because you cannot package it with a final build. That said, since some people use UE4 for film-making, it is still a very useful plugin.

So, from what I have gathered from various video tutorials, below is what I have so far in my Blueprint. I’m not looking to make a spectrum analyzer; I just want to have a single object react to the sound exactly like in the video here - YouTube

Your link seems to be broken, here’s the fixed one:

Thanks, corrected!

That guy also posted an explanation and a screenshot of his Blueprint for this @ Further and more advanced mucking about. - DanIEL KELLY: CGMA

And now I see the missing link was the Get function, which turns the array data into a normal float that can then be used for whatever purpose after!

Success… I managed to get the data flowing from the Amplitude into the intensity of the light actor. I used a fire sound to make the light flicker as if it were the fire itself, which is why this could be a useful addition to the game package as well… Blueprint to come.
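In case it helps anyone later, here is roughly what that hookup looks like in C++ instead of Blueprints. This is just a sketch of my understanding: I’m assuming the SoundVisualizations plugin’s Get Amplitude node maps to a USoundVisualizationStatics::GetAmplitude function, and the header path, exact signature, and the 5000 scale factor are all assumptions on my part.

```cpp
#include "Components/PointLightComponent.h"
#include "Sound/SoundWave.h"
#include "SoundVisualizationStatics.h" // from the SoundVisualizations plugin (path may vary)

// Untested sketch of the amplitude-to-light-intensity hookup.
void UpdateFlicker(USoundWave* FireSound, UPointLightComponent* Light, float PlaybackTime)
{
    TArray<float> Amplitudes;

    // Analyze a short window of the wave starting at the current playback
    // time, collapsed into a single amplitude bucket.
    USoundVisualizationStatics::GetAmplitude(
        FireSound, /*Channel=*/0, /*StartTime=*/PlaybackTime,
        /*TimeLength=*/0.1f, /*AmplitudeBuckets=*/1, Amplitudes);

    if (Amplitudes.Num() > 0)
    {
        // This is the "Get" node from the thread: index into the array to
        // pull out a plain float, then scale it into a light intensity.
        Light->SetIntensity(Amplitudes[0] * 5000.0f); // 5000 is an arbitrary scale
    }
}
```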

Is there any way to get the Sound Wave inside of the Amplitude node to loop?

I will post the blueprint, here is a short clip
https://www.youtube.com/watch?v=NDXbZgpJkoA

Here’s the blueprint. Sadly, despite its potential, it does not track every change in volume…

I don’t understand the purpose of the feedback loop after the ‘Set Intensity’ function is called. The delay node has a duration of zero and you are requesting the amplitude on tick anyway… :confused: I’m confused…

The reason it is not tracking every change in amplitude is that you are only passing it the first index of the amplitude float array. It wouldn’t really be viable to use every sample’s amplitude value anyway, since the world timer is not sample accurate (it doesn’t tick as fast as audio playback)… I would also suggest interpolating between the resulting values for a smoother intensity change.

I hope this helps :slight_smile:

To be perfectly honest, the blueprint was a deviation from, or blending of, the blog link I provided and another tutorial video. I’m still new at this, so forgive my noob creations. I also realized the delay loop wasn’t really doing anything, either.

All is forgiven; I hope I didn’t come across as patronizing in any way… I’d be happy to lend a hand if there’s anything you need to know :slight_smile:

All is good. You mentioned the world timer not being sample accurate; hmm, that’s not good. The end result I’m going for will have individual sound tracks, e.g. bass drum, snare drum, piano, synth, etc., but if the timers are off, the results may not be very good!

So if I’m understanding the index function correctly: keeping the setup from the first Blueprint, but adding a Get for each index and then somehow combining those values to alter the intensity, would give a better result?

Let me explain…

I believe the array that is returned is an array of float values representing the amplitude of the audio for each sample from the given start time for the supplied time length. 44100 samples per second is a commonly used sample rate in audio. You are just grabbing the very first of these samples for the time period that you have obtained. This may not be representative of the amplitude at all (audio has the potential to massively fluctuate between sample points).

I would personally grab a few of these values (5 - 10 perhaps) and calculate the median value, then use ‘FInterpTo’ to smoothly transition between this and the next median value.
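In rough C++ terms, the idea looks something like this (a sketch only, untested; the helper names and the intensity scale factor are made up for illustration, and it assumes you already have the amplitude float array from the plugin):

```cpp
#include "CoreMinimal.h"

// Take the median of a handful of amplitude samples to reject outliers.
float MedianOfFirstSamples(const TArray<float>& Amplitudes, int32 NumSamples = 7)
{
    const int32 Count = FMath::Min(NumSamples, Amplitudes.Num());
    if (Count == 0)
    {
        return 0.0f;
    }

    // Copy the first few samples (5-10, as suggested) and sort the copy.
    TArray<float> Window(Amplitudes.GetData(), Count);
    Window.Sort();
    return Window[Count / 2]; // middle value (upper median for even counts)
}

// Call every tick: eases the current intensity toward the latest median.
float SmoothIntensity(float CurrentIntensity, float TargetMedian, float DeltaTime)
{
    // FInterpTo moves the value toward the target at roughly InterpSpeed
    // per second, slowing as it gets close, for a smooth intensity change.
    return FMath::FInterpTo(CurrentIntensity, TargetMedian * 5000.0f,
                            DeltaTime, /*InterpSpeed=*/8.0f);
}
```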

:slight_smile:

I’ll try to knock something up this eve to show you (time depending)

Hey, I think what ULLS is trying to say is that you should use a linear interpolation node, also known as a LERP.

It takes three input values: start, end, and a percentage.

If you haven’t dealt with this before it takes a little bit of mind bending to understand what it’s doing because of the time factor.
Basically, for the first parameter you put the light’s current intensity level, and for the second parameter you create a variable as your destination light intensity.
Then for the third parameter you put in a floating-point value from 0 to 1 that represents what percentage of the way between the two the new value should be.

For example: light.intensity = Lerp(1000, 2000, 0.5f)
This is what it might look like in actual code, but it’s the same thing in Blueprints.

In this case you would be interpolating between 1000 and 2000 by 50%, so you would get 1500 as a result.

So you would want something more like:

light.intensity = Lerp(light.intensity, targetIntensity, 0.1f)

This is a little strange, because you are passing the variable as the first parameter but also assigning the return value back to that same variable, but it makes sense when you think about it.
If you were to use a constant first value like I did above (1000), you would get 1500 and it would just sit there every frame. But if the first parameter is something that varies and is reassigned every frame, you get a nice smooth gradation.

So say the intensity started at zero in a variable X.
X = Lerp(X, 1000, 0.1f) would give you 100, the next frame would give 190, the next 271, and so on.
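If you want to see those numbers come out yourself, here’s a tiny standalone snippet (plain C++, nothing engine specific, purely illustrative) that re-applies the lerp each “frame”:

```cpp
#include <cstdio>

// Standard linear interpolation: start + (end - start) * alpha.
float Lerp(float Start, float End, float Alpha)
{
    return Start + (End - Start) * Alpha;
}

int main()
{
    float X = 0.0f;               // the "light intensity" variable
    const float Target = 1000.0f; // destination intensity

    // Re-applying the lerp each frame eases X toward the target:
    // each step covers 10% of the remaining gap (100, 190, 271, ...).
    for (int Frame = 1; Frame <= 10; ++Frame)
    {
        X = Lerp(X, Target, 0.1f);
        std::printf("Frame %2d: %.1f\n", Frame, X);
    }
    return 0;
}
```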

Also, it’s important to realize that the delay node you posted above is doing SOMETHING, not nothing.
I don’t know exactly how the timer code works, but I would assume you’re creating an infinite loop. If it’s not crashing, I’m not sure why, but you should get rid of it if you haven’t already, to get more reliable results.

Hmm, the interpolation seems to make sense; I think I can wrap my head around it. I will give it another jab using your advice, guys.

My other solution is importing the animation data from Blender with the sound F-Curve baked in and somehow extracting the data from that. I managed to get the animation in but fell asleep before I could go any further.

I got the impression the delay loop was added to re-trigger the Get Amplitude node so it would sample constantly. I did try different delay times, and it simply kept the effect at a specific value.

Try converting Andrew Kramer’s After Effects tutorial into Blueprints. It should follow the same logic.

Convert the audio file into two parts (treble and bass), keyframe those parts, and use those keyframes on the object you want to animate. It shouldn’t even cost that much, since it would be rendered into a “static animation”. If you were to do a “real-time” animation, where the user could choose whatever track they wanted from their own selection, that would be pretty costly, since the engine would need to do the keyframing / conversions in real time.
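To make that baked idea concrete, here is a hedged C++ sketch: pre-sample the amplitude values once into a float curve, then just read the curve each tick. Everything here (the BakeAmplitudeCurve name, the scale factor, where the amplitude array comes from) is my own illustration of the approach, not the tutorial’s actual method.

```cpp
#include "CoreMinimal.h"
#include "Curves/RichCurve.h"

// 'Amplitudes' is assumed to come from a one-time analysis pass (for
// example, the plugin's Get Amplitude over the whole track, or
// pre-filtered treble/bass versions of it).
void BakeAmplitudeCurve(const TArray<float>& Amplitudes, float TrackLength, FRichCurve& OutCurve)
{
    OutCurve.Reset();
    const int32 Num = Amplitudes.Num();
    for (int32 i = 0; i < Num; ++i)
    {
        // Spread the keys evenly across the track's duration.
        const float Time = (Num > 1) ? TrackLength * i / (Num - 1) : 0.0f;
        OutCurve.AddKey(Time, Amplitudes[i]);
    }
}

// Per tick: a cheap curve lookup driven by the current playback time,
// with no audio analysis needed at runtime.
float SampleBakedIntensity(const FRichCurve& Curve, float PlaybackTime)
{
    return Curve.Eval(PlaybackTime) * 5000.0f; // arbitrary intensity scale
}
```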