Hey there! I was SUPER excited to see spectral analysis being added to the engine, as it's something I've been struggling with for my own project. I've used a flurry of plugins that were supposed to solve this for me, but none have really provided anything useful.
So the second 4.22 was out I started digging into this, trying to figure out how it works. Keep in mind, I'm an artist here, not a programmer!
I want to analyse my game's currently playing music and feed the data to shaders, in order to drive HUD elements and create an on-screen bar visualiser.
Exactly like what's shown in the release notes GIF!
So this is my current setup: I'm just scaling some boxes based on these output magnitudes, using the (Get Copy) node to select which frequency to map to which box.
I assumed it was proper to set the first array value to 20, since anything below 20 Hz cannot be heard by the human ear; my current Max Frequency value is set to 20000.
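Just to illustrate the arithmetic of that setup (this is a plain Python sketch, not Blueprint code, and the constant names are my own): splitting the 20-20000 Hz span into six equal linear bands gives edges like these.

```python
# Sketch of the linear six-way split described above.
# MIN_HZ / MAX_HZ mirror the values from the post; names are illustrative.
MIN_HZ = 20.0
MAX_HZ = 20000.0
BANDS = 6

step = (MAX_HZ - MIN_HZ) / BANDS            # 3330 Hz per band
edges = [MIN_HZ + i * step for i in range(BANDS + 1)]
# edges -> [20.0, 3350.0, 6680.0, 10010.0, 13340.0, 16670.0, 20000.0]
```

So the first band covers roughly 20-3350 Hz, which is close to the 20-3333.3 figure you get by dividing 20000 by six and ignoring the 20 Hz offset.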
Now, I'd like to remap the values I get out of these frequencies to a 0-1 range, to make them easy to use in code and shaders via material parameter collections. However, this is where I get confused!
So I'm dividing the max frequency by six, as I currently want six outputs. I would have assumed the result on Array 0 would become a value between 20 and 3333.3.
However, what is currently being output is a value between roughly 0 and 0.30, and I've no clue how to translate these properly into usable values.
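The key thing tripping you up seems to be that each array entry is a *magnitude* (how loud that frequency band currently is), not a frequency, which is why the numbers are small. So the remap input range should be the observed magnitude range (roughly 0-0.3 here), not 20-3333 Hz. A minimal sketch of that remap in Python (the function is illustrative, similar in spirit to the Map Range Clamped Blueprint node):

```python
def map_range(value, in_min, in_max, out_min=0.0, out_max=1.0):
    """Linear remap with clamping, like a 'Map Range Clamped' node."""
    if in_max <= in_min:
        return out_min
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))  # clamp to the output range
    return out_min + t * (out_max - out_min)

# Remap a raw magnitude against the observed magnitude range, not Hz:
print(map_range(0.15, 0.0, 0.30))  # 0.5
```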
Folks out there who have dug into this, send me your aid!
I'm familiar with range mapping; the issue lies in the fact that the base values confuse me. I can't set proper max and min values because the outcome of the frequency analysis isn't what I thought it'd be.
Yeah, it will be very reactive until it's warmed up, so this is more of a live-input thing, as it is in the picture. I might want to record and lock in ranges before packaging, provided there won't be any surprises regarding what is played through it.
Outside the picture there is smoothing via interpolation going on, and also a 'forgetful memory' function that calms down the max recorded values over time (without this, a single extreme volume event could ruin the long-term normalising).
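The forgetful-memory idea described above can be sketched like this (Python pseudocode for the concept, not the actual Blueprint graph; the class name, decay rate, and smoothing speed are all my own assumptions). It tracks a running per-band max that slowly decays, so one extreme spike doesn't pin the normalisation range forever, and it smooths the output with exponential interpolation, similar to an FInterpTo node:

```python
class ForgetfulNormalizer:
    """Decaying-max normaliser with interpolated output (illustrative)."""

    def __init__(self, decay_per_second=0.1, smooth_speed=8.0, floor=1e-4):
        self.peak = floor        # remembered max magnitude, decays over time
        self.smoothed = 0.0      # interpolated 0-1 output
        self.floor = floor       # avoids division by zero on silence
        self.decay = decay_per_second
        self.speed = smooth_speed

    def update(self, magnitude, dt):
        # Let the remembered peak calm down over time...
        self.peak = max(self.floor, self.peak * (1.0 - self.decay * dt))
        # ...but snap it up instantly when a louder value arrives.
        self.peak = max(self.peak, magnitude)
        target = magnitude / self.peak  # normalised 0-1
        # Ease toward the target instead of jumping (interp smoothing).
        alpha = min(1.0, self.speed * dt)
        self.smoothed += (target - self.smoothed) * alpha
        return self.smoothed

norm = ForgetfulNormalizer()
value = norm.update(0.3, dt=0.1)   # first loud frame rises toward 1.0
```

One of these per frequency band, updated every tick, would reproduce the behaviour described: extra sensitive early on, then settling as the peak memory warms up.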
Hmm, yeah, gave it a try! Sadly I need this to react properly to a wide range of songs, so taking down manual values for each and every one of them would be rough.
Could reset it between songs? It works well as a general thing for songs now. It works out nicely that it's extra sensitive in the beginning, when a song starts soft and builds.
Hopefully you can help, as I've been trying different tutorials and potential plugins for a while now.
I want to make a simple audio visualiser like you've shown, but using either the overall engine audio or the source that comes off the web browser.
So far, most I've seen require an actual source audio file, which is pretty limiting. I'm still learning but really want to get something up and running. Any chance you could make a short guide or example BP project available? It would really help. Thanks!
Great work, and thanks for this! I just had a look at the thread and saw you got mic input working, so it got me wondering whether that's one way of capturing the general audio of what you hear, like the Windows audio capture plugin?
I would try to use that, but apparently it's got performance issues and hasn't been updated.
Np :) Yeah, you could use software on your PC to reroute audio to the mic. It wouldn't automatically work for end users with a packaged game, but it would work for you with the routing. There is some delay, though.
I recommend both or either of these: