I have looked all over the internet for some way to make an audio-to-light converter in Unreal. I've had no luck, and I really want to make one for a game I'm building for college so I can test my skills. Does anyone have ideas for Blueprints that might solve this problem?
You're kind of lucky, as the new audio engine that was finalized in 4.24 has audio analysis APIs. I haven't played with it myself yet, so I can only give you a link:
If you don't see it, that means it's a plugin you need to enable.
Thanks, I'll try this out and hopefully it will work the way I want. Many thanks.
I'll look into this and see if it works. Thanks for the help!
@'s way should work to pulse the light at the beginning of an individual sound.
The audio analysis APIs I have seen so far in 4.23+ rely on pre-baking the FFT data from a WAV file, so it might not be what you need for real-time analysis.
However, there is probably a way to get FFT values in near-realtime from incoming microphone waveforms. I have serious doubts that it's already exposed to Blueprints, though, so you'd have to C++ it.
But there might be a way to set up analysis on a real-time stream of audio. Just be aware that real-time (or slightly delayed) analysis is not going to be very clean or accurate. If all you're doing is a light pulse, that might not matter much, but in my experience you will get some flickering, and some brightness spikes where the audio seems too quiet to elicit such a response, due to the inaccuracy. If the audio is delayed until it has been analyzed more cleanly, then you can do it accurately and in sync.
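To illustrate why raw real-time amplitude flickers and how smoothing helps, here is a hedged, engine-agnostic Python sketch (not UE's actual API): per-block RMS jumps around with the waveform, while a simple one-pole attack/release smoother rises quickly but falls back gradually.

```python
import math

def block_rms(samples):
    """Root-mean-square amplitude of one audio block."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def smooth(prev, target, attack=0.5, release=0.1):
    """One-pole smoothing: rise fast (attack), fall slowly (release)."""
    coeff = attack if target > prev else release
    return prev + coeff * (target - prev)

# Fake audio: loud block, near-silent block, loud block.
# Driving the light from raw RMS would flicker; the smoothed level does not.
blocks = [[0.8, -0.8, 0.8, -0.8],
          [0.01, -0.01, 0.01, -0.01],
          [0.8, -0.8, 0.8, -0.8]]
level = 0.0
for b in blocks:
    level = smooth(level, block_rms(b))
    print(round(level, 3))
```

The block size and coefficients here are arbitrary illustration values; a real implementation would derive them from the audio sample rate.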
Anyway I think it gets more difficult the more realtime responsive you want it to be.
But I haven’t checked out all the tools in 4.24 so maybe they’ve got some nice simple-to-use solutions there.
What I have said is based on the idea that you want the light to pulse with the amplitude of streamed speech from the user’s microphone. That’s the harder thing to do.
The easier thing is making it work with pre-recorded audio files. You can use an Envelope Follower on the baked FFT data from a WAV file to generate a curve, then sample the value of that curve with respect to time to find the amplitude for your light at that point.
You can also analyze frequencies if you need more detail than the current volume of the sound.
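The "sample the curve at the current playback time" idea can be sketched like this (illustrative Python; I'm assuming the baked envelope as sorted `(time, amplitude)` keypoints, which is not UE's actual internal format):

```python
def sample_curve(keys, t):
    """Linearly interpolate an amplitude curve given as sorted (time, value) pairs."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

# Hypothetical envelope keypoints baked from a short speech clip.
envelope = [(0.0, 0.0), (0.25, 0.9), (0.5, 0.2), (1.0, 0.0)]
print(sample_curve(envelope, 0.375))  # halfway between 0.9 and 0.2, approximately 0.55
```

In-engine you would feed the sound's current playback time in as `t` each tick and drive the light from the returned value.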
The easiest thing is what was given in @'s blueprint, which pulses once for each entire Sound Cue played; but I don't think it pulses multiple times with the changes in the soundwave itself within a single cue.
Thank you, this is so helpful. The audio is going to be prerecorded, and I'm planning to have it triggered on movement, so I wanted a light that reacts to the sound as if it were a Dalek.
In that case I would:
- Go to the WAV sound file asset for the speech and open it up.
- Scroll down in the Details pane for the WAV asset and enable the options for baking FFT data (the documentation covers the specifics).
- Make sure it’s baking FFT data for Envelope Following and firing Envelope Follow events.
- In your blueprint that makes the light pulse, Bind an Event to the character speech Sound Cue that uses that WAV file, and this event will be for Envelope Following.
- In the bound event, use the envelope value to set the light intensity.
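The last step above, turning the envelope value into a light intensity, is just a remap. A minimal sketch, assuming a 0..1 envelope and made-up intensity bounds (the function name and values are hypothetical, not a UE API):

```python
def envelope_to_intensity(env, min_intensity=50.0, max_intensity=5000.0):
    """Map a 0..1 envelope value onto a light intensity range, clamped."""
    env = max(0.0, min(1.0, env))
    return min_intensity + env * (max_intensity - min_intensity)

print(envelope_to_intensity(0.5))  # midpoint of the range: 2525.0
```

Keeping a nonzero minimum intensity avoids the light snapping fully off between words, which tends to look less like a Dalek and more like a strobe.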
There are more details like setting the attack interval and falloff interval for the FFT data curve. This determines how sensitive it is to changes in volume and how fast the curve falls back to baseline values after experiencing an increase in sound volume. You’ll have to tweak these values a few times to get the speech light pulse to respond the way you want. The default values are, I think, set to what would look good on a graphic equalizer (jump up fairly quickly, but fall off in a slower trickle).
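For intuition about what the attack and falloff intervals do, here is a hedged sketch of a classic envelope follower with separate rise and fall time constants (UE's internal smoothing may differ, but the shape is similar): short intervals track the signal tightly, long ones lag behind.

```python
import math

def envelope_follower(signal, dt, attack_sec, release_sec):
    """Track |signal| with separate attack (rise) and release (fall) time constants."""
    atk = math.exp(-dt / attack_sec)
    rel = math.exp(-dt / release_sec)
    env, out = 0.0, []
    for x in signal:
        x = abs(x)
        coeff = atk if x > env else rel
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

# A burst followed by silence: a fast attack makes the envelope jump up almost
# immediately, while a slow release makes it trickle back down afterwards
# (the "graphic equalizer" look described above).
signal = [1.0] * 10 + [0.0] * 10
env = envelope_follower(signal, dt=0.01, attack_sec=0.005, release_sec=0.1)
print(round(env[9], 2), round(env[19], 2))
```

Shrinking `release_sec` makes the light snap off between syllables; growing it makes pulses blur together, which is the same trade-off you will be tuning in the asset's falloff interval.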