@'s approach should work for pulsing the light at the start of an individual sound.
The audio analysis APIs I've seen so far in 4.23+ rely on pre-baking the FFT data from a WAV file, so they might not be what you need for real-time analysis.
However, there is probably a way to get FFT values in near-realtime from incoming waveforms from someone's microphone. I have a lot of doubts about whether that can already be done in Blueprints, though, so you'd likely have to do it in C++.
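To make that concrete: the simplest "how loud is the mic right now" value you could drive a light with is the RMS amplitude of each incoming sample buffer. This is a generic sketch, not UE API code; `BufferRms` is a hypothetical helper you'd call from wherever your audio capture hands you float samples.

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical helper: RMS amplitude of one buffer of mic samples in [-1, 1].
// In an engine you'd call this from the audio capture callback each time a
// new buffer arrives, then feed the result to the light's intensity.
float BufferRms(const float* Samples, std::size_t Num)
{
    if (Samples == nullptr || Num == 0)
    {
        return 0.0f;
    }
    double SumSq = 0.0;
    for (std::size_t i = 0; i < Num; ++i)
    {
        SumSq += static_cast<double>(Samples[i]) * Samples[i];
    }
    return static_cast<float>(std::sqrt(SumSq / static_cast<double>(Num)));
}
```

RMS is a better loudness proxy than peak sample value because it averages over the buffer instead of reacting to single spikes.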
But there might be a way to set up analysis on a real-time stream of audio. Just be aware that real-time (or slightly delayed real-time) analysis is not going to be very clean or accurate. If all you're doing is a light pulse, that might not matter much, but in my experience you'll get some flickering, and some brightness where the sound seems too quiet to elicit such a response, due to the inaccuracy. If the audio is delayed until it has been analyzed more cleanly, then you can do it accurately and in sync.
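One common way to tame that flickering is to smooth the raw amplitude with an attack/release filter: let the value rise quickly so the pulse stays responsive, but fall slowly so brief dips don't make the light stutter. This is a generic one-pole smoother sketch; the coefficient values are assumptions you'd tune by eye.

```cpp
// One-pole attack/release smoother. Fast attack keeps the pulse snappy;
// slow release hides the frame-to-frame jitter of a noisy amplitude signal.
// Coefficients are illustrative assumptions, not engine defaults.
struct EnvelopeSmoother
{
    float Attack  = 0.5f;  // 0..1 per update; higher = snappier rise
    float Release = 0.05f; // 0..1 per update; lower = slower fade
    float Value   = 0.0f;  // current smoothed amplitude

    float Process(float Input)
    {
        const float Coeff = (Input > Value) ? Attack : Release;
        Value += Coeff * (Input - Value);
        return Value;
    }
};
```

Feed it the per-buffer amplitude each tick and drive the light from `Value` instead of the raw number.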
Anyway, I think it gets more difficult the more realtime-responsive you want it to be.
But I haven’t checked out all the tools in 4.24 so maybe they’ve got some nice simple-to-use solutions there.
What I've said above assumes you want the light to pulse with the amplitude of speech streamed from the user's microphone. That's the harder thing to do.
The easier thing is making it work with pre-recorded audio files. You can use an Envelope Follower on the baked analysis data from a WAV file to generate a curve, then sample the value of that curve with respect to time to find the amplitude for your light at that point.
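The "sample the curve at the current playback time" step looks something like this. It's a hedged sketch with a plain array of curve points; in UE you'd more likely sample a `UCurveFloat` or query the component's cooked envelope data, but the interpolation idea is the same. `SampleCurve` and its parameters are hypothetical names.

```cpp
#include <cstddef>
#include <vector>

// Sketch: linearly interpolate a pre-baked amplitude curve at a playback
// time. Points holds amplitudes sampled at a fixed rate (PointsPerSecond),
// e.g. the output of an envelope-follower pass over the WAV.
float SampleCurve(const std::vector<float>& Points,
                  float PointsPerSecond,
                  float TimeSeconds)
{
    if (Points.empty())
    {
        return 0.0f;
    }
    const float Pos = TimeSeconds * PointsPerSecond;
    const std::size_t i0 = static_cast<std::size_t>(Pos);
    if (i0 + 1 >= Points.size())
    {
        return Points.back(); // past the end: hold the last value
    }
    const float Frac = Pos - static_cast<float>(i0);
    return Points[i0] + Frac * (Points[i0 + 1] - Points[i0]);
}
```

Because the curve is baked ahead of time, this stays perfectly in sync with playback, which is exactly the accuracy advantage over live analysis mentioned above.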
You can also analyze specific frequencies if you need more detail than the current volume of the sound.
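If you only care about one band (say, bass energy), you don't even need a full FFT: the Goertzel algorithm measures the energy at a single frequency. This is a generic sketch under assumed sample-rate and target-frequency parameters, not an engine API.

```cpp
#include <cmath>
#include <cstddef>

// Goertzel algorithm: magnitude of one frequency component of a sample
// buffer, cheaper than a full FFT when you only need a band or two
// (e.g. bass energy to drive the light).
float GoertzelMagnitude(const float* Samples, std::size_t Num,
                        float SampleRate, float TargetHz)
{
    const float Pi = 3.14159265358979f;
    const float K = 2.0f * std::cos(2.0f * Pi * TargetHz / SampleRate);
    float S1 = 0.0f; // state: previous output
    float S2 = 0.0f; // state: output before that
    for (std::size_t i = 0; i < Num; ++i)
    {
        const float S0 = Samples[i] + K * S1 - S2;
        S2 = S1;
        S1 = S0;
    }
    // Standard Goertzel magnitude from the final two filter states.
    return std::sqrt(S1 * S1 + S2 * S2 - K * S1 * S2);
}
```

Run it per buffer at a couple of target frequencies and you get a crude band meter without FFT bookkeeping.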
The easiest thing is what @'s blueprint already does: pulse once for each entire sound cue played. But I don't think it pulses multiple times with the changes in the waveform itself within a single cue.