Material that reacts to sound? (waveform display)

Hey everyone!

I have a robot character design that displays a waveform instead of a mouth, which is pretty common in a lot of existing robot designs. I would like this to actually sync up with the spoken audio. Now, asking for an ACTUAL working generated waveform might be a bit too much, but something that can detect when audio is playing and just switch a waveform image out (with a panner) would be really neat!

Problem is, I don’t know how to sync up audio and a material. I know this is possible as Mass Effect did something similar where the Quarian aliens displayed a light whenever they talked. That was made in UDK, yes, but the material system seems similar enough to make it possible.

How would I go about making a system like this? I would really appreciate the help!

The audio detection shouldn’t happen in the material as materials have no notion of audio.

If you want to fake it with a panning image, all you need in the material is a switch parameter to toggle between the two states. The logic to start and stop this animation should be implemented in Blueprints/C++. Simply set the material parameter to true at the start of the audio clip and back to false at the end.
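
As a sketch of that driving logic in plain C++ (the names `WaveformDriver` and `AudioPlayingParam` are mine, not engine API; in Unreal you would push the value into a scalar parameter on a dynamic material instance, wired to the audio component's play call and its finished delegate):

```cpp
#include <cassert>

// Models the toggle described above: drive a single material parameter to 1
// while a dialog clip plays and back to 0 when it finishes. The material's
// switch parameter reads this value to choose between the idle image and
// the panning waveform image.
struct WaveformDriver {
    float AudioPlayingParam = 0.0f;  // read by the material's switch/lerp

    void OnClipStarted()  { AudioPlayingParam = 1.0f; }  // show the panner
    void OnClipFinished() { AudioPlayingParam = 0.0f; }  // back to idle image
};
```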

Quick note- we’re working on a new audio engine that should help with this kind of idea :). However, in the meantime I’d suggest a relatively effective hack- in a Blueprint, for each of the dialog lines, create a matching Timeline float track that has a curve that is similar to the amplitude of the dialog line. That way you can use the float to drive materials as the audio plays.
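
A Timeline float track is essentially a keyed curve sampled over time. A minimal stand-alone model of one (my own `FloatTrack` type, not engine code; Unreal's Timeline node does this sampling for you each tick and hands you the value in an update event):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// A keyed float curve with linear interpolation between keys -- the same
// shape of data you would author by tracing a dialog clip's amplitude.
struct FloatTrack {
    std::vector<std::pair<float, float>> Keys;  // (time, value), sorted by time

    float Eval(float Time) const {
        if (Keys.empty()) return 0.0f;
        if (Time <= Keys.front().first) return Keys.front().second;
        if (Time >= Keys.back().first) return Keys.back().second;
        for (std::size_t i = 1; i < Keys.size(); ++i) {
            if (Time <= Keys[i].first) {
                const float T0 = Keys[i - 1].first, V0 = Keys[i - 1].second;
                const float T1 = Keys[i].first,     V1 = Keys[i].second;
                const float Alpha = (Time - T0) / (T1 - T0);
                return V0 + Alpha * (V1 - V0);  // linear interpolation
            }
        }
        return Keys.back().second;  // unreachable when Keys is sorted
    }
};
```

Each tick you would feed `Eval(PlaybackTime)` into the material's scalar parameter so the waveform pulses roughly in step with the dialog.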

There is also an experimental plugin that Marc Audy made, a realtime FFT visualizer for audio that allows the FFT analysis to drive floats/vectors, but it is unsupported and carries a pretty significant performance cost.
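
For reference, the heart of such a visualizer is a per-window frequency spectrum: magnitudes per frequency bin for a block of samples. This stand-alone sketch (not the plugin's API) computes them with a naive O(N²) DFT; a real implementation uses an FFT, and running the analysis every frame is why it is comparatively expensive:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Compute the magnitude of each frequency bin (up to Nyquist) for one
// window of audio samples. Bin K's magnitude tells you how much energy
// the window has at frequency K * SampleRate / N -- the values a
// visualizer would map onto bars or material parameters.
std::vector<float> SpectrumMagnitudes(const std::vector<float>& Samples) {
    const std::size_t N = Samples.size();
    const double Pi = 3.14159265358979323846;
    std::vector<float> Mags(N / 2, 0.0f);
    for (std::size_t K = 0; K < N / 2; ++K) {
        double Re = 0.0, Im = 0.0;
        for (std::size_t n = 0; n < N; ++n) {
            const double Angle =
                2.0 * Pi * static_cast<double>(K * n) / static_cast<double>(N);
            Re += Samples[n] * std::cos(Angle);
            Im -= Samples[n] * std::sin(Angle);
        }
        Mags[K] = static_cast<float>(std::sqrt(Re * Re + Im * Im));
    }
    return Mags;
}
```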

Hope this helps-
Zak

Great news about the new audio engine!

Will it have listeners, or “microphones”, that I can place elsewhere, instead of always listening through the camera?

I want to create a Doppler effect for projectiles, but since the camera is quite far from the pawn, the Doppler effect is nonexistent.
To get it, I need to add the pawn-to-camera offset to every sound location. It would be great if I could offset the microphone instead.
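
That workaround is just vector math: play each sound shifted by the pawn-to-camera offset so that, heard from the camera-based listener, it is positioned as it would be heard from the pawn. A stand-alone sketch with a placeholder `Vec3` in place of Unreal's `FVector`:

```cpp
#include <cassert>

// Plain stand-in for Unreal's FVector (illustrative only).
struct Vec3 { float X, Y, Z; };

Vec3 Add(Vec3 A, Vec3 B) { return {A.X + B.X, A.Y + B.Y, A.Z + B.Z}; }
Vec3 Sub(Vec3 A, Vec3 B) { return {A.X - B.X, A.Y - B.Y, A.Z - B.Z}; }

// Shift a sound's location so that SoundLoc relative to the pawn equals
// the new location relative to the camera. Applying this to a moving
// projectile preserves the relative velocity that produces the Doppler
// shift the pawn would hear.
Vec3 OffsetSoundLocation(Vec3 SoundLoc, Vec3 PawnLoc, Vec3 CameraLoc) {
    return Add(SoundLoc, Sub(CameraLoc, PawnLoc));
}
```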

It is already possible. Override the GetAudioListenerPosition(…) function in your PlayerController in C++. I also recall it being possible somewhere else, but I don’t remember exactly where.
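
The pattern, with minimal stand-in types so it runs here (the real `APlayerController::GetAudioListenerPosition` fills out-parameters for the listener location and orientation vectors; this simplified, hypothetical version just returns a location):

```cpp
#include <cassert>

// Plain stand-in for Unreal's FVector (illustrative only).
struct Vec3 { float X, Y, Z; };

// Stand-in for APlayerController.
struct PlayerControllerBase {
    Vec3 CameraLocation{0.0f, 0.0f, 0.0f};
    Vec3 PawnLocation{0.0f, 0.0f, 0.0f};

    virtual ~PlayerControllerBase() = default;

    // Default: the audio listener sits at the camera.
    virtual Vec3 GetAudioListenerPosition() const { return CameraLocation; }
};

// Override: listen from the pawn instead, so nearby projectiles have a
// meaningful relative velocity (and hence a Doppler shift).
struct PawnListenerController : PlayerControllerBase {
    Vec3 GetAudioListenerPosition() const override { return PawnLocation; }
};
```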

Problem is that my project is BP only. We do not have a Mac to compile C++ for iOS. This may change soon.
And thanks for this info; when we get a Mac I will know where to look.

I saw this in the source code. Should be available in Blueprint.