I have a robot character design that displays a waveform instead of a mouth, which is pretty common in existing robot designs. I would like this to actually sync up to the spoken audio. Now, asking for an ACTUAL generated waveform might be a bit too much, but something that can detect when audio is playing and just switch a waveform image out (with a panner) would be really neat!
Problem is, I don’t know how to sync audio up with a material. I know this is possible, as Mass Effect did something similar: the Quarian aliens displayed a light in their helmets whenever they talked. That was made in UDK, yes, but the material system seems similar enough that the same thing should be doable here.
How would I go about making a system like this? I would really appreciate the help!