Hi, I’m currently trying to understand how the whole UE4 audio system is structured.
After a lot of code browsing I still have a few questions; I hope someone will be able to answer them, and I may then write a nice wiki page!
From what I gather, AudioDevice is the central component: it maintains the unique lists of active sounds, sources, and listeners. An ActiveSound is essentially sound metadata that may be played several times at once, so it has to be “instantiated” somehow - buffered, etc. - and that is what a WaveInstance is for: on each update the AudioDevice walks through the active sounds, updates their parameters (including the ReverbVolume they are in), and creates or updates a WaveInstance for each of them if they are to be played. A wave instance is then associated with one or more sources.
These AudioSources are then dealt with (e.g. started/stopped). All the details of actually playing a source are handled by the various AudioDevice implementations (XAudio2, etc.)
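To make sure I’m describing the same thing you see in the code, here is rough pseudocode of one update pass as I picture it - the class names are the real ones, but the function and container names (ActiveSounds, UpdateWaveInstances, SortByPriority, StartAndStopSources) are placeholders I made up, not the actual engine API:

```cpp
// Rough pseudocode of my mental model of one audio update -- NOT the real engine code.
void UpdateSketch(bool bGameTicking)
{
	TArray<FWaveInstance*> WaveInstances;

	// 1) Walk every ActiveSound, refresh its parameters (distance/attenuation,
	//    the ReverbVolume it sits in, ...) and let it create or update the
	//    FWaveInstances it needs this frame.
	for (FActiveSound* ActiveSound : ActiveSounds)
	{
		ActiveSound->UpdateWaveInstances(WaveInstances, bGameTicking);
	}

	// 2) Keep only the most important wave instances, since there are far
	//    fewer hardware voices than potential wave instances.
	SortByPriority(WaveInstances);

	// 3) Pair the surviving wave instances with sources and start/stop them;
	//    the platform subclass (FXAudio2Device, etc.) does the low-level work.
	StartAndStopSources(WaveInstances);
}
```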
- Is this correct? I’ve omitted some details (SoundClass/SoundMix) intentionally.
- What about AudioComponent? I understand it’s an easy way to expose a sound to the user and make it easier to manipulate (by attaching it to an Actor), but there seems to be some history here - e.g. I understand that FWaveInstance probably took an AudioComponent at instantiation a while ago.
- Why is there an array of Listeners in AudioDevice? Especially since most of the time nobody bothers and just takes the first one anyway, e.g. Listeners[0].DoSomething()
- If I wanted to implement, say, a spatialisation plug-in, I understand the cleanest way would be to inherit from AudioDevice, handle the sources, and then call the proper hardware abstraction (FXAudio2Device for instance). Is there any other way? (I put a rough sketch of what I have in mind at the bottom of this post.)
- How much of the above is likely to change in the coming months, with audio threading and such?
- Occlusion is handled using a simple ray-trace test between each ActiveSound and the listener, is that correct? (A quick sketch of the check I imagine is just below.)
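For the occlusion question, this is roughly the check I imagine - a single blocking line trace on the visibility channel between the sound and the listener. The exact trace function depends on the engine version, and the surrounding setup (where the world and positions come from) is obviously simplified:

```cpp
#include "Engine/World.h"

// Illustrative only: a single blocking line trace between the sound and the
// listener, which is what I assume "simple raytracing test" means here.
bool IsOccludedSketch(UWorld* World, const FVector& SoundLocation, const FVector& ListenerLocation)
{
	FCollisionQueryParams Params(FName(TEXT("SoundOcclusionSketch")), /*bTraceComplex=*/ true);

	// True if anything on the visibility channel blocks the segment between
	// the sound and the listener.
	return World->LineTraceTestByChannel(SoundLocation, ListenerLocation, ECC_Visibility, Params);
}
```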
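And for the spatialisation question, this is the kind of structure I had in mind - to be clear, FMySpatialAudioDevice and the hook it declares are entirely made up by me; I only want to check whether subclassing an existing device is the intended extension point (module/include setup omitted):

```cpp
// Entirely hypothetical sketch: the class and the SpatializeWaveInstance()
// hook are names I invented to illustrate the idea of piggy-backing on an
// existing hardware device (here FXAudio2Device) and only replacing the
// 3D-positioning step.
class FMySpatialAudioDevice : public FXAudio2Device
{
public:
	// Imagined per-wave-instance hook, called once per update: compute a custom
	// HRTF / panning result from the emitter and listener transforms, then let
	// the base class feed the buffers to the hardware as usual.
	void SpatializeWaveInstance(FWaveInstance* WaveInstance, const FListener& Listener)
	{
		// ... custom direction / distance / filtering computation would go here ...
	}
};
```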
Thanks for your time!