Can I use UE4's audio decompression without playing?

I need to feed PCM data, on demand, to a third party library for mixing. I then get back the mixed output and want to pass that to UE4 for playback.

The latter I’m doing with a procedural USoundWave subclass; I have no idea if that’s a good approach, but it works.

Currently, though, I’m feeding the data manually from files, whereas I really need to be able to use imported sound wave assets as the source and have UE4 stream/decode them on request. Is this possible? Can someone outline the basic process? With no background in audio coding, I’m completely lost among all the device/buffer/source/active sound classes.

Essentially, I want to say to UE4: Here’s a USoundWave, seek to this point, give me some PCM data, then give me some more PCM data, etc.
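For illustration, the kind of interface being asked for might look like the sketch below. This is a hypothetical API, not anything that exists in UE4; the names (`IOnDemandDecoder`, `SeekToFrame`, `GetPcm`) are invented purely to pin down the request:

```cpp
#include <cstdint>

// Hypothetical interface sketch -- NOT an existing UE4 API, just an
// illustration of the decode-on-demand usage being asked for.
class IOnDemandDecoder
{
public:
    virtual ~IOnDemandDecoder() = default;

    // Seek to a playback position in frames (samples per channel).
    virtual void SeekToFrame(uint32_t FrameIndex) = 0;

    // Decode the next NumFrames of PCM into OutPcm (interleaved int16).
    // Returns the number of frames actually produced (0 at end of asset).
    virtual uint32_t GetPcm(int16_t* OutPcm, uint32_t NumFrames) = 0;
};
```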

PCM data is decoded and submitted to our audio buffers in realtime using the async decoders. Check out FAsyncRealtimeAudioTaskWorker and the ERealtimeAudioTaskType::Decompress task type to see how we decode compressed audio data.

Unfortunately, it’s a bit difficult to describe in detail how the whole thing works from top to bottom, but I’ll try to outline it:

  1. A sound is requested to play
  2. If the sound is below a duration threshold (defined in a SoundGroup), it fully decodes the entire file into memory (or pulls it from a cache of audio already decoded at map load).
  3. If the sound is above a duration threshold, then it does “real time decompression” on the loaded compressed asset.
  4. The realtime decompression uses async workers (FAsyncRealtimeAudioTaskWorker) to decode portions of the compressed asset.
  5. Depending on the platform, the decoded audio chunks are fed to the playing source voice as queued decoded chunks. In XAudio2, the voice itself performs a callback when a buffer finishes playing. In that callback, we consume a decoded buffer from the voice’s async task worker, submit it to the XAudio2 voice, then kick off another async task to generate the next buffer.
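The buffer-callback loop in step 5 can be sketched in plain, standalone C++. Here `std::async` stands in for FAsyncRealtimeAudioTaskWorker, a fake `DecodeChunk` stands in for the codec, and appending to an output vector stands in for submitting a buffer to the XAudio2 voice; none of this is the engine’s actual code, just the double-buffering pattern:

```cpp
#include <cstdint>
#include <future>
#include <vector>

// Stand-in for the codec: decodes chunk N of a compressed asset into PCM.
// Returns an empty vector when the asset is exhausted.
static std::vector<int16_t> DecodeChunk(int Chunk, int TotalChunks)
{
    if (Chunk >= TotalChunks)
        return {};
    return std::vector<int16_t>(256, static_cast<int16_t>(Chunk));
}

// Double-buffered realtime decode loop: while one decoded buffer is being
// "played" (here: appended to Output), the next one is already decoding on
// an async worker. XAudio2's buffer-end callback corresponds to the top of
// this loop.
static std::vector<int16_t> PlayWithAsyncDecode(int TotalChunks)
{
    std::vector<int16_t> Output;
    int Chunk = 0;
    // Kick off the first decode task before playback starts.
    auto Pending = std::async(std::launch::async, DecodeChunk, Chunk++, TotalChunks);
    for (;;)
    {
        // "Buffer finished playing" callback: consume the decoded buffer...
        std::vector<int16_t> Decoded = Pending.get();
        if (Decoded.empty())
            break; // end of asset
        // ...kick off the next async decode before submitting this one...
        Pending = std::async(std::launch::async, DecodeChunk, Chunk++, TotalChunks);
        // ...then submit the decoded buffer to the voice (here: collect it).
        Output.insert(Output.end(), Decoded.begin(), Decoded.end());
    }
    return Output;
}
```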

Extracting that code to use in a different procedural sound wave system will not be an easy task but is doable.

A high level outline was exactly what I wanted, I’m starting to understand the code a lot better now. Much appreciated!

Is the procedural USoundWave a reasonable approach for feeding the mixed output back into the UE4 audio system for playback? Currently I have a custom UAudioComponent class which references the source sound assets, takes care of setting up the procedural wave object, and then assigns it to its Sound property. If there’s some flaw in that setup I’m overlooking, please let me know. Regardless, thanks for your time.
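Conceptually, the procedural-wave path is just a thread-safe FIFO of PCM: the mixer thread pushes samples in, the audio render thread pulls them out on demand. Below is a minimal standalone model of such a queue; it is illustrative only (the engine’s procedural sound wave has its own implementation, and the class here is invented):

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>

// Minimal model of a procedural sound wave's audio queue: the mixer thread
// pushes interleaved PCM, the audio render thread pops it.
// (Illustrative only -- not the engine's implementation.)
class PcmQueue
{
public:
    void Push(const int16_t* Data, size_t NumSamples)
    {
        std::lock_guard<std::mutex> Lock(Mutex);
        Queue.insert(Queue.end(), Data, Data + NumSamples);
    }

    // Pops up to NumSamples; zero-fills the remainder on underrun so the
    // device always receives a full buffer. Returns samples actually popped.
    size_t Pop(int16_t* Out, size_t NumSamples)
    {
        std::lock_guard<std::mutex> Lock(Mutex);
        size_t Avail = Queue.size() < NumSamples ? Queue.size() : NumSamples;
        for (size_t i = 0; i < Avail; ++i)
        {
            Out[i] = Queue.front();
            Queue.pop_front();
        }
        for (size_t i = Avail; i < NumSamples; ++i)
            Out[i] = 0; // underrun: pad with silence
        return Avail;
    }

private:
    std::mutex Mutex;
    std::deque<int16_t> Queue;
};
```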

The only thing I can think of is that a procedural sound wave is not going to be able to mix to a surround output. I haven’t tried it, but I think you can make a 2D stereo procedural sound; you’ll have trouble if you want to do a surround-sound procedural sound.
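For the stereo case, the PCM handed to the procedural wave just needs to be in interleaved L/R frame order. A quick sketch of interleaving two mono buffers into that layout (generic C++, not engine code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Interleave two mono PCM buffers into the frame layout a 2-channel
// stream expects: L0, R0, L1, R1, ...
// Truncates to the shorter input if the lengths differ.
static std::vector<int16_t> InterleaveStereo(const std::vector<int16_t>& Left,
                                             const std::vector<int16_t>& Right)
{
    const size_t NumFrames = Left.size() < Right.size() ? Left.size() : Right.size();
    std::vector<int16_t> Out(NumFrames * 2);
    for (size_t i = 0; i < NumFrames; ++i)
    {
        Out[2 * i]     = Left[i];  // left channel sample
        Out[2 * i + 1] = Right[i]; // right channel sample
    }
    return Out;
}
```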

Incidentally, this will be significantly easier with the audio mixer module I am currently working on (in a dev stream not visible to the public yet). The audio mixer module will perform all mixing in platform-independent code and implement a much lower-level device interface, which is somewhat similar to one massive procedural sound (it feeds an N-channel output audio stream directly to the hardware audio device). You could just implement your wrapper around the 3rd-party mixer as an implementation of the audio mixer interface. I’m hoping to ship an early preview version of the audio mixer in 4.14 (I missed the 4.13 cutoff).