Yes, this is possible, though it is a fair amount of work. The problem with sound assets is that the engine is hard-coded to expect OGG data in many places dealing with audio. I know this because I'm making a FLAC importer, and it contains a few too many workarounds that shouldn't be necessary, for my taste.
That said, dealing with raw PCM data will be easier than trying to replace the compressor entirely.
A general approach you could take is the following:
- Create a plugin with two modules: one runtime module for audio processing, and one editor module for asset importing. See the SoundMod plugin in Engine\Plugins\Runtime\SoundMod for a great example that does exactly this.
- Your asset import factory must declare its import type as USoundWave::StaticClass() and its file extension as ".wav". Essentially you can just copy the whole USoundFactory and/or USoundSurroundFactory from EditorFactories.cpp for this (that's the default .WAV importer); there is also a rough factory sketch after this list. The FactoryCreateBinary() method is what creates a USoundWave object and copies the PCM data to it. One important thing is that you must set the UFactory's ImportPriority to some value higher than DefaultImportPriority. This ensures that your importer will be used instead of the default one when you import a .WAV file.
- For the runtime module, I suggest making a UObject class that inherits from USoundWave, so you'll have, say, a 'UPCMSoundWave' class (sketched after this list). The PCM data will be contained in the RawData member variable. There are two ways that I can think of to feed the samples to the audio device:
  - Set CompressionName to 'NAME_None' and ResourceData to nullptr so that the DecompressionType will be set to DTYPE_Native by FAudioDevice during initialization. This way the audio device will skip decompressing the sound wave as OGG and use the raw PCM data instead.
  - Set CompressionName to anything other than OGG, set bProcedural to true and override USoundWave::GeneratePCMData(). GeneratePCMData() will be called by the engine to request samples for playback. You can take a look at USoundWaveProcedural to see how it works (but don't subclass it, because it is transient and doesn't retain any data). In your case the function would simply copy the samples you already have to the output buffer provided by the device.
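To make the factory part a bit more concrete, here is a rough sketch of what it could look like. The class and file names (UPCMSoundWaveFactory, PCMSoundWave.h, UPCMSoundWave) are made up, WAV chunk parsing is omitted, and you should crib the real details from USoundFactory in EditorFactories.cpp:

```cpp
// PCMSoundWaveFactory.h (editor module) -- hypothetical names, modeled on USoundFactory
#pragma once

#include "Factories/Factory.h"
#include "PCMSoundWave.h"              // the runtime class sketched further below
#include "PCMSoundWaveFactory.generated.h"

UCLASS()
class UPCMSoundWaveFactory : public UFactory
{
	GENERATED_BODY()

public:
	UPCMSoundWaveFactory(const FObjectInitializer& ObjectInitializer)
		: Super(ObjectInitializer)
	{
		SupportedClass = USoundWave::StaticClass();   // we produce sound wave assets
		Formats.Add(TEXT("wav;Wave audio file"));     // claim the .wav extension
		bCreateNew = false;
		bEditorImport = true;

		// Higher than the default so this factory wins over the stock
		// USoundFactory when a .wav file is imported.
		ImportPriority = DefaultImportPriority + 1;
	}

	virtual UObject* FactoryCreateBinary(UClass* InClass, UObject* InParent, FName InName,
		EObjectFlags Flags, UObject* Context, const TCHAR* Type,
		const uint8*& Buffer, const uint8* BufferEnd, FFeedbackContext* Warn) override
	{
		// Create the custom sound wave and stash the PCM in RawData, same pattern
		// as USoundFactory. For brevity this copies the whole file; in practice you
		// would parse the WAV chunks first and copy only the 'data' payload, plus
		// fill in SampleRate, NumChannels, Duration, etc.
		UPCMSoundWave* Sound = NewObject<UPCMSoundWave>(InParent, InName, Flags);

		const int32 DataSize = BufferEnd - Buffer;
		Sound->RawData.Lock(LOCK_READ_WRITE);
		void* LockedData = Sound->RawData.Realloc(DataSize);
		FMemory::Memcpy(LockedData, Buffer, DataSize);
		Sound->RawData.Unlock();

		return Sound;
	}
};
```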
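And a sketch of the runtime side using the second (procedural) route. Again, UPCMSoundWave is just a name I made up; it assumes 16-bit PCM that was already stripped of its WAV header at import time, that RawData is resident when playback starts, and it leaves out bookkeeping like resetting the playback position when a new sound starts:

```cpp
// PCMSoundWave.h (runtime module) -- hypothetical sketch of the procedural approach
#pragma once

#include "Sound/SoundWave.h"
#include "PCMSoundWave.generated.h"

UCLASS()
class UPCMSoundWave : public USoundWave
{
	GENERATED_BODY()

public:
	UPCMSoundWave(const FObjectInitializer& ObjectInitializer)
		: Super(ObjectInitializer)
	{
		bProcedural = true;   // make the engine call GeneratePCMData() instead of decompressing
	}

	// Called by the audio device whenever it needs more samples.
	virtual int32 GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded) override;

	// Guards against the base class (de)compressing or freeing the PCM data;
	// see the sketch further below.
	virtual void Serialize(FArchive& Ar) override;
	virtual void PostLoad() override;

private:
	int32 PlaybackOffset = 0;   // current read position into RawData, in bytes
};
```

```cpp
// PCMSoundWave.cpp -- copy the requested samples straight out of RawData
#include "PCMSoundWave.h"

int32 UPCMSoundWave::GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded)
{
	const int32 BytesRequested = SamplesNeeded * sizeof(int16);   // 16-bit PCM assumed
	const uint8* Samples = static_cast<const uint8*>(RawData.LockReadOnly());
	const int32 TotalBytes = RawData.GetBulkDataSize();

	const int32 BytesToCopy = FMath::Min(BytesRequested, TotalBytes - PlaybackOffset);
	if (BytesToCopy > 0)
	{
		FMemory::Memcpy(PCMData, Samples + PlaybackOffset, BytesToCopy);
		PlaybackOffset += BytesToCopy;
	}

	RawData.Unlock();
	return BytesToCopy;   // bytes actually written; fewer than requested eventually ends playback
}
```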
Either way will work, so pick whichever you like. The main problem with subclassing USoundWave, however, isn't feeding the samples to the audio device; it's stopping USoundWave from stomping all over your data and/or constantly trying to compress and decompress your stuff as OGG. So you'll have to override a lot of USoundWave methods to keep your PCM data safe from deletion and (de)compression. Basically you'll just have to comb through the USoundWave code and override anything you find related to loading, serialization and resource management.
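To give an idea of what that combing could end up looking like, here is a starting point for the two overrides declared in the header sketch above. This is only an illustration, not a verified implementation: which base calls are safe to skip varies between engine versions, so check it against the USoundWave source you are actually building against:

```cpp
// PCMSoundWave.cpp (continued) -- hypothetical "keep the data safe" overrides
#include "PCMSoundWave.h"

void UPCMSoundWave::Serialize(FArchive& Ar)
{
	// Replicate only what we need from USoundWave::Serialize: the sound-base data
	// plus the raw PCM bulk data, skipping the cooked/compressed format containers
	// that would otherwise be (re)built as OGG.
	USoundBase::Serialize(Ar);
	RawData.Serialize(Ar, this);
}

void UPCMSoundWave::PostLoad()
{
	// Deliberately skip USoundWave::PostLoad(), which caches cooked platform data
	// and can kick off compression work; go straight to the USoundBase level.
	USoundBase::PostLoad();
}
```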
The SANE way to do this would of course be to implement an FAsyncAudioDecompress class that can deal with whatever type of audio format you want, replacing the OGG decompressor. But unfortunately USoundWave’s decompressor instance constantly gets deleted and recreated by the audio device buffer, and there is no way to tell it which decompressor to use. So that’s a dead end. The same goes for making a custom compressor: you can create an IAudioFormat module perfectly fine, but letting the engine know about it is another story.