How to Reliably Access USoundWave PCM Data (with or without FAsyncAudioDecompress)

I have a bunch of Sound Waves (mono, 16 bit, 16k sample rate), and I need to access the PCM data of these Sound Waves in C++.

I did some looking around, but couldn’t really find any way to do this, or even any recent discussion on this (see references below).

The USoundWave.RawPCMData and USoundWave.RawPCMDataSize fields are exactly what I need; the problem is that they are often nullptr and 0 even when the Sound Wave contains audio. Unfortunately, there also doesn't seem to be any getter for the PCM data that would handle preparing that data for you.

I got excited when I saw the suggestion to use the FAudioThumbnail::GenerateWaveformPreview method, which uses FAsyncAudioDecompress (in AudioTrackEditor.cpp). The example first checks whether decompression is required, decompresses if necessary, and then accesses the raw PCM data - exactly what I need! Unfortunately, when I tried to implement this it seems to do nothing, and I'm still getting nullptr and 0 for the raw PCM data. Am I doing something wrong? How can I get this working? (My implementation is below.)

void UMyClass::UseCannedAudioData(USoundWave* CannedAudio)
{
	if (CannedAudio->RawPCMData == nullptr) {
		FAsyncAudioDecompress TempDecompress(CannedAudio);
		TempDecompress.StartSynchronousTask();
	}
	UE_LOG(LogTemp, Warning, TEXT("UseCannedAudioData, RawPCMDataSize: %d"), CannedAudio->RawPCMDataSize);

	TArray<uint8> AudioBuffer(CannedAudio->RawPCMData, CannedAudio->RawPCMDataSize);
	VoiceData = AudioBuffer;
	UE_LOG(LogTemp, Warning, TEXT("UseCannedAudioData, voice data size: %d"), VoiceData.Num());
}

Also, in case I can’t get this working, is there any other way to access the audio data? It seems like this shouldn’t be so difficult, but I’ve been stuck on it for days. Please help!

References:

Post from 2015 saying there is no way to do this:

Post from 2014 forums, saying to use the FAudioThumbnail::GenerateWaveformPreview method:
https://forums.unrealengine.com/development-discussion/c-gameplay-programming/19191-accessing-soundwave-pcm-data-from-c-code

Same post but on answer hub instead, explaining the unreliability of USoundWave.RawPCMData:


Update: I did a verbatim copy of the FAudioThumbnail::GenerateWaveformPreview method, and it seems to have worked!

I’m happy - but confused, and have new questions: (see code implementation below for reference)

  • Why is the “AudioDevice->StopAllSounds(true)” line necessary? Stopping all audio everywhere whenever raw PCM data needs to be accessed is very disruptive.
  • The first time I ran this, it hung for about 20 sec (as expected - it's doing the decompress on the main thread and needs time), then it worked, but I got a significant number of errors in the log starting with “[2019.03.04-22.02.43:341][597]LogOutputDevice: Error: Ensure condition failed: false [File:D:\Build++UE4\Sync\Engine\Source\Runtime\Engine\Classes\Sound/SoundWave.h] [Line: 460]”. The following times it ran it continued to work, I didn’t get the errors again, and it didn’t hang again. What caused these errors and why did they mysteriously disappear?
  • On those subsequent runs, how was it decompressing the audio instantaneously (it’s over 1 min long), and without blocking the main thread? I shut down the editor and reset the project - still no hang. Is the decompressed audio being cached and saved to disk somewhere? Will this behavior be the same both in the editor and in the final live version of the game? I’m not sure if I need to create an async task for this or not…

void UMyClass::UseCannedAudioData(USoundWave* CannedAudio)
{
	if (CannedAudio->RawPCMData == NULL)
	{
		// @todo Sequencer optimize - We might want to generate the data when we generate the texture
		// and then discard the data afterwards, though that might be a perf hit traded for better memory usage
		FAudioDevice* AudioDevice = GEngine->GetMainAudioDevice();
		if (AudioDevice)
		{
			AudioDevice->StopAllSounds(true);

			EDecompressionType DecompressionType = CannedAudio->DecompressionType;
			CannedAudio->DecompressionType = DTYPE_Native;

			if (CannedAudio->InitAudioResource(AudioDevice->GetRuntimeFormat(CannedAudio)) && (CannedAudio->DecompressionType != DTYPE_RealTime || CannedAudio->CachedRealtimeFirstBuffer == nullptr))
			{
				FAsyncAudioDecompress TempDecompress(CannedAudio);
				TempDecompress.StartSynchronousTask();
			}

			CannedAudio->DecompressionType = DecompressionType;
		}
	}

	UE_LOG(LogTemp, Warning, TEXT("UseCannedAudioData, RawPCMDataSize: %d"), CannedAudio->RawPCMDataSize);

	TArray<uint8> AudioBuffer(CannedAudio->RawPCMData, CannedAudio->RawPCMDataSize);
	VoiceData = AudioBuffer;
	UE_LOG(LogTemp, Warning, TEXT("UseCannedAudioData, voice data size: %d"), VoiceData.Num());
}

You're misunderstanding the hang: it's an ensure being triggered. An ensure works like an assert, but instead of crashing it dumps memory (which is what hangs the editor) and writes a crash report to the log. If you hit an ensure or assert failure, look at the code location it gives you to understand what's wrong.

That ensure is triggered because the unimplemented GeneratePCMData is being called; it was originally made for procedural sound in the old sound system. There's also a note there:

/** 
 * This is only used for DTYPE_Procedural audio. It's recommended to use USynthComponent base class
 * for procedurally generated sound vs overriding this function. If a new component is not feasible,
 * consider using USoundWaveProcedural base class vs USoundWave base class since as it implements
 * GeneratePCMData for you and you only need to return PCM data.
 */
virtual int32 GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded) { ensure(false); return 0; }

Along with the failed condition you should have a call stack showing what exactly is calling that function.

I'm actually not sure how ensure behaves in a shipping build - you'll need to check that yourself - but I suspect it may not hang.

AudioDevice->StopAllSounds(true) is not needed as far as I know; I've used this without it and it worked fine.


Thank you!

Hi there,

It took me two days to find a working solution, so in case someone else bumps into this problem I thought it would be a good idea to share it. Here's a workaround to access the PCM data.
I needed to get the data from a mic capture.
The capture is done by a UAudioCaptureComponent sending its signal to a USoundSubmix:
BotAudioCaptureComponent->SetSubmixSend(BotSoundSubmix, 10.f);
The idea is then to send the result into a USoundWave when stopping the recording.
In principle this is done with the USoundSubmix::StopRecordingOutput function.
Unfortunately, that function doesn't do the job, because:
1/ when in PIE it creates a SoundWave asset on disk;
2/ in any case (even in a packaged game), it calls Writer.BeginGeneratingSoundWaveFromBuffer with a nullptr instead of our custom USoundWave.

So I created a custom class inheriting from USoundSubmix and modified the StopRecordingOutput function to suit my needs.

Here’s the full code:

void UBotSoundSubmix::BotStopRecordingOutput(const UObject* WorldContextObject, USoundWave* ExistingSoundWaveToOverwrite)
{
	if (!GEngine) return;

	EAudioRecordingExportType ExportType = EAudioRecordingExportType::SoundWave;
	FAudioDevice* AudioDevice = nullptr;

	// Find device for this specific audio recording thing.
	if (UWorld* ThisWorld = GEngine->GetWorldFromContextObject(WorldContextObject, EGetWorldErrorMode::LogAndReturnNull))
	{
		AudioDevice = ThisWorld->GetAudioDeviceRaw();
	}

	if (!AudioDevice) return;

	float SampleRate;
	float ChannelCount;

	Audio::AlignedFloatBuffer& RecordedBuffer = AudioDevice->StopRecording(this, ChannelCount, SampleRate);

	// This occurs when Stop Recording Output is called when Start Recording Output was not called.
	if (RecordedBuffer.Num() == 0)
	{
		return;
	}

	// Pack output data into DSPSampleBuffer and record it out!
	RecordingData.Reset(new Audio::FAudioRecordingData());

	RecordingData->InputBuffer = Audio::TSampleBuffer<int16>(RecordedBuffer, ChannelCount, SampleRate);

	RecordingData->Writer.BeginGeneratingSoundWaveFromBuffer(RecordingData->InputBuffer, ExistingSoundWaveToOverwrite, [this](const USoundWave* Result)
		{
			if (OnSubmixRecordedFileDone.IsBound())
			{
				OnSubmixRecordedFileDone.Broadcast(Result);
			}
		});
}

It's basically a copy-paste plus modification of the original USoundSubmix::StopRecordingOutput.
The only essential difference is that I now call Writer.BeginGeneratingSoundWaveFromBuffer with ExistingSoundWaveToOverwrite instead of nullptr.
After that, all the PCM data can be found as expected in ExistingSoundWaveToOverwrite.
I call it like this:
BotSoundSubmix->BotStopRecordingOutput((), BotSoundWave);
And I can access BotSoundWave->RawPCMData and BotSoundWave->RawPCMDataSize.

Cheers
Cedric

I am getting warnings and errors when recording audio in-game related to PCM data. I am using a Blueprint project and probably can’t change the code like you did. I am wondering if your solution is related to my errors and if there is some work-around.

Warning: LPCM data failed to load for sound SoundWave /Game/Audio/RecordedSoundWaves/test.test

Warning: Can’t cook SoundWave /Game/Audio/RecordedSoundWaves/test.test because there is no source LPCM data

Failed to retrieve compressed data for format BINKA and soundwave /Game/Audio/RecordedSoundWaves/test.test.

Failed to build BINKA derived data for /Game/Audio/RecordedSoundWaves/test.test

LogAudioDerivedData: Display: No streamed audio chunks found!

Warning: Failed to seek decoder input during initialization: (format:BINKA) for wave (package:/Game/Audio/RecordedSoundWaves/test) to time ‘0.000000’

Error: Failed to find codec (format:BINKA) for wave

Does anyone have a working solution for UE5? None of the posted solutions work anymore.
I tried:

bool InitFromSoundWave(USoundWave* SoundWave)
	{
		if (SoundWave->bProcedural)
		{
			return false;
		}

		if (SoundWave->RawPCMData == NULL)
		{
			FAudioDevice* AudioDevice = GEngine->GetMainAudioDeviceRaw();
			if (AudioDevice)
			{
				AudioDevice->StopAllSounds(true);

				EDecompressionType DecompressionType = SoundWave->DecompressionType;
				SoundWave->DecompressionType = DTYPE_Native;
        
				if (SoundWave->InitAudioResource(AudioDevice->GetRuntimeFormat(SoundWave)) &&
					(SoundWave->DecompressionType != DTYPE_RealTime || SoundWave->CachedRealtimeFirstBuffer == nullptr))
				{
					FAsyncAudioDecompress TempDecompress(SoundWave, 1, AudioDevice);
					TempDecompress.StartSynchronousTask();
				}
        
				SoundWave->DecompressionType = DecompressionType;
			}
		}
        
		UE_LOG(LogTemp, Warning, TEXT("RawPCMDataSize: %d"), SoundWave->RawPCMDataSize);

which doesn't work anymore. I also tried this:


	// This struct contains information about the sound buffer.
	struct SongBufferInfo
	{
		int32 NumChannels;
		float Duration;
		int32 SampleRate;
		int32 RawPCMDataSize;

		SongBufferInfo()
			: NumChannels(0), Duration(0), SampleRate(0), RawPCMDataSize(0)
		{
		}

		SongBufferInfo(int32 PCMDataSize, int32 numChannels, float duration, int32 sampleRate)
			: NumChannels(numChannels), Duration(duration), SampleRate(sampleRate), RawPCMDataSize(PCMDataSize)
		{
		}
	};

	// this struct contains the sound buffer + information about it.
	struct SongBufferData
	{
		TArray<uint8> RawPCMData;
		SongBufferInfo BufferInfo;

		// default to nothing.
		SongBufferData() : SongBufferData(0, 0, 0, 0)
		{
		}

		// allocate memory as we populate the structure.
		SongBufferData(int32 PCMDataSize, int32 numChannels, float duration, int32 sampleRate)
			: BufferInfo(PCMDataSize, numChannels, duration, sampleRate)
		{
			// create the space
			//RawPCMData = (uint8*)FMemory::Malloc(RawPCMDataSize);
			//RawPCMData = new uint8[PCMDataSize];
			RawPCMData.SetNumZeroed(PCMDataSize);
		}
	};

	bool DecompressUSoundWave(USoundWave* soundWave, TSharedPtr<SongBufferData>& Out_SongBufferData)
	{
		FAudioDevice* audioDevice = GEngine ? GEngine->GetMainAudioDeviceRaw() : nullptr;

		if (!audioDevice)
			return false;

		if (!soundWave)
			return false;

		if (soundWave->GetName() == TEXT("None"))
			return false;

		bool breturn = false;

		// erase whatever was previously here.
		Out_SongBufferData = nullptr;

		// ensure we have the sound data. compressed format is fine
		soundWave->InitAudioResource(audioDevice->GetRuntimeFormat(soundWave));

		// create a decoder for this audio. we want the PCM data.
		// (this can return nullptr, e.g. for procedural waves, so check it.)
		ICompressedAudioInfo* AudioInfo = audioDevice->CreateCompressedAudioInfo(soundWave);
		if (!AudioInfo)
			return false;

		// decompress complete audio to this buffer 
		FSoundQualityInfo QualityInfo = {0};
		
		if (AudioInfo->ReadCompressedInfo(soundWave->GetResourceData(), soundWave->GetResourceSize(), &QualityInfo))
		{
			Out_SongBufferData = TSharedPtr<SongBufferData>(new SongBufferData(QualityInfo.SampleDataSize,
			                                                                   QualityInfo.NumChannels,
			                                                                   QualityInfo.Duration,
			                                                                   QualityInfo.SampleRate));

			// Decompress all the sample data into preallocated memory now
			AudioInfo->ExpandFile(Out_SongBufferData->RawPCMData.GetData(), &QualityInfo);

			breturn = true;
		}

		// clean up.
		delete AudioInfo;

		return breturn;
	}

I would be very glad if anyone could shed some light on how this is supposed to be done nowadays…

Some progress: this works in the editor but fails in a cooked game, since

SoundWave->GetCompressedData

returns NULL.

bool InitFromSoundWave(USoundWave* SoundWave)
	{
		if (SoundWave->bProcedural)
		{
			return false;
		}

		if (SoundWave->RawPCMData == NULL)
		{
			FAudioDevice* AudioDevice = GEngine->GetMainAudioDeviceRaw();
			if (AudioDevice)
			{
				EDecompressionType DecompressionType = SoundWave->DecompressionType;
				SoundWave->DecompressionType = DTYPE_Native;

				FName format = AudioDevice->GetRuntimeFormat(SoundWave);

				FByteBulkData* Bulk = SoundWave->GetCompressedData(format, SoundWave->GetPlatformCompressionOverridesForCurrentPlatform());
				if (Bulk)
				{
					SoundWave->InitAudioResource(*Bulk);
	
					if (SoundWave->DecompressionType != DTYPE_RealTime || SoundWave->CachedRealtimeFirstBuffer == nullptr)
					{
						FAsyncAudioDecompress TempDecompress(SoundWave, 128, AudioDevice);
						TempDecompress.StartSynchronousTask();
					}
				}

				SoundWave->DecompressionType = DecompressionType;
			}
		}

		UE_LOG(LogTemp, Warning, TEXT("RawPCMDataSize: %d"), SoundWave->RawPCMDataSize);

		return InitFromShortSamples(TArrayView<SHORT>((SHORT*)SoundWave->RawPCMData, SoundWave->RawPCMDataSize / 2),
			SoundWave->GetSampleRateForCurrentPlatform(), SoundWave->NumChannels);
	}

Solved, thanks to Max Hayes.
The data wasn't available because it was being loaded into the Audio Streaming Cache;
setting the SoundWave's Loading Behavior Override to ForceInline fixed it.
