    Loading .WAV at runtime to a USoundWave

    I am having some issues with the following function. I originally found this function on a post from Frisco, and decided to try and debug the same problem, but I have come up empty.

    THE FUNCTION: I am trying to load a .wav file into a USoundWave at runtime. The user should click a button, select a file from the file dialog, and the audio should be loaded into the application. The user should be able to play and stop the audio at will.


    WHAT SHOULD HAPPEN:
    1) A function loads a raw audio file (.wav) in the form of a TArray&lt;uint8&gt; of byte data.
    2) This data is passed to the function from another function I have that loads the data from an absolute file path using TinyFileDialogs (https://github.com/native-toolkit/tinyfiledialogs).
    3) From there, the data is parsed with this function, and returns the WAV file loaded into a new USoundWave.
    4) AFTER this function, the USoundWave reference is pushed to another component that handles playing the audio.
    5) The audio is played from an audio player component when the user gives the proper input.


    THE ISSUES:
    1) The file is loading, the data is being passed in, and the audio will even play at runtime, but only in the PIE.
    2) When the audio does get loaded, the first time it is played the whole application hangs for a few seconds (I assume this is while the audio is being decompressed).
    3) When I package the build, the audio doesn't play. The settings for the SoundWave inside the function are being set (verified by debugging), but either the audio is not being set to the local variable of the AudioPlayer or there are issues with the actual loading of the .wav file.

    [Image: LoadFunciton.JPG]


    ***Take note that the audio is played on a separate component. Here you can see the audio is saved to a USoundWave reference in the settings manager, and the audio manager component pulls from the settings manager to play the audio.***



    Code from the "Open File Dialog" function
    Code:
    FString UFileDialogLibrary::openFileDialog()
    {
        //creates the char const for saving the file
        char const* lTheOpenFileName;
        //the filepath to be returned at end of function
        FString filePath;
        //the types of file filters that can be passed in
        char const* lFilterPatterns[1] = { "*.trf", };
    
        //open the open file dialog
        lTheOpenFileName = tinyfd_openFileDialog(
            "Open File",
            "",
            1,
            lFilterPatterns,
            NULL,
            0);
    
        //if the user cancels the dialog, tinyfd_openFileDialog returns NULL, so bail out early
        if (lTheOpenFileName == nullptr)
        {
            return FString();
        }
    
        //sets filePath to the returned file path so the FString utilities can be used on it
        filePath = FString(lTheOpenFileName);
        //replaces each backslash with the OS-independent forward slash
        filePath.ReplaceInline(TEXT("\\"), TEXT("/"));
    
        //returns the SAFE filePath with corrected slashes
        return filePath;
    }



    Code from the "Return Raw Binary Data" function
    Code:
    TArray<uint8> UFileDialogLibrary::returnRawBinaryData(FString filePath)
    {
        /*
        ---------------------------------
        READING THE OBJECT FROM BINARY
        ---------------------------------
        */
        TArray<uint8> dataArray; //placeholder byte array
        if (!FFileHelper::LoadFileToArray(dataArray, *filePath))  //loads the file's bytes into the array
        {
            UE_LOG(LogTemp, Error, TEXT("Failed to load file: %s"), *filePath);  //the array stays empty on failure
        }
    
        return dataArray;
    
    }

    Code from the "Get Sound Wave from Raw Data" function
    Code:
    USoundWave* UImportAudioToSoundWave::GetSoundWaveFromRawData(TArray<uint8> Bytes)
    {
    
    
        USoundWave* sw = NewObject<USoundWave>(USoundWave::StaticClass());  //creates a new garbage-collected SoundWave object (NewObject does not allocate on the stack)
    
        if (!sw)
        {    
            GEngine->AddOnScreenDebugMessage(-1, 10.0f, FColor::Red, FString::Printf(TEXT("NULLPTR")));
            return nullptr;  //checks to be sure it was created, if not return NullPtr 
        }
    
        TArray<uint8> rawFile;  //creates a local copy of the byte array
        rawFile = Bytes;  //copies the passed-in parameter TArray into the local TArray
    
        FWaveModInfo WaveInfo;  //creates new WaveInfo for the SoundWave, will be passed to it later
    
    
        if (WaveInfo.ReadWaveInfo(rawFile.GetData(), rawFile.Num()))
        {    
            sw->InvalidateCompressedData(); //changes the GUID and flushes all the compressed data
    
            sw->RawData.Lock(LOCK_READ_WRITE);
            FMemory::Memcpy(sw->RawData.Realloc(rawFile.Num()), rawFile.GetData(), rawFile.Num()); 
            sw->RawData.Unlock();
    
    
            int32 DurationDiv = *WaveInfo.pChannels * *WaveInfo.pBitsPerSample * *WaveInfo.pSamplesPerSec;   //calculates the duration divider const
            if (DurationDiv)  //IF duration div is not zero
            {
                sw->Duration = *WaveInfo.pWaveDataSize * 8.0f / DurationDiv;  //sets the USoundWave's Duration to this calculation's result
            }
            else
            {
                sw->Duration = 0.0f;   //otherwise, it sets the duration to 0.0f
            }
    
            sw->SetSampleRate(*WaveInfo.pSamplesPerSec);  //sets the sample rate of the USoundWave
            sw->NumChannels = *WaveInfo.pChannels;  //sets the number of channels of the USoundWave
            sw->RawPCMDataSize = WaveInfo.SampleDataSize;  //sets the rawPCMDataSize of the USoundWave
            sw->SoundGroup = ESoundGroup::SOUNDGROUP_Default;   //sets the sound group of the USoundWave
        }
        else 
        {
    
            return nullptr;
        }
    
        return sw;   //returns the SoundWave; it is garbage-collected, so the caller must hold it in a UPROPERTY (or AddToRoot it) or it can be destroyed
    }
    Director/Owner of Turf 3D Drill
    Marching Band Drill Design Software
    http://www.turf3d.com

    #2
    Excuse me: is your problem solved? I ran into the same problem as you: I read a .wav file from outside the project and created a USoundWave for playback. It works fine when you use the editor's "play" feature and you can hear the sound, but if you package it, you get an error. Can anyone help?

    LogAudio: Error: Attempt to access the DDC when there is none available on sound 'SoundWave /Script/Engine.SoundWave:SoundWave_2147482554', format = OGG. Should have been cooked.
    LogAudio: Error: FVorbisAudioInfo::ReadCompressedInfo, ov_open_callbacks error code: -132
    LogAudio: Error: Failed to parse header for compressed vorbis file.
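    For what it's worth, the first error line is the key one: in a packaged game there is no Derived Data Cache (DDC), so the engine cannot generate compressed OGG data for a USoundWave created at runtime; that only works in the editor. One workaround that is often suggested (shown here only as a sketch, not verified on every engine version; it assumes a successful FWaveModInfo::ReadWaveInfo call as in the code above) is to hand the wave its raw PCM samples directly, so the compressed-data path is never taken:

```cpp
// Sketch, untested here: fill RawPCMData from the parsed WAV so a packaged
// build never asks the (absent) DDC for compressed data.
USoundWave* Sound = NewObject<USoundWave>(USoundWave::StaticClass());

Sound->SetSampleRate(*WaveInfo.pSamplesPerSec);
Sound->NumChannels = *WaveInfo.pChannels;
Sound->Duration = (float)WaveInfo.SampleDataSize /
    (*WaveInfo.pChannels * (*WaveInfo.pBitsPerSample / 8) * *WaveInfo.pSamplesPerSec);
Sound->SoundGroup = ESoundGroup::SOUNDGROUP_Default;

// Copy only the sample data (WaveInfo.SampleDataStart), not the whole file
// with its RIFF header, into the PCM buffer the audio device reads from.
Sound->RawPCMDataSize = WaveInfo.SampleDataSize;
Sound->RawPCMData = (uint8*)FMemory::Malloc(Sound->RawPCMDataSize);
FMemory::Memcpy(Sound->RawPCMData, WaveInfo.SampleDataStart, Sound->RawPCMDataSize);
```

    Also keep a UPROPERTY reference to the returned wave so the garbage collector does not destroy it while it is playing.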

    Comment


      #3
      I agree, this doesn't make much sense.

      You need to fill two buffers, one is for the preview in the editor and one is the asset itself. My guess is that you're filling the preview buffer only.

      Comment


        #4
        This is a bit difficult for me; I don't know how to fill the other buffer, and it sounds a bit strange: does the programmer really need to manage two different destination buffers?
        Originally posted by Stefan Lundmark View Post
        I agree, this doesn't make much sense.

        You need to fill two buffers, one is for the preview in the editor and one is the asset itself. My guess is that you're filling the preview buffer only.

        Comment


          #5
          Yes, that's how it works. One is serialized to disk and the other is used only for the preview; it's never written to disk. What does your code look like?

          Comment


            #6
            Originally posted by Stefan Lundmark View Post
            Yes, that's how it works. One is serialized to disk and the other is used only for the preview; it's never written to disk. What does your code look like?
            void USbinTTSComponent::ReadWaveForPC(FString path, TArray<uint8>& RawSamples, USoundWave*& SoundWave)
            {
                TArray<uint8> FileSamples;
                if (!FFileHelper::LoadFileToArray(FileSamples, *path))
                {
                    UE_LOG(LogTemp, Warning, TEXT("---LoadFileToArray Error---"));
                    GEngine->AddOnScreenDebugMessage(-1, 5.f, FColor::Red, TEXT("LoadFileToArray Error"));
                    return;
                }

                FWaveModInfo WaveInfo;
                FString ErrorMessage;
                if (WaveInfo.ReadWaveInfo(FileSamples.GetData(), FileSamples.Num(), &ErrorMessage))
                {
                    USoundWave* Sound = NewObject<USoundWave>(USoundWave::StaticClass());

                    // Compressed data is now out of date.
                    Sound->InvalidateCompressedData();

                    // If we're a multi-channel file, we're going to spoof the behavior of the SoundSurroundFactory
                    int32 ChannelCount = (int32)*WaveInfo.pChannels;
                    check(ChannelCount > 0);

                    int32 SizeOfSample = (*WaveInfo.pBitsPerSample) / 8;

                    int32 NumSamples = WaveInfo.SampleDataSize / SizeOfSample;
                    int32 NumFrames = NumSamples / ChannelCount;

                    Sound->RawData.Lock(LOCK_READ_WRITE);
                    void* LockedData = Sound->RawData.Realloc(FileSamples.Num());
                    FMemory::Memcpy(LockedData, FileSamples.GetData(), FileSamples.Num());
                    Sound->RawData.Unlock();

                    Sound->Duration = (float)NumFrames / *WaveInfo.pSamplesPerSec;
                    Sound->SetSampleRate(*WaveInfo.pSamplesPerSec);
                    Sound->NumChannels = ChannelCount;
                    Sound->TotalSamples = *WaveInfo.pSamplesPerSec * Sound->Duration;

                    SoundWave = Sound;
                    RawSamples = FileSamples;
                }
                else
                {
                    SoundWave = nullptr;
                    GEngine->AddOnScreenDebugMessage(-1, 5.f, FColor::Red, TEXT("--- ReadWaveInfo Error ---"));
                    UE_LOG(LogTemp, Warning, TEXT("--- ReadWaveInfo Error:>> %s"), *ErrorMessage);
                }
            }

            // This is my code. How do I fill the other buffer?

            Comment


              #7
              Well, I also found this code in reference to the vorbis error we keep getting.

              The below is a snippet of code from an OGG import plugin from this github project: https://github.com/Geromatic/Unreal-OGG/tree/USoundWave

              This code seems to be deprecated as of at least 4.23. I cannot find references to the FVorbisAudioInfo header file or the FSoundQualityInfo header. Not to mention that in this code InSoundWave->SampleRate is being set directly, but according to the documentation SampleRate is a protected variable and cannot be accessed without using SetSampleRate().

              If anyone has any insight into FVorbisAudioInfo, that would be great! I cannot find it in the documentation.



              Code:
               
               bool USoundProcessingLibrary::FillSoundWaveInfo(USoundWave* InSoundWave, TArray<uint8>* InRawFile)
               {
                   // Info Structs
                   FSoundQualityInfo SoundQualityInfo;
                   FVorbisAudioInfo VorbisAudioInfo;
               
                   // Save the Info into SoundQualityInfo
                   if (!VorbisAudioInfo.ReadCompressedInfo(InRawFile->GetData(), InRawFile->Num(), &SoundQualityInfo))
                   {
                       return false;
                   }
               
                   // Fill in all the Data we have
                   InSoundWave->DecompressionType = EDecompressionType::DTYPE_RealTime;
                   InSoundWave->SoundGroup = ESoundGroup::SOUNDGROUP_Default;
                   InSoundWave->NumChannels = SoundQualityInfo.NumChannels;
                   InSoundWave->Duration = SoundQualityInfo.Duration;
                   InSoundWave->RawPCMDataSize = SoundQualityInfo.SampleDataSize;
                   InSoundWave->SampleRate = SoundQualityInfo.SampleRate;
               
                   return true;
               }
              Director/Owner of Turf 3D Drill
              Marching Band Drill Design Software
              http://www.turf3d.com

              Comment


                #8
                So after playing around with some things, I have discovered a few new issues.

                a) The vorbis header file is only for loading OGG files. The issue here is that (from my understanding) UE4 converts everything to OGG at runtime in order to have a consistent audio format for the application.
                b) The code I most recently posted is ONLY for loading raw OGG files, and should not be used here.
                c) I am getting closer to understanding what is happening, because I have gotten this error message during a full crash of Unreal Engine during playback in the PIE:

                Assertion failed: Wave->GetPrecacheState() == ESoundWavePrecacheState::Done [File:D:/Build/++UE4/Sync/Engine/Source/Runtime/Windows/XAudio2/Private/XAudio2Buffer.cpp] [Line: 364]


                Will do some more research on the XAudio2Buffer, but if anyone has some insight that would be great!


                Current Code

                Code:
                if (WaveInfo.ReadWaveInfo(rawFile.GetData(), rawFile.Num()))
                    {    
                            //************************************************
                            //CREATES THE SOUNDWAVE OBJECT AND ENSURES IT IS NOT NULL
                            //************************************************
                    USoundWave* sw = NewObject<USoundWave>(USoundWave::StaticClass());  //creates a new garbage-collected SoundWave object
                            if (!sw)
                            {    
                                UE_LOG(LogTemp, Error, TEXT("There was a nullptr when creating the USoundWave object."));
                                return nullptr;  //checks to be sure it was created, if not return NullPtr 
                            }
                
                
                
                            //************************************************
                            //FILLS THE SOUNDWAVE DATA USING THE USOUNDWAVE NATIVE FUNCTIONS
                            //************************************************
                        int32 DurationDiv = *WaveInfo.pChannels * *WaveInfo.pBitsPerSample * *WaveInfo.pSamplesPerSec;   //calculates the duration divider const
                        if (DurationDiv)  //IF duration div is not zero
                        {
                                //PRE Debug logs
                                UE_LOG(LogTemp, Log, TEXT("SQ NumChannels-> %i"), *WaveInfo.pChannels);
                                UE_LOG(LogTemp, Log, TEXT("SQ Duration-> %f"), *WaveInfo.pWaveDataSize * 8.0f / DurationDiv);
                                UE_LOG(LogTemp, Log, TEXT("SQ RawPCMDataSize-> %i"), WaveInfo.SampleDataSize);
                                UE_LOG(LogTemp, Log, TEXT("SQ SampleRate-> %u"), *WaveInfo.pSamplesPerSec);
                            // Fill in all the Data we have
                            sw->DecompressionType = EDecompressionType::DTYPE_RealTime;
                            sw->SoundGroup = ESoundGroup::SOUNDGROUP_Default;
                            sw->NumChannels = *WaveInfo.pChannels;
                            sw->Duration = *WaveInfo.pWaveDataSize * 8.0f / DurationDiv;
                            sw->RawPCMDataSize = WaveInfo.SampleDataSize;
                            sw->SetSampleRate(*WaveInfo.pSamplesPerSec);
                                //POST Debug logs
                                UE_LOG(LogTemp, Log, TEXT("SW NumChannels-> %i"), sw->NumChannels);
                                UE_LOG(LogTemp, Log, TEXT("SW Duration-> %f"), sw->Duration);
                                UE_LOG(LogTemp, Log, TEXT("SW RawPCMDataSize-> %i"), sw->RawPCMDataSize);
                                UE_LOG(LogTemp, Log, TEXT("SW SampleRate-> %u"), *WaveInfo.pSamplesPerSec);  //SampleRate is protected, so log the value passed to SetSampleRate() rather than poking the __PPO__ offset macro
                        }
                        else
                        {
                            UE_LOG(LogTemp, Error, TEXT("There was an error reading data from WaveInfo. Duration Div Error."));
                            return nullptr;
                        }            
                
                            //************************************************
                            //INVALIDATES COMPRESSED DATA, AND WRITES THE RAW DATA TO THE RAW PCM DATA OF THE NEW SOUNDWAVE
                            //************************************************
                        sw->InvalidateCompressedData(); //changes the GUID and flushes all the compressed data
                        sw->RawData.Lock(LOCK_READ_WRITE);
                        FMemory::Memcpy(sw->RawData.Realloc(rawFile.Num()), rawFile.GetData(), rawFile.Num()); 
                        sw->RawData.Unlock();
                
                        return sw;
                    }
                    else 
                    {
                        return nullptr;
                    } 
                }
                Director/Owner of Turf 3D Drill
                Marching Band Drill Design Software
                http://www.turf3d.com

                Comment


                  #9
                  After debugging, I found that setting the decompression type is the cause of this error:

                  ERROR: Assertion failed: Wave->GetPrecacheState() == ESoundWavePrecacheState::Done [File:D:/Build/++UE4/Sync/Engine/Source/Runtime/Windows/XAudio2/Private/XAudio2Buffer.cpp] [Line: 364]

                  Code:
                  //editing out this code has seemed to stop causing the error
                  
                  sw->DecompressionType = EDecompressionType::DTYPE_RealTime;
                  Director/Owner of Turf 3D Drill
                  Marching Band Drill Design Software
                  http://www.turf3d.com

                  Comment


                    #10
                    I am referring to the SoundWave code created in SoundFactory.cpp. From the output log, the error seems to be caused by the fact that the SoundWave I created does not have the correct encoding format, causing the engine to process it as OGG. I also tried using the USoundWaveProcedural type, which lets me hear the sound, but there is another problem that bothers me: it causes the OnAudioFinished callback to never be triggered, and OnAudioPlaybackPercent always returns 0. This problem has been troubling me for many days, and I have been unable to find a solution. From this point of view, Unity has done a better job: clear documentation and a large number of examples.

                    Comment


                      #11
                      Originally posted by Stefan Lundmark View Post
                      I agree, this doesn't make much sense.

                      You need to fill two buffers, one is for the preview in the editor and one is the asset itself. My guess is that you're filling the preview buffer only.
                      Hello, may I ask what you mean by filling the other buffer? Can you give me more tips or related links? Thank you very much for your help.

                      Comment


                        #12
                        Hi WheezyDaStarfish, can you share how you linked against tinyfiledialogs? I'm trying to do the same and get a bunch of compiler errors...

                        Comment


                          #13
                          I am working on voice chat via Steamworks, but I have run into trouble: I have the voice data, but I don't know how to play it using a sound wave. Can anybody help? I am not good at C++; I just thought this forum could help me.
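                          A common approach for streamed voice (shown here only as a sketch; DecompressedVoice and VoiceSampleRate are placeholder names for the 16-bit PCM buffer and sample rate your Steamworks decompression call returns) is USoundWaveProcedural, which lets you queue raw PCM and play it through a normal UAudioComponent. Note that post #10 above reports OnAudioFinished does not fire for procedural waves:

```cpp
// Sketch, untested here. DecompressedVoice (TArray<uint8>) and
// VoiceSampleRate are placeholders for the output of the Steamworks
// voice decompression call.
USoundWaveProcedural* VoiceWave = NewObject<USoundWaveProcedural>();
VoiceWave->SetSampleRate(VoiceSampleRate);
VoiceWave->NumChannels = 1;                           // Steam voice is mono
VoiceWave->Duration = INDEFINITELY_LOOPING_DURATION;  // a stream, not a fixed clip
VoiceWave->SoundGroup = ESoundGroup::SOUNDGROUP_Voice;
VoiceWave->bLooping = false;

// Queue each decompressed packet as it arrives; playback consumes the queue.
VoiceWave->QueueAudio(DecompressedVoice.GetData(), DecompressedVoice.Num());

// Play it through any UAudioComponent.
AudioComponent->SetSound(VoiceWave);
AudioComponent->Play();
```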

                          Comment
