Loading .WAV at runtime to a USoundWave


  • replied
    Hi,
    I have recently created a plugin that imports mp3, wav, and flac audio files at runtime.
    https://github.com/Respirant/RuntimeAudioImporter



  • replied
    Originally posted by Aviv_Azran View Post
    Hello, I have a sound source in my level and I want to be able to change the WAV file it plays at runtime.
    I'm interested in giving the user the ability to choose a WAV file from their computer so that the sound source will play it.
    This post seems like a good place to start, but I haven’t experimented with audio in the engine, so I don’t know how to do it.
    Can you share the full solution here please?
    Here's a blueprint for loading OGG files, and another user recently published code to load WAV too...
    EPIC community!
    https://forums.unrealengine.com/deve...quired/page163
    Plugin: https://github.com/EverNewJoy/VictoryPlugin
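    For the sound-source part of the question, a minimal sketch of swapping the wave onto an existing UAudioComponent might look like the following. LoadWavToSoundWave is a hypothetical helper standing in for the WAV-loading code further down this thread, and AudioComponent is assumed to be the component on the placed sound source; neither name comes from the original post.

    Code:
    #include "Components/AudioComponent.h"
    #include "Sound/SoundWave.h"

    // Hedged sketch: swap the sound a placed audio component plays at runtime.
    void AMySoundSourceActor::PlayChosenWav(const FString& WavPath)
    {
        USoundWave* Wave = LoadWavToSoundWave(WavPath); // hypothetical wrapper around the loading code below in this thread
        if (Wave && AudioComponent)
        {
            AudioComponent->SetSound(Wave); // UAudioComponent accepts any USoundBase
            AudioComponent->Play();
        }
    }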



  • replied
    Hello, I have a sound source in my level and I want to be able to change the WAV file it plays at runtime.
    I'm interested in giving the user the ability to choose a WAV file from their computer so that the sound source will play it.
    This post seems like a good place to start, but I haven’t experimented with audio in the engine, so I don’t know how to do it.
    Can you share the full solution here please?



  • replied
    Originally posted by alioth_du View Post
    RawPCMData also needs to be filled when playing audio in a packaged game. Try adding code like this where you copy RawData:

    Code:
    sw->RawPCMDataSize = WaveInfo.SampleDataSize;
    sw->RawPCMData = (uint8*)FMemory::Malloc(sw->RawPCMDataSize);
    FMemory::Memmove(sw->RawPCMData, rawFile.GetData(), rawFile.Num());
    alioth_du, this works like a charm in the editor as well as in the build. Thanks a bunch!



  • replied
    RawPCMData also needs to be filled when playing audio in a packaged game. Try adding code like this where you copy RawData:

    Code:
    sw->RawPCMDataSize = WaveInfo.SampleDataSize;
    sw->RawPCMData = (uint8*)FMemory::Malloc(sw->RawPCMDataSize);
    FMemory::Memmove(sw->RawPCMData, rawFile.GetData(), rawFile.Num());
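    For context, here is a sketch of the two copies side by side. One adjustment on my part that is not in the snippet above: the original copies rawFile.Num() bytes into a buffer of WaveInfo.SampleDataSize bytes, so the version below sizes the PCM copy to the sample data itself via WaveInfo.SampleDataStart; treat that change as unverified.

    Code:
    // The asset-side buffer (what the editor path serializes)
    sw->RawData.Lock(LOCK_READ_WRITE);
    FMemory::Memcpy(sw->RawData.Realloc(rawFile.Num()), rawFile.GetData(), rawFile.Num());
    sw->RawData.Unlock();

    // The runtime PCM buffer (needed for playback in a packaged game)
    sw->RawPCMDataSize = WaveInfo.SampleDataSize;
    sw->RawPCMData = (uint8*)FMemory::Malloc(sw->RawPCMDataSize);
    FMemory::Memcpy(sw->RawPCMData, WaveInfo.SampleDataStart, WaveInfo.SampleDataSize);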



  • replied
    I am working on voice chat with Steamworks, but I have run into trouble. I have the voice data, but I don't know how to play it back through a sound wave. Can anybody help me? I am not good at C++, but I think this forum can help me.
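    For queued voice data like this, USoundWaveProcedural is the usual route. Below is a minimal sketch, assuming the Steam voice packets have already been decompressed to 16-bit mono PCM at a known sample rate; VoiceAudioComponent, VoiceSampleRate and DecompressedPCM are placeholders, not Steamworks or engine names.

    Code:
    #include "Sound/SoundWaveProcedural.h"
    #include "Components/AudioComponent.h"

    // Hedged sketch: create a procedural wave once, keep it playing, and queue PCM as packets arrive.
    USoundWaveProcedural* CreateVoiceWave(UAudioComponent* VoiceAudioComponent, int32 VoiceSampleRate)
    {
        USoundWaveProcedural* VoiceWave = NewObject<USoundWaveProcedural>();
        VoiceWave->SetSampleRate(VoiceSampleRate);              // e.g. the rate reported by Steamworks
        VoiceWave->NumChannels = 1;                             // Steam voice is mono
        VoiceWave->Duration = 10000.0f;                         // effectively indefinite, so playback doesn't stop early
        VoiceWave->SoundGroup = ESoundGroup::SOUNDGROUP_Voice;
        VoiceWave->bLooping = false;

        VoiceAudioComponent->SetSound(VoiceWave);
        VoiceAudioComponent->Play();
        return VoiceWave;
    }

    // Then, for every decompressed 16-bit PCM voice packet (DecompressedPCM is a TArray<uint8>):
    // VoiceWave->QueueAudio(DecompressedPCM.GetData(), DecompressedPCM.Num());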



  • replied
    Hi WheezyDaStarfish, can you share how you linked against tinyfiledialogs? I'm trying to do the same and get a bunch of compiler errors...
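    For anyone hitting the same wall, one hedged way to sidestep the linker entirely (an assumption on my part, not necessarily what WheezyDaStarfish did): tinyfiledialogs ships as a single tinyfiledialogs.c / tinyfiledialogs.h pair, so you can drop both files into your module's Private folder, let UBT compile them with the module, and call it like any other C API.

    Code:
    #include "tinyfiledialogs.h" // compiled as part of the module, per the note above

    FString OpenWavFileDialog()
    {
        const char* FilterPatterns[1] = { "*.wav" };
        const char* SelectedPath = tinyfd_openFileDialog(
            "Choose a WAV file", "", 1, FilterPatterns, "WAV files", 0);
        return SelectedPath ? FString(ANSI_TO_TCHAR(SelectedPath)) : FString();
    }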



  • replied
    Originally posted by Stefan Lundmark View Post
    I agree, this doesn't make much sense.

    You need to fill two buffers, one is for the preview in the editor and one is the asset itself. My guess is that you're filling the preview buffer only.
    Hello, may I ask which buffer you mean when you say I need to fill another one? Can you give me more tips or related links? Thank you very much for your help.



  • replied
    I am referring to the SoundWave code created in SoundFactory.cpp. From the output log, the error seems to be caused by the SoundWave I created not having the correct encoding format, which makes UE process it as OGG. I also tried using the USoundWaveProcedural type, which lets me hear the sound, but there is another problem that bothers me: it causes the OnAudioFinished callback to never be triggered, and OnAudioPlaybackPercent always returns 0. This problem has been troubling me for many days, and I have been unable to find a solution. From this point of view, Unity has done a better job, with clear documentation and a large number of examples.
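    For reference, this is how I would expect those callbacks to be wired on the UAudioComponent side; whether they actually fire for a USoundWaveProcedural is exactly the open question here, so treat this as a sketch of the binding only (component and handler names are placeholders).

    Code:
    // Hedged sketch: bind in e.g. BeginPlay. Both delegates are dynamic,
    // so the handlers must be UFUNCTION()s declared on this class.
    void UMyComponent::BindAudioCallbacks(UAudioComponent* MyAudioComponent)
    {
        MyAudioComponent->OnAudioFinished.AddDynamic(this, &UMyComponent::HandleAudioFinished);
        MyAudioComponent->OnAudioPlaybackPercent.AddDynamic(this, &UMyComponent::HandlePlaybackPercent);
    }

    // UFUNCTION() void HandleAudioFinished();
    // UFUNCTION() void HandlePlaybackPercent(const USoundWave* PlayingSoundWave, const float PlaybackPercent);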



  • replied
    After debugging, I found that setting the decompression type is the cause of this error:

    ERROR: Assertion failed: Wave->GetPrecacheState() == ESoundWavePrecacheState::Done [File:D:/Build/++UE4/Sync/Engine/Source/Runtime/Windows/XAudio2/Private/XAudio2Buffer.cpp] [Line: 364]

    Code:
    // Commenting out this line seems to stop the error from occurring
    sw->DecompressionType = EDecompressionType::DTYPE_RealTime;



  • replied
    So after playing around with some things, I have discovered a few new issues.

    a) The vorbis header file is only for loading OGG files. The issue here is that (from my understanding) UE4 converts everything to an OGG file at runtime in order to have a consistent audio format for the application.
    b) The code I most recently posted is ONLY for loading raw OGG files, and should not be used.
    c) I am getting closer to understanding what is happening, because I have gotten this error message during a full crash of Unreal Engine during playback in PIE:

    Assertion failed: Wave->GetPrecacheState() == ESoundWavePrecacheState::Done [File:D:/Build/++UE4/Sync/Engine/Source/Runtime/Windows/XAudio2/Private/XAudio2Buffer.cpp] [Line: 364]


    Will do some more research on XAudio2Buffer, but if anyone has some insight, that would be great!


    Current Code

    Code:
    if (WaveInfo.ReadWaveInfo(rawFile.GetData(), rawFile.Num()))
    {
        //************************************************
        // CREATES THE SOUNDWAVE OBJECT AND ENSURES IT IS NOT NULL
        //************************************************
        USoundWave* sw = NewObject<USoundWave>(USoundWave::StaticClass()); // creates a new transient SoundWave object
        if (!sw)
        {
            UE_LOG(LogTemp, Error, TEXT("There was a nullptr when creating the USoundWave object."));
            return nullptr; // make sure it was created; if not, return nullptr
        }

        //************************************************
        // FILLS THE SOUNDWAVE DATA USING THE USOUNDWAVE NATIVE FUNCTIONS
        //************************************************
        int32 DurationDiv = *WaveInfo.pChannels * *WaveInfo.pBitsPerSample * *WaveInfo.pSamplesPerSec; // calculates the duration divider
        if (DurationDiv) // if the duration divider is not zero
        {
            // PRE debug logs
            UE_LOG(LogTemp, Log, TEXT("SQ NumChannels-> %i"), *WaveInfo.pChannels);
            UE_LOG(LogTemp, Log, TEXT("SQ Duration-> %f"), *WaveInfo.pWaveDataSize * 8.0f / DurationDiv);
            UE_LOG(LogTemp, Log, TEXT("SQ RawPCMDataSize-> %i"), WaveInfo.SampleDataSize);
            UE_LOG(LogTemp, Log, TEXT("SQ SampleRate-> %u"), *WaveInfo.pSamplesPerSec);

            // Fill in all the data we have
            sw->DecompressionType = EDecompressionType::DTYPE_RealTime;
            sw->SoundGroup = ESoundGroup::SOUNDGROUP_Default;
            sw->NumChannels = *WaveInfo.pChannels;
            sw->Duration = *WaveInfo.pWaveDataSize * 8.0f / DurationDiv;
            sw->RawPCMDataSize = WaveInfo.SampleDataSize;
            sw->SetSampleRate(*WaveInfo.pSamplesPerSec);

            // POST debug logs (SampleRate is protected, so log the value we just set)
            UE_LOG(LogTemp, Log, TEXT("SW NumChannels-> %i"), sw->NumChannels);
            UE_LOG(LogTemp, Log, TEXT("SW Duration-> %f"), sw->Duration);
            UE_LOG(LogTemp, Log, TEXT("SW RawPCMDataSize-> %i"), sw->RawPCMDataSize);
            UE_LOG(LogTemp, Log, TEXT("SW SampleRate-> %u"), *WaveInfo.pSamplesPerSec);
        }
        else
        {
            UE_LOG(LogTemp, Error, TEXT("There was an error reading data from WaveInfo. Duration Div Error."));
            return nullptr;
        }

        //************************************************
        // INVALIDATES COMPRESSED DATA, AND WRITES THE RAW FILE INTO THE RAWDATA OF THE NEW SOUNDWAVE
        //************************************************
        sw->InvalidateCompressedData(); // changes the GUID and flushes all the compressed data
        sw->RawData.Lock(LOCK_READ_WRITE);
        FMemory::Memcpy(sw->RawData.Realloc(rawFile.Num()), rawFile.GetData(), rawFile.Num());
        sw->RawData.Unlock();

        return sw;
    }
    else
    {
        return nullptr;
    }
    } // end of the enclosing load function (not shown above)
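    A hedged usage sketch for the code above, assuming it is wrapped up as a helper such as USoundWave* LoadWavToSoundWave(const FString& FilePath) that performs the FFileHelper::LoadFileToArray and ReadWaveInfo steps; those names are placeholders, not from this post.

    Code:
    #include "Kismet/GameplayStatics.h"

    void AMyActor::PlayWavFromDisk(const FString& FilePath)
    {
        if (USoundWave* Wave = LoadWavToSoundWave(FilePath)) // hypothetical wrapper around the code above
        {
            UGameplayStatics::PlaySound2D(this, Wave); // fire-and-forget; an attached UAudioComponent works too
        }
    }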



  • replied
    Well, I also found this code in reference to the vorbis error we keep getting.

    The below is a snippet of code from an OGG import plugin from this github project: https://github.com/Geromatic/Unreal-OGG/tree/USoundWave

    This code seems to be deprecated as of at least 4.23. I cannot find references to the FVorbisAudioInfo header file nor the FSoundQualityInfo header. Not to mention that in this code InSoundWave->SampleRate is set directly, while according to the documentation SampleRate is a protected variable and cannot be accessed without using SetSampleRate().

    If anyone has any insight into FVorbisAudioInfo, that would be great! I cannot find it in the documentation.



    Code:
     
    bool USoundProcessingLibrary::FillSoundWaveInfo(USoundWave* InSoundWave, TArray<uint8>* InRawFile)
    {
        // Info Structs
        FSoundQualityInfo SoundQualityInfo;
        FVorbisAudioInfo VorbisAudioInfo;

        // Save the Info into SoundQualityInfo
        if (!VorbisAudioInfo.ReadCompressedInfo(InRawFile->GetData(), InRawFile->Num(), &SoundQualityInfo))
        {
            return false;
        }

        // Fill in all the Data we have
        InSoundWave->DecompressionType = EDecompressionType::DTYPE_RealTime;
        InSoundWave->SoundGroup = ESoundGroup::SOUNDGROUP_Default;
        InSoundWave->NumChannels = SoundQualityInfo.NumChannels;
        InSoundWave->Duration = SoundQualityInfo.Duration;
        InSoundWave->RawPCMDataSize = SoundQualityInfo.SampleDataSize;
        InSoundWave->SampleRate = SoundQualityInfo.SampleRate;

        return true;
    }
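    Given the point above about SampleRate being protected, a hedged adjustment of the fill section that goes through the setter instead; the other members are assigned the same way elsewhere in this thread, so they are assumed to still be public.

    Code:
    // Same fill logic as above, but using SetSampleRate() instead of the protected member.
    InSoundWave->DecompressionType = EDecompressionType::DTYPE_RealTime;
    InSoundWave->SoundGroup = ESoundGroup::SOUNDGROUP_Default;
    InSoundWave->NumChannels = SoundQualityInfo.NumChannels;
    InSoundWave->Duration = SoundQualityInfo.Duration;
    InSoundWave->RawPCMDataSize = SoundQualityInfo.SampleDataSize;
    InSoundWave->SetSampleRate(SoundQualityInfo.SampleRate);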



  • replied
    Originally posted by Stefan Lundmark View Post
    Yes, that's how it works. One is serialized to disk and the other is used only for the preview; it's never written to disk. What does your code look like?
    void USbinTTSComponent::ReadWaveForPC(FString path, TArray<uint8>& RawSamples, USoundWave*& SoundWave)
    {
        TArray<uint8> FileSamples;
        if (!FFileHelper::LoadFileToArray(FileSamples, *path))
        {
            UE_LOG(LogTemp, Warning, TEXT("---LoadFileToArray Error---"));
            GEngine->AddOnScreenDebugMessage(-1, 5.f, FColor::Red, TEXT("LoadFileToArray Error"));
            return;
        }

        FWaveModInfo WaveInfo;
        FString ErrorMessage;
        if (WaveInfo.ReadWaveInfo(FileSamples.GetData(), FileSamples.Num(), &ErrorMessage))
        {
            USoundWave* Sound = NewObject<USoundWave>(USoundWave::StaticClass());

            // Compressed data is now out of date.
            Sound->InvalidateCompressedData();

            // If we're a multi-channel file, we're going to spoof the behavior of the SoundSurroundFactory
            int32 ChannelCount = (int32)*WaveInfo.pChannels;
            check(ChannelCount > 0);

            int32 SizeOfSample = (*WaveInfo.pBitsPerSample) / 8;

            int32 NumSamples = WaveInfo.SampleDataSize / SizeOfSample;
            int32 NumFrames = NumSamples / ChannelCount;

            Sound->RawData.Lock(LOCK_READ_WRITE);
            void* LockedData = Sound->RawData.Realloc(FileSamples.Num());
            FMemory::Memcpy(LockedData, FileSamples.GetData(), FileSamples.Num());
            Sound->RawData.Unlock();

            Sound->Duration = (float)NumFrames / *WaveInfo.pSamplesPerSec;
            Sound->SetSampleRate(*WaveInfo.pSamplesPerSec);
            Sound->NumChannels = ChannelCount;
            Sound->TotalSamples = *WaveInfo.pSamplesPerSec * Sound->Duration;

            SoundWave = Sound;
            RawSamples = FileSamples;
        }
        else
        {
            SoundWave = nullptr;
            GEngine->AddOnScreenDebugMessage(-1, 5.f, FColor::Red, TEXT("--- ReadWaveInfo Error ---"));
            UE_LOG(LogTemp, Warning, TEXT("--- ReadWaveInfo Error:>> %s"), *ErrorMessage);
        }
    }

    // This is my code. How do I fill the other buffer?
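    For what it's worth, another reply in this thread (from alioth_du, near the top of the page) reports that playback in a packaged game also needs RawPCMData filled, not just RawData. A hedged sketch of where that would slot into ReadWaveForPC, using this function's own variables; the use of WaveInfo.SampleDataStart for the copy is my assumption, not from that reply.

    Code:
    // In addition to the RawData copy above:
    Sound->RawPCMDataSize = WaveInfo.SampleDataSize;
    Sound->RawPCMData = (uint8*)FMemory::Malloc(Sound->RawPCMDataSize);
    FMemory::Memcpy(Sound->RawPCMData, WaveInfo.SampleDataStart, WaveInfo.SampleDataSize);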



  • replied
    Yes, that's how it works. One is serialized to disk and the other is used only for the preview; it's never written to disk. What does your code look like?



  • replied
    This is a bit difficult for me. I don't know how to fill the other buffer, and it sounds a bit strange: does the programmer need to manage two different destination buffers?
    Originally posted by Stefan Lundmark View Post
    I agree, this doesn't make much sense.

    You need to fill two buffers, one is for the preview in the editor and one is the asset itself. My guess is that you're filling the preview buffer only.

