
    Custom Music Player

    Hi, I'm a newbie and I want to implement a music player using UE4.
    I think this can be a simple and useful exercise before starting with bigger projects.

    All I need is to load, play, and stop audio files chosen by the user.
    Let's assume music can be in ogg files.

    How can I do so?
    Is there any tutorial about this, or has anyone already tried to do it?

    Thank you

    #2
    Unless I'm mistaken, there hasn't really been anything like this before or a tutorial to follow. In terms of how audio files are normally incorporated into the engine, they're imported from .wav files in the editor to create USoundWaves and generally are known about before a 'game' runs. Raw wave files are used so that data can be re-compressed for different platforms from the highest quality source.

    We do use ogg compression for sounds on most platforms at the moment, though, so it's possible that you could load .ogg files directly, but there isn't currently support for it. You could look at the existing import file process to get some hints, which is in Factory.cpp and EditorFactories.cpp. Specifically, FFileHelper::LoadFileToArray is used to load a file from disk into an array of bytes, which could then be decompressed into raw PCM data using the FVorbisAudioInfo class.

    If you're looking to play sounds on the fly as a game runs without specifically creating assets for them, you might be able to create a temporary USoundWave for playback and just drop the .ogg file contents into the CompressedFormatData FFormatContainer the SoundWave has. That way you could potentially avoid the conversion from ogg to .wav to ogg and back again, while also only decompressing the chunk of ogg data needed to play next if memory is going to be a concern.
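
    As a rough, untested sketch of that first path (load the file into bytes, then decompress to PCM), assuming the FVorbisAudioInfo / ICompressedAudioInfo interface as it currently exists in the source (names like RawFileData are placeholders, and exact include paths and module dependencies may vary), it could look something like this:
    Code:
    #include "VorbisAudioInfo.h" // FVorbisAudioInfo; you may also need the Vorbis modules in your *.Build.cs
    
    TArray<uint8> RawFileData;
    if (FFileHelper::LoadFileToArray(RawFileData, TEXT("D:\\my song.ogg")))
    {
    	FSoundQualityInfo QualityInfo;
    	FVorbisAudioInfo VorbisInfo;
    	if (VorbisInfo.ReadCompressedInfo(RawFileData.GetTypedData(), RawFileData.Num(), &QualityInfo))
    	{
    		// Decompress the whole file into 16-bit PCM. Fine for short sounds,
    		// heavy on memory for full songs - see the streaming discussion below.
    		TArray<uint8> RawPCM;
    		RawPCM.AddUninitialized(QualityInfo.SampleDataSize);
    		VorbisInfo.ExpandFile(RawPCM.GetTypedData(), &QualityInfo);
    	}
    }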

    Let me know if any of this doesn't make sense or if I'm going off on a completely different tangent from what you're trying to create.



      #3
      Thank you, this is more or less what I needed.

      The next step is... trying to do that.

      A stupid question about USoundWave:
      is there any support for streaming music from memory? How does it work?
      I mean... a long time ago, when I was (more) stupid, I was using Unity, and there was a callback function automatically triggered when the buffer was nearly empty and a refill was needed.
      Stuffing an uncompressed audio array into that object will kill the memory with any pop song as the audio file. Decompressing only the next chunk (or the next few chunks) is the way, but... how do I know when it's time to send the next chunk to the USoundWave?



        #4
        I had a similar idea, and got only one response, pointing to VideoLAN, aka the VLC player. It has an SDK and works under Linux and Windows.

        So take a look at the VLC SDK: https://wiki.videolan.org/LibVLC_Tutorial/
        Hopefully it can be used from within Unreal Engine. Or look at old Winamp code and plugins.

        Anything else would require you to code a custom decompressor for sound, and MP3 is a patented format.
        You can probably Google open-source ogg decoders like this one: http://www.nothings.org/stb_vorbis/

        PS: unless you use something like the VLC SDK or Windows Media Player, making a custom music player is not trivial.



          #5
          Originally posted by Nawrot View Post
          I had a similar idea, and got only one response.
          It was me.

          ...but as I said above, I want to try to include the player in the game without external programs (if possible).



            #6
            There is support for decompressing ogg from memory on the fly on Windows, Mac, Xbox and Android AFAIK. To make use of it, you'll need to set the SoundGroup of a SoundWave to one that won't always decompress on loading. If you look in Engine\Config\BaseEngine.ini you should see:
            Code:
            [/Script/Engine.SoundGroups]
            +SoundGroupProfiles=(SoundGroup=SOUNDGROUP_Default, bAlwaysDecompressOnLoad=false, DecompressedDuration=5)
            +SoundGroupProfiles=(SoundGroup=SOUNDGROUP_Effects, bAlwaysDecompressOnLoad=false, DecompressedDuration=5)
            +SoundGroupProfiles=(SoundGroup=SOUNDGROUP_UI, bAlwaysDecompressOnLoad=false, DecompressedDuration=5)
            +SoundGroupProfiles=(SoundGroup=SOUNDGROUP_Music, bAlwaysDecompressOnLoad=false, DecompressedDuration=0)
            +SoundGroupProfiles=(SoundGroup=SOUNDGROUP_Voice, bAlwaysDecompressOnLoad=false, DecompressedDuration=0)
            So currently we don't force any sounds to decompress on load, but for the Default, Effects and UI groups the engine will fully decompress any sounds that are less than 5 seconds in length. Music and Voice sounds will always attempt to decompress each chunk as needed, so you wouldn't need to worry about callbacks to fill a buffer yourself.
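
            If you're creating the SoundWave from code rather than importing it, that just means putting it in the Music group before playing it; assuming you already have a USoundWave* created at runtime (called sw here), that's a one-liner:
            Code:
            sw->SoundGroup = SOUNDGROUP_Music; // decompress chunk by chunk instead of fully on load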



              #7
              Originally posted by keru View Post
              A stupid question about USoundWave:
              is there any support for streaming music from memory? How does it work?
              I mean... a long time ago, when I was (more) stupid, I was using Unity, and there was a callback function automatically triggered when the buffer was nearly empty and a refill was needed.
              Stuffing an uncompressed audio array into that object will kill the memory with any pop song as the audio file. Decompressing only the next chunk (or the next few chunks) is the way, but... how do I know when it's time to send the next chunk to the USoundWave?
              If you do want to procedurally generate audio data, take a look at USoundWaveStreaming. It has a virtual function that you override called GeneratePCMData, which requests a certain number of samples and you provide as many as you can.

              The only current example of it being used is for VOIP; however, I did something not too dissimilar to this recently to add Mod support and will hopefully be getting it finished up and submitted on our next Epic Friday.

              In 4.1 much of the infrastructure needed to extend sound wave streaming in a plugin didn't exist. You can see the changes related to that here: https://github.com/EpicGames/UnrealE...00adc7212e0f80

              It should also be noted that the procedural audio paths are currently only implemented for the XAudio (PC/XBoxOne) and Core (Mac) audio devices; however, that will be rectified at some point.
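
              To give an idea of the shape of it, here is a rough, untested sketch of a subclass that generates a sine tone (the class and file names are made up for the example, and the GeneratePCMData signature is taken from the current source, so details may change):
              Code:
              // MySineWave.h - hypothetical example class, not part of the engine
              #include "Sound/SoundWaveStreaming.h" // include path may vary by version
              #include "MySineWave.generated.h"
              
              UCLASS()
              class UMySineWave : public USoundWaveStreaming
              {
              	GENERATED_UCLASS_BODY()
              
              	// Called by the audio device whenever it needs more samples; fill PCMData
              	// with up to SamplesNeeded 16-bit samples and return the bytes written.
              	virtual int32 GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded) override;
              
              	float Phase;
              };
              
              // MySineWave.cpp
              UMySineWave::UMySineWave(const FPostConstructInitializeProperties& PCIP)
              	: Super(PCIP), Phase(0.0f)
              {
              	NumChannels = 1;
              	SampleRate = 44100;
              }
              
              int32 UMySineWave::GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded)
              {
              	int16* OutSamples = reinterpret_cast<int16*>(PCMData);
              	const float Frequency = 440.0f; // A4 test tone
              	for (int32 i = 0; i < SamplesNeeded; ++i)
              	{
              		OutSamples[i] = (int16)(32767.0f * FMath::Sin(Phase));
              		Phase = FMath::Fmod(Phase + 2.0f * PI * Frequency / (float)SampleRate, 2.0f * PI);
              	}
              	return SamplesNeeded * sizeof(int16);
              }
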
              Last edited by Marc Audy; 06-04-2014, 04:02 PM.



                #8
                Originally posted by mgriffin_pitbull View Post
                If you're looking to play sounds on the fly as a game runs without specifically creating assets for them, you might be able to create a temporary USoundWave for playback and just drop the .ogg file contents into the CompressedFormatData FFormatContainer the SoundWave has. That way you could potentially avoid the conversion from ogg to .wav to ogg and back again, while also only decompressing the chunk of ogg data needed to play next if memory is going to be a concern.
                I get your point, but I'm having difficulty creating a USoundWave on the fly with the .ogg content as its data.


                Originally posted by mgriffin_pitbull View Post
                There is support for decompressing ogg from memory on the fly on Windows, Mac, Xbox and Android AFAIK.
                [...]
                so you wouldn't need to worry about callbacks to fill a buffer yourself.
                Originally posted by Marc Audy View Post
                If you do want to procedurally generate audio data, take a look at USoundWaveStreaming. It has a virtual function that you override called GeneratePCMData, which requests a certain number of samples and you provide as many as you can.
                The only current example of it being used is for VOIP; however, I did something not too dissimilar to this recently to add Mod support and will hopefully be getting it finished up and submitted on our next Epic Friday.
                In 4.1 much of the infrastructure needed to extend sound wave streaming in a plugin didn't exist.
                Once I've got the USoundWave working with native support, it will be interesting to extend the project using USoundWaveStreaming combined with a custom decoder, in order to support more encodings.

                Right now my installed version is 4.1.1. I'm going to update soon to 4.2 (:



                  #9
                  Originally posted by keru View Post
                  I get your point, but I'm having difficulty creating a USoundWave on the fly with the .ogg content as its data.
                  I tried creating an actor and triggering an event to load a file from memory. The code follows:

                  Code:
                  AMyFooActor::AMyFooActor(const class FPostConstructInitializeProperties& PCIP)
                  	: Super(PCIP)
                  {
                  	Box = PCIP.CreateDefaultSubobject<UBoxComponent>(this, TEXT("box"));
                  	
                  	sw = (USoundWave*) StaticConstructObject(USoundWave::StaticClass(), this, TEXT("MyTestSoundWave"));
                  	
                  	bool loaded;
                  
                  	loaded = FFileHelper::LoadFileToArray(rawFile, TEXT("D:\\my song.ogg"));
                  
                  	Box->bGenerateOverlapEvents = true;
                  	Box->SetRelativeScale3D(FVector(5, 5, 5));
                  
                  	if (loaded){
                  		//sw->CompressedFormatData = *(reinterpret_cast<FFormatContainer*> (rawFile.GetData()));
                  		//uncommenting line above causes funny Editor crash when opening project (v 4.1.1 and v 4.2.0)
                  		Debug("loaded");		
                  	}
                  
                  	RootComponent = Box;
                  
                  	Box->OnComponentBeginOverlap.AddDynamic(this, &AMyFooActor::TriggerEnter);
                  	Box->OnComponentEndOverlap.AddDynamic(this, &AMyFooActor::TriggerExit);
                  }
                  
                  void AMyFooActor::TriggerEnter(class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
                  {
                  	Debug("trigger enter");
                  	sw->CompressedFormatData = *(reinterpret_cast<FFormatContainer*> (rawFile.GetData()));
                  	//the line above causes a very funny runtime Editor crash (v 4.1.1 and v 4.2.0)
                  	Debug("done! LOL");
                  }
                  
                  void AMyFooActor::TriggerExit(class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex){
                  
                  	Debug("trigger exit");
                  }
                  I can load the .ogg file but I don't know how to handle its content.

                  Am I doing the cast wrong, or should I change approach (USoundWaveStreaming, external player, ...)?



                    #10
                    The FFormatContainer won't let you just cast the contents of your file into it like that. If you use its GetFormat function with the name "OGG", you'll get back an FByteBulkData structure for that format. Then you can copy your file data into it; it should look something like this:
                    Code:
                    FByteBulkData& BulkData = sw->CompressedFormatData.GetFormat(TEXT("OGG"));
                    BulkData.Lock(LOCK_READ_WRITE);
                    FMemory::Memcpy(BulkData.Realloc(rawFile.Num()), rawFile.GetTypedData(), rawFile.Num());
                    BulkData.Unlock();
                    You can see something similar inside USoundWave::GetCompressedData, where it uses the Derived Data Cache to retrieve compressed data or compress it again from the source .wav file. Let me know if you have any more issues; this is still untested territory, but I think it should work as long as the platform you're running on uses ogg for sound.
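
                    Pulling the pieces together, a rough, untested end-to-end sketch might look like this (variable names are just placeholders, AudioComp is assumed to be an existing UAudioComponent, and you may still need to set some SoundWave properties by hand, as discussed in the following posts):
                    Code:
                    // Build a transient USoundWave from an .ogg file on disk and play it.
                    TArray<uint8> rawFile;
                    if (FFileHelper::LoadFileToArray(rawFile, TEXT("D:\\my song.ogg")))
                    {
                    	USoundWave* sw = (USoundWave*)StaticConstructObject(USoundWave::StaticClass());
                    	sw->SoundGroup = SOUNDGROUP_Music; // decompress chunk by chunk as it plays
                    
                    	FByteBulkData& BulkData = sw->CompressedFormatData.GetFormat(TEXT("OGG"));
                    	BulkData.Lock(LOCK_READ_WRITE);
                    	FMemory::Memcpy(BulkData.Realloc(rawFile.Num()), rawFile.GetTypedData(), rawFile.Num());
                    	BulkData.Unlock();
                    
                    	// Properties such as NumChannels are not filled in by the copy above -
                    	// see the following posts for how to set them from the ogg header.
                    
                    	AudioComp->SetSound(sw);
                    	AudioComp->Play();
                    }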



                      #11
                      Originally posted by mgriffin_pitbull View Post
                      this is still untested territory, but I think it should work as long as the platform you're running on uses ogg for sound.
                      IT COULD WORK

                      Actually, I made it work in a rough way (thanks!)
                      code

                      When I create a USoundWave I should also set its parameters (such as duration, number of channels, ...) that the Memcpy does not set.
                      I found out that it is enough to manually set the NumChannels parameter to get the USoundWave playing through a UAudioComponent.
                      Is there any function that "automagically" sets those parameters in the USoundWave?
                      (I would also appreciate a not-so-automagic way to retrieve just the number of channels from the raw file - is that possible?)


                      Another issue I found was with the UAudioComponent.
                      I have a play / stop system working, but it would be nice to have a PAUSE function.
                      I didn't find a pause function in UAudioComponent, but there is a Play(StartTime) function that can be used to play the song from a certain point.
                      If I could save the exact time when the user stops the song, I would also be able to resume from that time. How can I get that time?

                      Noob note: I used UAudioComponent because it was easy to set up and just worked. Please let me know if there is a better way to control a sound through C++.



                        #12
                        Hi there, I'm not quite sure what happened with this thread, but I did write a reply to your last post the other day and then both seemed to get deleted.

                        On the first point, ICompressedAudioInfo::ReadCompressedInfo will fill out an FSoundQualityInfo structure with all the information it can read from the ogg file's header data. This already happens: if you search for the places it's used, you'll see the original SoundWave is updated in case any of the values differ because of compression. So you can either create a temporary info object just to read this data and set up the SoundWave, or you can continue to manually set the NumChannels property as you've discovered; the rest should be set automatically anyway once the sound starts playing.
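
                        An untested sketch of the first option, using the ogg implementation (FVorbisAudioInfo) to read the header and copy the values onto the SoundWave (rawFile and sw being the placeholder names from the earlier posts):
                        Code:
                        FSoundQualityInfo QualityInfo;
                        FVorbisAudioInfo VorbisInfo;
                        if (VorbisInfo.ReadCompressedInfo(rawFile.GetTypedData(), rawFile.Num(), &QualityInfo))
                        {
                        	// Copy the header information onto the transient SoundWave.
                        	sw->NumChannels = QualityInfo.NumChannels;
                        	sw->SampleRate = QualityInfo.SampleRate;
                        	sw->Duration = QualityInfo.Duration;
                        }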

                        As for the UAudioComponent, yes, that's the best way to play sounds in general. You could potentially pause your game, which pauses all active sounds that aren't tagged as 'bIsUISound', if you're only dealing with that one sound. I am slightly surprised there isn't a Pause function available in UAudioComponent, but I guess it would require a rewrite of how the pausing code works so that manually paused sounds wouldn't be re-activated when un-pausing a game. If you look at how FAudioDevice::HandlePause is used, you might be able to make use of it to pause just your piece of music and let the game continue. You could also drill down and get the lower-level FSoundSource for your SoundWave and call Pause on that if you never need to pause your game, but I'm not sure whether to recommend that. As for finding out the current time of a playing sound to restart it at a specific time, I don't think that is currently exposed anywhere - or can easily be discovered, sadly.



                          #13
                          Originally posted by mgriffin_pitbull View Post
                          Hi there, I'm not quite sure what happened with this thread, but I did write a reply to your last post the other day and then both seemed to get deleted.
                          It was a forum fault with data loss that day (details).

                          Originally posted by mgriffin_pitbull View Post
                          On the first point, ICompressedAudioInfo::ReadCompressedInfo will fill out a FSoundQualityInfo structure with all the information it can read from the ogg file's header data.
                          Thanks, this automagically worked ^_^

                          Originally posted by mgriffin_pitbull View Post
                          You could potentially pause your game, which pauses all active sounds that aren't tagged as 'bIsUISound' if you're only dealing with that one sound.
                          Originally posted by mgriffin_pitbull View Post
                          If you look at how FAudioDevice::HandlePause is used, you might be able to make use of it to pause just your piece of music and let the game continue. You could also drill down and get the lower level FSoundSource for your SoundWave and call Pause on that if you never need to pause your game but I'm not sure whether to recommend that.
                          I don't want to pause the whole environment just to pause the music.

                          I looked into that function: FAudioDevice::HandlePause will pause any source tagged as IsGameOnly by calling ->Pause() on it.
                          What I was planning to do was to manually Pause the FSoundSource associated with my UAudioComponent (making that source not game-only).
                          I dug a bit, but I still don't understand how to get the FSoundSource from the UAudioComponent.

                          PS I also found a not-so-cheering comment in FActiveSound::UpdateWaveInstances
                          Code:
                           //@todo audio: Need to handle pausing and not getting out of sync by using the mixer's time.
                          Last edited by keru; 06-23-2014, 11:46 AM.



                            #14
                            Yeah, getting from an Audio Component to the sound source could be a bit difficult, which is why I wasn't sure whether to recommend it. You can use FAudioDevice::FindActiveSound to get the FActiveSound associated with the audio component while it's playing. Then you'd have to use the map of FWaveInstances the active sound has (there should only be one when playing a single sound wave) and use the Audio Device's WaveInstanceSourceMap to get the sound source associated with it. This is basically the process that occurs when you call Stop on an Audio Component, if you follow the functions through, so I don't think there's anything too hacky about this approach.
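
                            Roughly, that chain would look something like this (untested, and it reaches into engine internals, so the exact member names - taken from the current source - may change; AudioComp is the UAudioComponent playing the music):
                            Code:
                            FAudioDevice* AudioDevice = GEngine->GetAudioDevice();
                            if (AudioDevice)
                            {
                            	FActiveSound* ActiveSound = AudioDevice->FindActiveSound(AudioComp);
                            	if (ActiveSound)
                            	{
                            		// There should only be one wave instance for a single sound wave.
                            		for (auto It = ActiveSound->WaveInstances.CreateConstIterator(); It; ++It)
                            		{
                            			FSoundSource** Source = AudioDevice->WaveInstanceSourceMap.Find(It.Value());
                            			if (Source && *Source)
                            			{
                            				(*Source)->Pause(); // pause just this one sound
                            			}
                            		}
                            	}
                            }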

                            I'm not sure what the todo comment relates to but I guess it's best to give this a try first and see if it throws up any more problems.



                              #15
                              Originally posted by Marc Audy View Post
                              It should also be noted that the procedural audio paths are currently only implemented for the XAudio (PC/XBoxOne) and Core (Mac) audio devices; however, that will be rectified at some point.
                              Hello, my first post on the UE4 forums.
                              What's the word on this?

                              I've been looking for an excuse to mess with UE4; 3 iOS apps made with UDK are awaiting review, so downloading the UE4 source sounded like a good idea.
                              GeneratePCMData was one of the first things I thought about messing with, basically with the aim of producing sound with maths on the fly.

                              Is there any use in me trying to make it work on a PC just yet?

                              Edit:
                              I've just seen GetVoiceData in VoiceCaptureWindows.
                              Could this be a way in?
                              Last edited by tegleg; 06-26-2014, 08:04 PM.
                              tegleg.co.uk - indie electronic music label
                              Android + HTML5 WIP Physics Game
                              PC Games - Old Android Music Apps

