Hi forum,
I just tested the new 4.13 with the new Media Framework, but I don't feel comfortable using it yet. First I need to create a Media Source, then a Media Player, and finally a Media Texture as the container. I don't see any logical reason to have three assets instead of one, so I really think this could be improved. I know it's easy to use, but there's easy and then there's “super easy”. Anyway, it's cool to have .mp4 playing inside UE on my Windows machine, and it's much more stable now. But when I Play In Editor there is no sound; how can I enable the sound? And another problem: in the Media Player editor we can't scrub forward through the video with the scroll bar, only rewind.
Hi everybody,
sorry if this is the wrong place to ask this question, but I am trying to make/use an in-game music player where the user can specify and play audio files of different formats, and also decode the chosen audio files for onset detection (think an Audiosurf-style game).
Would the Media Framework be suitable for this task (under Windows)? If not, would the VLC-Media plugin be OK to use for this, or are there any other alternatives?
The MediaPlayer asset can also play audio files. The supported formats depend on the platform and player plug-ins you have installed. With the WmfMedia plug-in on Windows we support aac, adts, mp3, wav, wma.
The AvfMedia plug-in doesn’t list any audio file extensions at this time. This is an oversight.
Here’s what I added for 4.14 (bold = supported in 4.13):
Yes, if you create your own native player (IMediaPlayer derived classes, such as FWmfMediaPlayer) you can register an IMediaAudioSink with the player’s output interface (IMediaPlayer::GetOutput). The sink will then receive raw audio samples. Take a look at UMediaSoundWave to see how the sink is implemented there. Also check UMediaPlayer to see how native players are created.
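For anyone wanting a starting point, here is a rough sketch of that approach. This is not engine code: the exact IMediaAudioSink virtuals vary between engine versions, so the overridden method names below are placeholders to be checked against the IMediaAudioSink header and UMediaSoundWave in your build; only IMediaPlayer::GetOutput comes from the post above, and SetAudioSink is an assumed registration call on the output interface.

```cpp
// Sketch of a custom audio sink. The overridden virtuals shown here are
// illustrative -- compare with IMediaAudioSink.h in your engine version
// and with UMediaSoundWave, which implements the same interface.
class FMyAudioSink : public IMediaAudioSink
{
public:
	// Assumed callback: invoked once the player knows the audio format.
	virtual void InitializeAudioSink(uint32 Channels, uint32 SampleRate) override
	{
		NumChannels = Channels;
		SamplesPerSecond = SampleRate;
	}

	// Assumed callback: invoked whenever the player has decoded a block of
	// raw PCM samples. Copy or analyze the buffer here.
	virtual void PlayAudioSink(const uint8* Buffer, uint32 BufferSize, FTimespan Time) override
	{
		// e.g. push the samples into your own ring buffer for onset detection
	}

	virtual void ShutdownAudioSink() override { }

private:
	uint32 NumChannels = 0;
	uint32 SamplesPerSecond = 0;
};

// Registering the sink with a native player's output interface.
// 'Player' is the native IMediaPlayer instance you created (e.g. FWmfMediaPlayer).
void RegisterSink(IMediaPlayer& Player, FMyAudioSink& Sink)
{
	Player.GetOutput().SetAudioSink(&Sink); // SetAudioSink: assumed name
}
```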
I was just wondering if you’ve looked at sync issues between the video frame rate and the project frame rate. I’m trying to use the media framework to play videos on textures, but even when the video frame rate matches the engine frame rate I get dropped or doubled frames every now and then (quite often).
Is it possible to get the media player to force the next frame in the video file to be rendered/presented regardless of its playback rate vs the frame rate? As an extreme example, a 30fps video lasting 10 seconds would be played back in 5 seconds if the game engine were running at 60fps.
This obviously isn’t my end goal; I’ll ultimately be playing back 50fps video with the engine locked at 50fps, I just don’t want any doubled or skipped frames.
@Dannington Yeah, this is a known issue. I’m working on it for 4.15. The problem is that the decoder is often several frames ahead, so when it delivers more frames than actually needed, the triple buffer in FMediaTextureResource starts dropping them. I’m going to implement a frame queuing system that also takes into account the frames’ time codes for 4.15.
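To illustrate the idea (this is not the engine implementation, just a generic sketch of timecode-based frame selection): instead of letting an eager decoder overwrite a fixed triple buffer, queued frames carry their presentation time, and the renderer picks the frame whose timecode matches the player's current playback time, discarding only frames that are genuinely in the past.

```cpp
#include <deque>
#include <memory>
#include <utility>

// Hypothetical decoded-frame record; 'Payload' stands in for whatever
// pixel data a real player would hand to the texture resource.
struct FQueuedVideoFrame
{
	double TimeSeconds = 0.0;        // presentation timecode of this frame
	std::shared_ptr<void> Payload;   // decoded frame data (placeholder)
};

// Minimal timecode-aware frame queue: the decoder pushes frames in display
// order, and the renderer asks for the frame that should be visible now.
class FFrameQueue
{
public:
	void Enqueue(FQueuedVideoFrame Frame)
	{
		Frames.push_back(std::move(Frame));
	}

	// Returns the latest frame whose timecode is <= PlaybackTime and drops
	// everything older, so a decoder running several frames ahead doesn't
	// cause frames to be shown too early, doubled, or skipped.
	bool Dequeue(double PlaybackTime, FQueuedVideoFrame& OutFrame)
	{
		bool bFound = false;
		while (!Frames.empty() && Frames.front().TimeSeconds <= PlaybackTime)
		{
			OutFrame = std::move(Frames.front());
			Frames.pop_front();
			bFound = true;
		}
		return bFound;
	}

private:
	std::deque<FQueuedVideoFrame> Frames;
};
```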
Thanks for the reply - I know you’re probably sick of people saying this, but if you need someone to test your work I’d be - let’s say - keen. I’m working on a live-broadcast gameshow and I could really do with smooth playback in UE. At the moment I’m playing videos in a BLUI buffer and - for whatever reason - the results I get are OK; the problem is that I’m having to juggle JavaScript requests and callbacks in order to time and trigger my video streams. I’ve also been looking at using Spout to feed in live textures, which also works OK, but due to a few bugs in the plugin I’m getting memory leaks and slowdowns from prolonged use.
Anyway - thanks for all the work you’ve been putting into the media framework.
Is there any simple way to get audio data out of the media player for visualisation? I’m new to programming and I find the Unreal media framework very hard to understand.
When playing media in the Engine, the audio data goes to a UMediaSoundWave asset. I don’t know if there’s already a way to extract audio data from sound waves, but I think this will be possible with the new audio sub-system. I’ll see if our audio programmer can respond here.
The UMediaSoundWave asset is a USoundWaveProcedural. Procedural sound waves are raw PCM data that is submitted directly to the lower audio engine, in the same way as other sound-file waves (which decode their compressed audio to raw PCM data). That raw data is then submitted to platform-dependent sound sources and mixed/rendered to the output hardware. Although this isn’t supported out of the box, you could copy off the PCM data that the UMediaSoundWave is submitting to the procedural sound wave. Once you have the raw 16-bit PCM data, you can convert it to floats and do whatever you want with it.
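For example, once you have a block of the raw interleaved 16-bit PCM samples, converting it to floats for analysis is just a scale by 1/32768. Here is a generic sketch, independent of any engine API, with a simple RMS level as an example of the kind of value a visualizer might use:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Convert signed 16-bit PCM samples to floats in the range [-1.0, 1.0].
std::vector<float> PcmToFloat(const int16_t* Samples, size_t NumSamples)
{
	std::vector<float> Out(NumSamples);
	for (size_t i = 0; i < NumSamples; ++i)
	{
		Out[i] = static_cast<float>(Samples[i]) / 32768.0f;
	}
	return Out;
}

// Example analysis for a visualizer: RMS level of one block of samples.
float ComputeRms(const std::vector<float>& Samples)
{
	if (Samples.empty())
	{
		return 0.0f;
	}

	double SumSquares = 0.0;
	for (float Sample : Samples)
	{
		SumSquares += static_cast<double>(Sample) * Sample;
	}
	return static_cast<float>(std::sqrt(SumSquares / Samples.size()));
}
```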
This process will be generalized and easier to perform in the new audio mixer (or “audio renderer”), since we’ll be doing our own mixing in platform-independent code. But procedural sound waves haven’t changed; they use the same USoundWaveProcedural implementation.
Oh awesome, I didn’t realize this thread was still active. I was just curious about webcam integration in the new MediaSource assets. Is that something that’s still in the works or is it possible to locally stream a webcam and then feed that to a Stream Media Source using something like VLC? It seems a bit overkill to stream the feed to the web and then retrieve it in engine.
I made a more detailed post about it here before finding this thread.