As someone who is trying to implement 360 video for VR, it's great to hear that the Media Framework is being worked on again.
Same here. It would be great to be able to use 4K or 6K videos for 360 panos, especially on the Gear VR. Also, please help us with this bug; it's a real deal breaker even with very simple videos.
Here’s a quick update…
I’m still working on the Media Framework overhaul - I’ve been able to focus on it for about a week now. There are major changes to the API, but the content is mostly backwards compatible right now (I’m planning to write code for automatically upgrading your projects). The code isn’t in a working state yet, but I anticipate having the first working version within the next couple of days.
Here’s what’s in the works right now:
- File & URL media sources (finished)
- Playlists (finished)
- API performance improvements for 4K/360 video (almost done)
- API simplification (almost done)
- Content pipeline overhaul (almost done)
- Upgrade all player plug-ins to new API (VlcMedia & PS4Media almost done, rest to do)
- Better audio support (in progress)
- Linux support (in progress via VlcMedia)
- Android camera media source (in progress by ChrisB)
- WmfMedia command queue & fix state machine bugs (to do)
- WmfMedia H.264 support & GPU pixel format conversion (to do)
- Sequencer integration (to do)
I’m not sure if this big change will make it into 4.12, because we’re branching on Thursday and there is still so much to do. At least it’s going to be in the GitHub Master branch soon. I’ll keep you posted.
I am excited for when this comes out. Thank you for working on it.
Awesome news, thanks gmpreussner!
Thank you gmpreussner!
I think the 4K support on mobile platforms is the real deal, at least for us.
Will this be coming in 4.12??
Nope, didn’t make it into 4.12 as we already branched last week.
So 4.13 then?
Yep, and in GitHub Master much sooner than that.
For ages now I’ve wanted a video of my graphic design portfolio playing on something like a MacBook that you can walk up to and turn on. When is 4.12 supposed to come out? Over the summer, perhaps? And will you be updating this thread once the GitHub Master branch is out?
4.13 is scheduled for July right now. Yes, I will post regular updates here.
Thanks for all the updates.
Do you know what the frame delay is (if any) between passing a buffer off to IMediaSink and when it is displayed in the engine? Are the optimizations you mentioned earlier in the thread aimed at reducing this delay by removing extra buffer copies? Our goal is to get video from a hardware input device rendered in the engine with as little frame delay from the source as possible.
The ability to play audio embedded in video files on Windows will be great, since it already works on Mac and mobile. Unless there’s a method I’m not aware of!
FYI we started working on in-engine YouTube playback.
Yeah, better audio support is a big ticket item. It should work on Windows already, but it is very limited.
The delay depends on how the player plug-in works. Some decoder APIs are able to render directly into a render target, others will return frame buffers on some random thread.
We do not currently have a decoder that decodes directly on the GPU. I believe that Bink is the only solution that can do this at the moment, but it is not free.
I think that with the current implementation the problem is not so much frame delay, but performance, because we’re copying frame buffers several times. The delay should be at most one frame either way. The new Media API will allow for two different video sink modes: triple-buffered render targets, and direct-write to render target. The latter is used for decoders that can render or need to do post-processing on the Render thread, and the former is used for everything else. When writing to render targets on the Render thread, there should be no frame delay. When using the triple-buffered mechanism, there can be a delay of at most one frame (the texture resource will grab the latest buffer the next time the Render Tick is executed, which may be before or after the current frame finished rendering).
The main focus for Media 2.0 is to eliminate extra frame buffer copies.
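The triple-buffered sink mode described above can be sketched roughly like this. This is a minimal, lock-free illustration of the idea only, not the actual Unreal Media Framework API; all names here are hypothetical. The decoder thread publishes its newest frame into a shared "middle" slot, and the render thread grabs whatever is newest on its next tick, so at most one frame of latency and no locks:

```cpp
#include <array>
#include <atomic>
#include <cstdint>

// Hypothetical sketch of a lock-free triple buffer. One slot is owned by the
// writer (decoder thread), one by the reader (render thread), and one sits in
// the atomic "middle" variable. A dirty bit on the middle index signals that
// a new frame has been published since the last read.
class TripleBuffer
{
public:
    // Decoder thread: fill the back buffer, then publish it atomically and
    // take the old middle slot as the next back buffer.
    void Write(uint64_t Value)
    {
        Slots[WriteIdx] = Value;
        WriteIdx = Middle.exchange(WriteIdx | Dirty) & IndexMask;
    }

    // Render thread: if a new frame was published, swap it in; otherwise
    // keep re-displaying the last frame we grabbed.
    uint64_t Read()
    {
        if (Middle.load() & Dirty)
        {
            ReadIdx = Middle.exchange(ReadIdx) & IndexMask;
        }
        return Slots[ReadIdx];
    }

private:
    static constexpr unsigned Dirty = 4;      // bit 2 marks "new frame"
    static constexpr unsigned IndexMask = 3;  // low bits hold the slot index

    std::array<uint64_t, 3> Slots{};
    std::atomic<unsigned> Middle{1};
    unsigned WriteIdx = 0;
    unsigned ReadIdx = 2;
};
```

Note that if the decoder publishes two frames between render ticks, the reader only ever sees the newest one, which matches the "grab the latest buffer on the next Render Tick" behavior described above.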
Is there a frame rate limit in the current or future implementation of the Media Framework? Currently I’m trying to feed in frames from our own plug-in at 60 fps, but the actual frame rate displayed on the media texture in the engine seems to be closer to 30 fps.
Do you think this is a problem with my implementation, or is it a current limitation?
The frame rate is limited by the Engine’s frame rate. If the Engine can only render 30 frames per second, your movie won’t play at 60. Movie frames will be dropped during rendering.
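The frame-dropping arithmetic can be illustrated with a toy mapping from engine render ticks to movie frames (a hypothetical helper for illustration, not engine code):

```cpp
#include <cstdint>

// Hypothetical helper, not engine code: maps engine render tick N to the
// movie frame that is current at that tick. When the engine renders slower
// than the movie's frame rate, intermediate movie frames are skipped
// (dropped); when it renders faster, movie frames are shown more than once.
int64_t MovieFrameAtTick(int64_t Tick, int64_t MovieFps, int64_t EngineFps)
{
    return Tick * MovieFps / EngineFps;
}
```

For a 60 fps movie in a 30 fps engine, successive ticks land on movie frames 0, 2, 4, …, so every other movie frame is dropped. In a 120 fps engine, ticks land on frames 0, 0, 1, 1, …, so each movie frame is displayed twice.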
In my example the engine is rendering at 100+ fps, but the media texture in the engine looks to be approximately 30 fps (eyeballing it) when compared to the source.
So if frames are pushed into the Media Framework at 60 fps, they should be rendered on the media texture at that rate?