A realtime renderer like Unreal relies on the previous frame’s information to build the current frame. This means that to create a single screenshot (like HighResShot does), the engine actually forcibly renders additional frames at the requested resolution. The default is 4 (specified by “r.HighResScreenshotDelay”), so when you request a screenshot at 4k, you’re actually requesting five 4k renders for a single output frame. There are no mechanisms within HighResShot to make “good” frames - it simply resizes the internal resolution of the renderer, and then reads the data back, synchronously, at the end of those 5 renders. This feature pre-dates most of the engine’s advanced rendering techniques (such as TAA/TSR, Lumen, Nanite, Virtual Shadow Maps, etc.).
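For reference, here’s a minimal sketch of driving this from C++ (the CVar and console command are real; the wrapper function is just for illustration):

```cpp
#include "HAL/IConsoleManager.h"
#include "Engine/Engine.h"

// Illustrative helper (not engine code): check the warm-up delay, then
// request a 4k screenshot the same way the console command does.
void TakeHighResShot(UWorld* World)
{
    // r.HighResScreenshotDelay controls how many frames are rendered
    // before the data is read back (defaults to 4).
    if (IConsoleVariable* Delay = IConsoleManager::Get().FindConsoleVariable(TEXT("r.HighResScreenshotDelay")))
    {
        UE_LOG(LogTemp, Log, TEXT("HighResShot delay: %d frames"), Delay->GetInt());
    }

    // Equivalent to typing "HighResShot 3840x2160" into the console.
    GEngine->Exec(World, TEXT("HighResShot 3840x2160"));
}
```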
MRQ, on the other hand, has a fairly significant amount of code designed for efficient, multi-frame renders. MRQ can do single-frame renders (if you just need screenshots), but it is designed first and foremost for multi-frame renders. It adds several things that HighResShot does not:
1) Longer warm-up periods for better visual history (I believe it defaults to 32 frames)
2) Flushes any outstanding shaders, materials, distance field meshes, Nanite mesh builds, etc. at the end of each frame (before the render) to ensure there is nothing missing for a given frame.
3) Sets a number of CVars by default when rendering starts to improve quality, ie: disabling LODs, setting Cinematic Scalability levels, etc. (see the sketch after this list)
4) Disables Texture Streaming by default
5) Uses an asynchronous readback from the GPU, ie: the render for Frame 1 is submitted to the GPU during Frame 1, but it normally doesn’t get worked on until Frame 2, and only by the start of Frame 3 is the data ready and available for readback. HighResShot effectively blocks the entire game thread until any existing GPU work is done, then submits the higher-res render, then waits for the CPU side of that to finish, and then has to wait for all of the GPU side to finish. MRQ is based on how regular rendering works, which allows this work to overlap and permits a much greater degree of concurrency.
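To give a concrete idea of point 3, here’s a rough sketch of the kind of overrides involved - these are standard engine CVars, but the authoritative list of what MRQ actually applies lives in MoviePipelineGameOverrideSetting.cpp:

```cpp
#include "HAL/IConsoleManager.h"

// Illustrative only: approximates what MRQ's Game Override setting does
// at render start. Level 4 on the sg.* scalability groups is "Cinematic".
static void SetCVarInt(const TCHAR* Name, int32 Value)
{
    if (IConsoleVariable* Var = IConsoleManager::Get().FindConsoleVariable(Name))
    {
        Var->Set(Value, ECVF_SetByConsole);
    }
}

static void ApplyCinematicOverrides()
{
    SetCVarInt(TEXT("sg.ViewDistanceQuality"), 4); // Cinematic scalability
    SetCVarInt(TEXT("sg.AntiAliasingQuality"), 4);
    SetCVarInt(TEXT("r.ForceLOD"), 0);             // Force highest-detail LOD
    SetCVarInt(TEXT("r.TextureStreaming"), 0);     // Disable texture streaming
}
```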
You can look at MoviePipelineGameOverrideSetting.cpp and MoviePipelineRendering.cpp to get an idea of what MRQ disables, and what per-frame flushing it does (see UMoviePipeline::FlushAsyncEngineSystems).
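As a rough sketch, the flushing is along these lines - the engine globals here are real, but treat this as an approximation and FlushAsyncEngineSystems as the authoritative version:

```cpp
#include "ShaderCompiler.h"
#include "DistanceFieldAtlas.h"
#include "ContentStreaming.h"

// Approximation of the kind of work UMoviePipeline::FlushAsyncEngineSystems
// does before each rendered frame: block until async systems are caught up.
void FlushOutstandingWork()
{
    // Finish any in-flight shader and material compilation.
    if (GShaderCompilingManager)
    {
        GShaderCompilingManager->FinishAllCompilation();
    }

    // Finish any queued distance field mesh builds.
    if (GDistanceFieldAsyncQueue)
    {
        GDistanceFieldAsyncQueue->BlockUntilAllBuildsComplete();
    }

    // Fully stream in all outstanding resources (textures, etc.).
    IStreamingManager::Get().StreamAllResources(0.0f);
}
```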
As indicated in my previous post, the Viewport has world rendering disabled:
```cpp
// MoviePipeline.cpp
if (UGameViewportClient* Viewport = GetWorld()->GetGameViewport())
{
    Viewport->bDisableWorldRendering = !ViewportInitArgs.bRenderViewport;
}
```
This means that when the viewport is drawn, it does not request that the 3d world be rendered, but it still draws the 2d elements (such as the user interface), which MRQ needs in order to show the on-screen information widgets.
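bDisableWorldRendering is a public member of UGameViewportClient, so if you need the same behavior in your own code, a minimal sketch looks like this (the wrapper function is just for illustration):

```cpp
#include "Engine/World.h"
#include "Engine/GameViewportClient.h"

// Illustrative: stop the viewport from kicking off its own 3d world render
// while still letting it draw 2d/UI elements, mirroring what MRQ does above.
void SetWorldRenderingEnabled(UWorld* World, bool bEnabled)
{
    if (UGameViewportClient* Viewport = World ? World->GetGameViewport() : nullptr)
    {
        Viewport->bDisableWorldRendering = !bEnabled;
    }
}
```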
1) Yes. Both the in-game 3d world render and the Editor world render are skipped during renders; the only render of the world that should be happening is MRQ’s.
2) All renders everywhere in the engine are done to an intermediate pool of render targets, and the final step is to copy the finished image to a regular render target (ie: the player’s screen, or a texture that MRQ then reads back, etc.).
3) MRQ was built with a number of goals in mind: a) higher quality output, b) maintaining performance where possible when it doesn’t sacrifice image quality, c) being easy to configure, with defaults that “Just Work” for most use cases. I would not expect MRQ to be faster than regular gameplay, ie: if your game renders at 30fps, I would not expect you to be able to render in MRQ at 30fps, because more work is being done. It will be faster than repeatedly triggering HighResShot, though, because HighResShot doesn’t do any work asynchronously, and it would do the 4-frame warm-up for every single frame (while MRQ only does it once at the start of each shot).
4) There is not any, but it’s reasonably straightforward.
```cpp
// MoviePipeline.cpp
// Called at the start of the frame, before anything has been ticked.
FCoreDelegates::OnBeginFrame.AddUObject(this, &UMoviePipeline::OnEngineTickBeginFrame);
// Called at the end of the frame after everything has been ticked and rendered for the frame.
FCoreDelegates::OnEndFrame.AddUObject(this, &UMoviePipeline::OnEngineTickEndFrame);
```
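If you want to hook the same points from your own (non-UObject) code, a minimal sketch might look like this - the delegates are the real engine ones, the class is hypothetical:

```cpp
#include "Misc/CoreDelegates.h"

// Hypothetical example: FCoreDelegates::OnBeginFrame/OnEndFrame are the real
// engine delegates MRQ uses; everything else here is illustrative.
class FFrameHooks
{
public:
    void Register()
    {
        BeginHandle = FCoreDelegates::OnBeginFrame.AddRaw(this, &FFrameHooks::OnBeginFrame);
        EndHandle = FCoreDelegates::OnEndFrame.AddRaw(this, &FFrameHooks::OnEndFrame);
    }

    void Unregister()
    {
        FCoreDelegates::OnBeginFrame.Remove(BeginHandle);
        FCoreDelegates::OnEndFrame.Remove(EndHandle);
    }

private:
    void OnBeginFrame() { /* decide delta time, pick the sequence time, etc. */ }
    void OnEndFrame()   { /* request the world render for this frame */ }

    FDelegateHandle BeginHandle;
    FDelegateHandle EndHandle;
};
```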
The movie pipeline process hooks both the start and end of each engine frame. Before the engine ticks, MoviePipeline calculates what the delta time should be (ie: a 24fps movie would have a delta time of ~0.0417s, but the math is much more complicated when using temporal sub-sampling). As part of this calculation, it also figures out what time in the Level Sequence should be evaluated, and caches that information for later.
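As a worked example of that math - my own sketch, assuming the shutter-angle model that temporal sub-sampling is built around, where the shutter-open portion of the frame is split evenly across the temporal samples and the shutter-closed remainder is skipped in one jump:

```cpp
#include "CoreMinimal.h"

// Hypothetical illustration of per-tick delta time with temporal sub-sampling.
// At 24fps one output frame covers 1/24s (~0.0417s). With a 180 degree shutter
// and 8 temporal samples, the shutter is open for half the frame, so each
// sample ticks the world by (0.5 * 1/24) / 8 = ~0.0026s; the remaining
// half-frame is consumed by a single "shutter closed" tick.
double ComputeSampleDeltaTime(double FrameRate, double ShutterAngleDegrees, int32 TemporalSampleCount)
{
    const double FrameDuration = 1.0 / FrameRate;                    // 0.041666... at 24fps
    const double ShutterOpenFraction = ShutterAngleDegrees / 360.0;  // 0.5 at 180 degrees
    return (FrameDuration * ShutterOpenFraction) / TemporalSampleCount;
}
```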
The standard engine tick happens now, and before the world is ticked, the Level Sequence is evaluated (using the cached time).
The regular world tick happens. Actors have their Tick functions run, physics is run, animations update, particles move, etc.
The Viewport is rendered. No 3d world is rendered at this time, just the 2d elements.
That concludes the normal engine tick, and then Movie Pipeline’s OnEngineTickEndFrame runs:
A render for the current frame is requested. Post Processing happens as part of the world render.
The engine then moves on to the next frame while the GPU starts working on the previous one. When MRQ requests a render of the world, it associates a blob of state data with the render request - what frame of the sequence it was, etc. That way, when the asynchronous processes complete 2-3 frames later, we can match the pixel data up with the correct frame, to ensure that what comes out on disk saying “Frame 2” was actually what was rendered when the Level Sequence itself was also on Frame 2.
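The real payload and readback types live in the MovieRenderPipeline module; as a simplified sketch of the idea (all names here are illustrative, not the engine’s):

```cpp
// Illustrative only: the point is that the state travels with the render
// request, so the async result can be matched to the right sequence frame.
struct FRenderRequestPayload
{
    int32 SequenceFrameNumber = 0; // which Level Sequence frame this render represents
    int32 TemporalSampleIndex = 0; // which sub-sample within that frame
};

void OnReadbackComplete(const FRenderRequestPayload& Payload /*, pixel data... */)
{
    // 2-3 engine frames may have passed since the request was submitted, but
    // the payload still says this data belongs to Payload.SequenceFrameNumber,
    // so the file written to disk is labeled with the correct frame number.
}
```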