Sync of Movie Render Queue & Game Mode

Hello,

We are working with Sequencer, Movie Render Queue (MRQ), and Game Mode in Unreal Engine. The idea is to combine the output of both simultaneously: RGB frames from MRQ, and data extracted from the Game Mode about objects in the scene.

For our current experiment, synchronization between a Game Mode event function trigger and the corresponding frame output by the MRQ render is critical.

Background: Dynamic world data generated by the Game Mode, such as physics and interactions, needs to be aligned with every frame rendered by MRQ.

Requirement: To ensure that the data generated by a C++ function call from the Game Mode simulation aligns with every frame of the RGB images rendered by the Sequencer, we use the Sequencer's Event Track feature.

Some queries on trying to achieve this:

1. Event Repeater Track Executing Unexpectedly (A Suspected Bug with Sequencer Event Track)

The Sequencer’s Event Repeater Track fires five times before the first image starts saving, leading to unintended behavior.

Our script is added to the Event Repeater Track from the first frame to the last frame, yet it executes five times before the first frame is even reached.

What could be causing this behavior, and how can we make it reliable?

2. Viewport Resolution Mismatch During Simulation vs. Render

When rendering at 1920 × 1080, we noticed that the Viewport simulation and logged data are at 1920 × 1008 instead.

It seems that Unreal Engine automatically resizes the viewport during simulation even though we set the resolution explicitly in the settings.

Why does this resizing occur, and how can we ensure that the game viewport simulation runs at the exact resolution set in the render settings (e.g., 1920 × 1080)?

3. Delay in Camera Perspective Update with Project World to Screen

The Project World to Screen Unreal Blueprint function takes Player 0’s camera as input, which, during Sequencer rendering, corresponds to the current Cine Camera (e.g., Camera A).

However, when switching to a second Cine Camera (Camera B), the screen coordinates still appear to be derived from Camera A for a brief period before finally switching.

Why does this delay in perspective switch occur, and how can we ensure that Project World to Screen immediately reflects the new active camera's perspective when the camera cut happens in Sequencer?

Also, any additional suggestions or alternatives, such as using delegates to run both the Game Mode and the Level Sequencer in a combined, synced manner, would be helpful.

Please let me know if you require further context or technical details.


This in general is probably a bit of a tricky thing to do, because there is a varying delta time on the first frame before MRQ can start taking over, and the number of engine ticks for a given output frame will change depending on your MRQ settings, ie: additional warm-up frames or temporal sub-samples will increase the number of engine ticks.

> The Sequencer’s Event Repeater Track fires five times before the first image starts saving, leading to unintended behavior.

> Our script is added to the Event Repeater Track from the first frame to the last frame, yet it executes five times before the first frame is even reached.

> What could be causing this behavior, and how can we make it reliable?

I suspect this is by design. MRQ needs to do a fair amount of setup work on the first frame for renders and it involves jumping around in the Sequencer timeline. For example, when first opened we jump to the first frame, and then to simulate motion blur we jump to the second frame, and then back to the first frame. There is also a mix of Jump and Play commands issued (which may end up repeating evaluations).

The best way to work around this would be to not do events on the first frame. To do this in MRQ you need to extend your camera cut track to the left of your Playback Range Start, and then in the anti-aliasing settings choose “Render Warm Up Frames” (true), “Use Camera Cuts for Warm Ups” (true), “Render Warm Up Count” (0). This should, instead, make sequencer evaluate where the camera cut section starts, and then it will walk towards frame zero, not recording anything to disk until it reaches frame zero. It will then skip the motion blur emulation setup because it theoretically has “real” data from the previous frames before the 0th frame.
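As a minimal sketch, those three settings also map onto the scripting API; this assumes an existing UMoviePipelineExecutorJob* named Job, and the property names are taken from UMoviePipelineAntiAliasingSetting (the same values can be set in the MRQ UI):

```cpp
#include "MoviePipelineQueue.h"               // UMoviePipelineExecutorJob
#include "MoviePipelineAntiAliasingSetting.h" // UMoviePipelineAntiAliasingSetting

// Illustrative helper: apply the warm-up configuration described above to a job.
static void ConfigureWarmUp(UMoviePipelineExecutorJob* Job)
{
	UMoviePipelineAntiAliasingSetting* AA = Cast<UMoviePipelineAntiAliasingSetting>(
		Job->GetConfiguration()->FindOrAddSettingByClass(
			UMoviePipelineAntiAliasingSetting::StaticClass()));

	AA->bRenderWarmUpFrames = true;     // "Render Warm Up Frames"
	AA->bUseCameraCutForWarmUp = true;  // "Use Camera Cuts for Warm Ups"
	AA->RenderWarmUpCount = 0;          // "Render Warm Up Count"
}
```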

> When rendering at 1920 × 1080, we noticed that the Viewport simulation and logged data are at 1920 × 1008 instead.

> It seems that Unreal Engine automatically resizes the viewport during simulation even though we set the resolution explicitly in the settings.

This may be the size of the title bar for the window, and you may need to simply offset the requested resolution by a fixed amount; otherwise you would need to look through the window creation code in PlayLevel.cpp.

> 3. Delay in Camera Perspective Update with Project World to Screen

You would need to debug through the implementations of these functions. When the frame starts, Sequencer is evaluated before anything else in the world, and the Camera Cut track should be evaluated. It requests that the Player Camera Manager change view targets, and what is rendered to screen is based on the player's view target. You will have to look to see what data is out of date; the Player Camera Manager should be updated with the new view target at this point, and the output resolution isn't changing, so I don't think there would be any wrong values being provided to the projection function in terms of the screen.
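For reference, a minimal sketch of the call path in question (the helper, WorldContextObject, and TargetLocation are illustrative placeholders); breaking here on the first frame after the cut is a reasonable way to see which piece of state is stale:

```cpp
#include "Kismet/GameplayStatics.h"
#include "GameFramework/PlayerController.h"
#include "Camera/PlayerCameraManager.h"

// Illustrative helper: project a world-space point for Player 0 and return
// the view target the projection was based on.
static AActor* DebugProjectForPlayerZero(UObject* WorldContextObject,
	const FVector& TargetLocation, FVector2D& OutScreenPos)
{
	APlayerController* PC = UGameplayStatics::GetPlayerController(WorldContextObject, 0);

	// The "Project World to Screen" Blueprint node wraps this static function.
	UGameplayStatics::ProjectWorldToScreen(PC, TargetLocation, OutScreenPos);

	// After the camera cut, this should already be the new Cine Camera.
	return PC->PlayerCameraManager->GetViewTarget();
}
```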

It’s not really clear to me what the goal is here, but here are some ideas that may or may not help;

1) Use Take Recorder to record your simulation data instead, and then use the recorded data tracks in Sequencer. This will ensure synchronization, as Sequencer will be controlling and playing back all of the data at once.

2) Record your data separately, and shift it after the fact. You can use the Console Variables setting in MRQ to add a "Start Console Command". If you use "ke * MyFunctionName" as the command, and then place a Blueprint instance in the world and give it a no-parameter/no-return function named MyFunctionName, it should be called by MRQ (see the sketch below). This should only happen once for a given render, and it should happen reasonably predictably, though I believe it still executes before all of the warm-up related frames in MRQ, so you'll still need to shift your two sets of data after the fact to align them. But that could at least give you a synchronization point between the two data streams.
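As a sketch, the receiving side could also be a C++ actor placed in the level instead of a Blueprint (the class name and log text are illustrative; this assumes "ke" can invoke a plain UFUNCTION the same way it invokes a Blueprint custom event):

```cpp
#include "GameFramework/Actor.h"
#include "MySyncMarker.generated.h"

UCLASS()
class AMySyncMarker : public AActor
{
	GENERATED_BODY()
public:
	// Invoked by the "ke * MyFunctionName" start console command issued by MRQ.
	// Must take no parameters and return nothing.
	UFUNCTION(BlueprintCallable, Category = "Sync")
	void MyFunctionName()
	{
		UE_LOG(LogTemp, Log, TEXT("MRQ render starting - synchronization point"));
	}
};
```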

Hello Matt,

Warm-Up Frames: I added warm-up frames in MRQ as you mentioned, and noticed that the first 3 frames need to be skipped to avoid a sync mismatch. These comprise the 2 pre-render engine setup frames you mentioned and one frame before the start. (Tested with multiple random cases; it's 3 frames every time, so this can be treated as solved by skipping 3 frames.)

Viewport Resolution Issue & Event Repeater CSV Logging Context: (Pending Main Issue)

I suppose the query related to this was not conveyed clearly earlier. I am adding some more info and context:

The resolution is unexpectedly changing during rendering and doesn’t align with the resolution explicitly set in MRQ. I’ve configured MRQ to render at 800x600, and this is correctly reflected in the shot setup logs:

```
LogMovieRenderPipeline: Finished setting up rendering for shot. Shot has 1 Passes. Total resolution: (800x600) Individual tile resolution: (800x600). Tile count: (1x1)
```

However, during execution, the viewport automatically resizes to 1920x1008, as shown in the logs:

```
LogViewport: Scene viewport resized to 1920x1008, mode Windowed.
LogWindows: resolution: 1920x1080, work area: (0, 0) -> (1920, 1008), device: '\\.\DISPLAY17' [PRIMARY]
```

This resolution change happens after render execution begins, and it causes a mismatch between expected render resolution and the actual player viewport dimensions.

Context:

We've added a function that triggers every frame via the Sequencer Event Repeater, logging some data. The logged data depends on the viewport size, so the change in viewport resolution affects its accuracy.

Here's a snip from the viewport settings (Editor Settings -> Play) where the dimensions get set back to 1920x1008 after rendering, even though the resolution set was 800x600.

Camera Perspective Shift Issue: As per your inputs, I tried Take Recorder and also checked PlayLevel.cpp to find the player controller's current camera.

Take Recorder: a) Doesn't seem to record the actor component attached to the actor (which we need); b) Creates a non-editable sequence; c) Doesn't account for the data inconsistency issue when the camera changes within a sequence.

So, we are thinking of creating a separate sequence in MRQ ourselves for each camera.

Engine version used: 5.2 (Vanilla version)

Regards,

Indrajith

If your issue is with the PIE window created during the MRQ render, then please set the Resize PIE Window to Output Resolution project setting;

[Image Removed]

This may limit your rendering resolution to the size of the screen being used when rendering, and very large resolutions may cause issues, but it will request the PIE window that is created be the same size as the output resolution specified by the job.

> b) Creates a non-editable sequence;

You can unlock these sequences by clicking on the icon in the top-right; they're designed to be read-only by default to prevent accidentally changing them (as they may be in sync with external mocap data).

Are you saying that the file you end up with on disk is only 1920x1008? Can you upload one of the frames after setting MRQ to output in multi-layer EXR format? I'd like to have a look at the metadata and data window.

I am referring to the size of the window created by the Movie Render Queue pop-up. This is a PIE session (in a new window); as far as I am aware, we do not intend to support the Movie Render Queue PIE Executor starting inside the Level Editor viewport - it is only intended to be used as a pop-up window. If you are using the runtime version of Movie Render Queue (ie: not using the Window > Cinematics > Movie Render Queue UI, but instead triggering it after the game has started), then MRQ has no control over the actual viewport - it renders entirely to an off-screen texture, and the game's viewport does not currently matter. Your use case is unusual and not something that MRQ would account for.

> LogMovieRenderPipeline: Expected size: 540 x 960 (Render Job Resolution)

> LogMovieRenderPipeline: Actual size: 544 x 960 (?)

> LogViewport: Scene viewport resized to 540x960, mode Windowed. (?)

This is not entirely expected, but also not unexpected. When creating textures on the GPU, the GPU may decide to pad their sizes up to a 16-pixel alignment. 540/16 = 33.75, which the GPU can't represent, so it pads the width up to 544 (16*34). MRQ is designed to handle this case: it takes only the top-left 540x960 pixels out of the image, as the extra 4 pixels on the edge exist on the GPU texture but were never written to.
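The padding is a simple align-up; as a one-line illustration:

```cpp
// Align the requested width up to a 16-pixel boundary: 540 -> 544.
constexpr int32 Alignment = 16;
constexpr int32 Padded = ((540 + Alignment - 1) / Alignment) * Alignment; // = 544
```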

What is a bit unexpected here is that you are seeing that warning, as the only place I see that prints it in code is a safety check after MRQ has supposedly handled the padding, and failing to handle it at that point should have been fatal and caused the application to close. Do you have a minimal repro case that I can run that would cause this log to show up?

> Is this behavior expected? Is a small portion of the screen (bottom UI bar or window) always reserved, making it impossible to hit full monitor resolution (even with the fullscreen option checked)?

There are generally limitations in place to prevent the client from creating a window larger than their monitor, as this is usually a mistake (ie: the user has switched monitors since the last time they used the application) and if the application retained the original size it would prevent the user from being able to change it again.

There is a command line parameter, "-ForceRes", but it is not clear from looking at the code whether it applies to the editor; it may only apply to game mode. You could try launching the editor with "UnrealEditor.exe <path to uproject.uproject> -ForceRes -ResX=3840 -ResY=2160 -Windowed" and see if that allows subsequent PIE windows to be created at the full size (even though they would fall off the screen). If the editor does not support it, you may be able to make it work with "-game" mode, which would be: "UnrealEditor.exe <path to uproject.uproject> -game -ForceRes -ResX=3840 -ResY=2160 -Windowed". This is not an ideal scenario - you would need to change how you invoke Movie Render Queue, and you may see small visual differences between rendering in the editor versus rendering in -game - but it is worth a shot to see if it gives you the window size you need for the pixel-point/object precision you're trying to calculate.

Otherwise, as stated earlier, you will need to debug the code to see where the clamp is coming from and possibly make an edit to the engine. This is an unusual use case and not something the editor is designed to support.

Unfortunately, I think you would need to debug through the window creation code to find where the Windows taskbar limitation is coming from. I suspect this isn't a very common case - it's rather unusual to have a window the full size of the screen that isn't actually fullscreen, and there's generally no reason for PIE windows to be fullscreen.

In the UMoviePipelinePIEExecutor::Start function, where it creates the custom window to be used by the PIE session:

```cpp
// Create a Slate window to hold our preview.
TSharedRef<SWindow> CustomWindow = SNew(SWindow)
	.ClientSize(WindowSize)
	.AutoCenter(EAutoCenter::PrimaryWorkArea)
	.UseOSWindowBorder(true)
	.FocusWhenFirstShown(true)
	.ActivationPolicy(EWindowActivationPolicy::Never)
	.HasCloseButton(true)
	.SupportsMaximize(true)
	.SupportsMinimize(true)
	.SizingRule(ESizingRule::UserSized);

WeakCustomWindow = CustomWindow;
FSlateApplication::Get().AddWindow(CustomWindow, !IsRenderingOffscreen());
```

Looking through SWindow’s arguments, there’s one called “SaneWindowPlacement”, which has this description:

```cpp
/** If the window appears off screen or is too large to safely fit this flag will force realistic
	constraints on the window and bring it back into view. */
SLATE_ARGUMENT( bool, SaneWindowPlacement )
```

It defaults to true. You could try changing the engine code for the Movie Pipeline PIE Executor to set it to false, to see if that is what is causing the window to be clamped to the screen area not counting the taskbar.
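That experiment would look something like this - a sketch of the edited SNew chain in UMoviePipelinePIEExecutor::Start, not a confirmed fix:

```cpp
TSharedRef<SWindow> CustomWindow = SNew(SWindow)
	.ClientSize(WindowSize)
	.AutoCenter(EAutoCenter::PrimaryWorkArea)
	.UseOSWindowBorder(true)
	.FocusWhenFirstShown(true)
	.ActivationPolicy(EWindowActivationPolicy::Never)
	.HasCloseButton(true)
	.SupportsMaximize(true)
	.SupportsMinimize(true)
	.SizingRule(ESizingRule::UserSized)
	.SaneWindowPlacement(false); // experiment: skip the "bring back into view" clamp
```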

> I suppose the same frame is being rendered twice? (The output log showing two resolutions also seems to confirm this.)

Yes and no. The regular game viewport is being rendered, but MRQ disables rendering of the 3d world within that viewport (which is why the background of the PIE session is black, despite the debug widget not actually providing a background). So the actually intensive part of rendering is skipped during the rendering of the view. In general, the rendering system pools render target resources (ie: there isn't a separate set of render targets for the game world distinct from the MRQ render; the same ones are re-used, since the only unique output between renders is a single full-screen image and not the full heavy gbuffer). So there shouldn't really be any notable performance gains left on the table; the 3d render is the heaviest performance component. Additionally, the world is only ticked once per frame - MRQ just requests a different render of the world after it has been updated by the game.

A realtime renderer like Unreal relies on the previous frame's information to build the current frame. This means that to create a single screenshot (like HighResShot does), the engine actually forcibly renders additional frames at the requested resolution. The default value is 4 (specified by "r.HighResScreenshotDelay"), so when you request a screenshot at 4k, you're actually requesting five 4k renders on a single frame. There are no mechanisms within HighResShot to make "good" frames - it simply resizes the internal resolution of the renderer and then reads the data back, synchronously, at the end of the 5 renders. This feature pre-dates most of UE4's advanced rendering techniques (such as TAA/TSR, Lumen, Nanite, Virtual Shadow Maps, etc.).
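For reference, the console usage being described (the first line requests a one-off 4k screenshot; the second shows the delay CVar at its default value):

```
HighResShot 3840x2160
r.HighResScreenshotDelay 4
```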

MRQ on the other hand has a fairly significant amount of code designed for efficient, multi-frame renders. MRQ can do 1 frame long renders (if you just need screenshots) but it is designed first and foremost for multi-frame renders. It adds several things that HighResShot does not;

1) Longer warm-up periods for better visual history (ie: I believe it defaults to 32)

2) Flushes any outstanding shaders, materials, distance field meshes, nanite mesh builds, etc. at the end of each frame (before the render) to ensure there is nothing missing for a given frame.

3) Sets a number of CVars by default when rendering starts to improve quality, ie: disabling LODs, setting Cinematic Scalability levels, etc.

4) Disables Texture Streaming by default

5) Uses an asynchronous readback from the GPU, ie: on Frame 1, the render for Frame 1 is sent off to the GPU, but it normally doesn't get worked on until Frame 2, and only by the start of Frame 3 is the data ready and available for readback. HighResShot effectively blocks the entire game thread until any existing GPU work is done, then submits the higher-res render, then waits for the CPU side of that to finish, and then has to wait for all of the GPU side to finish. MRQ is based on how regular rendering works, which allows overlapping this work and a greater degree of concurrency.

You can look at MoviePipelineGameOverrideSetting.cpp, and MoviePipelineRendering.cpp to get an idea of what MRQ disables, and what per-frame flushing it does (see UMoviePipeline::FlushAsyncEngineSystems).

As indicated in my previous post, the Viewport has world rendering disabled;

```cpp
// MoviePipeline.cpp
if (UGameViewportClient* Viewport = GetWorld()->GetGameViewport())
{
	Viewport->bDisableWorldRendering = !ViewportInitArgs.bRenderViewport;
}
```

This means that when the viewport is drawn, it does not request that the 3d world is rendered, but still does the 2d elements (such as the user interface), which MRQ needs to be able to show the on-screen information widgets.

1) Yes. Both the in-game 3d world render and the Editor world render are skipped during renders; the only render of the world that should be happening is MRQ's.

2) All renders are done to an intermediate pool of render targets everywhere in the engine, and the final step is to copy the final image to a regular render target (ie: the player’s screen, or a texture that MRQ then reads back, etc.).

3) MRQ was built with a number of goals in mind: a) higher quality output, b) maintaining performance where possible when it doesn't sacrifice image quality, c) being easy to configure, with defaults that "Just Work" for most use cases. I would not expect MRQ to be faster than regular gameplay, ie: if your game renders at 30fps, I would not expect you to be able to render in MRQ at 30fps, because more work is being done. It would be faster than repeatedly triggering HighResShot, because HighResShot doesn't do any work asynchronously and would do the 4-frame warm-up for every single frame (while MRQ only does it once at the start of each shot).

4) There is not any official documentation for this, but the flow is reasonably straightforward.

```cpp
// MoviePipeline.cpp
FCoreDelegates::OnBeginFrame.AddUObject(this, &UMoviePipeline::OnEngineTickBeginFrame);
// Called at the end of the frame after everything has been ticked and rendered for the frame.
FCoreDelegates::OnEndFrame.AddUObject(this, &UMoviePipeline::OnEngineTickEndFrame);
```

The movie pipeline process hooks both the start and end of each engine frame. Before the engine ticks, MoviePipeline calculates what the delta time should be (ie: a 24fps movie would have a delta time of 0.041s, but the math is much more complicated when using temporal sub-sampling). As part of this calculation, it also figures out what time in the Level Sequence should be evaluated, and caches that information for later.

The standard engine tick happens now, and before the world is ticked, the Level Sequence is evaluated (using the cached time).

The regular world tick happens. Actors have their Tick functions run, physics is run, animations update, particles move, etc.

The Viewport is rendered. No 3d world is rendered at this time, just the 2d elements.

That concludes the normal engine tick, and then Movie Pipeline’s OnEngineTickEndFrame runs;

A render for the current frame is requested. Post Processing happens as part of the world render.

The engine then moves onto the next frame while the GPU starts working on the previous frame. When MRQ requests a render of the world, it associates a blob of state data with the render request - what frame of the sequence it was, etc. That way when the asynchronous processes complete 2-3 frames later, we can match up the pixel data with the correct frame, to ensure that what comes out on disk saying “Frame 2” was actually what was rendered when the Level Sequence itself was also on Frame 2.
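If you want your own per-frame data capture to line up with this flow, one option - a sketch, not MRQ API; the class name is illustrative - is to hook the same engine delegates from a subsystem and stamp your Game Mode data in OnEndFrame, after the world tick and alongside MRQ's render request:

```cpp
#include "Subsystems/GameInstanceSubsystem.h"
#include "Misc/CoreDelegates.h"
#include "FrameSyncSubsystem.generated.h"

UCLASS()
class UFrameSyncSubsystem : public UGameInstanceSubsystem
{
	GENERATED_BODY()
public:
	virtual void Initialize(FSubsystemCollectionBase& Collection) override
	{
		// Fires before Sequencer evaluation and the world tick.
		FCoreDelegates::OnBeginFrame.AddUObject(this, &UFrameSyncSubsystem::HandleBeginFrame);
		// Fires after the world tick and viewport draw, when MRQ requests its render.
		FCoreDelegates::OnEndFrame.AddUObject(this, &UFrameSyncSubsystem::HandleEndFrame);
	}

	virtual void Deinitialize() override
	{
		FCoreDelegates::OnBeginFrame.RemoveAll(this);
		FCoreDelegates::OnEndFrame.RemoveAll(this);
	}

private:
	uint64 FrameIndex = 0;

	void HandleBeginFrame() { /* pre-tick bookkeeping if needed */ }

	void HandleEndFrame()
	{
		// Stamp or flush per-frame data here so it can be matched to output frames.
		UE_LOG(LogTemp, Verbose, TEXT("Sync frame %llu"), FrameIndex++);
	}
};
```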

Thanks for the suggestions.

I've now tried enabling "Resize PIE Window to Output Resolution" in the project settings, but unfortunately that didn't resolve the issue: the MRQ render still outputs at a resolution of 1920x1008 rather than the job's specified 1920x1080 output resolution.

Also, the sequence was already unlocked via the icon in the top-right.

```
LogViewport: Scene viewport resized to 1920x1008, mode Windowed.
LogMovieRenderPipeline: Finished setting up rendering for shot. Shot has 1 Passes. Total resolution: (1920x1080) Individual tile resolution: (1920x1080). Tile count: (1x1)
```

Hi Matt, Clarifying the scenario here:

1) There is no problem with the image output; I have control over the MRQ image output resolution (say, 1920x1080, 1920x1008, etc. - no problem there).

2) The problem is the viewport resolution (getting auto-reset as discussed earlier), which gets set back to 1920x1008.

Because the viewport is not getting set to the specified resolution (the same as MRQ's - say, 1920x1080, 4K, etc.), there is a mismatch between the pixel-space data extracted from the MRQ image output and the data coming out of the Game Mode (viewport).

As expected, when the MRQ image output is set to the resolution the viewport defaults to (1920x1008), the data matches. But with any other MRQ output resolution, the data mismatches, as the viewport resizes back to a resolution different from the one set.

[Image Removed]

If you have set the "Resize PIE Window to Output Resolution" setting to true in the Project Settings, then as far as I can tell the window does get resized as requested, ie:

[Image Removed]

If I request a render at 540x960, the resulting PIE window is ~540x992, but the actual viewport area given to the game window is the expected 540x960 (see the bottom corner of the screenshot for the resolution denoted by the red rectangle).

[Image Removed]

If I request a render at 1920x1080, I get similar results:

[Image Removed]

(This feature does not work with Movie Render _Graph_ configurations right now, but does work with the default Movie Render Queue).

If setting the output resolution to 540x960 with Resize PIE Window to Output Resolution enabled does not result in a tall vertical PIE window, then there is some other code involved that is fighting MRQ and you will need to debug. You can start with UMoviePipelinePIEExecutor::Start to verify what size window is being requested:

```cpp
TSharedRef<SWindow> CustomWindow = SNew(SWindow)
	.ClientSize(WindowSize)
	.AutoCenter(EAutoCenter::PrimaryWorkArea)
	.UseOSWindowBorder(true)
	.FocusWhenFirstShown(true)
	.ActivationPolicy(EWindowActivationPolicy::Never)
	.HasCloseButton(true)
	.SupportsMaximize(true)
	.SupportsMinimize(true)
	.SizingRule(ESizingRule::UserSized);
```

If that is showing the correct resolution but its resolution is being changed after it is loaded, then you will need to look into SWindow::Resize or SWindow::ReshapeWindow to see if some other code is calling those after PIE starts and resizing the window again.

You should also try setting your DPI scaling for your monitor in Windows to 100% to ensure the mismatch is not related to HDPI support.

1. Understanding the distinction between PIE, Viewport, and Actual Render Size

  • When you say viewport, are you referring to:
    • a) the actual Game Editor viewport, or
    • b) the PIE (Play In Editor) Movie Render Preview window? (In this case, the game viewport simulation happens as the Movie Render preview.)

Observation:

  • In the logs, I see the following resolution entries:

```
LogMovieRenderPipeline: Expected size: 540 x 960 (Render Job Resolution)
LogMovieRenderPipeline: Actual size: 544 x 960 (?)
LogViewport: Scene viewport resized to 540x960, mode Windowed. (?)
```

Even when both the viewport and the final rendered image are exactly 540x960, the Movie Render Pipeline (MRQ) logs an Actual size of 544x960, which is not reflected in the saved image or the viewport view.

  • Why is there a mismatch between Expected and Actual size even when the output is visually correct, and what does this Actual size refer to?

2. Limitations with Monitor Resolution (Even with Rendering at Fullscreen)

The following settings are ensured:

  • DPI scaling is set to 100%

  • “Resize PIE Window to Output Resolution” is enabled

  • When rendering at a resolution lower than the monitor resolution, everything works. But when trying to render at the exact resolution of the monitor, we observe the following:

On a 1080p Monitor:

  • Render job: 1920x1080
  • LogViewport shows:
    • Scene viewport resized to 1920x1008 (Windowed)
    • Scene viewport resized to 1920x1032 (Fullscreen)

On a 4K Monitor:

  • Render job: 3840x2160
  • LogViewport shows:
    • Scene viewport resized to 3840x2112 (Windowed)

Rendering anything below the full resolution, say 1024x720 on a 1080p monitor, doesn't produce this issue.

  • Is this behavior expected? Is a small portion of the screen (bottom UI bar or window) always reserved, making it impossible to hit full monitor resolution (even with the fullscreen option checked)?
  • Is using a larger monitor (higher resolution than the target render) the only way to hit an exact resolution like 1920x1080 or 3840x2160?
  • Is there a known workaround to force Unreal to use the full screen resolution without losing these vertical pixels?

Hello Matt,

I was held up with some other work and couldn't check and get back sooner.

OK, so regarding the mismatch of resolution between the MRQ render and the game viewport: the issue is sorted now. It was actually the Windows taskbar that was causing the limitation on vertical resolution. Hiding the taskbar in Windows settings fixes the issue and allows us to make use of the full monitor resolution. Now both resolutions match, and hence the information from the scene is also coming out correctly!

One follow-up question on same context:

  1. Since hiding the taskbar fixes the issue, can any fixes or overrides be made so that the "set viewport size" parameter works correctly? Simple rendering windows created with, say, GLFW seem to generate a viewport at the specified dimensions.
  2. In this case, as we are rendering from MRQ (image output) and extracting screen-space coordinate data from the Game Mode (viewport), I suppose the same frame is being rendered twice? (The output log showing two resolutions also seems to confirm this.) If so, is there any possibility of making the engine use the same buffers in the interest of performance and resource management? Currently the workflow seems to do everything twice (once for MRQ and once for Game Mode, though simultaneously).

Hi Matt,

1. Will try the source code modification you mentioned and get back to you on the same.

2. Clarification on MRQ Render Pipeline Optimization vs Game Mode HighResShot Capture

We're evaluating the performance and rendering-fidelity differences between Game Mode capture using HighResShot and Movie Render Queue (MRQ) under different rendering scenarios, including Lumen, Ray Tracing, and Path Tracing.

From testing, we’ve observed that:

  • Game Mode (HighResShot) introduces redundant overhead, likely due to rendering the real-time 3D scene and executing the capture on top of it, resulting in texture streaming issues, frame inconsistencies, and instability under load.
  • MRQ, in contrast, seems to bypass the real-time viewport's 3D rendering, as indicated by the black background during PIE sessions and the improved rendering consistency.

We understand from discussions and forums that:

“The regular game viewport is being rendered, but MRQ disables rendering of the 3D world within that viewport. The actual heavy 3D render is skipped during that view. Render targets are pooled and reused, and the world is only ticked once per frame — MRQ simply requests a different render after the update.”

Based on this, we’d like to confirm and expand on the following points:

  1. Does MRQ completely skip rendering the 3D scene in the game viewport during capture, and instead invoke a separate off-screen pass for the final output?
  2. Are the render targets reused between Game Mode and MRQ, and does MRQ avoid duplicating the full G-buffer generation? Is this understanding correct?
  3. Is the perceived performance efficiency in MRQ due to elimination of this redundant rendering path and resource reuse?
  4. Can you share or point us to any official internal flow diagrams or documentation showing how MRQ interacts with the world tick, render passes, and post-processing stages compared to Game Mode?