Inside Unreal: Movie Render Queue Enhancements in 4.26

Unreal 4.26 brings several enhancements to the Movie Render Queue. With a new improved interface, it is now easier to visualize your queued camera shots, allowing you to choose which camera sequence you want included in your final outputs. This week on Inside Unreal, Matt and Andy will give you a crash course and cover advanced topics such as post-production workflows, python scripting, best practices and troubleshooting tips!

Thursday, January 21 @ 2:00PM ET


Matt Hoffman - Sequencer Programmer
Andy Blondin - Senior Product Manager - @andyblondin
Victor Brodin - Community Manager - @victor1erp

Slide Deck
Movie Render Queue Overview
Movie Render Queue Render Passes




Thank you, guys!
Such an important topic, let’s go!

Hi, where can I find this software?

Please ask the person presenting to take a small moment to show their camera settings, if anything so we can take screenshots and replicate the effect.

Are there plans to support other, simpler export codecs, for projects that don’t exactly require ultimate quality on export? I don’t need 2000 frames of single PNG files, nor a 5GB+ uncompressed video. Why isn’t there support for something as basic as MP4 video, or WebM ?

Wow… waiting for this

MP4 is an inter-frame codec that calculates values across multiple frames… it’s actually more complicated (not simpler) than PNG files, and the best results are derived when it’s rendered through a video encoding tool like ffmpeg (free/cross-platform), Adobe Media Encoder (cross platform PC/Mac), or Compressor (Mac).

To get smaller files, you could try rendering to AppleProRes Proxy, but you’d likely get better results (and compression) by rendering to JPG files from Unreal, then use an encoding tool to create the MP4 video from the frame files. Could probably even craft an “End Console Command” to launch the encoding process automatically after the UE render.
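As a sketch of that JPG-then-encode approach (the frame pattern, frame rate, and output name below are illustrative assumptions, not something Unreal produces by default), the ffmpeg command could be assembled and launched like this:

```python
import subprocess

def build_ffmpeg_cmd(pattern, fps, out_file):
    """Build an ffmpeg command that encodes an image sequence to H.264 MP4.

    pattern  -- printf-style frame pattern, e.g. "render.%04d.jpg"
    fps      -- frame rate the sequence was rendered at
    out_file -- output file name, e.g. "out.mp4"
    """
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input frame rate of the image sequence
        "-i", pattern,            # image-sequence input
        "-c:v", "libx264",        # H.264 encoder
        "-pix_fmt", "yuv420p",    # widest player compatibility
        out_file,
    ]

# Run the encode (requires ffmpeg on PATH):
# subprocess.run(build_ffmpeg_cmd("render.%04d.jpg", 30, "out.mp4"), check=True)
```

The same command line could be wired into an "End Console Command" so the encode kicks off automatically when the UE render finishes.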

ProRes or DNx is your answer. It’ll also give you much more performance while playing and editing the file, vs. highly compressed files such as H264.

So the problem I faced while using Movie Render Queue: camera animations work fine during sequence export, but if any media is used in the project (for example, a Media Player Blueprint on a plane static mesh), it ends up playing back much faster than the original footage used for the media player. Anyone know the reason?

I installed SketchUp Pro 2021 and Twinmotion won't open files from this version; with SketchUp Pro 2020 it works. What should I do to be able to use Twinmotion with the new SketchUp?

Looking Forward!

Can’t wait for this one!

Would love to see in-depth of Mask IDs and their workflow! Can’t wait!

The slide deck can be downloaded here. Thanks for watching!

Great job on this one - the slides alone are a ton of great information. I highly recommend watching it. I’ll be grabbing the transcript when it’s available 🙂

I saw the link on the marketplace and was like, “Ooh! The helicopter project!” hahah

Could someone please be so kind as to explain how I get my level and the level sequence of choice into those soft object paths, as described in the Using Movie Render Queue in Runtime Builds documentation?

A Soft Object Path can be obtained from an object using the “GetPath” and “Make Soft Object Path” functions. Unfortunately, getting the level path through Blueprints can be a little tricky, as levels are not exposed very well to Blueprints.

The easiest way would be to right click on your level and level sequence in the Content Browser and choose Copy Reference. This will give you something like:

For the soft object path you need to remove the prefix (and trailing ') like so:

Then you can use these in the Make Soft Object Path function.

If you have a reference to your level sequence you could just use GetPath/MakeSoftObjectPath instead of hard coding it. If you need the level to be dynamic, then you will have to keep track of which level is loaded due to limitations in the Blueprint API when working with levels.
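The prefix-stripping step above can be sketched in plain Python (the helper name and sample references are illustrative; a Copy Reference string looks like `World'/Game/Maps/MyMap.MyMap'`):

```python
def to_soft_object_path(copy_reference: str) -> str:
    """Strip the class prefix and trailing quote from a Copy Reference string.

    e.g. "World'/Game/Maps/MyMap.MyMap'" -> "/Game/Maps/MyMap.MyMap"
    """
    ref = copy_reference.strip()
    # The prefix is everything up to (and including) the first single quote.
    if "'" in ref:
        ref = ref.split("'", 1)[1]
    # Drop the trailing single quote, if present.
    return ref.rstrip("'")

print(to_soft_object_path("World'/Game/Maps/MyMap.MyMap'"))
# -> /Game/Maps/MyMap.MyMap
```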

There is a mistake in the documentation regarding the Nuke node for creating Depth of Field in post. The revised code is:

set cut_paste_input [stack 0]
version 12.0 v3
push $cut_paste_input
add_layer {FinalImageMovieRenderQueue_WorldDepth FinalImageMovieRenderQueue_WorldDepth.alpha}
ZDefocus2 {
math depth
fill_foreground false
center {{"\[metadata exr/unreal/camera/FinalImage/focalDistance]"}}
focal_point {960 540}
size {{"((input.height*(focalLength*focalLength / (fstop * (focalDistance - focalLength)))*.5 / sensorWidth)/10)" x1 26}}
max_size 100
filter_type bladed
legacy_resize_mode false
show_legacy_resize_mode false
blades {{"\[metadata exr/unreal/camera/FinalImage/dofDiaphragmBladeCount]"}}
name ZDefocus1
selected true
xpos 959
ypos 229
addUserKnob {20 User}
addUserKnob {7 focalLength l "Focal Length"}
focalLength {{"\[metadata exr/unreal/camera/FinalImage/focalLength]"}}
addUserKnob {7 focalDistance l "Focal Distance"}
focalDistance {{"\[metadata exr/unreal/camera/FinalImage/focalDistance]"}}
addUserKnob {7 sensorWidth l "Sensor Width"}
sensorWidth {{"\[metadata exr/unreal/camera/FinalImage/sensorWidth]"}}
addUserKnob {7 fstop l Fstop}
fstop {{"\[metadata exr/unreal/camera/FinalImage/fstop]"}}
}

The online documentation will be updated soon, but in the meantime you can use this. This changes the size formula and wires the focal plane and blade count to the EXR metadata.
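For anyone who wants to sanity-check the node, the revised size expression translates to ordinary arithmetic. A minimal sketch (the sample lens values below are illustrative, not from the stream):

```python
def defocus_size(height, focal_length, fstop, focal_distance, sensor_width):
    """Mirror the ZDefocus size expression from the node above:
    ((height * (f^2 / (N * (D - f))) * 0.5 / sensorWidth) / 10)

    Units follow the EXR metadata: focal length, focal distance, and
    sensor width in millimetres; height in pixels.
    """
    aperture = (focal_length * focal_length) / (fstop * (focal_distance - focal_length))
    return (height * aperture * 0.5 / sensor_width) / 10.0

# Example: 1080p render, 50mm lens at f/2.8 focused at 1m (1000mm)
# on a 36mm-wide sensor:
print(round(defocus_size(1080, 50.0, 2.8, 1000.0, 36.0), 2))  # ~1.41
```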

But I don’t have ridiculous hard drive space to hold thousands of individual frames, and I don’t care if the format isn’t 100% lossless. This isn’t for a theater or a game; it’s not going to be seen at 8K or 4K, or even 2K. This is a 1080p video I’m trying to make for a Twitch intro, of all things. It’s frustrating that there’s never a “low end” solution for exporting video.

There’s already a lot of issues I’m having with what I’m trying to achieve. I’m bringing 2D animated characters into UE4 via mediaplayer textures. The videos include color and alpha side-by-side and I offset the UVs in the material so that they stay in sync during playback (I tried them individually and it was a disaster). But sequencer constantly gets out of sync with the media textures. Then on top of that, the videos I render out of UE4, I can’t even bring into AfterEffects for post work.

I understand, these solutions you’re all offering are the “ideal” solutions, highest quality, etc. It’s just frustrating that what’s seemingly a low end solution, isn’t even an option.

Also, I don’t even see those export options in Unreal? Is that only in the new engine update? This is all I have:

After some digging, I found that the Apple and DNx formats are plugins you have to enable. Got it. The ProRes was no dice for me, but that DNx format looks like it could be a winner. Thank you, OlaHaldor!
Now I just need to find a solution to the sequencer desync issues 🙁

One thing you can try is ensuring you use an image sequence as the source media, potentially uncompressed EXRs. Many video codecs have problems seeking to very specific frames, and Sequencer seeks to each frame individually due to the (potentially long) period of time between renders. Using an image sequence as the source increases the chance that it will be able to seek to the correct frame.