In this tutorial, we will cover the basics of the Movie Render Graph, talk about the different nodes, and show how to start rendering your shots in layers.
https://dev.epicgames.com/community/learning/tutorials/LpaB/unreal-engine-using-the-movie-render-graph-to-render-in-layers
In the legacy Movie Render Queue there was an option to render on frame steps. Is there a way to do this using the Movie Render Graph? It is not immediately apparent. Thanks!
No, but I will put in a request to see if we can get it into 5.5.
Hi Shaun,
That would be great. Just to fill out the thread, here are a few ideas I'm hoping make it into a future version.
- Rather than start frame and end frame, maybe just allow a field that accepts a simple syntax to make this easier (a parsing sketch follows below). One could use 1-100 for a continuous range, 1-100:2 for the same range on twos, or "10,15,37,45" for just those frames. The renderer could still run through the whole range so that sampling and motion blur stay accurate, but it would only write out those frames.
There are several reasons why the above is important. While render time isn't as expensive in real-time, path tracing can be. Moreover, you often don't want to hold onto all those frames early on in review, so people can use steps early on to limit that. As for arbitrary individual frames: a great way to communicate the status of a cinematic, for example, is with a contact sheet. Contact sheets can be representative frames from each shot that give a bird's-eye view of the state of the entire cine at a glance. Those frames can be assembled into a contact sheet image, or just a very brief sequence. Being able to do that would be awesome.
We could use Python after the fact, but it just seems wasteful. We'd be creating I/O debt and writing information that we'd rather not write or render in the first place.
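For illustration, here is a minimal sketch of how such a frame-spec string might be parsed. The syntax is the one proposed above; the function name and the support for mixing ranges with individual frames are my own assumptions:

```python
# Hypothetical parser for the frame-spec syntax proposed above.
# Accepts e.g. "1-100", "1-100:2", "10,15,37,45", or a mix: "1-10:2,37,45".
def parse_frame_spec(spec: str) -> set[int]:
    frames: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            rng, _, step = part.partition(":")
            start, end = (int(n) for n in rng.split("-"))
            frames.update(range(start, end + 1, int(step) if step else 1))
        else:
            frames.add(int(part))
    return frames

# The renderer could still evaluate every frame in the span for correct
# sampling and motion blur, and consult this set when deciding what to write.
print(sorted(parse_frame_spec("1-10:2,37,45")))  # [1, 3, 5, 7, 9, 37, 45]
```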
Having that control "seems" reasonable, since steps, at least, are currently available in the legacy MRQ UI options.
Thanks!
One more idea…
I couldn't find a way to do this currently, but it would be great to be able to pull a variable as a token in the output paths. Sometimes we render vdev of assets, and being able to query an actor and then use its value as a token would be great. Again, we could run a Python rename after the fact (sketched below), but native support would be a welcome addition.
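As a stopgap, the after-the-fact rename could look something like this minimal sketch; the {asset} token idea, the directory layout, and the filename pattern are all hypothetical:

```python
from pathlib import Path

# Hypothetical post-render rename: splice an actor/asset name into the
# filename where a custom {asset} output token would have gone.
def rename_with_asset_token(render_dir: str, asset_name: str) -> None:
    for frame in Path(render_dir).glob("*.exr"):
        # e.g. "shot010.0001.exr" -> "shot010.ChairAsset.0001.exr"
        prefix, rest = frame.name.split(".", 1)
        frame.rename(frame.with_name(f"{prefix}.{asset_name}.{rest}"))

# Usage (path and name are placeholders):
rename_with_asset_token("/renders/vdev/shot010", "ChairAsset")
```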
Thanks again.
Chris
Hi! I've been doing some tests with this and ran into a couple of issues with using ExponentialHeightFog.
Here is my test scene, everything rendered together for ground truth comparison:
Here is my assembly in Nuke from layers:
Looks pretty good, apart from some differences in the shadows on the chair (I made sure my Lumen screen traces are off) and some edge artifacts caused by the fog layer:

In this case it's fixable relatively easily with some erodes, but I'd like to highlight the issue nonetheless.
I am rendering a fog layer separately with the objects in the scene held out, like this:
Looks pretty good, but when I look at the alpha channel I notice some artifacts:
I'm wondering if I'm missing something here, some kind of setting that I didn't turn on. It might also be worth mentioning that the fog layer here is both AtmosphereFog and ExponentialHeightFog combined into one.
Any advice is appreciated!
Hi! Is Tone Curve disabled? And in Nuke are you adding or doing an over?
Tone Curve is disabled. In Nuke I am doing an over with the fog layer, and a disjoint-over on layers that are held out against each other, for correct edges.
Disjoint-over should work, but can you try plussing them together?
Plus doesn't match the render of everything together, because in this case the fog is darkening the brighter sky in the background. But it does get rid of the visible artifacting, so it could be considered a solution for certain cases.
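For anyone reproducing this comparison, here is a quick Nuke Python sketch of the two assemblies being discussed; the Read paths and node names are placeholders:

```python
import nuke

# Placeholder reads for the object layer and the held-out fog layer.
fg = nuke.nodes.Read(file="objects_layer.####.exr")
fog = nuke.nodes.Read(file="fog_layer.####.exr")

# Disjoint-over: correct edges for layers held out against each other,
# but it exposes the alpha artifacts shown above.
disjoint = nuke.nodes.Merge2(inputs=[fg, fog], operation="disjoint-over")

# Plus: hides the edge artifacts, but won't match the combined render
# when the fog darkens a brighter background.
plus = nuke.nodes.Merge2(inputs=[fg, fog], operation="plus")
```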
Yeah, I sadly just noticed the absence of steps when I desperately need them. Also, please add a text field where you can type in non-sequential frame ranges and single frames, e.g. "7; 16; 55-78; 110". We need this a lot in a production workflow.
Cheers
It seems like the "Spatial Sample Count" option isn't being applied to the layers.
Even when I increase the count, I still only get 1 subsample and no proper anti-aliasing.
Or am I missing something?
Are you using MSAA by chance?
Normally I use the "None" anti-aliasing method, then set the Spatial Sample Count to something like 64.
Do you visually see a difference in the render, or are you going by the preview window? We don't show the spatial samples in the preview window because it's not necessarily linear and it was causing confusion.
Hi Shaun, thanks for your answer!
I don't see a difference in the rendered image, and as you can see on the right side, there is only 1 subsample.
What would be the right way to increase the spatial samples to get nice, clean anti-aliasing for the rendered layers?
I'm trying to render an object with transparency in the alpha.
For that, I'm using the "Holdout" property on my background objects.
The HUD will not list the Spatial Samples. You set those on the rendering node. I have not been able to reproduce the issue you are having. When I increase the spatial samples, I can see the quality change visually in the render.
I created a new project and, look at that, it works!
It seems there is something wrong in my project setup.
I upgraded a project from 5.3 to 5.5.
Sorry for the confusion.
Thank you so much for the good support on this!
I need to come back on this.
I was blinded by the default "Temporal Sample Count".
Can you try setting "Temporal Sample Count" to 1 and "Spatial Sample Count" to 32, and see if that works for you?
Tested with a template project on Unreal Engine 5.5.4.
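In case it helps reproduce, this is roughly what that combination looks like via the legacy MRQ Python API (an assumption on my part; MRG itself configures sampling on its render-layer graph nodes rather than through this settings class):

```python
import unreal

# Sketch using the legacy Movie Render Queue Python API.
subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
job = subsystem.get_queue().get_jobs()[0]  # assumes a job is already queued
aa = job.get_configuration().find_or_add_setting_by_class(
    unreal.MoviePipelineAntiAliasingSetting)
aa.override_anti_aliasing = True
aa.anti_aliasing_method = unreal.AntiAliasingMethod.NONE  # engine AA off
aa.temporal_sample_count = 1   # the combination being tested here
aa.spatial_sample_count = 32
```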
Yes, this works for me. Can you send over your template project with your MRG config?