Tutorial: Using the Movie Render Graph to Render in Layers

In this tutorial, we will cover the basics of the Movie Render Graph, talk about the different nodes, and show how to start rendering your shots in layers.

https://dev.epicgames.com/community/learning/tutorials/LpaB/unreal-engine-using-the-movie-render-graph-to-render-in-layers


In legacy Movie Render Queue, there was the option of rendering on frame steps. Is there a way to do this using Movie Render Graph? It is not immediately apparent. Thanks!


No, but I will put in a request to see if we can get it into 5.5.


Hi Shaun,

That would be great. Just to fill out the thread, here are a few ideas I’m hoping make it into a future version.

  • Rather than start frame and end frame, maybe just allow a field that accepts a simple syntax. One could use 1-100 for a continuous range, 1-100:2 for the same range on twos, or “10,15,37,45” for just those frames. The renderer could still run through the whole range so that sampling and motion blur stay accurate, but it would only write out those frames (see the sketch after this list).
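
To make the proposal concrete, here is a minimal Python sketch of how such a spec could be expanded into the set of frames to write. `parse_frame_spec` is a hypothetical name, not an existing Unreal API.

```python
# Hypothetical parser for the proposed frame-spec syntax:
# "1-100" (range), "1-100:2" (range on twos), "10,15,37,45" (individual frames).
def parse_frame_spec(spec: str) -> set[int]:
    frames: set[int] = set()
    for token in spec.replace(" ", "").split(","):
        step = 1
        if ":" in token:  # optional step suffix, e.g. "1-100:2"
            token, step_str = token.split(":")
            step = int(step_str)
        if "-" in token:  # continuous range, e.g. "1-100"
            start, end = (int(v) for v in token.split("-"))
            frames.update(range(start, end + 1, step))
        else:             # single frame, e.g. "37"
            frames.add(int(token))
    return frames

# parse_frame_spec("1-10:2,15") -> {1, 3, 5, 7, 9, 15}
```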

There are several reasons why the above is important. While render time isn’t as expensive in real time, path tracing can be. Moreover, early on in review you often don’t want to hold onto all those frames, so people can use steps to limit that. As for arbitrary individual frames: a great way to communicate the status of a cinematic, for example, is with a contact sheet. Contact sheets can be representative frames of each shot that give a bird’s-eye view of the state of the entire cine at a glance. Those frames can be assembled into a contact sheet image, or just a very brief sequence. So being able to do that would be awesome.

We could use Python after the fact, but it just seems wasteful. We’d be creating I/O debt and writing information that we’d rather not write or render in the first place.

Having that control “seems” reasonable, since steps, at least, are currently available in the MRQ UI options.

Thanks!


One more idea…

I couldn’t find a way to do this currently, but it would be great to be able to pull a variable as a token in the output paths. Sometimes we render vdev of assets, and being able to query an actor and then use that as a token would save a step. Again, we could run a Python rename after the fact (roughly like the sketch below), but native support would be a great addition.
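
For anyone needing this today, the after-the-fact rename could look something like this. Everything here is illustrative: the `{actor}` placeholder is not an existing MRG token, and the paths and actor name are made up; the actor name would be queried from the level before or after the render.

```python
# Hypothetical post-render rename: replace a literal "{actor}" placeholder
# baked into the output file names with a name queried from the scene.
from pathlib import Path

def inject_actor_token(render_dir: str, actor_name: str) -> None:
    for frame in Path(render_dir).glob("*{actor}*"):
        frame.rename(frame.with_name(frame.name.replace("{actor}", actor_name)))

# inject_actor_token("/renders/vdev", "SM_Chair_A")
```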

Thanks again.

Chris

Hi! I’ve been doing some tests with this and ran into a couple issues with using ExponentialHeightFog.

Here is my test scene, everything rendered together for ground truth comparison:

Here is my assembly in Nuke from layers:


Looks pretty good, apart from some differences in the shadows on the chair (I made sure my Lumen screen traces are off) and some edge artifacts caused by the fog layer:
In this case it’s fixable relatively easily with some erodes, but I’d like to highlight the issue nonetheless.
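
For reference, the erode fix might look roughly like this in Nuke’s Python API; the node name `FogLayer` is just an assumption for illustration.

```python
# Sketch: shrink the fog layer's alpha by a pixel before the merge,
# to eat the edge fringe. "FogLayer" is an assumed Read node name.
import nuke

fog = nuke.toNode("FogLayer")
erode = nuke.nodes.FilterErode(channels="alpha", size=1)
erode.setInput(0, fog)
```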

I am rendering a fog layer separately with the objects in the scene held out, like this:


Looks pretty good, but when I look at the alpha channel I notice some artifacts:

I’m wondering if I am missing something here, some kind of setting that I didn’t turn on. It might also be worth mentioning that the fog layer here is both AtmosphereFog and ExponentialHeightFog combined into one.

Any advice is appreciated!

Hi! Is Tone Curve disabled? And in Nuke are you adding or doing an over?

Tone Curve is disabled. In Nuke I am doing an over with the fog layer, and a disjoint-over on layers that are held out against each other, for correct edges.

Disjoint-over should work, but can you try plussing them together?

Plus doesn’t match the render of everything together because in this case the fog is darkening the brighter sky in the background. But it does get rid of the visible artifacting, so it could be considered a solution for certain cases.
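
For anyone following along, the standard premultiplied merge math explains the difference. A small plain-Python sketch (the function names are mine; the formulas are the usual Porter-Duff/Nuke definitions):

```python
# Premultiplied RGBA merges. A is the fog layer, B the layer beneath it.
def over(A, B):
    """A over B: B is attenuated by A's alpha, so fog can darken a bright sky."""
    return tuple(a + b * (1.0 - A[3]) for a, b in zip(A, B))

def plus(A, B):
    """A plus B: straight sum; the background is never attenuated."""
    return tuple(a + b for a, b in zip(A, B))

def disjoint_over(A, B):
    """A disjoint-over B: acts like plus until the alphas sum past 1,
    then scales B's contribution down (like over), giving clean edges."""
    if A[3] + B[3] <= 1.0:
        k = 1.0
    else:
        k = max(0.0, 1.0 - A[3]) / B[3] if B[3] > 0 else 0.0
    return tuple(a + b * k for a, b in zip(A, B))

fog = (0.2, 0.2, 0.2, 0.8)  # dim, fairly opaque fog (premultiplied RGB, alpha)
sky = (2.0, 2.0, 2.0, 1.0)  # bright sky behind it

print(over(fog, sky))           # (0.6, 0.6, 0.6, 1.0) -- fog darkens the sky
print(plus(fog, sky))           # (2.2, 2.2, 2.2, 1.8) -- sky leaks through at full strength
print(disjoint_over(fog, sky))  # (0.6, 0.6, 0.6, 1.0) -- matches over where alphas overlap
```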

Yeah, I sadly just noticed the absence of steps right when I desperately need it. Also, please add a text field where you can type in non-sequential frame ranges and single frames, e.g. “7; 16; 55-78; 110”. We need this a lot in a production workflow.

Cheers

It seems like the ā€˜Spatial Sample Count’ option isn’t being applied to the layers.
Even when I increase the count, I still only get 1 subsample and no proper anti-aliasing.
Or am I missing something?

Are you using MSAA by chance?


Normally I use the “None” anti-aliasing method, then set the Spatial Sample Count to something like 64.



Do you visually see a difference in the render, or are you going by the preview window? We don’t show the spatial samples in the preview window because it’s not necessarily linear and it was causing confusion.


Hi Shaun, thanks for your answer!
I don’t see a difference in the rendered image, and as you can see on the right side there is only 1 sub sample.
What would be the right way to increase the Spatial Samples to get nice, clean anti-aliasing for the rendered layers?
I’m trying to render an object with transparency on the alpha.
For that I’m using the “Holdout” property on my background objects.

The HUD will not list the Spatial Samples. You set those on the rendering node. I have not been able to reproduce the issue you are having. When I increase the spatial samples, I can see the quality change visually in the render.


I created a new project and, look at that, it works!
It seems like something is wrong in my project setup; I upgraded the project from 5.3 to 5.5.
Sorry for the confusion.
Thank you so much for the good support on this!

I need to come back to this.
I was blinded by the default “Temporal Sample Count”.
Can you try setting “Temporal Sample Count” to 1 and “Spatial Sample Count” to 32 and see if that works for you?
Tested with a template project on Unreal Engine 5.5.4.
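
In case it helps anyone scripting this, the equivalent settings in the legacy MRQ Python API look roughly like the sketch below. I’m more certain about the MRQ classes than about the MRG node’s Python surface, so treat the MRG side as an assumption; in the graph you would set the same Spatial/Temporal Sample Count properties on the rendering node.

```python
# Legacy MRQ sketch of the settings discussed above: AA method "None",
# Temporal Sample Count 1, Spatial Sample Count 32.
import unreal

def configure_samples(job: unreal.MoviePipelineExecutorJob) -> None:
    config = job.get_configuration()
    aa = config.find_or_add_setting_by_class(unreal.MoviePipelineAntiAliasingSetting)
    aa.override_anti_aliasing = True
    aa.anti_aliasing_method = unreal.AntiAliasingMethod.AAM_NONE
    aa.temporal_sample_count = 1   # no sub-frame accumulation
    aa.spatial_sample_count = 32   # 32 jittered samples accumulated per frame
```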

Yes, this works for me. Can you send over your template project with your MRG config?