How are partially overlapped objects rendered?

Hello! I have a lot of environment objects that are overlapped by other objects by about 70% (so only about 30% of each object is visible). Take the two cubes in the screenshot as an example, where one object is inside the other: will the blue and green cubes both be rendered completely and then clipped using the depth buffer, or will only the visible part of the blue cube be rendered in the first place? This is a question about optimization: in my environment there are many objects with quite complex materials that are overlapped by other objects, and I'm trying to understand whether it makes sense to cut these meshes down to just the visible part, or whether that wouldn't affect performance at all because only the part visible to the camera gets rendered anyway.

Not sure how Nanite handles it, but in UE4, if I'm not wrong, if any part of the mesh is visible it will be fully loaded.
Check out this video for a more in-depth explanation: An In-Depth look at Real-Time Rendering | Course


The raster process works back-to-front, like the layers in classic hand-drawn animation, where transparent cels are stacked on top of one another to build up the final image.

Since one object cannot know whether another will be in front of it, be transparent, etc., the stuff behind still has to be drawn, even if it's eventually (totally) occluded by a non-transparent object; the objects CANNOT know that, so it's a bit of dumb brute force in that regard.
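To illustrate the back-to-front layering idea, here's a toy composite of a single grayscale pixel, painter's-algorithm style. This is just a sketch with a made-up `composite` helper, not how any engine actually implements it:

```python
# Toy back-to-front composite of one pixel. Each layer is (color, alpha),
# already sorted far-to-near; nearer layers are blended over what is
# behind them, so the far layers have to be drawn even if they end up
# mostly or fully covered.
def composite(layers):
    out = 0.0                       # background color (grayscale for brevity)
    for color, alpha in layers:     # iterate far to near
        out = color * alpha + out * (1.0 - alpha)
    return out

# A fully opaque white layer behind a half-transparent black layer:
print(composite([(1.0, 1.0), (0.0, 0.5)]))  # 0.5
```

The key point: the result for the front layer depends on what's already in `out`, which is why the occluded layer can't simply be skipped.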


Thank you @Devic.a and @IlIFreneticIlI !

This might help too: Material Blend Modes in Unreal Engine | Unreal Engine Documentation

At least with Masked you can eliminate some rendering overhead, so if you can somehow finagle a material-based solution, it might help. No idea how you might do this, though…
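For what it's worth, the Masked idea boils down to a clip test: any fragment whose opacity mask falls below a threshold is discarded outright instead of being shaded and blended. A toy sketch (made-up helper name, illustrative threshold, not Unreal's actual default clip value):

```python
# Toy version of a Masked material's clip test: fragments whose
# opacity-mask value falls below the threshold are discarded entirely,
# so they never cost any blending and don't write depth or color.
def masked_fragments(mask_values, clip_threshold=0.5):
    # Return the pixel indices that survive the opacity-mask test.
    return [i for i, m in enumerate(mask_values) if m >= clip_threshold]

print(masked_fragments([0.0, 0.9, 0.3, 1.0]))  # [1, 3]
```

Surviving fragments behave like opaque ones, which is why Masked avoids the back-to-front sorting cost that true transparency requires.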

Otherwise, Nanite is well-suited for exactly this scenario. Since it involves precomputed clustering/visibility, overlapping objects shouldn't really matter; only the parts of Nanite objects that are actually visible (plus a little border overhead from the clustering) will be rendered. With Nanite the paradigm shifts from per-pixel draw cost under the raster method to a roughly fixed cost based on your resolution, since you render just about only what you need rather than 'everything'…

That’s not actually true.
"Back-to-front" (the painter's algorithm) is only used for transparent objects, because it's quite limited and not that fast. For opaque objects the z-buffer algorithm is used instead. (Masked counts as opaque.)
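A toy sketch of the z-buffer idea on a 1-D row of pixels (made-up `draw` helper, smaller depth = nearer to the camera; obviously not actual GPU code):

```python
# Toy z-buffer over a 1-D row of pixels: each draw call writes a depth
# and a color per covered pixel, and a fragment only lands if it is
# nearer than what is already stored there.
def draw(depth_buf, color_buf, start, end, depth, color):
    for x in range(start, end):
        if depth < depth_buf[x]:      # the z-test
            depth_buf[x] = depth
            color_buf[x] = color

W = 8
depth_buf = [float("inf")] * W        # start infinitely far away
color_buf = [None] * W

draw(depth_buf, color_buf, 0, 8, depth=5.0, color="green")  # back cube
draw(depth_buf, color_buf, 2, 6, depth=2.0, color="blue")   # front cube

print(color_buf)
# ['green', 'green', 'blue', 'blue', 'blue', 'blue', 'green', 'green']
```

Unlike the painter's algorithm, submission order doesn't affect the final image here: drawing the blue cube first and the green cube second gives the same result.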

With the z-buffer, the performance cost of overlapping meshes depends on when the z-test is performed. Generally speaking, the expensive part of rendering (on the GPU side of things) is the fragment shader. (FYI, your Material nodes run in the fragment shader.)

Afaik Unreal does perform (or has a setting for) a depth pre-pass. This means your frame is rendered twice: first depth information only, then fully. During the full render, this lets the GPU discard invisible fragments of a mesh before running the fragment shader.
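Here's a rough model of why that helps: it counts fragment-shader invocations for two overlapping quads, with and without a pre-pass. Toy 1-D setup with made-up names; it assumes each mesh has a unique depth, which real GPUs don't need to:

```python
# Count fragment-shader invocations with and without a depth pre-pass.
# Each mesh is (start, end, depth) on a 1-D row of pixels; smaller depth
# is nearer to the camera.
def render(meshes, width, prepass):
    depth_buf = [float("inf")] * width
    shader_runs = 0
    if prepass:
        # Pass 1: write depth only, no expensive shading.
        for start, end, depth in meshes:
            for x in range(start, end):
                depth_buf[x] = min(depth_buf[x], depth)
    # Pass 2 (or the only pass): shading.
    for start, end, depth in meshes:
        for x in range(start, end):
            if prepass:
                if depth == depth_buf[x]:   # only the visible fragment is shaded
                    shader_runs += 1
            else:
                if depth < depth_buf[x]:    # shade whatever passes the z-test now
                    depth_buf[x] = depth
                    shader_runs += 1
    return shader_runs

meshes = [(0, 8, 5.0),   # back quad, submitted first (worst case without pre-pass)
          (2, 6, 2.0)]   # front quad
print(render(meshes, 8, prepass=False))  # 12: hidden fragments still get shaded
print(render(meshes, 8, prepass=True))   # 8: one shader run per visible pixel
```

Without the pre-pass, the back quad is shaded across all 8 pixels and then partially overwritten; with it, the expensive shading runs exactly once per visible pixel, at the cost of the cheap extra depth pass.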

The downside is that the depth pre-pass itself adds overhead. Another question is how Unreal deals with materials that modify the shape of the object inside the material; I don't know about that.

My mistake, thanks for the correction.

My understanding was that this depth-only pre-pass was part of what powers Nanite. Is it also present in 4.2x-era tech?

It’s not new tech. Since I don’t have UE installed at the moment, I can’t check myself. However, I did find this: Unreal’s Rendering Passes - Unreal Art Optimization
