Multiple meshes or one merged mesh?

Hello everyone!

I was recently watching someone's level design process and noticed that, for instance, instead of using a single plane and correcting its UVs, they were duplicating planes and placing them manually.

I thought to myself that having one single mesh, scaled to cover the ground, would be a more reasonable and probably faster option, but they weren’t using it…

Now, does this make any performance difference? For instance, how would Nanite meshes or normal meshes perform with these two different approaches? And would either of them affect the Texture Streaming Buffer by any chance?

Also, taking it to a larger scale, if we want to create an interior map for example, would it be reasonable to divide different sections of the map into separate meshes, or would one single merged mesh for all the sections be a better option? I ask because right now it’s easier to create the maps with Unreal’s modeling tools and keep every section of the map merged together, since they stay easily modifiable from within the engine.

Thank you in advance!

Modern hardware can push a crazy number of triangles, even without Nanite. Using a few planes instead of one generally won’t make a meaningful difference, especially if they are instanced static meshes. There isn’t much difference in memory cost either way if they share the same texture. You do have to store each transform (position), though, so there is a per-object cost.
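
Just to make the instancing point concrete, here’s a rough sketch (not anyone’s production code): it assumes it runs inside an actor’s constructor and that PlaneMesh is a hypothetical UStaticMesh* you’ve already assigned. A grid of floor tiles becomes instances of one plane, and each extra tile only costs its FTransform.

```cpp
#include "Components/InstancedStaticMeshComponent.h"

// Hypothetical actor constructor body: lay a 4x4 floor out of one plane mesh.
// Each AddInstance() call stores only a transform; the mesh, material and
// texture memory are shared across all tiles.
UInstancedStaticMeshComponent* FloorTiles =
    CreateDefaultSubobject<UInstancedStaticMeshComponent>(TEXT("FloorTiles"));
FloorTiles->SetStaticMesh(PlaneMesh); // PlaneMesh: an assumed UStaticMesh* property

for (int32 X = 0; X < 4; ++X)
{
    for (int32 Y = 0; Y < 4; ++Y)
    {
        // 400 uu tiles on a grid; the per-object cost is this transform, nothing more.
        FloorTiles->AddInstance(FTransform(FVector(X * 400.f, Y * 400.f, 0.f)));
    }
}
```
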
Nanite is very efficient at batching its draws, so multiple objects are not a problem there either.
Lumen lighting in particular prefers modular meshes: not too small, but segments of walls and so on as separate meshes. It makes for better distance field calculations, which are at the heart of the system.
Another reason to keep separate, modular pieces is culling. You want the engine to be able to stop rendering things that aren’t in view, but if the whole area is one big mesh, it’s always in view and can never be culled.
There are always exceptions but these are generally the modern best practices.

Thank you for the explanation. I really appreciate it.

So, the question that pops into my head is: ideally, object culling would be performed such that if even one single polygon of the object is in view, we render the whole object, right?

I was thinking of per-polygon culling, meaning that if a polygon is in view, we render that polygon.

Do you mean that polygon culling doesn’t happen even when only a small portion of the object is in view? I ask because I’ve read somewhere that some rendering APIs do not render polygons that are off screen. Or maybe that means the object is submitted for rendering first, and the off-screen polygons are omitted afterwards?

Thank you in advance!

Culling is performed on the whole object: if no portion of its bounding box is visible, the object is skipped, and if any part is visible, the entire object is drawn. Some engines do per-triangle culling, but it’s uncommon - it’s more expensive to calculate than bounding boxes, and triangles are relatively cheap to draw anyway. Because of this, Unreal uses simple view frustum culling and occlusion culling on a per-object basis.
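
If it helps, here’s a simplified sketch of what that per-object test roughly looks like (plain C++, not Unreal’s actual code): the object’s bounding box is checked against the six view frustum planes, and only if the box is entirely behind one of them is the whole object skipped.

```cpp
#include <array>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // inward-facing plane: dot(n, p) + d >= 0 means "inside"
struct AABB  { Vec3 min, max; };     // axis-aligned bounding box of the object

// Simplified per-object frustum test: if the box is fully behind any of the six
// frustum planes, the whole object is culled; otherwise the whole object draws,
// even if only one polygon of it would actually end up on screen.
bool IsVisible(const AABB& box, const std::array<Plane, 6>& frustum)
{
    for (const Plane& p : frustum)
    {
        // Pick the box corner farthest along the plane normal (the "positive vertex").
        const Vec3 corner = {
            p.n.x >= 0.f ? box.max.x : box.min.x,
            p.n.y >= 0.f ? box.max.y : box.min.y,
            p.n.z >= 0.f ? box.max.z : box.min.z,
        };
        // If even the farthest corner is behind this plane, the box is fully outside.
        if (p.n.x * corner.x + p.n.y * corner.y + p.n.z * corner.z + p.d < 0.f)
            return false;
    }
    return true; // intersecting or fully inside: draw the whole object
}
```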

Got it. Thank you for the explanations.

Lastly, could you point me to somewhere I can read more about per-object and per-triangle culling? I want to understand each of them properly. I had been picturing rendering as per-screen-pixel raytracing, and I assumed per-triangle rendering would be more efficient because the rays can accurately determine which triangles need to be rendered, which seemed cheaper than rendering the whole mesh.

I certainly don’t have advanced knowledge of the rendering side, so maybe my perception is a bit off from how rendering actually works.

Thank you!

Raytracing requires that objects outside the view aren’t culled, because they could still show up in reflections. Special considerations need to be taken.
Culling triangles in real time won’t really help raytracing performance either, because the BVH (or the distance fields, in the case of Lumen) is built for the meshes in advance, and tracing is done against those acceleration structures - not against the true geometry.
Rebuilding the acceleration structures in real time would cost far more than you can save by not drawing a few polygons.
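
To make the acceleration-structure point concrete, here’s a very rough sketch (plain C++, nothing like Lumen’s or DXR’s actual implementation): the BVH nodes store bounds that were baked when the structure was built, traversal tests the ray against those baked boxes rather than the raw triangles, and any change to the triangles would leave the baked bounds stale and force a rebuild.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, invDir; };   // invDir = 1 / direction, precomputed

// One node of a bounding volume hierarchy. Its bounds are baked at build time
// and summarize every triangle underneath it.
struct BVHNode
{
    Vec3 boundsMin, boundsMax;
    int  left = -1, right = -1;      // child node indices; -1 on both means "leaf"
    std::vector<int> triIndices;     // triangle indices owned by a leaf
};

// Standard slab test: does the ray touch this precomputed box?
bool RayHitsBox(const Ray& ray, const Vec3& bmin, const Vec3& bmax)
{
    float tmin = 0.f, tmax = 1e30f;
    const float o[3]  = { ray.origin.x, ray.origin.y, ray.origin.z };
    const float id[3] = { ray.invDir.x, ray.invDir.y, ray.invDir.z };
    const float lo[3] = { bmin.x, bmin.y, bmin.z };
    const float hi[3] = { bmax.x, bmax.y, bmax.z };
    for (int i = 0; i < 3; ++i)
    {
        const float t1 = (lo[i] - o[i]) * id[i];
        const float t2 = (hi[i] - o[i]) * id[i];
        tmin = std::max(tmin, std::min(t1, t2));
        tmax = std::min(tmax, std::max(t1, t2));
    }
    return tmin <= tmax;
}

// Traversal walks the precomputed boxes, not the raw triangles: it only collects
// the few candidate triangles the ray might hit. If the underlying geometry
// changed every frame, all of these baked bounds would be stale and the tree
// would have to be rebuilt, which costs far more than skipping a few polygons saves.
void CollectCandidates(const std::vector<BVHNode>& nodes, int nodeIndex,
                       const Ray& ray, std::vector<int>& outTriangles)
{
    const BVHNode& node = nodes[nodeIndex];
    if (!RayHitsBox(ray, node.boundsMin, node.boundsMax))
        return;                                   // whole subtree rejected at once
    if (node.left == -1 && node.right == -1)      // leaf: hand back its triangles
    {
        outTriangles.insert(outTriangles.end(),
                            node.triIndices.begin(), node.triIndices.end());
        return;
    }
    CollectCandidates(nodes, node.left,  ray, outTriangles);
    CollectCandidates(nodes, node.right, ray, outTriangles);
}
```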

Oh, yeah. Now it sounds more reasonable to me; I had only pictured simple rendering in my head for the raytracing case, not complex lighting or reflections.

Thank you for the clarification on this. I really appreciate your help!