I am following tutorials to build worlds with premade assets
and it’s the second time the youtuber claims that objects behind/inside other objects are not rendered.
Now, are GPUs advanced to the point of doing 3D volume boolean operations on the fly? Or are these guys spreading a false belief, confusing face culling and z-sorting?
Unreal does several levels of culling. If there is a known blocker ahead of the object being rendered, it may very well be that the object never even gets issued to the GPU.
Then, Unreal does an “early Z” pre-pass where it fills the Z buffer, and the GPU can quickly cull (parts of) triangles that won’t be visible – very little pixel shading will be done. (This is why modifying Z in the pixel shader is expensive; it disables this optimization.)
Then, the final Z test means that the object in question won’t actually write to the framebuffer, which means it won’t be “visible.”
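If a concrete picture helps, here is a rough CPU-side sketch of that idea (this is not Unreal source; names like zbuffer and Fragment are made up for illustration): a depth pre-pass fills the Z buffer first, so the later pass can reject hidden fragments before any expensive shading runs, which is also why writing depth from the pixel shader forces the test to happen after shading instead.

```cpp
// Minimal sketch of a depth pre-pass + depth test (not Unreal code; all
// names here are invented for illustration).
#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

struct Fragment { int x, y; float depth; };

int main() {
    const int W = 4, H = 1;

    // Pass 1: the "early Z" pre-pass fills the depth buffer with the nearest
    // opaque depth per pixel, doing little or no pixel shading.
    std::vector<float> zbuffer(W * H, std::numeric_limits<float>::max());
    const std::vector<Fragment> occluders = { {0,0,0.2f}, {1,0,0.3f}, {2,0,0.1f}, {3,0,0.5f} };
    for (const Fragment& f : occluders)
        zbuffer[f.y * W + f.x] = std::min(zbuffer[f.y * W + f.x], f.depth);

    // Pass 2: base pass. Fragments that sit behind the stored depth are
    // rejected before the expensive pixel shader would run. Writing depth in
    // the pixel shader would force this test to happen *after* shading.
    const std::vector<Fragment> hiddenMesh = { {0,0,0.9f}, {1,0,0.25f}, {2,0,0.8f} };
    int shaded = 0, rejected = 0;
    for (const Fragment& f : hiddenMesh) {
        if (f.depth >= zbuffer[f.y * W + f.x]) { ++rejected; continue; } // early reject
        ++shaded; // only now shade and write to the framebuffer
    }
    std::printf("shaded=%d rejected=%d\n", shaded, rejected); // shaded=1 rejected=2
}
```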
People who actually know how the mesh drawing pipeline in Unreal works are few. Don’t expect the average youtuber or artist to know how it works in detail. And even if they do, the explanation might not be necessary for you to “build worlds with premade assets”.
When they say objects that are not visible are not rendered, I would agree with them.
First, “not rendered” can mean many different things.
So a completely invisible object might be rejected fully; as some have suggested here, freeze the rendering state and you will see. Then Unreal will find which triangles actually need to be rendered. Even though the triangles will display in wireframe, they will not all be rendered. You can display quad overdraw to get a sense of it. Transparent meshes will show up red because they all have to be rendered, not only the front-most one.
If this wasn’t the case you could not use the engine, because performance would decrease linearly with the number of objects processed by the GPU and their screen coverage. For example, say you have 1 rock which covers the whole screen and you get 100 fps, limited by the GPU. If you now duplicate it 10 times and move the copies slightly, you will not get only 10 fps but still nearly 100.
The slight performance drop will come from sorting more triangles, having a bit more overdraw in the rendering, and things like that…
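A toy overdraw counter makes the same point numerically (illustrative only, not engine code; the pixel counts are invented): hidden copies of an opaque mesh cost almost no extra shading thanks to the depth test, while translucent layers all have to be shaded, which is exactly what the quad overdraw view paints red.

```cpp
// Toy overdraw counter (illustrative only, not engine code). Ten copies of an
// opaque mesh hidden behind the front-most one add almost no pixel shading,
// while translucent layers must all be shaded (what quad overdraw paints red).
#include <cstdio>
#include <limits>
#include <vector>

int main() {
    const int pixels = 1000;   // pretend screen coverage of one rock
    const int copies = 10;     // the duplicated, slightly offset rocks

    // Opaque path: the depth test keeps only the nearest copy per pixel.
    std::vector<float> depth(pixels, std::numeric_limits<float>::max());
    long opaqueShaded = 0;
    for (int c = 0; c < copies; ++c) {
        const float d = 0.1f + 0.01f * c;   // each copy slightly further away
        for (int p = 0; p < pixels; ++p)
            if (d < depth[p]) { depth[p] = d; ++opaqueShaded; }
    }

    // Translucent path: no depth write, so every layer is shaded and blended.
    const long translucentShaded = static_cast<long>(copies) * pixels;

    std::printf("opaque shaded: %ld, translucent shaded: %ld\n",
                opaqueShaded, translucentShaded); // 1000 vs 10000
}
```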
I mean, if we are to the point where anyone who asks for advice, gets advice, ignores it, and keeps posting nonsense - and gets replies with videos showing the advice they already got… what’s the point of the darn forum?
Certainly it’s not learning, not even discussing as you can’t discuss with a brick wall…
Nanite occludes clusters, not individual tris (unless your tris are so large that they make up clusters on an individual basis). FreezeRendering also doesn’t work with Nanite, annoyingly, so checking the cluster culling with FreezeRendering is impossible, but you can sort of see it at work in how the selection outline changes as you move objects behind/through other objects.
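For anyone wondering what “culling clusters, not tris” means mechanically, here is a hypothetical sketch (invented names, and a flat z plane standing in for a real occlusion test): each cluster carries its own bounds, and one test accepts or rejects its whole batch of triangles at once.

```cpp
// Hypothetical sketch of cluster-granularity culling (invented names; a flat
// z plane stands in for a real occlusion test). Visibility is decided per
// cluster bound, so one test accepts or rejects a whole batch of triangles.
#include <cstdio>
#include <vector>

struct Sphere  { float x, y, z, r; };
struct Cluster { Sphere bounds; int firstTri, numTris; };

// Stand-in test: "does the cluster poke out in front of the occluder plane?"
static bool clusterVisible(const Sphere& s, float occluderZ) {
    return s.z - s.r < occluderZ;
}

int main() {
    const std::vector<Cluster> clusters = {
        { {0, 0, 1.0f, 0.5f},   0, 128 },   // partly in front of the occluder
        { {0, 0, 5.0f, 0.5f}, 128, 128 },   // fully behind it
        { {0, 0, 9.0f, 0.5f}, 256, 128 },   // fully behind it
    };
    const float occluderZ = 2.0f;

    int trisKept = 0;
    for (const Cluster& c : clusters)
        if (clusterVisible(c.bounds, occluderZ)) trisKept += c.numTris;

    // 128 of 384 triangles survive without ever touching individual tris.
    std::printf("triangles kept: %d of 384\n", trisKept);
}
```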
I’m also pretty sure Object Occlusion culling is tested against the bounding sphere of an object, so long skinny objects that have small bounding boxes but large bounding spheres will sometimes not cull when expected.
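You can see why that matters with a quick back-of-the-envelope check (assuming the sphere test, as above): a thin 10 m pole has a tiny bounding box but a roughly 5 m-radius bounding sphere, so an occluder has to cover the whole sphere, not just the pole, before the object gets rejected.

```cpp
// Quick numeric check of the "long skinny object" point, assuming the culling
// test uses the bounding sphere: a thin 10 m pole has a tiny box but a ~5 m
// radius sphere, so an occluder must cover the whole sphere to cull it.
#include <cmath>
#include <cstdio>

int main() {
    // Half-extents of a thin pole: 10 m long, 0.1 m thick.
    const float ex = 5.0f, ey = 0.05f, ez = 0.05f;

    const float boxVolume    = (2 * ex) * (2 * ey) * (2 * ez);
    const float sphereRadius = std::sqrt(ex * ex + ey * ey + ez * ez); // encloses the box
    const float sphereVolume = 4.0f / 3.0f * 3.14159265f
                             * sphereRadius * sphereRadius * sphereRadius;

    std::printf("box: %.2f m^3, sphere radius: %.2f m, sphere: %.1f m^3\n",
                boxVolume, sphereRadius, sphereVolume); // ~0.10 m^3 vs ~523.7 m^3
}
```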
I don’t know them, but like everything else in the engine, it likely has its own console commands to stop it in its tracks and analyze the area.
One of the initially discussed features was indeed the ability to stream only what is visible out of a mesh. It’s possible the Epic team wholly failed to deliver (as they often do), but I rather think that once the system isn’t so raw that it barely works, it will be optimized to actually do this.
I hope so, at least. After all, the engine can’t even render 4K at 144 Hz on an empty scene anymore, so if they hope to remain an engine used for video games they’d better get to work on it at some point…