Nanite Performance is Not Better than Overdraw Focused LODs [TEST RESULTS]. Epic's Documentation is Dangering Optimization.

I’ll play devil’s advocate for a bit.

Here’s the same shot from the City Sample on 5.1: the first one uses Nanite, the second has Nanite disabled on the project. Same resolution, on a 2080.
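(For anyone wanting to reproduce this kind of A/B test: the project-wide switch lives in Project Settings → Rendering, but you can also flip Nanite rendering from code or the console. A minimal sketch, assuming the `r.Nanite` console variable that gates Nanite rendering in UE5; the function name is just illustrative:)

```cpp
// Minimal sketch: toggle Nanite rendering at runtime for an A/B capture.
// Equivalent to typing "r.Nanite 0" / "r.Nanite 1" in the console.
#include "HAL/IConsoleManager.h"

static void SetNaniteRendering(bool bEnabled)
{
    if (IConsoleVariable* NaniteCVar = IConsoleManager::Get().FindConsoleVariable(TEXT("r.Nanite")))
    {
        NaniteCVar->Set(bEnabled ? 1 : 0);
    }
}
```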

First, it’s really important to note that this comparison is super disingenuous to the non-Nanite shot. That scene isn’t optimized in any way; realistically you would be setting up better draw distances, HLODs, fine-tuned standard LODs, etc. This is also just profiling in the editor, not a packaged build, so all these numbers are basically worthless.

You can look at the average frame in Unreal Insights.
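(If you want more than the default engine timings in those captures, you can tag your own code so it shows up as named scopes on the Insights timeline. A minimal sketch, assuming the `TRACE_CPUPROFILER_EVENT_SCOPE` macro and a trace started with something like `-trace=cpu,gpu,frame` on the command line; the function is made up:)

```cpp
// Minimal sketch: add a named CPU scope that appears in Unreal Insights.
#include "ProfilingDebugging/CpuProfilerTrace.h"

// Hypothetical per-frame function you want to see on the timeline.
void UpdateCrowdSimulation()
{
    TRACE_CPUPROFILER_EVENT_SCOPE(UpdateCrowdSimulation);

    // ... expensive game-thread work goes here ...
}
```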

But it’s not really the specific FPS that I’m trying to highlight. Obviously GPU times are better with Nanite because the non-Nanite content isn’t optimized. Polycounts aren’t the only feature of Nanite, though. It’s doing per-cluster occlusion culling, eliminating traditional mesh draw calls by handling those meshes in the Nanite pass, etc. You can see the render thread is way less utilized in the Nanite scene.
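(If you want hard numbers for that rather than eyeballing `stat unit`, here is a minimal sketch that logs the same game / render / GPU timings, assuming the engine’s `GGameThreadTime` / `GRenderThreadTime` / `GGPUFrameTime` globals that back the `stat unit` display:)

```cpp
// Minimal sketch: log last frame's thread timings so you can diff the
// Nanite and non-Nanite runs. The globals hold times in platform cycles.
#include "CoreMinimal.h"
#include "RenderCore.h"
#include "HAL/PlatformTime.h"

void LogFrameTimings()
{
    const float GameMs   = FPlatformTime::ToMilliseconds(GGameThreadTime);
    const float RenderMs = FPlatformTime::ToMilliseconds(GRenderThreadTime);
    const float GpuMs    = FPlatformTime::ToMilliseconds(GGPUFrameTime);

    UE_LOG(LogTemp, Log, TEXT("Game %.2f ms | Render %.2f ms | GPU %.2f ms"),
           GameMs, RenderMs, GpuMs);
}
```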

When you enable Nanite, you are taking a significant up-front cost on the GPU. Buuuut… it scales a lot better with per-object polycounts, instance counts in the scene, and mesh memory, without a lot of extra developer work. When you enable Nanite and populate a scene with the same mesh thousands of times, you are paying the cost of Nanite but missing out on most of the benefits (there’s essentially nothing for the culling to win you, and draw calls were never an issue since those copies are probably instanced anyways; see the sketch below).
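(To make the “probably instanced anyways” part concrete: without Nanite, scattering thousands of copies of one mesh is normally done with an instanced static mesh component, which already collapses all those copies into a handful of draw calls. A minimal sketch; the actor name and grid numbers are made up:)

```cpp
// Minimal sketch: scatter thousands of copies of one mesh with a
// UInstancedStaticMeshComponent, so all copies share draw calls
// (one per material section) even without Nanite.
#include "GameFramework/Actor.h"
#include "Components/InstancedStaticMeshComponent.h"
#include "RockField.generated.h" // hypothetical actor, lives in its own header in a real module

UCLASS()
class ARockField : public AActor
{
    GENERATED_BODY()

public:
    ARockField()
    {
        InstancedMesh = CreateDefaultSubobject<UInstancedStaticMeshComponent>(TEXT("InstancedMesh"));
        RootComponent = InstancedMesh;
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();

        // Place a 100 x 100 grid of instances; transforms are per-instance,
        // the mesh and materials are shared across all of them.
        for (int32 X = 0; X < 100; ++X)
        {
            for (int32 Y = 0; Y < 100; ++Y)
            {
                const FVector Location(X * 500.f, Y * 500.f, 0.f);
                InstancedMesh->AddInstance(FTransform(Location));
            }
        }
    }

    UPROPERTY(VisibleAnywhere)
    UInstancedStaticMeshComponent* InstancedMesh;
};
```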

That same situation (paying the cost, missing the benefits) is probably true in simple games like Lyra as well (I dunno, I haven’t opened it, so take it with a grain of salt). The benefits of Nanite start paying off when you have more complicated static environments, with lots of instances and lots of models that you want to look good at any distance. Think big open-world games. The cost of Nanite is tied more to screen resolution than to scene complexity, so when you test Nanite’s performance in a simple scene, it looks really bad lol.

I think the idea of “everything that can be Nanite should be Nanite” is more about this: if you are already using Nanite, you should try to enable it for everything possible. The more that is rendered into the Nanite visibility buffer, the more the renderer can use it for occlusion and the more traditional draw calls it can strip away. Again, if only half of your scene is using Nanite, you are still basically paying the same cost but only getting half of the benefits (in a simplified way of thinking about it).
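(A minimal editor-only sketch of the “enable it for everything possible” idea, assuming `UStaticMesh::NaniteSettings` as the per-asset toggle; how you gather the mesh list, and whether you batch it through an editor utility, is up to you:)

```cpp
#if WITH_EDITOR
// Minimal sketch: enable Nanite on a batch of static mesh assets from
// editor code. Editor-only, since the meshes need rebuilding afterwards.
#include "Engine/StaticMesh.h"

void EnableNaniteOnMeshes(const TArray<UStaticMesh*>& Meshes)
{
    for (UStaticMesh* Mesh : Meshes)
    {
        if (Mesh && !Mesh->NaniteSettings.bEnabled)
        {
            Mesh->Modify();                      // mark dirty / support undo
            Mesh->NaniteSettings.bEnabled = true;
            Mesh->PostEditChange();              // should kick off the mesh rebuild
        }
    }
}
#endif
```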

The performance of Nanite is a different beast than traditional LODs, and I find polycounts are one of the most irrelevant metrics for gauging whether or not Nanite works for you (despite that being one of the biggest selling points of the feature; I don’t think people should be shipping raw ZBrush sculpts in a game production anyways lmao). I’ve had scenes that actually perform better when the models are denser in triangles, since the Nanite clusters cover smaller areas and the occlusion culling can be more precise, which reduces Nanite overdraw.

Anyways, all in all, I actually do agree with your main sentiment in the post. I feel like there has been a miscommunication around UE5 suggesting that all projects should use Nanite. That’s obviously not true, since it doesn’t even officially support PS4 / Xbox One. But whether or not Nanite makes sense for a given project is a harder question. I’m not really advocating for Nanite or anything; I believe a lot of projects don’t need it, but I do think it warrants a more thorough investigation.
