Nanite Performance Is Not Better than Overdraw-Focused LODs [TEST RESULTS]. Epic's Documentation Is Endangering Optimization.

Since everyone needs a reminder of what we achieved years ago while Unreal refused to catch up:

  1. Ordered dithering for LOD transitions, since that algorithm doesn’t animate in motion. This would still be compatible with the crappy TAA everyone loves.

  2. A better algorithm (or workflow) would be something comparable to the industry standard, Simplygon: a quick, efficient system that can bake micro detail into a depth texture for effects like screen-space shadows, followed by a pass that reduces overdraw by merging or removing thin details that decimate poorly due to linear sampling falloff. Another ability Unreal’s LOD algorithms miss is enforcing performance-friendly topology in the triangle arrangement.

  3. Draw calls? Precomputed meshlets (with LOD swapping and culling baked in) built from local, static, already-optimized objects.

  4. Deferred texturing for foliage, since Nanite hates WPO (World Position Offset).
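Point 1 above is trivial to sketch. A screen-space Bayer matrix gives every pixel a fixed threshold, so a LOD cross-fade driven by it stays stable frame to frame instead of crawling in motion the way stochastic/animated dithering does. A minimal C++ sketch (the function name and structure are mine, not engine API):

```cpp
#include <array>

// 4x4 Bayer (ordered dither) matrix, normalized to [0, 1).
// The pattern depends only on screen position, so it does not
// shimmer under camera motion.
constexpr std::array<std::array<float, 4>, 4> kBayer4 = {{
    {{ 0/16.f,  8/16.f,  2/16.f, 10/16.f}},
    {{12/16.f,  4/16.f, 14/16.f,  6/16.f}},
    {{ 3/16.f, 11/16.f,  1/16.f,  9/16.f}},
    {{15/16.f,  7/16.f, 13/16.f,  5/16.f}},
}};

// During a LOD cross-fade, a pixel shows the *incoming* LOD once the
// fade factor exceeds its screen-space dither threshold.
bool KeepNewLodPixel(int px, int py, float fade /* 0..1 */) {
    return fade > kBayer4[py & 3][px & 3];
}
```

At fade = 0.5, exactly half the pixels in each 4x4 tile show the new LOD, which is what makes the transition read as a smooth blend at a distance.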

Nanite is NOT good for meshes with triangle counts below roughly 100,000.

Nanite is MUCH better than LODs for complex moving objects (like high-res vehicles).

Nanite is best used in OPEN WORLDS. If you’re in a small world, your camera will never be far enough away to trigger Nanite’s mesh-complexity reduction.

LODs are the absolute best for lower-complexity meshes (<100,000 triangles) and for cramped or smaller areas.
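The three rules above boil down to a simple decision. A hedged C++ sketch of that rule of thumb; the function name and the 100,000-triangle cutoff are this post's own numbers, not anything from Epic's docs:

```cpp
// Hypothetical rule-of-thumb from the claims above; the threshold is
// the post's number, not an engine constant.
bool ShouldEnableNanite(int triangleCount, bool openWorld, bool complexMovingObject) {
    if (complexMovingObject) return true;           // e.g. high-res vehicles
    return triangleCount >= 100000 && openWorld;    // below ~100k, or in small
                                                    // worlds, plain LODs win
}
```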

There are numerous examples online where users have achieved significantly higher FPS, sometimes up to 100% more, by activating Nanite. However, I’m unsure whether this truly reflects Nanite’s actual performance.

Those tests show 0% understanding of overdraw. They included pre-existing problems for non-Nanite meshes that are not present in OPTIMIZED games. Here’s a quick clip explaining overdraw.

Performance is NOT driven by poly count; it’s driven by the surface area of the overdraw visible on screen.
What I mean by surface area is explained in this video chapter. In that scenario the cost is (surface area on frame) x (material cost), whereas a more specific determination of per-object performance is (surface area on frame) x (material cost) x (related overdraw).
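As a toy model, the formula above can be written directly; note that triangle count never appears in it. The function name and units are illustrative assumptions, not a real profiler metric:

```cpp
// Toy cost model for the claim above: per-object fill cost scales with
// visible on-screen surface area, material cost, and the overdraw
// factor; poly count is not an input. Units are arbitrary.
float EstimatedFillCost(float screenAreaPx, float materialCost, float overdraw) {
    return screenAreaPx * materialCost * overdraw;
}
```

So two meshes covering the same screen area with the same material cost the same to shade, whether one is 5,000 triangles and the other 5 million; only extra overdraw (e.g. stacked thin foliage cards) changes the bill.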

This “poly threshold of what’s good for Nanite” is completely ridiculous. That’s why I exported a 6-million-poly mesh (an optimized game scene with little overdraw) into Unreal and then enabled Nanite. NANITE was 30% slower in a SIMPLE scene. And in a scenario where we run free with billions of triangles under Nanite, we are met with several visual problems that only blur (bye-bye relevant gameplay detail, with that detail costing 30%+ more perf) or expensive temporal crap can mitigate.

Epic has enough experience with graphics to know this. They just don’t CARE, because crappy TAA hides it. They brag about it and will continue to abuse it.

It is only recently that I have seen some possible improvement in Nanite–shadow-map compatibility.
