Nanite Performance Is Not Better than Overdraw-Focused LODs [TEST RESULTS]. Epic's Documentation Is Endangering Optimization.

Ouch! It must be VERY laggy lol. In my example, only the landscape was using Nanite because I was afraid of this possibility. But even then… Well, we will see what happens with 5.4.

You do know that Unreal Engine 5 isn’t in a released state yet, right? It’s all still early access. Epic is thinking of the future of game development, not just right now. In another 4 years, once the engine gets to 5.20, everything will likely be way more performant. Not to mention that average systems will be a lot more powerful. It’s such a strange thing to be upset about. Would you prefer they don’t try to innovate and push the field forward?

God… I’m new to Unreal, I don’t know much about the history… is there going to be a 5.20? I thought when it reached 5.9, UE6 would come…

I don’t know what their plan is. I was just basing it off of UE4’s trajectory. I’d hope they don’t switch to UE6 that soon.

That’s wishful thinking, and they have been relying on horrible temporal smear for years now. Games do not need TAA/DLSS smear. Effects can have their own temporal aspects, and I have seen better hair in other engines, but all this crap is being done in the false name of “optimization”.

Upscaling is ugly and crunchy.

They shouldn’t be investing in the crap they are making now (TSR etc.) and should instead be making better workflows like specular aliasing texture filtering, better LOD/streaming systems, and effects that don’t look like crap without another kind of crap. They should be investing in Lumen versions that take advantage of static geometry.

It’s not wishful thinking. There are going to be vast improvements to the engine from alpha to full release, and the average system is also going to be more powerful. I’ve been using Unreal since 2009. It always gets better in time. Prime example, it used to be that Nanite didn’t work with foliage. It now does. It is frustrating that it’s not more performant right now for sure, but it’ll be worthwhile in the future.

Prime example, it used to be that Nanite didn’t work with foliage. It now does.

Yeah, and that’s not a good thing; it performs worse. In fact, a lot worse on foliage. It doesn’t hold up anywhere near as well as other visibility buffer implementations because it’s not focused on performance, it’s focused on “storage”, as stated by Brian Karis. Is that really your prime example? DFAO would have been a better one, since they gave better and more relevant timings for it.

Listen, I’m not upset with you. I’m upset that Epic’s funding is going towards features that are creating false illusions about affordable hardware. We are talking the best you can get for $300 in GPU terms, which isn’t far from consoles, which have had some pretty atrociously performing games that look like blurry crap in motion. The issues stem from clear problems stated here and on the #1 feedback thread, and Epic has the money to fund the better solutions.

Updated performance test with Lyra 5.3:
I highlighted all static objects and disabled Nanite on them.


Overdraw (no Nanite)

Performance with Nanite (project default)

Turning on VSMs adds another millisecond with Nanite.
Ignore the scalability settings; I have my own that I use.
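
For anyone who wants to reproduce this kind of A/B comparison, here is roughly what I toggle. This is just a sketch run from the editor’s Python console; the cvar names are the standard UE ones, but double-check them against your engine version.

```python
import unreal

# Grab the editor world as a context object for console commands.
world = unreal.EditorLevelLibrary.get_editor_world()

def cmd(c):
    unreal.SystemLibrary.execute_console_command(world, c)

# Make the timings comparable first.
cmd("r.VSync 0")               # vsync poisons any ms comparison
cmd("t.MaxFPS 0")              # uncap the frame rate
cmd("r.ScreenPercentage 100")  # native res, no upscaling
cmd("stat unit")               # frame/game/draw/GPU times on screen
cmd("stat gpu")                # per-pass GPU timings

# The actual A/B toggles (set back to 1 for the Nanite/VSM runs).
cmd("r.Nanite 0")                 # disable Nanite rendering
cmd("r.Shadow.Virtual.Enable 0")  # disable virtual shadow maps
```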

I would expect a test in a level where Nanite’s strengths are obvious…

Comparing total memory could be interesting too.

I would expect a test in a level where Nanite’s strengths are obvious

In other words, an unoptimized scene with extreme overdraw. The reason I am posting this is that Epic and the UE channel have videos and presentations that tell devs Nanite is faster even for simple scenes, when it clearly is not.

If I do a test with an extremely high-poly scene to the point of extreme overdraw, then that’s faking a reason to use Nanite over LODs in terms of performance (no game should have extreme overdraw). Also, regarding your memory comment, it’s not my fault Epic doesn’t have LODs with distance hierarchies that load into the GPU based on closest draw distance. Instead we have a giant mess of visibility buffers and clusters.
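
To be clear about what I mean by “LODs with distance hierarchies”, here’s a toy sketch (plain Python, not any UE API, names made up for illustration): pick the LOD whose distance band the camera falls into and only keep that level’s buffers resident on the GPU.

```python
from dataclasses import dataclass

@dataclass
class LODLevel:
    max_distance: float     # this LOD is used up to this camera distance
    triangle_count: int
    resident_on_gpu: bool = False

def select_lod(lods, camera_distance):
    """Pick the densest LOD whose distance band covers the camera."""
    for lod in lods:        # lods sorted nearest/densest first
        if camera_distance <= lod.max_distance:
            return lod
    return lods[-1]         # past the last band: coarsest LOD / impostor

# Three-level hierarchy: dense close-up, mid LOD, far impostor.
lods = [LODLevel(15.0, 120_000), LODLevel(60.0, 20_000), LODLevel(1e9, 800)]
active = select_lod(lods, camera_distance=42.0)
active.resident_on_gpu = True   # streaming only uploads the level in use
print(active.triangle_count)    # -> 20000
```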

If you mean a scene with high draw calls from several kinds of meshes? Again, not my fault we don’t have meshlets that combine multiple meshes we know always get loaded into memory together, with their LODs combined into one instanceable meshlet that the GPU can cull, with precomputed data, in a single draw call.
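
Again, a toy illustration only (not engine code, names are hypothetical): merge meshes that always load together into one index buffer, keep per-meshlet bounds, and the GPU can cull each cluster while you issue a single draw.

```python
from dataclasses import dataclass, field

@dataclass
class Meshlet:
    first_index: int        # offset into the combined index buffer
    index_count: int
    bounds_center: tuple    # used for per-cluster culling on the GPU
    bounds_radius: float

@dataclass
class CombinedBatch:
    indices: list = field(default_factory=list)
    meshlets: list = field(default_factory=list)

    def append_mesh(self, mesh_indices, center, radius):
        """Fold one mesh's indices into the shared buffer as a meshlet."""
        self.meshlets.append(
            Meshlet(len(self.indices), len(mesh_indices), center, radius))
        self.indices.extend(mesh_indices)

# Two meshes that always stream together become one instanceable batch:
batch = CombinedBatch()
batch.append_mesh(list(range(3000)), (0.0, 0.0, 0.0), 5.0)
batch.append_mesh(list(range(1200)), (12.0, 0.0, 0.0), 2.0)
print(len(batch.meshlets), "meshlets, one draw call")
```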

Also, I recently dumped an entire city scene from NFS 2015 by pulling the geo from an API inspector and loading the frame’s scene geo into Unreal. Of course it was one big mesh, but it was 6 million triangles total. I enabled Nanite on that 355 MB mesh and instantly lost 3ms due to Nanite’s overhead. It was a pure geo test with one unshadowed skylight: 3ms without Nanite and 7ms with. The scene was well optimized and had little overdraw from the gameplay perspective.

So what, in your opinion, is a scene that plays to Nanite’s strengths? A dense foliage scene with really small triangles and overdraw hell? Nanite explodes with WPO even with the distance limiter.

A scene where the desired goal requires more tris than what the target hardware is able to render at playable frame-rates.

A scene where the desired goal requires more tris than what the target hardware is able to render at playable frame-rates.

What about playable resolution? And the target hardware? (Next-gen consoles are near the $300 GPU range.)

I can get 80fps with or without Nanite and keep Lumen and VSMs on and just upscale from a blur fest; that doesn’t mean anything. If you want to know what a playable frame rate is, it’s 60fps at at least 1080p on $300 GPU power.

Nanite isn’t going to help with that, and try not to be so vague this time.

Requires more tris than what the target hardware is able to render

Like what.

I’ll leave it at that. Best of luck.

To my point:

First subject in the GDC talk is that Nanite and Lumen are 2x faster in 5.4
I don’t understand why people don’t seem to realize Unreal 5 is in beta.

First subject in the GDC talk is that Nanite and Lumen are 2x faster in 5.4

Yeah, and they also forgot to mention that the cost of other features canceled out the performance improvements; it’s one of the first things I mentioned on the #1 feedback thread about 5.4.

Whatever test they did that with must have had some serious out-of-context problems.
It’s NOT 2x as fast.

They act like that snow trick is all new and stuff; once again I need to remind people Death Stranding came out in 2019. And guess what? Nanite in plenty of scenes I tested in 5.4 is JUST AS SLOW. What did they do, upscale from 720p with the more advanced TSR in their 5.4 performance measurement?

EDIT: I know, they got them running faster, and like the guy said in the video, they use the new headroom for “increased quality”. Like I stated in the feedback thread, UE is in a constant equilibrium of poor performance due to a constant tug of war across departments and features.

Hey, I found an increase with Nanite vs. no Nanite enabled on a 5k landscape (5.3.2, +1 frame avg, 3440x1440 res, no World Composition, playing in editor). So it’s an upgrade for someone like me who doesn’t want to create HLODs and World Composition.

For making a game I would still go with LODs, since you can reduce the number of triangles faster with distance and scale it to taste. It will outperform the Nanite system quickly, and the cherry on top is an impostor in the last LOD. That’s the case: Nanite can’t win with low-poly meshes for now. Do the test when you have dozens of very low-poly objects and enable Nanite; the engine will render that slower. For me it’s better to have a very good look and twice the density at close distance with the same frames. Nanite will not give you that advantage. LODs look a little worse on distant objects, but the fps you gain by reducing the number of triangles is more important than technology made for quantum computers ^
Nanite Landscape ON


Nanite Landscape OFF

Review the Overdraw optimization view; you need overdraw under a light (VERY light) sprinkle of green (control that with poly count and LODs). You will get much better geometry and shadow timings than Nanite if you stick to that rule.

That rule is kept in extremely optimized games; poly count isn’t the be-all and end-all.
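
If anyone wants to pull up that view quickly, this is roughly how I do it from the editor Python console (or just use View Mode → Optimization Viewmodes in the viewport); the viewmode names here are the stock ones, but verify them against your engine version.

```python
import unreal

world = unreal.EditorLevelLibrary.get_editor_world()

# Quad overdraw visualization: aim for at most a very light sprinkle of green.
unreal.SystemLibrary.execute_console_command(world, "viewmode quadoverdraw")

# Switch back to normal rendering when done:
# unreal.SystemLibrary.execute_console_command(world, "viewmode lit")
```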


Death Stranding doesn’t use Unreal Engine, so now I’m even more confused about what you’re talking about. Again, wait until they finish building the engine and go into full release before you worry about performance. Also (I may have said this already, but) make sure you’re testing your performance in a packaged project, otherwise it’s going to give completely incorrect reports on the performance. PIE and even Standalone don’t actually report the proper values.

Death Stranding doesn’t use Unreal Engine, so now I’m even more confused about what you’re talking about.

I’m saying the industry was doing snow deformation several times faster than Unreal years ago. It’s a dumb way to present that. It’s not even needed.

Again, wait until they finish building the engine and go into full release before you worry about performance.

I stated that my concern is not over my game; waiting for Unreal to perform better isn’t going to fix the games using the engine as it’s provided now. Their documentation is full of misleading information, and they are not building systems designed to take advantage of the most common environment design (hint: it’s not 100% destructible worlds like FN).

in a packaged project, otherwise it’s going to give completely incorrect reports on the performance. PIE and even Standalone don’t actually report the proper values.

First of all, packaged games using Nanite break hardware profilers like Intel GPA and cause my PC to freeze, crash, and produce a corrupted multi-GB file I can’t even open to inspect the real performance.

The results from “launch game” don’t change a lot of timings; vsync is the only thing you need to stay away from. Feel free to prove me wrong, anyone.
The results I publish in this thread are 3ms differences; I doubt packaging will do anything. The shaders are already converted for the GPU.

8th Gen Game scene dumped as a 6 million poly mesh:

With Nanite: 5.5ms (take out 0.70ms for whatever editor issue is caused by enabling Nanite)

3.4ms without Nanite

The scene’s original overdraw:

3060 at 1080p. All settings Low except shadows, which were on High.
This was not easy to make: lots of loading, waiting, converting, importing, combining, exporting, etc. I would’ve liked to have made a packaged test version, but it takes so damn long to switch between Nanite and a regular mesh that I didn’t want to freeze Unreal again. Besides, I don’t think the story will be too different with packaged.
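
Just to spell out the math behind those two screenshots, using the numbers exactly as posted:

```python
# Numbers from the screenshots above (3060, 1080p, settings per the post).
with_nanite_ms    = 5.5 - 0.70   # minus the editor-side cost noted above
without_nanite_ms = 3.4

delta_ms = with_nanite_ms - without_nanite_ms
print(round(delta_ms, 2), "ms extra")                         # ~1.4 ms
print(round(delta_ms / without_nanite_ms * 100), "% slower")  # ~41% slower with Nanite
```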

Videos showcasing drastic changes between packaged and PIE never confirm that the same settings and internal resolution are synced. All the shaders are already compiled for my GPU, and that doesn’t change in packaged.

More overdraw/optimization neglect = more gains with Nanite.
Less overdraw, controlled with LODs = more gains than Nanite.