Nanite performance is not better than LODs [TEST RESULTS]. Fix your documentation, Epic. You're endangering optimization.

An 8th-gen game scene dumped as a 6-million-poly mesh:

With Nanite: 5.5ms (subtract 0.70ms for whatever editor overhead is caused by enabling Nanite).

Without Nanite: 3.4ms.

The scene's original overdraw:

3060 at 1080p. All settings on Low except shadows, which were on High.
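For clarity, the math: 5.5ms − 0.70ms of editor overhead = 4.8ms with Nanite, versus 3.4ms without. Nanite is still roughly 1.4ms (~40%) slower on this scene.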
This was not easy to make: lots of loading, waiting, converting, importing, combining, exporting, etc. I would've liked to have made a packaged test version, but it takes so damn long to switch between Nanite and regular meshes that I didn't want to freeze Unreal again. Besides, I don't think the story will be too different in a packaged build.

Videos showcasing drastic differences between packaged and PIE never confirm that the same settings and internal resolution are synced. All the shaders are already compiled for my GPU, and that doesn't change in a packaged build.

More overdraw and neglected optimization = more gains with Nanite.
Less overdraw, controlled with LODs = better results than Nanite.
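If you want to reproduce the overdraw views yourself, these are the commands I believe I used (names from memory, so double-check them):

```
viewmode quadoverdraw          editor viewmode for raster quad overdraw
r.Nanite.Visualize Overdraw    Nanite's own overdraw visualization
```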

They’re not talking about Unreal having snow deformation. They had that in Unreal 3 even. What they’re saying is that NANITE landscape geometry now supports deformation which is a very different beast.

Developers have access to the source code for the engine, so if they’re concerned, they can choose to modify it themselves. Again, any developer choosing to ship a game with an incomplete engine is doing so at their own risk, and they are fully aware of said risks.

You don't need a hardware profiler to check your stats. Just use the tools that come with the engine, like you're doing in PIE. When you package the game, choose Development as the configuration instead of Shipping. You're also running the editor during PIE or Standalone, which does affect performance. Packaged games run way faster than what the in-engine stats show.
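For reference, the usual stat commands work in a Development packaged build too (open the console with the tilde key). A few of the common ones:

```
stat unit                Game/Draw/GPU thread timings in ms
stat gpu                 per-pass GPU timings
stat rhi                 draw calls and primitives submitted
t.MaxFPS 0               uncap the frame rate
r.VSync 0                disable vsync
r.ScreenPercentage 100   pin the internal resolution so comparisons are fair
```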
We don’t need to prove you wrong. You can choose not to believe us if you want. That’s your prerogative.

Why do you keep posting similar stats saying that Nanite is slower than LODs? We got the point already. You don't have to keep trying to prove that you're right. lol

Think of it as a compilation for a bigger project I'm working on.

They had that in Unreal 3 even. What they’re saying is that NANITE landscape geometry now supports deformation which is a very different beast.

I’m saying they got a basic feature working on something that is broken imo.

Developers have access to the source code for the engine, so if they’re concerned, they can choose to modify it themselves.

This is something I have addressed before: if a studio decides to use Unreal, it means they lack the funding and/or motivation to build a proprietary engine. Studios are flocking to this engine for that reason. So the technology shouldn't be shipped if it's not ready. Studios will very rarely modify the engine. This assumption is one of the biggest issues with Epic Games and their stance on poor performance across third-party uses. I'm not saying it's all Epic, but it's a good 50% their fault when it comes to UE5 releases.

Anyway, I also try to keep my tests out of PIE. It's just that with Nanite, on/off comparisons are a lot more complicated. Also, I can't profile released games, so…
Yeah, like I said, my main concern is with other studios, and Epic doesn't understand how impactful their choices are on consumers.

Weird experience: I opened this project with a fresh install. Took out all lights except a directional and a point light, everything on Low except shadows, which were on High.

Default overdraw

6.3ms without the overdraw view.

I “enable” Nanite on all the static meshes and end up with 3.6ms
I say "enable" because, look at the top: I have the Nanite overview on and nothing is showing.
That happens when the visibility buffer and other Nanite systems aren’t running.

Here was the overdraw after "enabling" Nanite.
When Nanite is on properly, the overdraw view only shows solid colors, and that is not the case here:

Then I turned on SM6 and redid everything:
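For anyone following along: SM6 is enabled in Project Settings → Platforms → Windows, which (if I remember the keys right) writes something like this to DefaultEngine.ini:

```
[/Script/WindowsTargetPlatform.WindowsTargetSettings]
DefaultGraphicsRHI=DefaultGraphicsRHI_DX12
D3D12TargetedShaderFormats=PCD3D_SM6
```

Nanite's GPU pipeline needs SM6 on PC, which is why nothing was actually running before.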

No Nanite: 6.3ms


Nanite now works as intended, and the final cost is 9.8ms, minus 0.10ms of editor overhead from turning on Nanite.

These are the results in UE 5.4; my last test was in 5.3.
In this test, Nanite added 3.32ms of overhead at 1080p on a desktop 12GB 3060.

I guess even with the really bad overdraw, enabling Nanite didn't help that much.

You do know there's a difference between broken and incomplete, right? UE5 is incomplete. The engine isn't finished. Epic themselves always suggest not shipping a game with Beta functionality, and the entire engine is effectively in Beta right now. Wait for the full release and then see what the performance is like.
Besides, it's not broken. In previous versions of the engine you couldn't render more than 2 million polys on screen. Nanite fixes this limitation by ensuring that it's only ever rendering around 2 million at once, even if there are actually billions or trillions of polys in the scene. Just because it's not working the way you thought it would doesn't mean it's broken. You want proof of this? Load up UE 4.27, import ten 100-million-poly meshes, and compare that performance to the same scene in UE5 using Nanite.
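To make that concrete, here's a very rough conceptual sketch (Python pseudocode, not Epic's actual code; the cluster fields are made up) of how a cluster-hierarchy renderer like Nanite keeps the on-screen triangle count bounded: it only refines a cluster while the coarser version's error would still be visible.

```python
def select_clusters(cluster, camera, out):
    # Projected geometric error in (rough) pixels: large up close,
    # tiny far away.
    error_px = cluster.world_error / max(camera.distance_to(cluster), 1e-6)
    if error_px <= 1.0 or not cluster.children:
        out.append(cluster)  # coarse version is indistinguishable: draw it
    else:
        for child in cluster.children:
            select_clusters(child, camera, out)  # refine further
```

So no matter how many source polys exist, the selected set stays roughly proportional to screen resolution, not scene complexity.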

Comparing something to the worst-case scenario will always make the other option appear better.
The same thing is happening with DLSS and "native": DLSS is only better than native when native is paired with TAA, which is horribly implemented in every game. This is another thing I'm releasing detailed documentation on, away from this site.

In previous versions of the engine you couldn’t render more than 2 million polys on screen.

I can render 12 million on screen. I just recently loaded a 6-million-tri scene without Nanite, and the performance got worse with Nanite enabled. Again, research overdraw. Another problem with Nanite is that when you reach the poly-count threshold where it pays off, you're required to use two other extremely expensive things: VSMs and TSR. If not TSR, then the 10x cheaper TAA, which is going to make all that detail pointless in motion. And another problem with Nanite is that it does nothing for subpixel detail versus mipmaps and LODs. It only takes moving a few meters away from an "appropriately dense" Nanite mesh for it to break down into insane noise that only intense frame smearing can fix. This can of course also happen with really dense non-Nanite meshes.
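The subpixel point is just geometry. A back-of-envelope check (illustrative numbers, not from the test above):

```python
import math

def edge_pixels(edge_len_m, distance_m, fov_deg=90.0, screen_h_px=1080):
    # Projected size in pixels of a triangle edge at a given distance.
    return (edge_len_m * screen_h_px) / (
        2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0))

print(edge_pixels(0.01, 2.0))   # ~2.7 px: still resolvable
print(edge_pixels(0.01, 10.0))  # ~0.5 px: subpixel, shimmers without TAA/TSR
```

A 1cm triangle edge goes subpixel around 10m out at 1080p, and no amount of geometric detail survives that without temporal smearing.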

There is no logical reason to use Nanite other than funding or virtual production/green-screen work, etc.

You do know there’s a difference between broken and incomplete right?

My definition of broken means UNFIT for game development. This is what I'm proving in this exact thread, for any developer to review.

If you can't hit 60fps at 1080p with Nanite enabled, you're doing something wrong, buddy. I've got a fairly dense forest scene and it's just shy of 60fps running on a 5700 XT (a GPU with an architecture a generation older than the current-gen consoles) with a 1080p render target in the least performant area. If I disable Lumen it jumps up to ~90fps. Lumen is the performance killer in UE5, not Nanite.
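In frame-time terms: ~60fps is ~16.7ms per frame and ~90fps is ~11.1ms, so Lumen is costing roughly 5.6ms on that 5700 XT scene.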

This is quite possibly the dumbest possible way to try to prove your point. You're taking the final submitted frame data, with no knowledge of what the original assets are or what kind of rendering optimizations this game's engine is doing to that data before submitting it to the GPU, and then sticking it in Unreal. (Also, as far as I can tell, you're not putting the camera in the same place relative to the scene as wherever it was when you captured it. Also, everything appears to be one material, which is obviously not how it rendered in the original. Also, your reconstruction is clearly non-manifold, implying some other advanced stuff going on under the hood in that engine.)

If I dumped a Nanite frame and reconstructed it and put it back into Unreal again, guess what? It will render faster! Y’know why? Because the raster optimization that Nanite does dynamically is now precalculated and static. Rendering the whole test meaningless.

If I disable Lumen it jumps up to ~90fps. Lumen is the performance killer in UE 5, not Nanite.

Go tell Remnant II players that. The difference between Lumen and Nanite is that Lumen has massive visual value over the poor lighting alternatives in UE, while an "appropriately dense" Nanite mesh breaks down into insane noise that only intense frame smearing can fix. At least we can turn off Lumen. I have already stated that Nanite requires multiple pieces of expensive software to go along with it.
As for your scene, I have zero context on whether it's small or how it looks: shadow method, AA method. I have stated I can get 60fps in City Sample at native 1080p, and in Fortnite with Lumen & Nanite; that doesn't mean it looks good or that the performance justifies the visuals.

This is quite possibly the dumbest possible way to try to prove your point. You’re taking the final submitted frame data, with no knowledge of what the original assets are or what kind of rendering optimizations this game’s engine are doing to that data before submitting it to the GPU

Okay, you obviously have zero idea how this was done, nor of my point about polycount vs. overdraw, which is why you decided to mention materials. I know everything about how the frame was generated, buffer by buffer, microsecond by microsecond, DX11 call by DX11 call. I synchronized the Unreal camera position with the one in the game, hence the low amount of overdraw in the attached post.

If I dumped a Nanite frame and reconstructed it and put it back into Unreal again, guess what? It will render faster!

No, if you tried to hardware/API dump a frame from a Nanite scene, it would freeze your computer for several minutes, crash, and create a corrupted 11GB DX12 profile file, for a single frame. For reference, a regular frame was captured in under a minute with a 2GB file. I chose this game since it's much more realistic than most 9th-gen titles.

Epic's decision to invest in Nanite over better LOD algorithms and transition effects directly causes negative effects in the game industry. I'm not doing anything wrong when I boot up a UE5 game and every last one performs at the same garbage framerate. Epic's "you should customize our engine" defense is poor, since avoiding engine work is exactly why people and major AAA studios use Unreal.

BUMP.

Here is the ms timing on an even bigger-scale test.
The mesh is 4,300-5,000 tris (same mesh as before), with 10,000+ instances and LODs made in 10 minutes.
Look at the ms render time with LODs.
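For scale: 5,000 tris × 10,000 instances is ~50 million source triangles before culling and LOD selection ever run; with the LODs, most instances render at a small fraction of that.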

Honestly, I think @TheKJ just wants to vent. I'm not going to reply after this post, and I'd suggest you all do the same. If they're just arguing with anyone who counters their reasoning, then there's no point. They're not here to seek help, so let them keep shouting into the void. Eventually, once UE5 is completed, they'll stop complaining. It's like people arguing with Ubisoft when they said that 30fps is actually fine for single-player games. They've only been doing this for 38 years and develop their own engines, but gamers want to be right no matter what sometimes. If that's the case, you can't help them.

so let them keep shouting into the void.

I love how people criticize my tone in this thread when others are just as, if not more, derogatory. I have had plenty of people thank me for sharing this thread, and I've stated that this thread is my documentation: a compilation of information Epic isn't willing to give to developers who care about consumer performance.

I literally made this thread to prove the results, yet people think they can convince me otherwise.

It’s like people arguing with Ubisoft when they said that 30fps is actually fine for single player games.

It's not. And Epic has the same problem. On 8th gen we had more photorealistic games running at 60fps that didn't use lightmaps. 9th gen is SEVERAL times more powerful, which should give us more visual potential. Instead, the potential is wasted on stupid technology such as Nanite and its requirements, traded for faster development that people are sick and tired of.

It doesn't matter how fast Nanite allows us to pump out games if people can't play them.

You took a RenderDoc/PIX capture, saved out the CSVs for the vertex data from each draw call, and used a script to rebuild it in Blender or some other DCC. It's not difficult. People have been using this process to rip assets from games for years.
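A minimal sketch of that rebuild step, assuming a RenderDoc CSV export of per-draw vertex data (the column names and the non-indexed-triangle-list assumption are illustrative; real captures vary):

```python
import csv
import bpy  # Blender's Python API

def rebuild_draw_call(csv_path, name="draw"):
    verts = []
    with open(csv_path, newline="") as f:
        for raw in csv.DictReader(f):
            row = {k.strip(): v for k, v in raw.items()}
            verts.append((float(row["POSITION.x"]),
                          float(row["POSITION.y"]),
                          float(row["POSITION.z"])))
    # Assume a non-indexed triangle list: every 3 vertices form one face.
    faces = [(i, i + 1, i + 2) for i in range(0, len(verts) - 2, 3)]
    mesh = bpy.data.meshes.new(name)
    mesh.from_pydata(verts, [], faces)
    obj = bpy.data.objects.new(name, mesh)
    bpy.context.collection.objects.link(obj)
```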

My point is that the point in the render pipeline where draw calls are made comes AFTER a bunch of other stuff that engines do (like culling and LOD selection). You're cutting out a significant percentage of the draw-thread overhead when Nanite is disabled because it's already been calculated for you.

And you very obviously have NOT aligned the camera properly, because half the geometry in the scene is missing, many of the normals are inverted, and the more detailed geo is on the sides of the buildings facing AWAY from the camera. You've either misaligned the camera or done an absolutely terrible job reconstructing the scene, either of which renders your test invalid.

And I (and many others) HAVE done RenderDoc dumps of Nanite frames. If it’s freezing your PC for several minutes and crashing, then I can only assume that, once again, you’re doing something wrong.

If you don't like my test, go do your own.

And you very obviously have NOT aligned the camera properly because half the geometry in the scene is missing

The bottom half is a road mesh that had issues exporting, it’s exactly the view that the game had during the capture.

you’re doing something wrong.

How many times do I need to say this? I AM NOT MAKING GAMES THAT PERFORM LIKE GARBAGE. I am NOT doing anything wrong, since I'm not one of the dumb*** developers using Nanite in their games. The people making games that run at 40fps on 3080s are the ones doing something wrong. And that is because EPIC is not doing a good enough job. Even Fortnite performs terribly on the GPU side. There is no good example. 30fps and/or broken, noisy, and/or blurry visuals is insanity on 9th gen.

On my side, I lose 35 FPS without Nanite, and I'm not even fond of high-poly meshes and high-res textures.
If I disable Lumen I get an extra 15 FPS boost.

Can you show us the scene and overdraw?

It's not about polycount. I just proved that (and have, several times before).
I WOULD LOVE to see a scene where Nanite makes sense, but you've given nothing to us or to other readers who might come across this thread.

Nanite is the biggest garbage I ever had to try to implement. Just enabling it cleanly cuts off 30% of performance (also seen in the profiler; for WHAT?! There are 0 meshes with Nanite enabled!).
Furthermore, after converting all meshes, I lost another 20 or so percent, ending up with pretty much exactly half the performance I had before, when I used auto-generated LODs with custom screen-size settings.
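For context, screen-size LOD selection is cheap math; roughly this (the thresholds here are hypothetical, and UE's exact projection differs):

```python
import math

def pick_lod(bounds_radius_m, distance_m, fov_deg=90.0,
             thresholds=(0.6, 0.3, 0.1)):
    # Approximate fraction of the viewport height covered by the mesh's
    # bounding sphere: the idea behind UE's per-LOD "screen size".
    screen_size = (2.0 * bounds_radius_m) / (
        distance_m * math.tan(math.radians(fov_deg) / 2.0))
    for lod, t in enumerate(thresholds):
        if screen_size >= t:
            return lod
    return len(thresholds)  # beyond the last threshold: coarsest LOD
```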

I'm more or less convinced that this is not an engine for games anymore… maybe I'll go back to UE4 if I want to make a game.

I’m more or less convinced that this is not an engine for games anymore

Just don't use the UE5-specific technologies. I've seen a lot of porting tests from UE4 to UE5 (meaning no other changes to settings, just the engine: no Lumen enabled, no Nanite conversion), and they showed a performance boost with UE5.

It's just that Nanite (and its VSMs), TSR, and Lumen (if your game isn't 100% static) are incredibly wasteful for performance.

Mixed feelings on Lumen; it would be fine if it had more caching and baking, but all the other lighting solutions in UE are terrible if you have basic interactivity and realism in mind.
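If you want to try the "UE4-style" path inside UE5, these are the kinds of settings involved (cvar names from memory; double-check them before relying on this):

```
r.Nanite 0                            skip the Nanite rendering path
r.Shadow.Virtual.Enable 0             regular shadow maps instead of VSM
r.AntiAliasingMethod 2                TAA instead of TSR (4)
r.DynamicGlobalIlluminationMethod 0   no Lumen GI
r.ReflectionMethod 0                  no Lumen reflections
```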

But you should still vote for the thread asking for more performance-centered design in Unreal.

How do we play games like these without Nanite? Can we have these without Nanite?

How funny: if I fullscreen those images, it's nothing but blur thanks to compression. So what is so great about them? The same thing happens with modern games; they only look crisp when scaled down to 20% of the pixels on my screen.

Nanite isn't going to make ANY of that kind of detail run well enough on consumer hardware, including current-gen consoles or even the 090-class cards, once BASIC motion is introduced, thanks to the temporal smear dependency.

The closest thing we have to those AI pics is deferred texturing for foliage, which is miles faster than Nanite.

Can we have these without Nanite ?

Trick question, so I'll give both answers.
No: it's AI / you already achieved it with AI.
Yes, if alpha texturing, screen-space shadows, AO, and lighting are used well: then we can have an optimized scene running at a crisper resolution than most UE5 console ports, with better fps (60).

Nanite and its garbage handling of subpixel detail versus optimized meshes.