How to optimize a Nanite-enabled mesh?

Hey all! Wanted to try out Nanite.

I used a prebuilt environment “City Subway Train Modular” on the marketplace (it’s free).

Created a UE4 project with it, opened it in UE5, then enabled Nanite on all the static meshes with opaque materials. Rebuilt lighting at production quality. Packaged and tested the FPS. I later enabled virtual shadow maps, rebuilt lighting, and packaged that as well.

So I currently have 4 builds.
-Stock with no changes, straight packaged.
-Nanite enabled, packaged.
-Stock, virtual shadow enabled, packaged.
-Nanite enabled, virtual shadow enabled, packaged.
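For anyone reproducing the builds: the virtual shadow toggle was done project-wide. In the UE5 Early Access builds that corresponds to the `r.Shadow.Virtual.Enable` console variable, which can be baked into `Config/DefaultEngine.ini` like this (a sketch; double-check the cvar name against your engine version):

```ini
[/Script/Engine.RendererSettings]
; Switch shadow mapping to Virtual Shadow Maps (beta in early UE5 builds)
r.Shadow.Virtual.Enable=1
```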

These are the FPS I’m getting on three different machines:
1070
Normal: 112
Nanite: 85
Normal + Virtual Shadow: 40
Nanite + Virtual Shadow: 36

3070
Normal: 160
Nanite: 140
Normal + Virtual Shadow: 70
Nanite + Virtual Shadow: 65

5700XT
Normal: 92
Nanite: 82
Normal + Virtual Shadow: 37
Nanite + Virtual Shadow: 35
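To make the cost of each feature easier to compare across cards, here is the relative framerate drop computed from the numbers above (plain Python, values copied from this post):

```python
# Relative FPS drop vs. the stock build, from the framerates measured above.

def pct_drop(baseline_fps, fps):
    """Percentage of framerate lost relative to the baseline."""
    return (baseline_fps - fps) / baseline_fps * 100

# GPU: (stock, nanite, stock + VSM, nanite + VSM)
results = {
    "1070":   (112, 85, 40, 36),
    "3070":   (160, 140, 70, 65),
    "5700XT": (92, 82, 37, 35),
}

for gpu, (stock, nanite, vsm, nanite_vsm) in results.items():
    print(f"{gpu}: Nanite costs {pct_drop(stock, nanite):.1f}%, "
          f"VSM costs {pct_drop(stock, vsm):.1f}%, "
          f"both cost {pct_drop(stock, nanite_vsm):.1f}%")
```

Across all three cards, VSMs are the dominant cost (framerate drops roughly 56–64% from stock), while Nanite alone costs roughly 11–24%.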

Here’s a link to the packaged builds if you guys wish to test yourself: https://drive.google.com/file/d/1mVfS1ZNveCHYqwrBq35e7JN9cDowucQM/view?usp=sharing

I don’t think I can share the project files given I’m not the creator. If I’m wrong, I’m willing to share those too. Or you can make them yourself from the marketplace.

I tried the same procedure on “Factory Environment Collection” from the marketplace with similar results. Performance was worse (28 fps → 24 fps). There also seems to be an issue with lots of meshes loading the wrong texture, but that’s another topic.

What am I missing or doing wrong here? I was expecting a performance increase, not decrease.

Nanite is meant for really high-poly meshes. The static meshes you used are already optimized low-poly meshes, so what’s the point? Also, Nanite does some things behind the scenes that add overhead. What it buys you is the ability to use high-poly meshes in real time at a fraction of the cost.

“I later enabled virtual shadows, rebuilt lighting, and packaged that as well”
What do virtual shadows have to do with baked lighting?

The performance increase comes from Nanite being more about pixels on screen, and less about triangle count. If your triangle count is ok to begin with, then Nanite won’t do much other than add some overhead.

I was hoping the number of repeated meshes would give Nanite the advantage. But even in a more complex scene (the factory environment) with 937,393 unique triangles and 9,164,979 total triangles, it’s still unable to gain the advantage. I guess Nanite has a more specific use case than I thought (unless, again, I’m doing something wrong).

The editor was giving me warnings that Nanite is best used with virtual shadow maps, not normal shadow mapping. Since I don’t know exactly what virtual shadow maps do, I rebuilt lighting just in case. Is that not necessary? I honestly don’t know.

If you look at yesterday’s livestream, they show off the first Nanite demo. The demo hovers around 20,000,000 rendered triangles in most scenes, but the actual source amount is in the billions.

I think the scene complexity you are testing on is too low for Nanite to shine. It works, but so does the old way of doing it.

PS: Virtual shadow maps work better with Nanite because they leverage the Nanite structure to build shadows. Other techniques need to use the Nanite proxy mesh or distance fields.

Nanite actually works fairly well with both repeated (instanced) meshes and unique ones. That is one of the cool features about it. Nanite uses one draw call per material, not one per unique mesh.

I see, so as I understand it, it’s still better to use traditional static meshes in some cases, then.

The Unreal documentation says “Nanite should generally be enabled wherever possible. Any Static Mesh that has it enabled will typically render faster, and take up less memory and disk space.” So I was under the impression these scenes would benefit as well.


I agree, a bit confusing. Are we doing something wrong? Because in many cases performance isn’t improved.

I noticed with the “stat gpu” command that “Nanite instance culling” is taking a lot of time, sometimes becoming the heaviest GPU pass, sometimes way out of budget (like 30 ms for this pass alone).
I don’t know how this culling works, but I think that’s because Nanite is still an experimental feature and will get a lot of fixes in the coming months.
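For anyone who wants to check the same breakdown, these are stock UE console commands (open the in-game console in a development build), nothing project-specific:

```
stat gpu
stat unit
ProfileGPU
```

`stat gpu` shows the per-pass GPU timings (this is where “Nanite instance culling” appears), `stat unit` gives the overall frame/game/draw/GPU split, and `ProfileGPU` dumps a one-off detailed GPU profile to the log.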

The point is Nanite makes any mesh something like 7 times lighter on disk, and is supposed to draw it more efficiently. It doesn’t matter if it’s high-poly or a more optimized low-poly mesh; they said on the last stream that Nanite should be used as much as possible, and the documentation says it too, so this shouldn’t happen.


Here is a video that shows Nanite working on a real world case:

Looks about the same as all the other YouTube videos demonstrating Nanite: it loads a high-poly model without LOD optimization and compares it with Nanite.

From the discussions, I think the current summary is this:

Use nanite when:
-The object is large enough that the player can be close enough to see details while the far edge of the model is distant enough that rendering the extra detail no longer contributes to image quality (i.e. a very large rock).
-The object is highly detailed and repeated a lot in a scene to a point where it reaches 20 million rendered triangles.
-The object visually requires perfect detail transitions. No LOD swap should be visibly seen.
Otherwise, the overhead isn’t worth it and models should stick to being optimized with LODs.

If anything I said is wrong, feel free to correct me!


Put a few hundred million polys in a scene in 4.26 and in UE5 and you will understand the advantages of Nanite.

It scales incredibly well with scene complexity… which is perfect because I’m the type of person that always wanted to put millions/billions of polygons in a scene.

I also see no difference between using and not using Nanite. Framerate is identical. My scene has 5 million triangles; I think it should still make a difference. Many scenes simply don’t need many polygons.

I wonder if there is a parameter that controls Nanite’s effect based on distance, e.g. one scalar that makes distant polys get reduced more aggressively.
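There is at least one knob in that direction, I believe: in the Early Access builds the `r.Nanite.MaxPixelsPerEdge` cvar sets the screen-space error target Nanite refines to (default 1 pixel), so raising it makes Nanite settle for coarser clusters, which hits distant geometry hardest. Treat the exact name as version-dependent and check the `r.Nanite.*` cvars in your build:

```
r.Nanite.MaxPixelsPerEdge 2
```

Larger values mean fewer rendered triangles at some cost in silhouette detail.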

I’m having the same issues. I’m using low-poly assets and I’m getting 35 fps on a 2080 and a Ryzen 9 5900X. If I switch to DirectX 11, I’m getting 70-80 fps.

BUMP.
It’s obvious from Fortnite’s performance, an Epic Games production, and from other games utilizing Nanite: the performance will always be worse vs. LOD optimization.

What we really need is for the auto-LOD algorithm to resemble Nanite’s scale-down method. And a better dither effect; this temporal dither is so ugly.

FN and City Sample (with no AI) run at the same LOW FPS with the same settings. I tested this myself by changing the City Sample’s project settings to the most optimized settings I found in FN, which gave 51 fps at native 1080p on an RTX 3060 (no potato card here). That’s with no motion blur, no AA method, and no post-process, on high “60fps” Lumen.

It shows Nanite is not 60 fps friendly on current-gen hardware unless you use an ugly, ghosting temporal upscaler to reach resolution.

So wtf would I use Nanite for “detail”?

  • Certainly not for “performance”

  • Or “seamless LOD transition” since I can probably make my own dither effect for regular LODs.

The only reason to use this is if you are a lazy developer who wants to slap on DLSS to fix your problems. That’s it.

Of course million-polygon assets are going to render better with Nanite. But the fact is you don’t need million-polygon assets, and most assets shouldn’t be that dense.

Nanite is not a magic wand. You can get great-looking assets using lower-poly meshes with small multi-tiled 8K UV textures; Warframe is a great example of that. Also, textures have far less aliasing than high-poly meshes.

Put a few hundred million polys in a scene in 4.26 and UE5

Cough Cough, 5 million without Nanite does better than with Nanite turned on.

Cap. Also, in combination with Lumen and VSM it will perform much worse than with Nanite enabled.


Turning off VSMs, whether Nanite was on or not, always helped perf.
But I feel like VSMs are worth it due to the more realistic look they can give to a game environment.