Hey all! Wanted to try out Nanite.
I used a prebuilt environment “City Subway Train Modular” on the marketplace (it’s free).
Created a UE4 project with it, opened it in UE5, then enabled Nanite on all the static meshes with opaque materials. Rebuilt lighting at production quality. Packaged and tested the FPS. I later enabled virtual shadow maps, rebuilt lighting, and packaged that as well.
So I currently have 4 builds.
-Stock with no changes, straight packaged.
-Nanite enabled, packaged.
-Stock, virtual shadow enabled, packaged.
-Nanite enabled, virtual shadow enabled, packaged.
These are the FPS I’m getting on three different machines:
Machine 1:
-Normal + Virtual Shadow: 40
-Nanite + Virtual Shadow: 36
Machine 2:
-Normal + Virtual Shadow: 70
-Nanite + Virtual Shadow: 65
Machine 3:
-Normal + Virtual Shadow: 37
-Nanite + Virtual Shadow: 35
Here’s a link to the packaged builds if you guys wish to test yourself: https://drive.google.com/file/d/1mVfS1ZNveCHYqwrBq35e7JN9cDowucQM/view?usp=sharing
I don’t think I can share the project files, given I’m not the creator. If I’m wrong about that, I’m willing to share those too. Or you can build them yourself from the marketplace assets.
I tried the same procedure on “Factory Environment Collection” from the marketplace with similar results. Performance was worse (28 fps → 24 fps). There also seems to be an issue with lots of meshes loading the wrong texture, but that’s another topic.
What am I missing or doing wrong here? I was expecting a performance increase, not decrease.
Nanite is used for really high-poly meshes. The static meshes you used are already optimized low-poly meshes, so what’s the point? Also, Nanite does some things behind the scenes that add overhead. What it gives you is the ability to use high-poly meshes in real time at a fraction of the usual cost.
“I later enabled virtual shadows, rebuilt lighting, and packaged that as well”
what do virtual shadows have to do with baked lighting?
The performance increase comes from Nanite being more about pixels on screen, and less about triangle count. If your triangle count is ok to begin with, then Nanite won’t do much other than add some overhead.
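As a toy illustration of that point (the constants below are completely made up for the sketch, not engine measurements): once Nanite's roughly fixed per-frame overhead exceeds what you save on triangle processing, a low-poly scene gets slower, not faster.

```python
# Toy cost model -- illustrative only; the coefficients are invented,
# not real engine numbers.

def traditional_cost(triangles: int) -> float:
    """Traditional rasterization cost grows roughly with triangle count."""
    return triangles * 1.0e-6

def nanite_cost(pixels: int, overhead: float = 2.0) -> float:
    """Nanite cost tracks pixels shaded, plus a fixed per-frame overhead
    (culling, streaming, etc.)."""
    return pixels * 1.0e-6 + overhead

PIXELS_1080P = 1920 * 1080  # ~2.07M pixels on screen

# Low-poly scene: the fixed overhead dominates, so traditional wins.
low_poly = 900_000
print(traditional_cost(low_poly) < nanite_cost(PIXELS_1080P))   # True

# Very high-poly scene: Nanite wins because its cost no longer
# scales with triangle count.
high_poly = 500_000_000
print(traditional_cost(high_poly) > nanite_cost(PIXELS_1080P))  # True
```

Same idea in words: the subway scene's triangle budget is already below the crossover point, so Nanite can only add overhead there.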
I was hoping the number of repeated meshes would give Nanite the advantage. But even in a more complex scene (the factory environment), with 937,393 unique triangles and 9,164,979 triangles in total, it’s still unable to gain an advantage. I guess Nanite has a more specific use case than I thought (unless, again, I’m doing something wrong).
The editor was giving me warnings that Nanite is best used with virtual shadow maps, not normal shadow mapping. Given that I don’t know exactly what virtual shadow maps do, I did a lighting build just in case. Is that not necessary? I honestly don’t know.
If you look at yesterday’s livestream, they show off the first Nanite demo. The demo hovers around 20,000,000 rendered triangles in most scenes, but the actual source amount is in the billions.
I think the scene complexity you are testing on is too low for Nanite to shine. It works, but so does the old way of doing it.
PS: Virtual shadow maps work better with Nanite because they leverage the Nanite structure to build shadows. Other techniques have to use the Nanite proxy mesh or distance fields.
Nanite actually works fairly well regardless of whether meshes are repeated (instanced) or unique. That’s one of the cool features about it. Nanite uses one draw call per material, not per unique mesh.
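A toy sketch of that batching claim (my own illustration, not engine code): a traditional renderer issues roughly one draw call per mesh instance (ignoring engine-side instancing optimizations), while Nanite batches by material.

```python
# Illustrative draw-call counting -- not real engine behavior, just the
# shape of the argument: per-instance vs. per-material batching.

scene = [
    # (mesh_name, material)
    ("pillar", "concrete"), ("pillar", "concrete"), ("pillar", "concrete"),
    ("rail",   "metal"),    ("rail",   "metal"),
    ("sign",   "metal"),
]

traditional_draw_calls = len(scene)                 # one per instance
nanite_draw_calls = len({mat for _, mat in scene})  # one per material

print(traditional_draw_calls)  # 6
print(nanite_draw_calls)       # 2
```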
I see. So, as I understand it, it’s still better to use traditional static meshes in some cases, then.
The Unreal documentation says “Nanite should generally be enabled wherever possible. Any Static Mesh that has it enabled will typically render faster, and take up less memory and disk space.” So I was under the impression these scenes would benefit as well.
I agree, it’s a bit confusing. Are we doing something wrong? Because in many cases performance isn’t improved.
I noticed with the “stat gpu” command that “Nanite instance culling” is taking a lot of time, sometimes becoming the heaviest GPU pass and sometimes way out of budget (like 30 ms for this pass alone).
I don’t know how this culling works, but I think that’s because Nanite is still an experimental feature and will get a lot of fixes in the coming months.
The point is that Nanite makes any mesh something like 7 times lighter on disk and is supposed to draw it more efficiently. It doesn’t matter if it’s high poly or a more optimized low poly. They said on the last stream that Nanite should be used as much as possible, and the documentation says it too, so this shouldn’t happen.
Here is a video that shows Nanite working on a real world case:
Looks about the same as all the other YouTube videos demonstrating Nanite: it loads a high-poly model without LOD optimization and compares it with Nanite.
From the discussions, I think the current summary is this:
Use nanite when:
-The object is large enough that the player can get close enough to see fine detail, while distant parts of the same model are far enough away that rendering the extra detail no longer contributes to image quality (e.g. a very large rock).
-The object is highly detailed and repeated so often in a scene that the total reaches something like 20 million rendered triangles.
-The object visually requires perfect detail transitions; no LOD swap should be visible.
Otherwise, the overhead isn’t worth it and models should stick to being optimized with LODs.
If anything I said is wrong, feel free to correct me!
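Sketching that checklist as a rough function (the thresholds are arbitrary judgment calls on my part, except the 20-million rendered-triangle figure quoted from the livestream):

```python
# Rough rule-of-thumb from the thread's summary. The HIGH_POLY cutoff is
# an invented placeholder; SCENE_BUDGET is the figure quoted from the
# livestream. Neither is official guidance.

def consider_nanite(source_triangles: int,
                    rendered_triangles: int,
                    needs_seamless_lods: bool) -> bool:
    HIGH_POLY = 1_000_000        # arbitrary "high-poly source" cutoff
    SCENE_BUDGET = 20_000_000    # rendered-triangle figure from the demo
    return (source_triangles >= HIGH_POLY
            or rendered_triangles >= SCENE_BUDGET
            or needs_seamless_lods)

# The subway scene: low-poly, already optimized, no LOD-popping complaints.
print(consider_nanite(900_000, 9_000_000, False))     # False
# A photogrammetry rock with tens of millions of source triangles.
print(consider_nanite(30_000_000, 25_000_000, True))  # True
```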
Put a few hundred million polys in a scene in 4.26 and UE5 and you will understand the advantages of Nanite.
It scales incredibly well with scene complexity… which is perfect because I’m the type of person that always wanted to put millions/billions of polygons in a scene.
I also see no difference when using or not using Nanite; the framerate is identical. My scene has 5 million triangles. I think it should still make a difference. Many scenes simply don’t need many polygons.
I wonder if there is a parameter that controls Nanite’s effect based on distance? E.g., farther polys would be reduced more based on that scalar.
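As far as I know, Nanite’s simplification is driven by screen-space error rather than raw distance (far geometry covers fewer pixels, so it simplifies more automatically). There is a cvar for tuning how aggressive that is; the name below is from UE5 Early Access, so verify it against your engine version:

```ini
; Hedged example, DefaultEngine.ini -- cvar name as of UE5 Early Access.
; Higher values allow coarser clusters per pixel (more simplification);
; the default is 1.
[SystemSettings]
r.Nanite.MaxPixelsPerEdge=2
```

You can also set it at runtime from the console (`r.Nanite.MaxPixelsPerEdge 2`) to compare visually.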