The best way to check the triangle-to-vertex ratio for Nanite, and an overall polycount question

Hello,

From the many answered questions here, it is evident that artists need to be careful with the tris-to-vert ratio when creating Nanite geometry.

Where should we check the ratio, in the DCC or in Unreal? I am asking because the vert count differs between Maya and Unreal.

While Nanite handles a huge number of vertices efficiently, I’m being cautious about memory consumption. Would it be better to use a Nanite approach that relies on geometry for bevelled edges and other small shadow-casting details, while still baking normal maps for areas that would require too many verts otherwise? I understand there’s a memory cost for the normal maps, but would fewer vertices offer better performance and memory consumption on lower-end PCs and consoles? Or would you recommend dropping baked normals altogether?

There is a section in the documentation with the LODs and Nanite comparison, but having Nanite meshes that are 1 million triangles each on Xbox Series S scares me. With the approach above, we would have fewer triangles (a mid-poly mesh with no classical LODs, but with Nanite benefits) and still keep the details in the normal maps.

Thank you!

Hi there,

The vert count can differ from your DCC for a number of reasons. For instance, your DCC may not take UV or material section seams into account, which force those vertices to be split in the engine. Since Unreal counts all of these split vertices, you might see an increased vertex count compared to your DCC after import. You should definitely consider Unreal the source of truth here, since these numbers represent the statistics of the actual rendered mesh.

You should evaluate the quality difference and check whether using geometry or a baked normal map gives better fine detail. However, in terms of memory consumption and disk space, Nanite meshes are very efficient (as partially outlined in the documentation you linked), so you could probably get away with very high-poly meshes and no baked normal maps. You might still want detail normal maps, but these can be shared between multiple assets to save memory.

Some rough tests show that a 2 million tri Nanite mesh takes up about 27MB on disk in a cooked build, which is similar in size to a 4k normal map.

In terms of runtime memory usage, Nanite is very efficient because it’s a virtualized geometry system: only the geometry pages required to render the current scene are kept in memory. With the particular 2M tri Nanite mesh I used in my test, I was only able to get the Visible Streaming Data Size stat up to about 10MB (visible using `stat NaniteStreaming`), and that was with the camera placed very close to the mesh so that the most detailed geometry pages were loaded. As you can see in the stats below, the default Nanite Streaming Pool Size is 512MB. Memory usage from geometry streaming should remain constant at this pool size, unless the pool overflows and needs to resize.

[Image removed: `stat NaniteStreaming` output]

The primary concerns for runtime memory consumption are then how many unique meshes will be visible on screen at a time, and overdraw from closely stacked surfaces (which may trigger additional, non-visible pages to be streamed in). Each unique mesh in a scene will require at least its own root page entries to be loaded at all times (this is reflected in the “Root Pool Size” stat above, which is allocated in 16MB chunks), in addition to whatever visible geometry pages need to be streamed in for a particular camera pose.
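If you want to keep an eye on this during development, a minimal sketch would be to pin the streaming pool budget in config and then watch the stats at runtime. The cvar name and value below are illustrative and should be verified against your engine version:

```
; DefaultEngine.ini -- illustrative values only; verify cvar names for your engine version
[SystemSettings]
; Nanite streaming pool budget in MB (the engine default is 512)
r.Nanite.Streaming.StreamingPoolSize=512
```

Then run `stat NaniteStreaming` from the console while flying the camera through your densest areas to see how the Visible Streaming Data Size and Root Pool Size stats behave against that budget.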

The main concerns with memory and disk space consumption will then probably be more on the development side. Since the editor version of the static mesh retains the full original source data, these assets can end up very large on disk (162MB for the 2M tri asset I used for testing). Having to push and pull large assets like this from source control could quickly become a bottleneck for the team, not to mention quickly blowing out the storage of your source repository. In this case, you should evaluate what size you can reasonably afford to make each asset (something like 500,000 tris might be more reasonable, which takes about 37MB), and whether that provides high enough geometric detail to drop the normal map. Key hero assets are a good example where you might just throw in the full 1-2M tri model, since there will be relatively few of these.

On lower-end consoles and PCs, you could try playing with `r.Nanite.MaxPixelsPerEdge` to increase the target triangle size on screen. This will drop the streaming pool usage, and may improve performance on lower-end hardware if you raise the value high enough (the default is 1), but you would have to experiment with this.
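To experiment with this at runtime, you can change the value on the fly from the in-game console while watching the relevant stats. The value here is just an example to compare against the default, not a recommendation:

```
; Type these into the in-game console (the comment lines are just annotations):
r.Nanite.MaxPixelsPerEdge 2
stat NaniteStreaming
stat GPU
```

Once you find a value you are happy with, it can be applied per platform through your device profiles or scalability settings rather than globally.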

Let me know if you have any further questions,

Regards,

Lance Chaney

Some extra information after discussing with my colleagues this morning.

For scaling to lower-end platforms, keep in mind that Nanite scales better with pixel count than with triangle count. I recommended experimenting with `r.Nanite.MaxPixelsPerEdge` before (which changes Nanite’s target triangle density), but you will probably get better mileage out of scaling down your render resolution and relying on a super-resolution solution (TSR, or a third-party solution such as DLSS, FSR or XeSS) to upscale to the target display resolution. I highly recommend customizing your `r.DynamicRes.*` settings for your target platforms and scalability groups; this is probably going to be your most obvious scaling option. Decreasing the render resolution will not only increase performance, but will also drop VRAM usage due to the smaller GPU buffer and texture requirements.
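As a rough sketch of what that looks like in config: the cvar names below are the current UE5 ones as far as I know, but double-check them for your engine version, and tune the numbers for your own frame budget. The values are purely illustrative, and for per-platform scaling you would normally put these in your device profiles rather than globally:

```
; DefaultEngine.ini -- illustrative values only
[SystemSettings]
; Use TSR for upscaling (r.AntiAliasingMethod: 2 = TAA, 4 = TSR)
r.AntiAliasingMethod=4
; Force dynamic resolution on (0 = off, 1 = per game user settings, 2 = force enabled)
r.DynamicRes.OperationMode=2
; Let the primary screen percentage float between these bounds
r.DynamicRes.MinScreenPercentage=60
r.DynamicRes.MaxScreenPercentage=100
; GPU frame time budget in ms that dynamic resolution tries to hit (16.6 ms ~= 60 fps)
r.DynamicRes.FrameTimeBudget=16.6
```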

For dealing with very large static mesh assets on the development side, you might want to consider looking into Unreal’s new Virtual Assets workflow (see here and here). Basically, this lets you split each large asset into a small metadata portion that is always synced, plus large bulk data that is only synced on demand. While this wouldn’t save on total disk space usage on your revision control server, it could save a lot of space and sync time on your team’s development machines. It does require a bit of additional work to set up, though.
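For reference, enabling virtualization starts with a small piece of project config. This is only a partial sketch from memory of the Virtual Assets documentation, so treat the section and key names as assumptions and check the docs linked above for your engine version:

```
; DefaultEngine.ini -- partial sketch; see the Virtual Assets docs for the full setup
[Core.ContentVirtualization]
SystemName=Default
```

Beyond this, you also need to configure a backend graph describing where the virtualized bulk data is stored (for example, your Perforce server) and where it is cached locally, which is covered in the documentation pages mentioned above.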

Hopefully that more fully answers your questions, but let me know if there’s anything else or if I missed something.

Regards,

Lance Chaney

Amazing, thank you so much for the detailed answer!

You’re very welcome.

I’m going to close this case out now, but if you have any further questions, feel free to respond to reopen it.

Regards,

Lance Chaney