Hi there,
The vert count could differ from your DCC for a number of reasons. For instance, your DCC may not be taking into account UV or material section seams, which force those vertices to be split in the engine (a cube, for example, has 8 vertices in a modeling tool but renders with 24 once its hard edges and UV seams are split). Since Unreal counts all of these split vertices, you might see an increased vertex count compared to your DCC after import. You should definitely consider Unreal the source of truth here, since these numbers represent the statistics of the actual rendered mesh.
You should evaluate the quality difference and check whether using geometry or a baked normal map gives better fine detail. In terms of memory consumption and disk space, however, Nanite meshes are very efficient (as partially outlined in the documentation you linked), so you could probably get away with very high poly meshes and no baked normal maps. You might still want detail normal maps, but these could be shared between multiple assets to save memory. Some rough tests show that a 2 million tri Nanite mesh takes up about 27MB on disk in a cooked build, which is similar in size to a 4k normal map.

In terms of runtime memory usage, Nanite is very efficient because it’s a virtualized geometry system: only the geometry pages required to render the current scene are kept in memory. With the particular 2M tri Nanite mesh I used in my test, I was only able to get the Visible Streaming Data Size stat (visible using `stat NaniteStreaming`) up to about 10MB, and that was with the camera placed very close to the mesh so that the most detailed geometry pages were loaded. As you can see in the stats below, the default Nanite Streaming Pool Size is 512MB. Memory usage from geometry streaming will not exceed this pool size unless the pool overflows and needs to resize.
[Image Removed]

The primary concerns for runtime memory consumption are then how many unique meshes will be visible on screen at a time, and overdraw from closely stacked surfaces (which may trigger additional, non-visible pages to be streamed in). Each unique mesh in a scene will always need at least its own root page entries loaded (this is reflected in the “Root Pool Size” stat above, which is allocated in 16MB chunks), in addition to whatever visible geometry pages need to be streamed in when viewed from a particular camera pose.
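If profiling shows the streaming pool overflowing, or you want extra headroom for scenes with many unique meshes, the pool can be resized via a console variable. A minimal sketch for Config/DefaultEngine.ini, assuming the CVar is named r.Nanite.Streaming.StreamingPoolSize in your engine version (worth verifying against the `stat NaniteStreaming` output):

```ini
; Config/DefaultEngine.ini
[SystemSettings]
; Nanite streaming pool size in MB (the engine default shown in the stats above is 512).
; Only raise this if stat NaniteStreaming shows the pool overflowing and resizing.
r.Nanite.Streaming.StreamingPoolSize=768
```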
The main concerns with memory and disk space consumption will then probably be more on the development side. Since the editor version of the static mesh retains the full original source data, these assets can end up very large on disk (162MB for the 2M tri asset I used for testing). Having to push and pull assets this large through source control could quickly become a bottleneck for the team, not to mention blow out the storage of your source repository. In this case, you should evaluate what size you can afford to make each asset (something like 500,000 tris might be more reasonable, which comes in at about 37MB), and whether that still provides enough geometric detail to drop the normal map. Key hero assets are a good example of where you might just throw in the full 1-2M tri model, since there will be relatively few of these.
On lower end consoles and PCs, you could try playing with r.Nanite.MaxPixelsPerEdge to increase the target triangle size on screen. This will reduce streaming pool usage, and might also improve performance on lower end hardware if you raise the value high enough (the default is 1), but you would have to experiment with this.
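If it helps, here is a rough sketch of how you might scope that per device profile rather than globally; the "WindowsLowEnd" profile name below is just a placeholder for whatever low-end profiles your project already defines:

```ini
; Config/DefaultDeviceProfiles.ini (placeholder profile name, for illustration only)
[WindowsLowEnd DeviceProfile]
DeviceType=Windows
BaseProfileName=Windows
; Larger target triangles on screen => fewer detailed pages streamed in, lower pool usage.
+CVars=r.Nanite.MaxPixelsPerEdge=2
```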
Let me know if you have any further questions,
Regards,
Lance Chaney