Moving forward, detractors of Nanite tech have claimed it would take enormous amounts of disk space across games. Combined with the fact that most of these games tend to use the same Quixel assets, would it be in our best interest to have a kind of shared repository, installed once to our fastest drive, for ALL future games? That would allow actual game sizes to shrink drastically: games could sit on a secondary, slower SSD while a main fast NVMe holds the repository, increasing asset-streaming speed and decreasing load times.
It makes sense, since most of the objects that create bloat are high-polygon rocks or vegetation, which will most likely be shared across games — possibly with some AI method of tweaking the objects at load time to create more variation in model or texture.
This is what mega-assemblies are meant to help address. You can stack multiple lower-poly items into a single, aggregate assemblage, and since Nanite doesn’t lose detail, you can use the sheer amount of small stuff to make a ‘bigger’, more detailed thing.
Each small thing is small on disk; it’s just replicated multiple times in your assemblage.
Of course, if you want that one 20 GB hero asset, the mega-assembly won’t help, but it does offer a path to at least sidestep the issue with a bit of work.
Look at the Valley of the Ancients demo. Each of those stone-pillar/stack thingies is 25+ parts, some very small, just to provide fine-level detail. Might as well be a movie-level asset.
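To see why the mega-assembly approach saves disk space, here’s a back-of-envelope sketch. The mesh size, instance-transform cost, and part count are all illustrative assumptions, not measured UE5 numbers — the point is just that a reused mesh is stored once, and each extra placement only costs a transform:

```python
# Back-of-envelope: N unique high-poly meshes on disk
# vs. one shared mesh placed N times as instances.
# All numbers are hypothetical, for illustration only.

MESH_SIZE_MB = 200         # assumed size of one high-poly Nanite mesh on disk
INSTANCE_XFORM_BYTES = 64  # rough per-instance cost: a 4x4 float transform
N = 25                     # parts in one "mega-assembly" pillar

# Every part stored as its own unique mesh:
unique_cost_mb = N * MESH_SIZE_MB

# One mesh stored once, replicated N times via instancing:
instanced_cost_mb = MESH_SIZE_MB + (N * INSTANCE_XFORM_BYTES) / 1_000_000

print(f"{N} unique meshes:      {unique_cost_mb:.1f} MB")
print(f"1 mesh, {N} instances:  {instanced_cost_mb:.3f} MB")
```

So the 25-part pillar costs barely more on disk than a single copy of its source mesh — which is why stacking many replicated small parts scales so much better than authoring one giant unique asset.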
Nanite, however, turns the idea of a scene limited by poly count/overdraw into one limited by sheer pixel count. The clustering/visibility mechanism only requires the engine to draw (just about) exactly what you need, so one can really just decorate like they would a cake: put piping (meshes) everywhere.
With current non-Nanite tech, your poly count will still get the better of you, so that approach is somewhat limited.