I would assume PCG should be able to handle lots of geometry, but I'm having trouble even with a simple test: 4 million points and 1 low-poly rock mesh. Am I doing something wrong? PCG with World Partition keeps crashing on Unreal 5.7.3-4 with an RTX 5080. My test is the default World Partition level and a simple PCG graph: Get Landscape Data -> Surface Sampler (density set to 1 to generate 4 million points) -> Static Mesh Spawner with a rock static mesh. It generates OK, but at very low FPS. If I enable hierarchical generation in the graph and then enable Is Partitioned on the level, it generates 4 times more points when I debug, and crashes the GPU if I try to spawn meshes. I also tried initially bypassing the mesh spawner when placing the PCG graph in the level, so that it's empty when generated for the first time, then enabled Is Partitioned, then re-enabled the spawner. Still getting a crash. If the bounds are small it works, but the whole idea is to generate large-scale worlds.
Look up the saying on that - it’s apt in this case.
Relying on the engine for starters.
Again, that’s the engine.
Yea, no.
First off: World Partition can't handle much of anything; World Composition can.
Up to the limit of a float/double, if you change the engine source. That's likely bigger than Earth if you do things properly.
Second issue is the PCG itself - it's not magic. You still add assets, and if the assets aren't perfectly optimized you get the results you are getting. Even when they are, it depends on draw-call count + CPU load - stuff you need to test and check.
Third issue is the fact that you expect stuff to spawn and stay there - that's unrealistic. Imagine loading every single blade of grass present in the world and keeping its data in memory. Not possible in our lifetime. Stuff like grass vanishes past 50m and gets replaced with 2D cardboard cutouts that are usually grouped (think HLOD, but those don't work in engine either).
Same idea with rocks, or anything else. In a proper system (something the engine doesn't really offer, except maybe with Nanite) you'd have 100 rocks visible in the immediate vicinity where needed, and the further away you get, the more they'd be grouped into solid clusters.
All of that said, unless Nanite has become something of a magic bullet (which I highly doubt; the engine is built by eejits, not decent devs, and there is zero positive feedback on the forums), you always have to carefully balance your overall scene with numerous bench tests: whenever you add or create things, always check for viability and stability in a packaged build.
And you are still always bound by mathematical limits, unless you make your own systems using other things (hint: textures hold much more than just images).
Not a particularly helpful, generic reply. The question was specifically about partitioning PCG, not optimizing performance. I might not have phrased my question explicitly enough.
The answer is to set the PCG graph to Generate at Runtime; it then runs as I was expecting, with no performance impact.
My thinking was that simply enabling partitioning would divide the 5 million points into grid cells (which is nothing, considering Nanite meshes run 1-10 million triangles) and then stream/unload geometry in the level based on camera position and distance settings. And that's exactly what it does with runtime generation. If set to generate on load, it simply generates an ISM component with 5 million instances, which Unreal still doesn't like, even with distance culling set up and Nanite handling culling at the triangle level. In that case, why does partitioning even exist separately from runtime generation? Currently it doesn't seem to partition correctly when set to generate on load; my assumption would be to simply have one ISM component generated per grid cell, which should not eat up all GPU memory and crash.
Maybe re-read the answers to your direct quotes, since you're asking the same questions in a different way again. Btw, if you were trying to prove your hypothesis, you blatantly failed…