We are working on a large open-world project (16k x 16k). Our original plan was to use ChunkDownloader to distribute cooked packages internally (to colleagues / developers), and eventually for live patching. However, we are currently blocked by a packaging size limitation.
AssetRegistryGenerator : The maximum size for a Pakfile is 2147MB, but the file to add is 16988MB.
This large file seems to correspond to the World Partition generated data (under the world's _Generated_ directory), which appears to be bundled into a single chunk / pak.
Is this expected behavior for large World Partition worlds?
Can World Partition generated content be meaningfully chunked, or is splitting the world into multiple World Partition maps the recommended solution?
Can you share the full error message you are getting? Knowing the name of the file that exceeds the maximum chunk size should help here. The full log file would also be useful.
Regarding the chunking of WP levels: it should be possible, but you need to make sure that all of the WP content is available at runtime. The engine will simply crash if it tries to access data that lives in a missing chunk.
While cooking and generating chunk manifests, I get the following error:
LogAssetRegistryGenerator: Error: Failed to add file /ProjectPath/Content/MAP/GAME_WORLD/GAME_WORLD to paklist 'Metadata/ChunkManifest/pakchunk151_s1.txt'. The maximum size for a Pakfile is 13435MB, but the file to add is 13436MB.
LogCook: Warning: Failed to save chunk manifest
This happens even after setting: MaxChunkSize=13435000000
Are there any situations where World Partition content cannot be split into multiple chunks/paks?
Why would a single cooked map package (GAME_WORLD) become large enough to exceed the pak file size limit?
Do you mean that the file named GAME_WORLD.ubulk is ~13 GB? If you are using Zen as the cook output store, you will have to open the Zen dashboard view to inspect the size of the data: http://localhost:8558/dashboard
I should have mentioned that MaxChunkSize is not the best way to split your project. If you rely on this parameter alone, the contents of the chunks will not be stable (i.e. files can get bumped to other chunks from one cook to the next). This parameter is mostly meant for platforms that have a maximum file size they can read.
Assigning chunks explicitly (for example through the Asset Manager's primary asset rules) will create a logical split of the data that is more consistent than the current size-based mode. You can define each level to become its own chunk with this system.
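As a rough illustration of explicit chunk assignment, a DefaultGame.ini fragment along these lines scans maps as primary assets and pins one map to a dedicated chunk. The directory path, map name, and chunk id below are made up for this example, and the exact field list varies by engine version, so treat this as a sketch and verify against your project's Asset Manager settings:

```ini
[/Script/Engine.AssetManagerSettings]
; Scan maps as primary assets (directory path is illustrative)
+PrimaryAssetTypesToScan=(PrimaryAssetType="Map",AssetBaseClass=/Script/Engine.World,bHasBlueprintClasses=False,bIsEditorOnly=True,Directories=((Path="/Game/MAP")),Rules=(Priority=-1,ChunkId=-1,bApplyRecursively=True,CookRule=Unknown))
; Put the GAME_WORLD map into its own chunk (chunk id 1001 is illustrative)
+PrimaryAssetRules=(PrimaryAssetId="Map:GAME_WORLD",Rules=(Priority=1,ChunkId=1001,CookRule=AlwaysCook))
```

The same assignment can also be made in the editor via Project Settings → Asset Manager, or with Primary Asset Label assets placed next to each level.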
I talked with the WP team, and if it is the .ubulk that is big, we think you might have a lot of non-spatially-loaded HLODs. There is a command that generates a report of the HLODs in a WP level: run wp.Editor.HLOD.DumpStats as a console command. The generated CSV contains information for every HLOD in the level. Focus on the rows that have FALSE in the SpatiallyLoaded column; the MemoryDiskSizeBytes column can be used to estimate how much of the ubulk those HLODs account for.
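Once you have the CSV, summing the sizes is straightforward. A minimal sketch, assuming the column names SpatiallyLoaded and MemoryDiskSizeBytes described above (the sample rows and HLOD names here are made up, and the real DumpStats output may contain additional columns):

```python
import csv
import io

# Made-up sample standing in for the CSV produced by wp.Editor.HLOD.DumpStats.
SAMPLE_CSV = """Name,SpatiallyLoaded,MemoryDiskSizeBytes
HLOD_A,TRUE,1048576
HLOD_B,FALSE,8589934592
HLOD_C,FALSE,2147483648
"""

def non_spatial_hlod_bytes(csv_text: str) -> int:
    """Sum the on-disk size of HLODs that are not spatially loaded."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(
        int(row["MemoryDiskSizeBytes"])
        for row in reader
        if row["SpatiallyLoaded"].strip().upper() == "FALSE"
    )

total = non_spatial_hlod_bytes(SAMPLE_CSV)
print(f"Non-spatially-loaded HLODs: {total / 1024**3:.1f} GiB")  # 10.0 GiB for the sample
```

In practice you would read the exported CSV from disk instead of the inline sample; if the non-spatially-loaded total comes close to the size of the oversized ubulk, the HLOD setup is the place to look.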