Paking cooked assets of multiple projects into a single pakfile (with IoStore).

Hi, we are currently converting millions of (generated) fbx files to uassets and ultimately cooking/paking them for use in the Editor and at Runtime with our custom Engine (based on 5.5). Due to the sheer number of files, we are batching the conversion using a custom Dataprep pipeline through dummy uprojects and dumping the converted files (from each /Content folder) into a single directory, making sure to maintain all relative paths - which seems to work well for our use case. Then we batch the cook process, again using dummy uprojects, to populate the Saved/Cooked/<Platform>/<Project>/Content directory. Currently we are able to pak (with IoStore) the cooked content per dummy project, leaving us with multiple sets of pak/ucas/utoc files which we can use in the Editor and at Runtime.

However, it seems that paking with IoStore depends on the packagestore.manifest file generated at cook time, which has the package name of each asset baked in (e.g. /Game/Path/To/My/Asset) and is then consumed during the paking process in IoStoreUtilities.cpp. This leads to our current situation: assets loaded (in Editor or at Runtime) from these pak/ucas/utoc files are automatically mounted under /Game, even when we specify a custom mount point in FPakPlatformFile::Mount(…).
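For reference, this is roughly how we mount the containers today (simplified sketch; the helper name and lack of error handling are ours):

```cpp
#include "HAL/PlatformFileManager.h"
#include "IPlatformFilePak.h"

// Simplified version of how we mount a generated container today.
bool MountGeneratedPak(const FString& PakPath)
{
	FPakPlatformFile* PakPlatformFile = static_cast<FPakPlatformFile*>(
		FPlatformFileManager::Get().FindPlatformFile(FPakPlatformFile::GetTypeName()));
	if (!PakPlatformFile)
	{
		return false;
	}

	// The third argument is the mount point we would like to use, but packages
	// inside the .ucas/.utoc are resolved by pre-computed package ID, so they
	// still surface under /Game regardless of what we pass here.
	return PakPlatformFile->Mount(*PakPath, /*PakOrder*/ 0, TEXT("/MyCustomMountPoint/"));
}
```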

Our goal is to be able to “dump” all cooked content to a single directory while being able to pak the content (with IoStore) in any configuration (i.e. not being bound by a single packagestore.manifest) and to specify a custom mount point at pak time (e.g. loading our assets at Runtime like /MyCustomMountPoint/Path/To/Asset).
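In other words, we would like a load like the following to just work at Runtime (the object path is illustrative, produced by our generator):

```cpp
// Hypothetical: the object path comes from our generator, not from the engine.
UObject* Asset = LoadObject<UObject>(nullptr, TEXT("/MyCustomMountPoint/Path/To/Asset.Asset"));
```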

After some investigation, we think it may be possible to specify an array of packagestore.manifest files and pull the needed info from that array instead of a single manifest - but we aren’t entirely sure this would work. We are wondering if there is an existing way to achieve our goal (perhaps a command-line argument we are missing) or a different way to manage our pipeline that minimizes the number of times we have to convert/cook content as more generated fbx files come in - as these are the longest stages in our pipeline.

For additional context, we are automating the conversion with a custom UCommandlet, and cooking/paking through normal usage of the BuildCookRun automation tool. We have custom scripts to set up the needed context for each stage (like generating the dummy uprojects).
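The commandlet is shaped roughly like this (heavily trimmed; ConvertFbxDirectory is our own Dataprep-driven helper, not engine API):

```cpp
#include "CoreMinimal.h"
#include "Misc/Parse.h"
#include "Commandlets/Commandlet.h"
#include "MyConvertCommandlet.generated.h"

UCLASS()
class UMyConvertCommandlet : public UCommandlet
{
	GENERATED_BODY()

public:
	virtual int32 Main(const FString& Params) override
	{
		// Parse the source/destination directories off the command line.
		FString SourceDir, DestDir;
		FParse::Value(*Params, TEXT("Source="), SourceDir);
		FParse::Value(*Params, TEXT("Dest="), DestDir);

		// ConvertFbxDirectory (our helper, implemented elsewhere in our module)
		// imports every .fbx under SourceDir and saves the resulting .uassets
		// under DestDir, preserving relative paths.
		return ConvertFbxDirectory(SourceDir, DestDir) ? 0 : 1;
	}

private:
	bool ConvertFbxDirectory(const FString& SourceDir, const FString& DestDir);
};
```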

Thanks.

Hi Ronnin,

I think it’s going to be difficult to do what you want using I/O store container files. First of all, we don’t support mount points with I/O store. The reason for this is that when using I/O store / the Zen loader, we pre-compute the package dependency information during cook; this is what makes the Zen loader faster than the old EDL package loader. The package dependency information contains package IDs, and a package ID is a hash of the package name.

This is why we can’t just mount a container file with a different mount point. The package store manifest file contains all of this information, which we use when creating the container files (.utoc/.ucas) when running UnrealPak.exe.
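To make that concrete, the ID is derived straight from the full package name (see FPackageId::FromName in PackageId.h):

```cpp
#include "UObject/PackageId.h"

// The mount point is effectively part of every package's identity:
FPackageId IdInGame = FPackageId::FromName(FName(TEXT("/Game/Path/To/My/Asset")));
FPackageId IdCustom = FPackageId::FromName(FName(TEXT("/MyCustomMountPoint/Path/To/Asset")));
// IdInGame != IdCustom, so remounting content at a different root would
// invalidate every pre-computed dependency edge baked into the container.
```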

Secondly, as you already noticed, we can’t just pass multiple package store manifest files to UnrealPak, since we treat each cook as a self-contained unit of content.

The only way I can see this working with different mount points is to create a single project with dummy plugins for each individual part of your content, and cook/pak each plugin.
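Each plugin gets its own content root, so its packages are cooked (and hashed) against the plugin’s mount point rather than /Game. Under the hood it amounts to this (plugin name illustrative; the plugin system does the registration for you):

```cpp
#include "Misc/PackageName.h"
#include "Misc/Paths.h"

// Enabling a plugin named "MyMountPoint" registers a content root like this
// (shown manually here purely for illustration):
FPackageName::RegisterMountPoint(
	TEXT("/MyMountPoint/"),
	FPaths::Combine(FPaths::ProjectPluginsDir(), TEXT("MyMountPoint/Content")));

// Packages in the plugin then live at /MyMountPoint/Path/To/Asset, and their
// package IDs are computed against that root at cook time, so everything
// stays consistent.
```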

/Per

If you are asking whether the runtime load performance of loading an asset from an I/O store container file (.ucas) with 1M assets is different from loading from a container with 200K assets: it’s not. The runtime cost of using I/O store is the memory used by the lookup tables, i.e. the package store entries in FPackageStore and the lookup tables in the I/O dispatcher backends. The benefit of partitioning the data into smaller sets of container files is that you don’t need to have everything mounted at all times.
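I.e. with partitioned containers you can bring sets of content in and out as needed; a minimal sketch:

```cpp
#include "HAL/PlatformFileManager.h"
#include "IPlatformFilePak.h"

// Swap one set of generated content for another at runtime; unmounting frees
// the container's lookup tables instead of keeping everything resident.
void SwapContentSet(const FString& OldPak, const FString& NewPak)
{
	FPakPlatformFile* PakPlatformFile = static_cast<FPakPlatformFile*>(
		FPlatformFileManager::Get().FindPlatformFile(FPakPlatformFile::GetTypeName()));
	if (PakPlatformFile)
	{
		PakPlatformFile->Unmount(*OldPak);
		PakPlatformFile->Mount(*NewPak, /*PakOrder*/ 0);
	}
}
```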

You do know that you can chunk game content independently of the cooking output? I.e. you can cook the project as a monolithic unit and then create PAK chunk rules to produce smaller sets of container files.
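One programmatic way to express such rules is an asset manager override along these lines (sketch only; the exact virtual signature is worth checking against your engine branch, and chunk rules can also be driven by PrimaryAssetLabels or config):

```cpp
#include "Engine/AssetManager.h"
#include "MyAssetManager.generated.h"

UCLASS()
class UMyAssetManager : public UAssetManager
{
	GENERATED_BODY()

public:
	// Assign packages to pak chunks by path, independent of how they were cooked.
	virtual bool GetPackageChunkIds(FName PackageName, const ITargetPlatform* TargetPlatform,
		TArrayView<const int32> ExistingChunkList, TArray<int32>& OutChunkList,
		TArray<int32>* OutOverrideChunkList = nullptr) const override
	{
		// Illustrative rule: everything under /Game/Generated/Batch1 goes to chunk 1.
		if (PackageName.ToString().StartsWith(TEXT("/Game/Generated/Batch1")))
		{
			OutChunkList.Add(1);
			return true;
		}
		return Super::GetPackageChunkIds(PackageName, TargetPlatform,
			ExistingChunkList, OutChunkList, OutOverrideChunkList);
	}
};
```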

We’ve decided to continue with our current pipeline, with one change: converting into a plugin named after the desired mount point. This way we don’t pollute the /Game mount point for users of our pak files. Since the runtime loading difference between pak configurations with IoStore is negligible, we are fine with maintaining many dummy projects for cooking, leaving us with many ucas/utoc files to load in dynamically. Additionally, we cook our assets only once, which was one of our main goals here. Closing this issue. Thanks for the help!

I see. Yes, the plugin idea is something we’ve thought about as well. We mainly wanted to separate the cooking and paking processes so we could easily test load speeds between one monolithic IoStore container (let’s say with a million assets) and a batched setup (e.g. 200,000 assets per container). Would you have any insight into how the performance would differ between these two scenarios? If the difference is small, we may just stick with our current pipeline or take the dummy plugin approach. I hope our goal is clear: testing a different IoStore configuration (e.g. 100 assets per container) would otherwise require us to cook the exact same assets, just in different batches. I would also add that our assets are basically independent of one another - we generate units of 1 mesh, 1 texture, 1 material, and 1 level - and no unit depends on another.

And to further discuss whether it’s possible (with minor engine changes) to decide mount points at pak time: if I can guarantee that all cooked assets have non-conflicting package names, would modifying the usages of FCookedPackageStore (like GetChunkInfoFromChunkId in IoStoreUtilities.cpp, line 2674) to look things up across an array of package stores work in this case?
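Something like the following is what we have in mind (purely hypothetical; FCookedPackageStore is internal to IoStoreUtilities.cpp, so the method shape and return type below are assumed from our reading of the 5.5 source, and the whole idea hinges on package names/IDs being globally unique across cooks):

```cpp
// Hypothetical wrapper, not engine API: aggregates the package stores loaded
// from several cooks' packagestore.manifest files and resolves lookups across
// all of them. Only sound if package IDs never collide between cooks.
// Sketched as if it lived inside IoStoreUtilities.cpp next to FCookedPackageStore.
class FMultiCookPackageStore
{
public:
	TArray<TUniquePtr<FCookedPackageStore>> Stores; // one per packagestore.manifest

	// Mirrors FCookedPackageStore::GetChunkInfoFromChunkId (return type assumed):
	// the first cook that knows the chunk wins.
	const FIoStoreTocChunkInfo* GetChunkInfoFromChunkId(const FIoChunkId& ChunkId) const
	{
		for (const TUniquePtr<FCookedPackageStore>& Store : Stores)
		{
			if (const FIoStoreTocChunkInfo* Info = Store->GetChunkInfoFromChunkId(ChunkId))
			{
				return Info;
			}
		}
		return nullptr;
	}
};
```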

Thank you for the insight! It is very helpful. Yes, I’m aware, but in our case we do not intend to store all of the generated uassets in one project; rather, we maintain multiple projects and cook each of them independently of one another, since we are also constantly generating raw .fbx files to convert into .uassets. Ideally we’d run all stages of our pipeline (.fbx generation -> uasset conversion -> cooking -> paking) at the same time, perhaps on different machines.