Asset migration does NOT copy payload data for virtual assets

If you have multiple projects, each with virtual assets enabled and each configured with a different location to store payloads, then copying assets between projects can silently fail in horrible ways, as the payload data is not actually copied. Most critically, there is no warning to users about this.

For example we have the following:

```
//project_1/main/…
//project_1/payloads/…
//project_2/main/…
//project_2/payloads/…
```

Both project 1 and project 2 are configured to use virtual assets. Their configs look like this:

```
// project 1 DefaultEngine.ini
SourceControlCache=(Type=p4SourceControl, ClientStream="//project_1/payloads", DepotPath="//project_1/payloads")

// project 2 DefaultEngine.ini
SourceControlCache=(Type=p4SourceControl, ClientStream="//project_2/payloads", DepotPath="//project_2/payloads")
```

We have run into an issue where developers use the Migrate function within the editor to copy some textures from project 1 to project 2. As both projects use the same local Zen store, this works for the user copying the asset because the editor is able to load the bulk data. Once submitted, though, other developers will be unable to load the asset unless they have already loaded it from project 1 and have the bulk data in their local Zen store.

This fails when using the Migrate functionality in the editor as well.

There should at the very least be some kind of warning about this, and preferably the asset should be rehydrated first and then copied.

The migration tool should support rehydrating virtualized package files, as it uses the SAVE_RehydratePayloads flag when saving them out. However, I suspect the problem is that this is an import process rather than an export process, so when importing from project 1 to project 2 the work is done in project 2's editor, which only has access to project 2's virtualization settings. This probably hasn't come up internally because we tend to have fairly hot shared DDCs, and our content teams tend to only migrate between projects that share the same virtualization settings anyway.

Coincidentally, last week I was improving the UnrealVirtualizationTool to better support rehydration of a package file that no longer exists in its original project, for a different issue (importing into UEFN). That would probably be a better solution than the SAVE_RehydratePayloads flag, as the UnrealVirtualizationTool can much more easily load a different project's config file settings. The migration tool would first import the package into the current project and then, if the package file contains virtualized data, launch UVT with a hydrate command and the original project's location. You are also right that if that hydration attempt failed we should probably either warn the user or hard-fail the import to prevent future issues; the severity could probably be a project setting so teams can make their own choice about how it is handled. I can try to prioritize such work, but I cannot promise any firm ETA at the moment.

Another change I have been looking at recently is passing better error context to the VA system, so that we can hopefully display the name of the package or asset that is failing to find its virtualized data, making these problems easier to find and fix if and when they do occur.

> As both projects use the same local Zen store, this works for the user copying the asset because the editor is able to load the bulk data

I wouldn't advise doing this, but when setting up your DDCBackend you could give each project its own bucket via Bucket="???". That way the user importing the data would be more likely to run into the same sort of issues that other users are seeing, as they would not be able to access the cached virtualized data for Project2 in Project1. However, that would only highlight the issue if the user actually tries to access the virtualized data, and it's fairly likely that they'd have the compiled texture/mesh/audio result from Project2 in their local DDC anyway.
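For illustration only, a per-project bucket might look something like the sketch below. The backend-graph section and entry names here are assumptions based on a typical virtual-assets setup; only Type=DDCBackend and the Bucket key come from the suggestion above, so adapt it to your existing graph rather than copying it verbatim:

```
// project 1 DefaultEngine.ini (illustrative sketch, not a drop-in config)
// Section and hierarchy/entry names below are assumed; match them to your own backend graph.
[VABackendGraph_Project1]
CacheStorageHierarchy=(Entry=DDCCache)
PersistentStorageHierarchy=(Entry=SourceControlCache)
DDCCache=(Type=DDCBackend, Bucket="Project1Payloads")
SourceControlCache=(Type=p4SourceControl, ClientStream="//project_1/payloads", DepotPath="//project_1/payloads")
```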

The obvious downside is that you'd no longer get deduplication between the two projects and would waste disk space, and since this isn't a 100% reliable way of catching the problem it doesn't seem like a good trade-off.

----------

If this is a more frequent issue for you then you could consider running the -run="VirtualizationEditor.ValidateVirtualizedContent" commandlet on your projects in CIS. This commandlet will find every reference to virtualized data in your project and check that the system can find the payloads in your persistent storage backends. Any missing data will be reported along with the owning package paths. There is an optional command-line switch, -bValidateContent, which will download each payload and validate that its content matches its hash etc., but that is much slower and would only be useful if you don't trust your persistent storage system. It's been a while since I ran this, so I will try to kick it off on a production project later today to verify that it still works.
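For example, it could be invoked roughly like this (the executable and project paths are placeholders for your own setup); the second form adds the slower -bValidateContent check:

```
UnrealEditor-Cmd.exe "D:\Projects\Project1\Project1.uproject" -run="VirtualizationEditor.ValidateVirtualizedContent"
UnrealEditor-Cmd.exe "D:\Projects\Project1\Project1.uproject" -run="VirtualizationEditor.ValidateVirtualizedContent" -bValidateContent
```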

There is also a console command, `ValidatePackagePayloads <PackagePath>`, that does the same thing if you are already in the editor.
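For example (the package path here is just a placeholder):

```
ValidatePackagePayloads /Game/Textures/T_Example
```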

It’s not a great solution but it might help catch problems sooner.

In addition, if you have projects that should never contain virtualized data and you are worried that users might be importing virtualized packages there (or copy/pasting package files there in Windows Explorer), you could run the -run="VirtualizationEditor.CheckForVirtualizedContent" commandlet, which will error and log the paths of any package files it finds in the project that reference virtualized content.
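Again, roughly (the project path is a placeholder):

```
UnrealEditor-Cmd.exe "D:\Projects\Project2\Project2.uproject" -run="VirtualizationEditor.CheckForVirtualizedContent"
```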