Copy (move) of Virtual Assets within editor to mount point outside of Content

We have assets outside of Content that are available to multiple projects. These assets are integrated into a ‘code base’ stream in Perforce and then integrated into the projects.

We just encountered a case where two textures with virtualized data were copied in a project’s editor to a location outside of Content, and the assets remained virtualized. When these assets were integrated into other projects we started getting ‘LogVirtualization: Error: Failed to pull payload’.

For the moment we have reimported the textures to fix the problem.

Is there a way to copy assets as non-virtualized to avoid this issue? I also tried rehydrating, but I do not seem to be able to do that outside of Content.

Best regards,

David

Hello, sorry about the delay, but I was OOO at the tail end of last week.

At the moment I don’t have a particularly good solution to your problem. I am assuming that when you say “outside of the Content” you mean outside of an Unreal project as well. As you’ve probably noticed, all of the virtualization tools and utilities rely on knowing which project a package file belongs to before they can do things like rehydration, as that is currently the only way we can work out where the virtualized data might be stored.

For certain internal use cases we did add automatic package rehydration to the asset migration tool (https://dev.epicgames.com/documentation/en-us/uefn/migrating-assets-from-unreal-engine-to-unreal-editor-for-fortnite), but this is still a little limited: the tool is run on the target project that you are importing the data into and ends up using the virtualization settings for that project, so for it to work correctly it would need access to the same persistent storage backend as the project in which the package was first virtualized. This works fine if you are storing the virtualized data for all projects in the same place, but it does not sound like that is the case for your setup.

I do hope to extend the rehydration pipeline in 5.7 (for both this tool and UnrealVirtualizationTool) to allow the caller to supply the source project, which would make things more flexible, but that of course relies on the user a) having that project synced and b) knowing where the package came from in the first place. I might also try to add the ability to provide a custom ini file containing a custom virtualization graph in which you could detail all possible storage locations at your company, but of course none of these future plans help you now. Please note that 5.7 is my target; it is not officially scheduled and I cannot guarantee that the work will land there.

Potential Fixes

The obvious suggestion is to convert the shared storage area into a dummy project. That project could be set up with access to all of your persistent storage backends, which would make it much easier to run rehydration passes on, but I will assume that moving things around would be too much work for too little gain.

I’m not sure what process you have for moving package files to this storage area, but you could probably add a custom editor option to “export to shared storage” quite easily. All it would need to do is copy the package file to the target location and run the rehydration process on it from within the editor process, so that it already has the correct settings. There is a possibility that, out of the box, the rehydration process would complain if the target package is not within a project, in which case you’d want to copy the package within the project first, then rehydrate, then copy it to the shared location before submitting.

We probably could provide a standard asset action to “export as hydrated” in the editor for projects with virtualized assets enabled. I will add that to the backlog, but as the initiative is no longer in active development I cannot give you any timeline for when that might be done. If you want to take a look at adding one for your team, I suggest starting at Engine\Source\Editor\VirtualizationEditor\Private\RehydrateProjectCommandlet.cpp, which should get you most of the way there.
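The copy-in, rehydrate, copy-out workaround could be sketched as a small wrapper script along these lines. Treat everything here as an assumption to verify against your engine version: the commandlet name `VirtualizationEditor.RehydrateProject` is inferred from the RehydrateProjectCommandlet.cpp filename and the `-run="VirtualizationEditor.<Name>"` pattern, the `-PackageDir` argument is hypothetical, and the staging folder name is arbitrary.

```shell
# Hypothetical "export as hydrated" wrapper. The commandlet name and the
# -PackageDir argument are assumptions inferred from the source file names;
# check RehydrateProjectCommandlet.cpp for the real invocation.
export_as_hydrated() {
    local uproject="$1"    # e.g. D:/Projects/MyGame/MyGame.uproject
    local package="$2"     # the virtualized .uasset to export
    local shared_dir="$3"  # shared storage mount outside of Content
    local editor="${EDITOR_CMD:-UnrealEditor-Cmd}"

    local project_dir staging
    project_dir="$(dirname "$uproject")"
    staging="$project_dir/Content/__ExportStaging"

    # 1) Copy the package inside the project so the VA settings resolve.
    mkdir -p "$staging"
    cp "$package" "$staging/"

    # 2) Rehydrate it in place using this project's virtualization graph.
    "$editor" "$uproject" -run="VirtualizationEditor.RehydrateProject" \
        -PackageDir="$staging" || return 1

    # 3) Move the now-hydrated package out to the shared location.
    mv "$staging/$(basename "$package")" "$shared_dir/"
    rmdir "$staging"
}
```

Submitting the hydrated copy to the shared stream would then happen as a normal Perforce step outside of this script.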

Another commandlet you might find interesting is Engine\Source\Editor\VirtualizationEditor\Private\CheckForVirtualizedContentCommandlet.cpp, which can be used to make sure that a project does not contain virtualized content. You might be able to write a similar piece of code to scan your shared storage area, but I suspect you might hit areas of code that rely on the packages being within a project.

Speaking of validation, Engine\Source\Editor\VirtualizationEditor\Private\ValidateVirtualizedContentCommandlet.cpp (-run="VirtualizationEditor.ValidateVirtualizedContent") can be run periodically on a project to check whether any package contains virtualized data that cannot be found in that project’s persistent storage, which can be useful for flagging problems before they impact users.
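A CIS step wrapping that commandlet could look roughly like the sketch below. The `-run=` string is taken from the answer above; the assumption that the commandlet returns a non-zero exit code on failure, and the `-unattended -nopause` flags, should be verified against ValidateVirtualizedContentCommandlet.cpp.

```shell
# Minimal sketch of a CI gate around the validation commandlet. The exit-code
# contract is an assumption; confirm it in ValidateVirtualizedContentCommandlet.cpp.
validate_virtualized_content() {
    local uproject="$1"
    local editor="${EDITOR_CMD:-UnrealEditor-Cmd}"

    if "$editor" "$uproject" -run="VirtualizationEditor.ValidateVirtualizedContent" \
            -unattended -nopause; then
        echo "VA validation passed"
    else
        echo "VA validation FAILED: some payloads are missing from persistent storage" >&2
        return 1
    fi
}
```

Failing the build here catches a package whose payloads never reached persistent storage before it blocks other users from syncing and opening it.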

My final idea is not something that I suggest you do (at least not long term) but might be useful as a short term fix if this is really impacting your developers. You could add the persistent storage backends for all of your projects to the VA graph but set the backends for the other projects as read only. Just make sure that you place the current project’s backend first so that it is hit first.

[VA_DefaultGraph]
PersistentStorageHierarchy=(Entry=ThisProjectsCache, Entry=Project2Cache, Entry=Project3Cache)
ThisProjectsCache=(Type=P4SourceControl, DepotPath="//Payloads/Project1/")
Project2Cache=(Type=P4SourceControl, DepotPath="//Payloads/Project2/", ReadOnly=true)
Project3Cache=(Type=P4SourceControl, DepotPath="//Payloads/Project3/", ReadOnly=true)

I don’t really recommend this as it doesn’t actually fix the data, but you could run this as the default graph and then run the ValidateVirtualizedContent commandlet with a custom graph containing only the current project’s backend to identify the content that does need fixing. As a bonus you would then be able to rehydrate these broken packages in place and re-virtualize them to get the data into the correct backend.

“The ideal would be if the editor hydrates a virtualized asset when it is copied to a location where it is not supposed to be virtualized.”

The asset migration tool is probably the best option here, at least once I add the extension work that I mentioned in the previous answer. As mentioned, it currently tries to rehydrate with the project’s VA settings, but I want to extend this so that, if the payloads cannot be found with the current project’s settings (or VA is disabled, as you mentioned it might be), it requests that the user provide the original source project so that UnrealVirtualizationTool can be compiled/launched to perform the rehydration.

This approach has several drawbacks though. First, UnrealVirtualizationTool needs to exist for the user, and if the user is on a project without VA and only takes executables via precompiled binaries then it’s unlikely that UVT would be distributed to them. Additionally, it requires that they know the source project (which, if they are pulling from a central dumping ground for package files, they might not) and have that .uproject and its config files to hand on their local disk. So although the tool would technically work, there are a lot of edge cases that would be hard for us to fix in the general case.

At the very least, blocking the user from importing a virtualized package into a project where it won’t work would prevent bad data from getting into the project and interrupting the work of others, even if it would be tricky for the importer to solve the problem themselves.

Of course this would still rely on people using the tool to import content rather than copy/pasting the package files themselves, which is why I recommend looking into adding some of the CIS checks from my previous answer as a last line of defense to catch VA issues.

If you have any ideas or suggestions on other approaches that might work better with your setup I’d be happy to hear them, but as before I cannot make any promises on a schedule for any work needed.

“because at a certain point the projects would no longer be on the same Perforce server”

It is possible to provide a P4PORT when defining a source control backend. Although I think I only ever tested this with a single source control backend in a graph, it should in theory work with multiple, as we create a new Perforce connection per backend implementation. To reuse the previous example:

[VA_DefaultGraph]
PersistentStorageHierarchy=(Entry=ThisProjectsCache, Entry=Project2Cache, Entry=Project3Cache)
ThisProjectsCache=(Type=P4SourceControl, Server="Server01:1666", DepotPath="//Payloads/Project1/")
Project2Cache=(Type=P4SourceControl, Server="Server02:1666", DepotPath="//Payloads/Project2/", ReadOnly=true)
Project3Cache=(Type=P4SourceControl, Server="Server03:1666", DepotPath="//Payloads/Project3/", ReadOnly=true)

Hi Paul,

Thanks for the response. I was on holiday last week, so I’m only seeing it now.

The ideal would be if the editor hydrates a virtualized asset when it is copied to a location where it is not supposed to be virtualized.

I believe this could be more of an issue if OptIn mode is used or if exclusions are defined in OptOut mode.

We do not/cannot have a single backend for all projects, or add them all to a project, because at a certain point the projects would no longer be on the same Perforce server (we use a method of splitting the Perforce server so that old projects are no longer on the production server).

Best regards,

David