Possible Cook time optimization related to Asset Registry State

Hello, while cooking we see that a lot of time is spent removing elements from the CachedAssetsBy(Path/Class/Tag) structures. The removals come from RemoveAssetData, which in turn is called by State.PruneAssetData from FAssetRegistryGenerator::FinalizeChunkIDs:

Basically:

FAssetRegistryGenerator::FinalizeChunkIDs -> State.PruneAssetData -> RemoveAssetData.

RemoveSingleSwap is called many times on these structures, and because each is a map of unsorted arrays, every removal is a linear scan and the total cost adds up. It is especially visible when only small changes have been made to the content: most of the cook time then goes to reading/saving data from/to disk, with removing data from the CachedAssetsByXXX structures as the second biggest cost.
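To make the cost concrete, here is a small standalone C++ sketch of that pattern (not engine code: std containers stand in for the engine's TMap/TArray/FName, and all names are illustrative). Each removal has to scan the whole tag bucket before it can swap-and-pop, so pruning many assets against large buckets multiplies linear scans.

    #include <algorithm>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Stand-in for FAssetData: only the tag map matters for this illustration.
    struct AssetData
    {
        std::unordered_map<std::string, std::string> TagsAndValues;
    };

    // Stand-in for CachedAssetsByTag: a map of unsorted arrays keyed by tag name.
    using AssetsByTag = std::unordered_map<std::string, std::vector<AssetData*>>;

    // Mirrors the remove-single-swap pattern: one linear scan of the tag bucket
    // for every (asset, tag) pair being pruned -> O(bucket size) per removal.
    void RemoveAssetFromTagMap(AssetsByTag& CachedAssetsByTag, AssetData* Asset)
    {
        for (const auto& TagPair : Asset->TagsAndValues)
        {
            auto It = CachedAssetsByTag.find(TagPair.first);
            if (It == CachedAssetsByTag.end())
            {
                continue;
            }
            std::vector<AssetData*>& Bucket = It->second;
            auto Found = std::find(Bucket.begin(), Bucket.end(), Asset); // linear search
            if (Found != Bucket.end())
            {
                *Found = Bucket.back(); // swap with the last element ...
                Bucket.pop_back();      // ... then pop, like RemoveSingleSwap
            }
        }
    }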

The thing is that later in the code, if the Asset Registry has to be generated, CachedAssetsByTag is completely cleared and recreated anyway, both in the bSerializeDevelopmentAssetRegistry case and in the SaveOptions.bSerializeAssetRegistry case (in the latter by explicitly calling FilterTags(), for the same performance-related reasons). So maybe it makes sense to not issue

    for (auto TagIt = AssetData->TagsAndValues.CreateConstIterator(); TagIt; ++TagIt)
    {
        TArray<FAssetData*>* OldTagAssets = CachedAssetsByTag.Find(TagIt.Key());
        OldTagAssets->RemoveSingleSwap(AssetData);
    }

(called in RemoveAssetData()) and instead leave the tag map untouched, gated by some parameter on Prune (so that other callers which still need the removal are not affected), as this loop is the biggest offender.
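As a rough illustration of that proposal (the bSkipTagMapRemoval parameter and the plumbing below are invented for illustration, not the engine's actual Prune/RemoveAssetData signatures; it reuses the stand-in types from the sketch above), the prune path could skip the tag-map loop when the caller knows CachedAssetsByTag will be rebuilt from scratch anyway:

    // Hypothetical sketch only, not engine API.
    void RemoveAssetDataSketch(AssetsByTag& CachedAssetsByTag, AssetData* Asset,
                               bool bSkipTagMapRemoval)
    {
        // ... removal from the path/class maps would stay as it is today ...

        if (!bSkipTagMapRemoval)
        {
            RemoveAssetFromTagMap(CachedAssetsByTag, Asset); // the expensive loop above
        }
        // When skipped, the stale tag entries are simply discarded later, at the
        // point where CachedAssetsByTag is cleared and regenerated before saving.
    }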

Regards,

Sergii.

P.S. I have not tried changing the TArray to some hash-based container, or at least sorting the arrays before issuing all the lookups; that could help as well though…
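For what it's worth, a sketch of that P.S. idea, again with std containers standing in for the engine's types (unordered_set standing in for TSet) and reusing the AssetData stand-in from the earlier sketch: if each tag bucket were a hash set rather than an unsorted array, each removal would be an average O(1) erase instead of a linear scan, at the cost of extra memory per bucket and loss of the array's ordering.

    #include <unordered_set>

    using AssetsByTagSet = std::unordered_map<std::string, std::unordered_set<AssetData*>>;

    void RemoveAssetFromTagSets(AssetsByTagSet& CachedAssetsByTag, AssetData* Asset)
    {
        for (const auto& TagPair : Asset->TagsAndValues)
        {
            auto It = CachedAssetsByTag.find(TagPair.first);
            if (It != CachedAssetsByTag.end())
            {
                It->second.erase(Asset); // hash lookup, no scan of the bucket
            }
        }
    }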

We noticed that unnecessary expense as well, and made a change to avoid it in CL 38513466, aka GitHub commit e79cacfabeb02065e80622a51476b7b3d61e1a03.

InitializeFromExistingAndPrune copies the data into a new state using filtered Add, which is fast, rather than removing the data from the existing state using the slow Remove functions.
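As a standalone illustration of that copy-and-filter idea (reusing the stand-in types from the sketches above; this is not the engine's actual InitializeFromExistingAndPrune implementation): build a fresh state and Add only the surviving assets, so the cost is proportional to the kept data and no per-entry Remove scans are needed.

    #include <functional>

    // Stand-in for the destination registry state: only the tag map is modeled.
    struct RegistryStateSketch
    {
        AssetsByTag CachedAssetsByTag;

        void AddAsset(AssetData* Asset)
        {
            for (const auto& TagPair : Asset->TagsAndValues)
            {
                CachedAssetsByTag[TagPair.first].push_back(Asset); // cheap append
            }
        }
    };

    // Copy the surviving assets into a new state instead of removing the pruned
    // ones from the old state.
    RegistryStateSketch BuildPrunedState(
        const std::vector<AssetData*>& ExistingAssets,
        const std::function<bool(const AssetData*)>& ShouldKeep)
    {
        RegistryStateSketch NewState;
        for (AssetData* Asset : ExistingAssets)
        {
            if (ShouldKeep(Asset))
            {
                NewState.AddAsset(Asset); // filtered Add, no Remove needed
            }
        }
        return NewState;
    }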

Thanks, will check it out