Each SCS Node’s Preload recursively loads subobjects and archetypes.
During this process, FScopedUObjectHashTablesLock is acquired for UObjectHashTables (Engine/Source/Runtime/CoreUObject/Private/UObject/UObjectArchetype.cpp:101).
With large PLAs containing hundreds of SCS nodes and subobjects, the lock is held for hundreds of milliseconds, blocking GameThread operations such as ForEachObjectOfClass.
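For illustration, this is the shape of GameThread call we see blocked while the loading thread holds the lock (a minimal sketch, not our actual game code; counting AActor instances is just a stand-in for our real queries):

#include "CoreMinimal.h"
#include "UObject/UObjectHash.h"
#include "GameFramework/Actor.h"

// ForEachObjectOfClass takes the UObject hash tables lock internally, so this
// call blocks on the GameThread until the recursive Preload on the loading
// thread releases the lock.
void CountLoadedActors()
{
    int32 Count = 0;
    ForEachObjectOfClass(AActor::StaticClass(), [&Count](UObject* /*Obj*/)
    {
        ++Count;
    });
    UE_LOG(LogTemp, Log, TEXT("Currently loaded actors: %d"), Count);
}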
Experimental mitigation:
We moved FScopedUObjectHashTablesLock inside Preload and added FPlatformProcess::YieldThread() between nodes:
for (USCS_Node* Node : RootNodes)
{
    // The lock is now acquired and released inside the per-node Preload
    // instead of being held across the whole traversal.
    Node->PreloadChain();
    // Let threads waiting on the hash tables lock (notably the GameThread)
    // acquire it between nodes.
    FPlatformProcess::YieldThread();
}
With this change, the GameThread stutters disappeared.
Profiling confirms the bottleneck is lock contention during the recursive Preload of SCS nodes.
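For reference, this is roughly how we instrumented the loop for Unreal Insights (a sketch; PreloadSCSNodes is just an illustrative wrapper name around the loop shown above):

#include "ProfilingDebugging/CpuProfilerTrace.h"
#include "Engine/SCS_Node.h"
#include "HAL/PlatformProcess.h"

void PreloadSCSNodes(const TArray<USCS_Node*>& RootNodes)
{
    // One scope per traversal in the Insights timing view.
    TRACE_CPUPROFILER_EVENT_SCOPE(PreloadSCSNodes);

    for (USCS_Node* Node : RootNodes)
    {
        // Per-node scope; long self-time here lines up with the GameThread
        // stalls waiting on the hash tables lock.
        TRACE_CPUPROFILER_EVENT_SCOPE(PreloadSCSNodes_PerNode);
        Node->PreloadChain();
        FPlatformProcess::YieldThread();
    }
}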
Notes / Warnings:
This is not safe for production as it changes lock scope, but it clearly identifies the root issue.
The problem is amplified for heavy PLAs with large Blueprint hierarchies.
We are also wondering whether it is advisable to split large PLAs into smaller ones to reduce async load contention, and if there are any recommended metrics or guidelines for PLA size in such cases.
Hey there, I’m talking to the SCS and PLA devs about your scenario and will report back when I know more from them.
In the meantime I want to double check if you’re already following some of our high level recommendations surrounding PLAs. You asked:
“We are also wondering […] if there are any recommended metrics or guidelines for PLA size in such cases.”
The world building guide contains some generally good practices and pitfalls surrounding PLAs. It has this to say about PLA sizes:
Since packed level actors are just actors with multiple ISM/HISM components, creating a PLA larger than the streaming cell size can lead to streaming, performance, and memory issues. PLAs should be kept under the streaming cell size in most cases. If a PLA’s bounds are larger than a streaming cell, it will be promoted to a higher level of the streaming grid, causing the entire PLA to be loaded even when large parts of it are still outside the loading range. Crossing streaming cell bounds also causes promotion for large actors, so making your PLAs some percentage smaller than the cell size can have additional benefits, but this requires tweaking and profiling. At the very least, consider making your PLAs smaller if they’re currently larger than the streaming cell size of the WP grid they’re on.
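If it helps as a starting point, a rough editor-side audit could look like this (just a sketch on my end; CellSize would come from the runtime settings of the WP grid the actor is assigned to, and the log wording is illustrative):

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

// Flags actors whose world-space bounds exceed a given streaming cell size,
// i.e. candidates for promotion to a higher grid level.
void WarnIfLargerThanCell(const AActor* PackedLevelActor, double CellSize)
{
    // World-space AABB over all components, including non-colliding ones.
    const FBox Bounds = PackedLevelActor->GetComponentsBoundingBox(/*bNonColliding=*/true);
    const FVector Size = Bounds.GetSize();

    if (Size.X > CellSize || Size.Y > CellSize)
    {
        UE_LOG(LogTemp, Warning,
            TEXT("%s bounds (%.0f x %.0f) exceed streaming cell size %.0f"),
            *PackedLevelActor->GetName(), Size.X, Size.Y, CellSize);
    }
}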
“whether it is advisable to split large PLAs into smaller ones to reduce async load contention”
Sounds reasonable to me hearing your current performance issues, but I’ll report back after collecting more thoughts from the system owners.
Just to clarify: in our project we’re still using the legacy level streaming system (not World Partition), so we can’t really rely on the “streaming cell size” metric as a reference for PLA sizing.
Our PLAs are generated from hand-authored sublevels and can cover fairly large areas depending on level composition. We’re trying to understand whether there are alternative guidelines or heuristics for PLA complexity/size in non-WP setups, for example the number of components, subobjects, or total serialized object count beyond which contention during async loading starts to become a problem.
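To make that concrete, this is the kind of per-actor metric we could collect if it would be useful data for the system owners (a sketch; DumpActorComplexity is our own helper name):

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "UObject/UObjectHash.h"

// Counts components and all subobjects reachable from an actor instance,
// as a crude proxy for how much work its Preload will do.
void DumpActorComplexity(const AActor* Actor)
{
    // Every object whose Outer chain leads back to this actor.
    TArray<UObject*> Subobjects;
    GetObjectsWithOuter(Actor, Subobjects, /*bIncludeNestedObjects=*/true);

    TInlineComponentArray<UActorComponent*> Components;
    Actor->GetComponents(Components);

    UE_LOG(LogTemp, Log, TEXT("%s: %d components, %d total subobjects"),
        *Actor->GetName(), Components.Num(), Subobjects.Num());
}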
We’ll be very interested to hear any insights from the SCS and PLA teams once you’ve had a chance to talk with them.
I was just looking at this with Zhi Kang and an additional question came up.
It’s possible this is an artefact of the old loader code, which would be the case if you’re loading your assets from a pak file instead of using IOStore.
Could you let us know whether you use IOStore in your project?
I just double-checked our build setup and realized that our project was actually packaged without the -pak or -iostore arguments, so it seems we were indeed using the legacy loader.
Could you please clarify whether this issue is specific to the legacy loading path (e.g., when using Loose Files via Project Launcher), or whether it could also occur in other configurations, such as when using pak files without IOStore, or with the Zen loader/streaming enabled?