UPCGLandscapeCache::GetOrCreateCacheEntryInternal and UPCGLandscapeData sampling functions crash with EXCEPTION_ACCESS_VIOLATION when reading ULandscapeInfo::XYtoComponentMap and XYtoCollisionComponentMap concurrently from PCG async worker threads while the game thread mutates these maps during actor registration/unregistration or map transitions.
Two distinct issues contribute:
TMap race condition: PCG tasks scheduled via FPCGAsync on LowLevelTasks worker threads call FindRef() on XYtoComponentMap/XYtoCollisionComponentMap with no synchronization. Game thread mutations (Add/Remove/Compact/Empty in RegisterActorComponent, UnregisterActorComponent, Reset, Compact, Serialize) can trigger internal TSet reallocation mid-read, causing access violations.
GC lifetime issue: Worker threads hold raw ULandscapeInfo* pointers with no GC guard. During map transitions, GC can collect the ULandscapeInfo while a PCG task is mid-execution, leaving a dangling pointer.
XYtoComponentMap and XYtoCollisionComponentMap on ULandscapeInfo are unprotected TMap<FIntPoint, …> members, and PCG’s async task scheduling reads them from worker threads in UPCGLandscapeCache::GetOrCreateCacheEntryInternal and the UPCGLandscapeData sampling functions.
Steps to reproduce:
Create a large PCG-driven landscape level with multiple landscape proxies
Set up PCG graphs that sample landscape data (height/normals) across proxy boundaries
In PIE or standalone, trigger a map transition (e.g. OpenLevel or seamless travel) while PCG generation is actively running async tasks
The crash occurs during the transition when landscape actors unregister (mutating XYtoComponentMap) while PCG worker threads are reading the same maps
Alternatively:
Have a level with landscape streaming proxies that register/unregister as the player moves
Run PCG generation that queries landscape data across multiple components
Move the player to trigger proxy streaming while PCG tasks are in flight
The crash manifests as EXCEPTION_ACCESS_VIOLATION in TSet::FindId (called from TMap::FindRef) on a worker thread, with the game thread simultaneously in ULandscapeInfo::UnregisterActorComponent or ULandscapeInfo::Compact.
I should note that as a local fix we wrapped the callsites in a read-write lock (a write lock around the game-thread mutations, read locks around the worker-thread reads), but I don’t know whether there is a better lock-free fix you would do instead.
Thanks a lot for the report and the detailed description of the issue!
I’ve set up a local repro project and managed to confirm at least one crash and two deadlocks around that code, so this definitely needs to be addressed.
I’ll check the details with the team; this will either get logged as a bug, or if someone is available we might just fix it straight away.