First, sorry that I do not have working reproduction steps; I am unable to reproduce the crash myself.
Also, the reported crashes come from Test and Shipping builds, which do not contain assertions; assertions might have caught the issue sooner, with a different call stack.
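For context, here is a minimal sketch of why the call stack would differ (an assumption on my side, based on the default build settings where DO_CHECK is 0 in Test and Shipping configurations unless checks are explicitly enabled):

#if DO_CHECK
    // Debug/Development: an invalid entry would trip an assertion like this
    // one, producing the "asserting" call stack.
    check(RegisteredWorldPartition);
#endif
// Test/Shipping: DO_CHECK is 0 by default, the check compiles away, and the
// invalid pointer is dereferenced directly, crashing somewhere deeper.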
The issue we are seeing is that, rarely, the game crashes for our multiplayer clients because their UWorldPartitionSubsystem::RegisteredWorldPartitions array contains invalid entries. Some entries are valid, others look as if they had once been valid but were partially overwritten, and others are complete garbage.
The crash happens in this loop:
for (UWorldPartition* RegisteredWorldPartition : GetRegisteredWorldPartitionsCopy())
{
    if (RegisteredWorldPartition->StreamingPolicy)
    {
        RegisteredWorldPartition->StreamingPolicy->UpdateStreamingState();
    }
}
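For what it's worth, even a defensive variant like the following sketch (my illustration, not engine code) would not reliably help: IsValid() only filters null pointers and objects already marked as garbage, while a dangling pointer into freed or partially overwritten memory is undefined behavior either way.

for (UWorldPartition* RegisteredWorldPartition : GetRegisteredWorldPartitionsCopy())
{
    // IsValid() guards against null and pending-kill/garbage-flagged objects,
    // but it cannot detect the overwritten entries we observe in the dumps.
    if (IsValid(RegisteredWorldPartition) && RegisteredWorldPartition->StreamingPolicy)
    {
        RegisteredWorldPartition->StreamingPolicy->UpdateStreamingState();
    }
}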
Is this a known issue?
Could it be related to [Content removed] (which has assertions enabled, which might explain the different call stack)?
If not, do you have any suggestions on how we could debug this from a (full) memory dump?
Steps to Reproduce
Sadly, I do not have reproduction steps. The issue happens too infrequently for us to narrow it down; we have never managed to reproduce it intentionally.
The issue has been reported to us in the following two scenarios, but, as noted, the crash happens so rarely that the correlation may be coincidental:
Scenario 1:
In a multiplayer game with more than 2 players
A client joins the game while another client is teleporting to a different location
Scenario 2:
In a multiplayer game with more than 2 players
All players are teleported, but the host immediately teleports away again
The location to which the host teleports triggers the loading of a DataLayer (see the sketch below)
The crash has only been observed in a packaged build.
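For reference, the DataLayer loading in scenario 2 is triggered from our teleport code roughly like the sketch below (simplified, with illustrative function and variable names; assuming UE 5.4+, where UDataLayerManager::SetDataLayerRuntimeState replaces the deprecated UDataLayerSubsystem API):

#include "WorldPartition/DataLayer/DataLayerManager.h"

// Illustrative placeholder for our game code, not the actual implementation.
void ActivateTargetDataLayer(UWorld* World, const UDataLayerAsset* TargetDataLayerAsset)
{
    if (UDataLayerManager* DataLayerManager = UDataLayerManager::GetDataLayerManager(World))
    {
        // Activated == loaded + visible; this starts the streaming that runs
        // while the host is already teleporting away again.
        DataLayerManager->SetDataLayerRuntimeState(TargetDataLayerAsset, EDataLayerRuntimeState::Activated);
    }
}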
Sorry for the late answer, but I have been waiting for a QA retest with CL 46949279 cherry-picked.
Again, sorry for asking this question without more information available, and thanks for looking into it despite that.
The good news is that with CL 46949279, our QA is no longer able to reproduce the issue. Since it was a very rare crash, this is not a 100% guarantee, but assuming QA did not simply get lucky and miss it, this does indeed seem to be the same issue as discussed here (with a different call stack, since it was a Shipping build):