Apologies in advance for the long post.
Some context: we have a system that deterministically spawns many subobjects and components on actors at runtime. This is done so that those subobjects can be referenced over the network without needing to be explicitly replicated (i.e., they are net-addressable). The name generation scheme can be simplified to SOMEPREFIX_ObjectClass_###, where ### is a counter that increments each time we create a new object within the system.
In addition, these actors can be reconstructed at runtime in different configurations. To achieve this, we “destroy” all the objects we created, reset the object counter to zero, then run the construction logic again with the new configuration. The result is that, regardless of replication relevancy and whatever else occurs over the network between rebuilds, the server and client eventually end up with identical actors and identically named subobjects, without fail.
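To make the naming/rebuild scheme concrete, here is a minimal standalone sketch of the idea (the names `SubobjectNamer`, `Next`, `Reset` are hypothetical, not engine API): resetting the counter before a rebuild regenerates the exact same name sequence, which is what keeps the subobjects net-addressable across reconstructions.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Simplified model of the deterministic naming scheme described above.
// Names are SOMEPREFIX_ObjectClass_###, where ### is a per-system counter.
struct SubobjectNamer {
    uint32_t Counter = 0;

    std::string Next(const std::string& ClassName) {
        return "SOMEPREFIX_" + ClassName + "_" + std::to_string(Counter++);
    }

    // A rebuild resets the counter so the reconstruction pass regenerates
    // the exact same names as the previous build.
    void Reset() { Counter = 0; }
};

std::vector<std::string> BuildNames(SubobjectNamer& Namer,
                                    const std::vector<std::string>& Classes) {
    std::vector<std::string> Names;
    for (const auto& C : Classes) Names.push_back(Namer.Next(C));
    return Names;
}
```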
This works well - but only if we force a GC pass between reconstructions, which is unshippable.
Because we are often reusing FNames that have previously been used, StaticAllocateObject() falls into the path where it recycles the memory of an old UObject that has not yet been fully destroyed. This can result in some very odd behaviour, such as **TWeakObjectPtr**s and **TObjectKey**s that previously resolved as invalid suddenly referencing live, valid objects again (because the internal serial number hasn’t changed, despite the object having had its destructor and constructor called).
Likewise, raw UPROPERTY/TObjectPtr references also remain, since they are just memory addresses and don’t look up the object via GUObjectArray. And since the recycled object has its Garbage flag cleared, it simply becomes a valid object again. This behaviour differs depending on whether a GC sweep runs between destruction and reconstruction of our actor, which manifests as some very odd, time-critical/race-condition bugs caused by stale references unexpectedly becoming live again.
So far the only solution has been to manually track and clear all references, including weak ones - but this has its own issues and somewhat defeats the purpose of using them. We can actually resolve the weak/key reference issue by manually resetting the UObject’s serial number to zero - *but* this doesn’t seem particularly safe, is only possible due to “fortunate encapsulation” at the GUObjectArray level, and doesn’t solve the issue with hard references anyway.
It seems the engine has had to deal with this before, too: FLevelStreamingGCHelper::TrashPackage renames the objects to something globally unique, preventing the same object memory from being recycled when that package name is NewObject()'d again. The downside of this approach is that, while the same names can be reused, objects are always allocated anew, so it is more costly.
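The trash-rename idea can be sketched standalone (the `NameRegistry`/`TrashRename` names are hypothetical, and this deliberately ignores outers, packages, and GC timing): renaming the dying object to a globally unique name frees the original name, so the next creation under that name never collides with - and never recycles - the old object.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Sketch of a "trash rename": move a dying object out from under its
// original name so a later allocation under that name is always fresh.
struct NameRegistry {
    std::map<std::string, int> LiveByName;  // name -> object id
    uint64_t TrashCounter = 0;
    int NextId = 1;

    int Create(const std::string& Name) {
        // A collision with a dying object would force memory recycling;
        // after a trash rename the original name is free again.
        assert(LiveByName.find(Name) == LiveByName.end());
        int Id = NextId++;
        LiveByName[Name] = Id;
        return Id;
    }

    void TrashRename(const std::string& Name) {
        auto It = LiveByName.find(Name);
        if (It == LiveByName.end()) return;
        int Id = It->second;
        LiveByName.erase(It);
        // A globally unique trash name keeps the object alive and addressable
        // until it is actually destroyed, but out of the way of new objects.
        LiveByName["TRASH_" + Name + "_" + std::to_string(TrashCounter++)] = Id;
    }
};
```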
Ideally, we would pool our “destroyed” objects at our discretion by holding them somewhere centralised, have all references cleared “automatically” through normal UObject GC, and change the serial number (only when safe to do so) - but keep the memory allocated. This really becomes a generic “pooling” system where all references to pooled objects are auto-cleared. Note that simply renaming objects with a new outer unfortunately doesn’t solve these issues either, because the objects remain valid, and weak/hard/key references to them remain valid as well.
So what we’d like to do is mark the object as garbage but retain a reference to it in some centralised pooling system. We would then allow GC to run reference collection and elimination as part of its normal cycle, which would clear hard references, run object destruction/cleanup, and reset the serial number. The centralised system would then flag that object as “ready” for reuse, and the object and its memory could be reused without ever having left GUObjectArray or the memory being freed.
Is any of this natively possible, or do all pooling systems suffer the same fate here?