Hi,
Thanks for the additional info!
So the behavior you’re seeing is expected, although it is not well documented. I’ve made a note to try to include this info somewhere more visible.
Calling an RPC on a dormant actor can cause that actor to replicate, as the remote function handling in the NetDriver does not check for dormancy. If a channel doesn’t exist for the actor the RPC is called on, one will be created, and the NetDriver will ensure the “initial” replication for this new channel is performed when sending the RPC (see UNetDriver::ProcessRemoteFunctionForChannelPrivate).
This means that if a client is in relevancy range of a dormant actor when a multicast RPC is called on that actor, that RPC and the actor’s property data will be replicated to the client.
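To make that concrete, here’s a minimal sketch (the class and function names are hypothetical, not from your project): an actor set up like this starts dormant, but calling MulticastPlayEffect on the server will still open a channel and replicate the actor to any client in relevancy range.

```cpp
// MyPickup.h -- hypothetical example actor
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyPickup.generated.h"

UCLASS()
class AMyPickup : public AActor
{
    GENERATED_BODY()

public:
    AMyPickup()
    {
        bReplicates = true;
        // Start dormant: no replication work while nothing is changing.
        NetDormancy = DORM_DormantAll;
    }

    // Calling this on the server will create a channel for the actor if one
    // doesn't exist and perform the initial replication, even though the
    // actor is dormant. (The _Implementation body is omitted here.)
    UFUNCTION(NetMulticast, Reliable)
    void MulticastPlayEffect();
};
```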
Again, this isn’t well documented, but it is the intended behavior. The expectation with dormancy is that while dormant, an actor doesn’t need to send RPCs and its replicated properties aren’t changing.
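For completeness, if you do want a dormant actor to replicate a change, the usual pattern is to wake it first. A sketch using the hypothetical actor above:

```cpp
// MyPickup.cpp (sketch) -- waking a dormant actor before sending data
void AMyPickup::ActivatePickup()
{
    check(HasAuthority());

    // FlushNetDormancy puts the actor back in consideration for replication
    // so the property change below gets sent, without changing NetDormancy.
    // Alternatively, SetNetDormancy(DORM_Awake) wakes the actor fully.
    FlushNetDormancy();

    bIsActive = true; // a replicated property (hypothetical)
    MulticastPlayEffect();
}
```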
It’s also worth noting that when a client enters relevancy range of an actor for the first time, the server will still perform an initial replication to inform the client of that actor, even if the actor is dormant. This appears to happen regardless of whether the actor was statically placed in the level or dynamically spawned.
Finally, it’s worth noting that a client can receive a reliable multicast RPC on an actor that is no longer relevant, provided the actor only recently went out of relevancy and its channel still exists on the server. This prevents clients from missing reliable RPCs when they quickly leave and re-enter the actor’s relevancy range (see UNetDriver::ProcessRemoteFunction).
Given this, I do agree that dormancy may not be the best solution for your project’s needs, especially if you need to call RPCs on the dormant actors. Hopefully the info here helps as you implement a custom solution for disabling/enabling actor replication (there’s a rough starting point sketched below), but if you have any questions, please don’t hesitate to reach out!
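For reference, one alternative is toggling replication directly with AActor::SetReplicates. This is just a sketch, not a drop-in solution: disabling replication closes the actor’s channel, so you’d need to decide how clients should handle their local copies.

```cpp
// Sketch of a custom enable/disable, using the hypothetical actor above.
void AMyPickup::SetReplicationEnabled(bool bEnabled)
{
    check(HasAuthority());

    // When disabled, the server stops considering this actor for replication
    // and closes any open channel; clients may destroy their local proxies,
    // so additional handling may be needed to keep them in sync.
    SetReplicates(bEnabled);
}
```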
Thanks,
Alex