SetReplicates does not send prior RPCs or that frame's property changes

For actors, when SetReplicates is set to false, any prior replicated property changes or unreliable RPCs won’t be sent out when it is expected that they should have been. If this is by design, then it is extremely frustrating, since there is no way to know how long I have to wait before I can safely call SetReplicates(false) on an actor.

If it helps, my expectation is that SetReplicates wouldn’t affect properties changed within the previous two frames (which is how long I had to wait in testing before replication deltas were sent out), and that RPCs called after the flag was changed would be discarded, but the already-queued RPCs wouldn’t be thrown out entirely, especially not reliable ones.

Or, at the very least, providing a way for me to know how long until SetReplicates(false) comes into effect would be extremely helpful, so I can pre-emptively delay changing it and make sure everything else gets across the network first.

Steps to Reproduce

  1. Change the Play In Editor settings’ Net Mode to Play As Client.
  2. Create a Blueprint Actor class (let’s name it BP_Chest) and open it.
  3. Enable Replicates for the actor.
  4. Open the Level Blueprint in the current level and connect a Switch Has Authority node to BeginPlay.
  5. Add a Spawn Actor of Class node, set its Class to BP_Chest, and connect it to the Authority pin of the Switch Has Authority node.
  6. Back in BP_Chest, add a boolean named bChestIsOpen and set its Replication to RepNotify.
  7. Inside the OnRep_bChestIsOpen function, add a Print String node and connect it so you can see when the function is called.
  8. In the Event Graph, add two custom events:
    1. Multicast_Test: set this event to be a Multicast event that is NOT Reliable.
    2. Multicast_Test_Reliable: set this event to be a Multicast event that IS Reliable.
  9. From the BeginPlay path, add a Switch Has Authority node, then connect a Delay node to its Authority pin and set the Delay’s duration to 5 seconds.
  10. From the Delay node, set bChestIsOpen to the opposite of its current value using a NOT node.
  11. After the bChestIsOpen setter, connect calls to Multicast_Test and Multicast_Test_Reliable (the order doesn’t really matter; the result is the same). Both should be chained after the boolean setter.
  12. At the very end of the chain, add a call to SetReplicates and make sure its In Replicates input is set to false.
  13. Select Play In Editor and observe after 5 seconds.
    1. You can shorten the delay for testing; the 5 seconds was to confirm that nothing was simply waiting for more time to get across the network in this scenario.

Observe that the only Print String on the client comes from the reliable RPC: the unreliable RPC doesn’t get sent (when it is expected to be sent), and the OnRep doesn’t get called on the client either. Everything still executes on the server as expected.
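For reference, here is a minimal C++ sketch of the same repro (the class name, log text, and timer setup are illustrative, not the original project code):

// Minimal C++ equivalent of the BP_Chest repro: toggle a RepNotify bool,
// fire both multicasts, then immediately call SetReplicates(false).
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Net/UnrealNetwork.h"
#include "ChestActor.generated.h"

UCLASS()
class AChestActor : public AActor
{
    GENERATED_BODY()

public:
    AChestActor() { bReplicates = true; }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        if (HasAuthority())
        {
            // Stand-in for the 5 second Delay node.
            FTimerHandle TimerHandle;
            GetWorldTimerManager().SetTimer(TimerHandle, this, &AChestActor::ToggleAndStopReplicating, 5.0f);
        }
    }

    virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override
    {
        Super::GetLifetimeReplicatedProps(OutLifetimeProps);
        DOREPLIFETIME(AChestActor, bChestIsOpen);
    }

protected:
    UPROPERTY(ReplicatedUsing = OnRep_bChestIsOpen)
    bool bChestIsOpen = false;

    UFUNCTION()
    void OnRep_bChestIsOpen() { UE_LOG(LogTemp, Log, TEXT("OnRep_bChestIsOpen called")); }

    UFUNCTION(NetMulticast, Unreliable)
    void Multicast_Test();
    void Multicast_Test_Implementation() { UE_LOG(LogTemp, Log, TEXT("Multicast_Test (unreliable)")); }

    UFUNCTION(NetMulticast, Reliable)
    void Multicast_Test_Reliable();
    void Multicast_Test_Reliable_Implementation() { UE_LOG(LogTemp, Log, TEXT("Multicast_Test_Reliable")); }

    void ToggleAndStopReplicating()
    {
        bChestIsOpen = !bChestIsOpen;   // replicated property change this frame
        Multicast_Test();               // unreliable multicast: never arrives on clients
        Multicast_Test_Reliable();      // reliable multicast: still arrives on clients
        SetReplicates(false);           // the unreliable RPC and property delta above are dropped
    }
};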

Hi,

I believe what you’re seeing is the expected behavior, as when an actor is set to no longer replicate, it is expected that no more information should be sent for that actor. After calling SetReplicates(false), the remote role of the actor is updated to ROLE_None, and on the next net update, UNetDriver::ServerReplicateActors_BuildConsiderList will remove the actor from the network objects list.

In the repro, the reliable multicast is still sent because reliable multicast RPCs are sent as soon as they are called, which in this case is before the actor is set to no longer replicate. Unreliable multicast RPCs are queued to be sent with the actor’s next replicated property update, and so this gets dropped.

Depending on your use case for this actor, there are a couple of options for avoiding this behavior.

First, you could have the client inform the server when it has received the necessary data by sending a server RPC, and once the server has received this for all clients, it can set the actor as no longer replicated.
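As a rough sketch of that pattern (all names here are hypothetical; note that server RPCs require ownership, so the acknowledgement would need to be routed through an actor the client owns, such as its PlayerController):

// Declared on the player controller. AMyPlayerController, AChestActor, and
// NumFinalStateAcks are all hypothetical names for this sketch.
UFUNCTION(Server, Reliable)
void Server_AckFinalChestState(AChestActor* Chest);

void AMyPlayerController::Server_AckFinalChestState_Implementation(AChestActor* Chest)
{
    // NumFinalStateAcks is a hypothetical counter stored on the chest actor.
    if (Chest && ++Chest->NumFinalStateAcks >= GetWorld()->GetNumPlayerControllers())
    {
        Chest->SetReplicates(false); // every client has confirmed the final state
    }
}

// On the client, call Server_AckFinalChestState(Chest) once the final state
// has been applied, e.g. from OnRep_bChestIsOpen.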

If the intention is that this actor should not be replicated again, then you could call AActor::TearOff on the server instead of SetReplicates(false). This will cause the actor to no longer replicate, but the server will still send an update to the client informing it of the tear off. Because of this, any changes made to the actor before the tear off will still be sent to the client, although because of packet loss, I believe it is still possible for the client to fail to receive these changes.
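For reference, that is a single server-side call on the standard AActor API:

// Server-side: stop replicating this actor, but still notify clients of the
// tear-off, so changes made before the call are (normally) delivered first.
MyActor->TearOff();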

If the actor is expected to replicate again later, then we’d recommend using dormancy instead of toggling bReplicates. When an actor is dormant, it won’t be considered for replication until it is woken up or flushed, and when setting an actor as dormant, the engine will wait until all changes have been acknowledged by the clients before the actor is considered fully dormant. You can find more info here: [Content removed]
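For completeness, a minimal sketch of the dormancy flow using the standard AActor API:

// Server-side: request full dormancy; the actor keeps replicating until all
// pending changes are acked by clients, then stops being considered.
Actor->SetNetDormancy(DORM_DormantAll);

// Later: push newly changed properties without fully waking the actor...
Actor->FlushNetDormancy();

// ...or wake it entirely so it is considered for replication every update.
Actor->SetNetDormancy(DORM_Awake);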

Thanks,

Alex

Gotcha. For additional context, it’s for an actor pooling setup. The reason for using SetReplicates instead of dormancy is that I had already investigated dormancy and found a few caveats where the behavior differed from what I was seeing in the documentation, and we also have the requirement of supporting relevancy.

I found that dormancy and relevancy affect each other depending on the situation (i.e. level-placed actor vs. spawned actor vs. owner-spawned actor).

I’ve provided my findings below, but generally the problem is that I want to put the actor in an inactive state with a clear cutoff across the network, and dormancy was not fulfilling that need completely while SetReplicates was. I am fine with sending an RPC to notify the server that clients are ready for SetReplicates to be called, but being able to check for this would be extremely helpful. You mentioned that UNetDriver::ServerReplicateActors_BuildConsiderList handles removing the actor from the net driver’s active list (thank you for that, it helps a ton!). How do you recommend I check whether the actor is in the NetDriver’s network actors list? Asking so I know the appropriate solution.

Dormancy findings (these were gathered with the actor’s default dormancy state set to Dormancy_All, by the way; I’ll note any variation for RPCs, but this was tested with both reliable and unreliable multicasts from the server):

  • Level-placed actors with Dormancy_All:
    • Relevancy when in range: OnReps and RPCs will execute on the client.
    • Relevancy out of range: no RPCs or OnReps execute on the client.
      • Enter into relevancy: only the initial OnRep will execute on the client; no other state changes will come through unless a flush is called.
  • Spawned actor with Dormancy_All:
    • Relevancy when in range: OnReps and RPCs will execute on the client.
    • Relevancy out of range before it spawned on the client: the actor is not spawned on this client, so no RPCs or OnReps execute.
      • Enter into relevancy: spawns the actor with the initial OnRep state and executes the OnRep unless flushed.
    • Relevancy out of range after it is spawned on the client: the actor will not receive RPCs or OnReps.
  • Owner-spawned actors with Dormancy_All (Net Load on Client is enabled, by the way):
    • Owner client out of relevancy range: RPCs and OnReps still execute.
    • Non-owner client out of relevancy range: RPCs and OnReps don’t execute on the client (as expected).
    • All other relevancy behaviors from the other two actor types also occur the same way.

So the main point of these findings is that the documentation may be incorrectly explaining what’s happening. To quote the doc you linked (the official public docs say similar things):

“While the actor channel for a replicated actor will be closed when it goes dormant, dormant actors will still exist on both the server and client. This is different to how relevancy is handled, where dynamic, replicated actors will be destroyed on the client when they are no longer relevant. It is worth noting that dormant actors will not be checked for relevancy, so if a dormant actor would otherwise go out of relevancy on a client, it will not be destroyed on that client (unless using a Replication Graph with “Net.RepGraph.DormantDynamicActorsDestruction” enabled).”

My goal, if I were to use dormancy, would be to turn off relevancy entirely while the actor is dormant, regardless of ownership, which is what SetReplicates seems to provide.

Hi,

Apologies for the confusion, but could you clarify the situations listed here and the behavior you’re seeing? From what you’ve described, this seems to be the expected behavior described in the docs, but I want to make sure I’m not misunderstanding.

For instance, when you say that OnReps and RPCs will be executed on the client for “Level Placed Actors with Dormancy_All” when “Relevancy when in range,” is this referring to flushing/waking up the actor when it is relevant? Also, for the owned actors, are these doing anything custom for their relevancy, or are they also just using distance based relevancy? Is the main issue that the non-relevant actors perform an initial replication to clients when they become relevant, regardless of whether that actor is dormant or not?

As for checking the NetworkObjectsList, you can get this from the NetDriver to find if an actor is contained in it:

NetDriver->GetNetworkObjectList().Find(MyActor);

The FNetworkObjectList class also has functions for getting all the active or dormant objects in the list.
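Putting that together, a minimal helper might look like the following (a sketch; header paths and the exact return type of Find may vary slightly between engine versions, but the result can be null-checked):

#include "Engine/World.h"
#include "Engine/NetDriver.h"
#include "Net/NetworkObjectList.h"

// Returns true if the actor is still tracked in the NetDriver's network
// object list (whether in the active or the dormant set).
bool IsInNetworkObjectList(AActor* Actor)
{
    UNetDriver* NetDriver = Actor ? Actor->GetWorld()->GetNetDriver() : nullptr;
    return NetDriver && NetDriver->GetNetworkObjectList().Find(Actor) != nullptr;
}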

Thanks,

Alex

First off, thank you for the network objects list approach, I greatly appreciate it! That’s something I can work with for detecting when the actor is added/removed if I use the SetReplicates approach.

“Apologies for the confusion, but could you clarify the situations listed here and the behavior you’re seeing?”

Sure thing! I was not using any flush/wake-up functionality at all in any of the scenarios I listed (unless I specified a flush requirement for it). It was all done in C++, because I know modifying a replicated Blueprint variable will trigger a flush. (I can provide a sample project version of it, but I was able to reproduce this in the Content Examples replication map, using the last chest in the level, by making a C++ class and just overriding the hierarchy. I used that map because it makes it easy to test different states and events with visuals.)

The relevancy I was using was the default distance-based relevancy radius of 50,000 units specified in the class defaults, so nothing really different (again referencing the Content Examples replication map’s settings pretty much as a baseline).

For owned actors, I was spawning the actor from a non-owned actor and using the first player controller as the owner. I did confirm that the player controller was valid before spawning the actor (a simple and dirty looping timer that waited for GameplayStatics’ GetPlayerController at index 0 to become valid, then spawned the actor using that as the owner).

The main issue is that if the player is in relevancy range when the actor is set to dormant, my expectation (based on the documentation saying relevancy should not affect dormancy) was that relevancy would not cause the actor to receive any further OnReps/RPCs unless I manually flush it, so I can place it in a fully inactive state.

But since OnReps/RPCs do execute if the player is within range, I can’t just park the inactive actor at something like the center of the world.

This was all tested on the launcher version of the 5.4 engine too, by the way (we have a source version we actually develop on, but to separate our changes from public engine code, everything I’m referencing in this entire post is the launcher version).

Hi,

Thanks for the additional info!

So the behavior you’re seeing is expected, although it is not well documented. I’ve made a note to try and include this info somewhere more visible.

Calling an RPC on a dormant actor can cause that actor to replicate, as the remote function handling in the NetDriver does not check for dormancy. If a channel doesn’t exist for an actor calling an RPC, one will be created, and the NetDriver will make sure the “initial” replication for this new channel is performed when sending the RPC (see UNetDriver::ProcessRemoteFunctionForChannelPrivate).

This means that if a client is in relevancy range of a dormant actor when a multicast RPC is called on that actor, that RPC and the actor’s property data will be replicated to the client.

Again, this isn’t well documented, but it is the intended behavior. The expectation with dormancy is that while dormant, an actor doesn’t need to send RPCs and its replicated properties aren’t changing.
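Given that, if dormant actors need to stay completely silent, one possible workaround (a sketch, not an engine-endorsed pattern) is to guard the server-side call sites using the actor’s public NetDormancy state:

// Server-side guard: only send the multicast while the actor is awake, so the
// RPC can't open a channel and trigger an initial replication for relevant
// clients. NetDormancy is the public dormancy state on AActor.
if (Chest->NetDormancy == DORM_Awake)
{
    Chest->Multicast_Test(); // the multicast from the repro above
}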

It’s also worth noting that when a client enters relevancy range for an actor for the first time, the server will still perform an initial replication for this actor in order to inform the client of this actor, even if the actor is dormant. This looks to be done regardless of whether the actor was statically placed in the level or dynamically spawned.

Finally, something else worth noting is that it is possible for a client to receive a reliable multicast RPC on an actor that is no longer relevant, provided that the actor only recently went out of relevancy and the channel still exists on the server. This is to prevent clients from missing reliable RPCs in the case where the client goes out and then back into relevancy range of the actor quickly (see UNetDriver::ProcessRemoteFunction).

Given this, I do agree that dormancy may not be the best solution for your project’s needs, especially if you need to call RPCs on the dormant actors. Hopefully the info here helps as you implement a custom solution for disabling/enabling actor replication, but if you have any questions, please don’t hesitate to reach out!

Thanks,

Alex