Why is an actor that never gets placed in OutGatheredReplicationLists still being replicated to the viewer in the Replication Graph?

Hello,

I’m using the Replication Graph in our project, and one of our use cases is to spawn a predicted projectile alongside a server-authoritative projectile while preventing the server version from spawning on the predicting client. We previously used IsNetRelevantFor to implement this logic, so we’ve created a new UReplicationGraphNode_ActorList subclass for these projectiles in the rep graph, which iterates over each projectile in the list and calls IsNetRelevantFor before building the gathered list.
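For reference, a minimal sketch of what that node looks like on our end (the class name is illustrative, and exact member names such as Params.Viewers vary by engine version):

```cpp
// Illustrative sketch of a relevancy-filtering list node.
// UMyProjectileListNode is a hypothetical UReplicationGraphNode_ActorList subclass.
void UMyProjectileListNode::GatherActorListsForConnection(const FConnectionGatherActorListParameters& Params)
{
    // Build a per-connection list containing only the projectiles that
    // pass IsNetRelevantFor for this viewer.
    FActorRepListRefView RelevantList;

    const FNetViewer& Viewer = Params.Viewers[0];
    for (const FActorRepListType& Actor : ReplicationActorList)
    {
        if (Actor && Actor->IsNetRelevantFor(Viewer.InViewer, Viewer.ViewTarget, Viewer.ViewLocation))
        {
            RelevantList.Add(Actor);
        }
    }

    if (RelevantList.Num() > 0)
    {
        Params.OutGatheredReplicationLists.AddReplicationActorList(RelevantList);
    }
}
```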

I’ve confirmed that none of these actors are being added to the OutGatheredReplicationLists list. However, BeginPlay and EndPlay are still being called for the actor on the client, which ideally would never spawn at all. I have a feeling it’s related to the ReplicateDestructionInfos or ReplicateDormantDestructionInfos calls further down in UReplicationGraph::ServerReplicateActors, but I’m finding it difficult to track down. Do you know why this actor would still be replicated, and if so, is there any way to prevent it?

Thanks

Hi,

It’s hard to say for sure why this is happening without some more info, and there are a couple of things you can do to further debug the problem.

First, you can try setting the projectile as the replication graph’s debug actor, using “Net.RepGraph.SetDebugActor <ClassName>” and/or “Net.RepGraph.ConditionalBreakpointActorName <ActorName>”. These will provide more info in the logs on how the actor(s) are being handled by the rep graph, as well as allow you to place breakpoints at certain places under the RepGraphConditionalActorBreakpoint function.

If the destruction info is a suspected cause, you can enable the LogNetTraffic category, as this will print a line when UNetDriver::SendDestructionInfo sends destruction info for a channel. This category will also print a lot of other information on the network traffic of your project, but you can check for the destruction info lines to see if these are being sent before the client calls BeginPlay/EndPlay for the actors.

Thanks,

Alex

Hi Alex,

Thanks for the prompt response. Using LogNetTraffic allowed me to see what is causing the issue: one of our Blueprints is calling a reliable multicast RPC, which causes the non-relevant actor to be created on the client. Changing this RPC to unreliable does not help, but if I remove all calls to the RPC, the actor no longer spawns, as expected.

Is there something I’m missing to prevent these RPCs from being sent? I would have assumed they follow the same relevancy rules as the rest of the node’s actors.

To get around this in the short term, I’ve added a new bool to the FConnectionReplicationActorInfo struct that I set to true when the IsNetRelevantFor check fails. UReplicationGraph::ProcessRemoteFunction then simply continues to the next connection in its loop if the flag is set.
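Roughly, the change looks like this (the flag name is ours, and the loop below is a simplified approximation of the engine’s per-connection loop, not a verbatim copy):

```cpp
// In FConnectionReplicationActorInfo (engine struct, modified locally):
// uint8 bSuppressRPCs : 1; // set to true when IsNetRelevantFor fails for this connection

// Simplified sketch of the added check inside the per-connection loop
// of UReplicationGraph::ProcessRemoteFunction:
for (UNetReplicationGraphConnection* ConnectionManager : Connections)
{
    FConnectionReplicationActorInfo& ActorInfo =
        ConnectionManager->ActorInfoMap.FindOrAdd(Actor);

    if (ActorInfo.bSuppressRPCs)
    {
        // Skip this connection entirely so the RPC never opens an actor channel.
        continue;
    }

    // ... existing RPC routing / channel creation logic ...
}
```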

However, I assume that there is a proper solution to this issue so any direction would be amazing. Thanks!

Hi,

When processing a remote function results in an actor channel being created, UNetDriver::ProcessRemoteFunctionForChannelPrivate will call ReplicateActor on the new channel, in order to ensure the actor channel’s initial replication will open the channel on the client. As you’ve noted, UReplicationGraph::ProcessRemoteFunction doesn’t check any node’s logic around gathering actors when determining if the actor channel should be created.

Unfortunately, I don’t believe there is a way to change this behavior outside of changing UReplicationGraph::ProcessRemoteFunction, like you’ve done here. Another potential workaround would be to send client RPCs to the connections that should receive the actor, rather than using a multicast.
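As a sketch of that alternative (the class, function, and helper names here are hypothetical, not engine API), the server would iterate over player controllers and send a client RPC only to the connections that should see the projectile, so no channel is ever opened on the others:

```cpp
// Illustrative sketch: replacing the multicast with targeted client RPCs.
// The client RPC lives on the player controller, so only the targeted
// connection receives it.

UCLASS()
class AMyPlayerController : public APlayerController
{
    GENERATED_BODY()
public:
    UFUNCTION(Client, Reliable)
    void ClientOnProjectileEvent(AMyProjectile* Projectile);
};

// Server-side: send the RPC only where the actor is relevant.
void NotifyProjectileEvent(UWorld* World, AMyProjectile* Projectile)
{
    for (FConstPlayerControllerIterator It = World->GetPlayerControllerIterator(); It; ++It)
    {
        if (AMyPlayerController* PC = Cast<AMyPlayerController>(It->Get()))
        {
            if (ShouldReceiveProjectile(PC, Projectile)) // hypothetical relevancy check
            {
                PC->ClientOnProjectileEvent(Projectile);
            }
        }
    }
}
```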

I think your workaround here is reasonable, but if you run into any problems or have any further questions, please don’t hesitate to reach out. However, please note that support will be limited for the next two weeks due to the company break.

Thanks,

Alex