UMoverNetworkPredictionLiaisonComponent consumes a lot of bandwidth even for stationary actors

We’re evaluating Mover with the Network Prediction backend for our project, and we’re seeing replication of even stationary actors saturate the network fairly quickly, whereas UCharacterMovementComponent doesn’t exhibit this behavior.

We’ve done a fair amount of work to cut down on the number of bits sent when replicating our sync states (FMoverDefaultSyncState included) to the point where there isn’t a lot of fat left to cut from individual actors.

I’ve traced through UNetDriver.cpp into UActorChannel::ReplicateActor for both CMC and Mover. A stationary CMC-backed character ends up with bWroteSomethingImportant false, while a stationary Mover actor comes out true due to FReplicationProxy::Identical (code snippets at bottom).

I’m curious about the reasoning behind FReplicationProxy::Identical comparing CachedPendingFrame values rather than the underlying data. It must save CPU cycles on the authority machine, but that’s a huge cost in terms of bits over the network, since the frame counter advances even when the state payload hasn’t changed.

Does it seem reasonable to create an FReplicationProxy::IdenticalFunc/Predicate analogous to FReplicationProxy::NetSerializeFunc, assign it in NetworkPredictionWorldManager::Bind functions, then call it from FReplicationProxy::Identical the same way that FReplicationProxy::NetSerialize calls its NetSerializeFunc?

bool FReplicationProxy::Identical(const FReplicationProxy* Other, uint32 PortFlags) const
{
	return (CachedPendingFrame == Other->CachedPendingFrame);
}

----
class UMoverNetworkPredictionLiaisonComponent : public UNetworkPredictionComponent, public IMoverBackendLiaisonInterface
----

class UNetworkPredictionComponent : public UActorComponent
...
UPROPERTY(Replicated, transient)
FReplicationProxy ReplicationProxy_Autonomous;

UPROPERTY(Replicated, transient)
FReplicationProxy ReplicationProxy_Simulated;

UPROPERTY(Replicated, transient)
FReplicationProxy ReplicationProxy_Replay;

Steps to Reproduce
1. Make a scene with ~50 actors whose Mover components are configured to use UMoverNetworkPredictionLiaisonComponent, placed near the player start (so they don’t get culled) and with no controller/AI wired up, so they remain stationary.
2. Configure PIE to launch in Listen Server net mode with 2 players.
3. Open the scene, run it, open the Network Profiler, and look at the amount of bandwidth used replicating the stationary actors.
4. Repeat with the same number of UCharacterMovementComponent-backed characters and compare the results.

I’m seeing nothing coming across the wire for stationary CMC actors (after the initial replication) and mover actors having a serious network impact.

Yes, this extra unnecessary bandwidth use is largely due to a missing delta serialization feature. This is something we’ll be tackling soon as part of the push to Beta. An identical check would be a reasonable stopgap in the meantime, and it would be much easier to implement with a TFunction like you suggested. Note that if the server doesn’t send any state for a while, NPP will start warning about starvation, so you may need to silence those warnings. I think the existing logic of carrying the old frame forward in the absence of newly received frames will still work.

Additionally, if you haven’t seen them, there are a few settings that can give you some extra headroom until the bandwidth usage issues are resolved. In Engine.ini:

[/Script/Engine.Player]
ConfiguredInternetSpeed=800000
ConfiguredLanSpeed=800000
[/Script/OnlineSubsystemUtils.IpNetDriver]
MaxClientRate=800000
MaxInternetClientRate=800000

Hope this helps!

Justin