Do you have any tips on keeping Mover bandwidth down?

Hello,

We’ve got 8 Mover pawns in our game, currently replicating at all times. Aside from using relevancy/replication graph, do you have any ideas on how to generally reduce Mover bandwidth? We need to optimise for the worst case, which could be all 8 movers within the same relevancy area, and as it stands, each Mover component is using 673 bits of data per frame, which takes up most of the 8Kb budget that we seem to be hard capped at.

I can see that the 8Kb limit is applied in CoreNet.h under MAX_PACKET_SIZE. Is it OK to increase this value, or is it set that way to support all client configurations?

Is there any Insights view that shows how the Mover packet is being arranged? I know there is the NetworkPrediction trace channel, but that only seems to show the values on a given frame rather than the memory layout, and I haven’t seen anything in the Networking Insights view.

Thanks

Hi,

MAX_PACKET_SIZE defines the MTU for the connection, but it does not control the per-frame bandwidth limit. I don’t believe we generally recommend changing this value.

The bandwidth limit is set by the ConfiguredInternetSpeed, ConfiguredLanSpeed, MaxClientRate, and MaxInternetClientRate config values. These are by default 100,000, but they can be raised further if needed. These values represent bytes per second, so the per-frame bandwidth limit is also dependent on the framerate (you can see where this is calculated towards the end of UNetConnection::Tick).
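
As a rough illustration of how those settings turn into a per-frame budget, here’s a simplified sketch (illustrative only, not the engine’s actual code; the real calculation is towards the end of UNetConnection::Tick):

```cpp
// Simplified, illustrative sketch of a bytes-per-second rate becoming a
// per-frame bit budget; not the engine's actual implementation.
int64 ComputeFrameBudgetBits(int32 RateBytesPerSecond, float DeltaSeconds)
{
    // e.g. 100,000 bytes/sec at 60 fps is roughly 1,666 bytes (~13,333 bits)
    // per frame, but each individual packet is still capped by MAX_PACKET_SIZE (the MTU).
    const double BudgetBytes = static_cast<double>(RateBytesPerSecond) * DeltaSeconds;
    return static_cast<int64>(BudgetBytes * 8.0);
}
```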

In Networking Insights, it’s worth noting that each column represents a single packet, not a single frame. If you hover over each column, you can see the timestamp and engine frame number the packet was sent/received on.

As for seeing how the packet is arranged, I don’t believe there is any way to get more info than what is shown in the packet contents panel, where you can see the number of bits that were sent/received as part of the MoverNetworkPredictionLiaisonComponent’s replication (ActorChannel->MoverNetworkPredictionLiaisonComponent->Properties->ReplicationProxy_Simulated/Autonomous).

Finally, for more info on managing Mover’s bandwidth, I’m going to loop in someone more familiar with the system.

Thanks,

Alex

Some thoughts on reducing bandwidth:

Mover uses custom net serialization on the data it sends, which makes it difficult to inspect the contents/layout. There are a handful of tricks we’ve already employed to reduce the data sent. FMoverDefaultSyncState::NetSerialize shows some of them, such as fixed/packed/compressed serialization and single-bit bool serialization. You can use these in any custom struct NetSerialize functions if you aren’t already. (Long term, we want to support doing this via UPROPERTY markup.)
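
For example, a custom struct NetSerialize along these lines can use packed/quantized vector serialization and single-bit bools. This is a minimal sketch with made-up struct/field names and precision values, not Mover’s actual code:

```cpp
#include "CoreMinimal.h"
#include "Engine/NetSerialization.h"
#include "MyCustomSyncState.generated.h"

// Hypothetical example struct - names and quantization settings are illustrative.
USTRUCT()
struct FMyCustomSyncState
{
    GENERATED_BODY()

    UPROPERTY()
    FVector Velocity = FVector::ZeroVector;

    UPROPERTY()
    bool bIsCrouching = false;

    bool NetSerialize(FArchive& Ar, UPackageMap* Map, bool& bOutSuccess)
    {
        // Packed/quantized vector: 2 decimal places of precision, up to 30 bits
        // per component, instead of three full-precision floats.
        SerializePackedVector<100, 30>(Velocity, Ar);

        // Single-bit bool instead of a full byte.
        uint8 Bit = bIsCrouching ? 1 : 0;
        Ar.SerializeBits(&Bit, 1);
        bIsCrouching = (Bit != 0);

        bOutSuccess = true;
        return true;
    }
};

// Tell the reflection system this struct has a custom NetSerialize.
template<>
struct TStructOpsTypeTraits<FMyCustomSyncState> : public TStructOpsTypeTraitsBase2<FMyCustomSyncState>
{
    enum { WithNetSerializer = true };
};
```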

If using the FMoverDefaultSyncState, make sure your non-moving walkable objects have their Mobility set to something other than Movable (i.e. Static or Stationary). Otherwise, you may be including unnecessary movement base information.
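
This is normally set in the editor’s Details panel, but for completeness, a sketch of doing it in code (FloorActor is a hypothetical actor pointer):

```cpp
// Illustrative only: mark non-moving walkable geometry as Static so it isn't
// treated as a dynamic movement base.
if (UStaticMeshComponent* Mesh = FloorActor->FindComponentByClass<UStaticMeshComponent>())
{
    Mesh->SetMobility(EComponentMobility::Static);
}
```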

If you can detect when you’re saturating bandwidth or have a lot of relevant characters in the same vicinity, you may be able to come up with a scheme to change the NetSerialize functions to serialize some things with lower fidelity (fewer bits) or stop sending some kinds of infrequently-changing data. This trades fidelity for bandwidth.
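
As a sketch of the kind of thing I mean (function name, precision values, and the bLowFidelity decision are all placeholders for whatever saturation heuristic you come up with):

```cpp
#include "Engine/NetSerialization.h"

// Hypothetical: spend one bit to record which precision was used, then
// serialize the vector with fewer bits when the connection is saturated.
// The bit is written when saving and read back when loading, so both sides
// agree on which path was taken.
bool SerializeVelocityAdaptive(FArchive& Ar, FVector& Velocity, bool bLowFidelity)
{
    uint8 bLowBit = bLowFidelity ? 1 : 0;
    Ar.SerializeBits(&bLowBit, 1);

    if (bLowBit)
    {
        // Coarser quantization: 1 decimal place, fewer bits per component.
        return SerializePackedVector<10, 20>(Velocity, Ar);
    }

    // Full-fidelity path: 2 decimal places, more bits per component.
    return SerializePackedVector<100, 30>(Velocity, Ar);
}
```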

Depending on the data in your sync state, it may be possible to pull some rarely-changing data out of the sync state and replicate it using regular UPROPERTY replication. This reduces general bandwidth at the cost of potential corrections when that data changes.
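
For instance, something rarely changing could be moved to ordinary property replication like this (hypothetical class and property names; MaxWalkSpeed would be declared as UPROPERTY(Replicated) in the header):

```cpp
#include "Net/UnrealNetwork.h"

// Hypothetical: a rarely-changing value registered for standard replication,
// so it only costs bandwidth when it actually changes rather than being
// carried in the Mover sync state every frame.
void AMyCharacter::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);

    DOREPLIFETIME(AMyCharacter, MaxWalkSpeed);
}
```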

I hope some of this helps!

Justin

Hi guys.

Thanks a lot for all of the info. There are some good points in there!

Knowing that Networking Insights shows individual packets rather than per-game-frame data makes much more sense and demystifies the question around MAX_PACKET_SIZE.

We’ve been using quite a few of the net serialization tricks from FMoverDefaultSyncState::NetSerialize, but it’s good to know we’re not missing anything when it comes to viewing the contents of the packet. It would be great to get that at some point, but I can also understand why it’s particularly difficult!