How is server-to-client bandwidth controlled in Iris in UE 5.5?

Hi Iris Networking friends!

I’m doing some exploring around bandwidth control and the total object replication cap when using Iris. I’m somewhat familiar with configuring TotalNetBandwidth, MinDynamicBandwidth, and MaxDynamicBandwidth, which AGameNetworkManager::UpdateNetSpeeds uses to set the player controller’s net speed, but it looks like that only happens on non-dedicated servers. How does this get used on a dedicated server (does it at all)?
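
For context, my rough mental model of the listen-server path is below. This is a paraphrase of the AGameNetworkManager logic as I understand it, not verbatim engine code:

// Paraphrase (not verbatim): the listen server appears to split the total
// budget evenly among players, then clamp to the configured range.
int32 CalculatedNetSpeedSketch(int32 TotalNetBandwidth, int32 NumPlayers,
                               int32 MinDynamicBandwidth, int32 MaxDynamicBandwidth)
{
    return FMath::Clamp(TotalNetBandwidth / FMath::Max(NumPlayers, 1),
                        MinDynamicBandwidth, MaxDynamicBandwidth);
}
// UpdateNetSpeeds then pushes the result to each APlayerController via SetNetSpeed.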

I’m also curious where the bandwidth limiting actually happens in Iris, i.e. how Iris decides “OK, I’ve replicated enough prioritized data this frame, I’m going to stop.” From poking around, it roughly looks like this loop:

// Begin the write; if we have nothing to do, just return
if (DataStreamManager->BeginWrite(BeginWriteParams) == UDataStream::EWriteResult::NoData)
{
    return;
}

do 
{
    // Write data until we are not allowed to write more
}
while ((WriteDataFunction() == UDataStream::EWriteResult::HasMoreData) && IsNetReady(UE::Net::Private::bIrisSaturateBandwidth) && (!IsPacketWindowFull()));

If this is the case, what controls do I have over how much data Iris can send? It looks like IsPacketWindowFull is checking whether we have close to 256 unack’d packets (if I’m reading it right), and IsNetReady is… I’m not exactly sure. It seems to check the following things:

NumOutRec >= RELIABLE_BUFFER

so this checks whether the count of outgoing “rec” has reached RELIABLE_BUFFER? Is that outgoing reliable RPCs?

and then there’s

return QueuedBits + SendBuffer.GetNumBits() <= 0;

which seems to check whether the number of queued bits plus the number of bits in the SendBuffer is <= 0? (So I’m guessing one of these numbers is negative?) Is there an easy place to understand what this is checking specifically and how it ties into bandwidth throttling?

And finally: is there some other place where Iris caps outgoing bandwidth / the number of objects it replicates after they have become prioritized?

Thanks very much!

Josh

Hi,

Currently, Iris uses the same configurable bandwidth limits as the default replication system. You are correct that TotalNetBandwidth is only used for listen servers, with dedicated servers using the ConfiguredInternetSpeed/ConfiguredLanSpeed values (clients will use MaxClientRate/MaxInternetClientRate).
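
For reference, these are all plain config properties. A typical setup looks something like this (section names from the engine’s base ini files; the numbers are purely illustrative, not recommendations):

; DefaultGame.ini - listen-server dynamic bandwidth split
[/Script/Engine.GameNetworkManager]
TotalNetBandwidth=32000
MinDynamicBandwidth=4000
MaxDynamicBandwidth=7000

; DefaultEngine.ini - per-connection rates, in bytes per second
[/Script/Engine.Player]
ConfiguredInternetSpeed=100000
ConfiguredLanSpeed=100000

[/Script/OnlineSubsystemUtils.IpNetDriver]
MaxClientRate=100000
MaxInternetClientRate=100000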

These are the main values that will be configured for bandwidth limit control, although projects can call APlayerController::SetNetSpeed to change this limit as well.
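
For example, something like this at runtime (BumpNetSpeed is a hypothetical helper and the value is illustrative; the engine clamps it against the net driver’s configured max rate):

// Hypothetical helper: raise one player's rate at runtime.
void BumpNetSpeed(APlayerController* PC)
{
    if (PC)
    {
        PC->SetNetSpeed(60000); // bytes per second, illustrative value
    }
}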

You can see where CurrentNetSpeed is used to calculate the per-tick limit (DeltaBits) toward the end of UNetConnection::Tick. That value is truncated, negated, and assigned to QueuedBits:

/**
 * The number of bits allowed to be sent in a given tick to respect the bandwidth-per-second net speed.
 * Starts negative and increases with each serialized bit. When > 0, it means we sent more bits than allowed.
 * Set at the end of every Tick based on the data sent that tick.
 */
int32			QueuedBits;
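
To make the sign convention concrete, here is a toy model with illustrative numbers (not engine code):

// Toy model of the per-tick bit budget. With a 15000 bytes/sec rate and a
// 30 Hz server tick, the budget is 15000 * 8 / 30 = 4000 bits per tick.
const int32 CurrentNetSpeed = 15000;   // bytes per second
const float DeltaTime = 1.0f / 30.0f;  // seconds per tick
const int32 DeltaBits = (int32)(CurrentNetSpeed * DeltaTime * 8.0f); // 4000

int32 QueuedBits = -DeltaBits; // set at the end of Tick; starts negative

// Every bit serialized into a packet increases QueuedBits, so the IsNetReady
// check (QueuedBits + SendBuffer.GetNumBits() <= 0) keeps passing until
// roughly 4000 bits have been queued this tick, then flips to "not ready".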

As for NumOutRec, this is the number of outgoing, unacknowledged reliable bunches, which may not necessarily just be reliable RPCs.
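
Putting the two checks you quoted together, the overall shape is roughly this (a paraphrase, not verbatim engine code; RELIABLE_BUFFER defaults to 256):

// Paraphrase of the channel-level readiness check: first gate on in-flight
// reliable bunches, then defer to the connection's bit budget.
bool IsNetReadySketch(int32 NumOutRec, int32 QueuedBits, int32 SendBufferBits, bool bSaturate)
{
    if (NumOutRec >= 256 /* RELIABLE_BUFFER */)
    {
        return false; // too many unacknowledged reliable bunches in flight
    }
    // When the saturate flag is passed (Iris passes bIrisSaturateBandwidth),
    // the bit budget is relaxed so the current packet can still be filled;
    // otherwise "ready" means this tick's negative budget isn't used up yet.
    if (bSaturate)
    {
        QueuedBits = -SendBufferBits;
    }
    return QueuedBits + SendBufferBits <= 0;
}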

UDataStreamChannel::IsPacketWindowFull is checking if there are too many unacknowledged packets for the connection. This is done in FNetPacketNotify::IsSequenceWindowFull, with a default SafetyMargin of 4 (configured using net.Iris.PacketSequenceSafetyMargin). This function checks if the SequenceLength (the difference between the last packet sent, OutSeq, and the last packet that was ack’d, OutAckSeq) is greater than the MaxSequenceHistoryLength. If there are too many unacknowledged packets, Iris will stop sending until the OutAckSeq catches up.
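
Sketched out, that check is roughly the following (a paraphrase; the real FNetPacketNotify code uses wraparound-safe sequence arithmetic, which is omitted here):

// Paraphrase: the packet window is full when the number of unacknowledged
// packets in flight, plus the safety margin, would exceed the sequence
// history the receiver can acknowledge.
bool IsSequenceWindowFullSketch(int32 OutSeq, int32 OutAckSeq, int32 SafetyMargin /* default 4 */)
{
    const int32 MaxSequenceHistoryLength = 256;
    const int32 SequenceLength = OutSeq - OutAckSeq; // unacked packets in flight
    return SequenceLength + SafetyMargin > MaxSequenceHistoryLength;
}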

It’s also worth noting that Iris enforces a max number of packets per batch. This was previously only one packet, but as of CL 39325359, this is now configurable using net.Iris.ReplicationWriterMaxAllowedPacketsIfNotHugeObject (default value is 3). You can see where this is checked at the beginning of UDataStreamChannel::WriteData.
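
For example, to experiment with a higher cap (illustrative value; measure the impact before shipping):

; DefaultEngine.ini
[SystemSettings]
net.Iris.ReplicationWriterMaxAllowedPacketsIfNotHugeObject=5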

Thanks,

Alex

Thanks, this is great info!