I need a clear explanation of the replication conditions

I mark the property I want to replicate as COND_InitialOnly in GetLifetimeReplicatedProps, and after it has been replicated for the first time, I use a Reliable Multicast to keep it synchronized.

The advantage of this strategy is that it avoids repeated synchronization checks for properties that are not frequently replicated, and it also makes event handling easier.


void AInventoryBase::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);

    DOREPLIFETIME_CONDITION(AInventoryBase, Container, COND_InitialOnly);
}

The problem is that I don’t know whether Unreal Engine really optimizes properties registered with DOREPLIFETIME_CONDITION and marked COND_InitialOnly.

I haven’t been able to figure out exactly where in the engine the Delta evaluation of the property occurs.

The ‘Container’ member in the code snippet above is a fairly large struct containing multiple arrays. If this struct is delta-evaluated on every replication tick, like any property that is always replicated, the performance cost will be quite large.

I want to know if properties flagged with COND_InitialOnly are really only evaluated when a new relevancy is established.

There’s no need to use reliable multicasts - and doing so is probably causing more problems than it’s solving.

  • Properties are only considered for replication when they change. There are no such synchronisation requests; the server will only send a property to a client if the last-acked value from that client doesn’t match the current value.
  • You can use Rep Notifies to get event callbacks when properties replicate. They are processed locally and therefore have no network cost, and you can even force them to be called when the received property hasn’t changed (for example, if you are modifying replicated properties locally client-side for prediction or somesuch).
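As a sketch of the Rep Notify approach (the struct type `FContainerData`, the callback name `OnRep_Container`, and the helper `RefreshInventoryUI` are assumptions for illustration, not from your actual code):

```cpp
UCLASS()
class AInventoryBase : public AActor
{
    GENERATED_BODY()

protected:
    // RepNotify: the engine calls OnRep_Container on each client
    // whenever a new value of Container is received.
    UPROPERTY(ReplicatedUsing = OnRep_Container)
    FContainerData Container;

    UFUNCTION()
    void OnRep_Container()
    {
        // Runs locally on the client after Container replicates;
        // this callback itself costs no network traffic.
        RefreshInventoryUI(); // hypothetical helper
    }
};
```

This gives you the same “event on change” behaviour you wanted from the multicast, but driven by the replication system, so join-in-progress and relevancy are handled for free.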

Using multicasts means you have to maintain state synchronisation yourself - this won’t work well with network relevancy, join-in-progress etc. It also means that each time you call that multicast, you are broadcasting the entire property to every client - even though they may already have the latest state.

Unreal will also send the delta of properties where possible. If you have a struct and only one or two properties within the struct have changed, then only those properties will be sent. Arrays are less efficient by default, but you can use the FFastArraySerializer system to only send changed array elements. TL;DR - It is far more efficient to use the engine’s built-in variable replication.
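For reference, a minimal FFastArraySerializer setup looks roughly like this (the entry fields `ItemId`/`Count` and the type names are illustrative assumptions):

```cpp
USTRUCT()
struct FInventoryItemEntry : public FFastArraySerializerItem
{
    GENERATED_BODY()

    UPROPERTY()
    int32 ItemId = 0;

    UPROPERTY()
    int32 Count = 0;
};

USTRUCT()
struct FInventoryContainer : public FFastArraySerializer
{
    GENERATED_BODY()

    UPROPERTY()
    TArray<FInventoryItemEntry> Items;

    // Required hook: delta-serializes only added/changed/removed entries
    // instead of the whole array.
    bool NetDeltaSerialize(FNetDeltaSerializeInfo& DeltaParms)
    {
        return FFastArraySerializer::FastArrayDeltaSerialize<FInventoryItemEntry, FInventoryContainer>(
            Items, DeltaParms, *this);
    }
};

template<>
struct TStructOpsTypeTraits<FInventoryContainer> : public TStructOpsTypeTraitsBase2<FInventoryContainer>
{
    enum { WithNetDeltaSerializer = true };
};
```

When you mutate the array on the server you call `MarkItemDirty()` on the changed entry (or `MarkArrayDirty()` for removals), so only those elements are resent.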

In 4.25 (I think it’s in now), there is also a new push-model system so that you can explicitly mark properties as “dirty” at the game-code level. This allows you to have fine-grained control over when properties replicate, and you can continue to rely on the existing variable replication system in the engine.
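A push-model sketch using the names from your earlier snippet (the `AddItem` function is a hypothetical mutator, not from your code):

```cpp
#include "Net/Core/PushModel/PushModel.h"

void AInventoryBase::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);

    FDoRepLifetimeParams Params;
    Params.bIsPushBased = true; // property is only compared after being marked dirty
    DOREPLIFETIME_WITH_PARAMS_FAST(AInventoryBase, Container, Params);
}

void AInventoryBase::AddItem(/* ... */)
{
    // ... mutate Container on the server, then tell the
    // replication system that it changed:
    MARK_PROPERTY_DIRTY_FROM_NAME(AInventoryBase, Container, this);
}
```

With this, the server never pays the per-tick comparison cost for `Container` unless you have explicitly flagged it as changed.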


As for COND_InitialOnly - those properties will only be considered for replication when the actor channel is opened. Actor channels are opened when actors become relevant to a connection - so if your actor leaves relevancy range, then returns, the property will be sent again. This is because when an actor leaves relevancy, it is fully destroyed on the client - as far as the client is concerned, that actor doesn’t exist.

They will also be sent again if the actor has been dormant on a connection, and is then woken from dormancy.

Thanks for reply.

I got to know COND_InitialOnly clearly.

There are several reasons why I chose reliable multicast delegates over property replication.

First is responsiveness. Typical property replication is evaluated at a pre-specified (sometimes dynamically changing) interval, which can delay the client receiving changes from the server.

Of course, in many cases a delay of 0.1 seconds is short enough, but I think it is advantageous to notify clients of sensitive information, such as the character’s HP status, as soon as possible.

The next reason is the overhead of delta evaluation. When a property is marked as Replicated, a delta evaluation of the entire property data must be performed every replication cycle.

Even if the amount of data transmitted over the network is small, the entire data must be iterated over to compute the delta. So it’s a bit scary to entrust large chunks of data, such as an inventory, to the built-in replication system.

An inventory, for example, usually doesn’t change very often, but clients need to be notified when it does change, and it is usually large.

By entrusting it to the built-in replication system, you perform unnecessary delta evaluations on large data while it remains unchanged. However, if you slow down the cycle, responsiveness decreases.

So I thought it would be desirable for these classes to utilize COND_InitialOnly and Reliable Multicast.

In this case, if I synchronize only the initial state through the built-in system and then send only the commands that change some of the data to clients through Reliable Multicast, I can control much of the synchronization myself.

For event handling, I also decided that calling the value change itself as a function is more intuitive than evaluating the replicated value on the client and reacting to it.

This is based on my personal understanding of Unreal Engine. If there is anything I have misjudged, I would appreciate it if you pointed it out.

RPCs and properties are both affected by latency in the same way. Reliable RPCs can actually be considerably slower in poor network conditions, because they may need to be sent multiple times if they are dropped. Variable replication may take longer if your actor has a low NetUpdateFrequency, or if you are replicating lots of properties at once and other actors are taking priority (this often happens when spawning lots of actors, such as at level startup).

Reliables are particularly problematic because they guarantee call-order execution on the client (per-actor), and they must therefore be processed from a buffer. If a reliable RPC is dropped, it blocks all execution of subsequent reliable RPCs until it is successfully handled. If a client loses too many packets and overflows their reliable RPC buffer, they are instantly kicked from the server. There’s nothing you can do to prevent that, which is why reliables should only be used when absolutely necessary.

Ultimately if variables are taking a long time to replicate it’s because you don’t have the available bandwidth to send them all at once. Brute-forcing it via RPC might mask that issue in the short-term, but all it’s doing is saturating the network - and this will cause you bigger problems down the road as the project grows.

You also still have the problem where you are sending redundant data to all clients who already have up-to-date properties. Variable replication does not have that issue.

The cost of property comparisons is a cost that is only incurred by the Server, and adopting the push-model essentially eliminates all unnecessary property comparisons. It’s a very easy system to adopt as you can see from the comments here:

https://github.com/EpicGames/UnrealEngine/blob/master/Engine/Source/Runtime/Net/Core/Public/Net/Core/PushModel/PushModel.h

Push-Model is still experimental, so if you don’t want to adopt it, an older technique that Epic recommends is to reduce the NetUpdateFrequency to something very low for actors which do not update often, and call ForceNetUpdate() when you change a property. I’ve used this approach a lot before push-model came along to reduce server overhead, and the game remains perfectly responsive.
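Sketched out, that technique looks something like this (again using the class name from your snippet; the `AddItem` mutator is a hypothetical example):

```cpp
AInventoryBase::AInventoryBase()
{
    bReplicates = true;
    // Consider this actor for replication at most ~once per second,
    // keeping the per-tick comparison cost low.
    NetUpdateFrequency = 1.0f;
}

void AInventoryBase::AddItem(/* ... */)
{
    // ... mutate the replicated Container property on the server ...

    // Nudge the replication system so the change goes out immediately
    // instead of waiting for the next scheduled update.
    ForceNetUpdate();
}
```

This gives you the responsiveness you were after (changes go out on the very next net tick) without paying the full delta-comparison cost every frame.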

Replication graph is also another thing you should look into to reduce server overhead for replication, it’s more useful when you have high numbers of replicating actors and lots of client connections.


Ultimately, any multiplayer game made in Unreal will be using variable replication for state synchronisation. Personally I’ve done a lot of multiplayer, and no project I’ve worked on has ever had a case where it made sense to use RPCs instead. To be honest, most projects don’t have a single reliable multicast anywhere.

The golden rule of replication in UE4 is to use RPCs for transient events, and properties for persistent state. In some cases, it even makes more sense to use properties for events too (see ShooterGame for an example with the BurstCounter) - especially when they are firing often.

The engine has ***a lot*** of parameters you can tune to make property replication more efficient, all of which are better options than reliable multicasts. Just my two cents!


Thank you very much for the detailed answer. It has helped me a lot in deciding the direction of the future design.

Now I need to start cutting the code that is based on Reliable Multicast…

Thank you very much! :)