I mark the property I want to replicate as COND_InitialOnly in the GetLifetimeReplicatedProps function, and after it has been replicated for the first time, I use a Reliable Multicast to keep it synchronized.
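A simplified sketch of what I mean (FItemContainer and the names here are illustrative, not my exact code):

```cpp
// Header (sketch)
USTRUCT()
struct FItemContainer
{
    GENERATED_BODY()

    UPROPERTY()
    TArray<int32> ItemIds;

    UPROPERTY()
    TArray<int32> ItemCounts;
};

UCLASS()
class AMyCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    // Large container state - meant to replicate once when the actor
    // channel opens, then be kept in sync manually.
    UPROPERTY(Replicated)
    FItemContainer Container;

    virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override;
};

// Source file
#include "Net/UnrealNetwork.h"

void AMyCharacter::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);

    // Replicate only when the channel is first opened.
    DOREPLIFETIME_CONDITION(AMyCharacter, Container, COND_InitialOnly);
}
```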
The advantage of this strategy is that it avoids repeated synchronization work for properties that are not frequently replicated, and it also makes event handling easier.
The problem is that I don't know whether Unreal Engine really optimizes DOREPLIFETIME_CONDITION entries marked with COND_InitialOnly.
I haven't been able to figure out exactly where in the engine the delta evaluation of the property occurs.
The "Container" member in the code snippet above is a fairly large struct containing multiple arrays, and if this struct is delta-compared on every replication tick like any other always-replicated property, the performance loss will be quite large.
I want to know whether properties flagged with COND_InitialOnly really are evaluated only when the actor first becomes relevant.
There's no need to use reliable multicasts - and doing so is probably causing more problems than it's solving.
Properties are only considered for replication when they change. There are no such synchronisation requests; the server will only send a property to a client if the last-acked value from that client doesn't match the current value.
You can use Rep Notifies to get event callbacks when properties replicate. They are processed locally and therefore have no network cost, and you can even force them to be called when the received property hasn't changed (for example, if you are modifying replicated properties locally client-side for prediction or some such).
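For example, a minimal sketch (the class and property names are placeholders):

```cpp
// In the actor's class declaration:
UPROPERTY(ReplicatedUsing = OnRep_Health)
float Health;

UFUNCTION()
void OnRep_Health();

// In GetLifetimeReplicatedProps - REPNOTIFY_Always forces the callback to
// fire even when the received value matches the locally-predicted one:
DOREPLIFETIME_CONDITION_NOTIFY(AMyCharacter, Health, COND_None, REPNOTIFY_Always);

void AMyCharacter::OnRep_Health()
{
    // Runs locally on each client when Health replicates - no network cost.
    UpdateHealthBar(); // placeholder for your own event handling
}
```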
Using multicasts means you have to maintain state synchronisation yourself - this won't work well with network relevancy, join-in-progress, etc. It also means that each time you call that multicast, you are broadcasting the entire property to every client - even though they may already have the latest state.
Unreal will also send the delta of properties where possible. If you have a struct and only one or two properties within the struct have changed, then only those properties will be sent. Arrays are less efficient by default, but you can use the FFastArraySerializer system to only send changed array elements (see the sketch below). TL;DR - it is far more efficient to use the engine's built-in variable replication.
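A rough sketch of the fast-array setup (the item/list names are illustrative):

```cpp
#include "Engine/NetSerialization.h"

USTRUCT()
struct FInventoryItem : public FFastArraySerializerItem
{
    GENERATED_BODY()

    UPROPERTY()
    int32 ItemId = 0;

    UPROPERTY()
    int32 Count = 0;
};

USTRUCT()
struct FInventoryList : public FFastArraySerializer
{
    GENERATED_BODY()

    UPROPERTY()
    TArray<FInventoryItem> Items;

    bool NetDeltaSerialize(FNetDeltaSerializeInfo& DeltaParms)
    {
        // Only added/changed/removed elements go over the wire.
        return FFastArraySerializer::FastArrayDeltaSerialize<FInventoryItem, FInventoryList>(Items, DeltaParms, *this);
    }
};

template<>
struct TStructOpsTypeTraits<FInventoryList> : public TStructOpsTypeTraitsBase2<FInventoryList>
{
    enum { WithNetDeltaSerializer = true };
};
```

On the server you then call MarkItemDirty() on an element after changing it (or MarkArrayDirty() after removals) so the serializer knows what to send.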
In 4.25 (I think it's in now), there is also a new push-model system that lets you explicitly mark properties as "dirty" at the game-code level. This gives you fine-grained control over when properties replicate, while still relying on the existing variable replication system in the engine.
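In rough terms it looks like this (a sketch against the 4.25-era API - push model also needs enabling, e.g. via the net.IsPushModelEnabled cvar):

```cpp
#include "Net/UnrealNetwork.h"
#include "Net/Core/PushModel/PushModel.h"

void AMyCharacter::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);

    FDoRepLifetimeParams Params;
    Params.bIsPushBased = true; // opt this property into the push model
    DOREPLIFETIME_WITH_PARAMS_FAST(AMyCharacter, Health, Params);
}

void AMyCharacter::SetHealth(float NewHealth)
{
    Health = NewHealth;
    // The property is now only considered for comparison after being
    // explicitly marked dirty here.
    MARK_PROPERTY_DIRTY_FROM_NAME(AMyCharacter, Health, this);
}
```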
As for COND_InitialOnly - those properties will only be considered for replication when the actor channel is opened. Actor channels are opened when actors become relevant to a connection - so if your actor leaves relevancy range, then returns, the property will be sent again. This is because when an actor leaves relevancy, it is fully destroyed on the client - as far as the client is concerned, that actor doesn't exist.
They will also be sent again if the actor has been dormant on a connection, and is then woken from dormancy.
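Dormancy is driven from game code on the server - roughly:

```cpp
// Stop considering this actor for replication once its state has settled.
SetNetDormancy(DORM_DormantAll);

// After changing replicated state, flush so the change still gets sent
// (the actor can then go dormant again):
FlushNetDormancy();

// Or wake it fully if it will be changing frequently again:
SetNetDormancy(DORM_Awake);
```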
There are several reasons why I chose reliable multicast delegates over the built-in Property Replication.
The first is responsiveness. Typical property replication is evaluated on a pre-specified (sometimes dynamically changing) cycle, which can delay the client receiving changes from the server.
Of course, in many cases a delay of 0.1 seconds is short enough, but I think it is advantageous to notify clients of sensitive information, such as the character's HP status, as soon as possible.
The next reason is the overhead of delta evaluation. When a property is marked as Replicated, a delta evaluation of the entire property data must be performed every replication cycle.
Even if the amount of data transmitted over the network is small, the entire data must be iterated over to compute the delta. So it's a bit scary to entrust large chunks of data, such as an inventory, to the built-in replication system.
An inventory, for example, usually doesn't change very often, but clients need to be notified when it does, and it is usually large.
By entrusting it to the built-in replication system, you perform unnecessary delta evaluations on huge data while it remains unchanged; however, if you slow down the cycle, responsiveness decreases.
So I thought it would be desirable for these classes to utilize COND_InitialOnly and Reliable Multicast.
In this case, if I synchronize only the initial state through the built-in system, and then send only commands that change specific data to clients through Reliable Multicast, I can control much of the synchronization myself.
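For example (SetCount and OnInventoryChanged are illustrative helpers of mine):

```cpp
UFUNCTION(NetMulticast, Reliable)
void MulticastSetItemCount(int32 ItemId, int32 NewCount);

void AMyCharacter::MulticastSetItemCount_Implementation(int32 ItemId, int32 NewCount)
{
    // Runs on the server and on every client: apply the same change
    // everywhere and fire the gameplay event directly.
    Container.SetCount(ItemId, NewCount);
    OnInventoryChanged.Broadcast(ItemId, NewCount);
}
```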
For event handling, I also found it more intuitive to invoke the value change itself as a function, rather than having the client evaluate the replicated value and react to it.
This is based on my personal understanding of Unreal Engine. If I have misjudged anything, I would appreciate it if you pointed it out.
RPCs and properties are both affected by latency in the same way. Reliable RPCs can actually be considerably slower in poor network conditions, because they may need to be sent multiple times if they are dropped. Variable replication may take longer if your actor has a low NetUpdateFrequency, or if you are replicating lots of properties at once and other actors are taking priority (this often happens when spawning lots of actors, such as at level startup).
Reliables are particularly problematic because they guarantee call-order execution on the client (per actor) - and they must therefore be processed from a buffer. If a reliable RPC is dropped, it will block all execution of subsequent reliable RPCs until it is successfully handled. If a client loses too many packets and overflows their reliable RPC buffer, they are instantly kicked from the server. There's nothing you can do to prevent that - which is why reliables should only be used when absolutely necessary.
Ultimately, if variables are taking a long time to replicate, it's because you don't have the available bandwidth to send them all at once. Brute-forcing it via RPC might mask that issue in the short term, but all it's doing is saturating the network - and this will cause you bigger problems down the road as the project grows.
You also still have the problem where you are sending redundant data to all clients who already have up-to-date properties. Variable replication does not have that issue.
The cost of property comparisons is only incurred by the server, and adopting the push model essentially eliminates all unnecessary property comparisons. It's a very easy system to adopt, as the comments in the engine's PushModel.h explain.
Push-Model is still experimental, so if you don't want to adopt it, an older technique that Epic recommends is to reduce NetUpdateFrequency to something very low for actors that do not update often, and call ForceNetUpdate() when you change a property. I used this approach a lot before push-model came along to reduce server overhead, and the game remains perfectly responsive.
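A minimal sketch of that approach (the actor and members are placeholders):

```cpp
// In the actor's constructor: consider it for replication at most
// once per second by default.
NetUpdateFrequency = 1.0f;

// Server-side setter: push an update out this frame instead of waiting
// for the next scheduled one.
void AInventoryActor::SetItemCount(int32 ItemId, int32 NewCount)
{
    ItemCounts[ItemId] = NewCount;
    ForceNetUpdate();
}
```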
Replication Graph is also something you should look into to reduce server replication overhead; it's most useful when you have high numbers of replicating actors and lots of client connections.
Ultimately, any multiplayer game made in Unreal will be using variable replication for state synchronisation. Personally, I've done a lot of multiplayer work, and no project I've worked on has ever had a case where it made sense to use RPCs instead. To be honest, most of those projects don't have a single reliable multicast anywhere.
The golden rule of replication in UE4 is to use RPCs for transient events, and properties for persistent state. In some cases it even makes more sense to use properties for events too (see ShooterGame for an example with the BurstCounter) - especially when they fire often.
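From memory, that pattern looks roughly like this:

```cpp
// A counter incremented on every shot. Clients don't care about the value
// itself - the change arriving is the event.
UPROPERTY(Transient, ReplicatedUsing = OnRep_BurstCounter)
int32 BurstCounter;

void AShooterWeapon::OnRep_BurstCounter()
{
    if (BurstCounter > 0)
    {
        SimulateWeaponFire();       // play local FX/audio
    }
    else
    {
        StopSimulatingWeaponFire();
    }
}
```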
The engine has ***a lot*** of parameters you can tune to make property replication more efficient, all of which are better options than reliable multicasts. Just my two cents!