Tick behaviour

Is there any documentation of how exactly the engine calculates and moves from one state of the game simulation to another, aka ticks it? It's unclear to me which function(s) the engine uses to proceed from the current simulation state to the next.

There is AActor::Tick(float DeltaTime) in the documentation, but from its signature it looks like it is called every render frame, aka DeltaTime goes from 0 (previous gamesim state) to 1 (current gamesim state). So it should be used for rendering tasks, like visualizing something extremely fast with a FaNcY animation or drawing virtual particles on the screen (I'm sure there is some sort of render overlay that suits this better, but that's just an example).
But what does the engine use to compute all the gamesim-specific info (like collisions, movement, rotations, physics in general)?
Or is my assumption wrong that gamesim computation is, by default, frame independent (depends on time, not on video card performance)?

DeltaTime is the amount of time elapsed from the last frame to the current frame, and it is inside Tick that you do things like movement and rotation.

To ensure that your movement is frame independent you should use DeltaTime in your calculations, as in the following example:


FVector AmountToMoveThisFrame = MovementDirection * MovementSpeed * DeltaTime;

The delta-time you see will be the time elapsing in-game for that tick, e.g. at 30 fps you’d expect to see a fairly consistent ~0.033s delta-time.
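
For context, a Tick override using that formula might look roughly like this (a minimal sketch; AMyMovingActor, the direction and the speed are placeholders, not anything from the thread):

// A minimal sketch showing how that line typically sits inside a Tick override.
void AMyMovingActor::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    const FVector MovementDirection = FVector::ForwardVector; // assumed direction
    const float MovementSpeed = 300.f;                         // units per second

    // Scaling by DeltaTime turns "units per second" into "units this frame",
    // so the distance covered per second is the same regardless of frame rate.
    const FVector AmountToMoveThisFrame = MovementDirection * MovementSpeed * DeltaTime;
    AddActorWorldOffset(AmountToMoveThisFrame);
}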

The actual engine tick is kind of complicated, there are multiple “tick groups” which are mostly organized before/during/after the parallel physics update - you can perform work in any one of these groups, configure different components/actors to tick before/after one another, set up parallel tasks which span more than one group etc. Physics “ticks” once per frame but does some sub-stepping, since yeah it tries to give consistent results independent of the frame rate. Still, results may subtly vary based on performance, which is a fact of life, but you can configure a fixed delta-time in project settings if for some reason you want that.
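
To make a couple of those knobs concrete, here is a rough sketch (AMyActor and OtherActor are illustrative names; OtherActor is assumed to be an AActor* member pointing at the actor we depend on):

// Sketch: pick a tick group and make one actor tick after another.
AMyActor::AMyActor()
{
    PrimaryActorTick.bCanEverTick = true;

    // Choose where in the frame this actor ticks relative to the physics update.
    PrimaryActorTick.TickGroup = TG_PrePhysics;   // or TG_DuringPhysics, TG_PostPhysics, ...
}

void AMyActor::BeginPlay()
{
    Super::BeginPlay();

    // Make this actor tick only after the actor it depends on has ticked.
    if (OtherActor)
    {
        AddTickPrerequisiteActor(OtherActor);
    }
}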

Ehhhh… so Tick is called every render frame, but you're supposed to implement gamesim-specific actions in it, right?! That sounds… strange???

I mean, relying on rendering is inconsistent, and not only in terms of performance. Obviously, there can be 30 or 16 or 34 frames per second. But it would also cause desynchronization that breaks multiplayer in FPS games, because someone can play on a supercomputer cooled with an arctic glacier and someone else on a Pentium 3 with an ATI card instead of a video card. It all sounds eeeeeeee…
Yes, you can use delta time to negate performance inconsistency between different machines, but why not make a dedicated "void change_any_play_specific_data_here()" method for that purpose?

For example, in relatively old games with open sources, like Serious Sam or Doom (1993), every action (regardless of whether it comes from a player, a mob or a script) that changes the state of the game simulation (character movement, interaction with objects, damaging something, anything really) happens only at specific timestamps, for example every 1/35 or 1/25 of a second. It's consistent, you don't need to rely on the machine's performance, and the speed of the game can easily be changed by changing a "tick rate" variable, among other things.
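
Roughly, the pattern those games use looks like this (a minimal sketch, not UE code; StepSimulation and RenderFrame are just placeholders):

#include <chrono>

// Placeholders for the real work; only the loop structure matters here.
void StepSimulation(double /*FixedStep*/) { /* advance gameplay by one fixed slice */ }
void RenderFrame()                        { /* draw the current state */ }

int main()
{
    using Clock = std::chrono::steady_clock;

    const double FixedStep = 1.0 / 35.0;   // e.g. Doom-style 35 ticks per second
    double Accumulator = 0.0;
    auto Previous = Clock::now();

    for (int Frame = 0; Frame < 1000; ++Frame)   // stand-in for "while the game runs"
    {
        const auto Now = Clock::now();
        Accumulator += std::chrono::duration<double>(Now - Previous).count();
        Previous = Now;

        // Run as many fixed simulation steps as real time allows;
        // rendering happens at whatever rate the machine can manage.
        while (Accumulator >= FixedStep)
        {
            StepSimulation(FixedStep);
            Accumulator -= FixedStep;
        }

        RenderFrame();
    }
}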

Or am I stuck in the 90s and nowadays it's okay to attach everything to FPS???
Or am I missing something in UE's architecture?

You keep referring to game states and gamesim? What do you mean by these? Frankly, your understanding of this seems to be completely wrong.

I know that "game state" is used to specify the mode of game operation (on an actual level, loading a game, saving a game, transitioning to the next level, etc.).
I used it here to mean "a snapshot of the current game simulation, i.e. actor A at position X with velocity Z, actor B at position J with velocity K".

If you rendered the game at 60 fps but the simulation only ran at 30 fps, you would literally end up with a duplicated frame every other frame, giving you the same result in the end (as if you were rendering at 30 fps), so why not just clamp your game at 30 fps then?

I can see reasons to do the opposite, like rendering at 15 fps with the simulation at 30 fps, for instance in a simple chess game where you want to reduce resource usage while no player makes a move. I know that Unity has something like that (called OnDemandRendering), but I don't know if Unreal has something similar.

About your network use case, each game has its own requirements for that: some roll out custom deterministic game code, some never trust that the client is fully in sync and always do the validation on the server side with rollbacks on the client side, and some don't need real-time sync at all (like a simple chess game).

I'm still confused about what you mean by "gamesim" and "game state". There is no "game state" in the context of a tick function. DeltaTime is the time between the previous frame and the current frame; it's not going from 0 to 1. The engine itself isn't doing anything with objects in Tick. All it does is call the function for you. Actors can override this function to implement logic that makes sense for them. This can be anything you want, anything that makes sense for your project. Also, Tick doesn't have to be called every frame: each actor can set its own tick rate.

Quite the opposite. In the 90s everything was attached to frame rate (which is why you have to use limiters in DOSBox and the like to prevent people from flying through walls with a simple tap of a button). Now everything is based on TIME.

If Client A runs the game at 120 FPS and moves at a speed of 100 units per second for 0.5 seconds you get:

DeltaTime (~0.00833 s, which is 120 Hz) x 100 x 60 frames (120 Hz * 0.5) = 49.98 ≈ 50 units

If Client B runs the game at 30 FPS and moves at a speed of 100 units per second for 0.5 seconds you get:

DeltaTime (~0.0333 s, which is 30 Hz) x 100 x 15 frames (30 Hz * 0.5) = 49.95 ≈ 50 units


So Client A and B are pretty close, despite having wildly different performance locally. Client B is going to have slightly larger jumps in numbers because the DeltaTime is larger (and thus the updates less frequent) - but Client A will have a smoother update flow and less janky interpolation.

The server itself could be running at 120Hz (Valorant does this I believe), or as low as 15 - 20Hz (common in MMOs). As long as the server update rate is greater than the rate at which traffic comes in to the server from a specific client, thus ensuring a timely response to the client - you’re golden on your end. Internet Round Trip Time (RTT) will always be greater than your client + server refresh rate anyway (20ms is FOREVER to a local machine, but insanely fast for a net connection).

I think it’s your concept of “Game State” that is confusing you. Actors don’t continually tell the server “Hey, I’m here at XYZ”. Rather they send input or small updates to the server when the user does something they are interested in, the server then validates/moves the character and reports that delta to other clients. It can also batch those updates up so they are independent of the local framerate (if you only need 10Hz for movement updates - then that’s fine, clients can interpolate the rest).
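
To illustrate the batching point, a replicated actor's update frequency can be capped independently of the local frame rate. A sketch, assuming the long-standing AActor members bReplicates and NetUpdateFrequency (newer engine versions may steer you toward setter functions, and the class name is illustrative):

// Sketch: cap this actor's replication to roughly 10 updates per second,
// regardless of how fast the server/client tick locally.
AMyReplicatedActor::AMyReplicatedActor()
{
    bReplicates = true;        // this actor replicates to clients
    NetUpdateFrequency = 10.f; // at most ~10 Hz on the network; clients interpolate the rest
}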

In the end, Client A, and B will have similar experiences - Client A may be a bit smoother and purely client actors will benefit from smaller delta times (cloth, particles, foliage, etc), but all the “core” stuff related to gameplay will be available to Client A/B at the same rate and at roughly the same experience (enemies will appear at the same location at roughly the same time, treasures will spawn at the same time in the same place, etc).

Network programming (especially in UE) is all about tricks and how much you can get away with bandwidth (network traffic is costly and takes time) / update rate (server CPU time is at a premium) without breaking the game or the user experience.

EDIT: I should point out that in my math above I'm assuming a constant frame rate; however, that doesn't matter. If the frame rate went higher or lower for a frame or 2 (or 20, or 200), the delta time would rise or fall with it and we'd still end up at the same point in the end, because time is our scalar/base.
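
A quick way to see that with numbers (plain C++, nothing engine-specific; the frame times are made up but deliberately sum to 0.5 s):

#include <cstdio>

int main()
{
    const float Speed = 100.f;                                             // units per second
    const float FrameTimes[] = { 0.008f, 0.033f, 0.016f, 0.100f, 0.343f }; // wildly uneven, sums to 0.5 s
    float Distance = 0.f;

    for (float DeltaTime : FrameTimes)
    {
        Distance += Speed * DeltaTime;   // same per-frame formula as in the examples above
    }

    std::printf("Moved %.1f units over 0.5 s\n", Distance);   // prints 50.0
}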

Clamping ticks to 30 was among many workarounds for hardware limitations. With 120 Hz to 144 Hz desktop monitors there is no reason to limit FPS. As long as your calculations include delta time, the game should look similar on different hardware.

My problem is not with FPS per se. It can be 30, 60 or >9000, I really don't care.
What I want is to clamp the updating of all level-related data (actors, scripts, etc.) to X times per second, so it would update every 1/X seconds. It looks like there is no…

Ohhhhh… In seconds, I guess?!
If yes, how was I supposed to figure this out from

without the specification "in seconds"? Or do I just not know how to read documentation properly?

In any case, it's quite easy then, like

The code you just posted makes no sense. If you want to call something X times per second, you can either set the actor’s tick interval or you can use a Timer. Using a timer is better because it gives you more control, but setting the tick interval is much easier. So for example, if you set it to 0.1f, the actor’s Tick function will be called 10 times per second.



// .cpp file, your actor's constructor.
AMyActor::AMyActor()
{
    PrimaryActorTick.TickInterval = 0.1f;
}
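
For comparison, the Timer approach mentioned above could look roughly like this (a sketch; DoPeriodicUpdate is an illustrative function and PeriodicUpdateHandle is assumed to be an FTimerHandle member declared in the actor's header):

// .cpp file, same actor. Runs DoPeriodicUpdate 10 times per second,
// independent of how often the actor ticks or renders.
void AMyActor::BeginPlay()
{
    Super::BeginPlay();

    GetWorldTimerManager().SetTimer(
        PeriodicUpdateHandle,          // FTimerHandle member (assumed)
        this,
        &AMyActor::DoPeriodicUpdate,   // function to call on every timer fire
        0.1f,                          // interval in seconds
        true);                         // loop
}

void AMyActor::DoPeriodicUpdate()
{
    // Gameplay logic that should run at a fixed 10 Hz goes here.
}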


The code actually makes sense (besides the syntax errors on the first line lol), it's just that there are better ways to do it, as you mentioned.

So, yes, what you posted is a common and reasonable approach. The actors still Tick() but only do some heavier-weight calculation at an interval. As mentioned above, that can be done with timers, but it amounts to the same thing and I find the timer-in-Tick() code a bit more readable. Either way is fine. :cool:
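
For reference, the timer-in-Tick() variant could look something like this (a sketch; HeavyWorkInterval, TimeSinceHeavyWork and DoHeavyWork are illustrative members, not engine API):

// The actor still ticks every frame, but the expensive update only runs
// every HeavyWorkInterval seconds (e.g. 0.1 s -> 10 times per second).
void AMyActor::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    TimeSinceHeavyWork += DeltaTime;
    if (TimeSinceHeavyWork >= HeavyWorkInterval)
    {
        TimeSinceHeavyWork -= HeavyWorkInterval;
        DoHeavyWork();   // placeholder for the interval-based gameplay update
    }

    // Cheap per-frame work (smooth visuals, interpolation, etc.) can stay here.
}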

In those old games you had massive input lag with slow machines or high ping, aka Lock Step. Modern games (FPS) are built around client-side prediction and responsiveness. The downside is that pings need to be low to reduce desync.