Lack of Movement Components for Networking

If I understand it correctly, the only MovementComponents that work in multiplayer (client-input to server, replication, lag compensation, client prediction…) are for characters and wheeled vehicles.

Not even the floating pawn or spectator movement components work in multiplayer.

The amount of code added between PawnMovement and CharacterMovement/VehicleMovement is sooooo much that I really think you should add classes in between that implement the networking functionality without being too specialized.

Like:
PawnMovement -> KinematicMovement(Network-ready) -> CharacterMovement
PawnMovement -> DynamicMovement(Network-ready and Physics-simulated) -> VehicleMovement

Then we could inherit our own movement components from those intermediate classes.

It says UE4 comes with built-in lag compensation, client-side prediction etc., but I have to code all that myself (and do the research) just to make a spectator with replicated movement?

Well, a “movement component” for Root Motion would be nice, but what is really lacking is a much cleaner, more in-depth set of “concepts” as to practical solutions applied to more complex animation and character control systems.

From time to time design concepts do show up in the Twitch casts, but Epic does seem to avoid definitive endorsement of any given system; my guess is that this is to avoid suggesting a preference for a given system within the context of the design.

The problem is that design advice given within a context is usually void of words like “I think” or “this might work” in favor of “I know this will solve your problem right now”.

To put things into context, it’s not hard to agree that Assassin’s Creed has one of the more complex character control systems, and I found this vid very useful.

Sooooo, as a best-of-both-worlds approach, deciding when to take control “away” from the player, as say in a QT-driven event, and give it back when the event is done could solve a lot of the networking issues.

A long way of saying, I guess, that a “concepts in design” section in the forums would be nice, where the rule would be “this might work” rather than putting something out there as “this will work” :wink:


Hi,

I really like this idea because it’s a big problem that only the character class supports movement prediction at the moment.
My idea was to create an actor component in BP that you can assign to any actor - no matter if it’s a controllable pawn, a physics object etc. - and that does the job for you. But it’s in a very early state, and currently I’m trying to understand the theory of movement prediction in detail to get this done, which is not the easiest task if you are not a “real” programmer.

So I personally would really LOVE to see prediction functionality for every kind of actor in Unreal, officially from Epic’s side, because it’s simply crucial for multiplayer games and would open the doors for really fancy games :slight_smile:

→ Please Epic, integrate it! ←

Best regards,
Daniel

I just don’t understand why something that essential isn’t supported already… I mean, it’s something that many people need, so why should everyone have to reinvent the wheel all over again?

I found this topic. I haven’t really read it yet. But if a couple of pros are struggling with the matter, I’m probably not even going to try and code my own lag compensation. Of all the things that a game engine could handle internally, that is probably the most important for me.

Seriously, for physics-simulated pawns, you have a transform, linear and angular velocities and input that applies forces to the body. Shouldn’t a game engine provide a general solution to sync these across the network? And for kinematic movement it would probably be very similar, only simpler.
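To sketch what I mean (just illustrative Python, none of these names are engine API), the complete dynamic state such a general solution would have to replicate isn’t much:

```python
import dataclasses

@dataclasses.dataclass
class PhysicsSnapshot:
    """Everything a physics-driven pawn would need replicated.
    Field names are illustrative, not actual UE4 properties."""
    position: tuple          # world-space location
    rotation: tuple          # orientation, e.g. a quaternion (x, y, z, w)
    linear_velocity: tuple
    angular_velocity: tuple

@dataclasses.dataclass
class Body:
    """Stand-in for a client-side rigid body."""
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0, 1.0)
    linear_velocity: tuple = (0.0, 0.0, 0.0)
    angular_velocity: tuple = (0.0, 0.0, 0.0)

def apply_hard_update(body, snap):
    # A hard server update: overwrite the client body's state wholesale.
    # Plain replication amounts to this; prediction and smoothing are all
    # about avoiding applying it directly to the rendered state.
    body.position = snap.position
    body.rotation = snap.rotation
    body.linear_velocity = snap.linear_velocity
    body.angular_velocity = snap.angular_velocity
```

Of course, the hard part isn’t storing these four fields, it’s deciding when to send them and how to correct the client without snapping.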

So after learning that you can simulate latency for network PIE, I realised I was wrong about the vehicle movement. It only looks smooth (more or less), because there’s no client prediction at all, just replicated movement from the server.

Anyway, since every actor already has a “ReplicatedMovement” member, wouldn’t that be a good place to implement an optional, universal way to deal with prediction and smooth correction?

I played every Unreal title since the original Unreal back in the 90s.

There have always been slowly moving and instantly moving projectiles (although I believe Unreal still had input lag). In UT2004 there were non-wheeled hover vehicles. And yet, so many years later, there is still no engine support for properly syncing these things :confused:

I’d really like to get some information regarding this from the Epic staff. Movement prediction is not a trivial task but it seems that it’s not a thing that’s completely impossible to integrate. Here’s an interesting article about it in general:

http://www.gabrielgambetta.com/fpm1.html

Daniel

Just wanted to bump this thread. Hoping this is ok :slight_smile:

Having worked on things like this for quite some time now (particularly for my hover tank project), I can tell you that it’s virtually impossible to implement a ‘generic’ movement component which supports prediction and reconciliation. You have to build it into the style of movement you’re creating. You’ve probably seen the code by now, but Character Movement Component’s prediction code is intertwined with the movement code itself; it’s nearly impossible to make it work for any style of movement because there are too many variables to consider.

Wheeled Vehicle Movement also doesn’t do Prediction / Reconciliation because it’s too expensive, and wheeled vehicles are physics-driven, which makes it considerably harder. CMC works so well because the movement is near-enough deterministic and not physics-driven. Even Unreal Tournament doesn’t do prediction for vehicles; it allows for some difference between the client and server simulation, but it’s mostly just hard position updates. In 99% of situations that’s good enough. Networked physics is hard as hell. Even Rama’s early implementations were client-authoritative. I think he’s recently pulled off server-authoritative, but he hasn’t posted code yet, so I can’t be sure.

Honestly speaking from experience, networked movement is a minefield and every implementation is very specific. Welcome to hell!

I will just have to believe that it’s virtually impossible, but I don’t understand why. I could only repeat what I wrote in post #4. When all physics-driven pawns “only” need to sync transforms, velocities and forces (resulting from input), and both client and server know what to do with specific input events, then I still don’t know what could be so very different about it. Again, I’m not saying it isn’t virtually impossible; I just don’t get it…

I thought about some syncing I want to code for all physics-driven objects (although I don’t have any idea yet how complicated it’s going to be). I want to save state information (transform, velocity…) for all frames in the past second (or so) on the client. Then, when the server detects a significant error and sends a correction, I want to use that correction to calculate the error at the time it was sent. And this delta I want to smoothly (gradually) apply to the current state. I don’t know yet if the smoothing should only be applied to the visual part (with the physics corrected immediately). But then, all this would probably be very inaccurate without also “repeating and adjusting the moves” since the correction. And as I believe to have read here somewhere, this won’t be possible even if I can control the PhysX timestepping somehow.
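A minimal sketch of that idea (toy Python, 1D positions, all names made up by me):

```python
import collections

class StateHistory:
    """Ring buffer of (timestamp, position) pairs covering roughly the past
    second, so a late server correction can be compared against what the
    client actually did at that moment. Purely illustrative, 1D for brevity."""
    def __init__(self, max_frames=60):
        self.frames = collections.deque(maxlen=max_frames)

    def record(self, timestamp, position):
        self.frames.append((timestamp, position))

    def error_at(self, timestamp, server_position):
        # Find the saved client state closest to the server's timestamp
        # and return the positional error at that moment.
        _, client_pos = min(self.frames, key=lambda f: abs(f[0] - timestamp))
        return server_position - client_pos

def blend_correction(position, error, alpha=0.1):
    # Smoothly fold a fraction of the error into the current state each
    # frame instead of snapping the whole delta at once.
    return position + error * alpha
```

The open question from above still stands: without replaying the moves made since the correction, the blended result is only an approximation.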

So… I don’t know, I guess something visually pleasing and moderately accurate would be fine with me, as long as you don’t see objects jittering around, constantly “rewinding” or being way off from where they are supposed to be.

I imagine that syncing transforms, velocities and forces for physics-driven actors is enough only in some simple cases, when their state is influenced infrequently. If we take something like a tank simulation or a racing game, there could easily be half a dozen different forces affecting the vehicle (which we want to apply in sub-stepping, at 60 Hz or higher). Those forces can be summed up to minimize data transfer, but the point is that it is very hard to do prediction, as it’s not a linear system. If you do a prediction just on the basis of velocity, then that’s exactly how you are going to get jittering. On top of that, anything moving fast enough will bring issues with “avoidance” and “targeting” code, like avoiding a collision with an airplane or trying to shoot one down.

In UE4’s default movement, it basically sends hard updates of both position and velocity to the client every time the object is moved on the server. Believe it or not, the default implementation has a few problems of its own. The primary one being that if an object is completely still, then receives a collision that occurs ONLY on the client side - it will move on the client and not on the server. It’s not until the server simulation makes the object move again that the client comes back into sync via the replicated position / velocity data. This of course only occurs in physics-simulating actors, which can receive collisions and move as a result. Now, you could force replication updates to go out every frame to fix the sync problems - but then the client won’t be able to move, because it will be receiving updates from the server, then trying to move, then receiving old updates etc.

Anyway… In a latent situation (aka any online connection) - the position / velocity updates are received on the client after they have happened. Although velocity is also updated on the client for smoother transitions between updates, the simulation can still vary enough even over the course of a few frames to result in a noticeable position update. Unfortunately, since UE4 isn’t deterministic, you can’t sync time-stamps, simulations or anything, so we’re really forced to have these updates regardless of the networking solution you use.

To get around the noticeable position update (aka snapping), you can take the approach that character movement does and ONLY hard-update the position of the collision capsule. At that point, you calculate the offset from the old to the new position, and apply that as a relative offset to the actual rendered mesh (and of course its children, which usually include the camera). You then work out roughly how long you have until the next packet arrives, and over that time you interpolate the relative transform of the mesh back to the collision capsule. If the time between updates is long enough and there’s a huge difference, you get the ‘rubber banding’ effect. Unfortunately there’s not really a way around that, so it’s a ‘pick your poison’ kind of deal.
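As a toy example of that mesh-smoothing trick (illustrative Python, not engine code; `blend_time` here stands in for the estimated time until the next packet):

```python
def blend_back(offset, dt, blend_time):
    """After the collision capsule is hard-snapped, the rendered mesh keeps
    a relative offset (old position minus new position) and eases it back
    to zero over roughly the time until the next packet should arrive.
    Returns the new offset after one frame of duration dt."""
    if blend_time <= 0.0 or dt >= blend_time:
        return 0.0  # blend window exhausted: mesh is back on the capsule
    return offset * (1.0 - dt / blend_time)
```

Only the visual mesh moves during the blend; the collision capsule (and thus the actual simulation) already sits at the corrected position.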

Going back a bit - I mentioned the simulation differences. So in order to keep rubber banding and snapping to a minimum, you have to send the input data from the client to the server, and run the same data on the server. You sync them up with a rough time-stamp (which is hard enough as it is), and the server checks whether its local simulation differs from the client’s. If it does, then it sends the hard position update and the same timestamp back to the client. The reason you have to re-simulate on the client is because by the time it receives this update, it has already progressed in the game. If you applied the position update directly at the current client time, the client would be jumping back to where it was previously; it would always be behind and would be snapping constantly. This is where prediction comes in.

To get around that, therefore, you have to replay all the moves from the received time-stamp back up to the most recent move, then figure out where you think you will be on the server at the current time. You re-simulate the input and calculate the positions again (but don’t send the recalculations to the server), and it’s rinse-and-repeat. Essentially, because you don’t know what kind of input data you need or what the simulation for movement actually is - there’s no one-size-fits-all solution. Character Movement Component and its prediction stuff is written very specifically for that style of movement. Since everything in the character can be inferred from its current velocity (like its rotation etc.) - they don’t actually send input data, they send the ‘acceleration’ vector instead to save a bit of bandwidth (which they can do, since the properties that govern the movement are either constant or replicated themselves). The acceleration vector is also modified by the navigation stuff for AI etc., so they get all that replicated essentially for free.
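The replay step might look roughly like this (a hand-wavy Python sketch; `simulate_move` is a stand-in for whatever movement logic the component runs, which is exactly the part that can’t be generalized):

```python
def simulate_move(state, accel, dt):
    # Stand-in for the real movement step; a real component would run its
    # full movement logic here (collision, friction, etc.).
    pos, vel = state
    vel += accel * dt
    pos += vel * dt
    return (pos, vel)

def reconcile(acked_state, acked_timestamp, saved_moves):
    """Rewind to the last server-acknowledged state, then replay every saved
    move newer than that timestamp to predict where the client should be now.
    The replayed results are NOT re-sent to the server."""
    state = acked_state
    for timestamp, accel, dt in saved_moves:
        if timestamp <= acked_timestamp:
            continue  # the server's correction already covers this move
        state = simulate_move(state, accel, dt)
    return state
```

The whole scheme only works if `simulate_move` produces (near enough) the same result on client and server for the same inputs, which is why physics-driven movement is so much harder than the kinematic CMC case.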

But yeah… anything more complex and it becomes a minefield. Went a bit overboard on this explanation but hopefully this explains the complexity a little :stuck_out_tongue: I tried for months and months to synchronize a physics-simulating pawn, but it just never would get close enough. I’m eager to see Rama’s latest solution which I believe is server-authoritative - but in the meantime I’m rewriting mine to use a similar system to Character Movement, calculating all collision impulses and movement in the movement itself and not relying on the physics engine at all.

EDIT: Just quickly adding, this was (and still is) my attempt to get a server-authoritative networked physics component working with history states and prediction etc. I still can’t even get my timestamps to sync correctly, which is a minefield in itself.

So am I understanding this correctly? My current project is a space game where you fly around in space ships, and all movement is driven by angular and linear physics. Are we saying that if I convert this to multiplayer, it won’t work without coming up with my own replicated movement code?

It may work, but it’s more a question of how well it will work. If there’s latency, the client won’t have instant input; their keyboard input will be delayed by the round-trip time to the server, which most of the time isn’t good enough.

You can go down the Client-Authoritative route, but this can then result in cheating / hacking.

Well, movement in all video games is based on some form of physics, and the thing that makes networked gameplay possible is for the clients to predict where to be X number of ms ahead of their current position, so in a way, what you see on the local client has in a lot of cases already occurred. The most common form of physics in video games is that the player is in a constant state of falling, so if the clients were left on their own, players would hit the ground at different times and in different places if their position were not updated by the movement component’s capsule. So in a space game, where velocities are not constant but governed by the laws of physics, it’s impossible to predict where the player will be.

This does create an argument that the use of root motion has a lot of benefits for networked gaming: as far as position and orientation in world space go, velocity, position, and direction are contained within the animation as a constant data set, so the netcode would not have to predict where the player will be, because relative position could be calculated a lot more accurately.

Well, as history has proven, a client-server-client connection has yet to solve the problem of online cheating and never will, so maybe it’s time to try something else? As is, UE4 does a lot of stuff client-side to make sure what you shoot at is actually where you had aimed and not where it was.

Well, if you make it that way, sure, but you can go down either route - it’s up to the programmer coding the weapons. Mine follow ShooterGame’s example and are server-authoritative, whereas UT uses client authority with a mixture of server-side validation (which is what you need for such a high-speed game). UT’s implementation is actually really **** good, but the code is tough to follow.

Prediction with root motion is just as hard, if not more so; you still need to simulate input server-side with timestamps. There’s less room for calculation errors, of course, but it will vary and spread over time like any simulation. In fact, if you’re using floats at all, it technically can never be truly deterministic.

You just gave me an idea for physics-driven vehicles. What if one builds a controller which has more linear behavior? Kind of like what I do on my hovercraft - the user specifies the direction they want to go and a PID controller steers the vehicle. There are no “sudden” movements in such a setup, and it should in theory hide lag and make it easier to predict movement.
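Something like this is what I mean (a toy Python PID, not engine code):

```python
class PID:
    """Minimal PID controller: the player only supplies a target (e.g. a
    desired heading) and the controller produces a smooth steering output,
    avoiding the sudden state changes that are hard to predict over the
    network."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, current, dt):
        error = target - current
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Proportional + integral + derivative terms summed into one output.
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Replicating the low-frequency target instead of raw per-frame forces means the remote simulation has far less to disagree about, which is the whole point of the idea.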

Well, in theory one should be able to set “ignore root motion” but have the movement component respect the movement dynamics contained in the RM file. It’s more of a frame-of-mind thing as to design logic. Since root motion contains “data” that is “not” available when using in-place animation, how can that be used to replicate the behavior of hardware-based input devices? Does it really matter whether the input is coming from a data set or from a joystick?

The big difference is that with a data set, the client would know where the movement component will be X number of frames into the event, compared to what would have to be tracked in real time using in-place animation.

LOL well yeah but since when has anything to do with Netcode been a free lunch? :wink:

What is being overlooked, though, is the perception that you have to use one over the other, when there are opportunities where combining the two would be advantageous - if and when taking control away from the player, or just driving it like any other replicated event, is all that is needed. The only place I can see a movement component being required is anything relating to locomotion, where everything else is event-driven and button mashing.

Overall though, using a movement component is a no-brainer, but what is lacking is a proper “character animation” primer, as even the UE4 documentation is not all that clear as to practical use.