Non-character movement replication

Bit of a bump that one, but I can add some information now that I’ve actually gone through and done this.

So essentially, the problem with using PhysX for networked physics is that replaying old moves is pretty much impossible unless you want to save the entire physics scene state every time your movement code ‘ticks’. For replay / reconciliation to work properly, you have to re-simulate and replay the entire scene, which is costly. I gave up on doing my movement with PhysX a while ago now.

There isn’t any one-size-fits-all prediction system, and frankly the engine’s Character Movement class is unbelievably messy. A lot of the code in CMC could be separated out into different classes or static managers (especially the timestamp code, which is a minefield in itself), and a lot of the functions could easily be templated to send different input to the server. It’s a relic from the UT / Gears days I expect, and while it’s had some improvements since the engine was released, it’s very one-dimensional. For my hovertank game (which needs predictive physics-like movement), I had to essentially create my own version of the CMC, but along the way I tried to separate common code into core classes so that the movement components themselves stay quite small. This took about 3 months (on and off) all said and done, but it now works for my hover vehicles and will work for turreted tracked vehicles etc. as well.
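To make the pattern concrete: the core of the saved-move / replay idea (roughly what CMC does internally with FSavedMove_Character) is small enough to sketch engine-agnostically. Everything below is hypothetical plain C++ to illustrate the shape of it, not actual UE API:

```cpp
// Minimal, engine-agnostic sketch of the saved-move pattern used for client
// prediction. All names here are hypothetical, not Unreal's.
#include <cstdint>
#include <deque>
#include <functional>

struct FSavedMove
{
    float TimeStamp = 0.f;    // client time when the move was produced
    float DeltaTime = 0.f;
    float ForwardInput = 0.f; // clamped to [-1, 1] before sending
    float SteerInput = 0.f;
};

class FMoveHistory
{
public:
    void SaveMove(const FSavedMove& Move) { PendingMoves.push_back(Move); }

    // Server acknowledged everything up to AckTimeStamp: drop those moves,
    // then replay the remainder on top of the server-corrected state.
    void AckAndReplay(float AckTimeStamp,
                      const std::function<void(const FSavedMove&)>& ReplayMove)
    {
        while (!PendingMoves.empty() && PendingMoves.front().TimeStamp <= AckTimeStamp)
            PendingMoves.pop_front();
        for (const FSavedMove& Move : PendingMoves)
            ReplayMove(Move);
    }

    size_t NumPending() const { return PendingMoves.size(); }

private:
    std::deque<FSavedMove> PendingMoves;
};
```

The point the post makes is exactly why this works for kinematic movement but not PhysX: `ReplayMove` only has to re-run *your* movement code from the corrected state, whereas a physics scene would need the whole world rewound and stepped again.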

There are some caveats to my current solution:

  • At the moment, there’s no ‘smoothing’. For smoothing, you need a separate collision primitive and visual mesh. My vehicles are skeletal meshes using Physics Assets for collision (the shape and size vary so widely that no default primitive will fit). Currently I don’t have a way of separating the visual mesh from the collision in the skeletal mesh itself, so until I do, that’s a problem. The code for smoothing is done, but until I can move the collision and visual mesh separately it won’t do anything.

  • Input values have to be clamped between -1.f and 1.f. This is because the input is quantized when sent over the network, and for that to work, you need to know the range. This is perfect for me, because mouse delta doesn’t change how fast my vehicles can move or rotate.

  • This is the major one (and is also a factor for character movement): because you’re not using PhysX for movement, you have to write your own collision rejection. This is bloody easy for a Character with a capsule collision, but a minefield for me with floaty vehicles. This would be the major advantage of using PhysX, since its collision code is probably also faster than anything I could write. My collision code is also pretty sh*t at the moment.
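For what it’s worth, the smoothing idea from the first caveat is simple once the visual mesh *can* be offset from the collision root: when a correction snaps the root, remember where the mesh was relative to the new root, then decay that offset to zero over a few frames. A hypothetical, engine-agnostic sketch (half-life decay is just one choice of falloff):

```cpp
// Sketch of mesh smoothing after a network correction, assuming the visual
// mesh can be offset independently of the collision root. Hypothetical names.
#include <cmath>

struct FVec3 { float X = 0.f, Y = 0.f, Z = 0.f; };

class FMeshSmoother
{
public:
    // Called when a server correction teleports the collision root:
    // remember where the mesh *was*, relative to the new root position.
    void OnCorrection(const FVec3& OldRoot, const FVec3& NewRoot)
    {
        Offset = { OldRoot.X - NewRoot.X,
                   OldRoot.Y - NewRoot.Y,
                   OldRoot.Z - NewRoot.Z };
    }

    // Exponentially decay the visual offset toward zero each frame,
    // halving it every HalfLife seconds.
    void Tick(float DeltaTime, float HalfLife = 0.1f)
    {
        const float Alpha = std::exp2(-DeltaTime / HalfLife);
        Offset.X *= Alpha; Offset.Y *= Alpha; Offset.Z *= Alpha;
    }

    FVec3 Offset; // add this to the root position when placing the visual mesh
};
```

The collision primitive snaps instantly (so gameplay stays accurate), while the player only ever sees the mesh glide back into place.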

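On the input clamping point: knowing the range is fixed at [-1, 1] is what lets you pack each axis into a single byte. A rough sketch of one possible quantization scheme (illustrative only, not UE’s actual net serializer):

```cpp
// Quantize a [-1, 1] input axis to one byte for the network, and back.
// Mapping [-1, 1] -> [0, 254] makes the midpoint 127 exactly zero.
#include <algorithm>
#include <cmath>
#include <cstdint>

inline uint8_t QuantizeAxis(float Value)
{
    const float Clamped = std::clamp(Value, -1.f, 1.f);
    return static_cast<uint8_t>(std::lround((Clamped + 1.f) * 127.f));
}

inline float DequantizeAxis(uint8_t Packed)
{
    return static_cast<float>(Packed) / 127.f - 1.f;
}
```

Both client and server run `DequantizeAxis` on the packed value before simulating, so prediction and the server's authoritative move consume bit-identical input.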

Replicating PhysX objects with prediction is difficult, basically. I sank about 4-5 months into trying to do it and never got anywhere. The combination of having to replay an entire scene, trying to sync timestamps from different players with variable frame times and connection speeds, and everything else was just too much work for one person. Some people have had luck doing this, but have so far kept their code to themselves (perfectly reasonable of course, but unfortunate all the same).

Remote clients (non-local clients) just rely on replicated movement. You can’t do prediction for every client on every other client, because the bandwidth consumption becomes insane.

Unreal Tournament has always used client-authoritative physics and brute-force server corrections. Who knows, maybe UT4 will be different and we’ll finally see a more templated network model!