Networked Physics with PhysX

Good stuff. Be interesting to see how it develops.

In that case, the results should be even better if you switch PhysX to a fixed timestep, right?

Correct, it provides more reliable physics in general.
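
For reference, a fixed timestep on top of a variable frame rate is usually done with the classic accumulator pattern. This is a minimal sketch, not engine code; `FixedStepper` and `StepPhysics` are illustrative names, and the callback stands in for whatever advances the PhysX scene:

```cpp
#include <functional>

// Classic fixed-timestep accumulator pattern. StepPhysics is a stand-in
// for whatever advances the scene (e.g. a simulate + fetch step).
class FixedStepper
{
public:
    explicit FixedStepper(float InFixedDelta) : FixedDelta(InFixedDelta) {}

    // Consumes a variable frame delta and performs zero or more fixed steps.
    // Returns the number of fixed steps performed this frame.
    int Tick(float FrameDelta, const std::function<void(float)>& StepPhysics)
    {
        Accumulator += FrameDelta;
        int Steps = 0;
        while (Accumulator >= FixedDelta)
        {
            StepPhysics(FixedDelta); // every step uses the same delta
            Accumulator -= FixedDelta;
            ++Steps;
        }
        return Steps;
    }

    // Fraction of a step left over, usable for render interpolation.
    float Alpha() const { return Accumulator / FixedDelta; }

private:
    float FixedDelta;
    float Accumulator = 0.0f;
};
```

Because every step uses the same delta, the simulation becomes deterministic with respect to input, which is what makes replaying moves on the server and client comparable in the first place.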

I’m joining this discussion as I really need this myself. Currently I have a temp solution working which basically mimics the behavior of Rama’s attempt discussed here: [Video] Player-Controlled Replicating Physics Movement, Simulating Physics! - C++ Gameplay Programming - Unreal Engine Forums!
Sadly this is not enough for me as I need server authority as well as prediction of remote clients / simulated proxies.

@TheJamsh: I just cloned your repo and had a quick look at it. It seems like a good point to start from. Are you still working on this? I plan to dive into it in the next days / weeks and it would be good to know if the repo is the latest version of this project.

I am still working on it but only when time permits (I haven’t made any public changes since I uploaded source, but I have made a few).

There are some significant issues right now, a few of which are highlighted in code. Replay doesn’t work properly (the entire scene skips forward, so non-local objects flicker and jitter), and there are a few timing issues because some of the code for getting the server delta time is incorrect. My aim is to try to use the new Immediate Mode PhysX API in 4.16 for replay. I also want to avoid engine source changes, which means it will never be as good as calculating physics yourself the way Character Movement does.

As far as I can tell, Blue man is in the same position as I am with regard to replay. I probably will not look at doing full dynamic-scene replay, because I can’t see any way it can be both scalable and reliable. Static-scene replay might be possible.

Good to hear that the project didn’t die.

Regarding the replay, I think there are two possible ways to resolve the flickering / jitter:
The first is the one you pointed out in the code: simulating the replay in a second PxScene. This might be complex, since it requires all the static geometry to be copied into the replay scene in order to work properly.
The second possibility would be to mark all dynamic actors kinematic during the replay. The problem with this approach is that we might get collision issues, since other (normally moving) objects stand still during the replay.

Doing all of this without any engine changes might be difficult. I don’t restrict myself to that, as I already work with a source-built engine (Linux dev, so I don’t even have the choice).

I’ll play around with it for a bit, and if I achieve something worth sharing I will open a Pull Request.

Yeah, I use a source build internally, but not everybody does, so for this to ever be a viable option for 90% of users it needs to work without engine changes (or at least work in future engine versions if PRs are accepted). Replaying in a separate scene is certainly possible (and has to be done in some form to work properly), but there is a mountain of issues to face when it comes to implementation.

The first thing I want to avoid is duplicating the scene and the static geometry in it. That’s not a very scalable approach when it comes to large / complex scenes: it’s extremely hard to ensure that both static scenes are always in sync, and copying each frame is probably too expensive. It should be easy to access that static geo in the main PxScene without duplicating it. If the static scene changes during a replay then, to be honest, I think the end user is just going to have to suck up the correction spike and deal with it.

Then there is the dynamic actor issue. Scaling to a wide variety of projects is a big issue:

  • It’s totally unreasonable to store replay moves for every dynamic object. That won’t scale beyond a handful of objects, and even then it will be expensive.
  • Storing moves for nearby dynamic actors is fine in theory, but again it doesn’t scale very well. There will be edge cases where a dense population of dynamic objects makes this non-performant. There are also actors outside that radius affecting actors inside it. This starts to become a tangled minefield.
  • Storing moves for a single player means you always know how many moves you need to store, so you can preallocate memory which is much faster. If you’re storing for a varying amount of objects you’re going to be reallocating a lot, and this can lead to fragmentation over time. It also makes accessing and replaying moves much slower.
  • Replaying moves is expensive already, thanks to all the transform updates and collision sweeps.
  • Even if you do store moves for all dynamic objects, you have to replay and resolve in exactly the same order and manner as the simulation did, otherwise replays won’t be in sync. At that point, you may as well have not bothered.
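
The single-player move store argued for above can be sketched as a preallocated ring buffer. This is an illustrative sketch, not engine code; `FSavedMove` here is a stand-in struct, and the fields are placeholders:

```cpp
#include <array>
#include <cstddef>

// One saved move; the fields are illustrative placeholders.
struct FSavedMove
{
    float Timestamp = 0.0f;
    float ForwardInput = 0.0f;
    float SteerInput = 0.0f;
};

// Fixed-capacity ring buffer: storing moves for a single player means the
// move count is bounded, so everything is preallocated up front and no
// per-frame heap allocation (or fragmentation) occurs.
template <std::size_t Capacity>
class TMoveHistory
{
public:
    void Add(const FSavedMove& Move)
    {
        Buffer[Head] = Move;
        Head = (Head + 1) % Capacity;
        if (Count < Capacity) ++Count;
    }

    // Drop every move at or before AckTimestamp (already confirmed by the server).
    void AckUpTo(float AckTimestamp)
    {
        while (Count > 0)
        {
            const std::size_t Tail = (Head + Capacity - Count) % Capacity;
            if (Buffer[Tail].Timestamp > AckTimestamp)
                break;
            --Count;
        }
    }

    std::size_t Num() const { return Count; }

private:
    std::array<FSavedMove, Capacity> Buffer{};
    std::size_t Head = 0;
    std::size_t Count = 0;
};
```

With a variable number of tracked objects you would instead need one of these per object (or a dynamically sized store), which is exactly where the reallocation and bookkeeping costs come from.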

Then there is the issue of smoothing.

  • In order to make the replay seamless, smoothing MUST be implemented - otherwise it’ll be unplayable and twitchy.
  • Smoothing means that collision has to be separated from visual representation, and you need to be able to access those transforms independently. For example, you want to attach the camera to the visual mesh - not the collision.
  • Currently I don’t believe there’s an easy way to move a Skeletal Mesh’s visual mesh relative to its physics mesh. My idea would be to blend based on a ā€˜Bone’, but that has issues of its own.
  • Also, how do you then handle attached components unless you smooth the bones too?

I’m hoping that by using the Immediate Mode PhysX API, I can circumvent some of these issues. My plan is to duplicate the replayed object, and make the actual object blend to its position continuously for smoothing. I’m then hoping that I can replay the duplicated object in its immediate-mode scene and it will simulate similarly to how the main PxScene does, provided it also checks against static collision. It won’t be perfect, but with decent smoothing it might be playable and responsive.
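
The continuous blend idea can be sketched as error smoothing: when a correction snaps the physics body, absorb the jump into a visual offset and decay it over time. This is a 1-D illustrative sketch with made-up names (`FSmoothingState`, `BlendRate`), not an engine API; a real version would smooth a full transform:

```cpp
#include <cmath>

// Keeps the rendered mesh from teleporting when the physics body is corrected.
struct FSmoothingState
{
    float VisualOffset = 0.0f; // visual position minus physics position

    // Call once when a correction arrives: keep the mesh where it was on
    // screen by absorbing the snap into the offset.
    void OnCorrection(float OldPhysicsPos, float NewPhysicsPos)
    {
        VisualOffset += OldPhysicsPos - NewPhysicsPos;
    }

    // Call every frame; returns where to draw the mesh.
    float Tick(float PhysicsPos, float DeltaTime, float BlendRate /* 1/sec */)
    {
        // Framerate-independent exponential decay of the error toward zero.
        VisualOffset *= std::exp(-BlendRate * DeltaTime);
        if (std::fabs(VisualOffset) < 0.01f)
            VisualOffset = 0.0f; // snap once the error is imperceptible
        return PhysicsPos + VisualOffset;
    }
};
```

This is also why collision and visual representation have to be separable, as noted above: the camera and attachments want the smoothed position, while the simulation keeps using the corrected one.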

It’s a complicated problem, which is why 9/10 times I’d tell someone to go and make a custom version of the Character Movement Component and calculate collision / movement themselves. I’ve done it for a couple of games now and it works nicely (but is a big task), but for my next title I need the fast solving that PhysX provides.

I’m also making an FPS / RTS hybrid - so I have the issue of scaling to large amounts of objects and need first-person accuracy / responsiveness. In my case I’m probably willing to sacrifice replay stability with dynamic collisions rather than try some crazy approach for replaying dynamic objects.

This post got larger than I expected…

My plan is to create a separate physics scene for the static objects, and for the dynamic objects I plan to dynamically add and remove objects based on their distance from the actor. Right now I am trying to find out how to create a physics scene that has all the properties of a standard UE4 physics scene; for some reason my physics scene has no friction.

I can see the flaws of this approach but I just want to get something done.

I have a small update for Client side prediction component.
Main rewind/replay system is now running in a separate Physics scene so it doesn’t mess with other dynamic objects in the main scene.

Awesome work! Are you still managing to pull this off in a launcher build? Also is that a PhysX vehicle or your own vehicle movement component?

Certainly looks promising. Perhaps this kind of thing is finally within our grasp…

Thanks, this is still running in a launcher build. It is on my vehicle movement component but it can easily be applied to any physics object with some minor modifications to the source code.

So what you did here is this:

Server side
You have PhysX engine that is always running and it is authority over all clients.

Client side
You have the main PhysX scene that manages the whole world, and a secondary scene that you use only for correction.
You send each move (like player acceleration or braking) from the client to the server.
When the server and client are not in sync, a correction is sent. The correction data (location, velocity, and angular velocity) is set on the secondary scene, all moves not yet acknowledged by the server are performed there, and the resulting final position is applied to the main scene.

Is this the approach you used?
In any case, nice work!

I have a main physx scene that ue4 creates by default and another one that I created just for correction (rewind/replay).
First, the client sends a request to the server with its current local time, for example at a frequency of 30 Hz. When the request arrives on the server, it sends back a confirmation with the correct location, rotation, velocity, and so on; the server always has the correct information. When the correction arrives on the client, it searches through a history buffer for the entry with the closest timestamp and deletes everything before that. It then stores the transform and velocities and prepares the actor for rewind/replay: the actor is removed from the main physics scene and added to the manually created one. After the rewind/replay is finished, it is transferred back to the main scene and the input is restored. After all that, a smoothing algorithm comes in and interpolates the transform and velocities based on the error.
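
The history-buffer step described here (find the closest timestamp, drop everything before it) might look roughly like the following. This is a sketch with illustrative types (`FMoveSnapshot` is a stand-in for the stored transform and velocities), not the actual component code:

```cpp
#include <cmath>
#include <cstddef>
#include <deque>

// One stored history entry; Position stands in for transform + velocities.
struct FMoveSnapshot
{
    float Timestamp = 0.0f;
    float Position = 0.0f;
};

// Finds the stored move whose timestamp is closest to the server
// correction and trims the buffer so History.front() is the move the
// correction applies to. Returns false if there is nothing to replay.
bool PrepareReplay(std::deque<FMoveSnapshot>& History, float CorrectionTime)
{
    if (History.empty())
        return false;

    std::size_t Closest = 0;
    float BestError = std::fabs(History[0].Timestamp - CorrectionTime);
    for (std::size_t i = 1; i < History.size(); ++i)
    {
        const float Error = std::fabs(History[i].Timestamp - CorrectionTime);
        if (Error < BestError)
        {
            BestError = Error;
            Closest = i;
        }
    }

    // Delete everything before the matched entry; the correction replaces
    // it, and the remaining moves are re-simulated on top of that state.
    History.erase(History.begin(), History.begin() + Closest);
    return true;
}
```

After this, the remaining entries are the unacknowledged moves that get replayed in the secondary scene on top of the corrected state.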

I hope this somewhat explains it :slight_smile:

Yes, perfectly! But I have a few questions.

Do you create the scene manually (I mean, do you add each static body in the world yourself), or do you use a function already present in the engine that lets you initialize the scene with all static actors?

Do you manage the execution of the secondary scene using the immediate mode API to play all the replay moves back?

When you have the corrected transform you don’t instantly set it on the main scene but instead pass it to your smoothing algorithm. OK, but how does that work? If you apply some interpolation (even for a few frames), doesn’t the scene become out of sync again?

There is a class in the engine that creates the scene in its constructor (FPhysScene), but you have to manually transfer actors between scenes. Another thing to note is that you have to create a new UWorld for tracing so it can use the new physics scene. Also, a PxActor can only exist in one PxScene, so you have to transfer all static objects just before the correction happens and transfer them back before the next tick/substep is called. You have to do it before the next tick because UE4 checks each BodyInstance to see which scene it is in; if it is not in the main scene, UE4 will crash. I’m not sure if there is a way to copy a PxActor so I can have an identical copy of it.

I manually call simulate on that scene, PxScene::simulate(float DeltaTime).
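
A replay driver over that manual call might look like the sketch below. The `StepScene` callback stands in for the blocking simulate-then-fetch pair on the correction scene, and `ApplyInput` for re-applying the recorded input of each move; all names here are illustrative, not engine or PhysX API:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// One recorded move's input; the field is a placeholder.
struct FReplayMove
{
    float Throttle = 0.0f;
};

// Steps the correction scene once per stored move with a fixed delta.
// In real PhysX code, StepScene would wrap the simulate + fetch-results
// calls on the secondary scene.
int ReplayMoves(const std::vector<FReplayMove>& Moves,
                float FixedDelta,
                const std::function<void(const FReplayMove&)>& ApplyInput,
                const std::function<void(float)>& StepScene)
{
    for (const FReplayMove& Move : Moves)
    {
        ApplyInput(Move);      // restore the input recorded with this move
        StepScene(FixedDelta); // advance the correction scene one step
    }
    return static_cast<int>(Moves.size());
}
```

The fixed delta per step matters: replaying with the same timestep the moves were recorded at is what keeps the re-simulation comparable to the original one.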

I am not adding force; I’m directly setting the transform and velocity. If the update rate is 30 Hz there is enough time to smoothly perform the corrections. I’m allowing a small percentage of error.

Ok got it.

Do you know where the main scene is constructed? And where the simulate function is called?

Not sure where it is constructed, maybe in PhysScene.cpp; I don’t know about simulate :stuck_out_tongue:

Based on the information in this thread and LOTS of reading of UE4 code (mainly CharacterMovementComponent) I have the first part as a proof of concept working. Once the Server sends correction data to the client, I can rewind and replay the PhysX scene. While the replay itself is a big dirty hack and other Clients jitter like crazy the correction itself seems to work. The problem (as TheJamsh already pointed out) is smoothing the correction. As soon as I add a decent amount of PktLag (~120ms) and some lag variation the corrections get extremely noticeable.

Could you explain a bit how you smooth out the corrections to achieve this super smooth behavior from your video? My attempts are all either too aggressive or too permissive which results in jitter or big errors.

I set the update rate to 30 Hz; 10 Hz is enough to perform the smoothing. Basically I never snap the actor to the corrected location: if the desync is within the margin of error, 0.5% of the correction is applied, and if the desync is more than 50% I apply from 40% to 70% of the correction. The correction percentage is lerped based on the desync amount.
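
The error-proportional correction described here can be sketched as a clamped lerp of the applied fraction. The exact thresholds and fractions below are illustrative stand-ins for the numbers mentioned, and the function names are made up:

```cpp
#include <algorithm>
#include <cmath>

// Maps desync amount (0..1, as a fraction of the allowed error range) to
// the fraction of the correction to apply this update. Small desyncs get
// a tiny nudge; large desyncs get a big, but still partial, correction.
float CorrectionAlpha(float DesyncAmount)
{
    const float MinDesync = 0.005f, MaxDesync = 0.5f; // 0.5% .. 50%
    const float MinAlpha  = 0.005f, MaxAlpha  = 0.7f; // applied fraction

    const float T = std::clamp((DesyncAmount - MinDesync) / (MaxDesync - MinDesync),
                               0.0f, 1.0f);
    return MinAlpha + T * (MaxAlpha - MinAlpha);
}

// One smoothing step: move a predicted value a fraction of the way toward
// the server-corrected value instead of snapping.
float ApplyCorrection(float Predicted, float Corrected, float DesyncAmount)
{
    return Predicted + (Corrected - Predicted) * CorrectionAlpha(DesyncAmount);
}
```

Because the actor is never snapped, small residual errors persist between updates, which is why a margin of allowed error is part of the scheme.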

Will do a test with a friend to see how it behaves in a real network environment.

Does this solution work for bulk physics actors? Or just player controlled vehicles and dynamic actors affected by the player?