I reviewed the code and it did have issues. I've solved the mentioned things in the past on my UE4 fork (https://github.com/0lento/UnrealEngine/tree/4.15-FixedTimestep/Engine). The reason I never made a PR for it was that I wasn't happy with some of the approaches there (they worked well enough for the test project but wouldn't have been clean enough code for Epic's codebase, IMO). Just implementing interpolation itself is a huge task. I know your PR doesn't do it, but I feel a feature like this should implement it, as it's a crucial part of the feature.
As I've written in a GitHub comment, thanks to your feedback I will improve the code with a fail-safe.
However, even if the implementation is not the best, I think it's still a good starting point. If we don't get this feature into the release code, it will never be supported or improved.
I feel quite the contrary, because for Epic to approve the PR, it has to be good. Ori described how to make the change years ago; he could have made a quick implementation back then if that would have been up to Epic's standards, but it was never done (I'd guess mainly because a proper implementation takes more time; I know my fork took more than a week of work to get even to the state it's currently in).
There's also an issue if you put an implementation into the engine and want to change it afterwards, as Epic wants to keep backwards compatibility -> the initial implementation can't get in the way.
It's way more readable now, but you still have an issue with the SubTime math. You introduce a possible division by zero when there are no substeps:
SubTime = DeltaSeconds / NumSubsteps;
and you also do unnecessary math, as you already know that SubTime is simply the fixed step in every situation except when you have zero substeps. This is how I solved that on my fork:
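A minimal sketch of the idea (placeholder names, not the exact code from the fork):

// Minimal sketch, not the exact fork code: guard against zero substeps and
// skip the division entirely. FixedDeltaTime / NumSubsteps are placeholders.
float ComputeSubTime(int32 NumSubsteps, float FixedDeltaTime)
{
    if (NumSubsteps <= 0)
    {
        // No substeps this frame: nothing to simulate, and no division by zero.
        return 0.0f;
    }
    // With a fixed timestep every substep is simply the fixed step,
    // so there is no need to derive it from DeltaSeconds at all.
    return FixedDeltaTime;
}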
It helps with determinism for sure, but there are still some related issues that are hard to solve. In my testing, if you truly fix the timestep for PhysX in UE4, you get a deterministic sim, at least on the same computer (I'd assume it would be the same on all computers that run the same CPU architecture, at least). But… that only applies when you start the whole sim at the same point.
In multiplayer this isn't true, and even resetting the rigidbody to the exact same location during the sim (even if you do it directly from PhysX) will somehow throw the determinism off on collisions. It's enough that the values are off by the last digit (the single-precision floats used by UE4/PhysX are accurate to roughly 7.2 decimal digits) and then the objects can bounce in a completely different way. I never found the core reason why PhysX can't set the exact same values (if you run set and get a few times, they start to drift). Also, since it's multiplayer we are talking about, you'd usually want to quantize the transforms you send, which is way worse than just losing the last digit. I'm not sure you can even get multiplayer collisions fully deterministic with PhysX unless you sacrifice some simulation fidelity or use lots of bandwidth.
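To put some numbers on that, here's a tiny standalone example (values picked arbitrarily, nothing UE4-specific) comparing the error from quantizing a position against the error from losing only the last float digit:

#include <cmath>
#include <cstdio>

int main()
{
    const float Position = 1234.56789f;   // some world-space coordinate in cm

    // Quantize to a 0.01 cm grid, the kind of rounding a replication path
    // might do to save bandwidth.
    const float Quantized = std::roundf(Position * 100.0f) / 100.0f;

    // Smallest representable step (ulp) at this magnitude:
    // what "losing the last digit" costs.
    const float Ulp = std::nextafterf(Position, INFINITY) - Position;

    std::printf("quantization error: %.9f\n", std::fabs(Position - Quantized));
    std::printf("float ulp here    : %.9f\n", Ulp);
    return 0;
}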
And since we are talking about determinism… about that enhanced determinism flag: it doesn't make things more precise, it only guarantees that the physics engine's internal solving order between rigidbodies stays the same if you add more rigidbodies to the sim while it is running. If you start the sim the same way on each computer, that setting probably does nothing (other than consume more CPU).
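For reference, on the PhysX side this maps to a scene flag (PhysX 3.4-style API), roughly:

#include "PxPhysicsAPI.h"

void ConfigureSceneFlags(physx::PxSceneDesc& SceneDesc)
{
    // Keeps the internal solving order between rigidbodies consistent even when
    // actors are added mid-simulation; it does not add any precision.
    SceneDesc.flags |= physx::PxSceneFlag::eENABLE_ENHANCED_DETERMINISM;
}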
The issue is that while the math is right, since we are talking about floats, that value will have precision issues if you run the project at low framerates where you get a lot of substeps. Since fixed timesteps don't need that math anyway, you can just assign the same SubTime to the last substep as well.
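Just to show the kind of drift I mean (arbitrary values, standalone example): if the last substep is derived as "whatever time is left over" instead of simply being assigned the fixed step, the float rounding typically leaves it a few ulps off:

#include <cstdio>

int main()
{
    const float FixedStep    = 1.0f / 60.0f;
    const int   NumSubsteps  = 20;                      // e.g. a long hitch frame
    const float DeltaSeconds = NumSubsteps * FixedStep;

    // Derive the last substep as the leftover after the other substeps.
    float Remaining = DeltaSeconds;
    for (int i = 0; i < NumSubsteps - 1; ++i)
    {
        Remaining -= FixedStep;   // every subtraction rounds, so error accumulates
    }

    std::printf("last substep via leftover: %.10f\n", Remaining);
    std::printf("fixed step               : %.10f\n", FixedStep);
    return 0;
}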
Also worth mentioning: I strongly feel the enhanced determinism flag should go under advanced settings. It does more harm than good and shouldn't be shown in the main list like that.
There's nothing else I found on the UE4 side that hurt the determinism in my test project besides the things mentioned. I compiled and debugged the PhysX code a bit to find out where it messes up the transforms when you set them, but there were just too many places it could have happened (PhysX does many math operations on the transform before it gets stored anywhere internally), and there could also be some other internal PhysX cache scenario that causes things to get altered. I'm talking about not being able to reset the transform to the exact same value it had between physics steps without it starting to drift slightly. This is probably not a huge issue, but more something to keep in mind if you wonder why results are not 1:1.
All in all, just running PhysX at fixed timesteps guarantees that the simulation itself runs the physics equations the same way, so you won't get physics objects that move faster on other clients (which could happen with variable timesteps).
To be honest, I am quite disappointed that UE4 out of the box (actually PhysX 3.x) is not really deterministic, but I am also aware that what may seem trivial on the surface can actually be very complicated.
Btw, I searched around and got this:
from here:
I have a custom UE4 compiled on my computer and wonder: if I provide a BP node called 'Reset physx', will it work? It would do what it says above - reinitialize the PhysX SDK and also the scene.
Edit: The other approach I may take is to modify the PhysX code where all the physics calculations are done - convert all of them from float to double, and then store the results back as float. This way the calculation is more precise, and the precision issues would be solved once and for all. I know this sounds a bit too optimistic, lol, but I think it would definitely result in better determinism…
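To sketch just the precision part of that idea (a hypothetical standalone example, not actual PhysX code):

#include <cstdio>

int main()
{
    // Accumulate many small contributions, the kind of repeated math a solver does.
    const float Step = 1.0f / 60.0f;
    float  FloatSum  = 0.0f;
    double DoubleSum = 0.0;

    for (int i = 0; i < 1000000; ++i)
    {
        FloatSum  += Step;   // rounds to float precision on every add
        DoubleSum += Step;   // rounds to double precision, far finer
    }

    // Storing the double result back as float still keeps much more of the
    // precision than carrying float through every intermediate step.
    std::printf("all-float accumulation  : %.4f\n", FloatSum);
    std::printf("double, stored as float : %.4f\n", static_cast<float>(DoubleSum));
    return 0;
}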
I think your PR will have a much greater chance of being accepted if you split it into two focused branches: one focusing on fixed-step physics with interpolation (which solves the problem of physics varying with framerate, and is a problem that can be solved in a reasonable time), and keep the other one for determinism experiments.
This PR was born to implement a fixed delta time for physics simulations. And as 0lento said, improving determinism would require changes to Nvidia's PhysX code.
So my intent is only to create code that fixes the simulation delta time. I should probably change the name of the PR, shouldn't I?
But there is. There is a huge difference between making a racing game where your physics always run at the same speed with the same handling, and one where some other computer runs the same track one minute faster (which btw can happen if you just use default UE4 physics with AddForce). I'm sure anyone who played such a game would agree that the first case is more deterministic. Of course, if we are talking about a fully deterministic simulation, then it's really black and white.
When I mentioned 'better deterministic', I meant behavior that stays consistent enough over longer simulation times. And I wouldn't want to claim 100% determinism, because it seems too hard to achieve, i.e. 'perfect code' (you have to perfectly understand the code you are changing) plus identical hardware etc., if you get the drift. I would be happy with a 'hybrid approach' where better deterministic behavior is combined with some kind of correction after a certain interval.
Ah, yes, in that situation (ensuring a consistent experience across different systems and platforms) you could call it "better determinism" or "consistent simulation". If the goal is lockstep/rollback multiplayer, that requires either perfect determinism or a mechanism that can "correct" small discrepancies between the simulations.