Should we be using the Network Prediction Plugin or NetPhysics Predicted Physics?

Hi!

I’ve noticed that Mover has a few different backends available for use: one for the Network Prediction Plugin, one for NetPhysics, and one for standalone. I was wondering if there is any preference towards either of the networked backends.

As the default backend is NPP, I’d assume that would be the main focus, but I know that Lego Fortnite uses NetPhysics, which may mean more development time is being spent there. If there is a preference, should we be avoiding the other version?

The backends each have their own focus, and are not at parity with each other yet. So your choice will depend on the kind of game you’re making. Here’s a quick rundown:

Network Prediction backend: This is the most flexible of the backends, and where Mover started. It makes no assumptions about actor composition, so you aren’t limited to traditional upright single-capsule movement. You can choose a fixed-tick or independent-tick simulation (similar to CharacterMovementComponent’s networking). One of the biggest challenges is working with simulations that run very early in a game frame, before any of the game world’s tick phases run. This can make it difficult to integrate with other UE systems, and we haven’t solved all of these problems yet.

Network Physics backend: This makes use of UE’s Chaos Networked Physics system. It’s a good choice if your project calls for interaction with physics objects, whether networked or not. Mover support is largely limited to upright capsule characters so far. This backend is undergoing a lot of internal development, so we hope to expand the types of objects that can be simulated in the near future, and its performance and efficiency are steadily improving. If using networked play, you must use a fixed physics sim tick. For single-player, there is no ticking requirement.
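For reference, the fixed/async physics tick is configured in project settings; in DefaultEngine.ini that looks roughly like the following (setting names are from UE5’s UPhysicsSettings — verify them against your engine version):

```ini
[/Script/Engine.PhysicsSettings]
; Networked Chaos physics requires a fixed, asynchronous physics tick
bTickPhysicsAsync=True
; Fixed step size in seconds (e.g. ~33.3 ms for a 30 Hz sim)
AsyncFixedTimeStepSize=0.033333
```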

Standalone backend: This is for single-player games only, and has less overhead than the others. It breaks Mover-related work up into phases (input production, simulation, results application), with two goals: allowing some of that work to be performed asynchronously on worker threads, and allowing the phases to be ordered with dependencies on other systems like AI or animation. Like the Network Prediction backend, it makes no assumptions about actor composition.

Great to hear all of that, thanks Justin!

Hello PyroJimmy. I found this in:

void UChaosFallingMode::SimulationTick_Implementation(const FSimulationTickParams& Params, FMoverTickEndData& OutputState)
{
	//...

	// The physics simulation applies Z-only gravity acceleration via physics volumes, so we need to account for it here
	FVector TargetVel = ProposedMove.LinearVelocity - DefaultSimInputs->PhysicsObjectGravity * FVector::UpVector * DeltaSeconds;

	//...
}

Here, ProposedMove.LinearVelocity already has gravity applied in UChaosFallingMode::GenerateMove_Implementation(). Why is gravity applied again here? What are physics volumes? Is it correct to apply gravity twice?