Cache hits and send in batches for Server-Side Rewind?

Hello. I’m doing a server-side rewind algorithm for a multiplayer shooter. I would like a couple of the automatic weapons to fire quite quickly, potentially faster than the net update. I’m sending an RPC when the fire button is pressed and released, but in between that time, cosmetic automatic fire effects happen locally client-side and server-side, based on a fire timer that resets with each shot. The hit target destination (an FVector_NetQuantize) is sent from client to server via an unreliable RPC each shot as well. I’m not too concerned with the shots being 100% accurate between these updates, as they’re purely cosmetic and I’m server-side rewinding when the client sends a score request after hitting a player on their side.

However, since some of the weapons are firing quickly, if lots of shots are scored, there could be a lot of score request RPCs headed serverward. So I had the idea of only sending an RPC each net update. The first time a player gets a hit, the client would send the first score request, with the hit target (FVector_NetQuantize), a pointer to the hit character (ACharacter*), and the server-synced hit time (float), for the server-side rewind algorithm. Then the client could wait for the server to respond (the server’s sending a multicast RPC in return anyway, to inform the other clients about the shot and play fire effects). Until the server’s multicast RPC comes back, rather than spamming the server with more score request RPCs, any score hits could be cached, and once the multicast RPC gets back, send an array of hit requests. A hit request looks like:

USTRUCT(BlueprintType)
struct FHitScanHit
{
	GENERATED_BODY()

	UPROPERTY()
	FVector_NetQuantize100 HitLocation{};

	UPROPERTY()
	float HitTime{ 0.f };

	UPROPERTY()
	ACharacter* HitCharacter{ nullptr };
};

Now I know that the net amount of data sent is the same, if not more since it’s now a TArray of USTRUCTS, and the bandwidth is really what matters. If you send too much data, you send too much data, right? My question here is a matter of timing. Which is better, if either? Spam lots of RPCs to the server faster than it can receive them, and leave it to catch up when the player stops firing, or send fewer RPCs that contain potentially more data. If I cache the hits and send them in batches, I could at least throttle the amount of data sent by capping off the array (3 or 4 hits, for example). If you need to cache too many hits, you’re too laggy to get scores anyway, and at least you’d get credit for the first 3 or 4 shots you made between net updates.
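Sketched out, the cache-and-flush policy above might look something like this. This is a minimal sketch with plain C++ stand-ins for TArray, ACharacter*, and the RPC plumbing; all of the names (HitRequestCache, OnServerAck, etc.) are made up for illustration:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for FHitScanHit; in-engine this would be the USTRUCT above.
struct HitScanHit {
    float X, Y, Z;        // quantized hit location
    float HitTime;        // server-synced hit time
    int   HitCharacterId; // stands in for ACharacter*
};

// Caches hits while a server round-trip is in flight, capped so a laggy
// client can never queue an unbounded burst.
class HitRequestCache {
public:
    explicit HitRequestCache(std::size_t MaxCachedHits) : Cap(MaxCachedHits) {}

    // Returns true if the hit should be sent immediately (no round-trip
    // pending); otherwise caches it, dropping anything past the cap.
    bool OnLocalHit(const HitScanHit& Hit) {
        if (!bAwaitingServerAck) {
            bAwaitingServerAck = true; // ServerScoreRequest(Hit) would go here
            return true;
        }
        if (Cached.size() < Cap)
            Cached.push_back(Hit);
        return false;
    }

    // Called when the server's multicast RPC comes back: flush the batch.
    std::vector<HitScanHit> OnServerAck() {
        bAwaitingServerAck = false;
        std::vector<HitScanHit> Batch;
        Batch.swap(Cached); // ServerScoreRequestBatch(Batch) would go here
        return Batch;
    }

private:
    std::size_t Cap;
    bool bAwaitingServerAck = false;
    std::vector<HitScanHit> Cached;
};
```

The cap is what bounds the worst case: a client that falls far behind simply loses shots past the limit instead of growing the batch without bound.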

Anyway, I appreciate any input. If anyone thinks this is viable or knows of a better way to handle rapid-firing weapons, I’d be grateful if you shared some insight. Is there a lesser of two evils between too many RPCs sent too rapidly and fewer, bigger RPCs?

Thanks.

My overall concern with your approach is “bundled damage”. Clients need to know about each hit as it happens so they can react to it. By bundling, what you’re actually building is a one-hit-kill event.

Second concern is that you are relying on the client to send “I hit on my screen” results to the server before the server scrutinizes and distributes damage. Even when you do this per shot in real time, it results in desynchronization events.

e.g. Player gets behind cover and receives damage, or ultimately dies.

These two concerns are huge in the fps community.

You should be firing locally (fakey), RPC’ing the server to fire (auth), and letting its simulation handle hits and damage calcs.

High ping players should have to “lead” shots so they hit on the server’s sim. They are the ones with bad connections, and they are far behind the server’s (authoritative) sim.

Your argument is against server side rewind, rather than the technicalities of bundling data.

The tradeoff here is whether you want to avoid the “running around the corner and dying” problem, along with the other damage-bundling issues you mentioned, which I agree are valid, versus a more player-responsive experience. Afaik game companies choose one or the other based on the compromises they are willing to make. Some games implement server side rewind and some don’t.

To implement it the way you suggest is valid, though it has its limitations. The weapon cannot fire faster than the update rate, which is not a constant. 0.1 seconds per shot results in some really noticeable, undesirable effects client-side. So for the faster weapons, the fire-timer approach is the route I chose. My slower weapons like rocket launchers are implemented the way you described.

My question however is more regarding the consequences of sending large data on the engine side rather than on user experience.

I’m just curious what happens when you push it to the limit. If you send too many RPCs too quickly, I know the queue can overflow, but is there a similar drawback/danger when it comes to RPCs carrying large amounts of data? I’ve attempted to send lots of data for debug purposes, and past a certain size the data just didn’t go through. Is that the worst that can happen?

Do game companies that implement server-side rewind bundle their data and send it periodically or do they just send RPCs as quickly as the shots are fired?

I have no issues with server-side rewind. Server-side is the only place I’d have it, in fact. I’m using it for my project. Tweaking update rates like NetServerMaxTickRate vs weapon tick, timers, net update frequencies, etc. is a pain but absolutely required. Increasing the bandwidth limit is also required.
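For reference, the knobs mentioned here are config-driven. An example of where they live in DefaultEngine.ini (the section names are the stock UE4 ones; the values are illustrative, not recommendations):

```ini
[/Script/OnlineSubsystemUtils.IpNetDriver]
NetServerMaxTickRate=60
MaxClientRate=100000
MaxInternetClientRate=100000

[/Script/Engine.Player]
ConfiguredInternetSpeed=100000
ConfiguredLanSpeed=100000
```

Per-actor NetUpdateFrequency, by contrast, is set in code on each actor rather than in config.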

I have weapons that fire projectiles at a 0.066s interval. Everything seems to be working fine on the replication side.


A player-responsive approach for a shooter is to have the firing client go full fakey: client-side prediction! Fire a local shot and handle impacts/FX… including blood splatter. Just about every FPS I’ve played uses this approach, especially if they allow for high pings. Downside is false positives: you see a hit, the server doesn’t.

The Battlefield series (especially BF4/BF1) utilizes CSP and rewind time, AND client hit requests. But they only did client hit requests for low pings. Players above a set threshold had all shots fully scrutinized by the server: full server auth on all shots fired.

The low-ping clients would fire and only request server scrutinization on local hits, per shot fired, not batched. The server in turn would only scrutinize and award damage for those shots if they hit in the rewind sim. This reduced server workload.

To clarify further: if a shot would hit on the server but did not on the firing client, no damage would be awarded. Mainly a WYSIWYG type of approach.

PlayerUnknown’s Battlegrounds (PUBG)… UE 4.16.3… limits 1 shot per client frame. This drastically reduces RPCs for high rate-of-fire weapons.
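That per-frame limit is simple to sketch. A minimal, self-contained version follows; in Unreal you might key it off GFrameCounter, but here the frame number is passed in, and the struct and names are illustrative:

```cpp
#include <cstdint>

// PUBG-style throttle: allow at most one fire RPC per client frame,
// coalescing anything faster down to the frame rate.
struct PerFrameShotLimiter {
    std::uint64_t LastFiredFrame = ~0ull; // sentinel: no shot fired yet

    // Returns true if a shot may be fired this frame (and records it).
    bool TryFire(std::uint64_t FrameNumber) {
        if (FrameNumber == LastFiredFrame)
            return false; // already fired this frame
        LastFiredFrame = FrameNumber;
        return true;
    }
};
```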

Overall, my experience is they send on fire. Adding to network delay is a general no-no.


Data (packet payload) has limits, but you aren’t hitting them with what you’ve posted. The array would have to be huge. Processing the array, on the other hand, could impact server performance. The frame doesn’t end until your loops finish… and all that jazz.

You’ll have to do a lot of testing. Set it up both ways and test the results with varying connection quality. I’d max out your player slots and do this with 60-80% of the players having high ping (120-300ms).


Thank you for that. It sounds like batching is more work than is really necessary then.

What is the reason for full authorization on high ping rather than low? Too drastic a desynchronization, or just to minimize server workload?

Also, do you have on hand the NetServerMaxTickRate and weapon tick for a weapon firing that fast? If not, it’s okay, but I’d be curious to see those numbers if you’re getting good performance.

EA/DICE wanted to limit ghost hits and all the other nasty artifacts that high ping, loss, and jitter cause when the ping variance between players is high. For high-ping players, hit requests would arrive extremely late. Having a low-ping player who’s now meters behind cover receive those hits was just outright disgusting.

Say I have a 10ms ping. I’m going to be pretty synchronized with a server running at 60Hz, and it with me. A player at 250ms (common in BF) would be 125ms + server tick behind the server’s sim at all times. His view of me is drastically in the past. Render-wise there’s another local frame or two, the Game Thread being 2 frames ahead of the Render Thread. Take the total delay time and multiply it by max player velocity. This gives you a rough estimate of how far ahead, movement-wise, the other player is if running in a straight line. So something like 150ms * 0.6cm/ms = 90cm. That’s pretty substantial.

NetServerMaxTickRate I’m rolling with atm is 60Hz. For weapons I’ve disabled tick for the moment. They aren’t running anything that requires tick. Firing logic uses timers.
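For what it’s worth, the timer-driven cadence can be sketched without engine dependencies. In-engine this would be a looping GetWorldTimerManager().SetTimer(...) call firing the weapon every FireDelay seconds; here an accumulator plays the timer’s role so the logic is self-contained:

```cpp
// Timer-driven automatic fire (no actor Tick): a fixed-interval "timer"
// fires shots independently of frame rate. A 0.066s interval works out
// to roughly 15 shots per second.
struct AutoFireTimer {
    float FireDelay;          // seconds between shots, e.g. 0.066f
    float Accumulated = 0.f;

    // Advance by one frame's delta time; returns shots fired this frame.
    int Tick(float DeltaSeconds) {
        Accumulated += DeltaSeconds;
        int Shots = 0;
        while (Accumulated >= FireDelay) {
            Accumulated -= FireDelay;
            ++Shots; // Fire() / the unreliable fire RPC would go here
        }
        return Shots;
    }
};
```

Because the leftover time carries over between frames, the cadence stays correct even when the frame rate and fire rate don’t divide evenly.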


That makes sense. Thank you so much, I appreciate you sharing that with me.

I did a little digging for old content and found a reddit thread where Mischkag (DICE BF1 netcode engineer) talks about some of the hit reg and ping threshold stuff I mentioned previously.

Lots of info in the full thread. Worth a peek.

Awesome, I’ll give it a read!