Slowdown for listen host when clients are in game

I’ve noticed that when hosting a listen server, the host’s fps will drop by around 10-15 for each client that joins the game (even if the clients are not controlling a pawn and just spectating).

Does anyone know what’s causing this slowdown, or how I might investigate what is happening?

Thanks in advance.

I don’t know what’s causing the slowdown, but UDK’s GameplayProfiler might tell you.

Thanks @Nathaniel3W. I still haven’t used the GameplayProfiler, so I will certainly look into using it.

One extra bit of information: if I’m running both the client and host instances on the same machine, the client instance runs fine, while the host instance still sees the slowdown. That means it can’t be down to excessive system resource usage.

This leads me to think there are some blocking network functions that add to the host’s frame time.

Just thought I’d add this in case anyone has any thoughts.

I’ve been trying to analyse performance using the gameplay profiler to determine why the listen host is dropping in performance so much for each client that joins.

This is “frame function summary” for the listen host. (PLEASE RIGHT CLICK AND OPEN IN NEW TAB TO VIEW). On the left is a profile of just the listen host in-game with no clients joined and the right is with two clients joined and in game:

This is the “aggregate function summary”:

I’m really not sure how to make much use of this data.

Even if clients join as spectators, the same drop in the host’s performance occurs. It doesn’t appear to be related to system resources, as the host machine can run a dedicated server, then run the game on the same computer and join that server with no slowdown at all. Any ideas of what might be causing the slowdown would be much appreciated.

it’s hard to see a reason with this alone, but from a quick glance it seems you are doing a ton of traces, which ends up adding a lot of time to the world tick time
go to the frame actor/class call graph and see where the traces are coming from. or maybe upload the profiling file somewhere to have a deeper look at it, since just this screenshot doesn’t give enough detail on where the traces come from

Thanks for the response @Chosker.

Here are the two profile files: profiles.zip - Google Drive

The one named “host_with_two_clients_joined.gprof” is the one where performance has dropped by a lot.

I’m not sure where to locate the source of the “line checks” (which I’m guessing are traces).

I had a look at the profiles. there really seems to be a lot of time taken by traces, but they don’t seem to come from your script code as they are not shown (they simply get grouped into World Tick Time), which probably means it’s something from native code
perhaps you have some particle with a collision module enabled? it’s the kind of thing that would be making native traces that don’t go through your code. either that or something of that nature

otherwise you could try tracking the line checks. I’ve never used these commands but they are well worth a shot: https://api.unrealengine.com/udk/Thr…Home.html#Line Checks

@Chosker That sounds like a good idea. However, whenever I run “TOGGLELINECHECKS” on the console, I get “Command not recognized”.

Are you able to run this command?

yeah ok those commands seem to not exist in UDK :frowning:

it’s a long shot but you could try different stats to try to isolate the source of the issue - UDK | StatsDescriptions
you can try all of them until you find which has a big delta between the player-less server and when the players join. I would start with collision, physics, or octree
actually the terminology described in the stats octree section of the documentation seems to match the terminology of the traces that were shown in your gameplay profiler sessions. maybe something with actor encroachment, but that’s just a wild guess

Thanks @Chosker. Yeah, I believe those exec commands are commented out in native code. I’ll try your idea of trying to isolate the issue.

Hmmmm, so this is interesting. Looking at the octree stats for host only and then host + 1 joined player shows a huge increase in “ZE MNF Checks”:

I need to test on the vanilla UT example game to see if this is exclusively happening in my game. I wish there was a way to inspect the origin of these calls in UDK. @Chosker if you have any other ideas while I’m testing, I’m all ears. Thanks again for the ideas you’ve suggested so far.

Edit: Interestingly, I just tried on the stock map DM-Deck and this didn’t have the issue. The line checks increase was very minor and proportional to the extra player. So something about my map is causing a radical increase in line checks once the second player joins.

So it turns out that this increase in line checks when a client joins only occurs when I have the loot actors placed in the world.

I wonder if these line checks are to check if these actors are relevant for replication…?

These actors extend Actor and have the following properties:



//-----------------------------------------------------------------------------
// Default properties
//-----------------------------------------------------------------------------
defaultproperties
{
    // Replication settings
    bOnlyDirtyReplication=true
    NetUpdateFrequency=8            // server considers this actor for replication 8x/sec
    RemoteRole=ROLE_SimulatedProxy
    bHidden=false
    NetPriority=+1.4

    CollisionComponent=StaticMeshComp

    Physics=PHYS_None

    // Dynamic actor (spawned/deleted at runtime)
    bStatic=false
    bNoDelete=false
    bMovable=true

    // Collision settings
    bCollideActors=true
    bCollideWorld=false
    bWorldGeometry=true             // unusual for a dynamic actor; normally for level geometry
    bGameRelevant=true
    BlockRigidBody=true
}


I’m afraid I cannot be of much more help at this time
from the docs these seem to be the most relevant increases in your case:


ZE MNF Checks - Total zero extent multi-node filter checks performed.
ZE Line Checks - Total Actor zero extent line checks performed.

sadly I don’t know how the octree works. all I know by now is that it seems related to the navmesh, but I never used the navmesh in my game so I don’t know much about its internals.

there seem to be some console commands related to the octree though - UDK | ConsoleCommands just search for “octree” within that page. maybe one of those commands gives you enough verbose output to understand what causes it

still, I have no clue why one joined player would cause such a big difference. another wild guess: out-of-world traces for an actor related to the joining client

hah, just saw your new reply

maybe your loot actors are dynamically blocking navigation? chances are they do, since they have bWorldGeometry=true

I believe I’ve figured it out. For every client on the server, the server is doing a line check to every single actor that has bAlwaysRelevant=false, which in my case was around 2000 loot actors spawned around the world.

This was occurring NetUpdateFrequency times per second (set per actor). Now that I know what is causing it, I should be able to implement fixes to address the issue.
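To give a sense of scale, here’s a rough back-of-the-envelope model (just an illustration, assuming one relevancy line check per non-always-relevant actor, per client, per net update):

```python
# Rough cost model for per-client relevancy line checks on a listen host.
# Assumption (not from engine source): each actor with bAlwaysRelevant=false
# costs one line check per connected client, NetUpdateFrequency times per second.

def line_checks_per_second(num_actors, net_update_frequency, num_clients):
    """Estimated relevancy line checks the host performs each second."""
    return num_actors * net_update_frequency * num_clients

# ~2000 loot actors at NetUpdateFrequency=8 with 2 clients joined:
print(line_checks_per_second(2000, 8, 2))  # 32000 checks/sec
```

With numbers like that, it’s no surprise the host’s frame time balloons as each client joins.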

Side note: I wish there was a way to prevent replication checks for clients that are more than x distance away from a given actor. The replication conditions you can set on an actor apply to all clients on the server (apart from bNetOwner), which is pretty limiting for an open-world game, where one client could be on the other side of the map from another.

glad to hear you isolated the issue

yeah, it’s sad you can’t prevent replication checks by distance on any actor.
in UE4 the Actor class has NetCullDistanceSquared, which does exactly that. In UDK only the Projectile class seems to have NetCullDistanceSquared implemented natively, so apparently the only way to use it is to extend from the Projectile class :rolleyes:
otherwise yeah, you’ll have to come up with something

I actually just tested extending from Projectile and unfortunately got an error like:

“Attempted to spawn static mesh component in StaticMeshActor rather than Object”

Then the compiler closes, saying something about a memory protection error.

Update: I added dynamic scaling of NetUpdateFrequency for all actors causing the issue, based on their distance from the nearest viewing player. The results are pretty awesome. Check the stats: coop_host_performance_benchmench.jpg - Google Drive
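For anyone curious, the idea is roughly this (a minimal sketch in Python for illustration only; the frequency range and distance thresholds here are made-up values, not my actual ones):

```python
# Sketch: scale an actor's NetUpdateFrequency by its distance to the nearest
# viewing player. Near actors update often; far actors barely at all.
# All constants below are illustrative assumptions.

MAX_FREQ = 8.0      # updates/sec when a player is close to the actor
MIN_FREQ = 0.5      # updates/sec when every player is far away
NEAR_DIST = 1000.0  # within this distance (UU), use MAX_FREQ
FAR_DIST = 10000.0  # beyond this distance (UU), use MIN_FREQ

def scaled_net_update_frequency(dist_to_nearest_player):
    """Linearly interpolate the update frequency between the distance bounds."""
    if dist_to_nearest_player <= NEAR_DIST:
        return MAX_FREQ
    if dist_to_nearest_player >= FAR_DIST:
        return MIN_FREQ
    t = (dist_to_nearest_player - NEAR_DIST) / (FAR_DIST - NEAR_DIST)
    return MAX_FREQ + t * (MIN_FREQ - MAX_FREQ)

print(scaled_net_update_frequency(500.0))    # 8.0
print(scaled_net_update_frequency(10000.0))  # 0.5
```

In-game, the equivalent would run on the server in each loot actor’s Tick (or on a timer), writing the result into NetUpdateFrequency so distant actors are considered for replication far less often.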

>200% performance increase for hosts. I’m very happy with the results. Thanks for your help @Chosker.

oh, neat :slight_smile: