Our game calls for a limited number of players operating inside a large world, and there is currently no plan to allow users to host their own games. This means we may have to run a large number of server machines, so we would like to get as many instances of the dedicated server running on each machine as possible.
The game contains some reasonably complex physics objects, so I’m looking at the best ways to get CPU usage down on each dedicated server. Since a server cannot (as far as I know) share assets or spawn multiple levels, each instance carries an inherent overhead, and that overhead adds up when trying to cram as many players as possible onto one machine.
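To illustrate the kind of setup I mean, here is a minimal sketch of packing several instances onto one machine; the server binary name and the `-port` flag are hypothetical placeholders for whatever the engine's dedicated server actually accepts:

```python
import subprocess

# Hypothetical binary name and port flag -- substitute whatever your
# engine's dedicated server actually expects on the command line.
SERVER_BINARY = "./MyGameServer"
BASE_PORT = 7777
INSTANCE_COUNT = 8

processes = []
for i in range(INSTANCE_COUNT):
    port = BASE_PORT + i
    # Each instance loads its own copy of every asset, so memory and
    # CPU overhead scale linearly with INSTANCE_COUNT.
    proc = subprocess.Popen([SERVER_BINARY, f"-port={port}", "-log"])
    processes.append(proc)

# Block until all instances exit (Ctrl+C to stop the launcher).
for proc in processes:
    proc.wait()
```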
I’m currently in the process of running our game with various physics settings and scenarios to try to measure the impact of the different options, but are there any existing benchmarks showing the results of dedicated servers running on Windows and Linux across different use cases?
Also, what is the best way to profile the CPU usage of a running dedicated server? Is there a built-in mechanism for this, or does it have to be done with a third-party profiler?
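In the meantime I’ve been sampling CPU externally; a rough sketch with `psutil` is below (the server process name is a placeholder). This only gives per-process load, not a function-level profile, which is why I’m asking about proper profilers:

```python
import time
import psutil

SERVER_NAME = "MyGameServer"  # hypothetical process name

# Find all running dedicated server instances by name.
servers = [p for p in psutil.process_iter(["name"])
           if p.info["name"] and SERVER_NAME in p.info["name"]]

# Prime cpu_percent() -- its first call always reports 0.0.
for p in servers:
    p.cpu_percent()

while True:
    time.sleep(1.0)
    for p in servers:
        try:
            print(f"pid={p.pid} cpu={p.cpu_percent():.1f}%")
        except psutil.NoSuchProcess:
            pass  # instance exited between samples
```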