Hello,
Here are some benchmarks comparing the performance of UE 4.25 under Windows 10 and Linux (CentOS 8) using the Sci-Fi Bunk scene (SciFi Bunk on the UE Marketplace).
Hardware specs:
- Workstation: Mem: 24 GB / CPUs: 1 x Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz, 3501 MHz, 6 Cores, 12 Logical Processors / GPUs: 2 x GTX1070
- Server: Mem: 64 GB / CPUs: 8 x Intel(R) Xeon(R) CPU E7-8870 @ 2.40GHz (80 Cores / 160 Threads) / GPUs: 1 x GTX1070
Project:
SciFi Bunk
Workstation:
- Lightmass on LINUX: 1:39 min total, 1.51 sec importing, 151 ms setup, 5.35 sec photons, 1:31 min processing, 101 ms extra exporting [217/217 mappings]. Threads: 19:55 min total, 8:00 min processing.
- Lightmass on WIN10: 1:08 min total, 392 ms importing, 87 ms setup, 4.89 sec photons, 1:02 min processing, 0 ms extra exporting [217/217 mappings]. Threads: 11:29 min total, 4:23 min processing.
Server:
- Lightmass on LINUX: 4:18 min total, 1.05 sec importing, 334 ms setup, 5.55 sec photons, 4:11 min processing, 41 ms extra exporting [217/217 mappings]. Threads: 11:13:27 hours total, 31:40 min processing.
Summary:
- Workstation under Linux vs Workstation under Windows: same hardware, roughly 1.7x the threads time under Linux (19:55 min vs 11:29 min total)
- Workstation under Windows vs Server under Linux:
Wall clock: 1:08 min vs 4:18 min
Threads: 11:29 min vs 673:27 min
So the server took roughly 4x the wall clock time and roughly 59x the threads time compared to Windows on the workstation.
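The ratios above can be double-checked with a small script; `to_seconds` is just a hypothetical helper for parsing the `H:M:S` / `M:S` timings quoted in the logs, not anything from UE or Lightmass:

```python
def to_seconds(t):
    """Parse an 'H:M:S' or 'M:S' timing string into seconds."""
    secs = 0
    for part in t.split(":"):
        secs = secs * 60 + int(part)
    return secs

# Wall clock: workstation under Win10 (1:08) vs server under Linux (4:18)
wall_ratio = to_seconds("4:18") / to_seconds("1:08")

# Threads: 11:29 min on the workstation vs 11:13:27 hours (= 673:27 min) on the server
threads_ratio = to_seconds("11:13:27") / to_seconds("11:29")

print(f"wall clock: {wall_ratio:.1f}x, threads: {threads_ratio:.1f}x")
# prints: wall clock: 3.8x, threads: 58.6x
```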
What is going on here?