Intel i7 6800k or AMD Ryzen 7 1800X

I am really not sure how accurate this is. On my 1700 with stock clocks I got completely different lighting build times.

e.g. for High quality in Zen Garden:

as opposed to 188s for the even higher-clocked 1700X.

Anyone else with a ryzen in for a quick test?

Got a Threadripper 1950X (16 cores, 32 threads), 16 GB RAM (but probably running slower than it can). I’m now at around 103 seconds for Zen Garden on High.
Production quality took 239 seconds.

Please note that my RAM settings are probably not right yet. It’s running at 1330 MHz instead of the maximum 3600 MHz, which will make a big difference. I still need to fix this.

EDIT: see my later post in this thread for results with correct RAM (at 3600 MHz) speeds. It’s a lot better.

I have a 5930K (paid $300 for it; the 5820K should perform similarly for $300 at Microcenter), which is basically the previous-generation 6-core i7. It seems to perform around the level of the 6850K.

Built on 4.18, 16GB RAM.

High - 168 seconds

Production - 320 seconds.

So at least those i7 6 core benchmarks seem accurate, no idea about the Ryzen benchmarks.

Seems like… Threadripper sucks? My 7820X takes 86s for High quality and 190s for Production, and it was cheaper too…

Sucks? No, I wouldn’t say that at all. I have seen a Ford Focus beat a Toyota Supra before, only because the Focus kills it in the speed it retains through turns. Therefore, by your logic, Toyota Supras “suck”: https://.com/watch?v=L6MHjQqvGX0

In the gaming world, we already knew that the Intel setup would beat the Ryzen setup; that was something we walked into knowing 100% before the actual release of the Zen series. The Threadripper beats Intel’s “latest” by a fair margin when it comes to running applications, compression, and video decoding (anything CPU-based that’s not required to have a GPU assist). For Unreal, I would use Intel. For using the computer besides Unreal, AMD. Who needs to let the CPU worry about graphics when you have a GPU specifically for games? The benchmarks are testing graphics ability on the CPU, not “how well does the CPU perform with a 1080Ti attached to it”.

Except building lighting or compiling the engine are purely CPU-based, and threadripper is slower at both of them.

As far as CPUs go, I would go with Intel out of pure support, reliability, and longevity on every operating system, though I’m happy with the performance of any of the new CPUs. That being said, I’ve had too many issues with NVIDIA drivers on Linux and Mac; they don’t support other OSes as much as AMD does. I have a GTX 1070 and spent nearly two days fighting with NVIDIA drivers to get it to run on a Mac and a Linux machine via a graphics amplifier. My next GPU purchase will be AMD, just to see if the grass is greener. I would pay an extra $100 to have those two days of my time, plus the interval maintenance issues and update/bug fixes, solved. NVIDIA seems to cuddle up to Windows, and likewise AMD with Apple.

Does Zen Garden even have enough jobs for 16 cores? It’s quite small, considering there are only a few major objects that are lightmapped. Then again, any level where a single lightmap is larger than all the rest combined might run into the same limit.

Yes, it does. Just to add one more point to your chart, guys :slight_smile:
I tried to build Zen Garden on my dual Xeon E5-2670 with 16 cores/32 threads, 64 GB of DDR3-1600 ECC RAM, and a GTX 970.

High:
18:06:54: Lightmass on DESKTOP-NOGOBTL: 3:13 min total, 2:01 min importing, 198 ms setup, 4.31 sec photons, 1:08 min processing, 0 ms extra exporting [734/734 mappings]. Threads: 35:04 min total, 29:51 min processing.
18:06:54: Lighting complete [Startup = 2:01 min, Lighting = 1:12 m

So 193 seconds. Not so bad for my oldies ($100 each) :)))

and for ‘Production’ quality:
18:18:48: Lightmass on DESKTOP-NOGOBTL: 4:40 min total, 2:01 min importing, 193 ms setup, 7.74 sec photons, 2:30 min processing, 0 ms extra exporting [734/734 mappings]. Threads: 1:22:54 hours total, 1:15:09 hours processing.
18:18:48: Lighting complete [Startup = 2:01 min, Lighting = 2:38 m

EDIT:
another run for ‘Production’ quality, and the numbers are different because there is no time spent on ‘importing’. It’s cached! You need to take this into account when checking lighting build times. Maybe that’s the reason the Threadripper 1950X looked slow?

21:32:18: Lightmass on DESKTOP-NOGOBTL: 2:39 min total, 1.30 sec importing, 196 ms setup, 7.85 sec photons, 2:30 min processing, 0 ms extra exporting [734/734 mappings]. Threads: 1:22:27 hours total, 1:15:54 hours processing.
21:32:18: Lighting complete [Startup = 1.30 sec, Lighting = 2:38 min]

Please note that my RAM settings are probably not right yet. It’s running at 1330 MHz instead of the maximum 3600 MHz, which will make a big difference. I still need to fix this. I’ll update my post with the stats to make this even more clear.

So we can agree that in this application, Unreal Engine, Intel is the clear choice.

First we need to agree on what counts as ‘lighting time’ :smiley:
I think we should not count the ‘Startup’ time, because it is unstable and depends on other hardware (and it looks like it’s cached between runs).

21:32:18: Lighting complete [Startup = 1.30 sec, Lighting = 2:38 min]

I’d suggest comparing this time from the Swarm log.
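To make that comparison less error-prone, here is a minimal sketch of pulling the ‘Startup’ and ‘Lighting’ times out of a Swarm summary line, so runs can be compared without the cacheable startup portion. The line format is copied from the logs quoted above; the helper names are my own.

```python
import re

# Sample summary line, copied from the Swarm logs quoted in this thread.
LINE = "21:32:18: Lighting complete [Startup = 1.30 sec, Lighting = 2:38 min]"

def parse_duration(text):
    """Convert a Swarm duration like '1.30 sec' or '2:38 min' into seconds."""
    value, unit = text.split()
    if ":" in value:
        minutes, seconds = value.split(":")
        return int(minutes) * 60 + int(seconds)
    return float(value) * (60 if unit == "min" else 1)

match = re.search(r"Startup = ([^,]+), Lighting = ([^\]]+)\]", LINE)
startup, lighting = (parse_duration(g) for g in match.groups())
print(f"startup: {startup}s, lighting: {lighting}s")
```

Running this on the line above separates the 1.3 s (cached) startup from the 158 s of actual lighting work.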

A lightmass performance per dollar chart for all modern processors would take a bit more effort than just going off the scattering of benchmarks posted in this thread. I’m not comfortable saying there isn’t any price point where AMD might be a better option than Intel.
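That said, even a rough sketch makes the trade-off concrete. In the snippet below, the build times are Production-quality numbers reported in this thread, while the prices are placeholder assumptions (launch MSRPs and the used price quoted above) that would need to be replaced with current street prices:

```python
# Hypothetical cost-effectiveness sketch. Times are Production-quality
# results reported in this thread; prices are placeholder assumptions.
benchmarks = {
    # cpu: (production build seconds, assumed price in USD)
    "i7-7820X": (190, 599),
    "Threadripper 1950X": (146, 999),
    "i7-5930K (used)": (320, 300),
}

def dollar_seconds(seconds, price):
    # One possible metric: build time weighted by price; lower is better.
    return seconds * price

for cpu, (secs, usd) in sorted(benchmarks.items(),
                               key=lambda kv: dollar_seconds(*kv[1])):
    print(f"{cpu}: {dollar_seconds(secs, usd)} dollar-seconds")
```

With these placeholder prices the used 5930K comes out ahead, which is exactly why the metric is so sensitive to what you actually pay.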

Sure, I mean, I have seen benchmarks showing Intel ahead, and I have seen video benchmarks of two “exact” systems (one AMD, one Intel) where the AMD had the better scores. I would be interested in a like-for-like UE4 test, not what we are seeing here: results from random PCs with random builds.

Thank you for letting us know, man.
It’s strange that it shows you 1330 MHz. Since around 2013-14, motherboards have used a default of 2133 MHz as a starting point.

I’m now at 3600 MHz with my RAM. Times for “Processing Mappings” on High are 52.75 seconds, 54 (it did an auto-save), and 53.75. Total time for lighting is around 64-69 seconds.

“Production” lighting took 129 seconds for a total lighting build of around 141 seconds, then 126 for a total of 139, and 130 for a total of 146.

So the RAM definitely had a huge impact. For High, from 103 (total) to around 67. For Production from 239 to 146.

Pretty good I think.
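For reference, here is the RAM-fix speedup as a quick check, using the total build times reported above:

```python
# Speedup from fixing the RAM clock (1330 MHz -> 3600 MHz), using the
# total build times reported in this thread.
before = {"High": 103, "Production": 239}  # seconds at 1330 MHz
after = {"High": 67, "Production": 146}    # seconds at 3600 MHz

for quality in before:
    speedup = before[quality] / after[quality]
    print(f"{quality}: {before[quality]}s -> {after[quality]}s "
          f"({speedup:.2f}x faster)")
```

So roughly a 1.5x speedup on High and 1.6x on Production, purely from running the memory at its rated speed.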

Gz on the fixed memory settings :slight_smile:
Very nice. 2 min 6 sec for Production lighting of Zen Garden is the best result so far, AFAIK.

I found it’s cheaper to build two 7700k machines at the moment than a single Threadripper machine - as a result I’m not sure I can see a purpose for Threadripper so long as I’m using distributed build processes.

Yeah, that’s exactly what I was trying to calculate: two 8700K vs one Threadripper… With the high memory prices it’s very important to understand how much memory I need though: should I calculate for example 4GB ‘per core’ (i.e. 64GB for the Threadripper and 32GB for the Intel) or does memory not make a big difference?

With my current i7 4771 (4 cores) my 24GB is always completely full when building lighting…

Any thoughts?
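In case it helps frame the question, here is my “4 GB per core” guess as a quick sketch; the ratio is just my assumption, not an official Lightmass requirement:

```python
GB_PER_CORE = 4  # rule-of-thumb assumption, not an official figure

def suggested_ram_gb(cores):
    # Scale RAM linearly with core count under the rule-of-thumb ratio.
    return cores * GB_PER_CORE

for name, cores in [("8700K", 6), ("7700K", 4), ("Threadripper 1950X", 16)]:
    print(f"{name}: {suggested_ram_gb(cores)} GB")
```

That would put the Threadripper at 64 GB and the Intel 6-core at 24 GB, which is why the memory prices matter so much for the comparison.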

I doubt memory makes a major difference above 32 GB. I have 64 GB (I’ll likely be selling 32 GB of it, as it’s expensive) and I have seen no real return on that excessive amount of RAM.

I have a 5820K CPU and I’m starting to feel it doesn’t quite cut it for Lightmass if I’m going for very high quality. I’m thinking of upgrading to an 8700K. My question is: is clock speed more important, or core/thread count? The 8700K is still 6 cores, whereas I could get Skylake-X or some other ridiculous thing if more cores/threads mattered more.