Hello Ivan,
I don’t understand how my PC’s previous build could have functioned as well as it did limping along with just one of the two 8 GB DIMMs engaged, but that does appear to be the case. I’m recalling now several instances where RC wasn’t delivering, e.g. missing parts, or where a message would flash up on shutdown to the effect of “Memory location at 0000000000 is missing”. I ran a memory test when I saw this and it checked out, so I dismissed it as a mystery. I was concerned about either selling bad DIMMs or not being able to sell perfectly good ones, yet I didn’t want to rip out the new ITX motherboard to put the old one back in for testing those DIMMs, not just to avoid the hassle, but also to avoid reversing gears on the Windows 10 license transfer. I then thought to build a system around an older DDR3-compatible ATX motherboard, and the memory checked out fine there. That pointed the finger at the old ITX motherboard. A friend said he had seen errors like the ones I described and suggested updating the BIOS as the fix, the simplest remedy being to reset the CMOS.
I’m left wondering what effect this limping memory issue had on my previous performance numbers. Clearly, the peak-usage difference shows time lost to hitting virtual memory, but how much? Is that mainly in simplify and texturing? I was essentially quadrupling the memory, and those two operations showed roughly a doubling of performance in texture and a 270% increase in simplify, so I’m tempted to extrapolate that, had all 16 GB been functioning normally, the increase in those two areas would have been more like 150% in texture and maybe 235% in simplify.
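Just to make explicit how I’m computing those percentages (relative performance, old time over new time), here’s a tiny sketch; the times in it are made-up placeholders purely for illustration, not values from my actual RC logs:

    # Back-of-envelope speedup arithmetic (hypothetical times, not my real logs).
    def speedup_pct(old_seconds, new_seconds):
        """Performance of the new build relative to the old, as a percentage."""
        return 100.0 * old_seconds / new_seconds

    # Placeholder numbers purely for illustration:
    texture_old, texture_new = 600.0, 300.0    # a doubling -> 200%
    simplify_old, simplify_new = 540.0, 200.0  # roughly the 270% figure

    print(f"texture:  {speedup_pct(texture_old, texture_new):.0f}%")
    print(f"simplify: {speedup_pct(simplify_old, simplify_new):.0f}%")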
I question how much time others (not to discount myself) have to tease out the true performance characteristics of each system component per RC operation, not to mention how everything performs in combination. The path of least resistance is to look at the overall performance increase, which RC already logs, and be happy, or not, AFTER making a purchase. But that’s precisely what motivates your benchmark project: pinning down which components play nicely with others to make up either the optimum build, or at least the optimum build for a particular budget. To that end we’ll surely need real analytics applied to a statistically significant data set.
Another complicating factor, exposed by running benchmark.bat with identical variables numerous times, is the variability in performance that can only be attributed to RC itself. I’ve updated the spreadsheet (reposting the link); note the variance between the five iterations. We see a 20-32% difference in performance! Is this due to the certain randomness in the behavior of the algorithms that Götz talks about? Regardless, my choice to run five iterations may or may not expose the true range of variability, but it would seem to be a bare minimum for having much faith in the numbers, unless you’re good with a +/- 15% margin of error.
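For anyone who wants to reproduce that spread, here’s a minimal sketch of the kind of harness I mean. It assumes benchmark.bat is in the working directory and only measures total wall-clock time per run; it does not parse RC’s own per-step logs:

    # Run benchmark.bat several times with the same variables and report the spread.
    import subprocess, time, statistics

    RUNS = 5
    times = []
    for i in range(RUNS):
        start = time.perf_counter()
        subprocess.run(["cmd", "/c", "benchmark.bat"], check=True)
        times.append(time.perf_counter() - start)
        print(f"run {i + 1}: {times[-1]:.1f} s")

    spread_pct = 100.0 * (max(times) - min(times)) / min(times)
    print(f"mean {statistics.mean(times):.1f} s, "
          f"stdev {statistics.stdev(times):.1f} s, spread {spread_pct:.0f}%")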
I’m about to turn on overclocking and see what time it is; I managed to squeeze water cooling above the CPU in that tiny ITX case.
I hope others will find time to test and post results, so that collectively we can move toward testing a standard set of images and enabling automatic upload to a central database, as discussed.
Cheers,
Benjy