Most useful. You can drop the categories into a column in a spreadsheet, then paste updated performance values into new rows for each configuration change, or for other people's PCs.
Is there any way to track when virtual memory is invoked, and how much time is chewed up by those reads/writes?
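For what it's worth, here is a minimal sketch of one way to watch for that from outside the application, assuming Python with the psutil library installed. It polls the OS swap/page-file usage and logs the change between samples. It can't attribute exact time to the page-file reads/writes - that would take OS-level tooling like Windows Performance Recorder - but it does show when virtual memory kicks in and roughly how much data is moving.

```python
# Minimal sketch: poll swap/page-file usage while the benchmark runs.
# Assumes the third-party psutil package is installed (pip install psutil).
import time
import psutil

def monitor_swap(interval_s=1.0, duration_s=3600.0):
    """Print a line whenever swap usage changes between samples."""
    start = time.monotonic()
    prev_used = psutil.swap_memory().used
    while time.monotonic() - start < duration_s:
        time.sleep(interval_s)
        swap = psutil.swap_memory()
        delta = swap.used - prev_used
        if delta:  # swap activity since the last sample
            elapsed = time.monotonic() - start
            print(f"t={elapsed:7.1f}s  used={swap.used / 2**20:8.1f} MiB  "
                  f"delta={delta / 2**20:+8.1f} MiB")
        prev_used = swap.used

if __name__ == "__main__":
    monitor_swap()
```

Run it in a second window alongside the benchmark; any nonzero deltas line up with the moments the run spilled past physical RAM.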
I’m also wondering about memory latency and what benefit, if any, there is to low values. I’ve read that low latency adds little, if anything, to frame rates in video games. Is low-latency RAM cost-justified here?
Lastly, I’ve been urging new users to participate, one of whom is building nodes for crunching Bitcoin data and brings large data sets into RC as needed. We both agree that once you get to the point of establishing a shared image set to standardize that variable across all participants, it would be most useful to have a third category beyond the small 30-image test and the medium 500-image test you described. We need a large one, right up to the 2500 limit in Promo. Not much is learned if image pixel density and count don’t push system resources to the brink; the bottleneck(s) are what reveal the thing of value, and 500 images, even at 42 MP, aren’t much of a load even on a modest setup like mine. Three image folders - small, medium, and large - could all come from the same scene, which gives the user the choice of what to bite off, with nothing imposed.

If this is the right idea, then it’s probably also worth coding the ability to intervene with a save command and later a resume (a rough sketch follows below). That further supports users trying the 2500 option: let it run at night, save during the day to work in RC on paying projects or other apps requiring system resources, then jump back in at night with a resume.
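To make the save/resume idea concrete, here is a hypothetical sketch in Python: process the image set in batches and write a checkpoint file after each batch, so an overnight 2500-image run can be stopped and picked up later. The process_batch() function and the checkpoint filename are placeholders I made up; none of this is RealityCapture's API.

```python
# Hypothetical save/resume driver for a long batch benchmark.
import json
import os

CHECKPOINT = "benchmark_checkpoint.json"  # assumed filename

def process_batch(batch):
    """Placeholder: the benchmark's real per-batch work goes here."""
    pass

def run_benchmark(image_paths, batch_size=100):
    done = 0
    if os.path.exists(CHECKPOINT):  # resume from the last completed batch
        with open(CHECKPOINT) as f:
            done = json.load(f)["images_done"]
        print(f"Resuming after {done} images")
    while done < len(image_paths):
        process_batch(image_paths[done:done + batch_size])
        done = min(done + batch_size, len(image_paths))
        with open(CHECKPOINT, "w") as f:  # save progress after every batch
            json.dump({"images_done": done}, f)
    os.remove(CHECKPOINT)  # clean up once the run completes

if __name__ == "__main__":
    # Example: 2500 placeholder paths, checkpointed every 100 images.
    run_benchmark([f"img_{i:04d}.jpg" for i in range(2500)])
```

Checkpointing at batch boundaries keeps the bookkeeping trivial; the tradeoff is that an interrupted batch gets redone on resume.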
I know this is all phase II, just a thought.