Decomposing for large scale processing - disk intensive!

I’ve been testing RC on different computers (none of them are new workhorses), and my current test rig is an HP ProLiant DL380 G7 with the beefiest CPUs and two 1080 Ti cards. GPU power is in abundance, of course, and the CPUs also see periods of heavy saturation.
However, after starting a high-quality render of 1500 images today, I noticed a (to me) new stage in the processing that is absolutely hammering my disks (HP 410i RAID controller with four 10k drives in RAID 10). I’m going to add a RAID 0 work drive to see if that helps, but I’m curious what this “decomposing for large scale processing” stage is. I haven’t seen it before, and I don’t remember seeing it while processing over 2000 images.
Is there an overview anywhere, where I can read about the different stages in the reconstruction phase, and what goes on?

I would guess that is the stage where RC splits the work into tasks that are small enough for your machine to handle.
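If that guess is right, the idea can be sketched roughly like this. This is not RC’s actual algorithm; the function name `plan_parts` and all the per-pixel byte estimates are made-up assumptions, just to show how a dataset might get carved into memory-bounded chunks:

```python
import math

def plan_parts(num_images, mp_per_image, bytes_per_pixel_est, ram_budget_gb):
    """Illustrative sketch: split a reconstruction into parts whose
    estimated working set fits a RAM budget. All numbers are guesses,
    not anything RC actually uses."""
    per_image_bytes = mp_per_image * 1e6 * bytes_per_pixel_est
    budget_bytes = ram_budget_gb * 1024**3
    images_per_part = max(1, int(budget_bytes // per_image_bytes))
    num_parts = math.ceil(num_images / images_per_part)
    return num_parts, images_per_part

# 1500 images at 20 MP, assuming ~48 bytes of working data per pixel,
# against a 16 GB budget (the cap observed in this thread):
print(plan_parts(1500, 20, 48, 16))  # → (89, 17)
```

Each part then gets processed separately and the intermediate results are streamed to and from the cache on disk, which would explain the heavy disk activity during that stage.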

Aha! Do you know if it’s memory related? I saw something odd: the RAM utilization of my machine maxed out at 16 GB. I have 60 GB in there, though, and I was wondering why it didn’t use more memory.

Maybe your system is just swapping intensively to disk. You say you have 60 GB on board (do you mean 64 GB?) — is the system able to use all of it?

Hi Ørjan Sandland,

It’s absolutely normal to observe that HDD activity, and it’s always present no matter the project size. The best option here is to use an SSD for the RC cache data.

The RAM maxing out like that might be some kind of bug; it has been reported a few times during reconstruction.
Have you tried running it another time?

And no, I don’t think there is a list of the different steps out there.
I think that would be super useful though…
Open a feature request? :wink: