I routinely run into performance issues when a reconstruction produces part sizes larger than what fits into memory. All mesh reconstruction settings are at their defaults, and the scenes are not georeferenced or scaled in any way.
An example: at the moment I’m running a Normal detail job, and even though the Maximal vertex count per part is at its default of 5,000,000, one of the parts was estimated at over 18,000,000 vertices during processing. While that part is being processed, RAM usage exceeds the 32 GB installed in this machine and RC starts swapping to disk to finish it. RAM usage returns to normal once it moves back to smaller parts.
I wanted to confirm whether this is normal behaviour, or whether I need to lower the maximal vertex count per part to match my machine specs so it never exceeds RAM. Or should this be handled automatically in the “Decomposing for large scale processing” stage?
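For what it’s worth, this is the rough arithmetic I’ve been using to think about a sensible cap. It rests on my own assumption (not anything from the RC docs) that peak RAM during meshing scales roughly linearly with the vertex count of the part currently being processed, and the numbers are illustrative placeholders, not measurements:

```python
# Back-of-envelope only: assumes peak RAM scales ~linearly with the
# vertex count of the part being meshed. All figures are placeholders.

observed_vertices = 18_000_000     # the part that pushed past physical RAM
observed_peak_ram_gb = 32          # point at which swapping started (approx.)

bytes_per_vertex = observed_peak_ram_gb * 1024**3 / observed_vertices
print(f"~{bytes_per_vertex / 1024:.1f} KiB of peak RAM per vertex (very rough)")

# Leave headroom for the OS and RC's other data structures.
usable_ram_gb = 24
suggested_cap = int(usable_ram_gb * 1024**3 / bytes_per_vertex)
print(f"Rough 'Maximal vertex count per part' ceiling: ~{suggested_cap:,}")
```

If that linear assumption is even approximately right, it would suggest a safe ceiling for my 32 GB machine, but I have no idea whether RC’s memory behaviour actually works that way, which is really what I’m asking.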
Previous attempts at experimenting with the setting were fruitless.
Any info greatly appreciated!