Maximal vertex count per part

Hi folks,
I routinely run into performance issues when my reconstructions produce parts larger than what fits into memory. All of my mesh reconstruction settings are at their defaults, and the scenes are not georeferenced or scaled in any way.

An example: at the moment I’m running a Normal detail job, and even though Maximal vertex count per part is at its default of 5,000,000, one of the parts was estimated at over 18,000,000 vertices during processing. While processing that part, RAM usage exceeds the maximum (32 GB in this case) and RC starts swapping to disk to finish it. RAM usage returns to normal once it goes back to dealing with smaller parts.

I wanted to confirm whether this is normal behaviour, or whether I need to set the maximal vertex count lower to bring it into line with my machine specs and ensure it never exceeds RAM. Or should this happen automatically in the “Decomposing for large scale processing” stage?
Previous attempts at experimenting with the setting were fruitless.
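
For what it’s worth, below is the rough back-of-envelope I’ve been using to pick a value. It calibrates an approximate working-memory-per-vertex figure from the run above (the 18 M-vertex part appearing to need most of the 32 GB) and then projects a part size that should fit a given RAM budget. The per-vertex model and the headroom factor are purely my own assumptions, not anything official from RealityCapture, so treat it as a sketch only:

```python
# Back-of-envelope only: calibrate a rough bytes-per-vertex figure from one
# observed meshing run, then project a "Maximal vertex count per part" that
# should stay inside the RAM budget. Nothing here is an official RC formula.

def bytes_per_vertex(observed_ram_gb, observed_vertex_count):
    """Approximate working-set cost per vertex, derived from one observed run."""
    return observed_ram_gb * 1024**3 / observed_vertex_count

def max_verts_for_budget(ram_budget_gb, per_vertex_bytes, headroom=0.7):
    """Largest part (in vertices) expected to fit the RAM budget.
    headroom leaves room for the OS, caches and RC's other buffers."""
    return int(round(ram_budget_gb * 1024**3 * headroom / per_vertex_bytes))

# The 18 M-vertex part seemed to need roughly all of the 32 GB available.
per_vertex = bytes_per_vertex(observed_ram_gb=32, observed_vertex_count=18_000_000)
suggested = max_verts_for_budget(ram_budget_gb=32, per_vertex_bytes=per_vertex)
print(f"~{per_vertex:,.0f} bytes per vertex (rough)")
print(f"Suggested Maximal vertex count per part: ~{suggested:,}")
```

That works out to roughly 12–13 M vertices per part for a 32 GB machine on this particular project, which is obviously very crude, but it at least gives me a starting point for experimenting with the setting.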

Any info greatly appreciated!

Hello dear user,
I believe this should work as expected and shouldn’t exceed the limit. However, if it still fails after changing the value higher or lower, you can experiment with Maximal distance between two vertices and set it to 0.05, for example, to ease the calculation, or set a higher downscale factor (I would leave that as a last resort).
Increasing the Detail decimation factor could also help here.

Thanks for the reply! I’ll see if detail decimation can come to the rescue. I’ll also try to get a screen capture the next time it happens.

We are running a Reconstruction in Normal detail which is generating 230 parts. We have set the maximal vertex count per part to 10M and are getting results with 400-600M triangles.

Is there a test to help figure out what to set the vertex count per part to in order to achieve the detail you need for the project? It looks like another 6-8 hours on an AMD system with 64 cores and 512 GB of ECC RAM.
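
For context, the rough sizing I’ve been sketching so far is below: pick a target triangle edge length (the image GSD is a natural floor), estimate the surface area being reconstructed, turn that into a total triangle budget, and divide by the per-part cap (using triangles ≈ 2 × vertices) to see roughly how many parts to expect. The area and edge-length numbers in the example are made up, and the whole approach is just my own reasoning, so I’d be glad to hear if there is a better-established test:

```python
import math

# Rough sizing sketch, not an official RealityCapture formula.
# target_edge_m: desired average triangle edge length (image GSD is a natural floor)
# surface_area_m2: rough area of the surface being reconstructed

def triangle_budget(surface_area_m2, target_edge_m):
    """Triangles needed to cover the area at the target resolution.
    Area of an equilateral triangle with edge e is sqrt(3)/4 * e**2."""
    return surface_area_m2 / (math.sqrt(3) / 4 * target_edge_m**2)

def parts_expected(total_triangles, max_verts_per_part):
    """Triangles ~= 2x vertices for a typical mesh, so a 10 M-vertex
    part holds roughly 20 M triangles."""
    return math.ceil(total_triangles / (2 * max_verts_per_part))

# Example numbers only (made up): 50,000 m^2 at a 1 cm target edge length.
total = triangle_budget(surface_area_m2=50_000, target_edge_m=0.01)
print(f"~{total / 1e6:.0f} M triangles, "
      f"~{parts_expected(total, 10_000_000)} parts at 10 M verts/part")
```

For reference, 400-600 M triangles across 230 parts works out to roughly 2-3 M triangles (about 1 M vertices) per part on average, so the 10 M cap seems to bite only on the densest parts.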