Photoscan and CapturingReality work in completely different ways.
In Photoscan, the dense cloud (built from depth maps) is the END of the photogrammetry part. Meshing, simplification, etc. are just generic cloud-to-poly methods, like Poisson reconstruction, with nothing photogrammetry-specific about them. And such methods let you estimate the poly count before processing.
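To make the "generic cloud-to-poly" point concrete, here is a minimal sketch with Open3D (my choice of library for illustration; Photoscan's internals are proprietary). Poisson reconstruction only sees points and normals, not cameras or depth maps, and its octree depth parameter roughly bounds the triangle budget up front:

```python
# Sketch only: Poisson reconstruction as a generic cloud-to-poly step.
# Open3D is an illustrative stand-in, not what Photoscan uses internally.
import numpy as np
import open3d as o3d

# Synthetic stand-in for a photogrammetry dense cloud: points on a unit sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
# Poisson needs oriented normals; for a unit sphere the normal is the point itself.
pcd.normals = o3d.utility.Vector3dVector(pts)

# 'depth' caps the octree resolution, so the triangle budget is roughly
# predictable from the point count and this parameter -- before meshing runs.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)
print(len(mesh.triangles), "triangles")
```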
CapturingReality and some other modern photogrammetry tools mostly skip the huge dense cloud as an intermediate step and compute the mesh directly from the depth maps instead. And, as far as I understand this part, there is no way to estimate the poly count from depth maps in advance.
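I don't know what CapturingReality actually runs internally, but depth-map fusion methods such as TSDF integration behave exactly this way: you integrate the depth maps into a volume and only learn the triangle count after the mesh is extracted. A hedged sketch with Open3D's TSDF pipeline (again an illustrative stand-in, not CapturingReality's algorithm):

```python
# Sketch only: meshing straight from depth maps via TSDF fusion (Open3D).
# The point is that the triangle count is unknown until AFTER extraction.
import numpy as np
import open3d as o3d

intr = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)
volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01, sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

# Synthetic depth map: a flat wall 1 m away (stand-in for real depth maps).
depth = np.full((480, 640), 1000, dtype=np.uint16)  # millimeters
color = np.zeros((480, 640, 3), dtype=np.uint8)
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    o3d.geometry.Image(color), o3d.geometry.Image(depth),
    depth_scale=1000.0, depth_trunc=3.0, convert_rgb_to_intensity=False)

volume.integrate(rgbd, intr, np.eye(4))  # extrinsic = identity camera pose

mesh = volume.extract_triangle_mesh()
print(len(mesh.triangles), "triangles")  # only known once this line runs
```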
Low, Middle, High is just how many refinement steps will be processed. Each step can add polys to the surface (if your depth maps are high quality and have enough detail) or just refine poly placement.
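Roughly how I picture those steps (a toy 1D sketch of my own, not RealityCapture's real code): each pass snaps vertices to the depth data and subdivides only the spans where the data still has unresolved detail, so extra steps add polys only where the depth maps can justify them:

```python
# Toy illustration of quality-as-refinement-steps. A coarse polyline is
# refined against a dense 1D "depth map": each step refines vertex placement
# and subdivides segments whose midpoint error is still above tolerance.
import numpy as np

x_dense = np.linspace(0.0, 1.0, 2001)
depth_map = np.sin(8 * np.pi * x_dense) * np.exp(-3 * x_dense)  # detailed "truth"

def refine(xs, steps, tol=0.01):
    for _ in range(steps):
        ys = np.interp(xs, x_dense, depth_map)   # snap vertices to the data
        new_xs = [xs[0]]
        for a, b, ya, yb in zip(xs[:-1], xs[1:], ys[:-1], ys[1:]):
            mid = 0.5 * (a + b)
            # Subdivide only where the linear segment misses the depth map.
            if abs(np.interp(mid, x_dense, depth_map) - 0.5 * (ya + yb)) > tol:
                new_xs.append(mid)               # adds a "poly"
            new_xs.append(b)
        xs = np.array(new_xs)
    return xs

base = np.linspace(0.0, 1.0, 9)                  # coarse initial mesh
for quality, steps in [("Low", 1), ("Middle", 3), ("High", 6)]:
    print(quality, len(refine(base, steps)) - 1, "segments")
```

The segment counts grow with more steps but plateau once the depth map's detail is resolved, which matches the "can add polys or just refine placement" behavior.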
That's why CapturingReality creates meshes so far ahead of Photoscan in quality and detail (Photoscan's "detail" is just more or fewer blobs reconstructed from the dense cloud).
So the usual workflow for tools like CapturingReality is: create a Low or Middle poly mesh, check the quality, figure out which problems you have (misaligned parts produce artifacts, for example), fix the errors, and then run a High or Ultra mesh (~1.5-5x more polys) if needed.