Hey, I just bought the Promo licence and I am very impressed by the speed and quality of the point cloud.
However, I have some issues with the mesh generation.
I understand that I cannot see the mesh in the viewport and have to export it, but I have a few problems/questions:
How do I know how many polygons the point cloud will generate on export without giving it a target (i.e. without simplifying)?
What does “Maximal vertex per part” mean? What exactly is a part? An image?
What does the detail decimation factor mean, and what does it do?
It’s just that I feel like I am working blindly.
For example, in Agisoft, when I generate a mesh it gives a polygon count for High, Medium and Low,
so I can guesstimate, depending on the object size, which number to choose.
Here, I have no idea what the initial poly count is to begin with.
Furthermore, after simplifying the mesh (although, as I understand it, this simplifies the point cloud?)
and exporting it, I got a very dirty mesh with a lot of artifacts, even though I gave it plenty of polys
for this specific object. However, when I rendered an image inside RC, I got a beautiful render, smooth and detailed.
Photoscan and CapturingReality work in completely different ways.
In Photoscan, the dense cloud (built from depth maps) is the end of the photogrammetry pipeline. Meshing, simplification, etc. are just generic cloud-to-poly methods, like Poisson reconstruction, with nothing photogrammetric about them. That approach makes it possible to estimate the poly count before processing.
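To illustrate what such a generic cloud-to-poly step looks like, here is a minimal sketch using Open3D (this is not Photoscan's internal code; the file name and parameters are illustrative assumptions):

```python
# Minimal cloud-to-poly sketch with Open3D (pip install open3d).
# "scan.ply" and all parameters are illustrative, not Photoscan's internals.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # dense cloud exported from any tool
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)

# Poisson reconstruction: pure geometry, no photogrammetry involved.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
print(f"reconstructed triangles: {len(mesh.triangles)}")

# Simplification to an exact target is a separate, generic step.
low = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)
print(f"after decimation: {len(low.triangles)}")
```

Because the target count is an input to the decimation step, a dense-cloud pipeline can quote High/Medium/Low face counts before meshing even starts.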
CapturingReality and some other modern photogrammetry tools mostly skip the dense cloud as the basis for calculating meshes and use the depth maps directly instead. And, as far as I understand this part, there is no way to estimate the poly count from depth maps.
Low, Middle, High is just how many refinement steps will be processed. Each step can add polys to the surface (if your depth maps are high quality and contain enough detail) or just refine poly placement.
That is why CapturingReality creates meshes that are so far ahead of Photoscan in quality and detail (Photoscan's detail is just more or fewer blobs from the dense cloud).
So the usual workflow for tools like CapturingReality is: create a low- or middle-poly mesh, check the quality, and work out which problems you have (for example, misaligned parts give artifacts). Then fix the errors and run a High or Ultra mesh (~1.5-5x more polys), if needed.
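As a back-of-the-envelope helper (the ~1.5-5x multiplier is just the rough figure quoted above, not something the software guarantees):

```python
def high_pass_estimate(preview_triangles: int,
                       low_mult: float = 1.5,
                       high_mult: float = 5.0) -> tuple[int, int]:
    """Rough triangle range to expect from a High/Ultra pass, given the
    triangle count of a low/middle preview mesh (~1.5-5x rule of thumb)."""
    return int(preview_triangles * low_mult), int(preview_triangles * high_mult)

# A 20M-triangle preview mesh suggests a High/Ultra result
# somewhere between roughly 30M and 100M triangles.
print(high_pass_estimate(20_000_000))  # (30000000, 100000000)
```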
…
What does “Maximal vertex per part” mean? What exactly is a part? An image?
What does the detail decimation factor mean, and what does it do?
…
The answer is in the Help / Model and Reconstruction Settings / Mesh Calculation…
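For what it's worth, if the setting behaves the way its name suggests (an assumption on my part; the Help page above is authoritative), the number of exported parts falls out of simple arithmetic:

```python
import math

def part_count(total_vertices: int, max_vertices_per_part: int) -> int:
    """Number of parts a model splits into if each part holds at most
    max_vertices_per_part vertices (assumed semantics of the setting)."""
    return math.ceil(total_vertices / max_vertices_per_part)

# Example: a 120M-vertex model with a 5M-vertex cap would export as 24 parts.
print(part_count(120_000_000, 5_000_000))  # 24
```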
Vladlen wrote:
Photoscan and CapturingReality work in completely different ways… CapturingReality and some other modern photogrammetry tools mostly skip the dense cloud as the basis for calculating meshes and use the depth maps directly instead… That is why CapturingReality creates meshes that are so far ahead of Photoscan in quality and detail (Photoscan's detail is just more or fewer blobs from the dense cloud). So the usual workflow for tools like CapturingReality is to create a low- or middle-poly mesh…
The name of the software is RealityCapture (without space). The name of the company is Capturing Reality (with space).
…
The name of the software is RealityCapture (without space). The name of the company is Capturing Reality (with space).
The desktop app shortcut tells me Reality Capture (with space), but the website logo and footer tell me CapturingReality (without space).
But this doesn't make your great tool any less attractive to me.
…