Surface artifacts when using two cameras

After trying every tip mentioned in the forum, I'm starting to go in circles with my experiments. I keep getting visible flaking-like surface artifacts. If I only use the photos from one camera set, I get clean results; using both keeps producing these artifacts. One camera has a wide-angle lens, the other a standard 50 mm. It seems as if they produce different depths in the surface, which then »jump« depending on which camera the surface gets reconstructed from :shock: So two conflicting reconstructions in one component? They all align well into one component (except 3 photos).
The suggested instruction (group by EXIF, then start with Division/2 and then Brown4/2) makes it worse. My best approach was keeping all photos ungrouped and aligning with brown4 or k+brown4t2. The zoom varies between photos, so grouping makes no sense, right?

  1. What is the best workflow for setting up a clean alignment without artifacts when using different sensors?
  2. Which settings for a) feature reprojection (2), b) detector sensitivity (Ultra), c) distortion model? (Is k+brown4t2 the most flexible?)
  3. How can I reset all settings to default? (Reinstalling does not help.)

Thanks for any help

Hi Jan F.

  1. What is the best workflow for setting up a clean alignment without artifacts when using different sensors?

Proper planning. Is this a statue or a living person?

  2. Which settings for a) feature reprojection (2), b) detector sensitivity (Ultra), c) distortion model? (Is k+brown4t2 the most flexible?)

Use the alignment settings from the screenshot. And use MEDIUM or HIGH detector sensitivity, not ULTRA, as the latter generates a lot of “bad points”.

  3. How can I reset all settings to default? (Reinstalling does not help.)

Press and hold SHIFT while starting RC; it will ask whether to reset RC to its default settings.

Hi Wishgranter,
Thanks for Your reply. Its some stone figures. Photos are shot kind of hemispherical »out of hand«. Project was well planned actually. Constant lighting, sharp photos etc. Detector sensitivity LOW even made no change either. But I think I identified the problem now! :o Another figur was done with only one camera causing the exact same problems.

2016-04-05 19_55_06-MeshLab_64bit v1.3.4BETA - [Project_1].png

So it seems the system has problems handling sets with varying distances to the object, or it has problems when you get too close to an object. To capture some of the details, I came very close to the object. I am not sure whether to call this a bug or just a technical limitation? Agisoft had no problems with this situation, but it takes significantly longer to calculate. What options do I have in RC? Does the distortion model go out of bounds for objects too close to the lens?
Is it possible to filter out cameras that are too close to the object? Or would it make sense to identify and group cameras with a similar average distance to the object? The only way I see right now is to look in the 3D viewport, click on the possibly »too close« cameras, and disable them for meshing. Am I right? Thanks for any further help.
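The grouping idea above can be sketched outside RC. This is a minimal, hypothetical example: camera positions and the object centre are made-up values (in a real project you would export them from the 3D viewport or an alignment report), and the 30 % bucket width simply mirrors the "don't jump more than 1/3 in distance" rule of thumb discussed later in this thread.

```python
import math

# Hypothetical camera positions (x, y, z) and approximate object centre;
# all numbers are illustrative, not from any real RC project.
object_center = (0.0, 0.0, 0.0)
cameras = {
    "IMG_001": (1.0, 0.2, 0.1),
    "IMG_002": (0.9, -0.3, 0.2),
    "IMG_003": (0.45, 0.1, 0.05),  # much closer "detail" shot
    "IMG_004": (0.5, -0.1, 0.1),
}

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Bucket cameras whose object distances differ by less than ~30 %
# into one group, closest cameras first.
groups = []  # list of (reference_distance, [camera names])
for name, pos in sorted(cameras.items(),
                        key=lambda kv: distance(kv[1], object_center)):
    d = distance(pos, object_center)
    for group in groups:
        if abs(d - group[0]) / group[0] < 0.3:
            group[1].append(name)
            break
    else:
        groups.append((d, [name]))

for ref, names in groups:
    print(f"~{ref:.2f} m: {names}")
```

With the sample values this yields two groups: the two close-up cameras and the two distant ones, which could then be aligned or meshed separately.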

Was this the wide angle set? Are there images at intermediate distances?

No, it was the 50mm full-frame. That's why I reasoned it must be something else. And the wide-angle isn't even a fisheye. So I think the wide-angle set was not the problem per se; it was that I used the wide-angle to get closer and capture details.

Hi Jan F.

It is hard to give a more specific recommendation without seeing at least screenshots of the cameras in the scene or getting the data for inspection. But a good rule for making captured data work is to not jump more than 1/3 of the distance from the previous camera positions. To put it simply: if you have captures from 1 meter, then the next closer position should not be under 70 cm. If you move closer than 70 cm, say to 50 cm (1/2 the distance), then alignment can fail or be imprecise and unstable. It is “easy” math: by moving 1/3 closer you change the GSD (resolution) by 30+%, and that is quite a big difference…

If you can, send me the data to my email milos.lukac@capturingreality.com for inspection…