
Hello Jennifer,

max feature error = 2. So, any suggestions on what I should do to improve the camera positions?

Vladlen recently shared a workflow that directly answers your question. In the Workflow settings, use Group by exif data (assuming one lens and camera body combo) as a starting point for the initial Alignment. The default 2.0 setting for max reprojection error is fine; it can be turned up to 3 or 4 in cases with problematic photography, when you can’t go back and reshoot. After Alignment, you’ll likely see the max error in your largest Component at around 1.9, just below the threshold you set. Add CPs to orphaned images, or to images where the diagnostics point you at problem areas, e.g. thin regions in the sparse point cloud or, more serious, breaks in surfaces. You can raise the Weight of these CPs if you’re confident the features appear the same from different perspectives and you’re placing them accurately. Run Alignment again until you’re happy with a master Component, then turn the max reprojection error down by 0.5 and align again; it goes quickly. Check the max reprojection error and you should see it drop, e.g. from 1.9 to 1.4. Rinse and repeat until you get down to a 0.5 setting.
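In case it helps to see the schedule spelled out, here’s a minimal Python sketch of that ratcheting loop. run_alignment() is a made-up stand-in for pressing Align in RC (it just fakes plausible numbers so the loop is runnable); it is not a real RealityCapture API:

```python
import random

def run_alignment(max_reproj_error):
    """Made-up stand-in for pressing Align in RealityCapture;
    it fakes plausible numbers so the loop below can run."""
    max_err = random.uniform(0.7, 0.98) * max_reproj_error
    return {"max": max_err, "median": max_err * 0.35}

def ratchet(start=2.0, floor=0.5, step=0.5):
    """Re-run alignment, lowering the max reprojection error cap by
    `step` each pass, mirroring the workflow described above."""
    cap = start
    while cap >= floor - 1e-9:
        stats = run_alignment(cap)
        print(f"cap={cap:.1f}  max={stats['max']:.2f}  median={stats['median']:.2f}")
        # In RC this is where you would inspect the Component and add
        # (or re-weight) CPs on weak areas before tightening further.
        cap -= step

ratchet()
```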

Also keep an eye on the median reprojection error, which tells you how things are going overall; the max may relate to only a few problem children. You can go after those weak spots (if you’re good at diagnosing them) with added CPs, and watch the numbers come down.
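To make the max-vs-median distinction concrete, a tiny illustration with made-up per-tie-point errors:

```python
import numpy as np

# Made-up per-tie-point reprojection errors (in pixels): a healthy
# cloud plus a few problem children sitting near the 2.0 threshold.
rng = np.random.default_rng(0)
errors = np.concatenate([rng.rayleigh(0.4, 5000),
                         [1.7, 1.8, 1.9]])

print(f"median = {np.median(errors):.2f} px, max = {errors.max():.2f} px")
# A median around half a pixel says the model is fine overall; the max
# is dominated by the handful of outliers -- the ones worth hunting
# down with CPs.
```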

At the end, switch the Group by exif data setting back (I forget what the alternative is called) and run Align one last time; you should see a final small drop in max and median error. This setting is important for a couple of reasons. By assuming one set distortion signature for your imagery, RC calculates it just once from a handful of images and runs with it for the rest. That’s faster, but more importantly, it’s more flexible in dealing with overly converged subject matter: sections of a photo where the math involved in proper triangulation is strained and thus returns higher reprojection errors. This isn’t simply a matter of not following best practices while shooting, since some subject matter forces you to introduce extremely converged surfaces while getting ample coverage of others. You don’t want RC dealing with those problems at the beginning.
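Conceptually, the grouping just buckets photos by body + lens combo so each bucket shares one calibration. A rough Python sketch of that idea (not RC’s actual implementation), using Pillow to read the relevant EXIF tags:

```python
from collections import defaultdict
from pathlib import Path
from PIL import Image  # pip install Pillow

# Bucket photos by camera body + lens so each bucket can share one
# distortion model. Tag 0x0110 = Model (in IFD0); tag 0xA434 =
# LensModel (in the Exif sub-IFD, 0x8769).
def group_by_exif(folder):
    groups = defaultdict(list)
    for path in sorted(Path(folder).glob("*.jpg")):
        exif = Image.open(path).getexif()
        body = exif.get(0x0110, "unknown body")
        lens = exif.get_ifd(0x8769).get(0xA434, "unknown lens")
        groups[(body, lens)].append(path.name)
    return groups

for combo, files in group_by_exif("photos").items():
    print(combo, f"-> {len(files)} images, one shared calibration")
```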

Without Group by exif data, one or more of these images containing highly converged subject matter may still align, but may introduce such distortions in the model (e.g. curved surfaces that should be planar, often at the edges of the model) that neighboring imagery can’t be tied in. The max reprojection error then forces RC to break the project into smaller Components, and it’s a wasted effort trying to bring those into a single Component with CPs when you’ve already locked in these reprojection errors.

This iterative workflow makes it easiest on RC to get a handle on everything, with the opportunity to intervene manually between alignments and catch problems while they’re young. Then, once you’ve optimized the model, at the very end you let RC consider each image individually to eke out that last bit. Even if you use the same lens and body for all your work, changes in focus and aperture will (slightly) affect distortion. And no two lenses are the same: manufacturing tolerances don’t consider (and can’t afford to) how imagery from one lens relates to imagery from another at this granular a level. I learned this testing “matched” Zeiss primes during the 3D days, and photogrammetry is far less forgiving than what your brain does to accommodate imagery from both eyes.
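For a sense of scale, here’s a back-of-the-envelope calculation with the Brown radial distortion model and made-up (but typical-magnitude) coefficients: a k1 difference in the third decimal between two “matched” copies of a lens already moves the frame corner by a few pixels, which is huge against the sub-pixel reprojection errors the final pass is chasing:

```python
# Brown radial model in normalized coordinates: x_d = x * (1 + k1*r^2).
# Illustrative numbers only -- not measured values for any real lens.
f_px = 5800.0   # focal length in pixels (~35 mm lens on a full-frame 24 MP body)
r = 0.62        # normalized radius at the frame corner

for k1 in (0.085, 0.088):          # two "matched" copies of the same lens
    shift = f_px * k1 * r**3       # radial displacement at the corner
    print(f"k1={k1:.3f}: corner displacement = {shift:.1f} px")

# The two copies disagree by roughly 4 px at the corner -- far above
# the 0.5 px reprojection target of the final alignment pass.
```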

In your case, with fine details in the wall, I expect that ratcheting down on how accurately these are modeled would in turn benefit how they come through the Smoothing and Simplify tools; it’s worth an A/B comparison test. Do share.

Best,
Benjy

Wow Benjy - Thank you for so much detail and a lot of really good suggestions for the workflow.
I’ve taken your suggestions and run with them for a bit…
I hadn’t realised re-running alignment would incrementally improve the results. (But I should have, thinking about the prior pose data.) Neat feature - well worth doing, but it didn’t help my data that much due to the poor quality of the baseline.
Upping the Component max error was an interesting exercise - this seems to filter out all the features the software doesn’t have a good “lock” on. So I boosted my feature count and used the error filter to help select the best (most distinctive, I hope) features.
Lowered the threshold and re-aligned a few times, with better alignment each time.
Lowered the threshold a bit more and suddenly my alignment results got much better - mostly because the model lost a whole area of alignment! (Couldn’t align any points in a whole section due to the short baseline.)
Fair enough

So I went back to a good state, with the best report I could get while retaining the model, thinking maybe I could isolate a couple of especially poor photos. Turns out the point in the workflow when you can pick points/find cameras is really limited. Miss your opportunity and you have to align the cameras again to get the control working. (Why can’t we pick a point or vertex anytime and backtrack the contributing photos? I just wanted to isolate some of the high/low points, then disable their camera(s).)

Good tools in RC for diagnosing the results - just have to know when they are available and where to use them!

Thanks again for the workflow suggestions everyone… So many options to find and tweak.
Jennifer

Jennifer Cross wrote:

why can’t we pick a point or vertex anytime and backtrack the contributing photos?

You mean select points in the sparse cloud with Points Lasso or Rect(angle) so they are highlighted in orange and then press Find Images?
That’s already possible… :smiley: …just not with the mesh…