"More photos = Worse Model" Fix?

Having some problems with a large(ish) project. Fairly confident about the source of the problem, but wanted to see about fix suggestions.

I have about 10k photos (aerial and handheld) and about 60 laser scans. Unfortunately, due to location and time constraints, some of the best shooting practices weren't followed on this one, so the final alignment is held together with CPs and bubble gum.

One of the problems is that individual subsets of images produce much better models than the full set. For example, here's a screenshot of a roof model generated from just the nadir photos compared to the same area from the final component built with all the photos:

I’m pretty sure the root cause is alignment errors between various components that were forced together into a series of unholy marriages. This is supported by the fact that placing CPs on what should be the same spot results in unreasonably large errors.

The only apparent solution is to very carefully shove more CPs in there, re-align, and hope for the best.

Something like this was discussed in a few threads like this one: https://support.capturingreality.com/hc/en-us/community/posts/115000790031-How-to-optimize-Reconstruction

Any other suggestions on dealing with this?

Hi Tim,

No time to read through the epic thread, but my standard trick in such cases is to delete all components before a new alignment. Sometimes RC just drags bad positions along if you keep some components…

And I agree with your analysis of the problem! :slight_smile:

It could also be worth removing the lowest-quality images from the dataset and rerunning the processing.

Adding more images helps only when they are at least the same quality as the original inputs.
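One way to decide which images to cull, as a rough first pass before re-running alignment: rank them by a simple sharpness proxy such as the variance of the Laplacian (low variance suggests blur or low detail). This is a minimal pure-Python sketch on a toy grayscale array, not anything RC does internally — in practice you would load your real photos (e.g. with an image library), score each one, and drop the lowest-scoring outliers:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian of a 2-D grayscale image.

    Low values suggest a blurry, low-detail image; high values, a sharp one.
    `img` is a list of lists of pixel intensities.
    """
    h, w = len(img), len(img[0])
    responses = []
    # Apply the 4-neighbour Laplacian kernel at every interior pixel.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# Toy example: a flat (blur-like) patch vs. a checkerboard (sharp) patch.
flat = [[128] * 8 for _ in range(8)]
checker = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]

assert laplacian_variance(flat) == 0.0
assert laplacian_variance(checker) > laplacian_variance(flat)
```

The threshold for "too blurry" is dataset-dependent; sorting by score and eyeballing the bottom few percent is usually safer than a fixed cutoff.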

Thanks for the recommendations, Gotz and Lucia.