If I have two sets of images taken of the same object at different times of day (completely different lighting), there are going to be issues if I feed both sets into the same project and solve them all together. I get a much more accurate result if I solve the two datasets separately.
So is there a way to first solve one image set, then the other, and then merge these two solves into the same project / space so we have all of the camera positions and model information aligned with each other in one scene? Then I could toggle which set of images will be used for texture projection, etc.
Hope that makes sense - thanks!
In theory, combining two datasets of one object, even with different lighting, will increase mesh quality due to the higher number of images.
Texturing depends on the situation, but in most cases it also benefits through a more uniform texture. This approach is often used when you need to scan an outdoor site but can't wait for an overcast day, which in some areas may not come for months.
The easiest way is to get more images overlapping the "empty space" between both components, if that is possible. If you cannot get more images, you need to place control points (CPs) in both datasets. Each CP needs at least 3 projections per component, so 6 projections as the minimum; more projections give more help in tying the components together.
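To illustrate why shared CPs let two separately solved components be merged into one space: each CP has a 3D position in each component's own coordinate frame, and from three or more non-collinear correspondences you can estimate the similarity transform (scale, rotation, translation) that maps one frame onto the other. Below is a minimal NumPy sketch of that alignment step using the standard Kabsch/Umeyama method; this is an assumption for illustration, not RealityCapture's actual internal merge code, and the point values are made up.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t such that dst ~= s * R @ src + t.

    src, dst: (N, 3) arrays of matching control-point positions
    (N >= 3, non-collinear) in the two components' coordinate frames.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # SVD of the cross-covariance gives the optimal rotation (Kabsch).
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (src_c ** 2).sum()  # Umeyama scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical example: three CPs seen in both components, where
# component B is component A scaled by 2, rotated 30 deg, and shifted.
cp_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
th = np.deg2rad(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
cp_b = 2.0 * cp_a @ R_true.T + np.array([5.0, -1.0, 0.5])

s, R, t = similarity_transform(cp_a, cp_b)
aligned = s * cp_a @ R.T + t  # component A expressed in B's frame
print(np.allclose(aligned, cp_b))  # True when the CPs agree exactly
```

With noisy real-world CP measurements the recovered transform is a least-squares fit rather than exact, which is why more projections and well-spread CPs improve the merge.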
Take a look at how to place and spread CPs across images well: https://www.youtube.com/watch?v=d8naLEtLqDY