Troubling results with wide-angle lenses

I’ve been very happy with RC for the most part, but there seem to be some major problems with wide-angle and fisheye lenses. The results may look accurate at first glance, but a quick comparison with the same photos processed in Photoscan shows that the Reality Capture result is severely warped and very inaccurate:

Reality Capture:
http://imgur.com/5P1c1KH

Photoscan:
http://imgur.com/KRU2yue

Take, for example, this long hallway. I photographed it with a combination of a 12mm lens and a 20mm lens, on two different cameras, for a total of 350 photographs. Both programs aligned all the images, but Reality Capture exhibits a heavy “warping” or bowing of the model, whereas Photoscan does not. The Photoscan model is much, much more true to the reality of the space.

What’s going on here? This is casting doubt on the accuracy of all the models I have processed with RC…

In RC I used the “fisheye” parameters detailed in other threads, setting camera priors to unknown and using Brown4 etc…

Hi Brennmat,

the result depends on the selected lens model and on how image features are distributed across the images. If the scene is essentially a single linear path (without a loop), the cameras tend to drift. Camera grouping helps a lot there. By default, PS groups camera parameters.

Try grouping camera parameters. Open the 1Ds view, click the “images” root in the Inputs panel and click “Group by EXIF”.
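To illustrate the idea (this is not RC’s actual code, and the field names are just placeholders): grouping means every image taken with the same camera/lens shares one set of calibration parameters, so the solver has far fewer unknowns and drift along the hallway is much better constrained. A rough Python sketch of grouping image metadata by EXIF:

```python
from collections import defaultdict

def group_by_exif(images):
    """Group images so each camera/lens combination shares one calibration.

    `images` is a list of dicts with hypothetical EXIF-like fields; the
    exact keys are placeholders, not RealityCapture's internal names.
    """
    groups = defaultdict(list)
    for img in images:
        # Images with the same body, lens and focal length get one shared
        # set of intrinsics (focal, principal point, distortion) instead of
        # each image being calibrated independently.
        key = (img["camera_model"], img["lens_model"], img["focal_length_mm"])
        groups[key].append(img["filename"])
    return groups

shots = [
    {"filename": "hall_001.jpg", "camera_model": "CamA", "lens_model": "12mm", "focal_length_mm": 12.0},
    {"filename": "hall_002.jpg", "camera_model": "CamA", "lens_model": "12mm", "focal_length_mm": 12.0},
    {"filename": "hall_200.jpg", "camera_model": "CamB", "lens_model": "20mm", "focal_length_mm": 20.0},
]
print(group_by_exif(shots))  # two groups: one per camera/lens combination
```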

martinb wrote:

If the scene is essentially a single linear path (without a loop), the cameras tend to drift. Camera grouping helps a lot there. […] Try grouping camera parameters.

Thanks Martin, I’ll give that a try.

Brown4 does a reasonable job for a full-frame fisheye, but the accuracy can drop off rapidly near the edges if you’re starting from “unknown”. I’ve had a bit more success using “Division” for the initial alignment (with Max reprojection error = 2) and then switching to Brown4. That has produced better results for me than starting with Brown4 and Max reprojection error = 8.
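For reference, here is roughly what the two models do to a normalized image radius r. The coefficient values below are made up purely for illustration (not calibrated values); the point is that the high-order Brown polynomial can behave badly near the image edge when solved from poor initial values, while the single-parameter division model stays tamer:

```python
def brown_radial(r, k1, k2, k3, k4):
    # Brown-style radial polynomial (4 radial terms, tangential terms ignored):
    # r_distorted = r * (1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8)
    r2 = r * r
    return r * (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3 + k4 * r2**4)

def division_model(r, lam):
    # One-parameter division model: r_undistorted = r / (1 + lam * r^2)
    return r / (1 + lam * r * r)

# Made-up coefficients, only to show the shape of each curve.
for r in (0.2, 0.5, 0.8, 1.0):  # 1.0 is roughly the image corner in normalized units
    print(f"r={r:.1f}  brown={brown_radial(r, -0.30, 0.12, -0.05, 0.01):.3f}  "
          f"division={division_model(r, -0.25):.3f}")
```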

Have a look at some of the images in a 2D pane and enable the display of tie points. If you don’t have any tie points near the edges, it’s likely that the distortion parameters are a bit off and the reprojection error rises sharply there (which excludes those tie points).
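The exclusion works roughly like this (a simplified sketch, not RC’s actual pipeline): each tie point is reprojected through the current camera model, and any point whose pixel error exceeds the max reprojection error setting is dropped, which is why points near a badly modelled edge disappear first. The pose, intrinsics and observations below are toy values:

```python
import numpy as np

def reprojection_errors(points_3d, observed_px, K, R, t):
    """Pixel distance between observed tie points and their reprojections.

    Plain pinhole projection without distortion, just to show the idea;
    the real solver also applies the chosen distortion model (Brown4,
    Division, ...).
    """
    cam = R @ points_3d.T + t.reshape(3, 1)            # world -> camera frame
    proj = (K @ cam).T
    proj = proj[:, :2] / proj[:, 2:3]                   # perspective divide
    return np.linalg.norm(proj - observed_px, axis=1)   # error in pixels

# Toy data: identity pose, a generic intrinsic matrix, slightly noisy observations.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.1, 0.0, 5.0], [1.5, -0.8, 4.0], [3.0, 2.0, 6.0]])
obs = np.array([[981.2, 540.5], [1336.0, 341.5], [1470.0, 880.0]])

errors = reprojection_errors(pts, obs, K, R, t)
max_reproj_error = 2.0                                  # the setting discussed above
kept = errors <= max_reproj_error
print(errors.round(2), "-> kept:", kept)                # the last point is dropped
```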