When shooting with a 24MP Nikon DSLR in RAW, I turn all lens corrections off, since RC wants to characterize the lens itself from projection consistency and the camera focal length in its database.
I sometimes shoot supplemental images with a 24MP camera phone (ASUS Zenfone AR); in this case I can’t turn the geometry correction feature off, so it’s baked into the JPEG images the camera writes.
As a result, RC has trouble combining the DSLR and the camera phone images – they end up mostly in separate components, even with CPs to help reinforce the connections.
I’ve tried some of the 1D menu options for the input images to skip lens correction, but I can’t see a difference?
The link above is to a .zip file with three 24MP images of a chessboard.
I assume there is not a formal calibration process for RC using checkerboard images, but at the very least you can see the extent of the camera’s auto-correction.
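For reference, one way to quantify that auto-correction outside RC is a Bouguet-style chessboard calibration, e.g. with OpenCV. A minimal sketch follows; the board size and file names are placeholders, not the actual data:

```python
# Minimal Bouguet-style chessboard calibration with OpenCV.
# Board size and file names below are placeholders.
import glob
import cv2
import numpy as np

CORNERS = (9, 6)  # inner corners of the chessboard pattern
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("zenfone_chessboard_*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Returns the RMS reprojection error, the camera matrix and the Brown
# distortion coefficients (k1, k2, p1, p2, k3). Near-zero coefficients
# would show how aggressively the phone has already corrected its JPEGs.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS:", rms)
print("distortion:", dist.ravel())
```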
You do not need to worry about manual lens settings. RC should work with these images without problems.
The problem is that a 24 MP smartphone camera sensor has a real resolution of only around 6 MP compared to a DSLR camera.
And RC just can't connect the features found in the smartphone images with the same features found in the DSLR images.
As a workaround you can use control points to merge the components, and/or shoot more images with the smartphone, with better overlap and from a closer range than the DSLR (2 m for the DSLR ≈ 0.5-1 m for a smartphone camera like an iPhone 6S).
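For a rough sense of why the closer range helps, here is a back-of-the-envelope ground-sampling-distance comparison; the sensor widths and focal lengths are assumed example values, not measured ones:

```python
# Back-of-the-envelope ground-sampling-distance (GSD) comparison.
# Sensor widths and focal lengths are assumed example values.
def gsd_mm(sensor_width_mm, image_width_px, focal_mm, distance_m):
    """Approximate size of one pixel on the subject, in millimetres."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * (distance_m * 1000.0) / focal_mm

# 24 MP APS-C DSLR with a 35 mm lens at 2 m
print(gsd_mm(23.5, 6000, 35.0, 2.0))   # ~0.22 mm per pixel
# ~24 MP phone camera (small sensor, ~4 mm lens) at 1 m and 0.5 m
print(gsd_mm(5.6, 6000, 4.0, 1.0))     # ~0.23 mm per pixel
print(gsd_mm(5.6, 6000, 4.0, 0.5))     # ~0.12 mm per pixel
```

On paper the GSD already matches at about 1 m, but since the effective detail of the phone is closer to 6 MP, going down to roughly 0.5 m gives the features a comparable real-world scale.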
I don’t think it’s the lens correction - I got quite good results with corrected images. I would try it with a more suitable object. This one is quite hard to begin with: the checker pattern is not easily identifiable and the rest is rather dark. Also, it might help if you delete all but the biggest component and realign. If that doesn’t work, delete all components and realign. I’ve described this a couple of times in more detail here on the forum…
The few images I shared were only to show sample images of the camera for use in Bouguet-style chessboard calibration – I’m not trying to solve/reconstruct those.
Can someone from RC chime in with a definitive answer on the existence of manual lens calibration in RC? Thanks!
I cannot imagine the images of your phone are rectified to photogrammetric standards, so RC still needs to do some adjustments. I can only tell you that it worked for me many times, so I don’t think it’s the prior rectification. I rather agree with Vlad, if what he says about the resolution (6 MP) is true. I would also suspect the invasive post-processing (sharpening etc.) to be too much for a proper alignment. You might notice some differences in the number of detected features - I bet the images from the phone have substantially fewer.
I agree that the camera output is wrong for any SfM/MVS approach; I was hoping there might be an external RC calibration program (or procedure) that could effectively remove the in-camera post processing by calibrating it as a function of the lens, in the same way the air/water interface in SfM can be solved via ‘Brown-3’ style camera modeling. It wouldn’t capture everything, but it would be better than RC having to contend with processed images that obscure its attempt to solve for the camera intrinsics.
Thanks for the links, which are helpful, but I was interested not in what RC does after image import but in the idea of characterizing the lens beforehand…
Not sure what you mean by air/water in SfM. Are you trying to re-distort the image?
The first link covers what I understand you want - a possibility to provide RC with externally obtained distortion values for a specific camera/lens setting. I assumed you have this information for your phone. It would IMO need to be calculated data and not some manufacturer’s info. Another possibility, which I have also done several times, is to use some external software to do the rectification and then import the undistorted images into RC. RC will still adjust them, but only marginally.
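If you go the external-rectification route, a minimal sketch of that step might look like this, assuming you already have a camera matrix and Brown coefficients from a calibration (all values and paths below are placeholders):

```python
# Undistort images externally with a previously calibrated camera matrix
# and Brown coefficients, then feed the rectified JPEGs to RC.
# All numeric values and paths are assumed placeholders.
import glob
import cv2
import numpy as np

K = np.array([[4300.0,    0.0, 3000.0],
              [   0.0, 4300.0, 2000.0],
              [   0.0,    0.0,    1.0]])        # example camera matrix
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

for path in glob.glob("phone_images/*.jpg"):
    img = cv2.imread(path)
    h, w = img.shape[:2]
    # alpha=0 crops away the invalid border introduced by undistortion
    new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
    rectified = cv2.undistort(img, K, dist, None, new_K)
    cv2.imwrite(path.replace(".jpg", "_undist.jpg"), rectified)
```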
Have you tried playing around with the Distortion Model (Alignment - Settings - Advanced)? K + tangential2 calculates/rectifies the most variables, I think…
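If I understand that setting correctly, it corresponds to the usual Brown-Conrady model with three radial and two tangential terms; for normalized image coordinates (x, y) with r^2 = x^2 + y^2 the distorted coordinates are:

```latex
% Brown-Conrady distortion: radial terms k1..k3 plus tangential terms p1, p2,
% with r^2 = x^2 + y^2 in normalized image coordinates.
\begin{aligned}
x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2),\\
y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y.
\end{aligned}
```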
I was using the ‘air to water interface’ as a similar example of solving for a non-lens distortion with typical radial lens distortion parameters like Brown et al. In case you’re interested, I was referring to the refractive index of the glass and water in underwater photography being solved – imperfectly – as a ‘lens’ by the solver.
I bring it up because the penalty in both cases (underwater and corrected photos) is that many SIFT points won’t satisfy epipolar geometry constraints and the resulting reconstruction is necessarily diminished.
That is, if RC can’t understand the lens correction applied before, many good matching points will be discarded by the ‘double-fitting’ of the camera model.
That will be true with any of the supported models: they start from an idealized camera model seeded by the EXIF data, but they can’t undo the correction already baked into the images when fitting their own camera model; hence the ‘double-fitting’.
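A crude way to see that effect outside RC is to run SIFT matching between a DSLR frame and a phone frame and check how many matches survive a fundamental-matrix (epipolar) RANSAC test. A sketch with OpenCV, where the file names are placeholders:

```python
# Crude check of how many cross-camera matches survive an epipolar
# (fundamental-matrix RANSAC) test. File names are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("dslr.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("phone.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe ratio test to keep only distinctive matches
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Inlier ratio under a single fundamental-matrix fit: a low ratio means
# many matches cannot be reconciled with one epipolar geometry, which is
# what double-fitting an already-corrected camera would cause.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
print("matches:", len(good), "epipolar inliers:", int(mask.sum()))
```

If the inlier ratio on DSLR-to-phone pairs comes out much lower than on DSLR-to-DSLR pairs, that would support the double-fitting explanation.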