Currently it is possible to import only camera positions via flight log import. However, you can insert the camera orientations manually for each image.
I have imported the camera positions and orientations using a flight log, following the guide, but it doesn't improve the camera alignment results at all. The alignment RC estimates is still very poor (it can only register 5 of the 53 cameras I provided).
What I did was extract the camera locations (XYZ) and orientations (Euler angles) in OpenCV, by decomposing the projection matrices we had calibrated beforehand, and write them to a CSV (X, Y, Z, yaw, pitch, roll).
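For reference, here is a minimal sketch of that extraction step. The input dict, the CSV column order, the degree units, and the mapping of the per-axis angles to yaw/pitch/roll are my own assumptions, not anything RC prescribes:

```python
import csv

import cv2
import numpy as np


def projection_to_pose(P):
    """Split a 3x4 projection matrix P = K[R|t] into camera center and Euler angles."""
    # decomposeProjectionMatrix returns K, R, the camera center in homogeneous
    # coordinates, three per-axis rotation matrices, and Euler angles in degrees.
    K, R, C_h, _, _, _, euler = cv2.decomposeProjectionMatrix(P)
    C = (C_h[:3] / C_h[3]).ravel()      # dehomogenize the camera center
    rx, ry, rz = euler.ravel()          # rotations about the x, y and z axes
    # Assumed mapping: x -> pitch, y -> yaw, z -> roll. This may need to be
    # permuted or negated depending on the convention the importer expects.
    return C, ry, rx, rz


# Hypothetical input: image name -> 3x4 projection matrix from the prior calibration.
projections = {"img_000.jpg": np.hstack([np.eye(3), np.zeros((3, 1))])}

with open("flight_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "x", "y", "z", "yaw", "pitch", "roll"])
    for name, P in projections.items():
        C, yaw, pitch, roll = projection_to_pose(P)
        writer.writerow([name, C[0], C[1], C[2], yaw, pitch, roll])
```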
I wonder if it's possible to import the focal length and distortion coefficients as well and skip the alignment phase entirely. Or is there any way to improve the automatic calibration process? Currently the estimate given by RC is far poorer than COLMAP's on my dataset.
Also, I wonder which camera coordinate system RC uses. For now I assume it's the same as OpenCV's, where the camera looks along the +Z axis. If that's not the case, the priors I provided could be misleading and do more harm than good.
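For completeness, the OpenCV convention I am assuming is +X right, +Y down, +Z forward in the camera frame. A quick self-check I use (my own sanity check, nothing RC-specific) is to recover each camera's world-space viewing direction from the world-to-camera rotation and verify that it points toward the object:

```python
import numpy as np


def viewing_direction_world(R):
    """World-space forward axis of a camera, given its world->camera rotation R.
    In the OpenCV convention the camera looks along +Z of its own frame, so the
    forward axis in world coordinates is R.T @ [0, 0, 1] (the third row of R)."""
    return R.T @ np.array([0.0, 0.0, 1.0])


# Hypothetical camera on the ring at (2, 0, 0), constructed to look at the origin.
C = np.array([2.0, 0.0, 0.0])
R = np.array([[ 0.0, 1.0,  0.0],
              [ 0.0, 0.0, -1.0],
              [-1.0, 0.0,  0.0]])

to_object = -C / np.linalg.norm(C)                    # unit vector from camera to object
print(np.dot(viewing_direction_world(R), to_object))  # ~1.0 if the camera faces the object
```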
But even if I leave the orientations out and provide only the locations, the calibration quality doesn't improve either, which is really puzzling. The result is pretty poor without the flight log, which is why I tried to provide some priors in the hope of improving it.
After removing some cameras that have little overlap with the others, the result improves but is still quite poor. Is there an option to do the feature matching in an "exhaustive" fashion, as COLMAP does?
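(To clarify what I mean by "exhaustive": in COLMAP every image pair is matched against every other, e.g. via pycolmap as in the sketch below. This is just the comparison pipeline I am running, with placeholder paths, not something I expect RC to expose in exactly this form.)

```python
import pathlib

import pycolmap

# Rough sketch of the COLMAP pipeline used for comparison (paths are placeholders).
database_path = "colmap/database.db"
image_dir = "images/"
output_path = pathlib.Path("colmap/sparse")
output_path.mkdir(parents=True, exist_ok=True)

pycolmap.extract_features(database_path, image_dir)   # SIFT feature extraction
pycolmap.match_exhaustive(database_path)              # match every image pair
maps = pycolmap.incremental_mapping(database_path, image_dir, output_path)
```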
By the way, the cameras form a ring with the object in the center, but RC insists on placing them all on the same side for a reason I am not aware of. I increased the weights of the priors, but it doesn't help.
Regarding the camera coordinate system, it does look the same as OpenCV's, so I don't understand why the result gets worse after the optimization process.