I have an app that reprojects photogrammetry models into the photos to evaluate shape errors. I am testing with a set of 68 photos taken all around a statue at 4 levels. RS made a satisfactory model. With the .csv file exported in the ‘internal/external camera parameters’ format I get incorrect results no matter how I interpret the pose parameters, which are given as a position vector and 3 Euler angles. Calibration.xml says that the application order of the Euler angles should be z, y, x. It is not specified whether the rotation is around the world origin or the camera center, nor in which sense (world-to-camera or camera-to-world). Of the 4 combinations of these two choices, only one gives some correctly oriented views of the model; however, many are mirrored, and about half of the views that appear to contain the model actually have it behind the camera. Applying the angles in x, y, z order and in the reversed sense gives a similar result but with the model upside down. The camera intrinsics are nearly identical for all 68 views.
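For concreteness, this is roughly how I enumerate the four candidates (just a sketch; the function and variable names are my own, not from the export spec, and I am not sure whether the angles are meant intrinsically or extrinsically):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def candidate_projections(pos, angles_deg):
    """Yield the four world-point -> camera-frame mappings I have tried.

    pos: exported x, y, z position; angles_deg: the three exported Euler angles.
    """
    C = np.asarray(pos, dtype=float)
    # Calibration.xml says the application order is z, y, x.
    # Uppercase "ZYX" = intrinsic in SciPy; lowercase "zyx" would be extrinsic.
    R = Rotation.from_euler("ZYX", angles_deg, degrees=True).as_matrix()
    for R_wc in (R, R.T):                      # world->camera vs camera->world
        # rotation about the camera center: R_wc @ (X - C)
        yield lambda X, R_wc=R_wc: R_wc @ (np.asarray(X, dtype=float) - C)
        # rotation about the world origin:  R_wc @ X - C
        yield lambda X, R_wc=R_wc: R_wc @ np.asarray(X, dtype=float) - C
```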
My questions:
has it been verified that the poses exported in this format actually are correct?
what else can I try to get the correct projections?
I’ve checked the calibration.xml file, but there could be some issues with the exported data (the positions should be correct, but the angles may not follow the usual convention).
I have been trying hard to find the right way to apply the exported camera positions and rotations to the simple task of projecting model vertices into the source images, so far without luck. I’ve used various export formats, with and without applied transformations. In most cases the coordinate ranges of the model and of the camera positions agree on axis orientation. Yet when I project model points with rotation * (model point - camera position), they come out in unexpected places, evidently rotated by the camera azimuth around some unknown axis. Please help me understand what is going on.
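My projection step looks roughly like this (a sketch; K is an intrinsic matrix I assemble myself from the exported focal length and principal point):

```python
import numpy as np

def project(X, R, C, K):
    """Project a world point X into an image.

    R: assumed world-to-camera rotation, C: camera center in world coordinates,
    K: 3x3 intrinsic matrix built from the exported focal length / principal point.
    """
    x_cam = R @ (np.asarray(X, dtype=float) - C)   # rotation * (point - camera position)
    if x_cam[2] <= 0:
        return None                # point behind the camera (which I see for ~half the views)
    u, v, w = K @ x_cam
    return u / w, v / w            # pixel coordinates
```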
The Yaw, Pitch, Roll values are relative to the local North East Down (NED) system. Yaw is around the NED Z axis, pitch around the NED Y axis, and roll around the NED X axis. The Euler order is ZYX (X applied first).
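If it helps, composing that convention as rotation matrices would look something like this (just a sketch of the order described above, R = Rz(yaw) · Ry(pitch) · Rx(roll)):

```python
import numpy as np

def ned_rotation(yaw_deg, pitch_deg, roll_deg):
    """Yaw about Z, pitch about Y, roll about X of the local NED frame,
    composed in ZYX order (X applied first)."""
    y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0,        0.0,       1.0]])
    Ry = np.array([[ np.cos(p), 0.0, np.sin(p)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rx = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, np.cos(r), -np.sin(r)],
                   [0.0, np.sin(r),  np.cos(r)]])
    return Rz @ Ry @ Rx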
But probably the best option for you would be to export the model with cameras, open that model in Blender, and check the rotations there. Those rotations will likely work for you.
But my problem goes beyond the Euler angle convention. I have similar (though not identical) bad results using the full rotation/translation matrices exported in the Radiance Fields JSON format.
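For the JSON export, what I do is roughly this (a sketch; I am assuming the matrices are camera-to-world in the usual NeRF/OpenGL convention with the camera looking down -Z, and the key names below are from the standard transforms.json layout, so they may not match the actual export exactly):

```python
import json
import numpy as np

# Assumed axis flip from an OpenGL-style camera (-Z forward, +Y up)
# to an OpenCV-style camera (+Z forward, +Y down); wrong if the export
# already uses a computer-vision convention.
OPENGL_TO_OPENCV = np.diag([1.0, -1.0, -1.0])

def load_world_to_cam(path):
    with open(path) as f:
        meta = json.load(f)
    poses = {}
    for frame in meta["frames"]:
        c2w = np.asarray(frame["transform_matrix"], dtype=float)
        R_c2w, C = c2w[:3, :3], c2w[:3, 3]
        R_w2c = OPENGL_TO_OPENCV @ R_c2w.T      # world->camera rotation
        poses[frame["file_path"]] = (R_w2c, C)  # use with project(X, R_w2c, C, K)
    return poses
```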
When I export in Alembic and open in blender, the cameras are shown in what seems to be the correct positions, however the views from those cameras, as generated by Blender, do not correspond to the photos. The view directions are plausible, but the scale and rotation are wrong.
I would appreciate further advice, as my application really needs to be able to project 3D points between images. I am working with undistorted images, so the camera model is not an issue.
Regarding the Alembic format, have you tried the workflow from the attached tutorial? Following it, you’ll see the camera coverage of the exported model in Blender. This is tested regularly, and for me it is almost perfect:
It should. This model was originally created from portrait-orientation images, but landscape is probably being treated as the original orientation.
Can you check the Rotation of your images in RealityScan (IMAGE 2D/VIEW/Display/Rotation):
If you set it to Normal, how is your image oriented in 2D view?
But it also worked for another project where the portrait orientation is set as Normal.
After importing into Blender, have you changed the Resolution to the actual value? Also, for some images it is sometimes necessary to set the Sensor Fit to Horizontal or Vertical.
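Something like this in the Blender Python console can set both for all imported cameras at once (a rough sketch; the resolution values are placeholders, use your actual photo dimensions):

```python
import bpy

# Set the render resolution to the real photo size (example values only).
scene = bpy.context.scene
scene.render.resolution_x = 6000   # replace with your image width in pixels
scene.render.resolution_y = 4000   # replace with your image height in pixels

# Force the sensor fit on every imported camera; try 'VERTICAL' for portrait shots.
for obj in bpy.data.objects:
    if obj.type == 'CAMERA':
        obj.data.sensor_fit = 'HORIZONTAL'
```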