Attached are two orthographic renders of image-based reconstructions for the same subject. In the top render, I show PhotoScan results for the subject; note that the horizontal and vertical lines overlay the scene geometry fairly well. In the bottom render, I show the CR results (source: ~400 20MP photos).
In the CR model, there is an obvious ‘banana’ camber: the alignment drifts because both the image sequence and the subject itself are highly linear.
Any ideas how to minimize the camber in the CR model to more closely match the PhotoScan results?
Adding ground truth from laser scanning is an obvious route, but it is not always practical for us. Also, since these architraves are at the top of Egyptian temples, the point density from ground-based scans is less than optimal, and CR may have difficulty establishing correspondence between a fairly low-resolution .ptx file and high-resolution images.
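For a sense of the numbers, here is a back-of-envelope sketch of why ground-scan density suffers up there; the scanner step and geometry below are assumptions for illustration, not measurements from our site:

```python
import math

# Back-of-envelope sketch: ground-scan point spacing at architrave height.
# The scanner angular step and the geometry below are assumed values.
ANGULAR_STEP_DEG = 0.036   # assumed angular step between scanner pulses
STANDOFF_M = 10.0          # assumed horizontal distance from scanner to wall
HEIGHT_M = 15.0            # assumed architrave height above the scanner

slant_range = math.hypot(STANDOFF_M, HEIGHT_M)
spacing = 2.0 * slant_range * math.tan(math.radians(ANGULAR_STEP_DEG) / 2.0)
# Grazing incidence stretches the footprint further along the surface.
incidence = math.atan2(HEIGHT_M, STANDOFF_M)   # angle from the wall normal
spacing_on_surface = spacing / math.cos(incidence)
print(f"range {slant_range:.1f} m -> ~{spacing * 1000:.0f} mm point spacing, "
      f"~{spacing_on_surface * 1000:.0f} mm along the surface")
```

With numbers like these, the scan is coarse exactly where the photos are sharpest, which is part of why the .ptx-to-image correspondence is hard to establish.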
If anyone in the CR community is interested in helping, I can provide you the source data.
What I have found with shipwreck recording is that “banana-shaped” reconstructions are typically the result of recording only one “lane” across a site: with a single lane, your images only have forward overlap and no sideways overlap, which can throw off the camera calibration (I have also had this issue in PhotoScan). By ensuring that each part of your survey is covered with both forward and sideways overlap, the camera calibration can be calculated more accurately and your results are less likely to have weird deformations.
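To make that failure mode concrete, here is a minimal sketch (illustrative numbers only, not taken from this survey) of how a tiny uncorrected heading bias per image, the kind a slightly wrong self-calibration produces, accumulates along a single lane into an arc:

```python
import math

# Minimal sketch: pose drift along a single "lane" of photos.
# Assumption (illustrative only): each pairwise registration carries a
# small systematic heading bias, e.g. from an under-fitted lens model.
STEP = 1.0                 # metres between exposures
BIAS = math.radians(0.1)   # assumed 0.1 deg of uncorrected rotation per image

heading, x, y = 0.0, 0.0, 0.0
for _ in range(400):       # ~400 photos, as in the original data set
    heading += BIAS        # nothing closes the loop, so the bias accumulates
    x += STEP * math.cos(heading)
    y += STEP * math.sin(heading)

print(f"end heading: {math.degrees(heading):.0f} deg off axis")
print(f"lateral drift: {y:.1f} m over a {x:.1f} m lane")
```

With a second, sideways-overlapping lane, every image also matches features outside its own strip, so the adjustment can correct the calibration instead of absorbing the error as curvature.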
Go to ALIGNMENT -> SETTINGS -> DISTORTION MODEL and change it from the default BROWN3 to K + BROWN4 + TT.
Make a back-up save of the whole project, DELETE the COMPONENTS already there, then realign the project and check the results.
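For context, here is a minimal sketch of what the richer model estimates, assuming BROWN4 denotes four radial coefficients and TT the two tangential terms (CR's internal parameterisation may differ):

```python
def brown_distort(x, y, k=(0.0, 0.0, 0.0, 0.0), t=(0.0, 0.0)):
    """Apply Brown-Conrady distortion to normalized image coordinates.

    k: radial coefficients k1..k4 (BROWN4); t: tangential terms t1, t2 (TT).
    A three-coefficient model (BROWN3) is the same with k4 fixed to zero.
    """
    r2 = x * x + y * y
    radial = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3 + k[3] * r2**4
    x_d = x * radial + 2.0 * t[0] * x * y + t[1] * (r2 + 2.0 * x * x)
    y_d = y * radial + t[0] * (r2 + 2.0 * y * y) + 2.0 * t[1] * x * y
    return x_d, y_d
```

Residual lens distortion that the model cannot absorb tends to leak into the camera poses as a systematic bend, which is why a fuller model can help on long linear strips.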
What camera and lens are you using there?
Can you take a screenshot of your actual ALIGNMENT settings?
Thanks, but I’m not currently having any issues with curved data; I was just trying to give Kevin Cain some advice on image capture procedure to avoid the banana shape. But if I face this issue in the future I’ll try your settings.
Hi Thomas Van Damme
Sorry for mentioning you and not Kevin Cain… You pinned down the cause of the issue very well!
Kevin Cain, please read Thomas Van Damme’s comment and try my recommended ALIGNMENT settings.
Thank you Steph, Thomas Van Damme and Wishgranter,
Thomas, your text improves on my capsule description of the required ‘linear’ shooting pattern. Since this is a typical failure case, when I shot this horizontal subject I used a matrix approach, shooting vertical columns and horizontal rows to maximize the kind of connective ‘scaffolding’ you describe. Because PhotoScan’s native settings are more successful in these trials, hopefully CR can yield similar results. As I mentioned, only when combined (or compared) with ground truth can any image-based model be validated for accuracy.
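For anyone repeating this kind of matrix capture, here is a hypothetical station-spacing helper; the sensor size matches the A6000, but the distance and overlap values are placeholders, not the ones used on this survey:

```python
# Hypothetical station-spacing helper: find the grid step that preserves a
# chosen overlap in both directions. Distance and overlap are placeholders.
SENSOR_W_MM, SENSOR_H_MM = 23.5, 15.6   # APS-C sensor, as on the ILCE-6000
FOCAL_MM = 19.0
DISTANCE_M = 3.0                        # assumed camera-to-subject distance
OVERLAP = 0.7                           # 70 % overlap in both axes

footprint_w = SENSOR_W_MM / FOCAL_MM * DISTANCE_M   # footprint on the subject
footprint_h = SENSOR_H_MM / FOCAL_MM * DISTANCE_M
step_x = footprint_w * (1.0 - OVERLAP)  # spacing along a row
step_y = footprint_h * (1.0 - OVERLAP)  # spacing between rows/columns
print(f"footprint {footprint_w:.2f} x {footprint_h:.2f} m; "
      f"step {step_x:.2f} m across, {step_y:.2f} m up/down")
```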
Wishgranter, here are some salient values from the full set reported by ExifTool, below (a small extraction sketch follows the listing). I haven’t yet set up this camera (or lens), but I planned to follow your notes for that in subsequent tests:
Camera:
Make: SONY
Camera Model Name: ILCE-6000
Orientation: Horizontal (normal)
Focal Length: 19.0 mm
Full Image Size: 6000x4000
File Format: ARW 2.3.1
Megapixels: 24.0
Shutter Speed: 1/125
Circle Of Confusion: 0.020 mm
Field Of View: 65.5 deg
Focal Length: 19.0 mm (35 mm equivalent: 28.0 mm)
Hyperfocal Distance: 2.49 m
Lens:
Light Value: 12.3
Lens Type: E-Mount, T-Mount, Other Lens or no lens
Lens Spec: E 19mm F2.8
Lens Mount: E-mount
Lens Format: APS-C
Lens Spec Feature: E
Focal Length In 35mm Format: 28 mm
Lens Info: 19mm f/2.8
Lens Model: E 19mm F2.8
Lens ID: Sigma 19mm F2.8 [EX] DN
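For reference, these tags can be pulled in bulk via ExifTool’s JSON mode; a minimal sketch, assuming exiftool is on the PATH (the file name below is a placeholder):

```python
import json
import subprocess

# Minimal sketch: extract the salient tags with ExifTool's JSON output.
# Requires exiftool on the PATH; the image path below is a placeholder.
TAGS = ["-Make", "-Model", "-FocalLength", "-LensModel", "-LensID",
        "-FocalLengthIn35mmFormat", "-FieldOfView", "-HyperfocalDistance"]
out = subprocess.run(["exiftool", "-json", *TAGS, "DSC00001.ARW"],
                     capture_output=True, text=True, check=True)
for tag, value in json.loads(out.stdout)[0].items():
    print(f"{tag}: {value}")
```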
I will forward my alignment settings soon, once a current CR job is complete.
Thanks to all – changing the distortion model, as Wishgranter recommended, helped reduce the camber in the output surface, i.e. the progressively accumulating alignment error that appears when a data set covers a region of space along a single axis.
I also used the other alignment settings Wishgranter suggested, and some variants. I was surprised that the results from ~1,000 features/image were indistinguishable from the 8,000 features/image Wishgranter suggested, in terms of both alignment error and final polygon count. Usually, increasing feature points increases matches, and therefore yields a denser surface. Perhaps the meshing step does not weight match density very highly when computing the output mesh size.
If there is an in-depth discussion of the alignment settings already on the forum, I’d love to see it.
Features → Camera alignment.
More features → more, smaller features with bigger reprojection error.
Bigger reprojection error → lower quality reconstructed depth maps.
Depth map resolution → count of dense cloud points.
Low quality depth maps → diffuse dense cloud.
High quality depth maps → tight dense cloud with points that better follow surfaces.
Dense clouds from LQ and HQ depth maps can have the same point counts.
But in the polygon reconstruction step, points with big deviations will just be ignored, so as a result you will have a lower polygon count on the raw mesh. Plus, LQ depth maps add more error to the polygon placements. HQ → more polygons with better precision.
Something like this, as far as I understand how SfM/MVS works.
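To pin down the key term in that chain: the reprojection error is the pixel distance between where a triangulated point projects through a camera and where the feature was actually detected. A minimal sketch with an ideal pinhole camera; the matrices and coordinates are illustrative values, not from this project:

```python
import numpy as np

# Minimal sketch: reprojection error of one 3-D tie point in one camera.
# K, R, t, the point and the observation are all illustrative values.
K = np.array([[3450.0, 0.0, 3000.0],   # fx, skew, cx (pixels)
              [0.0, 3450.0, 2000.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # rotation, world -> camera
t = np.array([0.0, 0.0, 0.0])          # translation, world -> camera

X = np.array([0.2, -0.1, 5.0])         # triangulated 3-D point (metres)
observed = np.array([3140.0, 1925.0])  # detected feature position (pixels)

uvw = K @ (R @ X + t)                  # project into the image
projected = uvw[:2] / uvw[2]           # perspective divide -> pixels

print(f"reprojection error: {np.linalg.norm(projected - observed):.2f} px")
```

Bundle adjustment minimises this residual over all cameras and points; small, weakly localised features raise it, which then degrades the depth maps as described above.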