Hi everyone,
I’m trying to match a terrestrial LiDAR point cloud with one generated from aerial photogrammetry. I set them up in two separate projects:
- Project 1: Terrestrial LiDAR cloud.
- Project 2: Aerial photogrammetry cloud, constrained with about 10 GCPs.
After alignment, the aerial cloud matches the LiDAR cloud in position, but it shows a noticeable “banana” (bowing) deformation. I tried correcting this with the Brown4 with tangential2 distortion model, which reduced the bending but caused the alignment to split into too many components.
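For context, this is roughly how I'm quantifying the deviation between the two clouds outside RealityCapture (a minimal Python sketch using Open3D; the PLY file names are placeholders for exports of the two clouds):

```python
# Minimal sketch: cloud-to-cloud deviation between the LiDAR and aerial
# clouds, assuming both were exported as PLY. File names are placeholders.
import numpy as np
import open3d as o3d

lidar = o3d.io.read_point_cloud("lidar_terrestrial.ply")
aerial = o3d.io.read_point_cloud("aerial_photogrammetry.ply")

# For each aerial point, distance to its nearest LiDAR neighbor.
# A systematic growth of this distance toward the block edges points to
# bowing ("banana" deformation) rather than a simple rigid offset.
dists = np.asarray(aerial.compute_point_cloud_distance(lidar))
print(f"mean: {dists.mean():.3f} m, median: {np.median(dists):.3f} m, "
      f"95th pct: {np.percentile(dists, 95):.3f} m")
```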
To improve the alignment, I placed several control points at identical positions in both the LiDAR and drone clouds, then merged the two components. I expected to end up with a single component containing both the terrestrial and aerial images, correctly georeferenced and aligned. Instead, I got severe misalignments, and the merged component contained the cameras from only one of the clouds.
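As a sanity check on those shared control points, here is a minimal NumPy sketch (the CSV file names and formats are placeholders) that fits a rigid transform between the two point sets with the Kabsch method and prints per-point residuals, so a misplaced control point shows up before merging:

```python
# Minimal sketch: least-squares rigid (rotation + translation) fit between
# the same control points measured in the drone cloud and the LiDAR cloud.
# A single large residual flags a control point placed inconsistently.
import numpy as np

def fit_rigid(src: np.ndarray, dst: np.ndarray):
    """Kabsch: find R, t minimizing ||R @ src + t - dst|| over N x 3 arrays."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)   # 3x3 cross-covariance SVD
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Placeholder inputs: one row per control point, columns X, Y, Z.
drone_cp = np.loadtxt("drone_control_points.csv", delimiter=",")
lidar_cp = np.loadtxt("lidar_control_points.csv", delimiter=",")

R, t = fit_rigid(drone_cp, lidar_cp)
residuals = np.linalg.norm((drone_cp @ R.T + t) - lidar_cp, axis=1)
print("per-point residuals (m):", np.round(residuals, 3))
```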
What’s the best way to constrain the drone cloud to the LiDAR cloud so I can get a single, stable component that’s a true fusion of the two datasets? My end goal is to produce a mesh with a reasonable triangle count and apply a high-resolution texture.
Thanks in advance for any advice.