RealityScan image undistortion

Hello everyone,

I’m currently working with RealityScan 2.0 and wanted to share a workaround I’ve developed, along with a question about the final step of the process.

My Goal:
I’m using RealityScan for camera alignment with equirectangular cameras. Since RealityScan does not support equirectangular image alignment directly, I created a workaround:

The Workaround:

  • I split each equirectangular image into six cubemap faces.
  • I align only one face (typically the +Z direction) in RealityScan.
  • The alignment results are excellent: accurate and consistent.
  • I then export the scene in the COLMAP format (images.txt, cameras.txt, points3D.txt).
  • Finally, I manually add the five missing cubemap faces back into those files, using consistent intrinsics and poses.

The Problem:
The final step is undistorting the images, and this is where I’m stuck.

I’ve tried:

  • OpenCV’s undistort and custom pipeline implementations
  • COLMAP’s image_undistorter
  • Manual matching and parameter tweaking

None of these approaches reproduces RealityScan's undistortion. The field of view is always slightly off, and the result includes more content at the sides than it should, so the output never matches RealityScan's internal processing.

Is there a way to:

  • Use RealityScan’s internal undistortion logic on my exported dataset?
  • Or reimport the camera parameters and image data into RealityScan to generate properly undistorted images?

Any help or insight would be greatly appreciated. I believe everything else in the pipeline is working well — just this final undistortion step remains unresolved.

Thanks in advance,