How to configure RealityCapture to use user-provided calibration parameters

Hi,

We have a pre-calibrated multi-camera rig, and I have a few questions about configuring RealityCapture to use our calibration parameters.

  1. I have 12 cameras that are already calibrated. I want to configure RealityCapture to use my camera parameters and skip its own registration process. The camera orientations are represented in angle-axis form. How can I convert them to the yaw, pitch, and roll used in “Prior pose”, and vice versa (see the first sketch after this list for the conversion I am attempting)? And how can I disable the registration process so that the fixed, user-provided prior poses and calibration are used?

  2. The intrinsic parameters of my calibration (focal length, principal point x and y) are expressed in pixels, but the corresponding fields under “Prior calibration” in RealityCapture take values in millimeters. What do the focal length [35 mm] and the principal point in millimeters mean, and how do I convert a focal length in pixels to these units and vice versa (second sketch below)?

  3. I see that depth maps can be exported from RealityCapture. Are these depth maps generated by a stereo algorithm before meshing, or are they simply renderings of the final mesh from each viewpoint? How is each pixel of the depth map (EXR format) represented: as a floating-point number or an integer, and is the value in meters or millimeters (the third sketch below shows how I am inspecting them)?

  4. Are the exported depth maps free of lens distortion? Given the distortion parameters produced by the registration process, how can I generate a point cloud from a depth map exported from RealityCapture (fourth sketch below)?
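
For question 1, this is the conversion I am attempting, as a minimal sketch using SciPy. The intrinsic ZYX (yaw-pitch-roll) Euler convention is my assumption; whether it matches RealityCapture's “Prior pose” convention is part of what I am asking.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def angle_axis_to_ypr(rotvec):
    """Angle-axis vector (axis * angle, in radians) -> yaw, pitch, roll in degrees.

    Assumes an intrinsic Z-Y-X (yaw-pitch-roll) Euler convention; RealityCapture's
    actual axis order and sign conventions may differ.
    """
    yaw, pitch, roll = Rotation.from_rotvec(rotvec).as_euler("ZYX", degrees=True)
    return yaw, pitch, roll

def ypr_to_angle_axis(yaw, pitch, roll):
    """Inverse conversion: yaw, pitch, roll (degrees) -> angle-axis vector."""
    return Rotation.from_euler("ZYX", [yaw, pitch, roll], degrees=True).as_rotvec()

# Example: a 90-degree rotation about the z axis.
print(angle_axis_to_ypr(np.array([0.0, 0.0, np.pi / 2])))  # ~ (90.0, 0.0, 0.0)
```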
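
For question 2, my current understanding of the 35 mm equivalent is sketched below: the focal length in pixels is scaled as if the longer image side were the 36 mm width of full-frame film. The choice of the longer side, and measuring the principal point as an offset from the image center, are both my assumptions.

```python
def focal_px_to_35mm(f_px, width_px, height_px):
    """Focal length in pixels -> 35 mm equivalent focal length.

    Assumes the longer image side is mapped to the 36 mm width of
    full-frame (35 mm) film; I am unsure this matches RealityCapture.
    """
    return f_px * 36.0 / max(width_px, height_px)

def focal_35mm_to_px(f_35, width_px, height_px):
    """35 mm equivalent focal length -> focal length in pixels."""
    return f_35 * max(width_px, height_px) / 36.0

def principal_point_to_mm(cx_px, cy_px, width_px, height_px):
    """Principal point in pixels -> millimeter offsets from the image center,
    using the same 36 mm scaling (again an assumption on my part)."""
    mm_per_px = 36.0 / max(width_px, height_px)
    return ((cx_px - width_px / 2.0) * mm_per_px,
            (cy_px - height_px / 2.0) * mm_per_px)

# Example: a 6000 x 4000 image with a 4800 px focal length -> 28.8 mm equivalent.
print(focal_px_to_35mm(4800, 6000, 4000))
```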
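
For question 3, this is how I am currently inspecting the exported EXR files, using OpenCV. The file name is a placeholder, and reading the dtype and value range only lets me guess at the representation, which is why I am asking for the authoritative answer.

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # EXR I/O is disabled by default in recent OpenCV builds
import cv2
import numpy as np

# "depth.exr" is a placeholder for a depth map exported from RealityCapture.
depth = cv2.imread("depth.exr", cv2.IMREAD_UNCHANGED)

print(depth.dtype, depth.shape)    # e.g. float32 would indicate floating-point storage
finite = depth[np.isfinite(depth)]
print(finite.min(), finite.max())  # the value range hints at meters vs. millimeters
```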
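
For question 4, this is how I would back-project a depth map if my assumptions hold: that the exported values are Z-depth along the optical axis and that the image is already undistorted (fx, fy, cx, cy are the pixel-unit intrinsics from my own calibration). If distortion is still present, I expect I would first need to undistort the pixel grid with the calibration's distortion coefficients (for example via cv2.undistortPoints), which is exactly what I want to confirm.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a single-channel depth map into camera-space 3D points.

    Assumes a pinhole model with no distortion and that each pixel stores
    the Z distance along the optical axis (not the ray length).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = np.isfinite(z) & (z > 0)  # skip empty or invalid pixels
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Example with hypothetical intrinsics for a 640 x 480 depth map of constant depth 2.0.
cloud = depth_to_point_cloud(np.full((480, 640), 2.0),
                             fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```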

Thank you very much for helping me with these questions; your answers will go a long way toward helping me use RealityCapture for my needs. I look forward to your reply!

Best regards,
Kaiwen Guo

This inquiry has been answered in a support ticket.