Hello RealityScan Community,
I am working on importing my LiDAR scan data into RealityScan for processing and texturing, but I have some questions regarding the correct workflow. I would greatly appreciate any guidance.
My data comes from two related sources, and I’m unsure of the best path forward:
- Raw LiDAR data:
  - Device: Livox Mid-360
  - Data format: a sequence of per-frame `.pcd` point cloud files, along with a `scans_pos.json` file (which I believe contains the pose/trajectory information for each frame).
  - My question: I am unsure whether RealityScan supports importing this kind of sequential, frame-based point cloud with an external pose file. What is the recommended way to merge these individual frames into a single, spatially accurate point cloud that RealityScan can use?
- Processed color point cloud:
  - Processing pipeline: I used the FAST-LIVO2 algorithm to perform tightly coupled LiDAR-inertial-visual odometry fusion on the raw data, producing a complete, colorized, and globally consistent point cloud of the entire scene.
  - Data format: a single `.pcd` file (the fused, full-scene point cloud).
  - My question: Is importing this pre-reconstructed color point cloud the more straightforward approach for RealityScan? Are there any specific settings or considerations (e.g., point density, color format) I should be aware of during import for optimal results?
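For context on point 1, here is my current understanding of the frame-merging math, as a minimal pure-Python sketch. I am assuming `scans_pos.json` supplies a 4x4 world-from-sensor pose matrix per frame; that file layout is a guess on my part, and this is not meant as an official workflow:

```python
def apply_pose(points, pose):
    """Transform (x, y, z) points by a 4x4 row-major pose matrix."""
    R = [row[:3] for row in pose[:3]]   # rotation part (top-left 3x3)
    t = [row[3] for row in pose[:3]]    # translation part (top-right column)
    return [
        tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i] for i in range(3))
        for (x, y, z) in points
    ]

def merge_frames(frames, poses):
    """Bring every frame into the common world frame and concatenate."""
    merged = []
    for points, pose in zip(frames, poses):
        merged.extend(apply_pose(points, pose))
    return merged
```

If this is the right idea, my remaining question is only about which file format the merged result should be exported to for RealityScan.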
To summarize my core questions:
- What is the correct preprocessing and import workflow for the raw, sequential Livox data?
- For the complete color point cloud from FAST-LIVO2, is importing it directly the recommended approach in RealityScan? Are there any known best practices or success stories?
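In case it helps others answer question 2: if RealityScan turns out to need a plain text point list rather than a `.pcd`, this is roughly the conversion I would try. It is a sketch under two assumptions of mine: that the file has an ASCII `DATA` section with `FIELDS x y z rgb`, and that color is packed into a single float the way PCL typically writes it; whether RealityScan accepts such a list at all is exactly part of my question:

```python
import struct

def unpack_rgb(rgb_float):
    """PCL packs r, g, b into the bits of one float32; pull them back out."""
    bits = struct.unpack("I", struct.pack("f", rgb_float))[0]
    return (bits >> 16) & 0xFF, (bits >> 8) & 0xFF, bits & 0xFF

def pcd_ascii_to_xyzrgb(pcd_lines):
    """Convert ASCII-PCD lines (FIELDS x y z rgb) into 'x y z r g b' rows."""
    fields, in_data, rows = [], False, []
    for line in pcd_lines:
        if in_data:
            vals = line.split()
            x, y, z = (float(vals[fields.index(axis)]) for axis in "xyz")
            r, g, b = unpack_rgb(float(vals[fields.index("rgb")]))
            rows.append(f"{x} {y} {z} {r} {g} {b}")
        elif line.startswith("FIELDS"):
            fields = line.split()[1:]   # e.g. ['x', 'y', 'z', 'rgb']
        elif line.startswith("DATA"):
            in_data = True              # assuming 'DATA ascii', not binary
    return rows
```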
Thank you in advance for any advice, experience, links to tutorials, or pointers to official documentation you can share!