[Question] Beginner Advice: Workflow for importing Livox Mid-360 (GLIM SLAM) data + external camera into RealityScan 2.1?

Hi everyone,

I am a student currently learning about 3D scanning. I am very interested in the new SLAM support features in RealityScan 2.1 and would like to try them out for a school project.

However, I am a beginner when it comes to coding and hardware engineering, and I am struggling to understand the correct workflow. I would really appreciate some guidance from the community.

My Goal: To build a handheld scanning rig using a Livox Mid-360 and a consumer camera (to be purchased), and process the data in RealityScan 2.1.

Current Situation:

  • LiDAR: Livox Mid-360 (running GLIM SLAM for odometry).

  • Camera: Not purchased yet (planning to buy a 60 fps action cam or similar).

  • Skill Level: I can run the SLAM software, but I cannot write complex C++ code or build custom hardware sync circuits.

My Questions:

  1. Exporting Trajectory: I am using GLIM, but I don't know how to convert its output into the .csv or .log format that RealityScan requires. Is there a known tool or simple script to convert SLAM poses for RealityScan?

  2. Time Synchronization (Without Hardware Sync): Since I cannot create a hardware sync cable, I plan to manually sync the camera and LiDAR by covering the sensors at the start (visual cue). Is this “manual sync” method accurate enough for RealityScan’s SLAM alignment? Or will it fail without precise hardware timestamps?

  3. Camera Choice: Since I haven't bought the camera yet, do you have any recommendations that fit a student budget? Is a global-shutter camera absolutely necessary for this SLAM workflow, or can I use a high-fps rolling-shutter camera (like a GoPro)?
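To make question 1 concrete: as far as I can tell, GLIM can dump its trajectory in TUM format (one `timestamp x y z qx qy qz qw` line per pose), and I assume RealityScan wants something like a flat CSV. This is the rough conversion I have in mind (untested, and the column header below is just my guess, not an official RealityScan spec):

```python
import csv

def tum_to_csv(tum_path, csv_path):
    """Convert a TUM-format trajectory file into a simple CSV.

    TUM lines look like: timestamp x y z qx qy qz qw
    Lines starting with '#' are comments and get skipped.
    """
    with open(tum_path) as fin, open(csv_path, "w", newline="") as fout:
        writer = csv.writer(fout)
        # Guessed header -- I don't know RealityScan's exact expected columns.
        writer.writerow(["timestamp", "x", "y", "z", "qx", "qy", "qz", "qw"])
        for line in fin:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip comments and blank lines
            writer.writerow(line.split())
```

Is something this simple even on the right track, or does RealityScan need extra fields (camera IDs, per-image filenames, etc.)?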
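And for question 2, this is the offset arithmetic I'm planning after the lens-cover cue (the function name and parameters are my own sketch, not from any tool):

```python
def camera_time_to_lidar_time(frame_index, fps, cue_frame, cue_lidar_stamp):
    """Map a camera frame index onto the LiDAR's clock.

    cue_frame:       video frame where the lens-cover cue appears.
    cue_lidar_stamp: LiDAR timestamp (seconds) of that same cue.
    """
    t_cam = frame_index / fps                    # frame time on the camera's own clock
    offset = cue_lidar_stamp - cue_frame / fps   # clock offset measured at the shared cue
    return t_cam + offset
```

At 60 fps, being off by one frame when spotting the cue means roughly 17 ms of error (plus whatever clock drift accumulates over the scan). Is that within what RealityScan's SLAM alignment tolerates?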

I apologize if these are basic questions. I am trying to learn, but the technical documentation is a bit overwhelming for me. Any step-by-step advice would be incredibly helpful!

Thank you.