Can I Calibrate the Unit Scale of a Model Using GPS Coordinates from Photos in RealityCapture?

I am currently working on a stream scene modeling project. My workflow involves capturing a series of photos using a mobile phone while moving horizontally and recording GPS coordinates for each photo. I then use these images to generate a point cloud model.

To ensure that the final model accurately represents real-world dimensions, I need to calibrate its scale. Does RealityCapture offer a feature that automatically adjusts the model’s scale based on the GPS metadata of the photos, such as an option that can be enabled for automatic correction? Or is there a specific section where I need to manually input the necessary data for scale correction?

Your guidance would be greatly appreciated.

Hello @konoodioda
When you use georeferenced images, their coordinates are used during processing and the resulting outputs are georeferenced as well. To take advantage of this, you need to set both the Project and the Output coordinate system to the ones you want.
One caveat: GPS metadata from mobile devices is usually not very precise, so the absolute accuracy of the result can suffer because of that.
For more information, you can check this tutorial: https://www.youtube.com/watch?v=qb4EPyLBRHM or this Help article: RealityCapture Help
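As a side note, if you want to sanity-check the GPS metadata in your photos before importing them, EXIF typically stores coordinates as degrees/minutes/seconds plus a hemisphere reference. Here is a minimal, hedged sketch (not RealityCapture-specific; the function name is illustrative) that converts such values to signed decimal degrees so you can compare them against a map:

```python
# Sketch (not part of RealityCapture): convert EXIF-style GPS values,
# stored as degrees/minutes/seconds plus a hemisphere reference
# ("N"/"S"/"E"/"W"), into signed decimal degrees for a quick sanity check.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Return signed decimal degrees; south/west hemispheres are negative."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Example: 40 deg 26' 46.302" N converts to roughly 40.446195 decimal degrees
print(dms_to_decimal(40, 26, 46.302, "N"))
```

Plotting a few converted points on a map is a quick way to spot photos whose phone GPS fix was badly off before they skew the georeferencing.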


In other words: as long as the GPS coordinates in my photos are accurate, can I generate a model whose dimensions exactly match reality? Thank you.