I am currently working on a stream scene modeling project. My workflow involves capturing a series of photos using a mobile phone while moving horizontally and recording GPS coordinates for each photo. I then use these images to generate a point cloud model.
To ensure that the final model accurately represents real-world dimensions, I need to calibrate its scale. Does RealityCapture offer a feature that automatically adjusts the model's scale based on the photos' GPS metadata, for example an option I can enable for automatic correction? Or is there a specific section where I need to enter the necessary data manually?
Hello @konoodioda
When you use georeferenced data, the coordinates are used during processing and the resulting outputs are georeferenced. But to use this, you need to set the Project and Output coordinate systems to the ones you want.
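For intuition, here is a small Python sketch (using pyproj, outside of RealityCapture) of what that setting implies: the WGS 84 positions from the phone's EXIF map into a projected, metric coordinate system, which is what gives the output real-world units. The EPSG code for the UTM zone and the GPS values below are example assumptions; pick the zone that covers your site.

```python
# Sketch (not RealityCapture's API): converting phone GPS positions
# from WGS 84 (EPSG:4326) into a metric projected coordinate system.
from pyproj import Transformer

# Example EPSG codes -- replace 32633 (UTM zone 33N) with the zone
# covering your capture site.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)

# Hypothetical per-photo GPS tags: (longitude, latitude) in degrees.
photo_gps = [(15.0000, 50.0000), (15.0003, 50.0000)]

xy = [to_utm.transform(lon, lat) for lon, lat in photo_gps]
dx = xy[1][0] - xy[0][0]
dy = xy[1][1] - xy[0][1]
print(f"baseline between the two photos: {(dx**2 + dy**2) ** 0.5:.2f} m")
```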
The catch is that the geolocation data from mobile devices is not very precise, so you may run into accuracy issues because of that.
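A rough back-of-the-envelope sketch of why that matters (all numbers below are assumptions, not measurements):

```python
# Sketch: how per-photo GPS noise limits the scale accuracy you can
# expect. All numbers are assumptions, not measured values.
gps_error_m = 4.0   # typical-ish phone GPS error (assumed)
baseline_m = 20.0   # distance covered while capturing (assumed)

# Worst case: the two endpoint positions are off in opposite
# directions, so the measured baseline can be wrong by 2 * gps_error_m.
worst_scale_error = 2 * gps_error_m / baseline_m
print(f"up to ~{worst_scale_error:.0%} scale error over {baseline_m:.0f} m")
# -> up to ~40% over a 20 m baseline; a longer baseline (or averaging
#    over many photos) shrinks the relative error.
```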
For more information you can check this tutorial: https://www.youtube.com/watch?v=qb4EPyLBRHM or this Help article: RealityCapture Help
In other words, as long as the GPS coordinates in my photos are accurate, can I generate a model whose dimensions exactly match reality? Thank you.
I suppose the coordinates will never be accurate enough to get exactly the same dimensions as in reality. There will always be some error (even with more precise measurement equipment).
But you can include a scale bar or ruler in your capture to validate the result or use it as a reference; a sketch of the idea is below.
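As a sketch of the idea in plain NumPy (this shows the underlying geometry, not RealityCapture's internals; the point cloud and reference endpoints are made up):

```python
import numpy as np

# Sketch (not RealityCapture's internals): scaling a point cloud so a
# reference of known real-world length measures correctly.
def scale_to_reference(points, p_a, p_b, true_length_m):
    """Uniformly scale `points` so the distance between the two
    reference endpoints p_a and p_b equals true_length_m."""
    measured = np.linalg.norm(np.asarray(p_b) - np.asarray(p_a))
    s = true_length_m / measured
    return points * s, s

# Hypothetical data: model units are arbitrary before scaling.
cloud = np.random.rand(1000, 3) * 10.0
a, b = cloud[0], cloud[1]   # endpoints of the ruler in the model
scaled, s = scale_to_reference(cloud, a, b, true_length_m=5.0)
print(f"applied uniform scale factor: {s:.4f}")
```

In RealityCapture itself you would express the reference through control points rather than by rescaling the cloud yourself; the underlying math is the same, though.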
If I add two control points with precise coordinates, for example set exactly 5 meters apart, will the model's dimensions be corrected accordingly? Thanks a lot.