I am trying to perform geotagged reconstruction in RealityScan. After the reconstruction is complete, I want to load the model directly into a map engine such as Cesium, with the model keeping its true real-world position and orientation.
I capture the images with an iPhone and wrote my own logic to record the GPS position and camera rotation for each image. After generating the corresponding XMP sidecars and importing them, I found that the reconstructed model's geographic position is correct, but its orientation is clearly wrong.
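For context, this is roughly how I write the per-image sidecar. It is a minimal sketch in Swift: the `xcr` namespace and the `Rotation`/`Position`/`Coordinates`/`PosePrior` fields follow the Capturing Reality XMP examples I based my code on, but please treat the exact attribute set and value conventions here as my assumptions, not the authoritative format.

```swift
import Foundation
import simd

/// Minimal sketch of the XMP sidecar I generate next to each image.
/// `rotation` is a 3x3 world-to-camera matrix written row-major, and
/// `position` is the camera center in the target coordinate system.
/// Whether "absolute" here means what I think it means to RealityScan
/// is exactly my question.
func xmpSidecar(rotation r: simd_float3x3, position p: SIMD3<Double>) -> String {
    // simd matrices are column-major, so index as [column][row]
    // to emit the matrix in row-major order.
    let rowMajor = (0..<3).map { row in
        (0..<3).map { col in String(r[col][row]) }.joined(separator: " ")
    }.joined(separator: " ")
    return """
    <x:xmpmeta xmlns:x="adobe:ns:meta/">
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
        <rdf:Description xcr:Version="3" xcr:PosePrior="initial"
            xcr:Coordinates="absolute"
            xmlns:xcr="http://www.capturingreality.com/ns/xcr/1.1#">
          <xcr:Rotation>\(rowMajor)</xcr:Rotation>
          <xcr:Position>\(p.x) \(p.y) \(p.z)</xcr:Position>
        </rdf:Description>
      </rdf:RDF>
    </x:xmpmeta>
    """
}
```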
For comparison, the XMP data written by the official RealityScan iOS app records relative rotations in local spatial coordinates, and the model poses reconstructed from it are all fine.
My rotation data instead uses the absolute rotation (gravity and heading) provided by iOS ARKit, and with it the reconstructed poses come out wrong.
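In case it helps diagnose the issue, this is the conversion I believe is needed (and may be getting wrong). It is a sketch under two assumptions: first, that ARKit's `.gravityAndHeading` world frame is x east, y up, z south, with the camera's local axes being x right, y up, z backward; second, that the XMP rotation should be world-to-camera in an east-north-up frame with the usual computer-vision camera axes (x right, y down, z forward). The second assumption is my guess about what RealityScan expects, which may be the root of my problem.

```swift
import ARKit
import simd

// Sketch: turn an ARKit camera pose (session configured with
// worldAlignment = .gravityAndHeading) into a world-to-camera rotation
// for the XMP sidecar. Target convention (assumed, not confirmed):
// ENU world (x east, y north, z up), CV camera (x right, y down, z forward).
func worldToCameraENU(_ camera: ARCamera) -> simd_float3x3 {
    let t = camera.transform  // camera-to-world, ARKit conventions

    // Upper-left 3x3 of the 4x4 transform: camera-to-world rotation in
    // ARKit's frame (world: x east, y up, z south; camera: z backward).
    let rArkit = simd_float3x3([
        simd_make_float3(t.columns.0),
        simd_make_float3(t.columns.1),
        simd_make_float3(t.columns.2)])

    // A maps ENU axes into ARKit world axes:
    // east -> +x, up -> +y, north -> -z.
    let a = simd_float3x3(rows: [
        SIMD3<Float>(1, 0, 0),
        SIMD3<Float>(0, 0, 1),
        SIMD3<Float>(0, -1, 0)])

    // C flips the ARKit camera axes to CV camera axes (negate y and z).
    let c = simd_float3x3(diagonal: SIMD3<Float>(1, -1, -1))

    // world-to-camera (ENU -> CV camera) = C * R_arkit^T * A
    return c * rArkit.transpose * a
}
```

The position prior comes from the paired CLLocation fix (latitude/longitude/altitude), not from ARKit's translation, which is only relative to the session origin.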
Does the official RealityScan release provide a way to reconstruct using absolute rotation data, or is there a problem with my workflow?