Problem with Model Rotation in RealityScan When Using Absolute Rotation Data (ARKit) Instead of Relative Rotation

I am trying to do geotagged reconstruction in RealityScan. Once the reconstruction is complete, I want to load the model directly into a map engine such as Cesium and have it keep its true real-world pose, i.e. the correct orientation and position on the map.

I shoot with an iPhone and have written my own logic to record the GPS position and camera rotation for each image. After generating the corresponding XMP sidecar files and importing them, the reconstructed model's geographic location is correct, but its rotation is quite odd.
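For context, this is roughly how the pose priors are recorded. A minimal sketch, assuming ARKit's absolute world alignment; `PoseRecorder` and `capturePose` are my own hypothetical names:

```swift
import ARKit
import CoreLocation
import simd

// Minimal sketch: run ARKit with an absolute (compass-aligned) world frame
// and pair each photo with the current camera rotation and GPS fix.
// With .gravityAndHeading, ARKit's world axes are fixed to east (+x),
// up (+y), and south (+z), so frame.camera.transform is an absolute
// camera-to-world pose rather than a relative one.
final class PoseRecorder: NSObject, CLLocationManagerDelegate {
    let session = ARSession()
    let locationManager = CLLocationManager()
    private var lastLocation: CLLocation?

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.worldAlignment = .gravityAndHeading   // absolute rotation source
        session.run(config)
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
    }

    // Called at each shutter press to pair the photo with its pose prior.
    func capturePose() -> (rotation: simd_float3x3, gps: CLLocation?)? {
        guard let t = session.currentFrame?.camera.transform else { return nil }
        let rotation = simd_float3x3(columns: (   // upper-left 3x3 = rotation
            simd_make_float3(t.columns.0),
            simd_make_float3(t.columns.1),
            simd_make_float3(t.columns.2)))
        return (rotation, lastLocation)
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        lastLocation = locations.last
    }
}
```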

I compared my data with the XMP collected by the official RealityScan iOS app. The official app records relative rotations in local spatial coordinates, and the resulting model poses are all fine.

My data instead uses the absolute rotations (gravity- and heading-aligned) provided by iOS ARKit, and the resulting model poses come out very strange.
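For illustration, this is the kind of frame conversion that may be involved. A sketch under two unverified assumptions that would need checking against the RealityScan XMP documentation: that the pipeline expects rotations in an east-north-up (ENU) world frame, and that the XMP stores a world-to-camera rotation, whereas ARKit's .gravityAndHeading frame is east-up-south (EUS) and its transform is camera-to-world:

```swift
import simd

// Sketch of a frame conversion, under two unverified assumptions:
// (1) the geotagged pipeline expects rotations in an east-north-up (ENU)
//     world frame, while ARKit's .gravityAndHeading frame is
//     east-up-south (EUS);
// (2) the XMP rotation is world-to-camera, while ARKit's transform is
//     camera-to-world.
func enuWorldToCamera(fromARKit cameraToWorldEUS: simd_float3x3) -> simd_float3x3 {
    // Change of basis EUS -> ENU: east stays x, north = -z_EUS, up = y_EUS.
    let eusToENU = simd_float3x3(rows: [
        simd_float3(1, 0,  0),   // ENU x (east)  =  EUS x
        simd_float3(0, 0, -1),   // ENU y (north) = -EUS z
        simd_float3(0, 1,  0),   // ENU z (up)    =  EUS y
    ])
    let cameraToWorldENU = eusToENU * cameraToWorldEUS
    return cameraToWorldENU.transpose   // a rotation's inverse is its transpose
}
```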

I would like to ask whether the desktop version of RealityScan provides a way to reconstruct using absolute rotation data, or whether there is a problem with my workflow.

I am also curious how RealityScan ensures that the geographic orientation of the model in Cesium is completely accurate.

Hi @bning_j

If your images contain gravity information, you can use the setCamerasGravityDirection command to apply it during alignment.
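For reference, a hypothetical invocation, assuming the chained dash-command pattern of the RealityScan/RealityCapture CLI; the exact parameters of setCamerasGravityDirection and the project file name may differ, so please check the CLI help:

```
RealityScan.exe -addFolder "C:\scan\images" ^
                -setCamerasGravityDirection ^
                -align ^
                -save "C:\scan\project.rcproj"
```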

Also, to place the created model with the proper orientation, the position information of the cameras is enough: once several non-collinear cameras are geotagged, the alignment can recover the model's absolute orientation and scale from their positions alone.

You don't need your custom XMPs to achieve the wanted result; the GPS data in the images' EXIF should be enough.
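If it helps, this is the kind of EXIF GPS writing that should suffice on the capture side. A sketch using Apple's ImageIO; `writeGPS` is a hypothetical helper name, and it is worth verifying the resulting metadata with an EXIF viewer:

```swift
import Foundation
import ImageIO
import CoreLocation
import UniformTypeIdentifiers

// Hypothetical helper: copy a JPEG while embedding GPS EXIF tags, so the
// photogrammetry pipeline can georeference from EXIF alone (no custom XMP).
func writeGPS(from sourceURL: URL, to destinationURL: URL, location: CLLocation) throws {
    guard let src = CGImageSourceCreateWithURL(sourceURL as CFURL, nil),
          let dst = CGImageDestinationCreateWithURL(destinationURL as CFURL,
                                                    UTType.jpeg.identifier as CFString, 1, nil)
    else { throw CocoaError(.fileReadUnknown) }

    let gps: [CFString: Any] = [
        kCGImagePropertyGPSLatitude: abs(location.coordinate.latitude),
        kCGImagePropertyGPSLatitudeRef: location.coordinate.latitude >= 0 ? "N" : "S",
        kCGImagePropertyGPSLongitude: abs(location.coordinate.longitude),
        kCGImagePropertyGPSLongitudeRef: location.coordinate.longitude >= 0 ? "E" : "W",
        kCGImagePropertyGPSAltitude: abs(location.altitude),
        kCGImagePropertyGPSAltitudeRef: location.altitude >= 0 ? 0 : 1,  // 0 = above sea level
    ]

    // Properties given here are merged into (and override) the source metadata.
    let props: [CFString: Any] = [kCGImagePropertyGPSDictionary: gps]
    CGImageDestinationAddImageFromSource(dst, src, 0, props as CFDictionary)
    guard CGImageDestinationFinalize(dst) else { throw CocoaError(.fileWriteUnknown) }
}
```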