Hi, I’m using RealityScan to merge two datasets — one captured with a DSLR and one with a drone. I’ve joined the two components as usual, but since the object is a 1:1 scale statue, I need a bit more refinement.
I’ve noticed some slight discrepancies between the two components — nothing major, but enough to consider whether I can force the drone component to align more precisely with the DSLR data.
Any advice would be greatly appreciated.
Thanks a lot!
Hello @Insideout_78
Control points are probably the only option here. Can you show some of the discrepancies? Have you used targets in your project?
On the shoulders, as you can see, the drone data generates some issues. I think it’s difficult to reduce this problem using control points, since the surface is quite homogeneous.
By the way, I usually correct the DSLR lens distortion in Lightroom before processing.
In your opinion, is that the right approach?
Thanks!
This is a clear misalignment. As I wrote, control points can help here, as can using calibrated cameras with known parameters. It also depends on the capture path and the overlap between the datasets.
That is not an ideal workaround; you should let RealityScan solve the distortion itself. If you do correct it beforehand, you should set the distortion parameters for the images (when you know the values).
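For context, "known values" here means the coefficients of a lens distortion model (e.g. the radial terms k1, k2 in a Brown-style model). A minimal generic sketch of how those coefficients warp normalized image coordinates — this is an illustration only, not RealityScan's API, and the coefficient values are made up:

```python
def distort(x, y, k1, k2):
    """Apply radial (Brown-style) distortion to normalized coordinates (x, y).

    k1, k2 are the radial distortion coefficients a calibration tool reports;
    these are the kind of per-lens values you would enter for your images.
    """
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# A point near the image edge is displaced noticeably; the center is not.
# Coefficient values here are illustrative, not from any real lens.
print(distort(0.5, 0.5, k1=-0.1, k2=0.01))  # barrel distortion pulls it inward
print(distort(0.0, 0.0, k1=-0.1, k2=0.01))  # center stays at (0.0, 0.0)
```

If Lightroom has already applied a correction like this, the software's own self-calibration can fight against it, which is why solving distortion inside RealityScan (or supplying the exact known coefficients) is preferred.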