Trouble reading high-resolution drone imagery metadata to prevent doming

Hi everyone,

I am new to the forum, but hoping that someone is able to help me with an inquiry. I have imagery from a drone flight (about 300 images) taken with a DJI Mavic 3 at 75/60% overlap and 120 m altitude, without GCPs. When I simply import the images into RealityScan using the Exif/XMP data, the resulting reconstruction is severely domed/convex where it should not be, most probably as a result of lens distortion.

I am trying to fix this issue by properly importing the extensive metadata that the DJI Mavic 3 images provide on distortion coefficients, location, rotation, etc., so that the doming does not occur. For instance, each image contains distortion coefficients:

('XMP:DewarpData', '2022-06-08;3713.290000000000,3713.290000000000,7.020000000000,-8.720000000000,-0.112575240000,0.014874430000,-0.000085720000,0.000000100000,-0.027064110000'), where the fields are (yyyy-mm-dd; fx,fy,cx,cy,k1,k2,p1,p2,k3).
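For reference, here is how I currently parse that string into named values in Python (a minimal sketch assuming the exiftool field order shown above):

# Minimal sketch: parse the DJI XMP:DewarpData string shown above into
# named intrinsics. Field order follows (yyyy-mm-dd; fx,fy,cx,cy,k1,k2,p1,p2,k3).
dewarp = ("2022-06-08;3713.290000000000,3713.290000000000,"
          "7.020000000000,-8.720000000000,-0.112575240000,"
          "0.014874430000,-0.000085720000,0.000000100000,-0.027064110000")

date, values = dewarp.split(";")
fx, fy, cx, cy, k1, k2, p1, p2, k3 = map(float, values.split(","))

# fx/fy are focal lengths in pixels; cx/cy are principal-point offsets in
# pixels from the image centre; k1-k3 are radial and p1/p2 tangential terms.
print(fx, cx, cy, k1, k2, k3)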

Still, I am having a lot of trouble converting the metadata stored within the .jpg images into an XMP structure that RealityScan reads accurately. I found this sample metadata on the help page (Metadata (XMP) files - RealityScan Help):

<x:xmpmeta xmlns:x="adobe:ns:meta/">
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<rdf:Description xmlns:xcr="http://www.capturingreality.com/ns/xcr/1.1#" xcr:Version="3"
xcr:PosePrior="initial" xcr:Rotation="-1 0 0 0 0 -1 0 -1 0" xcr:Coordinates="absolute"
xcr:DistortionModel="division" xcr:DistortionCoeficients="0 0 0 0 0 0"
xcr:FocalLength35mm="18" xcr:Skew="0" xcr:AspectRatio="1" xcr:PrincipalPointU="0"
xcr:PrincipalPointV="0" xcr:CalibrationPrior="initial" xcr:CalibrationGroup="-1"
xcr:DistortionGroup="-1" xcr:Rig="{1E204070-A17D-444E-9455-493C15B37B93}"
xcr:RigInstance="{2DC9F356-432F-4234-9148-DC2655788342}" xcr:RigPoseIndex="3"
xcr:InTexturing="1" xcr:InMeshing="1">
<xcr:Position>0.262424475861358 -2.26397531586648 7.03879070281982</xcr:Position>
</rdf:Description>
</rdf:RDF>
</x:xmpmeta>
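To experiment, I have tried generating a sidecar per image with a small script along these lines (a rough sketch; the xcr values and the "brown3" model name are placeholders/assumptions, since mapping my DJI values onto them correctly is exactly what I am unsure about):

# Rough sketch: write a RealityScan-style XMP sidecar next to each image.
# The xcr attribute values below are placeholders, and I am assuming
# "brown3" mirrors the Brown 3 option in the UI and that the coefficient
# order is "k1 k2 k3 k4 t1 t2".
from pathlib import Path

XMP_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description xmlns:xcr="http://www.capturingreality.com/ns/xcr/1.1#"
        xcr:Version="3" xcr:DistortionModel="brown3"
        xcr:DistortionCoeficients="{k1} {k2} {k3} 0 0 0"
        xcr:FocalLength35mm="{f35}" xcr:CalibrationPrior="initial">
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>
"""

def write_sidecar(image_path: str, k1: float, k2: float, k3: float, f35: float) -> None:
    # The sidecar must share the image's base name, with an .xmp extension.
    xmp_path = Path(image_path).with_suffix(".xmp")
    xmp_path.write_text(XMP_TEMPLATE.format(k1=k1, k2=k2, k3=k3, f35=f35))

write_sidecar("DJI_0001.JPG", k1=-0.11257524, k2=0.01487443, k3=-0.02706411, f35=24.3)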

I still have not figured out exactly which values belong in some of these parameters, and what I need to provide based on the metadata within my DJI Mavic 3 jpg files. Has anyone here done this before and is able to help me with it? I could start by providing some metadata snippets if needed.

Thanks a lot for the help already, I appreciate it a lot! Sincerely, Jasper

Hello Jasper,

there is an article on how to work with such datasets: Banana Effect - What To Do If My Model is Bent | Tutorial

I would use only the distortion parameters, directly in the application (this is basically one of the approaches proposed in the article).

Just import the images into RealityScan, select all images in the 1Ds view, and set Prior calibration and Prior lens distortion according to your values. Note, for example, that your f is given in pixels, while RealityScan needs the value in mm in 35mm format.

Then just align the data and the banana effect should be reduced.
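For example (a sketch; I am assuming the 35mm value is scaled against the 36 mm full-frame width, and 5280 px as your image width):

# Sketch of the conversion: focal length in pixels -> 35mm-equivalent mm.
# Assumes the 35mm value is scaled against the 36 mm full-frame width;
# 5280 px is assumed as the DJI Mavic 3 wide camera's image width.
fx_px = 3713.29          # from XMP:DewarpData
image_width_px = 5280
f35_mm = fx_px * 36.0 / image_width_px
print(f35_mm)            # ~25.3 mm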

Hi Ondrej, thanks for your very quick response! Sorry for re-iterating your response, but to make sure I understand it correctly… if I summarise correctly, you suggest to:

  1. Not write and import a separate metadata XMP for each image based on the information within each JPG's metadata, but rather manually insert only the 'Principal point x [mm]', 'Principal point y [mm]', and distortion coefficients (radial 1-4, tangential 1-2) from the metadata, which should be the same across all images?
  2. Recalculate the principal points (x, y) from pixels to mm based on the sensor information, as requested by the RealityScan prior calibration tab (see the sketch below)?
  3. Leave all other parameters the same as when imported?

Then, group and lock the camera parameters for all images, and the alignment should show less doming?
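For step 2, I would compute it roughly like this (a sketch; the 17.3 mm sensor width and 5280 px image width are my assumptions for the Mavic 3 wide camera):

# Sketch for step 2: convert the DewarpData principal-point offsets
# (cx, cy, in pixels from the image centre) into mm on the sensor.
# The 17.3 mm sensor width and 5280 px image width are assumptions.
cx_px, cy_px = 7.02, -8.72
image_width_px = 5280
pixel_pitch_mm = 17.3 / image_width_px   # mm per pixel
cx_mm = cx_px * pixel_pitch_mm
cy_mm = cy_px * pixel_pitch_mm
print(cx_mm, cy_mm)                      # ~0.023 mm, ~-0.029 mm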

Thanks a lot for helping out, again.

Separated XMPs are mostly for a regulated environment where you don't change the camera positions between different sessions, which is not your case.

I suppose that is the optimal way to solve your issues.

Maybe it won't be necessary to lock the camera parameters (I would do it for a calibrated camera), but it is possible that you'll need to do so.


Thanks Ondrej! By setting the distortion model to Brown 3 and using only Radial 1, 2, and 3 from the Dewarp parameters in my DJI image metadata, the distortion went away! 🙂
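In case it helps others, the mapping I used was simply the radial terms from DewarpData (values from the string quoted earlier in the thread):

# Mapping used: DewarpData radial terms -> RealityScan Brown 3 Radial 1-3.
k1, k2, k3 = -0.11257524, 0.01487443, -0.02706411
radial1, radial2, radial3 = k1, k2, k3   # tangential terms left at 0
print(radial1, radial2, radial3)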

However, I have one last inquiry that I hope you are willing and able to help me with.

I am trying to see how I can also include the multispectral (MS) imagery from the DJI Mavic 3 Multispectral (R, G, RE, NIR) in RealityScan, even though it is not formally part of the current processing workflow. In particular, I hope to achieve pixel-by-pixel alignment between the RGB and MS orthomosaics for subsequent classification of vegetation.

Currently, I have stacked the multispectral bands of each individual image using a feature-matching algorithm in OpenCV/Python, roughly as sketched below.
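This is roughly the stacking step (a sketch with ORB feature matching and a homography; the file names are made up, and real Mavic 3M band file suffixes may differ):

# Rough sketch of the band stacking: align one MS band onto a reference
# band with ORB feature matching + a RANSAC homography.
import cv2
import numpy as np

ref = cv2.imread("DJI_0001_G.TIF", cv2.IMREAD_GRAYSCALE)   # reference band
mov = cv2.imread("DJI_0001_NIR.TIF", cv2.IMREAD_GRAYSCALE) # band to align

orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(mov, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:500]

src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Warp the moving band into the reference band's pixel grid, then stack.
aligned = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
stack = np.dstack([ref, aligned])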

I am now trying to find a way to use and project the camera poses from the RGB data onto the stacked multispectral bands. Yet I cannot find out whether this is possible, or whether it would be more practical to run the alignment separately for the multispectral data; the latter, however, might make pixel-by-pixel alignment more difficult.

Thanks again for considering helping, I appreciate it a lot!

Hi Jasper, I am glad the workflow worked for you.
Unfortunately it is not possible to use such data in RealityScan directly.
You can use the layers workflow, but for that there shouldn't be any movement between the layers (i.e., the RGB and MS images should be taken from the same position).