Leave Dewarp unchecked! Very important! Let RC do its job.
You must take photos in manual mode, because in the other modes ISO is locked to auto. You know what happens when the ISO on DJI drones automatically jumps to a value of 200+.
Set Absolute pose to Position only (not orientation)
Pose accuracy set to custom values; let RTK do its job. My settings: Lat acc: 0.03, Long acc: 0.03, Alt acc: 0.06 (metres).
With those settings I was able to achieve ±3 cm overlap accuracy between two or more runs of the same flight plan.
Everything else resulted in ±15 cm position and deformation accuracy.
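As a rough way to quantify that overlap accuracy yourself, you can compare coordinates of the same points measured in models built from two repeats of the flight plan. A minimal Python sketch (the point lists here are made-up illustrative numbers, not real survey data):

```python
import math

def rmse_3d(pts_a, pts_b):
    """Root-mean-square 3D distance between matched point pairs."""
    sq = [sum((a - b) ** 2 for a, b in zip(pa, pb))
          for pa, pb in zip(pts_a, pts_b)]
    return math.sqrt(sum(sq) / len(sq))

# Made-up coordinates (m) of the same check points measured in models
# built from two runs of the same flight plan.
flight_a = [(10.000, 20.000, 5.000), (30.000, 40.000, 5.100)]
flight_b = [(10.012, 19.991, 5.028), (30.009, 40.015, 5.074)]

print(f"overlap RMSE: {rmse_3d(flight_a, flight_b) * 100:.1f} cm")
```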
Here is what I am wondering: what option should I set under Inputs → Prior Lens Distortion → Camera model? By default, the value is set to No lens distortion. Is this overridden by Alignment settings → Advanced → Distortion model?
Hi ArchCro,
at the beginning of the process RC considers the images as having no distortion. During alignment, the distortion is computed with the model set in the advanced settings (Brown 3 by default) and its parameters are estimated. For most cases this is enough; if you want slightly better results, you can use a model with more parameters.
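For reference, the Brown model mentioned above applies a polynomial radial scaling to the image coordinates. A minimal sketch of the 3-parameter radial part (tangential terms omitted; the coefficient values in the example are arbitrary, not from any real calibration):

```python
def brown3_radial(x, y, k1, k2, k3):
    """Apply 3-parameter Brown radial distortion to normalised
    image coordinates (x, y); k1..k3 are the radial coefficients
    that alignment estimates per camera."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * scale, y * scale

# Arbitrary example: barrel distortion (negative k1) pulls points
# towards the image centre.
print(brown3_radial(0.5, 0.0, -0.1, 0.0, 0.0))
```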
Basically it is OK. In my opinion, dewarp could be changed to the wanted setting, I would also keep orientation (but it depends on the capture sensor), and for pose accuracy I would use the RTK data from the flight log (each image has slightly different values).
I would greatly appreciate your help with the proper processing settings for this data. It works fine in the Trimble TBC and Pix4D photogrammetry engines, but not in RealityCapture. We would like to use RealityCapture, but so far we can't get results as good as in those two.
The data was captured with a DJI M3E drone with RTK enabled, and it worked fine (code 50 in the .MRK file, fix solution, a CORS station is nearby). Around 1000 photos were taken, with 5 GCPs and 15 CPs to check the model accuracy. The GCPs and CPs were surveyed with survey-grade GNSS. The flight was performed at 50 or 60 m (I don't remember exactly). The aim is to get a point cloud with a precision of ~5 cm at any point, not only near the GCPs.
The data was processed without (!) GCPs, to see what result RC gives with only the image EXIF location (RTK data) available. So the photos' prior pose is set to position and orientation. As there is no geoid correction available in RC, ellipsoidal heights and WGS-84 were used for processing. After alignment was done and the CPs were imported, I noticed that the CPs are below the sparse point cloud! After further processing and comparing to the CPs, I can clearly see that the created mesh and ortho are around 3 m higher than the CPs! Both in ellipsoidal heights.
So the questions arise:
Is this the fault of some setting?
Could the lens distortion model cause this issue? What else could?
What can be done to use both the image location data (RTK precision) and the GCPs?
The GCPs aren't used because they gave inaccurate data between GCPs (distances of up to 50-100 m), which I tested in another RC file. Around the GCPs the results are OK, but between them the data isn't as accurate. Of course I can disable the photos' EXIF location data, but then what's the point of an RTK drone? Relying purely on GCPs doesn't give the accuracy we are aiming for, as this is for construction needs.
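One way to see whether an offset like the 3 m described above is a systematic bias rather than random alignment noise is to look at the mean and spread of the vertical differences at the check points. A stdlib sketch; the dz values here are hypothetical, not the poster's real measurements:

```python
import statistics

# Hypothetical vertical differences (model height minus CP height, m)
# sampled at each of the 15 check points.
dz = [3.02, 2.97, 3.05, 2.99, 3.01, 3.04, 2.96, 3.00,
      3.03, 2.98, 3.06, 2.95, 3.01, 3.02, 2.99]

mean_dz = statistics.mean(dz)
spread = statistics.pstdev(dz)

# A large mean with a small spread suggests a systematic cause
# (wrong focal length, datum or height-reference mismatch)
# rather than random noise.
print(f"mean offset {mean_dz:.2f} m, spread {spread:.3f} m")
```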
Below are some screenshots and the settings used. A shame that new users can post only one attachment per post, as I have more.
Hi Harijs,
it seems like you didn't use the prior pose accuracy values and used just the global camera prior settings. Have you changed those settings to the proper accuracy? According to your image, you are almost achieving the wanted precision for the selected image.
In cases like this it is also good to know the camera calibration parameters and use them in the computation.
As you mentioned lens distortion: aren't your images too distorted? I have seen some images from the Mavic 3 Enterprise with quite big distortion. There is also the influence of flight height and flight path. Have you used a double grid and different heights to capture your images?
For your third question: if you want to combine both datasets, you need to use them with the correct accuracies. Then you will see the correct errors on your control points.
Also, as you have the MRK file, you can use it to create a flight log and use that in your computation (you can also check whether the flight-log data and the EXIF data are similar or the same).
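To compare the MRK positions with the EXIF ones, the tagged fields can be pulled out with a small script. This is a hedged sketch: the sample record below is illustrative, and the exact record layout varies between DJI firmware versions, so check it against your own .MRK file:

```python
def parse_mrk_line(line):
    """Pull tagged values out of one .MRK record. Fields look like
    "24.1234567,Lat"; scanning by tag is more robust than fixed
    column positions, which differ between firmware versions."""
    rec = {}
    for field in line.split():
        if "," not in field:
            continue
        value, tag = field.rsplit(",", 1)
        if tag in ("Lat", "Lon", "Ellh"):
            rec[tag.lower()] = float(value)
        elif tag == "Q":          # RTK status flag, 50 = fixed
            rec["flag"] = int(value)
    return rec

# Illustrative record, not copied from a real file:
sample = ("1\t347616.400,[425]\t17,N\t-6,E\t143,V\t"
          "24.1234567,Lat\t121.1234567,Lon\t123.456,Ellh\t"
          "0.012,0.012,0.025\t50,Q")
rec = parse_mrk_line(sample)
print(rec["ellh"], rec["flag"])
```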
To find out the camera calibration parameters, there are various calibration applications; you perform the calibration there. RealityCapture computes these parameters itself, but sometimes it helps to know them before the calculation.
You have some values there, but they are in pixels. You will need to convert those to mm for use in RealityCapture. Also remember that the focal length is in 35 mm camera format.
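The pixel-to-mm conversion only needs the physical sensor width and the image width in pixels. A sketch assuming M3E wide-camera values (17.3 mm sensor width, 5280 px image width; the focal length in pixels is a hypothetical calibration output, so verify all numbers against your own camera):

```python
f_px = 3751.0        # hypothetical focal length in pixels from a calibration report
image_w_px = 5280    # assumed image width, pixels (M3E wide camera)
sensor_w_mm = 17.3   # assumed physical sensor width, mm

# One pixel covers sensor_w_mm / image_w_px millimetres, so:
f_mm = f_px * sensor_w_mm / image_w_px
print(f"focal length: {f_mm:.2f} mm")
```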
If it is turned off, the images will stay distorted (but this is the recommended setting). From the provided image it is hard to say how strong the distortion is, as there are no visible straight lines.
Also, placing just one GCP could help there to get the right heights for your model.
Couldn't the focal length be the issue?
The image EXIF data says it's 12.29 mm. Once I enter this value (instead of 24 mm), I get results much closer to the CPs. The sparse point cloud after alignment is very close to the CPs. Processing further to see the end result.
There is no preferred distortion model. If the distortion is quite big, you can try the division model. For precise work you can use the one with more parameters.
The focal length could be the issue, and you should use the proper value to get the best results. This value can be obtained from camera calibration. According to the sources I found, it should be 24 mm as a 35 mm equivalent.
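The 12.29 mm from EXIF and the 24 mm figure are consistent with each other: 12.29 mm is the physical focal length, and 24 mm is roughly its 35 mm equivalent via the diagonal crop factor. A quick check, assuming the M3E wide camera's 4/3-inch sensor (17.3 × 13 mm; these dimensions are an assumption, check your camera's specs):

```python
import math

f_mm = 12.29                       # physical focal length from EXIF, mm
sensor_w, sensor_h = 17.3, 13.0    # assumed 4/3" sensor size, mm

# Diagonal crop factor relative to a full-frame (36 x 24 mm) sensor
crop = math.hypot(36.0, 24.0) / math.hypot(sensor_w, sensor_h)
f_equiv = f_mm * crop

print(f"crop factor {crop:.2f} -> 35 mm equivalent {f_equiv:.1f} mm")
```

The result lands close to the 24 mm quoted in the specs, which is why entering 12.29 mm where a 35 mm-equivalent value is expected shifts the reconstruction.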