I’m performing asset reconstruction of outdoor areas and structures using DJI drones and exporting the results to Blender for post-processing. Originally I exported an .obj file for the mesh and textures and then placed the cameras using the “Name, X, Y, Z, Heading, Pitch, Roll” .csv export. This methodology worked well roughly 95% of the time, but in the remaining cases the camera poses in Blender were quite far off: toggling from the rendered camera view to the real picture showed translational and rotational deviation as well as scaling issues.
After following along with this video, I exported the same mesh models and cameras using the .abc file format and saw a drastic improvement in the camera placement in Blender, but it is still not perfect.
I have three questions:
- What is the cause for the improvement between the two methods?
- How can I get the alignment of the cameras even more precisely?
- Given the improvement from .obj to .abc, is there another file type that is likely to be even better?
I’m happy to share any configurations/settings that I use; I haven’t done so yet because I wasn’t sure what would be relevant.
Thanks in advance,
DB
Hi Derek,
first of all, how did you export the OBJ: with or without cameras? Also, the coordinate system and orientations can be defined differently in RealityCapture and in Blender, so the initial translations and rotations could come from there.
Using the second option (.abc) you should be able to achieve what you want. It also depends on the export settings used.
Can you show the results you are getting?
This is not related to the export format; it depends on the export settings used.
Hi Ondrej,
Thanks for the timely reply.
I’ll try and answer your questions directly:
Q - “How did you export the OBJ: with or without cameras?”
A - I exported the .obj without cameras. The .obj file was exported first, and then the camera information was exported using the “Name, X, Y, Z, Heading, Pitch, Roll” .csv export option. I would then place the cameras in Blender separately. When I did this, I used the “Blender” transformation preset and “Object” for the space in the normal transformation.
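For reference, this is roughly how I place the cameras from the .csv (a minimal sketch; the file name and the “ZXY” Euler convention are my assumptions, and a wrong convention here would produce exactly the rotational offsets I’m seeing):

# Hypothetical sketch: place cameras in Blender from the RealityCapture
# "Name, X, Y, Z, Heading, Pitch, Roll" .csv export. The mapping of
# heading/pitch/roll onto Blender's Euler axes is an assumption.
import csv
import math
import bpy

with open("cameras.csv", newline="") as f:  # placeholder path
    for row in csv.DictReader(f, skipinitialspace=True):
        cam_data = bpy.data.cameras.new(row["Name"])
        cam_obj = bpy.data.objects.new(row["Name"], cam_data)
        bpy.context.collection.objects.link(cam_obj)

        cam_obj.location = (float(row["X"]), float(row["Y"]), float(row["Z"]))

        heading = math.radians(float(row["Heading"]))  # assumed: yaw about Z
        pitch = math.radians(float(row["Pitch"]))      # assumed: about X
        roll = math.radians(float(row["Roll"]))        # assumed: about Y

        cam_obj.rotation_mode = "ZXY"                  # assumed application order
        cam_obj.rotation_euler = (pitch, roll, heading)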
Q - “Can you show the results you are getting?”
A - Here are some images of the alignment issues. On the left is the rendered image taken from the camera placed in Blender, in the middle is the real image, and on the right is the overlay of the rendered version over the real image. The error in the camera pose is obvious in the overlay. This data is from an .obj model; the same kind of error appears with the .abc model, just less pronounced.
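For reference, the overlay on the right is just a 50/50 blend of the render over the photo; a minimal sketch (file names are placeholders, and both images are assumed to share a resolution):

from PIL import Image

render = Image.open("render.png").convert("RGBA")
photo = Image.open("photo.jpg").convert("RGBA").resize(render.size)
Image.blend(photo, render, alpha=0.5).save("overlay.png")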
Here is an example of the export configuration file that we’re using.
<Configuration>
<entry key="ModelExportFormatVersion" value="0"/>
<entry key="MvsMeshExportCamerasAsModelPart" value="false"/>
<entry key="MvsMeshExportTexturingAllowed" value="-1"/>
<entry key="calexExportImages" value="false"/>
<entry key="MvsExportScaleZ" value="1.0"/>
<entry key="MvsExportIsModelCoordinates" value="0"/>
<entry key="MvsExportIsGeoreferenced" value="0x1"/>
<entry key="MvsMeshExportTileType" value="0"/>
<entry key="MvsMeshExportNormals" value="true"/>
<entry key="MvsExportScaleY" value="1.0"/>
<entry key="MvsMeshExportTexAlpha" value="false"/>
<entry key="MvsExportScaleX" value="1.0"/>
<entry key="MvsMeshExportTexImgFormat_Color8_0" value="png"/>
<entry key="MvsExportcoordinatesystemtype" value="0"/>
<entry key="MvsMeshExportTexPixFormat_Color8_0" value="32bppBGRA"/>
<entry key="MvsMeshExportNormalsAllowed" value="-1"/>
<entry key="calexExportUndistorted" value="false"/>
<entry key="MvsMeshExportNumberFormatAllowed" value="0"/>
<entry key="MvsExportMoveZ" value="0.0"/>
<entry key="MvsExportMoveX" value="0.0"/>
<entry key="MvsExportNormalRange" value="ZeroToOne"/>
<entry key="MvsExportMoveY" value="0.0"/>
<entry key="MvsMeshExportInfoFile" value="true"/>
<entry key="MvsMeshExportByParts" value="false"/>
<entry key="MvsMeshExportClassificationAllowed" value="0"/>
<entry key="MvsMeshExportCameras" value="true"/>
<entry key="MvsMeshExportMaterialsAllowed" value="0"/>
<entry key="MvsExportRotationY" value="-90.0"/>
<entry key="MvsExportNormalFlipZ" value="false"/>
<entry key="MvsExportRotationX" value="-90.0"/>
<entry key="MvsExportNormalFlipY" value="false"/>
<entry key="MvsExportNormalSpace" value="Mikktspace"/>
<entry key="MvsMeshExportCamerasAllowed" value="-1"/>
<entry key="MvsMeshExportColors" value="false"/>
<entry key="MvsExportNormalFlipX" value="false"/>
<entry key="MvsExportTransformationPreset" value="Blender"/>
<entry key="MvsExportRotationZ" value="0.0"/>
<entry key="MvsMeshExportFileTypeSelectionDisplay" value="0"/>
<entry key="MvsMeshExportTexOneFile" value="0"/>
<entry key="MvsMeshExportEmbeddTxrsAllowed" value="0"/>
<entry key="MvsMeshExportTexturing" value="-1"/>
</Configuration>
Thank you again for your time and consideration. Any help is much appreciated!
Derek
If you want to do camera placement in Blender, it is better to export the model with undistorted images using the settings mentioned in the video you posted.
As you exported the model separately and used the rotations from the camera registration, there could be some issues (as I mentioned, the axes and rotations can be defined differently in RealityCapture and Blender).
For the model creation the undistorted images are used. As you used the original image for the comparison, there can be some difference, because the rendered image from the model will be undistorted while the original photo is not.
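To illustrate what the undistortion does, here is a rough OpenCV sketch; the camera matrix and distortion coefficients below are only placeholders (in RealityCapture they come from the computed calibration of each image):

import cv2
import numpy as np

img = cv2.imread("photo.jpg")                 # placeholder file name
h, w = img.shape[:2]

K = np.array([[w, 0.0, w / 2],                # placeholder intrinsics:
              [0.0, w, h / 2],                # f ~ image width,
              [0.0, 0.0, 1.0]])               # principal point at center
dist = np.array([-0.1, 0.02, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3 (illustrative)

undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("photo_undistorted.jpg", undistorted)
# Note: running undistort() on an already-undistorted image warps
# straight lines instead of straightening them.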
Thanks for the reply.
It makes sense that the render from Blender doesn’t visually match the real image due to the distortion (or undistortion), good point. I looked into the undistorted images that were exported from RealityCapture and noticed that they all seem to be distorted differently, and in some of them straight line segments are severely warped.
Notice how in the left image the “fisheye” effect is very small, which is not the case in the image on the right. I would have expected it to be the same for both images. Also look at the roof in the top right corner of the right image; there is significant bowing that is not representative of the real-life object. Do you have any ideas on why this is happening? I can’t imagine that this type of preprocessing positively influences the mesh creation, and it definitely hinders my final objective. Is there a way to get more realistic undistorted images?
Thanks,
DB
Hi Derek,
this is really strange and it shouldn’t be happening. It looks like double undistortion.
As you are using a DJI drone, is it possible that you had dewarping enabled while capturing the data?
Which export settings did you use to export the undistorted images?
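One way to check the dewarping from the files themselves: many DJI models embed XMP tags such as drone-dji:DewarpFlag and drone-dji:DewarpData in the JPEG. A heuristic sketch (tag names vary by model, and the file name is a placeholder):

import re

with open("DJI_0001.JPG", "rb") as f:  # placeholder file name
    data = f.read()

for tag in (b"DewarpFlag", b"DewarpData"):
    m = re.search(tag + rb'="([^"]*)"', data)
    if m:
        print(tag.decode(), "=", m.group(1)[:60].decode(errors="replace"))
    else:
        print(tag.decode(), "not found")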
Ondrej,
I’m looking into the DJI side of things now to see if I can learn more about their undistortion settings. Is it possible to turn off the undistortion stage of RealityCapture and treat the input data as though it’s already undistorted?
For the export settings for the undistorted images, I used the defaults; that is, I set export images → yes and undistort images → yes.
DB
It is possible, but it is not recommended.
Select all images in the 1Ds view and set the Camera model under Prior lens distortion to No lens distortion and the Prior to Fixed.
It turns out our Camera model was already set to No lens distortion. Could changing that setting resolve the issues we’ve been having?
DB
How was the Prior set? To Approximate or Fixed?
If Approximate, the undistortion is still computed after that.
But as I wrote, it is better to let RealityCapture compute the distortion and not use already-undistorted images.
To use the undistorted images inside Blender you also need to change the model export settings (as mentioned in the video you posted): use Fit - Keep intrinsics and Resolution - Preserve. The pre-defined settings are different, and that could also be a reason for the wrongly rendered images in Blender.
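With Fit - Keep intrinsics the calibrated focal length and principal point are preserved; in Blender they map onto the camera roughly like this (a sketch, all numbers are placeholders and the intrinsics are assumed to be in pixels):

import bpy

w, h = 4000, 3000                        # image resolution in pixels (placeholder)
f_px = 3600.0                            # calibrated focal length in pixels (placeholder)
cx, cy = 2010.0, 1485.0                  # principal point in pixels (placeholder)

cam = bpy.data.cameras["Camera"]         # placeholder camera name
cam.sensor_fit = 'HORIZONTAL'
cam.sensor_width = 36.0                  # mm; serves as the reference size
cam.lens = f_px * cam.sensor_width / w   # focal length in mm

# With horizontal sensor fit, Blender's lens shift is a fraction of the width.
cam.shift_x = (w / 2 - cx) / w
cam.shift_y = (cy - h / 2) / w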
The Prior lens distortion was set to Approximate, not Fixed; we haven’t yet tried it with Fixed. We have also applied the other settings you mentioned. I’m still not sure where the “double undistortion” is coming from in our images.
Derek
Hi Derek,
the only possible reason for the double undistortion is that the images were already undistorted before they were used in RealityCapture.
Can you check whether dewarping was enabled while capturing your images?
How do the images look before they go into RealityCapture?
Sorry for the delay, I was at a conference.
I looked and couldn’t find anything in the DJI settings to adjust the warping. The original image that is passed into RealityCapture is the one in the center of this image.
Is there a setting in the alignment phase that we could try in order to improve the alignment?
Hi Derek,
would it be possible to share your data to check if this will happen also on our side?
If so, I will send you an invitation for the data upload.
I don’t think that should be an issue. Just to double-check, the data will not be publicly available, right?
It will be used only for our internal testing.
I sent you the invitation. It may be in your spam folder.