Error in projection metadata on mesh using cameras from RealityCapture

Good afternoon,

We are using RealityCapture to generate models from drone images. We are trying to use the camera information and the mesh, both generated by RealityCapture, to project additional information onto the mesh. The drone camera we were using did not save rotation information, which is why, before alignment, we set “Prior pose” → “Absolute pose” → “Position” in the image settings. After alignment, we export a CSV file with camera information using “Internal/External camera parameters”. We assume that this exported metadata describes the virtual cameras RealityCapture used to build the mesh. We then export the mesh in OBJ format. We use the default parameters for both the mesh and the camera export. Unfortunately, when I project the additional information onto the mesh using the camera location and rotation, the results look off: they are tilted. What should we pay attention to in our export process to be able to map new information onto the mesh using camera position and direction? Which parameters should I check so that the OBJ and the camera information are synchronised properly?

We are using RealityCapture version 1.3.1.

I am looking forward to your reply.
Best regards,

P.S. I attach examples of the projections I mentioned. The red colour indicates a window, blue a chimney. The first example demonstrates a case where all classes were projected correctly except one, in this case a window. The second example demonstrates a case where all projections are tilted.


When exporting the model and the camera parameters, which settings are you using? What were the transformation settings?
How did you get those projections (windows, chimney) and do they have known coordinates?

Thank you for your reply!
We are using the Open3D library to project our masks onto the 3D mesh. For our calculations, we use the camera position, rotation, and focal length. To obtain this information, we export the registration as the datatype “Internal/External camera parameters”.
The projections seem to be relatively accurate in the majority of cases. You can see from the previous examples that one of the models is almost correct. However, it seems that we are missing some important transformation information. One example shows projections that are completely off. Even if this is a minority of cases, we want to avoid it happening.
Our pipeline looks like this: import images → align them (twice, because sometimes the models are not perfect after the first alignment) → create mesh → create texture → simplify mesh → export the model. When we export the model, we use the default parameters. Do you have any ideas about what could cause the errors in the projections? Could it be that the cameras and the mesh are not synchronised? Or maybe we are missing an important transformation parameter that could be exported (for example, perspective)?
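For reference, here is a minimal NumPy sketch of the pinhole projection we compute. The yaw/pitch/roll axis order and signs (Z-Y-X, degrees) are our assumption, not RealityCapture's documented convention; a mismatch in exactly this convention would produce tilted projections like the ones we see.

```python
import numpy as np

def rotation_from_ypr(yaw, pitch, roll):
    """Build a world-to-camera rotation matrix from yaw/pitch/roll (degrees).
    The Z-Y-X order and signs here are an assumption and must match what
    RealityCapture actually exports."""
    y, p, r = np.radians([yaw, pitch, roll])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    return (Rz @ Ry @ Rx).T  # transpose: camera-to-world -> world-to-camera

def project_point(point_w, cam_pos, R_wc, f_px, cx, cy):
    """Project a world-space point to pixel coordinates (pinhole model)."""
    p_cam = R_wc @ (np.asarray(point_w, float) - np.asarray(cam_pos, float))
    if p_cam[2] <= 0:
        return None  # point is behind the camera
    u = f_px * p_cam[0] / p_cam[2] + cx
    v = f_px * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])
```

With zero angles and the camera at the origin, a point straight ahead should land on the principal point; any systematic tilt points to the rotation convention above being wrong.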
I attach the export parameters to this message.
Nira_settings_registration

I am looking forward to your reply.
Best regards,

@kumateCR , sorry for not tagging you, but I hope that now we could open a discussion :slight_smile:
Best regards,

Hello,

For the masks, we recommend exporting the undistorted images from RealityCapture since they are used to create a model. The undistorted images are created after the alignment and can be exported in the Registration export.

Make sure that the information in the CSV file is the correct information you need. This is the format of the exported file:
$ExportCameras($(imageName)$(imageExt),$(x),$(y),$(z),$(yaw),$(pitch),$(roll),$(f36),$(px),$(py),$(k1),$(k2),$(k3),$(k4),$(t1),$(t2)*
You can see that the focal length is exported in the 35mm format ($(f36)), so be wary of that.
You can adjust the export formats by modifying the XML files in the installation folder. The calibration.xml file contains this specific format.
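Since $(f36) expresses the focal length relative to a 36mm-wide frame, converting it to pixels is a simple proportion over the image width. A minimal sketch (the function name is ours):

```python
def f35_to_pixels(f36, image_width_px):
    """Convert a 35mm-equivalent focal length (mm, for a 36mm-wide frame)
    to a focal length in pixels for an image of the given width."""
    return f36 * image_width_px / 36.0
```

For example, a 36mm-equivalent focal length on a 4000px-wide image gives a pixel focal length equal to the image width.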

Check the rotation values and ensure that the axes are correct in the third-party software, which I assume they are since the model looks all right.
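The distortion coefficients in that CSV (k1-k4, t1, t2) also matter if the masks are drawn on the original (distorted) images rather than the undistorted ones. Below is a sketch of a Brown-Conrady distortion model applied to normalized image coordinates; the exact convention RealityCapture uses for these coefficients is an assumption here (OpenCV-style radial plus tangential terms), so verify it against the documentation.

```python
def distort_brown(x, y, k1=0.0, k2=0.0, k3=0.0, k4=0.0, t1=0.0, t2=0.0):
    """Apply Brown-Conrady distortion to normalized image coordinates.
    The mapping of RealityCapture's k1-k4/t1-t2 onto this model is an
    assumption (OpenCV-style), not confirmed by the vendor docs."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3 + k4 * r2**4
    xd = x * radial + 2 * t1 * x * y + t2 * (r2 + 2 * x * x)
    yd = y * radial + t1 * (r2 + 2 * y * y) + 2 * t2 * x * y
    return xd, yd
```

With all coefficients zero the mapping is the identity, which is effectively what you get when you work with the undistorted images.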

Thank you for your reply, @kumateCR.
I have spent some time trying out new approaches, but unfortunately, it did not work out.
We have exported the CSV files, as you mentioned, and tried to use undistorted images for our masks, but the results stay the same.
I am not sure what exactly you mean by modifying XML files. Is it possible to specify something that we cannot change using the UI later on?
Anyway, maybe there are some specific parameters that you would recommend we use in the export? For example, a specific coordinate system. I have noticed that some projects have very wrong projections on the Z axis, which gives me the feeling that it might be an export issue. If you need any additional information about our code and parameters, please feel free to ask.

Regarding the focal length, I also have a question:
I have already noticed that the focal length is exported in 35mm format. Have I understood correctly that the location and the rotation of a camera are calculated as if it were a standard full-frame camera, i.e. with a crop factor of one?

I am looking forward to your reply.

Best regards,

@kumateCR,
I have followed the tutorial on projecting a texture using Blender (https://www.youtube.com/watch?v=yq0XjvBlsiU) and tried to replicate the procedure with the projects that are not working. I did the following:

  1. I exported a mesh as OBJ and camera information as CSV.
  2. I imported an OBJ in Blender.
  3. I manually created a camera using the parameters from the CSV file (x, y, alt, heading, pitch, roll, f).
  4. I used the view from the camera to see how they are aligned; you can see the results in the attachment.
    It seems that the mesh and the images are not aligned; either we are missing some transformations, or the cameras and OBJ files are exported independently. I hope this gives more information about the issue.
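For step 1, we parse the CSV according to the column order of the Internal/External template quoted earlier in the thread. A minimal sketch (it skips a header row if one is present; the field names are ours):

```python
import csv
import io

# Column order taken from the Internal/External camera parameters template:
# name+ext, x, y, z, yaw, pitch, roll, f (35mm), px, py, k1-k4, t1, t2
FIELDS = ["name", "x", "y", "z", "yaw", "pitch", "roll",
          "f36", "px", "py", "k1", "k2", "k3", "k4", "t1", "t2"]

def read_cameras(text):
    """Parse the exported CSV text into a list of per-camera dicts."""
    cameras = []
    for row in csv.reader(io.StringIO(text)):
        if len(row) != len(FIELDS):
            continue  # skip blank or malformed lines
        try:
            vals = [float(v) for v in row[1:]]
        except ValueError:
            continue  # skip a header row, if present
        cam = {"name": row[0]}
        cam.update(dict(zip(FIELDS[1:], vals)))
        cameras.append(cam)
    return cameras
```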

Best regards,



Hi Daniele,
in exports like this you need to consider several things. In the mentioned workflow there is no OBJ export, but a different format (it is a good way to export FBX with undistorted images). You also need to consider the different coordinate system orientations between RealityCapture and Blender.
So, for a proper OBJ import into Blender you need to set the transformation like this:
image
Also, as the orientation is different, the values won't be in the same positions.
Blender:
image
RealityCapture:
image
You may also notice slight differences in the angles. I suppose this is because of using distorted images with undistorted results.
So in your workflow, try to use the rotation values from Blender for the undistorted images, and use undistorted images in your reprojections.
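The orientation difference mentioned above often comes down to an axis swap between a Y-up frame (common for OBJ files) and a Z-up frame (Blender, RealityCapture's grid). A tiny helper showing one plausible mapping; the exact convention depends on your import settings, so treat this as a sketch, not the definitive transform:

```python
def yup_to_zup(p):
    """Map a point from a Y-up coordinate frame (typical for OBJ) to a
    Z-up frame (Blender / RealityCapture grid). One plausible convention;
    the correct signs depend on the exporter's forward axis."""
    x, y, z = p
    return (x, -z, y)
```

If applying (or inverting) such a swap fixes the tilt, the problem is purely the coordinate frame rather than the camera parameters themselves.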

@OndrejTrhan, Thank you very much for your reply.

Unfortunately, when I tried everything, that you mentioned, it did not work out.

  1. I tried to use undistorted images in Blender but the results do not look better.
  2. When I first transferred the data to Blender, I noticed that the orientation differed. That is why I had already “shuffled” the parameters from RealityCapture. I tried to adjust this “shuffling” using your example, but it looks even more off.

But to be honest, I think we are getting closer to the solution. There are some things I do not fully understand from your reply, and maybe one of them is the key:

  1. You said that I should use the rotation values from the undistorted images. I am just exporting “Internal/External camera parameters” as a CSV file. How can I get these rotation values?
  2. It is not clear to me how to get the Blender parameters you demonstrated from the CSV file information; the rotation values are significantly different. Could you please share what I should apply to get the proper Blender parameters?

I am looking forward to your reply and help! I do think that we are getting closer to the solution.

Best regards,

I suppose you need to follow that tutorial properly to export the model with undistorted images (https://youtu.be/yq0XjvBlsiU?t=414) and import that into Blender. Also, what do you have set as the project and output coordinate systems in RealityCapture? With your settings you are exporting in the Grid system, but the output system can be set differently (for the Internal/External camera parameters export).
“The Blender parameters” are just the ones you get after importing the model using the workflow from the video (but the FBX format was used there, as OBJ has a different orientation).

Dear @OndrejTrhan ,
thank you for your quick reply and for the link!
I have already followed the tutorial; it is a great source of information. Unfortunately, our goal is to create the projections with the Open3D library in Python, so we cannot use the “.abc” format. I am using Blender just to understand better why it is not working. I have tried exporting the FBX format and mapping the undistorted images in Blender, but the results still do not look good. I am attaching the screenshots to this message.
When it comes to the settings, we are exporting in the “Grid system”. What would you recommend using?

I am looking forward to the reply.
Best regards,



Hi Daniele,
have you followed the tutorial completely (including the export of the model with undistorted images)?
You can also use different formats, but OBJ needs to be imported into Blender in a different way, and there is also no option to export it with undistorted images.
What kind of drone did you use? Is it possible that the images are already undistorted (for DJI this is called dewarping)?
I suppose the grid system is OK, but what do you have set as the Project output coordinate system in the Application settings?
Also, are you sure you are projecting the correct image? It seems to be taken from a different view than the one you want to show in Blender.


After importing the images, are you setting the format resolution?
Also, you mentioned that you are using RealityCapture version 1.3.1. In that version there was a bug regarding Blender reading the proper focal length. This bug was fixed in version 1.3.2, so it could be introducing errors into your workflow.