Building Facades in 3D Mesh Look Spotty?

Hello RC Community, I’m working on my first 3D model and I’m running into a bit of an issue with the facades of the buildings. For reference, I’m using a WingtraOne Gen II drone with a Sony RX1R II 42 MP payload in addition to a Sony a6100 oblique payload. I combined all of the images (approx. 650) and aligned them. After that, I generated the mesh model in Normal detail, then cleaned it, checked it, and filled holes. I then generated the texture and am seeing that the sides of the buildings are not filled in very well compared to the rest of the data. I know the images are good, as I flew the site accordingly. The oblique payload is made for developing digital twins, so I’m not sure if I’m missing a step in the reconstruction or just not getting enough data. Ultimately, my question is: is there a way to fill in the gaps to make the buildings look complete? Any help is greatly appreciated!

Hi VentureUAV,

what angle was used for the oblique images? Is this just a point cloud? Did you try to compute a mesh?

As far as I know, the Wingtra is a drone that captures images only from above. For better modeling you also need images that capture the walls from perpendicular positions. If you only have images taken from the sky, that is not sufficient to reconstruct the walls correctly.
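As a rough illustration of why nadir-only flights miss facades: the image resolution that actually lands on a vertical wall drops with camera pitch. This is only a toy model, assuming pixel density on the wall scales with the cosine of the pitch angle (0° = horizontal, wall seen head-on; 90° = straight-down nadir, wall seen edge-on); the function name and numbers are purely illustrative, not anything from Wingtra or RealityCapture:

```python
import math

def facade_coverage_factor(pitch_deg):
    """Toy model: fraction of image resolution landing on a vertical
    facade for a camera pitched pitch_deg below the horizon.
    0 deg = horizontal (wall seen head-on),
    90 deg = straight-down nadir (wall seen edge-on)."""
    return math.cos(math.radians(pitch_deg))

# Compare a horizontal shot, a 45-degree oblique, and pure nadir
for pitch in (0, 45, 66, 90):
    print(pitch, round(facade_coverage_factor(pitch), 3))
```

At 90° (pure nadir) the factor collapses to essentially zero, which is why the walls come out spotty no matter how sharp the images are.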

Also, the walls seem to be white, so there could be a problem that they are featureless, and then it is not possible to reconstruct a model there.

You can follow this tutorial for image capturing: https://www.youtube.com/watch?v=9e_NLp_FaUk&t=202s
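If you end up shooting the facades yourself (e.g. with a phone or by flying manual orbits), the capture pattern from the video boils down to rings of camera positions around the object, each shot facing the building. Here is a minimal planner sketch in Python; all names and numbers are hypothetical, not a Wingtra or RealityCapture API:

```python
import math

def orbit_waypoints(center_x, center_y, radius, altitudes, shots_per_ring):
    """Generate (x, y, z, heading_deg) camera positions on circular
    rings around a building, each camera facing the ring's center."""
    waypoints = []
    for z in altitudes:
        for i in range(shots_per_ring):
            angle = 2 * math.pi * i / shots_per_ring
            x = center_x + radius * math.cos(angle)
            y = center_y + radius * math.sin(angle)
            # Heading points back toward the building center
            heading = (math.degrees(angle) + 180.0) % 360.0
            waypoints.append((x, y, z, heading))
    return waypoints

# Two rings (10 m and 20 m altitude), 12 shots each -> 24 facade images
wps = orbit_waypoints(0.0, 0.0, 30.0, [10.0, 20.0], 12)
print(len(wps))
```

Two or more rings at different heights, with generous overlap between neighboring shots, is the usual rule of thumb for closed objects like a building.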

Hi Ondrej,

Thank you for your reply. The images used for this reconstruction came from two different payloads. One was the Sony RX1R II 42 MP payload, and the other was the Sony a6100 oblique payload. The RX1R II is strictly nadir, whereas the oblique a6100 is specifically designed for 3D reconstruction of buildings and objects. The horizontal angle is 90° (-45° … 45°) and the vertical angle is 66° (-18° … 48°). I’ve also attached a screenshot of the technical specs for your reference. When you say “compute mesh”, do you mean “Create Model”? If so, yes, I generated the model in Normal detail. However, it seems like it is still just a point cloud. Here are the steps in my workflow:

Loaded both nadir/oblique image sets into RC => Aligned Images => Created Model in Normal detail => Lasso’d the project area, inverted and filtered the selection => Used the Simplify tool to reduce the triangle count to 5,000,000 => Calculated Model Texture.

After following these steps, the screenshots in my previous post were the result. Am I missing a step in the workflow by chance? This is my first time working in RC or any 3D modeling software, so it’s very possible this is user error. Regarding the wall color: Yes, the walls on the building are white vinyl but have some texture. I’ve attached a screenshot of this as well. My main concern is just having the walls filled in. Is there a way for RC to interpolate the data and fill in the gaps? 

 

Thank you for your help with this!

Hi VentureUAV,

is this one of the used images?

Yes, by compute mesh I meant Create Model. It can look like a point cloud, as models larger than 40 M tris are displayed that way. Once you simplify it, you should be able to see the mesh. Do you have the Solid option selected in Scene render under the Scene 3D tab/View?

Were both datasets aligned into one component after alignment? What was the original size of the computed model (before you simplified it to 5,000,000 tris)?

Could you also show the 3D view with the aligned images?

There is a Close Holes option in the Scene 3D tab/Tools which you can try.

Ondrej,

No, that is an image I just extracted from Google Earth… it was not used for processing. I will attach an example image that the drone collected via the oblique 3D payload. I do have the Solid option selected; however, it doesn’t appear to be displaying. RC is telling me that since the model is over 40 M tris, it must be viewed as a point cloud. Regarding both datasets being aligned into one: I added all of the images at the same time (nadir/oblique) and then aligned afterward, so I assumed they were included in the same component. Can you tell if this is the case? The original size of the computed model was 161.1 M tris with an 81.1 M vertex count.

How do you show the 3D view with the aligned images?

UPDATE: I just got the model to generate. Some of the building facades look a little off… do you know of a way to correct this issue? Thank you!

Hi VentureUAV,

yes, if the model is bigger than 40 M tris, it is shown as a point cloud. You can see part of the mesh if you use the Clipping Box, or you can simplify the created model to under 40 M tris.

I can see that both datasets are aligned in one component. It also seems that the images were taken from the same height:

This is what I meant regarding the 3D view of the aligned images.

It seems that you simplified your model from 161 M tris to 41 M tris. It is recommended to do this in 50% incremental steps (160 -> 80 -> 40).
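The 50% schedule is easy to compute for any starting size. This small sketch (plain Python, nothing RealityCapture-specific, just the arithmetic) lists the intermediate Simplify targets to enter:

```python
def simplify_schedule(start_tris, target_tris):
    """Halve the triangle count until the next halving would
    undershoot the target, then finish exactly at the target."""
    steps = []
    current = start_tris
    while current / 2 > target_tris:
        current = current // 2
        steps.append(current)
    steps.append(target_tris)
    return steps

# 161 M tris down to 40 M: roughly 80 M -> 40 M in two Simplify passes
print(simplify_schedule(161_000_000, 40_000_000))
```

Each entry is one Simplify run; stopping halfway and re-running keeps the decimation quality higher than one big jump.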

This is the result of using only images taken from above during the flight. To improve this model it is a good option to also use terrestrial images (like the ones from Google Earth) where these walls are captured, as described in the previously attached video. If you want to model a whole object, it is important to follow the shape of the object:

Got it, that makes perfect sense. I will definitely simplify incrementally next time. I have acquired some images from Google Earth that better show the facades of the building. How do I get these correctly added to the model? Could I also add images taken from my iPhone? Can these be geo-referenced like the other images? If so, can you describe how to do this? Thank you!

You just import them into the project and align again. If they won’t align, you will need to use some control points to align those images. Of course you can also use your iPhone images; I think those would be even better than the Google Earth ones.

You can turn off georeferencing for these images and keep just the georeferencing from the original images.

There is a tutorial about merging different components, which could be helpful in your case: https://www.youtube.com/watch?v=rrXuHcqoOjQ&t=419s. The principles are still the same.

Thanks, Ondrej! I’m currently getting this error when trying to import control points to my project. Do you know what’s causing it?

This means that there are 211 images in your control-point measurement file which are not in your project, i.e. the file references more images than your project contains. But the file can still be imported.

I suppose you used the Control Points option under Import & Metadata. That is the tool for importing control points in the form of image measurements.
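If you want to see which 211 image names the warning refers to, you can diff the names in the exported measurement .CSV against the images actually loaded in the project. A rough Python sketch, assuming the image name sits in the first comma-separated column of the file (verify against your actual export, as the exact format can vary):

```python
import csv

def missing_images(measurements_path, project_image_names):
    """Return image names referenced in the control-point
    measurement file but absent from the project."""
    referenced = set()
    with open(measurements_path, newline="") as f:
        for row in csv.reader(f):
            if row and row[0].strip():
                referenced.add(row[0].strip())
    return sorted(referenced - set(project_image_names))

# Usage: pass the exported .CSV path and the list of image names in RC
# missing = missing_images("controlpoints.csv", ["IMG_0001.JPG", ...])
```

Anything the function returns is a measurement RealityCapture will skip on import because the image is not in the project.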

I was following a YouTube tutorial on how to do this, and it directed me to Alignment > Export > Control Points, then Alignment > Import > Control Points. Is this the correct way to do it? I went through and marked all my control points as shown in the attachment. The output .CSV file contains these 211 images.

 

Here’s the video link also for your reference:

Control points and ground control points are different things in RealityCapture. Ground control points also have coordinates in some coordinate system.

Is the screenshot from the project where you measured those? Are you using the same images in another project? What do you want to achieve?

If you want to align your project according to the GCPs, go to Alignment tab/Registration/Settings/Camera prior settings and set Use camera priors for georeferencing to No. Then align your dataset again.

 

Thank you, Ondrej. I’m currently getting this error when trying to align images. There are currently 105 images in this dataset. I’ve cleared my cache and restarted my machine. What could cause this?

Hello VentureUAV,

This error shows up when RealityCapture was not closed properly, and there are a few issues that may cause it.

It usually happens when Windows gets updates and the whole PC restarts.

The other most common cause is CPU/GPU overheating. This usually happens with a large number of images, when RealityCapture is not able to align images (due to images that create a component with geometry issues), or when a component with geometry issues was created and RealityCapture is not able to create a model from it.

Thank you, Matel. I restarted my machine and was able to resolve this issue. I do have another question on the model I am working on. I generated two separate models - one in PIX4D Mapper and the other in RealityCapture. No matter what I’ve done so far, I cannot get RealityCapture to recognize the field goal posts and the chain-link fence around the stadium. However, the model in PIX4D shows these. Do you know if there is a setting that could be excluding these items from RC? Or can I import the model from PIX4D into RC and work with it there? Screenshots are attached.

Hi VentureUAV,

what are your reconstruction settings? This can happen when the object is insufficiently captured.

In PIX4D, is it just a point cloud? How does the mesh look?