Filtering based on color

Last year I wrote an article on ArchDaily and a more detailed tutorial about using photogrammetry for architecture visualization, with a low-cost drone to capture the images. Architects like to show their design in context, i.e. in its future setting. Photogrammetry and/or laser scanning can play an important role there.

The article received 50k views, an indication that many architects are interested in this topic.

Now, I would like to write an update of the article, addressing some points that readers brought up, or that I ran into myself as limitations.

The uneven, “blubbery” surfaces of the walls and roofs resulting from the point clouds are seen as one of the limitations for a wider adoption of photogrammetry and laser scanning for architecture visualization. The same goes for objects like trees, cars, people, streetlights and benches. Accuracy is not so important in this context, but visual appeal is.

I am now investigating two possible improvements in the process, and I intend to write a new article about this.

  • Easier capturing, using low-cost, accessible equipment a typical architect can afford. Drones are getting increasingly restricted in many countries, and the drone images also didn’t capture areas blocked by trees, cars and other objects in front of houses and buildings. Ideally, the architect would walk or drive through the streets with a simple 360 camera or portable laser scanner, and the data would be captured automatically. If that’s not sufficient, they could combine the drone images with ground-based laser scans or 360 photos.

  • Better processing, e.g. by:

    1. automatically turning walls and roofs of houses into simple flat polygonal surfaces (see the plane-detection sketch after this list)
    2. identifying repetitive objects like trees, cars, people and streetlights, and replacing these with better-modeled, equivalent-looking objects from a model library, or with placeholder node objects which can be substituted with high-quality 3D models in our software Lumion.
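To illustrate what I mean with item 1: detecting the dominant flat surfaces in a point cloud is a well-known problem, and RANSAC plane segmentation is one standard approach. Here is a minimal sketch using the open-source Open3D library (the file name and thresholds are placeholder assumptions on my side, not anything RC provides):

```python
import open3d as o3d

# Load a point cloud exported from RC ("scene.ply" is a placeholder name).
pcd = o3d.io.read_point_cloud("scene.ply")

# RANSAC plane fit: points within 5 cm of the plane count as inliers.
# Assumes the cloud is in meters; tune distance_threshold to your data.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.05,
                                         ransac_n=3,
                                         num_iterations=1000)
a, b, c, d = plane_model
print(f"Dominant plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")

# Split the cloud into the flat surface and the rest; the planar part
# could then be replaced by a single flat polygon in the final model.
plane_cloud = pcd.select_by_index(inliers)
rest_cloud = pcd.select_by_index(inliers, invert=True)
```

Running this repeatedly on the remainder would peel off walls and roofs one plane at a time.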

I can understand that flat surface detection from a point cloud or mesh can be hard. But maybe you can use the color of the texture to select areas like a road or a roof, like the Photoshop Magic Wand, and then use the Simplify tool? So you click on a road with a Magic Wand, set the RGB sensitivity, and it selects all pixels within this RGB range, plus the corresponding points or polygons.
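The Magic Wand idea itself is simple to sketch in code. A minimal example with Open3D and NumPy, where the “clicked” color and the RGB sensitivity are placeholder values I made up:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.ply")   # placeholder file name
colors = np.asarray(pcd.colors)              # per-point RGB in [0, 1]

# The "clicked" reference color (here: grey asphalt) and the RGB
# sensitivity; in a real tool both would be interactive settings.
target = np.array([0.35, 0.35, 0.35])
tolerance = 0.08

# Select every point whose R, G and B all lie within the tolerance,
# like a Magic Wand selection with a given RGB range.
mask = np.all(np.abs(colors - target) <= tolerance, axis=1)
road = pcd.select_by_index(np.where(mask)[0])
```

The selected points could then be fed into a plane fit like the one above, or into the Simplify tool.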

The same goes for the ugly trees. Using RGB selection, you could delete them and replace them with high-quality trees in your CAD or visualization software (in my case, our product Lumion). Of course, there must then be a way to make the ground below the trees look similar to the surrounding ground areas.
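A rough sketch of that workflow, continuing in Open3D: pick out the green points, delete them, and patch the hole with a plane fitted to the remaining ground. The green test and all thresholds are assumptions on my side:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.ply")   # placeholder file name
colors = np.asarray(pcd.colors)
r, g, b = colors[:, 0], colors[:, 1], colors[:, 2]

# Crude vegetation test: green clearly dominates red and blue
# (the 1.1 factor is a guess; a real tool would expose it as a slider).
is_tree = (g > 1.1 * r) & (g > 1.1 * b)
trees = pcd.select_by_index(np.where(is_tree)[0])
ground = pcd.select_by_index(np.where(is_tree)[0], invert=True)

# Patch the holes left by the deleted trees: fit a plane to the remaining
# ground and project the old tree footprints onto it.
plane, _ = ground.segment_plane(distance_threshold=0.05,
                                ransac_n=3, num_iterations=1000)
pa, pb, pc, pd = plane
xy = np.asarray(trees.points)[:, :2]
z = -(pa * xy[:, 0] + pb * xy[:, 1] + pd) / pc   # assumes non-vertical ground
patch = o3d.geometry.PointCloud(
    o3d.utility.Vector3dVector(np.column_stack([xy, z])))
patch.paint_uniform_color(np.asarray(ground.colors).mean(axis=0))
```

Painting the patch with the average ground color is of course a crude stand-in for properly texturing it like the surrounding terrain.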

Below is an image of an area I captured with a drone, reconstructed in RC from those images.

From this distance it looks reasonable, but if you go closer to the buildings, it sometimes starts to look pretty ugly.

Especially when you look at the polygons without texture. This mesh was reduced from 37M to 1M triangles, and then some polygons were filtered out (marginal and unconnected triangles).
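For comparison, the same two steps (decimation plus removing small unconnected pieces) can be reproduced outside RC. A sketch in Open3D, where the component-size threshold is my guess:

```python
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scene_37M.obj")   # placeholder file name

# Decimate 37M -> 1M triangles (quadric decimation; similar in spirit
# to RC's Simplify tool, but not the same implementation).
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=1_000_000)

# Drop small unconnected components (floating debris, marginal triangles).
clusters, cluster_sizes, _ = mesh.cluster_connected_triangles()
clusters = np.asarray(clusters)
cluster_sizes = np.asarray(cluster_sizes)
mesh.remove_triangles_by_mask(cluster_sizes[clusters] < 100)  # threshold guess
mesh.remove_unreferenced_vertices()
```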

I am sure there must be ways to improve this, e.g. using RGB selection? Any other ideas are welcome.

My next tests will combine the drone images with a hand-held laser scanner point cloud, e.g. from the GeoSLAM Horizon. That’s possible with RC, correct?

This would probably require something like this: https://support.capturingreality.com/hc/en-us/community/posts/115000776451-bare-earth-models