Laser scan with images - mesh quality

Hi,

I generate a mesh from a laser scan and the quality is very good - everything is sharp and there isn’t any noise. But when I add images (simple photogrammetry) for texturing and run the reconstruction again, the mesh quality degrades: noise, blurry edges, blobs.

Is there an option so that the images are used only for texture reconstruction and are not considered when building the mesh?

Hello Michal,

once you align images and laser scans together, you can disable the images for meshing. In the 1Ds view, select all the inputs you wish to disable, and in the Selected inputs table change Enable meshing to False.

How intense is this noise? If there is a lot of it, that points to a misalignment, and the texture will probably not be ideal…

Thank you Zuzana, it works.

Hello guys,

I’m using this post for my questions because I’ve done a few projects combining LiDAR and photogrammetry.

Usually I only use LiDAR for the meshing step and photogrammetry for the texturing step, exactly like you. And I thought the depth map calculations (Alignment) wouldn’t change anything in the sparse point cloud and therefore wouldn’t modify the mesh.

This project is about scanning a cathedral front, and here are my tests on a statue from the front with the following settings:

  • LiDAR only

  • Photogrammetry only

  • LiDAR/Photogrammetry (all settings enabled)

  • LiDAR/Photogrammetry (meshing disabled for photogrammetry)

Before this project I used the same settings without any problem.

The “Photogrammetry only” quality is explained by the large dimensions of the cathedral.

Maybe I used the wrong settings this time, but I can’t combine LiDAR and photogrammetry correctly. Do you have any ideas?

See attached pictures, thank you!

Hi Jerome,

from the example with both enabled it is very clear that this is a case of bad misalignment - your images have not latched onto the scan data properly. I think this is indicated by the long orange lines at the cameras. My guess would be that it’s a GPS issue - I imagine you used a drone?

BTW, depth maps are created in the reconstruction step, not during alignment. In theory, you shouldn’t need to reconstruct at all if you only need to texture an imported mesh.
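If it helps for intuition: RC’s internal depth-map algorithm isn’t public, so the following is only the textbook relation that stereo-based depth estimation builds on. For a rectified image pair, depth is inversely proportional to pixel disparity: depth = f·B/d (focal length in pixels times baseline, divided by disparity). A toy sketch with made-up numbers, just to illustrate the relation:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic rectified-stereo relation: depth = f * B / d.

    Illustrative only - RealityCapture's internal multi-view
    stereo algorithm is not public and is far more involved.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Pixels with zero/negative disparity have no valid depth.
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Hypothetical camera: 1200 px focal length, 0.5 m baseline.
# A large disparity (near object) gives small depth, and vice versa.
print(depth_from_disparity([10.0, 60.0], 1200.0, 0.5))  # → [60. 10.]
```

This is why texture-only images don’t need depth maps at all: depth estimation only matters when the images contribute geometry.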

The scan (first example) looks very detailed - I thought at first that this is photogrammetry!  :slight_smile:

What kind of resolution did you use and is that the case for the whole scan or is it only because it’s close to the scanner?

Hello Götz,

I agree with you about the orange lines (residuals?), because on previous projects they looked different. Do you have any tips to improve this?

I used a DJI Mavic Pro in front of the cathedral and a Canon 5D on the ground. Here’s another image with the LiDAR positions:

Thanks for the info about the depth maps. We can easily find photogrammetry tutorials and information about the steps, tips/tricks and more, but I would like to learn more about the depth map calculation (focus information, pixel colors, etc.). I’ve read a lot of PDFs, tutorials etc. and can’t find more details about it - do you have more information on that?

And how do you texture an imported mesh? I’ve never tried it that way; I always align my data (LiDAR and photos), disable meshing for the photos, then simplify and texture. Sometimes I combine both for the mesh reconstruction because of the highest parts (cathedrals, buildings…), but it’s not easy because photogrammetry gives me artefacts in the LiDAR areas. That was another problem on this project.

For small objects a good photogrammetry workflow is nice, sure! But I definitely prefer LiDAR for large places to capture details. It takes a little longer, but it’s so nice: less noise on surfaces and real measurements.

I’m a beginner in photogrammetry, I need to learn more…

I used a Faro Focus S150. I would give you the exact resolution, but here is the overview map: medium quality at 360° for the nearest positions; medium quality but 120° at mid-distance in front of the cathedral; and high quality at 120° for the two farthest.

Top overview:

Thanks!

Hi Jerome,

I got your answer by mail but it’s not yet here in the forum…

Since I always create my meshes within RC, I never really had to think about whether it is possible to texture an imported mesh after the alignment step. I don’t see why it should need a reconstruction, though. In any case, it is very easily tested - just align, don’t reconstruct, and see if the texturing works!  :slight_smile:

I am pretty certain that your current problems are caused by a contradiction between the GPS positions of your drone and the laser scan - did you geo-reference it according to real-world coordinates? The solution in similar threads was always to deactivate the use of GPS data for all images.

In general, artifacts (I guess you are referring to noise or multiple surfaces and features) are caused by a misalignment of different camera groups. RC treats laser scans similarly to photographs, so the general rules of photogrammetry apply to them as well. I guess you observe them in the higher areas because the angle between the laser scans and the drone photographs is too big for the software to match them properly. You can use the Inspection tool to figure out which groups of images are connected to one another.

Hi Götz,

Like you said to me a little while back when you’d been away - good to have you back and not disappeared! I hope it’s just because you were very busy, or something good.

Hi Tom,

yes, very very busy!

I am still planning to catch up on the old threads but we’ll see…

Anything vital you want to draw my attention to?

If you ever want to get in touch, just search for my name online and you should find my contact details.

How are you doing?

I realize it’s off topic but since there are no PMs anymore, what can we do?  :wink: