Hi @Tecnoart2022,
First of all, you need to align all your data (images and laser scans) into one component.
Then select all images in 1Ds view and disable them for meshing.
Then select the part of the sparse point cloud which you want to reconstruct from the images, using the Point Rect or Point Lasso tool.
Thank you Otrhan, I know the workflow to enable or disable parts of the photos for reconstruction or texturing, but I am asking whether it is possible to do it automatically.
I have aligned all the photos with my very high density scan. Now I want the software (if possible, automatically) to use the photos only to generate the mesh on the parts not covered by the laser scan, because the photos introduce noise on the perfect-quality scan mesh.
Doing it by hand is not feasible because there are areas of the sparse cloud where both scans and photos are present together.
No, it is not possible to do it automatically (or it can be done using an image list, but in that case you need to create that list and find the images yourself, so basically it is the same as already advised).
In the proposed workflow the images will be used only on the parts where the laser scans aren't, so the laser scan point cloud shouldn't be influenced by the images (as they will be disabled on those parts).
In cases like this you can also change the Default grouping factor (How to give more priority to laser scans in reconstrution - #2 by OndrejTrhan), so that the laser scan's point cloud will have higher priority in the places where the point cloud from the images also appears.
Yes, that's great, I'll try it. What is a good value to give the highest priority to the scan mesh but still get a good mesh in the parts where there are only photos? Many, many thanks.
Thanks for the really interesting workflow you presented.
I'm looking for the same information as @Tecnoart2022. I use really dense laser scans for the interiors and exteriors of small buildings, plus drone images for the roofing and the surrounding environment that I can't scan with the laser scanners, in order to get a complete point cloud.
There is some overlap between the laser scan data and the photogrammetry data (for example on the exteriors and walls). I use exact positioning of the laser scans for the alignment (the laser scans are assembled beforehand with another workflow).
It looks like I have a good component with all the data from the laser scans and drone images.
When I generate the mesh, it seems like some laser-scanned parts try to blend with photo-generated parts (for example with thin roofing). It looks like tweaking the Default grouping factor may help there, but there is very little information available about the influence of the Default grouping factor parameter aside from your posts.
The RC Help, however, states that "when both images and laser scans are being used to mesh, increasing this value will mean the laser scans will be prioritized for meshing", and in the "How to give more priority to laser scans in reconstruction" topic you say that "When you want to decline the influence of the images on some parts of the model, then you will set higher value there".
So I guess that setting a low value would tend to reduce the influence of the laser scans on some parts of the model?
I want to suppress the influence of the images on the parts where I have laser scan data.
So what may be a good value to give the highest priority to the scan mesh but still get a good mesh in the parts where there are only photos?
Would a good low value be 0? 0.0001? 0.5? Same with high values: 10? 1000? 999999?
I would do some experiments but would love to have your feedback on this subject.
NB:
I exported my laser scans separately depending on whether they were interior or exterior scans, in order to tweak the weights in the texturing parameters (the exterior scans' photos may be burned, but I need the interior scans' images for texturing). Can this be helpful for mesh generation?
Hi @jarmagnat
I suppose there was a mistake in that post.
To give the laser scans more influence, you should set a higher value there.
Unfortunately, there is no exact value, as each scan can be different. You can try changing the value up to 10 and then compare your results.
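One way to compare those trial values objectively, rather than by eye, is to export each candidate mesh from RealityCapture as a sampled point set and measure how far it deviates from the trusted laser-scan point cloud. This is a minimal, tool-agnostic sketch: the file contents here are synthetic stand-ins, the grouping factor values (1 and 8) and noise levels are illustrative assumptions, and a KD-tree should replace the brute-force search for real data sizes.

```python
# Hedged sketch: scoring meshes produced with different Default grouping
# factor values against the reference laser-scan cloud. All data below is
# synthetic and for illustration only; in practice, load exported point
# samples of each candidate mesh and of the laser scans instead.
import numpy as np

def mean_nearest_distance(candidate: np.ndarray, reference: np.ndarray) -> float:
    """Mean distance from each candidate point to its nearest reference point
    (brute force; fine for small samples, use a KD-tree for real clouds)."""
    # (N, M) matrix of pairwise distances, then the minimum over references
    d = np.linalg.norm(candidate[:, None, :] - reference[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Synthetic stand-ins: a laser cloud and two candidate meshes, one blended
# more strongly with noisy photo data than the other.
rng = np.random.default_rng(0)
laser = rng.uniform(0.0, 1.0, size=(500, 3))
mesh_gf_1 = laser + rng.normal(0.0, 0.01, size=laser.shape)   # noisier blend
mesh_gf_8 = laser + rng.normal(0.0, 0.001, size=laser.shape)  # closer to scans

for name, mesh in [("grouping factor 1", mesh_gf_1),
                   ("grouping factor 8", mesh_gf_8)]:
    print(name, round(mean_nearest_distance(mesh, laser), 5))
```

A lower mean distance means the candidate mesh stays closer to the laser geometry, so running this over a few grouping factor trials gives a simple numeric basis for picking a value.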
Also, how big is the blending of scans and images? It could also be the result of inexact alignment of the images. In that case you can improve it using control points.
I suppose it won’t help in the meshing process, as you need to merge those scans together anyway.