Virtual scan position point clouds result in holes in 3D mesh

Hi, I am desperately trying to add data from mobile laser scanning into a model of a building made with a stationary laser scanner.

I have correct data from the surveyor, processed in Faro SCENE. But the roofs were scanned with a mobile scanner, and the mobile scanner export does not provide the ordered clouds that RealityScan requires, just one large E57 file. I found a workaround here on the forums to create virtual scan positions in Faro SCENE, but it does not work properly for me, because the virtual scans show all the points of the point cloud, as there are no real-world physical barriers to shield the scan positions from “seeing through walls”. The result is that meshing in RealityCapture creates holes in the model everywhere that points from multiple scan positions overlap…

I suspect this should somehow be solved in the meshing settings, but I cannot figure out how.


This image shows the model from mobile scanner data exported as virtual scan positions with a limit of 5 meters from the scan position - everywhere that point clouds from more than one scan position “intersect”, the geometry is missing… all the beams of the roof and part of the big stair in the middle.


Here is a screenshot of the imported scanning data processed as virtual scan positions in Faro SCENE LT - the point clouds look correct and complete in RealityCapture. However, when I try to mesh this, the same thing happens: the overlapping points from the individual scan positions get lost, resulting in holes in the model…


You can see that the internal wooden construction of the roof almost completely disappeared, and there are also large holes in the ceiling.

Any idea what I could do about it? I need to process the whole survey of a national monument in RealityCapture, as it requires including huge laser scanning and photogrammetry datasets, and I need to produce the complete meshed model and orthographic projections in RealityCapture. RealityCapture is the essential tool in this workflow, connecting all the different data sources from the survey, but the data from the mobile scanner are now missing because of this problem. So please…

HELP ME OBI-WAN KENOBI, YOU ARE MY ONLY HOPE!



@OndrejTrhan

Would you be able to help me, please?

Hello Vratislav,
I am sorry about your issues.
In general, mobile laser scans are not supported by RealityScan. There is this workaround, but it is not ideal.
I would experiment with the number of laser scan positions.
Are the data in the second and third images the same? The distribution of the scans seems to be different.
Have you tried the new aerial LiDAR functionality? I know it was created for aerial data, but it could give some result for your data as well.
What are your reconstruction settings?


Thank you for your reply!

I am aware of the problems with mobile scans. Still, since this is the dataset we got, I am trying to process it in the best way possible…

The main issue here is that the export of mobile scanner data from FARO Connect appears to be broken - it is capable of exporting ordered clouds (in practically the same form as static scans). However, for some strange reason, each scan position that matches the corresponding panoramic view from the built-in camera is placed in the correct place in space, but it is randomly rotated. It is an error in their export solution that does not orient individual scan positions correctly (we got this pretty much confirmed by their support).
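For what it's worth: since each scan is positioned correctly and only rotated wrongly, if even a handful of corresponding points between a rotated scan and the already registered reference cloud can be picked (manually or otherwise), the per-scan rotation could in principle be recovered with a standard Kabsch/SVD fit. A minimal sketch in Python with NumPy - the function name and workflow are my own illustration, not part of any FARO or RealityCapture tool:

```python
import numpy as np

def kabsch_rotation(src, dst):
    """Best-fit rotation matrix mapping centered `src` points onto
    centered `dst` points (Kabsch algorithm via SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # correct an improper (mirrored) fit
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T                    # R such that R @ src_c ~= dst_c
```

Applying the recovered rotation about the (already correct) scan position would re-orient the scan; doing this for 440 scans would of course only be practical if the correspondence picking can be automated.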


This is data exported from FARO Connect. The individual point clouds are correctly limited to what was seen from each position (no “see through walls” like with the virtual scan position workaround workflow), but the export does not maintain the correct orientation, resulting in a disorganized mess.



The same data in RC plus the meshed result, which only confirms the nature of the issue.

I attempted to import this data as unregistered or draft into RealityCapture, locked the position information, and tried to align the scans using the image data stored in each scan. Still, RC could not solve this because the visual information was of too low quality - the best result I achieved after multiple attempts was 4 scans aligned out of 440.

And since the dataset I have problems with is the attic space, which has very low light and repeating patterns of beams, aligning the mismatched rotations in RC does not work at all.

So, the only way I managed to get point cloud data from mobile scans into RC was via the workaround described here on the forum - through virtual cameras generated in Faro SCENE.

This data looks correct when I import it into RC:

But the issue is here:


As these virtual scan positions only cut from the already assembled complete point cloud of the whole space, they do not have the same physical limitations as static scans or the scans Faro Connect generates based on the panoramic images. The image data RC generates from this point cloud is essentially a “pointillistic painting” - the view is composed of individual points, regardless of whether they could be seen from that spot in the real-life location or not. This is why we can see the points corresponding to windows on the lower floor even from the roof space - the ceiling, beams, and walls are not opaque, but “see-through”. The result of this is an incorrect meshing, which “devours” all points that overlap in multiple scan positions.
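The visibility filtering that the virtual scan positions lack could in principle be reproduced per viewpoint with the classic "hidden point removal" operator (Katz et al.): spherically flip the cloud around the viewpoint, then keep only the points that land on the convex hull of the flipped set. A rough sketch with NumPy and SciPy - a simplified illustration of the technique, not a feature of SCENE or RealityCapture:

```python
import numpy as np
from scipy.spatial import ConvexHull

def visible_points(points, viewpoint, gamma=100.0):
    """Return indices of points visible from `viewpoint` using hidden
    point removal (Katz et al.): spherical flipping + convex hull.
    Assumes no point coincides exactly with the viewpoint."""
    p = points - viewpoint                       # cloud relative to viewpoint
    norms = np.linalg.norm(p, axis=1)
    radius = norms.max() * gamma                 # flipping sphere radius
    # Reflect each point across the sphere: nearer points land farther out,
    # so occluded points end up strictly inside the flipped hull.
    flipped = p + 2 * (radius - norms)[:, None] * (p / norms[:, None])
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))  # include viewpoint
    return hull.vertices[hull.vertices < len(points)]     # drop viewpoint index
```

Running this once per virtual scan position and exporting only the visible subset would approximate what a real scanner sees from that spot, which is essentially the "option 1" splitting described further down.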

I would compare this to the phenomenon of a moving object disappearing from photogrammetry models (for example, when a door is moved during the photo capture). We can see what is behind the object in different cameras, so the meshing process in RealityCapture effectively “deletes” the parts that are in the foreground and only keeps the parts that are farther away and unobscured.

This is the result after I managed to get the “virtual scan positions” data to align with the rest of the model done with the stationary scanner (using the same coordinate system provided by the surveyor) - as you can see, the ceiling of the topmost floor and the majority of the roof construction were omitted in the meshing process, resulting in the holes…

My reconstruction settings are standard, with normal detail and the minimal intensity for the point cloud data set to 0, so no points are lost. But I have not found any setting that would change the meshing process itself (like not forcing watertight meshes).

Now that I think about it again, it would be great to have an option to use point cloud data directly as a dense cloud for meshing - simply adding the point cloud to the points calculated from photogrammetry and connecting them with triangles.

Is it possible for you to split the dataset following the shape of the floor? Like:


and process the parts separately?

This is not the solution to the problem, because the same holes appear even in the roof space alone - the beams of the roof get lost in the meshing for the reasons described earlier.

In my opinion, there are only two options for resolving this issue.

  1. To correctly split the unified point cloud and export individual scan positions as partial point clouds that only include points actually visible from those positions in space (no points on back-facing surfaces).

  2. To have a meshing process that does not delete the overlapping points from multiple scan positions or that includes all points stored in point clouds directly.

I will add this to our feature request database.

Hi, any luck with the meshing problem I described last year? It is still a bit of an issue for my workflows, as sometimes we get merged point clouds, and it seems there is no way to process these correctly in RealityCapture and integrate them with the stationary laser scanner datasets and photos…

It would be enough if it were possible to load such unified point cloud data directly, using its existing registration (it fits into the correct place in space with the correct spatial coordinates), and to use all points within the unified point cloud to calculate the mesh geometry.

The registered, but “unordered” point cloud does not need to be part of the alignment process in RC/RS at all; it just needs to be used alongside the dense points calculated during the mesh calculation phase…

The point is just to get a high-resolution mesh model, which would include the data stored in an unordered point cloud alongside the data RC/RS calculates in its default workflow from photos and ordered scans.

I am insisting on this because RC/RS is the only workspace that seems to enable calculating meshes at the full quality of the laser scans, resulting in meshes with billions of points, and enables making 2D orthographic projections from such models. Therefore, it is the ONLY tool I have found that gets the full quality of the scanned data into an easily understandable and readable 2D form, and that produces simplified 3D models by reducing the geometry according to surface curvature rather than just by thinning the point cloud (which loses the sharpness of edges).

Other workarounds, such as meshing the unordered point clouds outside of RC/RS (MeshLab or CloudCompare, for example), cannot handle enough data to include all points from the scanning in the mesh, and the results are blurry, low-res meshes that can be imported into RC correctly but cannot be “merged” with the rest of the datasets processed in RC/RS.

So, for example - the case I presented almost a year ago - we have photos and stationary laser scans of a building, but the attic is better scanned with a mobile scanner (almost no light for photos, a visually monotonous environment with lots of hard-to-see overlaps of beams).

If we are not able to calculate the mesh in RC/RS from all the data, the attic is missing from the final model. And even if we calculate the mesh of the attic from the mobile scanner data outside of RC/RS, there is no way to do it at full quality and to integrate it properly with the rest of the data processed in RC/RS… So neither the high-res 3D model from RC/RS nor the unified point cloud meshed in CloudCompare is complete, and they cannot be joined into one correct, high-res mesh in any way, since data were missing from both mesh calculations…

So I wanted to ask - Was there any development in this regard lately? Would it be possible to import the unified point cloud, if only for the mesh calculation phase, so that it behaves as part of the “dense point cloud” used for high-res mesh calculation in RC alongside points RC calculates from photogrammetry and ordered laser scans?

Thanks!

Mobile laser scanners currently use SLAM technology, which produces a very noisy point cloud. The scan could be filtered to reduce the noise, but the mesh will never be as clean as one generated from a TLS.
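As a concrete example of that filtering step, a standard statistical outlier removal pass (the same idea behind CloudCompare's SOR filter) can be sketched in a few lines of Python with NumPy and SciPy - an illustration of the technique, not of any particular product's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def sor_filter(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds the cloud-wide average of that
    statistic by more than `std_ratio` standard deviations."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # nearest hit is the point itself
    mean_d = dists[:, 1:].mean(axis=1)       # mean distance to the k neighbours
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep], keep
```

Typical starting values are k between 6 and 16 and std_ratio around 1.0-2.5; tighter ratios remove more SLAM noise but can also eat into thin structures like the roof beams discussed above.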
