Virtual scan position point clouds result in holes in 3D mesh

Hi, I am desperately trying to add data from mobile laser scanning into a model of a building made with a stationary laser scanner.

I have correct data from the surveyor, processed in Faro SCENE. But the roofs were scanned with a mobile scanner, and the export of the mobile scanner data does not provide the ordered clouds required by RealityScan, just one large E57 file. I found a workaround here on the forums to create virtual scan positions in Faro SCENE, but it does not work properly for me, because the virtual scans show all the points of the point cloud as if there were no real-world physical barriers to shield the scan positions from “seeing through walls”. The result is that meshing in RealityCapture creates holes in the model everywhere points from multiple scan positions overlap…

I suspect this should be solvable somewhere in the meshing settings, but I cannot figure out how.


This image shows the model from mobile scanner data exported as virtual scan positions with a limitation of 5 meters from the scan position - everywhere the point clouds from more than one scan position “intersect”, the geometry is missing… all the beams of the roof and part of the big stair in the middle.
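(As a side note, the 5 m limitation itself is easy to reproduce outside SCENE. A rough NumPy sketch of what that cropping amounts to - the file names, scan position, and plain-XYZ format here are just placeholders:)

```python
import numpy as np

# Hypothetical example: crop a merged cloud to points within 5 m of a
# virtual scan position, mimicking the distance limit used in SCENE.
points = np.loadtxt("merged_cloud.xyz")        # assumed N x 3 array of XYZ coordinates
scan_position = np.array([12.3, 4.5, 6.7])     # assumed virtual scan position, same coordinate system
max_distance = 5.0                             # metres, same limit as used in SCENE

distances = np.linalg.norm(points - scan_position, axis=1)
cropped = points[distances <= max_distance]
np.savetxt("virtual_scan_5m.xyz", cropped)
print(f"kept {len(cropped)} of {len(points)} points")
```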


Here is the print screen of the imported scanning data processed as virtual scan positions in Faro SCENE LT - the point clouds look correct and complete in RealityCapture; however, when I try to mesh this, the same thing happens: the overlap of points from the individual scan positions gets lost, resulting in holes in the model…


You can see that the internal wooden construction of the roof has almost completely disappeared, and there are also large holes in the ceiling.

Any idea what I could do about it? I need to process the whole survey of a national monument in RealityCapture, as it requires combining huge laser scan and photogrammetry datasets, and I need to produce the complete meshed model and orthographic projections in RealityCapture. RealityCapture is the essential tool in this workflow that connects all the different data sources from the survey, but the data from the mobile scanner is now missing because of this problem. So please…

HELP ME OBI-WAN KENOBI, YOU ARE MY ONLY HOPE!



@OndrejTrhan

Would you be able to help me, please?

Hello Vratislav,
I am sorry for your issues.
In general, mobile laser scans are not supported by RealityScan. There is this workaround, but it is not ideal.
I would experiment with the number of laser scan positions.
Are the data in the second and third images the same? The scan distribution seems to be different.
Have you tried the new aerial LiDAR functionality? I know it was created for aerial data, but it could give some result for your data as well.
What are your reconstruction settings?


Thank you for your reply!

I am aware of the problems with mobile scans. Still, since this is the dataset we got, I am trying to process it in the best way possible…

The main issue here is that the export of mobile scanner data from FARO Connect appears to be broken. It is capable of exporting ordered clouds (practically in the same form as static scans), but for some strange reason each scan position that matches the corresponding panoramic view from the built-in camera is placed in the correct place in space yet randomly rotated. It is an error in their export solution that does not orient the individual scan positions correctly (we got this pretty much confirmed by their support).
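Just to illustrate where the export fails: if the correct orientation of each scan position were known, fixing it would be a simple rigid rotation of its points about the scan origin. A rough SciPy sketch with placeholder file names and a placeholder correction angle (the real correction would have to come from FARO Connect):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical illustration: a scan whose points sit at the right origin but
# with a wrong orientation can be fixed by rotating them about that origin.
points = np.loadtxt("scan_042.xyz")            # assumed N x 3 points of one scan position
scan_origin = np.array([10.0, 20.0, 1.5])      # scan-position origin (already correct in the export)

# Placeholder correction of 90 degrees about Z; the true per-scan rotation
# is exactly the information the broken export does not provide.
correction = R.from_euler("z", 90, degrees=True)

fixed = correction.apply(points - scan_origin) + scan_origin
np.savetxt("scan_042_reoriented.xyz", fixed)
```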


This is the data exported from FARO Connect. The individual point clouds are correctly limited to what was visible from each position (no “seeing through walls” like with the virtual scan position workaround), but the export does not maintain the correct orientations, resulting in a disorganized mess.



The same data in RC + meshed result, which only confirmed the nature of the issue.

I attempted to import this data as unregistered or draft into RealityCapture, locked the position information, and tried to align the scans using the image data stored in each scan. Still, RC could not solve this because the visual information was of too low quality - the best result I achieved after multiple attempts was 4 scans aligned out of 440.

And since the dataset I have problems with is the attic space, which has very low light and repeating patterns of beams, correcting the mismatched rotations by aligning in RC does not work at all.

So, the only way I managed to get point cloud data from mobile scans into RC was via the workaround described here on the forum - through virtual cameras generated in Faro SCENE.

This data looks correct when I import it into RC:

But the issue is here:


As these virtual scan positions are only cut out of the already assembled complete point cloud of the whole space, they do not have the same physical limitations as static scans or the scans generated in Faro Connect from the panoramic images. The image data RC generates from this point cloud is essentially a “pointillist painting” - the view is composed of individual points, regardless of whether they could actually be seen from that spot in the real-life location. This is why we can see the points corresponding to windows on the lower floor even from the roof space - the ceiling, beams, and walls are not opaque but “see-through”. The result is incorrect meshing, which “devours” all points that overlap in multiple scan positions.
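What I would need is essentially a per-position visibility filter applied to these virtual scans before meshing. As an illustration of what I mean, Open3D’s hidden point removal does roughly this kind of cleanup; a rough sketch, assuming a per-position cloud cut from the merged scan and a known virtual scan position (file names and coordinates are placeholders):

```python
import numpy as np
import open3d as o3d

# Rough sketch: keep only points that are plausibly visible from one virtual
# scan position, using Open3D's hidden point removal.
pcd = o3d.io.read_point_cloud("virtual_scan_012.ply")   # assumed per-position cloud
scan_position = np.array([12.3, 4.5, 6.7])              # assumed virtual scan position

# The radius controls how aggressively occluded points are dropped; a common
# heuristic is a large multiple of the cloud's extent, tuned per dataset.
diameter = np.linalg.norm(pcd.get_max_bound() - pcd.get_min_bound())
_, visible_idx = pcd.hidden_point_removal(scan_position, radius=diameter * 100)

visible = pcd.select_by_index(visible_idx)
o3d.io.write_point_cloud("virtual_scan_012_visible_only.ply", visible)
```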

I would compare this to the phenomenon of a moving object disappearing from photogrammetry models (for example, when a door is moved during a photo shoot). We can see what is behind the object in different cameras, so the meshing process in RealityCapture effectively “deletes” the parts that are in the foreground and only keeps the parts that are farther away and unobscured.

This is the result after I managed to align the “virtual scan position” data with the rest of the model made with the stationary scanner (using the same coordinate system provided by the surveyor) - as you can see, the ceiling of the topmost floor and the majority of the roof construction were omitted in the meshing process, resulting in the holes…

My reconstruction settings are standard: normal detail, and minimal intensity for the point cloud data set to 0, so no points are lost. But I have not found any setting that would allow any change to the meshing process itself (like not forcing watertight meshes).

Now that I think about it again, it would be great to have an option to use point cloud data directly as a dense cloud for meshing - simply adding the point cloud to the points calculated from photogrammetry and connecting them with triangles.
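To show what I have in mind (outside of RC, purely as an illustration): meshing the laser points directly, without any visibility-based carving, is roughly what a plain Poisson reconstruction over the cloud does. A minimal Open3D sketch, assuming a registered merged cloud and estimating normals on the fly (file names and parameters are placeholders):

```python
import open3d as o3d

# Minimal illustration of meshing a point cloud directly, without any
# visibility/carving step - just a surface fitted over the dense points.
pcd = o3d.io.read_point_cloud("roof_scans_merged.ply")   # assumed merged mobile-scan cloud
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)
o3d.io.write_triangle_mesh("roof_direct_mesh.ply", mesh)
```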

Is it possible for you to split the dataset following the shape of the floor? Like:


and process the parts separately?

This is not the solution to the problem, because the same holes appear even in the roof space alone - the beams of the roof get lost in the meshing for the reasons described earlier.

In my opinion, there are only two options for resolving this issue.

  1. To correctly split the unified point cloud and export the individual scan positions as partial point clouds that only include points really visible from those positions in space (no points of back-facing surfaces).

  2. To have a meshing process that does not delete the overlapping points from multiple scan positions or that includes all points stored in point clouds directly.

I will add this to our feature request database.