Virtual scan position point clouds result in holes in 3D mesh

Thank you for your reply!

I am aware of the problems with mobile scans. Still, since this is the dataset we got, I am trying to process it in the best way possible…

The main issue here is that the export of mobile scanner data from FARO Connect appears to be broken. It is capable of exporting ordered clouds (practically in the same form as from static scans), but for some strange reason each scan position, which matches the corresponding panoramic view from the built-in camera, is placed in the correct place in space yet randomly rotated. It is an error in their export solution that does not orient the individual scan positions correctly (we pretty much got this confirmed by their support).
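Just to illustrate what a correct export (or a post-processing fix, if the per-scan orientations were ever recoverable) would have to do: since the scan origins are already correct, each scan's points would only need to be rotated about their own origin. This is a hypothetical numpy sketch, not anything FARO Connect exposes; scan_points, scan_origin, and R_fix are all placeholder names.

```python
import numpy as np

def reorient_scan(scan_points: np.ndarray,
                  scan_origin: np.ndarray,
                  R_fix: np.ndarray) -> np.ndarray:
    """Rotate one scan position's points about its own scan origin.

    scan_points : (N, 3) XYZ points of a single scan position
    scan_origin : (3,) scanner position (this part is already correct in the export)
    R_fix       : (3, 3) rotation matrix that would undo the random export rotation
    """
    # Translate to the scan origin, rotate, translate back.
    return (scan_points - scan_origin) @ R_fix.T + scan_origin
```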


This is data exported from FARO Connect. The individual point clouds are correctly limited to what was visible from each position (no “see through walls” as with the virtual scan position workaround workflow), but the export does not maintain the correct orientation, resulting in a disorganized mess.



The same data in RC + meshed result, which only confirmed the nature of the issue.

I attempted to import this data into Reality Capture as unregistered or draft, locked the position information, and tried to align the scans using the image data stored in each scan. RC could not solve it, though, because the visual information was of too low quality - the best result I achieved after multiple attempts was 4 scans aligned out of 440.

And since the dataset I have problems with is the attic space, which has very low light and repeating patterns of beams, aligning the mismatched rotations in RC does not work at all.

So, the only way I managed to get point cloud data from mobile scans into RC was via the workaround described here on the forum - through virtual cameras generated in Faro SCENE.

This data looks correct when I import it into RC:

But the issue is here:


As these virtual scan positions only cut out a part of the already assembled complete point cloud of the whole space, they are not limited by the physical space the way static scans or the scans generated in FARO Connect from the panoramic images are. The image data RC generates from this point cloud is essentially a “pointillistic painting” - the view is composed of individual points, regardless of whether they could actually be seen from that spot in the real-life location. This is why we can see the points corresponding to windows on the lower floor even from the roof space - the ceiling, beams, and walls are not opaque, but “see-through”. The result is an incorrect mesh, which “devours” all points that overlap in multiple scan positions.
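To illustrate the difference, a physically limited virtual scan position would need something like an occlusion pass: project the merged cloud into a spherical panorama around the virtual position and keep only the nearest point per viewing direction, so anything behind a wall or ceiling is culled. This is only a rough numpy sketch of the idea, not something SCENE or RC actually offers as far as I know.

```python
import numpy as np

def visible_points_from(position: np.ndarray, points: np.ndarray,
                        pano_width: int = 2048, pano_height: int = 1024) -> np.ndarray:
    """Keep only the points nearest to `position` in each direction (a spherical z-buffer).

    position : (3,) virtual scan position
    points   : (N, 3) merged point cloud of the whole space
    Returns the subset of `points` that would be visible from `position`.
    """
    d = points - position
    r = np.linalg.norm(d, axis=1)                                  # distance to each point
    azimuth = np.arctan2(d[:, 1], d[:, 0])                         # -pi .. pi
    elevation = np.arcsin(np.clip(d[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))

    # Map viewing directions onto panorama pixel indices.
    u = ((azimuth + np.pi) / (2 * np.pi) * pano_width).astype(int) % pano_width
    v = ((elevation + np.pi / 2) / np.pi * (pano_height - 1)).astype(int)
    pixel = v * pano_width + u

    # For every pixel keep only the closest point; everything behind it is occluded.
    order = np.argsort(r)                                          # nearest points first
    _, first_idx = np.unique(pixel[order], return_index=True)
    return points[order[first_idx]]
```

It is a coarse approximation (one surviving point per pixel), but it shows why the “pointillistic” renders see through floors: without such a depth test, every point in the building ends up in the virtual view.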

I would compare this to the phenomenon of a moving object disappearing from photogrammetry models (for example, when a door is moved during a photo shoot). We can see what is behind the object in different cameras, so the meshing process in Reality Capture effectively “deletes” the parts that are in the foreground and only keeps the parts that are farther away and unobscured.

This is the result after I managed to align the “virtual scan positions” data with the rest of the model made with the stationary scanner (using the same coordinate system provided by the surveyor) - as you can see, the ceiling of the topmost floor and the majority of the roof construction were omitted in the meshing process, resulting in the holes…

My reconstruction settings are standard, with normal detail and the minimal intensity for the point cloud data set to 0, so no points are lost. But I have not found any setting that would let me change the meshing process itself (like not forcing watertight meshes).

Now that I think about it again, it would be great to have an option to use point cloud data directly as a dense cloud for meshing - simply adding the point cloud to the points calculated from photogrammetry and connecting them with triangles.
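As a rough illustration of what I mean (outside of RC, using the open-source Open3D library only to show the idea - the file names are placeholders): merge the laser points with the photogrammetry points and triangulate them directly.

```python
import open3d as o3d

# Hypothetical inputs: the laser-scan cloud and the dense cloud from photogrammetry.
scan_cloud = o3d.io.read_point_cloud("laser_scans.ply")
photo_cloud = o3d.io.read_point_cloud("photogrammetry_dense.ply")

# Merge both clouds so the meshing step sees laser and photo points together.
merged = scan_cloud + photo_cloud
merged.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Connect the points with triangles (Poisson surface reconstruction here,
# but any triangulation of the merged cloud would serve the same purpose).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=10)
o3d.io.write_triangle_mesh("merged_mesh.ply", mesh)
```

Poisson reconstruction tends toward closed surfaces; ball pivoting would be closer to “just connect the points with triangles”, but the principle of feeding the point cloud straight into the meshing step is the same.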