Hello, I am often confronted with the following problem:
In very large datasets with more than a billion points, there are sometimes glitches, such as holes in the model. Our way of fixing these issues without recalculating everything is to create patches.
On the high model, we place a box around the defect, filter the points, and re-run the reconstruction inside that box.
Sometimes we merge decimated versions of these boxes together in Blender and project a texture afterwards.
But with very large datasets, and when a clean merge of the non-decimated versions is needed to reproject the normal map, merging these models in external software (Blender, MeshLab, CloudCompare, Houdini or ZBrush) is very difficult because the polygon count is so high.
So it would be great to have an option directly in RealityCapture to merge the models (just a basic merge without reconnecting the edges, so the merged model would still have loose parts).
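To illustrate what I mean by a "basic merge", here is a rough Python sketch of the operation (the file names are placeholders, only positions and triangle indices are handled, and a real implementation inside RealityCapture would of course work on its internal mesh data rather than OBJ text): it simply appends the patch's vertices and faces to the base mesh, offsetting the face indices, with no vertex welding or edge reconnection.

```python
# Sketch of a "basic merge": concatenate vertex and face lists,
# offset the second mesh's face indices, do not weld anything.
# Simplified OBJ handling: positions and faces only, no normals/UVs.

def load_obj(path):
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":
                # keep only the vertex index of each "v/vt/vn" token
                faces.append(tuple(int(tok.split("/")[0]) for tok in parts[1:]))
    return vertices, faces

def basic_merge(mesh_a, mesh_b):
    va, fa = mesh_a
    vb, fb = mesh_b
    offset = len(va)
    # shift the patch's (1-based) indices so they point at its own vertices
    shifted = [tuple(i + offset for i in face) for face in fb]
    return va + vb, fa + shifted

def save_obj(path, vertices, faces):
    with open(path, "w") as f:
        for v in vertices:
            f.write("v {} {} {}\n".format(*v))
        for face in faces:
            f.write("f " + " ".join(str(i) for i in face) + "\n")

if __name__ == "__main__":
    base = load_obj("base_high_model.obj")   # placeholder file names
    patch = load_obj("patch_box_01.obj")
    verts, faces = basic_merge(base, patch)
    save_obj("merged_with_loose_parts.obj", verts, faces)
```

The merge logic itself is trivial; at these polygon counts the real bottleneck is memory and I/O in the external tools, which is exactly why doing it directly on the model already loaded in RealityCapture would help so much.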