Removing cloud points

Hi, I would like to be able to remove points from the cloud after photo alignment, before proceeding to the mesh reconstruction. My thinking is that removing unwanted points could lead to a higher density mesh focused on the desired points only (in addition to providing faster calculation times and less hassle cleaning up the mesh afterwards). Please let me know if this makes sense.

I have read other threads on the topic but didn’t find a satisfying answer. In particular, I am not interested in the Filter tool, which works on the mesh. My point is really to start reconstruction with a clean cloud instead of one with lots of unwanted points. Thanks!


Are you talking about removing points from the alignment process? The only thing that I can think of is resizing the reconstruction region. To be honest I’m not quite sure what you mean though.

In the Alignment tab there are tools to select points from the point cloud, but I didn’t find a way to delete them. Yes, I can resize the reconstruction region, but its shape is not always suited to what I want to select. Ideally I would like to select cloud points with the lasso tool and delete them.

Ah ok. You’re wanting the same function as the reconstruction region, but with more complex shapes. I don’t believe that is possible. We’re stuck with a box tool for now. The alternative is masking out unwanted features in the images.
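
Just to illustrate what such a lasso would have to do under the hood: conceptually it’s a point-in-polygon test on the screen-space projections of the 3D points. A rough Python sketch (all names are hypothetical, this is the general technique, nothing from RC’s internals):

```python
# Rough sketch of what a lasso delete on a sparse cloud would do:
# project the 3D points to screen space, then run a point-in-polygon
# test against the lasso outline. All names are hypothetical.
import numpy as np
from matplotlib.path import Path

def lasso_delete(points_xyz, view_proj, lasso_pts):
    """points_xyz: (N, 3) cloud; view_proj: 4x4 view-projection matrix;
    lasso_pts: (x, y) lasso vertices in normalized screen coords."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])  # homogeneous coords
    clip = homo @ view_proj.T                        # project to clip space
    ndc = clip[:, :2] / clip[:, 3:4]                 # perspective divide
    inside = Path(lasso_pts).contains_points(ndc)    # point-in-polygon test
    return points_xyz[~inside]                       # keep what's outside
```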

Thanks, I hope this will be added someday!

I think there is some confusion - from what I understand, Oliver is talking about being able to edit the SPARSE point cloud, like in “other software”  :-)  This is not supported though, and I’m not sure it ever will be, because RC afaik has a different approach. So basically, if there are unwanted points, there is a flaw in the image set that needs to be fixed rather than just sidestepped. It’s the price we have to pay for higher accuracy and more detail. A few stray points won’t have any negative impact on the mesh though…

The unwanted points are not necessarily flaws in the images. For example, I have shots of a large tree trunk on the ground with sand and rocks around it. I am interested in the trunk only, not the surroundings. I would like to be able to select the sparse cloud points I don’t need and remove them before the mesh calculation. And I am wondering if that would produce a higher definition mesh (working with the High Detail setting).

Removing those adjacent rocks would not give you more detail. By default Reality Capture does not hold back on detail or give itself some cutoff point like other software. In Normal detail reconstruction it will halve the resolution of the images, and in High detail it will use the full resolution. From what I’ve read, it is in theory able to recreate details down to 0.25 pixels under perfect conditions.

The only ways to resolve more detail are:

  1. Take more pictures

  2. Get closer (kinda goes with 1)

  3. Use a higher megapixel camera

There is a tool in the program’s help document where you can input your camera specs and it will tell you, in theory, what amount of detail you can expect at best.
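
If you just want the arithmetic behind such a calculator: it boils down to the ground sampling distance, i.e. the footprint of one pixel at the subject distance. A rough sketch with illustrative numbers (the camera figures and the 0.25 px factor are assumptions from this thread, not official values):

```python
# Back-of-the-envelope ground sampling distance (GSD): how much of the
# subject one pixel covers. All figures below are illustrative
# assumptions, not official RC numbers.

def gsd_mm(distance_mm, focal_length_mm, pixel_pitch_um):
    """Footprint of one pixel on the subject, in mm."""
    return (pixel_pitch_um / 1000.0) * distance_mm / focal_length_mm

# Example: ~6 um pixel pitch (24 MP full frame), 50 mm lens, 2 m away.
px = gsd_mm(distance_mm=2000, focal_length_mm=50, pixel_pitch_um=6.0)
print(f"GSD:                       {px:.2f} mm/px")     # ~0.24 mm
print(f"Normal detail (half res):  {2 * px:.2f} mm")    # resolution halved
print(f"Theoretical best (1/4 px): {0.25 * px:.3f} mm") # ~0.06 mm
```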

You are saying “Removing those adjacent rocks would not give you more detail” - do you know that for sure, or is it an opinion? It would be great to have this confirmed by the company staff.

Even if that does not help with mesh definition, it would certainly help with computation time though.

I’m positive. Where other software solutions are limited by how much they can hold in memory in one go and have to shrink the model to fit, Reality Capture will break the model up into chunks/parts as needed. One of my last projects was close to a billion points.
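
I don’t know RC’s actual chunking scheme, but the general idea of breaking a model into independent spatial parts so each fits in memory could look something like this (purely illustrative, not RC’s algorithm):

```python
# Purely illustrative out-of-core idea: bin the points into a coarse
# spatial grid and process each cell on its own, so memory use stays
# bounded no matter how big the model gets. Not RC's actual algorithm.
import numpy as np
from collections import defaultdict

def chunk_points(points_xyz, cell_size):
    """Group an (N, 3) cloud into grid cells of edge length cell_size."""
    cells = defaultdict(list)
    for p in points_xyz:
        key = tuple((p // cell_size).astype(int))  # integer cell coords
        cells[key].append(p)
    return {k: np.array(v) for k, v in cells.items()}

cloud = np.random.rand(100_000, 3) * 100.0  # stand-in for a huge cloud
for cell, pts in chunk_points(cloud, cell_size=25.0).items():
    pass  # mesh each chunk independently, then merge the parts
```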

You could test this by reconstructing a bigger area and then just the tree trunk, and inspecting whether there is a quality difference. Run to run, even with the exact same data, settings, and variables, the model is computed slightly differently for some reason. The quality should be more or less the same though.

As far as I am aware, RC does not use the sparse point cloud for its reconstruction work. It is only used as feedback on the alignment process (and for the preview reconstruction, unless you tell RC not to use it for that).

For the reconstruction, RC calculates depth maps and derives the final mesh from combining those. This is completely unlike all the “other” software around that calculates a dense point cloud for meshing purposes.
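
To make that concrete: a depth map stores, per pixel, a distance along that pixel’s viewing ray, and 3D points fall out of it by back-projecting through the camera intrinsics. A minimal pinhole-camera sketch (the intrinsics fx, fy, cx, cy are assumed values):

```python
# Minimal pinhole back-projection: a depth map plus the camera
# intrinsics (fx, fy in pixels, principal point cx, cy -- assumed
# values) gives one 3D point per pixel.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (H, W) metric depth map -> (H*W, 3) points in camera space."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx  # back-project through the intrinsics
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

Fusing many such per-view depth maps, with consistency checks between overlapping views, is what yields the surface - rather than meshing one global dense cloud.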

Thus, you get all the detail all the time.

@ShadowTail: ok. But more generally, I think it would be good to be able to determine what to reconstruct and what to leave aside in a more precise way than a box (and in a less tedious way than having to mask input images). Even if RC doesn’t require the point cloud for reconstruction, it could use it as a guide to know what to reconstruct; an edited point cloud would thus leave the user more options than a simple box, if this is technically feasible.

ShadowTail said:

“As far as I am aware, RC does not use the sparse point cloud for its reconstruction work”

I keep needing to check whether this is true. It’s an idea that I saw as a weak, vague clue on this forum, and I have since stated it as clearly as possible several times and asked for confirmation. So I do worry that this could be an echo chamber, where people have taken what I’ve written as evidence of consensus, not of uncertainty.

So I’d like to ask ShadowTail, Götz Echtenacher and others who have, it seems, after a long “don’t know”, recently confirmed it more strongly - where did you get the “as far as I am aware…” other than from me?!

It’s such an important question.

What still causes me doubt is the other half of my question - if RC (uniquely) doesn’t use the sparse point cloud for reconstruction, but starts all over again using depth maps, in what way is that different? An xyz point in space still has to be calculated; the only difference is how the z is recorded - not as a number but as a greyscale that has a numeric equivalent.

No one has answered that, especially why it’s so much better. For me, I think that would settle the matter!
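
To illustrate what I mean by “numeric equivalent”: a stored grey level is just an encoding that maps back to a metric z, e.g. linearly between near and far planes (the linear mapping here is my assumption; real encodings vary):

```python
# Toy decode of an 8-bit grey value back to a metric z. The linear
# near/far mapping is an assumption; real depth maps use other
# encodings (inverse depth, 16/32-bit floats, ...).
def gray_to_depth(gray, z_near, z_far):
    """0..255 grey level -> depth between z_near and z_far."""
    return z_near + (gray / 255.0) * (z_far - z_near)

print(gray_to_depth(128, z_near=0.5, z_far=10.0))  # ~5.27
```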

First, in addition to what Steven said:

If you calculate meshes with differently sized reconstruction regions, the result will often be somewhat different, because if only a part of the model is included, RC will not use all cameras to create the mesh. Sometimes this can result in more detail, if for example it leaves out some images that would otherwise introduce noise.
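
As a toy illustration of why the camera set changes with the region (my assumption about how such a selection could work, not RC’s actual logic): keep only the cameras that observe at least one sparse point inside the box:

```python
# Toy illustration (an assumption, not RC's logic): keep only cameras
# that see at least one sparse point inside the reconstruction box.
# visibility maps camera id -> indices of the sparse points it observes.
import numpy as np

def cameras_for_region(points_xyz, visibility, box_min, box_max):
    inside = np.all((points_xyz >= box_min) & (points_xyz <= box_max), axis=1)
    return {cam for cam, idx in visibility.items() if inside[list(idx)].any()}
```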

Concerning the question about confirmation by the devs, I am afraid that you will probably wait until you are (color of your choice) in the face. At least that is my experience - the more in-depth the questions, the more they close up. Understandable, but nonetheless frustrating. I myself draw the line where it goes beyond knowing how settings influence the result; I really am not interested in the mathematical background. To use a drill, I don’t have to know how it is constructed, only be aware that I have to hold on tight at high speeds and with tricky material.

That said, I would suggest that everybody who wants to know more read up on the academic publications of the developers / founders, because that’s where they come from. All the other “hearsay” either stems from people’s own observations or, much of it, from the (by now mythological) Wishgranter, who knows his way around the software and also much of the background.

Thanks for the details! To get back to the original topic: I am not asking for implementation details, which obviously the developers won’t provide, but simply for the ability to define the area we want to focus on in a finer way than a box. And if this can lead to better output quality, even better! If the devs could reply on this, that would be great, thanks!