What might help make a very sparse point cloud denser?



I have a model that has an extraordinarily sparse point cloud.

Usually, my point clouds are rather dense, but not this one.


Does anybody have any tips for me on how to make it denser?

All cameras are aligned.


Is there still anything I can do?


Thank you.


Edit: Now I’m REALLY confused. I thought that everything was perfectly fine when all cameras are aligned.

Now I’ve run the alignment again, but with Detector Sensitivity “Ultra” and higher max features per mpx and per image, and I got a MUCH denser point cloud.

The upper image shows all cameras aligned with lower Detector Sensitivity and lower max features.

The lower image shows all cameras aligned with high Detector Sensitivity and higher max features.

It’s clear to see that the “high” alignment settings produce a denser point cloud.


What is happening here?

Does this affect the reconstruction?

PS: I created the low-alignment-settings model AFTER the high-alignment-settings model because I wanted to make sure that the previous model doesn’t (positively) influence the new model and ruin my test.


Why do you need the sparse cloud (tie points) to be denser? It’s better to have fewer but accurate points than a ton of them with greater error.

Higher sensitivity does mean more points overall, but at the cost of RC picking less accurate ones.

So your errors will be bigger.



Let’s get this straight - is it true or not that

as long as all (or enough) photos are Aligned,

then no need for any ‘better’ point cloud - job done -

move on to Reconstruction -

because Reconstruction doesn’t use the point cloud, just the Aligned photos?

@Tom Foster

That is exactly the question!

The devs need to jump in here.

Most photogrammetry software first computes a sparse point cloud, then a dense one, and finally turns the dense cloud into a mesh - including all the noise that comes with it.

RC uses a different approach (Depth maps).

RC can (and by default does) use the sparse cloud to give you a quick result when you do a preview reconstruction.
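To make the depth-map idea concrete: this is not RC’s internal code, just a minimal numpy sketch of what a depth map encodes. Each pixel stores a depth along that camera’s viewing ray, so back-projecting through the (assumed, pinhole) intrinsics `fx, fy, cx, cy` gives one 3D point per pixel - a dense, per-view set of points, as opposed to the sparse cloud, which only contains features triangulated across multiple images.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into 3D points (camera coordinates).

    Pixel (u, v) with depth d becomes ((u - cx) * d / fx, (v - cy) * d / fy, d).
    One point per valid pixel, so even a small depth map yields far more
    points for that view than a handful of triangulated tie points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no measured depth

# Toy 4x4 depth map (hypothetical values): 16 pixels -> 16 points for one view.
depth = np.full((4, 4), 2.0)
pts = backproject_depth(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

The point of the sketch: depth maps are computed per image pair/view during reconstruction, so they depend on the camera poses from alignment, not on how dense the sparse tie-point cloud happens to be.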

“Including all the noise …” That’s a great insight.

So ‘starting afresh with depth maps from the aligned photos’ is just that - a fresh start (for better or worse) from whatever quality of photo alignment has been achieved (which could be great, but could also be full of errors), but at least it’s without the ‘noise’ accumulated in the ‘try this/try that’ of getting there.

Personally, I still don’t understand how a data point in a depth map is any different from a point in a point cloud, nor how that data point is calculated if not by projection to a tie point (just like the point cloud).

But they say it’s not only superior, but very much quicker than computing a dense point cloud - hence RC’s speed?