2D Tie Point = 3D Tie Point = Vertex?

I’ve been trying to figure this one out, but Gotz told me to stop reading the forums and just post questions as I think of them. 

It seems the Tie Points that appear in 2D view are related/equal to Tie Points in 3D view, with some projection error. As discussed in some detail here: https://support.capturingreality.com/hc/en-us/community/posts/115002427611-Pixels-Points-Features-Projections

 

What I’m not sure about is how those 3D tie points are then related to resulting mesh and vertices. 

Empirically, it’s clear that if a part of an image doesn’t show any 2D tie points, then the resulting model of that area will have holes.

Do the 3D Tie Points become vertices of the mesh?

Are triangles just filled in between 3D Tie Points to generate a wire frame?

Or is there more to that (as is usually the case)?

Hi Tim,

I just wanted to say that I appreciate the fact that you read and think first and then ask. But if it encouraged you in this case, all the better!

Interesting question. Digs quite a bit into the innards of photogrammetry. As you are probably aware, I am also self-taught and still learning (does it ever stop?) so take what I say with a healthy portion of scepticism.

The short answer is that I am pretty certain that a calculated vertex has very little to do with a tie point.

As far as I understand it, the whole process up to the mesh consists of three more or less independent stages. The first is figuring out where all the cameras go; this is the alignment (aka SfM, structure from motion) with features and tie points (simply reference points), with the side-effect of calculating the lens geometry. After that, the real photogrammetry kicks in, which uses pairs of cameras (I believe) to calculate depth information. Whether this process still uses tie points I do not know, though I would doubt it. I guess that where RC won’t find features, photogrammetry won’t find anything either. And finally, I think that all of the depth info is assembled into a mesh in a third stage.
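If it helps, here is how I picture the hand-over between those three stages as plain data - a minimal sketch in Python, where all the names are mine and have nothing to do with RC’s internals:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Camera:               # output of stage 1 (alignment / SfM)
    K: np.ndarray           # 3x3 intrinsics - the calculated lens geometry
    R: np.ndarray           # 3x3 rotation, world -> camera
    t: np.ndarray           # 3-vector translation, world -> camera

@dataclass
class TiePoint3D:           # the sparse point cloud, a stage-1 by-product
    xyz: np.ndarray         # position in project units
    images: list[int]       # indices of the images that saw it

@dataclass
class DepthMap:             # output of stage 2 (dense matching)
    camera: Camera
    depth: np.ndarray       # H x W array, one depth value per pixel

@dataclass
class Mesh:                 # output of stage 3 (meshing)
    vertices: np.ndarray    # N x 3
    faces: np.ndarray       # M x 3 vertex indices
```

The point being that stage 2 would only need the cameras from stage 1, not the tie points themselves - at least that is my reading.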

Please correct me if I’m wrong. Vlad?  :slight_smile:

‘Depth information’ or ‘depth map’ sounds like it should be almost self-evident, but I’d love to get some explanation (we did so well in https://support.capturingreality.com/hc/en-us/community/posts/115002427611-Pixels-Points-Features-Projections)

 

Tim B said: “It seems the Tie Points that appear in 2D view are related/equal to Tie Points in 3D view, with some projection error”.

To be clear, I’d say 2D Tie points are related/equal to 3D Tie points exactly - the REprojection error is between the 2D Tie point and the original detected Feature close alongside it (both on the 2D photo plane).

 

Do we agree?!

I think one of us just has to take a degree in photogrammetry. That would probably be quicker!    lol

In my understanding, 2D TPs are the marks on the image, with x and y coordinates in pixels.

Those are projected according to the outer orientation (alignment) and inner orientation (undistortion). Those beams will never exactly intersect, so RC creates a 3D TP at the coordinates (x, y, z in project units) with the least remaining error. From this calculated point, each of the “beams” is re-projected onto the image plane. The error is the distance from this new point to the original Tie Point.
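Roughly like this, in textbook form - a minimal two-view sketch with numpy, where P1 and P2 stand for the 3x4 projection matrices (outer + inner orientation) that come out of the alignment; the notation is mine, not anything RC exposes:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: find the 3D point whose projections
    # through cameras P1 and P2 best match the observed pixels x1, x2,
    # since the two "beams" never intersect exactly.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]                      # homogeneous -> (x, y, z, 1)

def reprojection_error(P, X, x_observed):
    # Re-project the calculated 3D point onto the image plane and
    # measure the pixel distance to the originally detected point.
    x_proj = P @ X
    x_proj = x_proj[:2] / x_proj[2]
    return np.linalg.norm(x_proj - x_observed)
```

The value that comes out of reprojection_error is, as far as I can tell, the per-image error that RC reports in pixels.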

I found an illustration that shows what I mean. I hope the link works, since it’s a proxy - the page where it is embedded doesn’t work for me…

https://ixquick-proxy.com/do/spg/show_picture.pl?l=deutsch&rais=1&oiu=http%3A%2F%2Fncsu-geoforall-lab.github.io%2Fuav-lidar-analytics-course%2Flectures%2Fimg%2FBundle_Block_Adjustment.png&sp=8a64028c4d2daf1097113aea677c7b6d

Very good. It boils down to ‘what are X1j, X2j, X3j called, in RC?’.

I think they are detected Features in each image (or some kind of ‘central’ point of each Feature, which is in fact a visual motif that RC recognises as the ‘same thing’ in images 1, 2 and 3).

P1, P2, P3 are the cameras.

Xj is the 3D Tie point, the best compromise between the blue lines projected from the P’s through the X’s, which don’t exactly meet.

P1Xj, P2Xj, P3Xj are the 2D Tie points, resulting from REprojection along the black lines from Xj back to P1, P2, P3.

The distance between X1j and P1Xj etc is the REprojection error, measured in pixels on the picture plane.
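Or, to put the same thing in the usual bundle-adjustment notation (mine, not RC’s):

$$ e_{ij} = \left\| \mathbf{x}_{ij} - \pi(P_i, \mathbf{X}_j) \right\| $$

where $\mathbf{x}_{ij}$ is the detected point of Feature/motif $j$ in image $i$, $\pi(P_i, \mathbf{X}_j)$ is the reprojection of the 3D Tie point $\mathbf{X}_j$ through camera $P_i$, $e_{ij}$ is the REprojection error in pixels, and the alignment chooses the $\mathbf{X}_j$ (and the $P_i$) that minimise $\sum_{i,j} e_{ij}^2$.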

I think that what I’m calling Features (X1j etc), you’re calling 2D Tie points. Then what do you call P1Xj etc, other than ‘new point’?

I don’t want a photogrammetry degree. Got enough degrees already … :frowning:

 

Thanks for the info, Gotz and Tom. That diagram was quite useful. 

It was Tom’s original post on features/pixels/etc. that rekindled my confusion on the subject. 

 

The primary goal of this thought process was to get some sort of quick way of estimating where the holes would be in the model. It’s hard to identify problem areas by looking at the 3D view after alignment, and reconstructing a model to get vertices or a solid view may take a while.

So maybe a quick scan of images to see if there are “problem areas” with missing 2D Tie Points could be useful. 

Or do you guys have other tips/tricks for this?

Hi Tom,

you’re spot on!  :-)  I see X1,2,3j as 2D Tie Points, Xj as the 3D result.

I think the confusion is that all Tie Points are Features (or rather the center of it) but not all Features are Tie Points.

Features are a way for RC to determine similarities. For mathematical purposes, one coordinate (the center) is necessary. Those CAN be used as Tie Points, IF they are present in several images AND identified as matches. The Track Length expresses in how many images this one Feature was identified.
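A toy way of writing that down (Python, names entirely mine):

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    image_id: int
    x: float                 # centre of the detected feature, in pixels
    y: float

@dataclass
class Track:
    # The same visual motif identified in several images.
    features: list = field(default_factory=list)

    @property
    def length(self) -> int:
        # "Track Length": in how many images this one Feature was identified.
        return len(self.features)

def usable_as_tie_points(tracks):
    # Every image yields masses of Features, but only those matched into
    # a track spanning at least two images can act as Tie Points.
    return [t for t in tracks if t.length >= 2]
```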

Tim,

I think that the sparse point cloud (3D view) is quite good for identifying problematic areas. Whereas the density is only a rough guideline (sometimes thin areas will still yield a viable mesh), you may want to look for fuzziness or several surfaces (someone called that ghosts once). This is a clear sign of a flawed alignment. I do not know of any quicker way to check that. It’s not very difficult with simple objects (like a pebble or flat masonry), but can be quite tricky with more complex objects (like your oil wells, or whatever they are). In this case, you really need to check out every corner. Of course, you will probably get more experienced and know which corners are likely to cause trouble. But then there is still the possibility of a surprise, because it can also happen that misaligned photos will not leave any (or very few) points in the sparse cloud but still mess up the surface when creating the mesh.

It’s more of an art than a science in many ways (at least for the user)…

We may have to agree to differ, Gotz - and just watch out for each other’s slightly different terminology - because that’s all it is; we agree on the mechanism (I wonder if RC agrees?! Zuzanna didn’t exactly say).

“I think the confusion is that all Tie Points are Features (or rather the center of it) but not all Features are Tie Points”

I’d say that perhaps one in a million Features (or rather the center of it) is also identical with a 2D Tie point - the rare case where the resultant 2D Tie point happens to land precisely (with absolutely zero REprojection error) on the Feature from which it’s derived.

I think it’s more than terminology still.  :slight_smile:

That reminds me that I forgot to reply to something in your earlier post. 

I think that the reprojection P1Xj is not really a point that is used other than to calculate the error, which in turn is a way to estimate the quality of the alignment.

I begin to see, perhaps, including something you said elsewhere - that all this alignment business, including the sparse point cloud (composed of the Xj 3D Tie points), is just a means to align the photos and then gets abandoned when a new process of depth-mapping from the aligned photos begins?

Any good?

That’s what I cobbled together in my head. The real pros are probably laughing their rears off…  :wink:

We really should work on a “Photogrammetry for Dummies” since we don’t seem to be the only ones.

One can only use a tool with the highest efficiency if one understands every aspect of it.

Except this is specifically ‘RC for Dummies’ because much is unique to RC.

This Depth Map way of proceeding is AFAIK unique to RC and largely explains its speed. So my next question is ‘what is this Depth Map way, and how is it different from the hitherto standard way that all the others use?’.

I have an idea that the others retain the sparse point cloud, build a dense point cloud, and connect the dots.

Whereas RC apparently scraps the sparse point cloud and instead builds a Depth Map from the now-aligned photos - presumably re-using (is that it?) those of the Feature-centre points that previously qualified (within specified accuracy limits) as 2D Tie points?

Hi Tom,

depth maps are VERY common and I am quite certain that all other programs use them, too.

You can look it up, there is tons of material out there.

The short version is that it is a 2D image with depth information in the grey-scale value of the pixels.

It’s basically a point cloud on the basis of a pixel file.
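As a rough sketch of what I mean - assuming a pinhole camera with a 3x3 intrinsic matrix K and one metric depth value per pixel, which is a simplification and not RC’s actual file format:

```python
import numpy as np

def depth_map_to_points(depth, K):
    # Turn an H x W depth map into a point cloud in the camera's own
    # coordinate frame: back-project each pixel to a ray and scale it
    # by the depth stored at that pixel.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pixels @ np.linalg.inv(K).T
    points = rays * depth.reshape(-1, 1)
    return points[depth.reshape(-1) > 0]     # drop pixels with no depth
```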

Ah, that kind of depth map - I thought they were just a minor aid to visualisation or sorting of laser scan results - something optionally applied to the dense point cloud.

So, is there tons of material on depth maps as used ‘instead of dense point cloud’ in RC? I don’t begin to get how it can hold data on all of the x, y and z of a point in space. And if it’s only z (‘depth’), what’s that relative to?

 

My idea is that there is a minimum and maximum distance for black and white and the rest fits in between.
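Something like this linear mapping is what I have in mind - just my guess; a real implementation may well store 16-bit or floating-point values rather than 8-bit grey:

```python
import numpy as np

def depth_to_grey(depth, d_min, d_max):
    # Clamp to [d_min, d_max] and scale linearly to 0..255,
    # so black is the nearest allowed distance and white the farthest.
    d = np.clip(depth, d_min, d_max)
    return np.round(255 * (d - d_min) / (d_max - d_min)).astype(np.uint8)

def grey_to_depth(grey, d_min, d_max):
    # The inverse: recover an approximate metric depth from the grey value.
    return d_min + (grey.astype(np.float64) / 255) * (d_max - d_min)
```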

This is the point where I log out (mentally) and let RC work its magic.  :slight_smile:

Keep up the in-depth conversations, guys. I have seen quite a few scattered around and it’s helped wonders in getting the brain ticking and finally beginning to really understand what goes on behind the scenes of CR. :slight_smile:

 

Just wanted to say thanks!

I would assume that using depth maps makes it a LOT easier to sort out all the noise and outliers that the competition so greatly suffers from with their dense point clouds.

You can filter/smooth/average the depth map much easier to remove single or multi-pixel errors.

Also, they make it somewhat trivial to find neighbouring points compared to point clouds where you have to do some voodoo of sorts (probably octree or quadtree?) to accomplish the same task.
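For example - a throwaway sketch with numpy/scipy, where the depth map is just random stand-in data to show the two operations:

```python
import numpy as np
from scipy.ndimage import median_filter

depth = np.random.rand(480, 640).astype(np.float32)   # stand-in depth map

# Smoothing away single-pixel outliers is one filter call on the 2D grid...
cleaned = median_filter(depth, size=3)

# ...and the neighbours of a point are simply the adjacent pixels - no
# kd-tree/octree style spatial index needed, unlike an unordered point cloud.
v, u = 240, 320
neighbours = cleaned[v - 1:v + 2, u - 1:u + 2]
```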

They are definitely awesome :slight_smile:

I can see that a greyscale map (one attached to each aligned photo?) could be a very economical way of storing the points data (relative to each aligned photo?), rather than ‘global’ xyz coordinates of each point.

But how are the points in the depth map generated, if not by projection to (or of) tie points - just like a point cloud?
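The storage side, at least, I can picture like this: each photo’s depth map stays in that photo’s own camera frame, and the pose from the alignment turns it into global coordinates whenever needed. This is the textbook relation, assuming R and t map world to camera coordinates - whether RC actually does it this way I don’t know:

```python
import numpy as np

def camera_points_to_world(points_cam, R, t):
    # points_cam: N x 3 points back-projected from one photo's depth map.
    # With p_cam = R @ p_world + t, the inverse is p_world = R.T @ (p_cam - t),
    # written here row-wise for the whole array.
    return (points_cam - t) @ R
```

So the depth map itself never needs to store global xyz per point - the aligned camera supplies that part. How the depth values are found in the first place is the open question above.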