Pixels > Points > Features > Projections

In Alignment terminology, there seems to be a hierarchy:

Pixels > Points > Features > Projections

RC identifies certain groups of Pixels as identifiable Points.

If it recognises the same Point in adjacent images, it is treated as a (useful) Feature.

RC then triangulates the Feature’s position in space and calls it a Projection.

Is that all correct? Or are Points and Features the same thing? Or maybe Features and Projections are the same thing?

Because in Alignment Settings we set ‘Max features per image’ and ‘Max features per mpx’,

but Alignment Report reports only ‘Points count’ and ‘Total projections’ (not ‘Features count’)

I’d like to compare ‘Actual features achieved per image’ against the setting ‘Max features per image’.

If I calc ‘Points count’ divided by ‘no. of images in the Component’, does that give me ‘Actual features achieved per image’?

Hi Tom,

I think the term pixel is clear.  :slight_smile:

In my understanding, Feature means a certain identifiable phenomenon in the image. I always thought it is more than a single pixel, rather a unique distribution of pixels with a certain colour, or rather the relation between them.

I am not really certain how it goes on from there, but I would think that the referred-to Points are more or less the centres of those Features.

A Projection is, as the name says, a projection of a (2D) Point into 3D space. I think it refers to the process or vector rather than the resulting point, but that’s just my layman thought…


Thanks Gotz

So your guess is that ‘Points count’ is another way of saying ‘Features count’? (i.e. one Point at the centre of each Feature). I wonder.

Because, for example, aligning a small test photoset of 29 12-mpx images, producing one 29/29 Component:

Align with defaults (Max features per image = 40,000; Max features per mpx = 10,000) gives Points count = 50,000 and Total projections = 124,000. Then ‘Points (Features?) count’ divided by 29 images = 1,724 features (?) actually achieved on average per image (only 4.3% of ‘Max’).

Or, optimising Max features per image to 140,000 and Max features per mpx to 12,000, gives Points count = 109,000 and Total projections = 278,000. Then ‘Points (Features?) count’ divided by 29 images = 3,759 features (?) actually achieved on average per image (only 2.7% of ‘Max’).
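Spelled out, the arithmetic of both runs looks like this (a sketch in Python; it assumes ‘Points count’ is a total for the whole Component, which is only my reading of the report):

```python
# Sketch of the 'actual features per image' calculation discussed above,
# assuming 'Points count' is a per-component total (an assumption, not
# confirmed by the Alignment Report itself).

def avg_points_per_image(points_count, n_images):
    return points_count / n_images

def pct_of_max(avg, max_features_per_image):
    return 100 * avg / max_features_per_image

# Default settings run: Max features per image = 40,000
avg_default = avg_points_per_image(50_000, 29)
print(f"{avg_default:.0f} points/image, {pct_of_max(avg_default, 40_000):.1f}% of Max")
# -> 1724 points/image, 4.3% of Max

# Tuned settings run: Max features per image = 140,000
avg_tuned = avg_points_per_image(109_000, 29)
print(f"{avg_tuned:.0f} points/image, {pct_of_max(avg_tuned, 140_000):.1f}% of Max")
# -> 3759 points/image, 2.7% of Max
```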

That’s assuming ‘Points count’ means ‘total points count for the whole set of 29’. If instead it means ‘points count averaged per image’, then 109,000 is 78% of ‘Max’, which makes more sense. However, in the first example 50,000 would be 125% of ‘Max’, which can’t be right.

So I’m not so sure that ‘Points count’ is the same as ‘Features count’. I wish it were, because I’d love to have features-count data to work out how to optimise.

It’s weird that we can set ‘Max features’ but then have no feedback data on how many features we actually achieve.

Can it be that ‘Points count’ means points as in ‘sparse point cloud’? But how does that then relate to Features or Projections?

Hi Tom,

yes, I understand that the terminology seems a bit unclear. Hence my whens and ifs.

You can see the Features for each image; the maximum is very often 40,000, so exactly the Max features setting. That’s why I think Points are something else, seemingly what you suspected: Tie Points. I just tried something: divide Total projections by the number of Points and you will get - tadaaa - the Average track length. Since we know that track length is the number of projections (as in projected Feature) to one Tie Point, that is the answer.
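That division can be checked against the two test runs quoted earlier in the thread (the function name is mine):

```python
# Average track length = Total projections / Points count,
# i.e. in how many images an average tie point is seen.

def average_track_length(total_projections, points_count):
    return total_projections / points_count

# Default run:  124,000 projections / 50,000 points
print(average_track_length(124_000, 50_000))   # -> 2.48

# Tuned run:    278,000 projections / 109,000 points (~2.55)
print(average_track_length(278_000, 109_000))
```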

Brilliant. I realise I had Tie Points confused with Control Points, so as my whole effort is to do without manual tinkering, I had paid Tie/Control Points no attention!

So the hierarchy is corrected to:

Pixels > Features > (Tie) Points > Projections

RC identifies certain groups of Pixels as identifiable Features.

If it recognises the same Feature in adjacent images, it is classed as a (Tie) Point.

If that tie point is seen in two different images (i.e. one pair of images A+B), that is one Projection, triangulating the position in 3D space of the Feature/(Tie) Point.

If that (Tie) Point is seen in three different images (i.e. three pairs of images A+B, A+C, B+C), that is three Projections, providing stronger confirmation of the position in 3D space of the Feature/(Tie) Point.

If that (Tie) Point is seen in four different images (i.e. six pairs of images A+B, A+C, A+D, B+C, B+D, C+D), that is six Projections

The number of pairs, one, three, six or more, is called Track Length (for some obscure reason).
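The pair counts quoted above (1, 3, 6 for 2, 3, 4 images) are just the binomial coefficient C(n, 2). A later reply in this thread corrects the naming (Track Length counts images, not pairs), but the pair arithmetic itself is:

```python
import math

# Number of image pairs among n images that all see the same tie point.
# Note: this counts PAIRS; RC's Track Length counts the images themselves.
def image_pairs(n_images):
    return math.comb(n_images, 2)

for n in (2, 3, 4):
    print(n, "images ->", image_pairs(n), "pairs")
# 2 images -> 1 pair, 3 images -> 3 pairs, 4 images -> 6 pairs
```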

I guess the colours in Inspect correspond to Track Length one, three, six or more? I haven’t seen that stated.

The Max, mean and median error figures are a measure of how much (in pixels) those three Projections differ from each other (the ideal is ‘identical’).

Does that nail it?

Almost!  :slight_smile:

I would swap around Projections and Tie Points. I think the Features are projected into space according to the image geometry; where they intersect, there lies the Tie Point. I might be wrong, but there doesn’t seem to be a reason to project an already known 3D point. In the end, the image has only 2D info, and only the different orientations of several image planes, with vectors projecting from those planes, will, by intersection, lead to 3D info.

Yeah, I would like to know that obscure reason as well - probably just some remnant of an ancient way to visualize those values…

The colours (of the lines, I guess you mean) in the Inspect tool (there are also colours of the images, which I think represent EXIF groups) represent the number of Tie Points that two images have in common. That can say something about the reliability of the alignment, and therefore about the geometry used to calculate the (internal/external?) orientation of the images, and by extension the Projections.

Which leads us to the last one: the error is called REprojection error, which is the clue - the Tie Point is REprojected onto the image, and the difference between this point and the real Feature gives the value. I guess that since it is quite rare that several vectors (aka projections) intersect at exactly one point, the Tie Point sits at the position with the smallest error. I could imagine that the reprojection uses the original vector, but from the idealised position of the Tie Point, and then counts the pixels to the real Feature. If that were true, then what you said isn’t far off, only that there are no pixels to measure at the Tie Point but only “at home” on the image plane.

Phew - you always make me think about things that I usually just file under “oh, I’ve read that before somewhere”…   :smiley:

Hello everyone,

I would just correct the track length definition. If you have a 3D point (tie point) visible in 3 images, then 3 projections are created and the track length for this point is 3. It is not the number of image pairs, but the number of images. It is correct that Total projections divided by Points count (the total number of 3D points) gives the Average track length. This number indicates in how many images, on average over the component, a point appears.

A very simple example: for one 3D point visible in 3 pictures, the Total projections value would be 3 and the Average track length would be 3. For two 3D points, of which one is visible in 3 pictures and the other in 4 pictures, Total projections would be 7 and Average track length would be 3.5.
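That worked example generalises: Total projections is the sum of the per-point track lengths, and the average follows from it (a sketch; the function name is mine, not RC’s):

```python
def totals(track_lengths):
    """Given the track length (number of images seeing it) of each
    3D point, return (points_count, total_projections, average_track_length)."""
    points = len(track_lengths)
    projections = sum(track_lengths)
    return points, projections, projections / points

# One 3D point seen in 3 images:
print(totals([3]))      # -> (1, 3, 3.0)

# Two 3D points, seen in 3 and 4 images respectively:
print(totals([3, 4]))   # -> (2, 7, 3.5)
```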



Hi Zuzana,

thanks for stepping in. Greatly appreciated!

So that means we are right about the rest?  :slight_smile:

What I don’t understand about your definition is why it is necessary to project a 3D Tie Point, and what the purpose of that projection is. And just to split hairs, nobody mentioned pairs of images…  :wink:   I never think of it that way, even though I understand that pairs are somehow relevant at some stage (depth map creation)?

The way I understand it is that an image has Features on a 2D plane. This plane has a “beamer” behind it that projects those 2D Features out into space along a line. Where those lines (of the same Tie Point on different images, with different beamers) intersect, a Tie Point is born. Or are there different levels of Tie Points, a 2D kind on the images and also the results in 3D? Because in terms of nomenclature, Tie Point indicates 2D, as in tying different images together. At least that’s how I interpret it logically, which is not always a good idea with technical terms…  :slight_smile:

A quick search in my favorite search engine confirmed it to be the 2D point in an image. So how are the 3D points to be addressed properly? Just Points? 3D Tie Points?

Yes thanks Zuzana (as well as Gotz, again).

Yes, I was talking about pairs of images – but now understand better, from what you both said.

I’m also puzzled by ‘a 3D point (tie point) visible in 3 images’ – it must mean ‘a 2D Feature that’s visible in at least 2 images and therefore has the potential to create a 3D Tie Point after projection’? Or not? As Götz says, this needs clarification.

OK is this it then? –

The hierarchy is corrected again (v3) to:

Pixels > Features > Projections > (Tie) Points

RC identifies certain groups of Pixels as identifiable Features.

If it recognises the same Feature in two adjacent images, it creates two Projections – for each image a line (‘beamer’ – I like that!) from its camera through the feature.

Those two Projection lines should intersect, or come close, in 3D space; it creates a ‘best compromise’ (Tie) Point between the near-miss lines, in 3D space; that (Tie) Point is said to have track length 2.

Having located the ‘best compromise’ (Tie) Point, it REprojects lines from there back through each image to its camera; these slightly miss the original Feature on the image, by a distance in pixels that is reported as Error. If that error exceeds the ‘Max feature reprojection error’ setting, that image is disqualified in the Projection process.

If it recognises the same Feature in say three adjacent images, it creates three Projection lines and creates a ‘best compromise’ (Tie) Point between the near-miss lines, in 3D space; that (Tie) Point is said to have track length 3.
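The reproject-and-measure step described above can be sketched with a toy pinhole camera. The camera model, focal length and all coordinates here are my own illustration, not RC internals:

```python
import math

# Toy pinhole camera: focal length f in pixels, principal point (cx, cy).
# The camera sits at the origin looking down +Z.
def project(point3d, f=1000.0, cx=960.0, cy=540.0):
    x, y, z = point3d
    return (f * x / z + cx, f * y / z + cy)

def reprojection_error(feature_px, point3d):
    """Pixel distance between the detected 2D Feature and the
    reprojection of the triangulated 3D (Tie) Point into that image."""
    u, v = project(point3d)
    return math.hypot(feature_px[0] - u, feature_px[1] - v)

# A detected Feature vs. the triangulated Tie Point:
detected = (1160.0, 640.0)
tie_point = (1.0, 0.5, 5.0)        # projects to exactly (1160, 640)
print(reprojection_error(detected, tie_point))        # -> 0.0

# A slightly-off 'best compromise' Tie Point misses by 2 px:
noisy_tie_point = (1.01, 0.5, 5.0)
print(reprojection_error(detected, noisy_tie_point))  # -> 2.0
```

If that per-image error exceeds ‘Max feature reprojection error’, the image’s measurement is rejected, as described above.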

I’d like to know what the different line colours in Inspect mean – do they absolutely denote Track length numbers, or is it less precise, more illustrative?

Hello everyone,

please excuse my late response.

Regarding the correct nomenclature of 2D and 3D points, I believe different communities have different standards for this. We tried to choose the names for the different parameters after extensive discussions with users from different fields of expertise, so that they will be most understandable. If in your professional experience you have different standards, please let us know.

In RC, you can enable the display of Tie points in the 3D SCENE; those are the points registered in the alignment process. You can also display Tie points in the 2D scene on a specific image; those are the points that were used in triangulation to create a 3D point. Since the 3D point is not created with 100% accuracy, it is reprojected back to the image, and the difference between the original and the reprojected point on the image is the reprojection error. In the 2D SCENE you can also display Residuals; those are the differences between the original point and the reprojected one.

The hierarchy in the last post is correct. 

In the Inspection tool, the colours are meant to be illustrative. If you set the minimal Feature consistency to, e.g., 3 and Matches count to 100, this means two cameras are connected if they have at least 100 matches in common, such that those matches are visible in at least 3 cameras. The scale is the Jet colormap, meaning from dark blue to red. More matches means the connection is stronger and the colour is closer to red on the Jet scale. You can find more information about the Inspection tool parameters in the Help section.


Hi Zuzana,

much appreciated!

I wasn’t trying to criticise the given names! I think Tom didn’t either…   :-)   We were just trying to pin them down exactly. I’m glad Tom was so persistent because I only had a vague idea myself.

Is it correct to say that Tie points are both the 2D features on the images selected for alignment and the 3D points that form the sparse point cloud?

Hello Götz,

please excuse my late answer.

I did not take it as a criticism, we are really open for any suggestions for improvement.

Yes, we refer to tie points both in 2D (image) and 3D space (sparse point cloud), since the matching tie points in the images are the projections of the same tie point from the 3D point cloud.

Hi Zuzana,

thank you very much once more. Now we have it sorted I think!  :slight_smile: