Photo coverage

We are told to take photos overlapping so that the same feature in two photos is seen at most 30° ‘apart’. As a rule of thumb, if I’m at 4m range from the subject and I move sideways by half that range (2m) for the next shot, that gives tan-1(2/4) = tan-1(0.5) = 26.6° ‘away’. Does this make sense?

We are also told to get 80% overlap between photos. Half sideways plus half vertically gives, as far as I can see, 75% overlap. Does that make sense? Otherwise, for 80% overlap, I’m only allowed to move sideways by 20% of the 4m range = 0.8m for the next shot, which seems excessive.
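The angular-separation arithmetic above can be checked with a couple of lines of Python (a quick sketch of the trigonometry, with a made-up helper name, not any official formula):

```python
import math

def separation_angle_deg(baseline_m, distance_m):
    """Angle between two viewpoints, as seen from the subject,
    when the camera moves sideways by baseline_m at range distance_m."""
    return math.degrees(math.atan(baseline_m / distance_m))

# Moving sideways by half the 4 m range:
print(round(separation_angle_deg(2.0, 4.0), 1))  # 26.6
```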

I can give you the results of a test I did recently:

small chapel about 11.4 metres across, 18mm lens - I did the “stand with your back to one wall and shoot across the room towards the opposing wall” thing

I tested both 50cm and 25cm stepping, meaning I offset my camera 25 (or 50)cm alongside the wall after each shot.

The results were impressively different: with 25cm stepping RC was able to use every single image - with 50cm stepping it was way fewer - and of course, the result was much worse.

Hope this helps


When you say “4m range”, I’m guessing you mean 4m distance between camera and the subject. In which case your calculation is missing the camera view angle.

E.g. imagine you’re 4m away from a wall with a fancy-pants, full-frame DSLR and a 200mm lens. Then your view angle (or field of view) is super narrow, so if you move 2m sideways, you’ll have no overlap at all! Conversely, if you’re shooting with a super-wide 10mm lens, your FOV is giant (roughly 14m wide at 4m away), so moving 2m sideways leaves 12m of overlap, or 12/14 = 86% overlap.
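The example above can be sketched numerically. For a flat wall straight ahead, the visible width is roughly distance × sensor width / focal length (a thin-lens approximation; the helper name is made up for illustration):

```python
def footprint_width(distance_m, sensor_width_mm, focal_mm):
    """Approximate width of wall visible in frame, by similar triangles."""
    return distance_m * sensor_width_mm / focal_mm

# Full-frame sensor is 36 mm wide:
print(footprint_width(4, 36, 10))   # 14.4 m wide with a 10 mm lens
print(footprint_width(4, 36, 200))  # 0.72 m with a 200 mm lens: a 2 m sideways move loses all overlap

# Overlap after a 2 m sideways move with the 10 mm lens:
overlap = (footprint_width(4, 36, 10) - 2) / footprint_width(4, 36, 10)
print(round(overlap * 100))  # 86
```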

(If I misunderstood your question and this seems childishly simple, please ignore :slight_smile: )

I think of it in terms of the diagram below, which helps with quick distance calculations. Your field of view, or the amount of subject visible to the camera sensor, is a function of the angle of view and the capture distance. Angle of view is directly related to your sensor size and lens focal length.

Anyway, you can do all this math manually or there are online calculators that figure these numbers out for you. From there you can determine how much to move horizontally/vertically. 

Exhilarating stuff, I know! This will get even more exciting once you start thinking about rotation angles/distances around large cylindrical objects :slight_smile:


A quick cheat for this: if using a modern digital camera (which I’m guessing you are), especially something with a live-view function, there is almost always a way to display a 3x3 grid on the display. This is an old “rule of thirds” photography trick. Often there are denser grids available as well. Anyhow, you can track where the grid lines land/overlay on the image of the subject. If you move “1 grid sideways” such that whatever was under a grid line before is just at the edge of the view, then that’s 33% sideways motion (or 67% overlap). Move about half that for ~80% overlap. This way you can do a quick estimation of how much you need to move to get the desired overlap. Denser grids help with more precise estimations!

This “trick” works independent of dimensions and camera specs (focal length, distance, etc). And, most importantly, you don’t have to do math!



Given the dimensions you mentioned, I’m surprised that you needed 25cm steps to get all images to align. 

Would you mind sharing a couple of images that were shot 50cm apart and failed to align?

What Tim said!  :-)  Very good rundown and tips!

To be honest, I do this kind of stuff rather intuitively. I’ve rarely failed after the initial learning phase, and if so, then only in difficult corners. This rule is just a guideline and stems from aerial photography, where you have the problem that you can only move in a plane while taking shots. My objects are much more complex than that, so I almost never have an area where I could implement this rule properly anyway…

I think Heiko’s problem is the relatively textureless plaster surface of the walls.

Heiko, what kind of resolution are you using?

Thanks chaps - really good answers.

Tim B said:

“When you say “4m range”, I’m guessing you mean 4m distance between camera and the subject”.

Correct. And Heiko, we need to know what range (distance between camera and wall) you had - was it really 11.4m?!

Because I’m pointing out that two separate methods are recommended, for getting enough photo coverage. You can do either one of them perfectly, and still fail on the other method. We need to fulfil both methods at the same time.

The first method recommended is ‘not more than 30° apart’, and for this method the camera view angle is irrelevant - it’s purely about range: don’t move sideways by more than 57.7% of that range (because tan 30° = 0.577). In para 1 of my OP I was moving sideways (and also up-down) by 50% of range, which works out on the safe side, at 26.6° (because tan-1(0.5) = 26.6°).
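The tan 30° arithmetic as a tiny sketch (hypothetical helper name; just the trigonometry from the paragraph above):

```python
import math

def max_sideways_step(distance_m, max_angle_deg=30):
    """Largest sideways move that keeps the viewpoint change under max_angle_deg."""
    return distance_m * math.tan(math.radians(max_angle_deg))

print(round(max_sideways_step(4.0), 2))  # 2.31 m at 4 m range (57.7% of range)
```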

The second method recommended is ‘80% overlap between photos’, and in this method the camera view angle is all-important, as TimB fully explains.

I’ve just been trying an 8mm spherical fisheye lens, which has a 180° view angle - in other words it can see the entire wall in every photo (albeit from differing viewpoints). In that case, overlap is 100% in every pic, regardless of how far I move sideways! That was strongly recommended as a strategy in a tutorial for another photogrammetry s/ware, precisely because of the massive redundancy of multiple views of every Feature (although at the cost of image resolution). That s/ware could handle the distortion, but it seems RC can’t - yet! Division was useless - simple Brown3 was the only model that even once produced one major component, and with a pathetic Feature count.

Back to the conventional 18-55mm Nikkor DX AF-S on Nikon D60, I notice TimB says:

“Angle of view is directly related to your sensor size and lens focal length”

The old D60 isn’t a full-size sensor - it has a 23.6x15.8mm, 10.75Mpx sensor - so it looks like I should be applying a 0.66 crop-factor multiplier to my understanding that, say, a 28mm lens (or rather, a zoom lens set at 28mm) gives a view half-angle of 33° horizontally and 23° vertically on full frame? So I should set the zoom at 18mm.
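Those half-angles can be checked directly from the D60’s sensor dimensions: half-angle = arctan(half the sensor dimension / focal length). A rough sketch ignoring lens distortion, with a made-up helper name:

```python
import math

def half_angles_deg(sensor_w_mm, sensor_h_mm, focal_mm):
    """Horizontal and vertical view half-angles for a rectilinear lens."""
    h = math.degrees(math.atan(sensor_w_mm / 2 / focal_mm))
    v = math.degrees(math.atan(sensor_h_mm / 2 / focal_mm))
    return h, v

# Nikon D60 sensor (23.6 x 15.8 mm) with the zoom set to 18 mm:
h, v = half_angles_deg(23.6, 15.8, 18)
print(round(h, 1), round(v, 1))  # 33.2 23.7 - close to the full-frame 28 mm figures
```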

That 33° (and, near enough, the 23°) is actually wide enough that my 26.6°-apart practice (both horizontally and vertically) also fulfils the 80% overlap recommendation, as described in para 2 of my OP. Or does it? Edit - no it doesn’t! I’m busy trying to visualise this, will be right back.

Unfortunately the old D60 doesn’t have live view (only the optical viewfinder), hence no 3x3 grid - but yes, that’s easy enough to estimate, though I’m actually using a tripod (it’s long-exposure interior photography) and moving it in a strict grid (horizontal and vertical) of measured intervals.

At 2m range (1m sideways) and 10.75Mpx, “Heiko’s problem is the relatively textureless plaster surface of the walls” still remains for me - but that’s another story!

I always thought the angle refers to smaller objects when the camera is circling…

From Help:

How to Take Photographs

Creating 3D models using photographs is fun and easy, but if you want to make a high quality output …

General Rules

A few tips …

  • Use the highest resolution possible.
  • Each point in the scene surface should be clearly visible in at least two high quality images. The more - the better rule applies here.
  • Always move when taking photos. Standing at one point produces just a panorama and it does not contribute to a 3D model creation. But move around the object in a circular way.
  • Do not change a view point more than 30 degrees.
  • Coarse-to-fine rule: Start with taking pictures of the whole object, move around it and then focus on details. Beware of jumping too close at once, make it gradual.
  • Complete loops. For objects like statues, buildings and other you should always move around and end up in the place where you started.

I’d think that’s a general recommendation, no mention of circling.

It must be increasingly hard for the algorithm to recognise ‘same thing-ness’ in adjacent photos if their view angle changes a lot.

Mind you, the wider the convergence angle, the less error-prone the projection/reprojection round trip - but not by a lot, in the proportion 1/sin 30° = 2 versus, say, 1/sin 45° = 1.41.

For the same reason, small convergence angles become rapidly more error-prone: compare 1/sin 30° = 2 with, say, 1/sin 10° = 5.76, or even worse 1/sin 5° = 11.5 - dramatic!

I guess the sweet spot must lie between 10° and 30°.
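The 1/sin θ amplification factors above, tabulated by a short script (a sketch of the geometric argument only, not RC’s actual error model):

```python
import math

def depth_error_factor(angle_deg):
    """Depth-error amplification ~ 1/sin(convergence angle) in stereo triangulation."""
    return 1 / math.sin(math.radians(angle_deg))

for deg in (5, 10, 30, 45):
    print(deg, round(depth_error_factor(deg), 2))
# 5 11.47
# 10 5.76
# 30 2.0
# 45 1.41
```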



Again a very insightful discussion (much more math involved than I hoped, though :wink:)

Anyway, for anybody interested: here are the chapel shots I was using


It’s just that all the depictions I’ve seen use the angle for circling (where it’s native, hehe) and the overlap for sideways movements. I just never thought of using the angle the way you did, which doesn’t mean it’s not valid. In any case, a change of viewing angle is inherent in both - only that with circling, the direction changes as well as the angle.

Anyway, you are so much more systematic than I am!  :slight_smile:

Heiko, I would be more interested in the shoe, but I already said so in the proper thread…


I believe the 30deg guideline is for circling around an object. Such as moving and capturing around a statue. 

If you draw a circle around the object and mark the camera locations, it should look like hour marks on a clock (30° into 360° => 12 shots). While that may be a good rule of thumb for minimum alignment, I would recommend at least doubling that.

The 80% overlap is more for moving along a wall or something similar. 

Basically these rules are the same thing; designed to ensure that sufficient visual information is maintained between adjacent images. It’s just that one is designed to help with moving across walls/etc and the other for moving around things. 
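The clock-face arithmetic above is trivial but handy as a sketch (hypothetical helper name):

```python
def shots_per_revolution(angle_step_deg):
    """Camera stations needed for a full circle at a given angular step."""
    return round(360 / angle_step_deg)

print(shots_per_revolution(30))  # 12 - like hour marks on a clock
print(shots_per_revolution(15))  # 24 - the "double it at least" recommendation
```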


Regarding the 8mm fisheye lens:  I think it’s somewhat counterproductive :) 

While you do get 100% overlap, that also means that “pixel resolution per feature” is worse. Keeping all other things constant, as the lens gets wider, pixel density per cm gets worse. Which means less stuff for RC to “see” as features. I’m guessing that’s one of the reasons why your fisheye experiment resulted in poor feature count.

This would be particularly bad for plaster walls and such. Those actually do often have a lot of small features, especially old walls. But if you aren’t “zoomed in” enough, all the little bumps/cracks would be effectively lost.


Regarding your D60 + 18-55mm example: I didn’t quite follow your math. Not saying it’s incorrect, I just didn’t verify it :). But I do think you are conflating the angular-rotation and lateral-movement calculations.

For a zoom lens, I think it makes sense to set it to one extreme or the other (18mm or 55mm). That way there is less chance of slight zoom variations during camera movement. Unless you have a zoom-lock lens.

(There is an added fun fact that these kit/zoom lenses are usually not their sharpest at the highest/lowest zoom, but should be OK at f/4 or smaller apertures)

Anyway, set your lens to 18mm (or something else if necessary). Given the distance from the wall, figure out what the field of view is (how much width of the wall you see). So if you see 5m of the wall horizontally in your image, move 1m (20%) to the side - that’s your 80% overlap. Same thing for vertical.
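The step-size rule above as a one-liner (hypothetical helper name, same numbers as the example):

```python
def sideways_step(visible_width_m, overlap_fraction):
    """How far to move the camera for a given overlap, given the visible wall width."""
    return visible_width_m * (1 - overlap_fraction)

# 5 m visible, 80% overlap wanted:
print(round(sideways_step(5.0, 0.80), 2))  # 1.0 m
```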

The 30° rule is mostly sufficient for simple objects such as garden gnomes (I have done successful reconstructions with as little as 8 pictures per revolution). Not so much for intricate statues.

In general, the more complex your object of interest is, the more overlap and lower angle between shots you want to have.

Intuition and experience as well as some trial and error for a small sample area of your project ought to help you find an acceptable approach though.

In the end there is no singular set of parameters for the perfect alignment of all and everything.


Tim B said:

“So if you see 5m of the wall horizontally in your image, move 1m (20%) to the side. That’s your 80% overlap. Same thing for vertical”

Yes I was getting round to that - ‘same thing for vertical’. If the objective is for the same Feature to appear in at least three photos, four better, or even (in theory, with 80% overlap) five, then yes that’s what you get by stepping along the wall with 80% overlap.

Then Tim says ‘same for vertical’. I quite agree that what goes for horizontal makes sense for vertical too. So not just a single horizontal pass, but a rectangular grid of camera positions (‘rectangular’ rather than ‘square’ because the photo format is 3:2 (or similar), wider horizontally than vertically).

In stepping along a wall horizontally, the camera is seeing both left and right hand side of objects - but objects have tops and undersides too - so stepping also up and down the wall vertically makes sense.

What I’m getting round to is that this need not mean 2x or 3x as many photos. As long as a Feature appears in 3, 4 or 5 photos, RC doesn’t mind whether these are all in a line or scattered across different horizontal and vertical angles (within 30° max, I’m saying). In other words, instead of 80% overlap in a horizontal line, you could do, say, 50% overlap horizontally and 50% vertically. Or 67% H x 67% V.

The 50% x 50% layout means that a Feature would appear in four photos - once in each of the four quarters of four different photos (draw it out to see!). The 67% x 67% layout would mean nine appearances for each Feature! Unfortunately it’s kinda whole-number modular - 40% or 60% overlap (hoping to get 3 or 5 appearances) don’t work, leaving portions of the grid a little or a lot less overlapped.
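The views-per-Feature counting above, as a sketch; `views_per_axis` is a hypothetical helper counting how many consecutive frames are guaranteed to see a given point when the step is (1 − overlap) frame-widths:

```python
import math

def views_per_axis(overlap_fraction):
    """Minimum number of consecutive frames containing a given point
    along one axis, for a step of (1 - overlap) frame-widths."""
    step = 1 - overlap_fraction
    return math.floor(1 / step + 1e-9)  # epsilon guards float rounding

for ov in (0.5, 2 / 3, 0.6):
    per_axis = views_per_axis(ov)
    # In a 2D grid the counts multiply:
    print(round(ov, 2), per_axis, per_axis ** 2)
# 0.5 2 4   <- the 50% x 50% layout: four photos per Feature
# 0.67 3 9  <- the 67% x 67% layout: nine photos per Feature
# 0.6 2 4   <- 60% doesn't divide cleanly: only 4 guaranteed, not 9
```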

Tim B said:

“Regarding your D60 + 18-55mm example: I didn’t quite follow your math. Not saying it’s incorrect, I just didn’t verify it”

I think it’s right. In an online Field of View calculator, you dial in your camera model and its associated sensor-size crop factor (I said 0.66 for the Nikon D60, but it’s customarily given as the inverse - 1.53 for Nikon DX/APS-C) and it gives the vertical and horizontal Field of View (height/width covered along the wall) for your chosen zoom focal length.

Because, as TimB reminded:

“Angle of view is directly related to [the combination of] your sensor size and lens focal length”

Here’s a simplified calculator, which also gives by far the best collected illustrations and explanations I’ve seen of both Depth of Field and the mysterious (but even better) Hyperfocal Distance concept -

“… the hyperfocal distance setting … is simply a fancy term that means the distance setting at any aperture that produces the greatest depth of field.”

“If you set the camera’s focus to the hyperfocal distance, your depth of field will extend from half of the hyperfocal distance to infinity—a much deeper depth of field.”

In other words, find out what the Hyperfocal Distance is for your intended sensor size/zoom setting/f-setting combination (in my case 1.75m), disable autofocus, focus manually at that distance, leave it at that, shoot away, and everything from 0.87m to infinity (in my case) will be ‘reasonably’ in focus.

Apparently you can’t get a better spread than that. But if you want to get closer than 0.87m (in my case), you can, at the cost of less spread: e.g. focusing at 1.25m gives DoF from 0.73 to 4.35m (in my case), which may be plenty in an interior, say.
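The figures quoted above can be reproduced from the standard thin-lens approximations (near = H·s/(H+s), far = H·s/(H−s), where H is the hyperfocal distance and s the focus distance). A sketch with made-up helper names; the 0.02mm circle-of-confusion value is an assumption, not taken from the calculator:

```python
def hyperfocal(focal_mm, f_number, coc_mm=0.02):
    """Hyperfocal distance in metres (thin-lens approximation).
    coc_mm is an assumed circle-of-confusion criterion."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000

def dof_limits(focus_m, H):
    """Near/far limits of 'reasonable' sharpness when focused at focus_m,
    given hyperfocal distance H (both in metres)."""
    near = H * focus_m / (H + focus_m)
    far = H * focus_m / (H - focus_m) if focus_m < H else float("inf")
    return near, far

H = 1.75  # the hyperfocal distance quoted above for this sensor/zoom/f-stop combo
print(dof_limits(H, H))       # focusing at H itself: sharp from H/2 (0.875 m) to infinity
near, far = dof_limits(1.25, H)
print(round(near, 2), round(far, 2))  # close to the 0.73-4.35 m quoted above
```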

My remaining question is ‘how in-focus is ‘reasonably’ - and is this good enough for RC?’. It gets complicated!


Good details!

A couple of comments (with the caveat that the following has worked out for me and is not claimed to be eternally applicable):


Overlap calculations: I don’t know enough about the intricacies of photogrammetry calculations to comment on the efficacy of 50% x 50% or 67x67% overlap. While that may be sufficient to get the required 3 images per feature, in my experience many more images are necessary to get good models. I’m sure there is some math to be done here about reprojection errors and such :slight_smile:

Hyperfocal distance: This is an easy trap to fall into if you’re not careful. It is “improved” by stopping down your lens (higher F-number / smaller aperture). At the limit you get a pinhole camera with infinite depth of field! However, your image quality suffers quite a bit. There is a lot of discussion out there about the aperture “sweet spot”. I won’t claim to know what’s best there.

Which is a great lead into the infinite rabbit hole of image sharpness discussions (MTF, sharpness, circle of confusion, etc etc). In the end, as it relates to photogrammetry, a lot of it is trial and error. 

Maybe someone has a definitive answer that covers all cases, but I haven’t seen it disclosed yet :slight_smile:

Where I’ve got to is not to accept the entire spread normally offered as ‘reasonably’ sharp DoF, e.g. (in the examples above) 0.87m to infinity, or 0.73 to 4.35m, but to rely on just the middle half of that spread, where the ‘middle half’ is highly biased towards the close end - e.g. within a 0.73 to 4.35m spread, I’m guessing 0.82 to 2.5m as the middle half. I hope it should be possible to get that ‘middle half’ (or middle one-third if you prefer) accurately by tinkering with the source equation.
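One crude way to formalise that ‘middle half’ idea, assuming the standard thin-lens DoF formula: tighten the circle-of-confusion criterion to half its usual value, which roughly doubles the hyperfocal distance and, as guessed above, biases the trusted band towards the close end. A sketch with assumed numbers from the example above:

```python
# Halving the circle of confusion roughly doubles the hyperfocal distance,
# shrinking the "trusted" band (assumption: the 1.75 m figure quoted earlier).
H_strict = 2 * 1.75
focus = 1.25
near = H_strict * focus / (H_strict + focus)
far = H_strict * focus / (H_strict - focus)
print(round(near, 2), round(far, 2))  # 0.92 1.94 - within the full 0.73-4.35 m spread
```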

Hey Guys,

I agree with ShadowTail - those numbers are all a rule of thumb and not carved in stone.

I am pretty certain those numbers have been developed for aerial photography/photogrammetry.

When you have to charter a plane for several k an hour you want to make sure you get the best possible result. And I think that is why they came up with those numbers - they yield on average the best possible result for a certain range of surfaces/objects.

So while this is good for giving us a general idea of what might be required, we ground hogs have the advantage that we can stay in a place as long as we like and/or come back whenever we want to take a few more (in theory at least). Not in your case, Tim, I know! :slight_smile:

I like what Tom says about the feature coverage. That’s how I make my decisions on the spot - from which side do I need to see this part of the object and how do I connect it to the rest of the model. I think of it as airbrushing the whole surface while distributing the camera positions in the form of a web, where all lines or strings need to be connected at both ends at least.

I think what Tom meant by vertical is to use the camera also in portrait mode and not always only in landscape (as opposed to Tim, who was talking about moving it vertically). This is something I should do more often too, since it can help RC figure out the lens geometry more precisely, but only with EXIF grouping.

Actually, I did mean move it vertically, all in landscape mode in this case - it’s about a grid of shots to ensure a consistent number of views of each Feature throughout, as the basic coverage. Followed, yes, by freehand additional coverage of details, getting behind things etc., which can be portrait/landscape, whatever fits.

I’ve acquired this beautiful tripod

which can go to 3m high (before adding extensions), and the central column can go up/down by over 1m without moving the feet. Only 3kg but very adequately solid. I’m controlling the camera from the laptop - 30sec interior exposures, pin-sharp. At last! Almost no ‘candy floss’, even though RC’s getting almost no Features on the smooth plaster.

It makes the grid of shots easy, for near-perfect coverage - even though, unfortunately, the D60 doesn’t have LiveView, so I can’t see what it’s seeing on the laptop.

Hi Tom,

not bad - added it to my wishlist!

What’s candy floss?

Hmmm - Wishgranter says:

“No we dont use just DEPTH maps for mesh creation”