Generating a single 3D model from an object photographed in two different positions - combining

I recently took photographs of a 3D object about 20 cm tall. 

  • I took an initial set of photographs

  • I then flipped the object over and took a second set of photographs to capture the ‘underneath’ part of it.

 

In RealityCapture, if I align all the images together, I get this:

 

The 1DS view looks like this:

 

Am I right in thinking that, in order to get all the images to align and produce a single model, the best workflow would be to use 'control points', and that I should be reading up on these? Or is there another, more optimal workflow?

Thanks in advance

Hi, yes, you can help RealityCapture correct the alignment by placing control points here.

Greetings taimur,

Yes, as you have already said, you can fix this issue by adding control points and re-aligning. There are also two more things you can try to prevent this unwanted effect in the first place:

  1. Shoot the object only a bit past its halfway point, so the two sets overlap - say about 60% of the object vertically. Then, once you flip it over, shoot only the remaining part, mostly from the side, so that you don't capture the "ground" or the (now upturned) bottom of the object. The two groups of photos will then connect in the middle of the object, where the overlap is. If you get this method right, everything should align into a single model without errors.

  2. Use a single-colour, featureless background when shooting your object. This way RC should ignore the background and connect both sides correctly.
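
As a side note: if you end up repeating this two-group workflow for many objects, it can be scripted with RealityCapture's command-line interface. Below is a minimal sketch, not a definitive recipe - the folder paths and model/output names are placeholders, and you should verify each switch against the RealityCapture CLI documentation for your version before relying on it:

```shell
REM Hedged sketch: import both photo groups, align, and export one model.
REM Paths and names are hypothetical; check switch names in the RC CLI docs.
RealityCapture.exe ^
  -addFolder "C:\scan\top_set" ^
  -addFolder "C:\scan\bottom_set" ^
  -align ^
  -selectMaximalComponent ^
  -save "C:\scan\project.rcproj" ^
  -quit
```

After alignment you would still place control points interactively (or fix the capture setup as described above) if the two components do not merge, so this is best used once your capture method reliably produces a single aligned component.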