Reality Capture not reconstructing all features from an image

Hi, I am having a slight issue. I am still very new to this program, so I don't fully know what techniques can be used to solve something like this, but if anyone is willing to help I'd appreciate it. Here is an image from the sequence I am using to reconstruct…

However this is what it has modelled.

As is evident, the close-up areas model really well, but some parts that are still reasonably close are not modelling properly. For example, the full width of the road is clearly in shot, yet it has only modelled a small portion on the left of it; to the right of that there is a massive hole. It is also not modelling the full fence on the left, and the top is made up of junk. What is the best technique to ensure everything in a frame is captured in a reconstruction? I can't go onto the road to capture it directly, as that would be dangerous… So what is the way to make it model as much of each frame as possible?

Hi MattTF1,

Judging by your previous questions, I suppose you are capturing your images in a single line. That is not ideal if you want to model larger details at better quality.

It seems that your image is quite blurry. You shouldn’t use blurry images in your process:
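One way to avoid feeding blurry frames into the alignment is to score each extracted frame for sharpness and skip the worst ones. A common heuristic is the variance of a Laplacian filter: sharp frames have strong edge responses and score high, blurry frames score low. This is just a minimal sketch of that idea (the threshold, image format, and 3x3 kernel here are my own assumptions, not anything RealityCapture-specific):

```python
# Hypothetical blur check: variance of a 3x3 Laplacian over a grayscale
# image, given as a 2D list of 0-255 values. Frames scoring below a
# threshold you pick empirically can be skipped before alignment.
def laplacian_variance(img):
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: strong response at edges, ~0 in flat areas
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A flat (blurred-out) patch scores 0; a high-contrast checkerboard scores high.
flat = [[128] * 5 for _ in range(5)]
sharp = [[255 if (x + y) % 2 else 0 for x in range(5)] for y in range(5)]
```

In practice you would run this (or the equivalent OpenCV one-liner) over every extracted frame and keep only those above your chosen threshold.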

Also, you should have a correct angle between images:

Regarding the wall:

And the area:

How was your reconstruction region set there? Are you using video for capturing? How many images were used for this reconstruction?

I am using video from a DSLR with the shutter speed as high as it could go without the footage being too dark (to reduce blurry frames), extracting frames at an interval of half a second, I think (I can't quite remember exactly what it is). Here is an example of the image overlap:

I do have a pass at a different angle going the other way on the other side of the path, but it still completely misses a lot of the road…

Another issue involves parts being cut from a reconstruction: here, in the single alignment of cameras going one way, it has modelled this signage…

However, when I merge it with another component (captured walking along the other path in the opposite direction),

some of it has been chipped away, even though it is visible from the other side in the component I merged it with. Why is this, and how can I solve it?

Honestly, these inputs are not ideal.

How did you merge the opposite components? Did you realign the images, or did you use another tool?

This could be a result of misalignments. It could help to capture additional camera paths at a different angle or height, or to point the camera to the side as well:

Also, I don't think a 0.5 s interval between images is needed, as the ideal overlap is around 75% (and there should also be side overlap between passes).
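A quick back-of-envelope calculation shows why a 0.5 s interval is usually more than enough for ~75% forward overlap. The distance, field of view, and walking speed below are illustrative assumptions, not values from this thread:

```python
import math

# Sketch: longest interval between extracted frames that still keeps the
# target forward overlap, given camera-to-subject distance, horizontal
# field of view, and walking speed. All inputs here are assumed examples.
def max_frame_interval(distance_m, hfov_deg, speed_m_s, overlap=0.75):
    # Ground width covered by a single frame at the given distance
    footprint = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    # How far the camera may advance between frames and still overlap enough
    advance = (1 - overlap) * footprint
    return advance / speed_m_s

# e.g. subject ~5 m away, ~60 degree horizontal FOV, walking at ~1.2 m/s
interval = max_frame_interval(5.0, 60.0, 1.2)  # roughly 1.2 s
```

Under those assumptions you could extract a frame only every ~1.2 s and still hit 75% overlap, so a 0.5 s interval mostly adds redundant, near-identical frames.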