Photos not aligning

Hello!

I am having an issue while trying to scan a full asset. This is for

I have a black background behind this figurine, and my idea was to do a full rotation of around 40 photos, then rotate the asset and repeat the process until I have multiple 360-degree sets of photos of the asset.
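Roughly, the plan was something like this (a tiny sketch of the arithmetic; the number of poses is just illustrative):

```python
# Rough capture plan: evenly spaced shots per turntable rotation, repeated
# after re-posing the figurine. The number of poses here is illustrative.
photos_per_rotation = 40
poses = 5  # how many times the figurine is re-oriented between rotations

step_deg = 360 / photos_per_rotation
total_photos = photos_per_rotation * poses
print(f"{step_deg:.1f} degrees between shots, ~{total_photos} photos in total")
# -> 9.0 degrees between shots, ~200 photos in total
```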

However, I can’t get RealityCapture to align the photos from multiple rotations. For some reason the software only recognizes the photos from a single rotation.

My only working theory is that the photos don’t overlap enough, aren’t sharp enough, or that I’m not taking enough of them.

I am adding two example photos of the rotations that don’t align.


So I guess my question is: what am I doing wrong in this process?

Thanks in advance for the help!

Hello Jon,
the attached images don’t look ideal: only a small part of the image is covered by the model, there are unfocused parts, some parts of the model are not visible, etc.
There is a tutorial on how to capture images for RealityCapture: https://www.youtube.com/watch?v=9e_NLp_FaUk

Hi,

These two images are just a sample of the 100 images per rotation that I took of the figure.

In case I didn’t explain myself well: I took over 500 photos of the figure, 100 per position, posing it at different angles.

I can fix the blurry parts, but I can’t do much about how much of the frame the figure takes up.

Maybe the recommendation you are making is that I take the photos from closer up instead of trying to fit the entire figure in each photo?

I watched the tutorial video, and I also followed another one that explains the use of masks, but even with that method I am not getting the masked photos to align.

To showcase this, I did another test with a lemon, and many of the photos that would cover the underside are still not being aligned.

Posting them here so you can see that I already have the files masked.
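In case it is useful, masks like these can be batch-generated against a black background with something along these lines (a minimal sketch assuming OpenCV; the threshold and the `_mask` filename suffix are placeholders, and the suffix has to match whatever naming RealityCapture expects when importing masks):

```python
# Minimal sketch: build binary masks for photos shot against a black backdrop.
# Requires OpenCV (pip install opencv-python). The folder, threshold and the
# "_mask" suffix are illustrative -- adjust to your own layout and to the mask
# naming RealityCapture expects on import.
import glob
import os

import cv2

THRESHOLD = 30  # pixels darker than this count as background (tune per dataset)

for path in glob.glob("photos/*.jpg"):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Foreground = anything brighter than the near-black backdrop
    _, mask = cv2.threshold(gray, THRESHOLD, 255, cv2.THRESH_BINARY)
    # Remove small speckles left in the backdrop
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    cv2.imwrite(os.path.splitext(path)[0] + "_mask.png", mask)
```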

With that many images it is quite strange that they are not aligning.
What is the angle difference between the capturing levels? What are your alignment settings?
You wrote that the app recognises the images from a single rotation; is this true for all of your rotations? If so, it is possible that you will need to add more levels or control points to merge the data together.

In case you want to try, I am adding the images in this link:

I am curious to know if you also get only individual rotations identified.

I will run more tests with some extra rotations, just in case that’s it. Maybe adding extra markers on the lemon would help?

I have tried adding control points, but either I’m doing it wrong (adding them to the first alignment didn’t change anything) or they are not working as intended. Is there a video specifically about control points?

Hi,
I checked the data and it is not captured well. The capturing path is quite irregular.


It should look more like regular, evenly spaced rings of camera positions around the object, covering it from both sides.
So, you are missing some levels in your capturing.
Also, it sometimes looks like you are taking images very close together within a level, so the images are almost the same. I also noticed changed lighting conditions in your dataset (the lemon has a different yellow tone in some images).
You could add more levels, for example:
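Something along these lines, purely as a sketch of the geometry (the elevation angles, shot counts and distance are illustrative, not a prescription):

```python
# Sketch of a more regular capture path: rings ("levels") of evenly spaced
# shots at several elevation angles, above and below the object's equator.
# All numbers are illustrative.
import math

levels_deg = [-30, 0, 30, 60]   # elevation of each ring relative to the object
shots_per_level = 24            # one shot every 15 degrees around the ring
radius_m = 0.5                  # camera distance from the object

for elev in levels_deg:
    for i in range(shots_per_level):
        az = i * 360 / shots_per_level
        x = radius_m * math.cos(math.radians(elev)) * math.cos(math.radians(az))
        y = radius_m * math.cos(math.radians(elev)) * math.sin(math.radians(az))
        z = radius_m * math.sin(math.radians(elev))
        print(f"level {elev:+d} deg, azimuth {az:5.1f} deg -> camera at "
              f"({x:+.2f}, {y:+.2f}, {z:+.2f}) m, aimed at the object centre")
```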

And about control points: https://www.youtube.com/watch?v=S00_mLfbx6o&t=24s

Hello!

I followed your advice and made some changes to get it working. I got a couple of lessons out of it too.

To keep tabs on how much I rotated the lemon each time, I added markers to it with a Sharpie, so that A) I could track the orientation properly each time and B) the program had enough trackable points in the images.

I also changed the camera I was using due to the lens.

The Nikon D750 I was using has 24 megapixels, and the Sony ZV-1 I used next has 20 megapixels, BUT my Nikon lens only went up to 55mm. The Sony can go up to 70mm.

I mention this because the 70mm lens allowed me to fill more of the frame with the asset. So I traded megapixels for pixel coverage of the subject.
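As a rough back-of-the-envelope (treating both focal lengths as full-frame equivalents and assuming the same shooting distance, so this is only an estimate):

```python
# Back-of-the-envelope: relative number of pixels landing on the subject when
# trading a 24 MP body at 55 mm for a 20 MP body at 70 mm. Assumes the same
# shooting distance and treats both focal lengths as full-frame equivalents.
nikon_mp, nikon_focal_mm = 24, 55
sony_mp, sony_focal_mm = 20, 70

# The subject's linear size in the frame scales with focal length, so the
# area it covers (and the pixels on it) scales with focal length squared.
gain = (sony_mp / nikon_mp) * (sony_focal_mm / nikon_focal_mm) ** 2
print(f"~{gain:.2f}x the pixels on the asset")  # ~1.35x
```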

I prioritized aperture in the photos, and I tried two lighting setups: studio and flash.
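To give a feel for why aperture mattered, here is a rough depth-of-field estimate using the standard thin-lens formulas (the focal length, distance and circle-of-confusion values are illustrative, not my actual settings):

```python
# Rough depth-of-field estimate (standard thin-lens approximation).
# All numbers below are illustrative, not actual shooting settings.
def depth_of_field_mm(focal_mm, f_number, distance_mm, coc_mm=0.03):
    """Total depth of field for a subject closer than the hyperfocal distance."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
    far = distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
    return far - near

for f_number in (2.8, 8, 11):
    dof = depth_of_field_mm(focal_mm=70, f_number=f_number, distance_mm=600)
    print(f"f/{f_number}: roughly {dof:.0f} mm in focus")
# Stopping down buys noticeably more of the subject in focus per shot.
```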

I am glad to report that with those changes (thanks orthan for the insights) I have managed to get all my scans aligned successfully on the first try. I am attaching a gif of one example.

yashica_photogrammetry

Thanks for all the help, now onwards and upwards!