I’m having trouble with RealityCapture and my turntable setup. With a single loop of photographs, the software works reasonably well and outputs a single component. However, things get tough whenever I take separate loops at different heights: RC splits the alignment into multiple components.
First of all, I’m trying to avoid merging components manually through control points, as I would like most of the pipeline to be automatic, even headless. From there, I tried many things:
I tried aligning manually; it took a very long time and failed in the end. I couldn’t merge the components.
I tried working with masks generated in another application.
I tried isolating a particular loop that might be causing the issue, but all of them seem to create problems when processed separately.
So now I’m questioning my setup itself. I’ve read that it is possible to process turntable setups with RC. However, I suspect my turntable setup is wrong, particularly the background. Maybe the way I’m taking pictures (rotation angle etc.) is not ideal either. But since RC didn’t work even with the background masked, I’m really at a loss to understand what more could be wrong. Notably, the same dataset works perfectly well in Metashape, aligning everything in a single pass.
Anyway, I’m hereby providing my dataset https://we.tl/t-IinbWx1TAk . Can you tell me what is wrong with it, and what I should do so that RC processes the object within only one component?
Many thanks and sorry if I missed the answer somewhere on the forum.
Regarding my issue, I’m trying to process a hotdog. My overlap is quite abundant because I used an automatic turntable with pictures taken every 2 seconds or so. And yes, I did try this workflow, without success.
Since my last post, however, I think I found something interesting. I used the masks to actually cut the photos so that only the hotdog remains on them, without the background. Then I fed these cut images to RealityCapture, and the result is perfect! I guess that means the background really is the problem in my images.
Hi Popi, thank you for the additional information. I sent you the invitation for the dataset upload.
What was the difference between the first masks and the second ones? Just the covered area?
If there is a background behind the object during turntable capture and it has visible features, those features keep the same position across all images, so the application aligns the images onto one spot. This could be the main issue here. I will write more once I have seen your data.
OK, I’ve uploaded the files. I’m sorry, I made a mistake: I also uploaded part of the cut photos I used in the end in RealityCapture. I can’t delete them, so I put them into a “toDelete” collection.
Here’s what I uploaded for you to see:
The compressed file of the original pictures.
An example of a mask (generated with Metashape). I generated all the masks like this, and then tried to use them in RC (respecting the naming convention, with the .geometry, .mask and .texture folders). But this process didn’t help.
So instead, I wrote a Python script that deletes every part of an image corresponding to the black areas of its mask. The images you can see in “ToDelete” are the output of that script. Feeding THESE to RealityCapture works much better, even though about 80 pictures still end up in a second component.
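For anyone wanting to reproduce this, the cutting step can be sketched roughly like this. This is a minimal sketch, not the exact script from the thread: the function name is made up, and it assumes each photo is an RGB array with a same-size grayscale mask where black (0) marks the background.

```python
import numpy as np


def cut_background(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blank out every pixel that is black in the mask.

    image: H x W x 3 uint8 array (the photo)
    mask:  H x W uint8 array, 0 = background, nonzero = object
    """
    out = image.copy()
    out[mask == 0] = 0  # paint background pixels black
    return out
```

In practice the arrays would come from loading each photo and its Metashape mask (e.g. with Pillow or OpenCV), and the result would be saved back out before being fed to RealityCapture.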
Thank you so much and don’t hesitate should you need any further information.
Hi again, I’ve made another test with a different object, and my cut-background hypothesis no longer seems to hold… I’m putting those photos in the box as well, in “Figurine_Masked_Images.zip”. Right now I am really at a loss. I’ve tried about 5 different setups, but RealityCapture just doesn’t want to align this set correctly.
Can’t wait to hear your advice, because I’m desperate right now!
I checked your images and the hotdog worked for me. It is basically an ideal object, as there are a lot of features to find and use for alignment. The first issue was the background, since it wasn’t changing during capture.
I also tested the second dataset, and it is not ideal: it is hard to find features on the object. It basically needs better surface texture. There are workflows you can use, such as http://janebeecr.blogspot.com/2017/07/how-to-scan-shiny-surfaces.html, although that example wasn’t captured with a turntable.
But your hypothesis about cutting the background is right. You can also try smaller height changes between your loops.
Thank you very much for your feedback! I’m curious though: did the hotdog align for you in a single component, or did it produce two? I’m looking to avoid multi-component creation altogether.
All right, that’s what I don’t understand: mine came up with two components, with the masked images. Maybe something is different in the configuration? Could you give me your settings?
Hi, with all images I also got two components (the upper loop is separate).
I would add one more loop, as shown in the attached image. The height difference between the two upper loops is too big, and that is why they are not aligned.
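To plan that extra loop, one option is to space the camera elevations evenly between the lowest and highest loops, so the angular gap between adjacent loops stays constant. This is only a sketch of the idea, not an official RealityCapture recommendation, and the angles below are made-up examples:

```python
def loop_elevations(lowest_deg: float, highest_deg: float, n_loops: int) -> list[float]:
    """Evenly spaced camera elevation angles, one per turntable loop."""
    step = (highest_deg - lowest_deg) / (n_loops - 1)
    return [lowest_deg + i * step for i in range(n_loops)]


# Going from 3 loops to 4 over the same range shrinks the gap
# between adjacent loops from 30 to 20 degrees.
print(loop_elevations(0, 60, 4))  # → [0.0, 20.0, 40.0, 60.0]
```

The point is simply that more loops over the same elevation range mean smaller jumps between them, which is what the advice above asks for.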
All right, understood. I’ll do more tests then, and keep this thread at hand in case I have other observations or questions. You’ve been really helpful already, thank you very much for that!