Hello… not completely sure… but I tried with an object on a turntable, and CR has a hard time aligning…
I guess because the background remains the same all the time… in Pscan, one can mask the background…
What’s the plan…?
Hi Steff
For now, masking is NOT used in the ALIGNMENT step, only in the reconstruction and texturing steps.
Can you PM me at muzeumhb@gmail.com with a few images, so I can see what settings could help?
In the meantime, try changing the alignment settings to these values.
Dear Steff,
is the problem only in the scene background? I would propose adding a background with a solid color and lighting it properly, so that the feature detector does not find too many features on the background.
To explain the masking in RealityCapture: even though we do not have a mask-editing tool built into the app, the app itself supports masks to some extent. For the alignment step, all you need to do is modify the image color channels and paint the background with a solid color. Natural features will then not be detected there and thus will not influence camera alignment. For meshing we support alpha-channel masks. Adding an alpha channel and masking only the important parts actually speeds up the whole computation. Another benefit is that you would not need to use the reconstruction region to filter out parts which are not important.
You should be able to generate these masks easily with 3rd-party software in a batch process.
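As a sketch of such a batch process (my own illustration with Pillow, not an official RealityCapture tool — file names and folder layout are assumptions), the two variants described above could look like this: painting the background a solid color for alignment, and writing the mask into an alpha channel for meshing. It assumes each photo has a matching binary mask image (white = object, black = background) exported from some 3rd-party tool:

```python
# Sketch of batch mask application with Pillow (hypothetical file layout).
# Assumes photos/IMG_xxxx.jpg each have a mask photos/IMG_xxxx_mask.png,
# white = object, black = background.
from pathlib import Path
from PIL import Image

def paint_background(photo_path, mask_path, out_path):
    """For the alignment step: paint the background a solid color so
    no natural features are detected there."""
    photo = Image.open(photo_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")
    solid = Image.new("RGB", photo.size, (0, 0, 0))  # solid black background
    # Keep the photo where the mask is white, the solid color elsewhere.
    Image.composite(photo, solid, mask).save(out_path)

def add_alpha_mask(photo_path, mask_path, out_path):
    """For the meshing step: store the mask in the alpha channel
    (save as PNG/TIFF -- JPEG cannot store alpha)."""
    photo = Image.open(photo_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")
    photo.putalpha(mask)
    photo.save(out_path)

src = Path("photos")
if src.is_dir():
    out = Path("masked")
    out.mkdir(exist_ok=True)
    for photo in src.glob("*.jpg"):
        mask = photo.with_name(photo.stem + "_mask.png")
        if mask.exists():
            paint_background(photo, mask, out / (photo.stem + ".png"))
```

Photoshop actions or ImageMagick would do the same job; the point is only that the masks can be produced outside the app and fed in as modified images.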
Hellooo…
thanks Milos and Martinb,
yes, this is what I did, and it works… BUT in a couple of tests I get several components…
let’s say I have 250 cams… then 120 cams end up in one component and 130 in another. And while each of these two components is correctly aligned internally, they are not aligned with each other…
How do I align these two components…?
S/
oki, that worked
Jaws-001.jpg
Over the next few days I will show 2 datasets on a turntable:
how to shoot, what to watch for, and, if it cannot be aligned in one piece, how to get it together…
awesome… Thank you Milos
Hello and Merry Christmas!
Any news on the data sets? Very curious:)
Best,
/Kaj
Any news, Milos…?
martinb wrote:
… For the alignment step, all you need to do is modify the image color channels and paint the background with a solid color. Natural features will then not be detected there and thus will not influence camera alignment. For meshing we support alpha-channel masks. Adding an alpha channel and masking only the important parts actually speeds up the whole computation. …
Handy to know. I saw the alpha masking in the thumbnails. I’ll add a fill step to my Photoshop actions.
Wishgranter wrote:
Over the next few days I will show 2 datasets on a turntable:
how to shoot, what to watch for, and, if it cannot be aligned in one piece, how to get it together…
I am sorry to resurrect this old thread, but I am very interested on how to shoot and how to align top and bottom of an object on a turntable.
Anastasios wrote:
Wishgranter wrote:
Over the next few days I will show 2 datasets on a turntable:
how to shoot, what to watch for, and, if it cannot be aligned in one piece, how to get it together…
I am sorry to resurrect this old thread, but I am very interested on how to shoot and how to align top and bottom of an object on a turntable.
I’ll resurrect again. I’m new to photogrammetry but I’ve been playing around with the turntable aspect and thought I’d share something that’s worked really well for me. This is just experimenting and done very cheaply.
I created a box with an open front from plain white poster board. $.50 per sheet at my local store. Robbed a 10" turntable base from a spice rack in the kitchen. 4 cheap lamps with 60w led bulbs shining into the front of the box. Bulbs are just open as the white poster board seems to do a good job of diffusing light from the source. I place a featureless plain white glass dinner plate over the turntable near the center of the box and put the object in the center of the plate. What I’ve found is that with a camera on a tripod in front of the box, I can shoot a complete revolution of an object on the turntable, then rotate the object on any axis and shoot another revolution. Most of the time I get a 100% alignment and little to no cleanup after reconstruction.
Like I said I’m new at this but I guess the glass plate doesn’t register? Whatever the reason it works very well. Never have to move the camera, just rotate the object to get top and bottom. I usually only have to rotate the object once 90 degrees to get the top and bottom as well. Very few to no stray points and good reconstruction detail of all sides with good lighting and 100-150 photos.
Have you tried setting the Background detection feature to True? It can be found in the Alignment settings.
When set to True, the Background detection feature appears to ignore the non-changing background when calculating the camera positions.
Ken Brain wrote:
Have you tried setting the Background detection feature to True? It can be found in the Alignment settings.
When set to True, the Background detection feature appears to ignore the non-changing background when calculating the camera positions.
I’m not looking at the software but I thought from the tooltip description that Background Feature Detection ran feature detection in a background thread as soon as photos were uploaded. You’re saying it actually has to do with detecting image features in the subject background? Interesting. I’ll have to experiment with that.
Background feature detection is about processing in the background, and has nothing to do with which parts of the image get features detected.
Not sure about the latest version, but I ended up turning this off, as I couldn’t start the alignment until it had finished. If it’s off, this happens when you hit align.
Hoping someone can help…
I’m hoping to capture objects of a similar size to the one in the attached image using the turntable method. Obviously this image does not have sufficient lighting, which I’m currently in the process of sourcing, nor does it have the effective ‘featureless’ background required for the photogrammetry process.
I’m in the process of sourcing a green screen but as far as I can work out I need a huge screen. The object is 6ft away from the back wall (a specified minimum distance to stop green spill onto the model) and the camera (Pi cam V2) is at a sufficient distance away from the object to ensure I get the full object in frame.
However, as you can see, these distances leave a huge amount of background space needing to be filled by the green screen; I calculate a screen size of 16ft x 16ft. My question is this: if I used a green screen the same size as the grey wall in the image (which would obviously create a green floor too!), could I just crop each and every one of my images to get rid of the unwanted background at the sides? I’ve read that cropping the images pre-processing is a big no-no, but not why… As far as I can work out, as long as all images are cropped to the same size, I don’t see why this would affect processing. There might be an extremely obvious reason why, so apologies if there is!
I would just test it, but I don’t really want to buy a green screen if it turns out not to be big enough!
Thanks in advance!
Hi Paul,
Why don’t you turn the camera to portrait???
You are not supposed to crop images because that alters the geometry of the distortion. My guess is that it might still work if it is done maintaining the same orientation and image center. I am saying this because rectified images also work to a certain extent, depending on your camera and requirements for accuracy. If you can use an automated process for cropping, why don’t you just try it and tell us?
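A minimal sketch of such an automated, center-preserving crop (my own illustration with Pillow; folder names and target size are assumptions). Cropping equal margins from opposite edges keeps the original image center at the center of the cropped image, which is the case described above as most likely to still work:

```python
# Sketch: symmetric crop that keeps the image center fixed.
# Equal margins are removed from opposite edges, so the original
# image center stays at the center of the cropped frame.
from pathlib import Path
from PIL import Image

def center_crop(img, new_w, new_h):
    """Crop img to new_w x new_h around its center."""
    w, h = img.size
    left = (w - new_w) // 2
    top = (h - new_h) // 2
    return img.crop((left, top, left + new_w, top + new_h))

src = Path("photos")
if src.is_dir():
    out = Path("cropped")
    out.mkdir(exist_ok=True)
    for p in src.glob("*.jpg"):
        center_crop(Image.open(p), 2000, 2000).save(out / p.name)
```

All images get the same crop, so the whole set stays consistent; an off-center crop (trimming only one side) would move the image center and is the riskier case.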
Hi Götz,
Really appreciate your reply! I had thought of rotating the camera to portrait, but that would mean slightly redesigning the camera mounts. Not a problem, I just thought cropping would be a quick solution!
I’ll take your word for it that cropping causes distortion problems; however, I would like to know what physically happens to the image once it’s cropped to cause this, only because I’m interested, not because I don’t believe you! If you could shed some more light on that, that would be great!
As cropping isn’t normal practice for photogrammetry, I will try it simply as a test and report back my findings, but for now I’ll rotate the images into portrait orientation! :-)
It won’t be straight away, but watch this thread and I’ll report back ASAP!
Hi Paul,
I’m far from being an expert on image geometry but here are some thoughts:
The thing with cropping is that the algorithms can only operate within a certain range of typical lens distortions. These need to be describable by mathematical formulas, so if you brutally alter that premise, they might get confused. How much depends on the initial distortion. As I said, it MIGHT work to a certain degree. You need to play around with the distortion model in the alignment settings. There is no way to predict which one will be best, only trial and error. In general, Brown 4 has more variables to work with, and the following add-ons to Brown (K and tangential) will add more.
Good luck and I’m curious about the outcome!