Thanks Tom, much appreciated. Pick up a shovel, Götz. I was beginning to wonder why there was no response to my last post; I spent a long time drafting it and now see it never got posted, it must have timed out. If I can retrace my steps, it’s worth working this through, as I’m seeing some dots come together and others with increased residuals.
Firstly, thanks for clarifying the nomenclature, most useful. Two, and preferably more, RC users comparing notes on how this powerhouse software thinks isn’t unlike photogrammetry itself: two minds are better than one at triangulating shared experience alongside the unique insights each of us picks up individually. What’s missing here is for WishGranter to grant our wish, if nothing more, to dispel misconceptions about what’s happening under the hood, and ideally to kick in with relevant details we’ve not even thought to bring up. Thank you, WishGranter, for your time in providing feedback!
Can we confirm this statement, please?:
In this case, I am pretty certain that Detected Features are the ones in the 2D imagery, whereas Tie Points are the three-dimensional result. So basically in each alignment DFs will be used to calculate TPs, generating a component.
DFs: 2D and in cache; TPs: 3D and in alignment/component
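To make sure we mean the same thing by those terms, here’s a toy Python sketch (my own illustration of textbook two-view triangulation, not RC’s internals): the 2D detections live per image, and the 3D tie point only comes into existence once matched features from two or more cameras are triangulated.

```python
# Conceptual sketch (mine, not RC's code): Detected Features are per-image
# 2D points; a Tie Point is the 3D result of triangulating a match.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature pair.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # homogeneous -> 3D tie point

# Two toy cameras looking at the same scene point:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted 1 unit
X_true = np.array([0.2, 0.1, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]    # "detected feature" in image 1
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]    # "detected feature" in image 2
print(triangulate(P1, P2, x1, x2))           # -> the tie point [0.2 0.1 5.0]
```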
I question (only musingly) whether the TPs seen in the 3D view aren’t the same as the TPs seen per 2D image, putting aside for now where they’re stored, just to get the terms and moving parts defined. When I grab TPs in the 3D view with the Points Lasso, click Find Images, switch to 2Ds on the left pane and 2D on the right, then enable Tie Points under the Image tab, it would seem possible that the same TPs seen in 3D are simply represented flat per image in 2D. No? A sketch of what I mean follows. It doesn’t make so big a difference, but the extent to which they’re the same or different data forces a question about where they’re stored: one data set stored in cache and/or in the rcproject file within a Component, or two different types of data, one stored in cache and the other in the project file? I not only want to know where it’s stored, it’s useful to know what function it serves in either place. Specifically, if it’s stored in cache to speed recovery, but the cache has no influence on future Alignments, then emptying the cache would only benefit storage management (or is there some additional benefit?). If it’s stored in the Component/rcproject file, how exactly does one set of TPs in an older Component influence a younger Component, or does it at all? Does a subsequent Alignment only consider the TPs in the previous-generation Component(s) it drew from?
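Here’s how I picture a single tie point being stored (a hypothetical structure of my own, not RC’s file format): one 3D position plus the 2D observations of it in each image. If something like this is true, the “TPs per 2D image” view would just be these observations drawn flat, not a second data set.

```python
# Hypothetical tie-point record (my guess at the concept, not RC's schema):
from dataclasses import dataclass, field

@dataclass
class TiePoint:
    xyz: tuple[float, float, float]            # 3D position (the component)
    observations: dict[str, tuple[float, float]] = field(default_factory=dict)
    # image_id -> (u, v) pixel coords where this point was detected

tp = TiePoint(xyz=(0.2, 0.1, 5.0))
tp.observations["IMG_0001"] = (1024.5, 768.2)   # same TP, seen in image 1
tp.observations["IMG_0002"] = (980.1, 770.9)    # ...and in image 2
# The 3D view would render tp.xyz; the per-image 2D overlay would render
# tp.observations - one data set, two representations.
```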
Can a younger Component influence (harm) an older Component? When we profess our belief in good hygiene, not just to avoid clutter but to avoid the unintended consequence of one set of TPs exerting unwanted influence, is that idea founded?
So do you think that they are still a bit flexible, meaning RC will still improve the cameras relative to one another in one parent component?
My understanding is that with Merge components only = True, RC either joins Components or it doesn’t; it won’t generate many equal or smaller Components. I think there is plasticity in the Components it’s attempting to merge, but not because of this setting, which would also apply to Force component rematch. The plasticity is controlled per image, or Input, so that if you select all images…
…you may already know about the Feature source settings. “Use all image features” provides the highest plasticity, especially if applied to all images, in that RC considers all DFs (or might these be synonymous with TPs?). “Merge using overlap”, being the first in the list, might indicate the lowest level of plasticity but the fastest processing time: it doesn’t consider all the TPs in the Components, only the TPs and CPs in the images common to at least two Components, that being an overlap. “Use component features”, the second choice, would fall in the middle: RC considers all the TPs and CPs in all Components, whether they overlap or not. So if two Components cover overlapping real estate but contain totally different images, this route tests for that condition. That raises a question: what happens when some images in Components are set one way and images not belonging to any Component are set another? Not that I want to parse out every permutation and combination, though you’d have to do that to really get it. A sketch of my working model follows.
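Here’s that working model in Python, an assumption for WishGranter to confirm or shoot down, not documented behavior. Each setting widens the pool of evidence the merge step may draw on:

```python
# My working model of the three Feature source settings (speculative):
from itertools import combinations

def feature_pool(setting, components, loose_images=frozenset()):
    """Return the set of images whose TPs/CPs (or, for the last setting,
    raw DFs) I *guess* RC considers during a merge."""
    all_component_images = set().union(*(c["images"] for c in components))
    if setting == "Merge using overlap":
        # only images shared by at least two components - least plasticity,
        # fastest, since the overlap alone carries the merge
        shared = set()
        for a, b in combinations(components, 2):
            shared |= a["images"] & b["images"]
        return shared
    if setting == "Use component features":
        # all TPs/CPs in all components, overlapping or not - the middle road
        return all_component_images
    if setting == "Use all image features":
        # every detected feature in every image - maximum plasticity, slowest
        return all_component_images | set(loose_images)

compA = {"images": {"img1", "img2", "img3"}}
compB = {"images": {"img3", "img4"}}
print(feature_pool("Merge using overlap", [compA, compB]))     # {'img3'}
print(feature_pool("Use component features", [compA, compB])) # all four
```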
If you’re working in chunks, exporting Registration and importing Components, then “Merge using overlap” seems like the way to go if your Components include common images, but then why not combine those to begin with in one Component? Okay, file management might urge you in that direction. If your Components share common real estate but not imagery, then “Use component features” in combination with CPs should get you there. The moment you introduce new images, e.g. data that perhaps hasn’t behaved well within the image sequences comprising your Components, then I’d think all photos, not just the new ones but also any belonging to the Components, should be set to “Use all image features”. You wouldn’t want to lock plasticity between the new images and any of the images belonging to existing Components, since how do you know which ones they overlap with (real estate-wise)? The first two settings applied to images belonging to the Components, set next to the third setting on the remaining images, would potentially force the new images to conform to the edges of a fixed chunk of world: soft on the inside, too crunchy on the outside.
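A hypothetical rule of thumb encoding the paragraph above (my reading of the settings, not official guidance):

```python
# Rule-of-thumb picker for Feature source (my speculation, to be confirmed):
def pick_feature_source(components_share_images, components_share_real_estate,
                        new_loose_images):
    if new_loose_images:
        # apply to ALL photos, old and new, so nothing stays locked rigid
        return "Use all image features"
    if components_share_images:
        return "Merge using overlap"      # fastest; overlap carries the merge
    if components_share_real_estate:
        return "Use component features"   # plus CPs to tie them together
    return "Use all image features"       # fall back to maximum plasticity

print(pick_feature_source(False, True, False))  # -> "Use component features"
```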
I think by raising the weight ridiculously high, you eliminate the good influence of TPs in the vicinity, thus making it easier for an error in one of your CPs to do its evil work and thereby stick out more clearly…
I now believe this statement holds an important clue to limited plasticity in RC via human intervention, assuming that all your images were set to “Use all image features”. Since I often use that setting, though I still encounter the stepping and the issues downstream of it, e.g. multiple Components, I wonder whether increased weight on a CP is considered equally among all TPs relevant to the imagery containing that CP. If each TP hears the same weight, then the TP(s) closest to the CP carry the greatest burden to make the adjustment. Think about ironing a wrinkled garment: if you focus too much on a small area with the iron, you simply iron in the wrinkle, or move the fold over a ways, like passing the buck.
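To test that intuition, here’s a toy with made-up numbers (mine, nothing from RC’s solver) showing how one heavily weighted term takes over a weighted least-squares objective of the form sum(w_i * r_i**2):

```python
# Toy illustration of why a ridiculously high CP weight can drown out
# nearby TPs (my own numbers, not RC's actual solver):
import numpy as np

residuals = np.array([0.5, 0.4, 0.6, 2.0])  # three TPs and one bad CP (2.0 px)
weights   = np.array([1.0, 1.0, 1.0, 1.0])
print(np.sum(weights * residuals**2))        # CP error shares the load: 4.77

weights[3] = 1000.0                          # crank the CP weight way up
print(np.sum(weights * residuals**2))        # CP term dominates: 4000.77
# The solver will now bend the cameras to kill that single term, so a bad CP
# "sticks out" clearly - and the good TP evidence nearby is all but ignored.
```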
If you encounter a stepped section - a wall, floor, or ceiling separates - and the user places CPs along the area in as many photos as feature that area, confident that a) those features are strong candidates, appearing similar from different vantage points, and b) the CPs are placed with precision, then RC gets the memo: “close this stepped section!”. “Use all image features” may not go far enough. What if each TP in the vicinity were given a “listening weight”, as it were, based on proximity? The nearest TPs respond the most dramatically to close the gap. And so we don’t simply displace the load onto neighboring TPs, shifting the step, the listening weight drops off with distance, perhaps controlled by the user. Maybe the issue is localized; maybe the step relates to a huge space where a giant loop has difficulty closing. Something like the falloff sketched below.
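Here’s the “listening weight” idea in code, a feature wish rather than an existing RC control: each TP’s willingness to move decays with its distance from the CP, so the fix spreads smoothly instead of ironing the step into the neighbors.

```python
# Sketch of a proximity-based "listening weight" (a wish, not an RC feature):
import numpy as np

def listening_weight(dist, sigma=1.0):
    """Gaussian falloff: TPs near the CP respond most; far TPs barely move.
    sigma would be the user-controlled radius of influence."""
    return np.exp(-dist**2 / (2 * sigma**2))

for d in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"distance {d:.1f} -> weight {listening_weight(d):.3f}")
# distance 0.0 -> weight 1.000, 1.0 -> 0.607, 4.0 -> 0.000 (roughly):
# the load tapers off instead of dumping onto the nearest neighbor.
```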
This kind of thing must be at work in RC, in any photogrammetry engine, for it to work at all, since the order in which an engine loads images would cause all kinds of “wrinkles” if it didn’t adapt by spreading the mathematical load. What we’re after here is how to enable the user to push the limits. RC is truly amazing, as you say, but system resources alone limit RC, if not the inherent limitations of the software in supporting greater control over plasticity during human intervention. To your comment about not always being able to go back: I recently returned from Siberia, 3D mapping in a salt mine 300 meters down. Because I knew we wouldn’t be returning, I strongly recommended my client purchase a fast laptop to run each day’s data, to protect against data gaps, stepping, and alignment issues. That worked for two days before the machine wouldn’t even allow offloading files without crashing, so I was flying without a net. I was super conservative, and all of that data has every camera aligning, but on the last day my client asked if I couldn’t loosen up, it being our last chance to get a long tunnel shot in the bag. I changed my (proven) workflow and switched camera bodies with one of our Russian fixers; her Sony A7SII had a fourth the resolution of my A7RII, but really sweet high ISO, so I was able to shoot from a greater distance and map with a broader paintbrush, if you will. Most of this data also worked, but I was right at the edge: gaps, a stepped area, the issues that prompted this thread reared their heads.
Good news: thanks to you, I’ve now - Points Lasso > Find Images > refined image selection > Find Points - pinpointed the problem children, gotten everything to align without stepping, and can go back to the client with good news. I’m unfamiliar with Gradual Selection, but I do hope that WishGranter humors our lengthy dispatches here, weighs in with key facts about what types of features influence Alignment, straightens out any misconceptions, and hears our plea for possible improvements to optimize plasticity/control. BTW, I’m very curious about your work; it’s truly amazing that you were able to get two highly separated spaces sharing those thin floorboards to properly align, and yes, that also speaks to RC’s killer algorithms at work. I’d welcome us connecting in real time over TeamViewer or Google Hangouts (the former is way better), if you’re up for that.
To Tom’s point, and thank you again for taking note, communication is a good thing. Many will say: who has time to read so many words, why all this? I don’t turn to forums for my social fix. We’re working across the world on lonely planets; at least I often feel that way in my sanctum/bubble. You were one of the folks early on, an open book, whom I greatly valued and value still, telling me things I didn’t know, comparing notes, bearing down on the important steps to becoming a power user worthy of the name. Keep it coming, Götz.
Best,
Benjy