Hello,
I have a relatively large aerial drone image set of a building I am trying to reconstruct. It contains roughly 1,200 48 MP images and 9,000 8 MP images.
For smaller buildings/image sets captured with the same image density/pattern, we can process the whole set at once on our machine (an i7-12700K with 20 threads, 128 GB of system memory, and 2× NVIDIA RTX 4090s). For this building, however, we run out of system memory during the alignment phase and have to rely on swap, which makes the process extremely slow.
I have read about component workflows in RealityCapture that are supposed to help with this issue. My understanding is that you group the images geospatially into zones before feeding them into RC, process the zones one at a time in separate RC projects, and export each aligned zone component as an ‘.rcalign’ file. Then, in a final RC project, you merge all of the components.
According to the post linked below, component workflows allow an unlimited number of images to be aligned on a machine with minimal specs compared to what we are running:
When I try this component approach with the large dataset mentioned above, creating and merging the components appears to take just as much time and system memory as putting all the images through in one go. I have attached an example screenshot of how the zones are being split up. Each dot marks a location where an image was taken, and the green dots represent images shared by multiple neighboring zones (we used a 2-meter overlap).
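For reference, here is a minimal sketch of how we assign images to overlapping zones. This is illustrative only: the function name, the 30 m zone size, and the `(name, x, y)` input format are assumptions; only the 2 m overlap matches our actual capture plan. Images within the overlap margin of a zone boundary are duplicated into each neighboring zone so that adjacent components share tie images for merging.

```python
# Sketch: bin drone images into overlapping grid zones using
# local metric XY coordinates derived from each image's GPS tag.
from collections import defaultdict

ZONE_SIZE = 30.0  # zone edge length in metres (assumed for illustration)
OVERLAP = 2.0     # overlap margin in metres, as in our capture plan

def assign_zones(images):
    """images: list of (name, x, y) in a local metric frame.
    Returns {(zx, zy): [names]}; an image within OVERLAP metres of a
    zone boundary is duplicated into each neighbouring zone."""
    zones = defaultdict(list)
    for name, x, y in images:
        # zone indices whose 2 m-expanded bounds contain (x, y)
        zx0 = int((x - OVERLAP) // ZONE_SIZE)
        zx1 = int((x + OVERLAP) // ZONE_SIZE)
        zy0 = int((y - OVERLAP) // ZONE_SIZE)
        zy1 = int((y + OVERLAP) // ZONE_SIZE)
        for zx in range(zx0, zx1 + 1):
            for zy in range(zy0, zy1 + 1):
                zones[(zx, zy)].append(name)
    return zones

# Example: b.jpg sits 0.5 m from the x = 30 boundary, so it is
# assigned to both zone (0, 0) and zone (1, 0).
imgs = [("a.jpg", 5.0, 5.0), ("b.jpg", 29.5, 5.0)]
z = assign_zones(imgs)
```

Each zone's image list is then loaded into its own RC project, aligned, and exported before moving to the next zone.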
I have tried several different approaches and settings, but nothing seems to reduce the system memory usage when merging the final, fully aligned set before moving on to mesh reconstruction. Am I misunderstanding the post above? If not, what is the workflow that permits the alignment of ‘unlimited’ images on a system with limited memory?
Any help would be greatly appreciated.
Thanks,
Scott