Depth map calculation seems to be the most time-consuming step for huge projects, since it mostly uses single-core processing, at least on our system (please correct me if it behaves differently for you). So the question is: is it possible to generate depth maps into the same temp/cache folder of a project by opening multiple instances and calculating different regions in each? Does anyone have experience doing this?
The goal is to get all the depth maps into one project file as fast as possible. (If there is an easier way to achieve this, please let me know. I also want to mention that I was not sure whether to post this in General, Workflows, or even Feature Requests, since the question also has a general technical aspect to it.)
Hello Jonathan,
distributed computing of depth maps is not possible, as we currently cannot distribute jobs at all. You could approximate this workflow by using multiple PCs instead of instances and splitting the model into parts, selecting an appropriate subset of images for each project (something like the sketch below). You would then have to somehow connect all of the parts topologically afterwards. In other words… I don't recommend this over the classic workflow.
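If you do try the multi-PC split, the main preparation work is simply dividing the image set into per-machine batches, with some overlap between neighbouring batches so the separately processed parts can be tied back together later. Below is a minimal Python sketch, assuming the photos sit in one folder and the split is done by capture order; the folder path, machine count, and overlap value are illustrative choices, not anything RealityCapture requires.

```python
from pathlib import Path

def split_into_batches(image_dir, num_machines, overlap=30):
    """Split an ordered image set into per-machine batches.

    Each batch shares `overlap` images with its neighbour so the
    separately processed parts keep common images for later registration.
    Purely illustrative; tune the overlap to your dataset.
    """
    images = sorted(Path(image_dir).glob("*.jpg"))
    per_machine = len(images) // num_machines
    batches = []
    for i in range(num_machines):
        start = max(0, i * per_machine - overlap)
        end = len(images) if i == num_machines - 1 else (i + 1) * per_machine + overlap
        batches.append(images[start:end])
    return batches

# Example: write one image list per PC, each to be imported into its own project.
for n, batch in enumerate(split_into_batches("D:/scan/photos", num_machines=3), start=1):
    Path(f"pc{n}_images.txt").write_text("\n".join(str(p) for p in batch))
```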
With RC I always suggest users go for raw CPU and GPU power rather than a high core count; many cores are better suited to long render jobs in engines like V-Ray. RC pushes the hardware as far as it can go most of the time, so raw performance is the significant factor.
The component workflow helps with alignment. For example: you have an 8000-photo dataset but only 16 GB of RAM. Normally you would not be able to align it all at once, but with the component workflow you can align different parts separately and then merge them together (see the sketch below). In other words, this workflow is aimed at the alignment phase, not the reconstruction phase.
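To make the RAM constraint concrete, the component split is the same kind of partitioning: break the photo list into chunks small enough to align on the available memory, keep a shared slice between neighbouring chunks so the resulting components have common images for merging, and align each chunk as its own component. A minimal sketch follows; the chunk size and overlap are purely illustrative, since how many photos fit in 16 GB depends on the images and settings.

```python
def component_chunks(photos, chunk_size=1500, overlap=100):
    """Yield overlapping chunks of the photo list, one chunk per component.

    Overlapping images give neighbouring components common tie points so
    they can be merged into a single component after alignment.
    chunk_size and overlap are illustrative values, not RC requirements.
    """
    step = chunk_size - overlap
    for start in range(0, len(photos), step):
        chunk = photos[start:start + chunk_size]
        if chunk:
            yield chunk
        if start + chunk_size >= len(photos):
            break

# Stand-in for the 8000-photo dataset from the example above.
photos = [f"IMG_{i:05d}.jpg" for i in range(8000)]
for idx, chunk in enumerate(component_chunks(photos), start=1):
    print(f"component {idx}: {len(chunk)} photos ({chunk[0]} .. {chunk[-1]})")
```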