Are there any limitations to the new RC Steam conditions as of February 2020?
I am currently trying to process a project with a data set of 986 images, but I notice that this project will not reconstruct in normal detail. As far as I understand, under the Steam conditions you can upload 1,000 photos per project, independently of the MPX limit. Here is what is happening. First, halfway through, this project got an “out of memory” error when reconstructing in normal detail. I have already overcome that problem: my C drive was overloaded by the paging files (my RC cache runs on D), so I moved the paging files to an SSD with enough free space (the D drive).
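For anyone hitting the same error, a minimal sketch of the sanity check I now do before a long reconstruction (plain Python, standard library only; the drive letters and the 64 GB threshold are my own example assumptions, not RC requirements):

```python
import shutil

# Drives to check: project files on C, RC cache and paging file on D.
# These paths and the 64 GB threshold are example assumptions.
MIN_FREE_GB = 64

for drive in ("C:\\", "D:\\"):
    total, used, free = shutil.disk_usage(drive)
    free_gb = free / 1024**3
    status = "OK" if free_gb >= MIN_FREE_GB else "LOW - risk of out-of-memory/paging errors"
    print(f"{drive} free: {free_gb:.1f} GB ({status})")
```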
I now notice that with this specific project the reconstruction stalls and comes to a halt halfway through, without any visible notification (the progress bar is still there, as is the reconstruction timer). In Task Manager I can see that RC is only using 0.1% CPU and no more files are being written to the RC cache. Apparently this is a known issue according to various forum posts, which attribute it to corrupt files… I have tried everything within my own and the forum’s knowledge to get this project running again in RC, e.g. changing the reconstruction region by 15%, etc., without success.
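Rather than sitting in Task Manager, a stall like this can be watched for automatically. A minimal sketch, assuming the third-party psutil package is installed and that the process name and cache path below match your machine (both are assumptions):

```python
import time
from pathlib import Path

import psutil  # third-party: pip install psutil

PROCESS_NAME = "RealityCapture.exe"  # assumed process name
CACHE_DIR = Path(r"D:\RC_cache")     # assumed RC cache location

def find_rc():
    # Look for the RC process by name among all running processes.
    for p in psutil.process_iter(["name"]):
        if p.info["name"] == PROCESS_NAME:
            return p
    return None

proc = find_rc()
if proc is None:
    raise SystemExit(f"{PROCESS_NAME} is not running")

last_count = len(list(CACHE_DIR.iterdir()))
while True:
    cpu = proc.cpu_percent(interval=60)     # average CPU over one minute
    count = len(list(CACHE_DIR.iterdir()))  # any new cache files written?
    if cpu < 1.0 and count == last_count:
        print("Possible stall: CPU ~0% and no new cache files in the last minute")
    last_count = count
```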
I have therefore deleted this project together with its cache and project-related files, created a new data set, and aligned and refined it. This is now running, and again I notice that the reconstruction stalls halfway through. Other projects do work under the current RC Steam conditions (projects of 700-800 photos). Can I conclude that a project of 986 photos can no longer run with Steam? What else can be wrong with this project, and what else can I do?
This should not be related to the limitations, dear user. Is this a high detail or a normal detail reconstruction? Could you share the alignment and reconstruction settings, along with what the console writes out while you process (number of parts, etc.)?
These are the alignment and reconstruction settings. I have tried to reconstruct this data set several times, including importing a component into a new project, and also the same data set reduced to 600 images, all with the same result. The reconstruction runs halfway, the progress bar then comes to a standstill, and RC only uses 0.1% CPU. The progress bar does not move for as long as you let it run (>12 hours). I have until the end of the week to process the last data sets, so it is getting a bit more stressful… I don’t know what exactly you mean by ‘what the console writes out’.
Hello again,
could you please try the following settings and run a normal detail reconstruction, for example?
Also, please don’t run other applications while RC is processing.
I ran this project again in normal detail with the new reconstruction settings; together with the cached data from previous attempts, the model was created within an hour, a mesh of 1.9M triangles. Can I now assume that with the reconstruction settings changed as specified, the model is less faithful and less detailed (given the low number of triangles)? I am used to my models having so many triangles that I simplify them to 14M and 3M triangles afterwards.
I’m going to do a high detail reconstruction as well. But which of the specified reconstruction values should I adjust to get a sharp, printable mesh from a model? Is the detail decimation factor of great importance?
In any case, thanks for the info, at least I have a mesh.
Yes, sir, the detail decimation factor is a value that represents the downscale of an image. In this case, 2 means that 50% of every image’s resolution is used, which is mostly just enough for the meshing pass. Of course, it depends on your megapixels and the number of images. You can try running high detail, or just use a smaller value for the minimal distance between two vertices, say 0.001 for example. This should also have an effect on how many polygons are created, although 2M triangles for a small, not-too-complicated model should be just enough. If it is a very detailed, large model, then the values should be changed to get more.
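To make the numbers concrete, a rough back-of-the-envelope sketch (plain Python; the 24 MPX camera, the old vertex spacing, and the assumption that the decimation factor applies per image axis are mine for illustration, not confirmed RC internals):

```python
# Rough estimate of how decimation factor and minimal vertex distance
# affect the inputs and outputs. Illustrative only, not RC internals.

image_mpx = 24.0       # assumed camera resolution in megapixels
decimation_factor = 2  # "50% resolution": each image axis halved (assumed)

# Halving each axis leaves 1/factor^2 of the pixels for depth maps.
effective_mpx = image_mpx / decimation_factor**2
print(f"Effective input per image: {effective_mpx:.1f} of {image_mpx} MPX")

# Triangle count scales roughly with 1/d^2 for a surface meshed at
# vertex spacing d (twice as fine a spacing -> ~4x the triangles).
base_triangles = 1.9e6       # the 1.9M-triangle mesh from this thread
d_old, d_new = 0.002, 0.001  # example spacings in scene units
estimated = base_triangles * (d_old / d_new) ** 2
print(f"Halving the vertex spacing: ~{estimated / 1e6:.1f}M triangles")
```

In other words, halving the minimal distance between two vertices should roughly quadruple the polygon count, which is why it is the first value to try when you want a denser, printable mesh.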