So I’ve got a model that’s been running for a while and shows no signs of finishing. Not sure if I’ll even have enough space to hold it when/if it does. That’s the “Normal Detail” model.
Preview model is, not surprisingly, pretty useless.
The question is: Is there a way to get a smaller model than “Normal Detail”? (but better than “Preview”)
Maybe effectively 10-30% quality or poly count of Normal Detail.
Any help/thoughts/suggestions are greatly appreciated.
I was worried that it would destroy the finer details of the model. Going from downscale factor of 2 (normal) to 4 (preview) results in quite a drastic degradation of the model.
Going from 2 to 3 seems to be pretty bad too.
So wondering if there is anything else that could be done.
Resize the images less drastically in a third-party program. Like 24 megapixels to 18. I’m racking my brain for another in-app solution, but nothing is coming to mind. The mods might have some ideas in about 8-10 hrs from now when they’re active.
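For anyone wanting to try that route, the arithmetic is worth spelling out: megapixel count scales with the square of the linear resize factor, so going from 24 MP to 18 MP only shrinks each dimension by about 13%. A quick sketch (the 6000x4000 sensor size is just an assumption for illustration):

```python
import math

def resize_dims(width, height, target_mp):
    """Return (width, height) scaled so the image holds roughly
    target_mp megapixels. The linear scale is the square root of
    the pixel-count ratio."""
    scale = math.sqrt(target_mp * 1e6 / (width * height))
    return round(width * scale), round(height * scale)

# A 24 MP image (6000 x 4000) resized to roughly 18 MP:
print(resize_dims(6000, 4000, 18))  # -> (5196, 3464)
```

That gentler reduction is hard to express as one of RC’s integer downscale factors, which is why an external batch resize comes up at all.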
Usually you reconstruct first, then decimate (simplify) afterwards. Not really a problem I’ve put much thought into. I’m not at my work computer so I can’t test. Can you use a decimal like 2.3 for the image downscale?
Yes, I’ve always reconstructed then decimated before. And still believe that it’s the correct way to go about things. But running up against time here.
RC doesn’t seem to let me do decimal downscale. I can batch scale everything in PS (or similar), but really don’t like that option. Got a lot of areas with equipment with flat, shiny panels with barely any detail as is. If I downscale, all the features I’ll have left will be the dirt and surrounding vegetation.
I think Steven’s suggestion is the only feasible one. Just remember that normal reconstruction already uses half the resolution (a quarter of the pixels). If you are worried about the small details, then there is really nothing you can do, because no software can do magic. If you go to downscale 3 (which I believe cuts things down even further), then there is no point trying preview, which can be set to “Use sparse point cloud = false” so that it actually calculates depth maps instead of using the tie points…
You can also set the depth-map downscale for each image individually. In that case the final downscale is the product of the number defined per image and the one defined in the reconstruction settings. You can find a detailed explanation, for example, here:
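In other words, per the description above the two settings multiply rather than override each other; a minimal sketch of that rule (function name is mine, not an RC API):

```python
def effective_downscale(per_image, reconstruction):
    """The final depth-map downscale for an image is the product of
    its per-image setting and the reconstruction-settings downscale
    (as described in the post above)."""
    return per_image * reconstruction

# Per-image downscale 2 combined with reconstruction downscale 2
# behaves like a single downscale of 4:
print(effective_downscale(2, 2))  # -> 4
```

So a per-image value between 1 and 2, where available, is one way to land between the stock “Normal” and “Preview” results.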
Tim, I had to let my project run close to 48 hrs to reconstruct on High. To make sure it is indeed progressing, keep an eye on Task Manager and Resource Monitor. In Resource Monitor, check off RealityCapture and System. You will be able to see the pictures being read, and cache files being read and written. Just peek in every once in a while to make sure it’s not stuck.
For my purposes (Turntable photogrammetry) I usually use preview quality reconstruction with sparse cloud usage set to false and preview downscale set to 2 (same as normal).
From starting a new project, importing images, aligning, reconstructing, texturing and finally exporting the model, it usually takes me less than 30 minutes. But I typically use fewer than 300 source images to begin with.
Depending on the complexity of the object and quality of the input images the resulting mesh has anywhere from 200k to over one million tris.