I’m modeling large cave chambers and have been getting my feet wet with RC, working on one mesh at a time with only 100-300 images per mesh. But these are 52 MP images, so with default settings I’ve seen 400 M tris even for the smaller image sets of just 130 or so. When Texturing crashed in High detail, I reran it in Normal and encountered no issues. Wishgranter advised in a separate post to control texel size and texture size (4K or 8K) to stay within system resources. So, if I’m understanding it, texel size describes the density of the point cloud, i.e. the resolution of the geometry. Are the Normal and High settings for Reconstruction then simply two preset levels of texel size? I still have questions about how texture size, utilization, etc. function, but those can wait.
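To make my resource question concrete, here’s the back-of-envelope arithmetic I’ve been doing (my own sketch, not anything from RC; the surface area and texel size values are made-up examples) to estimate how many texture pages a mesh would need at a given texel size, ignoring UV packing waste:

```python
import math

def texture_pages(surface_area_m2, texel_size_m, page_px=8192):
    """Rough count of square texture pages (default 8K) needed to
    cover a surface at a given texel size, assuming uniform coverage
    and no packing overhead (an optimistic lower bound)."""
    texels_needed = surface_area_m2 / (texel_size_m ** 2)
    texels_per_page = page_px ** 2
    return math.ceil(texels_needed / texels_per_page)

# e.g. a hypothetical 500 m^2 chamber wall at 2 mm per texel:
print(texture_pages(500.0, 0.002))  # -> 2
```

Halving the texel size quadruples the texel count, which I assume is why the texture step is so sensitive to these settings.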
I’ve begun a separate thread here to shift topics, as I’d like to use what I’ve learned to address a broader question about workflow with large sets.
I found in Help, and was excited to learn, how components are used to export point clouds with all metadata attached, and how large sets are managed by importing these component point clouds with their metadata and then aligning them via common images, manually set control points, etc. Once everything shares a common scale and coordinate system and is co-registered, the constituent components are easily exported as right-sized chunks for use downstream. With this workflow in mind, I’m curious about a few things, like how one approaches setting reconstruction regions, not within any particular component (that part is clear) but relative to the reconstruction regions of neighboring components. Ideally there’d be no overlap between constituent meshes: with perfect alignment a tight seam wouldn’t show. But I suspect some compromise is needed, so how does one deal with what in the end should look like a seamless set?
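In case it helps frame the question, here’s one approach I’ve imagined (purely my own sketch, not a confirmed RC workflow): tile a shared bounding box into chunks along one axis, giving interior edges a small overlap band so the neighboring meshes can later be trimmed to a clean seam. The function name and values are hypothetical:

```python
def tile_region(xmin, xmax, n_chunks, overlap=0.0):
    """Split [xmin, xmax] into n_chunks intervals, each extended by
    `overlap` on interior edges so neighboring meshes share a thin
    band that can be trimmed to a single clean seam downstream."""
    width = (xmax - xmin) / n_chunks
    chunks = []
    for i in range(n_chunks):
        lo = xmin + i * width
        hi = lo + width
        if i > 0:                 # extend into the previous chunk
            lo -= overlap
        if i < n_chunks - 1:      # extend into the next chunk
            hi += overlap
        chunks.append((lo, hi))
    return chunks

# a 30 m passage in three chunks with a 0.5 m trim band:
print(tile_region(0.0, 30.0, 3, overlap=0.5))
# -> [(0.0, 10.5), (9.5, 20.5), (19.5, 30.0)]
```

Whether people actually leave such a band and cut it back, or butt the regions exactly and rely on alignment, is exactly what I’m asking.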
Also, I’m seeing lighting artifacts in the UE4 game engine related to all those tiny UV islands, and I’m curious how others approach that issue. I understand that small UV islands stem from the challenge of unwrapping intricate geometry, and that through retopology and unwrapping in an app like ZBrush one can simplify the UVs. Is this how it’s done when taking on large extended sets: each exported component mesh goes through this separate pipeline, one at a time?
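For what it’s worth, here’s a little helper I put together (my own code, not from RC or any other app) to quantify the problem: it counts UV islands from an OBJ-style face list by merging faces that share a UV index with a union-find. Thousands of islands on one mesh is roughly what I’m seeing:

```python
def count_uv_islands(faces):
    """faces: list of tuples of UV indices, one tuple per face.
    Faces that share any UV index belong to the same island."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:     # walk up to the island's root
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for face in faces:
        for uv in face[1:]:       # merge all UVs of a face together
            union(face[0], uv)
    return len({find(uv) for f in faces for uv in f})

# Two triangles sharing UV index 2 form one island; (5, 6, 7) is separate:
print(count_uv_islands([(0, 1, 2), (2, 3, 4), (5, 6, 7)]))  # -> 2
```

Counting before and after a ZBrush retopo/unwrap pass would at least let me measure whether the rework is paying off.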
I read about projects with thousands of images, so surely there’s a workable pipeline for these kinds of issues. Thanks for any insights.
Benjy