Using components in large sets

I’m modeling large cave chambers and have been getting my feet wet with RC working on one mesh at a time, only 100-300 images per mesh. But these are 52 MP images, so with default settings I’ve seen 400 M tris even for the smaller sets of just 130 or so. When I ran into problems with Texture crashing on High, I encountered no issues on Normal. Wishgranter advised in a separate post to control vertex count and texture size (4K or 8K) to stay within system resources. So, if I’m understanding it, texel size describes the density of the point cloud, i.e. the resolution of the geometry. Are the Normal and High settings for Reconstruction then simply two preset levels of texel size? I still have questions about how texture size, utilization, etc. function, but that can wait.

I’ve begun a separate thread here to shift topics, as I’d like to use what I’ve learned to now address a broader question about workflow with large sets.

I found in Help, and was excited to learn, how components can be used to export point clouds with all metadata attached: large sets are managed by importing these component point clouds with their metadata, then aligning them via common images, manually set control points, etc. Once everything shares a common scale and coordinate system and is co-registered, the constituent components are easily exported as right-sized chunks for use downstream. With this workflow in mind, I’m curious about a few things, such as how one approaches setting reconstruction regions, not within any particular component (which is clear), but relative to the reconstruction regions of neighboring components. Ideally there’d be no overlap between constituent meshes, since with perfect alignment a tight seam wouldn’t show, but I suspect some compromise is needed. How does one deal with what should, in the end, look like a seamless set?

Also, I’m seeing lighting artifacts in the UE4 game engine related to all those tiny UV islands, and I’m curious how others approach that issue. I understand that small UV islands come from the challenge of unwrapping intricate geometry, and that through retopology and unwrapping in an app like ZBrush one can simplify the UVs. Is this how it’s done when taking on large extended sets: each exported component mesh goes through this separate pipeline, one at a time?

I read about projects with thousands of images. Surely there’s a workable pipeline for these kinds of issues. Thanks for insights.

Benjy

Hi Benjamin
Please fill out your signature with your PC configuration: ucp.php?i=profile&mode=signature

So, if I’m understanding it, texel size describes the density of the point cloud, i.e. the resolution of the geometry. Are the Normal and High settings for Reconstruction then simply two preset levels of texel size? I still have questions about how texture size, utilization, etc. function, but that can wait.

NO, a texel is just the name for a texture pixel… so your texture resolution depends only on the source image resolution + the UV unwrap results + the image downscale used for texturing.
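For intuition on what the source images can actually contribute, the standard pinhole ground-sample-distance estimate (my own illustration here, not anything RC-specific) gives the size one source pixel covers on the subject; texels finer than that carry no extra real detail:

```python
def ground_texel_size(distance_m, focal_length_mm, pixel_pitch_um):
    """Approximate size (in metres) of one source-image pixel projected
    onto the subject, using a simple pinhole-camera model. Texture
    resolution beyond this limit adds no real detail."""
    return distance_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# e.g. subject 2 m away, 50 mm lens, ~5 µm pixels -> about 0.2 mm per pixel
```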

When I ran into problems with Texture crashing on High, I encountered no issues on Normal.

Can you be more specific, with some screenshots of the MODEL details etc.?

Also, I’m seeing lighting artifacts in the UE4 game engine related to all those tiny UV islands, and I’m curious how others approach that issue. I understand that small UV islands come from the challenge of unwrapping intricate geometry, and that through retopology and unwrapping in an app like ZBrush one can simplify the UVs. Is this how it’s done when taking on large extended sets: each exported component mesh goes through this separate pipeline, one at a time?

I highly recommend reading about UVs and the related display issues: http://wiki.polycount.com/wiki/Edge_padding
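To make the linked idea concrete, here is a minimal numpy sketch of edge padding (purely my own illustration, not what RC or UE4 do internally): colours are bled outward from each UV island into empty texels so that bilinear filtering and mip-mapping at island borders don’t pull in the background colour:

```python
import numpy as np

def pad_edges(texture, mask, iterations=4):
    """Naive edge padding: repeatedly copy island colours into empty
    neighbouring texels. `texture` is a 2-D float array, `mask` a bool
    array marking texels covered by UV islands. Note: np.roll wraps at
    the texture borders; a real implementation would clamp instead."""
    tex = texture.copy()
    filled = mask.copy()
    for _ in range(iterations):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(filled, (dy, dx), axis=(0, 1))
            shifted_tex = np.roll(tex, (dy, dx), axis=(0, 1))
            new = shifted & ~filled          # empty texels next to filled ones
            tex[new] = shifted_tex[new]      # bleed the colour outward
            filled |= new
    return tex
```

Each iteration grows the islands by roughly one texel, which is why bakers expose padding/dilation in texels.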

I found in Help, and was excited to learn, how components can be used to export point clouds with all metadata attached: large sets are managed by importing these component point clouds with their metadata, then aligning them via common images, manually set control points, etc. Once everything shares a common scale and coordinate system and is co-registered, the constituent components are easily exported as right-sized chunks for use downstream. With this workflow in mind, I’m curious about a few things, such as how one approaches setting reconstruction regions, not within any particular component (which is clear), but relative to the reconstruction regions of neighboring components. Ideally there’d be no overlap between constituent meshes, since with perfect alignment a tight seam wouldn’t show, but I suspect some compromise is needed. How does one deal with what should, in the end, look like a seamless set?

Depending on the needs, I personally prefer to reconstruct in one piece and then cut out the parts (with overlap) that I need…

A bit of progress: I’ve learned to export alignment components, bring them in, and begin consolidating them with control points. The Help refers to a Feature Source setting for relating alignment components, via “merge using overlaps”, “use component features”, or “use all features”, but it isn’t indicated how to bring these up in Settings; thanks for pointing me there.

Secondly, when attempting to find an overlapping region between two alignment components (the overlap isn’t via shared images; rather, there’s overlap in coverage), I keep a grid of the master folder with all images open on a second monitor for quick reference. The question then becomes which images appear in a given alignment component. I struggle with this because the camera poses aren’t listed sequentially. Any reason for that? One wants to glance at the list of poses/images in a component, look over at the grid to get oriented, then back again to make selections when populating a 6-panel view and setting tie-ins. I’m having to hunt just to find the first or last image in the sequence.

Lastly, you say you’ll often work up as big a composition as alignment will carry you, then segment via reconstruction regions with slight overlap and export right-sized mesh chunks. When you’re working on a large set, what control is there in RC to make these reconstruction regions somewhat uniform and logical, i.e. following a pattern that lets you keep track of what next needs to be exported? Maybe I just haven’t come across it yet, but I’m thinking a tool for segmenting along all three axes according to chunk size would make this workflow yet more procedural. Just a thought.

Hi Benjamin

A bit of progress: I’ve learned to export alignment components, bring them in, and begin consolidating them with control points. The Help refers to a Feature Source setting for relating alignment components, via “merge using overlaps”, “use component features”, or “use all features”, but it isn’t indicated how to bring these up in Settings; thanks for pointing me there.

Select the images, and in the 1Ds view you can change the feature source, but you need to understand the settings.
The OVERLAP option is a great setting, but it does not use feature overlap, it uses CAMERA overlap, meaning the same cameras present in different COMPONENTS. This way you can align ultra fast, as it takes into account just the “overlapping” cameras.
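In other words, “merge using overlaps” keys on image names that appear in more than one component. A trivial sketch of that idea (purely illustrative, not RC’s code):

```python
def overlapping_cameras(component_a, component_b):
    """Return the image names registered in both components. Merging
    via overlaps aligns components through these shared cameras
    instead of re-matching features across all images."""
    return sorted(set(component_a) & set(component_b))

# Two components that share two cameras can be merged through them:
# overlapping_cameras(["IMG_01", "IMG_02", "IMG_03"],
#                     ["IMG_02", "IMG_03", "IMG_04"])
```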

Lastly, you say you’ll often work up as big a composition as alignment will carry you, then segment via reconstruction regions with slight overlap and export right-sized mesh chunks. When you’re working on a large set, what control is there in RC to make these reconstruction regions somewhat uniform and logical, i.e. following a pattern that lets you keep track of what next needs to be exported? Maybe I just haven’t come across it yet, but I’m thinking a tool for segmenting along all three axes according to chunk size would make this workflow yet more procedural. Just a thought.

It is easier to use the RECONSTRUCTION REGION (aka BBox) and the INFO TOOL (numerical input for the RRegion’s position, scale, and orientation) to get the desired reconstruction areas. It is even better to reconstruct the model as a whole and then use the FILTERING tool with the BBox to cut out the part you need for your use case…
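Since the Info Tool takes numerical input, one workable approach is to precompute a uniform grid of overlapping box coordinates yourself and type them in per chunk. A hedged sketch, assuming simple axis-aligned boxes in scene units (this is my own helper, not an RC feature):

```python
from itertools import product

def grid_boxes(bbox_min, bbox_max, chunks, overlap):
    """Split an axis-aligned bounding box into a uniform (nx, ny, nz)
    grid of sub-boxes, each grown by a fixed overlap margin, e.g. to
    use as successive reconstruction regions / export chunks.
    Coordinates are plain (x, y, z) tuples in scene units."""
    sizes = [(hi - lo) / n for lo, hi, n in zip(bbox_min, bbox_max, chunks)]
    boxes = []
    for idx in product(*(range(n) for n in chunks)):
        lo = tuple(bmin + i * s - overlap
                   for bmin, i, s in zip(bbox_min, idx, sizes))
        hi = tuple(bmin + (i + 1) * s + overlap
                   for bmin, i, s in zip(bbox_min, idx, sizes))
        boxes.append((lo, hi))
    return boxes
```

Iterating the returned list in order gives a predictable export pattern, which addresses the “keeping track of what’s next” concern.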

Hello Wishgranter,

Thank you for the follow-up on both emails regarding this question about related sets. It’s starting to make sense. I’ll be digging into the ins and outs of the larger pipeline, but for now I just have a question or two. What’s the difference between High and Normal quality? That is, in terms of tri count alone, is there any difference between the solution afforded by Normal and by High once the High model is simplified back down to the tri count of Normal? Is it like starting at a higher resolution in printing, where you squeeze more detail out in the end by starting higher?

I found the answer to a question I posted elsewhere regarding the smoothing groups required by Unreal Engine, something I now take care of in 3ds Max. That said, I see the Smoothing tool in RC. Is using it pretty much an automatic yes, and is it ideally applied to the High model before decimation? Without smoothing I see blocky polys in many of the details, and I’m uncertain when to intervene with smoothing options. I’m attaching two images. The first is a render out of RC at High quality before any smoothing; the caver was about 8’ from the 52 MP camera, and his face/body features look like they might be acceptable downstream just from the RC render. When I applied Smooth 30 in 3ds Max (which shades an edge smooth when the angle between adjacent faces is under 30 degrees), it looks like I’m putting shrink wrap over everything; see the second image of the render mesh inside UE4. It looks pretty bad: either overly smoothed, or with polys standing out (face and rock).
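For reference, angle-threshold smoothing of the “Smooth 30” kind boils down to comparing adjacent face normals against a cutoff. This toy function (my own sketch, not 3ds Max’s implementation) shows the per-edge decision:

```python
import math

def smooth_across_edge(n1, n2, angle_threshold_deg=30.0):
    """Decide whether two adjacent faces should share smoothed normals:
    if the angle between their unit face normals is below the threshold,
    shade the edge smooth; otherwise keep it hard. A low threshold on
    noisy scan geometry leaves many hard edges, a high one can produce
    the 'shrink wrap' look."""
    dot = sum(a * b for a, b in zip(n1, n2))
    # normals assumed unit length; clamp for numeric safety
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle < angle_threshold_deg
```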

I’d planned on experimenting with High, then Smoothing, then Simplify to 10 M tris; bringing that into ZBrush, decimating to 1 M, retopologizing, subdividing 3-4 times, transferring detail from high to low, exporting as a render mesh, and applying an 8K texture. Your thoughts on generating and preserving detail on the way to real-time render engines like UE4?

Many thanks for your guidance. I’m extremely pleased with RC performance and features. Your team has clearly worked very hard to think through a mountain of considerations. Love the autozoom function when setting CPs.

Best,
Benjy

Hi Benjamin
The SMOOTHING tool is not related to smoothing groups… it is pure mesh smoothing, used to “remove” noise etc…