I’m testing a small subset area of a large object to confirm the maximum texture quality visually, before trying to find settings that will give the same result on the entire object.
The texel size / texture quality values and the unwrap tool’s checker-board size / sharpness on the full-sized object seem to have little relationship to how the smaller area resolves.
Are there any reliable visual cues, consistent value multipliers, or rules of thumb for obtaining similar texture detail between a model’s subset area and its full area?
* I’d hoped to use the unwrap tool as a visual guide - noting checker-board size and sharpness.
or
* “texel size” and “texture quality” values.
I’m currently over-dialing settings on a small area to see everything it can give me, then running multiple texture calculations at increasing settings until I have a similar quality match for the full object (visual check only, after many stab-in-the-dark settings tests).
Here’s an example:
Cropped area vs full object:
Optimal texture quality settings on the cropped area vs. the same Optimal settings on the full object:
Max texture resolve (by eye) settings on the cropped area vs. Max texture resolve settings on the full model:
The process to find the max texture resolve:
Note the lack of relationship in the max texture resolves between the texture settings and the unwrapped checker-boards.
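For reference, here’s the rough back-of-envelope arithmetic I’ve been assuming (a sketch only - the surface areas, texel size and UV utilisation figure below are placeholders, not values from my project): if the texel size stays constant, the texture area needed scales with the surface area of the mesh, so the full object needs far more texture map area than the crop to hold the same detail.

```python
# Back-of-envelope sketch (my assumption, not an official RealityCapture formula):
# with texel size as the physical edge length of one texel, the texel count for a
# surface scales with its area, and so does the number of texture maps required.
import math

def maps_needed(surface_area_m2, texel_size_m, map_px=16384, uv_utilisation=0.7):
    """Estimate how many square maps hold a surface at a given texel size.

    uv_utilisation is a guess at how much of each map the UV islands actually fill.
    """
    texels = surface_area_m2 / (texel_size_m ** 2)
    texels_per_map = (map_px ** 2) * uv_utilisation
    return math.ceil(texels / texels_per_map)

# Placeholder numbers: a 0.5 m^2 crop vs. an 18 m^2 full object at 0.1 mm texels.
print(maps_needed(0.5, 0.0001))   # crop        -> 1 x 16K map
print(maps_needed(18.0, 0.0001))  # full object -> ~10 x 16K maps at the same texel size
```

If that relationship holds, the “consistent multiplier” I’m after would just be the ratio of the two surface areas - which is what I’d like confirmed.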
Based on what you’re saying, I believe what you’re seeing are the limitations of the RealityCapture 3D viewer.
The 3D viewer is unable to show the full resolution of your textured model in real time. It down-samples the texture before visualizing it, even though the texture quality of your full model is 100%.
With those parameters you’d expect the cropped model and the full model to be equally sharp, but instead you’re seeing that the full model looks blurrier even though it has the same texel size and texture quality.
To test whether this is the case, use the ‘Render’ button to generate a high-resolution screen capture of your model and save it as a JPG. This screen capture shows the actual quality of your model, without the down-sampling the RC viewer applies for performance reasons.
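If you want something more objective than an eyeball comparison of the two rendered JPGs, a simple sharpness score works well enough for A/B checks. The sketch below is just a generic approach using OpenCV (not a RealityCapture feature), and the file names are placeholders - crop both renders to the same region at the same scale before scoring.

```python
# Hedged sketch: compare two exported renders with a variance-of-Laplacian
# sharpness score (a higher score means more fine detail survived).
import cv2

def sharpness(path):
    """Variance of the Laplacian of the image, read as grayscale."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise ValueError(f"could not read {path}")
    return cv2.Laplacian(gray, cv2.CV_64F).var()

print("cropped-model render:", sharpness("crop_render.jpg"))
print("full-model render   :", sharpness("full_render.jpg"))
```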
It’s a little visually confusing that RC does manage to resolve more in the viewer at higher texture quality values, but you’re correct: it’s all the same when output via Render.
Prior to your reply…
I went back to testing new alignments. Removing a couple of less-than-ideal images tightened up the geometry and texture, reduced the Optimal texel value from the prior solve, and produced 1x 16K texture vs. the prior 9x 16K textures, calculated in 1/10th of the time. It’s well worth increasing QC of all images prior to alignment!
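For anyone wanting to automate that kind of image QC pass, something like the sketch below is one way to flag the softest photos before alignment. It’s only an illustration of the idea (the folder path is a placeholder and the metric is my own choice, nothing RealityCapture provides) - I did my culling by eye.

```python
# Illustration of a pre-alignment QC pass: rank photos by a Tenengrad
# (Sobel gradient energy) score and list the softest candidates for culling.
# The folder path is a placeholder assumption.
from pathlib import Path
import cv2
import numpy as np

def tenengrad(path):
    """Mean squared Sobel gradient magnitude; lower suggests a softer image."""
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise ValueError(f"could not read {path}")
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx * gx + gy * gy))

photos = sorted(Path("capture_set").glob("*.jpg"))
scores = sorted((tenengrad(p), p.name) for p in photos)
for score, name in scores[:10]:  # the ten softest images, worth a closer look
    print(f"{score:10.1f}  {name}")
```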
Additionally, I came across another General Discussion thread, which I thought was worth including here for anyone searching for texture-quality wisdom (looks like you replied to that one too).
Maxing out the number of camera resolves and the “Point count” / “Total projection” numbers may look good, but it also seems to increase the percentage of less-than-ideal data being introduced into the alignment / geometry / texture solve.
Tightening up QC of the images and tightening “Max feature re-projection error”, “Image overlap” and “Detector sensitivity” resulted in a 1/3rd reduction of my data set (and likely cache size), but gave a significant bump in texture detail in soft seam areas, with a slight improvement in texture sharpness overall. I’d say there’s a likely improvement to the geometry accuracy too, but that’s just me theorising wishfully.
I’m still not near the sharpness of the source images (I’d think there’s a way to squeeze a little more out of them), but these steps have been good. Thanks for your reply.