One feature that would be very helpful is to manually align an image to a 3D model for texturing.
How it would work:
a) select an image
b) pick corresponding points in the 2D image and in the 3D mesh/model (at least 3 points)
Result: compute the camera for texture projection, and include the camera in export.
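For reference, the pose computation described above is the classic Perspective-n-Point (PnP) problem. A minimal sketch of the idea with OpenCV (every point value and intrinsic below is made up purely for illustration, nothing here comes from RC):

```python
# Minimal sketch of the PnP step, assuming OpenCV. All point values and
# intrinsics are illustrative placeholders.
import numpy as np
import cv2

# Manually picked pixel coordinates in the selected image...
image_points = np.array([
    [412.0, 303.0],
    [1650.0, 288.0],
    [1024.0, 990.0],
    [530.0, 1410.0],
], dtype=np.float64)

# ...and the matching 3D points picked on the mesh (model space).
object_points = np.array([
    [0.12, 1.05, 0.40],
    [2.31, 1.02, 0.38],
    [1.20, 0.10, 0.55],
    [0.25, -0.80, 0.42],
], dtype=np.float64)

# Intrinsics have to come from somewhere (EXIF focal length, calibration...).
fx = fy = 2400.0           # focal length in pixels (assumed)
cx, cy = 1024.0, 768.0     # principal point (assumed: image center)
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]], dtype=np.float64)

# EPnP needs >= 4 correspondences; 3 picks determine a pose only up to
# ambiguity, so a few extra picks make the solution much more stable.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)                      # 3x3 rotation
print("camera center in model space:", (-R.T @ tvec).ravel())
```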
There is some other software (e.g. Thinkbox Sequoia, 3DReshaper) that can do this.
It’s very helpful when one wants to project a particular image onto part of the model. Alpha blending the borders with the surrounding textures would be a nice add-on: it would avoid sudden changes in color / exposure, and since color matching is often a problem, it would also be a point of differentiation that speeds up the workflow.
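To make the blending idea concrete, here is a rough sketch of one way it could work, a distance-transform feather on the projected patch (file names and feather width are placeholders, not anything RC does today):

```python
# Purely illustrative feathering sketch: blend a projected patch into the
# existing texture with a distance-transform alpha ramp. File names and
# the feather width are placeholders; all images must share one resolution.
import numpy as np
import cv2

base = cv2.imread("existing_texture.png").astype(np.float32)
patch = cv2.imread("projected_patch.png").astype(np.float32)
mask = cv2.imread("patch_coverage.png", cv2.IMREAD_GRAYSCALE)  # 255 = patch valid

feather_px = 32  # how wide the soft border should be, in pixels
# Distance from each covered pixel to the patch border, clamped to the
# feather width, yields a smooth 0..1 alpha ramp along the edges.
dist = cv2.distanceTransform((mask > 0).astype(np.uint8), cv2.DIST_L2, 5)
alpha = np.clip(dist / feather_px, 0.0, 1.0)[..., None]

blended = alpha * patch + (1.0 - alpha) * base
cv2.imwrite("blended_texture.png", blended.astype(np.uint8))
```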
Hope this helps.
I guess you would want to apply a totally different image or texture, right?
Because otherwise you could simply align it…
That sounds more like post processing to me though.
One workaround I can imagine is to trick RC by swapping one aligned image with your desired texture file.
And then disabling all other images for texturing that would interfere.
Then hope that the borders are all right…
This type of manual correspondence picking would mostly be about time savings and a more efficient workflow for getting quality textures. In practice we have found it a very powerful tool for achieving production-quality textured models (much more controllable than relying on photogrammetry alone). It would be best to have such functionality directly in RC, though, and it seems like 95% of the functionality is already there in the point matching between images: it just needs to match an image against the 3D mesh/cloud and create another camera for it that takes priority over the other cameras during texturing.
A few use cases:
- For laser scans, where we want to add extra texture detail and control exactly which texture goes where on the mesh. We may not have enough images in the first place, or we may have to go back another day, with different lighting, to take more pictures that may then not align. This would save a lot of time.
- For fixed models we re-import, where there are gaps or holes that we have filled and want to give a particular texture.
- E.g. if alignment failed and there is a hole in the model (common in very uniform areas), we create a simple polygon to fill it (unfortunately wasting a lot of time on I/O to go to another package and back) and then want to texture it; when there is a hole in the photogrammetry model, there is often no aligned camera covering that area.
- In a laser scan, some areas have gaps where the laser did not return data, and it is not always viable / economical to take enough pictures for photogrammetry to work with sufficient overlap. In many cases it would be easier to just use 1-5 photos to texture the gaps, at least in areas of the model without much detail or when there was no time to take more photos (see the projection sketch after this list).
- For structured-light meshes we created and imported and want to texture, where the photogrammetry cameras may not align perfectly or something is off with the rotation.
- In general, when the automatic texturing fails or does not work well, e.g. when there are thin lines and the software automatically blends many overlapping pictures, creating misalignment, blur, or artifacts. A tool like this allows precise control over which texture goes where, rather than just hoping the algorithm figures it out.
- Ability to use far fewer images where more are not economical or one simply forgot some pictures. Texturing this way takes seconds rather than a long GPU computation.
- Ability to use less sharp or otherwise different images, e.g. to achieve a certain look.
- In general, we may only need 4-5 images to texture an object we already have a mesh for, rather than hundreds.
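And to illustrate the texturing step itself, here is a rough sketch (my assumption of how it could work, not RC's actual pipeline) of projecting mesh vertices through the recovered camera to get UVs, so a single photo textures only the part of the model it sees:

```python
# Rough sketch (an assumption, not RC's actual pipeline): use the
# recovered camera to project mesh vertices into the photo and derive
# per-vertex UVs. Function and variable names are hypothetical.
import numpy as np
import cv2

def vertex_uvs(vertices, rvec, tvec, K, image_w, image_h):
    """Project Nx3 vertices into the image; return 0..1 UVs + validity mask."""
    pts, _ = cv2.projectPoints(vertices.astype(np.float64), rvec, tvec, K, None)
    xy = pts.reshape(-1, 2)
    uv = xy / np.array([image_w, image_h], dtype=np.float64)
    # Depending on the texturing convention, v may need flipping: v = 1 - v.
    # Vertices projecting outside the frame can't be textured by this photo.
    inside = (uv >= 0.0).all(axis=1) & (uv <= 1.0).all(axis=1)
    # A real implementation would also need occlusion testing (e.g. a depth
    # pass) so hidden faces don't receive the texture.
    return uv, inside
```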
Hope this helps.