Hello,
I have a slightly unusual use case: I would like to align, create the model, and generate the texture, but then swap out the original images on the cameras for other images and generate the textures again without having to recalculate the alignment or the model.
We typically do this when we artificially add contrast to the images so the alignment picks up more points, but then want a more de-lit image for the actual texture.
Another use case we’ve been discussing is the generation of game LODs using photogrammetry. The idea is to take screenshots of your game asset with the full shader (which might have multiple materials and complex blending), but also render out the diffuse/spec/normal/etc. maps. Then you use photogrammetry to generate a single model out of your complex model(s). By being able to swap out the images on the cameras, we can generate the different textures for the LODs.
Hope that makes sense!
Luiz
Greetings.
Unfortunately, there is currently no direct way to swap the source image data entering the texturing process. It should become available in the future.
In the meantime, you can try these two workarounds (at your own risk :)).
- Before the texturing process, swap the actual files on disk, or swap the directory for one containing the altered images.
- Before the texturing process, create a new IStructureFromMotion and register the altered images preserving the original order, i.e. the nth altered image is added as the nth image.
Thanks! That’s worth a try!
On a separate note (I can make a new post if preferred): is there a way to create a Model object in memory without having to load it from disk? I want to pass a custom model from my application into RC for the texture to be applied to. Ideally I wouldn’t have to write it to disk for that.
Hi Luiz.
A model object can be imported with the Import() SDK API call. Unfortunately, it accepts only a file name, not a stream, so you have to store the model on disk and import it from there. Extending this functionality is not currently planned.
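Since Import() only takes a file name, one common pattern is to serialize the in-memory mesh to a temporary file and hand that path to the import call. A minimal sketch, assuming a simple mesh of vertex tuples and 0-based face indices (that layout is my assumption, not an RC structure), writing Wavefront OBJ:

```python
import os
import tempfile

def mesh_to_temp_obj(vertices, faces):
    """Serialize an in-memory mesh to a temporary Wavefront OBJ file and
    return its path, which can then be passed to a file-based import call
    such as RC's Import(). The caller is responsible for deleting the
    file once the import has finished."""
    fd, path = tempfile.mkstemp(suffix=".obj")
    with os.fdopen(fd, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            # OBJ face indices are 1-based
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")
    return path
```

This keeps the disk round-trip confined to one temporary file, which is about the best that can be done until the API accepts streams.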