It would be great to have a better texture projection process.
- We really need to extrapolate low-res texels on the texture. I think it's a BUG and needs to be fixed. http://take.ms/8ah3X
- It would be great to have priority/weight blending for textures.
For example: Normal priority - pixels facing the projector/camera should have higher priority (more visible).
Depth priority - pixels closer to the camera/projector should have higher priority, because closer means more information, more detail.
Focus priority - (almost like the previous one) but pixels AT the in-focus depth should have higher priority. (The ability to tune it for each photo would be helpful.)
Center priority - distance from the photo center means more distortion and aberration, so points near the center should have higher priority.
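A minimal sketch of how these four priorities might combine into a single per-texel weight. All function names, parameters, and falloff constants below are hypothetical illustrations of the idea, not RC's actual implementation:

```python
import numpy as np

def texel_weight(cos_view_angle, depth, focus_depth, radial_dist,
                 depth_falloff=0.5, focus_tolerance=0.1):
    """Combine the four proposed priorities into one weight for a texel
    as seen from one camera. All inputs are hypothetical per-texel values:
      cos_view_angle - cosine between surface normal and view direction (0..1)
      depth          - distance from the camera to the surface point
      focus_depth    - distance at which this photo is in focus
      radial_dist    - normalized distance from the image center (0..1)
    """
    # Normal priority: texels facing the camera score higher.
    w_normal = np.clip(cos_view_angle, 0.0, 1.0)
    # Depth priority: closer surface points carry more detail.
    w_depth = 1.0 / (1.0 + depth_falloff * depth)
    # Focus priority: penalize texels far from the in-focus depth.
    w_focus = np.exp(-((depth - focus_depth) / focus_tolerance) ** 2)
    # Center priority: less distortion/aberration near the image center.
    w_center = 1.0 - radial_dist ** 2
    return w_normal * w_depth * w_focus * w_center
```

With multipliers like these, a texel that faces the camera, sits at the focus distance, and projects near the image center dominates, while a grazing-angle, out-of-focus, or peripheral sample is strongly suppressed.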
Tell me what you think.
Hi Yegor
all the things you mentioned are already implemented
Looking at the dataset you've sent me, I can see one primary problem which can be solved pretty easily… just take MORE images. You have 40 images altogether in 3 elevations and it's NOT enough.
It's recommended to take one image every 10 degrees per elevation (36 images - 18 is good too, if more elevations are present), and for a subject like yours I would say at least 4-5 elevations - 144+ images - and you'll always get perfect results.
So in short, the texture reprojection issues can be traced to bad alignment, as there is not enough overlap among the images.
Hi.
If all this is implemented - it doesn't work.
I agree that I have some alignment problems because I shot handheld, but I still think the methods I described here would improve texture quality, and there would be no “stamping” or “ghosting”. A texture sample should be taken from ONE source image - the highest-quality, most detailed one - with a little blending at the edges of the sample.
I have new tests taken in a studio, so no more motion blur and enough images.
And everything works fine except texturing.
The images were taken with a macro lens, and some parts of the model are out of focus, but other images have the same areas in focus.
Yet on the final texture I see strangely mixed in-focus and out-of-focus parts.
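The in-focus/out-of-focus mixing described above could in principle be handled with a per-patch sharpness measure; a common choice is the variance of a Laplacian response. A hypothetical sketch (not RC's actual code - the function names and patch handling are illustrative assumptions):

```python
import numpy as np

def sharpness(patch):
    """Variance of a simple 4-neighbour Laplacian - a common focus measure.
    High variance = strong local contrast = likely in focus."""
    p = patch.astype(float)
    # Laplacian via array shifts, so no external dependencies are needed.
    lap = (-4 * p[1:-1, 1:-1]
           + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return lap.var()

def pick_sharpest(patches):
    """Return the index of the sharpest candidate patch - i.e. the
    hypothetical 'best source image' for this texture area."""
    return int(np.argmax([sharpness(p) for p in patches]))
```

Ranking candidate views of the same surface patch this way would let the texturing step prefer the photo where that patch is in focus, instead of averaging sharp and blurry samples together.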
Hi again.
I'm back to this post because it's still a very problematic part of the work for me. I've done some tests and here are my results:
-
Why do I need more than 3 elevations? For what? Here is an example - one elevation and only 23 images of terrible quality (overexposed and bloomed by the sky) and… a great mesh result! Smooth and clean, except a few places in blind zones. But it's just a test.
So it seems that's more than enough to get a good result.
https://monosnap.com/file/fBKWaDxmFu7cA … SHlJgDbP0Z
-
But the textures are blurred! And stop telling me that the images are blurred - they are not blurred, they are OK.
Yes, I believe the images may not be aligned perfectly and can be shifted by a few pixels… but why is the mesh OK in that case?
Also, no matter how shifted the images are, there would be no problems if RC did not blend ALL images with the same weight. It needs shorter transitions between fragments and priority for ONE best texture fragment for each part. That's all.
For example, here are two textures. One was baked in standard mode, from all images; the second from only 3 selected images.
http://take.ms/2xhFY
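The "one best fragment" idea can be shown with a toy example: averaging two slightly misaligned views smears an edge into a ghost, while taking the whole fragment from the single highest-weight view keeps it crisp. All names and weight values below are hypothetical:

```python
import numpy as np

def blend_equal(samples):
    """What the poster objects to: average every view with equal weight."""
    return np.mean(samples, axis=0)

def blend_best(samples, weights):
    """Proposed alternative: take the whole fragment from the single
    highest-weight view (weights are hypothetical per-view scores)."""
    return samples[int(np.argmax(weights))]

# Two views of the same step edge, misaligned by one pixel:
a = np.array([0., 0., 1., 1.])   # edge between indices 1 and 2
b = np.array([0., 1., 1., 1.])   # the same edge, shifted by one pixel

ghosted = blend_equal([a, b])              # [0., 0.5, 1., 1.] - smeared edge
crisp = blend_best([a, b], [0.9, 0.4])     # keeps view `a` intact
```

The equal-weight average introduces an intermediate value that neither photo contains, which is exactly the blur/ghosting complained about; the winner-takes-all fragment preserves the original edge, at the cost of needing seam blending where fragments meet.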
-
More images - more problems! More errors, more misalignments and bugs on the mesh, longer computation, more blurred textures, and a very, very, very slow texture baking process.
So please, rethink this. Maybe it's possible to make some improvements in baking? Some other algorithm?
Hi Yegor
- But the textures are blurred! And stop telling me that the images are blurred - they are not blurred, they are OK.
Yes, I believe the images may not be aligned perfectly and can be shifted by a few pixels… but why is the mesh OK in that case?
Also, no matter how shifted the images are, there would be no problems if RC did not blend ALL images with the same weight. It needs shorter transitions between fragments and priority for ONE best texture fragment for each part. That's all.
For example, here are two textures. One was baked in standard mode, from all images; the second from only 3 selected images.
http://take.ms/2xhFY
This is a clear sign of misalignment of the images.
- More images - more problems! More errors, more misalignments and bugs on the mesh, longer computation, more blurred textures, and a very, very, very slow texture baking process.
There is no other algorithm that can solve this if the images are not properly captured. It's all about taking images properly - not overshooting (too many unnecessary images), good angles, etc.
Can we set up a TV session so I can take a look at your data? Contact me at my email.
Wishgranter wrote:
There is no other algorithm that can solve this if the images are not properly captured. It's all about taking images properly - not overshooting (too many unnecessary images), good angles, etc.
I can't agree with that.
It's simple enough - do not blend all images! Just extract islands of good quality and choose the best part by priority (I wrote about this in my first message).
For example, Agisoft, with the same images.
The mesh is terrible… but the textures are sharper and more detailed.
http://take.ms/4hkwC
http://take.ms/5yvJi
One more thing… the more we optimize the mesh, the worse the texture we get.
The shape changes, and RC can't trace the textures correctly from all cameras.
The reason is the same - a better algorithm is needed. Mixing all images is a bad idea.
Here is an example:
http://take.ms/wyC8Z
Maybe this will be interesting and helpful: https://www.youtube.com/watch?v=j_g_QA47aX8
Wow, the texture projection algorithm shown there looks very potent and robust. I was always wondering why it shouldn't be possible to take out objects and automatically get rid of their texture being projected onto whatever is around them.
+1 for integration of this method
+1, this does look interesting.
The technique in this paper relies on depth (looks like Tango), while many RC users work from images alone and have no ground-truth depth for each frame. So perhaps an approach like this could work by combining laser-scan ‘scaffolding’ with RGB views, but the core contribution of the paper depends on ground truth of some kind for the (great!) results they show.