Hello everyone,
I am a geomatics engineering student and am currently working on a thesis aimed at improving the process of generating meshes from point clouds.
I am trying to understand how images are projected onto a mesh to generate textures in photogrammetry/3D reconstruction software.
More specifically:
How is the projection calculated? When multiple images see the same surface, how are they combined or weighted to produce the final texture?
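To make the question concrete, here is a minimal sketch of the pinhole projection I assume these tools use internally (intrinsics K, rotation R, translation t are standard symbols; the function name is my own):

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3D world point into pixel coordinates with a pinhole model.

    K: 3x3 intrinsics, R: 3x3 rotation (world -> camera), t: 3-vector translation.
    Returns (u, v) pixel coordinates and the depth in the camera frame.
    """
    X_cam = R @ X_world + t   # world -> camera coordinates
    x = K @ X_cam             # homogeneous image coordinates
    depth = X_cam[2]
    return x[:2] / depth, depth

# Example: camera at the origin looking down +Z,
# focal length 1000 px, principal point (500, 500)
K = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)

uv, depth = project_point(np.array([0.0, 0.0, 2.0]), K, R, t)
# A point on the optical axis projects to the principal point (500, 500)
```

My understanding is that each mesh face is projected this way into every image that sees it, and the per-image colors are then blended or one "best" view is selected; I would appreciate corrections if the real pipelines work differently.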
I am also wondering whether it would be possible to modify this process to compute a "confidence score" for areas of the mesh, based on criteria such as the number of images capturing the surface, the viewing angle, the distance from the camera, and the image quality.
The goal would be to detect unreliable areas (holes, artifacts, false surfaces) more easily.
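To show the kind of score I have in mind, here is a rough sketch combining the criteria above. All weights, normalization constants, and the function name are placeholders I made up, not values from any existing software:

```python
import numpy as np

def face_confidence(view_count, cos_angles, distances,
                    max_views=5, ref_distance=10.0):
    """Heuristic confidence score in [0, 1] for one mesh face.

    view_count: number of images that see the face
    cos_angles: cosine of the angle between each viewing ray and the face normal
    distances:  camera-to-face distance for each observing image
    max_views, ref_distance: arbitrary normalization constants (assumptions).
    """
    if view_count == 0:
        return 0.0  # unseen face: no confidence at all
    coverage = min(view_count / max_views, 1.0)            # more views -> better
    angle = float(np.mean(np.clip(cos_angles, 0.0, 1.0)))  # frontal views -> better
    dist = float(np.mean(ref_distance / (ref_distance + np.asarray(distances))))
    # Geometric mean so that one very bad criterion drags the whole score down
    return float((coverage * angle * dist) ** (1.0 / 3.0))

# A face seen frontally by five close cameras should score higher
# than a face seen once at a grazing angle from far away
good = face_confidence(5, np.array([0.95] * 5), np.array([5.0] * 5))
bad = face_confidence(1, np.array([0.2]), np.array([40.0]))
```

Does this kind of per-face weighting resemble what texturing pipelines already compute internally when they select or blend views?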
Are there any articles, algorithms, or open-source implementations I should consult?
Thank you!