Hello, sorry if this question is a bit off-topic, but I know Reality Capture has been used for videogrammetry before, so…
With a camera rig I successfully reconstructed a sequence of textured meshes. When I try to play it back, though, a completely new mesh has to be loaded every frame, which uses a lot of resources; most devices can't even do it in real time. I'm looking for a way to create a temporally coherent mesh, so that the whole mesh doesn't have to be replaced every frame and only the texture gets updated, something like this: https://www.youtube.com/watch?v=7nVJs31W1DY
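To show what I mean, here's a very naive sketch of the idea in Python with trimesh (the `frame_*.obj` file names are just placeholders for my per-frame exports): treat the first reconstructed frame as a fixed-topology template, then for each later frame snap the template's vertices onto that frame's surface, so connectivity never changes and only vertex positions (and the texture) would update. I know real mesh-tracking pipelines use proper non-rigid registration rather than plain projection, so this isn't a workable solution, just an illustration of the goal:

```python
# Crude illustration only: plain nearest-surface projection will break
# under large motion; real pipelines use non-rigid registration.
import trimesh

# Hypothetical per-frame exports from the reconstruction.
FRAMES = [f"frame_{i:04d}.obj" for i in range(10)]

# Use the first reconstructed frame as the fixed-topology template.
template = trimesh.load(FRAMES[0], force="mesh")

for i, path in enumerate(FRAMES[1:], start=1):
    target = trimesh.load(path, force="mesh")

    # Snap every template vertex onto the nearest point of this
    # frame's surface; faces (and so UVs) stay identical.
    closest, _, _ = trimesh.proximity.closest_point(
        target, template.vertices
    )

    tracked = trimesh.Trimesh(
        vertices=closest, faces=template.faces, process=False
    )
    tracked.export(f"tracked_{i:04d}.obj")
```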
Any guidance would be greatly appreciated, thanks!