Reality Capture: Under the Hood

I’m looking for answers to general questions about how Reality Capture works. For example:

  1. How does Reality Capture differ from Agisoft Photoscan in terms of its primary structure-from-motion (SfM) algorithms?

  2. Why is Reality Capture so much faster than Agisoft Photoscan at generating meshes from identical photo sets?

  3. What does Reality Capture use for depth estimation? Inverse shading?

  4. Could you give a brief technical explanation of how Reality Capture works as a piece of software?

Hello Christopher,

Thank you for your inquiry. However, the algorithms used in RealityCapture are confidential, so we cannot answer your questions. Thank you for your understanding.


Right, and thank you for your reply. I understand the need for confidentiality, but I am asking from more of an academic perspective: I have no plans to implement a competing algorithm or the like.

With that being said, is there any publicly available information about how Reality Capture works?

While I am extremely satisfied with the results it has produced thus far, it is limiting as a user to click through the software without ever knowing how those operations affect the underlying application.

The researchers behind RealityCapture are well recognized in the computer vision community; you can find their publications, e.g., in the references here: https://en.wikipedia.org/wiki/RealityCapture

A description of the individual settings and tools can be found in the application Help.
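
For readers arriving at this thread with the same questions: since RealityCapture's own algorithms are confidential, the most one can say publicly is that photogrammetry packages in general build on the standard pipeline described in the computer vision literature the support team points to above: feature detection and matching, incremental structure-from-motion (SfM) for camera poses and a sparse point cloud, dense multi-view stereo depth maps, and finally meshing and texturing. The sketch below is a minimal two-view SfM example using OpenCV, purely as an illustration of those generic stages and not a description of RealityCapture's implementation; the image filenames and the intrinsic matrix K are placeholder assumptions.

    # A minimal two-view structure-from-motion sketch with OpenCV. This only
    # illustrates the generic stages asked about above (feature detection,
    # matching, relative pose estimation, triangulation); it is NOT
    # RealityCapture's proprietary pipeline. Image paths and the intrinsic
    # matrix K are placeholder assumptions.
    import cv2
    import numpy as np

    # Placeholder inputs: two overlapping photos and a guessed pinhole intrinsic matrix.
    img1 = cv2.imread("photo_0001.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("photo_0002.jpg", cv2.IMREAD_GRAYSCALE)
    K = np.array([[2000.0,    0.0, 1500.0],
                  [   0.0, 2000.0, 1000.0],
                  [   0.0,    0.0,    1.0]])

    # 1. Detect and describe local features in each image.
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2. Match descriptors between the two views (brute force, Hamming distance).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. Robustly estimate the relative camera geometry (RANSAC on the essential matrix).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # 4. Triangulate a sparse 3D point cloud from the two calibrated views.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T

    print(f"{len(matches)} matches -> {pts3d.shape[0]} triangulated 3D points")

In a full photogrammetry package these two-view steps are extended to many images via incremental registration and bundle adjustment, followed by dense multi-view stereo depth maps and surface meshing; the publications referenced above cover those stages in detail.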