Private support

Hi, I want to share a scan of a person to discuss some issues in RealityCapture, but I don’t want to do that in a public forum. Is there a way to share private data with RealityCapture support?

Hi Serob,
You can contact us here: Contact - Capturing Reality
However, since April, customer support covers only bug reports, crashes, and licensing requests for the three most recent versions of RealityCapture. Epic Direct Support can be purchased for an additional $1,500 per Unreal Subscription seat annually, with a minimum of ten Unreal Subscription seats. Epic Direct Support is not available for the standalone RealityCapture subscription seat.

Can’t we consider it a bug when your software is unable to process a model at all, while competitor software from Agisoft handles it with ease?

Sure, we can.

Then how can I share the data with you to investigate that unexpected behavior?

Hi Serob,
I thought that you would use the link I sent you in a previous post.
To save some time, I have sent you an invitation to upload the data to Box.
Can you also add your project there?
Also, what were your settings and repro steps?

Thanks for sharing the link.

I’ve added both the images and a version of the project (look inside the RC folder).

The issue starts at the beginning, with alignment. What I do is align twice; after that, most of the images are aligned, but it seems the features are not enough for further processing.
So overall workflow:
align
align
marker detect
mesh with normal detail
texture

settings:
alignment:

  • Max features per mpx: 20 000
  • Max features per image: 50 000
  • Image overlap: tried with High and Medium
  • Image downscale: tried with 1 and 2

mesh model:

  • Image downscale: tried with 1 and 2
  • Maximal depth-map pixel: 0
  • Maximal vertex count per part: 1250000

Texture:

  • Gutter: 2
  • Min/Max resolution: 4096
  • Image downscale: 1
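In case it helps with reproduction: the workflow above can also be driven through RealityCapture's command-line interface. Below is a minimal sketch of that, as a Python script; the paths are placeholders, and the per-step settings (feature limits, downscale, etc.) are left at the application's saved defaults, since the internal `-set` names for those values would be guesses on my part.

```python
import subprocess  # used to launch RealityCapture when actually run

# Placeholders for the executable and image folder; the switches are
# RealityCapture CLI commands, chained in the same order as the workflow
# described above (align twice, detect markers, mesh, texture).
RC = r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"

cmd = [
    RC,
    "-addFolder", r"D:\scan\images",
    "-align",
    "-align",                 # second alignment pass
    "-detectMarkers",
    "-calculateNormalModel",  # mesh with Normal detail
    "-calculateTexture",
    "-save", r"D:\scan\project.rcproj",
    "-quit",
]

# Uncomment on a machine with RealityCapture installed:
# subprocess.run(cmd, check=True)
print(" ".join(cmd[1:]))
```

Running a scripted pipeline like this makes the repro steps unambiguous, since the exact command order travels with the bug report.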

Hi Serob,
thank you for your data.

I checked it and, to be honest, it is quite a poor dataset. This is not the way to capture full-body scans. Ideally, you need to capture all images at the same moment; in your images there is visible movement. Also, the “cage” is not ideal, as it is quite featureless, and only a small number of features were used for alignment (fewer than 1,000 out of a possible 10,000). I also checked in other software, and the results were better, but still not good. Features were also not found on the scanned object itself, which could be improved by using a texture with more features. This texture is basically featureless.

RealityCapture needs proper captures to make a good model, and this dataset shows quite a lot of improper photogrammetry practice.
What kind of camera did you use?

Thanks a lot for the investigation.

The cameras used are 12 Mpx Pi Camera v3 wide-lens modules.
Would you mind if I upload another image set without the “cage” so you can check what the issue is there?

Were multiple cameras used to capture the dataset? How were the images captured?

Sure, feel free to upload. Can you also describe the issue?

We used more images (~280) to capture scans inside the same cage; the results were not much better. Images are captured using a rotary scanner: a pole with 7 cameras on it rotates and takes photos over 5 seconds.
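For context on why a rotating rig is hard on alignment: the cameras themselves move during the 5-second sweep, so every feature is smeared across pixels within each exposure. A rough back-of-the-envelope sketch follows; the rig radius, subject distance, exposure time, and field of view are my assumptions for illustration, not values from this thread.

```python
import math

def blur_px(rotation_period_s, exposure_s, radius_m, subject_dist_m,
            hfov_deg, image_width_px):
    """Rough motion blur, in pixels, for a camera on a rotating pole."""
    # Angular speed, assuming one full revolution per rotation period.
    omega = 2 * math.pi / rotation_period_s
    # Linear speed of the camera at the pole's radius.
    v = omega * radius_m
    # How far the camera travels during one exposure (metres).
    shift_m = v * exposure_s
    # Scene width covered at the subject distance (metres).
    scene_width_m = 2 * subject_dist_m * math.tan(math.radians(hfov_deg) / 2)
    # Project the camera shift onto the image width to get pixels of smear.
    return shift_m / scene_width_m * image_width_px

# Assumed numbers: 5 s per revolution (from the thread), 1/100 s exposure,
# 0.8 m camera radius, subject ~1 m away, ~102 deg horizontal FOV and
# 4608 px image width (Pi Camera Module 3 Wide).
print(f"{blur_px(5.0, 0.01, 0.8, 1.0, 102.0, 4608):.1f} px")
```

Even under these assumptions the smear is on the order of tens of pixels per frame unless the exposure is very short, which would explain why so few features survive for alignment; freezing the rig per shot, or flashing and shortening the exposure, shrinks the blur proportionally.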

I’ve uploaded another scan without a cage.
For this specific cage-free case that I shared with you, fewer images were used (187). The issue is that only ~80% of the images are aligned, and the aligned ones do not have enough tie points. As a result, we get a model that has nothing to do with the real subject.

I suppose the same problems apply to the second capture as well:
poor lighting conditions, the object not covering most of the image, movement during capture, low-resolution cameras that are sometimes out of focus, images not taken at the same time…
I am sorry, but this workflow won’t work in RealityCapture.

It’s sad to hear that :slightly_frowning_face:

Appreciate your time and effort for trying to help.