VFX process and STMAP export

Hi, I am trying to develop a workflow for 3D camera tracking and model generation from RealityScan into other 3D software. Currently I am doing the following:

  1. Import the image sequence.
  2. Select all input images and change the ‘-1’ value for Prior Calibration -> Calibration Group and Prior Lens Distortion -> Lens Group to 1 (I believe this tells RealityScan that they are all from the same camera, so that it solves a single focal length and set of distortion parameters for all of them… at least this is the result of doing so).
  3. Align and texture.
  4. Select all of the images and export an FBX with the following settings (including the undistorted image sequence):
  5. Import the FBX into the 3D software.

At this stage I have the 3D model and a camera created for every frame, and if I set the cameras to the same resolution as the image sequence and load an undistorted image into each camera’s background display, the undistorted images all line up correctly with the 3D model. I have a Python script that then creates a single animated camera from them.
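The merging logic is roughly the following (a minimal stand-alone sketch, not my actual script; in Blender the transforms would come from `bpy` objects and be written with `keyframe_insert`, and the data layout and function name here are purely illustrative):

```python
# Sketch: collapse many per-frame cameras into keyframes for one animated camera.
# Each exported RealityScan camera is represented here as a
# (frame, location, rotation_euler) tuple; in Blender these values would be
# read from the imported FBX objects and keyed onto a single camera object.

def merge_cameras(per_frame_cameras):
    """Return an ordered list of keyframes for a single animated camera."""
    keyframes = []
    for frame, location, rotation in sorted(per_frame_cameras):
        keyframes.append({
            "frame": frame,
            "location": location,
            "rotation_euler": rotation,
        })
    return keyframes

# Example: three exported cameras become three keys on one camera.
cams = [(2, (0.1, 0.0, 1.0), (1.57, 0.0, 0.0)),
        (1, (0.0, 0.0, 1.0), (1.57, 0.0, 0.0)),
        (3, (0.2, 0.0, 1.0), (1.57, 0.0, 0.0))]
keys = merge_cameras(cams)
```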

Is all of the above the correct workflow for doing this? I’m concerned I am losing some lens correction cleverness by doing it this way, e.g. the lens principal point. But I also don’t think I can let RealityScan change the camera parameters (focal length and lens distortion) every frame, given that the 3D rendered content later has to be redistorted to match the original footage?

Now the bit I can’t get working at all is the STMAP export, which should enable distortion of the 3D rendered footage to match the original footage in compositing. If I select the STMAP export in the export options, a lot of the time RealityScan crashes after ten or so seconds, before I even click OK. And when it does export, the STMAP images do not seem to contain a distortion map:

What am I doing wrong here with the STMAP export? The following are my settings for that export:

Hello @badbunny_uk
Regarding one animated camera you can check this post: extract camera trajectory from model and import (position and orientation) into blender or unity3d

The crash on ST map export is quite strange. How many images do you have in your project? Does it also happen after an application reset?
Also, where are you checking your ST map?

Hi,

It’s a pretty small project, this one: 184 images at 720x480 resolution (just a lightweight practice image sequence I use to play around with).

Thanks for the link to that other thread with the Blender add-on that can merge the cameras.

I’m reviewing the STMAP EXRs in Blender’s compositor. I’ve attached one of the STMAP EXR files.

UPHILL0000.JPG.stmap.exr (1.6 MB)

With regards to the rest of my question, am I using the correct workflow and settings for doing this? I couldn’t find any documentation describing this process, so I have been figuring it out by a lot of trial and error. I assume from all of these options that these functions were added specifically for what I’m trying to do; I just don’t know the correct workflow in RealityScan, or where I can read about how these functions are intended to be used.

These are the only documentation sections I’ve found so far that touch on some of these topics:

Undistorted Images - RealityScan Help

ST Maps - RealityScan Help

Camera Priors - RealityScan Help

Also, what do you mean by “application’s reset”?

Thanks, Pete.

Hi,

I’ve checked your ST Map and it looks OK to me.

The workflow seems to be correct for such work.

Application’s reset: Reset RealityCapture | Knowledge base


Hi, the STMap should be a vector field of only red and green values showing the (re)distortion of the image, with all blue values zero. See the attached image showing a typical STMap next to what RealityScan has exported for this:
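For reference, the expected layout is easy to generate. Here is a minimal sketch (plain NumPy; the half-pixel-centred, bottom-up coordinate convention is an assumption, as conventions vary between tools) of an identity STMap, showing that red and green are just normalized coordinate ramps and blue stays zero:

```python
import numpy as np

# An STMap stores, for each output pixel, the normalized (s, t) coordinate
# to sample from in the source image: red = s, green = t, blue unused.
# With no distortion it is just a pair of coordinate ramps.

def identity_stmap(width, height):
    # s runs 0..1 left to right (red); t runs 0..1 across rows (green).
    # Pixel-centre sampling assumed, hence the +0.5 offset.
    s = (np.arange(width) + 0.5) / width
    t = (np.arange(height) + 0.5) / height
    r, g = np.meshgrid(s, t)          # shapes (height, width)
    b = np.zeros_like(r)              # blue stays zero in a plain STMap
    return np.dstack([r, g, b]).astype(np.float32)

stmap = identity_stmap(720, 480)
```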

Yes, that’s how I see your STMap:

Hi, this seems to be maybe a Blender issue. For some reason it doesn’t recognise that this EXR file has multiple layers and that the U and V data is stored on separate layers. I think it is just showing the V layer.

Other STMAP files I’ve used have the U and V stored as RG data in the same layer, so I wasn’t expecting this. I’m not sure how common one way or the other is.

DaVinci Resolve Fusion and Cinesync Play both recognise this as having U and V layers. In Fusion, when I recombine those channels it then looks like your image, i.e. as I expect it to look.
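The recombination itself is trivial once the layers are readable. A minimal sketch of the packing step (stand-in arrays instead of an actual EXR read, which would use the OpenEXR Python bindings; the channel names U and V are as described above):

```python
import numpy as np

# Sketch: pack two separate single-channel planes (named U and V in the
# exported EXR, per this thread) back into a plain RGB STMap, which is the
# layout most compositors expect. Blue is left at zero.

def pack_uv_to_rgb(u, v):
    """Pack separate U and V planes into an RGB STMap."""
    u = np.asarray(u, dtype=np.float32)
    v = np.asarray(v, dtype=np.float32)
    rgb = np.zeros(u.shape + (3,), dtype=np.float32)
    rgb[..., 0] = u  # red   = s coordinate
    rgb[..., 1] = v  # green = t coordinate
    return rgb

# Stand-in data; a real conversion would read these planes from the EXR file.
u = np.linspace(0.0, 1.0, 8).reshape(2, 4)
v = np.linspace(1.0, 0.0, 8).reshape(2, 4)
stmap = pack_uv_to_rgb(u, v)
```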

I haven’t run into an issue with Blender handling multi-layer EXRs before. I’m not sure if there is anything you can investigate on your end to see whether something about how this file is written might cause Blender not to recognise it? Or could an option be added to the export to write a single layer?

To cover the Blender side, I will submit a bug report to the Blender dev site and update here when they respond: https://projects.blender.org/blender/blender/issues/144957

[EDIT] Reading a bit more about multiple layers and channels in EXRs, I’m not 100% sure this is actually a multi-layer EXR.

Thanks for helping investigate this, Pete.

Hello Pete,
I am not sure if this will be changed, as there are no other complaints about it.

Hi, it is definitely a Blender issue, judging from the discussion on that Blender bug report. It looks like they have some logic to auto-map channel names to RGBA, with no ability to manually select channels or set the channel assignments to the RGBA outputs. Currently that logic doesn’t handle the U and V names. Hopefully it will get sorted there. Thanks, Pete.