Texture Re-Projection in Mari/Nuke

Hi Everyone,

I am working on texture cleanup in Mari, using the camera data from RC to re-project the high-res, full bit-depth images. I'm having some issues matching the camera projections to the images. Please see below:

In Mari or Nuke, to get the projections to match I have to scale the images down by a seemingly arbitrary value. Once an image is undistorted it should just match. I have tried undistorting the images myself using checkerboards (rough sketch of that further below), which fixes the distortion but not the scale issue. I have also tried the undistorted image export function in RC, but there the scale seems even more arbitrary. Here are some examples of the changes:

Mari-RC.jpg

Undistorted export:

Screen Shot 2016-03-23 at 12.02.29.png

Values to match the export in Mari:

Screen Shot 2016-03-23 at 12.01.40.png
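
For reference, the checkerboard undistort I tried was roughly along these lines (just a rough OpenCV sketch; the pattern size and file paths are placeholders for my setup):

    import cv2
    import numpy as np
    import glob

    pattern = (9, 6)  # inner corners of the checkerboard (placeholder)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("checkerboards/*.jpg"):   # placeholder path
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # alpha=0 crops to only valid pixels (similar to an "inner boundary" fit),
    # alpha=1 keeps every source pixel ("outer boundary"). Either way the
    # returned new_K has a different focal length, which is where the extra
    # "scale" seems to come from.
    new_K, roi = cv2.getOptimalNewCameraMatrix(
        K, dist, gray.shape[::-1], alpha=0)

    img = cv2.imread("shot_0001.jpg")               # placeholder
    undistorted = cv2.undistort(img, K, dist, None, new_K)
    cv2.imwrite("shot_0001_undist.jpg", undistorted)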

Essentially, my question is: how do I compute this value rather than estimating it by eye each time, which is time-consuming and easy to get wrong? A projection, once undistorted, should just match, shouldn't it? Or am I missing something?

The built-in undistorted-image exporter has strange export settings. I have tried combinations of most of them. The main options are:

  1. FIT (I think this squashes the undistorted image to a boundary, but I'm not sure what the difference between them is, and why would in-between ever be useful?)
    Outer boundary
    Inner boundary
    In-between

  2. Resolution (preserve and fit are the same thing, right?)
    Preserve
    Custom
    Fit

  3. Undistort principal point (shifts the canvas, attempting to correct the offset from the prior calibration).
    True
    False

Any help would be greatly appreciated. As I said, I can get a perfect match-up, but there is a lot of guessing involved in getting there.

Thanks in Advance,

Joe Steel.

Hi Joseph Steel,

For the undistortion parameters:

  1. It's best if you use the INNER region fit -> see the attachment, which describes the region fits

  2. Resolution -> PRESERVE

  3. I would recommend using FALSE

Is there a more in-depth NUKE + MARI camera description so that we could understand the SCALE and other parameters used for proper camera export?

Hi Milos,

Thank you for the quick reply; that image makes perfect sense. I just ran your settings, but I still have to scale the images down to get them to match. Please see the details below:

Values in RC:

Screen Shot 2016-03-29 at 13.47.01.jpg

Camera data in Mari with principal point translation (this works):

Screen Shot 2016-03-29 at 13.47.09.jpg

FOV is calculated from the horizontal aperture and the focal length ( 2 * atan(hap / (2 * focal)) * 180 / pi ):

Screen Shot 2016-03-29 at 13.47.18.jpg
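
In code, that's just the following (Python, with the aperture and focal length in mm; the numbers are only placeholders):

    import math

    def horizontal_fov_deg(hap_mm, focal_mm):
        # Horizontal FOV from the horizontal aperture (film back) and focal length.
        return 2.0 * math.atan(hap_mm / (2.0 * focal_mm)) * 180.0 / math.pi

    # e.g. a 36 mm film back with a 35 mm focal length (placeholder values):
    print(horizontal_fov_deg(36.0, 35.0))   # ~54.4 degrees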

Mari doesn’t show the camera details, so here is the same camera in Nuke:

Screen Shot 2016-03-29 at 13.47.38.png
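
For completeness, this is roughly how I'm building that camera in Nuke via Python (a sketch only; the knob values are placeholders, and the principal point offset goes onto win_translate):

    import nuke

    # Sketch of the projection camera set-up (placeholder values).
    cam = nuke.nodes.Camera2(name='RC_cam_0001')
    cam['focal'].setValue(35.0)        # focal length in mm
    cam['haperture'].setValue(36.0)    # horizontal aperture / film back in mm
    cam['vaperture'].setValue(24.0)    # vertical aperture in mm
    cam['translate'].setValue([1.0, 2.0, 3.0])
    cam['rotate'].setValue([10.0, 20.0, 30.0])
    # Principal point offset expressed as a window translate (placeholders):
    cam['win_translate'].setValue(0.01, 0)   # U offset
    cam['win_translate'].setValue(0.02, 1)   # V offset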

To get the images to lock, I have to scale the image down by 0.03.

The other main issue is bit depth and colour space. When exporting undistorted images, they are reduced to 8-bit sRGB. Is there a setting to retain the colour space and bit depth?

Cheers,

Joe.

Hi Joseph,

We had the same issue with the Maya export, which we observed after adding an image plane. We've added an image plane to the Maya export too. It will come with the next update (or see the attachment).

In general, the “scaling issue” comes from the film gate size and the different ways applications interpret it. For Maya we export with a fix so the default parameters don't conflict.

For Mari/Nuke I would recommend using the undistorted images, since lens distortion is always present and always causes some image deformation. To avoid wasting pixels we support several modes of image undistortion. Since cropping may happen, the focal length changes, but the values exported with the XML/Maya/other outputs are always compensated. If Mari/Nuke supports some ASCII scene format, then it should be very easy to write a custom export for it. Check the “calibration.xml” file in the application folder; it defines how registered cameras are exported, and you can easily make your own export from it.
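
As a rough illustration of that compensation (this is not our actual export code, just the idea, assuming the focal length is expressed against a fixed 36 mm film back): when the undistorted image is cropped, the focal length in pixels stays the same, so the focal expressed in mm has to grow by the crop ratio.

    import math

    FILM_BACK_MM = 36.0   # assumed horizontal aperture the focal is expressed against

    def compensated_focal_mm(focal_mm, orig_width_px, cropped_width_px):
        # The focal length in pixels is unchanged by the crop...
        focal_px = focal_mm * orig_width_px / FILM_BACK_MM
        # ...so re-expressing it against the same film back for the narrower image
        # gives a longer focal length (and a narrower FOV).
        return focal_px * FILM_BACK_MM / cropped_width_px

    def horizontal_fov_deg(focal_mm):
        return 2.0 * math.atan(FILM_BACK_MM / (2.0 * focal_mm)) * 180.0 / math.pi

    # Placeholder numbers: a 5760 px wide source cropped to 5580 px by the undistort.
    f = compensated_focal_mm(35.0, 5760, 5580)
    print(f, horizontal_fov_deg(f))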

Here is a topic on that - Camera export and file formats

The solution for this problem is posted here: Camera Projection matching