Undistortion calculation


As I understand it, the image undistortion calculation using the .XMP camera format is described here: https://support.capturingreality.com/hc/en-us/articles/360017783459-RealityCapture-XMP-Camera-Math

But I don't get how to convert homogeneous coordinates to pixel coordinates so that there are no black areas in the image (for the inner boundary).

I have tried to use the formula:

scale = max( image width, image height ),
px = scale*m[0] + image width / 2,
py = scale*m[1] + image height / 2

But as a result, the image has a large encircling black area.

What image transformations are used when converting to pixel coordinates?

Hi UnknownTiredMan, how did you compute the m parameters? Did you use the previous equations?

These equations are for computing 2D image coordinates, so why are you getting black areas?

Hi. Thanks for your answer!

I compute the m parameters as follows:

  1. Convert to homogeneous coordinates using the pixel transformations and the K-matrix (the inverse of the formula I describe below).

  2. Compensate for Brown's distortion (OpenCV implementation).

  3. Convert back to pixel coordinates with the formula:

    scale = max(width, height)
    mx = (px * focal / 35 + ppU) * scale + width / 2
    my = (py * focal / 35 + ppV) * scale + height / 2

    focal,ppU,ppV - received from .XMP

Thus, I get a matrix of points from which I compute an undistorted image. But the result does not match the image from RC, and with positive radial distortion I get an encircling black area in the image.


Hi, is it possible to share your whole script?

Is it possible that there wasn't any distortion in the image, as it was created digitally?

  1. Yeah. Below is the Python script. I used OpenCV to remap the image by the computed coordinates.

    import numpy as np
    import cv2

    def toHomo(mx, my, width, height, focal, ppU, ppV):
        scale = max(width, height)
        px = ((mx - width / 2) / scale - ppU) * 35.0 / focal
        py = ((my - height / 2) / scale - ppV) * 35.0 / focal
        return px, py

    def fromHomo(px, py, width, height, focal, ppU, ppV):
        scale = max(width, height)
        mx = (px * focal / 35 + ppU) * scale + width / 2
        my = (py * focal / 35 + ppV) * scale + height / 2
        return mx, my

    def brownDistortion(x0, y0, k1, k2, k3, p1, p2):
        x2 = x0 * x0
        y2 = y0 * y0
        xy = x0 * y0
        r2 = x2 + y2
        k = 1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
        tx = p1 * (r2 + x2) + 2 * p2 * xy
        ty = p2 * (r2 + y2) + 2 * p1 * xy
        return x0 * k + tx, y0 * k + ty

    original = cv2.imread("image.jpg")

    # data from .XMP
    height, width = original.shape[:2]
    focal = 60
    scale = max(width, height)

    # order: k1 k2 p2 p1 k3
    distKoeff = np.array([0.1, 0.5, 0, 0, 1])
    pp = [0.001, 0.002]
    mapX = np.zeros((height, width), np.float32)
    mapY = np.zeros((height, width), np.float32)

    for row in range(height):
        for col in range(width):
            mx, my = toHomo(col + 0.5, row + 0.5, width, height, focal, pp[0], pp[1])
            mx, my = brownDistortion(mx, my, distKoeff[0], distKoeff[1], distKoeff[4], distKoeff[2], distKoeff[3])
            px, py = fromHomo(mx, my, width, height, focal, pp[0], pp[1])
            mapX[row][col] = px - 0.5
            mapY[row][col] = py - 0.5

    result = cv2.remap(original, mapX, mapY, cv2.INTER_CUBIC)

  2. Yeah, this image was created digitally, but I added real camera distortion with a script.

Hi, I noticed that you are using this:

focal / 35

According to our equations, you should use 36. 

Also, there are some differences in the Brown distortion. You have:

tx = p1 * (r2 + x2) + 2 * p2 * xy

but according to our equations it should be:

tx = p1 * (r2 + 2 * x2) + 2 * p2 * xy
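Putting both corrections together (36 instead of 35, and the r2 + 2*x2 term in the tangential part), the affected functions would look roughly like this. This is only a sketch of the corrections as I read them; the function and variable names are mine:

```python
def brown_distortion(x0, y0, k1, k2, k3, p1, p2):
    # Brown model: radial factor 1 + k1*r^2 + k2*r^4 + k3*r^6,
    # tangential terms using r^2 + 2*x^2 (not r^2 + x^2).
    x2, y2, xy = x0 * x0, y0 * y0, x0 * y0
    r2 = x2 + y2
    k = 1 + r2 * (k1 + r2 * (k2 + r2 * k3))
    tx = p1 * (r2 + 2 * x2) + 2 * p2 * xy
    ty = p2 * (r2 + 2 * y2) + 2 * p1 * xy
    return x0 * k + tx, y0 * k + ty

def from_homo(px, py, width, height, focal, ppU, ppV):
    # Back to pixel coordinates with the 36 mm convention instead of 35.
    scale = max(width, height)
    mx = (px * focal / 36.0 + ppU) * scale + width / 2
    my = (py * focal / 36.0 + ppV) * scale + height / 2
    return mx, my
```

With all coefficients zero, brown_distortion is the identity, which is a quick sanity check for the sign conventions.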

Yeah, thanks. I have corrected the script, but it didn't solve my problem. The black encircling area still remains.


Hi, sorry for my late answer.

Here you can find C# code with comments on how to undistort the image: https://we.tl/t-wiF8mOyZlc

For the undistortion you need to compute the edge points and set which part of the image is of interest to you (more about this can be found in the application help under Undistorted images).
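One way the inner boundary could be estimated from edge points (an assumption on my part, not necessarily what the linked C# code does): push every border pixel of the distorted image through the undistortion mapping and take the tightest axis-aligned rectangle that lies inside all four undistorted edges. A sketch, where `undistort` stands for whatever pixel-to-pixel undistortion chain is in use:

```python
def inner_boundary(width, height, undistort):
    """Estimate the inner rectangle (no black pixels) after undistortion.

    `undistort(x, y)` maps distorted pixel coords to undistorted ones.
    """
    top, bottom, left, right = [], [], [], []
    for col in range(width):
        top.append(undistort(col + 0.5, 0.5))
        bottom.append(undistort(col + 0.5, height - 0.5))
    for row in range(height):
        left.append(undistort(0.5, row + 0.5))
        right.append(undistort(width - 0.5, row + 0.5))
    # Inner rectangle: to the right of every left-edge point, left of every
    # right-edge point, below every top-edge point, above every bottom-edge point.
    x0 = max(p[0] for p in left)
    x1 = min(p[0] for p in right)
    y0 = max(p[1] for p in top)
    y1 = min(p[1] for p in bottom)
    return x0, y0, x1, y1
```

With an identity mapping this returns the full image, and with a real undistortion it shrinks to the largest region guaranteed to contain only source pixels.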

Hi, thanks for sharing the code.

I looked through it. It is pretty similar to my script, except for the division and the perspective distortion. But the last part is interesting to me. How do you compute the edge points? As I understood from the help, there are three types of boundaries (inner, outer, and in between), and they can be computed from the edge points with the undistortion function.

Do you use the OpenCV formula for this?

I am not sure how it is computed internally, but there is an option in RealityCapture to export the undistorted image. Then you can find the boundary values from this image and use them in the script.
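If you go that route, the boundary values can be read off the exported image automatically, e.g. by taking the bounding box of the non-black pixels. A minimal NumPy sketch, assuming pure black really only occurs in the padding (which may not hold for every photo):

```python
import numpy as np

def nonblack_bbox(image):
    """Return (row0, col0, row1, col1) of the bounding box of non-black pixels,
    or None if the image is completely black."""
    # Collapse channels so a pixel counts as non-black when any channel is non-zero.
    mask = image.reshape(image.shape[0], image.shape[1], -1).any(axis=2)
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:
        return None
    return rows[0], cols[0], rows[-1], cols[-1]
```

The returned values can then be plugged into the script as the crop rectangle for the inner boundary.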