TSDF volume integration

Hi, everybody!

I am working on a 3D reconstruction project using Open3D and RealityCapture.

I want to do TSDF volume integration in Open3D using the data exported from RealityCapture.

I tried, but didn’t get the desired result. Can you tell me what I did wrong?

I proceeded like this:

For information on TSDF volume integration: http://www.open3d.org/docs/0.13.0/tutorial/pipelines/rgbd_integration.html

I exported the depth images and the camera parameters in XMP format from RealityCapture and used those.

  1. I make the trajectory matrix.

    I use the Rotation (r) and Position (p) values from the XMP (a code sketch of these steps follows right after this list):

    [r[0], r[1], r[2], p[0]]

    [r[3], r[4], r[5], p[1]]

    [r[6], r[7], r[8], p[2]]

    [0, 0, 0, 1]

  2. Make the volume

    I use ScalableTSDFVolume. The parameters are as follows.

  • voxel_length = 4.0 / 512.0
  • sdf_trunc = 0.04
  • convert_rgb_to_intensity = True (this parameter is passed when creating the RGBDImage, not to the volume itself)
  3. Make the CameraIntrinsic

    PinholeCameraIntrinsic has width, height, fx, fy, cx, cy parameters.

  • width, height : image size
  • fx, fy : focal length in pixels, computed from the xcr:FocalLength35mm attribute of the XMP file -> fx = fy = max(img_height, img_width) * (FocalLength35mm / 36.0)
  • cx, cy : principal point in pixels, computed from the PrincipalPointU and PrincipalPointV attributes of the XMP file -> cx = img_width * 0.5 + img_width * PrincipalPointU, cy = img_height * 0.5 + img_height * PrincipalPointV
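
Putting those steps together, my setup looks roughly like this (a simplified sketch: the values and file names are placeholders for the real XMP data, and the color type and the possible pose inversion are guesses on my part):

    import numpy as np
    import open3d as o3d

    # Placeholder values standing in for what I read from the RealityCapture XMP:
    r = [1.0, 0.0, 0.0,  0.0, 1.0, 0.0,  0.0, 0.0, 1.0]   # xcr:Rotation (row-major 3x3)
    p = [0.0, 0.0, 0.0]                                    # xcr:Position
    focal_35mm = 24.0                                      # xcr:FocalLength35mm
    ppu, ppv = 0.0, 0.0                                    # xcr:PrincipalPointU / PrincipalPointV
    img_width, img_height = 1920, 1080

    # 1. trajectory matrix [R | p] with [0, 0, 0, 1] appended
    pose = np.array([
        [r[0], r[1], r[2], p[0]],
        [r[3], r[4], r[5], p[1]],
        [r[6], r[7], r[8], p[2]],
        [0.0,  0.0,  0.0,  1.0],
    ])

    # 2. TSDF volume (Gray32 because the RGB is converted to intensity)
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=4.0 / 512.0,
        sdf_trunc=0.04,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.Gray32,
    )

    # 3. camera intrinsic from the 35 mm equivalent focal length
    fx = fy = max(img_height, img_width) * (focal_35mm / 36.0)
    cx = img_width * 0.5 + img_width * ppu
    cy = img_height * 0.5 + img_height * ppv
    intrinsic = o3d.camera.PinholeCameraIntrinsic(img_width, img_height, fx, fy, cx, cy)

    # 4. integrate one RGB-D frame (file names are placeholders)
    color = o3d.io.read_image("color_000.png")
    depth = o3d.io.read_image("depth_000.png")
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=4.0, convert_rgb_to_intensity=True)
    # The tutorial passes np.linalg.inv(camera_pose) here, because its trajectory
    # stores camera-to-world poses; I am not sure whether the XMP matrix needs
    # the same inversion.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

    mesh = volume.extract_triangle_mesh()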

 

Please let me know if I have misunderstood or missed something.

Thanks!

Hi zoozoo9610,

about your trajectory matrix, did you create it as x = [R t] * X?

What results did you achieve?

Thank you for answering my question.

 

In Open3D, the trajectory matrix is a 4x4 extrinsic matrix, so I used only [R t].

 

If I should use x = [R t] * X, how exactly do I apply it?

R is a 3x3 rotation matrix, t is the camera translation vector, and X is a homogeneous 3D point, right?

I’m not sure how to use ‘x = [R t] * X’.
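
Is my understanding something like the following? (a tiny check with dummy numbers, not my actual data)

    import numpy as np

    # Dummy numbers, just to check my understanding of x = [R t] * X
    R = np.eye(3)                        # 3x3 rotation matrix
    t = np.array([0.1, 0.0, 2.0])        # translation vector
    X = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous 3D point

    Rt = np.hstack([R, t.reshape(3, 1)])   # 3x4 matrix [R | t]
    x = Rt @ X                             # the point expressed in the camera frame

    # The 4x4 matrix I pass to Open3D is the same thing with [0, 0, 0, 1] appended:
    extrinsic = np.vstack([Rt, [0.0, 0.0, 0.0, 1.0]])
    print(x)              # [1.1, 2.0, 5.0]
    print(extrinsic @ X)  # first three components are identical to x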

 

I would really appreciate it if you could let me know.

 

The following image is the result I want (exported from RealityCapture to PLY).

 

However, the result of my attempt is as follows (I did it with just one image).

Thanks! 

 

Hi zoozoo9610,

I took a closer look at the link you sent, and it seems you did it correctly. What does the camera pose look like after this command: read_trajectory?

Do you have an example that is working for you?
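
For a quick look at one pose (assuming it comes back as a 4x4 numpy array), something like this check could help:

    import numpy as np

    def check_pose(pose):
        # Quick sanity check for a 4x4 camera pose matrix.
        R = pose[:3, :3]
        print("pose:\n", pose)
        print("R orthonormal:", np.allclose(R @ R.T, np.eye(3), atol=1e-5))
        print("det(R) = 1:", np.isclose(np.linalg.det(R), 1.0, atol=1e-5))
        print("last row is [0, 0, 0, 1]:", np.allclose(pose[3], [0.0, 0.0, 0.0, 1.0]))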

Thank you for answering my question.

 

I found my problem: the voxel length was too large. As I decreased the voxel length, the shape gradually started to form.
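
For reference, the volume now looks roughly like this (the exact numbers depend on the scene scale, so these values are only illustrative):

    import open3d as o3d

    # A smaller voxel_length gives a finer TSDF grid, so the shape actually appears.
    # (Illustrative values; sdf_trunc is kept at a few voxel lengths.)
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.5 / 512.0,   # much smaller than the 4.0 / 512.0 I started with
        sdf_trunc=0.004,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.Gray32,
    )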

Thanks to Ondrej Trhan, I found a clue to solve my problem.

 

You’re welcome, good luck with your work.