Using RGB-D in reconstruction

I have a large number of RGB images and depth images with a 1-to-1 pixel mapping. What workflow will give me the best possible output?

I intend to use the RGB and depth information to compute a colored point cloud (.PTX) and import only that point cloud, since the RGB images are technically redundant once the .PTX files carry color. However, I don’t know whether the program treats .PTX files and RGB images differently. That is, will I get a visually similar result from the .PTX alone versus using both the .PTX and the RGB images? And if they are better used together, is there a way to tell the program that a specific .PTX file and RGB image should be paired?
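For context, the "compute a colored point cloud" step I have in mind is a standard pinhole back-projection: each depth pixel is lifted to a 3D point and the 1-to-1 mapped RGB value is attached. A minimal NumPy sketch is below; the intrinsics `fx, fy, cx, cy` are assumed known from camera calibration, depth is assumed to be metric, and zero-depth pixels are treated as invalid (none of these specifics come from my actual data, they just illustrate the idea). Writing the result out as .PTX would be a separate serialization step.

```python
import numpy as np

def rgbd_to_colored_points(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth image through a pinhole camera model and
    attach the 1-to-1 mapped RGB values, yielding an (N, 6) XYZRGB array.
    Assumes depth is metric and that invalid pixels are stored as 0."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float64)
    valid = z > 0                       # drop pixels with no depth reading
    x = (u - cx) * z / fx               # pinhole back-projection
    y = (v - cy) * z / fy
    xyz = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid].astype(np.float64)
    return np.hstack([xyz, colors])     # columns: X Y Z R G B
```

The output here is in the camera's own frame; merging clouds from multiple viewpoints would additionally need each camera's pose, which the .PTX format's per-scan transform is designed to carry.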