Question on merging components of laser scan and images

I am trying to create a 3D model of a sea wall using laser scans and photography. The scans were acquired with a terrestrial Topcon scanner, registered only to a local grid system, and exported with colour as .e57 files. My workflow, using one scan and a series of nearby images, is as follows:

  1. Import one scan, tick registered/georeferenced = TRUE, and confirm that COLOUR is present. Then import the images.

  2. Run alignment. Two components result. This is the problem part: I want to merge the two components, but this does not happen automatically, so I add four control points to two of the images. The XYZ coordinates of these control points are taken from the point cloud, so they are on the same coordinate system (see the sketch after this list for one way to read them out of the .e57).

  3. Re-run alignment. I find it impossible to merge the two components into one good model that is a hybrid of the laser scan and the images. HELP!
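
In case it is useful, below is a minimal sketch of one way to pull the control-point XYZ values out of the .e57 outside the software, assuming the pye57 Python package; the file name, the picked point indices and the exact field names are placeholders and may differ depending on how the scanner exported the data.

    # Sketch: read one scan from an .e57 and print the coordinates of a few
    # picked points so they can be entered as control-point coordinates.
    # Assumes pye57 (pip install pye57); field names vary by exporter.
    import numpy as np
    import pye57

    e57 = pye57.E57("seawall_scan.e57")            # placeholder file name
    scan = e57.read_scan(0, colors=True, ignore_missing_fields=True)

    xyz = np.column_stack((scan["cartesianX"],
                           scan["cartesianY"],
                           scan["cartesianZ"]))

    # Indices of points picked visually in a viewer (placeholder values)
    picked = {"cp1": 120345, "cp2": 987001, "cp3": 1543210, "cp4": 2210987}
    for name, idx in picked.items():
        x, y, z = xyz[idx]
        print(f"{name},{x:.3f},{y:.3f},{z:.3f}")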

Possible issues: I have read that the software uses colour to match the laser scan to the images. In the case of this site there is very little colour (it is a grey limestone wall), so I wonder whether that is adversely affecting the matching?
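
One quick way to check how flat the scan's texture really is would be to look at the spread of the colour values, for example along these lines (again only a sketch using pye57; the colorRed/Green/Blue field names are the usual .e57 ones but may differ by exporter):

    # Sketch: quantify colour variation in the scan, since matching the
    # scan to photos relies on texture. Low standard deviations suggest
    # a flat, grey surface with little for the matcher to work with.
    import numpy as np
    import pye57

    scan = pye57.E57("seawall_scan.e57").read_scan(
        0, colors=True, ignore_missing_fields=True)
    rgb = np.column_stack((scan["colorRed"],
                           scan["colorGreen"],
                           scan["colorBlue"])).astype(float)
    print("per-channel std dev:", rgb.std(axis=0))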

The images from the camera are geotagged with lat/long. Could this be confusing things? I did change Absolute Pose to "unknown", which removed the EXIF position from the list.
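
Another option would be to check for, and strip, the GPS tags from copies of the photos before importing them, so only the scan's local coordinates are in play. A rough sketch, assuming JPEG images and the piexif package; the folder path is a placeholder:

    # Sketch: report and remove GPS EXIF tags from the photos.
    # Work on copies of the images, as this rewrites the EXIF in place.
    import glob
    import piexif

    for path in glob.glob("photos/*.jpg"):        # placeholder path
        exif = piexif.load(path)
        if exif.get("GPS"):
            print(f"{path}: GPS tags found, removing")
            exif["GPS"] = {}
            piexif.insert(piexif.dump(exif), path)
        else:
            print(f"{path}: no GPS tags")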

I have also tried setting camera priors = false. I have read all the forum posts and watched all the videos, which make this look very easy when it works. I think my case is a very common one, so it would be good to see a list of all the alignment settings needed to get it to work. Thanks in advance. Nick

Update: I managed to get everything into one component (scan and images) by importing the reference points as GCPs and picking four of them in three images.
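
For anyone following the same route, the reference points can go in as a simple delimited text file, one point per line with a name and the local-grid X, Y, Z; the separator and column order are chosen at import time, and the names and values below are placeholders only:

    # name, X (m), Y (m), Z (m)  -- local grid, placeholder values
    cp1, 1002.315, 5230.871, 4.562
    cp2, 1004.120, 5231.004, 2.118
    cp3, 1007.663, 5229.552, 4.981
    cp4, 1009.247, 5230.119, 2.405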

Also, you should add the control points to more than one image from each component. I try to do at least three images from each component.

How did you measure the reference points? Did you use RealityCapture or third-party software?

 

Regards