Merging laser scans and photogrammetry analysis: quality issue and possible solution

Hi everyone.
There is an ongoing problem with merging laser scans and photogrammetry that has to be addressed. You can verify this simply by looking at the number of posts on the forum about it.
But let’s start from scratch. We have been using this approach on a number of different projects over roughly 6 years, mostly with a Faro S70 scanner + photogrammetry. There were always some issues, so I’ve already gone through all the posts on the forum on this topic, took part in a large number of conversations on the previous forum before RC was acquired by Epic, and talked to support a number of times. From my experience, it looks like this:

Whatever you do, it’s impossible to get a decent alignment between LS and PG. On projects that don’t demand high precision, the results can be okayish. But whenever there is a need for higher precision and detail, problems start to show.
If you’ve tried it yourself, or have checked the forum (trust me, most of the topics there are about the same problem), you already know what I’m talking about: it usually shows up as doubled or ripped geometry, where the final mesh takes parts from photogrammetry and parts from the laser scan, making surfaces look like some kind of exaggerated stucco wall. If you post something like that on the forum, there are a few standard answers you get: place control points manually, try the setting that prioritizes LS over PG, or “your data is bad”. And most of the time you don’t get the result you want and go try again, over and over. Sometimes it does help, especially if the laser scan fully covers the PG data. But if you use PG to reach areas that were not covered by LS, or your PG resolution is higher than the LS resolution, well, not this time pal..

But sometimes people try to investigate the problem deeper, their posts get upvoted, and the developers kindly tell you that your wish will be placed in the long list of wishes to be implemented someday. Time passes, but nothing happens. And don’t get me wrong, I’m not saying the devs are not doing their job; I’m saying the issue might be bigger than we think. From a few interesting posts like this, tons of tests, external resources and articles, I think I have the bigger picture now and believe I see the root of the problem.

If support or the devs can correct me here, I would be more than happy: RC (RS now) basically treats LS as a bunch of photos, so contrary to what a lot of people might think (me included), it doesn’t look like it performs any cloud-to-cloud registration like ICP. Instead, it takes a point cloud, bakes a cubemap from it, and then does to these cubemaps exactly what it does to all the other photos. And I believe here lies the biggest problem: the baked cubemaps are terrible quality, and you can’t change them in any way. For some reason they come out at very low resolution and are highly pixelated (you can easily check this by comparing the images from Faro Scene, in my case, to these generated cubemaps). And since the images from a regular full-frame camera are way better, no matter how many control points you place, the solver just can’t align things nicely when the difference in pixel quality is so big, not even talking about the pretty bad color data from most scanners.
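To make the resolution argument concrete, here is a rough sketch in Python of what baking a panoramic image from a colored point cloud could look like. This is purely my own illustration of the general idea, not RC’s actual pipeline: I use a single equirectangular projection instead of six cube faces for brevity, and `bake_equirect` and all its parameters are invented names. The point is that once the output image is small, huge numbers of scan points collapse into each pixel and the detail is gone.

```python
# Sketch of baking a panorama from a colored point cloud.
# Illustration only -- NOT RealityCapture's actual code.
import numpy as np

def bake_equirect(points, colors, width=512, height=256):
    """Project scanner-centered XYZ points into an equirectangular image.

    points: (N, 3) float array, positions relative to the scanner origin.
    colors: (N, 3) float array in [0, 1].
    The smaller width/height are, the more points collapse into a single
    pixel -- which is exactly the resolution loss described above.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    theta = np.arctan2(y, x)                 # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(z / r, -1, 1))   # elevation in [-pi/2, pi/2]

    # Map angles to pixel coordinates.
    u = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - phi) / np.pi * (height - 1)).astype(int)

    # Accumulate and average: every point falling into a pixel is blended.
    image = np.zeros((height, width, 3))
    count = np.zeros((height, width, 1))
    np.add.at(image, (v, u), colors)
    np.add.at(count, (v, u), 1)
    return image / np.maximum(count, 1)

# A 512x256 panorama has ~131k pixels; a Faro-density scan has tens of
# millions of points, so most of the captured detail simply cannot survive.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 3))
cols = rng.random((100_000, 3))
img = bake_equirect(pts, cols)
print(img.shape)  # (256, 512, 3)
```

A full-frame camera photo, by contrast, keeps all of its native resolution, which is why the feature matcher has such an uneven playing field.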
So I guess my question is this: can this even be fixed? Should we even wait for it to be implemented? Because for now I can’t think of any workaround…

P.S. Even if the resolution issue can be fixed, I suspect that since there is no cloud-to-cloud registration, we will still be limited by these generated images.
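For reference, the kind of cloud-to-cloud registration I mean above is something like point-to-point ICP: match points between the two clouds directly and solve for the rigid transform, with no rendered images in between. A minimal toy sketch (my own implementation assuming numpy, not anything RC/RS ships; `icp_step` is an invented name):

```python
# Minimal point-to-point ICP sketch: align one cloud to another directly,
# geometry-to-geometry, with no intermediate images. Toy code, not RC's.
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each src point to its nearest dst point,
    then solve the best-fit rigid transform with SVD (Kabsch)."""
    # Brute-force nearest neighbours (fine for small demo clouds).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]

    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

rng = np.random.default_rng(1)
dst = rng.random((200, 3))
# Source = destination slightly rotated and shifted; ICP should undo it.
angle = 0.05
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
src = dst @ Rz.T + np.array([0.02, -0.01, 0.03])
for _ in range(20):
    src = icp_step(src, dst)
print(np.abs(src - dst).max())  # converges toward ~0
```

The relevant point is that this kind of registration never touches color or image resolution at all, so bad scanner colors and low-res baked images would stop being the bottleneck.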