Spherical mesh with real DTM for modeling entire Moon

Hello to all,
I am writing because I would like some advice on how to model the Moon in Unreal using real DTM data. I have already processed the data I downloaded online so that it is compatible with the engine: I converted the heightmap into a 16-bit grayscale PNG and followed the recommended sizes to optimize performance.
The problem is that I would like to import into Unreal a static mesh that corresponds to a certain portion of the sphere for the required longitudes and latitudes; in practice I would like to get a curved mesh, not a flat one (because the flat one is associated with a cylindrical stereographic projection). So, for every point of my mesh I have computed the coordinates (in both polar and Cartesian frames), taking the ground elevations into account, and stored these values in matrices. My questions are: how do you import such coordinates into UE to get a static mesh? Which file extension should I use? Do I need to import the Cartesian coordinates (x, y, z) directly, or should I first create a spherical mesh and then apply the DTM to it as a grayscale PNG? I know it is possible, because the video at the following link shows exactly what I would like (from minute 5:30 to 6:30). Below is the link:

Also, after completing this step, my idea is to model the entire lunar surface by applying lower-resolution global DTMs to a spherical object, and then inserting the imported high-resolution spherical mesh patch only where I actually need it (i.e., where the lunar lander should land). My question here is: is it possible to join these two types of mesh? If so, how should I go about it?

Thank you all for your help. I’m sure your advice will be very helpful for me :))

The engine isn’t capable.

Think of it this way.

The terrain can be projected onto a sphere via math that takes the observer's viewpoint into account.

As you reach the edges of the area, the distortion becomes more and more pronounced.

To do what you are asking, that math would have to run in real time (which, for one, means collision could not be computed).

Now, a regular undistorted DTM will usually give false height readings - mountains and valleys, as well as distances between places, come out wrong.
That is normal for the most part, especially for Earth. Unless you project to a specific ESRI tile, the heights will almost never match accurately.
(There is also the point that reported real-world heights are at the mercy of the altimeters' margin of error, which is sometimes large depending on when the reading was taken.)

To get a realistic-ish map as the end result, you have to import all the flat files into Blender and then process the planes into the spherical shape you want.
This is simple enough math with Python; however, the distortion itself - as well as the calculations - will amplify the already flawed margin of error on peaks and valleys.
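As a sketch of that bending step (pure NumPy, so it runs outside Blender too; inside Blender the same formula can be applied per-vertex to each `vertex.co`): the heightmap is treated as elevations in metres over a lat/lon extent and each sample is displaced radially. The function name, argument layout, and the mean lunar radius of 1,737,400 m are my assumptions for illustration:

```python
import numpy as np

MOON_R_M = 1_737_400.0  # assumed mean lunar radius, metres

def bend_heightmap(elev_m, lat_min, lat_max, lon_min, lon_max, radius=MOON_R_M):
    """Bend a flat heightmap (2D array of elevations in metres) covering the
    given lat/lon extent (degrees) onto a spherical patch.

    Returns an (rows, cols, 3) array of Cartesian vertex positions in metres."""
    rows, cols = elev_m.shape
    lat = np.radians(np.linspace(lat_max, lat_min, rows))[:, None]  # row 0 = northern edge
    lon = np.radians(np.linspace(lon_min, lon_max, cols))[None, :]
    r = radius + elev_m  # displace each sample radially by its elevation
    x = r * np.cos(lat) * np.cos(lon)
    y = r * np.cos(lat) * np.sin(lon)
    z = r * np.sin(lat)
    return np.stack((x, y, z), axis=-1)
```

Note this treats the body as a perfect sphere; any real geodetic datum will differ, which feeds the margin-of-error point above.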

Additionally, most in-engine tools - like measuring height by looking at the Z value - do not reflect real heights.

The Earth (or Moon) being round, looking at a vertical value is prone to an ever-increasing margin of error:
the higher you go, the larger the error.

Height calculations should be done using the same spherical approximation, evaluated at the lat/lon of the point being measured.
That increases the margin of error again: more math = more accumulated error.
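A minimal sketch of the difference between a naive Z reading and a spherically corrected height (the radius value and the sample point are assumptions for illustration):

```python
import numpy as np

MOON_R_M = 1_737_400.0  # assumed mean lunar radius, metres

def radial_height(p, radius=MOON_R_M):
    """Height above the reference sphere: distance from the centre minus the radius."""
    return np.linalg.norm(p) - radius

# A point 2,000 m above the surface at 45 deg latitude, 0 deg longitude:
lat = np.radians(45.0)
r = MOON_R_M + 2_000.0
p = np.array([r * np.cos(lat), 0.0, r * np.sin(lat)])

print(radial_height(p))   # ~2000 m: the correct height
print(p[2] - MOON_R_M)    # ~-507,000 m: what a naive Z reading gives you
```

The Z reading only matches the radial height exactly at the point where the sphere touches the reference plane; away from it the two diverge fast.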

Now, it is possible either to approximate all the tools' distances as well, or to bend the mesh and manually adjust it.
But it is more important to adjust your expectations: aim for an end result within a margin of error of about ±30 m.

Even if you run the math on a supercomputer, you will still get some amount of error.
The engine itself tends to use float (single precision), so the stored values are far less accurate than if you were to use double via C++.
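To see why float hurts at planetary scale: UE world units are centimetres, so a vertex out at the lunar surface sits around 1.7374e8 cm from the origin (radius value assumed). NumPy's `spacing` shows the smallest representable step at that magnitude:

```python
import numpy as np

# UE world units are centimetres; the lunar radius is about 1.7374e8 cm.
R_CM = 173_740_000.0

# Smallest step representable at that magnitude:
print(np.spacing(np.float32(R_CM)))  # 16.0  -> float32 cannot resolve positions below 16 cm
print(np.spacing(np.float64(R_CM)))  # ~3e-8 -> double still resolves below a nanometre
```

(Newer UE5 releases mitigate this with Large World Coordinates, which store world positions as doubles, but many tools and older code paths still run in float.)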

In Blender, via Python, the math runs in double precision, so you can process things with a bit less error -
at the cost of processing time. Shifting billions of vertices around is going to take time no matter what you invent…
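On the original question of which file format to use: one workable route (a sketch, not the only option - FBX or glTF exported from Blender work too) is to write the Cartesian vertex grid straight out as a Wavefront OBJ and import that as a static mesh. The function names, grid layout, and the mean lunar radius of 1,737,400 m below are assumptions for illustration:

```python
import numpy as np

MOON_R_M = 1_737_400.0  # assumed mean lunar radius, metres

def lla_to_cartesian(lat_deg, lon_deg, elev_m, radius=MOON_R_M):
    """Spherical lat/lon (degrees) + elevation (metres) -> Cartesian x, y, z."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    r = radius + elev_m
    return (r * np.cos(lat) * np.cos(lon),
            r * np.cos(lat) * np.sin(lon),
            r * np.sin(lat))

def write_obj_patch(path, lat_grid, lon_grid, elev_grid):
    """Write a curved terrain patch as a Wavefront OBJ (grid quads split into triangles)."""
    x, y, z = lla_to_cartesian(lat_grid, lon_grid, elev_grid)
    rows, cols = lat_grid.shape
    with open(path, "w") as f:
        for i in range(rows):
            for j in range(cols):
                f.write(f"v {x[i, j]:.3f} {y[i, j]:.3f} {z[i, j]:.3f}\n")
        for i in range(rows - 1):
            for j in range(cols - 1):
                a = i * cols + j + 1  # OBJ vertex indices are 1-based
                b, c = a + 1, a + cols
                f.write(f"f {a} {b} {c + 1}\nf {a} {c + 1} {c}\n")
```

OBJ carries no unit information and UE works in centimetres, so either scale the coordinates by 100 before export or set a uniform import scale of 100.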

PS:
There is no DTM for the far side of the Moon, as far as NASA lists things. What's there is less than accurate because, the Moon being tidally locked, we obviously can't shoot lasers at the retroreflectors to get readings from that side…

So if all you have is the commonly sourced 30 m DTM data, you almost always have half a sphere at best.