I am trying to replicate Moon terrain from NASA datasets, and I'm not sure if it is possible.

Hi, I am trying to reconstruct NASA datasets into a 3D world to test different Moon activities like landing, rover simulations, etc. I have files in TIFF format at resolutions of 1 km, 100 m, 10 m, 5 m, and 2 m per pixel (GBs to TBs of data). Ideally, I would like some areas at 1 km resolution, some at 100 m, and some sites down to 5 m. I could try to push the mesh points programmatically, but it feels too big. Does anyone have any experience attempting something like this? The Moon's surface is about 37.94 million square kilometers, and two 1 km triangles are not really good enough when you walk for hours across a grey triangle.

So if I’m understanding you correctly, you have point cloud data for the surface of the Moon and you’re trying to construct a game-ready 3D model from it? Hmm, that’s tough.

I would imagine that Nanite will make this possible. You would create the 3D mesh from the point data in a DCC, and I would also recommend making custom optimized collision at this step, since Nanite still uses traditional collision meshes. When you import the mesh and convert it to Nanite, it will handle all of the local optimizations you need.

Nanite’s cluster system should automatically cull occluded triangles and prioritize nearby areas for higher detail. I’m normally not someone to recommend Nanite, but this seems like a great justification for it.

You could still break the surface up into mesh sections and add each to its own sublevel, then procedurally load them whenever the player is in range. This is very similar to Unreal’s new World Partition system, though WP relies on Landscape tiles, so you would have to create your “tiles” manually. That being said, there may be a way to incorporate your system into World Partition so it handles all of the overhead for you.
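To make the proximity-loading idea concrete, here is a minimal sketch (plain Python, not Unreal API; the function name and grid layout are made up for illustration) of deciding which tiles to have loaded given the player's position:

```python
import math

def tiles_in_range(player_x, player_y, tile_size, radius):
    """Return (col, row) keys of square tiles on a regular grid whose
    centers lie within `radius` of the player's (x, y) position."""
    keys = []
    # Only scan the grid cells that could possibly intersect the radius.
    min_c = math.floor((player_x - radius) / tile_size)
    max_c = math.floor((player_x + radius) / tile_size)
    min_r = math.floor((player_y - radius) / tile_size)
    max_r = math.floor((player_y + radius) / tile_size)
    for r in range(min_r, max_r + 1):
        for c in range(min_c, max_c + 1):
            cx = (c + 0.5) * tile_size  # tile center
            cy = (r + 0.5) * tile_size
            if math.hypot(cx - player_x, cy - player_y) <= radius:
                keys.append((c, r))
    return keys

# Each frame (or on a timer): load keys that are new, unload keys that
# dropped out of the set returned here.
```

In Unreal the load/unload step would map to streaming the matching sublevels in and out; the set-difference between the previous and current key lists tells you exactly which sublevels to request.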

As for texturing: if you plan to use real images rather than generated terrain, you will almost certainly need virtual textures, and even then you will likely have to break the mesh up across multiple textures (virtual textures have a tile limit, so you’ll probably run into it).

At the end of the day, this should be possible, but you will have to keep everything heavily optimized. This is essentially just a large open world, which people have been making for years and which Unreal’s new suite of tools is specifically designed to support.

Some of the Unreal documentation on their large open world tooling:

Thanks so much for your input; I am a bit lost, TBH. I could also split the TIFF into tiles and assemble it all…? Maybe I can manage the larger datasets by only using the tiles I am close to…
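Splitting a huge TIFF into tiles mostly comes down to computing the read windows; the pixel math is simple. A sketch (pure Python; the actual windowed reads would be done with a raster library such as GDAL or rasterio, which both support reading sub-windows without loading the whole file):

```python
def tile_windows(width, height, tile):
    """Yield (x_off, y_off, w, h) read-windows that cover a
    width x height raster with tile x tile blocks.
    Edge tiles may be smaller than `tile`."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(tile, width - x), min(tile, height - y))

# e.g. for a 2500 x 1000 px raster cut into 1024 px tiles:
for win in tile_windows(2500, 1000, 1024):
    print(win)  # feed each window to a windowed read, write one tile file
```

Each window maps directly to one mesh section / sublevel, so the tile grid here can be the same grid you stream by proximity.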

So, for clarification, what data do you have and what result are you trying to achieve? You mentioned TIFF files, so do you just have images? Are you trying to construct a 3D landscape from these images or do you have other data to use for that?

NASA provides geo-referenced TIFF images (PGDA - A New View of the Lunar South Pole from LOLA); these images are cylindrical projections with X, Y, Z and R, G, B values for each pixel.
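For what it’s worth, if a file really is a simple cylindrical (equirectangular) projection covering the whole globe, each pixel maps to a point on the lunar sphere roughly like this (a sketch only; it assumes full-globe coverage and the mean lunar radius of 1,737.4 km, so check the file’s actual projection metadata before using it — polar products in particular often use other projections):

```python
import math

MOON_RADIUS_M = 1_737_400  # mean lunar radius in meters (assumption)

def pixel_to_xyz(col, row, width, height, radius=MOON_RADIUS_M):
    """Map a pixel in an equirectangular image covering the whole Moon
    to a 3D point on a sphere of the given radius."""
    # Pixel centers: column 0 is longitude -180 deg, row 0 is latitude +90 deg.
    lon = (col + 0.5) / width * 2 * math.pi - math.pi
    lat = math.pi / 2 - (row + 0.5) / height * math.pi
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)
    return (x, y, z)
```

Adding the per-pixel elevation to `radius` before projecting would give you the displaced vertex positions for a mesh tile.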