Constant VRAM Overuse While Importing Tiled Landscapes

Basically what the title says: every time I try to import a tiled heightmap (which I mainly do to get more detail on large landscapes than a single 4k-8k PNG can provide), the editor crashes with a “not enough VRAM” error.

I’m not running the most recent specs, but my card (a 2080 Ti) has 11GB of VRAM, plus an additional 8GB shared. It idles at 0.5-0.7GB, and with a blank level open it sits at 3-5GB, so I don’t think I’m missing any obvious optimization outside the engine. I usually use Gaea Professional for landscape builds, but I don’t think that matters. The error has occurred on maps as small as a 3x3 grid of 4k tiles, which is about as small as a tiled import reasonably gets, since an 8129x8129 heightmap can be brought into Unreal without any tiling at all. But on landscapes more than 10km in any direction, an 8k resolution (roughly 1.2m per heightmap sample) is very visibly low-fidelity when observed up close.

Does the tiled import really just blow through a constant 12-20GB of video memory? I thought the whole point of tiled heightmaps was to compartmentalize the build and make it smoother to work with than a single massive heightmap image; shouldn’t that make the import smoother too? Is it even possible to do this without going out and buying a monster GPU? Is there anything I can do within Unreal to lighten the load and optimize the process? Any tips would be greatly appreciated.

Try switching your Default RHI to DX11 when importing.
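For reference, that setting lives under Project Settings > Platforms > Windows > Default RHI (the editor needs a restart after changing it). If you prefer to set it by hand, this is the setting that toggle writes in Config/DefaultEngine.ini on current UE5 builds, as far as I know (worth double-checking the exact value name on your engine version):

```ini
; Config/DefaultEngine.ini
[/Script/WindowsTargetPlatform.WindowsTargetSettings]
DefaultGraphicsRHI=DefaultGraphicsRHI_DX11
```

Launching the editor with the -dx11 command-line argument should also work if you only want DX11 for the one import session.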

I just did a test in UE 5.4.2 with a tiled landscape with a total resolution of 36,705x16,322: under DX12 I got the ‘Out of Video Memory’ crash, but under DX11 the import completed and saved successfully. And my video card is an 8GB Radeon 5700 XT.

Failing that, you can split the tiled landscape up, import it as multiple landscape actors, and align them in-editor. That’s what I did with this heightmap when I was working with it under UE 5.1. As long as you don’t need to sculpt or paint across the borders between actors, it works fine.
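If you go the multi-actor route, the only fiddly part is the placement offsets. Here’s a minimal sketch of the math (the numbers are hypothetical, adjust to your own tile layout), assuming Unreal-recommended tile sizes and that adjacent tiles share their border row of pixels, as tiled exports aimed at Unreal usually do; if yours don’t, drop the -1:

```python
# Rough offset math for placing separately imported landscape actors so
# their edges line up. Pure arithmetic -- not an Unreal API call.
# Assumes each tile shares its border row of pixels with its neighbour;
# if not, use tile_resolution instead of (tile_resolution - 1).

tile_resolution = 4033      # vertices per tile edge (Unreal-recommended size near 4k)
landscape_scale = 100.0     # default landscape X/Y scale: 100 UU (1 m) per quad
tiles_x, tiles_y = 3, 3     # 3x3 grid, as in the original post

step = (tile_resolution - 1) * landscape_scale  # world-space distance between tile origins

for j in range(tiles_y):
    for i in range(tiles_x):
        x, y = i * step, j * step
        print(f"tile ({i},{j}): set actor location to X={x:.0f}, Y={y:.0f}")
```

With the default scale of 100, each quad is 1m, so a 4033 tile ends up roughly 4km on a side; as long as each actor’s location lands on those multiples, the seams line up.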


Close your Levels minimap.

Just tried the first fix: total success! I have to admit, I wasn’t expecting an RHI issue, especially since DX12’s main selling point is more efficient GPU usage. This is also the first time I’ve seen that solution suggested on this subject. Thanks a lot!
