Strategies for reducing landscape complexity ...

I’ve created a 4098 x 4098 world for my project and use the Brushify auto landscape material. Although I’m only using 4 of the landscape layers, I’m still finding that the landscape is taking up more resources than I would like or expect (see image below). Is this normal for large landscapes? I’ve tried tinkering with the landscape LOD settings and reducing the textures that Brushify uses to 2048, but it seems to make little difference in terms of complexity. Anything else I should be thinking of?

Hey @Astro-Chris!

Well, from the look of things, it’s definitely not the foliage/grass, but the landscape mesh itself. Have you thought of maybe splitting it into chunks that can be culled when they’re not seen by the camera? Thing is, you’re looking at a 4km x 4km area that all has to remain loaded at all times, even if only 1cm of it is visible. That’s a lot of space to keep rendered!

Take a look at this section of the documentation and see if any of it connects any dots for how to move forward!


I find that landscapes overall tend to be heavy. Everyone wants to stay in the green, but given you have to pipe out to grass-maps and/or physics, the cost can add up. Whatever textures you use to determine where something is, like the height-map in your heightblend, will count towards those layers. I honestly think this alone drives a lot of the simple/retro styling you see with solid colors for landscapes, like in Zelda, etc.

If you are using heightmaps, then successive height-blends will compound the overhead as well and each of those will be accounted for by the grass/physics nodes too; it adds up.
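
To put a rough shape on that, here’s a minimal plain-C++ sketch (not engine code; LayerSample and HeightBlend are made-up stand-ins) of the per-pixel work a chain of height blends implies: every extra painted layer is another set of fetches plus more blend math, and the same height maps get touched again by the grass/physics side.

```cpp
// Minimal sketch (plain C++, not engine code) of the per-pixel work behind a
// chain of height blends. LayerSample and HeightBlend are illustrative only.
#include <algorithm>
#include <cstdio>
#include <vector>

struct LayerSample {
    float color;   // stand-in for the albedo fetched from this layer's textures
    float height;  // stand-in for this layer's height-map sample
    float weight;  // painted landscape layer weight, 0..1
};

// One height blend: bias the painted weight by the height samples so the
// "taller" layer wins near the transition, then lerp towards the new layer.
float HeightBlend(float baseColor, float baseHeight, const LayerSample& layer,
                  float contrast = 4.0f) {
    float h = std::clamp((layer.height - baseHeight) * contrast + layer.weight,
                         0.0f, 1.0f);
    return baseColor + (layer.color - baseColor) * h;
}

int main() {
    std::vector<LayerSample> layers = {
        {0.35f, 0.2f, 0.6f},   // dirt
        {0.55f, 0.8f, 0.3f},   // rock
        {0.25f, 0.4f, 0.7f},   // grass
    };

    // Each extra painted layer chains another blend: more fetches (color +
    // height per layer) and more ALU, and the same height maps get sampled
    // again wherever the grass/physical-material outputs need them.
    float result = 0.3f;   // base layer color
    float height = 0.1f;   // base layer height
    for (const LayerSample& layer : layers) {
        result = HeightBlend(result, height, layer);
        height = std::max(height, layer.height);
    }
    std::printf("blended = %.3f\n", result);
}
```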

The good news is that even modest graphics cards can sample textures at a ridiculous rate, so unless your counts are very high, you can take some solace. The downside is that since landscapes tend to occupy the majority of the screen, a rising count will eventually come to dominate your overhead.

Can you post the instruction-count in the material editor?

Changing the texture-size will affect RAM, not the instruction-count/complexity; you are still doing the same maths, just with a ‘diluted’ texture.

I’ve found that using RVTs with the heightmesh offers better performance with much finer detail on the mesh. This does mean you might need to compromise on the shaders going into the RVT, as you cannot readily use dynamic calculations, do distance math, etc. The upside is you CAN do those things on the other side of the RVT, so it’s not that the math is different; you are just shifting where/when it’s done. Think of the landscape math into the RVT as the pre-process and the RVT as the post-process. What you can’t do in the pre, you can, as they say, “just pick it up in post”… :smiley:
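
If it helps to picture that split, here’s a tiny plain-C++ sketch of the idea (BakeIntoRVT and ShadePixel are invented names, nothing to do with the actual engine API): the expensive blending is cached once per RVT tile, and only the cheap/dynamic math runs per frame on the side that samples it.

```cpp
// Conceptual sketch (plain C++) of the "pre-process into the RVT,
// post-process on the heightmesh" split. Everything here (CachedTexel,
// BakeIntoRVT, ShadePixel) is made up for illustration.
#include <cmath>
#include <cstdio>
#include <vector>

struct CachedTexel { float baseColor; };  // what the landscape material wrote into the RVT

// "Pre-process": the expensive layer blending runs only when an RVT tile is
// (re)rendered, not every frame for every pixel on screen.
std::vector<CachedTexel> BakeIntoRVT(int texels) {
    std::vector<CachedTexel> rvt(texels);
    for (int i = 0; i < texels; ++i) {
        float expensiveBlend = 0.3f + 0.1f * std::sin(i * 0.37f);  // stand-in for many samples/blends
        rvt[i] = {expensiveBlend};
    }
    return rvt;
}

// "Post-process": the mesh that samples the RVT can still do cheap dynamic
// math per frame (distance fades, wetness, etc.) on top of the cached result.
float ShadePixel(const CachedTexel& cached, float timeSeconds) {
    float dynamicWetness = 0.5f + 0.5f * std::sin(timeSeconds);  // animated, so it can't live in the RVT
    return cached.baseColor * (1.0f - 0.3f * dynamicWetness);
}

int main() {
    std::vector<CachedTexel> rvt = BakeIntoRVT(4);
    std::printf("%.3f\n", ShadePixel(rvt[2], 1.5f));
}
```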

FYI, it seems expected that your texture samples will roughly double when piping out to the RVT. This seems to be the norm, but because you are cramming all that work into the RVT you claw back the cost on the other side, and in my experience it performs better than a normal landscape. It looks alarming given the nature of the topic, but seems expected for the tech.


Number one cost of landscapes is overtexturing.

You can probably get something to look decent with a maximum resolution of 1K, and using any texture above that is usually detrimental.

That, and, overlayering.

Once you have one layer, you implicitly also have a second layer wherever the painted layer is not, for instance, which you could use to reduce some of the load.
Adding paint of any type will add a performance cost - even for grass outputs.
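
In other words, something like this (trivial plain-C++ sketch, made-up numbers): one painted weight map can drive two layers, with the second just being the complement of the first.

```cpp
// Trivial sketch: derive the second layer's weight from the one layer you
// actually painted, instead of painting (and paying for) another weight map.
#include <cstdio>

int main() {
    float grassWeight = 0.65f;               // the layer you painted
    float dirtWeight = 1.0f - grassWeight;   // "wherever the painted layer is not"
    std::printf("grass=%.2f dirt=%.2f\n", grassWeight, dirtWeight);
}
```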

Shader complexity wise, the more alpha blend or height blend you have in a single component, the higher the cost.

There’s a reason the Epic team gutted the landscape material for the Kite demo and re-purposed the R/G/B of the base color - and that’s the fact that the more textures you put in, the higher the cost to use it…


Thank you all for your responses, which I will certainly try out in the coming days and report back.

Many thanks for your detailed response. Is the following what you are looking for in terms of instruction-count?

I meant from the upper-left of the material instance or from the material-editor. How many instructions, texture-samples, etc.

The color-graph only shows instruction count on the pixel/vertex shaders; it doesn’t account for whether you have 4 samples or 40 or whatever.

Ah I see, sorry … see below:

Instructions


Those don’t matter as the material preview isn’t using all the layers…

I think you can set the preview percentages, but I’m not sure you can set all of them together / more than 2.
Which is the situation that produces the red spots in the preview mode.


Following useful advice given here (thanks all), I’ve managed to reduce landscape complexity by implementing runtime virtual texturing (RVT) in UE 5.0.3. My large Brushify landscape (4033 x 4033) is now in the green and RVT has added around 10 FPS to my project with little appreciable difference in visual quality.

Check your RVT settings. I find that something like 12/0 is better than 10/2. More, smaller tiles means more granularity when rendering, so you (more closely) only render the parts you actually see, vs getting a small corner of a bigger tile and incurring overhead for something you mostly don’t see. With smaller tile sizes you tend to end up seeing more of each tile you render, so you are only rendering things you will actually see.
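
Back-of-the-envelope of the granularity argument (plain C++, made-up numbers; this is not how the actual RVT page table schedules work): any tile the visible region touches has to be rendered in full, so bigger tiles mean more texels rendered that you never see.

```cpp
// Made-up numbers: count how many RVT texels get rendered for a visible
// region at different tile sizes. Any touched tile is rendered in full.
#include <cstdio>

long long RenderedTexels(int visibleW, int visibleH, int tileSize) {
    // Round the visible rectangle up to whole tiles.
    long long tilesX = (visibleW + tileSize - 1) / tileSize;
    long long tilesY = (visibleH + tileSize - 1) / tileSize;
    return tilesX * tilesY * tileSize * tileSize;
}

int main() {
    const int visibleW = 1300, visibleH = 900;  // hypothetical visible region, in RVT texels
    const int tileSizes[] = {1024, 512, 256, 128};
    for (int tileSize : tileSizes) {
        long long rendered = RenderedTexels(visibleW, visibleH, tileSize);
        long long wasted = rendered - (long long)visibleW * visibleH;
        std::printf("tile %4d: rendered %8lld texels, wasted %8lld\n",
                    tileSize, rendered, wasted);
    }
}
```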

Also, oddly, I found there were sweet spots for performance. For some reason using a 1MP setting was running worse than using a 4MP RVT… Play around here.


I’m not really sure there is a true benefit to VTs for landscape.
Your shader’s grass is always tiling, so the textures are always all fully visible.

I doubt the back end of VT makes any different use of the landscape’s painted maps. Simply because it can’t even work properly for tiled worlds, so why would it work any differently for a single tile of landscape than for multiple?
(Why do I mention this? Because a single 16k-pixel-wide image with VT would work faster than 4 tiles of 8k.)

Chances are you enabled VT and something else benefited from it - on top of you obviously touching up the material, since the complexity is lower.

Be warned, VTs probably still do not work with World Partition. Nor do they capture grass (but they do capture foliage, for whatever backwards reason) into the landscape VT system.


Hello,

I’ve been experimenting with using UDIMs with Virtual Textures and doing most of the math in UVs. I hope to have most of that UV work run through the vertex shader instead of the pixel shader, as it might become quite heavy, and run before the texture sampler. I’m also combining it with some tight channel packing in order to reduce texture sampler lookups. So far I think 2x3 channels should be sufficient for a landscape.
However, the biggest struggle I’m having is that I need Frac (or Mod(x,1)) to select the UDIM - and later on to blend between them. And Frac is not linear math, so it will create weirdness if you do it via custom UVs.
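
To show the weirdness concretely, a tiny plain-C++ sketch (values made up): interpolating Frac(uv) across an edge is not the same as taking Frac of the interpolated uv once that edge crosses a UDIM boundary.

```cpp
// Why frac() breaks when it runs in the vertex stage: interpolating frac(uv)
// across a triangle edge is not the same as taking frac() of the interpolated
// uv, because frac is not linear. Plain C++ sketch.
#include <cmath>
#include <cstdio>

float Frac(float x) { return x - std::floor(x); }
float Lerp(float a, float b, float t) { return a + (b - a) * t; }

int main() {
    // Two vertices of an edge crossing a UDIM boundary (u goes 0.9 -> 1.1).
    float u0 = 0.9f, u1 = 1.1f;
    float t = 0.5f;  // a pixel halfway along the edge

    float perPixel = Frac(Lerp(u0, u1, t));         // frac in the pixel shader: 0.0
    float perVertex = Lerp(Frac(u0), Frac(u1), t);  // frac per vertex, then interpolated: 0.5

    std::printf("frac after interpolation: %.2f\n", perPixel);
    std::printf("interpolated frac:        %.2f\n", perVertex);  // wrong -> stretched/smeared UVs
}
```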

Luckily there is a node called ‘VertexInterpolator’, but when I apply it (before the Frac), I get some weird stuff on landscapes specifically. Only landscape segments that have an invisibility mask actually work.
I tried doing the UDIM selection via VertexInterpolator as well, but that splits the UDIMs apart again - the offset and tiling do not work on landscapes specifically.

I’m probably doing multiple things wrong here, but I would like to understand why I’m doing them wrong.
Anyone know what this could be related to? Material works fine on basic shapes.

This is the material;

This is the result on a landscape with both UV calculations in the vertex shader;
(Top right is a random basic shape; below it are the stretched lines of the UDIM on a landscape section that has visibility holes in it; in the back is the rest of the landscape: a bland brown surface with bad lighting and no roughness.)

(My landscape is 8k, stretched up to 150% scale, not using world partition yet)

Try using Custom UVs vs the Vertex Shader. Put your maths here:

Pic

ref: Customized UVs in Unreal Engine Materials | Unreal Engine 5.3 Documentation

Then you can use TexCoord(x) to sample that UV channel.
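
Roughly, the split looks like this (plain-C++ sketch, illustrative names only): the affine part of the UV math (tiling, offsets) interpolates correctly, so it can live in the Customized UV, while the Frac/UDIM selection stays per pixel.

```cpp
// Sketch of the Customized-UVs split: anything affine in the UVs (tiling,
// offsets) interpolates correctly, so it can run per vertex; frac/UDIM
// selection must stay per pixel. Plain C++, illustrative only.
#include <cmath>
#include <cstdio>

struct UV { float u, v; };

// "Per vertex" (what you'd wire into a Customized UV pin): linear math only,
// so the hardware interpolator reproduces it exactly at every pixel.
UV VertexStage(UV uv, float tiling, UV offset) {
    return { uv.u * tiling + offset.u, uv.v * tiling + offset.v };
}

// "Per pixel" (read back via TexCoord(x)): the non-linear part.
UV PixelStage(UV interpolated) {
    return { interpolated.u - std::floor(interpolated.u),
             interpolated.v - std::floor(interpolated.v) };
}

int main() {
    UV raw{0.3f, 0.7f};
    UV scaled = VertexStage(raw, 8.0f, {0.25f, 0.0f});  // cheap, vertex-rate work
    UV final = PixelStage(scaled);                       // frac stays at pixel rate
    std::printf("u=%.2f v=%.2f\n", final.u, final.v);
}
```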

As for tiling, it doesn’t have to (using RVTs here):

Pic

FYI, you can use customized UVs on any material, on the landscape side feeding into the RVT and on the heightmesh side when you read out:

Pic


Using a temporally, stochastically blended material to help eliminate tiling. The ‘user interpolators’ are the carried maths I run off the customized UVs, so as you can see, you have some capacity here.

If you want to use a minimum of texture-samples and have a way to eliminate tiling, start here: Randomized tiling function, eliminate repeating patterns in your textures!. All I did was extend it to 3-axis.
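
The core of that approach is just hashing a low-frequency grid cell to a pseudo-random UV offset; a minimal plain-C++ sketch is below (illustrative only - the linked function also deals with hiding the cell seams, which this sketch skips).

```cpp
// Minimal sketch of hash-based randomized tiling: a low-frequency grid picks
// a pseudo-random UV offset per cell, so the detail texture never repeats in
// an obvious pattern. Blending across cell borders is omitted here.
#include <cmath>
#include <cstdio>

struct UV { float u, v; };

// Cheap 2D hash -> 0..1, the kind of thing a noise node or a couple of
// frac/dot ops would give you in a material.
float Hash(int x, int y) {
    unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFFu) / 65535.0f;
}

UV RandomizedTileUV(UV uv, float cellSize) {
    int cx = (int)std::floor(uv.u / cellSize);
    int cy = (int)std::floor(uv.v / cellSize);
    // Each cell gets its own random offset into the detail texture.
    UV offset{ Hash(cx, cy), Hash(cy, cx) };
    return { uv.u + offset.u, uv.v + offset.v };
}

int main() {
    UV uv{12.3f, 4.7f};
    UV randomized = RandomizedTileUV(uv, 4.0f);
    std::printf("u=%.3f v=%.3f\n", randomized.u, randomized.v);
}
```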

It DOES use temporal sampling (which I know can be unpopular). The stock denoiser is OK, but suffers as you use less and less of the screen as a base. I find the denoiser in the DLSS 2+ packages has been near perfect, even under close examination when I am literally inches from my screen looking for visual artifacts; I likey. IF you choose to upscale, this solution appears to work well even down to ~40-ish percent before you notice some speckling at distance.

As to the concern around using a wide texture set / texture palette, I use a texture array. For the uninitiated, you can pack a bunch of (similarly-sized!) textures into an array. In a material, you can append X to your 2-channel UVs and pass in a 3-vector; the 3rd value is which texture in the array you are picking from. It saves some samples/slots and, in the object-explorer, appears to offer a modest saving in space.
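
Conceptually it is just this (plain-C++ stand-in; the TextureArray struct here is made up and is not the engine asset type):

```cpp
// Sketch of the texture-array idea: pack same-sized layers into one array and
// select the layer with a third UV component, so one sampler slot covers
// rock/dirt/grass/etc. Plain C++ stand-in, not engine API.
#include <cstdio>
#include <vector>

struct TextureArray {
    int size = 4;                            // 4x4 texels per layer, just for the demo
    std::vector<std::vector<float>> layers;  // layers[layer][y * size + x]

    // The material-side equivalent: append the layer index to the 2-channel
    // UV and sample with a 3-component coordinate (u, v, layer).
    float Sample(float u, float v, int layer) const {
        int x = (int)(u * (size - 1) + 0.5f);
        int y = (int)(v * (size - 1) + 0.5f);
        return layers[layer][y * size + x];
    }
};

int main() {
    TextureArray tex;
    tex.layers = { std::vector<float>(16, 0.2f),    // layer 0: rock
                   std::vector<float>(16, 0.5f),    // layer 1: dirt
                   std::vector<float>(16, 0.8f) };  // layer 2: grass
    std::printf("dirt texel = %.2f\n", tex.Sample(0.25f, 0.75f, 1));
}
```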

Lastly, regarding my particular implementation, these particulars won’t work if you are painting PBR information into the RVT. I track alphas for rock, dirt, veggies, water, etc. This lets me do stuff in the heightmesh independently of the RVT; what you end up seeing PBR-wise isn’t set in stone because it is not rendered into the RVT, only the alpha information is. I can do dynamic stuff on the heightmesh side of the house, so things like raindrops, running water, etc. can be animated (and even that layer can be part of the texture array!). As well, since it’s alpha, anything else that can sample the RVT can ‘know about’ the landscape and react accordingly, like materials for foliage, rocks, etc. It’s not just a visual sink but a meta-information sink.
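
A rough plain-C++ sketch of that split (RvtTexel and ResolveOnHeightmesh are invented names): the RVT only caches the masks, and the mesh that samples it resolves the actual PBR per frame, including the animated bits.

```cpp
// Sketch of the "alpha sink" idea: the RVT stores only layer masks, and the
// mesh that reads it resolves the actual PBR (including animated bits like
// water) per frame. All names here are made up for illustration.
#include <cmath>
#include <cstdio>

struct RvtTexel { float rock, dirt, veg, water; };  // what gets cached in the RVT

struct Pbr { float albedo, roughness; };

Pbr ResolveOnHeightmesh(const RvtTexel& m, float timeSeconds) {
    // Static layers blended from the cached masks...
    float albedo    = 0.5f * m.rock + 0.35f * m.dirt + 0.2f * m.veg;
    float roughness = 0.9f * m.rock + 0.8f  * m.dirt + 0.6f * m.veg;
    // ...plus dynamic behaviour that could never be baked into the RVT.
    float ripple = 0.5f + 0.5f * std::sin(timeSeconds * 6.28f);
    albedo    = albedo    * (1.0f - m.water) + 0.1f * ripple * m.water;
    roughness = roughness * (1.0f - m.water) + 0.05f * m.water;
    return { albedo, roughness };
}

int main() {
    RvtTexel texel{0.1f, 0.3f, 0.4f, 0.2f};  // masks needn't sum to exactly 1 for the demo
    Pbr p = ResolveOnHeightmesh(texel, 0.25f);
    std::printf("albedo=%.3f roughness=%.3f\n", p.albedo, p.roughness);
    // Anything else that can sample the same RVT (foliage, rock materials)
    // can read these masks too and react to the landscape.
}
```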

So! It offers advantages but also drawbacks, but it illustrates that the various challenges CAN be overcome. RVTs tend to be more performant, for me, vs using a plain landscape, and if you add Nanite + displacement it drops even more.

Applications abound. Have fun!


Would be curious to see a bench of a regular (one tile) landscape VS a regular (4 tile always loaded) world composition vs RVT (1 tile) vs RVT ( 4 tile world comp).

In the end, the bench means little to nothing except for the project you are working on, though.
Even if you see a clear 50%-plus cost reduction, the end results are probably just a decent reference to get a general idea rather than gospel anyway.

Anyway,
I’m more curious about what I mentioned above.

Did they ever actually FIX the system so that both Foliage and Landscape Grass can be output into the texture?
This was my last experiment with it way back when:

Through it, you get really cheap shadows - as well as the fake distant impostors (though flat).

If the same were a thing for the Landscape Grass nodes, then maybe the landscapes would have some use…

This is actually fantastic, thank you so much for taking the time to write down such an elaborate answer!
I heard Texture Arrays support mip maps now, so that’s a fair point - I will try them out straight away. No more frac = A-OK for me!
I have so much to look up now, thanks again Frenetic!

(OP, apologies to hijack your post)

Unsure. I might play around with this but I use the landscape-grass layers to help drive all my stuff w/intent to use the PCG-tool for larger objects like trees, etc. I don’t use the foliage painter and never really have.

For my state-of-the-art, I’m no longer tracking PBR information in the RVT-sink so I’d have to set something up to test w/foliage. I might if I find cycles to test.

What’s the expectation, that the PBR of the foliage would render to the RVT? I’ve done this w/rock and the like, but you won’t get motion. Nothing dynamic can be put into the RVT (hence me tracking alpha and doing dynamic-stuff on the post-RVT side of the house).

You don’t need motion; it adjusts rendering when you cull based on distance and makes it so you still see grass (though flat) when far out…
It matches more closely the colors you see up close.

Say your floor is bright green and your grass is darker… if you output into the texture, the floor is automatically the same color as the grass, for one…