Landscape performance issue

Here, I will go trigger happy and paint a rainbow on a 2x2km map for you…

Try it on an 8x8km map with 1024 components. 2x2km is too small to be called “open world”. The cost problem gets more obvious the bigger the landscape gets. Or just take my word for it because I’ve done it. :slight_smile:

Edit: Make sure each component has like 5 different layers on it.

that depends on your setup. some people use a big texture (encompassing the entire landscape) which allows painting less layers without sacrificing so much visual variety.
but again if your components are smaller, there’s a much higher chance that you can get away with fewer layers in each of them. however I can understand the need for a minimum number of layers, so moving on
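just to illustrate the “one big texture” idea (an HLSL-style sketch, all names invented; in practice you’d build the equivalent with material nodes):

```hlsl
// sketch only, with invented names: one colormap stretched over the whole landscape,
// sampled by world position, so per-layer detail textures can be fewer and smaller.
Texture2D    GlobalColorMap;
SamplerState ColorMapSampler;
float2       LandscapeOriginUU;   // landscape min corner, in Unreal units
float2       LandscapeSizeUU;     // landscape extent, in Unreal units

float3 ApplyGlobalColor(float3 blendedLayerAlbedo, float2 worldPosXY)
{
    // 0..1 UV across the entire landscape, independent of per-layer tiling
    float2 uv = (worldPosXY - LandscapeOriginUU) / LandscapeSizeUU;
    float3 tint = GlobalColorMap.Sample(ColorMapSampler, uv).rgb;
    // treat mid-grey as neutral so the colormap only shifts hue and brightness
    return blendedLayerAlbedo * tint * 2.0f;
}
```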

I’m suggesting it because in our tests it really didn’t bring more weight on the CPU, and wasn’t slower at all. in fact it was slightly faster (in negligible numbers), as in our case smaller components led to less landscape vertices processed overall, which apparently made up for the cost related to processing more components.
I would suggest not disregarding this test, unless you’re really sure that you cannot live with less than 4 layers as a ‘base’ on all components. and even then it would help with hand-painted components that are less frequent.

I only mentioned World Composition because I know it to make things worse, not better :wink:
with 1 vert per meter and the difference you get with default material vs. your material, it’s clear the issue comes from somewhere in the material. And no, I’m not suggesting your material is wrong :slight_smile:

yeah let’s ignore the tessellation issues (which I’m well aware of), I just wanted to get it out of the way.

I’m still curious about your instruction count (with 4 layers enabled as preview blend weights in your material)

but as your problems seem to be something deeper / less obvious, we’ll need to start inspecting smaller things.
you’re blending 2 textures per layer, each of them 2 times (with different UVs). in your landscape (where 4 layers are painted everywhere) that’s 8 different 4k textures sampled, for a total of 16 texture samples processed per pixel. the way I see it, that’s never going to fare well. for me that’s most likely the cause of the issue.
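to make that math concrete, here’s a stripped-down sketch (not your material, just an illustration with invented names) of how the count adds up:

```hlsl
// sketch only, with invented names: 4 painted layers, each with a diffuse + normal
// pair, each pair sampled at a near and a far tiling:
// 4 layers * 2 textures * 2 UV sets = 16 fetches per pixel.
Texture2D    LayerDiffuse[4];
SamplerState LandscapeSampler;

float3 BlendDiffuse(float2 uvNear, float2 uvFar, float4 weights, float distanceFade)
{
    float3 albedo = 0;
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        float3 nearCol = LayerDiffuse[i].Sample(LandscapeSampler, uvNear).rgb; // fetch 1
        float3 farCol  = LayerDiffuse[i].Sample(LandscapeSampler, uvFar).rgb;  // fetch 2
        // the matching normal map adds two more fetches per layer (not shown here),
        // which is how 4 layers end up at 16 samples for every pixel on screen
        albedo += weights[i] * lerp(nearCol, farCol, distanceFade);
    }
    return albedo;
}
```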
at this point I’m not sure anything will help you. in fact I don’t think there are any game-changing optimizations Epic could potentially do (other engines have other compromises, as described by others in this thread), unless they’d write a completely different material system for Landscapes (something that completely ditches the current weightmap approach, i.e. handling things per vertex while limiting the amount of blending at any given point, etc.), which I doubt will happen.
if you’re serious about your open-world game I’m afraid you’ll have to attempt smaller optimizations that add up in the end. an improvement from Epic that bumps your 79 FPS to 90 or 100 is something I wouldn’t expect. you might even need to go much further with tricks, but this is something that happens with all games (like for example unloading pieces of land and loading simplified static meshes in their place, like GTA5 does)

in the effort to find ways to improve it I’ll suggest a few things. none of them are guaranteed to work, none of them come without drawbacks, and none of them are a one-size-fits-all final solution (slow performance is usually a sum of many things, so these are more like checks to find the biggest culprit)

  • try using 2k textures instead of 4k
  • try removing your 2nd sampling (the one for far distance)
  • try reducing the anisotropy in your engine settings (anisotropy seems to be much more relevant the bigger your textures are)
  • try dynamic shader branching (yes, you’d need to rewrite your material as shader code; a rough sketch follows right below)
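as a very rough idea of what that branching could look like in a Custom node (hypothetical names; the weights would come from your painted layers):

```hlsl
// sketch only, with invented names: skip a layer's texture work entirely wherever
// its painted weight is (near) zero, instead of always paying for every layer.
Texture2D    LayerDiffuse[4];
SamplerState LandscapeSampler;

float3 BlendLayersBranched(float2 uv, float4 weights)
{
    // take the UV derivatives once, outside the branch, so SampleGrad stays valid
    float2 dx = ddx(uv);
    float2 dy = ddy(uv);

    float3 albedo = 0;
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        [branch]                      // ask for a real branch, not a flattened select
        if (weights[i] > 0.001f)
        {
            albedo += weights[i] * LayerDiffuse[i].SampleGrad(LandscapeSampler, uv, dx, dy).rgb;
        }
    }
    return albedo;
}
```

whether this actually wins anything depends on how coherent the painted areas are: if neighbouring pixels take different branches, the GPU largely ends up paying for both sides anyway.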

I appreciate your effort here but that’s not the point I’m discussing: I get 79 FPS in that test because I have 4 layers. I have 4 layers not because I want to have 4 layers, but because having more layers reduces performance more and more. We don’t need Virtual Texturing so that our 4 layers bump from 79 FPS to 100 FPS; maybe we could do all the trickery in the world to make our 4 layers give us 100 FPS. But we need Virtual Texturing so that our 16 layers give us 100 FPS. And that’s a huuuuuge difference.

still I guess you get my point? getting 60 FPS with 8+ layers etc.
honestly I doubt you’ll ever get 60 FPS (on current hardware) with 16 layers sampling each texture 2x per layer. no amount of optimization will give you that. 32 texture samples per pixel? I doubt any of the other engines you mention is capable of that while achieving decent framerates

Size means nothing; you could take this 2x2km map and scale it up to 8x8km and it would still have the same resolution. I have 1024 components btw…

2x2km, 63x63 quads, 1x1 section, 32x32 components (1024 components) and an overall resolution of 2017x2017 (63 quads x 32 components + 1 = 2017 vertices per side). Every single component now has 10 layers on it (I got bored of waiting for shader compiles, but could have gone up to 16 on them all no problem). I set the brush to component mode, 8x8, with an alpha of 0.05 to ensure I’d never fully coat a component. Here is the landscape > layer usage visualizer and the in-game frame rate. Keep in mind that the layer visualizer caps out and stops adding more bands past a certain point.

Also keep in mind that the layers were only slightly tinted over top of what I had earlier, but trust me, there are 10 layers per component now, as in each component has paint from 10 separate layers.

720p, frame rate capped to 32 FPS, high scalability, standalone, and fully dynamic lighting + DFAO; all on this 540M laptop with 6 GB of system RAM…

how big are your textures? and you’re using 1 diffuse + 1 normal map unique per layer, right?

not sure why you’re capping it at 32 FPS. but in any case it’s really hard to compare the performance of 720p on a 540m GPU with current gen target hardware

Just out of curiosity, what was the reaction of the person responsible for terrain when this restriction was stated?
Mine went nuts :frowning:
And she was probably right, because the terrain was created in external software, where adhering to this limitation is so impractical that the terrain would need a ton of manual touch-ups in the engine. Depends on the size of the world of course, but still.

  1. Add the ability to work with texture arrays to the out-of-the-box engine version, including a tool to create an array from texture assets (a rough sketch of the idea follows below this list).
  2. Add dynamic branching into material editor.
  3. Overhaul weightmaps (e.g. allow 16-bit weightmaps, store layer ID rather than layer weight, and ideally let you customize the terrain painting brush to the point where you could manually specify the way custom data is encoded).
  4. Expose the terrain heightfield / normal map to the material editor (you should be able to sample those in the terrain’s pixel shader; weightmaps too, btw).
  5. Add a dedicated terrain color map.
  6. Untie physical surface ID from terrain weightmaps and allow baking the surface ID from the terrain material.

This would be more than enough to bring UE4 on par in regards to out-of-the-box open-world capabilities. I left out misc things like virtual texturing / sparse textures on purpose, because in my view the list above would yield a better usability improvement at a lower worktime investment.
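For item 1, the core idea, sketched below with invented names (UE4 doesn’t expose this out of the box, which is exactly the point), is that one Texture2DArray replaces N separate layer textures, so adding layers stops consuming sampler slots:

```hlsl
// sketch only, with invented names: all layer albedos packed as slices of a single
// texture array, so one texture object (and one sampler) serves every layer.
Texture2DArray LayerAlbedoArray;   // e.g. 16 slices, one per painted layer
SamplerState   LandscapeSampler;

float3 BlendFromArray(float2 uv, float weights[16])
{
    float3 albedo = 0;
    [unroll]
    for (int i = 0; i < 16; ++i)
    {
        // the slice index picks the layer; combined with dynamic branching (item 2)
        // the slices whose weight is zero could be skipped entirely
        albedo += weights[i] * LayerAlbedoArray.Sample(LandscapeSampler, float3(uv, i)).rgb;
    }
    return albedo;
}
```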

This is actually an essential thing to be done, yeah.

All this is more than relevant, yeah.

Surely, no engine or hardware is or will be capable of this in the nearest future. The question is why the UE4 landscape system is designed in a way that bumps into this limitation, forcing sacrifices in visuals, when this bottleneck can and should be avoided?

Anyone could potentially say that any studio can overhaul it as per their needs. That is true. The reality, however, is that at least 3 shipped, well-known and widely marketed UE4 titles that I know of have performance issues which can be attributed to the landscape texturing system in one way or another.

The need for default landscape improvement is not project specific, but is common for any open world project, hence this thread exists.

Check into what I’ve mentioned a few times now: shared wrap. I don’t recall the exact numbers, but I think it can allow you to blend up to 128 layers, if you desire. Also, keep in mind that those games are/were running on older versions of the engine. The landscape system needs work, sure, but it’s nowhere near as bad as people are painting it to be. Most of the other game engines out there that are running 16 layers, etc., are using cheap layers that are mostly just straight D+N.

They are textures from the starter content, so they are probably 2k. And yes, it’s 1 diffuse and 1 normal, but I’m also adding in more filler by taking channels from the materials and feeding them into specular, metallic and roughness, to simulate it having to pull more info.

I’m capping to 32fps because the game I’m making is going to be a 30fps game. It’s an RPG, not a shooter, so I’d rather pack in twice as much detail than skimp and go for 60. The game will be a 1080p game, but this laptop isn’t really rated for much more than 720p.

Can you upload it and put the link for us here to try it out? :slight_smile:

when the game’s performance isn’t optimal the level designers can see it coming when such restrictions arrive :smiley:
with terrain created in external software it means you get a lot of painted information in places where you won’t even see it, so they just had to learn to unpaint those areas. yes the terrain needs a lot of manual touchups, but that’s to be expected if any optimization is ever going to take place. just as coders are meant to optimize their code, level designers should also optimize their levels.
so all in all, given that we use a global colormap (so the compromise to visual variety wasn’t big), and with the possibility to make exceptions here and there, the restriction was accepted with understanding.

  1. we tried it briefly and have yet to fully explore it, but preliminary results showed a very marginal benefit
  2. we tried it briefly and have yet to fully explore it, but preliminary results showed a small benefit. not something to dismiss, but clearly not as much as we expected. also the benefit was more significant on older hardware
  3. while that would help a bit, I believe it would only reduce the cost of the weightmap sampling, which seems very small compared to the cost of the shader (i.e. the sampling of the actual layer textures)
  4. I asked for this more than 2 years ago but only got silence. however I don’t see how this would improve the performance (which is the entire point of this topic)
  5. this helps for variety, but I don’t see how this helps improving the performance (unless you use it as leverage to reduce your painted layers like I described above)
  6. I also asked for this more than 2 years ago but only got silence. but again I don’t see how this improves performance

I’m all up for usability improvements and other changes that would give more flexibility, but for the sake of this thread maybe we should stick to discussing performance

I’d like to demystify this concept. game studios don’t have unlimited time and resources to overhaul all systems to their needs, which is the reason game studios use a 3rd-party engine in the first place.
big overhauls are only within the reach of a few bigger studios (in the case of Unreal I’d only be able to mention Rocksteady for the big graphics overhaul they made over UE3 for Batman: Arkham Knight, and The Coalition for their engine-side changes to both UE3 and UE4 when working on the Gears series). small studios only have the luxury of small adjustments to the engine and maybe a handful of small implementations (usually to make up for a non-existent feature, not to overhaul existing ones), while in most cases they have to live with what the engine offers.


shared samplers allow you to go beyond the 16-texture limit, but that’s basically it. the shared sampler gives a very minimal boost in performance. so just because you can blend up to 128 textures it doesn’t mean you should (unless you want to bring your GPU down to its knees) :wink:

What I posted should be viewed as a complex of measures to be taken as a single step, not as individual features. They are chained together.

2 alone is a good performance boost (I actually have no clue how you ended up marking it as marginally useful; I have the opposite results in all respects, including an increased gain on newer hardware), yet when coupled with 1, it opens up possibilities to do things that would otherwise be prohibitive, like having 3-way mapping on all your layers (see the sketch below). And if you can pack more features into the same rendering time, what is this if not a performance gain?
With 1 implemented, 3 becomes a natural evolution step and reduces the memory footprint.
4, 5 and 6 are indeed not directly relevant to performance, but their absence drives users into hacking around and thus affects rendering time.
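Purely as an illustration of what “3-way mapping” (triplanar mapping) means here, building on the texture-array idea from item 1, again with invented names:

```hlsl
// sketch only, with invented names: "3-way" / triplanar mapping of one layer,
// using a slice of the texture array idea from item 1.
Texture2DArray LayerAlbedoArray;
SamplerState   LandscapeSampler;

float3 TriplanarSample(int layerIndex, float3 worldPos, float3 worldNormal, float tiling)
{
    // blend weights derived from the surface normal, sharpened and normalized
    float3 w = pow(abs(worldNormal), 4.0f);
    w /= (w.x + w.y + w.z);

    float3 x = LayerAlbedoArray.Sample(LandscapeSampler, float3(worldPos.zy * tiling, layerIndex)).rgb;
    float3 y = LayerAlbedoArray.Sample(LandscapeSampler, float3(worldPos.xz * tiling, layerIndex)).rgb;
    float3 z = LayerAlbedoArray.Sample(LandscapeSampler, float3(worldPos.xy * tiling, layerIndex)).rgb;

    return x * w.x + y * w.y + z * w.z;   // 3 fetches per layer instead of 1
}
```

That is three fetches per layer instead of one, which is exactly why it only becomes affordable once the per-layer cost is cheap in the first place.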

The thread starter was referring to performance vs. layer number, so the thread is mostly about that. There is not much to discuss about aspects of the UE4 landscape system other than texturing, because they are not lacking in any way.

It’s like 300 MB, so no. You can easily recreate it though. Just make a new landscape, use default settings, hit fill world. Make a new material, add in two layer blend nodes (or add in a few more to artificially bog the system down by putting random channels into them and hooking them into specular/metallic/roughness), add in a bunch of layer slots, grab a bunch of the starter content textures that are D+N, make sure to set each one of them to shared: wrap (both diffuse and normal), get a landscape coordinate node, hook it up to all the textures, hook all the textures into the layer nodes and save it. Scroll up and you’ll see the big node array mess with it all. Apply that material to the landscape, add in each layer info channel and paint away…

Also, one thing I never asked: are you testing your map out in the editor or in standalone? You should ALWAYS test anything performance-related in standalone or packaged. At the very minimum, standalone.

EDIT: And what does your VRAM look like when you’re having issues with being bogged down to a low FPS? Or any other GPU stats that might indicate you’re bottle-necking.

Sadly, we don’t get any good results like yours with 10 layers on an 8k landscape.
If you happen to create an actual production-quality landscape shader with 10 layers on an 8k landscape and get 100+ FPS at 1080p on the same hardware, without capping FPS to 32, then if you share the project we can go by that and see what’s giving you 4 times more frames (if it does), and what’s bottlenecking ours (if it is). :slight_smile:
Also, hooking up textures to their corresponding slots isn’t really what they do for AAA projects. There are tons of effects and calculations involved in landscape shaders in other games.
Setting textures to shared wrap and things like that are all being done already, and they don’t make up for the absence of a proper landscape texturing workflow.

Not totally fair to compare, as this was from an engine where open world was made a priority, but a look at Ghost Recon’s environment tech shows that a lot of technical complexity is possible if it is focused on, especially with a graphics programmer who understands what the artists want.

Also posted for general interest in this topic, as it’s the best GDC talk I’ve seen on the subject. Their road system in particular looks fantastic.

Just saying again here though, I don’t think it’s fair to expect all this from Epic; they’re building a general engine, not a specific one, and this Wildlands tech took a small team 4 years to make. But to say more is not possible with current PC technology is, I think, not completely true either.

The overall takeaway from this GDC talk, though, is that there is still a large gulf between what triple-A studios/engines can do and what we can do. You might be able to compete making a small fighting game or a corridor shooter, but large open-world games need large investments in clever people and time, and for that you have to pay. And there’s no real shortcut around that, except accepting that your game won’t look as good as theirs.

Here is a new version that I made with a semi-complex material function (a modified version of the starter content grass material). 10 layers used on all 1024 components, graphics set to high, dynamic shadows + DFAO, and I am still frame-capped at the 32 FPS I set. Oh, and it’s pointless for me to uncap my frame rate: even with a blank starter level I will only get like 50-60 FPS (with graphics set to high) on this laptop. Remember, it only has a 540M and is like five or six years old.

EDIT: On the material, the open spots on the MF’s inputs will default to a preview texture/value/etc. from within the material function, so they are definitely hooked up. Also, again, I just filled the whole map with the first material and then, using a low alpha value of like 0.05 or something, filled in each extra blend layer. So the map will look predominantly like the first layer, with only a slight tint of the other layers. It still means that it’s having to blend all the layers together though.




Can you upload so we can test.

Put the contents of this zip inside your root project folder, or just open the zip, click into the content folder and then copy the landscapegroundtest folder into your project’s content folder. Go into the material function, click on the four nodes in the red comment box and, in their settings, put some real textures that they can default to (the function defaults to them, but needs them to have something in them). Go into the material and start plugging in the actual textures that you want to use. Apply it to your landscape.

There is no way I’m trying to upload 330mb on a weak connection.

Thanks for sharing it!
Truly awesome.
For the rest of us, who are producing something other than a slide-show, the problem is still valid.

The biggest piece of advice I can give is to watch how many large textures you’re blending at a time. For instance, you can probably use a 4k base, but why would you use a 4k micro or macro variant texture? Even for an up-close micro variant, you don’t need it to be 4k, because it’s usually just a tiling texture that adds in stuff like little blades of grass or surface noise. Is your camera ever going to get closer than 50 units to the ground? Probably not. Along the same lines, how often are you going to be zoomed out to 200,000 units, to the point that you’ll be able to tell the difference in macro texture resolution? Blending to color gradients the further you go out also helps a lot. You can pretty much switch to a macro variant + single color vector once you get more than X meters away; you aren’t going to be able to see the detail anyway.
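A minimal sketch of that distance trick, with made-up parameter names (in the material editor you would build the same thing out of a distance-based lerp):

```hlsl
// sketch only, with invented names: past some distance, fade the detail texture
// out and rely on a cheap flat macro color instead.
Texture2D    DetailTexture;
SamplerState LandscapeSampler;
float        DetailFadeStart;   // e.g. 2000 units (20 m)
float        DetailFadeEnd;     // e.g. 5000 units (50 m)
float3       MacroColor;        // the single color vector per layer

float3 DistanceBlend(float2 uv, float distToCamera)
{
    float fade = saturate((distToCamera - DetailFadeStart) / (DetailFadeEnd - DetailFadeStart));
    float3 detail = DetailTexture.Sample(LandscapeSampler, uv).rgb;
    return lerp(detail, MacroColor, fade);   // far away you only see the flat color
}
```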

There are a lot of tricks and shortcuts you can use. Any of the big-name games that have these alleged “super” landscape systems are using engine shortcuts like these (probably internal to the actual engine), trading fidelity for performance. Just remember that a single 4k texture is 64 MB alone (assuming uncompressed RGBA: 4096 x 4096 x 4 bytes is roughly 64 MB).