Procedural Landscape Layers

Does anyone know of a way (or is it even possible) to procedurally generate landscape layers instead of having to paint them?

At the moment, I have a landscape material that automatically paints the entire landscape based on various algorithms I’ve built in the material, sampling noise textures to produce masks that I use to lerp between different textures. This works great: it paints the landscape just how I need it and automatically updates as I change the landscape or create new ones. The issue, though, is that there is no landscape layer information baked into the landscape, as I haven’t “painted” it with the landscape painting tools like you normally would. Essentially, I’m creating these paint masks procedurally, but I have no way of accessing them outside of the landscape material.
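To make that concrete for anyone reading along, the core of it is just mask-and-lerp; a rough HLSL sketch (all texture and parameter names here are made up for illustration, and in my material it’s actually built from graph nodes rather than code):

```hlsl
// Rough sketch of the noise-driven mask-and-lerp idea (all names illustrative).
Texture2D NoiseTex;  SamplerState NoiseSampler;   // tiling noise texture
Texture2D GrassTex;  SamplerState GrassSampler;   // layer textures
Texture2D DirtTex;   SamplerState DirtSampler;
float     NoiseScale;

float3 ShadeGround(float3 worldPos, float2 uv)
{
    float2 noiseUV = worldPos.xy * NoiseScale;                 // world-aligned noise UVs
    float  noise   = NoiseTex.Sample(NoiseSampler, noiseUV).r; // raw 0-1 noise
    float  mask    = smoothstep(0.45, 0.55, noise);            // sharpen into a paint mask
    return lerp(DirtTex.Sample(DirtSampler, uv).rgb,           // mask = 0 -> dirt
                GrassTex.Sample(GrassSampler, uv).rgb,         // mask = 1 -> grass
                mask);
}
```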

This becomes an issue when I try to use PCG, as it can sample the landscape and extract information such as the landscape layers. That is useful for then spawning procedural content (like trees) on certain layers. But unfortunately, I don’t have this information to sample. So I’m wondering: is there a way around this? Can I generate this information somehow, or is there some other way in PCG to determine what the ground type is on the landscape (i.e. grass, dirt, etc.)?

For now I’m just using the landscape grass layers system, as I can feed my generated masks into it in the landscape material. This works fine, but I would love to find a way to get these masks over to PCG so that I can do some more interesting things. Any ideas?


For sure there are plugins on Fab for this; I downloaded one and never used it. In a UE5 presentation they even showed a similar function.

Hi, you could at least bake them out. It won’t be fast to iterate, though:

1. Let the landscape write to a Runtime Virtual Texture, and write your masks out to it, either one by one or three at once (RGB).
2. Create a streaming virtual texture from the runtime virtual texture (a button click in the settings).
3. Export the streaming virtual texture, which gives you those masks as an image.
4. Split that into individual grayscale images and resize them to the size of the landscape.
5. Import them as landscape layers. Repeat until you have all your layers.

Setting up the landscape in a “dynamic” way will not work at runtime (in a published game) - so don’t waste time on it.

Whatever you come up with would have to use actual textures and be built for deployment if you use Epic’s sorry excuse for a “landscape” system.

Instead, you can leverage some voxel plugin or any other items you may find on the marketplace that are created to work at runtime.

Performance will vary - landscapes done wrong account for about 90% of extra scene costs in projects.

Finding a pre-made system that works and takes performance into account without extra work is pretty much the equivalent of the search for the Holy Grail - only, you’re always the Nazi in Raiders :laughing:

Be aware that PCG should work much the same - it precomputes and sets the positions of objects, so all of the instances end up stored and baked via their transforms. A running game will not automatically adjust said positions.

Realistically, you would need to come up with a way to place meshes at runtime if you plan to have things change at runtime - and that will require some sort of loading screen and a 30s or so loading time at a minimum (depending on instance count).

It’s not impossible, but you can’t leverage PCG, afaik.

Last thing worth a mention - the trash Unreal peddles uses a simple 16-bit grayscale image to generate the terrain - known as a heightmap.

Normally, you process a heightmap in some other program (a GIS) and extract a slope map to drive the painting and placement of stuff.

You could leverage the same setup to make changes at runtime; many systems work that way - just not Epic’s.

(post deleted by author)

Thanks for all the feedback, guys. I have thought about just exporting the masks to some maps I can use in PCG, but I was hoping to find a more dynamic way to speed up iteration time. I’m not looking to do this stuff at runtime or generate landscapes at runtime or anything like that. By “procedural” I just mean in-editor tooling to speed up the creation of my landscapes and the placement of foliage, with a consistent look and feel, rather than having to paint in the landscape layers and place the foliage by hand every time.


So this is something I’ve been poking at - not so much trying to crack it as seeing what the engine will let me get away with.

I don’t care for runtime effects either, except for things like getting wet, snow (buildup), and other environmental things, but those also apply to meshes and whatnot, so I don’t consider them tied to the landscape proper. What I wanted was a way to make procedural (math-driven) layers (like what you feed into landscape grass masks) available to PCG, and there is a path, but it’s not real-time - it will just save you time on the layout.

In the PCG graph, the landscape nodes only offer information about physics and painted layers. There isn’t even a way to sample the grass layers directly. Unless you feed the grass masks into another physics channel, you don’t get them there.

Otherwise, the only other way to get information out of the landscape is via RVT. The good news (everybody!) here is that you can seemingly use RVT sampling to drive the grass layers. In this pic, I’m testing the grass-masking effect of the red meshes (look down in the pic). The landscape itself feeds into the RVT, and a straight sample of that same RVT goes right into the grass node. Other meshes feed into the same channel to mask out areas where I don’t want grass. It’s not PCG, but it’s decently low-cost as a collective information sink, and it runs on auto-pilot - just rebuild the RVT and the landscape grass.

Pic

This helps automate landscape-grass to some extent.

Otherwise, you can use the RVT volume to bake out textures as you like once you get the layout you want, so there is also that at least.

In 5.6.1, you can use a template:

And in the landscape-grass example, you can see we’re somewhat limited in NOT being able to sample the grass mask, but the physics mask is almost as good. The rest - the sizing, randomizing, etc. - is all done in PCG.

So you can sorta fake it in PCG and likely get what we have in landscape grass. Otherwise there is always the RVT path if you want mathy-in-the-shader grass.

EDIT: so to be clear, I’ve not tried this, but I might. RVTs have a continuous-update option, which is costly since it re-renders all the bits you might be looking at, but it’s gotten better each version, so I don’t know where it stands today. I’ve also not tried putting any real-time maths into the RVT to use this, but what if you were feeding such maths to the physics layer and tied it to runtime generation in PCG??

In this case, can I point you both to the good ol’ Procedural Foliage tools that were left in beta and never finished?

Instead of PCG, you can just set the slope value you want the spawner to work on, plus the various mesh parameters that create somewhat realistic fields, such as the ability to grow or not grow in the shadow of another tree (mesh).

In all cases - do away with the “auto-material”, as it’s just going to eat performance for no reason.
Take the final height-map you make to GRASS GIS and derive the slope map. Pack a few different slopes into the same RGBA texture, and use the different channels to drive different landscape layers at a fraction of the cost you would otherwise incur in the auto-material - all giving you the exact same visual result, btw.
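To illustrate what that buys you, the per-pixel work collapses to one extra texture sample; a minimal sketch, assuming an example channel layout (all texture names and channel meanings are mine, not from any actual project):

```hlsl
// SlopeMasks is an RGBA texture packed offline from GRASS GIS slope output.
// Channel meanings are an example: R = flat, G = gentle, B = steep.
Texture2D SlopeMasks; SamplerState SlopeSampler;
Texture2D GrassTex;   SamplerState GrassSampler;
Texture2D DirtTex;    SamplerState DirtSampler;
Texture2D RockTex;    SamplerState RockSampler;

float3 BlendBySlope(float2 uv)
{
    float4 slopes = SlopeMasks.Sample(SlopeSampler, uv);      // one pre-baked sample
    return GrassTex.Sample(GrassSampler, uv).rgb * slopes.r   // flat   -> grass
         + DirtTex.Sample(DirtSampler, uv).rgb  * slopes.g    // gentle -> dirt
         + RockTex.Sample(RockSampler, uv).rgb  * slopes.b;   // steep  -> rock
}
```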

As always - avoid the landscape system. Avoid Grass maps (use procedural foliage to place grass meshes as foliage).

I think that if you marry this to PCG you can get some very good results with minimal user input on your part.

To generate the initial height-map, I would suggest a cloud-texture process where anything with a slope greater than, say, 60% is avoided (otherwise you have to come up with a PCG setup to place cliff faces).
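The slope test itself is cheap wherever you run it - something like this, with all names as placeholders and the heightmap assumed to be normalized 0-1:

```hlsl
// Estimate slope from heightmap neighbors and reject steep spots (sketch).
Texture2D HeightTex; SamplerState HeightSampler;
float TexelSize;    // 1 / heightmap resolution
float HeightRange;  // world-space height range encoded in the 0-1 heightmap
float WorldSize;    // world-space width covered by the heightmap

bool IsPlaceable(float2 uv)
{
    float hL = HeightTex.SampleLevel(HeightSampler, uv - float2(TexelSize, 0), 0).r;
    float hR = HeightTex.SampleLevel(HeightSampler, uv + float2(TexelSize, 0), 0).r;
    float hD = HeightTex.SampleLevel(HeightSampler, uv - float2(0, TexelSize), 0).r;
    float hU = HeightTex.SampleLevel(HeightSampler, uv + float2(0, TexelSize), 0).r;
    // Central-difference gradient, converted to rise over run in world units.
    float2 grad = float2(hR - hL, hU - hD) * HeightRange / (2.0 * TexelSize * WorldSize);
    return length(grad) < 0.6;   // skip anything steeper than a ~60% grade
}
```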

Should mention - Houdini by SideFX does terrains automagically with a slew of functions that help, like erosion. And the end results - as seen in Far Cry 5, I think - are decent to say the least when it comes to geometry.
Maybe look into it and see if you are able to work in that DCC directly - I’m sure you can do so even better now than you could back in 2020ish, so it’s worth a try.

So Procedural Foliage appears to have been rolled into PCG, which is probably why it was never finished. Just compare the settings of Procedural Foliage with the settings of the Static Mesh Spawner node in PCG for example.

I have looked into the RVT stuff in PCG in UE5.6, but couldn’t get it working. Could just be the lack of documentation as the feature is so new.

I have since come up with a solution that appears to be working fine for me now. I’m using the Landscape Physical Material Output node and just writing my masks to it. Then I’m sampling it in PCG. It actually works really well; there are just a couple of “gotchas.”

Firstly, you basically have to restart the editor if you make any changes to the physical material setup (i.e. in the project settings and such). Secondly, it would appear you can only have a single physical material per vertex. So if two masks apply to the same vertex, one will overwrite the other and you will only get one result in PCG. It just means that the masks need to be “excluded” from one another in the material setup before pushing them into the physical material node (something like the priority chain sketched below).
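For reference, the exclusion step is just a priority chain; a minimal sketch, assuming three procedural masks (the mask names are mine, not engine ones):

```hlsl
// Make overlapping procedural masks mutually exclusive before they feed
// the Landscape Physical Material Output (a sketch; mask names assumed).
float3 ExcludeMasks(float RockMask, float DirtMask, float GrassMask)
{
    float rock  = saturate(RockMask);                                // highest priority
    float dirt  = saturate(DirtMask)  * (1.0 - rock);                // dirt yields to rock
    float grass = saturate(GrassMask) * (1.0 - rock) * (1.0 - dirt); // grass yields to both
    // The weights no longer overlap, so two masks can't fight over the
    // same vertex and silently overwrite each other's physical material.
    return float3(rock, dirt, grass);
}
```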

I’m still using the older grass layers for a lot of the small foliage, but am now using PCG for the larger things like trees as I need the player to be able to interact with them. PCG allows spawning meshes in an HISM, which you can then interact with via a line trace, remove it, swap it out with an actor, etc. at runtime.

As far as the performance cost of doing complex landscape painting logic as an “auto-material” goes, I’ve been careful to keep it all “static” and then write the whole lot to an RVT that is then sampled in the material’s final output. So it really doesn’t matter how complex it gets, as it will pretty much always be written once to the RVT and just becomes a texture sample after that. From what I understand, the RVT write, as well as the landscape grass layers output and the physical materials output, are all one-time costs, assuming you don’t do anything dynamic before the RVT node. Correct me if I’m wrong.

Well, hold on there - the latest engine doesn’t work, and pretty much none of the 5.x versions do - so in order to get something going you have to be on 4.something (4.25 for performance?) and use World Composition, or whatever the old system is called, with proper streaming and proper world origin shifting.

Otherwise you are at the mercy of Epic and stuff not working. In which case, follow @Frenetic’s advice, as he’s been barking up that tree for at least 2 years now, I think.

Regarding your last comment, you may be misunderstanding the implication - yes, you write to a texture and then read it, but are you sure the rendering cost of that is not “per vertex, on each screen pixel, every frame”? RVTs should update at runtime (hence the “Runtime” in Runtime Virtual Texturing)…

While this doesn’t pass my smell test, it’s also possible you got it working right - just check with scene stats…

Also, again, I encourage you to find ways to work that are outside of the engine, so you can jump ship and keep your work once you realize the limitations/implications of using Unreal on something even minimally large scale.

Plus, if you develop your own pipeline you build up real knowledge instead of learning things that are specific to the side of the bed Epic woke up on this particular day/month/year… :wink:

RVTs update on demand, and only the tiles that are invalidated by a change. So, yes, all that auto-logic will process at runtime, but in my case it should only happen once at level load, as I’m not streaming a large level or using World Partition. I might even end up baking out the mask maps of each level at the end and painting the landscape with those, just to speed up level loads.

Not sure what you mean about 5.x not working. I suppose you can find something in any UE version that doesn’t work right, but I just find ways around it without rolling my own stuff from scratch.

My understanding was in line with what @wilberolive suggested: that RVTs update on demand; otherwise, if the source does not change at runtime, they are essentially a ‘regular’ (static) VT. There is a switch to actively update the tiles you can see, but this is usually expensive at runtime for large, open worlds, IMHE.

You can also pay to precompute various mips up front and store them in a larger and larger RVT file on disk; otherwise, functionally, they are a big texture with the shader logic hidden behind the wall of the texture. You pay it once, and what comes after sampling the output is its own thing.

To wit, if you precompute 4 levels and export the resulting texture, you can get an 8k PNG; 3 levels gets you a 16k - scale up/down as required. @MostHost_LA suggested using GIS data, and yes, of course do, but you can also use the shader logic on top of that to mod it, then generate/export a texture if you want something static. You’re only limited by resources here; Unreal isn’t the most resource-efficient way to do all that, but it is a functional path that keeps you largely inside the engine.

I mean - the only thing I use the engine for nowadays is to render textures out in some way.

Bake to RT, export the RT.

You can 90% set up a baking pipeline to produce the heightmap - the difficulty of doing it entirely in engine is the 16-bit format.

I have never tried to bake and export 16-bit directly from the engine (as what I - and most of us - deal with are material maps that only need a max of 256 values)…

You could attempt this. It would make your pipeline faster and possibly enable you to create random levels at runtime to some extent…

It’s a decently good path to rescale a heightmap. I can, say, use a 2k map, slap it on a landscape, make some edits, address some sore spots by sculpting and/or in a shader, feed the Z position out to WorldHeight in an RVT, and export at the desired resolution. The WorldHeight RVT option already creates a normalized heightmap in this regard, so it’s good to go if you want to turn a 2k into an 8k and have it scale well.
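(“Normalized” meaning, per pixel, roughly this - a sketch, with the min/max names standing in for the RVT volume’s vertical bounds:)

```hlsl
// What the WorldHeight RVT output effectively stores per pixel (sketch;
// VolumeMinZ/VolumeMaxZ are placeholders for the volume's vertical bounds).
float height01 = saturate((worldPos.z - VolumeMinZ) / (VolumeMaxZ - VolumeMinZ));
```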

Stupid-app-tricks, but it can be handy. At the very least, you can freeze things made in the shader portion of the RVT and then just directly sample the baked texture, which, agreed, is usually the most performant path. At least in this manner one can add value via a shader where it applies.

For at least my unkind testbed, I can say using the RVT as a source at all is the more performant path for me - not by much, but enough to register on my test case (~7-10).

Ahah, funny that I found this post - I actually built a system that solves this, but I can’t share the exact setup since I’m under NDA.

To give you some guidelines: you’ll need to feed your masks into a physical material within your current material setup, and then find a way to sample your landscape material to send that information to PCG. (The “Get Physical Material” option in the Get Landscape Data node doesn’t work correctly.)

It works at runtime, so you can do some cool stuff with it, but I’d highly recommend not relying on it too much since it’s quite heavy on compute time. Still, if it’s just for iteration and populating your landscape with props, it’s the fastest pipeline.

Good luck, you’ll definitely figure it out. :smiling_face_with_sunglasses:

So you manually touch up with landscape sculpting?

I run a smoothing pass on 30m-to-1m lidar expansions with nip2, using a custom Python-derived smoothing algorithm - because the terracing you get otherwise is insane.

Are you telling me that it’s now possible to run something like that in engine, get a decent smoothing result that’s custom-handled by clicking a smooth tool where you need it - and you can save the 16-bit output?

If so, I should set up a “builder” project that works this way to improve on the usual pipeline somewhat. (nip2 smooth, add both maps to the engine, manually adjust to better match the original geo.)

(And also, it’s ridiculous that we still can’t get 1m lidar for most of the world with all the stuff we have orbiting around.)


Actually, I had the same problem and came up with 2 solutions.

  1. You can build logic inside the landscape material to assign physical materials to the corresponding layers with weights (basically masks), and then sample that in PCG, using a filter to include or exclude points.
    Worked extremely well for me.

  2. I made a second version (a bit slower to iterate on) using a static switch in the material: I can turn on a plain color on the landscape layers instead of a texture, and by rendering a texture with a Render Target (from an unlit scene) you can then very easily sample it in PCG and do point logic per color (sketched below).

  • The downside here is that you need to re-save the texture out from the RT if you make a change; the rest is easy.
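For option 2, the per-color point logic boils down to nearest-reference-color classification, whether you wire it up with PCG filter nodes or a bit of HLSL. A sketch, where the layer colors and all names are just examples:

```hlsl
// Map a sampled flat color back to a layer index (illustrative sketch).
// LayerColors holds the plain colors assigned per layer via the static switch.
static const float3 LayerColors[3] = { float3(1,0,0), float3(0,1,0), float3(0,0,1) };

int ClassifyColor(float3 sampled)
{
    int   bestLayer = 0;
    float bestDist  = 1e6;
    for (int i = 0; i < 3; ++i)        // pick the nearest reference color
    {
        float d = distance(sampled, LayerColors[i]);
        if (d < bestDist) { bestDist = d; bestLayer = i; }
    }
    return bestLayer;                  // drives the per-point include/exclude decision
}
```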

You might be doing something wrong, but it works perfectly fine for me. I’m using UE 5.6, if that matters.

Yep, that’s what I was doing, if you read my last post. Anyway, I’ve scrapped the physical materials approach, as it turns out you can now sample the landscape grass layer maps directly in PCG. Works so much better. It does require writing custom HLSL, though, as it is not built-in functionality in PCG, so you have to use the Custom HLSL node. I pieced it all together by just digging through the docs. Now I have a truly automatic solution that both paints and populates any landscape in a believable way.
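I won’t paste my exact graph, but the shape of the kernel is roughly this. Heavy caveat: every accessor name below is a placeholder, not the literal PCG API - the Custom HLSL node lists its real input/output functions, so substitute accordingly:

```hlsl
// Sketch of a PCG Custom HLSL point-processor body. All names here are
// placeholders: GrassMap/GrassMapSampler stand in for however you bind the
// grass layer data, LandscapeMin/LandscapeSize for the landscape XY bounds,
// and the In_/Out_ accessors for whatever your engine version exposes.
float3 pos   = In_GetPosition(Index);                           // point position
float2 uv    = (pos.xy - LandscapeMin) / LandscapeSize;         // landscape 0-1 space
float  grass = GrassMap.SampleLevel(GrassMapSampler, uv, 0).r;  // grass weight here
// Zero the density outside the grass mask so downstream nodes cull the point.
Out_SetDensity(Index, grass > 0.5 ? In_GetDensity(Index) : 0.0);
```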

(post deleted by author)

Quoth Professor Farnsworth: Apparently

I wouldn’t need to manually touch up in many/most cases, but if you did need to, you could. As well it opens up the path to just-do-stuff in the engine.

And FWIW, to test, I did exactly what I described: exported the RVTs, physically ripped out the code that generates them from the landscape material, and just used the exported textures in the shader. Same performance WRT using the RVT as the source. My understanding is that unless you have ‘continuous updates’ enabled, the worst you would have to pay for a tile of the RVT is to render it once for a given mip. Once you load it and keep it resident, you won’t pay for it again unless that tile incurs some kind of update requiring it to be re-rendered. It did ultimately give me a benefit in fewer texture samples (not samplers), so there was that, but aside from maybe a 1-2 FPS difference, performance was identical.

This is where the precomputing of mippage comes in handy, since you can pay that up front and trade for disk space. As well, r.VT.Residency.Show is a handy command here; if you run out of streaming-pool overhead, you will DEFINITELY pay for it.

That all said, there’s a lot going for the idea of using the RVT as a kitchen-sink type deal for information exchange across multiple items. Per the above, being able to make a singular grass mask and just feed the RVT into the physics layer is quite handy. Adding roads, etc. via PCG can all render down to a single mask - export, boom! A custom stenciled mask built in-game on top of whatever value you wanted to add via the shader.

This is what you get via the export. I pushed the BaseColor RVT, and it comes with the alpha packed with Specular (I think - the exact layout for each format is in the docs):
