Get Landscape height in material applied to a static mesh

While rendering a static mesh, like a tree or rock, I need to get the Landscape height in the material.
I know how to get the ‘Absolute World Position’ in the material; what I need is the Landscape height at that absolute world position.

Loading a copy of the height map the terrain is using and working out height values for any world position may be possible, but that seems like a hack if I could instead just query the landscape from ‘any’ material applied to any static mesh and get the height value.

To clarify, I need to do this in a material NOT applied to the landscape, but applied to a static mesh.

I’ve searched the forums and just can’t find any info or docs on this.


It’s not possible directly. Landscape height isn’t stored as a single big texture, so there is no way to sample it from an arbitrary position at the material level. You could trace under the object to find the single component below it and get the height texture from that, but even with this setup you get problems when the object sits on component seams. Alternatively, you could render the landscape into a render target that stores the depth and use that texture in the material. If your landscape isn’t too big this might work quite well; if it is too big, render just the closest landscape components and re-render when the camera shifts too much.
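For anyone who wants to try the render-target route, a rough sketch of a one-shot top-down orthographic depth capture of the landscape is below. The property names come from USceneCaptureComponent2D, but the exact settings (format, resolution, show-only list) are my assumptions, not a tested setup:

```cpp
// Sketch: one-shot top-down depth capture of the landscape into a render target.
// Assumes a landscape roughly LandscapeSize units wide, centered at LandscapeCenter.
// Settings and resolution are placeholders; verify against your engine version.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "GameFramework/Actor.h"

void SetupLandscapeDepthCapture(USceneCaptureComponent2D* Capture,
                                UTextureRenderTarget2D* HeightRT,
                                AActor* LandscapeActor,
                                const FVector& LandscapeCenter,
                                float LandscapeSize,
                                float CaptureHeight)
{
    // Single-channel float target so the depth survives untouched.
    HeightRT->RenderTargetFormat = ETextureRenderTargetFormat::RTF_R32f;
    HeightRT->InitAutoFormat(2048, 2048);

    // Look straight down from above the landscape.
    Capture->SetWorldLocationAndRotation(
        LandscapeCenter + FVector(0.f, 0.f, CaptureHeight),
        FRotator(-90.f, 0.f, 0.f));

    Capture->ProjectionType = ECameraProjectionMode::Orthographic;
    Capture->OrthoWidth = LandscapeSize;
    Capture->CaptureSource = ESceneCaptureSource::SCS_SceneDepth; // depth, not lit color
    Capture->TextureTarget = HeightRT;

    // Only the landscape should end up in the capture.
    Capture->PrimitiveRenderMode = ESceneCapturePrimitiveRenderMode::PRM_UseShowOnlyList;
    Capture->ShowOnlyActors.Add(LandscapeActor);

    // Capture once and leave it; height = capture Z minus stored depth when sampled later.
    Capture->bCaptureEveryFrame = false;
    Capture->bCaptureOnMovement = false;
    Capture->CaptureScene();
}
```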

Or, alternatively, keep a duplicate of your landscape’s heightmap as a texture that you sample in the mesh’s material. Compared to rendering the landscape height from the top, it has less of a runtime performance hit, but it consumes a bit of memory and only works for fixed-size landscapes.

huh, is it really more of a performance hit to capture the top-down landscape once and leave it untouched forever?
also, the render target would consume more memory than the heightmap texture (as it’s uncompressed), wouldn’t it?

Thanks for the feedback. It looks like just loading the ‘.png’ height map and sampling it in the material is the way to go. I have the landscape min/max world coordinates, so it’s simple to get UV coordinates to sample the texture.
In case anyone’s interested, I want to do this so I can dynamically offset vertices in a static mesh to the landscape. Here is an example from Farcry 5, where they offset tree trunks/roots to conform to the landscape.

The image is from the GDC 2018 talk about rendering terrain in Farcry 5.
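For reference, the world-to-UV mapping I mentioned is just a remap of the XY position across the landscape’s min/max bounds. Here is a rough sketch in plain C++ mirroring what the material nodes do (the names and the scale/offset values are placeholders, not my exact setup):

```cpp
// Sketch: map a world-space XY position into 0..1 UVs over the landscape bounds,
// then turn a 16-bit height sample back into a world-space Z.
// LandscapeBounds, HeightmapScaleZ and TerrainOffset are placeholders for
// whatever your own landscape uses.
struct LandscapeBounds
{
    float MinX, MinY;   // world-space min corner of the landscape
    float MaxX, MaxY;   // world-space max corner of the landscape
};

void WorldToHeightmapUV(const LandscapeBounds& B, float WorldX, float WorldY,
                        float& OutU, float& OutV)
{
    OutU = (WorldX - B.MinX) / (B.MaxX - B.MinX);
    // Depending on how the heightmap was exported, V may need flipping (1 - V).
    OutV = (WorldY - B.MinY) / (B.MaxY - B.MinY);
}

// Red-channel sample (0..1 from a linear 16-bit heightmap) to world-space height.
float HeightSampleToWorldZ(float RedSample, float HeightmapScaleZ, float TerrainOffset)
{
    return RedSample * HeightmapScaleZ + TerrainOffset;
}
```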


I think the attachment isn’t showing up, either just for me or for everyone.

Sorry, I was having problems embedding the image; I got it figured out and edited the original post.
Thank you guys for the quick feedback.

Would be quite happy if you shared your results once you’ve done it. ^^

Just offsetting the vertices won’t give you the same result as FC5 though. There’s more going on to achieve the actual blending.

Yep, you need either a soft depth test or a dithered mask to achieve that kind of blend. The latter can be applied to trees, since the heightmap already drives where the blend needs to be applied, but it will never look as cool as an actual alpha blend.

How would you actually make it so only the tree base verts get offset and the upper verts stay unaffected?

You could vertex-paint the bottom vertices, or even just mask them with a texture mask. Vertex Paint is probably easier.

EDIT: In fact you don’t even have to do that. Just paint a second UV channel on the trees as a gradient from top -> bottom, then mask only the lowest part in the material. Not sure which is the cheapest method.
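Either way the idea is the same: build a 0..1 mask that is 1 at the base and fades to 0 higher up, then multiply the offset by it. A tiny sketch of the gradient version, assuming the gradient is 0 at the top of the tree and 1 at the base (names are made up):

```cpp
#include <algorithm>

// Sketch: mask from a painted gradient (vertex color channel or a second UV channel).
// Gradient is assumed to be 0 at the top of the tree and 1 at the base;
// MaskStart controls how far up the trunk the blend reaches.
float BaseVertexMask(float Gradient, float MaskStart /*e.g. 0.8f*/)
{
    // Everything at or below MaskStart fades to 0, the very base stays fully masked.
    const float Mask = (Gradient - MaskStart) / (1.0f - MaskStart);
    return std::clamp(Mask, 0.0f, 1.0f);
}
```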

How would you make it so verts are automatically offset only as much as needed to match the terrain surface, regardless of where in the air they’re placed? :rolleyes:

^^ Compare the bottom vertices’ world position Z against the heightmap (making sure the heightmap and the landscape positions are mapped correctly).
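In code terms it’s roughly this (just a sketch of the comparison, with made-up names):

```cpp
// Sketch: per-vertex Z offset so a vertex lands on the terrain surface,
// regardless of how far above (or below) it currently sits.
// TerrainZ comes from the heightmap lookup at the vertex's world XY.
float TerrainSnapOffsetZ(float VertexWorldZ, float TerrainZ, float BaseMask /*0..1*/)
{
    // Positive moves the vertex up, negative moves it down; masked so only
    // the painted/bottom vertices are affected.
    return (TerrainZ - VertexWorldZ) * BaseMask;
}
```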

I’m pretty sure the workflow is a bit different for trees versus rock piles, though. For trees you only want the base verts to be affected, but for rock piles all verts should be affected, otherwise the mesh gets stretched a lot. For trees, as you guys suggested, masking might work. But for rock piles masking doesn’t work, and without masking the entire mesh gets flattened to the terrain surface.

Crytek has had this for grass since Cryengine 2: it bends grass patches depending on terrain slopes, so you never get a grass cluster aligned poorly on a slope, without requiring vertex paint or masking at all. I’m going to take a look at their code, see if I can figure anything out, and pass the info to @Deathrey as usual to see if he’s interested. ^^

It turns out this question has been asked in the forums.
Using 16bit heightmap in material to recreate landscape

The Epic ‘Kite Demo’ has terrain heightmap sampling in a material applied to static meshes on the rivers and lakes in the sub-level maps.

For best results the height map needs to be a 16-bit .png (you can use 8-bit, but that may cause stair stepping).
Load the .png heightmap with HDR (RGB, no sRGB) compression. In the material your heightmap sampler type should be ‘Linear Color’, and sample from the Red channel.

Here are the results I got on the Kite Demo asset ‘SM_Scree002b’ rocks. I exported the FBX and painted the vertices that the terrain blending would affect. You can also just paint the vertices per-instance in the Unreal Engine editor.

I also set the material blend mode to ‘Masked’, checked ‘Dither Opacity Mask’, and used the painted vertices to drive the ‘OpacityMask’ material attribute. The result is decent blending into the terrain that looks good from both a standing and a crouched player position. A player in a prone position would see some blending artifacts, but it’s good enough in game, IMO.
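For anyone curious how the dither part works conceptually, it boils down to something like the sketch below: a soft 0..1 opacity value becomes hard pass/clip decisions per screen position. This is just the screen-door idea written out, not the engine’s exact (temporal) pattern:

```cpp
// Sketch: how a soft 0..1 opacity becomes a hard masked result via dithering.
// In the material the 'Dither Opacity Mask' option does this for you; the engine
// uses a finer, temporally jittered pattern than this 2x2 example.
float DitheredOpacity(float SoftOpacity, int PixelX, int PixelY)
{
    static const float Thresholds[2][2] = {
        { 0.25f, 0.75f },
        { 1.00f, 0.50f },
    };
    const float Threshold = Thresholds[PixelY & 1][PixelX & 1];
    return SoftOpacity >= Threshold ? 1.0f : 0.0f;  // pass or clip this pixel
}
```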

Here are the material nodes; I got the UV calculation and heightmap lookup from the ‘KiteDemo’.

Your ‘HeightmapScaleZ’ and ‘TerrainOffset’ will differ; I found mine through trial and error. Subtracting ‘Absolute World Position’ locks the height to your terrain, so whatever your vertex’s world Z is, this node graph will give you an offset to your terrain. The ‘MakeFloat3’ vector plugs into your material’s ‘WorldPositionOffset’ attribute. The ‘KiteDemo’ has this graph as a material function, which is a great idea; I am going to do the same so it can be easily applied in any material.
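Since node graphs are hard to read from a screenshot, here is the same math written out as a rough function. It’s my own reconstruction with my own names, so double-check the scale and offset against your landscape:

```cpp
// Sketch of the node graph as plain code. HeightSample is the Red channel of the
// linear 16-bit heightmap, already looked up at the UVs computed from the
// landscape min/max bounds (see the earlier UV sketch).
struct TerrainOffsetResult
{
    float OffsetX, OffsetY, OffsetZ;   // plugs into 'WorldPositionOffset'
};

TerrainOffsetResult TerrainWorldPositionOffset(float HeightSample,
                                               float HeightmapScaleZ,
                                               float TerrainOffset,
                                               float AbsoluteWorldZ,
                                               float BaseMask /*0..1 vertex paint*/)
{
    // Heightmap sample -> world-space terrain height.
    const float TerrainZ = HeightSample * HeightmapScaleZ + TerrainOffset;

    // Subtracting the vertex's absolute world Z locks the result to the terrain:
    // whatever Z the vertex starts at, it ends up on the surface.
    const float OffsetZ = (TerrainZ - AbsoluteWorldZ) * BaseMask;

    // Only Z is offset; X and Y stay put (the 'MakeFloat3' in the graph).
    return { 0.0f, 0.0f, OffsetZ };
}
```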

If you don’t have your height map, or have modified it with sculpting, the Unreal Editor can export it for you in 16-bit ‘.png’ format.

I agree, my rocks got stretched a bit. I’ve got some tree models with roots that this could work on, and I might try to add some ground mesh to some of my tree models like the Farcry5 example. Good tip about the grass in the Crytek engine; I’m going to test this on my grass. It could solve cases where grass patches float out in the air on the edges of cliffs and ditches.

the workflow is different for anything that needs a different kind of masking, of course.
nothing has to be flattened to the terrain surface, though. you can offset the entire mesh using its center as the point of comparison (instead of per polygon), which would literally just “move down” the mesh. as long as it’s small things without collision (e.g. individual stones) you should be fine. then again, if the object is a small thing with its own center I don’t see why it wouldn’t be correctly snapped in the first place, so…
if the object in question is a cluster of meshes, then the Pivot Painter workflow would provide a good solution, since it lets you offset each piece of the cluster mesh from its individual center.
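for the whole-mesh case the comparison just uses the object’s pivot instead of each vertex, something like this sketch (the Object Position node in the material would give you that pivot; names are mine):

```cpp
// Sketch: offset an entire instance up/down so its pivot sits on the terrain.
// Every vertex gets the same offset, so small clutter meshes "move" rather than deform.
// TerrainZAtPivot is the heightmap lookup at the object's pivot XY.
float WholeMeshSnapOffsetZ(float PivotWorldZ, float TerrainZAtPivot)
{
    return TerrainZAtPivot - PivotWorldZ;   // applied uniformly to all vertices
}
```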

interesting, good info!

you can propagate the painted information from the instance to the mesh asset, no need to export anything :slight_smile:
one of the buttons of the Mesh Paint tool does exactly that. make sure you don’t use “small icons” in the editor, because if you do, that button is invisible (the small icons setup is full of bugs).

interesting results :slight_smile:
still not a fan of using the dither technique for such things myself. you get all sorts of small artifacts depending on many factors, but mostly it bothers me that the more a game relies on TemporalAA, the more you end up building an “ugly mode” for when the player disables TemporalAA :slight_smile:

As for the blend, you don’t really need any vertex paint. The vertex distance to the heightfield will define your blend. Modulate it with a noise texture and you are good to go. Keep in mind that the terrain heightfield needs to be sampled in the vertex shader, not the pixel shader, to keep it from getting expensive. Worth mentioning that the whole thing breaks on high slopes, where you need a nearest-distance-to-heightfield search instead of simply comparing Z positions. It gets expensive at that point.
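As a rough sketch of that blend (made-up names, assuming the heightfield was already sampled in the vertex shader and the distance interpolated down to the pixels):

```cpp
#include <algorithm>

// Sketch: blend factor from vertex distance to the terrain heightfield,
// broken up with a noise texture so the transition line doesn't band.
// DistanceToTerrain = VertexWorldZ - TerrainZ (sampled in the vertex shader).
float TerrainBlendFactor(float DistanceToTerrain, float BlendDistance, float Noise /*0..1*/)
{
    // 1 right at the terrain, fading to 0 at BlendDistance above it.
    const float Blend = 1.0f - std::clamp(DistanceToTerrain / BlendDistance, 0.0f, 1.0f);

    // Modulate with noise so the dither/alpha edge wanders instead of forming a flat line.
    return std::clamp(Blend * Noise, 0.0f, 1.0f);
}
```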

There is one huge downside to dithering mesh opacity instead of using alpha-blended terrain: it induces a huge amount of parallax. Fine for stills, but it is very noticeable in motion.
Think of it as being able to see through your rock indefinitely far, until something else blocks the view. In most cases that will be the terrain itself, but not always.

Speaking of offsetting grass to fit the terrain, it is a good method, but it is also situational. First of all, it still breaks on high slopes, but that is not that significant; you aren’t using grass at such angles anyway.
Secondly, it depends on heightmap size. Large heightfields will cost more to sample.
Thirdly, without a system that binds the proper heightmap to each foliage instance when using World Composition, the setup is not maintainable for production.
And even then, I am not sure how to handle the case where an instance sits on the border between two landscapes.

But the biggest issue here is the performance tradeoff. If you are using the classical approach to grass rendering, you can allow yourself to sample the terrain heightfield in the vertex shader; it is fine.
If, however, you are running a depth pre-pass in favor of zero overdraw (which seems to be becoming more of a standard nowadays), you would be doing that twice, and I’d question the feasibility of the method at all. Your vertex shader will already be loaded with stuff like wind and interaction, so adding one more fetch to it should be weighed carefully. You might end up trading off instance placement density to stay within budget, and end up in a situation where using smaller clusters yields the same performance as large clusters with the offset.

One alternative method of getting the terrain height would be using a depth scene capture, but in a wicked way. Instead of covering the whole area of interest with one capture, the capture should be done for a sector of the area. That way you can maintain a pretty high-resolution heightmap while capturing at a sane resolution. It will need a composite shader pass, but it seems doable. I’ve used this system for ocean/shore intersection detection in a large world and it was fine, but I am not a fan of introducing spikes by running captures on demand. It is also sort of a rain dance rather than a proper system.
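The sector idea, roughly: keep a fixed-size capture centered near the camera, recapture only when the camera leaves the current sector, and feed the sector origin to materials so they can rebuild UVs from it. A sketch of the recentering logic (untested, all names are mine, and the capture component is assumed to be set up as in the earlier depth-capture sketch):

```cpp
#include "Components/SceneCaptureComponent2D.h"

// Sketch: move a fixed-size capture "sector" with the camera and recapture only
// when the camera drifts too far from the current sector center.
// SectorSize matches the capture's OrthoWidth; materials need the sector origin
// (e.g. via a Material Parameter Collection) to compute UVs relative to it.
void UpdateHeightCaptureSector(USceneCaptureComponent2D* Capture,
                               const FVector& CameraLocation,
                               FVector& InOutSectorCenter,
                               float SectorSize,
                               float CaptureHeight)
{
    const float RecaptureDistance = SectorSize * 0.25f;  // hysteresis before moving

    const FVector2D Delta(CameraLocation.X - InOutSectorCenter.X,
                          CameraLocation.Y - InOutSectorCenter.Y);
    if (Delta.Size() < RecaptureDistance)
    {
        return;  // camera is still well inside the current sector
    }

    // Recenter the sector on the camera and recapture the landscape depth once.
    InOutSectorCenter = FVector(CameraLocation.X, CameraLocation.Y, InOutSectorCenter.Z);
    Capture->SetWorldLocation(InOutSectorCenter + FVector(0.f, 0.f, CaptureHeight));
    Capture->OrthoWidth = SectorSize;
    Capture->CaptureScene();  // this on-demand capture is where the spike comes from
}
```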