Best way to do triplanar projection on landscape master materials?

Hello!

This is my first time posting on the UE Forum, though I’ve participated in Discourse Forums before.

I’m currently developing a Landscape Master Material in UE5.5. While this is my first attempt, I consider myself a fast learner and have managed to create a decent version based on my experience so far. I’d greatly appreciate any feedback or suggestions you may have to help me improve.

Here’s what I have so far:

Current Features:

  • Base textures with Albedo, ORM Mask (currently using only the Roughness channel), and Normal maps (sourced from MegaScans)
  • Additional functionality includes:
    1. Distance Scaling
    2. Fresnel-driven Specular for PBR optimization
    3. Material Tint
  • Material Blending using Perlin Noise
  • Macro Variation
  • Slope-based Auto Material

At this stage, I’m looking to incorporate Triplanar Projection into the material. My goal is to add a Static Switch Parameter that allows for toggling between Triplanar and standard UV projection. So far, I’ve found many existing tutorials to be either outdated, not landscape-specific, or otherwise inconsistent in their approaches.

How would you recommend implementing Triplanar Projection in this context? I’d love to hear how you’d approach it. I need to balance visual quality with performance, as I’m aiming for a steady 60 FPS in the final project. Any general tips for optimizing materials would also be invaluable.

Future Plans:

  • Adding Nanite Height Displacement
  • Procedural Foliage Placement

Thank you for your time and insights! I’m eager to learn and improve with the help of this amazing community. As I mentioned, while I may be new to this, I’m passionate about mastering the process and welcome any advice or resources you can provide.

Best regards, Tigger

P.S. If you need me to post screenshots of the material graph, I can try to do so. IDK how that will be handled as a new user though, I may need to do a few posts to get the first level.

You do triplanar on landscapes the same way you do it on meshes.

You need to create triplanar UVs (there are multiple tutorials on this) and then adjust how many texture samples you use.

Where you calculate your UVs, add your switch between the UV calculation and the texture samplers, and plug your triplanar UVs into the second path.

Do the same for the texture samplers: if you are using a single sample you can keep it as-is, and if you are using three samples (one per axis), add another switched path like the one above.

Past that, math-wise, everything is unchanged; you still do all your specular, etc., as before.
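
For reference, here's roughly what the classic three-sample version boils down to if you wrote it as a Custom node. This is just a sketch; the names (Tex, Samp, Tiling, etc.) are illustrative rather than copied from any specific material:

```hlsl
// Classic world-aligned (triplanar) sampling: three planar projections,
// blended by the surface normal. Illustrative sketch only.
float3 TriplanarSample(Texture2D Tex, SamplerState Samp,
                       float3 WorldPos, float3 WorldNormal, float Tiling)
{
    // One planar projection per axis.
    float3 x = Tex.Sample(Samp, WorldPos.yz * Tiling).rgb;
    float3 y = Tex.Sample(Samp, WorldPos.xz * Tiling).rgb;
    float3 z = Tex.Sample(Samp, WorldPos.xy * Tiling).rgb;

    // Blend weights from the normal; sharpen, then normalize so they sum to 1.
    float3 w = pow(abs(WorldNormal), 4.0);
    w /= (w.x + w.y + w.z);

    return x * w.x + y * w.y + z * w.z;
}
```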

1 Like

I’d recommend following Visual Tech Art’s tutorial for Tri-Planar Projection. Based on what you’ve described already doing, it sounds like you know your way around the material editor, so his tutorial shouldn’t be too difficult to follow. IMO he does a great job showing you what’s common, why it’s “bad”, and what you should do instead.

IIRC, he does specifically account for landscape, even creating some variants which work better for landscapes than traditional Tri-Planar does (Enta-Planar, I believe is what he called it).

He also balances the performance of Tri-Planar by calculating the projection in the UVs of the texture, so he doesn’t have to sample the same texture 3 times (like in traditional methods). This also allows you to use the same UV set for multiple textures (eg: Albedo, Normal, ORM), whereas in traditional Tri-Planar, you’d have to repeat the same process for every texture. This means it’s scalable, particularly useful if you have many landscape layers requiring world-projection.
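
If it helps to picture it, here's a stripped-down sketch of the single-sample idea; it's not necessarily his exact implementation, and the names are just illustrative:

```hlsl
// Pick the planar projection for the dominant axis of the normal, so a
// texture is only sampled once and Albedo/Normal/ORM can all share the UVs.
// Sketch only; real versions add dithering/blending to hide the seams
// where the dominant axis changes.
float2 DominantAxisUV(float3 WorldPos, float3 WorldNormal, float Tiling)
{
    float3 n = abs(WorldNormal);
    float2 uv;

    if (n.z >= n.x && n.z >= n.y)
        uv = WorldPos.xy;   // mostly up/down-facing
    else if (n.x >= n.y)
        uv = WorldPos.yz;   // mostly X-facing
    else
        uv = WorldPos.xz;   // mostly Y-facing

    return uv * Tiling;
}
```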

He even takes it a step further in another video where he focuses on assets, tackling problems like animated meshes and scaling, if you’re interested.

1 Like

Visual Tech Art is very good and has useful information.

I also recommend Alex (BananableOffense):

2 Likes

Awesome, thank y'all so much @Frenetic and @blyxzen. I'll look at this later and let you know what I end up doing in case anyone comes upon this in the future.

Any suggestions for material optimization too? I don't care how difficult it is; I'm working on an RTX 2070 notebook and want to hit 60 FPS (ambitious, but possible. My goal is an FPS sci-fi fantasy game). I know a good deal from extensive research, but learning from others who have done it is always good :slight_smile:

My understanding, generally speaking, is that one of the most expensive single operations in the material shader is sampling a texture. It's not the most expensive thing you can do by a long shot, but it's at or near the top of the list. This is why being able to get away with a single sample vs. 3 (one per axis) is so desirable, as you can cut that overhead to 1/3 of the typical cost.

Otherwise, baking as much as possible into the texture is a good idea; it's the equivalent of pre-computation, doing/storing work up front so you don't have to pay a runtime cost.

For example, if you are using a texture and decide you need to adjust its brightness in the shader, don't. Go back to your content-creation tool and raise the brightness there, so you don't have to do any follow-up math in the shader and incur a cost at runtime.

Otherwise, things like rain, snow, etc. are layers that might benefit from a switch. Some maps in your game just might not have rain, so why have that math on? This can potentially lead to a shader-permutation explosion if you have too many options/switches, but with careful planning you can minimize this type of overhead.

1 Like

@Frenetic made the best points. Texture packing is always the best option (for example, if you find you never use the metallic and occlusion channels of your ORM, maybe you can pack the roughness map into the Albedo’s alpha). And switches are powerful, since they prevent unused logic from even running. If you have something in a material not every instance needs, use a switch and enable it only for the instances which require it.
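
As a small illustration of the packing idea (the texture and function names here are hypothetical, just to show the shape of it), one packed sample can feed two material inputs:

```hlsl
// Roughness packed into the albedo texture's alpha channel: one sample,
// two outputs. "AlbedoRoughnessTex" is a made-up name for this sketch.
Texture2D    AlbedoRoughnessTex;
SamplerState LinearWrapSampler;

void SamplePacked(float2 uv, out float3 baseColor, out float roughness)
{
    float4 packed = AlbedoRoughnessTex.Sample(LinearWrapSampler, uv);
    baseColor = packed.rgb;
    roughness = packed.a;
}
```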

A simple mistake I've seen often is not reusing logic. If you run the same material function twice, it will execute twice, even if it has the same inputs. Instead, execute logic once and use it everywhere you can. For my landscape material, I determine the world-projected UVs once and reuse them for every texture that needs projecting (use named reroutes if you like; I think they make things a little easier to understand).

If you notice a material function seems particularly expensive, take a look inside. Oftentimes material functions are bloated to support multiple use cases, which means they may be performing operations that are unnecessary for yours. Recreate the function, cutting out all the parts you don't need.

Also, something with a more selective use case is the vertex shader. I frequently work with stylized assets, so I can take advantage of this regularly; perhaps you can as well. There are two types of shaders: pixel and vertex. Pixel shaders run for every pixel on screen (this is how textures can be displayed on your mesh) and they're the default shading path in the engine. Vertex shaders, on the other hand, run for every vertex of your mesh. They can't achieve nearly the same detail as a pixel shader, but they can be far more performant. So, if you don't need the detail, you can move portions of your material onto the vertex shader instead of the pixel shader (typically via the VertexInterpolator node). You should probably look up some more in-depth tutorials on this, I'm not exactly an expert. :slight_smile:

With most other techniques, it’s up to you to determine if the gain in performance is worth the cost in quality (things like texture size, as an example). Optimization, more than anything, is about balancing what you’re willing to accept in quality vs. what you’re willing to accept in efficiency.

1 Like

Specifically about the vertex shader: Customized UVs in Unreal Engine Materials | Unreal Engine 5.5 Documentation | Epic Developer Community

Any math you put on the vertex shader should be linear math, meaning that from vertex to vertex the derived value is either the same or interpolates linearly, like a gradient vs. a set of values that skips around. This is because the derived value gets interpolated between successive vertices: the actual math is only calculated at the vertices, so any pixels in between receive a mix of two or more results rather than being calculated directly on their own.

This means that anything you do that is pixel-dependent, like PixelNormalWS, won't work well on the vertex shader. It WILL work, functionally, but it won't output pixel-perfect results, and in many cases that is less than desirable.

Thus things like UVs and some other linear math will do just fine, since you don't need to calculate them per-pixel. The documentation above has some examples of this. You can also just plug in some math and see the visuals for yourself.
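
A quick sketch of the distinction, with illustrative names (not taken from the docs page):

```hlsl
// World-space planar UVs are linear across a triangle, so computing them
// per-vertex and letting the hardware interpolate gives (essentially) the
// same result as computing them per-pixel. Illustrative sketch only.
float2 WorldPlanarUV(float3 WorldPos, float Tiling)
{
    return WorldPos.xy * Tiling;   // linear in position, interpolates cleanly
}

// By contrast, something like a Fresnel term, pow(saturate(dot(N, V)), 5.0),
// is non-linear, so evaluating it per-vertex and interpolating will smear
// the result across large triangles.
```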

That being said, if you CAN run something on the vertex shader, you are likely best served by doing so, as you will very likely see a gain.

Caveats include Nanite, which will roughly triple the cost owing to how it has to walk through a mesh for visibility, etc. This is in comparison to a regular mesh, but even with the extra overhead it should (generally) be a net benefit. Second, with regard to LODs: if your mesh is only 1 pixel on the screen, then any pixel-shader math would run 1 time, but if that 1-pixel mesh has 100 vertices, then… So just be aware there is a give-and-take to this.

2 Likes

So running on a Vertex Shader versus a Pixel Shader will be more performant, but less detailed. And running that through Nanite will essentially triple the cost because of how it deals with the triangles. What would the performance difference be for e.g. running Vertex Shader through Nanite versus Pixel Shader with/without Nanite? And would Nanite bring the quality of Vertex Shaders back up towards that of Pixel Shaders?

Side note on textures, I’m currently using 2K. Obviously the balance between quality and storage versus 4K. Which would you typically use?

Now, from what I understand from this, by combining all of these I can essentially get the sample count down to 2 per material by using the one-sample method of Triplanar and combining the Roughness into the Diffuse Alpha. Then I could use a texture array to combine all of the materials into one? I'm not entirely sure how the array works, but do you know how difficult it is and whether it works like that? My goal is not to get a AAA game to run on mobile; the target is PC and console, so I can afford a bit. But I still want it to be optimal, so would that really be worth it?

Edit: Last thing, I have distance scaling too, which requires duplicate textures. I don't know if there is a way to shrink this into one, but going with two means double the samples. With what I have right now, that's six samples per material, and it would be the same with Triplanar/whatever-planar. Shrink this down to 4 once I combine Roughness into Albedo. Is that an OK amount? I don't really know what's industry standard, but that seems pretty good to me.

sorry for this very long-winded comment :slight_smile:

1 Like

So as far as 4K textures go: unless you are on a very large screen, very close, you will likely not notice a difference between 4K and 2K. Visual Tech Art has a video on this:

For reference, I work off a 44-inch HD TV that sits about 4' from my eyes (I know…) and I can BARELY tell when I go to 4K. Between the untiling functions to mix and match, and the fact that my textures are somewhat scaled down (tiled across a smaller linear distance in worldspace), you just can't see the difference unless you are looking for it. As it is, I have to lean in to see it, and when I sit back at a playable distance, it's a non-issue.

The idea here is texels: how much worldspace/screenspace does a single pixel of that texture occupy? If you scale the texture up to cover a larger area, then you need more detail to cover that area; if you scale it down, then it tiles more, but each pixel in the texture occupies less space on screen, so you can't see whether it's 2K, 4K, etc. Heck, I can go to 1K and see a bit of a difference, but it's not game-breaking.
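
To put rough example numbers on it (illustrative figures, not measurements from any project):

```hlsl
// Illustrative texel-density check. A 4K texture tiled every 2 m gives
// 4096 / 2 = 2048 texels per meter; a 2K texture gives 1024. If that 2 m
// span only covers ~500 px on a 1080p screen, you're displaying ~250 px
// per meter, so both are already past what the screen can resolve there.
float TexelsPerMeter(float textureRes, float tilingMeters)
{
    return textureRes / tilingMeters;
}

float ScreenPixelsPerMeter(float spanPixels, float spanMeters)
{
    return spanPixels / spanMeters;
}
// If TexelsPerMeter is already well above ScreenPixelsPerMeter, a
// higher-resolution texture can't add visible detail at that distance.
```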

You also get a performance boost from using smaller textures, since there are fewer bits to load onto the card, shuffle around, etc.

My suggestion is to work natively with 4K and then either use the max texture size in the texture properties in the Unreal Content Browser, or use an LOD bias in the material itself. The LOD bias would be preferred, as that is something you can change on the fly. As well, since the source textures are 4K, you can always go back to 'full resolution'; you don't lock yourself into just 2K.

WRT Nanite and the vertex shader: if you are using the math wisely, you should still come out ahead by loading some things onto the vertex shader. It's almost always a good thing, and since Nanite creatively destroys the mesh vs. prebuilt LODs, that at-a-distance caveat with the 1-pixel mesh is less impactful in that particular regard. Nanite will still 3x the cost, but the way it simplifies should end up with fewer vertices than what you might get with a hand-made LOD at that distance. So 'yes', you should still do it, but be aware there might be use cases where you come out behind, is all.

The quality of things follows the same paradigm as regular meshes: values on the vertices will be interpolated between vertices and, if sampled in the pixel shader, are beholden to that behavior. Nanite wouldn't change this; it's more fundamental to the multiple passes in the display layer.

As for arrays, they work the same as 'regular' textures in that you still need UVs (XY coordinates), but they also append a third value (the Z), which is the slot in the array you are looking at. So you could load up 50 textures into that array, and the math you use to come up with whatever UVs you need (mix/match, untile, etc.) wouldn't change from the single-texture use case. You just append that third value to say which texture in particular you want the XY coordinates to apply to.
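
In code terms it looks something like this (a sketch; the array and sampler names are made up):

```hlsl
// Sampling a texture array: same XY UVs as a regular texture, plus a third
// coordinate selecting which slice (layer) of the array to read.
Texture2DArray LayerTextures;
SamplerState   LinearWrapSampler;

float3 SampleLayer(float2 uv, float layerIndex)
{
    return LayerTextures.Sample(LinearWrapSampler, float3(uv, layerIndex)).rgb;
}
```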

In my case I opted to use UDIMs instead, as they are virtual textures, which offer some performance benefits; also, because of the way they are addressed, I can put ALL my textures into one big lookup table and go from there. ref: Streaming Virtual Texturing in Unreal Engine | Unreal Engine 5.5 Documentation | Epic Developer Community

For my landscape material I worked off BananableOffense’s tutorial and tried both Texture Arrays and UDIMs. They both work fine, just some slight adjustment in how to calculate the UVs, but the UDIMs do seem to offer a distinct performance benefit since they stream in.

Lastly, for your distance stuff, that wouldn't change either; you can still do what you already do, but with my particular untiling solution you don't seem to really need distance textures. There is really no tiling up close, and the distance takes care of itself; it comes out in the wash, as it were.

EDIT: you’ll want something that basically follows this form:

[screenshot of the material graph]

EDIT2: the solution works well on Nanite, but it's not cheap. However, the texture sample count is very low, 20-ish for the entire thing, which includes puddles, animated raindrops (flipbook layer), snow, water, and ice, as well as physics and grass logic. It also includes a few misc features (glints on snow/ice, etc.). Alex's video is what really brings the number of texture samples down, so much, much credit to him; he literally helped me make my world work.

THIS guy was my first video, and he's got a great 4-part series, but the puddles video is the one I found most useful:

2 Likes

As far as I know, the vertex shader is only more performant because (in most cases) it results in the material being calculated fewer times.

There are over 2 million pixels at a 1920x1080 resolution. If your material is displaying on half the screen, you can imagine how many times it's being calculated. So, if your mesh has 5k verts and it only calculates per vertex, you can see the gain.

So, even if your mesh had enough verts for the vertex shader to reach the same fidelity as the pixel shader, there wouldn't be any performance gain. In fact, I imagine it would be worse because of the screen-space issue @Frenetic mentioned. If your mesh takes up less screen space, it'll cost less on the pixel shader, but the vertex shader cost will remain the same (outside of LODs). Throw in Nanite's 3x and it gets even worse.

TL;DR: The vertex shader is not inherently more performant than the pixel shader. Whether you should use it comes down to the fidelity you need in a material. The vertex shader generally isn’t useful for displaying textures, it’s more useful for displaying static or gradient values (eg: flat/gradient colors, like you see in stylized assets, like leaves on a tree).

Whether you should use the vertex shader should generally feel obvious. If it doesn’t, you probably should stick with the pixel shader.

But, I’d also add the vertex shader can be used for all parts of a material individually (eg: metallic and roughness could be on the vertex shader, but color could be pixel). Do with that what you will.

And, 4 textures really isn’t an issue. I’ve seen much, much worse. :slight_smile:

2 Likes

Yeah this, better-stated.

1 Like

I’ve got a lot of work to do :smiley:

In Visual Tech Art's video, he uses a node called VertexNormal (4:21). I did that section without the node; I assume it's custom. Is it necessary? I don't think it is, but I could be wrong.

1 Like

caveat: I am not a professional-Dev, yet. Yet to publish but that is the plan. Do with that as you will.

Red nodes offer information that comes along with the mesh (position, scaling, etc.) or some information about the world; think of the chunk of information that the CPU sends to the GPU, some of which you can access in the shader.

ref: Unreal Engine Material Expressions Reference | Unreal Engine 5.5 Documentation | Epic Developer Community

1 Like

Lol, I can believe it :slight_smile: Thanks!

1 Like

Think @Tiggerljc is talking about the purple-ish blue “VertexNormal”. Correct me if I’m wrong.

That’s a “Named Reroute Node”. It’s a reroute pin you can call by name anywhere in the material. Kind of like a variable of sorts.

No, it’s not necessary, but it is useful at times.

ahh, yes, I just assumed; my-bad

FYI to Tigger this was what I was referencing:

1 Like

Yeah I am, it’s in the line VertexNormalWS–Normalize–VertexNormal–Sign–NamedReroute.

I’m not sure what’s supposed to go there, but he didn’t give any code that would go in a named reroute. I just excluded it from the line and it seemed to work fine.

Edit: It pops up later as VertexNormalWS–Normalize.

1 Like

You can get the material, I believe; he has a Gumroad link below the video. I won't post it here, go give him the credit :smiley:

2 Likes

I believe he reuses the VertexNormal reroute often throughout the rest of the video. I think he’s just doing it so he doesn’t have to do the VertexNormalWS-Normalize bit over and over again.

But yeah, just ignoring it should work fine. However, if you ever see the “VertexNormal” node again, that’s why it exists.

1 Like