Landscape tiling based on camera distance (using UV lerping)

Hello everyone

I want to discuss landscape material tiling based on camera distance. The most common method to achieve it is to create two texture samples of the same texture map, give each a unique tiling, and lerp between them based on camera distance, something like this: https://prnt.sc/sl0r14, from "Getting Rid of Tiling - Distance based Texture Scale Blending - UE4 Open World Tutorials #10" on YouTube.

So I thought it would be easier if we handled all that camera-distance lerping on the UVs themselves, and that's what I came up with: https://prnt.sc/sl0usw
Of course, using the If node only gives me two solid values with no blending between them, so I get sharp edges between the far and close tiling sizes, as shown here: https://prnt.sc/sl0w2u
That makes using two texture samples, as shown in the video, more appealing. But I believe that if I found a way to make the blending as smooth as the two-sample method, the UV-lerping approach would be much less painful for artists, and better for performance, if it looked good.
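For what it's worth, the hard edge comes from the If node being a binary switch. Swapping it for a smoothstep over a transition band gives a continuous 0-1 blend factor instead. Here is a minimal sketch of that math in Python (the distances and tiling values are hypothetical placeholders, not anything from the screenshots):

```python
def smoothstep(edge0, edge1, x):
    # Standard smoothstep: 0 below edge0, 1 above edge1, smooth in between.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def lerp(a, b, t):
    # Linear interpolation, same component math as the Lerp node.
    return a + (b - a) * t

def tiled_uv(uv, camera_distance,
             near_tiling=8.0, far_tiling=1.0,
             blend_start=2000.0, blend_end=5000.0):
    """Scale UVs by a tiling factor that eases from near to far
    across the [blend_start, blend_end] distance band."""
    t = smoothstep(blend_start, blend_end, camera_distance)
    scale = lerp(near_tiling, far_tiling, t)
    return (uv[0] * scale, uv[1] * scale)
```

One caveat with scaling the UVs themselves: as the camera moves through the band, the tiling factor animates, so the texture visibly swims under you. That's part of why the two-sample method blends the sampled colors instead of the coordinates.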

First off, the calculations happening before or after don't matter UNLESS you plug them into a Custom UV pin and use the Custom UV value. Do correct me if I have this wrong, as it's mostly what I gather from usage rather than from testing it.
The theory behind that is that the calculation isn't really offloaded on the GPU unless you do that specifically.

Second, you can technically lerp the UVs too. You just need to do it on R and on G separately, then combine them back and feed the result into the UV/Custom UV input.

I don't actually think you even have to split them, though; Lerp takes any input so long as both inputs match. So you can, for example, multiply by .5 and append .5 to build a UV set, and feed the Append node's output straight into the Lerp.
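To illustrate that point: lerping a whole float2 gives the same result as lerping R and G separately and appending them back, since Lerp operates per component when the inputs match. A quick Python sketch (values hypothetical):

```python
def lerp(a, b, t):
    # Linear interpolation, same component math as the Lerp node.
    return a + (b - a) * t

def lerp_uvs_split(uv_a, uv_b, t):
    # What the split-R/G-then-Append graph does.
    return (lerp(uv_a[0], uv_b[0], t), lerp(uv_a[1], uv_b[1], t))

def lerp_uvs_whole(uv_a, uv_b, t):
    # Lerping the whole UV pair at once; same result as above.
    return tuple(lerp(a, b, t) for a, b in zip(uv_a, uv_b))
```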

Third, as I'm currently giving Virtual Texturing a new try, the distance trick is kind of pointless. That is, there are "cheats" that can look decent and maintain performance, but if you just had a material rendering without variously scaled textures as a base, then rendered a non-Virtual material on top around your immediate area, you might get a better result.
Set up correctly, VT is essentially a gigantic render target, meaning you can mix and match the final result to lay a real material on top when up close - I think. I need to experiment further with this, since it takes performance up by around 20 ms when properly enabled on a terrain with 10 paint layers per component... It would still cost a lot when you see the up-close material and have 10 layers in the same 10 m, but that's hopefully a rarity rather than the status quo for a landscape. Still, it's a performance game changer that you should keep in mind.

When I first started trying this, my instinct was that I could lerp the UVs using one set of texture samples and use the World Position or Pixel Depth nodes to control the transition. My hope was that I wouldn't need duplicate texture samples with different scaling. It kind of worked, but I was ultimately not able to find a way to make the transition band more subtle; the seam was obvious no matter what I did. So I just accepted that, and I usually make two or three layers, each with a different scale (the same texture samples for each, per landscape layer), and use the World Position node, parameterized, to get the results I want. I usually do this in conjunction with Macro Variation and Texture Variation for every landscape layer (when I have time). One of these days I'll make a master material I can use for everything and make the texture samples parameters as well. There are still some other things I need to learn first.

The best way to dither the seam area is to implement some version of a texture-based height lerp.

The cost is high for what it is, because it's a per-pixel calculation. But generally speaking, the output hides the seam so well you can barely tell what's happening.
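In case it helps, here's one common formulation of a texture-based height lerp, sketched in Python. Unreal's HeightLerp material function uses similar but not necessarily identical math; the contrast value and sample heights here are hypothetical:

```python
def height_lerp_alpha(height, transition, contrast=4.0):
    """Sharpen a plain 0-1 transition alpha with a height-texture sample,
    so the blend line follows surface detail (cracks, stones) instead of
    being a straight band. Higher contrast = crisper break-up."""
    alpha = (transition - (1.0 - height)) * contrast + 0.5
    return min(max(alpha, 0.0), 1.0)  # clamp to 0..1

def lerp(a, b, t):
    return a + (b - a) * t

# Per pixel, blend the far and near results:
# color = lerp(color_far, color_near, height_lerp_alpha(h, transition))
```

High spots in the height texture resolve to the near material first, which is what breaks up the visible line.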

To reduce the cost, you want to isolate an area around the seam - if you are using distance from the camera, then you are working with circles/a radius value.

So if you have a seam at radius Xr, you do something silly like Xr/1000, and then:
Xr minus the result for the lower edge of the zone.
Xr plus the result for the upper edge.

Upper edge > subtract lower edge > clamp = a doughnut representing the area you need to apply blending in.

With that, you can overlay a texture that applies the blend, which is only calculated per pixel within that doughnut area (lessening the performance drag quite significantly).
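The steps above can be sketched in Python like so (the 1/1000 band width is the deliberately silly example from the text, and the radius values are hypothetical):

```python
def doughnut_mask(dist, seam_radius):
    """1.0 inside a thin band around the seam radius, 0.0 elsewhere."""
    half_width = seam_radius / 1000.0   # band half-width (example value)
    low = seam_radius - half_width      # lower edge of the zone
    high = seam_radius + half_width     # upper edge of the zone
    past_low = 1.0 if dist >= low else 0.0
    past_high = 1.0 if dist >= high else 0.0
    # "Upper edge > subtract lower edge > clamp": 1 only within the band.
    return max(past_low - past_high, 0.0)
```

Multiply the expensive height-lerp alpha by this mask so the per-pixel blend work only contributes inside the doughnut; outside it, the plain near or far result passes through unchanged.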

But still, all of this looks pitiful compared to, say, using meshes that have a complete material with proper mips. I wouldn't suggest that anyone go out of their way to implement anything of the sort, particularly on Epic's landscapes that don't work/perform…