The default technique of rendering terrain by blending layers directly on the terrain geometry is limiting, both for performance and especially when trying to blend the terrain with static meshes for smooth transitions.
The solution I’m thinking about is: render the terrain to a series of textures with decreasing density centered around the camera (similar to cascaded shadow maps). These textures would then contain the blended terrain textures and could be used by any mesh, including the terrain and static meshes, at a very low cost.
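To make the layout concrete, here is a small sketch (plain C++, all names are mine, not engine API) of how a cascade’s world-space extent and texel density could be derived from a base extent and a per-level scale factor:

```cpp
#include <cmath>

// Hypothetical cascade layout: every level covers scaleFactor times the
// world-space extent of the previous one at the same texture resolution,
// so texel density drops by scaleFactor per level.
struct CascadeLayout {
    float baseExtent;   // world-space size covered by cascade 0
    float scaleFactor;  // extent multiplier per level (e.g. 2.0)
};

// World-space extent of cascade `level`, centered on the camera.
float CascadeExtent(const CascadeLayout& layout, int level) {
    return layout.baseExtent * std::pow(layout.scaleFactor, level);
}

// World-space size of one texel in cascade `level`.
float CascadeTexelSize(const CascadeLayout& layout, int level, int resolution) {
    return CascadeExtent(layout, level) / resolution;
}
```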
Since the terrain doesn’t change from frame to frame, the terrain textures would only need to be partially updated whenever the camera moves further than a certain threshold; additionally, updates could be distributed across multiple frames.
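A minimal sketch of that update test, assuming each cascade tracks the camera position it was last rendered at and only re-centers after the camera drifts more than a chosen number of texels (names are illustrative, not engine API):

```cpp
#include <algorithm>
#include <cmath>

// Per-cascade state: the camera position the cascade was last rendered at.
struct CascadeState {
    float lastCenterX = 0.0f;
    float lastCenterY = 0.0f;
};

// Snap a world position to the cascade's texel grid so the texture shifts
// in whole-texel steps (avoids swimming when re-rendering).
float SnapToTexel(float value, float texelSize) {
    return std::floor(value / texelSize) * texelSize;
}

// A cascade needs an update only when the camera has moved further than
// `thresholdTexels` texels from the last rendered center.
bool NeedsUpdate(const CascadeState& state, float camX, float camY,
                 float texelSize, float thresholdTexels) {
    float dx = std::fabs(camX - state.lastCenterX);
    float dy = std::fabs(camY - state.lastCenterY);
    return std::max(dx, dy) > thresholdTexels * texelSize;
}
```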
Far-distance terrain should be precomputed at the height map’s density.
Example image of what I mean by terrain cascades:
Selecting the correct terrain texture LOD to use for blending, and its UV coordinates, should be possible to handle through dynamic branching in the shader at fairly low cost, since most neighboring pixels can be expected to end up using the same terrain texture LOD (dynamic branching currently requires a custom HLSL node in the material, but at least it’s not a complex one).
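For reference, here is that selection logic sketched in plain C++ rather than HLSL, assuming the cascades are camera-centered squares whose extents grow by a constant factor per level (all names and the Chebyshev-distance test are my assumptions):

```cpp
#include <algorithm>
#include <cmath>

struct CascadeSample {
    int level;
    float u, v; // UV coordinates within the selected cascade
};

// Pick the finest cascade whose extent still contains the world-space
// position, then map the position into that cascade's 0..1 UV range.
CascadeSample SelectCascade(float worldX, float worldY,
                            float camX, float camY,
                            float baseExtent, float scaleFactor,
                            int numLevels) {
    float dx = worldX - camX;
    float dy = worldY - camY;
    // Chebyshev distance matches the square footprint of each cascade.
    float dist = std::max(std::fabs(dx), std::fabs(dy));
    float extent = baseExtent;
    int level = 0;
    // Walk up the cascades until the point fits inside the half-extent.
    while (level < numLevels - 1 && dist > extent * 0.5f) {
        extent *= scaleFactor;
        ++level;
    }
    // Map [-extent/2, +extent/2] around the camera to [0, 1].
    float u = dx / extent + 0.5f;
    float v = dy / extent + 0.5f;
    return {level, u, v};
}
```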
Conceptually, the system above should be possible to build with Unreal Engine as is, without the need to modify any of Unreal’s source code.
There are a few optimizations I’m not sure are possible as is:
- Rendering to multiple textures in a single pass. For terrain I expect to need at least albedo (3 channels), normals (2–3 channels depending on packing; with something like octahedral encoding, world-space normals in two channels could be good enough), roughness (1 channel), and ambient occlusion (1 channel). That is 8 channels if normals take up 3; packing normals into two channels would free one channel for metallic, specular, or emissive, or for detail height, which could also be used for tessellation and better blending with static meshes.
Anyway, the point is: it would be advantageous if rendering to multiple textures in a single pass is possible, since rendering to at least two textures will be necessary no matter what.
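Since I mentioned octahedral packing above, here is a sketch of the encode/decode pair for reference, assuming unit-length world-space normals (this is the standard octahedral mapping, not engine code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

static float SignNotZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

// Map a unit vector to two values in [-1, 1]; good enough precision for
// world-space terrain normals when stored in two 8- or 16-bit channels.
Vec2 OctEncode(Vec3 n) {
    float l1 = std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z);
    float px = n.x / l1;
    float py = n.y / l1;
    if (n.z < 0.0f) { // fold the lower hemisphere over the diagonals
        float tx = (1.0f - std::fabs(py)) * SignNotZero(px);
        float ty = (1.0f - std::fabs(px)) * SignNotZero(py);
        px = tx; py = ty;
    }
    return {px, py};
}

Vec3 OctDecode(Vec2 e) {
    float z = 1.0f - std::fabs(e.x) - std::fabs(e.y);
    float x = e.x, y = e.y;
    if (z < 0.0f) { // unfold the lower hemisphere
        float tx = (1.0f - std::fabs(y)) * SignNotZero(x);
        float ty = (1.0f - std::fabs(x)) * SignNotZero(y);
        x = tx; y = ty;
    }
    float len = std::sqrt(x * x + y * y + z * z);
    return {x / len, y / len, z / len};
}
```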
- Mipmaps for render targets. There is a checkbox for this in the editor, but I couldn’t get it to work.
- Access to the height map as a 16-bit texture (16-bit integer, not float). Internally, Unreal uses a texture where the height is encoded in two 8-bit channels, which requires nearest-neighbor filtering and manual blending in the material to avoid breaking the encoding; this is more of an annoyance than anything else. (At least there is now a node to render out the height map to a texture.)
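For illustration, the manual decode and blend would look roughly like this, assuming the high byte lands in the first of the two channels (the channel order and normalization are assumptions on my part):

```cpp
#include <cstdint>

// Reassemble the 16-bit height from the two 8-bit channels; this only
// works with nearest-neighbor sampling, since hardware filtering would
// blend the bytes independently and break the encoding.
uint16_t DecodeHeight(uint8_t high, uint8_t low) {
    return static_cast<uint16_t>(static_cast<uint16_t>(high) << 8 | low);
}

// Manual bilinear blend of four decoded texel heights; fx/fy are the
// fractional texel coordinates in [0, 1).
float BilinearHeight(uint16_t h00, uint16_t h10,
                     uint16_t h01, uint16_t h11,
                     float fx, float fy) {
    float top = h00 + (h10 - h00) * fx;
    float bottom = h01 + (h11 - h01) * fx;
    return top + (bottom - top) * fy;
}
```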
I would like to know whether anyone has tried this kind of system before, or if you can point out any obvious issues with what I’m describing. One drawback I’m aware of is that triplanar mapping won’t work with this kind of system; on the other hand, using static meshes to cover steep terrain will be much easier. Another is that it obviously takes up additional memory, but I think that’s manageable (memory cost and performance could be adjusted by changing the base resolution and the scaling factor between the LODs).
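To give a rough feel for the memory side, here is a sketch under assumed numbers (same resolution per level and two RGBA8 targets, i.e. 8 bytes per texel; the scaling factor only affects how many levels are needed to reach a given view distance):

```cpp
#include <cstddef>

// Number of cascade levels needed so the coarsest level's half-extent
// reaches at least maxDistance from the camera.
int LevelsToCover(float baseExtent, float scaleFactor, float maxDistance) {
    int levels = 1;
    float extent = baseExtent;
    while (extent * 0.5f < maxDistance) {
        extent *= scaleFactor;
        ++levels;
    }
    return levels;
}

// All levels share the same resolution, so memory is simply
// levels * resolution^2 * bytes per texel (mipmaps ignored here).
std::size_t CascadeMemoryBytes(int levels, int resolution, int bytesPerTexel) {
    return static_cast<std::size_t>(levels) * resolution * resolution * bytesPerTexel;
}
```

For example, 64 m base extent, a scale factor of 2, and a 1 km view distance need 6 levels; at 1024×1024 with 8 bytes per texel that is 48 MiB, which supports the point that resolution and scale factor are the knobs to tune.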
I will try to get a basic prototype of this running in the coming weeks (as soon as I find time for it) and report back with my results.
PS: The ability to conveniently create a height map from the terrain is already a huge help, as the height map alone is useful for many things, like terrain-conforming tree roots and shorelines for opaque water.