Material optimization

@mariomguy

We are speaking about different things. When I mention a landscape layer, I mean exactly the part that takes a weightmap and brings it into the material, regardless of how it is used in there.

sRGB is precisely defined: a piecewise transfer curve that closely approximates 2.2 gamma. If someone uses another value, then it’s not sRGB but some other gamma format, or they like the different look that results and are not very careful about how they label it.

@Maximum-Dev
Meh. Don’t know why it happens. Maybe `1 - normal.r^2 - normal.g^2` becomes less than zero. That can happen if the normal map is not normalized before the B channel is taken out. You could try a safe-normalize material function on your resulting normals, with a default vector of (normal.r, normal.g, 0), to check whether that is your problem.
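
If it helps to visualize, here is a minimal HLSL sketch of that safe-normalize check (the function name is illustrative, not an engine function):

	// Guard the Z reconstruction; if 1 - x^2 - y^2 goes negative (unnormalized
	// source normal), fall back to the default vector (normal.r, normal.g, 0).
	float3 SafeReconstructNormal( float2 NormalXY ) // NormalXY already in [-1,1]
	{
		float ZSq = 1.0f - dot( NormalXY, NormalXY );
		if ( ZSq <= 0.0f )
		{
			return normalize( float3( NormalXY, 0.0f ) );
		}
		return float3( NormalXY, sqrt( ZSq ) );
	}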

Well, in that case, you can have a million layers in the landscape material, as long as the shader compiles with no more than 3-4 layers for the individual landscape components. Whatever layers and textures go unused will be removed during compilation. And with shared samplers you can use up to 128 textures without any significant reduction in performance, unless you specifically intend your product to work on PS4 or non-DirectX platforms, in which case you are limited to 13 texture samplers in your landscape material. Your biggest bottleneck with landscape will always be the shader complexity first and foremost: the shader cost to render each pixel to the screen. Since the landscape and the sky take up the most pixels of any shader in an open-world environment, if either of those two is too expensive, it will definitely eat away at your overall performance the most.

@mariomguy

Please define what you are referring to as shader complexity.

Shader complexity: how long it takes for one pixel to be rendered. A more complex shader will take more time to render. In the image below, you’ll notice the shoreline components are more expensive than the components further inland. That’s because the shoreline has the sand layer blended into the rock and grass layers. Other components only have grass, so they’re even cheaper. The more layers you have, the higher the shader complexity becomes, because all of the layers affecting a component need to be blended together.

[Attached image: shader complexity view of the landscape]

@mariomguy

And I am discussing what contributes the most to the render time of one pixel, and what is the first thing to bottleneck it. And in the majority of cases (I am speaking mostly about high-end desktop here), for landscapes it will be neither the layer count nor the shader math. Texture fetches will be your bottleneck.

The shader complexity view you’ve linked can be a good tool to quickly estimate performance, and is quite handy when dealing with things like foliage overdraw, but it in no way conclusively indicates shader performance, especially with complex shaders. Essentially it is just the instruction count mapped to a color range.

These black artifacts still happen. They also happen when I append flat blue, too. I have tried default compression and masks compression; the result is the same as below.

Edit: Upon further testing I noticed this only happens on the landscape. When the same materials are applied to static meshes, the normal maps look fine…

Edit 2: It’s happening on every landscape layer regardless of whether the normal is packed or not. I have simply blended the materials using MatLayerBlend_Standard. Not sure what’s wrong. I’m breaking it down and trying to find the spot that’s causing the issue.

@Maximum-Dev

Are you sure it is caused by normal maps?
The issue I’ve described looks slightly different:

http://image.prntscr.com/image/29c8654974d44eaca22d20f61c5946fa.png

http://image.prntscr.com/image/6cb1d6eaf0c34743a9aacde009978b2c.png

http://image.prntscr.com/image/fa7da6ee163d4ae8be0e95af170d8191.png

http://image.prntscr.com/image/77044f67d49d4c84948246abc426a0ae.png

Alternatively, this is exactly how it is done if you use the default normal map sampler node:


	Normal.xy = Normal.xy * 2 - 1;
	Normal.z = sqrt( saturate( 1 - dot( Normal.xy, Normal.xy ) ) );
	

@Deathrey, Thanks.

Another question. Texture count affects the shader complexity; does texture size have the same effect? Does using smaller textures reduce shader complexity or not?

@Maximum-Dev
Short and general answer: yes, it affects rendering performance.
The long and specific answer would be a bit more complicated: texture size does not change the instruction count, but smaller textures fit better in the texture cache, so fetches complete faster.

This resource touches on the problem:

http://http.developer.nvidia.com/GPUGems/gpugems_ch28.html

@Maximum-Dev

Another example of something that could be optimized.
Each time you use a normal map sampler in your node network, an UnpackNormalMap function is called; it looks something like this:


float4 UnpackNormalMap( float4 TextureSample )
{
	// Remap the stored [0,1] channels back to the [-1,1] range.
	float2 NormalXY = TextureSample.rg;
	NormalXY = NormalXY * float2( 2.0f, 2.0f ) - float2( 1.0f, 1.0f );
	// Reconstruct Z from X and Y, assuming a unit-length normal.
	float NormalZ = sqrt( saturate( 1.0f - dot( NormalXY, NormalXY ) ) );
	return float4( NormalXY.xy, NormalZ, 1.0f );
}

This is not many instructions, but if you are sampling a normal map, say, 10 times, this function would run 10 times. It makes sense to sample the maps, blend the results, and only then unpack the normals manually, once.
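
As a rough HLSL sketch of that idea (the texture, sampler, and weight names here are hypothetical, not engine code):

	// Assumes Texture2D NormalMapA/NormalMapB and SamplerState SharedSampler
	// are declared elsewhere. Blend the raw [0,1] samples first, then pay
	// for the unpack math only once.
	float4 RawA = NormalMapA.Sample( SharedSampler, UV );
	float4 RawB = NormalMapB.Sample( SharedSampler, UV );
	float4 Blended = lerp( RawA, RawB, BlendWeight ); // still packed [0,1]

	// Single manual unpack, same math as UnpackNormalMap above.
	float2 NormalXY = Blended.rg * 2.0f - 1.0f;
	float3 Normal = float3( NormalXY, sqrt( saturate( 1.0f - dot( NormalXY, NormalXY ) ) ) );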

I’ve been experimenting with using DXT normal maps for terrain lately, and I can say that for the project I’m working on, I would take the quality downgrade from the more lossy compression in favor of better channel packing. So I’ve settled on using the G and A channels for normal Y and X, the B channel for the heightmap, and the R channel for the gloss map.
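
For reference, unpacking that layout would look something like this in HLSL (texture and variable names are hypothetical):

	float4 Packed = PackedLayerTex.Sample( LayerSampler, UV );

	// X lives in A, Y in G (DXT5nm-style packing), since those channels
	// survive DXT compression with the least error.
	float2 NormalXY = float2( Packed.a, Packed.g ) * 2.0f - 1.0f;
	float3 Normal = float3( NormalXY, sqrt( saturate( 1.0f - dot( NormalXY, NormalXY ) ) ) );
	float Height = Packed.b; // heightmap
	float Gloss = Packed.r;  // gloss map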
