Unreal’s handling of normal maps confuses me.
I noticed that a lot of people just use an Add node to combine normal maps, which seemed really wrong to me (coming from Unity), so I started looking into it.
I learned that the maps are BC5-compressed, so they store only RG, remapped to the -1 to 1 range.
Also, it seems like a normalization occurs on anything fed into the Normal input of the material.
So since a BC5 normal is really a vector2, adding and then normalizing should be somewhat okay, no?
Well, it seems these maps then have the blue channel generated on import: I have a texture open in UE4 that says BC5 compression, yet it has information in the blue channel.
I then found this website:
This result is way nicer than an add.
So a BC5 texture is only RG, but B is then generated on import? That makes no sense for build size.
So why does the blue channel play a role when combining normal maps in the material editor?
Shouldn’t it be that you really only have RG, and B is generated in the shader just before the normals are written to the buffer?
Good day. BC5 compression throws away the B channel and stores only R and G.
The blue channel shown in the texture viewer is most likely just for visualization; the compressed texture itself has it discarded.
B is not generated on import, but in the shader, after the normal map has been sampled. Each time you use a normal map sampler node, a function somewhat like this is called:
float4 UnpackNormalMap( float4 TextureSample )
{
    // Remap RG from [0, 1] to [-1, 1]
    float2 NormalXY = TextureSample.rg * float2( 2.0f, 2.0f ) - float2( 1.0f, 1.0f );
    // Reconstruct Z from the unit-length constraint; saturate guards against a negative sqrt input
    float NormalZ = sqrt( saturate( 1.0f - dot( NormalXY, NormalXY ) ) );
    return float4( NormalXY.xy, NormalZ, 1.0f );
}
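The same unpacking math can be sketched in plain Python (an illustration of the remap-and-reconstruct step, not Unreal's actual code):

```python
import math

def unpack_normal_map(texture_sample):
    """Mimic the HLSL unpack: remap RG from [0, 1] to [-1, 1],
    then reconstruct Z so the normal has unit length."""
    r, g = texture_sample[0], texture_sample[1]
    x = r * 2.0 - 1.0
    y = g * 2.0 - 1.0
    # saturate() clamps to [0, 1] so sqrt never sees a negative value
    z = math.sqrt(max(0.0, min(1.0, 1.0 - (x * x + y * y))))
    return (x, y, z, 1.0)

# A flat tangent-space normal (0.5, 0.5 in RG) unpacks to (0, 0, 1)
print(unpack_normal_map((0.5, 0.5)))  # → (0.0, 0.0, 1.0, 1.0)
```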
If you are not doing anything fancy with the normals in your shader, then yes, you can safely go with RG only up until the final output. However, many operations do rely on having a blue channel. The blue channel plays a role in blending normal maps: adding up the RG channels of two normals and then deriving the blue channel yields a different result than deriving each blue channel separately, adding the RG components, multiplying the B channels, and normalizing afterwards.
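That difference can be shown numerically. Below is a sketch comparing the two blends just described: add-then-derive-B versus adding RG while multiplying the separately derived B channels (the latter resembles what is often called "whiteout" blending; the function names here are my own, not from any engine):

```python
import math

def derive_z(x, y):
    # Reconstruct the blue channel from the unit-length constraint
    return math.sqrt(max(0.0, 1.0 - (x * x + y * y)))

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def blend_add(n1, n2):
    """Naive blend: add RG, derive B from the summed RG, then normalize."""
    x, y = n1[0] + n2[0], n1[1] + n2[1]
    return normalize((x, y, derive_z(x, y)))

def blend_whiteout(n1, n2):
    """Add RG, but multiply the separately derived B channels, then normalize."""
    return normalize((n1[0] + n2[0], n1[1] + n2[1], n1[2] * n2[2]))

# Two unpacked, unit-length tangent-space normals
a = (0.6, 0.0, derive_z(0.6, 0.0))
b = (0.0, 0.6, derive_z(0.0, 0.6))
print(blend_add(a, b))       # B derived after summing RG
print(blend_whiteout(a, b))  # B channels multiplied — a different result
```

Both outputs are unit length, but their Z components differ noticeably, which is why the blue channel matters when you combine normals in the material editor.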
So the shader “unpacks” the blue channel? And this happens before any of the logic in the material editor node system? So you can kind of think of it as part of the Texture Sample node?
Which would explain why the blue channel has any effect at all when doing anything with the normals before feeding them into the material input…
Exactly. It happens as part of the Texture Sample node, before any material logic is applied to it.