Nanite and Lumen slightly alter the focus of how textures are handled in UE5 (for example, legacy AO is set to be superseded by Lumen), so I wanted to revisit the texture packing method I’ve been using on my projects up until now – the more I read, the more I realised how un-optimised and redundant it was. Even with the poly counts Nanite now makes possible and the help provided by virtual texturing, there’s still no getting around the limits of the good ol’ texture pool. I’m by no means an expert in compression standards or how Unreal handles them, so I wanted to improve my knowledge by throwing this over to the community. Any feedback is much appreciated.
This process is inspired by a few forum posts here –
https://polycount.com/discussion/184005/normal-map-compression-and-channel-packing
https://forums.unrealengine.com/t/using-red-and-green-channels-as-normal-map-and-using-blue-as-mask/104587/10
and this article –
https://www.reedbeta.com/blog/understanding-bcn-texture-compression-formats/
For this exercise I’ll be focusing only on opaque materials without a height map – I think these kinds of textures form the majority of background props in most games and can be used for things like grass, gravel, sand, bricks, rocks, wood, metal surfaces, etc.
The first material graph is my current ‘catch all’ for most static materials (for translucent materials I’d normally put the opacity in the alpha channel of the basecolour RGBA).
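In shader terms that graph boils down to three texture fetches. As a rough HLSL sketch (the names and the AO/Roughness/Metallic packing for the composite map are my assumptions, not actual engine code):

    // Sketch of the 3-texture ‘catch all’ layout
    float3 baseColour = BaseColourTex.Sample(TexSampler, UV).rgb; // DXT1
    float3 normalTS   = NormalTex.Sample(TexSampler, UV).xyz;    // BC5; the ‘Normal’ sampler type remaps and rebuilds Z
    float4 masks      = CompositeTex.Sample(TexSampler, UV);     // e.g. AO / Roughness / Metallic packed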
This second material is my first updated option; it derives the R and G channels of the normal from the two alpha channels of the textures. A normal map only really needs two stored channels – since a tangent-space normal is unit length, Blue can always be reconstructed from Red and Green. The packed channels need to cover a range of -1 to 1 (not just 0 to 1 like the other channels in the material), which is achieved with the ‘ConstantBiasScale’ node (bias -0.5, scale 2), and then the Blue channel is rebuilt with the ‘DeriveNormalZ’ node. I’d make this into its own custom node for a project to save a bit of performance across multiple materials.
Note: some users have suggested adding a ‘Normalize’ node after DeriveNormalZ, but I’ve found it makes no visual difference in my tests, so I saved on the instruction. That checks out mathematically – DeriveNormalZ picks Z precisely so that the resulting vector is unit length, so normalising again is redundant.
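If you do wrap it up in a Custom node, the HLSL is only a few lines. A minimal sketch, with X and Y as the node’s float inputs fed from the two alpha channels:

    // ConstantBiasScale (bias -0.5, scale 2): remap 0..1 to -1..1
    float2 xy = float2(X, Y) * 2.0 - 1.0;
    // DeriveNormalZ: rebuild Blue; saturate guards the sqrt against
    // slightly-out-of-range values introduced by compression
    float z = sqrt(saturate(1.0 - dot(xy, xy)));
    return float3(xy, z);

Set the output type to CMOT Float3 and plug it straight into the Normal input.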
Another thing to note is that this option gives the best result in terms of compression artifacts, because in a DXT5/BC3 texture the alpha channel is compressed in its own block from 8-bit values, with no cross-talk from the colour channels. The RGB channels are squeezed harder: each compressed block stores two 16-bit endpoint colours, which works out to 5 bits for R and B and 6 bits for G (green gets the extra bit because our eyes are most sensitive to it).
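For reference, here’s what one 4x4-pixel block looks like in each format:

    BC1/DXT1, 8 bytes: two 16-bit 5:6:5 endpoint colours + a 2-bit index per pixel
    BC3/DXT5, 16 bytes: a BC1-style colour block, plus two 8-bit alpha endpoints + a 3-bit index per pixel

So BC3’s alpha gets its own endpoints and finer indices, which is why it holds up so well for a packed normal channel.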
This third option cuts down on overhead by doing away with the alpha channel on the BaseColour and instead using the G channel (remember, slightly better quality than R and B) of our composite texture. This does introduce heavier compression and cross-talk from the other channels, but imo it’s hardly noticeable, especially if the textures tend to sit in the background. We surrender the Specular slot here, but for this material I just added it back as a constant 0 – adjust to your own liking.
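In the same sketch form as before (the exact channel assignments are my reading of the graph, so treat them as an example):

    float3 baseColour = BaseColourTex.Sample(TexSampler, UV).rgb; // back to DXT1, no alpha
    float4 packed     = CompositeTex.Sample(TexSampler, UV);      // DXT5: masks in R/B, normal X/Y in G/A
    float2 xy         = float2(packed.g, packed.a) * 2.0 - 1.0;
    float3 normalTS   = float3(xy, sqrt(saturate(1.0 - dot(xy, xy))));
    float  specular   = 0.0;                                      // the surrendered Specular slot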
Lastly, I reduced my 4K textures down to 512x512 to try to exaggerate the differences after compression. Yes, the third option began to show its compression more, but it really wasn’t that bad – you decide.
Overall I’m pretty happy with the new system. I noted that the base pass instruction count was the same for materials 1 and 3, and only one extra instruction for material 2 (even though 2 and 3 both have three additional nodes in them to handle the normal reconstruction). There are two important metrics here: 1. texture lookups (how many texture fetches the shader makes each time a mesh using these materials is drawn) and 2. texture pool size (the total amount of texture data resident in graphics memory at any one time). This technique obviously reduces lookups by 33% (two fetches instead of three); what I’m less sure about is whether fewer textures with alpha channels actually save memory over more textures without them, because adding an alpha channel pushes a texture from DXT1 to DXT5 and doubles its size. Perhaps someone with more knowledge could shed some light on this.
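Running some rough numbers on the pool question, assuming the engine’s usual format choices (DXT1/BC1 at 4 bits per pixel for RGB-only textures, DXT5/BC3 at 8 bits per pixel once the alpha channel is used, BC5 at 8 bits per pixel for a standalone normal map):

    Material 1: BC1 basecolour (4) + BC5 normal (8) + BC1 masks (4) = 16 bits per pixel, 3 lookups
    Material 2: BC3 basecolour (8) + BC3 masks (8) = 16 bits per pixel, 2 lookups
    Material 3: BC1 basecolour (4) + BC3 masks (8) = 12 bits per pixel, 2 lookups

By that reckoning, option 2 should be roughly pool-neutral against the original while still saving a lookup, and option 3 should save a lookup and about 25% of the pool footprint.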