Derive Normal Z performance VS RGB Normal Texture

Hi everyone!
While trying to deepen my knowledge on some technical aspects of the engine, I stumbled on this post by the good ol’ Sjoerd De Jong
It’s about UE3, but he basically advises dropping the Z channel of a normal map and deriving it from X and Y… but deriving a Z channel requires a square root, an expensive operation as far as I know.
So my question is: in terms of runtime performance, does saving some texture memory win over adding a square root operation to (almost) every material?
I often work with VR, where framerate is crucial… hence my concerns.

By default, UE4 uses BC5 compression for normal maps. Reconstruction of the Z channel already happens for every normal map sampler you put in your material, and it takes a few extra instructions in addition to the square root, including a dot product.
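For reference, the reconstruction described above boils down to a dot product plus a square root. A minimal sketch of the usual math (my own function name; BC5 stores only the X/Y channels, mapped into [0, 1]):

```python
import math

def reconstruct_z(x, y):
    """Rebuild the Z component of a unit tangent-space normal from its
    X/Y channels, as is typically done for BC5-compressed normal maps."""
    # Unpack from [0, 1] texture range to [-1, 1]
    nx = x * 2.0 - 1.0
    ny = y * 2.0 - 1.0
    # z = sqrt(1 - x^2 - y^2); the clamp guards against compression error
    # pushing x^2 + y^2 slightly above 1
    nz = math.sqrt(max(0.0, 1.0 - (nx * nx + ny * ny)))
    return nx, ny, nz

# A flat normal (0.5, 0.5 in the texture) reconstructs to (0, 0, 1)
print(reconstruct_z(0.5, 0.5))  # → (0.0, 0.0, 1.0)
```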

Potentially you could switch to BC1 for normal map compression. That would save you a few pixel shader instructions per normal map sampler. Overall, the performance gain will most likely be barely measurable, while the quality loss will be more than significant.

Derive Z is actually faster because it gives you an already normalized normal, so normalization can be omitted. But this is never going to be the bottleneck of your materials. Even simple forward-shaded objects have hundreds of instructions; a single SQRT is not going to be your problem.
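The "already normalized" point is easy to check numerically: by construction x² + y² + z² = 1, so a separate normalize is a no-op. A quick sketch (my own function name):

```python
import math

def derive_z(nx, ny):
    # Derive Z from tangent-space X/Y already unpacked to [-1, 1]
    return math.sqrt(max(0.0, 1.0 - (nx * nx + ny * ny)))

nx, ny = 0.6, 0.0
nz = derive_z(nx, ny)          # sqrt(1 - 0.36) = 0.8
length = math.sqrt(nx * nx + ny * ny + nz * nz)
print(length)  # → 1.0, so normalize() can be skipped
```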

Thanks guys for this information… very helpful!
With a fast DeriveZ I could actually pack more info into a single texture… keeping in mind the artifacts from DXT1/BC1 compression, of course.

Don’t do that. BC5 is a good-quality two-channel format. With BC1 you get lower-precision endpoints and fewer index bits. You also get random-looking errors from channel cross-talk (RGB is stored as a single value, not as three separate channels).
It’s better for memory and quality to store normal maps as BC5 and make the texture a bit smaller than to try channel packing.