I understand that packing grayscale channels such as roughness, specular, ambient occlusion, or displacement/height into the channels of a single RGB texture, or into the alpha channel of a diffuse texture, is a smart performance trick, since it reduces the number of texture samples needed.
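To make the packing concrete, here is a minimal NumPy sketch (the map names, sizes, and values are all hypothetical) showing how four separate grayscale maps collapse into a single RGBA texel lookup:

```python
import numpy as np

# Hypothetical 4x4 grayscale maps (roughness, specular, AO, height),
# each stored as a separate single-channel array of byte values.
h, w = 4, 4
roughness = np.full((h, w), 10, dtype=np.uint8)
specular  = np.full((h, w), 20, dtype=np.uint8)
ao        = np.full((h, w), 30, dtype=np.uint8)
height    = np.full((h, w), 40, dtype=np.uint8)

# Pack them into one RGBA texture: R=roughness, G=specular, B=AO, A=height.
packed = np.stack([roughness, specular, ao, height], axis=-1)  # shape (h, w, 4)

# One "sample" at a texel now returns all four values at once,
# instead of four separate single-channel lookups.
texel = packed[2, 1]
print(texel)  # -> [10 20 30 40]
```

In a shader, the equivalent would be a single texture sample whose `.r`, `.g`, `.b`, and `.a` components are then routed to the corresponding material inputs.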
Since the alpha channel of a normal map is typically unused, I was curious why one of these grayscale maps isn't stored there. I know that Unreal prohibits this by requiring that texture samples feeding the Normal pin have "Normal" as their sampler type, which completely discards any use of the alpha channel. Is there an unavoidable technical reason for this? Would this approach cause any extra problems if we were able to use the alpha channel?
And on a related note, why is packing several single-channel textures into a single multi-channel (RGBA) texture so much more efficient? Why wouldn't one channel of an image be equivalent to a separate grayscale image? Is it because of things like UV coordinates being shared across a single texture's channels?