There are lots of different ways to explain the vectors involved in transforms.
Jon Lindquist went into some pretty good detail on transforms on a Twitch stream a while back (around the 13m mark):
https://www.youtube.com/watch?v=564OYZanl3A
The tl;dr version is that a normal map texture exists in tangent space, which means the vectors only have meaning relative to the local frame of the texture. In tangent space R is always the left-right axis, G is the up-down axis, and B is an axis perpendicular to the texture plane, pointing straight up towards the viewer.
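As a concrete illustration (a minimal numpy sketch, not engine code; the function name is just for this example): a texel's RGB values in [0,1] are remapped to a vector in [-1,1], so a flat-blue texel (0.5, 0.5, 1.0) decodes to (0, 0, 1), i.e. pointing straight out of the texture plane.

```python
import numpy as np

# Decode a tangent-space normal from an RGB texel stored in [0, 1].
def decode_normal(rgb):
    n = np.asarray(rgb, dtype=float) * 2.0 - 1.0  # remap [0, 1] -> [-1, 1]
    return n / np.linalg.norm(n)                  # renormalize after quantization

print(decode_normal([0.5, 0.5, 1.0]))  # -> [0. 0. 1.], the "flat" normal
```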
You could call the default vectors for each of these X(1,0,0), Y(0,1,0), Z(0,0,1). X is the tangent, Y is the bi-normal and Z is the normal. So Z is simply represented by the vertex normal of the mesh.
If you have a plane in the world facing up along Z with no rotation (and unrotated/unmirrored UVs), its transform will actually match the default layout of the tangent space texture, and thus the normal map will not be changed by the transform at all. You could turn off the “tangent space normal” option to save a few instructions if you could guarantee the floor never rotated, or if you were using world XY coordinates.
In order to support other orientations, the direction of each of these axes needs to be updated, and the normal map needs to be “transformed” into the new space.
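Here is a rough sketch of that transform (plain numpy for illustration, not the engine's actual shader code): the per-pixel tangent-space normal is treated as coordinates along the tangent, bi-normal, and normal. For the unrotated floor above, those three vectors line up with world X, Y, and Z, so the transform is a no-op.

```python
import numpy as np

def tangent_to_world(n_tangent, T, B, N):
    # World-space normal = x along the tangent + y along the bi-normal
    # + z along the vertex normal.
    return n_tangent[0] * T + n_tangent[1] * B + n_tangent[2] * N

# Unrotated, up-facing floor: the frame matches the world axes.
T = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
N = np.array([0.0, 0.0, 1.0])
print(tangent_to_world(np.array([0.2, 0.1, 0.97]), T, B, N))  # [0.2 0.1 0.97] - unchanged
```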
The engine stores the tangent vector as part of the static mesh data. The bi-normal is derived by taking the cross product of the vertex normal and the stored tangent vector.
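Sketched in the same style (the argument order and the handedness sign here are assumptions; engines typically store a sign alongside the tangent to handle mirrored UVs):

```python
import numpy as np

def derive_binormal(normal, tangent, handedness=1.0):
    # Bi-normal is perpendicular to both the vertex normal and the stored
    # tangent; the handedness sign flips it for mirrored UVs.
    return np.cross(normal, tangent) * handedness

N = np.array([0.0, 0.0, 1.0])
T = np.array([1.0, 0.0, 0.0])
print(derive_binormal(N, T))  # [0. 1. 0.]
```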
If you do 90 degree rotations, this has the effect of “swapping” or “swizzling” the vectors with a single vector rotation. I.e., rotating 90 degrees around Z turns the R (or X) axis into (0,1,0) and the G (or Y) axis into (-1,0,0), so the two channels essentially swap places (with one sign flip to keep it a rotation rather than a mirror).
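Numerically (a toy example in the same sketch, not engine code), the rotated frame just re-routes the channels:

```python
import numpy as np

# Tangent frame after a 90-degree rotation about Z.
T = np.array([0.0, 1.0, 0.0])   # old X axis now points along world +Y
B = np.array([-1.0, 0.0, 0.0])  # old Y axis now points along world -X
N = np.array([0.0, 0.0, 1.0])

n_tangent = np.array([0.2, 0.1, 0.97])
world = n_tangent[0] * T + n_tangent[1] * B + n_tangent[2] * N
print(world)  # [-0.1  0.2  0.97]: R and G have effectively swapped places
```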
Negative scaling also simply negates the normal map along that axis. Say you have a floor like the unrotated floor example and set the Z scale to -1. In that case the blue channel of the normal map would be multiplied by -1, but the other channels would be unchanged.
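In the same toy setup (an illustration of the idea, not the engine's exact math), a -1 Z scale just negates the normal axis of the frame:

```python
import numpy as np

# Unrotated floor with its Z scale set to -1: the normal axis flips.
T = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
N = np.array([0.0, 0.0, -1.0])   # flipped by the negative scale

n_tangent = np.array([0.2, 0.1, 0.97])
world = n_tangent[0] * T + n_tangent[1] * B + n_tangent[2] * N
print(world)  # [ 0.2  0.1 -0.97]: only the blue-channel contribution flips
```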
The limitation at the root of this thread is that the engine currently only stores that tangent vector for the first UV layout. So deriving additional tangent vectors requires more work and hardware tricks such as ddx/ddy.
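For reference, this is the kind of math that has to be redone for a second UV set. The per-pixel ddx/ddy approach does essentially the same thing using screen-space derivatives; the sketch below is the offline, per-triangle analog (an illustration under the standard UV-delta formulation, not what the engine ships):

```python
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    # Solve for the direction in which U increases across the triangle.
    e1, e2 = p1 - p0, p2 - p0
    duv1, duv2 = uv1 - uv0, uv2 - uv0
    det = duv1[0] * duv2[1] - duv1[1] * duv2[0]
    return (e1 * duv2[1] - e2 * duv1[1]) / det

p0, p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
uv0, uv1, uv2 = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
print(triangle_tangent(p0, p1, p2, uv0, uv1, uv2))  # [1. 0. 0.]
```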