After taking a closer look at the code, I forced the lightmaps to grayscale so I could inspect the luminance (Y) component in isolation. The tile-like artifacts were still visible, so it's fair to say they come from the luminance component.
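(For clarity, by "forcing to grayscale" I mean replacing each texel's RGB with its luminance before encoding, roughly like the sketch below; the FRGB struct and the Rec. 709 weights are purely illustrative, not the engine's actual code or coefficients.)

// Minimal illustration of the grayscale check: collapse each texel's RGB to its
// luminance so only the Y component survives into the encoded lightmap.
struct FRGB { float R, G, B; };

static FRGB ForceGrayscale(const FRGB& Texel)
{
    // Rec. 709 luma weights, chosen here purely for illustration.
    const float Y = 0.2126f * Texel.R + 0.7152f * Texel.G + 0.0722f * Texel.B;
    return { Y, Y, Y };
}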
Then I noticed this in the HQ encoding scheme:
float Residual = LogL * 255.0f - FMath::RoundToFloat(LogL * 255.0f) + 0.5f;
...
DestCoefficients.Coefficients[1][3] = (uint8)FMath::Clamp<int32>(FMath::RoundToInt(Residual * 255.0f), 0, 255);
This effectively means HQ lightmaps feed an extra 8 bits per texel into the texture compressor (16 bits in total for the log-luminance), and since the residual is stored in a second texture (the A channel of the spherical harmonics texture), the luminance ends up with more bits available after texture compression.

I think this is the answer to why LQ encoding looks bad with texture compression, but I don't have any good advice. Trying to represent HDR data (lightmaps are essentially HDR) in 8 bits before compression and roughly 3 bits after compression is just too hard. If you want to take a look at the best GPU-supported compressed HDR format, see BC6H Format - Win32 apps | Microsoft Learn.
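To make the precision difference concrete, here is a minimal, self-contained round-trip sketch in plain C++ (std::round stands in for FMath::RoundToFloat, and the reconstruction formula is my assumption about how a decoder would recombine the two bytes, not the engine's actual shader code):

#include <cmath>
#include <cstdint>
#include <cstdio>

int main()
{
    const float LogL = 0.123456f; // example log-luminance value in [0, 1]

    // Single-byte path: quantize LogL to 8 bits.
    const uint8_t Quantized = (uint8_t)std::round(LogL * 255.0f);

    // Two-byte path: also store the sub-quantization remainder in a second byte,
    // mirroring the Residual computation quoted above (range [0, 1)).
    const float Residual = LogL * 255.0f - std::round(LogL * 255.0f) + 0.5f;
    const uint8_t ResidualByte = (uint8_t)std::round(Residual * 255.0f);

    // Reconstruction (my assumption, not the engine's shader):
    const float LogL_8bit  = Quantized / 255.0f;
    const float LogL_16bit = (Quantized + (ResidualByte / 255.0f - 0.5f)) / 255.0f;

    std::printf("error with 8 bits : %g\n", std::fabs(LogL_8bit  - LogL));
    std::printf("error with 16 bits: %g\n", std::fabs(LogL_16bit - LogL));
    return 0;
}

In this example the error drops from about 1.9e-3 to about 3.5e-6, i.e. the residual byte effectively turns the 8-bit log-luminance into a ~16-bit one before it ever hits the block compressor.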