Normal map import problem

So I’ve been having some problems with normal maps. The normals look fine outside UE, but as soon as they get imported (via an .sbsar file), weird artifacts appear and the overall quality drops.

I’m not sure if it’s an issue with the import process or with the actual substance published by Substance Designer (again, the original texture that comes out of Substance Painter looks fine).

The attached images are screenshots of how the jacket looks in Substance Painter and in UE, along with its normal maps (both 2K) outside and inside UE.


The textures in Substance Painter are raw RGBA, while the textures in UE4 are DXT compressed. DXT (S3TC) is a block-level compression, and with such high frequencies (small details) these kinds of artifacts are, alas, to be expected. A higher-resolution texture would help, or you could adapt your UVs to give the high-frequency parts more space.
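To see why block compression hurts exactly this kind of detail, here is a rough Python sketch of how one channel of a BC4/BC5-style block is quantized (real encoders are more sophisticated, and this ignores the second BC4 mode; it is an illustration, not an actual codec):

```python
def compress_block_bc4(block):
    """Quantize a 4x4 single-channel block roughly the way a BC4/BC5
    channel does: pick two endpoints, then snap every pixel to one of
    8 shared palette values (2 endpoints + 6 interpolated values)."""
    lo, hi = min(block), max(block)
    if lo == hi:
        return list(block)
    palette = [hi + (lo - hi) * i / 7 for i in range(8)]
    return [min(palette, key=lambda p: abs(p - px)) for px in block]

# A high-frequency block: 16 distinct values that must all collapse
# onto at most 8 shared palette entries.
noisy = [0.0, 1.0, 0.2, 0.9, 0.1, 0.8, 0.05, 0.95,
         0.3, 0.7, 0.15, 0.85, 0.0, 1.0, 0.4, 0.6]
err = max(abs(a - b) for a, b in zip(noisy, compress_block_bc4(noisy)))
```

The key point is that all 16 pixels of a block share one tiny palette, so densely packed detail (like fine thread in a normal map) gets smeared toward those few values.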
Best regards,


Also, make sure the normal map is using normal map compression, or it won’t render correctly.

Thanks for the replies!

The texture was compressed as Normal Map. The problem here is, no doubt, the compression itself and its losses. Is there no way of maintaining detail this small, then? I mean, my UVs were already laid out so that those detailed sections get as much texture space as possible, and I doubt bumping up to 4K would make a big enough difference to justify the performance hit anyway.

If you set the texture group to UI it will be uncompressed. Then I’m sure you can use a ConstantBiasScale (or some such) on it to remap the channels to between -1 and 1, and plug that into the normal input.
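The remap itself is trivial; a Python sketch of what a ConstantBiasScale-style node computes (the bias and scale values below are my assumption for going from a [0, 1] texture sample to a [-1, 1] normal):

```python
def constant_bias_scale(sample, bias=-0.5, scale=2.0):
    """Mimics a bias/scale node: output = (input + bias) * scale.
    With bias = -0.5 and scale = 2.0, a raw [0, 1] texture sample
    is remapped to the [-1, 1] range of an unpacked normal."""
    return tuple((c + bias) * scale for c in sample)

# A "flat" normal-map texel (0.5, 0.5, 1.0) unpacks to (0, 0, 1).
flat = constant_bias_scale((0.5, 0.5, 1.0))
```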

It may need a tangent-to-world-space transform first; I’m not sure.

A better alternative, IMO, would be to not render that pattern into the normal map and to make a tileable texture out of it instead. You can get much higher detail in the thread from a tiny image, maybe 128 pixels.
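The idea is that a tiny texture repeated with wrapped UVs carries far more effective detail per screen area than one stretched region of a 2K map. A minimal sketch of wrapped-UV sampling (the texture here is a toy 2x2 grid, purely for illustration):

```python
def sample_tiled(texture, u, v, tiles):
    """Sample a small tileable texture with repeating UVs, i.e. the
    equivalent of frac(UV * Tiling) feeding a texture sample."""
    h, w = len(texture), len(texture[0])
    fu = (u * tiles) % 1.0   # keep only the fractional part
    fv = (v * tiles) % 1.0
    return texture[int(fv * h) % h][int(fu * w) % w]

tex = [[0, 1],
       [2, 3]]  # toy 2x2 "tileable" texture
```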

I’m pretty sure leaving it uncompressed would be just as bad as switching to 4K. The tileable texture option seems like a very good idea; I’ll try that.


Only as far as storage space is concerned; once it’s in use it takes up the same memory regardless. But yes, I think the tileable approach is the better solution.

If you can tile it, that would be much better. Character work uses that type of technique these days; you just need a mask for the area that would be tiled.

Compressed normal maps use 25% of the uncompressed texture memory, and doubling the texture width and height makes memory usage 4x. So naively increasing the resolution does not help at all; a tileable texture is the only way to get that high-frequency pattern.
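The arithmetic, spelled out (format rates assumed: uncompressed RGBA8 = 32 bits per pixel, BC5/DXT5 = 8 bits per pixel; mip chain ignored for simplicity):

```python
def texture_bytes(width, height, bits_per_pixel):
    """Top-mip memory cost of a texture at a given format rate."""
    return width * height * bits_per_pixel // 8

uncompressed_2k = texture_bytes(2048, 2048, 32)  # 16 MiB
bc5_2k = texture_bytes(2048, 2048, 8)            # 4 MiB: 25% of uncompressed
bc5_4k = texture_bytes(4096, 4096, 8)            # 16 MiB: 4x the 2K cost
```

So a compressed 4K map costs about as much as an uncompressed 2K one, which is why neither option really beats tiling.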

What I noticed is that if you haven’t flipped the normals in Photoshop (or before importing in general), they have a weird appearance. So make sure you flip the green channel in the texture adjustments :)

I’ll go with making the tileable texture and building a mask to cover that specific area. As for flipping normals, I’m almost certain they don’t need any adjustment, since they are behaving as they should. The problem here is the detail lost during compression.

I’ll post the results here as soon as I can.

Isn’t there a “use BC5 for normal map compression” option in the project settings? Do you get better results if you use that?

There is an option to use DXT5 instead of BC5. The two formats differ in only one of the two channels: BC5 stores both components as BC4-style blocks with 8-bit endpoints and 3-bit indices (meaning six interpolated mid values), while DXT5 stores one component in a DXT1-style color channel with 6-bit endpoints and 2-bit indices (meaning only two mid values). Both formats use the same amount of memory and are almost identical in performance (BC5 might have a slight edge).
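To make the palette difference concrete, here is a sketch of the two channel encodings (simplified: real blocks also choose indices per pixel, and which normal component lands in which channel depends on the engine’s swizzle):

```python
def bc4_channel_palette(e0, e1):
    """BC4/BC5-style channel: two 8-bit endpoints, 3-bit indices ->
    an 8-entry palette (2 endpoints + 6 interpolated values)."""
    return [e0 + (e1 - e0) * i / 7 for i in range(8)]

def dxt1_green_palette(e0, e1):
    """DXT1-style green channel: endpoints quantized to 6 bits,
    2-bit indices -> only a 4-entry palette (2 endpoints + 2 mids)."""
    q = lambda v: round(v * 63) / 63  # 6-bit endpoint quantization
    e0, e1 = q(e0), q(e1)
    return [e0, e1, (2 * e0 + e1) / 3, (e0 + 2 * e1) / 3]
```

Twice the palette entries and full 8-bit endpoints per channel is why BC5 is generally the better choice for normals.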

Alright, here are the results of using a tileable texture:

I made a 128x128 tileable image, created a mask, and then blended it with the existing normals (the high-poly-to-low-poly bake). The results, while not as good as inside Substance Painter, are more than enough for what I need. Another advantage of this technique: since the masked parts no longer need as much texture space in the original normal map, I can rearrange the UVs so that other areas get more detail.

Thanks again!

When you say “blended with the existing normals”, do you mean an actual blend? That would be incorrect. I suggest you use the BlendAngleCorrectNormal function.
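For reference, angle-corrected blending is essentially the Reoriented Normal Mapping formula; a Python sketch under that assumption (UE’s function may differ in details, and the inputs here are unpacked tangent-space normals in [-1, 1]):

```python
import math

def blend_angle_corrected(base, detail):
    """Reoriented Normal Mapping blend: rotates the detail normal onto
    the base normal instead of just adding/averaging the two vectors."""
    tx, ty, tz = base[0], base[1], base[2] + 1.0
    ux, uy, uz = -detail[0], -detail[1], detail[2]
    d = tx * ux + ty * uy + tz * uz
    r = (tx * d / tz - ux, ty * d / tz - uy, tz * d / tz - uz)
    length = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
    return tuple(c / length for c in r)
```

A useful sanity check: blending a flat detail normal leaves the base unchanged, and blending detail onto a flat base returns the detail, which a plain add-and-normalize blend does not guarantee.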

That’s exactly what I did: I used BlendAngleCorrectNormal to combine the normals. I thought the term “blend” would be correct given the name of the function.