Multiple Normal Maps on Separate UV Channels?


I was just testing out some ideas to create a material that would allow me to blend two sets of normals together from separate UV channels on a single asset. The goal was to see if I could achieve something like the little edge chips/cracks seen in the P.T. Silent Hills demo (screenshot below).

What I was attempting to do is create one UV channel with all my surfaces unwrapped normally, with standard UV seam placement, for use with a tiling wall texture. On a second UV channel, I would unwrap all the chamfered edge polys and align them in a single direction that could then take a tiling ‘worn edges’ normal map. I can assign the multiple UV channels with the TextureCoordinate node in the material editor, but the way the normals plugged into the second UV channel react to lighting is wrong. UE3 had issues with correctly handling lighting direction from normal maps assigned to multiple UV channels, and I was hoping this might have changed in UE4, but that doesn’t appear to be the case.

Attached are a few images illustrating my test setup. Note that I made the bricks fairly extreme just to see what was going on. Also, it’s always the second UV channel that experiences the bad lighting flips. If I swap UV channels so the bricks are on UV0 and the tiled rough edges are on UV1, then the rough edges are reversed in odd directions (it’s just harder to see in a still image).

If this issue with the way normals are handled on a second UV channel isn’t something that can be resolved, is there a particularly good way to go about adding a tiling strip of edge decals to corners like the P.T. image below? It’s clearly a tiled effect that gets repeated elsewhere, so they’re not just using high-res custom models everywhere. In particular, note the rounded corner, which isn’t conducive to the standard decal sticker projectors. I’m guessing they have a setup similar to Cryengine’s deferred decal materials, which can be applied to any arbitrary geometry (the decal simply writes whatever material inputs are set on top of the rest of the deferred render pass), but that’s currently not something UE4 allows (I might make a request post later, since it’s an insanely useful tool for level art built on tiling textures).

Reference image from P.T.

My test setup:

Yep, you can definitely set a different UV channel for different normal maps and then combine them. It’s one way of combining a general normal map with small surface detail.

…Do you care to elaborate on how to do that without getting normal/lighting errors? In case it wasn’t clear from my first post, here’s an animated gif showing how changing UV channel 0 affects the lighting/normals of UV channel 1 in an undesirable way. The second UV channel (the one with the obvious brick normals applied) never changes, so regardless of what I do to the first UV channel, the bricks’ shading shouldn’t move at all if this system works the way you’re saying it does.

This obviously isn’t the case, however, since the shading and highlights on the bricks shift whenever I alter the first UV channel - even making the bricks look inverted at times when I move the other set of UVs around.

  • UV0 - UVs shown off to the right moving around
  • Arrow texture - mapped to UV0 (the one moving around)
  • UV1 - Never changes
  • Brick normals - mapped to UV1 (lighting should be static, not moving around)

I’m not sure if this is the source of your problem, but, as far as I can tell, your normal map is upside down.

Your UV shells should be arranged exactly the same in both of your UV channels. You answered your own question with your gif - rotating the first UV channel affects how the surface is rendered in engine, and will cause shading errors if your 2nd channel doesn’t have the faces oriented in the same direction.

EDIT: When I say “arranged exactly the same,” I mean that you shouldn’t rotate them on one and leave them unrotated on the other. If you rotate a face in one UV channel, you must rotate it the same in the 2nd UV.

No, you should be able to have completely different UVs in each channel and use them the way you want.
Maybe it’s my screen, but the normal map doesn’t look like the right color - are you sure it’s using Normal Map compression?
What’s happening there is that the normals of the surface are getting turned around, and that doesn’t make much sense. Changing the first UV channel shouldn’t affect the normals in any way.

Actually, you’re wrong. UV shell direction in different UV channels does affect how normal maps render, in the same way mirrored normal maps do. The engine struggles because the differing UV shell directions give each vertex conflicting normal information.

OP, a simple test I would suggest: map your normal map to UV set 0, lightmaps to UV set 1, and your diffuse to UV set 2. That should solve the issue, because the normals will be controlled by set 0 and won’t conflict. Hope that makes sense - this worked in UDK, but it might not work in UE4.

No, UV direction should not in any way change the effect of a normal map or the normals of the mesh; there’s nothing that’s tied into that. And certainly there shouldn’t be an issue where the first UV channel is affecting what happens on the second.

Mirrored normal maps have an issue because mirroring the UVs flips the mesh; if you’re only rotating a normal map, it’s not a problem.

You’re right! UV direction in different sets shouldn’t affect the normal map, but for some reason in Unreal it does. OP even notes that in his initial post. This was a known problem in UE3 and it seems to have carried over, sadly. The best way to solve it is to use your unique normal map in UV set 0.

This is just unequivocally 100% wrong, though - I’m not sure why this misinformation keeps getting posted when a five-minute test in the engine itself will show it’s factually incorrect. It’s the exact issue I posted in the animated GIF above, where I put the arrows in UV0 and brick normals in UV1 and rotated only UV0, which resulted in the highlights/shadows shifting around on the unchanged brick normals.

The whole reason I can’t simply map all the normals to the first UV set is the issue shown in my first post. For large level architecture, there’s a need for heavily repeated tiling normals using one UV set, and then another set of normals applied to the corner edges of geometry that requires being mapped differently on a separate UV set. If these edge textures were only albedo/roughness/metallic/etc. it would work, but as soon as there’s a normal in there the whole thing breaks because the lighting is incorrect. Ideally we could just handle the second set of normals with manually-created decals like Cryengine allows, as I ask for in this post (apparently to no avail :().

I’m going to do some tests and try getting something working with this technique tonight - hopefully something quick to set up with good results. None of the edges where anything is painted look hard; I wish I had P.T. so I could try to break it down more.

I’m definitely interested if you’re able to get something working, but I don’t have high hopes without the lighting/rendering/vertex math in the core engine being adjusted in some way.

Regarding the P.T. example - that’s a pretty subtle use of the technique. Check out the images in my post here (the text is basically irrelevant for what we’re discussing here) - the really obvious normal mapped edge cracks are probably a better test scenario.

Alternatively, try to replicate the stonework edges from Z-enzyme’s post in the same thread (except with normal maps). The only thing I’d keep in mind here is that while the stonework in this post could likely survive as a floating set of 1-bit alpha-masked cards (which would then light correctly), the Cryengine shots in the previous example show the kind of smooth alpha blending I’d really like to see (which, as it stands in UE4 right now, won’t light the same as the rest of the opaque geometry).

Nick! is right, the tangent basis is determined by the first UV channel. That is the basis used to transform from tangent space into world space, which is a required step when using tangent-space normal maps. The engine actually only stores a single tangent vector. By taking the cross product of the tangent vector and the vertex normal, the other vector (some call it the binormal) can be derived without storing anything.

So if your normal map UVs disagree with the direction of the base UVs, they will be transformed in the wrong direction (i.e., if it’s upside down, the G channel will be flipped).
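In plain Python terms (made-up helper names, not engine code), the whole transform is small enough to sketch; note how a normal map authored against an upside-down second UV set effectively gets its G channel negated when pushed through UV0’s basis:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def tangent_to_world(n_ts, tangent, binormal, normal):
    # world = x*T + y*B + z*N
    return tuple(n_ts[0]*tangent[i] + n_ts[1]*binormal[i] + n_ts[2]*normal[i]
                 for i in range(3))

normal   = (0.0, 0.0, 1.0)          # interpolated vertex normal
tangent  = (1.0, 0.0, 0.0)          # stored per-vertex, follows UV0's U direction
binormal = cross(normal, tangent)   # derived, never stored -> (0.0, 1.0, 0.0)

n_ts = (0.0, 0.5, 0.866)            # a map texel leaning toward +V

# Authored against UV0: leans +Y in world space, as intended.
correct = tangent_to_world(n_ts, tangent, binormal, normal)

# Authored against a second UV set flipped upside down relative to UV0:
# pushing it through UV0's basis effectively negates the G channel, so the
# surface leans -Y instead and the lighting inverts.
flipped = tangent_to_world((n_ts[0], -n_ts[1], n_ts[2]), tangent, binormal, normal)
```

(Real engines also store a handedness sign with the tangent to cope with mirrored UVs, but the principle is the same.)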

That said, you could in theory attempt to bypass the final tangent->world transform and transform into a custom space for another normal channel, as long as you impose a rigid structure for the orientation. But that isn’t possible for a curve… You may even need to encode the tangent vector into another UV channel using a script for that to work like you want in the first arch image.

For the non-curve case, you should be able to derive a stable tangent basis using the vertex normal and the material function “Create Third Orthogonal Vector” combined with “Matrix 3x3 Transform” or possibly “Inverse3x3Transform”. But it won’t know about the curve, so it will gradually be wrong until it is 90 degrees wrong at the top of the arch.
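As a rough plain-Python sketch of that idea (my own helper names, standing in for the material functions): cross a fixed world axis with the vertex normal to get a tangent, then cross again for the binormal. Every face gets the same rigid basis, but nothing tracks the curve, and the basis degenerates as the normal approaches the chosen axis:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    l = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/l, v[1]/l, v[2]/l)

def rigid_basis(normal, up=(0.0, 0.0, 1.0)):
    # Tangent is orthogonal to both the reference axis and the normal;
    # the binormal completes the basis.
    tangent  = normalize(cross(up, normal))
    binormal = cross(normal, tangent)
    return tangent, binormal

# A wall facing +X gets the same basis on every face, so a second UV set's
# normals light consistently there...
t, b = rigid_basis((1.0, 0.0, 0.0))
# ...but around an arch the basis stays fixed while the surface turns, so
# the result drifts until it is 90 degrees wrong at the top. It also blows
# up (zero-length cross product) when the normal equals the up axis.
```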

If you are wondering why Tangent Space normal maps are the standard there is a pretty big reason: Compression!!

Local or World Space normal maps can be used, but then they are storing the full curvature of the vertex normals as well which tends to compress poorly. With tangent space, only the difference of the surface from the vertex normals is stored which compresses much better when combined with uncompressed vertex normals.
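A tiny Python illustration of that point (the two-channel storage below is my stand-in for a BC5-style format, not anything specific to the engine): only X and Y are stored and Z is reconstructed from unit length, and in tangent space X/Y stay near zero because only the deviation from the vertex normal is encoded:

```python
import math

def encode_xy(n):
    # Keep only the two channels a BC5-style format would store.
    return n[0], n[1]

def reconstruct_z(x, y):
    # Unit-length normals let us drop Z: z = sqrt(1 - x^2 - y^2).
    return math.sqrt(max(0.0, 1.0 - x*x - y*y))

# A typical tangent-space texel: a small tilt away from flat (0, 0, 1).
tilt = (0.1, -0.05, math.sqrt(1.0 - 0.1*0.1 - 0.05*0.05))
x, y = encode_xy(tilt)
z = reconstruct_z(x, y)   # recovers the dropped channel

# A world-space normal map has no such "mostly flat" structure - its
# components swing over the full [-1, 1] range as the surface curves,
# which is why it compresses so much worse.
```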

FWIW I have been wanting to do this for years as well. For the most part I worked around it by using floating shell geometry that has the UVs as UV0 etc. But that is way more work than it would be if it just worked with multiple UVs. I just ran into your other post about decal geometry and I am getting some conversations going among the rendering guys. It turns out that idea has been bounced around already and the time may be soon to get something like that scheduled. Seems related to this.

Great write up RyanB! I noticed this little hitch a while ago and just figured it was a limitation of tangent space normal maps, but it sounds like it’s actually more of an optimization thing.

You mentioned that you yourself hoped for this feature in the past… I suppose that means there’s a strong reason (performance?) that calculating the tangent for UV1 as well isn’t in the engine and most likely won’t be in the future?

Great to hear about the geometry decals possibly making their way into the engine someday though :slight_smile:

I’m fairly certain that geometry decals are already possible. Not much stopping you from assigning a decal material to geometry :smiley:

Turns out it is possible to derive a tangent basis by using DDX and DDY of the UV channel in question. I haven’t figured out the details yet, but I imagine you just get the ratio of the x and y UV slopes, which gives you a screen-space vector; then you need to convert that into world space.

Then you have the tangent vector. The binormal should then just be a cross product with that tangent vector. That gives you all 3: Normal (the vertex normal), Tangent (the first derivation) and Binormal (derived with the cross product).

Then you simply need to transform the normal map into this new world basis. I may have some time tomorrow to experiment with this.
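For reference, the same math can be sketched on the CPU in plain Python (made-up helpers; ddx/ddy in a pixel shader are just finite differences, so solving one triangle’s position/UV edge equations yields the equivalent tangent). It also shows why the result facets: the derived basis is constant across each triangle rather than interpolated between vertices:

```python
def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    l = (v[0]*v[0] + v[1]*v[1] + v[2]*v[2]) ** 0.5
    return (v[0]/l, v[1]/l, v[2]/l)

def derive_tbn(p0, p1, p2, uv0, uv1, uv2, vertex_normal):
    # Position and UV deltas along two triangle edges
    # (the CPU stand-ins for ddx/ddy of position and UV).
    e1, e2 = sub(p1, p0), sub(p2, p0)
    du1, dv1 = uv1[0]-uv0[0], uv1[1]-uv0[1]
    du2, dv2 = uv2[0]-uv0[0], uv2[1]-uv0[1]
    # Solve e = du*T + dv*B for the world-space direction of the U axis.
    r = 1.0 / (du1*dv2 - du2*dv1)
    tangent  = normalize(tuple((e1[i]*dv2 - e2[i]*dv1) * r for i in range(3)))
    binormal = cross(vertex_normal, tangent)
    return tangent, binormal

# One triangle of a floor quad whose second UV set is rotated 90 degrees:
# the derived tangent follows the rotated U axis (world +Y), not UV0's.
t, b = derive_tbn((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
                  (0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
                  (0.0, 0.0, 1.0))
```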

It lets you apply it, but it will not be a deferred decal material. I just tried this and it looks like the material is basically compiling an ultra basic fallback standard material.

This is a “normal” deferred decal. On the left it’s on a mesh, and you can see it basically gets a black background.

On the right is what it looks like as an actual deferred decal, rendering only the normals to the underlying ground material.

Sounds exciting :slight_smile:

Thanks for taking the time to try and tackle this!

Hey guys,
I did a quick test deriving the second tangent basis. It works but unfortunately the result is faceted. In some cases it may not matter. I’ll show some images so you can see what I mean.

Test mesh setup:

Derived Tangent Basis (arch trim only):

Here is how the tangent vector is derived:

And here is how the normal map is transformed using this vector and vertex normal:

Note: The material must have “Tangent Space Normal” = False. This means your other regular normals need to have a transform tangent->world to work alongside this.

The result:

This is the faceting in the tangent basis, each triangle gets a solid color without interpolation:

With lighting on this shows up as minor seams:

And for comparison, here is what the normals would look like using the default tangent basis (using tangent space normal = true):


I think code will be required to get interpolation to work as expected. Maybe it is possible to fix though with some other technique.

Turns out that with some textures, the seams are really hard to see. With destruction stuff I think this may work pretty well. For super clean stuff, not so much.


Really curious to see how this looks with some better test content like Nick!'s arch stuff. I won’t have time to try better art with this for a while.

(nevermind the black on the flat wall, unrelated bug in my material).

There is also something a bit wrong with how I am doing the transform from view to world space.