Multiple Normal Maps on Separate UV Channels?

Awesome! I’m glad you were able to get this working. You’re killing it between your POM shader and now this.

The triangulation really is invisible once you put the rest of the textures on, but I’ll need to try a few more examples to see how it holds up.

I did run into a compiling error while trying it out though…

The vector transform node you’re using looks different than mine, so I must be doing something dumb or missing something.

Most likely because ddx and ddy actually return float2s, so it's trying to output a float4. Try a component mask with RGB. I'm not even sure why that worked for me, haha, since it probably shouldn't have. I knew that using just the view -> world transform was a quick shortcut. It works when the camera angle is pretty much looking along the vertex normal, but it doesn't handle the case where the view angle is almost perpendicular to the normal.

MartinM came by yesterday and helped me redo that part with more accurate math. Unfortunately it also added around 10 instructions, so now it costs around 20 instructions total. It's still a WIP, and something like this will be much more efficient as a script, but it is possible to do dynamically. I'll post that updated stuff shortly.

Hi Ryan

I'm not too bad when it comes to 3D math, but one area where my shader knowledge is a bit lacking is tangent/view spaces. I've never really understood the binormal etc. Is it possible that you could maybe come up with a learning document with lots of pretty pictures to illustrate all the terms and what the spaces are? Maybe with a nice example scene :)

Cheers
Dan

Dokipen - I'm sure Ryan can shed a ton more light on that subject, but this article from the polycount wiki (Normal Map Technical Details - polycount) and this paper from Autodesk about syncing normal maps within 3ds Max (https://area.autodesk.com/userdata/fckdata/239955/The%20Generation%20and%20Display%20of%20Normal%20Maps%20in%203ds%20Max.pdf) might be of some interest to you.

Ryan - The RGB component mask did the trick, and the normals from uv1 on my test model are looking much better! I’m having a bit of trouble combining the normals from uv0 and uv1 together, but I’ll just wait until you’re able to post the updated code before diving any deeper.

There are lots of different ways to explain the vectors involved in transforms.

Jon Lindquist went into some pretty good detail on transforms on a Twitch stream a while back (around the 13m mark):
https://www.youtube.com/watch?v=564OYZanl3A

The tl;dr version is that a normal map texture exists in tangent space, which means the vectors only have meaning relative to the local frame of the texture. In tangent space, R is always the left-right axis, G is the up-down axis, and B is the axis perpendicular to the texture plane, pointing straight up towards the viewer.

You could call the default vectors for each of these X (1,0,0), Y (0,1,0), Z (0,0,1). X is the tangent, Y is the bi-normal, and Z is the normal. So Z is simply represented by the vertex normal of the mesh.

If you have a plane in the world facing up along Z with no rotation (and unrotated/unmirrored UVs), its transform will actually match the default layout of the tangent-space texture, and thus the normal map will not be changed by the transform at all. You could turn off the "tangent space normal" option to save a few instructions if you could guarantee the floor never rotated, or if you were using world XY coordinates.

In order to support other orientations, the direction of each of these axes needs to be updated, and the normal map needs to be “transformed” into the new space.

The engine stores the tangent vector as part of the static mesh data. The bi-normal is derived by taking the cross product of the vertex normal and the stored tangent vector.
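
To make that concrete, here's a minimal HLSL sketch of building that basis and applying it to a sampled normal. The names and the handedness-in-w convention are mine for illustration, not actual engine code:

```hlsl
// Build the tangent basis from the per-vertex data and use it to bring a
// tangent-space normal into world space. vertexTangent.w carries the
// handedness sign for mirrored UVs (a common convention; an assumption here).
float3 TangentNormalToWorld(float3 tangentNormal, // sampled normal, unpacked to -1..1
                            float3 vertexNormal,  // vertex normal in world space
                            float4 vertexTangent) // xyz = tangent, w = handedness
{
    float3 T = normalize(vertexTangent.xyz);
    float3 N = normalize(vertexNormal);
    float3 B = cross(N, T) * vertexTangent.w; // bi-normal via cross product

    // The rows of the matrix are the tangent-space axes expressed in world
    // space, so this maps X onto T, Y onto B, and Z onto N.
    return normalize(mul(tangentNormal, float3x3(T, B, N)));
}
```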

If you do 90-degree rotations, this has the effect of "swapping" the vectors, or "swizzling", using a single vector rotation. I.e., rotating 90 degrees around Z would turn the R (or X) axis into (0,1,0) and the G (or Y) axis into (1,0,0), so they merely swap.

Negative scaling also simply negatively scales the normal map along that axis. Say you have a floor like the unrotated floor example and set the Z scale to -1. In that case the blue channel of the normal map would be multiplied by -1, but the other channels would be unchanged.

The limitation at the root of this thread is that the engine currently only stores that tangent vector for the first UV layout. So deriving additional tangent vectors requires more work and hardware tricks such as ddx/ddy.
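
For the curious, here's a rough HLSL sketch of what that ddx/ddy trick looks like: deriving a tangent basis for an arbitrary UV set from screen-space derivatives. This is my own sketch of the general technique, not the engine's implementation:

```hlsl
// Derive a tangent basis for any UV channel using screen-space derivatives.
// Pixel-shader only, since ddx/ddy are not available in the vertex stage.
float3x3 DeriveTangentBasisFromUV(float3 worldPos, float3 worldNormal, float2 uv)
{
    // How position and UV change across one pixel in screen space.
    float3 dpdx  = ddx(worldPos);
    float3 dpdy  = ddy(worldPos);
    float2 duvdx = ddx(uv);
    float2 duvdy = ddy(uv);

    // Solve the 2x2 system for the directions of increasing U and V.
    float3 tangent  = dpdx * duvdy.y - dpdy * duvdx.y;
    float3 binormal = dpdy * duvdx.x - dpdx * duvdy.x;

    // Orthogonalize against the interpolated vertex normal.
    float3 N = normalize(worldNormal);
    float3 T = normalize(tangent  - N * dot(N, tangent));
    float3 B = normalize(binormal - N * dot(N, binormal));
    return float3x3(T, B, N); // rows: tangent, bi-normal, normal
}
```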

Thanks, those are great links. My main confusion was that I wasn't aware the mesh stored the tangent basis vectors (well, one of them; as Ryan just said, the bi-normal vector can be derived with a cross product). It's actually something I came across a while ago when I was making a C++ procedural mesh plugin.

I can see from those papers how the tangent vectors are generated. Since the basis is UV-dependent, that's how the UVs can affect the lighting.

Cheers guys.

Hi Ryan, I hate to bump this, but I was just wondering if there have been any updates or progress recently?

Bumping the living **** out of this! This method could be a real time saver for some things.

The material function for deriving the tangent basis has been checked into the engine. I think it went in for 4.11, but I'm not sure. It is called "Derive Tangent Basis".

No progress on the auto chamfer stuff. I don’t think anything is likely to happen on that front anytime soon.

Amazing, thanks Ryan!

So if I want to blend two normal maps using different UV channels, do I have to first convert them both into world space using Derive Tangent Basis?

You only need to convert the one that is using a UV channel other than UV0. So if normal one is UV0 and normal two is UV1, then you only need to do the transformation on normal two.

But the conversion is to world space. If UV0 stays in tangent space, how do I blend them?

Everyone in this thread should try mesh-based normal map decals:

http://youtube.com/watch?v=66IGMnPgEW0

You uncheck the "tangent space normal" option in the material, and then you transform the other normal map using the "Transform" node (regular vector, not position) set to tangent -> world. Then you can blend the two normals in world space.
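
In HLSL terms the whole recipe looks roughly like this, reusing the hypothetical helpers from the sketches earlier in the thread:

```hlsl
// Blend a UV0 normal (stored tangents) with a UV1 normal (derived tangents),
// outputting a world-space normal ("tangent space normal" unchecked).
float3 BlendTwoUVNormals(float3 normalUV0, float3 normalUV1, // both unpacked to -1..1
                         float3 vertexNormal, float4 vertexTangent,
                         float3 worldPos, float2 uv1)
{
    // UV0 can use the stored basis: the regular Transform node, tangent -> world.
    float3 worldN0 = TangentNormalToWorld(normalUV0, vertexNormal, vertexTangent);

    // UV1 has no stored tangents, so derive its basis from ddx/ddy.
    float3x3 tbn1 = DeriveTangentBasisFromUV(worldPos, vertexNormal, uv1);
    float3 worldN1 = mul(normalUV1, tbn1);

    // Both normals are in world space now, so blending is straightforward.
    return normalize(worldN0 + worldN1);
}
```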

I'm trying this but I'm running into strange lighting issues once I've blended the two normals. I've unchecked "tangent space normal", converted UV0 to world space, and used DeriveTangentBasis on UV2 before blending. The lighting looks correct when viewing either UV on its own but, once blended, it appears as if the lighting is coming from below.

In the below image, UV0 is a simple leather nrm, UV2 is a (terrible) stitching nrm, and this is the default preview cube from the material editor (not some mesh of my own).

Am I doing things in the wrong order or missing a step?
[Image: blending.jpg]

Hi,
BlendAngleCorrectedNormals is purely a tangent-space operation. You can just add the world-space normals together. If you want to see the effect of doing the BlendAngleCorrectedNormals, you would have to first transform each normal into tangent space.
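
A minimal sketch of both options, assuming the inputs are already unpacked. The angle-corrected version is the standard reoriented normal mapping formula, which I believe is what BlendAngleCorrectedNormals implements, but treat that as an assumption:

```hlsl
// World-space normals can simply be added and renormalized.
float3 AddWorldNormals(float3 a, float3 b)
{
    return normalize(a + b);
}

// Reoriented normal mapping: only meaningful on tangent-space inputs,
// where (0,0,1) represents the unperturbed surface normal.
float3 BlendAngleCorrected(float3 baseN, float3 detailN)
{
    baseN.z += 1.0;
    detailN.xy *= -1.0;
    return baseN * dot(baseN, detailN) / baseN.z - detailN;
}
```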

Thank you so much Ryan! I made the change and everything is working as expected. In case anyone else runs into something similar, here’s the graph I used. I also re-checked “tangent space normal.”


Hi,

As far as I can see, this operation transforms a vector from tangent space to world space. How would the other way around work, i.e. from world space to tangent space? Is it as simple as inverting something in that material function? I'm asking with regard to distorting the second UV channel of a mesh with distance fields (which are in world space).
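
My guess, for what it's worth: if the basis is orthonormal, the inverse of the tangent -> world matrix is just its transpose, so the reverse transform might look something like this (DeriveTangentBasisFromUV being the hypothetical helper sketched earlier in the thread):

```hlsl
// World -> tangent is the transpose of the orthonormal tangent -> world matrix.
// With HLSL's row-vector convention, swapping the mul argument order applies
// the transpose, so no explicit transpose() call is needed.
float3 WorldToTangent(float3 worldVec, float3x3 tbn)
{
    return mul(tbn, worldVec); // same as mul(worldVec, transpose(tbn))
}
```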