Tessellation and/or normals?

I’ve been reading about and experimenting with dx11 tessellation in a test project. Would it be proper to think of it as an alternative to using normals/lighting to fake depth? In other words, if you tessellate the mesh you get actual depth, so would you still need to process normals on the material in that case? Or would that be essentially redundant?

Along the same lines, if you do distance-based tessellation using the multiplier, at the point where you fade to zero tessellation would you want to be fading in a normal map to provide the now missing depth?

And I guess a third related question is: are normal maps the fallback for cases where tessellation is not supported, and if so how is that handled in material construction?

Thanks!

Tessellation would be like displacement maps: you can use them together with normal maps so that the displacement makes the larger details stick out, and the normal map handles the finer details. If the tessellation is turned off, the normal maps would still be working.

Ah, ok, thanks. I think I see. One of the reasons I was confused is that I watched a tessellation tutorial that recommended using the alpha channel of the normal map as the input height for the tessellation offset. In that case it seemed to me that the resolution of the tessellation and the normals would be the same, and that no additional detail would be provided by retaining the normal map input. And in fact I think in that tutorial the author did not have the normal wired into the input, but I don’t know if that was intentional.

The detail level of the tessellation depends on how many times the mesh is subdivided, not just the resolution of the displacement map. Most of the time you're not going to have the tessellation high enough to get the detail that the normal map provides. As for where to put the displacement map: it's grayscale, so it only needs a single channel, which is why the tutorial you were watching put it in the alpha channel. UE4 might not keep the alpha channel in the normal map, though, so you'd have to test that.
You can also combine it with other grayscale textures if you have any: use the Red/Green/Blue channels for separate grayscale images, like maybe Roughness/Metallic/Emissive/Masks.
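To make the packing idea concrete, here's a minimal C++ sketch (not UE4 API, just an illustration): three separate grayscale maps stored in the R/G/B channels of one texture, one value per channel. The Roughness/Metallic/Displacement assignment here is just an assumed example.

```cpp
#include <cstdint>
#include <cstdio>

struct PackedTexel { std::uint8_t r, g, b; };

// Pack three 0..1 grayscale values into one 8-bit-per-channel texel.
PackedTexel Pack(float roughness, float metallic, float displacement)
{
    auto to8 = [](float v) { return static_cast<std::uint8_t>(v * 255.0f + 0.5f); };
    return { to8(roughness), to8(metallic), to8(displacement) };
}

int main()
{
    PackedTexel t = Pack(0.8f, 0.0f, 0.35f);
    // In the material, R could drive Roughness, G Metallic, and B the
    // tessellation displacement height; each channel is read as plain grayscale.
    std::printf("R=%u G=%u B=%u\n", unsigned(t.r), unsigned(t.g), unsigned(t.b));
    return 0;
}
```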

Right, ok, that makes sense too. Especially if you're using a distance-based multiplier to fade out the tessellation; then of course the level of detail in the mesh would be lower than that of the map.

In the tutorial I watched (https://youtu.be/o3L-GlYWmpc?t=350) the author really made it seem like you could use the alpha channel of any normal map as a height map.

Thanks again!

You might be able to use it in the normal map alpha channel, I just haven’t tested it to be sure.

You probably can’t in Unreal Engine, because it compresses normal maps with a compression format specific to normal maps. I’m guessing it’s some flavor of DXT.

In other engines you probably can. In Unreal I’d probably just use some other map channel. Then again, no harm in testing it first.

I don’t have UE4 here to look it up, but I remember that in UDK you could just change the compression type of the normal map to avoid this.
Plus, you could also use the blue channel of the normal map and recalculate its content in the material editor (DeriveNormalZ).

Normal maps need to be converted from 0..1 to -1..+1, and the blue channel needs to stay 0..1. But tessellation without any normal maps will give you a very flat, smooth shaded surface that just happens to be displaced. In other words, even though the mesh is being displaced, the surface normals are not. So you will need a normal map on your mesh regardless.
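Roughly, that remap looks like this. A minimal C++ sketch of the idea (not UE4 code), assuming the convention described above where only the red/green channels get expanded to -1..+1:

```cpp
#include <cstdio>

struct Normal { float x, y, z; };

// Each input channel is the stored 0..1 texture value.
Normal DecodeTangentNormal(float r, float g, float b)
{
    Normal n;
    n.x = r * 2.0f - 1.0f;   // -1..+1: left/right tilt
    n.y = g * 2.0f - 1.0f;   // -1..+1: forward/back tilt
    n.z = b;                 // stays 0..1: how strongly it points away from the surface
    return n;
}

int main()
{
    Normal n = DecodeTangentNormal(0.5f, 0.5f, 1.0f); // a "flat" texel
    std::printf("%.2f %.2f %.2f\n", n.x, n.y, n.z);   // 0.00 0.00 1.00
    return 0;
}
```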

Even though you can use the alpha channel of a normal map to house the displacement map, I find it much easier to just split it off as a separate texture with linear grayscale settings.

Very interesting, and thanks for the reply. I’m currently using dynamic lights and I guess I expected that the lighting of the displaced mesh would accomplish the same thing the normals would, but of course I also didn’t take into account the difference in resolution, as mentioned above.

>> Normal maps need to be converted from 0..1 to -1..+1, and the blue channel needs to stay 0..1.

Could you expand on that a bit?

It has nothing to do with resolution: tessellation all by itself will only displace the mesh, not the normals. So if you put a normal map on it the normal map will be displaced and the lighting will be accurate, but without that normal map, no matter how crazy the displaced mesh is, it will still render lighting as a smooth surface.

As for normal maps, the detail in the Red and Green channels shifts the lighting on the surface to the left, right, forwards, and backwards along the surface. These channels need to be remapped so that -1 is light glancing to the left and +1 is light glancing to the right. The Blue channel is a forward-facing normal where 1 is pointing all the way out above the surface, 0 is right at the surface, and -1 is on the opposite side of the surface. So you only really need 0..1 on the blue channel, because if a normal faces away from the front of the surface it shouldn’t show up in the texture at all. Some people have managed to use the blue channel of a normal map to store other things, but you should exercise caution when doing this: you have to derive normal Z and append it to the RG values, which costs extra instructions, and if you simply append a 1 for blue to save those instructions, you may have issues with the resulting normals. On top of that, the blue channel is typically heavily compressed in linear space, which makes it unsuitable for both texture data AND displacement data.
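For what it's worth, the math behind a DeriveNormalZ-style reconstruction is just the unit-length constraint. A rough C++ sketch (this is the idea, not the UE4 node itself):

```cpp
#include <cmath>
#include <cstdio>

struct Normal { float x, y, z; };

// x and y are already remapped to -1..+1; z is rebuilt so the vector is unit length.
Normal DeriveNormalZ(float x, float y)
{
    float zsq = 1.0f - x * x - y * y;
    float z = zsq > 0.0f ? std::sqrt(zsq) : 0.0f; // clamp for steep or invalid texels
    return { x, y, z };
}

int main()
{
    Normal n = DeriveNormalZ(0.3f, -0.2f);
    std::printf("%.3f %.3f %.3f\n", n.x, n.y, n.z);
    return 0;
}
```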

Thanks again. The second paragraph I am going to have to digest a little longer. The first is a little more immediately relevant to my understanding. Pardon the extreme noobishness, but my last exposure to 3D programming in general was 20 years ago, when I did a little coding of primitives in C++, and I am getting back into it now just for fun.

I think I may see where I’m going wrong, but I am not sure. It centers around why tessellation would not affect light intensity. When a mesh is tessellated this creates new geometry, doesn’t it? Doesn’t the normal of each of those new triangles affect light intensity? I guess I thought that texture normal maps were basically a way to fake depth within a single triangle by varying the light intensity of a pixel or group of pixels during pixel shading. I was thinking that after tessellation you would at least pick up the differences in light intensity caused by the variations in the normals of the new triangles.

Before I confuse you: in that second paragraph I was only talking about the process of converting a normal map image into the final values used in the shader. By simply flagging the texture as a normal map on import, UE4 will do this for you automatically.

Now, light intensity is only determined by the light source. With physically based shading, no control in the material is capable of increasing or decreasing the “light intensity” at all: how much of that light comes back to the eye is determined by the material’s roughness and color properties. Even a completely black surface will still reflect light. We shade by reflections now, along with GGX specular.

The surface normal is the direction the surface faces when performing light and reflection calculations. And smooth shading is done to smooth the normals between vertices so even a low-poly mesh will have everything smoothed out. Tessellating will add more vertices and make this smoothing process more accurate. However, displacing the tessellation does not affect the surface normal: the only way to do that is to use a normal map.
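To put that last point in pseudo-shader terms, here's a minimal C++ sketch (an illustration, not UE4 code) of why displacement alone looks smooth: the tessellated vertex gets pushed out along its existing normal, but that normal is never recomputed, so lighting still sees the original smooth surface direction.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

struct Vertex { Vec3 position; Vec3 normal; };

// Displace the vertex along its normal by the sampled height.
Vertex Displace(Vertex v, float height)
{
    v.position.x += v.normal.x * height;
    v.position.y += v.normal.y * height;
    v.position.z += v.normal.z * height;
    // v.normal is left untouched: unless a normal map overrides it,
    // the surface still shades as if it were smooth.
    return v;
}

int main()
{
    Vertex v = { {0.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 1.0f} };
    Vertex d = Displace(v, 0.25f);
    std::printf("pos z = %.2f, normal z = %.2f\n", d.position.z, d.normal.z);
    return 0;
}
```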

I wouldn’t recommend that since there’s probably a reason they have that compression type in the first place. It probably stores the data in a way that results in both high quality and good storage space. It’d be much better to just store the displacement as a grayscale image in some other texture’s channel.

Thanks again mariomguy. That last sentence cleared it up for me.

The blue channel stores information relative to the surface’s tangent space. So 1 is pointing straight out along the vertex normal, 0 is lying flat in the surface (perpendicular to the vertex normal), and -1 is facing the opposite direction. You can replace the blue channel with a constant 1 and get the same result if the original normal map was never intended to face away from the surface and has a mostly solid blue channel. But if your normal map is extreme and does face the opposite direction, the standard normal map configuration is the best for overall quality, memory, and efficiency.
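Here's a small C++ sketch of the trade-off, just to illustrate the numbers: for gentle normals the “append a constant 1” shortcut is close enough, but for steep ones the resulting vector is far from unit length and flattens the shading compared to a properly derived Z.

```cpp
#include <cmath>
#include <cstdio>

static float Length(float x, float y, float z)
{
    return std::sqrt(x * x + y * y + z * z);
}

int main()
{
    float x = 0.8f, y = 0.0f;                         // a fairly steep tangent-space normal
    float zDerived = std::sqrt(1.0f - x * x - y * y); // unit length by construction
    float zConst   = 1.0f;                            // the "just append 1" shortcut
    std::printf("derived Z: |n| = %.3f\n", Length(x, y, zDerived)); // 1.000
    std::printf("const 1:   |n| = %.3f\n", Length(x, y, zConst));   // ~1.281
    return 0;
}
```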

But since the blue channel is heavily compressed and in linear space, it’s not suitable for displacement maps or masks. The only information you can really put there is low-quality, incorrect textural detail, and if you use shared texture samplers that reduce texture draw calls, the benefit to channel packing in a normal map is absolutely zilch. It will cost more to extract the data in the shader than to throw another texture in there. Now if you’re making a mobile game with heavy memory constraints, I understand this can mean the difference between something like Jett Rocket and something like the N64, but this type of optimization is quickly becoming outdated, especially as memory improves and pixel shading performance becomes the new bottleneck.

Thanks for this good discussion, guys. Some of the folks here are saying that we can use the B channel to store other values. I am somewhat confused about how one can determine a normal vector over a +180 degree range without using the B channel. If I use a predefined value of 1 rather than channel B, doesn’t it end up making the normal point straight out, skipping the values of R and G? I am new to learning this, so please explain in detail if my question is wrong. Thanks!