What is a material, texture and normal map

Can someone explain to me what each is used for, in beginner terms?

I think it’s like this:

  1. A material is a 2D image that gets applied to a mesh to make it look like something (rock, brick, etc.).
  2. A texture is a 2D image that is used in the material editor to give the material surface detail (roughness).
  3. I don’t know what a normal map is for. I’ve read about it a little. Is it a method to give 3D models more detail without making them more complex?

Thank you.

I am a beginner too, but I will try to explain things as I understand them:

  1. A material is not actually a 2D image. In a material you can use 2D images to give the mesh or surface the look you want. You can control the diffuse color, reflection, refraction, opacity, and a lot of other parameters, and you can tweak all of them to achieve the result you are after.
  2. Textures are 2D images most of the time, and yes, they are used in the material editor to achieve different results. But most of the time, when you are applying a 2D texture to a 3D model you must “wrap around” that 2D image, by unwrapping the object’s UVs or applying a simpler method of UV mapping (box, spherical, etc.).
  3. Normal mapping is a method of giving a 3D model more detail without the model actually having it. This is achieved by faking the lighting of bumps and dents, so if you want to preview your normal map you will need some lighting. Bottom line: you get more detail on the same low-polygon mesh.

You have the right idea but let me introduce you to a whole new world of possibilities: shaders.

A material is not just a 2D image; it’s a shader.
A shader in UE4 is normally written in HLSL (a shading programming language), although the material editor makes this really easy so you don’t have to actually write any code :P, much like Blueprints instead of C++.

So as you can imagine you can do all sorts of awesome things with a material that go far beyond what a texture can do… custom lighting models, parallax occlusion, ocean displacement and so on… the possibilities are endless really.
There are also cases where you don’t need to use a texture in a material at all.
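
To make this concrete, here is a minimal hand-written sketch of the kind of HLSL a very simple material boils down to. This is illustrative only, not the code UE4 actually generates, and the texture/sampler names are made up:

```hlsl
// Hypothetical pixel shader: roughly what a material that samples one
// texture and tints it compiles down to. Names are placeholders.
Texture2D DiffuseTex;
SamplerState LinearSampler;

float4 MainPS(float2 uv : TEXCOORD0) : SV_Target
{
    // Sample the 2D texture (what a Texture Sample node does).
    float3 baseColor = DiffuseTex.Sample(LinearSampler, uv).rgb;

    // Tint it with a constant (a Multiply node with a Constant3Vector).
    return float4(baseColor * float3(1.0, 0.9, 0.8), 1.0);
}
```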


As tk-master said -

material == shader (a small program that runs on the GPU and colors each pixel)
texture == 2D image data (like PNG or BMP)
normal map == a specially created texture that contains lighting information for the model it is intended to be used with (also PNG or BMP)
model == 3D vertices representing a 3D shape (like FBX or OBJ)

btw - there are many other commonly created texture types that can be used by the shader alongside the normal map - e.g. diffuse, ambient occlusion, specular, roughness


You are actually incredibly close to being correct on what each does. Here is a breakdown that I use when explaining this to people.

Texture: A Texture is a static 2D image that can be applied to meshes to give them the appearance of real world objects like rocks or bricks.
Material: A Material is generally made up of a combination of 2D images and/or math expressions that you manipulate to produce an image. Materials are also dynamic, meaning that they can change over time (see the sketch after this list), and you can also use a Material to adjust the look of a Texture.
Normal Map: You were actually correct in your description of a Normal map. All a Normal map does is help to re-create small details that would otherwise be too costly to render in real time.
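
To illustrate the “dynamic” point from the Material item above, here is a rough HLSL sketch of a material output that changes over time, the hand-written equivalent of wiring a Time node into a Sine node (the function and parameter names are hypothetical):

```hlsl
// Hypothetical: a material output that pulses over time.
// timeSeconds would be supplied by the engine (e.g. a Time node).
float3 PulsingEmissive(float timeSeconds, float3 baseColor)
{
    // sin() oscillates between -1 and 1; remap to 0..1 so the
    // color smoothly fades in and out instead of going negative.
    float pulse = 0.5 + 0.5 * sin(timeSeconds);
    return baseColor * pulse;
}
```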

Hope that helps. Please let us know if you have any more questions.


A normal map doesn’t contain lighting information; it essentially contains an ‘offset’ to the normal of a surface for each pixel.

Basically, 0.5, 0.5, 1.0 (or 128, 128, 255, corresponding to X,Y,Z) is ‘flat’, meaning the normal map will not affect how light hits the surface. If you start adjusting the first two values (X, Y) then the light hitting it will think the surface is angled in a different direction. Hence, you can use it to add detail like bumps.
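
In shader terms, the decode step looks something like this (a sketch assuming a standard unsigned tangent-space normal map; the names are placeholders):

```hlsl
Texture2D NormalTex;
SamplerState LinearSampler;

// Decode a tangent-space normal map texel from its 0..1 storage range
// back to a -1..1 direction. A stored (0.5, 0.5, 1.0) decodes to
// (0, 0, 1), i.e. 'flat': the surface normal is left unchanged.
float3 DecodeNormal(float2 uv)
{
    float3 encoded = NormalTex.Sample(LinearSampler, uv).rgb;
    return normalize(encoded * 2.0 - 1.0);
}
```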


Thanks Dave - but I am intentionally trying to keep things simple and clear.

Although I am completely unclear why you insinuate it does not contain lighting information. It contains the normal of the surface, which is the primary information needed to calculate the lighting of the fragment. The normal map is absolutely used by the shader to evaluate lighting at the surface in both deferred and forward rendering. The bump detail you speak of is determined by perturbing the fragment color based on the incident angle.

In deferred shading (the default in UE), the lighting calculation is deferred until after the g-buffer is built. However, the normal map directly contributes to the g-buffer.

Hence my statement “a specially created texture that contains lighting information of the model it is intended to be used with” is completely accurate.

The normal contains information required to light the surface. It is probably the most important piece of lighting information in the model, after the position in 3D space.
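
For example, the core of a basic diffuse term is just a dot product against that normal. A minimal Lambert sketch (not UE’s actual lighting code):

```hlsl
// Minimal Lambert diffuse: the surface normal (as perturbed by the
// normal map) is the key surface input the lighting calculation uses.
float3 LambertDiffuse(float3 normal, float3 lightDir,
                      float3 lightColor, float3 albedo)
{
    float ndotl = saturate(dot(normalize(normal), normalize(lightDir)));
    return albedo * lightColor * ndotl;
}
```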

I am sorry if you thought I meant “information about lights in the scene”; rather, I intended “information about the model used to apply lighting to it”.

That’s my point: although (I guess…) what you said is accurate, I would describe the normal map as containing information for lighting, rather than lighting information. I think that is clearer for an end user to understand what the normal map is doing as a component of the shader.

So is this how it’s done (my example will be a small first aid kit)?

  1. create a 3D mesh
  2. wrap a 2D image or “material” (white with red cross) around the mesh
  3. use another 2D image “texture” to give the red cross a “raised” look
  4. use a normal map to make the first aid kit look like it has scratches/marks on the surface

A material doesn’t necessarily have to use a 2D image; it can be just a solid color without any special details. You use textures when you want to add details to the material. So a texture might be a red cross on a white background that you would plug into the Diffuse (color) slot of the material, and then the normal map could be used to make the red cross look like it has a raised effect.

Franktech -

#1 - A 2D Texture can only be applied to a mesh via a Material/Shader.

#2 - A material in a 3D package (and, in this case, Unreal Engine) is just a user-friendly interface for a shader. When you save the material, Unreal creates the actual shader in the background.

#3 - Normal maps are the most common and therefore the most important to understand, but each of these map types achieves something different. POM tries to mimic depth, which a normal map cannot do. Bump maps are effectively the precursor to normal maps, but they can only infer depth away from the surface normal (not change its direction), meaning you can only mimic things like scratches and knocks on a surface. Normal maps used to be called Dot3 bump maps because they could manipulate a normal in X, Y, Z instead of just Z. Displacement / height maps are used for physically displacing the vertices/polygons of a surface, usually with tessellation.

OK. So to update my previous post:

  1. create first aid kit mesh
  2. apply a texture to the first aid kit
  3. apply a material to the first aid kit (if needed)
  4. use a normal map (if needed)

Is a texture used before applying a material, or the other way around?
Is a texture and material used at the same time to achieve a wanted look?
A normal map is only used if added detail is needed?

The material has a bunch of different slots that you can plug textures into. You apply the material to the mesh, and any textures have to be plugged into the material. The normal map is another type of texture that gets plugged into the material as well.

Now I get it. I didn’t know if they were used separately or all together. I followed a tutorial where I made a “Rock” texture, which was helpful, but it didn’t explain what each item did.

I also found the material and texture documentation for UE4.

#1: This is correct, you can only ever apply a Texture to an object via a Material.

#2: Shader and Material are interchangeable terms that refer to the same thing.

#3: All of the maps you talked about are important and used for different reasons. What you want to do, or what type of detail you are trying to achieve, will determine which method you use. If I had to pick one to learn, it would be normal maps, as they are the most used.

Hope this helps.

Texture - A 2D image that can be applied to a 3D object
Normals (in general) - The surface definition of an object
Normal textures - A 2D image that stores that surface definition across an object. This may be used to bend light around and make bumps along the surface, for example.
Material - Code used to shade a 3D object. Without materials, 3D objects only exist as polygons and vertices in space. The material defines what that 3D object looks like, and how it is rendered.

Parallax - A technique that bends a texture based on the viewing angle to give the illusion of depth along a surface.
Parallax Occlusion - A technique that bends a texture and occludes details to give the illusion of real depth along a surface using only 2D textures.
Displacement map - A texture that provides depth information for use in techniques like parallax, parallax occlusion, or tessellation (see the sketch below).
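
A sketch of the simplest form of that idea, single-sample parallax (real parallax occlusion mapping instead ray-marches several samples along the view ray; names and sign conventions here are illustrative):

```hlsl
Texture2D HeightTex;
SamplerState LinearSampler;

// Offset the UVs along the tangent-space view direction by the height
// read from a height/displacement map, faking depth on a flat surface.
float2 ParallaxUV(float2 uv, float3 viewDirTangent, float heightScale)
{
    float height = HeightTex.Sample(LinearSampler, uv).r;
    return uv + (viewDirTangent.xy / viewDirTangent.z) * height * heightScale;
}
```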

How you decide to make your normal maps is up to you. Some people like to sculpt detail on a high resolution mesh in Z-brush and bake them down onto a lower resolution mesh, generating an accurate normal representation of the object. Others like to make normal maps by hand with tools like XNormal, and then paint texture maps like the color and displacement. But the end result must give you some way to bend the light around details in your mesh to provide a more detailed result than what can be obtained through polygons alone.

In the screenshot below, there are only 2 polygons: the appearance of depth and form comes from the normal maps and displacement maps in the parallax occlusion shader. No extra geometry.

[screenshot: a flat, two-polygon surface given apparent depth by normal and displacement maps in a parallax occlusion shader]

So I made a small box in 3DS Max. Exported it as a .FBX file. Imported it into UE4. Applied a material to it (just a white background with a red cross on it). It put the red cross on all 6 sides of the box.

I think I first have to do UVW mapping before exporting/importing it and then open the material in the Material editor for further work. Or do you create the material in UE4 first then use that for UVW mapping?

Is there a tutorial that shows the steps in the correct order?

Maybe it would help to work up from fundamentals. This info is not specific to Unreal; it applies to real-time 3D graphics in general.

In 3D you have a three dimensional coordinate system. You define where something is in your 3D space with X,Y and Z coordinates.

A model is just a collection of points marking positions in X,Y,Z. People will refer to these three-number coordinates as vectors or positions interchangeably. The truth is that in 3D a position is where something is, and a vector is where it’s pointing, or its position relative to something else in the space.

So a model is just a collection of point positions. The actual surface is defined by connecting those positions into triangles. The triangles are referred to as polygons or faces, and in a 3D program they might not appear as triangles at all; they might have 4 or more sides, but in a game engine they are all triangles.

Each of those triangles has a normal. A normal is a vector (X,Y,Z numbers) that tells where something is pointing, and we call it a normal because it is “normalized”, meaning the vector has a length of 1. It’s only giving a direction, not a distance. Those face or polygon normals also exist on each of the points that define the model. Since one of those points might be a corner for many triangles, the point normal will often be something like an average of the normals of all the triangles it helps create, though a person can go in and change that. The game engine is mainly concerned with these point normals, and we can manipulate them to make edges appear hard or smooth.
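
In code, one of those face normals is just the cross product of two triangle edges, normalized to unit length. A sketch:

```hlsl
// Face normal of a triangle (p0, p1, p2): the cross product of two
// edges is perpendicular to the face; normalize() scales it to
// length 1 so it encodes only a direction, not a distance.
float3 FaceNormal(float3 p0, float3 p1, float3 p2)
{
    return normalize(cross(p1 - p0, p2 - p0));
}
```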

So you have an X, Y and Z for every point saying where it is, and another X, Y and Z saying where it’s pointing. You can also have an R,G,B telling what color it is. And a U,V,W that tells where that point is on a 2D plane, which we can use to apply texture maps (we generally only care about U and V; W is how high above or far below the point is in relation to the texture, which is not terribly useful for reasons I won’t get into). In fact, you can have multiple sets of UVW coords for each point so you can use multiple different texture maps. A texture map is just a 2D image, and it’s applied to the model using these UV point positions, like wrapping a gift. A texture map can be encoded to contain all sorts of information, but in general it has 4 channels (it’s another type of vector): R,G,B, and A - Red, Green, Blue, Alpha. Together these numbers most often represent what color a particular position (not a point, but a position on one of the triangles) is and how transparent it is.

When the engine goes to render the model, it needs to know where a pixel is, what color it is, where it’s pointing, etc. It gets that information from the points (vertices) near the pixel it is trying to draw, and from the textures at that position on the model. This is where the normal map comes in. The normal map stores an X,Y,Z value that says where this position on the surface is pointing, putting those values in the R,G,B channels of a texture. If you didn’t have this map, the renderer would do something like a weighted average of the nearby vertex normals to get this direction. If you have that info in a texture, you have a lot of normals for the surface, and you can create many more variations in direction across it, enough to describe things like pores on skin, the roughness of an orange peel, the pits of poured concrete, the grooves of milled steel, or the fine carved details of an old wooden chair. If you rely only on the point normals, every pixel the renderer creates inside a triangle on the surface will point in the same direction. With a normal map, you could have hundreds of slightly different directions for every triangle that makes up the model.
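
Since those per-pixel directions are stored relative to the surface, the renderer typically rotates them into world space using the tangent frame interpolated from the vertices before lighting. A sketch (assuming T, B, N arrive from the vertex data):

```hlsl
// Hypothetical: rotate a decoded tangent-space normal into world space
// using the tangent (T), bitangent (B), and vertex normal (N) basis,
// so it can be compared against world-space light directions.
float3 WorldNormal(float3 tangentNormal, float3 T, float3 B, float3 N)
{
    return normalize(tangentNormal.x * T +
                     tangentNormal.y * B +
                     tangentNormal.z * N);
}
```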

You can have all sorts of maps doing all sorts of things, but the texture that says what color this position on the surface is and how transparent it is, and the normal map, which says where this position is pointing, are the two main ones.

A shader, technically, is a small program the renderer runs to determine what a pixel in the final image should look like. The shader takes the information about the surface (textures, normals, positions) plus other information from the scene, like lighting direction and color, and outputs a pixel color, position, opacity, etc. for the final image.

People often use shader and material interchangeably but they are different things. The shader is the little program that does the calculation. The material is a collection of shaders and textures that define a look. It is possible for a material to have more than one shader. You may for instance have a glass or metal material, capable of producing different types of glass or metal using different shaders internally, but it is one material.


Normals are calculated per vertex, not per face. That’s why engines actually calculate more vertices if there are UV splits or smoothing splits.

I know this.

I was purposefully talking from a simplistic view to avoid going down the rabbit hole of explaining how you can define a hard edge when you only have one normal direction, vertex splitting, atlases, etc. I think it’s fundamentally easy to visualize that a triangle can only face in one direction, and that a point that defines a corner for many triangles is going to point somewhere relative to where those triangles point.