Maybe it would help to work up from fundamentals. This info is not specific to Unreal; it applies to real-time 3D graphics in general.
In 3D you have a three dimensional coordinate system. You define where something is in your 3D space with X,Y and Z coordinates.
A model is just a collection of points marking positions in X, Y, Z. People will refer to these three-number coordinates as vectors or positions interchangeably. The truth is, in 3D a position is where something is, and a vector is where it's pointing or its position relative to something else in the space.
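To make that concrete, here's a tiny sketch in plain Python (nothing engine-specific, the variable names are just for illustration): two positions, and the vector you get by subtracting one from the other, which describes where one is relative to the other.

```
# Two positions in 3D space, just (X, Y, Z) triples.
camera_pos = (0.0, 2.0, 5.0)
target_pos = (3.0, 2.0, 1.0)

# Subtracting one position from another gives a vector:
# "where target_pos is, relative to camera_pos".
to_target = tuple(t - c for t, c in zip(target_pos, camera_pos))

print(to_target)  # (3.0, 0.0, -4.0) -> the direction and distance from the camera to the target
```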
So a model is just a collection of point positions. The actual surface is defined by connecting those positions into triangles. The triangles are referred to as polygons or faces, and in a 3D program they might not appear as triangles at all; they might have 4 or more sides. In a game engine, though, they are all triangles.
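As a rough sketch of what that looks like as data (not Unreal's actual format, just the idea): a list of point positions, and a list of triangles where each triangle is three indices into that list.

```
# A unit square made of 4 points and 2 triangles.
# Each position is an (X, Y, Z) triple.
positions = [
    (0.0, 0.0, 0.0),  # point 0
    (1.0, 0.0, 0.0),  # point 1
    (1.0, 1.0, 0.0),  # point 2
    (0.0, 1.0, 0.0),  # point 3
]

# Each triangle is three indices into the positions list.
# In a modeling program this might look like one 4-sided face,
# but the engine sees it as two triangles.
triangles = [
    (0, 1, 2),
    (0, 2, 3),
]
```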
Each of those triangles has a normal. A normal is a vector (X,Y,Z numbers) that tells where something is pointing. It's called a normal because it points perpendicular (normal) to the surface, and it's usually "normalized," meaning the vector has a length of 1. It's only giving a direction, not a distance. Those face or polygon normals also exist on each of the points that define the model. Since one of those points might be a corner shared by many triangles, the point normal will often be something like an average of the normals of all the triangles it helps create. A person can go in and change that, though. The game engine is mainly concerned with these point normals, and we can manipulate them to make edges appear hard or smooth.
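Here's a simplified sketch of where those numbers come from (real tools usually weight the average by things like face area or angle, this is the naive version): the face normal is perpendicular to the triangle and scaled to length 1, and a point normal can be an average of the face normals around that point.

```
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    # Scale the vector so its length is 1: direction only, no distance.
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/length, v[1]/length, v[2]/length)

def face_normal(p0, p1, p2):
    # Perpendicular to the triangle, length 1.
    return normalize(cross(sub(p1, p0), sub(p2, p0)))

def point_normal(face_normals):
    # Naive point normal: average the normals of the faces that share the point.
    summed = (sum(n[0] for n in face_normals),
              sum(n[1] for n in face_normals),
              sum(n[2] for n in face_normals))
    return normalize(summed)

print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0) -> points straight "up" out of the triangle
```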
So you have an X, Y, and Z for every point saying where it is, and another X, Y, and Z saying where it's pointing. You can also have an R, G, B telling what color it is. And a U, V, W that tells where that point is on a 2D plane, which we can use to apply texture maps (we generally only care about U and V; W is how high above or far below the point is in relation to the texture, which is not terribly useful for reasons I won't get into). In fact, you can have multiple sets of UVW coordinates for each point so you can use multiple different texture maps. A texture map is just a 2D image, and it's applied to the model using these UV point positions like wrapping a gift. A texture map can be encoded to contain all sorts of information, but in general it has 4 channels (it's another type of vector): R, G, B, and A. Red, Green, Blue, Alpha. These numbers together most often represent what color a particular position (not a point, a position on one of the triangles) is and how transparent it is.
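Put together, every point (vertex) ends up carrying a bundle of attributes something like this (the names here are made up for illustration, every format and engine lays it out a bit differently):

```
from dataclasses import dataclass

@dataclass
class Vertex:
    position: tuple   # (X, Y, Z) - where the point is
    normal: tuple     # (X, Y, Z) - where the point is pointing, length 1
    color: tuple      # (R, G, B) - optional per-point color
    uv: tuple         # (U, V)    - where this point lands on the 2D texture
    uv2: tuple = None # a second UV set, for a second texture map

v = Vertex(
    position=(1.0, 0.0, 0.0),
    normal=(0.0, 0.0, 1.0),
    color=(1.0, 1.0, 1.0),
    uv=(0.5, 0.25),
)
```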
When the engine goes to render the model, it needs to know where a pixel is, what color it is, where it's pointing, etc. It gets that information from the points (vertices) near the pixel it is trying to draw, and from the textures at that position on the model. This is where the normal map comes in. The normal map is storing an XYZ value that says where this position on the surface is pointing, and it is putting those values in the RGB values of a texture. If you didn't have this map, the renderer would do something like a weighted average of the nearby point normals to get this direction. If you have that info in a texture, you have a lot of normals for the surface and you can create more variations in direction across the surface that can describe things like pores on skin, the roughness of an orange peel, the pits of poured concrete, the grooves of milled steel, the fine carved details of an old wooden chair. If you only rely on the point normals, the direction inside a triangle just blends smoothly between its three corners, so there's no fine detail within the triangle. With a normal map, you could have hundreds of slightly different directions for every triangle that makes up the model.
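A rough sketch of the difference (this glosses over things like tangent space, which is its own topic): without a map, the pixel's direction is a blend of the three corner normals; with a map, the RGB of the texture at that spot is decoded back into an XYZ direction.

```
import math

def normalize(v):
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return tuple(c / length for c in v)

def decode_normal(r, g, b):
    # A normal map stores an XYZ direction in the RGB channels.
    # Each channel is 0..1 in the image and gets remapped to -1..1 here.
    return normalize((r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0))

def blended_point_normal(n0, n1, n2, w0, w1, w2):
    # Without a normal map: the pixel's direction is a weighted blend
    # of the three corner (point) normals of the triangle it sits in.
    blended = tuple(n0[i]*w0 + n1[i]*w1 + n2[i]*w2 for i in range(3))
    return normalize(blended)

# The typical flat "blue" normal map color decodes to straight up out of the surface.
print(decode_normal(0.5, 0.5, 1.0))  # roughly (0.0, 0.0, 1.0)
```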
You can have all sorts of maps doing all sorts of things, but the texture that says what color this position on the surface is and how transparent it is, and the normal map that says where this position is pointing, are the two main ones.
A shader, technically, is a small program the renderer runs to determine what a pixel in the final image should look like. The shader takes the information about the surface (textures, normals, positions) plus other information from the scene, like lighting direction and color, and outputs a pixel color, position, opacity, etc. for the final image.
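A toy version of that idea in plain Python (a real shader runs on the GPU in a language like HLSL; this only shows the shape of the calculation, and the function name is made up): take the surface color and normal plus the light direction and color, and output a final pixel color.

```
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def toy_pixel_shader(surface_color, surface_normal, light_dir, light_color):
    # Simple diffuse idea: the more the surface points toward the light,
    # the brighter the pixel. light_dir points from the surface toward the light.
    brightness = max(0.0, dot(surface_normal, light_dir))
    return tuple(c * l * brightness for c, l in zip(surface_color, light_color))

# Surface pointing straight up, white light shining straight down onto it.
print(toy_pixel_shader((1.0, 0.5, 0.2), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)))
```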
People often use shader and material interchangeably, but they are different things. The shader is the little program that does the calculation. The material is a collection of shaders and textures that defines a look. It is possible for a material to have more than one shader. You may, for instance, have a glass or metal material capable of producing different types of glass or metal using different shaders internally, but it is one material.
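As a loose sketch of that relationship (not how Unreal actually stores it, and all the names here are invented): the material is the bundle that ties textures and one or more shaders together into a single look you can assign to a model.

```
from dataclasses import dataclass, field

@dataclass
class Material:
    # A material bundles the textures that describe the surface
    # with the shader(s) that do the actual pixel calculations.
    name: str
    textures: dict = field(default_factory=dict)
    shaders: list = field(default_factory=list)

glass = Material(
    name="Glass",
    textures={"base_color": "glass_tint.png", "normal": "glass_surface_n.png"},
    shaders=["clear_glass", "frosted_glass"],  # one material, more than one shader inside
)
```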