Procedural Mesh Component and Normal vectors

In the header file of ProceduralMeshComponent we have the following description of one of the class methods:

   /*
	*	Create/replace a section for this procedural mesh component.
	*	@param	SectionIndex		Index of the section to create or replace.
	*	@param	Vertices			Vertex buffer of all vertex positions to use for this mesh section.
	*	@param	Triangles			Index buffer indicating which vertices make up each triangle. Length must be a multiple of 3.
	*	@param	Normals				Optional array of normal vectors for each vertex. If supplied, must be same length as Vertices array.
	*	@param	UV0					Optional array of texture co-ordinates for each vertex. If supplied, must be same length as Vertices array.
	*	@param	VertexColors		Optional array of colors for each vertex. If supplied, must be same length as Vertices array.
	*	@param	Tangents			Optional array of tangent vector for each vertex. If supplied, must be same length as Vertices array.
	*	@param	bCreateCollision	Indicates whether collision should be created for this section. This adds significant cost.
	*/

Now I am confused: why is the Normals array the same size as the Vertices array, instead of the indices? Here is how I understand mesh rendering: once we get the basic shape from vertices and indices, we also want a map of normal vectors. Every triangle has at least one normal vector, and if we want to simulate some depth, we can interpolate the normal linearly from one triangle vertex to another. If I were designing a library for mesh processing, my first decision would be to make the Normals array the size of the index array (described here as Triangles). Of course that can be wrong, but I want to know why. When I make the Normals array the size of the Vertices array, I decide that every vertex has one and only one normal vector. But every vertex is used by at least two different triangles, which in general do not face the same direction, so one vertex participates in two (or more) normal vectors.
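To make the question concrete, here is roughly how I picture the one-normal-per-vertex layout being filled in: each triangle's face normal is accumulated into its three vertices and the result is normalized. This is a plain C++ sketch (Vec3 is my stand-in for Unreal's FVector; none of this is engine code):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal 3D vector; stand-in for Unreal's FVector (assumption: plain C++,
// no engine dependency).
struct Vec3 { float x = 0, y = 0, z = 0; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 Cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// One normal per vertex: accumulate each triangle's face normal into its
// three vertices, then normalize. A vertex shared by several triangles ends
// up with the (area-weighted) average of the adjacent face normals, which is
// exactly the smooth shading this layout is designed for.
std::vector<Vec3> ComputeSmoothNormals(const std::vector<Vec3>& vertices,
                                       const std::vector<int>& triangles) {
    std::vector<Vec3> normals(vertices.size());
    for (std::size_t i = 0; i + 2 < triangles.size(); i += 3) {
        int a = triangles[i], b = triangles[i + 1], c = triangles[i + 2];
        // Cross product of two edges: face normal scaled by twice the
        // triangle area, so larger faces contribute more.
        Vec3 face = Cross(vertices[b] - vertices[a], vertices[c] - vertices[a]);
        normals[a] = normals[a] + face;
        normals[b] = normals[b] + face;
        normals[c] = normals[c] + face;
    }
    for (Vec3& n : normals) n = Normalize(n);
    return normals;
}
```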

The only argument for the actual solution I can think of right now is size (which is also important). We have far more indices than vertices — in a typical closed triangle mesh there are roughly twice as many triangles as vertices, hence about six times as many indices — so even though the chosen layout loses some data, it saves memory and some computing time.
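For example, counting the normal buffer for a cube (my own illustrative numbers, nothing from the ProceduralMeshComponent docs): 8 shared corner vertices versus 12 triangles, i.e. 36 indices.

```cpp
#include <cassert>
#include <cstddef>

// Assumed example mesh: a cube with 8 shared vertices and 12 triangles
// (36 indices). Figures below are illustrative only.
constexpr std::size_t kVertexCount    = 8;
constexpr std::size_t kTriangleCount  = 12;
constexpr std::size_t kIndexCount     = kTriangleCount * 3;   // 36
constexpr std::size_t kBytesPerNormal = 3 * sizeof(float);    // 12 bytes

// One normal per vertex (the layout CreateMeshSection uses).
constexpr std::size_t PerVertexNormalBytes() { return kVertexCount * kBytesPerNormal; }

// One normal per index (the layout proposed in the question).
constexpr std::size_t PerIndexNormalBytes() { return kIndexCount * kBytesPerNormal; }
```

For the cube this is 96 bytes versus 432 bytes — the per-index layout is 4.5 times larger before the mesh even gets interesting.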

I know it has been a while, but hopefully this is still helpful:

Having multiple normals per vertex, associated with the polygons, is a technique usually called “split normals”; it is primarily used in non-game modelling or in very specific cases for games. Using a single normal per vertex is the “standard” approach for two reasons:

  1. It is significantly more efficient (for obvious reasons), and that can matter quite a lot.
  2. In practical terms, the vast majority of meshes dealt with in a game engine either look terrible with split normals (if they are supposed to model a surface that is not infinitely sharp at the polygon edges), or are not used in a context where that level of detail has any visible effect on most machines running the game.
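When a hard edge really is wanted, the usual workaround keeps the one-normal-per-vertex layout and simply duplicates vertices so each face gets its own flat normal. A plain C++ sketch of that idea (again using a hypothetical Vec3 in place of FVector, not engine code):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal 3D vector; stand-in for Unreal's FVector (assumption: plain C++,
// no engine dependency).
struct Vec3 { float x = 0, y = 0, z = 0; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 Cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

struct FlatMesh {
    std::vector<Vec3> vertices;
    std::vector<int>  triangles;
    std::vector<Vec3> normals;  // still exactly one normal per vertex
};

// Faceted ("split") shading under a one-normal-per-vertex layout: emit a
// fresh copy of each vertex for every triangle that uses it, so each face
// carries its own flat normal. This is the kind of duplication that
// overlapping-vertex imports rely on.
FlatMesh SplitForFlatShading(const std::vector<Vec3>& vertices,
                             const std::vector<int>& triangles) {
    FlatMesh out;
    for (std::size_t i = 0; i + 2 < triangles.size(); i += 3) {
        Vec3 a = vertices[triangles[i]];
        Vec3 b = vertices[triangles[i + 1]];
        Vec3 c = vertices[triangles[i + 2]];
        Vec3 n = Normalize(Cross(b - a, c - a));
        int base = static_cast<int>(out.vertices.size());
        out.vertices.insert(out.vertices.end(), {a, b, c});
        out.normals.insert(out.normals.end(), {n, n, n});
        out.triangles.insert(out.triangles.end(), {base, base + 1, base + 2});
    }
    return out;
}
```

The cost is visible in the counts: a mesh with V shared vertices and T triangles comes out with 3·T vertices, which is exactly the size argument above in reverse.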

At least for skeletal meshes you can use the import option “Keep Overlapping Vertices” to get around this fairly straightforwardly; you just have to make sure your modelling software outputs a separate copy of the vertex for each normal. There shouldn’t be enough of those to cause issues, since they really only matter in “edge cases.”