I am looking into ways to implement a simple terrain surface that allows for real-time deformation. Procedural meshes seem to be the answer.
Recently I have put together the following setup:
This is a procedural mesh component defined by 4 (relative) points.
P1 = (0, 0, 0)
P2 = (0, 200, 0)
P3 = (200, 0, 0)
P4 = (200, 200, 0)
In my event graph, I create a mesh section, feeding in the 4 points as vertices. When I start the game, the mesh is generated as a 200x200 tile. Great!
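For reference, here is the same single-tile data expressed as plain C++ arrays. This is just a sketch of the vertex/triangle layout, not the actual UE API: `std::vector` stands in for UE's `TArray`, and the `Vec3` struct stands in for `FVector`.

```cpp
#include <vector>
#include <cstdint>

struct Vec3 { float x, y, z; };

// The four corners from above, in the same order (P1..P4).
std::vector<Vec3> Vertices = {
    {0.0f,   0.0f,   0.0f},   // P1
    {0.0f,   200.0f, 0.0f},   // P2
    {200.0f, 0.0f,   0.0f},   // P3
    {200.0f, 200.0f, 0.0f},   // P4
};

// The Triangles input is a flat list of vertex indices, three per triangle.
// Winding order decides which side of each face is visible, so flip a
// triangle's index order if the tile renders facing the wrong way.
std::vector<int32_t> Triangles = { 0, 1, 2,  2, 1, 3 };
```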
I then used a nested loop to create a 10x10 grid of these meshes, essentially creating 100 tiles in a square to be used as a terrain surface. Now it takes a couple of seconds for the game to start, but that's not really an issue. The problem is the continued low performance when moving around: it turns out that having 100 procedural mesh actors on screen at the same time takes its toll. Since my camera is top-down, I could limit the number of actors on screen by lowering the camera or increasing the size of the tiles, but that seems like a poor solution.
So I guess it would make sense to use a single 2000x2000 mesh rather than 100 meshes of 200x200 each. But this introduces a problem: I don't know how to do that. I would need vertices every 200 (or fewer) units, effectively ending up with a mesh like this:
I need all these vertices because I will be manipulating them in real time to deform the terrain. The value I pass to the Vertices input would need to be an array of vectors containing all these points. I suppose I could run another nested loop, adding vertices to an array until it encompasses the grid, but what about the triangles? Honestly, I don't know how to tackle this.
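The nested-loop idea does work; the triangles fall out of the same index math. A minimal C++ sketch of one way to do it (standalone, not UE API; `Vec3` and `BuildGrid` are names I made up): an N-by-N grid of quads shares (N+1)*(N+1) vertices, and each quad contributes two triangles built from the indices of its four corners.

```cpp
#include <vector>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Build an NxN-quad grid as one mesh: (N+1)*(N+1) shared vertices,
// and a flat index list with 6 indices (two triangles) per quad.
void BuildGrid(int quadsPerSide, float quadSize,
               std::vector<Vec3>& vertices, std::vector<int32_t>& triangles)
{
    const int verts = quadsPerSide + 1;  // vertices per side
    vertices.clear();
    triangles.clear();

    // Vertices in row-major order: index = y * verts + x.
    for (int y = 0; y < verts; ++y)
        for (int x = 0; x < verts; ++x)
            vertices.push_back({ x * quadSize, y * quadSize, 0.0f });

    // Two triangles per quad, referencing the shared vertices by index.
    for (int y = 0; y < quadsPerSide; ++y)
        for (int x = 0; x < quadsPerSide; ++x)
        {
            const int32_t i = y * verts + x;  // this quad's first corner
            // Flip the index order if the faces point the wrong way.
            triangles.insert(triangles.end(), { i, i + verts, i + 1 });
            triangles.insert(triangles.end(), { i + 1, i + verts, i + verts + 1 });
        }
}
```

With quadsPerSide = 10 and quadSize = 200 this gives the 2000x2000 surface: 121 vertices and 600 triangle indices. Because the vertices are shared between quads, moving one vertex deforms all adjacent triangles at once, which is exactly what you want for terrain. The same two loops translate directly to Blueprint nested ForLoops feeding an array of vectors and an array of ints.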
Is this even how it is normally done? I feel like people must have come across this issue and dealt with it, so I'm looking for other solutions here.