Ideas on how Blizzard implemented the Warcraft 3 Terrain Editor

Hey everyone, I’ve wondered for a long time about how Blizzard implemented Warcraft 3’s Terrain Editor. I have some guesses on how it was done, but I’d like your input on:

  • Whether this is the right approach to doing this.
  • What things I got wrong or am missing.
  • What considerations I should take into account if I wanted to implement this as well.
  • Any info on newer techniques or improvements.

Warcraft 3 Terrain Editor Example (Youtube Video)

P.S. - This does depend on having a 2D pathfinding system implemented. I’ve been testing UE’s default pathfinding, and while it’s been okay for some early prototyping, it’s not the same as Warcraft 3’s smoother pathfinding that I like.
This is a separate challenge that I will look into in the future, but these two systems will have to work together.


Warcraft3’s Implementation

I assume they used the Wave Function Collapse algorithm for creating the terrain, or at least one inspired by WFC. I can’t find any explicit material describing it, but it feels very close. (YouTube example of another person’s implementation of WFC in 3D)
Instead of randomly generating procedural worlds, they use it to inform the Terrain Editor on what to fill the world with. Your cursor is constrained by the grid in the world and acts based on the tile you select and the surrounding tiles. For example, in the video above, if you place a Wall that is one unit higher than the surrounding tiles, then all 8 sides will create geo to form a Wall.
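The neighbor check behind that behavior can be sketched like this in plain C++ (the stand-in types and the “lower neighbor needs a wall piece” rule are my assumptions; in UE these would be FIntPoint and a TMap-backed grid):

```cpp
#include <array>
#include <vector>

// Stand-in for FIntPoint; used here as a neighbor offset.
struct IntPoint { int X; int Y; };

// Heights of a 3x3 patch, row-major.
using HeightGrid = std::array<std::array<int, 3>, 3>;

// Return the neighbor offsets (up to 8) whose tile is lower than the
// given tile: each such edge would need wall geo facing that neighbor.
std::vector<IntPoint> WallEdges(const HeightGrid& Grid, int X, int Y) {
    std::vector<IntPoint> Edges;
    for (int DY = -1; DY <= 1; ++DY) {
        for (int DX = -1; DX <= 1; ++DX) {
            if (DX == 0 && DY == 0) continue;
            const int NX = X + DX, NY = Y + DY;
            if (NX < 0 || NY < 0 || NX > 2 || NY > 2) continue;
            if (Grid[NY][NX] < Grid[Y][X])
                Edges.push_back({DX, DY});
        }
    }
    return Edges;
}
```

With a tile raised one unit above a flat 3x3 patch, all 8 edges come back, matching the video’s behavior; each returned offset would then pick one of the wall geo variations.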

They even have multiple geo variations of the Wall, so there is variety in the terrain.

Something to consider is the height elevation that gives advantages/disadvantages to units. For example, you cannot see units at a higher elevation than yourself, and elevation changes serve as natural path blockers.
One way around this is to create Ramps that let you traverse between elevations. I would need some way to inform the pathfinding that a unit is at a certain elevation based on its position and allow a seamless transition between the two.
This will be fed later into a Fog of War (shader-implemented) system.

The Apply Height tool shown in the video is purely cosmetic. Even if I raise terrain to be higher than an actual Wall, I would still treat that raised terrain as being lower than the Wall. It’s one less thing to worry about, and it’s how I want the terrain to work.

For simplicity’s and performance’s sake, the WFC will only work in 2D. Although its data will be translated into the 3D world, its behind-the-scenes logic is only aware of a 2D world. I have no plans to implement multiple floors of grid-based pathfinding, which would allow units to walk over each other at different heights.

I think a brute-force implementation would do all of this on the CPU, but it should eventually be moved over to the GPU to take advantage of the massive parallelism.

Another performance improvement would be implementing a BVH, so that when I update the terrain, only the bare minimum required is updated instead of the entire terrain.
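Even before a full BVH, the same idea can be sketched with a single dirty rectangle (plain C++, my own minimal version; a BVH generalizes this to many small regions):

```cpp
#include <algorithm>
#include <climits>

// Accumulate edited cells into one dirty rectangle, then rebuild only
// that region's mesh instead of the whole terrain.
struct DirtyRect {
    int MinX = INT_MAX, MinY = INT_MAX;
    int MaxX = INT_MIN, MaxY = INT_MIN;

    // Grow the rectangle to include an edited cell.
    void Add(int X, int Y) {
        MinX = std::min(MinX, X); MinY = std::min(MinY, Y);
        MaxX = std::max(MaxX, X); MaxY = std::max(MaxY, Y);
    }

    // True until at least one cell has been marked dirty.
    bool IsEmpty() const { return MinX > MaxX; }
};
```

After each brush stroke you would rebuild the mesh section covering the rectangle and reset it, rather than retriangulating everything.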

Lastly, I would need to create a custom Editor in Unreal Engine: an exact replica of the “Terrain Palette” in the video, where I create a custom Widget that lets you choose between applying textures, raising/lowering Walls, creating Ramps, and making cosmetic terrain height changes.
I don’t know how I would go about starting this, but something that comes to mind is how this would work with UE’s vanilla Terrain Editor. Being able to reuse its existing features (e.g. instanced meshes) would be great.

No, this actually looks quite simple. WFC would fill in areas with tiles that are allowed in combination and most likely to occur. WFC works better for small areas than for an entire landscape and would be pretty overkill here. Besides the difficulty of the initial setup, it is also not a perfect algorithm, since it can create unsolvable areas.

The editor you show is all manual work.

We can make Point a struct:
* float ZHeight
* LandscapeType

We can make Grid a class:
* Map<FIntPoint, Point> PointsOnGrid
* float PointSize (say X100cm,Y100cm,Z100cm).
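A minimal sketch of those two types in plain C++ (std::unordered_map and a simple IntPoint stand in for UE’s TMap and FIntPoint; the enum values are my assumptions):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

// Stand-in for FIntPoint, with the hash std::unordered_map requires.
struct IntPoint {
    int X; int Y;
    bool operator==(const IntPoint& O) const { return X == O.X && Y == O.Y; }
};
struct IntPointHash {
    std::size_t operator()(const IntPoint& P) const {
        return std::hash<int64_t>()((int64_t(P.X) << 32) ^ uint32_t(P.Y));
    }
};

// Example tile categories; pick whatever your terrain set needs.
enum class ELandscapeType : uint8_t { Water, Land, Mountain };

// "Point": everything stored per tile center.
struct FPoint {
    float ZHeight = 0.f;
    ELandscapeType LandscapeType = ELandscapeType::Land;
};

// "Grid": sparse map of occupied XY cells plus the cell size.
class FGrid {
public:
    std::unordered_map<IntPoint, FPoint, IntPointHash> PointsOnGrid;
    float PointSize = 100.f; // 100cm per cell on each axis
};
```

The map being sparse is what makes “air etc. has none” work: a missing key simply means no tile there.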

Here’s where the fun starts.

What we see in the editor is that every point on the grid is somehow connected to another. At this point we should have collected “Point” structs at all XY positions of the Grid class where we need one (air etc. has none; Z height is stored on a point).

Every point represents a tile connected to another one. First we need to triangulate the points. Grab my Delaunay implementation from here:

Math in Delaunay triangulation algorithm, too many triangles appearing - #4 by Roy_Wierer.Seda145

Now you have the entire surface plane triangulated, ready to be passed on to Unreal’s procedural mesh system.

Next up: you only have the center points of your tiles, which doesn’t look as interesting. They are just points and don’t contain the nice randomized edges we see on the land-to-mountain / water tiles. We can calculate those either before or after triangulation.

Next up we see how materials are blended from one tile to another. You could say “Z level 0 is water”, or “a slope of 50 degrees up from water is land, from land to mountain is stone, from mountain to mountain is stone”, since there are not that many possibilities here. Anything you paint over it manually would be stored as a “post” effect to apply on that base data. This allows you to create that neat grass but also custom details like fallen leaves or bones or scorched earth.
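Those rules reduce to a small classification function. A sketch in plain C++ (the exact thresholds come from the rules above; the enum and function names are mine):

```cpp
enum class ELandscapeType { Water, Land, Stone };

// Classify a tile from its Z height and the slope (in degrees) toward
// its lowest neighbor. "Z level 0 is water", steep slopes become stone,
// everything else is plain land; paint layers are applied on top later.
ELandscapeType Classify(float Z, float SlopeDegrees) {
    if (Z <= 0.f) return ELandscapeType::Water;
    if (SlopeDegrees >= 50.f) return ELandscapeType::Stone;
    return ELandscapeType::Land;
}
```

The material blend itself would then read the types of the two tiles sharing an edge and pick a transition texture.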

For pathing you could think of A* or vector fields. Z pathing is irrelevant. You could say “if a point is missing, or if the Z difference to a neighbor tile is > 2, then obstacle”, or “if Z == 0 (water), then obstacle”. Of course you could store this data in advance, since it is constant on the map during gameplay. Having an editor for this could give some unique benefits, as you could manually say “you can walk through this / up this” without depending on such hardcoding. This is common when you decide to path ladders for AI, or magic bridges, etc.
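The obstacle rule itself is a few lines. A sketch in plain C++ (std::optional stands in for “point missing from the grid map”; the thresholds are the ones quoted above):

```cpp
#include <cmath>
#include <optional>

// Edge test between a unit's tile and a neighbor tile, following the
// rules above: a missing point, water (Z == 0), or a Z difference
// greater than 2 units all block movement across that edge.
bool IsEdgeBlocked(std::optional<float> FromZ, std::optional<float> ToZ) {
    if (!FromZ || !ToZ) return true;                  // missing point
    if (*ToZ == 0.f) return true;                     // water
    if (std::fabs(*ToZ - *FromZ) > 2.f) return true;  // cliff
    return false;
}
```

A* or a vector field would call this once per neighbor while expanding a tile; precomputing it into a boolean grid at map load works too, since the terrain is constant during gameplay.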

https://www.youtube.com/watch?v=-L-WgKMFuhE

https://www.youtube.com/watch?v=ZJZu3zLMYAc

A shader-implemented fog of war is a start if you just want to darken space or hide units, but it also means they are actually still there, just rendered differently. This leads to silly issues even seen in games like AoE4 today: being able to click on mines you can’t see, knowing trees are gone before the renderer shows it, or being a smartass and reading the data from memory so you can hack the fog of war.

Put the grid in a GameInstanceSubsystem or another place you can reach globally (GameMode is a good place on a server). Then convert your unit location from FVector to FIntPoint (divide it by the point dimensions, then round to int) and read which XY position on the grid your unit is on. Retrieve the Z of the point, and do the same for the enemy unit. Check the difference between your unit and the target (TargetZ - YourZ): if it’s up, then disadvantage; otherwise advantage. How you implement this will probably affect a lot of things, from AI to fog to squad to per-unit logic, so it is interesting enough to collect in a subsystem of some sort.
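The conversion and comparison look roughly like this in plain C++ (floats stand in for FVector, and the names are mine; PointSize is the cell size in cm from the Grid class above):

```cpp
#include <cmath>

struct GridPos { int X; int Y; };

// FVector -> FIntPoint equivalent: divide the world location by the
// cell dimensions, then round to the nearest integer cell.
GridPos WorldToGrid(float WorldX, float WorldY, float PointSize) {
    return { int(std::lround(WorldX / PointSize)),
             int(std::lround(WorldY / PointSize)) };
}

// TargetZ - YourZ: positive means the target is higher, so the
// attacker is at a disadvantage; negative means advantage.
float ElevationDelta(float TargetZ, float YourZ) {
    return TargetZ - YourZ;
}
```

With PointSize = 100cm, a unit at world (250, -160) lands on grid cell (3, -2); look up both units this way, then feed the delta to combat, vision, and AI.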

Unless you need the 1001 features, which I doubt for an RTS, make your own. It’s as simple as described above. Unreal comes with 1001 things getting in the way when you want 20. It does come with some useful stuff, like the ProceduralMesh, which you can feed from the Delaunay triangulation; no need for the UE landscape itself.

I would not even use the movement components, perhaps not even the Pawn class for units, but that’s another story. You don’t need the default pathing implementation or movement if you are simply going to move things on XY, snapped to a Z, at a constant speed like Warcraft does. Saves the headache.

// FDelaunay is the implementation from the thread linked above.
FDelaunay Delaunay = FDelaunay();
const TArray<FIntVector> Triangles = Delaunay.Triangulate2D(Points);

// Do whatever you want with the triangles if need be: they can be used
// for pathing on their own, modified here, or fed to another algorithm
// to generate a fancier mesh.

// TArray<FVector> Verts;
// TArray<int32> Indices;

// Spawn the mesh; the final "true" requests collision data for the section.
ProceduralMeshComponent->CreateMeshSection_LinearColor(0, Verts, Indices, TArray<FVector>(), TArray<FVector2D>(), TArray<FLinearColor>(), TArray<FProcMeshTangent>(), true);
// Enable collision on the component. (ContainsPhysicsTriMeshData only
// reports whether tri-mesh data exists; it doesn't enable anything.)
ProceduralMeshComponent->SetCollisionEnabled(ECollisionEnabled::QueryAndPhysics);

“I don’t know how I would go about starting this.”:
Make a plugin as an editor module and write the editor there. Export the created data in any way you like, and try to avoid UAssets and Blueprints like the plague.
