Navmesh for spherical world

Along with everyone else, I came to the conclusion that a simple hexagon-based graph search is not going to cut it for my spherical world.

So I’ve implemented a kind of navmesh solution. It is very loosely inspired by Recast (it combines hexagons into big convex regions), and it has the properties you would expect from a navmesh: faster pathfinding, it is dynamic, and agents return to the path more quickly. It is nowhere near as good, but I think it might be good enough. Not shown is the ability to pathfind through hexagon vertices (you don’t see it because it is low priority and not needed in the demo).
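The two-level idea can be sketched roughly like this: cells are grouped into regions, pathfinding runs on the much smaller region adjacency graph, and the region corridor would then be refined into actual steps. Square cells stand in for the hexagons, and the `region_of` table is a made-up example of a merge result — none of this is the actual implementation.

```python
from collections import deque

# region_of maps each walkable cell to its region id. This is assumed
# input, e.g. the output of merging hexes into big convex regions.
region_of = {
    (0, 0): "A", (0, 1): "A", (1, 0): "A",
    (1, 1): "B", (1, 2): "B",
    (2, 1): "C", (2, 2): "C",
}

def region_graph(region_of):
    """Regions are adjacent if any of their cells touch."""
    adj = {}
    for (r, c), reg in region_of.items():
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            other = region_of.get(n)
            if other and other != reg:
                adj.setdefault(reg, set()).add(other)
    return adj

def region_path(adj, start, goal):
    """Plain BFS on the region graph; the real thing would use A*."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        reg = frontier.popleft()
        if reg == goal:
            path = []
            while reg is not None:
                path.append(reg)
                reg = came_from[reg]
            return path[::-1]
        for nxt in sorted(adj.get(reg, ())):
            if nxt not in came_from:
                came_from[nxt] = reg
                frontier.append(nxt)
    return None

adj = region_graph(region_of)
print(region_path(adj, "A", "C"))  # ['A', 'B', 'C']
```

The payoff is that the searched graph has one node per region instead of one per hexagon, which is where the "faster pathfinding" property comes from.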

The entire pathfinding and navmesh updating runs in its own low-priority thread using a lockless design. The AI queues up a pathfinding task and gets a callback once the pathing is done. The callback schedules a task on the game thread; that task then reads from the task output queue and updates the AI’s path. Updates to the navmesh happen the same way.
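A minimal sketch of that producer/consumer layout, using Python’s thread-safe (lock-based) `queue.Queue` as a stand-in for the lockless queues described above. All names here (`path_requests`, `drain_results`, etc.) are illustrative, not the actual API.

```python
import threading
import queue

path_requests = queue.Queue()   # game thread -> pathfinding thread
path_results = queue.Ueue() if False else queue.Queue()  # pathfinding thread -> game thread

def find_path(start, goal):
    # Placeholder for the real navmesh search.
    return [start, goal]

def pathfinding_worker():
    """Low-priority worker: consume requests, publish results."""
    while True:
        request = path_requests.get()
        if request is None:          # sentinel: shut the worker down
            break
        agent_id, start, goal = request
        path_results.put((agent_id, find_path(start, goal)))

def drain_results():
    """Run on the game thread each frame: apply any finished paths."""
    applied = []
    while True:
        try:
            agent_id, path = path_results.get_nowait()
        except queue.Empty:
            break
        applied.append((agent_id, path))  # would update the AI's path here
    return applied

worker = threading.Thread(target=pathfinding_worker, daemon=True)
worker.start()
path_requests.put(("pawn_1", (0, 0), (5, 3)))
path_requests.put(None)
worker.join()
print(drain_results())  # [('pawn_1', [(0, 0), (5, 3)])]
```

The point of the double-queue shape is that neither thread ever blocks on the other: the game thread only ever drains whatever results happen to be ready that frame.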

I was not clear in the video, but at one point you see a path running through a hexagon. That is because the pathfinding happens just before the navmesh update.

Here is the video with an actual pawn (it uses collisions to determine if it is stuck)

Very cool stuff, nice work!

What’s the reason the pawn keeps going, hits the obstacle and then repaths? It seems like quite a long delay; is it actually taking that long to recalculate, or is it merely the repathing frequency you’re using?


It is just my dumb stuck-detection logic: the pawn has to collide 3 times over a period of time before it decides it is stuck. In practice, it would be simpler to look ahead to see if anything had changed, but there would still be a similar fallback just in case. When it gets to the end, likewise, it just waits for a bit and sets off again.
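The "3 collisions within a window" check could look something like the sketch below. The threshold and window values are made-up, and `StuckDetector` is a hypothetical name, not the actual class.

```python
import time

class StuckDetector:
    """Flag a pawn as stuck after N collisions inside a sliding time window."""

    def __init__(self, hits_needed=3, window=2.0):
        self.hits_needed = hits_needed
        self.window = window          # seconds
        self.hit_times = []

    def on_collision(self, now=None):
        """Record a collision; return True once the pawn counts as stuck."""
        now = time.monotonic() if now is None else now
        self.hit_times.append(now)
        # Keep only collisions that fall inside the sliding window.
        self.hit_times = [t for t in self.hit_times if now - t <= self.window]
        if len(self.hit_times) >= self.hits_needed:
            self.hit_times.clear()    # reset so the next repath starts fresh
            return True
        return False

detector = StuckDetector()
print(detector.on_collision(now=0.0))  # False
print(detector.on_collision(now=0.5))  # False
print(detector.on_collision(now=1.0))  # True -> trigger a repath
```

Requiring several hits inside a window (rather than reacting to the first collision) is what filters out harmless grazing contacts, at the cost of the visible delay in the video.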

The actual pathfinding is on its own separate thread, so depending on how far it needs to go, it could be quite slow. However, I think this will be augmented by a dynamic policy graph.

Great work! We spoke previously by e-mail about just this; it is great work and I’m glad to see you progressing with it. Hopefully it becomes less ingrained, as you mentioned before, as this would be a big help on a personal level :smiley:

Here are multiple agents following a policy map. There is no flocking-style logic going on, just pawns with a capsule collision following a policy. When I add in the flocking bits (probably inspired by RVO), it should be much better.
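A policy map of this kind (often called a flow field) can be sketched as follows: each cell stores the next cell on the way to the goal, so any number of agents can follow it without running per-agent searches. Square grid cells stand in for the hexagons, and this is an illustration of the general technique, not the actual code.

```python
from collections import deque

def build_policy(grid, goal):
    """BFS outward from the goal; policy[cell] = next cell to step to."""
    rows, cols = len(grid), len(grid[0])
    policy = {goal: goal}            # the goal points at itself
    frontier = deque([goal])
    while frontier:
        r, c = cell = frontier.popleft()
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = n
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and n not in policy:
                policy[n] = cell     # stepping to `cell` moves toward the goal
                frontier.append(n)
    return policy

def follow(policy, start, max_steps=100):
    """Walk the policy from `start` until the goal (a self-loop) is reached."""
    path, cell = [start], start
    while cell != policy.get(cell) and len(path) < max_steps:
        cell = policy[cell]
        path.append(cell)
    return path

grid = [[0, 0, 0],
        [0, 1, 0],   # 1 = blocked cell
        [0, 0, 0]]
policy = build_policy(grid, goal=(2, 2))
print(follow(policy, (0, 0)))  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
```

The policy is computed once per goal, so the per-pawn cost each frame is a single table lookup, which is why hundreds of pawns can share one map cheaply.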

It looks like 2.5D pathing, but it really is not (at least in theory). That means you can have bridges or tunnels.

I found out why it was taking so long to repath in my earlier video: I had a 1-second delay between working out that it needed a new path and actually creating one.

with 256 pawns…

just showing region pathfinding still works with policy…