Navmesh Pathfinding Heuristic Scale

I’ve recently been working with Navmeshes, and noticed that when I place a NavMeshBoundsVolume, let the NavMesh build, and then inspect the RecastNavMesh-Default object in the level, the default value of the “Heuristic Scale” property under the Pathfinding category is 0.999. I haven’t yet looked through the engine’s source code far enough to track down exactly where this value is used, but the name of the property combined with its tooltip (“Euclidean distance heuristic scale used while pathfinding”) leads me to believe that the heuristic cost in the A* pathfinding algorithm is multiplied by this value.

So, if that’s correct, where the cost of a node in A* would normally be f = g + h, with this parameter it becomes f = g + 0.999 * h. If the property is indeed used like this, it seems like a fairly poor default value. With an admissible heuristic (which Euclidean distance is), optimal paths are still guaranteed for a scale of 1.0, and a larger scale generally tends to result in faster pathfinding. So a value of 1.0 instead of 0.999 would be the best scale if optimality of shortest paths is required, and a value slightly larger than 1.0 (such as 1.001) would generally find paths more quickly, with the possibility of those paths being slightly sub-optimal.
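To make that concrete, here is a minimal toy sketch of my own (not the engine’s code, and all the names are mine) of a grid-based A* where such a scale enters the cost computation. The only thing the scale touches is the f value used to order the open list; the accumulated cost g is left alone:

// Toy grid A*, 4-connected with unit step costs, illustrating f = g + scale * h.
// Illustrative code only, not taken from the engine.
#include <cmath>
#include <cstdio>
#include <queue>
#include <vector>

struct Node { int x, y; float g, f; };

// Euclidean distance to the goal, multiplied by the heuristic scale.
static float Heuristic(int x, int y, int gx, int gy, float scale)
{
    const float dx = float(gx - x), dy = float(gy - y);
    return scale * std::sqrt(dx * dx + dy * dy);
}

int main()
{
    const int W = 10, H = 10;                  // grid size
    const int sx = 0, sy = 0, gx = 9, gy = 9;  // start and goal cells
    const float scale = 0.999f;                // the value under discussion

    auto cmp = [](const Node& a, const Node& b) { return a.f > b.f; };
    std::priority_queue<Node, std::vector<Node>, decltype(cmp)> open(cmp);
    std::vector<float> bestG(W * H, 1e30f);    // best known g per cell

    open.push({sx, sy, 0.0f, Heuristic(sx, sy, gx, gy, scale)});
    bestG[sy * W + sx] = 0.0f;

    int expanded = 0;
    while (!open.empty())
    {
        const Node n = open.top(); open.pop();
        if (n.g > bestG[n.y * W + n.x]) continue;   // stale open-list entry
        ++expanded;
        if (n.x == gx && n.y == gy) break;          // goal reached

        const int dx[] = { 1, -1, 0, 0 }, dy[] = { 0, 0, 1, -1 };
        for (int i = 0; i < 4; ++i)
        {
            const int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
            const float g = n.g + 1.0f;             // unit step cost
            if (g < bestG[ny * W + nx])
            {
                bestG[ny * W + nx] = g;
                // The scale only affects the ordering term f, never g itself.
                open.push({ nx, ny, g, g + Heuristic(nx, ny, gx, gy, scale) });
            }
        }
    }
    std::printf("path cost %.1f, nodes expanded %d\n", bestG[gy * W + gx], expanded);
    return 0;
}

With a scale of 1.0 this still expands nodes in best-first order and returns an optimal path; with 0.999 it behaves almost identically but weighs the heuristic ever so slightly less, which is exactly what puzzles me about the default.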

I’d be interested to learn why the value 0.999 was chosen as a default value here in UE, or what’s wrong with my logic above.

I’ve done some more work tracking down where the pathfinding implementation lives in the engine’s source code, and it seems to be the dtNavMeshQuery::findPath() function implemented in Source\Runtime\Navmesh\Private\Detour\DetourNavMeshQuery. I instantly recognized it as an A* implementation (which was to be expected), and indeed an H_SCALE parameter initialized as


const float H_SCALE = filter->getHeuristicScale();

is used in precisely the way that I described in the original post. Assuming this is the same Heuristic Scale property (I didn’t fully track down where filter->getHeuristicScale() leads, but it would be very strange if it didn’t ultimately end up being the same property), I’m pretty sure that a value of 1.0 would be strictly better than 0.999: the same guarantee of optimality, but a search that is more focused towards the goal node. A value slightly larger than 1.0 would in most cases still be more desirable for runtime performance. I’d still be very interested to hear where my reasoning goes wrong (if it does), or why 0.999 was chosen as the default value in the engine, now that I know that some of the assumptions I made in the OP are correct.
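For reference, the lines that actually use it look roughly like this (paraphrased from what I saw, so take the exact variable names with a grain of salt):

cost = bestNode->cost + curCost;                            // g: accumulated path cost so far
heuristic = dtVdist(neighbourNode->pos, endPos) * H_SCALE;  // h: Euclidean distance to the goal, scaled
const float total = cost + heuristic;                       // f = g + H_SCALE * h, used to order the open list

So it really is the textbook f = g + scale * h, with the scale applied only to the heuristic term.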

Dennis, hi!

This just caught my eye. I too need A* pathing in my RTS game. If UE4 has already implemented it, all the better. Have you had any resolution to your queries above? Would be cool to know if you found out more?

Also, have you used multithreading for pathing calculations?

I haven’t looked at the nav code at all myself, but I’m not sure why the implementation would scale the addition of the heuristic instead of just providing methods of altering the heuristic itself. Isn’t that the whole point of a modifiable heuristic? Seems like adding a scale on top of the heuristic definition would be like adding another step for the hell of it.

Is it possible that the value refers to the optimism of the heuristic? If I remember correctly, the heuristic h(n) must be admissible (it never overestimates the cost to reach the goal, i.e. it is optimistic) and ideally h(n) would be consistent as well. Consistency is essentially the triangle inequality (no side of a triangle may be longer than the other two sides combined); in other words, if A and B are states, it can be expressed as h(A) - h(B) <= cost(A,B). Consistency is stronger than admissibility, since admissibility only concerns what happens between a given state and the goal state, whereas consistency concerns what happens between any two given states. Any consistent heuristic is also admissible. Perhaps Epic made this optimism somewhat scalable.
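To spell out why consistency is the stronger property (using the usual convention that h(goal) = 0): if s0, s1, …, sk is any path from a state A = s0 to the goal sk, then chaining the per-step inequalities gives

h(A) = h(A) - h(goal) = [h(s0) - h(s1)] + [h(s1) - h(s2)] + … + [h(sk-1) - h(sk)] <= cost(s0,s1) + … + cost(sk-1,sk),

i.e. h(A) is at most the cost of that path, and in particular at most the cost of the optimal path, which is exactly admissibility.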

Edit: saw your second post; I can’t think of why they actually added a scale on top of the heuristic definition. I’m sure there’s a reason behind it, and I’d be curious to know what it is.

No, I haven’t found out more. At the time of that post I was only looking into the navigation system a little bit; I happened to see that parameter and was curious about it, but I never got deep enough into the navigation code that I really needed (or wanted) to know the details, so I didn’t continue looking into it.

The heuristic of A* only needs to be admissible if you want a guarantee of finding an optimal (shortest-distance) path. Euclidean distance (which I’m fairly sure is what is used as the heuristic) is indeed admissible, and it’s even exactly correct in cases where there are no obstacles (you can then move in a straight line towards the goal, so the Euclidean distance is exactly the distance you have to travel). So, if you really need a guarantee of optimality and you’re using Euclidean distance as the heuristic, the heuristic should be scaled by 1.0 at most.

Generally though, in a game, you don’t really need exactly optimal paths and it’s fine if the agents take slightly longer paths than necessary. In that case I would say a scale slightly larger than 1.0 (for instance, 1.01) would be good, since it still results in near-optimal paths and often speeds up the pathfinding process because the search is focused more towards the goal. See: Heuristics
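There is also a known bound that makes “near-optimal” precise: with an admissible h and f = g + w * h for some fixed w >= 1, the path A* returns costs at most w times the optimal cost. So with a scale of 1.01 the worst case is a path 1% longer than necessary, e.g. if the true shortest path is 1000 units, the path you get back is at most 1010 units, which is usually a very cheap price to pay for the faster search.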

So, with that in mind… I’m still not sure what the point of a scale less than 1.0 would be. The way I see it: 1.0 is the largest scale that still guarantees finding an optimal path, and therefore the one with the best processing time among those that do; a scale lower than 1.0 only increases processing time at no gain; and a scale larger than 1.0 will in most cases reduce processing time, but can no longer guarantee an optimal path.

The only reason I can think of right now for using a scale less than 1.0 is that the scale is actually used in an entirely different way than I think it is, or maybe that they want to really REALLY guarantee finding an optimal path and are worried that the Euclidean distance might in rare cases overestimate the distance to the goal due to rounding errors in floating-point arithmetic.
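If that last guess is the reason, the arithmetic at least works out as a small safety margin. Suppose (as a purely illustrative bound, I have no idea what the real error is) that floating-point rounding inflates the computed Euclidean distance by at most 0.05%, so the computed h is at most 1.0005 times the true remaining distance D. Then 0.999 * 1.0005 * D ≈ 0.9995 * D, which is still below D, so the scaled heuristic can never overestimate and admissibility is preserved, at the cost of the slightly less focused search described above.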