Improving Heuristics

 from Red Blob Games
DRAFT
19 May 2015

I had put this on hold in 2015 and revisited it in late 2016, and then again in 2018 and 2019 and 2022 and 2024. Consider it an early draft that needs a lot of work. This is how I often work — I have a jumbled mess of ideas, put them into an outline, find the flow or explanations don’t work, and then rewrite it. And then rewrite it again and again until I make an explanation I really like. It can take a long time.

For optimizing A* we usually look at the priority queue or the map representation. Often overlooked is improving the heuristic function. Consider this map:

The shortest path follows “travel time” numbers downwards, ending in 0 when there’s no more travel time remaining:

But to calculate that “distance field” we need to run Dijkstra’s Algorithm. Instead, A* uses a “heuristic function” to estimate the travel time:

The estimate and the true travel time are not the same of course. In this map they are pretty close near the goal (purple X). But near the starting point (red blob) the estimated travel time is only half the true travel time. Let’s look at them side by side:

Let’s look at the difference:

Why does this difference matter? Ideally the difference would be close to 0 everywhere.

The closer the estimate is to the true travel time, the faster A* runs.

The rest of the page is incomplete; skip ahead to the demos!!

 1  TODO Demo#

I’ve claimed that improving the heuristic can make A* faster, but does it? Here’s a real-world example, using the Denerim Bloodmage map from Dragon Age: Origins. It’s a × grid with walkable tiles.

Try moving the red and blue circles. The straight line distance between them is but the actual path is . The mismatch means A* has to explore nodes, marked in blue:

What if we could improve the heuristic from to ? (The best would be the actual graph distance of .) Instead of exploring nodes, A* would explore only nodes:

These are interactive diagrams. You can move the blue and red points around. What are the green nodes? They’re the locations we use for precalculating some of the distances.

{{ TODO: add arrowhead to path, as well as blob & X for start/end, maybe green Δ for landmarks }}

{{ TODO: because this is a grid, the differential heuristics don’t work as well as if it were a navmesh. But it’d be interesting to compare the number of nodes explored with the landmark heuristic with the number of nodes explored with the exact cost. That’s the best case! And it’s still not great because of the grid. Maybe an appendix. }}

{{ TODO: as you drag we can collect info on heuristic value vs nodes explored, and start making a scatter plot }}

 2  Precalculated costs#

Suppose we’re finding the shortest path from A to Z. A* needs a heuristic function giving an estimated cost from n to Z, for many different nodes n, but always to the destination Z. Let’s suppose for this particular Z we have precalculated cost(n, Z) for all nodes n. What would happen? We could use this as our heuristic, and A* would run fast! But we could only use it when finding a path to Z.
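As a sketch, such a heuristic would be nothing more than a table lookup; here cost_to_Z is a hypothetical table holding the precalculated cost(n, Z) for every node n:

/* hypothetical lookup table, filled in ahead of time with Dijkstra's Algorithm */
function exact_heuristic(n, z) {
    return cost_to_Z[n]; /* the true cost, so A* expands almost nothing off the shortest path */
}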

What if there’s some other node L where we have precalculated the cost(n, L) from all other nodes n? Can we use that to find the shortest path from A to Z? Yes, but only when Z is on the path from A to L:

{{ diagram: A → Z → L, with Z on the shortest path from A to L }}

cost(A, L) = cost(A, Z) + cost(Z, L)

In this case we know cost(A, L) and cost(Z, L) because we have precalculated cost(*, L), but we need to know cost(A, Z). If Z is on the shortest path from A to L, then we can calculate cost(A, Z) = cost(A, L) - cost(Z, L).

Can we handle more cases? What if Z is near the shortest path from A→L?

{{ diagram: Z near, but not on, the shortest path from A to L }}

cost(A, L) < cost(A, Z) + cost(Z, L)

We can’t exactly calculate cost(A, Z) in this situation. However, using the triangle inequality[1] adapted for directed graphs, we can construct both lower and upper bounds:

cost(A, L) - cost(Z, L) ≤ cost(A, Z) ≤ cost(A, L) + cost(Z, L)
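For example, with made-up numbers: if cost(A, L) = 12 and cost(Z, L) = 4, then cost(A, Z) is somewhere between 12 - 4 = 8 and 12 + 4 = 16.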

That’s the key idea here. It’s impractical to precalculate all costs to all locations, but if we’ve precalculated the costs to a specific location, we can use that to help us estimate other costs that don’t involve that location!

{{ Heuristic is a lower bound so we normally only care about cost(A, Z) ≥ cost(A, L) - cost(Z, L), but see references — one paper found the upper bound to be useful too }}

{{ if the graph is directed we can only say this, but if it’s not directed we can use abs(cost(A, L) - cost(Z, L)) which allows for the landmark to be on the other side too }}

 3  Triangle geometry#

How often is this triangle inequality useful?

{{ diagram: triangle with corners A, Z, L and edges cost(A, Z), cost(Z, L), cost(A, L) }}

It depends on where L is in relation to A and Z. We need L to be “behind” Z when coming from A.

Since we want to find paths for any A and any Z, there’s not going to be a single L that is always “behind” Z. That means we need to have multiple nodes L₁, L₂, L₃, etc. Each one gives us some lower bound for the heuristic. The heuristic value will then be the highest of these values:

  heuristic(A, Z) = max(
      distance(A, Z),
      cost(L₁, Z) - cost(L₁, A),
      cost(L₂, Z) - cost(L₂, A),
      cost(L₃, Z) - cost(L₃, A),
      …
      cost(Lₙ, Z) - cost(Lₙ, A)
  )

{{ TODO demo: one goal, one landmark, reader moves N to see triangle, costs of each of the three sides, calculated lower bound, and distance heuristic }}

 4  Implementation#

As the name “differential heuristics” suggests, this is a change to the heuristic given to A*, but not a change to the A* algorithm itself.

I wrote cost(Lᵢ, n) in the previous section but for implementation we can store it in a 2D array, cost[i][n]. The heuristic is calculated with a few array lookups, so it should be fairly fast.

To calculate the cost[i][n] array, we need to add a map analysis step. My article on A* and Dijkstra’s Algorithm shows how to calculate cost[i][…] with Dijkstra’s Algorithm for a single landmark Lᵢ.

function zero_heuristic(a, b) { return 0; }

const L = [ /* array of landmark locations */ ];
const L_cost = [];
for (let i = 0; i < L.length; i++) {
    /* use dijkstra's algorithm if available; otherwise run astar_search
       with the zero heuristic and no early exit, which is equivalent */
    const output = astar_search(L[i], null, zero_heuristic);
    L_cost[i] = output.cost_so_far; /* distance field from landmark i */
}

What changes with the A* code? Nothing. We only have to change the heuristic function. Previously I set the heuristic to be distance(A, Z), because that was the only lower bound I had. Now I have an additional lower bound for each landmark Lᵢ, cost(Lᵢ, Z) - cost(Lᵢ, A), stored as L_cost[i][Z] - L_cost[i][A]. Here’s what the code looks like before landmarks:

function distance_heuristic(a, z) {
    return Math.abs(a.x - z.x) + Math.abs(a.y - z.y);
}

Each landmark serves as a lower bound, so it might increase the heuristic:

function landmark_heuristic(a, z) {
    let h = distance_heuristic(a, z);
    for (let i = 0; i < L.length; i++) {
        let lower_bound = L_cost[i][z] - L_cost[i][a];
        if (lower_bound > h) { h = lower_bound; }
    }
    return h;
}

That’s all!
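Using it is just a matter of passing the new heuristic to the existing search. Assuming the astar_search signature from the precalculation snippet above (start_node and goal_node are placeholder names), a call might look like:

const output = astar_search(start_node, goal_node, landmark_heuristic);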

Note that if your edge costs are symmetric, then cost(a, z) == cost(z, a) so we can reuse the landmark in the reverse direction:

function landmark_heuristic(a, z) {
    let h = distance_heuristic(a, z);
    for (let i = 0; i < L.length; i++) {
        let lower_bound = Math.abs(L_cost[i][z] - L_cost[i][a]); /* abs only valid for symmetric edge costs */
        if (lower_bound > h) { h = lower_bound; }
    }
    return h;
}

The idea is extremely simple to implement compared to just about any other technique that gives such a nice speedup. Bonus: these distance fields can be updated in a background thread.
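For example, on the web that background thread could be a Web Worker. This is only a sketch; landmark_worker.js is a hypothetical script that runs the distance field calculation from above and posts back {i, cost}:

/* sketch: compute landmark distance fields off the main thread */
const worker = new Worker('landmark_worker.js');
worker.onmessage = (event) => {
    const {i, cost} = event.data;
    L_cost[i] = cost; /* swap in the freshly computed distance field */
};
worker.postMessage({i: 0, location: L[0]}); /* ask for landmark 0 to be (re)computed */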

 5  Placement of landmarks#

If we have just one landmark, where should we place it? A landmark’s location will help some goal locations but not all. Let’s explore this to build up some sense of what landmark locations are most useful. Move the landmark around to see which goal locations are helped:

{{ TODO demo: one Z, one P, all N: show how much the landmark improves the heuristic }}

{{ offline calculation: lots of starts, lots of goals, improvement for each landmark candidate site; no interaction here because there are no free variables anymore }}

{{ conclusion on placement to be written after I have the demo implemented }}

Another idea: do we need complete data? No, we don’t! That means we could use the cost_so_far from the previous few A* runs. They’re already computed, so it’s almost free to keep that around. If we’re finding a lot of paths in the same areas, that data would probably help a lot.
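A rough, untested sketch of that idea; remember_search and MAX_REMEMBERED are made-up names, and it assumes the saved cost_so_far values are exact distances from that run’s starting point and that edge costs are symmetric:

const recent_fields = []; /* cost_so_far tables from recent searches */

function remember_search(output) {
    recent_fields.push(output.cost_so_far);
    if (recent_fields.length > MAX_REMEMBERED) { recent_fields.shift(); }
}

function reuse_heuristic(a, z) {
    let h = distance_heuristic(a, z);
    for (const cost of recent_fields) {
        /* only usable when that earlier search reached both nodes */
        if (cost[a] !== undefined && cost[z] !== undefined) {
            let lower_bound = Math.abs(cost[z] - cost[a]);
            if (lower_bound > h) { h = lower_bound; }
        }
    }
    return h;
}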

{{ which landmarks help the most? the LPI paper suggests picking the landmarks closest to the start or goal nodes }}

Another idea: place the landmark near the centroid of the last N destinations.

 6  Appendix: Map changes#

We need to precalculate cost(L, __) to all other points, but what if the map changes? This is not that expensive to calculate occasionally. It’s Dijkstra’s Algorithm starting at L. If all your movement costs are 1, you can use the much faster Breadth First Search. And either way you can spread this out over multiple frames.

There are two problems that happen if you use an outdated cost(L, __) after the map changes:

  1. The new cost is lower than the old cost (you broke a wall). The heuristic will overestimate sometimes, and A* will return a non-shortest path until you’ve updated the cost table. Think of it this way: you broke a wall but the unit doesn’t know right away to take that into account.
  2. The new cost is higher than the old cost (you added a wall). The heuristic will be lower than desired, and A* will take a little longer to run until you’ve updated the cost table. It will still be faster than if you weren’t using landmarks. Think of it this way: you added a wall so the unit might think that area’s safe to walk through but will soon find a path around it.

If you need to recalculate often, consider Breadth First Search instead of Dijkstra’s Algorithm. It runs much faster but will produce worse distance values (but still better than straight line distance). In many maps this will be a good tradeoff.
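A minimal sketch of that Breadth First Search distance field, assuming every step costs 1, a neighbors(node) function is available, and nodes can be used as array/object indexes like in the earlier snippets:

function bfs_distance_field(start) {
    const dist = { [start]: 0 }; /* distance in steps from `start` */
    const queue = [start];
    for (let head = 0; head < queue.length; head++) {
        const current = queue[head];
        for (const next of neighbors(current)) {
            if (dist[next] === undefined) {
                dist[next] = dist[current] + 1;
                queue.push(next);
            }
        }
    }
    return dist;
}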

 7  Appendix: data#

Collect data about heuristic vs true cost (available every time you run the pathfinder), plot on scatter plot. Might be easy to adjust without doing all this extra work

 8  Appendix: space#

What happens if you drop the last N bits? need the subtraction to be pessimistic, probably

 9  Appendix: more demos#

The light blue is the area searched by A* with the regular (manhattan) heuristic. The dark blue is the area searched by A* with the new (differential) heuristic. The red and blue points are the endpoints of the path. The green points control the differential heuristic.

Dragon Age and maze maps provided by movingai.com. Cogmind maps provided by Josh Ge.

{{ TODO: compare true distance to the distance heuristic and the landmark heuristic }}

Demo #2 - Dragon Age, The Circle Tower, tiles
Demo #3 - maze with tiles. 1. It’s not all connected, and going outside the connected area will be slow (I need to fix this). 2. A* behaves particularly badly with mazes
vs
tiles vs tiles
Demo #4 - Dragon Age, Lothering, tiles
Demo - Cogmind level - Research 2, tiles
Demo - Cogmind level - Factory 4, tiles
tiles vs tiles

In the next demo the green points are in places that don’t help. Try moving the green points around to see if you can find the best spots:

Demo - Cogmind level - Factory 5, tiles

As before, the light area is A* with a (manhattan) distance heuristic, and the dark area is A* with the landmark heuristic.

The green points seem to help more when closer to the blue point than to the red point, and they help more when they’re “past” the blue point, but there are a lot of behaviors that I find unintuitive in these demos. I’d like to build a better sense of where they should go, but maybe it’s easier to place lots of them all over the map. Move the blue and red points around and see that there’s a big improvement (light blue area) no matter which path you want to find:

Demo - Cogmind level - Factory 5, lots of landmarks

This gives an idea for another strategy for placing these points:

  1. Place them randomly.
  2. Keep track of which ones are being used for the heuristic calculation.
  3. When you calculate a path, with some probability move the least used landmark to the blue point (maybe a higher probability when the explored area is large).

This way, if you are finding lots of paths to the same places, you’ll end up placing a green point by those destinations, and lots of your paths will be fast. I haven’t tried this. I’d like to first build my intuition for where the points should be.
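Here’s an untested sketch of that strategy, reusing the landmark_heuristic code from earlier; dijkstra_distance_field and MOVE_PROBABILITY are made-up names:

const usage = new Array(L.length).fill(0);

function landmark_heuristic_tracked(a, z) {
    let h = distance_heuristic(a, z);
    let best = -1;
    for (let i = 0; i < L.length; i++) {
        let lower_bound = L_cost[i][z] - L_cost[i][a];
        if (lower_bound > h) { h = lower_bound; best = i; }
    }
    if (best >= 0) { usage[best]++; } /* track which landmark actually helped */
    return h;
}

function maybe_move_landmark(goal) {
    if (Math.random() < MOVE_PROBABILITY) {
        const i = usage.indexOf(Math.min(...usage)); /* least used landmark */
        L[i] = goal;
        L_cost[i] = dijkstra_distance_field(goal); /* recompute its distance field */
        usage[i] = 0;
    }
}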

 10  More reading#

Terminology isn’t consistent.

This is the paper I found first https://www.microsoft.com/en-us/research/publication/computing-the-shortest-path-a-search-meets-graph-theory/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F64511%2Ftr-2004-24.pdf[2] (“landmarks”) but this paper had it earlier: https://www.cs.rice.edu/~eugeneng/papers/INFOCOM02.pdf[3] (“landmarks”, “base nodes”, “triangulated heuristic”). The latter paper also suggests using the upper bound; that’s something I should investigate for game maps. There are also papers from Sturtevant that use “pivot points” and “differential heuristic”.

https://faculty.cc.gatech.edu/~thad/6601-gradAI-fall2014/02-search-01-Astart-ALT-Reach.pdf[4]

There’s a different approach in pathfinding also called “landmarks” (e.g. https://www.semanticscholar.org/paper/LPI-%3A-Approximating-Shortest-Paths-using-Landmarks-Grant-Mould/f9185ed02848c1cd3e0ccdda16fa7a32f7428a8a?p2df[5]) which is not the same as how these points are used. Those landmarks need to be between the start and end points, whereas the landmarks on this page are used on the edges of the map. “If you want to walk to Z, walk to L, and then to Z” vs “If you want to walk to Z, walk to L, and Z is on the way”.

TODO: also try breadth first search with the two insert-only queues trick on these large maps. Breadth first search is faster than A* but I’d be scanning the entire map instead of just part of it; I’m not sure which will win overall. Mazes are probably faster with breadth first search, and big open areas might be too.

http://webdocs.cs.ualberta.ca/~bowling/papers/11aaai-heuristicopt.pdf[6] - euclidean geometry flattening

https://www.cs.du.edu/~sturtevant/papers/GPPC-2014.pdf[7]

Pieter Geerkens has this implemented in his hex grid pathfinding lib: https://gamedev.stackexchange.com/a/93849[8]

Andrei Kashcha has landmarks and the rest of ALT implemented in his graph pathfinding lib https://github.com/anvaka/ngraph.path[9]

Related: distance oracles[10]