19 May 2015

Table of Contents

(Warning: I had put this on hold in 2015 and am revisiting it in late 2016. It’s still incomplete.)

When optimizing A* we usually look at the priority queue or the map representation. Improving the heuristic function is often overlooked.

A* uses the heuristic function to get an estimate of the path length. The better the estimate, the faster A* runs (for the most part). Normally in game maps we use straight line distance as the heuristic. This is a great estimate in wide open maps with few obstacles. Here’s a map of Denerim from Dragon Age: Origins, showing the walkable tiles. The straight line distance is quite a bit shorter than the actual path, and that mismatch means A* has to explore lots of extra nodes:

What if we could improve the heuristic so it’s much closer to the actual path length? (The best possible heuristic would be the actual distance.) A* would then explore far fewer nodes:

These are interactive diagrams. You can move the blue and red points around. What are the green nodes?

#Triangle geometry

Suppose we’re finding the shortest path from A→Z. Ideally we’d use the exact path length cost(A, Z) as the value of heuristic(A, Z). To get the exact path length cost(A, Z) we need to calculate the shortest path from A→Z. But to calculate the shortest path from A→Z we need the value of heuristic(A, Z). We’re stuck in a loop.

Instead, we’ll construct an approximation to cost(A, Z) and then use that approximation as our heuristic(A, Z). Here’s the trick: suppose we pick a third point L that’s closer to A than to Z, and we’ve precalculated cost(L, __) to all other points. Let’s try to use that to figure out cost(A, Z):

{{ diagram: triangle with corners A, Z, L, showing cost(L, A), cost(L, Z), and the unknown cost(A, Z) }}

The shortest path from L→A→Z might be longer than the direct route from L→Z, but it can never be shorter than the direct route. This is expressed as the triangle inequality: cost(L, Z) ≤ cost(L, A) + cost(A, Z).

The triangle inequality is nice, but as written, it doesn’t help us. We can rewrite it as: cost(A, Z) ≥ cost(L, Z) - cost(L, A). Now we’re getting somewhere! That’s a lower bound on cost(A, Z), which is exactly what we need for the heuristic.
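
For example, with made-up numbers: suppose the precalculated costs are cost(L, Z) = 10 and cost(L, A) = 3. Then:

  cost(A, Z) ≥ cost(L, Z) - cost(L, A) = 10 - 3 = 7

Every path from A to Z has to be at least 7 long, so 7 is a safe value for heuristic(A, Z).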

This only works when L is behind the route A→Z. To make it work for many different paths, we need multiple landmarks L placed around the map. Then we calculate the final heuristic as:

  heuristic(A, Z) = max(
      distance(A, Z),
      cost(L₁, Z) - cost(L₁, A),
      cost(L₂, Z) - cost(L₂, A),
      cost(L₃, Z) - cost(L₃, A),
      …
      cost(Lₙ, Z) - cost(Lₙ, A)
  )

Although I write it as cost(Lᵢ, __), it’s really stored as a precalculated array cost[i][__]. Since distance() is fast and cost[] is precalculated for each of the L points, this heuristic is fairly fast to calculate.

{{ demo: one goal, one landmark, reader moves N to see triangle, costs of each of the three sides, calculated lower bound, and distance heuristic }}

#Map changes

We need to precalculate cost(L, __) to all other points, but what if the map changes? Recalculating occasionally is not that expensive: it’s Dijkstra’s Algorithm starting at L. If all your movement costs are 1, you can use the much faster Breadth First Search. Either way, you can spread the work out over multiple frames.
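
Here’s a minimal sketch of spreading that work over frames, assuming unit movement costs (so Breadth First Search applies), nodes that can be used as object keys, and a neighbors() function for your graph; the names NODES_PER_FRAME, start_recalc, and continue_recalc are made up for this example, and L_cost is the precalculated cost table the heuristic reads later:

var NODES_PER_FRAME = 500;  // tune to fit your frame budget
var recalc = null;          // in-progress recalculation, or null

function start_recalc(L) {
    recalc = { frontier: [L], cost: {} };
    recalc.cost[L] = 0;
}

function continue_recalc() {  // call once per frame until it finishes
    if (recalc === null) { return; }
    for (var n = 0; n < NODES_PER_FRAME && recalc.frontier.length > 0; n++) {
        var current = recalc.frontier.shift();
        var next = neighbors(current);
        for (var i = 0; i < next.length; i++) {
            if (!(next[i] in recalc.cost)) {
                recalc.cost[next[i]] = recalc.cost[current] + 1;
                recalc.frontier.push(next[i]);
            }
        }
    }
    if (recalc.frontier.length === 0) {
        L_cost = recalc.cost;  // swap in the finished table all at once
        recalc = null;
    }
}

Until the swap at the end, A* keeps using the old table, which is exactly the situation described next.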

There are two problems that happen if you use an outdated cost(L, __) after the map changes:

  1. The new cost is lower than the old cost (you broke a wall). The heuristic will overestimate sometimes, and A* will return a non-shortest path until you’ve updated the cost table. Think of it this way: you broke a wall but the unit doesn’t know right away to take that into account.
  2. The new cost is higher than the old cost (you added a wall). The heuristic will be lower than desired, and A* will take a little longer to run until you’ve updated the cost table. It will still be faster than if you weren’t using landmarks. Think of it this way: you added a wall so the unit might think that area’s safe to walk through but will soon find a path around it.

If you need to recalculate often, consider Breadth First Search instead of Dijkstra’s Algorithm. It runs much faster, and although it produces worse distance values, they’re still better than straight line distance. On many maps this will be a good tradeoff.

#The A* Heuristic - REMOVE

{{ This section should probably become a separate page about A* heuristic design, and not on the landmark page. Maybe in /pathfinding/a-star/heuristics ; it could cover the topics from http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html }}

The A* heuristic is a lower bound on the remaining cost to the goal. A* uses it to decide which paths to evaluate first. When the heuristic is zero, A* has no information, and explores all directions equally. When the heuristic is exact, A* has complete information, and explores the best paths first. The closer we can get to an exact cost, the fewer nodes A* will explore. On game maps we usually choose distance because it’s very easy to implement and gives a reasonable estimate.

I’m going to write cost(A,B) for the cost of the shortest path from A to B. In simple cases, the cost is the length of the path, but we can also use A* for variable cost movement. For example, in Civilization, moving on grassland/desert tiles costs 1 and moving on forest/hill tiles costs 2.
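
A sketch of what a per-step cost function might look like for that example (the tile representation is made up):

function step_cost(tile) {
    // grassland and desert cost 1; forest and hills cost 2
    if (tile.terrain === 'forest' || tile.terrain === 'hills') { return 2; }
    return 1;
}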

I’m going to write distance(A,B) for the graph distance between A and B. The main difference between graph distance and cost is that graph distance ignores obstacles and slow movement tiles (like forests). On a grid, the movement style determines which distance function to use: for 4-way square tile movement, use Manhattan distance; for 8-way movement, use Diagonal distance; for 6-way hex grids, use Hex distance. The graph distance may not be in the same units as the map distance. For example, if your graph edge costs are travel times, then your graph distance also needs to be a time; you can convert a map distance into a time by dividing by the maximum speed.
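
Here’s a sketch of those grid distance functions, assuming integer tile coordinates (x, y) for square grids and axial coordinates (q, r) for hexes; DIAGONAL_COST is the cost of a diagonal step relative to a straight step of 1 (use 1 or √2 depending on your movement rules):

function manhattan_distance(a, z) {  // 4-way square grid
    return Math.abs(a.x - z.x) + Math.abs(a.y - z.y);
}

function diagonal_distance(a, z) {   // 8-way square grid
    var dx = Math.abs(a.x - z.x);
    var dy = Math.abs(a.y - z.y);
    return (dx + dy) + (DIAGONAL_COST - 2) * Math.min(dx, dy);
}

function hex_distance(a, z) {        // 6-way hex grid
    return (Math.abs(a.q - z.q)
            + Math.abs(a.q + a.r - z.q - z.r)
            + Math.abs(a.r - z.r)) / 2;
}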

{{ diagram to show the difference between cost and distance – show N, G, the shortest path, its cost, and the distance }}

Since the heuristic is an estimate of the cost, we need it to match the “units” of the cost. On a grid, if each step takes 10, then you’ll want to make sure the distance is scaled the same amount — the distance between adjacent tiles should be 10. On a non-grid, you’ll still want to match the units as much as possible. Don’t use minutes for the cost function and meters for the distance function. Convert distance to minutes so that both cost and heuristic are in minutes.
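
Two sketches of that unit matching, reusing manhattan_distance from the sketch above; MAX_SPEED (meters per minute) is a made-up constant:

// Grid where each step costs 10: scale the tile distance to match.
function scaled_heuristic(a, z) {
    return 10 * manhattan_distance(a, z);
}

// Costs in minutes, positions in meters: dividing the straight-line
// distance by the top speed gives a time that can't exceed the true
// travel time, so the heuristic stays a lower bound.
function travel_time_heuristic(a, z) {
    var dx = a.x - z.x, dy = a.y - z.y;
    return Math.sqrt(dx*dx + dy*dy) / MAX_SPEED;
}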

{{ diagram that lets you try different movement types (4-way, 8-way, 8-way with sqrt(2)) and different heuristics (manhattan, chebyshev, diagonal, euclidean, euclidean squared, exact}}


{{ chart for intuition about g/h relationship, and why they need to match: show x-axis as time on resulting path, y-axis being h, g, f values. The f value should rise over time. If you set heuristic to 0, h is always 0, f rises with g. If you set heuristic to exact cost, then f never rises. }}

{{ chart for intuition about performance: x-axis is g value, y-axis is number of nodes explored. When heuristic is 0, this will be y=x^2 I think, with few obstacles. When heuristic is exact, this will be a constant line x=1 or maybe x=4 depending on how I count. Keeping this line low is the key to performance, so the closer we get to exact, the better we do }}

#Better estimates - REMOVE

If the exact heuristic is so great, why don’t we use it? We don’t use it because it’s expensive. First, we have to calculate it using Floyd-Warshall or Johnson’s Algorithm, and then we have to store it in an N ✕ N table, where N is the number of nodes. If N is small, you wouldn’t need to use A*. Instead, you’d save the pathfinding table. You only need A* when N is large. A large map might be over 100,000 nodes, which would require an array of 10,000,000,000 integers (40 GB).
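
The arithmetic behind that number, assuming 4-byte integers:

  100,000 × 100,000 = 10,000,000,000 entries
  10,000,000,000 entries × 4 bytes = 40 GB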

Most of us using A* will use distance heuristics. They’re cheap and work reasonably well. They use 0 storage.

The idea behind differential heuristics is that if you have exact distances for some locations you can construct an estimate for all locations.

#Implementation

As the name “differential heuristics” suggests, this is a change to the heuristic given to A*, but not a change to the A* algorithm itself.

To use our landmark L, we need to add a map analysis step and modify the heuristic function.

The map analysis, run once per map, is to calculate cost(L, __) for all locations and save those costs in a data structure. My article on A* and Dijkstra’s Algorithm shows how to calculate these with Dijkstra’s Algorithm and save them in a variable cost_so_far[]. For the landmark L, I’m going to copy that to L_cost[]. This is roughly what we have to do:

// A* with no goal and a zero heuristic is just Dijkstra's Algorithm from L
var output = astar_search(L, null, function(a, b) { /* heuristic */ return 0; });
var L_cost = output.cost_so_far;  // cost(L, __) for every location

What changes with the A* code? Nothing. We only have to change the heuristic function. Previously I set the heuristic to be distance(A, Z), because that was the only lower bound I had. Now I have an additional lower bound, cost(L, Z) - cost(L, A), which has been calculated during the map analysis and saved to L_cost[Z] - L_cost[A]. I need the heuristic to pick the better lower bound. Here’s what the code looks like before landmarks:

function distance_heuristic(a, z) {
    // Manhattan distance, for 4-way grid movement
    return Math.abs(a.x - z.x) + Math.abs(a.y - z.y);
}

Here’s what we do with one landmark:

function landmark_heuristic(a, z) {
    // take the better (larger) of the two lower bounds
    return Math.max(distance_heuristic(a, z), L_cost[z] - L_cost[a]);
}

That’s all!

Note that if your edge costs are symmetric, then cost(a, z) == cost(z, a) so we can reuse the landmark in the reverse direction:

function landmark_heuristic(a, z) {
    // abs() lets the landmark bound the cost in either direction
    return Math.max(distance_heuristic(a, z), Math.abs(L_cost[z] - L_cost[a]));
}

If your edge costs are asymmetric, then you can’t reuse the landmark.

To handle multiple landmarks, we’ll loop over them in the heuristic function:

function landmark_heuristic(a, z) {
    // Assume landmarks are in array L_costs[]
    var d = distance_heuristic(a, z);
    for (var i = 0; i < L_costs.length; i++) {
        d = Math.max(d, L_costs[i][z] - L_costs[i][a]);
    }
    return d;
}
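
To use it, pass this function wherever you previously passed the distance heuristic; a sketch, assuming the same astar_search(start, goal, heuristic) call shape as in the map analysis step:

var result = astar_search(start, goal, landmark_heuristic);
// previously this would have been:
// var result = astar_search(start, goal, distance_heuristic);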

The idea is extremely simple to implement compared to just about any other technique that gives such a nice speedup.

#Landmarks: placement

If we have just one landmark, where should we place it? A landmark’s location will help some goal locations but not all. Let’s explore this to build up some sense of what landmark locations are most useful. Move the landmark around to see which goal locations are helped:

{{ demo: one goal, one landmark, all N: show where (or how much) the landmark improves the heuristic }}

{{ offline calculation: lots of starts, lots of goals, improvement for each landmark candidate site; no interaction here because there are no free variables anymore }}

{{ conclusion on placement to be written after I have the demo implemented }}

#More demos

Demo #2
Demo #3 - maze

As before, the light area is A* with a (Manhattan) distance heuristic, and the dark area is A* with the landmark heuristic.

#More reading

This is the paper I found first: http://research.microsoft.com/pubs/64511/tr-2004-24.pdf

http://www.cc.gatech.edu/~thad/6601-gradAI-fall2014/02-search-01-Astart-ALT-Reach.pdf

TODO: also try Breadth First Search with the two insert-only queues trick on these large maps. Breadth First Search is faster than A* but I’d be scanning the entire map instead of just part of it; I’m not sure which will win overall. Mazes are probably faster with Breadth First Search, and big open areas might be too.

http://webdocs.cs.ualberta.ca/~bowling/papers/11aaai-heuristicopt.pdf

http://www.cs.du.edu/~sturtevant/papers/GPPC-2014.pdf