Dijkstra's algorithm

Up to this point, there has been no reason to prefer one action over any other in the search. Section 2.3 will formalize optimal discrete planning and will present several algorithms that find optimal plans. Before going into that, we present a systematic search algorithm that finds optimal plans; it is introduced here because it is also useful for finding feasible plans. The result is the well-known Dijkstra's algorithm for finding single-source shortest paths in a graph [273], which is a special form of dynamic programming. More general dynamic programming computations appear in Section 2.3 and throughout the book.

Suppose that every edge, $e \in E$, in the graph representation of a discrete planning problem has an associated nonnegative cost $l(e)$, which is the cost to apply the action. The cost $l(e)$ could be written using the state-space representation as $l(x,u)$, indicating that it costs $l(x,u)$ to apply action $u$ from state $x$. The total cost of a plan is just the sum of the edge costs over the path from the initial state to a goal state.
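
To make the cost notation concrete, consider the following sketch (in Python, with hypothetical state and action names chosen only for illustration). It stores each transition together with its cost, so that $l(x,u)$ is a simple lookup, and it sums edge costs along a plan; this is just one possible encoding of a small problem, not notation taken from Figure 2.4.

    # A toy discrete planning problem with nonnegative action costs.
    # Each state maps to a dict of actions: u -> (next state f(x, u), cost l(x, u)).
    GRAPH = {
        "a": {"u1": ("b", 1.0), "u2": ("c", 4.0)},
        "b": {"u1": ("c", 2.0), "u2": ("d", 5.0)},
        "c": {"u1": ("d", 1.0)},
        "d": {},  # treat "d" as the goal; no outgoing actions are needed
    }

    def plan_cost(plan, x_init="a"):
        """Sum the edge costs l(x, u) along a plan (a sequence of actions)."""
        x, total = x_init, 0.0
        for u in plan:
            x, cost = GRAPH[x][u]
            total += cost
        return total

Under this toy graph, the plan ("u1", "u1", "u1") reaches "d" with total cost $1 + 2 + 1 = 4$, while the plan ("u2", "u1") reaches it with cost $4 + 1 = 5$.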

The priority queue, $Q$, will be sorted according to a function $C : X \rightarrow [0,\infty]$, called the cost-to-come. For each state $x$, the value $C^*(x)$ is called the optimal cost-to-come from the initial state $x_I$. This optimal cost is obtained by summing edge costs, $l(e)$, over all possible paths from $x_I$ to $x$ and using the path that produces the least cumulative cost. If the cost is not known to be optimal, then it is written as $C(x)$.
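
In symbols, if $\Pi(x_I,x)$ denotes the set of all paths in the graph from $x_I$ to $x$ (a notation introduced here only for this formula), then

$C^*(x) = \min_{\pi \in \Pi(x_I,x)} \sum_{e \in \pi} l(e),$

with the convention that $C^*(x) = \infty$ if no path reaches $x$.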

The cost-to-come is computed incrementally during the execution of the search algorithm in Figure 2.4. Initially, $C^*(x_I) = 0$. Each time the state $x'$ is generated, a cost is computed as $C(x') = C^*(x) + l(e)$, in which $e$ is the edge from $x$ to $x'$ (equivalently, we may write $C(x') = C^*(x) + l(x,u)$). Here, $C(x')$ represents the best cost-to-come that is known so far, but we do not write $C^*$ because it is not yet known whether $x'$ was reached optimally. Due to this, some work is required in line 12. If $x'$ already exists in $Q$, then it is possible that the newly discovered path to $x'$ is more efficient. If so, then the cost-to-come value $C(x')$ must be lowered for $x'$, and $Q$ must be reordered accordingly.
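
As a minimal sketch of this incremental computation, the following Python function operates on the toy encoding above; it is not the pseudocode of Figure 2.4. Since Python's heapq module offers no decrease-key operation, the reordering of $Q$ in line 12 is emulated by pushing an improved entry and discarding stale entries when they are popped.

    import heapq

    def dijkstra_cost_to_come(graph, x_init):
        """Compute C*(x) for every state reachable from x_init.

        `graph` maps each state x to a dict of actions u -> (x', l(x, u)),
        as in the toy encoding above.
        """
        C = {x_init: 0.0}      # best cost-to-come known so far
        dead = set()           # states whose cost-to-come is optimal
        Q = [(0.0, x_init)]    # priority queue sorted by cost-to-come
        while Q:
            c, x = heapq.heappop(Q)
            if x in dead:
                continue       # stale entry; a cheaper copy was popped earlier
            dead.add(x)        # C[x] has become C*(x)
            for u, (x_next, cost) in graph[x].items():
                new_cost = c + cost
                if new_cost < C.get(x_next, float("inf")):
                    C[x_next] = new_cost                  # lower the cost-to-come
                    heapq.heappush(Q, (new_cost, x_next))
        return C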

When does $C(x)$ finally become $C^*(x)$ for some state $x$? Once $x$ is removed from $Q$ using $Q$.GetFirst(), the state becomes dead, and it is known that $x$ cannot be reached with a lower cost. This can be argued by induction. For the initial state, $C^*(x_I)$ is known, and this serves as the base case. Now assume that every dead state has its optimal cost-to-come correctly determined. This means that their cost-to-come values can no longer change. For the first element, $x$, of $Q$, the value must be optimal because any path that has a lower total cost would have to travel through another state in $Q$, but these states already have higher costs. All paths that pass only through dead states were already considered in producing $C(x)$. Once all edges leaving $x$ are explored, then $x$ can be declared as dead, and the induction continues. This is not enough detail to constitute a proof of optimality; more arguments appear in Section 2.3.3 and in [243].

The running time is $O(\vert V\vert \lg \vert V\vert + \vert E\vert)$, in which $\vert V\vert$ and $\vert E\vert$ are the numbers of vertices and edges, respectively, in the graph representation of the discrete planning problem. This assumes that the priority queue is implemented with a Fibonacci heap, and that all other operations, such as determining whether a state has been visited, are performed in constant time. If other data structures are used to implement the priority queue, then higher running times may be obtained.
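
As a usage example on the hypothetical toy graph, the sketch above yields the expected values:

    costs = dijkstra_cost_to_come(GRAPH, "a")
    print(costs)   # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}

Because that sketch uses a binary heap with stale entries rather than a Fibonacci heap with a decrease-key operation, its running time is $O((\vert V\vert + \vert E\vert) \lg \vert V\vert)$, which is one instance of the higher running times mentioned above.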
