Navigation functions

As in Section 8.2.2, potential functions can be used to represent feedback plans, assuming that a local operator is developed that works for continuous state spaces. In the discrete case, the local operator selects an action that reduces the potential value. In the continuous case, the local operator must convert the potential function into a vector field. In other words, a velocity vector must be defined at each state. By default, it will be assumed here that the vector fields derived from the navigation function are not necessarily normalized.

Assume that $ \pi (x) = u_T$ is defined for all $ x \in {X_{G}}$, regardless of the potential function. Suppose that a potential function $ \phi : X \rightarrow {\mathbb{R}}$ has been defined for which the gradient

$\displaystyle \nabla\phi = \left[ \frac{\partial \phi}{\partial x_1} \;\; \frac{\partial \phi}{\partial x_2} \;\; \cdots \;\; \frac{\partial \phi}{\partial x_n} \right]$ (8.40)

exists over all of $ X \setminus {X_{G}}$. The corresponding feedback plan can then be defined as $ \pi (x) = - \nabla \phi \vert _x$. This defines the local operator, which means that the velocity is taken in the direction of the steepest descent of $ \phi $. The idea of using potential functions in this way was proposed for robotics by Khatib [525,526] and can be considered as a form of gradient descent, which is a general optimization technique.
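To make the local operator concrete, here is a small sketch (not from the text) that follows $ \pi (x) = - \nabla \phi \vert _x$ by Euler integration; the quadratic potential, step size, and tolerance below are assumed values chosen only for illustration:

```python
import numpy as np

def grad(phi, x, eps=1e-6):
    """Numerically approximate the gradient of phi at x by central differences."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (phi(x + e) - phi(x - e)) / (2 * eps)
    return g

def follow_plan(phi, x0, dt=0.01, steps=2000, tol=1e-3):
    """Integrate the feedback plan pi(x) = -grad(phi)(x) with Euler steps."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad(phi, x)
        if np.linalg.norm(g) < tol:   # near a critical point (ideally the goal)
            break
        x = x - dt * g                # step in the direction of steepest descent
    return x

# Assumed example potential: quadratic bowl with the goal at the origin.
phi = lambda x: 0.5 * (x[0]**2 + x[1]**2)
print(follow_plan(phi, [3.0, -2.0]))  # converges near the origin
```

The loop is simply gradient descent on $ \phi $; any potential with a well-defined gradient could be substituted for the quadratic used here.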

It is also possible to work with potential functions for which the gradient does not exist everywhere. In these cases, a continuous-space version of (8.4) can be defined for a small, fixed $ \Delta t$ as

$\displaystyle u^* = \operatornamewithlimits{argmin}_{u \in U(x)} \Big\{ \phi(x') \Big\} ,$ (8.41)

in which $ x'$ is the state obtained by integrating velocity $ u$ from $ x$ for time $ \Delta t$. One problem is that $ \Delta t$ should be chosen to use the smallest possible neighborhood around $ x$. It is best to allow only potential functions for which $ \Delta t$ can be made arbitrarily small at every $ x$ without affecting the decision in (8.41). To be precise, this means that an infinite sequence of $ u^*$ values can be determined from a sequence of $ \Delta t$ values that converges to 0. A potential function should then be chosen so that, beyond some point in the sequence, $ u^*$ exists and the same $ u^*$ can be chosen to satisfy (8.41) as $ \Delta t$ approaches 0. A special case of this is when the gradient of $ \phi $ exists; the infinite sequence in this case converges to the negative gradient.
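A minimal sketch of the local operator in (8.41), assuming a finite sampled set of unit-velocity controls standing in for $ U(x)$ and simple Euler integration (both are assumptions made only for this illustration):

```python
import numpy as np

def local_operator(phi, x, controls, dt):
    """Select u* that minimizes phi at the state reached by applying u for dt,
    as in (8.41)."""
    best_u, best_val = None, float('inf')
    for u in controls:
        x_next = x + dt * np.asarray(u)   # Euler integration of velocity u
        val = phi(x_next)
        if val < best_val:
            best_u, best_val = u, val
    return best_u

# Assumed discretization of U(x): unit velocities in 8 evenly spaced directions.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
controls = [(np.cos(a), np.sin(a)) for a in angles]

phi = lambda x: 0.5 * (x[0]**2 + x[1]**2)   # quadratic potential, goal at origin
u_star = local_operator(phi, np.array([1.0, 0.0]), controls, dt=0.05)
print(u_star)   # points from (1, 0) back toward the origin
```

For this quadratic $ \phi $ the selected control approximates the negative gradient direction, consistent with the special case noted above.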

A potential function, $ \phi $, is called a navigation function if the vector field that is derived from it is a solution plan. The optimal cost-to-go serves as an optimal navigation function. If multiple vector fields can be derived from the same $ \phi $, then every possible derived vector field must yield a solution feedback plan. If designed appropriately, the potential function can be viewed as a kind of ``ski slope'' that guides the state to $ {X_{G}}$. If there are extra local minima that cause the state to become trapped, then $ {X_{G}}$ will not be reached. To be a navigation function, such local minima outside of $ {X_{G}}$ are not allowed. Furthermore, there may be additional requirements to ensure that the derived vector field satisfies constraints such as bounded acceleration.

Example 8.17 (Quadratic Potential Function)   As a simple example, suppose $ X = {\mathbb{R}}^2$, there are no obstacles, and $ q_{goal} = (0,0)$. A quadratic function $ \phi(x_1,x_2) = \frac{1}{2} x_1^2 + \frac{1}{2} x_2^2$ serves as a good potential function to guide the state to the goal. The feedback motion strategy is defined as $ f = -\nabla \phi = [-x_1 \;\; -x_2]$, which is the inward vector field shown in Figure 8.5b.

If the goal is instead at some $ (x'_1,x'_2) \in {\mathbb{R}}^2$, then a potential function that guides the state to the goal is $ \phi(x_1,x_2) = (x_1-x'_1)^2 + (x_2-x'_2)^2$. $ \blacksquare$
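The translated potential from the example can be checked numerically. This sketch (with an assumed step size and iteration count) integrates the derived vector field and confirms that the state approaches the goal:

```python
import numpy as np

def plan(x, goal):
    """Feedback plan -grad(phi) for phi = (x1 - x1')^2 + (x2 - x2')^2."""
    return -2.0 * (np.asarray(x) - np.asarray(goal))

goal = np.array([2.0, -1.0])   # assumed goal location for illustration
x = np.array([0.0, 0.0])
for _ in range(500):
    x = x + 0.01 * plan(x, goal)   # Euler integration of the velocity field
print(x)   # approaches the goal (2, -1)
```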

Suppose the state space represents a configuration space that contains point obstacles. The previous function $ \phi $ can be considered as an attractive potential because the configuration is attracted to the goal. One can also construct a repulsive potential that repels the configuration from the obstacles to avoid collision. Let $ \phi_a$ denote the attractive component and $ \phi_r$ denote a repulsive potential that is summed over all obstacle points. A potential function of the form $ \phi = \phi_a + \phi_r$ can be defined to combine both effects. The robot should be guided to the goal while avoiding obstacles. The problem is that it is difficult in general to ensure that the potential function will not contain multiple local minima. The configuration could become trapped at a local minimum that is not in the goal region. This was an issue with the planner from Section 5.4.3.
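The local-minimum problem can be reproduced with a small sketch; the repulsive gain, obstacle location, and start state below are assumed purely for illustration. Starting directly behind a point obstacle that lies between the start and the goal, gradient descent on $ \phi = \phi_a + \phi_r$ stalls at a spurious critical point instead of reaching the goal:

```python
import numpy as np

eta = 0.5                    # repulsive gain (an assumed value)
goal = np.array([0.0, 0.0])
obst = np.array([1.0, 0.0])  # point obstacle between start and goal

def grad_phi(x):
    """Gradient of phi = phi_a + phi_r, with attractive phi_a = (1/2)||x - goal||^2
    and repulsive phi_r = eta / ||x - obst||."""
    d = x - obst
    r = np.linalg.norm(d)
    return (x - goal) - eta * d / r**3

x = np.array([2.0, 0.0])     # start directly behind the obstacle
for _ in range(5000):
    x = x - 0.005 * grad_phi(x)   # gradient descent on the combined potential
print(x)   # stalls near (1.56, 0): a spurious critical point, not the goal
```

By symmetry the state stays on the axis through the obstacle and goal, where the attractive and repulsive terms cancel; a navigation function, by definition, would rule out such trap points.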

Steven M LaValle 2012-04-20