12.2.2 Combinatorial Methods for Continuous Localization

Now consider localization for the case in which $ X$ is a continuous region in $ {\mathbb{R}}^2$. Assume that $ X$ is bounded by a simple polygon (a closed polygonal chain with no interior holes). A map of $ X$ in $ {\mathbb{R}}^2$ is given to the robot. The robot velocity $ {\dot x}$ is directly commanded by the action $ u$, yielding a motion model $ {\dot x}= u$, for which $ U$ is a unit ball centered at the origin. This enables a plan to be specified as a continuous path in $ X$, as was done throughout Part II. Therefore, instead of specifying velocities through $ u$, a path is specified directly, which is simpler. For models of the form $ {\dot x}= u$ and the more general form $ {\dot x}= f(x,u)$, see Section 8.4 and Chapter 13, respectively.

The robot uses two different sensors:

  1. Compass: A perfect compass solves all orientation problems that arose in Section 12.2.1.
  2. Visibility: The visibility sensor, which was shown in Figure 11.15, provides perfect distance measurements in all directions.
There are no nature sensing actions for either sensor.

As in Section 12.2.1, localization involves computing nondeterministic I-states. In the current setting there is no need to represent the orientation as part of the state space because of the perfect compass and the known orientation of the polygon in $ {\mathbb{R}}^2$. Therefore, the nondeterministic I-states are just subsets of $ X$. Imagine computing the nondeterministic I-state for the example shown in Figure 11.15, but without any history. This is $ H(y) \subseteq X$, which was defined in (11.6). Only the current sensor reading is given. This requires computing the states from which the distance measurements shown in Figure 11.15b could be obtained. In other words, a translation must be found that perfectly overlays the edges shown in Figure 11.15b on top of the polygon edges shown in Figure 11.15a. Let $ \partial X$ denote the boundary of $ X$. The distance measurements from the visibility sensor must correspond exactly to a subset of $ \partial X$. For this example, they could be obtained from only one state, which is shown in Figure 11.15a. Therefore, the robot does not even have to move to localize itself in this example.
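To make the matching concrete, the following is a minimal brute-force sketch, not the efficient method discussed below: rays are cast from a candidate point in globally fixed compass directions, the distance to $ \partial X$ along each is recorded, and only the candidates whose distance signature agrees with the observation are kept. All function names are illustrative, and the finite candidate list stands in for a proper search over $ X$.

```python
import math

def ray_segment_distance(o, d, a, b):
    # Distance along the ray o + t*d (t >= 0) to segment ab, or None if no hit.
    ox, oy = o; dx, dy = d
    ax, ay = a; bx, by = b
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None                                      # ray parallel to segment
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom        # distance along the ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom        # position along the segment
    return t if t > 1e-9 and 0.0 <= u <= 1.0 else None

def visibility_signature(p, polygon, k=8):
    # Perfect compass: rays are cast in k globally fixed directions from p,
    # and the nearest boundary hit is recorded for each direction.
    n = len(polygon)
    sig = []
    for i in range(k):
        th = 2.0 * math.pi * i / k
        d = (math.cos(th), math.sin(th))
        hits = [ray_segment_distance(p, d, polygon[j], polygon[(j + 1) % n])
                for j in range(n)]
        hits = [t for t in hits if t is not None]
        sig.append(min(hits) if hits else float('inf'))
    return sig

def matching_states(observed, polygon, candidates, tol=1e-9):
    # Brute-force approximation of H(y): keep candidates whose signature
    # matches the observed distance measurements.
    return [p for p in candidates
            if all(abs(s - o) <= tol
                   for s, o in zip(visibility_signature(p, polygon), observed))]
```

In a square environment, for instance, only the true state reproduces the full signature; an efficient implementation would instead exploit the cell decomposition described next.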

Figure 12.4: An example of the visibility cell decomposition. Inside each cell, the visibility polygon is composed of the same edges of $ \partial X$.

Figure 12.5: Rays are extended outward, whenever possible, from each pair of mutually visible vertices. The case on the right is a bitangent, as shown in Figure 6.10; however, here the edges extend outward instead of inward as required for the visibility graph.

Figure 12.6: A reflex vertex: If the interior angle at a vertex is greater than $ \pi $, then two outward rays are extended from the incident edges.

As in Section 8.4.3, let the visibility polygon $ V(x)$ refer to the set of all points visible from $ x$, which is shown in Figure 11.15a. To perform the required computations efficiently, the polygon must be processed to determine the different ways in which the visibility polygon could appear from various points in $ X$. This involves carefully determining which edges of $ \partial X$ could appear on $ \partial V(x)$. The state space $ X$ can be decomposed into a finite number of cells, and over each cell the invariant is that the same set of edges is used to describe $ V(x)$ [136,415]. An example is shown in Figure 12.4. Two different kinds of rays must be extended to make the decomposition. Figure 12.5 shows the case in which a pair of vertices is mutually visible and an outward ray extension is possible. The other case is shown in Figure 12.6, in which rays are extended outward at every reflex vertex (a vertex whose interior angle is greater than $ \pi $, as considered in Section 6.2.4). The resulting decomposition generates $ O(n^2 r)$ cells in the worst case, in which $ n$ is the number of edges that form $ \partial X$ and $ r$ is the number of reflex vertices (note that $ r < n$). Once the measurements are obtained from the sensor, the cell or cells in which the edges or distance measurements match perfectly must be computed to determine $ H(y)$ (the set of points in $ X$ from which the current distance measurements could be obtained). An algorithm based on the idea of a visibility skeleton is given in [415], which performs these computations in time $ O(m + \lg n + s)$ and uses $ O(n^5)$ space, in which $ n$ is the number of vertices in $ \partial X$, $ m$ is the number of vertices in $ V(x)$, and $ s = \vert H(y)\vert$, the size of the nondeterministic I-state. This method assumes that the environment is preprocessed to permit rapid queries during execution; without preprocessing, $ H(y)$ can be computed in time $ O(mn)$.
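The reflex-vertex test reduces to a cross product. The sketch below, with illustrative names, assumes the polygon is given as a counterclockwise list of vertices: a vertex is reflex exactly when the boundary turns clockwise there, and the two outward rays of Figure 12.6 continue the incident edges past the vertex.

```python
def reflex_vertices(polygon):
    # polygon: counterclockwise list of (x, y) vertices of a simple polygon.
    # The interior angle at vertex i exceeds pi exactly when the turn
    # (prev -> i -> next) is clockwise, i.e., the z-cross product is negative.
    n = len(polygon)
    reflex = []
    for i in range(n):
        ax, ay = polygon[i - 1]
        bx, by = polygon[i]
        cx, cy = polygon[(i + 1) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross < 0:
            reflex.append(i)
    return reflex

def outward_ray_directions(polygon, i):
    # Directions of the two rays of Figure 12.6: each edge incident to the
    # reflex vertex i is extended outward past the vertex.
    ax, ay = polygon[i - 1]
    bx, by = polygon[i]
    cx, cy = polygon[(i + 1) % len(polygon)]
    return [(bx - ax, by - ay), (bx - cx, by - cy)]
```

For the L-shaped polygon with vertices (0,0), (2,0), (2,1), (1,1), (1,2), (0,2), only the notch vertex (1,1) is reflex.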

Figure 12.7: Consider this example, in which the initial state is not known [298].

Figure 12.8: The four possible initial positions for the robot in Figure 12.7 based on the visibility sensor.

Figure 12.9: These motions completely disambiguate the state.

Figure 12.10: There are now only two possible states.

What happens if there are multiple states that match the distance data from the visibility sensor? Since the method in [415] only computes $ H(y) \subseteq X$, some robot motions must be planned to further reduce the uncertainty. This provides yet another interesting illustration of the power of I-spaces. Even though the state space is continuous, an I-state in this case is used to disambiguate the state from a finite collection of possibilities.

The following example is taken from [298].

Example 12.3 (Visibility-Based Localization)   Consider the environment shown in Figure 12.7, with the initial state as shown. Based on the visibility sensor observation, the initial state could be any one of the four possibilities shown in Figure 12.8. Thus, $ H(y_1)$ contains four states, in which $ y_1$ is the initial sensor observation. Suppose that the motion sequence shown in Figure 12.9 is executed. After the first step, the position of the robot is narrowed down to two possibilities, as shown in Figure 12.10. This occurs because the corridor is longer for the remaining two possibilities. After the second motion, the state is completely determined because the short side corridor is detected. $ \blacksquare$

The localization problem can be solved in general by using the visibility cell decomposition, as shown in Figure 12.4. Initially, $ X_1({\eta}_1) = H(y_1)$ is computed from the initial visibility polygon, which can be performed efficiently using the visibility skeleton [415]. Suppose that $ X_1({\eta}_1)$ contains $ k$ states. In this case, $ k$ translated copies of the map are overlaid so that all of the possible states in $ X_1({\eta}_1)$ coincide. A motion is then executed that reduces the amount of uncertainty. This could be accomplished, for example, by crossing a cell boundary in the overlay that corresponds to one or more, but not all, of the $ k$ copies. This enables some possible states to be eliminated from the next I-state, $ X_2({\eta}_2)$. The overlay is used once again to obtain another disambiguating motion, which results in $ X_3({\eta}_3)$. This process continues until the state is known. In [298], a motion plan is given that enables the robot to localize itself by traveling no more than $ k$ times as far as the optimal distance that would need to be traveled to verify the given state. This particular localization problem might not seem too difficult after seeing Example 12.3, but it turns out that the problem of localizing using optimal motions is NP-hard if any simple polygon is allowed. This was proved in [298] by showing that every abstract decision tree can be realized as a localization problem, and the abstract decision tree problem is already known to be NP-hard.
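The elimination step can be sketched abstractly. In the toy sketch below (illustrative names; `sense` stands in for the visibility sensor, and states are reduced to points), each surviving hypothesis is translated by the accumulated displacement and kept only if its predicted observation matches the one actually received:

```python
def prune_hypotheses(candidates, displacement, observation, sense):
    # candidates: states consistent with all previous observations,
    #             expressed in the frame of the initial I-state X_1(eta_1).
    # displacement: total translation commanded since the start
    #               (dead reckoning is exact under the model xdot = u).
    # observation: what the sensor actually returned after the motion.
    # sense(p): the observation the robot would receive at state p.
    dx, dy = displacement
    return [p for p in candidates
            if sense((p[0] + dx, p[1] + dy)) == observation]
```

As a toy stand-in for the visibility sensor, let `sense` return the distance to the nearest of three hypothetical wall coordinates; two hypotheses that agree on the initial observation are then split by a single motion, echoing Example 12.3.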

Many interesting variations of the localization problem in continuous spaces can be constructed by changing the sensing model. For example, suppose that the robot can only measure distances up to a limit; all points beyond the limit cannot be seen. This corresponds to many realistic sensing systems, such as infrared sensors, sonars, and range scanners on mobile robots. This may substantially enlarge $ H(y)$. Alternatively, suppose that the robot can take distance measurements only in a limited number of directions, as shown in Figure 11.14b. Another interesting variant can be made by removing the compass. This introduces the orientation confusion effects observed in Section 12.2.1. One can even consider interesting localization problems that have little or no sensing [751,752], which yields I-spaces similar to the one for the tray-tilting example in Figure 11.28.

Steven M LaValle 2012-04-20