11.3.3 The Probabilistic Case: POMDPs

Example 11.14 generalizes nicely to the case of $ n$ states. In the operations research and artificial intelligence literature, these are generally referred to as partially observable Markov decision processes or POMDPs (pronounced ``pom dee peez''). For the case of three states, the probabilistic I-space, $ {\cal I}_{prob}$, is a $ 2$-simplex embedded in $ {\mathbb{R}}^3$. In general, if $ \vert X\vert = n$, then $ {\cal I}_{prob}$ is an $ (n-1)$-simplex embedded in $ {\mathbb{R}}^n$. The coordinates of a point are expressed as $ (p_0,p_1,\ldots,p_{n-1}) \in {\mathbb{R}}^n$. By the axioms of probability, $ p_0 + \cdots + p_{n-1} = 1$, which implies that $ {\cal I}_{prob}$ lies in an $ (n-1)$-dimensional subset of $ {\mathbb{R}}^n$. The vertices of the simplex correspond to the $ n$ cases in which the state is known; hence, their coordinates are $ (0,0,\ldots,0,1)$, $ (0,0,\ldots,0,1,0)$, $ \ldots $, $ (1,0,\ldots,0)$. For convenience, the simplex can be projected into $ {\mathbb{R}}^{n-1}$ by specifying a point $ (p_0,\ldots,p_{n-2}) \in {\mathbb{R}}^{n-1}$ for which $ p_0 + \cdots + p_{n-2} \leq 1$ and then choosing the final coordinate as $ p_{n-1} = 1 - (p_0 + \cdots + p_{n-2})$. Section 12.1.3 presents algorithms for planning for POMDPs.
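The projection above can be sketched in a few lines of code: because the coordinates of a belief sum to one, the last coordinate is redundant and can be dropped and later recovered. This is a minimal illustration, not from the text; the function names `project` and `lift` are assumptions chosen for clarity.

```python
def project(belief):
    """Drop the last coordinate of a belief (p_0, ..., p_{n-1});
    the simplex constraint p_0 + ... + p_{n-1} = 1 makes it redundant."""
    assert abs(sum(belief) - 1.0) < 1e-9 and all(p >= 0.0 for p in belief)
    return belief[:-1]

def lift(coords):
    """Recover the full belief from a point in R^{n-1} with
    p_0 + ... + p_{n-2} <= 1, using p_{n-1} = 1 - (p_0 + ... + p_{n-2})."""
    s = sum(coords)
    assert s <= 1.0 + 1e-9 and all(p >= 0.0 for p in coords)
    return coords + [1.0 - s]

# Three states: I_prob is the 2-simplex embedded in R^3.
b = [0.25, 0.5, 0.25]
assert lift(project(b)) == b

# A vertex of the simplex corresponds to a fully known state.
assert lift([0.0, 0.0]) == [0.0, 0.0, 1.0]
```

The design choice mirrors the text: planning algorithms can work in $ {\mathbb{R}}^{n-1}$ and still represent every probabilistic information state exactly.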

Steven M LaValle 2012-04-20