Lie brackets

The key to establishing whether a system is nonholonomic is to construct motions that combine the effects of two action variables, which may produce motions in a direction that seems impossible from the system distribution. To motivate the coming ideas, consider the differential-drive model from (15.54). Apply the following piecewise-constant action trajectory over the interval $[0,4\Delta t]$:

$\displaystyle u(t) = \left\{ \begin{array}{ll} (1,0) & \mbox{ for $t \in [0,\Delta t)$ } \\ (0,1) & \mbox{ for $t \in [\Delta t,2 \Delta t)$ } \\ (-1,0) & \mbox{ for $t \in [2 \Delta t,3 \Delta t)$ } \\ (0,-1) & \mbox{ for $t \in [3 \Delta t,4 \Delta t]$ } . \end{array}\right.$ (15.71)

The action trajectory is a sequence of four motion primitives: 1) translate forward, 2) rotate forward, 3) translate backward, and 4) rotate backward.

Figure 15.16: (a) The effect of the first two primitives. (b) The effect of the last two primitives.

The result of all four motion primitives in succession from $ {q_{I}}= (0,0,0)$ is shown in Figure 15.16. It is fun to try this at home with an axle and two wheels (Tinkertoys work well, for example). The result is that the differential drive moves sideways! From the transition equation (15.54), such motions appear impossible. This is a beautiful property of nonlinear systems. The state may wiggle its way in directions that do not seem possible. A more familiar example is parallel parking a car. It is known that a car cannot directly move sideways; however, some wiggling motions can be performed to move it sideways into a tight parking space. The actions we perform while parking resemble the primitives in (15.71).
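This sideways displacement is easy to check numerically. The sketch below (an illustration, not part of the text; the function names are invented) integrates the four primitives of (15.71) in closed form for the differential-drive model, assuming unit wheel parameters so that $\dot q = (\cos\theta, \sin\theta, 0)u_1 + (0,0,1)u_2$:

```python
import math

def diff_drive_step(q, u, dt):
    """Exactly integrate the differential drive for a constant action u
    over duration dt; each primitive in (15.71) has one component of u
    equal to zero, so the integral has a closed form."""
    x, y, theta = q
    u1, u2 = u
    if u2 == 0:  # pure translation along the current heading
        return (x + u1 * dt * math.cos(theta),
                y + u1 * dt * math.sin(theta),
                theta)
    return (x, y, theta + u2 * dt)  # pure rotation in place

def commutator_motion(q, dt):
    """Apply the four primitives of (15.71) in sequence."""
    for u in [(1, 0), (0, 1), (-1, 0), (0, -1)]:
        q = diff_drive_step(q, u, dt)
    return q

dt = 1e-3
x, y, theta = commutator_motion((0.0, 0.0, 0.0), dt)
print(x, y, theta)  # y is approximately -dt**2: a net sideways motion
```

The orientation returns exactly to zero and the net $x$ displacement is only of order $\Delta t^3$, so to leading order the robot has slid a distance $\Delta t^2$ in the $-y$ direction, exactly the wiggle shown in Figure 15.16.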

Algebraically, the motions of (15.71) appear to be checking for commutativity. Recall from Section 4.2.1 that a group $ G$ is called commutative (or Abelian) if $ ab = ba$ for any $ a, b \in G$. A commutator is a group element of the form $ aba^{-1}b^{-1}$. If the group is commutative, then $ aba^{-1}b^{-1} = e$ (the identity element) for any $ a, b \in G$. If a nonidentity element of $ G$ is produced by the commutator, then the group is not commutative. Similarly, if the trajectory arising from (15.71) does not form a loop (by returning to the starting point), then the motion primitives do not commute. Therefore, the sequence of motion primitives in (15.71) will be referred to as the commutator motion. It will turn out that if the commutator motion cannot produce any velocities not allowed by the system distribution, then the system is completely integrable. This means that if we are trapped on a surface, then it is impossible to leave the surface by using commutator motions.

Now generalize the differential drive to any driftless control-affine system that has two action variables:

$\displaystyle {\dot x}= f(x) u_1 +g(x) u_2 .$ (15.72)

Using the notation of (15.53), the vector fields would be $ h_1$ and $ h_2$; however, $ f$ and $ g$ are chosen here to allow subscripts to denote the components of the vector field in the coming explanation.

Figure 15.17: The velocity obtained by the Lie bracket can be approximated by a sequence of four motion primitives.

Suppose that the commutator motion (15.71) is applied to (15.72) as shown in Figure 15.17. Determining the resulting motion requires some general computations, as opposed to the simple geometric arguments that could be made for the differential drive. It would be convenient to have an expression for the velocity obtained in the limit as $ \Delta t$ approaches zero. This can be obtained by using Taylor series arguments. These are simplified by the fact that the action history is piecewise constant.

The coming derivation will require an expression for $ {\ddot x}$ under the application of a constant action. For each action, a vector field of the form $ {\dot x}= h(x)$ is obtained. Upon differentiation, this yields

$\displaystyle {\ddot x}= \frac{dh}{dt} = \frac{\partial h}{\partial x} \frac{dx}{dt} = \frac{\partial h}{\partial x} {\dot x} = \frac{\partial h}{\partial x} h(x).$ (15.73)

This follows from the chain rule because $ h$ is a function of $ x$, which itself is a function of $ t$. The derivative $ \partial h/\partial x$ is actually an $ n \times n$ Jacobian matrix, which is multiplied by the vector $ {\dot x}$. To further clarify (15.73), each component can be expressed as

$\displaystyle {\ddot x}_i = \frac{d}{dt} h_i(x(t)) = \sum_{j=1}^n \frac{\partial h_i}{\partial x_j} h_j .$ (15.74)
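As a quick sanity check of (15.73) and (15.74), one can compare the Jacobian-based expression $(\partial h/\partial x)\,h$ against a direct numerical estimate of $\ddot x$ along the flow of $\dot x = h(x)$. The vector field and helper names below are hypothetical, chosen only for illustration:

```python
import math

def h(x):
    # hypothetical smooth planar vector field, used only for illustration
    return [math.sin(x[1]), x[0] ** 2]

def jacobian_times_h(x, eps=1e-6):
    """Evaluate (dh_i/dx_j) h_j summed over j, as in (15.74), with each
    partial derivative estimated by a central finite difference."""
    n = len(x)
    hx = h(x)
    out = [0.0] * n
    for i in range(n):
        for j in range(n):
            xp = list(x); xp[j] += eps
            xm = list(x); xm[j] -= eps
            dhi_dxj = (h(xp)[i] - h(xm)[i]) / (2 * eps)
            out[i] += dhi_dxj * hx[j]
    return out

def xddot_along_flow(x, eps=1e-6):
    """Estimate d/dt h(x(t)) at t = 0 by stepping the flow of
    xdot = h(x) slightly forward and backward in time."""
    n = len(x)
    hx = h(x)
    xp = [x[k] + eps * hx[k] for k in range(n)]
    xm = [x[k] - eps * hx[k] for k in range(n)]
    return [(h(xp)[k] - h(xm)[k]) / (2 * eps) for k in range(n)]

x0 = [0.7, 0.3]
print(jacobian_times_h(x0))  # the two printed vectors should agree closely
print(xddot_along_flow(x0))
```

Both routines approximate the same quantity, which is the point of (15.73): the acceleration along the flow is determined entirely by the field and its Jacobian.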

Now the state trajectory under the application of (15.71) will be determined using the Taylor series, which was given in (14.17). The state trajectory that results from the first motion primitive $ u = (1,0)$ can be expressed as

\begin{displaymath}\begin{split}x({\Delta t}) & = x(0) + {\Delta t}\; {\dot x}(0) + \frac{1}{2} ({\Delta t})^2 \; {\ddot x}(0) + \cdots \\ & = x(0) + {\Delta t}\; f(x(0)) + \frac{1}{2} ({\Delta t})^2 \; \frac{\partial f}{\partial x}\Big\vert_{x(0)} \; f(x(0)) + \cdots , \end{split}\end{displaymath} (15.75)

which makes use of (15.73) in the second line. The Taylor series expansion for the second primitive is

$\displaystyle x(2 {\Delta t}) = x({\Delta t}) + {\Delta t}\; g(x({\Delta t})) + \frac{1}{2} ({\Delta t})^2 \; \frac{\partial g}{\partial x} \Big\vert_{x({\Delta t})} \; g(x({\Delta t})) + \cdots .$ (15.76)

An expression for $ g(x({\Delta t}))$ can be obtained by using the Taylor series expansion in (15.75) to express $ x(\Delta t)$. The first terms after substitution and simplification are

$\displaystyle x(2 {\Delta t}) = x(0) + {\Delta t}\; (f + g) + ({\Delta t})^2 \left( \frac{1}{2} \frac{\partial f}{\partial x} f + \frac{\partial g}{\partial x} f + \frac{1}{2} \frac{\partial g}{\partial x} g \right) + \cdots .$ (15.77)

To simplify the expression, the evaluation at $ x(0)$ has been dropped from every occurrence of $ f$ and $ g$ and their derivatives.

The idea of substituting previous Taylor series expansions as they are needed can be repeated for the remaining two motion primitives. The Taylor series expansion for the result after the third primitive is

$\displaystyle x(3 {\Delta t}) = x(0) + {\Delta t}\; g + ({\Delta t})^2 \left( \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g + \frac{1}{2}\frac{\partial g}{\partial x} g \right) + \cdots .$ (15.78)

Finally, the Taylor series expansion after all four primitives have been applied is

$\displaystyle x(4 {\Delta t}) = x(0) + ({\Delta t})^2 \left( \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g \right) + \cdots .$ (15.79)

Taking the limit yields

$\displaystyle \lim_{\Delta t \rightarrow 0} \frac{x(4 {\Delta t}) - x(0)}{({\Delta t})^2} = \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g ,$ (15.80)

which is called the Lie bracket of $ f$ and $ g$ and is denoted by $ [f,g]$. Similar to (15.74), the $ i$th component can be expressed as

$\displaystyle [f,g]_i = \sum_{j=1}^{n} \left( f_j \frac{\partial g_i}{\partial x_j} - g_j \frac{\partial f_i}{\partial x_j} \right) .$ (15.81)

The Lie bracket is an important operation in many subjects, and is related to the Poisson and Jacobi brackets that arise in physics and mathematics.
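The limit (15.80) can be observed numerically: simulate the commutator motion by composing the flows of $f$, $g$, $-f$, and $-g$, each for time $\Delta t$, and divide the net displacement by $(\Delta t)^2$. The planar fields below are hypothetical, chosen only for illustration; for $f = (x_2, 1)$ and $g = (0, x_1)$, applying (15.80) by hand gives $[f,g] = (-x_1, x_2)$.

```python
def flow(h, x, t, steps=100):
    """Integrate xdot = h(x) for time t with fixed-step RK4."""
    n = len(x)
    dt = t / steps
    x = list(x)
    for _ in range(steps):
        k1 = h(x)
        k2 = h([x[i] + 0.5 * dt * k1[i] for i in range(n)])
        k3 = h([x[i] + 0.5 * dt * k2[i] for i in range(n)])
        k4 = h([x[i] + dt * k3[i] for i in range(n)])
        x = [x[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(n)]
    return x

# hypothetical planar fields; by (15.80), [f,g] = (-x_1, x_2) for this pair
f = lambda x: [x[1], 1.0]
g = lambda x: [0.0, x[0]]

x0 = [0.5, 0.2]
dt = 1e-3
q = list(x0)
# the commutator motion: flows of f, g, -f, -g in sequence, as in (15.71)
for h in (f, g, lambda x: [-v for v in f(x)], lambda x: [-v for v in g(x)]):
    q = flow(h, q, dt)

disp = [(q[i] - x0[i]) / dt**2 for i in range(2)]
print(disp)  # approaches [f,g](x0) = (-0.5, 0.2) as dt shrinks
```

The scaled displacement agrees with the bracket to within terms of order $\Delta t$, in line with the Taylor series derivation above.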

Example 15.9 (Lie Bracket for the Differential Drive)   The Lie bracket should indicate that sideways motions are possible for the differential drive. Consider taking the Lie bracket of the two vector fields used in (15.54). As in (15.72), rename $ h_1$ and $ h_2$ to $ f$ and $ g$ so that subscripts can denote the components of a vector field: let $ f = [ \cos\theta \;\; \sin\theta \;\; 0]^T$ and $ g = [0 \;\; 0 \;\; 1]^T$.

By applying (15.81), the Lie bracket $ [f,g]$ is

\begin{displaymath}\begin{split}{[f,g]_1} & = f_1 \frac{\partial g_1}{\partial x} - g_1 \frac{\partial f_1}{\partial x} + f_2 \frac{\partial g_1}{\partial y} - g_2 \frac{\partial f_1}{\partial y} + f_3 \frac{\partial g_1}{\partial \theta} - g_3 \frac{\partial f_1}{\partial \theta} = \sin\theta \\ {[f,g]_2} & = f_1 \frac{\partial g_2}{\partial x} - g_1 \frac{\partial f_2}{\partial x} + f_2 \frac{\partial g_2}{\partial y} - g_2 \frac{\partial f_2}{\partial y} + f_3 \frac{\partial g_2}{\partial \theta} - g_3 \frac{\partial f_2}{\partial \theta} = -\cos\theta \\ {[f,g]_3} & = f_1 \frac{\partial g_3}{\partial x} - g_1 \frac{\partial f_3}{\partial x} + f_2 \frac{\partial g_3}{\partial y} - g_2 \frac{\partial f_3}{\partial y} + f_3 \frac{\partial g_3}{\partial \theta} - g_3 \frac{\partial f_3}{\partial \theta} = 0 . \end{split}\end{displaymath} (15.82)

The resulting vector field is $ [f,g] = [ \sin\theta \;\; -\cos\theta \;\; 0]^T$, which indicates the sideways motion, as desired. When evaluated at $ q = (0,0,0)$, the vector $ [0\;\;-1\;\;0]^T$ is obtained. This means that performing short commutator motions wiggles the differential drive sideways in the $ -y$ direction, which we already knew from Figure 15.16. $ \blacksquare$

Example 15.10 (Lie Bracket of Linear Vector Fields)   Suppose that each vector field is a linear function of $ x$, so that the $ n \times n$ Jacobians $ \partial f/\partial x$ and $ \partial g/\partial x$ are constant.

As a simple example, recall the nonholonomic integrator (13.43). In the linear-algebra form, the system is

$\displaystyle \begin{pmatrix}{\dot x}_1 \\ {\dot x}_2 \\ {\dot x}_3 \end{pmatrix} = \begin{pmatrix}1 \\ 0 \\ -x_2 \end{pmatrix} u_1 + \begin{pmatrix}0 \\ 1 \\ x_1 \end{pmatrix} u_2.$ (15.83)

Let $ f = h_1$ and $ g = h_2$. The Jacobian matrices are

$\displaystyle \frac{\partial f}{\partial x} = \begin{pmatrix}0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} \mbox{\;\;\; and \;\;\;} \frac{\partial g}{\partial x} = \begin{pmatrix}0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} .$ (15.84)

Using (15.80),

$\displaystyle \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g = \begin{pmatrix}0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix}1 \\ 0 \\ -x_2 \end{pmatrix} - \begin{pmatrix}0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} \begin{pmatrix}0 \\ 1 \\ x_1 \end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ 1 \end{pmatrix} - \begin{pmatrix}0 \\ 0 \\ -1 \end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ 2 \end{pmatrix} .$ (15.85)

This result can be verified using (15.81).

$ \blacksquare$
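Because the Jacobians in (15.84) are constant, the computation in (15.85) reduces to two matrix-vector products. The short sketch below (helper names invented for illustration) reproduces it and shows that the bracket is the same at every state:

```python
def matvec(A, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

# constant Jacobians from (15.84) and the vector fields from (15.83)
Jf = [[0, 0, 0], [0, 0, 0], [0, -1, 0]]
Jg = [[0, 0, 0], [0, 0, 0], [1, 0, 0]]
f = lambda x: [1.0, 0.0, -x[1]]
g = lambda x: [0.0, 1.0, x[0]]

x = [0.3, -0.8, 0.5]  # an arbitrary state; the result does not depend on it
b = [matvec(Jg, f(x))[i] - matvec(Jf, g(x))[i] for i in range(3)]
print(b)  # [0.0, 0.0, 2.0], matching (15.85)
```

The state dependence in $f$ and $g$ cancels, leaving the constant bracket $[0\;\;0\;\;2]^T$.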

Steven M LaValle 2012-04-20