9.4.1 Two-Player Games

To help make the connection to Section 9.3 smoother, two-player games will be considered first. This case is also easier to understand because the notation is simpler. The ideas are then extended without difficulty from two players to many players. The game is formulated as follows.

Formulation 9.8 (A Two-Player Nonzero-Sum Game)  
  1. The same components as in Formulation 9.7, except the cost function.
  2. A function, $ L_1: U \times V \rightarrow {\mathbb{R}}\cup \{\infty\}$, called the cost function for $ {{\rm P}_1}$.
  3. A function, $ L_2: U \times V \rightarrow {\mathbb{R}}\cup \{\infty\}$, called the cost function for $ {{\rm P}_2}$.

The only difference with respect to Formulation 9.7 is that now there are two independent cost functions, $ L_1$ and $ L_2$, one for each player. Each player would like to minimize its cost. There is no maximization in this problem; that appeared in zero-sum games because $ {{\rm P}_2}$ had opposing interests from $ {{\rm P}_1}$. A zero-sum game can be modeled under Formulation 9.8 by setting $ L_1 = L$ and $ L_2 = -L$.

Paralleling Section 9.3, first consider applying deterministic strategies to solve the game. As before, one possibility is that a player applies its security strategy. To do this, it does not even need to look at the other player's cost function. It seems inappropriate, however, to ignore one of the two cost functions when making a decision. In most cases, the security strategy results in regret, which makes it unsuitable for nonzero-sum games.

A strategy that avoids regret will now be given. A pair $ (u^{*},v^{*})$ of actions is defined to be a Nash equilibrium if

$\displaystyle L_1(u^{*},v^{*}) = \min_{u \in U} \Big\{ L_1(u,v^{*}) \Big\}$ (9.66)


$\displaystyle L_2(u^{*},v^{*}) = \min_{v \in V} \Big\{ L_2(u^{*},v) \Big\} .$ (9.67)

These expressions imply that neither $ {{\rm P}_1}$ nor $ {{\rm P}_2}$ has regret. Equation (9.66) indicates that $ {{\rm P}_1}$ is satisfied with its action, $ u^{*}$, given the action, $ v^{*}$, chosen by $ {{\rm P}_2}$. $ {{\rm P}_1}$ cannot reduce its cost any further by changing its action. Likewise, (9.67) indicates that $ {{\rm P}_2}$ is satisfied with its action $ v^{*}$.
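For finite action sets, conditions (9.66) and (9.67) can be checked directly by brute force: each player compares its cost at $ (u^{*},v^{*})$ against all of its own unilateral deviations. The sketch below is illustrative only; the cost functions used are hypothetical, not taken from the text.

```python
def is_nash(u_star, v_star, U, V, L1, L2):
    """Check (9.66) and (9.67): neither player can lower its own
    cost by unilaterally changing its action."""
    best_for_p1 = min(L1(u, v_star) for u in U)   # right side of (9.66)
    best_for_p2 = min(L2(u_star, v) for v in V)   # right side of (9.67)
    return (L1(u_star, v_star) == best_for_p1 and
            L2(u_star, v_star) == best_for_p2)

# Hypothetical cost functions, chosen only to exercise the test.
U = V = [1, 2]
L1 = lambda u, v: (u - v) ** 2      # P1 prefers to match P2's action
L2 = lambda u, v: u * v             # P2 prefers small products
print(is_nash(1, 1, U, V, L1, L2))  # True: neither player has regret
```

Note that the check is entirely local to each player: $ {{\rm P}_1}$ minimizes only $ L_1$ with $ v^{*}$ fixed, and $ {{\rm P}_2}$ minimizes only $ L_2$ with $ u^{*}$ fixed.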

Figure 9.5: A Nash equilibrium can be detected in a pair of matrices by finding some $ (i,j)$ such that $ {L^*_a}= L_1(i,j)$ is the lowest among all elements in column $ j$ of $ A$, and $ {L^*_b}= L_2(i,j)$ is the lowest among all elements in row $ i$ of $ B$. Compare this with Figure 9.2.

The game in Formulation 9.8 can be completely represented using two cost matrices. Let $ A$ and $ B$ denote the cost matrices for $ {{\rm P}_1}$ and $ {{\rm P}_2}$, respectively. Recall that Figure 9.2 showed a pattern for detecting a saddle point. A Nash equilibrium can be detected as shown in Figure 9.5. Think about the relationship between the two. If $ A = -B$, then $ B$ can be negated and superimposed on top of $ A$. This will yield the pattern in Figure 9.2 (each $ \geq$ becomes $ \leq$ because of negation). The values $ {L^*_a}$ and $ {L^*_b}$ coincide in this case. This observation implies that if $ A = -B$, then the Nash equilibrium is actually the same concept as a saddle point. It applies, however, to much more general games.
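The detection pattern of Figure 9.5 translates into a direct scan of the two matrices: an entry $ (i,j)$ is a Nash equilibrium when $ A[i][j]$ is a minimum of column $ j$ of $ A$ and $ B[i][j]$ is a minimum of row $ i$ of $ B$. A minimal sketch follows (zero-based indices; the matrices shown are made up for illustration, not taken from the text):

```python
def nash_equilibria(A, B):
    """Return all (i, j) such that A[i][j] is minimal in column j of A
    and B[i][j] is minimal in row i of B (the Figure 9.5 pattern)."""
    m, n = len(A), len(A[0])
    return [(i, j) for i in range(m) for j in range(n)
            if A[i][j] == min(A[k][j] for k in range(m))
            and B[i][j] == min(B[i][k] for k in range(n))]

# Zero-sum special case: with B = -A, the equilibria found are exactly
# the saddle points of A (compare Figure 9.2).
A = [[3, 3, 5],
     [1, -1, 7],
     [0, -2, 4]]
B = [[-a for a in row] for row in A]   # B = -A
print(nash_equilibria(A, B))           # [(2, 2)], the lone saddle point
```

The zero-sum demonstration mirrors the superimposition argument above: negating $ B$ turns its row minima into row maxima of $ A$, recovering the saddle-point pattern.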

Example 9.17 (A Deterministic Nash Equilibrium)   Consider the game specified by the cost matrices $ A$ and $ B$:

$\displaystyle A: \hspace*{5mm} \begin{array}{\vert c\vert c\vert c\vert} \hline 9 & 4 & 7 \\ \hline 6 & -1 & 5 \\ \hline 1 & 4 & 2 \\ \hline \end{array} \hspace*{10mm} B: \hspace*{5mm} \begin{array}{\vert c\vert c\vert c\vert} \hline 2 & 1 & 6 \\ \hline 5 & 0 & 2 \\ \hline 2 & 2 & 5 \\ \hline \end{array} .$ (9.68)

By applying (9.66) and (9.67), or by using the patterns in Figure 9.5, it can be seen that $ u = 3$ and $ v = 1$ is a Nash equilibrium. The resulting costs are $ L_1 = 1$ and $ L_2 = 2$. Another Nash equilibrium appears at $ u=2$ and $ v = 2$. This yields costs $ L_1 = -1$ and $ L_2 = 0$, which is better for both players.
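Both equilibria can be confirmed by scanning the pair of matrices with the pattern of Figure 9.5. In the sketch below, the entries are those assumed for (9.68), and the reported indices are shifted to start at 1 to match the text.

```python
# Cost matrices assumed for (9.68); rows are P1's actions u,
# columns are P2's actions v.
A = [[9, 4, 7],
     [6, -1, 5],
     [1, 4, 2]]
B = [[2, 1, 6],
     [5, 0, 2],
     [2, 2, 5]]

eqs = []
for i in range(3):
    for j in range(3):
        # A[i][j] must be a column-j minimum and B[i][j] a row-i minimum.
        if (A[i][j] == min(A[k][j] for k in range(3)) and
                B[i][j] == min(B[i][k] for k in range(3))):
            eqs.append((i + 1, j + 1))   # 1-based (u, v), matching the text

print(eqs)   # [(2, 2), (3, 1)]: costs (-1, 0) and (1, 2), respectively
```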

For zero-sum games, the existence of multiple saddle points caused no problem; they are interchangeable and all yield the same cost. For nonzero-sum games, however, multiple Nash equilibria cause serious difficulties. In the example shown here, one Nash equilibrium is clearly better than the other for both players. Therefore, it may seem reasonable that a rational DM would choose the better one. The issue of multiple Nash equilibria will be discussed next. $ \blacksquare$

Steven M LaValle 2012-04-20