
Dynamic Games of Complete Information

Dynamic games comprise sequential and repeated games, as opposed to static
games, which are single-stage, simultaneous-move games.
Complete Information means that the players' payoffs (or payoff functions) are
common knowledge.
Perfect Information means that at each move of the game, the player with the move
knows the full history of the game to that point.
Imperfect Information means that at some move, the player with the move does not
know the history of the game.
Obviously one could encounter games with complete but imperfect information,
as well as complete and perfect information. Sequential games of complete and perfect
information may be solved by backward induction and the resulting outcome is the
subgame perfect Nash equilibrium of the game. The simplest sequential games are
derived from static games by merely designating one player to go first.
A Pricing Game

                    AMD High Price     AMD Low Price
Intel High Price      $4.0, $4.0         $0.0, $5.0
Intel Low Price       $5.0, $0.0         $1.0, $1.0

(Payoffs in each cell are listed as Intel, AMD.)

1. Find the Nash equilibrium of this static game in normal form.


2. Is it a Prisoner's Dilemma game?
3. Convert to a sequential game in extensive form.
4. Find the subgame perfect Nash equilibrium by backward induction.
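Question 1 can be checked by brute-force best-response enumeration. A minimal sketch, using the payoffs from the table above; the action coding (0 = High, 1 = Low) and the (Intel, AMD) payoff ordering are conventions chosen here for illustration:

```python
# Static pricing game: payoff[i][j] with i = Intel's action, j = AMD's action,
# where 0 = High Price and 1 = Low Price.
intel = [[4.0, 0.0],   # Intel's payoffs
         [5.0, 1.0]]
amd   = [[4.0, 5.0],   # AMD's payoffs
         [0.0, 1.0]]

def nash_equilibria(u1, u2):
    """All pure-strategy profiles where neither player gains by deviating."""
    eqs = []
    for i in range(2):
        for j in range(2):
            if (all(u1[i][j] >= u1[k][j] for k in range(2)) and
                    all(u2[i][j] >= u2[i][k] for k in range(2))):
                eqs.append((i, j))
    return eqs

labels = ["High", "Low"]
for i, j in nash_equilibria(intel, amd):
    print(f"Nash equilibrium: Intel {labels[i]}, AMD {labels[j]}")
# prints: Nash equilibrium: Intel Low, AMD Low
```

The unique equilibrium (Low, Low) is Pareto-dominated by (High, High), which is what makes the game a Prisoner's Dilemma.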

Now consider a similar sequential or two-stage game of complete but imperfect
information.
1. Players 1 and 2 simultaneously choose actions a1 and a2 from feasible sets A1 and
A2, respectively.
2. Players 3 and 4 observe the outcome of the first stage (a1, a2) and then
simultaneously choose actions a3 and a4 from feasible sets A3 and A4, respectively.
3. Payoffs are ui(a1, a2, a3, a4) for i = 1, 2, 3, 4.
Note that information is imperfect because no player knows the moves of all the other
players prior to making a move. To find the equilibrium of this game, we start at the end
and find the Nash equilibrium for the simultaneous game in stage 2. Suppose that for
each feasible outcome for the first stage game, (a1, a2), there is a unique Nash equilibrium
for the second stage represented as (a3*(a1, a2), a4*(a1, a2)). Moving backward to stage 1,
players 1 and 2 can solve the second stage just as well as 3 and 4 can, so the game
becomes:
1. Players 1 and 2 simultaneously choose actions a1 and a2 from feasible sets A1 and
A2.
2. Payoffs are ui(a1, a2, a3*(a1, a2), a4*(a1, a2)) for i = 1,2.
Suppose (a1*, a2*) is the unique Nash equilibrium of this simultaneous-move game. Then
(a1*, a2*, a3*(a1*, a2*), a4*(a1*, a2*)) is the subgame perfect outcome of this game.
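The two-step procedure above can be sketched in code for a toy instance. Everything below (the binary action sets and the particular payoff functions) is hypothetical, constructed only so that each second-stage game has a unique pure-strategy Nash equilibrium, as the argument assumes:

```python
from itertools import product

# Toy two-stage game with A1 = A2 = A3 = A4 = {0, 1}. Players 3 and 4 each
# have a dominant action that matches the first-stage play, so every
# second-stage game has a unique Nash equilibrium (hypothetical payoffs).
A = (0, 1)

def u(i, a1, a2, a3, a4):
    if i == 3:
        return 1 if a3 == a1 else 0   # player 3 wants to match a1
    if i == 4:
        return 1 if a4 == a2 else 0   # player 4 wants to match a2
    return a3 + a4                    # players 1 and 2 both like high stage-2 actions

def stage2_nash(a1, a2):
    """Unique pure NE (a3*, a4*) of the second-stage game, given (a1, a2)."""
    for a3, a4 in product(A, A):
        if (all(u(3, a1, a2, b, a4) <= u(3, a1, a2, a3, a4) for b in A) and
                all(u(4, a1, a2, a3, b) <= u(4, a1, a2, a3, a4) for b in A)):
            return a3, a4

def subgame_perfect_outcome():
    """Fold stage 2 back and find the Nash equilibrium of the induced stage-1 game."""
    for a1, a2 in product(A, A):
        a3, a4 = stage2_nash(a1, a2)
        ok1 = all(u(1, b, a2, *stage2_nash(b, a2)) <= u(1, a1, a2, a3, a4) for b in A)
        ok2 = all(u(2, a1, b, *stage2_nash(a1, b)) <= u(2, a1, a2, a3, a4) for b in A)
        if ok1 and ok2:
            return a1, a2, a3, a4

print(subgame_perfect_outcome())  # (1, 1, 1, 1)
```

Note that when players 1 and 2 contemplate a deviation, the code re-solves the second stage at the deviant history, exactly as in the argument above.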
Definition (Selten): A Nash equilibrium is subgame perfect if the players'
strategies constitute a Nash equilibrium in every subgame.
A subgame starts at any move in a game at which the player that has the move knows the
moves of all the other players up to that point in the game and includes the remainder of
the game. Each stage of the two-stage game above is a subgame and such games are
called stage games.
Finitely Repeated Games
Definition: Given a stage game G, let G(T) denote the finitely repeated game in which G
is played T times, with the outcomes of all preceding plays observed before the next play
begins. The payoffs for G(T) are the sum of the payoffs from the T stage games.
Proposition: If the stage game G has a unique Nash equilibrium then, for any finite T,
the repeated game has a unique subgame perfect outcome: the Nash equilibrium of G
played in every stage.

One might consider the repeated pricing game as an example of this type, as well as a
repeated Prisoner's Dilemma game.
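The proposition can be illustrated for T = 2 with the pricing-game payoffs from above (encoded here as (Intel, AMD) with 0 = high, 1 = low): the final stage is just the stage game, so (low, low) is played after every history, and folding its value back adds only a constant to the stage-1 payoffs, leaving the stage-1 equilibrium unchanged.

```python
# Stage game payoffs: key = (Intel's action, AMD's action), value = (Intel, AMD).
stage = {(0, 0): (4.0, 4.0), (0, 1): (0.0, 5.0),
         (1, 0): (5.0, 0.0), (1, 1): (1.0, 1.0)}

def pure_nash(g):
    """Pure-strategy Nash equilibria of a 2x2 game given as a payoff dict."""
    return [(i, j) for (i, j) in g
            if all(g[(k, j)][0] <= g[(i, j)][0] for k in (0, 1))
            and all(g[(i, k)][1] <= g[(i, j)][1] for k in (0, 1))]

last = pure_nash(stage)              # unique stage equilibrium: [(1, 1)]
cont = stage[last[0]]                # continuation value (1.0, 1.0) from stage 2
folded = {a: (p1 + cont[0], p2 + cont[1]) for a, (p1, p2) in stage.items()}
print(pure_nash(folded))             # prints [(1, 1)]: (low, low) in stage 1 as well
```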
Infinitely Repeated Games
In infinitely repeated games, several anomalies occur. One is that the sum of the payoffs
in any infinite sequence of outcomes of the game is infinite. Hence, the nature of the payoffs
we consider must be revised by introducing a discount factor δ representing the value
today of a dollar received one stage later: δ = 1/(1 + r), where r is the interest rate per
stage.
Definition: Given a discount factor δ, the present value of the infinite sequence of
payoffs π1, π2, π3, ... is

    π1 + δπ2 + δ²π3 + ... = Σ_{t=1}^{∞} δ^(t-1) πt

Note: δ may also be used to represent a repeated game that ends after a random number
of repetitions. Suppose p is the probability that the game ends at the next stage and 1 - p is
the probability that it continues, determined by the flip of a weighted coin after each stage
is played. A payoff π to be received in the next period is then worth only π(1 - p)/(1 + r)
today, a payoff two stages from now is worth only π(1 - p)²/(1 + r)², and so on. If we let
δ = (1 - p)/(1 + r), then the present value above reflects both the time value of money and
the possibility that the game may end.
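The combined effect of discounting and random termination is easy to check numerically; r = 0.05 and p = 0.10 below are arbitrary illustrative values:

```python
def delta(r, p=0.0):
    """Effective discount factor: delta = (1 - p)/(1 + r)."""
    return (1 - p) / (1 + r)

def present_value(pi, d, horizon=10_000):
    """Truncated sum of pi * d**(t-1) for t = 1, 2, ...; approximates pi/(1 - d)."""
    return sum(pi * d ** (t - 1) for t in range(1, horizon + 1))

d = delta(r=0.05, p=0.10)               # 0.9 / 1.05
print(round(present_value(1.0, d), 4))  # prints 7.0, matching 1/(1 - d)
```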
Now consider the infinitely repeated pricing game in which each player's discount factor
is δ and each player's payoff in the repeated game is the present value of the payoffs from
the stage game. Suppose Intel plays the trigger strategy:
Play high in the first stage. In the t-th stage, if the outcome of all t - 1 preceding
stages has been (high, high), then play high; otherwise play low.
If both players adopt this strategy, then (high, high) will be the outcome of the game in
every stage. This can be a Nash equilibrium of the game (and subgame perfect) if δ is
close enough to 1. Suppose Intel has adopted the trigger strategy and will play low
forever once one stage's outcome differs from (high, high). It's fairly obvious that
AMD's best response to Intel's play of low is also to play low forever in this case. But
what is AMD's best response in any stage that follows a history of (high, high)? Playing low at this
stage will yield AMD a payoff of 5, followed by a payoff of 1 in all future stages as Intel
responds with low. Since 1 + δ + δ² + ... = 1/(1 - δ), the present value of this sequence
of payoffs for AMD is

    5 + δ(1) + δ²(1) + ... = 5 + δ/(1 - δ)

If AMD plays high every period, however, then the present value of its payoffs is
4/(1 - δ). Playing high is best for AMD if and only if

    4/(1 - δ) > 5 + δ/(1 - δ)

or δ > 1/4.
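The comparison of the two present values, cooperation (4 per stage forever) versus a one-shot deviation (5 now, then 1 forever), can be checked numerically:

```python
def cooperate(d):
    """Present value of (high, high) forever: 4 + 4d + 4d^2 + ... = 4/(1 - d)."""
    return 4 / (1 - d)

def deviate(d):
    """Deviate now for 5, then earn 1 every stage after: 5 + d/(1 - d)."""
    return 5 + d / (1 - d)

for d in (0.10, 0.20, 0.30, 0.90):
    print(f"delta = {d:.2f}: trigger strategy sustainable? {cooperate(d) > deviate(d)}")
# Cooperation becomes strictly better once delta exceeds 1/4.
```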

In fact, as the payoffs are symmetric, it is in each player's interest to adopt the trigger
strategy only if δ > 1/4. Thus, both players' adoption of the trigger strategy is a Nash
equilibrium if and only if δ > 1/4. To show that this outcome is also subgame perfect,
some definitions are necessary.
Definition: Given a stage game G, let G(∞, δ) denote the infinitely repeated game in
which G is repeated forever and both players' discount factor is δ. For each stage t of the
game, the preceding t - 1 plays of the game are observed before the t-th stage begins. Each
player's payoff is the present value of that player's payoffs from the infinite sequence of
stage games.
Definition: In the finitely repeated game G(T) or the infinitely repeated game G(∞, δ), a
player's strategy specifies the action the player will take in each stage, for each possible
history of play through the previous stage.
Definition: In the finitely repeated game G(T), a subgame beginning at stage t + 1 is the
repeated game in which G is played T - t times, denoted G(T - t). For each of the possible
histories of play through stage t, there is a subgame that begins at stage t + 1. For the
infinitely repeated game G(∞, δ), each subgame beginning at stage t + 1 is identical to the
original game G(∞, δ). As in the finite-horizon case, there are as many possible subgames
beginning at stage t + 1 of G(∞, δ) as there are possible histories of play through stage
t.
Note that the t-th stage of a repeated game taken on its own is not a subgame of the
repeated game. A subgame is a piece of the original game that not only starts at a point
where the history of play thus far is common knowledge among the players, but also
includes all the moves that follow this point in the original game.
Definition: In any game, a Nash equilibrium is a collection of strategies, one for each
player, such that each player's strategy is a best response to the other players' strategies.
Definition (Selten): A Nash equilibrium is subgame perfect if the players'
strategies constitute a Nash equilibrium in every subgame.
Now consider the trigger strategies in the infinitely repeated pricing game above. We've
seen that these are a Nash equilibrium in the game as a whole. Any subgame of an
infinitely repeated game is the same as the original game, except that there are two types
depending on whether (high, high) was played in the previous stage or not. In the first
case, the trigger strategies constitute a Nash equilibrium, as we've already seen. If (high,
high) was not played in the previous period, then the players' strategies are to play (low,
low) forever after, which is also a Nash equilibrium of the game as a whole. Thus, the
trigger-strategy Nash equilibrium of the infinitely repeated Prisoner's Dilemma is
subgame perfect.

Here are some additional definitions that may become useful for extensive form games.
Definition: An information set for a player is a collection of decision nodes such that:
(i) the player has the move at every node in the information set, and
(ii) when play of the game reaches a node in the information set, the player with the
move does not know which node in the information set has (or has not) been reached.

Definition: A subgame in an extensive form game
(a) begins at a decision node n that is a singleton information set (but is not the
game's first decision node),
(b) includes all the decision and terminal nodes following n in the game tree (but
no nodes that do not follow n), and
(c) does not cut any information sets (i.e., if a decision node n' follows n in the
game tree, then all other nodes in the information set containing n' must also
follow n, and so must be included in the subgame).
Note that the equilibria found by backward induction are not necessarily subgame
perfect.
Perfect Bayesian Equilibrium
Requirement 1: At each information set, the player with the move must have a belief
about which node in the information set has been reached by the play of the game. For a
nonsingleton information set, a belief is a probability distribution over the nodes in the
information set; for a singleton information set, the player's belief puts probability one on
the single decision node.
Requirement 2: Given their beliefs, the players' strategies must be sequentially
rational. That is, at each information set the action taken by the player with the move
(and the player's subsequent strategy) must be optimal given the player's belief at that
information set and the other players' subsequent strategies (where a subsequent
strategy is a complete plan of action covering every contingency that might arise after
the given information set has been reached).
Definition: For a given equilibrium in a given extensive form game, an information set is
on the equilibrium path if it will be reached with positive probability if the game is
played according to the equilibrium strategies, and is off the equilibrium path if it is
certain not to be reached if the game is played according to the equilibrium strategies
(where equilibrium may be Nash, subgame perfect, Bayesian, or perfect Bayesian
equilibrium).

Requirement 3: At information sets on the equilibrium path, beliefs are determined by
Bayes' rule and the players' equilibrium strategies.
Requirement 4: At information sets off the equilibrium path, beliefs are determined by
Bayes' rule and the players' equilibrium strategies where possible.
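A small numeric illustration of Requirement 3; the mixing probabilities and the two-node information set below are hypothetical, chosen only to show the Bayes' rule computation:

```python
# Player 1 mixes over three actions; player 2's information set contains the
# two nodes reached when player 1 plays L or M (hypothetical example).
strategy = {"L": 0.3, "M": 0.5, "R": 0.2}
info_set = ("L", "M")

reach = sum(strategy[a] for a in info_set)          # prob. the set is reached: 0.8
beliefs = {a: strategy[a] / reach for a in info_set}
print(beliefs)  # L gets about 0.375, M about 0.625
```

Off the equilibrium path (here, if both probabilities in the set were zero), `reach` would be 0 and Bayes' rule would not pin the belief down, which is exactly the case Requirement 4 addresses.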
Definition: A perfect Bayesian equilibrium consists of strategies and beliefs satisfying
Requirements 1 through 4.
Note that different authors define perfect Bayesian equilibrium in different ways. All
include Requirements 1 through 3, most include Requirement 4, and some impose additional
requirements. A sequential equilibrium (Kreps and Wilson) is a slightly stronger
equilibrium concept than perfect Bayesian equilibrium, but is equivalent in many
economic applications, leading to some confusion in the application of these concepts.
Most now use perfect Bayesian equilibrium as it is less complicated to define. Kreps and
Wilson show that for any finite game (finite numbers of players, types, and possible
moves) a sequential equilibrium exists; this also implies that a perfect Bayesian
equilibrium exists for any finite game.

References
Gibbons, Robert (1992) Game Theory for Applied Economists, Princeton: Princeton
University Press.
Kreps, D. and R. Wilson (1982) "Sequential Equilibria," Econometrica, 50: 863-894.

Puzzlers
1. For the pricing game below, find the Nash equilibrium for the static (simultaneous
move) game.
2. Convert this game to a sequential game in which Intel goes first, show the
extensive form for this game, and find the backward induction equilibrium.
3. Suppose the static game is repeated a finite number of times. What is the
(subgame perfect) Nash equilibrium set of strategies?
4. Suppose the static game is infinitely repeated with both players using the discount
factor δ. Find the values of δ for each player that make the trigger strategy a
subgame perfect equilibrium strategy for this game.

                    AMD High Price     AMD Low Price
Intel High Price      $5.0, $2.5         $2.0, $3.0
Intel Low Price       $6.0, $0.5         $3.0, $1.0

(Payoffs in each cell are listed as Intel, AMD.)
