Dynamic games comprise sequential and repeated games, as opposed to static
games, which consist of single-stage, simultaneous-move games.
Complete Information means that the players' payoffs (or payoff functions) are
common knowledge.
Perfect Information means that at each move of the game, the player with the move
knows the full history of the game to that point.
Imperfect Information means that at some move, the player with the move does not
know the history of the game.
Obviously one could encounter games with complete but imperfect information,
as well as complete and perfect information. Sequential games of complete and perfect
information may be solved by backward induction and the resulting outcome is the
subgame perfect Nash equilibrium of the game. The simplest sequential games are
derived from static games by merely designating one player to go first.
A Pricing Game
                            AMD
                     High Price      Low Price
Intel  High Price    $4.0, $4.0      $0.0, $5.0
       Low Price     $5.0, $0.0      $1.0, $1.0

(In each cell, the first payoff is Intel's and the second is AMD's.)
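As a quick check on the stage game, each strategy profile can be tested for profitable unilateral deviations. The following is a minimal Python sketch (not part of the original notes) using the payoffs from the pricing table above, with each cell listed as (Intel's payoff, AMD's payoff):

```python
# Stage-game payoffs keyed by (Intel's move, AMD's move);
# each value is (Intel's payoff, AMD's payoff).
payoffs = {
    ("high", "high"): (4.0, 4.0),
    ("high", "low"):  (0.0, 5.0),
    ("low",  "high"): (5.0, 0.0),
    ("low",  "low"):  (1.0, 1.0),
}

def is_nash(intel_move, amd_move):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    u_intel, u_amd = payoffs[(intel_move, amd_move)]
    for dev in ("high", "low"):
        if payoffs[(dev, amd_move)][0] > u_intel:   # Intel's unilateral deviation
            return False
        if payoffs[(intel_move, dev)][1] > u_amd:   # AMD's unilateral deviation
            return False
    return True

equilibria = [profile for profile in payoffs if is_nash(*profile)]
print(equilibria)  # → [('low', 'low')]
```

This confirms that (low, low) is the unique equilibrium of the static game, consistent with the mutual low pricing that the trigger strategy reverts to.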
One might consider the repeated pricing game as an example of this type, as well as a
repeated prisoner's dilemma game.
Infinitely Repeated Games
In infinitely repeated games, several anomalies occur. One is that the sum of the payoffs
in any infinite sequence of outcomes of the game is infinite. Hence, the nature of the
payoffs we consider must be revised by introducing a discount factor δ representing the
value today of a dollar received one stage later: δ = 1/(1 + r), where r is the interest rate
per stage.
Definition: Given a discount factor δ, the present value of the infinite sequence of
payoffs π1, π2, π3, … is

    π1 + δπ2 + δ²π3 + … = Σ_{t=1}^∞ δ^(t−1) πt.
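As a numerical check on this definition, a short Python sketch (illustrative, not from the notes) compares a long truncated sum against the closed form 1/(1 − δ) for a constant payoff stream of 1s:

```python
def present_value(payoffs, delta):
    """Present value of a finite payoff stream: sum of delta**(t-1) * pi_t."""
    return sum(delta**(t - 1) * pi for t, pi in enumerate(payoffs, start=1))

# A long constant stream of 1s approaches the closed form 1/(1 - delta).
delta = 0.9
approx = present_value([1.0] * 500, delta)
exact = 1 / (1 - delta)
print(approx, exact)  # both about 10.0
```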
Note: δ may also be used to represent a repeated game that ends after a random number
of repetitions. Suppose p is the probability that the game ends at the next stage and 1 − p is
the probability that it continues, determined by the flip of a weighted coin after each stage
is played. A payoff to be received in the next period is then worth only (1 − p)/(1 + r)
times its current value, and a payoff two stages from now is worth only (1 − p)²/(1 + r)²
times its current value, and so on. If we let δ = (1 − p)/(1 + r), then the present value
above reflects both the time value of money and the possibility that the game may end.
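For instance, with illustrative values p = 0.1 and r = 0.05 (numbers assumed here, not from the notes), the effective discount factor works out as:

```python
# Effective discount factor when the game ends with probability p after each
# stage and money is discounted at interest rate r per stage.
p, r = 0.1, 0.05
delta = (1 - p) / (1 + r)
print(round(delta, 4))  # → 0.8571
```

Random termination thus lowers the effective discount factor below 1/(1 + r), making players effectively more impatient.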
Now consider the infinitely repeated pricing game in which each player's discount factor
is δ and each player's payoff in the repeated game is the present value of the payoffs from
the stage game. Suppose Intel plays the trigger strategy:

Play high in the first stage. In the tth stage, if the outcome of all t − 1 preceding
stages has been (high, high), then play high; otherwise play low.
If both players adopt this strategy, then (high, high) will be the outcome of the game in
every stage. This can be a Nash equilibrium of the game (and subgame perfect) if δ is
close enough to 1. Suppose Intel has adopted the trigger strategy and will play low
forever once one stage's outcome differs from (high, high). It's fairly obvious that
AMD's best response to Intel's play of low is also to play low forever in this case. But
what is AMD's best response at any stage preceded only by (high, high) outcomes? Playing low at this
stage will yield AMD a payoff of 5 followed by a payoff of 1 in all future stages as Intel
responds with low. Since 1 + δ + δ² + … = 1/(1 − δ), the present value of this sequence
of payoffs for AMD is

    5 + δ(1) + δ²(1) + … = 5 + δ/(1 − δ).

If AMD plays high every period, however, then the present value of its payoffs is
4/(1 − δ). Playing high is best for AMD if and only if

    4/(1 − δ) ≥ 5 + δ/(1 − δ),

or

    δ ≥ 1/4.
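Given the stage payoffs above (4 per stage from mutual high pricing, 5 once from deviating, then 1 forever), the critical discount factor can be checked numerically with a small Python sketch (illustrative, not part of the notes):

```python
def cooperate_value(delta):
    """Present value of playing high forever: 4 every stage."""
    return 4 / (1 - delta)

def deviate_value(delta):
    """Present value of deviating: 5 now, then 1 forever as the trigger fires."""
    return 5 + delta / (1 - delta)

# Cooperation should be unprofitable below delta = 1/4 and profitable above it.
for delta in (0.2, 0.25, 0.3):
    print(delta, cooperate_value(delta) >= deviate_value(delta))
```

Below the threshold a patient stream of 4s cannot compensate for forgoing the one-shot gain of 5; above it, it can.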
In fact, as the payoffs are symmetric, it is in each player's interest to adopt the trigger
strategy only if δ ≥ 1/4. Thus, both players' adoption of the trigger strategy is a Nash
equilibrium if and only if δ ≥ 1/4. To show that this outcome is also subgame perfect,
some definitions are necessary.
Definition: Given a stage game G, let G(∞, δ) denote the infinitely repeated game in
which G is repeated forever and both players' discount factor is δ. For each stage of the
game t, the preceding t − 1 plays of the game are observed before the tth stage begins. Each
player's payoff is the present value of that player's payoffs from the infinite sequence of
stage games.
Definition: In the finitely repeated game G(T) or the infinitely repeated game G(∞, δ), a
player's strategy specifies the action the player will take in each stage, for each possible
history of play through the previous stage.
Definition: In the finitely repeated game G(T), a subgame beginning at stage t + 1 is the
repeated game in which G is played T − t times, denoted G(T − t). For each of the possible
histories of play through stage t, there is a subgame that begins at stage t + 1. For the
infinitely repeated game G(∞, δ), each subgame beginning at stage t + 1 is identical to the
original game G(∞, δ). As in the finite-horizon case, there are as many possible subgames
beginning at stage t + 1 of G(∞, δ) as there are possible histories of play through stage
t.
Note that the tth stage of a repeated game taken on its own is not a subgame of the
repeated game. A subgame is a piece of the original game that not only starts at a point
where the history of play thus far is common knowledge among the players, but also
includes all the moves that follow this point in the original game.
Definition: In any game, a Nash equilibrium is a collection of strategies, one for each
player, such that each player's strategy is a best response to the other players' strategies.
Definition (Selten): A Nash equilibrium is subgame perfect if the players' strategies
constitute a Nash equilibrium in every subgame.
Now consider the trigger strategies in the infinitely repeated pricing game above. We've
seen that these are a Nash equilibrium in the game as a whole. Any subgame of an
infinitely repeated game is the same as the original game, but the subgames fall into two types
depending on whether (high, high) was played in every previous stage or not. In the first
case, the trigger strategies constitute a Nash equilibrium, as we've already seen. If (high,
high) was not played in some previous period, then the players' strategies are to play (low,
low) forever after, which is also a Nash equilibrium of the game as a whole. Thus, the
trigger-strategy Nash equilibrium of this infinitely repeated Prisoner's Dilemma is
subgame perfect.
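The trigger strategy's two regimes are easy to see in simulation. The following Python sketch (a hypothetical illustration, not part of the notes) plays two trigger strategists against each other, once on the equilibrium path and once with a single forced deviation:

```python
def trigger(history):
    """Play high as long as every past stage was (high, high); otherwise low."""
    return "high" if all(stage == ("high", "high") for stage in history) else "low"

def play(strategy_a, strategy_b, stages, force_deviation_at=None):
    """Run the repeated game, optionally forcing player A to defect once."""
    history = []
    for t in range(stages):
        a, b = strategy_a(history), strategy_b(history)
        if t == force_deviation_at:
            a = "low"  # exogenous one-time deviation by player A
        history.append((a, b))
    return history

# On the equilibrium path, both players play high in every stage.
print(play(trigger, trigger, 5))
# A single deviation at stage 2 pushes play to (low, low) in every later stage.
print(play(trigger, trigger, 5, force_deviation_at=2))
```

The second run shows the punishment phase: after one off-path outcome, both strategies prescribe low forever, which is exactly the stage-game Nash equilibrium played in every remaining subgame.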
Here are some additional definitions that may be useful for extensive-form games.
Definition: An information set for a player is a collection of decision nodes such that:

(i) the player has the move at every node in the information set, and
(ii) when play of the game reaches a node in the information set, the player with
the move does not know which node in the information set has (or has not)
been reached.
References
Gibbons, Robert (1992) Game Theory for Applied Economists, Princeton: Princeton
University Press.
Kreps, D. and R. Wilson (1982), "Sequential Equilibrium," Econometrica, 50: 863-94.
Puzzlers
1. For the pricing game below, find the Nash equilibrium for the static (simultaneous
move) game.
2. Convert this game to a sequential game in which Intel goes first, show the
extensive form for this game, and find the backward induction equilibrium.
3. Suppose the static game is repeated a finite number of times. What is the
(subgame perfect) Nash equilibrium set of strategies?
4. Suppose the static game is infinitely repeated with both players using the discount
factor δ. Find the value of δ for each player that makes the trigger strategy a
subgame perfect equilibrium strategy for this game.
                            AMD
                     High Price      Low Price
Intel  High Price    $5.0, $2.5      $2.0, $3.0
       Low Price     $6.0, $0.5      $3.0, $1.0

(In each cell, the first payoff is Intel's and the second is AMD's.)