Class notes for the Game Theory course offered by Professor Ferenc Szidarovszky

© Ferenc Szidarovszky, 2008

Contents

1 Decision making
2 Examples of games
  2.1 Prisoners' dilemma
  2.2 Chicken
  2.3 Sharing a pie
  2.4 Chain store
  2.5 War game
  2.6 Two-person, zero-sum, discrete games
  2.7 Coin in pocket
  2.8 Cournot oligopoly
  2.9 Bertrand oligopoly
  2.10 Special duopoly
  2.11 Quality control
  2.12 Advertisement
  2.13 Market share
  2.14 Advertisement budget allocation
  2.15 Inventory control
  2.16 Price strategy
  2.17 Duel without sound
  2.18 Duel with sound
  2.19 Spying game
  2.20 Matching pennies
  2.21 Modified war game
  2.22 Hidden bomb in a city
  2.23 Second price auction
  2.24 Irrigation system
  2.25 Waste water management
  2.26 Multipurpose water management system
  2.27 Chess game
3 Discrete games with finitely many strategies
4 Existence of Equilibria
  4.1 Games representable by finite rooted trees
5 Continuous games
  5.1 Brouwer fixed point theorem
  5.2 Kakutani fixed point theorem for point-to-set mappings
  5.3 Banach fixed point theorem
  5.4 Conditions to be a contraction
  5.5 Relation of EP and fixed points
6 Nikaido-Isoda theorem
  6.1 Concave functions
  6.2 Counterexamples for the Nikaido-Isoda theorem
7 Applications
  7.1 Matrix games
  7.2 Bimatrix games
  7.3 Mixed finite games
  7.4 Polyhedron games
  7.5 Multiproduct oligopolies
  7.6 Single-product oligopolies
8 Relation of EP problems and nonlinear programming
9 How to compute EP?
  9.1 Lagrange method
  9.2 Kuhn-Tucker (K-T) conditions
  9.3 Relation of Kuhn-Tucker conditions and Lagrange method
  9.4 Methodology to compute equilibria
10 Applications
  10.1 Bimatrix games
  10.2 Matrix games
  10.3 Oligopoly game (single-product model)
  10.4 Special matrix games
  10.5 Applications
  10.6 Method of fictitious play
  10.7 von Neumann's method
11 Uniqueness of Equilibrium
12 Leader-follower games
  12.1 Application to duopolies
13 Games with incomplete information
14 Cooperative games
  14.1 Characteristic functions
  14.2 Core of game
  14.3 Stable sets (or von Neumann-Morgenstern solution)
  14.4 The Shapley value
  14.5 Social choice
15 Conflict resolution
  15.1 Bargaining as noncooperative game
  15.2 Single-player decision problem
  15.3 Axiomatic bargaining
  15.4 Nonsymmetric Nash solution
  15.5 Area monotonic solution
  15.6 Equal sacrifice solution
  15.7 Kalai-Smorodinsky solution
16 Multiobjective optimization, concepts and methods
  16.1 Existence of nondominated solutions
  16.2 Method of sequential optimization
  16.3 ε-constraint method
  16.4 Weighting method
  16.5 Distance-based methods
  16.6 Direction-based methods
17 Dynamic games
  17.1 Cournot oligopoly
  17.2 Best response dynamics
18 Controllability in oligopolies
19 Illustrative case studies
  19.1 Example 1. Selecting a Restaurant for Lunch
  19.2 Example 2. Buying a Family Car
  19.3 Example 3. Restoration of Chemical Landfill

Game theory

1 Decision making

Three major elements:

Who is in charge of making the decision? The decision maker (DM): one or more.

What choices does the DM have? Alternatives:
  finitely many (discrete problem): A1, A2, ..., Am,
  or described by continuous variables (continuous problem), like
  X = {x | x ∈ R^m, g(x) ≥ 0}.

What are the consequences of the decision? Objective functions, φ1, φ2, ..., φn.

Many cases:
  1 DM with 1 objective: optimization
  1 DM with multiple objectives: multiobjective optimization
  multiple DMs with 1 objective each: game
  multiple DMs with multiple objectives each: Pareto game

Games: the DMs are called the players; the decision alternatives are called strategies; the objective functions are called the payoff functions.

History:
  John von Neumann (1928)
  John Nash (1950-53)
  Nobel laureates Nash, Selten, Harsanyi (1994)
2 Examples of games

2.1 Prisoners' dilemma

Two prisoners who robbed a jewellery store for hire got caught, but the police do not have enough evidence to convict them of the full crime, only of the much lesser crime of driving a stolen car.

Players: the two prisoners.

Strategies: confess to the police (C) or not (NC).

Payoffs: if only one confesses, then he receives a very light sentence (1 year) and the other gets a very harsh sentence (10 years); if both confess, they get medium-long (5 year) sentences; if neither of them confesses, then they are convicted of the lesser crime, a 2-year sentence for each.

(φ1, φ2) in table:

1\2       NC           C
NC     (-2, -2)    (-10, -1)
C      (-1, -10)   (-5, -5)

Question: What to do? The payoff of each player depends on the choice of the other player.

Best response: the best choice of each player as a function of the choice of the other:

R1 = C if player 2 selects NC, and C if player 2 selects C.

So player 1 should always confess. The same holds for player 2, who also should always confess.

C = dominant strategy. The solution is (C, C) = both confess.

2.2 Chicken

Players: two kids on motorcycles driving toward each other in a narrow alley.
Strategies: give way to the other (C = chicken) or not (C̄).
Payoffs: being a chicken looks bad in the gang, but by having a crash both might die, which is even worse. (φ1, φ2) in table:

1\2       C         C̄
C      (3, 3)    (2, 4)
C̄      (4, 2)    (1, 1)

R1(C) = C̄, R1(C̄) = C;  R2(C) = C̄, R2(C̄) = C

(C, C̄) and (C̄, C) are common to the two best responses, so they are both called equilibria. No player wants to move away from his equilibrium strategy assuming that the other player keeps his corresponding equilibrium strategy.

Problem & difficulty: in a particular game, which equilibrium is selected?

2.3 Sharing a pie

Players: 2 people who share a pie of unit size.
Strategies: requests from the pie, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1.
Payoffs: if the requests are feasible (x + y ≤ 1), both get the requested amount; if infeasible (x + y > 1), then neither of them receives anything:

φ1(x, y) = x if x + y ≤ 1, 0 otherwise
φ2(x, y) = y if x + y ≤ 1, 0 otherwise

Infinitely many equilibria: {(x, y) | 0 ≤ x, y ≤ 1, x + y = 1}

2.4 Chain store

Players: chain store (C) & entrepreneur (E).
Strategies: for C: soft or hard on E (S, H); for E: stay in business or get out (I, O).
Payoffs (φ_C, φ_E):

C\E       I         O
S      (2, 2)    (5, 1)
H      (0, 0)    (5, 1)

Normal form: (n; S1, ..., Sn; φ1, ..., φn), giving
  the number of players,
  the strategy sets,
  the payoff functions.

Extensive form: shows the development & dynamism of the game.

R_C(I) = S, R_C(O) = {S, H};  R_E(S) = I, R_E(H) = O

2.5 War game

Players: an airplane (A) and a submarine (S); the submarine hides somewhere along the unit interval [0, 1].
Strategies: for A: x ∈ [0, 1], where to drop the bomb; for S: y ∈ [0, 1], where to hide.
Payoffs:

φ1 = e^{−(x−y)²} = damage to the submarine,  φ2 = −φ1.

Zero-sum game: Σ_k φk = 0.

R1(y) = drop the bomb where the submarine is hiding, so R1(y) = y.
R2(x) = hide as far as possible from x, so
R2(x) = 1 if x < 1/2, 0 if x > 1/2, {0, 1} if x = 1/2.

No common point, no equilibrium exists.

2.6 Two-person, zero-sum, discrete games

Two players.
Strategies: {1, ..., m} and {1, ..., n}.
Payoffs: φ1(i, j) = a_ij and φ2(i, j) = −a_ij, collected in the m × n payoff matrix (a_ij) of player 1.

The pair (i, j) is an equilibrium ⟺
  a_ij is the largest among a_1j, ..., a_ij, ..., a_mj (its column), and
  −a_ij is the largest among −a_i1, ..., −a_ij, ..., −a_in, i.e. a_ij is the smallest among a_i1, ..., a_ij, ..., a_in (its row).

That is, a_ij is largest in its column and smallest in its row: a saddle point.

What is the chance of having an equilibrium?

Theorem 2.1 Assume that all a_ij are independent, identically distributed with a continuous distribution function. Then

P(equilibrium exists) = Pmn = m! n! / (m + n − 1)!

Proof. Notice that

  P(all elements a_ij are different) = 1,
  P(a_ij is an equilibrium) is the same for all elements,
  P(equilibrium exists) = mn · P(a_11 is an equilibrium).

a_11 is an equilibrium if a_11 is largest in its column and smallest in its row. So if we order the elements of the first row and column in increasing order,

  a_m1, ..., a_21, a_11, a_12, ..., a_1n,

then a_11 must not change position; only the elements before and after a_11 can be interchanged among themselves. So

P(a_11 is an equilibrium) = (m − 1)!(n − 1)! / (m + n − 1)!
P(equilibrium exists) = mn · (m − 1)!(n − 1)! / (m + n − 1)! = m! n! / (m + n − 1)!

Example 2.1
m = n = 2:  P22 = 2! 2! / 3! = 4/6 = 2/3
m = 2, n = 5:  P25 = 2! 5! / 6! = 240/720 = 1/3
m = 1, n arbitrary:  P1n = 1! n! / (1 + n − 1)! = 1; the smallest element in the only row is the equilibrium.
m = 2, n ≥ 2:  P2n = 2! n! / (2 + n − 1)! = 2/(n + 1) → 0 as n → ∞


What happens if m increases by 1:

P(m+1)n / Pmn = [(m + 1)! n! / (m + n)!] · [(m + n − 1)! / (m! n!)] = (m + 1)/(m + n),

which equals 1 if n = 1, and is less than 1 if n ≥ 2. So Pmn → 0 as m or n tends to ∞ (with the other size at least 2).
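Note. The formula of Theorem 2.1 can also be checked by simulation. A minimal Python sketch (the uniform random payoffs and the trial count are illustrative choices, not part of the course material):

```python
# Monte Carlo check of P(equilibrium exists) = m! n! / (m+n-1)!
import numpy as np
from math import factorial

def has_saddle_point(A):
    """True if some a_ij is largest in its column and smallest in its row."""
    col_max = A.max(axis=0)
    row_min = A.min(axis=1)
    return bool(np.any((A == col_max[None, :]) & (A == row_min[:, None])))

def empirical_probability(m, n, trials=100_000, rng=np.random.default_rng(0)):
    hits = sum(has_saddle_point(rng.random((m, n))) for _ in range(trials))
    return hits / trials

m, n = 2, 5
print(empirical_probability(m, n))                          # close to 0.333
print(factorial(m) * factorial(n) / factorial(m + n - 1))   # exact value 1/3
```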

Example 2.2 With a discrete distribution the theorem fails:
m = n = 2, P(a_ij = 0) = p, P(a_ij = 1) = 1 − p = q.

We have 2^4 = 16 possible matrices (the entry 1 can appear in 0, 1, 2, 3 or 4 positions). There is no equilibrium only in the two cases

[1 0]        [0 1]
[0 1]  and   [1 0]

each with probability p²q², so

P(equilibrium exists) = 1 − 2p²q²

2.7 Coin in pocket

Two players; each puts 0 or 1 coin into his pocket.
Step 1. Player 1 guesses the total number of coins (no bluffing, so with a coin in his pocket he cannot guess 0).
Step 2. Player 2 guesses the total number of coins (no bluffing, and he cannot repeat the guess of player 1).
Whoever's guess is correct wins $1 from the other player.

Extensive form: a game tree (figure omitted).

Strategies of player 1 (coins put in pocket, guess): (0, 0), (0, 1), (1, 1), (1, 2).
Strategies of player 2: 0 or 1 (number of coins in his pocket); notice that his guess is then always unique (no bluff, no repeat).

Normal form (entries are φ1, with φ2 = −φ1):

1\2        0     1
(0, 0)    +1    -1
(0, 1)    -1    +1
(1, 1)    +1    -1
(1, 2)    -1    +1

No equilibrium exists.

2.8 Cournot oligopoly

Players: n firms producing the same product.
Strategies: produced amounts x1, x2, ..., xn, 0 ≤ xk ≤ Lk.
Payoffs:

φk(x1, ..., xn) = xk p(Σ_{l=1}^n xl) − Ck(xk)

where p = price function, which decreases in the total supply, and Ck = cost function of firm k.

Example 2.3 n = 2, Ck(xk) = xk + 1, 0 ≤ xk ≤ 5, p(x1 + x2) = 10 − (x1 + x2):

φ1 = x1(10 − x1 − x2) − x1 − 1 = −x1² + 9x1 − x1 x2 − 1 → max

Best response of player 1:

∂φ1/∂x1 = −2x1 + 9 − x2 = 0 ⟹ x1 = (9 − x2)/2,

always interior, so R1(x2) = (9 − x2)/2. Similarly R2(x1) = (9 − x1)/2.

Equilibrium: x1 = (9 − x2)/2 and x2 = (9 − x1)/2, that is,
2x1 + x2 = 9 and 2x2 + x1 = 9 ⟹ 18 − 4x1 + x1 = 9 ⟹ 9 = 3x1 ⟹ x1 = 3, x2 = 3.
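The same equilibrium can be reached numerically by iterating the best responses. A minimal Python sketch for Example 2.3 (the clipping to [0, 5] and the iteration count are illustrative choices):

```python
# Best-response iteration for Example 2.3:
# p(x1 + x2) = 10 - (x1 + x2), Ck(xk) = xk + 1, 0 <= xk <= 5.
def R(other):
    """Best response (9 - other)/2, clipped to the strategy set [0, 5]."""
    return min(max((9.0 - other) / 2.0, 0.0), 5.0)

x1, x2 = 0.0, 0.0
for _ in range(50):              # iterate the best-response mapping
    x1, x2 = R(x2), R(x1)

print(x1, x2)                    # converges to the equilibrium (3, 3)
```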

2.9 Bertrand oligopoly

Players: n firms producing similar products.
Strategies: prices set for their own products, 0 ≤ pk ≤ Pk.
Payoffs:

φk(p1, p2, ..., pn) = pk dk(p1, ..., pn) − Ck(dk(p1, ..., pn))

where dk = demand for the product made by firm k.

2.10 Special duopoly

Players: 2 price-setting firms.
Strategies: prices p1, p2, with discounts given to faithful customers.
Payoffs:

φ1 = p1 if p1 ≤ p2,  p1 − c if p1 > p2
φ2 = p2 if p2 ≤ p1,  p2 − c if p2 > p1

Assume the maximum price P^max is large enough:

R1(p2) = p2 if p2 ≥ P^max − c,  P^max if p2 ≤ P^max − c;  R2 is the same.

Infinitely many equilibria: {(p1, p2) | P^max − c ≤ p1 = p2 ≤ P^max}

2.11 Quality control

A salesman sells a piece of equipment to a customer with the rules: if the equipment is good, the customer pays a fixed amount to the salesman; if the equipment is defective, the salesman pays a penalty to the customer. The equipment has 3 parts, each of which can be defective with equal probability.

Players: the salesman and the equipment (S & E).
Strategies: for S, how many parts to check before selling the equipment: 0, 1, 2 or 3, where each check has a fixed cost; for E, how many parts are defective: 0, 1, 2 or 3.
Payoffs:
φ1 = expected profit of the salesman

2 = 1 S\E 0 1 2 0 2 1 3 1 3 5 4 2 2 1 3 3 3 4 3 3 2 3 A11 : A12 : A21 :


2 3 1 defective part is not found with probability 3
defective part is not found with probability defective part is found either in rst or second checking, or not

2 1 1 1 + 2 + 2 3 3 2 2 A22 : A31 :
same principle

2 1 + 2 3 3

defective part is found either in rst, second or third checking

1 2 1 1 + 2 + 3 3 3 2 2 A32 :
defective part is found either in rst or second checking

2 1 + 2 3 3

Equilibrium?

Row 0
a01

has three smallest elements is equilibrium if

a01 , a02 , a03

1 2 , 3 5 , 2 3 3 3 5 2 2

a02

is equilibrium if

1 , 4 , 4 3 3 3 3 2 4 3 a03
is equilibrium if

Row 1
a11

has one smallest element is equilibrium if

a11

2 , 3 3

1 2 3 5 , 3 3 2

2 2 3 3 2

contradiction

Row 2
a20 a21

has two potential smallest elements

a20 , a21

is not equilibrium, it is not largest in column is equilibrium if

5 1 3 , 3 5 2

5 1 3 2 , 3 3 2

5 1 3 2 3

contradiction and

5 1 3 2 3
irrelevant

Row 3
a30 a31

has two possible smallest elements

a30

and

a31

is not largest in column, it is not equilibrium is equilibrium if

2 , 2
and

2 2 3 , 3 2

5 2 1 3 3

2 3

2.12 Advertisement

Two firms compete for m markets with numbers of potential customers a1 > a2 > ... > am.

Players: the 2 firms.
Strategies: the market selected for intensive advertisement (each firm can select only one), 1 ≤ i, j ≤ m.
Payoffs: if they advertise in different markets, then each gets all customers of its market; if they advertise in the same market, they share its customers:

1\2 1 2 ... 1 p1 a1 a1 . . . 2 a2 p2 a2 . . .
. . . . . .

m a1 a2
. . .

am

am 1

. . . p m am

1\2 1 2 ... 1 q1 a1 a2 . . . 2 a1 q2 a2 . . .
. . . . . .

m am am
. . .

a1

a2 2

. . . qm am

(i, j) is equilibrium if corresponding element in 1


it is largest in its row. In column 1 the largest is

is largest in its column and in


(1, 1) (2, 1)

if if

p1 a1 a2 a2 p1 a1 .
elements

In columns

2, . . . , m,

(1, 2), . . . , (1, m)

are the largest.

In row 1, the largest is

(1, 1) (1, 2)
In rows

if if

q 1 a1 a2 a2 q 1 a1 .
elements

2, . . . , m,

(2, 1), . . . , (m, 1)

are the largest.

Only matches are

(1, 1) p1 a1 a2 &q1 a1 a2 (2, 1) a2 p1 a1 (1, 2) a2 q1 a1 .

Modied game: 1
so it believes Equilibrium in

is as before but rm 1 believes that rm 2 wants damage it,

2 = 1 1
matrix is largest in its column but smallest in its row:

Smallest elements in rows are

(1, 1), (2, 2), . . . , (m, m)

Largest elements in columns are: in column 1,

(1, 1) (2, 1)

if if

p1 a1 a2 a2 p1 a1 2, . . . , m,
only

in columns Only match is

(1, 2), . . . , (1, m)


otherwise there is no equilibrium.

(1, 1) p1 a1 a2

2.13 Market share

Players: two firms competing for a business of unit value.
Strategies: efforts x, y ≥ 0 in order to get larger portions of the business (e.g. of a market).
Payoffs (business share value minus cost of effort):

φ1 = x/(x + y) − x,  φ2 = y/(x + y) − y

Best responses:

∂φ1/∂x = [(x + y) − x]/(x + y)² − 1 = y/(x + y)² − 1 = 0 ⟹ (x + y)² = y ⟹ x = √y − y (stationary point)
∂²φ1/∂x² = −2y(x + y)/(x + y)^4 = −2y/(x + y)³ < 0,

so φ1 is strictly concave in x, and the stationary point can be negative or nonnegative:

R1(y) = √y − y if y ≤ 1, 0 if y ≥ 1;   R2(x) = √x − x if x ≤ 1, 0 if x ≥ 1.

The best-response curves intersect at (0, 0) and (1/4, 1/4):

x = √y − y, y = √x − x ⟹ x + y = √y and x + y = √x ⟹ x = y
⟹ 2x = √x ⟹ 4x² − x = 0 ⟹ x(4x − 1) = 0 ⟹ x = 0 or x = 1/4.

2.14 Advertisement budget allocation

Players: 2 competing firms on m markets.
Strategies: amounts spent on advertisement in the markets, (x1, ..., xm) and (y1, ..., ym).
Payoffs:

φ1 = Σ_{i=1}^m a_i x_i / (x_i + y_i + z_i)
φ2 = Σ_{j=1}^m a_j y_j / (x_j + y_j + z_j)

where z_i = total spending of the others in market i.

2.15 Inventory control

Players: a retailer and a wholesaler.
Strategies: inventory levels y, z ≥ 0.
Payoffs: with random demand x with pdf f(x),

φ1 = a1 ∫_0^y x f(x) dx + ∫_y^{y+z} [a1 y + a2 (x − y)] f(x) dx + ∫_{y+z}^∞ [a1 y + a2 z] f(x) dx − b1 y

1st term: a1 = unit profit from own inventory (x ≤ y)
2nd term: a2 = unit profit from back order (y < x ≤ y + z)
3rd term: same (x > y + z)
4th term: b1 = unit inventory cost of the retailer

φ2 = a3 ∫_y^{y+z} (x − y) f(x) dx + a3 z ∫_{y+z}^∞ f(x) dx − b2 z

1st term: a3 = unit profit of the wholesaler from back order (y < x ≤ y + z)
last term: b2 = unit inventory cost of the wholesaler.

2.16 Price strategy

Players: n firms producing similar goods.
Strategies: time-varying price of own product, pk(t) ∈ [0, Pk].
Payoffs: let δk(p1, ..., pn) be the demand of good k; then

φk = ∫ δk(p1(t), ..., pn(t)) pk(t) dt (over the time horizon) = total revenue.

2.17 Duel without sound

Two duelists are placed 2 units from each other; each has a gun with 1 bullet in it. At a signal they start walking toward each other, and each can shoot at any time. Their speeds are equal, and the guns have silencers.

Players: the two participants.
Strategies: where to shoot, 0 ≤ x, y ≤ 1 (the distance already covered).
Payoffs: with hitting probabilities P1(x) and P2(y),

φ1 = P1(x) − (1 − P1(x)) P2(y)   if x < y
      P1(x) − P2(y)               if x = y
      −P2(y) + (1 − P2(y)) P1(x)  if x > y

and φ2 = −φ1 (zero-sum).

Example 2.4 P1(x) = x, P2(y) = y:

φ1 = x − y + xy  if x < y
      x − y       if x = y
      x − y − xy  if x > y

For x > y, φ1 = x(1 − y) − y increases in x, so the best candidate is x = 1 with value 1 − 2y; for x < y, φ1 increases in x and approaches y² as x → y (not attained). So R1(y) = 1 exists only if 1 − 2y ≥ y², i.e. y² + 2y − 1 ≤ 0; the roots of y² + 2y − 1 = 0 are (−2 ± √8)/2 = −1 ± √2 ≈ 0.4142 and −2.4142, hence

R1(y) = 1 if y ≤ 0.4142, and does not exist otherwise.

No match with R2, so no equilibrium.

2.18 Duel with sound

Same as the previous problem, but the guns have no silencers (after a miss the other duelist walks up and shoots with certainty):

φ1 = P1(x) − (1 − P1(x)) = 2P1(x) − 1   if x < y
      P1(x) − P2(y)                      if x = y
      −P2(y) + (1 − P2(y)) = 1 − 2P2(y)  if x > y

Example 2.5 P1(x) = x, P2(y) = y:

φ1 = 2x − 1 if x < y,  0 if x = y,  1 − 2y if x > y

Considering the cases y < 1/2, y = 1/2 and y > 1/2,

R1(y) = {x | x > y}       if y < 1/2
         {x | x ≥ 1/2}     if y = 1/2
         does not exist    if y > 1/2

R2(x) is the mirror image, so the only equilibrium is x = y = 1/2: do not shoot early and also do not shoot late.

2.19 Spying game

Players: a spy and counterespionage.
Strategies: efforts x, y ≥ 0.
Payoffs: let
  P(x, y) = probability of arrest,
  V(x) = value of the information collected by the spy,
  U = value of the spy.
Then
  φ1 = P(x, y)(−U) + (1 − P(x, y)) V(x),  φ2 = −φ1.

Example 2.6 U = 4, V(x) = x, P(x, y) = A(x + y) (A > 0 is small):

φ1 = A(x + y)(−4) + [1 − A(x + y)]x = −4Ax − 4Ay + x − Ax² − Axy, strictly concave in x.
∂φ1/∂x = −4A + 1 − 2Ax − Ay = 0 ⟹ x = (1 − 4A − Ay)/(2A), the stationary point, so

R1(y) = (1 − 4A − Ay)/(2A) if y ≤ (1 − 4A)/A, and 0 otherwise.

φ2 = 4Ax + 4Ay − x + Ax² + Axy strictly increases in y, so R2(x) = y_max (the largest feasible effort).

Equilibrium:
  y* = y_max,
  x* = (1 − 4A − A y_max)/(2A) if y_max ≤ (1 − 4A)/A, and 0 otherwise.

2.20 Matching pennies

Players: two participants.
Strategies: each has a coin and can show heads (H) or tails (T).
Payoffs: if the coins show identical sides, then player 1 wins $1 from player 2; otherwise player 2 wins $1 from player 1 (φ2 = −φ1):

1\2     H     T
H       1    -1
T      -1     1

No equilibrium exists.

2.21 Modified war game

Players: an airplane (A) and a submarine (S).
Strategies: for A: x ∈ [0, 1], where to drop the bomb; for S: y ∈ [0, 1], where to hide.
Payoffs: if |x − y| < ε (ε > 0 small), then the submarine is destroyed:

φ1 = 1 if |x − y| < ε, 0 otherwise;  φ2 = −φ1.

R1(y) = {x | 0 ≤ x ≤ 1, y − ε < x < y + ε}
R2(x) = {y | 0 ≤ y ≤ 1, y ≤ x − ε or y ≥ x + ε}

No match, no equilibrium.

2.22 Hidden bomb in a city

A city has rectangular shape with m block-rows and n block-columns; the value of block (i, j) is a_ij, and the city map is the m × n matrix of these values.

A terrorist group places a bomb in one of the blocks and requests the release of criminals from the city prisons. The city can check only one block-row or one block-column to find the bomb.

Players: the city (C) and the terrorists (T).
Strategies: for C: which row or column to search; for T: where to place the bomb.
Payoffs: φ1 = value of the block if the bomb was there and is found (0 otherwise), φ2 = −φ1.

The game matrix has rows indexed by the m + n possible searches and columns indexed by the mn possible bomb locations; the entry is a_ij if the search covers block (i, j) and 0 otherwise.

Equilibrium: a matrix element that is largest in its column and smallest in its row.
Facts: the largest elements in all columns are positive, while the smallest elements in all rows are zeros ⟹ no element satisfies both ⟹ no equilibrium.
2.23 Second price auction

One unit is sold in an auction.

Players: n potential buyers with subjective valuations of the unit 0 ≤ v1 ≤ v2 ≤ ... ≤ vn (known to all).
Strategies: each of them presents a bid, x1, ..., xn, simultaneously; nobody knows the bids of the others.
Payoffs: the highest bidder wins the unit, but he has to pay only the second highest bid:

φk = vk − max_{l≠k} {xl} if xk = max{x1, ..., xn}, and 0 otherwise.

2.24 Irrigation system

Farms use a common water supply to irrigate.

Players: n farms.
Strategies: amounts of water used, x1, ..., xn.
Payoffs: benefit of irrigation minus cost of water,

φk = Bk(xk) − xk · K(Σ_{l=1}^n xl) / Σ_{l=1}^n xl,

where K(s)/s is the unit cost of water. Similar to an oligopoly with p(s) = −K(s)/s and Ck(xk) = −Bk(xk).

2.25 Waste water management

n firms treat waste water in a common plant.

Players: the n firms.
Strategies: amounts of treated waste water, x1, ..., xn.
Payoffs: benefit (reuse of water, not paying penalty, etc.) minus cost of water treatment,

φk = Bk(xk) − xk · K(Σ_{l=1}^n xl) / Σ_{l=1}^n xl.

Same as the previous example.

2.26 Multipurpose water management system

Players: n water users (industry, agriculture, domestic, recreation).
Strategies: amounts of water allocated to the users, x1, ..., xn.
Payoffs: benefit minus cost, same as above.

2.27 Chess game

Players: 2 players controlling the white (W) and black (B) figures.
Strategies: for every possible configuration on the board, a selected next move.
Payoffs:

φ_W = 1 if W wins, −1 if B wins, 0 if tie;  φ_B = −φ_W.

3 Discrete games with finitely many strategies

Example 3.1 No equilibrium exists


1 2 2 0 2 4 1 5

1 and 2  no match

Example 3.2 Unique equilibrium


-2 -1 -10 -5 -2 -10 -1 -5

1 and 2  one match

Example 3.3 Multiple equilibria


25

Game theory

1 1

1 1

1 1

1 1

1 and 2  everything is equilibrium

Method: a do loop goes through every element (i, j) and checks whether φ1(i, j) is largest in its column and φ2(i, j) is largest in its row. If yes, then (i, j) is an equilibrium; otherwise it is not.
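A minimal Python sketch of this search (the function name and the encoding of the bimatrix as nested lists are illustrative choices):

```python
# Check every cell (i, j) of a bimatrix game (Phi1, Phi2) for equilibrium.
def pure_equilibria(Phi1, Phi2):
    m, n = len(Phi1), len(Phi1[0])
    eps = []
    for i in range(m):
        for j in range(n):
            best_for_1 = all(Phi1[i][j] >= Phi1[k][j] for k in range(m))
            best_for_2 = all(Phi2[i][j] >= Phi2[i][l] for l in range(n))
            if best_for_1 and best_for_2:
                eps.append((i, j))
    return eps

# Prisoners' dilemma (Section 2.1): rows/columns ordered (NC, C).
Phi1 = [[-2, -10], [-1, -5]]
Phi2 = [[-2, -1], [-10, -5]]
print(pure_equilibria(Phi1, Phi2))   # [(1, 1)], i.e. (C, C)
```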

4 Existence of Equilibria

4.1 Games representable by finite rooted trees

Assume full information to all players:
(i) the game starts at the root of the tree;
(ii) to each node of the tree a player is assigned, and the game proceeds on an arc originating at this node to its end vertex (by the decision of the player assigned to the originating node);
(iii) each player knows the vertices at which he/she has to make decisions;
(iv) at each terminal node, each player has a given payoff value.

Example 4.1 (a game on a small rooted tree; figure omitted)

Example 4.2 The chess game.

Theorem 4.1 The game has at least one equilibrium point (EP).

Proof. From the root to each terminal node there is a unique path, since otherwise a cycle would arise.

Length of a path = number of arcs on it. The proof is by induction with respect to the height h of the tree (the length of the longest path from the root to any endpoint).

If h = 0, there is only one point and no decision; that point is the EP.
If h ≥ 1, then assume player l is assigned to the root. Let I1, I2, ..., Im be the vertices connected directly to the root. With roots I1, I2, ..., Im we have games of smaller height, each of which has an EP with payoffs (φ1^(1), ..., φn^(1)), ..., (φ1^(m), ..., φn^(m)).

The EP of the original game is obtained as follows. Let φ_l^(k0) = max_{1≤k≤m} φ_l^(k); then
  for player l: the EP strategy of subgame k0, extended with the initial step from the root to I_{k0};
  for players i ≠ l: the EP strategies of the subgames.

Note.

We got a method: backward induction.
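A minimal Python sketch of backward induction (the tree encoding below is an illustrative choice; payoff tuples are ordered (E, C) so that the chain store game of Section 2.4 can serve as a test case):

```python
# Backward induction on a finite rooted tree.  Internal nodes are lists
# [player, children]; terminal nodes are payoff tuples.
def backward_induction(node):
    """Return (payoff vector, list of chosen branch indices) for a subtree."""
    if isinstance(node, tuple):                  # terminal node: payoffs
        return node, []
    player, children = node
    best = None
    for k, child in enumerate(children):
        payoff, path = backward_induction(child)
        if best is None or payoff[player] > best[0][player]:
            best = (payoff, [k] + path)
    return best

# Chain store game: E (player 0) chooses in/out, then C (player 1) soft/hard.
tree = [0, [[1, [(2, 2), (0, 0)]],   # "in":  C picks soft (2,2) or hard (0,0)
            (1, 5)]]                 # "out": payoffs (E, C) = (1, 5)
print(backward_induction(tree))      # ((2, 2), [0, 0]) -> (in, soft)
```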

Example 4.3 (game tree figure omitted): the EP has payoffs (2, 2, 3), reached along the path A → B → C.

Note. Project opportunity: develop a program implementing this method.

Remark. Some vertices might not be assigned to players; they might be random, with a given discrete distribution defined on the arcs originating from such a random vertex. The same proof applies to show the existence of an EP.

Example 4.4 Chain store: backward induction gives the equilibrium (soft, in).
Note. The other EP, (hard, out), is lost.

5 Continuous games

5.1 Brouwer fixed point theorem

Theorem 5.1 (In one dimension) Let f : [a, b] → [a, b] be continuous; then there exists an x* ∈ [a, b] such that x* = f(x*).

Proof.
Case 1: f(a) = a, then x* = a.
Case 2: f(b) = b, then x* = b.
Case 3: a < f(a) and b > f(b); then the curve of f must intercept the 45-degree line.

Theorem 5.2 (Generally) Let D ⊆ R^n be a convex, closed, bounded, nonempty set and f : D → D a continuous function. Then there exists x* ∈ D such that x* = f(x*).

Examples showing that without the conditions the theorem fails:

1. Drop convex: D = [−2, −1] ∪ [1, 2], f(x) = −x
2. Drop closed: D = (0, 1), f(x) = x/2
3. Drop bounded: D = R, f(x) = x + 1
4. Drop continuity: D = [0, 1], f(x) = 1 if x = 0 and f(x) = x/2 if x ≠ 0

5.2

Kakutani xed point theorem for point-to-set mappings

Theorem 5.3 (Kakutani xed point theorem) Let f (x) be a point-to-set mapping
such that for all x D, f (x) is a subset of D. Assume (i) D is nonempty, convex, closed, bounded in Rn , (ii) f (x) is nonempty, closed, convex subset of D for all x D, (iii) Gf = {(x, y) | x D, y f (x)} is closed. Then there is an x such that x f (x ).

Remark.

Generalization of Brouwer's, since if

is one-to-one and continuous, then

Gf

is the curve of the function, which is a closed set.

29

Game theory

5.3

Banachxed point theorem

Theorem 5.4 (BolzanoWeierstrass theorem) In Rn , every bounded sequence has


convergent subsequence.

Proof. In one dimension; in R^n the same follows by repeating the argument component-wise.

There are ∞ many points of the sequence in [A, B]; so one half of it also contains ∞ many points of the sequence.
Let I0 = [A, B], select x0 ∈ I0.
Let I1 be a half of I0 having ∞ many points, select x1 ∈ I1 with x1 ≠ x0.
Let I2 be a half of I1 having ∞ many points, select x2 ∈ I2 with x2 ≠ x0, x1, and so on.
The intervals shrink to a single point; this point is the limit of the selected subsequence.

Convergent: xn → x* as n → ∞, i.e. ρ(xn, x*) → 0, where ρ(·, ·) is the distance of vectors.
Cauchy: ρ(xn, xm) → 0 as n, m → ∞.

Convergent ⟹ Cauchy:
0 ≤ ρ(xn, xm) ≤ ρ(xn, x*) + ρ(x*, xm) → 0.

Cauchy ⟹ Convergent:
For any ε > 0 there exists N0 such that ρ(xm, xn) < ε if m, n ≥ N0.
Only x1, x2, ..., x_{N0−1} are outside the interval [x_{N0} − ε, x_{N0} + ε] ⟹ the sequence is bounded ⟹ it has a convergent subsequence, and since ρ(xn, xm) → 0, the whole sequence converges to the same limit.

Theorem 5.5 (Banach fixed-point theorem) Let D ⊆ R^n be a closed set and f : D → D be a contraction:

ρ(f(x), f(y)) ≤ q ρ(x, y)

with some fixed 0 ≤ q < 1. Then there is a unique x* ∈ D such that x* = f(x*).

Proof. Let x0 ∈ D be arbitrary, and consider the sequence

x1 = f(x0), x2 = f(x1), ...

Since f(x) ∈ D for all x ∈ D, the sequence {xk} is well defined.

(i) We first prove that {xk} is a Cauchy sequence:

ρ(xk+1, xk) = ρ(f(xk), f(xk−1)) ≤ q ρ(xk, xk−1) ≤ q · q ρ(xk−1, xk−2) ≤ ... ≤ q^k ρ(x0, x1),

so for n < m,

ρ(xn, xm) ≤ ρ(xn, xn+1) + ρ(xn+1, xn+2) + ... + ρ(xm−1, xm)
≤ (q^n + q^{n+1} + ... + q^{m−1}) ρ(x0, x1) ≤ q^n/(1 − q) · ρ(x0, x1).

If n, m → ∞, this converges to zero.

(ii) Since {xk} is a Cauchy sequence, it has a limit in R^n, and since D is closed, this limit, denoted by x*, is also in D.

(iii) xn+1 = f(xn) for all n. Let n → ∞; since f is continuous (every contraction is continuous), x* = f(x*), so x* is a fixed point.

(iv) Let x* and y* be 2 fixed points:

ρ(x*, y*) = ρ(f(x*), f(y*)) ≤ q ρ(x*, y*) ⟹ ρ(x*, y*)(1 − q) ≤ 0 ⟹ ρ(x*, y*) = 0 ⟹ x* = y*,

otherwise we get a contradiction.

Examples showing that without the conditions the theorem may fail:

Drop that D is closed: D = (0, 1], f(x) = x/2 ⟹ no fixed point.
Drop contraction: D = (−∞, ∞), f(x) = x + 1 ⟹ no fixed point.

Remark. Comparing the Brouwer and Banach theorems:
(i) in D, Banach is weaker (no condition on boundedness or convexity);
(ii) in f, Brouwer is weaker: contraction ⟹ continuity, since ρ(f(xn), f(x*)) ≤ q ρ(xn, x*), so if xn → x*, then ρ(xn, x*) → 0, and f(xn) → f(x*) as well.

5.4 Conditions to be a contraction

In one dimension: f(x) − f(y) = f′(c)(x − y), so |f(x) − f(y)| ≤ |f′(c)| |x − y|.
If |f′(c)| ≤ q < 1 ⟹ contraction. So |f′| has to be small for the iteration xk+1 = f(xk) to converge.

Given g(x) = 0, how do we rewrite it as a fixed point problem x = f(x)?

x = x + g(x) ≡ f(x):  f′(x) = 1 + g′(x), which need not be small.

x = x + g(x) h(x) ≡ f(x) (with h(x) ≠ 0):  f′(x) = 1 + g′(x) h(x) + g(x) h′(x).

We want this to be zero at the root x*; then it will be small around x*:

1 + g′(x*) h(x*) = 0 ⟹ h(x*) = −1/g′(x*),

so select h(x) = −1/g′(x), giving the fixed point problem

x = x − g(x)/g′(x),  with iteration  xk+1 = xk − g(xk)/g′(xk)   (Newton's method).

Approximating g′(xk) ≈ [g(xk) − g(xk−1)]/(xk − xk−1), we get

xk+1 = xk − g(xk)(xk − xk−1)/[g(xk) − g(xk−1)]   (secant method).
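A minimal Python sketch of both iterations (the test equation x² − 2 = 0 and the step counts are illustrative choices):

```python
# Solving g(x) = 0 via the fixed-point form x = x - g(x)/g'(x).
def newton(g, dg, x, steps=20):
    for _ in range(steps):
        x = x - g(x) / dg(x)          # x_{k+1} = x_k - g(x_k)/g'(x_k)
    return x

def secant(g, x_prev, x, steps=20):
    for _ in range(steps):
        denom = g(x) - g(x_prev)
        if denom == 0.0:              # already converged
            break
        x_prev, x = x, x - g(x) * (x - x_prev) / denom
    return x

g  = lambda x: x**2 - 2.0             # root: sqrt(2)
dg = lambda x: 2.0 * x
print(newton(g, dg, 1.0))             # ~1.41421356
print(secant(g, 1.0, 2.0))            # ~1.41421356
```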


In n dimension:
(i)

Mean value theorems in Rn Theorem 5.6 Let f : Rn R be dierentiable, then


f (z) f (z) (x1 y1 ) + + (xn yn ) x1 xn where z is a point in the linear segment connecting x and y . f (x) f (y) =
Proof.

Let

g(t) = f (y + t(x y)),


then

g(0) = f (y),
So

g(1) = f (x),

g (t) =

f y + t(x y) (x y)

g(1) g(0) = g (c)(1 0) with some c (0, 1), so f (x) f (y) = f (z)(x y), with z being on the linear segment

Theorem 5.7 Let f : Rn Rn be dierentiable, then


1

f (x) f (y) =
0

f y + t(x y) (x y)dt,

where
Proof.

f is the Jacobian matrix of f .


Let g(t) = f y + t(x y) , then

g(1) = f (x),

g(0) = f (y), so
1

f (x) f (y) = g(1) g(0) =


0 1

g (t)dt

=
0

f y + t(x y) (x y)dt

(ii)

Cauchyinequality
a1 , a2 , . . . , an ; b1 , b2 , . . . , bn
are real numbers, then

ai b i

a2 i

b2 i

Proof.

Let

f (x) = (a1 xb1 )2 + + (an xbn )2 =


A

a2 2x i

ai bi +x2
C B

b2 0 i
discriminant must not be

This quadratic polynomial has 0 or 1 real root only positive:

4C 2 4AB 0 C 2 AB.
33

Game theory
(iii)

Vector and matrix norms


Lengths of vectors in (a) (b) (c)

Rn

are called norms and denoted as

. Axioms of norms:

x 0

and

x = 0 x = 0;
with all vectors .

x = || x

and real (or complex) scalars

x+y x + y

Example 5.1 Three particular norms are used most frequently:


l1 -norm:
n

x
Proof.

=
i=1

|xi |

(1)

(iii)(a), (iii)(b) trivial (iii)(c) proved as follows:

|xi + yi | |xi | + |yi |


and by adding it for all i the result is obtained.

l2 -norm:
n

x
Proof.

=
i=1

|xi |2

(2)

(iii)(a), (iii)(b) trivial, (iii)(c) is proven by using Cauchy-inequality


n n

x+y

=
i=1 n

|xi + yi | =
i=1 n

2 |x2 + 2xi yi + yi | i n

i=1 n

|xi |2 +
i=1 n 2

|yi |2 + 2
i=1

|xi | |yi |
n n

i=1

|xi | +
i=1 n

|yi | + 2
i=1 n

|xi 2

|2
i=1

|yi |2

|xi +
i=1 i=1

|2

|yi |2 =

x + y

l -norm: x

= max |xi |

Proof. (iii)(a), (iii)(b) trivial, (iii)(c) can be proven as follows. Let maxi |xi + yi | = |xi0 + yi0 |, then

x+y

= |xi0 + yi0 | |xi0 | + |yi0 | max |xi | + max |yi | = x


i i

+ y

34

Game theory
Let (a) (b) (c)

be

nn
and

matrix, norm of this matrix

satises axioms:

A 0

A =0

if and only if

A = 0;
and constants

A = || A

for all matrices .

A+B A + B

Denition 5.1 A matrix norm is generated from a vector norm, if


A = max A x .
x =1

Example 5.2 It can be proved that the generated matrix norms are:
n

A A A

= max
j i=1 n

|aij | (column-norm) |aij | (row-norm)


j=1

(3)

= max
i

(4)

= max

AT A

(Euclidean-norm)

(5)

where AT A are the eigenvalues of matrix AT A.

Lemma 5.1 If matrix norm . is generated from vector norm . , then for all matrices
A and vectors x, Ax A x

Proof.
A x = A

x x

x max A x x = A x .
x =1

Lemma 5.2 Any matrix-norm generated from a vector norm satises axioms (a), (b)
and (c), furthermore (d) A B A B

Proof.
(a)

A 0
for

trivial,

A =0

Ax =0

for all for all

x =1 Ax=0 x A = 0;

x =1 Ax=0

(b) Trivial; (c)

A+B

= max (A + B)x max ( A x + B x )


x =1 x =1 x =1 x =1

max A x + max B x = A + B .

35

Game theory
(d) With some

z = 1,

AB = ABz A Bz A B z = A B .

Example 5.3
n n

A
Proof.

=
i=1 j=1

|aij |2

(Frobenius-norm)

(6)

Axioms (a), (b) trivial, (c) can be proven as axiom (c) for x 2 .

Lemma 5.3 For all vectors x and matrices A,


Ax
2

Proof.

By Cauchy-inequality,

Ax

2 2

=
i=1 j=1 2 F

aij xj A x
2 2

i=1 j=1

|aij |

2 j=1

|xj |2

Denition 5.2 Matrix norm A and vector norm x are compatible, if for all matrices
and vectors, A x A x
The norm pairs are compatible.

Corollary. Remark.

{ A 2 , x 2 }, { A 1 , x 1 }, { A

} and

{ A

F,

x 2}

Not all matrix norms satisfying axioms (a), (b), (c) are compatible with a

vector norm, as following example shows.

Example 5.4 Let A

1 2

A 1 , clearly axioms are satised, and

I x = x I
contradiction.

x =

1 x , 2

vector norm.

Definition 5.3 The distance of vectors x and y can be defined as ρ(x, y) = ‖x − y‖ with some vector norm.

From Theorem 5.7,

‖f(x) − f(y)‖ ≤ ∫_0^1 ‖f′(y + t(x − y))‖ ‖x − y‖ dt,

so if we assume that D ⊆ R^n is convex, f is differentiable on D, and furthermore for all z ∈ D, ‖f′(z)‖ ≤ q < 1, then

‖f(x) − f(y)‖ ≤ q ‖x − y‖,        (7)

that is, f is a contraction on D.

Remark. (7) might hold with one norm but not with others.

Example 5.5 Consider

A1 = [0.8 0.8; 0 0],  A2 = [0.8 0; 0.8 0],  A3 = [0.51 0.51; 0.51 0].

‖A1‖_1 = 0.8 < 1,  ‖A1‖_∞ = 1.6,  ‖A1‖_2 = 0.8√2 ≈ 1.13
‖A2‖_1 = 1.6,  ‖A2‖_∞ = 0.8 < 1,  ‖A2‖_2 = 0.8√2 ≈ 1.13
‖A3‖_1 = ‖A3‖_∞ = 1.02,  ‖A3‖_2 = 0.51 √((3 + √5)/2) ≈ 0.825 < 1

so each matrix has norm less than 1 in exactly one of the three norms.
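These values can be checked numerically; a small Python sketch using numpy's built-in matrix norms (an added illustration, not part of the derivation):

```python
import numpy as np

A1 = np.array([[0.8, 0.8], [0.0, 0.0]])
A2 = np.array([[0.8, 0.0], [0.8, 0.0]])
A3 = np.array([[0.51, 0.51], [0.51, 0.0]])

for name, A in (("A1", A1), ("A2", A2), ("A3", A3)):
    print(name,
          np.linalg.norm(A, 1),       # column-norm: max column sum
          np.linalg.norm(A, np.inf),  # row-norm:    max row sum
          np.linalg.norm(A, 2))       # Euclidean:   sqrt(max eigenvalue of A^T A)
```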

In n dimensions the same question arises: given g(x) = 0, how do we rewrite it as a fixed point problem?

Rewrite as x = x + H(x) g(x) ≡ f(x), where H(x) is an invertible n × n matrix. The derivative (Jacobian) of f(x) is

I + H′(x) g(x) + H(x) J(x),

where J(x) is the Jacobian of g(x). If this is zero at x*, then since g(x*) = 0,

I + H(x*) J(x*) = 0 ⟹ H(x*) = −J^{-1}(x*).

So we select H(x) = −J^{-1}(x) for all x, giving the fixed point equation

x = x − J^{-1}(x) g(x),

with the iteration method

xk+1 = xk − J^{-1}(xk) g(xk).

It is the multivariable Newton's method.
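A minimal Python sketch of the multivariable Newton iteration (the two-equation test system below is an illustrative choice):

```python
# x_{k+1} = x_k - J^{-1}(x_k) g(x_k), implemented with a linear solve.
import numpy as np

def newton_nd(g, J, x, steps=20):
    for _ in range(steps):
        x = x - np.linalg.solve(J(x), g(x))   # solve J(x) d = g(x) instead of inverting
    return x

# Test system: x^2 + y^2 = 1, x - y = 0; root at (1/sqrt(2), 1/sqrt(2)).
g = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton_nd(g, J, np.array([1.0, 0.5])))  # ~[0.7071, 0.7071]
```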

5.5 Relation of EP and fixed points

We can see this in two different ways.

1. Consider game
G = {n; S1 , . . . , Sn ; 1 , . . . , n }
strategy sets payo functions
Dene

(x, y) =
k=1
for all

k (x1 , . . . , xk1 , yk , xk+1 , . . . , xn )

x, y S = S1 S2 Sn .

Lemma 5.4 (x , . . . , x ) = x is an EP 1 n
(x , x ) (x , y)
for all y S .
adding up for (8)

Proof.

for all k , k (x , . . . , x , . . . , x ) k (x , . . . , yk , . . . , x ) 1 n 1 n k 1, 2, . . . , n, (8) is obtained select y = (x , . . . , yl , . . . , x ), then from (8), 1 n


n

k =

k (x , . . . , x ) l (x , . . . , yl , . . . , x ) + 1 n 1 n
k=1
after cancellation

k (x , . . . , x ) 1 n
k=l

l (x , . . . , x ) l (x , . . . , yl , . . . , x ) x 1 n 1 n
Dene

is EP.

H(x) = z S | (x, z) = max{(x, y) | y S}

Lemma 5.5 x is EP x H(x ). Remark.


EP is equivalent to a xed point problem.

2. Best reply is dened as


max k (x , . . . , x , xk , x , . . . , x ) 1 k1 k+1 n
subject to Set of optimal solutions

x k Sk

Rk (x ):

Rk (x ) = argmax k (x , . . . , x , xk , x , . . . , x ) | xk Sk 1 k1 k+1 n

Lemma 5.6 x is EP x R(x ) where


R(x ) = (R1 (x ), . . . , Rn (x ))

Remark.

EP is equivalent to a xed point problem, but this is dierent than the problem

with mapping

H(x).

Existence results from xed-point theorems:

38

Game theory

Theorem 5.8 Assume R(x) is unique (e.g k is strictly concave in xk ) and continuous
in x, furthermore S = S1 Sn is nonempty, convex, closed, bounded. Then there is at least one EP.

Proof.

Trivial, from Brouwer xedpoint theorem.

Theorem 5.9 Assume R(x) is unique and contraction, furthermore S = S1 Sn is


closed. Then EP exists and is unique.

Proof.

Trivial, from Banach's xedpoint theorem.

Theorem 5.10
(i) R(x) is nonempty, closed, convex for all x S ; (ii) S is nonempty, convex, closed, bounded; (iii) GX = {(x, y) | x S, y R(x)} is closed. Then there is an EP.

Proof.

Trivial from Kakutani's xed point theorem.

In using Brouwer xedpoint theorem only continuity of best reply is needed, but in applying Banach's xedpoint theorem it has to be contraction.

NikaidoIsodatheorem
(i) Sk is nonempty, convex, closed, bounded in nite dimensional vector space;

Theorem 6.1 Assume that for all k


(ii) k is continuous in all variables; (iii) k is concave in xk with xed x1 , . . . , xk1 , xk+1 , . . . , xn . Then the game has at least one EP.
The proof can be based on Kakutani's xed point theorem. Some simple lemmas are needed.

6.1

Concave functions
convex set,

D Rn

f :DR

is called concave if

f (x + y) f (x) + f (y)
for all

0 , 1, + = 1.

39

Game theory

Assume

is dierentiable, then with

= 1 ,

f (x + (1 )y) f (x) + (1 )f (y) f (y + (x y)) f (y) f (x) f (y) f (y + (x y)) f (y) f (x) f (y)
If

then the limit of left hand side is

f y + (x y)
so

,
=0

f (y)(x y) f (x) f (y) and also x y f (x)(y x) f (y) f (x)


In one dimension,

x<y f (y)
and

f (x) f (y) xy f (y) f (x) f (y) f (x) yx

f (x)

40

Game theory

decreases, so if

exists, then

f (x) 0.

Lemma 6.1 Let f : D R be a real valued concave, continuous function and D a


nonempty, convex, closed set. Then the solutions of

maximizexD f (x)
form a convex, closed set.

Proof.
(i)

Let

be the optimal objective function value. optimal solution

xk D, xk x , xk is continuous, f (x ) = f .
Since

f (xk ) = f

for all

k, k

and

is closed,

x D

and so optimal solution.

(ii)

x, y D
concave

and optimal solutions. Let

z = x + (1 )y

(0 1).

Since

is

f (z) = f (x + (1 )y) f (x) +(1 ) f (y) = f


f
and because D is convex, z D f z is also optimal solution.

f (z)

must not be larger than

f ,

so

f (z) =

Lemma 6.2 Let D be convex in Rn and f strictly concave. Then f cannot have multiple
maximum points.

Proof.
then

Assume

x, y

are both maximum points, let

z = x + (1 )y D, 0 < < 1,

f (z) = f (x + (1 )y) > f (x) + (1 )f (y) = f + (1 )f = f


contradiction,

f (z)

cannot be better than maximum. Conditions of the Kakutani xed-point theorem

Proof of NikaidoIsoda theorem.


are veried.

41

Game theory
Consider

max k (x1 , . . . , xk1 , yk , xk+1 , . . . , xn )


subject to

y k Sk Rk (x).
is also closed, convex

Best response is the set of optimal solutions:

is continuous and concave

Rk (x)

is closed, convex

R(x) = R1 (x) Rn (x) S = S1 S2 SN


Only (iii) has to be shown. The set

is closed, convex, bounded by assumptions of the theorem.

G = (x, y) | x D, y R(x)
has to be closed. Select

x(l) x ,
(l)

and

y (l) R(x(l) )
(l)

such that

y (l) y .
(l)

Then

(l) k x1 , . . . , xk1 , yk , xk+1 , . . . , xn


for all

(l)

(l)

(l)

k x1 , . . . , xk1 , yk , xk+1 , . . . , x(l) n k ,

(l)

and

y k Sk .

Let

l ,

by continuity of

k x , . . . , x , yk , x , . . . , x 1 k1 k+1 n yk
is optimal,

k x , . . . , x , yk , x , . . . , x 1 k1 k+1 n

yk Rk (x ), y R(x ) (x , y ) G.

6.2

Counterexamples for NikaidoIsoda theorem


Sk Sk Sk k
nonconvex: any discrete game without equilibrium not closed:

S1 = S2 = [0, 1), 1 = 2 = x + y S1 = S2 = [0, ), 1 = 2 = x + y n=2

not bounded:

not continuous:

1 = k

x+y y

if if

x<1 , 2 = x=1

x+y x

if if

y<1 y=1

no best responses exist

not concave:

n = 2, 1 = 2 = [0, 1] 1 = (x y)2 + 1, 2 = (x y)2 + 1


42

Game theory

1 2

1 0 R1 (y) = {0, 1} R2 (x) = x


no match

y<1 2 1 if y > 2 1 if y = 2
if same

no EP exists.

Example 6.1 n = 2, S1 = S2 = [0, 10]


1 = 2 = 2x + 2y (x + y)2
strictly concave in x and in y

1 = 2 2(x + y) = 0, 1 x y = 0 x x=1y R1 (y) = 1y 0


if 0 y 1 if y > 1

Innitely many equilibria Strict concavity of k is not enough for uniqueness, but in optimization it is enough.

43

Game theory

7
7.1

Applications
Matrix games
A (m n).
(i)
Dene

Players select strategies randomly according to selected discrete distributions on their strategy sets. Zero-sum 2-person game, payo matrix

S1 = S2 = x1
(i)

x1 | (x1 , . . . , x1 ) = x1 , x1 0,
i

(1)

(m)

x1 = 1 x2 = 1
j (j)

(i)

x2 | (x2 , . . . , x2 ) = x2 , x2 0,
is chosen by player 1) is chosen by player 2)

(1)

(n)

(j)

= P(strategy i = P(strategy j

(j) x2

1 (x1 , x2 ) = E(payo) =
i j

aij x1 x2 xT A 1 x2

(i) (j)

2 (x1 , x2 ) = 1 (x1 , x2 )
there is at least one EP.

7.2

Bimatrix games
A, B.
With random strategies

Strategy sets as above, payos are expectations again. 2-person nite game, payo matrices: before, and

S1

and

S2

are as

1 (x1 , x2 ) = xT A x2 , 2 (x1 , x2 ) = xT B x2 1 1
there is at least one EP.

7.3

Mixed nite games


nite game, with original strategy sets

n-players,

Sk = {1, 2, . . . , mk }
and payos

k (i1 , i2 , . . . , in ) = ai1 i2 ...in


In mixed extension

Sk =
expected payo:

xk | (xk , . . . , xk

(1)

(mk )

) = xk , xk 0,
i

(i)

xk = 1

(i)

k (x1 , . . . , xn ) =
i1 i2

in

ai1 i2 ...in x1 1 x2 2 . . . x(in ) n

(i ) (i )

EP exists.

44

Game theory

7.4
n=2

Polyhedron games
Sk = {xk | B k xk bk } 1 (x1 , x2 ) = xT A1 x2 , 1 2 (x1 , x2 ) = 1 (x1 , x2 )

EP exists. Select as a special case

I 0 B k = 1T , bk = 1 , 1T 1
then

I 0 B k xk bk 1T xk 1 1 1T xk 0 1T xk 1 1T xk 1 xk 0
T

1T xk = 1

1 xk 1

Strategy sets of matrix games.

7.5
n

Multiproduct oligopolies
M
products

rms,

xk = (xk , . . . , xk ) =
n

(1)

(M )

production vector of rm

p = p
l=1

xl
n

= xl

price vector

k (x1 , . . . , xn ) =

xT p k
l=1

Ck (xk )
cost

revenue
Assumptions: (i)

Sk =

nonempty, closed, convex, bounded in

RM +

45

Game theory
(ii) (iii) (iv)

and

Ck

are continuous functions

xT p ( k Ck

n l=1

xl )

is concave in

xk

is convex

EP exists. Question: When is (iii) satised?

Denition 7.1 Let f be a function, f : D RM , where D RM such that for all


x, y D, (x y)T (f (x) f (x)) 0
Then f is called monotone.

Note.

If

M = 1,

then this means that

is increasing.

Before nding conditions for monotonicity, a matrix-theoretical reminder. T Let S be a real symmetric matrix (S = S ).

Denition 7.2 S is positive denite if either


(i) xT S x > 0 for all x = 0 (x is complex conjugate of x) or (ii) all eigenvalues of S are positive.

Theorem 7.1 (i) and (ii) and equivalent. Proof.


Fact, all eigenvalues of

have to be real: by complex conjugate of

xT / S x = x (multiplied xT S x = xT x
Notice,

xT )
(9)

xT S x

is real, since

xT S x = xT S x
Similarly,

= xT S xT = xT S x.

xT x =

xi xi =

|xi |2

real and positive.

From (9) and (i),

=
From diagonal form of

xT S x >0 xT x U,

S,

with real invertible matrix

S = UT
so

1
.. .

U, n

xT S x = xT U T

1
.. .

U x n

46

Game theory
With

z = U x, xT S x = z T 1
.. .

z= n i z i zi = i |zi |2 > 0

if

z = 0,

which is the case if

x = 0.

Similarly

Denition 7.3 S is negative denite if


(i) xT S x < 0 for all x = 0 or (ii) all eigenvalues of S are negative.

Denition 7.4 S is positive semidenite if


(i) xT S x 0 for all x or (ii) all eigenvalues of S are nonnegative.

Denition 7.5 S is negative semidenite if


(i) xT S x 0 for all x or (ii) all eigenvalues of S are nonpositive.

Denition 7.6 A symmetric matrix S is indenite, if


(i) xT S x is positive with some x, and also negative with another x; or (ii) among the eigenvalues of S there are positive and negative numbers.

Theorem 7.2 Assume D is convex in RM . Let J(x) denote the Jacobian of f , assume
J(x) is continuous on D. Function f is monotone J(x) + J(x)T is positive semidenite.

Proof.
Consider function

g(t) = f y + t(x y)
with

g(0) = f (y),
Then

g(1) = f (x).

g (t) = J y + t(x y) (x y),

47

Game theory
so

f (x) f (y) = g(1) g(0) =


0 1

g (t)dt

=
0
Therefore

J y + t(x y) (x y)dt

(x y)

f (x) f (y)

=
0

(x y)T J y + t(x y) (x y)dt 1 2


1

=
since for any matrix

(x y)T J y + t(x y) + J T y + t(x y)


0

(x y)dt 0

and vector

u,

uT A u = (uT A u)T = uT AT u.
scalar

If

J(x) + J(x)T

is not positive semidenite, then there exist

y0

and

u=0

such that

uT J(y 0 ) + J(y 0 )T u < 0,


so same holds in neighborhood of

y0.

Take

1 u = u

with large enough

such that

y 0 + tu
belongs to this neighborhood for

0 t 1.
1 0

Then let

x = y0 + u

and

(x y 0 )

f (x) f (y 0 )

1 = 2 =

u J y 0 + tu + J T y 0 + tu
1 0

udt udt < 0,

1 22

uT J y 0 + tu + J T y 0 + tu

contradiction.

Denition 7.7 Let f : D Rn with D Rn . This function is called


tone

if for all x, y D and x = y ,

strictly mono-

(x y)T f (x) f (y) > 0.


more J(x) + J(x)T is negative denite for all x D. Then f is strictly monotone.

Theorem 7.3 Assume D Rn is convex, the Jacobian J(x) of f is continuous, furtherProof. Same as rst part of previous theorem. Note. In one dimension J(x) = f (x), so we get back well-known monotonicity condition.
M Lemma 7.1 Let f : D RM with D R+ being a convex set, f is monotone and all

components of f are concave. Then g(x) = xT f (x) is concave on D.

48

Game theory

Proof.

Let

, 0, + = 1,

and

x, y D.

Then

(x y)T (f (x) f (y)) 0


T T T

(/ )

x f (x) + y f (y) y f (x) + xT f (y).


Since

= (1 ) = 2 , = (1 ) = 2 ,

we get

(x + y)T f (x) + f (y) xT f (x) + y T f (y),


so

g(x + y) = (x + y)T f (x + y)
f (x)+f (y)

(x + y) f (x) + f (y) xT f (x) + y T f (y) = g(x) + g(y) g(x) is concave.

Corollary.

If

is monotone and each component of

is concave, then

k (x1 , . . . , xn ) = xT p xk + k
l=k
is concave in

xl

Ck (xk )

xk

EP exists.

7.6

Single-product oligopolies
n
rms production levels,

x1 , . . . , x n Ck (xk ) p(s)

0 x k Lk

cost functions

price function with

s = x1 + + xn
n

Prot of rm

k: k = xk p

xl
l=1

Ck (xk )

Assumptions:
Functions (i) (ii) (iii)

and all

Ck

are twice continuously dierentiable, and

p <0

(price decreases if supply becomes larger) (which holds if

p + xk p 0 p Ck < 0

is concave or slightly convex) is convex or slightly concave)

(which holds if

Ck

49

Game theory

Best responses:
xk p(xk + sk ) Ck (xk ) max
where

sk =

l=k

xl =

output of the rest of the industry

k = p(xk + sk ) + xk p (xk + sk ) Ck (xk ) xk 2 k = 2p + xk p Ck < 0 x2 k k


is strictly concave in

xk ,
if if

so

0 Lk Rk (sk ) = xk
where

p(sk ) Ck (0) 0 p(Lk + sk ) + Lk p (Lk + sk ) Ck (Lk ) 0

(10)

otherwise,

x k

is the solution of the rst order condition

g(xk ) = p(xk + sk ) + xk p (xk + sk ) Ck (xk ) = 0


Note in case 3 of (10),

g(0) = p(sk ) Ck (0) > 0 g(Lk ) = p(Lk + sk ) + Lk p (Lk + sk ) Ck (Lk ) < 0


and

g (xk ) = 2p + xk p Ck < 0
unique solution. In cases 1 and 2 of (10), R′k = 0; in case 3, by implicit differentiation of xk = Rk(sk):

p′(R′k + 1) + xk p″(R′k + 1) + R′k p′ − C″k R′k = 0 ⟹ R′k(sk) = −(p′ + xk p″)/(2p′ + xk p″ − C″k) ∈ (−1, 0],

which holds also in cases 1 and 2 except on the border lines.

Special case of duopoly:


Assume

n = 2,

then equilibrium is the solution of equations

x = R1 (y) y = R2 (x)
with Jacobian

J=

0 R1 (y) , R2 (x) 0

and since |R1 (y)| q < 1, |R2 (x)| q < 1 with some q (because of continuous derivatives), best response mapping is contraction unique EP.

General case of n:

We can also prove existence and uniqueness of EP if We can also rewrite

n 2.
as

xk

as function of total output

50

Game theory

0 Lk Rk (s) = xk
where

if if

p(s) Ck (0) 0 p(s) + Lk p (s) Ck (Lk ) 0

(11)

otherwise,

x k

is the solution of equation

g(xk ) = p(s) + xk p (s) Ck (xk ) = 0


Note in case 3 of (11),

g(0) = p(s) Ck (0) > 0 g(Lk ) = p(s) + Lk p (s) Ck (Lk ) < 0 g (xk ) = p (s) Ck (xk ) < 0
unique solution.

In cases 1 and 2 of (11), R′k = 0; in case 3, by implicit differentiation of xk = Rk(s):

p′ + R′k p′ + xk p″ − C″k R′k = 0 ⟹ R′k(s) = −(p′ + xk p″)/(p′ − C″k) ≤ 0,

which also holds in cases 1 and 2 except on the border lines.

Equilibrium: consider the equation

h(s) = Σ_{k=1}^n Rk(s) − s = 0.

Note. h(s) strictly decreases in s, h(0) ≥ 0 and h(Σ_{k=1}^n Lk) ≤ 0 ⟹ unique equilibrium. We have also found a method to compute the EP.
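A minimal Python sketch of this method, applied to the duopoly of Example 2.3 (p(s) = 10 − s, Ck(xk) = xk + 1, Lk = 5); the bisection bounds follow from h(0) ≥ 0 and h(Σ Lk) ≤ 0, and the iteration count is an illustrative choice:

```python
# Equilibrium total output as the root of h(s) = sum_k R_k(s) - s (bisection).
def R_total(s, L=5.0):
    """x_k as a function of total output: solves p(s) + x_k p'(s) - C_k' = 0."""
    return min(max(9.0 - s, 0.0), L)        # (10 - s) - x_k - 1 = 0, clipped to [0, L]

def h(s, n=2):
    return n * R_total(s) - s

lo, hi = 0.0, 2 * 5.0                       # h(lo) >= 0, h(hi) <= 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)

s_star = 0.5 * (lo + hi)
print(s_star, R_total(s_star))              # s* = 6, each firm produces 3
```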

8 Relation of EP problems and nonlinear programming

Consider the optimization problem

(P)  maximize f(x)
     subject to x ∈ X (X ⊆ R^n any set), g(x) ≥ 0 (g(x) ∈ R^m).

Definition 8.1 The Lagrangean is

F(x, u) = f(x) + u^T g(x) for all u ≥ 0.

Define the 2-person zero-sum game as

S1 = X, S2 = R^m_+, φ1 = F, φ2 = −F.

Theorem 8.1 If (x*, u*) is an EP, then x* is an optimal solution of the original problem.

Proof. The EP conditions are

f(x*) + u*^T g(x*) ≥ f(x) + u*^T g(x)   for all x ∈ X        (12)
f(x*) + u*^T g(x*) ≤ f(x*) + u^T g(x*)  for all u ≥ 0         (13)

Selecting u = 0 in (13): u*^T g(x*) ≤ 0.   (14)

Next we show that g(x*) ≥ 0. If some gi(x*) < 0, then a sufficiently large ui violates (13); so x* is a feasible solution of (P). Since u* ≥ 0 and g(x*) ≥ 0, u*^T g(x*) ≥ 0; together with (14) we conclude that u*^T g(x*) = 0.

Finally we show that x* is optimal for (P): for all feasible x,

f(x*) = f(x*) + u*^T g(x*) ≥ f(x) + u*^T g(x) ≥ f(x),

using (12) and u*^T g(x) ≥ 0.

In summary:

If we have a good general method for solving 2-person zero-sum games,

then we can solve any nonlinear programming problem.

9
9.1

How to compute EP?


Lagrange method
maximize f (x) s. to g(x) = 0
(15)

Theorem 9.1 Under certain regularity conditions problem (15)


maximize f (x) + uT g(x)
L(x,u)
At optimum

Lagrangean

f (x) +uT
gradient

L =0 & xi g(x) = 0T
Jacobian

L =0 uj

system of nonlinear equation

g(x) = 0

Note that the gradient is a row vector here.

Example 9.1
maximize s. to
Method 1.:

2x1 (x1 x2 )2 x1 + x2 = 1

x2 = 1 x1 , so objective is

2x1 (x1 x2 )2 = 2x1 (x1 1 + x1 )2 = 2x1 (2x1 1)2 = 4x2 + 6x1 1 1


52

Game theory

8x1 + 6 = 0 3 6 = x1 = 8 4 x2 = 1 x1 =
Method 2.:

1 4

2x1 (x1 x2 )2 + u (x1 + x2 1) max

: x1 : x2 : u by adding (16) and (17) (17) (18)


by substracting

2 2(x1 x2 ) + u = 0 2(x1 x2 )(1) + u = 0 x1 + x2 1 = 0 2 + 2u = 0 u = 1 2(x1 x2 ) 1 = 0 2x1 + 2x2 2 = 0 1 4x2 + 1 = 0 x2 = 4 3 x1 = 1 x2 = 4

(16)

(17)

(18) (19) (20) (21) (22) (23)

(18)

9.2

KuhnTucker (K-T) conditions


maximize s. to (x) g(x) 0 (x Rn ) (g(x) Rm )
(24)

Theorem 9.2 Under certain regularity conditions and dierentiability of and g let x
be an optimal solution. Then there exist u , . . . , u such that 1 m u 0 g(x ) 0 Kuhn-Tucker necessary conditions (x ) + uT g(x ) = 0T uT g(x ) = 0
If

Note.

and all components of g(x) are concave, then K-T-conditions are also sucient.

53

Game theory

Example 9.2
maximize s. to x1 + 2x2 5 x 1 , x2 0 n = 2, m = 3, we need u1 , u2 , u3 such that u1 , u2 , u3 x1 2x2 + 5 x1 x2 0 0 0 0 = ln(x1 + x2 ) g1 = x1 2x2 + 5 g2 = x1 g3 = x2

1 2 1 1 0 = (0, 0) , + (u1 , u2 , u3 ) 1 x1 + x2 x1 + x2 0 1 x1 2x2 + 5 = 0 x1 (u1 , u2 , u3 ) x2


By components
+

1 u1 + u2 x1 + x2 1 2u1 + u3 x1 + x2 u1 (x1 2x2 + 5) u2 x 1 u3 x 2 (25), (26) (29) u1 = 0, (27) hence

= = = = =

0 u1 = 0 0 0 0 0 u3 > u 2 u3 > 0 x2 = 0 x1 2x2 + 5 = 0 x1 = 5

(25)

(26) (27) (28) (29) (30) (31) (32) (33)

9.3

Relation of Kuhn-Tucker conditions and Lagrange method


maximize s. to f (x) g(x) = 0

Problem:

Rewrite:

maximize f (x)
54

Game theory

s. to
K-T conditions:

g(x) 0 g(x) 0 u, v 0

g(x) 0 g(x) 0

g(x) = 0

f (x) + uT g(x) v T g(x) = 0T uT g(x) v T g(x) = 0 (trivially


Introduce satised)

u v = ,

without sign constraint:

g(x) = 0 f (x) + T g(x) = 0T

Lagrange method

9.4    Methodology to compute equilibria

If

S_k = { x_k | g_k(x_k) ≥ 0 }

and (x₁*, . . . , xₙ*) is an EP, then x_k* is an optimal solution of

maximize  φ_k(x₁*, . . . , x_{k−1}*, x_k, x_{k+1}*, . . . , xₙ*)
s. to     g_k(x_k) ≥ 0.

Assume that the regularity conditions for the K-T relations are satisfied; then

u_k ≥ 0
g_k(x_k) ≥ 0
∇_k φ_k(x) + u_kᵀ ∇ g_k(x_k) = 0ᵀ          for all k = 1, . . . , n       (34)
u_kᵀ g_k(x_k) = 0

where ∇_k denotes the gradient with respect to x_k. Consider the problem:

minimize  Σ_{k=1}^{n} u_kᵀ g_k(x_k)                                       (35)
s. to     u_k ≥ 0,  g_k(x_k) ≥ 0,
          ∇_k φ_k(x) + u_kᵀ ∇ g_k(x_k) = 0ᵀ        (k = 1, 2, . . . , n)

Theorem 9.3 If (x₁*, . . . , xₙ*) is an EP, then there exist u₁*, . . . , uₙ* such that (x₁*, . . . , xₙ*, u₁*, . . . , uₙ*) is an optimal solution of (35).

Proof. If (x₁*, . . . , xₙ*) is an EP, then the objective function is zero there, and at any feasible solution it is ≥ 0.

Theorem 9.4 Assume φ_k is concave in x_k and all components of g_k are also concave. Then x* is an equilibrium if and only if it is optimal for (35) with some u₁*, . . . , uₙ*.


Note. The optimal objective value is zero.

Method 1. Write up conditions (34) for all players simultaneously and solve the resulting system (find a feasible solution).

Example 9.3 n = 2 oligopoly,

0 ≤ x, y ≤ 10,  C₁(x) = x,  C₂(y) = y,  P(x, y) = 20 − (x + y)
φ₁(x, y) = x(20 − x − y) − x = 19x − x² − xy
φ₂(x, y) = y(20 − x − y) − y = 19y − y² − xy

Kuhn-Tucker conditions, with g₁(x) = (x, 10 − x)ᵀ and g₂(y) = (y, 10 − y)ᵀ:

Player 1:
    u₁, v₁ ≥ 0
    x ≥ 0,  10 − x ≥ 0
    19 − 2x − y + u₁ − v₁ = 0
    u₁x = 0,  v₁(10 − x) = 0

Player 2:
    u₂, v₂ ≥ 0
    y ≥ 0,  10 − y ≥ 0
    19 − 2y − x + u₂ − v₂ = 0
    u₂y = 0,  v₂(10 − y) = 0

Altogether 14 constraints in the 6 variables (x, y, u₁, v₁, u₂, v₂).

Method 2. Solve problem (35).

Example 9.4 In the case of the previous example, the Kuhn-Tucker conditions are equivalent to the optimization problem:

minimize  u₁x + v₁(10 − x) + u₂y + v₂(10 − y)
subject to
    u₁, v₁, u₂, v₂ ≥ 0
    0 ≤ x, y ≤ 10
    19 − 2x − y + u₁ − v₁ = 0
    19 − 2y − x + u₂ − v₂ = 0

If the optimal objective value is zero, then we obtained a solution of the Kuhn-Tucker conditions (so we obtained the equilibrium). Otherwise no solution exists, therefore there is no equilibrium.
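As a numerical illustration, the following minimal sketch (not part of the original notes; SciPy assumed available, with an arbitrary starting point) solves this instance of problem (35) and should recover the interior Cournot equilibrium x = y = 19/3:

# A minimal sketch: solving problem (35) for Example 9.4 with SciPy.
import numpy as np
from scipy.optimize import minimize

def objective(z):                        # z = (x, y, u1, v1, u2, v2)
    x, y, u1, v1, u2, v2 = z
    return u1*x + v1*(10 - x) + u2*y + v2*(10 - y)

cons = [
    {'type': 'eq', 'fun': lambda z: 19 - 2*z[0] - z[1] + z[2] - z[3]},
    {'type': 'eq', 'fun': lambda z: 19 - 2*z[1] - z[0] + z[4] - z[5]},
]
bounds = [(0, 10), (0, 10)] + [(0, None)]*4
res = minimize(objective, x0=np.ones(6), bounds=bounds, constraints=cons)
print(res.x[:2], res.fun)                # approx. x = y = 19/3, objective near 0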
56

Game theory

Example 9.5 n = 2, x, y ≥ 0, S₁ = S₂ = [0, ∞),

φ₁(x, y) = x + y − (x + y)² = x + y − x² − 2xy − y²
φ₂(x, y) = x + y − 2(x + y)² = x + y − 2x² − 4xy − 2y²

Kuhn-Tucker conditions:

u, x ≥ 0,  v, y ≥ 0
1 − 2x − 2y + u·1 = 0
1 − 4x − 4y + v·1 = 0
ux = 0,  vy = 0

Optimization problem:

minimize  ux + vy
subject to
    u, v ≥ 0,  x, y ≥ 0
    1 − 2x − 2y + u = 0
    1 − 4x − 4y + v = 0

10    Applications

10.1    Bimatrix games

φ₁ = x₁ᵀ A x₂,   φ₂ = x₁ᵀ B x₂

S₁ = { x₁ | x₁ ≥ 0, Σᵢ x₁^(i) = 1 },   S₂ = { x₂ | x₂ ≥ 0, Σⱼ x₂^(j) = 1 }

The strategy sets can be described by

g₁(x₁) = ( x₁ ; 1ᵀx₁ − 1 ; −1ᵀx₁ + 1 ) ≥ 0,      ∇g₁ = ( I ; 1ᵀ ; −1ᵀ ),
g₂(x₂) = ( x₂ ; 1ᵀx₂ − 1 ; −1ᵀx₂ + 1 ) ≥ 0,      ∇g₂ = ( I ; 1ᵀ ; −1ᵀ ).

Let's formulate problem (35). Objective:

Σ_{i=1}^{m} u₁^(i)x₁^(i) + u₁^(m+1)(1ᵀx₁ − 1) + u₁^(m+2)(−1ᵀx₁ + 1)
+ Σ_{j=1}^{n} u₂^(j)x₂^(j) + u₂^(n+1)(1ᵀx₂ − 1) + u₂^(n+2)(−1ᵀx₂ + 1)

Introduce

α = u₁^(m+2) − u₁^(m+1),   β = u₂^(n+2) − u₂^(n+1)

(without sign constraints); then the objective becomes

u₁ᵀx₁ + u₂ᵀx₂ − α(1ᵀx₁ − 1) − β(1ᵀx₂ − 1).

Constraints:

u₁ ≥ 0,  u₂ ≥ 0
x₁ ≥ 0,  x₂ ≥ 0
1ᵀx₁ = 1,  1ᵀx₂ = 1
x₂ᵀAᵀ + u₁ᵀ + (u₁^(m+1) − u₁^(m+2))1ᵀ = 0ᵀ
x₁ᵀB + u₂ᵀ + (u₂^(n+1) − u₂^(n+2))1ᵀ = 0ᵀ

From the last two equations:

u₁ᵀ = α1ᵀ − x₂ᵀAᵀ,   u₂ᵀ = β1ᵀ − x₁ᵀB,

so the objective becomes

(α1ᵀ − x₂ᵀAᵀ)x₁ + (β1ᵀ − x₁ᵀB)x₂ − α(1ᵀx₁ − 1) − β(1ᵀx₂ − 1)
= −x₁ᵀA x₂ − x₁ᵀB x₂ + α + β.

Minimizing this is the same as maximizing its negative, and since u₁, u₂ ≥ 0 means A x₂ ≤ α1 and Bᵀx₁ ≤ β1, problem (35) reduces to the quadratic programming problem

maximize  x₁ᵀ(A + B)x₂ − α − β
s. to     x₁ ≥ 0,  x₂ ≥ 0                                                  (36)
          1ᵀx₁ = 1,  1ᵀx₂ = 1
          A x₂ ≤ α1,  Bᵀx₁ ≤ β1

Theorem 10.1 (x₁*, x₂*) is an EP if and only if, with suitable α*, β*, it is an optimal solution of (36).

Example 10.1

A = (  2  −1 )      B = (  1  −1 )      A + B = (  3  −2 )
    ( −1   1 ),         ( −1   2 ),              ( −2   3 ),

so (36) becomes

maximize  3x₁^(1)x₂^(1) − 2x₁^(1)x₂^(2) − 2x₁^(2)x₂^(1) + 3x₁^(2)x₂^(2) − α − β
s. to     x₁^(1), x₁^(2), x₂^(1), x₂^(2) ≥ 0
          x₁^(1) + x₁^(2) = 1,  x₂^(1) + x₂^(2) = 1
          2x₂^(1) − x₂^(2) ≤ α,  −x₂^(1) + x₂^(2) ≤ α
          x₁^(1) − x₁^(2) ≤ β,  −x₁^(1) + 2x₁^(2) ≤ β

Optimal solutions:

x₁              x₂              α      β
(1, 0)          (1, 0)          2      1
(0, 1)          (0, 1)          1      2
(3/5, 2/5)      (2/5, 3/5)      1/5    1/5
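These solutions are easy to verify numerically. A minimal sketch (not part of the original notes; NumPy assumed available) checking the mixed equilibrium:

# A minimal sketch: verifying the mixed equilibrium of Example 10.1.
import numpy as np

A = np.array([[2, -1], [-1, 1]])       # payoff matrix of player 1
B = np.array([[1, -1], [-1, 2]])       # payoff matrix of player 2
x1 = np.array([3/5, 2/5])
x2 = np.array([2/5, 3/5])

print(A @ x2)                           # (0.2, 0.2): player 1 is indifferent
print(B.T @ x1)                         # (0.2, 0.2): player 2 is indifferent
print(x1 @ (A + B) @ x2 - 1/5 - 1/5)    # objective of (36) is zero at an EP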

10.2    Matrix games

Here A + B = 0, so problem (36) becomes

maximize  −α − β
s. to     x₁ ≥ 0,  x₂ ≥ 0
          1ᵀx₁ = 1,  1ᵀx₂ = 1
          A x₂ ≤ α1,  Aᵀx₁ ≥ −β1.

Separate (x₂, α) and (x₁, β):

minimize  α                         minimize  β
s. to     A x₂ ≤ α1                 s. to     Aᵀx₁ ≥ −β1
          1ᵀx₂ = 1                            1ᵀx₁ = 1
          x₂ ≥ 0                              x₁ ≥ 0

Both are linear programming problems.

Theorem 10.2 (x₁*, x₂*) is an EP if and only if they are optimal solutions with some α* and β*.

Example 10.2

A = (  2   1   0 )
    (  2   0   3 )
    ( −1   3   3 )

For player 2:

minimize  α
s. to     2x₂^(1) + x₂^(2) − α ≤ 0
          2x₂^(1) + 3x₂^(3) − α ≤ 0
          −x₂^(1) + 3x₂^(2) + 3x₂^(3) − α ≤ 0
          x₂^(1) + x₂^(2) + x₂^(3) = 1
          x₂^(1), x₂^(2), x₂^(3) ≥ 0

and for player 1:

minimize  β
s. to     2x₁^(1) + 2x₁^(2) − x₁^(3) + β ≥ 0
          x₁^(1) + 3x₁^(3) + β ≥ 0
          3x₁^(2) + 3x₁^(3) + β ≥ 0
          x₁^(1) + x₁^(2) + x₁^(3) = 1
          x₁^(1), x₁^(2), x₁^(3) ≥ 0

Optimal solutions:

x₁* = (4/7, 4/21, 5/21)ᵀ,   x₂* = (3/7, 3/7, 1/7)ᵀ,   α* = −β* = 9/7.

Note. Since the optimal objective value in (36) is zero, we may select β = −α. The optimal α* is called the value of the matrix game.
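The player-2 LP can be handed to a standard LP solver. A minimal sketch (not part of the original notes; SciPy assumed available):

# A minimal sketch: solving the matrix game of Example 10.2 as the LP
# "minimize alpha subject to A x2 <= alpha 1" with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2, 1, 0], [2, 0, 3], [-1, 3, 3]])
n = A.shape[1]
c = np.r_[np.zeros(n), 1.0]                      # variables (x2, alpha)
A_ub = np.c_[A, -np.ones(A.shape[0])]            # A x2 - alpha <= 0
b_ub = np.zeros(A.shape[0])
A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)     # components of x2 sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)]*n + [(None, None)])
print(res.x[:n], res.x[-1])                      # approx. (3/7, 3/7, 1/7), 9/7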

59

Game theory

10.3
where

Oligopoly game (single-product model)


Sk = {xk | xk 0, Lk xk 0} Lk
is capacity limit:

g k (xk ) =

xk Lk x k

k (x1 , . . . , xn ) = xk p(x1 + + xn ) Ck (xk )


The K-T condition-based optimum problem:

minimize
k=1

uk xk + uk (Lk xk )
n

(1)

(2)

(let

k = uk uk , k = uk
no sign constraint

(1)

(2)

(2)

=
k=1
Constraints:

(k xk + k Lk )
(1) (2)

uk , uk 0 0 x k Lk

k 0, k + k 0
k

xl + xk p k = p

xl Ck (xk ) + uk , uk xl + xk p

(1)

(2)

1 1

=0

xl Ck (xk ) ,

so problem is rewritten as follows.

minimize s. to

n k=1

(xk [p (

xl ) + xk p ( xl ) + xk p (

xl ) Ck (xk )] + k Lk ) xl ) Ck (xk )] 0

k 0, k [p ( 0 x k Lk

Example 10.3 n = 3,

Ck (xk ) = k x3 + xk k Lk = 1 p(s) = 2 2s s2

60

Game theory
3

maximize
k=1

(xk (2 2s s2 2xk 2xk s 3kx2 1) 1 2 3 ) k

s. to

0 xk 1, 1 , 2 , 3 0 k (2 2s s2 2xk 2xk s 3kx2 1) 0 (k = 1, 2, 3) k x1 + x2 + x3 = s

Optimal solution:

x = 0.1077 1 x = 0.0986 2 x = 0.0919 3

10.4
(i)

Special matrix games


... . . . A = . . . . . . ... 1 =
i j

x1 x2 =
i

(i) (j)

x1
1

(i) j

x2 =
1

(j)

(ii)

arbitrary strategy pair is equilibrium.

a1 0 . . . 0 0 a2 . . . 0 A=. . .. . . . . . 0 0 . . . an
Constraints:

(diagonal game)

ak x 2 ak x 1 x 1 , x2 0 x1 =
k k
Since

(k) (k)

(k)

(k)

since we selected

= ,

all feasible solutions are EP

(k)

x2 = 1 x2 > 0 x2
k k (k) (k) (k) (k)
with some

(k)

Case 1. ak > 0 for all k .

k,

> 0. x1 = 1,
(k)

Therefore

1=

ak

so equality everywhere

x1 = x2 =

, but ak

x1 = 1

(k)

so

P1

1 k ak

61

Game theory

Case 2. ak < 0 for all k , then


(ak )x2 (ak )x1 1=
k
and since

(k) (k)

x1 = 1,
(k)
so

x2
k

(k)

ak =

x1 = x2 =

(k)

(k)

k P1
1 k ak

, ak

x1 = 1,

(k)

we have

Case 3. ai 0 and aj 0 with some i and j .


ai x2 0 aj x1 = 0, ak x 2 0 ak x 1
Set

(i)

(j)

(k)

(k)

(all k)

uk

=0 0 =0 0

if if if

ak > 0 ak 0,

uk = u

vk

ak < 0 if ak 0,

vk = v

x1 = x2
(iii)

1 (v1 , . . . , vn ) , v 1 = (u1 , . . . , un ) u

AT = A symmetric matrix
minimize s. to

game

A x2 1 1T x2 = 1 x2 0

minimize s. to

Ax1 1 1T x1 = 1 x1 0

identical problems

x 1 , x2 X =
Since

set of optimal solutions we have

+ = 0

at optimum and

= ,

= = 0,

so the value of game

is zero and

X = x | x 0, 1T x = 1, A x 0

10.5

Applications
x0 Axb max cT x y0 AT y c min bT y

(i) Consider primaldual LP's

(primal) &

(dual)

62

Game theory

Lemma 10.1 Let x and y be feasible solutions of the primal and dual problems, respectively. Then cT x bT y. Proof.
c T x AT y
T

x = y T A x = y T (A x) y T b = bT y. x y are feasible solutions of the primal and c x = bT y , then x is an optimal solution of


T

Corollary.

(Duality theorem) If

and

dual problems, respectively, such that the primal problem and Construct next matrix

is an optimal solution of the dual problem.

0 A b 0 c P = AT T T c b 0

which generates a symmetric matrix game.

Theorem 10.3 Let z = (u, v, ) be an equilibrium of the symmetric matrix game


1 1 v and y = u are optimal solutions of the primal and dual problems. x=
such that > 0. Then

Proof.

If

is equilibrium, then

P z0 A v b 0 AT u + c 0 bT u c T v 0
(37) (38) (39)

Since

>0

and

z 0, x= 1 v0
and

y=

1 u 0,

furthermore

(37) A x b (38) AT y c (39) b y c x (40), (41) x, y


However from above lemma, are feasible for primal and dual.

(40) (41)

(42)

bT y c T x
we conclude

bT y = cT x.
The duality theorem implies optimality.

63

Game theory

Example 10.4 Consider LP:


maximize x1 + 2x2 s. to x1 0, x2 no sign constraint x1 + x2 1 5x1 + 7x2 25

Step 1:

Rewrite to primal form:

x2 = x+ x , 2 2 x2 if x2 0 x+ = 2 0 otherwise x = 2 0 x2
if

x2 0

otherwise

Objective: x1 + 2x2 = x1 + 2x+ 2x 2 2


cT = (1, 2, 2)

Constraints:
x1 + x+ x 1 2 2 + x1 x2 + x 1 2 5x1 + 7x+ 7x 25 2 2 A= P =
(ii) Consider a matrix game

/ (1)
(43) (44)

1 1 1 , 5 7 7

b=

1 25

0 0 1 1 1 1 0 0 5 7 7 25 1 5 0 0 0 1 1 7 0 0 0 2 0 0 0 2 1 7 1 25 1 2 2 0

A>0

(adding the same constant to all elements does not

change equilibrium), and a symmetric matrix game

0 A 1 P = AT 0 1 T T 1 1 0

Theorem 10.4 Matrix games A and P are equivalent:


(a) If z = (u, v, ) is EP of P , then with a =
1 , 2

x=
give equilibrium of A, and v =
a

1 1 u and y = v a a
is the value of the game;
64

Game theory (b) If (x, y) is equilibrium of A, then

z=
is equilibrium for P .

1 (x, y, v) 2+v

Proof.
(a) Let

z = (u, v, )

be an equilibrium of

P,

then

0 A AT 0 T 1 1T
That is,

P z 0 1 0 u v 0 1 0 0

A v 1 0 AT u + 1 0 1T u 1T v 0
First we show that If

0 < < 1.

= 1, then (since z is probability vector), u = 0 and v = 0 contradict 2nd inequality. T T If = 0, then 1 u + 1 v = 1, then by 3rd inequality v must have
component, so contradiction to the rst inequality. Next we show that

positive

1T u = 1T v.

From 1st and 2nd relations

uT A v uT 1 0 v T AT u + v T 1 0 v T 1 uT 1 0
Compare it to 3rd inequality to see Select

(+) adding them

/ v T 1 uT 1 0 v T 1 = uT 1.
so

a=

1 , then 2

v T 1 = u T 1 = a, x= 1 u a

and

y=

1 v a

are both probability vectors, furthermore

1 A T x = AT u 1 a a 1 Ay= Av 1 a a
So choose

(2nd inequality) (1st inequality)

and = , x1 = x and x2 = a a of the pair of LP's for matrix games with +

y , they satisfy the constraints = 0. So they are optimal

equilibrium of

A.

(b) Similar, omitted

Sucient to solve symmetric matrix games.

65

Game theory

10.6    Method of fictitious play

Consider a matrix game A.

Initial step, k = 1. Let x₁ be the initial strategy of player 1. If the basis vectors are e₁, e₂, . . . , then define j₁ such that

x₁ᵀ A e_{j₁} = min_j { x₁ᵀ A e_j }      (smallest element of the row vector x₁ᵀA)

and choose y₁ = e_{j₁}.

In steps k ≥ 2, let

ȳ_{k−1} = (1/(k−1)) Σ_{t=1}^{k−1} y_t

and x_k = e_{i_k} such that

e_{i_k}ᵀ A ȳ_{k−1} = max_i e_iᵀ A ȳ_{k−1}      (maximal element of the column vector A ȳ_{k−1}).

Let

x̄_k = (1/k) Σ_{t=1}^{k} x_t,

and select y_k = e_{j_k} such that

x̄_kᵀ A e_{j_k} = min_j x̄_kᵀ A e_j      (smallest element of the row vector x̄_kᵀA).

Then go back with the next value of k.

Theorem 10.5 Any limit point of the sequences {x̄_k} and {ȳ_k} gives equilibrium strategies.

Proof. Complicated, not presented.

Note. We got an iterative method for solving matrix games and so for solving LPs (project opportunity).
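The iteration is easy to code. A minimal sketch (not part of the original notes; NumPy assumed available), applied to the matrix of Example 10.2:

# A minimal sketch: fictitious play for a matrix game, following the
# averaging scheme described above.
import numpy as np

def fictitious_play(A, steps=5000):
    m, n = A.shape
    x_counts = np.zeros(m)                  # how often each row was played
    y_counts = np.zeros(n)                  # how often each column was played
    x_counts[0] = 1                         # x1 = e1
    y_counts[np.argmin(A[0])] = 1           # y1 = best pure reply to x1
    for _ in range(2, steps + 1):
        y_bar = y_counts / y_counts.sum()
        i = np.argmax(A @ y_bar)            # best pure reply of player 1
        x_counts[i] += 1
        x_bar = x_counts / x_counts.sum()
        j = np.argmin(x_bar @ A)            # best pure reply of player 2
        y_counts[j] += 1
    return x_counts / x_counts.sum(), y_counts / y_counts.sum()

A = np.array([[2, 1, 0], [2, 0, 3], [-1, 3, 3]])
print(fictitious_play(A))   # approaches (4/7, 4/21, 5/21) and (3/7, 3/7, 1/7)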

10.7

von Neumann's method


P.
element of

Consider a symmetric matrix game Dene:

ui : : : :

Rn R, R R, Rn R, Rn R,

ui (y) = eT P y (i = 1, 2, . . . , n) (ith i (u) = max {0, u} 0 (y) = n ui y 0 i=1 (y) = n 2 ui y 0 i=1 y y n y

P y)

Lemma 10.2 Proof.


n n 2

(y) =
i=1

2 (ui (y))
i=1

ui y

= 2 y

cross products are nonnegative


n n
1 2

1 2

(y) =
i=1

1 ui y

i=1

12
i=1

2 (ui (y))

n y .

CauchySchwarz

66

Game theory
Assume that at a strategy player 2 needs to increase component

y , (uj (y)) > 0. Then eT P y > 0, however eT P ej = 0, j j yj to 1 (since at y its payo is negative, and at ej it is zero),

so so

has to increase. This is represented by the rst term of the following ODE

system. The second term is used to guarantee that solution is always probability vector. Consider the system of ODE's:

yj (t) = uj (y(t)) y(t) yj (t) 0 yj (0) = yj


where

1jn

y0

is a probability vector.

Theorem 10.6 Let tk (k = 1, 2, . . . ) be a positive, strictly increasing sequence that converges to . Then any limit point of sequence y(tk ) is equilibrium strategy, and there exists a constant c > 0 such that n T ei P y(tk ) . c + tk
Several steps.

Proof.

(i) We show that

y(t), t 0

is always a probability vector. with some

Assume rst that Let

yj (t1 ) < 0

and

t1 > 0.

t0 = sup {t | 0 < t < t1 , yj (t) 0} .


By continuity, yj (t0 ) = 0 0, y(t) 0 so and for all

t0 < t1 , yj ( ) < 0.

By denition

(uj )

yj ( ) = uj (y( )) y( ) yj ( ) 0.
0
By Lagrange's mean-value theorem

<0

yj (t1 ) = yj (t0 ) + yj ( ) (t1 t0 ) 0


0
contradiction to

yj (t1 ) < 0

which was the assumption.

Next we show that for all Notice that

t 0,
n

n j=1

yj (t) = 1,
n

so

y(t)

is probability vector.

1
j=1

yj (t)

=
j=1

yj (t) =
ODE's
j=1

uj (y(t)) + y(t)
j=1 (y(t)) n

yj (t)

= y(t)
n j=1

1
j=1

yj (t) .

So function

f (t) = 1

yj (t)

satises the initialvalue problem

f(t) = y(t) f (t),


and the unique solution is

f (0) = 0,

f (t) 0.
67

Game theory
(ii) Assume

ui (y(t)) > 0

for some

t 0,

then

d ui (y(t)) dt

= =
ODEs

d ui y(t) = eT P y(t) = i dt
n n

pij yj (t)
j=1

pij uj (y(t))
j=1 j=1 (y(t))

pij y(t) yj (t)


P
j

pij yj (t)=(y(t))(ui )

Multiply both sides by

(ui )

and add for all

i,

i=1

d (ui ) (ui ) = dt

pij (ui )(uj ) y(t) y(t)


i=1 j=1 0

since

P T = P ,

the rst term is zero:

T P = T P
Thus

= T P T = T P T P = 0.

1 d y(t) = y(t) y(t) 2 dt (ui ) = 0, this equation remain valid, since zero terms are added to both t0 > 0, (y(t0 )) = 0.
Then from the ODE,

(Notice, if sides.)

(iii) Assume rst that with some for all

t t0

(since ODE has zero solution with initial value 0), so for

(y(t)) = 0 all i,

i y(t) 0 P y(t) 0 y(t)


(iv) If

is equilibrium stategy.

(y(t)) > 0

for all

t,

then from lemma

1d y(t) y(t) 2 dt
or

3 2

1d y(t) y(t) 2 dt
1

3 2

/
0

both sides

2 y(t) + c t
where

c = y(0)

1 2

Hence
1

2 y(t)
By lemma

1 . c+t

eT P y(t) ui (y(t)) (y(t)) i n n(y(t)) . c+t


68

Game theory
Taking an increasing sequence t1 y and all i eT P i

< t2 < t3 < . . .

with

tk ,

for any limit point

y 0 P y 0

is equilibrium strategy.

Example 10.5

2 1 A= 2 0 1 3 0 0 0 0 0 0 4 4 P = 3 2 2 5 1 1

4 3 2 0 3 (+2) 4 2 5 1 5 5 3 4 3 2 1 0 0 4 2 5 1 0 1 5 5 1 1 0 0 0 1 5 0 0 0 1 5 0 0 0 1 1 1 1 1 0

The ODE system is solved by 4th order Runge-Kutta method on [0, 100] with h = 0.01 :

y(100) = (u, v, ), a=

0.627163

1 0.1864185 2

x =

1 v (0.563619, 0.232359, 0.241988) a 1 y = u (0.485258, 0.361633, 0.115144) a v= 2 1.364274 a

The value of the game is

In Example 10.2 we determined the equilibrium:

x = y =

4 4 5 , , 7 21 21 3 3 1 , , 7 7 7
T

(0.571429, 0.190476, 0.238095)T , (0.428571, 0.428571, 0.142857)T , v=

and the true value of the game is

9 1.285714. 7 Comparing these true values to the results obtained by using the von Neumann method

we can see that the maximum error in the equilibrium components is about the error in the value of

0.0669,

and

is

0.07856.

This inaccuracy is probably due to the cumulative

error in using the RungaKutta method with a relatively large (h

Note.

= 0.01)

stepsize.

We got another method for solving matrix games and so for LPs

project

opportunity.
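As a numerical illustration, the following minimal sketch (not part of the original notes; NumPy assumed available; the matrix of Example 10.2 shifted by +2 is used, as in Example 10.5) integrates the von Neumann ODE system for the symmetric game P of Theorem 10.4 with the classical fourth order Runge-Kutta method:

# A minimal sketch: von Neumann's ODE method for the symmetric game P
# built from A (Theorem 10.4), integrated by classical RK4.
import numpy as np

A = np.array([[2, 1, 0], [2, 0, 3], [-1, 3, 3]]) + 2.0   # shift so that A > 0
m, n = A.shape
om, on = np.ones((m, 1)), np.ones((n, 1))
P = np.block([[np.zeros((m, m)), A, -om],
              [-A.T, np.zeros((n, n)), on],
              [om.T, -on.T, np.zeros((1, 1))]])

def rhs(y):
    phi = np.maximum(P @ y, 0.0)            # phi(u_i(y))
    return phi - phi.sum() * y              # y_j' = phi(u_j(y)) - psi(y) y_j

y = np.full(m + n + 1, 1.0 / (m + n + 1))   # start from a probability vector
h = 0.01
for _ in range(10000):                      # integrate on [0, 100]
    k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)

u, v, lam = y[:m], y[m:m + n], y[-1]
a = (1 - lam) / 2
print(v / a, u / a)       # approximate equilibrium strategies (cf. Example 10.5)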

Example 10.6 Matrix game A =

1 2 2 1

has no pure strategy equilibrium.


69

Game theory

Fictitous play application


k = 1, x1 = 1 , xT A = 1, 0 1 0 1 2 2 1 = 1, 2

minimum component is the rst, so

y1 = k = 2, y1 = Ay 1 =

1 . 0

1 1

1 0

= 1 0

1 0 = 1 2

1 2 2 1

maximum component is the second, so

x2 = xT A = 2 k = 3,
1 1 , 2 2

1 0 , x2 = 1 2 1 2 2 1 =
3 3 , 2 2

1 0 + 0 1

1 2 1 2

same values, select

y2 =

1 . 0

y2 =

1 2

1 1 + 0 0 1 2 2 1 1 0

1 0 1 2

Ay 2 =

maximum component is the second, so

x3 =

1 0 , x3 = 1 3 xT A = 3
1 2 , 3 3

1 0 0 + + 0 1 1 1 2 2 1

1 3 2 3

5 4 , 3 3

minimum component is the second,

y3 = k = 4, y3 =

0 . 1

1 3

1 1 0 + + 0 0 1 1 2 2 1
2 3 1 3

=
4 3 5 3

2 3 1 3

Ay 3 =

70

Game theory
maximum component is the second so

x4 =

1 0 , x4 = 1 4 xT A = 4

1 0 0 0 + + + 0 1 1 1 1 2 2 1

1 4 3 4

1 3 , 4 4

7 5 , 4 4

minimum component is the second so

y4 =
and so on for

0 , 1

k = 5, 6, . . .

Neumannmethod
0 A 1 T P = A 0 1 = 1T 1T 0
Five-dimensional problem. Remember that

0 0 1 2 1 2 1 1 0 0 1 2 0 0 1 2 1 0 0 1 1 1 1 1 0

ui (y) = eT P y = component i i (ui ) = max {0; ui } = (y) =


i

of

P y ui

larger of 0 and

(ui )

u1 (y) u2 (y) u3 (y) u4 (y) u5 (y)


So the

= = = = =

y3 + 2y4 y5 2y3 + y4 y5 y1 2y2 + y5 2y1 y2 + y5 y1 + y2 y3 y4

5-dimensional

system of dierential equations is:

y1 = max {0; y3 + 2y4 y5 } y1 max {0; y3 + 2y4 y5 } + max {0; 2y3 + y4 y5 } + max {0; y1 2y2 + y5 } + max {0; 2y1 y2 + y5 } + max {0; y1 + y2 y3 y4 } y2 = max {0; 2y3 + y4 y5 } y2 . . . y3 = max {0; y1 2y2 + y5 } y3 . . . y4 = max {0; 2y1 y2 + y5 } y4 . . . y5 = max {0; y1 + y2 y3 y4 } y5 . . .

71

Game theory

11

Uniqueness of Equilibrium
C1 (x1 ) = 0.5x1 , C2 (x2 ) = 0.5x2 , p(s) = 1.75 0.5s if 0 s 1.5 2.5 s if 1.5 s 2

Example 11.1 n = 2 oligopoly with S1 = S2 = [0, 1]

with s = x1 + x2 . So prot of player i is

i (x1 , x2 ) = xi p(x1 + x2 ) Ci (xi ).


It satises all conditions of NikaidoIsoda theorem, and i is strictly concave. In optimization, if objective function is strictly concave, maximal solution is unique. But, in games, there might be multiple EP. In this case set of equilibria:

X = {(x1 , x2 ) | 0.5 x1 1, 0.5 x2 1, x1 + x2 = 1.5} .

Another such example was shown earlier.

Something else is needed for uniqueness.

Consider game

the best reply mapping is

{n, S1 , . . . , Sn , 1 , . . . , n } and let best R(x) = (R1 (x), . . . , Rn (x)).

reply of player

be

Rk (x),

so

Lemma 11.1 Assume that the best reply mapping is point-to-point and either
(i) ρ(R(x), R(y)) < ρ(x, y) for all x ≠ y, x, y ∈ S₁ × ⋯ × Sₙ, or
(ii) ρ(R(x), R(y)) > ρ(x, y) for all x ≠ y, x, y ∈ S₁ × ⋯ × Sₙ.
Then the equilibrium cannot be multiple.
72

Game theory

Proof.

Assume that

and

are both equilibria, then

R(x ) = x

and

R(y ) = y ,

so

R(x ), R(y ) = (x , y )

Note.

contradicting to (i) and (ii). Condition (i) is much weaker than the assumption that

R is contraction.

Existence

is not implied by (i). Another way of guaranteeing uniqueness is based on monotinicity.

Equation f (x) = 0
(i) In one-dimension, if

f (x) strictly increases or strictly decreases then f (x) = 0 cannot f (x) = 0,


component-wise monotonic-

have multiple solutions. (ii) In multiple-dimension, more complicated, in ity is not sucient:

x+y =0 2x + 2y = 0

both components are strictly increasing in both variables, but

many solutions:

y = x

x y = 0 2x 2y = 0

both components are strictly decreasing in both variables, but

many solutions:

y = x

Fixed point problem x = f (x)


(i) In one-dimension if

f (x)

decreases, then multiple xed points are impossible.

(ii) In multiple dimension, more complicated:

x = x 2y y = 2x y

both components are strictly decreasing in both variables, but

many solutions:

y = x

Dierent kind of monotonicity is needed.

monotonic, if for all x and y D,

We already introduced monotonicity in

Rn ,

a function

f : D Rn (D Rn ) is called

(x y)T f (x) f (y) 0


and it is called

strictly monotonic if for all x = y D,


(x y)T f (x) f (y) > 0.

Consider rst equation

f (x) = 0.

Theorem 11.1 If f or f is strictly monotonic, then the solution cannot be multiple. Proof.
Assume

and

are both solutions, then

f (x ) = f (y ) = 0,

which contradicts

to strict monotonicity:

(x y)T f (x ) f (y ) = 0.
Consider next equation

x = f (x).
73

Game theory

Theorem 11.2 If f is monotonic, then solution cannot be multiple. Proof.


Let

and

be solutions, then

(x y )T f (x ) f (y ) = (x y )T x y > 0,
contradiction.

Corollary.
all

If the negative of the best reply mapping (R(x)) is monotonic, then the EP

cannot be multiple. This is true, even if

R(x)

is a set, then we have to assume that for

x, y S1 Sn , (x y)T (u v) 0 u R(x)
and

for all

v R(y).
T

R(y)

R(x)

<

We can also check uniqueness without determining the best response mapping. Consider next game (i)

{n, S1 , . . . , Sn , 1 , . . . , n },

and assume

Sk = {xk | xk Rmk , g k (xk ) 0} Sk k

is nonempty, each component of

and continuously dierentiable in an open set containing (ii) (iii) satises the Kuhn-Tucker regularity condition;

Sk ;

gk

is concave

is twice continuously dierentiable in an open set containing

S = S1 Sn .

Dene

r1

h(x, r) = rn
where

1 1 (x) . . , . n n (x)

xk .

If

r 0 and k k (x) is the M = m1 + + mk , then

gradient (now as column vector) of

with respect to

h : RM RM

with any xed

r.

74

Game theory

Denition 11.1 The game is called strictly diagonally concave on S , if for all x(0) =
x(1) , x(0) , x(1) S and with some r 0, (x(1) x(0) ) h(x(1) , r) h(x(0) , r) < 0.

Note.

This inequality means that

is strictly monotone, so we have the following

condition:

Lemma 11.2 Assume S is convex, for all k , k is twice continuously dierentiable. Then
the game is strictly diagonally concave, if

J(x, r) + J(x, r)T


is negative denite (where J is the Jacobian of h with respect to x) and the game is diagonally strictly concave. Then there is at most one equilibrium.

Theorem 11.3 (Theorem of Rosen) Assume conditions (i), (ii), (iii) are satised
x(0) = x1 , . . . , xn and x(1) = x1 , . . . , xn Kuhn-Tucker conditions for l = 0, 1,
Assume

Proof.
the

(0)

(0)

(1)

(1)

are both EP. Then from

uk g k x k
k k
That is,

(l)T

(l)

= 0 = 0

(45)

x(l) + uk
pk

(l)T

gk xk

(l)

(l) + k k x j=1

ukj gkj xk

(l)

(l)

= 0

here pk is the size of g k


For for

l = 0, l = 1,

multiply by multiply by

rk x k x k rk x k x k
T (0)

(1)

(0)

and and add for all

(1)

k,
T

0=

x(1) x(0)
n pk

h(x(0) , r) + x(0) x(1)


+

h x(1) , r

+
k=1 j=1

rk ukj

(0)

xk xk

(1)

(0)

gkj xk

(0)

(1) (0) gkj xk gkj xk

ukj

(1)

xk xk

(0)

(1)

gkj xk

(1)

(0) (1) gkj xk gkj xk


From (45),

ukj gkj xk
j n pk (0) (1)

(l)

(l)

= 0,

so

0>
k=1 j=1

rk ukj gkj xk

+ ukj gkj xk

(1)

(0)

0,

contradiction.

75

Game theory

Example 11.2 Duopoly with n = 2, S₁ = S₂ = [0, 1], C_k(x_k) = x_k, p(s) = 2 − s with s = x₁ + x₂. Then

φ₁(x₁, x₂) = x₁(2 − x₁ − x₂) − x₁ = −x₁² + x₁(1 − x₂)
φ₂(x₁, x₂) = x₂(2 − x₁ − x₂) − x₂ = −x₂² + x₂(1 − x₁)
∇₁φ₁(x₁, x₂) = −2x₁ + 1 − x₂
∇₂φ₂(x₁, x₂) = −2x₂ + 1 − x₁

So

h(x, r) = ( r₁(−2x₁ + 1 − x₂) )
          ( r₂(−2x₂ + 1 − x₁) ),

J(x, r) = ( −2r₁   −r₁  )        J(x, r) + J(x, r)ᵀ = ( −4r₁       −r₁ − r₂ )
          ( −r₂    −2r₂ ),                             ( −r₁ − r₂   −4r₂    ).

This is negative definite if its eigenvalues are negative. Characteristic polynomial:

det( −4r₁ − λ     −r₁ − r₂  )  =  λ² + (4r₁ + 4r₂)λ + 16r₁r₂ − (r₁ + r₂)².
   ( −r₁ − r₂     −4r₂ − λ  )

Select r₁ = r₂ = 1; then the characteristic polynomial is

λ² + 8λ + 12 = 0,    λ = (−8 ± √(64 − 48))/2 = −2 and −6,

so the game is diagonally strictly concave and the equilibrium is unique.
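The definiteness check can also be done numerically. A minimal sketch (not part of the original notes; NumPy assumed available):

# A minimal sketch: eigenvalues of J + J^T for Example 11.2 with r1 = r2 = 1.
import numpy as np

r1 = r2 = 1.0
J = np.array([[-2*r1, -r1],
              [-r2, -2*r2]])
print(np.linalg.eigvalsh(J + J.T))   # [-6, -2]: negative definite, unique EP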

12

Leaderfollower games
n = 2,
Player 1 = leader, Player 2 = follower Player 2 choses

R2 (x1 )

(follows player 1)

Player 1's payo

1 (x1 , R2 (x1 )) max

Theorem 12.1 Assume both 1 and R2 (x1 ) are continuous, S1 is nonempty, closed and
bounded. Then there is optimal solution.

Proof.

From Weierstrass' theorem.

In general:

n > 2,

Player 1 = leader,

76

Game theory
Players For each

2, . . . , n

= followers others form Nash equilibrium

x 1 S1 ,

(x2 = E2 (x1 ), . . . , xn = En (x1 ))


then payo of player 1:

max
s. to Optimal solution

1 (x1 , E2 (x1 ), . . . , En (x1 )) x 1 S1

x , 1

and for

k = 2, . . . , n, x = Ek (x ) 1 k

12.1

Application to duopolies
n = 2,
oligopoly with 2 rms Firm 1: home-rm with subsidy from its government Firm 2: foreign-rm

Price function:

p(x + y) = a b(x + y)
Prots of the rms:

1 = x(a bx by) (c s)x (s = subsidity 2 = y(a bx by) cy

for home rm)

1. Nash equilibrium
Assuming interior optimum,

1 = a bx by bx (c s) = 0 x ac+s a by c + s , y= 2x x= 2b b 2 = a bx by by c = 0 y a bx c y= 2b
Equating the two expressions

y=

a bx c ac+s = 2x (/ 2b) 2b b a bx c = 2a 2c + 2s 4bx 3bx = a c + 2s x = ac+2s 3b

so

y=

a c a c + 2s 2a 2c 2s acs = = 2b 6b 6b 3b y=
acs 3b

Here we assumed players move simultaneously.

77

Game theory

2. Stackelberg equilibrium
Player 1 is leader, player 2 follows whatever he/she does. y = abxc whatever x is, so prot of player 1: 2b

1 = x a bx

a bx c (c s)x 2 2a 2bx a + bx + c = x (c s)x 2 x = (a bx + c) (c s)x 2

Dierentiate:

1 a bx + c bx = (c s) = 0 x 2 2 a 2bx + c 2c + 2s = 0 x=
so from above

ac+2s 2b

y =

a c 2s a c a c + 2s = 2b 4b 4b y=
ac2s 4b

Dierent than Nash equilibrium,

is now larger and

is smaller as they should be.

3. Optimum subsidy
Welfare of home country

= 1 subsidy

With Stackelberg equilibrium:

1 subsidy =

a c + 2s a c 2s a c + 2s a c + 2s a c 2b 2 4 2b a c + 2s 4a 2a + 2c 4s a + c + 2s 4c = 2b 4 a c + 2s a c 2s = max 2b 4

2(a c 2s) 2(a c + 2s) = 0 a c 2s a + c 2s = 0 4s = 0 s = 0 , no


Stackelberg equilibrium without subsidy (with

subsidy

s = 0):

x =

a c + 2s ac = 2b 2b a c 2s ac y = = 4b 4b
78

Game theory
With Cournot equilibrium:

1 subsidy =

a c + 2s a c s a c + 2s a c + 2s a c 3b 3 3 3b a c + 2s 3a a + c 2s a + c + s 3c = 3b 3 acs a c + 2s max = 3b 3

2(a c s) (a c + 2s) 2a 2c 2s a + c 2s

= =

0 0 s=

ac 4

Nash equilibrium with this optimal subsidy:

ac+ x = 3b ac y = 3b

ac 2 ac 4

ac 2b 3a 3c ac = = 4 3b 4b

Two equilibria are the same, so optimal subsidy is equivalent to leader's advantage.
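The algebra above can be reproduced symbolically. A minimal sketch (not part of the original notes; SymPy assumed available):

# A minimal sketch: Cournot and Stackelberg outputs of Section 12.1 with SymPy.
import sympy as sp

a, b, c, s, x, y = sp.symbols('a b c s x y', positive=True)
pi1 = x*(a - b*(x + y)) - (c - s)*x        # home firm (receives subsidy s)
pi2 = y*(a - b*(x + y)) - c*y              # foreign firm

# Cournot (simultaneous-move) equilibrium
cournot = sp.solve([sp.diff(pi1, x), sp.diff(pi2, y)], [x, y])
print(sp.simplify(cournot[x]), sp.simplify(cournot[y]))
# (a - c + 2*s)/(3*b) and (a - c - s)/(3*b)

# Stackelberg: substitute the follower's reaction y = (a - b*x - c)/(2*b)
y_react = sp.solve(sp.diff(pi2, y), y)[0]
x_leader = sp.solve(sp.diff(pi1.subs(y, y_react), x), x)[0]
print(sp.simplify(x_leader))               # (a - c + 2*s)/(2*b)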

13

Games with incomplete information

Note. Incomplete information refers to amount of information the players have about the game, while imperfect information refers to the amount of information they have
on others' and their own previous moves (and on previous chance moves). Incomplete information results from

lack of information on physical outcomes lack of information on own or others' utility function (which are the payos) lack of information on strategy sets

Problems:

wrong physical outcome

wrong or uncertain payo

wrong or uncertain payo by assigning

(or large negative) payos for infeasible strategies, payo becomes

only uncertain all reduces to uncertain payos, so we will assume that strategy sets are known for

every player, only payos are uncertain.

Example 13.1 Modied prisoner's dilemma: DA's brother

Player 1 is the DA's brother, who will go free if none of the players confesses.
79

Game theory NC C NC (0, -2) (-10,-1) C (-1,-10) (-5,-5) Type I Probability = Assume that with probability this is the case (player 2 is of type I), but with probability 1 player 2 pays an emotional penalty or friends of his partner will take revenge (equivalent to 6 years in prison for giving up his partner), so payo table is NC C NC (0, -2) (-10,-7) C (-1,-10) (-5,-11) Type II Probability = 1 Extensive form:

Prisoner 2 now has four possible pure strategies: (i) C if Type I, C if Type II (ii) C if Type I, NC if Type II (iii) NC if Type I, C if Type II (iv) NC if Type I, NC if Type II
80

Game theory and player 1 still has his original two strategies: C and NC.
Next we give formal denition of such games:

Denition 13.1 (Bayesian game) Bayesian game, payo of player i (i = 1, 2, . . . , n),


i (si , si , i ) (i i is random variable)

where si Si , si = (s1 , . . . , si1 , si+1 , . . . , sn ), furthermore we assume that random variables 1 , . . . , n are selected by nature according to a joint distribution function F (1 , . . . , n ), and the actual value of i is known by only player i, unknown to all others. With normal form notation:

(n, S1 , . . . , Sn , 1 , . . . , n , , F )
with = 1 2 n and known F by all players.

Denition 13.2 (Pure strategy) Pure strategy in a Bayesian game for player i is a
decision rule si (i ) for all i i , where si : i Si .

Denition 13.3 (Expected payo) Player i's expected payo given a prole of pure
strategies (s1 (.), . . . , sn (.)) is given as

i (s1 (.), . . . , sn (.)) = E [i (s1 (1 ), . . . , sn (n ), i )]

Denition 13.4 (Pure strategy Bayesian equilibrium) Bayesian equilibrium is a


prole of decision rules (s (.), . . . , s (.)) such that 1 n

i s (.), s (.) i si (.), s (.) i i i


for all functions si (.).

Theorem 13.1 A decision prole (s (.), . . . , s (.)) is Bayesian equilibrium for all i 1 n
and i i accuring with positive probability,

Ei i s (i ), s (i ), i | i i i

Ei i si , s (i ), i | i i

for all si Si , where Ei is the conditional expectation with respect to i given i = i .

Example 13.2 (Continuation of DA's brother example)


In Type I, player 2 must play C (dominant strategy) in Type II, player 2 must play NC (dominant strategy) So and therefore

s (I) = C, 2

s (II) = N C 2

1 (C, s2 (.)) = (5) + (1 )(1) = 4 1 1 (N C, s2 (.)) = (10) + (1 )0 = 10.


81

So,

s₁* = C    if −4μ − 1 ≥ −10μ, that is, if μ ≥ 1/6,
s₁* = NC   if −4μ − 1 ≤ −10μ, that is, if μ ≤ 1/6.

In summary:

s₁* = C           if μ > 1/6,
s₁* = NC          if μ < 1/6,
s₁* = C and NC    if μ = 1/6.
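The cutoff μ = 1/6 can be seen directly by tabulating player 1's expected payoffs. A minimal sketch (not part of the original notes; NumPy assumed available):

# A minimal sketch: player 1's expected payoffs in the DA's brother game,
# with player 2 playing C if Type I and NC if Type II.
import numpy as np

for mu in np.linspace(0, 1, 7):
    pay_C = mu*(-5) + (1 - mu)*(-1)      # = -4*mu - 1
    pay_NC = mu*(-10) + (1 - mu)*0       # = -10*mu
    best = 'C' if pay_C > pay_NC else 'NC'
    print(f"mu={mu:.3f}  C:{pay_C:7.3f}  NC:{pay_NC:7.3f}  best: {best}")
# the comparison flips at mu = 1/6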

weak, a2 = strong; for player 2, b1 = weak, b2 = strong,). 4 possibilities for payo functions. Player 1's payo tables:

Example 13.3 2-player game, both can be in weak or strong position (for player 1, a1 =

y1 y2

z1 2 -1 (a1 , b1 ) z1 28 40 (a2 , b1 )

z2 5 20

y1 y2

z1 -24 0

z2 -36 24

(a1 , b2 ) z2 15 4 z1 12 2 z2 20 13

y1 y2

y1 y2

(a2 , b2 )

For player 2, 2 = 1 for all cases. Joint distribution:

a1 a2

b1 b2 r11 = 0.40 r12 = 0.10 0.50 r21 = 0.20 r22 = 0.30 0.50 0.60 0.40

Dene the following strategies for the players:

(i, j) = (pure strategy if weak, pure strategy if strong)


Payo table for player 1:

(1, 1) (1, 2) (2, 1) (2, 2)

(1, 1) (1, 2) (2, 1) (2, 2) 7.6 8.8 6.2 7.4 7.0 9.1 1.0 3.1 8.8 13.6 14.6 19.4 8.2 13.9 9.4 15.1

For example: for pair ((1, 2), (1, 1)) we have case strategy

0.4(2) + 0.1(24) + 0.2(40) + 0.3(2) = 7.0 (a1 , b1 ) (a1 , b2 ) (a2 , b1 ) (a2 , b2 ) (y1 , z1 ) (y1 , z1 ) (y2 , z1 ) (y2 , z1 )
82

Game theory or for pair ((1, 2), (2, 1)) we have case strategy

0.4(5) + 0.1(24) + 0.2(4) + 0.3(2) = 1.0 (a1 , b1 ) (a1 , b2 ) (a2 , b1 ) (a2 , b2 ) (y1 , z2 ) (y1 , z1 ) (y2 , z2 ) (y2 , z1 )

equilibrium is unique: (2, 1) and (1, 1) are equilibrium strategies.

14
14.1
then

Cooperative games
Characteristic functions

Denition 14.1 (Characteristic function) Let S {1, 2, . . . , n} = N be a coalition,


v(S) = max min
Meaning: maximal utility of coalition

iS,xi jS,xj

i (x1 , . . . , xn )
iS

(46)

without the cooperation of others or payo value

for coalition what they can get regardless what others are doing. Also

v() = 0

and

v({1, 2, . . . , n}) = max


i=1

i (x1 , . . . , xn ).

Denition 14.2 (Superadditive game) Game is superadditive, if S, T N and S


T = , then v(S T ) v(S) + v(T ).
By induction, (47)

v(S1 S2 Sk ) v(S1 ) + v(S2 ) + + v(Sk )


if

(48)

Si Sj =

for all

i = j.

Denition 14.3 (Monotone game) Game is monotone, if S T implies v(S) v(T ).


Convexity is dened by increasing rst dierences. Dene

di (S) =

v(S {i}) v(S) v(S) v(S {i})

if if

iS iS

Denition 14.4 (Convex game) Game is convex, if for each i N, S T implies


that

di (S) di (T ). S N, v(S) + v(N S) = v(N ).

(49)

Denition 14.5 (Constant sum game) Game is constant sum, if for all coalitions
(50)

83

Game theory

Denition 14.6 (Rational game) Game is rational, if


v(N)
iN

v({i}).

(51)

In case of =, game is inessential, if > then it is called essential.

Denition 14.7 (Weakly superadditive game) Game is weakly superadditive, if for


all S N,

v(N) v(S) +
iN S

v({i})

Note.

Superadditive

weakly superadditive (use (48)).

Denition 14.8 (Stragetically equivalent games) Games (N, v) and (N, v ) are
strategically equivalent, if there exist > 0 and 1 , . . . , n such that for all S N,

v(S) = v (S) +
iS

Denition 14.9 (Normalized game) A game is normalized, if


v({i}) = 0 for all i N
and

v(N) = 1.

Note.

Normalized

essential.

Denition 14.10 (Simple game) A superadditive normalized game is called simple, if


for all S N, v(S) = 0 or v(S) = 1. A coalition S is winning if v(S) = 1 and losing if v(S) = 0.

Theorem 14.1 If a function v is dened on all coalitions such that v() = 0 and is
superadditive, then there is an n-person game such that v is its characteristic function.
The central question is what is given to the players in case of cooperation. Payo vector

x = (x1 , . . . , xn )

gives amounts for each player such that

xi v(N)
i=1
(we cannot give out more than we might have). Here is not strategy.

xi

is payment to player

and it

dividually rational, if

Denition 14.11 (Individually rational payo vector) A payo vector is called inn

xi = v(N) and xi v({i}) for all i


i=1

(sometimes they are called imputations)


84

Game theory

Example 14.1 3-person oligopoly, 0 xk 3


p(s) = 10 s Ck (xk ) = xk + 1 (symmetric) k (x1 , x2 , x3 ) = xk (10 x1 x2 x3 ) (xk + 1)
Construction of v(S) : (i) S = {1} (or {2}, or {3})

v({1}) = max min x1 (10 x1 x2 x3 ) (x1 + 1)


x1 x2 ,x3

minimum occurs at x2 = x3 = 3
x1 (4x1 )(x1 +1)

x2 1

+ 3x1 1 max 2x1 + 3 = 0 3 x1 = (feasible) 2 5 9 9 v({1}) = v({2}) = v({3}) = + 1 = 4 2 4

(ii) S = {1, 2} (or {1, 3}, or {2, 3})

v({1, 2} = max min (x1 + x2 )(10 x1 x2 x3 ) (x1 + x2 + 2)


x1 ,x2 x3

minimum occurs at x3 = 3
(x1 +x2 )(7x1 x2 )(x1 +x2 +2)

u = x1 + x2 [0, 6] u(7 u) (u + 2) = u2 + 6u 2 2u + 6 = 0 u = 3 (feasible) v({1, 2}) = v({1, 3}) = v({2, 3}) = 9 + 18 2 = 7


(iii) S = {1, 2, 3}

v({1, 2, 3}) = max(x1 + x2 + x3 )(10 x1 x2 x3 ) (x1 + x2 + x3 + 3)

u = x1 + x2 + x3 [0, 9] u(10 u) u 3 = u2 + 9u 3 2u + 9 = 0 u = 4.5 (feasible) 81 81 81 + 162 12 69 + 3= = = 17.25 4 2 4 4


85

v ({1, 2, 3}) =

Game theory Imputations:

5 69 x = (x1 , x2 , x3 ) | xi (i), x1 + x2 + x3 = 4 4
Checking superadditivity:

v({1, 2}) v({1}) + v({2})


7
5 4 5 4

v({1, 2, 3}) v({1}) + v({2, 3})


69 4 5 4

this is sucient because of symmetry

Denition 14.12 (Dominating imputation) Imputation x dominates imputation y


on coalition S , if (i) xi > yi (i S) and (ii)
iS

xi v(S)

least 2 players. Notations: x

Denition 14.13 We simply say x dominates y if x dominates y on a coalition of at


S

y or x

Note.
(i) means all members of

are better o with

(x1 , . . . , xn )

(ii) coalition is able to provide

xi

to players

iS n:

Note.
|S| = 1 |S| = n

No dominance is possible if ,

|S| = 1

or

S = {i},

then

v(S) xi > yi v({i}) = v(S)

v(S)
i

xi >
i

yi = v(N) = v(S)

contradiction in both cases.

14.2
Set of

Core of game
x
vectors such that

x(S) =
iS n

xi v(S) xi = v(N)
i=1

for all

SN

x(N) =

86

Game theory

Example 14.2 Previous example:


5 x 1 , x2 , x3 4 x1 + x2 , x1 + x3 , x2 + x3 7 convex polyhedron x1 + x2 + x3 = 69 4

Theorem 14.2 For weakly superadditive games, the core is exactly the set of nondominated imputations.

14.3

Stable sets (or von NeumannMorgenstern solution)


V = set
of imputations such that

(i) there is no (ii) if

x, y V

such that

for some

S
S

(internal stability); with some

yV

then there exists

xV

such that

(external stability).

Problem:

No general description, no general method.

Not unique and not necessarily exists. n (Remember! Imputation: i=1 xi = v(N) and

xi v({i})

for all

i)

14.4
S.

The Shapleyvalue
to coalition

Note. di (S) = v(S) v(S {i}) is the marginal contribution of player i S


In this case we dene

di (s) = 0

if

i S. N
by entering one by one in a random order. is given as

Assume players form the grand coalition The average contribution of player

xi =
where

(s 1)!(n s)! di (S), n! SN

s = |S|, since there are (s 1)! orderings of other players already in S , and (n s)! ordering of the players outside S . xi = Shapley
value for player

Theorem 14.3 For any superadditive game, x = (xi ) is an imputation. Example 14.3 n = 2, coalitions , {1}, {2}, {1, 2}
x1 = (1 1)!(2 1)! (2 1)!(2 2)! v ({1}) v() + v({1, 2}) v({2}) 2! 2! v({1}) + v({1, 2}) v({2}) = 2

and similarly,

x2 =

(1 1)!(2 1)! (2 1)!(2 2)! v ({2}) v() + v({1, 2}) v({1}) 2! 2! v({2}) + v({1, 2}) v({1}) = 2
87

Game theory Notice that

x1 + x2 = v({1, 2}).

Example 14.4 n = 3, coalitions: , {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}
x1 = (1 1)!(3 1)! (2 1)!(3 2)! v ({1}) v () + v ({1, 2}) v ({2}) 3! 3! (2 1)!(3 2)! (3 1)!(3 3)! + v ({1, 3}) v ({3}) + v ({1, 2, 3}) v ({2, 3}) 3! 3! 1 2 v ({1}) + v ({1, 2}) v ({2}) + v ({1, 3}) v ({3}) + 2 v ({1, 2, 3}) 2 v ({2, 3}) = 6

Similarly

x2 = x3

1 2 v ({2}) + v ({2, 1}) v ({1}) + v ({2, 3}) v ({3}) + 2 v ({1, 2, 3}) 2 v ({1, 3}) 6 1 = 2 v ({3}) + v ({3, 1}) v ({1}) + v ({3, 2}) v ({2}) + 2 v ({1, 2, 3}) 2 v ({1, 2}) 6 x1 + x2 + x3 = 1 6 v ({1, 2, 3}) = v ({1, 2, 3}) 6

Notice that as it is always the case.

Example 14.5 In the case of symmetric games


xi = 1 v({1, 2, . . . , n}), n

since the players have to get equal payments. So in the case of the symmetric oligopoly of Examples 14.1 and 14.2, 23 x1 = x2 = x3 = . 4

Example 14.6 Simple game, where v(S) = 0 or v(S) = 1 for all coalitions. In this special case

x_i = Σ_S (s − 1)!(n − s)! / n!,

where the summation is over all pivotal coalitions S, that is, coalitions such that S is winning (v(S) = 1) and S∖{i} is losing (v(S∖{i}) = 0). So x_i is the average contribution when player i turns losing coalitions into winning ones; x_i is called the power index of player i.
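Shapley values (and hence power indices) are straightforward to compute from a characteristic function. A minimal sketch (not part of the original notes), using the marginal-contribution formula in the equivalent form where player i joins a coalition S not containing i:

# A minimal sketch: Shapley values from a characteristic function v.
from itertools import combinations
from math import factorial

def shapley(n, v):
    players = range(1, n + 1)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for s in range(len(others) + 1):
            for S in combinations(others, s):
                w = factorial(s) * factorial(n - s - 1) / factorial(n)
                total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# characteristic function of the symmetric oligopoly of Example 14.1
def v(S):
    return {0: 0.0, 1: 5/4, 2: 7.0, 3: 69/4}[len(S)]

print(shapley(3, v))      # each player gets 23/4, as noted in Example 14.5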

14.5

Social choice

Based on rankings of alternatives

88

Game theory

Practical methods are introduced in a forest treatment problem


Data: Water users Wildlife advocates Livestock producers Wood producers Environmentalists Managers Clear cut 1 4 1 3 4 4 Uniform thinning 2 2 2 4 3 2 Strip cut and thinning 3 1 3 2 2 3 4 3 4 1 1 1 Control

= criteria,

= alternatives,

aij = ranking

1. Plurality voting
f (aij ) = 1 0
if

aij = 1

otherwise

Aj =
i

f (aij ) = number

of times alternative

is the best

Aj = max{Aj } A1 = 2, A2 = 0, A3 = 1, A4 = 3
control is the social choice

2. Borda count
Bj =
i

aij

(total points),

Bj = min{Bj }

B1 = 17, B2 = 15, B3 = 14, B4 = 14


control & (strip cut and thinning) are best

3. Hare system (successive deletions)



if any alternative is considered the best by more than half of the players, then stop, it is the social choice delete the alternative that is considered the best by the least number of players, and adjust table as

anew = ij
if alternative

aij aij 1

if

aij < aij

otherwise

is deleted

go back to rst step

In previous example

A1 = 2, A2 = 0, A3 = 1, A4 = 3

Uniform thinning is deleted

89

Game theory
Clear cut 1 3 1 3 3 3 Strip cut and thinning 2 1 2 2 2 2 3 2 3 1 1 1 Control

A1 = 2

A3 = 1

A4 = 3
Clear cut 1 2 1 2 2 2

strip cut and thinning is deleted Control 2 1 2 1 1 1

A1 = 2 A4 = 4
Control is the social choice

4. Pairwise comparisons
N (j1 , j2 ) = No.
Then of players where

j1

is more preferred then

j2 .

j1

j2 N (j1 , j2 ) > N (j2 , j1 ) N (j1 , j2 ) 1 2 3 4 1 3 2 2 2 3 3 3 3 4 3 3 4 4 3 3

Preference graph:

Only conclusion: clear cut is eliminated.

90

Game theory

5. Dictatorship
Dictator = player

j ,

then

ai j = 1

decides on social choice (what is best for dictator)

Example 14.7

A1 A2 A3 A4 weights 1 2 3 4 2 2 1 3 4 1 1 2 4 3 1 total 8 4 3 2 1 2 3 4 1 2 1 4 3 1 2 1
Best

Voting: Borda: Hare: A2

3 1 2 2

= A1 = A3 A1 A3 A4 1 2 3 1 2 3 1 3 2 3 2 1 3 1 2 3 1 2 4 2 2

20 20 19 21 out

Best

2 1 1 2 1 1

Either If

A3

or

A4

can be eliminated.

A3

is eliminated, then new table

A1 A4 1 2 1 2 1 2 2 1 2 1 2 1 4 4
equally good, If

2 1 1 2 1 1

A1

and

A4

are solutions

A4

is eliminated, then new table

A 1 A3 1 2 1 2 1 2 2 1 2 1 2 1 4 4
equally good,

2 1 1 2 1 1

A1

and

A3

are solutions

91

Game theory

Pair-wise comparisons
A1 A1 A1 A2 A2 A3 A2 A3 A4 A3 A4 A4 2+1+1=4 2+1+1=4 2+1+1=4 2+1+1=4 2+1+1=4 2+1+1+1=5 1 2 3 4 1 4 4 4 2 4 4 4 3 4 4 5 4 4 4 3
   
5 1

#
2

"!

  '  
3 4

15

Conict resolution
quo point

Conict: (H, f ), where H R2 convex, closed, comprehensive (f f H f H ).


f = status

92

Game theory
Assume at the solution,

f1 f1 ,

and

f2 f2

(rational players). Assume therefore

that solution is chosen in set

H = {(f1 , f2 ) | (f1 , f2 ) H, f1 f1 , f2 f2 }
The Pareto frontier of

is assumed to be a function

f2 = g(f1 ), f1 f1 F1
which is strictly decreasing and concave. Assume there exists f1 f1 and f2 f2 .

(f1 , f2 ) H

such that

15.1

Bargaining as noncooperative game


1 (f1 , f2 ) = 2 (f1 , f2 ) = f1 f1 f2 f2
if

(f1 , f2 ) H (f1 , f2 ) H

otherwise if

otherwise

S1 = [f1 , F1 ] , S2 = [f2 , F2 ]

Theorem 15.1 A point is equilibrium Pareto in H .

15.2

Single-player decision problem


f2
in interval

Player 1 assumes that player 2 will select to maximize his own expected payo:

[f2 , g(f1 )]

uniformly, and wants

maximize f1 P (f2 g(f1 )) + f1 P (f2 > g(f1 )) = (f1 f1 ) 1P(f2 g(f1 ))


is maximal if and only if

g(f1 ) f2 + f1 g(f1 ) f2

(f1 f1 )(f2 f2 )
is maximal on the Pareto frontier. Product is called the

Nash product

maximize s. to

(f1 f1 )(f2 f2 ) f2 = g(f1 ) f1 f1 F1 f1 = f1 ,


and

Assume

is dierentiable, and since at

f1 = F1 ,

objective is zero, we

must have local optimum:

h(f1 ) = [(f1 f1 )(g(f1 ) f2 )] = g(f1 ) f2 + (f1 f1 )g (f1 ) = 0


strictly decreases, 2g + (f1 f1 )g < 0

h(f1 ) = g(f1 ) f2 > 0 h(F1 ) = (F1 f1 )g (F1 ) < 0

unique solution

93

Game theory

Example 15.1 Sharing a pie

g(f₁) = 1 − f₁,   f₁* = 0,   f₂* = 0

h(f₁) = 1 − f₁ − 0 + (f₁ − 0)(−1) = 1 − 2f₁ = 0,

so f₁ = 1/2 and f₂ = 1/2, the fair share.
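The same point is obtained by maximizing the Nash product numerically. A minimal sketch (not part of the original notes; SciPy assumed available):

# A minimal sketch: maximizing the Nash product on the frontier f2 = g(f1)
# for the pie-sharing conflict (status quo (0, 0), g(f1) = 1 - f1).
from scipy.optimize import minimize_scalar

g = lambda f1: 1.0 - f1
res = minimize_scalar(lambda f1: -(f1 - 0.0)*(g(f1) - 0.0),
                      bounds=(0.0, 1.0), method='bounded')
print(res.x, g(res.x))    # approximately (0.5, 0.5), the fair share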

15.3
Solution 1. 2. 3. 4.

Axiomatic bargaining
f = (H, f )
which satises certain axioms:

(H, f ) H (H, f ) f

(feasibility) (rationality) (Pareto optimality) (independence from unfavor-

f H, f (H, f ) f = (H, f ) H1 H, (H, f ) H1


able alternatives)

(H, f ) = (H1 , f )

5. Let

1 , 2 > 0, 1 , 2 f H

some constants,

= (1 f1 + 1 , 2 f2 + 2 ) = {(1 f1 + 1 , 2 f2 + 2 ) | (f1 , f2 ) H} .

Then

(H, f ) = (f 1 , f 2 ) (H , f ) = 1 f1 + 1 , 2 f2 + 2 (f1 , f2 ) H (f2 , f1 ) H) f1 = f2


where

(independence

from linear, monotone transformations)

6. (f1 = f2 and (symmetry)

(H, f ) = (f1 , f2 )

94

Game theory

Theorem 15.2 (Theorem of Nash) There is a unique solution, which maximizes the
Nash product on H :

maximize s. to

(f1 f1 )(f2 f2 ) f f f H c x

and its tangent line at x = a. Then this point is in the middle of the segment of the tangent line in the rst quartal.

Lemma 15.1 Consider hyperbola y =

Proof.
y=

c c , y = 2 x x

Targent line: y b = ac2 (x a)


x-intercept: y = 0, a2 b = cx ca x= y -intercept: x = 0, y =b+ a2 b + ac (ab)a + ac = = 2a c c ca ab a = b + 2 = 2b 2 a a

Idea of proof
A B Solution of the above optimization problem satises axioms. A solution that satises axioms is necessarily the optimal solution. Several steps: (i)

95

Game theory

By 3. and 6.,

(H, f ) =

1 1 , 2 2

optimal

solution

(ii) By 5., Same for any triangle (iii) By 4., for all cases

15.4

Nonsymmetric Nash solution


(0, 1)
such that

By dropping 6., for all solutions there exists

maximize s. to

(f1 f1 ) (f2 f2 )1 f f f H

(Harsnyi and Selten) Critics of Axiom 4,

96

Game theory

15.5

Area monotonic solution

Arc from status quo point divides region to equal areas

A1 =
f1

1 g(t)dt (f f1 ) (f2 + g(f )) 2


F1

increasing in

A2 =

1 (f f1 )(f2 + g(f )) + 2
f

g(t)dt (F1 f1 )f2

decreasing in

A1 = A2 = relative

or

A1 = A2

power of player 1 above player 2

97

Game theory
Solve equation

A2 A1
strictly decreasing
at at

=0

f = f1 , A1 = 0, A2 A1 > 0 f = F1 , A2 = 0, A2 A1 < 0

unique solution

15.6
found.

Equal sacrice solution

Both players decrease their maximum payos with equal speed until feasible solution is

F1 f = F2 g(f )
or

(F1 f ) = F2 g(f ) h(f ) = f g(f ) F1 + F2 = 0

strictly increasing
at at

f = f1 , g(f1 ) = F2 , h = f1 F1 < 0 f = F1 , g(F1 ) = f2 , h = f2 + F2 > 0

unique solution

15.7

KalaiSmorodinsky solution

Arc starts at status quo point and moves toward ideal point, then the last feasible point is accepted as the solution.

98

Game theory

g(f ) f2 =

F2 f2 (f f1 ) F1 f1 S=slope

h = Sf g(f ) +

f2

Sf1

= 0

strictly increasing
at at

f = f1 , h = f2 g(f1 ) < 0 f = F1 , h = SF1 Sf1 > 0

unique solution

Example 15.2 Consider conict with disagrement payo vector (0, 0) and feasible set
H = (x, y) | x, y 0, y 1 x2

1. Nash solution

99

Game theory

max
s.to Objective

(1 x2 0)(x 0) 0x1 x=0


and

x x3

is zero at endpoints

x = 1.

Derivative:

1 3x2 = 0 1 x2 = 3 x =

3 0.5774, 3

g(x) =

2 0.6667, 3

so

(0.5774, 0.6667)

2. Area monotonic solution

Total area:

(1 x2 )dx = x
0
so

x3 3

1 0

2 = , 3
1 u

1 u(1 u2 ) 1 = + (1 x2 )dx 3 2 u 3 uu 2 u3 + u+ 2 3 3 3u 3u3 + 4 6u + 2u3 f = u3 + 3u 2

= = = =

u u3 x3 + x 2 3 1 3 2 0, f = 3u2 + 3

u0 = 1,

Newton's method:

u f f 1 2 6 0.6667 0.2964 4.3335 0.5983 0.0091 4.0739 0.5961 0.0001 4.0660


100

Game theory

(0.60, 0.64)

3. Equal sacrice solution

1 (1 x2 ) = 1 x x2 + x 1 = 0 1 + 5 1 + 1 + 4 x = = 0.62 2 2 1 x2 = 0.62 (0.62, 0.62)

4. Kalai-Smorodinsky solution
x = 1 x2 x2 + x 1 = 0, same
as above
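The solutions of this example can also be recomputed numerically. A minimal sketch (not part of the original notes; SciPy assumed available):

# A minimal sketch: Nash, equal sacrifice and Kalai-Smorodinsky solutions for
# the conflict with status quo (0, 0) and frontier y = 1 - x^2.
from scipy.optimize import minimize_scalar, brentq

g = lambda x: 1.0 - x**2

# Nash solution: maximize the Nash product x * g(x) on [0, 1]
nash = minimize_scalar(lambda x: -x*g(x), bounds=(0.0, 1.0), method='bounded').x
print(nash, g(nash))      # approx. (0.5774, 0.6667)

# Equal sacrifice and Kalai-Smorodinsky: both reduce here to x = g(x)
x_es = brentq(lambda x: x - g(x), 0.0, 1.0)
print(x_es, g(x_es))      # approx. (0.618, 0.618)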

16

Multiobjective optimization, concepts and methods


maximize s. to f (x) xX
are on the real line

In the case of one objective

All values of

f (x)

when

runs through

Maximal solution has properties: (i) maximal solution is at least as good as any other solution

101

Game theory
(ii) there is no better solution (iii) all maximal solutions are equivalent, i.e., they have the same objective values In the case of several objectives no optimal solution exists

(1) (2) In the case of one objective, any two decisions x and x can be compared since (1) (2) (1) (2) (1) (2) either f (x ) > f (x ), or f (x ) = f (x ), or f (x ) < f (x ).
In the case of multiple objectives, this is not true: for example

1 2
cannot be compared.

and

2 1

Denition 16.1 A solution x X is weakly nondominated, if there is no x X such


that

fi (x) > fi (x ) for all i

102

Game theory

Denition 16.2 A solution x X is (strongly) nondominated, if there is no x X


such that

fi (x) fi (x )

for all i, and with strict inequality for at least one i. I.e., no objective can be improved without worthening another one.

Denition 16.3 (Payo set)


H = f | such that there exists x X with fi = fi (x), i .

Example 16.1
maximize s. to x 1 + x 2 , x1 x 2 x 1 , x2 0 x1 + 2x2 2

f1 = x1 + x2 f2 = x1 x2 (+) x1 = f1 +f2 2
Constraints:

() x2 =

f1 f2 2

f1 + f2 2 f1 f2 2 f1 + f2 f1 f2 +2 2 2 f1 + f2 + 2f1 2f2

0 f2 f1 0 f2 f1 2 4 f2 3f1 4

103

Game theory

Denition 16.4 A point f H is weakly nondominated, if there is no f H such that


fi > fi for all i.

Denition 16.5 A point f H is (strongly) nondominated, if there is no f H such


that fi fi for all i, with strict inequality for at least one i.

Example 16.2 In previous example (2, 2) is weakly and strongly nondominated. Note.
If

is strongly nondominated, then it is also weakly nondominated.

16.1

Existence of nondominated solutions


maximize s. to x 1 , x2 x 1 , x2 0

Example 16.3 No nondominated solution exists:

104

Game theory

Example 16.4 There is a unique nondominated solution (earlier example) Example 16.5 There are multiple nondominated solutions

In Example 16.3,

was not bounded.

Even if

is bounded there might be no

nondominated solution:

Example 16.6
maximize s. to x 1 , x2 x2 + x2 < 1 1 2

105

Game theory

Lemma 16.1 Let f H be an interior point of H . Then f must not be nondominated. Proof.
Take

>0

very small, then

f +1H

but

f + 1 > f.

Theorem 16.1 Assume H is nonempty, closed, and for all i


Fi = sup{fi | f H} < .
Then there is at least one nondominated solution.

Proof.

Consider:

maximize subject to
(i) The set

g(f ) = f1 + f2 + + fI f H.

(52)

H( ) = f | f H, g(f )
is bounded:

fi Fi
and

f1 + f2 + + fI fi
(ii) Let


j=i

j=i

fj

Fj

be a feasible solution, then (52) is equivalent to

maximize s. to

g(f ) f H g(f ) g(f )

where new feasible set is compact (closed and bounded) that is obviously strongly nondominated.

there is optimal solution,

Note.

All conditions are essential, as earlier examples 16.3, 16.6 show. Assume

Corollary 1.
continuous

is nonempty, closed, bounded and all objectives functions

fi

are

there exists nondominated solution. Assume

Corollary 2.

consists of nitely many points

there is at least one

nondominated solution.

106

Game theory

Example 16.7
alternatives

objectives 2 3 2 1 4 3 3 3 2 4 2 5

First alternative is dominated:

(2, 3, 2) (3, 3, 2),


all others are nondominated.

16.2
Given:

Method of sequential optimization


order of objectives (I

= number (f1

of objectives):

f2

fI )

Step 1.
max f1 (x) s.to x X
optimum

f1

Step 2.

max f2 (x) s.to x X optimum f2 f1 (x) = f1

. . . Step k.
max fk (x) s.to x X f1 (x) = f1
. . .

fk1 (x) = fk1


Process terminates if either a unique optimal solution is found in any of the steps, or we proceeded

steps

107

Game theory

Theorem 16.2 The solution is always nondominated. (Trivial) Example 16.8


maximize s. to x 1 + x 2 , x1 x 2 x 1 , x2 0 3x1 + x2 4 x1 + 3x2 4

108

Game theory

f1

f2 (1, 1) is solution; f2

f1

4 ,0 3

is solution.

Easier with pay-o set: f1 = x1 + x2 f2 = x1 x2 (+) x1 = f1 +f2 () x2 = 2 Constraints:

f1 f2 2

f1 + f2 2 f1 f2 2 f1 + f2 f1 f2 + 3 2 2 3f1 + 3f2 + f1 f2 4f1 + 2f2 f1 + f2 f1 f2 +3 2 2 f1 + f2 + 3f1 3f2 4f1 2f2

0 f2 f1 0 f2 f1 4 8 8 2f1 + f2 4 4 8 8 2f1 f2 4

109

Game theory

f1 f2

f2 (2, 0) x1 = 1, x2 = 1 f1
4 4 , 3 3

4 x 1 = , x2 = 0 3

Example 16.9

No point on the nondominated curve can be obtained as solution the nondominated solutions by the method selection.

we lose most of

Example 16.10
alternatives

objectives 2 3 2 1 4 3 3 3 2 4 2 5

f1
alternative 4 is selected

f2

f3

Problem:

If there is a unique solution in an earlier step, the later objectives are not

considered at all.

Modication.

Relaxing optimality conditions in steps:

110

Game theory

Step k :
maximize s. to fk (x) xX f1 (x) f1 1
. . .

fk1 (x) fk1 k1

Theorem 16.3 Solution is weakly nondominated, and if solution in step I is unique, then it is strongly nondominated.

16.3
Given:

-constraint method
most important objective and minimal acceptable lower bounds for all other

objectives.

maximize s. to

f1 (x) xX f2 (x) 2
. . .

(53)

fI (x) I

111

Game theory

Theorem 16.4 Solution is always weakly nondominated. Proof.


fi () > fi (x ) (i) where x is the solution. x Then x satises all constraints and gives better value for f1 . Contradicts to the optimality of x .
If not, there is an

xX

such that

Example 16.11 Solution is not necessarily strongly nondominated

can be selected so that x is an optimal solution for problem (53).

Theorem 16.5 Let x be a strongly nondominated solution. Then the bounds 2 , . . . , I

Proof. Take i = fi (x ) for i 2. Note. By method selection we do not lose (strongly) nondominated solutions.
112

Game theory

Example 16.12
maximize s. to x 1 + x 2 , x1 x 2 x 1 , x2 0 3x1 + x2 4 x1 + 3x2 4

Assume f1

f2 , 2 = 1 maximize s. to x1 + x2 x 1 , x2 0 3x1 + x2 4 x1 + 3x2 4 x1 x2 1 x2 x1 1

Solution is intercept of two lines: x2 = x1 1 3x1 + x2 = 4 3x1 + x1 1 = 4 4x1 = 5 x1 = 5 4

x2 =

1 4

With payo set it is easier:


113

Game theory

2f1 + f2 = 4 f2 = 1 3 f1 = 2 3 +1 f1 + f2 = 2 = 5 x1 = 4 2 2 3 1 f1 f2 x2 = = 2 = 1 4 2 2

Example 16.13
alternatives

objectives 2 3 2 1 4 3 3 3 2 4 2 5

Assume: f1 is most important, 2 = 3, 3 = 2. Alternative 4 is infeasible, and alternative 3 is the choice.

16.4

Weighting method
positive weights,

Given: c1 , c2 , . . . , cI

I i=1 ci

= 1,

relative importances of objectives.

114

Game theory

maximize s. to

c1 f1 (x) + + cI fI (x) xX

(54)

Theorem 16.6 Solution is always strongly nondominated. Proof.


If not, there is an

xX

such that

fi (x) fi (x ) fi (x )
i

for all

with strict inequality

for at least one

i.

Then

fi (x) >
i
contradicting to the optimality of

Example 16.14 By the method selection we might lose nondominated solutions:

Theorem 16.7 Assume H is convex and x is strongly nondominated Then there are
nonnegative weights such that x is optimal solution for problem (54).
115

Game theory

Proof.

Let

H = g | there

exists

f H

such that

gf H.
Let

which is also convex, and has the same nondominated points as is nondominated existence of an boundary point. vector

f = f (x ), which

Theorem of separating hyperplanes implies the such that for all

I -dimensional

c = (ci )

cT (f f ) 0

f H .

We next show that c 0. Assume next, ci < 0 f = f ei H ei = (0, . . . , 0, 1, 0, . . . , 0)T and

for some

i.

Then for arbitrary

> 0,

cT (f f ) = cT ei = ci < 0,
contradiction. Notice:

H H , f H,

and for all

f H , cT f cT f

vector

is an optimal solution of (54).

Example 16.15 We do not have always positive weights.

116

Game theory

Questions:
1. How can we guarantee that

is convex?

2. When do positive weights exist?

For Question 1: Theorem 16.8 Assume X is convex and all fi are linear. Then H is convex. Proof. f 1 , f 2 H
there exist

x 1 , x2 X
and

such that

f 1 = f (x1 )
Then for

f 2 = f (x2 ).

[0, 1], f 1 + (1 )f 2 = f (x1 ) + (1 )f (x2 )


X
(f is linear)

= f x1 + (1 )x2 H

Example 16.16 If X is convex, and all fi are concave then H still might be nonconvex:
X = [0, 1], f1 (x) = x1 , f2 (x) = x2 1

Theorem 16.9 Assume that X is convex and for all x X and g f (x), there exists
an x X such that g = f (x) (that is, H is comprehensive). Assume furthermore that all fi are concave. Then H is convex.
there exist

Proof. f 1 , f 2 H

x 1 , x2 X
and

such that

f 1 = f (x1 )

f 2 = f (x2 ).
117

Game theory
Then for

[0, 1], f 1 + (1 )f 2 = f (x1 ) + (1 )f (x2 )


X

(concavity of

fi 's)

f x1 + (1 )x2 H,
so from assumption, there exists

xX

such that

1 f 1 + (1 )f 2 = f (x).

For Question 2: Theorem 16.10 If X is a polyhedron, all fi are linear, and x is a strongly nondominated
solution, then there are positive weights ci such that x is optimal solution of (54).
Uses duality theorem twice, complicated.

Proof.

Example 16.17
maximize s. to x 1 + x 2 , x1 x 2 x 1 , x2 0 3x1 + x2 4 x1 + 3x2 4

118

Game theory Assume: c1 = c2 =


1 2

c1 (x1 + x2 ) + c2 (x1 x2 ) = x1
So

maximize s. to

x1 x 1 , x2 0 3x1 + x2 4 x1 + 3x2 4

Diculties:
I

Composite objective

ci fi (x)
i=1

has no meaning

Solution changes if we change units of objectives

Solution for this problem: normalizing objectives to satisfaction levels. If a utility function is given for objective

i,

then

Ui (fi ) = satisfaction

level with value

fi

or by linear transformation into unit interval

[0, 1].

Mi = max {fi (x) | x X} mi = min {fi (x) | x X} fi (x) mi [0, 1] Mi m i

f i (x) =
I

without units

ci f i (x) means overall satisfaction level on the scale between worst and best case i=1 scenarios.

Example 16.18
maximize s. to x 1 + x 2 , x1 x 2 x 1 , x2 0 3x1 + x2 4 x1 + 3x2 4

We saw earlier that (from payo set)

so

4 4 M1 = 2, m1 = 0, M2 = , m2 = 3 3

119

Game theory

x1 + x2 0 1 = (x1 + x2 ) 20 2 4 x1 x2 + 3 3 1 f 2 (x) = = (x1 x2 ) + 4 4 8 2 ( 3 ) 3 f 1 (x) =


Assume c1 = c2 = 1 , then composite objective: 2

1 3 1 7x1 + x2 (x1 + x2 ) + (x1 x2 ) + 4 16 4 16

7x1 + x2 x2 7x1 + x2 x2
Optimal solution 4 x1 = 3 , x2 = 0

= = = =

0 7x1 1 7x1 + 1

Example 16.19 c1 = 1, c2 = 3, c3 = 2
alternatives objectives 2 3 2 1 4 3 3 3 2 4 2 5

ci fi 2 + 9 + 4 = 15 1 + 12 + 6 = 19 3 + 9 + 4 = 16 4 + 6 + 10 = 20

Alternative 4 is selected When objectives are normalized:

120

Game theory

i m i Mi 1 1 4 2 2 4 3 2 5 c1 = 1, c2 = 3, c3 = 2
objectives ci f i 1 1 1 0 3 + 3 + 0 = 11 3 2 2 6 1 0 1 3 0 + 3 + 2 = 11 3 3 1 2 0 2 + 3 + 0 = 13 3 2 3 2 6 1 0 1 1+0+2=3 alternatives Alternative 2 is selected.

16.5
Given:

Distancebased methods
Ideal point (computed or subjectively given) and a distance function of

I-

dimensional vectors

minimize
s. to

f , f (x) xX
or (55)

minimize
s. to

f ,f f H

Distance types between u = (ui ) and v = (vi ).


(u, v)

= max{ci |ui vi |}
i

(weighted l -distance)

121

Game theory

I 1 (u, v)

=
i=1

ci |ui vi |

(weighted l1 -distance)

I 2 (u, v)

1 2

=
i=1

ci |ui vi |

(weighted l2 -distance)

I G (u, v)

=
i=1

|ui vi |ci

(weighted geometric distance)

122

Game theory

Distance axioms may be violated. Let 1. 3.

I = 2, c1 = c2 = 1.

Then

((2, 3), (2, 2)) = 0

even if

(2, 3) = (2, 2)

((1, 1), (1, 3)) + ((1, 3), (3, 3)) < ((1, 1), (3, 3)) 0 + 0 < |3 1| |3 1| 0 < 4 contradiction

Assume ideal point satises:

fi max {fi (x) | x X}

(i)

Theorem 16.11 The optimal solution of (55) is always weakly nondominated, and if
optimal solution is unique then it is strongly nondominated.

Proof.

Obvious.

Example 16.20
maximize s. to x 1 , x2 x 1 , x2 0 3x1 + x2 4 x1 + 3x2 4

So f1 = x1 and f2 = x2

123

Game theory Ideal point:


4 4 , 3 3

c1 c1 c1 c1

= c2 = c2 = c2 = c2

= = = =

1 , 2 1 , 2 1 , 2 1 , 2

= = = =

1 2 G

} in all
0, 4 3

(1,1) is optimal solution or


4 ,0 3

with

=0

Theorem 16.12 Assume that for all i


fi > max {fi (x) | x X}
and ci > 0. Then all solutions by the geometric distance are strongly nondominated.

Proof.

Obvious. compromise programming, when all objectives are normalized

Special case:

f i (x) =

fi (x) min{fi (x)} [0, 1]. max{fi (x)} min{fi (x)}

Example 16.21

alternatives    objectives
     1          2   3   2
     2          1   4   3
     3          3   3   2
     4          4   2   5

Select c₁ = c₂ = 1, c₃ = 2, f* = (5, 5, 7) and ρ = ρ₁.

For alternative 1:  ρ₁ = |2 − 5| + |3 − 5| + 2|2 − 7| = 15
For alternative 2:  ρ₁ = |1 − 5| + |4 − 5| + 2|3 − 7| = 13
For alternative 3:  ρ₁ = |3 − 5| + |3 − 5| + 2|2 − 7| = 14
For alternative 4:  ρ₁ = |4 − 5| + |2 − 5| + 2|5 − 7| = 8

Alternative 4 is the choice.
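The weighted l₁-distances are immediate to compute. A minimal sketch (not part of the original notes; NumPy assumed available):

# A minimal sketch: the weighted l1-distances of Example 16.21.
import numpy as np

F = np.array([[2, 3, 2],       # objective values of the four alternatives
              [1, 4, 3],
              [3, 3, 2],
              [4, 2, 5]])
c = np.array([1, 1, 2])        # weights
f_star = np.array([5, 5, 7])   # ideal point

print((c * np.abs(F - f_star)).sum(axis=1))   # [15 13 14 8], alternative 4 wins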

Example 16.22 Best design, 5 alternatives to choose from:

Alternative   Cost saving   Reliability above 95%   Resiliency %   −Vulnerability (scale)
1             1.4·10⁶       2.4                     87             34
2             1.8·10⁶       2.2                     76             41
3             2.0·10⁶       1.9                     91             29
4             2.1·10⁶       1.6                     89             40
5             2.1·10⁶       1.5                     77             44

where

reliability: P(the system works until a given time period T);

resiliency: how quickly a system is likely to recover or bounce back from failure once a failure has occurred, P(works at time period t + 1 | failure at time period t);

vulnerability: for each failure event X let s(X) denote a numerical indicator of how severe the consequence of the failure is, and p(X) the probability that it occurs. Then

v = Σ_{X ∈ all failure modes} s(X) p(X).

Worst case, if all objectives are at their minimal values:

f* = (1.4·10⁶, 1.5, 76, 29).

The geometric distance from this worst point, with equal c_i's, is maximized:

A1: 0 · 0.9 · 11 · 5 = 0
A2: (0.4·10⁶) · 0.7 · 0 · 12 = 0
A3: (0.6·10⁶) · 0.4 · 15 · 0 = 0
A4: (0.7·10⁶) · 0.1 · 13 · 11 = 1.001·10⁷
A5: (0.7·10⁶) · 0 · 1 · 15 = 0

A4 is selected.

Note. This is the same as Nash's bargaining solution, and it illustrates the other way of using distances: finding the feasible point with maximal distance from the ideally worst point (nadir).
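The nadir-based (Nash-product) computation of Example 16.22 can be checked with a few lines of Python; the rows below are the table above and equal weights c_i = 1 are assumed:

```python
from math import prod

# Example 16.22: maximize the geometric distance from the nadir point.
data = [
    (1.4e6, 2.4, 87, 34),   # A1
    (1.8e6, 2.2, 76, 41),   # A2
    (2.0e6, 1.9, 91, 29),   # A3
    (2.1e6, 1.6, 89, 40),   # A4
    (2.1e6, 1.5, 77, 44),   # A5
]
nadir = tuple(min(col) for col in zip(*data))   # (1.4e6, 1.5, 76, 29)

products = [prod(x - n for x, n in zip(row, nadir)) for row in data]
print(products)                            # only A4 is nonzero: about 1.001e7
print(products.index(max(products)) + 1)   # alternative 4 is selected
```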

16.6  Direction-based methods

Given: a status quo point (worst case scenario or current situation) and a vector of improvement.

Starting from the status quo point we improve the objectives in the given direction as much as we can (similar to the Kalai–Smorodinsky solution), or: we relax the objectives in the given direction until a feasible solution is obtained.

17  Dynamic games

17.1  Cournot oligopoly

N = number of firms;
x_k = output of firm k, k = 1, ..., N;
p(Σ_{l=1}^N x_l) = unit price of the product;
C_k(x_k) = cost of firm k, k = 1, ..., N;
set of strategies of firm k: X_k = [0, L_k], where L_k = capacity of firm k, k = 1, ..., N.

The profit of firm k is

π_k(x1, ..., xN) = x_k p(Σ_{l=1}^N x_l) − C_k(x_k).

17.2  Best response dynamics

We already saw that the best response of firm k is

R_k(s_k) = 0     if p(s_k) − C′_k(0) ≤ 0,
R_k(s_k) = L_k   if p(L_k + s_k) + L_k p′(L_k + s_k) − C′_k(L_k) ≥ 0,
R_k(s_k) = z_k   otherwise,

with s_k = Σ_{l≠k} x_l, where z_k is the unique solution of the equation

p(x_k + s_k) + x_k p′(x_k + s_k) − C′_k(x_k) = 0.

We also saw that under the assumptions

(i) p′ < 0;  (ii) p′ + x_k p″ ≤ 0;  (iii) p′ − C″_k < 0

the best response functions are differentiable (except at the boundary points between the three cases) and

−1 < R′_k(s_k) ≤ 0.

In the case of discrete time scales the dynamic model is

x_k(t + 1) = x_k(t) + K_k [ R_k( Σ_{l≠k} x_l(t) ) − x_k(t) ]   (1 ≤ k ≤ N),   (56)

where 0 < K_k ≤ 1 is the speed of adjustment of firm k.

In the case of continuous time scales the model is

ẋ_k(t) = K_k [ R_k( Σ_{l≠k} x_l(t) ) − x_k(t) ]   (1 ≤ k ≤ N),   (57)

where K_k > 0 is the speed of adjustment of firm k.

In both cases firm k moves in the direction of its best response.
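The discrete process (56) is easy to simulate directly. The sketch below uses the linear model p(s) = A − Bs, C_k(x_k) = c_k x_k + d_k that is analysed later in this section; all parameter values are illustrative only and are not taken from the notes.

```python
# Discrete best-response dynamics (56) for a linear Cournot oligopoly.
# Illustrative parameters, not taken from the notes.
A, B = 10.0, 1.0
c = [1.0, 1.5, 2.0]          # marginal costs
L = [10.0, 10.0, 10.0]       # capacities
K = [0.8, 0.8, 0.8]          # adjustment speeds, 0 < K_k <= 1
N = len(c)

def best_response(k, x):
    s_k = sum(x) - x[k]                    # output of the rivals
    z_k = -s_k / 2 + (A - c[k]) / (2 * B)  # interior best response
    return min(max(z_k, 0.0), L[k])        # clipping gives the corner cases

x = [1.0, 2.0, 3.0]          # arbitrary initial outputs
for t in range(50):
    x = [x[k] + K[k] * (best_response(k, x) - x[k]) for k in range(N)]

print([round(v, 4) for v in x])
# For these parameters the process converges to the interior Cournot
# equilibrium x_k = (A - (N+1)c_k + sum(c)) / ((N+1)B):
equil = [(A - (N + 1) * c[k] + sum(c)) / ((N + 1) * B) for k in range(N)]
print([round(v, 4) for v in equil])
```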

Before analysing stability, a matrix theoretical result is shown.

Lemma 17.1 If a, b ∈ R^N, then det(I + a b^T) = 1 + b^T a.

Proof. By induction. If N = 1, then det(1 + ab) = 1 + ab = 1 + ba.

For general N write

D_N = det(I + a b^T) = det
[ 1 + a1 b1    a1 b2     ...   a1 bN
  a2 b1       1 + a2 b2  ...   a2 bN
  ...
  aN b1       aN b2      ...   1 + aN bN ].

Subtract the (aN/aN−1)-multiple of row N − 1 from row N, then the (aN−1/aN−2)-multiple of row N − 2 from row N − 1, and so on, finally the (a2/a1)-multiple of row 1 from row 2. The determinant does not change, and for k = 2, ..., N row k becomes (0, ..., 0, −a_k/a_{k−1}, 1, 0, ..., 0):

D_N = det
[ 1 + a1 b1   a1 b2   ...   a1 bN−1      a1 bN
  −a2/a1      1       ...   0            0
  ...
  0           0       ...   −aN/aN−1     1 ].

Expanding with respect to the last row (or last column) gives

D_N = D_{N−1} + aN bN,

where D_{N−1} is the same type of determinant of order N − 1. By the induction hypothesis D_{N−1} = 1 + Σ_{k=1}^{N−1} a_k b_k, therefore

D_N = 1 + Σ_{k=1}^{N} a_k b_k = 1 + b^T a.

Corollary. Consider

φ(λ) = det
[ a1 − λ   b1       ...   b1
  b2       a2 − λ   ...   b2
  ...
  bN       bN       ...   aN − λ ]
= det(D − λI + a·1^T)

with

D = diag(a1 − b1, a2 − b2, ..., aN − bN),   a = (b1, b2, ..., bN)^T,   1^T = (1, 1, ..., 1).

Then

φ(λ) = det(D − λI) · det( I + (D − λI)^{−1} a 1^T )
     = Π_{k=1}^N (a_k − b_k − λ) · [ 1 + Σ_{k=1}^N b_k / (a_k − b_k − λ) ].
The Jacobian of the discrete system (56) has the form

J_D =
[ 1 − K1    K1 r1    ...   K1 r1
  K2 r2    1 − K2    ...   K2 r2
  ...
  KN rN    KN rN     ...   1 − KN ],

where r_k denotes the derivative R′_k of the best response evaluated at the equilibrium. Its characteristic polynomial is, by the above Corollary (with a_k = 1 − K_k and b_k = K_k r_k),

φ(λ) = Π_{k=1}^N (1 − K_k(1 + r_k) − λ) · [ 1 + Σ_{k=1}^N K_k r_k / (1 − K_k(1 + r_k) − λ) ].   (58)

The eigenvalues are therefore

λ = 1 − K_k(1 + r_k)

and/or the solutions of the equation

g(λ) = 1 + Σ_{k=1}^N K_k r_k / (1 − K_k(1 + r_k) − λ) = 0.   (59)

The values 1 − K_k(1 + r_k) are inside the unit circle if

K_k < 2/(1 + r_k),

which is always the case, since 0 < K_k ≤ 1 and −1 < r_k ≤ 0 (so 2/(1 + r_k) ≥ 2 > K_k).

Notice that

lim_{λ→±∞} g(λ) = 1,

g(λ) → −∞ as λ increases to a pole 1 − K_k(1 + r_k), and g(λ) → +∞ as λ decreases to it, and

g′(λ) = Σ_{k=1}^N K_k r_k / (1 − K_k(1 + r_k) − λ)² < 0.

(If r_k = 0, then the corresponding term is zero, and we get the same equation with a smaller value of N.)

Figure. Shape of g(λ).

So there are N − 1 roots between the poles and one before the first pole. All roots found are real, and they are between −1 and 1 if and only if g(−1) > 0, that is,

Σ_{k=1}^N K_k r_k / (2 − K_k(1 + r_k)) > −1.   (60)

In the linear case p(s) = A − Bs and C_k(x_k) = c_k x_k + d_k, so

R_k(s_k) = 0     if A − B s_k − c_k ≤ 0,
R_k(s_k) = L_k   if A − B(s_k + L_k) − B L_k − c_k ≥ 0,
R_k(s_k) = z_k   otherwise,

where z_k is the solution of the equation

A − B s_k − 2B x_k − c_k = 0,

so

z_k = −s_k/2 + (A − c_k)/(2B).

Therefore r_k = −1/2 in the case of an interior equilibrium, and the asymptotical stability condition (60) becomes

Σ_{k=1}^N K_k / (4 − K_k) < 1.

Assume K_k ≡ K; then this holds if NK/(4 − K) < 1, that is,

K < 4/(N + 1).
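The threshold K < 4/(N + 1) can be verified numerically by building J_D and computing its eigenvalues (a sketch with numpy; r_k = −1/2 as in the interior linear case, other values illustrative):

```python
import numpy as np

def spectral_radius_JD(N, K, r=-0.5):
    # Jacobian of the discrete system (56): 1 - K on the diagonal, K*r elsewhere.
    J = K * r * np.ones((N, N)) + (1 - K - K * r) * np.eye(N)
    return max(abs(np.linalg.eigvals(J)))

N = 5
for K in (0.6, 4 / (N + 1) - 1e-6, 4 / (N + 1) + 1e-6, 1.0):
    print(round(K, 4), round(spectral_radius_JD(N, K), 6))
# The spectral radius crosses 1 exactly at K = 4/(N+1) = 2/3 for N = 5.
```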

The Jacobian of the continuous system (57) has the form

J_C =
[ −K1      K1 r1    ...   K1 r1
  K2 r2    −K2      ...   K2 r2
  ...
  KN rN    KN rN    ...   −KN ]
= J_D − I,

and since the eigenvalues of J_D are real and less than 1, all eigenvalues of J_C are real and negative. So the continuous system (57) is always asymptotically stable.

18  Controllability in oligopolies

Only single-product oligopolies will be examined. Let N denote the number of firms, let p(s) = A − Bs (A, B > 0) be the price function, and for k = 1, 2, ..., N let C_k(x_k) = c_k x_k + d_k (c_k, d_k > 0) denote the cost function of firm k. Assume furthermore that the market is controlled through the cost functions of the firms, which can be interpreted as subsidies, tax breaks, etc. Under this assumption the profit of firm k can be expressed as

π_k(x1, ..., xN) = x_k (A − B Σ_{l=1}^N x_l) − u(c_k x_k + d_k),   (61)

where u is the control variable.

If tax and subsidy are imposed on unit output, then equation (61) has to be modified as follows:

π_k(x1, ..., xN) = x_k (A − B Σ_{l=1}^N x_l) − (c_k x_k u + d_k).   (62)

The resulting dynamic models will be the same regardless of which of the controls (61) or (62) is assumed, so in the following discussion only the control (61) will be considered. Assuming discrete time scales and best response dynamics (K_k = 1) first, at each time period t ≥ 1 each firm maximizes its expected profit

x_k ( A − B x_k − B Σ_{l≠k} x_l(t − 1) ) − (c_k x_k + d_k) u(t − 1).

Excluding the corner optimum,

x_k(t) = −(1/2) Σ_{l≠k} x_l(t − 1) + (A − u(t − 1) c_k)/(2B)   (k = 1, 2, ..., N).

Introduce the new state variables

z_k(t) = x_k(t) − A/((N + 1)B)

to have a discrete control system

z_k(t) = −(1/2) Σ_{l≠k} z_l(t − 1) − (c_k/(2B)) u(t − 1)   (63)

for k = 1, 2, ..., N. Notice that this system can be written as

z(t) = A̲ z(t − 1) + b u(t − 1)   (64)

with

z = (z1, z2, ..., zN)^T,   A̲ = the N×N matrix with zero diagonal and −1/2 off-diagonal elements,   b = −(1/(2B)) (c1, c2, ..., cN)^T.

It is well known from the theory of linear systems that system (63) is completely controllable if and only if the rank of the Kalman matrix

K = (b, A̲b, A̲²b, ..., A̲^{N−1}b)

is N.
Consider first the special case of a duopoly (that is, when N = 2). In this case

K = (b, A̲b) =
[ −c1/(2B)   c2/(4B)
  −c2/(2B)   c1/(4B) ]

and

det(K) = −c1²/(8B²) + c2²/(8B²),

which has full rank if and only if c1 ≠ c2.
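Both rank statements (the duopoly condition here and the N ≥ 3 claim discussed next) are easy to check numerically. The sketch below builds the Kalman matrix of the discrete system (63)–(64); the cost values are illustrative only:

```python
import numpy as np

def kalman_rank(c, B=1.0):
    """Rank of K = (b, Ab, ..., A^{N-1} b) for the discrete control system (64)."""
    N = len(c)
    A = -0.5 * (np.ones((N, N)) - np.eye(N))   # zero diagonal, -1/2 off-diagonal
    b = -np.array(c, dtype=float) / (2 * B)
    cols = [b]
    for _ in range(N - 1):
        cols.append(A @ cols[-1])
    return np.linalg.matrix_rank(np.column_stack(cols))

print(kalman_rank([1.0, 2.0]))        # 2: duopoly with c1 != c2 is controllable
print(kalman_rank([1.0, 1.0]))        # 1: c1 = c2, not controllable
print(kalman_rank([1.0, 2.0, 3.0]))   # 2 < 3: not controllable for N >= 3
```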
Assume next that N ≥ 3. We will now verify that always rank(K) < N, that is, the system is not controllable. Observe first that

A̲ = (1/2)(I − E),

where I is the N×N identity matrix and E is the N×N real matrix with all elements equal to one. Since E² = NE,

A̲² = (1/4)(I − 2E + E²) = (1/4)(I + (N − 2)E) = ((N − 1)/4) I + ((2 − N)/2) A̲

and

A̲²b = ((N − 1)/4) b + ((2 − N)/2) A̲b,

showing that the columns of K are linearly dependent: since A̲² is a linear combination of I and A̲, every power A̲^j b (j ≥ 2) lies in the span of b and A̲b, so rank(K) ≤ 2 < N.

Let us modify the above model by assuming that different firms are controlled by different control variables. The modified model can be written as

z(t) = A̲ z(t − 1) + B̲ u(t − 1),   (65)

where A̲ is as before,

B̲ = diag( −c1/(2B), −c2/(2B), ..., −cN/(2B) ),

and u is an N-element vector. Since B̲ itself is nonsingular, the first N columns of the Kalman matrix K are linearly independent. Therefore rank(K) = N, showing that the system is completely controllable.

Assume next that the time scale is continuous. The dynamic system now has the form

ẋ_k(t) = K_k [ −(1/2) Σ_{l≠k} x_l(t) + (A − c_k u(t))/(2B) − x_k(t) ]   (k = 1, 2, ..., N).

By introducing

K̄_k = K_k/(2B)   (k = 1, 2, ..., N),

it can be rewritten as

ẋ_k(t) = K̄_k [ −2B x_k(t) − B Σ_{l≠k} x_l(t) + A − c_k u(t) ]   (k = 1, 2, ..., N),

where K̄_k is a positive constant for all k. Introduce the new state variables z_k(t) = x_k(t) − A/((N + 1)B) to obtain the continuous control system

ż(t) = A̲ z(t) + b u(t),

where

A̲ = −K̄ ·
[ 2B   B    ...   B
  B    2B   ...   B
  ...
  B    B    ...   2B ],
b = −K̄ (c1, c2, ..., cN)^T,

with K̄ = diag(K̄1, K̄2, ..., K̄N).
Consider first the special case of a duopoly (that is, when N = 2). Then the Kalman matrix is the following:

K = (b, A̲b) =
[ −K̄1 c1    B K̄1 (2K̄1 c1 + K̄2 c2)
  −K̄2 c2    B K̄2 (K̄1 c1 + 2K̄2 c2) ],

which has full rank if and only if

K̄1 c1² + 2(K̄2 − K̄1) c1 c2 − K̄2 c2² ≠ 0.

For example, if K̄1 = K̄2, then this condition is equivalent to the assumption that c1 ≠ c2.

If N ≥ 3, then the sufficient and necessary controllability conditions are even more complicated. However, if K̄1 = K̄2 = ... = K̄N (= K̄, say), then

A̲ = −B K̄ (I + E)

and, since E = −(1/(BK̄)) A̲ − I,

A̲² = B² K̄² (I + 2E + E²) = B² K̄² (I + (N + 2)E) = −(N + 1) B² K̄² I − (N + 2) B K̄ A̲,

showing that the columns of the Kalman matrix are linearly dependent. The case of different controls for different firms can be discussed analogously to the discrete case. The details are left as an exercise, as well as the cases of adaptive and extrapolative expectations.

19  Illustrative case studies

19.1  Example 1. Selecting a Restaurant for Lunch

Decision makers: 4 people who go together from different locations
Alternatives: Chinese (C), American (A), Mexican (M), Italian (I), no food (N)
Attributes:
A1 Quality/taste
A2 Quantity/amount
A3 Price
A4 Service/speed
A5 Distance to restaurant
Problem: Find a collective decision
Step 1: Individual preferences are assessed
Step 2: The collective choice is made
Decision maker No. 1, individual preferences:

Attributes          C      A      M      I      N    Importance
A1 (0-100)         75     80     95     65      0       0.3
A2 (0-100)         80     60     70     70      0       0.3
A3 ($2-12)          5      8      7     11      0       0.2
A4 (0-40 mins)     15     25     20     35      0       0.1
A5 (0-8 miles)    0.7    0.8    1.5   2.75      0       0.1

A1 and A2 are given in satisfaction levels. A3, A4 and A5 have to be given in a similar way: how satisfied one is with an attribute. We cannot use the numbers directly from the table; money, minutes and miles cannot be compared and added directly. We need common measures of satisfaction to be introduced and used.

Value function for price (A3), value in the (0,100) scale:

V(0) = 100,  V(12) = 0.

Linear function: V(P) = 100 − (100/12)P. Therefore:

V(5) = 100 − (100/12)(5) = 58.3%
V(8) = 100 − (100/12)(8) = 33.3%
V(7) = 100 − (100/12)(7) = 41.7%
V(11) = 100 − (100/12)(11) = 8.3%
V(0) = 100 − (100/12)(0) = 100%

Value function for service (A4), value in the (0,100) scale:

If people are in a hurry, then 20 minutes is considered slow service, so 25% satisfaction is given there. Between 0 and 20 minutes, and between 20 and 40 minutes, the value function decreases linearly:

V1(0) = 100, V1(20) = 25;   V2(20) = 25, V2(40) = 0.

Linear functions: V1(S) = 100 − (300/80)S and V2(S) = 50 − (100/80)S. Therefore:

V(15) = V1(15) = 43.8%
V(25) = V2(25) = 18.8%
V(20) = V1(20) = 25%
V(35) = V2(35) = 6.3%
V(0) = V1(0) = 100%

Value function for distance (A5), value in the (0,100) scale:

The reasoning is similar to that for the service time; 4 miles is already considered a large distance:

V1(0) = 100, V1(4) = 25;   V2(4) = 25, V2(8) = 0.

Linear functions: V1(D) = 100 − (300/16)D and V2(D) = 50 − (100/16)D. Therefore:

V(0.7) = V1(0.7) = 86.9%
V(0.8) = V1(0.8) = 85%
V(1.5) = V1(1.5) = 71.9%
V(2.75) = V1(2.75) = 48.4%
V(0) = V1(0) = 100%
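These piecewise linear value functions are easy to encode; the sketch below (using the breakpoints of A4 and A5 above) reproduces the satisfaction levels that appear in the next table:

```python
def piecewise_value(x, breakpoints):
    """Linear interpolation on a two-segment decreasing value function."""
    (x0, v0), (x1, v1), (x2, v2) = breakpoints
    if x <= x1:
        return v0 + (v1 - v0) * (x - x0) / (x1 - x0)
    return v1 + (v2 - v1) * (x - x1) / (x2 - x1)

service = [(0, 100), (20, 25), (40, 0)]    # minutes -> satisfaction
distance = [(0, 100), (4, 25), (8, 0)]     # miles   -> satisfaction

print([piecewise_value(t, service) for t in (15, 25, 20, 35, 0)])
# [43.75, 18.75, 25.0, 6.25, 100.0]  -> rounded in the table to 43.8, 18.8, 25, 6.3, 100
print([piecewise_value(d, distance) for d in (0.7, 0.8, 1.5, 2.75, 0)])
# [86.875, 85.0, 71.875, 48.4375, 100.0] -> rounded to 86.9, 85, 71.9, 48.4, 100
```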

Table with satisfaction levels for Decision maker No. 1:

Attributes     C       A       M       I       N    Importance
A1            75      80      95      65       0       0.3
A2            80      60      70      70       0       0.3
A3          58.3    33.3    41.7     8.3     100       0.2
A4          43.8    18.8      25     6.3     100       0.1
A5          86.9      85    71.9    48.4     100       0.1

The individual overall satisfaction level for each alternative is the weighted average:

For C: 75(0.3) + 80(0.3) + 58.3(0.2) + 43.8(0.1) + 86.9(0.1) = 71.23
For A: 80(0.3) + 60(0.3) + 33.3(0.2) + 18.8(0.1) + 85(0.1) = 59.04
For M: 95(0.3) + 70(0.3) + 41.7(0.2) + 25(0.1) + 71.9(0.1) = 67.53
For I: 65(0.3) + 70(0.3) + 8.3(0.2) + 6.3(0.1) + 48.4(0.1) = 47.63
For N: 0(0.3) + 0(0.3) + 100(0.2) + 100(0.1) + 100(0.1) = 40

In summary, the preference of decision maker 1 is: C ≻ M ≻ A ≻ I ≻ N.

Collective choice for the Restaurant Example

After the individual preferences are assessed for the other three persons as well (similarly to Person 1 shown above), the following preference table is established:

            C    A    M    I    N    POWER
Person 1    1    3    2    4    5       5
Person 2    3    1    4    5    2       8
Person 3    1    4    5    2    3       6
Person 4    4    5    2    1    3      10
                              Total:   29

Majority Rule (voting)

Persons 1 & 3 choose C as their first choice. Using the power factors, the total for C is 5 + 6 = 11.
Person 2 was the only one choosing A as the first choice, hence A: 8.
Person 4 selects I, so I: 10.
Nobody selects M or N, so M, N: 0.

Most votes were given to C, so the Chinese restaurant is the collective choice.

The drawback of this technique is that it only considers first choices; later rankings are ignored.

Weighting (Borda count)

The weighted sum of the rankings is computed for each alternative (weight × ranking); the one with the lowest sum is the final choice.

For C: 1·5 + 3·8 + 1·6 + 4·10 = 75
For A: 3·5 + 1·8 + 4·6 + 5·10 = 97
For M: 2·5 + 4·8 + 5·6 + 2·10 = 92
For I: 4·5 + 5·8 + 2·6 + 1·10 = 82
For N: 5·5 + 2·8 + 3·6 + 3·10 = 89

The smallest number is at C, so the Chinese restaurant is the collective choice.

Hare system

Deleting N, new table:

            C    A    M    I    POWER
Person 1    1    3    2    4       5
Person 2    2    1    3    4       8
Person 3    1    3    4    2       6
Person 4    3    4    2    1      10

Deleting M, new table:

            C    A    I    POWER
Person 1    1    2    3       5
Person 2    2    1    3       8
Person 3    1    3    2       6
Person 4    2    3    1      10

Deleting A, new table:

            C    I    POWER
Person 1    1    2       5
Person 2    1    2       8
Person 3    1    2       6
Person 4    2    1      10

Total weights: for C = 5 + 8 + 6 = 19, for I = 10. So C is the social choice.

Pair-wise comparison

This technique chooses the better of two options for all individuals and continues the pair-wise comparison with the remaining alternatives until the final choice is reached. Starting from the top:

Choice between C and A:
C ≻ A by total power 5 + 6 + 10 = 21, A ≻ C by 8.  C is the winner.

Choice between C and M:
C ≻ M by 5 + 8 + 6 = 19, M ≻ C by 10.  C is the winner again.

Choice between C and I:
C ≻ I by 5 + 8 + 6 = 19, I ≻ C by 10.  C is the winner again.

Choice between C and N:
C ≻ N by 5 + 6 = 11, N ≻ C by 8 + 10 = 18.  N is the winner.

So N (no lunch) is the collective social choice.

The drawback of this technique is that the choice may depend on the order in which the comparisons are made.

By reversing the order of the pair-wise comparisons for the example, starting from the bottom with N:

Choice between N and I:
I ≻ N by 5 + 6 + 10 = 21, N ≻ I by 8.  I is the winner; the previous winner is out.

Choice between I and M:
I ≻ M by 6 + 10 = 16, M ≻ I by 5 + 8 = 13.  I is the winner.

Choice between I and A:
I ≻ A by 6 + 10 = 16, A ≻ I by 5 + 8 = 13.  I remains the winner.

Choice between I and C:
C ≻ I by 5 + 8 + 6 = 19, I ≻ C by 10.  C is the winner.
So C (Chinese restaurant) is the collective choice.

Again, the drawback of this technique is that the choice may depend on the order in which the comparisons are made. Contradictory preferences may occur at the collective level. By comparing all pairs, the preference graph of the five alternatives can be drawn (figure). Consider, for example: C ≻ I and I ≻ A, and indeed C ≻ A — consistent. However, C ≻ I and I ≻ N, but N ≻ C — contradiction!
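The group-decision rules used above can all be reproduced from the preference table. The following sketch (written for these notes) implements the weighted first-choice vote, the Borda count and a single pair-wise comparison:

```python
# Preference table of the restaurant example: rank of each alternative per person.
ranks = {                      # person -> {alternative: rank}
    1: {"C": 1, "A": 3, "M": 2, "I": 4, "N": 5},
    2: {"C": 3, "A": 1, "M": 4, "I": 5, "N": 2},
    3: {"C": 1, "A": 4, "M": 5, "I": 2, "N": 3},
    4: {"C": 4, "A": 5, "M": 2, "I": 1, "N": 3},
}
power = {1: 5, 2: 8, 3: 6, 4: 10}
alts = ["C", "A", "M", "I", "N"]

# Majority rule: weighted first-choice votes.
first = {a: sum(power[p] for p in ranks if ranks[p][a] == 1) for a in alts}
print(first)                   # C gets 11, the most

# Borda count: weighted sum of ranks, smallest wins.
borda = {a: sum(power[p] * ranks[p][a] for p in ranks) for a in alts}
print(borda)                   # C: 75 is the smallest

# Pair-wise comparison between two alternatives.
def duel(a, b):
    wa = sum(power[p] for p in ranks if ranks[p][a] < ranks[p][b])
    return a if wa > sum(power.values()) - wa else b

print(duel("C", "N"))          # N beats C, although C beats A, M and I
```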

19.2  Example 2. Buying a Family Car

Decision makers: Father, mother, child
Alternatives: Volvo (V), Geo Metro (GM), Dodge Status (DS), Corvette (C)
Attributes:
A1 Style/look
A2 Speed
A3 Safety
A4 Maintenance
A5 Gas usage
A6 Price
A7 Comfort
Problem: Find a collective family decision
Step 1: Individual preferences are assessed
Step 2: The collective choice is made
Father, individual preferences:

Attributes        V     GM     DS      C    Importance
A1 (0-100)       90     55     75     70       0.05
A2 (mph)        150     80    120    150       0.25
A3 (0-100)      100     50     90     75       0.25
A4 (0-100)       90     70     80     50       0.15
A5 (mpg)         20     35     25     10       0.13
A6 ($1000s)      40     10     20     55       0.10
A7 (0-100)       95     60     80     70       0.07

A1, A3, A4 and A7 are given in satisfaction levels (0-100) while A2, A5 and A6 are not. Therefore value functions should be assessed for A2, A5 and A6.

Value function for speed (A2), value in the (0,100) scale:

The father does not want a fast car, since the teenage child would endanger himself:

V2(75) = 100,  V2(100) = 0.

Linear function: V2(S) = 400 − 4S (speeds of 100 mph or more receive the value 0). Therefore:

V(150) = 0
V(80) = 80
V(120) = 0
V(150) = 0

Value function for mileage (A5), value in the (0,100) scale:

Better gas mileage means better satisfaction, uniformly:

V(0) = 0,  V(40) = 100.

Linear function: V(M) = (100/40)M. Therefore:

V(20) = 50
V(35) = 87.5
V(25) = 62.5
V(10) = 25

Value function for price (A6), value in the (0,100) scale:

V2(20) = 100, V2(30) = 50;   V3(30) = 50, V3(60) = 0.

Linear functions: V2(P) = 200 − 5P and V3(P) = 100 − (100/60)P; prices at or below $20,000 receive the full value 100. Therefore:

V(40) = V3(40) = 33.3
V(10) = 100
V(20) = 100
V(55) = V3(55) = 8.3

Table with satisfaction levels:

Attribute      V      GM      DS      C    Importance
A1            90      55      75     70       0.05
A2             0      80       0      0       0.25
A3           100      50      90     75       0.25
A4            90      70      80     50       0.15
A5            50    87.5    62.5     25       0.13
A6          33.3     100     100    8.3       0.10
A7            95      60      80     70       0.07

The overall satisfaction level is computed by weighted averages:

For V: 90·0.05 + 0·0.25 + 100·0.25 + 90·0.15 + 50·0.13 + 33.3·0.10 + 95·0.07 = 59.48
For GM: 55·0.05 + 80·0.25 + 50·0.25 + 70·0.15 + 87.5·0.13 + 100·0.10 + 60·0.07 = 71.33
For DS: 75·0.05 + 0·0.25 + 90·0.25 + 80·0.15 + 62.5·0.13 + 100·0.10 + 80·0.07 = 61.98
For C: 70·0.05 + 0·0.25 + 75·0.25 + 50·0.15 + 25·0.13 + 8.3·0.10 + 70·0.07 = 38.73

In summary, the preference of the father as a single decision maker is: GM ≻ DS ≻ V ≻ C.
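The father's overall satisfaction levels can be recomputed directly from this table with a few lines of Python (a sketch using only the numbers above):

```python
# Father's satisfaction levels (rows A1..A7) and importance weights.
satisfaction = {
    "V":  [90, 0, 100, 90, 50, 33.3, 95],
    "GM": [55, 80, 50, 70, 87.5, 100, 60],
    "DS": [75, 0, 90, 80, 62.5, 100, 80],
    "C":  [70, 0, 75, 50, 25, 8.3, 70],
}
weights = [0.05, 0.25, 0.25, 0.15, 0.13, 0.10, 0.07]

scores = {car: sum(w * s for w, s in zip(weights, vals))
          for car, vals in satisfaction.items()}
print(scores)
# approximately {'V': 59.48, 'GM': 71.33, 'DS': 61.98, 'C': 38.73}
print(max(scores, key=scores.get))   # GM is the father's first choice
```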

Collective choice for the family car example

After the individual preferences are assessed for all members of the family, we obtain the following preference table:

           V    GM    DS    C    POWER
Father     3     1     2    4       4
Mother     2     3     1    4       3
Child      3     4     2    1       3
                           Total:  10

In this example the father pays for the car, so he has slightly higher power than the others in the family.

Majority rule (voting)

Total vote for V: 0, GM: 4, DS: 3, C: 3.

The collective choice is GM.

Weighting (Borda Count)


V:  3·4 + 2·3 + 3·3 = 27
GM: 1·4 + 3·3 + 4·3 = 25
DS: 2·4 + 1·3 + 2·3 = 17
C:  4·4 + 4·3 + 1·3 = 31

The collective choice is DS.

Hare system


Deleting V, new table:

           GM    DS    C    POWER
Father      1     2    3       4
Mother      2     1    3       3
Child       3     2    1       3

DS and C have the lowest first-choice vote. Deleting DS, new table:

           GM    C    POWER
Father      1    2       4
Mother      1    2       3
Child       2    1       3

Total vote: for GM = 7, for C = 3. So GM is the social choice.

If we delete C instead of DS, then the new table is:

           GM    DS    POWER
Father      1     2       4
Mother      2     1       3
Child       2     1       3

Total vote: for GM = 4, for DS = 6. So DS is the social choice.

Pair-wise comparison

V or GM:  V ≻ GM by 3 + 3 = 6, GM ≻ V by 4.  V is the winner.
V or DS:  DS ≻ V by 4 + 3 + 3 = 10, V ≻ DS by 0.  DS is the winner.
DS or C:  DS ≻ C by 4 + 3 = 7, C ≻ DS by 3.  The collective choice is DS.

By comparing all pairs, the preference graph can be drawn: C is less preferred than all the others. After C is eliminated, GM is the second least preferred, since both V and DS are more preferred than GM. After GM is eliminated, then between V and DS, DS is collectively more preferred. Very consistent preferences.

19.3  Example 3. Restoration of a Chemical Landfill

Alternatives:
1. Digging and disposing
2. In situ remediation with established technology
3. In situ remediation with new technology
4. Ex situ remediation with established technology
5. Ex situ remediation with new technology

Criteria:
C1: human health (death rate)
C2: clean-up level
C3: life cycle cost (10⁶ $)
C4: timing
C5: social and political issues

        T1          T2          T3            T4          T5
C1   10⁻⁶  10%   10⁻⁶  10%   2·10⁻⁶  10%   10⁻⁶  10%   2·10⁻⁶  10%
C2   0      0%   10⁻³  30%   10⁻²    30%   10⁻³  20%   5·10⁻³  20%
C3   7.6   10%   3.32  20%   2.9     40%   4.8   20%   3.8     40%
C4   0.15  10%   0.6   20%   0.6     30%   0.1   15%   0.1     20%
C5   0.8    5%   1.2   10%   1.2     30%   1     10%   1       30%

For each entry the first number is the mean value and the percentage is the standard deviation.

Decision makers (interest groups):

                 C1      C2      C3      C4      C5    weight
site owners     0.25    0.20    0.70    0.25    0.20    0.50
stake holders   0.50    0.60    0.25    0.05    0.60    0.25
regulators      0.25    0.20    0.05    0.70    0.20    0.65

The composite criterion weights are

W1 = 0.25(0.50) + 0.50(0.25) + 0.25(0.65) = 0.4125
W2 = 0.20(0.50) + 0.60(0.25) + 0.20(0.65) = 0.3800
W3 = 0.70(0.50) + 0.25(0.25) + 0.05(0.65) = 0.4450
W4 = 0.25(0.50) + 0.05(0.25) + 0.70(0.65) = 0.5925
W5 = 0.20(0.50) + 0.60(0.25) + 0.20(0.65) = 0.3800

Normalized table:

        A1      A2      A3      A4      A5    weight
C1       1       1       0       1       0    0.4125
C2       1     0.9       0     0.9     0.5    0.3800
C3       0    0.91       1     0.6    0.81    0.4450
C4     0.9       0       0       1       1    0.5925
C5       1       0       0     0.5     0.5    0.3800

Weighting method with deterministic data:

A1: 1.706,  A2: 1.159,  A3: 0.445,  A4: 1.804,  A5: 1.332

A4 ≻ A1 ≻ A5 ≻ A2 ≻ A3 (the difference between A4 and A1 is small).

Simulation results (N = 100,000)

Normally distributed simulated data were used with the distance-based methods (minimizing the distance from the ideal point, and maximizing the distance from the worst (nadir) point):

            minimize                 maximize
        l∞      l1      l2       l∞      l1      l2
A1    0.10    0.22    0.30     0.25    0.33    0.11
A2    0.01    0.02    0.04     0.01    0.01    0.01
A3    0.00    0.01    0.01     0.00    0.01    0.01
A4    0.82    0.75    0.63     0.71    0.54    0.82
A5    0.07    0.00    0.02     0.03    0.11    0.05

Relative frequencies of optimality: A4 is best.
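The relative frequencies above come from a Monte Carlo experiment. The notes do not list the exact procedure, so the following is only an assumed sketch: each entry is drawn from a normal distribution with the given mean and standard deviation, the sampled table is re-normalized, and the winner under the weighted l1-distance from the ideal point is recorded.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([                       # rows: C1..C5, columns: A1..A5
    [1e-6, 1e-6, 2e-6, 1e-6, 2e-6],
    [0.0, 1e-3, 1e-2, 1e-3, 5e-3],
    [7.6, 3.32, 2.9, 4.8, 3.8],
    [0.15, 0.6, 0.6, 0.1, 0.1],
    [0.8, 1.2, 1.2, 1.0, 1.0],
])
stds = means * np.array([                # standard deviations as % of the mean
    [0.10, 0.10, 0.10, 0.10, 0.10],
    [0.00, 0.30, 0.30, 0.20, 0.20],
    [0.10, 0.20, 0.40, 0.20, 0.40],
    [0.10, 0.20, 0.30, 0.15, 0.20],
    [0.05, 0.10, 0.30, 0.10, 0.30],
])
weights = np.array([0.4125, 0.38, 0.445, 0.5925, 0.38])

wins = np.zeros(5)
for _ in range(10_000):                  # 100,000 repetitions in the notes
    sample = rng.normal(means, stds)
    # smaller is better for every criterion, so normalize as (worst - x)/(worst - best)
    worst, best = sample.max(axis=1, keepdims=True), sample.min(axis=1, keepdims=True)
    norm = (worst - sample) / (worst - best)
    dist = (weights[:, None] * (1.0 - norm)).sum(axis=0)   # weighted l1 from the ideal
    wins[dist.argmin()] += 1

print(wins / wins.sum())                 # A4 has by far the largest frequency
```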

Accuracy of relative frequencies

Chebyshev inequality:

P( |X/N − p| ≥ ε ) ≤ pq/(Nε²) ≤ 1/(4Nε²).

In our case ε = 0.01 and N = 10⁵, so

P( |error| ≤ 0.01 ) ≥ 1 − 1/(4·10⁵·0.01²) = 1 − 1/40 = 0.975 = 97.5%.
