Game Theory
course offered by Professor Ferenc Szidarovszky
Contents

1 Decision making
2 Examples of games
   2.1 Prisoners' dilemma
   2.2 Chicken
   2.3 Sharing a pie
   2.4 Chain store
   2.5 War game
   2.6 Two-person, zero-sum, discrete games
   2.7 Coin in pocket
   2.8 Cournot oligopoly
   2.9 Bertrand oligopoly
   2.10 Special duopoly
   2.11 Quality control
   2.12 Advertisement
   2.13 Market share
   2.14 Advertisement budget allocation
   2.15 Inventory control
   2.16 Price strategy
   2.17 Duel without sound
   2.18 Duel with sound
   2.19 Spying game
   2.20 Matching pennies
   2.21 Modified war game
   2.22 Hidden bomb in a city
   2.23 Second price auction
   2.24 Irrigation system
5 Continuous games
   5.1 Brouwer fixed point theorem
   5.2 Kakutani fixed point theorem for point-to-set mappings
   5.3 Banach fixed point theorem
   5.4 Conditions to be contraction
   5.5 Relation of EP and fixed points
6 Nikaido–Isoda theorem
   6.1 Concave functions
   6.2 Counterexamples for Nikaido–Isoda theorem
7 Applications
   7.1 Matrix games
   7.2 Bimatrix games
   7.3 Mixed finite games
   7.4 Polyhedron games
   7.5 Multiproduct oligopolies
   7.6 Single-product oligopolies
10 Applications
   10.1 Bimatrix games
   10.2 Matrix games
   10.3 Oligopoly game (single-product model)
   10.4 Special matrix games
   10.5 Applications
   10.6 Method of fictitious play
   10.7 von Neumann's method
   Fictitious play application
11 Uniqueness of Equilibrium
   Equation
   Fixed point problem
12 Leader–follower games
   12.1 Application to duopolies
      1. Nash equilibrium
      2. Stackelberg equilibrium
      3. Optimum subsidy
15 Conflict resolution
   15.1 Bargaining as noncooperative game
   15.2 Single-player decision problem
   15.3 Axiomatic bargaining
      Idea of proof
   15.4 Nonsymmetric Nash solution
   15.5 Area monotonic solution
   15.6 Equal sacrifice solution
   15.7 Kalai–Smorodinsky solution
17 Dynamic games
   17.2 Best response dynamics
   19.1 Example 1. Selecting a Restaurant for Lunch
   Collective choice for Restaurant Example
   19.3 Example 3. Restoration of Chemical Landfill
Game theory
1 Decision making

The decision maker (DM) selects one of finitely many decision alternatives

A1, A2, ..., Am

or an element of a continuous alternative set

X = {x | x ∈ Rm, g(x) ≥ 0}.

What are the consequences of the decision? Objective functions,

φ1, φ2, ..., φn.

Many cases:
- 1 DM with 1 objective: optimization
- 1 DM with several objectives: multiobjective optimization
- several DMs, each with 1 objective: game
- several DMs with several objectives: Pareto game

Games: the decision alternatives are called the strategies; the DMs are called the players; the objective functions are called the payoff functions.
History:
John von Neumann (1928), John Nash (1950-53); Nobel laureates Nash, Selten and Harsanyi (1994)
2 Examples of games

2.1 Prisoners' dilemma

Two prisoners who robbed a jewellery store for hire got caught, but the police does not have enough evidence to convict them of the full crime, only of a much lesser crime of driving a stolen car.
Players: the two prisoners.
Strategies: confess (C) or not confess (NC).
Payoffs:
- If only one confesses, then he receives a very light sentence (1 year) and the other gets a very harsh sentence (10 years).
- If both confess, they get medium-long (5 year) sentences.
- If neither of them confesses, then they are convicted of the lesser crime: a 2 year sentence for each.

Payoff table (φ1, φ2):

1\2      NC          C
NC    (-2, -2)   (-10, -1)
C     (-1, -10)   (-5, -5)

Question: what to do? The payoff of each depends on the choice of the other player. Best choice of each player as a function of the choice of the other:

R1 = C if player 2 selects NC;  C if player 2 selects C,

and the same for player 2, who also should always confess. So (C, C) is the equilibrium, although (NC, NC) would be better for both.
2.2 Chicken

Two kids with motorcycles drive toward each other in a narrow alley. Each can give way to the other (C = chicken) or not (C̄).

Players: the two kids.
Strategies: C or C̄.
Payoffs: being a chicken looks bad in the gang, but in a crash both might die, which is even worse.

There are two equilibria, (C, C̄) and (C̄, C): neither player can gain by deviating, assuming that the other player keeps his corresponding equilibrium strategy.

Problem & difficulty: in a particular game, which equilibrium is selected?
2.3 Sharing a pie

Two people share a pie of unit size.

Players: the two people.
Strategies: requests from the pie, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1.
Payoffs: if the requests are feasible (x + y ≤ 1), both get the requested amounts; if infeasible (x + y > 1), both get nothing:

φ1(x, y) = x if x + y ≤ 1;  0 otherwise
φ2(x, y) = y if x + y ≤ 1;  0 otherwise

Equilibria: {(x, y) | 0 ≤ x, y ≤ 1, x + y = 1}
2.4 Chain store

Players: chain store (C) & entrepreneur (E).
Strategies: for C: soft or hard on E (S, H); for E: stay in business or out (I, O).
Payoffs:

C\E      I        O
S     (2, 2)   (5, 1)
H     (0, 0)   (5, 1)
2.5 War game

An airplane (A) drops a bomb on the interval [0, 1]; a submarine (S) hides somewhere in [0, 1].

Players: A and S.
Strategies: for A: x ∈ [0, 1]; for S: y ∈ [0, 1].
Payoffs: with some constant k > 0,

φ1 = e^(-k(x-y)^2)   (damage to the submarine),   φ2 = -φ1.

Best responses:

R1(y) = y   (drop the bomb where the submarine is hiding);
R2(x) = as far as possible from x, so
R2(x) = 1 if x < 1/2;  0 if x > 1/2;  {0, 1} if x = 1/2.

The two best responses cannot hold simultaneously, so no equilibrium exists.
2.6 Two-person, zero-sum, discrete games

Two players; player 1 has m strategies, player 2 has n strategies. If player 1 selects strategy i and player 2 selects strategy j, the payoff of player 1 is a_ij and the payoff of player 2 is -a_ij (zero sum). Payoff matrix:

       1    ...   j    ...   n
  1   a11   ...  a1j   ...  a1n
  .
  i   ai1   ...  aij   ...  ain
  .
  m   am1   ...  amj   ...  amn

The element a_ij is an equilibrium if a_ij is the largest among the elements of its column and -a_ij is the largest among the corresponding payoffs of player 2, that is, a_ij is the smallest in its row. Such an element is called a saddle point.
Theorem 2.1 Assume that all a_ij are independent, identically distributed with a continuous distribution function. Then

P(equilibrium exists) = m! n! / (m + n - 1)!

Proof. Notice that P(all elements a_ij are different) = 1, and P(a_ij is equilibrium) is the same for all elements, so

P(equilibrium exists) = m n P(a11 is equilibrium).

a11 is equilibrium if a11 is the largest element of the first column and the smallest element of the first row. Only the ordering of the m + n - 1 elements of the first row and first column matters; all orderings are equally likely, and the number of favorable orderings is (m - 1)!(n - 1)!, so

P(a11 is equilibrium) = (m - 1)!(n - 1)!/(m + n - 1)!,
P(equilibrium exists) = m n (m - 1)!(n - 1)!/(m + n - 1)! = m! n!/(m + n - 1)!.

Example 2.1

m = n = 2:  P22 = 2! 2!/3! = 4/6 = 2/3
m = 2, n = 5:  P25 = 2! 5!/6! = 240/720 = 1/3
m = 1, n arbitrary:  P1n = 1! n!/(1 + n - 1)! = 1

The ratio of consecutive probabilities is

P(m+1,n)/P(m,n) = [(m + 1)! n!/(m + n)!] [(m + n - 1)!/(m! n!)] = (m + 1)/(m + n),

which equals 1 if n = 1, and is less than 1 if n ≥ 2. Consequently P(m,n) → 0 as m or n tends to infinity.
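The formula of Theorem 2.1 is easy to check by simulation. A minimal sketch (function names are my own) estimates the saddle-point probability by Monte Carlo and compares it with m! n!/(m + n - 1)!:

```python
import math
import random

def has_saddle_point(A):
    # a_ij is an equilibrium (saddle point) if it is the largest element
    # of its column and the smallest element of its row
    m, n = len(A), len(A[0])
    return any(A[i][j] == min(A[i]) and A[i][j] == max(A[k][j] for k in range(m))
               for i in range(m) for j in range(n))

def saddle_probability(m, n, trials=20000, seed=1):
    # Monte Carlo estimate with iid uniform entries (a continuous distribution)
    rng = random.Random(seed)
    hits = sum(has_saddle_point([[rng.random() for _ in range(n)] for _ in range(m)])
               for _ in range(trials))
    return hits / trials

def exact(m, n):
    return math.factorial(m) * math.factorial(n) / math.factorial(m + n - 1)

print(exact(2, 2), saddle_probability(2, 2))   # 2/3 and an estimate close to it
```

With 20000 trials the estimates for (m, n) = (2, 2) and (2, 5) land close to the exact values 2/3 and 1/3 of Example 2.1.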
Example. Let the elements of a 2×2 payoff matrix be 0 or 1: each element is 1 with probability p and 0 with probability q = 1 - p, independently. Enumerating all 16 matrices (the four with a single 1, the six with two 1s, and those where 1 is in 3 or 4 positions), one can check that the only matrices without an equilibrium are

  1 0        0 1
  0 1   and  1 0

which together occur with probability 2 p^2 q^2, so

P(no equilibrium exists) = 2 p^2 q^2.
2.7 Coin in pocket

Each player has either 0 or 1 coin in his pocket.

Step 1. Player 1 guesses the total number of coins (no bluffing, so with a coin in his pocket he cannot guess 0).
Step 2. Player 2 guesses the total number of coins (no bluffing, and he cannot repeat the guess of player 1).

The correct guesser wins. The game can be given in extensive form (a tree) or in normal form.

Strategy of player 1: (0, 0), (0, 1), (1, 1), (1, 2) (his guess with 0 coins and with 1 coin in his pocket);
Strategy of player 2: 0 or 1 (the number of coins in his pocket; notice that the guess of player 2 is then always unique, since there is no bluff and no repeat).

In the normal form the payoffs are ±1 with φ2 = -φ1, and no equilibrium exists.
2.8 Cournot oligopoly

Players: n firms producing the same product.
Strategies: produced amounts x1, x2, ..., xn, 0 ≤ xk ≤ Lk.
Payoffs: with unit price p(Σ xl) and cost functions Ck(xk), the profit of firm k is

φk = xk p(x1 + ... + xn) - Ck(xk).

Example. Let n = 2 with, say, p(s) = 9 - s and zero costs. Then

∂φ1/∂x1 = -2x1 + 9 - x2 = 0  ⟹  x1 = (9 - x2)/2,

and the maximum is always interior, so

R1(x2) = (9 - x2)/2,  and similarly  R2(x1) = (9 - x1)/2.

Equilibrium:

x1 = (9 - x2)/2,  x2 = (9 - x1)/2  ⟹  x1 = x2 = 3.
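Iterating the two best responses numerically confirms the equilibrium. A minimal sketch, assuming the reaction functions R_k = (9 - x_other)/2 of this example:

```python
def best_response(x_other):
    # R_k(x_other) = (9 - x_other)/2, from the first-order condition above
    return (9.0 - x_other) / 2.0

x1 = x2 = 0.0
for _ in range(100):                     # best-response iteration
    x1, x2 = best_response(x2), best_response(x1)
print(x1, x2)                            # both converge to 3
```

The iteration contracts with factor 1/2 per step, so it converges to the unique equilibrium (3, 3) from any starting point.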
2.9 Bertrand oligopoly

Players: n firms.
Strategies: setting prices for their own products, 0 ≤ pk ≤ Pk.
Payoffs: profits, where dk denotes the demand for the product of firm k.
2.10 Special duopoly

Players: 2 firms, setting prices but giving discounts c to faithful customers.
Strategies: prices p1, p2.
Payoffs:

φ1 = p1 if p1 ≤ p2;  p1 - c if p1 > p2
φ2 = p2 if p2 ≤ p1;  p2 - c if p2 > p1

Assume the maximal price P^max is large enough. Then

R1(p2) = p2 if p2 ≥ P^max - c;  P^max if p2 < P^max - c,

and R2 is the same.
2.11 Quality control

A salesman (S) sells equipment to a customer (C); parts may be defective. If the equipment is good, the customer pays the agreed price to the salesman; if it turns out to be defective, compensation is paid to the customer.

Players: S and C.
Strategies: for S, how many parts to check before selling the equipment: 0, 1, 2 or 3; each checking has a given cost.
Payoffs: φ1 = the expected profit of the salesman. The entries A_ij are computed case by case on the same principle; for instance, in A32 a defective part is found either in the first or in the second checking.

Equilibrium? Check the candidate entries row by row (row 0: a01, a02, a03; row 1: a11; row 2: a20, a21; row 3: a30, a31). Comparing the corresponding expected payoffs, each candidate either satisfies the equilibrium conditions or leads to a contradiction (or the comparison is irrelevant).
2.12 Advertisement

Players: 2 firms.
Strategies: the market selected for intensive advertisement (they can select only one) out of m markets with numbers of potential customers a1 > a2 > ... > am; strategies i, j with 1 ≤ i, j ≤ m.
Payoffs: if they advertise in different markets, then each gets all customers of its market; if both advertise in the same market i, they share it in proportions pi and qi. Payoff matrices:

Firm 1:                           Firm 2:
1\2    1      2     ...    m      1\2    1      2     ...    m
 1   p1 a1   a1    ...    a1       1   q1 a1   a2    ...    am
 2    a2    p2 a2  ...    a2       2    a1    q2 a2  ...    am
 .                                 .
 m    am     am    ...  pm am      m    a1     a2    ...  qm am
Equilibrium: a strategy pair is an equilibrium if firm 1's entry is largest in its column and firm 2's entry is largest in its row. In columns 2, ..., m and in rows 2, ..., m the elements a1 and a2 dominate, so the only candidates are in the first row and first column:

(1, 1) if p1 a1 ≥ a2;  (2, 1) if a2 ≥ p1 a1,

and from firm 2's matrix

(1, 1) if q1 a1 ≥ a2;  (1, 2) if a2 ≥ q1 a1.

Modified game: firm 1 believes the game is zero-sum, so it takes φ2 = -φ1. An equilibrium in this single matrix is an entry that is largest in its column but smallest in its row; among the candidates only

(1, 1), if p1 a1 ≥ a2.
2.13 Market share

Two firms compete for a business of unit value.

Players: the two firms.
Strategies: efforts x, y ≥ 0 in order to get larger portions of the business (e.g. market).
Payoffs:

φ1 = x/(x + y) - x,   φ2 = y/(x + y) - y.

Best responses:

∂φ1/∂x = [(x + y) - x]/(x + y)^2 - 1 = y/(x + y)^2 - 1 = 0  ⟹  (x + y)^2 = y  ⟹  x = √y - y,

and ∂²φ1/∂x² = -2y/(x + y)^3 < 0, so the stationary point is a maximum.
R1(y) = √y - y if y ≤ 1;  0 if y ≥ 1,    R2(x) = √x - x if x ≤ 1;  0 if x ≥ 1.

The intercept points of the best-response curves are (0, 0) and (1/4, 1/4):

x = √y - y,  y = √x - x  ⟹  x = y  ⟹  2x = √x  ⟹  4x^2 - x = 0  ⟹  x(4x - 1) = 0  ⟹  x = 0 or x = 1/4.
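The best-response dynamics of this game can be sketched numerically; starting from any positive efforts the iteration settles at the equilibrium x = y = 1/4 (the starting values below are my own choice):

```python
import math

def R(effort_other):
    # best response sqrt(y) - y, valid for 0 < y <= 1 (zero otherwise)
    return math.sqrt(effort_other) - effort_other if 0 < effort_other <= 1 else 0.0

x, y = 0.1, 0.9
for _ in range(50):
    x, y = R(y), R(x)
print(x, y)   # both approach 0.25
```

Convergence is very fast here because the derivative of R vanishes at the equilibrium.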
2.14 Advertisement budget allocation

Players: firms allocating their advertisement budgets among m markets.
Strategies: allocations (x1, ..., xm) and (y1, ..., ym).
Payoffs:

φ1 = Σ_{i=1}^m  xi ai/(xi + yi + zi),    φ2 = Σ_{j=1}^m  yj aj/(xj + yj + zj),

where ai is the value of market i and zi = the total advertisement effort of all other firms in market i.
2.15 Inventory control

Players: a retailer and a wholesaler.
Strategies: inventories y, z ≥ 0 (the retailer's own inventory y, and the wholesaler's inventory z reserved for back orders).
Payoffs: with random demand x with pdf f(x),

φ1 = a1 ∫_0^y x f(x) dx + ∫_y^{y+z} [a1 y + a2 (x - y)] f(x) dx + ∫_{y+z}^∞ [a1 y + a2 z] f(x) dx - b1 y

1st term: a1 = unit profit from own inventory (x ≤ y)
2nd term: a2 = unit profit from back order (y < x ≤ y + z)
3rd term: same (x > y + z)
4th term: b1 = unit inventory cost of the retailer

φ2 = a3 ∫_y^{y+z} (x - y) f(x) dx + ∫_{y+z}^∞ a3 z f(x) dx - b2 z

1st term: a3 = unit profit of the wholesaler from back order (y < x ≤ y + z)
2nd term: same (x > y + z)
3rd term: b2 = unit inventory cost of the wholesaler.
2.16 Price strategy

Strategies: the time-varying price of each firm's own product.
Payoffs: let dk(p1, ..., pn) be the demand of good k; then φk is the resulting revenue accumulated over time.
2.17 Duel without sound

Two duelists are placed 2 units from each other; each has a gun with 1 bullet in it. On a signal they start walking toward each other, and each can shoot at any time. Their speeds are equal, and the guns have silencers, so a player does not know whether the other has already fired.

Players: the two duelists.
Strategies: the distances walked before shooting, 0 ≤ x, y ≤ 1.
Payoffs: with hitting probabilities P1(x) and P2(y), and payoff +1 for hitting the opponent and -1 for being hit,

φ1 = P1(x) · 1 + (1 - P1(x)) (-P2(y))   if x < y
φ1 = P1(x) - P2(y)                      if x = y
φ1 = P2(y) (-1) + (1 - P2(y)) P1(x)     if x > y

and φ2 = -φ1. No match, no equilibrium.
2.18 Duel with sound

The same game without silencers: if a player shoots and misses, the other hears it, walks all the way and hits with certainty. The best response R1(y) is therefore determined by the cases y < 1/2, y = 1/2 and y > 1/2; R2(x) is the mirror image, and the only equilibrium is x = y = 1/2: do not shoot early and also do not shoot late.
2.19 Spying game

Players: a spy and a counterespionage agency.
Strategies: efforts x, y ≥ 0.
Payoffs: built from the catching probability P(x, y), the value of the information V(x) and the penalty U.

Equilibrium: y* = y_max and x* = 0 if y_max is sufficiently large; otherwise x* is the interior stationary point of the spy's payoff.
2.20 Matching pennies

Two participants; each has a coin and can show head (H) or tail (T).

Players: the two participants.
Strategies: H or T.
Payoffs: if the coins show identical sides, then player 1 wins $1, otherwise player 2 wins $1; φ2 = -φ1.

No equilibrium exists (in pure strategies).
2.21 Modified war game

The war game of Section 2.5, where the bomb now destroys everything within distance ε of where it falls.

Players: A and S.
Strategies: for A: x ∈ [0, 1]; for S: y ∈ [0, 1].
Payoffs: with ε > 0 small,

φ1 = 1 if |x - y| < ε;  0 otherwise,    φ2 = -φ1.
2.22
City with rectangular shape with respectively. Value of block City map:
a11 a21
. . .
a13 a23
. . .
... ...
a1,n2 a2,n2
. . .
a1,n1 a2,n1
. . .
a1n a2n
. . .
Players:
Strategies:
For C: row
or column
to search
Payos: 1 = value of block if the bomb was there and became found:
2 = 1
Equilibrium: matrix element is largest in its column and smallest in its row. Facts: largest elements in all columns are positive, smallest elements in all rows are zeros
no equilibrium
23
The game matrix has a row for each hiding place (1,1), ..., (1,n), (2,1), ..., (m,n) and a column for each of the m rows and n columns that can be searched; its entry is a_ij when the searched line contains block (i, j), and 0 otherwise.
2.23 Second price auction

Players: n bidders; bidder k values the unit at vk.
Strategies: bids x1, ..., xn, each made without knowing the bids of the others.
Payoffs: the highest bidder wins the unit, but he has to pay the second highest bid only:

φk = vk - max_{l≠k} {xl}  if xk = max{x1, ..., xn};  0 otherwise.
2.24 Irrigation system

Players: n users of a common irrigation system, where supplying a total amount s of water costs K(s).
Strategies: water amounts x1, ..., xn.
Payoffs: benefit minus the proportional share of the common cost:

φk = Bk(xk) - [xk / Σ_{l=1}^n xl] K(Σ_{l=1}^n xl).

With unit price p(s) = K(s)/s this cost share is Ck(xk) = xk p(Σ xl).
2.25

Players: n users of a common water treatment facility.
Strategies: treated amounts x1, ..., xn.
Payoffs: benefit (reuse of water, not paying penalty, etc.) minus the cost share of water treatment:

φk = Bk(xk) - [xk / Σ_{l=1}^n xl] K(Σ_{l=1}^n xl).

Same as the previous example.
2.26

Players: n players.
Strategies: x1, ..., xn.
2.27 Chess game

Players: 2 players controlling the white (W) and black (B) figures.
Strategies: complete playing rules for W and B.
Payoffs:

φ1 = 1 if W wins;  -1 if B wins;  0 if tie,    φ2 = -φ1.
Method (finding all pure equilibria of a finite two-person game): do a loop going through every element (i, j) and check whether φ1(i, j) is largest in its column and φ2(i, j) is largest in its row; every such (i, j) is an equilibrium.
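The loop above is a few lines of code. A sketch for a bimatrix game given by payoff tables P1, P2 (the example data encode the prisoners' dilemma, strategies ordered NC, C):

```python
def pure_equilibria(P1, P2):
    """All (i, j) where P1[i][j] is the max of column j and P2[i][j] the max of row i."""
    m, n = len(P1), len(P1[0])
    found = []
    for i in range(m):
        for j in range(n):
            if P1[i][j] == max(P1[k][j] for k in range(m)) and \
               P2[i][j] == max(P2[i][l] for l in range(n)):
                found.append((i, j))
    return found

# Prisoners' dilemma (order NC, C): (C, C) is the unique equilibrium
P1 = [[-2, -10], [-1, -5]]
P2 = [[-2, -1], [-10, -5]]
print(pure_equilibria(P1, P2))   # [(1, 1)]
```

The same double loop works for any finite two-person game; it returns the empty list when no pure equilibrium exists (e.g. matching pennies).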
4 Existence of Equilibria

4.1 Games representable by finite rooted trees

Assume full information to all players:
(i) the game starts at the root of the tree;
(ii) to each node of the tree a player is assigned, and the game proceeds on an arc originating at this node to its end vertex (by the decision of the player which is assigned to the originating node);
(iii) each player knows the vertices at which he/she has to make decisions;
(iv) at each terminal node, each player has a given payoff value.

Example 4.1

Example 4.2 Chess game.

Theorem 4.1 The game has at least one equilibrium point (EP).

Proof. From the root to each terminal node there is a unique path, since otherwise a cycle would arise.
Length of a path = number of arcs on it. Proof by induction with respect to the length h of the tree (the maximal path length from the root to any endpoint).

If h = 0, there is only one point, no decision: that point is the EP.
If h ≥ 1, then assume player l is assigned to the root. Let I1, I2, ..., Im be the vertices reachable from the root in one step. With roots I1, I2, ..., Im we have games with smaller length, with equilibrium payoffs (φ1^(1), ..., φn^(1)), ..., (φ1^(m), ..., φn^(m)).

The EP of the original game is obtained as follows: let φl^(k0) = max_{1≤k≤m} {φl^(k)}; then player l moves to I_{k0}, and every player i (also for i ≠ l) follows the EP of subgame k0.
Note. Project opportunity to develop a program.

Example 4.3

Remark. Some vertices might not be assigned to any player: at a random vertex the game proceeds according to a given discrete distribution defined on the arcs originating from this random vertex. The same proof applies to show the existence of an EP.
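The inductive construction in the proof is exactly backward induction, and the suggested program is short. A sketch, under the assumption that each inner node stores the deciding player and each leaf a payoff vector (the tree encoding is my own):

```python
def backward_induction(node):
    """node is ('leaf', payoffs) or ('node', player, children).
    Returns the payoff vector reached when every player best-responds."""
    if node[0] == 'leaf':
        return node[1]
    _, player, children = node
    outcomes = [backward_induction(c) for c in children]
    # the deciding player moves to the subgame maximizing his own payoff
    return max(outcomes, key=lambda p: p[player])

# Tiny 2-player tree: player 0 moves first, then player 1.
tree = ('node', 0, [
    ('node', 1, [('leaf', (3, 1)), ('leaf', (0, 2))]),   # player 1 picks (0, 2)
    ('node', 1, [('leaf', (2, 1)), ('leaf', (1, 0))]),   # player 1 picks (2, 1)
])
print(backward_induction(tree))   # (2, 1): player 0 prefers 2 over 0
```

Chance vertices as in the remark could be handled by returning the expected payoff vector over the children instead of a maximum.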
5 Continuous games

5.1 Brouwer fixed point theorem

A fixed point of f satisfies x* = f(x*).

Theorem 5.1 (In one dimension) Let f : [a, b] → [a, b] be continuous; then there exists an x* ∈ [a, b] such that f(x*) = x*.

Proof.
Case 1: f(a) = a, then x* = a.
Case 2: f(b) = b, then x* = b.
Case 3: a < f(a) and b > f(b); then the curve of f must intercept the 45-degree line f(x) = x, since g(x) = f(x) - x is continuous with g(a) > 0 > g(b).
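Case 3 of the proof also gives an algorithm: bisect on g(x) = f(x) - x, which changes sign on [a, b]. A sketch (the example map cos on [0, 1] is my own choice):

```python
import math

def fixed_point_bisection(f, a, b, tol=1e-12):
    """Find x with f(x) = x for continuous f: [a, b] -> [a, b],
    bisecting on g(x) = f(x) - x with g(a) >= 0 >= g(b)."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = fixed_point_bisection(math.cos, 0.0, 1.0)
print(x)   # the fixed point of cos, about 0.739085
```

In higher dimensions no such constructive shortcut exists, which is why the theorems below matter.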
The conditions cannot be dropped:

2. Drop closedness: D = (0, 1), f(x) = x/2 has no fixed point in D.
3. Drop boundedness: D = R, f(x) = x + 1 has no fixed point.
4. Drop continuity: for example f(x) = x/2 if x ≠ 0 and f(0) = 1 (on [0, 1]) has no fixed point.
5.2 Kakutani fixed point theorem for point-to-set mappings

Theorem 5.3 (Kakutani fixed point theorem) Let f(x) be a point-to-set mapping such that for all x ∈ D, f(x) is a subset of D. Assume
(i) D is nonempty, convex, closed, bounded in Rn;
(ii) f(x) is a nonempty, closed, convex subset of D for all x ∈ D;
(iii) Gf = {(x, y) | x ∈ D, y ∈ f(x)} is closed.
Then there is an x* such that x* ∈ f(x*).

Remark. Gf is the graph of the mapping f.
5.3 Banach fixed point theorem

Preliminaries. In one dimension (and similarly in Rn), every bounded sequence has a convergent subsequence: let I0 = [A, B] contain infinitely many points of the sequence; then at least one half of I0 also has infinitely many points from the sequence. Let x0 ∈ I0; let I1 be a half of I0 having infinitely many points and select x1 ∈ I1 with x1 ≠ x0; let I2 be a half of I1 having infinitely many points and select x2 ∈ I2 with x2 ≠ x0, x1; and so on. The selected subsequence is convergent.

A sequence is a Cauchy sequence if ρ(xn, xm) → 0 as n, m → ∞ (ρ(·,·) is the distance of vectors).

Convergent ⟹ Cauchy:

0 ≤ ρ(xn, xm) ≤ ρ(xn, x*) + ρ(x*, xm) → 0.

Cauchy ⟹ convergent (in Rn): for any ε > 0 there is N0 such that ρ(xm, xn) < ε if m, n ≥ N0. Hence the sequence is bounded (only x1, x2, ..., x_{N0-1} can be farther than ε from x_{N0}), so it has a convergent subsequence, and since the sequence is Cauchy, the entire sequence converges to the same limit.
Theorem (Banach fixed point theorem). Let D ⊆ Rn be closed and let f : D → D be a contraction: ρ(f(x), f(y)) ≤ q ρ(x, y) for all x, y ∈ D with some 0 ≤ q < 1. Then f has a unique fixed point in D, and the iteration x_{k+1} = f(x_k) converges to it from any x0 ∈ D.

Proof.
(i) Let x0 ∈ D and x_{k+1} = f(x_k). Since f(x) ∈ D for all x ∈ D, the sequence stays in D, and {xk} is a Cauchy sequence:

ρ(x_{k+1}, x_k) = ρ(f(x_k), f(x_{k-1})) ≤ q ρ(x_k, x_{k-1}) ≤ ... ≤ q^k ρ(x1, x0),

so for n < m,

ρ(xn, xm) ≤ ρ(xn, x_{n+1}) + ρ(x_{n+1}, x_{n+2}) + ... + ρ(x_{m-1}, xm)
         ≤ (q^n + q^{n+1} + ... + q^{m-1}) ρ(x0, x1) ≤ [q^n/(1 - q)] ρ(x0, x1).

If n, m → ∞, this bound tends to 0, so {xk} converges.

(ii) Since Rn is complete and D is closed, the limit x* ∈ D.

(iii) x_{n+1} = f(x_n) for all n. Let n → ∞; since f is continuous (every contraction is continuous), x* = f(x*), so x* is a fixed point.

(iv) Let x*, y* be 2 fixed points:

ρ(x*, y*) = ρ(f(x*), f(y*)) ≤ q ρ(x*, y*)  ⟹  ρ(x*, y*)[1 - q] ≤ 0  ⟹  ρ(x*, y*) = 0,

so x* = y*; two different fixed points would be a contradiction.

The conditions cannot be dropped:
- D is not closed: D = (0, 1), f(x) = x/2, no fixed point;
- f is not a contraction: D = (-∞, ∞), f(x) = x + 1, no fixed point.
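The proof is constructive: the iteration x_{k+1} = f(x_k) converges geometrically. A one-dimensional sketch (the example map is my own choice; it is a contraction on [1, 2] with fixed point √2):

```python
def banach_iterate(f, x0, tol=1e-12, max_iter=10000):
    """Fixed-point iteration x_{k+1} = f(x_k); converges when f is a
    contraction mapping a closed set into itself."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# f(x) = (x + 2/x)/2 maps [1, 2] into itself with |f'| <= 1/2 there
root = banach_iterate(lambda x: 0.5 * (x + 2.0 / x), 1.0)
print(root)   # approximately 1.41421356
```

The same loop, with a vector norm in place of abs, is the n-dimensional iteration used later for best-reply mappings.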
Remark. Comparing the Brouwer and Banach theorems:
- condition (i) on D is weaker in Banach's theorem (no condition on boundedness or convexity);
- condition (ii) on f is weaker in Brouwer's theorem: contraction ⟹ continuity.

Indeed, ρ(f(xn), f(x*)) ≤ q ρ(xn, x*), so if xn → x*, then ρ(xn, x*) → 0, so f(xn) → f(x*) as well.
31
Game theory
5.4
Conditions to be contraction
f (x) f (y) = f (c)(x y) c x y
In one dimension:
If
to rewrite?
f (x) = 1 + g (x)
negative?
x = x + g(x) h(x)
f (x)
(h(x) = 0)
x ,
x :
1 + g (x )h(x ) = 0 h(x ) = 1 g (x )
so select
h(x) =
1 g (x)
x = x
Let
g(x) , g (x)
iteration
xk+1 = xk
g (xk )
xk+1 = xk
Secant method
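Both iterations are easy to sketch (the test function x^2 - 2 and its starting points are my own example):

```python
def newton(g, dg, x0, tol=1e-12, max_iter=100):
    # x_{k+1} = x_k - g(x_k)/g'(x_k): the choice h(x) = -1/g'(x) above
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

def secant(g, x0, x1, tol=1e-12, max_iter=100):
    # replace g'(x_k) by the difference quotient through the last two points
    for _ in range(max_iter):
        denom = g(x1) - g(x0)
        if denom == 0:
            return x1
        x2 = x1 - g(x1) * (x1 - x0) / denom
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("no convergence")

r1 = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
r2 = secant(lambda x: x * x - 2, 1.0, 2.0)
print(r1, r2)   # both approximate sqrt(2)
```

Newton needs the derivative but converges quadratically; the secant method trades a little speed for not needing g'.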
In n dimensions:

(i) Let g(t) = f(y + t(x - y)), so g(0) = f(y), g(1) = f(x), and

g'(t) = f'(y + t(x - y))(x - y).

For scalar-valued f, g(1) - g(0) = g'(c)(1 - 0) with some c ∈ (0, 1), so f(x) - f(y) = f'(z)(x - y) with z on the linear segment between x and y. For vector-valued f,

f(x) - f(y) = ∫_0^1 f'(y + t(x - y))(x - y) dt.

Proof. g(1) = f(x), g(0) = f(y), so

f(x) - f(y) = ∫_0^1 g'(t) dt = ∫_0^1 f'(y + t(x - y))(x - y) dt.

(ii) Cauchy inequality: if a1, a2, ..., an; b1, b2, ..., bn are real numbers, then

(Σ ai bi)^2 ≤ (Σ ai^2)(Σ bi^2).

Proof. Let

F(x) = Σ (ai - x bi)^2 = Σ ai^2 - 2x Σ ai bi + x^2 Σ bi^2 ≥ 0

for all x, so with A = Σ bi^2, B = Σ ai^2, C = Σ ai bi the discriminant must not be positive:

4C^2 - 4AB ≤ 0  ⟹  C^2 ≤ AB.
(iii) Vector norms on Rn. Axioms of norms:
(a) ||x|| ≥ 0, and ||x|| = 0 ⟺ x = 0;
(b) ||αx|| = |α| ||x|| with all vectors x and scalars α;
(c) ||x + y|| ≤ ||x|| + ||y||.

l1-norm:

||x||_1 = Σ_{i=1}^n |xi|   (1)

l2-norm:

||x||_2 = (Σ_{i=1}^n |xi|^2)^(1/2)   (2)

Proof of (c) for the l2-norm: by the Cauchy inequality,

||x + y||_2^2 = Σ |xi + yi|^2 = Σ |xi|^2 + 2 Σ xi yi + Σ |yi|^2
            ≤ Σ |xi|^2 + Σ |yi|^2 + 2 (Σ |xi|^2)^(1/2) (Σ |yi|^2)^(1/2) = (||x||_2 + ||y||_2)^2.

l∞-norm:

||x||_∞ = max_i |xi|

Proof. Axioms (a), (b) are trivial; (c) can be proven as follows. Let max_i |xi + yi| = |x_{i0} + y_{i0}|; then

||x + y||_∞ = |x_{i0} + y_{i0}| ≤ |x_{i0}| + |y_{i0}| ≤ ||x||_∞ + ||y||_∞.
Matrix norms. Let A be an n×n matrix. A matrix norm satisfies the axioms:
(a) ||A|| ≥ 0, and ||A|| = 0 if and only if A = 0;
(b) ||αA|| = |α| ||A|| with all matrices and constants;
(c) ||A + B|| ≤ ||A|| + ||B||.

A vector norm generates a matrix norm by ||A|| = max_{||x||=1} ||Ax||.

Example 5.2 It can be proved that the generated matrix norms are:

||A||_1 = max_j Σ_{i=1}^n |a_ij|   (maximal column sum)   (3)
||A||_∞ = max_i Σ_{j=1}^n |a_ij|   (maximal row sum)   (4)
||A||_2 = (max eigenvalue of A^T A)^(1/2)   (Euclidean norm)   (5)

Lemma 5.1 If the matrix norm ||·|| is generated from the vector norm ||·||, then for all matrices A and vectors x,

||Ax|| ≤ ||A|| ||x||.

Proof. For x ≠ 0,

||Ax|| = ||A (x/||x||)|| ||x|| ≤ max_{||z||=1} ||Az|| ||x|| = ||A|| ||x||.

Lemma 5.2 Any matrix norm generated from a vector norm satisfies axioms (a), (b) and (c), furthermore
(d) ||A B|| ≤ ||A|| ||B||.

Proof.
(a) ||A|| ≥ 0 is trivial; ||A|| = 0 means max_{||x||=1} ||Ax|| = 0, so Ax = 0 for all x, hence A = 0.
(b) is trivial.
(c) ||A + B|| = max_{||x||=1} ||(A + B)x|| ≤ max_{||x||=1} ||Ax|| + max_{||x||=1} ||Bx|| = ||A|| + ||B||.
(d) With some ||z|| = 1,

||AB|| = ||ABz|| ≤ ||A|| ||Bz|| ≤ ||A|| ||B|| ||z|| = ||A|| ||B||.
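The generated norms (3) and (4) and the inequality of Lemma 5.1 can be checked directly. A sketch with a small example matrix of my own:

```python
def norm1(A):
    # column-sum norm: max_j sum_i |a_ij|, generated by the l1 vector norm
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norm_inf(A):
    # row-sum norm: max_i sum_j |a_ij|, generated by the l-infinity vector norm
    return max(sum(abs(v) for v in row) for row in A)

A = [[1, -2], [3, 4]]
print(norm1(A), norm_inf(A))   # 6 (column sums 4, 6) and 7 (row sums 3, 7)

# check the compatibility inequality ||Ax||_1 <= ||A||_1 ||x||_1
x = [1.0, -1.0]
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
assert sum(abs(v) for v in Ax) <= norm1(A) * sum(abs(v) for v in x)
```

The Euclidean norm (5) needs an eigenvalue computation and is omitted here.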
Example 5.3 The Frobenius norm

||A||_F = (Σ_{i=1}^n Σ_{j=1}^n |a_ij|^2)^(1/2)   (6)

satisfies the axioms: (a) and (b) are trivial, and (c) can be proven as axiom (c) for ||x||_2. By the Cauchy inequality,

||Ax||_2^2 = Σ_i (Σ_j a_ij xj)^2 ≤ Σ_i (Σ_j |a_ij|^2)(Σ_j |xj|^2) = ||A||_F^2 ||x||_2^2.

Definition 5.2 Matrix norm ||A|| and vector norm ||x|| are compatible if for all matrices and vectors,

||Ax|| ≤ ||A|| ||x||.

Corollary. The norm pairs {||A||_2, ||x||_2}, {||A||_1, ||x||_1}, {||A||_∞, ||x||_∞} and {||A||_F, ||x||_2} are compatible.

Remark. Not all matrix norms satisfying axioms (a), (b), (c) are compatible with a vector norm. For example, half of a generated norm still satisfies the axioms, but taking A = I it would give

||x|| = ||I x|| ≤ ||I|| ||x|| = (1/2) ||x||,

a contradiction.
Definition 5.3 The distance of vectors x and y can be defined as ρ(x, y) = ||x - y|| with some vector norm.

From the integral form above, if D ⊆ Rn is convex and f is differentiable on D, then

||f(x) - f(y)|| ≤ ∫_0^1 ||f'(y + t(x - y))|| ||x - y|| dt,

so if we assume that for all z ∈ D, ||f'(z)|| ≤ q < 1, then

||f(x) - f(y)|| ≤ q ||x - y||,   (7)

that is, f is a contraction on D.

Remark. (7) might hold with one norm but not with others.
Example. Different norms of the same matrix can lie on different sides of 1: for one of the example matrices, ||A1||_1 = 1.6 while another of its norms is 0.825 < 1, so whether (7) certifies a contraction depends on the chosen norm.
Solving a nonlinear system g(x) = 0. Rewrite it as a fixed point problem: how to rewrite? Let

x = x + H(x) g(x) =: f(x),

where H(x) is an invertible n×n matrix. The derivative of f involves the Jacobian J(x) of g; if x* is a solution, then g(x*) = 0, and the choice H(x) = -J^{-1}(x) makes the derivative of f vanish at x*. The fixed point equation becomes

x = x - J^{-1}(x) g(x),

with iteration method:

x_{k+1} = x_k - J^{-1}(x_k) g(x_k)   (Newton's method in n dimensions).
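A sketch of this iteration for a 2×2 system, with the 2×2 inverse written out explicitly (the example system, the unit circle intersected with the diagonal, is my own):

```python
def newton_system(g, J, x0, tol=1e-12, max_iter=100):
    # x_{k+1} = x_k - J(x_k)^{-1} g(x_k) for a 2-dimensional system
    x = list(x0)
    for _ in range(max_iter):
        gx = g(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx0 = ( d * gx[0] - b * gx[1]) / det   # first row of J^{-1} g
        dx1 = (-c * gx[0] + a * gx[1]) / det   # second row of J^{-1} g
        x = [x[0] - dx0, x[1] - dx1]
        if max(abs(dx0), abs(dx1)) < tol:
            return x
    raise RuntimeError("no convergence")

# g(x, y) = (x^2 + y^2 - 1, x - y): unit circle meets the diagonal
g = lambda x: [x[0] ** 2 + x[1] ** 2 - 1, x[0] - x[1]]
J = lambda x: [[2 * x[0], 2 * x[1]], [1.0, -1.0]]
sol = newton_system(g, J, [1.0, 0.0])
print(sol)   # both coordinates approach 1/sqrt(2)
```

For larger systems one would solve the linear system J(x) d = g(x) instead of forming the inverse.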
5.5 Relation of EP and fixed points

1. Consider the game G = {n; S1, ..., Sn; φ1, ..., φn} with strategy sets Sk and payoff functions φk. Define

Φ(x, y) = Σ_{k=1}^n φk(x1, ..., x_{k-1}, yk, x_{k+1}, ..., xn)

for all x, y ∈ S = S1 × S2 × ... × Sn.

Lemma 5.4 x* = (x1*, ..., xn*) is an EP ⟺

Φ(x*, x*) ≥ Φ(x*, y) for all y ∈ S.   (8)

Proof. If x* is an EP, then for each k

φk(x1*, ..., xn*) ≥ φk(x1*, ..., yk, ..., xn*),

and adding up for k = 1, ..., n gives (8). Conversely, choosing y = (x1*, ..., yl, ..., xn*) in (8), after cancellation of the terms with k ≠ l,

φl(x1*, ..., xn*) ≥ φl(x1*, ..., yl, ..., xn*),

so x* is an EP.

2. The best reply is defined as follows: for x* ∈ S and k = 1, ..., n let

Rk(x*) = argmax {φk(x1*, ..., x*_{k-1}, xk, x*_{k+1}, ..., xn*) | xk ∈ Sk},

and R(x*) = R1(x*) × ... × Rn(x*).

Remark. x* is an EP ⟺ x* ∈ R(x*), so an EP is equivalent to a fixed point problem; but this is a different problem than the one with the mapping H(x) above.
Game theory
Theorem 5.8 Assume R(x) is unique (e.g k is strictly concave in xk ) and continuous
in x, furthermore S = S1 Sn is nonempty, convex, closed, bounded. Then there is at least one EP.
Proof.
Proof.
Theorem 5.10
(i) R(x) is nonempty, closed, convex for all x S ; (ii) S is nonempty, convex, closed, bounded; (iii) GX = {(x, y) | x S, y R(x)} is closed. Then there is an EP.
Proof.
In using Brouwer xedpoint theorem only continuity of best reply is needed, but in applying Banach's xedpoint theorem it has to be contraction.
6 Nikaido–Isoda theorem

Theorem (Nikaido–Isoda). Assume for k = 1, ..., n:
(i) Sk is nonempty, convex, closed, bounded in a finite dimensional vector space;
(ii) φk is continuous on S;
(iii) φk is concave in xk.
Then the game has at least one EP.

6.1 Concave functions

Let D ⊆ Rn be a convex set. f : D → R is called concave if

f(αx + βy) ≥ α f(x) + β f(y)

for all x, y ∈ D and all 0 ≤ α, β ≤ 1, α + β = 1.
Assume β = 1 - α; then

f(αx + (1 - α)y) ≥ α f(x) + (1 - α) f(y)
f(y + α(x - y)) - f(y) ≥ α [f(x) - f(y)]
[f(y + α(x - y)) - f(y)]/α ≥ f(x) - f(y).

If f is differentiable, letting α → 0 gives

f'(y)(x - y) ≥ f(x) - f(y).

In one dimension this shows that f' decreases, so if f'' exists, then f''(x) ≤ 0.
Lemma 6.1 Let D be convex, closed, bounded and f continuous and concave. Then the solutions of

maximize_{x ∈ D} f(x)

form a convex, closed set.

Proof.
(i) Closed: let xk ∈ D with f(xk) = f* for all k and xk → x*. Since D is closed, x* ∈ D, and since f is continuous, f(x*) = f*.
(ii) Convex: let x, y ∈ D be maximal and z = αx + (1 - α)y (0 ≤ α ≤ 1). Since f is concave,

f(z) ≥ α f(x) + (1 - α) f(y) = f*,

so f(z) = f*.
Lemma 6.2 Let D be convex in Rn and f strictly concave. Then f cannot have multiple maximum points.

Proof. Assume x ≠ y are both maximal; then with z = αx + (1 - α)y ∈ D, 0 < α < 1,

f(z) > α f(x) + (1 - α) f(y) = f*,

a contradiction.
Proof of the Nikaido–Isoda theorem (sketch). Consider the best reply mapping R(x) = R1(x) × ... × Rn(x). By Lemma 6.1 each Rk(x) is a nonempty, closed, convex subset of Sk, so R(x) is also closed and convex. To apply the Kakutani theorem, the graph

G = {(x, y) | x ∈ S, y ∈ R(x)}

has to be closed. Select x^(l) → x* and y^(l) ∈ R(x^(l)) such that y^(l) → y*. Then for every k and all yk ∈ Sk,

φk(x1^(l), ..., x_{k-1}^(l), yk^(l), x_{k+1}^(l), ..., xn^(l)) ≥ φk(x1^(l), ..., x_{k-1}^(l), yk, x_{k+1}^(l), ..., xn^(l)).

Let l → ∞; by the continuity of φk,

φk(x1*, ..., x_{k-1}*, yk*, x_{k+1}*, ..., xn*) ≥ φk(x1*, ..., x_{k-1}*, yk, x_{k+1}*, ..., xn*),

so yk* is optimal: yk* ∈ Rk(x*), hence y* ∈ R(x*) and (x*, y*) ∈ G.
6.2 Counterexamples for Nikaido–Isoda theorem

Not bounded: with unbounded strategy sets and increasing payoffs, no EP exists.

Not continuous:

φ1 = x + y if x < 1;  y if x = 1,    φ2 = x + y if y < 1;  x if y = 1

(on the unit square): no EP exists.

Not concave: a payoff whose best responses jump (cases y < 1/2, y > 1/2, y = 1/2): no EP exists.

Remark. There may also be infinitely many equilibria, and strict concavity of φk is not enough for uniqueness of the EP, although in optimization it is enough.
7 Applications

7.1 Matrix games

Zero-sum 2-person game with payoff matrix A (m × n). Players select strategies randomly according to selected discrete distributions on their strategy sets:

S1 = {x1 | x1 = (x1^(1), ..., x1^(m)), x1^(i) ≥ 0, Σ_i x1^(i) = 1},  x1^(i) = P(strategy i is chosen by player 1),
S2 = {x2 | x2 = (x2^(1), ..., x2^(n)), x2^(j) ≥ 0, Σ_j x2^(j) = 1},  x2^(j) = P(strategy j is chosen by player 2).

φ1(x1, x2) = E(payoff) = Σ_i Σ_j a_ij x1^(i) x2^(j) = x1^T A x2,    φ2(x1, x2) = -φ1(x1, x2)

⟹ there is at least one EP.
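A quick numeric check of the mixed extension, using the matching pennies matrix (which has no pure equilibrium):

```python
def expected_payoff(A, x1, x2):
    # phi_1(x1, x2) = x1^T A x2 for mixed strategies x1, x2
    return sum(x1[i] * A[i][j] * x2[j]
               for i in range(len(A)) for j in range(len(A[0])))

A = [[1, -1], [-1, 1]]                 # matching pennies
v = expected_payoff(A, [0.5, 0.5], [0.5, 0.5])
print(v)                               # value 0 at the mixed equilibrium
```

At x1 = (1/2, 1/2) every strategy of player 2 yields the same expected payoff 0, which is why the uniform mixed pair is an equilibrium.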
7.2 Bimatrix games

2-person finite game with payoff matrices A, B. Strategy sets S1 and S2 are as before, and payoffs are expectations again. With random strategies

φ1(x1, x2) = x1^T A x2,    φ2(x1, x2) = x1^T B x2

⟹ there is at least one EP.
7.3 Mixed finite games

n players with strategy sets Sk = {1, 2, ..., mk} and payoffs φk. In the mixed extension,

S̄k = {xk | xk = (xk^(1), ..., xk^(mk)), xk^(i) ≥ 0, Σ_i xk^(i) = 1},

and the expected payoff is

φ̄k(x1, ..., xn) = Σ_{i1} Σ_{i2} ... Σ_{in} φk(i1, ..., in) x1^(i1) x2^(i2) ... xn^(in)

⟹ an EP exists.
7.4 Polyhedron games

n = 2, with

Sk = {xk | B^k xk ≤ b^k},  φ1(x1, x2) = x1^T A1 x2,  φ2(x1, x2) = -φ1(x1, x2).

The mixed extension of a matrix game is the special case

B^k = (-I; 1^T; -1^T),  b^k = (0; 1; -1),

since then

B^k xk ≤ b^k  ⟺  -xk ≤ 0, 1^T xk ≤ 1, -1^T xk ≤ -1  ⟺  xk ≥ 0, 1^T xk = 1.
7.5 Multiproduct oligopolies

n firms, M products. Firm k produces the vector xk = (xk^(1), ..., xk^(M)); the price vector is

p = p(Σ_{l=1}^n xl),

and the payoff of firm k is

φk(x1, ..., xn) = xk^T p(Σ_{l=1}^n xl) - Ck(xk)
                  [revenue]             [cost]

Assumptions:
(i) Sk ⊆ R^M_+ is nonempty, convex, closed, bounded;
(ii) p and Ck are continuous;
(iii) xk^T p(Σ_{l=1}^n xl) is concave in xk;
(iv) Ck is convex.

Then the Nikaido–Isoda theorem applies, so an EP exists.

Note. If M = 1, these reduce to conditions on the scalar price function.
Before finding conditions for monotonicity, a matrix-theoretical reminder. Let S be a real symmetric matrix (S = S^T). In the eigenvalue equation S x = λ x, multiplying by the conjugate transpose x̄^T,

x̄^T S x = λ x̄^T x.   (9)

Notice that x̄^T S x is real, since

conj(x̄^T S x) = x^T S x̄ = (x̄^T S^T x)^T = x̄^T S x.

Similarly,

x̄^T x = Σ x̄i xi = Σ |xi|^2 > 0 for x ≠ 0,

so all eigenvalues λ are real. From the diagonal form of S with an orthogonal matrix U,

S = U^T diag(λ1, ..., λn) U,

so with z = U x,

x^T S x = z^T diag(λ1, ..., λn) z = Σ λi zi^2 > 0

if all λi > 0 and z ≠ 0 (equivalently x ≠ 0); similarly x^T S x ≥ 0 for all x if all λi ≥ 0.
Theorem 7.2 Assume D is convex in R^M. Let J(x) denote the Jacobian of f, and assume J(x) is continuous on D. Then f is monotone ⟺ J(x) + J(x)^T is positive semidefinite on D.

Proof. Consider the function g(t) = f(y + t(x - y)) with g(0) = f(y), g(1) = f(x). Then

f(x) - f(y) = ∫_0^1 g'(t) dt = ∫_0^1 J(y + t(x - y))(x - y) dt,

therefore

(x - y)^T [f(x) - f(y)] = ∫_0^1 (x - y)^T J(y + t(x - y)) (x - y) dt
                        = (1/2) ∫_0^1 (x - y)^T [J + J^T](y + t(x - y)) (x - y) dt ≥ 0,

since for any matrix A and vector u the scalar u^T A u satisfies u^T A u = (u^T A u)^T = u^T A^T u.

Conversely, if J(x) + J(x)^T is not positive semidefinite at some y0, then there is u ≠ 0 with

u^T [J(y0) + J^T(y0)] u < 0,

and by continuity this holds in a neighborhood of y0. Take ε > 0 such that y0 + t ε u belongs to this neighborhood for 0 ≤ t ≤ 1, and let x = y0 + ε u. Then

(x - y0)^T [f(x) - f(y0)] = (1/2) ε^2 ∫_0^1 u^T [J(y0 + t ε u) + J^T(y0 + t ε u)] u dt < 0,

a contradiction.

Theorem 7.3 Assume D ⊆ Rn is convex and the Jacobian J(x) of f is continuous; if J(x) + J(x)^T is positive definite on D, then f is strictly monotone.

Proof. Same as the first part of the previous theorem.

Note. In one dimension J(x) = f'(x), so we get back the well-known monotonicity condition.
Lemma 7.1 Let f : D → R^M with D ⊆ R^M_+ a convex set; assume f is monotone (decreasing) and all components of f are concave. Then g(x) = x^T f(x) is concave.

Proof. Let α, β ≥ 0, α + β = 1, and x, y ∈ D. Expanding

g(αx + βy) = (αx + βy)^T f(αx + βy)

and using the monotonicity of f (with the identities α - α^2 = α(1 - α) = αβ = β(1 - β) = β - β^2) together with the concavity of the components of f gives

g(αx + βy) ≥ α g(x) + β g(y).

Corollary. If the price function satisfies these conditions and Ck is convex, then

φk(x1, ..., xn) = xk^T p(xk + Σ_{l≠k} xl) - Ck(xk)

is concave in xk ⟹ an EP exists.
7.6 Single-product oligopolies

n firms with production levels x1, ..., xn, 0 ≤ xk ≤ Lk, cost functions Ck(xk), and unit price p(s), where s = x1 + ... + xn. Profit of firm k:

φk = xk p(Σ_{l=1}^n xl) - Ck(xk)

Assumptions: the functions p and all Ck are twice continuously differentiable, and
(i) p' < 0;
(ii) p' + xk p'' ≤ 0;
(iii) p' - Ck'' < 0 (which holds if Ck is convex).
Best responses: firm k solves

xk p(xk + sk) - Ck(xk) → max,  where sk = Σ_{l≠k} xl.

With g(xk) = p(xk + sk) + xk p'(xk + sk) - Ck'(xk),

Rk(sk) = 0 if g(0) ≤ 0;  Lk if g(Lk) ≥ 0;  otherwise the unique solution xk* of g(xk) = 0,   (10)

where uniqueness follows from

g'(xk) = 2p' + xk p'' - Ck'' < 0.

In cases 1 and 2 of (10), Rk' = 0; in the interior case, implicit differentiation of g(xk) = 0 gives

Rk'(sk) = -(p' + xk p'')/(2p' + xk p'' - Ck''),

so -1 < Rk'(sk) ≤ 0.

Case n = 2: the equilibrium solves x = R1(y), y = R2(x), a fixed point problem with Jacobian

J = ( 0       R1'(y) )
    ( R2'(x)  0      )

and since |R1'(y)| ≤ q < 1 and |R2'(x)| ≤ q < 1 with some q (because of continuous derivatives on a bounded set), the best response mapping is a contraction ⟹ unique EP.
General case of n ≥ 2: express the best response as a function of the total output s (including xk):

Rk(s) = 0 if g(0) ≤ 0;  Lk if g(Lk) ≥ 0;  otherwise the unique solution xk* of g(xk) = 0,   (11)

where now g(xk) = p(s) + xk p'(s) - Ck'(xk), so that in the interior case

g(0) = p(s) - Ck'(0) > 0,  g(Lk) = p(s) + Lk p'(s) - Ck'(Lk) < 0,  g'(xk) = p'(s) - Ck''(xk) < 0

⟹ a unique solution. In the boundary cases Rk' = 0; otherwise, differentiating p(s) + xk p'(s) - Ck'(xk) = 0 with respect to s,

p' + Rk' p' + xk p'' - Ck'' Rk' = 0  ⟹  Rk'(s) = -(p' + xk p'')/(p' - Ck'') ≤ 0.
Equilibrium: s is an equilibrium total output if and only if

h(s) = Σ_{k=1}^n Rk(s) - s = 0.

Note. h(0) ≥ 0, h(Σ_{k=1}^n Lk) ≤ 0, and h is strictly decreasing, so the root s* is unique; it can be found by standard one-dimensional methods, and the equilibrium is xk* = Rk(s*).
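A sketch of this one-dimensional equilibrium computation by bisection, assuming the price function p(s) = 9 - s with zero costs (data consistent with the Cournot example in Section 2.8):

```python
def R(s, L=10.0):
    # interior best response against total output s for p(s) = 9 - s, C_k = 0:
    # p(s) + x_k p'(s) = 9 - s - x_k = 0  =>  x_k = 9 - s, clamped to [0, L]
    return min(max(9.0 - s, 0.0), L)

def h(s, n=2):
    return n * R(s) - s

lo, hi = 0.0, 20.0            # h(lo) > 0 > h(hi)
while hi - lo > 1e-12:        # bisection works since h is strictly decreasing
    mid = (lo + hi) / 2
    if h(mid) > 0:
        lo = mid
    else:
        hi = mid
s_star = (lo + hi) / 2
print(s_star, R(s_star))      # total output 6, each firm produces 3
```

The result s* = 6 with x1* = x2* = 3 matches the equilibrium found via reaction functions in Section 2.8.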
Consider the constrained optimization problem (P):

maximize f(x),  x ∈ X,  subject to g(x) ≥ 0,

and the two-person game with

S1 = X,  S2 = R^m_+,  φ1 = F,  φ2 = -F,  where F(x, u) = f(x) + u^T g(x).

Theorem 8.1 If (x*, u*) is an EP, then x* is an optimal solution for the original problem.

Proof. The EP conditions are: for all x ∈ X and all u ≥ 0,

f(x) + u*^T g(x) ≤ f(x*) + u*^T g(x*)   (12)
f(x*) + u*^T g(x*) ≤ f(x*) + u^T g(x*)   (13)

First we show that g(x*) ≥ 0: if some gi(x*) < 0, then selecting a sufficiently large ui would violate (13) ⟹ x* is a feasible solution of (P).
Selecting u = 0 in (13) gives u*^T g(x*) ≤ 0   (14). Since u* ≥ 0 and g(x*) ≥ 0, also u*^T g(x*) ≥ 0; together with (14) we conclude that u*^T g(x*) = 0.
Finally, from (12), for every feasible x (g(x) ≥ 0),

f(x) ≤ f(x) + u*^T g(x) ≤ f(x*) + u*^T g(x*) = f(x*).

In summary: x* is optimal.
9

9.1

Lagrangean: L(x, u) = f(x) + u^T g(x). The first-order conditions are

∂L/∂xi = 0 for all i:  ∇f(x) + u^T ∇g(x) = 0^T   (gradient of f, Jacobian of g),
∂L/∂uj = 0 for all j:  g(x) = 0.

Example 9.1

maximize 2x1 - (x1 - x2)^2  subject to  x1 + x2 = 1.

Method 1 (substitution): x2 = 1 - x1, so the objective is 2x1 - (2x1 - 1)^2, with derivative

-8x1 + 6 = 0  ⟹  x1 = 6/8 = 3/4,  x2 = 1 - x1 = 1/4.

Method 2 (Lagrange method): the first-order conditions give the same solution.
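Method 1 is easy to verify numerically: after substituting x2 = 1 - x1 the objective is 2x1 - (2x1 - 1)^2, and a grid search confirms the maximizer x1 = 3/4 (grid resolution is my own choice):

```python
f = lambda x1: 2 * x1 - (2 * x1 - 1) ** 2      # objective with x2 = 1 - x1
best_val, best_x1 = max((f(i / 10000), i / 10000) for i in range(10001))
print(best_x1, best_val)   # 0.75 and 1.25
```

The objective is strictly concave, so the grid maximum at 0.75 is the unique maximizer, in agreement with the analytic solution.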
9.2 Kuhn-Tucker conditions

Theorem 9.2 Under certain regularity conditions and differentiability of f and g, let x* be an optimal solution of: maximize f(x) subject to g(x) ≥ 0. Then there exist u_1*, …, u_m* such that

u* ≥ 0
g(x*) ≥ 0
∇f(x*) + u*^T ∇g(x*) = 0^T
u*^T g(x*) = 0

(the Kuhn-Tucker necessary conditions).

Note. If f and all components of g(x) are concave, then the K-T conditions are also sufficient.
Example 9.2

maximize f = ln(x_1 + x_2)
subject to x_1 + 2x_2 ≤ 5, x_1, x_2 ≥ 0.

Here g_1 = −x_1 − 2x_2 + 5, g_2 = x_1, g_3 = x_2, so with n = 2, m = 3 we need u_1, u_2, u_3 such that

u_1, u_2, u_3 ≥ 0
−x_1 − 2x_2 + 5 ≥ 0, x_1 ≥ 0, x_2 ≥ 0
1/(x_1 + x_2) − u_1 + u_2 = 0
1/(x_1 + x_2) − 2u_1 + u_3 = 0
u_1(−x_1 − 2x_2 + 5) + u_2 x_1 + u_3 x_2 = 0.    (25)
9.3 Equality constraints

Problem: maximize f(x) subject to g(x) = 0.

Rewrite: g(x) = 0 ⇔ g(x) ≥ 0 and −g(x) ≥ 0.

The K-T conditions with multipliers u, v ≥ 0 for the two inequality groups:

u, v ≥ 0
g(x) ≥ 0, −g(x) ≥ 0 (that is, g(x) = 0)
∇f(x) + u^T ∇g(x) − v^T ∇g(x) = 0^T
u^T g(x) − v^T g(x) = 0 (automatic, since g(x) = 0).

With λ = u − v (no sign constraint) this is exactly the Lagrange method.
9.4 Kuhn-Tucker conditions for equilibria

If (x_1*, …, x_n*) is an equilibrium, then each x_k* solves

maximize φ_k(x_1*, …, x_{k−1}*, x_k, x_{k+1}*, …, x_n*)
subject to g_k(x_k) ≥ 0.

The K-T conditions of player k, with Lagrangean Φ_k(x, u_k) = φ_k(x) + u_k^T g_k(x_k):

u_k ≥ 0, g_k(x_k) ≥ 0
∇_k φ_k(x) + u_k^T ∇ g_k(x_k) = 0^T
u_k^T g_k(x_k) = 0

for all k = 1, …, n.    (34)

Consider the problem:

minimize Σ_{k=1}^n u_k^T g_k(x_k)
subject to the remaining conditions of (34).    (35)

An equilibrium (x_1*, …, x_n*) with its multipliers gives objective value 0.

Theorem 9.4 Assume φ_k is concave in x_k and all components of g_k are also concave. Then x* is an equilibrium ⇔ it is optimal for (35) with some u_1*, …, u_n*.
55
Game theory
Note. The optimal objective value of (35) is zero.

Method 1. Write up conditions (34) for all players simultaneously and solve the resulting system (find a feasible solution).

For a two-player example with strategy sets 0 ≤ x ≤ 10 and 0 ≤ y ≤ 10, the Kuhn-Tucker conditions are as follows.

Player 1:
u_1, v_1 ≥ 0
x ≥ 0, 10 − x ≥ 0
−2x + 19 − y + u_1 − v_1 = 0
u_1 x = 0, v_1 (10 − x) = 0

Player 2:
u_2, v_2 ≥ 0
y ≥ 0, 10 − y ≥ 0
−2y + 19 − x + u_2 − v_2 = 0
u_2 y = 0, v_2 (10 − y) = 0

Altogether 14 constraints in the 6 variables (x, y, u_1, v_1, u_2, v_2).

Method 2. Solve the optimization problem (35) directly.

Example 9.4 In the case of the previous example, the Kuhn-Tucker conditions are equivalent to the optimization problem:

minimize u x + v y
subject to u, v ≥ 0
x, y ≥ 0
1 − 2x − 2y + u = 0
1 − 4x − 4y + v = 0
10 Applications

10.1 Bimatrix games

φ_1 = x_1^T A x_2,  φ_2 = x_1^T B x_2,

S_1 = { x_1 | x_1 ≥ 0, 1^T x_1 = 1 },  S_2 = { x_2 | x_2 ≥ 0, 1^T x_2 = 1 },

so the constraints of the players can be written as

g_1(x_1) = ( x_1 ; 1^T x_1 − 1 ; −1^T x_1 + 1 ) ≥ 0,  g_2(x_2) = ( x_2 ; 1^T x_2 − 1 ; −1^T x_2 + 1 ) ≥ 0,

with gradients ∇g_1 = ( I ; 1^T ; −1^T ), ∇g_2 = ( I ; 1^T ; −1^T ).

The objective of problem (35) is

Σ_i u_1^(i) x_1^(i) + u_1^(m+1)(1^T x_1 − 1) + u_1^(m+2)(−1^T x_1 + 1)
+ Σ_j u_2^(j) x_2^(j) + u_2^(n+1)(1^T x_2 − 1) + u_2^(n+2)(−1^T x_2 + 1).

Introduce

α = u_1^(m+2) − u_1^(m+1),  β = u_2^(n+2) − u_2^(n+1);

the objective becomes

u_1^T x_1 + u_2^T x_2 − α(1^T x_1 − 1) − β(1^T x_2 − 1).

Constraints:

u_1 ≥ 0, u_2 ≥ 0
x_1 ≥ 0, x_2 ≥ 0
1^T x_1 = 1, 1^T x_2 = 1
x_2^T A^T + u_1^T − α 1^T = 0^T
x_1^T B + u_2^T − β 1^T = 0^T.

From the last two equations,

u_1^T = α 1^T − x_2^T A^T,  u_2^T = β 1^T − x_1^T B,

so the objective becomes

(α 1^T − x_2^T A^T) x_1 + (β 1^T − x_1^T B) x_2 − α(1^T x_1 − 1) − β(1^T x_2 − 1)
= −x_1^T A x_2 − x_1^T B x_2 + α + β,

since 1^T x_1 = 1^T x_2 = 1. Minimizing it is therefore equivalent to:

maximize x_1^T (A + B) x_2 − α − β
subject to x_1 ≥ 0, x_2 ≥ 0
1^T x_1 = 1, 1^T x_2 = 1
A x_2 ≤ α 1
B^T x_1 ≤ β 1,    (36)

where the last two inequalities restate u_1, u_2 ≥ 0.

Example.

A = ( 2 −1 ; −1 1 ),  B = ( 1 −1 ; −1 2 ),  A + B = ( 3 −2 ; −2 3 ).

Solving (36) gives the mixed equilibrium

x_1* = (3/5, 2/5)^T,  x_2* = (2/5, 3/5)^T,

with equilibrium payoffs φ_1 = φ_2 = 1/5 (α* = β* = 1/5).
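A quick check (our own sketch, not from the notes) that the computed strategy pair is indeed an equilibrium of the example: no pure-strategy deviation improves either player's payoff.

```python
# Verifying the mixed equilibrium of the bimatrix example:
# x1 = (3/5, 2/5), x2 = (2/5, 3/5).

A = [[2, -1], [-1, 1]]
B = [[1, -1], [-1, 2]]
x1, x2 = [0.6, 0.4], [0.4, 0.6]

def quad(x, M, y):  # x^T M y
    return sum(x[i]*M[i][j]*y[j] for i in range(2) for j in range(2))

phi1, phi2 = quad(x1, A, x2), quad(x1, B, x2)
# best pure deviations: components of A*x2 (player 1) and x1^T*B (player 2)
dev1 = max(sum(A[i][j]*x2[j] for j in range(2)) for i in range(2))
dev2 = max(sum(x1[i]*B[i][j] for i in range(2)) for j in range(2))
print(phi1, phi2, dev1, dev2)
```

Both equilibrium payoffs equal 1/5 and no deviation exceeds them, confirming the solution of (36).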
10.2 Matrix games

For matrix games B = −A, so A + B = 0 and the objective of (36) reduces to −α − β. The variables separate into (x_2, α) and (x_1, β), so we get two linear programming problems:

(P_1) minimize α            (P_2) minimize β
subject to A x_2 ≤ α 1      subject to A^T x_1 ≥ −β 1
1^T x_2 = 1                 1^T x_1 = 1
x_2 ≥ 0                     x_1 ≥ 0

Theorem 10.2 (x_1*, x_2*) is an EP ⇔ they are optimal solutions of (P_2) and (P_1) with some α* and β*.

Example 10.2

A = ( 2 1 0 ; 2 0 3 ; −1 3 3 ).

Optimal solutions:

x_1* = (4/7, 4/21, 5/21)^T,  x_2* = (3/7, 3/7, 1/7)^T,  α* = −β* = 9/7.

Note. The optimal α* is called the value of the game: ν = α* = −β* = 9/7.
10.3 Oligopolies

For the n-firm oligopoly game the strategy sets are 0 ≤ x_k ≤ L_k, so

g_k(x_k) = ( x_k ; L_k − x_k ) ≥ 0,

and the objective of problem (35) is

minimize Σ_{k=1}^n [ u_k^(1) x_k + u_k^(2) (L_k − x_k) ].

Let λ_k = u_k^(1) − u_k^(2) (no sign constraint) and μ_k = u_k^(2) ≥ 0; then

u_k^(1) x_k + u_k^(2) (L_k − x_k) = λ_k x_k + μ_k L_k,

and the constraints u_k^(1), u_k^(2) ≥ 0 become μ_k ≥ 0, λ_k + μ_k ≥ 0.

The stationarity condition of player k is

p( Σ_l x_l ) + x_k p′( Σ_l x_l ) − C_k′(x_k) + λ_k = 0,

so λ_k = −[ p(Σ_l x_l) + x_k p′(Σ_l x_l) − C_k′(x_k) ], and (35) becomes

minimize Σ_{k=1}^n ( −x_k [ p(Σ_l x_l) + x_k p′(Σ_l x_l) − C_k′(x_k) ] + μ_k L_k )
subject to μ_k ≥ 0
μ_k − [ p(Σ_l x_l) + x_k p′(Σ_l x_l) − C_k′(x_k) ] ≥ 0
0 ≤ x_k ≤ L_k.

Example 10.3 n = 3, C_k(x_k) = k x_k³ + x_k, L_k = 1, p(s) = 2 − 2s − s². Substituting these data gives a concrete nonlinear program in (x_1, x_2, x_3, μ_1, μ_2, μ_3); its optimal solution is the equilibrium, which is found numerically.
10.4 Special matrix games

(i) Note first that φ_1 = x_1^T A x_2 = Σ_i Σ_j x_1^(i) a_ij x_2^(j).

(ii) Diagonal games:

A = ( a_1 0 … 0 ; 0 a_2 … 0 ; ⋮ ; 0 0 … a_n )    (diagonal game)

The LP constraints become

a_k x_2^(k) ≤ α and a_k x_1^(k) ≥ α for all k,  x_1, x_2 ≥ 0,  1^T x_1 = 1^T x_2 = 1.

Assume first that all a_k > 0. Since x_2 is a probability vector, some component x_2^(k) > 0, so α ≥ a_k x_2^(k) > 0. Then x_1^(k) ≥ α/a_k > 0 for all k, and summing,

1 = Σ_k x_1^(k) ≥ α Σ_k 1/a_k,

and similarly 1 = Σ_k x_2^(k) ≤ α Σ_k 1/a_k, so equality holds everywhere:

ν = α = 1 / Σ_l (1/a_l),  x_1^(k) = x_2^(k) = (1/a_k) / Σ_l (1/a_l).

If some a_k ≤ 0, a case analysis on the signs of the a_k and the corresponding multipliers (u_k = 0 when a_k > 0, v_k = 0 when a_k < 0, etc.) applies; again x_1* = x_2*.

(iii) Symmetric games: A^T = −A (skew-symmetric matrix). Then the two linear programs

minimize α s. to A x_2 ≤ α 1, 1^T x_2 = 1, x_2 ≥ 0
minimize β s. to −A x_1 ≤ β 1, 1^T x_1 = 1, x_1 ≥ 0

are identical problems, so α* = β*. Since α* + β* = 0 at the optimum, α* = β* = 0: the value of a symmetric game is zero, and both players have the same set of optimal strategies,

X = { x | x ≥ 0, 1^T x = 1, A x ≤ 0 }.
10.5 Application to linear programming

Consider the primal-dual pair

(primal) maximize c^T x s. to x ≥ 0, A x ≤ b;
(dual)   minimize b^T y s. to y ≥ 0, A^T y ≥ c.

Lemma 10.1 Let x and y be feasible solutions of the primal and dual problems, respectively. Then c^T x ≤ b^T y.

Proof. c^T x ≤ (A^T y)^T x = y^T A x ≤ y^T b = b^T y.

Corollary. (Duality theorem) If x* and y* are feasible solutions of the primal and dual problems, respectively, such that c^T x* = b^T y*, then both are optimal.

Construct next the skew-symmetric matrix

P = (  0     A   −b
      −A^T   0    c
       b^T  −c^T  0 ),

which defines a symmetric matrix game.

Proof (of the connection). If z* = (u, v, λ) is an equilibrium of the game P, then P z* ≤ 0, that is,

A v − λ b ≤ 0       (37)
−A^T u + λ c ≤ 0    (38)
b^T u − c^T v ≤ 0   (39)

If λ > 0, then since z* ≥ 0,

x = (1/λ) v ≥ 0 and y = (1/λ) u ≥ 0,

and by (37), (38), (39),

A x ≤ b      (40)
A^T y ≥ c    (41)
b^T y ≤ c^T x.  (42)

Combined with Lemma 10.1 (c^T x ≤ b^T y) we conclude that b^T y = c^T x. The duality theorem implies optimality.
Step 1: A variable without sign constraint is split as x_2 = x_2^+ − x_2^−, where

x_2^+ = x_2 if x_2 ≥ 0, 0 otherwise;  x_2^− = −x_2 if x_2 ≤ 0, 0 otherwise.

In the example the constraints become

x_1 + x_2^+ − x_2^− ≤ 1      (43)
−x_1 − x_2^+ + x_2^− ≤ −1    (44)
5x_1 + 7x_2^+ − 7x_2^− ≤ 25

((43)-(44) together represent the original equality constraint), with

A = ( 1 1 −1 ; 5 7 −7 ),  b = ( 1 ; 25 )

for the inequality part, and the matrix P is built from these data as before.

(ii) Conversely, consider a matrix game with A > 0 (this can always be achieved by adding the same sufficiently large constant to every entry, which shifts the value of the game but not the optimal strategies). Construct

P = (  0    A   −1
      −A^T  0    1
       1^T −1^T  0 ).
Claim: if z* = (u, v, λ) is an equilibrium of the symmetric game P, then with a = 1^T u = 1^T v,

x = (1/a) u and y = (1/a) v

give an equilibrium of A, and λ/a is the value of the game; conversely, if (x, y) is an equilibrium of A with value ν, then

z = (1/(2 + ν)) (x, y, ν)

is an equilibrium for P.

Proof. (a) Let z* = (u, v, λ) be an equilibrium of P. Then P z* ≤ 0, that is,

A v − λ 1 ≤ 0
−A^T u + λ 1 ≤ 0
1^T u − 1^T v ≤ 0.

First we show that 0 < λ < 1. If λ = 1, then (since z* is a probability vector) u = 0 and v = 0, contradicting the 2nd inequality (it would give λ 1 ≤ 0). If λ = 0, then 1^T u + 1^T v = 1, so by the 3rd inequality v must have a positive component; since A > 0, some component of A v is then positive, a contradiction to the first inequality.

Next we show that 1^T u = 1^T v. Multiplying the three inequalities by u^T ≥ 0, v^T ≥ 0 and λ ≥ 0 respectively:

u^T A v − λ u^T 1 ≤ 0
−v^T A^T u + λ v^T 1 ≤ 0
λ (1^T u − 1^T v) ≤ 0.

Adding the first two (and using u^T A v = v^T A^T u) gives λ (v^T 1 − u^T 1) ≤ 0; compare it to the 3rd inequality to see v^T 1 = u^T 1.

Select a = u^T 1 = v^T 1 = (1 − λ)/2 > 0, and x = (1/a) u, y = (1/a) v. Then x, y are probability vectors, and

A^T x = (1/a) A^T u ≥ (λ/a) 1,  A y = (1/a) A v ≤ (λ/a) 1,

so (x, y) is an equilibrium of A with value λ/a. Part (b), the converse, follows by direct substitution.
10.6 The method of fictitious play

Iterative solution of a matrix game: at step k, player 1 selects the pure strategy x_k = e_{i_k} that is a best reply against the empirical average

ȳ_{k−1} = (1/(k−1)) Σ_{t=1}^{k−1} y_t

of player 2's past choices, and player 2 selects the pure strategy y_k = e_{j_k} that is a best reply against the average

x̄_k = (1/k) Σ_{t=1}^k x_t.

Theorem 10.5 Any limit point of the sequences {x̄_k} and {ȳ_k} gives equilibrium strategies.

Proof. Complicated, not presented.

Note. We got an iterative method for solving matrix games, and so for solving LPs: project opportunity.
10.7 Method of von Neumann

Let P be the n×n skew-symmetric payoff matrix of a symmetric matrix game. Define

u_i : R^n → R,  u_i(y) = e_i^T P y,
φ : R → R,      φ(u) = max{0, u},
Φ : R^n → R,    Φ(y) = Σ_{i=1}^n φ(u_i(y)),
Ψ : R^n → R,    Ψ(y) = Σ_{i=1}^n φ(u_i(y))².

A probability vector y is an equilibrium strategy ⇔ P y ≤ 0 ⇔ Φ(y) = 0. By the Cauchy-Schwarz inequality,

Φ(y) = Σ_{i=1}^n 1 · φ(u_i(y)) ≤ √n ( Σ_{i=1}^n φ(u_i(y))² )^{1/2} = √n Ψ(y)^{1/2}.

Assume that at a strategy y we have φ(u_j(y)) > 0, that is, e_j^T P y > 0. Since e_j^T P e_j = 0, player 2 needs to increase component y_j toward 1 (at y its payoff against row j is negative, and at e_j it is zero), so y_j has to increase. This is represented by the first term of the following ODE system; the second term guarantees that the solution is always a probability vector. Consider the system of ODEs

y_j′(t) = φ( u_j(y(t)) ) − y_j(t) Φ( y(t) ),  1 ≤ j ≤ n,

where y(0) = y_0 is a probability vector.

Theorem 10.6 Let t_k (k = 1, 2, …) be a positive, strictly increasing sequence that converges to ∞. Then any limit point of the sequence y(t_k) is an equilibrium strategy, and there exists a constant c > 0 such that

e_i^T P y(t_k) ≤ √n / (c + t_k)  for all i.
Proof. Several steps. Let y(t), t ≥ 0, denote the solution, and write η(t) for the vector with components φ(u_j(y(t))).

(i) y(t) remains a probability vector. Assume y_j(t_1) < 0 for some j and t_1 > 0. Then there is t_0 < t_1 such that y_j(t_0) = 0 and y_j(τ) < 0 for t_0 < τ ≤ t_1. By the definition of φ, on this interval

y_j′(τ) = φ( u_j(y(τ)) ) − y_j(τ) Φ( y(τ) ) ≥ 0,

so by Lagrange's mean-value theorem y_j(t_1) = y_j(t_0) + y_j′(τ)(t_1 − t_0) ≥ 0, contradicting y_j(t_1) < 0. Thus y(t) ≥ 0 for all t ≥ 0. Furthermore, summing the ODEs,

(d/dt) Σ_j y_j(t) = Σ_j φ( u_j(y(t)) ) − Φ( y(t) ) Σ_j y_j(t) = Φ( y(t) ) ( 1 − Σ_j y_j(t) ),

so the function f(t) = 1 − Σ_j y_j(t) satisfies f(0) = 0 and f′(t) = −Φ(y(t)) f(t); hence f(t) ≡ 0, and y(t) is a probability vector.

(ii) From the ODEs,

(d/dt) u_i(y(t)) = e_i^T P y′(t) = Σ_j p_ij φ( u_j(y(t)) ) − Φ( y(t) ) u_i( y(t) ),

so, summing over the indices with φ(u_i) > 0 (if φ(u_i) = 0 this equation remains valid, since zero terms are added to both sides),

(1/2)(d/dt) ‖η(t)‖² = Σ_i φ(u_i) (d/dt) u_i = η^T P η − Φ( y(t) ) ‖η(t)‖².

Since P^T = −P, we have η^T P η = (η^T P η)^T = −η^T P η = 0, so

(1/2)(d/dt) ‖η(t)‖² = −Φ( y(t) ) ‖η(t)‖².

(iii) If Φ(y(t_0)) = 0 at some t_0 > 0, then η(t_0) = 0, and η remains 0 for t ≥ t_0 (the ODE for ‖η‖² has the zero solution with initial value 0); then P y(t) ≤ 0 for all t ≥ t_0, and y(t) is an equilibrium strategy. Otherwise Φ(y(t)) > 0 for all t, and since Φ = ‖η‖_1 ≥ ‖η‖_2,

(1/2)(d/dt) ‖η(t)‖² ≤ −‖η(t)‖³,  that is,  (d/dt) ‖η(t)‖ ≤ −‖η(t)‖².

Integrating both sides,

‖η(t)‖ ≤ 1/(c + t),  where c = 1/‖η(0)‖.

Hence for all i,

e_i^T P y(t) ≤ φ( u_i(y(t)) ) ≤ ‖η(t)‖ ≤ 1/(c + t) ≤ √n/(c + t).

Taking an increasing sequence t_k → ∞, any limit point y* of y(t_k) satisfies P y* ≤ 0 with y* a probability vector, so y* is an equilibrium strategy. ∎
Example 10.5 Consider the matrix game of Example 10.2,

A = ( 2 1 0 ; 2 0 3 ; −1 3 3 ).

Adding 2 to every entry makes the matrix positive,

A + 2 = ( 4 3 2 ; 4 2 5 ; 1 5 5 ),

and the associated 7×7 skew-symmetric matrix is

P = (  0      A+2   −1
     −(A+2)^T  0     1
       1^T    −1^T   0 ).

The ODE system is solved by the 4th-order Runge-Kutta method on [0, 100] with stepsize h = 0.01, giving y(100) = (u, v, λ) with

λ ≈ 0.627163,  a = (1 − λ)/2 ≈ 0.1864185,

from which x_1 ≈ u/a, x_2 ≈ v/a. Comparing these to the true values

x_1* = (4/7, 4/21, 5/21)^T,  x_2* = (3/7, 3/7, 1/7)^T,  ν = 9/7 ≈ 1.285714,

the maximum error in the equilibrium components is about 0.0669, and the error in the value of the game is 0.07856.

Note. The errors can be reduced by decreasing the stepsize (here h = 0.01).

We got another method for solving matrix games, and so for LPs: project opportunity.
Example 10.4 Apply the method of fictitious play to the matrix game

A = ( 1 2 ; 2 1 ).

Start with x_1 = e_1, y_1 = e_1, so x̄_1 = ȳ_1 = (1, 0)^T.

k = 2:

A ȳ_1 = ( 1 2 ; 2 1 ) (1, 0)^T = (1, 2)^T,

whose maximal component is the second, so x_2 = e_2 and

x̄_2 = (1/2) [ (1,0)^T + (0,1)^T ] = (1/2, 1/2)^T,  x̄_2^T A = (3/2, 3/2);

both components are minimal, take y_2 = e_1, so

ȳ_2 = (1/2) [ (1,0)^T + (1,0)^T ] = (1, 0)^T.

k = 3:

A ȳ_2 = (1, 2)^T, so x_3 = e_2,

x̄_3 = (1/3) [ (1,0)^T + (0,1)^T + (0,1)^T ] = (1/3, 2/3)^T,  x̄_3^T A = (5/3, 4/3);

the minimal component is the second, so y_3 = e_2 and

ȳ_3 = (1/3) [ (1,0)^T + (1,0)^T + (0,1)^T ] = (2/3, 1/3)^T.

k = 4:

A ȳ_3 = (4/3, 5/3)^T;

the maximum component is the second, so x_4 = e_2,

x̄_4 = (1/4) [ (1,0)^T + 3·(0,1)^T ] = (1/4, 3/4)^T,  x̄_4^T A = (7/4, 5/4),

so y_4 = (0, 1)^T, and so on for k = 5, 6, …. The averages converge to the equilibrium strategies ((1/2, 1/2), (1/2, 1/2)) with value 3/2.
The method of von Neumann for the same game: here

P = (  0    A   −1
      −A^T  0    1
       1^T −1^T  0 )
  = (  0  0  1  2 −1
       0  0  2  1 −1
      −1 −2  0  0  1
      −2 −1  0  0  1
       1  1 −1 −1  0 ),

a five-dimensional problem. Remember that the components of P y are

u_1 = y_3 + 2y_4 − y_5
u_2 = 2y_3 + y_4 − y_5
u_3 = −y_1 − 2y_2 + y_5
u_4 = −2y_1 − y_2 + y_5
u_5 = y_1 + y_2 − y_3 − y_4,

so with φ(u) = max{0, u} the five-dimensional ODE system is

y_1′ = max{0; y_3 + 2y_4 − y_5} − y_1 S
y_2′ = max{0; 2y_3 + y_4 − y_5} − y_2 S
y_3′ = max{0; −y_1 − 2y_2 + y_5} − y_3 S
y_4′ = max{0; −2y_1 − y_2 + y_5} − y_4 S
y_5′ = max{0; y_1 + y_2 − y_3 − y_4} − y_5 S,

where S is the sum of the five max{0; ·} terms.
11 Uniqueness of Equilibrium

Equilibria are not necessarily unique; for example, the duopoly with

C_1(x_1) = 0.5x_1, C_2(x_2) = 0.5x_2, p(s) = 1.75 − 0.5s if 0 ≤ s ≤ 1.5; 2.5 − s if 1.5 ≤ s ≤ 2

has multiple equilibria.

Consider a game {n, S_1, …, S_n, φ_1, …, φ_n}, let the best reply of player k be R_k(x), write R(x) = (R_1(x), …, R_n(x)), and let ρ be a distance on S_1 × ⋯ × S_n.

Lemma 11.1 Assume that the best reply mapping is point-to-point and either

(i) ρ(R(x), R(y)) < ρ(x, y) for all x ≠ y in S_1 × ⋯ × S_n, or
(ii) ρ(R(x), R(y)) > ρ(x, y) for all x ≠ y in S_1 × ⋯ × S_n.

Then the equilibrium cannot be multiple.

Proof. Assume that x* ≠ y* are both equilibria. Then R(x*) = x* and R(y*) = y*, so

ρ( R(x*), R(y*) ) = ρ(x*, y*),

contradicting both (i) and (ii).

Note. Condition (i) is much weaker than the assumption that R is a contraction.
Existence and uniqueness for equations. Consider the equation f(x) = 0.

(i) In one dimension, if f is strictly monotonic, then the equation cannot have multiple solutions.

(ii) In several dimensions the situation is more complicated; in R^n, componentwise monotonicity is not sufficient:

x + y = 0, 2x + 2y = 0: many solutions, y = −x;
x − y = 0, 2x − 2y = 0: many solutions, y = x;

and similarly the fixed-point system

x = −x − 2y, y = −2x − y

has the many solutions y = −x.

Definition. A function f : D → R^n (D ⊆ R^n) is called monotonic if (x − y)^T (f(x) − f(y)) ≥ 0 for all x, y ∈ D, and strictly monotonic if the inequality is strict whenever x ≠ y.

Theorem 11.1 If f or −f is strictly monotonic, then the solution of f(x) = 0 cannot be multiple.

Proof. Assume x* ≠ y* and f(x*) = f(y*) = 0, which contradicts strict monotonicity:

(x* − y*)^T ( f(x*) − f(y*) ) = 0.

Consider next the fixed-point equation x = f(x), and assume −f is monotonic. Let x* ≠ y* be solutions; then

0 ≥ (x* − y*)^T ( f(x*) − f(y*) ) = (x* − y*)^T (x* − y*) > 0,

a contradiction.

Corollary. If the negative of the best reply mapping, −R(x), is monotonic, then the EP is unique.

If R(x) is point-to-set, monotonicity of −R means: for all x, y ∈ S_1 × ⋯ × S_n,

(x − y)^T (u − v) ≤ 0 for all u ∈ R(x) and v ∈ R(y),

and the same uniqueness argument applies.
We can also check uniqueness without determining the best response mapping. Consider the game {n, S_1, …, S_n, φ_1, …, φ_n} with S_k = { x_k | g_k(x_k) ≥ 0 }, and assume:

(i) each φ_k is concave in x_k and continuously differentiable in an open set containing S = S_1 × ⋯ × S_n;
(ii) each g_k satisfies the Kuhn-Tucker regularity condition;
(iii) each component of g_k is concave.

Define

h(x, r) = ( r_1 ∇_1 φ_1(x) ; ⋮ ; r_n ∇_n φ_n(x) ),

where ∇_k denotes the gradient with respect to x_k and r = (r_1, …, r_n) ≥ 0 is a weight vector; for fixed r, h : R^M → R^M.

Definition 11.1 The game is called strictly diagonally concave on S if for all x^(0) ≠ x^(1), x^(0), x^(1) ∈ S, and with some r ≥ 0,

( x^(1) − x^(0) )^T ( h(x^(1), r) − h(x^(0), r) ) < 0.

Note. This generalizes the strict monotonicity condition.

Lemma 11.2 Assume S is convex and, for all k, φ_k is twice continuously differentiable. Then the game is strictly diagonally concave if J(x, r) + J(x, r)^T is negative definite on S, where J(x, r) is the Jacobian of h(x, r) with respect to x.
Theorem 11.3 (Theorem of Rosen) Assume conditions (i), (ii), (iii) are satisfied and the game is strictly diagonally concave. Then the equilibrium is unique.

Proof. Assume x^(0) = (x_1^(0), …, x_n^(0)) and x^(1) = (x_1^(1), …, x_n^(1)) are both equilibria. By the Kuhn-Tucker conditions, for l = 0, 1 there are multipliers u_k^(l) ≥ 0 such that for all k,

g_k( x_k^(l) ) ≥ 0,
u_k^(l)T g_k( x_k^(l) ) = 0,
∇_k φ_k( x^(l) ) + u_k^(l)T ∇ g_k( x_k^(l) ) = 0^T.    (45)

Multiply the stationarity condition for l = 0 by r_k ( x_k^(1) − x_k^(0) )^T and for l = 1 by r_k ( x_k^(0) − x_k^(1) )^T, and add for all k:

0 = ( x^(1) − x^(0) )^T h( x^(0), r ) + ( x^(0) − x^(1) )^T h( x^(1), r )
  + Σ_{k=1}^n Σ_{j=1}^{p_k} r_k [ u_kj^(0) ( x_k^(1) − x_k^(0) )^T ∇ g_kj( x_k^(0) ) + u_kj^(1) ( x_k^(0) − x_k^(1) )^T ∇ g_kj( x_k^(1) ) ].

The first two terms equal −( x^(1) − x^(0) )^T [ h(x^(1), r) − h(x^(0), r) ] > 0 by strict diagonal concavity, so the double sum must be negative. On the other hand, by the concavity of each g_kj and complementary slackness u_kj^(l) g_kj( x_k^(l) ) = 0,

u_kj^(0) ( x_k^(1) − x_k^(0) )^T ∇ g_kj( x_k^(0) ) ≥ u_kj^(0) [ g_kj( x_k^(1) ) − g_kj( x_k^(0) ) ] = u_kj^(0) g_kj( x_k^(1) ) ≥ 0,

and similarly for the other term, so

0 > Σ_{k=1}^n Σ_j r_k [ u_kj^(0) g_kj( x_k^(1) ) + u_kj^(1) g_kj( x_k^(0) ) ] ≥ 0,

a contradiction. ∎
So, in a two-player example, the condition of Lemma 11.2 reduces to checking that

−( J(x, r) + J(x, r)^T ) = ( 4r_1  r_1 + r_2 ; r_1 + r_2  4r_2 )

is positive definite, i.e. 4r_1 > 0 and

det ( 4r_1  r_1 + r_2 ; r_1 + r_2  4r_2 ) = 16 r_1 r_2 − (r_1 + r_2)² > 0,

which holds e.g. with r_1 = r_2 = 1; the game then has a unique equilibrium.
12 Leader-follower games

Take n = 2: Player 1 = leader, Player 2 = follower. Player 2 chooses his best reply R_2(x_1) (follows player 1), so the leader solves

maximize φ_1( x_1, R_2(x_1) ) subject to x_1 ∈ S_1.

Theorem 12.1 Assume both φ_1 and R_2(x_1) are continuous, and S_1 is nonempty, closed and bounded. Then there is an optimal solution.

Proof. A continuous function attains its maximum on a nonempty compact set.

In general, n > 2: Player 1 = leader, players 2, …, n are followers. For each x_1 ∈ S_1 the followers play an equilibrium among themselves, x_k = E_k(x_1) for k = 2, …, n, and the leader solves

maximize φ_1( x_1, E_2(x_1), …, E_n(x_1) )
subject to x_1 ∈ S_1.

With optimal solution x_1*, the solution of the game is x_1*, and x_k* = E_k(x_1*) for k = 2, …, n.
12.1 Application to duopolies

n = 2, oligopoly with 2 firms:
Firm 1: home firm, with a subsidy s per unit from its government (unit cost c − s);
Firm 2: foreign firm (unit cost c).

Price function: p(x + y) = a − b(x + y).

Profits of the firms:

φ_1 = x [ a − b(x + y) − (c − s) ],  φ_2 = y [ a − b(x + y) − c ].

1. Nash (Cournot) equilibrium. Assuming interior optimum,

∂φ_1/∂x = a − 2bx − by − c + s = 0  ⇒  x = (a − by − c + s)/(2b),
∂φ_2/∂y = a − bx − 2by − c = 0      ⇒  y = (a − bx − c)/(2b).

Equating and solving the two expressions,

x = (a − c + 2s)/(3b),  y = (a − c − s)/(3b).

2. Stackelberg equilibrium. Player 1 is the leader; player 2 follows with y = (a − bx − c)/(2b) whatever x is, so the profit of player 1 is

φ_1 = x [ a − bx − (a − bx − c)/2 − c + s ] = x (a − bx − c + 2s)/2.

Differentiate:

∂φ_1/∂x = (a − 2bx − c + 2s)/2 = 0  ⇒  x = (a − c + 2s)/(2b),

so from the follower's reply

y = [ a − c − (a − c + 2s)/2 ] / (2b) = (a − c − 2s)/(4b).

3. Optimum subsidy. The welfare of the home country is φ_1 minus the subsidy payment s x. In the Stackelberg case,

φ_1 − s x = (a − c + 2s)/(2b) · (a − c − 2s)/4 = [ (a − c)² − 4s² ] / (8b) → max,

so the optimal subsidy is s* = 0, giving

x* = (a − c)/(2b),  y* = (a − c)/(4b).

With the Cournot equilibrium instead,

φ_1 − s x = (a − c + 2s)/(3b) · (a − c − s)/3 → max,

and differentiating,

2(a − c − s) − (a − c + 2s) = a − c − 4s = 0  ⇒  s* = (a − c)/4,

so that

x* = [ a − c + (a − c)/2 ] / (3b) = (a − c)/(2b),  y* = [ a − c − (a − c)/4 ] / (3b) = (a − c)/(4b).

The two equilibria are the same, so the optimal subsidy is equivalent to the leader's advantage.
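A numerical check (our own sketch) of the conclusion above, with the illustrative values a = 10, b = 1, c = 2 (these numbers are not from the notes):

```python
# Cournot with optimal subsidy vs. Stackelberg with zero subsidy.

a, b, c = 10.0, 1.0, 2.0

def cournot(s):
    return (a - c + 2*s)/(3*b), (a - c - s)/(3*b)

def stackelberg(s):
    return (a - c + 2*s)/(2*b), (a - c - 2*s)/(4*b)

def welfare(x, y, s):               # home profit minus subsidy payment
    return x*(a - b*(x + y) - (c - s)) - s*x

s_c = (a - c)/4                     # optimal subsidy under Cournot: s* = 2
x_c, y_c = cournot(s_c)
x_s, y_s = stackelberg(0.0)         # optimal subsidy is 0 under Stackelberg
print((x_c, y_c), (x_s, y_s))
```

Both cases give (x, y) = (4, 2), confirming that the optimally subsidized Cournot outcome coincides with the unsubsidized Stackelberg outcome.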
13 Games with incomplete information

Note. Incomplete information refers to the amount of information the players have about the game itself, while imperfect information refers to the amount of information they have on others' and their own previous moves (and on previous chance moves). Incomplete information results from

lack of information on physical outcomes;
lack of information on own or others' utility functions (which are the payoffs);
lack of information on strategy sets.

All of these cases reduce to uncertain payoffs, so we will assume that the strategy sets are known to all players and only the payoffs are uncertain.

Example (prisoner's dilemma with an uncertain partner). Player 1 is the DA's brother, who will go free if none of the players confesses. Assume that with probability α player 2 is of type I, with payoff table

        NC          C
NC   (0, −2)    (−10, −1)
C    (−1, −10)  (−5, −5)       Type I, probability α

but with probability 1 − α player 2 pays an emotional penalty, or friends of his partner will take revenge (equivalent to 6 extra years in prison for giving up his partner), so the payoff table is

        NC          C
NC   (0, −2)    (−10, −7)
C    (−1, −10)  (−5, −11)      Type II, probability 1 − α

The extensive form starts with a chance move selecting player 2's type.

Prisoner 2 now has four possible pure strategies:
(i) C if Type I, C if Type II
(ii) C if Type I, NC if Type II
(iii) NC if Type I, C if Type II
(iv) NC if Type I, NC if Type II

and player 1 still has his original two strategies: C and NC.
Next we give a formal definition of such games. The payoff of player i is φ_i(s_i, s_{−i}, θ_i), where s_i ∈ S_i and s_{−i} = (s_1, …, s_{i−1}, s_{i+1}, …, s_n); furthermore we assume that the random type variables θ_1, …, θ_n are selected by nature according to a joint distribution function F(θ_1, …, θ_n), and the actual value of θ_i is known only by player i, unknown to all others. With normal form notation the game is

(n, S_1, …, S_n, φ_1, …, φ_n, Θ, F)

with Θ = Θ_1 × Θ_2 × ⋯ × Θ_n and F known by all players.

Definition 13.2 (Pure strategy) A pure strategy in a Bayesian game for player i is a decision rule s_i(θ_i) for all θ_i ∈ Θ_i, where s_i : Θ_i → S_i.

Definition 13.3 (Expected payoff) Player i's expected payoff, given a profile of pure strategies (s_1(·), …, s_n(·)), is E_θ [ φ_i ( s_1(θ_1), …, s_n(θ_n), θ_i ) ].
Theorem 13.1 A decision profile (s_1*(·), …, s_n*(·)) is a Bayesian equilibrium ⇔ for all i and all θ_i ∈ Θ_i occurring with positive probability,

E_{θ_{−i}} [ φ_i ( s_i*(θ_i), s_{−i}*(θ_{−i}), θ_i ) | θ_i ] ≥ E_{θ_{−i}} [ φ_i ( s_i, s_{−i}*(θ_{−i}), θ_i ) | θ_i ]

for all s_i ∈ S_i.

In the prisoner's dilemma example, player 2's equilibrium decision rule is

s_2*(I) = C,  s_2*(II) = NC.

Against this, player 1's expected payoffs are

E[C] = α(−5) + (1 − α)(−1) = −4α − 1,  E[NC] = α(−10) + (1 − α)·0 = −10α.

In summary:

s_1* = C if α > 1/6;  s_1* = NC if α < 1/6;  both C and NC if α = 1/6.
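The threshold α = 1/6 can be checked directly; a sketch (our own code, not from the notes), using exact rational arithmetic to handle the boundary case:

```python
from fractions import Fraction

def best_reply(alpha):
    # player 1's expected payoffs against s2*(I) = C, s2*(II) = NC
    eC  = alpha*(-5) + (1 - alpha)*(-1)   # = -4*alpha - 1
    eNC = alpha*(-10)                     # = -10*alpha
    if eC > eNC:
        return "C"
    if eC < eNC:
        return "NC"
    return "both"

print(best_reply(Fraction(1, 2)), best_reply(Fraction(1, 10)), best_reply(Fraction(1, 6)))
```

The three calls return "C", "NC" and "both", matching the summary above.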
Example 13.3 A 2-player game where both players can be in a weak or strong position (for player 1 the types are a_1 = weak, a_2 = strong; for player 2, b_1 = weak, b_2 = strong), so there are 4 possibilities for the payoff functions. Player 1's payoff tables (rows y_1, y_2; columns z_1, z_2):

(a_1, b_1): ( 2 5 ; −1 20 )      (a_1, b_2): ( −24 −36 ; 0 24 )
(a_2, b_1): ( 28 15 ; 40 4 )     (a_2, b_2): ( 12 20 ; 2 13 )

Joint type distribution:

        b_1          b_2
a_1   r_11 = 0.40  r_12 = 0.10  | 0.50
a_2   r_21 = 0.20  r_22 = 0.30  | 0.50
        0.60         0.40

A pure strategy of player 1 tells which of y_1, y_2 to choose for each own type; e.g. (1, 2) means y_1 if a_1 and y_2 if a_2, and similarly for player 2. Player 1's expected payoff table:

          (1,1)  (1,2)  (2,1)  (2,2)
(1,1)      7.6    8.8    6.2    7.4
(1,2)      7.0    9.1    1.0    3.1
(2,1)      8.8   13.6   14.6   19.4
(2,2)      8.2   13.9    9.4   15.1

For example, for the pair ((1,2), (1,1)) we have

0.4(2) + 0.1(−24) + 0.2(40) + 0.3(2) = 7.0,

the cases being (a_1,b_1) → (y_1,z_1), (a_1,b_2) → (y_1,z_1), (a_2,b_1) → (y_2,z_1), (a_2,b_2) → (y_2,z_1); or for the pair ((1,2), (2,1)),

0.4(5) + 0.1(−24) + 0.2(4) + 0.3(2) = 1.0,

with cases (a_1,b_1) → (y_1,z_2), (a_1,b_2) → (y_1,z_1), (a_2,b_1) → (y_2,z_2), (a_2,b_2) → (y_2,z_1).
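The whole expected payoff table can be reproduced mechanically; a sketch (our own computation) from the four type-contingent tables and the joint type distribution:

```python
# Player 1's expected payoff table of Example 13.3.

U = {  # player 1's payoff, indexed by (type1, type2); rows y, columns z
    (0, 0): [[2, 5], [-1, 20]],
    (0, 1): [[-24, -36], [0, 24]],
    (1, 0): [[28, 15], [40, 4]],
    (1, 1): [[12, 20], [2, 13]],
}
r = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

strategies = [(0, 0), (0, 1), (1, 0), (1, 1)]   # action chosen for each own type

def expected(s1, s2):
    return sum(r[a, b] * U[a, b][s1[a]][s2[b]] for a in (0, 1) for b in (0, 1))

table = [[expected(s1, s2) for s2 in strategies] for s1 in strategies]
for row in table:
    print(row)
```

The printed rows match the 4×4 table above (7.6, 8.8, … down to 15.1).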
14 Cooperative games

14.1 Characteristic functions

The characteristic function of a coalition S ⊆ N = {1, …, n} is

v(S) = max_{x_i, i∈S} min_{x_j, j∉S} Σ_{i∈S} φ_i(x_1, …, x_n),    (46)

what the members of the coalition can guarantee for themselves regardless of what the others are doing. Also

v(∅) = 0,

and

v(N) = max Σ_{i∈N} φ_i(x_1, …, x_n).    (48)

Definition 14.5 (Constant sum game) The game is constant sum if for all coalitions S ⊆ N,

v(S) + v(N∖S) = v(N).    (50)

The game is superadditive if for all disjoint coalitions, S ∩ T = ∅,

v(S ∪ T) ≥ v(S) + v(T),    (51)

and weakly superadditive if v(S ∪ {i}) ≥ v(S) + v({i}) for all S and i ∉ S. Superadditivity implies

v(N) ≥ v(S) + Σ_{i∈N∖S} v({i}).

Definition 14.8 (Strategically equivalent games) Games (N, v) and (N, v′) are strategically equivalent if there exist α > 0 and β_1, …, β_n such that for all S ⊆ N,

v(S) = α v′(S) + Σ_{i∈S} β_i.

The game is normalized if v({i}) = 0 for all i and v(N) = 1.

Note. A normalized game with v(N) = 1 > 0 = Σ_i v({i}) is essential.

Theorem 14.1 If a function v is defined on all coalitions such that v(∅) = 0 and it is superadditive, then there is an n-person game such that v is its characteristic function.

The central question is what is given to the players in case of cooperation. A payoff vector x = (x_1, …, x_n) must satisfy

Σ_{i=1}^n x_i ≤ v(N)

(we cannot give out more than we might have). Here x_i is the payment to player i, and it is not a strategy.

Definition 14.11 (Individually rational payoff vector) A payoff vector is called individually rational if x_i ≥ v({i}) for all i.
In the symmetric three-firm oligopoly of Examples 14.1-14.2, v({1}) is obtained by maximizing firm 1's profit after the worst-case choice of the others (the minimum occurs at x_2 = x_3 = 3); v({1, 2}) similarly, with the minimum at x_3 = 3; and

v({1, 2, 3}) = 69/4,

so the imputation set is

{ x = (x_1, x_2, x_3) | x_i ≥ v({i}), x_1 + x_2 + x_3 = 69/4 }.
Checking superadditivity: the inequality v(S ∪ T) ≥ v(S) + v(T) is verified directly for all disjoint pairs of coalitions.

Note. An imputation x = (x_1, …, x_n) dominates an imputation y on coalition S if x_i > y_i for all i ∈ S and Σ_{i∈S} x_i ≤ v(S) (all members of S prefer x, and S can actually guarantee these payoffs).

Note. No dominance is possible if |S| = 1 or |S| = n:
if S = {i}, then x_i ≤ v(S) = v({i}) ≤ y_i, so x_i > y_i is impossible;
if S = N, then Σ_i x_i > Σ_i y_i = v(N) = v(S), contradicting Σ_{i∈S} x_i ≤ v(S).
14.2 Core of the game

The core is the set of imputations x such that

x(S) := Σ_{i∈S} x_i ≥ v(S) for all S ⊆ N,
x(N) = Σ_{i=1}^n x_i = v(N).

Theorem 14.2 For weakly superadditive games, the core is exactly the set of nondominated imputations.
14.3 Stable sets

A set V of imputations is stable if

there are no x, y ∈ V such that x dominates y on some coalition S (internal stability);
for every y ∉ V there is an x ∈ V such that x dominates y on some S (external stability).

Problem: a stable set is not unique and does not necessarily exist.

(Remember! Imputation: Σ_{i=1}^n x_i = v(N) and x_i ≥ v({i}) for all i.)
14.4 The Shapley value

Let d_i(S) = v(S) − v(S∖{i}) be the contribution of player i to coalition S, with d_i(S) = 0 if i ∉ S. Assume the players form the grand coalition N by entering one by one in a random order. The average contribution of player i is given as

x̄_i = Σ_{S: i∈S} [ (s − 1)! (n − s)! / n! ] [ v(S) − v(S∖{i}) ],

where s = |S|, since there are (s − 1)! orderings of the other players already in S, and (n − s)! orderings of the players outside S. x̄_i is the Shapley value of player i.

Theorem 14.3 For any superadditive game, x̄ = (x̄_i) is an imputation.

Example 14.3 n = 2, coalitions ∅, {1}, {2}, {1, 2}:
x̄_1 = [ (1−1)!(2−1)!/2! ] [ v({1}) − v(∅) ] + [ (2−1)!(2−2)!/2! ] [ v({1,2}) − v({2}) ]
    = [ v({1}) + v({1,2}) − v({2}) ] / 2,

and similarly

x̄_2 = [ v({2}) + v({1,2}) − v({1}) ] / 2,

so x̄_1 + x̄_2 = v({1,2}).

Example 14.4 n = 3, coalitions ∅, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}:

x̄_1 = [ (1−1)!(3−1)!/3! ] [ v({1}) − v(∅) ] + [ (2−1)!(3−2)!/3! ] [ v({1,2}) − v({2}) ]
    + [ (2−1)!(3−2)!/3! ] [ v({1,3}) − v({3}) ] + [ (3−1)!(3−3)!/3! ] [ v({1,2,3}) − v({2,3}) ]
    = (1/6) [ 2v({1}) + v({1,2}) − v({2}) + v({1,3}) − v({3}) + 2v({1,2,3}) − 2v({2,3}) ],

and similarly for x̄_2 and x̄_3; adding the three expressions,

x̄_1 + x̄_2 + x̄_3 = (1/6) · 6 v({1,2,3}) = v({1,2,3}).

In a symmetric game the players have to get equal payments, so in the case of the symmetric oligopoly of Examples 14.1 and 14.2,

x̄_1 = x̄_2 = x̄_3 = 23/4.
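The formula generalizes directly to code; a sketch (our own implementation) on a made-up superadditive 3-player game (the characteristic function below is illustrative, not the oligopoly of the notes):

```python
from itertools import combinations
from math import factorial

n = 3
v = {(): 0, (1,): 1, (2,): 1, (3,): 2,
     (1, 2): 4, (1, 3): 5, (2, 3): 6, (1, 2, 3): 10}

def shapley(i):
    """Average marginal contribution of player i over random entry orders."""
    total = 0.0
    for s in range(1, n + 1):
        for S in combinations(range(1, n + 1), s):
            if i in S:
                S_minus_i = tuple(p for p in S if p != i)
                w = factorial(s - 1) * factorial(n - s) / factorial(n)
                total += w * (v[S] - v[S_minus_i])
    return total

x = [shapley(i) for i in (1, 2, 3)]
print(x, sum(x))
```

The values sum to v(N) = 10, as Theorem 14.3 guarantees; here x̄ = (8/3, 19/6, 25/6).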
Example 14.6 Simple game, where v(S) = 0 or v(S) = 1 for all coalitions. In this special case

x̄_i = Σ (s − 1)!(n − s)! / n!,

where the summation is over all pivotal coalitions S such that S is winning (v(S) = 1) and S∖{i} is losing (v(S∖{i}) = 0). So x̄_i is the average contribution when player i turns losing coalitions into winning ones: x̄_i is the power index of player i.
14.5 Social choice

Let the players be the criteria (voters) and let the alternatives be ranked: a_ij = ranking of alternative j by voter i (1 = best).

1. Plurality voting

f(a_ij) = 1 if a_ij = 1, and 0 otherwise;

A_j = Σ_i f(a_ij) = number of times alternative j is the best,

and the social choice is the alternative with A_j = max_j {A_j}. In the forestry example below, A_1 = 2, A_2 = 0, A_3 = 1, A_4 = 3, so control is the social choice.

2. Borda count

B_j = Σ_i a_ij (total points), and the social choice is the alternative with B_j = min_j {B_j}.

3. Successive deletion: repeatedly delete the worst alternative j* and update the rankings:

a_ij^new = a_ij if a_ij < a_ij* (alternative j ranked above the deleted one), and a_ij − 1 otherwise.

In the previous example A_1 = 2, A_2 = 0, A_3 = 1, A_4 = 3, so alternative 2 is deleted first.
In the forestry example the recoverable ranking rows of the 6 criteria are:

Clear cut:               1 3 1 3 3 3
Strip cut and thinning:  2 1 2 2 2 2
Control:                 3 2 3 1 1 1

with plurality counts A_1 = 2, A_3 = 1, A_4 = 3. After the deletion step the clear cut row becomes 1 2 1 2 2 2, and the new counts are A_1 = 2, A_4 = 4: control is the social choice.

4. Pairwise comparisons

Let N(j_1, j_2) = number of voters preferring j_1 to j_2. Then j_1 beats j_2 if N(j_1, j_2) > N(j_2, j_1), and the results are summarized in a preference graph.
5. Dictatorship

The dictator is a designated voter i*; the social choice is the alternative j with a_{i*j} = 1.

Example 14.7 Four alternatives A_1, …, A_4 are ranked by voters with weights (total weight 8). The weighted Borda counts are 20, 20, 19, 21, and in the successive deletion method either A_3 or A_4 can be eliminated first. If A_3 is eliminated, A_1 and A_4 turn out to be equally good solutions; if A_4 is eliminated, A_1 and A_3 are equally good solutions.

Pairwise comparisons (weighted, out of 8 votes):

N(A_1, A_2) = 4, N(A_1, A_3) = 4, N(A_1, A_4) = 4,
N(A_2, A_3) = 4, N(A_2, A_4) = 4, N(A_3, A_4) = 5,

so every pair is tied 4-4 except that A_3 beats A_4 by 5 to 3; this is summarized in the preference graph.

[Figure: preference graph of A_1, …, A_4 - not recoverable from the extracted text.]
15 Conflict resolution

Two players receive payoffs (f_1, f_2) from a feasible payoff set H; (f_1*, f_2*) is the disagreement (status quo) point: if the players cannot agree, player i receives f_i*.

Assume that at the solution f_1 ≥ f_1* and f_2 ≥ f_2*, so restrict the feasible set to

H* = { (f_1, f_2) | (f_1, f_2) ∈ H, f_1 ≥ f_1*, f_2 ≥ f_2* }.

The Pareto frontier of H* is assumed to be a function

f_2 = g(f_1),  f_1* ≤ f_1 ≤ F_1,

which is strictly decreasing and concave. Assume also that there exists (f_1, f_2) ∈ H such that f_1 > f_1* and f_2 > f_2*.

15.1 The conflict as a noncooperative game

Let S_1 = [f_1*, F_1], S_2 = [f_2*, F_2], and

φ_1(f_1, f_2) = f_1 if (f_1, f_2) ∈ H*, and f_1* otherwise;
φ_2(f_1, f_2) = f_2 if (f_1, f_2) ∈ H*, and f_2* otherwise.

Every point of the Pareto frontier is an equilibrium of this game (a unilateral increase of a claim makes the pair infeasible), so the equilibrium concept alone does not single out a solution.

15.2 The Nash solution

Player 1 assumes that player 2's accepted payoff is equally likely to be anywhere in [f_2*, g(f_1)]; maximizing his own expected payoff then leads to choosing f_1 so that the product

(f_1 − f_1*)(f_2 − f_2*)

is maximal on the Pareto frontier. The product is called the Nash product:

maximize (f_1 − f_1*)( g(f_1) − f_2* ) subject to f_1* ≤ f_1 ≤ F_1.

At f_1 = f_1* and at f_1 = F_1 the objective is zero; inside it is positive and strictly concave, so we get a unique solution, interpreted as a fair share.
15.3 Axiomatic bargaining

The solution is a function φ(H, f*) which satisfies certain axioms:

1. φ(H, f*) ∈ H (feasibility);
2. φ(H, f*) ≥ f* (individual rationality);
3. φ(H, f*) is Pareto optimal;
4. (independence of irrelevant alternatives) if H_1 ⊆ H and φ(H, f*) ∈ H_1, then φ(H_1, f*) = φ(H, f*);
5. (independence of increasing linear transformations) let α_1, α_2 > 0, β_1, β_2 be some constants,

f̄* = (α_1 f_1* + β_1, α_2 f_2* + β_2),  H̄ = { (α_1 f_1 + β_1, α_2 f_2 + β_2) | (f_1, f_2) ∈ H };

then φ(H̄, f̄*) = (α_1 φ_1(H, f*) + β_1, α_2 φ_2(H, f*) + β_2);
6. (symmetry) if H is symmetric and f_1* = f_2*, then the components of φ(H, f*) = (f_1, f_2) are equal.

Theorem 15.2 (Theorem of Nash) There is a unique solution satisfying the axioms, and it maximizes the Nash product on H:

maximize (f_1 − f_1*)(f_2 − f_2*) subject to f ≥ f*, f ∈ H.

A useful geometric fact: consider the hyperbola y = c/x and its tangent line at x = a. The point of tangency is the midpoint of the segment of the tangent line in the first quadrant: since y = c/x, y′ = −c/x², the tangent at a meets the axes at (2a, 0) and (0, 2c/a), with midpoint (a, c/a).

Idea of proof:
A. The solution of the above optimization problem satisfies the axioms.
B. A solution that satisfies the axioms is necessarily the optimal solution; several steps:
(i) by axioms 3 and 6, for a symmetric set the solution is the symmetric Pareto point (1/2, 1/2), which is also the optimal solution;
(ii) by axiom 5, the same holds for any triangle;
(iii) by axiom 4, the statement extends to all cases.
15.4 Nonsymmetric Nash solutions

Dropping the symmetry axiom leads to the nonsymmetric Nash solutions:

maximize (f_1 − f_1*)^α (f_2 − f_2*)^(1−α) subject to f ≥ f*, f ∈ H,

with a given power 0 < α < 1.

15.5 Area monotonic solution

The segment from the status quo point to a frontier point (f, g(f)) splits H* into two parts: A_1 (toward player 1), increasing in f, and A_2 (toward player 2), decreasing in f. The area monotonic solution is defined by A_1 = A_2 (or by the relative version A_1/A_2 = α/(1 − α)). Solve the equation

A_2 − A_1 = 0:

the left-hand side is strictly decreasing; at f = f_1* we have A_1 = 0, so A_2 − A_1 > 0, and at f = F_1 we have A_2 = 0, so A_2 − A_1 < 0; hence there is a unique solution.

15.6 Equal sacrifice solution

Both players decrease their maximum payoffs with equal speed until a feasible solution is found:

F_1 − f = F_2 − g(f).

The difference of the two sides is strictly decreasing in f; at f = f_1* it equals F_1 − f_1* > 0, and at f = F_1 it equals −(F_2 − f_2*) < 0, so there is a unique solution.

15.7 Kalai-Smorodinsky solution

An arc starts at the status quo point and moves toward the ideal point (F_1, F_2); the last feasible point is accepted as the solution. With slope

S = (F_2 − f_2*)/(F_1 − f_1*),

the solution solves

h(f) = S f − g(f) + f_2* − S f_1* = 0;

h is strictly increasing, h(f_1*) = f_2* − F_2 < 0 and h(F_1) = F_2 − f_2* > 0, so there is a unique solution.
Example 15.2 Consider the conflict with disagreement payoff vector (0, 0) and feasible set

H = { (x, y) | x, y ≥ 0, y ≤ 1 − x² }.

1. Nash solution:

maximize x(1 − x²) = x − x³ subject to 0 ≤ x ≤ 1.

The objective is zero at the endpoints; its derivative gives

1 − 3x² = 0,  x² = 1/3,  x = √3/3 ≈ 0.5774,  g(x) = 2/3 ≈ 0.6667,

so the Nash solution is (0.5774, 0.6667).

2. Area monotonic solution. The total area is

∫₀¹ (1 − x²) dx = [ x − x³/3 ]₀¹ = 2/3,

and the segment from (0, 0) to (u, 1 − u²) splits H into two equal parts when

u/2 + u³/6 = 1/3,  that is,  u³ + 3u − 2 = 0.

Newton's method with u_0 = 1 (f′(u) = 3u² + 3) gives u ≈ 0.596, so the solution is approximately (0.60, 0.64).

3. Equal sacrifice solution: F_1 = F_2 = 1, so 1 − x = 1 − (1 − x²), i.e.

x = 1 − x²,  x² + x − 1 = 0,  x = (√5 − 1)/2 ≈ 0.618.

4. Kalai-Smorodinsky solution: the ideal point is (1, 1) and the status quo is (0, 0), so the solution lies on the line y = x:

x = 1 − x²,  x² + x − 1 = 0,

the same equation as above, so x = y ≈ 0.618.
16 Multiobjective optimization

The payoff set H is the set of all values of f(x) when x runs through X. In the single-objective case a maximal solution has the properties:

(i) the maximal solution is at least as good as any other solution;
(ii) there is no better solution;
(iii) all maximal solutions are equivalent, i.e. they have the same objective value.

In the case of several objectives no such optimal solution exists in general. In the case of one objective, any two decisions x^(1) and x^(2) can be compared, since either f(x^(1)) > f(x^(2)), or f(x^(1)) = f(x^(2)), or f(x^(1)) < f(x^(2)). In the case of multiple objectives this is not true: for example, the objective vectors (1, 2) and (2, 1) cannot be compared.
A solution x* is nondominated if there is no x ∈ X with f_i(x) ≥ f_i(x*) for all i, and with strict inequality for at least one i. That is, no objective can be improved without worsening another one.

Example 16.1

maximize (x_1 + x_2, x_1 − x_2)
subject to x_1, x_2 ≥ 0, x_1 + 2x_2 ≤ 2.

With f_1 = x_1 + x_2 and f_2 = x_1 − x_2, adding and subtracting gives x_1 = (f_1 + f_2)/2 and x_2 = (f_1 − f_2)/2, so the constraints become

x_1 ≥ 0: f_2 ≥ −f_1;
x_2 ≥ 0: f_2 ≤ f_1;
x_1 + 2x_2 ≤ 2: (f_1 + f_2)/2 + (f_1 − f_2) ≤ 2, that is, 3f_1 − f_2 ≤ 4,

describing the feasible payoff set H.
Example 16.2 In the previous example the point (2, 2) is both weakly and strongly nondominated.

Note. If a solution is strongly nondominated then it is also weakly nondominated, but not conversely.

16.1 Existence of nondominated solutions

Example 16.4 There is a unique nondominated solution (earlier example).

Example 16.5 There are multiple nondominated solutions.

In Example 16.3 there is no nondominated solution. Even if the feasible set is bounded, there might be no nondominated solution:

Example 16.6

maximize (x_1, x_2) subject to x_1² + x_2² < 1;

the feasible set is open, so every feasible point is dominated by a feasible point closer to the boundary.
Lemma 16.1 Let f ∈ H be an interior point of H. Then f cannot be nondominated.

Proof. Take ε > 0 such that f + ε1 ∈ H; then f + ε1 > f, so f is dominated.

Theorem 16.1 If H is nonempty, closed and bounded, then there is at least one strongly nondominated solution.

Proof. Consider:

maximize g(f) = f_1 + f_2 + ⋯ + f_I subject to f ∈ H.    (52)

(i) The set

H(γ) = { f | f ∈ H, g(f) ≥ γ }

is bounded: f_i ≤ F_i and

f_i ≥ γ − Σ_{j≠i} f_j ≥ γ − Σ_{j≠i} F_j.

(ii) Let f* be an optimal solution of (52), which exists since the feasible set is compact (closed and bounded); f* is obviously strongly nondominated, since any f dominating it would have a larger objective value g(f).

Note. All conditions are essential, as the earlier Examples 16.3 and 16.6 show.

Corollary 1. Assume X is nonempty, closed and bounded and all f_i are continuous; then H is compact, so there is a strongly nondominated solution.

Corollary 2. Under the same conditions there is also at least one weakly nondominated solution.
Example 16.7 Four alternatives with three objectives:

alternative   f_1  f_2  f_3
    1          2    3    2
    2          1    4    3
    3          3    3    2
    4          4    2    5
16.2 Lexicographic method

Given: an importance order of the objectives, f_1 ≻ f_2 ≻ ⋯ ≻ f_I.

Step 1. max f_1(x) s. to x ∈ X; let the optimum be f_1*.

Step 2. max f_2(x) s. to x ∈ X, f_1(x) = f_1*.

⋮

Step k. max f_k(x) s. to x ∈ X, f_1(x) = f_1*, …, f_{k−1}(x) = f_{k−1}*.

⋮

The method terminates after I steps.
Example 16.8 Consider f_1 = x_1 + x_2, f_2 = x_1 − x_2 on the set x_1, x_2 ≥ 0, 3x_1 + x_2 ≤ 4, x_1 + 3x_2 ≤ 4; in the payoff plane the constraints are

f_2 ≥ −f_1, f_2 ≤ f_1, 2f_1 + f_2 ≤ 4, 2f_1 − f_2 ≤ 4.

If f_1 is maximized first and then f_2, the solution is (2, 0) in the payoff space, i.e. x_1 = 1, x_2 = 1. If f_2 is maximized first and then f_1, the solution is (4/3, 4/3), i.e. x_1 = 4/3, x_2 = 0.
Example 16.9 No interior point of the nondominated curve can be obtained as a solution of the lexicographic method: we lose most of the nondominated solutions by the method selection.

Example 16.10 For the discrete example

alternative   f_1  f_2  f_3
    1          2    3    2
    2          1    4    3
    3          3    3    2
    4          4    2    5

maximizing f_1 first, alternative 4 is selected and the other objectives never matter.

Problem: If there is a unique solution in an earlier step, the later objectives are not considered at all.

Modification. Relax the equality constraints with given tolerances ε_i > 0:

Step k: maximize f_k(x) s. to x ∈ X, f_1(x) ≥ f_1* − ε_1, …, f_{k−1}(x) ≥ f_{k−1}* − ε_{k−1}.

Theorem 16.3 The solution is weakly nondominated, and if the solution in step I is unique, then it is strongly nondominated.
16.3 The ε-constraint method

Given: the most important objective (say f_1) and minimal acceptable lower bounds ε_2, …, ε_I for all other objectives. Solve

maximize f_1(x)
subject to x ∈ X
f_2(x) ≥ ε_2
⋮
f_I(x) ≥ ε_I.    (53)

Theorem. The optimal solution of (53) is weakly nondominated (strongly, if unique), and conversely, every nondominated x* ∈ X is an optimal solution of (53) with suitable bounds.

Proof. Take ε_i = f_i(x*) for i ≥ 2.

Note. By this method selection we do not lose (strongly) nondominated solutions.
Example 16.12

maximize (x_1 + x_2, x_1 − x_2)
subject to x_1, x_2 ≥ 0, 3x_1 + x_2 ≤ 4, x_1 + 3x_2 ≤ 4.

Assume f_1 is the most important objective and the lower bound for f_2 is ε_2 = 1. Then f_2 = 1, and the binding constraint 2f_1 + f_2 = 4 gives f_1 = 3/2, so

x_1 = (f_1 + f_2)/2 = (3/2 + 1)/2 = 5/4,  x_2 = (f_1 − f_2)/2 = (3/2 − 1)/2 = 1/4.
Example 16.13 For the discrete example (alternatives (2,3,2), (1,4,3), (3,3,2), (4,2,5)), with f_1 most important and given lower bounds on f_2 and f_3, the alternatives violating the bounds are deleted and the remaining alternative with the largest f_1 is selected.
16.4 Weighting method

Given: positive weights c_1, c_2, …, c_I with Σ_{i=1}^I c_i = 1. Solve

maximize c_1 f_1(x) + ⋯ + c_I f_I(x) subject to x ∈ X.    (54)

Theorem. The optimal solution x* of (54) is strongly nondominated.

Proof. Assume x ∈ X is such that f_i(x) ≥ f_i(x*) for all i with strict inequality for some i. Then Σ_i c_i f_i(x) > Σ_i c_i f_i(x*), contradicting the optimality of x*.

Theorem 16.7 Assume H is convex and x* is strongly nondominated. Then there are nonnegative weights such that x* is an optimal solution of problem (54).
Proof. Let

H* = { g | there exists f ∈ H such that g ≤ f },

which is also convex and has the same nondominated points as H. Since f* = f(x*) is nondominated, it is a boundary point of H*, so there is a supporting hyperplane at f*: an I-dimensional nonzero vector c = (c_i) such that

c^T (f − f*) ≤ 0 for all f ∈ H*.

The weights are nonnegative: if c_i < 0 for some i, then f = f* − ε e_i ∈ H* for ε > 0, and

c^T (f − f*) = −ε c^T e_i = −ε c_i > 0,

a contradiction. Notice finally that H ⊆ H* and f* ∈ H, so for all f ∈ H, c^T f ≤ c^T f*, i.e. x* is optimal for (54) with the weight vector c (normalized so that the weights sum to one).
Questions:

1. How can we guarantee that H is convex?
2. Can the weights be chosen positive?

For Question 1:

Theorem 16.8 Assume X is convex and all f_i are linear. Then H is convex.

Proof. For f^1, f^2 ∈ H there exist x^1, x^2 ∈ X such that f^1 = f(x^1) and f^2 = f(x^2). Then for 0 ≤ λ ≤ 1, by linearity,

λ f^1 + (1 − λ) f^2 = f( λ x^1 + (1 − λ) x^2 ) ∈ H.

Example 16.16 If X is convex and all f_i are concave, then H still might be nonconvex: e.g. for X = [0, 1] with a linear f_1 and a strictly concave f_2, H is the graph of a strictly concave function, which is a nonconvex set.

Theorem 16.9 Assume that X is convex and for all x ∈ X and g ≤ f(x) there exists x′ ∈ X such that g = f(x′) (that is, H is comprehensive). Assume furthermore that all f_i are concave. Then H is convex.

Proof. For f^1, f^2 ∈ H there exist x^1, x^2 ∈ X such that f^1 = f(x^1) and f^2 = f(x^2). Then for 0 ≤ λ ≤ 1, by the concavity of the f_i's,

λ f^1 + (1 − λ) f^2 ≤ f( λ x^1 + (1 − λ) x^2 ) ∈ H,

so by the comprehensiveness assumption there exists x ∈ X such that λ f^1 + (1 − λ) f^2 = f(x) ∈ H.

For Question 2:

Theorem 16.10 If X is a polyhedron, all f_i are linear, and x* is a strongly nondominated solution, then there are positive weights c_i such that x* is an optimal solution of (54).

Proof. Uses the duality theorem twice; complicated, not presented.
Example 16.17

maximize (x_1 + x_2, x_1 − x_2)
subject to x_1, x_2 ≥ 0, 3x_1 + x_2 ≤ 4, x_1 + 3x_2 ≤ 4.

With weights c_1 = c_2 = 1/2,

c_1(x_1 + x_2) + c_2(x_1 − x_2) = x_1,

so the problem becomes

maximize x_1 subject to x_1, x_2 ≥ 0, 3x_1 + x_2 ≤ 4, x_1 + 3x_2 ≤ 4,

with optimal solution x_1 = 4/3, x_2 = 0.

Difficulties: if the objectives are measured in different units, the composite objective Σ_{i=1}^I c_i f_i(x) has no meaning.
Solution for this problem: normalizing the objectives to satisfaction levels. If a utility function is given for objective i, then U_i(f_i) = satisfaction level of objective value f_i on the scale [0, 1]. The simplest choice is the linear normalization

f̄_i(x) = ( f_i(x) − m_i ) / ( M_i − m_i ),

without units, where m_i and M_i are the worst and best values of objective i; then Σ_{i=1}^I c_i f̄_i(x) means an overall satisfaction level on the scale between worst and best case scenarios.

Example 16.18 For the problem

maximize (x_1 + x_2, x_1 − x_2)
subject to x_1, x_2 ≥ 0, 3x_1 + x_2 ≤ 4, x_1 + 3x_2 ≤ 4,

we have M_1 = 2, m_1 = 0, M_2 = 4/3, m_2 = −4/3, so

f̄_1 = f_1/2,  f̄_2 = (f_2 + 4/3)/(8/3) = (3f_2 + 4)/8.

With c_1 = c_2 = 1/2 the composite objective is

(1/2)(f_1/2) + (1/2)(3f_2 + 4)/8 = (7x_1 + x_2)/16 + 1/4,

so we maximize 7x_1 + x_2; the optimal solution is x_1 = 4/3, x_2 = 0.
Example 16.19 Let c_1 = 1, c_2 = 3, c_3 = 2.

alternative | objectives | Σ c_i f_i
    1       |  2  3  2   | 2 + 9 + 4 = 15
    2       |  1  4  3   | 1 + 12 + 6 = 19
    3       |  3  3  2   | 3 + 9 + 4 = 16
    4       |  4  2  5   | 4 + 6 + 10 = 20

With the raw weighted sums, alternative 4 would be selected. Normalizing with

i | m_i | M_i
1 |  1  |  4
2 |  2  |  4
3 |  2  |  5

and c_1 = 1, c_2 = 3, c_3 = 2:

alternative | normalized objectives | Σ c_i f̄_i
    1       |  1/3  1/2  0          | 1/3 + 3/2 + 0 = 11/6
    2       |  0    1    1/3        | 0 + 3 + 2/3 = 11/3
    3       |  2/3  1/2  0          | 2/3 + 3/2 + 0 = 13/6
    4       |  1    0    1          | 1 + 0 + 2 = 3

Alternative 2 is selected.
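The two calculations above can be sketched in a few lines of Python (data and weights from Example 16.19; variable names are ours):

```python
# Weighting method: raw weighted sums select alternative 4, but after
# normalizing each objective to [0, 1] the satisfaction levels select
# alternative 2.
f = [(2, 3, 2), (1, 4, 3), (3, 3, 2), (4, 2, 5)]  # alternatives x objectives
c = (1, 3, 2)                                      # objective weights

raw = [sum(ci * fi for ci, fi in zip(c, row)) for row in f]

m = [min(col) for col in zip(*f)]   # worst value of each objective
M = [max(col) for col in zip(*f)]   # best value of each objective
norm = [sum(ci * (fi - mi) / (Mi - mi)
            for ci, fi, mi, Mi in zip(c, row, m, M)) for row in f]

best_raw = raw.index(max(raw)) + 1    # alternative 4
best_norm = norm.index(max(norm)) + 1  # alternative 2
```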
16.5 Distance-based methods

Given: an ideal point f* (computed or subjectively given) and a distance function ρ of I-dimensional vectors.

minimize ρ(f*, f(x)) s. to x ∈ X,    (55)
or
minimize ρ(f*, f) s. to f ∈ H.

Commonly used distances:

ρ_∞(u, v) = max_i {c_i |u_i − v_i|}    (weighted l_∞-distance)

ρ_1(u, v) = Σ_{i=1}^I c_i |u_i − v_i|    (weighted l_1-distance)

ρ_2(u, v) = (Σ_{i=1}^I c_i |u_i − v_i|^2)^(1/2)    (weighted l_2-distance)

ρ_G(u, v) = Π_{i=1}^I |u_i − v_i|^{c_i}    (weighted geometric distance)
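A minimal sketch of the four distances (function names are ours, not from the notes):

```python
import math

def rho_inf(u, v, c):   # weighted l-infinity distance
    return max(ci * abs(ui - vi) for ci, ui, vi in zip(c, u, v))

def rho_1(u, v, c):     # weighted l1 distance
    return sum(ci * abs(ui - vi) for ci, ui, vi in zip(c, u, v))

def rho_2(u, v, c):     # weighted l2 distance
    return math.sqrt(sum(ci * (ui - vi) ** 2 for ci, ui, vi in zip(c, u, v)))

def rho_G(u, v, c):     # weighted geometric "distance"
    return math.prod(abs(ui - vi) ** ci for ci, ui, vi in zip(c, u, v))

# rho_G is not a true metric: with I = 2 and c = (1, 1) it vanishes on
# distinct points, and the triangle inequality fails.
c = (1, 1)
zero = rho_G((2, 3), (2, 2), c)                            # 0 despite (2,3) != (2,2)
tri = rho_G((1, 1), (1, 3), c) + rho_G((1, 3), (3, 3), c)  # 0
direct = rho_G((1, 1), (3, 3), c)                          # 4 > 0
```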
The geometric distance is not a true distance function. Let I = 2 and c_1 = c_2 = 1. Then
ρ_G((2, 3), (2, 2)) = |2 − 2| · |3 − 2| = 0, even if (2, 3) ≠ (2, 2),
and the triangle inequality
ρ_G((1, 1), (1, 3)) + ρ_G((1, 3), (3, 3)) ≥ ρ_G((1, 1), (3, 3))
would require
0 + 0 ≥ |3 − 1| · |3 − 1| = 4,
a contradiction.
Theorem 16.11 The optimal solution of (55) is always weakly nondominated, and if the optimal solution is unique, then it is strongly nondominated.
Proof.
Obvious.
Example 16.20
maximize (x_1, x_2)
s. to x_1, x_2 ≥ 0, 3x_1 + x_2 ≤ 4, x_1 + 3x_2 ≤ 4.
So f_1 = x_1 and f_2 = x_2, and the ideal point is f* = (4/3, 4/3). With c_1 = c_2 = 1/2:
for ρ = ρ_1, ρ_2, ρ_∞ the optimal solution is x* = (1, 1) in all three cases;
for ρ = ρ_G the minimum is attained at (4/3, 0) and (0, 4/3) with ρ_G = 0.
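The claim for the l_1, l_2 and l_∞ cases can be checked with a crude grid search over the feasible set (an illustration only, using the ideal point (4/3, 4/3) computed above):

```python
# Grid search for Example 16.20: minimize the distance from the ideal point
# (4/3, 4/3) over {x >= 0, 3x1 + x2 <= 4, x1 + 3x2 <= 4}.  Integer
# constraint checks (coordinates i/300) avoid floating-point edge effects.
ideal = (4/3, 4/3)
pts = [(i/300, j/300) for i in range(401) for j in range(401)
       if 3*i + j <= 1200 and i + 3*j <= 1200]

def nearest(dist):
    return min(pts, key=dist)

l1 = nearest(lambda p: abs(p[0] - ideal[0]) + abs(p[1] - ideal[1]))
l2 = nearest(lambda p: (p[0] - ideal[0])**2 + (p[1] - ideal[1])**2)
linf = nearest(lambda p: max(abs(p[0] - ideal[0]), abs(p[1] - ideal[1])))
```

All three searches return the vertex (1, 1), in agreement with the example.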
Special case: the objectives are normalized,
f̄_i(x) = (f_i(x) − m_i) / (M_i − m_i),
so that the ideal point is (1, 1, . . . , 1).
Example 16.21 Consider again the data of Example 16.19, now with weights c_1 = c_2 = c_3 = 1:

alternative | objectives
    1       |  2  3  2
    2       |  1  4  3
    3       |  3  3  2
    4       |  4  2  5

As a practical application, consider five water-management alternatives evaluated by three criteria:

                        A1       A2       A3       A4       A5
Cost saving ($)         1.4·10^6 1.8·10^6 2.0·10^6 2.1·10^6 2.1·10^6
Resiliency (%)          87       76       91       89       77
−Vulnerability (scale)  34       41       29       40       44

Here:
reliability: P(the system works until a given time period T);
resiliency: how quickly a system is likely to recover or bounce back from failure once failure has occurred;
vulnerability: for each failure event X let s(X) denote a numerical indicator of how severe the consequence of the failure is, and p(X) the probability that it occurs. Then
v = Σ_{X ∈ all failure modes} s(X) p(X).

Using the geometric distance, the products of the criterion-wise differences are:
A1: 0 · 11 · 5 = 0
A2: 0.4·10^6 · 0 · 12 = 0
A3: 0.6·10^6 · 15 · 0 = 0
A4: 0.7·10^6 · 13 · 11 = 1.001·10^8
A5: 0.7·10^6 · 1 · 0 = 0
A4 is selected.
Note. With the geometric distance this is the same as Nash's bargaining solution. Another way: finding the feasible point with maximal distance from the ideally worst point (the nadir).

16.6 Direction-based methods

Given: a status quo point (worst case scenario or current situation) and a vector of improvement directions. Starting from the status quo point we improve the objectives in the given direction as much as we can. This is similar to the Kalai–Smorodinsky solution.
17 Dynamic games

17.1 Cournot oligopoly

N = number of firms;
x_k = output of firm k, k = 1, . . . , N;
p(Σ_{l=1}^N x_l) = unit price of the product;
C_k(x_k) = cost of firm k, k = 1, . . . , N;
X_k = [0, L_k], where L_k = capacity of firm k, k = 1, . . . , N.

Profit of firm k:
φ_k(x_1, . . . , x_N) = x_k p(Σ_{l=1}^N x_l) − C_k(x_k).
17.2 Best response dynamics

The best response of firm k is

R_k(s_k) = 0      if ∂φ_k/∂x_k ≤ 0 at x_k = 0;
         = L_k    if ∂φ_k/∂x_k ≥ 0 at x_k = L_k;
         = z_k    otherwise,

with s_k = Σ_{l≠k} x_l, where z_k is the unique solution of
p(s_k + z_k) + z_k p′(s_k + z_k) − C_k′(z_k) = 0.
Under the assumptions
p′ < 0;  p′ + x_k p″ ≤ 0;  p′ − C_k″ < 0,
the best response functions are differentiable (except at the boundary points between the three cases) and
−1 < R_k′(s_k) ≤ 0.

In the case of discrete time scales the dynamic model is:

x_k(t + 1) = x_k(t) + K_k [R_k(Σ_{l≠k} x_l(t)) − x_k(t)]    (1 ≤ k ≤ N),    (56)

where 0 < K_k ≤ 1 for all k. In the continuous case,

ẋ_k(t) = K_k [R_k(Σ_{l≠k} x_l(t)) − x_k(t)]    (1 ≤ k ≤ N),    (57)

where K_k > 0 for all k.
Lemma. For N-dimensional vectors a and b,
det(I + a b^T) = 1 + b^T a.

Proof. By induction. If N = 1, det(1 + ab) = 1 + ab = 1 + ba. For general N,

D_N = det
[ 1 + a_1 b_1   a_1 b_2       . . .  a_1 b_{N−1}          a_1 b_N     ]
[ a_2 b_1       1 + a_2 b_2   . . .  a_2 b_{N−1}          a_2 b_N     ]
[   .               .                      .                  .       ]
[ a_{N−1} b_1   a_{N−1} b_2   . . .  1 + a_{N−1} b_{N−1}  a_{N−1} b_N ]
[ a_N b_1       a_N b_2       . . .  a_N b_{N−1}          1 + a_N b_N ]

Subtract (a_N / a_{N−1}) times row N − 1 from row N, then (a_{N−1} / a_{N−2}) times row N − 2 from row N − 1, and so on, finally (a_2 / a_1) times row 1 from row 2. Then row k (k ≥ 2) becomes (0, . . . , 0, −a_k / a_{k−1}, 1, 0, . . . , 0). Expanding the resulting determinant along its last column (whose only nonzero entries are a_1 b_N in row 1 and 1 in row N),

D_N = D_{N−1} + (−1)^{1+N} a_1 b_N · (−1)^{N−1} (a_N / a_1) = D_{N−1} + a_N b_N,

so by the induction hypothesis

D_N = 1 + Σ_{k=1}^{N−1} a_k b_k + a_N b_N = 1 + Σ_{k=1}^N a_k b_k = 1 + b^T a.

(If some a_k = 0, the identity follows by continuity.)
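A quick numerical check of the lemma and the corollary below (fixed test vectors, names ours):

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0, 0.5, 3.0, -2.0])
b = np.array([0.5, -1.0, 2.0, 1.0, -0.5, 1.0])
N = len(a)

# Lemma: det(I + a b^T) = 1 + b^T a
lhs = np.linalg.det(np.eye(N) + np.outer(a, b))
rhs = 1 + b @ a                       # here b @ a = -6.5

# Corollary: matrix with diagonal a_k, all off-diagonal entries of row k
# equal to b_k, written as diag(a - b) + b 1^T
M = np.diag(a - b) + np.outer(b, np.ones(N))
lhs2 = np.linalg.det(M)
rhs2 = np.prod(a - b) * (1 + np.sum(b / (a - b)))
```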
Corollary. Consider the N × N matrix

D =
[ a_1  b_1  . . .  b_1 ]
[ b_2  a_2  . . .  b_2 ]
[  .    .           .  ]
[ b_N  b_N  . . .  a_N ]

with diagonal entries a_k and all off-diagonal entries of row k equal to b_k. Writing D = diag(a_1 − b_1, . . . , a_N − b_N) + b 1^T with
b = (b_1, b_2, . . . , b_N)^T and 1^T = (1, 1, . . . , 1),
the lemma gives

det D = Π_{k=1}^N (a_k − b_k) · (1 + Σ_{k=1}^N b_k / (a_k − b_k)).
The Jacobian of the discrete system (56) has the form

J_D =
[ 1 − K_1   K_1 r_1   . . .  K_1 r_1 ]
[ K_2 r_2   1 − K_2   . . .  K_2 r_2 ]
[   .          .                .    ]
[ K_N r_N   K_N r_N   . . .  1 − K_N ]

where r_k = R_k′. Applying the corollary with a_k = 1 − K_k − λ and b_k = K_k r_k, the characteristic polynomial is

φ(λ) = Π_{k=1}^N (1 − K_k(1 + r_k) − λ) · (1 + Σ_{k=1}^N K_k r_k / (1 − K_k(1 + r_k) − λ)).    (58)

The eigenvalues are therefore the values λ = 1 − K_k(1 + r_k) (where these factors are repeated) and the roots of

g(λ) = 1 + Σ_{k=1}^N K_k r_k / (1 − K_k(1 + r_k) − λ) = 0.    (59)

The values 1 − K_k(1 + r_k) are between −1 and 1 if and only if
K_k < 2 / (1 + r_k),
which is always the case, since 0 < K_k ≤ 1 and −1 < r_k ≤ 0. Notice that

lim_{λ → ±∞} g(λ) = 1,

that g(λ) → +∞ as λ decreases to a pole 1 − K_k(1 + r_k) and g(λ) → −∞ as λ increases to it, and that

g′(λ) = Σ_{k=1}^N K_k r_k / (1 − K_k(1 + r_k) − λ)^2 < 0.

(If r_k = 0 for some k, the corresponding terms vanish and the number of poles is less than N.)
Figure. Shape of g(λ).
So there are N − 1 roots between the poles and one before the first pole. All roots are between −1 and 1 if and only if g(−1) > 0, that is,

Σ_{k=1}^N K_k r_k / (2 − K_k(1 + r_k)) > −1.    (60)

In the linear case, p(s) = A − Bs and C_k(x_k) = c_k x_k + d_k, so the interior best response z_k satisfies
A − B s_k − 2B x_k − c_k = 0,
so
z_k = −(1/2) s_k + (A − c_k) / (2B).
Therefore r_k = −1/2, and (60) becomes

Σ_{k=1}^N K_k / (4 − K_k) < 1.

Assume K_k ≡ K for all k; then the condition is N K / (4 − K) < 1, or

K < 4 / (N + 1).
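A simulation sketch of the discrete process (56) in the linear case (the values A = 100, B = 1 and the cost coefficients are our illustrative choices; with N = 4 identical speeds the stability threshold (60) is K = 4/(N + 1) = 0.8):

```python
# Best-response adjustment (56) for the linear oligopoly p(s) = A - Bs,
# C_k(x_k) = c_k x_k + d_k, where R_k(s) = (A - c_k)/(2B) - s/2 (clamped
# at zero).  With K = 0.75 < 0.8 the process converges to the Cournot
# equilibrium; values of K above 0.8 would violate condition (60).
A, B = 100.0, 1.0
c = [10.0, 12.0, 14.0, 16.0]
N = len(c)

def R(k, s):                        # best response of firm k to rivals' total s
    return max((A - c[k]) / (2 * B) - s / 2, 0.0)

def iterate(K, T):
    x = [1.0] * N
    for _ in range(T):
        total = sum(x)
        x = [xk + K * (R(k, total - xk) - xk) for k, xk in enumerate(x)]
    return x

# closed-form Cournot equilibrium of the linear model
xeq = [(A - (N + 1) * c[k] + sum(c)) / (B * (N + 1)) for k in range(N)]

x = iterate(K=0.75, T=500)          # converges to xeq
```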
In the continuous case (57), the Jacobian is

J_C =
[ −K_1     K_1 r_1   . . .  K_1 r_1 ]
[ K_2 r_2  −K_2      . . .  K_2 r_2 ]
[   .         .                .    ]
[ K_N r_N  K_N r_N   . . .  −K_N    ]
= J_D − I,

and since the eigenvalues of J_C are those of J_D decreased by 1, and all eigenvalues of J_D are real and less than 1, all eigenvalues of J_C are negative: the continuous system is always asymptotically stable.
18 Controllability in oligopolies

Let N denote the number of firms, let p(s) = A − Bs (A, B > 0) be the price function and for k = 1, 2, . . . , N , let C_k(x_k) = c_k x_k + d_k (c_k, d_k > 0) denote the cost function of firm k. Assume furthermore that the market is controlled with the cost functions of the firms, which can be interpreted as subsidies, tax breaks, etc. Under this assumption the profit of firm k can be expressed as

φ_k(x_1, . . . , x_N) = x_k (A − B Σ_{l=1}^N x_l) − u (c_k x_k + d_k),    (61)

where u is the control variable, or as

φ_k(x_1, . . . , x_N) = x_k (A − B Σ_{l=1}^N x_l) − (c_k x_k u + d_k).    (62)

The resulting dynamic models will be the same regardless which of the controls (61) or (62) is assumed. So in the following discussion, only the control (61) will be considered. Assuming discrete time scales and best response dynamics (K_k = 1), at period t firm k maximizes
x_k (A − B x_k − B Σ_{l≠k} x_l(t − 1)) − (c_k x_k + d_k) u(t − 1)
given the data of period t − 1, so

x_k(t) = −(1/2) Σ_{l≠k} x_l(t − 1) + (A − u(t − 1) c_k) / (2B)    (k = 1, 2, . . . , N).
Introduce the new state variables
z_k(t) = x_k(t) − A / ((N + 1)B)
to have a discrete control system

z_k(t) = −(1/2) Σ_{l≠k} z_l(t − 1) − (c_k / (2B)) u(t − 1)    (k = 1, 2, . . . , N),    (63)

that is,

z(t) = A z(t − 1) + b u(t − 1),    (64)

where

z = (z_1, z_2, . . . , z_N)^T,
A =
[ 0     −1/2  . . .  −1/2 ]
[ −1/2  0     . . .  −1/2 ]
[  .      .            .  ]
[ −1/2  −1/2  . . .  0    ]
and b = (−c_1/(2B), −c_2/(2B), . . . , −c_N/(2B))^T.

It is well known from the theory of linear systems that system (63) is completely controllable if and only if the rank of the Kalman matrix
K = (b, A b, A² b, . . . , A^{N−1} b)
is N.
Consider first the special case of a duopoly (that is, when N = 2). In this case

K = (b, A b) =
[ −c_1/(2B)   c_2/(4B) ]
[ −c_2/(2B)   c_1/(4B) ]

so

det(K) = −c_1²/(8B²) + c_2²/(8B²),

which is nonzero, and hence the system is completely controllable, if and only if c_1 ≠ c_2.
Assume next that N ≥ 3. We show that rank(K) < N, that is, the system is not completely controllable. Notice that

A = (1/2)(I − E),

where I is the N × N identity matrix and E is the N × N matrix all elements of which equal 1. Since E² = N E and E = I − 2A,

A² = (1/4)(I − 2E + E²) = (1/4)(I + (N − 2)E) = ((N − 1)/4) I + ((2 − N)/2) A,

and therefore

A² b = ((N − 1)/4) b + ((2 − N)/2) A b.

So A² b, and by induction every A^j b with j ≥ 2, is a linear combination of b and A b; hence rank(K) ≤ 2 < N.
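The rank computations can be reproduced numerically (a sketch; `kalman_rank` is our helper name):

```python
# Kalman matrix rank for system (64): A has zero diagonal and -1/2
# off-diagonal, b_k = -c_k/(2B).  Duopoly: rank 2 iff c1 != c2.
# For N >= 3 the rank never exceeds 2, since A^2 b lies in span{b, Ab}.
import numpy as np

def kalman_rank(c, B=1.0):
    N = len(c)
    A = -0.5 * (np.ones((N, N)) - np.eye(N))
    b = -np.array(c) / (2 * B)
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(N)])
    return int(np.linalg.matrix_rank(K))

r2 = kalman_rank([1.0, 2.0])              # duopoly, c1 != c2: controllable
r2e = kalman_rank([1.0, 1.0])             # c1 = c2: rank 1
r4 = kalman_rank([1.0, 2.0, 3.0, 4.0])    # N = 4: rank 2 < N
```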
Let us modify the above model by assuming that different firms are controlled by different control variables. The modified model can be written as

z(t) = A z(t − 1) + B u(t − 1),    (65)

where A is as before,
B = diag(−c_1/(2B), −c_2/(2B), . . . , −c_N/(2B)),
and u is an N-dimensional control vector. The Kalman matrix is now
K = (B, A B, A² B, . . . , A^{N−1} B).
Since B itself is nonsingular, the first N columns of K are linearly independent. Therefore rank(K) = N, showing that the system is completely controllable.

Assume next that the time scale is continuous. The dynamic system now has the form
ẋ_k(t) = K_k [−(1/2) Σ_{l≠k} x_l(t) + (A − c_k u(t)) / (2B) − x_k(t)]    (k = 1, 2, . . . , N).

By introducing
K̄_k = K_k / (2B)    (k = 1, 2, . . . , N),
it can be rewritten as

ẋ_k(t) = K̄_k [−2B x_k(t) − B Σ_{l≠k} x_l(t) + A − c_k u(t)]    (k = 1, 2, . . . , N),

where K̄_k is a positive constant for all k. Introduce the new state variables z_k(t) = x_k(t) − A/((N + 1)B) to have the continuous control system
ż(t) = A z(t) + b u(t)
with

A = K̄
[ −2B  −B   . . .  −B  ]
[ −B   −2B  . . .  −B  ]
[  .     .           . ]
[ −B   −B   . . .  −2B ]
,
b = K̄ (−c_1, −c_2, . . . , −c_N)^T and K̄ = diag(K̄_1, K̄_2, . . . , K̄_N).
Consider first the special case of a duopoly (that is, when N = 2). In this case the Kalman matrix is the following:

K = (b, A b) =
[ −K̄_1 c_1   B(2 K̄_1² c_1 + K̄_1 K̄_2 c_2) ]
[ −K̄_2 c_2   B(K̄_1 K̄_2 c_1 + 2 K̄_2² c_2) ]

which has full rank if and only if

K̄_1 c_1² + 2(K̄_2 − K̄_1) c_1 c_2 − K̄_2 c_2² ≠ 0.

For example, if K̄_1 = K̄_2, this condition reduces to c_1 ≠ c_2. If N ≥ 3, then the sufficient and necessary controllability conditions are even more complicated. However, if K̄_1 = K̄_2 = · · · = K̄_N = K̄, then

A = −B K̄ (I + E),

and the same argument as in the discrete case shows that A² b is a linear combination of b and A b, so the system is not completely controllable.
19 Social choice

19.1 Restaurant selection

Attributes considered:
Quality/taste
Quantity/amount
Price
Service/speed
Distance to restaurant

Problem: find a collective decision.
Step 1: individual preferences are assessed.
Step 2: the collective choice is made.
Decision maker No. 1, individual preferences:

Attributes               Importance |  C     A     M     I     N
A1 quality (0-100)       0.3        |  75    80    95    65    0
A2 quantity (0-100)      0.3        |  80    60    70    70    0
A3 price ($2-12)         0.2        |  5     8     7     11    0
A4 service (minutes)     0.1        |  15    25    20    35    0
A5 distance (miles)      0.1        |  0.7   0.8   1.5   2.75  0

A1 and A2 are already expressed as satisfaction levels; A3, A4 and A5 must be transformed the same way: how satisfied one is with an attribute. We cannot use the numbers directly from the table; money, minutes and miles cannot be compared and added directly. We need common measures of satisfaction to be introduced and used.

For price, a linear value function is used:
V(P) = 100 − (100/12) P,
so
V(5) = 58.3%, V(8) = 33.3%, V(7) = 41.7%, V(11) = 8.3%, V(0) = 100%.
If people are in a hurry, then 20 minutes is considered slow service, so 25% satisfaction is given there. Between 0 and 20 minutes, and between 20 and 40 minutes, the value function decreases linearly:

V_1(T) = 100 − (75/20) T for 0 ≤ T ≤ 20,
V_2(T) = 25 − (25/20)(T − 20) for 20 < T ≤ 40,

so
V_1(15) = 43.8%, V_1(20) = 25%, V_2(25) = 18.8%, V_2(35) = 6.3%, V_1(0) = 100%.
Similar reasoning applies for distance as for service time; 4 miles is already considered a large distance:

V_1(D) = 100 − (300/16) D for 0 ≤ D ≤ 4,
V_2(D) = 25 − (100/16)(D − 4) for D > 4,

so
V_1(0.7) = 86.9%, V_1(0.8) = 85%, V_1(1.5) = 71.9%, V_1(2.75) = 48.4%, V_1(0) = 100%.
After the transformations:

Attributes   Importance |  C     A     M     I     N
A1           0.3        |  75    80    95    65    0
A2           0.3        |  80    60    70    70    0
A3           0.2        |  58.3  33.3  41.7  8.3   100
A4           0.1        |  43.8  18.8  25    6.3   100
A5           0.1        |  86.9  85    71.9  48.4  100

The individual overall satisfaction level for each alternative is the weighted average:
For C: 75(0.3) + 80(0.3) + 58.3(0.2) + 43.8(0.1) + 86.9(0.1) = 71.23
For A: 80(0.3) + 60(0.3) + 33.3(0.2) + 18.8(0.1) + 85(0.1) = 59.04
For M: 95(0.3) + 70(0.3) + 41.7(0.2) + 25(0.1) + 71.9(0.1) = 67.53
For I: 65(0.3) + 70(0.3) + 8.3(0.2) + 6.3(0.1) + 48.4(0.1) = 47.63
For N: 0(0.3) + 0(0.3) + 100(0.2) + 100(0.1) + 100(0.1) = 40

In summary, the preference ranking of decision maker 1 is
C ≻ M ≻ A ≻ I ≻ N.
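The weighted averages can be reproduced with a short script (satisfaction values from the transformed table):

```python
# Overall satisfaction of decision maker 1 for each restaurant alternative.
table = {                       # A1, A2, price, service, distance satisfaction
    "C": (75, 80, 58.3, 43.8, 86.9),
    "A": (80, 60, 33.3, 18.8, 85.0),
    "M": (95, 70, 41.7, 25.0, 71.9),
    "I": (65, 70, 8.3, 6.3, 48.4),
    "N": (0, 0, 100.0, 100.0, 100.0),
}
w = (0.3, 0.3, 0.2, 0.1, 0.1)   # importance weights

score = {alt: sum(wi * vi for wi, vi in zip(w, vals))
         for alt, vals in table.items()}
ranking = sorted(score, key=score.get, reverse=True)   # C, M, A, I, N
```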
Suppose now that there are four decision makers with power factors 5, 8, 6 and 10, whose rankings of the alternatives (1 = best) are:

      DM1  DM2  DM3  DM4
C     1    3    1    4
A     3    1    4    5
M     2    4    5    2
I     4    5    2    1
N     5    2    3    3
POWER 5    8    6    10

Plurality voting: decision makers 1 and 3 choose C, decision maker 2 chooses A and decision maker 4 chooses I as their first choice. Using the power factors, the total for C is 5 + 6 = 11, for A it is 8 and for I it is 10, so C is selected. The drawback with this technique is that it only considers first choices; later ratings are ignored.

Borda count: the weighted sums of the rankings are
C: 1·5 + 3·8 + 1·6 + 4·10 = 75
A: 3·5 + 1·8 + 4·6 + 5·10 = 97
M: 2·5 + 4·8 + 5·6 + 2·10 = 92
I: 4·5 + 5·8 + 2·6 + 1·10 = 82
N: 5·5 + 2·8 + 3·6 + 3·10 = 89
The smallest number is at C, so C is selected.

Hare system: successively delete the alternative with the smallest first-choice power total. Deleting N, the new table is

      DM1  DM2  DM3  DM4
C     1    2    1    3
A     3    1    3    4
M     2    3    4    2
I     4    4    2    1
POWER 5    8    6    10

Deleting M, the new table is

      DM1  DM2  DM3  DM4
C     1    2    1    2
A     2    1    3    3
I     3    3    2    1
POWER 5    8    6    10

Deleting A, the new table is

      DM1  DM2  DM3  DM4
C     1    1    1    2
I     2    2    2    1
POWER 5    8    6    10

The total weight is 5 + 8 + 6 = 19 for C and 10 for I, so C is selected.
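A sketch of the weighted Hare procedure (rankings and powers as above; ties in the elimination step are broken alphabetically here, which deletes M before N but leads to the same winner):

```python
# Weighted Hare system: repeatedly delete the alternative whose total
# first-place power is smallest among the remaining alternatives.
ranks = {                 # rank given by DM1..DM4 (1 = best)
    "C": (1, 3, 1, 4),
    "A": (3, 1, 4, 5),
    "M": (2, 4, 5, 2),
    "I": (4, 5, 2, 1),
    "N": (5, 2, 3, 3),
}
power = (5, 8, 6, 10)

alts = set(ranks)
while len(alts) > 1:
    # power supporting each remaining alternative as a first choice
    first = {a: sum(p for i, p in enumerate(power)
                    if ranks[a][i] == min(ranks[b][i] for b in alts))
             for a in alts}
    alts.remove(min(alts, key=lambda a: (first[a], a)))

winner = next(iter(alts))   # C
```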
This technique utilizes pair-wise comparisons: the better of two options is chosen by the weighted majority of the individuals, the winner is compared with the next alternative, and so on until the final choice is reached. Starting from the top:

C against A: C is preferred by decision makers 1, 3 and 4, so C wins by 5 + 6 + 10 = 21 against 8.
C against M: C wins by 5 + 8 + 6 = 19 against 10.
C against I: C wins by 5 + 8 + 6 = 19 against 10.
C against N: C is preferred only by decision makers 1 and 3 (5 + 6 = 11), while N is preferred by decision makers 2 and 4 (8 + 10 = 18), so N is the winner.

So N is the collective choice. The drawback with this technique is that the choice may depend on the order in which comparisons are made.

By reversing the order of pair-wise comparisons, starting from the bottom: N against I, where I wins by 5 + 6 + 10 = 21 against 8. Comparing I with the remaining alternatives, I defeats M and A, but in the final comparison C defeats I by 19 against 10, so C is the winner.

Again, the drawback with this technique is that the choice may depend on the order in which comparisons are made. Contradictory preferences may occur at the collective level.
By comparing all pairs, the preference graph can be drawn. Some parts are consistent; for example, C ≻ A and C ≻ M. However,
C ≻ I and I ≻ N, but N ≻ C.
Contradiction! The collective preference contains a cycle, even though each individual preference is consistent.
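The cycle can be verified by computing all weighted pairwise comparisons:

```python
# Weighted pairwise majority comparisons for the restaurant example,
# exhibiting the cycle C > I, I > N, N > C.
ranks = {
    "C": (1, 3, 1, 4),
    "A": (3, 1, 4, 5),
    "M": (2, 4, 5, 2),
    "I": (4, 5, 2, 1),
    "N": (5, 2, 3, 3),
}
power = (5, 8, 6, 10)

def beats(a, b):
    va = sum(p for ra, rb, p in zip(ranks[a], ranks[b], power) if ra < rb)
    vb = sum(p for ra, rb, p in zip(ranks[a], ranks[b], power) if rb < ra)
    return va > vb

cycle = beats("C", "I") and beats("I", "N") and beats("N", "C")
```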
19.2 Family car selection

Attributes considered:
Style/look
Speed
Safety
Maintenance
Gas usage
Price
Comfort

Problem: find a collective family decision.
Step 1: individual preferences are assessed.
Step 2: the collective choice is made.
Father, individual preferences:

Attributes               Importance |  V    GM   DS   C
A1 style/look (0-100)    0.05       |  90   55   75   70
A2 speed (mph)           0.25       |  150  80   120  150
A3 safety (0-100)        0.25       |  100  50   90   75
A4 maintenance (0-100)   0.15       |  90   70   80   50
A5 gas usage (mpg)       0.13       |  20   35   25   10
A6 price ($1000s)        0.10       |  40   10   20   55
A7 comfort (0-100)       0.07       |  95   60   80   70

A1, A3, A4 and A7 are already satisfaction levels; A2, A5 and A6 are not, so value functions are needed for A2, A5 and A6.

Father does not want a fast car, since the teenage child would endanger himself. Linear value function (truncated to [0, 100]):
V_2(S) = 400 − 4S,
so the speed satisfactions are
V: 0, GM: 80, DS: 0, C: 0.
For gas usage, a linear value function:
V_5(M) = (100/40) M,
so
V: 50, GM: 87.5, DS: 62.5, C: 25.

For price, up to $20,000 is considered fully satisfactory; beyond that the value decreases linearly:
V_6(P) = 100 for P ≤ 20 and V_6(P) = 100 (1 − P/60) for 20 < P ≤ 60.
Therefore:
V: 33.3, GM: 100, DS: 100, C: 8.3.
After the transformations:

Attribute   Importance |  V     GM    DS    C
A1          0.05       |  90    55    75    70
A2          0.25       |  0     80    0     0
A3          0.25       |  100   50    90    75
A4          0.15       |  90    70    80    50
A5          0.13       |  50    87.5  62.5  25
A6          0.10       |  33.3  100   100   8.3
A7          0.07       |  95    60    80    70

For V: 90·0.05 + 0·0.25 + 100·0.25 + 90·0.15 + 50·0.13 + 33.3·0.10 + 95·0.07 = 59.48
For GM: 55·0.05 + 80·0.25 + 50·0.25 + 70·0.15 + 87.5·0.13 + 100·0.10 + 60·0.07 = 71.33
For DS: 75·0.05 + 0·0.25 + 90·0.25 + 80·0.15 + 62.5·0.13 + 100·0.10 + 80·0.07 = 61.98
For C: 70·0.05 + 0·0.25 + 75·0.25 + 50·0.15 + 25·0.13 + 8.3·0.10 + 70·0.07 = 38.73

In summary, the preference of the father as a single decision maker is
GM ≻ DS ≻ V ≻ C.
Suppose the rankings of the three family members, with power factors, are:

      Father  Mother  Child
V     3       2       3
GM    1       3       4
DS    2       1       2
C     4       4       1
POWER 4       3       3       (total 10)

In this example the father pays for the car, so he has a slightly higher power than the others in the family.

Plurality voting: the first choices are GM (father, 4), DS (mother, 3) and C (child, 3), so GM is selected.

Borda count: the weighted rank sums are
V: 3·4 + 2·3 + 3·3 = 27
GM: 1·4 + 3·3 + 4·3 = 25
DS: 2·4 + 1·3 + 2·3 = 17
C: 4·4 + 4·3 + 1·3 = 31
The smallest sum is at DS, so DS is selected.

Hare system: V has no first-choice votes and is deleted first; then C is deleted, leaving

      Father  Mother  Child
GM    1       2       2
DS    2       1       1
POWER 4       3       3

Total vote: for GM 4, for DS 3 + 3 = 6, so DS is the social choice.
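The weighted Borda computation as a short script (rankings and powers from the table above):

```python
# Weighted Borda-type count for the family car example: the smallest
# weighted rank sum wins.
ranks = {"V": (3, 2, 3), "GM": (1, 3, 4), "DS": (2, 1, 2), "C": (4, 4, 1)}
power = (4, 3, 3)    # father, mother, child

total = {a: sum(r * p for r, p in zip(rs, power)) for a, rs in ranks.items()}
choice = min(total, key=total.get)   # DS, with total 17
```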
Pair-wise comparisons, starting from the top:
V against GM: V wins by 3 + 3 = 6 against 4.
V against DS: all three members prefer DS, so DS wins by 4 + 3 + 3 = 10.
DS against C: DS wins by 4 + 3 = 7 against 3.
So DS is the winner.

Here the preferences are very consistent: C is less preferred than all the others; after C is eliminated, GM is the second least preferred, since both V and DS are more preferred than GM; after GM is eliminated, then between V and DS, DS is collectively more preferred.
19.3 Remediation alternatives

Alternatives: T1, . . . , T5.

Criteria:
C1: human health (death rate)
C2: clean-up level
C3, C4, C5: further technical criteria.

Data (each entry gives the value and its uncertainty):

      T2          T3            T4          T5
C1    10^−6 (10%)  2·10^−6 (10%)  10^−6 (10%)  2·10^−6 (10%)
C2    10^−3 (30%)  10^−2 (30%)    10^−3 (20%)  5·10^−3 (20%)
C3    3.32 (20%)   2.9 (40%)      4.8 (20%)    3.8 (40%)
C4    0.6 (20%)    0.6 (30%)      0.1 (15%)    0.1 (20%)
C5    1.2 (10%)    1.2 (30%)      1 (10%)      1 (30%)

The criterion weights are assessed under three scenarios, weighted by 0.50, 0.25 and 0.65:

      scenario 1  scenario 2  scenario 3
C1    0.25        0.50        0.25
C2    0.20        0.60        0.20
C3    0.70        0.25        0.05
C4    0.25        0.05        0.70
C5    0.20        0.60        0.20

W1 = 0.25(0.50) + 0.50(0.25) + 0.25(0.65) = 0.4125
W2 = 0.20(0.50) + 0.60(0.25) + 0.20(0.65) = 0.3800
W3 = 0.70(0.50) + 0.25(0.25) + 0.05(0.65) = 0.4450
W4 = 0.25(0.50) + 0.05(0.25) + 0.70(0.65) = 0.5925
W5 = 0.20(0.50) + 0.60(0.25) + 0.20(0.65) = 0.3800
Normalized table:

      A1    A2    A3    A4    A5
C1    1     1     0     1     0
C2    1     0.9   0     0.9   0.5
C3    0     0.91  1     0.6   0.81
C4    0.9   0     0     1     1
C5    1     0     0     0.5   0.5

The resulting ranking is A4 ≻ A1 ≻ A5 ≻ A2 ≻ A3, with only small differences among the weaker alternatives. The results under the different distances (first three columns: minimizing the distance from the ideal point; last three columns: maximizing the distance from the nadir):

      l∞    l1    l2    l∞    l1    l2
A1    0.10  0.22  0.30  0.25  0.33  0.11
A2    0.01  0.02  0.04  0.01  0.01  0.01
A3    0.00  0.01  0.01  0.00  0.01  0.01
A4    0.82  0.75  0.63  0.71  0.54  0.82
A5    0.07  0.00  0.02  0.03  0.11  0.05

A4 is best in all cases.
In our case X/N estimates p, and since pq ≤ 1/4,
Var(X/N) = pq/N ≤ 1/(4N),
so by Chebyshev's inequality
P(|error| ≥ 0.01) ≤ pq/(N · 0.01²) ≤ 1/(4N · 0.01²),
that is,
P(|error| < 0.01) ≥ 1 − 1/(4N · 0.01²).