
SOCIAL AND ECONOMIC NETWORKS

IN COOPERATIVE GAME THEORY


THEORY AND DECISION LIBRARY

General Editors: W. Leinfellner (Vienna) and G. Eberlein (Munich)

Series A: Philosophy and Methodology of the Social Sciences

Series B: Mathematical and Statistical Methods

Series C: Game Theory, Mathematical Programming and Operations Research

Series D: System Theory, Knowledge Engineering and Problem Solving

SERIES C: GAME THEORY, MATHEMATICAL PROGRAMMING


AND OPERATIONS RESEARCH

VOLUME 27

Editor-in-Chief: H. Peters (Maastricht University); Honorary Editor: S. H. Tijs


(University of Tilburg); Editorial Board: E.E.C. van Damme (University of Tilburg), H.
Keiding (Copenhagen), J.-F. Mertens (Louvain-la-Neuve), H. Moulin (Rice University,
TX, USA), S. Muto (Tokyo University, Japan), T. Parthasarathy (Indian Statistical
Institute, New Delhi), B. Peleg (Jerusalem), T. E. S. Raghavan (Chicago, IL, USA), J.
Rosenmüller (Bielefeld), A. Roth (Pittsburgh, PA, USA), D. Schmeidler (Tel-Aviv), R.
Selten (Bonn), W. Thomson (University of Rochester, NY, USA)

Scope: Particular attention is paid in this series to game theory and operations research,
their formal aspects and their applications to economic, political and social sciences as well
as to socio-biology. It will encourage high standards in the application of game-theoretical
methods to individual and social decision making.

The titles published in this series are listed at the end of this volume.
SOCIAL AND ECONOMIC
NETWORKS IN COOPERATIVE
GAME THEORY

by

MARCO SLIKKER
Technische Universiteit Eindhoven, The Netherlands
and
ANNE VAN DEN NOUWELAND
University of Oregon, U.S.A.

SPRINGER SCIENCE+BUSINESS MEDIA, LLC
ISBN 978-1-4613-5619-6 ISBN 978-1-4615-1569-2 (eBook)
DOI 10.1007/978-1-4615-1569-2

Library of Congress Cataloging-in-Publication Data

A C.I.P. Catalogue record for this book is available from the Library of Congress.

Copyright © 2001 Springer Science+Business Media New York


Originally published by Kluwer Academic Publishers in 2001
Softcover reprint of the hardcover 1st edition 2001

All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system or transmitted in any form or by any means, mechanical, photo-copying, recording,
or otherwise, without the prior written permission of the publisher,
Springer Science+Business Media, LLC

Printed on acid-free paper.


Contents

Preface IX

Part I Social and Economic Networks in Cooperative Situations


1. GAMES AND NETWORKS 3
1.1 Coalitional games 3
1.2 Networks 13
2. RESTRICTED COOPERATION IN GAMES 21
2.1 Games with restricted communication 21
2.2 The Myerson value 29
2.3 Other allocation rules 43
3. INHERITANCE OF PROPERTIES IN COMMUNICATION
SITUATIONS 53
3.1 Superadditivity 54
3.2 Balancedness and total balancedness 55
3.3 Convexity 58
3.4 Average convexity 62
3.5 Core-inclusion of the Shapley value 73
3.6 Population monotonic allocation schemes 81
3.7 Review and remarks 86
4. VARIANTS ON THE BASIC MODEL 89
4.1 Games with coalition structures 90
4.2 Hypergraph communication situations 95
4.3 Probabilistic communication situations 100
4.4 NTU communication situations 107
4.5 Reward communication situations 115
4.6 Directed communication situations 124

Part II Network Formation


5. NONCOOPERATIVE GAMES 135
5.1 Games in extensive form 135
5.2 Games in strategic form 140
6. A NETWORK-FORMATION MODEL IN EXTENSIVE FORM 153
6.1 Description of the model 153
6.2 Some examples 155
6.3 Weighted majority games 160
6.4 Symmetric convex games 164
7. A NETWORK-FORMATION MODEL IN STRATEGIC FORM 173
7.1 Description of the model 174
7.2 Nash equilibrium and strong Nash equilibrium 179
7.3 Undominated Nash equilibrium and coalition-proof Nash
equilibrium 183
7.4 Comparison of the network-formation models 189
7.5 Remarks 190
8. NETWORK FORMATION WITH COSTS FOR ESTABLISHING
LINKS 193
8.1 The cost-extended Myerson value 193
8.2 Network-formation games in extensive form 198
8.3 Network-formation games in strategic form 202
8.3.1 Nash equilibrium and strong Nash equilibrium 203
8.3.2 Undominated Nash equilibrium 205
8.3.3 Coalition-proof Nash equilibrium 206
8.4 Extensions 209
8.5 Comparison of the network-formation models 210
9. A ONE-STAGE MODEL OF NETWORK FORMATION AND
PAYOFF DIVISION 213
9.1 The model 213
9.2 Nash equilibrium 218
9.3 Strong Nash equilibrium 223
9.4 Coalition-proof Nash equilibrium 229
10. NETWORK FORMATION AND POTENTIAL GAMES 249
10.1 Potential games 249
10.2 A representation theorem 254
10.3 Network formation 257
10.4 Remarks 262
11. NETWORK FORMATION AND REWARD FUNCTIONS 265
11.1 Pairwise stability 265
11.2 Weak and strong stability 271

11.3 Dynamic models of network formation 276


References 281
Notations 287
Index 289
Preface

In many social or economic settings, communication between the par-


ticipants is important for the dissemination of information. Much of
this communication takes place through networks, systems of decentral-
ized bilateral relationships between the participants. Networks are the
primary vehicle for the diffusion of information on job openings, busi-
ness opportunities, and new products, to name a few. The importance
of social and economic networks has been extensively documented in
empirical work.
In recent years, there has been increased attention to theoretical mod-
els that help us understand how networks affect economic outcomes and
that analyze how networks emerge. In this book, we present a coherent
overview of a branch of theoretical literature that studies the influence
and formation of networks in social and economic situations in which
the relations between participants who are not included in a particular
participant's network are not of consequence to this participant. This
means that we concentrate on a particular strain of the theoretical liter-
ature and exclude others. The strain that we report on is arguably not
the most general one. However, it is one for which research has been
very fruitful. We hope that the reader of this book will be inspired by
it to study the influence and formation of networks in more complicated
settings and that this will lead to progress in this important branch of
research.

We have organized the material in two parts. In part I we concen-


trate on the question of how network structures affect economic outcomes.
In chapter 1 we discuss the basic concepts and results in cooperative
game theory and graph theory that are used throughout the book. In
chapter 2 we show how networks are integrated into a coalitional game
to form a so-called network-restricted game and we define several al-
location rules for games with communication restrictions. One of the


rules we discuss is the Myerson value, which plays an important role in
most of the network-formation models in part II. We discuss this rule
and its properties extensively. In chapter 3 we study how restrictions on
communication influence various properties of coalitional games, such as
superadditivity, balancedness, and convexity. We consider conditions on
the underlying network that are necessary and sufficient to guarantee
that such desirable properties of the underlying game are still satisfied
by the corresponding network-restricted game. In chapter 4 we discuss
several variants of the basic model in chapter 2. These variants involve
alternative representations of restrictions on communication, of the eco-
nomic possibilities of the players, or a combination of both.
In part II of the book we study the formation of networks by agents
who engage in a network-formation process to be able to realize the pos-
sible gains from cooperation. The agents form bilateral relations with
each other if doing so is to their advantage. We study which networks are
formed under various assumptions about the process of network forma-
tion. We start part II with chapter 5, in which we introduce the concepts
from noncooperative game theory that will be used in the chapters to
come. We discuss games in extensive form and games in strategic form
and several solution concepts for such games, all of which are based on
Nash equilibrium, which is the standard solution concept in noncoop-
erative game theory. In chapter 6 we consider extensive-form games of
network formation that describe network-formation processes in which
links are formed one at a time and players observe which links are formed
as the process progresses. The models in this chapter use an exogenously
given allocation rule to describe the payoffs to the agents in any of the
networks that they can form. In chapter 7 we study strategic-form games
of network formation that describe network-formation processes in which
agents have to decide on the formation of links in a setting where they
are not aware of which other links may have been formed. The models
in this chapter also use exogenously given allocation rules to determine
agents' payoffs. In chapter 8 we incorporate costs for the formation of
communication links into the network-formation models in chapters 6
and 7. Our main interest is to analyze the influence of costs for estab-
lishing links on the networks that result according to several equilibrium
concepts. In chapter 9 we study a model of network formation in which
players bargain over the formation of links and the division of the pay-
offs simultaneously. This makes the model very different from those in
previous chapters, where bargaining over payoff division occurred only
when the network-formation stage had been completed. In chapter 10 we
revisit the network-formation model in strategic form of chapter 7. We
study the conditions under which these games are potential games. For
such games, all the information that is necessary to determine Nash equi-
libria can be captured in a single function and we can apply a refinement
of the Nash-equilibrium concept to study which network structures are
formed according to this refinement. In chapter 11 we consider questions
related to the formation of networks in cases where a reward function
gives the value of each network. Reward functions allow us to model
situations in which the value that can be obtained by a group of agents
does not depend solely on whether they are connected or not, but also
on exactly how they are connected to each other.

We have worked hard to make the book self-contained and, in princi-


ple, it is possible to read it without any prior knowledge of its subject.
However, the proofs rely heavily on mathematics and it might be hard
to read those without some level of proficiency in several areas of math-
ematics. Nevertheless, the reader should still be able to get a feel for the
types of results that we report if he or she skips the proofs. This book
should appeal to researchers who are interested in networks and it could
form the basis for a graduate course on networks.

We have opted to consistently use 'he' whenever we need a third-


person singular pronoun. If there were a gender-neutral third-person
singular pronoun available, we would have used that, but one simply
does not exist. In many languages, the convention is to use the male
pronoun when the gender is unknown. In such situations, the male
pronoun refers to both males and females. We have decided to follow
this convention because we believe that the use of he/she will divert the
readers' attention from the main issues.

Acknowledgements
We would like to thank the people whose contributions to this book
are greatly appreciated. This, of course, includes all our co-authors.
It also includes Stef Tijs, who introduced us to game theory and who
motivated us to write this book. We thank Van Kolpin for proof-reading
parts of the book for us and we thank Larry Singell for answering our
many questions about the English language. All remaining errors are,
of course, ours.
I

SOCIAL AND ECONOMIC NETWORKS


IN COOPERATIVE SITUATIONS
Chapter 1

GAMES AND NETWORKS

In the current chapter, we discuss the basic concepts that will surface
throughout the book. In section 1.1 we introduce coalitional games and
related concepts. Among others, this section contains the definitions of
the core, a solution concept that has very appealing stability properties,
and of the Shapley value, the (single-valued) allocation rule that will
be predominant throughout the rest of the book. In section 1.2 we
discuss the use of networks to model restrictions in cooperation. We
also introduce several special types of networks, such as cycle-complete
networks, stars, and wheels.

1.1 COALITIONAL GAMES


Oftentimes, situations in which different parties cooperate to reach
a common goal can be better understood when they are modeled as a
cooperative game. Suppose there are n different parties pursuing similar
objectives. These parties might be airlines forming alliances to offer their
customers convenient connections between as many airports across the
world as possible so that they can increase their profits, or they might be
farmers in Montana who need to dig and maintain ditches to be able to
irrigate their land so that they can grow crops. We refer to these parties
as players and we denote the set of players by N. For convenience, we
often number the players such that the player set is N = {1, 2, ..., n}.
For every subset of players S, called a coalition, we figure out to what
extent this coalition of players can accomplish the common goal without
the assistance of the players who are not a member of the coalition. If
the proceeds from cooperation are transferable between the players, then
the extent to which the common goal can be accomplished by a coalition
S is expressed by some number v(S), which can be the profits obtainable

by the airlines in coalition S if they form an alliance, or the cost savings


obtainable by the farmers in coalition S if they jointly maintain a system
of ditches that is suitable to irrigate their land rather than each their own
ditches. The function v that assigns to every coalition S ⊆ N its value
or worth v(S) is commonly referred to as the characteristic function. It
is always assumed that v(∅) = 0. A pair (N, v) consisting of a player
set N and a characteristic function v constitutes a cooperative game or
coalitional game. These games are also referred to as TU games, where
TU stands for transferable utility. Sometimes, we want to focus on only
a few of the players involved in a coalitional game (N, v). For a coalition
S ⊆ N, v|_S denotes the restriction of the characteristic function v to
the player set S, i.e., v|_S(T) = v(T) for each coalition T ⊆ S. The pair
(S, v|_S) is a cooperative game with player set S that is obviously closely
related to the game (N, v). For each coalition S ⊆ N, the coalitional
game (S, v|_S) is called a subgame of the game (N, v).
If the proceeds from cooperation are not transferable between the
players, then one gets a nontransferable utility game, or NTU game for
short. This would be the case if in a parliamentary system the politi-
cal parties forming an alliance to pass a certain law are predominantly
interested in how this will affect the percentage of votes each party will
get in the next election. If that is the case, then we need to specify what
the respective shares of total votes are expected to be for each of the
parties in a cooperating coalition S and we cannot express this using a
single number. Hence, we associate with every coalition S ⊆ N a set
V(S) ⊆ R^S, where x = (x_i)_{i∈S} ∈ V(S) if and only if there is a way for
the parties in coalition S to form an alliance and implement it in such a
way that each party i ∈ S gets an expected share x_i of the votes. The
function V is also called a characteristic function and a pair (N, V) is
called an NTU game. Most of this book is concerned with transferable
utility games, however, and we will see nontransferable utility games
only in chapter 4.
Modeling a certain aspect of a situation as a coalitional game often
allows us to abstract from minor details and bring out the underlying
structure of it more clearly. Considering the characteristic function can
help us focus on important aspects of that underlying structure.
We might well find, for example, that the characteristic function has
the property that for any two disjoint coalitions S and T of players it
holds that v(S) + v(T) ≤ v(S ∪ T). Such a characteristic function is
called superadditive. If the characteristic function is superadditive, two
disjoint coalitions of players can come further towards accomplishing
their common goal if they decide to form one big coalition rather than
operate as two separate coalitions. In situations where the solutions

employed by two disjoint coalitions S and T to obtain their respective


values can be implemented side-by-side, one of the options open to the
big coalition S U T is to implement these two solutions. Hence, the
characteristic function will naturally be superadditive.
If there are no negative effects from having larger coalitions, then the
addition of more players will not decrease the value obtainable. Hence,
v(S) ≤ v(T) if T contains all players in S and possibly more. A charac-
teristic function with this property is called monotonic.
The players in a coalitional game are eventually interested in what
they individually will get out of cooperating with the other players. It is
nice to know what profits can be obtained by coalitions of players, but
how will individual players benefit from cooperation? A payoff vector
or allocation is a vector x = (x_i)_{i∈N} ∈ R^N that specifies for each player
i ∈ N the profit x_i that this player can expect when he cooperates
with the other players. An allocation is called efficient if the payoffs
to the various players add up to exactly v(N). The set consisting of all
efficient allocations is the set {x ∈ R^N | Σ_{i∈N} x_i = v(N)}. However, not
all these allocations will be acceptable to the players. A very minimal
requirement is that each player gets at least as much as what he can
obtain when staying alone. An allocation x ∈ R^N with the property
that x_i ≥ v(i) for all i ∈ N is individually rational.¹ The set of all
individually rational and efficient allocations is the imputation set

    I(N, v) = {x ∈ R^N | Σ_{i∈N} x_i = v(N) and x_i ≥ v(i) for each i ∈ N}.

This type of rationality requirement can be extended to all coalitions,


not just individual players, to obtain the core²

    C(N, v) = {x ∈ R^N | Σ_{i∈N} x_i = v(N) and Σ_{i∈S} x_i ≥ v(S) for all S ⊆ N}.

The core can be interpreted in two ways. The first interpretation is that
it consists of all imputations that are such that no group of players has
an incentive to split off from the grand coalition N and form a smaller
coalition S because they collectively receive at least as much as what they
can obtain for themselves as a coalition. The second interpretation of

¹To improve readability, we will, with a slight abuse of notation, omit the parentheses { } as
much as possible. Hence, we write v(i) instead of v({i}), v(i, j) instead of v({i, j}), and so on
whenever this can be done without ambiguity. We adopt similar notations for functions other
than v. At the same time, we do use the parentheses whenever we think this is necessary to
avoid confusion.
²We define the empty sum to be equal to 0.

the core is that no group of players gets more than what they collectively
add to the value obtainable by the grand coalition N. This follows since
for each x ∈ C(N, v) and S ⊆ N it holds that Σ_{i∈S} x_i = Σ_{i∈N} x_i −
Σ_{i∈N\S} x_i ≤ v(N) − v(N\S).
Since allocations in the core are such that all groups of players have
an incentive to stay in the grand coalition, one might like to choose a
core allocation to divide the proceeds obtained by the grand coalition.
However, the core might be empty, as is shown in the following example.

EXAMPLE 1.1 Consider a situation with three players in which at least


two players are needed to accomplish a certain task, the completion
of which will generate a monetary value of 1 unit. This situation can
be modeled by the game (N, v), where N = {1, 2, 3} and the char-
acteristic function v is defined by v(S) = 1 if |S| ≥ 2 and v(S) = 0
if |S| ≤ 1. Here, |S| denotes the number of players in S.³ This
game is superadditive and monotonic. The set of efficient allocations
for this game is {(x_1, x_2, x_3) ∈ R³ | x_1 + x_2 + x_3 = 1} and the set
of all individually rational and efficient allocations is the imputation
set I(N, v) = {(x_1, x_2, x_3) ∈ R³ | x_1 + x_2 + x_3 = 1 and x_i ≥ 0 for
each i ∈ {1, 2, 3}}. An efficient allocation that assigns a negative payoff to
one of the players, for example, is not individually rational. The core of the game
is C(N, v) = {(x_1, x_2, x_3) ∈ R³ | x_1 + x_2 + x_3 = 1, x_i ≥ 0 for each
i ∈ {1, 2, 3}, and x_i + x_j ≥ 1 for all i, j ∈ {1, 2, 3} such that i ≠ j}.
It is easily seen that the core is empty, for if x_1 + x_2 ≥ 1, x_3 ≥ 0 and
x_1 + x_2 + x_3 = 1, then x_3 = 0 and either x_1 < 1 or x_2 < 1. Without
loss of generality, we assume that x_1 < 1. This implies that x_1 + x_3 < 1,
which violates the core condition x_1 + x_3 ≥ 1. Hence, C(N, v) = ∅. ◊
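The emptiness argument above can also be checked numerically. The following sketch is not part of the original text; it is a minimal illustration, assuming the SciPy library is available, that feeds the core constraints of example 1.1 to a linear-programming solver and confirms that they are infeasible.

    import numpy as np
    from scipy.optimize import linprog

    # Core constraints of example 1.1: x1 + x2 + x3 = 1, x_i >= 0,
    # and x_i + x_j >= 1 for every pair i != j (written as -(x_i + x_j) <= -1).
    A_ub = -np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
    b_ub = -np.ones(3)
    A_eq = np.ones((1, 3))
    b_eq = np.array([1.0])
    res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
    print(res.success)   # False: no feasible point, so C(N, v) is empty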

Bondareva (1963) and Shapley (1967) independently identified the
class of games that have nonempty cores as the class of balanced games.
To describe this class, we define for all S ⊆ N the vector e^S by e^S_i = 1
for all i ∈ S and e^S_i = 0 for all i ∈ N\S. A map κ : 2^N\{∅} → [0,1] is
called a balanced map if

    Σ_{S ∈ 2^N\{∅}} κ(S) e^S = e^N,

³This game is popularly known as "The lady with the bags". The story is that a lady arrives
at the airport with a lot of luggage. At least two skycaps are needed to bring all of her bags
to her car. Since it is customary to tip the skycaps according to the number of bags they
are carrying, the lady is going to tip a fixed amount no matter how many skycaps carry her
bags.

where 2^N denotes the set consisting of all subsets of N. Further, a game
(N, v) is called balanced if for every balanced map κ : 2^N\{∅} → [0,1] it
holds that

    Σ_{S ∈ 2^N\{∅}} κ(S) v(S) ≤ v(N).

The following theorem is due to Bondareva (1963) and Shapley (1967).

THEOREM 1.1 Let (N, v) be a TU game. Then C(N, v) ≠ ∅ if and only
if (N, v) is balanced.

Theorem 1.1 is often helpful when trying to show that a game has a
nonempty core, specifically if the game has many players or if the game
has a specific structure that can be exploited, as is the case for linear
production games (cf. Owen (1975)). For small games, theorem 1.1 is
often not the easiest way to check for non-emptiness of the core, as the
following example illustrates.

EXAMPLE 1.2 Consider the 3-player game (N, v), with N = {1, 2, 3}
and

    v(S) = 0 if |S| ≤ 1;  60 if S = {1, 2};  48 if S = {1, 3};
           30 if S = {2, 3};  72 if S = N.                              (1.1)

To check if the game is balanced, let κ : 2^N\{∅} → [0,1] be a balanced
map. We have to check whether Σ_{S ∈ 2^N\{∅}} κ(S)v(S) ≤ v(N). Since
v(i) = 0 for all i ∈ {1, 2, 3}, and v(i, j) > 0 for all i, j ∈ {1, 2, 3},
i ≠ j, we can restrict attention to balanced maps κ with κ(i) = 0 for
all i ∈ {1, 2, 3}. Since κ is balanced, we know that the following three
equalities hold,

    κ(1,2) + κ(1,3) + κ(1,2,3) = 1,
    κ(1,2) + κ(2,3) + κ(1,2,3) = 1,
    κ(1,3) + κ(2,3) + κ(1,2,3) = 1.

From this, we easily obtain κ(1,2) = κ(1,3) = κ(2,3) = (1 − κ(1,2,3))/2, so
that the balanced map κ can be identified with κ(1,2,3) ∈ [0,1]. It now

follows that

    Σ_{S ∈ 2^N\{∅}} κ(S)v(S)
      = (1 − κ(1,2,3))/2 · 60 + (1 − κ(1,2,3))/2 · 48 + (1 − κ(1,2,3))/2 · 30 + κ(1,2,3) · 72
      = 69 + 3κ(1,2,3) ≤ 72 = v(N).

It follows that the game is balanced and, hence, that it has a nonempty
core. However, we do not obtain specific core elements with this proce-
dure. The direct method is easier in this simple example. A core element
x has to satisfy the following (in)equalities,

    x_1 ≥ 0,  x_2 ≥ 0,  x_3 ≥ 0,
    x_1 + x_2 ≥ 60,  x_1 + x_3 ≥ 48,  x_2 + x_3 ≥ 30,
    x_1 + x_2 + x_3 = 72.

So, for example, the allocation (x_1, x_2, x_3) = (40, 22, 10) is in the core of
the game (N, v). However, there are more core allocations. In fact, the
core is the convex hull of the three allocations (42, 24, 6), (42, 18, 12),
and (36, 24, 12). ◊
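A direct check of the core conditions is easy to automate. The short Python sketch below is our own illustration, not taken from the book; all names in it are hypothetical. It tests whether a proposed allocation satisfies the (in)equalities listed above for the game of example 1.2.

    from itertools import combinations

    def in_core(x, v, N):
        # Efficiency plus coalitional rationality for every proper coalition S.
        if abs(sum(x.values()) - v(N)) > 1e-9:
            return False
        return all(sum(x[i] for i in S) >= v(S) - 1e-9
                   for r in range(1, len(N))
                   for S in combinations(sorted(N), r))

    N = (1, 2, 3)
    values = {(1,): 0, (2,): 0, (3,): 0, (1, 2): 60, (1, 3): 48,
              (2, 3): 30, (1, 2, 3): 72}
    v = lambda S: values[tuple(sorted(S))]
    print(in_core({1: 40, 2: 22, 3: 10}, v, N))    # True
    print(in_core({1: 30, 2: 30, 3: 12}, v, N))    # False: x1 + x3 < 48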

Intuitively, it is clear that it is desirable to use a core allocation when


deciding on how to divide the jointly obtained value. However, there
are two possible problems with this rule-of-thumb. The first is that the
core might be empty (as in our first example) and the second is that
the core might contain many elements (as in the previous example). In
fact, rarely is it the case that the core of a game contains exactly one
element. To get around this problem, several single-valued allocation
rules have been introduced. A (single-valued) allocation rule is a func-
tion γ that assigns an allocation γ(N, v) to every coalitional game (or
possibly to games with specific characteristics). The nucleolus, which
was introduced in Schmeidler (1969), is a single-valued allocation rule
that always picks an allocation in the core whenever it is nonempty. The
nucleolus, however, is only defined for games for which the imputation
set is nonempty. Another popular allocation rule is the Shapley value,
which was introduced in Shapley (1953b). Even though the Shapley
value does not have the desirable property that it always picks an allo-
cation in the core whenever it is nonempty, it is more prevalent in the
literature than the nucleolus. Presumably, this is because the Shapley
value is defined even for games that have an empty imputation set and
because it is relatively easy to work with. In addition, as we will see

later on in this section, the Shapley value has an appealing axiomatic


foundation. It is also the allocation rule that will be used extensively
throughout this book. There are several ways of explaining the Shapley
value. We allude to a few of these.
The Shapley value can be computed using so-called unanimity games.
Let N be a set of players and let S ∈ 2^N\{∅} be a coalition of players.
The unanimity game (N, u_S) is the game described by u_S(T) = 1 if
S ⊆ T and u_S(T) = 0 otherwise (see Shapley (1953b)). In this game,
a value of 1 can be obtained if and only if all players in S are in the
coalition T. The Shapley value Φ_i(N, u_S) of the unanimity game (N, u_S)
gives nothing to the players outside S and divides the value of 1 equally
over the players in S, i.e., Φ_i(N, u_S) = 1/|S| if i ∈ S and Φ_i(N, u_S) = 0
if i ∈ N\S. The Shapley value for a general coalitional game can be
computed using unanimity coefficients. Every coalitional game (N, v)
can be written as a linear combination of unanimity games in a unique
way, i.e., v = Σ_{S ∈ 2^N\{∅}} λ_S(v) u_S. The coefficients λ_S(v), S ∈ 2^N\{∅},
are called the unanimity coefficients of the game (N, v). Shapley (1953b)
showed that the unanimity coefficients of TU game (N, v) satisfy

    λ_S(v) = Σ_{T ∈ 2^S\{∅}} (−1)^{|S|−|T|} v(T)  for all S ∈ 2^N\{∅}.        (1.2)

In terms of unanimity coefficients, the Shapley value Φ of the game (N, v)
is given by

    Φ_i(N, v) = Σ_{S ⊆ N: i ∈ S} λ_S(v)/|S|  for each i ∈ N.                  (1.3)

The method described above is often very useful to quickly compute


the Shapley value, as we demonstrate in the following example.

EXAMPLE 1.3 Consider the game (N, v) that we encountered in exam-


ple 1.2. It is easily seen that v = 60u_{1,2} + 48u_{1,3} + 30u_{2,3} − 66u_N.⁴ For
example, v(1, 2) = 60 + 0 + 0 − 0 = 60, and v(1, 2, 3) = 60 + 48 + 30 − 66 =
72. From the expression of the game (N, v) as a linear combination of
unanimity games, we derive that Φ(N, v) = (30, 30, 0) + (24, 0, 24) +
(0, 15, 15) − (22, 22, 22) = (32, 23, 17).⁵ ◊

⁴We remind the reader of the fact that we omit the parentheses { } as much as possible.
Hence, u_{{1,2}} becomes u_{1,2} and so on.
⁵Note that for this game the Shapley value is not a core allocation, even though the core is
nonempty.
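To make the unanimity-coefficient computation concrete, here is a small Python sketch (our own illustration; the function names are not from the book) that implements (1.2) and (1.3) and reproduces the outcome of example 1.3.

    from itertools import combinations

    def nonempty_subsets(players):
        return [frozenset(c) for r in range(1, len(players) + 1)
                for c in combinations(sorted(players), r)]

    def unanimity_coefficients(N, v):
        # lambda_S(v) = sum over nonempty T subseteq S of (-1)^(|S|-|T|) v(T), as in (1.2)
        return {S: sum((-1) ** (len(S) - len(T)) * v(T)
                       for T in nonempty_subsets(S))
                for S in nonempty_subsets(N)}

    def shapley_value(N, v):
        # Phi_i(N, v) = sum over S containing i of lambda_S(v) / |S|, as in (1.3)
        lam = unanimity_coefficients(N, v)
        return {i: sum(lam[S] / len(S) for S in lam if i in S) for i in N}

    N = {1, 2, 3}
    values = {frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
              frozenset({1, 2}): 60, frozenset({1, 3}): 48,
              frozenset({2, 3}): 30, frozenset({1, 2, 3}): 72}
    v = lambda S: values[frozenset(S)]
    print(unanimity_coefficients(N, v)[frozenset(N)])   # -66
    print(shapley_value(N, v))                          # {1: 32.0, 2: 23.0, 3: 17.0}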

In his paper, Shapley (1953b) describes a procedure that produces


the Shapley value of a game as the (expected) outcome. The players
in N decide to play the game in a grand coalition N, which is formed
by adding players one at a time, where the order in which the players
join is determined by chance and all orders are equally likely. Each
player is promised the amount which he contributes to the coalition at
the moment he joins it. The expected payoffs to the players under this
procedure are equal to

    Φ_i(N, v) = Σ_{S ⊆ N: i ∉ S} [|S|!(|N| − 1 − |S|)! / |N|!] (v(S ∪ i) − v(S))        (1.4)
for each i E N, which gives a second expression for the Shapley value.
Shapley (1953b) remarks that the above-described model lends support
to the view that the Shapley value is best regarded as an a priori as-
sessment of the situation, based on either ignorance or disregard of the
social organization of the players.
We will not crunch through the expected-payoffs approach to the
Shapley value for the game in example 1.2. We point out, however,
that this method will involve going through the six possible orders of
the three players in turn and is, in our opinion, more involved than the
unanimity-coefficient method.
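For completeness, the random-order procedure of (1.4) can nevertheless be written down directly; the sketch below is our own illustration (the helper name is hypothetical) and averages the marginal contributions over all |N|! orders for the game of example 1.2.

    from itertools import permutations
    from math import factorial

    def shapley_by_orders(N, v):
        # Average marginal contribution v(S u {i}) - v(S) over all |N|! joining orders (1.4).
        phi = dict.fromkeys(N, 0.0)
        for order in permutations(sorted(N)):
            coalition = frozenset()
            for i in order:
                phi[i] += v(coalition | {i}) - v(coalition)
                coalition = coalition | {i}
        return {i: total / factorial(len(N)) for i, total in phi.items()}

    # The game of example 1.2 once more, now with v(empty set) = 0 included.
    values = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
              frozenset({1, 2}): 60, frozenset({1, 3}): 48,
              frozenset({2, 3}): 30, frozenset({1, 2, 3}): 72}
    print(shapley_by_orders({1, 2, 3}, lambda S: values[frozenset(S)]))
    # {1: 32.0, 2: 23.0, 3: 17.0}, matching example 1.3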
A third approach to the Shapley value uses potentials for cooperative
games, which were introduced by Hart and Mas-Colell (1989). They con-
sider a real-valued function P on the set of all coalitional games. Given
this function, the marginal contribution of a player i in a coalitional
game (N,v) is

    D_iP(N, v) = P(N, v) − P(N\i, v|_{N\i}),                                  (1.5)

where (N\i, v|_{N\i}) is the subgame on the player set N\i. The function
P is called a potential function if it satisfies the conditions P(∅, w) = 0,
where (∅, w) is a game with no players described by w(∅) = 0, and
Σ_{i∈N} D_iP(N, v) = v(N) for all coalitional games (N, v). Hart and
Mas-Colell (1989) show that such a potential function exists and that it
is unique. Furthermore, they show that for every coalitional game (N, v)
the vector of marginal contributions (D_iP(N, v))_{i∈N} coincides with the
Shapley value Φ(N, v).
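As an illustration of how the potential pins down the Shapley value, note that the two defining conditions, applied to every subgame, give the recursion |S|·P(S, v|_S) = v(S) + Σ_{i∈S} P(S\i, v|_{S\i}) with P(∅, w) = 0. The hypothetical Python sketch below (not from the book; all names are ours) uses this recursion and returns the marginal contributions D_iP(N, v).

    from functools import lru_cache

    def shapley_via_potential(N, v):
        # |S| * P(S) = v(S) + sum_{i in S} P(S \ {i}), with P(empty set) = 0, derived
        # from the condition sum_{i in S} D_i P(S, v|_S) = v(S) applied to every subgame.
        @lru_cache(maxsize=None)
        def P(S):
            if not S:
                return 0.0
            return (v(S) + sum(P(S - {i}) for i in S)) / len(S)

        grand = frozenset(N)
        # D_i P(N, v) = P(N, v) - P(N \ i, v|_{N\i}) coincides with Phi_i(N, v).
        return {i: P(grand) - P(grand - {i}) for i in N}

    values = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
              frozenset({1, 2}): 60, frozenset({1, 3}): 48,
              frozenset({2, 3}): 30, frozenset({1, 2, 3}): 72}
    print(shapley_via_potential({1, 2, 3}, lambda S: values[frozenset(S)]))
    # {1: 32.0, 2: 23.0, 3: 17.0} once again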
The potential-function approach to the Shapley value is most useful
when trying to prove results for general classes of games, but much less
so for computing the Shapley value of specific games.
The last approach to the Shapley value we discuss in this section is
an axiomatic characterization. The properties that we mention are not

exactly the same as the original ones provided in Shapley (1953b), but
they are very close and well-established in the literature. We prefer to
use the following set of properties for ease of future reference.
Efficiency is a pretty straightforward property. It simply requires that
for any game (N, v) the payoffs to the players add up to v (N), so that
the payoffs are attainable for the players and no money is left on the
table.

Efficiency: An allocation rule γ for coalitional games is efficient if
γ(N, v) is an efficient payoff vector for each coalitional game (N, v).

Additivity concerns situations in which the same set of players N


cooperates in two different areas. The possible gains from cooperation
in one area are described by the coalitional game (N, v) and those in
the other area by (N, w). The possible gains from cooperation in
both areas are described by the addition of these two games, which is
the coalitional game (N, v + w) with (v + w)(S) = v(S) + w(S) for each
S ⊆ N. If an additive allocation rule is used to determine the payoffs
of the players, the payoffs to the players are the same whether the two
situations are evaluated separately or jointly.

Additivity: An allocation rule γ for coalitional games is additive if
for two coalitional games (N, v) and (N, w) with the same player set
it holds that

    γ(N, v + w) = γ(N, v) + γ(N, w).                                          (1.6)

Symmetry of an allocation rule requires that two symmetric players


get the same payoff. Two players i and j are symmetric in a game
(N, v) if they are interchangeable in the game in the sense that the
value of a coalition that contains i but not j would not change if j was
included instead of i and vice versa. Formally, two players i, j E N
are symmetric in coalitional game (N, v) if v(S ∪ i) = v(S ∪ j) for all
coalitions S ⊆ N\{i, j}.

Symmetry: An allocation rule γ for coalitional games is symmetric
if for each coalitional game (N, v)

    γ_i(N, v) = γ_j(N, v)

for any two players i, j ∈ N that are symmetric in (N, v).

The fourth property is the zero-player property. A player i E N


is a zero player in coalitional game (N, v) if v(S ∪ i) = v(S) for all
S ⊆ N. Hence, a player is a zero player if his presence does not influence
the value of any coalition of players. Note that this also implies that
v(i) = v(∅) = 0. An allocation rule satisfies the zero-player property if
it gives a payoff of zero to any zero player.

Zero-Player Property: An allocation rule γ for coalitional games
has the zero-player property if for each coalitional game (N, v)

    γ_i(N, v) = 0

for any zero player i ∈ N.
The Shapley value satisfies the four properties mentioned above. Also,
there is no allocation rule for coalitional games that satisfies these four
properties and that does not coincide with the Shapley value for each
coalitional game.
THEOREM 1.2 The Shapley value is the unique allocation rule for coali-
tional games that satisfies efficiency, additivity, symmetry, and the zero-
player property.
We refer to Shapley (1953b) for a proof of this theorem. The proof
given there is directly applicable, since the only difference between the
properties used by Shapley (1953b) and the set of properties defined
above is that Shapley (1953b) lumps efficiency and the zero-player prop-
erty into one property, called the carrier property.
Theorem 1.2 shows that if we believe that an allocation rule for
coalitional games should be additive, should always choose an efficient
allocation, should give the same payoff to symmetric players, and should
give nothing to zero players, then we should use the Shapley value. The
most controversial axiom of the four is additivity. In fact, this axiom is
not satisfied by the nucleolus, which we encountered on page 8.
We conclude this section with asymmetric variants of the Shapley
value. Shapley (1953a) introduces the family of weighted Shapley values
to account for differences in relative strength of the players that can be
modeled by weights. Weighted Shapley values can easily be described
using unanimity coefficients. Let w = (w_i)_{i∈N} ∈ R^N_{++} be a vector of
positive weights that express the relative strength of the players. For
every coalition S ∈ 2^N\{∅} we denote the sum of the weights of its
players by w_S = Σ_{i∈S} w_i. The weighted Shapley value Φ^w of a coalitional
game (N, v) is then defined by

    Φ^w_i(N, v) = Σ_{S ⊆ N: i ∈ S} (w_i / w_S) λ_S(v)                          (1.7)

for each i ∈ N. We refer to Φ^w as the w-Shapley value or simply as a


weighted Shapley value. Note that if all players have equal weight, then

the weighted Shapley value equals the (symmetric) Shapley value. An


extensive analysis of weighted Shapley values can be found in Kalai and
Samet (1988).
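A hedged sketch of (1.7) follows; it is our own illustration and assumes the hypothetical unanimity_coefficients helper from the earlier sketch is in scope.

    def weighted_shapley(N, v, w):
        # Phi^w_i(N, v) = sum over S containing i of (w_i / w_S) * lambda_S(v), as in (1.7).
        lam = unanimity_coefficients(N, v)   # helper from the earlier sketch
        return {i: sum(w[i] / sum(w[j] for j in S) * lam[S]
                       for S in lam if i in S)
                for i in N}

    # For the game of example 1.2, a weight vector such as w = {1: 2, 2: 1, 3: 1} tilts the
    # division of every unanimity coefficient towards player 1; with equal weights the
    # outcome is again the symmetric Shapley value (32, 23, 17).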

1.2 NETWORKS
While in the previous section it is implicitly assumed that all coalitions
of players can be formed, this is in general not the case. In this section
we will discuss networks of bilateral cooperative relationships between
players.
Consider a player set N. In order for players to be able to coordi-
nate their actions, they have to be able to communicate. The bilateral
communication channels between the players in N are described by a
(communication) network. Such a network is a graph (N, L), which has
the set of players as its vertices and in which those players are connected
by a set of edges L ⊆ L^N = {{i, j} | {i, j} ⊆ N, i ≠ j}. Hence, every
edge connects two players to each other. We will also refer to edges as
links between players. In order to minimize the use of parentheses, we
will oftentimes denote the link {i, j} between two players i and j by ij.
We remark that ij and ji represent the same link. For two players i
and j, link ij is included in the set of edges of the graph if and only
if these two players have the possibility of communicating and coordi-
nating their actions to cooperate with each other without requiring the
intermediation of other players. If ij E L, we say that i and j are directly
connected in the network.
If players i and j are not directly connected in the network, they might
still be able to coordinate their actions, provided that there are other
players through which they can do so. We say that players i and j are
connected in network (N, L) if there is a path in the network that connects
them, i.e., if there exists a sequence of players (i_1, i_2, ..., i_t) such that
i_1 = i, i_t = j, and {i_k, i_{k+1}} ∈ L for all k ∈ {1, 2, ..., t−1}. If two players
i and j (i ≠ j) are connected in the network but not directly connected,
then we say that they are indirectly connected. If players i and j are
not connected, either directly or indirectly, then they will not be able
to coordinate their actions. Hence, the notion of connectedness induces
a partition of the player set into (cooperation) components, where two
players i and j are in the same (cooperation) component if and only
if they are connected, either directly or indirectly. We denote the set
of components of network (N, L) by N / L. The component containing
player i is C_i(L) = {j ∈ N | j = i or j is connected to i}. Note that
C_i(L) = C_j(L) whenever j ∈ C_i(L).

EXAMPLE 1.4 Consider the network (N, L) with N = {1, 2, 3, 4, 5, 6, 7}
and L = {12, 15, 26, 37, 47, 56}.⁶ This graph is represented in figure 1.1.

Figure 1.1. Network (N, L)

In this network, players 2 and 5 are directly connected to player 1.
Player 6 is indirectly connected to player 1 via player 2 and also via
player 5. There are no paths in the network connecting player 1 to
either player 3, 4, or 7. It follows that C_1(L) = {1, 2, 5, 6}. In fact,
N/L = {{1, 2, 5, 6}, {3, 4, 7}}. ◊
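The partition N/L is easy to compute mechanically. The following Python sketch is our own illustration (the function name is hypothetical); it recovers the components of example 1.4 by a depth-first search.

    def components(N, L):
        # Partition the player set into the communication components of (N, L).
        neighbours = {i: set() for i in N}
        for i, j in L:
            neighbours[i].add(j)
            neighbours[j].add(i)
        comps, seen = [], set()
        for i in sorted(N):
            if i in seen:
                continue
            comp, stack = set(), [i]
            while stack:
                k = stack.pop()
                if k in comp:
                    continue
                comp.add(k)
                stack.extend(neighbours[k] - comp)
            comps.append(comp)
            seen |= comp
        return comps

    N = {1, 2, 3, 4, 5, 6, 7}
    L = [(1, 2), (1, 5), (2, 6), (3, 7), (4, 7), (5, 6)]   # the links 12, 15, 26, 37, 47, 56
    print(components(N, L))   # [{1, 2, 5, 6}, {3, 4, 7}], i.e., N/L as in example 1.4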

To coordinate actions within a coalition S of players, without the


assistance of players outside the coalition, only links between players
in S are relevant. The set of links between players in S is denoted
L(S) = {ij ∈ L | i, j ∈ S}. Network (S, L(S)) induces a partition S/L
of coalition S into components in the same way as network (N, L) induces
a partition of the player set N. A component in S/L consists of players
in S who can coordinate their actions without the help of players outside
S. If network (S, L(S)) consists of a single component, i.e., |S/L| = 1,
then we say that coalition S is internally connected. If coalition S is
internally connected, then all players in S can coordinate their actions
without the help of players not in S. If S is not internally connected,
then the players in S might still be able to coordinate their actions with
the help of other players. This is the case if there exists a component of
the network that contains S. A coalition S with the property that there
exists a C ∈ N/L with S ⊆ C is simply called connected.

EXAMPLE 1.5 Consider the network (N, L) in figure 1.1, which was also
the subject of example 1.4. In this network, players 1 and 6 are connected
only indirectly, either through player 2 or through player 5. Hence, play-
ers 1 and 6 can coordinate their actions within coalition {1, 2, 6}, but
not within coalition {1, 6}. Using the notation introduced above, for
S = {1, 2, 6}, we find L(S) = {{1, 2}, {2, 6}} and S/L = {{1, 2, 6}}. For

⁶Since we have only seven players, it should be immediately clear that 12, for example,
denotes the link between players 1 and 2 and not player number 12.

T = {1, 6}, however, it holds that L(T) = ∅ and T/L = {{1}, {6}}.


Coalition S is internally connected and, hence, connected. Coalition T
is connected, though it is not internally connected. ◊

We will encounter several specific classes of networks, which we in-


troduce here. We follow the terminology of graph theory in classifying
these networks. After giving the definitions, we illustrate some classes
of networks with examples.
If all players are isolated, i.e., L = ∅, the network (N, L) is empty. A
network (N, L) is complete if all pairs of players are directly connected
in the network, i.e., L = LN. If the network has exactly one component,
then the network (N, L) is connected. In such a network, all pairs of
players are connected, either directly or indirectly. A cycle-free network
is a network that does not contain any cycles. A cycle is a circular path
in the network that does not use any player more than once. Formally, a
cycle in network (N, L) is a sequence of players (iI, i2, ... ,it, it+J), t 2:: 3,
such that {ik' ik+d E L for all k E {1,2, ... ,t}, il+ l = iI, and where
iI, i2, ... ,it are all distinct players. An alternative characterization of
cycle-free networks is that if two players i and j are connected in such
a network, then there exists a unique path connecting them. It follows
easily that the complete network is not cycle-free unless there are at
most two players. A very special kind of cycle-free network is one that
contains exactly one central player who is directly connected to every
other player and that contains no other links, so that all other players
need the central player in order to be able to coordinate their actions. A
network with this property is called a star. Formally, a network (N, L)
is a star if there exists a player i ∈ N such that L = {ij | j ∈ N\i}.
In chapter 3 we will encounter the class of cycle-complete networks. A
network (N, L) is cycle-complete if the following holds: if (i_1, i_2, ..., i_t, i_1)
is a cycle in the graph then all players in this cycle are directly connected,
i.e., {i_k, i_l} ∈ L for all k, l ∈ {1, ..., t} with k ≠ l. In words, a network
is cycle-complete if for every cycle in the network the complete net-
work on the players forming this cycle is a subnetwork of (N,L). Since
cycle-free networks do not contain any cycles, they trivially satisfy the
requirement of cycle-completeness. Another class of networks that are
cycle-complete is the class of complete networks, since those networks
contain the complete network on all players in S for each coalition S of
players.
To conclude, we mention a special type of network that is neither
cycle-free nor cycle-complete if there are at least four players. A wheel
is a network that consists of a cycle between the players with no further

links. Formally, a wheel is a network (N, L) with |N| > 2, where the
players in N can be renamed in such a way that N = {1, 2, ..., n} and
L = {{k, k + 1} | k ∈ {1, 2, ..., n − 1}} ∪ {{1, n}}.⁷

EXAMPLE 1.6 Consider the networks (N, L1) and (N, L2) in figures 1.2
(a) and 1.2 (b), respectively.

Figure 1.2. Networks (N, L1) and (N, L2)

Both networks (N, L1) and (N, L2) are connected. They are also cycle-
free and, hence, cycle-complete. Network (N, L1) contains a central
player (player 3) who is directly connected to every other player and no
other links. In other words, (N, L1) is a star. Network (N, L2) is not
a star since player 3, who seems to be the obvious candidate to be the
central player, is only indirectly connected to player 6 and not directly.
Figure 1.3 shows network (N, L3), which contains several cycles, such
as (1,6,2,5,1), (2,6,5,2), and (3,4,7,3). Hence, this network is not
cycle-free. However, network (N, L3) is cycle-complete since it contains
the complete network between players 1, 2, 5, and 6 as well as the
complete network between players 3, 4, and 7. Players 6 and 7 are not
connected directly. This does not contradict that the network is cycle-
complete, since there is also no cycle that includes both players 6 and
7.

Figure 1.3. Network (N, L3)

Network (N, L4) in figure 1.4 consists of a cycle between six players
with no further links. This network is a wheel.

⁷The denomination wheel is somewhat confusing since there are no spokes.


Figure 1.4. Network (N, L4)

◊

Cycle-complete networks have features that are similar to properties


of cycle-free networks. For cycle-free networks it holds that there exists a
unique path between two connected players. The following result states
that if two players are connected in a cycle-complete network, then there
exists a unique shortest path connecting them. The length of a path is
the number of links it uses.

LEMMA 1.1 For a cycle-complete network (N,L) and two players i,j E
N who are connected in this network, there exists a unique shortest
path connecting i and j. Moreover, it holds that every path connecting
players i and j includes the players on this unique shortest path.

Lemma 1.1 implies that if the bilateral cooperative relationships be-


tween the players form a cycle-complete network, then for any two con-
nected players i and j, we can identify a set of players whose cooperation
is both necessary and sufficient to enable players i and j to coordinate
their actions. The proof of this lemma, which was stated in van den
Nouweland (1993), is rather tedious, but not very hard. Instead of in-
cluding its proof here, we will illustrate the lemma with an example. An
outline of the proof of the lemma can be found in Slikker (2000b).

EXAMPLE 1.7 As we argued before in example 1.6, network (N, L3) in


figure 1.3 is cycle-complete. Players 1 and 8 are connected in this net-
work. There are several paths connecting these two players, such as
(1,2,6,3,4,8), (1,2,5,6,3,7,4,8), and (1,6,3,7,4,8), but it is clear that ev-
ery path connecting players 1 and 8 includes players 6, 3, and 4. The
cooperation of players 6, 3, and 4 is necessary to enable players 1 and 8
to coordinate their actions. But the cooperation of these players is also
sufficient, since (1,6,3,4,8) is a path in the network connecting players 1
and 8. It is the unique shortest path connecting these two players. ◊

We have established that for pairs of connected players we can identify


a set of players whose cooperation is necessary and sufficient to enable
the initial two players to coordinate their actions if the network is cycle-
complete. This result can be extended to coalitions of players consisting
of more than two players. In fact, as we show in lemma 1.2, cycle-
complete networks can be characterized using this feature.

LEMMA 1.2 Network (N, L) is cycle-complete if and only if for every


nonempty connected coalition S ⊆ N there exists an internally con-
nected set containing S that is contained in every internally connected
set containing S.

PROOF: Suppose (N, L) is cycle-complete and let S ⊆ N, S ≠ ∅, be a
connected coalition of players. Consider the set H(S), which is defined
as the intersection of all internally connected sets that contain S, i.e.,
H(S) = ∩{T ⊆ N | S ⊆ T, T is internally connected}. We will show
that H(S) is an internally connected set. From the definition of H(S) it
is immediately clear that this implies that H(S) is the smallest internally
connected set containing S.
Let i, j ∈ H(S), i ≠ j, and let (x_1, ..., x_t) be the (unique) short-
est path from i to j in (N, L). Now let T ⊆ N be an internally
connected coalition such that S ⊆ T. Then {i, j} ⊆ T and there
is a path (y_1, ..., y_q) from i to j in (T, L(T)). According to lemma
1.1, this path must include the players on the shortest path from i
to j, i.e., {y_1, ..., y_q} ⊇ {x_1, ..., x_t}. Hence, {x_1, ..., x_t} is a sub-
set of every internally connected coalition T that contains S. There-
fore, {x_1, ..., x_t} ⊆ H(S) and (x_1, ..., x_t) is a path from i to j in
(H(S), L(H(S))). We conclude that for every i, j ∈ H(S) there ex-
ists a path from i to j in (H(S), L(H(S))), which is equivalent to saying
that H(S) is internally connected.
Now, suppose (N, L) is a network that is not cycle-complete. Then
there is a cycle (x_1, ..., x_t, x_1) in (N, L) and there are i, j ∈ {1, ..., t}
such that i < j − 1 and {x_i, x_j} ∉ L. Obviously, {x_i, x_{i+1}, ..., x_j}
and {x_j, x_{j+1}, ..., x_t, x_1, ..., x_i} are two internally connected sets both
containing x_i and x_j. The intersection of these sets is {x_i, x_j}, which is
connected but not internally connected. □
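Lemma 1.2 also suggests a brute-force test for cycle-completeness of small networks. The sketch below is our own illustration and is exponential in |N|; all function names are hypothetical. It computes connected hulls as in (1.8) and applies the characterization of the lemma.

    from itertools import combinations

    def internally_connected(S, L):
        # True iff the network (S, L(S)) consists of a single component.
        S = set(S)
        if not S:
            return False
        inside = [set(e) for e in L if set(e) <= S]
        comp, stack = set(), [next(iter(S))]
        while stack:
            k = stack.pop()
            if k in comp:
                continue
            comp.add(k)
            stack.extend(j for e in inside if k in e for j in e if j not in comp)
        return comp == S

    def connected_hull(S, N, L):
        # H(S): intersection of all internally connected T with S subseteq T, as in (1.8);
        # the empty set if no such T exists (i.e., S is not connected).
        rest = set(N) - set(S)
        supersets = [set(S) | set(c) for r in range(len(rest) + 1)
                     for c in combinations(sorted(rest), r)]
        witnesses = [T for T in supersets if internally_connected(T, L)]
        if not witnesses:
            return set()
        hull = set(N)
        for T in witnesses:
            hull &= T
        return hull

    def is_cycle_complete(N, L):
        # Lemma 1.2: cycle-complete iff every connected coalition has an internally
        # connected hull.
        for r in range(1, len(N) + 1):
            for c in combinations(sorted(N), r):
                hull = connected_hull(set(c), N, L)
                if hull and not internally_connected(hull, L):
                    return False
        return True

    wheel = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]   # the wheel of figure 1.4
    star = [(3, j) for j in (1, 2, 4, 5, 6)]                   # a star with centre 3
    print(is_cycle_complete({1, 2, 3, 4, 5, 6}, wheel))        # False
    print(is_cycle_complete({1, 2, 3, 4, 5, 6}, star))         # True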

The set H(S) that we use in the proof of lemma 1.2 is called the
connected hull of S. For a connected set S in a cycle-complete network
(N, L) it consists of the players whose cooperation is both necessary and

sufficient to enable the players in S to coordinate their actions if the


network is cycle-complete. For future reference, we define the connected
hull for general coalitions and graphs.
Let (N, L) be a network and let S ⊆ N, S ≠ ∅, be a coalition of
players. If S is connected, we define the connected hull H(S) of S by

    H(S) = ∩ {T ⊆ N | S ⊆ T, T is internally connected}.                      (1.8)

If S is not connected, we define H(S) = ∅.


The name connected hull is somewhat deceptive, for if the network
is not cycle-complete, then the connected hull of a connected coalition
might not be an internally connected coalition. We will see an illustra-
tion of this in example 1.8.

EXAMPLE 1.8 We established in example 1.6 that network (N, L3) in
figure 1.3 is cycle-complete. Consider coalition S = {1, 2, 4, 8}, which
is connected but not internally connected in network (N, L3). There
are several internally connected coalitions T that contain S, such as
{1, 2, 3, 4, 6, 7, 8} and the grand coalition N. However, it is not hard to
see that every internally connected coalition containing S has to include
players 1, 2, 3, 4, 6, and 8, which form an internally connected coalition.
Hence, {1, 2, 3, 4, 6, 8} is the connected hull H(S) of coalition S. It
consists of the players whose cooperation is both necessary and sufficient
to establish coordination between the players in S.
The wheel (N, L4) in figure 1.4 is not cycle-complete. By lemma 1.2,
there exists at least one coalition of players S for which we cannot find
a set of players whose cooperation is both necessary and sufficient to es-
tablish coordination between the players in S. Coalition S = {2, 3, 5} is
such a coalition. This coalition is connected; {1, 2, 3, 4, 5} and {2, 3, 5, 6}
are both internally connected coalitions that contain S. This shows that
players 1, 4, and 6 are all not necessary for the players in S to be able to
communicate. However, the remaining set of players is S, which is not
internally connected. If we apply definition (1.8) to the set S, we find
that H(S) = {2, 3, 5}, which falls apart into two components. ◊
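Reusing the hypothetical connected_hull and internally_connected helpers from the sketch after lemma 1.2, the wheel part of example 1.8 can be reproduced directly:

    N = {1, 2, 3, 4, 5, 6}
    wheel = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]   # network (N, L4) of figure 1.4
    S = {2, 3, 5}
    print(connected_hull(S, N, wheel))       # {2, 3, 5}
    print(internally_connected(S, wheel))    # False: the hull falls apart into two components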
Chapter 2

RESTRICTED COOPERATION IN GAMES

In this chapter, we will put the two main concepts introduced in chap-
ter 1 together and study games with restricted communication. In sec-
tion 2.1 we show how restrictions on communication are integrated into
a coalitional game. This section contains the definitions of the network-
restricted game and the link game. These games are used in the other
two sections to define two allocation rules for games with communication
restrictions, the Myerson value in section 2.2 and the position value in
section 2.3. Both of these values are based on the Shapley value, which
was discussed extensively in section 1.1. A completely different value is
also discussed, in section 2.3. Throughout this book, the Myerson value
is the predominant allocation rule for games with communication restric-
tions. We discuss this rule and some of its axiomatic characterizations
extensively in section 2.2.

2.1 GAMES WITH RESTRICTED


COMMUNICATION
Consider a group of agents N for whom the gains from cooperation
for each coalition are given by a characteristic function v as described in
section 1.1 and who face restrictions on communication. The bilateral
communication possibilities between the agents are modeled by a net-
work (N, L) as defined in section 1.2. The triple (N, v, L), which reflects
a situation consisting of a coalitional game (N, v) and a communication
network (N, L), is called a communication situation. We denote the set
consisting of all communication situations by CS.
Myerson (1977) was the first to study communication situations. He
introduced a new game associated with a communication situation, the

network-restricted game, which incorporates both the possible gains


from cooperation as modeled by the coalitional game and the restric-
tions on communication reflected by the communication network. The
network-restricted game (N, v^L) associated with communication situa-
tion (N, v, L) has a characteristic function v^L defined by

    v^L(S) = Σ_{C ∈ S/L} v(C)  for each S ⊆ N.

The definition of the network-restricted game can be understood as fol-


lows. Consider a coalition of players S ⊆ N. If coalition S is internally
connected, i.e., if all players in S can communicate with one another
(directly or indirectly) without the help of players not in S, then they
can fully coordinate their actions and obtain the value v(S). If, how-
ever, coalition S is not internally connected, then not all players in S can
communicate with each other without the help of outsiders. Coalition
S will then be partitioned into communication components according
to the partition S/L. The best that the players in S can accomplish
under these conditions is to coordinate their actions within each of these
components, but players in different components cannot coordinate their
actions and hence the components will operate independently. Hence,
taking the restrictions on communication within coalition S into account,
the value attainable by the players in S is Σ_{C ∈ S/L} v(C). Note that if
the communication network is the complete network (N, L^N), then all
coalitions of players are internally connected and the network-restricted
game (N, v^L) is equal to the original game (N, v).

EXAMPLE 2.1 Consider communication situation (N, v, L) with player


set N = {1, 2, 3}, set of links L = {12, 23} (see figure 2.1), and charac-
teristic function v described by

    v(S) = 0 if |S| ≤ 1;  60 if |S| = 2;  72 if S = N.                        (2.1)

Figure 2.1. Network (N, L)

Since coalitions {1, 2} and N are internally connected ({1, 2}/L =
{{1, 2}} and N/L = {N}), we find that for these coalitions their values
in the network-restricted game are equal to their values in the underlying
game (N, v), i.e., v^L(1, 2) = v(1, 2) = 60 and v^L(N) = v(N). Coalition
{1, 3}, however, is not internally connected. Since {1, 3}/L = {{1}, {3}},
it follows that v^L(1, 3) = v(1) + v(3) = 0 + 0 = 0. Determining in this
way v^L(S) for all S ⊆ N we find that the characteristic function of the
network-restricted game (N, v^L) is described by

    v^L(S) = 0 if |S| ≤ 1 or S = {1, 3};  60 if S = {1, 2} or S = {2, 3};
             72 if S = N.                                                     (2.2)

◊
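The characteristic function v^L is straightforward to compute mechanically. The self-contained Python sketch below is our own illustration (function names are hypothetical); it reproduces (2.2) for the communication situation of example 2.1.

    from itertools import combinations

    def components_of(S, L):
        # The partition S/L of coalition S induced by the links inside S.
        S = set(S)
        inside = [set(e) for e in L if set(e) <= S]
        comps, seen = [], set()
        for i in sorted(S):
            if i in seen:
                continue
            comp, stack = set(), [i]
            while stack:
                k = stack.pop()
                if k in comp:
                    continue
                comp.add(k)
                stack.extend(j for e in inside if k in e for j in e if j not in comp)
            comps.append(frozenset(comp))
            seen |= comp
        return comps

    def restricted_game(N, v, L):
        # v^L(S) = sum over the components C in S/L of v(C).
        coalitions = [frozenset(c) for r in range(1, len(N) + 1)
                      for c in combinations(sorted(N), r)]
        return {S: sum(v(C) for C in components_of(S, L)) for S in coalitions}

    N, L = {1, 2, 3}, [(1, 2), (2, 3)]
    v = lambda S: 0 if len(S) <= 1 else (60 if len(S) == 2 else 72)
    vL = restricted_game(N, v, L)
    print(vL[frozenset({1, 3})], vL[frozenset({1, 2})], vL[frozenset({1, 2, 3})])   # 0 60 72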

The relationship between the network-restricted game (N, v^L) and


the underlying coalitional game (N, v) in a communication situation
(N, v, L) is most clearly explained in terms of the unanimity coefficients
of both games. Expressing the unanimity coefficients λ_S(v^L) of the
network-restricted game in terms of the unanimity coefficients λ_S(v) of
the underlying game explicitly brings out how the restrictions on commu-
nication change the possibilities of coalitions of players to realize profits.
Relations between the unanimity coefficients of an underlying game in a
communication situation and those of the associated network-restricted
game were first studied by Owen (1986), who shows that these relations
are particularly easy in case the communication network is cycle-free.
Owen's results were extended to cycle-complete networks in van den
Nouweland (1993), from which the following theorem is taken. Key to
the theorem is the existence of a smallest internally connected superset
for each connected set. By lemma 1.2, this implies that the scope of
the theorem cannot be extended beyond communication situations with
cycle-complete networks.

THEOREM 2.1 Let (N, v, L) be a communication situation with a cycle-


complete network (N, L). Then for each S ∈ 2^N\{∅}

    λ_S(v^L) = Σ_{T ∈ 2^N\{∅}: H(T) = S} λ_T(v),                              (2.3)

where H(T) denotes the connected hull of a coalition T as defined in


(1.8).

PROOF: We will use the fact that the representation of a game as a


linear combination of unanimity games is unique.

We first consider unanimity games. So, let T ∈ 2^N\{∅} be a coalition of players and consider the communication situation (N, u_T, L) consisting of unanimity game (N, u_T) and network (N, L). If T is not connected, then the players in T can never coordinate their actions and, hence, (u_T)^L(S) = 0 for all S ⊆ N. Further, if T is connected, then it follows from lemma 1.2 that the players in T can coordinate their actions if and only if the players in the connected hull H(T) are present. Therefore,

  (u_T)^L(S) =  1  if H(T) ⊆ S;
                0  if H(T) ⊄ S.

Recall that H(T) = ∅ if T is not connected. To simplify our notations, we define the game (N, u_∅) by u_∅(S) = 0 for all S ⊆ N. With these notations we have

  (u_T)^L = u_{H(T)}   for all T ⊆ N.

We now extend our scope to general coalitional games. Note that the map that assigns to a game (N, v) the corresponding network-restricted game (N, v^L) is a linear map (see, e.g., Owen (1986)). Hence, for a game (N, v)

  v = Σ_{T∈2^N\{∅}} λ_T(v) u_T

leads to

  v^L = Σ_{T∈2^N\{∅}} λ_T(v) (u_T)^L = Σ_{T∈2^N\{∅}} λ_T(v) u_{H(T)}
      = Σ_{S⊆N} ( Σ_{T∈2^N\{∅}: H(T)=S} λ_T(v) ) u_S.

Since the representation of a game as a linear combination of unanimity games is unique, we derive

  λ_S(v^L) = Σ_{T∈2^N\{∅}: H(T)=S} λ_T(v)

for all S ∈ 2^N\{∅}.          □

We illustrate theorem 2.1 in the following example.



EXAMPLE 2.2 Consider the 3-person communication situation in example 2.1. Expressed as a linear combination of unanimity games, the characteristic function v satisfies

  v = 60u_{1,2} + 60u_{1,3} + 60u_{2,3} - 108u_N.

Note that network (N, L) is cycle-free and, hence, cycle-complete. Coalition {1, 3} is not internally connected. It is connected, however, and H(1, 3) = {1, 2, 3}. All other nonempty coalitions are internally connected and H(T) = T for all T ∈ 2^N\{{1, 3}, ∅}. Since N is the connected hull of coalitions {1, 3} and N, we find that λ_N(v^L) = λ_{1,3}(v) + λ_N(v) = 60 - 108 = -48. Coalition {1, 3} is not internally connected and, hence, is not the connected hull of any coalition. Hence, λ_{1,3}(v^L) = 0. For all other coalitions T ∈ 2^N\{N, {1, 3}, ∅} it holds that λ_T(v^L) = λ_T(v). We find that the characteristic function of the network-restricted game is

  v^L = 60u_{1,2} + 60u_{2,3} - 48u_N,          (2.4)

which is the characteristic function we found in (2.2).          ◊
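
Theorem 2.1 lends itself to a direct numerical check: compute the unanimity coefficients of v and of v^L by Möbius inversion and compare λ_S(v^L) with the sum of λ_T(v) over the coalitions T whose connected hull is S. The sketch below is our own illustration for the communication situation of this example; all names are ours, and the hull function simply searches for the smallest internally connected superset, which is well defined here because the network is cycle-complete.

    from itertools import combinations

    def components(S, links):              # components of (S, L(S)); same helper as before
        remaining = set(S)
        in_S = [set(l) for l in links if set(l) <= set(S)]
        parts = []
        while remaining:
            comp, grown = {remaining.pop()}, True
            while grown:
                grown = False
                for l in in_S:
                    if l & comp and not l <= comp:
                        comp |= l
                        grown = True
            remaining -= comp
            parts.append(frozenset(comp))
        return parts

    def restricted_value(S, v, links):
        return sum(v.get(C, 0) for C in components(S, links))

    def nonempty_subsets(N):
        N = list(N)
        return [frozenset(c) for r in range(1, len(N) + 1) for c in combinations(N, r)]

    def unanimity_coefficients(N, w):
        """Moebius inversion: lambda_S(w) = sum over T subset of S of (-1)^(|S|-|T|) w(T)."""
        return {S: sum((-1) ** (len(S) - len(T)) * w(T) for T in nonempty_subsets(S))
                for S in nonempty_subsets(N)}

    def hull(T, N, links):
        """Smallest internally connected superset of T; empty set if no such superset exists."""
        candidates = [S for S in nonempty_subsets(N)
                      if T <= S and len(components(S, links)) == 1]
        return min(candidates, key=len) if candidates else frozenset()

    N, links = {1, 2, 3}, [(1, 2), (2, 3)]
    v = {frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60, frozenset(N): 72}
    lam_v = unanimity_coefficients(N, lambda S: v.get(S, 0))
    lam_vL = unanimity_coefficients(N, lambda S: restricted_value(S, v, links))
    for S in nonempty_subsets(N):
        assert lam_vL[S] == sum(lam_v[T] for T in nonempty_subsets(N) if hull(T, N, links) == S)
    print(lam_vL[frozenset(N)])            # -48, as computed above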

The following example shows that the approach described in theorem


2.1 will not be correct for all coalitional games if the network is not
cycle-complete.

EXAMPLE 2.3 Let (N, v, L) be the 4-person communication situation with player set N = {1, 2, 3, 4}, set of links L = {12, 23, 34, 41} (see figure 2.2), and characteristic function v = u_{2,4}.

Figure 2.2. Network (N, L)

Network (N, L) is a wheel with four players and is not cycle-complete. Coalition {2, 4} is connected but not internally connected. In order to be able to coordinate their actions, players 2 and 4 need either player 1 or player 3 (or both). If we apply (1.8) we find H(2, 4) = {2, 4}, which is not internally connected. Since v^L(2, 4) = v(2) + v(4) = 0 and v(2, 4) = 1, it is not true that (u_{2,4})^L = u_{H(2,4)}. In fact, it holds that

  v^L = u_{1,2,4} + u_{2,3,4} - u_N.



Noting that H(T) ≠ {2, 4} for all T ⊆ N with T ≠ {2, 4}, we find that

  Σ_{T∈2^N\{∅}: H(T)={2,4}} λ_T(v) = λ_{2,4}(v) = 1 ≠ 0 = λ_{2,4}(v^L).          ◊

The previous two examples illustrate an interesting feature of network-restricted games in general, namely that λ_S(v^L) = 0 for every coalition S that is not internally connected. This can be seen formally using expression (1.2) of the unanimity coefficients λ_S(v^L), but it is easily understood on an intuitive level. If coalition S is not internally connected, then its value in the network-restricted game is simply the sum of the values of its components. Hence, the formation of coalition S does not add value.
The network-restricted game evaluates the possible gains from co-
operation in a communication situation from the point of view of the
players. It provides an assessment of the gains from cooperation that
are obtainable by coalitions of players in the face of restricted communi-
cation possibilities. A different approach, which was pioneered by Borm
et al. (1992), is to focus on the importance of the communication links
in a communication situation. If there are no communication links, then
the players cannot realize any gains from cooperation at all. The gains
from cooperation that can be realized by the players are attributed to
the presence of the communication links that they use to coordinate
their actions. This leads to the definition of the link game, which has
the set of links L as its player set and in which the value of a set of
links A ⊆ L is the value obtainable by the grand coalition N if the communication channels in A are available to the players in N. The link game (L, r^v) associated with communication situation (N, v, L) has characteristic function r^v defined by

  r^v(A) = v^A(N) = Σ_{C∈N/A} v(C)   for all A ⊆ L.          (2.5)

This definition reflects the fact that if only the communication links
in A are available, then the grand coalition N might be partitioned into
several communication components. Since coordination between such
components is not possible, the value obtainable by the grand coalition
N is the sum of the values obtainable by its components.
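
In code, r^v(A) is simply the network-restricted value of the grand coalition when only the links in A are available. A minimal sketch of ours follows (it restates the components helper used earlier so that it is self-contained, and it assumes a zero-normalized game stored on nonempty coalitions).

    def components(S, links):              # components of (S, L(S)); same helper as before
        remaining = set(S)
        in_S = [set(l) for l in links if set(l) <= set(S)]
        parts = []
        while remaining:
            comp, grown = {remaining.pop()}, True
            while grown:
                grown = False
                for l in in_S:
                    if l & comp and not l <= comp:
                        comp |= l
                        grown = True
            remaining -= comp
            parts.append(frozenset(comp))
        return parts

    def link_game_value(A, N, v):
        """r^v(A) = v^A(N): the sum of v over the components of (N, A)."""
        return sum(v.get(C, 0) for C in components(N, list(A)))

    # The line network of example 2.1, with links a = 12 and b = 23:
    N = {1, 2, 3}
    v = {frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60, frozenset(N): 72}
    a, b = (1, 2), (2, 3)
    print(link_game_value({a}, N, v), link_game_value({a, b}, N, v))   # 60 72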

REMARK 2.1 In section 1.1, we pointed out that it is generally assumed that v(∅) = 0 for a characteristic function v. This assumption might be violated for the characteristic function r^v of the link game if v(i) ≠ 0 for some player i ∈ N, since r^v(∅) = Σ_{i∈N} v(i). While allowing for r^v(∅) ≠ 0 would not have any substantial consequences in what follows, it would severely complicate notations. For that reason, we prefer to restrict ourselves to zero-normalized games, i.e., games (N, v) with v(i) = 0 for each i ∈ N, in the remainder of this section.

EXAMPLE 2.4 Consider the 3-person communication situation (N, v, L) that we studied in example 2.1. Note that the game (N, v) is zero-normalized. The network (N, L) of this communication situation is the line {12, 23}, which is depicted in figure 2.1. For notational convenience we denote a = 12 and b = 23, so that L = {a, b}. The characteristic function r^v of the associated link game (L, r^v) satisfies r^v(a) = v(1, 2) + v(3) = 60, r^v(b) = v(1) + v(2, 3) = 60, and r^v(a, b) = v(N) = 72. Expressing this as a linear combination of the unanimity games (L, u_a), (L, u_b), and (L, u_L), we obtain

  r^v = 60u_a + 60u_b - 48u_L.          (2.6)
                                         ◊

Relations between the unanimity coefficients λ_A(r^v) of the link game (L, r^v) and the unanimity coefficients λ_S(v) of the underlying coalitional game (N, v) have only been established for communication situations (N, v, L) with a cycle-free network (N, L) (see Borm et al. (1992)).

THEOREM 2.2 Let (N, v, L) be a communication situation with a zero-normalized game (N, v) and a cycle-free network (N, L). Then for each A ∈ 2^L\{∅}

  λ_A(r^v) = Σ_{S∈2^N\{∅}: A=L(H(S))} λ_S(v),          (2.7)

where L(H(S)) denotes the links between the players in the connected hull of S.

The proof of theorem 2.2 is similar in spirit to that of theorem 2.1


and we will leave it to the reader. Key to the theorem is that for every
connected coalition S ⊆ N there exists a set of links that is necessary
and sufficient to establish communication between all the players in S.
This is true for cycle-free networks, in which there is a unique path
between any two connected players.

EXAMPLE 2.5 The 3-person communication situation (N, v, L) in example 2.4 has a cycle-free network. The unique path between players 1 and 2 uses link a only. Hence, λ_a(r^v) = λ_{1,2}(v) = 60. The unique path between players 1 and 3 uses both links, a and b. Also, the players in N use both links to establish communication between themselves. Hence, λ_L(r^v) = λ_{1,3}(v) + λ_N(v) = 60 - 108 = -48.          ◊

The following example shows that the scope of theorem 2.2 cannot
be extended to include communication situations with cycle-complete
networks. While for such networks it holds that for every connected
coalition of players S there exists a coalition of players whose coop-
eration is necessary and sufficient to establish communication between
the members of S, these players might communicate through different
communication channels. Hence, the existence of a set of links whose
presence is both necessary and sufficient to establish communication be-
tween the players in a connected set S is not guaranteed.

EXAMPLE 2.6 Consider communication situation (N, v, L) with player set N = {1, 2, 3}, set of links L = {12, 13, 23} (see figure 2.3), and characteristic function v defined by

  v(S) =  0   if |S| ≤ 1;
          60  if |S| = 2;          (2.8)
          72  if S = N,

which was also used in previous examples. We denote a = 12, b = 23, and c = 13.

Figure 2.3. Network (N, L)

Network (N, L) is cycle-complete, but not cycle-free. The connected hull of coalition {1, 2} is {1, 2}, which is internally connected. Players 1 and 2 can communicate if only link a is available. Note that {a} = L(H(1, 2)). However, players 1 and 2 might also establish communication through player 3, using links b and c, if link a is unavailable. Hence, there is no set of links whose presence is both necessary and sufficient to establish communication between players 1 and 2. To show that (2.7) does not hold for the communication situation in this example, we compute λ_{a,b}(r^v). Since r^v(a) = r^v(b) = 60 and r^v(a, b) = 72, it follows that λ_{a,b}(r^v) = -48. Since there is no S ⊆ N such that L(H(S)) = {a, b}, we find

  Σ_{S∈2^N\{∅}: {a,b}=L(H(S))} λ_S(v) = 0 ≠ -48 = λ_{a,b}(r^v).          ◊

2.2 THE MYERSON VALUE


In the previous section we discussed how several authors have ap-
proached the integration of restrictions on communication into a coali-
tional game. In this section and the next one, we will shift our focus
to allocation rules, which give us an assessment of the benefits for each
player from participating in a communication situation. The current
section is entirely devoted to the Myerson value, which is based on the
network-restricted game and the Shapley value. It is also the predomi-
nant allocation rule for communication situations in the literature. This
section contains the definition of the Myerson value as well as axiomatic
characterizations of this allocation rule.
An allocation rule on a class CS of communication situations is a function γ that assigns a payoff vector γ(N, v, L) ∈ R^N to every communication situation (N, v, L) in that class.^8

^8 We point out that we use CS to denote a subset of the set of all communication situations.

The Myerson value μ (cf. Myerson (1977)) is the allocation rule that assigns to every communication situation (N, v, L) the Shapley value of the network-restricted game (N, v^L),

  μ(N, v, L) = Φ(N, v^L).          (2.9)

We illustrate the Myerson value in the following two examples.

EXAMPLE 2.7 Consider communication situation (N, v, L) with N = {1, 2, 3}, L = {12, 23}, and v = 60u_{1,2} + 60u_{1,3} + 60u_{2,3} - 108u_N, which was also studied in examples 2.1 and 2.2. Since v^L = 60u_{1,2} + 60u_{2,3} - 48u_N, we can quickly find the Myerson value using expression (1.3) for the Shapley value,

  μ_1(N, v, L) = Φ_1(N, v^L) = 30 + 0 - 16 = 14;
  μ_2(N, v, L) = Φ_2(N, v^L) = 30 + 30 - 16 = 44;
  μ_3(N, v, L) = Φ_3(N, v^L) = 0 + 30 - 16 = 14.

Note that, even though all three players are symmetric in the underly-
ing game, the Myerson value of player 2 is larger than that of players
1 and 3. The reason is that players 1 and 3 need player 2 in order to
coordinate their actions and cannot realize their possible gains from co-
operation without player 2's help. Player 2 can therefore appropriate a
part of the gains from cooperation that players 1 and 3 can realize with
player 2's help. This becomes especially clear if we compare the Myer-
son value fJ-(N, v, L) to that of the communication situation (N, v, L N ),
which is obtained from (N, v, L) by adding link 13 so that players 1
and 3 no longer need the help of player 2 in order to coordinate their
actions. Since the network-restricted game (N, v LN ) of communication
situation (N, v, LN) is equal to the underlying game (N, v), it holds that
fJ-(N, v, LN) = <I>(N, v). 0
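
Since the Myerson value is just the Shapley value of the network-restricted game, a brute-force computation over all orderings of the players reproduces the payoffs above. The sketch below is our own illustration (the function names are ours); for the game of example 2.7 it returns the vector (14, 44, 14).

    from itertools import permutations

    def components(S, links):              # components of (S, L(S)); same helper as before
        remaining = set(S)
        in_S = [set(l) for l in links if set(l) <= set(S)]
        parts = []
        while remaining:
            comp, grown = {remaining.pop()}, True
            while grown:
                grown = False
                for l in in_S:
                    if l & comp and not l <= comp:
                        comp |= l
                        grown = True
            remaining -= comp
            parts.append(frozenset(comp))
        return parts

    def restricted_value(S, v, links):
        return sum(v.get(C, 0) for C in components(S, links))

    def shapley(players, w):
        """Brute-force Shapley value: average marginal contributions over all orderings."""
        players = list(players)
        payoff = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            before = set()
            for p in order:
                payoff[p] += w(frozenset(before | {p})) - w(frozenset(before))
                before.add(p)
        return {p: payoff[p] / len(orders) for p in players}

    def myerson(N, v, links):
        """Myerson value: the Shapley value of the network-restricted game (N, v^L)."""
        return shapley(N, lambda S: restricted_value(S, v, links))

    N, links = {1, 2, 3}, [(1, 2), (2, 3)]
    v = {frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60, frozenset(N): 72}
    print(myerson(N, v, links))            # {1: 14.0, 2: 44.0, 3: 14.0}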

EXAMPLE 2.8 Consider the communication situation (N, v, L_1) with player set N = {1, 2, 3, 4}, set of links L_1 = {12, 14, 23, 34}, and characteristic function v given by

  v(S) =  0    if |S| ≤ 1;
          60   if |S| = 2;
          96   if |S| = 3;          (2.10)
          108  if S = N.

Network (N, L_1) is the wheel represented in figure 2.4 (a).

Figure 2.4. Networks (N, L_1) and (N, L_2)

In this communication situation, players 1 and 3 need either player 2


or player 4 in order to be able to realize the potential gains from coopera-
tion. However, players 2 and 4 are in a similar position, since they cannot
realize their possible gains from cooperation without the help of players
1 or 3. In fact, it holds that μ(N, v, L_1) = (27, 27, 27, 27) = μ(N, v, L^N), so that no player gains from being in network (N, L_1) as opposed to the complete network (N, L^N). The situation changes if we add link 13 to network (N, L_1). The network that we then obtain is the network (N, L_2) represented in figure 2.4 (b). In network (N, L_2),
players 2 and 4 still need either player 1 or player 3 in order to be able
to realize any gains from cooperation. However, players 1 and 3 are now
able to cooperate without the help of other players. Hence, players 1
and 3 stand to gain from this situation since they can appropriate part
of the gains from cooperation that players 2 and 4 can realize with their
help, without having to grant players 2 and 4 any part of the profits that
players 1 and 3 can realize between themselves. This is reflected in the
Myerson value, which is given by μ(N, v, L_2) = (32, 22, 32, 22).          ◊

If two players are in different communication components of commu-


nication situation (N, v, L), then they do not interact with each other
at all. Therefore, it seems reasonable to expect that the values of coali-
tions that include players who are not connected to player i as well as
links involving such players do not influence the payoff of player i. This
requirement is satisfied by the Myerson value which, as is stated in the-
orem 2.3, is component decomposable. An allocation rule is component
decomposable if for a communication situation (N, v, L) and a player i ∈ N, the payoff of this player is completely determined within the component C_i(L) to which he belongs.

Component Decomposability An allocation rule γ on a class CS of communication situations is component decomposable if for every communication situation (N, v, L) ∈ CS and every player i ∈ N the sub-communication situation (C_i, v|_{C_i}, L(C_i)) is also in CS and

  γ_i(N, v, L) = γ_i(C_i, v|_{C_i}, L(C_i)).          (2.11)

THEOREM 2.3 The Myerson value on the class CS of all communication


situations satisfies component decomposability.

The proof of theorem 2.3 is rather technical and lengthy but not very
insightful. It can be found in van den Nouweland (1993) (pp. 28-30).
We will skip this proof and illustrate the theorem with an example.

EXAMPLE 2.9 Consider communication situation (N, v, L) with player set N = {1, 2, 3, 4}, set of links L = {12, 34}, and characteristic function v given by v = 2u_{1,2} + 3u_{2,3} + 4u_{3,4} + u_N. Network (N, L) consists of two lines as represented in figure 2.5.

Figure 2.5. Network (N, L)

We first compute the Myerson value μ(N, v, L). It is not hard to see that v^L = 2u_{1,2} + 4u_{3,4} and, hence, μ(N, v, L) = (1, 1, 2, 2). The network partitions the player set into two components, one with players 1 and 2 and the other with players 3 and 4. If we restrict our attention to component C = {1, 2}, then we obtain communication situation (C, w, {12}) with only players 1 and 2, who are directly connected, and the game w = v|_C = 2u_{1,2}. Since all possible links between the two players in this situation are present, the network-restricted game corresponding to this situation is w^{{12}} = w = 2u_{1,2} and μ(C, w, {12}) = (1, 1). So, the Myerson values of players 1 and 2 in the reduced communication situation are equal to their Myerson values in the original communication situation (N, v, L). Note that the Myerson values of players 1 and 2 would not be affected by a deletion of the link between players 3 and 4. Such a deletion would, however, reduce the Myerson values of players 3 and 4 to 0.          ◊

Theorem 2.3 is very useful since it allows us to split a communication


situation with many players and a network that is not connected into
several smaller communication situations for the purpose of determining
the Myerson values of the players. It is often substantially easier to deal
with communication situations with fewer players.
Myerson (1977) found the Myerson value when he was looking for an
allocation rule that satisfies the two properties component efficiency and
fairness, which we describe below.

Component Efficiency An allocation rule γ on a class CS of communication situations is component efficient if for every communication situation (N, v, L) ∈ CS and every component C ∈ N/L

  Σ_{i∈C} γ_i(N, v, L) = v(C).          (2.12)

An allocation rule is component efficient if the payoffs to the players in


a component add up to the value of that component. With a component
efficient allocation rule the players in a component distribute the value
of this component among themselves. Hence, there are no payoff exter-
nalities between the different components in a communication situation.
This seems entirely reasonable, since there are also no externalities in
values between the various components. The players in component C
can realize the value v(C) irrespective of what the other players do.

EXAMPLE 2.10 We see an illustration of component efficiency of the Myerson value in example 2.9. Network (N, L) in this example consists of two components, {1, 2} and {3, 4}. We have seen that μ_1(N, v, L) + μ_2(N, v, L) = 2 = v(1, 2) and μ_3(N, v, L) + μ_4(N, v, L) = 4 = v(3, 4).
Component efficiency can also be applied in case the network is connected. This holds for the two communication networks (N, L_1) and (N, L_2) in example 2.8. For communication situations (N, v, L_1) and (N, v, L_2) we obtained

  Σ_{i=1}^{4} μ_i(N, v, L_1) = Σ_{i=1}^{4} μ_i(N, v, L_2) = 108 = v(1, 2, 3, 4).          ◊

Fairness An allocation rule γ on a class CS of communication situations is fair if for every communication situation (N, v, L) ∈ CS and any link ij ∈ L it holds that

  γ_i(N, v, L) - γ_i(N, v, L\ij) = γ_j(N, v, L) - γ_j(N, v, L\ij).          (2.13)

An allocation rule is fair if the payoffs of two directly connected players


increase or decrease by the same amount if the link between them is severed.
Fairness reflects the equal-gains equity principle that two players should
gain equally from their bilateral agreement.

REMARK 2.2 We have different opinions about how to handle the domains on which allocation rules with various properties are defined.

Anne is of the opinion that an allocation rule on a class CS can be called fair if condition (2.13) holds whenever it can be checked, i.e., for all communication situations (N, v, L) ∈ CS and links ij ∈ L such that (N, v, L\ij) ∈ CS. Marco, however, is of the opinion that an allocation rule on a class CS can only be called fair if for all (N, v, L) ∈ CS and links ij ∈ L it holds that (N, v, L\ij) ∈ CS and, moreover, that condition (2.13) is satisfied. Because the domains that we will encounter in this book are all closed with respect to the manipulations that we consider in various properties of allocation rules, going one way or the other would have no impact at all for what follows. Therefore, we have decided not to include any conditions on the domains in our definitions of properties of allocation rules. This has the added advantage that our definitions bring out the requirements on allocation rules more clearly, because they are not obstructed by requirements on domains.

EXAMPLE 2.11 The difference between network (N, L_2) and network (N, L_1) in example 2.8 is the deletion of link 13. Since the Myerson values of players 1 and 3 equal 32 in communication situation (N, v, L_2) and 27 in communication situation (N, v, L_1), we see that both players lose the same amount as a result of breaking the link between them.
For communication situation (N, v, L) in example 2.9, the deletion of link 34 will cause component {3, 4} to be split up into two components, one consisting of player 3 and another consisting of player 4. The Myerson values of both players 3 and 4 will drop from 2 to 0 as a result of the deletion of the link between them.          ◊

Myerson (1977) found not only that an allocation rule satisfying component efficiency and fairness exists, but also that it is unique. Moreover, it is the Shapley value of the network-restricted game, which is nowadays known as the Myerson value. A point of interest is the domain on which we can establish this axiomatic characterization of the Myerson value. While the Myerson value satisfies component efficiency and fairness on the domain CS consisting of all possible communication situations, to establish unicity we can restrict ourselves to a domain consisting of all communication situations with a fixed player set and a fixed coalitional game. For each player set N and coalitional game (N, v), we denote the set of all communication situations with player set N and underlying game (N, v) by CS_v^N. Note that for a communication situation (N, v, L) ∈ CS_v^N, the addition or deletion of a link does not change the player set or the underlying game, so that fairness can be used on the domain CS_v^N. Theorem 2.4 was first proved in Myerson (1977) and an alternative proof can be found in Aumann and Myerson (1988). We will give a proof that features parts from both of these proofs as well as our own approach.

THEOREM 2.4 The Myerson value is the unique allocation rule on CS_v^N that satisfies component efficiency and fairness.

PROOF: Using theorem 2.3 and efficiency of the Shapley value (see theorem 1.2), it follows quickly that the Myerson value is component efficient, as follows. Let (N, v, L) ∈ CS_v^N and C ∈ N/L. Then

  Σ_{i∈C} μ_i(N, v, L) = Σ_{i∈C} μ_i(C, v|_C, L(C)) = Σ_{i∈C} Φ_i(C, (v|_C)^{L(C)}) = (v|_C)^{L(C)}(C) = v(C),

where the first equality uses component decomposability (theorem 2.3), the third uses efficiency of the Shapley value, and the last uses that C is internally connected.

Fairness of the Myerson value follows most easily using expression (1.4) of the Shapley value. Let (N, v, L) ∈ CS_v^N and i, j ∈ N such that ij ∈ L. Then

  μ_i(N, v, L) - μ_j(N, v, L) = Φ_i(N, v^L) - Φ_j(N, v^L)
    = Σ_{S⊆N: i∉S} [|S|!(n - 1 - |S|)!/n!] (v^L(S ∪ i) - v^L(S))
      - Σ_{S⊆N: j∉S} [|S|!(n - 1 - |S|)!/n!] (v^L(S ∪ j) - v^L(S))
    = Σ_{S⊆N\{i,j}} [|S|!(n - 2 - |S|)!/(n - 1)!] (v^L(S ∪ i) - v^L(S ∪ j)).          (2.14)

The third equality follows by adding the coefficients of v^L(T) in the expression after the second equality sign, for each T ⊆ N. It is a straightforward exercise to show that these add up to zero if {i, j} ⊆ T or T ⊆ N\{i, j}. If T ⊆ N with i ∈ T and j ∉ T, then T appears with coefficient (|T| - 1)!(n - 1 - (|T| - 1))!/n! as S_1 ∪ i (S_1 = T\i) in the first summation and with coefficient |T|!(n - 1 - |T|)!/n! as S_2 = T in the second summation.

We then use that

  (|T| - 1)!(n - |T|)!/n! + |T|!(n - 1 - |T|)!/n! = (|T| - 1)!(n - 1 - |T|)!/(n - 1)!.

Note that T appears with this coefficient as S ∪ i (S = T\i) in the expression after the third equality sign. A similar reasoning holds for all T ⊆ N with j ∈ T and i ∉ T.
Since v^L(S) = v^{L\ij}(S) for all S ⊆ N with i ∉ S or j ∉ S, we conclude from (2.14) and a similar expression for L\ij rather than L that

  μ_i(N, v, L) - μ_j(N, v, L) = μ_i(N, v, L\ij) - μ_j(N, v, L\ij),

which is equivalent to (2.13).
All that is left to prove now is that any allocation rule on CS_v^N that satisfies component efficiency and fairness has to coincide with the Myerson value. We prove this by induction on the number of links in the network (N, L). Let γ be an allocation rule on CS_v^N that is component efficient and fair. Firstly, consider the communication situation (N, v, ∅) ∈ CS_v^N with no links. Then C_i(∅) = {i} for each i ∈ N and component efficiency of both γ and μ implies γ_i(N, v, ∅) = v(i) = μ_i(N, v, ∅). Now, let (N, v, L) ∈ CS_v^N be a communication situation with |L| = k ≥ 1 links and suppose that we already proved that γ and μ coincide for communication situations with fewer than k links. Using fairness of both γ and μ and the induction hypothesis, we find that for any link ij ∈ L

  γ_i(N, v, L) - γ_j(N, v, L) = γ_i(N, v, L\ij) - γ_j(N, v, L\ij)
                              = μ_i(N, v, L\ij) - μ_j(N, v, L\ij)
                              = μ_i(N, v, L) - μ_j(N, v, L),

which implies that

  γ_i(N, v, L) - μ_i(N, v, L) = γ_j(N, v, L) - μ_j(N, v, L).          (2.15)

From this, we easily derive that (2.15) holds for all i and j that are connected, directly or indirectly. We conclude that for each component C ∈ N/L there exists a number d_C such that γ_i(N, v, L) - μ_i(N, v, L) = d_C for all i ∈ C. Using component efficiency of both γ and μ, we find that for a component C ∈ N/L

  |C| d_C = Σ_{i∈C} (γ_i(N, v, L) - μ_i(N, v, L)) = v(C) - v(C) = 0,

so that d_C = 0 has to hold. Hence, γ(N, v, L) = μ(N, v, L).          □



Myerson (1980) shows that in theorem 2.4 fairness can be replaced


by the stronger requirement of balanced contributions. To describe this
axiom, we need a notation for the set of links adjacent to a player i.
So, we define L_i = {l ∈ L | i ∈ l}. An allocation rule satisfies balanced contributions if for any two players i and j it holds that the harm that player i can inflict upon player j by breaking all links in L_i is the same
as the harm that player j can inflict upon player i by undertaking a
similar action. In this sense, players i and j have equal threats against
each other.
Balanced Contributions An allocation rule γ on a class CS of communication situations has balanced contributions if for every communication situation (N, v, L) ∈ CS and any two players i, j ∈ N we have

  γ_i(N, v, L) - γ_i(N, v, L\L_j) = γ_j(N, v, L) - γ_j(N, v, L\L_i).          (2.16)

We illustrate balanced contributions of the Myerson value in the fol-


lowing example.

EXAMPLE 2.12 Consider the communication situation (N, v, L_2) in example 2.8. The network (N, L_2) is represented in figure 2.4 (b). We established that μ(N, v, L_2) = (32, 22, 32, 22). If player 1 breaks all his links, then the set of links that are left is L_2\L_1 = {23, 34}, which results in the network-restricted game (N, v^{L_2\L_1}) with characteristic function

  v^{L_2\L_1} = 60u_{2,3} + 60u_{3,4} - 24u_{2,3,4}.

The Myerson value of the new communication situation (N, v, L_2\L_1) is μ(N, v, L_2\L_1) = (0, 22, 52, 22). Hence, player 2 neither loses nor gains if player 1 breaks all his links.
If player 2 breaks all his links, then the set of links that are left is L_2\L_2 = {13, 14, 34}, which results in the network-restricted game (N, v^{L_2\L_2}) with characteristic function

  v^{L_2\L_2} = 60u_{1,3} + 60u_{1,4} + 60u_{3,4} - 84u_{1,3,4}.

The Myerson value of the new communication situation (N, v, L_2\L_2) is μ(N, v, L_2\L_2) = (32, 0, 32, 32). Hence, player 1's payoff does not change if player 2 breaks all his links. This shows that

  μ_1(N, v, L_2) - μ_1(N, v, L_2\L_2) = 0 = μ_2(N, v, L_2) - μ_2(N, v, L_2\L_1).          ◊

We can establish uniqueness of a component-efficient allocation rule with balanced contributions on the domain CS_v^N, like in theorem 2.4. Since the deletion of all links adjacent to a particular player i does not change the player set or the underlying game, balanced contributions can be used on the domain CS_v^N.

THEOREM 2.5 The Myerson value is the unique allocation rule on CS_v^N that satisfies component efficiency and balanced contributions.

PROOF: Since most of the intuition that goes into this proof is fairly similar to that in the proof of theorem 2.4, we confine ourselves to indicating the lines along which the proof of the current theorem can be established. Myerson (1980) proves that an allocation rule that has balanced contributions is fair. Van den Nouweland (1993) proves that the Myerson value has balanced contributions. Putting this together with theorem 2.4 gives the desired result.          □

When Borm et al. (1992) were looking for an axiomatic characteriza-


tion of the position value (which we will introduce in section 2.3), they
found a new axiomatic characterization of the Myerson value as well. In
order for their characterization of the Myerson value to work, however,
they had to restrict the domain to the set of communication situations
with player set N and with cycle-free networks (N, L). The reason is
that they were using a property called the superfluous link property,
which we will encounter in section 2.3 as well.
Replacing the superfluous link property in Borm et al. (1992)'s axiomatic characterization of μ by either the strong superfluous link property or the superfluous player property, van den Nouweland (1993) gave two axiomatic characterizations of the Myerson value that are valid on the domain of all communication situations with a player set N, which we denote by CS^N. Hence, the requirement of cycle-freeness can be
dropped. The following two properties are used to characterize the My-
erson value.

Additivity An allocation rule γ on a class CS of communication situations is additive if for any two communication situations (N, v, L), (N, w, L) ∈ CS we have

  γ(N, v + w, L) = γ(N, v, L) + γ(N, w, L).          (2.17)

Additivity concerns situations in which the set of players N with


communication links in L cooperates in two different areas. The possible

gains from cooperation in one area are described by the coalitional game
(N,v) and those in the other area by (N,w). If an additive allocation
rule is used to determine the payoffs of the players, the payoffs to the
players are the same whether the two situations are evaluated separately
or jointly.
To describe player anonymity, we will need a notation for the set of players in a network who maintain at least one link with other players. For a network (N, L), we define

  N(L) = {i ∈ N | ∃ j ∈ N: ij ∈ L}.          (2.18)

Player anonymity of an allocation rule states that the allocation takes


a specific form if the communication situation is player anonymous. A
communication situation (N, v, L) is player anonymous if the value of a
coalition of players in the network-restricted game only depends on the
number of players in this coalition who have links with other players,
i.e., v^L(S) = v^L(T) for all S, T ⊆ N with |S ∩ N(L)| = |T ∩ N(L)|.

Player Anonymity An allocation rule γ on a class CS of communication situations is player anonymous if for every player anonymous communication situation (N, v, L) ∈ CS there exists a constant a ∈ R such that

  γ(N, v, L) = a e^{N(L)}.          (2.19)

If a communication situation is player anonymous, then all play-


ers who have links with other players are symmetric in the network-
restricted game and players who are isolated do not contribute anything
when they cooperate with other players. According to a player anony-
mous allocation rule, the players who have links with other players then
all get the same payoff and those who are not in any links get noth-
ing. Note that this is reasonable since for an isolated player i in a
player anonymous communication situation it holds that v(i) = v^L(i) = v^L(∅) = 0, so that such a player cannot obtain a positive value for himself when he works alone.
The following lemma states that for an allocation rule that satisfies
component efficiency, additivity, and player anonymity, the potential
values of coalitions of players that are not connected are not important
for determining the payoffs of the players.

LEMMA 2.1 Let γ be an allocation rule on CS^N that satisfies component efficiency, additivity, and player anonymity. Then for every communication situation (N, v, L) it holds that γ(N, v, L) = γ(N, v^L, L).

PROOF: Let (N, v, L) be a communication situation. If we replace the characteristic function v in this communication situation by the characteristic function of its network-restricted game v^L, we obtain another communication situation (N, v^L, L). Let (N, w) denote the game (N, v - v^L) and consider communication situation (N, w, L). The network-restricted game (N, w^L) corresponding to this situation assigns 0 to all S ⊆ N, because

  w^L(S) = Σ_{C∈S/L} w(C) = Σ_{C∈S/L} (v(C) - v^L(C)) = 0

for all S ⊆ N. Hence, (N, w, L) is player anonymous. Now component efficiency and player anonymity of γ imply that γ_i(N, w, L) = 0 for all i ∈ N. Using additivity of γ, we obtain

  γ(N, v, L) = γ(N, w, L) + γ(N, v^L, L) = γ(N, v^L, L).

We conclude that γ(N, v, L) = γ(N, v^L, L).          □

We need two more properties before we can formulate the axiomatic


characterizations of the Myerson value. The first one involves superflu-
ous players and the second one involves strongly superfluous links.
A player i ∈ N is superfluous in a communication situation (N, v, L) if the presence of this player does not influence the value of the network-restricted game, i.e., v^L(S) = v^L(S ∪ i) for all S ⊆ N.
A link l ∈ L is strongly superfluous in a communication situation (N, v, L) if the deletion of this link has no influence on the network-restricted game, i.e., v^L(S) = v^{L\l}(S) for all S ⊆ N.

Superfluous Player Property An allocation rule γ on a class CS of communication situations satisfies the superfluous player property if for every communication situation (N, v, L) ∈ CS and every player i ∈ N who is superfluous in this communication situation it holds that

  γ(N, v, L) = γ(N, v, L\L_i).          (2.20)

An allocation rule satisfies the superfluous player property if the pay-


offs to the players do not change if a superfluous player breaks all the
links that he is involved in.

Strong Superfluous Link Property An allocation rule γ on a class CS of communication situations satisfies the strong superfluous link property if for every communication situation (N, v, L) ∈ CS and every link l ∈ L that is strongly superfluous in this communication situation it holds that

  γ(N, v, L) = γ(N, v, L\l).          (2.21)

An allocation rule satisfies the strong superfluous link property if the


deletion of a strongly superfluous link does not change the payoffs of the
players.
In the following theorem, we will use the properties defined above
to give two axiomatic characterizations of the Myerson value. We re-
mark that we do not restrict ourselves to communication situations in
which the underlying coalitional games are zero-normalized, like van den
Nouweland (1993) does.

THEOREM 2.6 Let N be a set of players.
(i) The Myerson value is the unique allocation rule on CS^N that satisfies component efficiency, additivity, the superfluous player property, and player anonymity.
(ii) The Myerson value is the unique allocation rule on CS^N that satisfies component efficiency, additivity, the strong superfluous link property, and player anonymity.

PROOF: For four of the five properties that are mentioned in the theorem
it is easy to see that they are satisfied by the Myerson value. Compo-
nent efficiency of the Myerson value was demonstrated in the proof of
theorem 2.4. Additivity of the Myerson value is easily derived from ad-
ditivity of the Shapley value. Player anonymity of the Myerson value
follows readily using component decomposability of μ and symmetry of the Shapley value. Finally, it is immediately clear from its definition that the Myerson value satisfies the strong superfluous link property.
Instead of proving directly that the Myerson value satisfies the superfluous player property, we will prove that this property is implied by the other four properties. So, let γ be an allocation rule on CS^N that satisfies component efficiency, additivity, the strong superfluous link property, and player anonymity. We will show that γ necessarily satisfies the superfluous player property.
So, let i be a superfluous player in communication situation (N, v, L), i.e., v^L(S) = v^L(S ∪ i) for all S ⊆ N. Let l ∈ L_i and consider the game (N, v^{L\l}). Let S ⊆ N. We distinguish between two cases. If S/(L\l) = S/L, then, obviously,

  v^{L\l}(S) = v^L(S).

If S/(L\l) ≠ S/L, then with l = ij we have that S/(L\l) = {C ∈ S/L | i ∉ C} ∪ {C_i, C_j}, where C_i (C_j) denotes the unique component of S/(L\l) containing player i (j). For all components C of S/(L\l) we know that v(C) = v^L(C). Further, using that i is a superfluous player, we see that v(C_i) = v^L(C_i) = v^L(C_i\i). We use this to derive that

  v^{L\l}(S) = Σ_{C∈S/(L\l)} v(C) = Σ_{C∈S/(L\l)} v^L(C\i) = v^L(S\i) = v^L(S),

where the last equality follows from the fact that i is a superfluous player.
We proved that (N, v^{L\l}) coincides with (N, v^L), so that link l is strongly superfluous. Using the strong superfluous link property of γ now implies that γ(N, v, L) = γ(N, v, L\l). Furthermore, since (N, v^{L\l}) = (N, v^L) we find that i is a superfluous player in (N, v, L\l) as well. This implies that we can eliminate all links in L_i one by one, just like we did with link l above. We conclude that γ(N, v, L) = γ(N, v, L\L_i), which proves that the allocation rule γ satisfies the superfluous player property.
We proceed with the proof of part (i). Since we have already established that the Myerson value satisfies the four properties, we only need to prove uniqueness. Let γ be an allocation rule on CS^N that satisfies component efficiency, additivity, the superfluous player property, and player anonymity. We show that γ = μ must hold. So, let (N, v, L) be a communication situation. Then, according to lemma 2.1, γ(N, v, L) = γ(N, v^L, L). We can express characteristic function v^L as a linear combination of unanimity games,

  v^L = Σ_{S∈2^N\{∅}} λ_S(v^L) u_S.

Using additivity of γ, we see that

  γ(N, v^L, L) = Σ_{S∈2^N\{∅}} γ(N, λ_S(v^L) u_S, L).

We have already seen in section 2.1 that λ_S(v^L) = 0 for every coalition S that is not internally connected. Component efficiency and player anonymity of γ thus imply that for all S ⊆ N that are not internally connected

  γ_i(N, λ_S(v^L) u_S, L) = 0 = μ_i(N, λ_S(v^L) u_S, L)

for all i ∈ N.
Now, let S ⊆ N be an internally connected coalition and set β = λ_S(v^L). We will show that γ(N, βu_S, L) = μ(N, βu_S, L). Since S is internally connected, every player in N\S is superfluous. Hence, repeated application of the superfluous player property of γ gives that

  γ(N, βu_S, L) = γ(N, βu_S, L(S)).

Communication situation (N, βu_S, L(S)) is player anonymous, since for each T ⊆ N it holds that (βu_S)^{L(S)}(T) = β if T ∩ N(L(S)) = S and (βu_S)^{L(S)}(T) = 0 otherwise. Therefore, by player anonymity of γ, there exists an a ∈ R such that γ(N, βu_S, L(S)) = a e^S. Using component efficiency of γ, we conclude that a = β/|S| has to hold and, hence,

  γ(N, βu_S, L) = (β/|S|) e^S = μ(N, βu_S, L).


Using additivity, we can now conclude

,(N,vL,L) = L ,(N,AS(VL)us,L)
SE2 N \ {0}

L J.L(N,AS(VL)us,L) = J.L(N,vL,L).
SE2 N \{0}

The proof of part (ii) follows readily from our results so far. We have
already established that the Myerson value satisfies the four properties.
To establish uniqueness, let, be an allocation rule on CS N that satisfies
component efficiency, additivity, the strong superfluous link property,
and player anonymity. We have shown that this implies that , nec-
essarily satisfies the superfluous player property. Using part (i) of the
theorem, it follows that, = J.L. 0

2.3 OTHER ALLOCATION RULES


In this section, we will briefly discuss two additional allocation rules
for communication situations that appear in the literature, the position
value and a value introduced in Hamiache (1999).
The position value was first introduced in Meessen (1988), a Master's thesis in Dutch, and brought to the attention of a wider public in Borm et al. (1992). While the Myerson value focuses on the role of the players in a communication situation, the position value focuses on the role of the links. The position value is based on the link game (L, r^v) as

defined in (2.5). We will restrict ourselves to communication situations (N, v, L) with zero-normalized games (N, v) to make sure that the link game (L, r^v) satisfies r^v(∅) = 0, as we noted in remark 2.1. We denote the set of all communication situations (N, v, L) with fixed player set N and a zero-normalized coalitional game (N, v) by CS_0^N. Let (N, v, L) ∈ CS_0^N be such a communication situation. To find the position value, we first determine the Shapley value of each link in the link game. Then the Shapley value of each link ij is divided equally among players i and j. The philosophy behind this is an equity principle similar in spirit to that represented by the fairness axiom, namely that both players forming a link ij should get equal revenue from it. The total payoff of a player is found by adding the revenues he collects from all of his links.
The position value π of communication situation (N, v, L) ∈ CS_0^N is defined by

  π_i(N, v, L) = Σ_{l∈L_i} (1/2) Φ_l(L, r^v)          (2.22)

for all i ∈ N.
The following two examples provide an illustration of the position
value.

EXAMPLE 2.13 Consider the communication situation (N, v, L) with N = {1, 2, 3}, L = {12, 23}, and v = 60u_{1,2} + 60u_{1,3} + 60u_{2,3} - 108u_N, which was also studied in examples 2.1 and 2.4. Like in example 2.4, we denote a = 12 and b = 23. In example 2.4 we found

  r^v = 60u_a + 60u_b - 48u_L.

This immediately leads to

  Φ_a(L, r^v) = 60 + 0 - 24 = 36;
  Φ_b(L, r^v) = 0 + 60 - 24 = 36,

where we use the description of the Shapley value in terms of unanimity coefficients as described in (1.3).
Using these Shapley values we compute the position value,

  π_1(N, v, L) = (1/2)Φ_a(L, r^v) = 18;
  π_2(N, v, L) = (1/2)Φ_a(L, r^v) + (1/2)Φ_b(L, r^v) = 36;
  π_3(N, v, L) = (1/2)Φ_b(L, r^v) = 18.

Note that the position value and the Myerson value, determined in example 2.7, do not coincide.          ◊
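
The position value can be computed along the same lines as the Myerson value, except that the Shapley value is now taken in the link game and each link's payoff is split equally between its two endpoints. The brute-force sketch below is our own illustration (names are ours; it assumes the zero-normalization of remark 2.1) and reproduces the payoffs (18, 36, 18) computed above.

    from itertools import permutations

    def components(S, links):              # components of (S, L(S)); same helper as before
        remaining = set(S)
        in_S = [set(l) for l in links if set(l) <= set(S)]
        parts = []
        while remaining:
            comp, grown = {remaining.pop()}, True
            while grown:
                grown = False
                for l in in_S:
                    if l & comp and not l <= comp:
                        comp |= l
                        grown = True
            remaining -= comp
            parts.append(frozenset(comp))
        return parts

    def shapley(players, w):
        """Brute-force Shapley value: average marginal contributions over all orderings."""
        players = list(players)
        payoff = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            before = set()
            for p in order:
                payoff[p] += w(frozenset(before | {p})) - w(frozenset(before))
                before.add(p)
        return {p: payoff[p] / len(orders) for p in players}

    def link_game(A, N, v):
        """r^v(A): the sum of v over the components of (N, A)."""
        return sum(v.get(C, 0) for C in components(N, list(A)))

    def position_value(N, v, links):
        phi = shapley(links, lambda A: link_game(A, N, v))   # Shapley value of the link game
        payoff = {i: 0.0 for i in N}
        for l in links:
            for i in l:                    # each endpoint receives half of the link's payoff
                payoff[i] += phi[l] / 2
        return payoff

    N, links = {1, 2, 3}, [(1, 2), (2, 3)]
    v = {frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60, frozenset(N): 72}
    print(position_value(N, v, links))     # {1: 18.0, 2: 36.0, 3: 18.0}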

EXAMPLE 2.14 Consider communication situation (N, v, L_1) with player set N = {1, 2, 3, 4}, set of links L_1 = {12, 34}, and characteristic function v given by v = 4u_{1,2} + 12u_{2,4} + 8u_{3,4} + 18u_N. Network (N, L_1) consists of two lines as represented in figure 2.6 (a).

a: (N, L_1)    b: (N, L_2)

Figure 2.6. Networks (N, L_1) and (N, L_2)

We denote a = 12 and b = 34. The characteristic function of the link game (L_1, r^v) is r^v = 4u_a + 8u_b, from which we derive π(N, v, L_1) = (2, 2, 4, 4).
If we add link c = 23, we obtain network (N, L_2), which is the line represented in figure 2.6 (b). The link game of communication situation (N, v, L_2) is described by

  r^v = 4u_a + 8u_b + 12u_{b,c} + 18u_{a,b,c}.

The Shapley value of this link game equals

  Φ_a(L_2, r^v) = 4 + 6 = 10;
  Φ_b(L_2, r^v) = 8 + 6 + 6 = 20;
  Φ_c(L_2, r^v) = 6 + 6 = 12,

and the position value equals π(N, v, L_2) = (5, 11, 16, 10). Note that the addition of link c between players 2 and 3 increases the payoff of player 2 by 9 and that of player 3 by 12. This illustrates that the position value does not satisfy fairness.          ◊

In section 4.5 we will show that the position value of a player i is equal
to the sum of the marginal contributions of the links in L_i as measured

by a so-called link potential function (see theorem 4.8). This result is


similar in spirit to the result for the Shapley value that we described in
section 1.1, namely that the Shapley value of a player i is equal to his
marginal contribution as defined in (1.5).
We now turn our attention to basic properties of the position value.
Like the Myerson value, the position value is component decomposable.
Theorem 2.7 can be found in van den Nouweland (1993) and we state it
without a proof.

THEOREM 2.7 The position value on the class of communication situa-


tions with zero-normalized underlying games satisfies component decom-
posability.

EXAMPLE 2.15 We use the communication situation (N, v, L_1) in example 2.14 to illustrate component decomposability of the position value. Network (N, L_1) in figure 2.6 (a) partitions the player set N into two components. If we restrict our attention to the component C = {3, 4}, we obtain communication situation (C, w, {34}) with only players 3 and 4, who are directly connected, and the game w = v|_C = 8u_{3,4}. Denoting b = 34, the link game corresponding to this communication situation is ({b}, r^w) with r^w(b) = 8, from which we derive that the position values of players 3 and 4 in the reduced communication situation are 4, just like in communication situation (N, v, L_1).          ◊

To our knowledge, only one axiomatic characterization of the position


value is known. To present it, we need to introduce two more properties.
One deals with link anonymous communication situations and the other
with superfluous links.
A communication situation (N, v, L) is link anonymous if the value of
a set of links in the link game only depends on the number of links in
this set, i.e., r^v(A) = r^v(B) for all A, B ⊆ L with |A| = |B|.
Link Anonymity An allocation rule γ on a class CS of communication situations is link anonymous if for every link anonymous communication situation (N, v, L) ∈ CS there exists a constant a ∈ R such that for all i ∈ N

  γ_i(N, v, L) = a|L_i|.          (2.23)
If a communication situation is link anonymous, then all links are
symmetric in the link game. Hence, all links are equally valuable. Ac-
cording to a link anonymous allocation rule, the payoff to a player then
only depends on the number of links that he is involved in.
Other allocation rules 47

A link l ∈ L is superfluous in a communication situation (N, v, L) if the presence or absence of this link does not change the value of the grand coalition, i.e., r^v(A) = r^v(A\l) for all A ⊆ L.
For an allocation rule that has the superfluous link property the deletion of a superfluous link does not change the payoffs of the players.

Superfluous Link Property An allocation rule γ on a class CS of communication situations satisfies the superfluous link property if for every communication situation (N, v, L) ∈ CS and every link l ∈ L that is superfluous in this communication situation it holds that

  γ(N, v, L) = γ(N, v, L\l).          (2.24)

REMARK 2.3 On the set CS_0^N of communication situations with a zero-normalized coalitional game, the superfluous link property is a weaker version of the strong superfluous link property, which we introduced in section 2.2. This is because the strong superfluous link property requires (2.24) to hold in more situations than the superfluous link property. The reason is that a link that is superfluous in a communication situation (N, v, L) ∈ CS_0^N is also strongly superfluous in (N, v, L), but the reverse does not necessarily hold.
To see that a superfluous link is strongly superfluous, let (N, v, L) ∈ CS_0^N be a communication situation with a zero-normalized game (N, v) and suppose that l ∈ L is a superfluous link in (N, v, L). Hence, r^v(A) = r^v(A\l) for all A ⊆ L. Then for every coalition of players S ⊆ N it holds that

  v^L(S) = Σ_{C∈S/L} v(C) = Σ_{C∈S/L(S)} v(C)
         = Σ_{C∈S/L(S)} v(C) + Σ_{i∈N\S} v(i)
         = Σ_{C∈N/L(S)} v(C)
         = r^v(L(S)) = r^v(L(S)\l) = v^{L\l}(S),

where the third equality follows since (N, v) is zero-normalized, the sixth equality follows using that l is superfluous, and the last equality follows by performing the operations used in the first five equalities in reverse order. We conclude that l is strongly superfluous in (N, v, L).
To see that a link that is strongly superfluous in a communication situation (N, v, L) ∈ CS_0^N is not necessarily superfluous in (N, v, L), consider the communication situation (N, v, L) with N = {1, 2, 3}, L = L^N, and v = u_{2,3}. Denote a = 12. It is easily seen that v^L(S) = v^{L\a}(S) for all S ⊆ N, so that a is strongly superfluous in (N, v, L). However, link a is not superfluous in (N, v, L) because, with b = 13, it holds that

  r^v(a, b) = v(N) = 1 ≠ 0 = v(1, 3) + v(2) = r^v(b).

The axiomatic characterization of the position value in theorem 2.8 (i) was presented in Borm et al. (1992) and it is valid only on the domain consisting of communication situations (N, v, L) with a fixed player set N, a zero-normalized game (N, v), and a cycle-free network (N, L). We will denote the set of communication situations with these properties by CS_0^{N,*}. For the sake of completeness, we also include Borm et al. (1992)'s axiomatic characterization of the Myerson value on CS_0^{N,*} (which we referred to in section 2.2).

THEOREM 2.8 Let N be a set of players.
(i) The position value is the unique allocation rule on CS_0^{N,*} that satisfies component efficiency, additivity, the superfluous link property, and link anonymity.
(ii) The Myerson value is the unique allocation rule on CS_0^{N,*} that satisfies component efficiency, additivity, the superfluous link property, and player anonymity.

The proof of theorem 2.8 is similar to that of theorem 2.6 and we will
not include it here.
The restriction to cycle-free networks in theorem 2.8 is needed because of the use of the superfluous link property. In order to be able to apply this axiom, one has to be able to find for each connected coalition S ⊆ N
a set of links whose presence is both necessary and sufficient to establish
communication between the players in coalition S. This, as we have seen
in section 2.1, can only be guaranteed for cycle-free networks.

EXAMPLE 2.16 Consider the communication situation (N, v, L) with N = {1, 2, 3}, L = L^N, and v = u_{2,3}, which we also considered in remark 2.3. Note that (N, v, L) ∈ CS_0^N, but (N, v, L) ∉ CS_0^{N,*}. We saw in remark 2.3 that link a = 12 is not superfluous in (N, v, L). Links b = 13 and c = 23 are also not superfluous in (N, v, L). Hence, we cannot apply the superfluous link property.          ◊

To our knowledge, the axiomatic characterization of the position value


presented in theorem 2.8 (i) is the only one known. It is still an open
problem to find an axiomatic characterization of the position value on a
class of communication situations that do not necessarily have a cycle-
free network.

We now turn to a third allocation rule for communication situations,


introduced by Hamiache (1999). He claims that there is a unique allo-
cation rule for communication situations satisfying five properties. The
driving force behind his value is a specific consistency property. As opposed to several other consistency properties, this one does not consider reduced games but so-called associated games.
We introduce some additional notation. Let (N, L) be a network. For every coalition of players S ⊆ N we define S*(L) to be the set of players who are directly connected with at least one player in S,

  S*(L) = {i ∈ N | there exists a j ∈ S such that ij ∈ L}.          (2.25)

Note that S*(L) = ∅ for all S ∈ 2^N\{∅} if (N, L) is the empty graph, and S*(L) = N for all S ∈ 2^N\{∅} with |S| ≥ 2 if (N, L) is the complete graph. We stress that S*(L) does not have to be a subset or a superset of S.
Let (N, v, L) be a communication situation and let γ be an allocation rule for communication situations. The associated game (N, v^{γ,L}) is defined as follows. For a coalition S ⊆ N that is internally connected

  v^{γ,L}(S) = v(S) + Σ_{j∈S*(L)\S} [γ_j(S ∪ j, v|_{S∪j}, L(S ∪ j)) - v(j)],          (2.26)

and for a coalition S ⊆ N that is not internally connected

  v^{γ,L}(S) = Σ_{C∈S/L} v^{γ,L}(C).          (2.27)

Hamiache (1999) interprets v^{γ,L}(S) as the value coalition S should claim for itself. He uses the property associated consistency, which requires that the game (N, v) in communication situation (N, v, L) can be replaced with the associated game (N, v^{γ,L}) without changing the payoffs to the players according to γ.
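
To make the construction concrete, the sketch below gives our own reading of (2.25)–(2.27) in code (all names are ours; γ is passed in as a function returning a payoff dictionary for a sub-communication situation, for instance the brute-force Myerson routine sketched after example 2.7). It is an illustration of the definitions, not Hamiache's own algorithm.

    def components(S, links):              # components of (S, L(S)); same helper as before
        remaining = set(S)
        in_S = [set(l) for l in links if set(l) <= set(S)]
        parts = []
        while remaining:
            comp, grown = {remaining.pop()}, True
            while grown:
                grown = False
                for l in in_S:
                    if l & comp and not l <= comp:
                        comp |= l
                        grown = True
            remaining -= comp
            parts.append(frozenset(comp))
        return parts

    def star(S, links):
        """S*(L): the players with at least one direct link to a member of S, cf. (2.25)."""
        return {i for l in links for i in l if (set(l) - {i}) & set(S)}

    def associated_value(S, N, v, links, gamma):
        """v^{gamma,L}(S), following (2.26) for internally connected S and (2.27) otherwise."""
        S = frozenset(S)
        if not S:
            return 0
        comps = components(S, links)
        if len(comps) > 1:                 # S is not internally connected: use (2.27)
            return sum(associated_value(C, N, v, links, gamma) for C in comps)
        claim = v.get(S, 0)
        for j in star(S, links) - S:       # outside players directly linked to S
            Sj = S | {j}
            sub_links = [l for l in links if set(l) <= Sj]   # the link set L(S u j)
            # gamma is evaluated on the sub-communication situation (S u j, v|_{S u j}, L(S u j));
            # passing the full dictionary v is harmless as long as gamma only queries subsets of S u j.
            claim += gamma(Sj, v, sub_links)[j] - v.get(frozenset({j}), 0)
        return claim

Associated consistency then asks that applying γ to the game S ↦ associated_value(S, N, v, links, γ) returns the same payoff vector as applying γ to (N, v, L) itself.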
Associated Consistency An allocation rule γ on a class CS of communication situations satisfies associated consistency if for every communication situation (N, v, L) ∈ CS it holds that

  γ(N, v, L) = γ(N, v^{γ,L}, L).          (2.28)



Associated consistency is similar to the result that we obtained in


lemma 2.1 for the network-restricted game. In that lemma we derived
equality between ,(N, v, L) and ,(N, v L , L) from three other properties,
namely component efficiency, additivity, and player anonymity. Hami-
ache (1999), however, does not derive associated consistency from other,
more basic, properties.
Besides associated consistency and component efficiency (which we
have already encountered), Hamiache (1999) uses the following three
properties.

Linearity An allocation rule γ on a class CS of communication situations is linear if for any two communication situations (N, v, L), (N, w, L) ∈ CS with the same player set and the same network, and two scalars α, β ∈ R it holds that

  γ(N, αv + βw, L) = αγ(N, v, L) + βγ(N, w, L).          (2.29)

Linearity extends additivity by considering a linear combination of


two communication situations with the same player set and the same
communication graph.

Independence of Irrelevant Players An allocation rule γ on a class CS of communication situations is independent of irrelevant players if for a communication situation (N, v, L) ∈ CS with v = u_S for some coalition S ⊆ N that is internally connected in network (N, L), and for every internally connected coalition of players T ⊆ N with T ⊇ S it holds that

  γ_i(N, u_S, L) = γ_i(T, u_S, L(T))   for all i ∈ T.          (2.30)

Independence of irrelevant players is related to the strong superfluous


link property and also to the superfluous player property. An allocation
rule that is component decomposable and that satisfies either the strong
superfluous link property or the superfluous player property necessarily
satisfies independence of irrelevant players. This is easily seen using that
every link in L\L(S) is strongly superfluous and every player in N\S is
superfluous in communication situation (N, u_S, L) if S is an internally
connected coalition.

Positivity An allocation rule γ on a class CS of communication situations is positive if for a communication situation (N, u_N, L) ∈ CS with a connected network (N, L) it holds that

  γ_i(N, u_N, L) > 0   for all i ∈ N.          (2.31)



Positivity states that every player should obtain a positive value in


a communication situation in which the grand coalition N of players is
connected and in which a value of 1 is obtained if and only if all players
in N cooperate.
The following theorem is taken from Hamiache (1999) and we state it
without a proof.

THEOREM 2.9 There is a unique allocation rule on the class CS of all


communication situations that satisfies component efficiency, linearity,
independence of irrelevant players, positivity, and associated consistency.

Computing the allocation rule that satisfies the five properties in the-
orem 2.9 will in general be rather complex. Hamiache (1999) provides
an overview of payoffs for communication situations with at most four
players in which the underlying game is a unanimity game.

EXAMPLE 2.17 Consider the communication situation (N, v, L) as discussed in examples 2.7 and 2.13. Denote the value described in theorem 2.9 by γ*. Hamiache (1999) shows that

  γ*(N, u_{1,2}, L) = (1/2, 1/2, 0);
  γ*(N, u_{1,3}, L) = (1/4, 1/2, 1/4);
  γ*(N, u_{2,3}, L) = (0, 1/2, 1/2);
  γ*(N, u_N, L) = (1/4, 1/2, 1/4).

Using the linearity property of γ* we find

  γ*(N, v, L) = 60γ*(N, u_{1,2}, L) + 60γ*(N, u_{1,3}, L)
                + 60γ*(N, u_{2,3}, L) - 108γ*(N, u_N, L)
              = (18, 36, 18).          ◊

Though γ* coincides with the position value for the communication situation in example 2.17, such a relation does not hold in general. We illustrate this in example 2.18.
Before we go to this example, we remark that, like the Myerson value, the value introduced by Hamiache (1999) is an extension of the Shapley value. This is true since the payoffs the players receive in communication situation (N, v, L^N), in which all players can communicate directly, coincide with Φ(N, v). This property is not satisfied by the position value. We illustrate this in the following example.

EXAMPLE 2.18 Consider the communication situation (N, v, L) with N = {1, 2, 3}, v = 12u_{1,2}, and L = L^N = {12, 13, 23}. Some computations result in

  Φ(N, v) = (6, 6, 0);
  μ(N, v, L) = (6, 6, 0);
  π(N, v, L) = (5, 5, 2);
  γ*(N, v, L) = (6, 6, 0),

where γ* denotes the value described in theorem 2.9. These payoffs illustrate that the position value is not an extension of the Shapley value and that γ* does not always coincide with the position value.          ◊
Chapter 3

INHERITANCE OF PROPERTIES
IN COMMUNICATION SITUATIONS

In this chapter we study some interesting properties of coalitional


games and how restrictions on communication influence these properties.
More specifically, we study what conditions on the underlying network in
a communication situation are necessary and sufficient to guarantee that
certain desirable properties of the underlying game are still satisfied by
the corresponding network-restricted game. Many results in this chapter
are taken from Slikker (2000b).

We start by studying superadditivity in section 3.1. Then, in section


3.2, an analysis of balancedness and a closely related property called
totally balancedness follows. Subsequently, we introduce and study other
properties, starting with convexity and average convexity, in sections
3.3 and 3.4, respectively. After that, in section 3.5, we mainly focus on
properties involving the Shapley value. Firstly, we study under what
circumstances the property that the Shapley value of the underlying
game is in the core of this game is inherited by the network-restricted
game. Subsequently, we are interested in the property that the Shapley
values of a game and all its subgames belong to the corresponding cores.
We exploit the relation between this property and average convexity and
use the inheritance results on average convexity. Finally, we study the
properties that a game has a population monotonic allocation scheme
(PMAS) and that the Shapley allocation scheme is a PMAS in section
3.6. A PMAS for a coalitional game is a payoff scheme that associates
an efficient allocation with each of its subgames and that satisfies the
condition that the payoff to a player does not decrease if the coalition he
belongs to grows larger. The Shapley allocation scheme associates with
each subgame its Shapley value. We conclude the chapter with a review

of the results and some remarks on inheritance of properties related to


the link game in section 3.7.

3.1 SUPERADDITIVITY
In section 1.1 we introduced the property superadditivity. This prop-
erty states that if two disjoint coalitions merge, then the value of the
newly formed coalition is at least as much as the sum of the values
of the two separate coalitions. In the current section we focus on the
inheritance of this property.
Inheritance of superadditivity was first studied by Owen (1986), who
showed that superadditivity of a game (N, v) is inherited by the network-
restricted game (N, v^L) for any network (N, L).

THEOREM 3.1 Let (N, L) be a network and let (N, v) be a coalitional game. If (N, v) is superadditive then the network-restricted game (N, v^L) is superadditive as well.

PROOF: Let S, T ⊆ N be such that S ∩ T = ∅. Since L(S) ∪ L(T) ⊆ L(S ∪ T), it follows that (S ∪ T)/L is a coarser partition of S ∪ T than (S/L) ∪ (T/L). Hence, every component C ∈ (S ∪ T)/L is the union of several components of (S, L(S)) and several components of (T, L(T)). Using this and the superadditivity of (N, v), we derive

v^L(S ∪ T) = Σ_{C∈(S∪T)/L} v(C) ≥ Σ_{C∈S/L} v(C) + Σ_{C∈T/L} v(C) = v^L(S) + v^L(T).

This completes the proof. □

We illustrate the preceding theorem with an example.

EXAMPLE 3.1 Let (N, v, L) be the 4-person communication situation with player set N = {1, 2, 3, 4}, set of links L = {12, 13, 14, 23, 34} (see figure 3.1), and characteristic function v described by v(S) = |S| − 1 for each S ⊆ N, S ≠ ∅.
It is easily checked that the game (N, v) is superadditive. Consider the two disjoint coalitions S = {1, 3} and T = {2, 4}.

Figure 3.1. Network (N, L)

Then S/L = {{1, 3}}, T/L = {{2}, {4}}, and (S ∪ T)/L = {{1, 2, 3, 4}}. It follows that

v^L(S ∪ T) = v(1, 2, 3, 4) = 3 ≥ 1 = v(1, 3) + v(2) + v(4) = v^L(S) + v^L(T).
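As a small computational companion to theorem 3.1 and example 3.1, the following sketch (the helper names are ours, not the book's notation) builds the network-restricted game v^L(S) = Σ_{C∈S/L} v(C) and verifies superadditivity by brute force over all pairs of disjoint coalitions.

```python
from itertools import combinations


def components(players, links):
    """Connected components of the subnetwork induced on `players`."""
    players, comps = set(players), []
    while players:
        comp, stack = set(), [next(iter(players))]
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack += [j for l in links if i in l and set(l) <= players
                      for j in l if j != i]
        players -= comp
        comps.append(frozenset(comp))
    return comps


N = {1, 2, 3, 4}
L = [frozenset(l) for l in [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]]
v = lambda S: max(len(S) - 1, 0)                     # v(S) = |S| - 1, v(empty) = 0
vL = lambda S: sum(v(C) for C in components(S, L))   # network-restricted game

subsets = [set(c) for r in range(5) for c in combinations(sorted(N), r)]

# Superadditivity of v^L: v^L(S u T) >= v^L(S) + v^L(T) for disjoint S, T.
print(all(vL(S | T) >= vL(S) + vL(T)
          for S in subsets for T in subsets if not S & T))   # True

# The particular pair used in example 3.1:
print(vL({1, 3}), vL({2, 4}), vL({1, 2, 3, 4}))              # 1 0 3, and 3 >= 1 + 0
```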

3.2 BALANCEDNESS AND TOTAL BALANCEDNESS
In the current section we focus on games with nonempty cores. In
theorem 1.1 we stated that a game has a nonempty core if and only if
it is balanced. A game is called totally balanced if this game and all its
subgames are balanced, i.e., have nonempty cores.
We start with an example.

EXAMPLE 3.2 Consider communication situation (N, v, L) with player set N = {1, 2, 3, 4}, set of links L = {12, 34} (see figure 3.2), and characteristic function v given by v = u_1 + u_2 − u_{1,2} + 3u_N.

Figure 3.2. Network (N, L)

The underlying game (N, v) is balanced. This is most easily seen by noting that it has a nonempty core. For example, (1, 1, 1, 1) ∈ C(N, v).
Using that N is not connected in the network (N, L), we see that the characteristic function v^L of the network-restricted game equals v^L = u_1 + u_2 − u_{1,2}. It follows that this game has an empty core, because it is impossible to simultaneously satisfy the conditions x_1 ≥ v^L(1) = 1, x_2 ≥ v^L(2) = 1, x_3 ≥ v^L(3) = 0, x_4 ≥ v^L(4) = 0, and x_1 + x_2 + x_3 + x_4 = v^L(N) = 1. Hence, the network-restricted game is not balanced. Note that this negative result is driven by the fact that the value of the component {1, 2} is very low. This low value can be ignored when finding a core element of the underlying game (N, v), because the value of coalition N is fairly high. However, network (N, L) is not connected, so that the value of the grand coalition in the network-restricted game is very low and the game has no core elements. ◊
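The conclusions of example 3.2 can also be checked numerically. The sketch below (assuming the SciPy routine `linprog` is available; any LP solver would do, and the helper names are ours) tests nonemptiness of the core as a feasibility problem, confirming that (N, v) is balanced while (N, v^L) is not.

```python
from itertools import combinations
from scipy.optimize import linprog   # assumed available; any LP solver works


def game_from_unanimity(coeffs):
    """Characteristic function from a dict {coalition: unanimity coefficient}."""
    return lambda S: sum(c for T, c in coeffs.items() if set(T) <= set(S))


def core_nonempty(players, v):
    """Feasibility LP: sum_i x_i = v(N) and sum_{i in S} x_i >= v(S) for all S."""
    players = sorted(players)
    idx = {i: k for k, i in enumerate(players)}
    A_ub, b_ub = [], []
    for r in range(1, len(players)):
        for S in combinations(players, r):
            row = [0.0] * len(players)
            for i in S:
                row[idx[i]] = -1.0            # encodes -sum_{i in S} x_i <= -v(S)
            A_ub.append(row)
            b_ub.append(-v(S))
    res = linprog(c=[0.0] * len(players), A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0] * len(players)], b_eq=[v(tuple(players))],
                  bounds=[(None, None)] * len(players), method="highs")
    return res.success


N = (1, 2, 3, 4)
v = game_from_unanimity({(1,): 1, (2,): 1, (1, 2): -1, N: 3})   # example 3.2
vL = game_from_unanimity({(1,): 1, (2,): 1, (1, 2): -1})        # v^L for L = {12, 34}

print(core_nonempty(N, v))    # True:  (1, 1, 1, 1) is a core element
print(core_nonempty(N, vL))   # False: the network-restricted game is not balanced
```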

The result that we obtain in example 3.2 is driven by the fact that
network (N, L) is not connected. We formalize this in the following theo-
rem, which, among other things, states that connectedness of a nontriv-
ial network is sufficient to guarantee that balancedness of (N, v) implies
balancedness of (N, v^L).

THEOREM 3.2 Let (N, L) be a network. Then the following two statements are equivalent.

(i) The network (N, L) is connected or empty.

(ii) For all balanced games (N, v) the network-restricted game (N, v^L) is balanced.

PROOF: Suppose (i) holds. Let (N, v) be a balanced game. If (N, L) is empty then v^L = Σ_{i∈N} v(i)u_i, implying that (N, v^L) is an additive game and has a nonempty core. Now, suppose (N, L) is connected. Take x ∈ C(N, v), which is possible because (N, v) is balanced. We will show that x ∈ C(N, v^L) as well. Firstly, note that

Σ_{i∈N} x_i = v(N) = v^L(N),   (3.1)

where the first equality holds since x ∈ C(N, v) and the second equality follows because (N, L) is connected. Secondly, for all S ⊆ N it holds that

Σ_{i∈S} x_i = Σ_{C∈S/L} Σ_{i∈C} x_i ≥ Σ_{C∈S/L} v(C) = v^L(S),   (3.2)

where the inequality follows since x ∈ C(N, v) and the last equality by definition of the network-restricted game (N, v^L). Combining (3.1) and (3.2) shows that x ∈ C(N, v^L). Hence, (ii) holds.

Now, assume that (ii) holds. Suppose that (i) does not hold, i.e., (N, L) is not connected and not empty. Consider (N, v) defined by

v(S) = 1     if |S| = 1;
       0     if 1 < |S| < |N|;
       |N|   if S = N.

The game (N, v) is balanced as x defined by x_i = 1 for each i ∈ N is a core element. Because (N, L) is nonempty it follows that |N/L| < |N|. Because (N, L) is not connected it follows that |C| < |N| and v(C) ∈ {0, 1} for each C ∈ N/L. Hence,

v^L(N) = Σ_{C∈N/L} v(C) ≤ |N/L| < |N|.

Also, for each i ∈ N it holds that v^L(i) = v(i) = 1.
Obviously, there exists no y ∈ R^N satisfying Σ_{i∈N} y_i = v^L(N) < |N| and y_i ≥ v^L(i) = 1 for each i ∈ N. Hence, the core of (N, v^L) is empty and the network-restricted game is not balanced. This contradicts (ii). We conclude that (i) has to hold. □

We now turn our attention to the property of total balancedness. A coalitional game (N, v) is totally balanced if it is balanced and all its subgames (S, v|_S), S ⊆ N, are balanced as well.
Note that the game that we used in example 3.2 is not totally bal-
anced, because the subgame ({1, 2}, v|_{{1,2}}) has an empty core. In the
following theorem, which is due to van den Nouweland (1993), we show
that for a totally balanced underlying game the network-restricted game
is always balanced, irrespective of the specific network. In fact, if the
underlying game is totally balanced, then for any network the network-
restricted game is totally balanced as well.

THEOREM 3.3 Let (N, L) be a network and let (N, v) be a coalitional


game. If (N, v) is totally balanced then (N, v^L) is totally balanced.

PROOF: Suppose (N, v) is totally balanced. Let S ⊆ N. It suffices to show that (S, (v^L)|_S) is balanced and we show this by constructing a core element of this game. By total balancedness of (N, v) it follows that for all C ∈ S/L the core of the subgame of (N, v) on C is nonempty. For each C ∈ S/L, let (x_i)_{i∈C} ∈ C(C, v|_C) and consider the vector x_S = (x_i)_{i∈S}. We show that x_S ∈ C(S, (v^L)|_S).

Firstly, note that

Σ_{i∈S} x_i = Σ_{C∈S/L} Σ_{i∈C} x_i = Σ_{C∈S/L} v(C) = v^L(S),   (3.3)

where the second equality follows from the fact that (x_i)_{i∈C} ∈ C(C, v|_C) for each C ∈ S/L.
Secondly, for all T ⊆ S it holds that

Σ_{i∈T} x_i = Σ_{D∈T/L} Σ_{i∈D} x_i ≥ Σ_{D∈T/L} v(D) = v^L(T),   (3.4)

where the inequality follows from the fact that every component D ∈ T/L is contained in a component C ∈ S/L and (x_i)_{i∈C} ∈ C(C, v|_C).
Combining (3.3) and (3.4) we obtain x_S ∈ C(S, (v^L)|_S). □

3.3 CONVEXITY
In the current section we study the inheritance of convexity. The results that we present in this section are taken from van den Nouweland and Borm (1991).
A coalitional game is convex if a player's marginal contribution does not decrease if he joins a larger coalition. Formally, coalitional game (N, v) is convex if for each i ∈ N and for all S, T ⊆ N\i with S ⊆ T it holds that

v(S ∪ i) − v(S) ≤ v(T ∪ i) − v(T).   (3.5)

An equivalent formulation of convexity of (N, v) is

v(S) + v(T) ≤ v(S ∪ T) + v(S ∩ T)   (3.6)

for all S, T ⊆ N.
Convex games were studied by Shapley (1971) and have several appealing properties. Using formulation (3.6), it is quickly seen that a convex game necessarily is superadditive. Moreover, using the procedure leading to expression (1.4) of the Shapley value, it follows from (3.5) that the Shapley value of a convex game (N, v) is in the core of that game, i.e., Φ(N, v) ∈ C(N, v) for a convex game (N, v).
Convexity of the underlying game in a communication situation is not sufficient for convexity of the network-restricted game. We demonstrate this in the following example.

EXAMPLE 3.3 Let (N, v, L) be the 4-person communication situation with player set N = {1, 2, 3, 4}, set of links L = {12, 23, 34, 41} (see figure 3.3), and characteristic function v given by v(S) = |S| − 1 for each S ⊆ N, S ≠ ∅.

Figure 3.3. Network (N, L)

The game (N, v) is convex. This is easily seen by noting that the marginal contribution v(S ∪ i) − v(S) of a player i equals 0 if S = ∅ and 1 for all larger coalitions S, and using (3.5).
With respect to the network-restricted game we find, among other things, that since coalitions {1, 2, 3}, {1, 3, 4}, and N = {1, 2, 3, 4} are internally connected, it holds that v^L(1, 2, 3) = v(1, 2, 3) = 2, v^L(1, 3, 4) = v(1, 3, 4) = 2, and v^L(N) = v(N) = 3. However, coalition {1, 3} is not internally connected and v^L(1, 3) = v(1) + v(3) = 0. Combining these results we find

v^L(1, 2, 3) − v^L(1, 3) = 2 − 0 = 2 > 1 = 3 − 2 = v^L(N) − v^L(1, 3, 4).   (3.7)

We conclude that although (N, v) is convex, (N, v^L) is not convex. ◊
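Example 3.3 lends itself to a direct computational check. The sketch below (helper names are ours) tests condition (3.5) for every player i and every nested pair S ⊆ T ⊆ N\i, reproducing the violation displayed in (3.7).

```python
from itertools import combinations


def components(players, links):
    """Connected components of the subnetwork induced on `players`."""
    players, comps = set(players), []
    while players:
        comp, stack = set(), [next(iter(players))]
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack += [j for l in links if i in l and set(l) <= players
                      for j in l if j != i]
        players -= comp
        comps.append(frozenset(comp))
    return comps


def is_convex(players, v):
    """Check (3.5): v(S u i) - v(S) <= v(T u i) - v(T) for all S, T in N\\i, S in T."""
    players = set(players)
    subsets = [set(c) for r in range(len(players) + 1)
               for c in combinations(sorted(players), r)]
    return all(v(S | {i}) - v(S) <= v(T | {i}) - v(T)
               for i in players
               for T in subsets if i not in T
               for S in subsets if S <= T)


N = {1, 2, 3, 4}
L = [frozenset(l) for l in [(1, 2), (2, 3), (3, 4), (4, 1)]]   # the 4-cycle
v = lambda S: max(len(S) - 1, 0)
vL = lambda S: sum(v(C) for C in components(S, L))

print(is_convex(N, v))    # True
print(is_convex(N, vL))   # False: the 4-cycle is not cycle-complete
print(vL({1, 2, 3}) - vL({1, 3}), vL(N) - vL({1, 3, 4}))   # 2 and 1, as in (3.7)
```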

The example above shows that inheritance of convexity cannot be


guaranteed for all networks. It can be guaranteed, however, for cycle-
complete networks. This is stated in the following lemma.

LEMMA 3.1 Let (N,L) be a cycle-complete network and let (N,v) be a


convex coalitional game. Then (N, v^L) is convex.

PROOF: Let i ∈ N and let S, T ⊆ N\i with S ⊆ T. It suffices to show that v^L(S ∪ i) − v^L(S) ≤ v^L(T ∪ i) − v^L(T), i.e.,

Σ_{C∈(S∪i)/L} v(C) − Σ_{C∈S/L} v(C) ≤ Σ_{D∈(T∪i)/L} v(D) − Σ_{D∈T/L} v(D).   (3.8)

When player i joins coalition S, he causes the communication components of (S, L(S)) containing at least one player who is directly connected

with player i to be united in one large component of (S ∪ i, L(S ∪ i)), whereas the other components of (S, L(S)) remain unchanged. In formula, denoting 𝒞 = {C ∈ S/L | ∃ j ∈ C : ij ∈ L} and

C_i = i ∪ ⋃_{C∈𝒞} C,

we have C_i ∈ (S ∪ i)/L, i.e., C_i = C_i(L(S ∪ i)), and for each C ∈ (S ∪ i)/L with C ≠ C_i it holds that C ∈ S/L. Consequently, we have

Σ_{C∈(S∪i)/L} v(C) − Σ_{C∈S/L} v(C) = v(i ∪ ⋃_{C∈𝒞} C) − Σ_{C∈𝒞} v(C).

Analogously, we obtain

Σ_{D∈(T∪i)/L} v(D) − Σ_{D∈T/L} v(D) = v(i ∪ ⋃_{D∈𝒟} D) − Σ_{D∈𝒟} v(D),

where 𝒟 = {D ∈ T/L | ∃ j ∈ D : ij ∈ L}. Hence, (3.8) is equivalent to

v(i ∪ ⋃_{C∈𝒞} C) − Σ_{C∈𝒞} v(C) ≤ v(i ∪ ⋃_{D∈𝒟} D) − Σ_{D∈𝒟} v(D).   (3.9)

We proceed by showing that player i does not link up more communication components of coalition S than he links up components of coalition T. This is true because we can number the elements of 𝒞 and 𝒟 in such a way that 𝒞 = {C_1, ..., C_s} and 𝒟 = {D_1, ..., D_t}, where t ≥ s and C_r ⊆ D_r for all r ∈ {1, ..., s}. This can be seen as follows: it easily follows that for all C ∈ 𝒞 there exists precisely one D ∈ 𝒟 such that C ⊆ D, because every two players that are connected in (S, L(S)) are also connected in (T, L(T)). Now, suppose there are E_1, E_2 ∈ 𝒞, E_1 ≠ E_2, and F ∈ 𝒟 such that E_1 ⊆ F and E_2 ⊆ F. Let j_1 ∈ E_1 and j_2 ∈ E_2 be such that {i, j_1} ∈ L and {i, j_2} ∈ L. Note that {j_1, j_2} ∉ L. Since {j_1, j_2} ⊆ F ∈ T/L, there is a path in (T, L) from j_1 to j_2. So, since i ∉ T, there is a cycle from i to i over j_1 and j_2 in the network (N, L). However, since (N, L) is cycle-complete this should imply that {j_1, j_2} ∈ L.
Now, we can use the properties of the game (N, v). Superadditivity of the game (N, v) implies

v(i ∪ ⋃_{D∈𝒟} D) ≥ v(i ∪ ⋃_{r=1}^{s} D_r) + Σ_{r=s+1}^{t} v(D_r)   (3.10)

and convexity of (N, v) implies

v(i ∪ ⋃_{r=1}^{s} D_r) − v(D_1) ≥ v(i ∪ ⋃_{r=2}^{s} D_r ∪ C_1) − v(C_1);

v(i ∪ ⋃_{r=k}^{s} D_r ∪ ⋃_{r=1}^{k−1} C_r) − v(D_k) ≥ v(i ∪ ⋃_{r=k+1}^{s} D_r ∪ ⋃_{r=1}^{k} C_r) − v(C_k);

v(i ∪ D_s ∪ ⋃_{r=1}^{s−1} C_r) − v(D_s) ≥ v(i ∪ ⋃_{r=1}^{s} C_r) − v(C_s),

where the second inequality holds for all k ∈ {2, ..., s − 1}. Adding all these s inequalities and deleting the terms that appear to the left of the inequality sign as well as to the right of it, we obtain

v(i ∪ ⋃_{r=1}^{s} D_r) − Σ_{r=1}^{s} v(D_r) ≥ v(i ∪ ⋃_{C∈𝒞} C) − Σ_{C∈𝒞} v(C).   (3.11)

Now, (3.10) and (3.11) readily imply (3.9). □

Lemma 3.1 establishes that cycle-completeness of a network (N, L)


is sufficient for inheritance of convexity. In the following lemma we ex-
tend example 3.3 to show that cycle-completeness of the network is also
necessary for inheritance of convexity in the sense that this inheritance
cannot be guaranteed if the underlying network is not cycle-complete.

LEMMA 3.2 Let (N,L) be a network that is not cycle-complete. Then


there exists a convex game (N, v) such that the network-restricted game
is not convex.

PROOF: Because network (N, L) is not cycle-complete, there is a cycle (x_1, ..., x_k, x_1) in (N, L) and i, j ∈ {1, ..., k} such that i < j − 1, {x_i, x_j} ∉ L and {x_m, x_j} ∈ L for all m ∈ {i + 1, ..., j − 1}. Consider the convex game (N, v) with v(S) = |S| − 1 for all S ⊆ N, S ≠ ∅. Define S = {x_i, x_{i+1}, x_j} and T = {x_1, ..., x_k}. Note that S, T, and T\x_{i+1} are internally connected, while S\x_{i+1} is not. It follows that

v^L(S) − v^L(S\x_{i+1}) = v(x_i, x_{i+1}, x_j) − v(x_i) − v(x_j) = 2
> 1 = v(T) − v(T\x_{i+1}) = v^L(T) − v^L(T\x_{i+1}).

Hence, the game (N, v^L) is not convex. □

The following theorem follows straightforwardly by combining the re-


sults on inheritance of convexity described in lemmas 3.1 and 3.2.

THEOREM 3.4 Let (N, L) be a network. Then the following two statements are equivalent.

(i) The network (N, L) is a cycle-complete network.

(ii) For all convex games (N, v) the network-restricted game (N, v^L) is convex.

3.4 AVERAGE CONVEXITY


In the current section we study the inheritance of the property average
convexity. This property was introduced by Iñarra and Usategui (1993), who were searching for a characterization of coalitional games satisfying the condition that the Shapley value is a core element. As we saw in example 1.3, the Shapley value of a coalitional game is not necessarily in the core of that game, even if the game has a nonempty core. Average convexity is a sufficient condition on a game to guarantee that its Shapley value is a core element.
A coalitional game is average convex if for any coalition the sum (av-
erage) of the marginal contributions of the players in this coalition is less
than or equal to the sum (average) of the marginal contributions of the
same players in a larger coalition. In formula, coalitional game (N, v) is
average convex if for all S, T ⊆ N with S ⊆ T it holds that

Σ_{i∈S} (v(S) − v(S\i)) ≤ Σ_{i∈S} (v(T) − v(T\i)).   (3.12)

Comparing the definitions of average convexity and convexity of a


game, it is clear that average convexity is a weaker property than con-
vexity. Further, average convexity is a stronger property than total balancedness. This follows from the fact that the Shapley value of an average convex game is in its core (see Iñarra and Usategui (1993)), which implies that average convex games have nonempty cores, and the observation that a subgame of an average convex game is average convex itself. Hence, every game that is average convex is totally balanced as well. Note that a totally balanced game is superadditive, so that we can
conclude that an average convex game is superadditive.

EXAMPLE 3.4 Consider the coalitional game (N, v) with N = {1, 2, 3} and v = 6u_{1,2} + 6u_{2,3} − 3u_N. This game is average convex. We will

check that condition (3.12) holds for a few pairs of nested coalitions.
With S = {1} and T = {1, 2} we find

Σ_{i∈S} (v(S) − v(S\i)) = 0 ≤ 6 = Σ_{i∈S} (v(T) − v(T\i)).

With S = {1} and T = N we find

Σ_{i∈S} (v(S) − v(S\i)) = 0 ≤ 3 = Σ_{i∈S} (v(T) − v(T\i)).

With S = {1, 2} and T = N we find

Σ_{i∈S} (v(S) − v(S\i)) = 6 + 6 = 12 ≤ 3 + 9 = Σ_{i∈S} (v(T) − v(T\i)).

Note that (N, v) is not convex, because

v(1, 2) − v(2) = 6 > 3 = v(N) − v(2, 3).

The Shapley value is easily found using its expression in terms of unanimity coefficients. It holds that Φ(N, v) = (2, 5, 2) and it is easily seen that it is in the core C(N, v). Note that this shows that (N, v) is balanced. Because (3, 3) ∈ C({1, 2}, v|_{{1,2}}), (0, 0) ∈ C({1, 3}, v|_{{1,3}}), and (3, 3) ∈ C({2, 3}, v|_{{2,3}}), we see that all subgames of (N, v) are also balanced, so that we can conclude that (N, v) is totally balanced. ◊
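Condition (3.12) is easy to verify exhaustively for small games. The sketch below (our own helper names) checks average convexity of the game of example 3.4 directly from its unanimity coefficients. The same checker, applied to the network-restricted games constructed in examples 3.5 and 3.6 below, reproduces the violations displayed there.

```python
from itertools import combinations


def game_from_unanimity(coeffs):
    """Characteristic function from a dict {coalition: unanimity coefficient}."""
    return lambda S: sum(c for T, c in coeffs.items() if set(T) <= set(S))


def is_average_convex(players, v):
    """Check (3.12): sum_{i in S}(v(S)-v(S\\i)) <= sum_{i in S}(v(T)-v(T\\i))
       for all S subset T."""
    subsets = [set(c) for r in range(len(players) + 1)
               for c in combinations(sorted(players), r)]
    return all(sum(v(S) - v(S - {i}) for i in S)
               <= sum(v(T) - v(T - {i}) for i in S)
               for T in subsets for S in subsets if S <= T)


N = {1, 2, 3}
v = game_from_unanimity({(1, 2): 6, (2, 3): 6, (1, 2, 3): -3})   # example 3.4

print(is_average_convex(N, v))                 # True
print(v({1, 2}) - v({2}), v(N) - v({2, 3}))    # 6 and 3, so (N, v) is not convex
```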

We now turn to the study of conditions on networks that guarantee


that average convexity of a game is inherited by the network-restricted
game. Because of the relation between convexity and average convexity,
we start by considering cycle-complete networks. The following lemma
shows that cycle-completeness is a necessary condition on the network to
guarantee that average convexity of a game is inherited by the network-
restricted game.

LEMMA 3.3 Let (N, L) be a network that is not cycle-complete. Then


there exists an average convex game (N, v) such that the network-re-
stricted game (N, v^L) is not average convex.

PROOF: The proof of this lemma follows quickly from the proof of lemma
3.2. The convex game in that proof is, of course, average convex. Also,
the network-restricted game in that proof is not average convex, which

is seen easily by considering the same coalitions S and T that appear in the proof of lemma 3.2. □

While we have now established that cycle-completeness is a necessary


condition for the inheritance of average convexity, the following example
shows that it is not a sufficient condition.

EXAMPLE 3.5 Consider the 4-person communication situation (N, v, L) with N = {1, 2, 3, 4}, L = {12, 23, 24, 34} (see figure 3.4), and v given by

v(S) = 0     if |S| = 1 or S ∈ {{1, 2}, {1, 3}, {2, 4}};
       6     if S ∈ {{1, 4}, {2, 3}, {3, 4}};
       9     if |S| = 3;
       16    if S = N.

Figure 3.4. Network (N, L)

Note that network (N, L) is cycle-complete. It can be checked that the game (N, v) is average convex. For example, with S_1 = {1, 2, 4} and S_2 = N we find that

Σ_{i∈S_1} (v(S_1) − v(S_1\i)) = 9 + 3 + 9 = 21 ≤ 7 + 7 + 7 = Σ_{i∈S_1} (v(S_2) − v(S_2\i)).

Note that (N, v) is not convex because

v(2, 3) − v(3) = 6 > 3 = v(2, 3, 4) − v(3, 4).


The characteristic function of the network-restricted game is given by
if lSI = 1 or S E {{I, 2}, {I, 3}, {I, 4}, {2, 4}};
if S E {{2, 3}, {3,4}, {I, 3, 4}};
if S E {{1,2,3},{1,2,4},{2,3,4}};
if S = N.

Let S_1 = {1, 2, 4} and S_2 = N. Then

Σ_{i∈S_1} (v^L(S_1) − v^L(S_1\i)) = 9 + 9 + 9 = 27
> 24 = 7 + 10 + 7 = Σ_{i∈S_1} (v^L(S_2) − v^L(S_2\i)).

Hence, (N, v^L) is not average convex. ◊

The previous example shows that cycle-completeness of the network


is not a sufficient condition to guarantee the inheritance of average con-
vexity. In fact, not even cycle-freeness of the network will guarantee the
inheritance of average convexity. The following example shows this.

EXAMPLE 3.6 Consider the 4-person communication situation (N, v, L) with N = {1, 2, 3, 4}, L = {12, 23, 34} (see figure 3.5) and characteristic function v given by

v(S) = 0     if |S| = 1 or S ∈ {{1, 3}, {1, 4}, {2, 3}};
       8     if S ∈ {{1, 2}, {2, 4}, {3, 4}, {1, 2, 3}, {1, 3, 4}};
       14    if S ∈ {{1, 2, 4}, {2, 3, 4}};
       19    if S = N.

Figure 3.5. The network (N, L)

Some tedious checking shows that (N, v) is average convex.
The characteristic function of the network-restricted game (N, v^L) is given by

v^L(S) = 0     if |S| = 1 or S ∈ {{1, 3}, {1, 4}, {2, 3}, {2, 4}};
         8     if S ∈ {{1, 2}, {3, 4}, {1, 2, 3}, {1, 3, 4}, {1, 2, 4}};
         14    if S = {2, 3, 4};
         19    if S = N.
With S_1 = {2, 3, 4} and S_2 = N, we find

Σ_{i∈S_1} (v^L(S_1) − v^L(S_1\i)) = 6 + 14 + 14 = 34
> 33 = 11 + 11 + 11 = Σ_{i∈S_1} (v^L(S_2) − v^L(S_2\i)).

We conclude that (N, v^L) is not average convex. ◊

The examples above indicate that the set of networks that ensure
inheritance of average convexity must be rather small. The following
lemma shows that stars are in this set.

LEMMA 3.4 Let (N, v, L) be a communication situation in which the


underlying game (N, v) is average convex and the network (N, L) is a
star. Then the network-restricted game (N, v^L) is also average convex.

PROOF: Without loss of generality, we assume that player 1 is the central player in the star. Let S_1 ⊆ S_2 ⊆ N. We distinguish between several cases.
Suppose that 1 ∉ S_1. Then, obviously, S_1/L = {{j} | j ∈ S_1} and for each i ∈ S_1, (S_1\i)/L = {{j} | j ∈ S_1\i}. If 1 ∉ S_2, we have

Σ_{i∈S_1} (v^L(S_1) − v^L(S_1\i)) = Σ_{i∈S_1} (Σ_{j∈S_1} v(j) − Σ_{j∈S_1\i} v(j))
= Σ_{i∈S_1} v(i)
= Σ_{i∈S_1} (Σ_{j∈S_2} v(j) − Σ_{j∈S_2\i} v(j))
= Σ_{i∈S_1} (v^L(S_2) − v^L(S_2\i)).

If 1 ∈ S_2, we have

Σ_{i∈S_1} (v^L(S_1) − v^L(S_1\i)) = Σ_{i∈S_1} (Σ_{j∈S_1} v(j) − Σ_{j∈S_1\i} v(j))
= Σ_{i∈S_1} v(i)
≤ Σ_{i∈S_1} (v^L(S_2) − v^L(S_2\i)),

where the inequality follows from superadditivity of (N, v), which is implied by average convexity of (N, v).
Now, assume that 1 ∈ S_1. This implies that S_1/L = {S_1} and (S_1\i)/L = {S_1\i} for all i ≠ 1, and (S_1\1)/L = {{j} | j ∈ S_1\1}. Hence,

Σ_{i∈S_1} (v^L(S_1) − v^L(S_1\i))
= Σ_{i∈S_1\1} (v(S_1) − v(S_1\i)) + v(S_1) − v^L(S_1\1)   (3.13)
= Σ_{i∈S_1} (v(S_1) − v(S_1\i)) + v(S_1\1) − Σ_{j∈S_1\1} v(j).

Analogously, we obtain

Σ_{i∈S_1} (v^L(S_2) − v^L(S_2\i)) = Σ_{i∈S_1} (v(S_2) − v(S_2\i)) + v(S_2\1) − Σ_{j∈S_2\1} v(j).   (3.14)

Using average convexity of (N, v), we have

Σ_{i∈S_1} (v(S_1) − v(S_1\i)) ≤ Σ_{i∈S_1} (v(S_2) − v(S_2\i)).   (3.15)

Furthermore,

v(S_1\1) − Σ_{j∈S_1\1} v(j) = (v(S_1\1) + Σ_{j∈S_2\S_1} v(j)) − Σ_{j∈S_2\1} v(j)
≤ v(S_2\1) − Σ_{j∈S_2\1} v(j),   (3.16)

where the inequality follows from superadditivity of (N, v). Combining (3.13), (3.14), (3.15), and (3.16), we obtain

Σ_{i∈S_1} (v^L(S_1) − v^L(S_1\i)) ≤ Σ_{i∈S_1} (v^L(S_2) − v^L(S_2\i)).

We conclude that (N, v^L) is average convex. □

In the main theorem of this section, theorem 3.6, we give necessary and
sufficient conditions on a network to ensure the inheritance of average
68 Inheritance of properties in communication situations

convexity. We start with three lemmas. The first lemma states that if a
connected network is cycle-complete, not cycle-free, and not complete,
then we can find a subnetwork like that in example 3.5.

LEMMA 3.5 Let (N, L) be a connected network that is (i) not complete, (ii) cycle-complete, and (iii) not cycle-free. Then there exist x_1, x_2, x_3, x_4 ∈ N such that

L(x_1, x_2, x_3, x_4) = {{x_1, x_2}, {x_2, x_3}, {x_2, x_4}, {x_3, x_4}}.
PROOF: A set S ⊆ N is called a clique in (N, L) if ij ∈ L for all {i, j} ⊆ S. A clique S ⊆ N is called a maximal clique in (N, L) if there is no clique T with T ⊃ S.
Since (N, L) is not cycle-free it contains at least one cycle, and, hence, |N| ≥ 3. By cycle-completeness we then know that there is a clique containing at least three vertices. Let S be a maximal clique in (N, L) containing at least three vertices. Since (N, L) is not complete, we have S ⊂ N. Because (N, L) is a connected network there exist i ∈ S, j ∈ N\S with ij ∈ L. S is a maximal clique, so there exists k ∈ S with jk ∉ L. Cycle-completeness of (N, L) then implies that i must be the unique vertex in S directly connected with j. Define x_1 = j, x_2 = i, and let x_3, x_4 ∈ S\i with x_3 ≠ x_4. Then L(x_1, x_2, x_3, x_4) = {{x_1, x_2}, {x_2, x_3}, {x_2, x_4}, {x_3, x_4}}. □

The second lemma shows that if a connected network is cycle-free,


but not a star, then it contains a subnetwork like the one in example
3.6.

LEMMA 3.6 Let (N, L) be a connected network that is (i) not a star and (ii) cycle-free. Then there exist x_1, x_2, x_3, x_4 ∈ N such that

L(x_1, x_2, x_3, x_4) = {{x_1, x_2}, {x_2, x_3}, {x_3, x_4}}.

PROOF: Since (N, L) is connected, cycle-free, and not a star it follows immediately that |N| ≥ 4. Furthermore, since (N, L) is cycle-free we just have to show that there exist two vertices for which the shortest path connecting them consists of at least three links.
Since (N, L) is connected and cycle-free (a tree) we have |L| = |N| − 1. If we denote the degree of i by p(i) = |{j | ij ∈ L}|, then we have Σ_{i∈N} p(i) = 2|N| − 2. Since (N, L) is not a star and cycle-free we have for all i ∈ N that p(i) ≤ |N| − 2. Assume that i ≠ j implies min{p(i), p(j)} ≤ 1. This implies that |{i ∈ N | p(i) > 1}| ≤ 1 and Σ_{i∈N} p(i) ≤ (|N| − 2) + (|N| − 1) < 2|N| − 2, a contradiction. We conclude that there exist i, j ∈ N, i ≠ j, with p(i) ≥ 2 and p(j) ≥ 2.
Since (N, L) is a tree there exists a unique path between two players, which is consequently the shortest path between them. The path between i and j consists of at least one link. Since the degree of both i and j is at least 2, we can find a vertex k directly connected to i and a vertex l directly connected to j, both not on the (shortest) path between i and j. Since (N, L) is cycle-free, the (shortest) path between l and k is via i and j and, hence, we have found a pair of vertices with the shortest path between them consisting of at least three links. Denote this path by (x_1, ..., x_m). Then obviously, since m ≥ 4, L(x_1, x_2, x_3, x_4) = {{x_1, x_2}, {x_2, x_3}, {x_3, x_4}}. □

The third lemma states a property of the network-restricted game in case the underlying network is cycle-complete. In the lemma, we use the notion of a carrier of a coalitional game. A coalition T ⊆ N is a carrier of a coalitional game (N, v) if it holds that v(S) = v(S ∩ T) for each S ⊆ N. Hence, T is a carrier of (N, v) if all players in N\T are zero players in (N, v).

LEMMA 3.7 Let (N, L) be a cycle-complete network and let T ⊆ N be an internally connected coalition. Then for all coalitional games (N, v) with carrier T it holds that T is a carrier of the network-restricted game (N, v^L) as well.

PROOF: Let (N, v) be a coalitional game with carrier T. Let S ⊆ N. We start by showing that first splitting S into its components and then taking the intersection of each component with T gives the same result as first taking the intersection of S with T and then splitting this into its components, i.e.,

{C ∩ T | C ∈ S/L} = (T ∩ S)/L.   (3.17)

If |S ∩ T| ≤ 1, then (3.17) is obviously true. From now on assume |S ∩ T| ≥ 2. Let i, j ∈ S ∩ T. The fact that T is internally connected implies that there exists a path in (N, L) between i and j. Because (N, L) is a cycle-complete network, it follows by lemma 1.1 that there exists a unique shortest path between i and j and that every path between i and j includes all points on this shortest path. We denote the set of points on the shortest path between i and j, including i and j, by P_{i,j}. Note that P_{i,j} ⊆ T has to hold since i and j are connected in T.
Now, it holds that there exists a component C ∈ S/L with i, j ∈ C ∩ T if and only if there exists a C ∈ S/L with P_{i,j} ⊆ C, which holds if and only if P_{i,j} ⊆ S.
Similarly, it holds that there exists a component D ∈ (T ∩ S)/L with i, j ∈ D if and only if there exists a D ∈ (T ∩ S)/L with P_{i,j} ⊆ D, which holds if and only if P_{i,j} ⊆ S.
Combining these two results, we conclude that there exists a C ∈ S/L with i, j ∈ C ∩ T if and only if there exists a D ∈ (T ∩ S)/L with i, j ∈ D. Hence, (3.17) holds. This implies that

v^L(S) = Σ_{C∈S/L} v(C) = Σ_{C∈S/L} v(C ∩ T) = Σ_{D∈(S∩T)/L} v(D) = v^L(S ∩ T).

This proves that T is a carrier of the network-restricted game. □

Using the previous three lemmas, we can prove the following theo-
rem, which covers the inheritance of average convexity for connected
networks.

THEOREM 3.5 Let (N, L) be a connected network. Then the following


two statements are equivalent.
(i) Network (N, L) is a complete network or a star.
(ii) For all average convex games (N, v) the network-restricted game (N, v^L) is average convex.

PROOF: (i) ⇒ (ii) Let (N, v) be an average convex game. If (N, L) is complete then v^L = v and, hence, (N, v^L) is average convex. If (N, L) is a star it follows from lemma 3.4 that (N, v^L) is average convex.
(ii) ⇒ (i) Assume (ii) holds. Then by lemma 3.3 we conclude that (N, L) is cycle-complete. Now, suppose that (N, L) is not complete and not a star. We will show that then condition (ii) is violated. We will distinguish between two cases, (N, L) is not cycle-free and (N, L) is cycle-free.
Firstly, suppose that (N, L) is not cycle-free. By lemma 3.5 it follows that there exist x_1, x_2, x_3, x_4 ∈ N such that L(x_1, x_2, x_3, x_4) = {{x_1, x_2}, {x_2, x_3}, {x_2, x_4}, {x_3, x_4}}. Without loss of generality, we assume x_i = i for all i ∈ {1, 2, 3, 4}. Now, construct the game (N, w) as follows: w(S) = v(S ∩ {1, 2, 3, 4}) for all S ⊆ N, where v is the characteristic function of the game in example 3.5. From lemma 3.7 it follows that {1, 2, 3, 4} is a carrier of the network-restricted game (N, w^L), so that w^L(S) = w^L(S ∩ {1, 2, 3, 4}) = w^{L(1,2,3,4)}(S ∩ {1, 2, 3, 4}) for all S ⊆ N. Because L(1, 2, 3, 4) corresponds to the network of example 3.5, it follows analogously to example 3.5 that (N, w) is average convex but (N, w^L) is not.
Secondly, suppose that (N, L) is cycle-free. From lemma 3.6 it follows that there exist x_1, x_2, x_3, x_4 ∈ N such that L(x_1, x_2, x_3, x_4) = {{x_1, x_2}, {x_2, x_3}, {x_3, x_4}}. Without loss of generality, we assume x_i = i for all i ∈ {1, 2, 3, 4}. Now, construct the game (N, w) as follows: w(S) = v(S ∩ {1, 2, 3, 4}) for all S ⊆ N, where v is the characteristic function of the game in example 3.6. From lemma 3.7 it follows that {1, 2, 3, 4} is a carrier of the network-restricted game (N, w^L), so that w^L(S) = w^L(S ∩ {1, 2, 3, 4}) = w^{L(1,2,3,4)}(S ∩ {1, 2, 3, 4}) for all S ⊆ N. Because L(1, 2, 3, 4) corresponds to the network of example 3.6, it follows analogously to example 3.6 that (N, w) is average convex but (N, w^L) is not. □

We will extend theorem 3.5 to networks that are not connected. To


do this, we need one more lemma. The following lemma states that a
network-restricted game is average convex if and only if all its subgames
associated with the components of the network are average convex.

LEMMA 3.8 Let (N, v, L) be a communication situation. The network-restricted game (N, v^L) is average convex if and only if for every component C ∈ N/L the game (C, (v^L)|_C) is average convex.

PROOF: Firstly, we prove the if-part. Assume that the game (C, (v^L)|_C) is average convex for each C ∈ N/L. Let S_1 ⊆ S_2 ⊆ N. Then we obtain the following chain of (in)equalities.

Σ_{i∈S_1} (v^L(S_1) − v^L(S_1\i)) = Σ_{i∈S_1} Σ_{C∈N/L} (v^L(S_1 ∩ C) − v^L((S_1\i) ∩ C))
= Σ_{C∈N/L} Σ_{i∈S_1} (v^L(S_1 ∩ C) − v^L((S_1\i) ∩ C))
= Σ_{C∈N/L} Σ_{i∈C∩S_1} (v^L(C ∩ S_1) − v^L((C ∩ S_1)\i))
≤ Σ_{C∈N/L} Σ_{i∈C∩S_1} (v^L(C ∩ S_2) − v^L((C ∩ S_2)\i))
= Σ_{i∈S_1} (v^L(S_2) − v^L(S_2\i)).

The first equality follows from the definition of the network-restricted game, which implies that for all S ⊆ N, v^L(S) = Σ_{C∈N/L} v^L(S ∩ C). The third equality follows since v^L(S_1 ∩ C) − v^L((S_1\i) ∩ C) = 0 if i ∈ S_1\C. The inequality follows from average convexity of the subgames (C, (v^L)|_C) and the fact that for all S ⊆ C we have v^L(S) = (v^L)|_C(S). The last equality follows similarly to the first three equalities.
The only-if-part of the lemma follows directly from the fact that for each component C ∈ N/L and all S ⊆ C we have v^L(S) = (v^L)|_C(S) and the observation that a subgame of an average convex game is average convex itself. □

We can now prove the main theorem of this section.

THEOREM 3.6 Let (N, L) be a network. Then the following two statements are equivalent.

(i) For every component C ∈ N/L it holds that (C, L(C)) is a complete network or a star.

(ii) For all average convex games (N, v) the network-restricted game (N, v^L) is average convex.

PROOF: Suppose (i) holds. Let (N, v) be an average convex game. Let C ∈ N/L be a component of the network. Then (C, v|_C) is average convex as well. Because the network (C, L(C)) is complete or a star, it follows by theorem 3.5 that (C, (v|_C)^{L(C)}) is average convex. Noting that (v|_C)^{L(C)} = (v^L)|_C, we conclude using lemma 3.8 that (N, v^L) is average convex. So, (ii) holds.
Suppose (ii) holds. Let C ∈ N/L be a component. Then, obviously, (C, L(C)) is a connected network. Also, for every average convex game (C, w) it holds that the network-restricted game (C, w^{L(C)}) is average convex. This is easily seen by considering the average convex game (N, v) with characteristic function v such that v(S) = w(S ∩ C) for each S ⊆ N and using (ii) and the fact that every subgame of an average convex game is average convex. According to theorem 3.5, we can now conclude that (C, L(C)) is a complete network or a star. Hence, (i) holds.
This completes the proof. □

3.5 CORE-INCLUSION OF THE SHAPLEY VALUE
In the current section we study conditions on a network that guarantee
the inheritance of the property that the Shapley value is a core element.
We also study the inheritance of the property that the Shapley value of
a game and all its subgames belong to the corresponding cores. We will
see that this last property is related to average convexity of an associated
game.
Theorem 3.7 states that inheritance of the property that the Shapley
value belongs to the core is guaranteed if and only if the underlying
network is complete or empty. In the proof of the theorem we will use
the following lemma.

LEMMA 3.9 Let (N, L) be a network that is not cycle-complete. Then there exists a coalitional game (N, v) such that Φ(N, v) ∈ C(N, v) and Φ(N, v^L) ∉ C(N, v^L).

PROOF: Because network (N, L) is not cycle-complete, there exists a cycle (x_1, ..., x_k, x_1) and i, j ∈ {1, ..., k}, i < j − 1, such that {x_i, x_j} ∉ L. Define v = u_{x_i, x_j}. It is easily seen that Φ(N, v) ∈ C(N, v).
Because {x_i, x_j} ∉ L, it holds that v^L(x_i, x_j) = 0. The network-restricted game (N, v^L) is determined by a collection W of subsets of N, where S ∈ W if and only if there exists C ∈ S/L such that {x_i, x_j} ⊆ C. It holds that v^L(S) = 1 if S ∈ W, and v^L(S) = 0 otherwise. Note that if T ⊇ S and S ∈ W, then T ∈ W. This implies that (N, v^L) is monotonic.
Recall from section 1.1 that the Shapley value of a player is the average over all possible orderings of the players of the marginal contribution of this player to the set of players who precede him. Note that monotonicity of (N, v^L) and the fact that v^L(S) ∈ {0, 1} for all S ⊆ N implies that all the marginal contributions of a player are either 0 or 1 and that for any specific ordering players x_i and x_j together receive either 0 or 1. The fact that v^L(x_i, x_j) = 0 implies that they both receive zero if they are first and second in an ordering. So, players x_i and x_j together receive a weighted average of 0 and 1, with a positive weight for 0. Hence, Φ_{x_i}(N, v^L) + Φ_{x_j}(N, v^L) < 1.
By non-negativity of the marginal contributions and the efficiency of the Shapley value, we have Σ_{l=1}^{k} Φ_{x_l}(N, v^L) ≤ 1. Consequently, Σ_{l=1}^{k} Φ_{x_l}(N, v^L) + Φ_{x_i}(N, v^L) + Φ_{x_j}(N, v^L) < 2. This last expression implies that

Σ_{l=i}^{j} Φ_{x_l}(N, v^L) < 1   or   Σ_{l=j}^{k} Φ_{x_l}(N, v^L) + Σ_{l=1}^{i} Φ_{x_l}(N, v^L) < 1.

Since v^L(x_i, x_{i+1}, ..., x_j) = v^L(x_j, ..., x_k, x_1, ..., x_i) = 1, we now find that Φ(N, v^L) ∉ C(N, v^L). This completes the proof. □

We can now prove the following theorem.

THEOREM 3.7 Let (N, L) be a network. Then the following two statements are equivalent.

(i) (N, L) is the complete network or the empty network.

(ii) For every game (N, v) with Φ(N, v) ∈ C(N, v) it holds that Φ(N, v^L) ∈ C(N, v^L).

PROOF: Suppose (i) holds. Let (N, v) be a coalitional game with Φ(N, v) ∈ C(N, v). If (N, L) is the complete network then v^L = v and Φ(N, v^L) ∈ C(N, v^L) because Φ(N, v) ∈ C(N, v). If (N, L) is the empty network, then v^L = Σ_{i∈N} v(i)u_i. This shows that (N, v^L) is an additive game, so that its Shapley value belongs to its core. We conclude that (ii) holds.
Suppose (ii) holds. It follows from lemma 3.9 that (N, L) is cycle-complete. Suppose that (N, L) is not the empty network and not connected. Consider (N, v) defined by

v(S) = 1     if |S| = 1;
       0     if 1 < |S| < |N|;
       |N|   if S = N.

Then Φ_i(N, v) = 1 for all i ∈ N. Obviously, Φ(N, v) ∈ C(N, v). In the same way as in the proof of theorem 3.2, it follows that C(N, v^L) = ∅, implying that Φ(N, v^L) ∉ C(N, v^L). This contradicts (ii). We conclude that if (N, L) is not the empty network, then (N, L) is connected.
Now, suppose that (N, L) is connected, cycle-complete, but not complete. Let i, j ∈ N be two points that are not connected directly. Then, according to lemma 1.1, there exists a unique shortest path between i and j. Take three consecutive points on this path. Without loss of generality, we call them 1, 2, and 3. It holds that L(1, 2, 3) = {12, 23}. Define v = −u_{1,2} − u_{1,3} − u_{2,3} + 3u_{1,2,3}. Then Φ_i(N, v) = 0 for all i ∈ N and it is easily seen that Φ(N, v) ∈ C(N, v). Because v^L = −u_{1,2} − u_{2,3} + 2u_{1,2,3}, it follows that Φ_2(N, v^L) = −1/2 − 1/2 + 2/3 = −1/3 < 0 = v^L(2). We conclude that Φ(N, v^L) ∉ C(N, v^L). This contradicts (ii). We conclude that if (N, L) is connected and cycle-complete, then it has to be complete.
Combining the results obtained above, it follows that (N, L) is the complete network or the empty network. Hence, (i) holds.
This completes the proof. □

In the remainder of this section we will study inheritance of the property that the Shapley values of a game and all its subgames are in the corresponding cores. Marin-Solano and Rafels (1996) show that this property is equivalent to average convexity of an associated potential game. This associated potential game is derived from the (unique) potential function P on the set of all coalitional games, which we encountered in section 1.1. Let (N, v) be a coalitional game. The potential game (N, P^{HM}_{(N,v)}) associated with (N, v) is the coalitional game defined by P^{HM}_{(N,v)}(S) = P(S, v|_S) for all S ⊆ N. Hart and Mas-Colell (1989) show that the characteristic function of the associated potential game can be expressed in terms of the unanimity coefficients λ_S(v) of the game (N, v) as follows,

P^{HM}_{(N,v)} = Σ_{S⊆N, S≠∅} (λ_S(v)/|S|) u_S.   (3.18)

Whenever there is no ambiguity about the underlying game (N, v), we will simply write P^{HM} instead of P^{HM}_{(N,v)} and refer to it as the associated potential game without specifying the coalitional game with which it is associated. Finally, to avoid confusion with noncooperative potential games, which we will encounter in chapter 10, we will sometimes refer to potential games associated with coalitional games as HM-potential games.
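Formula (3.18) translates directly into code: compute the unanimity coefficients (Harsanyi dividends) of v and scale each by 1/|S|. The sketch below (helper names are ours) reproduces the associated potential game of example 3.7 below; feeding the result to the average-convexity checker sketched after example 3.4 then illustrates the criterion of lemma 3.10.

```python
from itertools import combinations


def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]


def dividends(players, v):
    """Unanimity coefficients lambda_S(v) = sum_{T subset S} (-1)^{|S|-|T|} v(T)."""
    return {S: sum((-1) ** (len(S) - len(T)) * v(T) for T in subsets(S))
            for S in subsets(players) if S}


def hm_potential(players, v):
    """HM-potential game via (3.18): P^HM = sum_S (lambda_S(v)/|S|) u_S."""
    lam = dividends(players, v)
    return lambda S: sum(c / len(T) for T, c in lam.items() if T <= frozenset(S))


N = {1, 2, 3}
v = lambda S: 12 * ({1, 2} <= set(S)) + 12 * ({2, 3} <= set(S)) - 9 * (set(S) == N)
P = hm_potential(N, v)

# Reproduces example 3.7: P^HM = 6 u_{1,2} + 6 u_{2,3} - 3 u_N.
print([P(S) for S in [{1, 2}, {2, 3}, {1, 3}, N]])   # [6.0, 6.0, 0.0, 9.0]
```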
For future reference, we capture the relationship between average con-
vexity and core-inclusion of the Shapley values in the following lemma,
which is due to Marin-Solano and Rafels (1996).

LEMMA 3.10 The Shapley values of a coalitional game (N, v) and all its subgames are in the corresponding cores if and only if the associated potential game (N, P^{HM}_{(N,v)}) is average convex.

EXAMPLE 3.7 Consider the coalitional game (N, v) with N = {1, 2, 3} and v = 12u_{1,2} + 12u_{2,3} − 9u_N. Note that this game is not average convex, since with S = {1, 2} and T = N we find

Σ_{i∈S} (v(S) − v(S\i)) = 12 + 12 = 24 > 18 = 15 + 3 = Σ_{i∈S} (v(T) − v(T\i)).

The Shapley value of this game is given by Φ(N, v) = (3, 9, 3). It is a straightforward exercise to check that the Shapley value is in the core and that for each T ⊂ N the Shapley value of the associated subgame is in the corresponding core. According to lemma 3.10 this implies average convexity of the potential game associated with (N, v).
By (3.18) we find that P^{HM}_{(N,v)} = 6u_{1,2} + 6u_{2,3} − 3u_N. Note that (N, P^{HM}_{(N,v)}) is the game that was studied in example 3.4. We showed in that example that indeed this game is average convex. ◊

Because the Shapley value of an average convex game is in its core,


and because all subgames of an average convex game are average convex,
lemma 3.10 readily implies the relation between average convexity of a
game and average convexity of its associated potential game reflected in
the following remark.

REMARK 3.1 For every average convex coalitional game (N, v) it holds that its associated potential game (N, P^{HM}_{(N,v)}) is average convex.

Because of the equivalence of the property under consideration and average convexity of the associated potential game, we wonder if we can use theorem 3.6. It follows from this theorem that if every component of (N, L) is a complete network or a star and (N, P^{HM}_{(N,v)}) is average convex, then (N, (P^{HM}_{(N,v)})^L) is average convex. However, we are interested in average convexity of (N, P^{HM}_{(N,v^L)}), and the following example shows that this is not implied by average convexity of (N, (P^{HM}_{(N,v)})^L).

EXAMPLE 3.8 Consider the 4-person communication situation (N, v, L) with N = {1, 2, 3, 4}, v = 6u_{2,3} + 6u_{2,4} + 6u_{3,4} − 12u_{2,3,4}, and L = {12, 13, 14}. Network (N, L) is a star with player 1 as the central player. The potential game (N, P^{HM}_{(N,v)}) associated with (N, v) is described by P^{HM}_{(N,v)} = 3u_{2,3} + 3u_{2,4} + 3u_{3,4} − 4u_{2,3,4}, implying that

(P^{HM}_{(N,v)})^L = 3u_{1,2,3} + 3u_{1,2,4} + 3u_{1,3,4} − 4u_{1,2,3,4}.

Some straightforward calculations show that (N, (P^{HM}_{(N,v)})^L) is average convex.
For the network-restricted game (N, v^L) we find v^L = 6u_{1,2,3} + 6u_{1,2,4} + 6u_{1,3,4} − 12u_{1,2,3,4}, so that

P^{HM}_{(N,v^L)} = 2u_{1,2,3} + 2u_{1,2,4} + 2u_{1,3,4} − 3u_{1,2,3,4}.

With S_1 = {1, 2, 3} and S_2 = N we find that

Σ_{i∈S_1} (P^{HM}_{(N,v^L)}(S_1) − P^{HM}_{(N,v^L)}(S_1\i)) = 2 + 2 + 2
> 3 + 1 + 1 = Σ_{i∈S_1} (P^{HM}_{(N,v^L)}(S_2) − P^{HM}_{(N,v^L)}(S_2\i)).

We conclude that (N, P^{HM}_{(N,v^L)}) is not average convex. ◊
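The two claims in example 3.8 can be confirmed with the unanimity-based helpers from the earlier sketches; the short check below (again with our own helper names) compares (P^{HM}_{(N,v)})^L and P^{HM}_{(N,v^L)} directly from their unanimity coefficients as stated in the example.

```python
from itertools import combinations


def game_from_unanimity(coeffs):
    return lambda S: sum(c for T, c in coeffs.items() if set(T) <= set(S))


def is_average_convex(players, v):
    subsets = [set(c) for r in range(len(players) + 1)
               for c in combinations(sorted(players), r)]
    return all(sum(v(S) - v(S - {i}) for i in S)
               <= sum(v(T) - v(T - {i}) for i in S)
               for T in subsets for S in subsets if S <= T)


N = {1, 2, 3, 4}
# Unanimity coefficients of v^L in example 3.8 (L the star around player 1):
vL_coeffs = {(1, 2, 3): 6, (1, 2, 4): 6, (1, 3, 4): 6, (1, 2, 3, 4): -12}

# By (3.18), P^HM of v^L scales each coefficient by 1/|T|:
P_of_vL = game_from_unanimity({T: c / len(T) for T, c in vL_coeffs.items()})
# (P^HM of v) restricted by L, as given in the example:
PL_of_v = game_from_unanimity({(1, 2, 3): 3, (1, 2, 4): 3,
                               (1, 3, 4): 3, (1, 2, 3, 4): -4})

print(is_average_convex(N, PL_of_v))   # True:  (P^HM_{(N,v)})^L
print(is_average_convex(N, P_of_vL))   # False: P^HM_{(N,v^L)}
```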

We conclude that we cannot use theorem 3.6. However, it turns out


that inheritance of average convexity of the associated potential game
is ensured for exactly the same class of networks for which inheritance
of average convexity of the underlying game is ensured. In the fol-
lowing lemma we prove that if the underlying network is a star, then
average convexity of the potential game associated with a coalitional
game implies average convexity of the potential game associated with
the network-restricted game.

LEMMA 3.11 Let (N, v, L) be a communication situation such that the


Shapley values of the game (N, v) and all its subgames are in the cor-
responding cores and the network (N, L) is a star. Then the Shapley
values of the game (N, v^L) and all its subgames are in the corresponding
cores.

PROOF: Note that the game (N, v) is obviously totally balanced and, hence, superadditive.
Assume, without loss of generality, that player 1 is the central player in the star. Fix a coalition T ⊆ N. We will show that Φ(T, (v^L)|_T) ∈ C(T, (v^L)|_T).
We start by showing that for all S ⊆ T it holds that

Σ_{i∈S} Φ_i(T, (v^L)|_T) ≥ v^L(S).   (3.19)

Let S ⊆ T. We will distinguish between three cases: (i) 1 ∉ T, (ii) 1 ∈ T\S, and (iii) 1 ∈ S.
(i) Assume that 1 ∉ T. Because v^L(S) = Σ_{i∈S} v(i) and Φ_i(T, (v^L)|_T) = v(i) for all i ∈ T, it follows immediately that (3.19) holds.
(ii) Assume that 1 ∈ T\S. Then v^L(S) = Σ_{i∈S} v(i). Let i ∈ S. Note that this implies that i ≠ 1. Recall that Φ_i(T, (v^L)|_T) is a convex combination of {v^L(R) − v^L(R\i)}_{R⊆T: i∈R}. Note that

v^L(R) − v^L(R\i) = v(R) − v(R\i)   if 1 ∈ R;
v^L(R) − v^L(R\i) = v(i)            if 1 ∉ R.

Because (N, v) is superadditive, v(R) − v(R\i) ≥ v(i) for each R ⊆ T with i ∈ R. This implies that Φ_i(T, (v^L)|_T) ≥ v(i). Consequently, Σ_{i∈S} Φ_i(T, (v^L)|_T) ≥ Σ_{i∈S} v(i) = v^L(S) and we conclude that (3.19) holds.
(iii) Assume that 1 ∈ S. We will use the procedure based on orders that leads to expression (1.4) for the Shapley value. Let π be a fixed order of the players in T. For each player i ∈ T, we denote by PR_i^π the set of players that precede player i in the order π. Hence, player i does not belong to PR_i^π. We first consider the marginal contributions of the players in coalition S in the game (T, (v^L)|_T) with order π. Using that player 1 is the central player in the star (N, L), we obtain

v^L(PR_i^π ∪ i) − v^L(PR_i^π) = v(PR_1^π ∪ 1) − Σ_{j∈PR_1^π} v(j)   if i = 1;
v^L(PR_i^π ∪ i) − v^L(PR_i^π) = v(PR_i^π ∪ i) − v(PR_i^π)          if i ≠ 1 and 1 ∈ PR_i^π;
v^L(PR_i^π ∪ i) − v^L(PR_i^π) = v(i)                               if i ≠ 1 and 1 ∉ PR_i^π.

We derive from this that

Σ_{i∈S} (v^L(PR_i^π ∪ i) − v^L(PR_i^π))
= Σ_{i∈S: i∈PR_1^π} v(i) + v(PR_1^π ∪ 1) − Σ_{i∈PR_1^π} v(i) + Σ_{i∈S: 1∈PR_i^π} (v(PR_i^π ∪ i) − v(PR_i^π))
= v(PR_1^π ∪ 1) − Σ_{i∈T\S: i∈PR_1^π} v(i) + Σ_{i∈S: 1∈PR_i^π} (v(PR_i^π ∪ i) − v(PR_i^π)).
(3.20)

For the marginal contributions of the players in the game (T, v|_T), we derive using superadditivity of (N, v) that

v(PR_i^π ∪ i) − v(PR_i^π) ≥ v(i)   if i ∈ T\S.   (3.21)

Furthermore, the sum of the marginal contributions of the players that precede player 1, together with player 1, equals the value of the coalition that consists of these players, i.e.,

Σ_{i∈T: i∈PR_1^π∪1} (v(PR_i^π ∪ i) − v(PR_i^π)) = v(PR_1^π ∪ 1).   (3.22)

Using (3.20), (3.21), and (3.22), we conclude that

Σ_{i∈S} (v^L(PR_i^π ∪ i) − v^L(PR_i^π))
= v(PR_1^π ∪ 1) − Σ_{i∈T\S: i∈PR_1^π} v(i) + Σ_{i∈S: 1∈PR_i^π} (v(PR_i^π ∪ i) − v(PR_i^π))
≥ v(PR_1^π ∪ 1) − Σ_{i∈T\S: i∈PR_1^π} (v(PR_i^π ∪ i) − v(PR_i^π)) + Σ_{i∈S: 1∈PR_i^π} (v(PR_i^π ∪ i) − v(PR_i^π))
= Σ_{i∈S} (v(PR_i^π ∪ i) − v(PR_i^π)).

We have established that for any order π of the players in T, the sum of the marginal contributions of the players in S in the game (T, (v^L)|_T) is larger than or equal to the sum of the marginal contributions of the players in S in the game (T, v|_T). The average over all possible orders of the players in T of the marginal contributions of the players in the game (T, v|_T) gives the Shapley value Φ(T, v|_T). We know that Φ(T, v|_T) ∈ C(T, v|_T) and, hence, Σ_{i∈S} Φ_i(T, v|_T) ≥ v(S). We can now conclude that Σ_{i∈S} Φ_i(T, (v^L)|_T) ≥ v(S) = v^L(S). We conclude that (3.19) holds.
Combining cases (i), (ii), and (iii) we have that (3.19) holds for all S ⊆ T. Note that v^L(S) = (v^L)|_T(S) for all S ⊆ T by definition of the subgame (T, (v^L)|_T), so that we have shown that

Σ_{i∈S} Φ_i(T, (v^L)|_T) ≥ (v^L)|_T(S)

for all S ⊆ T. Note that efficiency of the Shapley value implies that Σ_{i∈T} Φ_i(T, (v^L)|_T) = v^L(T). We now conclude that Φ(T, (v^L)|_T) ∈ C(T, (v^L)|_T). This completes the proof. □

To prove that networks in which all components are either complete or


stars are the only networks for which inheritance of the property that the
Shapley value of a game and all its subgames are in the corresponding
cores is guaranteed, we use the following two examples.

EXAMPLE 3.9 Let (N, w) be the 4-person game with

w = 12u_{1,4} + 12u_{2,3} + 12u_{3,4} + 9u_{1,2,3} + 9u_{1,2,4} − 9u_{1,3,4} − 9u_{2,3,4} − 8u_N.

Then (N, P^{HM}_{(N,w)}) coincides with the game (N, v) in example 3.5. Let (N, L) be the network in example 3.5, i.e., L = {12, 23, 24, 34}. Some straightforward calculations show that (N, P^{HM}_{(N,w^L)}) is not average convex. ◊

EXAMPLE 3.10 Let (N, w) be the 4-person game with

w = 16u_{1,2} + 16u_{2,4} + 16u_{3,4} − 6u_{1,2,4} − 6u_{2,3,4} − 4u_N.

Then (N, P^{HM}_{(N,w)}) coincides with the game (N, v) in example 3.6. Let (N, L) be the network in example 3.6, i.e., L = {12, 23, 34}. Some straightforward calculations show that (N, P^{HM}_{(N,w^L)}) is not average convex. ◊

To obtain the main result of this section we need the following lemma.

LEMMA 3.12 Let (N, v, L) be a communication situation. The game (N, P^{HM}_{(N,v^L)}) is average convex if and only if for all C ∈ N/L the game (C, (P^{HM}_{(N,v^L)})|_C) is average convex.

PROOF (SKETCH): Recall that the unanimity coefficients of (N, v^L) are denoted by λ_R(v^L), R ∈ 2^N\{∅}. For each S ⊆ N it holds that

P^{HM}_{(N,v^L)}(S) = Σ_{R⊆S: R≠∅} λ_R(v^L)/|R| = Σ_{C∈N/L} Σ_{R⊆S∩C: R≠∅} λ_R(v^L)/|R|,

where the second equality holds because λ_R(v^L) = 0 if R is not contained in a component C ∈ N/L.
The only-if-part follows directly from the fact that for all C ∈ N/L and all T ⊆ C it holds that P^{HM}_{(C,(v^L)|_C)}(T) = (P^{HM}_{(N,v^L)})|_C(T).
The if-part follows from an argument similar to that in the proof of lemma 3.8. □

THEOREM 3.8 Let (N, L) be a network. Then the following two state-
ments are equivalent.

(i) For all C ∈ N/L it holds that (C, L(C)) is a complete network or a star.

(ii) For all games (N, v) such that the Shapley values of (N, v) and all its subgames are in the corresponding cores it holds that the Shapley values of (N, v^L) and all its subgames are in the corresponding cores.

PROOF (SKETCH): Firstly, recall from lemma 3.10 that the property that
the Shapley values of a game and all its subgames are in the correspond-
ing cores is logically equivalent to average convexity of the associated
potential game. Along the same lines as the proof of theorem 3.5, using
lemma 3.11 instead of lemma 3.4, lemma 3.9 instead of lemma 3.3, and
examples 3.9 and 3.10 instead of examples 3.5 and 3.6, it follows that
(i) and (ii) are equivalent for all connected networks (N, L).
Extending this result to all networks can be done along the same lines
as the proof of theorem 3.6 using lemma 3.12 instead of lemma 3.8. □

3.6 POPULATION MONOTONIC ALLOCATION SCHEMES
The last properties for which we investigate the inheritance use the no-
tion of an allocation scheme. We start by introducing allocation schemes.
An allocation scheme is a payoff scheme that provides payoff vectors for
a game and all its subgames. Formally, an allocation scheme for a game
(N, v) is a vector (x_{i,S})_{i∈S, S⊆N}. Note that every allocation rule defined
on the set of all coalitional games naturally defines an allocation scheme
by applying this rule to a game itself and also to all its subgames. The
allocation scheme based on the Shapley value is called the Shapley allo-
cation scheme.

EXAMPLE 3.11 Consider the game (N, v) with N = {1, 2, 3} and v = 6u_{1,2} + 6u_{2,3} − 3u_N. This is the game we considered in example 3.4. In that example we computed the Shapley value of this game, Φ(N, v) = (2, 5, 2). The Shapley values of the subgames of (N, v) are easily computed. The Shapley allocation scheme of (N, v) is represented in table 3.1. ◊

Coalition     player 1   player 2   player 3
{1}           0          -          -
{2}           -          0          -
{3}           -          -          0
{1,2}         3          3          -
{1,3}         0          -          0
{2,3}         -          3          3
{1,2,3}       2          5          2

Table 3.1. The Shapley allocation scheme

Population monotonic allocation schemes were introduced by Spru-


mont (1990). He was interested in a concept that guarantees that once
a coalition is formed, no player in this coalition is tempted to induce
the formation of a smaller coalition. Sprumont (1990) argues that this
requires that the payoff of any player does not decrease as the coali-
tion he belongs to grows larger. An allocation scheme that satisfies this
property and that also satisfies efficiency for each subgame is called a
population monotonic allocation scheme (PMAS). In formula, a vector
(x_{i,S})_{i∈S, S⊆N} is a population monotonic allocation scheme for a coalitional game (N, v) if it satisfies the following two conditions.
(i) Σ_{i∈S} x_{i,S} = v(S) for all S ⊆ N.
(ii) x_{i,S} ≤ x_{i,T} for all S, T ⊆ N with S ⊆ T and all i ∈ S.
If a coalitional game (N, v) has a PMAS (x_{i,S})_{i∈S, S⊆N} then it is easily seen that the payoff vector (x_{i,N})_{i∈N} is a core element of the game (N, v). Also, for each S ⊆ N it holds that (x_{i,S})_{i∈S} is a core element of the subgame (S, v|_S). We conclude that every game that has a PMAS is totally balanced.
The following remark, due to Marin-Solano and Rafels (1996), de-
scribes a relation between the Shapley allocation scheme being a PMAS
and convexity of the associated potential game.

REMARK 3.2 The Shapley allocation scheme of coalitional game (N, v) is a population monotonic allocation scheme (PMAS) if and only if the associated potential game (N, P^{HM}_{(N,v)}) is convex.

An illustration of this remark can be found in the following two ex-


amples.

EXAMPLE 3.12 Consider the game (N, v) with N = {1, 2, 3} and v = 6u_{1,2} + 6u_{2,3} − 3u_N. This is the game we already considered in examples 3.4 and 3.11. The Shapley allocation scheme of this game was represented in table 3.1. Using the payoffs in this table we find

Φ_1({1, 2}, v|_{{1,2}}) = 3 > 2 = Φ_1(N, v).

Hence, the Shapley allocation scheme is not a population monotonic allocation scheme.
The potential game associated with (N, v) is described by P^{HM}_{(N,v)} = 3u_{1,2} + 3u_{2,3} − u_{1,2,3}. Hence, the marginal contribution of player 1 to coalition {2} is 3, whereas his marginal contribution to coalition {2, 3} is 2. We conclude that (N, P^{HM}_{(N,v)}) is not convex. ◊

EXAMPLE 3.13 Consider the game (N, v) with N = {1, 2, 3} and v = 6u_{1,2} + 6u_{1,3} + 4u_{2,3} − 6u_N. This game is not convex since

v(1, 2) − v(1) = 6 > 4 = 10 − 6 = v(N) − v(1, 3).

The associated potential game of (N, v) is described by P^{HM}_{(N,v)} = 3u_{1,2} + 3u_{1,3} + 2u_{2,3} − 2u_N. It can be checked that this game is convex, for example,

P^{HM}_{(N,v)}(1, 2) − P^{HM}_{(N,v)}(1) = 3 ≤ 6 − 3 = P^{HM}_{(N,v)}(N) − P^{HM}_{(N,v)}(1, 3).

Remark 3.2 implies that the Shapley allocation scheme of (N, v) is a population monotonic allocation scheme. This Shapley allocation scheme is represented in table 3.2. One can easily derive that it constitutes a population monotonic allocation scheme. ◊
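The Shapley allocation schemes of examples 3.12 and 3.13 and the two PMAS conditions can be checked mechanically. The sketch below (helper names are ours) builds the Shapley allocation scheme of a game and tests conditions (i) and (ii) of the PMAS definition, reproducing the conclusions drawn from tables 3.1 and 3.2.

```python
from itertools import permutations, combinations
from math import factorial


def game_from_unanimity(coeffs):
    return lambda S: sum(c for T, c in coeffs.items() if set(T) <= set(S))


def shapley(players, v):
    players = list(players)
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        pred = set()
        for i in order:
            phi[i] += v(pred | {i}) - v(pred)
            pred.add(i)
    return {i: phi[i] / factorial(len(players)) for i in players}


def shapley_scheme(players, v):
    """The Shapley allocation scheme: the Shapley value of every subgame."""
    subs = [frozenset(c) for r in range(1, len(players) + 1)
            for c in combinations(sorted(players), r)]
    return {S: shapley(S, v) for S in subs}


def is_pmas(v, scheme):
    """Check conditions (i) and (ii) of a PMAS for an allocation scheme."""
    eff = all(abs(sum(x.values()) - v(S)) < 1e-9 for S, x in scheme.items())
    mono = all(scheme[S][i] <= scheme[T][i] + 1e-9
               for S in scheme for T in scheme if S <= T for i in S)
    return eff and mono


N = {1, 2, 3}
v12 = game_from_unanimity({(1, 2): 6, (2, 3): 6, (1, 2, 3): -3})              # example 3.12
v13 = game_from_unanimity({(1, 2): 6, (1, 3): 6, (2, 3): 4, (1, 2, 3): -6})   # example 3.13

print(is_pmas(v12, shapley_scheme(N, v12)))   # False (table 3.1)
print(is_pmas(v13, shapley_scheme(N, v13)))   # True  (table 3.2)
```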

We will now focus on the inheritance of the property that a coalitional


game has a population monotonic allocation scheme. The following the-
orem shows that if a game has a PMAS then the network-restricted game

Coalition     player 1   player 2   player 3
{1}           0          -          -
{2}           -          0          -
{3}           -          -          0
{1,2}         3          3          -
{1,3}         3          -          3
{2,3}         -          2          2
{1,2,3}       4          3          3

Table 3.2. The Shapley allocation scheme

has a PMAS for any network (N,L). Hence, no conditions on (N, L) are
necessary for the inheritance of this property.

THEOREM 3.9 Let (N, L) be a network and let (N, v) be a coalitional


game that has a PMAS. Then (N, v^L) has a PMAS.

PROOF: Let (x_{i,S})_{i∈S, S⊆N} be a PMAS for (N, v). For each S ⊆ N and all i ∈ S we define

y_{i,S} = x_{i, C_i(L(S))}.   (3.23)

We will show that (y_{i,S})_{i∈S, S⊆N} is a PMAS for (N, v^L) by checking the two conditions in the definition of a PMAS:
(i) Let S ⊆ N. Then

Σ_{i∈S} y_{i,S} = Σ_{C∈S/L} Σ_{i∈C} y_{i,S} = Σ_{C∈S/L} Σ_{i∈C} x_{i,C_i(L(S))}
= Σ_{C∈S/L} Σ_{i∈C} x_{i,C} = Σ_{C∈S/L} v(C) = v^L(S).

(ii) Let S, T ⊆ N with S ⊆ T and let i ∈ S. Then

y_{i,S} = x_{i,C_i(L(S))} ≤ x_{i,C_i(L(T))} = y_{i,T},

where the inequality follows since (x_{i,S})_{i∈S, S⊆N} is a PMAS for the game (N, v) and C_i(L(S)) ⊆ C_i(L(T)).
This completes the proof. □

In the following theorem we study the inheritance of the property


that the Shapley allocation scheme is a PMAS. The theorem states that

inheritance of this property is guaranteed if and only if every component


of the network is complete.

THEOREM 3.10 Let (N,L) be a network. Then the following two state-
ments are equivalent.

(i) For all C ∈ N/L it holds that (C, L(C)) is a complete network.


(ii) For all (N, v) for which the Shapley allocation scheme is a PMAS it holds that the Shapley allocation scheme of (N, v^L) is a PMAS for (N, v^L).

PROOF: Suppose (i) holds. Let (N, v) be a coalitional game for which the Shapley allocation scheme is a PMAS. Denote x_{i,S} = Φ_i(S, v|_S) and y_{i,S} = Φ_i(S, (v^L)|_S) for all S ⊆ N and all i ∈ S. We will show that for all S ⊆ N and all i ∈ S it holds that

y_{i,S} = y_{i,C_i(L(S))} = x_{i,C_i(L(S))}.   (3.24)

The first equality follows by component decomposability of the Shapley value of the network-restricted game (see theorem 2.3) and the second equality follows because the undirected network (C_i(L(S)), L(C_i(L(S)))) is complete.
Because (3.24) implies equation (3.23), it follows that checking the two conditions in the definition of a PMAS can be done along the same lines as in the proof of theorem 3.9. We conclude that (ii) holds.
Suppose (ii) holds. Assume there exists C ∈ N/L such that (C, L(C)) is not a complete network. Then there exist i, j, k ∈ C such that L(i, j, k) = {ij, jk}. Without loss of generality, we assume that i, j, and k are 1, 2, and 3, respectively. We define

v = 2u_{1,2} + 2u_{1,3} + 2u_{2,3} − 3u_{1,2,3}.

Then

P^{HM}_{(N,v)} = u_{1,2} + u_{1,3} + u_{2,3} − u_{1,2,3}.

In the game (N, P^{HM}_{(N,v)}) the marginal contribution of a player to a coalition is zero, except the marginal contribution of a player i ∈ {1, 2, 3} to a coalition S with S ∩ ({1, 2, 3}\i) ≠ ∅, which equals 1. Hence, the marginal contribution of a player to any coalition is less than or equal to his marginal contribution to a larger coalition. Hence, the associated potential game (N, P^{HM}_{(N,v)}) is convex. By remark 3.2 it follows that the Shapley allocation scheme is a PMAS for (N, v). For (N, v^L) we find
86 Inheritance of properties in communication situations

that (v L )I{1,2} = 2U1,2 and

(v L )I{1,2,3} = + 2U1,2,3 + 2U2,3 -


2U1,2 3U1,2,3

= 2U1,2 + 2U2,3 - U1,2,3·


We then find that

    Φ_1({1,2}, (v^L)|_{1,2}) = 2 · (1/2) = 1
        > 2/3 = 2 · (1/2) + 2 · 0 − 1 · (1/3) = Φ_1({1,2,3}, (v^L)|_{1,2,3}).

Hence, the Shapley allocation scheme of (N, v^L) is not a PMAS for
(N, v^L). This contradicts (ii). We conclude that (C, L(C)) is a com-
plete network for each C ∈ N/L.
This completes the proof.  □
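The failing monotonicity at the end of the proof can be retraced from expression (1.3) of the Shapley value. The following lines are our own check, not part of the text; they only use the unanimity coefficients of (v^L)|_{1,2} and (v^L)|_{1,2,3} stated above.

```python
from fractions import Fraction as F

# (v^L)|_{1,2} = 2u_{1,2}  and  (v^L)|_{1,2,3} = 2u_{1,2} + 2u_{2,3} - u_{1,2,3}
phi1_in_12 = F(2, 2)                          # player 1's share of the dividend of {1,2}
phi1_in_123 = F(2, 2) + F(0, 2) - F(1, 3)     # shares of the dividends of {1,2}, {1,3} and {1,2,3}
print(phi1_in_12, phi1_in_123)                # 1 and 2/3: player 1's payoff drops, so no PMAS
```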

3.7 REVIEW AND REMARKS


In table 3.3 we summarize the results on inheritance described in this
chapter.

Property                                Condition on (N, L) to guarantee
                                        inheritance of property of (N, v) by (N, v^L)

Superadditivity                         no condition
Convexity                               cycle-complete
Balancedness                            connected or empty
Total balancedness                      no condition
Shapley value in the core               complete or empty
Average convexity                       every component complete or a star
Shapley value of a game and all
  its subgames in corresponding cores   every component complete or a star
Game has a PMAS                         no condition
Shapley allocation scheme is a PMAS     every component complete

Table 3.3.  Survey of conditions for inheritance



We conclude with some remarks concerning the link game, which was
introduced in section 2.1. Because the link game describes values of links
rather than players, several properties we studied in this chapter have a
different interpretation in the context of a link game. We are only aware
of results on the inheritance of convexity and average convexity for the
link game.
Van den Nouweland and Borm (1991) find that the link game (L, rV)
associated with a communication situation (N, v, L) is convex if (N, v)
is convex and (N, L) cycle-free. Slikker (2000b) argues that there exists
no network (N, L) in which at least one component contains at least two
links that guarantees that average convexity of a game (N, v) is inherited
by the associated link game (L, rV).
Chapter 4

VARIANTS ON THE BASIC MODEL

In this chapter, we will discuss extensions of the basic model of chap-


ter 2 along several dimensions. Every model in this chapter has, like
the basic model, three cornerstones: a set of players, a function that
describes the economic possibilities of the players, and communication
restrictions between the players. Adjustments to the basic model can
consist of other representations of restrictions on communication, of the
economic possibilities of the players, or of a combination of both. Firstly,
in sections 4.1 and 4.2 we analyze two different formulations of restricted
communication. In section 4.1 the players are assumed to be partitioned
into several groups, i.e., each player belongs to exactly one group, and
communication is possible within such a group only. In section 4.2 bilat-
eral communication in the basic model is generalized by allowing com-
munication in conferences consisting of an arbitrary number of players,
while each player can belong to several groups. The third extension of
the basic model, studied in section 4.3, considers situations in which
communication possibilities are not completely reliable and might some-
times fail. This is represented by means of probabilistic networks. In
section 4.4 we model players' communication possibilities by means of
a network as in the basic model, but we model the possible gains from
cooperation using nontransferable utility games rather than TU games.
The last two sections of this chapter consider situations in which the
possible gains from cooperation cannot be modeled by means of a coop-
erative game. In section 4.5 we discuss a model in which the gains that
an internally connected group of players can obtain can depend on how
they are connected. A similar approach is taken in section 4.6, where
it is additionally assumed that the players' roles in a communication
relation may be asymmetric.

4.1 GAMES WITH COALITION STRUCTURES
In the basic model of chapter 2, components surfaced naturally as
maximal-sized coalitions of players who could cooperate given the re-
striction on communication between the players. In the current section
we will study games with coalition structures in which these components
are determined exogenously.
We will formalize this line of thought. Consider a pair (N, B) con-
sisting of a player set N and a partition B of this set into components.
Such a pair will be called a coalition structure. For convenience we will
sometimes refer to B as a coalition structure as well. The interpreta-
tion of a coalition structure is that no communication can take place
between players in different components of the coalition structure, while
any coalition of players who are contained in anyone component can
effectively communicate with each other.
The attentive reader may note that this interpretation is the same
as that of a network in which every pair of players in a component is
connected directly, i.e., in which the network consists of several complete
subnetworks. We will allude to this relation later, when we relate the
allocation rule under consideration in this section to the Myerson value
of the basic model.
A coalition structure induces a partition of each coalition of players.
Let (N, B) be a coalition structure and let S ⊆ N. The induced partition
on S is denoted by (S, B(S)) with B(S) = {B ∩ S | B ∈ B and B ∩ S ≠ ∅}.
So, two players are in the same partition element of S if and only if they
both belong to the same partition element according to B and they both
belong to S.

EXAMPLE 4.1 Consider the set of players N = {1, 2, 3, 4, 5}. Suppose
that players 1, 2, and 3 can effectively cooperate with each other and that
the same holds for players 4 and 5. If additionally it holds that no mean-
ingful interaction between players 1, 2, and 3 on one hand and players
4 and 5 on the other hand is possible then the cooperation possibilities
can be modeled by coalition structure (N, B) with B = {{1, 2, 3}, {4, 5}}.
This coalition structure is represented in figure 4.1.
Restricting our attention to the coalition consisting of players 2, 3,
and 4, we find that players 2 and 3 have the possibility to effectively
communicate with each other, while player 4 cannot communicate with
either player 2 or player 3. This is represented by the partition of {2, 3, 4}
into coalitions {2, 3} and {4}, i.e., B({2, 3, 4}) = {{2, 3}, {4}}. ◇

Figure 4.1. Coalition structure (N,B)

Consider a group of agents N and a situation in which the gains from


cooperation for every coalition are given by characteristic function v,
while the players face communication restrictions modeled by coalition
structure (N, B). The associated triple (N, v, B) is called a game with a
coalition structure.
An allocation rule on a class QCS of games with coalition structures is
a function 'Y that assigns a payoff vector 'Y(N, v, B) ERN to every game
with a coalition structure (N, v, B) in that class.
Aumann and Dreze (1974) studied cooperative games with coalition
structures and allocation rules for these situations. Among other things
they studied the allocation rule Φ^AD that attributes to a player i ∈ N
the Shapley value of player i of the subgame restricted to the partition
element B ∈ B with i ∈ B, i.e.,

    Φ_i^AD(N, v, B) = Φ_i(B, v|_B).     (4.1)

We refer to Φ^AD as the value of Aumann and Dreze.

EXAMPLE 4.2 Consider the 5-person game with a coalition structure
(N, v, B) with player set N = {1, 2, 3, 4, 5}, coalition structure B =
{{1, 2, 3}, {4, 5}}, and characteristic function v = 12u_{1,2} + 8u_{1,3} + 10u_{2,3} −
6u_{1,2,3} + 25u_{1,2,3,4} + 4u_{4,5} − 6u_N. Coalition structure (N, B) was also con-
sidered in example 4.1 and represented in figure 4.1.
We will determine the value of Aumann and Dreze for this situation.
Firstly, we concentrate on players 1, 2, and 3 and consider v|_{{1,2,3}} =
12u_{1,2} + 8u_{1,3} + 10u_{2,3} − 6u_{1,2,3}. Using expression (1.3) of the Shapley
value it follows that

    Φ_1^AD(N, v, B) = 6 + 4 − 2 = 8;
    Φ_2^AD(N, v, B) = 6 + 5 − 2 = 9;
    Φ_3^AD(N, v, B) = 4 + 5 − 2 = 7.

Restricting our attention to coalition {4, 5}, we find

    Φ_4^AD(N, v, B) = 2;
    Φ_5^AD(N, v, B) = 2.

We conclude that Φ^AD(N, v, B) = (8, 9, 7, 2, 2). ◇
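The computation in example 4.2 is easy to reproduce: by expression (1.3), each player's payoff is the sum of his shares of the unanimity coefficients of the coalitions contained in his partition element. The following Python sketch is our own illustration; representing v by its unanimity coefficients is an implementation choice, not a construct from the text.

```python
# Example 4.2: unanimity coefficients of v (keys are the coalitions with nonzero coefficient)
a = {(1, 2): 12, (1, 3): 8, (2, 3): 10, (1, 2, 3): -6,
     (1, 2, 3, 4): 25, (4, 5): 4, (1, 2, 3, 4, 5): -6}
B = [(1, 2, 3), (4, 5)]                      # the coalition structure

def shapley_of_subgame(S):
    """Shapley value of (S, v|_S): only dividends of coalitions inside S count."""
    phi = {i: 0.0 for i in S}
    for T, aT in a.items():
        if set(T) <= set(S):
            for i in T:
                phi[i] += aT / len(T)
    return phi

AD = {}
for Bk in B:                                 # Aumann-Dreze value: Shapley value within each element
    AD.update(shapley_of_subgame(Bk))
print(AD)                                    # {1: 8.0, 2: 9.0, 3: 7.0, 4: 2.0, 5: 2.0}
```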

In a game with a coalition structure a player can directly cooperate


with any other player in the same component. In the setting of chapter 2
this would have been modeled by the complete subnetwork on the play-
ers in this component. The following theorem shows that the value of
Aumann and Dreze coincides with the Myerson value of the communica-
tion situation with the same underlying game and a network constructed
as described above. For every coalition structure (N, B) we denote this
network by

    L^B = {ij | ∃ B ∈ B : i, j ∈ B and i ≠ j}.

THEOREM 4.1 Let (N, v, B) be a game with a coalition structure. Then

    Φ^AD(N, v, B) = μ(N, v, L^B).
PROOF: Let B ∈ B be a component. Consider an arbitrary player
i ∈ B. By definition of network (N, L^B) it follows that the component
in this network containing player i coincides with the partition element
he belongs to according to (N, B), i.e., C_i(L^B) = B. Recall that (B, L^B(B))
is the complete network on player set B. By component decomposability
of μ it follows that

    μ_i(N, v, L^B) = μ_i(C_i(L^B), v|_{C_i(L^B)}, L^B(C_i(L^B))) = μ_i(B, v|_B, L^B(B))
                   = Φ_i(B, (v|_B)^{L^B(B)}) = Φ_i(B, v|_B),

where the second equality follows by definition of L^B and the third equal-
ity by definition of the Myerson value μ. The last equality follows be-
cause (B, L^B(B)) is a complete network, which implies that (v|_B)^{L^B(B)} = v|_B.
The theorem follows by noting that Φ_i(B, v|_B) is the value of Aumann
and Dreze for player i. □

We illustrate theorem 4.1 in the following example.

EXAMPLE 4.3 Consider the game with a coalition structure (N, v, B) of
example 4.2. The associated network (N, L^B) contains all links within
the two components, i.e., L^B = L^{{1,2,3}} ∪ L^{{4,5}}. It is represented in
figure 4.2.

Figure 4.2.  Network (N, L^B)

The characteristic function of the network-restricted game is described
by v^{L^B} = 12u_{1,2} + 8u_{1,3} + 10u_{2,3} − 6u_{1,2,3} + 4u_{4,5}, from which we derive
that μ(N, v, L^B) = (8, 9, 7, 2, 2) = Φ^AD(N, v, B). ◇
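Theorem 4.1 and example 4.3 can also be verified by constructing the network-restricted game v^{L^B} directly from the network L^B and taking its Shapley value. The sketch below is our own illustration; all helper functions are implementation choices, not constructs from the text.

```python
from itertools import combinations, chain

N = (1, 2, 3, 4, 5)
B = [{1, 2, 3}, {4, 5}]
LB = [frozenset(l) for Bk in B for l in combinations(sorted(Bk), 2)]   # the network L^B
a = {(1, 2): 12, (1, 3): 8, (2, 3): 10, (1, 2, 3): -6,
     (1, 2, 3, 4): 25, (4, 5): 4, (1, 2, 3, 4, 5): -6}                  # unanimity coefficients of v

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def v(S):
    return sum(aT for T, aT in a.items() if set(T) <= set(S))

def components(S, links):
    comps, todo = [], set(S)
    while todo:
        comp = {todo.pop()}
        while True:
            grow = {j for l in links if l & comp and l <= set(S) for j in l} - comp
            if not grow:
                break
            comp |= grow
        comps.append(comp)
        todo -= comp
    return comps

def vLB(S):                                      # network-restricted game v^{L^B}
    return sum(v(C) for C in components(S, LB))

def shapley(game):
    phi = {i: 0.0 for i in N}
    for T in subsets(N):
        if T:
            d = sum((-1) ** (len(T) - len(R)) * game(R) for R in subsets(T))
            for i in T:
                phi[i] += d / len(T)
    return phi

print(shapley(vLB))    # Myerson value (8, 9, 7, 2, 2), equal to the Aumann-Dreze value
```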

The relation between the value of Aumann and Dreze and the Myerson
value implies that for several properties that are satisfied by the Myerson
value there exist similar properties that are satisfied by the value of
Aumann and Dreze. We will discuss component efficiency and balanced
contributions. 9

Component Efficiency  An allocation rule γ on a class GCS of games
with coalition structures is component efficient if for every (N, v, B) ∈
GCS and every component B ∈ B it holds that

    Σ_{i∈B} γ_i(N, v, B) = v(B).     (4.2)

To describe the second property we need some additional notation.
Let N be a set of players. For every partition of the player set B =
{B_1, ..., B_m}, each k ∈ {1, ..., m}, and each i ∈ B_k, we denote the
coalition structure that results when player i leaves his partition element
by B − i = {B_1, ..., B_{k−1}, B_k\{i}, {i}, B_{k+1}, ..., B_m}.

Component Restricted Balanced Contributions  An allocation
rule γ on a class GCS of games with coalition structures satisfies
component restricted balanced contributions if for every game with
coalition structure (N, v, B) ∈ GCS, every B ∈ B, and all i, j ∈ B it
holds that

    γ_i(N, v, B) − γ_i(N, v, B − j) = γ_j(N, v, B) − γ_j(N, v, B − i).     (4.3)

We remark that this property is slightly weaker than a straightforward


adaptation of balanced contributions to games with coalition structures,
which would require (4.3) to hold for every pair of players rather than

9We remind the reader of our policy on domains when describing properties of allocation
rules, which is described in remark 2.2 on page 33.

only for pairs of players in the same component. The weaker version,
however, is sufficient in a characterization of the value of Aumann and
Dreze. Moreover, one might argue that it is counterintuitive to put
restrictions on the harm two players in different partition elements can
inflict on each other.
The following example provides an illustration of component efficiency
and component restricted balanced contributions of the value of Aumann
and Dreze.

EXAMPLE 4.4 Consider the 5-person game with a coalition structure
that was also considered in examples 4.2 and 4.3. In example 4.2 we
showed that Φ^AD(N, v, B) = (8, 9, 7, 2, 2), which implies that

    Σ_{i∈{1,2,3}} Φ_i^AD(N, v, B) = 8 + 9 + 7 = 24 = v(1, 2, 3).

Hence, the payoffs to the players in partition element {1, 2, 3} add up to
the value of coalition {1, 2, 3}, which illustrates that the value of Aumann
and Dreze satisfies component efficiency.
Consider the possible influence of players 1 and 2 on each other's
payoffs. If player 1 leaves partition element {1, 2, 3} then player 2 ends
up in partition element {2, 3}. Similarly, if player 2 would leave partition
element {1, 2, 3} then player 1 ends up in partition element {1, 3}. Since
v|_{{2,3}} = 10u_{2,3} and v|_{{1,3}} = 8u_{1,3} it follows that Φ_2^AD(N, v, B − 1) = 5
and Φ_1^AD(N, v, B − 2) = 4. Hence,

    Φ_1^AD(N, v, B) − Φ_1^AD(N, v, B − 2) = 8 − 4 = 4;
    Φ_2^AD(N, v, B) − Φ_2^AD(N, v, B − 1) = 9 − 5 = 4,

which illustrates that Φ^AD satisfies component restricted balanced con-
tributions. ◇

The relation between the value of Aumann and Dreze and the Myer-
son value as shown in theorem 4.1 naturally leads to the characterization
of the value of Aumann and Dreze outlined in theorem 4.2. This charac-
terization is valid on a domain GCS_v^N, the set of all games with coalition
structures with the same underlying game (N, v). Because the proof of
the theorem is similar to that given for the Myerson value in theorem
2.4, we will only provide a sketch of the proof. A full proof can be found
in Slikker (2000a).

THEOREM 4.2 The value of Aumann and Dreze is the unique alloca-
tion rule on GCS_v^N that satisfies component efficiency and component
restricted balanced contributions.

PROOF (SKETCH): It follows that the value of Aumann and Dreze sat-
isfies the two properties by combining theorem 4.1 and component de-
composability of the Myerson value with the fact that the Myerson value
satisfies component efficiency and balanced contributions. Unicity can
be shown similar to the second part of the proof of theorem 2.4. D

4.2 HYPERGRAPH COMMUNICATION SITUATIONS
In the current section we extend the communication possibilities of the
players. Whereas in the basic model there are only bilateral communica-
tion possibilities, we now consider situations in which communication is
possible in conferences that can consist of an arbitrary number of play-
ers. Different from a coalition structure, however, a player can belong to
several coalitions of players who can effectively communicate with each
other.
We model communicative possibilities between players by means of
hypergraphs. A hypergraph is a pair (N, H) with N the player set and H
a family of subsets of N. An element H E H is called a conference. The
interpretation of a hypergraph is as follows. Communication between
players in a hypergraph can only take place within a conference. Fur-
thermore, communication in this conference cannot take place effectively
if not all its members are present, i.e., all players of the conference have
to participate. Note that a hypergraph is a generalization of a network,
which has bilateral communication channels only.

EXAMPLE 4.5 Consider the hypergraph (N, H) with player set N =


{1,2,3,4,5,6} and set of conferences H = {{1,3},{2,5,6},{3,4,5}}.
This hypergraph is represented in figure 4.3.
Players 3 and 5 cannot communicate with each other in the absence
of all other players. However, meaningful communication can take place
within conference {3, 4, 5} in the presence of player 4. <>

Several concepts for networks that we encountered can be extended to


hypergraphs. Let (N, H) be a hypergraph. Like for networks, we want

Figure 4.3. Hypergraph (N, H)

to define the components of a hypergraph. A path in hypergraph (N, H)
is a sequence (x_1, H_1, x_2, ..., x_{k−1}, H_{k−1}, x_k) such that {x_l, x_{l+1}} ⊆ H_l
for all l ∈ {1, ..., k − 1} and H_l ∈ H for all l ∈ {1, ..., k − 1}. A
cycle in the hypergraph (N, H) is a path (x_1, H_1, x_2, ..., x_k, H_k, x_{k+1})
where k ≥ 2, x_1, ..., x_k are all distinct players in N, x_{k+1} = x_1, and
H_1, ..., H_k are all distinct conferences in H. A hypergraph is called
cycle-free if it does not contain a cycle. Two players i and j are connected
if there exists a path (x_1, H_1, x_2, ..., x_{k−1}, H_{k−1}, x_k) with x_1 = i and
x_k = j. Two players i and j are called directly connected if there exists
H ∈ H with {i, j} ⊆ H. Two connected players i and j are called
indirectly connected if they are not directly connected. The notion of
connectedness induces, similar to connectedness for networks, a partition
of the player set into (cooperation) components, where two players are
in the same (cooperation) component if and only if they are connected.
The set of communication components will be denoted by N/H.
To coordinate actions within a coalition S ⊆ N, only conferences
within S are relevant. These conferences are denoted by H(S) = {H ∈
H | H ⊆ S}. The partition of S into communication components ac-
cording to hypergraph (S, H(S)) will be denoted by S/H.

EXAMPLE 4.6 Consider the hypergraph of example 4.5. In this hyper-
graph player 1 is connected to player 6 via conferences {1, 3}, {3, 4, 5},
and {2, 5, 6}, because (1, {1, 3}, 3, {3, 4, 5}, 5, {2, 5, 6}, 6) is a path in the
hypergraph. It is easily checked that the grand coalition is the unique
component in this hypergraph, i.e., N/H = {N}. However, if we con-
sider for example S = {1, 2, 3, 4} then there are three components,
S/H = {{1, 3}, {2}, {4}}.
Note that hypergraph (N, H) is cycle-free. Cycle-freeness is lost if
we add for example conference {1, 2} to the hypergraph. This extended
hypergraph (N, H ∪ {{1, 2}}) is represented in figure 4.4.

Figure 4.4.  Hypergraph (N, H ∪ {{1, 2}})

Since (1, {1, 3}, 3, {3, 4, 5}, 5, {2, 5, 6}, 2, {1, 2}, 1) is a cycle, we con-
clude that hypergraph (N, H ∪ {{1, 2}}) is not cycle-free. ◇

Hypergraph communication situations are closely related to commu-


nication situations, the only difference being that the cooperation pos-
sibilities are described by a hypergraph instead of a network with only
bilateral communication relations. Formally, a hypergraph communica-
tion situation is a triple (N, v, H) where (N, v) is a coalitional game
and (N, H) a hypergraph. As in a communication situation, (N, v) rep-
resents the possible gains from cooperation. The hypergraph (N, H)
models restricted cooperation possibilities between the players.
The potential gains from a coalition in hypergraph communication
situation (N, v, H) depend on the coalitional game and the cooperation
possibilities. The hypergraph-restricted game (N, v^H) incorporates both
and is defined by

    v^H(S) = Σ_{C∈S/H} v(C)   for all S ⊆ N,     (4.4)

where S/H denotes the partition of coalition S into (communication)
components. The value of a coalition in the hypergraph-restricted game
is defined as the sum of the values of the components of this coalition,
since meaningful communication occurs within components only.
An allocation rule on a class HCS of hypergraph communication sit-
uations is a function γ that assigns a payoff vector γ(N, v, H) ∈ R^N to
every hypergraph communication situation (N, v, H) in that class.
The idea of modeling communication by means of hypergraphs is due
to Myerson (1980). The analogue of the Myerson value for communica-
tion situations is called the Myerson value for hyperymph communication
situations, or simply the Myerson value, and coincides with the Shapley
value of the hypergraph-restricted game. Formally, the Myerson value
98 Variants on the basic model

fJ, of hypergraph communication situation (N, v, H) is described by

(4.5)
We illustrate the hypergraph-restricted game and the Myerson value
in the following example.

EXAMPLE 4.7 Let (N, v, H) be the hypergraph communication situa-
tion with player set N = {1, 2, 3, 4, 5, 6}, characteristic function v =
12u_{1,3} + 9u_{3,5} + 18u_{1,6}, and set of conferences H = {{1, 3}, {2, 5, 6},
{3, 4, 5}}. Hypergraph (N, H) was also studied in example 4.5 and is
represented in figure 4.3. For any coalition we can determine its value
in the hypergraph-restricted game using (4.4). For example,

    v^H({1, 2, 3, 4}) = Σ_{C∈{1,2,3,4}/H} v(C)
                      = v(1, 3) + v(2) + v(4) = 12 + 0 + 0 = 12.

It is easily verified that

    v^H = 12u_{1,3} + 9u_{3,4,5} + 18u_N.

Hence, the Myerson value equals

    μ(N, v, H) = Φ(N, v^H)
               = (6, 0, 6, 0, 0, 0) + (0, 0, 3, 3, 3, 0) + (3, 3, 3, 3, 3, 3)
               = (9, 3, 12, 6, 6, 3).   ◇
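The computations in example 4.7 can be reproduced with a short script. The following Python sketch is our own (the helpers and the dividend-based Shapley formula are implementation choices, not constructs from the text); it builds the hypergraph-restricted game v^H of (4.4) and takes its Shapley value.

```python
from itertools import combinations, chain

N = (1, 2, 3, 4, 5, 6)
H = [frozenset({1, 3}), frozenset({2, 5, 6}), frozenset({3, 4, 5})]   # conferences
a = {(1, 3): 12, (3, 5): 9, (1, 6): 18}      # unanimity coefficients of v

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def v(S):
    return sum(aT for T, aT in a.items() if set(T) <= set(S))

def components(S, confs):
    """Partition S into components of the hypergraph (S, H(S))."""
    confs = [h for h in confs if h <= set(S)]
    comps, todo = [], set(S)
    while todo:
        comp = {todo.pop()}
        while True:
            grow = set().union(*(h for h in confs if h & comp)) - comp
            if not grow:
                break
            comp |= grow
        comps.append(frozenset(comp))
        todo -= comp
    return comps

def vH(S):                                   # hypergraph-restricted game, equation (4.4)
    return sum(v(C) for C in components(S, H))

def shapley(players, game):
    phi = {p: 0.0 for p in players}
    for T in subsets(players):
        if T:
            d = sum((-1) ** (len(T) - len(R)) * game(R) for R in subsets(T))
            for p in T:
                phi[p] += d / len(T)
    return phi

print(vH((1, 2, 3, 4)))          # 12, as in example 4.7
print(shapley(N, vH))            # Myerson value (9, 3, 12, 6, 6, 3)
```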

Myerson (1980) characterized the Myerson value for hypergraph com-


munication situations by two properties, component efficiency and fair-
ness. Both properties can be formulated by adapting the properties
with the same name for communication situations. For component effi-
ciency this adaptation is straightforward. Fairness of an allocation rule
for hypergraph communication situations demands that deleting a spe-
cific conference has the same effect on the payoffs of all players in this
conference. The characterization of the Myerson value for hypergraph
communication situations then follows along the lines of theorem 2.4.
Similarly, a characterization with component efficiency and balanced
contributions can be given.
In chapter 2 we presented the position value, an alternative alloca-
tion rule for communication situations that was, like the Myerson value,

based on the Shapley value. We will follow the line of thought in the
definition of the position value to define a similar allocation rule for
hypergraph communication situations.
The main idea behind the position value for communication situations
is the equal division of the value of a communication relation between
the players that form this relation. This idea can straightforwardly be
extended to hypergraph communication situations. We start with the
introduction of the conference game, which describes the possible gains
of the players if only a few conferences are available to the players in N.
The conference game (H, r^v) associated with hypergraph communication
situation (N, v, H) has characteristic function r^v defined by

    r^v(A) = v^A(N) = Σ_{C∈N/A} v(C)   for all A ⊆ H.     (4.6)

The Shapley value of this conference game is used to evaluate the worth
of each conference. By dividing this worth equally among the players
in a conference we come to a straightforward extension of the position
value to the setting of hypergraph communication situations. To make
sure that r^v(∅) = 0 we restrict ourselves to zero-normalized games (N, v)
(see also remark 2.1). The set of hypergraph communication situations
with player set N and a zero-normalized game (N, v) will be denoted by
HCS_0^N. Furthermore, we denote the set of conferences in H that contain
player i by H_i = {H ∈ H | i ∈ H}.
The position value π of hypergraph communication situation (N, v, H)
∈ HCS_0^N is defined by

    π_i(N, v, H) = Σ_{H∈H_i} (1/|H|) Φ_H(H, r^v)   for all i ∈ N.     (4.7)

The following example provides an illustration of the position value.

EXAMPLE 4.8 Consider hypergraph communication situation (N, v, H)
with player set N = {1, 2, 3, 4, 5, 6}, characteristic function v = 12u_{1,3} +
9u_{3,5} + 18u_{1,6}, and set of conferences H = {{1, 3}, {2, 5, 6}, {3, 4, 5}},
which was also studied in example 4.7. For convenience we denote
H_1 = {1, 3}, H_2 = {2, 5, 6}, and H_3 = {3, 4, 5}. Some straightforward
calculations show that

    r^v = 12u_{{H_1}} + 9u_{{H_3}} + 18u_{{H_1,H_2,H_3}},

which immediately implies that

    Φ_{H_1}(H, r^v) = 12 + 0 + 6 = 18;
    Φ_{H_2}(H, r^v) = 0 + 0 + 6 = 6;
    Φ_{H_3}(H, r^v) = 0 + 9 + 6 = 15.

We can now determine the position value, e.g.,

    π_1(N, v, H) = (1/2) · Φ_{H_1}(H, r^v) = 9.

With similar computations for the other players we find π(N, v, H) =
(9, 2, 14, 5, 7, 2). ◇
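The position value of example 4.8 can be computed along the same lines: evaluate the conference game (4.6), take its Shapley value, and split each conference's worth equally among its members as in (4.7). The sketch below is our own and self-contained (it repeats the helpers of the previous sketch).

```python
from itertools import combinations, chain

N = (1, 2, 3, 4, 5, 6)
H1, H2, H3 = frozenset({1, 3}), frozenset({2, 5, 6}), frozenset({3, 4, 5})
H = (H1, H2, H3)
a = {(1, 3): 12, (3, 5): 9, (1, 6): 18}       # unanimity coefficients of v

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def v(S):
    return sum(aT for T, aT in a.items() if set(T) <= set(S))

def components(S, confs):
    confs = [h for h in confs if h <= set(S)]
    comps, todo = [], set(S)
    while todo:
        comp = {todo.pop()}
        while True:
            grow = set().union(*(h for h in confs if h & comp)) - comp
            if not grow:
                break
            comp |= grow
        comps.append(frozenset(comp))
        todo -= comp
    return comps

def r(A):                                     # conference game (4.6): r^v(A) = v^A(N)
    return sum(v(C) for C in components(N, A))

def shapley(players, game):
    phi = {p: 0.0 for p in players}
    for T in subsets(players):
        if T:
            d = sum((-1) ** (len(T) - len(R)) * game(R) for R in subsets(T))
            for p in T:
                phi[p] += d / len(T)
    return phi

Phi = shapley(H, r)                                            # conference worths 18, 6, 15
pi = {i: sum(Phi[h] / len(h) for h in H if i in h) for i in N} # equal split within each conference
print(pi)                                                      # position value (9, 2, 14, 5, 7, 2)
```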

The axiomatic characterizations of the Myerson value and the posi-


tion value in chapter 2 are easily extended to the setting of hypergraph
communication situations. Van den Nouweland et al. (1992) provide
characterizations of the Myerson value and the position value for hy-
pergraph communication situations similar to theorem 2.8 on the class
of cycle-free hypergraph communication situations. All properties in
these characterizations are straightforward extensions of the properties
used in theorem 2.8 to hypergraph communication situations. Further,
van den Nouweland (1993) remarks that by extending the superfluous
player property and the strong superfluous link property to allocation
rules for hypergraph communication situations, characterizations of the
Myerson value on the class of all hypergraph communication situations with player
set N similar to theorem 2.6 can be provided.
We conclude this section with the remark that Bilbao (2000) gener-
alizes many of the results that we have described for communication
situations and hypergraph communication situations to communication
situations in which the possibilities between the players are modeled by
means of so-called union stable systems. A union stable system describes
the possible coalitions of players that can be formed directly rather than
deriving these from a network or a hypergraph. The most noticeable
difference between a union stable system and a set of feasible coalitions
derived from a hypergraph is that one-player coalitions need not be fea-
sible in a union stable system.

4.3 PROBABILISTIC COMMUNICATION SITUATIONS
In the basic model of chapter 2 as well as in the previous sections it
is assumed that cooperation relations are completely reliable. In some

situations it might be more appropriate to model cooperation in a prob-


abilistic way. In this section, which is based on Calvo et al. (1999), we
will follow this line of thought.
Consider the problem of determining the power of political parties in
a parliament. Previous studies, such as van Damme et al. (1994), have
used the basic model of chapter 2 to represent the compatibility between
such parties and derive power indices that take these compatibilities
into account. This method, however, requires us to categorize any two
parties as either completely incompatible or completely compatible. In
voting situations it seems more appropriate to associate with each pair of
political parties a number between 0 and 1, which represents the degree
of compatibility between two parties or the probability that they will
agree on a certain issue. 10 We describe a generalized version of the basic
model in which communication possibilities have a probabilistic nature.
Consider a set of agents N. A function p that assigns to every possible
link ij E LN a probability p(ij) is called a probabilistic network. Its
interpretation is that players i and j can have direct and meaningful
communication with each other with probability p(ij). The probabilities
are assumed to be independent. The function p will also be referred to
as a system of probabilities. Like in previous sections, we assume that
the profits that can be obtained by different coalitions of players are
described by a coalitional game (N,v). The triple (N,v,p) is called a
probabilistic communication situation.

EXAMPLE 4.9 Consider a player s who wants to sell his house and a
player b who would be interested in buying it. Most likely, players s and
b will not know each other. In that case they would each go to a real
estate agent r who could act as an intermediary between them. However,
they may try to find each other before going to a realtor to avoid having
to pay him a commission. Suppose that there is a probability q E [0, 1]
that players s and b meet each other and can then make the transaction
without the intermediation of the realtor. We can model this situation
by the system of probabilities p(br) = p(rs) = 1 and p(bs) = q.
Denote N = {b, r, s}. We normalize such that the difference between
the value of the house to the buyer and to the seller equals 1. Then the
possible profits to coalitions of these three players are described by the
coalitional game (N, v) with v(S) = 1 for all S ⊆ N that contain both
players s and b, and v(S) = 0 for coalitions of players that exclude either

lOWe point out that other authors, such as Owen (1971), have incorporated political parties'
ideological positions in models of voting by modeling them as points in Euclidian space.

the seller or the buyer or both. The triple (N, v,p) is a probabilistic
communication situation. 0

Let (N, v,p) be a probabilistic communication situation. We will


define an associated coalitional game (N, v P ), called the probabilistic
network-restricted game, that incorporates both the economic possibil-
ities of the players described by the coalitional game (N, v) and the
probabilities of bilateral communication described by the system of prob-
abilities p. Since cooperation is not completely reliable, payoffs are not
known with certainty. Therefore, we will focus on expected profits in the
new game. Expected payoffs are illustrated in the following example.

EXAMPLE 4.10 Consider the probabilistic communication situation de-


scribed in example 4.9. With probability q, players band s can make
the transaction without the intermediation of the realtor and obtain a
profit of 1. With probability 1 - q, however, they will not be able to
generate a positive profit without the intermediation of the realtor. The
expected profit of coalition {b, s} is thus vP(b, s) = q. 0

We will generalize the idea presented in example 4.10. Let (N, v, p)
be a probabilistic communication situation and let S ⊆ N be a fixed
coalition of players. The probability that set of links L ⊆ L^S is realized
among the agents in S equals

    p^S(L) = Π_{l∈L} p(l) · Π_{l∈L^S\L} (1 − p(l)).

Now, suppose a set L ⊆ L^S of communication links is realized. Then
the worth obtainable by coalition S is v^L(S) = Σ_{C∈S/L} v(C), i.e., the
value of coalition S in the network-restricted game associated with com-
munication situation (N, v, L). Hence, the expected profit of coalition S
equals

    v^p(S) = Σ_{L⊆L^S} p^S(L) v^L(S).

The coalitional game (N, v^p) is called the probabilistic network-restricted
game. A deterministic communication network L on N can be identified
with a function p : {ij | i, j ∈ N, i ≠ j} → [0, 1], defined by p(ij) = 1 if
ij ∈ L and p(ij) = 0 if ij ∉ L. It is easily seen that for this p it holds
that v^p = v^L. This shows that the procedure described above extends
the procedure followed in the basic model.

We illustrate the computation of the probabilistic network-restricted


game in the following example.

EXAMPLE 4.11 For the probabilistic communication situation (N, v, p)
in example 4.9, we compute v^p(b, r, s) as follows. With probability q all
links in L^N will be realized and with probability 1 − q only the links
in L = {br, rs} will be formed. In both cases, the players in the grand
coalition N can obtain a profit of 1. Hence, v^p(N) = 1·q + 1·(1 − q) = 1.
The expected profit of coalition {b, s} was computed in example 4.10
to be v^p(b, s) = q. Since all other coalitions are internally connected
with probability 1 and do not include both the buyer and the seller,
v^p(S) = v(S) = 0 for all S ⊆ N with {b, s} ⊄ S. ◇
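The expected-worth computation of v^p can be carried out by enumerating the realized link sets, exactly as in the definition above. The following Python sketch is our own illustration of examples 4.9–4.11; the concrete value q = 0.3 is an arbitrary choice for the illustration.

```python
from itertools import combinations, chain

N = ('b', 'r', 's')
p = {frozenset({'b', 'r'}): 1.0, frozenset({'r', 's'}): 1.0,
     frozenset({'b', 's'}): 0.3}                            # q = 0.3, chosen for illustration
v = lambda S: 1 if {'b', 's'} <= set(S) else 0              # buyer and seller together create a surplus of 1

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def components(S, links):
    comps, todo = [], set(S)
    while todo:
        comp = {todo.pop()}
        while True:
            grow = {j for l in links if l & comp and l <= set(S) for j in l} - comp
            if not grow:
                break
            comp |= grow
        comps.append(comp)
        todo -= comp
    return comps

def vp(S):
    """Expected worth of S: enumerate which links inside S are realized."""
    LS = [l for l in p if l <= set(S)]
    total = 0.0
    for L in subsets(LS):
        prob = 1.0
        for l in LS:
            prob *= p[l] if l in L else 1 - p[l]
        total += prob * sum(v(C) for C in components(S, L))
    return total

print(vp(('b', 's')))        # q = 0.3
print(vp(N))                 # 1.0
```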

An allocation rule on a class PCS of probabilistic communication sit-
uations is a function γ that assigns a payoff vector γ(N, v, p) ∈ R^N to
every probabilistic communication situation (N, v, p) in that class.
The Myerson value for probabilistic communication situations μ, or
simply the Myerson value, assigns to each probabilistic communica-
tion situation (N, v, p) the Shapley value of its associated probabilistic
network-restricted game, i.e.,

    μ(N, v, p) = Φ(N, v^p).     (4.8)

EXAMPLE 4.12 In example 4.11 we computed the probabilistic network-
restricted game (N, v^p) associated with the probabilistic communication
situation (N, v, p) of example 4.9. We showed that

    v^p = q·u_{b,s} + (1 − q)·u_{b,r,s}.

Using this, we easily find

    μ_b(N, v, p) = (1/2)q + (1/3)(1 − q) = (2 + q)/6;
    μ_r(N, v, p) = 0 + (1/3)(1 − q) = (2 − 2q)/6;
    μ_s(N, v, p) = (1/2)q + (1/3)(1 − q) = (2 + q)/6.

Contrasting this with μ_i(N, v, {br, rs}) = 1/3 for all i ∈ {b, r, s} and
μ_b(N, v, L^N) = μ_s(N, v, L^N) = 1/2, μ_r(N, v, L^N) = 0, we see that the
Myerson value of the probabilistic communication situation (N, v, p) is
a weighted average of the Myerson values of the deterministic communi-
cation situation with links br and rs and that with all links in L^N, with
weights 1 − q and q, respectively. ◇

Calvo et al. (1999) show that extensions of component efficiency and


fairness can be used to characterize the Myerson value for probabilis-
tic communication situations. Consider a coalition that has probability
0 of communicating with any player outside the coalition and that is
minimal with respect to this property. Component efficiency states that
the total payoff to the players in such a coalition should be equal to the
expected profit of the coalition. We need some notation in order to
introduce component efficiency formally. Let (N, v, p) be a probabilistic
communication situation. We define the (deterministic) network (N, L^p)
associated with this situation to be the network that contains the links
that have positive probability according to the system of probabilities p.
In formula, l = ij ∈ L^p if and only if p(ij) > 0. Network (N, L^p) induces
a partition N/L^p of N. Now, we can formally introduce the property of
component efficiency.

Component Efficiency  An allocation rule γ on a class PCS of prob-
abilistic communication situations is component efficient if for every
(N, v, p) ∈ PCS and every component C ∈ N/L^p it holds that

    Σ_{i∈C} γ_i(N, v, p) = v^p(C).     (4.9)

Fairness states that when a possibility for direct communication be-


tween two players is eliminated, then the payoffs of both these players
change by the same amount. To describe this property formally, we need
some additional notation. Let (N, v, p) be a probabilistic communica-
tion situation and let i, j ∈ N. Define the system of probabilities p_{−ij}
by p_{−ij}(kl) = p(kl) if kl ≠ ij, and p_{−ij}(ij) = 0.

Fairness  An allocation rule γ on a class PCS of probabilistic commu-
nication situations satisfies fairness if for every probabilistic commu-
nication situation (N, v, p) ∈ PCS and all i, j ∈ N it holds that

    γ_i(N, v, p) − γ_i(N, v, p_{−ij}) = γ_j(N, v, p) − γ_j(N, v, p_{−ij}).     (4.10)

An alternative definition of fairness is obtained by replacing p_{−ij}(ij) =
0 by p_{−ij}(ij) ∈ [0, 1]. Fairness would then state that the payoffs to
two players change by the same amount when the probability for direct

communication between them changes. It is a straightforward exercise


to show that the two definitions of fairness are equivalent.
The following theorem captures the main result of the current section.
We omit its proof since it follows along the same lines as the proof of
theorem 2.4. The set PCS_v^N denotes the set of all probabilistic commu-
nication situations with (N, v) as the underlying game.

THEOREM 4.3 The Myerson value for probabilistic communication sit-
uations is the unique allocation rule on PCS_v^N that satisfies component
efficiency and fairness.

In example 4.12 we saw that the Myerson value of the probabilistic


communication situation studied in that example is a weighted average
of the Myerson values of related communication situations with deter-
ministic graphs. The following theorem shows that this is true for any
probabilistic communication situation. Every probabilistic network on
a set of players N induces a probability distribution on the set of all
deterministic networks on N. Let p be a probabilistic network on a set
of players N. Then the probability that network L is realized is given
by pN (L). We use these probabilities to show that the Myerson value
of a probabilistic communication situation is a convex combination of
Myerson values of related (deterministic) communication situations.

THEOREM 4.4 Let (N, v, p) be a probabilistic communication situation.
Then

    μ(N, v, p) = Σ_{L⊆L^N} p^N(L) μ(N, v, L).

PROOF: Firstly, we will show that the probabilistic network-restricted
game is a convex combination of network-restricted games associated
with (deterministic) communication situations (N, v, L), L ⊆ L^N.
Consider a coalition S ⊆ N. The probability that a set of links
L ⊆ L^N\L^S is realized, irrespective of which links in L^S are realized,
will be denoted by^{11}

    p^{−S}(L) = Π_{l∈L} p(l) · Π_{l∈(L^N\L^S)\L} (1 − p(l)).

11 For S with ∅ ⊂ S ⊂ N it holds that p^{−S} is different from p^{N\S} since L^N\L^S ≠ L^{N\S}.

For the value of coalition S in the probabilistic network-restricted game
we find

    v^p(S) = Σ_{L⊆L^S} p^S(L) v^L(S)
           = Σ_{L⊆L^S} ( p^S(L) v^L(S) Σ_{L'⊆L^N\L^S} p^{−S}(L') )
           = Σ_{L⊆L^S} Σ_{L'⊆L^N\L^S} p^S(L) p^{−S}(L') v^L(S)
           = Σ_{L⊆L^S} Σ_{L'⊆L^N\L^S} p^S(L) p^{−S}(L') v^{L∪L'}(S)
           = Σ_{L⊆L^N} p^N(L) v^L(S).     (4.11)

The second equality follows because Σ_{L'⊆L^N\L^S} p^{−S}(L') = 1, which
holds since p^{−S} is a probability distribution on L^N\L^S. The fourth
equality follows since (L ∪ L')(S) = L(S). The last equality holds since
p^S(L) p^{−S}(L') = p^N(L ∪ L'), which follows directly from the definitions
of p^S, p^{−S}, and p^N.
From (4.11) it follows that v^p = Σ_{L⊆L^N} p^N(L) v^L. Using this convex
combination we find

    μ(N, v, p) = Φ(N, v^p)
               = Φ(N, Σ_{L⊆L^N} p^N(L) v^L)
               = Σ_{L⊆L^N} p^N(L) Φ(N, v^L)
               = Σ_{L⊆L^N} p^N(L) μ(N, v, L).

The third equality follows using linearity of the Shapley value, which
states that for any two coalitional games (N, v) and (N, w) and every pair
of real numbers a and b it holds that Φ(N, av + bw) = aΦ(N, v) + bΦ(N, w).
Though linearity is a stronger requirement than additivity, which was
mentioned in section 1.1, it is also satisfied by the Shapley value. □
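Theorem 4.4 can be verified numerically for the buyer–seller situation of example 4.9. The sketch below is our own (exact arithmetic via fractions is an implementation choice); it computes the Myerson value of (N, v, p) as the Shapley value of v^p and compares it with the p^N-weighted sum of the Myerson values of all deterministic networks.

```python
from itertools import combinations, chain
from fractions import Fraction as F

N = ('b', 'r', 's')
q = F(3, 10)                                             # any q in [0, 1] would do
links = [frozenset({'b', 'r'}), frozenset({'r', 's'}), frozenset({'b', 's'})]
p = {links[0]: F(1), links[1]: F(1), links[2]: q}
v = lambda S: 1 if {'b', 's'} <= set(S) else 0

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def components(S, L):
    comps, todo = [], set(S)
    while todo:
        comp = {todo.pop()}
        while True:
            grow = {j for l in L if l & comp and l <= set(S) for j in l} - comp
            if not grow:
                break
            comp |= grow
        comps.append(comp)
        todo -= comp
    return comps

def vL(S, L):                                            # network-restricted game v^L
    return sum(v(C) for C in components(S, L))

def vp(S):                                               # probabilistic network-restricted game v^p
    LS = [l for l in links if l <= set(S)]
    total = F(0)
    for L in subsets(LS):
        pr = F(1)
        for l in LS:
            pr *= p[l] if l in L else 1 - p[l]
        total += pr * vL(S, L)
    return total

def pN(L):                                               # p^N(L): probability that exactly L forms
    pr = F(1)
    for l in links:
        pr *= p[l] if l in L else 1 - p[l]
    return pr

def shapley(game):                                       # Shapley value via unanimity dividends
    phi = {i: F(0) for i in N}
    for T in subsets(N):
        if T:
            d = sum((-1) ** (len(T) - len(R)) * game(R) for R in subsets(T))
            for i in T:
                phi[i] += F(d) / len(T)
    return phi

lhs = shapley(vp)                                        # Myerson value of (N, v, p)
rhs = {i: F(0) for i in N}
for L in subsets(links):
    mu_L = shapley(lambda S, L=L: vL(S, L))              # Myerson value of the deterministic (N, v, L)
    for i in N:
        rhs[i] += pN(L) * mu_L[i]
print(lhs == rhs, lhs)    # True; both equal ((2+q)/6, (2-2q)/6, (2+q)/6)
```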

4.4 NTU COMMUNICATION SITUATIONS


In the previous sections we have discussed three variations on the
basic model with different representations of restricted communication.
In the current section we consider an alternative representation of the
possible gains from cooperation.
In chapter 1 we already mentioned it is not always appropriate to
model utilities as being transferable between the players. In such a
situation the possible gains from cooperation by a coalition cannot be
represented by a single number, but should be represented by a collection
of attainable vectors that specify the allocation to each participating
player. In section 1.1 we introduced NTU games. An NTU game is a pair
(N, V) in which N is a set of players and V is a characteristic function
that assigns to every coalition S ~ N a set V(S) ~;;; R S . Following
Aumann (1967), we assume that each set V(S) is both nonempty and
comprehensive, i.e., if x E V(S) and y E R S are such that y :5:: x, then
y E V(S).

EXAMPLE 4.13 Consider the 2-person NTU game (N, V) with N =
{1, 2}, V(i) = (−∞, 12] for all i ∈ {1, 2}, and V(N) = {x ∈ R^N |
2x_1 + x_2 ≤ 96}. If players 1 and 2 cooperate they can achieve several
allocations, such as (20, 20) and (31, 16). However, we do not expect the
players to settle for allocation (20, 20), because they can both get more
by settling on for example the obtainable vector (30, 30). We expect the
players to agree on an allocation that is on the Pareto boundary of V(N),
consisting of the allocations x ∈ V(N) for which there is no payoff vector
y ∈ V(N) that all players weakly prefer to x and that at least one player
strictly prefers to x.
It is easily checked that the Pareto boundary of V(N) is given by
{x ∈ R^N | 2x_1 + x_2 = 96}. Some of the payoff vectors on this boundary
are (−12, 120), (0, 96), (36, 24), and (48, 0). ◇

Shapley (1969) introduces an allocation rule for NTU games based on
weight vectors. Let (N, V) be an NTU game. A weight vector assigns
a relative weight to each player and is usually normalized so that the
weights of all players add up to 1. Formally, a weight vector is a vector
λ = (λ_i)_{i∈N} ∈ R^N with λ_i > 0 for all i ∈ N and Σ_{i∈N} λ_i = 1.^{12} A

12 Shapley (1969) only requires λ_i ≥ 0 for all i ∈ N. Kern (1985) argues that a weight equal
to zero lacks intuition. Here, we restrict ourselves to positive weights, an approach that is
not only adopted by Kern (1985) but also by, for example, Aumann (1985).

weight vector λ is V-feasible if for all S ⊆ N, S ≠ ∅,

    v_λ(S) = sup{Σ_{i∈S} λ_i x_i | x ∈ V(S)} < ∞.

Hence, a V-feasible weight vector λ generates a well-defined coalitional
game (N, v_λ), called the λ-transfer game. The Shapley set Φ(N, V) is
defined in terms of the Shapley values of λ-transfer games for V-feasible
weight vectors. Formally, the Shapley set of NTU game (N, V) is defined
by

    Φ(N, V) = {x ∈ V(N) | there is a V-feasible weight vector λ ∈ R^N
               such that (λ_i x_i)_{i∈N} = Φ(N, v_λ)}.

Every vector x ∈ Φ(N, V) is called a Shapley value of the NTU game
(N, V).^{13}

EXAMPLE 4.14 Consider the NTU game of example 4.13 and let λ be
a weight vector. If λ = (1/2, 1/2), then v_λ(N) is not well-defined because
(a, 96 − 2a) is an element of V(N) for each a ∈ R and the set {(1/2)a +
(1/2)(96 − 2a) | a ∈ R} has no upper bound.
It is easily seen that any weight vector that is V-feasible has to assign
twice as much weight to player 1 as it assigns to player 2. Since we
normalize weight vectors to add up to 1, this leaves weight vector λ =
(2/3, 1/3) as the unique candidate for a V-feasible weight vector. Using that
the Pareto boundary of V(N) is {x ∈ R^N | 2x_1 + x_2 = 96}, we compute

    v_λ(N) = sup{Σ_{i∈N} λ_i x_i | x ∈ V(N)}
           = sup{(2/3)a + (1/3)(96 − 2a) | a ∈ R} = 32.

Similar computations show that the weight vector λ = (2/3, 1/3) results in
the λ-transfer game (N, v_λ) with

    v_λ(S) = { 8    if S = {1};
               4    if S = {2};
               32   if S = N,
13We refer the reader to Shapley (1969) and Aumann (1975) for a more extensive discussion
of this solution concept.

with Φ(N, v_λ) = (18, 14). To find a candidate x for a Shapley value of
(N, V), we set (2/3)x_1 = 18 and (1/3)x_2 = 14 and obtain x_1 = 27 and
x_2 = 42. Since (27, 42) ∈ V(N), we conclude that

    Φ(N, V) = {(27, 42)}.

Note that 2·27 + 42 = 96, i.e., the unique Shapley value is on the
Pareto boundary of V(N). ◇
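The arithmetic of example 4.14 can be retraced in a few lines. In the sketch below (our own, not part of the text) the suprema defining the λ-transfer game are worked out by hand, and only the Shapley value and the rescaling are computed.

```python
from fractions import Fraction as F

lam = {1: F(2, 3), 2: F(1, 3)}             # the unique candidate for a V-feasible weight vector

# v_lambda(S) = sup{ lam_1 x_1 + lam_2 x_2 : x in V(S) }, worked out by hand:
v_lam = {(1,): lam[1] * 12,                # V(1) = (-inf, 12]  ->  8
         (2,): lam[2] * 12,                # V(2) = (-inf, 12]  ->  4
         (1, 2): F(32)}                    # lam.x = (2x_1 + x_2)/3 <= 96/3 = 32 on V(N)

# Shapley value of the two-player lambda-transfer game
surplus = v_lam[(1, 2)] - v_lam[(1,)] - v_lam[(2,)]
phi = (v_lam[(1,)] + surplus / 2, v_lam[(2,)] + surplus / 2)
print(phi)                                 # (18, 14)

# rescaling lam_i * x_i = phi_i gives the candidate Shapley value of (N, V)
x = (phi[0] / lam[1], phi[1] / lam[2])
print(x, 2 * x[0] + x[1])                  # (27, 42); indeed on the Pareto boundary 2x_1 + x_2 = 96
```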

The Shapley set for NTU games is characterized in Aumann (1985).


Of the six axioms used in his characterization, three are derived from
properties of the Shapley value for TU games, two from properties that
are used in the characterization of the Nash bargaining solution (see
Nash (1950a)), and the sixth one is purely technical. We refer to Aumann
(1985) for details.

In an NTU communication situation (N, V, L), a network (N, L) mod-


els, as before, the bilateral communication possibilities and an NTU
game (N, V) models the possible gains from cooperation. For simplic-
ity, we assume throughout the remainder of the current section that the
game (N, V) is zero-normalized, i.e., V(i) = {x E R I x ::; O} for each
i E N.
A solution concept on a class of NTU communication situations is a
function,,( that assigns a subset "((N, V,L) ~ RN to every NTU com-
munication situation (N, V, L) in that class. Van den Nouweland (1993)
uses the Shapley set to define two solution concepts for NTU communi-
cation situations that are similar in spirit to the Myerson value and the
position value. 14
The definition of the Myerson value is easily extended to NTU com-
munication situations using a network-restricted game and the definition
of the Shapley set. Let (N, V, L) be an NTU communication situation.
The network-restricted NTU game (N, V^L) associated with (N, V, L) is
described by^{15}

    V^L(S) = Π_{C∈S/L} V(C)   for each S ⊆ N.

If a coalition S is internally connected, then the players in this coali-


tion can all coordinate their actions and achieve the same allocations

14We remark that our definitions of these solution concepts are slightly different from the
ones originally given by van den Nouweland (1993), who does not explicitly exclude weights
equal to zero.
15 The symbol Π denotes the Cartesian product.

as in a situation with unrestricted communication. If S is not inter-


nally connected, then only the players within a component C ∈ S/L can
coordinate their actions to achieve allocations in V (C). Because no com-
munication is possible between various components, the best the players
in coalition S can achieve are combinations of the allocations achiev-
able by its components. Hence, a payoff vector x can be achieved by S
only if the restriction of x to the players in C E S / L belongs to V (C),
i.e., the possible gains from cooperation by coalition S are described by
Π_{C∈S/L} V(C). Note that V^L(S) is a subset of R^S, which implies that
(N, V^L) is indeed an NTU game.
The Myerson set μ for NTU communication situations assigns to every
NTU communication situation (N, V, L) the Shapley set of the associated
network-restricted NTU game (N, V^L), i.e.,

    μ(N, V, L) = Φ(N, V^L).     (4.12)

We will demonstrate the computation of the Myerson set in an example


shortly.
We first turn our attention to a solution concept that stresses the
importance of the communication channels. In section 2.3 we described
the position value. It is defined using the link game, which describes the
possibilities of the players in the presence of various sets of communica-
tion links. A similar approach for NTU games runs into problems, since
the value of a set of links, say A, would then be defined as a subset of RN
rather than as a subset of R A, and this does not generate an NTU game.
This problem is circumvented by introducing link-admissible weight vec-
tors and A-link games.
Let (N, V, L) be an NTU communication situation. A weight vector
λ ∈ R^N with λ_i > 0 for all i ∈ N and Σ_{i∈N} λ_i = 1 is link admissible if for
all A ⊆ L

    r^{V,λ}(A) = sup{Σ_{i∈N} λ_i x_i | x ∈ Π_{C∈N/A} V(C)} < ∞.     (4.13)

So, a link-admissible weight vector λ generates a λ-link game (L, r^{V,λ}).
The position set for NTU communication situations is defined in terms
of the Shapley values of λ-link games for link-admissible weight vectors.
Formally, the position set π(N, V, L) of NTU communication situation
(N, V, L) is defined by

    π(N, V, L) = {x ∈ Π_{C∈N/L} V(C) | there is a link-admissible weight
                  vector λ ∈ R^N such that λ_i x_i = Σ_{l∈L_i} (1/2) Φ_l(L, r^{V,λ}) for all i ∈ N}.

The following example illustrates the Myerson set and the position
set for NTU communication situations.

EXAMPLE 4.15 Let α ∈ [0, 1/2]. Consider the NTU communication situ-
ation (N, V_α, L), where N = {1, 2, 3}, L = {13, 23}, and (N, V_α) is the
NTU game introduced by Roth (1980), which is defined by

    V_α(i) = {x_i ∈ R^{{i}} | x_i ≤ 0}   for each i ∈ N;
    V_α(1, 2) = {(x_1, x_2) ∈ R^{{1,2}} | x_1 ≤ 1/2, x_2 ≤ 1/2};
    V_α(1, 3) = {(x_1, x_3) ∈ R^{{1,3}} | x_1 ≤ α, x_3 ≤ 1 − α};
    V_α(2, 3) = {(x_2, x_3) ∈ R^{{2,3}} | x_2 ≤ α, x_3 ≤ 1 − α};
    V_α(1, 2, 3) = {(x_1, x_2, x_3) ∈ R^{{1,2,3}} | (x_1, x_2, x_3) ≤ y for some y in the
                    convex hull of (1/2, 1/2, 0), (α, 0, 1 − α), and (0, α, 1 − α)}.

The network-restricted NTU game (N, (V_α)^L) associated with this
NTU communication situation is described by

    (V_α)^L(S) = V_α(S)                                       if S ≠ {1, 2};
    (V_α)^L(S) = {(x_1, x_2) ∈ R^{{1,2}} | x_1 ≤ 0, x_2 ≤ 0}  if S = {1, 2}.

Consider the weight vector λ = (1/3, 1/3, 1/3). The λ-transfer game
(N, ((v_α)^L)_λ) associated with (N, (V_α)^L) is described by ((v_α)^L)_λ =
(1/3)u_{1,3} + (1/3)u_{2,3} − (1/3)u_N. The Shapley value of this game is given by

    Φ(N, ((v_α)^L)_λ) = (1/18, 1/18, 2/9).

Hence, for all α ∈ [0, 1/2] the vector x = (1/6, 1/6, 2/3) is a candidate for being
a Myerson value of (N, V_α, L). However, x ∈ V^L(N) = V(N) if and only
if α ≤ 1/3. Considering the weight vector λ = (1/3, 1/3, 1/3), we conclude

    (1/6, 1/6, 2/3) ∈ μ(N, V_α, L)   for all α ∈ [0, 1/3].

Following a similar analysis to check whether the weight vector λ =
(1/3, 1/3, 1/3) is link admissible, we find that

    (1/4, 1/4, 1/2) ∈ π(N, V_α, L)   for all α ∈ [0, 1/2].

Using the weight vector (1/4, 1/4, 1/2), we only find

    (1/4, 1/4, 1/2) ∈ μ(N, V_α, L)   for α = 1/2.
Now, consider NTU communication situation (N, V_α, L^N), in which all
pairs of players can communicate. Choosing the weight vector (1/3, 1/3, 1/3)
leads to

    (1/3, 1/3, 1/3) ∈ μ(N, V_α, L^N) ∩ π(N, V_α, L^N)   for all α ∈ [0, 1/2].     (4.14)

We end this example by examining weight vectors of the form (β, β, 1 −
2β), β ∈ (0, 1/2) \ {1/3}, in which the symmetric players 1 and 2 have equal
weight. For such weight vectors, we find no elements of the position set
π(N, V_α, L^N) for any α ∈ [0, 1/2]. ◇

Consider a TU game (N, v) and a coalition S ⊆ N. Then the value
v(S) can be divided in many ways. In fact, considering all possible
divisions of v(S) naturally results in the set

    V(S) = {x ∈ R^S | Σ_{i∈S} x_i ≤ v(S)}.

This procedure results in an NTU game (N, V) corresponding to (N, v).
Theorem 4.5 states that the Myerson set and the position set for
NTU communication situations are indeed generalizations of the Myer-
son value and the position value.^{16}

THEOREM 4.5 Let (N, v, L) ∈ CS^N be a communication situation with
a zero-normalized game (N, v) and let (N, V) be the NTU game corre-
sponding to (N, v). Then the following two assertions hold:

(i) μ(N, V, L) = {μ(N, v, L)}.

(ii) π(N, V, L) = {π(N, v, L)}.

16 Positivity of the weights of the players is explicitly used in the proof of this theorem to
show that the Myerson set and the position set both contain exactly one element.

PROOF: Let λ ∈ R^N be a weight vector. Suppose C ∈ N/L and i, j ∈ C
are such that λ_i ≠ λ_j. Then, obviously, sup{Σ_{k∈C} λ_k x_k | x ∈ V(C)} =
∞. Hence, λ is neither V^L-feasible nor link admissible.
Any weight vector λ ∈ R^N that satisfies the condition that for each
component C ∈ N/L it holds that λ_i = λ_j for all i, j ∈ C, is both V^L-
feasible and link admissible. Let λ be a weight vector satisfying this
condition and for each C ∈ N/L define λ(C) ∈ (0, 1] such that λ_i = λ(C)
for all i ∈ C.
We first prove part (i). For the λ-transfer game (v^L)_λ corresponding
to V^L and λ it holds for all S ⊆ N that

    (v^L)_λ(S) = sup{Σ_{i∈S} λ_i x_i | x ∈ V^L(S)}
               = sup{Σ_{C∈N/L} λ(C)(Σ_{i∈S∩C} x_i) | x ∈ R^S and
                     Σ_{i∈T} x_i ≤ v(T) for all T ∈ S/L}
               = Σ_{C∈N/L} λ(C)(Σ_{T∈(S∩C)/L} v(T))
               = Σ_{C∈N/L} λ(C) v^L(S ∩ C),     (4.15)

where the third equality follows from the fact that every component of S
is contained in a component of N. Using expression (1.3) of the Shapley
value in terms of unanimity coefficients and the fact that λ_T(v^L) = 0
for all coalitions T that are not internally connected (see page 26), we
derive from (4.15) that

    Φ_i(N, (v^L)_λ) = λ(C) Φ_i(N, v^L)   for each C ∈ N/L and all i ∈ C.

Consequently, (λ_i μ_i(N, v, L))_{i∈N} = Φ(N, (v^L)_λ). Component efficiency


of the Myerson value implies that Σ_{i∈N} μ_i(N, v, L) = v^L(N) and, hence,
μ(N, v, L) ∈ V^L(N). We conclude that μ(N, v, L) ∈ μ(N, V, L). Using
positivity of λ_i, we conclude that λ_i x_i = Φ_i(N, (v^L)_λ) has a unique
solution for each i ∈ N, namely μ_i(N, v, L). Note that our conclusions
hold independent of the particular weight vector, as long as it is V^L-
feasible. We conclude that μ(N, V, L) = {μ(N, v, L)}.

We now prove part (ii). For the λ-link game r^{V,λ} defined by (4.13)
and for all A ⊆ L we have

    r^{V,λ}(A) = sup{Σ_{i∈N} λ_i x_i | x ∈ Π_{T∈N/A} V(T)}
              = sup{Σ_{C∈N/L} λ(C)(Σ_{i∈C} x_i) | x ∈ R^N and Σ_{i∈T} x_i ≤ v(T)
                    for all T ∈ N/A}
              = Σ_{C∈N/L} λ(C)(Σ_{T∈C/A} v(T))
              = Σ_{C∈N/L} λ(C) r^v(A ∩ L(C)),

where the last equality follows by zero-normalization of (N, v).
Then for each l ∈ L and C ∈ N/L such that l ∈ L(C) it holds that

    Φ_l(L, r^{V,λ}) = λ(C) Φ_l(L, r^v).

This shows that

    λ_i π_i(N, v, L) = λ_i Σ_{l∈L_i} (1/2) Φ_l(L, r^v) = Σ_{l∈L_i} (1/2) Φ_l(L, r^{V,λ})

for each i ∈ N, which makes π(N, v, L) a candidate for a position value of
(N, V, L). Positivity of the weights implies that π(N, v, L) is the unique
candidate associated with λ. We still have to check for attainability.
Component efficiency of the position value implies Σ_{i∈C} π_i(N, v, L) =
v(C) for each C ∈ N/L and, hence, π(N, v, L) ∈ Π_{C∈N/L} V(C). We
conclude that π(N, v, L) ∈ π(N, V, L). Since our conclusions are inde-
pendent of the particular link-admissible weight vector chosen, we obtain
π(N, V, L) = {π(N, v, L)}. □

Theorem 4.5 motivates the question whether the Myerson set or
the position set can be characterized axiomatically in a way similar in
spirit to the characterizations of the corresponding values for (TU) com-
munication situations that we discussed in chapter 2. This issue is stud-
ied in Casas-Mendez and Prada-Sanchez (2000). They provide axiomatic
characterizations of the Myerson set and the position set for NTU com-
munication situations. The properties that they use are extensions of the
properties of (TU) communication situations that we used in theorems
2.4 and 2.8 and extensions of some of the properties that Aumann (1985)

uses in his characterization of the Shapley set. We do not describe these


characterizations in detail here, but refer the reader to Casas-Mendez
and Prada-Sanchez (2000).

4.5 REWARD COMMUNICATION SITUATIONS
Up to now, we have assumed that the economic possibilities of the
agents can be described by a cooperative game. In this section, we will
drop this assumption. We will consider a model in which the value that
can be obtained by a group of players does not depend on its connected
components only, but also on the internal structure of these components.
If the possible gains from cooperation within a player set N depend
on the internal structure of the components, we cannot represent this by
means of a characteristic function on N. We define a reward function
r as a function that assigns to each set of links L ⊆ L^N a value r(L)
representing the profits obtainable by the grand coalition in network
(N, L). Note that a reward function is similar to the link game that we
introduced in section 2.1. The difference is that the link game associated
with a communication situation (N, v, L) E CS N is only defined for sets
of links contained in L, whereas a reward function gives a value for each
possible set of links contained in L^N. In fact, with any coalitional game
(N, v) we can naturally associate a reward function r^v defined by

    r^v(L) = Σ_{C∈N/L} v(C)   for all L ⊆ L^N,

in which the value of a set of links is defined as the worth obtainable by
the grand coalition N if the links in L can be used to coordinate players'
actions. With this definition, it holds for any set of links L ⊆ L^N that
r^v(L) = v^L(N).
An important difference between reward functions and link games is
that a reward function can assign a different value to for example a
complete network on three players and a network in which these same
three players are connected by only two links. In a link game this is
not possible, because that would only take into account that all three
players are connected and assign the value of the 3-player coalition in
the underlying coalitional game to either of the two networks.
For reasons similar to those explained in remark 2.1, we will assume
throughout this section that the reward function is zero-normalized, i.e.,
r(∅) = 0. This has the interpretation that we concentrate on the gains
from cooperation between the players.
The following example is one of a situation that can be described
using a reward function but not using a coalitional game.

EXAMPLE 4.16 We will consider a version of the so-called connections
model. Let N = {1, 2, 3} be a set of three players. Each player holds a
private piece of information that is worth 1 to each of the other players.
The players communicate their private pieces of information to each
other using bilateral communication links in L ⊆ L^N. However, the
information decays by a factor δ ∈ (0, 1) each time it is sent over a link.
Therefore, denoting the length (number of links) of the shortest path
between two connected players i and j by t(i, j), the value of player j's
private information is only δ^{t(i,j)} to player i once it reaches him. The
total value of all the other players' pieces of private information to player
i in network (N, L) is then

    u_i(L) = Σ_{j∈C_i(L)\{i}} δ^{t(i,j)}.

For example, the payoff of player 1 in network (N, {12, 23}) is δ + δ²
since player 1 is directly connected with player 2 and the shortest path
between players 1 and 3 uses two links. Similarly, we find that the payoffs
to players 2 and 3 in this network are 2δ and δ + δ², respectively.
The value of network L is defined as the sum of the payoffs of the
players in this network. For example, the value of network (N, {12, 23})
equals r(12, 23) = 4δ + 2δ². Determining the value of each network in
the way described above, we find the reward function r given by

    r(L) = 0           if L = ∅;
           2δ          if |L| = 1;
           4δ + 2δ²    if |L| = 2;
           6δ          if L = L^N.

Note that this reward function assigns different values to a network with
two links that connect three players and the network with three links
connecting the same players. ◇
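The reward function of example 4.16 can be generated programmatically from shortest-path distances. The following Python sketch is our own; the concrete decay factor δ = 0.5 is an arbitrary choice for the illustration.

```python
from itertools import combinations

N = (1, 2, 3)
delta = 0.5                                           # decay factor, chosen for illustration
all_links = [frozenset(l) for l in combinations(N, 2)]

def dist(i, j, L):
    """Length of a shortest path from i to j in (N, L); None if not connected."""
    frontier, seen, t = {i}, {i}, 0
    while frontier:
        if j in frontier:
            return t
        frontier = {k for l in L for k in l if l & frontier} - seen
        seen |= frontier
        t += 1
    return None

def u(i, L):                                          # value of the information reaching player i
    return sum(delta ** dist(i, j, L) for j in N
               if j != i and dist(i, j, L) is not None)

def r(L):                                             # reward of network (N, L)
    return sum(u(i, L) for i in N)

for k in range(len(all_links) + 1):
    for L in combinations(all_links, k):
        print(sorted(tuple(sorted(l)) for l in L), round(r(L), 4))
# |L| = 0: 0,  |L| = 1: 2*delta,  |L| = 2: 4*delta + 2*delta**2,  L = L^N: 6*delta
```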

The definition of a reward function allows for externalities between
different components in a network. For example, there are positive ex-
ternalities if r(12) = r(34) = 0 and r(12, 34) = 1. If such externalities do
not exist, then we say that the reward function is component additive.
To formally define this property, let N be a set of players and consider
a reward function r defined on subsets of L^N. The reward function r is
component additive if for all L ⊆ L^N it holds that

r(L) = Σ_{C∈N/L} r(L(C)).
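Component additivity can be verified mechanically. The sketch below is our own helper (it reuses the components routine from the earlier sketch and assumes the reward function is given as a dictionary r keyed by frozensets of links, one entry for every subset of L^N):

# Sketch of a component-additivity check: r(L) must equal the sum of r(L(C))
# over the components C of (N, L), for every set of links L.
from itertools import chain, combinations

def is_component_additive(players, r, all_links):
    link_sets = chain.from_iterable(combinations(all_links, k)
                                    for k in range(len(all_links) + 1))
    for L in map(frozenset, link_sets):
        total = 0
        for C in components(players, L):
            total += r[frozenset(l for l in L if set(l) <= C)]   # value of L(C)
        if abs(r[L] - total) > 1e-9:
            return False
    return True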

Analogous to communication situations as discussed in chapter 2, we
define a reward communication situation as a triple (N, r, L), in which a
reward function r describes the possible gains from cooperation and in
which a network (N, L) describes the restricted cooperation possibilities
between the players.
An allocation rule on a class RCS of reward communication situations
is a function γ that assigns a payoff vector γ(N, r, L) ∈ R^N to every
reward communication situation (N, r, L) in that class.
Jackson and Wolinsky (1996), who refer to reward functions as value
functions, extend the definitions of component efficiency and fairness to
this setting and prove that there exists a unique allocation rule that
satisfies these two properties on the class of reward communication sit-
uations with a component additive reward function. They also provide
an expression for this allocation rule.

Component Efficiency An allocation rule γ on a class RCS of reward
communication situations is component efficient if for every reward
communication situation (N, r, L) ∈ RCS and every component C ∈ N/L

Σ_{i∈C} γ_i(N, r, L) = r(L(C)).    (4.16)

Clearly, component efficiency is not a reasonable requirement if the


reward function is such that there are externalities between components.
In what follows, we restrict this property to a domain consisting of re-
ward communication situations with component additive reward func-
tions.

Fairness An allocation rule γ on a class RCS of reward communi-
cation situations is fair if for every reward communication situation
(N, r, L) ∈ RCS and any link ij ∈ L it holds that

γ_i(N, r, L) - γ_i(N, r, L\ij) = γ_j(N, r, L) - γ_j(N, r, L\ij).    (4.17)

For each player set N and reward function r, we denote the set of
all reward communication situations with player set N and underlying
reward function r by RCS_r^N.
We need a few more definitions before we can state the result of Jack-
son and Wolinsky (1996). A reward communication situation (N, r, L)
naturally leads to the definition of an associated coalitional game. In
such a situation, the players in a coalition S ⊆ N can use the links in
L(S) to communicate and then obtain the value r(L(S)). We define the
coalitional game (N, v^{r,L}) associated with (N, r, L) by v^{r,L}(S) = r(L(S))

for all S ⊆ N. This coalitional game is similar in spirit to the network-
restricted game of a communication situation in CS^N because it takes
into account the economic possibilities of the players as well as the com-
munication restrictions.

THEOREM 4.6 Let N be a set of players and r a component additive re-
ward function. There exists a unique allocation rule on RCS_r^N satisfying
component efficiency and fairness. This allocation rule assigns to each
reward communication situation (N, r, L) ∈ RCS_r^N the Shapley value of
the associated coalitional game (N, v^{r,L}).

We omit the proof of this theorem, as it is easily proven along the
same lines as theorem 2.4. We refer to the allocation rule that surfaces in
theorem 4.6 as the Myerson value for reward communication situations,
or simply the Myerson value, for obvious reasons. Formally, the Myerson
value μ of reward communication situation (N, r, L) is defined by

μ(N, r, L) = Φ(N, v^{r,L}).    (4.18)

EXAMPLE 4.17 Consider the reward communication situation (N, r, L)
with N = {1, 2, 3}, L = {13, 23}, and reward function r defined by

r(L) = 0    if L ∈ {∅, {12}, {23}};
       12   if L = {13};
       36   if |L| = 2;
       48   if L = L^N.

The characteristic function of the coalitional game (N, v^{r,L}) associated
with this reward communication situation is described by

v^{r,L}(S) = 0    if |S| = 1 or S ∈ {∅, {1, 2}, {2, 3}};
             12   if S = {1, 3};
             36   if S = N,

where v^{r,L}(N) = 36 follows from L(N) = {13, 23} and r(13, 23) = 36.
The characteristic function of this game equals v^{r,L} = 12u_{1,3} + 24u_N. The
Myerson value of reward communication situation (N, r, L) is the Shap-
ley value of (N, v^{r,L}) and equals μ(N, r, L) = Φ(N, v^{r,L}) = (14, 8, 14).
In a similar way we find that μ(N, r, L^N) = (18, 12, 18). ◇
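The computations in this example can be checked with a short sketch that builds the associated game v^{r,L} and takes its Shapley value, as theorem 4.6 prescribes; the helper names and the encoding of r below are ours:

# Sketch: Myerson value of (N, r, L) as the Shapley value of the associated
# coalitional game v^{r,L}(S) = r(L(S)).
from itertools import permutations

def shapley(players, w):
    """Shapley value of the coalitional game w (a function on frozensets)."""
    players = sorted(players)
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            phi[i] += w(coalition | {i}) - w(coalition)
            coalition = coalition | {i}
    return {i: phi[i] / len(orders) for i in players}

def myerson_value(players, r, links):
    def w(S):                                  # the associated game v^{r,L}
        return r(frozenset(l for l in links if set(l) <= S))
    return shapley(players, w)

# Our encoding of the reward function of example 4.17 (links written as pairs).
def r(L):
    if L == frozenset({(1, 3)}):
        return 12
    return {0: 0, 1: 0, 2: 36, 3: 48}[len(L)]

print(myerson_value({1, 2, 3}, r, {(1, 3), (2, 3)}))             # {1: 14.0, 2: 8.0, 3: 14.0}
print(myerson_value({1, 2, 3}, r, {(1, 2), (1, 3), (2, 3)}))     # {1: 18.0, 2: 12.0, 3: 18.0}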

Consider a communication situation (N, v, L) ∈ CS^N with a zero-
normalized coalitional game (N, v). The associated network-restricted
game (N, v^L) coincides with the coalitional game (N, v^{r^v,L}) associated
with reward communication situation (N, r^v, L). To see this, let S ⊆ N.
Then

v^{r^v,L}(S) = r^v(L(S)) = Σ_{C∈N/L(S)} v(C) = Σ_{C∈S/L(S)} v(C) = v^L(S),

where the third equality follows from zero-normalization of (N, v). Be-
cause μ(N, r^v, L) = Φ(N, v^{r^v,L}) and μ(N, v, L) = Φ(N, v^L), it follows
that μ(N, r^v, L) = μ(N, v, L). This shows that the Myerson value for
reward communication situations is indeed an extension of the Myerson
value for communication situations.
Slikker (2000a) shows that an alternative representation of the My-
erson value can be provided using the reward function r in a reward
communication situation (N, r, L) as the characteristic function of coali-
tional game (L^N, r). We will refer to this coalitional game as the net-
work game associated with reward function r. Because the characteristic
function of a coalitional game can be written as a linear combination of
characteristic functions of unanimity games in a unique way, there exist
unique unanimity coefficients λ_A(r), A ⊆ L^N, such that

r = Σ_{A⊆L^N} λ_A(r) u_A.

As defined in (2.18), N(A) denotes the set of players that are involved in
at least one link in network (N, A), i.e., N(A) = {i ∈ N | A_i ≠ ∅}. The
following theorem provides an alternative way to compute the Myerson
value.

THEOREM 4.7 Let (N, r, L) be a reward communication situation with
a component additive reward function r. Then

μ_i(N, r, L) = Σ_{A⊆L: A_i≠∅} λ_A(r) / |N(A)|    (4.19)

for each i ∈ N.

PROOF: Firstly, we show that λ_S(v^{r,L}) = Σ_{A⊆L: N(A)=S} λ_A(r) for all
S ⊆ N. Let T ⊆ N. Then

Σ_{S⊆T} Σ_{A⊆L: N(A)=S} λ_A(r) = Σ_{A⊆L(T)} λ_A(r)
                               = r(L(T))
                               = v^{r,L}(T).

This implies that Σ_{A⊆L: N(A)=S} λ_A(r), S ⊆ N, are unanimity coefficients
of the game (N, v^{r,L}). Since these unanimity coefficients are unique we
conclude that λ_S(v^{r,L}) = Σ_{A⊆L: N(A)=S} λ_A(r) for all S ⊆ N.
Using this relation between unanimity coefficients of (N, v^{r,L}) and
(L^N, r) we find for any i ∈ N that

μ_i(N, r, L) = Φ_i(N, v^{r,L})
             = Σ_{S⊆N: i∈S} λ_S(v^{r,L}) / |S|
             = Σ_{S⊆N: i∈S} Σ_{A⊆L: N(A)=S} λ_A(r) / |S|
             = Σ_{A⊆L: A_i≠∅} λ_A(r) / |N(A)|.    □
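Formula (4.19) can also be evaluated directly: obtain the unanimity coefficients of the network game (L^N, r) by Möbius inversion and distribute each coefficient equally over the players in N(A). The sketch below (helper names ours, reusing the reward function r of example 4.17 encoded in the previous sketch) reproduces the Myerson value computed there:

# Sketch of (4.19): Myerson value from the unanimity coefficients of (L^N, r).
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return map(frozenset, chain.from_iterable(combinations(xs, k)
                                              for k in range(len(xs) + 1)))

def unanimity_coefficients(r, all_links):
    """Moebius inversion: lambda_A(r) = sum_{B subseteq A} (-1)**(|A|-|B|) r(B)."""
    return {A: sum((-1) ** (len(A) - len(B)) * r(B) for B in subsets(A))
            for A in subsets(all_links) if A}

def myerson_via_unanimity(players, r, links, all_links):
    lam = unanimity_coefficients(r, all_links)
    mu = {i: 0.0 for i in players}
    for A, c in lam.items():
        if A <= frozenset(links):                      # only A contained in L contribute
            touched = {i for link in A for i in link}  # the player set N(A)
            for i in touched:
                mu[i] += c / len(touched)
    return mu

LN = [(1, 2), (1, 3), (2, 3)]
print(unanimity_coefficients(r, LN)[frozenset(LN)])               # -48, as in (4.20) below
print(myerson_via_unanimity({1, 2, 3}, r, {(1, 3), (2, 3)}, LN))  # {1: 14.0, 2: 8.0, 3: 14.0}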

EXAMPLE 4.18 Consider the reward communication situation (N, r, L)
with player set N = {1, 2, 3}, set of links L = {13, 23}, and reward
function r defined by

r(L) = 0    if L ∈ {∅, {12}, {23}};
       12   if L = {13};
       36   if |L| = 2;
       48   if L = L^N,

which was also the subject of example 4.17. Considering r as the char-
acteristic function of coalitional game (L^N, r) gives

r = 12u_{13} + 24u_{12,13} + 24u_{13,23} + 36u_{12,23} - 48u_{L^N}.    (4.20)

The Myerson value of reward communication situation (N, r, L) can be
computed using (4.20) and expression (4.19) for the Myerson value.
Using that A ⊆ L for A ∈ {{13}, {13, 23}} while A ⊄ L for A ∈
{{12, 13}, {12, 23}, L^N} we find

μ_1(N, r, L) = 6 + 0 + 8 + 0 + 0 = 14;
μ_2(N, r, L) = 0 + 0 + 8 + 0 + 0 = 8;
μ_3(N, r, L) = 6 + 0 + 8 + 0 + 0 = 14.

Though determined in a different way, these payoffs coincide with the
payoffs determined in example 4.17. ◇

Slikker (1999) introduced a position value for reward communication
situations. The position value π of a reward communication situation
(N, r, L) is defined by

π_i(N, r, L) = Σ_{l∈L_i} ½ Φ_l(L, r|_L)    (4.21)

for each i ∈ N. Recall that Φ_l(L, r|_L) = Σ_{A⊆L: l∈A} λ_A(r|_L) / |A|, where Φ de-
notes the Shapley value. Using this and the uniqueness of the unanimity
coefficients it follows that

π_i(N, r, L) = Σ_{A⊆L: A_i≠∅} |A_i| λ_A(r) / (2|A|)

for all i ∈ N.
Notice the similarity between this expression for the position value of a
reward communication situation and the definition of the position value
of a communication situation (N, v, L) ∈ CS. In fact, for a communi-
cation situation (N, v, L) ∈ CS with a zero-normalized coalitional game
(N, v) it holds that π_i(N, v, L) = Σ_{l∈L_i} ½ Φ_l(L, (r^v)|_L) = π_i(N, r^v, L).
This shows that the position value for reward communication situations
is indeed an extension of the position value for communication situations.
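The same unanimity coefficients give the position value: every A ⊆ L contributes |A_i| λ_A(r) / (2|A|) to player i. Continuing the previous sketches (and reusing the unanimity_coefficients helper, the reward function r, and the link set LN defined there):

# Sketch of the position value via unanimity coefficients:
# player i receives |A_i| * lambda_A(r) / (2 |A|) from every nonempty A contained in L.

def position_value(players, r, links, all_links):
    lam = unanimity_coefficients(r, all_links)
    pi = {i: 0.0 for i in players}
    for A, c in lam.items():
        if A <= frozenset(links):
            for i in players:
                links_of_i = [l for l in A if i in l]      # the set A_i
                pi[i] += len(links_of_i) * c / (2 * len(A))
    return pi

print(position_value({1, 2, 3}, r, {(1, 3), (2, 3)}, LN))   # {1: 12.0, 2: 6.0, 3: 18.0}
print(position_value({1, 2, 3}, r, set(LN), LN))            # {1: 17.0, 2: 14.0, 3: 17.0}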

EXAMPLE 4.19 Consider the 3-person reward communication situation
(N, r, L) with N = {1, 2, 3}, L = {13, 23}, and reward function r
defined by r = 12u_{13} + 24u_{12,13} + 24u_{13,23} + 36u_{12,23} - 48u_{L^N}, which
was also studied in examples 4.17 and 4.18. The position value of this
situation is easily derived using (4.21):

π_1(N, r, L) = 6 + 0 + 6 + 0 + 0 = 12;
π_2(N, r, L) = 0 + 0 + 6 + 0 + 0 = 6;
π_3(N, r, L) = 6 + 0 + 12 + 0 + 0 = 18.

Note that the position value does not coincide with the Myerson value
μ(N, r, L), which was computed in examples 4.17 and 4.18.
Adding link 12, we find in a similar manner as demonstrated above
that π(N, r, L^N) = (17, 14, 17). ◇

In section 1.1 we remarked that the Shapley value for coalitional games
coincides with the marginal contributions of the players to a so-called

potential function defined on the class of all cooperative games. Slikker


(1999) proves a similar result for the Myerson value and the position
value.
Consider a function P that assigns to every reward communication
situation (N, r, L) a real number. The marginal contribution of a player
to such a function can be defined in at least two natural ways. Firstly,
as the total marginal contribution of all his links, i.e., for every reward
communication situation (N, r, L) and all i ∈ N define

D_i^1 P(N, r, L) = P(N, r, L) - P(N, r, L\L_i).    (4.22)

Secondly, the marginal contribution of a player can be defined as the
sum of the marginal contributions of each of the links he is involved in,
i.e., for every reward communication situation (N, r, L) and all i ∈ N
define

D_i^2 P(N, r, L) = Σ_{l∈L_i} (P(N, r, L) - P(N, r, L\l)).    (4.23)

Though these two notions of marginal contributions might seem similar
at first sight, we will show that they lead to very different results.
We use the two notions of marginal contributions to define two types
of potential functions. A function P is called a player potential func-
tion if for all reward communication situations (N, r, L) it holds that
P(N, r, L) = 0 if L = ∅ and

Σ_{i∈N} D_i^1 P(N, r, L) = r(L),    (4.24)

i.e., the sum of the marginal contributions D^1 equals the value of the
network.
Secondly, a function P is called a link potential function if for all
reward communication situations (N, r, L) it holds that P(N, r, L) = 0
if L = ∅ and the marginal contributions of the players as measured by
D^2 equal the value of the network. In formula,

Σ_{i∈N} D_i^2 P(N, r, L) = r(L).    (4.25)

The following theorem shows that both the player potential function
and the link potential function are unique. The corresponding marginal
contributions coincide with the Myerson value and the position value,
respectively. We remark that theorem 4.8 (i) is an extension of a result
by Winter (1992), who considers the Myerson value for communication
situations. In the theorem we use the notation RCS_CA to denote the set

of reward communication situations with an underlying reward function


that is component additive.

THEOREM 4.8

(i) There exists a unique player potential function P on the class of
reward communication situations RCS_CA. For all reward commu-
nication situations (N, r, L) ∈ RCS_CA and all i ∈ N it holds that
D_i^1 P(N, r, L) = μ_i(N, r, L).

(ii) There exists a unique link potential function P on the class of re-
ward communication situations RCS_CA. For all reward communi-
cation situations (N, r, L) ∈ RCS_CA and all i ∈ N it holds that
D_i^2 P(N, r, L) = π_i(N, r, L).

PROOF: Because the proofs of both parts of the theorem are very similar,
we provide only the proof of part (ii) and leave it to the reader to prove
part (i).
Firstly, we show that there exists a link potential function and that
the marginal contributions of the link potential function coincide with
the position value.
For every reward communication situation (N, r, L) define

P(N, r, L) = Σ_{A⊆L: A≠∅} λ_A(r) / (2|A|).

Obviously, P(N, r, L) = 0 if L = ∅. Furthermore, for a reward commu-
nication situation (N, r, L) and each i ∈ N

D_i^2 P(N, r, L) = Σ_{l∈L_i} (P(N, r, L) - P(N, r, L\l))
               = Σ_{l∈L_i} ( Σ_{A⊆L: A≠∅} λ_A(r) / (2|A|) - Σ_{A⊆L\l: A≠∅} λ_A(r) / (2|A|) )
               = Σ_{l∈L_i} Σ_{A⊆L: l∈A} λ_A(r) / (2|A|)
               = π_i(N, r, L).    (4.26)

Because the position value is efficient, it follows that the sum of the
marginal contributions equals the value of the cooperation structure.
Hence, P is a link potential function.

It remains to show that the link potential function is unique. If Q is a
link potential function it follows by equations (4.23) and (4.25) that for
all reward communication situations (N, r, L) with L ≠ ∅ it holds that

r(L) = Σ_{i∈N} Σ_{l∈L_i} (Q(N, r, L) - Q(N, r, L\l))
     = Σ_{l∈L} 2 (Q(N, r, L) - Q(N, r, L\l)).

Hence,

Q(N, r, L) = (1 / (2|L|)) ( r(L) + 2 Σ_{l∈L} Q(N, r, L\l) ).

Using that Q(N, r, L) = 0 if L = ∅, this determines Q(N, r, L) recursively.
This proves the uniqueness of the link potential function. □
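The recursion at the end of the proof also gives a direct way to compute the link potential function. The sketch below (names ours, reusing the reward function r encoded in the earlier sketches) illustrates in addition that the marginal contributions D_i^2 recover the position value of example 4.19:

# Sketch of the link potential function via the recursion in the proof:
# Q(L) = ( r(L) + 2 * sum_{l in L} Q(L \ {l}) ) / (2 |L|),  with Q(empty set) = 0.
from functools import lru_cache

def link_potential(r, links):
    @lru_cache(maxsize=None)
    def Q(L):
        if not L:
            return 0.0
        return (r(L) + 2 * sum(Q(L - {l}) for l in L)) / (2 * len(L))
    return Q(frozenset(links))

L = frozenset({(1, 3), (2, 3)})
# D_3^2: one marginal term for each of player 3's two links; the sum is his
# position value 18 from example 4.19.
print((link_potential(r, L) - link_potential(r, L - {(1, 3)}))
      + (link_potential(r, L) - link_potential(r, L - {(2, 3)})))   # 18.0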

In section 1.1 we mentioned that the potential-function approach to


the Shapley value is most useful when trying to prove results for general
classes of games, but much less so for computing the Shapley value of
specific games. The same holds for the potential-function approach to
the Myerson value and the position value.
In chapter 11 we will return to reward functions and reward commu-
nication situations.

4.6 DIRECTED COMMUNICATION SITUATIONS
The networks that were described in previous sections provide several
ways to model cooperation restrictions. In all these networks, however,
communication is modeled symmetrically, i.e., all players in a commu-
nication relation can use it to the same extent. In the current section,
which is based on Slikker et al. (2000b), we study asymmetric bilat-
eral communication relations between the players and we model these
by means of a directed graph. Like in the previous section, we assume
that the economic possibilities of the players are captured in a reward
function. However, the reward functions that are studied in this section
assign a value to every directed communication network.
The following example describes a situation in which bilateral rela-
tions are not symmetric.

EXAMPLE 4.20 Consider two players, 1 and 2, who plan to start a new
firm. The profitability of this new firm depends on its internal organiza-
tion. The target market of the firm is rapidly changing, and sometimes

decisions have to be made quickly and there is no time for negotiation be-
tween the two players. This calls for a hierarchical structure within
the firm. Player 1 has a lot of experience in the target market of the
new firm, while player 2 until recently worked in a completely different
market. Because of his superior experience, player 1 is likely to make
better decisions than player 2. If player 1 is in charge a profit of 2 re-
sults, while the firm will make a profit of only 1 if player 2 is in charge.
Obviously, this situation cannot be described by a reward function as
discussed in section 4.5, because it requires two different values for two
cooperation relations between the players that are different in nature. <>

A directed communication network is a directed graph (N, A) where
the set of vertices N = {1, ..., n} is the set of players and A ⊆ {(i, j) |
i, j ∈ N, i ≠ j} is a set of (directed) arcs that represent the communi-
cation possibilities between the players. Player i is the initiator of the
directed communication relation (i, j) and player j is the receiver.
Let (N, A) be a directed communication network. Because the players
can communicate only via the directed communication relations in A, co-
operation between them is restricted. Two players i and j are connected
if they can communicate directly, i.e., (i, j) ∈ A or (j, i) ∈ A, or in-
directly, i.e., there exists a path (x_1, e_1, x_2, ..., x_{k-1}, e_{k-1}, x_k), k ≥ 3,
with x_1 = i, x_k = j, and for all l ∈ {1, ..., k - 1} it holds that
e_l ∈ {(x_l, x_{l+1}), (x_{l+1}, x_l)} and e_l ∈ A. Note that a directed commu-
nication relation (i, j) ∈ A is directed to indicate which player initiated
it. It does, however, represent a fully developed communication link. It
is for this reason that connectedness, as defined above, is based on arbi-
trary communication paths that do not necessarily follow the directions
of the directed communication relations.
The restrictions on communication between the players result in a
partition N/A of the player set. Two players are in the same partition
element if and only if they are connected. For any S ⊆ N, the partition
of S into components according to (S, A(S)) is denoted by S/A, where
A(S) = {(i, j) ∈ A | i, j ∈ S}. An alternative, but equivalent, definition
of the partition N/A is the following. We define an (undirected) network
(N, L_A) associated with a directed communication network (N, A) by

L_A = {{i, j} | (i, j) ∈ A}.

It holds that N/A = N/L_A, where N/L_A is the partition of the player
set into communication components as defined in section 2.1.
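The equality N/A = N/L_A means that the components of a directed network can be computed by simply forgetting the orientation of the arcs. A short sketch, reusing the components helper from the sketch in section 4.5 (our own helper, not the book's notation):

# Sketch: the components of a directed network (N, A) are the components of
# the underlying undirected network (N, L_A).

def directed_components(players, arcs):
    undirected = {frozenset(arc) for arc in arcs}          # L_A = {{i, j} | (i, j) in A}
    return components(players, [tuple(link) for link in undirected])

print(directed_components({1, 2, 3, 4}, {(1, 2), (3, 2)}))
# {frozenset({1, 2, 3}), frozenset({4})}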
Let N be a set of players. We denote by A^N the set of all pos-
sible directed communication relations between the players in N, i.e.,
A^N = {(i, j) | i, j ∈ N, i ≠ j}. A collection 𝒜 ⊆ {(N, A) | A ⊆ A^N}
of directed communication networks is called closed (under taking in-
clusions) if for all (N, A) ∈ 𝒜 and all sets of directed communication
relations A' ⊆ A it holds that (N, A') ∈ 𝒜. We restrict our attention to
collections of directed communication networks that are closed. Let 𝒜
be such a collection. A directed reward function r on 𝒜 is a function that
assigns a real number to every set of arcs A with (N, A) ∈ 𝒜. The value
r(A) represents the profit that can be obtained by the players in N if they
can use the directed communication relations in A to coordinate their
actions. Like we did for reward communication situations in section 4.5,
we assume that a directed reward function is zero-normalized, r(∅) = 0,
and concentrate on the gains from cooperation between the players. Fol-
lowing the line of thought in the previous section, we assume that there
are no externalities between coalitions of players between which there
are no communication channels. Formally, we assume that a directed
reward function r is component additive, i.e.,

r(A) = Σ_{C∈N/A} r(A(C))   for each (N, A) ∈ 𝒜.

A triple (N, r, A) consisting of a set N of players, a directed reward
function r, and a directed communication network (N, A) is a directed
communication situation. For a fixed player set N and a closed collection
of directed communication networks 𝒜 on player set N, we denote the
set of all directed communication situations with player set N, a directed
reward function on 𝒜, and a directed communication network in 𝒜 by
DCS^{N,𝒜}.

EXAMPLE 4.21 Consider the situation that was described in example
4.20 and suppose that the players have decided that player 1 will make
decisions if there is no time for negotiation. We describe a directed com-
munication situation (N, r, A) that captures the essential elements of this
situation. The player set associated with this situation is N = {1, 2}. No
cooperation between the players results in zero profits, i.e., r(∅) = 0. If
player 1 is in charge, then he has a more dominant position than player 2,
which we model as directed communication network (N, {(1, 2)}). The
value associated with this network is r((1, 2)) = 2.^17 Analogously, we
define r((2, 1)) = 1. Finally, the actual internal organization of the firm

17 As before, we omit brackets and write r((1,2)) instead of r({(1,2)}), r((1,2),(1,3)) instead
of r({(1,2),(1,3)}), and so on.

is modeled by A = {(1, 2)}. ◇

An allocation rule on a class of directed communication situations
DCS is a function γ that assigns a payoff vector γ(N, r, A) ∈ R^N to
every directed communication situation (N, r, A) in that class.
We ask ourselves the question whether the results that we obtained in
the previous section on the existence and uniqueness of an efficient and
fair allocation rule can be extended to the setting of directed communi-
cation situations. Hence, we are interested in an allocation rule that is
efficient and that treats different directed communication relations sim-
ilarly. However, we would like the allocation rule to distinguish between
the two players forming a directed communication relation to account
for their asymmetric roles in it. These requirements are captured in the
properties component efficiency and α-directed fairness. We define a
property α-directed fairness for every α ≥ 1.

Component Efficiency An allocation rule γ on a class DCS of di-
rected communication situations is component efficient if for every
directed communication situation (N, r, A) ∈ DCS and every compo-
nent C ∈ N/A

Σ_{i∈C} γ_i(N, r, A) = r(A(C)).    (4.27)

α-Directed Fairness An allocation rule γ on a class DCS of directed
communication situations is α-directed fair if for every directed com-
munication situation (N, r, A) ∈ DCS and any arc (i, j) ∈ A it holds
that

γ_i(N, r, A) - γ_i(N, r, A\(i, j)) = α (γ_j(N, r, A) - γ_j(N, r, A\(i, j))).    (4.28)

The property α-directed fairness states that, though the change in
payoff as a result of the formation of an additional directed communica-
tion relation might be different for the initiator (i) and the receiver (j),
it holds that the ratio between these two differences is the same for all
directed communication relations (if they are not equal to zero). This
constant ratio is equal to α and it represents the different positions of an
initiator and a receiver in a directed communication relation. If α = 1,
then the initiator and the receiver experience the same influence of the
formation of an additional directed communication relation. We will
mainly restrict ourselves to α > 1. This has the interpretation that the
initiator of an additional directed communication relation experiences a
larger influence on his payoff than the receiver does. We say that an

allocation rule satisfies directed fairness if this allocation rule satisfies
α-directed fairness for some α > 1.
The following lemma states that if there exists an allocation rule sat-
isfying component efficiency and a-directed fairness, then it is unique.
We omit the proof, which can be given along the lines of (the last part
of) the proof of theorem 2.4.

LEMMA 4.1 Let α > 1, let N be a set of players, and let 𝒜 be a closed
collection of directed communication networks on N. There is at most
one allocation rule on DCS^{N,𝒜} that satisfies component efficiency and
α-directed fairness.

We have now established uniqueness of an allocation rule that satis-


fies component efficiency and α-directed fairness for some α > 1. The
following example, however, shows that existence of such an allocation
rule cannot be established in general.

EXAMPLE 4.22 Consider the set of players N = {1, 2, 3} and the set
of directed communication relations A' = {(1, 2), (2, 3), (3, 1)} between
these players. Let r be the directed reward function on 𝒜 = {(N, A) |
A ⊆ {(1, 2), (2, 3), (3, 1)}} defined by r((1, 2), (2, 3), (3, 1)) = 10 and
r(A) = 0 otherwise. The directed communication network (N, A') is
represented in figure 4.5.

Figure 4.5. Directed communication network (N, A')

Suppose γ is an allocation rule on DCS^{N,𝒜} that satisfies component
efficiency and α-directed fairness. We will show that α = 1 must neces-
sarily hold. By component efficiency it follows that γ_i(N, r, ∅) = 0 for
all i ∈ N. Subsequently, consider the directed communication network
with arc (1, 2) only, A = {(1, 2)}. In directed communication situation
(N, r, {(1, 2)}), player 3 receives 0 by component efficiency. Furthermore,
α-directed fairness implies that γ_1(N, r, {(1, 2)}) = α (γ_2(N, r, {(1, 2)})).
Using component efficiency, we then obtain that γ_1(N, r, {(1, 2)}) =
γ_2(N, r, {(1, 2)}) = 0. Proceeding like this, we find that for all A ⊂ A'
and all i ∈ {1, 2, 3} it holds that γ_i(N, r, A) = 0.

Now, consider the set of arcs A'. By α-directed fairness it follows that

γ_1(N, r, A') - γ_1(N, r, A'\(1, 2)) = α (γ_2(N, r, A') - γ_2(N, r, A'\(1, 2)));
γ_2(N, r, A') - γ_2(N, r, A'\(2, 3)) = α (γ_3(N, r, A') - γ_3(N, r, A'\(2, 3)));
γ_3(N, r, A') - γ_3(N, r, A'\(3, 1)) = α (γ_1(N, r, A') - γ_1(N, r, A'\(3, 1))).

Using that γ_i(N, r, A) = 0 for all A ⊂ A' and all i ∈ N, we find

γ_1(N, r, A') = α γ_2(N, r, A')
             = α^2 γ_3(N, r, A')
             = α^3 γ_1(N, r, A').

It follows that either α = 1 or γ_1(N, r, A') = 0. The last possibility
would imply that Σ_{i∈N} γ_i(N, r, A') = 0, which contradicts component
efficiency. We conclude that α = 1.
This shows that there is no allocation rule that satisfies component
efficiency and α-directed fairness if α ≠ 1. ◇

Example 4.22 illustrates that in general there exists no allocation
rule that satisfies component efficiency and α-directed fairness for some
α > 1. The crucial element in example 4.22 is the presence of a cycle
in the network, i.e., a path P = (x_1, e_1, x_2, ..., x_l, e_l, x_{l+1}) with l ≥ 2,
x_1 = x_{l+1}, and x_1, ..., x_l all distinct players. However, the mere pres-
ence of a cycle in a directed communication network is not sufficient to
extend example 4.22. An additional requirement is that the cycle does
not contain as many arcs directed one way as it contains arcs directed
the other way. For an arbitrary path P = (x_1, e_1, x_2, ..., e_{l-1}, x_l), we
denote the number of directed communication relations directed from
the last player towards the first player minus the number of directed
communication relations directed the other way by

t(P) = |{r ∈ {1, ..., l-1} | e_r = (x_{r+1}, x_r)}| - |{r ∈ {1, ..., l-1} | e_r = (x_r, x_{r+1})}|.

Cycle Property A directed communication network satisfies the
cycle property if for every cycle P = (x_1, e_1, x_2, ..., x_l, e_l, x_{l+1}) in
the network it holds that t(P) = 0.

The cycle property states that every cycle in a network contains the
same number of directed communication relations directed one way as
it contains directed communication relations directed the other way.

Though this property surfaces as the culprit in examples such as 4.22,


it is not easily interpreted. Slikker et al. (2000b) provide an equiva-
lent property, the hierarchical-classes property, that is more easily in-
terpreted. It states that there is a hierarchy among the players in the
sense that they can be partitioned into numbered classes in such a way
that any directed communication relation is an arc between players in
classes with consecutive numbers pointing from the player in the class
with the higher number to that in the class with the lower number.
Directed communication networks that satisfy the hierarchical-classes
property are sometimes used to represent hierarchical structures in or-
ganizations. Directed communication relations then represent top-down
relations between individuals in the organization.
Hierarchical-Classes Property A directed communication network
(N, A) satisfies the hierarchical-classes property if there exists an (or-
dered) partition B = (B_1, ..., B_m) of N such that for all (i, j) ∈ A
there exists a k ∈ {1, ..., m - 1} such that i ∈ B_{k+1} and j ∈ B_k.
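One way to test the hierarchical-classes property computationally is to try to assign a level to every player so that each arc (i, j) satisfies level(i) = level(j) + 1; such a consistent assignment exists exactly when an ordered partition of the required kind exists. The sketch below is our own construction, not an algorithm taken from Slikker et al. (2000b):

# Sketch: check the hierarchical-classes property by assigning levels so that
# every arc (i, j) points from level(j) + 1 down to level(j).

def has_hierarchical_classes(players, arcs):
    steps = {i: [] for i in players}
    for i, j in arcs:
        steps[i].append((j, -1))    # moving from initiator i to receiver j drops one level
        steps[j].append((i, +1))    # moving from receiver j to initiator i climbs one level
    level = {}
    for start in players:
        if start in level:
            continue
        level[start] = 0
        stack = [start]
        while stack:
            i = stack.pop()
            for j, step in steps[i]:
                if j not in level:
                    level[j] = level[i] + step
                    stack.append(j)
                elif level[j] != level[i] + step:
                    return False    # inconsistent levels: some cycle violates the cycle property
    return True

arcs = {(1, 2), (1, 5), (2, 6), (3, 2), (3, 7), (5, 6), (7, 6)}
print(has_hierarchical_classes({1, 2, 3, 5, 6, 7}, arcs))              # True
print(has_hierarchical_classes({1, 2, 3}, {(1, 2), (2, 3), (3, 1)}))   # False (example 4.22)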
In the following example we illustrate the relation between the cycle
property and the hierarchical-classes property.

EXAMPLE 4.23 Consider the directed communication network (N, A')


that is represented in figure 4.6. This network contains several cycles,
such as (1, (1,2),2, (2,6),6, (5,6),5, (1,5),1) and (1, (1, 5), 5, (5,6),6,
(7,6),7, (3, 7), 3, (3,2),2, (1,2),1). However, all of these cycles contain
as many directed communication relations in the direction of the cycle
as in the opposite direction. For example, in the cycle (1, (1,5),5, (5,6),
6, (7,6),7, (3,7),3, (3,2),2, (1,2),1) three directed communication rela-
tions have the same direction as the cycle, namely (1,5), (5,6), and (3,2),
and the other three directed communication relations have the opposite
direction. We conclude that (N, A') satisfies the cycle property.

Figure 4.6. Directed communication network (N, A')

We will show that (N, A') satisfies the hierarchical-classes property.


To do this, we need to provide an ordered partition of N such that every

directed communication relation is between two players in consecutive
partition elements, and is initiated by the player in the partition ele-
ment with the higher number. We demonstrate in figure 4.7 that the
partitions ({6}, {2, 4, 5, 7}, {1, 3, 8}) and ({4, 6}, {2, 5, 7, 8}, {1, 3}) both
satisfy these conditions. ◇

Figure 4.7. Two (ordered) partitions into hierarchical classes

So far, we have argued that the hierarchical-classes property is a nec-


essary condition on all elements of a class of directed communication
networks for the existence of an allocation rule that satisfies component
efficiency and α-directed fairness for some α > 1. Slikker et al. (2000b)
show that this condition is sufficient as well. We state their result with-
out a proof.

THEOREM 4.9 Let α > 1, let N be a set of players and let 𝒜 be a closed
collection of directed communication networks on N. Then there exists
an allocation rule on DCS^{N,𝒜} that satisfies component efficiency and
α-directed fairness if and only if every directed communication network
(N, A) ∈ 𝒜 satisfies the hierarchical-classes property.

We illustrate theorem 4.9 in the following example.

EXAMPLE 4.24 Let (N, A') be the directed communication network
that was represented in figure 4.6 restricted to {1, 2, 3, 5, 6, 7}. Hence,
N = {1, 2, 3, 5, 6, 7} and A' = {(1, 2), (1, 5), (2, 6), (3, 2), (3, 7), (5, 6),
(7, 6)}. We define the reward function r on 𝒜 = {(N, A) | A ⊆ A'}
by

r(A) = 0    if A ⊂ A';
       15   if A = A',

and consider the directed communication situation (N, r, A').
We saw in example 4.23 that (N, A') satisfies the hierarchical-classes
property. We will now show that there exists an allocation rule γ on
DCS^{N,𝒜} that satisfies component efficiency and α-directed fairness with
α = 2. We define γ by γ_1(N, r, A') = γ_3(N, r, A') = 4, γ_2(N, r, A') =
γ_5(N, r, A') = γ_7(N, r, A') = 2, γ_6(N, r, A') = 1, and γ_i(N, r, A) = 0 for
each i ∈ N and A ⊂ A'. It is easily checked that γ satisfies component
efficiency and α-directed fairness with α = 2. ◇
Part II

NETWORK FORMATION

Chapter 5

NONCOOPERATIVE GAMES

In the current chapter, we discuss the concepts of noncooperative


game theory that will be used in the chapters to come. There are two
basic models of noncooperative games, games in extensive form and
games in strategic form. A game in extensive form is a dynamic de-
scription of a situation that explicitly specifies all its details, such as
who moves when and what a player's options are each time he is called
upon to move. The predominant solution concept that is used for games
in extensive form is subgame-perfect Nash equilibrium, which is found
through backward induction. We discuss games in extensive form and
subgame-perfect Nash equilibrium in section 5.1. A game in strategic
form is a static and condensed description of a situation, in which the
players submit their strategies, which are then consistently followed and
lead to a unique outcome with corresponding payoffs. Games in strate-
gic form are introduced in section 5.2. In that section, we also describe
Nash equilibrium and several of its refinements, namely undominated
Nash equilibrium, strong Nash equilibrium, and coalition-proof Nash
equilibrium.

5.1 GAMES IN EXTENSIVE FORM


Games in extensive form can be easily represented using game trees. A
game tree is a decision tree with multiple decision makers. We introduce
game trees using an example.

EXAMPLE 5.1 Consider the game tree in figure 5.1. It represents a


version of the so-called ultimatum game.
There are two players in this game, the proposer, player 1, and the
responder, player 2. The players are promised $10 provided that they

Figure 5.1. The game tree of the game in example 5.1

agree on how to split it between them. They have to try and reach an
agreement following a strict procedure. Firstly, the proposer proposes
a split of the money between the responder and himself. To keep this
game simple, we assume that there are only three possible proposals,
namely a 7-3 split ($7 for player 1 and $3 for player 2), a 6-4 split, or
a 5-5 split. These three possible alternatives are represented as three
branches leaving the initial node of the game, which is labeled as a
decision node for player 1. Each branch is labeled with the proposal it
represents.
After the proposer has put a proposal on the table, the responder can
either agree or disagree with the proposed split. In the game tree, this
is shown as two branches, labeled agree and disagree, leaving each of the
three decision nodes for player 2. We add the numbers 1, 2, and 3 be-
cause player 2 is (dis)agreeing with different proposals, and (dis)agreeing
to one proposal is not the same as (dis)agreeing to another. After a
choice of action by player 2, the game reaches an end node, in which
no further choices can be made by the players. For each end node, it is
specified what the payoffs to the players are. If player 2 agrees with the
split proposed by player 1, then the players get the $10 and split it as
agreed. If player 2 disagrees with the split proposed by player 1, then
the players do not get the $10 and leave empty-handed, i.e., each gets
0. ◇

The example above is very simple. Nevertheless, it illustrates all the


ingredients of games in extensive form that we use in the chapters to

follow. Adding more decision nodes, players, moves, etc., is straightfor-


ward.^18 We provide one more example.

EXAMPLE 5.2 Consider a market with one incumbent, firm I, and a


potential entrant, firm E. Firstly, firm E decides on whether to enter
the market, action e, or not to enter the market, action n. If firm
E decides not to enter the market, then firm E obviously makes no
profit in this market, while firm I continues to enjoy its current profit
of say 10. If firm E decides to enter the market, then firm I has to
decide how to react. Firm I can either fight the entrant and launch an
expensive advertising campaign, action F_1, or firm I can accommodate
the entrant, action A_1. The entrant first waits to see how the incumbent
firm reacts to its entry and then decides whether to fight or accommodate
the incumbent firm. The actions by the entrant are denoted F_2 and A_2
after the incumbent firm has decided to fight, and by F_3 and A_3 after firm
I's decision to accommodate entry. We use subscripts for the decisions
to fight and accommodate to distinguish between the two firms taking
these decisions and the different pieces of information that firm E can
have when making a decision to fight or accommodate.
This situation is represented in figure 5.2, in which we also give the
payoffs for the various possible outcomes. We list the payoff of firm E
first and that of firm I second, so that, for example, a payoff 3, 2 denotes
that firm E gets 3 and firm I gets 2. ◇

The predominant solution concept used in games in extensive form is


subgame-perfect Nash equilibrium. The idea underlying subgame-perfect
Nash equilibrium is that each player tries to anticipate the other players'
choices of alternatives following each of his possible choices. Proceeding
like this, a player tries to predict what end node will be reached (hy-
pothetically) following each of his alternatives. Then, a player uses this
information to decide which alternative to choose. The subgame-perfect
Nash equilibria of a game are most easily found using backward induc-
tion. We illustrate subgame-perfect Nash equilibrium for the games in
examples 5.1 and 5.2.

18We will only encounter games with perfect information without chance moves. Therefore,
we now have introduced all the ingredients that we need. For a richer description of games
in extensive form we refer the reader to a game theory text such as, for example, Myerson
(1991).

Figure 5.2. The game tree of the game in example 5.2

EXAMPLE 5.3 Consider the ultimatum game represented in figure 5.1.


Before choosing one of his alternatives, player 1 places himself in player
2's shoes and tries to predict whether player 2 will agree or disagree
following each of the proposals that player 1 can make. Suppose that
player 1 has proposed an equal split. Then player 2 can agree and get
$5, or he can disagree and get O. Player 2 wants to get as much as he can
and, faced with a choice between agreeing and getting $5 or disagreeing
and getting nothing, player 2 will agree. So, if player 1 proposes a 5-5
split, then player 2 will agree and player 1 will get $5. Note, however,
that player 1 can expect to get more if he proposes a 7-3 split. Player
2 has no other choice than to agree with such a split and get $3 or
disagree and get O. Hence, player 2 will agree with such a split as well.
Proceeding like this, we obtain the subgame-perfect Nash equilibrium
that is represented by arrows in figure 5.3. It results in a payoff of $7
for player 1 and $3 for player 2.
Player 2 might try to get player 1 to make a proposal that is better
for player 2 by threatening to disagree if player 1 proposes a 7-3 split
(and agree if player 1 proposes a 6-4 or a 5-5 split). If player 1 be-
lieves this threat, then proposing a 7-3 split will leave him with nothing,
whereas proposing a 6-4 split will result in a payoff of $6 for player 1.
Then player 1 should propose a 6-4 split, player 2 will accept, and the
money will be split accordingly. Note that player 2 now gets $4, which
is more than the $3 he gets in the subgame-perfect Nash equilibrium.
However, we have to ask ourselves the question whether player 1 is go-
ing to believe player 2's threat. Player 2 has no means of committing
himself to execute his threat (if he did, it should have been part of the

Figure 5.3. The subgame-perfect Nash equilibrium

description of the game). Hence, if player 1 proposes a 7-3 split, then


player 2 can stubbornly disagree and get 0, or he can give in and agree
and get $3. Hence, it is in player 2's self interest to agree. Therefore,
the threat to disagree with a proposed 7-3 split is not a credible threat.
Subgame-perfect Nash equilibrium rules out exactly such non-credible
threats by requiring that the players make optimal decisions at every
decision node. ◇

EXAMPLE 5.4 Consider the game that we described in example 5.2,
which is represented in figure 5.2. We use backward induction to find
the subgame-perfect Nash equilibrium of this game.
The last decisions that have to be made are those in which firm E has
to decide on whether to fight or accommodate, after having observed
firm I's decision to fight or accommodate. Firstly, consider the subgame
that is to be played after firm I decides to fight (F_1). Then firm E can
fight (F_2) and get 2 or accommodate (A_2) and get 1. It is then optimal
for firm E to choose action F_2. Second, consider the subgame that is to
be played if firm I plays action A_1. Firm E can then play F_3 and get
3 or play A_3 and get 4. Faced with this choice, firm E will choose to
accommodate (A_3).
Now that we have solved the very smallest subgames, we turn our
attention to the next smallest subgame. That is the one that starts with
firm I's decision whether to fight or accommodate if firm E enters. If
firm I decides to fight (F_1), then we have just seen that firm E will then
fight (F_2), and this will result in a payoff 2 for firm I. If firm I plays A_1,
however, then it will get a payoff of 5 because firm E will accommodate

(A_3) as well. Therefore, it is optimal for firm I to play A_1 if firm E
enters.
We have one more step left, namely to determine the optimal choice
of action by firm E in the very first decision node of the game. If firm
E does not enter (n), then it will have a payoff 0. If it chooses action e,
then we have just argued that firm I will then play A_1, followed by A_3
by firm E. The resulting payoff of firm E is 4, which is more than this
firm will get if it does not enter. We conclude that firm E will enter.
The subgame-perfect Nash equilibrium of this game is represented in
figure 5.4. According to this subgame-perfect Nash equilibrium, the out-
come of the game is that firm E enters, firm I then accommodates, and
firm E accommodates as well. ◇
Figure 5.4. The subgame-perfect Nash equilibrium
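Backward induction as used in examples 5.3 and 5.4 is easy to mechanize on small game trees. The sketch below uses our own encoding of the entry game of example 5.2 (nested tuples; the names are not the book's notation) and returns the subgame-perfect payoffs:

# Sketch of backward induction on a finite game tree with perfect information.
# A tree is either ('leaf', payoffs) or ('node', player, {action: subtree}),
# where payoffs is a dict player -> payoff.

def backward_induction(tree):
    if tree[0] == 'leaf':
        return tree[1]
    _, player, moves = tree
    # The player at this node anticipates the outcome after every action and
    # picks the action that maximizes his own payoff.
    outcomes = {action: backward_induction(sub) for action, sub in moves.items()}
    best = max(outcomes, key=lambda a: outcomes[a][player])
    return outcomes[best]

entry_game = ('node', 'E', {
    'n': ('leaf', {'E': 0, 'I': 10}),
    'e': ('node', 'I', {
        'F1': ('node', 'E', {'F2': ('leaf', {'E': 2, 'I': 2}),
                             'A2': ('leaf', {'E': 1, 'I': 4})}),
        'A1': ('node', 'E', {'F3': ('leaf', {'E': 3, 'I': 2}),
                             'A3': ('leaf', {'E': 4, 'I': 5})}),
    }),
})

print(backward_induction(entry_game))   # {'E': 4, 'I': 5}: enter, accommodate, accommodate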

5.2 GAMES IN STRATEGIC FORM


A game in strategic form consists of three elements. Firstly, there is
the set of players N. Secondly, each player i ∈ N has a set of strategies
available to him, which is denoted by S_i. We denote a specific strategy
of player i by s_i ∈ S_i. We denote the Cartesian product of the strategy
spaces of all the players by S = ∏_{i∈N} S_i, so that s ∈ S denotes a vector
s = (s_i)_{i∈N} of strategies, one for each player. Thirdly, a payoff function
f_i : S → R describes the payoffs of a player i ∈ N resulting from all
possible choices of strategies by the players. Gathering all these pieces,
a game in strategic form is a tuple (N; (S_i)_{i∈N}; (f_i)_{i∈N}).

EXAMPLE 5.5 In this example we consider a game that is a version of a


game that is commonly known as the prisoner's dilemma. For a rather
entertaining version of the story that gave this game its name, we refer
the reader to Dixit and Nalebuff (1991). We concentrate on a different
story.
Consider a market with two firms of equal strength who are trying to
form a cartel. In order to keep prices (and profits) high, they have to
restrict their output. Each firm decides whether to produce a low level
of output or a high one. Producing a high level of output is attractive
if the other firm produces a low level of output, since prices will still
be reasonably high. Such an action, however, will drive prices down
somewhat and have a negative effect on the profit of the other firm,
which can recoup a part of its profits by also producing a high level of
output. To fix things, let's say each firm earns a profit of 5 when they
each produce a low level of output, and each earns 4 when they each
produce a high level of output. If one of the firms produces a high level
of output while the other produces a low level of output, then the first
firm earns a profit of 6 and the second only 3.
The player set of the strategic-form game corresponding to this situ-
ation is given by N = {1,2}, where the elements of N represent firms
1 and 2. Each firm i E {1,2} has a strategy set Si = {Low, High},
indicating its choice to produce either low output or high output. The
payoff function of firm 1 is il, given by

if Sl = 82 = Low;
if 81 = Low and 82 == High;
if 81 = High and 82 := Low;
if 81 = 82 = High.

The payoff function 12 of firm 2 is similar.


A convenient way to represent games in strategic £orm is demonstrated
in figure 5.5. In this figure, we represent the game (N; (Sl, S2); (il, h))
by a matrix, in which the rows correspond to player l's strategies and
the columns correspond to the strategies of player 2. Each cell in the
matrix then corresponds to a strategy-combination 8 = (81,82) and in
such a cell we write h(81, 82), 12(81,82), i.e., the payoffs to the players
with player 1 's payoff first followed by the payoff of player 2. 0

Extensive-form games can also be represented in strategic form. A


strategy of a player is then a complete contingent plan of action, i.e., a
strategy describes for each of his decision nodes what action the player
will choose if the decision node is reached. We will demonstrate this for
the extensive-form games of examples 5.1 and 5.2.

            Low      High
   Low      5, 5     3, 6
   High     6, 3     4, 4

Figure 5.5. The game in strategic form

EXAMPLE 5.6 In the game in example 5.1 (see figure 5.1), player 1 has
only one decision to make, namely whether to propose a 7-3, a 6-4, or
a 5-5 split. A strategy for player 1 describes which one of these actions
he will choose at his decision node. Therefore, S_1 = {7-3 split, 6-4 split,
5-5 split}.
A strategy for player 2 is more complicated. Such a strategy has to
specify what action player 2 plans to take in all three of his decision
nodes, if they are reached. Player 2 can clearly make his decision de-
pendent on the proposal that player 1 puts on the table. The strategy
of player 2 to disagree if player 1 proposes a 7-3 split and agree in all
other cases is denoted by (disagree_1, agree_2, agree_3). It is easily seen that
player 2 has eight possible strategies.
We represent this strategic-form game in figure 5.6. The matrix has
rows that correspond to the strategies of player 1 and columns that
correspond to the strategies of player 2. To save some space, when
describing player 2's strategies we will use A_i and D_i to denote the
actions agree_i and disagree_i, respectively.

              (A_1, A_2, A_3)   (A_1, A_2, D_3)   (A_1, D_2, A_3)   (A_1, D_2, D_3)
  7-3 split       7, 3             7, 3             7, 3             7, 3
  6-4 split       6, 4             6, 4             0, 0             0, 0
  5-5 split       5, 5             0, 0             5, 5             0, 0

              (D_1, A_2, A_3)   (D_1, A_2, D_3)   (D_1, D_2, A_3)   (D_1, D_2, D_3)
  7-3 split       0, 0             0, 0             0, 0             0, 0
  6-4 split       6, 4             6, 4             0, 0             0, 0
  5-5 split       5, 5             0, 0             5, 5             0, 0

Figure 5.6. The strategic-form version of the game in example 5.1

The payoffs that are listed in figure 5.6 are easily found from figure
5.1. For example, if player 1 plays his strategy 6-4 split and player 2

plays his strategy (A_1, D_2, A_3), then player 1 proposes a 6-4 split and
player 2 disagrees, so both players get 0. If player 1 then switches to
his strategy 5-5 split while player 2 does not change his strategy and
still plays (A_1, D_2, A_3), then player 2 accepts the 5-5 split proposed by
player 1 and both players get 5. ◇

EXAMPLE 5.7 In the game in example 5.2, which is represented in figure
5.2, firm E has three decision nodes. In the initial node, firm E has to
choose between two actions, namely e and n. Furthermore, firm E has
to decide on action F_2 or A_2 if firm I decides to fight. Finally, it has
to decide on action F_3 or A_3 if firm I plays A_1. Therefore, firm E has
eight strategies.
Firm I has only one decision node, in which it has to choose between
two actions. Therefore, firm I has two strategies.
We represent the strategic-form version of the game in figure 5.7.
In this figure, the rows correspond to the strategies of firm E and the
columns to the strategies of firm I. Also, in each cell corresponding to a
strategy pair, we list the payoff of firm E first and that of firm I second.

                    F_1      A_1
  (e, F_2, F_3)     2, 2     3, 2
  (e, F_2, A_3)     2, 2     4, 5
  (e, A_2, F_3)     1, 4     3, 2
  (e, A_2, A_3)     1, 4     4, 5
  (n, F_2, F_3)     0, 10    0, 10
  (n, F_2, A_3)     0, 10    0, 10
  (n, A_2, F_3)     0, 10    0, 10
  (n, A_2, A_3)     0, 10    0, 10

Figure 5.7. The strategic-form version of the game in example 5.2

Note that the definition of a strategy requires that a player specifies


an action to choose for each of his decision nodes, even though some of
these might not be reached. This convention implies that a strategy for
firm E has to specify what action it would choose in decision nodes that
will never be reached if the firm chooses not to enter (action n) in the
initial node. ◇

The basic solution concept for strategic-form games is Nash equilib-


rium (cf. Nash (1950b)). A Nash equilibrium in a strategic-form game

is a strategy profile specifying a strategy for each player that is stable


in the sense that no player will find it profitable to unilaterally deviate
from this profile and choose another strategy. Formally, s = (s_i)_{i∈N} ∈ S
is a Nash equilibrium of the game (N; (S_i)_{i∈N}; (f_i)_{i∈N}) if for each player
i ∈ N and each s_i' ∈ S_i it holds that

f_i(s) ≥ f_i(s_i', s_{-i}),

where s_{-i} = (s_j)_{j∈N\i} denotes the (fixed) strategies of the players other
than i and (s_i', s_{-i}) ∈ S denotes the strategy profile in which player i
plays s_i' and each player j ∈ N\i plays s_j.

EXAMPLE 5.8 Consider the game in example 5.5, which is represented


in figure 5.5. The strategy combination (Low, Low) is not a Nash equi-
librium, because a player who deviates to High will increase his payoff
from 5 to 6. It is also not a Nash equilibrium for one of the players to
play Low and the other to play High, because in this case the player
who plays Low can increase his payoff from 3 to 4 by playing High in-
stead. Finally, strategy combination (High, High) is a Nash equilibrium,
because by deviating to Low, a player can only decrease his payoff. ◇
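The definition of Nash equilibrium translates directly into a brute-force search over strategy profiles. The sketch below (our own encoding of the game of example 5.5) confirms that (High, High) is its unique Nash equilibrium:

# Sketch of a brute-force Nash-equilibrium search for finite strategic-form games.
from itertools import product

def nash_equilibria(strategies, payoff):
    """strategies: dict player -> list of strategies;
       payoff: function (player, profile dict) -> number."""
    players = list(strategies)
    equilibria = []
    for combo in product(*(strategies[i] for i in players)):
        profile = dict(zip(players, combo))
        stable = True
        for i in players:
            for s in strategies[i]:
                if payoff(i, {**profile, i: s}) > payoff(i, profile):
                    stable = False         # player i has a profitable unilateral deviation
        if stable:
            equilibria.append(profile)
    return equilibria

# The game of example 5.5, with payoffs taken from the story above.
table = {('Low', 'Low'): (5, 5), ('Low', 'High'): (3, 6),
         ('High', 'Low'): (6, 3), ('High', 'High'): (4, 4)}
strategies = {1: ['Low', 'High'], 2: ['Low', 'High']}
payoff = lambda i, p: table[(p[1], p[2])][i - 1]

print(nash_equilibria(strategies, payoff))   # [{1: 'High', 2: 'High'}]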

EXAMPLE 5.9 Consider the game of example 5.6, represented in figure
5.6. This game in strategic form has seven Nash equilibria, namely
(7-3 split, (A_1, A_2, A_3)), (7-3 split, (A_1, A_2, D_3)), (7-3 split, (A_1, D_2, A_3)),
(7-3 split, (A_1, D_2, D_3)), (6-4 split, (D_1, A_2, A_3)), (6-4 split, (D_1, A_2, D_3)),
and (5-5 split, (D_1, D_2, A_3)). Strategy profile (6-4 split, (A_1, D_2, A_3)), for
example, is not a Nash equilibrium, because player 1 can increase his
payoff by deviating to 7-3 split, which player 2 will agree to according
to strategy (A_1, D_2, A_3). ◇

The previous example shows that there might be an abundance of


Nash equilibria. For this reason, several refinements of Nash equilibrium
have been introduced in the literature. Each refinement puts additional
restrictions on Nash equilibria. Although there are many equilibrium
refinements, we only use three of those in the chapters to come, and we
discuss only these three here. For additional refinements we refer the
reader to Myerson (1991) or van Damme (1991).
The first refinement of Nash equilibrium we consider is undominated
Nash equilibrium. Let (N; (S_i)_{i∈N}; (f_i)_{i∈N}) be a strategic-form game.
We say that a strategy of player i dominates another strategy of the
same player if the first one is never worse for player i than the second

one and sometimes better. Formally, strategy s_i ∈ S_i dominates strategy
s_i' ∈ S_i if for all possible strategy tuples s_{-i} ∈ S_{-i} = ∏_{j∈N\i} S_j of the
other players it holds that

f_i(s_i, s_{-i}) ≥ f_i(s_i', s_{-i}),    (5.1)

with the inequality being strict for at least one s_{-i} ∈ S_{-i}. A strategy
that is not dominated by any other strategy is called an undominated
strategy. A strategy profile s = (s_i)_{i∈N} is an undominated Nash equi-
librium if it is a Nash equilibrium in undominated strategies, i.e., s is a
Nash equilibrium in which s_i is an undominated strategy for each player
i ∈ N.
A special case arises when a player has a strategy that gives him a
payoff that is at least as good as the payoff he can obtain using any
of his other strategies, for any strategy profile of the other players. A
strategy s_i ∈ S_i is called a weakly dominant strategy if (5.1) holds for all
s_i' ∈ S_i and all s_{-i} ∈ S_{-i}. Note that a player may have several weakly
dominant strategies. Such strategies would necessarily have to result
in the same payoff for the player for any (fixed) strategy profile of the
other players. If a weakly dominant strategy s_i ∈ S_i is such that for
each s_i' ∈ S_i with s_i' ≠ s_i there exists a s_{-i} ∈ S_{-i} such that (5.1) holds
with strict inequality, then strategy s_i ∈ S_i dominates all the other
strategies of player i and is called a dominant strategy. Note that a
player can have at most one dominant strategy.
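Dominance can be checked in the same brute-force way by comparing two strategies of a player against every strategy profile of the other players. Continuing the previous sketch (and reusing the strategies and payoff objects defined there):

# Sketch: dominance check (5.1) and the undominated strategies of a player.
from itertools import product

def dominates(i, s, t, strategies, payoff):
    """True if strategy s of player i dominates strategy t:
       never worse and sometimes strictly better."""
    others = [j for j in strategies if j != i]
    never_worse, sometimes_better = True, False
    for combo in product(*(strategies[j] for j in others)):
        rest = dict(zip(others, combo))
        ps = payoff(i, {**rest, i: s})
        pt = payoff(i, {**rest, i: t})
        never_worse = never_worse and ps >= pt
        sometimes_better = sometimes_better or ps > pt
    return never_worse and sometimes_better

def undominated(i, strategies, payoff):
    return [t for t in strategies[i]
            if not any(dominates(i, s, t, strategies, payoff) for s in strategies[i])]

print(undominated(1, strategies, payoff))   # ['High']: Low is dominated by High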

EXAMPLE 5.10 We saw in example 5.9 that the game in figure 5.6 has
many Nash equilibria. However, not all of the Nash equilibria that we
found are undominated. For example, strategy (D_1, A_2, D_3) of player
2 is dominated by strategy (D_1, A_2, A_3), because these strategies give
player 2 the same payoff if player 1 plays either 7-3 split or 6-4 split,
but if player 1 plays 5-5 split, then strategy (D_1, A_2, A_3) gives player 2 a
higher payoff than strategy (D_1, A_2, D_3). Actually, strategy (A_1, A_2, A_3)
dominates every other strategy of player 2 and, consequently, player 2
has a (unique) dominant strategy.
As for player 1, all his strategies are undominated. This can be seen
by noting that the strategy 7-3 split gives player 1 a higher payoff than
all his other strategies if player 2 plays (A_1, A_2, A_3), that the strategy
6-4 split is best for player 1 if player 2 plays (D_1, A_2, A_3), and that the
strategy 5-5 split gives player 1 the highest possible payoff if player 2
plays (D_1, D_2, A_3).
It follows that (7-3 split, (A_1, A_2, A_3)) is the only undominated Nash
equilibrium of the game. Note that this corresponds to the subgame-perfect
Nash equilibrium of the extensive-form game that was represented

in figure 5.3, from which the strategic-form game in figure 5.6 is de-
rived. Except for strategy (A_1, A_2, A_3), all strategies of player 2 involve
non-credible threats. ◇

EXAMPLE 5.11 Consider the game in strategic form of example 5.7,
which was represented in figure 5.7. This game has three Nash equilibria,
namely ((e, F_2, F_3), F_1), ((e, F_2, A_3), A_1), and ((e, A_2, A_3), A_1). However,
only one of these is an undominated Nash equilibrium. For player 2,
neither of his strategies dominates the other, so strategies F_1 and A_1
are both undominated. Player 1 has a dominant strategy, (e, F_2, A_3),
which dominates all his other strategies. Therefore, ((e, F_2, A_3), A_1) is
the unique undominated Nash equilibrium of the game. Note that this
corresponds to the subgame-perfect Nash equilibrium of the extensive-
form game in figure 5.4, from which we derived the strategic-form game
in figure 5.7. ◇

In the following example we show that a game in strategic form might


have more than one undominated Nash equilibrium.

EXAMPLE 5.12 Consider the strategic-form game that is represented in


figure 5.8. This game has two players. Player 1, the row player, has two
strategies and player 2, the column player, has three strategies.

Figure 5.8. The strategic-form game

Strategies Top and Bottom of player 1 are both undominated. Strat-


egy Right of player 2 is dominated by strategy Middle, but strategies Left
and Middle are both undominated. Therefore, the two Nash equilibria
of the game, (Top, Left) and (Bottom, Middle) are both undominated
Nash equilibria.
The two Nash equilibria of the game are such that one of them is
preferred by all players. In equilibrium (Top, Left) both players get only
3, whereas in equilibrium (Bottom, Middle) they both get 4. The reason
why (Top, Left) is a Nash equilibrium nevertheless, is that both players
need to change their strategies to increase their payoffs. The strategy

tuple (Top, Left) is stable against unilateral deviations, but not against
deviations by multiple players who coordinate their actions. ◇
A Nash equilibrium that satisfies the requirement that it is stable
against deviations by coalitions of players is called a strong Nash equi-
librium (see Aumann (1959)). Formally, s = (s_i)_{i∈N} ∈ S is a strong Nash
equilibrium of the game (N; (S_i)_{i∈N}; (f_i)_{i∈N}) if for all coalitions T ⊆ N
it holds that there is no strategy tuple t_T = (t_i)_{i∈T} ∈ S_T = ∏_{i∈T} S_i
such that f_i(t_T, s_{N\T}) ≥ f_i(s) for all i ∈ T, with the inequality being
strict for at least one player i ∈ T. Here, s_{N\T} = (s_j)_{j∈N\T} denotes
the (fixed) strategies of the players not in T and (t_T, s_{N\T}) ∈ S denotes
the strategy profile in which each player i ∈ T plays t_i and each player
j ∈ N\T plays s_j.
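Testing whether a profile is a strong Nash equilibrium adds an outer loop over coalitions and their joint deviations. A sketch under the same encoding as before (names ours); it shows, incidentally, that the unique Nash equilibrium of the game of example 5.5 is not strong:

# Sketch: test a strategy profile for strong Nash equilibrium by enumerating
# all coalitions T and all joint deviations of T.
from itertools import chain, combinations, product

def is_strong_nash(profile, strategies, payoff):
    players = list(strategies)
    coalitions = chain.from_iterable(combinations(players, k)
                                     for k in range(1, len(players) + 1))
    for T in coalitions:
        for combo in product(*(strategies[i] for i in T)):
            deviation = {**profile, **dict(zip(T, combo))}
            gains = [payoff(i, deviation) - payoff(i, profile) for i in T]
            if all(g >= 0 for g in gains) and any(g > 0 for g in gains):
                return False     # coalition T has a weakly improving joint deviation
    return True

# The grand coalition gains by switching jointly to (Low, Low): 5 > 4 for both firms.
print(is_strong_nash({1: 'High', 2: 'High'}, strategies, payoff))   # False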

EXAMPLE 5.13 We saw in example 5.11 that the game in figure 5.7
has three Nash equilibria, namely ((e, F_2, F_3), F_1), ((e, F_2, A_3), A_1), and
((e, A_2, A_3), A_1). Equilibrium ((e, F_2, F_3), F_1) is not strong, because coali-
tion T = {1, 2} can increase player 1's payoff from 2 to 3 while keeping
player 2's payoff at 2 by playing strategies t_T = ((e, A_2, F_3), A_1). In the
other two Nash equilibria, ((e, F_2, A_3), A_1) and ((e, A_2, A_3), A_1), player 1
gets a payoff of 4 and player 2 of 5. Player 2 would like to change to
one of the strategy profiles in which he gets 10, but player 1 will never
agree to this because his payoff would be lowered to 0. We conclude
that ((e, F_2, A_3), A_1) and ((e, A_2, A_3), A_1) are strong Nash equilibria of
the game. ◇

EXAMPLE 5.14 We saw in example 5.9 that the game in figure 5.6
has seven Nash equilibria. All of these Nash equilibria are strong, be-
cause a gain for one of the players can only come at the expense of the
other player. For example, if the players play Nash equilibrium (6-4
split,(D 1 , A 2 , A 3 )), then player 1 has a payoff of 6 and player 2 of 4. Ev-
ery strategy profile in which player 1 has a payoff of more than 6, yields
player 2 a payoff of only 3. On the other hand, every strategy profile in
which player 2 has a payoff of more than 4, yields player 1 a payoff of
only 5. <>
The following example shows that strong Nash equilibria might not
exist.

EXAMPLE 5.15 Consider the 3-player game represented in figure 5.9. In


this figure, the rows correspond to the strategies of player 1, the columns
148 Noncooperative games

to the strategies of player 2, and the three matrices correspond to the


three possible strategies of player 3. Each player i E {I, 2, 3} has three
strategies, namely Si, ti, and Ui.

3,3, ° 0, 0, °
3, 3, ° 3,3, ° 0,3,3
0, 3, 3 1,4, 1
3, 0, 3 3,0,3 3,0,3 0, 0, ° 0,3,3 0, 3, 3
4, 1, 1 3, 0, 3 4, 1, 1 3, 3, ° 1,4, 1

3,3, ° 0, 3, 3 1,4, 1
3, 0, 3 1, 1, 4 1, 1,4
4, 1, 1 1, 1,4 2, 2, 2

Figure 5.9. The strategic-form game

This game has four Nash equilibria. These are (U1' 32, 33), (31, U2, t3),
(t1' t2, 'U3), and (U1' U2, 'lL3). None of these four Nash equilibria are strong,
though. In equilibrium (U1' 32, S3) players 2 and 3 can increase their pay-
offs from 1 to 3 by changing their strategies to t2 and t3, in equilibrium
(31, U2, t3) players 1 and 3 have a similar profitable deviation to t1 and
3:3, and in equilibrium (t1' t2, U3) players 1 and 2 have a similar profitable
deviation to Sl and 32· In equilibrium (U1' U2, 'U3), any pair of players can
deviate such that both their payoffs increase from 2 to 3. For example,
players 1 and 3 can deviate to strategies tl and S3. <)

It is not uncommon for larger games to not have any strong Nash
equilibrium. It has been argued that this is due to the fact that all prof-
itable deviations are allowed, including ones that are not stable against
further deviations. In a coali tion-proof Nash equili bri urn (cf. Bern-
heim et al. (1987)), the only deviations that are allowed are those that
are themselves stable against further deviations by sub coalitions of the
deviating coalition. To provide the formal definition of coalition-proof
Nash equilibrium, which is inductive, we need some additional notation.
Let f = (N; (Si)iEN; (J;)iEN) be a game in strategic form. For every
coalition TeN and s*rv\T E SN\T, we define a strategic-form game
f(s*rv\T) = (T; (Si)iET; (ft)iET), which is the game induced on the play-
ers of T by the strategies s*rv\T" For each i E T, the payoff function
it : Sr --+ R of thii:J game is given by it(ST) = J;(ST,3*rv\T) for all
ST EST·
Games in stmtegic form 149

Coalition-proof Nash equilibTia are defined inductively. In a I-player


game ({ i}; 5 i ; fi), a strategy si E 5 = 5i is a coalition-proof Nash
equilibrium if si maximizes 1; over 5,. Let r be a game with n >
1 players and suppose that coalition-proof Nash equilibria have been
defined for games with fewer than n players. A strategy profile 8* E 5 =
5 N is called self enforcing iffor all TeN, it holds that sT is a coalition-
proof Nash equilibrium of the game r(s~\T)' A strat.egy profile s* is a
coalit.ion-proof Nash equilibrium of r if s* is self enforcing and there is
no other self-enforcing strategy profile s E 5 N such that 1; (8) > 1; (8*)
for all i E N.

EXAMPLE 5.16 Consider the game in figure 5.9. We saw in example 5.15
that this game has four Nash equilibria and no strong Nash equilibria.
We saw that (U1' 82, 83) is not a strong Nash equilibrium by point.ing out
that players 2 and 3 can increase their payoffs by changing their strate-
gies to t2 and t3. But the strategy profile (U1' t2, t3) that is then being
played is not stable against further deviations. After cooperating with
player 3 to get to (111, t2, t3), player 2 has an incentive to deviate from
this agreement and play 112 to increase his payoff even further. Hence,
the initial deviation by players 2 and 3 is not one that is allowed. How-
ever, (111,82,83) is not a coalition-proof Nash equilibrium because there
is another profitable deviation that is stable against further deviations,
namely the deviation to 112 and U3 by players 2 and a.
To illustrate that the method using deviations and further deviations,
which is in examples much easier to use, leads to the same result as the
method using induced games on subcoalitions and self-enforcing strategy
profiles, we also apply this last method to show that (111, s2, 83) is not
a coalition-proof Nash equilibrium. Firstly, we consider the game r('U1)
which is induced on the players of {2, 3} by the strategy 111 by player l.
This game is represented in figure 5.10, in which the rows now correspond
to the strategies of player 2 and the columns to those of player 3. In
each cell, we list two payoffs, first that of player 2 and second that of
player 3.

1, 1
0, 3
3,°
3,3
1, 1
1,4
1, 1 4, 1 2, 2

Figure 5.10. The game r(Ul)


150 Noncooperative games

This game has two Nash equilibria, (82,83) and (712,713). Both are, of
course, stable against deviations by either player 2 or player 3. There-
fore, both strategy profiles (82, 83), and (712, 713) are self enforcing. How-
ever, only (712,713) is a coalition-proof Nash equilibrium of the game
f(ud, because both players get a higher payoff in this equilibrium than
in the other one.
Now, we have established that (82,83) is not a coalition-proof Nash
equilibrium ofthe game f(ud. Hence, (711,82,83) is not a coalition-proof
Nash equilibrium.
In an analogous manner, we can show that (31,712, t3) and (t1' t2, 713)
are not coalition-proof Nash equilibria either. We have one more Nash
equilibrium to check, namely (711,712,713), There are only three profitable
deviations from this strategy profile. These are (81,82) by players 1 and
2, (t1,83) by players 1 and 3, and (t2' t3) by players 2 and 3. Because of
their symmetry, we can suffice by checking for only one of these that it is
not allowed. Take deviation (31,32) by players 1 and 2. This deviation
is not stable against further deviations, because player 1 can increase his
payoff to 4 by deviating to 711. Therefore, this deviation is not allowed.
Since none of the three profitable deviations from (711,712, U3) is allowed,
it follows that (711,712,713) is a coalition-proof Nash equilibrium.
We conclude that (U1,U2,U3) is the unique coalition-proof Nash equi-
librium. 0

We conclude this section with an example that demonstrates that,


even if each player has a dominant strategy in a game, a coalition-proof
Nash equilibrium might involve the play of dominated strategies.

EXAMPLE 5.17 Consider the game in strategic form with two players
that is represented in figure 5.11.

Figur'e 5.11. A 2-player game

Note that this game has two Nash equilibria, (81,82) and (t1, t2)'
Furthermore, it is easily checked that 31 is the unique dominant strategy
of player 1 and 82 the unique dominant strategy of player 2. Strategies
tl for player 1 and t2 for player 2 are both dominated. However, we will
show that (t1' t2) is the unique coalition-proof Nash equilibrium of this
game.
Games in strategic form 151

Since every coalition-proof Nash equilibrium is a Nash equilibrium,


(81, t2) and (tl,82) are not coalition-proof Nash equilibria. Strategy
profile (81,82) is also not a coalition-proof Nash equilibrium, because
players 1 and 2 together can deviate to (tl' t2) and both improve their
payoffs and no player can make a deviation from (tl' t2) that is profitable.
This leaves (tl, t2) as the only candidate for a coalition-proof Nash equi-
librium. Note that both players receive their highest possible payoff if
strategy profile (h, t2) is played. Hence, no coalition can deviate from
(tl, t2) to a strategy profile that improves the payoffs of the deviating
players. Therefore, (tl' t2) is a coalition-proof Nash equilibrium. Note
that it involves all players playing a dominated strategy. 0
Chapter 6

A NETWORK FORMATION MODEL IN


EXTENSIVE FORM

In this chapter we consider extensive-form games of network forma-


tion. The starting point is a coalitional game that describes the profits
obtainable by all possible coalitions of players. The players engage in
a network-formation process to be able to coordinate their actions and
realize the possible gains from cooperation. The network-formation pro-
cess itself is sequential, i.e., links are formed one at a time and players
observe which links are formed as the game progresses. Once a network
has been formed, the players bargain over the division of the jointly ob-
tained profits. This bargaining process is not modeled explicitly. Rather,
an exogenously given allocation rule is used to describe the payoffs to
the players in any of the networks that they can form. Most of the re-
sults in this chapter are obtained for network-formation games in which
the allocation rule used to determine the players' payoffs is the Myerson
value.
We provide the formal definition of the extensive-form network-for-
mation game in section 6.1. Then, in sections 6.2 and 6.3 we analyze
several illustrative examples. In section 6.4 we provide some results con-
cerning the networks that are formed in subgame-perfect Nash equilibria
of the network-formation games and, finally, we study network-formation
games associated with symmetric convex games.

6.1 DESCRIPTION OF THE MODEL


In this section, we provide the formal definition of the extensive-form
network-formation game as well as some preliminary results concerning
the networks that are formed in subgame-perfect Nash equilibria.
We consider situations in which the possible gains from cooperation
between the players are described by a coalitional game. Players make
154 A network-formation model in extensive form

decisions on the possible formation of communication links. The incen-


tives for forming communication links stem from the desire to cooperate
with other players to reap the benefits from higher payoffs. We model the
process of link formation as a game in extensive form. In this game, pairs
of players get opportunities to form links in succession. An exogenously
given order of the pairs of players determines in what order the players
get opportunities to form links. We assume that it takes the consent of
two players to form the link between them. Hence, a link between two
players is formed if and only if both of these players agree on forming
it. 19 If two players at some point in the game pass up the opportunity
to form the link between them and new links are added to the network
after that, then these two players will get another opportunity to form
the link between them. The extensive-form game of network formation
that we are about to describe formally is a slight generalization of the
game that was introduced by Aumann and Myerson (1988).
Let (N, v) be a coalitional game with at least two players and let 'Y
be an allocation rule for communication situations that is defined on
CS;;. Let cr be an exogenously given order of the pairs of players in
N. For ease of notation, we identify each pair of players {i,j} with the
link ij. With this notation, cr : LN ---+ { 1,2, ... , G) } is a bijection with
the interpretation that cr(ij) = k denotes that pair ij is in position k.
Initially, there are no links between the players. The players have the
opportunity to form links in the order determined by cr. If at some point
in the game a pair of players gets an opportunity to form a link, this link
is actually formed if and only if both players agree on forming it. Once
the link has been formed, it cannot be broken in a further stage of the
game. After the players have formed the link or have decided against
forming it, the game moves to the next pair of players in the order cr.
After all pairs of players have had an opportunity to form a link, the
game goes back to the beginning of the order cr and each pair of players
who have not formed a link yet get another opportunity to do so, in the
order determined by cr. This process continues as long as new links are
formed. When a new link is formed, all pairs of players who have not
formed a link yet are given an opportunity to reconsider. The game ends
when, after the last link has been formed, all pairs of players who have
not formed a link with each other yet have had a final opportunity to do
so and decided against it. Throughout the process of network formation,

19We are somewhat vague about the exact details of the game, such as which player in a
pair decides first. We do so because such details have no influence on the qualitative results
that we present in this chapter and we do not want to obscure these results with complicated
notation. For a more formal treatment, we refer the reader to Slikker (2000a).
Some examples 155

its entire history is known to all players. When the game ends, a network
(N, L) has been formed. The payoffs to the players are then determined
using the exogenous allocation rule.". Hence, if network (N, L) has
been formed, then each player i E N receives "(i(N, v, L). In the original
model of Aumann and Myerson (1988), the Myerson value is used to
determine players' payoffs. While we allow for different allocation rules,
most of our results are obtained for the Myerson value.
We are interested in networks that are supported by subgame-perfect
Nash equilibria of the extensive-form games of network formation
t::,.n! (N, v, ,,(, a).20 In the following sections, we study the network-for-
mation games t::,.n!(N,v,,,(,a) by means of some examples.

6.2 SOME EXAMPLES


In this section we discuss three examples, in which we use the Myerson
value as the exogenous allocation rule. The first example is one of a
superadditive game for which the network-formation process does not
result in formation of the complete network. In the second example,
we illustrate that the order a of the pairs of players may influence the
networks that are formed in subgame-perfect Nash equilibria. In the last
example, we give a convex game and a connected but incomplete network
that has the property that the players will not form any additional links
if this network is formed at some point in the network-formation process.
The first example is due to Aumann and Myerson (1988).

EXAMPLE 6.1 Consider the 3-player symmetric game (N, v) with player
set N = {I, 2, 3} and characteristic function v given by

if ITI ~ 1;
veT) = { ~o72 if ITI = 2; (6.1)
ifT = N.

We point out that this game is superadditive. It was studied by Aumann


and Myerson (1988), who showed that every subgame-perfect Nash equi-
librium of the extensive-form network-formation games fin! (N, v, IL, a)
results in the formation of a network with exactly one link. We will
recall their arguments.
To do so, we first describe the payoffs to the players as determined
by the Myerson value for the various possible networks. In the empty

20We point out that subgame-perfect Nash equilibria exist because the game 1'>. nf (N, v, r, (7)
is one of perfect information.
156 A network-formation model in extensive form

network, every player receives zero, i.e.,

fLi(N, v, 0) = 0 for each i E N.

In a network with one link, (N, {ij}), the two linked players equally
divide the value of a 2-player coalition and the isolated player gets zero;

fLk(N,v,{i j }}={ ~o if k cf- {i,j};


if k E {i,j}.

In a network with two links, (N,{ij,jk}), the payoffs are (see example
2.7)
if r = j;
fLr(N, v, {ij,jk}} = { ~:
if r E {i, k}.
Finally, in the complete network each player receives the same payoff
and
fLi(N, V, LN) = 24 for each i EN.
Let a be the order of the pairs of players in which 12 comes first,
followed by 13 and 23, successively. We represent the extensive-form
network-formation game fl.nj(N,v,fL,a} in figure 6.1. In this figure, we
have condensed the decisions of pairs of players in the following way.
Consider the initial node of the game. Players 1 and 2 have an opportu-
nity to form link 12, and both of these players have to indicate whether
or not they want to form it. The order in which these two players make
their decisions does not influence the outcome of the game, and the link
is formed if and only if both players agree on forming it. Therefore, we
can schematically represent this as a decision node for the pair of players
1,2 with two outgoing branches, one labeled 'Yes', which is followed if
both players agree on forming the link, and one labeled 'No', which is
followed if at least one of the two players does not want to form link 12.
We now set out to find the networks that are supported by sub game-
perfect Nash equilibria. Note that each player receives a positive payoff
if he forms any links at all, whereas all players receive zero if no links are
formed. It follows that at least one link will be formed in a subgame-
perfect Nash equilibrium. Suppose that two players, say i and j, have
formed a link. If no additional links are formed, these two players will
each receive a payoff of 30. Certainly, both player i and player j would
prefer to form a link with the remaining player k and receive 44. How-
ever, if player i forms a link with player k, then players j and k will also
form a link to increase their payoffs from 14 to 24. So, both players i
and j know that if one of them forms a link with player k, the other
will do so as well and they both will end up receiving a payoff of 24.
Because this is less than the payoffs that they receive if only link ij is
Some examples 157

1,2 Yes 1,3 Yes 2,3 Yes 24, 24, 24


No
No ---------. 44, 14, 14

Yes 1,3
2,3 Yes ----e 24, 24, 24
No No ---------. 14, 44, 14
No
---------. 30, 30, 0

Yes 2,3 Yes 1,2


1,3 Yes ----e 24, 24, 24

No
No ---------. 14, 14, 44

Yes 2,3 Yes


1,2 24, 24, 24
No No
---------. 44, 14, 14
No
---------. 30, 0, 30

Yes 1,2 Yes 1,3 Yes


2,3 ----e 24, 24, 24
No
No ---------. 14, 44, 14

Yes 1,2 Yes


No 1,3 ----e 24, 24, 24
No
---------. 14, 14, 44
No
---------. 0, 30, 30
0,0,0
Figure 6.1. The network-formation game in extensive form in example 6.1

formed, no additional links will be formed. We conclude that exactly


one link will be formed in a subgame-perfect Nash equilibrium. The link
that is formed may be any of the three possible links, independent of
the specific order a. This is discussed in more detail by Slikker (2000a).
Also, for the order (j it can be derived quite straightforwardly that any
of the three links 12, 13, and 23 can be formed in a subgame-perfect
Nash equilibrium by analyzing the representation of this game in figure
6.1. 0
158 A network-formation model in extensive form

The main conclusion of the previous example is that there exist su-
peradditive coalitional games such that only incomplete or even non-
connected networks are supported by subgame-perfect Nash equilibria
of the corresponding extensive-form games of network formation.
The following example shows that different networks may be sup-
ported by subgame-perfect Nash equilibria of t:~.nf (N, v, IL, a) for differ-
ent orders a of the pairs of players.

EXAMPLE 6.2 Consider the coalitional game (N, v) with player set N =
{l, 2, 3,4} and characteristic function v described by

v = 2'Ul,2 + 2U3,4 + 48'U1 ,2,3 + 48ul,2,4 + 48ul,3,4 + 48u2,3,4 - 144'UN.

We are interested in the subgame-perfect Nash equilibria of the network-


formation game Ll nf (N, v, IL, a), for some order a.
Suppose that at some point in the network-formation game the three
links between players 1, 2, and 3 have been formed. If no additional
links are formed, then the players receive the payoffs IL(N,v,L{1,2,3}) =
(17,17,16,0). Using backward induction, it can be shown that once an
additional link is been formed, the players will form all the other links as
well, so that the complete network will be formed. The players will then
receive the payoffs IL(N, v, LN) = (13,13,13,13). Because this implies
a reduction in payoff for players 1, 2, and 3, they will not form any
more links. Note that player 4 cannot single handedly form additional
links. We conclude that no additional links will be formed if network
(N, L {1,2,3}) is formed at some point in the network-formation game.
Symmetry between the players implies that a similar statement holds
for each network that has three links between three players.
In a manner quite similar to the analysis in example 6.1, it can now
be shown that a network of the form (N,{ij,ik,jk}) will be formed.
However, the specific network that can be be formed depends on the
order a. For example, if 12 is first in the order, this pair of players will
form a link immediately and one of the two networks (N, L {1,2,3}) or
(N, L {1,2,4}) will result. Alternatively, if 34 is first in the order, they will
directly form a link and one of the networks (N, L{1,3,4}) or (N, L{2,3,4})
will result eventually. This illustrates the dependency of the order on
the networks that can be formed in subgame-perfect Nash equilibria. 0

The following example, which is due to R. Holzman (private com-


munication), studies a convex coalitional game. Attempts to obtain
general results for the extensive-form network-formation games have
mainly concentrated on convex games. This issue was addressed by
van den Nouweland (1993), and is further studied by Slikker and Norde
Some examples 159

(2000). We report their results in section 6.4. As an intermediate step,


van den Nouweland (1993) formulates the conjecture that for any convex
game (N, v) and any network (N, L) that is not the complete network,
there exist two players i and j such that fli(N, v, L) ::; fli(N, v, LN) and
flj(N,v,L) ::; flj(N,v,L N ) and, moreover, ij rf- 1" Example 6.3 dis-
proves this conjecture, which van den Nouweland (1993) intended to use
to prove the conjecture that for a convex coalitional game a network
that is payoff equivalent to the complete network will always be formed
in a subgame-perfect Nash equilibrium of the extensive-form game of
network formation.

EXAMPLE 6.3 Consider the 5-person coalitional game (N,11) with N =


{I, 2, 3, 4, 5} and characteristic function 11 described by

We use the Myerson value to determine players' payoffs. The payoffs to


the players in the complete network are

Consider the wheel (N,L) with L = {12,23,34,45,51}, which is repre-

50
sented in figure 6.2.

4
3

1 2
Figure 6.2. Network (N, L)

The network-restricted game associated with this wheel is

vL = 2Ul,2,3 + 2Ul,3,4,5 + 2Ul,4,5 + 2Ul,2,3,4 + 3Ul,2,b + 3U2,3,4,5 - 7UN

and the corresponding payoffs are

N v L = (29 ~ 61 61 91)
fl( , ,) 15' 60' 60' 60' 60 .

Hence, all players except for player 1 get a higher payoff in the wheel
than they get in the complete network.
160 A network-formation model in extensive form

Suppose that the wheel (N,L) has been formed at some point in the
network-formation game t::,.nf(N,v,jL, a). Then no additional links will
be formed if the formation of an additional link would induce the for-
mation of all remaining links, so that the complete network would be
formed. This means that the route mapped out by van den Nouweland
(1993) to prove the conjecture that for a convex coalitional game a net-
work that is payoff equivalent to the complete network will always be
formed is not valid. 0

6.3 WEIGHTED MAJORITY GAMES


In this section we study two examples of network-formation in weight-
ed majority games. In the first example we consider a weighted majority
game with one large party and several small ones, a so-called apex game.
In the second example, we discuss a weighted majority game for which
a connected but incomplete network is supported by a subgame-perfect
Nash equilibrium of the link-formation game.
We start by introducing weighted majority games. Consider a country
with a multi-party parliament. We denote by N the set of parties that
have seats in the parliament. Also, for each party i E N, we denote
its number of seats by Wi. Suppose that a coalition of parties needs a
total of q seats, the quota in order to form a government. The tuple
(N, q, (Wi)iEN) is a weighted majority situation. We assume throughout
this section that ~ 2:iEN Wi < q, i.e., a coalition needs more than half
of the seats to form a government. With a weighted majority situation
(N, q, (Wi)iEN) we associate a weighted majority game (N, v) with the
characteristic function v defined by

v(T) = { ~ if 2:iET Wi :::: q;


otherwise

for all T ~ N. Hence, in the weighted majority game, a coalition T


of parties has the value 0 if these parties have fewer than q seats and
hence cannot form a government. If the parties in a coalition T meet
the quota, they can form a government and T is assigned a value 1 in
the game.
The following example, due to Aumann and Myerson (1988), considers
a weighted majority situation with one large party and four small parties.

EXAMPLE 6.4 Consider the weighted majority situation (N,q, (Wi)iEN)


with parties N = {I, 2,3,4, 5}, quota q = 4, and numbers of votes Wl = 3
and W2 = W3 = W4 = W5 = 1. In this situation, there is one large party
which needs only one of the other parties to form a government. A
Weighted majority games 161

government without the large party can only be formed by all the small
parties. The associated weighted majority game (N, v) is given by

if T = {2, 3, 4, 5} or if 1 E T and ITI ~ 2;


v(T) = { ~ otherwise.

We consider the formation of bilateral relations between these parties,


using the Myerson value to determine the payoff to the parties. In table
6.1 we list the Myerson values for several networks in which all possible
links between the cooperating parties have been formed. Note that for
such networks that are not listed in the table, we can find the payoffs
using symmetry.

Coalition T Payoffs p(N, v, LT)

{1,2} (~, ~,O,O,O)

{1,2,3} (~,t,t,O,O)
{l,2,3,4} (t,f2,f2,f2,O)
N (~,to,to,to,to)
{2,3,4,5} (O,~, ~,~,~)

Table 6.1. Payoffs for networks with all possible links within components

The large party needs at least one small party to form a government.
However, this small party can increase its political clout within the gov-
ernment by inviting additional small parties to join it. In a government
with two parties, both parties are equally strong even though they have
different numbers of seats. By inviting more parties., the large party de-
creases its dependency on a specific small party, while each small party
still needs the large one. This argument does no longer hold if there are
enough small parties in the government to form a majority without the
large party. The payoff of the large party is maximal if there are three
small parties in the government. Notice that three is the maximal num-
ber of small parties that can be included in the government without the
large party losing its veto power. A small party has the highest payoff
if it forms a government with the large party only. However, all small
parties prefer to form a government with each other, excluding the large
party, to forming a government with the large party and at least one
other small party.
162 A network-formation model in extensive form

In our analysis of the game of network formation we will restrict our


attention to networks in which all links are formed within components.
Several arguments justify this restriction. Firstly, note that for any
network (N, L) in which parties 2, 3, 4, and 5 do not all belong to the
same component, it holds that only the links in which party 1 is involved
are relevant. Further, any network in which {2, 3, 4, 5} is a component
results in the same payoffs as network (N, L{2,3,4,5}). Finally, it can
be shown that all links will be formed once a connected network has
been formed. The formation of a network (N, LT) is identified with the
formation of coalition T, for each T ~ N.
We use backward induction to find out which coalitions can be formed
in subgame-perfect Nash equilibria of the network-formation games
tl nf (N, v, /-L, a). Firstly, assume that a coalition with the large party
and three small parties has been formed, say coalition {I, 2, 3, 4}. If
party 2 forms an additional link with party 5, than the complete net-
work will eventually be formed. This improves the payoff of party 2
from 112 to 110 and the payoff of party 5 from 0 to 110 • Hence, once a
coalition with the large party and three small parties has been formed,
the remaining links will also be formed.
Secondly, assume that a coalition with the large party and two small
parties has been formed, say coalition {I, 2, 3}. If any of these parties
forms an additional link with parties 4 or 5, it follows using the results
that we have obtained so far, that eventually the complete network will
be formed. This would decrease the payoff of the large party from ~
to ~ and that of parties 2 and 3 form i to 110 • We conclude that no
additional links will be formed once a coalition with the large party and
two small ones has been formed.
Thirdly, suppose that a coalition with the large party and one small
party has been formed, say {1,2}. Then party 1 can permanently im-
prove its payoff from 1 to ~ by forming a link with one of the other
small parties. Because this will also improve the payoff of this small
party, we conclude that a coalition with the large party and one small
party cannot be sustained in a subgame-perfect Nash equilibrium.
Fourthly, suppose the four small parties have formed a coalition. If
one of them forms a link with the large party, then eventually the com-
plete network will be formed. Because parties 2, 3, 4, and 5 all receive
~ in coalition {2, 3, 4, 5} and only/0 in the complete network, we con-
clude that coalition {2, 3,4, 5} is sustainable in a subgame-perfect Nash
equilibrium.
Finally, suppose that no links have been formed so far. U sing the
analysis above, we conclude that only two types of structures can result,
a coalition with the large party and two small parties or the coalition
Weighted majority games 163

with four small parties. Since all four small parties prefer the coalition
with four small parties to any coalition with one large party and two
small parties (independent of whether they are in this coalition or not),
we conclude that the coalition with four small parties will be formed in
a subgame-perfect Nash equilibrium. 0

Aumann and Myerson (1988) mention that the analysis in example


6.4 is typical for situations with one large party having more than one
and less than q seats and several small parties having one seat each.
According to subgame-perfect Nash equilibria, the complete network on
a minimal winning coalition of small parties will be formed.
As far as we know, no general results for other types of weighted ma-
jority games have been reported in the literature. Aumann and Myerson
(1988) remark that in the two examples of network-formation games as-
sociated with the 5-party weighted majority situation (N, 4, (2,2,1,1,1))
and the 7-party weighted majority situation (N, 6, (3,3,1,1,1,1,1)), re-
spectively, both the structures with the large parties and the structures
with one large party and all small parties can be formed.
The following example, which is due to Feinberg (1998), addresses a
question that was posed by Aumann and Myerson (1988). They won-
dered whether there exists a majority game for which a network that
is not internally complete can be sustained by a subgame-perfect Nash
equilibrium.

EXAMPLE 6.5 Consider the weighted majority game with 8 parties,


which have 5, 1, 2, 2, 2, 2, 4, and 1 seats, respectively, and a quota
of 12 seats. The weighted majority game (N, v) associated with this
situation is described by N = {I, 2, 3, 4, 5, 6,7, 8} and

v(T) = { ~ if LiET Wi 2: 12;


otherwise.

Feinberg (1998) considers the network-formation games fln/ (N, v, p" a).
He shows that, once the network (N, L) with

L = L N \{37, 47, 57, 67,18,48,58, 68}

has been formed, no further links will be formed by the parties. We


represent the network (N, L) in figure 6.3.
The payoffs of the parties in this network are

N v L _ (123 27 42 38 38 38 _~ 23)
p,( , , ) - 420' 420' 420' 420' 420' 420' 420' 420 '
164 A network-formation model in extensive form

4 3 8
Figure 6.3. Network (N, L)

whereas the complete network would result in the payoffs

N LN _ (122 22 41 41 41 41 90 22)
f-l( ,v, ) - 420' 420' 420' 420' 420' 420' 420' 420 .
Parties 4, 5, and 6 prefer the complete network, but they have no possi-
bility to enforce the formation of this structure. Parties 1, 2, 3, 7, and 8
all prefer network (N, L) and they can enforce that no additional links
are formed.
It is still unknown whether there exists a subgame-perfect Nash equi-
libriumof ~nf(N, V,f-l, CJ) that results in the formation of network (N, L)
if the formation process is started from the empty network. 0

6.4 SYMMETRIC CONVEX GAMES


In this section, which is based on Slikker and Norde (2000), we discuss
network formation in symmetric convex games. We concentrate on such
games because it seems that, if any general results for the extensive-form
network-formation model ~nf (N, v, 'Y, CJ) can be obtained, it should be
for such games. This is because, at least superficially, it seems that in a
convex game the players have incentives to form large coalitions, so that
one might expect a connected network to be formed. The restriction to
symmetric games further simplifies the situation because in such a game
all players are symmetric and, hence, one might expect the formation of
a network in which the players have symmetric positions.
Van den Nouweland (1993) formulated the conjecture that for a con-
vex coalitional game (N, v) it holds that the complete network is always
supported by a subgame-perfect Nash equilibrium of the extensive-form
game ~nf (N, v, fL, CJ) of network formation. However, we have already
shown in example 6.3 that the method of proof of this conjecture that
van den Nouweland (1993) envisioned does not work. In this section we
present the results obtained by Slikker and Norde (2000), who prove that
the conjecture is true for symmetric convex games with up to five players.
However, as far as we know, it is still unknown whether the conjecture
is true for symmetric convex games with more than five players.
Symmetric convex games 165

We start the exposition in this section with two general lemmas. In


the first lemma we state a sufficient condition for the complete network
to be supported by a subgame-perfect Nash equilibrium. The condition
is that for all networks that are not complete it holds that there exist two
players who are not directly connected with each other and who each
weakly prefer the complete network to the current one. This lemma
was first stated by van den Nouweland (1993) and is proved in detail by
Slikker and Norde (2000). We will only sketch their arguments here.

LEMMA 6.1 Let (N, v) be a coalitional game, 1 an allocation rule for


communication situations, and IJ an order of all pairs of players. If
for each L C LN there exist two players i, j E N such that ij tt L,
N N .
li(N,v,L) :s: li(N,v,L ), and Ij(N,v,L) :s: ,j(N,v,L ), then It holds
that (N, LN) is supported by a subgame-perfect Nash equilibrium of
~nf (N, v, I, IJ).

PROOF (SKETCH): Suppose that for each L C LN there exist two players
'i,j EN such that ij tt L, li(N,v,L) :s: li(N,v,L N ), and Ij(N,v,L) :s:
N
Ij(N,v,L ). We define a strategy profile s in the game ~nf(N,v",IJ)
as follows. Consider a decision node in the game and suppose that a
player i has to decide whether or not he wants to form a link ij, while
the links in L C LN have been formed so far. Then, according to his
strategy Si, player i indicates that he wants to form link ij if and only
if,i(N,v,L) :s: li(N,v,L N ).
Obviously, since for all L C L N there exist two players i, j E N such
that ij tt L, li(N,v,L) :s: li(N,v,L N ), and ,j(N,v,L) :s: Ij(N,v,L N ),
strategy profile S results in the formation of the complete network.
It remains to show that S is a subgame-perfect Nash equilibrium of
~nf(N,v",IJ). Using backward induction, we know that it suffices to
show that for each subgame the choice at its initial node as prescribed by
s is optimal given that the choices of players according to s are subgame
perfect at all other decision nodes in the subgame. Consider a subgame
and suppose that in its initial node player i has to decide whether or not
he wants to form a link ij while the links in L C LN have been formed
so far. Furthermore, assume that the choices according to s of all other
players are subgame perfect and that the choices of player i according
to Si at all other decision nodes are subgame perfect.
Firstly, assume that link ij would be formed if player i followed
his strategy Si. Hence, li(N,v,L) :s: li(N,v,L N ) and Ij(N,v,L) :s:
Ij(N,v,L N ). Suppose player i deviates from Si. There are two possibil-
ities. Possibility number one is that no additional links will be formed,
166 A network-formation model in extensive form

so that network (N, L) results. The other possibility is that the com-
plete network (N, LN) is formed, because by construction of s, it holds
that if at least one additional link is formed then all links will eventually
be formed. In both cases, player i's payoff will not be higher than if he
follows Si.
Secondly, assume that link ij would not be formed if player i followed
his strategy Si. This leaves two possibilities. Ifri(N,v,L) :::; ri(N,v,L N )
then we know by construction of S that rj(N, v, L) > rj(N, v, £,v) has
to hold. Hence, player i cannot unilaterally enforce the formation of
the link, nor can he influence the outcome of the game by deviating at
the initial node. The other possibility is that ri(N, v, L) > ri(N, v, L N ).
Then, by construction of s, it follows that one of the networks (N, L)
and (N, LN) will be formed if the players play according to s. If player
i does not follow his strategy Si at the initial node of the subgame, then
either this is not going to have an effect on the formation of link ij
(ifrj(N,v,L) > rj(N,v,LN)) and on the network that is eventually
formed, or the deviation by player i results in the formation of link ij
and, eventually, in the formation of the complete network (N, L N ). In
both cases player i does not improve his payoff by deviating.
This shows that player i cannot improve his payoff by deviating from
oSi at the initial node of the subgame. 0

In the second lemma we state a necessary condition for a network to


be supported by a subgame-perfect Nash equilibrium. For a detailed
proof of this lemma we refer the reader to Slikker and Norde (2000). We
just give the general idea here.

LEMMA 6.2 Let (N, v) be a coalitional game, r an allocation rule for


communication situations, and u an order of all pairs of players. Let £
be a set of networks for the set of players N that includes the complete
network. Suppose that for all networks (N, L) rf- £ it holds that there
exist two players i, j E N such that ij rf- L and, moreover, ri (N, v, L) <
ri(N,v,L') and rj(N,v,L) < rj(N,v,L') for each (N,L') E £ with
L C L'. Then every network (N, L) that is supported by a subgame-
perfect Nash equilibrium of t::.nf(N,v,r,u) is an element of £.

PROOF (SKETCH): Suppose that a network that is not in £ is supported


by a subgame-perfect Nash equilibrium. Consider the set of networks
(N, L) that are not in £ and for which it holds that in a subgame that
is played after exactly the links in L have been formed, there exists a
subgame perfect Nash equilibrium according to which no additional links
Symmetric convex games 167

are formed. 21 Then, this set of networks is non-empty. Let (N, L) f/. £.
be a network in this set with the maximum number of links. Consider
a subgame that is played after exactly the links in L have been formed
and consider a subgame-perfect Nash equilibrium s in this subgame that
prescribes that no additional links are formed. By assumption there
exist two players i,j EN such that ij f/. L and, moreover, 'Yi(N, v, L) <
'Yi(N, V, L') and 'Yj(N, v, L) < 'Yj(N, V, L') for each (N, L') E £. with
L c L'. It follows from the maximality of (N,L) that, if players i and j
form link ij, which they have an opportunity to do, then some network
(N, L') E £. will eventually be formed. Note that for this network it
would hold that 'Yi(N, v, L') > 'Yi(N, v, L) and 'Yj(N, v, L') > 'Yj(N, v, L).
Hence, at least one of the players i and j did not make an optimal choice
in one of the decision nodes in the subgame. This contradicts that s is
subgame-perfect Nash equilibrium. We now conlude that a network that
is not in £. is not supported by a subgame-perfect Nash equilibrium of
b,.nf(N,v,'Y,u). D

It can be shown that a slightly generalized version of lemma 6.2 holds


for extensive-form link-formation games that are obtained when some
links have been formed before the start of the game.

We now turn our attention to the study of which networks are sup-
ported by subgame-perfect Nash equilibria of the link-formation games
b,.n f (N,v,ll,u) for symmetric convex (coalitional) games (N,v).
Considering symmetric games rather than arbitrary coalitional games
reduces the complexity of our problem significantly, since we can re-
strict ourselves to non-isomorphic networks. Two networks (N1 , L 1 )
and (N2' L 2) are isomorphic if there is a one-to-one correspondence be-
tween the vertices in N1 and those in N2 with the additional property
a link between two vertices in N1 is included in L1 if and only if the
link between the corresponding two vertices in N2 is included in L 2. For
example, networks ({I, 2, 3}, {12}) and ({I, 2, 3}, {13}}) are isomorphic.
In fact, all networks with three vertices and one link are isomorphic.
In a communication situation (N, v, L) with a symmetric game (N, v),
the Myerson value of a player only depends on his position in the network
and not his identity. Hence, we can reduce the complexity of the analysis
if the underlying game is symmetric, because it suffices to calculate the
Myerson values for non-isomorphic networks only.

21We remark that such a subgame might not actually be reacbed in a subgame-perfect Nash
equilibrium of the game An! (N, v, 'Y, u).
168 A network-formation model in extensive form

We start by considering symmetric convex games with three players.


Let (N, v) be such a game and assume that N = {1, 2, 3}. Since the
value of a coalition depends on its size only, there exist numbers VI, v2,
and V3 such that v(T) = vITI for all T ~ N. For convenience, we assume
that (N, v) is zero-normalized, i.e., VI = O. Using the symmetry of the
players, it follows that in the complete network the players equally divide
the value of the grand coalition, i.e., p,(N, v, LN) = (~V3' ~V3' ~V3). We
will show that for every other network we can identify a pair of players
who are not linked with each other and who each receive at most as
much as they would in the complete network. As we argued before, it
suffices to consider non-isomorphic networks.
First, consider the empty network (N,0). According to the Myerson
value, each player i E N receives VI = 0 in the empty network. By
convexity of (N, v) it follows that 3VI :S V3, or VI :S ~V3. This implies
that P,i(N, v, 0) :S p,(N, v, LN) for all i E N. Thus, we find that, while
there are no links in the empty network, all players would have at least
as high a payoff in the complete network.
Secondly, consider a network with one link, say link 12. It is easily
verified that p,(N, v, {12}) = (~V2,~V2'0). Convexity of (N,v) implies
that V3 - V2 ~ V2 - VI and V2 - VI ~ vI· Using these inequalities and
Vl = 0, it is a straightforward exercise to show that ~V3 ~ ~V2. Thus, we
find that, while links 13 and 23 are not included in network (N, {12}),
all players would have at least as high a payoff in the complete network.
Finally, consider a network with two links, say links 12 and 13. In
this network, the payoffs to the players are p,(N, v, {12, 13}) = (~V3 +
~V2' ~V3 - ~V2' ~V3 - ~V2). Since V2 ~ 2VI ~ 0 by convexity and zero-
normalization of (N, v), it follows that players 2 and 3, would have at
least as high a payoff in the complete network. Note that link 23 is not
included in the network (N, {12, 13}).
So far, we have shown that for every network that is not complete
we can identify a pair of players who are not linked with each other
and who each receive a payoff that is lower than or equal to their payoff
in the complete network. Using lemma 6.1, we now conclude that the
complete network is supported by a subgame-perfect Nash equilibrium
of the network-formation game ~nf (N, v, p" a), in which the Myerson
value is used to determine the players' payoffs. Note that this holds for
any order a of the pairs of players.
Using a method similar to the one demonstrated above, namely simple
computation and comparison, Slikker and Norde (2000) prove that a
similar result holds for all symmetric convex games with at most five
players. We state their result wit.hout a proof.
Symmetric convex games 169

THEOREM 6.1 Let (N, v) be a symmetric convex game with at least two
and at most five players. Then the complete network (N, LN) is sup-
ported by a subgame-perfect Nash equilibrium of the network-formation
games D,.nf (N, v, /-l, a) for any order a of the pairs of players.

A game is called strictly convex if the convexity inequality (3.5) holds


with strict inequality for all pairs of nested coalitions, i.e., v(S U i) -
v(S) :::; v(T U i) - v(T) for all i E N and all S, T ~ N\i with SeT.
With the help of lemma 6.2 it can be shown that for a symmetric and
strictly convex game (N, v) with at least two and at most five players it
holds that a network in which the players have payoffs that are different
from those they obtain in the complete network is not supported by
a subgame-perfect Nash equilibrium of D,.nf(N,v,/-l,(J). Note that this
implies that, while there might be several networks that are supported by
subgame-perfect Nash equilibria, the payoffs of the players are uniquely
determined in such equilibria. This result is captured in the following
theorem. We refer the reader to Slikker and Norde (2000) for its proof.

THEOREM 6.2 Let (N,v) be a symmetric and strictly convex game with
at least two and at most five players. For any order a of the pairs of play-
ers it holds /-l(N, v, L) = /-l(N, V, LN) for all networks (N, L) that are sup-
ported by a subgame-perfect Nash equilibrium of the network-formation
game D,.nf(N,v,/-l,a).

In the remainder of this section we focus on the question whether


theorem 6.2 can be extended to games with six or more players. In
our analysis, we use the 6-player symmetric game (N, v) with player set
N = {I, 2, 3, 4, 5, 6} and characteristic function v described by

v = 60 L Ui,j + 900UN· (6.2)


i,jEN:iij

It is a straightforward exercise to show that this game is strictly convex.


For this game we can identify a network that has the property that
all players whose payoffs are less than or equal to their payoffs in the
complete network have already formed all possible links between them.
We denote by (N, L*) the network that contains all possible links except
for the links 15, 25, 36, and 46, i.e.,

L* = {I2,I3,I4,16,23,24,26,34,35,45,56}.
The Myerson value of communication situation (N, v, L*) equals

/-l(N, v, L*) = (301,301,301,301,298,298).


170 A network-fomwtion model in extensive form

U sing the symmetry of the players, we quickly find the payoffs of the
players in the complete network. These are

p,(N, v, LN) = (300,300,300,300,300,300).

Cooperation structures (N, LN) and (N, L*) are shown in figure 6.4.

3 4 5

2 1 6

a: (N, LN) b: (N, L*)

Figure 6.4. Networks (N,LN) and (N,L*)

Players 5 and 6 are the only players who are worse off in network
(N,L*) than they arc in the complete network (N,Vv), while all other
players have a higher payoff in network (N, L*). Because link 56 is
already included in network (N, L *), we cannot apply lemma 6.1 in an
attempt to prove that the complete network is supported by a subgame-
perfect Nash equilibrium.
Though we now know that we cannot prove a generalization of theo-
rem 6.2 to more than five players using lemma 6.1, the analysis so far
does not imply that such a generalization cannot be obtained. There-
fore, Slikker and Norde (2000) turn to a more extensive analysis of the
6-player game described in (6.2). They provide a survey of the payoffs for
the communication situations (N, v, L) for all of the 156 non-isomorphic
networks (N, L) with six players. Using these payoffs and the generalized
version of lemma 6.2 that was mentioned on page 167, they prove that
for the 6-player game described by (6.2) and starting with an arbitrary
network, each subgame-perfect Nash equilibrium results in the forma-
tion of a network that is payoff equivalent to the complete network or
isomorphic to network (N, L*). Up to isomorphisms, only six networks
satisfy this condition, namely the networks (N, LN) and (N, L*), which
were already represented in figure 6.4, and four networks that are payoff
equivalent to (N, L N ), which we represent in figure 6.5.
We remark that, although it suffices to consider these six non-isomor-
phic networks, there are many more networks that are payoff equivalent
to the complete network or isomorphic to network (N, L*). For example,
Symmetric convex games 171

3 5 345

6
~
2 1 6

3 4 5 345

D
2 1 6

Figure 6.5. Four networks that are payoff equivalent t.o (N, LN)

there exist ninety networks that are, up to isomorphisms, the same as


network (N, L*).
The main result in Slikker and Norde (2000) implies that there are
networks that are not payoff equivalent to the complete network but
that are supported by subgame-perfect Nash equilibria. To formulate
their result, we need some terminology. Note that in any network that
is isomorphic to (N, L *) there are two players whose payoff is lower then
the payoff they would get in the complete network. We say that these
two players are exploited by the others.
THEOREM 0.3 Let (N,v) be the 6-player game described in (6.2) and let
0" be any order of the pairs of players. For each pair of players i, j E N
it holds that there exists a network (N, L) that is isomorphic to (N, L *)
and in which players i and j are exploited such that (N, L) is supported
by a subgame-perfect Nash equilibrium of the network-formation game
tJ..nf(N, v, fJ"0").
Theorem 6.3 states that, independent of the initial order of the pairs
of players, any pair of players can end up being exploited in a subgame-
perfect Nash equilibrium. This implies that theorem 6.2 cannot be ex-
tended to games with more than five players. However, note that theo-
rem 0.3 does not imply that a specific network, for example the complete
network, is not supported by a subgame-perfect Nash equilibrium. It is
unknown whether all subgame-perfect Nash equilibria result in the for-
mation of a network in which two players are exploited. Hence, we still
172 A network-formation model in extensive form

do not know whether theorem 6.1 can be extended to games with more
than five players.
Chapter 7

A NETWORK FORMATION MODEL IN


STRATEGIC FORM

In the current chapter, which is based on Dutta et al. (1998), we study


strategic-form games of network formation. Like in chapter 6, we model
situations in which the eventual distribution of payoffs is determined
in two distinct stages. In the first stage players form links and in the
second stage players negotiate over the division of the payoff, given the
network that has been formed in the first stage. We model the first stage,
the process of network formation, as a game in strategic form, in which
players' choices whether or not to form links are made simultaneously
rather than sequentially. Like in the previous chapter, we do not Illodel
the second stage explicitly but use an exogenously given allocation rule
to determine the payoffs to the players once a network has been formed.
However, rather than focusing solely on the Myerson value, we consider a
more general class of allocation rules which includes the Myerson value.
It is relatively easy to obtain general results for the strategic-form
games of network formation. We show that several equilibrium refine-
ments predict the formation of the complete network or some network in
which the players get the same payoffs as in the complete network, if the
underlying coalitional game is superadditive. We conclude this chapter
with a comparison of the network-formation model in strategic form of
this chapter with the network-formation model in extensive form of the
previous chapter and some remarks on related liter(J,ture.
We start in section 7.1 with the formal definition of the model and
a description of the class of allocation rules under consideration. We
then study Nash equilibria and strong Nash equilibria in section 7.2,
and undominated Nash equilibria and coalition-proof Nash equilibria in
section 7.3. Section 7.4 contains a discussion of the model and section
7.5 concludes with a discussion of some related literature.
174 A network-formation model in stmtegic form

7.1 DESCRIPTION OF THE MODEL


The possible gains from cooperation between the players are described
by a coalitional game. Knowing this underlying game, the players decide
what communication links to form, because they cannot realize any gains
from cooperation in the absence of communication links. We model the
players' decisions on whether or not to form a link with other agents as
a game in strategic form. In this network-formation game, each player
announces a set of players with whom he wants to form a link. A link
is formed between players i and j if and only if they both want to form
the link. This results in the formation of a specific network among
the players. An exogenously given allocation rule for communication
situations then gives the payoffs to the individual players. The strategic-
form games of network formation that we are about to describe formally
were originally introduced by Myerson (1991) (p. 448).22
Let (N, v) be a coalitional game and let , be an allocation rule for
communication situations. The set of strategies available to a player i is
Si = 2N \i. A strategy Si c::: N\ i of player i is an announcement of the set
of players he wants to form communication links with. A communication
link between two players will be formed if and only if both players want
to form the link. Hence, a strategy profile s E S = TIiEN Si gives rise to
a unique network with the set of links

L(s) = {ij liE Sj, j E Si}. (7.1)

The payoffs to the players are given by ,(N,v,L(s)), i.e., their payoffs
according to , in the resulting communication situation (N, v, L (s )).
The network-formation game in strategic form rnJ (N, v, ,) is described
by the tuple (N; (Si)iEN; U;')iEl'V), where Si = 2N\i for each i EN and
the payoff function f"f = U?)iEN is defined by

r(s) = ,(N,v,L(s)) for all s E S. (7.2)

The following example illustrates the model described above.

EXAMPLE 7.1 Let (N, v) be the 2-player coalitional game with N =


{1,2} and v = 2'UN. We will use the Myerson value p, to determine the
payoffs to the players for any given network. In the network-formation
game rnJ (N, v, p,) every player can choose between two strategies, rep-
resenting whether or not he wants to form a link with the other player.

22See also Hart aud Kurz (1983), who discuss a similar strategic-form game in the context
of the endogenous formation of coalition structures.
Description of the model 175

Link 12 will be formed if and only if both players indicate that they
want to form it, and it will not be formed if at least one player indicates
that he doeJ not want to form it. For example, if player 1 plays 81 = 0
and player 2 plays t2 = {I}, then no links will be formed. Both players
get a payoff of 0 if no links are formed. Using J.L(N, v, {12}) = (1,1),
we find the payoffs to the players for all possible strategy combinations.
The game rnf(N,v,J.L) is represented in figure 7.1.

81 = 0
h = {2}
Figure 7.1. Network-formation game rnf(N,v,/L)

This game has multiple Nash equilibria. Strategy profile (81,82)


(0,0) is a Nash equilibrium since no player can achieve a positive profit
by unilaterally deviating. The reason for this is that it takes two players
to form a link. Another Nash equilibrium is (t1' t2) = ({2}, {I}). Any
unilateral deviation from this strategy profile results in a decrease in the
payoff of the deviating player, from 1 to O. 0

In section 7.2 we will show that in general there is an abundance of
Nash equilibria in the strategic-form games of network formation whenever
the underlying coalitional game is superadditive and the allocation
rule γ satisfies some basic properties. One of these properties is component
efficiency, which was defined in section 2.2. The other two properties
are weak link symmetry and the improvement property, which we
introduce now.23

Weak Link Symmetry An allocation rule γ on a class CS of communication
situations satisfies weak link symmetry if for every communication
situation (N, v, L) ∈ CS and every link ij it holds that if
γ_i(N, v, L ∪ ij) > γ_i(N, v, L), then γ_j(N, v, L ∪ ij) > γ_j(N, v, L).

This property is a weak form of fairness. It says that if the formation
of a new link between players i and j is profitable for player i, then it
must also improve the payoff of player j.

Improvement Property An allocation rule γ on a class CS of communication
situations satisfies the improvement property if for every
communication situation (N, v, L) ∈ CS and every link ij it holds that
if there exists a k ∈ N\{i, j} such that γ_k(N, v, L ∪ ij) > γ_k(N, v, L),
then γ_i(N, v, L ∪ ij) > γ_i(N, v, L) or γ_j(N, v, L ∪ ij) > γ_j(N, v, L).

23 We remind the reader of our policy on domains when describing properties of allocation
rules, which is described in remark 2.2 on page 33.

The improvement property states that if the formation of a new link
between players i and j improves the payoff of a third player k, then at
least one of the two players forming the link should also benefit.
Before we continue, we point out that component efficiency, weak link
symmetry, and the improvement property are independent, i.e., none of
these three properties is implied by the other two. We illustrate this
for weak link symmetry in the following example and leave it to the
reader to find examples for component efficiency and the improvement
property.

EXAMPLE 7.2 The proportional links allocation rule, denoted γ^P, is
given by

    γ^P_i(N, v, L) = (|L_i| / Σ_{j∈C_i(L)} |L_j|) · v(C_i(L))   if L_i ≠ ∅,
    γ^P_i(N, v, L) = v(i)                                      if L_i = ∅,

for all (N, v, L) ∈ CS and all i ∈ N. This allocation rule divides the
value of each component of a communication graph among the players
who form it in proportion to the number of links each player in the
component has formed. This allocation rule captures the notion that
the more links a player has with other players, the better are his relative
prospects in the subsequent negotiations over the division of the payoff.
Denote the class of communication situations with an underlying game
that assigns a non-negative value to each coalition by CS+. Clearly
then, the proportional links allocation rule satisfies component efficiency
and the improvement property on CS+. It does not, however, satisfy
weak link symmetry on this class. To see this, consider the formation
of link 12 in the superadditive coalitional game ({1, 2}, v) with v =
u_1 + u_{1,2}. It holds that γ^P_1({1, 2}, v, ∅) = 1 = γ^P_1({1, 2}, v, {12}), but
γ^P_2({1, 2}, v, ∅) = 0 < 1 = γ^P_2({1, 2}, v, {12}). ◊
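
A short Python sketch of γ^P under the definition above may clarify the two cases; the helper and variable names are ours, and the small check at the end reproduces the numbers of example 7.2.

```python
def components(N, L):
    """Connected components of the player set N under the links in L."""
    comps, seen = [], set()
    for start in N:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack.extend(j for link in L if i in link for j in link if j != i)
        comps.append(frozenset(comp))
        seen |= comp
    return comps

def proportional_links(N, v, L):
    """gamma^P: split v(C) within each component in proportion to link counts;
    an isolated player receives v({i})."""
    degree = {i: sum(1 for link in L if i in link) for i in N}
    payoff = {}
    for C in components(N, L):
        total = sum(degree[i] for i in C)
        for i in C:
            payoff[i] = v({i}) if degree[i] == 0 else degree[i] / total * v(C)
    return payoff

# Example 7.2: v = u_1 + u_{1,2} on N = {1, 2}.
v = lambda T: (1 if 1 in set(T) else 0) + (1 if {1, 2} <= set(T) else 0)
print(proportional_links({1, 2}, v, set()))                  # {1: 1, 2: 0}
print(proportional_links({1, 2}, v, {frozenset({1, 2})}))    # {1: 1.0, 2: 1.0}
```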
The class of allocation rules for communication situations satisfying
component efficiency, weak link symmetry and the improvement prop-
erty is reasonably large. For example, it contains the Myerson value
as well as all weighted Myerson values on the class of communication
situations with a superadditive underlying game. 24 A weighted Myer-
son value is defined analogously to the Myerson value, as a weighted

24Slikker (2000a) provides a formal proof of this statement in the more general setting of
hypergraph communication situations.

Shapley value of the network-restricted game. For an extensive analysis
of weighted Myerson values we refer to Slikker and van den Nouweland
(2000a).
We show in the following lemma that any allocation rule on a class
CS^N_v of communication situations satisfying the three properties necessarily
satisfies a fourth property, called link monotonicity, if the underlying
coalitional game (N, v) is superadditive.

Link Monotonicity An allocation rule γ on a class CS of communication
situations satisfies link monotonicity if for every communication
situation (N, v, L) ∈ CS and every link ij it holds that

    γ_i(N, v, L ∪ ij) ≥ γ_i(N, v, L).

If the allocation rule used is link monotonic, then forming an additional
link will never result in a decrease in the payoff of one of the
players who form it.

LEMMA 7.1 Let (N, v) be a superadditive coalitional game and let γ
be an allocation rule on CS^N_v that satisfies component efficiency, weak
link symmetry, and the improvement property. Then γ satisfies link
monotonicity as well.

PROOF: Let (N, v, L) ∈ CS^N_v and consider a link ij ∉ L. Suppose that
γ_i(N, v, L ∪ ij) < γ_i(N, v, L). Then, by weak link symmetry of γ, we
must have γ_j(N, v, L ∪ ij) ≤ γ_j(N, v, L). We now (temporarily) distinguish
between two cases: N/(L ∪ ij) = N/L and N/(L ∪ ij) ≠ N/L.
If N/(L ∪ ij) = N/L, then i, j ∈ C_i(L) = C_i(L ∪ ij) and component
efficiency of γ implies that

    Σ_{k∈C_i(L∪ij)} γ_k(N, v, L ∪ ij) = v(C_i(L)) = Σ_{k∈C_i(L)} γ_k(N, v, L).    (7.3)

If N/(L ∪ ij) ≠ N/L, then C_i(L ∪ ij) = C_i(L) ∪ C_j(L). Component
efficiency of γ and superadditivity of (N, v) then imply

    Σ_{k∈C_i(L∪ij)} γ_k(N, v, L ∪ ij) = v(C_i(L ∪ ij))
                                      ≥ v(C_i(L)) + v(C_j(L))
                                      = Σ_{k∈C_i(L)∪C_j(L)} γ_k(N, v, L).    (7.4)

We conclude from (7.3) and (7.4) that

    Σ_{k∈C_i(L∪ij)} γ_k(N, v, L ∪ ij) ≥ Σ_{k∈C_i(L∪ij)} γ_k(N, v, L)    (7.5)

in both cases.
Now, γ_i(N, v, L ∪ ij) < γ_i(N, v, L), γ_j(N, v, L ∪ ij) ≤ γ_j(N, v, L), and (7.5)
imply that there exists a k ∉ {i, j} such that γ_k(N, v, L ∪ ij) > γ_k(N, v, L).
This shows that γ violates the improvement property. We conclude that
γ_i(N, v, L ∪ ij) ≥ γ_i(N, v, L) has to hold. □

Lemma 7.2 points out another consequence of the three basic properties.
It implies that if the formation of a link ij affects the payoff of
some other player k, then it must also affect the payoffs of both players
that formed the new link. This property will be used later on in this
chapter.

LEMMA 7.2 Let (N, v) be a superadditive coalitional game and let γ be
an allocation rule on CS^N_v that satisfies component efficiency, weak link
symmetry, and the improvement property. Also, let (N, v, L) ∈ CS^N_v be
a communication situation and let i, j ∈ N. Then it holds that if for
some k ∈ N\{i, j}, γ_k(N, v, L ∪ ij) ≠ γ_k(N, v, L), then γ_i(N, v, L ∪ ij) >
γ_i(N, v, L) and γ_j(N, v, L ∪ ij) > γ_j(N, v, L).

PROOF: Suppose that there exists a player k_1 ∈ N\{i, j} such that
γ_{k_1}(N, v, L ∪ ij) ≠ γ_{k_1}(N, v, L). If γ_{k_1}(N, v, L ∪ ij) > γ_{k_1}(N, v, L), then
it follows from weak link symmetry and the improvement property of γ
that γ_i(N, v, L ∪ ij) > γ_i(N, v, L) and γ_j(N, v, L ∪ ij) > γ_j(N, v, L).
Now, suppose that γ_{k_1}(N, v, L ∪ ij) < γ_{k_1}(N, v, L). It follows from
weak link symmetry of γ that either γ_i(N, v, L ∪ ij) > γ_i(N, v, L) and
γ_j(N, v, L ∪ ij) > γ_j(N, v, L), or γ_i(N, v, L ∪ ij) ≤ γ_i(N, v, L) and
γ_j(N, v, L ∪ ij) ≤ γ_j(N, v, L). We have established the lemma if we
show that the latter option leads to a contradiction.
So, suppose that γ_i(N, v, L ∪ ij) ≤ γ_i(N, v, L) and γ_j(N, v, L ∪ ij) ≤
γ_j(N, v, L). Like we did in the proof of lemma 7.1, we can now show that
component efficiency of γ and superadditivity of (N, v) imply that there
exists a player k_2 ∉ {i, j} such that γ_{k_2}(N, v, L ∪ ij) > γ_{k_2}(N, v, L). The
improvement property then states that γ_i(N, v, L ∪ ij) > γ_i(N, v, L) or
γ_j(N, v, L ∪ ij) > γ_j(N, v, L), which gives us the contradiction we were
looking for. □

7.2 NASH EQUILIBRIUM AND STRONG NASH EQUILIBRIUM

In this section, we address the question of what networks can be
formed in Nash equilibria and strong Nash equilibria of the network-formation
game that we formulated in section 7.1 if the underlying coalitional
game is superadditive. We show that there is an abundance of
Nash equilibria by proving that all networks can be supported by Nash
equilibria. This makes Nash equilibrium useless for making predictions
about which network structures will prevail. Hence, we consider
refinements of Nash equilibrium and start by studying strong Nash equilibrium.
However, such equilibria do not always exist in the network-formation
game in strategic form. This urges us to study less demanding
refinements of Nash equilibrium in section 7.3.
We start by studying Nash equilibria of the strategic-form games of
network formation. Theorem 7.1 shows that Nash equilibrium does not
enable us to distinguish between different networks. If the underlying
coalitional game is superadditive and the allocation rule γ satisfies the
three basic properties listed in section 7.1, then γ is link monotonic.
Hence, no player wants to unilaterally break a link. To form a new link,
two players have to be willing to form it. We will use these two facts to
show that any network can be supported by a Nash equilibrium.
THEOREM 7.1 Let (N, v) be a superadditive coalitional game and let γ
be an allocation rule on CS^N_v that satisfies component efficiency, weak
link symmetry, and the improvement property. Then any network (N, L)
can be supported by a Nash equilibrium of the network-formation game
Γ^nf(N, v, γ).

PROOF: Let (N, L) be a network. Define the strategy profile s = (s_i)_{i∈N}
such that each player announces that he wants to form links with exactly
those players to whom he is connected directly in network (N, L), i.e.,
s_i = {j ∈ N\i | ij ∈ L} for each i ∈ N. Obviously, the set of links that is
formed if the players play these strategies is L(s) = L. To complete the
proof, we show that s is a Nash equilibrium of Γ^nf(N, v, γ).
Consider a fixed player i ∈ N. Since by definition of the strategies
(s_i)_{i∈N} it holds that i ∈ s_j if and only if j ∈ s_i, no new links will
be formed if player i announces that he wants to form an additional
link with a player j ∉ s_i. Hence, such a change in his strategy will not
influence the payoff to player i. So, the only change in player i's strategy
that can possibly change his payoff is to break currently existing links.
However, lemma 7.1 shows that such an action can only lower player i's
payoff. We conclude that player i does not have a profitable deviation
from s and, consequently, that s is a Nash equilibrium of Γ^nf(N, v, γ). □

The conclusion that every network can be supported by a Nash equilibrium
seems to be largely due to the fact that it takes two players to
form a new link, whereas the Nash equilibrium concept considers only
deviations by one player at a time. This inspires us to turn our attention
to strong Nash equilibria, in which deviations by coalitions of
players are allowed. However, the following example shows that strong
Nash equilibria might not exist.

EXAMPLE 7.3 Consider the superadditive coalitional game (N, v) on the
player set N = {1, 2, 3} with characteristic function v defined by

    v = 6u_{1,2} + 6u_{1,3} + 6u_{2,3} − 12u_N.

Consider the network-formation game Γ^nf(N, v, μ). Recall that the Myerson
value satisfies component efficiency, weak link symmetry, and the
improvement property. We will show that no network can be formed
in a strong Nash equilibrium, which leads to the conclusion that the
network-formation game Γ^nf(N, v, μ) does not have any strong Nash
equilibrium.
There is no strong Nash equilibrium of the game Γ^nf(N, v, μ) that
results in the formation of a network with no links at all, because each
player receives 0 in this network, whereas any two players can deviate
and form the link between them to each obtain 3.
Also, there is no strong Nash equilibrium of the game Γ^nf(N, v, μ)
that results in the formation of the complete network. According to the
Myerson value each player receives 2 in this network, whereas any two
players can deviate and break their links with the third player to each
obtain 3.
We now turn our attention to networks with two links. Consider, for
example, the network (N, {12, 13}), in which player 1 gets 4 according
to the Myerson value and players 2 and 3 get 1 each. Players 2 and 3
can deviate to the strategies s'_2 = {3} and s'_3 = {2}, which results in
breaking their links with player 1 and forming a link with each other.
This improves the payoffs of both player 2 and player 3 by 2, from
1 to 3. This shows that there is no strong Nash equilibrium of the
game Γ^nf(N, v, μ) that results in the formation of network (N, {12, 13}).
Obviously, the same conclusion holds for any other network with two
links.
Finally, we consider networks with one link. Such networks are not
supported by a strong Nash equilibrium, because one of the linked players
and the isolated player can deviate and form an additional link, which
improves both of these deviating players' payoffs by 1. ◊
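
The case analysis of example 7.3 can also be checked exhaustively by brute force: enumerate all 4^3 strategy profiles and test whether any of them survives every coalitional deviation. The sketch below is ours; it hard-codes the Myerson payoffs reported in the example rather than recomputing them, and it uses the "all deviating members strictly gain" notion of a profitable coalitional deviation used in the example.

```python
from itertools import chain, combinations, product

N = (1, 2, 3)

def myerson_payoff(network):
    """Myerson payoffs of the example 7.3 game, by network shape
    (values from the example: 3-3 on one link, 4-1-1 on two links, 2-2-2 complete)."""
    pay = {i: 0 for i in N}
    if len(network) == 1:
        for i in next(iter(network)):
            pay[i] = 3
    elif len(network) == 2:
        degree = {i: sum(1 for l in network if i in l) for i in N}
        pay = {i: 4 if degree[i] == 2 else 1 for i in N}
    elif len(network) == 3:
        pay = {i: 2 for i in N}
    return pay

def formed_links(profile):
    return {frozenset({i, j}) for i, j in combinations(N, 2)
            if j in profile[i] and i in profile[j]}

def subsets(xs):
    return list(chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1)))

strategy_sets = {i: [frozenset(s) for s in subsets([j for j in N if j != i])] for i in N}

def is_strong_nash(profile):
    """True if no coalition has a joint deviation making every member strictly better off."""
    base = myerson_payoff(formed_links(profile))
    for T in subsets(N):
        if not T:
            continue
        for deviation in product(*(strategy_sets[i] for i in T)):
            new = {**profile, **dict(zip(T, deviation))}
            pay = myerson_payoff(formed_links(new))
            if all(pay[i] > base[i] for i in T):
                return False
    return True

profiles = [dict(zip(N, p)) for p in product(*(strategy_sets[i] for i in N))]
print(any(is_strong_nash(p) for p in profiles))   # False: no strong Nash equilibrium
```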

A natural reaction is to think that for strong Nash equilibria to exist,


one needs some condition on the underlying game. Note that the game
in the previous example is not balanced, i.e., it has an empty core.
Maybe some condition like balancedness of (N, v) would be sufficient to
guarantee the existence of strong Nash equilibria. This, however, is not
the case, as the following example shows.

EXAMPLE 7.4 Consider the coalitional game (N, v) with player set N =
{1, 2, 3, 4, 5, 6} and characteristic function v defined by

    v = 60 Σ_{i,j∈N: i<j} u_{i,j} + 900u_N.    (7.6)

Note that this is the game that we studied extensively in section 6.4. It
is convex, and therefore balanced. Also, it is easily seen that this game is
strictly superadditive, i.e., v(T_1 ∪ T_2) > v(T_1) + v(T_2) for all non-empty
T_1, T_2 ⊆ N with T_1 ∩ T_2 = ∅. Strict superadditivity of (N, v) implies that
the Myerson value μ satisfies the property that two players who form
an additional link both (strictly) improve their payoffs.25 Therefore, the
complete network is the only network that can possibly be supported by
a strong Nash equilibrium of Γ^nf(N, v, μ).
We recall from section 6.4 that

    μ(N, v, L^N) = (300, 300, 300, 300, 300, 300)

and that

    μ(N, v, L*) = (301, 301, 301, 301, 298, 298)

for the network (N, L*) with links L* = {12, 13, 14, 16, 23, 24, 26, 34, 35,
45, 56} (see figure 6.4 (b)). This shows that the complete network is not
supported by a strong Nash equilibrium, because coalition {1, 2, 3, 4} can
deviate from the strategy profile that results in the complete network to
a strategy profile that induces the formation of network (N, L*), which
results in a higher payoff for all deviating players.
This shows that there is no strong Nash equilibrium of the network-formation
game Γ^nf(N, v, μ). ◊

25 Slikker (2000a) provides a direct proof of the statement that the Myerson value satisfies
link monotonicity on CS^N_v for any superadditive coalitional game (N, v). His proof is easily
extended to show that a strict version of link monotonicity is satisfied if the underlying game
is strictly superadditive.

The previous example shows that even convexity of the underlying
coalitional game is not sufficient for non-emptiness of the set of strong
Nash equilibria. On the other hand, we show in the following example
that balancedness of (N, v) is not even necessary for non-emptiness of
the set of strong Nash equilibria.

EXAMPLE 7.5 Consider the superadditive game (N, v) with player set
N = {1, 2, 3} and characteristic function v = 18u_{1,2} + 18u_{1,3} + 6u_{2,3} −
24u_N. It is easily seen that this game has an empty core and, hence, that
it is not balanced. However, using the Myerson value to determine players'
payoffs in the various networks, we find that the network-formation
game Γ^nf(N, v, μ) has a strong Nash equilibrium. To show this, we first
compute the Myerson values for all networks on N. These are as follows:
μ(N, v, ∅) = (0, 0, 0), μ(N, v, {12}) = (9, 9, 0), μ(N, v, {13}) = (9, 0, 9),
μ(N, v, {23}) = (0, 3, 3), μ(N, v, {12, 13}) = (12, 3, 3), μ(N, v, {12, 23})
= (7, 10, 1), μ(N, v, {13, 23}) = (7, 1, 10), and μ(N, v, L^N) = (10, 4, 4).
The strategies s_i = N\i for all i ∈ {1, 2, 3} result in the formation of
the complete network (N, L^N) with payoffs of 10 for player 1 and 4 for
players 2 and 3 each. Since these strategies form a Nash equilibrium
(see the proof of theorem 7.1), and since there is no network that gives
all three players a higher payoff than they get in the complete network,
we only need to check that there is no 2-player coalition that wants to
deviate from (s_1, s_2, s_3). Since network (N, {12, 13}) is the only network
in which player 1 gets a higher payoff than he gets in the complete network,
but players 2 and 3 each get a lower payoff in this network, the
only 2-player coalition that can possibly have a deviation that is profitable
to all its members is coalition {2, 3}. However, there is no network
in which both players 2 and 3 get a payoff higher than 4. We conclude
that (s_1, s_2, s_3) is a strong Nash equilibrium of the network-formation
game Γ^nf(N, v, μ). ◊

We have seen that balancedness of (N, v) is not necessary, while even
convexity of (N, v) is not sufficient for non-emptiness of the set of strong
Nash equilibria of the network-formation game Γ^nf(N, v, γ) for an allocation
rule γ that satisfies component efficiency, weak link symmetry,
and the improvement property. The issue of what are plausible necessary
and/or sufficient conditions for the existence of a strong Nash
equilibrium of the network-formation game remains an open question.

7.3 UNDOMINATED NASH EQUILIBRIUM AND COALITION-PROOF NASH EQUILIBRIUM
In the previous section we showed that every network can be formed
in a Nash equilibrium of the strategic-form game of network formation
for any superadditive game, if the allocation rule satisfies the three ba-
sic properties in section 7.1. Hence, Nash equilibrium has no predictive
power in these circumstances and we turned our attention to refine-
ments of Nash equilibrium. We showed that strong Nash equilibrium is
not a useful refinement in this context, since we cannot determine nec-
essary conditions for the set of strong Nash equilibria to be nonempty.
Hence, in the current section we study less demanding refinements of
Nash equilibrium. We show that undominated Nash equilibrium and
coalition-proof Nash equilibrium predict the formation of the complete
network or a network that is payoff-equivalent to the complete network.
The following lemma, which is taken from Slikker (2000a), is essential
to prove the statements that undominated Nash equilibria and coalition-
proof Nash equilibria result in the formation of networks that are payoff-
equivalent to the complete network.

LEMMA 7.3 Let (N, v) be a superadditive coalitional game and let γ
be an allocation rule on CS^N_v that satisfies component efficiency, weak
link symmetry, and the improvement property. Consider the network-formation
game Γ^nf(N, v, γ). Let i ∈ N and let s_i, s'_i ∈ S_i with
s_i ⊆ s'_i be two nested strategies of player i. Suppose that s_{-i} ∈ S_{-i} is a
strategy profile for the players in N\i such that f^γ_i(s_i, s_{-i}) = f^γ_i(s'_i, s_{-i}).
Then f^γ(s_i, s_{-i}) = f^γ(s'_i, s_{-i}).

PROOF: If L(s_i, s_{-i}) = L(s'_i, s_{-i}), then the statement in the lemma
is obviously true. So, suppose L(s_i, s_{-i}) ≠ L(s'_i, s_{-i}). We denote L =
L(s_i, s_{-i}) and use the fact that s_i ⊆ s'_i to derive that there exists a
non-empty set of links L', each containing player i, such that L(s'_i, s_{-i}) =
L(s_i, s_{-i}) ∪ L'. Fix a link ij ∈ L'.
Using link monotonicity of γ (see lemma 7.1) repeatedly, we derive that
γ_i(N, v, L) ≤ γ_i(N, v, L ∪ ij) ≤ γ_i(N, v, L ∪ L'). Note that we use the
fact that every link in L' contains player i. Since γ_i(N, v, L) = γ_i(N, v, L ∪ L') by
assumption, we conclude that γ_i(N, v, L) = γ_i(N, v, L ∪ ij). Using lemma
7.2, we see that γ_k(N, v, L) = γ_k(N, v, L ∪ ij) for all k ∈ N\{i, j} has to
hold. Also, it follows from weak link symmetry and link monotonicity
of γ that γ_j(N, v, L) = γ_j(N, v, L ∪ ij) has to hold. This shows that
γ(N, v, L) = γ(N, v, L ∪ ij).
Adding the links in L' one by one and repeating the arguments above
for each of those links, it follows that γ(N, v, L) = γ(N, v, L ∪ L'). This
shows that f^γ(s_i, s_{-i}) = f^γ(s'_i, s_{-i}). □

Link monotonicity of γ implies that for every player it is a weakly dominant
strategy in the network-formation game Γ^nf(N, v, γ) to announce
that he wants to form links with all other players. For notational convenience
we denote this strategy of player i by s̄_i = N\i, and we denote
s̄ = (s̄_i)_{i∈N}.

LEMMA 7.4 Let (N, v) be a superadditive coalitional game and let γ
be an allocation rule on CS^N_v that satisfies component efficiency, weak
link symmetry, and the improvement property. Then s̄ is a profile of
weakly dominant strategies in the associated network-formation game
Γ^nf(N, v, γ).

PROOF: Let i ∈ N and let s_i ∈ S_i be an arbitrary strategy for player i.
We will show that s̄_i weakly dominates s_i. Fix a profile s_{-i} ∈ S_{-i} for
the players in N\i. Denote L = L(s_i, s_{-i}) and L̄ = L(s̄_i, s_{-i}). Note that
L ⊆ L̄ because s_i ⊆ s̄_i. Also, each l ∈ L̄\L contains player i. Adding
the links in L̄\L one by one and applying link monotonicity repeatedly,
we obtain

    f^γ_i(s̄_i, s_{-i}) = γ_i(N, v, L̄) ≥ γ_i(N, v, L) = f^γ_i(s_i, s_{-i}).

Because this holds for all s_{-i} ∈ S_{-i}, we conclude that s̄_i weakly dominates
s_i. □

Using lemmas 7.3 and 7.4, we quickly obtain that undominated Nash
equilibria of the network-formation game lead to the formation of networks
that are payoff-equivalent to the complete network. We refer to
such networks as essentially complete. Formally, a network (N, L) is
essentially complete for γ if γ(N, v, L) = γ(N, v, L^N). Hence, if (N, L)
is essentially complete for γ, but L ≠ L^N, then the set of links L^N\L is
inessential in the sense that adding this set of links does not change the
payoffs to the players. Note that a network that is essentially complete
for an allocation rule γ is not necessarily essentially complete for another
allocation rule γ'.

THEOREM 7.2 Let (N, v) be a superadditive coalitional game and let γ
be an allocation rule on CS^N_v that satisfies component efficiency, weak
link symmetry, and the improvement property. Then s̄ is an undominated
Nash equilibrium of the network-formation game Γ^nf(N, v, γ).
Moreover, if s is an undominated Nash equilibrium of Γ^nf(N, v, γ), then
(N, L(s)) is essentially complete for γ.

PROOF: We showed in lemma 7.4 that s̄_i is a weakly dominant strategy
in the game Γ^nf(N, v, γ) for each player i ∈ N. Clearly then, s̄ is an
undominated Nash equilibrium of Γ^nf(N, v, γ).
To prove the second statement in the theorem, suppose that s ≠ s̄
is an undominated Nash equilibrium of Γ^nf(N, v, γ). We will show that
(N, L(s)) is essentially complete for γ.
For each player i ∈ N, since s_i is undominated and s̄_i weakly dominates
s_i, it easily follows that f^γ_i(s̄_i, s_{-i}) = f^γ_i(s_i, s_{-i}) for every s_{-i} ∈
S_{-i}. In particular, we find for each i ∈ N = {1, ..., n} that

    f^γ_i(s_{{1,...,i}}, s̄_{{i+1,...,n}}) = f^γ_i(s_{{1,...,i-1}}, s̄_{{i,...,n}}).

Since s_i ⊆ s̄_i for each i ∈ N, it follows from lemma 7.3 that

    f^γ(s_{{1,...,i}}, s̄_{{i+1,...,n}}) = f^γ(s_{{1,...,i-1}}, s̄_{{i,...,n}})

for all i ∈ N. We conclude that γ(N, v, L(s)) = f^γ(s) = f^γ(s̄) =
γ(N, v, L^N) and, hence, that (N, L(s)) is essentially complete for γ. □
The following example shows that using link monotonicity instead of
weak link symmetry does not guarantee the validity of the statements
in theorem 7.2.

EXAMPLE 7.6 Consider the superadditive coalitional game (N, v) with
player set N = {1, 2, 3} and characteristic function v = u_{1,2} + u_{1,3} +
u_{2,3} + 2u_N. Define the allocation rule γ on CS^N_v by γ(N, v, ∅) = (0, 0, 0),
γ(N, v, {12}) = γ(N, v, {23}) = (0, 1, 0), γ(N, v, {13}) = (0, 0, 1),
γ(N, v, {12, 13}) = (2, 2, 1), γ(N, v, {12, 23}) = (1, 4, 0), and, finally,
γ(N, v, {13, 23}) = γ(N, v, L^N) = (1, 3, 1). It is easily checked that this
allocation rule satisfies component efficiency, link monotonicity, and the
improvement property. However, it does not satisfy weak link symmetry.
Consider the network-formation game Γ^nf(N, v, γ). In this game,
strategy s̄_1 = {2, 3} is a dominant strategy for player 1 and strategy
s̄_2 = {1, 3} is a dominant strategy for player 2. Further, the strategies
s_3 = {1} and s̄_3 = {1, 2} are both weakly dominant strategies
for player 3. We find that s̄ is an undominated Nash equilibrium of
Γ^nf(N, v, γ). This is in line with the first part of theorem 7.2, which we
established using only link monotonicity. However, we also find that
s = (s̄_1, s̄_2, s_3) is an undominated Nash equilibrium of Γ^nf(N, v, γ).
Since f^γ(s) = γ(N, v, {12, 13}) = (2, 2, 1) ≠ (1, 3, 1) = γ(N, v, L^N),
(N, L(s)) is not essentially complete for γ. ◊

We now turn our attention to coalition-proof Nash equilibria. In example
5.17 we showed that, in general, even if each player has a dominant
strategy in a game, a coalition-proof Nash equilibrium might involve the
play of dominated strategies. This example shows that undominated
Nash equilibria and coalition-proof Nash equilibria may lead to very different
payoffs to the players. However, theorem 7.3 shows that for the
strategic-form games of network formation coalition-proof Nash equilibria
lead to similar outcomes as undominated Nash equilibria.

THEOREM 7.3 Let (N, v) be a superadditive game and let γ be an allocation
rule on CS^N_v that satisfies component efficiency, weak link symmetry,
and the improvement property. Then s̄ is a coalition-proof Nash
equilibrium of the network-formation game Γ^nf(N, v, γ). Moreover, if s
is a coalition-proof Nash equilibrium of Γ^nf(N, v, γ), then (N, L(s)) is
essentially complete for γ.

PROOF: For notational convenience, we denote the set of coalition-proof
Nash equilibria of a game Γ by CPNE(Γ). Also, we denote Γ* =
Γ^nf(N, v, γ). We will actually prove a generalized version of the theorem.
We will prove that for each coalition T ⊆ N and all s_{N\T} ∈ S_{N\T} it holds
that s̄_T ∈ CPNE(Γ*(s_{N\T})) and that for all s'_T ∈ CPNE(Γ*(s_{N\T})) it
holds that f^γ(s'_T, s_{N\T}) = f^γ(s̄_T, s_{N\T}). We prove this by induction on
the number of elements of T.
Let i ∈ N and s_{N\i} ∈ S_{N\i}. In lemma 7.4 we proved that s̄_i is a
weakly dominant strategy in Γ*. Hence, for all s_i ∈ S_i it holds that

    f^γ_i(s̄_i, s_{N\i}) ≥ f^γ_i(s_i, s_{N\i}).

Thus, s̄_i ∈ CPNE(Γ*(s_{N\i})). Now, let s'_i ∈ CPNE(Γ*(s_{N\i})). Since
s'_i and s̄_i are both coalition-proof Nash equilibria in the 1-player game
Γ*(s_{N\i}), it follows that f^γ_i(s'_i, s_{N\i}) = f^γ_i(s̄_i, s_{N\i}). Using lemma 7.3 it
follows that

    f^γ(s'_i, s_{N\i}) = f^γ(s̄_i, s_{N\i}).

Now, let T ⊆ N with |T| > 1. Assume that for all R ⊆ N with
|R| < |T| and for all s_{N\R} ∈ S_{N\R} we have s̄_R ∈ CPNE(Γ*(s_{N\R})) and
f^γ(s'_R, s_{N\R}) = f^γ(s̄_R, s_{N\R}) for all s'_R ∈ CPNE(Γ*(s_{N\R})).

Let s_{N\T} ∈ S_{N\T}. Using the induction hypothesis, we derive that
s̄_R ∈ CPNE(Γ*(s̄_{T\R}, s_{N\T})) for all R ⊂ T. This shows that s̄_T is self-enforcing
in Γ*(s_{N\T}).
Suppose that s'_T is self-enforcing in Γ*(s_{N\T}) as well. Let i ∈ T be
fixed for the moment. Because s̄_i is a weakly dominant strategy of player
i, we know that

    f^γ_i(s̄_T, s_{N\T}) ≥ f^γ_i(s'_i, s̄_{T\i}, s_{N\T}).    (7.7)

Because s'_T is self-enforcing, s'_{T\i} ∈ CPNE(Γ*(s'_i, s_{N\T})) and, hence, it
follows from the induction hypothesis that

    f^γ(s'_i, s̄_{T\i}, s_{N\T}) = f^γ(s'_T, s_{N\T}).    (7.8)

Combining (7.7) and (7.8), we see that

    f^γ_i(s̄_T, s_{N\T}) ≥ f^γ_i(s'_T, s_{N\T}).    (7.9)

We conclude that s̄_T ∈ CPNE(Γ*(s_{N\T})).
In order to prove that f^γ(s'_T, s_{N\T}) = f^γ(s̄_T, s_{N\T}) for all s'_T ∈
CPNE(Γ*(s_{N\T})), let s'_T ∈ CPNE(Γ*(s_{N\T})). Note that this implies
that s'_T is self-enforcing, so that we can apply (7.7), (7.8), and (7.9).
Because s'_T ∈ CPNE(Γ*(s_{N\T})), (7.9) cannot hold with strict inequality
for all i ∈ T. Let i ∈ T be such that f^γ_i(s̄_T, s_{N\T}) = f^γ_i(s'_T, s_{N\T}).
Together with (7.8) this implies that (7.7) has to hold with equality for
i, i.e., f^γ_i(s̄_T, s_{N\T}) = f^γ_i(s'_i, s̄_{T\i}, s_{N\T}). We can thus apply lemma 7.3
and see that

    f^γ(s̄_T, s_{N\T}) = f^γ(s'_i, s̄_{T\i}, s_{N\T}).    (7.10)

We now obtain the desired result, namely f^γ(s'_T, s_{N\T}) = f^γ(s̄_T, s_{N\T}),
from (7.10) and (7.8). This completes the proof. □

The following example shows that not every strategy profile resulting
in the formation of an essentially complete network is necessarily a
coalition-proof Nash equilibrium.

EXAMPLE 7.7 Consider the superadditive coalitional game (N, v) with
N = {1, 2, 3, 4} and characteristic function v = 2 Σ_{T⊆N: |T|=2} u_T. Furthermore,
consider the Myerson value on CS^N_v and the network-formation
game Γ^nf(N, v, μ). Define strategy profile t by t_1 = t_3 = {2, 4} and
t_2 = t_4 = {1, 3}. If the players play these strategies, then the network
(N, L(t)) with L(t) = {12, 23, 34, 41} will be formed. This network
is represented in figure 7.2 (a). It follows by symmetry that
μ(N, v, L(t)) = (3, 3, 3, 3) = μ(N, v, L^N). Hence, (N, L(t)) is essentially
complete. We will show that t is not a coalition-proof Nash equilibrium
of Γ^nf(N, v, μ).

Figure 7.2. Networks (N, L(t)) and (N, L(s̄_1, t_2, s̄_3, t_4))

The deviation (s̄_1, s̄_3) by coalition {1, 3} results in L(s̄_1, t_2, s̄_3, t_4) =
{12, 13, 14, 23, 34}, the network represented in figure 7.2 (b). It follows
that

    μ_1(N, v, L(s̄_1, t_2, s̄_3, t_4)) = 1 + 1 + 1 + 0 + 0 + 2/3 + 0 − 1/2 = 3 1/6
                                   > 3 = μ_1(N, v, L(t)).

A similar result holds for the payoff of player 3. Using link monotonicity
of μ, we conclude that neither player 1 nor player 3 has a deviation from
(s̄_1, t_2, s̄_3, t_4) that improves his payoff, so the deviation (s̄_1, s̄_3) is
self-enforcing. Hence, t is not a coalition-proof Nash equilibrium. ◊

We conclude this section with an example that shows that weak link
symmetry cannot be replaced by link monotonicity in theorem 7.3.

EXAMPLE 7.8 Recall that the proportional links allocation rule γ^P,
which we introduced in example 7.2, satisfies component efficiency and
the improvement property on the class of communication situations
CS+, but not weak link symmetry. Let (N, v) be a 3-player game with
N = {1, 2, 3} and characteristic function v given by v(S) = 0 if |S| ≤ 1
and v(S) = 12 if |S| ≥ 2. It is easily seen that γ^P(N, v, {12}) = (6, 6, 0),
γ^P(N, v, {12, 13}) = (6, 3, 3), and γ^P(N, v, L^N) = (4, 4, 4). Because of
symmetry, we can avoid writing down the payoffs corresponding to the
other networks. Note that the proportional links allocation rule satisfies
link monotonicity on CS^N_v. We will show that s̄ is not a coalition-proof
Nash equilibrium of the network-formation game Γ^nf(N, v, γ^P).
If every player i plays s̄_i, then the complete network will be formed,
which results in a payoff of 4 for each player. However, if players 1
and 2 deviate to the strategies s'_1 = {2} and s'_2 = {1}, then only the
link between 1 and 2 will be formed and these players will each improve
their payoff to 6. Since there is no network in which either player 1 or
player 2 gets a payoff higher than 6, none of them has an incentive to
deviate from (s'_1, s'_2, s̄_3). This shows that s̄ is not a coalition-proof Nash
equilibrium.
Dutta et al. (1998) show that, for the game (N, v) in this example,
the set of coalition-proof Nash equilibria of the game Γ^nf(N, v, γ^P) is
nonempty. In fact, they prove that a strategy profile is a coalition-proof
Nash equilibrium of Γ^nf(N, v, γ^P) if and only if it results in the formation
of exactly one link. Hence, each coalition-proof Nash equilibrium of the
game Γ^nf(N, v, γ^P) results in the formation of a network that is not
essentially complete for γ^P. ◊

7.4 COMPARISON OF THE NETWORK-FORMATION MODELS

In this section we compare the two models of network formation studied
in the current chapter and the previous chapter.
To illustrate the differences between the games of network formation
in extensive form and the games of network formation in strategic form,
we consider the 3-person game (N, v) with player set N = {1, 2, 3} and

    v(T) = { 0   if |T| ≤ 1;
           { 60  if |T| = 2;    (7.11)
           { 72  if T = N.
This game was also studied in example 6.1. The prediction of the
network-formation game in extensive form is that exactly one link will
be formed. Suppose that, at some point in the game, link 12 is formed.
Notice that either 1 or 2 gains by forming an additional link with 3,
provided that the other player does not form a link with 3. Two further
points need to be noted. Firstly, if player i forms a link with 3, then it is
in the interest of j (j ≠ i) to also link up with 3. Secondly, if all links are
formed, then players 1 and 2 are worse off compared to the network in
which they alone form a link. Hence, the network (N, {12}) is sustained
as an 'equilibrium' by a pair of mutual threats of the kind:
"If you form a link with 3, then so will I."
Of course, this kind of threat makes sense only if i will come to know
whether j has formed a link with 3. Moreover, i can acquire this infor-
mation only if the negotiation process is public. If bilateral negotiations
are conducted secretly, then it may be in the interest of some pair to
conceal the fact that they have formed a link until the process of bilat-
eral negotiations has come to an end. It is also clear that if different
pairs can carry out negotiations simultaneously and if links once formed

cannot be broken, then the mutual threats referred to earlier cannot be


carried out.
So, there are many contexts where considerations other than threats
may have an important influence on the formation of links. For instance,
suppose players 1 and 2 have already formed a link amongst themselves.
Suppose also that neither player has as yet started negotiations with
player 3. If 3 starts negotiations simultaneously with both 1 and 2, then
1 and 2 are in fact faced with a prisoners' dilemma situation. To see
this, denote l and nl as the strategies of forming a link with 3 and not
forming a link with 3, respectively. Then the payoffs to 1 and 2 are
described by the payoff matrix in figure 7.3.

Figure 7.3. Payoffs in a prisoners' dilemma situation
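
The matrix itself did not survive extraction; the entries below are reconstructed from the Myerson payoffs of the game in (7.11): a linked player in a one-link network gets 30, the center and a peripheral player of a two-link network get 44 and 14, and a player in the complete network gets 24. The dictionary form and the dominance check are our own illustration.

```python
# Payoffs to players 1 and 2, given that link 12 is already in place and that
# player 3 is willing to link with both (Myerson payoffs of the game in (7.11)).
matrix = {
    ('l', 'l'):   (24, 24),   # both form a link with 3: complete network
    ('l', 'nl'):  (44, 14),   # only player 1 links with 3: 1 becomes the center
    ('nl', 'l'):  (14, 44),   # only player 2 links with 3: 2 becomes the center
    ('nl', 'nl'): (30, 30),   # neither links with 3: network {12}
}

# l strictly dominates nl for player 1 (and, by symmetry, for player 2) ...
assert matrix[('l', 'l')][0] > matrix[('nl', 'l')][0]      # 24 > 14
assert matrix[('l', 'nl')][0] > matrix[('nl', 'nl')][0]    # 44 > 30
# ... yet (nl, nl) gives both players more than (l, l): a prisoners' dilemma.
```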

Note that l, that is forming a link with 3, is a dominant strategy


for both players. Obviously, in the network-formation game in strategic
form, the complete network will form simply because players 1 and 2
cannot sign a binding agreement to abstain from forming a link with 3.

7.5 REMARKS
In this section we briefly discuss some additional literature related to
network formation games in strategic form.
Slikker (2000a) studies the formation of hypergraphs, as discussed in
section 4.2. He considers hypergraph-formation games in strategic form,
which are straightforward extensions of the network-formation games
that we have seen in this chapter. He shows that, if the underlying
coalitional game is superadditive, the hypergraphs that are supported
by coalition-proof Nash equilibria and undominated Nash equilibria are
payoff equivalent to the complete hypergraph, which consists of all possi-
ble conferences. Slikker (2000a) also considers several other equilibrium
refinements and shows that these give rise to similar results, both for
network-formation games and for hypergraph-formation games.
Meca-Martinez et al. (1998) and Slikker (2000a) study the formation
of coalition structures, which we discussed in section 4.1. They use dif-
ferent rules of coalition formation, but both focus on finding sufficient
conditions on an exogenous allocation rule to guarantee that the grand
coalition is supported by a strong Nash equilibrium in case the underly-
ing coalitional game is convex.

We end this section by pointing the reader to chapter 10, which is de-
voted to the study of network-formation games that are potential games.
Chapter 8

NETWORK FORMATION WITH COSTS FOR ESTABLISHING LINKS

In the current chapter, which is based on Slikker and van den Nouwe-
land (2000b), we incorporate costs for the formation of communication
links into the network-formation models that we saw in chapters 6 and
7. Our main interest is to analyze the influence of costs for establish-
ing links on the networks that result according to several equilibrium
concepts.
We start in section 8.1 by introducing a natural extension of the My-
erson value to the setting of communication situations that incorporate
costs for forming links. Then, in section 8.2, we introduce costs in the
games of network formation in extensive form of chapter 6 and study the
networks that are formed in subgame-perfect Nash equilibria for varying
levels of costs. In section 8.3 we introduce costs in the strategic-form
games of network formation of chapter 7 and analyze the influence of
costs on the networks that are formed in undominated Nash equilibria
and coalition-proof Nash equilibria. In sections 8.2 and 8.3 we limit our-
selves to games with no more than three players for practical reasons.
However, we extend the scope of our analysis to larger games in section
8.4. We conclude this chapter in section 8.5 with a comparison of the
results that we obtain for the extensive-form games of network formation
and the strategic-form games of network formation.

8.1 THE COST-EXTENDED MYERSON VALUE
In this section we introduce costs in communication situations and
we define an extension of the Myerson value to a setting in which the
formation of communication links is not free of charge. We show that

the extension that we propose can be derived from the Myerson value
for reward communication situations that we saw in section 4.5.
We want to focus on the influence of the costs for forming links on the
networks that are formed in various equilibria of the network-formation
games. In order to be able to clearly isolate this effect, we assume that all
links have the same cost associated with them. Hence, we assume that the
formation of any communication link results in a fixed cost c ≥ 0. A
cost-extended communication situation is a tuple (N, v, L, c), in which
N is a set of players, (N, v) a coalitional game, (N, L) a communication
network, and c ≥ 0 the cost for establishing a communication link.
Let (N, v, L, c) be a cost-extended communication situation. Including
costs for establishing links has the effect that the value that the
players in an internally connected coalition can obtain also depends on
the number of links that they have formed and not just on whether they
are connected or not. For any coalition T ⊆ N of players, its value in
(N, v, L, c) is naturally defined as the sum of the values of its components
minus the costs for the links between its members. We define the cost-extended
network-restricted game (N, v^{L,c}) associated with (N, v, L, c)
by

    v^{L,c}(T) = Σ_{C∈T/L} v(C) − c |L(T)|    (8.1)

for each T ⊆ N. The cost-extended network-restricted game incorporates
three elements, namely the information on profits obtainable
by coalitions of cooperating players that is contained in the coalitional
game, the restricted possibilities for cooperation contained in the network,
and the costs for establishing links. Note that the cost-extended
network-restricted game (N, v^{L,c}) of a cost-extended communication situation
(N, v, L, c) with c = 0 equals the network-restricted game of
communication situation (N, v, L).
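
A direct transcription of (8.1) into Python may help fix the notation: L(T) is the set of links with both endpoints in T, and T/L is the partition of T into communication components. The sketch below is ours; it assumes v is given as a function on coalitions and that links are two-element frozensets.

```python
def links_within(T, L):
    """L(T): the links of L with both endpoints in coalition T."""
    T = set(T)
    return {link for link in L if link <= T}

def components(T, L):
    """T/L: the components of coalition T induced by the links of L inside T."""
    T, LT = set(T), links_within(T, L)
    comps, seen = [], set()
    for start in T:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack.extend(j for link in LT if i in link for j in link if j != i)
        comps.append(frozenset(comp))
        seen |= comp
    return comps

def cost_extended_restricted_game(N, v, L, c):
    """Characteristic function v^{L,c} of equation (8.1)."""
    def w(T):
        return sum(v(C) for C in components(T, L)) - c * len(links_within(T, L))
    return w
```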

Analogous to the Myerson value of a communication situation
(N, v, L), we define the cost-extended Myerson value ν of a cost-extended
communication situation (N, v, L, c) as the Shapley value of the associated
cost-extended network-restricted game, i.e.,

    ν(N, v, L, c) = Φ(N, v^{L,c}).    (8.2)

It is easily seen that ν(N, v, L, c) = μ(N, v, L) whenever c = 0, so that
the cost-extended Myerson value is indeed an extension of the Myerson
value for communication situations as introduced in section 2.2.
The definition of the cost-extended Myerson value in (8.2) is based
on the game (N, v^{L,c}), which includes both the benefits and the costs
of forming links. This shows that ν can be interpreted as a solution
to the bargaining problem in which the players bargain over the benefits
and the costs of forming links simultaneously. Another angle would
be to assume that the players pay the costs for forming links as they
form them and then, when these costs are sunk, bargain over the division
of the benefits once the process of network formation has settled
down and a specific network has been formed. As it turns out, both
angles, while very different methodologically, lead to the same end result,
namely the cost-extended Myerson value. To see this, note that
for any cost-extended communication situation (N, v, L, c) we can express
the characteristic function of the cost-extended network-restricted
game (N, v^{L,c}) in terms of that of the network-restricted game (N, v^L)
of communication situation (N, v, L) as follows:

    v^{L,c} = v^L − c Σ_{ij∈L} u_{i,j}.    (8.3)

Hence, using additivity of the Shapley value, it follows that

    ν_i(N, v, L, c) = μ_i(N, v, L) − (1/2) c |L_i|    (8.4)

for each i ∈ N. This last expression reflects separate treatments of costs
and benefits. Benefits are allocated according to the Myerson value,
whereas the cost of each link is simply split between the players who
form it.
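
Equation (8.4) translates into a one-line adjustment of the Myerson payoffs; the small helper below is our own sketch and takes precomputed Myerson payoffs as input.

```python
def cost_extended_myerson(myerson_payoffs, L, c):
    """Equation (8.4): nu_i = mu_i - (c/2) * |L_i|; each link's cost is split
    equally between the two players who form it."""
    degree = {i: sum(1 for link in L if i in link) for i in myerson_payoffs}
    return {i: myerson_payoffs[i] - c * degree[i] / 2 for i in myerson_payoffs}
```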
We will now show that the cost-extended Myerson value as we defined
it can be derived from the Myerson value for reward communication
situations that we saw in section 4.5. Let (N, v, L, c) be a cost-extended
communication situation. We associate with this situation the reward
function r^{v,c} that assigns to each network (N, A) (with A ⊆ L^N) the
sum of the values of its components minus the costs of its links, i.e.,

    r^{v,c}(A) = Σ_{C∈N/A} v(C) − c |A|.

Note that we have to assume that the game (N, v) is zero-normalized
(v(i) = 0 for each i ∈ N) to ensure that the reward function r^{v,c} is
zero-normalized, which was an assumption in section 4.5. The following
theorem shows that the cost-extended Myerson value of a cost-extended
communication situation (N, v, L, c) is equal to the Myerson value of the
reward communication situation (N, r^{v,c}, L).

THEOREM 8.1 Let (N, v, L, c) be a cost-extended communication situation
in which the game (N, v) is zero-normalized. Then

    μ(N, r^{v,c}, L) = ν(N, v, L, c).

PROOF: We recall from section 4.5 that μ(N, r, L) = Φ(N, v^{r,L}), where
v^{r,L} is defined by v^{r,L}(T) = r(L(T)) for all T ⊆ N. Hence, μ(N, r^{v,c}, L)
= Φ(N, v^{r^{v,c},L}). We first take a closer look at the characteristic function
v^{r^{v,c},L}. It holds that

    v^{r^{v,c},L}(T) = r^{v,c}(L(T)) = Σ_{C∈N/L(T)} v(C) − c |L(T)|
                     = Σ_{C∈T/L(T)} v(C) − c |L(T)| = v^{L,c}(T)

for all T ⊆ N, where the third equality follows from zero-normalization of (N, v). Using
that v^{r^{v,c},L} = v^{L,c}, it follows from the definitions of the cost-extended
Myerson value and the Myerson value for reward communication situations
that

    μ(N, r^{v,c}, L) = Φ(N, v^{r^{v,c},L}) = Φ(N, v^{L,c}) = ν(N, v, L, c).

This completes the proof. □


The remainder of this section is devoted to the computation of the
cost-extended Myerson value for symmetric 3-player games. For nota-
tional convenience, we define

I/C(N,v,L) = I/(N,v,L,c)

for all communication situations (N,v,L) E CS N and all c 2': O. We will


refer to I/C as the c-extended Myerson value of communication situation
(N, v, L). Note that I/c is an allocation rule for communication situa-
tions, whereas 1/ is an allocation rule for cost-extended communication
situations (N, v, L, c) that consist of a communication situation (N, v, L)
and a cost c 2': 0 for establishing a link. With these notations, we have
I/0 = fl.
Let (N, v) be a symmetric 3-player game. Hence, there exist Vl,V2,V3 E
R such that v(T) = VITI for all T <:;;; N with T # 0. To simplify our
notations, we assume that the game (N, v) is zero-normalized, i.e., Vi =
O. Also, we restrict ourselves to coalitional games in which cooperation is
beneficial and we assume throughout the remainder of this chapter that
V2 2': 0 and V3 2': O. Due to symmetry, it suffices to distinguish between
the four non-isomorphic networks among the three players to find the
payoffs according to the cost-extended Myerson value for all possible
communication networks among the three players. These four networks
are represented in figure 8.1. Also because of symmetry, it suffices to
distinguish between the five possible positions a player can have in a
The cost-extended Myerson value 197

network. These positions are indicated by the numbers 1 through 5 in


figure 8. 1. 26

Figure 8.1. Different positions

We analyze the preferences of the players over the five positions indicated
in figure 8.1. These preferences will, of course, depend on the
underlying coalitional game and the costs for establishing communication
links. To economize on notation, we denote N := {i, j, k}, where i,
j, and k are different players. In our computations of the cost-extended
Myerson values below, we use the payoffs that the Myerson value attributes
to the players in 3-person communication situations (without
costs) that we described in section 6.4.
Position 1 is the isolated position. An isolated player receives a payoff
of zero. Note that there is an isolated player in a network with one link:

    ν^c_i(N, v, ∅) = ν^c_i(N, v, {jk}) = 0.

Position 2 denotes a linked position in a network with one link. The two
linked players equally divide the value of a 2-person coalition and the
cost of the link, i.e.,

    ν^c_j(N, v, {jk}) = (1/2)v_2 − (1/2)c.

Position 3 is the central position in a network with two links. According
to the c-extended Myerson value a player in this position receives

    ν^c_i(N, v, {ij, ik}) = (1/3)v_3 + (1/3)v_2 − c.

Position 4 is a non-central position in a network with two links. The
payoff attributed by the c-extended Myerson value to a player in this
position equals

    ν^c_j(N, v, {ij, ik}) = (1/3)v_3 − (1/6)v_2 − (1/2)c.

26The figures in this chapter are similar to the figures in Slikker and van den Nouweland
(2000b).

Finally, position 5 represents a position in the network with three links.
In this network every player receives

    ν^c_i(N, v, L^N) = (1/3)v_3 − c.

Obviously a player's preferences over positions 1 through 5 will change


with the coalitional game (N, v) and the cost c of forming a link. In
a situation where the costs c are fairly high, a player will find that
in some cases the benefits from being linked to another player do not
outweigh the cost for forming a link. The preferences of a player are
described in table 8.1. The notation ≻ that we use in this table denotes
preferences. For example, 1 ≻ 2 means that a player prefers position 1
to position 2. If a condition in table 8.1 holds with equality, then a player
is indifferent between the two corresponding positions, while the reverse
preference holds if the reverse inequality holds.

Preference   Condition dependent on c     Condition independent of c

1 ≻ 2        c > v_2
1 ≻ 3        c > (1/3)v_3 + (1/3)v_2
1 ≻ 4        c > (2/3)v_3 − (1/3)v_2
1 ≻ 5        c > (1/3)v_3
2 ≻ 3        c > (2/3)v_3 − (1/3)v_2
2 ≻ 4                                     2v_2 > v_3
2 ≻ 5        c > (2/3)v_3 − v_2
3 ≻ 4        c < v_2
3 ≻ 5                                     v_2 > 0
4 ≻ 5        c > (1/3)v_2

Table 8.1. Preferences over different positions
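
The five position payoffs derived above, and hence every entry of table 8.1, can be tabulated mechanically for any zero-normalized symmetric 3-player game. The short Python sketch below is ours; for instance, with v_2 = 60 and v_3 = 72 (the game of (7.11)) and c = 0 it yields the payoffs 0, 30, 44, 14 and 24, and with c = 22 it yields 0, 19, 22, 3 and 2.

```python
def position_payoffs(v2, v3, c):
    """c-extended Myerson payoffs for the five positions of figure 8.1
    (zero-normalized symmetric 3-player game with v({i,j}) = v2, v(N) = v3)."""
    return {
        1: 0.0,                          # isolated player
        2: v2 / 2 - c / 2,               # linked player in a one-link network
        3: v3 / 3 + v2 / 3 - c,          # center of a two-link network
        4: v3 / 3 - v2 / 6 - c / 2,      # peripheral player of a two-link network
        5: v3 / 3 - c,                   # player in the complete network
    }

print(position_payoffs(60, 72, 0))    # {1: 0.0, 2: 30.0, 3: 44.0, 4: 14.0, 5: 24.0}
print(position_payoffs(60, 72, 22))   # {1: 0.0, 2: 19.0, 3: 22.0, 4: 3.0, 5: 2.0}
```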

8.2 NETWORK-FORMATION GAMES IN EXTENSIVE FORM

In this section we introduce and analyze a slightly modified version of
the games of network formation in extensive form that were introduced
in chapter 6. The modification consists of the incorporation of costs for
establishing communication links. We study the subgame-perfect Nash
equilibria of these modified games.
We start by incorporating costs into the extensive-form network-formation
games Δ^nf(N, v, γ, σ) that we studied in chapter 6. This is
easily done by using a c-extended Myerson value ν^c to determine players'
payoffs in various networks. Given a coalitional game (N, v), a cost c ≥ 0

for establishing a link, and an exogenously given rule of order of pairs of
players σ, the extensive-form game of network formation Δ^c(N, v, σ) is
the game Δ^nf(N, v, ν^c, σ). In this game, each player i ∈ N gets a payoff

    ν^c_i(N, v, L)

if a network (N, L) is formed.
The following example illustrates the influence of costs on the outcome
of the game.

EXAMPLE 8.1 Consider the 3-player symmetric game (N, v) with characteristic
function v defined by

    v(T) = { 0   if |T| ≤ 1;
           { 60  if |T| = 2;    (8.5)
           { 72  if T = N.

This game was also studied in example 6.1, and we showed there that
in the absence of costs for establishing communication links, every subgame-perfect
Nash equilibrium results in the formation of exactly one
link. In the current example, we will analyze the influence of costs for
forming links on the networks that are formed in subgame-perfect Nash
equilibria. In table 8.2 we list the payoffs according to the c-extended
Myerson value ν^c for the five positions that we distinguished in section
8.1. These payoffs are found by substituting 60 for v_2 and 72 for v_3 in
the expressions that we obtained for the c-extended Myerson values for
the five positions.

Position   Payoff

1          0
2          30 − (1/2)c
3          44 − c
4          14 − (1/2)c
5          24 − c

Table 8.2. Payoffs in different positions

In example 6.1 we saw that one link will be formed in a subgame-perfect
Nash equilibrium if c = 0. What will happen if establishing a
communication link with another player is not free of charge any more?
One would expect that relatively small costs will not have much influence

and that larger costs will result in the formation of fewer links. Indeed,
for small costs, say c = 1, we can repeat the discussion in example
6.1 and conclude that exactly one link will be formed. However, if the
costs are larger the analysis changes. Assume, for example, that c = 22.
The payoffs associated with these costs for the five positions that we
distinguished in section 8.1 are represented in table 8.3.

Position   Payoff

1          0
2          19
3          22
4          3
5          2

Table 8.3. Payoffs in different positions with c = 22

The influence of the costs on the payoffs changes the incentives of the
players. Once two links have been formed, the two players who did not
form a link between them yet prefer to stay in the current situation
and receive 3 instead of forming a link and receiving 2. If exactly one
link has been formed, a player who is already linked is now willing to
form a link with the isolated player because this will increase his payoff
(from 19 to 22) and the threat of ending up in the complete network has
disappeared. Obviously, all players prefer forming some links to no link
at all. It follows that, if c = 22, all three networks with two links are
supported by a subgame-perfect Nash equilibrium. ◊

Example 8.1 illustrates that an increase in the cost for establishing
a communication link can result in more communication between the
players. The general analysis that follows will shed some light on the
circumstances under which we can obtain this result.
Let (N, v) be a zero-normalized symmetric 3-player game and let
c ≥ 0 be the cost of forming a link. To find which networks will
be formed in subgame-perfect Nash equilibria of the network-formation
games Δ^c(N, v, σ), we use the general expressions for the cost-extended
Myerson value ν^c that we provided in section 8.1 and the preferences of
the players over different positions that were analyzed in table 8.1. It
takes some patient combining and analyzing of the various inequalities
that are obtained, but eventually it turns out that we need to distinguish
between three classes of games. These are non-superadditive games, superadditive
games that are not convex, and convex games. For each

of these three classes, we obtain a specific pattern of networks that are
formed in subgame-perfect Nash equilibria as the costs for establishing
communication links increase.
Note that the 3-player zero-normalized and symmetric game (N, v)
is non-superadditive if and only if v_2 > v_3. For the class consisting of
non-superadditive games, we find that as the cost for forming a link, c,
increases, the pattern of networks that are formed in subgame-perfect
Nash equilibria of the network-formation games Δ^c(N, v, σ) is as represented
in figure 8.2.

Figure 8.2. Networks according to subgame-perfect Nash equilibria if v_2 > v_3

Figure 8.2 is, as are all other figures that follow in the current chapter,
a schematic representation. The way to read it is as follows. For
c < v_2 any of the three possible networks with one link is supported by
a subgame-perfect Nash equilibrium, whereas for c > v_2 only the empty
network is supported. On the boundary, where c = v_2, both the networks
that appear for c < v_2 and those for c > v_2 are supported by a subgame-perfect
Nash equilibrium. We see that for a game with v_2 > v_3, the complete
network (in which all players are connected directly) will never be formed.
It follows from the preferences of the players that the complete network
would be formed only if c < (2/3)v_3 − v_2. However, for a game with v_2 > v_3
it holds that (2/3)v_3 − v_2 < 0. Because the cost c of establishing a communication
link is taken to be nonnegative, the complete network does not
show up in the figure.
We get a different pattern for games that satisfy 2v_2 > v_3 > v_2. Such
games are superadditive but not convex. Figure 8.3 shows the pattern
of networks formed in subgame-perfect Nash equilibria for this class of
games.
Note that the game that we considered in example 8.1 has v_2 = 60
and v_3 = 72, so that it satisfies the condition 2v_2 > v_3 > v_2 and figure
8.3 is applicable for this game. We saw in example 8.1 that the complete
network is not formed for this game, even if the cost c is very low. This
does not contradict the results in figure 8.3, because (2/3)v_3 − v_2 = −12 < 0.

Figure 8.3. Networks according to subgame-perfect Nash equilibria if 2v_2 > v_3 > v_2

Note that we did not explicitly indicate c = 0 in figure 8.3, since the
condition 2v_2 > v_3 > v_2 can result in either (2/3)v_3 − v_2 < 0 or (2/3)v_3 − v_2 > 0.
Note that games with v_2 = v_3 are not explicitly considered in either
figure 8.2 or figure 8.3. However, if v_2 = v_3, then figures 8.2 and 8.3
lead to the same results because several of the boundaries coincide.27
Likewise, for convex games with v_3 = 2v_2, we obtain the same results
whether we look at figure 8.3 or at figure 8.4, in which we consider
convex games with v_3 > 2v_2.

27 We point out that figure 8.3 gives us a little bit more information for very specific costs,
because it shows that a network with two links is also supported by a subgame-perfect Nash
equilibrium if the costs are equal to v_2.

Figure 8.4. Networks according to subgame-perfect Nash equilibria if v_3 > 2v_2

We conclude from the three patterns that we found above, that for
non-superadditive games and for convex games increasing costs for es-
tablishing communication links always result in a decreasing number of
links being formed in equilibrium. For superadditive games that are not
convex, however, we find that increasing costs can result in the formation
of more links.

8.3 NETWORK-FORMATION GAMES IN STRATEGIC FORM

We now turn our attention to the introduction of costs for establishing
links in the strategic-form games of network formation that we discussed
in chapter 7. We briefly discuss why Nash equilibrium and strong
Nash equilibrium are not very useful to predict which networks will be
formed. We then continue by studying the networks that are formed in
undominated Nash equilibria and coalition-proof Nash equilibria of the
network-formation games.
We start by incorporating costs into the strategic-form network-formation
games Γ^nf(N, v, γ) that we studied in chapter 7. This is easily
accomplished by using a c-extended Myerson value ν^c to determine players'
payoffs in various networks. Given a coalitional game (N, v) and a
cost c ≥ 0 for establishing a link, the strategic-form game of network
formation Γ^c(N, v) is the game Γ^nf(N, v, ν^c). In this game, each player
i ∈ N gets a payoff

    ν^c_i(N, v, L)

if a network (N, L) is formed.
We present the results on Nash equilibria and strong Nash equilibria
in the following subsection. Undominated Nash equilibria and coalition-
proof Nash equilibria are discussed in subsequent subsections.

8.3.1 NASH EQUILIBRIUM AND STRONG NASH EQUILIBRIUM
We start this subsection with an example illustrating that many net-
works can result from Nash equilibria.

EXAMPLE 8.2 Let (N, v) be the 3-player symmetric game that was the
subject of study in example 8.1. For this game, the payoffs to the players
for the five positions we distinguished in figure 8.1 on page 197 are
summarized in table 8.2 on page 199.
Because the game (N, v) is superadditive, it follows from theorem 7.1
that every network can be supported by a Nash equilibrium if c = O.
This result surfaces because no player wants to break a link and because
two players are needed to form a new link. If the cost for forming a link
increases, however, then players may want to break links. For example,
if c > 20, then a player prefers position 4 to position 5 and, hence, the
full cooperation network is not supported by a Nash equilibrium. In
general, it holds that fewer networks are supported by Nash equilibria
as c increases. The empty network, however, is supported by the Nash equilibrium (∅, ∅, ∅) for any cost c since players cannot form additional links through unilateral deviations. ◊

We now turn our attention to zero-normalized symmetric 3-player games in general. Using the expressions for the cost-extended Myerson

values and the information on players' preferences over the possible po-
sitions that we provided in section 8.1, it is straightforward to check the
conditions under which the various networks are supported by a Nash
equilibrium. It turns out that we need to distinguish between the same
three classes of games that we saw in the previous section. We represent
the results in figures 8.5, 8.6, and 8.7 .

Figure 8.5. Networks according to Nash equilibria if v2 > v3

Figure 8.6. Networks according to Nash equilibria if 2v2 > v3 > v2

Figure 8.7. Networks according to Nash equilibria if v3 > 2v2

Nash equilibria are obviously not very useful to predict which net-
works will be formed, because they support an abundance of networks.
Therefore, we consider refinements of Nash equilibrium. The refinement
strong Nash equilibrium is not useful because the strategic-form games
of network formation might not have any strong Nash equilibria. We
argued this in example 7.3 in case c = 0. Arguments similar to those in example 7.3 show that the set of strong Nash equilibria might be empty if c > 0. In the following subsections, we turn our attention to
the refinements undominated Nash equilibrium and coalition-proof Nash
equilibrium.

8.3.2 UNDOMINATED NASH EQUILIBRIUM


The current subsection is devoted to undominated Nash equilibria,
which were introduced in section 5.2. Because an undominated Nash
equilibrium is a Nash equilibrium in undominated strategies, we have to
determine which strategies are undominated. We do this on a case-by-
case basis.
Consider a zero-normalized symmetric 3-person game (N, v) such that
2v2 > v3 > v2 and costs c for forming links such that c < (1/3)v2. We see
in figure 8.6 that in this case all networks can be supported in a Nash
equilibrium. Checking the conditions in table 8.1, we see that every
player prefers position 5 to positions 1 and 4, position 3 to positions
1 and 2, and positions 2 and 4 to position 1. We conclude from this
that every player has a dominant strategy, namely to announce that he wants to form links with both other players. This implies that for every player i ∈ N the strategy si = N\i is the unique undominated strategy. Because a strategy profile consisting of dominant strategies is a Nash equilibrium, we find that there is a unique undominated Nash equilibrium, namely s = (si)_{i∈N}. We conclude that L(s) = L^N is the
only network that is supported by an undominated Nash equilibrium in
this case.
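The case-by-case analysis above can also be automated. The following minimal Python sketch (our own construction, not from the book) enumerates the link announcements of the three players, removes weakly dominated strategies, and collects the networks formed by Nash equilibria that use undominated strategies only. The demonstration uses the c = 0 payoffs of the game of example 8.1 as quoted in example 8.3, with positions 1 through 5 read, as in our understanding of figure 8.1, as isolated player, player in a single-link network, middle and end player of a two-link network, and player in the complete network.

```python
from itertools import product, combinations

N = (1, 2, 3)

def others(i):
    return [j for j in N if j != i]

def strategies(i):
    o = others(i)
    return [frozenset(), frozenset({o[0]}), frozenset({o[1]}), frozenset(o)]

def links(profile):
    # a link ij forms iff both i and j announce it
    return frozenset(frozenset((i, j)) for i, j in combinations(N, 2)
                     if j in profile[i] and i in profile[j])

def position(i, L):
    # positions 1-5: isolated, single link, middle of chain, end of chain, complete
    deg = sum(1 for l in L if i in l)
    if deg == 0:
        return 1
    if len(L) == 1:
        return 2
    if len(L) == 2:
        return 3 if deg == 2 else 4
    return 5

def payoff(i, profile, pay):
    return pay[position(i, links(profile))]

def undominated(i, pay):
    cand = strategies(i)
    o1, o2 = others(i)
    rests = [{o1: s1, o2: s2} for s1 in strategies(o1) for s2 in strategies(o2)]
    def dominated(s):
        for t in cand:
            if t == s:
                continue
            diffs = [payoff(i, {**r, i: t}, pay) - payoff(i, {**r, i: s}, pay)
                     for r in rests]
            if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
                return True
        return False
    return [s for s in cand if not dominated(s)]

def undominated_nash_networks(pay):
    und = {i: undominated(i, pay) for i in N}
    nets = set()
    for combo in product(*(und[i] for i in N)):
        profile = dict(zip(N, combo))
        if all(payoff(i, profile, pay) >=
               max(payoff(i, {**profile, i: s}, pay) for s in strategies(i))
               for i in N):
            nets.add(tuple(sorted(tuple(sorted(l)) for l in links(profile))))
    return nets

# c = 0 payoffs of example 8.1's game, as quoted in example 8.3.
print(undominated_nash_networks({1: 0, 2: 30, 3: 44, 4: 14, 5: 24}))
# only the complete network is returned, in line with the first case above
```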
The second case we consider is that of a zero-normalized symmetric 3-player game (N, v) such that v3 > 2v2 and a cost c such that (1/3)v2 < c < v2. The networks supported by Nash equilibria are all networks except the complete one (see figure 8.7). We denote N = {i, j, k}. Consider the strategies of player i. Strategy si = ∅ is dominated by strategy si' = {j} because positions 2 and 4 are both preferred to position 1. Strategy si' = {j} is undominated for the following reasons. It is not dominated by strategy si = ∅ or strategy si'' = {k} because 2 ≻ 1. Also, it is not dominated by si = {j, k} because 4 ≻ 5. Strategy si = {j, k}, however, is an undominated strategy as well, because 3 ≻ 1 and 3 ≻ 2. Hence, for every player, all strategies except the strategy si = ∅ are undominated.
It follows that all networks that are supported by Nash equilibria can be supported by strategy profiles consisting of undominated strategies as well. This does not imply that all these networks can be supported by Nash equilibria that consist of undominated strategies only. For example, consider network (N, {ij}), which has one link and an isolated player k. Let s = (si, sj, sk) be a triple of undominated strategies such that L(s) = {ij}. Then we know that sk ∈ {{i}, {j}, {i, j}}. Without loss of generality, we assume that i ∈ sk. Because L(s) = {ij} and i ∈ sk, it follows that si = {j}. Note, however, that the fact that 4 ≻ 2 implies that player i can improve his payoff by deviating to si = {j, k}. Therefore, we conclude that s is not a Nash equilibrium. This shows

that a network with one link is not supported by an undominated Nash equilibrium. A similar argument shows that the empty network is not
supported by an undominated Nash equilibrium. A network with two
links, however, is supported by an undominated Nash equilibrium: s =
(si, sj, sk) = ({j}, {i, k}, {j}) is an undominated Nash equilibrium that
results in the formation of network L(s) = {ij,jk}.
Analyzing all the possible cases in the way described above, we even-
tually find all networks that are supported by undominated Nash equi-
libria. We represent the results in figures 8.8, 8.9, and 8.10.

Figure 8.8. Networks according to undominated Nash equilibria if v2 > v3

Figure 8.9. Networks according to undominated Nash equilibria if 2v2 > v3 > v2

Figure 8.10. Networks according to undominated Nash equilibria if v3 > 2v2

8.3.3 COALITION-PROOF NASH EQUILIBRIUM
We now turn our attention to coalition-proof Nash equilibria, which
were introduced in section 5.2. We start with an example that illus-
trates coalition-proof Nash equilibria of the network-formation games in
strategic form.

EXAMPLE 8.3 Consider the 3-player symmetric game (N, v) that was
studied in examples 8.1 and 8.2. Suppose that c = 0. Then there is no network that is different from the complete network and that is essentially complete for ν^c. Hence, it follows from theorem 7.3 that the complete network is the only network that is supported in a coalition-proof Nash equilibrium of the game Γ^c(N, v) if c = 0. To get intuition for
how to find which networks are supported by coalition-proof Nash equi-
libria in general, we now show that all networks other than the complete
network are not supported by a coalition-proof Nash equilibrium.
We denote N = {i, j, k}. In a Nash equilibrium S there can be no two
players i and j such that i ∈ Sj and j ∉ Si. This is true because every
player prefers positions 2 and 4 to position 1, position 3 to position 2, and
position 5 to position 4, so that every player prefers to form all the links
he can. This implies that the only Nash equilibrium S that supports
the empty network is that in which Si = Sj = Sk = ∅. This strategy
profile, however, is not a coalition-proof Nash equilibrium, because any
two players i and j have a profitable deviation that is allowed. Deviating
to ti = {j} and tj = {i} results in the formation of link ij and increases
the payoffs of both players i and j from 0 to 30. Also, neither player
i nor player j has a profitable further deviation, because all they can
accomplish by unilateral deviation is breaking link ij.
The only Nash equilibrium S that supports a network with one link,
link ij, is that with Si = {j}, Sj = {i}, and Sk = ∅. However, players i
and k can increase their payoffs to 44 and 14, respectively, by deviating
to the strategies ti = {j, k} and tk = {i} and forming an additional link.
Neither player i nor player k has an incentive to deviate from this new
strategy profile because they can not unilaterally form new links. We
conclude that a network with one link is not supported by a coalition-
proof Nash equilibrium.
It remains to show that networks with two links are not supported
by coalition-proof Nash equilibria. Consider network (N,{ij,jk}). This
network is supported by one Nash equilibrium only, namely strategy
profile S in which Si = {j}, Sj = {i,k}, and Sk = {j}. This Nash equi-
librium is not coalition-proof, however, because (ti, tk) = ({j, k}, {i, j})
is a profitable deviation by players i and k that is stable against further
deviations by either player i or player k.
We have now shown that all networks other than the complete one
are not supported by coalition-proof Nash equilibria. We proceed by
showing that the complete network is supported by a coalition-proof
Nash equilibrium. Consider the strategy profile s = (si, sj, sk) = ({j, k}, {i, k}, {i, j}). Obviously, this strategy profile results in the formation of the complete network. The only deviations from s that increase the

payoffs of all deviating players are deviations by two players i and j who
break their links with the third player k and induce the formation of
network (N, {ij}), so that they each get 30 rather than 24. However,
such a deviation, (ti, tj) = ({j}, {i}), is not stable against further deviations because player i can increase his payoff from 30 to 44 by playing ui = {j, k} and inducing the formation of network (N, {ij, ik}). We
conclude that s is a coalition-proof Nash equilibrium.
Everything we did in this example so far is for c = 0. For small
costs c > 0, the discussion above still reflects what is going on and the
conclusion will be unchanged. For larger costs, however, some of the
deviations that we considered will no longer be attractive. Suppose, for
instance, that c = 24. Then each player has a payoff of only 0 in the
complete network, whereas each player has a positive payoff in a network
with two links. Because each player prefers position 3 to position 2 and
position 4 to position 1, there are no profitable deviations from a strategy
that results in the formation of a network with two links. It is not hard
to see that, if c = 24, the only networks that are supported by coalition-
proof Nash equilibria are the ones with two links. <)

To determine which networks are supported by coalition-proof Nash equilibria of the game Γ^c(N, v) for any zero-normalized symmetric 3-
player game (N, v) and any cost c, we use table 8.4. In this table we list
the networks that are supported by coalition-proof Nash equilibria as a
function of players' preferences over positions.

Most preferred position   Additional condition   Network resulting from coalition-proof Nash equilibrium
1                         -                      zero links
2                         -                      one link
3                         5 ≻ 4                  three links
3                         4 ≻ 5                  two links
4                         3 ≻ 1                  two links
4                         1 ≻ 3                  zero links
5                         -                      three links

Table 8.4. Coalition-proof Nash equilibria depending on preferences of the players

Using this table, it is fairly straightforward to determine the networks that are supported by coalition-proof Nash equilibria for the three
classes of games that we distinguished before. Figure 8.11 shows the
networks that are supported by coalition-proof Nash equilibria for non-

superadditive games, figure 8.12 provides the results for superadditive but non-convex games, and figure 8.13 deals with convex games.

Figure 8.11. Networks according to coalition-proof Nash equilibria if v2 > v3

Figure 8.12. Networks according to coalition-proof Nash equilibria if 2v2 > v3 > v2

Figure 8.13. Networks according to coalition-proof Nash equilibria if v3 > 2v2

8.4 EXTENSIONS
We now extend our scope to games with more than three players. We
discuss to what extent our results of the previous sections with respect to
games with three players do or do not hold for games with more players.
The most interesting result that is obtained for symmetric 3-player
games is that in the network-formation games in extensive form it is
possible that as the cost for establishing a link increases, more links
are formed in subgame perfect Nash equilibria. Slikker and van den
Nouweland (2000b) show that this result can be extended to games with
more than 3 players. They provide an example of a 4-player game for
which an increase in c does, under certain circumstances, lead to the
formation of more links. They also extend the game in example 8.1 and

use it to show that for n-player games with n odd, it is possible that
as the costs for establishing links increases, more links are formed in
subgame-perfect Nash equilibria.
The second point of interest is whether we will again find that the
pattern of networks formed in equilibrium depends on whether a game
is superadditive and/or convex. Slikker and van den Nouweland (2000b)
provide two examples of symmetric 4-player games that are superaddi-
tive but not convex. They show that for these two games the patterns
of networks that are supported by subgame-perfect Nash equilibria of
the extensive-form network-formation games ∆^c(N, v, σ) as a function
of the cost c are different. It is perhaps not very surprising that for
4-player games the relationship between the cost c of forming a link and
the networks that are supported by various equilibrium concepts can-
not be related back simply to superadditivity and/or convexity of the
game. For zero-normalized symmetric 3-player games superadditivity
and convexity can each be described by a single inequality, v3 ≥ v2 and 2v2 ≤ v3, respectively. It is really these two conditions that are im-
portant. For zero-normalized symmetric games with more than three
players, however, more inequalities are needed to describe these proper-
ties. For example, for a zero-normalized symmetric 4-player game there
are three superadditivity conditions, V3 :::: V2, V4 :::: V3, and V4 :::: 2V2, and
two convexity conditions, V3 - 112 :::: V2 and V4 - 113 :::: V3 - 1)2. Slikker
and van den Nouweland (2000b) consider the possibility that for zero-
normalized symmetric 4-player games the patterns of networks formed
are dependent on which of the five superadditivity and convexity con-
ditions are satisfied by the game. However, they provide examples that
show that this is not the case.

8.5 COMPARISON OF THE NETWORK-FORMATION MODELS
This section is devoted to a discussion of the cost-network patterns
that we derived in the previous sections. We considered subgame-perfect
Nash equilibria of the network-formation games in extensive form. The
equilibrium concept for the network-formation games in strategic form
that is most similar to subgame perfection in spirit is undominated Nash
equilibrium. However, it appears from figures 8.8 through 8.10 that in
some cases there is still a multiplicity of networks resulting from undomi-
nated Nash equilibria. Comparing figures 8.8 through 8.10 to figures 8.11
through 8.13, it appears that coalition-proof Nash equilibrium allows us
to further refine our predictions. We point out that, even for the (3-
player) network-formation games in strategic form, coalition-proof Nash
equilibrium is not a refinement of undominated Nash equilibrium on

the strategy level. However, on a network level, we can view coalition-proof Nash equilibrium as a refinement of undominated Nash equilibrium
for the strategic-form network-formation games. We compare the cost-
network patterns for subgame-perfect Nash equilibria in the network-
formation games in extensive form with those for coalition-proof Nash
equilibria in the network-formation games in strategic form.
Comparing figures 8.2, 8.3, and 8.4 to figures 8.11, 8.12, and 8.13,
respectively, we find that the predictions according to subgame-perfect
Nash equilibrium in the games in extensive form and those according
to coalition-proof Nash equilibrium in the games in strategic form are
remarkably similar.
Figures 8.4 and 8.13 show that for convex games (N, v), both games ∆^c(N, v, σ) and Γ^c(N, v) predict the formation of the same networks for all costs c.
Figures 8.2 and 8.11 show that for non-superadditive games, the pre-
dictions of both network-formation games are almost the same. The only
difference is that the level of c that marks the transition from the com-
plete network to a network with one link is possibly positive (~V3 - tV2)
in the games ~C(N, v, a), whereas it is negative (~V:l - V2) in the games
rC(N, v) (see the discussion of figure 8.2 on page 201). Note that con-
sidering undominated Nash equilibria instead of coalition-proof Nash
equilibria for the games rC(N, v) will only make the differences more
pronounced.
The predictions of both types of network-formation games are most
dissimilar for the class of superadditive non-convex games. In the games
∆^c(N, v, σ) we get a network with one link in case (2/3)v3 - v2 < c < (1/3)v2 (see figure 8.3), whereas in the games Γ^c(N, v) for these costs we get
the complete network (see figure 8.12). For lower costs the complete
network results in both types of games.
The discussion on mutual threats in section 7.4 is applicable to all
games with the property that 2v2 > v3 > v2. Not only is the difference
between the predictions of both models of network formation a result of
the validity of mutual threats in the network-formation games in exten-
sive form, so is the remarkable result that higher costs may result in the
formation of more links in the games ∆^c(N, v, σ). For high costs, the mutual threats will no longer be credible because a player who executes such a threat would permanently decrease his payoff.
Chapter 9

A ONE-STAGE MODEL OF NETWORK FORMATION AND PAYOFF DIVISION

In this chapter, which is based on Slikker and van den Nouweland (2001), we study a model of network formation in which players bargain
(2001), we study a model of network formation in which players bargain
over the formation of links and the division of the payoffs simultaneously.
This makes the model very different from those in previous chapters,
where network-formation and bargaining over payoff division occurred
in two sequential stages. We introduce the one-stage model of network
formation and payoff division in section 9.1. Because bargaining over
payoffs in this model occurs while bargaining over network formation and
not after it, we cannot use an allocation rule to model the payoff division
once a network has been formed, like we did before. The one-stage
model of network formation and payoff division will generate predictions
about the networks formed as well as about the payoffs of the players.
We analyze both the networks and the payoffs that result according to
several equilibrium concepts. In section 9.2 we analyze Nash equilibria,
in section 9.3 strong Nash equilibria, and in section 9.4 coalition-proof
Nash equilibria.

9.1 THE MODEL


In this section, we introduce a model that provides an integrated ap-
proach to network formation and payoff division. Our starting point is
the same as in the games of network formation in extensive and strategic
forms that we discussed in chapters 6 and 7. Namely, we assume that a
coalitional game describes the profits that the various coalitions of play-
ers can obtain if they coordinate their actions. Given such a coalitional
game, players state which links they are willing to form and for each
of those links they also state the payoff that they want to receive for
forming it.

Let (N, v) be a coalitional game that describes the profits obtainable by the various coalitions of players. Like in previous chapters, we restrict
ourselves to zero-normalized games to avoid unnecessarily complicated
notation. In addition, we often assume that v(N) > 0. This assumption merely implies that it is profitable for the players to cooperate, because v(N) > Σ_{i∈N} v(i).
Let (N, v) be a coalitional game. We model the process of network formation and payoff division as a game in strategic form Γ^lc(N, v) = (N; (S_i)_{i∈N}; (f_i)_{i∈N}). To describe the strategy sets of the players, we introduce the notation A = R+ ∪ {P}, where R+ = [0, ∞) and P stands for Pass. The strategy set of player i is

S_i = { c^i ∈ A^N | c^i_i = P }.     (9.1)

Hence, a strategy for player i specifies a c^i_j for every player j ∈ N, such that c^i_j ∈ R+ ∪ {P}. The interpretation of c^i_j = P is that player i is not willing to form a link with player j. Obviously, player i cannot form a link with himself, so for all strategies c^i ∈ S_i it is required that c^i_i = P. If c^i_j ∈ R+ then player i is willing to form a link with player j, and he claims an amount c^i_j for forming it. We stress that a strategy
of player i ∈ N is denoted by c^i rather than s_i, as we did in earlier chapters. We have adjusted our notation for two reasons. The first one is that the strategy of a player is a vector and we want to adhere to the convention to indicate the various elements in such vectors by subscripts. Then, it is less confusing to use superscripts rather than subscripts to indicate the player whose strategy we are considering. So, with our notation c^i_j the superscript i denotes the player who is playing the strategy, while the subscript j corresponds to a specific coordinate of this strategy. The second reason for changing our notation is that we think of c^i_j ∈ R+ as a claim and we stress this interpretation by using c rather than s. We use all the same abbreviations for strategy-tuples denoted by (c^i)_{i∈N} that we used for strategy-tuples denoted by (s_i)_{i∈N} in previous chapters. For example, for any T ⊆ N we denote c^T = (c^i)_{i∈T}, we denote c = c^N = (c^i)_{i∈N}, and for each k ∈ N we denote c^{-k} = (c^j)_{j∈N\k}.
We now describe the payoff functions of the players. Suppose the
players play strategy profile c E S. To determine the resulting payoffs
to the players, we also have to determine the network that is formed.
We start by determining the set l(c) of links that the players are willing
to form according to strategy profile c. Because it takes the consent of

both players to form the link between them, we obtain

l(c) = {ij | c^i_j, c^j_i ∈ R+}.     (9.2)

To see which of the links in l(c) will actually be formed, we have to de-
termine whether the claims of the players are feasible. Network (N,l(c))
partitions the player set into components. The links in l(c) between the
players in such a component can actually be formed if and only if the
total payoffs that the players in this component claim to form them do
not exceed the profit that they can obtain. If this is not the case then
the links will not be formed because the players cannot get their claims.
Hence, the set L(c) of links that are formed equals

L(c) = {ij ∈ l(c) | Σ_{km∈l(c): k,m∈C_i(l(c))} (c^k_m + c^m_k) ≤ v(C_i(l(c)))},     (9.3)

where C_i(l(c)) denotes the component of network (N, l(c)) that contains
player i. This construction of (N,L(c)) implies that if some players in a
component are too greedy by claiming large amounts on their links, all
players in the component can end up being isolated, receiving a payoff
of zero.
An alternative approach would be to punish only greedy players by
not allowing them to form the links for which they claim high amounts,
while allowing the other players in a component to form their links.
One could start by eliminating the links for which players state the
highest claims and repeat this until the claims on the links that are
left are feasible. However, it is not obvious that the player with the
highest claim is claiming too much. Such a player might be essential for
obtaining joint profits. We therefore think that it is not a good idea to
follow this procedure for eliminating links.
Instead of eliminating links for which the players state high claims,
one could focus on coalitions of players that can afford to pay their
members the claims they state for forming the links between them. This
approach seems reasonable at first, but runs into trouble when there are
several such coalitions while their union is not able to pay for all the
claims. Consider, for example, the 3-person game (N, v) with v(N) =
100, v(T) = 50 if ITI = 2, and v(T) = 0 otherwise. Suppose the players
play the strategy profile c in which c^i_j = P if i = j and c^i_j = 25 for each i, j ∈ N with i ≠ j. According to this strategy profile, any one of the
three possible links can be formed and the two players who form a link
can then receive their claims for it. However, if all three players were to
form all their links, then they cannot receive the claims for those links.
So, the method of selecting the largest coalition of players who can pay

for the links that their members want to form with each other leads to
a selection problem.
There appears to be no straightforward manner to identify the players
who claim too much. Rather than solving this selection problem by
superimposing which links are formed in a situation in which players
collectively claim too much, we take the point of view that this should
have consequences for all players involved. We opt for a method in
which greediness has severe consequences in the sense that no links are
formed between players in a component of (N, I (c)) if the players in such
a component collectively claim more than the coalition can afford to pay.
Note that this method will give the players strong incentives to fine-tune
their claims to reflect their strengths and weaknesses in the game (N, v).
Now that we have determined the network that is formed if the players play strategy tuple c, the payoffs to the players can be found by adding their claims for the links that are actually formed, i.e.,

f_i(c) = Σ_{j: ij∈L(c)} c^i_j.

Note that this indeed gives an isolated player a payoff of zero, because
the empty sum is equal to zero.
We refer to the game (N; (S_i)_{i∈N}; (f_i)_{i∈N}) with the strategy sets and payoff functions as defined above as the link and claim game and we denote it by Γ^lc(N, v). We illustrate this game in the following example.

EXAMPLE 9.1 Let (N, v) be the 3-person coalitional game with N = {1, 2, 3} and characteristic function v defined by

         0    if |T| = 1;
v(T) =   60   if |T| = 2;
         144  if T = N.

Consider the strategy profile

c = (c^1, c^2, c^3) = ((P, 20, 20), (20, P, 20), (P, 20, P)) ∈ S

in the corresponding link and claim game. This strategy profile is rep-
resented in figure 9.1. An arrow pointing from a player i to another
player j represents that i is willing to form a link with j and the number
written next to such an arrow is the claim c^i_j. The omission of an arrow from i to j indicates that c^i_j = P, i.e., player i is not willing to form a
link with player j.

Figure 9.1. Strategy profile c

The link between players 1 and 3 is not in l(c). The reason is that, while player 1 would like to form this link (c^1_3 = 20 ∈ R+), player 3 does not (c^3_1 = P). The link between players 1 and 2 is in l(c), because both players 1 and 2 want to form it. Proceeding in this way, we find l(c) = {12, 23}. The network (N, l(c)) partitions the player set into one component, N/l(c) = {{1, 2, 3}}. If we add the payoffs that the players claim for forming the links in l(c), then we find c^1_2 + c^2_1 + c^2_3 + c^3_2 = 80 ≤ 144 = v(N). We conclude that these claims are feasible for coalition N. Hence, all links in l(c) are formed and L(c) = {12, 23}. The corresponding payoffs to the players are f_1(c) = c^1_2 = 20, f_2(c) = c^2_1 + c^2_3 = 40, and f_3(c) = c^3_2 = 20.
To obtain an example of a strategy profile in which the players claim too much for all the links in l(c) to be formed, consider the profile c̄ = ((P, 40, 40), (40, P, P), (P, 40, P)). It holds that l(c̄) = {12} and c̄^1_2 + c̄^2_1 = 80 > 60 = v(1, 2). Hence, L(c̄) = ∅ and f_i(c̄) = 0 for every i ∈ N. ◊
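The construction of l(c), L(c), and the payoffs f_i(c) is purely mechanical, so the following minimal Python sketch (our own code and names, not the book's) reproduces the computations of this example: the first profile yields L(c) = {12, 23} with payoffs (20, 40, 20), and the second yields no links and zero payoffs.

```python
P = "P"  # "pass": the player does not want to form this link

def components(players, links):
    """Components of the network (players, links)."""
    players = set(players)
    comps = []
    while players:
        stack = [players.pop()]
        comp = set(stack)
        while stack:
            i = stack.pop()
            for a, b in links:
                j = b if a == i else a if b == i else None
                if j in players:
                    players.remove(j)
                    comp.add(j)
                    stack.append(j)
        comps.append(frozenset(comp))
    return comps

def outcome(N, v, c):
    """Return (L(c), payoffs) for a claim profile c, with c[i][j] in R+ or P."""
    # l(c): links that both endpoints are willing to form
    l = [(i, j) for i in N for j in N
         if i < j and c[i][j] != P and c[j][i] != P]
    # L(c): keep the links of a component only if the total claims are affordable
    L = []
    for comp in components(N, l):
        internal = [(i, j) for (i, j) in l if i in comp and j in comp]
        total = sum(c[i][j] + c[j][i] for (i, j) in internal)
        if total <= v(comp):
            L.extend(internal)
    payoffs = {i: sum(c[i][j] for j in N if (min(i, j), max(i, j)) in L)
               for i in N}
    return L, payoffs

# Example 9.1: v equals 0, 60, 144 for coalitions of size 1, 2, 3.
N = (1, 2, 3)
v = lambda S: {1: 0, 2: 60, 3: 144}[len(S)]
c = {1: {1: P, 2: 20, 3: 20}, 2: {1: 20, 2: P, 3: 20}, 3: {1: P, 2: 20, 3: P}}
print(outcome(N, v, c))     # ([(1, 2), (2, 3)], {1: 20, 2: 40, 3: 20})
cbar = {1: {1: P, 2: 40, 3: 40}, 2: {1: 40, 2: P, 3: P}, 3: {1: P, 2: 40, 3: P}}
print(outcome(N, v, cbar))  # 80 > 60 = v(1,2): no links form, payoffs are zero
```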

We conclude this section with a short discussion of an altogether different approach to defining a game that integrates network formation
and payoff division. We could define a strategy of a player to be a state-
ment on a network that he wants to be formed and a claim for this
network. Assume that in this alternative model the links between the
players in a component of a network are formed if and only if all play-
ers in this component want to form this set of links and, moreover, if
their claims are feasible. A model defined in such a way would be a
straightforward extension of the model of coalition formation by Borm
and Tijs (1992) to networks in which the internal structure of a coalition
is specified. Note that this formulation requires a player to express his
opinion on links that will or will not be formed between other players,
in which the original player takes no part.

9.2 NASH EQUILIBRIUM


In the current section we study the networks and payoffs that are
supported by Nash equilibria of the link and claim games rlC(N,v).
One of the questions that we are interested in is which payoff vectors in
the imputation set of an underlying coalitional game (N, v) can possibly
emerge as the payoffs in a Nash equilibrium.
We start with an example.

EXAMPLE 9.2 Consider the 3-player game (N, v) in example 9.1. We show that, for this game, several possible networks are supported by Nash equilibria of the link and claim game Γ^lc(N, v). If each player i ∈ N plays strategy c^i = (P, P, P), the empty network is formed. These strategies form a Nash equilibrium, because no player can unilaterally enforce the formation of a link.
The network with only link 12 is formed if the players play strategy profile c in which c^1 = (P, 20, P), c^2 = (40, P, P), and c^3 = (P, P, P). It is easily seen that these strategies form a Nash equilibrium and result in the payoff vector (20, 40, 0). Note that for any x ∈ [0, 60] the strategy profile c in which c^1 = (P, x, P), c^2 = (60 - x, P, P), and c^3 = (P, P, P) is a Nash equilibrium which results in the formation of network (N, {12}) and payoff vector (x, 60 - x, 0). Therefore, we see that we cannot associate a unique equilibrium-payoff vector with network (N, {12}). We easily reach similar conclusions for networks (N, {13}) and (N, {23}).
For an example of a Nash equilibrium that results in the formation of
a network with two links, say 12 and 23, consider the strategy profile c
in which c^1 = c^3 = (P, 48, P) and c^2 = (24, P, 24). The resulting payoff vector is (48, 48, 48). We refer to player 2 as the middleman in network (N, {12, 23}), because players 1 and 3 can only communicate with each
other through player 2. Player 2 is the only player in the network who
can break one of his links and still be connected to another player. This
possibility gives him some leverage in claims and therefore restricts the
set of payoff vectors that can be attained in a Nash equilibrium that
results in the formation of the two links 12 and 23.
Finally, we consider the complete network. A strategy profile c that results in the formation of this network has to satisfy c^i_j ∈ R+ for each i, j ∈ N, i ≠ j, and c^1_2 + c^1_3 + c^2_1 + c^2_3 + c^3_1 + c^3_2 ≤ 144. If c ∈ NE(Γ^lc(N, v)), then no player can increase his payoff by unilaterally deviating to a strategy in which he simply raises one of his claims. This implies that c^1_2 + c^1_3 + c^2_1 + c^2_3 + c^3_1 + c^3_2 = 144. This, in turn, implies that at least one player claims a positive amount for the formation of some link. Without loss of generality, we assume that c^1_2 > 0. Player 1

can receive this claim only if player 2 wants to form a link with player 1. Hence, if player 2 changes his strategy such that c^2_1 = P, player 1 will not receive his claim c^1_2. Note that the players can still obtain v(N) = 144 in network (N, {13, 23}) and that player 2 can therefore increase his claim on the link with player 3 by c^1_2 + c^2_1 if he refuses to form the link with player 1. Denoting c̄^2 = (P, P, c^2_1 + c^1_2 + c^2_3), we see that f_2(c^1, c̄^2, c^3) = c^2_1 + c^1_2 + c^2_3 > c^2_1 + c^2_3 = f_2(c). Because player 2 has a profitable deviation from it, c is not a Nash equilibrium. We conclude that the complete network is not supported by a Nash equilibrium.
Summarizing, we find that, for the game (N, v) in this example, the complete network is the only network that is not supported by a Nash equilibrium of the link and claim game Γ^lc(N, v). ◊

The method that we followed in example 9.2 to show that the complete
network is not supported by a Nash equilibrium can straightforwardly
be extended to more general situations. Using it, we can show that for
any coalitional game (N, v) it holds that a strategy profile in rIC(N, v)
is not a Nash equilibrium if it results in the formation of a network
containing a cycle and if at least one player claims a positive amount on
one of the links in the cycle. This implies that a Nash equilibrium does
not support a network that contains cycles, unless all players claim zero
for the formation of the links in such a cycle. We state this formally in
the following theorem.

THEOREM 9.1 Let (N, v) be a zero-normalized coalitional game. For every Nash equilibrium c in the link and claim game Γ^lc(N, v) it holds that all claims on links in cycles in (N, L(c)) are equal to zero.

PROOF: Let c be a Nash equilibrium in Γ^lc(N, v). If the resulting network (N, L(c)) does not contain a cycle, then the statement in the theorem is trivially satisfied. So, assume that (N, L(c)) contains a cycle. Suppose that one of the players claims a positive amount on one of the links in the cycle. We will show that this leads to a contradiction.
Let i, j ∈ N be such that link ij is in a cycle in (N, L(c)) and such that c^i_j > 0. Because link ij is in a cycle, there is a player k ∈ N\{i, j} such that jk ∈ L(c). We define a new strategy c̄^j for player j by c̄^j = (c^j_{N\{i,k}}, c̄^j_i, c̄^j_k), where c̄^j_i = P and c̄^j_k = c^j_k + c^j_i + c^i_j. In strategy c̄^j player j has the same attitude as in c^j towards forming links with players other than i and k, but player j now refuses to form a link with player i and increases his claim on the link with player k by the amount that players i and j together were previously obtaining for the formation of link ij. It follows easily that the new claims are feasible for the players in component C_j(L(c)) = C_j(L(c)\ij) and that

f_j(c^{-j}, c̄^j) = f_j(c) + c^i_j > f_j(c).

We conclude that c cannot be a Nash equilibrium of Γ^lc(N, v), which is a contradiction. □

An implication of theorem 9.1 is that for a game (N, v) with at least three players and a positive value for the grand coalition (v(N) > 0),
the complete network is not supported by a Nash equilibrium.
In the following theorem, we concentrate on payoff vectors. We describe conditions on payoff vectors in the imputation set I(N, v) (see section 1.1) that identify which of these payoff vectors are supported by Nash equilibria of the game Γ^lc(N, v). The theorem is limited to 3-
player games that satisfy some rather mild conditions. For convenience,
we denote the players in a player set N with 3 players by 1, 2, and 3.

THEOREM 9.2 Let (N, v) be a zero-normalized 3-player coalitional game such that v(N) > 0 and v(N) > v(T) for every 2-player coalition T, and let x be an imputation of (N, v). Then there exists a Nash equilibrium c of the link and claim game Γ^lc(N, v) such that f(c) = x if and only if at least two of the following conditions are satisfied:

x1 + x2 ≥ v(1, 2),     (9.4)
x1 + x3 ≥ v(1, 3),     (9.5)
x2 + x3 ≥ v(2, 3).     (9.6)

PROOF: We first prove the if-part. We assume, without loss of generality, that the first two inequalities hold. Consider the strategy profile c defined by c^1 = (P, x1/2, x1/2), c^2 = (x2, P, P), and c^3 = (x3, P, P). Obviously, L(c) = {12, 13} and f(c) = (x1, x2, x3). It is easily seen that c is a Nash equilibrium of Γ^lc(N, v).
To prove the only-if-part, let c be a strategy profile in Γ^lc(N, v) such that f(c) = x and suppose that at most one of the three inequalities holds. Because x ∈ I(N, v), it holds that x1 + x2 + x3 = v(N) > 0. Together with the condition that v(N) > v(T) for all T ⊂ N, this implies that network (N, L(c)) is connected. If this were not the case, the claims of the players would not be feasible. Using theorem 9.1, we conclude that there can be no cycles, so that it follows that L(c) consists of exactly two links. Without loss of generality, we assume that L(c) = {12, 13}. Because at most one of the inequalities (9.4), (9.5), and (9.6) holds, we know that either x1 + x2 < v(1, 2) or x1 + x3 < v(1, 3) holds. Without loss of generality, we suppose that x1 + x2 < v(1, 2). Then player 1 can improve his payoff by switching to strategy (P, v(1, 2) - x2, P), thereby breaking the link with player 3 and inducing the formation of network (N, {12}). We conclude that c is not a Nash equilibrium. □

We apply theorem 9.2 in the following example.

EXAMPLE 9.3 Consider the coalitional game (N, v) with player set N = {1, 2, 3} and characteristic function v described by

         0    if |T| = 1;
         120  if T = {1, 2};
v(T) =   60   if T = {1, 3};
         80   if T = {2, 3};
         180  if T = N.

Using theorem 9.2, we can easily determine which of the payoff vectors in the imputation set I(N, v) are supported by Nash equilibria of Γ^lc(N, v). We represent these payoff vectors in figure 9.2. The triangle in this figure is a two-dimensional representation of the imputation set, which is the intersection of the hyperplane described by x1 + x2 + x3 = 180 with the nonnegative orthant R^3_+. The shaded area in figure 9.2 corresponds to the set of imputations that are supported by Nash equilibria. ◊
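For concreteness, the criterion of theorem 9.2 is easy to evaluate. The short Python snippet below (our own, with the characteristic function of this example hard-coded) checks whether a given imputation of the game above satisfies at least two of the conditions (9.4)-(9.6).

```python
# Game of example 9.3: v(1,2) = 120, v(1,3) = 60, v(2,3) = 80, v(N) = 180.
v12, v13, v23, vN = 120, 60, 80, 180

def supported_by_nash(x):
    """Theorem 9.2: an imputation x is supported by a Nash equilibrium of the
    link and claim game iff at least two of (9.4)-(9.6) hold."""
    x1, x2, x3 = x
    assert min(x) >= 0 and abs(x1 + x2 + x3 - vN) < 1e-9  # x is an imputation
    conditions = [x1 + x2 >= v12, x1 + x3 >= v13, x2 + x3 >= v23]
    return sum(conditions) >= 2

print(supported_by_nash((60, 60, 60)))   # True: all three conditions hold
print(supported_by_nash((105, 5, 70)))   # False: only x1 + x3 >= v(1,3) holds
```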

We see in example 9.3 that there exist 3-person coalitional games for
which not all payoff vectors in the imputation set are supported by Nash
equilibria. We can, however, identify an appealing set of imputations
that are supported by Nash equilibria. Because every core allocation
satisfies inequalities (9.4), (9.5), and (9.6), it follows from theorem 9.2
that for 3-player games that satisfy the conditions in the theorem it holds
that every core allocation can be supported by a Nash equilibrium. We
extend this result to coalitional games with more than three players in
the following theorem.

THEOREM 9.3 For any zero-normalized coalitional game (N, v) it holds that every one of its core allocations is supported by a Nash equilibrium of the link and claim game Γ^lc(N, v).

PROOF: Let (N, v) be a zero-normalized coalitional game and let x ∈ C(N, v) be a payoff vector in its core. We will construct a Nash equilibrium c of Γ^lc(N, v) such that f(c) = x.

[Figure 9.2: the imputation triangle with vertices (180,0,0), (0,180,0), and (0,0,180); the line x1 + x2 = 120 and the points (100,80,0) and (60,120,0) are indicated]
Figure 9.2. The shaded area represents the payoff vectors in the imputation set supported by Nash equilibria

Fix a player i ∈ N and consider the strategy profile c defined by

c^i_k = x_i / (n - 1)   for all k ∈ N\i,
c^i_i = P,

and, for every j ∈ N\i,

c^j_k = P   for all k ∈ N\i,
c^j_i = x_j.

This strategy profile results in the formation of a star with player i as the central player and payoff vector x.
We will show that c is a Nash equilibrium of Γ^lc(N, v). Any player j ∈ N\i cannot improve his payoff by unilateral deviation, because x is individually rational and by deviating player j can only break his link with player i and become isolated. It remains to show that player i does not have a profitable deviation. Let c̄^i ∈ S_i be an arbitrary deviation by player i. All links that are formed when the players play the new strategy profile (c̄^i, c^{-i}) involve player i and every player j who is (directly) connected to player i in the new network receives his claim

x_j. Using that x ∈ C(N, v), we derive

Σ_{j∈C_i(L(c̄^i,c^{-i}))\i} f_j(c̄^i, c^{-i}) = Σ_{j∈C_i(L(c̄^i,c^{-i}))\i} x_j ≥ v(C_i(L(c̄^i, c^{-i}))) - x_i.

By the construction of L(c̄^i, c^{-i}) we also have

Σ_{j∈C_i(L(c̄^i,c^{-i}))} f_j(c̄^i, c^{-i}) ≤ v(C_i(L(c̄^i, c^{-i}))).

Putting the two inequalities together, we conclude that f_i(c̄^i, c^{-i}) ≤ x_i has to hold, so that the deviation is not profitable for player i. □
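The equilibrium constructed in this proof can be written down explicitly. The following minimal Python sketch (our own naming) builds the strategy profile used above from a payoff vector x and a chosen central player; as an illustration it uses the allocation (70, 110, 0), which is in the core of the game of example 9.3 and reappears in example 9.5.

```python
# Sketch of the Nash equilibrium constructed in the proof of theorem 9.3:
# the chosen central player `hub` claims x_hub / (n - 1) on each of his links,
# and every other player j claims x_j on his single link with the hub.
P = "P"  # "pass"

def star_profile(N, x, hub):
    n = len(N)
    profile = {}
    for i in N:
        if i == hub:
            profile[i] = {j: (P if j == hub else x[hub] / (n - 1)) for j in N}
        else:
            profile[i] = {j: (x[i] if j == hub else P) for j in N}
    return profile

# With player 1 as the hub this supports the star {12, 13} and payoffs (70, 110, 0).
print(star_profile((1, 2, 3), {1: 70, 2: 110, 3: 0}, hub=1))
```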

We conclude this section with some remarks on how the results in theorems 9.2 and 9.3 can be used to find equilibrium payoff vectors that are not in the imputation set. Let (N, v) be a zero-normalized coalitional game. If c is a Nash equilibrium of the link and claim game Γ^lc(N, v), then it must be the case that (f_i(c))_{i∈B} ∈ I(B, v|B) for every component B ∈ N/L(c) of the network that is formed. Also, if c is a Nash equilibrium of Γ^lc(N, v) and B ∈ N/L(c), then c generates a Nash equilibrium of Γ^lc(B, v|B), which is obtained by restricting the strategies of the players in B to claims on links with other players in B. On the other hand, if we have a partition of N and for each partition element B a Nash equilibrium of the game Γ^lc(B, v|B), then we find a Nash equilibrium of Γ^lc(N, v) by extending the strategies of all players so that no player wants to form a link with a player in another partition element.

9.3 STRONG NASH EQUILIBRIUM


In the current section we consider strong Nash equilibria of the link
and claim games. Like in the previous section, our focus is on payoff
vectors that are supported by equilibria.
In the following example we show that strong Nash equilibria of the
link and claim game do not necessarily result in payoff vectors that are
in the core of the underlying coalitional game.

EXAMPLE 9.4 Let (N, v) be a 4-person coalitional game with player set N = {1, 2, 3, 4} and characteristic function v defined by

         0   if |T| = 1;
v(T) =   2   if |T| = 2 or |T| = 3;
         3   if T = N.

It is not hard to see that strategy profile c with c^1 = (P, 1, P, P), c^2 = (1, P, P, P), c^3 = (P, P, P, 1), and c^4 = (P, P, 1, P) is a strong Nash

equilibrium. The resulting payoff vector f(c) = (1, 1, 1, 1) is not in the core of the game because the sum of the payoffs is larger than the value of the grand coalition. In fact, the core of the game (N, v) is empty because it is impossible to find payoffs x1, x2, x3, and x4 such that x1 + x2 ≥ 2, x3 + x4 ≥ 2, and x1 + x2 + x3 + x4 = 3 are simultaneously satisfied. ◊
The result that we obtain in the previous example seems to emanate
from the fact that we can partition the player set into coalitions in such
a way that the sum of the values of the partition elements exceeds the
value of the grand coalition. This inspires us to consider coalitional
games (N, v) that satisfy the condition

v(N) ≥ Σ_{k=1}^{t} v(E_k)   for all partitions {E_1, ..., E_t} of N.     (9.7)

The interpretation of this condition is that the sum of the values of the
coalitions in a partition of the player set is less than or equal to the value
of the grand coalition. We point out that any game that has a nonempty
core must necessarily satisfy condition (9.7).
In the following theorem we show that for a game (N, v) satisfying
condition (9.7) every strong Nash equilibrium of the link and claim game
Γ^lc(N, v) results in a payoff vector in the core of (N, v).
THEOREM 9.4 Let (N, v) be a zero-normalized coalitional game that satisfies condition (9.7). Then for every strong Nash equilibrium c of the link and claim game Γ^lc(N, v) it holds that f(c) is in the core of (N, v).

PROOF: Let c be a strong Nash equilibrium of the link and claim game Γ^lc(N, v). Suppose x = f(c) ∉ C(N, v). We will derive a contradiction. Because v(N) ≥ Σ_{k=1}^{t} v(E_k) for all partitions {E_1, ..., E_t} of N it follows that Σ_{i∈N} f_i(c) ≤ v(N). Let T ⊆ N be a coalition whose members collectively receive less than the value v(T). We know that such a coalition exists because x ∉ C(N, v). Now, we define ε = (v(T) - Σ_{j∈T} x_j) / |T| > 0 and fix an i ∈ T. Consider the deviation c̄^T by the players in coalition T defined by

c̄^j_k = P                       for all j ∈ T\i and k ∈ N\i,
c̄^j_i = x_j + ε                 for all j ∈ T\i,
c̄^i_k = P                       for all k ∈ (N\T) ∪ i,
c̄^i_k = (x_i + ε) / (|T| - 1)   for all k ∈ T\i.

If the players in T play c̄^T, then a star that includes all players in T is formed in which player i is the central player, and each player j ∈ T receives a payoff x_j + ε. This contradicts that c is a strong Nash equilibrium of Γ^lc(N, v). □

An implication of theorem 9.4 is that the set of strong Nash equilibria of the link and claim game Γ^lc(N, v) is empty if the underlying zero-
normalized coalitional game (N, v) satisfies condition (9.7) and has an
empty core. In the following example we illustrate that not every payoff
vector in the core is necessarily supported by a strong Nash equilibrium.

EXAMPLE 9.5 Consider the game (N, v) that we studied in example 9.3.
Note that this game satisfies condition (9.7). Consider the payoff vector
(60,60,60), which is a core element. It follows from theorem 9.3 that
there exists a Nash equilibrium of rIC(N, v) that supports payoff vector
(60,60,60). Let c be such a Nash equilibrium. Using theorem 9.1, we
know that (N, L(c)) has two links. Suppose that L(c) = {12, 13}. This, together with f(c) = (60, 60, 60), implies that c^1 = (P, c^1_2, 60 - c^1_2) for some 0 ≤ c^1_2 ≤ 60, c^2 = (60, P, c^2_3), and c^3 = (60, c^3_2, P), where either c^2_3 = P or c^3_2 = P (or both). Because f_1(c) = 60, we know that either c^1_2 > 0 or c^1_3 = 60 - c^1_2 > 0 (or both). Without loss of generality, we assume that c^1_2 > 0.
We will argue that c is not a strong Nash equilibrium. Consider the deviation (ĉ^2, ĉ^3) by players 2 and 3 defined by ĉ^2 = (P, P, 60 + c^1_2/2) and ĉ^3 = (60, c^1_2/2, P). Strategy profile (c^1, ĉ^2, ĉ^3) results in the formation of links 13 and 23 and payoff vector (c^1_3, 60 + c^1_2/2, 60 + c^1_2/2). Because c^1_2 > 0, players 2 and 3 have both improved their payoffs through the deviation. We conclude that c is not a strong Nash equilibrium of Γ^lc(N, v).
In a similar manner, we can show that either of the assumptions L(c) = {12, 23} or L(c) = {13, 23} leads to the conclusion that c is not a strong Nash equilibrium of Γ^lc(N, v). Hence, we find that payoff
vector (60,60,60) is not supported by a strong Nash equilibrium.
In the line of reasoning above, the only important aspect of the payoff
vector (60,60,60) is that all its components are positive, so that any
strategy profile e resulting in this payoff vector has a middleman who
gets a positive payoff. Extending this idea, we see that none of the payoff
vectors in the core of (N,v) in which all coordinates are positive can be
supported by a strong Nash equilibrium.
In the core element (70,110,0), one of the players has a payoff of
zero, so that we cannot apply the same reasoning as before. Indeed, the

payoff vector (70, 110, 0) is supported by the strong Nash equilibrium c in which c^1 = (P, P, 70), c^2 = (P, P, 110), and c^3 = (0, 0, P).
We represent the payoffs that are supported by strong Nash equilibria of the game Γ^lc(N, v) in figure 9.3. ◊

[Figure 9.3: the imputation triangle with vertices (180,0,0), (0,180,0), and (0,0,180); the line x1 + x2 = 120 and the points (120,0,60), (100,80,0), and (60,120,0) are indicated]
Figure 9.3. Bold parts represent the payoff vectors in the core that are supported by strong Nash equilibria

In theorem 9.5 we show that the results that we obtained for the game
in example 9.5 hold in general. The following lemma identifies a set of
payoff vectors that are supported by strong Nash equilibria.

LEMMA 9.1 Let (N, v) be a zero-normalized coalitional game that satisfies condition (9.7). Then every core allocation of (N, v) in which at least one player, player i, receives a payoff of zero is supported by a strong Nash equilibrium of Γ^lc(N, v) that results in the formation of a
star in which player i is the central player.

PROOF: Let x ∈ C(N, v) and i ∈ N be such that x_i = 0. Consider the strategy profile c defined by c^i_k = 0 for all k ∈ N\i and c^i_i = P, and for every player j ∈ N\i, c^j_i = x_j and c^j_k = P for all k ∈ N\i. The strategy profile results in the formation of a star that encompasses all players in

N and in which player i is the central player. The payoff vector that is obtained is x.
We will show that c is a strong Nash equilibrium of Γ^lc(N, v). Suppose that the players in a coalition T ⊆ N deviate and choose a strategy profile c̄^T ∈ S_T and that there exists a player j ∈ T such that f_j(c̄^T, c^{N\T}) > f_j(c). We denote C_j = C_j(L(c̄^T, c^{N\T})), the component of the new network that contains player j. Consider the set C_j\T of players who are connected to player j and who are not in the deviating coalition T. Every player k ∈ C_j\T receives the same payoff as he received according to strategy c, namely x_k. We derive from this the following restriction on the payoffs of the players in coalition C_j ∩ T:

Σ_{k∈C_j∩T} f_k(c̄^T, c^{N\T}) ≤ v(C_j) - Σ_{k∈C_j\T} x_k.     (9.8)

However, because x is an element of the core of (N, v), it holds that

Σ_{k∈C_j} x_k ≥ v(C_j).     (9.9)

Combining (9.8) and (9.9), we derive

Σ_{k∈C_j∩T} f_k(c̄^T, c^{N\T}) ≤ Σ_{k∈C_j∩T} x_k.

Because f_j(c̄^T, c^{N\T}) > f_j(c) = x_j, we now know that there exists a player k ∈ T such that f_k(c̄^T, c^{N\T}) < f_k(c). We conclude that at least one player in T experiences a decrease in his payoff by deviating to c̄^T. Because deviation c̄^T was chosen arbitrarily, we may conclude that c is a strong Nash equilibrium of Γ^lc(N, v). □

Using lemma 9.1, we can identify the payoff vectors that are supported
by strong Nash equilibria of the link and claim game. In the first part
of theorem 9.5 we identify a class of games for which the set of payoff
vectors supported by strong Nash equilibria coincides with the core. In
the second part we describe a class of games for which the set of payoff
vectors supported by strong Nash equilibria coincides with the set of
core allocations in which at least one of the players receives a payoff of
zero.

THEOREM 9.5 Let (N, v) be a zero-normalized coalitional game that satisfies condition (9.7) and Γ^lc(N, v) the corresponding link and claim game.

(i) If there exists a partition {B_1, ..., B_t} of N such that |B_k| = 2 for all k ∈ {1, ..., t} and v(N) = Σ_{k=1}^{t} v(B_k), then

{f(c) | c is a strong Nash equilibrium of Γ^lc(N, v)} = C(N, v).     (9.10)

(ii) If v(N) > Σ_{k=1}^{t} v(B_k) for all partitions {B_1, ..., B_t} of N in which |B_k| = 2 for each k ∈ {1, ..., t}, then

{f(c) | c is a strong Nash equilibrium of Γ^lc(N, v)}
    = {x ∈ C(N, v) | ∃ i ∈ N: x_i = 0}.     (9.11)

PROOF: It follows from theorem 9.4 that only payoff vectors in the core of (N, v) are supported by strong Nash equilibria of Γ^lc(N, v).
To prove part (i), assume that there exists a partition {B_1, ..., B_t} of N such that |B_k| = 2 for all k ∈ {1, ..., t} and v(N) = Σ_{k=1}^{t} v(B_k). Let {B'_1, ..., B'_t} be such a partition. It remains to prove that every core allocation is supported by a strong Nash equilibrium. Let x ∈ C(N, v) be a core allocation. Note that according to this core allocation the sum of the payoffs to the players in any coalition B ∈ {B'_1, ..., B'_t} equals v(B). Consider the strategy profile c defined as follows. Let B ∈ {B'_1, ..., B'_t} be a partition element and denote the two players in this partition element by i and j. Then the strategy of player i is defined by c^i_j = x_i and c^i_k = P for all k ∈ N\j. Similar strategies for player j and for the players in the other components result in the formation of a network with components B'_1, ..., B'_t and in payoff vector x. Now, arguments similar to those in the proof of lemma 9.1 show that c is a strong Nash equilibrium.
In order to prove part (ii), assume that v(N) > Σ_{k=1}^{t} v(B_k) for every partition {B_1, ..., B_t} of N such that |B_k| = 2 for each k ∈ {1, ..., t}. One of the implications that we are looking for is proven in lemma 9.1, namely that every core allocation in which at least one player receives a payoff of zero is supported by a strong Nash equilibrium. It remains to prove that no other core allocations are supported by a strong Nash equilibrium. Suppose that c is a strategy profile in the link and claim game Γ^lc(N, v) that supports a payoff vector x ∈ C(N, v) such that x_i > 0 for each i ∈ N. We will show that c is not a strong Nash equilibrium. Because f(c) = x and x ∈ C(N, v), we know that Σ_{B∈N/L(c)} v(B) = v(N). This implies that there exists a B ∈ N/L(c) such that |B| = 1 or |B| ≥ 3. Consider such a component B. If |B| = 1 then f_i(c) = 0 for the unique player i ∈ B and, consequently, f_i(c) ≠ x_i. We conclude

that |B| ≥ 3 has to hold. This implies that there is at least one player i ∈ B who is directly connected to at least two other players. Arguments similar to those in example 9.5 show that such a player cannot receive a positive payoff in a strong Nash equilibrium of Γ^lc(N, v). This proves that there is no strong Nash equilibrium of Γ^lc(N, v) such that f(c) = x. □
We point out that the condition in the second part of theorem 9.5 is
trivially satisfied for a game (N, v) with an odd number of players.
In spite of the conclusion that strong Nash equilibria of the link and
claim games often exist, the strong Nash equilibrium concept seems quite
restrictive. For a large class of coalitional games, strong Nash equilibria
result in at least one of the players receiving a payoff of zero. Actually,
we find that one of the players receiving a payoff of zero will be connected
to several, possibly even all, other players. He might even be the central
player in a star. The payoff of a player in such a central position is
kept low in a strong Nash equilibrium because other players can avoid
going through him by forming new links between themselves. Note,
however, that such deviations are not necessarily stable against further
deviations. This motivates us to consider coalition-proof Nash equilibria
in the following section.

9.4 COALITION-PROOF NASH EQUILIBRIUM
In the current section we study coalition-proof Nash equilibria as a
compromise between Nash equilibria and strong Nash equilibria. Our
main interest is in the set of payoff vectors of a game (N, v) that are
supported by coalition-proof Nash equilibria of the associated link and
claim game rIC(N, v). Throughout this section we restrict ourselves to
games with three players. While studying 3-player games is obviously
restrictive, it will nevertheless give us an idea about what type of payoff
vectors are supported in equilibria.
We start with an example to illustrate coalition-proof Nash equilibria
of a link and claim game and the curiosities that arise.

EXAMPLE 9.6 Consider the coalitional game (N, v) with player set N = {1, 2, 3} and characteristic function v described by

         0    if |T| = 1;
         120  if T = {1, 2};
v(T) =   60   if T = {1, 3};
         80   if T = {2, 3};
         180  if T = N.

Note that this is the game that was studied in examples 9.3 and 9.5.
We concentrate on payoff vector x = (100,20,60), which is in the core of
(N, v), but not supported by a strong Nash equilibrium of the associated
link and claim game r1C(N, v) (see example 9.5). We will show that x is
supported by a coalition-proof Nash equilibrium.
Consider the strategy profile c defined by c l = (F, 100, F), c2 =
(lO,F,lO), and c3 = (F,60,F). It holds that L(c) = {12,23} and
f(c) = x. We will show that c is a coalition-proof Nash equilibrium
of r1C(N, v).
It is easily seen that no player can unilaterally deviate to a strategy
that gives him a higher payoff, and we conclude that c is a Nash equilib-
rium. Also, because Xl + X2 + X3 = 180 = v(N), we know that there are
no deviations by coalition N that increase the payoffs of all players. To
prove that c is a coalition-proof Nash equilibrium, we have to show that
there are no profitable deviations by 2-player coalitions that are stable
against further deviations.
We start by considering a deviation by coalition {I, 2}. The sum
of the payoffs received by players 1 and 2 according to c equals the
value of coalition {I, 2}. Hence, to improve their payoffs, players 1 and
2 need to deviate to a strategy profile that induces the formation of
a network in which they are connected with player 3. However, the
strategy of player 3 implies that he will still receive 60 after the deviation
by coalition {I, 2}. Hence, players 1 and 2 together cannot obtain more
than v(N) - 60 = 120 = Xl + X2 after the deviation. This shows that
they cannot both improve their payoffs. Similar arguments show that
there are no profitable deviations by coalition {2,3}.
It remains to consider deviations by coalition {1, 3}. Because x1 + x3 = 160 > v(1, 3), any profitable deviation by players 1 and 3 results in the formation of a connected network. To improve their payoffs, players 1 and 3 have to break a link with player 2 on which player 2 has a positive claim. Without loss of generality, assume that link 12 will be broken. This is represented by the strategies

c̄^1 = (P, P, 100 + 10α)      with 0 < α < 1,
c̄^3 = (c̄^3_1, c̄^3_2, P)      with c̄^3_1 + c̄^3_2 = 60 + 10β, 0 < β ≤ 1 - α.

This deviation, however, is not stable against further deviations be-


cause player 3 can deviate from strategy profile (c l , c 2 , ( 3 ) by playing
(I = (F, 70, F), thereby inducing the formation of network (N, {23})
and improving his payoff from 60 + lOp to 70 = v(2, 3) - c~. Because (;3
is a coalition-proof Nash equilibrium in the reduced game that emerges
when the strategies of players 1 and 2 are fixed to c1 and c 2 , respectively,
Coalition-proof Nash equilibrium 231

it follows that deviation (C l , ( 3 ) is not self-enforcing. We conclude that


c is coalition-proof Nash equilibrium of rlC(N, v). 0
The previous example shows how the requirement that deviations be
self-enforcing weeds out the set of possible deviations, which, in turn,
implies the stability of some strategies that were not stable against ar-
bitrary deviations. The remainder of this section is dedicated to the
problem of finding all payoff vectors supported by coalition-proof Nash
equilibria of the link and claim games associated with 3-player coali-
tional games. A remarkable result is that there exist coalition-proof
Nash equilibria that result in payoff vectors outside the cores of the
underlying coalitional games.
In order to prove the main theorems in this section we need a series
of lemmas. The first lemma identifies a set of payoff vectors that are
supported by coalition-proof Nash equilibria, which are not supported
by strong. Nash equilibria.

LEMMA 9.2 Let (N,v) be a 3-player zero-normalized coalitional game


that satisfies condition (9.7). Let x E J(N, v) be such that at least
one player j receives exactly his marginal contribution, Xj = v(N) -
v(N\j), and at least one other player k receives at most his marginal
contribution, Xk :::; v(N) - v(N\k). Then there exists a coalition-proof
Nash equilibrium c of the link and claim game rlC(N, c!)) such that L(c) =
{ij, ik} and f (c) = X, where i denotes the remaining player.

PROOF: We assume, without loss of generality, that N = {I, 2, 3}, X2 =


v(N) - v(l, 3) and X3 :::; v(N) - v(l, 2). Because x E J(N, v) and, hence,
Xl +X2+X3 = v(N), it follows that Xl ;::: v(l, 2)+1)(1, :\) -v(N). Consider
the strategy profile c defined by

c l =(P,Xl,O),
c2 =(X2' P, P),
3
C =(X3, P, P).
We will prove that c is a coalition-proof N ash equilibrium of rlc(N, v).
Because Xl + X2 ;::: v(l, 2) and Xl + X3 = v(l, 3), coalition {2,3} is the

°
only coalition that can possibly deviate to a strategy profile that results
in a higher payoff to each of its members. If Xl = then X2 + X3 =
v(N) ;::: v(2, 3), so that coalition {2, 3} cannot deviate to a strategy that
is profitable for both players 2 and 3.
From now on, we assume that Xl > 0. Given the strategy of player
1, coalition {2,3} has at most two possibilities to deviate and obtain
232 A one-stage model of network formation and payoff division

higher payoffs to both its members. One possibility for them is to break
exactly one of the links with player 1 and form link 23. Because c~ = 0
and c~ > 0, player 2 will break the link with player 1. This is represented
by the strategies

with 0 < ex < 1,


with CI + c~ = X3 + ;JX 1, 0 < ;J S; 1 - ex.

Now, player 3 can break the link with player 2 and form a link with
player 1 only,
(';3 = (Xl + X3, P, P),

thereby improving his payoff from X3 + ;JXI to X3 +XI = v(1, 3). Because
player 1 claims zero on the link with player 3, the claims on link 13 are
indeed attainable.
Another possible deviation by coalition {2, 3} exists if its value is large
enough, more precisely, if v(2, 3) > X2 + X3. Players 2 and 3 can then
form a coalition on their own and improve their payoffs. This results
from the strategies

c2 =(P, P, c§),
c3 =(P, c~, P),
with c§ > X2, c~ > X3, and c§ + c~ S; v(2,3). Because c§ > X2 and
Xl + X2 + v(N) 2': v(2, 3), it holds that X3 < c~ < X3 + Xl. It follows
X3 =
that there exists ;J E (0,1) such that d = X3 + ;JXI. Again, player 3 can
achieve a further improvement in his payoff by playing (';3.
We conclude that none of the profitable deviations by coalition {2, 3}
are self-enforcing and that c is a coalition-proof Nash equilibrium of
rIC(N, v). 0

Note that an extreme point of the core which is not on the boundary
of the imputation set attributes to two players their marginal contribu-
tion to the grand coalition. The remaining player receives at most his
marginal contribution. Lemma 9.2 above then implies that such a payoff
vector is supported by a coalition-proof Nash equilibrium.
In lemmas 9.1 and 9.2 we have identified two sets of payoff vectors
that are supported by coalition-proof Nash equilibria. We will show
in the following lemmas that no other payoff vectors in the imputation
set are supported by coalition-proof Nash equilibria that result in the
formation of a network with two links if the underlying coalitional game
has a nonempty core.
For convenience, for a network (N, L) in which INI = 3 and ILl = 2,
we will denote the middleman by i and the two other players by j and
Coalition-proof Nash equilibrium 233

k. With this notation it holds that L = {ij,ik}. The strategies of the


players in rIC(N, v) have three coordinates, which we order in such a
way that the first coordinate corresponds to player i, the second to j,
and the third to k. Hence, we denote

ei =(P, ej, e).J with ej, 4 E R+,


d =(c{, P, c{) with c{ E R+,
ek =(ef,eJ,P) with ef E R+.
If L(e) = L then it follows that ck
= P or eJ = P has to hold. Since we
are interested in Nash equilibria, we assume that e~ +ek+cI +ef = v(N).

LEMMA 9.3 Let (N, v) be a 3-player zero-normalized coalitional game


that satisfies condition (9.7). Let strategy profile e in the link and
claim game rIC(N,v) be such that L(e) = {ij,ik}. If ej + ek = and
f(e) E J(N, v)\C(N, v) then e is not a coalition-proof Nash equilibrium
°
of rIC(N, v).

PROOF: Suppose that f(e) E J(N, v)\C(N, v) and e~ + 4 = 0. Then,


since v(i) = v(j) = v(k) = 0, we find that at least one of the following
three inequalities holds:

cI < v(i,j), (9.12)


ef < v(i, k), (9.13)
cI +
. k
ei
.
< v(), k). (9.14)

Inequality (9.14) does not hold because c{ +ef


= v(N) and v(T) :::; v(N)
-ef
then ci = (P,v(i,j) -c{,P)
for each T ~ N. Ifv(i,j) -c{ ~ v(i,k)
is a self-enforcing deviation. Otherwise, c = (P, P, v(i, k) -
i en
is a
self-enforcing deviation. In both cases, player i improves his payoff. We
conclude that e is not a coalition-proof Nash equilibrium of ric (N, v). 0

The following lemma identifies another sufficient condition for a strat-


egy profile not to be a coalition-proof Nash equilibrium.

LEMMA 9.4 Let (N, v) be a 3-player zero-normalized coalitional game


that satisfies condition (9.7) and that has a nonempty core. Let strategy
profile c in the link and claim game r 1c ( N, v) be such that L (c) = {ij, ik}.
234 A one-stage model of network formation and payoff division

If C~ +4 > 0, Cf -:/:- v(N) - v(i, k), and c~ -:/:- v(N) - v(i,j), then C is not
a coalition-proof Nash equilibrium of r1C(N, v).

PROOF: We will prove the lemma by contradiction. Suppose c~ +4 > 0,


Cf -:/:- v(N) -v(i, k), and c~ -:/:- v(N) -v(i,j). Assume that C is a coalition-
proof Nash equilibrium. Then, obviously, c~ + ck + c{ + c~ = v(N). We
denote x = f(c).
Because c is a coalition-proof Nash equilibrium, player i cannot devi-
ate to a strategy that improves his payoff by breaking exactly one link
and claiming the highest possible payoff on the other link. This implies
that
(9.15)
and
Xi + Xk = Cji + Cki + Cik >_ v (.z, k) . (9.16)

From c)+4+Cf +c~ = v(N), Cf -:/:- v(N)-v(i, k), and c~ -:/:- v(N)-v(i,j),
it follows that neither (9.15) nor (9.16) can hold with equality. Therefore,

(9.17)

and
i
Cj + cki + cik > v (.z, k) . (9.18)
Suppose that 4 > O. We define a deviation for coalition {j, k} in
which player k breaks his link with player i and players j and k form
link jk and, moreover, both players j and k improve their payoffs. Let

.
wIth Ck
-' . = max {d.
2'Ck + v(z,]) -
k z .,
xi - Xj , }
ci
ck =(P, cj, P) .
wIth
-k
Cj
.
= xk + mm { 2,Xi + Xj - v(z,]) } .
k . ,

c
By construction of cJ and k it follows that Ck + = Xk + cJ
and, 4
hence, the claims are attainable. Because 4
> 0 and Xi + Xj - v(i,j) =
c~ + 4 + c{ - v(i,j) > 0, it follows that c{ > 0 and cJ
> Xk, so that both
players j and k improve their payoffs.
Because C is a coalition-proof Nash equilibrium of r1C(N, v), either
player j or player k has to have a profitable further deviation from
(c i , cJ, ck ). Player k clearly cannot improve his payoff any further. Also,
player j cannot improve his payoff any further by breaking the link with
player k because cj +Xj +Ck :2: c~ +Xj + 4 +v(i, j) - Xi - Xj = v(i, j). So,
deviation (cJ,c k ) is not self-enforcing only ifc~ > v(N)-v(j,k), because
Coalition-proof Nash equilibrium 235

then player j can further improve his payoff by breaking the link with
player i and claiming v(j, k) - cJ on the link with player k.
We have now established that 4 > 0 implies c~ > v(N) - v(j, k). In
an analogous manner, we derive that c~ > 0 implies 4
> v(N) - v(j, k).
Because cj + 4 4
> 0, we now know that either > 0 or cj > 0 holds.
Note, however, that v(N) - v(j, k) 2: O. Hence, 4 > 0 implies c~ > 0
and vice versa. We conclude that c~ > v(N) - v(j, k) 2: 0 and 4 >
v(N) - v(j, k) 2: 0 both hold.
To define a profitable deviation from c that is stable against further
deviations, we define

E = v(j, k) - max{ v(i, k) - cic, cf} - max{v(i, j) - c~, c{}. (9.19)

We will show that E > O.


Because c} > v(N) - v(j, k) and cic > v(N) - v(j, k), we derive from
cj + 4 + c{ + cf = v(N) that

c{ + c7 < 2v(j, k) - v(N) ~ v(j, k). (9.20)

Furthermore, we find that

v(i, k) - 4 + v(i,j) - c~
< v(i, k) + v(i,j) - 2v(N) + 2v(j, k)
~ v(i, k) + v(i,j) - v(i, j) - v(i, k) - v(j, k) + 2v(j, k)
= v(j, k), (9.21)
where the weak inequality follows from the non-emptiness of the core,
which implies balancedness of the game (N, v). Also, we find that

v(i, k) - cic + c{ < v(i, k) - (v(N) - v(j, k)) + v(N) - v(i, k),
= v(j, k), (9.22)

where the inequality follows from 4 > v(N) - v(j, k) and < v(N) - cl
v(i, k), which in turn follows from (9.18) and c~ + 4 + c{ + cf = v(N).
Analogously, we find that

c7 + v(i,j) - c~ < v(j, k). (9.23)

Combining (9.19), (9.20), (9.21), (9.22), and (9.23), we derive that E > O.
Using E, we define the following deviation from c by players j and k.

() = (P,P,max{v(i,j) - c~,c{} + ~E)


236 A one-stage model of network formation and payoff division

and
ck = (P, max {v(i, k) - c/o cn + ~E' P).
Because E > 0, this deviation improves the payoffs of both players j and
k. Also, it is defined in such a way that neither player j nor player k
has a profitable further deviation. This contradicts the assumption that
c is a coalition-proof Nash equilibrium. 0

The following remark will be used later on.

REMARK 9.1 Note in the last part of the proof of lemma 9.4 that even
if v( i, j) - cj + v( i, k) - ck = v(j, k), while the other conditions for E to be
positive are satisfied, then (c j , ck ) improves the payoffs of both players j
and k and neither j nor k can improve his payoff by deviating further.

We can now characterize the set of payoff vectors of a 3-player coali-


tional game (N, v) with a nonempty core that are supported by coalition-
proof Nash equilibria of the link and claim game ric (N, v) and that result
in the formation of a network with two links.

THEOREM 9.6 Let (N,v) be a 3-player zero-normalized coalitional game


that satisfies condition (9.7) and that has a nonempty core. Let x E RN.
TheTe exists a coalition-proof Nash equilibrium c of the link and claim
game rIC(N,v) such that L(c) = {ij,ik} and f(c) = x 'if and only if

x E C(N,v) and Xi =0 (9.24)

or

x E J(N, v) and (Xj= v(N) - v(N\j), Xk ::; v(N) - v(N\k)


or Xk = v(N) - v(N\k), Xj ::; v(N) - v(N\j)).
(9.25)

PROOF: The if-part of the theorem follows by lemmas 9.1 and 9.2. It
remains to show the only-if-part,
Assume that c is coalition-proof" Nash equilibrium of the link and
claim game rIC(N,v) such that L(c) = {ij,'ik} and denote x = f(c).
Obviously, it holds that x E J(N,v). We distinguish between two cases,
Xi = 0 and Xi > O.
Suppose Xi = O. Because:1; E J(N, v) we know by lemma 9.3 that
x E C(N, v) has to hold. Hence, (9.24) is satisfied.
Coalition-proof Nash equilibrium 237

Now, suppose Xi > O. Then, obviously, (9.24) is not satisfied. Con-


sidering the deviation possibilities of the middleman, player i, it follows
that
Xi + Xj 2: v(i,j) and Xi + Xk 2: v(i, k). (9.26)
Using lemma 9.4, we derive that Xj = v(N) - v(i, k:) or Xk = v(N) -
v(i, j). We assume, without loss of generality, that Xj = v(N) - v(i, k).
By (9.26) and Xi + Xj + Xk = v(N), it follows that Xi 2: v(i, j) + v(i, k) -
v(N) and Xk ::; v(N) - v(i,j). This means that condition (9.25) is
satisfied.
This completes the proof. D

We illustrate theorem 9.6 in the following example.

EXAMPLE 9.7 Consider the coalitional game (N, v) with player set N =

!
{I, 2, 3} and characteristic function v described by

0 if ITI = 1;
120 if T = {I, 2};
v(T) = 60 if T = {I, 3};
80 ifT = {2,3};
180 ifT = N.

Note that this is the game that was studied in examples 9.3, 9.5, and
9.6.
We already know that any coalition-proof Nash equilibrium of the link
and claim game rlc(N, v) that supports a payoff vector in the imputation
set of (N, v) results in the formation of exactly two links. Because (N, v)
has a non empty core, we can use theorem 9.6 to determine the set of
payoff vectors in the imputation set that are supported by coalition-proof
Nash equilibria. These payoff vectors are represented in figure 9.4. 0

For coalitional games with a nonempty core, theorem 9.6 describes


the coalition-proof Nash equilibria of the link and claim game that re-
sult in the formation of exactly two links. Of course, the set of payoff
vectors described by (9.24) is empty if the core of the underlying game
is empty. The set of payoff vectors described by (9.25) are supported by
coalition-proof Nash equilibria, independent of whether or not the core
of the underlying game is empty. In the following theorem we identify
a third set of imputations, which are supported by coalition-proof Nash
equilibria of the link and claim game r1C(N, v) if the core of (N, v) is
empty.
238 A une-stage model of network formation and payoff division

(0,0,180)

(120,0,60) Xl + X2 = 120

~~~------------------~

(180,0,0) L -_ _ _ _- - - . : ' - -_ _J - _ _ _----" (0,180,0)


(100,80,0) (60,120,0)

FiguTe 9.4. Bold parts represent the payoff vectors in the imputation set that are
supported by coalition-proof Nash equilibria

THEOREM 9.7 Let (N, v) be a 3-player zero-nonnalized coalitional game


that satisfies condition (9.7) and that has an empty core. Let x E RN.
There exists a coalition-proof Nash equilibrium c of the link and claim
game rlC(N,v) such that L(c) = {ij,ik} and f(c) = x if and only if
condition (9.25) holds or

x EI(N, v), -::Jj, k E N\i : Xi + Xj ~ v(i,j), xi + Xk ~ vO, k),


Xi> 2v(N) - 2v(j, k), xi < v(i,j) + v(i, k) - v(j, k). (9.27)

PROOF: It follows from lemma 9.2 that if x satisfies condition (9.25) then
there exists a coalition-proof Nash equilibrium c of the link and claim
game r1C(N,v) such that L(c) = {ij,ik} and f(c) = x. Suppose that
x does not satisfy condition (9.25). It suffices to show that x satisfies
condition (9.27) if and only if there exists a coalition-proof Nash equi-
librium c of the link and claim game rlC(N,v) such that L(c) = {ij,ik}
and i(c) = x.
Assume that there exists a coalition-proof Nash equilibrium c such
that L(c) = {ij,ik} and i(c) = x. Because player i cannot have a
profitable unilateral deviation, we find Xi + Xj ~ v(i,j) and Xi + Xk ~
v(i, k). Because erN, v) =1= 0, we derive using lemma 9.3 that Xi > O.
Coalition-proof Nash eqnilibrinm 239

As in the proof of lemma 9.4 we find that ck > v(N) - v(j, k) and
cj > v(N) - v(j, k). Hence, Xi > 2v(N) - 2v(j, k) and Xj + Xk =
V(N)-Xi < 2v(j, k)-v(N) ::::: v(j, k). Consider the deviation possibilities
of coalition {j,k}. We already know c;
+c~ < v(j,k). Also,

ct > v(N) - v(j, k)


2': v(N) - v(j, k) + v(i, k) - Xi - Xk
= V ( i, k) - v (j, k) + C{,

where the last equality follows from Xi + Xj + Xk = v(N), which has to


hold because c is a Nash equilibrium. We find that v(i, k) - ck + < c;
v(j, k). Analogously, we find v(i,j) - c~ + c~ < v(j, k). This implies that
(Cj, Ck) as defined on page 235 and discussed in remark 9.1, is a profitable
and stable deviation by coalition {j,k} if v(i,k) - ck + v(i,j) - cj :::::
v (j, k). Because c is a coalition-proof Nash equilibrium, we conclude
that v(i,k)-ck+v(i,j)-cj > v(j,k), i.e., Xi < v(i,k)+v('i,j)-v(j,k).
This shows that (9.27) is satisfied.
Now, assume that X satisfies condition (9.27). Define strategy profile
c hy c i = (P, ~Xi' ~Xi)' cj = (Xj, P, P), and ck = (Xk·. P, P). Then f(c) =
x. We will prove that c is a coalition-proof Nash equilibrium. Obviously,
c is a Nash equilibrium. Furthermore, coalitions {:i,j} and {i, k} have
no possibility to profitably deviate. By efficiency, we find that there is
no profitable deviation for the grand coalition N. It remains to consider
deviations by coalition {j, k}. Since cj = ck > v(N) - v(j, k), a possibly
stahle deviation by coalition {j, k} should result in the formation of link
jk only. Consider an arbitrary deviation (Cj,Ck) with the property that
L(ci,cj,Ck) = {jk}. Because Xi < v(i,k) + v(i,j) - v(j,k), we find
Xi + ~ + cJ < v(i, k) + v(i, j). Hence, 4
+ cJ < v(i, k) or cj + c{ <
v(i,j). We conclude that deviation (Cj,Ck) is not stable against further
deviations. This proves that c is a coalition-proof Nash equilibrium. 0
The following example provides an illustration of theorem 9.7.

EXAMPLE 9.8 Let (N,v) the a 3-player coalitional game with N


{I, 2, 3} and characteristic function v defined by

1
0 if ITI = 1;
36 ifT = {I, 2};
v(T) = 30 if T = {I, 3};
45 if T = {2, 3};
50 ifT = N.
240 A one-stage model of network formation and payoff division

The payoff vectors in the imputation set that are supported by coalition-
proof Nash equilibria of the link and claim game rlC(N, v) can be found
by applying theorem 9.7, because two links have to be formed to obtain
a payoff vector in the imputation set. We represent the result in figure
9.5. 0

(0,0,50)

Xl + X2 = 36

(50,0,0) L -_ _ _ _--L-_---'-....",:---_"'---"--~ (0,50,0)

Figure 9.5. Bold and shaded parts represent the payoff vectors in the imputation set
resulting that are supported by coalition-proof Nash equilibria

So far, we have concentrated on determining which imputations can


be supported by coalition-proof Nash equilibria. For most games, this
means that a network with two links has to be formed. However, there
are also coalition-proof Nash equilibria that result in the formation of
a network with one link. The payoff vectors that are obtained in such
equilibria will, in general, not be imputations. In the following theorem
we describe a set of payoff vectors that can be supported by coalition-
proof Nash equilibria that result in the formation of a network with one
link.

THEOREM 9.8 Let (N, v) be a 3-player zero-normalized coalitional game


that satisfies condition (9,7). Let x E RN. There exists a coalition-
proof Nash equilibrium c of the link and claim game rlC(N, v) such that
Coalition-proof Nash equilibrium 241

L(c) = {ij} and f(c) = x if


3k E N\{i,j} :
Xi > v(N) - v(j, k), Xi 2': v(N) - v(j, k) + v(i, k) - v(i,j),
Xj > v(N) - v(i, k), Xj 2': v(N) - v(i, k) + v(j, k) - v(i, j),
Xi + Xj = v(i,j), and Xk = O. (9.28)

Also, if v(N) > v(i,j) then all coalition-proof Nash equilibria that re-
sult in the formation of link ij only, generate payoff vectors that satisfy
condition {9.28}.

PROOF: We assume throughout this proof that N == {l, 2, 3}. To prove


the first part of the theorem, we assume, without loss of generality,
that Xl > v(N) - v(2, 3), Xl 2': v(N) - v(2, 3) + v(1, 3) - v(1, 2), X2 >
v(N) - v(1, 3), X2 2': v(N) - v(1, 3) + v(2, 3) - v(1, 2), Xl + X2 = v(1, 2),
and X3 = O. We define a strategy profile c by

cl = (P,xl,v(N) - v(1,2)),
c2 = (X2' P, v(N) - v(1, 2)),
c3 = (P, P, P).
Note that L(c) = {12} and f(c) = x. We will prove that c is a coalition-
proof Nash equilibrium of the link and claim game rIC(N,v). More
precisely, we will show that no coalition has a profitable deviation from
c that is stable against further deviations.
It is obvious that no individual player has a profitable deviation from
strategy profile c.
Now, suppose that c is a self-enforcing deviation by N that improves
the payoffs of all players. Because every player receives a positive payoff
according to c, there can be no isolated players. Hence, IL(c)1 2': 2.
Because c is a Nash equilibrium, it follows from theorem 9.1 that IL(c)1 =
2. Because c is self-enforcing and the players divide v(N) according to c,
it is a coalition-proof Nash equilibrium of flc(N, v). We use theorems 9.6
and 9.7 to conclude that at least two players will receive a payoff that is
less than or equal to their marginal contribution to the grand coalition.
Because players players 1 and 2 receive their marginal contribution to
the grand coalition if strategy profile c is played, this contradicts our
assumption that c is a profitable deviation by N. We conclude that
there is no self-enforcing deviation by the grand coalition that gives all
players a higher payoff.
It is clear that coalition {1, 2} has no possibility to deviate to a strat-
egy profile that results in higher payoffs to both its members. Hence, it
242 A one-stage model of network formation and payoff division

remains to consider deviations by coalitions {l,3} and {2,3}. We first


consider deviations by coalition {2,3}. We distinguish between several
cases.
Consider a profitable deviation ((:2, (:3) by coalition {2, 3} to a strategy
profile that results in the formation of link 23 only, i.e.,

(:2 =(P, P, x2 + a(v(2, 3) - X2)) with 0< a < 1,


(:3 =(P, p( v(2, 3) - X2), P) with 0 < f3 :s: 1 - a.

Note that X2 = v(1,2) - Xl :s: v(N) - Xl < v(2,3). Hence, deviation


((:2, (:3) results in a higher payoff to both players 2 and 3. However,
player 3 can achieve a further payoff improvement by playing

c3 = (v(l, 2) + v(l, 3) - v(N), P, P),

because c§ + cy = v(l, 3) and

f3(v(2,3) - X2) < v(2, 3) - X2 :s: v(l, 2) + v(1, 3) - v(N).

The last inequality follows from X2 ~ v(N) - v(l, 3) + v(2, 3) - v(l, 2).
We conclude that deviation ((:2, (:3) is not self-enforcing.
Coalition {2, 3} can also deviate to a strategy profile that induces the
formation of a connected network. Consider a deviation that results in
the formation of links 13 and 23. This is described by strategies

with 0 < a < 1,


with cf + d = f3xl, 0 < f3 :s: 1 - a.

Note that players 2 and 3 divide at most v(1,2) = Xl + X2, because


c§ = v(N) - v(1,2). Deviation (c 2 ,c3 ) is not self-enforcing, because

(3.7:1 < :£1 = v(l, 2) - .7:2 < v(l, 2) + v(1, 3) - v(N)


and, consequently, player 3 can improve his payoff by playing c3 .
Coalition {2,3} can deviate to a strategy profile that results iII the
formation of links 12 and 23. Such a deviation can be profitable if and
only if v(N) > v(1, 2). Let

with ci + c~ = X2 + a(v(N) - v(l, 2)), 0 < a < 1,


with c~ = f3(v(N) - v(1, 2)), 0 < f3 :s: 1 - a.

Because Xl > v(N) - v(2, 3), the sum of the payoffs to players 2 and
3 will be less than v(2, 3). Hence, player 2 can improve his payoff by
breaking his link with player 1.
Coalition-proof Nash equilibrium 243

Deviations by coalition {2,3} to a strategy profile that results in the


formation of links 12 and 13 can be ruled out in a similar way. Finally,
note that a deviation by coalition {2, 3} to a strategy profile that indices
the formation of the complete network cannot be profitable for both its
members. We have now shown that coalition {2,3} does not have a
profitable deviation from c that is stable against further deviations.
Because the deviation possibilities of coalition {1,3} are similar to
those of coalition {2, 3}, it follows that coalition {I, 3} also does not have
a profitable deviation from c that is stable against further deviations.
This concludes the proof of the first part of the theorem.
We now turn to the proof of the second part of the theorem. Suppose,
without loss of generality, that v(N) > v(l, 2). Let c be a coalition-proof
Nash equilibrium such that L(c) = {12}. We will show by contradiction
that f(c) is a payoff vector in the set described by (9.28) with i = 1,
j = 2, and k = 3. Assume that f(c) does not belong to this set. Without
loss of generality, we can limit our analysis to the following two cases:

(i) h (c) = d ::; v(N) - v(2, 3),


(ii) h (c) = c~< v(N) - v(2, 3) + v(l, 3) - v(l, 2),
h (c) > v(N) - v(2, 3), and h(c) > v(N) - v(l, 3).
Because c is a coalition-proof Nash equilibrium, it holds that h(c) +
h(c) = v(l, 2). Furthermore, since player 3 cannot unilaterally deviate
to a strategy that gives him a higher payoff, it has to hold that c~ 2':
v(N) - v(l, 2) or c~ = P for all i E {I, 2}.
We start by analyzing case (i). Consider the following deviation by
players 2 and 3

(} =(ci + a(v(N) - v(l, 2)), P, 0) with 0 < a < 1,


[;3 =(P, (1 - a)(v(N) - v(l, 2)), P).

This deviation is profitable for both players because v(N) > v(1,2).
Because h(cl, [;2, [;3)+ h(c 1 , [;2, [;3) = v(N) -c~ 2': v(N)-v(N)+v(2, 3) =
v(2,3) and c~ 2': v(N) - v(l, 2) or c~ = P, neither player 2 nor player
3 has an opportunity to achieve a further improvement in his payoff.
Hence, deviation (6 2 ,63 ) is self-enforcing. This contradicts that c is a
coalition-proof Nash equilibrium.
We now analyze case (ii). Consider the following deviation by players
1 and 3.

(:1 = (P, P, v(N) - v(2, 3) + v(l, 3) - v(l, 2)),


(:3 = (v(2,3) +v(1,2) -v(N),P,P).
244 A one-stage model of network formation and payoff division

Note that the sum of the claims on link 13 equals v(l, 3) and that player 1
achieves an increase in payoff. Also, player 3 improves his payoff because
v(2,3) + v(l, 2) - v(N) > 0, which follows from v(1,2) = Xl + X2 >
v(N) - v(2, 3) + v(N) - v(l, 3) and v(N) ~ v(l, 3). We will show that
deviation (cl , c3 ) is stable against further deviations. Player 1 cannot
unilaterally improve his payoff any further because cr
> v(N) - v(l, 3).
It follows from

v(N) - v(2, 3) < h (c) < v(N) - v(2, 3) + v(l, 3) - v(1, 2)


that v(l, 3) > v(l, 2). Because c~ = P or c~ ~ v(N) - v(l, 2) > v(N) -
v(1,3), player 3 cannot improve his payoff by deviating to a strategy
that results in the formation of links 13 and 23. Furthermore, player 3
cannot improve his payoff by deviating to a strategy that results in the
formation of 23 only, because c~ = P or

c~ + 13 (c 1 , c2 , c3 ) =c~ + cr
~ v(N) - v(l, 2) + v(2, 3) + v(l, 2) - v(N)
= v(2, 3).

We conclude that (c 1 , c3 ) is a profitable deviation from c that is stable


against further deviations. This contradicts that c is a coalition-proof
Nash equilibrium. This completes the proof of the second part of the
theorem. 0

We remark that there exists a payoff vector X satisfying condition


(9.28) only if the core of the game (N, v) is empty. This follows because
if (9.28) holds then

v(i,j) = Xi + Xj > v(N) - v(i, k) + v(N) - v(j, k) (9.29)

and, hence, v(i,j) +v(i,k) +v(j,k) > 2v(N). This means that (N,v) is
not balanced, which implies that it has an empty core. Conversely, ifthe
core of a zero-normalized game (N, v) is empty, then there exists a payoff
vector X satisfying condition (9.28). To see this, note that such a game
has exactly one condition for balancedness, namely v( i, j) + v( i, k) +
v(j, k) ::; 2v(N). If C(N, v) = 0 then v(i,j) + v(i, k) + v(j, k) > 2v(N).
We assume, without loss of generality, that v(i,j) ~ v(i, k) ~ v(j, k).
Then x defined by Xi = v(N)-v(j, k)+ ~[v(i, j)+v(i, k)+v(j, k) -2v(N)J,
Xj = v(N) - v(i, k) + ~[v(i, j) + v(i, k) + v(j, k) - 2v(N)J, and Xk = 0, is
a payoff vector that satisfies condition (9.28).
Theorem 9.9 describes the coalition-proof Nash equilibria of the link
and claim games that result in the formation of the empty network.
Coalition-proof Nash equilibrium 245

THEOREM 9.9 Let (N,v) be a 3-player zero-normalized coalitional game


that satisfies condition (9.7). Let x E RN. There exists a coalition-
proof Nash equilibrium c of the link and claim game rIC(N, v) such that
L(c) = 0 and f(c) = x if and only if

:?'R,TcN:
R =I T, IRI = ITI = 2, v(R) = v(T) = v(N), and x = (0,0,0)
(9.30)
or
v(T) = 0, for all T such that ITI = 2, and x = (0,0,0). (9.31 )

PROOF: We start by proving the only-if-part. Suppose c is a coalition-


proof Nash equilibrium of rIC(N, v) such that L(c) = 0 and, hence,
f(c) = (0,0,0). We assume, without loss of generality, that v(1,2) 2::
v(2,3) 2:: v(1,3). Suppose conditions (9.30) and (9.31) are both not
satisfied. Then v(2,3) < v(N) and v(1,2) > O. We distinguish be-
tween three cases. Firstly, if v(2,3) > 0 then payoff vector y de-
fined by Y1 = v(N) - v(2, 3), Y2 = min{ 1v(2, 3), v(N) - v(1, 3)}, and
Y3 = v(2,3) - Y2, is supported by a coalition-proof Nash equilibrium
(see theorems 9.6 and 9.7) and results in positive payoffs to all play-
ers. Secondly, if v(N) = v(1,2) and v(2,3) = 0 then there is no
coalition-proof Nash equilibrium c such that f(c) == (0,0,0), because
(13 1 ,132 ) = ((P, 1v(N),P), (1v(N),P,P)) would be a self-enforcing devi-
ation from c that improves the payoffs of both players 1 and 2. Finally,
consider the case where v(N) > v(1,2) and v(2,3) = O. Then payoff
vector Y defined by Y1 = Y2 = 1v(1,2) > 0 and Y3 == v(N) - v(1, 2), is
supported by a coalition-proof Nash equilibrium of rIC(N, v) (see theo-
rem 9.6). Hence, we can always find a self-enforcing deviation from c
if conditions (9.30) and (9.31) are both not satisfied . We conclude that
there is no coalition-proof Nash equilibrium that results in payoff vector
(0,0,0) if conditions (9.30) and (9.31) are both not satisfied.
We now prove the if-part of the theorem. Suppose condition (9.30) is
satisfied. We will distinguish between two cases.
Case (i): Suppose that there is a 2-player coalition that has a value
that is strictly smaller than v(N). We assume, without loss of generality,
that v(l, 2) < v(N). Consider the following strategy profile
c1 = (P, v(N), 0),
c2 = (v(N),P,O),
3 = (v(N) v(N) P)
c 2' 2 ' .
246 A one-stage model of network formation and payoff division

Clearly, (N,L(c)) is the empty network and f(c) = (0,0,0). We will


show that c is a coalition-proof Nash equilibrium of the link and claim
game rIC(N, v). It is obvious that an individual player cannot improve
his payoff by unilaterally deviating. Because every self-enforcing strategy
profile of the grand coalition in which all players improve their payoffs
has to result in the formation of a connected network and payoffs that
add up to v(N), it follows that such a self-enforcing strategy profile is
a coalition-proof Nash equilibrium. There is only one 2-player coalition
with a smaller value than the value of the grand coalition. It follows
that there is no coalition-proof Nash equilibrium with all players receiv-
ing a positive payoff, because, according to theorems 9.6 and 9.7, such
a coalition-proof Nash equilibrium should result in at least one player
receiving a payoff of zero. Note that in order to draw this conclusion
from (9.27) efficiency is needed. We have now shown that there is no
profitable deviation from c by coalition N that is stable against further
deviations. It remains to consider deviations by 2-player coalitions.
Consider an arbitrary profitable deviation by coalition {I, 3}. Firstly,
they can play strategies that result in the formation of network (N, {13}).
Player 3 can then achieve a further increase in his payoff by playing
(P, v(N), P), which leaves player 1 isolated. Secondly, players 1 and 3
can play strategies that result in the formation of a connected network.
cI
Such a network cannot include link 12 because = v(N). Again, player
3 can improve his payoff further by playing (P, v (N), P), leaving player
1 isolated. We have now considered all profitable deviations by coalition
{1,3} and shown that they are not stable against further deviations.
Because the possible deviations by coalition {2, 3} are similar to those
by coalition {I, 3}, it remains to consider deviations by coalition {I, 2}.
We distinguish between two cases. Players 1 and 2 can deviate to strate-
gies that result in the formation of network (N, {I, 2}). If they do this
then at least one player in the coalition {1,2} receives less than v(~),
because v(l, 2) < v(N). Such a player can further improve his payoff by
playing (P, P, v(~)). Players 1 and 2 can also deviate to strategies that
result in the formation of a connected network. In such a network, one
of them will be a middleman and receive less than v(~) since player 3
receives v(~) and the remaining player receives a positive payoff. Hence,
the middleman can further improve his payoff by playing (P, P, v(~)).
We have now considered all profitable deviations by coalition {I, 2} and
shown that they are not stable against further deviations.
This completes the proof that c is a coalition-proof Nash equilibrium
of the link and claim game r1C(N, v).
Coalition-proof Nash equilibrium 247

Case (ii): Suppose that v(T) = v(N) for a1l2-player coalitions T. Let
(; be the strategy profile defined by (;1 = (P,v(N),O), (;2 = (O,P,v(N)),
and (;3 = (v(N), 0, P). By theorem 9.7, it follows that there is no
coalition-proof Nash equilibrium of the link and claim game r1C(N, v)
that results in a positive payoff to all players. So, there is no self-
enforcing deviation from (; by the grand coalition N that results in a
positive payoff to all players. A deviation by a 2-person coalition can-
not be stable because the third player claims zero on one of the links.
Obviously, a player cannot unilaterally improve his payoff. We conclude
that (; is a coalition-proof Nash equilibrium.
We have now proven that there exists a coalition-proof Nash equi-
librium c of the link and claim game r1C(N, v) such that L(c) = 0 and
f(c) = (0,0,0) if condition (9.30) is satisfied. To finish the proof of
the theorem, suppose that condition (9.31) is satisfied. Then strategy
profile c defined by ci = (P, P, P) for each i E N, is a coalition-proof
Nash equilibrium of the link and claim game r1C(N, v). This follows
easily by noting that I-player and 2-player coalitions have no profitable
deviations and that, according to theorem 9.6, there is no coalition-proof
Nash equilibrium of r1C(N, v) with a positive payoff to all players. Ob-
viously, f(c) = (0,0,0). 0

REMARK 9.2 Theorems 9.6, 9.7, 9.8, and 9.9 together describe all pay-
off vectors that are supported by coalition-proof Nash equilibria of the link
and claim game, both for games with a nonempty core and for games with
an empty core. To see this, we need one more step. If v(N) = v(i,j)
then a payoff vector corresponding to a coalition-proof Nash equilibrium
that results in the formation of network (N, {ij}) might not belong to the
set described by {9.28}. Therefore, assume that v(N) = v(i,j) and con-
sider a coalition-proof Nash equilibrium c of r1C(N, v) such that f (c) = x
and x does not belong to the set described by (9.28). Assume, without
loss of generality, that
Xj :s: v(N) - v(i, k)
or
Xj < v(N) - v(i, k) + v(j, k) - v(i, j)
= v(j, k) - v(i, k)
:s: v(N) - v(i, k).
Then, because
{9.25}.
Xk = °= v(N) - v( i, j), x belongs to the set described by
248 A one-stage model of network formation and payoff division

To end this chapter, we briefly discuss adjusted coalition-proof Nash


equilibria. We have applied coalition-proof Nash equilibrium to games
of link formation. According to this equilibrium concept the set of possi-
ble deviations is limited by the requirement that they are stable against
further deviations. However, the size of deviating coalitions is not lim-
ited. In a setting of link formation, where each link is formed by two
players, it might make sense to limit the size of deviating coalitions to be
less than or equal to two. We refer to the so-obtained equilibrium con-
cept as adjusted coalition-proof Nash equilibrium. It is defined analogous
to coalition-proof Nash equilibrium, but with the additional restriction
that only coalitions consisting of one or two players can deviate. Slikker
(2000a) shows that all results in the current section also hold for ad-
justed coalition-proof Nash equilibrium instead of coalition-proof Nash
equilibrium.
Chapter 10

NETWORK FORMATION AND


POTENTIAL GAMES

In this chapter we revisit the network-formation model in strategic


form that we saw in chapter 7. We study the conditions under which
these strategic-form games satisfy the property that all the information
that is necessary to determine Nash equilibria can be captured in a single
function on the set of all strategy profiles. If such a function exists, it
is called a potential function or just a potential, and a strategic-form
game that has a potential is called a potential game. For potential
games, the set of strategy profiles that maximize the potential constitute
a refinement of Nash equilibrium. For network-formation games that
are potential games, we study the network structures that are formed in
potential-maximizing Nash equilibria.
We provide the definitions of potential games and of the potential
maximizer as an equilibrium refinement in section 10.1. In section 10.2
we explain a result that relates potential games to Shapley values of coali-
tional games. This result is used in section 10.3 to study the strategic-
form model of network formation and its relation to potential games. We
highlight the networks that are supported by potential-maximizing Nash
equilibria. We conclude in section 10.4 with some remarks on extensions
of the results in this chapter.

10.1 POTENTIAL GAMES


In the current section we provide the definitions of potential games and
of the potential maximizer as an equilibrium refinement. We point out
that we are considering potentials for noncooperative games, which are
different from the potentials for cooperative game that we encountered
in chapters 1 and 3.
We start this section with an example.
250 Network formation and potential games

EXAMPLE 10.1 Let (N,v) be a coalitional game with player set N


{I, 2, 3} and characteristic function v given by

0 if ITI :::; 1;

v(T) = 1 40
50
60
if T = {I, 2};
if T = {I, 3};
if T = {2, 3};
(10.1)

72 ifT = N.

We consider the so-called participation game associated with (N, v). The
participation game is a strategic-form game with player set N in which
each player i E N has two strategy-choices, namely to participate or not
to participate. We represent these two choices by the strategies 8i for the
choice to participate and ti for the choice not to participate. The payoffs
to the players are determined as follows. Each player i E N who chooses
not to participate (8i) receives his stand-alone value v(i) and each player
i E N who chooses to participate (t;) receives his Shapley value in the
subgame on the coalition of participating players. We represent this
strategic-form game in figure 10.1.

0, 0, 0 0, 30, 30
25, 0, 25 19, 24, 29

Figur·e 10.1. The participation game

This participation game has the interesting property that there exists
a real-valued function on the set of strategy profiles that captures for
any deviation by any player the change in payoff of the deviating player.
We will construct such a function, which we denote by P. Because we
are interested in payoff differences only, we can choose an arbitrary value
of P for one strategy profile. Let P(Sl, 82, S3) = O. The values of P for
other strategy profiles are found by looking at the changes in payoffs
to deviating players. For example, if player 1 changes his strategy from
81 to iI, while players 2 and 3 play (82,83), then player 1 experiences
no change in his payoff. Hence, P(t1' 82, 83) = P(81' 82, 83) = o. If
player 2 now changes his strategy from 82 to t2, while players 1 and 3
stick to (t1' 83), then player 2's payoff increases from 0 to 20. Therefore,
P(t1' t2, S3) = P(i}, 82, 83) + 20 = 20. Continuing like this, we find
the function P represented in figure 10.2. It is important to note that
P captures for any deviation by any player the change in payoff of the
deviating player and not just for those changes that are used to determine
Potential games 251

P. For example, if player 2 changes his strategy from 82 to t2 while


players 1 and 3 play (81,83), then player 2's payoff does not change.
Hence, P(81,t2,83) = P(81,82,83) = O. Also, if player 1 changes his
strategy from t1 to 81 while players 2 and 3 play (t2, 83), player l's payoff
decreases by 20. Note that P(81, t2, 83) - P(tl, t2, 83) = 0 - 20 = -20.

82 t2
81 fOl]Q]
t1~
t;j

Figure 10.2. The function P

Note that we would find a different real-valued function Q that also


captures for any deviation by any player the change in payoff of the
deviating player if we choose Q(81, 82, 83) not equal to 0 but some other
number. For example, if we choose Q(81, 82, 83) = 15, then we find the
function Q that is represented in figure 10.3.

82 t2 82 t2
81
t1 ffim
15
83
35
81
it BillE
40
t3
64

Figure 10.3. The function Q

It is easily checked that the participation game in figure 10.1 has two
Nash equilibria, (81,82,83) and (t1, t2, t3). Because Nash equilibria are
by definition strategy profiles that satisfy the property that no player
can gain from unilateral deviation, we can also find them by looking at
the function P in figure 10.2 (or the function Q in figure 10.3). In fact,
we can replace the payoff functions of all players by P and determine
the Nash equilibria of the newly-created strategic-form game. Replacing
players payoff functions by P does not change the set of Nash equilibria.
Note that the strategy profile that has the highest value of P is a Nash
equilibrium, because a unilateral change in strategy results in a decrease
of P and, hence, in a decrease in payoff for the deviating player. It seems
natural to select the Nash equilibrium that maximizes the value of P.
This would be the strategy profile (tt, t2, t3) in our example. Note that
this is the same strategy profile that maximizes the value of Q. 0

A function P like we encountered in example 10.1, i.e., a real-valued


function on the set of strategy profiles that captures for any deviation by
252 Network formation and potential games

any player the change in payoff of the deviating player, is called a poten-
tial function or simply a potential for the strategic-form game. Formally,
a potential function for a strategic-form game r = (N; (Si)iEN; (Ji)iEN)
is a function P on S = TIiEN Si that satisfies the property that for every
strategy profile s E S, every i E N, and every ti E Si it holds that

(10.2)

A game r that admits a potential is called a potential game.


We saw in example 10.1 that a potential game allows for several po-
tential functions. The two potential functions P and Q in the example
differ by a constant 15, for every strategy profile. It follows from the next
lemma that this is not a coincidence. This lemma is due to Monderer
and Shapley (1996).

LEMMA 10.1 Let r = (N; (Si)iEN; (J;)iEN) be a potential game, and let
P and Q be potential functions for r. Then there exists a constant c
such that
P(s) = Q(s) + c (10.3)
for all s E S.

PROOF: Without loss of generality, we assume that N = {l, 2, ... ,n}.


We fix a strategy profile t E S and let s E S be an arbitrary strategy
profile. We define a sequence of strategy profiles aD, aI, ... ,an that starts
with s and ends with t and in which players change their strategies from
Si to ti one after the other. Formally, the sequence is defined inductively
by sO = sand a i = (a~l, ti) for all i E {I, .. , ,n}. Because P and Q are
potential functions for r, it follows from (10.2) that

for each i E {I, ... ,n}. Hence, defining

H(s) = :2)J;(a i ) - J;(a i- 1 )),


iEN
we obtain P(t) - P(s) = H(s) and Q(t) - Q(s) = H(s). It follows that
P(s) - Q(s) = P(t) - Q(t), which is independent of s. We conclude that
c = P(t) - Q(t) is a constant that satisfies (10.3) for all .5 E S. 0

We show in the next example that not every game admits a potential.
Potential games 253

Figure 10.4. A 2-player strategic-form game

EXAMPLE 10.2 Consider the 2-player strategic-form game that is rep-


resented in figure lOA.
We will try to find a potential function P for this game. Without loss
of generality, we assume that P(81' 82) = O. Then P(t1' 82) = h(t1, 82)-
h(81, 82) +P(81, 82) = 1 and P(81' t2) = !2(81,t2) - /2(81,82) +P(81, 82)
= -1. Consequently, we find P(t1' t2) = fdh, t2) - h(81, t2) + P(81' t2)
= O. Note that this implies that

This shows that the game that is represented in figure lOA is not a
potential game. 0

If a strategic-form game admits a potential, then a potential function


for the game contains all the information necessary to determine its Nash
equilibria. Moreover, any potential function singles out specific Nash
equilibria, namely those that maximize the value of the potential. We
formally state this result, which was obtained by Monderer and Shapley
(1996), in the following lemma. We have already illustrated this result
in example 10.1 and we omit its proof here.

LEMMA 10.2 Let r = (N; (Si)iEN; (fi)iEN) be a potential game and let
P be a potential function for this game. Define a strategic-form game
r(P), which is obtained from r by replacing the payoff function of each
player by the potential function P, i.e., r(p) = (N; (Si)iEN; (P)iEN)'
Then the following two statements hold.
(i) The set of Nash equilibria of r coincides with the set of Nash equi-
libria of r(p).
(ii) If 8 is a strategy profile for which the function P assumes its maximal
value, then 8 is a Nash equilibrium of r.

Let r be a potential game. There are many potential functions for this
game. However, all these potential functions differ by only a constant,
as we saw in lemma 10.1. Therefore, the strategy profiles that maximize
254 Network formation and potential games

the value of one potential function are the same strategy profiles that
maximize the value of all potential functions for f. Hence, the set of
potential-maximizing strategy profiles is well-defined. Also, it follows
from lemma 10.2 (ii) that such strategy profiles are Nash equilibria of
the game f, so that the set of potential-maximizing strategy profiles can
be used as an equilibrium refinement. A Nash equilibrium in this set is
known as a potential maximizer. This equilibrium refinement was intro-
duced by Monderer and Shapley (1996). As a motivation for this equi-
librium refinement, they remark that for the so-called stag-hunt game
that was described by Crawford (1991), potential maximization selects
strategy profiles that are supported by the experimental results of van
Huyck et al. (1990). Ui (2000b) provides additional justification for this
equilibrium refinement by showing that Nash equilibria that maximize
a potential function are generically robust. We refer to these papers for
further details. Here, we just stress that the set of potential-maximizing
strategy profiles is a well-defined equilibrium refinement.

10.2 A REPRESENTATION THEOREM


This section is devoted to a representation theorem that exposes a
relation between potential games and Shapley values.
In example 10.1 we investigated a participation game in which the
Shapley value was used to determine the payoffs of the participating
players and we saw that the participation game is a potential game.
This result is a special case of a more general result, which we explain
below.
Let N be a set of players and S = [liEN Si a set of strategy profiles
for these players. Suppose that after the players chose a strategy profile
S E S, they playa coalitional game (N, vs). Note that the particular
coalitional game played depends on the strategies chosen by the players.
However, we assume that the value of a coalition R ~ N of players in the
coalitional game that is played, is independent of the strategies chosen
by the players outside R. Hence, for any coalition R ~ N and any two
strategy profiles s, t E S such that SR = tR, it holds that vs(R) = vt(R).
We formalize this situation as follows. With every player set N and set
of strategy profiles S for these players, we associate an indexed set of
coalitional games in the set

9N,S = { {(N, Vs)}sES E (TUN)s I for all s, t E S, R ~ N

it holds that vs(R) = vt(R) if SR = tR}' (10.4)


A representation theorem 255

where TUN denotes the set of coalitional games with player set N. We
recall that the unanimity coefficients of a game (N, Vs) are denoted by
AR(V s ), R E 2 N \{0}. In terms of unanimity coefficients, we rewrite
(10.4) as

gN,S = {{(N,VS)}SES E (TUN)s I for all s,t E S, R E 2 N \{0}

it holds that AR(V s ) = AR(Vt) if SR = tR}. (10.5)


We now state the main result of this section, which relates potential
games to Shapley values. This theorem is due to Ui (2000a).

THEOREM 10.1 Let r = (N; (Si)iEN; (fdiEN) be a game in strategic


form. r is a potential game if and only if there exists an indexed set of
coalitional games {(N,Vs)}sES E gN,S such that

(10.6)

for each i E N and each s E S.

PROOF: We first prove the if-part of the theorem. Assume that there
exists an indexed set of coalitional games {(N, v s )} sES E 9 N,S such that
fi(05) = ipi(N, vs) for all i E N and all s E S. We define

AR(Vs )
Q(05) = (10.7)
IRI
for each s E S. We will show that Q is a potential function for r. Let
i E N, s E S, and ti E Si. Then

L AR(V s ) - L AR(V(ti,S-;))
Rc;N:Rff/j
IRI Rc;N:Rl·f/j
IRI
= Q(s) - Q(ti, Li),
where the third equality follows from (10.5), which implies that AR(V s ) =
AR(V(ti,S-i)) for all R ~ N\i with R i= 0. We conclude that Q is a
potential function for r and, consequently, that r is a potential game.
256 Network formation and potential games

To prove the only-if-part of the theorem, we assume that r is a po-


tential game. Let Q be a potential function for r. For each 8 E S, we
define a game (N, vs) via its unanimity coefficients. Therefore, let 8 E S.
For all R E 2N\ {0} define

AR(Vs ) = j lRI (L:iEN Ji(8) - (INI- 1)Q(8))


IRI( -Ji(8)+Q(8))
if R = N;
ifR=N\ifor
some i E N;
o otherwise.
This determines for each 8 E S the coalitional game (N, vs), where Vs =
L:RE2N\{0} AR(Vs)UR·
We first show that {(N,VS)}SES E YN,S. Let R E 2N\{0} and 8,t E S
such that 8R = tR. If R = N or R is such that IRI ~ INI - 2 then
it immediately follows from their definitions that AR(Vs ) = AR(Vt). It
remains to consider R with IRI = INI - 1. Let i E Nand R = N\i.
Then fi(8) - fi(t) = Q(8) - Q(t) and, hence,

AR(Vs) = IRI ( - 7Ti(8) + Q(8)) = IRI ( - 7Ti(t) + Q(t)) = AR(Vt).

This shows that AR(Vs ) = AR(Vt) for all R E 2N\{0} and, consequently,
that {(N, Vs)}sES E YN,S.
Finally, we show that for each player i E N and each strategy profile
8 E S it holds that ifJi(N,vs) = fi(8). So, let i EN and 8 E S. Then

ifJi(N, VS) = IRI


"L...J AR(Vs )
Rr:;.N:iER
= L fj(8) - (INI- 1)Q(8) + L (- fj(8) + Q(8))
jEN jEN:ji-i
= fi(8).
This completes the proof. o
To conclude this section, we shortly mention a relation between po-
tential functions for noncooperative and cooperative games. Let r be a
strategic-form game that is a potential game. Let {(N, Vs)}sES E YN,S
be an indexed set of games such that Ji(8) = ifJi(v s ) for all i E Nand
all 8 E S. The unanimity coefficients of the game (N, Vs) are denoted by
AR(V s ), R E 2 N\{0}. Using these una.nimity coefficients, we define a po-
tential function Q for the game r by Q(8) = L:RE2N\{0} Ai~s) for each
8 E S. It holds that for every 8 E S the value of the potential function
Q(8) equals the value that is attributed to the coalitional game (N, vs)
by the potential for coalitional games that we described in section 1.1.
Network formation 257

10.3 NETWORK FORMATION


We now turn our attention to the network-formation game in strategic
form that was defined in chapter 7. The goal of the current section, which
is based on Qin (1996), is two-fold. Firstly, we study the conditions
under which the network-formation game is a potential game. Secondly,
for network-formation games that are potential garnes, we study which
networks are formed as a result of potential-maximizing strategy profiles.
Recall that the strategic-form network-formation game in chapter 7
is based on an exogenously given allocation rule " which is used to
determine players' payoffs in various networks. In the following exam-
ple we show that the network-formation game is not always a potential
game. In this example, we use the proportional links solution, which we
encountered in example 7.2, as the exogenous allocation rule.

EXAMPLE 1 0.3 Let (N, v) be the coalitional game with player set N =
{I, 2, 3} and characteristic function v given by

if ITI :::; 1;
°
v(T) =
1 36
48
60
72
if T = {I, 2};
ifT={1,3};
if T = {2,3};
ifT = N.
(10.8)

in which the proportional links solution ,F


We consider the strategic-form game of network formation rnJ (N, v, ,F)
is used to determine players'
payoffs. We will show that this game is not a potential game. To do
this, we concentrate on a part of the game. In figure 10.5 we represent
the part of the payoff matrix of rnJ (N, v, ,F) that is obtained by fixing
the strategy of player 3 at 83 = {I, 2}. In this figure, player 1 is the row
player, and player 2 is the column player.

0
° °°
{I} {3} {I,3}
o 0, 30, 30 0,30, 30
°
0, 0, 0, 0,
{2} 0, 0, 18, 18, 0,30, 30 18, 36, 18
{3} 24, 0, 24 24,0,24 18, 18, 36 18, 18, 36
{2,3} 24,0,24 36, 18, 18 18, 18, 36 24, 24, 24
83 = {1,2}
Figure 10.5. Part of the payoff matrix of rnr (N, v, ,,()

We will show by contradiction that there exists no function P that


is a potential function for the game rnJ (N, v, ,F). Suppose that P is
258 Network formation and potential games

a potential function for fnf (N, v, ,P). Then the following equalities
should hold.

P(0, {3}, {I, 2}) - P(0, {I}, {I, 2}) = 30 - 0 = 30,


P( {2, 3}, {3}, {l, 2}) - P(0, {3}, {I, 2}) = 18 - 0 = 18,
P( {2, 3}, {I}, {I, 2}) - P( {2, 3}, {3}, {I, 2}) = 18 - 18 = 0,
P(0, {I}, {I, 2}) - P( {2, 3}, {I}, {I, 2}) = 0 - 36 = -36.

Without loss of generality, we assume that P(0, {I}, {I, 2}) = o. It


then follows from the equalities above that P(0, {3}, {I, 2}) = 0 + 30 =
30, P( {2, 3}, {3}, {I, 2}) = 30 + 18 = 48, P( {2, 3}, {I}, {I, 2}) = 48 +
o = 48, and P(0,{1},{1,2}) = 48 - 36 = 12, which contradicts our
assumption that P(0, {I}, {I, 2}) = O. This show that fnf (N, v, ,P) is
not a potential game. <)

We have illustrated that fnf (N, v, ,p) is not, in general, a potential


game. However, the strategic-form link-formation game fnf (N, v, ,) is
a potential game if the exogenous allocation rule, is the Myerson value
/.1. This follows from theorem 10.2 (see page 261), which shows that
the Myerson value in the only component-efficient allocation rule that
guarantees that fnf (N, v, ,) is a potential game. We prove intermediate
results in the following two lemmas. The first lemma states that if
the network-formation game associated with coalitional game (N, v) and
allocation rule, is a potential game, then the underlying allocation rule
satisfies fairness on C S{; (see section 2.2).

LEMMA 10.3 Let, be an allocation rule on CS:; and let (N,v) be a


coalitional game. If fnf (N, v, ,) is a potential game, then, satisfies
fairness.

PROOF: Suppose fnf (N, v, ,) is a potential game and let P be a poten-


tial function for this game. Let (N, L) be a network. For each kEN,
we define the strategy
S k = {j I j k E L}.

Obviously, L(s) = L. Choose a link ij E L. We use the notation S-ij to


denote the strategies of the players in N\ {i, j}. Then it holds that

P(s;\j,Sj,S-;j) = P(s;\j,sj\i,s-;j)
= P(s;, sj\i, S-;j),
Network formation 259

because the three strategy tuples (Si\j,Sj,S-ij), (si\j,sj\i,S-ij), and


(si,sj\i,s_ij) all result in the formation of the same network, namely
(N, L\ij), and, hence, in the same payoffs for each player.
We now find that

1'i(N, v, L) - 1'i(N, v, L\ij) = 1? (s) - 1? (Si \j, S-i)


= P(s) - P(Si\j, S-i)
= P(s) - P(sj\i, S-j)
= 1J(s) - 1J(Sj\1,S-j)
=1'j(N,v,L) -1'j(N,v,L\ij).

Hence, 1'i(N, v, L) - 1'i(N, v, L\ij) = 1'j(N, v, L) - 1'j(N, v, L\ij). Be-


cause ij E L was chosen arbitrarily, we may now conclude that 1'satisfies
fairness. 0

In the following lemma we show that the strategic-form game of net-
work formation Γ^{nf}(N, v, μ) is a potential game if the Myerson value is
used as the exogenous allocation rule. This result is due to Qin (1996).
However, we provide a different proof of this lemma, using the represen-
tation theorem 10.1.

LEMMA 10.4 For any coalitional game (N, v) it holds that the network-
formation game Γ^{nf}(N, v, μ) is a potential game.

PROOF: Let (N, v) be a coalitional game. For any strategy profile
s ∈ S in the strategic-form game Γ^{nf}(N, v, μ), we consider the network-
restricted game (N, v^{L(s)}) associated with communication situation
(N, v, L(s)). This defines an indexed set of coalitional games
{(N, v^{L(s)})}_{s∈S}.
We will show that {(N, v^{L(s)})}_{s∈S} ∈ G^{N,S}. Let R ⊆ N and s =
(s_R, s_{N\R}) ∈ S. Because v^{L(s)}(R) = Σ_{C∈R/L(s)} v(C) and R/L(s) does
not depend on s_{N\R}, it follows that v^{L(s)}(R) does not depend on s_{N\R}.
This implies that {(N, v^{L(s)})}_{s∈S} ∈ G^{N,S}.
Also, by the definitions of f^μ and the Myerson value, it follows that

    f_i^μ(s) = μ_i(N, v, L(s)) = Φ_i(N, v^{L(s)}).

It now follows from theorem 10.1 that Γ^{nf}(N, v, μ) is a potential
game.                                                                  □

We illustrate lemma 10.4 in the following example.



EXAMPLE 10.4 Let (N, v) be the coalitional game with player set N =
{1, 2, 3} and characteristic function v given by

    v(T) =  0   if |T| ≤ 1;
            36  if T = {1, 2};
            48  if T = {1, 3};                                    (10.9)
            60  if T = {2, 3};
            72  if T = N.

Note that this is the same game as that in example 10.3. However,
we are now going to use the Myerson value as the exogenous allocation
rule rather than the proportional links solution. We will consider the
same part of the game that was considered in example 10.3 and show
that when the Myerson value is used, we can find a potential function
for this part of the game. In figure 10.6 we represent the part of the
link-formation game Γ^{nf}(N, v, μ) that is obtained by fixing player 3's
strategy at s_3 = {1, 2}. In this figure, player 1 is the row player and player
2 is the column player.

              ∅            {1}           {3}           {1,3}
  ∅        0, 0, 0      0, 0, 0       0, 30, 30     0, 30, 30
  {2}      0, 0, 0      18, 18, 0     0, 30, 30     10, 40, 22
  {3}      24, 0, 24    24, 0, 24     12, 18, 42    12, 18, 42
  {2,3}    24, 0, 24    38, 14, 20    12, 18, 42    18, 24, 30

                          s_3 = {1, 2}

Figure 10.6.  Part of the payoff matrix of Γ^{nf}(N, v, μ)

It is straightforward to verify that the function represented in figure
10.7 is a potential function for the part of the game Γ^{nf}(N, v, μ) that is
represented in figure 10.6. ◊

              ∅      {1}     {3}     {1,3}
  ∅          0       0       30      30
  {2}        0       18      30      40
  {3}        24      24      42      42
  {2,3}      24      38      42      48

              s_3 = {1, 2}

Figure 10.7.  A potential function for the game in figure 10.6
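The entries of figure 10.7 can be reproduced mechanically. Following the representation result behind theorem 10.1, a potential function for Γ^{nf}(N, v, μ) can be obtained by assigning to each strategy profile s the Hart and Mas-Colell potential of the network-restricted game (N, v^{L(s)}), i.e., the sum over coalitions T of the unanimity coefficients λ_T(v^{L(s)}) divided by |T|. The sketch below, with helper names of our own choosing, recomputes the row s_1 = {2, 3} of figure 10.7 in this way.

```python
from itertools import combinations, chain

N = (1, 2, 3)
v = {frozenset({1, 2}): 36, frozenset({1, 3}): 48,
     frozenset({2, 3}): 60, frozenset(N): 72}   # all other coalitions are worth 0

def val(T):
    return v.get(frozenset(T), 0)

def subsets(T):
    T = list(T)
    return chain.from_iterable(combinations(T, k) for k in range(len(T) + 1))

def restricted(L, T):
    """v^L(T): sum of v over the components of T in the network (N, L)."""
    T, total = set(T), 0
    while T:
        comp, stack = set(), [T.pop()]
        while stack:
            i = stack.pop()
            comp.add(i)
            stack.extend(j for j in T - comp if frozenset({i, j}) in L)
        T -= comp
        total += val(comp)
    return total

def unanimity_coefficients(w):
    """Harsanyi dividends lambda_T(w), computed recursively over coalition sizes."""
    lam = {}
    for T in subsets(N):
        if T:
            lam[frozenset(T)] = w(T) - sum(lam[frozenset(R)]
                                           for R in subsets(T) if R and set(R) < set(T))
    return lam

def hm_potential(w):
    """Hart and Mas-Colell potential: sum over coalitions of lambda_T / |T|."""
    return sum(l / len(T) for T, l in unanimity_coefficients(w).items())

def links(s):
    return {frozenset({i, j}) for i, j in combinations(N, 2)
            if j in s[i] and i in s[j]}

# Row s_1 = {2, 3} of figure 10.7, with s_3 = {1, 2} fixed.
for s2 in [set(), {1}, {3}, {1, 3}]:
    L = links({1: {2, 3}, 2: s2, 3: {1, 2}})
    print(sorted(map(sorted, L)), hm_potential(lambda T: restricted(L, T)))
# the potentials 24, 38, 42, 48 (as floats) match the last row of figure 10.7
```

Enumerating all 4^3 = 64 strategy profiles in the same way shows that 48 is the overall maximum of this potential function, attained only at the profile in which every player proposes links to both other players, in line with example 10.5 below.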

Combining lemmas 10.3 and 10.4, we easily derive the following result.

THEOREM 10.2 Let N be a set of players, γ a component-efficient al-
location rule on CS^N, and (N, v) a coalitional game. Then Γ^{nf}(N, v, γ)
is a potential game if and only if γ coincides with μ on CS^N_v.

PROOF: The if-part in the theorem follows directly from lemma 10.4.
To prove the only-if-part, suppose that the network-formation game
Γ^{nf}(N, v, γ) is a potential game. Then it follows from lemma 10.3 that
γ satisfies fairness on CS^N_v. Because γ is component efficient by assump-
tion, it follows from theorem 2.4 that γ coincides with μ on CS^N_v.     □

Now that we have established conditions under which the strategic-
form network-formation games are potential games, we turn our atten-
tion to the study of which networks are formed when the players play
potential-maximizing strategy profiles.
The following theorem shows that, for superadditive coalitional games,
the networks resulting from potential-maximizing strategy profiles are
similar to those resulting from undominated Nash equilibria and coali-
tion-proof Nash equilibria. Recall that strategy profile s̄ is defined by
s̄_i = N\i for each i ∈ N (see chapter 7).

THEOREM 10.3 Let (N, v) be a superadditive coalitional game and let
P be a potential function for the network-formation game Γ^{nf}(N, v, μ).
Then P assumes its maximum value at s̄. Furthermore, if t is a strategy
profile in which P is maximal, then μ(N, v, L(t)) = μ(N, v, L(s̄)).

PROOF: We know by lemma 7.4 that for each i ∈ N it holds that s̄_i is a
weakly dominant strategy in the game Γ^{nf}(N, v, μ). Let t be an arbitrary
strategy profile in the game Γ^{nf}(N, v, μ). Without loss of generality, we
assume that N = {1, 2, ..., n}. We define a sequence of strategy profiles
s^0, s^1, ..., s^n that starts with t and ends with s̄ and in which players
change their strategies from t_i to s̄_i one after the other. Formally, the
sequence is defined inductively by s^0 = t and s^i = (s^{i-1}_{-i}, s̄_i) for all
i ∈ {1, ..., n}. Because s̄_i is a weakly dominant strategy for each player
i ∈ N, we have for all i ∈ {0, ..., n-1} that f^μ_{i+1}(s^{i+1}) ≥ f^μ_{i+1}(s^i). We
derive from this that P(s^{i+1}) ≥ P(s^i) for all i ∈ {0, ..., n-1}. Thus,

    P(s̄) = P(s^n) ≥ P(s^{n-1}) ≥ ... ≥ P(s^0) = P(t).              (10.10)

This completes the proof of the first part of the theorem.
We now prove the second part of the theorem. Let t ∈ S be a strategy
profile in which P is maximal. Because P(s̄) ≥ P(s) for all strategy
profiles s ∈ S, it follows that P(s̄) = P(t). This means that, with
respect to this strategy profile t, every inequality in (10.10) has to hold
with equality. Because

    P(s^k) - P(s^{k-1}) = f^μ_k(s^k) - f^μ_k(s^{k-1}) ≥ 0

for all k ∈ {1, ..., n}, it then follows that f^μ_k(s^k) = f^μ_k(s^{k-1}) for all k ∈
{1, ..., n}. Using lemma 7.3, we then conclude that f^μ(s^k) = f^μ(s^{k-1})
for all k ∈ {1, ..., n} and, hence,

    f^μ(t) = f^μ(s^0) = ... = f^μ(s^n) = f^μ(s̄).

This shows that μ(N, v, L(t)) = μ(N, v, L(s̄)).                        □
We illustrate theorem 10.3 in the following example.

EXAMPLE 10.5 Consider the network-formation game Γ^{nf}(N, v, μ) that
was studied in example 10.4. A part of a potential function for this
game is represented in figure 10.7. By theorem 10.2, we know that this
potential function can be extended to the set of all strategy profiles.
Figure 10.7 illustrates that, under the condition that player 3 chooses
s_3 = {1, 2}, the potential function assumes its maximal value at strate-
gies s̄_1 and s̄_2 of players 1 and 2. Theorem 10.3 implies that this value
is the maximal value the potential function assumes on the set of all
strategy profiles. ◊

10.4 REMARKS
In this section we point the reader to some additional literature related
to network formation and potential games.
Garratt and Qin (2000) apply the network-formation game
Γ^{nf}(N, v, μ) to study coalition-government formation in systems with
three political parties. They provide a detailed analysis of the strategy
profiles that are potential-maximizing.
Monderer and Shapley (1996) also introduce a more general version
of potential functions, namely weighted potential functions. Let Γ =
(N; (S_i)_{i∈N}; (f_i)_{i∈N}) be a strategic-form game and w ∈ R^N_{++} a vector of
relative weights of the players. A function P^w : Π_{i∈N} S_i → R is called a
w-potential for Γ if for every player i ∈ N, every strategy profile s ∈ S,
and every strategy t_i ∈ S_i for player i, it holds that

    f_i(s_i, s_{-i}) - f_i(t_i, s_{-i}) = w_i (P^w(s_i, s_{-i}) - P^w(t_i, s_{-i})).    (10.11)

A game Γ is called a w-potential game if it admits a w-potential and it
is called a weighted potential game if there exists a vector of weights
w ∈ R^N_{++} such that Γ is a w-potential game.

Weighted potential games are used by Slikker et al. (2000a) to an-


alyze hypergraph-formation games in strategic form. The hypergraph-
formation games that they study are straightforward extensions of the
strategic-form network-formation games that we have studied in the cur-
rent chapter. Slikker et al. (2000a) prove an extension of the represen-
tation theorem in section 10.2 and show that there exists a relation
between weighted potential games and weighted Shapley values. They
use this result to prove that hypergraph-formation games are weighted
potential games if and only if a weighted Myerson value is used as the ex-
ogenous allocation rule. They also show a result similar to that obtained
in theorem 10.3. Using the fact that hypergraph-formation games are
an extension of the network-formation games that we have studied, we
derive weighted variants of the results presented in the current chapter.
Slikker (2001) uses potential games to study the formation of coalition
structures. He considers strategic-form games of coalition formation that
have features similar to the games of network-formation that we have
studied in the current chapter. He shows that, depending on the specific
rules of coalition formation, the coalition-formation games are potential
games only if one of two exogenous allocation rules is used to determine
players' payoffs in various coalition structures. These two allocation rules
are equal division of the surplus of a coalition over and above the stand-
alone values of the players, and the value of Aumann and Dreze, which we
discussed in section 4.1. Slikker (2000a) considers potential-maximizing
strategy profiles for coalition-formation games that are potential games.
Chapter 11

NETWORK FORMATION AND REWARD FUNCTIONS

In the last chapter of this book, we consider questions related to the


formation of networks in cases where a reward function gives the value
of each network. As we saw in section 4.5, reward functions allow us to
model situations in which the value that can be obtained by a group of
players does not depend solely on whether they are connected or not,
but also on exactly how they are connected to each other.
In section 11.1 we discuss the notion of pairwise stability of networks
that was introduced by Jackson and Wolinsky (1996) as a requirement
on networks that are likely to arise in various processes of network
formation. The main question that is addressed in this section is whether
we can hope that processes of network formation lead to networks that
are optimal in the sense that they maximize the total reward. The an-
swer is negative. It is shown that pairwise stability is in general not
reconcilable with optimality. The tension between stability and opti-
mality is further studied by Dutta and Mutuswami (1997), who consider
a specific strategic-form game of network formation. They concentrate
on two stability concepts derived from this game, strong stability and
weak stability. We report their results in section 11.2. We conclude this
chapter with a section in which we discuss dynamic models of network
formation in which players are not forward looking.

11.1 PAIRWISE STABILITY


This section, which is based on Jackson and Wolinsky (1996), concen-
trates on general reward functions. No explicit model of network forma-
tion is given. Instead, the notion of pairwise stability is introduced as a
requirement on networks that are likely to arise in various processes of

network formation. The focus in this section is on the tension between


pairwise stability and optimality of networks.
We encountered reward functions in section 4.5. If the set of players is
N, then a reward function r is a function that associates with each set of
links L ⊆ L^N a value r(L) ∈ R which represents the profits obtainable
by the players in N if they are organized in network (N, L). To keep
our notations as simple as possible, we again make the assumption that
r(∅) = 0. Given a reward function r, a network (N, L) is optimal for
reward function r if r(L) ≥ r(L') for all L' ⊆ L^N.^{28} Hence, an optimal
network is one which maximizes the total obtainable reward. Note that
a network that is optimal for a reward function r may not be optimal
for another reward function r'.
The definition of pairwise stability is based on the idea that players
are free to form new links or sever existing ones if this will increase their
payoffs. Hence, pairwise stability is defined with a specific allocation
rule in mind that is used to determine the payoffs to individual players
in various networks. Let γ be an allocation rule that is defined on a
class RCS of reward communication situations which has the property
that if (N, r, L) ∈ RCS then (N, r, L') ∈ RCS for all L' ⊆ L^N. Let
(N, r, L) ∈ RCS. Then network (N, L) is pairwise stable with respect to
reward function r and allocation rule γ if the following two conditions
are satisfied.

(i) For all ij ∈ L it holds that

    γ_i(N, r, L) ≥ γ_i(N, r, L\ij) and γ_j(N, r, L) ≥ γ_j(N, r, L\ij).

(ii) For all ij ∉ L it holds that

    if γ_i(N, r, L) < γ_i(N, r, L ∪ ij) then γ_j(N, r, L) > γ_j(N, r, L ∪ ij).

The first condition reflects the idea that players can break links unilat-
erally, while the second condition reflects the idea that it requires the
consent of two players to form a new link. Note that it is implicitly as-
sumed that a new link between two players will be formed if one of these
players benefits from its formation, while the other one is indifferent. If
there is no ambiguity about the reward function and the allocation rule,
then we will just say that a network is or is not pairwise stable.
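As a concrete illustration of how conditions (i) and (ii) operate, the following sketch of ours checks pairwise stability of a given network by brute force; the allocation rule is passed in as a black-box function standing in for γ_i(N, r, ·) with (N, r) fixed, and the names are hypothetical.

```python
from itertools import combinations

def pairwise_stable(N, L, allocation):
    """Check conditions (i) and (ii) of pairwise stability.

    N          -- list of players
    L          -- set of frozenset({i, j}) links
    allocation -- function mapping a set of links to a dict {player: payoff}
    """
    pay = allocation(L)
    # (i) no player gains by severing one of his own links
    for link in L:
        reduced = allocation(L - {link})
        if any(reduced[i] > pay[i] for i in link):
            return False
    # (ii) no unlinked pair can add its link with one player gaining
    #      strictly and the other not losing
    for i, j in combinations(N, 2):
        link = frozenset({i, j})
        if link in L:
            continue
        extended = allocation(L | {link})
        if extended[i] > pay[i] and extended[j] >= pay[j]:
            return False
        if extended[j] > pay[j] and extended[i] >= pay[i]:
            return False
    return True
```

Feeding it the payoffs of the co-author model discussed in example 11.1 below reproduces, for instance, the conclusion that the four-player complete network is pairwise stable while (N, {12, 34}) is not.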
The following example illustrates the concepts of optimality and pair-
wise stability.

28 Jackson and Wolinsky (1996) call networks with this property strongly efficient. We di-
vert from their terminology to avoid confusion with the properties efficiency and component
efficiency that we have encountered in earlier chapters.

EXAMPLE 11.1 We consider the so-called co-author model. In this
model, players are researchers who spend their time writing co-authored
papers. By assumption, each paper has two authors. Denoting the set
of players by N, we construct the network (N, L) such that ij ∈ L if
and only if players i and j are involved in a joint project. Then |L_i|
is the number of projects in which player i is involved, for each i ∈ N.
Obviously, the more projects a player is involved in, the less time he
can spend on any one of his projects. Also, we assume that the value
of a project is determined by how much time the researchers put into
it and an additional term that captures the idea that interaction be-
tween researchers is desirable. To make things concrete, suppose that
every player has a unit of time, which he divides equally over all his
projects, and that the value to player i of a joint project by players i and
j equals 1/|L_i| + 1/|L_j| + 1/(|L_i| |L_j|). Hence, the value to player i in network
(N, L) equals^{29}

    u_i(L) = Σ_{j: ij∈L} ( 1/|L_i| + 1/|L_j| + 1/(|L_i| |L_j|) ).        (11.1)

This defines a reward function r given by r(L) = Σ_{i∈N} u_i(L).


The following two results are due to Jackson and Wolinsky (1996).
They consider the allocation rule that assigns to each player i his value
ui(L) in every network (N, L).
(i) If there are an even number of players, then the optimal networks in
the co-author model are those in which each component consists of
two players.

(ii) A pairwise stable network in the co-author model has the property
that all its components are complete (sub)networks and that no two
of its components have the same number of players. Moreover, for
any two components G_1, G_2 in a pairwise stable network it holds that
|G_1| > |G_2|^2 or |G_2| > |G_1|^2.
These two results show that in the co-author model pairwise stable net-
works have more links than is desirable for overall reward maximization.
This is because authors do not fully take into account the negative effects
on their current co-authors when they take on new projects.
To make things concrete, we consider a model with four players, N =
{1,2,3,4}. According to (i), it follows that optimal networks have two
links and two components, such that every player is involved in exactly

29Recall that the empty sum equals zero.



one link. We will show that none of the optimal networks is pairwise
stable. Because of the symmetry of the situation, it suffices to show this
for one of the optimal networks, and we choose the one with links 12
and 34, which is represented in figure 11.1.

[Figure: the network (N, {12, 34}), drawn as the two links 1-2 and 3-4]

Figure 11.1.  An optimal network

Applying equation (11.1), we find that each player i ∈ N receives
u_i({12, 34}) = 3 in this network and, hence, that the value of this net-
work equals 12. However, any two players who are not connected yet
can benefit from forming a link between them. For example, if players
1 and 4 form link 14, then players 1 and 4 both improve their payoffs
to u_1({12, 14, 34}) = (½ + 1 + ½) + (½ + ½ + ¼) = 3¼. We find that
(N, {12, 34}) is not pairwise stable. Note that the addition of link 14
decreases the reward of the network because the payoffs of players 2 and
3 decrease from 3 to 2 and, hence, the reward of the new network is only
10½.
So, which networks are pairwise stable? According to (ii) above, the
only networks that can possibly be pairwise stable are the complete
network and networks that have one isolated player and all three links
between the other three players. These two types of networks are repre-
sented in figure 11.2.

4"", .3 4~13
J-~2 1161 2

a: (N, {12, 14, 24})

Figure 11.2. Two types of networks that are possibly pairwise stable

We will show that the network in figure 11.2 (a), (N, {12, 14, 24}), is
not pairwise stable. It is easily checked that players 1, 2, and 4 each
receive 2·(½ + ½ + ¼) = 2½ in this network, while player 3 receives
nothing. Players 1 and 3 can benefit by forming link 13, because this
increases player 1's payoff to 2·(⅓ + ½ + ⅙) + (⅓ + 1 + ⅓) = 3⅔ and
player 3's payoff to 1⅔. We conclude that (N, {12, 14, 24}) is not pairwise
stable. Because of the symmetry of the situation, we see that none of
the networks that look like that in figure 11.2 (a) are pairwise stable.
So, the only remaining candidate for a network that is pairwise stable
is the complete network. Because of the symmetry of the situation, to
show that this network is pairwise stable, it suffices to check for just one
link that its deletion does not benefit one of the players who are involved
in it. It is a straightforward exercise to show that each player receives
3·(⅓ + ⅓ + ⅑) = 2⅓ in the complete network. If player 1 breaks a link with any of
the other players, though, then his payoff drops to 2·(½ + ⅓ + ⅙) = 2.
We conclude that the unique pairwise stable network in the co-author
model with 4 players is the complete network. Note that this network
is not optimal, because its reward is only 4·2⅓ = 9⅓, which is less than
the reward of the network (N, {12, 34}). ◊
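The fractions in this example are easy to check mechanically. The sketch below (our own; the function u implements (11.1) with exact arithmetic, and the helper names are ours) recomputes the payoffs used above.

```python
from fractions import Fraction as F
from itertools import combinations

N = [1, 2, 3, 4]

def u(L, i):
    """Co-author payoff (11.1): sum, over i's projects ij, of 1/|Li| + 1/|Lj| + 1/(|Li||Lj|)."""
    deg = {k: sum(1 for l in L if k in l) for k in N}
    return sum(F(1, deg[i]) + F(1, deg[j]) + F(1, deg[i] * deg[j])
               for l in L if i in l for j in l if j != i)

def net(*pairs):
    return {frozenset(p) for p in pairs}

optimal = net((1, 2), (3, 4))
complete = net(*combinations(N, 2))

print([u(optimal, i) for i in N])            # every player gets 3, so the reward is 12
print(u(net((1, 2), (1, 4), (3, 4)), 1))     # 13/4: players 1 and 4 gain by adding link 14
print(u(net((1, 2), (1, 3), (1, 4), (2, 4)), 1),
      u(net((1, 2), (1, 3), (1, 4), (2, 4)), 3))
                                             # 11/3 and 5/3: 1 and 3 gain by adding 13
print(u(complete, 1))                        # 7/3 in the complete network
print(u(complete - {frozenset({1, 2})}, 1))  # 2: breaking a link hurts, so L^N is pairwise stable
```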

The tension between pairwise stability and optimality that we ob-


serve in example 11.1 persists in general. To state this result, we need
to introduce a new property for allocation rules. An allocation rule is
anonymous if it assigns payoffs to the players in a reward communica-
tion situation based solely on their position in the network and on the
values obtainable in different networks. The term anonymity refers to
the fact that the identities of the players do not influence their payoffs.
The definition of anonymity uses permutations of a player set N. A
permutation of N is a function π : N → N such that for each i ∈ N
there exists precisely one j ∈ N with π(j) = i. Hence, a permuta-
tion scrambles the names of the players. The network and the reward
function need to be scrambled accordingly. For a reward communica-
tion situation (N, r, L) and a permutation π of the player set N, the
scrambled reward communication situation (N, r^π, L^π) has the set of links
L^π = {ij | there exists a link kl ∈ L such that i = π(k) and j = π(l)}
and the reward function r^π defined by r^π(A^π) = r(A) for each A ⊆ L^N.

Anonymity  An allocation rule γ on a class RCS of reward communi-
cation situations is anonymous if for every (N, r, L) ∈ RCS and every
permutation π of the set of players N it holds that

    γ_{π(i)}(N, r^π, L^π) = γ_i(N, r, L)                            (11.2)

for each i ∈ N.

In the following theorem, which is due to Jackson and Wolinsky
(1996), we also use the property component efficiency for reward com-
munication situations with component additive reward functions. These
two concepts were defined in section 4.5. We denote the set of all reward
communication situations with a fixed player set N and a component
additive reward function r by RCS^N_{CA}.
THEOREM 11.1 Let N be a set of players such that |N| ≥ 3. Then there
does not exist an allocation rule γ on RCS^N_{CA} that is anonymous and
component efficient and such that for each component additive reward
function r at least one optimal network is pairwise stable.

PROOF: Let γ be an allocation rule on RCS^N_{CA} that is anonymous and
component efficient. We will show that there exists a component additive
reward function r such that no optimal network is pairwise stable.
Let T ⊆ N be such that |T| = 3. Without loss of generality, we
assume that T = {1, 2, 3}. Consider the component additive reward
function r defined by
    r(L) =  18  if L = {12}, L = {13}, or L = {23};
            19  if L = {12, 13}, L = {12, 23}, or L = {13, 23};
            18  if L = {12, 13, 23};
            0   if there exists a link l ∈ L with l ∉ {12, 13, 23}.
Note that there are exactly three networks that are optimal for r, namely
(N, {12, 13}), (N, {12, 23}), and (N, {13, 23}). We will show that none of
these three is pairwise stable. Because of the symmetry of the situation,
it suffices to show that network (N, {12, 13}) is not pairwise stable.
Suppose that network (N, {12, 13}) is pairwise stable. We will show
that this leads to a contradiction. Using anonymity and component ef-
ficiency of allocation rule γ, we obtain γ_i(N, r, {12, 13, 23}) = 6 for each
i ∈ {1, 2, 3}, γ_1(N, r, {12, 13}) + γ_2(N, r, {12, 13}) + γ_3(N, r, {12, 13}) = 19,
and γ_2(N, r, {12, 13}) = γ_3(N, r, {12, 13}). Pairwise stability of (N, {12, 13})
then implies that γ_2(N, r, {12, 13}) = γ_3(N, r, {12, 13}) ≥ 6, because other-
wise players 2 and 3 would want to form the additional link 23. Hence,
using component efficiency of γ, we derive that γ_1(N, r, {12, 13}) ≤ 19 -
12 = 7. Note, however, that it follows from anonymity and component
efficiency of γ that γ_1(N, r, {12}) = 9. We conclude that player 1 would
prefer to break the link with player 3 to get 9 rather than 7, so that
network (N, {12, 13}) is not pairwise stable.                          □
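The proof leaves the allocation rule abstract, but it may help to see the conclusion confirmed for one concrete anonymous and component efficient rule: the Myerson value for reward communication situations, i.e., the Shapley value of the game v^{r,L} with v^{r,L}(T) = r(L(T)). The sketch below (helper names ours) evaluates it for the reward function from the proof with N = {1, 2, 3}. It shows that in the optimal network (N, {12, 13}) players 2 and 3 each receive 10/3 and both gain by forming link 23, which gives them 6 each; so under this rule, too, no optimal network is pairwise stable.

```python
from fractions import Fraction as F
from itertools import permutations

N = (1, 2, 3)
link = lambda i, j: frozenset({i, j})

def r(L):
    """Reward function from the proof; with N = {1, 2, 3} every link lies in {12, 13, 23}."""
    return {0: 0, 1: 18, 2: 19, 3: 18}[len(set(L))]

def v(L, S):
    """v^{r,L}(S) = r(L(S)): reward of the links of L both of whose endpoints lie in S."""
    return r({l for l in L if l <= set(S)})

def myerson(L):
    """Myerson value of (N, r, L): the Shapley value of v^{r,L}, averaged over orderings."""
    pay = {i: F(0) for i in N}
    for order in permutations(N):
        for k, i in enumerate(order):
            pay[i] += F(v(L, order[:k + 1]) - v(L, order[:k]), 6)
    return pay

print(myerson({link(1, 2), link(1, 3)}))              # player 1 gets 37/3, players 2 and 3 get 10/3
print(myerson({link(1, 2), link(1, 3), link(2, 3)}))  # every player gets 6
print(myerson({link(1, 2)}))                          # players 1 and 2 get 9, player 3 gets 0
```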

Theorem 11.1 shows that there are component additive reward func-
tions such that for all anonymous and component efficient allocation
rules none of the optimal networks are pairwise stable. Note that the
conditions on the allocation rule are not unreasonable. They are, for
example, satisfied by the Myerson value for reward communication sit-
uations (see section 4.5). Jackson and Wolinsky (1996) show that the

conflict between optimality and pairwise stability can be avoided if at-


tention is restricted to certain classes of reward functions or if allocation
rules are not required to be component efficient. We illustrate their
results in the following examples.

EXAMPLE 11.2 Consider the connections model that we saw in example
4.16. We recall that N = {1, 2, 3} and that the reward function r is
defined by r(L) = u_1(L) + u_2(L) + u_3(L) with u_i(L) = Σ_{j∈C_i(L)\i} δ^{t(i,j)}
for each L ⊆ L^N. Here, δ ∈ (0, 1) is the rate of decay of information and
t(i, j) denotes the length of the shortest path connecting players i and
j.
Obviously, the allocation rule given by γ_i(N, r, L) = u_i(L), for each
L ⊆ L^N, is anonymous and component efficient. One easily verifies
that γ(N, r, ∅) = (0, 0, 0), γ(N, r, {12}) = (δ, δ, 0), γ(N, r, {12, 13}) =
(2δ, δ + δ², δ + δ²), and γ(N, r, L^N) = (2δ, 2δ, 2δ). Using the symmetry
of the situation, we now know the payoffs to the players for all possible
networks. It is easily seen from these payoffs that the complete network
is the unique pairwise stable network. Note that this network is also the
unique optimal network. ◊
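This claim can be verified numerically for a sample decay rate; any δ in (0, 1) behaves the same way in this three-player example. The sketch below, with names of our own choosing, enumerates all eight networks, computes the connections-model payoffs, and reports which networks are optimal or pairwise stable.

```python
from itertools import combinations

N = [1, 2, 3]
delta = 0.5                      # sample decay rate in (0, 1)
all_links = [frozenset(p) for p in combinations(N, 2)]

def dist(L, i, j):
    """Length of a shortest path from i to j in (N, L), or None if not connected."""
    frontier, seen, d = {i}, {i}, 0
    while frontier:
        if j in frontier:
            return d
        frontier = {k for f in frontier for l in L if f in l
                    for k in l if k not in seen}
        seen |= frontier
        d += 1
    return None

def u(L, i):
    """Connections-model payoff: sum of delta^t(i,j) over players j connected to i."""
    return sum(delta ** dist(L, i, j) for j in N
               if j != i and dist(L, i, j) is not None)

def pairwise_stable(L):
    for l in L:                  # (i) nobody gains by severing a link
        if any(u(L - {l}, i) > u(L, i) for i in l):
            return False
    for l in set(all_links) - L: # (ii) no unlinked pair jointly gains by linking
        i, j = tuple(l)
        if u(L | {l}, i) > u(L, i) and u(L | {l}, j) >= u(L, j):
            return False
        if u(L | {l}, j) > u(L, j) and u(L | {l}, i) >= u(L, i):
            return False
    return True

networks = [set(c) for k in range(4) for c in combinations(all_links, k)]
best = max(sum(u(L, i) for i in N) for L in networks)
for L in networks:
    if pairwise_stable(L) or sum(u(L, i) for i in N) == best:
        print(sorted(map(sorted, L)), round(sum(u(L, i) for i in N), 3), pairwise_stable(L))
# only the complete network is printed: it is both the unique optimum
# and the unique pairwise stable network for this sample delta
```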

EXAMPLE 11.3 Consider the set of players N = {1, 2, 3} and reward
function r defined by

    r(L) =  18  if L = {12}, L = {13}, or L = {23};
            19  if L = {12, 13}, L = {12, 23}, or L = {13, 23};
            18  if L = {12, 13, 23}.

Note that this is the reward function used in the proof of theorem 11.1.
Consider the equal-division allocation rule γ, which divides the value of
any network equally among all three players. Hence, in a network with
one link or three links every player gets 6, and in a network with two
links every player gets 6⅓. This allocation rule does satisfy anonymity,
but violates component efficiency. Further, with this equal-division al-
location rule the three optimal networks are pairwise stable. ◊

11.2 WEAK AND STRONG STABILITY


This section is based on Dutta and Mutuswami (1997), who further
study the potential conflict between optimality and stability of networks.
They restrict their attention to component additive reward functions and
component efficient allocation rules and consider two stability notions,
strong stability and weak stability.

The notions of strong stability and weak stability are defined using
the strategic-form game of network formation that we defined in section
7.1. We recall that in this game, every player i ∈ N has a strategy set
S_i = 2^{N\i}, and network (N, L(s)) with links L(s) = {ij | i ∈ s_j, j ∈
s_i} results if the players play strategy profile s ∈ S. With underlying
reward function r and allocation rule γ, the payoff to player i is then
f_i^γ(s) = γ_i(N, r, L(s)). For a given reward function r and allocation
rule γ, we denote this network-formation game by Γ^{nf}(N, r, γ). Now,
a network (N, L) is strongly stable (with respect to reward function r
and allocation rule γ) if it can be formed in a strong Nash equilibrium
of the game Γ^{nf}(N, r, γ).^{30} Similarly, it is called weakly stable (with
respect to reward function r and allocation rule γ) if it can be formed
in a coalition-proof Nash equilibrium of the game Γ^{nf}(N, r, γ). Because
in a strong Nash equilibrium the size of a coalition of deviating players
is not restricted to be at most two, and because deviating players are
not restricted to the addition or deletion of one link at a time, any
network that is strongly stable is also pairwise stable. Also, because
every strong Nash equilibrium is also a coalition-proof Nash equilibrium,
every strongly stable network is weakly stable. The following example
illustrates that networks that are not pairwise stable can still be weakly
stable.

EXAMPLE 11.4 Let N = {1, 2, 3} be a set of players and let r be the
reward function defined by

    r(L) =  0   if |L| ≤ 1 or L = {12, 23};
            15  otherwise.

Consider allocation rule γ defined by γ(N, r, L) = (0, 0, 0) for all L with
r(L) = 0, γ(N, r, {12, 13}) = (4, 7, 4), γ(N, r, {13, 23}) = (5, 5, 5), and
γ(N, r, L^N) = (6, 6, 3).
Network (N, {13, 23}) is not pairwise stable because both players 1
and 2 prefer the complete network (N, L^N). We will argue, however,
that network (N, {13, 23}) is weakly stable. Consider strategy profile
s = ({3}, {3}, {1, 2}) in the network-formation game Γ^{nf}(N, r, γ). Note that
L(s) = {13, 23}. Obviously, the only possible deviation that improves
the payoffs of all the deviating players is the deviation by players 1 and 2
to strategies t_1 = {2, 3} and t_2 = {1, 3}, which will induce the formation

30Dutta and Mutuswami (1997) use a slightly different definition of strong Nash equilibrium
and, hence, their concept of strong stability is slightly different from the one that we use.
The results in this section are not affected by this difference.

of the complete network. This deviation, however, is not allowed because
player 2 can further increase his payoff from 6 to 7 by breaking link 23
and inducing the formation of network (N, {12, 13}). We conclude that
network (N, {13, 23}) is weakly stable. ◊

Dutta and Mutuswami (1997) first try to relax the anonymity con-
dition to one of weighted fairness, which is obtained by adapting the
fairness property (4.17) to account for exogenous weights that reflect
the relative strengths of players like we saw in the definition of weighted
Shapley values (see section 1.1). They find a result that is completely
analogous to theorem 4.6, namely that there exists a unique allocation
rule that satisfies component efficiency and weighted fairness, and that
it is a weighted Myerson value. 31 The following theorem is a slightly
stronger version of a theorem in Dutta and Mutuswami (1997), who
require a network to be strongly stable rather than pairwise stable.

THEOREM 11.2 Let N be a set of players such that |N| ≥ 3. Then there
does not exist an allocation rule on RCS^N_{CA} that satisfies weighted fair-
ness and component efficiency and such that for each component additive
reward function r at least one optimal network is pairwise stable.

PROOF: Let γ be an allocation rule on RCS^N_{CA} that satisfies weighted
fairness and component efficiency. Then there exists a vector of weights
w ∈ R^N_{++} such that γ(N, r, L) = Φ^w(N, v^{r,L}) for all (N, r, L) ∈ RCS^N_{CA},
where v^{r,L}(T) = r(L(T)) for all T ⊆ N, as defined on page 118. Let
T ⊆ N be such that |T| = 3. Without loss of generality, we assume that
T = {1, 2, 3} and w_1 + w_2 + w_3 = 1. Consider the component additive
reward function r defined by

    r(L) =  1 + 2ε  if L = {12}, L = {13}, or L = {23};
            1 + ε   if L = {12, 13}, L = {12, 23}, or L = {13, 23};
            1       if L = {12, 13, 23};
            0       if there exists a link l ∈ L with l ∉ {12, 13, 23},

where 0 < ε < min{w_1, w_2, w_3}. Note that the only networks that are
optimal for r are the networks with exactly one of the links 12, 13, or
23 and no other links. We will show that these three networks are not
pairwise stable. Because of symmetry it suffices to show that (N, {12})
is not pairwise stable.

31 Dutta and Mutuswami (1997) do not include a proof of this statement. We refer the reader
to Slikker et al. (2000a) for a proof in the context of hypergraph communication situations,
which can easily be adapted to fit the setting of reward communication situations.

Using v^{r,{12}} = (1 + 2ε) u_{1,2} and expression (1.7) for the weighted
Shapley value, we compute

    γ_1(N, r, {12}) = (1 + 2ε) w_1 / (w_1 + w_2).

From v^{r,{12,13}} = (1 + 2ε) u_{1,2} + (1 + 2ε) u_{1,3} - (1 + 3ε) u_{1,2,3}, it follows
that

    γ_1(N, r, {12, 13}) = (1 + 2ε) w_1/(w_1 + w_2) + (1 + 2ε) w_1/(w_1 + w_3) - (1 + 3ε) w_1,

where we use that w_1 + w_2 + w_3 = 1. We derive from this that

    γ_1(N, r, {12, 13}) - γ_1(N, r, {12})
        = w_1 ( (1 + 2ε)/(w_1 + w_3) - (1 + 3ε) )
        = w_1 ( (1 + 2ε) - (w_1 + w_3)(1 + 3ε) ) / (w_1 + w_3)
        = w_1 ( (1 + 2ε) - (1 - w_2)(1 + 3ε) ) / (w_1 + w_3)
        > 0,

where the third equality holds because w_1 + w_2 + w_3 = 1 and the
inequality holds because (1 + 2ε) - (1 - w_2)(1 + 3ε) = -ε + w_2 + 3w_2 ε >
w_2 - ε > 0. Also, using the expression for v^{r,{12,13}} above, we find

    γ_3(N, r, {12, 13}) - γ_3(N, r, {12}) = w_3 ( (1 + 2ε)/(w_1 + w_3) - (1 + 3ε) )
        = (w_3/w_1) ( γ_1(N, r, {12, 13}) - γ_1(N, r, {12}) ) > 0.

We conclude that network (N, {12}) is not pairwise stable because play-
ers 1 and 3 can improve their payoffs by forming link 13.              □
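A numerical check of these inequalities is straightforward once one uses the fact that, by expression (1.7), the weighted Shapley value of a unanimity game u_T gives each player i ∈ T the share w_i / Σ_{j∈T} w_j of its coefficient. The sketch below (our own, for one arbitrary admissible choice of weights and ε) computes γ_1 and γ_3 in the networks {12} and {12, 13} from the unanimity decompositions used in the proof and confirms that both players gain from forming link 13.

```python
from fractions import Fraction as F

# sample weights with w1 + w2 + w3 = 1 and 0 < eps < min(w); any such choice works
w = {1: F(1, 2), 2: F(1, 3), 3: F(1, 6)}
eps = F(1, 10)

def weighted_shapley(dividends):
    """Weighted Shapley value of a game given by its unanimity coefficients:
    player i in T receives lambda_T * w_i / sum_{j in T} w_j."""
    pay = {i: F(0) for i in w}
    for T, lam in dividends.items():
        tot = sum(w[j] for j in T)
        for i in T:
            pay[i] += lam * w[i] / tot
    return pay

one = F(1)
v12 = {(1, 2): one + 2 * eps}                                  # v^{r,{12}}
v12_13 = {(1, 2): one + 2 * eps, (1, 3): one + 2 * eps,
          (1, 2, 3): -(one + 3 * eps)}                         # v^{r,{12,13}}

g_single = weighted_shapley(v12)
g_two = weighted_shapley(v12_13)
print(g_two[1] - g_single[1] > 0, g_two[3] - g_single[3] > 0)  # True True: both gain
print((g_two[3] - g_single[3]) * w[1] == (g_two[1] - g_single[1]) * w[3])
                                                               # True: gains in ratio w3/w1
```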

Theorem 11.2 shows that requiring weighted fairness instead of an-


onymity does not resolve the conflict between optimality and stability
established by Jackson and Wolinsky (1996). Therefore, Dutta and Mu-
tuswami (1997) examine whether changing the requirement of pairwise
stability rather than that of anonymity resolves the conflict. They re-
place the requirement of pairwise stability in theorem 11.1 by one of
weak stability and show that this also does not resolve the conflict. We
report their result in the following theorem, whose proof we omit because
it is almost identical to that of theorem 11.1.

THEOREM 11.3 Let N be a set of players such that |N| ≥ 3. Then
there does not exist an allocation rule on RCS^N_{CA} that is anonymous and
component efficient and such that for each component additive reward
function r at least one optimal network is weakly stable.

This result seems to be in contrast with the spirit of theorem 7.3.
In that theorem we consider superadditive games, so that the complete
network is optimal, and it is proven that the complete network is weakly
stable. In fact, the theorem implies that every weakly stable network
is optimal. Dutta and Mutuswami (1997) show that the crucial prop-
erty for this result to hold is monotonicity of a reward function, i.e.,
r(L ∪ ij) ≥ r(L) for all sets of links L. Though this assumption is satis-
fied by reward functions associated with superadditive games, it is not,
for example, satisfied by the reward functions that we obtain in chapter
8 when there are positive costs for establishing links.
To avoid relying on the monotonicity assumption to resolve the conflict between
optimality and stability of networks, Dutta and Mutuswami (1997) take
an implementation approach. The idea behind this approach is that if
we believe that only strongly stable networks will be formed, then we
should not care whether the allocation rule is anonymous for networks
that are not strongly stable. Hence, an allocation rule is required to
be anonymous only on the set of strongly stable networks. Dutta and
Mutuswami (1997) show that this, finally, solves the conflict between
optimality and stability of networks. Their result is valid for component
additive reward functions that assign a positive value to every nonempty
set of links. We denote the set of reward communication situations with
such reward functions by RCS^+_{CA}.

THEOREM 11.4 There exists a component efficient allocation rule γ on
RCS^+_{CA} that is anonymous on the set of strongly stable networks and has
the property that for every (N, r, L) ∈ RCS^+_{CA} the set of strongly stable
networks is nonempty and contained in the set of optimal networks.
To prove this theorem, Dutta and Mutuswami (1997) construct an
elaborate allocation rule, which we will not reproduce here. They also
provide an implementation result in terms of weak stability, but that
requires the additional restriction on the set of reward functions that
there exists an optimal network with the property that each of its com-
ponents has a player who is directly connected to all other players in
the component. This property is satisfied in the connections model in
example 11.2 and in the co-author model that we saw in example 11.1.
We conclude this section with the remark that Dutta and Jackson
(2000) take the study of optimality and stability to the setting of directed
communication networks.

11.3 DYNAMIC MODELS OF NETWORK


FORMATION
In this section we briefly address some dynamic models of network
formation in which players are not forward looking. We are not aware
of any general results for these models. Rather, papers in this area seem
to focus on specific parametric models. We will not cover all of these,
but zoom in on just a few.

Watts (2001) studies a model that starts with a set of players who
are unconnected to each other. Over time, pairs of players meet and
get an opportunity to form the link between them and to sever any of
their existing links. It takes the consent of both players to form the
link between them, but each of them can unilaterally break any of his
links. Agents are myopic in the sense that they base their decisions
to break and/or sever links on the immediate jmpact that this has on
their payoffs. The agents are not forward loolting and do not take into
account that the addition or deletion of links may influence the decisions
to be made by other players in the future. The payoff of a player i in a
network (N, L) is

ui(L) = L 5t (i,j) - clLil,


jECi(L)\i

where δ ∈ (0, 1) is the rate of decay of information every time it is sent
over a link, t(i, j) denotes the length of the shortest path connecting
players i and j, and c is a cost per player per link.^{32} This is a connections
model like in example 4.16, only now we are assuming that every player
has to pay a cost c for every one of his links. In this setting, a network
is called stable if it is a rest point of the link-formation process. Hence,
a network (N, L) is stable if the following conditions are met.

(i) u_i(L) ≥ u_i(L\ij) for all ij ∈ L;

(ii) For all ij ∉ L and any L' ⊆ L_i and any L'' ⊆ L_j it holds that if
u_i((L ∪ ij)\(L' ∪ L'')) > u_i(L), then u_j((L ∪ ij)\(L' ∪ L'')) < u_j(L).
This notion of stability is an extension of pairwise stability because pairs
of players can simultaneously form and/or sever links. Watts (2001)
analyzes what networks will be formed in the model described above.
She distinguishes between three cases. If δ - c > δ² > 0, the cost

32Note that the cost of a link is 2c in this model, whereas it was c in the models in chapter
8.

of a direct connection is such that players prefer to pay for a direct
connection rather than just get the benefits from an indirect connection.
Then players form a link whenever they are given an opportunity to
do so, and they never break a link. Hence, network (N, L^N) is formed.
This network is also the unique optimal network in this case. If c > δ,
then no links are ever formed because the benefit of a link does not
outweigh its cost, and the empty network results.^{33} This implies that
for c < δ + ((|N| - 2)/2)δ², when the only optimal networks are stars, a non-
optimal network is formed. If 0 < δ - c < δ² and there are at least four
players, then there is a positive probability p* that the formation process
will converge to a star. As the number of players increases to infinity,
p* decreases and converges to 0. Because, for these parameters, the
only optimal networks are stars, this implies that the probability that
an optimal network will be formed decreases as the number of players
increases.
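Readers who wish to experiment with this process can use the following sketch, a simplified simulation of our own rather than Watts' exact meeting protocol: random pairs meet, the pair adds its link if neither loses and at least one gains, and each of the two then severs any own link whose deletion raises his payoff. With the parameters chosen below, which fall in the first case above (δ - c > δ² > 0), the process ends at the complete network.

```python
import random
from itertools import combinations

random.seed(0)
N = list(range(1, 6))            # five players
delta, c = 0.6, 0.2              # delta - c = 0.4 > delta**2 = 0.36: the first case above

def dist(L, i, j):
    frontier, seen, d = {i}, {i}, 0
    while frontier:
        if j in frontier:
            return d
        frontier = {k for f in frontier for l in L if f in l for k in l if k not in seen}
        seen |= frontier
        d += 1
    return None

def u(L, i):
    """Connections payoff with link costs: sum of delta^t(i,j) minus c per link of i."""
    benefit = sum(delta ** dist(L, i, j) for j in N
                  if j != i and dist(L, i, j) is not None)
    return benefit - c * sum(1 for l in L if i in l)

L = set()                                      # start from the empty network
for _ in range(2000):                          # random meetings of pairs of players
    i, j = random.sample(N, 2)
    link = frozenset({i, j})
    if link not in L:                          # form the link if neither loses and one gains
        Lp = L | {link}
        if (u(Lp, i) >= u(L, i) and u(Lp, j) >= u(L, j)
                and (u(Lp, i) > u(L, i) or u(Lp, j) > u(L, j))):
            L = Lp
    for k in (i, j):                           # myopically sever any link that hurts its owner
        for l in [l for l in L if k in l]:
            if u(L - {l}, k) > u(L, k):
                L = L - {l}

print(sorted(map(sorted, L)))  # with these parameters every link is formed and never
                               # severed, so the complete network is printed
```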

Jackson and Watts (1999a) introduce an evolutionary framework for


the study of network formation. They start from a basic model that
is very much like the one used by Watts (2001). In this basic model,
players can form or delete links one at a time, where it takes the consent
of two players to form the link between them, but players can break
links unilaterally. Players behave myopically and base their decisions to
break and/or sever links on the immediate impact that this has on their
payoffs. Starting from any network, this process of link formation and
deletion leads to either a pairwise stable network or induces a cycle of
networks between which the process keeps switching.
To bring in the evolutionary component, Jackson and Watts (1999a)
allow (with a small probability) for unintended mutations, i.e., addition
or deletion of links that cannot be explained from players' myopic behav-
ior. Such an error may be due to exogenous factors unexplained in the
model, or to miscalculations by the players themselves. The possibility
of errors induces a stochastic process of link formation and predictions
can be made about the relative amounts of time that the process will
spend in various networks. Naturally, the process gravitates to pairwise
stable networks and cycles of networks. However, due to errors, the pro-
cess might move away from such rest points of the process. The more
errors it takes to move away from a pairwise stable network or a cycle of
networks, the lower the probability that the process will leave it when
errors occur. The process favors stochastically stable networks, i.e., the

33 In a recent paper, Watts (2000) shows that the players might form links even if c > δ if
they are non-myopic.

ones that are harder to move away from and easier to get to as a result
of errors.
For the formal description of this evolutionary framework, we refer
the reader to Jackson and Watts (1999a). We suffice by illustrating it
in the following example, which is taken from their paper.

EXAMPLE 11.5 In the marriage problem (cf. Gale and Shapley (1962)),
the set of players is divided into a set of men M and a set of women
W. Same-sex marriages and polygamy are not legal, so that networks
(N, L) are allowed only if they satisfy the conditions that for any ij ∈ L
it holds that either i ∈ M and j ∈ W or i ∈ W and j ∈ M, and |L_i| ≤ 1
for all i ∈ M ∪ W. Every person's utility depends only on their partner
in marriage. For simplicity, we adopt the terminology that someone is
his or her own partner if he or she is not married. To study the stability
of allowed networks, it is sufficient to know everyone's preferences over
partners. Each man m_i ∈ M has preferences over his set of possible
partners W ∪ m_i and every woman w_j ∈ W has preferences over her set
of possible partners M ∪ w_j.
We consider an example with just two men, m_1 and m_2, and two
women, w_1 and w_2. The preferences of these men and women are de-
scribed in table 11.1.^{34}

    Player    Preference

    m_1       w_1 ≻ w_2 ≻ m_1
    m_2       w_2 ≻ m_2 ≻ w_1
    w_1       m_1 ≻ w_1 ≻ m_2
    w_2       m_2 ≻ m_1 ≻ w_2

Table 11.1.  Preferences of the players

In figure 11.3 we show all seven allowed networks for this marriage
problem and we draw an arrow pointing from one network to another
if the second network can be obtained from the first one by the addi-
tion or deletion of a link by players who behave myopically. For exam-
ple, we draw an arrow pointing from network (N, {m_2w_1}) to network
(N, {m_1w_2, m_2w_1}) because m_1 and w_2 both prefer being married to
each other to being single. Also, we draw an arrow pointing from net-

34 Recall from chapter 8 that x ≻ y means that x is preferred to y.



work (N, {w_1m_2}) to the empty network (N, ∅) because m_2 prefers being
single to being married to w_1.

Figure 11.3.  Transitions following from myopic behavior

A pairwise stable network is one that has no arrows pointing away
from it, because myopically behaving players do not want to add or delete
any links when they are in such a network. This leaves two pairwise
stable networks, (N, {m_1w_1, m_2w_2}) and (N, {m_1w_2}).
An error in the evolutionary process is an unintended addition or
deletion of a link. Hence, an error occurs when the process moves in
the opposite direction of an arrow. To see which of the two pairwise
stable networks are stochastically stable, we count how many times the
process has to move in the opposite direction of an arrow to get from one
network to the other. It requires one error (the deletion of link m_1w_2) to
move from (N, {m_1w_2}) to (N, {m_1w_1, m_2w_2}), whereas it requires two
errors (the deletion of both links m_1w_1 and m_2w_2) to move the other
way. Because network (N, {m_1w_1, m_2w_2}) is harder to move away from
and easier to get to, this is the unique stochastically stable network. ◊
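The arrows of figure 11.3 can be generated mechanically from table 11.1. The sketch below (our own encoding) enumerates the seven allowed networks, computes the myopic transitions (a link is added if both prospective partners are single and prefer each other to being single; a link is deleted if either partner prefers being single), and prints the networks with no outgoing transition, recovering the two pairwise stable networks identified above.

```python
from itertools import combinations

men, women = ["m1", "m2"], ["w1", "w2"]
# preferences from table 11.1, best first; a player's own name means staying single
pref = {"m1": ["w1", "w2", "m1"], "m2": ["w2", "m2", "w1"],
        "w1": ["m1", "w1", "m2"], "w2": ["m2", "m1", "w2"]}

def rank(i, x):
    return pref[i].index(x)           # lower rank means more preferred

def partner(L, i):
    for l in L:
        if i in l:
            return next(j for j in l if j != i)
    return i                          # unmarried players are their own partner

possible = [frozenset({m, w}) for m in men for w in women]
allowed = [set(c) for k in range(3) for c in combinations(possible, k)
           if all(sum(1 for l in c if i in l) <= 1 for i in men + women)]

def transitions(L):
    """Networks reachable from (N, L) by one myopic addition or deletion of a link."""
    out = []
    for l in L:                       # delete a link if one partner prefers being single
        if any(rank(i, i) < rank(i, partner(L, i)) for i in l):
            out.append(L - {l})
    for l in possible:                # add a link if it is allowed and both players prefer it
        if l not in L and (L | {l}) in allowed:
            i, j = tuple(l)
            if rank(i, j) < rank(i, partner(L, i)) and rank(j, i) < rank(j, partner(L, j)):
                out.append(L | {l})
    return out

for L in allowed:
    if not transitions(L):            # no outgoing arrow: the network is pairwise stable
        print(sorted(map(sorted, L)))
# prints the networks {m1w1, m2w2} and {m1w2}
```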

We conclude this section by pointing the reader to additional papers


that consider dynamic models of network formation, but which we do
not cover in any detail. Bala and Goyal (2000) study the formation of
directed communication networks for a class of reward functions that are
similar to that in the connections model. The most important difference
between this paper and the ones we have discussed is that players can
form links unilaterally. They can access the information of other players

if they are willing to incur the cost of obtaining it. Bala and Goyal (2000)
consider two models, one in which a link that is formed is only useful to
the player who formed it, and another in which such a link is also useful
to the player who is at the receiving end. Because links can be formed or
broken unilaterally, stability of a network requires that no player wants
to form or delete a link. Networks with this property are called Nash
networks. Bala and Goyal (2000) show that Nash networks have simple
architectures and are often optimal. Moreover, in a dynamic framework
these equilibrium networks emerge quite rapidly. Finally, we mention
Jackson and Watts (1999b) and Goyal and Vega-Redondo (1999), who
study the formation of (undirected) networks in a context of coordination
games, and Johnson and Gilles (2000), who study network formation in
a setting where the costs of various links are determined by the distances
between players.
References

Aumann, R. (1959). Acceptable points in general cooperative n-person


games. In Tucker, A. and Luce, R., editors, Contrib'utions to the The-
ory of Games IV, pages 287-324. Princeton University Press, Prince-
ton.

Aumann, R. (1967). A survey of cooperative games without side-


payments. In Shubik, M., editor, Essays in Mathematical Economics,
pages 3-27. Princeton University Press, Princeton.

Aumann, R. (1975). Values of markets with a continuum of traders.


Econometrica, 43:611-646.

Aumann, R. (1985). An axiomatization of the non-transferable utility


value. Econometrica, 53:599-612.

Aumann, R. and Dreze, J. (1974). Cooperative games with coalition


structures. International Journal of Game Theory, 3:217-237.

Aumann, R. and Myerson, R. (1988). Endogenous formation of links


between players and coalitions: an application of the Shapley value.
In Roth, A., editor, The Shapley Value, pages 175-191. Cambridge
University Press, Cambridge, United Kingdom.

Bala, V. and Goyal, S. (2000). A noncooperative model of network


formation. Econometrica, 68: 1181-1229.

Bernheim, B., Peleg, B., and Whinston, M. (1987). Coalition-proof Nash


equilibria I: concepts. Journal of Economic Theory, 42:1-12.

Bilbao, J. (2000). Cooperative Games on Combinatorial Structures.


Kluwer Academic Publishers, Boston.

Bondareva, O. (1963). Certain applications of the methods of linear pro-


gramming to the theory of cooperative games (In Russian). Problemy
Kibernetiki, 10:119-139.

Borm, P., Owen, G., and Tijs, S. (1992). On the position value for
communication situations. SIAM Journal on Discrete Mathematics,
5:305-320.

Borm, P. and Tijs, S. (1992). Strategic claim games corresponding to


an NTU-game. Games and Economic Behavior, 4:58-71.

Calvo, E., Lasaga, J., and van den Nouweland, A. (1999). Values of
games with probabilistic graphs. Mathematical Social Sciences, 37:79-
95.

Casas-Mendez, B. and Prada-Sanchez, J. (2000). Properties of the NTU

Myerson set and the NTU position set. Working paper.

Crawford, V. (1991). An evolutionary interpretation of van Huyck,


Battalio, and Beil's experimental results on coordination. Games and
Economic Behavior, 3:25-59.

Dixit, A. and Nalebuff, B. (1991). Thinking strategically. W.W. Norton


& Company.

Dutta, B. and Jackson, M. (2000). The stability and efficiency of directed


communication networks. Review of Economic Design, 5:251-272.

Dutta, B. and Mutuswami, S. (1997). Stable networks. Journal of


Economic Theory, 76:322-344.
Dutta, B., van den Nouweland, A., and Tijs, S. (1998). Link forma-
tion in cooperative situations. International Journal of Game Theory,
27:245-256.

Feinberg, Y. (1998). An incomplete cooperation structure for a voting


game can be strategically stable. Games and Economic Behavior,
24:2-9.
Gale, D. and Shapley, L. (1962). College admissions and the stability of
marriage. American Mathematical Monthly, 69:9-15.
Garratt, R. and Qin, C. (2000). Potential maximization and coalition
government formation. Working paper.
Goyal, S. and Vega-Redondo, F. (1999). Learning, network formation
and coordination. Working paper.

Hamiache, G. (1999). A value with incomplete information. Games and


Economic Behavior, 26:59-78.
Hart, S. and Mas-Colell, A. (1989). Potential, value and consistency.
Econometrica, 57:589-614.
Iñarra, E. and Usategui, J. (1993). The Shapley value and average convex
games. International Journal of Game Theory, 22:19-23.
Jackson, M. and Watts, A. (1999a). The evolution of social and economic
networks. Working paper.
Jackson, M. and Watts, A. (1999b). On the formation of interaction
networks in social coordination games. Working paper.
Jackson, M. and Wolinsky, A. (1996). A strategic model of social and
economic networks. Journal of Economic Theory, 71:44-74.
Johnson, C. and Gilles, R. (2000). Spatial social networks. Review of
Economic Design, 5:273-299.
Kalai, E. and Samet, D. (1988). Weighted Shapley values. In Roth, A.,
editor, The Shapley Value, pages 83-99. Cambridge University Press,
Cambridge, United Kingdom.
Kern, R. (1985). The Shapley transfer value without zero weights. In-
ternational Journal of Game Theory, 14:73-92.
Marin-Solano, J. and Rafels, C. (1996). Convexity versus average con-
vexity: Potential, PMAS, the Shapley value and simple games. WPE
96/03, University de Barcelona, Barcelona, Spain.
Meca-Martinez, A., Sanchez-Soriano, J., García-Jurado, I., and Tijs,
S. (1998). Strong equilibria in claim games corresponding to convex
games. International Journal of Game Theory, 27:211-217.
Meessen, R. (1988). Communication games (In Dutch). Master's thesis,
Department of Mathematics, University of Nijmegen, The Nether-
lands.
Monderer, D. and Shapley, L. (1996). Potential games. Games and
Economic Behavior, 14:124-143.
Myerson, R. (1977). Graphs and cooperation in games. Mathematics of
Operations Research, 2:225-229.
Myerson, R. (1980). Conference structures and fair allocation rules.
International Journal of Game Theory, 9:169-182.

Myerson, R. (1991). Game Theory: Analysis of Conflict. Harvard Uni-


versity Press, Cambridge, Massachusetts.
Nash, J. (1950a). The bargaining problem. Econometrica, 18:155-162.
Nash, J. (1950b). Equilibrium points in n-person games. Proceedings of
the National Academy of Sciences, 36:48-49.
Owen, G. (1971). Political games. Naval Research Logistics Quarterly,
18:345-355.
Owen, G. (1975). On the core of linear production games. Mathematical
Programming, 9:358-370.
Owen, G. (1986). Values of graph-restricted games. SIAM Journal on
Algebraic and Discrete Mathematics, 7:210-220.
Qin, C. (1996). Endogenous formation of cooperation structures. Journal
of Economic Theory, 69:218-226.
Roth, A. (1980). Values of games without side-payments: some difficul-
ties with the current concept. Econometrica, 48:457-465.
Schmeidler, D. (1969). The nucleolus of a characteristic function game.
SIAM Journal of Applied Mathematics, 17:1163-1170.
Shapley, L. (1953a). Additive and non-additive set-functions. PhD thesis,
Department of Mathematics, Princeton University, Princeton.
Shapley, L. (1953b). A value for n-person games. In Tucker, A. and
Kuhn, H., editors, Contributions to the Theory of Games II, pages
307-317. Princeton University Press, Princeton.
Shapley, L. (1967). On balanced sets and cores. Naval Research Logistics
Quarterly, 14:453-460.
Shapley, L. (1969). Utility comparison and the theory of games. In
La Decision: Aggregation et Dynamique des Ordres de Preference,
pages 251-263. Paris: Editions du Centre National de la Recherche
Scientifique.
Shapley, L. (1971). Cores of convex games. International Journal of
Game Theory, 1:11-26.
Slikker, M. (1999). Link monotonic allocation schemes. CentER Discus-
sion Paper 9906, Tilburg University, Tilburg, The Netherlands.
Slikker, M. (2000a). Decision Making and Cooperation Restrictions. PhD
thesis, Tilburg University, Tilburg, The Netherlands.

Slikker, M. (2000b). Inheritance of properties in communication situa-


tions. International Journal of Game Theory, 29:241-268.
Slikker, M. (2001). Coalition formation and potential games. Games
and Economic Behavior (to appear).
Slikker, M., Dutta, B., van den Nouweland, A., and Tijs, S. (2000a).
Potential maximizers and network formation. Mathematical Social
Sciences, 39:55-70.
Slikker, M., Gilles, R., Norde, H., and Tijs, S. (2000b). Directed commu-
nication networks. CentER Discussion Paper 2000-84, Tilburg Uni-
versity, Tilburg, The Netherlands.
Slikker, M. and Norde, H. (2000). Incomplete stable structures in sym-
metric convex games. CentER Discussion Paper 2000-97, Tilburg Uni-
versity, Tilburg, The Netherlands.
Slikker, M. and van den Nouweland, A. (2000a). Communication situ-
ations with asymmetric players. Mathematical Methods of Operations
Research, 52:39-56.
Slikker, M. and van den Nouweland, A. (2000b). Network formation with
costs for establishing links. Review of Economic Design, Springer-
Verlag, 5:333-362.
Slikker, M. and van den Nouweland, A. (2001). A one-stage model of
link formation and payoff division. Games and Economic Behavior
(to appear).
Sprumont, Y. (1990). Population monotonic allocation schemes for co-
operative games with transferable utilities. Games and Economic Be-
havior, 2:378-394.
Ui, T. (2000a). A Shapley value representation of potential games.
Games and Economic Behavior, 31:121-135.
Ui, T. (2000b). Robust equilibria of potential games. Working paper.
van Damme, E. (1991). Stability and perfection of Nash equilibria.
Springer-Verlag, Berlin.
van Damme, E., Feltkamp, V., Hurkens, S., and Strausz, R. (1994).
Politieke machtsverhoudingen (In Dutch). Economisch Statistische
Berichten, 79:482-486.
van den Nouweland, A. (1993). Games and Graphs in Economic Situa-
tions. PhD thesis, Tilburg University, Tilburg, The Netherlands.

van den Nouweland, A. and Borm, P. (1991). On the convexity of com-


munication games. International Journal of Game Theory, 19:421-
430.
van den Nouweland, A., Borm, P., and Tijs, S. (1992). Allocation rules
for hypergraph communication situations. International Journal of
Game Theory, 20:255-268.

van Huyck, J., Battalio, R., and Beil, R. (1990). Tacit coordination
games, strategic uncertainty, and coordination failure. American Eco-
nomic Review, 35:347-359.
Watts, A. (2000). Non-myopic formation of circle networks, Working
paper.
Watts, A. (2001). A dynamic model of network formation. Games and
Economic Behavior (to appear).

Winter, E. (1992). The consistency and potential for values of games


with coalition structure. Games and Economic Behavior, 4:132-144.
Notations

|T|                 the number of elements of the set T
R ⊆ T               R is a subset of T
R ⊂ T               R is a subset of T and R is not equal to T
2^N                 the set of all subsets of N
Γ^c(N, v)           a game of network formation in strategic form with costs c
                    for establishing links
                    a link and claim game, i.e. a one-stage strategic-form game
                    of network formation
Γ^{nf}(N, v, γ)     a game of network formation in strategic form
Δ^c(N, v, σ)        a game of network formation in extensive form with costs c
                    for establishing links
Δ^{nf}(N, v, γ, σ)  a game of network formation in extensive form
λ_T(v)              unanimity coefficient of coalition T in coalitional game
                    (N, v)
μ                   the Myerson value
                    the cost-extended Myerson value
                    the c-extended Myerson value
π                   the position value
Φ                   the Shapley value
Φ^{AD}              the value of Aumann and Dreze
Φ^w                 the w-Shapley value, i.e., a weighted Shapley value
CS                  the set of all communication situations
CS                  a class of communication situations, i.e., a subset of CS
CS_v                the set of all communication situations with underlying
                    coalitional game (N, v)
CS^N                the set of all communication situations with player set N
CS^N_0              the set of all communication situations with player set N
                    and a zero-normalized game (N, v)
CS^{N*}_0           the set of all communication situations with player set N,
                    a zero-normalized game (N, v), and a cycle-free network
                    (N, L)
                    the set of all communication situations with an underlying
                    game that assigns a non-negative value to each coalition

DCS a set of directed communication situations


DCSN,A the set of all directed communication situations with player set
N, a directed reward function on A, and a directed
communication network in A
the indicator vector with eT eT
= 1 if i E T and = 0 if i rf- S
a class of games with coalition structures
the set of all games with coalition structures with underlying
game (N, v)
1-lCS a class of hypergraph communication situations
Hcsli the set of all hypergraph communication situations with player
set N and a zero-normalized game (N, v)
LN the set of all links between players in N
N the set of natural numbers
PCS a class of probabilistic communication situations
pes;; the set of all probabilistic communication situations with
underlying game (N, v)
the set of real numbers
the set of vectors whose coordinates are indexed by the elements
of N
R+ the set of non-negative real numbers
R++ the set of positive real numbers
R ++
N
the set of vectors in RN whose coordinates are all positive
RCS             a class of reward communication situations
RCS^N_r         the set of all reward communication situations with player set N
                and underlying reward function r
                the set of all reward communication situations with player set N
                and an underlying reward function that is component additive
RCS_{ca}        the set of all reward communication situations with an underlying
                reward function that is component additive
RCS^+_{ca}      the set of all reward communication situations with a component
                additive reward function that assigns a positive value to every
                nonempty set of links
r^v             the characteristic function of the link game associated with
                communication situation (N,v,L)
                the set of coalitional games with player set N
u_T             the characteristic function of a unanimity game
v^L             the characteristic function of the network-restricted game
                associated with communication situation (N,v,L)
                the characteristic function of the cost-extended network-restricted
                game associated with cost-extended communication situation
                (N,v,L,c)
                the characteristic function of the coalitional game associated with
                reward communication situation (N,r,L)
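Several of the symbols above are linked by standard identities, recalled here only as a brief orientation sketch in conventional notation (Chapter 1 contains the formal definitions used throughout the book):

\[
u_T(S) = \begin{cases} 1 & \text{if } T \subseteq S, \\ 0 & \text{otherwise,} \end{cases}
\qquad
\lambda_T(v) = \sum_{S \subseteq T} (-1)^{|T|-|S|}\, v(S),
\]
\[
v = \sum_{\emptyset \neq T \subseteq N} \lambda_T(v)\, u_T,
\qquad
\Phi_i(N,v) = \sum_{T \subseteq N:\; i \in T} \frac{\lambda_T(v)}{|T|} \quad \text{for all } i \in N.
\]

Here u_T is the unanimity game of coalition T, λ_T(v) its unanimity coefficient in the game (N,v), and Φ the Shapley value.
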
Index

additivity, 11, 38 associated consistency, 49


allocation, 5
efficient, 5 balanced contributions, 37
individually rational, 5 component restricted, 93
allocation rule, 8, 29, 91, 97, 103, 117 balanced map, 6
a-directed fair, 127
additive, 11, 38 carrier, 69
anonymous, 269 characteristic function, 4
associated consistency, 49 clique, 68
balanced contributions, 37 maximal, 68
component decomposable, 31 coalition, 3
component efficient, 32, 93, 104, connected, 14
117, 127 internally connected, 14
component restricted balanced con- coalition structure, 90
tributions, 93 coalitional game, 4, see also game
directed fair, 128 communication situation, 21
efficient, 11 cost-extended, 194
fair, 33, 104, 117 directed, 126
improvement property, 175 hypergraph, 97
independence of irrelevant players, link anonymous, 46
50 NTU, 109
linear, 50 player anonymous, 39
link anonymous, 46 probabilistic, 101
link monotonic, 177 reward, 117
player anonymous, 39 component, 13, 96
positive, 50 component decomposability, 31
proportional links allocation rule, component efficiency, 32, 93, 104, 117, 127
176 component restricted balanced contribu-
strong superfluous link property, 40 tions, 93
superfluous link property, 47 comprehensive, 107
superfluous player property, 40 conference, 95
symmetric, 11 connected hull, 19
weak link symmetry, 175 cooperative game, 4
weighted fair, 273 core, 5
zero-player property, 12 credible threat, 139
allocation scheme, 81 cycle, 15, 96, 129
population monotonic, 82 cycle property, 129
Shapley, 81
anonymity, 269 degree, 68
arc, 125 directed communication relation, 125

efficiency, 11 strongly superfluous, 40


efficient, 5 superfluous, 47
error, 277 link anonymity, 46
link monotonicity, 177
fairness, 33, 104, 117
a-directed, 127 marginal contribution, 10
directed, 128 middleman, 218
weighted, 273 mutation, 277
Myerson set, 110
game Myerson value, 29, 97, 118
average convex, 62, 76 cost-extended, 194
balanced, 7, 56 weighted, 176
coalitional game, 4 myopic, 276
conference game, 99
convex, 58 Nash equilibrium, 143, 203, 218
cooperative game, 4 coalition-proof, 149, 206, 229
extensive form, 135 strong, 147, 223
hypergraph-restricted, 97 subgame-perfect, 137, 198
λ-link game, 110 undominated, 145, 205
λ-transfer game, 108 network, 13
link and claim game, 216 essentially complete, 184
link game, 26, 87 communication network, 13
monotonic, 5
complete, 15, 73
network game, 119 connected, 15
network-formation game cycle-complete, 15
in extensive form, 154 cycle-free, 15
in strategic form, 174 directed communication network,
125
network-restricted game, 22
cost-extended, 194 empty, 15, 73
network-restricted NTU game, optimal, 266
pairwise stable, 266
109
probabilistic network, 101
probabilistic, 102
stable, 276
nontransferable utility game, 4
star, 15
strategic form, 140
stochastically stable, 277
subgame, 4
strongly stable, 272
superadditive, 4, 54
weakly stable, 272
symmetric, 196
wheel, 15
totally balanced, 57
nontransferable utility, 4
transferable utility game, 4
NTU game, 4
unanimity game, 9
nucleolus, 8
zero-normalized, 27
game tree, 135
pairwise stability, 266
game with a coalition structure, 91
Pareto boundary, 107
path, 13, 96, 125
hierarchical classes property, 130 payoff vector, 5
hypergraph, 95 player
cycle-free, 96 connected, 13, 96, 125
directly connected, 13, 96
implementation, 275 indirectly connected, 13, 96
improvement property, 175 superfluous, 40
imputation set, 5, 220 symmetric, 11
independence of irrelevant players, 50 zero player, 11
initiator, 125 player anonymity, 39
isomorphic, 167 PMAS, 82
position set, 110
linearity, 50 position value, 44, 99
link, 13 positivity, 50

potential, 252 strong stability, 272


potential function, 10, 252 strong superfluous link property, 40
link potential function, 122 subgame, 4, 139
player potential function, 122 superfluous link property, 47
potential game, 75, 252 superfluous player property, 40
HM-, 75 symmetry, 11
weighted, 262 system of probabilities, 101
potential maximizer, 254
prisoners' dilemma, 190 transferable utility, 4
proportional links allocation rule, 176 TU game, 4

receiver, 125 unanimity coefficients, 9


reward function, 115
union stable system, 100
component additive, 116, 126
directed, 126
monotonic, 275 value of Aumann and Dreze, 91

Shapley set, 108 weak link symmetry, 175


Shapley value, 9, 10, 73 weak stability, 272
w-, 12 weight vector, 107
weighted, 12 link admissible, 110
star, 15 V-feasible, 108
stochastic stability, 277 weighted majority game, 160
strategy, 140, 141 wheel, 15
dominant, 145
undominated, 145 zero player, 11, 69
weakly dominant, 145 zero-player property, 12
THEORY AND DECISION LIBRARY

SERIES C: GAME THEORY, MATHEMATICAL PROGRAMMING


AND OPERATIONS RESEARCH
Editor: Hans Peters, Maastricht University, The Netherlands

1. B.R. Munier and M.F. Shakun (eds.): Compromise, Negotiation and Group
Decision 1988 ISBN 90-277-2625-6
2. R. Selten: Models of Strategic Rationality 1988 ISBN 90-277-2663-9
3. T. Driessen: Cooperative Games, Solutions and Applications 1988
ISBN 90-277-2729-5
4. P.P. Wakker: Additive Representations of Preferences: A New Foundation of
Decision Analysis 1989 ISBN 0-7923-0050-5
5. A. Rapoport: Experimental Studies of Interactive Decisions 1990
ISBN 0-7923-0685-6
6. K.G. Ramamurthy: Coherent Structures and Simple Games 1990
ISBN 0-7923-0685-6
7. T.E.S. Raghavan, T.S. Ferguson, T. Parthasarathy and O.J. Vrieze (eds.):
Stochastic Games and Related Topics: In Honor of Professor L.S. Shapley 1991
ISBN 0-7923-1016-0
8. J. Abdou and H. Keiding: Effectivity Functions in Social Choice 1991
ISBN 0-7923-1147-7
9. H.J.M. Peters: Axiomatic Bargaining Game Theory 1992
ISBN 0-7923-1873-0
10. D. Butnariu and E.P. Klement: Triangular Norm-Based Measures and Games with
Fuzzy Coalitions 1993 ISBN 0-7923-2369-6
11. R.P. Gilles and P.H.M. Ruys: Imperfections and Behavior in Economic
Organization 1994 ISBN 0-7923-9460-7
12. R.P. Gilles: Economic Exchange and Social Organization. The Edgeworthian
Foundations of General Equilibrium Theory 1996 ISBN 0-7923-4200-3
13. P.J.-J. Herings: Static and Dynamic Aspects of General Disequilibrium Theory
1996 ISBN 0-7923-9813-0
14. F. van Dijk: Social Ties and Economic Performance 1997
ISBN 0-7923-9836-X
15. W. Spanjers: Hierarchically Structured Economies: Models with Bilateral
Exchange Institutions 1997 ISBN 0-7923-4398-0
16. I. Curiel: Cooperative Game Theory and Applications: Cooperative Games Arising
from Combinatorial Optimization Problems 1997 ISBN 0-7923-4476-6
17. O.I. Larichev and H.M. Moshkovich: Verbal Decision Analysis for Unstructured
Problems 1997 ISBN 0-7923-4578-9
18. T. Parthasarathy, B. Dutta, J.A.M. Potters, T.E.S. Raghavan, D. Ray and A. Sen
(eds.): Game Theoretical Applications to Economics and Operations Research
1997 ISBN 0-7923-4712-9
19. A.M.A. van Deemen: Coalition Formation and Social Choice 1997
ISBN 0-7923-4750-1

20. M.O.L. Bacharach, L.-A. Gérard-Varet, P. Mongin and H.S. Shin (eds.):
Epistemic Logic and the Theory of Games and Decisions 1997
ISBN 0-7923-4804-4
21. Z. Yang: Computing Equilibria and Fixed Points 1999
ISBN 0-7923-8395-8
22. G. Owen: Discrete Mathematics and Game Theory 1999
ISBN 0-7923-8511-X
23. I. Garcia-Jurado, F. Patrone and S. Tijs (eds.): Game Practice: Contributions from
Applied Game Theory 1999 ISBN 0-7923-8661-2
24. J. Suijs: Cooperative Decision-making under Risk 1999
ISBN 0-7923-8660-4
25. J. Rosenmüller: Game Theory: Stochastics, Information, Strategies and
Cooperation 2000 ISBN 0-7923-8673-6
26. J. Mario Bilbao: Cooperative Games on Combinatorial Structures 2000
ISBN 0-7923-7782-6
27. M. Slikker and A. van den Nouweland: Social and Economic Networks in
Cooperative Game Theory 2001 ISBN 0-7923-7226-3
