
Summary Microeconomics Articles

Risk Aversion and Incentive Effects Holt/Laury


Empirical data on risk aversion:
- Farmers exhibit a significant amount of risk aversion that tends to increase as payoffs are increased.
- Risk aversion also increases as the prize value is increased. However, this depends on whether you look at buyers or sellers, since subjects tend to put a high selling price on something they own (endowment effect) and a lower buying price on something they do not.

Problem: it is difficult to assess risk behaviour in experiments, because payoffs are often low or the setting is too artificial for people to behave as they would in real situations.

Experiment in the paper: subjects are presented with simple choice tasks that are used to estimate the degree of risk aversion. The question is whether higher payoff levels increase risk aversion.

The low-payoff treatment is based on ten choices between paired lotteries: either option A or option B. Pay-offs for option A (2.00 or 1.60) are less variable than those for B (3.85 or 0.10), so B is the riskier option.

- A risk-seeker chooses B in the first decision, where there is a 10% chance of winning 3.85, even though the expected payoff is 1.17 higher when choosing A.

When the probability of the high payoff outcome increases enough (moving down the table),
a person should switch to option B.

- A risk-neutral person chooses A four times and then switches to B. For the first four decisions the expected payoff is higher for A; after that it is higher for B. Because the person is risk neutral, he cares only about expected payoffs, not about the riskiness of the two options (see the sketch after this list).
- A risk-averse person keeps choosing A longer, up to the last decision, where he switches to B because there is then a 100% chance of winning 3.85.
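
To see where a risk-neutral subject should switch, compare expected payoffs row by row. A minimal sketch, assuming the low-payoff values quoted above (A: 2.00 or 1.60; B: 3.85 or 0.10) and a probability of the high payoff rising from 1/10 to 10/10 across the ten decisions:

```python
# Expected payoffs in the Holt/Laury low-payoff menu.
# Option A pays 2.00 with probability p and 1.60 otherwise;
# Option B pays 3.85 with probability p and 0.10 otherwise.
for k in range(1, 11):
    p = k / 10
    ev_a = p * 2.00 + (1 - p) * 1.60
    ev_b = p * 3.85 + (1 - p) * 0.10
    better = "A" if ev_a > ev_b else "B"
    print(f"decision {k:2d}: EV(A) = {ev_a:.3f}, EV(B) = {ev_b:.3f} -> {better}")

# EV(A) > EV(B) for decisions 1-4 and EV(B) > EV(A) from decision 5 on,
# so a risk-neutral subject chooses A four times and then switches to B.
# At decision 1 the gap is 1.64 - 0.475, roughly the 1.17 quoted above.
```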

The actual experiment consisted of four tasks:


1. Subjects indicate a preference, option A or B, for each of the ten paired lottery choices, with the understanding that one of these decisions will be selected at random and played out.
2. Then the same task is done, but with hypothetical payoffs at 20 times the levels shown in the table (A: 40.00 or 32.00; B: 77.00 or 2.00).
3. This task is also a high-payoff task, but the payoffs were paid in cash.
4. Low money payoffs again, as in the table.

To explore the effect of even larger increases in payoffs, there were additional sessions in which payoffs were scaled up to 50 and 90 times the levels in the table.
In all of the experiments, the majority of subjects chose the safe option when the probability
of the higher payoff was small, and then crossed to B without ever going back to A.
Therefore, the total number of safe A choices is used as an indicator of risk aversion.

Figure: the proportion of A choices for each of the ten decisions. Horizontal axis: decision number. Dashed line: prediction under risk neutrality (the safe option is chosen with probability 1 for the first four decisions, and with probability 0 afterwards).

The series of actual choices lies to the right of the risk-neutral prediction, showing a tendency towards risk aversion.

Figure: results of the 20x real-payoff treatment (20 times the money in the previous table, paid out in cash) = solid line with squares.
- Low real payoffs = solid line with dots
- 50x real = diamonds
- 90x real = triangles
- Risk neutral = dashed

There is a clear increase in risk aversion as all payoffs are scaled up: the higher the payoff (50x or 90x), the more risk aversion.

However, subjects are more risk averse with high real payoff levels than with comparable hypothetical payoffs. This shows that people do not know how they would behave in actual high-stakes choice situations.

Final conclusions:
- Behaviour is slightly more inconsistent under the high hypothetical treatments, but the primary incentive effect is in the payoff levels, not in whether the money is actually paid out.
- With real payoffs, risk aversion increases sharply as payoffs are scaled up; behaviour is largely unaffected when hypothetical payoffs are scaled up.
- The estimated utility function shows (see the sketch below):
- increasing relative risk aversion, which captures the effect of payoff scale on the frequency of safe choices
- decreasing absolute risk aversion, which avoids absurd degrees of risk aversion for high-stakes gambles
- Subjects facing hypothetical choices cannot imagine how they would actually behave under high-incentive conditions.
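
The notes do not give the functional form. Holt and Laury work with a hybrid "power-expo" utility; the sketch below, with illustrative parameter values (an assumption), checks numerically that this form combines decreasing absolute and increasing relative risk aversion:

```python
# Power-expo utility: u(x) = (1 - exp(-a * x**(1 - r))) / a,  with a > 0, 0 < r < 1.
# Absolute risk aversion: ARA(x) = -u''(x)/u'(x) = a*(1-r)*x**(-r) + r/x
# Relative risk aversion: RRA(x) = x * ARA(x)    = a*(1-r)*x**(1-r) + r
a, r = 0.03, 0.27   # illustrative parameter values

def ara(x):
    return a * (1 - r) * x ** (-r) + r / x

def rra(x):
    return x * ara(x)

for x in [1.0, 10.0, 100.0, 1000.0]:
    print(f"x = {x:7.1f}:  ARA = {ara(x):.4f}   RRA = {rra(x):.3f}")
# ARA falls with the stake (no absurd aversion to large gambles),
# while RRA rises (more safe choices as all payoffs are scaled up).
```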
A Perspective on Psychology and Economics Rabin
Criticism of mainstream economics: it is psychologically unrealistic. This has resulted in behavioural economics: efforts to incorporate more realistic notions of human nature into economics. The more realistic the assumptions about economic actors, the better the economics.

Second-wave behavioural economics: moves beyond pointing out problems with current
economic assumptions and on to the task of systematically and formally exploring the
alternatives with the same methods that economists are familiar with.

Normal economics assumes: 100% self-interest, 100% rationality, 100% self-control.

Rational economics: people care about the future, therefore they save, and they are more likely to save the longer their planned retirement is.
Behavioural economics: less-than-100% self-control means that people under-save and over-borrow.

Economic assumptions on human nature:
- People are Bayesian information processors
- People have well-defined and stable preferences
- People maximize their expected utility
- People exponentially discount future well-being
- People are self-interested
- People have preferences over final outcomes, not changes
- People have only an instrumental/functional taste for beliefs and information

Goal of psychological economics: to investigate behaviourally grounded departures from these assumptions that seem economically relevant.

There are different ways in which we can depart from these assumptions:
Preferences
- Reference-based utility: people care a lot about changes (in wealth, consumption, ownership), not solely about absolute levels.
- Non-expected utility: preferences are not linear in probabilities; people treat uncertainty and risk differently from what expected utility assumes.
- Social preferences: people are not 100% self-interested, but care about the well-being of others.

Preferences are not stable, well-defined and time-invariant, so we cannot assume that people maximize their utility correctly.
People mispredict and misremember their own utility. There are identifiable patterns in how people misperceive their own future tastes (they underestimate how much their tastes will change). They also misjudge their experienced well-being from the past, tending to under-emphasize the duration of an episode.
There are also framing and context effects: many decisions are very sensitive to the framing or context of the choice.

Bayesian reasoning/beliefs
People form distorted beliefs. Research on judgement under uncertainty identifies heuristics and biases in forming probabilistic beliefs.
Three other types of departures from the basic assumptions:
a. The way people care about changes rather than final states
b. The way people care about others' well-being rather than solely their own
c. The way people care more about current well-being than about future well-being

a. Humans care about changes, not just absolute levels. Our sense of well-being from our total consumption is not just a function of its level, but also of how that level compares to what we are used to.
The feeling of not having an item depends not just on our taste for the item, but on whether or not we owned that item before (feeling of loss).
Once a new steady state is reached, we tend over time to return to our previous hedonic level. The event of becoming wealthy, not just being wealthy, is a source of satisfaction, and once we get used to the new standard of living, we feel about as happy as we did before, when we were poorer.

Loss aversion: the sensation of loss is very large relative to comparable gains. This is seen in the evaluation of losses and gains of money, where losses trigger stronger emotions, and in the evaluation of losses and gains of time. Endowment effect: the fact that people who have randomly been given an object will instantly value it higher than those who have not been endowed with it.
Misprediction of preferences: people exaggerate how long sensations of gains and losses will last. We think the pain of losing an object or the joy of increasing wealth will last longer than it actually does. Therefore, we over-react to changes. We also over-react because we isolate particular experiences and decisions from each other: a small loss seems bad because we do not take a long-term perspective, in which losses would surely be overwhelmed by gains.

Risk aversion: in standard theory it derives from diminishing marginal utility of wealth (money is less valuable to us if we are wealthy than if we are poor). This assumption is wrong: our attitudes towards risk are driven primarily by attitudes towards changes in wealth levels.
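
A common way to formalize utility over changes with loss aversion is the Kahneman-Tversky value function with a loss weight greater than one; the functional form and parameters below are an illustrative assumption, not something spelled out in this summary:

```python
# Prospect-theory style value function over gains and losses relative to a
# reference point (illustrative parameters; lam > 1 captures loss aversion).
alpha, beta, lam = 0.88, 0.88, 2.25

def value(change):
    if change >= 0:
        return change ** alpha
    return -lam * (-change) ** beta

print(value(100))    # pleasure of gaining 100 (about 57.5)
print(value(-100))   # pain of losing 100 (about -129.4): more than twice as large
```

With these parameters the pain of losing 100 is more than twice the pleasure of gaining 100, which is exactly the asymmetry that the endowment effect and the over-reaction to small losses turn on.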

b. Economic actors are not purely self-interested. People are altruistic: they have a positive concern for others as well as for themselves. This can be general (you care about all others' well-being) or targeted (you care about selected others', e.g. friends', well-being). The more a sacrifice helps somebody, the more likely you are to be willing to make it. People weight others' utility positively in their own utility function. Besides this, people also care about the fairness of the distribution of resources, and they care about intentions and motives and want to reciprocate the good or bad behaviour of others.

Example: Party C chooses between two different allocations for two other anonymous parties, A and B:
- (A, B) = (7.50, 3.75): chosen 50% of the time
- (4.00, 4.00): chosen 50% of the time
C wants to help both parties, but trades off social efficiency against equality.

Party B chooses between two different allocations for an anonymous party A and himself:
- (7.50, 3.75): chosen 40% of the time
- (4.00, 4.00): chosen 60% of the time
B is less willing to choose the option that is good for A but not as good for himself, even though it maximizes total payoffs. This might be because B is self-interested.
Reciprocal action: Party B makes the same choice as in the previous example, but chooses after A has created this choice by rejecting (5.50, 5.50); A chose the unfair split (7.50, 3.75). Now B chooses:
- (7.50, 3.75): chosen 10% of the time
- (4.00, 4.00): chosen 90% of the time
B is less likely to sacrifice his own payoff now that he knows A is selfish.
Players in games behave systematically differently as a function of the previous behaviour of other players: B punishes A in this case.
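
One simple way to capture "weighting others' utility positively" is to let B's utility be his own payoff plus a weight times A's payoff. This linear form and the numbers below are an illustrative assumption, not the article's model:

```python
# B's utility = own payoff + lam * other's payoff (lam = weight on A's well-being).
def prefers_efficient(lam, efficient=(7.50, 3.75), equal=(4.00, 4.00)):
    a_eff, b_eff = efficient
    a_eq, b_eq = equal
    return b_eff + lam * a_eff > b_eq + lam * a_eq

for lam in [0.0, 0.05, 0.10, 0.25]:
    print(lam, prefers_efficient(lam))
# Once lam exceeds roughly 0.07, the extra 3.50 for A outweighs B's own 0.25
# sacrifice, so even mild altruism makes the efficient allocation attractive --
# unless B wants to punish A for having rejected the fair (5.50, 5.50) split.
```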

c. People like to experience pleasant things soon and to delay unpleasant things until later.
Present-biased preferences: a person discounts near-term incremental delays in well-being more severely than she discounts distant-future incremental delays (present > future). We are more averse to delaying today's gratification until tomorrow than we are to delaying the same thing from 90 days to 91 days from now (time inconsistency).

Example: two choices:
- 7 hours of work on April 1, relax on April 2
- relax on April 1, 7.7 hours of work on April 2
If asked on January 1, you will choose the first option, since it involves less work. But if asked on April 1, you might choose the second, because the choice is now between working today and working tomorrow.
We feel that today is different from tomorrow. We do not feel that 90 days from now is different from 91 days from now.
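
Present-biased preferences are usually formalized with quasi-hyperbolic (beta-delta) discounting: today's cost counts fully, while every future period is scaled down by an extra factor beta < 1. This formalization is an assumption of the sketch below (the summary gives no functional form), used to reproduce the April 1 reversal:

```python
# Quasi-hyperbolic discounting: costs today count fully, every future period
# is scaled by beta * delta**t (beta < 1 creates the present bias).
beta, delta = 0.7, 1.0   # delta = 1 to isolate the present-bias effect

def discounted_cost(hours, periods_from_now):
    return hours if periods_from_now == 0 else beta * delta ** periods_from_now * hours

# Asked on January 1 (both dates far away): 7 hours on April 1 looks cheaper.
print(discounted_cost(7, 90), discounted_cost(7.7, 91))   # 4.9 vs 5.39
# Asked on April 1 (work today vs tomorrow): postponing to 7.7 hours looks cheaper.
print(discounted_cost(7, 0), discounted_cost(7.7, 1))     # 7.0 vs 5.39
```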

This behaviour (present-biased preferences) helps explain savings behaviour, credit-card debt, job search by the unemployed, procrastination, smoking and drinking.

Anomalies: Ultimatums, Dictators and Manners Camerer/Thaler


In economics, most behaviour can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences in markets. An empirical result is an anomaly if it is difficult to rationalize, or if implausible assumptions are needed to explain it.

Ultimatum game: two players are given a sum of money. The first player, the Proposer, offers some portion of the money to the second player, the Responder. If the Responder accepts, he gets what is offered and the Proposer gets the rest. If the Responder rejects, both players get nothing.
If both players are income maximizers, and the Proposer knows this, then he should offer a penny (or the smallest unit of currency available) and the Responder should accept. Instead, offers are usually around 30%-40% of the total, and offers of less than 20% are often rejected.
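
The "offer a penny" prediction is just backward induction: a money-maximizing Responder accepts any positive offer, so the Proposer's best response is the smallest acceptable amount. A minimal sketch, where the 30% fairness threshold in the second call is an illustrative assumption:

```python
# Proposer splits a pie of 10.00 in steps of 0.01.
PIE, STEP = 10.00, 0.01

def best_offer(accepts):
    """The Proposer picks, among offers the Responder would accept,
    the one that maximizes her own payoff PIE - offer."""
    offers = [round(i * STEP, 2) for i in range(int(PIE / STEP) + 1)]
    acceptable = [o for o in offers if accepts(o)]
    return max(acceptable, key=lambda o: PIE - o)

print(best_offer(lambda o: o > 0))            # money-maximizing Responder -> 0.01
print(best_offer(lambda o: o >= 0.3 * PIE))   # rejects "unfair" offers below 30% -> 3.00
```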

These results are robust. Different variables change the average offers and acceptance rates significantly, but under no conditions are very small offers made and accepted.
However, this may be due to the small stakes used in experiments (around 10 euros). Is it different when 100 euros are at stake? This experiment has been done. Result: people still rejected offers of 10 euros, and 60% of offers of 30 euros were also rejected. The amount at stake therefore alters behaviour much less than more subtle manipulations.

Another question: does the nationality of subjects make a difference? Israeli Proposers made somewhat lower offers, and Israeli Responders were willing to accept low offers, but in general the modal offers were in the range of 40%-50% regardless of nationality.
If Responders reject small offers because they seem unfair, then their willingness to reject should depend on what they think the Proposers are keeping for themselves. New experiment in which players divide 100 chips. However, the value of a chip is manipulated: it is worth either 10 or 30 cents. Players know their own chip value, but in some conditions do not know the other player's value.
When both players have the same chip value, and this is known to both, we have the usual ultimatum game and offers are around 50%.
If information is asymmetric (the Proposer knows that chips are worth 30 cents to him but only 10 cents to the Responder), an equal division of money requires offering 75% of the chips to the Responder. However, when only the Proposer knows that he has the higher value, he can offer 50% of the chips and still seem fair.
Do Proposers want to be fair or to seem fair? The data suggest that the appearance of fairness is enough: self-interested behaviour is alive and well.
If the Responder knows he has the lower value, there is a high rejection rate: Responders are upset about getting low amounts and Proposers react to that anger. When both players know that the Proposer has the higher value, offers start at 50%, but high rejection rates drive the offers up to 64%.

Dictator game: the first player, the Allocator, decides on an allocation and the second player, the Recipient, must accept it. This can tell us whether Proposers in ultimatum games who offer more than they have to are fair-minded or simply feared having low offers rejected. Result: both mattered. Offers in dictator games are lower than in ultimatum games, but are still positive. About 65% of the subjects kept all the money for themselves. As the social distance between the Allocator and the Recipient grows, the offers shrink.

There is an asymmetric attitude towards fairness: relative comparison matters a lot when I feel unfairly treated, but matters very little when I feel fairly treated. I will accept a 2-euro offer out of a 5-euro pie, but will reject a 2-euro offer out of a 10-euro pie.

Subjects are more likely to accept small offers if they come from a random device than if
they are chosen by a Proposer. People are punishing unfairness, not rejecting inequality.

Fairness equilibrium: agents differentiate between an intentional act of meanness, which they will punish, and an inadvertently mean act, which they will tolerate. Prisoners' Dilemma: the Nash equilibrium is for both players to defect. This is a fairness equilibrium, since both players are punishing the other's uncooperative action. The other fairness equilibrium is cooperate-cooperate: both players are willing to sacrifice something to reward the other player's cooperative act.

Models of learning: subjects have no underlying concern for others' payoffs. Proposers learn to make generous offers because they discover that Responders reject stingy offers.
Responders also learn to accept low offers, but more slowly, since the cost to them of
rejecting a small offer is less than the cost to the Proposer of having small offers turned
down.

Manners: in a dictator game, if the first player has simply been given 10 euros, he will split more evenly than if he feels he has earned the right to the 10 euros. Sharing also depends on the relationship between the two: if the relationship is made less personal, sharing shrinks.

Two different games:
- Competition among Proposers: offers rise, because each Proposer must outbid the others to have an offer accepted.
- Competition among Responders: Responders accept lower offers, because a low offer is still more than the 0 from rejecting.
Altruism in Anonymous Dictator Games Eckel/Grossman
The common explanation for why dictators do not simply offer zero in a dictator game is that participants are motivated by factors in addition to monetary payoffs, such as altruism or fairness.
This paper tests the role of altruism in explaining observed behaviour. The experiment is
designed in such a way that the usual anonymous recipient is now a reputable charity. In the
standard dictator game, the subjects know that the other players are just like themselves.
Now the subjects know the recipient is a charity. This treatment substantially increases
altruistic giving. We can infer that:
- Altruism is a motivating factor in human behaviour in general
- Even under double-anonymous conditions, an increase in the deservingness of a
recipient increases the quantity of donations by experimental subjects

Different research:
- Hoffman, McCabe: the generosity of subjects in ultimatum and dictator games is the result of an experimenter effect: the effect of the experimenter knowing the individual's decision. Altruistic behaviour is then not due to fairness, but rather to a social concern for what others may think and for being held in high regard by others.

In this double-anonymous game, with little motivation for other-regarding behaviour, subjects' behaviour is close to the game-theoretic prediction for a noncooperative, nonrepeated game with selfish, payoff-maximizing players.
For fairness, a donor must obtain some value from his donation. Fairness and altruism require context: the circumstances of the recipient determine what the fair or appropriate charitable action is.

- Forsythe, Horowitz: they compare outcomes of ultimatum and dictator games to test the fairness hypothesis that the distributions of offers are the same in the two games. The SPNE in both games is the same: give zero. Hypothesis: if offers are due solely to the Proposers' concern with fairness, the offers will be the same in the two games.
Result: the outcomes in the two games differ (in the dictator game 22% play as altruists, in the ultimatum game 65% do), so the fairness hypothesis is rejected.
The extent to which an individual engages in altruistic behaviour depends on its opportunity cost. The cost of unfair behaviour for the Proposer is not the same in the two games, because the offer can be rejected in the ultimatum game; in the dictator game there is no risk of rejection.

Eckel and Grossman run two experiments, both dictator games in which the dictator determines the division of 10 euros:
1. Double-anonymous dictator game (as in Hoffman et al.)
2. The anonymous partner is replaced by a charity
Results:
1. 30 of the 48 subjects (62%) chose to keep the full 10 euros, offering 0. The 48 subjects donated a total of 51 euros (10.6% of payoffs).
2. These subjects donated 31% (149 euros) of the payoffs, and only 27% kept the full 10 euros for themselves. Five subjects donated the full 10 euros; none in the anonymous treatment did.

In double-anonymous dictator games, we thus observe a change in donations when subjects are given information about the characteristics of the recipient: altruistic behaviour increases when the recipient is a charity as opposed to an unknown person.
An Experimental Study of the Centipede Game McKelvey/Palfrey
Centipede game: two players alternately get a chance to take the larger portion of a
continually escalating pile of money. As soon as one person takes, the game ends with that
player getting the larger portion of the pile, and the other player getting the smaller portion.
Game theoretic equilibrium: first mover should take the large pile on the first round.

Game: the pot starts at 10.50 euros, divided into a pile of 10 euros and a pile of 0.50 euros. Each time a player passes, both piles are multiplied by 10. The game has a total of six moves (three moves for each player). Nash equilibrium: the first player takes the large pile on the first move.
Actual experiment: only 37 out of 662 games ended with the first player taking the large pile. How this seemingly irrational behaviour can arise is explained through reputation effects and incomplete information.

- A significant amount of learning is taking place (when repeated games are played)
- Even with inexperienced subjects, only a small fraction of their behaviour is
unaccounted for by a simple game-theoretic equilibrium model in which beliefs are
accurate.

Reputation effect: believing that the other player is behaving like an altruist or not.
Experienced subjects exhibit a pattern of cooperation until shortly before the end of the
game, when they start to adopt noncooperative behaviour.

The actual experiment starts with a pot of 0.50, divided into 0.40 and 0.10. Each time a player passes, both piles are multiplied by two. There are four-move and six-move versions (two and three moves per player; figures 1 and 2).
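
Backward induction on the four-move version can be checked mechanically. The sketch below starts from the 0.40/0.10 piles and doubles them after every pass; the terminal payoffs after a final pass (one more doubling, with the large pile going to player 1) are an assumption, since the notes do not reproduce the game tree:

```python
# Four-move centipede: piles start at (0.40, 0.10) and double after each pass.
# At each node the mover can TAKE (get the large pile, opponent the small one)
# or PASS.  If the last mover passes, the piles double once more and player 1
# gets the large pile (assumed terminal rule).

def solve(node=0, large=0.40, small=0.10, n_moves=4):
    mover = node % 2                       # 0 = player 1, 1 = player 2
    take = (large, small) if mover == 0 else (small, large)
    if node == n_moves - 1:
        passed = (2 * large, 2 * small)    # payoffs if the last mover passes
    else:
        passed = solve(node + 1, 2 * large, 2 * small, n_moves)
    # The mover compares her own payoff from taking vs. passing.
    return take if take[mover] >= passed[mover] else passed

print(solve())   # (0.40, 0.10): player 1 should TAKE immediately
```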

People were matched with people they had never played before, to prevent cooperative behaviour.

Result: in 7% of the four-move games and 1% of the six-move games, the first mover chose TAKE on the first round. There is a consistent pattern in all of the sessions: the probability of TAKE increases as we get closer to the last move.
Another result: there are differences between earlier and later plays of the game; players in later games behave more rationally, since they have gained experience with the game.

If we look at the individual level, there are several subjects who PASS at every opportunity they have. This suggests altruism, because an obvious way to rationalize their behaviour is to assume that they have a utility function that is increasing in the sum of both players' payoffs, rather than a selfish utility function that depends only on that player's own payoff.

Another result: the subjects' behaviour is inconsistent with the use of a single pure strategy throughout all the games they played. Example: subject #6 in session #1 chooses PASS at the last node of the first game, but TAKES at the first opportunity a few games later. Rational play cannot account for some sequences of plays: a player might first act as an altruist but later play rationally.
An Introduction to Applicable Game Theory Gibbons
Game-theoretic models allow economists to study the implications of rationality, self-interest
and equilibrium.
Complete information: there is no private information: the timing, feasible moves and
payoffs of the game are all common knowledge.

Static games with complete information


Two-player, simultaneous-move game. Timing:
1. Player 1 chooses an action a1 from a set of feasible actions A1. Simultaneously,
player 2 chooses an action a2 from a set of feasible actions A2.
2. After the players choose their actions, they receive payoffs.

Rather than asking how one should play the game, first ask how one should not play the
game.
Player 1 has two actions: [Up, Down]. Player 2 has
three: [Left, Middle, Right].
For player 2, Right is dominated by playing Middle.
No matter what player 1 does, Middle is always
better. If P1 knows that P2 is rational, then P1 can
eliminate Right from P2's action space.

Now Down is dominated by Up for P1. No matter what P2 now does, Up is always better than Down. So if P1 is rational (and P1 knows that P2 is rational), then Down is eliminated.

If P2 knows that P1 is rational, and P2 knows that P1 knows that P2 is rational, then P2 can eliminate Down. Now Middle dominates Left (2 > 0).

Equilibrium: (Up, Middle).

The process of solving this game is iterated elimination of dominated strategies: repeatedly asking how one should not play the game.
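
Iterated elimination can also be written as a small routine. The notes do not reproduce the payoff matrix, so the numbers below (Up: (1,0), (1,2), (0,1) against Left/Middle/Right; Down: (0,3), (0,1), (2,0)) are an assumption taken from the standard version of this example:

```python
# Iterated elimination of strictly dominated strategies in a bimatrix game.
# payoffs[(row, col)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("Up",   "Left"): (1, 0), ("Up",   "Middle"): (1, 2), ("Up",   "Right"): (0, 1),
    ("Down", "Left"): (0, 3), ("Down", "Middle"): (0, 1), ("Down", "Right"): (2, 0),
}
rows, cols = ["Up", "Down"], ["Left", "Middle", "Right"]

def dominated(strats, opp_strats, own_index, payoff):
    """Strategies strictly dominated by some other remaining strategy."""
    out = set()
    for s in strats:
        for t in strats:
            if t != s and all(payoff(t, o)[own_index] > payoff(s, o)[own_index]
                              for o in opp_strats):
                out.add(s)
    return out

changed = True
while changed:
    changed = False
    for s in dominated(cols, rows, 1, lambda c, r: payoffs[(r, c)]):
        cols.remove(s); changed = True
    for s in dominated(rows, cols, 0, lambda r, c: payoffs[(r, c)]):
        rows.remove(s); changed = True

print(rows, cols)   # ['Up'] ['Middle'] -- only (Up, Middle) survives
```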

To find a Nash equilibrium, instead of asking what the solution of a given game is, we ask which outcomes cannot be the solution.
Each player's predicted strategy must be that player's best response to the predicted strategies of the other players (self-enforcing, strategically stable): no single player wants to deviate. Such strategies form a Nash equilibrium.
In the previous game, only one outcome passes this test; in every other outcome, at least one player has an incentive to deviate. Sometimes the Nash equilibrium is efficient (it yields the highest payoffs in the game for both players); often it is not, as in the Prisoners' Dilemma.

In this game, (L1, L2) is the Nash equilibrium. However, it is not the efficient outcome (which is (R1, R2)).
The Dating Game has multiple Nash equilibria. Chris and Pat have dinner together. Pat has to buy the wine and Chris the food, but they cannot communicate. Both prefer Red wine with Steak and White wine with Chicken, but Chris prefers the white combination and Pat the red one. There are two pure-strategy equilibria and no obvious way of deciding between them; this calls for mixed strategies.

Matching pennies: there is no pair of strategies satisfying the mutual-best-response definition of Nash equilibrium, because each player wants to outguess the other. The solution to this game involves uncertainty about what the players will do: we need mixed strategies. Before, pure strategies were enough, but here they do not work.
Mixed strategy: a probability distribution over some or all of a player's pure strategies.
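
In matching pennies the only equilibrium is in mixed strategies: each player randomizes so that the opponent is exactly indifferent between her pure strategies. A minimal check, assuming the usual +1/-1 payoffs (the matrix is not reproduced in these notes):

```python
# Matching pennies: player 1 wins (+1) if the pennies match, player 2 wins if
# they differ.  If player 1 plays Heads with probability p, player 2's expected
# payoff is 1 - 2p from Heads and 2p - 1 from Tails; indifference needs p = 1/2.
def p2_expected(p, p2_choice):
    if p2_choice == "Heads":
        return p * (-1) + (1 - p) * (+1)   # match -> player 2 loses; mismatch -> wins
    return p * (+1) + (1 - p) * (-1)

for p in [0.3, 0.5, 0.7]:
    print(p, p2_expected(p, "Heads"), p2_expected(p, "Tails"))
# Only at p = 0.5 is player 2 indifferent, so only then is there no profitable
# pure deviation; by symmetry player 2 must also mix 50/50.
```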

Dynamic games with complete information


Two-player, sequential-move games. Timing:
1. Player 1 chooses an action a1 from a set of feasible actions A1.
2. Player 2 observes 1s choice and then chooses an action a2 from a set of feasible
actions A2.
Example: Stackelberg.
The new solution concept: backward induction.
In many dynamic games there are many Nash equilibria, some of which depend on
noncredible threats: threats that the threatener would not want to carry out, but will not have to
carry out if the threat is believed.
In this game, P1 chooses either to Trust or Not Trust P2. If he chooses Not Trust, the game ends. If he chooses Trust, P2 chooses to Honour or Betray.

Solution: by working backward through the game tree. If P2 gets to move (i.e., if P1 chooses Trust), then P2 will choose Betray (2 > 1). Knowing this, P1 compares a payoff of 0 (Not Trust) with -1 (Trust followed by Betray), and chooses Not Trust (0 > -1).

In some games there are several Nash equilibria, some of which rely on noncredible threats. The backward-induction solution, however, is always a Nash equilibrium that does not rely on noncredible threats.
Example: a Nash equilibrium that relies on a noncredible threat. The backward-induction solution is for P2 to choose R (if given the move) and for P1 to choose R. But according to the matrix form, there are two equilibria: [R, R] and [L, L]. The second one relies on the noncredible threat by P2 to play L rather than R if given the move. If P1 believes this threat, he will play L.
Subgame-perfect Nash equilibrium: a refinement of Nash equilibrium. To be subgame-perfect, the players' strategies must first be a Nash equilibrium; the refinement then rules out Nash equilibria that rely on non-credible threats.
Subgame: the piece of an original game that remains to be played beginning at any point at which the complete history of the play of the game thus far is common knowledge.
A Nash equilibrium is subgame-perfect if the players' strategies constitute a Nash equilibrium in every subgame.

When people interact over time, threats and promises concerning future behaviour may influence current behaviour: repeated games.
Go back to the trust game:
Player 1: in period 1, play Trust; afterwards, play Trust if all moves in the previous periods have been Trust and Honour, otherwise play Not Trust.
Player 2: if given the move this period, play Honour if all moves in all previous periods have been Trust and Honour; otherwise, play Betray.
Now there is cooperation. But P2 has an incentive to defect: within a single period he is better off playing Betray. If P1 anticipates this, he will play Not Trust, so both players gain if cooperation can be sustained.
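
Whether these trigger strategies actually sustain cooperation depends on how much P2 values the future. A rough sketch, assuming the stage payoffs quoted above for P2 (Honour 1, Betray 2, Not Trust 0) and a per-period discount factor delta, which the notes do not use explicitly:

```python
# Infinitely repeated trust game, P2's side of the grim-trigger calculation.
# Cooperate forever: 1 each period.  Betray once: 2 today, then Not Trust (0) forever.
def cooperation_sustainable(delta, honour=1.0, betray=2.0, punish=0.0):
    coop_value = honour / (1 - delta)
    deviate_value = betray + delta * punish / (1 - delta)
    return coop_value >= deviate_value

for d in [0.3, 0.5, 0.8]:
    print(d, cooperation_sustainable(d))
# With these payoffs cooperation is sustainable once delta >= 0.5,
# i.e. once 1 / (1 - delta) >= 2.
```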

Static games with incomplete information


Bayesian games: games with incomplete information. Players' payoff functions are not common knowledge; at least one player is uncertain about another player's payoff function.
Example: a sealed-bid auction, where each bidder knows his own valuation for the good but does not know any other bidder's valuation.
In the previous Dating Game there were two pure-strategy Nash equilibria: [Steak, Red] and [Chicken, White]. Now the players do not know each other's payoffs.

We construct a pure-strategy Bayesian Nash equilibrium of this incomplete-information version of the Dating Game, in which Chris chooses Steak if tc exceeds a critical value c (and Chicken otherwise), and Pat chooses White if tp exceeds a critical value p (and Red otherwise).
Chris then chooses Steak with probability (x - c)/x and Pat chooses White with probability (x - p)/x.
As the incomplete information disappears (as x approaches zero), the players' behaviour in this game approaches their behaviour in the mixed-strategy Nash equilibrium of the complete-information game (Chris chooses Steak with probability 2/3 and Chicken with probability 1/3, and similarly for Pat).

Harsanyi: a mixed-strategy Nash equilibrium in a game with complete information can (almost always) be interpreted as a pure-strategy Bayesian Nash equilibrium in a closely related game with a little incomplete information.
Behavioural Biases in Auctions: An Experimental Study Dodonova
There is substantial evidence that people do not always behave according to expected utility theory and that they are subject to behavioural biases.
Kahneman, Tversky: people are loss averse and subject to the endowment effect (the main building blocks of prospect theory).
Regret theory: people who made certain decisions in the past may feel regret if these decisions turn out to be wrong, even if they appeared correct given the information available at the time.

When bidders do not have dominant strategies, they may feel regret when their bidding decisions turn out not to be the best (after the end of the auction), even when they were the best decisions based on the information available at the time of bidding:
- Loser regret: a bidder in a first-price sealed-bid auction who values the object at 1,000 and bids only 500 feels regret when she learns the winning bid was only 501. This makes bidders bid more aggressively.
- Winner regret: the same bidder who bids 500 feels regret for the money left on the table if she wins and learns that the second-highest bid was only 100. This makes bidders shade their bids in first-price sealed-bid auctions.
Bidders in descending Dutch auctions are subject to loser regret but are free from winner regret, because there is no second-highest bid and the winner never finds out the price at which she could have won.
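
The two regret notions can be made concrete with the numbers from the example above; the simple difference formulas below are an illustrative sketch, not the paper's formal regret terms:

```python
# Regret in a first-price sealed-bid auction, using the numbers from the text:
# the bidder values the object at 1000 and bids 500.
value, my_bid = 1000, 500

def loser_regret(winning_bid):
    # Lost, but would happily have paid just above the winning bid.
    return max(0, value - winning_bid) if winning_bid >= my_bid else 0

def winner_regret(second_highest_bid):
    # Won, but left money on the table relative to the runner-up bid.
    return max(0, my_bid - second_highest_bid)

print(loser_regret(501))    # 499: painful near-miss -> bid more aggressively
print(winner_regret(100))   # 400: money left on the table -> shade bids
```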

Hypothesis 1: The final price in Dutch auctions should be higher than in otherwise identical first-price sealed-bid auctions.

Bidders' optimal strategies in both second-price sealed-bid and first-price open-bid English auctions are to bid their true value of the object, and such optimal bidding should result in the same outcome (same final price and same winner). However, the different auction formats may produce different attitudes toward the object. Bidders in English auctions may feel a quasi-endowment effect: if bidder A bids 1,000, she may feel as if she already owns the object and has paid 1,000 for it. Therefore, if she is overbid, she considers raising her bid: a person prefers to pay extra in order to keep the object, even if she would not buy the same object at that price in a simple buying decision.

Hypothesis 2: Bidders in open-bid English auctions will bid more aggressively than in otherwise identical second-price sealed-bid auctions, where bidders face only a one-time bidding decision and are not subject to the quasi-endowment effect.

Hypothesis 3: Sellers' revealed valuation of the object should be higher than the average revealed bidders' valuation (loss aversion, endowment effect).

Hypothesis 1: Rejected. The average price in first-price sealed-bid auctions is higher than in Dutch auctions (but the difference is insignificant).

Hypothesis 2: Accepted. Bidders in English auctions bid more aggressively than in second-price sealed-bid auctions: on average, a bidder in an English auction bids more than she would for the same item in a second-price sealed-bid auction.

Hypothesis 3: Accepted. Sellers' valuations are significantly higher than bidders' valuations; selling an item is experienced as a loss.
