
The Paradox of Choice

Why MORE is less


Yakov Ben-Haim
Technion
Israel Institute of Technology
© Yakov Ben-Haim, 8.4.2013
Contents
1 What Makes a Good Decision? (pdox-choice01.tex)
  1.1 Choosing a College
  1.2 Severe Uncertainty: What is a Good Strategy?
  1.3 Three Types of Probability
  1.4 Severe Uncertainty
  1.5 Response to Severe Uncertainty
  1.6 Does Robust Satisficing Use Probability?
  1.7 Does Robust Satisficing Use Probability? (cont.)
    1.7.1 First Question
    1.7.2 Second Question
    1.7.3 Third Question
2 Robustness and Probability: Proxy Theorems (proxy-very-shrt02.tex)
3 Robust-Satisficing is a Proxy for Probability of Survival (proxy02.tex)
4 Robust Satisficing: Normative or Prescriptive? (pdox-choice01.tex)
1 What Makes a Good Decision?
Main Source:
Barry Schwartz, Yakov Ben-Haim, and Cliff Dacso, 2011, What Makes a Good Decision? Robust Satisficing as a Normative Standard of Rational Behaviour, The Journal for the Theory of Social Behaviour, 41(2): 209-227.
Pre-print to be found on:
http://info-gap.com/content.php?id=23
1.1 Choosing a College
Decision problem:
You've been accepted to several good universities.
You must choose one.
How to go about it?
Multi-attribute utility analysis.
Multi-attribute utility analysis.
Attributes:
Size.
Location.
Reputation.
Quality of physics program.
Electrical Engineering Dept.
Social life.
Housing.
Importance weights for each attribute.
Scale 0 to 1. Vector: $w_i$.
Evaluate each school on each attribute.
Scale 1 to 10. Matrix: $s_{ij}$.
Evaluate each school.
Weighted score of school $j$: $S_j = \sum_i w_i s_{ij}$.
Choose school with highest score.
Is this a good decision strategy?
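A minimal numerical sketch of this weighted-score calculation (the attribute weights and school scores below are hypothetical illustrations, not values from the lecture):

import numpy as np

# Illustrative importance weights w_i (scale 0 to 1) -- hypothetical values.
attributes = ["size", "location", "reputation", "physics", "EE dept", "social life", "housing"]
w = np.array([0.3, 0.5, 0.9, 1.0, 0.7, 0.6, 0.4])

# Illustrative scores s_ij (scale 1 to 10), one column per school -- hypothetical values.
schools = ["Uni 1", "Uni 2", "Uni 3"]
s = np.array([
    [6, 8, 5],   # size
    [7, 5, 9],   # location
    [9, 8, 6],   # reputation
    [8, 9, 7],   # physics
    [7, 6, 8],   # EE dept
    [5, 7, 8],   # social life
    [6, 6, 7],   # housing
])

# Weighted score of school j: S_j = sum_i w_i * s_ij.
S = w @ s
best = schools[int(np.argmax(S))]
print(dict(zip(schools, S.round(2))), "-> choose", best)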
Uncertainties:
Importance weights:
Your future preferences unclear.
Choose pdf $p(w_i)$.
School scores:
Will famous prof leave?
Choose pdf $q(s_{ij})$.
How to estimate these pdfs?
These pdfs are uncertain.
Attributes:
List complete?
Unknown future interests.
Future subjective state:
What will it feel like as a student?
Romantic relationship?
Uncertainty: severe. What is a good strategy?
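To see why this uncertainty is consequential, here is a small sketch continuing the hypothetical numbers above: perturbing the importance weights within modest bounds can change which school scores highest.

import numpy as np

rng = np.random.default_rng(0)
w_hat = np.array([0.3, 0.5, 0.9, 1.0, 0.7, 0.6, 0.4])   # best-estimate weights (hypothetical)
s = np.array([[6, 8, 5], [7, 5, 9], [9, 8, 6], [8, 9, 7],
              [7, 6, 8], [5, 7, 8], [6, 6, 7]])          # scores s_ij (hypothetical)

winners = []
for _ in range(1000):
    # Perturb each weight by up to +/-0.2: a stand-in for "your future preferences are unclear".
    w = np.clip(w_hat + rng.uniform(-0.2, 0.2, size=w_hat.size), 0, 1)
    winners.append(int(np.argmax(w @ s)))

# Fraction of perturbed-weight scenarios won by each school.
print(np.bincount(winners, minlength=3) / 1000)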
1.2 Severe Uncertainty: What is a Good Strategy?
Decision strategy: Optimization.
Make best possible estimates.
Choose best-estimated school.
Decision strategy: 1-reason (lexicographic).
Identify most important attribute.
Choose best school for this attribute.
Use other attributes to break ties.
Decision strategy: Min-max.
Ameliorate worst case.
Decision strategy: Robust satisficing.
Identify satisfactory score.
Choose school with robustness to error.
Decision strategy: Opportune windfalling.
Identify wonderful score.
Choose school opportune to uncertainty.
A common confusion: substantive vs expedient goods.
A good is useful, valuable, desirable . . . . E.g.:
Money, love, food, information, mechanical stability,
economic growth, robustness against surprise . . . .
A substantive good is the goal of the decision.
An expedient good is a tool to achieve a goal.
Some goods are almost always substantive. E.g.:
Money, pleasure, utility, etc.
Some goods are almost always expedient. E.g.:
Safety, robustness, stability, etc.
My substantive may be your expedient. E.g.:
Designer's goal: mechanical stability of vehicle. Substantive.
Passenger's tool: mechanical stability of vehicle. Expedient.
Economist's goal: economic growth. Substantive.
Consumer's tool: economic growth. Expedient.
We may use different strategies for
substantive and expedient goods.
1.3 Three Types of Probability
Sources:
Schwartz, Ben-Haim, Dacso, 2011.
Jonathan Baron, 2008, Thinking and Deciding.
What is probability?
What do the following statements mean?
The probability of throwing 7 with 2 dice is 1/6.
The probability of developing prostate cancer is 0.03.
The probability that Maccabee TA will win the championship is 1/4.
Logical probability.
36 equi-probable outcomes with 2 dice.
6 outcomes sum to 7.
The probability of throwing 7 with 2 dice is 1/6.
The prob of 7 is a logical deduction.
Empirical probability.
Sample 10,000 healthy men, aged 45-70.
300 develop prostate cancer.
300/10,000 = 0.03.
The probability is an empirical frequency.
Personal or subjective probability.
Will Maccabee TA win? . . . I think so.
How sure are you? . . . I'd give 'em a 25% chance.
Personal judgment.
Expression of confidence.
Do these probabilities differ?
Empirical & Personal probability overlap.
Personal: judgment based on experience.
Empirical: collect relevant data.
Requires judgment. E.g.:
Select healthy to test, health now, health history.
Logical & Empirical probability overlap.
Empirical: observed recurrence.
Logical:
Deduction from definition of fair dice.
Observed recurrence: dice tend to be fair.
Logical & Personal probability overlap.
Personal: judgment from experience.
Logical:
Deduction with rules of inference.
Pre-logical choice of rules of inference.
E.g. reason by contradiction.
All three concepts of probability,
Logical: outcome of ideal dice,
Empirical: frequency of prostate cancer,
Personal: Maccabee's chances,
overlap but differ in meaning.
All are mathematically identical: Kolmogorov axioms.
1.4 Severe Uncertainty
We discussed three types of probability.
Is all uncertainty probabilistic?
In a previous lecture¹ we claimed that
ignorance is not probabilistic:
Monty Hall's 3-door problem.
Pascal's wager.
Lewis Carroll's 2-bag riddle.
Keynes' new material.
¹ These examples are discussed in the lecture The Strange World of Human Decision Making, \lectures\decisions\lectures\foibles\foibles01.pdf.
Frank Knight:
Risk, Uncertainty and Profit, 1921.
Risk:
Probabilistic: known pdfs.
Recurrent: lots of data.
Can be insured against.
True uncertainty: ignorance, surprise.
Scientific discovery.
New innovation by competitor.
New consumer preferences.
Political upheaval.
Unforeseen natural disaster.
Figure: Daniel Ellsberg (b. 1931). Figure: Ellsberg's urns (urn R: 49 white and 51 black balls; urn H: 100 balls, white and black in unknown proportion).
Ellsberg paradox:
1st experiment: win on black. Which urn?
2nd experiment: win on white. Which urn?
Most folks stick with R. Sensible?
Types of uncertainty:
Probability vs ambiguity.
Verbal information.
Probabilistic risk vs uncertainty.
Figure: Cuban missile crisis, Oct 1962. U-2 reconnaissance photograph of Soviet nuclear missiles in Cuba. Missile transports and tents for fueling and maintenance are visible. Courtesy of CIA. http://en.wikipedia.org/wiki/Cuban_Missile_Crisis
Strategic uncertainty.
Outcomes depend on 2 or more agents.
Your info about Them is very limited.
Example: Cuban missile crisis.
US detects nuclear missiles in Cuba.
How many missiles? Russian intentions unclear.
What should the US do?
Ignorance is not probabilistic:
Monty Hall's 3-door problem.
Pascal's wager.
Lewis Carroll's 2-bag riddle.
Keynes' new material.
Ignorance is a gap between what we
do know and what we need to know
in order to make a good decision.
1.5 Response to Severe Uncertainty
What decision strategy for severe uncertainty?
Best-model optimization.
1-reason (lexicographic).
Min-max (worst case).
Robust satisficing.
Opportune windfalling.
Paradox of choice (Barry Schwartz):
Under severe uncertainty,
aiming for more achieves less.
Do you agree? Always? Sometimes?
We will explore robust satisficing.
Response to ignorance: 2 questions.
What do you want/need? What is the essential outcome?
How vulnerable are you to surprise?
Response to ignorance: robust-satisficing.
Choose the option which:
Satisfies the requirements.
Is maximally robust to surprise.
Does rob-sat differ from outcome optimization?
Foraging strategies.
Optimizing: Maximize caloric intake.
Robust-satisficing: survive reliably.
Satisfice caloric requirement.
Maximize robustness to uncertainty.
Robust-satisficing survives in evolution.
Financial market strategies.
Optimizing: Maximize revenue.
Robust-satisficing: beat the competition.
Satisfice revenue requirement.
Maximize robustness to uncertainty.
Robust-satisficing survives in competition.
Equity premium puzzle.
Home bias paradox.
Figure: Ellsberg's urns (urn R: 49 white and 51 black balls; urn H: 100 balls, white and black in unknown proportion).
Humans and ambiguity: Ellsberg paradox.
Probabilistic risk vs uncertainty.
Optimizing: Maximize expected utility.
Robust-satisficing: do good enough.
Satisfice expected utility.
Maximize robustness to uncertainty.
Robust-satisficers are happier. (Schwartz)
1.6 Does Robust Satisficing Use Probability?
Source: Schwartz, Ben-Haim, Dacso, 2011.
It might be argued that robust-satisficing implicitly uses probability:
"Choose the more robust option"
is the same as
"Choose the option more likely to succeed."
This argument is false.
The latter requires a probability judgment.
The former requires robustness thinking,
not probabilistic thinking.
The robustness question:
How wrong can our estimates be
and the outcome of the choice still be acceptable?
The answer does not use probability.
(More on this later.)
Example of robust thinking in choosing a university.
Attribute: number of great physicists.
Requirement: at least 2 great physicists.
Uni 1 has 4 great physicists.
Uni 2 has 5 great physicists.
More profs can leave Uni 2 and still satisfy the requirement.
Uni 2 is more robust, all else being the same.
It is not true that
"Choose the more robust option"
is the same as
"Choose the option more likely to succeed."
A judgment of robustness is not a judgment of likelihood.
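A minimal sketch of this robustness comparison, using only the numbers in the example; note that no probabilities appear anywhere:

# Robustness here = how many professors can leave before the requirement is violated.
requirement = 2          # need at least 2 great physicists
uni_physicists = {"Uni 1": 4, "Uni 2": 5}

robustness = {uni: n - requirement for uni, n in uni_physicists.items()}
print(robustness)                           # {'Uni 1': 2, 'Uni 2': 3}
print(max(robustness, key=robustness.get))  # 'Uni 2' is the more robust choice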
A common confusion: description vs prescription.
Describe the behavior of decision makers:
This can always be done probabilistically (or robustly).
Prescribe a decision strategy:
This cannot always be done probabilistically:
A probabilistic strategy requires pdfs.
Robust satisficing does not require pdfs.
Probability and robustness are:
Descriptively interchangeable.
Prescriptively distinct.
Robust satisficing does not use probability.
1.7 Does Robust Satisficing Use Probability? (cont.)
Sources:
Yakov Ben-Haim, 2010, Uncertainty, Probability and Robust Preferences, working paper. http://info-gap.com/content.php?id=23
Yakov Ben-Haim, 2011, Robustness and Locke's Wingless Gentleman. http://decisions-and-info-gaps.blogspot.com/2011/09/robustness-and-lockes-wingless.html
We consider 3 questions:
(1) Does a non-probabilistic robust preference between options
need to assume a uniform probability distribution
on the underlying uncertain events?
Yes? No? Maybe?
(2) Does a robust preference
assume some probability distribution
on the uncertain events?
Yes? No? Maybe?
(3) Are any probabilistic assumptions
needed in order to justify robust preferences?
Yes? No? Maybe?
Conflicting views on uncertainty and probability:
Keynes and Carnap vs Knight and Wald.
Figure: John Maynard Keynes, 1883-1946.
Probability is fundamental to uncertainty.
John Maynard Keynes asserts:⁵ "Part of our knowledge we obtain direct; and part by argument. The Theory of Probability is concerned with that part which we obtain by argument, and it treats of the different degrees in which the results so obtained are conclusive or inconclusive. . . . The method of this treatise has been to regard subjective probability as fundamental and to treat all other relevant conceptions as derivative from this."
⁵ Keynes, John Maynard, 1929, A Treatise on Probability, Macmillan and Co., London, pp. 3, 281-282.
Figure: Rudolf Carnap, 1891-1970.
Probability is fundamental to uncertainty.
Among Rudolf Carnap's⁶ basic conceptions is the contention that "all inductive reasoning, in the wide sense of nondeductive or nondemonstrative reasoning, is reasoning in terms of probability."
⁶ Carnap, Rudolf, 1962, Logical Foundations of Probability, 2nd ed., University of Chicago Press, p. v.
Figure: Frank Knight, 1885-1972.
Probability is not fundamental to uncertainty.
Frank Knight:⁷ "Business decisions . . . deal with situations which are far too unique, generally speaking, for any sort of statistical tabulation to have any value for guidance. The conception of an objectively measurable probability or chance is simply inapplicable. . . . It is this true uncertainty which by preventing the theoretically perfect outworking of the tendencies of competition gives the characteristic form of enterprise to economic organization as a whole and accounts for the peculiar income of the entrepreneur."
⁷ Knight, Frank H., 1921, Risk, Uncertainty and Profit, Houghton Mifflin Co. Re-issued by University of Chicago Press, 1971, pp. 231-232.
Figure: Abraham Wald, 1902-1950.
Probability is not fundamental to uncertainty.
Abraham Wald⁸ wrote that "in most of the applications not even the existence of . . . an a priori probability distribution [on the class of distribution functions] . . . can be postulated, and in those few cases where the existence of an a priori probability distribution . . . may be assumed this distribution is usually unknown."
⁸ Wald, A., 1945, Statistical decision functions which minimize the maximum risk, Annals of Mathematics, 46(2), 265-280, p. 267.
Non-probabilistic robust preferences:
Based on a set-theoretic representation of uncertainty:
Sets, or families of sets, of events.
E.g. min-max (worst case) or info-gap.
Robust preference between options B and C:
B is more robust than C.
Hence B is robustly preferred over C: $B \succ_r C$.
1.7.1 First Question
Does a non-probabilistic robust preference between options need to assume a uniform probability distribution on the underlying uncertain events?
A uniform distribution does not exist if the event space is unbounded.
Thus a uniform distribution cannot underlie the robust preference.
The robust preference may then be unjustified.
Does a non-probabilistic robust preference between options need to assume a uniform probability distribution on the underlying uncertain events?
If a uniform distribution exists:
It justifies the robust preference if it implies that B is more likely than C.
Many other distributions may also imply that B is more likely than C.
E.g. the 3-door problem.
A uniform distribution is not necessary to justify the robust preference.
The robust preference may still require some distribution.
1.7.2 Second Question
Does a robust preference assume some probability distribution on the uncertain events?
Assuming a pdf could justify robust preferences,
but such an assumption is not necessary.
Notation:
$u$ is an underlying uncertainty.
There is a set-model of the uncertain $u$.
$p(u)$ is a pdf on the $u$'s.
$S$ is the set of all pdfs $p(u)$.
$S_B \subseteq S$ is the set of all $p(u)$'s for which B is more likely to succeed than C.
$p_T(u)$ is the true pdf.
Our question:
Is it necessary to assume $p_T \in S_B$ in order to justify the robust preference?
$p_T(u) \in S_B$ implies that the robust preference is justified probabilistically:
$$ p_T \in S_B \;\Longrightarrow\; B \succ_r C \qquad (1) $$
The $\Longrightarrow$ means "justifies" or "warrants" or "motivates".
The converse is not true:
A reasonable DM can accept $\succ_r$ without believing $p_T \in S_B$.
Accept $\succ_r$ if $p_T \in S_B$ is more likely than $p_T \notin S_B$:
$$ \mathrm{Prob}(p_T \in S_B) > \tfrac{1}{2} \;\Longrightarrow\; B \succ_r C \qquad (2) $$
The warrant of the $\Longrightarrow$ in eq.(2) is lower than in eq.(1), but still relevant.
Accept $\succ_r$ if:
$$ \mathrm{Prob}(p_T \in S_B) > \tfrac{1}{2} \;\Longrightarrow\; B \succ_r C \qquad (3) $$
The converse of (3) need not hold.
Accept $\succ_r$ if "$\mathrm{Prob}(p_T \in S_B) > \tfrac{1}{2}$" is more likely than "$\mathrm{Prob}(p_T \in S_B) \le \tfrac{1}{2}$":
$$ \mathrm{Prob}\Big[ \mathrm{Prob}(p_T \in S_B) > \tfrac{1}{2} \Big] > \tfrac{1}{2} \;\Longrightarrow\; B \succ_r C \qquad (4) $$
This regression can go on forever. One could claim:
$$ \mathrm{Prob}\bigg[ \cdots \mathrm{Prob}\Big[ \mathrm{Prob}(p_T \in S_B) > \tfrac{1}{2} \Big] > \tfrac{1}{2} \cdots \bigg] > \tfrac{1}{2} \;\Longrightarrow\; B \succ_r C \qquad (5) $$
The converse is not necessary at any step.
The second question was: Is
$$ p_T \in S_B \qquad (6) $$
necessary to motivate the robust preference
$$ B \succ_r C \; ? \qquad (7) $$
Summary of the answer to Q.2:
An infinity of probability beliefs would justify $\succ_r$ to some extent:
$$ \mathrm{Prob}\bigg[ \cdots \mathrm{Prob}\Big[ \mathrm{Prob}(p_T \in S_B) > \tfrac{1}{2} \Big] > \tfrac{1}{2} \cdots \bigg] > \tfrac{1}{2} \;\Longrightarrow\; B \succ_r C \qquad (8) $$
Higher-order probability statements provide weaker justification.
None of any finite sequence of probability beliefs is necessary to justify $\succ_r$.
At any step, the probability belief can be deferred to the next higher-order belief.
Answer to the second question: Nope.
1.7.3 Third Question
First question:
Does $B \succ_r C$ need to assume a uniform pdf on the underlying uncertain events?
Second question:
Does $B \succ_r C$ assume some probability distribution on the uncertain events?
We answered NO in both cases.
The argument was deductive and conclusive.
Third question:
Is at least one probability belief, from among the infinite sequence of beliefs
$$ \mathrm{Prob}\bigg[ \cdots \mathrm{Prob}\Big[ \mathrm{Prob}(p_T \in S_B) > \tfrac{1}{2} \Big] > \tfrac{1}{2} \cdots \bigg] > \tfrac{1}{2}, \qquad (9) $$
necessary in order to justify $B \succ_r C$?
Answer: plausibly no.
Figure: John Locke, 1632-1704.
John Locke's wingless gentleman:
"If we will disbelieve everything, because we cannot certainly know all things; we shall do muchwhat as wisely as he, who would not use his legs, but sit still and perish, because he had no wings to fly."¹⁰
Our situation:
If we disbelieve all the propositions in eq.(9),
rejecting $\succ_r$ is muchwhat as wise as
Locke's wingless gentleman.
Rejecting $\succ_r$ is epistemic paralysis.
¹⁰ John Locke, 1706, An Essay Concerning Human Understanding, 5th edition, Roger Woolhouse, editor, Penguin Books, 1997, p. 57, I.i.5.
Causes of epistemic paralysis:
Pdfs not known: info-gaps.
Pdfs don't exist: Wald.
Indeterminism: Shackle-Popper.
Implications of epistemic paralysis:
Inaction, injury, retrogression, . . . .
Avoiding probabilistic epistemic paralysis:
Probability is one model of the world.
Other models:
Fuzzy, P-box, min-max, info-gap, . . . .
Expand conceptions of uncertainty.
Answer to Q.3:
$\succ_r$ is an epistemic last resort.
$\succ_r$ is sometimes the best bet.
(More on this later.)
2 Robustness and Probability:
A Short Intuitive Introduction to Proxy Theorems
Is robustness a good bet for survival?
Is robustness a proxy for probability?
Can we maximize survival probability
w/o knowing probability distributions?
Robustness proxies for probability: Examples.
Foraging strategies.
Optimizing: Maximize caloric intake.
Robust-satisficing: survive reliably.
Satisfice caloric requirement.
Maximize robustness to uncertainty.
Robust-satisficing survives in evolution.
Financial market strategies.
Optimizing: Maximize revenue.
Robust-satisficing: beat the competition.
Satisfice revenue requirement.
Maximize robustness to uncertainty.
Robust-satisficing survives in competition.
Equity premium puzzle.
Home bias paradox.
Figure: Daniel Ellsberg (b. 1931); Ellsberg's urns (urn R: 49 white and 51 black balls; urn H: 100 balls, white and black in unknown proportion).
Humans and ambiguity: Ellsberg paradox.
Probabilistic risk vs uncertainty.
Optimizing: Maximize expected utility.
Robust-satisficing: do good enough.
Satisfice expected utility.
Maximize robustness to uncertainty.
Robust-satisficers are happier.
Proxy theorems: Robustness proxies for Probability.
Robust satisficing is (often) a better bet than optimizing.
3 Robust-Satisficing is a Proxy for Probability of Survival
Decision problem:
Decision: $r$.
Uncertainty: $u$.
Loss function: $L(r, u)$.
Satisficing: $L(r, u) \le L_c$.
Info-gap uncertainty model: $U(h, \tilde{u})$, $h \ge 0$.
An unbounded family of nested sets.
Axioms:
Contraction: $U(0, \tilde{u}) = \{\tilde{u}\}$.
Nesting: $h < h' \;\Longrightarrow\; U(h, \tilde{u}) \subseteq U(h', \tilde{u})$.
Robustness: maximum tolerable uncertainty.
$$ \hat{h}(r, L_c) = \max \left\{ h : \left( \max_{u \in U(h, \tilde{u})} L(r, u) \right) \le L_c \right\} \qquad (10) $$
Robust-satisficing preferences:
$$ r \succ_r r' \quad \text{if} \quad \hat{h}(r, L_c) > \hat{h}(r', L_c) \qquad (11) $$
Probability of survival:
$$ P_s(r) = \mathrm{Prob}[L(r, u) \le L_c] = \int_{L(r,u) \le L_c} p(u)\, du \qquad (12) $$
Probabilistic preferences:
$$ r \succ_p r' \quad \text{if} \quad P_s(r) > P_s(r') \qquad (13) $$
Do $\succ_r$ and $\succ_p$ agree?
Does $\hat{h}(r, L_c)$ proxy for $P_s(r)$?
Does $\hat{h}(r, L_c) > \hat{h}(r', L_c)$ imply $P_s(r) \ge P_s(r')$?
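A small numerical sketch of eqs. (10)-(12). The loss function, the interval info-gap model, and the pdf used for $P_s$ are all hypothetical illustrations. In this particular example the robustness ranking and the survival-probability ranking happen to agree; the next slides explain why they need not.

import numpy as np

# Hypothetical interval info-gap model: U(h, u_hat) = {u : |u - u_hat| <= h}.
u_hat = 1.0     # best estimate of the uncertain quantity u
L_c   = 2.0     # greatest acceptable loss (satisficing requirement)

def loss(r, u):
    # Hypothetical loss: decision r trades a sure cost against sensitivity to u.
    return (1.0 - r) + r * u**2

def robustness(r, h_grid=np.linspace(0, 5, 5001)):
    # h_hat(r, L_c): largest h such that the worst loss over U(h, u_hat) still satisfies L <= L_c.
    worst = np.maximum(loss(r, u_hat - h_grid), loss(r, u_hat + h_grid))  # loss is convex in u
    ok = h_grid[worst <= L_c]
    return ok.max() if ok.size else 0.0

def survival_probability(r, n=100_000, seed=0):
    # P_s(r) = Prob[L(r, u) <= L_c] under a pdf that the robust-satisficer never needs to know.
    u = np.random.default_rng(seed).normal(u_hat, 1.0, n)
    return np.mean(loss(r, u) <= L_c)

for r in (0.2, 0.5, 0.8):
    print(f"r={r}: h_hat = {robustness(r):.2f}, P_s = {survival_probability(r):.3f}")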
Why $\succ_r$ and $\succ_p$ Are Not Necessarily Equivalent
Two actions, $r_1$ and $r_2$, with robustnesses:
$$ \hat{h}(r_1, L_c) < \hat{h}(r_2, L_c) \qquad (14) $$
Denote $\hat{h}_i = \hat{h}(r_i, L_c)$ and $U_i = U(\hat{h}_i, \tilde{u})$.
The $U_i$ are nested:
$$ U_1 \subseteq U_2 \quad \text{because} \quad \hat{h}_1 < \hat{h}_2 \qquad (15) $$
The $U_i$ reflect the agent's beliefs.
Survival set:
$$ Q(r, L_c) = \{ u : L(r, u) \le L_c \} \qquad (16) $$
Probability of survival:
$$ P_s(r) = \mathrm{Prob}[Q(r, L_c)] \qquad (17) $$
Survival sets:
Need not be nested.
Do not reflect the agent's beliefs.
Figure: survival sets $Q_1$ and $Q_2$.
The proxy theorem need not hold.
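A toy counterexample (hypothetical losses and pdf) in which the robustness ranking and the survival-probability ranking disagree, because the survival sets are not centred on the estimate:

import numpy as np

# Estimate u_hat = 0; hypothetical info-gap model U(h, 0) = [-h, h].
L_c, u_hat = 1.0, 0.0

def loss(r, u):
    # r = 1: survival set Q(1, L_c) = [-1, 1]       -> robustness h_hat = 1.0
    # r = 2: survival set Q(2, L_c) = [-1.5, 0.5]   -> robustness h_hat = 0.5
    return np.abs(u) if r == 1 else np.maximum(2 * u, -2 * u / 3)

def robustness(r, h_grid=np.linspace(0, 3, 3001)):
    worst = np.maximum(loss(r, u_hat - h_grid), loss(r, u_hat + h_grid))
    ok = h_grid[worst <= L_c]
    return ok.max() if ok.size else 0.0

u = np.random.default_rng(1).normal(-1.0, 0.5, 100_000)   # a "true" pdf centred well below u_hat
for r in (1, 2):
    print(f"r={r}: h_hat = {robustness(r):.2f}, P_s = {np.mean(loss(r, u) <= L_c):.3f}")
# r=1 is the more robust choice, yet r=2 has the higher probability of survival under this pdf:
# the survival sets are not nested around u_hat, so robustness is not a proxy here.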
Systems with Proxy Theorems
(Each theorem with its own fine print.)
Uncertain Bayesian mixing of 2 models.
Decision r.
Outcome depends on which of 2 models,
A or B, is true.
u is uncertain probability that A is true.
L(r, u) is the expected loss.
L
c
is greatest acceptable expected loss.


h(r, L
c
) is robustness for satiscing at L
c
.
P
s
(r) is probability for satiscing at L
c
.


h(r, L
c
) is a proxy for P
s
(r):
Any change in r that increases

h cannot decrease P
s
.
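A hedged sketch (not from the talk) of this two-model case. The per-model losses, the interval info-gap model for u, and the Beta density used for P_s are all illustrative assumptions. With these numbers the robustness ranking and the survival-probability ranking of the three candidate decisions coincide, as the proxy theorem asserts.

# Hedged sketch of the two-model mixing example. Assumed ingredients (mine,
# not from the talk): per-model losses LA(r), LB(r) for three candidate
# decisions, an interval info-gap model for the probability u that model A
# is true, and a Beta density for u when evaluating Ps.
import numpy as np

u_tilde, Lc = 0.6, 5.0
LA = {"r1": 2.0, "r2": 4.0, "r3": 7.0}   # assumed loss if model A is true
LB = {"r1": 9.0, "r2": 6.0, "r3": 3.0}   # assumed loss if model B is true

def expected_loss(r, u):
    # L(r, u) = u * (loss if A true) + (1 - u) * (loss if B true)
    return u * LA[r] + (1.0 - u) * LB[r]

def robustness(r, h_grid=np.linspace(0.0, 1.0, 1001)):
    # Worst case of a function linear in u over [u_tilde - h, u_tilde + h] ∩ [0, 1].
    def worst(h):
        lo, hi = max(0.0, u_tilde - h), min(1.0, u_tilde + h)
        return max(expected_loss(r, lo), expected_loss(r, hi))
    ok = [h for h in h_grid if worst(h) <= Lc]
    return ok[-1] if ok else 0.0

def survival_prob(r, n=200_000, seed=0):
    # Ps(r) = Prob[L(r, u) <= Lc] under an assumed density u ~ Beta(3, 2).
    u = np.random.default_rng(seed).beta(3.0, 2.0, size=n)
    return np.mean(expected_loss(r, u) <= Lc)

for r in LA:
    print(f"{r}: h_hat = {robustness(r):.3f}   Ps = {survival_prob(r):.3f}")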
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/163
Risky and risk-free assets.
r is fraction of budget invested in risky assets.
u is uncertain return from risky assets.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/164
Risky and risk-free assets.
r is fraction of budget invested in risky assets.
u is uncertain return from risky assets.
L(r, u) is the expected return.
L_c is least acceptable expected return.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/165
Risky and risk-free assets.
r is fraction of budget invested in risky assets.
u is uncertain return from risky assets.
L(r, u) is the expected return.
L_c is least acceptable expected return.
ĥ(r, L_c) is robustness for satisficing at L_c.
P_s(r) is probability for satisficing at L_c.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/166
Risky and risk-free assets.
r is fraction of budget invested in risky assets.
u is uncertain return from risky assets.
L(r, u) is the expected return.
L_c is least acceptable expected return.
ĥ(r, L_c) is robustness for satisficing at L_c.
P_s(r) is probability for satisficing at L_c.
ĥ(r, L_c) is a proxy for P_s(r):
Any change in r that increases ĥ cannot decrease P_s.
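A hedged sketch (not from the talk) of the allocation example. Note that L_c is here a least acceptable return, so the satisficing inequality is reversed (return ≥ L_c) and the worst case is the low endpoint of the return interval. The risk-free return, nominal risky return, requirement, and the normal density for u are illustrative assumptions; with these particular numbers both ĥ and P_s increase with r, so the two rankings agree.

# Hedged sketch of the risky/risk-free allocation example. Assumed numbers
# (not from the talk): risk-free return rf, nominal risky return u_tilde,
# required return Rc, and a normal density for the realized risky return u.
import numpy as np

rf, u_tilde, Rc = 0.02, 0.08, 0.04   # assumed returns and requirement
sigma = 0.10                         # assumed std. dev. of the risky return

def expected_return(r, u):
    return (1.0 - r) * rf + r * u

def robustness(r):
    # Worst case over |u - u_tilde| <= h is at u = u_tilde - h (for r > 0),
    # so h_hat solves (1 - r)*rf + r*(u_tilde - h) = Rc.
    if r == 0.0:
        return np.inf if rf >= Rc else 0.0
    h = (expected_return(r, u_tilde) - Rc) / r
    return max(h, 0.0)

def survival_prob(r, n=200_000, seed=0):
    # Ps(r) = Prob[return >= Rc] under the assumed normal density for u.
    u = np.random.default_rng(seed).normal(u_tilde, sigma, size=n)
    return np.mean(expected_return(r, u) >= Rc)

for r in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"r={r:.1f}:  h_hat={robustness(r):.3f}   Ps={survival_prob(r):.3f}")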
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/167
Foraging in an uncertain environment:
Is the grass really greener?
r is duration on current patch.
u is relative productivity of other patch.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/168
Foraging in an uncertain environment:
Is the grass really greener?
r is duration on current patch.
u is relative productivity of other patch.
L(r, u) is the total energy gain.
L_c is the least acceptable energy gain.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/169
Foraging in an uncertain environment:
Is the grass really greener?
r is duration on current patch.
u is relative productivity of other patch.
L(r, u) is the total energy gain.
L_c is the least acceptable energy gain.
ĥ(r, L_c) is robustness for satisficing at L_c.
P_s(r) is probability for satisficing at L_c.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/170
Foraging in an uncertain environment:
Is the grass really greener?
r is duration on current patch.
u is relative productivity of other patch.
L(r, u) is the total energy gain.
L_c is the least acceptable energy gain.
ĥ(r, L_c) is robustness for satisficing at L_c.
P_s(r) is probability for satisficing at L_c.
ĥ(r, L_c) is a proxy for P_s(r):
Any change in r that increases ĥ cannot decrease P_s.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/171
[Figure 24: Ellsberg's Urns. Urn R: 49 White, 51 Black (known). Urn H: 100 balls, White and Black in unknown proportion.]
Ellsberg's paradox.
2 lotteries: 1 risky, 1 ambiguous.
2 experiments: win on black or win on white.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/172
[Figure 25: Ellsberg's Urns (as above).]
Ellsberg's paradox.
2 lotteries: 1 risky, 1 ambiguous.
2 experiments: win on black or win on white.
Choice between risk and ambiguity.
Ellsberg's agents prefer the risky lottery both times.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/173
[Figure 26: Ellsberg's Urns (as above).]
Ellsberg's paradox.
2 lotteries: 1 risky, 1 ambiguous.
2 experiments: win on black or win on white.
Choice between risk and ambiguity.
Ellsberg's agents prefer the risky lottery both times.
Ellsberg's agents are robust-satisficers because:
Robustness is a proxy for probability, so:
Agents maximize probability of adequate return.
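A hedged numerical reading (an added illustration, not from the talk) of the Ellsberg choice as robust-satisficing. Assumptions: the payoff is the probability of winning, the agent requires a win probability of at least Pc, urn R is fully known (49 White, 51 Black), and the unknown composition of urn H is modelled by an interval info-gap around the estimate of 50 black balls in 100. The known urn has unbounded robustness whenever it meets the requirement, so the robust-satisficer picks it in both experiments, as Ellsberg's agents do.

# Hedged sketch of the Ellsberg choice as robust-satisficing. All numbers
# below (Pc, the estimate 0.5 for urn H, the cap H_MAX) are assumptions.
import numpy as np

Pc = 0.3                  # assumed lowest acceptable win probability
H_MAX = 0.5               # largest possible horizon of uncertainty for urn H

def robustness_risky(win_prob):
    # Urn R is fully known: robustness is unbounded if the requirement is met.
    return np.inf if win_prob >= Pc else 0.0

def robustness_ambiguous():
    # Urn H: u = unknown fraction of the winning color, estimate 0.5,
    # info-gap |u - 0.5| <= h; worst-case win probability is 0.5 - h.
    # By symmetry this is the same whether the bet is on black or white.
    return min(max(0.5 - Pc, 0.0), H_MAX)

for color, p_risky in (("black", 0.51), ("white", 0.49)):
    hR = robustness_risky(p_risky)
    hH = robustness_ambiguous()
    pick = "risky urn R" if hR > hH else "ambiguous urn H"
    print(f"bet on {color}: h_hat(R)={hR}, h_hat(H)={hH:.2f}  ->  {pick}")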
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/174
Forecasting uncertain dynamic system.
L_1(u) is true behavior of system.
u is exogenous uncertainty. (Independent of forecast.)
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/175
Forecasting uncertain dynamic system.
L_1(u) is true behavior of system.
u is exogenous uncertainty. (Independent of forecast.)
L_2(r) is forecasting model.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/176
Forecasting uncertain dynamic system.
L_1(u) is true behavior of system.
u is exogenous uncertainty. (Independent of forecast.)
L_2(r) is forecasting model.
L(r, u) = |L_1(u) − L_2(r)| is forecast error.
L_c is greatest acceptable forecast error.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/177
Forecasting uncertain dynamic system.
L_1(u) is true behavior of system.
u is exogenous uncertainty. (Independent of forecast.)
L_2(r) is forecasting model.
L(r, u) = |L_1(u) − L_2(r)| is forecast error.
L_c is greatest acceptable forecast error.
ĥ(r, L_c) is robustness for satisficing at L_c.
P_s(r) is probability for satisficing at L_c.
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/178
Forecasting uncertain dynamic system.
L_1(u) is true behavior of system.
u is exogenous uncertainty. (Independent of forecast.)
L_2(r) is forecasting model.
L(r, u) = |L_1(u) − L_2(r)| is forecast error.
L_c is greatest acceptable forecast error.
ĥ(r, L_c) is robustness for satisficing at L_c.
P_s(r) is probability for satisficing at L_c.
ĥ(r, L_c) is a proxy for P_s(r):
Any change in r that increases ĥ cannot decrease P_s.
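A hedged sketch (not from the talk) of the forecasting example. The exponential system, the interval info-gap on the uncertain growth rate u, the error tolerance, and the normal density used for P_s are illustrative assumptions; the decision r is the growth rate used in the forecasting model.

# Hedged sketch of the forecasting example. Assumptions (not from the talk):
# the "true" system is exponential growth with uncertain rate u, the forecast
# uses rate r, the info-gap is an interval around the estimated rate, and a
# normal density for u is used when evaluating Ps.
import numpy as np

x0, T = 100.0, 5.0        # assumed initial state and forecast horizon
u_tilde, Lc = 0.10, 30.0  # estimated growth rate; greatest acceptable error

def true_state(u):
    return x0 * np.exp(u * T)

def forecast(r):
    return x0 * np.exp(r * T)

def error(r, u):
    return abs(true_state(u) - forecast(r))   # L(r, u) = |L1(u) - L2(r)|

def robustness(r, h_grid=np.linspace(0.0, 0.5, 5001)):
    # true_state is monotone in u, so the worst error is at an interval endpoint.
    def worst(h):
        return max(error(r, u_tilde - h), error(r, u_tilde + h))
    ok = [h for h in h_grid if worst(h) <= Lc]
    return ok[-1] if ok else 0.0

def survival_prob(r, sigma=0.05, n=200_000, seed=0):
    u = np.random.default_rng(seed).normal(u_tilde, sigma, size=n)
    return np.mean(error(r, u) <= Lc)

for r in (0.08, 0.10, 0.12):
    print(f"r={r:.2f}:  h_hat={robustness(r):.3f}   Ps={survival_prob(r):.3f}")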
\lib\proxy02.tex Robustness and Probability: Proxy Theorems 196/179/179
Recap:
Many systems have the proxy property.
Many systems don't have the proxy property.
How prevalent is the proxy property?
Human: economic competition.
Biological: foraging.
Physical: quantum uncertainty.
lectures\talks\lib\pdox-choice01.tex Robust Satiscing: Normative or Prescriptive? 196/180
4 Robust Satisficing: Normative or Prescriptive?
Main Source:
Barry Schwartz, Yakov Ben-Haim, and Cliff Dacso, 2011,
What Makes a Good Decision? Robust Satisficing as a
Normative Standard of Rational Behaviour, The Journal
for the Theory of Social Behaviour, 41(2): 209-227.
Pre-print to be found on:
http://info-gap.com/content.php?id=23
lectures\talks\lib\pdox-choice01.tex Robust Satiscing: Normative or Prescriptive? 196/181
3 types of decision theories:
Descriptive.
Normative.
Prescriptive.
lectures\talks\lib\pdox-choice01.tex Robust Satiscing: Normative or Prescriptive? 196/182
3 types of decision theories:
Descriptive:
Describe how DMs decide.
Normative.
Prescriptive.
lectures\talks\lib\pdox-choice01.tex Robust Satiscing: Normative or Prescriptive? 196/183
3 types of decision theories:
Descriptive:
Describe how DMs decide.
Normative:
Establish norm, gold standard for decision making.
Prescriptive.
lectures\talks\lib\pdox-choice01.tex Robust Satiscing: Normative or Prescriptive? 196/184
3 types of decision theories:
Descriptive:
Describe how DMs decide.
Normative:
Establish norm, gold standard for decision making.
Prescriptive:
Specify implementable strategies
given human and epistemic limitations.
lectures\talks\lib\pdox-choice01.tex Robust Satiscing: Normative or Prescriptive? 196/185
Simon introduced satisficing (1950s) as a prescriptive compromise.
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/186
Simon introduced satisficing (1950s) as a prescriptive compromise.
Satisficing: satisfying critical requirements.
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/187
Simon introduced satisficing (1950s) as a prescriptive compromise.
Satisficing: satisfying critical requirements.
Utility optimizing was the ideal norm.
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/188
Simon introduced satisficing (1950s) as a prescriptive compromise.
Satisficing: satisfying critical requirements.
Utility optimizing was the ideal norm.
People, animals, and organizations don't have the info and ability to optimize.
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/189
Simon introduced satisficing (1950s) as a prescriptive compromise.
Satisficing: satisfying critical requirements.
Utility optimizing was the ideal norm.
People, animals, and organizations don't have the info and ability to optimize.
If they did, they would optimize:
Moral imperative.
Competitive advantage.
lectures\talks\lib\pdox-choice01.tex Robust Satiscing: Normative or Prescriptive? 196/190
In many situations robust-satisficing is prescriptive and normative: an implementable ideal.
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/191
In many situations robust-satisficing is prescriptive and normative: an implementable ideal.
It may be a good bet.
Enhance survival probability (proxy theorems).
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/192
In many situations robust-satisficing is prescriptive and normative: an implementable ideal.
It may be a good bet.
Enhance survival probability (proxy theorems).
Avoid epistemic paralysis: Locke's wingless gentleman.
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/193
In many situations robust-satisficing is prescriptive and normative: an implementable ideal.
It may be a good bet.
Enhance survival probability (proxy theorems).
Avoid epistemic paralysis: Locke's wingless gentleman.
Resolve dilemma of the flat maximum:
Many equally excellent options.
Rank by robustness.
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/194
In many situations robust-satisficing is prescriptive and normative: an implementable ideal.
It may be a good bet.
Enhance survival probability (proxy theorems).
Avoid epistemic paralysis: Locke's wingless gentleman.
Resolve dilemma of the flat maximum:
Many equally excellent options.
Rank by robustness.
Psychologically healthier for personal decisions:
Avoid search costs.
Avoid regret.
Facilitate decision making (36 jellies).
lectures\talks\lib\pdox-choice01.tex Robust Satisficing: Normative or Prescriptive? 196/195
In many situations robust-satisficing is prescriptive and normative: an implementable ideal.
It may be a good bet.
Enhance survival probability (proxy theorems).
Avoid epistemic paralysis: Locke's wingless gentleman.
Resolve dilemma of the flat maximum:
Many equally excellent options.
Rank by robustness.
Psychologically healthier for personal decisions:
Avoid search costs.
Avoid regret.
Facilitate decision making (36 jellies).
Explains the paradox of choice:
Why aiming at more yields less.
\lectures\talks\lib\pdox-choice01.tex Paradox of Choice 196/196
In Conclusion
Human decision making
under uncertainty
is
Interesting