[Image: A black swan, a member of the species Cygnus atratus, which remained undocumented until the eighteenth century.]
The Black Swan Theory, or "Theory of Black Swan Events", was developed by Nassim Nicholas Taleb to explain: 1) the disproportionate role of high-impact, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology; 2) the non-computability of the probability of consequential rare events using scientific methods (owing to the very nature of small probabilities); and 3) the psychological biases that make people individually and collectively blind to uncertainty and unaware of the massive role of the rare event in historical affairs. Unlike the earlier philosophical "black swan problem", the "Black Swan Theory" (capitalized) refers only to unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences.
Background
Black Swan Events were characterized by Nassim Nicholas Taleb in his 2007 book (revised and
completed in 2010), The Black Swan. Taleb regards almost all major scientific discoveries,
historical events, and artistic accomplishments as "black swans": undirected and unpredicted.
He gives the rise of the Internet, the personal computer, World War I, and the September 11
attacks as examples of Black Swan Events.
The term black swan derives from a Latin expression; its oldest known occurrence is the poet Juvenal's characterization of something as "rara avis in terris nigroque simillima cygno" (6.165).[1] In English, this Latin phrase means "a rare bird in the lands, and very like a black swan." When the phrase was coined, the black swan was presumed not to exist. The importance
of the simile lies in its analogy to the fragility of any system of thought. A set of conclusions is
potentially undone once any of its fundamental postulates is disproven. In this case, the
observation of a single black swan would be the undoing of the phrase's underlying logic, as
well as any reasoning that followed from that underlying logic.
Juvenal's phrase was a common expression in 16th-century London as a statement of impossibility. The London expression derives from the Old World presumption that all swans must be white, because all historical records of swans reported that they had white feathers.[2] In that context, a black swan was impossible, or at least nonexistent. After a Dutch expedition led by explorer Willem de Vlamingh discovered black swans on the Swan River in Western Australia in 1697,[3] the term metamorphosed to connote that a perceived impossibility might later be disproven. Taleb notes that in the 19th century John Stuart Mill used the black swan logical fallacy as a new term to identify falsification.
Specifically, Taleb asserts[4] in the New York Times:
What we call here a Black Swan (and capitalize it) is an event with the following three
attributes.
First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the
past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in
spite of its outlier status, human nature makes us concoct explanations for its occurrence after
the fact, making it explainable and predictable.
I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not
prospective) predictability. A small number of Black Swans explains almost everything in our
world, from the success of ideas and religions, to the dynamics of historical events, to elements
of our own personal lives.
Coping with black swan events
The main idea in Taleb's book is not to attempt to predict Black Swan Events, but to build robustness against negative ones that occur and to be able to exploit positive ones. Taleb contends that banks and trading firms are very vulnerable to hazardous Black Swan Events and are exposed to losses beyond those predicted by their defective models.
Taleb states that a Black Swan Event depends on the observer: using a simple example, what may be a Black Swan surprise for a turkey is not a Black Swan surprise for its butcher. Hence the objective should be to "avoid being the turkey" by identifying areas of vulnerability in order to "turn the Black Swans white".
Identifying a black swan event
Based on the author's criteria:
1. The event is a surprise (to the observer).
2. The event has a major impact.
3. After the fact, the event is rationalized by hindsight, as if it had been expected.
Epistemological approach
Taleb's black swan is different from the earlier philosophical versions of the problem,
specifically in epistemology, as it concerns a phenomenon with specific empirical and statistical
properties, which he calls "the fourth quadrant".[5] Taleb's problem is about epistemic limitations in some parts of the areas covered in decision making. These limitations are twofold: philosophical (mathematical) and empirical (known human epistemic biases). The philosophical problem is the decrease in knowledge when it comes to rare events, as these are not visible in past samples and therefore require a strong a priori, or extrapolating, theory; accordingly, events depend more and more on theories when their probability is small. In the fourth quadrant, knowledge is uncertain and consequences are large, requiring more robustness.
Before Taleb,[6] those who dealt with the notion of the improbable, such as Hume, Mill, and Popper, focused on the problem of induction in logic, specifically that of drawing general conclusions from specific observations. Taleb's Black Swan Event has a central and unique attribute: high impact. His claim is that almost all consequential events in history come from the unexpected, yet humans later convince themselves that these events are explainable in hindsight (hindsight bias).
One problem, labeled the ludic fallacy by Taleb, is the belief that the unstructured randomness
found in life resembles the structured randomness found in games. This stems from the
assumption that the unexpected may be predicted by extrapolating from variations in statistics
based on past observations, especially when these statistics are presumed to represent samples
from a bell-shaped curve. These concerns are often highly relevant in financial markets, where major players use value-at-risk models, which assume normal distributions, although market returns typically have fat-tailed distributions.
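To make this concrete, the following sketch (illustrative figures only; the Student-t distribution is used here as a generic stand-in for fat tails, not a model Taleb endorses) shows how a value-at-risk number computed from a normal fit understates the losses that fat-tailed returns actually produce:

```python
# A minimal sketch, not from Taleb's book: all figures are illustrative
# assumptions. We draw fat-tailed returns, fit a bell curve to them, and
# compare the 99% value-at-risk each view of the data implies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical daily returns from a Student-t distribution with 3 degrees
# of freedom: similar scale to a normal distribution, but fat-tailed.
returns = stats.t.rvs(df=3, scale=0.01, size=100_000, random_state=rng)

# The "ludic" modeler assumes the sample comes from a bell-shaped curve.
mu, sigma = returns.mean(), returns.std()

# 99% one-day VaR: the loss threshold exceeded on 1% of days.
var_normal = -stats.norm.ppf(0.01, loc=mu, scale=sigma)
var_empirical = -np.quantile(returns, 0.01)

print(f"VaR under the normal fit: {var_normal:.4f}")
print(f"VaR in the actual sample: {var_empirical:.4f}")
# The fat-tailed sample loses noticeably more at the 1% level than the
# Gaussian model predicts, and the gap widens further into the tail.
```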
More generally, decision theory, based on a fixed universe or a model of possible outcomes,
ignores and minimizes the effect of events that are "outside the model". For instance, a simple
model of daily stock market returns may include extreme moves such as Black Monday (1987),
but might not model the breakdown of markets following the September 11 attacks of 2001. A
fixed model considers the "known unknowns", but ignores the "unknown unknowns".
Taleb notes that other distributions are not usable with precision, but are often more descriptive, such as the fractal, power law, or scalable distributions, and that awareness of these might help to temper expectations.[7]
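A rough numerical comparison (with parameters assumed purely for illustration) shows why the choice of distribution tempers expectations so strongly:

```python
# A rough illustration with assumed parameters (not from the book):
# Gaussian tail probabilities collapse exponentially fast, while a
# power-law (Pareto) tail decays only polynomially.
from scipy import stats

for x in (2, 4, 8, 16):
    p_gauss = stats.norm.sf(x)            # standard normal, P(X > x)
    p_power = stats.pareto.sf(x, b=1.5)   # Pareto with tail exponent 1.5
    print(f"x = {x:>2}   normal: {p_gauss:.2e}   power law: {p_power:.2e}")
# At x = 8 the Gaussian gives roughly 6e-16 while the power law still
# gives about 4e-2: what the bell curve calls impossible, the scalable
# distribution treats as merely uncommon.
```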
Beyond this, he emphasizes that many events simply are without precedent, undercutting the
basis of this type of reasoning altogether.
Taleb also argues for the use of counterfactual reasoning when considering risk.[8][9]
Taleb's ten principles for a black swan robust world
Taleb enumerates ten principles for building systems that are robust to Black Swan Events:[10]
1. What is fragile should break early, while it is still small. Nothing should ever become too big to fail. Entities are considered "too big to fail" when they are believed to be so central to a macroeconomy that their failure would be disastrous to the economy; on that belief, they become recipients of beneficial financial and economic policies from governments and/or central banks.
2. No socialisation of losses and privatisation of gains. In political discourse, the phrase
"privatizing profits and socializing losses" refers to any instance of speculators
benefitting (privately) from profits, but not taking losses, by pushing the losses onto
society at large, particularly via the government.
3. People who were driving a school bus blindfolded (and crashed it) should never be
given a new bus.
4. Do not let someone making an "incentive" bonus manage a nuclear plant or your
financial risks.
5. Counter-balance complexity with simplicity.
6. Do not give children sticks of dynamite, even if they come with a warning.
7. Only Ponzi schemes should depend on confidence. Governments should never need to
"restore confidence".
8. Do not give an addict more drugs if he has withdrawal pains.
9. Citizens should not depend on financial assets or fallible "expert" advice for their retirement.
10. Make an omelette with the broken eggs.
Survivorship bias: we see the winners and try to "learn" from them, while forgetting the huge number of losers.

Skewed distributions: many real-life phenomena are not 50:50 bets like tossing a coin, but have various unusual and counter-intuitive distributions. An example is a 99:1 bet in which you almost always win, but when you lose, you lose all your savings. People can easily be fooled by statements like "I won this bet 50 times". According to Taleb: "Option sellers, it is said, eat like chickens and [go to the bathroom] like elephants", which is to say, option sellers may earn a steady small income from selling options, but when a disaster happens they lose a fortune.
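A short worked example, with figures invented for illustration, shows how such a bet can be ruinous in expectation while almost always paying off:

```python
# Illustrative figures only (my own, not Taleb's): a "99:1" bet that
# pays $1 with probability 0.99 and loses $150 with probability 0.01.
p_win, gain = 0.99, 1.0
p_lose, loss = 0.01, 150.0

# Negative expectation despite an overwhelming win rate.
expected_value = p_win * gain - p_lose * loss
print(f"Expected value per bet: {expected_value:+.2f}")  # -0.51

# And yet "I won this bet 50 times" is the most likely experience:
print(f"P(50 straight wins): {p_win ** 50:.2f}")         # about 0.61
```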
Taleb distribution
A Taleb distribution is a payoff profile that offers a high probability of a small, steady gain together with a small probability of a very large loss. Taleb distributions pose several fundamental problems, all of which can lead to risk being overlooked:
- The very presence or possibility of adverse events may pose a problem per se, which is ignored when one looks only at the average case: a decision may be good in expectation (in the aggregate, in the long term), but a single rare event may ruin the investor; one is courting disaster.
- Unobserved events: this is Taleb's central contention, which he calls black swans. Because extreme events are rare, they have often not been observed yet, and thus are not included in scenario analysis or stress testing.
- Hard-to-compute expectation: for instance, only buying options (so one can at most lose the premium), not selling them. However, this investment strategy can be associated with consistent losses: long periods where the investor "bleeds" premium costs while awaiting unexpected events which may not occur (see the sketch after this list). Taleb closed down his hedge fund Empirica under circumstances that are still under debate, but it is understood that his fund, while invested in 2001, did not manage to profit from the September 11th attacks, clearly a "Black Swan" event (http://ftalphaville.ft.com/blog/2009/06/03/56579/tavakoli-really-does-have-issueswith-taleb/).

Ways of coping with these problems include:

- Sensitivity analysis: this does not eliminate the risk, but it identifies which assumptions are key to conclusions, and thus merit close scrutiny.
- Stress testing and scenario analysis: widely used in industry, they do not include unforeseen events but emphasize various possibilities and what one stands to lose, so one is not blinded by the absence of losses thus far.
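The premium-bleeding profile of the option-buying strategy can be sketched in a few lines; every figure below (premium, payoff, event probability) is a hypothetical assumption, not a market number:

```python
# A stylized sketch of the "bleed" profile described above; every number
# here is a hypothetical assumption, not a real market figure.
import random

random.seed(1)
premium, payoff, p_event = 1.0, 200.0, 0.005  # fair terms: EV per period = 0

wealth, under_water = 0.0, 0
for period in range(1_000):
    wealth -= premium                 # pay the option premium every period
    if random.random() < p_event:     # rare, unpredictable jump
        wealth += payoff
    under_water += wealth < 0

print(f"Final P&L: {wealth:+.1f}")
print(f"Periods spent below zero: {under_water} of 1000")
# Even with actuarially fair terms, the strategy loses small amounts in
# almost every period and depends entirely on rare events for its payoff.
```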
Decision-making and behavioral biases

Anchoring: the tendency to rely too heavily, or "anchor," on a past reference or on one trait or piece of information when making decisions (also called "insufficient adjustment").
Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink and herd behavior.
Bias blind spot: the tendency to see oneself as less biased than other people.[2]
Choice-supportive bias: the tendency to remember one's choices as better than they actually were.
Confirmation bias: the tendency to search for or interpret information in a way that confirms one's preconceptions.[3]
Congruence bias: the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
Contrast effect: the enhancement or diminishing of a weight or other measurement when compared with a recently observed contrasting object.[4]
Denomination effect: the tendency to spend more money when it is denominated in small amounts (e.g. coins) rather than large amounts (e.g. bills).[5]
Distinction bias: the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.[6]
Endowment effect: "the fact that people often demand much more to give up an object than they would be willing to pay to acquire it".[7]
Ambiguity effect: the tendency to avoid options for which missing information makes the probability seem "unknown".[24]
Attentional bias: the tendency to neglect relevant data when making judgments of a correlation or association.
Authority bias: the tendency to value an ambiguous stimulus (e.g., an art performance) according to the opinion of someone who is seen as an authority on the topic.
Availability heuristic: estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.
Availability cascade: a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").
Base rate neglect (or base rate fallacy): the tendency to base judgments on specifics, ignoring general statistical information.[25]
Belief bias: an effect where someone's evaluation of the logical strength of an argument is biased by the believability of the conclusion.[26]
Clustering illusion: the tendency to see patterns where actually none exist.
Capability bias: the tendency to believe that the closer average performance is to a target, the tighter the distribution of the data set.
Conjunction fallacy: the tendency to assume that specific conditions are more probable than general ones.[27]
Gambler's fallacy: the tendency to think that future probabilities are altered by past events, when in reality they are unchanged; it results from an erroneous conceptualization of the law of large numbers. For example, "I've flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads" (see the sketch after this list).
Hindsight bias: sometimes called the "I-knew-it-all-along" effect, the tendency to see past events as being predictable.[28]
Illusory correlation: inaccurately perceiving a relationship between two events, either because of prejudice or selective processing of information.[29]
Observer-expectancy effect: when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
Optimism bias: the tendency to be over-optimistic about the outcome of planned actions.[30]
Ostrich effect: ignoring an obvious (negative) situation.
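A minimal simulation of a fair coin (parameters chosen only for demonstration) makes the gambler's fallacy entry above concrete:

```python
# A tiny demonstration (simulation of a fair coin, parameters my own):
# after five consecutive heads, the sixth flip is still 50/50.
import random

random.seed(0)
sixth_flips = []
while len(sixth_flips) < 10_000:
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):               # condition on five heads in a row
        sixth_flips.append(flips[5])
print(f"P(heads | five prior heads) = {sum(sixth_flips) / len(sixth_flips):.3f}")
# Prints approximately 0.500: past flips do not change future odds.
```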
Social biases
Egocentric bias: occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would.
Forer effect (also known as the Barnum effect): the tendency to give high accuracy ratings to descriptions of one's personality that supposedly are tailored specifically for oneself, but are in fact vague and general enough to apply to a wide range of people; horoscopes are a common example.
False consensus effect: the tendency for people to overestimate the degree to which others agree with them.[34]
Fundamental attribution error: the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor-observer bias, group attribution error, positivity effect, and negativity effect).[35]
Halo effect: the tendency for a person's positive or negative traits to "spill over" from one area of their personality to another in others' perceptions of them (see also the physical attractiveness stereotype).[36]
Herd instinct: the common tendency to adopt the opinions and follow the behaviors of the majority in order to feel safer and to avoid conflict.