
High risk events

Black swan theory


http://en.wikipedia.org/wiki/Black_swan_theory
Retrieved 11 October 2010
From Wikipedia, the free encyclopedia

A black swan, a member of the species Cygnus atratus, which remained undocumented until the
eighteenth century
The Black Swan Theory or "Theory of Black Swan Events" was developed by Nassim
Nicholas Taleb to explain: 1) the disproportionate role of high-impact, hard to predict, and rare
events that are beyond the realm of normal expectations in history, science, finance and
technology, 2) the non-computability of the probability of the consequential rare events using
scientific methods (owing to their very nature of small probabilities) and 3) the psychological
biases that make people individually and collectively blind to uncertainty and unaware of the
massive role of the rare event in historical affairs. Unlike the earlier philosophical "black swan
problem", the "Black Swan Theory" (capitalized) refers only to unexpected events of large
magnitude and consequence and their dominant role in history. Such events, considered extreme
outliers, collectively play vastly larger roles than regular occurrences.
Background
Black Swan Events were characterized by Nassim Nicholas Taleb in his 2007 book (revised and
completed in 2010), The Black Swan. Taleb regards almost all major scientific discoveries,
historical events, and artistic accomplishments as "black swans" – undirected and unpredicted.
He gives the rise of the Internet, the personal computer, World War I, and the September 11
attacks as examples of Black Swan Events.
The term black swan derives from a Latin expression; its oldest known reference comes from the poet
Juvenal's characterization of something being "rara avis in terris nigroque simillima cygno"
(6.165).[1] In English, this Latin phrase means "a rare bird in the lands, and very like a black
swan." When the phrase was coined, the black swan was presumed not to exist. The importance
of the simile lies in its analogy to the fragility of any system of thought. A set of conclusions is
potentially undone once any of its fundamental postulates is disproven. In this case, the
observation of a single black swan would be the undoing of the phrase's underlying logic, as
well as any reasoning that followed from that underlying logic.
Juvenal's phrase was a common expression in 16th century London as a statement of
impossibility. The London expression derives from the Old World presumption that all swans

must be white because all historical records of swans reported that they had white feathers.[2] In
that context, a black swan was impossible or at least nonexistent. After a Dutch expedition led
by explorer Willem de Vlamingh discovered black swans on the Swan River in Western
Australia in 1697,[3] the term metamorphosed to connote that a perceived impossibility might
later be disproven. Taleb notes that in the 19th century John Stuart Mill used the black swan
logical fallacy as a new term to identify falsification.
Specifically, Taleb asserts[4] in the New York Times:
What we call here a Black Swan (and capitalize it) is an event with the following three
attributes.
First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the
past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in
spite of its outlier status, human nature makes us concoct explanations for its occurrence after
the fact, making it explainable and predictable.
I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not
prospective) predictability. A small number of Black Swans explains almost everything in our
world, from the success of ideas and religions, to the dynamics of historical events, to elements
of our own personal lives.
Coping with black swan events
The main idea in Taleb's book is not to attempt to predict Black Swan Events, but to build
robustness against negative ones that occur and to be able to exploit positive ones. Taleb
contends that banks and trading firms are very vulnerable to hazardous Black Swan Events and
are exposed to losses beyond those predicted by their defective models.
Taleb states that a Black Swan Event depends on the observer – using a simple example, what
may be a Black Swan surprise for a turkey is not a Black Swan surprise for its butcher – hence
the objective should be to "avoid being the turkey" by identifying areas of vulnerability in order
to "turn the Black Swans white".
Identifying a black swan event
Based on the author's criteria:
1. The event is a surprise (to the observer).
2. The event has a major impact.
3. After the fact, the event is rationalized by hindsight, as if it had been expected.
Epistemological approach
Taleb's black swan is different from the earlier philosophical versions of the problem,
specifically in epistemology, as it concerns a phenomenon with specific empirical and statistical
properties which he calls "the fourth quadrant".[5] Taleb's problem is about epistemic limitations
in some parts of the areas covered in decision making. These limitations are twofold:
philosophical (mathematical) and empirical (known human epistemic biases). The philosophical
problem is about the decrease in knowledge when it comes to rare events, as these are not visible
in past samples and therefore require a strong a priori, or what one can call an extrapolating
theory; accordingly, events depend more and more on theories when their probability is small. In
the fourth quadrant, knowledge is uncertain and consequences are large, requiring more
robustness.

Before Taleb,[6] those who dealt with the notion of the improbable, such as Hume, Mill, and
Popper, focused on the problem of induction in logic, specifically, that of drawing general
conclusions from specific observations. Taleb's Black Swan Event has a central and unique
attribute, high impact. His claim is that almost all consequential events in history come from the
unexpected – yet humans later convince themselves that these events are explainable in
hindsight (hindsight bias).
One problem, labeled the ludic fallacy by Taleb, is the belief that the unstructured randomness
found in life resembles the structured randomness found in games. This stems from the
assumption that the unexpected may be predicted by extrapolating from variations in statistics
based on past observations, especially when these statistics are presumed to represent samples
from a bell-shaped curve. These concerns often are highly relevant in financial markets, where
major players use value at risk models, which imply normal distributions, although market
returns typically have fat tail distributions.
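To make the fat-tail point concrete, here is a minimal sketch (an illustration, not material from the article) comparing how likely a "5-sigma" daily loss is under the normal distribution assumed by simple value-at-risk models versus under a fat-tailed Student-t distribution with the same variance. The 1% daily volatility, the 5-sigma threshold, and the choice of three degrees of freedom are all arbitrary assumptions made for the example.

# Illustrative sketch: tail probability of a large daily loss under a normal
# model versus a fat-tailed Student-t model with matched variance.
from scipy import stats
import numpy as np

sigma = 0.01            # assumed 1% daily return volatility (illustrative)
threshold = -5 * sigma  # a "5-sigma" down move

# Normal model (the assumption behind simple value-at-risk calculations)
p_normal = stats.norm(loc=0.0, scale=sigma).cdf(threshold)

# Fat-tailed model: Student-t with nu = 3, rescaled to the same variance
nu = 3
t_scale = sigma * np.sqrt((nu - 2) / nu)   # Var(t) = scale^2 * nu / (nu - 2)
p_fat = stats.t(df=nu, loc=0.0, scale=t_scale).cdf(threshold)

print(f"P(5-sigma loss), normal model     : {p_normal:.2e}")
print(f"P(5-sigma loss), fat-tailed model : {p_fat:.2e}")
print(f"Ratio (fat-tailed / normal)       : {p_fat / p_normal:.0f}x")

Under these assumed numbers the fat-tailed model assigns the extreme loss a probability several thousand times higher than the normal model, which is the sense in which normal-based risk models understate rare events.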
More generally, decision theory, based on a fixed universe or a model of possible outcomes,
ignores and minimizes the effect of events that are "outside the model". For instance, a simple
model of daily stock market returns may include extreme moves such as Black Monday (1987),
but might not model the breakdown of markets following the September 11 attacks of 2001. A
fixed model considers the "known unknowns", but ignores the "unknown unknowns".
Taleb notes that other distributions are not usable with precision, but often are more descriptive,
such as the fractal, power law, or scalable distributions and that awareness of these might help
to temper expectations.[7]
Beyond this, he emphasizes that many events simply are without precedent, undercutting the
basis of this type of reasoning altogether.
Taleb also argues for the use of counterfactual reasoning when considering risk.[8][9]
Taleb's ten principles for a black swan robust world
Taleb enumerates ten principles for building systems that are robust to Black Swan Events:[10]
1. What is fragile should break early while it is still small. Nothing should ever become
Too Big to Fail. Entities are considered to be "Too big to fail" by those who believe
those entities are so central to a macroeconomy that their failure will be disastrous to an
economy, and as such believe they should become recipients of beneficial financial and
economic policies from governments and/or central banks.
2. No socialisation of losses and privatisation of gains. In political discourse, the phrase
"privatizing profits and socializing losses" refers to any instance of speculators
benefitting (privately) from profits, but not taking losses, by pushing the losses onto
society at large, particularly via the government.
3. People who were driving a school bus blindfolded (and crashed it) should never be
given a new bus.
4. Do not let someone making an "incentive" bonus manage a nuclear plant or your
financial risks.
5. Counter-balance complexity with simplicity.
6. Do not give children sticks of dynamite, even if they come with a warning.
7. Only Ponzi schemes should depend on confidence. Governments should never need to
"restore confidence".
8. Do not give an addict more drugs if he has withdrawal pains.
9. Citizens should not depend on financial assets or fallible "expert" advice for their
retirement.

10. Make an omelette with the broken eggs.


In addition to these ten principles, Taleb also recommends employing both physical and
functional redundancy in the design of systems. These two steps can be found in the principles
of resilience architecting (Jackson, S., Architecting Resilient Systems, John Wiley & Sons,
Hoboken, NJ, 2010).
Fooled by Randomness
From Wikipedia, the free encyclopedia
Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets is a book
written by Nassim Nicholas Taleb about the fallibility of human knowledge.
Reaction
The book was selected by Fortune as one of the 75 "Smartest Books of All Time."[1]
The book's name, Fooled by Randomness, has also become an idiom in English, used to describe
the situation in which someone sees a pattern where there is only random noise.[citation needed]
Thesis
Taleb sets forth the idea that modern humans are often unaware of the existence of randomness.
They tend to explain random outcomes as non-random.
Human beings:
1. overestimate causality, e.g., they see elephants in the clouds instead of understanding
that they are in fact randomly shaped clouds that appear to our eyes as elephants (or
something else);
2. tend to view the world as more explainable than it really is. So they look for
explanations even when there are none.
Other misperceptions of randomness that are discussed include:

Survivorship bias. We see the winners and try to "learn" from them, while forgetting the
huge number of losers.
Skewed distributions. Many real life phenomena are not 50:50 bets like tossing a coin,
but have various unusual and counter-intuitive distributions. An example of this is a
99:1 bet in which you almost always win, but when you lose, you lose all your savings.
People can easily be fooled by statements like "I won this bet 50 times". According to
Taleb: "Option sellers, it is said, eat like chickens and [go to the bathroom] like
elephants", which is to say, option sellers may earn a steady small income from selling
the options, but when a disaster happens they lose a fortune.
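A short simulation makes the asymmetry of such bets visible. The payoffs and probabilities below (win 1 unit 99% of the time, lose 100 units 1% of the time) are assumed for illustration and are not figures from the book.

# Sketch of the skewed-bet idea: win a small amount almost always, lose a
# large amount rarely. All numbers are assumed for illustration.
import random

random.seed(42)

def play_round():
    # 99% of the time win 1 unit; 1% of the time lose 100 units
    return 1 if random.random() < 0.99 else -100

n_rounds = 10_000
outcomes = [play_round() for _ in range(n_rounds)]
wins = sum(1 for x in outcomes if x > 0)

print(f"Rounds won   : {wins}/{n_rounds} ({wins / n_rounds:.1%})")
print(f"Total P&L    : {sum(outcomes)} units")
print(f"EV per round : {0.99 * 1 + 0.01 * (-100):+.2f} units")

The win rate looks excellent round by round, yet the expected value per round is negative: the rare large losses dominate the many small gains.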

Taleb distribution
From Wikipedia, the free encyclopedia

In economics and finance, a Taleb distribution is a term coined by U.K. economists/journalists
Martin Wolf and John Kay to describe a returns profile that appears at times deceptively low-risk
with steady returns, but periodically experiences catastrophic drawdowns. It does not
describe a statistical probability distribution, and does not have an associated mathematical
formula. The term is meant to refer to an investment returns profile in which there is a high
probability of a small gain, and a small probability of a very large loss, which more than
outweighs the gains. In these situations the expected value is (very much) less than zero, but this
fact is camouflaged by the appearance of low risk and steady returns. It is a combination of
kurtosis risk and skewness risk: overall returns are dominated by extreme events (kurtosis),
which are to the downside (skew). The corresponding situation is also known as the peso
problem.
The term describes dangerous or flawed trading strategies. The Taleb distribution is named for
Nassim Taleb, based on ideas outlined in his Fooled by Randomness.[1] More detailed and
formal discussion of the bets on small probability events is in the academic essay by Taleb,
called "Why Did the Crisis of 2008 Happen?"[2]
Criticism of trading strategies
Pursuing a trading strategy with a Taleb distribution yields a high probability of steady returns
for a time, but with a near certainty of eventual ruin. This is done consciously by some as a
risky trading strategy, while some critics argue that it is done either unconsciously by some,
unaware of the hazards ("innocent fraud"), or consciously by others, particularly in hedge funds.
Risky strategy
If done consciously, with one's own capital or openly disclosed to investors, this is a risky
strategy, but appeals to some: one will want to exit the trade before the rare event happens. This
occurs for instance in a speculative bubble, where one purchases an asset in the expectation that
it will likely go up, but may plummet, and hopes to sell the asset before the bubble bursts.
This has also been referred to as "picking up pennies in front of a steamroller".[3]
"Innocent fraud"
John Kay has likened securities trading to bad driving, as both are characterized by Taleb
distributions.[4] Drivers can make many small gains in time by taking risks such as overtaking
on the inside and tailgating; however, they are then at risk of experiencing a very large loss in
the form of a serious traffic accident. Kay has described Taleb distributions as the basis of the
carry trade and has claimed that these, along with mark-to-market accounting and other practices,
constitute part of what J. K. Galbraith has called "innocent fraud".[5]
Moral hazard
Some critics of the hedge fund industry claim that the compensation structure generates high fees
for investment strategies that follow a Taleb distribution, creating moral hazard.[6] In such a
scenario, the fund can claim high asset-management and performance fees until it suddenly
'blows up', losing the investor significant sums of money and wiping out all the gains generated
for the investor in previous periods; however, the fund manager keeps all fees earned prior to
the losses being incurred and ends up enriching himself in the long run because he does not
pay for his losses.
Risks

Taleb distributions pose several fundamental problems, all possibly leading to risk being
overlooked:

- Presence of extreme adverse events
The very presence or possibility of adverse events may pose a problem per se, which is
ignored by only looking at the average case – a decision may be good in expectation (in
the aggregate, in the long term), but a single rare event may ruin the investor: one is
courting disaster.

- Unobserved events
This is Taleb's central contention, which he calls black swans: because extreme events
are rare, they have often not been observed yet, and thus are not included in scenario
analysis or stress testing.

- Hard-to-compute expectation
A subtler issue is that expectation is very sensitive to assumptions about probability: a
trade with a $1 gain 99.9% of the time and a $500 loss 0.1% of the time has positive
expected value; if the $500 loss occurs 0.2% of the time it has approximately zero
expected value; and if the $500 loss occurs 0.3% of the time it has negative expected
value (a numerical sketch of this sensitivity follows below). This is exacerbated by the
difficulty of estimating the probability of rare events (in this example one would need
to observe thousands of trials to estimate the probability with confidence), and by the
use of financial leverage: mistaking a small loss for a small gain and magnifying it by
leverage yields a hidden large loss.
More formally, while the risks for a known distribution can be calculated, in practice one does
not know the distribution: one is operating under uncertainty, in economics called Knightian
uncertainty.
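The sensitivity described under "Hard-to-compute expectation" can be checked directly. The following minimal sketch simply recomputes the $1/$500 example from the text for the three loss probabilities mentioned; the code itself is illustrative, not taken from the cited sources.

# Expected value of the $1-gain / $500-loss trade as the loss probability varies.
def expected_value(p_loss, gain=1.0, loss=-500.0):
    # One trade: gain with probability (1 - p_loss), large loss with probability p_loss
    return (1 - p_loss) * gain + p_loss * loss

for p_loss in (0.001, 0.002, 0.003):
    print(f"P(loss) = {p_loss:.1%}: expected value = {expected_value(p_loss):+.3f} per trade")

The output reproduces the text's point: +0.499 at 0.1%, roughly zero at 0.2%, and -0.503 at 0.3%, so a tiny error in the estimated loss probability flips the sign of the expectation.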
Mitigants
A number of mitigants have been proposed, by Taleb and others. These include:
- Not exposing oneself to large losses
For instance, only buying options (so one can at most lose the premium), not selling
them. However, this investment strategy can be associated with consistent losses; long
periods where the investor "bleeds" premium costs, awaiting unexpected events which
may not occur. Taleb closed down his hedge fund Empirica under circumstances that
are still under debate, but it is understood that his fund, while invested in 2001, did not
manage to profit from the September 11th attacks – clearly a "Black Swan" event
(http://ftalphaville.ft.com/blog/2009/06/03/56579/tavakoli-really-does-have-issues-with-taleb/).

- Performing sensitivity analysis on assumptions
This does not eliminate the risk, but it identifies which assumptions are key to the
conclusions and which therefore merit close scrutiny.

- Scenario analysis and stress testing
Widely used in industry, these do not include unforeseen events but emphasize various
possibilities and what one stands to lose, so one is not blinded by the absence of losses
thus far.

- Using non-probabilistic decision techniques
While most classical decision theory is based on probabilistic techniques of expected
value or expected utility, alternatives exist which do not require assumptions about the
probabilities of various outcomes and are thus robust. These include minimax, minimax
regret, and info-gap decision theory (a minimax-regret sketch follows below).
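As an illustration of the last point, the sketch below applies minimax regret, one of the non-probabilistic rules named above, to a toy payoff table. The strategies, scenarios, and payoff numbers are invented for the example; no probabilities are assigned to the scenarios.

# Toy minimax-regret example (all numbers invented for illustration).
# Rows are candidate strategies, columns are possible future scenarios;
# entries are payoffs.
payoffs = {
    "all_stocks": {"boom": 50, "normal": 10, "crash": -60},
    "all_bonds":  {"boom":  5, "normal":  4, "crash":   2},
    "hold_cash":  {"boom":  0, "normal":  0, "crash":   0},
}
scenarios = ["boom", "normal", "crash"]

# Regret = best achievable payoff in that scenario minus the payoff obtained.
best_in_scenario = {s: max(p[s] for p in payoffs.values()) for s in scenarios}
regret = {
    strat: {s: best_in_scenario[s] - p[s] for s in scenarios}
    for strat, p in payoffs.items()
}

# Minimax regret: choose the strategy whose worst-case regret is smallest.
worst_regret = {strat: max(r.values()) for strat, r in regret.items()}
choice = min(worst_regret, key=worst_regret.get)

for strat, wr in worst_regret.items():
    print(f"{strat:11s} worst-case regret = {wr}")
print(f"Minimax-regret choice: {choice}")

No scenario probabilities are needed; the rule only compares each strategy against the best one could have done in each scenario, which is why such techniques are robust to not knowing the distribution.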
List of cognitive biases
A cognitive bias is a pattern of deviation in judgment that occurs in particular situations.
Implicit in the concept of a "pattern of deviation" is a standard of comparison; this may be the
judgment of people outside those particular situations, or may be a set of independently
verifiable facts. The existence of some of these cognitive biases has been verified empirically in
the field of psychology.
Cognitive biases are instances of evolved mental behavior. Some are presumably adaptive, for
example, because they lead to more effective actions in given contexts or enable faster decisions
when faster decisions are of greater value. Others presumably result from a lack of appropriate
mental mechanisms, or from the misapplication of a mechanism that is adaptive under different
circumstances.
Cognitive bias is a general term that is used to describe many distortions in the human mind that
are difficult to eliminate and that lead to perceptual distortion, inaccurate judgment, or illogical
interpretation.[1]
Decision-making and behavioral biases
Many of these biases are studied for how they affect belief formation, business decisions, and
scientific research.

Anchoring – the common human tendency to rely too heavily, or "anchor," on one trait or piece of information when making decisions.
Bandwagon effect – the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink and herd behavior.
Bias blind spot – the tendency to see oneself as less biased than other people.[2]
Choice-supportive bias – the tendency to remember one's choices as better than they actually were.
Confirmation bias – the tendency to search for or interpret information in a way that confirms one's preconceptions.[3]
Congruence bias – the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
Contrast effect – the enhancement or diminishing of a weight or other measurement when compared with a recently observed contrasting object.[4]
Denomination effect – the tendency to spend more money when it is denominated in small amounts (e.g. coins) rather than large amounts (e.g. bills).[5]
Distinction bias – the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.[6]
Endowment effect – "the fact that people often demand much more to give up an object than they would be willing to pay to acquire it".[7]
Experimenter's or Expectation bias – the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.[8]
Extraordinarity bias – the tendency to value an object more than others in the same category as a result of an extraordinarity of that object that does not, in itself, change the value.[citation needed]
Focusing effect – the tendency to place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome.[9]
Framing effect – drawing different conclusions from the same information, depending on how that information is presented.
Hyperbolic discounting – the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, where the tendency increases the closer to the present both payoffs are.[10]
Illusion of control – the tendency to overestimate one's degree of influence over other external events.[11]
Impact bias – the tendency to overestimate the length or the intensity of the impact of future feeling states.[12]
Information bias – the tendency to seek information even when it cannot affect action.[13]
Interloper effect – the tendency to value third party consultation as objective, confirming, and without motive. Also consultation paradox, the conclusion that solutions proposed by existing personnel within an organization are less likely to receive support than those from people recruited for that purpose.
Irrational escalation – the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.
Loss aversion – "the disutility of giving up an object is greater than the utility associated with acquiring it".[14] (See also Sunk cost effects and Endowment effect.)
Mere exposure effect – the tendency to express undue liking for things merely because of familiarity with them.[15]
Money illusion – the tendency to concentrate on the nominal (face) value of money rather than its value in terms of purchasing power.[16]
Moral credential effect – the tendency of a track record of non-prejudice to increase subsequent prejudice.
Negativity bias – the tendency to pay more attention and give more weight to negative than positive experiences or other kinds of information.
Neglect of probability – the tendency to completely disregard probability when making a decision under uncertainty.[17]
Normalcy bias – the refusal to plan for, or react to, a disaster which has never happened before.
Omission bias – the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).[18]
Outcome bias – the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
Planning fallacy – the tendency to underestimate task-completion times.[12]
Post-purchase rationalization – the tendency to persuade oneself through rational argument that a purchase was a good value.
Pseudocertainty effect – the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.[19]
Reactance – the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice.
Restraint bias – the tendency to overestimate one's ability to show restraint in the face of temptation.
Selective perception – the tendency for expectations to affect perception.
Semmelweis reflex – the tendency to reject new evidence that contradicts an established paradigm.[20]
Status quo bias – the tendency to like things to stay relatively the same (see also loss aversion, endowment effect, and system justification).[21][22]
Wishful thinking – the formation of beliefs and the making of decisions according to what is pleasing to imagine instead of by appeal to evidence or rationality.[23]
Zero-risk bias – preference for reducing a small risk to zero over a greater reduction in a larger risk.

Biases in probability and belief


Many of these biases are often studied for how they affect business and economic decisions and
how they affect experimental research.

Ambiguity effect – the tendency to avoid options for which missing information makes the probability seem "unknown."[24]
Anchoring effect – the tendency to rely too heavily, or "anchor," on a past reference or on one trait or piece of information when making decisions (also called "insufficient adjustment").
Attentional bias – the tendency to neglect relevant data when making judgments of a correlation or association.
Authority bias – the tendency to value an ambiguous stimulus (e.g., an art performance) according to the opinion of someone who is seen as an authority on the topic.
Availability heuristic – estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.
Availability cascade – a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").
Base rate neglect or Base rate fallacy – the tendency to base judgments on specifics, ignoring general statistical information.[25]
Belief bias – an effect where someone's evaluation of the logical strength of an argument is biased by the believability of the conclusion.[26]
Clustering illusion – the tendency to see patterns where actually none exist.
Capability bias – the tendency to believe that the closer average performance is to a target, the tighter the distribution of the data set.
Conjunction fallacy – the tendency to assume that specific conditions are more probable than general ones.[27]
Gambler's fallacy – the tendency to think that future probabilities are altered by past events, when in reality they are unchanged. Results from an erroneous conceptualization of the law of large numbers. For example, "I've flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads." (A small simulation illustrating this appears after this list.)
Hindsight bias – sometimes called the "I-knew-it-all-along" effect, the tendency to see past events as being predictable.[28]
Illusory correlation – inaccurately perceiving a relationship between two events, either because of prejudice or selective processing of information.[29]
Observer-expectancy effect – when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
Optimism bias – the tendency to be over-optimistic about the outcome of planned actions.[30]
Ostrich effect – ignoring an obvious (negative) situation.
Overconfidence effect – excessive confidence in one's own answers to questions. For example, for certain types of questions, answers that people rate as "99% certain" turn out to be wrong 40% of the time.[31][32]
Positive outcome bias – the tendency to overestimate the probability of a favorable outcome coming to pass in a given situation (see also wishful thinking, optimism bias, and valence effect).
Pareidolia – a vague and random stimulus (often an image or sound) is perceived as significant, e.g., seeing images of animals or faces in clouds, the man in the moon, and hearing hidden messages on records played in reverse.
Pessimism bias – the tendency for depressed people to be over-pessimistic about the outcome of planned actions.
Primacy effect – the tendency to weigh initial events more than subsequent events.[33]
Recency effect – the tendency to weigh recent events more than earlier events (see also peak-end rule).
Disregard of regression toward the mean – the tendency to expect extreme performance to continue.
Stereotyping – expecting a member of a group to have certain characteristics without having actual information about that individual.
Subadditivity effect – the tendency to judge the probability of the whole to be less than the probabilities of the parts.
Subjective validation – perception that something is true if a subject's belief demands it to be true. Also assigns perceived connections between coincidences.
Well travelled road effect – underestimation of the duration taken to traverse oft-traveled routes and overestimation of the duration taken to traverse less familiar routes.
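As noted under Gambler's fallacy above, a small simulation can illustrate that a fair coin's next flip is unaffected by a preceding streak of heads. The sample size and streak length below are arbitrary choices for the example.

# Check of the gambler's-fallacy claim: after five heads in a row, the next
# flip of a fair coin is still roughly 50/50.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

next_after_streak = [
    flips[i + 5]
    for i in range(len(flips) - 5)
    if all(flips[i:i + 5])  # five heads in a row
]

print(f"Streaks of five heads found : {len(next_after_streak)}")
print(f"P(heads | five prior heads) : {sum(next_after_streak) / len(next_after_streak):.3f}")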

Social biases

Actor–observer bias – the tendency for explanations of other individuals' behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation (see also Fundamental attribution error). However, this is coupled with the opposite tendency for the self, in that explanations for our own behaviors overemphasize the influence of our situation and underemphasize the influence of our own personality.
Dunning–Kruger effect – a two-fold bias. On one hand the lack of metacognitive ability deludes people, who overrate their capabilities. On the other hand, skilled people underrate their abilities, as they assume the others have a similar understanding.[citation needed]
Egocentric bias – occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would.
Forer effect (aka Barnum effect) – the tendency to give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range of people. For example, horoscopes.
False consensus effect – the tendency for people to overestimate the degree to which others agree with them.[34]
Fundamental attribution error – the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor–observer bias, group attribution error, positivity effect, and negativity effect).[35]
Halo effect – the tendency for a person's positive or negative traits to "spill over" from one area of their personality to another in others' perceptions of them (see also physical attractiveness stereotype).[36]
Herd instinct – common tendency to adopt the opinions and follow the behaviors of the majority to feel safer and to avoid conflict.
Illusion of asymmetric insight – people perceive their knowledge of their peers to surpass their peers' knowledge of them.[37]
Illusion of transparency – people overestimate others' ability to know them, and they also overestimate their ability to know others.
Illusory superiority – overestimating one's desirable qualities, and underestimating undesirable qualities, relative to other people. (Also known as "Lake Wobegon effect," "better-than-average effect," or "superiority bias.")[38]
Ingroup bias – the tendency for people to give preferential treatment to others they perceive to be members of their own groups.
Just-world phenomenon – the tendency for people to believe that the world is just and therefore people "get what they deserve."
Outgroup homogeneity bias – individuals see members of their own group as being relatively more varied than members of other groups.[39]
Projection bias – the tendency to unconsciously assume that others (or one's future selves) share one's current emotional states, thoughts and values.[40]
Self-serving bias – the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).[41]
System justification – the tendency to defend and bolster the status quo. Existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged, sometimes even at the expense of individual and collective self-interest. (See also status quo bias.)
Trait ascription bias – the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable.
Ultimate attribution error – similar to the fundamental attribution error, in this error a person is likely to make an internal attribution to an entire group instead of the individuals within the group.

Memory errors

Consistency bias – incorrectly remembering one's past attitudes and behavior as resembling present attitudes and behavior.
Cryptomnesia – a form of misattribution where a memory is mistaken for imagination.
Egocentric bias – recalling the past in a self-serving manner, e.g. remembering one's exam grades as being better than they were, or remembering a caught fish as being bigger than it was.
False memory – confusion of imagination with memory, or the confusion of true memories with false memories.
Hindsight bias – filtering memory of past events through present knowledge, so that those events look more predictable than they actually were; also known as the "I-knew-it-all-along effect."[28]
Reminiscence bump – the effect that people tend to recall more personal events from adolescence and early adulthood than from other lifetime periods.
Rosy retrospection – the tendency to rate past events more positively than they had actually rated them when the event occurred.
Self-serving bias – perceiving oneself responsible for desirable outcomes but not responsible for undesirable ones.
Suggestibility – a form of misattribution where ideas suggested by a questioner are mistaken for memory.
Telescoping effect – the effect that recent events appear to have occurred more remotely and remote events appear to have occurred more recently.
Von Restorff effect – the tendency for an item that "stands out like a sore thumb" to be more likely to be remembered than other items.


Common theoretical causes of some cognitive biases

Bounded rationality – limits on optimization and rationality
Attribute substitution – making a complex, difficult judgement by unconsciously substituting an easier judgement[42]
Attribution theory, especially:
o Salience
Cognitive dissonance, and related:
o Impression management
o Self-perception theory
Heuristics, including:
o Availability heuristic – estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples[29]
o Representativeness heuristic – judging probabilities on the basis of resemblance[29]
o Affect heuristic – basing a decision on an emotional reaction rather than a calculation of risks and benefits[43]
Introspection illusion
Adaptive bias
Misinterpretations or misuse of statistics.

Methods for dealing with cognitive biases


Reference class forecasting was developed by Daniel Kahneman, Amos Tversky, and Bent
Flyvbjerg to eliminate or reduce the impact of cognitive biases on decision making.
References
1. Taleb 2010, p. xxi.
2. http://www.fooledbyrandomness.com/FatTails.html
3. Taleb 2010.
4. JSTOR 294875
5. "Opacity". Fooled by Randomness. Retrieved 2011-10-17.
6. "Black Swan Unique to Western Australia", Parliament, AU: Curriculum, archived from the original on 201131.
7. Hammond, Peter (October 2009), WERI Bulletin (1), UK: Warwick.
8. "The Black Swan: The Impact of the Highly Improbable". The New York Times. 22 April 2007.
9. Taleb 2010, pp. 374–78.
10. Webb, Allen (December 2008). "Taking improbable events seriously: An interview with the author of The Black Swan (Corporate Finance)" (Interview). McKinsey Quarterly. McKinsey. p. 3. Retrieved 23 May 2012. "Taleb: In fact, I tried in The Black Swan to turn a lot of black swans white! That's why I kept going on and on against financial theories, financial-risk managers, and people who do quantitative finance."
11. Taleb 2008.
12. Taleb, Nassim Nicholas (April 2007). The Black Swan: The Impact of the Highly Improbable (1st ed.). London: Penguin. p. 400. ISBN 1846140455. Retrieved 23 May 2012.
13. Gelman, Andrew (April 2007). "Nassim Taleb's 'The Black Swan'". Statistical Modeling, Causal Inference, and Social Science. Columbia University. Retrieved 23 May 2012.
14. Taleb, Nassim Nicholas (22 April 2007), 1. The Impact of the Highly Improbable, "The Black Swan", The New York Times.
15. Gangahar, Anuj (16 April 2008). "Market Risk: Mispriced risk tests market faith in a prized formula". The Financial Times. New York. Archived from the original on 20 April 2008. Retrieved 23 May 2012.

