
Deductive reasoning

From Wikipedia, the free encyclopedia

Deductive reasoning, also deductive logic or logical deduction or, informally, "top-down"
logic,[1] is the process of reasoning from one or more statements (premises) to reach a logically
certain conclusion.[2] It differs from inductive reasoning and abductive reasoning.
Deductive reasoning links premises with conclusions. If all premises are true, the terms are clear,
and the rules of deductive logic are followed, then the conclusion reached is necessarily true.
Deductive reasoning (top-down logic) contrasts with inductive reasoning (bottom-up logic) in the
following way: In deductive reasoning, a conclusion is reached reductively by applying general rules
that hold over the entirety of a closed domain of discourse, narrowing the range under consideration
until only the conclusion is left. In inductive reasoning, the conclusion is reached by generalizing or
extrapolating from specific observations, i.e., there is epistemic uncertainty. Note, however, that the
inductive reasoning mentioned here is not the same as the induction used in mathematical proofs;
mathematical induction is actually a form of deductive reasoning.

Contents

1 Simple example
2 Law of detachment
3 Law of syllogism
4 Law of contrapositive
5 Validity and soundness
6 History
7 Education
8 See also
9 References
10 Further reading
11 External links

Simple example
An example of a deductive argument:

1. All men are mortal.
2. Kass is a man.
3. Therefore, Kass is mortal.
The first premise states that all objects classified as "men" have the attribute "mortal". The second
premise states that "Kass" is classified as a "man", a member of the set "men". The conclusion
then states that "Kass" must be "mortal" because he inherits this attribute from his classification as a
"man".

Law of detachment
Main article: Modus ponens
The law of detachment (also known as affirming the antecedent and modus ponens) is the first
form of deductive reasoning. A single conditional statement is made, and a hypothesis (P) is stated.
The conclusion (Q) is then deduced from the statement and the hypothesis. The most basic form is
listed below:
1. P → Q (conditional statement)
2. P (hypothesis stated)
3. Q (conclusion deduced)
In deductive reasoning, we can conclude Q from P by using the law of detachment.[3] However, if the
conclusion (Q) is given instead of the hypothesis (P) then there is no definitive conclusion.
The following is an example of an argument using the law of detachment in the form of an if-then
statement:

1. If an angle satisfies 90° < A < 180°, then A is an obtuse angle.
2. A = 120°.
3. A is an obtuse angle.
Since the measure of angle A is greater than 90° and less than 180°, we can deduce that A is
an obtuse angle. If, however, we are given the conclusion that A is an obtuse angle, we cannot
deduce the premise that A = 120°.
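The law of detachment can be sketched in a few lines of code. This is an illustrative aside, not part of the article; the function name is hypothetical, and the rule is a simplified instantiation of the angle example:

```python
# Hypothetical sketch of modus ponens: from "if P then Q" together with P,
# infer Q. Propositions are modeled as plain strings.
def modus_ponens(implication, fact):
    antecedent, consequent = implication  # the pair encodes "antecedent -> consequent"
    if fact == antecedent:
        return consequent                 # Q follows necessarily
    return None                           # given only Q, nothing definitive follows

rule = ("A = 120", "A is an obtuse angle")
print(modus_ponens(rule, "A = 120"))               # A is an obtuse angle
print(modus_ponens(rule, "A is an obtuse angle"))  # None
```

The second call mirrors the point above: handing the rule its conclusion instead of its hypothesis yields no definitive conclusion.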

Law of syllogism
The law of syllogism takes two conditional statements and forms a conclusion by combining the
hypothesis of one statement with the conclusion of another. Here is the general form:

1. P → Q
2. Q → R
3. Therefore, P → R.
The following is an example:

1. If Larry is sick, then he will be absent.
2. If Larry is absent, then he will miss his classwork.
3. Therefore, if Larry is sick, then he will miss his classwork.
We deduced the final statement by combining the hypothesis of the first statement with the
conclusion of the second statement. The deduction remains valid even if one of the premises turns
out to be false, in which case the conclusion may be false as well. This is an example of the
Transitive Property in mathematics. The Transitive Property is sometimes phrased in this form:

1. A = B.
2. B = C.
3. Therefore A = C.
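As an illustrative sketch (not from the article; the names are hypothetical), the law of syllogism amounts to composing two conditionals whenever the middle term matches:

```python
# Hypothetical sketch of the law of syllogism: two (hypothesis, conclusion)
# pairs combine into one when the first conclusion matches the second hypothesis.
def syllogism(rule1, rule2):
    p, q1 = rule1
    q2, r = rule2
    if q1 == q2:          # middle term matches
        return (p, r)     # therefore P -> R
    return None           # the rules do not chain

r1 = ("Larry is sick", "Larry is absent")
r2 = ("Larry is absent", "Larry misses classwork")
print(syllogism(r1, r2))  # ('Larry is sick', 'Larry misses classwork')
```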

Law of contrapositive
Main article: Modus tollens
The law of contrapositive states that, in a conditional, if the conclusion is false, then
the hypothesis must be false also. The general form is the following:

1. P → Q.
2. ~Q.
3. Therefore, we can conclude ~P.
The following are examples:
1. If it is raining, then there are clouds in the sky.
2. There are no clouds in the sky.
3. Thus, it is not raining.
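The equivalence behind the law of contrapositive can be checked exhaustively by truth table. This small sketch (an aside, not part of the article) confirms that P → Q and ~Q → ~P agree on every assignment:

```python
from itertools import product

def implies(p, q):
    # material conditional: "P -> Q" is false only when P is true and Q is false
    return (not p) or q

# Exhaustively compare P -> Q with its contrapositive ~Q -> ~P.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("P -> Q and ~Q -> ~P agree on all four assignments")
```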

Validity and soundness


Deductive arguments are evaluated in terms of their validity and soundness.
An argument is valid if it is impossible for its premises to be true while its conclusion is false. In other
words, the conclusion must be true if the premises are true. An argument can be valid even though
the premises are false.
An argument is sound if it is valid and the premises are true.
It is possible to have a deductive argument that is logically valid but is not sound. Fallacious
arguments often take that form.
The following is an example of an argument that is valid, but not sound:

1. Everyone who eats carrots is a quarterback.
2. John eats carrots.
3. Therefore, John is a quarterback.
The example's first premise is false (there are people who eat carrots who are not quarterbacks),
but the conclusion would necessarily be true if the premises were true (i.e., it is impossible for the
premises to be true and the conclusion false). Therefore the argument is valid, but not sound.
False generalizations, such as "everyone who eats carrots is a quarterback", are often used to
make unsound arguments: not everyone who eats carrots is a quarterback, which demonstrates
the flaw of such arguments.
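The definition of validity, that no assignment of truth values can make every premise true and the conclusion false, can be checked mechanically for propositional forms. The following is a sketch under that definition; the encoding and names are my own, not the article's:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """Valid iff no truth assignment makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False   # found a counterexample
    return True

# Variables: c = "John eats carrots", q = "John is a quarterback".
premises = [lambda c, q: (not c) or q,   # everyone who eats carrots is a quarterback
            lambda c, q: c]              # John eats carrots
conclusion = lambda c, q: q              # therefore, John is a quarterback
print(is_valid(premises, conclusion, 2))  # True: valid (though not sound)

# Affirming the consequent, by contrast, is formally invalid:
bad_premises = [lambda c, q: (not c) or q, lambda c, q: q]
print(is_valid(bad_premises, lambda c, q: c, 2))  # False
```

Validity is a property of the form alone: the checker returns True for the carrot argument even though its first premise is false in fact, which is exactly the valid-but-unsound distinction drawn above.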
In this example, the first statement uses categorical reasoning, saying that all carrot-eaters are
definitely quarterbacks. This theory of deductive reasoning, also known as term logic, was
developed by Aristotle, but was superseded by propositional (sentential) logic and predicate logic.
Deductive reasoning can be contrasted with inductive reasoning with regard to validity and
soundness. In cases of inductive reasoning, even though the premises are true and the argument is
"valid", it is possible for the conclusion to be false (determined to be false with a counterexample or
other means).

History
Aristotle started documenting deductive reasoning in the 4th century BC.[4]

Education
Deductive reasoning is generally thought of as a skill that develops without any formal teaching
or training. As a result of this belief, deductive reasoning skills are not explicitly taught in secondary
schools, where students are expected to use reasoning more often and at a higher level.[5] It is in
high school, for example, that students have an abrupt introduction to mathematical proofs, which
rely heavily on deductive reasoning.[5]
Fallacy
From Wikipedia, the free encyclopedia

This article is about errors in reasoning. For the formal concept in philosophy and logic, see formal
fallacy. For other uses, see Fallacy (disambiguation).


A fallacy is the use of poor, or invalid, reasoning for the construction of an argument.[1][2] A fallacious
argument may be deceptive by appearing to be better than it really is. Some fallacies are committed
intentionally to manipulate or persuade by deception, while others are committed unintentionally due
to carelessness or ignorance.
Fallacies are commonly divided into "formal" and "informal". A formal fallacy can be expressed
neatly in a standard system of logic, such as propositional logic,[1]while an informal fallacy originates
in an error in reasoning other than an improper logical form.[3] Arguments containing informal fallacies
may be formally valid, but still fallacious.[4]

Contents

1 Formal fallacy
o 1.1 Common examples
2 Aristotle's Fallacies
3 Whately's grouping of fallacies
4 Intentional fallacies
5 Deductive fallacy
6 Paul Meehl's Fallacies
7 Fallacies of Measurement
8 Other systems of classification
9 Assessment of Fallacies - Pragmatic Theory
10 See also
11 References
12 Further reading
13 External links

Formal fallacy
Main article: Formal fallacy

A formal fallacy is a common error of thinking that can neatly be expressed in a standard system of
logic.[1] An argument that is formally fallacious is rendered invalid due to a flaw in its logical structure.
Such an argument is always considered to be wrong.
The presence of a formal fallacy in a deductive argument does not imply anything about the
argument's premises or its conclusion. Both may actually be true, or may even be more probable as
a result of the argument; but the deductive argument is still invalid because the conclusion does not
follow from the premises in the manner described. By extension, an argument can contain a formal
fallacy even if the argument is not a deductive one: for instance, an inductive argument that
incorrectly applies principles of probability or causality can be said to commit a formal fallacy.
Common examples
Main article: List of fallacies § Formal fallacies

Aristotle's Fallacies
Aristotle was the first to systematize logical errors into a list. Aristotle's "Sophistical Refutations" (De
Sophisticis Elenchis) identifies thirteen fallacies. He divided them up into two major types, those
depending on language and those not depending on language.[5] These fallacies are called verbal
fallacies and material fallacies, respectively. A material fallacy is an error in what the arguer is talking
about, while a verbal fallacy is an error in how the arguer is talking. Verbal fallacies are those in
which a conclusion is obtained by improper or ambiguous use of words.[6]

Whately's grouping of fallacies

Richard Whately defines a fallacy broadly as "any argument, or apparent argument, which
professes to be decisive of the matter at hand, while in reality it is not."[7]
Whately divided fallacies into two groups: logical and material. According to Whately, logical fallacies
are arguments where the conclusion does not follow from the premises. Material fallacies are not
logical errors because the conclusion does follow from the premises. He then divided the logical
group into two groups: purely logical and semi-logical. The semi-logical group included all of
Aristotle's sophisms except ignoratio elenchi, petitio principii, and non causa pro causa, which are in
the material group.[8]
Intentional fallacies
Sometimes a speaker or writer uses a fallacy intentionally. In any context, including academic
debate, a conversation among friends, political discourse, advertising, or for comedic purposes, the
arguer may use fallacious reasoning to try to persuade the listener or reader, by means other than
offering relevant evidence, that the conclusion is true.
Examples of this include the speaker or writer: diverting the argument to unrelated issues with a red
herring (ignoratio elenchi); insulting someone's character (argumentum ad hominem); assuming they
are right by "begging the question" (petitio principii); making jumps in logic (non sequitur); identifying
a false cause and effect (post hoc ergo propter hoc); asserting that everyone agrees
(bandwagoning); creating a "false dilemma" ("either-or fallacy") in which the situation is
oversimplified; selectively using facts (card-stacking); making false or misleading comparisons (false
equivalence and "false analogy"); and generalizing quickly and sloppily (hasty generalization).[9]
In humor, errors of reasoning are used for comical purposes. Groucho Marx used fallacies
of amphiboly, for instance, to make ironic statements; Gary Larson employs fallacious reasoning in
many of his cartoons. Wes Boyer and Samuel Stoddard have written a humorous essay teaching
students how to be persuasive by means of a whole host of informal and formal fallacies.[10]

Deductive fallacy
Main articles: Deductive fallacy and formal fallacy

In philosophy, the term formal fallacy is used for logical fallacies and is defined formally as a flaw in
the structure of a deductive argument which renders the argument invalid. The term is preferred by
some: since logic is the use of valid reasoning and a fallacy is an argument that uses poor
reasoning, the term logical fallacy would be an oxymoron. However, the same terms are used in
informal discourse to mean an argument which is problematic for any reason. A logical form such as
"A and B" is independent of any particular conjunction of meaningful propositions. Logical form alone
can guarantee that, given true premises, a true conclusion must follow. However, formal logic makes
no such guarantee if any premise is false; the conclusion can be either true or false. Any formal error
or logical fallacy similarly invalidates the deductive guarantee. Both a valid argument form and true
premises are needed to guarantee a true conclusion.

Paul Meehl's Fallacies


In Why I Do Not Attend Case Conferences[11] (1973), psychologist Paul Meehl discusses several
fallacies that can arise in medical case conferences that are primarily held to diagnose patients.
These fallacies can also be considered more general errors of thinking that all individuals (not just
psychologists) are prone to making.

Barnum effect: Making a statement that is trivial, and true of everyone, e.g., of all patients, but
which appears to have special significance to the diagnosis.
Sick-sick fallacy ("pathological set"): The tendency to generalize from personal experiences of
health and ways of being, to the identification of others who are different from ourselves as
being "sick". Meehl emphasizes that though psychologists claim to know about this tendency,
most are not very good at correcting it in their own thinking.
"Me too" fallacy: The opposite of Sick-sick. Imagining that "everyone does this" and thereby
minimizing a symptom without assessing the probability of whether a mentally healthy person
would actually do it. A variation of this is Uncle George's pancake fallacy. This minimizes a
symptom through reference to a friend/relative who exhibited a similar symptom, thereby
implying that it is normal. Meehl points out that consideration should be given to the possibility
that the patient is not healthy by comparison, but rather that the friend/relative is also unhealthy.
Multiple Napoleons fallacy: "It's not real to us, but it's 'real' to him." A relativism that Meehl sees
as a waste of time. There is a distinction between reality and delusion that is important to make
when assessing a patient and so the consideration of comparative realities can mislead and
distract from the importance of a patient's delusion to a diagnostic decision.
Hidden decisions: Decisions based on factors that we do not own up to or challenge, which may,
for example, result in the placing of middle- and upper-class patients in therapy while lower-class
patients are given medication. Meehl identifies these decisions as related to an implicit ideal
patient who is young, attractive, verbal, intelligent, and successful (YAVIS). He sees YAVIS
patients as being preferred by psychotherapists because they can pay for long-term treatment
and are more enjoyable to interact with.
The spun-glass theory of the mind: The belief that the human organism is so fragile that minor
negative events, such as criticism, rejection, or failure, are bound to cause major trauma to the
system. Essentially not giving humans, and sometimes patients, enough credit for their
resilience and ability to recover.[11]

Fallacies of Measurement
Increasing availability and circulation of big data are driving proliferation of new metrics for scholarly
authority,[12][13] and there is lively discussion regarding the relative usefulness of such metrics for
measuring the value of knowledge production in the context of an "information
tsunami."[14] Where mathematical fallacies are subtle mistakes in reasoning leading to invalid
mathematical proofs, measurement fallacies are unwarranted inferential leaps involved in the
extrapolation of raw data to a measurement-based value claim. The ancient Greek
Sophist Protagoras was one of the first thinkers to propose that humans can generate reliable
measurements through his "human-measure" principle and the practice of dissoi logoi (arguing
multiple sides of an issue).[15][16] This history helps explain why measurement fallacies are informed
by informal logic and argumentation theory.

Anchoring fallacy: Anchoring is a cognitive bias, first theorized by Amos Tversky and Daniel
Kahneman, that "describes the common human tendency to rely too heavily on the first piece of
information offered (the 'anchor') when making decisions." In measurement arguments,
anchoring fallacies can occur when unwarranted weight is given to data generated by metrics
that the arguers themselves acknowledge are flawed. For example, limitations of the Journal
Impact Factor (JIF) are well documented,[17] and even JIF pioneer Eugene Garfield notes, "while
citation data create new tools for analyses of research performance, it should be stressed that
they supplement rather than replace other quantitative and qualitative indicators."[18] To the
extent that arguers jettison acknowledged limitations of JIF-generated data in evaluative
judgments, or leave behind Garfield's "supplement rather than replace" caveat, they court
commission of anchoring fallacies.
Naturalistic Fallacy: In the context of measurement, a naturalistic fallacy can occur in a
reasoning chain that makes an unwarranted extrapolation from "is" to "ought," as in the case of
sheer quantity metrics based on the premise "more is better"[14] or, in the case of developmental
assessment in the field of psychology, "higher is better."[19]
False Analogy: In the context of measurement, this error in reasoning occurs when claims are
supported by unsound comparisons between data points, hence the false analogy's informal
nickname of the "apples and oranges" fallacy.[20] For example, the Scopus and Web of
Science bibliographic databases have difficulty distinguishing between citations of scholarly
work that are arms-length endorsements, ceremonial citations, or negative citations (indicating
the citing author withholds endorsement of the cited work).[21] Hence, measurement-based value
claims premised on the uniform quality of all citations may be questioned on false analogy
grounds.
Argumentum ex Silentio: An argument from silence features an unwarranted conclusion
advanced based on the absence of data. For example, Academic Analytics' Faculty Scholarly
Productivity Index purports to measure overall faculty productivity, yet the tool does not capture
data based on citations in books. This creates a possibility that low productivity measurements
using the tool may constitute argumentum ex silentio fallacies, to the extent that such
measurements are supported by the absence of book citation data.
Ecological Fallacy: An ecological fallacy is committed when one draws an inference from data
based on the premise that qualities observed for groups necessarily hold for individuals; for
example, "if countries with more Protestants tend to have higher suicide rates, then Protestants
must be more likely to commit suicide."[22] In metrical argumentation, ecological fallacies can be
committed when one measures scholarly productivity of a sub-group of individuals (e.g. "Puerto
Rican" faculty) via reference to aggregate data about a larger and different group (e.g.
"Hispanic" faculty).[23]

Other systems of classification

Of other classifications of fallacies in general the most famous are those of Francis Bacon and J. S.
Mill. Bacon (Novum Organum, Aph. 33, 38 sqq.) divided fallacies into four Idola (Idols, i.e. False
Appearances), which summarize the various kinds of mistakes to which the human intellect is prone.
With these should be compared the Offendicula of Roger Bacon, contained in the Opus maius, pt. i.
J. S. Mill discussed the subject in book v. of his Logic, and Jeremy Bentham's Book of Fallacies
(1824) contains valuable remarks. See R. Whately's Logic, bk. v.; A. de Morgan, Formal Logic
(1847); A. Sidgwick, Fallacies (1883); and other textbooks.

Assessment of Fallacies - Pragmatic Theory

According to the pragmatic theory,[24] a fallacy can in some instances be an error: the use of a
heuristic (a shortcut version of an argumentation scheme) to jump to a conclusion. However, even more
worryingly, in other instances it is a tactic or ploy used inappropriately in argumentation to try to get
the best of a speech partner unfairly. There are always two parties to an argument containing a fallacy -
the perpetrator and the intended victim. The dialogue framework required to support the pragmatic
theory of fallacy is built on the presumption that argumentative dialogue has both an adversarial
component and a collaborative component. A dialogue has individual goals for each participant, but
also collective (shared) goals that apply to all participants. A fallacy of the second kind is seen as
more than simply violation of a rule of reasonable dialogue. It is also a deceptive tactic of
argumentation, based on sleight-of-hand. Aristotle explicitly compared contentious reasoning to
unfair fighting in athletic contest. But the roots of the pragmatic theory go back even further in history
to the Sophists. The pragmatic theory finds its roots in the Aristotelian conception of a fallacy as a
sophistical refutation, but also supports the view that many of the types of arguments traditionally
labelled as fallacies are in fact reasonable techniques of argumentation that can be used, in many
cases, to support legitimate goals of dialogue. Hence on the pragmatic approach, each case needs
to be analyzed individually, to determine by the textual evidence whether the argument is fallacious
or reasonable.

See also
Logic portal

Thinking portal
Psychology portal

Lists

List of cognitive biases


List of fallacies
List of memory biases
List of paradoxes
Concepts

Association fallacy
Cogency
Cognitive bias
Cognitive distortion
Demagogy
Evidence
Fallacies of definition
False premise
False statement
Invalid proof
Mathematical fallacy
Paradox
Prosecutor's fallacy
Sophism
Soundness
Truth
Validity
Victim blaming
Works

Attacking Faulty Reasoning


Straight and Crooked Thinking

References
1. Harry J. Gensler, The A to Z of Logic (2010), p. 74. Rowman & Littlefield. ISBN 9780810875968.
2. John Woods, The Death of Argument (2004). Applied Logic Series Volume 32, pp. 3-23. ISBN 9789048167005.
3. "Informal Fallacies, Northern Kentucky University". Retrieved 2013-09-10.
4. "Internet Encyclopedia of Philosophy, The University of Tennessee at Martin". Retrieved 2013-09-10.
5. "Aristotle's original 13 fallacies". The Non Sequitur. Retrieved 2013-05-28.
6. "PHIL 495: Philosophical Writing (Spring 2008), Texas A&M University". Retrieved 2013-09-10.
7. Frans H. van Eemeren, Bart Garssen, Bert Meuffels (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules, p. 8. ISBN 9789048126149.
8. Coffey, P. (1912). The Science of Logic. Longmans, Green, and Company. p. 302. LCCN 12018756.
9. Ed Shewan (2003). Applications of Grammar: Principles of Effective Communication (2nd ed.). Christian Liberty Press. pp. 92 ff. ISBN 1-930367-28-7.
10. Boyer, Web. "How to Be Persuasive". Retrieved 12/05/2012.
11. Meehl, P. E. (1973). Psychodiagnosis: Selected papers. Minneapolis (MN): University of Minnesota Press, pp. 225-302.
12. Meho, Lokman (2007). "The Rise and Rise of Citation Analysis" (PDF). Physics World. January: 32-36. Retrieved October 28, 2013.
13. Jensen, Michael (June 15, 2007). "The New Metrics of Scholarly Authority". Chronicle Review. Retrieved 28 October 2013.
14. Baveye, Phillippe C. (2010). "Sticker Shock and Looming Tsunami: The High Cost of Academic Serials in Perspective". Journal of Scholarly Publishing 41: 191-215. doi:10.1353/scp.0.0074.
15. Schiappa, Edward (1991). Protagoras and Logos: A Study in Greek Philosophy and Rhetoric. Columbia, SC: University of South Carolina Press. ISBN 0872497585.
16. Protagoras (1972). The Older Sophists. Indianapolis, IN: Hackett Publishing Co. ISBN 0872205568.
17. National Communication Journal (2013). Impact Factors, Journal Quality, and Communication Journals: A Report for the Council of Communication Associations (PDF). Washington, D.C.: National Communication Association.
18. Garfield, Eugene (1993). "What Citations Tell us About Canadian Research". Canadian Journal of Library and Information Science 18 (4): 34.
19. Stein, Zachary (October 2008). "Myth Busting and Metric Making: Refashioning the Discourse about Development". Integral Leadership Review 8 (5). Retrieved 28 October 2013.
20. Kornprobst, Markus (2007). "Comparing Apples and Oranges? Leading and Misleading Uses of Historical Analogies". Millennium - Journal of International Studies 36: 29-49. doi:10.1177/03058298070360010301. Retrieved 29 October 2013.
21. Meho, Lokman (2007). "The Rise and Rise of Citation Analysis" (PDF). Physics World. January: 32. Retrieved October 28, 2013.
22. Freedman, David A. (2004). In Michael S. Lewis-Beck, Alan Bryman & Tim Futing Liao (eds.), Encyclopedia of Social Science Research Methods. Thousand Oaks, CA: Sage. pp. 293-295. ISBN 0761923632.
23. Allen, Henry L. (1997). "Faculty Workload and Productivity: Ethnic and Gender Disparities" (PDF). NEA 1997 Almanac of Higher Education: 39. Retrieved 29 October 2013.
24. Walton, Douglas (1995). A Pragmatic Theory of Fallacy. Tuscaloosa: University of Alabama Press.

Fearnside, W. Ward and William B. Holther, Fallacy: The Counterfeit of Argument, 1959.
Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005. ISBN 87-991013-7-8.
D. H. Fischer, Historians' Fallacies: Toward a Logic of Historical Thought, Harper Torchbooks, 1970.
Nigel Warburton, Thinking from A to Z, Routledge, 1998.
T. Edward Damer, Attacking Faulty Reasoning, 5th Edition, Wadsworth, 2005. ISBN 0-534-60516-8.
Sagan, Carl, The Demon-Haunted World: Science As a Candle in the Dark. Ballantine Books, March 1997. ISBN 0-345-40946-9, 480 pp. 1996 hardback edition: Random House, ISBN 0-394-53512-X, xv+457 pages plus addenda insert (some printings). Ch. 12.

Further reading
C. L. Hamblin, Fallacies, Methuen London, 1970. reprinted by Vale Press in 1998 as ISBN 0-
916475-24-7.
Hans V. Hansen; Robert C. Pinto (1995). Fallacies: classical and contemporary readings. Penn
State Press. ISBN 978-0-271-01417-3.
Frans van Eemeren; Bart Garssen; Bert Meuffels (2009). Fallacies and Judgments of
Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion.
Springer. ISBN 978-90-481-2613-2.
Douglas N. Walton, Informal logic: A handbook for critical argumentation. Cambridge University
Press, 1989.
Walton, Douglas (1987). Informal Fallacies. Amsterdam: John Benjamins.
Walton, Douglas (1995). A Pragmatic Theory of Fallacy. Tuscaloosa: University of Alabama
Press.
Walton, Douglas (2010). "Why Fallacies Appear to Be Better Arguments than They
Are". Informal Logic 30 (2): 159-184.
John Woods (2004). The death of argument: fallacies in agent based reasoning.
Springer. ISBN 978-1-4020-2663-8.
Historical texts

Aristotle, On Sophistical Refutations, De Sophistici Elenchi. library.adelaide.edu.au


William of Ockham, Summa of Logic (ca. 1323) Part III.4.
John Buridan, Summulae de dialectica Book VII.
Francis Bacon, the doctrine of the idols in Novum Organum Scientiarum, Aphorisms concerning
The Interpretation of Nature and the Kingdom of Man, XXIIIff. fly.hiwaay.net
Arthur Schopenhauer, The Art of Controversy | Die Kunst, Recht zu behalten - The Art Of
Controversy (bilingual), (also known as "Schopenhauer's 38 stratagems"). gutenberg.net
John Stuart Mill, A System of Logic - Ratiocinative and Inductive. Book 5, Chapter 7, Fallacies
of Confusion. la.utexas.edu

Top-down and bottom-up design


From Wikipedia, the free encyclopedia

"Top-down" redirects here. For other uses, see Top-down (disambiguation).

"Bottom up" redirects here. For other uses, see Bottom-up (disambiguation).

Top-down and bottom-up are both strategies of information processing and knowledge ordering,
used in a variety of fields including software, humanistic and scientific theories (see systemics), and
management and organization. In practice, they can be seen as a style of thinking and teaching.
A top-down approach (also known as stepwise design and in some cases used as a synonym
of decomposition) is essentially the breaking down of a system to gain insight into its compositional
sub-systems. In a top-down approach an overview of the system is formulated, specifying but not
detailing any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes
in many additional subsystem levels, until the entire specification is reduced to base elements. A
top-down model is often specified with the assistance of "black boxes", which make it easier to
manipulate. However, black boxes may fail to elucidate elementary mechanisms or be detailed
enough to realistically validate the model. A top-down approach starts with the big picture and
breaks it down from there into smaller segments.[1]
A bottom-up approach is the piecing together of systems to give rise to more complex systems,
thus making the original systems sub-systems of the emergent system. Bottom-up processing is a
type of information processing based on incoming data from the environment to form a perception.
From a Cognitive Psychology perspective, information enters the eyes in one direction (sensory
input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and
recognized as a perception (output that is "built up" from processing to final cognition). In a bottom-
up approach the individual base elements of the system are first specified in great detail. These
elements are then linked together to form larger subsystems, which then in turn are linked,
sometimes in many levels, until a complete top-level system is formed. This strategy often
resembles a "seed" model, whereby the beginnings are small but eventually grow in complexity and
completeness. However, "organic strategies" may result in a tangle of elements and subsystems,
developed in isolation and subject to local optimization as opposed to meeting a global purpose.

Contents

1 Product design and development


2 Computer science
o 2.1 Software development
o 2.2 Programming
o 2.3 Parsing
3 Nanotechnology
4 Neuroscience and psychology
5 Management and organization
o 5.1 State organization
o 5.2 Public health
6 Architecture
7 Ecology
8 See also
9 Notes
10 References
11 External links

Product design and development


Main article: New product development

During the design and development of new products, designers and engineers rely on both a
bottom-up and top-down approach. The bottom-up approach is utilized when off-the-shelf or
existing components are selected and integrated into the product. An example would include
selecting a particular fastener, such as a bolt, and designing the receiving components such that the
fastener will fit properly. In a top-down approach, a custom fastener would be designed such that it
would fit properly in the receiving components.[2] For perspective, for a product with more restrictive
requirements (such as weight, geometry, safety, environment, etc.), such as a space-suit, a more
top-down approach is taken and almost everything is custom designed. However, when it's more
important to minimize cost and increase component availability, such as with manufacturing
equipment, a more bottom-up approach would be taken, and as many off-the-shelf components
(bolts, gears, bearings, etc.) would be selected as possible. In the latter case, the receiving housings
would be designed around the selected components.

Computer science
Software development
Part of this section is from the Perl Design Patterns Book.
In the software development process, the top-down and bottom-up approaches play a key role.
Top-down approaches emphasize planning and a complete understanding of the system. It is
inherent that no coding can begin until a sufficient level of detail has been reached in the design
of at least some part of the system. Top-down approaches are implemented by attaching stubs in place of not-yet-developed modules. This, however, delays testing of the ultimate functional units of a
system until significant design is complete. Bottom-up emphasizes coding and early testing,
which can begin as soon as the first module has been specified. This approach, however, runs
the risk that modules may be coded without having a clear idea of how they link to other parts of
the system, and that such linking may not be as easy as first thought. Re-usability of code is one
of the main benefits of the bottom-up approach.[3]
Top-down design was promoted in the 1970s by IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a
1969 project to automate the New York Times morgue index. The engineering and management
success of this project led to the spread of the top-down approach through IBM and the rest of
the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top-down programming was not strictly what he promoted. Top-down methods were favored in software engineering until the late 1980s,[3] and object-oriented programming helped demonstrate that both top-down and bottom-up programming could be used together.
Modern software design approaches usually combine both top-down and bottom-up
approaches. Although an understanding of the complete system is usually considered necessary
for good design, leading theoretically to a top-down approach, most software projects attempt to
make use of existing code to some degree. Pre-existing modules give designs a bottom-up
flavor. Some design approaches begin with a partially functional system that is designed and coded to completion, and this system is then expanded to fulfill all the requirements for the project.
Programming

Building blocks are an example of bottom-up design because the parts are first created and then assembled without regard to how the parts will work in the assembly.

Top-down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top-down methods is to write a main
procedure that names all the major functions it will need. Later, the programming team looks at
the requirements of each of those functions and the process is repeated. These
compartmentalized sub-routines eventually will perform actions so simple they can be easily and
concisely coded. When all the various sub-routines have been coded the program is ready for
testing. By defining how the application comes together at a high level, lower level work can be
self-contained. By defining how the lower level abstractions are expected to integrate into higher
level ones, interfaces become clearly defined.
In a bottom-up approach, the individual base elements of the system are first specified in great
detail. These elements are then linked together to form larger subsystems, which then in turn
are linked, sometimes in many levels, until a complete top-level system is formed. This strategy
often resembles a "seed" model, whereby the beginnings are small, but eventually grow in
complexity and completeness. Object-oriented programming (OOP) is a paradigm that uses
"objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, SolidWorks, and Autodesk Inventor, users can design products as pieces that are not part of the whole, and later add those pieces together to form assemblies, much like building with LEGO. Engineers call this piece-part design.
This bottom-up approach has one weakness: good intuition is necessary to decide the functionality that each module should provide. If a system is to be built from an existing system, this approach is more suitable, as it starts from existing modules.
Parsing
Parsing is the process of analyzing an input sequence (such as that read from a file or a
keyboard) in order to determine its grammatical structure. This method is used in the analysis of
both natural languages and computer languages, as in a compiler.
Bottom-up parsing is a strategy for analyzing unknown data relationships that attempts to
identify the most fundamental units first, and then to infer higher-order structures from them.
Top-down parsers, on the other hand, hypothesize general parse tree structures and then
consider whether the known fundamental structures are compatible with the hypothesis.
See Top-down parsing and Bottom-up parsing.
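A top-down parser of the kind described above can be sketched as a recursive-descent routine. The toy grammar here (numbers joined by "+") and the function names are assumptions for illustration only:

```python
# A minimal top-down (recursive-descent) parser for the toy grammar
#   expr -> number ('+' number)*
# The parser hypothesizes the structure of `expr` first, then checks
# whether the fundamental tokens (numbers) match that hypothesis.

def parse_expr(text):
    pos = 0

    def parse_number():
        nonlocal pos
        start = pos
        while pos < len(text) and text[pos].isdigit():
            pos += 1
        if start == pos:
            raise SyntaxError(f"expected number at position {pos}")
        return int(text[start:pos])

    value = parse_number()
    while pos < len(text) and text[pos] == "+":
        pos += 1                      # consume '+'
        value += parse_number()
    if pos != len(text):
        raise SyntaxError(f"unexpected character at position {pos}")
    return value

print(parse_expr("12+30+5"))   # -> 47
```

A bottom-up parser for the same grammar would instead collect tokens first and reduce them into larger structures, as the text describes.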
Nanotechnology
Main article: Nanotechnology

Top-down and bottom-up are two approaches for the manufacture of products. These terms
were first applied to the field of nanotechnology by the Foresight Institute in 1989 in order to
distinguish between molecular manufacturing (to mass-produce large atomically precise objects)
and conventional manufacturing (which can mass-produce large objects that are not atomically
precise). Bottom-up approaches seek to have smaller (usually molecular) components built up
into more complex assemblies, while top-down approaches seek to create nanoscale devices by
using larger, externally controlled ones to direct their assembly.
The top-down approach often uses the traditional workshop or microfabrication methods where
externally controlled tools are used to cut, mill, and shape materials into the desired shape and
order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this
category.
Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause
single-molecule components to (a) self-organize or self-assemble into some useful
conformation, or (b) rely on positional assembly. These approaches utilize the concepts
of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry.
Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel
and much cheaper than top-down methods, but could potentially be overwhelmed as the size
and complexity of the desired assembly increases.

Neuroscience and psychology

An example of top-down processing: even though the second letter in each word is ambiguous, top-down processing allows for easy disambiguation based on the context.

These terms are also employed in neuroscience, cognitive neuroscience and cognitive
psychology to discuss the flow of information in processing.[4] Typically sensory input is
considered "down", and higher cognitive processes, which have more information from other
sources, are considered "up". A bottom-up process is characterized by an absence of higher
level direction in sensory processing, whereas a top-down process is characterized by a high
level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19).[3]
According to psychology notes written by Dr. Charles Ramskov, a psychology professor at De Anza College, Rock, Neisser, and Gregory claim that the top-down approach involves perception as an active and constructive process.[5] Additionally, perception is not directly given by stimulus input, but is the result of interactions among the stimulus, internal hypotheses, and expectations.
According to Theoretical Synthesis, "when a stimulus is presented short and clarity is uncertain
that gives a vague stimulus, perception becomes a top-down approach."[6]
Conversely, Psychology defines bottom-up processing as an approach wherein there is a
progression from the individual elements to the whole. According to Ramskov, one proponent of
bottom-up approach, Gibson, claims that it is a process that includes visual perception that
needs information available from proximal stimulus produced by the distal stimulus.[7] Theoretical
Synthesis also claims that bottom-up processing occurs "when a stimulus is presented long and
clearly enough."[6]
Cognitively speaking, certain cognitive processes, such as fast reactions or quick visual
identification, are considered bottom-up processes because they rely primarily on sensory
information, whereas processes such as motor control and directed attention are considered
top-down because they are goal directed. Neurologically speaking, some areas of the brain,
such as area V1 mostly have bottom-up connections.[6] Other areas, such as the fusiform
gyrus have inputs from higher brain areas and are considered to have top-down influence.[8]
The study of visual attention provides an example. If your attention is drawn to a flower in a field,
it may be because the color or shape of the flower are visually salient. The information that
caused you to attend to the flower came to you in a bottom-up fashion: your attention was not
contingent upon knowledge of the flower; the outside stimulus was sufficient on its own. Contrast
this situation with one in which you are looking for a flower. You have a representation of what
you are looking for. When you see the object you are looking for, it is salient. This is an example
of the use of top-down information.
In cognitive terms, two thinking approaches are distinguished. "Top-down" (or "big chunk") is
stereotypically the visionary, or the person who sees the larger picture and overview. Such
people focus on the big picture and from that derive the details to support it. "Bottom-up" (or
"small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape.
The expression "seeing the wood for the trees" references the two styles of cognition.[9]

Management and organization


In management and organizational arenas, the terms "top-down" and "bottom-up" are used to
indicate how decisions are made.
A "top-down" approach is one where an executive, decision maker, or other person or body makes a decision, which is then disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by it. For example, a structure in
which decisions either are approved by a manager, or approved by his or her authorized
representatives based on the manager's prior guidelines, is top-down management.
A "bottom-up" approach is one that works from the grassroots: from a large number of people
working together, causing a decision to arise from their joint involvement. A decision by a
number of activists, students, or victims of some incident to take action is a "bottom-up"
decision. Positive aspects of top-down approaches include their efficiency and superb overview
of higher levels. Also, external effects can be internalized. On the negative side, if reforms are
perceived to be imposed from above, it can be difficult for lower levels to accept them (e.g.
Bresser Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless
of the content of reforms (e.g. Dubois 2002). A bottom-up approach allows for more
experimentation and a better feeling for what is needed at the bottom.
State organization
Both approaches can be found in the organization of states and the political decisions this involves.
In bottom-up organized organizations, e.g. ministries and their subordinate entities, decisions are prepared by experts in their fields, who define, out of their expertise, the policy they deem necessary. If they cannot agree, even on a compromise, the problem is escalated to the next higher level of the hierarchy, where a decision is sought. Finally, the highest common principal might have to take the decision. The inferior owes information to the superior. In effect, as soon as the inferiors agree, the head of the organization merely lends his or her face to the decision on which the inferiors have agreed.
Among several countries, the German political system provides one of the purest forms of a
bottom-up approach. The German Federal Act on the Public Service provides that any inferior has to consult and support any superiors, that he or she has to follow only "general guidelines" of the superiors, that he or she is fully responsible for any of his or her own acts in office, and that he or she has to follow a specific, formal complaint procedure if in doubt of the legality of an order.[10] Frequently, German politicians have had to leave office on the allegation that they took wrong decisions because of their resistance to subordinate experts' opinions (this commonly being called "beratungsresistent", or resistant to consultation, in German). The historical foundation of this approach lies in the fact that, in the 19th century, many politicians used to be noblemen without appropriate education, who more and more became forced to rely on consultation with educated experts, who (in particular after the Prussian reforms of Stein and Hardenberg) enjoyed the status of financially and personally independent, non-dismissible, and neutral experts as Beamte (public servants under public law).[11]
The experience of two dictatorships in the country and, after the end of such regimes, emerging
calls for the legal responsibility of the "helpers' helpers" (Helfershelfer) of such regimes also
furnished calls for the principle of personal responsibility of any expert for any decision made,
this leading to a strengthening of the bottom-up approach, which requires maximum
responsibility of the superiors. A similar approach can be found in British police laws, where
entitlements of police constables are vested in the constable in person and not in the police as
an administrative agency, this leading to the single constable being fully responsible for his or
her own acts in office, in particular their legality.
By contrast, the French administration is based on a top-down approach, where regular public servants have no task other than to execute decisions made by their superiors. As those superiors also require consultation, it is provided by members of a cabinet, which is distinct from the regular ministry staff in personnel and organization. Staff who are not members of the cabinet are not entitled to make any suggestions or to take any decisions of a political dimension.
The advantage of the bottom-up approach is the level of expertise provided, combined with the motivating experience for every member of the administration of being responsible, and finally the "independent engine" of progress that lies in that field of personal responsibility. A disadvantage is the lack of democratic control and transparency, this leading, from a democratic viewpoint, to the deferment of actual policy-making power to faceless, if even known, public servants. Even the fact that certain politicians might provide their "face" for the actual decisions of their inferiors might not mitigate this effect; rather, it takes strong parliamentary rights of control and influence in legislative procedures (as they do exist in the example of Germany).
The advantage of the top-down principle is that political and administrative responsibilities are
clearly distinguished from each other, and that responsibility for political failures can be clearly
identified with the relevant office holder. Disadvantages are that the system triggers
demotivation of inferiors, who know that their ideas for innovative approaches might not be
welcome just because of their position, and that the decision-makers cannot make use of the full
range of expertise which their inferiors will have collected.
Administrations in dictatorships traditionally work according to a strict top-down approach. As
civil servants below the level of the political leadership are discouraged from making suggestions, such systems tend to suffer from a lack of the expertise that could be provided by the inferiors, which regularly leads to a breakdown of the system after a few decades.
Modern communist states, of which the People's Republic of China is an example, therefore prefer to define a framework of permissible, or even encouraged, criticism and self-determination by inferiors that does not affect the major state doctrine, but allows professional, expertise-driven knowledge to reach the decision-makers in office.
Public health
Both top-down and bottom-up approaches exist in public health. There are many examples of
top-down programs, often run by governments or large inter-governmental organizations (IGOs);
many of these are disease-specific or issue-specific, such as HIV control
or Smallpox Eradication. Examples of bottom-up programs include many small NGOs set up to
improve local access to healthcare. However, many programs seek to combine both approaches; for instance, Guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting
bottom-up capacity, as have international programs for hygiene, sanitation, and access to
primary health-care.

Architecture
Often, the École des Beaux-Arts school of design is said to have primarily promoted top-down
design because it taught that an architectural design should begin with a parti, a basic plan
drawing of the overall project.[citation needed]
By contrast, the Bauhaus focused on bottom-up design. This method manifested itself in the
study of translating small-scale organizational systems to a larger, more architectural scale (as
with wood-panel carving and furniture design).

Ecology
In ecology, top-down control refers to when a top predator controls the structure or population
dynamics of the ecosystem. The classic example is that of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin barrens. In other words, such ecosystems are controlled not by the productivity of the kelp but rather by a top predator.
Bottom-up control refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem
structure. An example would be how plankton populations are controlled by the availability of
nutrients. Plankton populations tend to be higher and more complex in areas where upwelling
brings nutrients to the surface.
There are many different examples of these concepts. It is common for populations to be
influenced by both types of control.
Abductive reasoning
From Wikipedia, the free encyclopedia

"Abductive" redirects here. For other uses, see Abduction (disambiguation).

Abductive reasoning (also called abduction,[1] abductive inference[2] or retroduction[3]) is a form of logical inference that goes from an observation to a hypothesis that accounts for the observation, ideally seeking to find the simplest and most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as "inference to the best explanation".[4]
The fields of law,[5] computer science, and artificial intelligence research[6] have renewed interest in the subject of abduction. Diagnostic expert systems frequently employ abduction.

Contents

1 History
2 Deduction, induction, and abduction
3 Formalizations of abduction
o 3.1 Logic-based abduction
o 3.2 Set-cover abduction
o 3.3 Abductive validation
o 3.4 Probabilistic abduction
o 3.5 Subjective logic abduction
4 History
o 4.1 1867
o 4.2 1878
o 4.3 1883
o 4.4 1902 and after
o 4.5 Pragmatism
o 4.6 Three levels of logic about abduction
4.6.1 Classification of signs
4.6.2 Critique of arguments
4.6.3 Methodology of inquiry
o 4.7 Other writers
5 Applications
6 See also
7 References
8 Notes
9 External links

History
The American philosopher Charles Sanders Peirce (1839–1914) first introduced the term as "guessing".[7] Peirce said that to abduce a hypothetical explanation a from an observed circumstance b is to surmise that a may be true because then b would be a matter of course.[8] Thus, to abduce a from b involves determining that a is sufficient, but not necessary, for b.
For example, suppose we observe that the lawn is wet. If it rained last night, then it would be
unsurprising that the lawn is wet. Therefore, by abductive reasoning, the possibility that it rained last
night is reasonable (but note that Peirce did not remain convinced that a single logical form covers
all abduction).[9] Moreover, abducing it rained last night from the observation of the wet lawn can lead
to a false conclusion. In this example, dew, lawn sprinklers, or some other process may have
resulted in the wet lawn, even in the absence of rain.
Peirce argues that good abductive reasoning from P to Q involves not simply a determination
that Q is sufficient for P, but also that Q is among the most economical explanations for P.
Simplification and economy both call for that "leap" of abduction.[10]

Deduction, induction, and abduction

Main article: Logical reasoning

Deductive reasoning (deduction)
allows deriving b from a only where b is a formal logical consequence of a. In other words, deduction derives the consequences of the assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. For example, given that all bachelors are unmarried males, and given that this person is a bachelor, one can deduce that this person is an unmarried male.
Inductive reasoning (induction)
allows inferring b from a, where b does not follow necessarily from a. a might give us very good reason to accept b, but it does not ensure b. For example, if all swans that we have observed so far are white, we may induce that the possibility that all swans are white is reasonable. We have good reason to believe the conclusion from the premise, but the truth of the conclusion is not guaranteed. (Indeed, it turns out that some swans are black.)
Abductive reasoning (abduction)
allows inferring a as an explanation of b. Because of this inference, abduction allows the precondition a to be abduced from the consequence b. Deductive reasoning and abductive reasoning thus differ in the direction in which a rule like "a entails b" is used for inference. As such, abduction is formally equivalent to the logical fallacy of affirming the consequent (or Post hoc ergo propter hoc), because of multiple possible explanations for b. For example, in a billiard game, after glancing and seeing the eight ball moving towards us, we may abduce that the cue ball struck the eight ball. The strike of the cue ball would account for the movement of the eight ball. It serves as a hypothesis that explains our observation. Given the many possible explanations for the movement of the eight ball, our abduction does not leave us certain that the cue ball in fact struck the eight ball, but our abduction, still useful, can serve to orient us in our surroundings. Despite many possible explanations for any physical process that we observe, we tend to abduce a single explanation (or a few explanations) for this process in the expectation that we can better orient ourselves in our surroundings and disregard some possibilities. Properly used, abductive reasoning can be a useful source of priors in Bayesian statistics.
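The differing directions in which a rule "a entails b" is used can be sketched in Python. The rain/wet-lawn rule follows the lawn example earlier in the article; the function names are invented for illustration, and induction, being a generalization over observed cases, is omitted:

```python
# Toy illustration of inference directions for the rule
# "rain entails wet_lawn". Purely illustrative; real systems use
# full logical or probabilistic machinery.

rule = ("rain", "wet_lawn")           # (premise, consequence)

def deduce(fact, rule):
    """Deduction: from the premise, conclude the consequence."""
    premise, consequence = rule
    return consequence if fact == premise else None

def abduce(observation, rules):
    """Abduction: from an observation, collect every premise that
    would explain it (each is sufficient, not necessary)."""
    return [p for p, c in rules if c == observation]

print(deduce("rain", rule))                                    # wet_lawn
print(abduce("wet_lawn", [rule, ("sprinkler", "wet_lawn")]))   # both explain it
```

The list returned by `abduce` makes the affirming-the-consequent point concrete: the wet lawn alone cannot tell rain and sprinkler apart.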

Formalizations of abduction
Logic-based abduction
In logic, explanation is done from a logical theory T representing a domain and a set of observations O. Abduction is the process of deriving a set of explanations of O according to T and picking out one of those explanations. For E to be an explanation of O according to T, it should satisfy two conditions:

O follows from E and T;
E is consistent with T.

In formal logic, O and E are assumed to be sets of literals. The two conditions for E being an explanation of O according to theory T are formalized as:

T ∪ E ⊨ O;
T ∪ E is consistent.

Among the possible explanations E satisfying these two conditions, some other condition of minimality is usually imposed to avoid irrelevant facts (not contributing to the entailment of O) being included in the explanations. Abduction is then the process that picks out some such E. Criteria for picking out a member representing "the best" explanation include the simplicity, the prior probability, or the explanatory power of the explanation.
A proof theoretical abduction method for first order classical logic based on
the sequent calculus and a dual one, based on semantic tableaux (analytic
tableaux) have been proposed (Cialdea Mayer & Pirri 1993). The methods are
sound and complete and work for full first order logic, without requiring any
preliminary reduction of formulae into normal forms. These methods have also
been extended to modal logic.
Abductive logic programming is a computational framework that extends normal logic programming with abduction. It separates the theory into two components, one of which is a normal logic program, used to generate explanations by means of backward reasoning, the other of which is a set of integrity constraints, used to filter the set of candidate explanations.
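The two conditions, plus subset-minimality, can be sketched for a propositional Horn-clause theory. The wet-lawn theory below and all names are illustrative assumptions; consistency is trivially satisfied here because the language has no negation:

```python
from itertools import chain, combinations

# Theory T: Horn rules (body -> head). An explanation E must, together
# with T, entail the observation; minimality keeps out irrelevant facts.
T = [({"rain"}, "wet_lawn"),
     ({"sprinkler"}, "wet_lawn"),
     ({"wet_lawn"}, "wet_shoes")]

def closure(facts, rules):
    """Forward-chain: derive everything the facts entail under the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def explanations(observation, hypotheses, rules):
    """All subset-minimal sets E of hypotheses entailing the observation."""
    result = []
    subsets = chain.from_iterable(
        combinations(hypotheses, r) for r in range(len(hypotheses) + 1))
    for E in subsets:                       # smallest subsets first
        if observation in closure(E, rules):
            E = set(E)
            if not any(prev < E for prev in result):   # keep minimal only
                result.append(E)
    return result

print(explanations("wet_shoes", ["rain", "sprinkler"], T))
```

Abduction proper would then pick one member of this set, e.g. by prior probability or simplicity, as the text notes.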
Set-cover abduction
A different formalization of abduction is based on inverting the function that calculates the visible effects of the hypotheses. Formally, we are given a set of hypotheses H and a set of manifestations M; they are related by the domain knowledge, represented by a function e that takes as an argument a set of hypotheses and gives as a result the corresponding set of manifestations. In other words, for every subset of the hypotheses H' ⊆ H, their effects are known to be e(H').
Abduction is performed by finding a set H' ⊆ H such that M ⊆ e(H'). In other words, abduction is performed by finding a set of hypotheses H' such that their effects e(H') include all observations M.
A common assumption is that the effects of the hypotheses are independent, that is, for every H' ⊆ H, it holds that e(H') = ⋃_{h ∈ H'} e({h}). If this condition is met, abduction can be seen as a form of set covering.
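Under the independence assumption, set-cover abduction reduces to searching for a smallest set of hypotheses whose combined effects cover the manifestations. The medical hypotheses and effects below are invented for illustration:

```python
from itertools import combinations

# Effects of individual hypotheses; independence means e(H') is the
# union of each hypothesis's effects. Names are illustrative only.
effects = {
    "flu":     {"fever", "cough"},
    "cold":    {"cough", "sneezing"},
    "allergy": {"sneezing"},
}

def e(hyps):
    """Effects of a set of hypotheses (independence assumption)."""
    return set().union(*(effects[h] for h in hyps)) if hyps else set()

def abduce(manifestations, hypotheses):
    """Smallest set of hypotheses whose effects cover all manifestations."""
    for r in range(len(hypotheses) + 1):     # try smaller covers first
        for subset in combinations(hypotheses, r):
            if manifestations <= e(subset):
                return set(subset)
    return None                              # no cover exists

print(abduce({"fever", "sneezing"}, list(effects)))
```

The exhaustive search is exponential; practical systems use greedy or heuristic set-cover algorithms instead.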
Abductive validation
Abductive validation is the process of validating a given hypothesis through
abductive reasoning. This can also be called reasoning through successive
approximation. Under this principle, an explanation is valid if it is the best
possible explanation of a set of known data. The best possible explanation is
often defined in terms of simplicity and elegance (see Occam's razor).
Abductive validation is common practice in hypothesis formation in science;
moreover, Peirce claims that it is a ubiquitous aspect of thought:
Looking out my window this lovely spring morning, I see an azalea in full bloom.
No, no! I don't see that; though that is the only way I can describe what I see.
That is a proposition, a sentence, a fact; but what I perceive is not proposition,
sentence, fact, but only an image, which I make intelligible in part by means of a
statement of fact. This statement is abstract; but what I see is concrete. I
perform an abduction when I so much as express in a sentence anything I see.
The truth is that the whole fabric of our knowledge is one matted felt of pure
hypothesis confirmed and refined by induction. Not the smallest advance can be
made in knowledge beyond the stage of vacant staring, without making an
abduction at every step.[11]
It was Peirce's own maxim that "Facts cannot be explained by a hypothesis
more extraordinary than these facts themselves; and of various hypotheses the
least extraordinary must be adopted."[12] After obtaining results from an inference
procedure, we may be left with multiple assumptions, some of which may be
contradictory. Abductive validation is a method for identifying the assumptions that will lead to one's goal.
Probabilistic abduction[edit]
Probabilistic abductive reasoning is a form of abductive validation, and is used
extensively in areas where conclusions about possible hypotheses need to be
derived, such as for making diagnoses from medical tests. For example, a
pharmaceutical company that develops a test for a particular infectious disease
will typically determine the reliability of the test by hiring a group of infected and
a group of non-infected people to undergo the test. Assume the statements :
"Positive test", : "Negative test", : "Infected", and : "Not infected". The
result of these trials will then determine the reliability of the test in terms of
its sensitivity and false positive rate . The interpretations of the
conditionals are: : "The probability of positive test given infection",
and : "The probability of positive test in the absence of infection". The
problem with applying these conditionals in a practical setting is that they are
expressed in the opposite direction to what the practitioner needs. The
conditionals needed for making the diagnosis are: : "The probability of
infection given positive test", and : "The probability of infection given
negative test". The probability of infection could then have been conditionally
deduced as , where " " denotes
conditional deduction. Unfortunately the required conditionals are usually not
directly available to the medical practitioner, but they can be obtained if the
base rate of the infection in the population is known.
The required conditionals can be correctly derived by inverting the available
conditionals using Bayes rule. The inverted conditionals are obtained as

follows: The
term on the right hand side of the equation expresses the base rate of
the infection in the population. Similarly, the term expresses the default
likelihood of positive test on a random person in the population. In the
expressions below and denote the base rates of
and its complement respectively, so that
e.g. . The full expression for the
required conditionals and are then

The full expression for the conditionally abduced probability of infection in a


tested person, expressed as , given the outcome of the test, the base
rate of the infection, as well as the test's sensitivity and false positive rate, is
then given by

.
This further simplifies to

.
Probabilistic abduction can thus be described as a method for inverting
conditionals in order to apply probabilistic deduction.
A medical test result is typically considered positive or negative, so when
applying the above equation it can be assumed that either
(positive) or (negative). In case the patient tests positive, the
above equation can be simplified to which will give the
correct likelihood that the patient actually is infected.
The Base rate fallacy in medicine,[13] or the Prosecutor's fallacy[14] in legal
reasoning, consists of making the erroneous assumption
that . While this reasoning error often can produce a
relatively good approximation of the correct hypothesis probability value, it can
lead to a completely wrong result and wrong conclusion in case the base rate is
very low and the reliability of the test is not perfect. An extreme example of the
base rate fallacy is to conclude that a male person is pregnant just because he
tests positive in a pregnancy test. Obviously, the base rate of male pregnancy is
zero, and since no test is perfect, the correct conclusion is that the result is
a false positive and that the male person is not pregnant.
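The effect of the base rate can be made concrete with a short numerical sketch. The function mirrors the Bayesian inversion described above; the particular probability values are invented for illustration.

```python
# Probabilistic abduction for the medical-test example: invert the available
# conditionals p(test | infection) to obtain p(infection | positive test).
# The numeric values below are illustrative assumptions, not from the article.

def abduce_infection(base_rate, sensitivity, false_positive_rate):
    """Return p(infection | positive test) via Bayes' rule."""
    # Default likelihood of a positive test on a random person: p(x)
    p_positive = (base_rate * sensitivity
                  + (1.0 - base_rate) * false_positive_rate)
    return base_rate * sensitivity / p_positive

base_rate = 0.001       # a(y): 1 in 1000 people infected
sensitivity = 0.99      # p(x|y): test positive given infected
false_positive = 0.05   # p(x|not-y): test positive given not infected

p_infected = abduce_infection(base_rate, sensitivity, false_positive)
print(round(p_infected, 4))
```

With these numbers the correctly abduced probability is only about 0.019, even though the test's sensitivity is 0.99; equating the two is exactly the base rate fallacy described above.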
The expression for probabilistic abduction can be generalised to multinomial
cases,[15] i.e., with a state space X of multiple states x_i and a state space
Y of multiple states y_j.
Subjective logic abduction[edit]
Subjective logic generalises probabilistic logic by including parameters for
uncertainty in the input arguments. Abduction in subjective logic is thus similar
to probabilistic abduction described above.[15] The input arguments in subjective
logic are composite functions called subjective opinions which can be binomial
when the opinion applies to a single proposition or multinomial when it applies
to a set of propositions. A multinomial opinion thus applies to a frame X (i.e. a
state space of exhaustive and mutually disjoint propositions x), and is denoted
by the composite function ω_X = (b, u, a), where b is a vector of belief
masses over the propositions of X, u is the uncertainty mass, and a is a
vector of base rate values over the propositions of X. These components
satisfy u + Σ_x b(x) = 1 and Σ_x a(x) = 1, as well
as b(x), u, a(x) ∈ [0, 1] for all x in X.
Assume the frames X and Y, the sets of conditional opinions
ω_{X|Y} and ω_{X|Ȳ}, the opinion ω_X on X, and the base rate function a_Y on Y.
Based on these parameters, subjective logic provides a method for deriving the
set of inverted conditionals ω_{Y|X} and ω_{Y|X̄}. Using these inverted
conditionals, subjective logic also provides a method for deduction. Abduction in
subjective logic consists of inverting the conditionals and then applying
deduction.

The symbolic notation for conditional abduction is "ω_{Y ~‖ X}", and the operator itself is
denoted as "~⊚". The expression for subjective logic abduction is
then:[15]

ω_{Y ~‖ X} = ω_{X|Y} ~⊚ ω_X.
The advantage of using subjective logic abduction compared to probabilistic
abduction is that uncertainty about the probability values of the input arguments
can be explicitly expressed and taken into account during the analysis. It is thus
possible to perform abductive analysis in the presence of missing or incomplete
input evidence, which normally results in degrees of uncertainty in the output
conclusions.
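To illustrate the structure of these opinions (though not the abduction operator itself), a binomial opinion and its projected probability can be sketched as follows. The class and attribute names are our own; P = b + a·u is the standard way such an opinion collapses to a single probability value.

```python
# A minimal sketch of a binomial subjective opinion: belief b, disbelief d,
# uncertainty u, and base rate a. This shows only the opinion structure and
# its projected probability, not the abduction operator.

from dataclasses import dataclass

@dataclass
class BinomialOpinion:
    belief: float       # b: evidence-supported belief mass
    disbelief: float    # d: evidence-supported disbelief mass
    uncertainty: float  # u: uncommitted mass (lack of evidence)
    base_rate: float    # a: prior probability in the absence of evidence

    def __post_init__(self):
        # The belief masses must satisfy the additivity constraint b + d + u = 1.
        assert abs(self.belief + self.disbelief + self.uncertainty - 1.0) < 1e-9

    def projected_probability(self) -> float:
        """Collapse the opinion to a single probability: P = b + a*u."""
        return self.belief + self.base_rate * self.uncertainty

# A vacuous opinion (u = 1) projects to its base rate ...
vacuous = BinomialOpinion(0.0, 0.0, 1.0, 0.2)
# ... while a dogmatic opinion (u = 0) is an ordinary probability.
dogmatic = BinomialOpinion(0.8, 0.2, 0.0, 0.2)
print(vacuous.projected_probability(), dogmatic.projected_probability())
```

The uncertainty mass u is what distinguishes a subjective opinion from a plain probability: missing evidence shows up as a large u rather than being hidden inside a point estimate.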

History[edit]
The philosopher Charles Sanders Peirce (/ˈpɜːrs/; 1839–1914) introduced
abduction into modern logic. Over the years he called such
inference hypothesis, abduction, presumption, and retroduction. He considered it
a topic in logic as a normative field in philosophy, not in purely formal or
mathematical logic, and eventually as a topic also in economics of research.
As two stages of the development, extension, etc., of a hypothesis in scientific
inquiry, abduction and also induction are often collapsed into one overarching
concept, the hypothesis. That is why, in the scientific method pioneered
by Galileo and Bacon, the abductive stage of hypothesis formation is
conceptualized simply as induction. Thus, in the twentieth century this collapse
was reinforced by Karl Popper's explication of the hypothetico-deductive model,
where the hypothesis is considered to be just "a guess"[16] (in the spirit of
Peirce). However, when the formation of a hypothesis is considered the result of
a process it becomes clear that this "guess" has already been tried and made
more robust in thought as a necessary stage of its acquiring the status of
hypothesis. Indeed, many abductions are rejected or heavily modified by
subsequent abductions before they ever reach this stage.
Before 1900, Peirce treated abduction as the use of a known rule to explain an
observation, e.g., it is a known rule that if it rains, the grass is wet; so, to explain
the fact that the grass is wet, one infers that it has rained. This remains the
common use of the term "abduction" in the social sciences and in artificial
intelligence.
Peirce consistently characterized it as the kind of inference that originates a
hypothesis by concluding in an explanation, though an unassured one, for some
very curious or surprising (anomalous) observation stated in a premise. As early
as 1865 he wrote that all conceptions of cause and force are reached through
hypothetical inference; in the 1900s he wrote that all explanatory content of
theories is reached through abduction. In other respects Peirce revised his view
of abduction over the years.[17]
In later years his view came to be:

Abduction is guessing.[7] It is "very little hampered" by rules of logic.[8] Even a
well-prepared mind's individual guesses are more frequently wrong than
right.[18] But the success of our guesses far exceeds that of random luck and
seems born of attunement to nature by instinct[19] (some speak of intuition in
such contexts[20]).
Abduction guesses a new or outside idea so as to account in a plausible,
instinctive, economical way for a surprising or very complicated
phenomenon. That is its proximate aim.[19]
Its longer aim is to economize inquiry itself. Its rationale is inductive: it
works often enough, is the only source of new ideas, and has no substitute
in expediting the discovery of new truths.[21] Its rationale especially involves
its role in coordination with other modes of inference in inquiry. It is
inference to explanatory hypotheses for selection of those best worth trying.
Pragmatism is the logic of abduction. Upon the generation of an explanation
(which he came to regard as instinctively guided), the pragmatic
maxim gives the necessary and sufficient logical rule to abduction in
general. The hypothesis, being insecure, needs to have
conceivable[22] implications for informed practice, so as to be
testable[23][24] and, through its trials, to expedite and economize inquiry. The
economy of research is what calls for abduction and governs its art.[10]
Writing in 1910, Peirce admits that "in almost everything I printed before the
beginning of this century I more or less mixed up hypothesis and induction" and
he traces the confusion of these two types of reasoning to logicians' too "narrow
and formalistic a conception of inference, as necessarily having formulated
judgments from its premises."[25]
He started out in the 1860s treating hypothetical inference in a number of ways
which he eventually peeled away as inessential or, in some cases, mistaken:

as inferring the occurrence of a character (a characteristic) from the
observed combined occurrence of multiple characters which its occurrence
would necessarily involve;[26] for example, if any occurrence of A is known to
necessitate occurrence of B, C, D, E, then the observation of B, C, D,
E suggests by way of explanation the occurrence of A. (But by 1878 he no
longer regarded such multiplicity as common to all hypothetical inference.[27])
as aiming for a more or less probable hypothesis (in 1867 and 1883 but not
in 1878; anyway by 1900 the justification is not probability but the lack of
alternatives to guessing and the fact that guessing is fruitful;[28] by 1903 he
speaks of the "likely" in the sense of nearing the truth in an "indefinite
sense";[29] by 1908 he discusses plausibility as instinctive appeal.[19]) In a
paper dated by editors as circa 1901, he discusses "instinct" and
"naturalness", along with the kind of considerations (low cost of testing,
logical caution, breadth, and incomplexity) that he later calls
methodeutical.[30]
as induction from characters (but as early as 1900 he characterized
abduction as guessing[28])
as citing a known rule in a premise rather than hypothesizing a rule in the
conclusion (but by 1903 he allowed either approach[8][31])
as basically a transformation of a deductive categorical syllogism[27] (but in
1903 he offered a variation on modus ponens instead,[8] and by 1911 he
was unconvinced that any one form covers all hypothetical inference[9]).
1867[edit]
In 1867, in "The Natural Classification of Arguments",[26] hypothetical inference
always deals with a cluster of characters (call them P′, P′′, P′′′, etc.) known to
occur at least whenever a certain character (M) occurs. Note that categorical
syllogisms have elements traditionally called middles, predicates, and subjects.
For example: "All men [middle] are mortal [predicate]; Socrates [subject] is
a man [middle]; ergo Socrates [subject] is mortal [predicate]". Below, 'M' stands
for a middle; 'P' for a predicate; 'S' for a subject. Note also that Peirce held that
all deduction can be put into the form of the categorical syllogism Barbara
(AAA-1).

[Deduction.]
    [Any] M is P;
    [Any] S is M;
    ∴ [Any] S is P.

Induction.
    S′, S′′, S′′′, &c. are taken at random as M's;
    S′, S′′, S′′′, &c. are P;
    ∴ Any M is probably P.

Hypothesis.
    Any M is, for instance, P′, P′′, P′′′, &c.;
    S is P′, P′′, P′′′, &c.;
    ∴ S is probably M.
1878[edit]
In 1878, in "Deduction, Induction, and Hypothesis",[27] there is no longer a need
for multiple characters or predicates in order for an inference to be hypothetical,
although it is still helpful. Moreover Peirce no longer poses hypothetical
inference as concluding in a probable hypothesis. In the forms themselves, it is
understood but not explicit that induction involves random selection and that
hypothetical inference involves response to a "very curious circumstance". The
forms instead emphasize the modes of inference as rearrangements of one
another's propositions (without the bracketed hints shown below).
Deduction.
    Rule: All the beans from this bag are white.
    Case: These beans are from this bag.
    ∴ Result: These beans are white.

Induction.
    Case: These beans are [randomly selected] from this bag.
    Result: These beans are white.
    ∴ Rule: All the beans from this bag are white.

Hypothesis.
    Rule: All the beans from this bag are white.
    Result: These beans [oddly] are white.
    ∴ Case: These beans are from this bag.

1883[edit]
Peirce long treated abduction in terms of induction from characters or traits
(weighed, not counted like objects), explicitly so in his influential 1883 "A Theory
of Probable Inference", in which he returns to involving probability in the
hypothetical conclusion.[32] Like "Deduction, Induction, and Hypothesis" in 1878,
it was widely read (see the historical books on statistics by Stephen Stigler),
unlike his later amendments of his conception of abduction. Today abduction
remains most commonly understood as induction from characters and
extension of a known rule to cover unexplained circumstances.
1902 and after[edit]
In 1902 Peirce wrote that he now regarded the syllogistical forms and the
doctrine of extension and comprehension (i.e., objects and characters as
referenced by terms), as being less fundamental than he had earlier
thought.[33] In 1903 he offered the following form for abduction:[8]
The surprising fact, C, is observed;
But if A were true, C would be a matter of course,
Hence, there is reason to suspect that A is true.
The hypothesis is framed, but not asserted, in a premise, then asserted
as rationally suspectable in the conclusion. Thus, as in the earlier
categorical syllogistic form, the conclusion is formulated from some
premise(s). But all the same the hypothesis consists more clearly than
ever in a new or outside idea beyond what is known or observed.
Induction in a sense goes beyond observations already reported in the
premises, but it merely amplifies ideas already known to represent
occurrences, or tests an idea supplied by hypothesis; either way it
requires previous abductions in order to get such ideas in the first
place. Induction seeks facts to test a hypothesis; abduction seeks a
hypothesis to account for facts.
Note that the hypothesis ("A") could be of a rule. It need not even be a
rule strictly necessitating the surprising observation ("C"), which needs
to follow only as a "matter of course"; or the "course" itself could
amount to some known rule, merely alluded to, and also not necessarily
a rule of strict necessity. In the same year, Peirce wrote that reaching a
hypothesis may involve placing a surprising observation under either a
newly hypothesized rule or a hypothesized combination of a known rule
with a peculiar state of facts, so that the phenomenon would be not
surprising but instead either necessarily implied or at least likely.[31]
Peirce did not remain quite convinced about any such form as the
categorical syllogistic form or the 1903 form. In 1911, he wrote, "I do
not, at present, feel quite convinced that any logical form can be
assigned that will cover all 'Retroductions'. For what I mean by a
Retroduction is simply a conjecture which arises in the mind."[9]
Pragmatism[edit]
In 1901 Peirce wrote, "There would be no logic in imposing rules, and
saying that they ought to be followed, until it is made out that the
purpose of hypothesis requires them."[34] In 1903 Peirce
called pragmatism "the logic of abduction" and said that the pragmatic
maxim gives the necessary and sufficient logical rule to abduction in
general.[24] The pragmatic maxim is: "Consider what effects, that might
conceivably have practical bearings, we conceive the object of our
conception to have. Then, our conception of these effects is the whole
of our conception of the object." It is a method for fruitful clarification of
conceptions by equating the meaning of a conception with the
conceivable practical implications of its object's conceived effects.
Peirce held that that is precisely tailored to abduction's purpose in
inquiry, the forming of an idea that could conceivably shape informed
conduct. In various writings in the 1900s[10][35] he said that the conduct of
abduction (or retroduction) is governed by considerations of economy,
belonging in particular to the economics of research. He regarded
economics as a normative science whose analytic portion might be part
of logical methodeutic (that is, theory of inquiry).[36]
Three levels of logic about abduction[edit]
Peirce came over the years to divide (philosophical) logic into three
departments:

1. Stechiology, or speculative grammar, on the conditions for
meaningfulness. Classification of signs (semblances,
symptoms, symbols, etc.) and their combinations (as well as
their objects and interpretants).
2. Logical critic, or logic proper, on validity or justifiability of
inference, the conditions for true representation. Critique of
arguments in their various modes (deduction, induction,
abduction).
3. Methodeutic, or speculative rhetoric, on the conditions for
determination of interpretations. Methodology of inquiry in its
interplay of modes.
Peirce had, from the start, seen the modes of inference as being
coordinated together in scientific inquiry and, by the 1900s, held that
hypothetical inference in particular is inadequately treated at the level of
critique of arguments.[23][24] To increase the assurance of a hypothetical
conclusion, one needs to deduce implications about evidence to be
found, predictions which induction can test through observation so as to
evaluate the hypothesis. That is Peirce's outline of the scientific
method of inquiry, as covered in his inquiry methodology, which
includes pragmatism or, as he later called it, pragmaticism, the
clarification of ideas in terms of their conceivable implications regarding
informed practice.
Classification of signs[edit]

As early as 1866,[37] Peirce held that:
1. Hypothesis (abductive inference) is inference through an icon (also
called a likeness).
2. Induction is inference through an index (a sign by factual
connection); a sample is an index of the totality from which it is drawn.
3. Deduction is inference through a symbol (a sign by interpretive habit
irrespective of resemblance or connection to its object).
In 1902, Peirce wrote that, in abduction: "It is recognized that the
phenomena are like, i.e. constitute an Icon of, a replica of a general
conception, or Symbol."[38]
Critique of arguments[edit]

At the critical level Peirce examined the forms of abductive arguments
(as discussed above), and came to hold that the hypothesis should
economize explanation for plausibility in terms of the feasible and
natural. In 1908 Peirce described this plausibility in some detail.[19] It
involves not likeliness based on observations (which is instead the
inductive evaluation of a hypothesis), but instead optimal simplicity in
the sense of the "facile and natural", as by Galileo's natural light of
reason and as distinct from "logical simplicity" (Peirce does not dismiss
logical simplicity entirely but sees it in a subordinate role; taken to its
logical extreme it would favor adding no explanation to the observation
at all). Even a well-prepared mind guesses oftener wrong than right, but
our guesses succeed better than random luck at reaching the truth or at
least advancing the inquiry, and that indicates to Peirce that they are
based in instinctive attunement to nature, an affinity between the mind's
processes and the processes of the real, which would account for why
appealingly "natural" guesses are the ones that oftenest (or least
seldom) succeed; to which Peirce added the argument that such
guesses are to be preferred since, without "a natural bent like nature's",
people would have no hope of understanding nature. In 1910 Peirce
made a three-way distinction between probability, verisimilitude, and
plausibility, and defined plausibility with a normative "ought": "By
plausibility, I mean the degree to which a theory ought to recommend
itself to our belief independently of any kind of evidence other than our
instinct urging us to regard it favorably."[39] For Peirce, plausibility does
not depend on observed frequencies or probabilities, or on
verisimilitude, or even on testability, which is not a question of the
critique of the hypothetical inference as an inference, but rather a
question of the hypothesis's relation to the inquiry process.
The phrase "inference to the best explanation" (not used by Peirce but
often applied to hypothetical inference) is not always understood as
referring to the most simple and natural. However, in other senses of
"best", such as "standing up best to tests", it is hard to know which is
the best explanation to form, since one has not tested it yet. Still, for
Peirce, any justification of an abductive inference as good is not
completed upon its formation as an argument (unlike with induction and
deduction) and instead depends also on its methodological role and
promise (such as its testability) in advancing inquiry.[23][24][40]
Methodology of inquiry[edit]

At the methodeutical level Peirce held that a hypothesis is judged and
selected[23] for testing because it offers, via its trial, to expedite and
economize the inquiry process itself toward new truths, first of all by
being testable and also by further economies,[10] in terms of cost, value,
and relationships among guesses (hypotheses). Here, considerations
such as probability, absent from the treatment of abduction at the
critical level, come into play. For examples:

Cost: A simple but low-odds guess, if low in cost to test for falsity,
may belong first in line for testing, to get it out of the way. If
surprisingly it stands up to tests, that is worth knowing early in the
inquiry, which otherwise might have stayed long on a wrong though
seemingly likelier track.
Value: A guess is intrinsically worth testing if it has instinctual
plausibility or reasoned objective probability, while subjective
likelihood, though reasoned, can be treacherous.
Interrelationships: Guesses can be chosen for trial strategically for
their
caution, for which Peirce gave as example the game of Twenty
Questions,
breadth of applicability to explain various phenomena, and
incomplexity, that of a hypothesis that seems too simple but
whose trial "may give a good 'leave,' as the billiard-players
say", and be instructive for the pursuit of various and conflicting
hypotheses that are less simple.[41]
Other writers[edit]
Norwood Russell Hanson, a philosopher of science, wanted to grasp a
logic explaining how scientific discoveries take place. He used Peirce's
notion of abduction for this.[42]
Further development of the concept can be found in Peter
Lipton's Inference to the Best Explanation (Lipton, 1991).

Applications[edit]
Applications in artificial intelligence include fault diagnosis, belief
revision, and automated planning. The most direct application of
abduction is that of automatically detecting faults in systems: given a
theory relating faults with their effects and a set of observed effects,
abduction can be used to derive sets of faults that are likely to be the
cause of the problem.
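This fault-diagnosis use can be sketched with a small invented car-fault theory: given a mapping from candidate faults to the effects they produce, abduction amounts to searching for the smallest fault sets whose combined effects cover everything observed.

```python
# A hedged sketch of abductive fault diagnosis. The theory maps each candidate
# fault to the set of effects it produces; the fault names are invented.

from itertools import combinations

def abduce_faults(theory, observations):
    """Return the smallest fault sets whose combined effects cover the observations."""
    faults = list(theory)
    for size in range(1, len(faults) + 1):
        explanations = [
            set(combo) for combo in combinations(faults, size)
            if observations <= set().union(*(theory[f] for f in combo))
        ]
        if explanations:
            return explanations  # stop at minimal cardinality
    return []

theory = {
    "dead_battery": {"no_lights", "no_crank"},
    "bad_starter": {"no_crank"},
    "blown_fuse": {"no_lights"},
}
print(abduce_faults(theory, {"no_crank", "no_lights"}))
```

Here a single dead battery explains both observed effects, so it is preferred over the larger alternative of a bad starter plus a blown fuse; preferring minimal explanations is one common (though not the only) criterion for choosing among abduced fault sets.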
In medicine, abduction can be seen as a component of clinical
evaluation and judgment.[43][44]
Abduction can also be used to model automated planning.[45] Given a
logical theory relating action occurrences with their effects (for example,
a formula of the event calculus), the problem of finding a plan for
reaching a state can be modeled as the problem of abducting a set of
literals implying that the final state is the goal state.
In intelligence analysis, Analysis of Competing
Hypotheses and Bayesian networks, probabilistic abductive reasoning
is used extensively. Similarly in medical diagnosis and legal reasoning,
the same methods are being used, although there have been many
examples of errors, especially caused by the base rate fallacy and
the prosecutor's fallacy.
Belief revision, the process of adapting beliefs in view of new
information, is another field in which abduction has been applied. The
main problem of belief revision is that the new information may be
inconsistent with the corpus of beliefs, while the result of the
incorporation cannot be inconsistent. This process can be done by the
use of abduction: once an explanation for the observation has been
found, integrating it does not generate inconsistency. This use of
abduction is not straightforward, as adding propositional formulae to
other propositional formulae can only make inconsistencies worse.
Instead, abduction is done at the level of the ordering of preference of
the possible worlds. Preference models use fuzzy logic or utility
models.
In the philosophy of science, abduction has been the key inference
method to support scientific realism, and much of the debate about
scientific realism is focused on whether abduction is an acceptable
method of inference.
In historical linguistics, abduction during language acquisition is often
taken to be an essential part of processes of language change such as
reanalysis and analogy.[46]
In anthropology, Alfred Gell in his influential book Art and
Agency defined abduction (after Eco[47]) as "a case of synthetic
inference 'where we find some very curious circumstances, which
would be explained by the supposition that it was a case of some
general rule, and thereupon adopt that supposition".[48] Gell criticizes
existing 'anthropological' studies of art, for being too preoccupied with
aesthetic value and not preoccupied enough with the central
anthropological concern of uncovering 'social relationships,' specifically
the social contexts in which artworks are produced, circulated, and
received.[49] Abduction is used as the mechanism for getting from art to
agency. That is, abduction can explain how works of art inspire
a sensus communis: the commonly-held views shared by members that
characterize a given society.[50] The question Gell asks in the book is,
'how does it initially 'speak' to people?' He answers by saying that "No
reasonable person could suppose that art-like relations between people
and things do not involve at least some form of semiosis."[48] However,
he rejects any intimation that semiosis can be thought of as a language
because then he would have to admit to some pre-established
existence of the sensus communis that he wants to claim only emerges
afterwards out of art. Abduction is the answer to this conundrum
because the tentative nature of the abduction concept (Peirce likened it
to guessing) means that not only can it operate outside of any pre-
existing framework, but moreover, it can actually intimate the existence
of a framework. As Gell reasons in his analysis, the physical existence
of the artwork prompts the viewer to perform an abduction that imbues
the artwork with intentionality. A statue of a goddess, for example, in
some senses actually becomes the goddess in the mind of the
beholder; and represents not only the form of the deity but also her
intentions (which are adduced from the feeling of her very presence).
Therefore through abduction, Gell claims that art can have the kind of
agency that plants the seeds that grow into cultural myths. The power
of agency is the power to motivate actions and inspire ultimately the
shared understanding that characterizes any given society.

Defeasible reasoning
From Wikipedia, the free encyclopedia

Defeasible reasoning is a kind of reasoning that is based on reasons that are defeasible, as
opposed to the indefeasible reasons of deductive logic. Defeasible reasoning is a particular kind of
non-demonstrative reasoning, where the reasoning does not produce a full, complete, or final
demonstration of a claim, i.e., where fallibility and corrigibility of a conclusion are acknowledged. In
other words defeasible reasoning produces a contingent statement or claim. Other kinds of non-
demonstrative reasoning are probabilistic reasoning, inductive
reasoning, statistical reasoning, abductive reasoning, and paraconsistent reasoning. Defeasible
reasoning is also a kind of ampliative reasoning because its conclusions reach beyond the pure
meanings of the premises.
The differences between these kinds of reasoning correspond to differences about the conditional
that each kind of reasoning uses, and on what premise (or on what authority) the conditional is
adopted:

Deductive (from meaning postulate, axiom, or contingent assertion): if p then q (i.e., q or not-p)
Defeasible (from authority): if p then (defeasibly) q
Probabilistic (from combinatorics and indifference): if p then (probably) q
Statistical (from data and presumption): the frequency of qs among ps is high (or inference from
a model fit to data); hence, (in the right context) if p then (probably) q
Inductive (theory formation; from data, coherence, simplicity, and confirmation): (inducibly)
"if p then q"; hence, if p then (deducibly-but-revisably) q
Abductive (from data and theory): p and q are correlated, and q is sufficient for p; hence,
if p then (abducibly) q as cause
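The defeasible conditional above differs from the deductive one in that its conclusion can be withdrawn when an exception applies. A minimal sketch using the classic birds-fly default (the example and its encoding are ours, not from the article):

```python
# A minimal sketch of defeasible inference: a default rule licenses a
# conclusion unless a more specific exception defeats it.

def defeasibly_flies(animal, facts):
    """Apply 'birds (defeasibly) fly' unless an exception applies."""
    if animal in facts.get("penguins", set()):
        return False  # the more specific exception defeats the default
    if animal in facts.get("birds", set()):
        return True   # default conclusion, held only provisionally
    return False

facts = {"birds": {"tweety", "opus"}, "penguins": {"opus"}}
print(defeasibly_flies("tweety", facts))  # True: the default applies
print(defeasibly_flies("opus", facts))    # False: defeated by the exception
```

Unlike a deductive rule, the conclusion for tweety would be retracted if tweety were later added to the penguins set: the inference is corrigible in exactly the sense described above.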
Defeasible reasoning finds its fullest expression in jurisprudence, ethics and moral
philosophy, epistemology, pragmatics and
conversational conventions in linguistics, constructivist decision theories, and in knowledge
representation and planning in artificial intelligence. It is also closely identified with prima
facie (presumptive) reasoning (i.e., reasoning on the "face" of evidence), and ceteris paribus (default)
reasoning (i.e., reasoning, all things "being equal").

Contents

1 History
2 Political and judicial use
3 Specificity
4 Nature of defeasibility
5 See also
6 References
7 External links

History[edit]
Though Aristotle differentiated the forms of reasoning that are valid for logic and philosophy from the
more general ones that are used in everyday life (see dialectics and rhetoric), 20th-century
philosophers mainly concentrated on deductive reasoning. At the end of the 19th century, logic texts
would typically survey both demonstrative and non-demonstrative reasoning, often giving more
space to the latter. However, after the blossoming of mathematical logic at the hands of Bertrand
Russell, Alfred North Whitehead and Willard van Orman Quine, latter-20th century logic texts paid
little attention to the non-deductive modes of inference.
There are several notable exceptions. John Maynard Keynes wrote his dissertation on non-
demonstrative reasoning, and influenced the thinking of Ludwig Wittgenstein on this subject.
Wittgenstein, in turn, had many admirers, including the positivist legal scholar H.L.A. Hart and
the speech act linguist John L. Austin, Stephen Toulmin in rhetoric (Chaim Perelman too), the moral
theorists W.D. Ross and C.L. Stevenson, and the vagueness epistemologist/ontologist Friedrich
Waismann.
The etymology of defeasible usually refers to Middle English law of contracts, where a condition of
defeasance is a clause that can invalidate or annul a contract or deed.
Though defeat, dominate, defer, defy, deprecate and derogate are often used in the same contexts
as defeasible, the
verbs annul and invalidate (and nullify, overturn, rescind, vacate, repeal, debar, void,
cancel, countermand, preempt, etc.) are more properly correlated with the concept of defeasibility than those words
beginning with the letter d. Many dictionaries do contain the verb to defease, with past
participle defeased.
Philosophers in moral theory and rhetoric had taken defeasibility largely for granted when American
epistemologists rediscovered Wittgenstein's thinking on the subject: John Ladd, Roderick
Chisholm, Roderick Firth, Ernest Sosa, Robert Nozick, and John L. Pollock all began writing with
new conviction about how appearance as red was only a defeasible reason for believing something
to be red. More importantly Wittgenstein's orientation toward language-games (and away
from semantics) emboldened these epistemologists to manage rather than to expurgate prima
facie logical inconsistency.
At the same time (in the mid-1960s), two more students of Hart and Austin at Oxford, Brian
Barry and David Gauthier, were applying defeasible reasoning to political argument and practical
reasoning (of action), respectively. Joel Feinberg and Joseph Raz were beginning to produce
equally mature works in ethics and jurisprudence informed by defeasibility.
By far the most significant works on defeasibility by the mid-1970s were in epistemology,
where John Pollock's 1974 Knowledge and Justification popularized his terminology
of undercutting and rebutting (which mirrored the analysis of Toulmin). Pollock's work was significant
precisely because it brought defeasibility so close to philosophical logicians. The failure of logicians
to dismiss defeasibility in epistemology (as Cambridge's logicians had done to Hart decades earlier)
landed defeasible reasoning in the philosophical mainstream.
Defeasibility had always been closely related to argument, rhetoric, and law, except in epistemology,
where the chains of reasons, and the origin of reasons, were not often discussed. Nicholas
Rescher's Dialectics is an example of how difficult it was for philosophers to contemplate more
complex systems of defeasible reasoning. This was in part because proponents of informal
logic became the keepers of argument and rhetoric while insisting that formalism was anathema to
argument.
About this time, researchers in artificial intelligence became interested in non-monotonic
reasoning and its semantics. With philosophers such as Pollock and Donald Nute (e.g., defeasible
logic), dozens of computer scientists and logicians produced complex systems of defeasible
reasoning between 1980 and 2000. No single system of defeasible reasoning would emerge in the
same way that Quine's system of logic became a de facto standard. Nevertheless, the 100-year
headstart on non-demonstrative logical calculi, due to George Boole, Charles Sanders Peirce,
and Gottlob Frege was being closed: both demonstrative and non-demonstrative reasoning now
have formal calculi.
There are related (and slightly competing) systems of reasoning that are newer than systems of
defeasible reasoning, e.g., belief revision and dynamic logic. The dialogue logics of Charles
Hamblin and Jim Mackenzie, and their colleagues, can also be tied closely to defeasible reasoning.
Belief revision is a non-constructive specification of the desiderata with which, or constraints
according to which, epistemic change takes place. Dynamic logic is related mainly because, like
paraconsistent logic, the reordering of premises can change the set of justified conclusions.
Dialogue logics introduce an adversary, but are like belief revision theories in their adherence to
deductively consistent states of belief.

Political and judicial use[edit]


Many political philosophers have been fond of the word indefeasible when referring to rights, e.g.,
that were inalienable, divine, or indubitable. For example, in the 1776 Virginia Declaration of Rights,
"community hath an indubitable, inalienable, and indefeasible right to reform, alter or abolish
government..." (also attributed to James Madison); and John Adams, "The people have a right, an
indisputable, unalienable, indefeasible, divine right to that most dreaded and envied kind of
knowledge - I mean of the character and conduct of their rulers." Also, Lord Aberdeen: "indefeasible
right inherent in the British Crown" and Gouverneur Morris: "the Basis of our own Constitution is the
indefeasible Right of the People." Scholarship about Abraham Lincoln often cites these passages in
the justification of secession. Philosophers who use the word defeasible have historically had
different world views from those who use the word indefeasible (and this distinction has often been
mirrored by Oxford and Cambridge zeitgeist); hence it is rare to find authors who use both words.
In judicial opinions, the use of defeasible is commonplace. There is however disagreement among
legal logicians whether defeasible reasoning is central, e.g., in the consideration of open
texture, precedent, exceptions, and rationales, or whether it applies only to explicit defeasance
clauses. H.L.A. Hart in The Concept of Law gives two famous examples of defeasibility: "No vehicles
in the park" (except during parades); and "Offer, acceptance, and memorandum produce a contract"
(except when the contract is illegal, the parties are minors, inebriated, or incapacitated, etc.).

Specificity[edit]
One of the main disputes among those who produce systems of defeasible reasoning is the status of
a rule of specificity. In its simplest form, it is the same rule as subclass inheritance preempting class
inheritance:

(R1) if p then (defeasibly) q                  e.g., if penguin then not-flies
(R2) if r then (defeasibly) not-q              e.g., if bird then flies
(O1) if p then (deductively) r                 e.g., if penguin then bird
(M1) arguably, p                               e.g., arguably, penguin
(M2) R1 is a more specific reason than R2      e.g., R1 is better than R2
(M3) therefore, arguably, q                    e.g., therefore, arguably, not-flies

Approximately half of the systems of defeasible reasoning discussed today adopt a rule of
specificity, while half expect that such preference rules be written explicitly by whoever provides the
defeasible reasons. For example, Rescher's dialectical system uses specificity, as do early systems
of multiple inheritance (e.g., David Touretzky) and the early argument systems of Donald Nute and
of Guillermo Simari and Ronald Loui. Defeasible reasoning accounts of precedent (stare
decisis and case-based reasoning) also make use of specificity (e.g., Joseph Raz and the work of
Kevin D. Ashley and Edwina Rissland). Meanwhile, the argument systems of Henry Prakken and
Giovanni Sartor, of Bart Verheij and Jaap Hage, and the system of Phan Minh Dung do not adopt
such a rule.
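The preference encoded by (M2) is the same one at work in subclass inheritance, and can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a rendering of any particular published system; the rule names and the `penguin`/`bird` taxonomy are taken from the example above, and everything else (the `Rule` class, the `closure` and `conclude` helpers) is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    antecedent: str   # class the rule applies to, e.g. "penguin"
    consequent: str   # defeasible conclusion, e.g. "not-flies"

def negate(p: str) -> str:
    return p[4:] if p.startswith("not-") else "not-" + p

# Strict (deductive) taxonomy, as in (O1): penguin -> bird.
STRICT = {"penguin": {"bird"}}

def closure(fact: str) -> set:
    """All classes a fact strictly belongs to, including itself."""
    seen, todo = set(), [fact]
    while todo:
        f = todo.pop()
        if f not in seen:
            seen.add(f)
            todo.extend(STRICT.get(f, ()))
    return seen

def more_specific(r1: Rule, r2: Rule) -> bool:
    """r1 is more specific than r2 if r1's antecedent strictly entails
    r2's antecedent but not vice versa, as in (M2)."""
    return (r2.antecedent in closure(r1.antecedent)
            and r1.antecedent not in closure(r2.antecedent))

def conclude(fact: str, rules: list) -> set:
    """Defeasibly justified conclusions: a rule fires unless a more
    specific applicable rule supports the contradictory conclusion."""
    applicable = [r for r in rules if r.antecedent in closure(fact)]
    justified = set()
    for r in applicable:
        defeated = any(s.consequent == negate(r.consequent)
                       and more_specific(s, r)
                       for s in applicable)
        if not defeated:
            justified.add(r.consequent)
    return justified

R1 = Rule("R1", "penguin", "not-flies")  # (R1) if penguin then (defeasibly) not-flies
R2 = Rule("R2", "bird", "flies")         # (R2) if bird then (defeasibly) flies

print(conclude("penguin", [R1, R2]))  # {'not-flies'}: R1 preempts R2, as in (M3)
print(conclude("bird", [R1, R2]))     # {'flies'}
```

For a plain bird, only R2 applies, so "flies" survives; for a penguin, both rules apply, and specificity silently resolves the conflict in favor of R1. Systems without a specificity rule would instead require the author of the rules to state the preference (M2) explicitly.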

Nature of defeasibility[edit]
There is a distinct difference between those who theorize about defeasible reasoning as if it were a
system of confirmational revision (with affinities to belief revision), and those who theorize about
defeasibility as if it were the result of further (non-empirical) investigation. There are at least three
kinds of further non-empirical investigation: progress in a lexical/syntactic process, progress in a
computational process, and progress in an adversary or legal proceeding.
Defeasibility as corrigibility: Here, a person learns something new that annuls a prior inference. In
this case, defeasible reasoning provides a constructive mechanism for belief revision, like a truth
maintenance system as envisioned by Jon Doyle.
Defeasibility as shorthand for preconditions: Here, the author of a set of rules or legislative code
is writing rules with exceptions. Sometimes a set of defeasible rules can be rewritten, with more
cogency, with explicit (local) pre-conditions instead of (non-local) competing rules. Many non-
monotonic systems with fixed-point or preferential semantics fit this view. However, sometimes the
rules govern a process of argument (the last view on this list), so that they cannot be re-compiled
into a set of deductive rules lest they lose their force in situations with incomplete knowledge or
incomplete derivation of preconditions.
Defeasibility as an anytime algorithm: Here, it is assumed that calculating arguments takes time,
and at any given time, based on a subset of the potentially constructible arguments, a conclusion is
defeasibly justified. Isaac Levi has protested against this kind of defeasibility, but it is well-suited to
the heuristic projects of, for example, Herbert A. Simon. On this view, the best move so far in a
chess-playing program's analysis at a particular depth is a defeasibly justified conclusion. This
interpretation works with either the prior or the next semantical view.
Defeasibility as a means of controlling an investigative or social process: Here, justification is
the result of the right kind of procedure (e.g., a fair and efficient hearing), and defeasible reasoning
provides impetus for pro and con responses to each other. Defeasibility has to do with the
alternation of verdict as locutions are made and cases presented, not the changing of a mind with
respect to new (empirical) discovery. Under this view, defeasible reasoning and defeasible
argumentation refer to the same phenomenon.
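The "anytime algorithm" view above can be sketched as a generator that yields the best conclusion found so far, which deeper analysis may later defeat, as in the chess example. This is an illustrative sketch only; the scoring table, move names, and the `anytime_best` helper are all assumptions, not part of any cited system.

```python
# Sketch of defeasibility as an anytime algorithm: at each depth of
# analysis, the best candidate so far is a defeasibly justified
# conclusion; further computation may overturn it.

def anytime_best(candidates, score_at_depth, max_depth):
    """Yield (depth, best-so-far) after each round of deeper analysis."""
    best, best_score = None, float("-inf")
    for depth in range(1, max_depth + 1):
        for c in candidates:
            s = score_at_depth(c, depth)   # deeper analysis refines the score
            if s > best_score:
                best, best_score = c, s
        yield depth, best                  # conclusion justified at this depth

# Toy example: move "a" looks best at depth 1, but "b" wins at depth 2,
# defeating the earlier (still rationally held) conclusion.
scores = {("a", 1): 5, ("b", 1): 3, ("a", 2): 5, ("b", 2): 9}
for depth, best in anytime_best(["a", "b"], lambda c, d: scores[(c, d)], 2):
    print(depth, best)  # prints "1 a" then "2 b"
```

The point of the sketch is that each yielded conclusion is justified relative to the computation performed so far, not relative to new empirical evidence, which is what distinguishes this view from defeasibility as corrigibility.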
Argument from authority
Argument from authority, also authoritative argument and appeal to authority, is a common
form of argument which leads to a logical fallacy when used in argumentative reasoning.[1]
In informal reasoning, the appeal to authority is a form of argument attempting to establish
a statistical syllogism.[2] The appeal to authority relies on an argument of the form:[3]
A is an authority on a particular topic
A says something about that topic
A is probably correct
Fallacious examples of using the appeal include any appeal to authority used in the
context of logical reasoning, and appealing to the position of an authority or authorities
to dismiss evidence,[2][4][5][6] as authorities can come to the wrong judgments through error,
bias, dishonesty, or falling prey to groupthink. Thus, the appeal to authority is not a
generally reliable argument for establishing facts.[7]

Contents
[hide]

1 Forms
o 1.1 General
o 1.2 Dismissal of evidence
o 1.3 Appeal to non-authorities
o 1.4 Use in logic
2 Notable examples
o 2.1 Inaccurate chromosome number
o 2.2 The tongue map
o 2.3 Surgical sterilization and puerperal infections
3 Psychological basis
4 See also
5 References
6 Sources
7 External links

Forms[edit]
General[edit]
The argument from authority can take several forms. As a syllogism, the argument has
the following basic structure:[4][8]
A says P about subject matter S.
A should be trusted about subject matter S.
Therefore, P is correct.
The second premise is not accepted as valid, as it amounts to
an unfounded assertion that leads to circular reasoning, capable of
defining person or group A as inerrant on any subject matter.[4][9]
One real-world example of this tautological inerrancy is how Ignaz
Semmelweis' evidence that puerperal fever was caused by a contagious
agent, as opposed to the then-accepted view that it was caused mainly by
environmental factors,[10] was dismissed largely based on appeals to
authority. Multiple critics stated that they did not accept the claims in part
because of the fact that in all the academic literature on puerperal fever
there was nothing that supported the view Semmelweis was
advancing.[11] They were thus effectively using the circular argument that
"the literature is not in error, therefore the literature is not in error".[12]
Dismissal of evidence[edit]
The equally fallacious counter-argument from authority takes the form:[13]
B has provided evidence for position T.
A says position T is incorrect.
Therefore, B's evidence is false.
This form is fallacious as it does not actually refute the
evidence given by B, merely notes that there is disagreement
with it.[13] This form is especially unsound when there is no
indication that A is aware of the evidence given by B.[14]
Appeal to non-authorities[edit]
Fallacious arguments from authority can also be the result of
citing a non-authority as an authority.[4] These arguments
assume that a person without status or authority is inherently
reliable. The appeal to poverty for example is the fallacy of
thinking a conclusion is probably correct because the one who
holds or is presenting it is poor.[15] When an argument holds that
a conclusion is likely to be true precisely because the one who
holds or is presenting it lacks authority, it is a fallacious appeal
to the common man.[5][16][17]
However, it is also a fallacious ad hominem argument to argue
that a person presenting statements lacks authority and thus
their arguments do not need to be considered.[18] As appeals to
a perceived lack of authority, these types of argument are
fallacious for much the same reasons as an appeal to
authority.[19]
Use in logic[edit]
It is fallacious to use any appeal to authority in the context of
logical reasoning. Because the argument from authority is not a
logical argument, in that it does not argue that something's negation
or affirmation constitutes a contradiction, it is fallacious to
assert that the conclusion must be true.[4] Such a determinative
assertion is a logical non sequitur, as the conclusion does not
follow unconditionally, in the sense of being logically
necessary.[20][21]
The only exception to this would be an authority that is
logically required to always be correct, such as
an omniscient being that does not lie.[22]

Notable examples[edit]
Inaccurate chromosome number[edit]
In 1923, leading American zoologist Theophilus
Painter declared, based on his findings, that humans had 24
pairs of chromosomes. From the 1920s to the 1950s, this
continued to be held based on Painter's authority,[23] despite
subsequent counts totaling the correct number of 23.[24] Even
textbooks with photos clearly showing 23 pairs incorrectly
declared the number to be 24 based on the authority of the
then-consensus of 24 pairs.[24]
As Robert Matthews said of the event, "Scientists had preferred
to bow to authority rather than believe the evidence of their
own eyes".[24] As such, their reasoning was an appeal to
authority.[25]
The tongue map[edit]
Another example is that of the tongue map, which purported to
show different areas of taste on the tongue. While it originated
from a misreading of the original text, it was taken up
in textbooks and the scientific literature[26] for nearly a century,
and remained even after being shown to be wrong in the
1970s[27][28] and despite being easily disproven on one's own
tongue.[29][30]
Surgical sterilization and puerperal
infections[edit]
In the mid-to-late 19th century a small minority of doctors, most
notably Ignaz Semmelweis, argued that puerperal fevers were
caused by an infection or toxin,[31] the spread of which was
preventable by aseptic techniques on the part of physicians, such as
hand washing with chlorine.[11] This view was largely
discounted because, as one 1843 paper noted, "writers of
authority...profess a disbelief in [such a] contagion", and
instead held that puerperal fevers were caused by
environmental factors which would render such techniques
irrelevant.[11] This was in spite of evidence against their
proposed explanations, such as Semmelweis' observations that
two side-by-side clinics had radically different rates
of puerperal infection, that puerperal infection was extremely
rare in births that took place outside of hospitals, and that
infection rates were unrelated to weather or seasonal
variations, all of which went against the prevailing explanation
of environmental causes such as miasma.[10]

Psychological basis[edit]
An integral part of the appeal to authority is the cognitive
bias known as the Asch effect.[25] In repeated and modified
instances of the Asch conformity experiments, it was found that
high-status individuals create a stronger likelihood of a subject
agreeing with an obviously false conclusion, despite the subject
normally being able to clearly see that the answer was
incorrect.[32]
Further, humans have been shown to feel strong emotional
pressure to conform to authorities and majority positions. A
repeat of the experiments by another group of researchers
found that "Participants reported considerable distress under
the group pressure", with 59% conforming at least once and
agreeing with the clearly incorrect answer, whereas the
incorrect answer was much more rarely given when no such
pressures were present.[33]
