Deductive reasoning
Deductive reasoning, also deductive logic, logical deduction or, informally, "top-down"
logic,[1] is the process of reasoning from one or more statements (premises) to reach a logically
certain conclusion.[2] It differs from inductive reasoning and abductive reasoning.
Deductive reasoning links premises with conclusions. If all premises are true, the terms are clear,
and the rules of deductive logic are followed, then the conclusion reached is necessarily true.
Deductive reasoning (top-down logic) contrasts with inductive reasoning (bottom-up logic) in the
following way: In deductive reasoning, a conclusion is reached reductively by applying general rules
that hold over the entirety of a closed domain of discourse, narrowing the range under consideration
until only the conclusion is left. In inductive reasoning, the conclusion is reached by generalizing or
extrapolating from specific cases, i.e., there is epistemic uncertainty. Note, however, that the inductive
reasoning mentioned here is not the same as the induction used in mathematical proofs; mathematical
induction is actually a form of deductive reasoning.
Contents
1 Simple example
2 Law of detachment
3 Law of syllogism
4 Law of contrapositive
5 Validity and soundness
6 History
7 Education
8 See also
9 References
10 Further reading
11 External links
Simple example
An example of a deductive argument:
1. All men are mortal.
2. Socrates is a man.
3. Therefore, Socrates is mortal.
Law of detachment
Main article: Modus ponens
The law of detachment (also known as affirming the antecedent and Modus ponens) is the first
form of deductive reasoning. A single conditional statement is made, and a hypothesis (P) is stated.
The conclusion (Q) is then deduced from the statement and the hypothesis. The most basic form is
listed below:
1. P → Q (conditional statement)
2. P (hypothesis stated)
3. Q (conclusion deduced)
In deductive reasoning, we can conclude Q from P by using the law of detachment.[3] However, if the
conclusion (Q) is given instead of the hypothesis (P), then there is no definitive conclusion.
The following is an example of an argument using the law of detachment in the form of an if-then
statement:
1. If an angle A measures greater than 90° and less than 180°, then A is an obtuse angle.
2. A = 120°.
3. Therefore, A is an obtuse angle.
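As an illustration (not part of the original article), the detachment rule can be sketched in a few lines of Python; the rule and facts here are invented examples:

```python
# Law of detachment (modus ponens): given a rule "if P then Q" and the
# fact P, conclude Q; given only Q, nothing can be detached.

def detach(rule, fact):
    """rule is a (P, Q) pair meaning 'if P then Q'."""
    premise, conclusion = rule
    if fact == premise:
        return conclusion  # P holds, so Q follows
    return None            # only Q (or something else) is given: no conclusion

rule = ("it is raining", "the ground is wet")
print(detach(rule, "it is raining"))      # -> the ground is wet
print(detach(rule, "the ground is wet"))  # -> None (the converse proves nothing)
```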
Law of syllogism
The law of syllogism takes two conditional statements and forms a conclusion by combining the
hypothesis of one statement with the conclusion of another. Here is the general form:
1. P → Q
2. Q → R
3. Therefore, P → R.
The following is an example:
1. A = B.
2. B = C.
3. Therefore A = C.
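The law of syllogism amounts to composing two conditionals. A minimal Python sketch (the rule pairs are illustrative, not from the article):

```python
# Law of syllogism: from "if P then Q" and "if Q then R", derive "if P then R".
def chain(rule1, rule2):
    p, q1 = rule1
    q2, r = rule2
    if q1 == q2:        # conclusion of the first matches the hypothesis of the second
        return (p, r)   # the combined conditional P -> R
    return None         # the rules do not link up

print(chain(("P", "Q"), ("Q", "R")))  # -> ('P', 'R')
```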
Law of contrapositive
Main article: Modus tollens
The law of contrapositive states that, in a conditional, if the conclusion is false, then
the hypothesis must be false also. The general form is the following:
1. P → Q.
2. ~Q.
3. Therefore, we can conclude ~P.
The following are examples:
1. If it is raining, then there are clouds in the sky.
2. There are no clouds in the sky.
3. Thus, it is not raining.
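The validity of this rule (modus tollens) can be checked mechanically with a truth table; the short Python sketch below, not part of the article, enumerates all assignments:

```python
from itertools import product

def implies(a, b):
    return (not a) or b  # material conditional

# Modus tollens is valid iff every assignment satisfying the premises
# (P -> Q and ~Q) also satisfies the conclusion (~P).
valid = all(
    not p
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q
)
print(valid)  # True: no counterexample exists
```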
Education
Deductive reasoning is generally thought of as a skill that develops without any formal teaching
or training. As a result of this belief, deductive reasoning skills are not taught in secondary schools,
where students are expected to use reasoning more often and at a higher level.[5] It is in high school,
for example, that students have an abrupt introduction to mathematical proofs, which rely heavily
on deductive reasoning.[5]
Fallacy
From Wikipedia, the free encyclopedia
A fallacy is the use of poor, or invalid, reasoning for the construction of an argument.[1][2] A fallacious
argument may be deceptive by appearing to be better than it really is. Some fallacies are committed
intentionally to manipulate or persuade by deception, while others are committed unintentionally due
to carelessness or ignorance.
Fallacies are commonly divided into "formal" and "informal". A formal fallacy can be expressed
neatly in a standard system of logic, such as propositional logic,[1] while an informal fallacy originates
in an error in reasoning other than an improper logical form.[3] Arguments containing informal fallacies
may be formally valid, but still fallacious.[4]
Contents
1 Formal fallacy
o 1.1 Common examples
2 Aristotle's Fallacies
3 Whately's grouping of fallacies
4 Intentional fallacies
5 Deductive fallacy
6 Paul Meehl's Fallacies
7 Fallacies of Measurement
8 Other systems of classification
9 Assessment of Fallacies - Pragmatic Theory
10 See also
11 References
12 Further reading
13 External links
Formal fallacy
Main article: Formal fallacy
A formal fallacy is a common error of thinking that can neatly be expressed in a standard system of
logic.[1] An argument that is formally fallacious is rendered invalid by a flaw in its logical structure.
Such an argument is always considered to be wrong.
The presence of a formal fallacy in a deductive argument does not imply anything about the
argument's premises or its conclusion. Both may actually be true, or may even be more probable as
a result of the argument; but the deductive argument is still invalid because the conclusion does not
follow from the premises in the manner described. By extension, an argument can contain a formal
fallacy even if the argument is not a deductive one: for instance, an inductive argument that
incorrectly applies principles of probability or causality can be said to commit a formal fallacy.
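To make the notion of a purely structural error concrete, here is a small truth-table search (an illustrative sketch, not from the article) showing that affirming the consequent is formally invalid:

```python
from itertools import product

def implies(a, b):
    return (not a) or b  # material conditional

# Affirming the consequent: from "P -> Q" and "Q", infer "P".
# The form is invalid iff some assignment makes the premises true
# and the conclusion false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)  # [(False, True)]: Q can hold without P
```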
Common examples
Main article: List of fallacies § Formal fallacies
Aristotle's Fallacies
Aristotle was the first to systematize logical errors into a list. Aristotle's "Sophistical Refutations" (De
Sophisticis Elenchis) identifies thirteen fallacies. He divided them up into two major types, those
depending on language and those not depending on language.[5] These fallacies are called verbal
fallacies and material fallacies, respectively. A material fallacy is an error in what the arguer is talking
about, while a verbal fallacy is an error in how the arguer is talking. Verbal fallacies are those in
which a conclusion is obtained by improper or ambiguous use of words.[6]
Deductive fallacy
Main articles: Deductive fallacy and Formal fallacy
In philosophy, the term formal fallacy is used for logical fallacies and is defined formally as: a flaw in the
structure of a deductive argument which renders the argument invalid. The term is preferred because logic
is the use of valid reasoning, while a fallacy is an argument that uses poor reasoning, so the
term "logical fallacy" is, strictly speaking, an oxymoron. However, the same terms are used in informal
discourse to mean an argument which is problematic for any reason. A logical form such as "A and B" is
independent of any particular conjunction of meaningful propositions. Logical form alone can
guarantee that given true premises, a true conclusion must follow. However, formal logic makes no
such guarantee if any premise is false; the conclusion can then be either true or false. Any formal error or
logical fallacy similarly invalidates the deductive guarantee. The argument must be valid and all its
premises must be true for the conclusion to be guaranteed true.
Paul Meehl's Fallacies
Barnum effect: Making a statement that is trivial, and true of everyone, e.g. of all patients, but
which appears to have special significance to the diagnosis.
Sick-sick fallacy ("pathological set"): The tendency to generalize from personal experiences of
health and ways of being, to the identification of others who are different from ourselves as
being "sick". Meehl emphasizes that though psychologists claim to know about this tendency,
most are not very good at correcting it in their own thinking.
"Me too" fallacy: The opposite of Sick-sick. Imagining that "everyone does this" and thereby
minimizing a symptom without assessing the probability of whether a mentally healthy person
would actually do it. A variation of this is Uncle George's pancake fallacy. This minimizes a
symptom through reference to a friend/relative who exhibited a similar symptom, thereby
implying that it is normal. Meehl points out that one should consider that the friend/relative may
also be unhealthy, rather than concluding that the patient is healthy by comparison.
Multiple Napoleons fallacy: "It's not real to us, but it's 'real' to him." A relativism that Meehl sees
as a waste of time. There is a distinction between reality and delusion that is important to make
when assessing a patient and so the consideration of comparative realities can mislead and
distract from the importance of a patient's delusion to a diagnostic decision.
Hidden decisions: Decisions based on factors that we do not own up to or challenge, and that, for
example, result in the placing of middle- and upper-class patients in therapy while lower-class
patients are given medication. Meehl identifies these decisions as related to an implicit ideal
patient who is young, attractive, verbal, intelligent, and successful (YAVIS). He sees YAVIS
patients as being preferred by psychotherapists because they can pay for long-term treatment
and are more enjoyable to interact with.
The spun-glass theory of the mind: The belief that the human organism is so fragile that minor
negative events, such as criticism, rejection, or failure, are bound to cause major trauma to the
system. Essentially not giving humans, and sometimes patients, enough credit for their
resilience and ability to recover.[11]
Fallacies of Measurement
Increasing availability and circulation of big data are driving proliferation of new metrics for scholarly
authority,[12][13] and there is lively discussion regarding the relative usefulness of such metrics for
measuring the value of knowledge production in the context of an "information
tsunami."[14] Where mathematical fallacies are subtle mistakes in reasoning leading to invalid
mathematical proofs, measurement fallacies are unwarranted inferential leaps involved in the
extrapolation of raw data to a measurement-based value claim. The ancient Greek
Sophist Protagoras was one of the first thinkers to propose that humans can generate reliable
measurements through his "human-measure" principle and the practice of dissoi logoi (arguing
multiple sides of an issue).[15][16] This history helps explain why measurement fallacies are informed
by informal logic and argumentation theory.
Anchoring fallacy: Anchoring is a cognitive bias, first theorized by Amos Tversky and Daniel
Kahneman, that "describes the common human tendency to rely too heavily on the first piece of
information offered (the 'anchor') when making decisions." In measurement arguments,
anchoring fallacies can occur when unwarranted weight is given to data generated by metrics
that the arguers themselves acknowledge are flawed. For example, limitations of the Journal
Impact Factor (JIF) are well documented,[17] and even JIF pioneer Eugene Garfield notes, "while
citation data create new tools for analyses of research performance, it should be stressed that
they supplement rather than replace other quantitative and qualitative indicators."[18] To the
extent that arguers jettison acknowledged limitations of JIF-generated data in evaluative
judgments, or leave behind Garfield's "supplement rather than replace" caveat, they court
commission of anchoring fallacies.
Naturalistic Fallacy: In the context of measurement, a naturalistic fallacy can occur in a
reasoning chain that makes an unwarranted extrapolation from "is" to "ought," as in the case of
sheer quantity metrics based on the premise "more is better"[14] or, in the case of developmental
assessment in the field of psychology, "higher is better."[19]
False Analogy: In the context of measurement, this error in reasoning occurs when claims are
supported by unsound comparisons between data points, hence the false analogy's informal
nickname of the "apples and oranges" fallacy.[20] For example, the Scopus and Web of
Science bibliographic databases have difficulty distinguishing between citations of scholarly
work that are arms-length endorsements, ceremonial citations, or negative citations (indicating
the citing author withholds endorsement of the cited work).[21] Hence, measurement-based value
claims premised on the uniform quality of all citations may be questioned on false analogy
grounds.
Argumentum ex Silentio: An argument from silence features an unwarranted conclusion
advanced based on the absence of data. For example, Academic Analytics' Faculty Scholarly
Productivity Index purports to measure overall faculty productivity, yet the tool does not capture
data based on citations in books. This creates a possibility that low productivity measurements
using the tool may constitute argumentum ex silentio fallacies, to the extent that such
measurements are supported by the absence of book citation data.
Ecological Fallacy: An ecological fallacy is committed when one draws an inference from data
based on the premise that qualities observed for groups necessarily hold for individuals; for
example, "if countries with more Protestants tend to have higher suicide rates, then Protestants
must be more likely to commit suicide."[22] In metrical argumentation, ecological fallacies can be
committed when one measures scholarly productivity of a sub-group of individuals (e.g. "Puerto
Rican" faculty) via reference to aggregate data about a larger and different group (e.g.
"Hispanic" faculty).[23]
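The ecological fallacy can be illustrated with invented numbers: a comparison that holds between group aggregates can fail for most individuals within the groups.

```python
# Hypothetical data: group_1 has the higher mean, yet two of its three
# members score below every member of group_2.
groups = {
    "group_1": [0, 0, 10],
    "group_2": [3, 3, 3],
}
means = {g: sum(vals) / len(vals) for g, vals in groups.items()}

below = sum(1 for v in groups["group_1"] if v < min(groups["group_2"]))
print(means["group_1"] > means["group_2"], below)  # True 2
```

Inferring from the aggregate ("group_1 outperforms group_2") to the individual ("a group_1 member outperforms a group_2 member") is exactly the unwarranted leap described above.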
See also
Logic portal
Thinking portal
Psychology portal
Lists
Association fallacy
Cogency
Cognitive bias
Cognitive distortion
Demagogy
Evidence
Fallacies of definition
False premise
False statement
Invalid proof
Mathematical fallacy
Paradox
Prosecutor's fallacy
Sophism
Soundness
Truth
Validity
Victim blaming
Works
References
1. ^ Harry J. Gensler, The A to Z of Logic (2010), p. 74. Rowman & Littlefield. ISBN 9780810875968.
2. ^ John Woods, The Death of Argument (2004). Applied Logic Series, Volume 32, pp. 3–23. ISBN 9789048167005.
3. ^ "Informal Fallacies, Northern Kentucky University". Retrieved 2013-09-10.
4. ^ "Internet Encyclopedia of Philosophy, The University of Tennessee at Martin". Retrieved 2013-09-10.
5. ^ "Aristotle's original 13 fallacies". The Non Sequitur. Retrieved 2013-05-28.
6. ^ "PHIL 495: Philosophical Writing (Spring 2008), Texas A&M University". Retrieved 2013-09-10.
7. ^ Frans H. van Eemeren, Bart Garssen, Bert Meuffels (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules, p. 8. ISBN 9789048126149.
8. ^ Coffey, P. (1912). The Science of Logic. Longmans, Green, and Company. p. 302. LCCN 12018756.
9. ^ Ed Shewan (2003). Applications of Grammar: Principles of Effective Communication (2nd ed.). Christian Liberty Press. pp. 92 ff. ISBN 1-930367-28-7.
10. ^ Boyer, Web. "How to Be Persuasive". Retrieved 12/05/2012.
11. ^ Meehl, P. E. (1973). Psychodiagnosis: Selected Papers. Minneapolis (MN): University of Minnesota Press, pp. 225–302.
12. ^ Meho, Lokman (2007). "The Rise and Rise of Citation Analysis" (PDF). Physics World, January: 32–36. Retrieved October 28, 2013.
13. ^ Jensen, Michael (June 15, 2007). "The New Metrics of Scholarly Authority". Chronicle Review. Retrieved 28 October 2013.
14. ^ Baveye, Phillippe C. (2010). "Sticker Shock and Looming Tsunami: The High Cost of Academic Serials in Perspective". Journal of Scholarly Publishing 41: 191–215. doi:10.1353/scp.0.0074.
15. ^ Schiappa, Edward (1991). Protagoras and Logos: A Study in Greek Philosophy and Rhetoric. Columbia, SC: University of South Carolina Press. ISBN 0872497585.
16. ^ Protagoras (1972). The Older Sophists. Indianapolis, IN: Hackett Publishing Co. ISBN 0872205568.
17. ^ National Communication Journal (2013). Impact Factors, Journal Quality, and Communication Journals: A Report for the Council of Communication Associations (PDF). Washington, D.C.: National Communication Association.
18. ^ Garfield, Eugene (1993). "What Citations Tell Us About Canadian Research". Canadian Journal of Library and Information Science 18 (4): 34.
19. ^ Stein, Zachary (October 2008). "Myth Busting and Metric Making: Refashioning the Discourse about Development". Integral Leadership Review 8 (5). Retrieved 28 October 2013.
20. ^ Kornprobst, Markus (2007). "Comparing Apples and Oranges? Leading and Misleading Uses of Historical Analogies". Millennium – Journal of International Studies 36: 29–49. doi:10.1177/03058298070360010301. Retrieved 29 October 2013.
21. ^ Meho, Lokman (2007). "The Rise and Rise of Citation Analysis" (PDF). Physics World, January: 32. Retrieved October 28, 2013.
22. ^ Freedman, David A. (2004). In Michael S. Lewis-Beck, Alan Bryman & Tim Futing Liao (eds.), Encyclopedia of Social Science Research Methods. Thousand Oaks, CA: Sage. pp. 293–295. ISBN 0761923632.
23. ^ Allen, Henry L. (1997). "Faculty Workload and Productivity: Ethnic and Gender Disparities" (PDF). NEA 1997 Almanac of Higher Education: 39. Retrieved 29 October 2013.
24. ^ Walton, Douglas (1995). A Pragmatic Theory of Fallacy. Tuscaloosa: University of Alabama Press.
Fearnside, W. Ward and William B. Holther, Fallacy: The Counterfeit of Argument, 1959.
Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005. ISBN 87-991013-7-8.
D. H. Fischer, Historians' Fallacies: Toward a Logic of Historical Thought, Harper Torchbooks, 1970.
Warburton, Nigel, Thinking from A to Z, Routledge, 1998.
T. Edward Damer, Attacking Faulty Reasoning, 5th Edition, Wadsworth, 2005. ISBN 0-534-60516-8.
Sagan, Carl, The Demon-Haunted World: Science As a Candle in the Dark. Ballantine Books, March 1997. ISBN 0-345-40946-9, 480 pp. 1996 hardback edition: Random House, ISBN 0-394-53512-X, xv+457 pages plus addenda insert (some printings). Ch. 12.
Further reading
C. L. Hamblin, Fallacies, Methuen London, 1970. Reprinted by Vale Press in 1998 as ISBN 0-916475-24-7.
Hans V. Hansen; Robert C. Pinto (1995). Fallacies: classical and contemporary readings. Penn
State Press. ISBN 978-0-271-01417-3.
Frans van Eemeren; Bart Garssen; Bert Meuffels (2009). Fallacies and Judgments of
Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion.
Springer. ISBN 978-90-481-2613-2.
Douglas N. Walton, Informal logic: A handbook for critical argumentation. Cambridge University
Press, 1989.
Douglas, Walton (1987). Informal Fallacies. Amsterdam: John Benjamins.
Walton, Douglas (1995). A Pragmatic Theory of Fallacy. Tuscaloosa: University of Alabama
Press.
Walton, Douglas (2010). "Why Fallacies Appear to Be Better Arguments than They
Are". Informal Logic 30 (2): 159–184.
John Woods (2004). The death of argument: fallacies in agent based reasoning.
Springer. ISBN 978-1-4020-2663-8.
Top-down and bottom-up design
Top-down and bottom-up are both strategies of information processing and knowledge ordering,
used in a variety of fields including software, humanistic and scientific theories (see systemics), and
management and organization. In practice, they can be seen as a style of thinking and teaching.
A top-down approach (also known as stepwise design and in some cases used as a synonym
of decomposition) is essentially the breaking down of a system to gain insight into its compositional
sub-systems. In a top-down approach an overview of the system is formulated, specifying but not
detailing any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes
in many additional subsystem levels, until the entire specification is reduced to base elements. A
top-down model is often specified with the assistance of "black boxes", which make it easier to
manipulate. However, black boxes may fail to elucidate elementary mechanisms or be detailed
enough to realistically validate the model. A top-down approach starts with the big picture and
breaks it down from there into smaller segments.[1]
A bottom-up approach is the piecing together of systems to give rise to more complex systems,
thus making the original systems sub-systems of the emergent system. Bottom-up processing is a
type of information processing based on incoming data from the environment to form a perception.
From a Cognitive Psychology perspective, information enters the eyes in one direction (sensory
input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and
recognized as a perception (output that is "built up" from processing to final cognition). In a bottom-
up approach the individual base elements of the system are first specified in great detail. These
elements are then linked together to form larger subsystems, which then in turn are linked,
sometimes in many levels, until a complete top-level system is formed. This strategy often
resembles a "seed" model, whereby the beginnings are small but eventually grow in complexity and
completeness. However, "organic strategies" may result in a tangle of elements and subsystems,
developed in isolation and subject to local optimization as opposed to meeting a global purpose.
During the design and development of new products, designers and engineers rely on both a
bottom-up and top-down approach. The bottom-up approach is used when off-the-shelf or
existing components are selected and integrated into the product. An example would include
selecting a particular fastener, such as a bolt, and designing the receiving components such that the
fastener will fit properly. In a top-down approach, a custom fastener would be designed such that it
would fit properly in the receiving components.[2] For perspective, for a product with more restrictive
requirements (such as weight, geometry, safety, environment, etc.), such as a space-suit, a more
top-down approach is taken and almost everything is custom designed. However, when it's more
important to minimize cost and increase component availability, such as with manufacturing
equipment, a more bottom-up approach would be taken, and as many off-the-shelf components
(bolts, gears, bearings, etc.) would be selected as possible. In the latter case, the receiving housings
would be designed around the selected components.
Computer science
Software development
Part of this section is from the Perl Design Patterns Book.
In the software development process, the top-down and bottom-up approaches play a key role.
Top-down approaches emphasize planning and a complete understanding of the system. It is
inherent that no coding can begin until a sufficient level of detail has been reached in the design
of at least some part of the system. Top-down approaches are implemented by attaching stubs
in place of yet-unwritten modules. This, however, delays testing of the ultimate functional units of a
system until significant design is complete. Bottom-up emphasizes coding and early testing,
which can begin as soon as the first module has been specified. This approach, however, runs
the risk that modules may be coded without having a clear idea of how they link to other parts of
the system, and that such linking may not be as easy as first thought. Re-usability of code is one
of the main benefits of the bottom-up approach.[3]
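The stub technique described above can be sketched in Python (all function names here are invented for illustration):

```python
# Top-down style: the high-level flow is specified first, while the
# lower-level modules exist only as stubs to be filled in later.

def parse_input(raw):
    raise NotImplementedError("parser not yet designed")    # stub

def render_report(data):
    raise NotImplementedError("renderer not yet designed")  # stub

def generate_report(raw):
    """Top-level logic, complete before any leaf module is coded."""
    return render_report(parse_input(raw))

# A bottom-up style would instead implement and test parse_input and
# render_report first, and write generate_report last.
```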
Top-down design was promoted in the 1970s by IBM researchers Harlan Mills and Niklaus
Wirth. Mills developed structured programming concepts for practical use and tested them in a
1969 project to automate the New York Times morgue index. The engineering and management
success of this project led to the spread of the top-down approach through IBM and the rest of
the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal
programming language, wrote the influential paper Program Development by Stepwise
Refinement. Since Niklaus Wirth went on to develop languages such
as Modula and Oberon (where one could define a module before knowing about the entire
program specification), one can infer that top-down programming was not strictly what he
promoted. Top-down methods were favored in software engineering until the late
1980s,[3] and object-oriented programming assisted in demonstrating the idea that both aspects
of top-down and bottom-up programming could be utilized.
Modern software design approaches usually combine both top-down and bottom-up
approaches. Although an understanding of the complete system is usually considered necessary
for good design, leading theoretically to a top-down approach, most software projects attempt to
make use of existing code to some degree. Pre-existing modules give designs a bottom-up
flavor. Some design approaches also use an approach where a partially functional system is
designed and coded to completion, and this system is then expanded to fulfill all the
requirements for the project.
Nanotechnology
[Image caption: Building blocks are an example of bottom-up design because the parts are first created and then assembled.]
Top-down and bottom-up are two approaches for the manufacture of products. These terms
were first applied to the field of nanotechnology by the Foresight Institute in 1989 in order to
distinguish between molecular manufacturing (to mass-produce large atomically precise objects)
and conventional manufacturing (which can mass-produce large objects that are not atomically
precise). Bottom-up approaches seek to have smaller (usually molecular) components built up
into more complex assemblies, while top-down approaches seek to create nanoscale devices by
using larger, externally controlled ones to direct their assembly.
The top-down approach often uses the traditional workshop or microfabrication methods where
externally controlled tools are used to cut, mill, and shape materials into the desired shape and
order. Micropatterning techniques, such as photolithography and inkjet printing belong to this
category.
Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause
single-molecule components to (a) self-organize or self-assemble into some useful
conformation, or (b) rely on positional assembly. These approaches utilize the concepts
of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry.
Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel
and much cheaper than top-down methods, but could potentially be overwhelmed as the size
and complexity of the desired assembly increases.
Neuroscience and psychology
[Image caption: An example of top-down processing: even though the second letter in each word is ambiguous, top-down processing allows for easy identification from context.]
These terms are also employed in neuroscience, cognitive neuroscience and cognitive
psychology to discuss the flow of information in processing.[4] Typically sensory input is
considered "down", and higher cognitive processes, which have more information from other
sources, are considered "up". A bottom-up process is characterized by an absence of higher
level direction in sensory processing, whereas a top-down process is characterized by a high
level of direction of sensory processing by more cognition, such as goals or targets (Beiderman,
19).[3]
According to Psychology notes written by Dr. Charles Ramskov, a Psychology professor at De
Anza College, Rock, Neiser, and Gregory claim that top-down approach involves perception that
is an active and constructive process.[5] Additionally, it is an approach not directly given by
stimulus input, but is the result of stimulus, internal hypotheses, and expectation interactions.
According to Theoretical Synthesis, "when a stimulus is presented short and clarity is uncertain
that gives a vague stimulus, perception becomes a top-down approach."[6]
Conversely, Psychology defines bottom-up processing as an approach wherein there is a
progression from the individual elements to the whole. According to Ramskov, one proponent of
bottom-up approach, Gibson, claims that it is a process that includes visual perception that
needs information available from proximal stimulus produced by the distal stimulus.[7] Theoretical
Synthesis also claims that bottom-up processing occurs "when a stimulus is presented long and
clearly enough."[6]
Cognitively speaking, certain cognitive processes, such as fast reactions or quick visual
identification, are considered bottom-up processes because they rely primarily on sensory
information, whereas processes such as motor control and directed attention are considered
top-down because they are goal directed. Neurologically speaking, some areas of the brain,
such as area V1 mostly have bottom-up connections.[6] Other areas, such as the fusiform
gyrus have inputs from higher brain areas and are considered to have top-down influence.[8]
The study of visual attention provides an example. If your attention is drawn to a flower in a field,
it may be because the color or shape of the flower are visually salient. The information that
caused you to attend to the flower came to you in a bottom-up fashion; your attention was not
contingent upon knowledge of the flower; the outside stimulus was sufficient on its own. Contrast
this situation with one in which you are looking for a flower. You have a representation of what
you are looking for. When you see the object you are looking for, it is salient. This is an example
of the use of top-down information.
In cognitive terms, two thinking approaches are distinguished. "Top-down" (or "big chunk") is
stereotypically the visionary, or the person who sees the larger picture and overview. Such
people focus on the big picture and from that derive the details to support it. "Bottom-up" (or
"small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape.
The expression "seeing the wood for the trees" references the two styles of cognition.[9]
Architecture
Often, the École des Beaux-Arts school of design is said to have primarily promoted top-down
design because it taught that an architectural design should begin with a parti, a basic plan
drawing of the overall project.
By contrast, the Bauhaus focused on bottom-up design. This method manifested itself in the
study of translating small-scale organizational systems to a larger, more architectural scale (as
with the wood-panel carving and furniture design).
Ecology
In ecology, top-down control refers to when a top predator controls the structure or population
dynamics of the ecosystem. The classic example is of kelp forest ecosystems. In such
ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp.
When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin
barrens. In other words, such ecosystems are not controlled by the productivity of the kelp but
rather by a top predator.
Bottom-up control in ecosystems refers to ecosystems in which the nutrient supply, productivity,
and type of primary producers (plants and phytoplankton) control the ecosystem
structure. An example would be how plankton populations are controlled by the availability of
nutrients. Plankton populations tend to be higher and more complex in areas where upwelling
brings nutrients to the surface.
There are many different examples of these concepts. It is common for populations to be
influenced by both types of control.
Abductive reasoning
Contents
1 History
2 Deduction, induction, and abduction
3 Formalizations of abduction
o 3.1 Logic-based abduction
o 3.2 Set-cover abduction
o 3.3 Abductive validation
o 3.4 Probabilistic abduction
o 3.5 Subjective logic abduction
4 History
o 4.1 1867
o 4.2 1878
o 4.3 1883
o 4.4 1902 and after
o 4.5 Pragmatism
o 4.6 Three levels of logic about abduction
4.6.1 Classification of signs
4.6.2 Critique of arguments
4.6.3 Methodology of inquiry
o 4.7 Other writers
5 Applications
6 See also
7 References
8 Notes
9 External links
History
The American philosopher Charles Sanders Peirce (1839–1914) first introduced the term as
"guessing".[7] Peirce said that to abduce a hypothetical explanation a from an observed surprising
circumstance b is to surmise that a may be true because then b would be a matter of
course.[8] Thus, to abduce a from b involves determining that a is sufficient (or nearly
sufficient), but not necessary, for b.
For example, suppose we observe that the lawn is wet. If it rained last night, then it would be
unsurprising that the lawn is wet. Therefore, by abductive reasoning, the possibility that it rained last
night is reasonable (but note that Peirce did not remain convinced that a single logical form covers
all abduction).[9] Moreover, abducing it rained last night from the observation of the wet lawn can lead
to a false conclusion. In this example, dew, lawn sprinklers, or some other process may have
resulted in the wet lawn, even in the absence of rain.
Peirce argues that good abductive reasoning from P to Q involves not simply a determination
that Q is sufficient for P, but also that Q is among the most economical explanations for P.
Simplification and economy both call for that "leap" of abduction.[10]
Formalizations of abduction
Logic-based abduction
In logic, explanation is done from a logical theory T representing a domain and a set of
observations O. Abduction is the process of deriving a set of explanations of O
according to T and picking out one of those explanations. For E to be an explanation
of O according to T, it should satisfy two conditions:
O follows from E and T;
E is consistent with T.
In formal logic, O and E are assumed to be sets of literals. The two conditions for E
being an explanation of O according to theory T are formalized as:
T ∪ E entails O;
T ∪ E is consistent.
Among the possible explanations E satisfying these two conditions, some
other condition of minimality is usually imposed to avoid irrelevant facts (not
contributing to the entailment of O) being included in the explanations.
Abduction is then the process that picks out some member of E. Criteria for
picking out a member representing "the best" explanation include the simplicity,
the prior probability, or the explanatory power of the explanation.
A proof-theoretical abduction method for first-order classical logic based on
the sequent calculus, and a dual one based on semantic tableaux (analytic
tableaux), have been proposed (Cialdea Mayer & Pirri 1993). The methods are
sound and complete and work for full first-order logic, without requiring any
preliminary reduction of formulae into normal forms. These methods have also
been extended to modal logic.
Abductive logic programming is a computational framework that extends
normal logic programming with abduction. It separates the theory into two
components, one of which is a normal logic program, used to generate candidate
explanations by means of backward reasoning, and the other of which is a set of
integrity constraints, used to filter the set of candidate explanations.
Set-cover abduction
A different formalization of abduction is based on inverting the function that
calculates the visible effects of the hypotheses. Formally, we are given a set of
hypotheses H and a set of manifestations M; they are related by the domain
knowledge, represented by a function e that takes as an argument a set of
hypotheses and gives as a result the corresponding set of manifestations. In
other words, for every subset of the hypotheses H' ⊆ H, their effects are
known to be e(H').
Abduction is performed by finding a set H' ⊆ H such that M ⊆ e(H'). In
other words, abduction is performed by finding a set of hypotheses H' such
that their effects e(H') include all observations M.
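A minimal sketch of this set-cover view, with a hypothetical effects table standing in for the domain knowledge e (the hypothesis names and their effects are invented for illustration); the effects of the hypotheses are assumed independent, so e(H') is computed as the union of the individual effects.

```python
from itertools import combinations

# Hypothetical effects function e: hypothesis -> set of manifestations it causes.
EFFECTS = {
    "flu":     {"fever", "cough"},
    "covid":   {"fever", "cough", "anosmia"},
    "allergy": {"sneezing"},
}

def effects(hyps):
    # Independence assumption: e(H') is the union of the individual effects.
    out = set()
    for h in hyps:
        out |= EFFECTS[h]
    return out

def abduce(manifestations):
    """Smallest hypothesis sets H' with manifestations ⊆ e(H')."""
    for k in range(1, len(EFFECTS) + 1):
        hits = [set(c) for c in combinations(EFFECTS, k)
                if manifestations <= effects(c)]
        if hits:
            return hits
    return []

# Two minimal explanations: {flu, allergy} and {covid, allergy}
print(abduce({"fever", "cough", "sneezing"}))
```

Searching by increasing size k means the first non-empty layer already contains only minimal covers, which mirrors the minimality condition of the logic-based formulation.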
A common assumption is that the effects of the hypotheses are independent, that is,
for every H' ⊆ H, the set e(H') is the union of the sets e({h}) for h ∈ H'. If this
condition is met, abduction can be seen as a form of set covering.
Probabilistic abduction
Probabilistic abduction can be illustrated with a medical test. Let x denote the
hypothesis that a patient is infected and y the observation of a positive test. The
sought conditional p(x | y) is obtained from the known conditionals by Bayes' theorem:

p(x | y) = a(x) p(y | x) / p(y).

The term a(x) on the right-hand side of the equation expresses the base rate of
the infection in the population. Similarly, the term p(y) expresses the default
likelihood of a positive test on a random person in the population. In the
expressions below, a(x) and a(x̄) denote the base rates of x and its complement x̄
respectively, so that e.g. a(x̄) = 1 − a(x). The full expressions for the
required conditionals p(x | y) and p(x | ȳ) are then

p(x | y) = a(x) p(y | x) / (a(x) p(y | x) + a(x̄) p(y | x̄));
p(x | ȳ) = a(x) p(ȳ | x) / (a(x) p(ȳ | x) + a(x̄) p(ȳ | x̄)).
Probabilistic abduction can thus be described as a method for inverting
conditionals in order to apply probabilistic deduction.
A medical test result is typically considered positive or negative, so when
applying the above equations it can be assumed that either y (positive) or
ȳ (negative) has been observed. In case the patient tests positive, the first of
the above equations applies, which will give the
correct likelihood that the patient actually is infected.
The base rate fallacy in medicine,[13] or the prosecutor's fallacy[14] in legal
reasoning, consists of making the erroneous assumption
that p(x | y) = p(y | x). While this reasoning error often can produce a
relatively good approximation of the correct hypothesis probability value, it can
lead to a completely wrong result and wrong conclusion in case the base rate is
very low and the reliability of the test is not perfect. An extreme example of the
base rate fallacy is to conclude that a male person is pregnant just because he
tests positive in a pregnancy test. Obviously, the base rate of male pregnancy is
zero, and assuming that the test is not perfect, it would be correct to conclude
that the male person is not pregnant.
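The gap between p(x | y) and p(y | x) can be made concrete with a small numeric sketch; the base rate, sensitivity, and false-positive rate below are made-up illustrative figures, not real medical data.

```python
# Hypothetical numbers: base rate a(x), sensitivity p(y|x), false-positive rate p(y|x̄).
base_rate = 0.001        # a(x): 1 in 1000 people infected
p_pos_given_inf = 0.99   # p(y|x): test detects almost all infections
p_pos_given_not = 0.05   # p(y|x̄): 5% of healthy people still test positive

# p(y): default likelihood of a positive test on a random person
p_pos = base_rate * p_pos_given_inf + (1 - base_rate) * p_pos_given_not

# Abduction: invert the conditional with Bayes' theorem to get p(x|y)
p_inf_given_pos = base_rate * p_pos_given_inf / p_pos

print(round(p_inf_given_pos, 4))  # 0.0194, nowhere near p(y|x) = 0.99
```

Even with a highly reliable test, the low base rate keeps the posterior probability of infection under 2%, which is exactly the error the base rate fallacy invites.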
The expression for probabilistic abduction can be generalised to multinomial
cases,[15] i.e., with a state space X of multiple hypothesis states x_i and a state
space Y of multiple observable states y_j.
Subjective logic abduction
Subjective logic generalises probabilistic logic by including parameters for
uncertainty in the input arguments. Abduction in subjective logic is thus similar
to probabilistic abduction described above.[15] The input arguments in subjective
logic are composite functions called subjective opinions which can be binomial
when the opinion applies to a single proposition or multinomial when it applies
to a set of propositions. A multinomial opinion thus applies to a frame X (i.e. a
state space of exhaustive and mutually disjoint propositions x_i), and is denoted
as ω_X.
Assume the frames X and Y, the set of conditional opinions ω(Y|X),
the opinion ω_Y on Y, and the base rate function a_X on X.
Based on these parameters, subjective logic provides a method for deriving the
set of inverted conditionals ω(X|Y). Using these inverted
conditionals, subjective logic also provides a method for deduction. Abduction in
subjective logic thus consists of inverting the conditionals and then applying
deduction: the abduced opinion on X is obtained by applying the deduction
operator to the inverted conditionals ω(X|Y) together with the evidence
opinion ω_Y.[15]
The advantage of using subjective logic abduction compared to probabilistic
abduction is that uncertainty about the probability values of the input arguments
can be explicitly expressed and taken into account during the analysis. It is thus
possible to perform abductive analysis in the presence of missing or incomplete
input evidence, which normally results in degrees of uncertainty in the output
conclusions.
History
The philosopher Charles Sanders Peirce (/pɜːrs/; 1839–1914) introduced
abduction into modern logic. Over the years he called such
inference hypothesis, abduction, presumption, and retroduction. He considered it
a topic in logic as a normative field in philosophy, not in purely formal or
mathematical logic, and eventually as a topic also in the economics of research.
As two stages of the development, extension, etc., of a hypothesis in scientific
inquiry, abduction and also induction are often collapsed into one overarching
concept: the hypothesis. That is why, in the scientific method pioneered
by Galileo and Bacon, the abductive stage of hypothesis formation is
conceptualized simply as induction. Thus, in the twentieth century this collapse
was reinforced by Karl Popper's explication of the hypothetico-deductive model,
where the hypothesis is considered to be just "a guess"[16] (in the spirit of
Peirce). However, when the formation of a hypothesis is considered the result of
a process it becomes clear that this "guess" has already been tried and made
more robust in thought as a necessary stage of its acquiring the status of
hypothesis. Indeed, many abductions are rejected or heavily modified by
subsequent abductions before they ever reach this stage.
Before 1900, Peirce treated abduction as the use of a known rule to explain an
observation, e.g., it is a known rule that if it rains the grass is wet; so, to explain
the fact that the grass is wet, one infers that it has rained. This remains the
common use of the term "abduction" in the social sciences and in artificial
intelligence.
Peirce consistently characterized it as the kind of inference that originates a
hypothesis by concluding in an explanation, though an unassured one, for some
very curious or surprising (anomalous) observation stated in a premise. As early
as 1865 he wrote that all conceptions of cause and force are reached through
hypothetical inference; in the 1900s he wrote that all explanatory content of
theories is reached through abduction. In other respects Peirce revised his view
of abduction over the years.[17]
In later years his view came to be:
Deduction.
Rule: All the beans from this bag are white.
Case: These beans are from this bag.
Result: These beans are white.

Induction.
Case: These beans are [randomly selected] from this bag.
Result: These beans are white.
Rule: All the beans from this bag are white.

Hypothesis (abduction).
Rule: All the beans from this bag are white.
Result: These beans [oddly] are white.
Case: These beans are from this bag.
1883
Peirce long treated abduction in terms of induction from characters or traits
(weighed, not counted like objects), explicitly so in his influential 1883 "A Theory
of Probable Inference", in which he returns to involving probability in the
hypothetical conclusion.[32] Like "Deduction, Induction, and Hypothesis" in 1878,
it was widely read (see the historical books on statistics by Stephen Stigler),
unlike his later amendments of his conception of abduction. Today abduction
remains most commonly understood as induction from characters and
extension of a known rule to cover unexplained circumstances.
1902 and after
In 1902 Peirce wrote that he now regarded the syllogistical forms and the
doctrine of extension and comprehension (i.e., objects and characters as
referenced by terms), as being less fundamental than he had earlier
thought.[33] In 1903 he offered the following form for abduction:[8]
The surprising fact, C, is observed;
But if A were true, C would be a matter of course,
Hence, there is reason to suspect that A is true.
The hypothesis is framed, but not asserted, in a premise, then asserted
as rationally suspectable in the conclusion. Thus, as in the earlier
categorical syllogistic form, the conclusion is formulated from some
premise(s). But all the same the hypothesis consists more clearly than
ever in a new or outside idea beyond what is known or observed.
Induction in a sense goes beyond observations already reported in the
premises, but it merely amplifies ideas already known to represent
occurrences, or tests an idea supplied by hypothesis; either way it
requires previous abductions in order to get such ideas in the first
place. Induction seeks facts to test a hypothesis; abduction seeks a
hypothesis to account for facts.
Note that the hypothesis ("A") could be of a rule. It need not even be a
rule strictly necessitating the surprising observation ("C"), which needs
to follow only as a "matter of course"; or the "course" itself could
amount to some known rule, merely alluded to, and also not necessarily
a rule of strict necessity. In the same year, Peirce wrote that reaching a
hypothesis may involve placing a surprising observation under either a
newly hypothesized rule or a hypothesized combination of a known rule
with a peculiar state of facts, so that the phenomenon would be not
surprising but instead either necessarily implied or at least likely.[31]
Peirce did not remain quite convinced about any such form as the
categorical syllogistic form or the 1903 form. In 1911, he wrote, "I do
not, at present, feel quite convinced that any logical form can be
assigned that will cover all 'Retroductions'. For what I mean by a
Retroduction is simply a conjecture which arises in the mind."[9]
Pragmatism
In 1901 Peirce wrote, "There would be no logic in imposing rules, and
saying that they ought to be followed, until it is made out that the
purpose of hypothesis requires them."[34] In 1903 Peirce
called pragmatism "the logic of abduction" and said that the pragmatic
maxim gives the necessary and sufficient logical rule to abduction in
general.[24] The pragmatic maxim is: "Consider what effects, that might
conceivably have practical bearings, we conceive the object of our
conception to have. Then, our conception of these effects is the whole
of our conception of the object." It is a method for fruitful clarification of
conceptions by equating the meaning of a conception with the
conceivable practical implications of its object's conceived effects.
Peirce held that the pragmatic maxim is precisely tailored to abduction's purpose in
inquiry: the forming of an idea that could conceivably shape informed
conduct. In various writings in the 1900s[10][35] he said that the conduct of
abduction (or retroduction) is governed by considerations of economy,
belonging in particular to the economics of research. He regarded
economics as a normative science whose analytic portion might be part
of logical methodeutic (that is, theory of inquiry).[36]
Three levels of logic about abduction
Peirce came over the years to divide (philosophical) logic into three
departments: the classification of signs, the critique of arguments, and the
methodology of inquiry. Among the considerations that, on his view, govern the
economy of choosing which guesses to test are:
Cost: A simple but low-odds guess, if low in cost to test for falsity,
may belong first in line for testing, to get it out of the way. If
surprisingly it stands up to tests, that is worth knowing early in the
inquiry, which otherwise might have stayed long on a wrong though
seemingly likelier track.
Value: A guess is intrinsically worth testing if it has instinctual
plausibility or reasoned objective probability, while subjective
likelihood, though reasoned, can be treacherous.
Interrelationships: Guesses can be chosen for trial strategically for their
caution, for which Peirce gave as example the game of Twenty Questions;
breadth of applicability to explain various phenomena; and
incomplexity, that of a hypothesis that seems too simple but
whose trial "may give a good 'leave,' as the billiard-players
say", and be instructive for the pursuit of various and conflicting
hypotheses that are less simple.[41]
Other writers
Norwood Russell Hanson, a philosopher of science, wanted to grasp a
logic explaining how scientific discoveries take place. He used Peirce's
notion of abduction for this.[42]
Further development of the concept can be found in Peter
Lipton's Inference to the Best Explanation (Lipton, 1991).
Applications
Applications in artificial intelligence include fault diagnosis, belief
revision, and automated planning. The most direct application of
abduction is that of automatically detecting faults in systems: given a
theory relating faults with their effects and a set of observed effects,
abduction can be used to derive sets of faults that are likely to be the
cause of the problem.
In medicine, abduction can be seen as a component of clinical
evaluation and judgment.[43][44]
Abduction can also be used to model automated planning.[45] Given a
logical theory relating action occurrences with their effects (for example,
a formula of the event calculus), the problem of finding a plan for
reaching a state can be modeled as the problem of abducting a set of
literals implying that the final state is the goal state.
In intelligence analysis, Analysis of Competing
Hypotheses and Bayesian networks, probabilistic abductive reasoning
is used extensively. Similarly in medical diagnosis and legal reasoning,
the same methods are being used, although there have been many
examples of errors, especially caused by the base rate fallacy and
the prosecutor's fallacy.
Belief revision, the process of adapting beliefs in view of new
information, is another field in which abduction has been applied. The
main problem of belief revision is that the new information may be
inconsistent with the corpus of beliefs, while the result of the
incorporation cannot be inconsistent. This process can be done by the
use of abduction: once an explanation for the observation has been
found, integrating it does not generate inconsistency. This use of
abduction is not straightforward, as adding propositional formulae to
other propositional formulae can only make inconsistencies worse.
Instead, abduction is done at the level of the ordering of preference of
the possible worlds. Preference models use fuzzy logic or utility
models.
In the philosophy of science, abduction has been the key inference
method to support scientific realism, and much of the debate about
scientific realism is focused on whether abduction is an acceptable
method of inference.
In historical linguistics, abduction during language acquisition is often
taken to be an essential part of processes of language change such as
reanalysis and analogy.[46]
In anthropology, Alfred Gell in his influential book Art and
Agency defined abduction (after Eco[47]) as "a case of synthetic
inference 'where we find some very curious circumstances, which
would be explained by the supposition that it was a case of some
general rule, and thereupon adopt that supposition".[48] Gell criticizes
existing 'anthropological' studies of art, for being too preoccupied with
aesthetic value and not preoccupied enough with the central
anthropological concern of uncovering 'social relationships,' specifically
the social contexts in which artworks are produced, circulated, and
received.[49] Abduction is used as the mechanism for getting from art to
agency. That is, abduction can explain how works of art inspire
a sensus communis: the commonly-held views shared by members that
characterize a given society.[50] The question Gell asks in the book is: how does it
initially 'speak' to people? He answers by saying that "No
reasonable person could suppose that art-like relations between people
and things do not involve at least some form of semiosis."[48] However,
he rejects any intimation that semiosis can be thought of as a language
because then he would have to admit to some pre-established
existence of the sensus communis that he wants to claim only emerges
afterwards out of art. Abduction is the answer to this conundrum
because the tentative nature of the abduction concept (Peirce likened it
to guessing) means that not only can it operate outside of any pre-
existing framework, but moreover, it can actually intimate the existence
of a framework. As Gell reasons in his analysis, the physical existence
of the artwork prompts the viewer to perform an abduction that imbues
the artwork with intentionality. A statue of a goddess, for example, in
some senses actually becomes the goddess in the mind of the
beholder; and represents not only the form of the deity but also her
intentions (which are adduced from the feeling of her very presence).
Therefore through abduction, Gell claims that art can have the kind of
agency that plants the seeds that grow into cultural myths. The power
of agency is the power to motivate actions and inspire ultimately the
shared understanding that characterizes any given society.
Defeasible reasoning
Defeasible reasoning is a kind of reasoning that is based on reasons that are defeasible, as
opposed to the indefeasible reasons of deductive logic. Defeasible reasoning is a particular kind of
non-demonstrative reasoning, where the reasoning does not produce a full, complete, or final
demonstration of a claim, i.e., where fallibility and corrigibility of a conclusion are acknowledged. In
other words, defeasible reasoning produces a contingent statement or claim. Other kinds of
non-demonstrative reasoning are probabilistic reasoning, inductive
reasoning, statistical reasoning, abductive reasoning, and paraconsistent reasoning. Defeasible
reasoning is also a kind of ampliative reasoning because its conclusions reach beyond the pure
meanings of the premises.
The differences between these kinds of reasoning correspond to differences about the conditional
that each kind of reasoning uses, and on what premise (or on what authority) the conditional is
adopted:
Deductive (from meaning postulate, axiom, or contingent assertion): if p then q (i.e., q or not-p)
Defeasible (from authority): if p then (defeasibly) q
Probabilistic (from combinatorics and indifference): if p then (probably) q
Statistical (from data and presumption): the frequency of qs among ps is high (or inference from
a model fit to data); hence, (in the right context) if p then (probably) q
Inductive (theory formation; from data, coherence, simplicity, and confirmation): (inducibly)
"if p then q"; hence, if p then (deducibly-but-revisably) q
Abductive (from data and theory): p and q are correlated, and q is sufficient for p; hence,
if p then (abducibly) q as cause
Defeasible reasoning finds its fullest expression in jurisprudence, ethics and moral
philosophy, epistemology, pragmatics and
conversational conventions in linguistics, constructivist decision theories, and in knowledge
representation and planning in artificial intelligence. It is also closely identified with prima
facie (presumptive) reasoning (i.e., reasoning on the "face" of evidence), and ceteris paribus (default)
reasoning (i.e., reasoning, all things "being equal").
Contents
1 History
2 Political and judicial use
3 Specificity
4 Nature of defeasibility
5 See also
6 References
7 External links
History
Though Aristotle differentiated the forms of reasoning that are valid for logic and philosophy from the
more general ones that are used in everyday life (see dialectics and rhetoric), 20th-century
philosophers mainly concentrated on deductive reasoning. At the end of the 19th century, logic texts
would typically survey both demonstrative and non-demonstrative reasoning, often giving more
space to the latter. However, after the blossoming of mathematical logic at the hands of Bertrand
Russell, Alfred North Whitehead and Willard Van Orman Quine, latter-20th-century logic texts paid
little attention to the non-deductive modes of inference.
There are several notable exceptions. John Maynard Keynes wrote his dissertation on non-
demonstrative reasoning, and influenced the thinking of Ludwig Wittgenstein on this subject.
Wittgenstein, in turn, had many admirers, including the positivist legal scholar H.L.A. Hart and
the speech act linguist John L. Austin, Stephen Toulmin in rhetoric (Chaim Perelman too), the moral
theorists W.D. Ross and C.L. Stevenson, and the vagueness epistemologist/ontologist Friedrich
Waismann.
The etymology of defeasible usually refers to Middle English law of contracts, where a condition of
defeasance is a clause that can invalidate or annul a contract or deed.
Though defeat, dominate, defer, defy, deprecate and derogate are often used in the same contexts
as defeasible, the
verbs annul and invalidate (and nullify, overturn, rescind, vacate, repeal, debar, void, cancel,
countermand, preempt, etc.) are more properly correlated with the concept of defeasibility than those words
beginning with the letter d. Many dictionaries do contain the verb to defease, with past
participle defeased.
Philosophers in moral theory and rhetoric had taken defeasibility largely for granted when American
epistemologists rediscovered Wittgenstein's thinking on the subject: John Ladd, Roderick
Chisholm, Roderick Firth, Ernest Sosa, Robert Nozick, and John L. Pollock all began writing with
new conviction about how appearance as red was only a defeasible reason for believing something
to be red. More importantly, Wittgenstein's orientation toward language-games (and away
from semantics) emboldened these epistemologists to manage rather than to expurgate prima
facie logical inconsistency.
At the same time (in the mid-1960s), two more students of Hart and Austin at Oxford, Brian
Barry and David Gauthier, were applying defeasible reasoning to political argument and practical
reasoning (of action), respectively. Joel Feinberg and Joseph Raz were beginning to produce
equally mature works in ethics and jurisprudence informed by defeasibility.
By far the most significant works on defeasibility by the mid-1970s were in epistemology,
where John Pollock's 1974 Knowledge and Justification popularized his terminology
of undercutting and rebutting (which mirrored the analysis of Toulmin). Pollock's work was significant
precisely because it brought defeasibility so close to philosophical logicians. The failure of logicians
to dismiss defeasibility in epistemology (as Cambridge's logicians had done to Hart decades earlier)
landed defeasible reasoning in the philosophical mainstream.
Defeasibility had always been closely related to argument, rhetoric, and law, except in epistemology,
where the chains of reasons, and the origin of reasons, were not often discussed. Nicholas
Rescher's Dialectics is an example of how difficult it was for philosophers to contemplate more
complex systems of defeasible reasoning. This was in part because proponents of informal
logic became the keepers of argument and rhetoric while insisting that formalism was anathema to
argument.
About this time, researchers in artificial intelligence became interested in non-monotonic
reasoning and its semantics. With philosophers such as Pollock and Donald Nute (e.g., defeasible
logic), dozens of computer scientists and logicians produced complex systems of defeasible
reasoning between 1980 and 2000. No single system of defeasible reasoning would emerge in the
same way that Quine's system of logic became a de facto standard. Nevertheless, the 100-year
headstart on non-demonstrative logical calculi, due to George Boole, Charles Sanders Peirce,
and Gottlob Frege was being closed: both demonstrative and non-demonstrative reasoning now
have formal calculi.
There are related (and slightly competing) systems of reasoning that are newer than systems of
defeasible reasoning, e.g., belief revision and dynamic logic. The dialogue logics of Charles
Hamblin and Jim Mackenzie, and their colleagues, can also be tied closely to defeasible reasoning.
Belief revision is a non-constructive specification of the desiderata with which, or constraints
according to which, epistemic change takes place. Dynamic logic is related mainly because, like
paraconsistent logic, the reordering of premises can change the set of justified conclusions.
Dialogue logics introduce an adversary, but are like belief revision theories in their adherence to
deductively consistent states of belief.
Specificity
One of the main disputes among those who produce systems of defeasible reasoning is the status of
a rule of specificity. In its simplest form, it is the same rule as subclass inheritance preempting class
inheritance: a conclusion supported by a more specific class (penguins typically do not fly) defeats a
conflicting conclusion supported by a more general class (birds typically fly).
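As a sketch of subclass inheritance preempting class inheritance, defeasible rules can be ranked by specificity so that the most specific applicable rule wins; the rule set and the simple integer ranking below are hypothetical, chosen only to illustrate the classic birds-and-penguins case.

```python
# Hypothetical rules: (premise, conclusion, specificity rank); higher rank = more specific.
RULES = [
    ("bird", "flies", 0),         # birds normally fly
    ("penguin", "not flies", 1),  # penguins, a subclass of birds, normally do not
]

def facts_of(individual_classes):
    """Apply defeasible rules; the most specific applicable rule wins per issue."""
    applicable = [r for r in RULES if r[0] in individual_classes]
    conclusions = {}
    for premise, concl, rank in applicable:
        issue = concl.removeprefix("not ")   # "flies" and "not flies" contest one issue
        if issue not in conclusions or rank > conclusions[issue][1]:
            conclusions[issue] = (concl, rank)
    return {c for c, _ in conclusions.values()}

print(facts_of({"bird"}))             # {'flies'}
print(facts_of({"bird", "penguin"}))  # {'not flies'}
```

Systems without a built-in specificity rule would instead require the rule author to state the preference of the penguin rule over the bird rule explicitly, which is the design split described below.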
Approximately half of the systems of defeasible reasoning discussed today adopt a rule of
specificity, while half expect that such preference rules be written explicitly by whoever provides the
defeasible reasons. For example, Rescher's dialectical system uses specificity, as do early systems
of multiple inheritance (e.g., David Touretzky) and the early argument systems of Donald Nute and
of Guillermo Simari and Ronald Loui. Defeasible reasoning accounts of precedent (stare
decisis and case-based reasoning) also make use of specificity (e.g., Joseph Raz and the work of
Kevin D. Ashley and Edwina Rissland). Meanwhile, the argument systems of Henry Prakken and
Giovanni Sartor, of Bart Verheij and Jaap Hage, and the system of Phan Minh Dung do not adopt
such a rule.
Nature of defeasibility
There is a distinct difference between those who theorize about defeasible reasoning as if it were a
system of confirmational revision (with affinities to belief revision), and those who theorize about
defeasibility as if it were the result of further (non-empirical) investigation. There are at least three
kinds of further non-empirical investigation: progress in a lexical/syntactic process, progress in a
computational process, and progress in an adversary or legal proceeding.
Defeasibility as corrigibility: Here, a person learns something new that annuls a prior inference. In
this case, defeasible reasoning provides a constructive mechanism for belief revision, like a truth
maintenance system as envisioned by Jon Doyle.
Defeasibility as shorthand for preconditions: Here, the author of a set of rules or legislative code
is writing rules with exceptions. Sometimes a set of defeasible rules can be rewritten, with more
cogency, with explicit (local) pre-conditions instead of (non-local) competing rules. Many
non-monotonic systems with fixed-point or preferential semantics fit this view. However, sometimes the
rules govern a process of argument (the last view on this list), so that they cannot be re-compiled
into a set of deductive rules lest they lose their force in situations with incomplete knowledge or
incomplete derivation of preconditions.
Defeasibility as an anytime algorithm: Here, it is assumed that calculating arguments takes time,
and at any given time, based on a subset of the potentially constructible arguments, a conclusion is
defeasibly justified. Isaac Levi has protested against this kind of defeasibility, but it is well-suited to
the heuristic projects of, for example, Herbert A. Simon. On this view, the best move so far in a
chess-playing program's analysis at a particular depth is a defeasibly justified conclusion. This
interpretation works with either the prior or the next semantical view.
Defeasibility as a means of controlling an investigative or social process: Here, justification is
the result of the right kind of procedure (e.g., a fair and efficient hearing), and defeasible reasoning
provides impetus for pro and con responses to each other. Defeasibility has to do with the
alternation of verdict as locutions are made and cases presented, not the changing of a mind with
respect to new (empirical) discovery. Under this view, defeasible reasoning and defeasible
argumentation refer to the same phenomenon.
Argument from authority
Argument from authority, also called authoritative argument and appeal to authority, is a common
form of argument which leads to a logical fallacy when used in argumentative reasoning.[1]
In informal reasoning, the appeal to authority is a form of argument attempting to establish
a statistical syllogism.[2] The appeal to authority relies on an argument of the form:[3]
A is an authority on a particular topic
A says something about that topic
A is probably correct
Fallacious examples of using the appeal include any appeal to authority used in the
context of logical reasoning, and appealing to the position of an authority or authorities
to dismiss evidence,[2][4][5][6] as authorities can come to the wrong judgments through error,
bias, dishonesty, or falling prey to groupthink. Thus, the appeal to authority is not a
generally reliable argument for establishing facts.[7]
Contents
1 Forms
o 1.1 General
o 1.2 Dismissal of evidence
o 1.3 Appeal to non-authorities
o 1.4 Use in logic
2 Notable examples
o 2.1 Inaccurate chromosome number
o 2.2 The tongue map
o 2.3 Surgical sterilization and puerperal infections
3 Psychological basis
4 See also
5 References
6 Sources
7 External links
Forms
General
The argument from authority can take several forms. As a syllogism, the argument has
the following basic structure:[4][8]
A says P about subject matter S.
A should be trusted about subject matter S.
Therefore, P is correct.
The second premise is not accepted as valid, as it amounts to an unfounded assertion that leads to circular reasoning, effectively defining person or group A as inerrant on any subject matter.[4][9]
One real-world example of this tautological inerrancy is the dismissal, largely on appeals to authority, of Ignaz Semmelweis' evidence that puerperal fever was caused by a contagious agent, as opposed to the then-accepted view that it was caused mainly by environmental factors.[10] Multiple critics stated that they did not accept his claims in part because nothing in the academic literature on puerperal fever supported the view Semmelweis was advancing.[11] They were thus effectively using the circular argument that "the literature is not in error, therefore the literature is not in error".[12]
Dismissal of evidence
The equally fallacious counter-argument from authority takes the form:[13]
B has provided evidence for position T.
A says position T is incorrect.
Therefore, B's evidence is false.
This form is fallacious because it does not actually refute the evidence given by B; it merely notes that there is disagreement with it.[13] The form is especially unsound when there is no indication that A is even aware of the evidence given by B.[14]
Appeal to non-authorities
Fallacious arguments from authority can also be the result of
citing a non-authority as an authority.[4] These arguments
assume that a person without status or authority is inherently
reliable. The appeal to poverty for example is the fallacy of
thinking a conclusion is probably correct because the one who
holds or is presenting it is poor.[15] When an argument holds that
a conclusion is likely to be true precisely because the one who
holds or is presenting it lacks authority, it is a fallacious appeal
to the common man.[5][16][17]
However, it is also a fallacious ad hominem argument to argue
that a person presenting statements lacks authority and thus
their arguments do not need to be considered.[18] As appeals to
a perceived lack of authority, these types of argument are
fallacious for much the same reasons as an appeal to
authority.[19]
Use in logic
It is fallacious to use any appeal to authority in the context of logical reasoning. The argument from authority is not a logical argument: it does not show that denying its conclusion would constitute a contradiction, so it is fallacious to assert that the conclusion must be true.[4] Such a determinative assertion is a logical non sequitur, as the conclusion does not follow unconditionally, in the sense of being logically necessary.[20][21]
The only exception would be an authority that is logically required to always be correct, such as an omniscient being that does not lie.[22]
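The non sequitur can be checked mechanically. In this minimal Python sketch (the two-proposition encoding is an assumption made for this example), "the authority asserts P" and "P is true" are treated as independent propositions; enumerating all truth assignments exposes one where the premise is true and the conclusion false, so the inference form is not logically valid.

```python
from itertools import product

# Enumerate all truth assignments to two independent propositions:
#   says_p : "the authority asserts P"
#   p      : "P is true"
# The appeal to authority infers p from says_p.  The form is valid only
# if no assignment makes the premise true and the conclusion false.
counterexamples = [
    (says_p, p)
    for says_p, p in product([True, False], repeat=2)
    if says_p and not p
]

print(counterexamples)            # [(True, False)]: premise true, conclusion false
print(len(counterexamples) == 0)  # False: the argument form is invalid
```

A deductively valid form would produce an empty list here; the single counterexample is exactly the sense in which the conclusion "does not follow unconditionally".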
Notable examples
Inaccurate chromosome number
In 1923, leading American zoologist Theophilus
Painter declared based on his findings that humans had 24
pairs of chromosomes. From the 1920s to the 1950s, this figure continued to be accepted on Painter's authority,[23] despite subsequent counts finding the correct number of 23.[24] Even textbooks with photos clearly showing 23 pairs declared the number to be 24 on the authority of the then-current consensus.[24]
As Robert Matthews said of the event, "Scientists had preferred
to bow to authority rather than believe the evidence of their
own eyes".[24] As such, their reasoning was an appeal to
authority.[25]
The tongue map
Another example is the tongue map, which purported to show that different areas of the tongue sense different tastes. Although it originated from a misreading of the original text, it was taken up in textbooks and the scientific literature[26] for nearly a century, and it persisted even after being shown to be wrong in the 1970s[27][28] and despite being easily disproven by testing on one's own tongue.[29][30]
Surgical sterilization and puerperal infections
In the mid-to-late 19th century, a small minority of doctors, most notably Ignaz Semmelweis, argued that puerperal fevers were caused by an infection or toxin[31] whose spread physicians could prevent with aseptic techniques such as washing their hands with chlorinated solutions.[11] This view was largely
discounted because, as one 1843 paper noted, "writers of
authority...profess a disbelief in [such a] contagion", and
instead held that puerperal fevers were caused by
environmental factors which would render such techniques
irrelevant.[11] This was in spite of evidence against their
proposed explanations, such as Semmelweis' observations that
two side-by-side clinics had radically different rates
of puerperal infection, that puerperal infection was extremely
rare in births that took place outside of hospitals, and that
infection rates were unrelated to weather or seasonal
variations, all of which went against the prevailing explanation
of environmental causes such as miasma.[10]
Psychological basis
An integral part of the appeal to authority is the cognitive
bias known as the Asch effect.[25] In repeated and modified
instances of the Asch conformity experiments, it was found that
high-status individuals create a stronger likelihood of a subject
agreeing with an obviously false conclusion, despite the subject
normally being able to clearly see that the answer was
incorrect.[32]
Further, humans have been shown to feel strong emotional
pressure to conform to authorities and majority positions. A
repeat of the experiments by another group of researchers
found that "Participants reported considerable distress under
the group pressure", with 59% conforming at least once and
agreeing with the clearly incorrect answer, whereas the
incorrect answer was much more rarely given when no such
pressures were present.[33]