
Synthese (2011) 178:291–305

DOI 10.1007/s11229-009-9540-x

The science question in intelligent design

Sahotra Sarkar

Received: 23 March 2009 / Accepted: 25 March 2009 / Published online: 26 May 2009
© Springer Science+Business Media B.V. 2009

Abstract Intelligent Design creationism is often criticized for failing to be science because it falls afoul of some demarcation criterion between science and non-science. This paper argues that this objection to Intelligent Design is misplaced because it assumes that a consistent non-theological characterization of Intelligent Design is possible. In contrast, it argues that, if Intelligent Design is taken to be a non-theological doctrine, it is not intelligible. Consequently, a demarcation criterion cannot be used to judge its status. This position has the added advantage of providing reasons to reject Intelligent Design creationism without invoking potentially philosophically controversial demarcation criteria.

Keywords Creationism · Demarcation problem · Evolution · Fundamentalism · Intelligent Design

The purpose of this note is to argue that the doctrine of Intelligent Design (ID) is not science, but not because it falls afoul of some general demarcation criterion that distinguishes science from non-science, as many have claimed, sometimes in the

For critical comments on an earlier version of this note thanks are due to Samet Bagce, Glenn Branch,
and James Justus. In unpublished work, Monton (2006) also criticizes the use of a demarcation criterion
to show what is wrong with Intelligent Design (ID) but goes on to conclude that ID may be science. (For
an earlier critique of the use of demarcation criteria to argue against creationism, see Laudan (1983).)
This note was partly written to develop and elaborate scattered remarks in Sarkar (2007a), Chapter 9,
which were critically questioned by Joe Lapp. Thanks are due to both Lapp and Monton for their input.

S. Sarkar
Section of Integrative Biology, Department of Philosophy, University of Texas at Austin,
1 University Station, #C3500, Austin, TX 78712-1180, USA
e-mail: sarkar@mail.utexas.edu


public arena.1 Rather, there is a deeper problem with ID: before we can fruitfully
discuss whether ID is science or not, we need a positive account of what design and
intelligence are. Let us restrict our attention to those familiar (complex) biological entities whose origin divides evolutionary biologists and ID proponents. Standard evolutionary theory claims that the genesis of these entities can be explained by common descent, blind mutations, natural selection, and other agents of evolutionary change. ID proponents disagree. But they specify precious little about
what their doctrine actually says about the genesis of these entities beyond merely
asserting that they are the product of intelligent design without elaborating what
rules design and intelligence are supposed to follow—typically all they offer, besides
negative argumentation against standard evolutionary theory, are analogies to human
design.
For instance, Dembski, who is probably ID’s most philosophically sophisticated
expositor, asks us to envision the following process for the origin of complex entities:
“(1) A designer conceives a purpose. (2) To accomplish that purpose, the designer
forms a plan. (3) To execute the plan, the designer specifies building materials
and assembly instructions. (4) Finally, the designer or some surrogate applies
the assembly instructions to the building materials.” (Dembski 2002, p. xi)
All we have is an analogy to what a human designer may do. Nevertheless, if we
take this account as indicative of what ID proponents have in mind, we may first conclude that design is something causally dependent on a designer for its origin, and then that the designer is intelligent in the sense that it can form future plans for the relevant entity (before that entity has been brought into existence). Finally, we should be free to assume that the designer (or its surrogate, whatever that may be) can successfully execute such plans. We may reasonably infer that, because the designer is aware of what these future goals are and of when they are successfully met, it is at least conscious.
Now the designer could be composed of standard physical substances2 or not. In the
first case, the designer would be subject to physical law. ID would then not be science
because there is not even an iota of evidence for the existence of such designers,3

1 See, for instance, Pennock’s testimony in Kitzmiller vs. Dover (2005a) which goes against the more
nuanced discussion of Pennock (1999). Laudan (1983) was probably the first to point out the problems with
invoking a demarcation criterion in this debate.
2 Nothing hangs on the question of whether we call these “matter,” though ID proponents make much of their rejection of materialism.
3 Note that what is not being said is that ID is not science because it is false. There is a somewhat subtle
issue here. Suppose that truth and falsity are construed categorically (in the sense that something is either
simply true or simply false) rather than approximately. Now, suppose that we refuse to countenance as
science any claim that is false. Then we are faced with the awkward result that all superseded theories from
the past—including Newton’s mechanics and Maxwell’s electromagnetism—are not science. Refusing to
call something science if there is no evidence whatsoever in favor of it avoids that problem: even the caloric
theory, let alone classical mechanics or electromagnetism, survives that test. (Note that having some evidence should also not be construed as a demarcation criterion in the sense of Sect. 2—non-scientific claims,
such as ethical claims, may also be based on evidence.) However, the issue is largely irrelevant because the
empirical sciences require a concept of truth that admits approximation rather than a categorical concept of
truth.


but it would not be the case that ID’s claims lacked scientific status. Rejecting ID in
this case does not require recourse to any demarcation criterion between science and
non-science. (Arguably, one could choose to call ID “bad science,” “junk science,” or
“voodoo science” instead of denying that it is science (Nickles 2006), but common
usage suggests that manifestly and completely false claims are simply not science—or,
for that matter, any other form of knowledge.) In the second case, that is, when we
are faced with the possibility of a non-physical conscious designer, assuming that this
designer is responsible for all the spectacular examples of biological adaptation we
find on Earth—and, presumably, many other complex phenomena in the universe—we
are left with at least a mild form of theism. We may now choose to worry whether
theism can be science but, since most parties to the debate generally concur that it is
not, we will not pursue this argument further.4
Now, ID proponents are usually—though not always5—quite clear that the designer
is not supposed to be a physical entity. The trouble is that they also claim that ID does
not necessarily endorse theism.6 This is the position that is incoherent. In other words,
if ID is presented as a theistic doctrine about the origin of entities, I may disagree with
its claims, and note that the question of its scientific status is irrelevant, but at least I
will have to acknowledge that there is a substantive doctrine that is up for discussion.
Shorn of its theistic foundations, ID says nothing substantive. It is at best a metaphor,
a term that acquires enough meaning for us, through association with other ideas we understand, to sustain informal use, but not enough for us to specify a precise intended referent. (Throughout this note, “metaphor” will only be
used in this way.)
The rest of this note develops the argument that the categories of ID are so ill-
specified that the doctrine cannot be interpreted as making any substantive—rather
than metaphorical—claim. I will take it as unproblematic that, to be a science, a
doctrine must make substantive claims, though making such claims does not by itself
obviously confer scientific status. (For instance, substantive normative claims or mathematical claims may not be part of any science.) Making substantive claims is not a
demarcation criterion to distinguish science from non-science, but it is an adequacy test
for a science. ID fails this test. (In what follows, unless explicitly stated otherwise, it
will be assumed (i) that ID claims that there is a non-physical designer of the structures
and systems under contention and (ii) that ID does not entail theism.)
The introduction to this issue provides background on ID, on how it, as a religious doctrine, represents the latest incarnation of creationism, and on why its challenge to

4 Sarkar (2007a), Chap. 3, develops the argument that theism of this sort provides no argument for design:
we know too little about potential designers of this ilk to say anything about their plans and abilities to
execute them. See, also Sober (2004) for another version of the same argument. Note that some creationists
do opt for a theistic science—see, for example, Plantinga (1996).
5 One of the presiding characteristics of ID is prevarication: when it is convenient to deny any theological
goal for ID, for instance, in the context of what should be taught as science in public schools in the United
States, some ID proponents are willing to countenance designers who are extraterrestrials or time-traveling
cell biologists from the far future who presumably all remain subject to physical law. See, for instance, Behe (1996, pp. 248–250); and, also, his deposition in Kitzmiller vs. Dover (2005b).
6 Behe (2001) is an exception—in order to make the case for ID as strongly non-theistic as possible, we
will ignore this piece.


evolution is politically important in the contemporary United States even if it is intellectually vacuous. We will not repeat that discussion here.7 As far as the positive content of ID is concerned, given how little its proponents are willing to specify, it will suffice here to note that it claims that the origin of complex biological systems is better explained by intelligent design than by evolutionary theory. Section 2 of
this note summarizes why critics of ID are ill-advised to rely on traditional demarcation
criteria to deny scientific status to ID. Section 3 will make the case for the claim that
explicit categorical specification is a prerequisite for any substantive theory, scientific
or not. Section 4 will return to ID and elaborate the objections summarized in the last
paragraph. Section 5 notes some relatively unexpected—and, perhaps, unwelcome—
implications of the argument of this paper.

As noted earlier, philosophers of science have often used a demarcation criterion to argue against the claim that creationism is science, whether it be old-fashioned Scientific Creationism or the new-fangled ID doctrine.8 The customary strategy has
been to use some criterion that is supposed to distinguish science from non-science
definitively (that is, in all cases), such as Popper’s flawed criterion of falsifiability9 or
Hempel’s more respectable criterion of testability, and argue that ID fails to satisfy
it. Demarcation criteria have had a long and capricious history in twentieth-century philosophy of science. (For the many historical antecedents of the project of demarcation, see Nickles (2006).) During the first half of the twentieth century, producing such a criterion was central to the logical empiricist project since it was supposed to underpin the rejection of metaphysical claims, which were supposed to fall afoul of this criterion and could, therefore, be subsequently banished from philosophy. Because,
according to logical empiricism (though not always Popper), logical formalization was
supposed to clarify all philosophical issues, any demarcation criterion was required to
be formulated in a formalized language.
Starting with a principle of verifiability, the logical empiricists proposed, over the years, one criterion after another, as each in turn was shown to be technically flawed (Hochberg 2006). The flaws consisted in demonstrations that the criterion either admitted obviously non-scientific claims or excluded equally obviously scientific ones.10 For

7 Pennock (1999), Forrest and Gross (2004), Scott (2004), and Sarkar (2007a) all provide detailed summaries of ID. Scott and Matzke (2007) sets the ID controversy in the context of earlier attempts to introduce
creationism in science curricula in the United States.
8 For ID, Pennock’s testimony in Kitzmiller vs. Dover (2005a) has already been noted. For Scientific
Creationism, see Ruse (1982).
9 This criterion is very popular among biologists (and other scientists) but it is irremediably flawed: it
makes any existential claim unscientific because it is unfalsifiable—see below, in the text, for more discussion of this point. Probabilistic claims, similarly, are never falsifiable with a finite data set. Moreover, this
criterion also has the bizarre consequence that every false claim is scientific no matter how absurd it is. To
the extent that calling a claim scientific connotes something valuable about it, falsifiability is perhaps the
worst criterion one could devise.
10 For an incisive discussion, see Glymour (1980).


instance, verifiability is inadmissible because no universal law is logically verifiable; falsifiability is also inadmissible because no singular (existential) claim (for instance, “there is a common ancestor of humans and chimpanzees”) can be logically falsified.11 This debate continued in full swing until the 1950s and then slowly petered out; it remains controversial whether any successful formal demarcation criterion can be formulated.12 It is also far from clear whether the presumed failure of these criteria is
due to there not being a salient distinction between science and non-science, whether
the proposed criteria were simply not good enough, or whether the failure is an artifact
of the particular logical formalisms that were being deployed. The one theme that runs
through these criteria—once verifiability and falsifiability were abandoned—is that
what really matters is testability: to be scientific, a proposal must be empirically testable. This seems eminently reasonable—in fact, it seems to capture the very essence of
scientific research based on observation and experimentation. Nevertheless, the issue
is not quite that simple—as we will see below.
Meanwhile, note that most philosophers of science have long abandoned a sharp
line of demarcation between science and non-science. Some, such as Quine, have
tended to deny the utility of all such dichotomous distinctions on pragmatic grounds.
In particular, Quine argued for continuity between science and philosophy (as part of
a comprehensive naturalism in philosophy), a position that denies the possibility of a
strict demarcation between the two spheres. It is therefore not surprising that, when
demarcation criteria were invoked in legal contexts during attempts to introduce Scientific Creationism into US high school science curricula in the 1980s, critics pointed out that their role was rhetorical and political rather than substantive. Laudan (1983,
p. 349) puts this point forcefully: “If we would stand up and be counted on the side
of reason, we ought to drop terms like ‘pseudo-science’ and ‘unscientific’ from our
vocabulary; they are just hollow phrases which do only emotive work for us.” Ruse
(1982), who had deployed the falsifiability criterion in court testimony, conceded that the demarcation criterion was being used for political ends, and pointed out—with some justice—that the most promising legal strategy for excluding creationism from science classes was to argue that it was religion rather than science on the basis of a demarcation criterion.
That there is a distinction between science and non-science is not in question.
No one can reasonably confuse Heidegger’s Being and Time or Kripke’s Naming and
Necessity with any competent scientific paper: the distinction is as clear as that between
night and day. What is at stake is whether we can adjudicate the boundary between science and non-science so clearly that claims near this boundary are neatly put into one
category or the other. The analogy with night and day is again appropriate: evenings
blend into nights, dawns into day. Speculative theories, especially at the early stages
of their development, may well not satisfy criteria such as testability. Moreover, what

11 In both cases we have to assume a universe with an indefinitely large number of entities.
12 Justus (2006) reviews the state of the art with respect to criteria of cognitive significance. As Popper and
others have pointed out, the conflation of a criterion of cognitive significance with a demarcation criterion
between science and non-science presumes that only scientific claims are meaningful (cognitively significant). Any such assumption is question-begging, and criteria of cognitive significance are being construed
here only in their role as providing a demarcation between science and non-science.


is at stake is whether some claim is testable in principle, not necessarily in practice.


Thus, an evolutionary model predicting speciation in some population in a thousand
years is testable in principle though not in practice. But there are situations where even
the distinction between in principle and in practice claims is unclear. Suppose that, to
test some physical theory, we require a sphere of curium-247 with a mass of 10 kg. In principle there seems to be no problem, but physical law dictates that we can never assemble such a sphere: curium-247 has a critical mass of about 7 kg, beyond which it blows up. Was our claim even in principle testable? The answer is far from clear.
Moreover, again restricting attention to testability as the relevant criterion of demarcation, claims that appear not to be testable now may eventually be transformed into testable ones by the development of science. Our deepest physical theories today often hypothesize interactions at energies so high that we are sometimes at a loss to imagine how they can be tested by experiments. But the next generation may think very differently. More importantly, claims that are apparently untestable because they contain vague concepts may be made testable by specifying those concepts precisely, a process that takes time. Metaphors may be transformed into models.
An example should drive this point home. In the 1950s, several prominent ecologists, including Elton and MacArthur, hypothesized a link between the diversity and stability of ecosystems: more diverse ecosystems were supposed to be more stable.
The claim was sufficiently well-understood for it to be widely debated by ecologists
who disagreed with each other, on the basis of biological experience, on whether more
diverse systems are more likely to persist indefinitely in nature. Yet, until the 1960s,
neither diversity nor stability was specified with sufficient precision to translate the
diversity–stability claim into a testable prediction. (Justus (2007) and Sarkar (2007b)
provide details of this history.) Biologists understood the metaphors of diversity and stability well enough to grasp what was intended by the diversity–stability claim, and this understanding was much more precise than what we can say today about design and, especially, intelligence (see the next section). This understanding was
aided by the fact that these metaphors were built upon yet another metaphorical claim
of exceptional vintage: the “balance of nature.” In fact, the metaphors of diversity and
stability were sufficiently fecund for the claim to have been systematically transformed
into testable versions in the 1960s by various explications of both concepts. Whether
the transformed claim is true depends on which explications we choose to use—the
matter remains unresolved to this day. The point, though, is that, had we mechanically wielded any extant demarcation criterion in the 1950s, when the diversity–stability claim was first explicitly proposed (and before systematic attempts to specify the relevant concepts began), the claim would have fallen afoul of it: the project of establishing demarcation criteria ignores the contextual dynamism on which scientific progress is based. (For more on this example, see McIntosh (1985) and Pimm (1991).) The importance of this point is underscored by the fact that the diversity–stability claim proved to be central to the development of theoretical ecology during the last three decades. There
are many other such examples (see Nickles 2006).
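To see what such an explication looks like in practice, consider one standard precisification of the diversity half of the claim, the Shannon index. This particular choice is merely illustrative (it is mine, not the ecologists' only option); which explication to adopt is exactly the kind of unresolved choice just noted:

```python
import math

def shannon_diversity(abundances: list[float]) -> float:
    """Shannon index H = -sum(p_i * ln p_i), one common explication of
    the diversity metaphor (species richness is another)."""
    total = sum(abundances)
    proportions = (a / total for a in abundances if a > 0)
    return -sum(p * math.log(p) for p in proportions)

# A community spread evenly over four species comes out "more diverse"
# than one dominated by a single species:
print(shannon_diversity([25, 25, 25, 25]))  # ~1.386 (= ln 4)
print(shannon_diversity([97, 1, 1, 1]))     # ~0.168
```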
There are thus two problems with demarcation criteria: (i) they assume that the distinction between science and non-science can be made in a context-independent manner (unlike, for instance, the distinction between day and night, which may depend on the ambient light, say, whether we are in a forest or in a desert); and (ii),


more importantly, they presume a static picture of science in which concepts and
claims do not change as theories develop over time. Note that both of these problems are independent of whether a successful formal demarcation criterion can be
devised. Returning to the question of ID, the context in which that doctrine has been
pushed is as important to the question of its scientific status as whether it satisfies
a formal demarcation criterion. This is at best a contextually specific sociological
demarcation criterion—it will not serve the philosophical purposes of the traditional
criteria. Though this subsidiary argument will not be pursued much further here, the
fact that testimony during Kitzmiller vs. Dover (2005a) showed that ID’s only supposedly scientific textbook, Of Pandas and People (Davis and Kenyon 1993), was a
repackaging of an earlier uncontroversially creationist text is contextually sufficient
to deny ID scientific status. However, the more important point is that theories—and
their precursors—are not static entities. The fact that we do not know what design and intelligence mean in the context of ID does not rule out the possibility that, eventually,
these concepts will be specified with sufficient precision for ID to become a science.
Meanwhile, let us take a hard look at what it is today.

The discussion of the last section shows that, no matter what the subject of inquiry, categorizing any claim as not being scientific using a static philosophical (that is, non-sociological and universal) demarcation criterion is of little value once we acknowledge the dynamic nature of the scientific enterprise. But let us take a step back. Before we can argue, say, about the testability of a claim, if we are not wielding testability only as a rhetorical weapon, we must have a reasonably precise idea of what it means.
Many early logical positivists, following Bridgman (1927), required that concepts be operationally defined (or operationalized) through the specification of an experimental procedure for their measurement (see Hempel 1966). Though this proposal was supposed to mimic a strategy that had been successfully deployed in early twentieth-century physics, especially by Einstein when formulating the theory of relativity, it is obviously too strong since many scientific parameters (for instance, entropy in physics or the intrinsic growth rate of populations in biology) can only be estimated from data very indirectly.13 More importantly, in our present context, since we are concerned with concepts at the boundary between science and non-science, a strategy of definition needs to be broad enough not to require immediate empirical measurement.
Nevertheless, turning to operationalism for insight is a step in the right direction.
For any theory to be comprehensible, the scope of its terms—the categories it uses—
must be specified. Operational definition may be a particularly heavy-handed way of
ensuring such a specification, but had “intelligence” and “design” been operationally
specified by ID proponents, there would be no reason to complain about the substantive
status of ID. Scientific or not, there would have been a theory to be evaluated.

13 There are other problems with operationalism: for instance, two different measurement procedures necessarily specify different concepts even though they may have been intended to measure the same parameter.
See, for instance, Hempel (1961).

123
298 Synthese (2011) 178:291–305

Suppose we relax the requirement of operationalism and impose less stringent conditions on how concepts in our theories must be defined. The standard philosophical
strategy is to require what are called explicit definitions, laying down necessary and
sufficient conditions that an entity must satisfy in order to fall under the scope of
the defined term. (Ideally, operational definitions are such explicit definitions with
the additional requirement that the imposed conditions are instantiated as experimental procedures.) A weaker form of explicit definition would be to lay down a set of conditions such that satisfaction of most—or at least a plurality—of them suffices for an entity to fall under the concept. Concepts in everyday language are often no more precise.
We will refer below to strong and weak explicit definitions to distinguish between
these two forms. Whereas strong explicit definitions may be an appropriate ideal for a mature science, weak ones may well provide sufficient precision for the further fruitful
development of an incipient science.
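As a toy illustration (mine, not the text's), suppose a concept is given by a list of boolean conditions; the conditions and the simple-majority rule below are purely hypothetical stand-ins:

```python
from typing import Callable, List

Condition = Callable[[object], bool]

def falls_under_strong(x: object, conditions: List[Condition]) -> bool:
    """Strong explicit definition: the conditions are jointly necessary
    and sufficient, so all of them must hold."""
    return all(c(x) for c in conditions)

def falls_under_weak(x: object, conditions: List[Condition]) -> bool:
    """Weak explicit definition: satisfying most of the conditions
    suffices (a simple majority, in this toy version)."""
    return sum(c(x) for c in conditions) > len(conditions) / 2

# Toy concept "bird": three hypothetical conditions.
conditions = [lambda x: "wings" in x, lambda x: "feathers" in x,
              lambda x: "flies" in x]
penguin = {"wings", "feathers"}
print(falls_under_strong(penguin, conditions))  # False
print(falls_under_weak(penguin, conditions))    # True
```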
The trouble with explicit definitions is that, contrary to what operationalists seem to have believed, scientific concepts do not always get introduced one at a time so that each can be defined by itself. Rather, they often get introduced in clusters along with a set of laws of nature that use these concepts to elucidate empirical phenomena. Moreover, the use of any of these concepts requires use of the others as permitted by the relevant laws. (In this sense, concepts so introduced are not independent of each other—ipso facto, they cannot be individually defined independently of the others. We understand the set of terms because we know how to use the rules in which they figure. The process outlined here is very similar to what Carnap and those who followed him called “explication”—see Loomis and Juhl (2006).) In Newtonian mechanics, mass, force,
and acceleration come together in Newton’s laws, which also draw on previously
well-established concepts such as rest, motion, and velocity. In the modern (twentieth-century) theory of natural selection, fitness, selection, and drift similarly come together in the framework of the so-called Modern Synthesis. The laws that these concepts obey define all of them simultaneously. This form of definition is called implicit
definition—the basic idea goes back to Hilbert’s conception of the axiomatic method,
which he applied to the foundations of both mathematics and the empirical sciences
(Majer 2006). One of the major insights of logical empiricism was to recognize that
implicit definitions are both necessary in science and epistemologically legitimate.
Nevertheless, there is a sense in which they may be viewed as less satisfactory than explicit definitions, especially strong explicit definitions, because they do not define each concept independently of the others: when concepts are implicitly defined we do not understand any of them independently of the others.
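Schematically, and purely as an illustration of the idea (this gloss is not in the original discussion), Newton's second law can be read as such an implicit definition:

```latex
% Newton's second law, read as an implicit definition: the axiom
\[
  \vec{F} = m\,\vec{a}
\]
% constrains force and mass only jointly. Given the antecedently
% understood kinematic concept of acceleration, mass ratios are fixed
% by accelerations under a shared force,
\[
  \frac{m_1}{m_2} = \frac{\lvert \vec{a}_2 \rvert}{\lvert \vec{a}_1 \rvert}
  \qquad \text{(same force applied to bodies 1 and 2),}
\]
% so neither "mass" nor "force" is explicitly defined in isolation.
```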
It is logically straightforward to show that the implicit definition of a set of con-
cepts through a set of rules need not lead to a set of conditions that even allow weak
explicit definitions of each individual concept. Leaving mathematics aside, in scientific contexts, it is therefore necessary to introduce an additional requirement on
adequate implicit definitions: a relevance condition that connects the concepts to the
empirical world in the appropriate manner. (In the case of explications, the relevance
condition ensures that the explicated concept, or explicatum, is sufficiently similar to the explicandum, the original concept to be explicated.) Roughly, this is done by requiring that
the axioms in the implicit definition can be interpreted as claims about the empirical
world when the concepts occurring in them are given their customary (pre-systematic)


interpretations. Some examples will clarify the point that is being made. If mass, force,
and acceleration are being implicitly defined by a set of axioms, those axioms should
be interpretable as laws of mechanics when we interpret those terms in the usual sense
prior to any commitment to particular laws of mechanics, for instance, the three laws
of motion that Newton introduced. The same must be true for fitness, selection, and
drift in the context of the theory of natural selection. In the case of diversity and stability, we understood the metaphors well enough to know when precise definitions
were capturing the relevant biological intuitions, and when they were not. Note that
the relevance condition is not a demarcation criterion: there is no requirement here
that the claims about the empirical world are testable in practice or in principle. The
connection may be weaker. They may just give us enough understanding of what these
claims are to allow us to reason with them, perhaps develop them so as to connect
them eventually to experiment.
There are two lessons to be gleaned from this discussion. First, there has been
wide discussion and careful philosophical analysis of how scientific concepts should
be defined, going well beyond the usual discussions of demarcation. Second, and
more importantly, adequate definition requires positive specification of the concepts defined, positive in the sense that we know how they are to be applied to the empirical world. In the case of explicit definitions, this is obvious. But even in the
case of implicit definitions, there is a positive agenda: the use of the entire set of rules
in empirical contexts.

Let us return to ID. In the voluminous corpus of its proponents, there seems to be no attempt to define “intelligence” in any way whatsoever. Except by analogy to intelligent human agents, we are not told what it means. ID proponents have even failed to address minimal questions about its relation to animal intelligence (Elsberry and Shallit 2003). If we take the human analogy seriously, all we generate are puzzles.
Consider just one example of what is supposed to be an intelligently designed system: ID proponents have waxed lyrical over the fairly common bacterial locomotory system—the three-part flagellum.14 But why is a three-part system any more intelligent than, say, a two-part one? According to ID proponents, what is most remarkable about the flagellum is that there is no redundancy built in: destroy one part and the system becomes non-functional. Thus, according to Behe, such systems are irreducibly complex: the easy lapse into non-functionality is a test of intelligent design. It is easy
to show how such systems can arise through standard evolutionary processes (Orr
1996–1997; Sarkar 2007a). Let us ignore that issue and turn to what intelligence can
possibly mean in this context.
Now, if we accept the customary nuances of that term, why is such fragility a sign of
intelligence? Remember, all we have is the analogy to human intelligence, but human
intelligence suggests that complex systems are better (that is, more reliable and, in that

14 The literature on the bacterial flagellum is large, ever since Behe (1996) made it central to the agenda
of ID. Sarkar (2007a, pp. 111–113) provides a recent summary, drawing heavily on Miller (1999a,b).


sense, more reflective of intelligence) when they have sufficient built-in redundancy
to guard against easy collapse. In a well-designed house we have fire exits besides
doors; in planes we try to have multiple engines, besides emergency exits. Why are the bacterial flagellum and similar irremediably fragile systems not more a sign of incompetence—as Miller (1999a, Chap. 4) has light-heartedly but very insightfully suggested—than of intelligence? There is no positive specification of “intelligence” in any of these discussions in the ID corpus. All we have is a metaphor that works—to the extent it does—because we all seem to share cultural ideas about what properties a supposedly intelligent theistic designer should have. We have no positive specification of intelligence. And, as the discussion of incompetent design above shows, our cultural ideas about minimally competent intelligence take us in a direction in which we find irreducible complexity, that is to say, irremediable fragility, to be very poor (and, in that sense, unintelligent) design.
For design we can find at most two attempts at some definition in the ID corpus,
both due to Dembski. In The Design Inference, Dembski (1998) argues that we should try to explain the origin of some feature by appealing to three possible factors individually: Regularity, Chance, and Design. The procedure for detecting Design is called the Explanatory Filter: first we try to explain the origin of a feature by an appeal to Regularity; if that fails, we appeal to Chance; if that, too, fails, we conclude that it is due to Design. The epistemological absurdity of this inferential procedure is well known (Fitelson et al. 1999; Perakh 2004; Sarkar 2007a) and those details, being beyond the scope of this note, will not delay us here. For instance, why should we invoke Regularity, Chance, and Design in this specified order? Why are these the only possible explanatory factors? Why are we not allowed to invoke these factors together? The last question is particularly pertinent because the theory of natural selection invokes regularity (selection) and chance together.
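The structure of the Filter can be made vivid with a minimal sketch; the probability inputs and cutoffs here are illustrative assumptions of mine, not Dembski's, and the point is only that Design enters as a bare leftover:

```python
def explanatory_filter(p_regularity: float, p_chance: float) -> str:
    """Toy rendering of the Explanatory Filter's decision structure.

    p_regularity: probability that a law-like process produced the feature.
    p_chance: probability that the feature arose by chance.
    Both inputs and both cutoffs below are illustrative, not Dembski's.
    """
    if p_regularity > 0.5:    # try Regularity first
        return "Regularity"
    if p_chance > 1e-10:      # then Chance
        return "Chance"
    # Design is never positively characterized: it is simply whatever
    # survives the two eliminations above.
    return "Design"

print(explanatory_filter(p_regularity=0.9, p_chance=0.5))    # Regularity
print(explanatory_filter(p_regularity=0.0, p_chance=1e-30))  # Design
```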
Leave all that aside. The crucial point here is that Dembski provides no account of
design other than saying that it is the complement of regularity and chance. He simply
assumes that the three factors are mutually exclusive and jointly exhaustive. No argument is offered for this assumption. There is no positive specification of design. Why, for instance, is something not due to regularity and chance then due to design rather than, say, Dasein? While such comical possibilities are intriguing, the more serious point is that, even if we grant that regularity and chance are mutually exclusive, why is it that what is left beyond them is design—if we mean by “design” anything close to its
provides something akin to an implicit definition of design—along with regularity and
chance—the relevance condition is not satisfied. We are simply supposed to take it as
obvious that design is all that is left once we go beyond regularity and chance but we
are not told why. Once again all we have is a metaphor that is supposed to do the work
because of our prior knowledge of what design is supposed to be, the role it plays in
Christian theology, and how it has been used in past attempts to denigrate evolution
on religious grounds. We have no positive specification of design.
Dembski’s more recent work shows more potential. In No Free Lunch, we can gather (though this is a charitable reading since the issue is not explicitly treated) that an entity has design if it carries a sufficient amount of “complex specified information” (CSI) (Dembski 2002). Here, at least, there is some hope that there will be positive


specification through some form of definition. There is no explicit definition, strong or weak. However, there is a procedure specified—somewhat similar to the Explanatory Filter—which allows us to infer the presence of CSI. Moreover, in this case, it is possible that this procedure will allow something reasonably close to an implicit definition that will satisfy the relevance condition. The reasoning is ultimately based on the assumption that sufficiently low-probability events will carry a sufficiently high amount of CSI, and these will show design when they can also be viewed as conforming to some pattern. Let us, therefore, look at this procedure in some detail (following the treatment of Sarkar (2007a, pp. 123–124)).
Roughly, we are supposed to observe some event, E. Since we need to compute the probability of E, we next find the class Ω of possible events to which E belongs. (Ω will be the reference class used to assign a probability to E.) Then we find a pattern, T, which describes E, taking care that T is epistemically independent of E in the sense that we do not use our knowledge of E to find T. (This idea of a match to a pattern is what makes the metaphor of design relatively straightforward in this case, and thus satisfies the relevance criterion to a sufficient extent.) Finally, we compute the probability that a random member of the class Ω would conform to the pattern T. We choose the largest k such that this probability is less than or equal to 1/2^k. Then the event E has k bits of CSI. If k > 500, then E, according to this scheme, is supposed to display design.
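On this reconstruction the arithmetic is just a log-scale recoding of a tail probability. A minimal sketch, assuming the probability p that a random member of Ω conforms to T has already been estimated (which is the hard, contested step):

```python
import math

def csi_bits(p: float) -> int:
    """Largest k with p <= 1/2**k, i.e. floor(-log2(p)), for 0 < p <= 1."""
    return math.floor(-math.log2(p))

def displays_design(p: float, threshold: int = 500) -> bool:
    # The scheme described above: more than 500 bits of CSI counts as design.
    return csi_bits(p) > threshold

# An event whose conformance probability is 2**-520 carries 520 bits of
# CSI and so, on this scheme, would be said to display design.
print(csi_bits(2.0 ** -520), displays_design(2.0 ** -520))  # 520 True
```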
There is genuine progress here since the appeal to a pattern gives us some idea of what design is supposed to be about—and, as noted earlier, the relevance condition is satisfied. The trouble is that, on closer inspection, the proposal turns out to be incoherent, a fact that is hidden from much of Dembski’s intended audience by the use of an astonishing amount of irrelevant formalism. (This is the general rhetorical strategy of No
Free Lunch (Dembski 2002) as Perakh (2004) has pointed out.) First, information is
never defined even though quantitative measures of information, which this procedure
requires, depend on how information is defined. Second, as Shallit and Elsberry (2004) have shown, it is impossible to satisfy the condition of determining whether the event E conforms to the pattern T without using our knowledge of E—what Dembski demands is incoherent. Third, the number 500 is motivated only by Dembski’s views about what would be a really improbable event. We have a pretense of positive specification here, but still none in practice.
Where does this leave us? We have no positive specification of “intelligence” whatsoever. We only have, at best, an incoherent attempt at a positive specification of “design.” In other words, we have no theory of ID at all. It follows that we are in no position to judge whether the theory meets some demarcation criterion, should we want to play that game. Note, though, that we are not trying to deny scientific status to CSI on the basis of such a demarcation criterion. In fact, because we have the intuitively reasonable idea that design means conforming to a target, we can imagine that the concept of CSI may eventually be characterized sufficiently precisely, perhaps even operationalized, and connected to the empirical world. We should not forget that the development
of many sciences followed this pattern of gradual clarification of concepts over time:
earlier we mentioned the case of diversity and stability in ecology, and noted that there
are many other such examples.


There may thus be a future theory of ID. At present, however, ID is not yet science, simply because we have no (substantive, rather than metaphorical) theory of ID that is comprehensible without implicit appeal to prior acquaintance with Christian theology and the metaphorical invocation of its fundamental concepts of intelligence and design.

The argument of this note was designed to show that the most credible philosophical
argument against ID being treated as science is to point out the absence of any positive
specification of its fundamental concepts, intelligence and design, rather than to have
recourse to a demarcation criterion between science and non-science. The basic claim
is that, in the absence of such a specification, ID cannot be a substantive theory, scientific or not. In the case of intelligence, there is no positive specification at all. In the
case of design, there is no coherent specification. If this argument is sound, it has the
following two somewhat unexpected implications.
First, virtually all critics of ID argue that part of what makes it not science is that it violates methodological naturalism, the view that we can understand how the world works through our usual scientific methods, deploying logic and experiments in tandem.
Some ID proponents and other creationists accept methodological naturalism, while
others reject it either on its own (e.g., Plantinga 1996) or because they think it slides
into metaphysical naturalism (e.g., Johnson 1995). The argument of this paper shows
that the issue of naturalism is independent of the question as to what ails ID when
it is supposed to be science. There can be coherent substantive non-naturalist and
even anti-naturalist epistemological positions: what is unacceptable about ID, in contrast, is that it does not have substantive content. In other words, while the question
of naturalism may be interesting and important in the philosophy of science, delineating why ID is not science requires no commitment to naturalism. Note that I am happy to promote naturalism in all contexts, including a modest metaphysical naturalism (Sarkar 2007a), but the point here is that the status of naturalism is independent
of what is intellectually lacking in the claim that ID is science. This point remains
true even though a rejection of naturalism is an explicit and important part of the ID
agenda.
Second, this discussion raises troubling questions regarding parts of the 2005 decision by Judge John E. Jones III in Kitzmiller vs. Dover (2005a), which is justly viewed as a watershed development in the campaign by ID proponents to introduce ID in high school science curricula in the United States. The Establishment
Clause of the First Amendment to the United States Constitution prohibits any government entity from aiding the establishment of any religious doctrine. Applying the
so-called Endorsement Test to see whether the Dover Area School Board’s actions
violated the Establishment Clause, Jones concluded that (i) an “objective” observer,
(ii) an “objective” student, and (iii) an “objective” Dover citizen would all recognize that teaching ID or harping on alleged problems with evolutionary theory constitute religious strategies that have demonstrably emerged from earlier versions of
creationism.


These conclusions were sufficient to show that ID fails the Endorsement Test.15
This means ID violates the Establishment Clause and cannot form part of high school
science curricula. Though this essentially resolved the legal questions in Kitzmiller vs.
Dover (2005a,b), Jones went on to address the question whether ID is science because,
after 21 days of expert testimony, he felt, probably correctly, that:
“no other tribunal in the United States [was] in a better position . . . to traipse into this controversial area. Finally, we will offer our conclusion on whether ID is science not just because it is essential to our holding that an Establishment Clause violation has occurred in this case, but also in the hope that it may prevent the obvious waste of judicial and other resources which would be occasioned by a subsequent trial involving the precise question which is before us (Kitzmiller vs. Dover 2005b).”
Jones used three demarcation criteria, two of which referred to the failure of ID’s negative argumentation against evolution, while the third constituted an endorsement of methodological naturalism. Throughout the judgment Jones relied on the claim that scientific explanations may not rely on supernatural causes, that is, on causes which cannot be investigated through standard scientific methods (empirical tests and reasoning). What Jones says is correct: that scientific explanations should only use resources to which we have epistemic access through standard scientific methods is a cogent adequacy condition for science.16 Nevertheless, if the argument of this paper is sound,
Jones need not have entered this controversial territory even though the legal precedent he put in place—as he suspected—may turn out to be a commendable social service in the long run. (Some other legal scholars—see, for example, Lofaso (2006)—continue to base objections to the teaching of ID in public schools on the role of naturalism in science and on demarcation criteria. This strategy seems to me to be misplaced.)
Thus, to show that ID is not science does not require recourse to methodological
naturalism, let alone a precise demarcation criterion between science and non-science.
Let me end by suggesting that the objection to ID raised here is the “natural” response we should have had to that doctrine in the first place, provided that we do not privilege religion over science with respect to factual claims about the empirical world. Faced with ID, if we add the claim that the designer is a conscious physical entity, the
natural reaction should be to regard ID as coherent but with no evidence whatsoever
to support it and all evidence against. We would not think of it as science. But if we
are told that the designer is not physical, and that we are not talking about a conscious
designer modeled on the Judeo-Christian-Islamic “God,” we no longer have any clue
what “intelligence” means. Once again, ID is not science but, now, mainly because
we simply do not know what it is saying.
Finally, attempting to cash out these intuitions about what is wrong with ID in terms
of demarcation criteria or on the basis of naturalism is both unnecessary and, it seems

15 This point is missed by Monton (2006), who suggests that Jones’ demarcation criteria played a decisive role in the Dover decision. Note that Jones also used the Lemon test, which will not be discussed here.
16 Note that this condition can also not be used as a demarcation criterion between science and non-science:
we may pursue a naturalistic agenda in ethics or in mathematics without transforming it into science.


to me, a tactical mistake: it allows ID proponents to exploit philosophical controversies to camouflage the basic incoherence of their claims. (In their new pamphlet responding to the Kitzmiller vs. Dover (2005a,b) decision, the Discovery Institute has already started playing this game (DeWolf et al. 2006).) Perhaps the situation was different in the 1980s when Ruse (1982) argued for the effectiveness of the falsifiability criterion in the legal context. However, the new generation of ID creationists is somewhat more philosophically and much more rhetorically sophisticated, and that strategy can no longer be expected to work. Ultimately, the most damning part of the decision in the Kitzmiller vs. Dover (2005a,b) case should be the way in which Judge Jones interpreted the Endorsement Test and not his addendum on demarcation criteria.

References

Behe, M. J. (1996). Darwin’s black box: The biochemical challenge to evolution. New York: Free Press.
Behe, M. J. (2001). Reply to my critics: A response to reviews of Darwin’s black box. Biology and Philosophy, 16, 685–709.
Bridgman, P. W. (1927). The logic of modern physics. New York: Macmillan.
Davis, P., & Kenyon, D. H. (1993). Of pandas and people: The central questions of biological origins (2nd ed.). Dallas: Haughton Publishing.
Dembski, W. A. (1998). The design inference: Eliminating chance through small probabilities. New York: Cambridge University Press.
Dembski, W. A. (2002). No free lunch: Why specified complexity cannot be purchased without intelligence.
Lanham, MD: Rowman & Littlefield.
DeWolf, D., West, J., Luskin, C., & Witt, J. (2006). Traipsing into evolution: Intelligent design and the
Kitzmiller vs. Dover decision. Seattle: Discovery Institute.
Elsberry, W., & Shallit, J. (2003). Information theory, evolutionary computation, and Dembski’s ‘complex specified information.’ Accessed 18 December 2005 from http://www.antievolution.org/people/wre/papers/eandsdembski.pdf.
Fitelson, B., Stephens, C., & Sober, E. (1999). How not to detect design—Critical notice: William A. Dembski, The Design Inference. Philosophy of Science, 66, 472–488.
Forrest, B., & Gross, P. R. (2004). Creationism’s Trojan horse: The wedge of intelligent design. New York:
Oxford University Press.
Glymour, C. (1980). Theory and evidence. Princeton: Princeton University Press.
Hempel, C. G. (1961). A logical appraisal of operationism. In P. Frank (Ed.), The validation of scientific
theories (pp. 56–69). New York: Collier.
Hempel, C. G. (1966). Philosophy of natural science. Englewood Cliffs, NJ: Prentice-Hall.
Hochberg, H. (2006). Verifiability. In S. Sarkar & J. Pfeifer (Eds.), The philosophy of science: An encyclopedia, Vol. 2 (pp. 851–864). New York: Routledge.
Johnson, P. E. (1995). Reason in the balance: The case against naturalism in science, law and education.
Downers Grove, IL: InterVarsity Press.
Justus, J. (2006). Cognitive significance. In S. Sarkar & J. Pfeifer (Eds.), The philosophy of science: An encyclopedia, Vol. 1 (pp. 131–140). New York: Routledge.
Justus, J. (2007). The stability-diversity-complexity debate of community ecology: A philosophical analysis.
Ph. D. Dissertation, University of Texas.
Kitzmiller vs. Dover. (2005a). Dover Area School District. 400 F Supp 2d 707.
Kitzmiller vs. Dover. (2005b). Dover Area School District. 400 F Supp 1255.
Laudan, L. (1983). The demise of the demarcation problem. In M. E. Ruse (Ed.), But is it science?
(pp. 337–350). Amherst, NY: Prometheus Press.
Lofaso, A. M. (2006). Does changing the definition of science solve the establishment clause problem
for teaching intelligent design as science in public schools? Doing an end-run around the constitution.
Pierce Law Review, 4, 219–277.
Loomis, E., & Juhl, C. F. (2006). Explication. In S. Sarkar & J. Pfeifer (Eds.), The philosophy of science: An encyclopedia, Vol. 1 (pp. 287–294). New York: Routledge.
Majer, U. (2006). David Hilbert. In S. Sarkar & J. Pfeifer (Eds.), The philosophy of science: An encyclopedia,
Vol. 1 (pp. 356–361). New York: Routledge.


McIntosh, R. P. (1985). The background of ecology: Concept and theory. Cambridge, UK: Cambridge
University Press.
Miller, K. R. (1999a). Finding Darwin’s god. New York: Harper Collins.
Miller, K. R. (1999b). The evolution of vertebrate blood clotting. Accessed 10 December 2005, from http://www.millerandlevine.com/km/evol/DI/clot/Clotting.html.
Monton, B. (2006). Is intelligent design science? Dissecting the Dover decision. Unpublished. Accessed
28 July 2007, available from http://philsci-archive.pitt.edu/archive/00002592/.
Nickles, T. (2006). The problem of demarcation. In S. Sarkar & J. Pfeifer (Eds.), The philosophy of science: An encyclopedia, Vol. 1 (pp. 188–197). New York: Routledge.
Orr, H. A. (1996–1997). Darwin v. intelligent design (again). Boston Review, December/January, pp. 28–31.
Pennock, R. T. (1999). The tower of Babel: The evidence against the new creationism. Cambridge, MA:
MIT Press.
Perakh, M. (2004). Unintelligent design. Amherst, NY: Prometheus Books.
Pimm, S. L. (1991). The balance of nature? Ecological issues in the conservation of species and communities. Chicago: University of Chicago Press.
Plantinga, A. (1996). Methodological naturalism? In J. M. van der Meer (Ed.), Facets of faith and science: Historiography and modes of interaction, Vol. 1 (pp. 177–221). Lanham, MD: University Press of America.
Ruse, M. (1982). Pro judice. Science, Technology, and Human Values, 7, 19–23.
Sarkar, S. (2007a). Doubting Darwin? Creationist designs on evolution. Oxford, UK: Blackwell.
Sarkar, S. (2007b). From ecological diversity to biodiversity. In D. L. Hull & M. Ruse (Eds.), The Cambridge
companion to the philosophy of biology. Cambridge, UK: Cambridge University Press.
Scott, E. C. (2004). Evolution vs. creationism: An introduction. Westport: Greenwood Press.
Scott, E. C., & Matzke, N. J. (2007). Biological design in science classrooms. Proceedings of the National
Academy of Sciences (USA), 104, 8669–8676.
Shallit, J., & Elsberry, W. R. (2004). Playing games with probability: Dembski’s complex specified information. In M. Young & T. Edis (Eds.), Why intelligent design fails: A scientific critique of the new creationism (pp. 121–138). New Brunswick: Rutgers University Press.
Sober, E. (2004). The design argument. In W. Mann (Ed.), Blackwell guide to the philosophy of religion
(pp. 27–54). Oxford, UK: Blackwell.
