An
INTRODUCTION
to the
FORMAL STUDY
of
REASONING
JOHN L. POLLOCK
University of Arizona
1
The Formal Study
of Reasoning
1. If I push the box beneath the bananas, then it will be beneath the
bananas.
2. If the box is beneath the bananas and I climb on top of it, then I will
be able to reach the bananas.
3. If I am able to reach the bananas, then I can retrieve them.
Therefore, if I push the box beneath the bananas and climb on top of it,
then I can retrieve the bananas.
1. If P, then B.
2. If B and C, then A.
3. If A, then R.
What should be noticed is that this reasoning would be equally good reasoning
if we let P, B, C, A, and R abbreviate anything else. It is the general
structure of the reasoning that makes it good reasoning.
To solve more complex problems, we may have to engage in more
complex reasoning. For instance, if I want to see a movie, and the only way
I can get there is by taxi, I may reason as follows:
1. If I am at the movie theatre and I buy a ticket, I can see the movie.
2. If I am at the movie theatre and I have seven dollars, I can buy a
ticket.
3. If I am at home and I pay a cab driver ten dollars and ask him to
drive me to the theatre, he will do so.
4. If the cab driver drives me to the theatre, then I will be at the theatre.
5. If I pay the cab driver ten dollars, then I will have ten dollars less
than I did to begin with.
This reasoning is much like the reasoning involved in retrieving the bananas,
except that it also involves some mathematical reasoning.
Solving different kinds of problems may require different kinds of
knowledge about the world and different kinds of reasoning. If my goal is
to send a rocket to the moon, I will have to know a lot about physics and
engineering, and my reasoning will be heavily mathematical. If my goal is
to be elected mayor of Tucson, I will need different kinds of knowledge and
the reasoning will also be different. If my goal is to write a logic text, the
required knowledge and reasoning will be different yet.
1. If P, then B.
2. If B and C, then A.
3. If A, then R.
It is the form of the reasoning that makes it good rather than its content.
Logic is the study of this phenomenon. Consider another example. The
following is clearly good reasoning:
1. All Republicans are faithful to their spouses.
2. No one who is faithful to his or her spouse will ever get into political trouble.
Therefore, no Republican will ever get into political trouble.
One may doubt the truth of the premises, but the reasoning is impeccable.
That is, given the premises, one can infer the conclusion. The correctness of this
inference, however, has nothing to do with the fact that it is about Republicans,
spouses, and political trouble. The inference has the general form:
1. All A are B.
2. No B is C.
Therefore, no A is C.
This reasoning is equally good, and for the same reason. Again, one might
object to the premises, but given the premises, the conclusion follows.
Reasoning that is good because of the form of the reasoning can be
called formally good reasoning. Logic is the study of formally good reasoning.
In logic we investigate what forms of reasoning generate formally good
reasoning.
but they are identical twins so the eyewitness could not tell them
apart. The detectives have discovered the following clues:
We also know:
We can solve this puzzle by reasoning from the premises in the following
way:
9. It follows from (1) that if John wasn’t the murderer, Joe was.
10. It follows from (3) that either John was in Phoenix or Joe was in
Phoenix.
11. Suppose John is not the murderer. Then we can reason as follows:
12. From (11) it follows that Joe is the murderer.
13. From (2) and (12) it follows that Joe was in Tucson.
14. From (8) and (13) it follows that Joe was not in Phoenix.
15. From (3) and (14) it follows that John was in Phoenix.
16. From (4) and (15) it follows that Joe had breakfast with Maria.
17. From (5) and (16) it follows that Joe made up with Maria.
18. From (6) and (17) it follows that John asked Maria to make up
with Joe.
19. From (7) and (18) it follows that John was in Tucson.
20. From (3) and (19) it follows that Joe was in Phoenix, and hence
was not the murderer.
21. From (1) and (20) it follows that John is the murderer.
22. So from the supposition that John is not the murderer, it follows that
he is. Hence that supposition cannot be true, so:
23. John is the murderer.
Consider how this puzzle was solved. We began with premises (1)–(8), and
we ended with the conclusion (23). However, it is not initially obvious that
the conclusion follows from the premises, so we fill the gap with a number
of intermediate inferences each of which is obvious by itself. Stringing
those inferences together allows us to establish the conclusion. This illustrates
that reasoning is essentially a tool whose purpose is to enable us to acquire
new knowledge on the basis of things we already know.
When we represent our reasoning in this way, what we write is an
argument. An argument is a list of statements each of which is either a
premise or inferred from earlier steps of the argument. The individual
steps represent either premises or inferences from previous steps. It is con-
venient to number the steps of the argument, and include an explanation of
how each statement is inferred.
First Puzzle:
When Arthur, Bill, and Carter eat out:
1. Each orders either ham or pork (but not both).
2. If Arthur orders ham, Bill orders pork.
3. Either Arthur or Carter orders ham, but not both.
4. Bill and Carter do not both order pork.
Who could have ordered ham one day and pork another day?
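Puzzles like this can also be checked mechanically by enumerating every possibility and keeping only those consistent with the premises. Here is a minimal sketch in Python (not part of the text; the encoding of the premises is our own reading of them):

```python
from itertools import product

# Brute-force check of the first puzzle: enumerate every way Arthur,
# Bill, and Carter could order (premise 1 is built into the enumeration),
# and keep the assignments satisfying premises 2-4.
solutions = []
for arthur, bill, carter in product(["ham", "pork"], repeat=3):
    if arthur == "ham" and bill != "pork":        # premise 2
        continue
    if (arthur == "ham") == (carter == "ham"):    # premise 3: exactly one orders ham
        continue
    if bill == "pork" and carter == "pork":       # premise 4
        continue
    solutions.append((arthur, bill, carter))

# A man could order ham one day and pork another only if the premises
# leave his order undetermined.
for i, name in enumerate(["Arthur", "Bill", "Carter"]):
    orders = {s[i] for s in solutions}
    print(name, "is fixed" if len(orders) == 1 else "could order either")
```

Comparing the surviving assignments shows which orders the premises pin down and which they leave open, which is exactly the question the puzzle asks.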
Second Puzzle:
Fred knows five women: Ann, Betty, Carol, Deb, and Eve.
1. The women are in two age brackets: three women are under 30, and two
women are over 30.
2. Two women are teachers and the other three women are secretaries.
3. Ann and Carol are in the same age bracket.
4. Deb and Eve are in different age brackets.
5. Betty and Eve have the same occupation.
6. Carol and Deb have different occupations.
7. Of the five women, Fred will marry the teacher over thirty.
8. Each of these women is either a secretary or a teacher but not both.
9. Fred will marry one of these women.
Whom will Fred marry?
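The same brute-force strategy applies here: choose who is over thirty and who is a teacher, discard combinations that violate the premises, and see who the teacher over thirty must be. A sketch, with the encoding of premises 1-9 being our own:

```python
from itertools import combinations

women = ["Ann", "Betty", "Carol", "Deb", "Eve"]
candidates = []
# Choose the two women over 30 (premise 1) and the two teachers
# (premises 2 and 8), then filter by premises 3-7.
for over30 in combinations(women, 2):
    for teachers in combinations(women, 2):
        if ("Ann" in over30) != ("Carol" in over30):      # premise 3
            continue
        if ("Deb" in over30) == ("Eve" in over30):        # premise 4
            continue
        if ("Betty" in teachers) != ("Eve" in teachers):  # premise 5
            continue
        if ("Carol" in teachers) == ("Deb" in teachers):  # premise 6
            continue
        bride = [w for w in teachers if w in over30]      # premise 7
        if len(bride) == 1:                               # exactly one teacher over 30
            candidates.append(bride[0])

print(set(candidates))  # the premises leave only one possibility
```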
It is clear that this is not a deductive argument. The fact that you have not
observed any earthworms with wings does not make it impossible for there
to be some. Europeans once reasoned similarly about swans, concluding
that all swans are white. But it turned out that there are black swans in
Australia.
It is a general characteristic of inductive reasoning that it is not deductive.
In fact, that is the whole point of it. The value of inductive reasoning is that
it allows us to discover general truths about the world. We can never
observe general truths. All we can do is observe particular instances of
them, and infer what the general truths are on the basis of the particular
instances. This is what science is all about.
There is a temptation to suppose that if inductive reasoning is not
deductive, then there is something wrong with it. But that is to mistake
what the purpose of reasoning is. Reasoning is a tool of rationality, and
what it does is lead us to reasonable beliefs. The beliefs must be reasonable,
but they need not be infallible. None of our interesting beliefs about the
world are infallible. Try to think of anything you know that you know on
the basis of premises that deductively guarantee its truth. The only examples
you are apt to find are mathematical examples or simple tautologies like “It
is snowing or it isn’t snowing”. Virtually all of the copious knowledge you
have of the world and in terms of which you successfully solve practical
problems is had on the basis of reasoning that is not deductive.
Good arguments that are not deductive make their conclusions reason-
able, but do not absolutely guarantee their truth. When we employ such
arguments, we must always be prepared to retract the conclusions if subse-
quent discoveries mandate that. Such arguments are said to be defeasible,
because they can be “defeated”. The considerations that defeat a defeasible
inference are called defeaters. For example, consider the earthworm argument.
What might a defeater be? Here is an obvious example:
5. Deductive Inference
5.1 Logical Possibility
It is customary to try to define deductive inference in terms of logical
possibility. This is the way we proceeded when we introduced the term in
section four. We said that a deductive inference is one in which the conclusion
could not possibly be false while the premises are true.
There are many kinds of possibility. There is an "epistemic" sense in
which it is possible that the President is now in Honduras but it is not
possible that he is now on Mars. This is epistemic in the sense that, "for
all we know", he might be in Honduras, but our knowledge rules out the
possibility of his being on Mars.
There is also a non-epistemic sense in which, even when we know
that the President is currently in Washington, we are willing to say that the
President could now be in Honduras but could not now be on Mars. This
has to do with “practical possibility”. The President could have gone to
Honduras, but he could not have gone to Mars even if he wanted to.
Another variety of possibility is physical possibility. Something is
physically possible just in case it is not ruled out by fundamental laws of
nature. For example, the special theory of relativity tells us that nothing
can move faster than the velocity of light, so traveling faster than light is
physically impossible. On the other hand, traveling at 99.999% of the velocity
of light is physically possible.
Suppose we have an argument whose premises are P1, ..., Pn and whose conclusion is Q. We can write the argument as:
P1
P2
P3
...
Pn
Therefore, Q.
Note that when we write the argument this way, we are ignoring the
intermediate inferences. That is, we are focusing on the premises and
conclusion and abstracting from how the conclusion was gotten from the
premises. We say that such an argument is valid if, and only if, it is logically
impossible for its premises to be true while the conclusion is false. In other words,
the argument is valid if, and only if, the inference it encodes is deductive.
Corresponding to the preceding argument we can construct the
statement:
If P1 and P2 and P3 and ... and Pn, then Q.
This statement says that if the premises of the argument are all true then the
conclusion is true. This statement is called the corresponding conditional of
the argument. A conditional is any statement of the form “If P then Q.” The
corresponding conditional of an argument is the conditional in which P
says that the premises of the argument are all true and Q is the conclusion
of the argument. For example, the corresponding conditional of the following
argument:
1. All Republicans are faithful to their spouses.
2. No one who is faithful to his or her spouse will ever get into political trouble.
Therefore, no Republican will ever get into political trouble.
is
If all Republicans are faithful to their spouses, and no one who is faithful
to his or her spouse will ever get into political trouble, then no Republican
will ever get into political trouble.
An argument is valid if, and only if, its corresponding conditional is logically
necessary.
If an argument is valid and its premises are true, it follows that its
conclusion is true. It should be emphasized, however, that if the premises
of a valid argument are false, it does not follow that the conclusion is also
false. The following is an example of a valid argument with (presumably)
false premises and a true conclusion:
Given a valid argument, the only time we can infer anything about whether
its conclusion is true is when all its premises are true. If one or more of its
premises are false, then its conclusion can be either true or false. The only
combination of truth and falsity that is prohibited in a valid argument is all
premises being true and the conclusion being false. Any other combination
is possible.
The premises and conclusion are unchanged from the previous argument,
so the new argument is still valid. However, (3) does not follow from (1)
and (2), and the conclusion does not follow from (3), so the argument is not
inferentially correct. The argument is only valid “by accident”, because
two errors of reasoning cancelled out.
The term “sound” is sometimes used to refer to arguments that are
not only valid, but also through and through inferentially correct.
Unfortunately, that conflicts with the more customary use of “sound” to
refer to arguments that also have true premises, so we will not adopt that
terminology here. We might say instead that an argument is deductively
correct if, and only if, every inference in it is deductive (and hence inferentially
correct). If an argument is deductively correct, it follows that it is valid, but
as we have seen, an argument can be valid without being deductively correct.
objection was that we cannot make clear sense of these concepts by providing
philosophical analyses of them. That is a rather strange objection, because
it could be applied equally to most of our fundamental concepts. For example,
philosophers have been no more successful at providing philosophical
analyses of the concept of a physical object, or the concept of time. Does
that mean that such concepts should be viewed as suspect? Surely not.
But regardless of whether the Quinean arguments are any good, they
have convinced a number of philosophers. Without the concepts of logical
necessity and logical possibility, we cannot explain deductive inference as
above. This throws the foundations of logic into disarray. However, there
is another way of characterizing deductive inferences. A deductive inference
is one for which there are no (could be no) defeaters. This is to characterize
deductive inference “functionally”, by describing how it works and what
role it plays in human cognition. This approach takes defeasible inference
to be the norm, and then characterizes deductive inference to be inference
that is not subject to defeat.
This approach cries out for an explanation of what determines whether
there are or could be any defeaters. That is to be explained by explaining
where the structure of reasoning comes from. My own conviction is that it
is dictated by the cognitive architecture of the human mind, which in turn
is determined by our physiology. But this is a long story, and I will not
pursue the details here.1
This approach provides a way of resurrecting logical necessity from
its Quinean coffin. The resurrection proceeds by turning the standard story
on its head and explaining logical necessity in terms of deductive inference
rather than the other way around. Deductive inference is characterized
functionally, as sketched above, and necessary truths are just those that can
be established purely deductively, without using any defeasible steps in
our reasoning.
1 Some of the details of this story are spelled out in John Pollock, Cognitive Carpentry, MIT Press, 1995; and in John Pollock and Joseph Cruz, Contemporary Theories of Knowledge, second edition, Rowman and Littlefield, 1999.
6. Formal Logic
6.1 Argument Forms and Formal Validity
It was remarked above that an argument can be inferentially correct
either because of the particular concepts that are employed in it or because
of its general structure. When an argument is inferentially correct for the
latter reason, we say that its form makes it correct.
The form of an argument is obtained by replacing some of its terms by
schematic letters. For example, we might write a simple argument form as:
2 See Pollock and Cruz, Contemporary Theories of Knowledge.
1. All A are B.
2. No B is C.
Therefore, no A is C.
Every statement of the corresponding conditional form, "If all A are B and
no B is C, then no A is C", will be logically necessary.
statement rather than the content of its constituents.
Statements do not have unique logical forms. Consider the statement
If John comes and Mary stays home then Bill will be unhappy.
This statement has the form:
If P and Q then R.
But it also has the more general form:
If S then R.
A statement might have one logical form all of whose instances are logically
necessary, and another (more general) logical form some instances of which
are not logically necessary. For example, consider the statement
If all featherless bipeds are men, and all men are mortal, then all
featherless bipeds are mortal.
Every statement of this form is logically necessary. But it also has the more
general form:
If P then Q.
Not every statement of this form is logically necessary. A statement such as
"If Herbert is a bachelor then Herbert is unmarried"
is logically necessary, but it does not exhibit a form that is formally necessary.
1. All A are B.
2. No B is C.
Therefore, no A is C.
is
If all A are B and no B is C then no A is C.
An instance of this form is: If all ducks have feathers and nothing having
feathers is a mammal then no ducks are mammals.
7. Logical Theories
7.1 The Implicational Calculus
Logic is the study of valid argument forms and formally necessary
statement forms. It was remarked above that there is no such thing as the
form of a statement or argument. Statements and arguments can exemplify
many different forms. Different logical theories are generated by focusing on
different classes of statement and argument forms. To take a slightly simplistic
example, consider all statement and argument forms that can be constructed
using just “and”, “if ... then”, punctuation, and schematic letters standing
for statements. Examples of statement forms constructed using these mate-
rials are:
If P and Q then R.
If, if P then Q and if Q then R, then if P then R.
Syntax
The first step in constructing a logical theory is generally to introduce
an artificial representation of the statement forms studied. We could express
them in English (augmented with schematic variables), but there are two
problems with that. First, they would be hard to read. Second, English
sentences are often ambiguous, and it is desirable to have a notation that
avoids that ambiguity. For example, consider the expression we wrote
above:
If, if P then Q and if Q then R, then if P then R.
This could mean several different things depending upon how the parts of
the expression are grouped together. We could disambiguate it using pa-
rentheses for punctuation, in either of the following ways:
If [(if P then Q) and (if Q then R)] then (if P then R).
If {if P then [Q and (if Q then R)]} then (if P then R).
The first reading is probably the more natural, but the rules of English do
not preclude the second reading. We want to avoid such ambiguities, and
the best way to do it is to use parentheses for punctuation, just as we did
above.
The English words “and” and “if ... then” also turn out to be ambiguous.
For example, “P and Q” can express simple conjunction, saying merely that P
and Q are both true, or it can express a temporal relation as in “He lay
down and fell asleep”. As a simple conjunction, “P and Q” is equivalent to
“Q and P”, but “He lay down and fell asleep” means something different
from “He fell asleep and lay down”. To avoid this source of ambiguity, it is
customary to introduce artificial symbols to replace the English words, and
rule that these symbols have some particular interpretation of the English
words as their meanings. The standard procedure is to symbolize “P and
Q” as (P & Q), taking “&” to express what I called “simple conjunction”.
“If P then Q” is symbolized as (P → Q). Using these devices, we can
symbolize the two readings of “If, if P then Q and if Q then R, then if P then
R” as:
(((P → Q) & (Q → R)) → (P → R))
((P → (Q & (Q → R))) → (P → R))
The expressions we can write in this way using “&”, “→”, schematic letters,
and parentheses are called formulas of the implicational calculus.
In general, in constructing a logical theory, the first step is to define an
artificial symbolism and define formulas of the theory to be the expressions of
the symbolism that express statement forms studied by the theory. The
formulas will be constructed out of logical symbols, in this case “&” and
“→”, schematic letters, and parentheses. This part of the logical theory is
its syntax.
Semantics
The next step in constructing a logical theory is to define what the
formulas mean, i.e., to explain precisely what statement forms are expressed
by any given formula. This explanation constitutes the semantics of the
theory. The general procedure is to define the notion of an interpretation,
and then explain the meaning of a formula by saying which interpretations
make it true.
The definition of an interpretation varies a great deal from one logical
theory to another. In general, interpretations are defined so that they abstract
from the entire meaning and just specify the minimal amount of information
needed to determine whether a formula is true or false. In the implicational
calculus, it turns out that all we have to know to determine the truth value
of a formula (i.e., whether it is true or false) is the truth values of the
schematic letters out of which it is built. So we can define an interpretation
of the implicational calculus to be an assignment of truth values (truth or
falsity) to schematic letters. Other logical theories may have to include
more or different information in their interpretations.
Given a definition of the interpretations of the logical theory, formulas
receive their meanings by specifying the rules that determine under which
interpretations they are true. The truth rule for “&” is obvious:
(P & Q) is true if, and only if, P is true and Q is true.
The truth rule for “→” is not obvious. The following rule is normally used:
(P → Q) is true if, and only if, it is not the case that Q is false but P is
true.
This rule will be discussed at length in the next chapter. For now it will just
be used as an illustration of the kind of rule one might adopt.
Repeated application of these rules enables us to determine the truth
value of any formula of the implicational calculus under any interpretation.
For example, suppose P is assigned truth, and Q and R are assigned falsity.
Then consider the formula
[(P → Q) & (Q → R)] → (P → R)
The truth rule for “→” makes (P → Q) and (P → R) false, but makes (Q →
R) true. The rule for “&” then makes [(P → Q) & (Q → R)] false. Finally,
the rule for “→” makes the whole formula true.
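The step-by-step computation just illustrated is purely mechanical, so it is easy to sketch in code. In this sketch (not from the text), formulas of the implicational calculus are represented as nested tuples, a representational choice of our own:

```python
# Formulas of the implicational calculus as nested tuples: a schematic
# letter is a string, ("&", A, B) a conjunction, ("->", A, B) a conditional.
def truth_value(formula, interpretation):
    """Truth value of a formula under an interpretation, i.e. under an
    assignment of True/False to the schematic letters."""
    if isinstance(formula, str):          # schematic letter
        return interpretation[formula]
    op, a, b = formula
    if op == "&":                         # truth rule for "&"
        return truth_value(a, interpretation) and truth_value(b, interpretation)
    if op == "->":                        # truth rule for "->"
        return not (truth_value(a, interpretation)
                    and not truth_value(b, interpretation))
    raise ValueError("unknown connective: " + op)

# The example from the text: P is assigned truth, Q and R falsity.
i = {"P": True, "Q": False, "R": False}
antecedent = ("&", ("->", "P", "Q"), ("->", "Q", "R"))
whole = ("->", antecedent, ("->", "P", "R"))
print(truth_value(("->", "P", "Q"), i))   # False
print(truth_value(antecedent, i))         # False
print(truth_value(whole, i))              # True
```

Each recursive call applies exactly one truth rule, mirroring the repeated application of rules described above.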
Different logical theories use different techniques for formulating the
semantics. In general, the desideratum is to describe the semantics in a
way that makes it possible to prove theorems of the following sort:
A formula is formally necessary if, and only if, it is true under every
interpretation.
This theorem holds for the implicational calculus, although we will not
prove it here.
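Because there are only finitely many interpretations of the letters in a given formula, truth under every interpretation can be tested by sheer enumeration. A sketch (again with our own nested-tuple representation of formulas):

```python
from itertools import product

def truth_value(f, i):
    # f: schematic letter (string), ("&", A, B), or ("->", A, B)
    if isinstance(f, str):
        return i[f]
    op, a, b = f
    if op == "&":
        return truth_value(a, i) and truth_value(b, i)
    return not (truth_value(a, i) and not truth_value(b, i))  # "->"

def letters(f):
    """The schematic letters occurring in f."""
    return {f} if isinstance(f, str) else letters(f[1]) | letters(f[2])

def is_valid(f):
    """True iff f is true under every interpretation of its letters."""
    ls = sorted(letters(f))
    return all(truth_value(f, dict(zip(ls, values)))
               for values in product([True, False], repeat=len(ls)))

# [(P -> Q) & (Q -> R)] -> (P -> R) is true under all eight
# interpretations; (P -> Q) by itself is not.
hyp = ("->", ("&", ("->", "P", "Q"), ("->", "Q", "R")), ("->", "P", "R"))
print(is_valid(hyp))               # True
print(is_valid(("->", "P", "Q")))  # False
```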
Formulas true under every interpretation are said to be valid. This is
unfortunate terminology, because it has no direct connection to the use of
“valid” in talking about arguments. However, it is the standard terminology,
so we will conform to convention and use it here. The reader is warned,
however, not to confuse the two meanings of “valid”.
Defining validity for formulas in this way, the preceding theorem can
be re-expressed as:
A formula is formally necessary if, and only if, it is valid.
Arguments
Our central interest in logic is usually with arguments rather than
statements. The third step in constructing a logical theory is to specify
what arguments can be constructed out of the formulas of the theory. Different
theories can deal with arguments having different structures. The simplest
arguments will be finite lists of formulas, constructed according to specified
rules of inference. Rules of inference are rules telling us that certain formulas
can be inferred from others. For example, in the implicational calculus we
might use the following rule of inference in constructing arguments:
From P and (P → Q), we can infer Q.
This simple rule of inference has a long history, and accordingly has a Latin
name, modus ponens (meaning “method of affirmation”).
Another simple rule that we might adopt is sometimes called adjunction:
From P and Q, we can infer (P & Q).
Let us also adopt a pair of rules that are collectively called simplification:
From (P & Q), we can infer P; and from (P & Q), we can infer Q.
Using these three rules, we can construct the following derivation:
1. (P & Q) (premise)
2. (P → R) (premise)
3. (Q → S) (premise)
4. P (simplification, from 1)
5. Q (simplification, from 1)
6. R (modus ponens, from 2 and 4)
7. S (modus ponens, from 3 and 5)
8. (R & S) (adjunction, from 6 and 7)
Derivations are argument forms. They are the argument forms studied
by the logical theory in which they occur. A derivation is said to be a
derivation of whatever formula appears on its last line from whatever premises
are used in it. Thus the preceding derivation is a derivation of (R & S) from
(P & Q), (P → R), and (Q → S).
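Because each rule of inference is purely formal, checking a derivation can itself be mechanized: each line either is a premise or must match the pattern of the rule it cites. Here is a sketch of such a checker, run on the derivation above (the encoding as nested tuples is our own):

```python
# Checking a derivation mechanically. Formulas are nested tuples;
# each line of the derivation is (formula, rule, cited line numbers).
def step_ok(formula, rule, cited, earlier):
    got = [earlier[n] for n in cited]
    if rule == "premise":
        return cited == []
    if rule == "modus ponens":        # from (A -> B) and A, infer B
        cond, a = got
        return cond == ("->", a, formula)
    if rule == "adjunction":          # from A and B, infer (A & B)
        a, b = got
        return formula == ("&", a, b)
    if rule == "simplification":      # from (A & B), infer A (or infer B)
        return got[0][0] == "&" and formula in got[0][1:]
    return False

derivation = [
    (("&", "P", "Q"), "premise", []),
    (("->", "P", "R"), "premise", []),
    (("->", "Q", "S"), "premise", []),
    ("P", "simplification", [1]),
    ("Q", "simplification", [1]),
    ("R", "modus ponens", [2, 4]),
    ("S", "modus ponens", [3, 5]),
    (("&", "R", "S"), "adjunction", [6, 7]),
]

earlier = {}
for n, (formula, rule, cited) in enumerate(derivation, start=1):
    assert step_ok(formula, rule, cited, earlier), "line %d does not follow" % n
    earlier[n] = formula
print("every step follows by the cited rule")
```

The checker consults only the shapes of the formulas, never their content, which is exactly what makes derivations argument *forms*.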
We have seen that we can evaluate argument forms in terms of their
corresponding conditionals. An argument form is valid if, and only if, its
corresponding conditional is formally necessary. If the formal necessity of
formulas corresponds to validity, then an argument form is valid if, and
only if, its corresponding conditional is a valid formula. (Note the two
distinct senses of “valid” in the preceding sentence.) Hence the semantics
we have adopted for a logical theory provides the basis for investigating
the validity of argument forms constructed within the theory.
To facilitate the investigation of argument forms within a logical theory,
we define:
A set of formulas P1, ..., Pn entails a formula Q if, and only if, the
corresponding conditional (P1 & ... & Pn) → Q is valid.
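Entailment so defined reduces to checking the corresponding conditional under every interpretation, so it can be tested by the same enumeration technique. A self-contained sketch (nested-tuple formulas, our own representation):

```python
from itertools import product

def truth_value(f, i):
    if isinstance(f, str):
        return i[f]
    op, a, b = f
    if op == "&":
        return truth_value(a, i) and truth_value(b, i)
    return not (truth_value(a, i) and not truth_value(b, i))  # "->"

def letters(f):
    return {f} if isinstance(f, str) else letters(f[1]) | letters(f[2])

def entails(premises, q):
    """P1, ..., Pn entail Q iff ((P1 & ... & Pn) -> Q) is true under
    every interpretation, i.e. iff the corresponding conditional is valid."""
    conj = premises[0]
    for p in premises[1:]:
        conj = ("&", conj, p)
    cond = ("->", conj, q)
    ls = sorted(letters(cond))
    return all(truth_value(cond, dict(zip(ls, v)))
               for v in product([True, False], repeat=len(ls)))

# The premises of the earlier derivation entail its conclusion:
print(entails([("&", "P", "Q"), ("->", "P", "R"), ("->", "Q", "S")],
              ("&", "R", "S")))            # True
# But (P -> Q) alone does not entail Q:
print(entails([("->", "P", "Q")], "Q"))    # False
```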
The rules given above for the implicational calculus are sound but not
complete. Notice that the use of "sound" in talking about systems of
derivation is different from its use in talking about arguments.
8. Summary
Reasoning is inferentially correct if, and only if, given the premises, the
reasoning supports the conclusion. This is independent of whether the
premises are actually true.
A sound argument is one that is inferentially correct and also has true
premises.
An inference is deductive if, and only if, it is logically impossible for the
conclusion to be false when the premises are true.
An argument is valid if, and only if, it is logically impossible for its
conclusion to be false when its premises are true, i.e., if, and only if, the
inference it encodes is deductive.
An argument form is valid if, and only if, every argument of that form is
valid.
A statement form is formally necessary if, and only if, every statement of
that form is logically necessary.
It was then shown that we can evaluate the validity of an argument form by
looking at its corresponding conditional:
An argument form is valid if, and only if, its corresponding conditional
is formally necessary.
The semantics explains the meaning of the formulas. It does this by defining
what an interpretation is and giving truth rules that determine which
formulas are true relative to an interpretation. In the implicational calculus,
an interpretation is an assignment of truth values to the schematic letters.
We defined:
A formula is valid if, and only if, it is true under every interpretation.
A set of formulas P1, ..., Pn entails a formula Q if, and only if, the
corresponding conditional (P1 & ... & Pn) → Q is valid.
The Propositional Calculus
2
The Syntax of the
Propositional Calculus
where now R stands for “It is raining in the mountains”, S stands for “It is
snowing in the mountains”, and T stands for “It is raining in Tucson”.
The symbols we use to replace the sentential connectives are called the
logical symbols of the propositional calculus. They are as follows:
Symbol Meaning
~P it is false that P
(P & Q) P and Q (or, both P and Q)
(P ∨ Q) either P or Q
(P → Q) if P then Q
(P ↔ Q) P if and only if Q
• ~P is the negation of P.
• (P & Q) is the conjunction of P and Q, and P and Q are the conjuncts.
• (P ∨ Q) is the disjunction of P and Q, and P and Q are the disjuncts.
• (P → Q) is a conditional, P is the antecedent, and Q is the consequent.
• (P ↔ Q) is a biconditional. There are no standard names for the left
and right side of a biconditional.
A formula of the propositional calculus is any expression that can be constructed using the following six rules:
1. Every schematic letter (P, Q, R, P1, Q3, etc.) is a formula; these are the atomic formulas.
2. If A is a formula, so is ~A.
3. If A and B are formulas, so is (A & B).
4. If A and B are formulas, so is (A ∨ B).
5. If A and B are formulas, so is (A → B).
6. If A and B are formulas, so is (A ↔ B).
In writing the rules in this way, A and B are used as variables to stand for
arbitrary formulas of the propositional calculus. Such variables are called
metalinguistic variables.
These rules define the concept of a formula of the propositional calculus.
That is, an expression is a formula of the propositional calculus if, and only
if, it can be constructed by repeated application of these rules. To illustrate,
consider the formula
(P → (~Q & R))
We can construct this formula using the above six rules, as follows. We
begin with the smallest parts and work outwards. By Rule 1, P, Q, and R
are formulas. Then by Rule 2, ~Q is a formula. By Rule 3, as ~Q and R are
both formulas, (~Q & R) is a formula. Then by Rule 5, as P and (~Q & R)
are both formulas, (P → (~Q & R)) is a formula.
We can construct very complicated formulas using these rules. For
example, consider the formula
((P → (Q3 ↔ ~R4)) ↔ ~~(~P & ~(Q3 ↔ (R4 ∨ (R17 & ~~P)))))
Let us see how we would construct this formula using the six rules. Again,
we begin with the smallest parts and work outwards. By Rule 1:
P, Q3, R4, R17
are formulas because they are atomic formulas. As P and R4 are formulas,
we can use Rule 2 to construct the formulas:
~P, ~R4.
P, Q3, R4, R17   Rule 1
~R4, ~P   Rule 2
~~P   Rule 2
(R17 & ~~P)   Rule 3
(R4 ∨ (R17 & ~~P))   Rule 4
(Q3 ↔ (R4 ∨ (R17 & ~~P)))   Rule 6
~(Q3 ↔ (R4 ∨ (R17 & ~~P)))   Rule 2
(~P & ~(Q3 ↔ (R4 ∨ (R17 & ~~P))))   Rule 3
~(~P & ~(Q3 ↔ (R4 ∨ (R17 & ~~P))))   Rule 2
~~(~P & ~(Q3 ↔ (R4 ∨ (R17 & ~~P))))   Rule 2
(Q3 ↔ ~R4)   Rule 6
(P → (Q3 ↔ ~R4))   Rule 5
((P → (Q3 ↔ ~R4)) ↔ ~~(~P & ~(Q3 ↔ (R4 ∨ (R17 & ~~P)))))   Rule 6
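This bottom-up construction can be mirrored directly in code, with one function per formation rule, each supplying the parentheses the rule requires. A sketch (the function names are our own, not the text's):

```python
# One function per formation rule; each returns the new formula as a
# string, with the parentheses the rules require.
def neg(a):       return "~" + a                          # Rule 2
def conj(a, b):   return "(" + a + " & " + b + ")"        # Rule 3
def disj(a, b):   return "(" + a + " \u2228 " + b + ")"   # Rule 4
def cond(a, b):   return "(" + a + " \u2192 " + b + ")"   # Rule 5
def bicond(a, b): return "(" + a + " \u2194 " + b + ")"   # Rule 6

# Rebuilding the complicated example from the smallest parts outward:
left = cond("P", bicond("Q3", neg("R4")))
inner = disj("R4", conj("R17", neg(neg("P"))))
right = neg(neg(conj(neg("P"), neg(bicond("Q3", inner)))))
print(bicond(left, right))
```

Running the construction reproduces the formula above character for character, which is one way to see that the six rules really do generate it.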
Atomic formulas are the simplest formulas. Let us say that a formula
is molecular if it is not atomic. Then molecular formulas are those formulas
that contain logical symbols.
Sometimes we will want to talk about one formula being a part of
another formula. This just means that the first occurs somewhere in the
second. For example, P, Q, R, (P → Q), ~(P → Q), and [~(P → Q) ∨ R] are all
parts of the formula ~[~(P → Q) ∨ R]. Let us also adopt the convention of
saying that a formula is part of itself. Then
~[~(P → Q) ∨ R]
is also a part of ~[~(P → Q) ∨ R]. The atomic parts of a formula are those
parts of the formula that are atomic. Thus P, Q, and R are the atomic parts
of ~[~(P → Q) ∨ R], while (P → Q), ~(P → Q), [~(P → Q) ∨ R] and ~[~(P →
Q) ∨ R] are parts that are not atomic.
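The parts of a formula are easy to compute when formulas are represented as nested tuples (a representational choice of ours, not the text's). A sketch:

```python
# Computing the parts of a formula represented as a nested tuple:
# ("~", A) is a negation; ("&", A, B), ("v", A, B), ("->", A, B),
# ("<->", A, B) are the binary compounds.
def parts(f):
    """All parts of f, including f itself (the convention adopted above)."""
    if isinstance(f, str):        # an atomic formula is its only part
        return {f}
    result = {f}
    for sub in f[1:]:             # recurse into the immediate subformulas
        result |= parts(sub)
    return result

def atomic_parts(f):
    return {p for p in parts(f) if isinstance(p, str)}

# ~[~(P -> Q) v R] from the text:
f = ("~", ("v", ("~", ("->", "P", "Q")), "R"))
print(sorted(atomic_parts(f)))    # ['P', 'Q', 'R']
print(len(parts(f)))              # 7 parts in all, counting f itself
```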
With the exception of negations and atomic formulas, all formulas
must have outer parentheses. However, the outer parentheses are not
necessary for making the formula unambiguous. The outer parentheses are
only needed when the formula is used in constructing a larger formula. For
example,
P→Q
is unambiguous. However, to form its negation we cannot write
~P → Q.
That would be the abbreviation for (~P → Q), whereas what we want is ~(P
→ Q). So for the sake of enhanced readability, we could allow ourselves to
abbreviate formulas by omitting outer parentheses. But if we do that we
must take care to supply the omitted parentheses when combining formulas
with other formulas. And it must be borne in mind that this is strictly an
informal abbreviation, and what we write in that way is technically not a
formula.
Exercises
have a picnic. (R G P)
2. It is not true that if it is cloudy then it will rain. (C R)
3. If the sun shines in the morning it will rain, and if the sun does not
shine in the morning then it will not rain. (S R)
4. If we get plenty of sunshine, then if it rains the flowers will grow. (S
R F)
5. It is false that, there is a woman in the next room if and only if Jim
said there is. (W S)
6. It is false that there is a woman in the next room, if and only if, Jim
said there is. (W S)
7. Either the entrails will contain cockroaches or they will not, and if
they do then the gods are angered, and if they do not then Venus
will be in apposition to Jupiter. (E G V)
8. If she is an acrobat or she is a clown, then she lives in that trailer. (A
C T)
9. Harry will go to the bank today if and only if the market drops, and
if Harry goes to the bank today the supervisors will hide, and if the
supervisors hide and Harry does not go to the bank today then Emmett
will lose his shirt on the stock market. (H M S E)
10. If Francis Bacon wrote Hamlet and Shakespeare wrote Macbeth, then
either Shakespeare was Bacon or the theater manager was a crook.
(H M S C)
5. Conditionals
The sentential connective that has the greatest number of importantly
different uses is “if … then”. On any of its uses, it is said to express a
conditional, but there are several different kinds of conditionals.
1. “The logic of conditionals”, Inquiry 8 (1965), 166-197.
THE SYNTAX OF THE PROPOSITIONAL CALCULUS 38
The former is clearly true. After all, Kennedy got shot. But there is no
reason to think that the latter is true. Had Oswald been thwarted, Kennedy
would most likely have survived.
Most uses of subjunctive conditionals are to express what are called
counterfactual conditionals. A counterfactual conditional is about what would
have happened had something that is actually false been true. By contrast,
indicative conditionals are normally used to talk about what is true if
something we are not sure about is true.
There is an extensive literature on subjunctive conditionals, stemming
largely from Robert Stalnaker2 and David Lewis 3. My own account of
subjunctive conditionals can be found in my book The Foundations of
Philosophical Semantics4. There is no consensus on the analysis of subjunctive
conditionals, but it is agreed by everyone that however they are to be analyzed,
they are not properly expressed by the “→” of the propositional calculus.
Somewhat surprisingly, indicative conditionals have proved more
recalcitrant to analysis than subjunctive conditionals, and there is little
agreement about whether the “→” of the propositional calculus properly
expresses indicative conditionals.
2. “A theory of conditionals”, in N. Rescher (ed.), Studies in Logical Theory, Blackwell: Oxford, 1968.
3. Counterfactuals, Harvard University Press: Cambridge, Mass., 1973.
4. Princeton University Press, 1984.
true or Q will be true; that is, (~P ∨ Q) will be true. So we see that if the
statement “If P then Q” is true, then (~P ∨ Q) is true.
2. In order to show that “If P then Q” and (~P ∨ Q) are true under exactly
the same circumstances, it must also be shown that if (~P ∨ Q) is true then
“If P then Q” is true. So let us suppose that (~P ∨ Q) is true. A disjunction
(A ∨ B) is true if, and only if, at least one of its disjuncts is true; that is, if,
and only if, either A is true or B is true. Thus if (~P ∨ Q) is true, then either
~P is true or Q is true. To show that “If P then Q” is true, we can reason as
follows:
If P is true, then ~P cannot be true. But by hypothesis, either ~P or Q
must be true. So if P is true then Q must be true; that is, “If P then Q” is
true.
Thus it has been shown that if (~P ∨ Q) is true, then “If P then Q” is true.
So we have an argument that seems to show that “If P then Q” and
(~P ∨ Q) are true in exactly the same circumstances. In other words, the
indicative conditional is properly analyzed as being the same as the material
conditional. However, as persuasive as the argument seems to be, the
conclusion is somewhat counterintuitive. The difficulty is that for a
disjunction to be true, all that is required is that at least one disjunct is true.
For instance, the disjunction “That is either gold or fool’s gold” is true if it is
gold and true if it is fool’s gold. This has the consequence that (~P ∨ Q) is
true if ~P is true, i.e., if P is false. But that is equivalent to saying that the
material conditional (P → Q) is true whenever its antecedent P is false. To
test this claim, consider an example. Suppose I have some influence with
the state Lottery Commissioner, but not enough to affect the outcome of the
lottery. I am also scrupulously honest and know that I would never use my
influence for any purpose. So I know that I will not use my influence with
the Lottery Commissioner. But from that I can infer that the material
conditional “If I use my influence with the Lottery Commissioner, you will
win the lottery” is true. If we understand this as an ordinary indicative
conditional, there is some temptation to deny that it is true, because my
influence is not so great that I could actually influence the outcome of the
lottery.
Examples like this are often taken to show that the material conditional
is not the same thing as the indicative conditional. However, we must not
be too quick to draw this conclusion. First, note that the most natural
interpretation of “If I use my influence with the Lottery Commissioner, you
will win the lottery” is as a subjunctive conditional, not an indicative
conditional. That is, what would ordinarily be taken as being asserted is
that if I were to use my influence, you would win the lottery. The distinction
between subjunctive and indicative conditionals is in terms of meaning, not
in terms of the words used. The words are ambiguous and can be used to
express either an indicative conditional or a subjunctive conditional.
Still, one can employ the sentence “If I use my influence with the
Lottery Commissioner, you will win the lottery” to express an indicative
conditional, and under the circumstances described there is something odd
about such a conditional. It does not seem reasonable to assert this just
because I know that I will not use my influence.
However, its being unreasonable to assert the conditional is not the
same thing as its being false. Under the circumstances described, it would
be equally odd to assert the disjunction “Either I won’t use my influence or
you will win the lottery”. The explanation for this may lie in what are
called “implicatures”.
5. “Logic and conversation”, in P. Cole and J. L. Morgan (eds.), Syntax and Semantics, Academic Press: New York, 1975.
6. Paraphrasing
Sometimes it is necessary to paraphrase a statement before it can be
symbolized. Consider the statement
John and Joe both came to the party.
We want to symbolize this as a conjunction, but it cannot be symbolized
directly because “and” stands between two names rather than between two
sentences. Before we can symbolize it we must paraphrase it so that the
sentential connective connects two sentences:
John came to the party and Joe came to the party.
Then letting P be “John came to the party” and Q be “Joe came to the
party”, we can symbolize it as (P & Q).
Another example of such paraphrasing is found in the following
statement:
If either Kennedy or Khrushchev had been weaker willed concerning
either Berlin or Cuba, the cold war would have turned into a hot war.
This must be paraphrased first as
If either Kennedy had been weaker willed concerning either Berlin or
Cuba, or Khrushchev had been weaker willed concerning either Berlin
or Cuba, then the cold war would have turned into a hot war.
Then this must be paraphrased again as
If either Kennedy had been weaker willed concerning Berlin or Kennedy
had been weaker willed concerning Cuba, or Khrushchev had been
weaker willed concerning Berlin or Khrushchev had been weaker willed
concerning Cuba, then the cold war would have turned into a hot war.
Finally then, this can be symbolized as {[(P ∨ Q) ∨ (R ∨ S)] → T}.
Other kinds of paraphrasing may also be necessary. There are
expressions in English such as “unless”, “but”, “if”, “only if”, “neither …
nor”, that are much like the sentential connectives. Whenever these occur
in a statement, the statement must be paraphrased to replace them by
sentential connectives. For example,
Neither Joe came to the party nor John came to the party
means the same thing as
Joe didn't come to the party and John didn't come to the party
and so it must be paraphrased in that way. In general, “Neither P nor Q”
can be paraphrased as (~P & ~Q). Equivalently, it can be symbolized as
~(P ∨ Q).
“But” is like “and”, but it emphasizes a contrast between the two conjuncts.
We may say “He came but he didn't like it” rather than “He came and he
didn't like it” merely to indicate that the two conjuncts do not go naturally
together. This emphasis makes no difference for logic, so we can symbolize
“P but Q” as (P & Q).
The behavior of the expressions “if” and “only if” is somewhat
surprising. There is a strong temptation to identify “P if Q” with “If P then
Q”, but in fact it should be the other way around. That is, “P if Q” means
the same thing as “If Q then P”. Consider the statement “The crops will be
destroyed if there is a flood”. To say this is not to say that the crops might
not be destroyed anyway, for example, by a drought. But the statement “If
the crops are destroyed then there is a flood” precludes the possibility of their
being destroyed by a drought as opposed to a flood. Therefore it cannot be
a proper paraphrase of “The crops will be destroyed if there is a flood”.
The proper paraphrase is “If there is a flood then the crops will be destroyed”.
In general, “P if Q” can be symbolized as (Q → P).
“P only if Q” works just the other way around. It means “If P then
Q”. Consider the statement “The crops will be destroyed only if there is a
flood”. This means that the only way the crops can be destroyed is by a
flood, and hence if the crops are destroyed then there must have been a
flood. This then means “If the crops are destroyed then there is a flood”.
Note that “P if and only if Q” is just the conjunction of “P if Q” and “P only
if Q”, and thus that (P ↔ Q) is equivalent to [(P → Q) & (Q → P)].
One further expression that can be paraphrased in terms of the sentential
connectives is “unless”. “P unless Q” can be paraphrased as (~Q → P).
Suppose we want to paraphrase the statement “We will go to the beach
unless it rains”. This is the same thing as saying “If it doesn't rain then we
will go to the beach”, that is, (~Q → P).
One thing to beware of in symbolizing the forms of statements is that
there are other uses of some of the sentential connectives in which they do
not have their ordinary meaning. We noted in chapter one that “and”
sometimes means “and then” rather than simply “and”. In its ordinary use,
“P and Q” means “It is true that P, and it is true that Q”. On this reading,
“P and Q” means the same thing as “Q and P”. For example, “This is red
and that is white” means the same thing as “That is white and this is red”.
But consider the use of “and” in “He lay down and fell asleep”. This
clearly does not mean the same thing as “He fell asleep and lay down”.
The former means “He lay down and then fell asleep”. This temporal use
of “and” is not among the logical concepts the propositional calculus deals
with. In particular, it cannot be symbolized simply as “&”.
Exercises
Symbolize the forms of the following statements, using the sentential letters
indicated:
1. Neither Jack nor Jim will come unless Mary comes. (A I M)
2. We will not get there on time unless we speed, but if we speed we
will not get there at all. (T S G)
3. We can get the door open only if we use an acetylene torch on it, but
then the door will be ruined. (O A R)
4. The river will not overflow its banks unless we either have an early
thaw or heavy rains, but we will not have heavy rains. (O E H)
5. Unless we have a flat tire, we can get there on time if we speed, but
we will have a flat tire if we speed. (F T S)
6. Neither Jack nor Jim will come if Mary comes, unless Joan and Mary
both come. (A I M O)
7. Jeremy will get a Mercedes Benz for Christmas only if he does not
offend Santa Claus, but Jeremy will offend Santa Claus if he does not
believe in him, and Jeremy does not believe in Santa Claus. (M O B)
8. Rain is imminent. (you pick the sentential letters on this one)
9. John will not come unless Jim comes, and Jim will not come if Jeffrey
comes, but Jeffrey will only come if John does not come. (O I E)
10. It will rain if the barometer drops, but if it rains it will cool off later,
and it will not cool off later. (R B C)
7. Summary
The propositional calculus studies statement forms and argument forms
that can be constructed out of “and”, “or”, “it is false that”, “if ... then”, and
“if and only if”. These are the sentential connectives, and they are symbolized
using the logical symbols &, ∨, ~, →, and ↔.
Subjunctive conditionals are about what would be the case if something else
were the case.
Indicative conditionals are about what is the case if something else is the case.
Practice Problems
Symbolize the forms of the following statements, using the sentential letters
indicated:
The Semantics of the Propositional Calculus

1. Interpretations
Formulas of the propositional calculus express statement forms. In
chapter two, we gave informal descriptions of the meanings of the logical
symbols of the propositional calculus, and relied upon that for our
understanding of the statement forms expressed by formulas. The next
step is to give a more precise description of those meanings. The objective
is to give a description that is sufficiently precise to allow us to use
mathematical tools in studying formal necessity and the validity of argument
forms.
Our general approach will be the same as for the implicational calculus.
We will begin by defining the notion of an interpretation, and then we will
give truth rules for the logical symbols. It is important to bear in mind that
formulas do not express statements—they express statement forms. As
such, they are not true or false. Typically, one and the same formula will be
the form of both true statements and false statements. For example, (P ∨ Q)
is the form of both of the following statements:
that this is true for all formulas of the propositional calculus. For this
reason, we will take an interpretation to simply assign truth values to
sentential letters rather than assign entire meanings. So:
An interpretation of the propositional calculus is an assignment of truth
values (truth or falsity) to the sentential letters.
We can then talk about formulas being true or false relative to an interpretation.
2. Truth Rules
To verify the claim that we can compute the truth value of any formula
if we know the truth values of the sentential letters occurring in it, we must
give the truth rules that allow us to make that computation. The rules we
employ are the following:
rule 1: ~A is true if, and only if, A is false.
rule 2: (A & B) is true if, and only if, A and B are both true.
rule 3: (A ∨ B) is true if, and only if, at least one of A and B is true.
rule 4: (A → B) is true if, and only if, either A is false or B is true.
rule 5: (A ↔ B) is true if, and only if, A and B have the same truth value.
To illustrate, consider the formula {[(P & ~S) → ~Q] ↔ ~(R ∨ S)} relative
to an interpretation on which P and Q are true and R and S are false. We
can compute its truth value as follows:
~S true (rule 1)
(P & ~S) true (rule 2)
~Q false (rule 1)
[(P & ~S) → ~Q] false (rule 4)
(R ∨ S) false (rule 3)
~(R ∨ S) true (rule 1)
{[(P & ~S) → ~Q] ↔ ~(R ∨ S)} false (rule 5)
Given the truth values of the atomic parts of a formula, it becomes a mechanical
matter to calculate the truth value of the whole formula using the above
truth rules. We begin with the smallest parts of the formula, calculate their
truth values, and then work outwards until we get the whole formula. The
calculation of the truth value parallels the construction we would use in
building the formula from its atomic parts.
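This mechanical computation can be sketched in code. The representation below (formulas as nested tuples, an interpretation as a dict) is our own illustration, not notation from the text:

```python
# A sketch of the mechanical computation described above. The tuple
# representation of formulas and the operator names are illustrative
# choices of ours.
def truth_value(formula, interp):
    """Compute a formula's truth value relative to an interpretation
    (a dict assigning True/False to the sentential letters)."""
    if isinstance(formula, str):          # an atomic part: a sentential letter
        return interp[formula]
    op = formula[0]
    if op == '~':
        return not truth_value(formula[1], interp)
    left = truth_value(formula[1], interp)
    right = truth_value(formula[2], interp)
    if op == '&':   return left and right
    if op == 'v':   return left or right
    if op == '->':  return (not left) or right
    if op == '<->': return left == right
    raise ValueError(op)

# {[(P & ~S) -> ~Q] <-> ~(R v S)} with P, Q true and R, S false:
f = ('<->', ('->', ('&', 'P', ('~', 'S')), ('~', 'Q')),
            ('~', ('v', 'R', 'S')))
print(truth_value(f, {'P': True, 'Q': True, 'R': False, 'S': False}))  # False
```

The recursion mirrors the construction of the formula: the truth value of each part is computed from the truth values of its immediate subformulas, working outward from the atomic parts.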
3. Truth Tables
The truth rules allow us to compute the truth value of any formula
relative to an interpretation. There are infinitely many sentential letters,
because we allow the use of numerical subscripts. It follows that there are
infinitely many interpretations of the propositional calculus. However, when
focusing on a single formula, the only part of an interpretation that is relevant
to its truth value is the assignment to the sentential letters that are atomic
parts of the formula. There will be only finitely many ways interpretations
can assign truth values to any finite set of sentential letters. For example, if
we consider the sentential letters P, Q, and R, we can tabulate all the ways
of assigning truth values to them as follows:
THE SEMANTICS OF THE PROPOSITIONAL CALCULUS 50
P Q R
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F
Note how we arranged the truth values to ensure that we have an exhaustive
list. In the rightmost column (under R), the truth values alternate. In the
next column (under Q) they alternate two at a time. In the final column
(under P) they alternate four at a time. If there were another column, they
would alternate eight at a time, and so on. In general, given a list of n
sentential letters, there are 2ⁿ combinations of truth values that can be
assigned to them. Thus for two sentential letters there are four
combinations of truth values, for three sentential letters there are eight, for
four there are sixteen, and so forth.
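This counting claim, and the ordering used in the table above, can be checked mechanically. A small sketch of ours:

```python
from itertools import product

# Enumerate every assignment of truth values to a list of sentential
# letters. product(..., repeat=n) yields the rows in the same order as
# the table above: the rightmost letter alternates fastest.
letters = ('P', 'Q', 'R')
rows = list(product([True, False], repeat=len(letters)))

assert rows[0] == (True, True, True)      # the T T T row
assert rows[-1] == (False, False, False)  # the F F F row
assert len(rows) == 2 ** len(letters)     # 2^3 = 8 combinations
```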
We can express the truth rules for the logical symbols in tabular form
as follows:
P ~P
T F
F T
P Q (P & Q)
T T T
T F F
F T F
F F F
P Q (P ∨ Q)
T T T
T F T
F T T
F F F
P Q (P → Q)
T T T
T F F
F T T
F F T
P Q (P ↔ Q)
T T T
T F F
F T F
F F T
In a similar way, we can tabulate the truth values of any formula for
all possible combinations of truth values of its atomic parts. Consider the
formula [(P & ~Q) ↔ (~(P ∨ Q) → Q)]. We construct the table in steps,
computing the column for one connective at a time and working outward
from the atomic parts. The completed table is:
P Q   ~Q   (P & ~Q)   (P ∨ Q)   ~(P ∨ Q)   (~(P ∨ Q) → Q)   [(P & ~Q) ↔ (~(P ∨ Q) → Q)]
T T   F    F          T         F          T                F
T F   T    T          T         F          T                T
F T   F    F          T         F          T                F
F F   T    F          F         T          F                T
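As a check on the table for [(P & ~Q) ↔ (~(P ∨ Q) → Q)], the final column can be recomputed line by line. A sketch of ours, treating (A → B) as (~A ∨ B):

```python
from itertools import product

# Recompute the final column of the truth table for
# [(P & ~Q) <-> (~(P v Q) -> Q)] on each of the four lines.
imp = lambda a, b: (not a) or b            # material conditional
form = lambda p, q: (p and not q) == imp(not (p or q), q)

column = [form(p, q) for p, q in product([True, False], repeat=2)]
print(column)   # [False, True, False, True]
```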
This table is called a truth table. On the left-hand side of a truth table we list
all possible combinations of truth values for the atomic parts of the formula.
On the right-hand side are columns corresponding to each occurrence of a
logical symbol in the formula, and in that column we list the truth value of
the part of the formula that is constructed using that connective. We enclose
the column of truth values that are the truth values of the entire formula in
a box and print it in bold italic type.
The truth table for a formula gives us a record of its truth value under
all possible interpretations. The only thing difficult about constructing truth
tables is keeping track of which columns to use in computing the truth
values for each new column. It is easy to become confused about that. The
best way to avoid confusion is to keep in mind how the formula is constructed.
For example, in the final step we are computing the truth value of a biconditional.
Ask yourself what the left and right sides of the biconditional are, and
which columns record their truth values. Those are then the columns to
which you appeal in computing the truth values of the biconditional. You
may find it useful to connect matching parentheses with lines, as was done
on page 33.
Exercises
4. Tautologies
Our main interest is in the formal necessity of statement forms and the
formal soundness of argument forms. Using truth tables we can give very
simple characterizations of these concepts for statement forms and argument
forms that can be symbolized in the propositional calculus. Let us begin by
discussing formal necessity.
Consider the logically necessary statement “Either it is snowing or it is
not snowing”. This statement has the form (P ∨ ~P). Any statement that
has this form will be necessary. Why is this so? Because regardless of what
we let P mean, either P or ~P will be true, and so (P ∨ ~P) will be true. To
make this clearer, consider the truth table for (P ∨ ~P):
P   ~P   (P ∨ ~P)
T   F    T
F   T    T
Notice that (P ∨ ~P) comes out true on every line of this truth table. This is
enough to make it formally necessary, because regardless of how we interpret
P, it will have one of the truth values listed in the truth table, and then (P ∨
~P) will be true. Thus we can see by examining the truth table that any
statement having the form (P ∨ ~P) will be true under all circumstances,
regardless of whether P is true or false. This means that the statement is
necessary. So any statement of the form (P ∨ ~P) is necessary, and hence
this statement form is formally necessary.
It was remarked in chapter one that a formula true under every
interpretation is said to be valid. In the propositional calculus, a formula is
valid if, and only if, it is true on every line of its truth table. Such formulas
are called tautologies. (P ∨ ~P) is therefore a tautology. We write ¤ A to
indicate that a formula A is a tautology. For example, we can write
¤ (P ∨ ~P).
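A tautology check is exactly the kind of mechanical test a computer program could perform. A minimal sketch of ours, representing a statement form as a Python function of the truth values of its sentential letters:

```python
from itertools import product

# A formula is a tautology when it comes out true on every line of its
# truth table, i.e. under every assignment of truth values to its letters.
def is_tautology(form, n_letters):
    return all(form(*values)
               for values in product([True, False], repeat=n_letters))

print(is_tautology(lambda p: p or not p, 1))   # True: (P v ~P) is a tautology
```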
The significance of tautologies is that they are the formally necessary
statement forms of the propositional calculus. To see this, suppose we have
some formula of the propositional calculus that is a tautology, such as
{[P → (Q → R)] ↔ [(P & Q) → R]}. Consider any statement that has this
form. Then, even without knowing whether P, Q, and R are true, we can
verify that the statement itself is true just by seeing that the formula comes
out true on every line of its truth table. This is because, regardless of what
statements are symbolized by P, Q, and R, they will have some truth values,
and those truth values will correspond to a line of the truth table. The
statement symbolized by {[P → (Q → R)] ↔ [(P & Q) → R]} will then be true
if and only if the formula is true on that line of its truth table. But the
formula is true on every line of its truth table, so it follows that the statement
will be true regardless of how the world might be (regardless of the truth
values of P, Q, and R). In other words, any statement of this form will be
necessary, and the statement form will be formally necessary. Therefore,
we can conclude:
If a formula is a tautology then the statement form it symbolizes is
formally necessary.
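The particular tautology mentioned above can be verified on all eight lines of its truth table. A quick sketch of ours:

```python
from itertools import product

# {[P -> (Q -> R)] <-> [(P & Q) -> R]} comes out true on every one of
# the eight lines of its truth table, so it is a tautology.
imp = lambda a, b: (not a) or b
form = lambda p, q, r: imp(p, imp(q, r)) == imp(p and q, r)

assert all(form(*v) for v in product([True, False], repeat=3))
```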
The converse of this principle is also true:
If a formula is not a tautology then the statement form it symbolizes is
not formally necessary.
This is because for every line of the truth table there will be some statement
having the form symbolized by the formula and having the truth value the
formula has on that line. To illustrate, consider the formula [P → (P & Q)].
This is not a tautology. It will be false when P is true but Q is false. But
now pick any statements having the truth values that, when assigned to P
and Q, make [P → (P & Q)] false. For instance, we might let P stand for
“2+2 = 4” and Q stand for “2+3 = 6”. The statement
[2+2 = 4 → (2+2 = 4 & 2+3 = 6)]
is then true if and only if [P → (P & Q)] is true on that line of its truth
table where P is true and Q is false. Hence the statement is false. It follows
that not all statements of the form [P → (P & Q)] are logically necessary (in
particular, this one isn’t), and so [P → (P & Q)] does not symbolize a
formally necessary statement form.
We can conclude in general then that:
A formula of the propositional calculus is a tautology if, and only if,
the statement form it symbolizes is formally necessary.
Thus truth tables and tautologies give us a way of talking about the formal
necessity of any statement whose form can be expressed in the propositional
calculus.
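Conversely, the failure of [P → (P & Q)] to be a tautology is witnessed by a single line of its truth table. A sketch of ours:

```python
from itertools import product

# [P -> (P & Q)] is false on exactly one line of its truth table:
# the line where P is true and Q is false.
imp = lambda a, b: (not a) or b
form = lambda p, q: imp(p, p and q)

false_lines = [v for v in product([True, False], repeat=2) if not form(*v)]
print(false_lines)   # [(True, False)]
```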
5. Metatheorems
Logical necessity was explained using metaphors and by giving
examples, but not by giving a precise definition. That has the consequence
that although we can reason about the properties of logical necessity and
the attendant concepts of formal necessity and the validity of argument
forms, we cannot literally prove theorems about these concepts in the sense
that one proves theorems in mathematics. In this respect, they contrast
sharply with the concept of a tautology. The concept of a tautology is a
mathematically precise concept. We could, for example, write a computer
program that mechanically checks formulas to determine whether they are
tautologies. It would do this by constructing a truth table.
Because tautologicity is a mathematically precise concept, we can
literally prove mathematical theorems about it. For example, one important
fact about tautologies is:
The conjunction of two tautologies is itself a tautology.
That is, if we begin with two formulas, A and B, and ¤ A and ¤ B, then
¤ (A & B). This principle is known as adjunctivity. To establish this,
suppose that ¤ A and ¤ B. Then any assignment of truth values to the
atomic parts of A and B will make them both true. But then any such
assignment of truth values will also make the conjunction (A & B) true, and
so it is also a tautology.
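Adjunctivity can be spot-checked for particular tautologies. This sketch of ours illustrates the metatheorem for two sample two-letter tautologies; it is not a substitute for the general proof:

```python
from itertools import product

# If A and B are tautologies, then so is their conjunction (A & B).
def is_tautology(form, n):
    return all(form(*v) for v in product([True, False], repeat=n))

A = lambda p, q: p or not p                              # (P v ~P)
B = lambda p, q: ((not p) or q) == (not (p and not q))   # (~P v Q) <-> ~(P & ~Q)

assert is_tautology(A, 2) and is_tautology(B, 2)
assert is_tautology(lambda p, q: A(p, q) and B(p, q), 2)   # (A & B)
```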
Exercises
6. Tautological Implication
The concept of tautologicity provides the vehicle for a mathematical
study of formal necessity in the propositional calculus. We can define an
analogous concept that allows us to study the validity of argument forms
expressed in the propositional calculus. We saw in chapter one that an
argument form is valid if, and only if, its corresponding conditional is
formally necessary. In the propositional calculus, formal necessity coincides
with tautologicity, so it follows that:
An argument form expressed in the propositional calculus is valid if,
and only if, its corresponding conditional is a tautology.
This is captured by the concept of tautological implication. We define:
A set of formulas A1,…,An tautologically implies a formula B if, and
only if, ¤ [(A1 &…&An) → B].
We abbreviate “A1,…,An tautologically implies B” as “A1,…,An ¤ B”.
All A are B.
All B are C.
I1. (A & B) ¤ A
simplification
I2. (A & B) ¤ B
I3. A ¤ (A ∨ B)
addition
I4. B ¤ (A ∨ B)
I5. ~A ¤ (A → B)
I6. B ¤ (A → B)
I7. ~(A → B) ¤ A
I8. ~(A → B) ¤ ~B
I9. A, (A → B) ¤ B modus ponens
I10. ~B, (A → B) ¤ ~A modus tollens
I11. ~A, (A ∨ B) ¤ B
disjunctive syllogism
I12. ~B, (A ∨ B) ¤ A
I13. (A → B), (B → C) ¤ (A → C) hypothetical syllogism
I14. A, B ¤ (A & B) adjunction
I15. (A ∨ B), (A → C ), (B → C) ¤ C dilemma
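Each implication in this list can be verified mechanically from the definition. A sketch of ours, spot-checking I9 and I11:

```python
from itertools import product

# A1,...,An tautologically imply B when (A1 & ... & An) -> B is a
# tautology, i.e. no line makes all the premises true and B false.
def implies(premises, conclusion, n_letters):
    return all(not all(p(*v) for p in premises) or conclusion(*v)
               for v in product([True, False], repeat=n_letters))

imp = lambda a, b: (not a) or b

# I9, modus ponens: A, (A -> B) tautologically imply B
assert implies([lambda a, b: a, lambda a, b: imp(a, b)], lambda a, b: b, 2)
# I11, disjunctive syllogism: ~A, (A v B) tautologically imply B
assert implies([lambda a, b: not a, lambda a, b: a or b], lambda a, b: b, 2)
```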
Exercises
7. Tautological Equivalence
On several occasions it has been remarked that one formula is
“equivalent” to another. For instance, it was remarked that “Neither P nor
Q” could be symbolized as either (~P & ~Q) or ~(P ∨ Q), and that the two
formulas were equivalent. Now we are in a position to make this concept
of equivalence precise. The sense in which two formulas of the propositional
calculus can be equivalent is that they are true under the same circumstances.
More precisely, they are true relative to the same interpretations. This
concept of equivalence is called tautological equivalence:
Two formulas of the propositional calculus are tautologically equivalent
if, and only if, they are true relative to the same interpretations.
Thus, for example, (~P & ~Q) and ~(P ∨ Q) are true relative to the same
interpretations. This can be verified by constructing a truth table:
P Q   ~P   ~Q   (~P & ~Q)   (P ∨ Q)   ~(P ∨ Q)
T T   F    F    F           T         F
T F   F    T    F           T         F
F T   T    F    F           T         F
F F   T    T    T           F         T
If two formulas have the same truth value on a given interpretation, then
their biconditional is true on that interpretation. E.g., we could rewrite the
preceding truth table as follows:
P Q   (~P & ~Q)   ~(P ∨ Q)   [(~P & ~Q) ↔ ~(P ∨ Q)]
T T   F           F          T
T F   F           F          T
F T   F           F          T
F F   T           T          T
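The equivalence recorded in these tables can also be checked directly. A sketch of ours:

```python
from itertools import product

# (~P & ~Q) and ~(P v Q) take the same truth value on every
# interpretation, so the two formulas are tautologically equivalent
# and their biconditional is a tautology.
f = lambda p, q: (not p) and (not q)   # (~P & ~Q)
g = lambda p, q: not (p or q)          # ~(P v Q)

assert all(f(*v) == g(*v) for v in product([True, False], repeat=2))
```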
Let us examine each of these equivalences separately, and see why it is true:
E1. ~~A is true if, and only if, ~A is not true, and ~A is not true if, and
only if, it is not the case that A is not true; that is, if and only if A is true.
E2. ~(A ∨ B) is true if, and only if, ( A ∨ B) is false. But (A ∨ B) is false if,
and only if, both disjuncts are false; that is, if and only if both A and B are
false. But this is the same as saying that ~A and ~B are both true; that is,
Exercises
8. Some Metatheorems
particles in the universe. And yet, it is not out of the question that we could
show such an implication to hold. For example, it might have the form
(P1 & ... & P300) ¤ P1.
We can give a very simple argument to show that this implication holds,
without having to construct a truth table.
The preceding example shows that it is often better to reason about
tautologicity and tautological implication rather than constructing truth
tables. Reasoning about these concepts consists of proving metatheorems.
So let us turn to some metatheorems that will subsequently prove useful.
Let us begin with a simple one:
Metatheorem: If a formula is either tautologically equivalent to, or
tautologically implied by a tautology, then it is itself a tautology.
Proof: This holds because, if a formula is tautologically implied by a
tautology, then the formula must be true on every line of its truth table on
which the tautology is true. Since the tautology is true on every line of the
truth table, the formula itself must also be true on every line of its truth
table; thus it is a tautology. ■
Next note that tautological implication is transitive:
Metatheorem: If one formula (or set of formulas) tautologically implies
a second formula, and the second formula tautologically implies a third,
then the first formula (or set of formulas) tautologically implies the
third.
Proof: Suppose we have a set of formulas A1 ,…,An and B and C , and
suppose A1 ,…,An ¤ B and B ¤ C . Let us show that it follows from this that
A1 ,…,An ¤ C . By the definition of tautological implication, as A1 ,…,An ¤ B
and B ¤ C , C is true on every line of the truth table on which B is true, and
B is true on every line on which A1 ,…,An are true. Hence C is true on every
line on which A1,…,An are true. That means that A1,…,An ¤ C. ■
Because tautological implication is transitive, we can establish
tautological implications using several steps. For example, suppose we
want to show that (P & Q) ¤ (P ∨ Q). We could do this by using a truth
table and showing that ¤ [(P & Q) → (P ∨ Q)]. But it is much easier to do it
as follows. By I1, (P & Q) ¤ P. By I3, P ¤ (P ∨ Q). Therefore, by the
transitivity of implication, (P & Q) ¤ (P ∨ Q).
Using the transitivity of tautological implication in this way we can
string simple implications together to establish more complicated
implications. If we want to show that one formula, A, implies another
formula, B, we might do this by showing that A implies some other formula
and that the second formula implies a third formula (and hence A implies
the third formula); then the third formula implies a fourth formula (and
hence A implies the fourth formula), and so on until we obtain the formula
B. In other words, if we have a string of implications of the form A ¤ A1 ,
By E7,
(P ↔ Q) ¤ [(P → Q) & (Q → P)],
and by I1,
[(P → Q) & (Q → P)] ¤ (P → Q).
Thus by the transitivity of implication,
P, (P ↔ Q) ¤ (P → Q).
Then by the fact that implication is adjunctive,
P, (P ↔ Q) ¤ [P & (P → Q)].
By I9,
[P & (P → Q)] ¤ Q.
Thus by transitivity again,
P, (P ↔ Q) ¤ Q.
We can generalize the adjunctivity of tautological implication
somewhat:
Metatheorem: If a set of formulas A1 ,…,An tautologically implies each
formula in another set B1 ,…,Bm then A1,…,An implies the conjunction
(B1 & … & Bm) (where the conjuncts can be grouped in any order).
Proof: Suppose the set A1,…,An implies each of B1 ,…,Bm. Then as A1,…,An
¤ B1 and A1,…,An ¤ B2, we must have A1,…,An ¤ (B1 & B2). Then as
A1 ,…,An ¤ B3 we must have A1 ,…,An ¤ (B1 & B2 & B3). And so on. Thus
A1 ,…,An ¤ (B1 & … & Bm). ■
We can also generalize the transitivity of tautological implication
somewhat:
Metatheorem: If each formula in a set B1,…,Bm of formulas is
tautologically implied by the set of formulas A1,…,An, and B1,…,Bm ¤
C, then A1,…,An ¤ C .
This is called strong transitivity. The form of this principle becomes more
obvious if we diagram the relations between the formulas as follows:
A1,…,An ¤ B1
A1,…,An ¤ B2
...
A1,…,An ¤ Bm
B1,…,Bm ¤ C
Proof: Suppose each of B1,…,Bm is tautologically implied
by A1,…,An. Then as we have just seen, A1,…,An ¤ (B1 & … & Bm). If
B1 ,…,Bm ¤ C , then (B1 & … & Bm ) ¤ C . So by transitivity, A1 ,…,An ¤ C .
The difference between transitivity and strong transitivity is that in the
former we have a set of formulas implying a single formula, which in turn
implies another formula, whereas in the latter we have a set of formulas
implying a second set of formulas, which in turn implies another formula. ■
Strong transitivity is of fundamental importance in understanding the
structure of reasoning. Arguments typically have a kind of “tree structure”
wherein we start with some premises, draw intermediate conclusions from
the premises, draw further intermediate conclusions from the intermediate
conclusions, and finally infer our desired conclusion from some of those
intermediate steps. For instance, we might have an argument whose structure
could be diagrammed as follows:
A1 A2 A3 A4 A5
B1 B2
B3 B4
C
When we reason like this, we assume that the argument has established
that the final conclusion is implied by the initial premises. It is strong
transitivity that allows us to conclude that B3 and B4 are implied by the
initial premises, and then to conclude from that that C is implied by the
initial premises. Without strong transitivity, we would have to somehow
get the conclusion from the premises in a single step.
Another important fact about implication is the following:
Metatheorem: If C1,…,Cn, A ¤ B, then C1,…,Cn ¤ (A → B).
In other words, we can remove one formula from the head of the implication
and make it the antecedent of a conditional in the conclusion of the implication.
This is the principle of conditionalization. It is true for the following reason.
Proof: Suppose C1,…,Cn, A ¤ B. By the definition of tautological implication,
this means that the conditional [(C1 & … & Cn & A) → B] is a tautology. By
E15 this is tautologically equivalent to the conditional
[(C1 & … & Cn) → (A → B)].
We have seen that a formula tautologically equivalent to a tautology is itself
a tautology, so ¤ [(C1 & … & Cn) → (A → B)]. Then C1,…,Cn ¤ (A → B). ■
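The exportation equivalence (E15) that drives this proof can be spot-checked mechanically. The following sketch is ours, not the text's (the text uses no programming notation); the helper name `implies` is an assumption. It brute-forces every truth assignment for the one-premise case:

```python
from itertools import product

def implies(p, q):
    # The material conditional: false only when p is true and q is false.
    return (not p) or q

# Check that [(C & A) -> B] and [C -> (A -> B)] have the same truth table,
# i.e. the n = 1 instance of the exportation equivalence used in the proof.
for c, a, b in product([True, False], repeat=3):
    assert implies(c and a, b) == implies(c, implies(a, b))

print("exportation holds on all 8 assignments")
```

Running the loop over all eight rows plays the role of writing out the truth table by hand.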
THE SEMANTICS OF THE PROPOSITIONAL CALCULUS 68
The principle of conditionalization is of considerable use in establishing
tautological implications. To illustrate the use of these metatheorems,
suppose we want to show that (~P ∨ Q), (~Q ∨ R) ¤ (~P ∨ R). By E4,
(~P ∨ Q) ¤ (P → Q) and (~Q ∨ R) ¤ (Q → R). By I13,
(P → Q), (Q → R) ¤ (P → R).
Then by strong transitivity,
(~P ∨ Q), (~Q ∨ R) ¤ (P → R).
By E4,
(P → R) ¤ (~P ∨ R).
Then by transitivity,
(~P ∨ Q), (~Q ∨ R) ¤ (~P ∨ R).
General arguments like this provide an alternative to truth tables in
establishing tautologies and tautological implications. However, it can
reasonably be doubted whether the construction of such arguments is easier
than constructing a truth table. At this point the reader is no doubt wondering
how we know which principle to use at any given point in constructing a
proof. This question will be answered in the next chapter.
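Meanwhile, since only three atomic formulas are involved, the implication just established can also be verified exhaustively. This short Python sketch is ours (the text works only with truth tables and general arguments):

```python
from itertools import product

# Verify that (~P v Q), (~Q v R) tautologically imply (~P v R):
# no assignment makes both premises true while the conclusion is false.
counterexamples = [
    (p, q, r)
    for p, q, r in product([True, False], repeat=3)
    if ((not p) or q) and ((not q) or r) and not ((not p) or r)
]
assert counterexamples == []
print("no counterexample among the 8 rows; the implication holds")
```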
Exercises
9. Failures of Tautologicity
To show that a formula is a tautology, one must either construct an
entire truth table or give a general argument to that effect. This can be
difficult. It is generally much easier to show that a formula is not a tautology.
It suffices to find a single assignment of truth values (an interpretation) that
makes the formula false. Equivalently, one need only find a single line of
the truth table on which the formula is false. For example, suppose we
want to show that [(P → Q) & (Q → P)] is not a tautology. This is a
conjunction, so it suffices to make one conjunct false. For each conjunct,
there is only one way to make it false. For example, focusing on the first
conjunct, we can make P true and Q false. That makes the whole formula
false, showing that it is not a tautology.
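The search for a falsifying line can be mechanized as well. Here is a small Python sketch (ours; the text itself works by hand, and the helper `implies` is an assumption):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Collect every assignment falsifying [(P -> Q) & (Q -> P)].
# Finding even one such row shows the formula is not a tautology.
falsifying = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if not (implies(p, q) and implies(q, p))
]
# The row from the text, P true and Q false, is among them.
assert (True, False) in falsifying
print(f"{len(falsifying)} falsifying rows found")
```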
A similar technique can be used to show that a proposed tautological
implication fails. For example, to show that the two premises (P → ~Q), (Q
→ ~R) do not jointly imply (P → R), it suffices to find an assignment of
truth values making the premises true and the conclusion false. The only
way to make the conclusion false is to make P true and R false. That
automatically makes the second premise true, and if we also make Q false
then the first premise is true as well. So the proposed implication does not
hold.
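The specific assignment just described can be checked directly. A minimal Python sketch (ours; `implies` is an assumed helper name):

```python
def implies(p, q):
    # Material conditional.
    return (not p) or q

# The assignment from the text: P true, Q false, R false.
P, Q, R = True, False, False
assert implies(P, not Q)   # first premise  (P -> ~Q) is true
assert implies(Q, not R)   # second premise (Q -> ~R) is true
assert not implies(P, R)   # conclusion     (P -> R)  is false
print("premises true, conclusion false: the implication fails")
```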
Exercises
1. Arguments Expressed in
the Propositional Calculus
We have seen that we can symbolize a wide variety of statement
forms using formulas of the propositional calculus. It follows that we can
also express argument forms constructed out of those statement forms. Thus
when presented with the following argument:
1. If Joseph lives in Phoenix, he lives in Arizona.
2. If Joseph lives in Tucson, he lives in Arizona.
3. Joseph lives in either Phoenix or Tucson.
4. Therefore, Joseph lives in Arizona.
we might symbolize its form as follows:
(P → A)
(T → A)
(P ∨ T)
A
We have seen that an argument form expressed in the propositional calculus
is formally valid if, and only if, its premises tautologically imply its conclusion.
Equivalently, it is formally valid if, and only if, its corresponding conditional
is a tautology. Thus we could assess the formal validity of this argument
form by constructing a truth table for the formula
({[(P → A) & (T → A)] & (P ∨ T)} → A).
However, there is a simpler way to go about it. We can observe that this
argument is an instance of I15, dilemma. Thus we already know that this is
a formally valid argument form.
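Although recognizing the form I15 settles the matter, the truth-table check itself is easy to mechanize. A Python sketch of ours (not part of the text):

```python
from itertools import product

def implies(p, q):
    # Material conditional.
    return (not p) or q

# Check every line of the truth table for the corresponding conditional:
# {[(P -> A) & (T -> A)] & (P v T)} -> A.
valid = all(
    implies(implies(p, a) and implies(t, a) and (p or t), a)
    for p, t, a in product([True, False], repeat=3)
)
assert valid
print("the dilemma instance is formally valid")
```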
Consider an argument that is a little more complicated:
1. If Joseph lives in either Arizona or California, he must pay sales tax
on this purchase.
2. If Joseph lives in Phoenix, he lives in Arizona.
3. If Joseph lives in Tucson, he lives in Arizona.
4. Joseph lives in either Phoenix or Tucson.
5. Therefore, Joseph must pay sales tax on this purchase.
DERIVATIONS IN THE PROPOSITIONAL CALCULUS 72
(P → A)   (T → A)   (P ∨ T)
          by I15
             A
          by I3
   (A ∨ C)   [(A ∨ C) → S]
          by I9
             S
This diagram is just a way of representing reasoning, using tautological
implications we already know to establish new ones. To make it look more
like ordinary reasoning, we can rewrite it as a sequence of formulas rather
than a tree:
1. [(A ∨ C) → S] given
2. (P → A) given
3. (T → A) given
4. (P ∨ T) given
5. A by I15, from (2), (3), and (4).
6. (A ∨ C) by I3 from (5)
7. S by I9 from (1) and (6).
What this illustrates is that much ordinary reasoning is just a way of
establishing tautological implications by using “obvious” implications and
principles like strong transitivity that tell us how new implications can be
obtained from old.
The purpose of this chapter is to investigate the process of constructing
2. Derivations
Derivations will be sequences of “lines”, where a line has four parts.
The general form of a line is:
(i,j,k,...) n. formula explanation
Lines are numbered sequentially, and n is the line number. Line numbers
are used for easy reference to particular lines. formula is a formula of the
propositional calculus. It is the formula inferred on that line of the derivation.
explanation is an explanation of how formula was inferred. Explanations will
always have the general form:
rule, from (a), (b), ...
where rule is a rule of inference and (a), (b), ... are the numbers of the lines
from which formula was inferred using the rule of inference.
An argument always begins with some premises, and what it establishes
is that its conclusion follows from its premises. So what a derivation is
really establishing is a tautological implication. Some of the lines of a
derivation represent premises, and other lines represent conclusions drawn
from those premises. Not all lines need be inferred from the same premises,
so we need a way of keeping track of which premises are used in getting
each line. That is the purpose of the initial list of numbers (i,j,k,...). These
are the line numbers of the premises used in getting that line, and are called
the premise numbers of the line.
The lines of a derivation are really shorthand for reports of tautological
implications. For instance, if a derivation contains a line
(3,5) 7. (P ∨ ~R) explanation
what this is telling us is that (P ∨ ~R) is tautologically implied by the
formulas on lines (3) and (5) of the derivation.
It will turn out that it is possible to have a line of a derivation that
lacks premise numbers. What that signifies is that the formula on that line
was inferred in such a way that it does not depend on any premises. In
other words, it is a tautology. Given that this is the significance of a line’s
not having premise numbers, all other lines must have premise numbers.
In particular, premises themselves must have premise numbers. This may
seem odd, because a premise does not depend upon anything else. It is just
a premise. What can we use for the premise numbers of such a line? A
simple choice is the line number of the line itself. That is, if we want to
record a premise P on, say, the third line of a derivation, we can do it by
writing:
(3) 3. P premise
This says that P is adopted as a premise on line (3), and the premise number
tells us that the formula on line (3) implies itself. This is a trivial use of
premise numbers, but its purpose is to reserve lines with no premise numbers
for recording tautologies.
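The bookkeeping described above can be pictured as a small data structure. The following Python sketch is ours (the text defines no such machinery); it models a line as a record and shows Rule P's convention that a premise's only premise number is its own line number:

```python
from dataclasses import dataclass

@dataclass
class Line:
    premises: frozenset   # the premise numbers (i, j, k, ...)
    number: int           # the line number n
    formula: str          # the formula inferred on this line
    explanation: str      # how the formula was inferred

def premise(n, formula):
    # Rule P: a premise takes its own line number as its sole premise number.
    return Line(frozenset({n}), n, formula, "premise")

line3 = premise(3, "P")
assert line3.premises == frozenset({3})
print(line3)
```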
3. Linear Derivations
To say precisely which sequences of lines count as derivations, we
must say which rules of inference we are permitted to use in constructing
derivations. Perhaps the simplest and most obvious rule of inference is one
telling us that if a formula A is tautologically implied by some other formulas
B1 ,…,Bm that already occur in the derivation, then we can write A on any
subsequent line, citing the lines on which B1 ,…,Bm occur in our explanation.
When we make an inference of this sort, we must have some fixed list of
tautological implications to which we can appeal, and for this purpose we
will use I1–I15 and E1–E20 from chapter three. Equivalences are implications
that go in both directions, and so they can be used to make an inference in
either direction. For instance, we could cite E2 to make an inference from
~(P ∨ Q) to (~P & ~Q) or to make an inference from (~P & ~Q) to ~(P ∨ Q).
This is made precise by the following rule of inference:
RULE I: IMPLICATION: If, according to one of the implications I1–I15 or
equivalences E1–E20, a set of formulas B1,…,Bm appearing on one or more
lines of a derivation tautologically implies a further formula A, then we
can write A on any later line of the derivation, taking as premise
numbers all of the premise numbers of B1,…,Bm.
Rule I allows us to add a line to a derivation by appealing to previous
lines, but to get started we need a rule for introducing premises. For this
purpose we will use the following rule:
RULE P: PREMISE INTRODUCTION: Any formula can be written on any line
of a derivation provided that we let the premise number be the line
number of that line.
This rule lets us take anything as a premise at any time. This may seem
overly liberal. Shouldn’t we only be allowed to take something as a premise
if it is known to be true? If our purpose is to establish the truth of the
conclusion, then there is no point in inferring it from premises not known
to be true, because all the derivation will establish is that the conclusion is
true if the premises are true. However, there is nothing logically wrong with
taking premises that are not known to be true. Recall that a line of a
(1) 1. (P ↔ Q) premise
(2) 2. (Q ↔ R) premise
(1) 3. [(P → Q) & (Q → P)] (E7), 1
(2) 4. [(Q → R) & (R → Q)] (E7), 2
(1) 5. (P → Q) (I1), 3
(2) 6. (Q → R) (I1), 4
(1, 2) 7. (P → R) (I13), 5,6
(2) 8. (R → Q) (I2), 4
(1) 9. (Q → P) (I2), 3
(1, 2) 10. (R → P) (I13), 8,9
(1, 2) 11. [(P → R) & (R → P)] (I14), 7,10
(1, 2) 12. (P ↔ R) (E7), 11
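The derivation above establishes that (P ↔ Q), (Q ↔ R) ¤ (P ↔ R). Since only three atomic formulas are involved, the result can be double-checked by brute force; this Python sketch is ours, not the text's:

```python
from itertools import product

# Confirm that (P <-> Q), (Q <-> R) tautologically imply (P <-> R):
# on every row where both premises hold, so does the conclusion.
ok = all(
    (p == r)
    for p, q, r in product([True, False], repeat=3)
    if (p == q) and (q == r)
)
assert ok
print("the derivation's result checks out on all rows")
```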
(A & B) ¤ A.
Exercises
4. Suppositional Arguments
Although we can do a lot of reasoning using just rules I and P, many
of the implications we will want to establish cannot be proven using just
these rules. For most purposes, we need at least one additional rule. Recall
the principle of conditionalization, discussed in chapter three:
If C1 ,…,Cn, A ¤ B, then C 1,…,C n ¤ (A → B).
This tells us that we can remove one formula from the head of the implication
and make it the antecedent of a conditional in the conclusion of the implication.
There is a corresponding rule of inference that can be used in constructing
derivations in the propositional calculus:
RULE C: CONDITIONALIZATION: If some formula, B, appears on a line of a
derivation, and A is any premise of that line, then on any later line
of the derivation we can write the conditional (A → B), taking for
premise numbers all the premise numbers of B except for the line
number of A.
Rule C is essentially just a rule that allows us to say that if we have derived
a formula from a premise, then if that premise is true the formula is true.
The following is a simple illustration of the use of Rule C:
(1) 1. P premise
(1) 2. (P ∨ Q) (I3), 1
3. [P → (P ∨ Q)] C, 1, 2
From the premise P we derived the conclusion (P ∨ Q). Thus, if P is true,
then (P ∨ Q) is true. But this means that [P → (P ∨ Q)] is true; and this is
what line 3 records.
(1) 1. P premise
(2) 2. Q premise
(1,2) 3. (P & Q) (I14), 1 ,2
(1) 4. [Q → (P & Q)] C, 2, 3
5. {P → [Q → (P & Q)]} C, 1, 4
Line 5 tells us that the formula
{P → [Q → (P & Q)]}
is true. This latter formula no longer depends upon any premises, because
we have taken the only premise of the consequent and written it explicitly
as the antecedent of the conditional.
In the preceding derivation, line 5 has no premise numbers. As indicated
earlier, this means it is a tautology. In any derivation, we must begin by
adopting a premise. There is no other way to get started. But the important
thing about suppositional reasoning is that it allows you to adopt a premise
“temporarily”, as a supposition, and then eliminate dependence on it later
by discharging it. This is what makes it possible to construct derivations of
tautologies even though the reasoning must begin from premises.
In effect, there are two kinds of premises. There may be premises that
are given. They constitute the background knowledge we can use in our
reasoning. But other premises are not facts we know to be true but rather
suppositions that we adopt as premises for the purposes of the reasoning and
with the intent of discharging them later. The most common strategy used
in constructing a derivation of a conditional is to take the antecedent as a
premise, try to derive the consequent from it, and then conditionalize to
obtain the conditional and eliminate dependence on the premise. This
illustrates why it can be useful to adopt premises that are not known to be
true. They can instead be adopted for strategic reasons, provided we can
later eliminate dependence on them through conditionalization.
At any given point in a derivation, any formula can be adopted as a
premise, but most premises will not be useful. If a premise is not among
the premises we are initially given, then it will only be useful if it can later
be eliminated by conditionalization or some other discharge rule. And a
premise can only be eliminated by conditionalization if we have an interest
in obtaining a conditional having that premise as its antecedent. The
conditional might be the conclusion we are trying to derive. But it can also
happen that we are trying to derive a conclusion that is not a conditional,
but which can be derived from a conditional by using other inference rules.
For example, suppose we want to construct a derivation of the tautology (P
∨ ~(P & Q)) (i.e., a derivation from no premises). By (E12), (P ∨ ~(P & Q)) is
equivalent to (~(P & Q) ∨ P), and by (E4) the latter is equivalent to the
conditional ((P & Q) → P). The conditional ((P & Q) → P) can be obtained
using conditionalization. So the whole derivation is as follows:
(1) 1. (P & Q) premise
(1) 2. P (I1), 1
3. [(P & Q) → P] C, 1, 2
4. (~(P & Q) ∨ P) (E4), 3
5. (P ∨ ~(P & Q)) (E12), 4
Exercises
Part One.
1. Using conditionalization, construct a derivation (from no premises)
of the tautology [(P & Q) → (Q ∨ R)].
2. Construct a derivation of the formula (P ↔ ~~P).
3. Construct a derivation of the formula ~(P & ~P).
Part Two. In each of the following purported derivations, check the lines
for errors, and indicate the numbers of the lines on which errors occur. An
error is said to occur in a line if the line cannot be introduced in accordance
with the rules of inference. This means that in checking a line for errors,
you only look at that line to see whether it has the necessary relation to the
previous lines, and treat the previous lines as if they were all correct. If a
line is incorrect, try to find a way of correcting it (perhaps by adding additional
lines).
1. (1) 1. P premise
(2) 2. (P → Q) premise
(1, 2) 3. Q (I9), 1, 2
2. 1. (P ∨ Q) premise
3. (1) 1. (P ∨ Q) premise
(2) 2. ~Q premise
(1, 2) 3. P (I11), 1, 2
(1) 4. [ ~Q → (P ∨ Q)] C, 2, 1
(1, 2) 5. {P & [ ~Q → (P ∨ Q)]} (I14), 3, 4
4. (1) 1. [R ↔ ~(S ∨ ~T)] premise
(2) 2. ~R premise
(1) 3. {[R → ~(S ∨ ~T)] & [ ~(S ∨ ~T ) → R]} (E7),1
(1) 4. [R → ~(S ∨ ~T)] (I1), 3
(1, 2) 5. ~~(S ∨ ~T) (I10), 2, 4
(1, 2) 6. (S ∨ ~T) (E1), 5
(7) 7. T premise
(1, 2) 8. (~T ∨ S) (E12), 6
(1, 2, 7) 9. S (I11), 7, 8
(1, 7) 10. (~R → S) C, 2, 9
(11) 11. (R ↔ S) premise
(11) 12. [(R → S) & (S → R)] (E7), 11
(11) 13. (S → R) (I2), 12
(1, 2, 7, 11) 14. R (I9), 9, 13
(1, 7, 11) 15. (R & ~R) (I14), 2, 14
(7, 11) 16. {[R ↔ ~(S ∨ ~T)] → (R & ~R)} C, 1, 15
(17) 17. R premise
18. (R → R) C, 17, 17
19. ~(R & ~R) (E5), 18
(7, 11) 20. ~[R ↔ ~(S ∨ ~T)] (I10), 16, 19
(11) 21. {T → ~[R ↔ ~(S ∨ ~T)]} C, 7, 20
(11) 22. (S → {T → ~[R ↔ ~(S ∨ ~T)]}) C, 9, 21
(11) 23. {(S & T) → ~[R ↔ ~(S ∨ ~T)]} (E15), 22
(11) 24. {[R ↔ ~(S ∨ ~T)] → ~(S & T)} (E16), 23
5. Strategies
Derivations provide an effective tool for investigating the validity of
arguments expressed in English. For example, consider the following
argument:
1. If the river floods or there are heavy rains later in the summer, then our
entire wheat crop will be destroyed.
2. If our entire wheat crop is destroyed, then the community will be
bankrupt.
3. The river will flood if, and only if, there is an early thaw in the mountains.
4. Therefore, if there is an early thaw in the mountains, our entire wheat
crop will be destroyed and the community will be bankrupt.
We might symbolize this argument as follows:
1. [(F ∨ H) → D]
2. (D → B)
3. (F ↔ E)
4. [E → (D & B)]
“This is all very well,” you may reply, “but how does one find
derivations?” The answer is that it requires ingenuity. The construction of
derivations is not so mechanical as the construction of truth tables. However,
there are definite strategies that are helpful in constructing derivations. If
these strategies are employed systematically, it is always possible to construct
a derivation of a formula from a set of premises that tautologically implies
it. We can list strategies for each type of formula (for example, conjunction,
disjunction, and so on).
5.1 Conditionals
We might be confronted with either of two situations with regard to
conditionals. We might be trying to get a conditional as a consequence of
something else, or we might be trying to get something else as a consequence
of a conditional. These cases are treated separately.
Backward reasoning
Let us suppose first that we are trying to derive a conditional from
something else. The last derivation above illustrates what is generally the
best strategy: suppose the antecedent as a premise, derive the consequent
from it, and then obtain the conditional by conditionalization. For example:
(1) 1. Q premise
(2) 2. R premise
(1, 2) 3. (Q & R) (I14), 1, 2
(1) 4. [R → (Q & R)] C, 2, 3
5. {Q → [R → (Q & R)]} C, 1, 4
Annotated derivations
A derivation only documents the forward reasoning and the interest
discharges in our reasoning. To help keep track of what we have done in
searching for a derivation, it is often useful to include more information by
indicating how the backward reasoning went as well. For this purpose we
can write annotated derivations. For example, we can annotate the preceding
derivation as follows:
    1. {Q → [R → (Q & R)]}
(1) 1. Q premise
    (1) 2. [R → (Q & R)] for interest 1 by C using premise (1)
(2) 2. R premise
    (1,2) 3. (Q & R) for interest 2 by C using premise (2)
    (1,2) 4. Q for interest 3 by (I14) discharged by (1)
    (1,2) 5. R for interest 3 by (I14) discharged by (2)
(1,2) 3. (Q & R) (I14), 1, 2 discharges interest 3
(1) 4. [R → (Q & R)] C, 2, 3 discharges interest 2
5. {Q → [R → (Q & R)]} C, 1, 4 discharges interest 1
The bold italicized text records the construction and discharge of interests.
The interests are numbered sequentially and offset to the right to distinguish
them from conclusions. To the right of each interest is an explanation of
why it was adopted. In that explanation, numbers in parentheses refer to
conclusions and numbers not in parentheses refer to earlier interests. The
list of premise numbers preceding an interest indicates which premises can
be used in deriving a conclusion that discharges the interest. Interest 2 is an
interest in obtaining the consequent of the conditional in interest 1, so we
can also use the antecedent of the conditional (premise 1) in getting the
consequent for the purpose of employing conditionalization. Similarly for
interest 3.
With one exception, when an interest is added to an annotated derivation
by reasoning backward from another interest, the new interest has the same
premise numbers as the interest from which it was obtained. The exception
concerns interests adopted by conditionalization. Such an interest is
accompanied by a new premise (the supposition of the antecedent of the
desired conditional), and the number of that new premise is added to the
premise numbers of the interest in the consequent of the conditional. This
is illustrated in interests 2 and 3 of the preceding annotated derivation.
This represents the fact that for purposes of conditionalization, we can use
the premise recording the supposition of the antecedent in trying to derive
the consequent. Rule R, which will be introduced in section seven, works
similarly. That is, it introduces a new premise (a supposition) which can
then be used in trying to derive the interests generated by rule R.
The bidirectional reasoning recorded in an annotated derivation is
called interest-driven reasoning because the structure of the reasoning is driven
as much by what we are trying to prove as by whatever premises we might
be given.
Forward reasoning
When we are trying to derive a conditional from something else, we
are engaged in backward reasoning and subsequent interest discharge. If
we are instead trying to derive something from a conditional, we are engaged
in forward reasoning.
1. [P → (Q → R)]
[(P → Q) → (P → R)]

2. (P → Q)
(~P → R)
(~R → Q)
5.2 Conjunctions
The strategies for reasoning forward or backward with conjunctions
are simple and basically similar. When reasoning forward from a conjunction
(A & B), it is generally a good idea to “take the conjunction apart”, using I1
and I2 to obtain the individual conjuncts and then reason from them.
Conversely, when trying to infer a conjunction from other formulas, it is
generally best to try to obtain the conjuncts separately and then conjoin
them using I14. Both of these strategies are illustrated by the following
annotated derivation of [(Q ∨ R) & (P ∨ S)] from (P & Q):
Give an annotated derivation of the formula below the line from the formulas
above the line. Keep in mind the strategies for constructing derivations.
1. (P → Q)
(P → R)
[P → (Q & R)]
5.3 Disjunctions
There is a variety of strategies for use with disjunctions. The strategies
for backward reasoning are different from those for forward reasoning, so
we will discuss them separately.
Backward reasoning
The strategies for backward reasoning are strategies for deriving
disjunctions from other formulas. The simplest case occurs when we can
derive one of the disjuncts separately and then infer the disjunction by I3 or
I4 (addition). This was illustrated by the preceding derivation. We could
expand its annotation as follows:
Forward reasoning
In reasoning forwards from a disjunction, one strategy is to somehow
obtain the negation of one of the disjuncts and then infer the other disjunct
in accordance with I11 or I12 (disjunctive syllogism). This was the strategy
employed on line 3 of the preceding derivation. However, one is not usually
so fortunate as to be able to derive the negation of one of the disjuncts, so
this strategy is not often applicable.
A strategy that can be used in general for reasoning forward from a
disjunction proceeds in terms of I15 (dilemma). That is, if we want to derive
some conclusion C from a disjunction of the form (A ∨ B), we can do that
by deriving the two conditionals (A → C) and (B → C ). In other words,
we derive our desired conclusion C separately from each disjunct, and then
use I15 to get C . An example of this would be the following:
Give an annotated derivation of the formula below the line from the formula
above the line. Do this in two different ways: (1) Use the strategy for
deriving a disjunction from something else, and disjunctive syllogism; (2)
use dilemma:
1. [Q ∨ (P & R)]
(Q ∨ R)
5.4 Biconditionals
The strategy for dealing with biconditionals is basically simple—turn
them into something else that we already know how to deal with. We have
two equivalences that are useful for this purpose, E7 and E8. By E7 the
biconditional (A ↔ B) is equivalent to the conjunction [(A → B) &
(B → A)], so if we wish to derive something from a biconditional, we can
first transform the biconditional into a conjunction, using E7, and then proceed
as with conjunctions. Conversely, if we wish to derive a biconditional from
something else, we can first derive the conjunction to which it is equivalent,
and then use E7 to transform it into the biconditional. The conjuncts are
conditionals, so we will typically use conditionalization to derive them.
Here is an example, in which we derive (~P ↔ ~Q) from (P ↔ Q):
(1) 1. (P ↔ Q) premise
    (1) 1. (~P ↔ ~Q)
(1) 2. [(P → Q) & (Q → P)] E7, 1
(1) 3. (P → Q) (I1), 2
(1) 4. (Q → P) (I2), 2
    (1) 2. [(~P → ~Q) & (~Q → ~P)] for interest 1 by E7
    (1) 3. (~P → ~Q) for interest 2 by (I14)
    (1) 4. (~Q → ~P) for interest 2 by (I14)
(1) 5. (~P → ~Q) E16, 4 this discharges interest 3
(1) 6. (~Q → ~P) E16, 3 this discharges interest 4
(1) 7. [(~P → ~Q) & (~Q → ~P)] (I14), 5, 6
this discharges interest 2
(1) 8. (~P ↔ ~Q) E7, 7 this discharges interest 1
1. (A ↔ B)
~(A & B)
~B

2. (A ↔ B)
(B ↔ C)
(A → C)
5.5 Negations
A general strategy for dealing with negations is to use one of the
equivalences E1, E2, E3, E5, or E6 to convert the negated formula into
something else. This works for the negation of anything other than an
atomic formula. The negation of a negation drops both negations, the negation
of a disjunction becomes a conjunction, the negation of a conjunction becomes
a disjunction, the negation of a conditional becomes a conjunction, and the
negation of a biconditional becomes another biconditional. Thus we can
eliminate the negation and use the strategies that are applicable to the
equivalent formula to which the negation is transformed. For example, if
we want to derive (~P ∨ ~Q) from ~(P ∨ Q), we might proceed as follows:
obtain each conjunct separately. In this case we only needed the second
conjunct, which we obtained by I2: ~(S & T). This is a negation, so we used
E3 to drive the negation in and convert it to a disjunction (~S ∨ ~T). We
were looking for (S → ~T), which we could then obtain by (E4). Thus we
derived (S → ~T) from the first disjunct. Next we turned to the second
disjunct, taking it as a new premise. It is a conjunction, so we obtained each
conjunct separately, using I1 and I2. To obtain (S → ~T) we took the
antecedent as a premise and tried to derive the consequent. That was done
immediately by I9 and I10. Thus we obtained (S → ~T) from each disjunct
of line 1, and hence by I15 it followed from line 1.
Exercises
3. ~(R → S)
[R → (S ∨ ~T)]
~T

4. (~P → R)
[~Q → (R → S)]
[(P ∨ Q) ∨ (R & S)]
6. Double Negations
The rules and strategies discussed above are adequate for deriving
formulas from any premises that tautologically imply them. However, the
derivations produced in this way are often more complex than they need to
be. To simplify things, we will add two more rules of inference. The rule
DN of double negation will be presented in this section, and reductio ad
absurdum in the next.
This reasoning is made complex by the fact that to eliminate the double
negation in (~~P ∨ P), we must get ~~P on a line by itself and then use (E1).
If we could eliminate the double negation inside the disjunction, we could
reason as follows:
The inference from (2) to (3) is not licensed by rule I, because as we saw
above, rule I can only be applied to the entire formula on a line—not to
parts of a formula. However, in this case the inference is clearly valid. The
following metatheorem holds in general:
If A is a formula containing a double negation ~~B as one of its
parts, and A* results from replacing the occurrence of ~~B by B,
then A is tautologically equivalent to A*.
It is easy to see that this principle holds. Just think about the truth tables
for A and A*. Because ~~B and B are equivalent, they will have the same
truth value on any given line of the truth table for A or A*. The other parts
of the formulas are identical, so they will also have the same truth values,
and hence the truth values for A and A* must come out the same on each
line. So A and A* are tautologically equivalent.
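The metatheorem can be spot-checked on a concrete instance. Here is a Python sketch of ours comparing A = (~~P ∨ Q) with the result A* = (P ∨ Q) of replacing ~~P by P:

```python
from itertools import product

# Replacing the part ~~P by P inside (~~P v Q) yields (P v Q);
# the metatheorem says the two formulas are tautologically equivalent.
for p, q in product([True, False], repeat=2):
    a      = (not (not p)) or q   # A  = (~~P v Q)
    a_star = p or q               # A* = (P v Q)
    assert a == a_star

print("A and A* agree on all 4 assignments")
```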
This metatheorem justifies the adoption of an additional inference rule
allowing us to introduce or eliminate double negations as we see fit:
RULE DN: DOUBLE NEGATION:
If some formula, A, appears on a line of a derivation, and a double
negation ~~B is one of its parts, then where A* results from replacing
a single occurrence of ~~B by B, A* can be written on any later line
of the derivation, taking for premise numbers all the premise numbers
of A.
If some formula, A, appears on a line of a derivation, and a formula
B is one of its parts, then where A* results from replacing a single
occurrence of B by ~~B, A* can be written on any later line of the
derivation, taking for premise numbers all the premise numbers of
A.
Notice that DN has two parts. The first part allows us to eliminate a double
negation whenever we want, and the second part allows us to introduce a
double negation whenever that is strategically desirable. Notice also that
DN only allows us to introduce or eliminate a single double negation at a
time. To introduce or eliminate several double negations requires several
applications of the rule.
Exercises
For each of the following, give two derivations of the formula below the
line from the formulas above the line. Keep in mind the strategies for
constructing derivations. In the first derivation, use rule DN. In the second
derivation, do not use rule DN.
1. (P → ~~Q)
(~Q → ~P)

2. (P → ~~Q)
(Q → ~~R)
(P → ~~R)
7. Reductio ad Absurdum
A very useful strategy that we have not yet discussed is called reductio
ad absurdum (reduction to absurdity). This is actually a pair of related
strategies, both based upon the same idea. The idea is that to show that a
formula is true, it suffices to show that its negation leads to a contradiction.
This illustrates the use of rule R, but the astute reader may notice that it
would have been easier to get the same conclusion by using the strategy for
conditionals, taking P as a premise and deriving R. Typically, derivations
based upon reductio ad absurdum will be long, and if there is another way to
solve the problem it is better to do it that way. Like the use of dilemma,
reductio ad absurdum should be viewed as a “last resort” strategy, to be used
only when more direct strategies fail.
Exercises
Part (a) For each of the following, give an annotated derivation of the formula
below the line from the formulas above the line. Keep in mind the strategies
for constructing derivations. Try doing the derivations both with and without
rule R.
1. (P → Q)
~(P → R)
~(Q → R)

2. [P → (Q → R)]
(P → Q)
(R → ~Q)
~P
Part (b) For each of the following, give an annotated derivation of the
formula below the line from the formulas above the line.
[Q → ~(S ∨ ~T)]
[T → ~(R & S)]
[ ~T → (R & S)]
(S ↔ ~T)
Part (c) Show that the following formulas are tautologies by giving annotated
derivations of them; that is, derivations the last lines of which have no
premise numbers.
1. (P ∨ ~P)
2. [(P & Q) → (~Q → R)]
3. [(P ↔ Q) → (~P ↔ ~Q)]
4. {(P → Q) → [(Q → R) → (P → R)]}
5. [P → (~P → Q)]
6. [P → (Q → P)]
7. {[(P & Q) → R] ↔ [(P & ~R) → ~Q]}
8. {P ↔ [~P → (Q & ~Q)]}
9. [(P → Q) ∨ (Q → P)]
Part (d) For each of the following, if the formula below the line is tautologically
implied by the formulas above the line, construct a derivation to show that
it is. If the formula below the line is not tautologically implied by the
formulas above the line, show that it is not by a suitable assignment of truth
values to the atomic parts of the formulas.
Part (e) Symbolize each of the following arguments and construct a derivation
of the conclusion from the premises:
1. If John comes to the party, Joe will not come. Joe will not come unless
Mary comes, but Mary will only come if John comes. Therefore, Joe
will not come to the party.
2. If the bank rejects his loan application, Mr. Horner will either have to
get the money somewhere else or sell his business. If the bank will not
lend him the money, he will not be able to get it anywhere else either.
But if he sells his business, his only alternative will be to join the Peace
Corps. Thus if the bank rejects his loan application, Mr. Horner will
join the Peace Corps.
3. If Stanford beats UCLA in basketball, they will score a great victory.
If Stanford announces that they are cutting their budget for sports,
UCLA will cut its sports budget. If UCLA cuts its sports budget, but
Stanford does not cut theirs, then Stanford will beat UCLA in
basketball. Stanford is very devious, and will announce that they have
cut their budget without actually doing so. Therefore, Stanford will
score a great victory.
4. If the class does well, and the professor knows it, he will not give them
a test. But the professor will not know that the class does well unless
he gives them a test. Therefore, if the class does well the professor will
not know it.
Part (f) Try your hand at the following derivations, but don’t be disappointed
if you do not get them. They are difficult.
1.  (Q → R)
    [R → (P & Q)]
    [P → (Q ∨ R)]
    -------------
    (P ↔ Q)
2. no premises
[{(P ∨ Q) & [(~P ∨ Q) & (P ∨ ~Q)]} → ~(~P ∨ ~Q)]
3. no premises
{[(P ↔ Q) ↔ R] ↔ [P ↔ (Q ↔ R)]}
4. no premises
({[P & (Q → R)] → S} ↔ {[~P ∨ (Q ∨ S)] & [~P ∨ (~R ∨ S)]})
Part Two:
The
Predicate
Calculus
5
The Syntax of the
Predicate Calculus
1. All A are B.
2. All B are C.
2. Predicates, Relations,
and Individual Constants
To deal with the forms of statements like “All men are mortal” or “All
apples are red”, we need the concept of a predicate. When we say something
about some individual (a person, object, number, and so on), we ascribe a
predicate to that individual. If we say “John is a bachelor”, we ascribe the
predicate “is a bachelor” to John. In order to symbolize the form of a
statement like “John is a bachelor”, we use capital letters, with or without
numerical subscripts, to stand for predicates. We might use B to stand for
“is a bachelor”. We must also have some way of symbolizing those
expressions, like “John”, or “the girl with the red hair”, that denote
individuals. For this purpose we use the lower case letters from a through
s, with or without numerical subscripts. They are called individual constants.
We might let j stand for “John”. Using predicate letters and individual
constants we can symbolize the form of the statement “John is a bachelor”
THE SYNTAX OF THE PREDICATE CALCULUS 103
Tbj
In general, if we have a relation “(1) ... (2)”, and we have some statement of
the form “b ... c”, where b and c denote individuals, then in deciding in
what order to write b and c after the relation symbol, we match up the
statement with the relation and see which individual constant corresponds
to which number. The individual constant corresponding to (1) is written
first, and the individual constant corresponding to (2) is written second.
To illustrate this with a slightly more complex example, consider the
three-place relation “is between”. For example, we might say that John is
between Bob and William. In symbolizing this statement we might let B
stand for “(1) is between (2) and (3)”. Then letting j stand for “John”, b for
“Bob”, and m for “William”, we can symbolize the form of “John is between
Bob and William” as Bjbm. Had we instead let B stand for “(2) is between
(1) and (3)”, then the statement would be symbolized as Bbjm.
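The role of the numbered blanks can be made vivid with a small programming analogy (a sketch of ours, with arbitrary numeric stand-ins for the three people): the blank numbers behave exactly like the parameter positions of a function.

```python
# "x is between y and z", with numbers standing in for people (illustration only)
def between(x, y, z):
    return y < x < z or z < x < y

john, bob, william = 2, 1, 3          # arbitrary stand-ins

# Reading B as "(1) is between (2) and (3)", Bjbm becomes between(john, bob, william):
print(between(john, bob, william))    # True

# Reading B as "(2) is between (1) and (3)" swaps the first two parameter
# positions, so the same statement is written Bbjm:
B2 = lambda p1, p2, p3: between(p2, p1, p3)
print(B2(bob, john, william))         # True: same statement, different order
```

Either reading expresses the same fact; what changes is only the order in which the individual constants are written after the relation symbol.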
Exercises
A. Letting T be “(1) is taller than (2)” and B be “(1) is between (2) and (3)”,
and letting j stand for “John”, b for “Bob”, and m for “William”, symbolize
the forms of the following statements:
1. If Bob is taller than John, then it is not the case that John is taller than
Bob.
2. If John is taller than Bob, and Bob is taller than William, then John is
taller than William.
3. It is not the case that John is taller than himself.
4. If John is between Bob and William, then John is between William
and Bob.
5. If John is between Bob and William, then neither is Bob between
John and William nor is William between John and Bob.
B. Letting B be “(2) is between (1) and (3)”, symbolize the fourth and fifth
statements of the preceding exercise.
¹ Notice that when we write “Given any two numbers”, we do not mean
“Given any two different numbers”. The two numbers may be equal; for
example x and y in x + y = y + x may both refer to the same number.
from other objects we may be talking about. A variable will be any lower
case letter of the alphabet from t through z, with or without numerical
subscripts. For example, x, z, t2, and w137 are variables.
We can paraphrase statements in which “every”, “all”, “any”, “some”,
and “there is” occur by using variables. “Given any two objects, if the first
is taller than the second, then it is not the case that the second is taller than
the first” can be paraphrased as “Given any two objects x and y, if x is taller
than y, then it is not the case that y is taller than x”. If we now write
variables after relation symbols just like we wrote individual constants after
them, we can partially symbolize this statement as “Given any two objects,
x and y, (Txy → ~Tyx)”. Simple statements like “Everything is red” can
also be paraphrased in this way: “Given anything x, Rx”.
Now consider how we might symbolize the form of the statement “All
apples are red”. The first step is to paraphrase it to get it into the form
“Given anything x, ...”. What can we put in the blank? “All apples are red”
means “Everything that is an apple is red”, or “Given anything, if it is an
apple then it is red”. So this can be paraphrased as “Given anything x, if x
is an apple then x is red”. Letting A be “is an apple” and R be “is red”, we
can partially symbolize this as “Given anything x, (Ax → Rx)”. Now all
that remains is to introduce some way to symbolize the phrase “Given
anything x”. We will do this using an inverted “A”: “(∀x)”. Then, “Given
anything x, (Ax → Rx)” can be symbolized as (∀x)(Ax → Rx). “(∀x)” is
called the universal quantifier with respect to x. Similarly, “(∀y)” is the universal
quantifier with respect to y, and so on. Note that it makes no difference
which particular variable we use in symbolizing something. We can
symbolize “All apples are red” as either (∀x)(Ax → Rx) or (∀y)(Ay → Ry).
The reason for having more than one variable at our disposal will become
apparent when we discuss formulas that contain more than one quantifier.
In general, a statement of the form “All A are B” can be paraphrased
as “Given anything x, if x is A then x is B”. So “All A are B” can be
symbolized as (∀x)(Ax → Bx).
The universal quantifier corresponds to the expressions “all” and
“every”. It is convenient to have another kind of quantifier corresponding
to the expressions “some” and “there is”. Consider the statement “Some
apples are green”. This can be paraphrased as “There is something x, that
is a green apple”. To say that something is a green apple is just to say that
it is both green and an apple. So “Some apples are green” can be paraphrased
as “There is something x, such that (Gx & Ax)”. Next we want to introduce
a symbol for the phrase “There is something x, such that”. The symbol
“(∃x)” will be used. This is called the existential quantifier with respect to x.
“Some apples are green” can be symbolized as (∃x)(Gx & Ax). In general, a
statement of the form “Some A is B” or “There is an A that is B” can be
symbolized as (∃x)(Ax & Bx).
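Over a finite domain, the two quantifier patterns correspond directly to Python's all() and any(). The following sketch (the domain and the extensions of the predicates are invented for illustration) shows both symbolizations at work:

```python
# Finite-domain illustration: (∀x)(Ax → Rx) is all(), (∃x)(Gx & Ax) is any().
domain = ["gala apple", "granny smith apple", "fire truck", "lime"]
A = lambda x: "apple" in x                              # "is an apple"
R = lambda x: x in ("gala apple", "fire truck")         # "is red" (stipulated)
G = lambda x: x in ("granny smith apple", "lime")       # "is green" (stipulated)

# "All apples are red": (∀x)(Ax → Rx)
print(all((not A(x)) or R(x) for x in domain))   # False: a granny smith is not red
# "Some apples are green": (∃x)(Gx & Ax)
print(any(G(x) and A(x) for x in domain))        # True: the granny smith
```

Notice that the universal case uses a conditional (here written as its material equivalent, not-A-or-R), while the existential case uses a conjunction, exactly as in the symbolizations above.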
It should be noted that in symbolizing a statement of the form “Some
A is B” we use a conjunction, whereas in symbolizing a statement of the
form “All A are B” we use a conditional. There seems to be a strong
temptation to mix these up, and either symbolize “All A are B” as (∀x)(Ax
& Bx) or “Some A is B” as (∃x)(Ax → Bx). These are wrong for the following
reasons. First consider (∀x)(Ax & Bx). This says that everything is both an
A and a B. But that is not what we mean when we say that all A are B. We
No apple is blue.
It is not the case that there is a blue apple.
~(∃x)(x is an apple and x is blue)
~(∃x)(Ax & Bx)
No apple is blue.
Every apple is non-blue.
(∀x)(if x is an apple then x is not blue)
(∀x)(Ax → ~Bx)
In general, “No A is B” can be paraphrased either as “It is not the case that
there is an A that is B” or as “Given any A, it is not a B”. The first
paraphrase produces the symbolization ~(∃x)(Ax & Bx), and the second
produces the symbolization (∀x)(Ax → ~Bx). It will be shown later that the
formulas ~(∃x)(Ax & Bx) and (∀x)(Ax → ~Bx) are equivalent, so it makes no
difference which way we symbolize a statement involving “No”.
The English word “any” behaves peculiarly. In some contexts it
Exercises
Letting P stand for “was a president of the United States”, B for “(1) was
president before (2)”, and letting g stand for “Washington”, l for “Lincoln”,
and j for “Jefferson”, symbolize the forms of the following statements:
1. There was no president before Washington.
2. Washington was president before anyone who was president after
Lincoln.
3. Someone was president before Lincoln and after Jefferson.
4. Everyone who was president before Jefferson was president before
Lincoln.
Let P stand for “is a positive integer”, let E stand for “is even”, let O stand for “is odd”, let L
stand for “(1) is less than (2)”, and let I stand for “(1) is equal to (2)”. Let a1
stand for the numeral “1”, a2 for the numeral “2”, a3 for the numeral “3”,
and so on.
First consider a few examples that do not involve quantifiers. The
statement “1 is less than 2” will be symbolized as La1a2. The statement “1 is
less than or equal to 2” can be paraphrased as “1 is less than 2, or 1 is equal
to 2”, which can then be symbolized as (La1a2 ∨ Ia1a2). The statement “7 is
greater than 5” can first be paraphrased as “5 is less than 7”, and then
symbolized as La5a7.
Now consider some statements involving quantifiers. The statement
“Every positive integer is either odd or even” can be paraphrased as “Given
anything x, if x is a positive integer then either x is odd or x is even”. This
can then be partially symbolized as (∀x)[if Px then either Ox or Ex], and
then finished as (∀x)[Px → (Ox ∨ Ex)].
The statement “There is a positive integer greater than every positive
integer” can be paraphrased as “There is something x, such that x is a
positive integer, and x is greater than every positive integer”. This in turn
can be partially symbolized as (∃x)[Px and x is greater than every positive
integer]. This can be paraphrased as (∃x)[Px and given anything y, if y is a
positive integer then x is greater than y]. This can be partially symbolized
as (∃x)[Px and (∀y)(if Py then Lyx)], and then finished as (∃x)[Px & (∀y)(Py
→ Lyx)].
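The nesting of the quantifiers mirrors a nesting of any() and all() on a finite domain. As a sanity check (ours, with the first ten positive integers standing in for all of them):

```python
P = range(1, 11)                 # a finite stand-in for the positive integers
L = lambda a, b: a < b           # "(1) is less than (2)"

# (∃x)[Px & (∀y)(Py → Lyx)]: some x is greater than every y, itself included
claim = any(all(L(y, x) for y in P) for x in P)
print(claim)   # False, since no x satisfies L(x, x)
```

The statement comes out false, as it should: an integer greater than every positive integer would have to be greater than itself.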
The symbolization of the last statement illustrates the general pattern
that should be followed in complex cases. The process of symbolizing is
done in two steps. First, we introduce quantifiers and variables. Second, we
symbolize the rest as in the propositional calculus, beginning with the smallest
parts and working outwards. It is illuminating to diagram this process as
follows:
Exercises
A. Letting P stand for “was a president of the United States”, B for “(1) was
president before (2)”, D for “(1) died before (2)”, A for “(1) was born before
(2)”, and P2 for “is a person”, and letting g stand for “Washington”, l for
“Lincoln”, and j for “Jefferson”, symbolize the forms of the following
statements:
1. Given any two presidents, if the first was born before the second,
then the first died before the second.
2. Some presidents were born before others, and some died after others.
3. There is at least one president who was born before another president
and died after him.
4. There were no two presidents between Jefferson and Lincoln such
that one was born before the other but died after him.
5. There is a president who was president both before and after another
president.
6. Given any three past presidents, if the first died before the second,
and the second died before the third, then the first died before the
third, but it is not the case that given any three presidents, if the first
was born before the second, and the second was president before the
third, then the third died after the second.
7. Given any two presidents, if the first died before Jefferson, and Lincoln
died before the second, then if the second was born before the first,
then the second was president before the first and died after
Washington but before Jefferson.
8. No president was president both before and after two presidents
who in turn were each president both before and after each other.
B. Letting D stand for “is a dog”, A for “is a day”, S for “is a stone”, L for
“is an angel”, R for “is a time when it rains”, B for “is rolling”, P for “is a
time when it pours”, F for “is a fool”, M for “is moss”, H for “(1) has (2)”, U
for “(1) rushes into (2)”, G for “(1) gathers (2)”, and E for “(1) fears to tread
in (2)”, the following formulas symbolize familiar sayings. Express these
sayings in English.
1. (∀x)(Rx → Px)
2. (∀x)[Dx → (∃y)(Ay & Hxy)]
3. (∀x)[(Sx & Bx) → ~ (∃y)(My & Gxy)]
4. (∀x)(∀y){[(∃z)(Lz & Ezy) & Uxy] → Fx}
5. Symbolizing Relations
Thus far we have only been concerned with symbolizing the forms of
statements, but the same thing can be done with relations. Some relations
can be viewed as constructed out of other simpler relations. For example,
the relation “Either (1) loves (2), or (2) loves (1)” is constructed out of the
simpler relation “(1) loves (2)”. Taking the relation symbol L to stand for
the simpler relation “(1) loves (2)”, we can symbolize the form of the complex
relation by writing [L(1)(2) ∨ L(2)(1)]. We can then go a step further and
replace the numbers in parentheses by variables. The result is the formula
(Lxy ∨ Lyx), which gives us the logical form of the complex relation.
Consider a more complicated example:
If (1) strikes (2) in the presence of (3), then (3) will think poorly of (1),
but (2) will be embarrassed and will think poorly of himself.
Letting S stand for “(1) strikes (2) in the presence of (3)”, letting T stand for
“(1) thinks poorly of (2)”, and letting E stand for “is embarrassed”, we can
symbolize the complex relation as
If S(1)(2)(3), then T(3)(1), but E(2) and T(2)(2)
{S(1)(2)(3) → [T(3)(1) & (E(2) & T(2)(2))]}
When we replace the numbers by variables we get
{Sxyz → [Tzx & (Ey & Tyy)]}
In replacing the numbers by variables it makes no difference which variables
we choose. We could have symbolized the above relation equally well as
{Szyx → [Txz & (Ey & Tyy)]}.
Sometimes we must use quantifiers in symbolizing the form of a relation.
The relation
(1) is the father of the mother of (2)
can be paraphrased as
There is something, x, such that (1) is the father of x and x is the mother
of (2).
Letting F stand for “(1) is the father of (2)” and M stand for “(1) is the
mother of (2)”, we can symbolize this as
(∃x)(F(1)x & Mx(2)).
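Seen computationally, the open places (1) and (2) are free parameters of a function, while x is quantified away inside it. A small sketch (the family data is invented for illustration):

```python
fathers = {("carl", "mary"), ("john", "bob")}    # invented (father, child) pairs
mothers = {("mary", "sue")}                      # invented (mother, child) pairs
people = {"carl", "mary", "john", "bob", "sue"}

F = lambda a, b: (a, b) in fathers               # "(1) is the father of (2)"
M = lambda a, b: (a, b) in mothers               # "(1) is the mother of (2)"

# (∃x)(F(1)x & Mx(2)): (1) and (2) stay free; x is bound by the quantifier
def father_of_mother(one, two):
    return any(F(one, x) and M(x, two) for x in people)

print(father_of_mother("carl", "sue"))   # True: carl -> mary -> sue
```

The function has two parameters precisely because the symbolized relation has two free places; the existentially quantified x never appears in its parameter list.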
Exercises
A. Symbolize the forms of the following relations, letting M stand for “(1)
is married to (2)”, W stand for “(1) is a wife of (2)”, F stand for “(1) is the
father of (2)”, P stand for “(1) is a parent of (2)”, B stand for “(1) is a brother
of (2)”, S stand for “(1) is a sister of (2)”, and D stand for “(1) is a daughter
of (2)”.
1. (1) is a grandfather of (2).
2. (1) is the paternal grandfather of (2).
3. (1) is a sister-in-law of (2). (A sister-in-law is a wife of a brother or a
sister of a spouse.)
4. (1) is the paternal grandfather of (2)’s maternal grandmother.
5. (1) is a cousin of (2). (A cousin is an offspring of a sibling of a parent.)
6. (1) is a niece of (2). (A niece is a daughter of one's brother or sister or
daughter of the brother or sister of one's spouse.)
7. (1) is a nephew of (2). (A nephew is a son of one's brother or sister or son
of the brother or sister of one's spouse.)
8. (1) is an uncle of (2). (An uncle is a brother of a parent or the spouse of a
sister of a parent.)
9. (1) is a cousin of a nephew of an uncle of (2).
10. A cousin of a sibling of (1) is a nephew of (2) and a grandmother of
(3).
the quantifier “(∀x)” binds all occurrences of x and the quantifier “(∃y)”
binds all occurrences of y in the formula (∀x)(Ax → (∃y)Rxy). It should be
noticed that different occurrences of the same variable can be bound by
different quantifiers. The statement “Either all apples are red or all apples
are green” can be symbolized as
[(∀x)(Ax → Rx) ∨ (∀x)(Ax → Gx)].
Here the first and second occurrences of x are bound by the first quantifier,
and the third and fourth occurrences are bound by the second quantifier.
(We don’t count the variable in the quantifier itself as an occurrence.) To
take a more complex case, in
(∀x)[(Ax → Bx) & (∃x)Ax]
the universal quantifier binds the first two occurrences of x, and the existential
quantifier binds the third occurrence of x. An occurrence of a variable is
always bound by the innermost quantifier that can bind it. Thus in the above
example the third occurrence of x is bound by the existential quantifier
rather than the universal quantifier. If the existential quantifier were omitted
then all three occurrences of x would be bound by the universal quantifier.
In general, in a formula of the form (∀x)( . . . x . . . ) or (∃x)( . . . x . . . ), the
quantifier binds all occurrences of x within the formula that are not already
bound by another quantifier.
The formula that immediately follows a quantifier and in which any
occurrences of the variable of that quantifier not previously bound are bound
by the quantifier, is called the scope of the quantifier. So, in the formula
[(∀x)(Ax → Rx) ∨ (∀x)(Ax → Gx)], (Ax → Rx) is the scope of the first
quantifier, and (Ax → Gx) is the scope of the second quantifier. In the
formula (∀x)(Ax → (∃y)Rxy), (Ax → (∃y)Rxy) is the scope of “(∀x)”, and
Rxy is the scope of “(∃y)”. An occurrence of a variable is bound by a quantifier
if, and only if, the quantifier contains that variable, and the occurrence in question
is within the scope of that quantifier and not already bound by another quantifier.
Notice that in a formula we get by symbolizing the form of a statement,
each occurrence of a variable must be bound by some quantifier. This is
because we only introduce variables along with quantifiers that bind them.
Variables only result from symbolizing English quantifiers like “all”, “each”,
“every”, “some”, and so on. On the other hand, when we symbolize relations,
the variables that we put in place of the numbers in parentheses are not
bound by any quantifiers. If an occurrence of a variable in a formula is not
bound by any quantifier, it is called a free occurrence. For example, the
occurrence of z in (∀x)(∃y)(Fxy ↔ Hxyz) is free. Hence, the difference between
the formulas that result from symbolizing the forms of statements and those
that result from symbolizing the forms of relations is that the former never
have free occurrences of variables, and the latter always have free occurrences
of variables.
A formula of the predicate calculus that does not have any free
occurrences of variables is called a closed formula. So we always get closed
formulas when we symbolize the forms of statements. A formula that
contains free occurrences of variables is called an open formula. Open formulas
symbolize relations.
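The closed/open distinction is easy to compute. The sketch below uses an invented tuple encoding of formulas (not notation from the text) and returns the set of variables that occur free:

```python
# Invented encoding: ("atom", letter, string of terms), ("not", f),
# ("and"|"or"|"if"|"iff", f, g), ("all"|"some", variable, f).
VARIABLES = set("tuvwxyz")        # variables are the letters t through z

def free_vars(f):
    """The set of variables having free occurrences in formula f."""
    tag = f[0]
    if tag == "atom":
        return {t for t in f[2] if t in VARIABLES}
    if tag == "not":
        return free_vars(f[1])
    if tag in ("and", "or", "if", "iff"):
        return free_vars(f[1]) | free_vars(f[2])
    # a quantifier binds every free occurrence of its own variable
    return free_vars(f[2]) - {f[1]}

# (∀x)(∃y)(Fxy ↔ Hxyz): open, because z occurs free
f = ("all", "x", ("some", "y", ("iff", ("atom", "F", "xy"),
                                       ("atom", "H", "xyz"))))
print(free_vars(f))               # {'z'}
```

A formula is closed exactly when this function returns the empty set, which is why symbolized statements, but never symbolized relations, come out closed.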
Exercises
B. For each of the following formulas, tell which (if any) occurrences of
variables are free:
1. Bxa
2. (∀x)(Fxa → Bxy)
3. [(∃x)(Gx & Hyz) ∨ Hxx]
4. (∀x)[Hxa ↔ (∃y)(∃z)(Hyx → ~Hzz)]
5. (∀x)(Hx1 → Bxx)
6. [(∀x)(∃y)(Bxyz ∨ ~ Bzyx) ∨ ~(Bxyz ∨ ~Bzyx)]
7. Universes of Discourse
When we write (∀x)Fx, we are saying that everything in the universe
has the property F. This is sometimes expressed by saying that the variables
in quantifiers “range over” everything in the universe. However, we are
often interested in only a subset of all the objects in the universe. For
example, when we write “x + y = y + x” in mathematics, we are implicitly
supposing that our variables range only over numbers. We could express
this precisely by taking N to symbolize “is a number” and writing:
(∀x)(∀y)[(Nx & Ny) → x + y = y + x].
However, when we are only interested in talking about numbers, it is more
convenient to eliminate the use of N and just write:
(∀x)(∀y) x + y = y + x
understanding that the variables just range over numbers. When we do
this we say that we are employing a restricted universe of discourse. If our
Exercises
Taking the class of human beings as your universe of discourse, and letting
M stand for “(1) is married to (2)”, W stand for “(1) is a wife of (2)”, F stand
for “(1) is the father of (2)”, P stand for “(1) is a parent of (2)”, B stand for
“(1) is a brother of (2)”, S stand for “(1) is a sister of (2)”, a stand for
“Arthur”, and b stand for “Bartholomew”, symbolize the forms of the
following statements. You can assume that Arthur and Bartholomew
are humans and that parents and siblings of humans are humans.
1. It is false that anyone is both the father of Arthur and the mother of
Bartholomew.
2. Arthur has a mother who is married to his father.
3. Arthur is the paternal grandfather of Bartholomew.
4. Arthur is a sibling of someone.
5. Everyone who is married to anyone is the offspring of someone.
6. One of Arthur’s grandfathers is a brother of one of Bartholomew’s
grandmothers.
7. Anyone who is married to someone’s brother is someone’s sister-in-
law.
8. No one is the father of a brother of his mother.
in question.
A peculiarity of this definition is that by Rules 7 or 8 a quantifier can
be appended to a formula even if that formula contains no free occurrences
of the variable of the quantifier. Such a quantifier is said to be vacuous. We
could have framed Rules 7 and 8 in such a way as to preclude vacuous
quantifiers, but that would have made them more complicated. It is simpler
to allow vacuous quantifiers, as long as it is clearly understood what they
mean. A formula with a vacuous quantifier will be taken to mean the same
thing as the corresponding formula without the quantifier. Thus, for example,
the formula (∀x)((∃y)Fy → (∃z)Gz) means the same thing as ((∃y)Fy →
(∃z)Gz). In other words, a vacuous quantifier leaves the meaning of a
formula unchanged.
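On a finite, nonempty domain this is easy to check (the domain and extensions below are invented for illustration):

```python
domain = [1, 2, 3]
F = lambda y: y % 2 == 0          # stipulated extension of F
G = lambda z: z > 5               # stipulated extension of G

# ((∃y)Fy → (∃z)Gz)
inner = (not any(F(y) for y in domain)) or any(G(z) for z in domain)
# (∀x)((∃y)Fy → (∃z)Gz): x never occurs, so each instance is just `inner`
vacuous = all(inner for _x in domain)
print(inner, vacuous)   # the same truth value
```

Because x has no free occurrences in the inner formula, every "instance" the vacuous quantifier surveys is the same, and the truth value is unchanged.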
Exercises
Which of the following are formulas of the predicate calculus? Which are
closed formulas and which are open formulas?
1. (∀x)(∀y)(Fxy → Fyx)
2. (P → (∃x)Gxy)
3. ((∀x)(∃y)(∀z)Gxy & Fyz)
4. (∃z)(∀x)Fxz → (∀x)(∃y)Gxy
5. (∃x) ~(P & (∀x)(Fx → Gx))
6. (∀x)(∀y)((Bxa → Bxy) ∨ ~(Hxyz & Bzy))
7. (∀x)(Fx) → (∃y)(Gxy → Fy)
8. ((∀w)Pw ↔ ~(∃z)(Wp & Pz))
9. (Wa & ~Wa)
10. ((P & Q) → R)
Practice Problems
Symbolize the forms of the following statements, using the symbols indicated:
1. If someone is dead, there is a murderer in the house. [D: “is dead”;
P: “is a person”; M: “is a murderer”; H: “is in the house”]
2. If someone is dead, Jones killed him. [D: “is dead”; P: “is a person”;
K: “(1) killed (2)”; j: Jones]
3. Any student in some course John is in will flunk it if John does. [S:
“is a student”; C: “is a course”; I: “(1) is in (2)”; F: “(1) flunks (2)”; j:
John]
4. If all members of the ping pong team are incapacitated, then if Castro
invites them to visit Cuba none of them will be able to go. [P: “is a
member of the ping pong team”; I: “is incapacitated”; V: “(1) invites
(2) to visit (3)”; A: “(1) is able to go to (2)”; a: Castro; c: Cuba]
5. If the President gets his way, no congressman will dare vote against
the budget unless he (the congressman) is either retiring or from
California. [G: “gets his way”; C: “is a congressman”; D: “dares to
vote against the budget”; R: “is retiring”; F: “is from California”; p:
the President]
6. No one who has been to Xanadu and seen the stately pleasure dome
now remembers how to get there. [P: “is a person”; B: “(1) has been
to (2)”; R: “(1) remembers how to get to (2)”; a: Xanadu; d: the stately
pleasure dome]
7. If there are no prime numbers between 13 and 17, then if any number
between 13 and 17 is divisible by 13, it is also divisible by 4. [N: “is a
number”; P: “is prime”; L: “(1) is less than (2)”; D: “(1) is divisible by
(2)”; a3 , a4, a13, a17]
8. If anyone steals the royal crown and melts it down, no one will miss
it unless the king does. [S: “(1) steals (2)”; P: “is a person”; M: “(1)
melts (2) down”; I: “(1) misses (2)”; k: the king; c: the royal crown]
9. No one respects anyone who does not respect anyone. [P: “is a
person”; R: “(1) respects (2)”]
10. Anyone who is taller than someone is taller than someone who isn’t
taller than anyone. [P: “is a person”; T: “(1) is taller than (2)”]
6
The Semantics of the
Predicate Calculus
[(∀x)(x∈A ↔ x∈B) → A = B]
Because the identity of a set is determined by its members, one convenient
way to refer to a set having a small number of members is by listing its
members. The customary way to do this is to enclose the list of members in
braces, { and }. We can, for example, refer to the set of all integers between
one and four by writing {2, 3}. It makes no difference in what order we list
the members of the set. If the members are the same, the set is the same. So
{2, 3} and {3, 2} are the same set, and {1, 2, 3}, {1, 3, 2}, {2, 1, 3}, {2, 3, 1},
{3, 1, 2}, and {3, 2, 1} are the same set.
Sometimes a set will have just one member. For example, we can talk
about the set of all integers between one and three, which is the set whose
only member is two. This set can be referred to as {2}. A set having only
one member is called the unit set of that object. Thus {2} is the unit set of
two.
Sometimes a set will have too many members to make it practical to
list all of them. A set might even have an infinite number of members, like
the set of all integers. When this happens we cannot refer to the set by
simply listing the members, but we can adapt the above notation to take
care of this. If for some predicate A we want to talk about the set of all
objects that are A (such as the set of all past presidents), we can simply
write {x | x is A}. This is read “the set of all objects, x, such that x is A”.
Using these symbols the set of all past presidents is {x | x is a past president}.
There is one set that deserves special mention because of its somewhat
paradoxical-seeming nature. This is the set that has no members. This is
called the empty set, and is designated by the symbol ∅. The empty set can
be introduced in many different ways. For example, ∅ = {x | x both is and
is not an integer} because there is nothing that both is and is not an integer.
Notice that there can only be one empty set—this follows from the
extensionality of sets. If we had two sets neither of which had any members,
then they would have the same members (that is, (∀x)(x∈A ↔ x∈B) would
be true), and so they would be the same set.
A set that is just the opposite of the empty set is the universal set. The
universal set is the set containing everything. It contains all numbers, material
objects, people, and so on. The universal set is denoted by U. We can
define it as U = {x | x either is or is not an integer} because everything either
is or is not an integer.
The order in which we list the members of a set makes no difference
to the identity of the set. But sometimes we would like it to make a difference.
This leads to the concept of an ordered-set, or a sequence. A sequence is a list
of objects having a definite order. For example, we may talk about the
sequence of integers from one to ten, listed in order from smallest to largest.
This will be distinct from the sequence of integers from one to ten listed in
order from largest to smallest.
Just as { and } are used to talk about sets, 〈 and 〉 are used to talk about
sequences. Thus, the sequence of integers from one to ten listed in order
from smallest to largest will be the sequence 〈1, 2, 3, 4, 5, 6, 7, 8, 9, 10〉, and
THE SEMANTICS OF THE PREDICATE CALCULUS 121
the sequence of integers from one to ten listed in the reverse order will be
〈10, 9, 8, 7, 6, 5, 4, 3, 2, 1〉. These are different sequences. Unlike sets, sequences
are not extensional. Whereas the identity of a set is determined solely by its
members, the identity of a sequence is determined by the combination of its
members and their order.
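Programming languages draw the same line: hashed sets are extensional, while tuples are identified by their members together with their order. A quick Python illustration (ours):

```python
assert frozenset({2, 3}) == frozenset({3, 2})            # sets: order is irrelevant
assert (1, 2) != (2, 1)                                  # ordered pairs differ
assert tuple(range(1, 11)) != tuple(range(10, 0, -1))    # the two ten-term sequences
print("sets ignore order; sequences do not")
```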
We will frequently want to talk about sequences of fixed length. A
sequence of two objects will be called an ordered pair. Thus 〈1, 2〉, 〈John
Kennedy, F. Scott Fitzgerald〉, 〈Hiroshima, √2〉 are all ordered pairs. Similarly,
a sequence of three objects is called an ordered triple, and a sequence of four
objects is called an ordered quadruple. Above a certain point we run out of
names like this, so we use another kind of name. An ordered pair can also
be called an ordered two-tuple, an ordered triple can be called an ordered
three-tuple, and so on. This terminology will work for a sequence of any
length. A sequence of 137 objects can be called an ordered 137-tuple. In
general, given any number n, we can talk about sequences of n objects and
call them ordered n-tuples.
There is a close relationship between sets and relations. The simplest
relations are one-place relations, or predicates. Corresponding to each
predicate is the set of all objects to which that predicate can be truly ascribed.
For example, corresponding to the predicate “is a bachelor” there is the set
of all bachelors; that is, {x | x is a bachelor}. And corresponding to the
predicate “is an integer” there is the set of all integers; that is, {x | x is an
integer}. In other words, given any predicate A, there corresponds to it {x |
x is A}. This set is called the extension of the predicate A. An object is a
member of the extension of a predicate if, and only if, that predicate can
be truly ascribed to that object.
Next consider two-place relations, such as “(1) is the brother of (2)”.
There is also a set corresponding to this, but now it is a set of ordered pairs.
For example, corresponding to the relation “(1) is the brother of (2)” is the
set of all ordered pairs for which the first element of the pair is the brother
of the second element of the pair. Or corresponding to the relation “(1) is
an integer less than (2), and (2) is an integer less than four”, we have the set
of all ordered pairs of integers such that the first integer in each pair is less
than the second integer, and the second integer is less than four. This is just
the set {〈1, 2〉, 〈1, 3〉, 〈2, 3〉}. In general, given a two-place relation “(1) ... (2)”
(where we fill something in for the blank), we can talk about the set of all
ordered pairs which are such that the first member of each pair stands in
this relation to the second member of that pair. This set can be referred to
as “the set of all ordered pairs, 〈x, y〉, such that x ... y”, and can be symbolized
as {〈x, y〉 | x ... y}. Corresponding to the relation “(1) is the brother of (2)” is
{〈x, y〉 | x is the brother of y}. This set of ordered pairs is called the extension
of the relation. The extension of the relation “(1) is an integer between (2)
and (3), and (2) is greater than four and (3) is less than nine” is the set {〈6, 5,
7〉, 〈6, 5, 8〉, 〈7, 5, 8〉, 〈7, 6, 8〉}. Given an n-place relation, the elements of a
particular ordered n-tuple of objects stand in this relation to one another
if, and only if, the ordered n-tuple is a member of the extension of the
relation.
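Both extensions computed in this section can be reproduced by set comprehension; a Python set of tuples is precisely a set of ordered n-tuples (our sketch, with the integers capped at nine, which is enough for both examples):

```python
ints = range(1, 10)   # enough integers for both examples

# "(1) is an integer less than (2), and (2) is an integer less than four"
ext2 = {(x, y) for x in ints for y in ints if x < y and y < 4}
print(ext2 == {(1, 2), (1, 3), (2, 3)})   # True

# "(1) is an integer between (2) and (3), and (2) is greater than four
#  and (3) is less than nine"
ext3 = {(x, y, z) for x in ints for y in ints for z in ints
        if y < x < z and y > 4 and z < 9}
print(sorted(ext3))   # [(6, 5, 7), (6, 5, 8), (7, 5, 8), (7, 6, 8)]
```

An n-tuple of objects stands in the relation exactly when it is a member of the computed set, which is just the membership test of the definition above.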
Exercises
2. Interpretations
Interpretations must contain enough information to enable us to
determine the truth values of formulas relative to the interpretations. For
this purpose, an interpretation must specify a universe of discourse for the
quantifiers, and it must interpret the sentential letters, relation symbols,
and individual constants. The universe of discourse will be called the domain
of the interpretation. In the propositional calculus, it proved unnecessary to
specify the entire meaning of a sentential letter. For the purpose of computing
truth values, it was sufficient to simply assign truth values to the sentential
letters. That remains true in the predicate calculus, so we will take an
interpretation of the predicate calculus to assign truth values to sentential
letters. Similarly, an interpretation will assign a denotation to each individual
constant. The denotation of an individual constant is the object it denotes
or names.
A one-place relation symbol is used to express a predicate. The statement
symbolized by Fc is true if, and only if, the predicate expressed by F can be
correctly ascribed to the object denoted by c. But as we saw in section one,
that condition is satisfied if, and only if, the object denoted by c is in the
extension of the predicate expressed by F. Thus for the purpose of determining
the truth value of Fc, it suffices to know the extension of F and the denotation
of c. We do not have to know the “meaning” of F, in the sense of knowing
what predicate F expresses; knowing the extension suffices.
3. Truth Rules
To compute the truth value of a formula relative to an interpretation,
we need truth rules, so let us turn to the task of formulating such rules. The
first thing to observe is that we cannot talk about the truth or falsity of an
open formula—only of closed formulas. Open formulas symbolize relations
rather than statements, and only statements are true or false. For instance,
we cannot ask whether the relation (1) is the brother of (2) is true. That
question makes no sense. Truth rules can only be applicable to closed
formulas.
The truth rules employed in the propositional calculus are still correct
for the predicate calculus. However, because of the greater expressive power
of the predicate calculus, those rules are not sufficient for computing truth
values for all formulas. The simplest cases they omit are those of atomic
formulas involving relation symbols. To compute truth values for these
formulas we need the following truth rules:
(1) If F is a one-place relation symbol and c is an individual constant, Fc
is true relative to an interpretation if and only if the interpretation
assigns an object as the denotation of c and a set of objects as the
extension of F, and the denotation of c is a member of the extension of F.
(2) If F is an n-place relation symbol and c1,...,cn are individual constants,
Fc1...cn is true relative to an interpretation if and only if the interpretation
assigns objects as the denotations of c1,...,cn and a set of ordered
n-tuples as the extension of F, and the n-tuple of those denotations is a
member of the extension of F.
For example, suppose we use H to express the mathematical relation x+y =
z, and restrict our domain to the positive integers less than 4. So the
domain is the set {1,2,3}, and the extension of H is {〈1,1,2〉,〈1,2,3〉,〈2,1,3〉}.
Then if the denotation of b is 1, c is 2, and d is 3 (so Hbcd expresses the
statement “1+2 = 3”), Hbcd is true if, and only if, 〈1,2,3〉 is in the extension of
H. This condition is satisfied, so Hbcd is true relative to this interpretation.
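The membership test behind these two rules is easy to imitate for a finite interpretation. In this sketch (our own toy encoding, with constants written as single letters), an atomic formula is true exactly when the tuple of denotations lies in the extension:

```python
# Extension of H, i.e. of "x + y = z" on the domain {1, 2, 3}:
H_ext = {(1, 1, 2), (1, 2, 3), (2, 1, 3)}
denotation = {'b': 1, 'c': 2, 'd': 3}

def atomic_true(rel_ext, constants, denotation):
    """Rule (2): Fc1...cn is true iff the tuple of denotations is in
    the extension assigned to F."""
    return tuple(denotation[k] for k in constants) in rel_ext

print(atomic_true(H_ext, 'bcd', denotation))   # True, since 1 + 2 = 3
print(atomic_true(H_ext, 'cbd', denotation))   # also True: 2 + 1 = 3
```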
We will retain the following rules from the propositional calculus:
(3) An atomic closed formula that is a sentential letter is true under an
interpretation if, and only if, the interpretation assigns the truth value
“true” to it.
(4) If A is any closed formula, ~A is true if, and only if, A is false.
(5) If A and B are any closed formulas, (A & B) is true if and only if both
A and B are true.
(6) If A and B are any closed formulas, (A ∨ B) is true if and only if
either A or B (or both) are true.
(7) If A and B are any closed formulas, (A → B) is true if and only if
either A is false or B is true.
(8) If A and B are any closed formulas, (A ↔ B) is true if and only if A and
B have the same truth value, i.e., either both are true or both are false.
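Rules (4) through (8) are just the familiar Boolean operations on truth values, as this small sketch illustrates (the function names are ours):

```python
def NOT(a): return not a                 # Rule 4
def AND(a, b): return a and b            # Rule 5
def OR(a, b): return a or b              # Rule 6
def IMPLIES(a, b): return (not a) or b   # Rule 7
def IFF(a, b): return a == b             # Rule 8

# A quantifier-free evaluation in the spirit of the text, with the three
# atomic parts assigned True, True, and False respectively:
print(IFF(True, AND(True, NOT(False))))  # True
```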
Using the preceding truth rules we can determine the truth value of
any closed formula that does not contain quantifiers. For example, suppose
we want to know the truth value of the closed formula [P ↔ (Qa & ~Fab)]
under the interpretation that assigns the domain {1,2, 3}, the denotation 1 to
not occur in the formula, e.g., c, and construct the formula [Fc ↔ (Gc ∨
Ha)]. Then (∀x)[Fx ↔ (Gx ∨ Ha)] is true under an interpretation if, and
only if, [Fc ↔ (Gc ∨ Ha)] is true under every c-variant of the interpretation.
Notice that it makes no difference which individual constant we choose,
just as long as it does not already occur in the formula.
The formula [Fc ↔ (Gc ∨ Ha)] is obtained by substituting c for every
occurrence of x in [Fx ↔ (Gx ∨ Ha)]. A complication arises from the fact
that a formula can contain more than one quantifier binding the same variable.
For example, in computing the truth value of (∀x)[Fx ↔ (Gx ∨ (∃x)Hx)], we
would construct the formula [Fc ↔ (Gc ∨ (∃x)Hx)] and ask whether it is
true under every c-variant. [Fc ↔ (Gc ∨ (∃x)Hx)] is constructed by substituting
c for every free occurrence of x in [Fx ↔ (Gx ∨ (∃x)Hx)]. The bound occurrences
are left unchanged. It is convenient to introduce some notation for talking
about substitution. Let Sb(c/x)P be the result of substituting c for every
free occurrence of x in the formula P. Then we can formulate a precise
truth rule for universal generalizations as follows:
(9) A universal generalization (∀x)P is true under an interpretation if,
and only if, when we choose some individual constant c that does not
occur in P, Sb(c/x)P is true under every c-variant of the initial
interpretation.
That Sb(c/x)P is true under every c-variant of the initial interpretation is
simply another way of saying that Sb(c/x)P would be true regardless of
what we let the denotation of c be.
Some examples follow of the use of this rule to determine whether
closed formulas of the predicate calculus are true under interpretations.
Consider first the closed formula (∀x)(Fx → Gx) and the interpretation in
which the domain is the set of integers {1,2, 3}, the extension assigned to G
is {1,2}, and the extension assigned to F is {2}. By Rule 9, (∀x)(Fx → Gx) is
true under this interpretation if, and only if, (Fc → Gc) is true under every
c-variant of this interpretation. There are three things in the domain, so
there are three c-variants—we can let the denotation of c be either 1, or 2, or
3. So let us consider each one separately and show that (Fc → Gc) is true
under every one of them. First let the denotation of c be 1. Gc is true under
that interpretation if, and only if, 1∈{1,2}. This is the case, so Gc is true. Fc
is true under this c-variant if, and only if, 1∈{2}, which is not the case. So Fc
is false. Then, by Rule 7, (Fc → Gc) is true under this c-variant. Next
consider the c-variant that assigns the denotation 2 to c. Gc is true under
that c-variant because 2∈{1,2}. So by Rule 7, (Fc → Gc) is true under the
second c-variant. The third (and last) c-variant assigns 3 to c. Both Fc and
Gc are false under this c-variant, because it is not the case that 3∈{2}, and it
is not the case that 3∈{1,2}. Again by Rule 7, (Fc → Gc) is true. Therefore,
(Fc → Gc) is true under each c-variant of our initial interpretation, and so
by Rule 9, (∀x)(Fx → Gx) is true under that interpretation.
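On a finite domain, Rule 9 comes down to checking Sb(c/x)P once for each possible denotation of c. A sketch of the example just worked (the variable names are ours):

```python
domain = {1, 2, 3}
F_ext = {2}        # extension assigned to F
G_ext = {1, 2}     # extension assigned to G

def implies(a, b):
    return (not a) or b   # Rule 7

# One c-variant per element of the domain; Rule 9 demands truth under all:
result = all(implies(c in F_ext, c in G_ext) for c in domain)
print(result)   # True: (∀x)(Fx → Gx) holds under this interpretation
```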
For a more complex example, consider how we can compute the truth
value of the formula
[(∀x)Hx ↔ (P → ~(∀x)~Hx)]
under the interpretation that assigns the domain {1,2}, and assigns “true” to
P, and the extension {1} to H. First evaluate the truth value of (∀x)Hx.
(∀x)Hx is true under our interpretation if, and only if, Hc is true under each
c-variant of the interpretation. There are two c-variants—that assigning 1
to c, and that assigning 2 to c. Hc is true under the first c-variant because
1∈{1}, but Hc is false under the second c-variant because it is not the case
that 2∈{1}. Thus Hc is not true under every c-variant, and hence (∀x)Hx is
not true under the initial interpretation. Thus the left side of the biconditional
is false. Next consider (∀x)~Hx. This is true under the interpretation if,
and only if, ~Hc is true under each c-variant of the interpretation. But by
Rule 4, ~Hc is true under a c-variant if, and only if, Hc is false under that
c-variant. We have seen that Hc is true under the first c-variant and false
under the second, so ~Hc is false under the first c-variant and true under
the second. Therefore (∀x)~Hx is not true under the initial interpretation,
so by Rule 4, ~(∀x)~Hx is true. Then by Rule 7, (P → ~(∀x)~Hx) is true
under the interpretation.
Hence we have a biconditional the left side of which is false, and the right
side of which is true, so according to Rule 8 the biconditional is false under
the initial interpretation.
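The same computation can be carried out mechanically, mirroring the steps above (a sketch; the variable names are ours):

```python
domain = {1, 2}
P = True
H_ext = {1}

forall_H = all(x in H_ext for x in domain)          # (∀x)Hx — False
forall_not_H = all(x not in H_ext for x in domain)  # (∀x)~Hx — False
right = (not P) or (not forall_not_H)               # (P → ~(∀x)~Hx) — True
print(forall_H == right)   # the biconditional: False
```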
It should be mentioned that interpretations will frequently be chosen
which assign sets of integers as domains. This is only because integers are
convenient to work with. These interpretations do not have any privileged
status among interpretations other than their convenience. We could in
principle use any set of objects for the domain of an interpretation.
Next a truth rule for existential generalizations must be constructed.
The rule here is completely analogous to the case of universal generalizations.
We want a closed formula of the form (∃x)Fx to be true if, and only if, there
is something in the domain which is an F; that is, if, and only if, there is at
least one thing in the domain such that, if we choose it as the denotation of
c, Fc will be true. So our rule is the following:
(10) An existential generalization (∃x)P is true under an interpretation if,
and only if, when we choose some individual constant c that does not
occur in P, Sb(c/x)P is true under at least one c-variant of the
interpretation.
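On a finite domain, Rule 10 is an existential sweep over the possible denotations of c, which a sketch can express with any() (the extension used here is our own illustration):

```python
domain = {1, 2, 3}
G_ext = {2, 3}

# (∃x)Gx: true iff Gc holds under at least one c-variant:
print(any(c in G_ext for c in domain))   # True
# With an empty extension, no c-variant works:
print(any(c in set() for c in domain))   # False
```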
Consider some examples of the use of Rule 10. Let us calculate the
truth value of (∃x)Bxg under the interpretation that assigns the set of all
past presidents of the United States as the domain, assigns George Washington
as the denotation of g, and assigns to B the set of all ordered pairs of past
presidents such that the first member of each ordered pair was president
before the second member. Here B is the relation “(1) was president before
(2)”. To determine the truth value of (∃x)Bxg under this interpretation we
must first choose an individual constant not occurring in Bxg. We cannot
choose g, because it occurs in Bxg, so let us choose b. Then (∃x)Bxg is true
under this interpretation if, and only if, Bbg is true under at least one
b-variant of this interpretation. But there is no denotation we can assign to
b that will make Bbg true, because no past president was president before
George Washington. So (∃x)Bxg is false under this interpretation.
Lcb is true under the second and third b-variant of the first c-variant,
so (∃y)Lcy is true under the first c-variant. Lcb is true under the third
b-variant of the second c-variant, so (∃y)Lcy is also true under the second
c-variant. But Lcb is not true under any b-variant of the third c-variant, so
(∃y)Lcy is false under the third c-variant. Therefore, it is not true that
(∃y)Lcy is true under every c-variant of the initial interpretation, and so
(∀x)(∃y)Lxy is false under the initial interpretation.
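The example's nested quantifiers become nested all() and any() in a sketch, taking the domain {1,2,3} and L as "(1) is less than (2)", as the surrounding discussion indicates:

```python
domain = {1, 2, 3}
L_ext = {(1, 2), (1, 3), (2, 3)}   # "(1) is less than (2)" on {1, 2, 3}

# (∀x)(∃y)Lxy: every c-variant must have a witnessing b-variant:
result = all(any((x, y) in L_ext for y in domain) for x in domain)
print(result)   # False: no member of the domain is larger than 3
```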
It will often be unnecessary to work through all of the steps of the
truth rules in order to determine whether a closed formula is true under a
given interpretation. If we can see what a closed formula means under an
interpretation we can often see immediately whether it is true. Recall the
above interpretation of (∃x)Bxg. Under that interpretation, (∃x)Bxg means
“There was someone who was president before Washington”. Seeing this,
we know immediately that the closed formula is false. Or consider the
interpretation of the closed formula (∀x)(∃y)Lxy in the preceding example.
Under that interpretation this closed formula means “For every integer
there is another integer such that the first is smaller than the second”, that
is, “For every integer there is a larger integer”. If the domain had been the
set of all integers this would have been true, but as the domain is just {1,2,
3} it is false—there is no integer in the domain larger than 3. Explicit appeal
to the truth rules is cumbersome and often unnecessary. However, we
must have such rules for those cases in which the meaning of a closed
formula is so complicated that we cannot tell simply by reflection whether
it is true.
Exercises
Let the universe of discourse be the set {1,2,3}, and let G mean “is greater
than one”, F mean “is less than three”, H mean “(1) is not equal to (2)”, I
mean “(1) is equal to (2)”, L mean “(1) is less than (2)”, P mean “one is not
equal to two”, and let a denote 1, b denote 2, and c denote 3. This generates
the interpretation that assigns the domain {1,2,3} to the quantifiers, the
extensions {1,2} to F, {2,3} to G, {〈1,2〉, 〈1,3〉, 〈2,1〉, 〈2,3〉, 〈3,1〉, 〈3,2〉} to H,
{〈1,1〉, 〈2,2〉, 〈3,3〉} to I, and {〈1,2〉, 〈1,3〉, 〈2,3〉} to L, and assigns “true” to P,
1 to a, 2 to b, and 3 to c. Under this interpretation, express the meaning of
each of the following closed formulas in idiomatic English and determine
whether it is true:
1. Fc
2. [(Fa & Fb) & Fc]
3. (∀x)Fx
4. (∃x)(∀y)(Fx → Fy)
5. (∃x)Gx
6. (Iab ∨ ~Iab)
7. (Iab ∨ ~Hab)
8. (∀x)(∃y)Ixy
9. (∃y)(∀x)Ixy
10. ((∀x)Ixx & (∀x)~Hxx)
11. (∀x)(∀y)(Ixy → ~Hxy)
12. (∃x)(∀y)(Lxy ∨ Ixy)
13. (∀x)(∀y)(Lxy → ~Ixy)
14. [(∀x)Gx ↔ ((Ga & Gb) & Gc)]
15. (∃y)(∀x)(Lxy ∨ Icx)
16. (P ↔ (∃x)~Ixx)
17. (∀x)(∀y)(∀z)[(Lxy & Lyz) → Lxz]
18. (∀x)~Lxx
19. (∀x)(∀y)(Lxy → ~Lyx)
4. Valid Formulas
4.1 Validity and Formal Necessity
What makes it possible to undertake a mathematically precise
investigation of formal necessity in the propositional calculus is the fact that
tautologicity and formal necessity coincide. That is, a formula is a tautology
if, and only if, it expresses a formally necessary statement form. Because
tautologicity is a mathematically precise concept, this facilitates the study of
formal necessity.
We can proceed in an analogous way in the predicate calculus. A
closed formula of the predicate calculus is said to be valid if, and only if, it is
true under every interpretation. We write ⊨A to indicate that a formula A
is valid. It can then be shown that a formula of the predicate calculus is
valid if, and only if, it expresses a formally necessary statement form.
However, the argument for this is more involved than the analogous
argument for the propositional calculus.
We have seen that it is useful to be able to symbolize statements using
a restricted universe of discourse. However, that creates difficulties for
connecting validity and formal necessity, so let us first consider what happens
when we do not use restricted universes of discourse. In other words, let
us assume that the universe of discourse is the universal set. If a closed
formula of the predicate calculus is true under every interpretation, then in
particular it will be true under every interpretation whose domain is the
universal set. Therefore, if this closed formula is what we obtain when we
symbolize a statement (without using a restricted universe of discourse),
then that statement must be true. The statement is made true by its logical
form, so it is formally necessary. We can conclude then that if a closed
formula is valid, then any statement having that form is formally necessary.
Now it must be seen whether the converse is true. Is it true that if a
statement can be symbolized in the predicate calculus, and it is formally
necessary, then the formula symbolizing it is valid? At first this may seem
dubious, because valid
formulas must be true under all interpretations, including those with small
domains like {1,2,3}. We cannot get such a domain by symbolizing a statement
while using the universal set as the universe of discourse. It is apparent
that if a formula A results from symbolizing the form of a formally necessary
statement P, then the formula A must be true under every interpretation
whose domain is the universal set. This is because every such interpretation
must correspond to a statement having the same logical form as P, and
since P is formally necessary, every statement having that logical form
must be true. But this only shows that A must be true under every
interpretation having this one fixed domain. In order to show that A is
valid we must show that it is true under every interpretation having any
nonempty domain. Surprisingly, this can be shown. This turns upon what
is known as the Löwenheim Theorem, which says:
If a formula A of the predicate calculus is true under every interpretation
having some particular infinite domain, then A is true under every
interpretation having any nonempty domain (that is, A is valid).
interpretation having this domain, (∃y)(Hcy & ~Hyy) must be true under
every c-variant of the interpretation. But there is only one c-variant, that
assigning 1 to c. So (∃y)(Hcy & ~Hyy) must be true under the c-variant of
the interpretation that assigns 1 to c. But (∃y)(Hcy & ~Hyy) is true under
that c-variant if, and only if, (Hcb & ~Hbb) is true under at least one b-variant
of that c-variant. But again, there is only one b-variant, that assigning 1 to
b. So (Hcb & ~Hbb) must be true under some interpretation that assigns 1
to both c and b. But this is impossible, because if c and b have the same
denotation, then Hcb will be true if, and only if, Hbb is true (because it is
the same ordered pair in both cases), and so (Hcb & ~Hbb) cannot be true.
This shows that there cannot be an interpretation having a one-element
domain in which (∀x)(∃y)(Hxy & ~Hyy) is true.
Let us see if we can find an interpretation having a two-element domain,
{1,2}, in which (∀x)(∃y)(Hxy & ~Hyy) is true. This closed formula will be
true under an interpretation having this domain if, and only if, (Hcb &
~Hbb) is true under some b-variant of each c-variant of the interpretation.
There are two c-variants—those assigning 1 and 2 to c—and there are two
b-variants of each c-variant—those assigning 1 and 2 to b. Is there an
extension we can assign to H that will give this result? First consider the
c-variant assigning the denotation 1 to c. Under some b-variant of this
c-variant (Hcb & ~Hbb) must be true. For (Hcb & ~Hbb) to be true we
must have Hcb true and Hbb false. To have Hcb true under some b-variant
of this c-variant, there must be some ordered pair in the extension of H whose
first element is 1. The second element must either be 1 or 2, depending
upon which denotation b has. If we allow Hcb to be true under the b-variant
that assigns 1 to b, then 〈1,1〉 will be a member of the extension of H. But
then Hbb will also be true, and (Hcb & ~Hbb) will then be false; so this
b-variant will not do the trick. Suppose then that we let Hcb be true under
the other b-variant—that assigning 2 to b. Then 〈1,2〉 must be in the extension
of H. Also, in order to have Hbb false, 〈2,2〉 must not be in the extension of
H. Thus, by requiring that 〈1,2〉 is in the extension of H, and 〈2,2〉 is not in
the extension of H, we can make (Hcb & ~Hbb) true in some b-variant of
the first c-variant. Now let us see if we can also make it true in the second
c-variant. Here we assign 2 to c. Again, we want to make (Hcb & ~Hbb)
true under some b-variant. Let us see whether we can make it true under
the b-variant that assigns 1 to b. In order to make Hcb true and Hbb false
under that b-variant of this c-variant, we must have 〈2,1〉 in the extension of
H, and we must not have 〈1,1〉 in the extension of H. Therefore, we see that
we can make (∀x)(∃y)(Hxy & ~Hyy) true under an interpretation whose
domain is {1,2} if we can assign an extension to H which contains 〈1,2〉 and
〈2,1〉, but does not contain 〈1,1〉 or 〈2,2〉. We can do this simply by assigning
the extension {〈1,2〉, 〈2,1〉}. Therefore, (∀x)(∃y)(Hxy & ~Hyy) is consistent.
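On a two-element domain there are only finitely many candidate extensions for H (sixteen subsets of the set of ordered pairs), so the search just carried out by hand can also be brute-forced. A sketch (the names are ours):

```python
from itertools import combinations, product

domain = [1, 2]
pairs = list(product(domain, repeat=2))   # all possible ordered pairs

def true_under(H):
    """(∀x)(∃y)(Hxy & ~Hyy) under the interpretation with extension H."""
    return all(any((x, y) in H and (y, y) not in H for y in domain)
               for x in domain)

witnesses = [set(s) for r in range(len(pairs) + 1)
             for s in combinations(pairs, r) if true_under(set(s))]
print({(1, 2), (2, 1)} in witnesses)   # True: the extension found above
```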
We can also attempt to solve this problem by looking for a restricted
universe of discourse and a meaning for H that will make the formula
express a true statement. If we let the universe of discourse be the set of
positive integers, and we let H mean “(1) is less than (2)”, then (∀x)(∃y)(Hxy
& ~Hyy) means “For every positive integer, there is a positive integer greater
than it but not greater than itself”. Thus (∀x)(∃y)(Hxy & ~Hyy) is true
under that interpretation whose domain is the set of positive integers, and
which assigns to H the extension {〈x, y〉 | x is less than y}. Another meaning
we might assign to H to make (∀x)(∃y)(Hxy & ~Hyy) true is “(1) is not
equal to (2)”. Then (∀x)(∃y)(Hxy & ~Hyy) means “Given any positive
integer, there is a positive integer that is unequal to it but not unequal to
itself”. Notice that if we let H mean this, and we let our universe of discourse
be {1,2}, we arrive at the same interpretation as we did in the preceding
paragraph merely by considering the truth rules. However, it may be hard
to find such intuitive interpretations for complex formulas. Using the truth
rules for guidance is often a better way to find interpretations making complex
formulas true.
It is generally much easier to work with interpretations having small
domains, so when trying to find an interpretation under which a closed
formula is false, it is generally wise to begin with the smallest possible
domain, a one-element domain. If this does not work, then try a two-element
domain, and so on. Occasionally, however, a closed formula will only be
true under an interpretation having an infinite domain. The closed formula
{(∀x)(∀y)(∀z)[(Rxy & Ryz) → Rxz] & [(∀x)~Rxx & (∀x)(∃y)Rxy]}
can be shown to be false under any interpretation having a finite domain.
But it is consistent, because it can be made true in an interpretation having
an infinite domain. If we let the domain be the set of all positive integers,
and let R be the relation “(1) is a positive integer less than (2)”, this closed
formula is true. So if one cannot find an interpretation having a small
domain under which a closed formula is true, it does not automatically
follow that the closed formula is inconsistent. You might have to try an
infinite domain in order to make the closed formula true.
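That this formula fails on small finite domains can be spot-checked by brute force (a sketch checking domains of sizes 1 through 3 only; this does not by itself prove the claim for all finite domains):

```python
from itertools import combinations, product

def holds(R, domain):
    """Transitive, irreflexive, and serial, in the sense of the formula."""
    trans = all((x, z) in R or not ((x, y) in R and (y, z) in R)
                for x in domain for y in domain for z in domain)
    irref = all((x, x) not in R for x in domain)
    serial = all(any((x, y) in R for y in domain) for x in domain)
    return trans and irref and serial

results = []
for n in range(1, 4):
    domain = list(range(n))
    pairs = list(product(domain, repeat=2))
    results.append(any(holds(set(s), domain)
                       for r in range(len(pairs) + 1)
                       for s in combinations(pairs, r)))
print(results)   # [False, False, False]: no model on domains of size 1-3
```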
We have seen that we can show a closed formula to be consistent or
invalid by finding a single interpretation under which it is true, or false,
respectively. But it is more difficult to show that a closed formula is
inconsistent or valid. In the propositional calculus, truth tables provide a
mechanical procedure for determining whether a formula is valid (i.e., a
tautology). Truth tables can get long, but they are still finite structures that
in principle survey all of the possible assignments of truth values to the
atomic parts of the formula. There is no similar mechanical procedure that
can be used for testing validity in the predicate calculus. There are always
infinitely many ways of interpreting the quantifiers, relation symbols, and
individual constants in a formula of the predicate calculus. There is no way
to survey infinitely many interpretations, so there can be nothing like truth
tables for the predicate calculus. Instead, we must give some sort of general
argument to show that the closed formula is either true under all
interpretations (valid), or else false under all interpretations (inconsistent),
depending upon what we are trying to show. Let us look at some examples
of this.
Suppose first that we want to establish the validity of (∀x)(Px → Px).
This closed formula is valid if, and only if, it is true under every interpretation.
Exercises
Prove the following metatheorems. The proofs are similar to the proofs
given in the propositional calculus.
1. Show that if a closed formula is implied by a valid closed formula,
then it is valid.
2. Show that implication is strongly transitive.
3. Show that implication is adjunctive.
4. Using the result of 2, show that if A, B, and C are closed formulas,
and A ⊨ B, B ⊨ C, and C ⊨ A, then A, B, and C are all equivalent.
6. Universal Generalization
A principle of fundamental importance in the predicate calculus is the
principle of universal generalization:
Metatheorem: If A is any formula of the predicate calculus that contains
free occurrences of some variable x but not of any other variable, then
given any individual constant c not occurring in A, if Sb(c/x)A is
valid, then (∀x)A is valid.
For example, consider the formula (Fx ∨ ~Fx). The closed formula (Fc ∨
~Fc) is valid, because it is a tautology. Therefore, by the above principle,
(∀x)(Fx ∨ ~Fx) is valid. This principle is proven as follows. Suppose
Sb(c/x)A is valid. This means that Sb(c/x)A is true under every
interpretation. In particular, given any one interpretation, Sb(c/x)A is not
only true under that interpretation, but also under every other interpretation
exactly like that interpretation except that it assigns something different as
the denotation of c. In other words, given any interpretation, Sb(c/x)A is
true under every c-variant of that interpretation. Therefore, given any
interpretation, (∀x)A is true under that interpretation. Thus, (∀x)A is
valid. ■
Universal generalization mirrors a familiar pattern of informal mathematical
reasoning. Suppose we want to show that something is true of all objects in
a certain domain, say the integers. In order to prove
that it is true of all integers, we might reason as follows. First we pick some
arbitrary integer, and show that it is true of that integer. Then we observe
that our proof that it is true of that integer does not assume anything about
that integer which is not true of all integers, and so the same proof will
work for any other integer as well. Thus we conclude that it is true of every
integer. Observing that the proof does not assume anything about the
integer we chose that is not true of all integers is the same as observing that
the premises of our proof do not refer to that integer—that they do not
contain c. So we are just using universal generalization.
1. Rules of Inference
Derivations in the predicate calculus have essentially the same structure
as derivations in the propositional calculus. In particular, we continue to
employ the inference rules P (premise introduction), I (implication), C
(conditionalization), DN (double negation), and R (reductio ad absurdum). We
have added two implications and six equivalences to the list of implications
and equivalences used by rule I, but the rule is conceptually unchanged. In
addition, we will add one new inference rule. This will be based upon the
principle of universal generalization discussed in chapter six. That rule
was formulated as follows:
Suppose A is any formula of the predicate calculus that contains free
occurrences of some variable x but not of any other variable, and suppose
that c is some individual constant that does not occur in A. Then if
B1,…,Bn are closed formulas none of which contain c, and B1,…,Bn ⊨
Sb(c/x)A, it is also true that B1,…,Bn ⊨ (∀x)A.
Recall that lines of a derivation express implications. They tell us that the
premises of the line imply the conclusion drawn on that line (or if a line has
no premises, they tell us that the conclusion is valid). Accordingly, we can
employ universal generalization to reason as follows:
RULE UG: UNIVERSAL GENERALIZATION: Suppose A is a formula of the
predicate calculus, x is a variable, and A does not contain free occurrences
of any variable other than x, and suppose c is some individual constant
that does not occur in A. Then if Sb(c/x)A appears on some line of a
derivation, and it either has no premises or none of its premises contain
c, we can write (∀x)A on any later line of the derivation, taking for
premise numbers all the premise numbers of Sb(c/x)A.
We will later add an additional rule of inference to make it easier to construct
derivations in the predicate calculus, but for now we will take derivations
to be defined by the rules P, I, C, DN, R, and UG. That is, a derivation in
the predicate calculus is any sequence of lines that can be constructed in
accordance with these six rules of inference.
Now let us look at some sample derivations without, for the moment,
worrying about the strategies involved in constructing them. Suppose we
want to construct a derivation of (∀x)[Fx → (Fx ∨ Gx)]. We might proceed
as follows:
DERIVATIONS IN THE PREDICATE CALCULUS 144
(1) 1. Fa P
(1) 2. (Fa ∨ Ga) (I3), 1
3. [Fa → (Fa ∨ Ga)] C, 1, 2
4. (∀x)[Fx → (Fx ∨ Gx)] UG, 3
To take another example, suppose we want to derive (∀x)(∃y)Fxy from
(∃y)(∀x)Fxy. We might proceed as follows:
(1) 1. (∃y)(∀x)Fxy P
(2) 2. (∀x)Fxa P
(2) 3. Fba (I16), 2
(2) 4. (∃y)Fby (I17), 3
(2) 5. (∀x)(∃y)Fxy UG, 4
6. [(∀x)Fxa → (∀x)(∃y)Fxy] C, 2, 5
7. (∀y)[(∀x)Fxy → (∀x)(∃y)Fxy] UG, 6
8. [(∃y)(∀x)Fxy → (∀x)(∃y)Fxy] (E24), 7
(1) 9. (∀x)(∃y)Fxy (I9), 1, 8
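The derivation is purely syntactic, but its conclusion can be spot-checked semantically: over a two-element domain, every extension of F that makes (∃y)(∀x)Fxy true also makes (∀x)(∃y)Fxy true. A sketch (a finite check, not a proof of the implication):

```python
from itertools import combinations, product

domain = [1, 2]
pairs = list(product(domain, repeat=2))

def exists_forall(F):   # (∃y)(∀x)Fxy
    return any(all((x, y) in F for x in domain) for y in domain)

def forall_exists(F):   # (∀x)(∃y)Fxy
    return all(any((x, y) in F for y in domain) for x in domain)

ok = all(forall_exists(set(s)) or not exists_forall(set(s))
         for r in range(len(pairs) + 1)
         for s in combinations(pairs, r))
print(ok)   # True: the premise implies the conclusion on this domain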
Suppose next we want to derive (∃y)Gy from (∀x)(Fx → Gx) and
(∃z)Fz. We might proceed as follows:
Exercises
In each of the following purported derivations, check the lines for errors,
and indicate the numbers of the lines on which errors occur. An error is
said to occur in a line if the line cannot be introduced in accordance with
the rules of inference. This means that in checking a line for errors, you
only look at that line to see whether it has the necessary relation to the
previous lines, and treat the previous lines as if they were all correct. If a
line is incorrect, try to find a way of correcting it (perhaps by adding additional
lines).
2. Strategies
Just as in the propositional calculus, there are strategies that can guide
one in the construction of derivations. The strategies for conjunctions,
disjunctions, conditionals, and biconditionals are the same as before. The
strategy for negations must be augmented by the observation that, by E21,
~(∀x)P is equivalent to (∃x)~P, and ~(∃x)P is equivalent to (∀x)~P. To
these we add strategies for universal generalizations and existential
generalizations.
Forward reasoning
Typically, when reasoning forward from a universal generalization
we use I16 to obtain one or more instances of the generalization, and then
continue our reasoning from the instances. This is called universal instantiation.
For example, suppose we want to derive (Fa & Fb) from (∀x)Fx.
We might proceed as follows:
(1) 1. (∀x)Fx P
(1) 2. Fa (I16), 1
(1) 3. Fb (I16), 1
(1) 4. (Fa & Fb) (I14), 2, 3
Backward reasoning
Backward reasoning consists of adopting interest in other conclusions
from which we could infer the formula we are trying to derive. If our
objective is to derive a universal generalization, we will usually do this by
employing UG. For instance, suppose we want to derive (∀x)(~Gx → ~Fx)
from (∀x)(Fx → Gx). Then we construct the following annotated derivation:
Exercises
2. (∀x)Fx
(∀x)(Fx → Gx)
(∀x)Gx
Backward reasoning
To derive an existential generalization from something else, we typically
use I17, deriving the existential generalization from an instance of it. For
example, if we want to derive (∃x)Fx from (∀x)Fx, we might proceed as
follows:
(1) 1. (∀x)Fx P
(1) 1. (∃x)Fx
(2) 2. Fa for 1 by (I17)
(1) 2. Fa (I16), 1 this discharges interest 2
(1) 3. (∃x)Fx (I17), 2 this discharges interest 1
Forward reasoning
Our most complex strategy concerns how to reason forward from an
existential generalization. Before considering the general strategy, let us
look at an example of it. Suppose we want to derive (∃x)Gx from
(∃x)(Fx & Gx). We might proceed as follows:
Then we used UG to obtain (∀x)[(Fx & Gx) → (∃x)Gx]. Because x does not
occur free in (∃x)Gx, we were able to use E24 to convert the universal
generalization into [(∃x)(Fx & Gx) → (∃x)Gx], and finally, by modus ponens,
we inferred our desired conclusion from the original premises.
Generalizing this strategy, suppose we want to derive some closed
formula B from the premise (∃x)A together with some other premises (whose
numbers are indicated in the following derivation by the dots in the left-hand
column). We choose an individual constant c that does not occur in B or A
or any of the premises, and proceed as follows:
(1) 1. ... P
(2) 2. ... P premises other
. than (∃x)A
.
.
(n) n. (∃x)A P
(1,...,n) 2. B for 1 by existential instantiation
(n + 1) n + 1. Sb(c/x)A P
.
.
. derivation of
B from Sb(c/x)A
(n + 1, ...) k. B this discharges interest 2
... k + 1. [Sb(c/x)A → B] C, (n + 1), k
... k + 2. (∀x)(A → B) UG, k + 1
... k + 3. [(∃x)A → B] (E24), k + 2
(n, ...) k + 4. B (I9), n, k + 3
Notice that the second and third derivations on page 144 both use existential
instantiation.
The intuitive rationale for existential instantiation is the following.
Suppose we know that (∃x)Fx is true, i.e., that something is an F. From this
we want to derive a conclusion Q. We might begin our argument by
saying, “We know that something is an F; call it ‘c’.” This is the same as introducing the new premise Fc and continuing the argument from it.
DERIVATIONS IN THE PREDICATE CALCULUS 149
Exercises
1. Fa
(∃x)(Fx ∨ Gx)
2. (∃x)Fx
(∃x)(Fx ∨ Gx)
3. (∃x)(Fx ∨ Gx)
~(∃x)Fx
(∃x)Gx
(1) 1. (∀x)(∃y)(∀z)Fxyz P
(1) 1. (∃x)(∀z)(∃y)Fxyz
(1) 2. (∃y)(∀z)Fayz (I16), 1
of B except for j, then we can write B on any later line of the derivation,
replacing the premise number j by the premise numbers of line i.
The condition that c not occur in B or any of its premises other than Sb(c/x)A
is required for us to be able to use UG in inferring (∀x)(A → B) from
[Sb(c/x)A → B].
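The freshness condition on c is not a mere formality. If c could be a constant already in use, existential instantiation would license invalid inferences; a two-element interpretation makes the danger vivid (the constant a and the extension of F here are invented for the illustration):

```python
# A model showing why (exists x)Fx does not license instantiation to a
# constant already in use: here (exists x)Fx is true, but Fa is false
# for the element that a denotes.
domain = [0, 1]
F = {0: False, 1: True}
a = 0  # a previously used constant, denoting element 0

assert any(F[d] for d in domain)  # (exists x)Fx holds in the model
assert not F[a]                   # but Fa fails
```

Because a fresh constant carries no prior commitments, anything derived from its instance that does not mention it holds no matter which object witnesses the existential claim.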
Using EI, we can shorten the preceding derivation as follows:
Exercises
1. (∀x)(Px → ~Px)
~Pa
4. [(∃x)Fx → (∀y)Gy]
(∀x)(∀y)(Fx → Gy)
5. (∀x)(Fx → Gx)
~(∀x)Gx
(∃y)~Fy
7. (∀x)(∃y)Rxy
(∀x)(∀y)(Rxy → Ryx)
(∀x)(∀y)(∀z)[(Rxy & Ryz) → Rxz]
(∀x)Rxx
D. Try your hand at the following derivations. These are famously difficult
derivations taken from the literature on automated theorem proving.1 Some
of these are hard enough that even professional logicians and mathematicians
may require several hours to construct a derivation.
1. Pelletier’s problem 26
[(∃x)Px ↔ (∃y)Qy]
(∀x)(∀y)[(Px & Qy) → (Rx ↔ Sy)]
[(∀x)(Px → Rx) ↔ (∀y)(Qy → Sy)]
1. Most of these are from the list compiled by Jeff Pelletier, “Seventy-five problems for testing automatic theorem provers”, Journal of Automated Reasoning 2 (1986), 191-216.
2. Pelletier’s problem 29
3. Pelletier’s problem 33
– no premises –
{(∀x)[(Pa & (Px → Pb)) → Pc] ↔ (∀x)[(~Pa ∨ (Px ∨ Pc)) & (~Pa ∨ (~Pb ∨ Pc))]}
4. Pelletier’s problem 34
– no premises –
{(∃x)(∀y)[(Px ↔ Py) ↔ ((∃z)Qz ↔ (∀w)Qw)] ↔ [(∃u)(∀v)(Qu ↔ Qv) ↔ ((∃x)Px ↔ (∀y)Py)]}
5. Pelletier’s problem 37
6. Pelletier’s problem 42
– no premises –
~(∃y)(∀x)[Fxy ↔ ~(∃z)(Fxz & Fzx)]
7. Pelletier’s problem 43
Define set equality (Q) as having exactly the same members. Prove
set equality is symmetric.
8. Pelletier’s problem 45
(∀x){(Fx & (∀y)[(Gy & Hxy) → Jxy]) → (∀y)[(Gy & Hxy) → Ky]}
~(∃y)(Ly & Ky)
(∃x){[Fx & (∀y)(Hxy → Ly)] & (∀y)[(Gy & Hxy) → Jxy]}
(∃x)[Fx & ~(∃y)(Gy & Hxy)]
9. Pelletier’s problem 46
Wolves, foxes, birds, caterpillars, and snails are animals, and there
are some of each of them. Also, there are some grains, and grains
are plants. Every animal either likes to eat all plants or all animals
much smaller than itself that like to eat some plants. Caterpillars
and snails are much smaller than birds, which are much smaller
than foxes, which in turn are much smaller than wolves. Wolves do
not like to eat foxes or grains, while birds like to eat caterpillars but
not snails. Caterpillars and snails like to eat some plants. Therefore,
there is an animal that likes to eat a grain-eating animal.
12. This is a problem in group theory taken from Chang and Lee, Symbolic
Logic and Mechanical Theorem Proving, Academic Press, 1973. Pxyz symbolizes
“x·y = z”.
(∀x)Pxex
(∀x)Pexx
(∀x)(∀y)(∀z)(∀u)(∀v)(∀w){[Pxyu & (Pyzv & Puzw)] → Pxvw}
(∀x)(∀y)(∀z)(∀u)(∀v)(∀w){[Pxyu & (Pyzv & Pxvw)] → Puzw}
(∀x)Pxxe
Pabc
Pbac
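The premises say that e is a two-sided identity, that the operation is associative (expressed through the auxiliary product lines), and that every element is its own inverse; the conclusion, Pbac from Pabc, says that any such group is commutative. The Klein four-group is a concrete model of the premises, sketched below with pairs of bits under componentwise XOR (this encoding is one convenient choice for illustration, not part of the problem):

```python
from itertools import product

# Klein four-group: bit pairs under componentwise XOR, identity (0, 0).
# Every element satisfies x*x = e, and the group is abelian, which is
# what the derivation (Pbac from Pabc) establishes in general.
elements = list(product([0, 1], repeat=2))
e = (0, 0)

def op(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])

assert all(op(x, e) == x and op(e, x) == x for x in elements)      # Pxex, Pexx
assert all(op(x, x) == e for x in elements)                        # (forall x)Pxxe
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x in elements for y in elements for z in elements)  # associativity
assert all(op(x, y) == op(y, x)
           for x in elements for y in elements)                    # commutativity
```

Checking one model does not prove the general theorem, of course; the derivation is what shows that commutativity follows from the premises in every group satisfying them.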