Contents

1 Affine transformation
  1.1 Mathematical definition
    1.1.1 Alternative definition
  1.2 Representation
    1.2.1 Augmented matrix
  1.3 Properties
  1.4 Affine transformation of the plane
  1.5 Examples of affine transformations
    1.5.1 Affine transformations over the real numbers
    1.5.2 Affine transformation over a finite field
    1.5.3 Affine transformation in plane geometry
  1.6 See also
  1.7 Notes
  1.8 References
  1.9 External links
2 Arithmetical hierarchy
  2.1 The arithmetical hierarchy of formulas
  2.2 The arithmetical hierarchy of sets of natural numbers
  2.3 Relativized arithmetical hierarchies
  2.4 Arithmetic reducibility and degrees
  2.5 The arithmetical hierarchy of subsets of Cantor and Baire space
  2.6 Extensions and variations
  2.7 Meaning of the notation
  2.8 Examples
  2.9 Properties
  2.10 Relation to Turing machines
  2.11 See also
  2.12 References
3 Arity
  3.1 Examples
    3.1.1 Nullary
    3.1.2 Unary
    3.1.3 Binary
    3.1.4 Ternary
    3.1.5 n-ary
    3.1.6 Variable arity
  3.2 Other names
  3.3 See also
  3.4 References
  3.5 External links
4 Boolean domain
  4.1 Generalizations
  4.2 See also
  4.3 Notes
5 Boolean expression
  5.1 Boolean operators
  5.2 Examples
  5.3 See also
  5.4 References
  5.5 External links
6 Boolean function
  6.1 Boolean functions in applications
  6.2 See also
  6.3 References
9 Clone (algebra)
  9.1 Abstract clones
  9.2 References
10 Computability theory
  10.1 Computable and uncomputable sets
  10.2 Turing computability
  10.3 Areas of research
    10.3.1 Relative computability and the Turing degrees
    10.3.2 Other reducibilities
    10.3.3 Rice’s theorem and the arithmetical hierarchy
    10.3.4 Reverse mathematics
    10.3.5 Numberings
    10.3.6 The priority method
    10.3.7 The lattice of recursively enumerable sets
    10.3.8 Automorphism problems
    10.3.9 Kolmogorov complexity
    10.3.10 Frequency computation
    10.3.11 Inductive inference
    10.3.12 Generalizations of Turing computability
    10.3.13 Continuous computability theory
11 De Morgan’s laws
  11.1 Formal notation
    11.1.1 Substitution form
    11.1.2 Set theory and Boolean algebra
    11.1.3 Engineering
    11.1.4 Text searching
  11.2 History
  11.3 Informal proof
    11.3.1 Negation of a disjunction
    11.3.2 Negation of a conjunction
  11.4 Formal proof
  11.5 Extensions
  11.6 See also
  11.7 References
  11.8 External links
12 Formal language
  12.1 History
  12.2 Words over an alphabet
  12.3 Definition
  12.4 Examples
    12.4.1 Constructions
  12.5 Language-specification formalisms
  12.6 Operations on languages
  12.7 Applications
    12.7.1 Programming languages
    12.7.2 Formal theories, systems and proofs
  12.8 See also
  12.9 References
    12.9.1 Citation footnotes
    12.9.2 General references
  12.10 External links
13 Functional completeness
14 Halting problem
  14.1 Background
  14.2 Importance and consequences
  14.3 Representation as a set
  14.4 Sketch of proof
  14.5 Proof as a corollary of the uncomputability of Kolmogorov complexity
  14.6 Common pitfalls
  14.7 Formalization
  14.8 Relationship with Gödel’s incompleteness theorems
  14.9 Recognizing partial solutions
  14.10 History
  14.11 Avoiding the halting problem
  14.12 See also
  14.13 Notes
  14.14 References
  14.15 External links
24 Negation
  24.1 Definition
  24.2 Notation
  24.3 Properties
    24.3.1 Double negation
    24.3.2 Distributivity
    24.3.3 Linearity
    24.3.4 Self dual
  24.4 Rules of inference
  24.5 Programming
  24.6 Kripke semantics
  24.7 See also
  24.8 References
  24.9 Further reading
  24.10 External links
37 Subset
  37.1 Definitions
  37.2 ⊂ and ⊃ symbols
  37.3 Examples
  37.4 Other properties of inclusion
  37.5 See also
  37.6 References
  37.7 External links
Affine transformation
In geometry, an affine transformation, affine map[1] or affinity (from the Latin affinis, “connected with”) is a
function between affine spaces which preserves points, straight lines and planes. Sets of parallel lines remain
parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or
distances between points, though it does preserve ratios of distances between points lying on a straight line.
Examples of affine transformations include translation, scaling, homothety, similarity transformation, reflection,
rotation, shear mapping, and compositions of them in any combination and sequence.
If X and Y are affine spaces, then every affine transformation f : X → Y is of the form x ↦ Mx + b, where
M is a linear transformation on X and b is a vector in Y. Unlike a purely linear transformation, an affine map
need not preserve the zero point in a linear space. Thus, every linear transformation is affine, but not every affine
transformation is linear.
For many purposes an affine space can be thought of as Euclidean space, though the concept of affine space is far more
general (i.e., all Euclidean spaces are affine, but there are affine spaces that are non-Euclidean). In affine coordinates,
which include Cartesian coordinates in Euclidean spaces, each output coordinate of an affine map is a linear function
(in the sense of calculus) of all input coordinates. Another way to deal with affine transformations systematically is to
select a point as the origin; then, any affine transformation is equivalent to a linear transformation (of position vectors)
followed by a translation.
An affine map f : A → B between two affine spaces determines a linear map φ on difference vectors: for every pair of points P, Q,

$$\overrightarrow{f(P)\,f(Q)} = \varphi(\overrightarrow{PQ})$$

or

$$f(Q) - f(P) = \varphi(Q - P).$$

If an origin O ∈ A is chosen, and B denotes its image f(O) ∈ B, then for any vector $\vec{x}$:

$$f : (O + \vec{x}) \mapsto (B + \varphi(\vec{x})).$$

If an origin O′ ∈ B is also chosen, this can be decomposed as an affine transformation g : A → B that sends O ↦ O′, namely

$$g : (O + \vec{x}) \mapsto (O' + \varphi(\vec{x})),$$

followed by the translation by the vector $\vec{b} = \overrightarrow{O'B}$.

The conclusion is that, intuitively, f consists of a translation and a linear map.

An image of a fern-like fractal exhibits affine self-similarity: each of the leaves of the fern is related to each other leaf by an affine transformation. For instance, the red leaf can be transformed into both the small dark blue leaf and the large light blue leaf by a combination of reflection, rotation, scaling, and translation.
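This decomposition can be checked numerically. The sketch below (the 2 × 2 matrix M, the translation b, and the points are arbitrary illustrative values) verifies that the difference of two image points depends only on the linear part φ, not on the translation:

```python
# Check that an affine map f(x) = M x + b acts linearly on difference
# vectors: f(Q) - f(P) = phi(Q - P), where phi is the linear part M.
# All numeric values here are illustrative.

def mat_vec(M, v):
    """Multiply a 2x2 matrix (list of rows) by a 2-vector."""
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def affine(M, b, p):
    """f(p) = M p + b."""
    mp = mat_vec(M, p)
    return (mp[0] + b[0], mp[1] + b[1])

M = [[2.0, 1.0], [0.0, 3.0]]   # linear part phi
b = (5.0, -4.0)                # translation part

P, Q = (1.0, 2.0), (7.0, -3.0)
fP, fQ = affine(M, b, P), affine(M, b, Q)

# f(Q) - f(P) equals phi(Q - P): the translation b cancels out.
diff_images = (fQ[0] - fP[0], fQ[1] - fP[1])
phi_diff = mat_vec(M, (Q[0] - P[0], Q[1] - P[1]))
print(diff_images == phi_diff)  # True
```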
Equivalently, f preserves affine combinations: whenever the weights $\lambda_i$ ($i \in I$) satisfy

$$\sum_{i \in I} \lambda_i = 1,$$

we have[2]

$$f\!\left(\sum_{i \in I} \lambda_i a_i\right) = \sum_{i \in I} \lambda_i f(a_i).$$
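A minimal numerical sketch of this property, using an arbitrary illustrative affine map and weights summing to 1:

```python
# Sketch: an affine map preserves affine combinations (weights summing
# to 1).  The map, points, and weights below are illustrative.

def affine(p):
    # f(x, y) = (2x + y + 5, 3y - 4)
    x, y = p
    return (2 * x + y + 5, 3 * y - 4)

points = [(0.0, 0.0), (4.0, 0.0), (0.0, 8.0)]
weights = [0.5, 0.25, 0.25]          # sums to 1

def combine(ws, ps):
    """Weighted sum of 2D points."""
    return (sum(w * p[0] for w, p in zip(ws, ps)),
            sum(w * p[1] for w, p in zip(ws, ps)))

lhs = affine(combine(weights, points))               # f(sum w_i a_i)
rhs = combine(weights, [affine(p) for p in points])  # sum w_i f(a_i)
print(lhs == rhs)  # True
```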
1.2 Representation
As shown above, an affine map is the composition of two functions: a translation and a linear map. Ordinary vector
algebra uses matrix multiplication to represent linear maps, and vector addition to represent translations. Formally,
in the finite-dimensional case, if the linear map is represented as a multiplication by a matrix A and the translation as
the addition of a vector ⃗b , an affine map f acting on a vector ⃗x can be represented as
$$\begin{bmatrix} \vec{y} \\ 1 \end{bmatrix} = \begin{bmatrix} A & \vec{b} \\ 0 \cdots 0 & 1 \end{bmatrix} \begin{bmatrix} \vec{x} \\ 1 \end{bmatrix}$$

which is equivalent to

$$\vec{y} = A\vec{x} + \vec{b}.$$
The above-mentioned augmented matrix is called an affine transformation matrix, or projective transformation matrix (as
it can also be used to perform projective transformations).
This representation exhibits the set of all invertible affine transformations as the semidirect product of $K^n$ and GL(n,
K). This is a group under the operation of composition of functions, called the affine group.
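The semidirect-product structure corresponds to a concrete composition rule: composing the affine map (A₁, b₁) after (A₂, b₂) gives (A₁A₂, A₁b₂ + b₁). A sketch with arbitrary illustrative 2 × 2 matrices:

```python
# Composition of affine maps represented as pairs (A, b) follows the
# semidirect-product rule: (A1, b1) after (A2, b2) = (A1 A2, A1 b2 + b1).
# The matrices, vectors, and test point are illustrative.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, v):
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

def apply(A, b, x):
    Ax = mat_vec(A, x)
    return (Ax[0] + b[0], Ax[1] + b[1])

A1, b1 = [[1, 2], [3, 4]], (1, 0)
A2, b2 = [[0, 1], [1, 1]], (2, 5)

# Compose by the semidirect-product rule ...
A12 = mat_mul(A1, A2)
t = mat_vec(A1, b2)
b12 = (t[0] + b1[0], t[1] + b1[1])

# ... and check against composing the functions directly.
x = (7, -2)
print(apply(A12, b12, x) == apply(A1, b1, apply(A2, b2, x)))  # True
```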
Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a
translation, in which the origin must necessarily be mapped to some other point. By appending the additional coor-
dinate “1” to every vector, one essentially considers the space to be mapped as a subset of a space with an additional
dimension. In that space, the original space occupies the subset in which the additional coordinate is 1. Thus the
origin of the original space can be found at (0, 0, …, 0, 1). A translation within the original space by means of a
linear transformation of the higher-dimensional space is then possible (specifically, a shear transformation). The
coordinates in the higher-dimensional space are an example of homogeneous coordinates. If the original space is
Euclidean, the higher-dimensional space is a real projective space.

Affine transformations on the 2D plane can be performed in three dimensions: translation is done by shearing along the z axis, and rotation is performed around the z axis.
The advantage of using homogeneous coordinates is that one can combine any number of affine transformations into
one by multiplying the respective matrices. This property is used extensively in computer graphics, computer vision
and robotics.
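As a sketch of this technique (the matrices and the sample point are illustrative), two 2D transformations are combined into a single homogeneous 3 × 3 matrix by one multiplication:

```python
import math

# Homogeneous-coordinate sketch: 2D affine transforms as 3x3 matrices,
# so a chain of transforms collapses into one matrix product.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transform(M, point):
    """Apply a 3x3 matrix to (x, y) via the homogeneous vector (x, y, 1)."""
    x, y = point
    v = (x, y, 1.0)
    out = [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (out[0], out[1])   # the last coordinate stays 1 for affine maps

def translation(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# One matrix that rotates 90 degrees about the origin, then translates.
M = mat_mul(translation(10.0, 0.0), rotation(math.pi / 2))
x, y = transform(M, (1.0, 0.0))
print(round(x, 9), round(y, 9))  # 10.0 1.0
```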
If the vectors $\vec{x}_1, \ldots, \vec{x}_{n+1}$ are a basis of the domain’s projective vector space and if $\vec{y}_1, \ldots, \vec{y}_{n+1}$ are the corresponding vectors in the codomain vector space, then the augmented matrix M that achieves this affine transformation

$$\begin{bmatrix} \vec{y} \\ 1 \end{bmatrix} = M \begin{bmatrix} \vec{x} \\ 1 \end{bmatrix}$$

is

$$M = \begin{bmatrix} \vec{y}_1 & \cdots & \vec{y}_{n+1} \\ 1 & \cdots & 1 \end{bmatrix} \begin{bmatrix} \vec{x}_1 & \cdots & \vec{x}_{n+1} \\ 1 & \cdots & 1 \end{bmatrix}^{-1}$$
This formulation works irrespective of whether any of the domain, codomain and image vector spaces have the same
number of dimensions.
For example, the affine transformation of a vector plane is uniquely determined from the knowledge of where the
three vertices of a non-degenerate triangle are mapped to.
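This determination can be carried out numerically: the sketch below (the triangle vertices and their images are illustrative) recovers the augmented matrix M from three point correspondences using the formula above:

```python
# Recover the augmented matrix M of a planar affine map from where it
# sends the vertices of a non-degenerate triangle:
#   M = [y1 y2 y3; 1 1 1] . [x1 x2 x3; 1 1 1]^{-1}
# The point values below are illustrative.

def inverse(A):
    """Gauss-Jordan inverse of a small square matrix (list of rows)."""
    n = len(A)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Triangle vertices and their images under some unknown affine map.
xs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
ys = [(3.0, 4.0), (5.0, 4.0), (3.0, 7.0)]

X = [[p[0] for p in xs], [p[1] for p in xs], [1.0, 1.0, 1.0]]
Y = [[p[0] for p in ys], [p[1] for p in ys], [1.0, 1.0, 1.0]]
M = mat_mul(Y, inverse(X))

# For these points the recovered map is (x, y) -> (2x + 3, 3y + 4).
print([[round(v, 9) for v in row] for row in M])
```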
1.3 Properties
An affine transformation preserves:
1. The collinearity relation between points: points which lie on the same line (called collinear points) continue to be collinear after the transformation.

2. Ratios of vectors along a line: for distinct collinear points $p_1$, $p_2$, $p_3$, the ratio of $\overrightarrow{p_1 p_2}$ to $\overrightarrow{p_2 p_3}$ is the same as that of $\overrightarrow{f(p_1)f(p_2)}$ to $\overrightarrow{f(p_2)f(p_3)}$.

3. More generally, barycenters of weighted collections of points.
An affine transformation is invertible if and only if A is invertible. In the matrix representation, the inverse is

$$\begin{bmatrix} A^{-1} & -A^{-1}\vec{b} \\ 0 \cdots 0 & 1 \end{bmatrix}$$
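A quick numerical check of this inverse formula (the 2 × 2 matrix and point below are illustrative): applying the map and then the map built from $A^{-1}$ and $-A^{-1}\vec{b}$ returns the original point.

```python
# Check of the affine inverse (A^{-1}, -A^{-1} b) on a 2x2 example.
# Matrix and vector values are illustrative.

def apply(A, b, p):
    return (A[0][0] * p[0] + A[0][1] * p[1] + b[0],
            A[1][0] * p[0] + A[1][1] * p[1] + b[1])

A = [[2.0, 1.0], [1.0, 1.0]]          # det = 1, so A is invertible
b = (3.0, -2.0)

# Inverse of a 2x2 matrix via the adjugate formula.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det, A[0][0] / det]]
b_inv = (-(A_inv[0][0] * b[0] + A_inv[0][1] * b[1]),
         -(A_inv[1][0] * b[0] + A_inv[1][1] * b[1]))  # -A^{-1} b

p = (4.0, 7.0)
q = apply(A, b, p)
print(apply(A_inv, b_inv, q))  # (4.0, 7.0)
```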
The invertible affine transformations (of an affine space onto itself) form the affine group, which has the general linear
group of degree n as a subgroup and is itself a subgroup of the general linear group of degree n + 1.
The similarity transformations form the subgroup where A is a scalar times an orthogonal matrix. For example, if
the affine transformation acts on the plane and if the determinant of A is 1 or −1, then the transformation is an
equi-areal mapping. Such transformations form a subgroup called the equi-affine group.[3] A transformation that is both
equi-affine and a similarity is an isometry of the plane taken with Euclidean distance.
Each of these groups has a subgroup of transformations which preserve orientation: those where the determinant of
A is positive. In the last case, this is in 3D the group of rigid body motions (proper rotations and pure translations).
If there is a fixed point, we can take that as the origin, and the affine transformation reduces to a linear transformation.
This may make it easier to classify and understand the transformation. For example, describing a transformation
as a rotation by a certain angle with respect to a certain axis may give a clearer idea of the overall behavior of the
transformation than describing it as a combination of a translation and a rotation. However, this depends on application
and context.
Affine transformations in two real dimensions include:

• pure translations,

• scaling in a given direction, with respect to a line in another direction (not necessarily perpendicular), combined with translation that is not purely in the direction of scaling; taking “scaling” in a generalized sense, it includes the cases where the scale factor is zero (projection) or negative; the latter includes reflection, and combined with translation it includes glide reflection,

• rotation combined with a homothety and a translation,

• shear mapping combined with a homothety and a translation, or

• squeeze mapping combined with a homothety and a translation.
A central dilation: the triangles A1B1Z, A1C1Z, and B1C1Z get mapped to A2B2Z, A2C2Z, and B2C2Z, respectively.
To visualise the general affine transformation of the Euclidean plane, take labelled parallelograms ABCD and A′B′C′D′.
Whatever the choice of points, there is an affine transformation T of the plane taking A to A′, and each vertex similarly.
Supposing we exclude the degenerate case where ABCD has zero area, there is a unique such affine transformation T.
Drawing out a whole grid of parallelograms based on ABCD, the image T(P) of any point P is determined
by noting that T(A) = A′, T applied to the line segment AB is A′B′, T applied to the line segment AC is A′C′, and T
respects scalar multiples of vectors based at A. [If A, E, F are collinear then the ratio length(AF)/length(AE) is equal
to length(A′F′)/length(A′E′).] Geometrically, T transforms the grid based on ABCD to the grid based on A′B′C′D′.
Affine transformations do not respect lengths or angles; they multiply areas by a constant factor.
A given T may either be direct (respect orientation) or indirect (reverse orientation), and this may be determined by
its effect on signed areas (as defined, for example, by the cross product of vectors).
Over the two-element field, an affine transformation of a byte has the form { a′ } = M { a } ⊕ { v }, where M is a fixed 8 × 8 matrix of bits and { v } is a fixed vector of bits (this is the form of affine transformation used, for instance, in the AES S-box).
For instance, the affine transformation of the element {a} = y7 + y6 + y3 + y = {11001010} in big-endian binary
notation = {CA} in big-endian hexadecimal notation is calculated as follows:
a′0 = a0 ⊕ a4 ⊕ a5 ⊕ a6 ⊕ a7 ⊕ 1 = 0 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 1 = 1
a′1 = a0 ⊕ a1 ⊕ a5 ⊕ a6 ⊕ a7 ⊕ 1 = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 1 = 0
a′2 = a0 ⊕ a1 ⊕ a2 ⊕ a6 ⊕ a7 ⊕ 0 = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1
a′3 = a0 ⊕ a1 ⊕ a2 ⊕ a3 ⊕ a7 ⊕ 0 = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1
a′4 = a0 ⊕ a1 ⊕ a2 ⊕ a3 ⊕ a4 ⊕ 0 = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 = 0
a′5 = a1 ⊕ a2 ⊕ a3 ⊕ a4 ⊕ a5 ⊕ 1 = 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1 = 1
a′6 = a2 ⊕ a3 ⊕ a4 ⊕ a5 ⊕ a6 ⊕ 1 = 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 1 = 1
a′7 = a3 ⊕ a4 ⊕ a5 ⊕ a6 ⊕ a7 ⊕ 0 = 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1.
Thus, {a′} = y7 + y6 + y5 + y3 + y2 + 1 = {11101101} = {ED}.
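The bitwise computation above follows a single recurrence: each output bit is a′ᵢ = aᵢ ⊕ a₍ᵢ₊₄₎ ⊕ a₍ᵢ₊₅₎ ⊕ a₍ᵢ₊₆₎ ⊕ a₍ᵢ₊₇₎ ⊕ cᵢ with indices taken mod 8, where the constant bits cᵢ are read off the equations above. A sketch implementing it directly:

```python
# Bit-level sketch of the affine transformation tabulated above:
#   a'_i = a_i ^ a_{(i+4)%8} ^ a_{(i+5)%8} ^ a_{(i+6)%8} ^ a_{(i+7)%8} ^ c_i
# The constant bits c_i = (1,1,0,0,0,1,1,0) for i = 0..7 are read off
# the worked equations.

def affine_gf2(byte):
    a = [(byte >> i) & 1 for i in range(8)]   # a[0] is the low-order bit
    c = [1, 1, 0, 0, 0, 1, 1, 0]
    out = 0
    for i in range(8):
        bit = (a[i] ^ a[(i + 4) % 8] ^ a[(i + 5) % 8]
               ^ a[(i + 6) % 8] ^ a[(i + 7) % 8] ^ c[i])
        out |= bit << i
    return out

# Reproduces the worked example: {CA} maps to {ED}.
print(hex(affine_gf2(0xCA)))  # 0xed
```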
In $\mathbb{R}^2$, consider the affine map

$$\begin{bmatrix} x \\ y \end{bmatrix} \mapsto \begin{bmatrix} 0 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} -100 \\ -100 \end{bmatrix}.$$
Transforming the three corner points of the original triangle (in red) gives three new points which form the new
triangle (in blue). This transformation skews and translates the original triangle.
In fact, all triangles are related to one another by affine transformations. This is also true for all parallelograms, but
not for all quadrilaterals.
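Applying the map above to a concrete (illustrative) triangle makes the skew-and-translate behaviour visible:

```python
# The plane map above, written out: (x, y) -> (y - 100, 2x + y - 100).
# The triangle corners are illustrative.

def f(p):
    x, y = p
    return (y - 100, 2 * x + y - 100)

triangle = [(0, 0), (100, 0), (0, 100)]
print([f(p) for p in triangle])  # [(-100, -100), (-100, 100), (0, 0)]
```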
1.7 Notes
[1] Berger, Marcel (1987), p. 38.
[2] Schneider, Philip K. & Eberly, David H. (2003). Geometric Tools for Computer Graphics. Morgan Kaufmann. p. 98.
ISBN 978-1-55860-594-7.
[3] Oswald Veblen (1918) Projective Geometry, volume 2, pp. 105–7.
1.8 References
• Berger, Marcel (1987), Geometry I, Berlin: Springer, ISBN 3-540-11658-3
• Nomizu, Katsumi; Sasaki, S. (1994), Affine Differential Geometry (New ed.), Cambridge University Press,
ISBN 978-0-521-44177-3
• Sharpe, R. W. (1997). Differential Geometry: Cartan’s Generalization of Klein’s Erlangen Program. New York:
Springer. ISBN 0-387-94732-9.
Arithmetical hierarchy
In mathematical logic, the arithmetical hierarchy, arithmetic hierarchy or Kleene-Mostowski hierarchy classifies
certain sets based on the complexity of formulas that define them. Any set that receives a classification is called
arithmetical.
The arithmetical hierarchy is important in recursion theory, effective descriptive set theory, and the study of formal
theories such as Peano arithmetic.
The Tarski-Kuratowski algorithm provides an easy way to get an upper bound on the classifications assigned to a
formula and the set it defines.
The hyperarithmetical hierarchy and the analytical hierarchy extend the arithmetical hierarchy to classify additional
formulas and sets.
The arithmetical hierarchy assigns classifications to the formulas in the language of first-order arithmetic. The clas-
sifications are denoted Σ0n and Π0n for natural numbers n (including 0). The Greek letters here are lightface symbols,
which indicates that the formulas do not contain set parameters.
If a formula ϕ is logically equivalent to a formula with only bounded quantifiers then ϕ is assigned the classifications
Σ00 and Π00 .
The classifications Σ0n and Π0n are defined inductively for every natural number n using the following rules:
• If ϕ is logically equivalent to a formula of the form ∃n1 ∃n2 · · · ∃nk ψ , where ψ is Π0n , then ϕ is assigned the
classification Σ0n+1 .
• If ϕ is logically equivalent to a formula of the form ∀n1 ∀n2 · · · ∀nk ψ , where ψ is Σ0n , then ϕ is assigned the
classification Π0n+1 .
Also, a Σ0n formula is equivalent to a formula that begins with some existential quantifiers and alternates n − 1 times
between series of existential and universal quantifiers; while a Π0n formula is equivalent to a formula that begins with
some universal quantifiers and alternates similarly.
Because every formula is equivalent to a formula in prenex normal form, every formula with no set quantifiers is
assigned at least one classification. Because redundant quantifiers can be added to any formula, once a formula is
assigned the classification Σ0n or Π0n it will be assigned the classifications Σ0m and Π0m for every m greater than n.
The most important classification assigned to a formula is thus the one with the least n, because this is enough to
determine all the other classifications.
2.2 The arithmetical hierarchy of sets of natural numbers
A set X of natural numbers is defined by a formula φ in the language of Peano arithmetic (the first-order language with
symbols “0” for zero, “S” for the successor function, "+" for addition, "×" for multiplication, and "=" for equality), if
the elements of X are exactly the numbers that satisfy φ. That is, for all natural numbers n,
$$n \in X \Leftrightarrow \mathbb{N} \models \varphi(\overline{n}),$$

where $\overline{n}$ is the numeral in the language of arithmetic corresponding to n. A set is definable in first-order arithmetic
if it is defined by some formula in the language of Peano arithmetic.
Each set X of natural numbers that is definable in first order arithmetic is assigned classifications of the form Σ0n ,
Π0n , and ∆0n , where n is a natural number, as follows. If X is definable by a Σ0n formula then X is assigned the
classification Σ0n . If X is definable by a Π0n formula then X is assigned the classification Π0n . If X is both Σ0n and
Π0n then X is assigned the additional classification ∆0n .
Note that it rarely makes sense to speak of ∆0n formulas; the first quantifier of a formula is either existential or
universal. So a ∆0n set is not defined by a ∆0n formula; rather, there are both Σ0n and Π0n formulas that define the set.
A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of the natural numbers.
Instead of formulas with one free variable, formulas with k free number variables are used to define the arithmetical
hierarchy on sets of k-tuples of natural numbers.
2.3 Relativized arithmetical hierarchies

Just as we can define what it means for a set X to be recursive relative to another set Y by allowing the computation
defining X to consult Y as an oracle, we can extend this notion to the whole arithmetical hierarchy and define what it
means for X to be $\Sigma^0_n$, $\Delta^0_n$ or $\Pi^0_n$ in Y, denoted respectively $\Sigma^{0,Y}_n$, $\Delta^{0,Y}_n$ and $\Pi^{0,Y}_n$. To do so, fix a set of integers
Y and add a predicate for membership in Y to the language of Peano arithmetic. We then say that X is in $\Sigma^{0,Y}_n$ if
it is defined by a $\Sigma^0_n$ formula in this expanded language. In other words, X is $\Sigma^{0,Y}_n$ if it is defined by a $\Sigma^0_n$ formula
allowed to ask questions about membership in Y. Alternatively, one can view the $\Sigma^{0,Y}_n$ sets as those sets that can be
built starting with sets recursive in Y and alternately taking unions and intersections of these sets up to n times.
For example, let Y be a set of integers. Let X be the set of numbers divisible by an element of Y. Then X is defined
by the formula $\varphi(n) = \exists m \exists t\,(Y(m) \wedge m \times t = n)$, so X is in $\Sigma^{0,Y}_1$ (actually it is in $\Delta^{0,Y}_0$ as well, since we could
bound both quantifiers by n).
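The bounded-search observation in this example can be sketched directly: membership in X is decided by a search bounded by n, consulting an oracle for Y. The set Y below is an arbitrary illustration.

```python
# Sketch of the example above: with oracle access to Y, membership in
# X = { n : some element of Y divides n } is decided by the bounded
# search corresponding to  ∃m≤n ∃t≤n (Y(m) ∧ m·t = n).

Y = {3, 7}                      # illustrative oracle set

def in_X(n, oracle):
    """Decide n ∈ X by search bounded by n, consulting the Y-oracle."""
    return any(oracle(m) and m * t == n
               for m in range(1, n + 1)
               for t in range(1, n + 1))

print([n for n in range(1, 15) if in_X(n, lambda m: m in Y)])
# [3, 6, 7, 9, 12, 14]
```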
2.4 Arithmetic reducibility and degrees

Arithmetical reducibility is an intermediate notion between Turing reducibility and hyperarithmetical reducibility.
A set is arithmetical (also arithmetic and arithmetically definable) if it is defined by some formula in the language
of Peano arithmetic. Equivalently, X is arithmetical if X is $\Sigma^0_n$ or $\Pi^0_n$ for some integer n. A set X is arithmetical
in a set Y, denoted $X \le_A Y$, if X is definable by some formula in the language of Peano arithmetic extended by a
predicate for membership in Y. Equivalently, X is arithmetical in Y if X is in $\Sigma^{0,Y}_n$ or $\Pi^{0,Y}_n$ for some integer n. A
synonym for $X \le_A Y$ is: X is arithmetically reducible to Y.
The relation $X \le_A Y$ is reflexive and transitive, and thus the relation $\equiv_A$ defined by the rule

$$X \equiv_A Y \Leftrightarrow X \le_A Y \wedge Y \le_A X$$

is an equivalence relation. The equivalence classes of this relation are called the arithmetic degrees; they are partially
ordered under $\le_A$.
2.5 The arithmetical hierarchy of subsets of Cantor and Baire space
• A subset of Baire space has a corresponding subset of Cantor space under the map that takes each function
from ω to ω to the characteristic function of its graph. A subset of Baire space is given the classification Σ1n ,
Π1n , or ∆1n if and only if the corresponding subset of Cantor space has the same classification.
• An equivalent definition of the analytical hierarchy on Baire space is given by defining the analytical hierarchy
of formulas using a functional version of second-order arithmetic; then the analytical hierarchy on subsets of
Cantor space can be defined from the hierarchy on Baire space. This alternate definition gives exactly the same
classifications as the first definition.
A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of Baire space or Cantor
space, using formulas with several free variables. The arithmetical hierarchy can be defined on any effective Polish
space; the definition is particularly simple for Cantor space and Baire space because they fit with the language of
ordinary second-order arithmetic.
Note that we can also define the arithmetic hierarchy of subsets of the Cantor and Baire spaces relative to some set of integers. In fact the boldface Σ0n is just the union of Σ0,Yn for all sets of integers Y. Note that the boldface hierarchy is just the standard hierarchy of Borel sets.
• If the relation R(n1 , . . . , nl , m1 , . . . , mk ) is Σ0n then the relation S(n1 , . . . , nl ) = ∀m1 · · · ∀mk R(n1 , . . . , nl , m1 , . . . , mk ) is defined to be Π0n+1.
• If the relation R(n1 , . . . , nl , m1 , . . . , mk ) is Π0n then the relation S(n1 , . . . , nl ) = ∃m1 · · · ∃mk R(n1 , . . . , nl , m1 , . . . , mk ) is defined to be Σ0n+1.
This variation slightly changes the classification of some sets. It can be extended to cover finitary relations on the
natural numbers, Baire space, and Cantor space.
2.8 Examples
• The Σ01 sets of numbers are those definable by a formula of the form ∃n1 · · · ∃nk ψ(n1 , . . . , nk , m) where ψ
has only bounded quantifiers. These are exactly the recursively enumerable sets.
• The set of natural numbers that are indices for Turing machines that compute total functions is Π02 . Intuitively,
an index e falls into this set if and only if for every m “there is an s such that the Turing machine with index
e halts on input m after s steps”. A complete proof would show that the property displayed in quotes in the
previous sentence is definable in the language of Peano arithmetic by a Σ01 formula.
• Every Σ01 subset of Baire space or Cantor space is an open set in the usual topology on the space. Moreover,
for any such set there is a computable enumeration of Gödel numbers of basic open sets whose union is the
original set. For this reason, Σ01 sets are sometimes called effectively open. Similarly, every Π01 set is closed
and the Π01 sets are sometimes called effectively closed.
• Every arithmetical subset of Cantor space or Baire space is a Borel set. The lightface Borel hierarchy extends
the arithmetical hierarchy to include additional Borel sets. For example, every Π02 subset of Cantor or Baire
space is a Gδ set (that is, a set which equals the intersection of countably many open sets). Moreover, each
of these open sets is Σ01 and the list of Gödel numbers of these open sets has a computable enumeration.
If ϕ(X, n, m) is a Σ00 formula with a free set variable X and free number variables n, m then the Π02 set
{X | ∀n∃mϕ(X, n, m)} is the intersection of the Σ01 sets of the form {X | ∃mϕ(X, n, m)} as n ranges over
the set of natural numbers.
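The first example above, that the Σ01 sets of numbers are exactly the recursively enumerable sets, can be illustrated by a dovetailing search. The following Python sketch (the function names are our own) enumerates the set defined by an existential formula, assuming a computable predicate for the bounded-quantifier matrix ψ:

```python
from itertools import count, product

def enumerate_sigma1(psi, k):
    """Enumerate the Σ01 set {m : ∃n1 … ∃nk ψ(n1, …, nk, m)} by
    dovetailing: at stage s, test every m ≤ s against every witness
    tuple with entries ≤ s.  Every element is eventually listed, which
    is why Σ01 sets are recursively enumerable (assuming ψ itself is a
    computable, bounded-quantifier predicate)."""
    seen = set()
    for s in count():
        for m in range(s + 1):
            if m in seen:
                continue
            for ns in product(range(s + 1), repeat=k):
                if psi(*ns, m):
                    seen.add(m)
                    yield m
                    break

# Example: m is a perfect square iff ∃n (n * n == m)
gen = enumerate_sigma1(lambda n, m: n * n == m, k=1)
first = sorted(next(gen) for _ in range(5))
print(first)  # [0, 1, 4, 9, 16]
```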
2.9 Properties
The following properties hold for the arithmetical hierarchy of sets of natural numbers and the arithmetical hierarchy
of subsets of Cantor or Baire space.
• The collections Π0n and Σ0n are closed under finite unions and finite intersections of their respective elements.
• A set is Σ0n if and only if its complement is Π0n . A set is ∆0n if and only if the set is both Σ0n and Π0n , in which
case its complement will also be ∆0n .
• The inclusions Π0n ⊊ Π0n+1 and Σ0n ⊊ Σ0n+1 hold for all n and the inclusion Σ0n ∪ Π0n ⊊ ∆0n+1 holds for
n ≥ 1 . Thus the hierarchy does not collapse.
2.10 Relation to Turing machines
Post’s theorem establishes a close connection between the arithmetical hierarchy of sets of natural numbers and the Turing degrees. In particular, it establishes the following facts for all n ≥ 1:
• The set ∅(n) (the nth Turing jump of the empty set) is many-one complete in Σ0n .
• The set N \ ∅(n) is many-one complete in Π0n .
The polynomial hierarchy is a “feasible resource-bounded” version of the arithmetical hierarchy in which polyno-
mial length bounds are placed on the numbers involved (or, equivalently, polynomial time bounds are placed on the
Turing machines involved). It gives a finer classification of some sets of natural numbers that are at level ∆01 of the
arithmetical hierarchy.
2.11 See also
• Hierarchy (mathematics)
• Polynomial hierarchy
2.12 References
• Japaridze, Giorgie (1994), “The logic of arithmetical hierarchy”, Annals of Pure and Applied Logic 66 (2):
89–112, doi:10.1016/0168-0072(94)90063-9, Zbl 0804.03045.
• Moschovakis, Yiannis N. (1980), Descriptive Set Theory, Studies in Logic and the Foundations of Mathematics
100, North Holland, ISBN 0-444-70199-0, Zbl 0433.03025.
• Nies, André (2009), Computability and randomness, Oxford Logic Guides 51, Oxford: Oxford University
Press, ISBN 978-0-19-923076-1, Zbl 1169.03034.
• Rogers, H., jr (1967), Theory of recursive functions and effective computability, Maidenhead: McGraw-Hill,
Zbl 0183.01401.
Chapter 3
Arity
In logic, mathematics, and computer science, the arity (/ˈærɨti/) of a function or operation is the number of arguments or operands the function or operation accepts. The arity of a relation (or predicate) is the dimension of the domain in the corresponding Cartesian product. (A function of arity n thus has arity n+1 considered as a relation.) The term springs from words like unary, binary, ternary, etc. Unary functions or predicates may also be called “monadic”; similarly, binary functions may be called “dyadic”.
In mathematics arity may also be named rank,[1][2] but this word can have many other meanings in mathematics. In
logic and philosophy, arity is also called adicity and degree.[3][4] In linguistics, arity is usually named valency.[5]
In computer programming, there is often a syntactical distinction between operators and functions; syntactical op-
erators usually have arity 0, 1, or 2. Functions vary widely in the number of arguments, though large numbers can
become unwieldy. Some programming languages also offer support for variadic functions, i.e. functions syntactically
accepting a variable number of arguments.
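A minimal Python illustration of a variadic function (the function name is our own):

```python
def total(*operands):
    """A variadic function: the parameter list fixes no arity, so the
    function syntactically accepts any number of arguments."""
    result = 0
    for x in operands:
        result += x
    return result

print(total())         # 0, called as a nullary function
print(total(1, 2))     # 3, called as a binary function
print(total(1, 2, 3))  # 6, called as a ternary function
```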
3.1 Examples
The term “arity” is rarely employed in everyday usage. For example, rather than saying “the arity of the addition
operation is 2” or “addition is an operation of arity 2” one usually says “addition is a binary operation”. In general, the
naming of functions or operators with a given arity follows a convention similar to the one used for n-based numeral
systems such as binary and hexadecimal. One combines a Latin prefix with the -ary ending; for example:
3.1.1 Nullary
Sometimes it is useful to consider a constant to be an operation of arity 0, and hence call it nullary.
Also, in non-functional programming, a function without arguments can be meaningful and not necessarily constant
(due to side effects). Often, such functions have in fact some hidden input which might be global variables, including
the whole state of the system (time, free memory, ...). The latter are important examples which usually also exist in
“purely” functional programming languages.
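A short Python sketch of the two situations described above: a nullary operation that is a constant, and a nullary function whose value depends on hidden state (the names are illustrative):

```python
def pi():
    """A nullary operation that really is constant."""
    return 3.14159

counter = 0

def next_id():
    """A nullary function that is *not* constant: it reads and updates
    hidden input (a global variable), the side-effect situation
    described above for non-functional programming."""
    global counter
    counter += 1
    return counter

print(pi(), pi())            # 3.14159 3.14159, same value every call
print(next_id(), next_id())  # 1 2, different values despite zero arguments
```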
3.1.2 Unary
Examples of unary operators in mathematics and in programming include the unary minus and plus, the increment
and decrement operators in C-style languages (not in logical languages), and the factorial, reciprocal, floor, ceiling,
fractional part, sign, absolute value, complex conjugate, and norm functions in mathematics. The two’s comple-
ment, address reference and the logical NOT operators are examples of unary operators in math and programming.
According to Quine, a more suitable term is “singulary”.[6]
All functions in lambda calculus and in some functional programming languages (especially those descended from
ML) are technically unary, but see n-ary below.
3.1.3 Binary
Most operators encountered in programming are of the binary form. For both programming and mathematics these include the multiplication operator, the addition operator, and the division operator. Logical connectives such as OR, XOR, AND, and IMP are typically used as binary operators with two distinct operands.
3.1.4 Ternary
From C, C++, C#, Java, Perl and variants comes the ternary operator ?:, a so-called conditional operator taking three parameters. Forth also contains a ternary operator, */, which multiplies the first two (one-cell) numbers and divides by the third, with the intermediate result being a double-cell number. This is used when the intermediate result would overflow a single cell. Python has a ternary conditional expression, x if C else y. The dc calculator has several ternary operators, such as |, which will pop three values from the stack and efficiently compute x^y mod z with arbitrary precision. Additionally, many assembly language instructions are ternary or higher, such as MOV %AX, (%BX,%CX), which loads (MOV) into register AX the contents of a calculated memory location that is the sum (the parentheses) of the registers BX and CX.
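Two of these ternary operators can be sketched in Python; the three-argument form of the built-in pow is the standard library's analogue of dc's | operator (the specific operand values are arbitrary):

```python
x, C, y = 10, False, 20
value = x if C else y   # Python's ternary conditional expression
print(value)            # 20

# dc's ternary '|' operator pops three values and computes x^y mod z with
# arbitrary precision; Python's built-in pow does the same when given its
# optional third argument, using fast modular exponentiation rather than
# forming the full power.
print(pow(7, 128, 13))  # 3
```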
3.1.5 n-ary
From a mathematical point of view, a function of n arguments can always be considered as a function of one single
argument which is an element of some product space. However, it may be convenient for notation to consider n-ary
functions, as for example multilinear maps (which are not linear maps on the product space, if n≠1).
The same is true for programming languages, where functions taking several arguments could always be defined as
functions taking a single argument of some composite type such as a tuple, or in languages with higher-order functions,
by currying.
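Both reductions mentioned in this paragraph, currying and packing the arguments into a single composite value, can be sketched in Python (the helper names are our own):

```python
def add3(x, y, z):
    """An ordinary ternary function."""
    return x + y + z

def curry3(f):
    """Rewrite a ternary function as a chain of unary ones (currying)."""
    return lambda x: lambda y: lambda z: f(x, y, z)

def tupled(f):
    """Alternatively, take one argument of a composite (tuple) type."""
    return lambda args: f(*args)

print(add3(1, 2, 3))            # 6
print(curry3(add3)(1)(2)(3))    # 6, every call site is unary
print(tupled(add3)((1, 2, 3)))  # 6, arity 1 with a triple as argument
```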
• Nullary means 0-ary (from nūllus, zero not being well-understood in antiquity).
• Unary means 1-ary (from cardinal unus, rather than singulary from distributive singulī ).
• Binary means 2-ary.
• Ternary means 3-ary.
An alternative nomenclature is derived in a similar fashion from the corresponding Greek roots; for example, niladic
(or medadic), monadic, dyadic, triadic, polyadic, and so on. Thence derive the alternative terms adicity and adinity
for the Latin-derived arity.
These words are often used to describe anything related to that number (e.g., undenary chess is a chess variant with
an 11×11 board, or the Millenary Petition of 1603).
3.4 References
[1] Michiel Hazewinkel (2001). Encyclopaedia of Mathematics, Supplement III. Springer. p. 3. ISBN 978-1-4020-0198-7.
[2] Eric Schechter (1997). Handbook of Analysis and Its Foundations. Academic Press. p. 356. ISBN 978-0-12-622760-4.
[3] Michael Detlefsen; David Charles McCarty; John B. Bacon (1999). Logic from A to Z. Routledge. p. 7. ISBN 978-0-415-21375-2.
[4] Nino B. Cocchiarella; Max A. Freund (2008). Modal Logic: An Introduction to its Syntax and Semantics. Oxford University
Press. p. 121. ISBN 978-0-19-536658-7.
[5] David Crystal (2008). Dictionary of Linguistics and Phonetics (6th ed.). John Wiley & Sons. p. 507. ISBN 978-1-405-15296-9.
[6] Quine, W. V. O. (1940), Mathematical logic, Cambridge, MA: Harvard University Press, p. 13
[7] Oliver, Alex (2004). “Multigrade Predicates”. Mind 113: 609–681. doi:10.1093/mind/113.452.609.
• Burris, Stanley N., and H.P. Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag.
ISBN 3-540-90578-2. Especially pp. 22–24.
Chapter 4
Boolean domain
In mathematics and abstract algebra, a Boolean domain is a set consisting of exactly two elements whose interpre-
tations include false and true. In logic, mathematics and theoretical computer science, a Boolean domain is usually
written as {0, 1},[1][2][3] {false, true}, {F, T},[4] {⊥, ⊤},[5] or B.[6][7]
The algebraic structure that naturally builds on a Boolean domain is the Boolean algebra with two elements. The
initial object in the category of bounded lattices is a Boolean domain.
In computer science, a Boolean variable is a variable that takes values in some Boolean domain. Some programming
languages feature reserved words or symbols for the elements of the Boolean domain, for example false and true.
However, many programming languages do not have a Boolean datatype in the strict sense. In C or BASIC, for
example, falsity is represented by the number 0 and truth is represented by the number 1 or −1 respectively, and all
variables that can take these values can also take any other numerical values.
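A small Python sketch of the contrast drawn above: Python's bool is a two-element type, but because it is a subtype of int, nothing prevents a numeric "flag" variable from leaving the Boolean domain (the variable names are illustrative):

```python
# Python's bool values behave as the Boolean domain {0, 1}: bool is a
# subtype of int, and comparisons produce elements of that domain.
print(True == 1, False == 0)   # True True
print(isinstance(True, int))   # True
print(int(5 > 3))              # 1

# A C-style "Boolean" variable is just a number, so it can drift
# outside {0, 1}:
flag = 1          # intended as "true"
flag = flag + 41  # still a legal numeric value
print(flag)       # 42, no longer a Boolean value in the strict sense
```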
4.1 Generalizations
The Boolean domain {0, 1} can be replaced by the unit interval [0,1], in which case rather than only taking values 0
or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x,
conjunction (AND) is replaced with multiplication ( xy ), and disjunction (OR) is defined via De Morgan’s law to be
1 − (1 − x)(1 − y) .
Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic
and probabilistic logic. In these interpretations, a value is interpreted as the “degree” of truth – to what extent a
proposition is true, or the probability that the proposition is true.
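The three generalized operations can be written down directly. A minimal Python sketch (the function names f_not, f_and, f_or are our own) which also checks that they restrict to the classical operations on {0, 1}:

```python
def f_not(x):
    """Fuzzy negation on the unit interval: 1 - x."""
    return 1.0 - x

def f_and(x, y):
    """Fuzzy conjunction as multiplication: x * y."""
    return x * y

def f_or(x, y):
    """Fuzzy disjunction via De Morgan's law: 1 - (1 - x)(1 - y)."""
    return 1.0 - (1.0 - x) * (1.0 - y)

# Restricted to {0, 1} these agree with ordinary NOT, AND, OR:
for x in (0, 1):
    for y in (0, 1):
        assert f_and(x, y) == (x and y)
        assert f_or(x, y) == (x or y)

print(f_not(0.25))      # 0.75
print(f_and(0.5, 0.5))  # 0.25
print(f_or(0.5, 0.5))   # 0.75
```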
4.3 Notes
[1] Dirk van Dalen, Logic and Structure. Springer (2004), page 15.
[2] David Makinson, Sets, Logic and Maths for Computing. Springer (2008), page 13.
[3] George S. Boolos and Richard C. Jeffrey, Computability and Logic. Cambridge University Press (1980), page 99.
[4] Elliott Mendelson, Introduction to Mathematical Logic (4th. ed.). Chapman & Hall/CRC (1997), page 11.
[5] Eric C. R. Hehner, A Practical Theory of Programming. Springer (1993, 2010), page 3.
[6] Ian Parberry (1994). Circuit Complexity and Neural Networks. MIT Press. p. 65. ISBN 978-0-262-16148-0.
19
20 CHAPTER 4. BOOLEAN DOMAIN
[7] Jordi Cortadella et al. (2002). Logic Synthesis for Asynchronous Controllers and Interfaces. Springer Science & Business
Media. p. 73. ISBN 978-3-540-43152-7.
Chapter 5
Boolean expression
In computer science, a Boolean expression is an expression in a programming language that produces a Boolean
value when evaluated, i.e. one of true or false. A Boolean expression may be composed of a combination of the
Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions.[1]
Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits.[2]
5.2 Examples
• The expression “5 > 3” is evaluated as true.
• “5>=3” and “3<=5” are equivalent Boolean expressions, both of which are evaluated as true.
• Of course, most Boolean expressions will contain at least one variable (X > 3), and often more (X > Y).
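The examples above, written as a Python sketch (Python is chosen only for illustration; the variable names are arbitrary):

```python
X, Y = 5, 3

print(5 > 3)    # True
print(5 >= 3)   # True
print(3 <= 5)   # True, the same comparison written the other way round
print(X > 3)    # True, an expression containing one variable
print(X > Y)    # True, an expression containing two variables
```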
• Expression (mathematics)
5.4 References
[1] Gries, David; Schneider, Fred B. (1993), “Chapter 2. Boolean Expressions”, A Logical Approach to Discrete Math, Monographs in Computer Science, Springer, p. 25ff, ISBN 9780387941158.
[2] van Melkebeek, Dieter (2000), Randomness and Completeness in Computational Complexity, Lecture Notes in Computer Science 1950, Springer, p. 22, ISBN 9783540414926.
[3] E.g. for Java see Brogden, William B.; Green, Marcus (2003), Java 2 Programmer, Que Publishing, p. 45, ISBN 9780789728616.
Chapter 6
Boolean function
In mathematics and logic, a (finitary) Boolean function (or switching function) is a function of the form ƒ : Bk →
B, where B = {0, 1} is a Boolean domain and k is a non-negative integer called the arity of the function. In the case
where k = 0, the “function” is essentially a constant element of B.
Every k-ary Boolean function can be expressed as a propositional formula in k variables x1 , …, xk, and two propositional formulas are logically equivalent if and only if they express the same Boolean function. There are 2^(2^k) k-ary functions for every k.
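The count 2^(2^k) can be checked by brute force: represent each k-ary Boolean function by its truth table, a tuple of 2^k output bits. A Python sketch (the function name is our own):

```python
from itertools import product

def boolean_functions(k):
    """All k-ary Boolean functions f : B^k -> B with B = {0, 1}, each
    represented by its truth table (a tuple of 2**k output bits)."""
    inputs = list(product((0, 1), repeat=k))          # the 2**k input rows
    return list(product((0, 1), repeat=len(inputs)))  # 2**(2**k) truth tables

for k in range(4):
    print(k, len(boolean_functions(k)))  # 0 2 / 1 4 / 2 16 / 3 256
```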
6.2 See also
• Boolean algebra
• Boolean domain
• Boolean-valued function
• Logical connective
• Truth function
• Truth table
6.3 References
• Crama, Y; Hammer, P. L. (2011), Boolean Functions, Cambridge University Press.
• Hazewinkel, Michiel, ed. (2001), “Boolean function”, Encyclopedia of Mathematics, Springer, ISBN 978-1-
55608-010-4
• Janković, Dragan; Stanković, Radomir S.; Moraga, Claudio (November 2003). “Arithmetic expressions optimisation using dual polarity property” (PDF). Serbian Journal of Electrical Engineering 1 (1): 71–80. Retrieved 2015-06-07.
• Mano, M. M.; Ciletti, M. D. (2013), Digital Design, Pearson.
Chapter 7
Cardinality of the continuum
In set theory, the cardinality of the continuum is the cardinality or “size” of the set of real numbers R , sometimes
called the continuum. It is an infinite cardinal number and is denoted by |R| or c (a lowercase fraktur script “c”).
The real numbers R are more numerous than the natural numbers N . Moreover, R has the same number of elements
as the power set of N . Symbolically, if the cardinality of N is denoted as ℵ0 , the cardinality of the continuum is
c = 2ℵ0 > ℵ0 .
This was proven by Georg Cantor in his 1874 uncountability proof, part of his groundbreaking study of different
infinities, and later more simply in his diagonal argument. Cantor defined cardinality in terms of bijective functions:
two sets have the same cardinality if and only if there exists a bijective function between them.
Between any two real numbers a < b, no matter how close they are to each other, there are always infinitely many
other real numbers, and Cantor showed that they are as many as those contained in the whole set of real numbers. In
other words, the open interval (a,b) is equinumerous with R. This is also true for several other infinite sets, such as
any n-dimensional Euclidean space Rn (see space filling curve). That is,
7.1 Properties
7.1.1 Uncountability
Georg Cantor introduced the concept of cardinality to compare the sizes of infinite sets. He famously showed that the set of real numbers is uncountably infinite; i.e., c is strictly greater than the cardinality of the natural numbers, ℵ0:
ℵ0 < c.
In other words, there are strictly more real numbers than there are integers. Cantor proved this statement in several
different ways. See Cantor’s first uncountability proof and Cantor’s diagonal argument.
1. Define a map f : R → P(Q) from the reals to the power set of the rationals by sending each real number x to
the set {q ∈ Q | q ≤ x} of all rationals less than or equal to x (with the reals viewed as Dedekind cuts, this
is nothing other than the inclusion map in the set of sets of rationals). This map is injective since the rationals
are dense in R. Since the rationals are countable we have that c ≤ 2ℵ0 .
2. Let {0,2}N be the set of infinite sequences with values in set {0,2}. This set clearly has cardinality 2ℵ0 (the
natural bijection between the set of binary sequences and P(N) is given by the indicator function). Now asso-
ciate to each such sequence (ai) the unique real number in the interval [0,1] with the ternary-expansion given
by the digits (ai), i.e. the i-th digit after the decimal point is ai. The image of this map is called the Cantor set.
It is not hard to see that this map is injective, for by avoiding points with the digit 1 in their ternary expansion
we avoid conflicts created by the fact that the ternary-expansion of a real number is not unique. We then have
that 2ℵ0 ≤ c .
By the Cantor–Schröder–Bernstein theorem we conclude that c = |P (N)| = 2ℵ0.
(A different proof of c = 2ℵ0 is given in Cantor’s diagonal argument. This proof constructs a bijection from {0,1}N
to R.)
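Step 2 of the proof can be sketched numerically: using finite prefixes in place of infinite sequences, map a 0/1 prefix (standing for the digits 0 and 2) to the rational number whose ternary digits it gives. A Python illustration (the helper name is our own):

```python
from fractions import Fraction

def cantor_point(bits):
    """Map a finite 0/1 sequence (a prefix of an element of {0,2}^N,
    with 1 standing for the digit 2) to the real whose ternary digits
    are 2*bits, as in step 2 of the proof.  Because only the digits 0
    and 2 are used, distinct prefixes give distinct points, which is
    why the full map into the Cantor set is injective."""
    return sum(Fraction(2 * b, 3 ** (i + 1)) for i, b in enumerate(bits))

print(cantor_point([1]))        # 2/3
print(cantor_point([0, 1]))     # 2/9
print(cantor_point([1, 0, 1]))  # 20/27, i.e. 2/3 + 2/27
```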
The cardinal equality c2 = c can be demonstrated using cardinal arithmetic:
c2 = (2ℵ0)2 = 22·ℵ0 = 2ℵ0 = c.
By using the rules of cardinal arithmetic one can also show that
cℵ0 = ℵ0 ℵ0 = nℵ0 = cn = ℵ0 c = nc = c,
cc = (2ℵ0 )c = 2c×ℵ0 = 2c .
Every real number has at least one infinite decimal expansion. For example,
1/2 = 0.50000...
1/3 = 0.33333...
π = 3.14159....
(This is true even when the expansion repeats as in the first two examples.) In any given case, the number of digits
is countable since they can be put into a one-to-one correspondence with the set of natural numbers N . This fact
makes it sensible to talk about (for example) the first, the one-hundredth, or the millionth digit of π . Since the natural
numbers have cardinality ℵ0 , each real number has ℵ0 digits in its expansion.
Since each real number can be broken into an integer part and a decimal fraction, we get
c ≤ ℵ0 · 10ℵ0 ≤ 2ℵ0 · (24)ℵ0 = 2ℵ0+4·ℵ0 = 2ℵ0
since
ℵ0 + 4 · ℵ0 = ℵ0.
On the other hand, if we map 2 = {0, 1} to {3, 7} and consider that decimal fractions containing only 3 or 7 are only
a part of the real numbers, then we get
2ℵ0 ≤ c .
and thus
c = 2ℵ0 .
7.2 Beth numbers
The sequence of beth numbers is defined by setting ℶ0 = ℵ0 and ℶk+1 = 2ℶk. So c is the second beth number, beth-one:
c = ℶ1 .
The third beth number, beth-two, is the cardinality of the power set of R (i.e. the set of all subsets of the real line):
2c = ℶ2 .
The famous continuum hypothesis asserts that c is also the second aleph number, ℵ1. In other words, the continuum hypothesis states that there is no set A whose cardinality lies strictly between ℵ0 and c.
This statement is now known to be independent of the axioms of Zermelo–Fraenkel set theory with the axiom of
choice (ZFC). That is, both the hypothesis and its negation are consistent with these axioms. In fact, for every
nonzero natural number n, the equality c = ℵn is independent of ZFC (the case n = 1 is the continuum hypothesis).
The same is true for most other alephs, although in some cases equality can be ruled out by König’s theorem on the grounds of cofinality, e.g., c ≠ ℵω. In particular, c could be either ℵ1 or ℵω1, where ω1 is the first uncountable ordinal, so it could be either a successor cardinal or a limit cardinal, and either a regular cardinal or a singular cardinal.
• any (nondegenerate) closed or open interval in R (such as the unit interval [0, 1] )
For instance, for all a, b ∈ R such that a < b we can define the bijection
f : R → (a, b)
x ↦ ((arctan x + π/2) / π) · (b − a) + a
Intervals that are unbounded on one side also have cardinality c. For all a ∈ R we can define the bijection
f : R → (a, ∞)
x ↦ arctan x + π/2 + a   if x < 0
x ↦ x + π/2 + a   if x ≥ 0
and similarly for all b ∈ R
f : R → (−∞, b)
x ↦ x − π/2 + b   if x < 0
x ↦ arctan x − π/2 + b   if x ≥ 0
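The arctan-based bijection onto (a, b) can be checked numerically. A Python sketch (assuming the map in the form displayed above; the sampling range and names are arbitrary):

```python
import math
import random

def f(x, a, b):
    """The arctan-based bijection R -> (a, b):
    x maps to ((arctan x + pi/2) / pi) * (b - a) + a."""
    return (math.atan(x) + math.pi / 2) / math.pi * (b - a) + a

a, b = 2.0, 5.0
random.seed(0)
xs = sorted(random.uniform(-100.0, 100.0) for _ in range(1000))

# The image lies in (a, b), and the map is strictly increasing on the
# samples, hence injective there; surjectivity follows from continuity
# and the limits at negative and positive infinity.
assert all(a < f(x, a, b) < b for x in xs)
assert all(f(x, a, b) < f(y, a, b) for x, y in zip(xs, xs[1:]))
print(f(0.0, a, b))  # 3.5, since arctan 0 = 0 maps to the midpoint
```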
We note that the set of real algebraic numbers is countably infinite (assign to each formula its Gödel
number.) So the cardinality of the real algebraic numbers is ℵ0 . Furthermore, the real algebraic numbers
and the real transcendental numbers are disjoint sets whose union is R . Thus, since the cardinality of
R is c , the cardinality of the real transcendental numbers is c − ℵ0 = c . A similar result follows for
complex transcendental numbers, once we have proved that |C| = c .
• the power set of the natural numbers P(N) (the set of all subsets of the natural numbers)
• the set 2R of indicator functions defined on subsets of the reals (the set 2R is isomorphic to P(R) – the indicator
function chooses elements of each subset to include)
• the set RR of all functions from R to R
• the Lebesgue σ-algebra of R , i.e., the set of all Lebesgue measurable sets in R .
• the Stone–Čech compactifications of N , Q and R
7.6 References
[1] Was Cantor Surprised?, Fernando Q. Gouvêa
• Paul Halmos, Naive set theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag,
New York, 1974. ISBN 0-387-90092-6 (Springer-Verlag edition).
• Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. ISBN
3-540-44085-2.
• Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. ISBN 0-444-86839-9.
This article incorporates material from cardinality of the continuum on PlanetMath, which is licensed under the Creative
Commons Attribution/Share-Alike License.
Chapter 8
Charles Sanders Peirce (/ˈpɜrs/,[9] like “purse”, September 10, 1839 – April 19, 1914) was an American philoso-
pher, logician, mathematician, and scientist who is sometimes known as “the father of pragmatism". He was educated
as a chemist and employed as a scientist for 30 years. Today he is appreciated largely for his contributions to logic,
mathematics, philosophy, scientific methodology, and semiotics, and for his founding of pragmatism.
An innovator in mathematics, statistics, philosophy, research methodology, and various sciences, Peirce considered
himself, first and foremost, a logician. He made major contributions to logic, but logic for him encompassed much
of that which is now called epistemology and philosophy of science. He saw logic as the formal branch of semiotics,
of which he is a founder, and which foreshadowed the debate among logical positivists and proponents of philosophy
of language that dominated 20th century Western philosophy; additionally, he defined the concept of abductive rea-
soning, as well as rigorously formulated mathematical induction and deductive reasoning. As early as 1886 he saw
that logical operations could be carried out by electrical switching circuits; the same idea was used decades later to
produce digital computers.[10]
In 1934, the philosopher Paul Weiss called Peirce “the most original and versatile of American philosophers and
America’s greatest logician”.[11] Webster’s Biographical Dictionary said in 1943 that Peirce was “now regarded as the
most original thinker and greatest logician of his time.”[12]
8.1 Life
Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin
Peirce, himself a professor of astronomy and mathematics at Harvard University and perhaps the first serious re-
search mathematician in America. At age 12, Charles read his older brother’s copy of Richard Whately's Elements
of Logic, then the leading English-language text on the subject. So began his lifelong fascination with logic and
reasoning.[13] He went on to earn the A.B. and A.M. from Harvard; in 1863 the Lawrence Scientific School awarded
him a B.Sc. that was Harvard’s first summa cum laude chemistry degree;[14] and otherwise his academic record was
undistinguished.[15] At Harvard, he began lifelong friendships with Francis Ellingwood Abbot, Chauncey Wright, and
William James.[16] One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce.
This opinion proved fateful, because Eliot, while President of Harvard 1869–1909—a period encompassing nearly
all of Peirce’s working life—repeatedly vetoed Harvard’s employing Peirce in any capacity.[17]
Peirce suffered from his late teens onward from a nervous condition then known as “facial neuralgia”, which would
today be diagnosed as trigeminal neuralgia. Brent says that when in the throes of its pain “he was, at first, almost
stupefied, and then aloof, cold, depressed, extremely suspicious, impatient of the slightest crossing, and subject to
violent outbursts of temper”.[18] Its consequences may have led to the social isolation which made his life’s later years
so tragic.
Peirce’s birthplace. Now Lesley University's Graduate School of Arts and Social Sciences
the Civil War; it would have been very awkward for him to do so, as the Boston Brahmin Peirces sympathized with
the Confederacy.[21] At the Survey, he worked mainly in geodesy and gravimetry, refining the use of pendulums to
determine small local variations in the Earth's gravity.[19] He was elected a resident fellow of the American Academy
of Arts and Sciences in January 1867.[22] The Survey sent him to Europe five times,[23] first in 1871 as part of a
group sent to observe a solar eclipse; there, he sought out Augustus De Morgan, William Stanley Jevons, and William
Kingdon Clifford,[24] British mathematicians and logicians whose turn of mind resembled his own. From 1869 to
1872, he was employed as an Assistant in Harvard’s astronomical observatory, doing important work on determining
the brightness of stars and the shape of the Milky Way.[25] On April 20, 1877 he was elected a member of the National
Academy of Sciences.[26] Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain
frequency,[27] the kind of definition employed from 1960 to 1983.
During the 1880s, Peirce’s indifference to bureaucratic detail waxed while his Survey work’s quality and timeliness
waned. Peirce took years to write reports that he should have completed in months. Meanwhile, he wrote entries,
ultimately thousands during 1883–1909, on philosophy, logic, science, and other subjects for the encyclopedic Century
Dictionary.[28] In 1885, an investigation by the Allison Commission exonerated Peirce, but led to the dismissal of
Superintendent Julius Hilgard and several other Coast Survey employees for misuse of public funds.[29] In 1891,
Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request.[30] He never again
held regular employment.
In 1879, Peirce was appointed Lecturer in logic at the new Johns Hopkins University, which had strong departments
in a number of areas that interested him, such as philosophy (Royce and Dewey completed their PhDs at Hopkins),
psychology (taught by G. Stanley Hall and studied by Joseph Jastrow, who coauthored a landmark empirical study
with Peirce), and mathematics (taught by J. J. Sylvester, who came to admire Peirce’s work on mathematics and
logic). 1883 saw publication of his Studies in Logic by Members of the Johns Hopkins University containing works by
himself and Allan Marquand, Christine Ladd, Benjamin Ives Gilman, and Oscar Howard Mitchell, several of whom
were his graduate students.[31] Peirce’s nontenured position at Hopkins was the only academic appointment he ever
held.
Brent documents something Peirce never suspected, namely that his efforts to obtain academic employment, grants,
and scientific respectability were repeatedly frustrated by the covert opposition of a major Canadian-American sci-
entist of the day, Simon Newcomb.[32] Peirce’s efforts may also have been hampered by a difficult personality; Brent
conjectures as to further psychological difficulty.[33]
Peirce’s personal life worked against his professional success. After his first wife, Harriet Melusina Fay (“Zina”),
left him in 1875,[34] Peirce, while still legally married, became involved with Juliette, whose name, given variously
as Froissy and Pourtalai[35] and nationality (she spoke French[36] ) remain uncertain.[37] When his divorce from Zina
became final in 1883, he married Juliette.[38] That year, Newcomb pointed out to a Johns Hopkins trustee that Peirce,
while a Hopkins employee, had lived and traveled with a woman to whom he was not married; the ensuing scandal
led to his dismissal in January 1884.[39] Over the years Peirce sought academic employment at various universities
without success.[40] He had no children by either marriage.[41]
Cambridge, where Peirce was born and raised, New York City, where he often visited and sometimes lived, and Milford, where he
spent the later years of his life with his second wife Juliette.
8.1.3 Poverty
In 1887 Peirce spent part of his inheritance from his parents to buy 2,000 acres (8 km2 ) of rural land near Milford,
Pennsylvania, which never yielded an economic return.[42] There he had an 1854 farmhouse remodeled to his design.[43]
The Peirces named the property "Arisbe". There they lived with few interruptions for the rest of their lives,[44] Charles
writing prolifically, much of it unpublished to this day (see Works). Living beyond their means soon led to grave
financial and legal difficulties.[45] He spent much of his last two decades unable to afford heat in winter and sub-
sisting on old bread donated by the local baker. Unable to afford new stationery, he wrote on the verso side of old
manuscripts. An outstanding warrant for assault and unpaid debts led to his being a fugitive in New York City for a
while.[46] Several people, including his brother James Mills Peirce[47] and his neighbors, relatives of Gifford Pinchot,
settled his debts and paid his property taxes and mortgage.[48]
Peirce did some scientific and engineering consulting and wrote much for meager pay, mainly encyclopedic dictionary
entries, and reviews for The Nation (with whose editor, Wendell Phillips Garrison, he became friendly). He did
translations for the Smithsonian Institution, at its director Samuel Langley's instigation. Peirce also did substantial
mathematical calculations for Langley’s research on powered flight. Hoping to make money, Peirce tried inventing.[49]
He began but did not complete a number of books.[50] In 1888, President Grover Cleveland appointed him to the
Assay Commission.[51]
From 1890 on, he had a friend and admirer in Judge Francis C. Russell of Chicago,[52] who introduced Peirce to
editor Paul Carus and owner Edward C. Hegeler of the pioneering American philosophy journal The Monist, which
eventually published at least 14 articles by Peirce.[53] He wrote many texts in James Mark Baldwin's Dictionary of
Philosophy and Psychology (1901–5); half of those credited to him appear to have been written actually by Christine
Ladd-Franklin under his supervision.[54] He applied in 1902 to the newly formed Carnegie Institution for a grant
to write a systematic book of his life’s work. The application was doomed; his nemesis Newcomb served on the
Institution’s executive committee, and its President had been the President of Johns Hopkins at the time of Peirce’s
dismissal.[55]
The one who did the most to help Peirce in these desperate times was his old friend William James, who dedicated his
The Will to Believe (1897) to Peirce and arranged for Peirce to be paid to give two series of lectures at or near Harvard
(1898 and 1903).[56] Most important, each year from 1907 until James's death in 1910, James wrote to his friends in the
Boston intelligentsia to request financial aid for Peirce; the fund continued even after James died. Peirce reciprocated
by designating James's eldest son as his heir should Juliette predecease him.[57] It has been believed that this was also
why Peirce used "Santiago" ("St. James" in Spanish) as a middle name, but he appeared in print as early as 1890 as
Charles Santiago Peirce. (See Charles Santiago Sanders Peirce for discussion and references.)
Peirce died destitute in Milford, Pennsylvania, twenty years before his widow.
Peirce grew up in a home where the supremacy of the white Anglo-Saxon male was taken for granted, Irish immigrants
were considered inferior and Negro slavery was considered natural.[58]
34 CHAPTER 8. CHARLES SANDERS PEIRCE
Arisbe in 2011
Until the outbreak of the Civil War his father described himself as a secessionist, but once the war began he became a
Union partisan, supporting with donations the Sanitary Commission, the leading Northern war charity. No members of
the Peirce family volunteered or enlisted. Peirce shared his father's views and liked to use the syllogism
All Men are equal in their political rights; Negroes are Men; therefore, Negroes are equal in political rights to whites
to illustrate the unreliability of traditional forms of logic.[59] See: Peirce’s law#Other proofs of Peirce’s law
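The Peirce's law just linked, ((P → Q) → P) → P, is a tautology of classical two-valued logic: it is true under every assignment of truth values. A brute-force truth-table check (an illustrative Python sketch, not part of the source text):

```python
# Material implication: p -> q is false only when p is true and q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Peirce's law: ((P -> Q) -> P) -> P.
def peirces_law(p: bool, q: bool) -> bool:
    return implies(implies(implies(p, q), p), p)

# A tautology holds for all four truth-value assignments.
assert all(peirces_law(p, q) for p in (False, True) for q in (False, True))
```

Notably, the law is purely implicational yet fails in intuitionistic logic, which is why it serves as a marker of classical reasoning.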
8.2 Reception
Bertrand Russell (1959) wrote,[60] "Beyond doubt [...] he was one of the most original minds of the later nineteenth
century, and certainly the greatest American thinker ever." (Russell and Whitehead's Principia Mathematica,
published from 1910 to 1913, does not mention Peirce; Peirce’s work was not widely known until later.)[61] A. N.
Whitehead, while reading some of Peirce’s unpublished manuscripts soon after arriving at Harvard in 1924, was
struck by how Peirce had anticipated his own “process” thinking. (On Peirce and process metaphysics, see Lowe
1964.[25] ) Karl Popper viewed Peirce as “one of the greatest philosophers of all times”.[62] Yet Peirce’s achievements
were not immediately recognized. His imposing contemporaries William James and Josiah Royce[63] admired him,
and Cassius Jackson Keyser at Columbia and C. K. Ogden wrote about Peirce with respect, but to no immediate
effect.
The first scholar to give Peirce his considered professional attention was Royce’s student Morris Raphael Cohen,
the editor of an anthology of Peirce’s writings titled Chance, Love, and Logic (1923) and the author of the first
bibliography of Peirce’s scattered writings.[64] John Dewey studied under Peirce at Johns Hopkins[31] and, from 1916
onwards, Dewey’s writings repeatedly mention Peirce with deference. His 1938 Logic: The Theory of Inquiry is
much influenced by Peirce.[65] The publication of the first six volumes of the Collected Papers (1931–35), the most
important event to date in Peirce studies and one that Cohen made possible by raising the needed funds,[66] did not
prompt an outpouring of secondary studies. The editors of those volumes, Charles Hartshorne and Paul Weiss, did not
become Peirce specialists. Early landmarks of the secondary literature include the monographs by Buchler (1939),
Feibleman (1946), and Goudge (1950), the 1941 Ph.D. thesis by Arthur W. Burks (who went on to edit volumes 7
and 8), and the studies edited by Wiener and Young (1952). The Charles S. Peirce Society was founded in 1946.
Its Transactions, an academic quarterly specializing in Peirce, pragmatism, and American philosophy, has appeared
since 1965.
In 1949, while doing unrelated archival work, the historian of mathematics Carolyn Eisele (1902–2000) chanced
on an autograph letter by Peirce. So began her 40 years of research on Peirce the mathematician and scientist,
culminating in Eisele (1976, 1979, 1985). Beginning around 1960, the philosopher and historian of ideas Max Fisch
(1900–1995) emerged as an authority on Peirce; Fisch (1986)[67] includes many of his relevant articles, including a
wide-ranging survey (Fisch 1986: 422–48) of the impact of Peirce’s thought through 1983.
Peirce has gained a significant international following, marked by university research centers devoted to Peirce studies
and pragmatism in Brazil (CeneP/CIEP), Finland (HPRC, including Commens), Germany (Wirth’s group, Hoffman’s
and Otte’s group, and Deuser’s and Härle’s group[68] ), France (L'I.R.S.C.E.), Spain (GEP), and Italy (CSP). His
writings have been translated into several languages, including German, French, Finnish, Spanish, and Swedish.
Since 1950, there have been French, Italian, Spanish, British, and Brazilian Peirceans of note. For many years, the
North American philosophy department most devoted to Peirce was the University of Toronto's, thanks in good part
to the leadership of Thomas Goudge and David Savan. In recent years, U.S. Peirce scholars have clustered at Indiana
University - Purdue University Indianapolis, home of the Peirce Edition Project (PEP), and the Pennsylvania State
University.
Currently, considerable interest is being taken in Peirce's ideas by researchers wholly outside the
arena of academic philosophy. The interest comes from industry, business, technology, intelligence
organizations, and the military; and it has resulted in the existence of a substantial number of agencies,
institutes, businesses, and laboratories in which ongoing research into and development of Peircean
concepts are being vigorously undertaken.
—Robert Burch, 2001, updated 2010[19]
In recent years, Peirce's trichotomy of signs has been exploited by a growing number of practitioners for marketing
and design tasks.
8.3 Works
Peirce’s reputation rests largely on a number of academic papers published in American scientific and scholarly
journals such as Proceedings of the American Academy of Arts and Sciences, the Journal of Speculative Philosophy,
The Monist, Popular Science Monthly, the American Journal of Mathematics, Memoirs of the National Academy of
Sciences, The Nation, and others. See Articles by Peirce, published in his lifetime for an extensive list with links
to them online. The only full-length book (neither extract nor pamphlet) that Peirce authored and saw published
in his lifetime[69] was Photometric Researches (1878), a 181-page monograph on the applications of spectrographic
methods to astronomy. While at Johns Hopkins, he edited Studies in Logic (1883), containing chapters by himself
and his graduate students. Besides lectures during his years (1879–1884) as Lecturer in Logic at Johns Hopkins, he
gave at least nine series of lectures, many now published; see Lectures by Peirce.
Harvard University obtained from Peirce’s widow soon after his death the papers found in his study, but did not
microfilm them until 1964. Only after Richard Robin (1967)[70] catalogued this Nachlass did it become clear that
Peirce had left approximately 1650 unpublished manuscripts, totaling over 100,000 pages,[71] mostly still unpublished
except on microfilm. On the vicissitudes of Peirce’s papers, see Houser (1989).[72] Reportedly the papers remain in
unsatisfactory condition.[73]
The first published anthology of Peirce’s articles was the one-volume Chance, Love and Logic: Philosophical Essays,
edited by Morris Raphael Cohen, 1923, still in print. Other one-volume anthologies were published in 1940, 1957,
1958, 1972, 1994, and 2009, most still in print. The main posthumous editions[74] of Peirce’s works in their long
trek to light, often multi-volume, and some still in print, have included:
1931–58: Collected Papers of Charles Sanders Peirce (CP), 8 volumes, includes many published works, along with
a selection of previously unpublished work and a smattering of his correspondence. This long-time standard edition
drawn from Peirce’s work from the 1860s to 1913 remains the most comprehensive survey of his prolific output from
1893 to 1913. It is organized thematically, but texts (including lecture series) are often split up across volumes, while
texts from various stages in Peirce’s development are often combined, requiring frequent visits to editors’ notes.[75]
Edited (1–6) by Charles Hartshorne and Paul Weiss and (7–8) by Arthur Burks, in print and online.
1975–87: Charles Sanders Peirce: Contributions to The Nation, 4 volumes, includes Peirce’s more than 300 reviews
and articles published 1869–1908 in The Nation. Edited by Kenneth Laine Ketner and James Edward Cook, online.
1976: The New Elements of Mathematics by Charles S. Peirce, 4 volumes in 5, included many previously unpublished
Peirce manuscripts on mathematical subjects, along with Peirce’s important published mathematical articles. Edited
by Carolyn Eisele, back in print.
1977: Semiotic and Significs: The Correspondence between C. S. Peirce and Victoria Lady Welby (2nd edition 2001),
included Peirce's entire correspondence (1903–1912) with Victoria, Lady Welby. Peirce's other published
correspondence is largely limited to the 14 letters included in volume 8 of the Collected Papers, and the 20-odd
pre-1890 items included so far in the Writings. Edited by Charles S. Hardwick with James Cook, out of print.
1982–now: Writings of Charles S. Peirce, A Chronological Edition (W), Volumes 1–6 & 8, of a projected 30. The
limited coverage, and defective editing and organization, of the Collected Papers led Max Fisch and others in the
1970s to found the Peirce Edition Project (PEP), whose mission is to prepare a more complete critical chronological
edition. Only seven volumes have appeared to date, but they cover the period from 1859 to 1892, when Peirce carried
out much of his best-known work. W 8 was published in November 2010; work continues on W 7, 9, and 11. In
print and online.
1985: Historical Perspectives on Peirce’s Logic of Science: A History of Science, 2 volumes. Auspitz has said,[76] “The
extent of Peirce’s immersion in the science of his day is evident in his reviews in the Nation [...] and in his papers,
grant applications, and publishers’ prospectuses in the history and practice of science”, referring latterly to Historical
Perspectives. Edited by Carolyn Eisele, back in print.
1992: Reasoning and the Logic of Things collects in one place Peirce’s 1898 series of lectures invited by William
James. Edited by Kenneth Laine Ketner, with commentary by Hilary Putnam, in print.
1992–98: The Essential Peirce (EP), 2 volumes, is an important recent sampler of Peirce’s philosophical writings.
Edited (1) by Nathan Hauser and Christian Kloesel and (2) by PEP editors, in print.
1997: Pragmatism as a Principle and Method of Right Thinking collects Peirce's 1903 Harvard "Lectures on
Pragmatism" in a study edition, including drafts, of Peirce's lecture manuscripts, which had been previously published in
abridged form; the lectures now also appear in EP 2. Edited by Patricia Ann Turisi, in print.
2010: Philosophy of Mathematics: Selected Writings collects important writings by Peirce on the subject, many not
previously in print. Edited by Matthew E. Moore, in print.
8.4 Mathematics
Peirce’s most important work in pure mathematics was in logical and foundational areas. He also worked on linear
algebra, matrices, various geometries, topology and Listing numbers, Bell numbers, graphs, the four-color problem,
and the nature of continuity.
He worked on applied mathematics in economics, engineering, and map projections (such as the Peirce quincuncial
projection), and was especially active in probability and statistics.[77]
Discoveries
Peirce made a number of striking discoveries in formal logic and foundational mathematics, nearly all of which came
to be appreciated only long after he died:
In 1860[78] he suggested a cardinal arithmetic for infinite numbers, years before any work by Georg Cantor (who
completed his dissertation in 1867) and without access to Bernard Bolzano's 1851 (posthumous) Paradoxien des
Unendlichen.
The Peirce arrow ↓, the symbol for "(neither) ... nor ...", also called the Quine dagger.
In 1880–81[79] he showed how Boolean algebra could be carried out via a single binary operation, repeated as needed,
that suffices on its own (logical NOR), anticipating Henry M. Sheffer by 33 years. (See also De Morgan's laws.)
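Functional completeness here means that every truth function is expressible with NOR alone. A minimal sketch in Python (illustrative; the function names are this sketch's own, not Peirce's notation):

```python
# The Peirce arrow (logical NOR): true only when neither input is true.
def nor(a: bool, b: bool) -> bool:
    return not (a or b)

# The standard connectives, recovered from NOR alone:
def not_(a: bool) -> bool:
    return nor(a, a)                 # "neither a nor a" = not a

def or_(a: bool, b: bool) -> bool:
    return not_(nor(a, b))           # negate "neither a nor b"

def and_(a: bool, b: bool) -> bool:
    return nor(not_(a), not_(b))     # De Morgan: a and b = not (not a or not b)

# Exhaustive check against Python's built-in operators.
bools = (False, True)
assert all(not_(a) == (not a) for a in bools)
assert all(or_(a, b) == (a or b) for a in bools for b in bools)
assert all(and_(a, b) == (a and b) for a in bools for b in bools)
```

Sheffer's later, better-known result does the same with NAND, the dual operation.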
The Peirce quincuncial projection of a sphere keeps angles true except at several isolated points and results in less
distortion of area than in other projections.
In 1881[80] he set out the axiomatization of natural number arithmetic, a few years before Richard Dedekind and
Giuseppe Peano. In the same paper Peirce gave, years before Dedekind, the first purely cardinal definition of a finite
set in the sense now known as "Dedekind-finite", and implied by the same stroke an important formal definition of
an infinite set (Dedekind-infinite), as a set that can be put into a one-to-one correspondence with one of its proper
subsets.
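The Dedekind-infinite criterion can be made concrete (an illustrative sketch; the function name is this sketch's own): the successor map sends the natural numbers one-to-one onto the proper subset that omits 0.

```python
# Dedekind's criterion, which Peirce anticipated: a set is infinite iff it
# can be put into one-to-one correspondence with a proper subset of itself.
# For the naturals, f(n) = n + 1 is injective and its image, N \ {0},
# is a proper subset — so N is Dedekind-infinite.
def f(n: int) -> int:
    return n + 1

sample = range(1000)                      # a finite window onto N
images = [f(n) for n in sample]
assert len(set(images)) == len(images)    # injective on the window
assert 0 not in images                    # 0 is never hit: a proper subset
```

No such correspondence exists for a finite set, since removing even one element strictly shrinks it.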
In 1885[81] he distinguished between first-order and second-order quantification.[82][83] In the same paper he set out
what can be read as the first (primitive) axiomatic set theory, anticipating Zermelo by about two decades (Brady
2000,[84] pp. 132–3).
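The distinction can be stated in modern notation (a brief illustrative gloss, not Peirce's own symbolism):

```latex
% First-order quantification ranges over individuals only:
\forall x \,\exists y \; L(x, y)
% Second-order quantification ranges over predicates/relations as well:
\exists P \,\forall x \; \bigl(P(x) \lor \lnot P(x)\bigr)
```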
In 1886 he saw that Boolean calculations could be carried out via electrical switches,[10] anticipating Claude Shannon
by more than 50 years.
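The idea behind this anticipation of Shannon can be sketched in modern terms (an illustrative Python model, not Peirce's own formulation): treating a closed switch as true, a series connection computes AND and a parallel connection computes OR.

```python
# A switch is modeled as a boolean: True = closed (conducts), False = open.
def series(a: bool, b: bool) -> bool:
    # Current crosses two switches in series only if both are closed: AND.
    return a and b

def parallel(a: bool, b: bool) -> bool:
    # Current crosses a parallel pair if at least one switch is closed: OR.
    return a or b

# A small composite circuit: (s1 in series with s2), in parallel with s3.
def circuit(s1: bool, s2: bool, s3: bool) -> bool:
    return parallel(series(s1, s2), s3)

assert circuit(True, True, False)        # conducts through the series branch
assert circuit(False, False, True)       # conducts through s3 alone
assert not circuit(False, True, False)   # no complete path
```

Shannon's 1937 thesis later developed exactly this correspondence between switching circuits and Boolean algebra systematically.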
By the later 1890s[85] he was devising existential graphs, a diagrammatic notation for the predicate calculus. Based
on them are John F. Sowa's conceptual graphs and Sun-Joo Shin’s diagrammatic reasoning.
Peirce wrote drafts for an introductory textbook, with the working title The New Elements of Mathematics, that
presented mathematics from an original standpoint. Those drafts and many other of his previously unpublished
mathematical manuscripts finally appeared[77] in The New Elements of Mathematics by Charles S. Peirce (1976),
edited by mathematician Carolyn Eisele.
Nature of mathematics
Peirce agreed with Auguste Comte in regarding mathematics as more basic than philosophy and the special sciences
(of nature and mind). Peirce classified mathematics into three subareas: (1) mathematics of logic, (2) discrete series,
and (3) pseudo-continua (as he called them, including the real numbers) and continua. Influenced by his father
Benjamin, Peirce argued that mathematics studies purely hypothetical objects and is not just the science of quantity
but is more broadly the science which draws necessary conclusions; that mathematics aids logic, not vice versa; and
that logic itself is part of philosophy and is the science about drawing conclusions necessary and otherwise.[86]
Beginning with his first paper on the “Logic of Relatives” (1870), Peirce extended the theory of relations that Augustus
De Morgan had just recently awakened from its Cinderella slumbers. Much of the mathematics of relations now taken
for granted was “borrowed” from Peirce, not always with all due credit; on that and on how the young Bertrand Russell,
especially his Principles of Mathematics and Principia Mathematica, did not do Peirce justice, see Anellis (1995).[61]
In 1918 the logician C. I. Lewis wrote, “The contributions of C.S. Peirce to symbolic logic are more numerous and
varied than those of any other writer — at least in the nineteenth century.”[87] Beginning in 1940, Alfred Tarski and
his students rediscovered aspects of Peirce’s larger vision of relational logic, developing the perspective of relation
algebra.
Relational logic gained applications. In mathematics, it influenced the abstract analysis of E. H. Moore and the lattice
theory of Garrett Birkhoff. In computer science, the relational model for databases was developed with Peircean
ideas in work of Edgar F. Codd, who was a doctoral student[88] of Arthur W. Burks, a Peirce scholar. In economics,
relational logic was used by Frank P. Ramsey, John von Neumann, and Paul Samuelson to study preferences and
utility and by Kenneth J. Arrow in Social Choice and Individual Values, following Arrow’s association with Tarski at
City College of New York.
On Peirce and his contemporaries Ernst Schröder and Gottlob Frege, Hilary Putnam (1982)[82] documented that
Frege’s work on the logic of quantifiers had little influence on his contemporaries, although it was published four
years before the work of Peirce and his student Oscar Howard Mitchell. Putnam found that mathematicians and
logicians learned about the logic of quantifiers through the independent work of Peirce and Mitchell, particularly
through Peirce’s “On the Algebra of Logic: A Contribution to the Philosophy of Notation”[81] (1885), published
in the premier American mathematical journal of the day, and cited by Peano and Schröder, among others, who
ignored Frege. They also adopted and modified Peirce’s notations, typographical variants of those now used. Peirce
apparently was ignorant of Frege’s work, despite their overlapping achievements in logic, philosophy of language, and
the foundations of mathematics.
Peirce’s work on formal logic had admirers besides Ernst Schröder:
• Philosophical algebraist William Kingdon Clifford[89] and logician William Ernest Johnson, both British;
• The Polish school of logic and foundational mathematics, including Alfred Tarski;
• Arthur Prior, who praised and studied Peirce’s logical work in a 1964 paper[25] and in Formal Logic (saying on
page 4 that Peirce “perhaps had a keener eye for essentials than any other logician before or since.”).
A philosophy of logic, grounded in his categories and semiotic, can be extracted from Peirce’s writings and, along
with Peirce’s logical work more generally, is exposited and defended in Hilary Putnam (1982);[82] the Introduction in
Nathan Houser et al. (1997);[90] and Randall Dipert’s chapter in Cheryl Misak (2004).[91]
8.4.2 Continua
Continuity and synechism are central in Peirce’s philosophy: “I did not at first suppose that it was, as I gradually came
to find it, the master-Key of philosophy”.[92]
From a mathematical point of view, he embraced infinitesimals and worked long on the mathematics of continua.
He long held that the real numbers constitute a pseudo-continuum;[93] that a true continuum is the real subject matter
of analysis situs (topology); and that a true continuum of instants exceeds—and within any lapse of time has room
for—any Aleph number (any infinite multitude as he called it) of instants.[94]
In 1908 Peirce wrote that he found that a true continuum might have or lack such room. Jérôme Havenel (2008): “It
is on May 26, 1908, that Peirce finally gave up his idea that in every continuum there is room for whatever collection
of any multitude. From now on, there are different kinds of continua, which have different properties.”[95]
8.5 Philosophy
It is not sufficiently recognized that Peirce’s career was that of a scientist, not a philosopher; and that
during his lifetime he was known and valued chiefly as a scientist, only secondarily as a logician, and
scarcely at all as a philosopher. Even his work in philosophy and logic will not be understood until this
fact becomes a standing premise of Peircean studies.
—Max Fisch 1964, p. 486.[25]
Peirce was a working scientist for 30 years, and arguably was a professional philosopher only during the five years he
lectured at Johns Hopkins. He learned philosophy mainly by reading, each day, a few pages of Kant's Critique of Pure
Reason, in the original German, while a Harvard undergraduate. His writings bear on a wide array of disciplines,
including mathematics, logic, philosophy, statistics, astronomy,[25] metrology,[3] geodesy, experimental psychology,[4]
economics,[5] linguistics,[6] and the history and philosophy of science. This work has enjoyed renewed interest and
approval, a revival inspired not only by his anticipations of recent scientific developments but also by his demonstration
of how philosophy can be applied effectively to human problems.
Peirce’s philosophy includes (see below in related sections) a pervasive three-category system, belief that truth is
immutable and is both independent from actual opinion (fallibilism) and discoverable (no radical skepticism), logic
as formal semiotic on signs, on arguments, and on inquiry’s ways—including philosophical pragmatism (which he
founded), critical common-sensism, and scientific method—and, in metaphysics: Scholastic realism, e.g. John Duns
Scotus, belief in God, freedom, and at least an attenuated immortality, objective idealism, and belief in the reality of
continuity and of absolute chance, mechanical necessity, and creative love. In his work, fallibilism and pragmatism
may seem to work somewhat like skepticism and positivism, respectively, in others' work. However, for Peirce,
fallibilism is balanced by an anti-skepticism and is a basis for belief in the reality of absolute chance and of continuity,[100]
and pragmatism commits one to anti-nominalist belief in the reality of the general (CP 5.453–7).
For Peirce, First Philosophy, which he also called cenoscopy, is less basic than mathematics and more basic than the
special sciences (of nature and mind). It studies positive phenomena in general, phenomena available to any person at
any waking moment, and does not settle questions by resorting to special experiences.[101] He divided such philosophy
into (1) phenomenology (which he also called phaneroscopy or categorics), (2) normative sciences (esthetics, ethics,
and logic), and (3) metaphysics; his views on them are discussed in order below.
On May 14, 1867, the 27-year-old Peirce presented a paper entitled "On a New List of Categories" to the American
Academy of Arts and Sciences, which published it the following year. The paper outlined a theory of predication,
involving three universal categories that Peirce developed in response to reading Aristotle, Kant, and Hegel, categories
that Peirce applied throughout his work for the rest of his life.[19] Peirce scholars generally regard the “New List”
as foundational or breaking the ground for Peirce’s “architectonic”, his blueprint for a pragmatic philosophy. In the
categories one will discern, concentrated, the pattern that one finds formed by the three grades of clearness in "How
To Make Our Ideas Clear" (1878 paper foundational to pragmatism), and in numerous other trichotomies in his work.
“On a New List of Categories” is cast as a Kantian deduction; it is short but dense and difficult to summarize. The
following table is compiled from that and later works.[102] In 1893, Peirce restated most of it for a less advanced
audience.[103]
*Note: An interpretant is an interpretation (human or otherwise) in the sense of the product of an interpretive process.
Peirce did not write extensively in aesthetics and ethics,[110] but came by 1902 to hold that aesthetics, ethics, and logic,
in that order, comprise the normative sciences.[111] He characterized aesthetics as the study of the good (grasped as
the admirable), and thus of the ends governing all conduct and thought.[112]
Peirce regarded logic per se as a division of philosophy, as a normative science based on esthetics and ethics, as more
basic than metaphysics,[113] and as “the art of devising methods of research”.[114] More generally, as inference, “logic
is rooted in the social principle”, since inference depends on a standpoint that, in a sense, is unlimited.[115] Peirce
called (with no sense of deprecation) “mathematics of logic” much of the kind of thing which, in current research
and applications, is called simply “logic”. He was productive in both (philosophical) logic and logic’s mathematics,
which were connected deeply in his work and thought.
Peirce argued that logic is formal semiotic, the formal study of signs in the broadest sense, not only signs that are
artificial, linguistic, or symbolic, but also signs that are semblances or are indexical such as reactions. Peirce held that
“all this universe is perfused with signs, if it is not composed exclusively of signs”,[116] along with their representational
and inferential relations. He argued that, since all thought takes time, all thought is in signs[117] and sign processes
(“semiosis”) such as the inquiry process. He divided logic into: (1) speculative grammar, or stechiology, on how signs
can be meaningful and, in relation to that, what kinds of signs there are, how they combine, and how some embody
or incorporate others; (2) logical critic, or logic proper, on the modes of inference; and (3) speculative or universal
rhetoric, or methodeutic,[118] the philosophical theory of inquiry, including pragmatism.
Presuppositions of logic
In his “F.R.L.” [First Rule of Logic] (1899), Peirce states that the first, and “in one sense, the sole”, rule of reason
is that, to learn, one needs to desire to learn and desire it without resting satisfied with that which one is inclined to
think.[113] So, the first rule is, to wonder. Peirce proceeds to a critical theme in research practices and the shaping of
theories:
...there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy:
Do not block the way of inquiry.
Peirce adds that method and economy are best in research, but that no outright sin inheres in trying any theory, in
the sense that investigation via its trial adoption can proceed unimpeded and undiscouraged, and that "the one
unpardonable offence" is a philosophical barricade against truth's advance, an offense to which "metaphysicians in
all ages have shown themselves the most addicted". In many writings Peirce holds that logic precedes metaphysics
(ontological, religious, and physical).
Peirce goes on to list four common barriers to inquiry: (1) Assertion of absolute certainty; (2) maintaining that
something is absolutely unknowable; (3) maintaining that something is absolutely inexplicable because absolutely
basic or ultimate; (4) holding that perfect exactitude is possible, especially such as to quite preclude unusual and
anomalous phenomena. To refuse absolute theoretical certainty is the heart of fallibilism, which Peirce unfolds into
refusals to set up any of the listed barriers. Peirce elsewhere argues (1897) that logic’s presupposition of fallibilism
leads at length to the view that chance and continuity are very real (tychism and synechism).[100]
The First Rule of Logic pertains to the mind's presuppositions in undertaking reason and logic: presuppositions, for
instance, that truth and the real do not depend on your or my opinion of them but do depend on representational
relation and consist in the destined end in investigation taken far enough (see below). He describes such ideas as,
collectively, hopes which, in particular cases, one is unable seriously to doubt.[119]
Four incapacities
In three articles in 1868–69,[117][120][121] Peirce rejected mere verbal or hyperbolic doubt and first or ultimate
principles, and argued that we have (as he numbered them[120]):
1. No power of Introspection. All knowledge of the internal world comes by hypothetical reasoning from known
external facts.
2. No power of Intuition (cognition without logical determination by previous cognitions). No cognitive stage is
absolutely first in a process. All mental action has the form of inference.
3. No power of thinking without signs. A cognition must be interpreted in a subsequent cognition in order to be
a cognition at all.
4. No conception of the absolutely incognizable.
(The above sense of the term “intuition” is almost Kant’s, said Peirce. It differs from the current looser sense that
encompasses instinctive or anyway half-conscious inference.)
Peirce argued that those incapacities imply the reality of the general and of the continuous, the validity of the modes
of reasoning,[121] and the falsity of philosophical Cartesianism (see below).
Peirce rejected the conception (usually ascribed to Kant) of the unknowable thing-in-itself[120] and later said that to
“dismiss make-believes” is a prerequisite for pragmatism.[122]
Through wide-ranging studies over the decades, Peirce sought formal philosophical ways to articulate thought's
processes, and also to explain the workings of science. These inextricably entangled questions of a dynamics of inquiry
rooted in nature and nurture led him to develop his semiotic with very broadened conceptions of signs and inference,
and, as its culmination, a theory of inquiry for the task of saying 'how science works’ and devising research methods.
This would be logic by the medieval definition taught for centuries: art of arts, science of sciences, having the way
to the principles of all methods.[114] Influences radiate from points on parallel lines of inquiry in Aristotle's work,
in such loci as: the basic terminology of psychology in On the Soul; the founding description of sign relations in
On Interpretation; and the differentiation of inference into three modes that are commonly translated into English
as abduction, deduction, and induction, in the Prior Analytics, as well as inference by analogy (called paradeigma by
Aristotle), which Peirce regarded as involving the other three modes.
Peirce began writing on semiotic in the 1860s, around the time when he devised his system of three categories. He
called it both semiotic and semeiotic. Both are current in singular and plural. He based it on the conception of a triadic
sign relation, and defined semiosis as “action, or influence, which is, or involves, a cooperation of three subjects, such
as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between
pairs”.[123] As to signs in thought, Peirce emphasized the reverse:
To say, therefore, that thought cannot happen in an instant, but requires a time, is but another way
of saying that every thought must be interpreted in another, or that all thought is in signs.
—Peirce 1868.[117]
Peirce held that all thought is in signs, issuing in and from interpretation, where 'sign' is the word for the broadest
variety of conceivable semblances, diagrams, metaphors, symptoms, signals, designations, symbols, texts, even mental
concepts and ideas, all as determinations of a mind or quasi-mind, that which at least functions like a mind, as in the
work of crystals or bees[124] — the focus is on sign action in general rather than on psychology, linguistics, or social
studies (fields which he also pursued).
Inquiry is a kind of inference process, a manner of thinking and semiosis. Global divisions of ways for phenomena
to stand as signs, and the subsumption of inquiry and thinking within inference as a sign process, enable the study of
inquiry on semiotics’ three levels:
1. Conditions for meaningfulness. Study of significatory elements and combinations, their grammar.
2. Validity, conditions for true representation. Critique of arguments in their various separate modes.
3. Conditions for determining interpretations. Methodology of inquiry in its mutually interacting modes.
Peirce uses examples often from common experience, but defines and discusses such things as assertion and inter-
pretation in terms of philosophical logic. In a formal vein, Peirce said:
On the Definition of Logic. Logic is formal semiotic. A sign is something, A, which brings some-
thing, B, its interpretant sign, determined or created by it, into the same sort of correspondence (or a
lower implied sort) with something, C, its object, as that in which itself stands to C. This definition no
more involves any reference to human thought than does the definition of a line as the place within
which a particle lies during a lapse of time. It is from this definition that I deduce the principles of
logic by mathematical reasoning, and by mathematical reasoning that, I aver, will support criticism of
Weierstrassian severity, and that is perfectly evident. The word “formal” in the definition is also defined.
—Peirce, “Carnegie Application”, The New Elements of Mathematics v. 4, p. 54.
8.6.2 Signs
Main article: Semiotic elements and classes of signs (Peirce)
See also: Representation (arts) § Peirce and representation and Sign (semiotics) § Triadic signs
A list of noted writings by Peirce on signs and sign relations is at Semiotic elements and classes of signs (Peirce)#References
and further reading.
Sign relation
Peirce’s theory of signs is known as one of the most complex semiotic theories, owing to the generality of its claims.
Anything is a sign — not absolutely as itself, but instead in some relation or other. The sign relation is the key. It
defines three roles encompassing (1) the sign, (2) the sign’s subject matter, called its object, and (3) the sign’s meaning
or ramification as formed into a kind of effect called its interpretant (a further sign, for example a translation). It is
an irreducible triadic relation, according to Peirce. The roles are distinct even when the things that fill those roles are
not. The roles are but three; a sign of an object leads to one or more interpretants, and, as signs, they lead to further
interpretants.
Extension × intension = information. Two traditional approaches to sign relation, necessary though insufficient, are
the way of extension (a sign’s objects, also called breadth, denotation, or application) and the way of intension (the
objects’ characteristics, qualities, attributes referenced by the sign, also called depth, comprehension, significance, or
connotation). Peirce adds a third, the way of information, including change of information, to integrate the other
two approaches into a unified whole.[125] For example, because of the equation above, if a term’s total amount of
information stays the same, then the more that the term ‘intends’ or signifies about objects, the fewer are the objects
to which the term ‘extends’ or applies.
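Peirce’s equation here is qualitative, but the inverse relation it implies can be sketched with a toy calculation; the fixed total of 12 units of information below is purely hypothetical, not anything Peirce quantified.

```python
# Purely illustrative sketch: treating Peirce's qualitative equation
# (extension x intension = information) as if it were numeric.
# The fixed total of 12 "units" of information is hypothetical.

INFORMATION = 12  # a term's total information, held constant

for intension in (1, 2, 3, 4, 6, 12):
    extension = INFORMATION // intension
    print(f"intension={intension:2d} -> extension={extension:2d}")

# As the term signifies more about its objects (greater intension, or
# depth), it applies to fewer objects (lesser extension, or breadth).
```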
Determination. A sign depends on its object in such a way as to represent its object — the object enables and, in a
sense, determines the sign. A physically causal sense of this stands out when a sign consists in an indicative reaction.
The interpretant depends likewise on both the sign and the object — an object determines a sign to determine an
interpretant. But this determination is not a succession of dyadic events, like a row of toppling dominoes; sign
determination is triadic. For example, an interpretant does not merely represent something which represented an
object; instead an interpretant represents something as a sign representing the object. The object (be it a quality or
fact or law or even fictional) determines the sign to an interpretant through one’s collateral experience[126] with the
object, in which the object is found or from which it is recalled, as when a sign consists in a chance semblance of an
absent object. Peirce used the word “determine” not in a strictly deterministic sense, but in a sense of “specializes,”
bestimmt,[127] involving variable amount, like an influence.[128] Peirce came to define representation and interpretation
in terms of (triadic) determination.[129] The object determines the sign to determine another sign — the interpretant
— to be related to the object as the sign is related to the object, hence the interpretant, fulfilling its function as sign
of the object, determines a further interpretant sign. The process is logically structured to perpetuate itself, and is
definitive of sign, object, and interpretant in general.[128]
Semiotic elements
Peirce held there are exactly three basic elements in semiosis (sign action):
1. A sign (or representamen)[130] represents, in the broadest possible sense of “represents”. It is something inter-
pretable as saying something about something. It is not necessarily symbolic, linguistic, or artificial—a cloud
might be a sign of rain for instance, or ruins the sign of ancient civilization.[131] As Peirce sometimes put it (he
defined sign at least 76 times[128] ), the sign stands for the object to the interpretant. A sign represents its object
in some respect, which respect is the sign’s ground.[106]
2. An object (or semiotic object) is a subject matter of a sign and an interpretant. It can be anything thinkable,
a quality, an occurrence, a rule, etc., even fictional, such as Prince Hamlet.[132] All of those are special or
partial objects. The object most accurately is the universe of discourse to which the partial or special object
belongs.[132] For instance, a perturbation of Pluto’s orbit is a sign about Pluto but ultimately not only about
Pluto. An object either (i) is immediate to a sign and is the object as represented in the sign or (ii) is a dynamic
object, the object as it really is, on which the immediate object is founded “as on bedrock”.[133]
3. An interpretant (or interpretant sign) is a sign’s meaning or ramification as formed into a kind of idea or effect,
an interpretation, human or otherwise. An interpretant is a sign (a) of the object and (b) of the interpretant’s
“predecessor” (the interpreted sign) as a sign of the same object. An interpretant either (i) is immediate to a
sign and is a kind of quality or possibility such as a word’s usual meaning, or (ii) is a dynamic interpretant,
such as a state of agitation, or (iii) is a final or normal interpretant, a sum of the lessons which a sufficiently
considered sign would have as effects on practice, and with which an actual interpretant may at most coincide.
Some of the understanding needed by the mind depends on familiarity with the object. To know what a given sign
denotes, the mind needs some experience of that sign’s object, experience outside of, and collateral to, that sign or
sign system. In that context Peirce speaks of collateral experience, collateral observation, collateral acquaintance, all
in much the same terms.[126]
Classes of signs
Among Peirce’s many sign typologies, three stand out, interlocked. The first typology depends on the sign itself, the
second on how the sign stands for its denoted object, and the third on how the sign stands for its object to its inter-
pretant. Also, each of the three typologies is a three-way division, a trichotomy, via Peirce’s three phenomenological
categories: (1) quality of feeling, (2) reaction, resistance, and (3) representation, mediation.[134]
I. Qualisign, sinsign, legisign (also called tone, token, type, and also called potisign, actisign, famisign):[135] This typology classifies every sign according to the sign’s own phenomenological category—the qualisign is a quality, a possibility, a “First”; the sinsign is a reaction or resistance, a singular object, an actual event or fact, a “Second”; and the legisign is a habit, a rule, a representational relation, a “Third”.
II. Icon, index, symbol: This typology, the best known one, classifies every sign according to the category of the sign’s
way of denoting its object—the icon (also called semblance or likeness) by a quality of its own, the index by factual
connection to its object, and the symbol by a habit or rule for its interpretant.
III. Rheme, dicisign, argument (also called sumisign, dicisign, suadisign, also seme, pheme, delome,[135] and regarded as
very broadened versions of the traditional term, proposition, argument): This typology classifies every sign according
to the category which the interpretant attributes to the sign’s way of denoting its object—the rheme, for example a
term, is a sign interpreted to represent its object in respect of quality; the dicisign, for example a proposition, is a
sign interpreted to represent its object in respect of fact; and the argument is a sign interpreted to represent its object
in respect of habit or law. This is the culminating typology of the three, where the sign is understood as a structural
element of inference.
Every sign belongs to one class or another within (I) and within (II) and within (III). Thus each of the three ty-
pologies is a three-valued parameter for every sign. The three parameters are not independent of each other; many
co-classifications are absent, for reasons pertaining to the lack of either habit-taking or singular reaction in a quality,
and the lack of habit-taking in a singular reaction. The result is not 27 but instead ten classes of signs fully specified
at this level of analysis.
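The count above can be reproduced with a short sketch. It is a minimal illustration, assuming the standard reading of Peirce’s constraint that a sign’s category in each later trichotomy can be no higher than its category in the preceding one; the encoding and variable names are ours, not Peirce’s.

```python
from itertools import product

# The three trichotomies as three-valued parameters: 1 = First (quality),
# 2 = Second (reaction), 3 = Third (habit/representation).
TYPOLOGY_I   = {1: "qualisign", 2: "sinsign",  3: "legisign"}
TYPOLOGY_II  = {1: "icon",      2: "index",    3: "symbol"}
TYPOLOGY_III = {1: "rheme",     2: "dicisign", 3: "argument"}

def admissible(i, ii, iii):
    # Standard reading of the constraint: a quality supports neither
    # singular reaction nor habit-taking, and a singular reaction supports
    # no habit-taking, so each later category can be no higher than the last.
    return i >= ii >= iii

classes = [
    (TYPOLOGY_I[i], TYPOLOGY_II[ii], TYPOLOGY_III[iii])
    for i, ii, iii in product((1, 2, 3), repeat=3)
    if admissible(i, ii, iii)
]

print(len(classes))  # 10 of the 27 combinations survive
```

Running the enumeration yields exactly the ten classes, from the rhematic iconic qualisign up to the argument (a symbolic legisign).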
8.6.3 Modes of inference

Borrowing a brace of concepts from Aristotle, Peirce examined three basic modes of inference — abduction, deduction,
and induction — in his “critique of arguments” or “logic proper”. Peirce also called abduction “retroduction”, “pre-
sumption”, and, earliest of all, “hypothesis”. He characterized it as guessing and as inference to an explanatory
hypothesis. He sometimes expounded the modes of inference by transformations of the categorical syllogism Barbara (AAA), for example in “Deduction, Induction, and Hypothesis” (1878).[136] He does this by rearranging the rule (Barbara’s major premise), the case (Barbara’s minor premise), and the result (Barbara’s conclusion). In the 1878 paper’s illustration, the rule is “All the beans from this bag are white”, the case is “These beans are from this bag”, and the result is “These beans are white”: deduction infers the result from the rule and the case; induction infers the rule from the case and the result; and hypothesis (abduction) infers the case from the rule and the result.
In “A Theory of Probable Inference” (Studies in Logic, 1883), Peirce equated hypothetical inference with the induction
of characters of objects (as he had done in effect before[120]). Eventually dissatisfied, by 1900 he distinguished them
once and for all and also wrote that he now took the syllogistic forms and the doctrine of logical extension and
comprehension as being less basic than he had thought. In 1903 he presented the following logical form for abductive
inference:[137]

The surprising fact, C, is observed;
But if A were true, C would be a matter of course,
Hence, there is reason to suspect that A is true.
The logical form does not also cover induction, since induction neither depends on surprise nor proposes a new idea for
its conclusion. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts. “Deduction
proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that
something may be.”[138] Peirce did not remain quite convinced that one logical form covers all abduction.[139] In his
methodeutic or theory of inquiry (see below), he portrayed abduction as an economic initiative to further inference
and study, and portrayed all three modes as clarified by their coordination in essential roles in inquiry: hypothetical
explanation, deductive prediction, inductive testing.
8.6.4 Pragmatism
Main articles: Pragmaticism, Pragmatic maxim and Pragmatic theory of truth § Peirce
Peirce’s recipe for pragmatic thinking, which he called pragmatism and, later, pragmaticism, is recapitulated in several
versions of the so-called pragmatic maxim. Here is one of his more emphatic reiterations of it:
Consider what effects that might conceivably have practical bearings you conceive the objects of your
conception to have. Then, your conception of those effects is the whole of your conception of the object.
As a movement, pragmatism began in the early 1870s in discussions among Peirce, William James, and others in the
Metaphysical Club. James among others regarded some articles by Peirce such as "The Fixation of Belief" (1877)
and especially "How to Make Our Ideas Clear" (1878) as foundational to pragmatism.[140] Peirce (CP 5.11–12), like
James (Pragmatism: A New Name for Some Old Ways of Thinking, 1907), saw pragmatism as embodying familiar
attitudes, in philosophy and elsewhere, elaborated into a new deliberate method for fruitful thinking about problems.
Peirce differed from James and the early John Dewey, in some of their tangential enthusiasms, in being decidedly more
rationalistic and realistic, in several senses of those terms, throughout the preponderance of his own philosophical
moods.
In 1905 Peirce coined the new name pragmaticism “for the precise purpose of expressing the original definition”,
saying that “all went happily” with James’s and F.C.S. Schiller's variant uses of the old name “pragmatism” and that
he coined the new name because of the old name’s growing use in “literary journals, where it gets abused”. Yet
he cited as causes, in a 1906 manuscript, his differences with James and Schiller and, in a 1908 publication, his
differences with James as well as literary author Giovanni Papini's declaration of pragmatism’s indefinability. Peirce
in any case regarded his views that truth is immutable and infinity is real, as being opposed by the other pragmatists,
but he remained allied with them on other issues.[141]
Pragmatism begins with the idea that belief is that on which one is prepared to act. Peirce’s pragmatism is a method
of clarification of conceptions of objects. It equates any conception of an object to a conception of that object’s
effects to a general extent of the effects’ conceivable implications for informed practice. It is a method of sorting
out conceptual confusions occasioned, for example, by distinctions that make (sometimes needed) formal yet not
practical differences. He formulated both pragmatism and statistical principles as aspects of scientific logic, in his
“Illustrations of the Logic of Science” series of articles. In the second one, "How to Make Our Ideas Clear", Peirce
discussed three grades of clearness of conception:
1. Clearness of a conception familiar and readily used, even if unanalyzed and undeveloped.
2. Clearness of a conception in virtue of clearness of its parts, in virtue of which logicians called an idea “distinct”,
that is, clarified by analysis of just what makes it applicable. Elsewhere, echoing Kant, Peirce called a likewise
distinct definition “nominal” (CP 5.553).
3. Clearness in virtue of clearness of conceivable practical implications of the object’s conceived effects, such as
fosters fruitful reasoning, especially on difficult problems. Here he introduced that which he later called the
pragmatic maxim.
By way of example of how to clarify conceptions, he addressed conceptions about truth and the real as questions of
the presuppositions of reasoning in general. In clearness’s second grade (the “nominal” grade), he defined truth as
a sign’s correspondence to its object, and the real as the object of such correspondence, such that truth and the real
are independent of that which you or I or any actual, definite community of inquirers think. After that needful but
confined step, next in clearness’s third grade (the pragmatic, practice-oriented grade) he defined truth as that opinion
which would be reached, sooner or later but still inevitably, by research taken far enough, such that the real does
depend on that ideal final opinion—a dependence to which he appeals in theoretical arguments elsewhere, for instance
for the long-run validity of the rule of induction.[142] Peirce argued that even to argue against the independence and
discoverability of truth and the real is to presuppose that there is, about that very question under argument, a truth
with just such independence and discoverability.
Peirce said that a conception’s meaning consists in "all general modes of rational conduct" implied by “acceptance”
of the conception—that is, if one were to accept, first of all, the conception as true, then what could one conceive
to be consequent general modes of rational conduct by all who accept the conception as true?—the whole of such
consequent general modes is the whole meaning. His pragmatism does not equate a conception’s meaning, its in-
tellectual purport, with the conceived benefit or cost of the conception itself, like a meme (or, say, propaganda),
outside the perspective of its being true, nor, since a conception is general, is its meaning equated with any definite
set of actual consequences or upshots corroborating or undermining the conception or its worth. His pragmatism
also bears no resemblance to “vulgar” pragmatism, which misleadingly connotes a ruthless and Machiavellian search
for mercenary or political advantage. Instead the pragmatic maxim is the heart of his pragmatism as a method of
experimentational mental reflection[143] arriving at conceptions in terms of conceivable confirmatory and disconfir-
matory circumstances—a method hospitable to the formation of explanatory hypotheses, and conducive to the use
and improvement of verification.[144]
Peirce’s pragmatism, as method and theory of definitions and conceptual clearness, is part of his theory of inquiry,[145]
which he variously called speculative, general, formal or universal rhetoric or simply methodeutic.[118] He applied his
pragmatism as a method throughout his work.
Theory of inquiry
Rival methods of inquiry In The Fixation of Belief (1877), Peirce described inquiry in general not as the pursuit
of truth per se but as the struggle to move from irritating, inhibitory doubt born of surprise, disagreement, and the like,
and to reach a secure belief, belief being that on which one is prepared to act. That let Peirce frame scientific inquiry
as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal, quarrelsome,
or hyperbolic doubt, which he held to be fruitless. Peirce sketched four methods of settling opinion, ordered from
least to most successful:
1. The method of tenacity (policy of sticking to initial belief) — which brings comforts and decisiveness but
leads to trying to ignore contrary information and others’ views as if truth were intrinsically private, not public.
The method goes against the social impulse and easily falters since one may well notice when another’s opinion
seems as good as one’s own initial opinion. Its successes can be brilliant but tend to be transitory.
2. The method of authority — which overcomes disagreements but sometimes brutally. Its successes can be
majestic and long-lasting, but it cannot regulate people thoroughly enough to withstand doubts indefinitely,
especially when people learn about other societies present and past.
3. The method of the a priori — which promotes conformity less brutally but fosters opinions as something
like tastes, arising in conversation and comparisons of perspectives in terms of “what is agreeable to reason.”
Thereby it depends on fashion in paradigms and goes in circles over time. It is more intellectual and respectable
but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubt it.
4. The method of science — wherein inquiry supposes that the real is discoverable but independent of particular
opinion, such that, unlike in the other methods, inquiry can, by its own account, go wrong (fallibilism), not
only right, and thus purposely tests itself and criticizes, corrects, and improves itself.
Peirce held that, in practical affairs, slow and stumbling ratiocination is often dangerously inferior to instinct and
traditional sentiment, and that the scientific method is best suited to theoretical research,[147] which in turn should not
be trammeled by the other methods and practical ends; reason’s “first rule”[113] is that, in order to learn, one must desire
to learn and, as a corollary, must not block the way of inquiry. Scientific method excels the others finally by being
deliberately designed to arrive — eventually — at the most secure beliefs, upon which the most successful practices
can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt,
Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief’s integrity, seek as
truth the guidance of potential conduct correctly to its given goal, and wed themselves to the scientific method.
Scientific method Insofar as clarification by pragmatic reflection suits explanatory hypotheses and fosters predic-
tions and testing, pragmatism points beyond the usual duo of foundational alternatives: deduction from self-evident
truths, or rationalism; and induction from experiential phenomena, or empiricism.
Based on his critique of three modes of argument and different from either foundationalism or coherentism, Peirce’s
approach seeks to justify claims by a three-phase dynamic of inquiry: abductive genesis of an explanatory hypothesis, deductive explication of its testable consequences, and inductive evaluation of the hypothesis against experience.
Thereby, Peirce devised an approach to inquiry far more solid than the flatter image of inductive generalization
simpliciter, which is a mere re-labeling of phenomenological patterns. Peirce’s pragmatism was the first time the
scientific method was proposed as an epistemology for philosophical questions.
A theory that succeeds better than its rivals in predicting and controlling our world is said to be nearer the truth. This
is an operational notion of truth used by scientists.
Peirce extracted the pragmatic model or theory of inquiry from its raw materials in classical logic and refined it in
parallel with the early development of symbolic logic to address problems about the nature of scientific reasoning.
Abduction, deduction, and induction make incomplete sense in isolation from one another but comprise a cycle
understandable as a whole insofar as they collaborate toward the common end of inquiry. In the pragmatic way of
thinking about conceivable practical implications, every thing has a purpose, and, as possible, its purpose should first
be denoted. Abduction hypothesizes an explanation for deduction to clarify into implications to be tested so that
induction can evaluate the hypothesis, in the struggle to move from troublesome uncertainty to more secure belief.
No matter how traditional and needful it is to study the modes of inference in abstraction from one another, the
integrity of inquiry strongly limits the effective modularity of its principal components.
Peirce’s outline of the scientific method in §III–IV of “A Neglected Argument”[148] is summarized below (except as
otherwise noted). There he also reviewed plausibility and inductive precision (issues of critique of arguments).
1. Abductive (or retroductive) phase. Guessing, inference to explanatory hypotheses for selection of those best worth
trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the
hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in
one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content
of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way
for a surprising or complicated phenomenon. The modicum of success in our guesses far exceeds that of random
luck, and seems born of attunement to nature by developed or inherent instincts, especially insofar as best guesses
are optimally plausible and simple in the sense of the “facile and natural”, as by Galileo's natural light of reason and
as distinct from “logical simplicity”.[149] Abduction is the most fertile but least secure mode of inference. Its general
rationale is inductive: it succeeds often enough and it has no substitute in expediting us toward new truths.[150] In
1903, Peirce called pragmatism “the logic of abduction”.[151] Coordinative method leads from abducting a plausible
hypothesis to judging it for its testability[152] and for how its trial would economize inquiry itself.[153] The hypothesis,
being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves
to scientific tests. A simple but unlikely guess, if not costly to test for falsity, may belong first in line for testing. A
guess is intrinsically worth testing if it has plausibility or reasonably objective probability, while subjective likelihood,
though reasoned, can be misleadingly seductive. Guesses can be selected for trial strategically, for their caution (for
which Peirce gave as example the game of Twenty Questions), breadth, or incomplexity.[154] One can discover only that which would be revealed through sufficient experience anyway, and so the point is to expedite it; economy of research demands the leap, so to speak, of abduction and governs its art.[153]
2. Deductive phase. Two stages:
i. Explication. Not clearly premised, but a deductive analysis of the hypothesis so as to render its parts
as clear as possible.
ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of conse-
quences of the hypothesis as predictions about evidence to be found. Corollarial or, if needed, Theore-
matic.
3. Inductive phase. Evaluation of the hypothesis, inferring from observational or experimental tests of its deduced
consequences. The long-run validity of the rule of induction is deducible from the principle (presuppositional to
reasoning in general) that the real “is only the object of the final opinion to which sufficient investigation would
lead”;[142] in other words, anything excluding such a process would never be real. Induction involving the ongoing
accumulation of evidence follows “a method which, sufficiently persisted in,” will “diminish the error below any
predesignate degree.” Three stages:
i. Classification. Not clearly premised, but an inductive classing of objects of experience under general
ideas.
ii. Probation: direct Inductive Argumentation. Crude or Gradual in procedure. Crude Induction,
founded on experience in one mass (CP 2.759), presumes that future experience on a question will
not differ utterly from all past experience (CP 2.756). Gradual Induction makes a new estimate of the
proportion of truth in the hypothesis after each test, and is Qualitative or Quantitative. Qualitative Grad-
ual Induction depends on estimating the relative evident weights of the various qualities of the subject
class under investigation (CP 2.759; see also CP 7.114–20). Quantitative Gradual Induction depends on
how often, in a fair sample of instances of S, S is found actually accompanied by P that was predicted
for S (CP 2.758). It depends on measurements, or statistics, or counting.
iii. Sentential Induction. “...which, by Inductive reasonings, appraises the different Probations singly,
then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final
judgment on the whole result”.
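Quantitative Gradual Induction can be sketched as a toy simulation (all names and numbers here are hypothetical, not Peirce’s): a hypothesis predicts property P for every instance of class S, and the proportion of truth is re-estimated after each fairly sampled test.

```python
import random

random.seed(42)  # reproducible toy run

def predicted_P_holds(instance):
    # Hypothetical test of one instance of S: in this toy world,
    # the predicted property P actually holds of 80% of instances.
    return random.random() < 0.8

successes = 0
estimate = 0.0
for trial in range(1, 101):          # a fair sample of 100 instances of S
    if predicted_P_holds(trial):
        successes += 1
    estimate = successes / trial     # a new estimate after each test
print(f"estimated proportion of truth after 100 tests: {estimate:.2f}")
```

Persisting in such sampling diminishes the error of the estimate below any predesignate degree, which is the long-run warrant Peirce claims for the rule of induction.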
Against Cartesianism Peirce drew on the methodological implications of the four incapacities — no genuine
introspection, no intuition in the sense of non-inferential cognition, no thought but in signs, and no conception of the
absolutely incognizable — to attack philosophical Cartesianism, of which he said that:[120]
1. “It teaches that philosophy must begin in universal doubt” — when, instead, we start with preconceptions, “preju-
dices [...] which it does not occur to us can be questioned”, though we may find reason to question them later. “Let
us not pretend to doubt in philosophy what we do not doubt in our hearts.”
2. “It teaches that the ultimate test of certainty is...in the individual consciousness” — when, instead, in science
a theory stays on probation till agreement is reached, after which it has no actual doubters left. No lone individual can
reasonably hope to fulfill philosophy’s multi-generational dream. When “candid and disciplined minds” continue to
disagree on a theoretical issue, even the theory’s author should feel doubts about it.
3. It trusts to “a single thread of inference depending often upon inconspicuous premisses” — when, instead, philos-
ophy should, “like the successful sciences”, proceed only from tangible, scrutinizable premisses and trust not to any
one argument but instead to “the multitude and variety of its arguments” as forming, not a chain at least as weak as
its weakest link, but “a cable whose fibers”, however slender, “are sufficiently numerous and intimately connected”.
4. It renders many facts “absolutely inexplicable, unless to say that 'God makes them so' is to be regarded as an
explanation”[155] — when, instead, philosophy should avoid being “unidealistic”,[156] misbelieving that something real
can defy or evade all possible ideas, and supposing, inevitably, “some absolutely inexplicable, unanalyzable ultimate”,
which explanatory surmise explains nothing and so is inadmissible.
8.7 Philosophy: metaphysics

On modality (possibility, necessity, and the like), Peirce wrote:

I formerly defined the possible as that which in a given state of information (real or feigned) we do
not know not to be true. But this definition today seems to me only a twisted phrase which, by means of
two negatives, conceals an anacoluthon. We know in advance of experience that certain things are not
true, because we see they are impossible.
Peirce retained, as useful for some purposes, the definitions in terms of information states, but insisted that the
pragmaticist is committed to a strong modal realism by conceiving of objects in terms of predictive general conditional
propositions about how they would behave under certain circumstances.[158]
Psychical or religious metaphysics. Peirce believed in God, and characterized such belief as founded in an instinct
explorable in musing over the worlds of ideas, brute facts, and evolving habits — and it is a belief in God not as
an actual or existent being (in Peirce’s sense of those words), but all the same as a real being.[159] In "A Neglected
Argument for the Reality of God" (1908),[148] Peirce sketches, for God’s reality, an argument to a hypothesis of God as
the Necessary Being, a hypothesis which he describes in terms of how it would tend to develop and become compelling
in musement and inquiry by a normal person who is led, by the hypothesis, to consider as being purposed the features
of the worlds of ideas, brute facts, and evolving habits (for example scientific progress), such that the thought of such
purposefulness will “stand or fall with the hypothesis”; meanwhile, according to Peirce, the hypothesis, in supposing
an “infinitely incomprehensible” being, starts off at odds with its own nature as a purportedly true conception, and
so, no matter how much the hypothesis grows, it both (A) inevitably regards itself as partly true, partly vague, and
as continuing to define itself without limit, and (B) inevitably has God appearing likewise vague but growing, though
God as the Necessary Being is not vague or growing; but the hypothesis will hold it to be more false to say the
opposite, that God is purposeless. Peirce also argued that the will is free[160] and (see Synechism) that there is at least
an attenuated kind of immortality.
Physical metaphysics. Peirce held the view, which he called objective idealism, that “matter is effete mind, in-
veterate habits becoming physical laws”.[161] Peirce asserted the reality of (1) absolute chance (his tychist view), (2)
mechanical necessity (anancist view), and (3) that which he called the law of love (agapist view), echoing his categories
Firstness, Secondness, and Thirdness, respectively. He held that fortuitous variation (which he also called “sporting”),
mechanical necessity, and creative love are the three modes of evolution (modes called “tychasm”, “anancasm”, and
“agapasm”)[162] of the cosmos and its parts. He found his conception of agapasm embodied in Lamarckian evolution;
the overall idea in any case is that of evolution tending toward an end or goal, and it could also be the evolution of a
mind or a society; it is the kind of evolution which manifests workings of mind in some general sense. He said that
overall he was a synechist, holding with reality of continuity,[163] especially of space, time, and law.[164]
Peirce outlined two fields, “Cenoscopy” and “Science of Review”, both of which he called philosophy. Both included
philosophy about science. In 1903 he arranged them, from more to less theoretically basic, thus:[101]
50 CHAPTER 8. CHARLES SANDERS PEIRCE
1. Science of Discovery.
(a) Mathematics.
(b) Cenoscopy (philosophy as discussed earlier in this article—categorial, normative, metaphysical), as First
Philosophy, concerns positive phenomena in general, does not rely on findings from special sciences, and
includes the general study of inquiry and scientific method.
(c) Idioscopy, or the Special Sciences (of nature and mind).
2. Science of Review, as Ultimate Philosophy, arranges "...the results of discovery, beginning with digests, and
going on to endeavor to form a philosophy of science”. His examples included Humboldt's Cosmos, Comte's
Philosophie positive, and Spencer's Synthetic Philosophy.
Peirce placed, within Science of Review, the work and theory of classifying the sciences (including mathematics and
philosophy). His classifications, on which he worked for many years, draw on argument and wide knowledge, and are
of interest both as a map for navigating his philosophy and as an accomplished polymath’s survey of research in his
time.
8.10 Notes
[1] Hacking, Ian (1990), “A Universe of Chance”, The Taming of Chance, pp. 200–215, Cambridge U. Pr.
[2] Stigler, Stephen M. (1978). “Mathematical statistics in the early States”. Annals of Statistics 6: 239–265 [248]. doi:10.1214/aos/1176344123.
JSTOR 2958876. MR 483118.
[3] Crease, Robert P (2009). “Charles Sanders Peirce and the first absolute measurement standard: In his brilliant but troubled
life, Peirce was a pioneer in both metrology and philosophy”. Physics Today 62 (12): 39–44. doi:10.1063/1.3273015.
[4] Cadwallader, Thomas C. (1974). “Charles S. Peirce (1839-1914): The first American experimental psychologist”. Journal
of the History of the Behavioral Sciences 10 (3): 291. doi:10.1002/1520-6696(197407)10:3<291::AID-JHBS2300100304>3.0.CO;2-
N.
[5] Wible, James R. (2008), "The Economic Mind of Charles Sanders Peirce", Contemporary Pragmatism, v. 5, n. 2, Decem-
ber, pp. 39-67
[6] Nöth, Winfried (2000), "Charles Sanders Peirce, Pathfinder in Linguistics", Digital Encyclopedia of Charles S. Peirce.
[7] Joseph Brent (1998). Charles Sanders Peirce: A Life (2 ed.). Indiana University Press. p. 18. ISBN 9780253211613.
Peirce had strong, though unorthodox, religious convictions. Although he was a communicant in the Episcopal church for
most of his life, he expressed contempt for the theologies, metaphysics, and practices of established religions.
[8] Brent, Joseph (1998), Charles Sanders Peirce: A Life, 2nd edition, Bloomington and Indianapolis: Indiana University Press
(catalog page); also NetLibrary.
[9] “Peirce”, in the case of C.S. Peirce, always rhymes with the English-language word “terse” and so, in most dialects, is
pronounced exactly like the English-language word “purse”. See "Note on the Pronunciation of 'Peirce'", Peirce Project
Newsletter, v. 1, nos. 3/4, Dec. 1994.
[10] Peirce, C. S., “Letter, Peirce to A. Marquand", dated 1886, W 5:541–3, Google Preview. See Burks, Arthur W., “Review:
Charles S. Peirce, The new elements of mathematics", Bulletin of the American Mathematical Society v. 84, n. 5 (1978),
pp. 913–18, see 917. PDF Eprint. Also p. xliv in Houser, Nathan, Introduction, W 5.
[11] Weiss, Paul (1934), “Peirce, Charles Sanders” in the Dictionary of American Biography. Arisbe Eprint.
[12] “Peirce, Benjamin”, subheading “Charles Sanders”, in Webster’s Biographical Dictionary (1943/1960), Springfield, MA:
Merriam-Webster.
[14] “Peirce, Charles Sanders” (1898), The National Cyclopedia of American Biography, v. 8, p. 409.
[15] B:54–6
[16] B:363–4
[18] B:40
[19] Burch, Robert (2001, 2010), "Charles Sanders Peirce", Stanford Encyclopedia of Philosophy.
[20] B:139
[21] B:61-2
[22] B:69
[23] B:368
[24] B:79–81
[25] Moore, Edward C., and Robin, Richard S., eds., (1964), Studies in the Philosophy of Charles Sanders Peirce, Second Series,
Amherst: U. of Massachusetts Press. On Peirce the astronomer, see Lenzen’s chapter.
[26] B:367
[27] Fisch, Max (1983), “Peirce as Scientist, Mathematician, Historian, Logician, and Philosopher”, Studies in Logic (new
edition), see p. x.
[30] B:202
[33] B:xv
[34] B:98–101
[35] B:141
[36] B:148
[40] In 1885 (B:369); in 1890 and 1900 (B:215, 273); in 1891 (B:215–16); and in 1892 (B:151–2, 222).
[41] B:77
[43] B:13
[44] B:369–74
[45] B:191
[46] B:246
[47] B:242
[48] B:271
[49] B:249–55
[50] B:371
[51] B:189
[52] B:370
[53] B:205–6
[54] B:374–6
[55] B:279–89
[58] Brent, Joseph (1998). Charles Sanders Peirce, a life. Bloomington, Indiana: Indiana University Press. p. 34. ISBN
0-253-21161-1.
[59] Menand, Louis (2001). The Metaphysical Club. London: Flamingo. pp. 161–162. ISBN 0-00-712690-5.
[61] Anellis, Irving H. (1995), “Peirce Rustled, Russell Pierced: How Charles Peirce and Bertrand Russell Viewed Each Other’s
Work in Logic, and an Assessment of Russell’s Accuracy and Role in the Historiography of Logic”, Modern Logic 5, 270–
328. Arisbe Eprint.
[63] See Royce, Josiah, and Kernan, W. Fergus (1916), “Charles Sanders Peirce”, The Journal of Philosophy, Psychology, and
Scientific Method v. 13, pp. 701–9. Arisbe Eprint.
[66] B:8
[67] Fisch, Max (1986), Peirce, Semeiotic, and Pragmatism, Kenneth Laine Ketner and Christian J. W. Kloesel, eds., Bloom-
ington, Indiana: Indiana U. Pr.
[68] Theological Research Group in C.S. Peirce’s Philosophy (Hermann Deuser, Justus-Liebig-Universität Gießen; Wilfred
Härle, Philipps-Universität Marburg, Germany).
[70] Robin, Richard S. (1967), Annotated Catalogue of the Papers of Charles S. Peirce. Amherst MA: University of Mas-
sachusetts Press.
[71] “The manuscript material now (1997) comes to more than a hundred thousand pages. These contain many pages of no
philosophical interest, but the number of pages on philosophy certainly number much more than half of that. Also, a
significant but unknown number of manuscripts have been lost.” — Joseph Ransdell (1997), “Some Leading Ideas of
Peirce’s Semiotic”, end note 2, 1997 light revision of 1977 version in Semiotica 19:157–78.
[72] Houser, Nathan, “The Fortunes and Misfortunes of the Peirce Papers”, Fourth Congress of the IASS, Perpignan, France,
1989. Signs of Humanity, v. 3, 1992, pp. 1259–68. Eprint
[73] Memorandum to the President of Charles S. Peirce Society by Ahti-Veikko Pietarinen, U. of Helsinki, March 29, 2012.
Eprint.
[75] See 1987 review by B. Kuklick (of Peirce by Christopher Hookway), in British Journal for the Philosophy of Science v. 38,
n. 1, pp. 117–19. First page.
[76] Auspitz, Josiah Lee (1994), “The Wasp Leaves the Bottle: Charles Sanders Peirce”, The American Scholar, v. 63, n. 4,
autumn, 602–18. Arisbe Eprint.
[77] Burks, Arthur W., “Review: Charles S. Peirce, The new elements of mathematics", Bulletin of the American Mathematical
Society v. 84, n. 5 (1978), pp. 913–18 (PDF).
[78] Peirce (1860 MS), “Orders of Infinity”, News from the Peirce Edition Project, September 2010 (PDF), p. 6, with the
manuscript’s text. Also see logic historian Irving Anellis’s November 11, 2010 comment at peirce-l.
[79] Peirce (MS, winter of 1880–81), “A Boolean Algebra with One Constant”, CP 4.12–20, W 4:218-21. Google Preview.
See Roberts, Don D. (1973), The Existential Graphs of Charles S. Peirce, p. 131.
[80] Peirce (1881), “On the Logic of Number”, American Journal of Mathematics v. 4, pp. 85−95. Reprinted (CP 3.252–88),
(W 4:299–309). See Shields, Paul (1997), “Peirce’s Axiomatization of Arithmetic”, in Houser et al., eds., Studies in
the Logic of Charles S. Peirce.
[81] Peirce (1885), “On the Algebra of Logic: A Contribution to the Philosophy of Notation”, American Journal of Mathematics
7, two parts, first part published 1885, pp. 180–202 (see Houser in linked paragraph in “Introduction” in W 4). Presented,
National Academy of Sciences, Newport, RI, 14–17 October 1884 (see EP 1, Headnote 16). 1885 is the year usually given
for this work. Reprinted CP 3.359–403, W 5:162–90, EP 1:225–8, in part.
[82] Putnam, Hilary (1982), “Peirce the Logician”, Historia Mathematica 9, 290–301. Reprinted, pp. 252–60 in Putnam
(1990), Realism with a Human Face, Harvard. Excerpt with article’s last five pages.
[83] It was in Peirce’s 1885 “On the Algebra of Logic”. See Byrnes, John (1998), “Peirce’s First-Order Logic of 1885”, Trans-
actions of the Charles S. Peirce Society v. 34, n. 4, pp. 949-76.
[84] Brady, Geraldine (2000), From Peirce to Skolem: A Neglected Chapter in the History of Logic, North-Holland/Elsevier
Science BV, Amsterdam, Netherlands.
[85] See Peirce (1898), Lecture 3, “The Logic of Relatives” (not the 1897 Monist article), Reasoning and the Logic of Things,
pp. 146–64, see 151.
[86] Peirce (1898), “The Logic of Mathematics in Relation to Education” in Educational Review v. 15, pp. 209–16 (via Internet
Archive). Reprinted CP 3.553–62. See also his “The Simplest Mathematics” (1902 MS), CP 4.227–323.
[87] Lewis, Clarence Irving (1918), A Survey of Symbolic Logic, see ch. 1, §7 “Peirce”, pp. 79–106, see p. 79 (Internet Archive).
Note that Lewis’s bibliography lists works by Frege, tagged with asterisks as important.
[88] Avery, John (2003) Information theory and evolution, p. 167; also Mitchell, Melanie, "My Scientific Ancestry".
[89] Beil, Ralph G. and Ketner, Kenneth (2003), “Peirce, Clifford, and Quantum Theory”, International Journal of Theoretical
Physics v. 42, n. 9, pp. 1957-1972.
[90] Houser, Roberts, and Van Evra, eds. (1997), Studies in the Logic of Charles Sanders Peirce, Indiana U., Bloomington, IN.
[91] Misak, ed. (2004), The Cambridge Companion to Peirce, Cambridge U., UK.
[93] Peirce (1903 MS), CP 6.176: “But I now define a pseudo-continuum as that which modern writers on the theory of functions
call a continuum. But this is fully represented by [...] the totality of real values, rational and irrational [...].”
[94] Peirce (1902 MS) and Ransdell, Joseph, ed. (1998), “Analysis of the Methods of Mathematical Demonstration”, Memoir
4, Draft C, MS L75.90–102, see 99–100. (Once there, scroll down).
[95] See:
• Peirce (1908), “Some Amazing Mazes (Conclusion), Explanation of Curiosity the First”, The Monist, v. 18, n. 3,
pp. 416-64, see 463−4. Reprinted CP 4.594-642, see 642.
• Havenel, Jérôme (2008), “Peirce’s Clarifications on Continuity”, Transactions Winter 2008 pp. 68–133, see 119.
Abstract.
[96] Peirce condemned the use of “certain likelihoods” (EP 2:108–9) even more strongly than he criticized Bayesian methods.
Indeed Peirce used a bit of Bayesian inference in criticizing parapsychology (W 6:76).
[97] Miller, Richard W. (1975), “Propensity: Popper or Peirce?", British Journal for the Philosophy of Science (site), v. 26, n.
2, pp. 123–32. doi:10.1093/bjps/26.2.123. Eprint.
[98] Haack, Susan and Kolenda, Konstantin (1977), “Two Fallibilists in Search of the Truth”, Proceedings of the Aristotelian
Society, Supplementary Volumes, v. 51, pp. 63–104. JSTOR 4106816
[99] Peirce CS, Jastrow J. On Small Differences in Sensation. Memoirs of the National Academy of Sciences 1885;3:73-83.
[100] Peirce (1897) “Fallibilism, Continuity, and Evolution”, CP 1.141–75 (Eprint), placed by the CP editors directly after
“F.R.L.” (1899, CP 1.135–40).
[101] Peirce (1903), CP 1.180-202 Eprint and (1906) “The Basis of Pragmaticism”, EP 2:372–3, see "Philosophy" at CDPT.
[103] Peirce (1893), “The Categories” MS 403. Arisbe Eprint, edited by Joseph Ransdell, with information on the re-write, and
interleaved with the 1867 “New List” for comparison.
[104] “Minute Logic”, CP 2.87, c.1902 and A Letter to Lady Welby, CP 8.329, 1904. See relevant quotes under "Categories,
Cenopythagorean Categories" in Commens Dictionary of Peirce’s Terms (CDPT), Bergman & Paalova, eds., U. of Helsinki.
[106] The ground blackness is the pure abstraction of the quality black. Something black is something embodying blackness,
pointing us back to the abstraction. The quality black amounts to reference to its own pure abstraction, the ground black-
ness. The question is not merely of noun (the ground) versus adjective (the quality), but rather of whether we are considering
the black(ness) as abstracted away from application to an object, or instead as so applied (for instance to a stove). Yet note
that Peirce’s distinction here is not that between a property-general and a property-individual (a trope). See "On a New
List of Categories" (1867), in the section appearing in CP 1.551. Regarding the ground, cf. the Scholastic conception of
a relation’s foundation, Google limited preview Deely 1982, p. 61
[107] A quale in this sense is a such, just as a quality is a suchness. Cf. under “Use of Letters” in §3 of Peirce’s “Description of
a Notation for the Logic of Relatives”, Memoirs of the American Academy, v. 9, pp. 317–78 (1870), separately reprinted
(1870), from which see p. 6 via Google books, also reprinted in CP 3.63:
Now logical terms are of three grand classes. The first embraces those whose logical form involves only
the conception of quality, and which therefore represent a thing simply as “a —.” These discriminate objects
in the most rudimentary way, which does not involve any consciousness of discrimination. They regard an
object as it is in itself as such (quale); for example, as horse, tree, or man. These are absolute terms. (Peirce,
1870. But also see “Quale-Consciousness”, 1898, in CP 6.222–37.)
[110] "Charles S. Peirce on Esthetics and Ethics: A Bibliography" (PDF) by Kelly A. Parker in 1999.
[111] Peirce (1902 MS), Carnegie Application, edited by Joseph Ransdell, Memoir 2, see table.
[113] Peirce (1899 MS), “F.R.L.” [First Rule of Logic], CP 1.135–40, Eprint
[114] Peirce (1882), “Introductory Lecture on the Study of Logic” delivered September 1882, Johns Hopkins University Circulars,
v. 2, n. 19, pp. 11–12 (via Google), November 1882. Reprinted (EP 1:210–14; W 4:378–82; CP 7.59–76). The definition
of logic quoted by Peirce is by Peter of Spain.
[115] Peirce (1878), “The Doctrine of Chances”, Popular Science Monthly, v. 12, pp. 604–15 (CP 2.645–68, W 3:276–90, EP
1:142–54).
...death makes the number of our risks, the number of our inferences, finite, and so makes their mean result
uncertain. The very idea of probability and of reasoning rests on the assumption that this number is indefinitely
great. .... ...logicality inexorably requires that our interests shall not be limited. .... Logic is rooted in the social
principle.
[117] Peirce, (1868), “Questions concerning certain Faculties claimed for Man”, Journal of Speculative Philosophy v. 2, n. 2,
pp. 103−14. On thought in signs, see p. 112. Reprinted CP 5.213-63 (on thought in signs, see 253), W 2:193-211, EP
2:11-27. Arisbe Eprint.
[119] Peirce (1902), The Carnegie Institute Application, Memoir 10, MS L75.361-2, Arisbe Eprint.
[120] Peirce (1868), “Some Consequences of Four Incapacities”, Journal of Speculative Philosophy v. 2, n. 3, pp. 140−57.
Reprinted CP 5.264-317, W 2:211-42, EP 1:28-55. Arisbe Eprint.
[121] Peirce, “Grounds of Validity of the Laws of Logic: Further Consequences of Four Incapacities”, Journal of Speculative
Philosophy v. II, n. 4, pp. 193−208. Reprinted CP 5.318-357, W 2:242-272 (PEP Eprint), EP 1:56-82.
[122] Peirce (1905), “What Pragmatism Is”, The Monist, v. XV, n. 2, pp. 161-81, see 167. Reprinted CP 5.411-37, see 416.
Arisbe Eprint.
[125] Peirce (1867), “Upon Logical Comprehension and Extension” (CP 2.391–426), (W 2:70–86).
[126] See pp. 404–9 in “Pragmatism” in EP 2. Ten quotes on collateral experience from Peirce provided by Joseph Ransdell can
be viewed here at peirce-l’s Lyris archive. Note: Ransdell’s quotes from CP 8.178–9 are also in EP 2:493–4, which gives
their date as 1909; and his quote from CP 8.183 is also in EP 2:495–6, which gives its date as 1909.
[128] See "76 definitions of the sign by C.S.Peirce", collected by Robert Marty (U. of Perpignan, France).
[129] Peirce, A Letter to Lady Welby (1908), Semiotic and Significs, pp. 80–1:
I define a Sign as anything which is so determined by something else, called its Object, and so determines
an effect upon a person, which effect I call its Interpretant, that the latter is thereby mediately determined by
the former. My insertion of “upon a person” is a sop to Cerberus, because I despair of making my own broader
conception understood.
[130] “Representamen”, properly with the 'a' long and stressed (/rɛprəzɛnˈteɪmən/ rep-rə-zen-TAY-mən), was adopted (not coined)
by Peirce as his technical term for the sign as covered in his theory, in case a divergence should come to light between his
theoretical version and the popular senses of the word “sign”. He eventually stopped using “representamen”. See EP
2:272–3 and Semiotic and Significs p. 193, quotes in "Representamen" at CDPT.
[131] Eco, Umberto (1984). Semiotics and the Philosophy of Language. Bloomington & Indianapolis: Indiana University Press.
p. 15. ISBN 978-0-253-20398-4.
[132] Peirce (1909), A Letter to William James, EP 2:492-502. Fictional object, 498. Object as universe of discourse, 492. See
"Dynamical Object" at CDPT.
[134] Peirce (1903 MS), “Nomenclature and Divisions of Triadic Relations, as Far as They Are Determined”, under other titles
in Collected Papers (CP) v. 2, paragraphs 233–72, and reprinted under the original title in Essential Peirce (EP) v. 2, pp.
289–99. Also see image of MS 339 (August 7, 1904) supplied to peirce-l by Bernard Morand of the Institut Universitaire
de Technologie (France), Département Informatique.
[136] Popular Science Monthly, v. 13, pp. 470–82, see 472 or the book at Wikisource. CP 2.619–44, see 623.
• On correction of “A Theory of Probable Inference”, see quotes from “Minute Logic”, CP 2.102, c. 1902, and from
the Carnegie Application (L75), 1902, Historical Perspectives on Peirce’s Logic of Science v. 2, pp. 1031–1032.
• On new logical form for abduction, see quote from Harvard Lectures on Pragmatism, 1903, CP 5.188–189.
See also Santaella, Lucia (1997) “The Development of Peirce’s Three Types of Reasoning: Abduction, Deduction, and
Induction”, 6th Congress of the IASS. Eprint.
[139] A Letter to J. H. Kehler (dated 1911), The New Elements of Mathematics v. 3, pp. 203–4, see in "Retroduction" at CDPT.
[142] “That the rule of induction will hold good in the long run may be deduced from the principle that reality is only the object
of the final opinion to which sufficient investigation would lead”, in Peirce (1878 April), “The Probability of Induction”, p.
718 (via Internet Archive ) in Popular Science Monthly, v. 12, pp. 705–18. Reprinted in CP 2.669–93, W 3:290–305, EP
1:155–69, elsewhere.
[144] See CP 1.34 Eprint (in “The Spirit of Scholasticism”), where Peirce ascribed the success of modern science less to a novel
interest in verification than to the improvement of verification.
[145] See Joseph Ransdell's comments and his tabular list of titles of Peirce’s proposed list of memoirs in 1902 for his Carnegie
application, Eprint
[146] Peirce (1905), “Issues of Pragmaticism”, The Monist, v. XV, n. 4, pp. 481−99. Reprinted CP 5.438-63. Also important:
CP 5.497-525.
[147] Peirce, “Philosophy and the Conduct of Life”, Lecture 1 of the 1898 Cambridge (MA) Conferences Lectures, CP 1.616–48
in part and Reasoning and the Logic of Things, 105–22, reprinted in EP 2:27–41.
[148] Peirce (1908), "A Neglected Argument for the Reality of God", published in large part, Hibbert Journal v. 7, 90–112.
Reprinted with an unpublished part, CP 6.452–85, Selected Writings pp. 358–79, EP 2:434–50, Peirce on Signs 260–78.
[149] See also Nubiola, Jaime (2004), "Il Lume Naturale: Abduction and God", Semiotiche I/2, 91–102.
[150] Peirce (c. 1906), “PAP (Prolegomena to an Apology for Pragmatism)" (MS 293), The New Elements of Mathematics v. 4,
pp. 319–20, first quote under "Abduction" at CDPT.
[151] Peirce (1903), “Pragmatism – The Logic of Abduction”, CP 5.195–205, especially 196. Eprint.
[153] See MS L75.329–330, from Draft D of Memoir 27 of Peirce’s application to the Carnegie Institution:
Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not
troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics.
The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of
discovery. Consequently, the conduct of abduction, which is chiefly a question of heuretic and is the first
question of heuretic, is to be governed by economical considerations.
[154] Peirce, C. S., “On the Logic of Drawing Ancient History from Documents”, EP 2, see 107-9. On Twenty Questions, see
109:
Thus, twenty skillful hypotheses will ascertain what 200,000 stupid ones might fail to do.
[156] However, Peirce disagreed with Hegelian absolute idealism. See for example CP 8.131.
[157] Peirce (1868), “Nominalism versus Realism”, Journal of Speculative Philosophy v. 2, n. 1, pp. 57−61. Reprinted (CP
6.619–24), (W 2:144–53).
• Peirce (1897), “The Logic of Relatives”, The Monist v. VII, n. 2 pp. 161–217, see 206 (via Google). Reprinted CP
3.456–552.
• Peirce (1905), “Issues of Pragmaticism”, The Monist v. XV, n. 4, pp. 481–99, see 495–6 (via Google). Reprinted
(CP 5.438–63, see 453–7).
• Peirce (c. 1905), Letter to Signor Calderoni, CP 8.205–13, see 208.
• Lane, Robert (2007), “Peirce’s Modal Shift: From Set Theory to Pragmaticism”, Journal of the History of Philosophy,
v. 45, n. 4.
[159] Peirce in his 1906 “Answers to Questions concerning my Belief in God”, CP 6.495, Eprint, reprinted in part as “The
Concept of God” in Philosophical Writings of Peirce, J. Buchler, ed., 1940, pp. 375–8:
I will also take the liberty of substituting “reality” for “existence.” This is perhaps overscrupulosity; but I
myself always use exist in its strict philosophical sense of “react with the other like things in the environment.”
Of course, in that sense, it would be fetichism to say that God “exists.” The word “reality,” on the contrary,
is used in ordinary parlance in its correct philosophical sense. [....] I define the real as that which holds its
characters on such a tenure that it makes not the slightest difference what any man or men may have thought
them to be, or ever will have thought them to be, here using thought to include imagining, opining, and willing
(as long as forcible means are not used); but the real thing’s characters will remain absolutely untouched.
[160] See his “The Doctrine of Necessity Examined” (1892) and “Reply to the Necessitarians” (1893), to both of which editor
Paul Carus responded.
8.11 External links
[161] Peirce (1891), “The Architecture of Theories”, The Monist v. 1, pp. 161–76, see p. 170, via Internet Archive. Reprinted
(CP 6.7–34) and (EP 1:285–97, see p. 293).
[163] Peirce (1893), “Evolutionary Love”, The Monist v. 3, pp. 176–200. Reprinted CP 6.278–317, EP 1:352–72. Arisbe Eprint
[164] See p. 115 in Reasoning and the Logic of Things (Peirce’s 1898 lectures).
Peirce sites
• Arisbe: The Peirce Gateway, Joseph Ransdell, ed. Over 100 online writings by Peirce as of 11/24/10, with
annotations. 100s of online papers on Peirce. The peirce-l e-forum. Much else.
• Center for Applied Semiotics (CAS) (1998–2003), Donald Cunningham & Jean Umiker-Sebeok, Indiana U.
• Centro de Estudos Peirceanos (CeneP) and Centro Internacional de Estudos Peirceanos (CIEP), Lucia Santaella
et al., Pontifical Catholic U. of São Paulo (PUC-SP), Brazil. In Portuguese, some English.
• Centro StudiPeirce, Carlo Sini, Rossella Fabbrichesi, et al., U. of Milan, Italy. In Italian and English. Part of
Pragma.
• Charles S. Peirce Foundation. Co-sponsoring the 2014 Peirce International Centennial Congress (100th an-
niversary of Peirce’s death).
• Collegium for the Advanced Study of Picture Act and Embodiment: The Peirce Archive. Humboldt U, Berlin,
Germany. Cataloguing Peirce’s innumerable drawings & graphic materials. More info (Prof. Aud Sissel Hoel).
• Digital Encyclopedia of Charles S. Peirce, João Queiroz (now at UFJF) & Ricardo Gudwin (at Unicamp), eds.,
U. of Campinas, Brazil, in English. 84 authors listed, 51 papers online & more listed, as of 1/31/09.
• Grupo de Estudios Peirceanos (GEP) / Peirce Studies Group, Jaime Nubiola, ed., U. of Navarra, Spain. Big
study site, Peirce & others in Spanish & English, bibliography, more.
• Helsinki Peirce Research Center (HPRC), Ahti-Veikko Pietarinen et al., U. of Helsinki, with Commens: Vir-
tual Centre for Peirce Studies, Mats Bergman & Sami Paavola, eds. 23 papers by 11 authors as of 11/24/10.
—Commens Dictionary of Peirce’s Terms (CDPT): Peirce’s own definitions, often many per term across the
decades.
• Institute for Studies in Pragmaticism, Kenneth Laine Ketner, Clyde Hendrick, et al., Texas Tech U. Peirce’s
life and works.
• International Research Group on Abductive Inference, Uwe Wirth et al., eds., Goethe U., Frankfurt, Germany.
Uses frames. Click on link at bottom of its home page for English. Moved to U. of Gießen, Germany, home
page not in English but see Artikel section there.
• L'I.R.S.C.E. (1974–2003)—Institut de Recherche en Sémiotique, Communication et Éducation, Gérard De-
ledalle, Joëlle Réthoré, U. of Perpignan, France.
• Minute Semeiotic, Vinicius Romanini, U. of São Paulo, Brazil. English, Portuguese.
• Peirce at Signo: Theoretical Semiotics on the Web, Louis Hébert, director, supported by U. of Québec. Theory,
application, exercises of Peirce’s Semiotics and Esthetics. English, French.
• Peirce Edition Project (PEP), Indiana U.-Purdue U. Indianapolis (IUPUI). André De Tienne, Nathan Houser,
et al. Editors of the Writings of Charles S. Peirce (W) and The Essential Peirce (EP) v. 2. Many study aids such
as the Robin Catalog of Peirce’s manuscripts & letters and:
—Biographical introductions to EP 1–2 and W 1–6 & 8
—Most of W 2 readable online.
—PEP’s branch at Université du Québec à Montréal (UQÀM). Working on W 7: Peirce’s work on the Century
Dictionary. Definition of the week.
Chapter 9

Clone (algebra)
A clone on a set A is a set C of finitary operations on A such that:

• C contains all the projections π_k^n : A^n → A, defined by π_k^n(x_1, …, x_n) = x_k, and

• C is closed under composition: whenever f in C is m-ary and g_1, …, g_m in C are n-ary, the n-ary operation
(x_1, …, x_n) ↦ f(g_1(x_1, …, x_n), …, g_m(x_1, …, x_n)) also belongs to C.
Given an algebra in a signature σ, the set of operations on its carrier definable by a σ-term (the term functions) is a
clone. Conversely, every clone can be realized as the clone of term functions in a suitable algebra.
If A and B are algebras with the same carrier such that every basic function of A is a term function in B and vice versa,
then A and B have the same clone. For this reason, modern universal algebra often treats clones as a representation
of algebras which abstracts from their signature.
There is only one clone on the one-element set. The lattice of clones on a two-element set is countable, and has been
completely described by Emil Post (see Post’s lattice). Clones on larger sets do not admit a simple classification: there
are continuum many clones on a finite set of size at least three, and 2^(2^κ) clones on an infinite set of cardinality κ.
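For small cases the closure involved here can be computed directly. The sketch below (an illustration not taken from the source; the function name and the value-table encoding are my own choices) computes the binary part of the clone generated on {0, 1} by a given set of binary operations, closing the two projections under substitution. Generating with AND yields the three binary term functions x, y, and x AND y, while generating with NAND yields all 16 binary Boolean functions, the binary part of the top clone in Post’s lattice:

```python
def binary_part_of_clone(generators):
    """Binary term functions generated on {0,1} by binary operations.

    A binary function is encoded by its value table over the inputs
    (0,0), (0,1), (1,0), (1,1), so f applied to (a, b) is f[2*a + b].
    Start from the two projections and close under the substitution
    h(x, y) = f(g1(x, y), g2(x, y)) with f a generator."""
    proj1 = (0, 0, 1, 1)  # (x, y) -> x
    proj2 = (0, 1, 0, 1)  # (x, y) -> y
    clone = {proj1, proj2}
    changed = True
    while changed:
        new = {
            tuple(f[2 * g1[i] + g2[i]] for i in range(4))
            for f in generators for g1 in clone for g2 in clone
        }
        changed = not new <= clone
        clone |= new
    return clone

AND = (0, 0, 0, 1)
NAND = (1, 1, 1, 0)
print(len(binary_part_of_clone([AND])))   # prints 3
print(len(binary_part_of_clone([NAND])))  # prints 16
```

Because NAND is functionally complete, its clone is the clone of all operations; restricting attention to the binary part keeps the search space finite (there are only 16 binary functions on a two-element set).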
An abstract clone consists of a set C_n for each natural number n, elements π_{k,n} in C_n for each k ≤ n, and
superposition operations ∗ : C_m × (C_n)^m → C_n for all m and n, such that

• c ∗ (π_{1,n}, …, π_{n,n}) = c

• π_{k,m} ∗ (c_1, …, c_m) = c_k

• (c ∗ (d_1, …, d_m)) ∗ (e_1, …, e_n) = c ∗ (d_1 ∗ (e_1, …, e_n), …, d_m ∗ (e_1, …, e_n)).
Every algebraic theory determines an abstract clone, in which C_n is the set of terms in n variables, the π_{k,n} are
the variables, and ∗ is substitution; equivalent theories determine clones that are isomorphic. Conversely, every
abstract clone determines an algebraic theory with an n-ary operation for each element of C_n. This gives a bijective
correspondence between abstract clones and algebraic theories.
Every abstract clone C induces a Lawvere theory in which the morphisms m → n are elements of (C_m)^n. This
induces a bijective correspondence between Lawvere theories and abstract clones.
9.2 References
[1] Denecke, Klaus, “Menger algebras and clones of terms”, East-West Journal of Mathematics, v. 5, n. 2 (2003), pp. 179–193.
• Ralph N. McKenzie, George F. McNulty, and Walter F. Taylor, Algebras, Lattices, Varieties, Vol. 1, Wadsworth
& Brooks/Cole, Monterey, CA, 1987.
• F. William Lawvere: Functorial semantics of algebraic theories, Columbia University, 1963. Available online
at Reprints in Theory and Applications of Categories
Chapter 10
Computability theory
Computability theory, also called recursion theory, is a branch of mathematical logic, of computer science, and
of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees.
The basic questions addressed by recursion theory are “What does it mean for a function on the natural numbers
to be computable?" and “How can noncomputable functions be classified into a hierarchy based on their level of
noncomputability?". The answers to these questions have led to a rich theory that is still being actively researched.
The field has since grown to include the study of generalized computability and definability. The invention of the
central combinatorial object of recursion theory, namely the universal Turing machine, predates and prefigured the
invention of modern computers. Historically, the study of algorithmically undecidable sets and functions was motivated
by various problems in mathematics that turned out to be undecidable, such as the word problem for groups. The
theory also has several applications to other branches of mathematics that do not center on undecidability. The early
applications include the celebrated Higman embedding theorem, which provides
a link between recursion theory and group theory, results of Michael O. Rabin and Anatoly Maltsev on algorithmic
presentations of algebras, and the negative solution to Hilbert’s Tenth Problem. The more recent applications in-
clude algorithmic randomness, results of Soare et al. who applied recursion-theoretic methods to solve a problem in
algebraic geometry,[1] and the very recent work of Slaman et al. on normal numbers that solves a problem in analytic
number theory.
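The machine model mentioned above can be made concrete with a minimal one-tape simulator (a sketch, not any construction from the source; the transition-table encoding and the name run_tm are illustrative choices):

```python
def run_tm(delta, tape, state="q0", accept="halt", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    delta maps (state, symbol) -> (written_symbol, move, next_state),
    with move in {-1, +1}; "_" is the blank symbol. Returns the final
    tape contents with surrounding blanks stripped."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, "_")
        written, move, state = delta[(state, symbol)]
        tape[head] = written
        head += move
    else:
        raise RuntimeError("step limit reached (machine may not halt)")
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# A tiny machine computing the successor on unary numerals (n ones -> n+1):
succ = {
    ("q0", "1"): ("1", +1, "q0"),    # scan right across the ones
    ("q0", "_"): ("1", +1, "halt"),  # append one more 1, then halt
}
print(run_tm(succ, "111"))  # prints 1111
```

A universal machine is then a single transition table that takes an encoded pair (delta, tape) as its own input and reproduces the behavior of run_tm; the simulator above only conveys the underlying model.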
Recursion theory overlaps with proof theory, effective descriptive set theory, model theory, and abstract algebra. Ar-
guably, computational complexity theory is a child of recursion theory; both theories share the same technical tool,
namely the Turing Machine. Recursion theorists in mathematical logic often study the theory of relative computabil-
ity, reducibility notions and degree structures described in this article. This contrasts with the theory of subrecursive
hierarchies, formal methods and formal languages that is common in the study of computability theory in computer
science. There is considerable overlap in knowledge and methods between these two research communities; however,
no firm line can be drawn between them. For instance, parameterized complexity was invented jointly by the
complexity theorist Michael Fellows and the recursion theorist Rod Downey.
“Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general
recursiveness (or Turing’s computability). It seems to me that this importance is largely due to the fact
that with this concept one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e., one not depending on the formalism chosen.*"(Gödel 1946 in Davis
1965:84).[3]
With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be
effectively decided. Church (1936a, 1936b) and Turing (1936), inspired by techniques used by Gödel (1931) to prove
his incompleteness theorems, independently demonstrated that the Entscheidungsproblem is not effectively decidable.
This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical
propositions are true or false.
Many problems of mathematics have been shown to be undecidable after these initial examples were established.
In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be
effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s
that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in
a finitely presented group, will decide whether the element represented by the word is the identity element of the
group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich’s theorem, which implies
that Hilbert’s tenth problem has no effective solution; this problem asked whether there is an effective procedure
to decide whether a Diophantine equation over the integers has a solution in the integers. The list of undecidable
problems gives additional examples of problems with no computable solution.
The study of which mathematical constructions can be effectively performed is sometimes called recursive math-
ematics; the Handbook of Recursive Mathematics (Ershov et al. 1998) covers many of the known results in this
field.
Each of these overlapping areas of research draws ideas and results from the others, and most recursion theorists are familiar with the majority of them.
Recursion theory in mathematical logic has traditionally focused on relative computability, a generalization of Turing
computability defined using oracle Turing machines, introduced by Turing (1939). An oracle Turing machine is a
hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions
of an oracle, which is a particular set of natural numbers. The oracle machine may only ask questions of the form “Is
n in the oracle set?". Each question will be immediately answered correctly, even if the oracle set is not computable.
Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an
oracle cannot.
Informally, a set of natural numbers A is Turing reducible to a set B if there is an oracle machine that correctly tells
whether numbers are in A when run with B as the oracle set (in this case, the set A is also said to be (relatively)
computable from B and recursive in B). If a set A is Turing reducible to a set B and B is Turing reducible to A then
the sets are said to have the same Turing degree (also called degree of unsolvability). The Turing degree of a set gives
a precise measure of how uncomputable the set is.
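The idea of relative computability can be illustrated with a minimal Python sketch, in which the oracle for B is modelled as a callable; the particular sets and the reduction rule here are invented for the example:

```python
def reduce_to_oracle(oracle_b, n):
    """Decide n ∈ A for the toy set A = {n : 2n ∈ B}, using finitely
    many questions of the form "is k in B?" to the oracle for B.
    Nothing about B needs to be computable; only the questions asked
    and the post-processing of the answers do."""
    return oracle_b(2 * n)

# Example: take B = multiples of 3; then A = {n : 2n divisible by 3},
# which is again the set of multiples of 3.
oracle_b = lambda k: k % 3 == 0
```

Since the procedure asks only finitely many oracle questions per input and then halts, A is Turing reducible to B regardless of whether B itself is computable.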
The natural examples of sets that are not computable, including many different sets that encode variants of the halting
problem, have two properties in common:
1. They are recursively enumerable, and
2. Each can be translated into any other via a many-one reduction. That is, given such sets A and B, there is a
total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or
m-equivalent).
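The defining condition A = {x : f(x) ∈ B} can be checked directly for a toy pair of sets; the sets and the witnessing function below are invented for illustration:

```python
# Toy many-one reduction: A = odd numbers, B = even numbers.
# f is total and computable, and x ∈ A  iff  f(x) ∈ B.
def f(x):
    return x + 1

in_A = lambda x: x % 2 == 1
in_B = lambda x: x % 2 == 0
```

Every query about A is answered by a single pre-computed query about B, which is exactly what makes many-one reducibility stronger than Turing reducibility.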
Many-one reductions are “stronger” than Turing reductions: if a set A is many-one reducible to a set B, then A is
Turing reducible to B, but the converse does not always hold. Although the natural examples of noncomputable sets
are all many-one equivalent, it is possible to construct recursively enumerable sets A and B such that A is Turing
reducible to B but not many-one reducible to B. It can be shown that every recursively enumerable set is many-
one reducible to the halting problem, and thus the halting problem is the most complicated recursively enumerable
set with respect to many-one reducibility and with respect to Turing reducibility. Post (1944) asked whether every
recursively enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is
no recursively enumerable set with a Turing degree intermediate between those two.
As intermediate results, Post defined natural types of recursively enumerable sets like the simple, hypersimple and
hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem
with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other
reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of re-
cursively enumerable sets of intermediate Turing degree; this problem became known as Post’s problem. After ten
years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable
sets and the halting problem, but they failed to show that any of these degrees contains a recursively enumerable set.
Very soon after this, Friedberg and Muchnik independently solved Post’s problem by establishing the existence of
recursively enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing
degrees of the recursively enumerable sets which turned out to possess a very complicated and non-trivial structure.
There are uncountably many sets that are not recursively enumerable, and the investigation of the Turing degrees
of all sets is as central in recursion theory as the investigation of the recursively enumerable Turing degrees. Many
degrees with special properties were constructed: hyperimmune-free degrees where every function computable relative
to that degree is majorized by a (unrelativized) computable function; high degrees relative to which one can compute
a function f which dominates every computable function g in the sense that there is a constant c depending on g such
that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic
sets; and the degrees below the halting problem of limit-recursive sets.
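The domination condition used in the definition of high degrees is easy to realize for finitely many functions; the sketch below is a toy version, and the comment notes where the real definition is harder:

```python
def dominator(funcs):
    """Return f with g(x) < f(x) for every g in funcs and every x
    (so certainly for all x beyond any constant c). For finitely many
    total functions this is trivial; the definition of a high degree
    asks for an f dominating ALL computable g, which no computable f
    can achieve."""
    return lambda x: max(g(x) for g in funcs) + 1
```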
The study of arbitrary (not necessarily recursively enumerable) Turing degrees involves the study of the Turing jump.
Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle
Turing machines running with oracle A. The Turing jump of any set is always of higher Turing degree than the
original set, and a theorem of Friedberg shows that any set that computes the halting problem can be obtained as the
Turing jump of another set. Post’s theorem establishes a close relationship between the Turing jump operation and
the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability
in arithmetic.
Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set
of Turing degrees containing recursively enumerable sets. A deep theorem of Shore and Slaman (1999) states that the
function mapping a degree x to the degree of its Turing jump is definable in the partial order of the Turing degrees.
A recent survey by Ambos-Spies and Fejer (2006) gives an overview of this research and its historical progression.
An ongoing area of research in recursion theory studies reducibility relations other than Turing reducibility. Post
(1944) introduced several strong reducibilities, so named because they imply truth-table reducibility. A Turing ma-
chine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with.
Weak reducibilities are those where a reduction process may not terminate for all oracles; Turing reducibility is one
example.
The strong reducibilities include:
One-one reducibility A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f
such that each n is in A if and only if f(n) is in B.
Many-one reducibility This is essentially one-one reducibility without the constraint that f be injective. A is many-
one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only
if f(n) is in B.
Truth-table reducibility A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine
that computes a total function regardless of the oracle it is given. Because of compactness of Cantor space, this
is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the
oracle simultaneously, and then having seen their answers is able to produce an output without asking additional
questions regardless of the oracle’s answers to the initial queries. Many variants of truth-table reducibility have
also been studied.
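The truth-table condition, that all queries are fixed in advance and the output is a Boolean function of the answers, can be sketched in Python; the particular query list and membership rule are invented:

```python
def tt_reduce(oracle, n):
    """Truth-table-style reduction: the query list depends only on the
    input n and is produced before any answers are seen; the output is
    then a fixed Boolean function of the answers, so the procedure is
    total for every oracle."""
    queries = [n, n + 1, n + 2]            # fixed, input-only
    answers = [oracle(q) for q in queries]
    return sum(answers) % 2 == 1           # membership iff oddly many hits
```

Because the computation halts on every oracle, this kind of reduction is "strong" in the sense described above, in contrast with a Turing reduction that may diverge on some oracles.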
Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed
in the article Reduction (recursion theory).
The major research on strong reducibilities has been to compare their theories, both for the class of all recursively
enumerable sets as well as for the class of all subsets of the natural numbers. Furthermore, the relations between the
reducibilities have been studied. For example, it is known that every Turing degree is either a truth-table degree or is
the union of infinitely many truth-table degrees.
Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have
also been studied. The most well known are arithmetical reducibility and hyperarithmetical reducibility. These
reducibilities are closely connected to definability over the standard model of arithmetic.
The program of reverse mathematics asks which set-existence axioms are necessary to prove particular theorems
of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was
studied in detail by Stephen Simpson and others; Simpson (1999) gives a detailed discussion of the program. The
set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers
is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is recursive
comprehension, which states that the powerset of the naturals is closed under Turing reducibility.
10.3.5 Numberings
A numbering is an enumeration of functions; it has two parameters, e and x, and outputs the value of the e-th function
in the numbering on the input x. Numberings can be partial-recursive although some of their members are total recursive,
that is, computable functions. Admissible numberings are those into which all others can be translated. A Friedberg
numbering (named after its discoverer) is a one-one numbering of all partial-recursive functions; it is necessarily
not an admissible numbering. Later research dealt also with numberings of other classes like classes of recursively
enumerable sets. Goncharov discovered for example a class of recursively enumerable sets for which the numberings
fall into exactly two classes with respect to recursive isomorphisms.
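The two-parameter shape of a numbering can be sketched as a small Python evaluator; the function list is invented for illustration (a real numbering enumerates all partial-recursive functions):

```python
def nu(e, x):
    """A toy two-parameter numbering: nu(e, x) is the value of the e-th
    function in the enumeration on input x. Index 3 is partial
    (undefined at x = 0), while the other members are total, mirroring
    the remark that a partial-recursive numbering may still contain
    total members."""
    table = [
        lambda x: 0,          # constant zero (total)
        lambda x: x + 1,      # successor (total)
        lambda x: 2 * x,      # doubling (total)
        lambda x: x // x,     # partial: raises ZeroDivisionError at x = 0
    ]
    return table[e](x)
```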
For further explanation, see the section Post’s problem and the priority method in the article Turing
degree.
Post’s problem was solved with a method called the priority method; a proof using this method is called a priority
argument. This method is primarily used to construct recursively enumerable sets with particular properties. To use
this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known
as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties.
Each requirement is assigned a natural number representing the priority of the requirement; so 0 is assigned to
the most important priority, 1 to the second most important, and so on. The set is then constructed in stages, each
stage attempting to satisfy one or more of the requirements by either adding numbers to the set or banning numbers
from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause
another to become unsatisfied; the priority order is used to decide what to do in such an event.
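The bookkeeping of requirements, witnesses, and restraint can be conveyed by a drastically simplified Python sketch. This is only a flavour of the method: a real priority argument must handle requirements that act infinitely often and injure one another, which this toy version (where the opponent sets are given in advance) never does:

```python
def diagonalize(opponents):
    """Meet requirements R_e: "S differs from opponents[e]", processed
    in priority order (R_0 highest). Each requirement claims the least
    number not reserved by higher priorities as its witness, mimicking
    the restraint mechanism; no genuine injury occurs in this toy
    setting because the opponents are fixed in advance."""
    S, witnesses, restraint = set(), {}, 0
    for e, D in enumerate(opponents):
        x = restraint           # least unreserved number
        if x not in D:          # make S and D disagree at x
            S.add(x)
        witnesses[e] = x
        restraint = x + 1       # reserve x for R_e
    return S, witnesses
```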
Priority arguments have been employed to solve many problems in recursion theory, and have been classified into a
hierarchy based on their complexity (Soare 1987). Because complex priority arguments can be technical and difficult
to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results
proved with priority arguments can also be proved without them. For example, Kummer published a paper on a proof
for the existence of Friedberg numberings without using the priority method.
When Post defined the notion of a simple set as an r.e. set with an infinite complement not containing any infinite r.e.
set, he started to study the structure of the recursively enumerable sets under inclusion. This lattice became a well-
studied structure. Recursive sets can be defined in this structure by the basic result that a set is recursive if and only
if the set and its complement are both recursively enumerable. Infinite r.e. sets always have infinite recursive subsets;
on the other hand, simple sets exist but do not have a coinfinite recursive superset. Post (1944) already introduced
hypersimple and hyperhypersimple sets; later maximal sets were constructed which are r.e. sets such that every r.e.
superset is either a finite variant of the given maximal set or is co-finite. Post’s original motivation in the study of this
lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of
the recursive sets nor in the Turing degree of the halting problem. Post did not find such a property, and the solution
to his problem applied priority methods instead; Harrington and Soare (1991) eventually found such a property.
The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by
Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of
the research was independent, and the unity of the concept of randomness was not understood at the time). The
main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as
the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine
when an infinite sequence (equivalently, characteristic function of a subset of the natural numbers) is random or not by
invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent
study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area.
For that reason, a recent research conference in this area was held in January 2007[4] and a list of open problems[5]
is maintained by Joseph Miller and Andre Nies.
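The main idea, that the complexity of x is the length of its shortest program, can be conveyed with an invented toy description language standing in for a universal machine U (true Kolmogorov complexity is uncomputable, so any runnable example must be a simplification):

```python
def toy_complexity(x):
    """Shortest description of the string x in an invented two-program
    description language (a stand-in for a universal machine U):
    LIT<s> prints s literally; REP<c><n> prints character c n times.
    Highly compressible strings get short programs, while "random"
    strings need roughly their own length."""
    candidates = [1 + len(x)]                         # LIT tag + literal
    if x and len(set(x)) == 1:                        # one repeated char
        candidates.append(1 + 1 + len(str(len(x))))   # REP tag + char + count
    return min(candidates)
```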
The general scenario of inductive inference is the following: given a class S of computable functions, is there a learner (that is, a recursive functional) which outputs for any input of the form (f(0),f(1),...,f(n)) a hypothesis? A learner M learns a function f if almost all its hypotheses are the same index e of f with respect to a previously agreed on acceptable numbering of all computable functions; M learns S if M learns
every f in S. Basic results are that all recursively enumerable classes of functions are learnable while the class REC of
all computable functions is not learnable. Many related models have been considered and also the learning of classes
of recursively enumerable sets from positive data is a topic studied from Gold’s pioneering paper in 1967 onwards.
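The convergence behaviour of a learner ("almost all hypotheses are the same index") can be sketched with the classic identification-by-enumeration strategy; the finite hypothesis list below is invented for the example:

```python
def learn_by_enumeration(f, hypotheses, steps):
    """Identification by enumeration: after seeing f(0),...,f(n),
    conjecture the index of the first hypothesis consistent with the
    data so far. If f occurs in the hypothesis list, all but finitely
    many conjectures equal a fixed correct index."""
    guesses = []
    for n in range(steps):
        data = [f(i) for i in range(n + 1)]
        for e, h in enumerate(hypotheses):
            if all(h(i) == data[i] for i in range(n + 1)):
                guesses.append(e)
                break
    return guesses
```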
Many contemporary researchers, following a proposal of Soare (1996), use terms such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and
recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow[7] and
Simpson.[8] Some commentators argue that both the names recursion theory and computability theory fail to convey
the fact that most of the objects studied in recursion theory are not computable.[9]
Rogers (1967) has suggested that a key property of recursion theory is that its results and structures should be invariant
under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in
geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any
structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn
on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the
infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest
in recursion theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the
natural numbers.
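The claim that any two infinite computable sets are linked by a computable bijection can be made concrete: pair the n-th element of one set with the n-th element of the other. A small sketch (the example sets are invented):

```python
from itertools import count, islice

def nth_element(pred, n):
    """n-th element, in increasing order, of the decidable set {k : pred(k)}."""
    return next(islice((k for k in count() if pred(k)), n, None))

def bijection_pair(pred_a, pred_b, n):
    """n-th pair of the computable bijection matching the n-th element
    of A with the n-th element of B; total whenever both sets are
    infinite, since the search always terminates."""
    return nth_element(pred_a, n), nth_element(pred_b, n)
```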
• Computability logic
• Transcomputational problem
10.8 Notes
[1] Csima, Barbara F., et al. “Bounding prime models.” The Journal of Symbolic Logic 69.04 (2004): 1117-1142.
[2] Many of these foundational papers are collected in The Undecidable (1965) edited by Martin Davis.
[3] The full paper can also be found at pages 150ff (with commentary by Charles Parsons at 144ff) in Feferman et al. editors
1990 Kurt Gödel Volume II Publications 1938-1974, Oxford University Press, New York, ISBN 978-0-19-514721-6. Both
reprintings have the following footnote * added to the Davis volume by Gödel in 1965: “To be more precise: a function
of integers is computable in any formal system containing arithmetic if and only if it is computable in arithmetic, where a
function f is called computable in S if there is in S a computable term representing f” (p. 150).
[5] The homepage of Andre Nies has a list of open problems in Kolmogorov complexity
[6] Mathscinet searches for the titles like “computably enumerable” and “c.e.” show that many papers have been published
with this terminology as well as with the other one.
[7] Lance Fortnow, "Is it Recursive, Computable or Decidable?,” 2004-2-15, accessed 2006-1-9.
[8] Stephen G. Simpson, "What is computability theory?,” FOM email list, 1998-8-24, accessed 2006-1-9.
[9] Harvey Friedman, "Renaming recursion theory,” FOM email list, 1998-8-28, accessed 2006-1-9.
10.9 References
Undergraduate level texts • S. B. Cooper, 2004. Computability Theory, Chapman & Hall/CRC. ISBN 1-
58488-237-9
Advanced texts • S. Jain, D. Osherson, J. Royer and A. Sharma, 1999. Systems that learn, an introduction to
learning theory, second edition, Bradford Book. ISBN 0-262-10077-0
• S. Kleene, 1952. Introduction to Metamathematics, North-Holland (11th printing; 6th printing added
comments). ISBN 0-7204-2103-9
• M. Lerman, 1983. Degrees of unsolvability, Perspectives in Mathematical Logic, Springer-Verlag. ISBN
3-540-12155-2.
• Andre Nies, 2009. Computability and Randomness, Oxford University Press, 447 pages. ISBN 978-0-
19-923076-1.
• P. Odifreddi, 1989. Classical Recursion Theory, North-Holland. ISBN 0-444-87295-7
• P. Odifreddi, 1999. Classical Recursion Theory, Volume II, Elsevier. ISBN 0-444-50205-X
• H. Rogers, Jr., 1967. The Theory of Recursive Functions and Effective Computability, second edition
1987, MIT Press. ISBN 0-262-68052-1 (paperback), ISBN 0-07-053522-1
• G Sacks, 1990. Higher Recursion Theory, Springer-Verlag. ISBN 3-540-19305-7
• S. G. Simpson, 1999. Subsystems of Second Order Arithmetic, Springer-Verlag. ISBN 3-540-64882-8
• R. I. Soare, 1987. Recursively Enumerable Sets and Degrees, Perspectives in Mathematical Logic, Springer-
Verlag. ISBN 0-387-15299-7.
Survey papers and collections • K. Ambos-Spies and P. Fejer, 2006. "Degrees of Unsolvability.” Unpublished
preprint.
• H. Enderton, 1977. “Elements of Recursion Theory.” Handbook of Mathematical Logic, edited by J.
Barwise, North-Holland (1977), pp. 527–566. ISBN 0-7204-2285-X
• Y. L. Ershov, S. S. Goncharov, A. Nerode, and J. B. Remmel, 1998. Handbook of Recursive Mathematics,
North-Holland (1998). ISBN 0-7204-2285-X
• M. Fairtlough and S. Wainer, 1998. “Hierarchies of Provably Recursive Functions”. In Handbook of
Proof Theory, edited by S. Buss, Elsevier (1998).
• R. I. Soare, 1996. Computability and recursion, Bulletin of Symbolic Logic v. 2 pp. 284–321.
Research papers and collections • Burgin, M. and Klinger, A. “Experience, Generations, and Limits in Ma-
chine Learning.” Theoretical Computer Science v. 317, No. 1/3, 2004, pp. 71–91
• A. Church, 1936a. “An unsolvable problem of elementary number theory.” American Journal of Math-
ematics v. 58, pp. 345–363. Reprinted in “The Undecidable”, M. Davis ed., 1965.
• A. Church, 1936b. “A note on the Entscheidungsproblem.” Journal of Symbolic Logic v. 1, n. 1, and v.
3, n. 3. Reprinted in “The Undecidable”, M. Davis ed., 1965.
• M. Davis, ed., 1965. The Undecidable—Basic Papers on Undecidable Propositions, Unsolvable Problems
and Computable Functions, Raven, New York. Reprint, Dover, 2004. ISBN 0-486-43228-9
• R. M. Friedberg, 1958. “Three theorems on recursive enumeration: I. Decomposition, II. Maximal Set,
III. Enumeration without repetition.” The Journal of Symbolic Logic, v. 23, pp. 309–316.
• E. M. Gold, 1967. “Language identification in the limit” (PDF). Information and Control v. 10, pp. 447–474.
• L. Harrington and R. I. Soare, 1991. “Post’s Program and incomplete recursively enumerable sets”,
Proceedings of the National Academy of Sciences of the USA, volume 88, pages 10242–10246.
• C. G. Jockusch, Jr., 1968. “Semirecursive sets and positive reducibility.” Trans. Amer. Math. Soc. v. 137, pp. 420–436.
• S. C. Kleene and E. L. Post, 1954. “The upper semi-lattice of degrees of recursive unsolvability.”
Annals of Mathematics v. 2 n. 59, 379–407.
• Moore, C. (1996). “Recursion theory on the reals and continuous-time computation”. Theoretical
Computer Science. CiteSeerX: 10.1.1.6.5519.
• J. Myhill, 1956. “The lattice of recursively enumerable sets.” The Journal of Symbolic Logic, v.
21, pp. 215–220.
• Orponen, P. (1997). “A survey of continuous-time computation theory”. Advances in algorithms,
languages, and complexity. CiteSeerX: 10.1.1.53.1991.
• E. Post, 1944, “Recursively enumerable sets of positive integers and their decision problems”,
Bulletin of the American Mathematical Society, volume 50, pages 284–316.
• E. Post, 1947. “Recursive unsolvability of a problem of Thue.” Journal of Symbolic Logic v. 12,
pp. 1–11. Reprinted in “The Undecidable”, M. Davis ed., 1965.
• Shore, Richard A.; Slaman, Theodore A. (1999), “Defining the Turing jump” (PDF), Mathematical
Research Letters 6: 711–722, doi:10.4310/mrl.1999.v6.n6.a10, ISSN 1073-2780, MR 1739227
• T. Slaman and W. H. Woodin, 1986. "Definability in the Turing degrees.” Illinois J. Math. v. 30
n. 2, pp. 320–334.
• R. I. Soare, 1974. “Automorphisms of the lattice of recursively enumerable sets, Part I: Maximal
sets.” Annals of Mathematics, v. 100, pp. 80–120.
• A. Turing, 1937. “On computable numbers, with an application to the Entscheidungsproblem.”
Proceedings of the London Mathematical Society, ser. 2 v. 42, pp. 230–265. Corrections ibid.
v. 43 (1937) pp. 544–546. Reprinted in “The Undecidable”, M. Davis ed., 1965. PDF from
comlab.ox.ac.uk
• A. Turing, 1939. “Systems of logic based on ordinals.” Proceedings of the London Mathematical Society, ser. 2 v. 45, pp. 161–228. Reprinted in “The Undecidable”, M. Davis ed., 1965.
De Morgan’s laws
In propositional logic and boolean algebra, De Morgan’s laws[1][2][3] are a pair of transformation rules that are both
valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules
allow the expression of conjunctions and disjunctions purely in terms of each other via negation.
The rules can be expressed in English as:

The negation of a conjunction is the disjunction of the negations.
The negation of a disjunction is the conjunction of the negations.

or informally as:

“not (A and B)” is the same as “(not A) or (not B)”
“not (A or B)” is the same as “(not A) and (not B)”

The rules can be expressed in formal language with two propositions P and Q as:

¬(P ∧ Q) ⟺ ¬P ∨ ¬Q

and

¬(P ∨ Q) ⟺ ¬P ∧ ¬Q

where ¬ is the negation operator (NOT), ∧ is the conjunction operator (AND), ∨ is the disjunction operator (OR), and ⟺ is a metalogical symbol meaning “can be replaced in a logical proof with”.
Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs.
De Morgan’s laws are an example of a more general concept of mathematical duality.
The negation of conjunction rule may be written in sequent notation:
¬(P ∧ Q)
∴ ¬P ∨ ¬Q
and the negation of disjunction:
¬(P ∨ Q)
∴ ¬P ∧ ¬Q
and expressed as a truth-functional tautology or theorem of propositional logic:
(P ∧ Q) ≡ ¬(¬P ∨ ¬Q)
(P ∨ Q) ≡ ¬(¬P ∧ ¬Q)
This emphasizes the need to invert both the inputs and the output, as well as change the operator, when doing a
substitution.
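Because each side of the two tautologies is a Boolean function of only two variables, the equivalences can be verified mechanically by an exhaustive truth table; a minimal Python sketch:

```python
from itertools import product

def equivalent(f, g):
    """Check that two two-place Boolean formulas agree on all four
    assignments, i.e. that the stated equivalence is a tautology."""
    return all(f(p, q) == g(p, q)
               for p, q in product([False, True], repeat=2))

# ¬(P ∧ Q)  ≡  ¬P ∨ ¬Q
law1 = equivalent(lambda p, q: not (p and q),
                  lambda p, q: (not p) or (not q))
# ¬(P ∨ Q)  ≡  ¬P ∧ ¬Q
law2 = equivalent(lambda p, q: not (p or q),
                  lambda p, q: (not p) and (not q))
```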
In set theory and Boolean algebra, the laws are written formally as:

(A ∪ B)′ ≡ A′ ∩ B′
(A ∩ B)′ ≡ A′ ∪ B′

where:

• A′ is the complement of A (often written instead as an overline drawn above the terms to be negated)
• ∩ is the intersection operator (AND)
• ∪ is the union operator (OR)

The generalized form, for any collection of sets A_i indexed by a set I, is:

( ⋂_{i∈I} A_i )′ ≡ ⋃_{i∈I} (A_i)′
( ⋃_{i∈I} A_i )′ ≡ ⋂_{i∈I} (A_i)′
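For finite sets the set-theoretic laws can be checked directly; a small Python sketch with an invented universe U, complements taken relative to U:

```python
U = set(range(10))        # an invented finite universe
A, B = {1, 2, 3}, {3, 4, 5}
comp = lambda S: U - S    # complement relative to U

# complement of a union = intersection of the complements, and dually
demorgan_union = comp(A | B) == comp(A) & comp(B)
demorgan_inter = comp(A & B) == comp(A) | comp(B)
```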
11.1.3 Engineering
In electrical and computer engineering, De Morgan’s laws are commonly written as:

(A · B)′ ≡ A′ + B′

and

(A + B)′ ≡ A′ · B′,

where:

• · is a logical AND
• + is a logical OR
• the prime ′ stands for the logical NOT (conventionally drawn as an overbar over everything beneath it).
De Morgan’s laws commonly apply to text searching using the Boolean operators AND, OR, and NOT. By De Morgan’s laws, these two searches will return the same set of documents:

Search A: NOT (cars OR trucks)
Search B: (NOT cars) AND (NOT trucks)

The corpus of documents containing “cars” or “trucks” can be represented by four documents:

Document 1: contains only the word “cars”.
Document 2: contains only “trucks”.
Document 3: contains both “cars” and “trucks”.
Document 4: contains neither “cars” nor “trucks”.

To evaluate Search A, clearly the search “(cars OR trucks)” will hit on Documents 1, 2, and 3. So the negation of that search (which is Search A) will hit everything else, which is Document 4.
Evaluating Search B, the search “(NOT cars)” will hit on documents that do not contain “cars”, which is Documents 2
and 4. Similarly the search “(NOT trucks)” will hit on Documents 1 and 4. Applying the AND operator to these two
searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4.
A similar evaluation can be applied to show that the following two searches will both return Documents 1, 2, and 4:

Search C: NOT (cars AND trucks)
Search D: (NOT cars) OR (NOT trucks)
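The document-search evaluation can be run directly; a small sketch, assuming Search A is NOT (cars OR trucks), Search B is (NOT cars) AND (NOT trucks), and dually for the second pair, over an invented four-document corpus:

```python
# Invented corpus: which of the two words each document contains.
docs = {1: {"cars"}, 2: {"trucks"}, 3: {"cars", "trucks"}, 4: set()}
ids = set(docs)

def hits(pred):
    """Documents whose word set satisfies the predicate."""
    return {d for d, words in docs.items() if pred(words)}

search_a = ids - hits(lambda w: "cars" in w or "trucks" in w)    # NOT (cars OR trucks)
search_b = hits(lambda w: "cars" not in w) & hits(lambda w: "trucks" not in w)

search_c = ids - hits(lambda w: "cars" in w and "trucks" in w)   # NOT (cars AND trucks)
search_d = hits(lambda w: "cars" not in w) | hits(lambda w: "trucks" not in w)
```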
11.2 History
The laws are named after Augustus De Morgan (1806–1871),[6] who introduced a formal version of the laws to
classical propositional logic. De Morgan’s formulation was influenced by the algebraization of logic undertaken by George
Boole, which later cemented De Morgan’s claim to the finding. Nevertheless, a similar observation was made by Aristotle,
and was known to Greek and Medieval logicians.[7] For example, in the 14th century, William of Ockham wrote down
the words that would result by reading the laws out.[8] Jean Buridan, in his Summulae de Dialectica, also describes
rules of conversion that follow the lines of De Morgan’s laws.[9] Still, De Morgan is given credit for stating the laws
in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan’s laws can be
proved easily, and may even seem trivial.[10] Nonetheless, these laws are helpful in making valid inferences in proofs
and deductive arguments.
In the case of its application to a disjunction, consider the following claim: “it is false that either of A or B is true”,
which is written as:
¬(A ∨ B)
In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true,
which may be written directly as:
(¬A) ∧ (¬B)
If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in
English, this follows the logic that “since two things are both false, it is also false that either of them is true”.
Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that “not
A” and “not B” are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction
must thus be true, and the result is identical to the first claim.
The application of De Morgan’s theorem to a conjunction is very similar to its application to a disjunction both in
form and rationale. Consider the following claim: “it is false that A and B are both true”, which is written as:
¬(A ∧ B)
In order for this claim to be true, either or both of A and B must be false, for if they both were true, then the conjunction
of A and B would be true, making its negation false. Thus, at least one of A and B must be false (or
equivalently, at least one of “not A” and “not B” must be true). This may be written directly as:
(¬A) ∨ (¬B)
Presented in English, this follows the logic that “since it is false that two things are both true, at least one of them
must be false”.
Working in the opposite direction again, the second expression asserts that at least one of “not A” and “not B” must
be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their
conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression
is identical to the first claim.
11.5 Extensions
De Morgan’s Laws represented as a circuit with logic gates
In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always
find its dual), since in the presence of the identities governing negation, one may always introduce an operator that
is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely
the existence of negation normal forms: any formula is equivalent to another formula where negations only occur
applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications,
for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where
it is a prerequisite for finding the conjunctive normal form and disjunctive normal form of a formula. Computer
programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in
computations in elementary probability theory.
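The negation-normal-form transformation described above is driven entirely by De Morgan's laws plus double-negation elimination; a minimal sketch, with an invented tuple encoding of formulas:

```python
# Formulas as nested tuples: ('var', name), ('not', f),
# ('and', f, g), ('or', f, g).
def nnf(f, negate=False):
    """Convert a formula to negation normal form by pushing negations
    inward with De Morgan's laws and cancelling double negations, so
    that ¬ ends up applied only to atoms."""
    op = f[0]
    if op == 'var':
        return ('not', f) if negate else f
    if op == 'not':
        return nnf(f[1], not negate)                  # ¬¬g = g handled here
    left, right = nnf(f[1], negate), nnf(f[2], negate)
    if op == 'and':
        return ('or' if negate else 'and', left, right)
    return ('and' if negate else 'or', left, right)   # op == 'or'
```

For example, ¬(p ∧ ¬q) becomes ¬p ∨ q, the form with negations only on atoms.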
Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be
the operator Pd defined by

Pd(p, q, ...) ≡ ¬P(¬p, ¬q, ...).

This idea can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals:

∀x P(x) ≡ ¬(∃x ¬P(x))
∃x P(x) ≡ ¬(∀x ¬P(x))

To relate these quantifier dualities to the De Morgan laws, set up a model with some small number of elements in its domain, such as

D = {a, b, c}.

Then

∀x P(x) ≡ P(a) ∧ P(b) ∧ P(c)

and

∃x P(x) ≡ P(a) ∨ P(b) ∨ P(c).

But, using De Morgan’s laws, P(a) ∧ P(b) ∧ P(c) ≡ ¬(¬P(a) ∨ ¬P(b) ∨ ¬P(c)), verifying the quantifier dualities in the model.

The duality holds as well for the modal operators of necessity and possibility:

□p ≡ ¬♢¬p,
♢p ≡ ¬□¬p.
In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of
normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up
models using Kripke semantics.
11.7 References
[1] Copi and Cohen
[2] Hurley
[8] William of Ockham, Summa Logicae, part II, sections 32 and 33.
[9] Jean Buridan, Summula de Dialectica. Trans. Gyula Klima. New Haven: Yale University Press, 2001. See especially
Treatise 1, Chapter 7, Section 5. ISBN 0-300-08425-0
Formal language
This article is about a technical term in mathematics and computer science. For related studies about natural lan-
guages, see Formal semantics (linguistics). For formal modes of speech in natural languages, see Register (sociolin-
guistics).
In mathematics, computer science, and linguistics, a formal language is a set of strings of symbols that may be
constrained by rules that are specific to it. The words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas.
A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar,
also called its formation rule.
The field of formal language theory studies primarily the purely syntactical aspects of such languages—that is,
their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the
syntactic regularities of natural languages. In computer science, formal languages are used, among other things, as the
basis for defining the grammar of programming languages and formalized versions of subsets of natural languages
in which the words of the language represent concepts that are associated with particular meanings or semantics. In
computational complexity theory, decision problems are typically defined as formal languages, and complexity classes
are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In
logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems,
and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation
of formal languages in this way.
12.1 History
The first formal language is thought to be the one used by Gottlob Frege in his Begriffsschrift (1879), literally meaning
“concept writing”, which Frege described as a “formal language of pure thought.”[2]
Axel Thue's early semi-Thue system, which can be used for rewriting strings, was influential on formal grammars.
12.2 Words over an alphabet
An alphabet, in the context of formal languages, can be any set, although it often makes sense to use an alphabet in the
usual sense of the word, or more generally a character set such as ASCII or Unicode. Alphabets can also be infinite;
e.g. first-order logic is often expressed using an alphabet which, besides symbols such as ∧, ¬, ∀ and parentheses,
contains infinitely many elements x0 , x1 , x2 , … that play the role of variables. The elements of an alphabet are called
its letters.
A word over an alphabet can be any finite sequence, or string, of characters or letters, which sometimes may include
spaces, and which are separated by specified word-separation characters. The set of all words over an alphabet Σ is usually
denoted by Σ* (using the Kleene star). The length of a word is the number of characters or letters it is composed
of. For any alphabet there is only one word of length 0, the empty word, which is often denoted by e, ε or λ. By
concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original
words. The result of concatenating a word with the empty word is the original word.
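A minimal sketch of these notions in Python, enumerating a finite slice of the infinite set Σ* for Σ = {a, b} and checking the stated length properties (the variable names are illustrative):

```python
from itertools import product

# Enumerate all words over Σ = {"a", "b"} up to length 3: a finite
# slice of Σ*, which itself is infinite.
sigma = ["a", "b"]
words = [""]  # the empty word ε is the unique word of length 0
for n in range(1, 4):
    words += ["".join(p) for p in product(sigma, repeat=n)]

assert len(words) == 1 + 2 + 4 + 8  # one word of length 0, 2**n of length n

# Concatenation adds lengths; concatenating with ε returns the word itself.
v, w = "ab", "ba"
assert len(v + w) == len(v) + len(w)
assert v + "" == v
```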
In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as
formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor.
12.3 Definition
A formal language L over an alphabet Σ is a subset of Σ* , that is, a set of words over that alphabet. Sometimes
the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of
“well-formed expressions”.
In computer science and mathematics, which do not usually deal with natural languages, the adjective “formal” is
often omitted as redundant.
While formal language theory usually concerns itself with formal languages that are described by some syntactical
rules, the actual definition of the concept “formal language” is only as above: a (possibly infinite) set of finite-length
strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described
by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the
intuitive concept of a “language,” one described by syntactic rules. By an abuse of the definition, a particular formal
language is often thought of as being equipped with a formal grammar that describes it.
12.4 Examples
The following rules describe a formal language L over the alphabet Σ = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, = }:
• Every nonempty string that does not contain "+" or "=" and does not start with “0” is in L.
• The string “0” is in L.
• A string containing "=" is in L if and only if there is exactly one "=", and it separates two valid strings of L.
• A string containing "+" but not "=" is in L if and only if every "+" in the string separates two valid strings of L.
• No string is in L other than those implied by the previous rules.
Under these rules, the string “23+4=555” is in L, but the string "=234=+" is not. This formal language expresses
natural numbers, well-formed addition statements, and well-formed addition equalities, but it expresses only what they
look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication
that “0” means the number zero, or that "+" means addition.
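The rules above can be turned into a membership test; the following recursive Python sketch (the function name in_L is our own) mirrors each rule directly:

```python
def in_L(s: str) -> bool:
    """Membership test for the example language L over {0-9, +, =}."""
    if "=" in s:
        # Exactly one "=", and it separates two valid strings of L.
        left, _, right = s.partition("=")
        return "=" not in right and in_L(left) and in_L(right)
    if "+" in s:
        # Every "+" in the string separates two valid strings of L.
        return all(in_L(part) for part in s.split("+"))
    # A plain digit string: nonempty, no leading "0" unless it is "0" itself.
    return s.isdigit() and (s == "0" or not s.startswith("0"))

assert in_L("23+4=555")      # syntactically valid, even though 23+4 ≠ 555
assert not in_L("=234=+")
```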
12.4.1 Constructions
For finite languages one can explicitly enumerate all well-formed words. For example, we can describe a language L
as just L = {“a”, “b”, “ab”, “cba”}. The degenerate case of this construction is the empty language, which contains
no words at all (L = ∅).
However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are infinitely many words: “a”, “abb”,
“ababba”, “aaababbbbaab”, …. Therefore formal languages are typically infinite, and describing an infinite formal
language is not as simple as writing L = {"a”, “b”, “ab”, “cba"}. Here are some examples of formal languages:
• What is their expressive power? (Can formalism X describe every language that formalism Y can describe?
Can it describe other languages?)
• What is their recognizability? (How difficult is it to decide whether a given word belongs to a language described
by formalism X?)
• What is their comparability? (How difficult is it to decide whether two languages, one described in formalism
X and one in formalism Y, or in X again, are actually the same language?).
Surprisingly often, the answer to these decision problems is “it cannot be done at all”, or “it is extremely expen-
sive” (with a characterization of how expensive). Therefore, formal language theory is a major application area of
computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on
the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-
free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are
widely used in practical applications.
12.6 Operations on languages

Certain operations on languages are common, including the standard set operations such as union, intersection, and
complement, as well as element-wise string operations. Suppose L1 and L2 are languages over some common alphabet.
• The concatenation L1 L2 consists of all strings of the form vw where v is a string from L1 and w is a string from
L2 .
• The intersection L1 ∩ L2 of L1 and L2 consists of all strings which are contained in both languages
• The complement ¬L of a language with respect to a given alphabet consists of all strings over the alphabet that
are not in the language.
• The Kleene star: the language consisting of all words that are concatenations of 0 or more words in the original
language;
• Reversal: the language consisting of the reversals of all words of the original language;
• String homomorphism: the element-wise application of a string substitution to the words of the language.
Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed
under a particular operation when the operation, applied to languages in the class, always produces a language in the
same class again. For instance, the context-free languages are known to be closed under union, concatenation, and
intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract
families of languages studies the most common closure properties of language families in their own right.[3]
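For finite languages, modeled as Python sets of strings, these element-wise operations are easy to sketch (the example languages are arbitrary):

```python
# Element-wise language operations on finite languages, modeled as sets.
L1 = {"a", "ab"}
L2 = {"b", ""}

# Concatenation L1L2: all strings vw with v from L1 and w from L2.
concat = {v + w for v in L1 for w in L2}
assert concat == {"a", "ab", "abb"}

# Reversal: reverse every word of the language.
reversal = {w[::-1] for w in L1}
assert reversal == {"a", "ba"}

# Intersection is the ordinary set operation.
assert L1 & {"ab", "b"} == {"ab"}
```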
12.7 Applications
A compiler usually has two distinct components. A lexical analyzer, generated by a tool like lex, identifies the tokens
of the programming language grammar, e.g. identifiers or keywords, which are themselves expressed in a simpler
formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, usually
generated by a parser generator like yacc, attempts to decide if the source program is valid, that is, if it belongs to the
programming language for which the compiler was built. Of course, compilers do more than just parse the source
code—they usually translate it into some executable format. Because of this, a parser usually outputs more than a
yes/no answer, typically an abstract syntax tree, which is used by subsequent stages of the compiler to eventually
generate an executable containing machine code that runs directly on the hardware, or some intermediate code that
requires a virtual machine to execute.
12.8 Formal theories, systems, and proofs
This diagram shows the syntactic divisions within a formal system. Strings of symbols may be broadly divided into nonsense and
well-formed formulas. The set of well-formed formulas is divided into theorems and non-theorems.
A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as propositions)
each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last
sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be
interpreted as true propositions.
Main articles: Formal semantics (logic), Interpretation (logic) and Model theory
Formal languages are entirely syntactic in nature but may be given semantics that give meaning to the elements of the
language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language,
and an interpretation assigns a meaning to each of the formulas—usually, a truth value.
The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often
done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as mathematical
structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived
from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes
true.
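A toy illustration, assuming a tuple-based representation of formulas that is our own invention: an interpretation maps variables to truth values, and fixed compositional rules determine the truth value of a formula.

```python
# A minimal interpretation for a tiny propositional language: formulas
# are nested tuples, and an interpretation maps variables to truth values.
def evaluate(formula, interpretation):
    if isinstance(formula, str):          # a propositional variable
        return interpretation[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], interpretation)
    if op == "and":
        return evaluate(args[0], interpretation) and evaluate(args[1], interpretation)
    if op == "or":
        return evaluate(args[0], interpretation) or evaluate(args[1], interpretation)
    raise ValueError(f"unknown connective: {op}")

# ¬p ∨ q is true under {p: True, q: True}, so this interpretation
# is a model of the formula.
assert evaluate(("or", ("not", "p"), "q"), {"p": True, "q": True}) is True
```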
12.9 References
[2] Martin Davis (1995). “Influences of Mathematical Logic on Computer Science”. In Rolf Herken. The universal Turing
machine: a half-century survey. Springer. p. 290. ISBN 978-3-211-82637-9.
[3] Hopcroft & Ullman (1979), Chapter 11: Closure properties of families of languages.
• Grzegorz Rozenberg, Arto Salomaa, Handbook of Formal Languages: Volume I-III, Springer, 1997, ISBN
3-540-61486-9.
• Patrick Suppes, Introduction to Logic, D. Van Nostrand, 1957, ISBN 0-442-08072-7.
• Language at PlanetMath.org.
• University of Maryland, Formal Language Definitions
• James Power, “Notes on Formal Language Theory and Parsing”, 29 November 2002.
• Drafts of some chapters in the “Handbook of Formal Language Theory”, Vol. 1-3, G. Rozenberg and A.
Salomaa (eds.), Springer Verlag, (1997):
• Alexandru Mateescu and Arto Salomaa, “Preface” in Vol.1, pp. v-viii, and “Formal Languages: An
Introduction and a Synopsis”, Chapter 1 in Vol. 1, pp.1-39
• Sheng Yu, “Regular Languages”, Chapter 2 in Vol. 1
• Jean-Michel Autebert, Jean Berstel, Luc Boasson, “Context-Free Languages and Push-Down Automata”,
Chapter 3 in Vol. 1
• Christian Choffrut and Juhani Karhumäki, “Combinatorics of Words”, Chapter 6 in Vol. 1
• Tero Harju and Juhani Karhumäki, “Morphisms”, Chapter 7 in Vol. 1, pp. 439 - 510
• Jean-Eric Pin, “Syntactic semigroups”, Chapter 10 in Vol. 1, pp. 679-746
• M. Crochemore and C. Hancart, “Automata for matching patterns”, Chapter 9 in Vol. 2
• Dora Giammarresi, Antonio Restivo, “Two-dimensional Languages”, Chapter 4 in Vol. 3, pp. 215 - 267
Chapter 13
Functional completeness
In logic, a functionally complete set of logical connectives or Boolean operators is one which can be used to express
all possible truth tables by combining members of the set into a Boolean expression.[1][2] A well-known complete set
of connectives is { AND, NOT }, consisting of binary conjunction and negation. The singleton sets { NAND } and
{ NOR } are also functionally complete.
In the context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate.[3]
From the point of view of digital electronics, functional completeness means that every possible logic gate can be
realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from
either only binary NAND gates, or only binary NOR gates.
For example, in the frequently used set {¬, ∧, ∨, →, ↔}, some connectives are definable in terms of the others:

A → B := ¬A ∨ B
A ↔ B := (A → B) ∧ (B → A).

Likewise, ∨ is definable from ¬ and ∧, or from → alone:

A ∨ B := ¬(¬A ∧ ¬B).
A ∨ B := (A → B) → B.

No further simplifications are possible. Hence {¬, ∧}, {¬, ∨}, and {¬, →} are each minimal functionally complete
subsets of {¬, ∧, ∨, →, ↔} .
13.3 Characterization of functional completeness
Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of
the following sets of connectives:
• The monotonic connectives; changing the truth value of any connected variables from F to T without changing
any from T to F never makes these connectives change their return value from T to F, e.g. ∨ , ∧ , ⊤ , ⊥ .
• The affine connectives, such that each connected variable either always or never affects the truth value these
connectives return, e.g. ¬ , ⊤ , ⊥ , ↔ , ̸↔ .
• The self-dual connectives, which are equal to their own de Morgan dual; if the truth values of all variables are
reversed, so is the truth value these connectives return, e.g. ¬ , MAJ(p,q,r).
• The truth-preserving connectives; they return the truth value T under any interpretation which assigns T to
all variables, e.g. ∨ , ∧ , ⊤ , → , ↔ .
• The falsity-preserving connectives; they return the truth value F under any interpretation which assigns F to
all variables, e.g. ∨ , ∧ , ⊥ , ̸→ , ̸↔ .
In fact, Post gave a complete description of the lattice of all clones (sets of operations closed under composition and
containing all projections) on the two-element set {T, F}, nowadays called Post’s lattice, which implies the above
result as a simple corollary: the five mentioned sets of connectives are exactly the maximal clones.
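One way to see completeness concretely for binary functions is a brute-force closure computation: starting from the truth tables of the two projections p and q, repeatedly combine tables pointwise with NAND until nothing new appears. A sketch, assuming the standard identification of formulas with their truth tables:

```python
from itertools import product

# Brute-force check that {NAND} is functionally complete for binary
# functions: close the set of obtainable truth tables under pointwise NAND.
inputs = list(product((0, 1), repeat=2))   # assignments to (p, q)

p = tuple(a for a, _ in inputs)            # truth table of the projection p
q = tuple(b for _, b in inputs)            # truth table of the projection q
tables = {p, q}

changed = True
while changed:
    changed = False
    for f, g in list(product(tables, repeat=2)):
        h = tuple(1 - (x & y) for x, y in zip(f, g))  # pointwise NAND
        if h not in tables:
            tables.add(h)
            changed = True

assert len(tables) == 16   # all 16 binary Boolean functions are generated
```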
13.4 Minimal functionally complete operator sets

Minimal functionally complete sets of three connectives include { ∨ , ↔ , ⊥ }, { ∨ , ↔ , ↮ }, { ∨ , ↮ , ⊤ }, { ∧ , ↔ , ⊥ }, { ∧ , ↔ , ↮ }, and { ∧ , ↮ , ⊤ }.
There are no minimal functionally complete sets of more than three at-most-binary logical connectives.[9] Constant
unary or binary connectives and binary connectives that depend only on one of the arguments have been suppressed
to keep the list readable. E.g. the set consisting of binary ∨ and the binary connective given by negation of the first
argument (ignoring the second) is another minimal functionally complete set.
13.5 Examples
• ¬A = A NAND A
• A ∧ B = ¬(A NAND B) = (A NAND B) NAND (A NAND B)
• A ∨ B = (A NAND A) NAND (B NAND B)
• ¬A = A NOR A
• A ∧ B = (A NOR A) NOR (B NOR B)
• A ∨ B = (A NOR B) NOR (A NOR B)
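These identities can be verified mechanically over all truth assignments; a small Python check:

```python
# Verify the NAND and NOR identities above over every truth assignment.
nand = lambda a, b: not (a and b)
nor = lambda a, b: not (a or b)

for a in (False, True):
    for b in (False, True):
        assert (not a) == nand(a, a) == nor(a, a)
        assert (a and b) == nand(nand(a, b), nand(a, b)) == nor(nor(a, a), nor(b, b))
        assert (a or b) == nand(nand(a, a), nand(b, b)) == nor(nor(a, b), nor(a, b))
```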
Note that an electronic circuit or a software function can be optimized by reusing subexpressions, which reduces the
number of gates. For instance, when the “A ∧ B” operation is expressed with NAND gates, the intermediate result
“A NAND B” is computed once and reused.
13.8 See also
• Algebra of sets
• Boolean algebra
13.9 References
[1] Enderton, Herbert (2001), A mathematical introduction to logic (2nd ed.), Boston, MA: Academic Press, ISBN 978-0-12-
238452-3. (“Complete set of logical connectives”).
[2] Nolt, John; Rohatyn, Dennis; Varzi, Achille (1998), Schaum’s outline of theory and problems of logic (2nd ed.), New York:
McGraw–Hill, ISBN 978-0-07-046649-4. ("[F]unctional completeness of [a] set of logical operators”).
[3] Smith, Peter (2003), An introduction to formal logic, Cambridge University Press, ISBN 978-0-521-00804-4. (Defines
“expressively adequate”, shortened to “adequate set of connectives” in a section heading.)
[4] Wesselkamper, T.C. (1975), “A sole sufficient operator”, Notre Dame Journal of Formal Logic 16: 86–88, doi:10.1305/ndjfl/1093891614
[5] Massey, G.J. (1975), “Concerning an alleged Sheffer function”, Notre Dame Journal of Formal Logic 16 (4): 549–550,
doi:10.1305/ndjfl/1093891898
[6] Wesselkamper, T.C. (1975), “A Correction to My Paper ‘A Sole Sufficient Operator’”, Notre Dame Journal of Formal
Logic 16 (4): 551, doi:10.1305/ndjfl/1093891899
[7] The term was originally restricted to binary operations, but since the end of the 20th century it is used more generally.
Martin, N.M. (1989), Systems of logic, Cambridge University Press, p. 54, ISBN 978-0-521-36770-7.
[8] Scharle, T.W. (1965), “Axiomatization of propositional calculus with Sheffer functors”, Notre Dame J. Formal Logic 6 (3):
209–217, doi:10.1305/ndjfl/1093958259.
[9] Wernick, William (1942) “Complete Sets of Logical Functions,” Transactions of the American Mathematical Society 51:
117–32. In his list on the last page of the article, Wernick does not distinguish between ← and →, or between ↚ and ↛.
Chapter 14

Halting problem
In computability theory, the halting problem is the problem of determining, from a description of an arbitrary
computer program and an input, whether the program will finish running or continue to run forever.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs
cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known
as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a
decision problem.
Jack Copeland (2004) attributes the term halting problem to Martin Davis.[1]
14.1 Background
The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model
of computation, i.e., all programs that can be written in some given programming language that is general enough
to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program,
whether the program will eventually halt when run with that input. In this abstract framework, there are no resource
limitations on the amount of memory or time required for the program’s execution; it can take arbitrarily long, and
use arbitrarily much storage space, before halting. The question is simply whether the given program will ever halt
on a particular input.
For example, in pseudocode, the program

while (true) continue

does not halt; rather, it goes on forever in an infinite loop. On the other hand, the program

print "Hello, world!"

does halt.
While deciding whether these programs halt is simple, more complex programs prove problematic.
One approach to the problem might be to run the program for some number of steps and check if it halts. But if the
program does not halt, it is unknown whether the program will eventually halt or run forever.
Turing proved no algorithm can exist which will always correctly decide whether, for a given arbitrary program and
its input, the program halts when run with that input; the essence of Turing’s proof is that any such algorithm can be
made to contradict itself, and therefore cannot be correct.
14.2 Importance and consequences

The halting problem is historically important because it was one of the first problems to be proved undecidable.
(Turing's proof went to press in May 1936, whereas Alonzo Church's proof of the undecidability of a problem in
the lambda calculus had already been published in April 1936.) Subsequently, many other undecidable problems
have been described; the typical method of proving a problem to be undecidable is with the technique of reduction.
To do this, it is sufficient to show that if a solution to the new problem were found, it could be used to decide an
undecidable problem by transforming instances of the undecidable problem into instances of the new problem. Since
we already know that no method can decide the old problem, no method can decide the new problem either. Often
the halting problem serves as the old, already-undecidable problem in such reductions. (Note: the same technique is
used to demonstrate that a problem is NP-complete, only in this case, rather than demonstrating that there is no
solution, it demonstrates there is no polynomial-time solution, assuming P ≠ NP.)
For example, one such consequence of the halting problem’s undecidability is that there cannot be a general algorithm
that decides whether a given statement about natural numbers is true or not. The reason for this is that the proposition
stating that a certain program will halt given a certain input can be converted into an equivalent statement about
natural numbers. If we had an algorithm that could solve every statement about natural numbers, it could certainly
solve this one; but that would determine whether the original program halts, which is impossible, since the halting
problem is undecidable.
Rice’s theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial prop-
erty, there is no general decision procedure that, for all programs, decides whether the partial function implemented
by the input program has that property. (A partial function is a function which may not always produce a result, and
so is used to model programs, which can either produce results or fail to halt.) For example, the property “halts on
input 0” is undecidable. Here, “non-trivial” means that the set of partial functions that satisfy the property is neither
the empty set nor the set of all partial functions. For example, “halts or fails to halt on input 0” is clearly true of all
partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports “true.” Also, note
that this theorem holds only for properties of the partial function implemented by the program; Rice’s Theorem does
not apply to properties of the program itself. For example, “halts on input 0 within 100 steps” is not a property of
the partial function that is implemented by the program—it is a property of the program implementing the partial
function and is very much decidable.
Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally
is said to represent the probability that a randomly produced program halts. These numbers have the same Turing
degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely
computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few
digits can be calculated in simple cases.
While Turing’s proof shows that there can be no general method or algorithm to determine whether algorithms halt,
individual instances of that problem may very well be susceptible to attack. Given a specific algorithm, one can often
show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof.
But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to
determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in
an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of
research is known as automated termination analysis.
Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing ma-
chine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods.
However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle
machines). It is an open question whether there can be actual deterministic physical processes that, in the long run,
elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be
harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing
machine amongst other things. It is also an open question whether any such unknown physical processes are involved
in the working of the human brain, and whether humans can solve the halting problem (Copeland 2004, p. 15).
14.3 Representation as a set

The conventional representation of decision problems is the set of objects possessing the property in question. The
halting set

K = { (i, x) | program i eventually halts when run on input x }

represents the halting problem.
This set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it
contains.[2] However, the complement of this set is not recursively enumerable.[2]
There are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting
problem is such a formulation. Examples of such sets include:
• { i | there is an input x such that program i eventually halts when run with input x }.
The proof shows that there is no total computable function that decides whether an arbitrary program i halts on
arbitrary input x; that is, the following function h is not computable:

h(i, x) = 1 if program i halts on input x,
          0 otherwise.
Here program i refers to the i th program in an enumeration of all the programs of a fixed Turing-complete model of
computation.
Possible values for a total computable function f arranged in a 2D array. The orange cells are the diagonal. The values of f(i,i)
and g(i) are shown at the bottom; U indicates that the function g is undefined for a particular input value.
The proof proceeds by directly establishing that every total computable function with two arguments differs from the
required function h. To this end, given any total computable binary function f, the following partial function g is also
computable by some program e:
g(i) = 0 if f(i, i) = 0,
       undefined otherwise.
The verification that g is computable relies on the following constructs (or their equivalents):
• duplication of values (program e computes the inputs i,i for f from the input i for g),
• conditional branching (program e selects between two results depending on the value it computes for f(i,i)),
• returning a value of 0.
Because g is partial computable, there must be a program e that computes g, by the assumption that the model of
computation is Turing-complete. This program is one of all the programs on which the halting function h is defined.
The next step of the proof shows that h(e,e) will not have the same value as f(e,e).
It follows from the definition of g that exactly one of the following two cases must hold:
• f(e,e) = 0 and so g(e) = 0. In this case h(e,e) = 1, because program e halts on input e.
• f(e,e) ≠ 0 and so g(e) is undefined. In this case h(e,e) = 0, because program e does not halt on input e.
In either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two
arguments, all such functions must differ from h.
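The diagonal construction of g from f can be sketched concretely for any total Python function f of two arguments (None stands in for “undefined”, and the sample f is arbitrary):

```python
# The diagonal step of the proof: given any total function f of two
# arguments, build g so that g differs from every "row" f(i, ·) of the
# table at the diagonal entry (i, i).
def make_g(f):
    def g(i):
        if f(i, i) == 0:
            return 0
        return None  # stands in for "undefined"
    return g

f = lambda i, j: (i * j) % 2   # an arbitrary total function for illustration
g = make_g(f)

assert g(2) == 0      # f(2, 2) == 0, so g(2) = 0
assert g(1) is None   # f(1, 1) == 1, so g(1) is "undefined"
```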
This proof is analogous to Cantor’s diagonal argument. One may visualize a two-dimensional array with one column
and one row for each natural number, as indicated in the table above. The value of f(i,j) is placed at column i, row
j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The
construction of the function g can be visualized using the main diagonal of this array. If the array has a 0 at position
(i,i), then g(i) is 0. Otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of
the array corresponding to g itself. Now assume f were the halting function h. If g(e) is defined (g(e) = 0 in this case),
then g halts on e, so f(e,e) = 1; but g(e) = 0 only when f(e,e) = 0, contradicting f(e,e) = 1. Similarly, if g(e) is not
defined, then f(e,e) = 0, which leads to g(e) = 0 under g's construction, contradicting the assumption that g(e) is
undefined. A contradiction arises in both cases. Therefore no total computable function f can be the halting
function h.
...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repeti-
tive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the
machine... (italics in original, Minsky 1967, p. 24)
Minsky warns us, however, that machines such as computers with, e.g., a million small parts, each with two states,
will have at least 2^1,000,000 possible states:
This is a 1 followed by about three hundred thousand zeroes ... Even if such a machine were to operate
at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the
time of a journey through such a cycle (Minsky 1967 p. 25):
Minsky exhorts the reader to be suspicious—although a machine may be finite, and finite automata “have a number
of theoretical limitations":
...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the
mere finiteness [of] the state diagram may not carry a great deal of significance. (Minsky p. 25)
It can also be decided automatically whether a nondeterministic machine with finite memory halts on none of, some
of, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
14.7 Formalization
In his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result
is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational
power to Turing machines, such as Markov algorithms, Lambda calculus, Post systems, register machines, or tag
systems.
What is important is that the formalization allows a straightforward mapping of algorithms to some data type that
the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as
Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms
define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms
to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n
characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.
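Such a mapping can be sketched with a bijective base-n encoding, which assigns each string over an n-character alphabet a unique natural number (the function names are illustrative):

```python
# Encode strings over an n-character alphabet as natural numbers by
# reading them as numerals in a bijective base-n system, so every
# program text gets a unique number (a simple Gödel numbering).
def encode(s, alphabet):
    n = len(alphabet)
    num = 0
    for ch in s:
        num = num * n + alphabet.index(ch) + 1  # digit values 1..n (bijective)
    return num

def decode(num, alphabet):
    n = len(alphabet)
    chars = []
    while num > 0:
        num, d = divmod(num - 1, n)
        chars.append(alphabet[d])
    return "".join(reversed(chars))

assert decode(encode("baa", "ab"), "ab") == "baa"
assert encode("", "ab") == 0   # the empty string maps to 0
```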
14.10 History
Further information: History of algorithms
• 1900: David Hilbert poses his “23 questions” (now known as Hilbert’s problems) at the Second International
Congress of Mathematicians in Paris. “Of these, the second was that of proving the consistency of the 'Peano
axioms' on which, as he had shown, the rigour of mathematics depended”. (Hodges p. 83, Davis’ commentary
in Davis, 1965, p. 108)
• 1920–1921: Emil Post explores the halting problem for tag systems, regarding it as a candidate for unsolvability.
(Absolutely unsolvable problems and relatively undecidable propositions – account of an anticipation, in Davis,
1965, pp. 340–433.) Its unsolvability was not established until much later, by Marvin Minsky (1967).
• 1928: Hilbert recasts his 'Second Problem' at the Bologna International Congress. (Reid pp. 188–189) Hodges
claims he posed three questions: i.e. #1: Was mathematics complete? #2: Was mathematics consistent? #3:
Was mathematics decidable? (Hodges p. 91). The third question is known as the Entscheidungsproblem
(Decision Problem). (Hodges p. 91, Penrose p. 34)
• 1930: Kurt Gödel announces a proof as an answer to the first two of Hilbert’s 1928 questions [cf Reid p.
198]. “At first he [Hilbert] was only angry and frustrated, but then he began to try to deal constructively with
the problem... Gödel himself felt—and expressed the thought in his paper—that his work did not contradict
Hilbert’s formalistic point of view” (Reid p. 199)
• 1931: Gödel publishes “On Formally Undecidable Propositions of Principia Mathematica and Related Systems
I”, (reprinted in Davis, 1965, p. 5ff)
• 19 April 1935: Alonzo Church publishes “An Unsolvable Problem of Elementary Number Theory”, wherein
he identifies what it means for a function to be effectively calculable. Such a function will have an algorithm,
and "...the fact that the algorithm has terminated becomes effectively known ...” (Davis, 1965, p. 100)
• 1936: Church publishes the first proof that the Entscheidungsproblem is unsolvable. (A Note on the Entschei-
dungsproblem, reprinted in Davis, 1965, p. 110.)
• 7 October 1936: Emil Post's paper “Finite Combinatory Processes. Formulation I” is received. Post adds to his
“process” an instruction “(C) Stop”. He called such a process “type 1 ... if the process it determines terminates
for each specific problem.” (Davis, 1965, p. 289ff)
• 1937: Alan Turing's paper On Computable Numbers, with an Application to the Entscheidungsproblem reaches
print in January 1937 (reprinted in Davis, 1965, p. 115). Turing’s proof departs from calculation by recursive
functions and introduces the notion of computation by machine. Stephen Kleene (1952) refers to this as one
of the “first examples of decision problems proved unsolvable”.
• 1939: J. Barkley Rosser observes the essential equivalence of “effective method” defined by Gödel, Church,
and Turing (Rosser in Davis, 1965, p. 273, “Informal Exposition of Proofs of Gödel’s Theorem and Church’s
Theorem”)
• 1943: In a paper, Stephen Kleene states that “In setting up a complete algorithmic theory, what we do is
describe a procedure ... which procedure necessarily terminates and in such manner that from the outcome we
can read a definite answer, 'Yes’ or 'No,' to the question, 'Is the predicate value true?'.”
• 1952: Kleene (1952) Chapter XIII (“Computable Functions”) includes a discussion of the unsolvability of
the halting problem for Turing machines and reformulates it in terms of machines that “eventually stop”, i.e.
halt: “... there is no algorithm for deciding whether any given machine, when started from any given situation,
eventually stops.” (Kleene (1952) p. 382)
• 1952: “Martin Davis thinks it likely that he first used the term 'halting problem' in a series of lectures that he
gave at the Control Systems Laboratory at the University of Illinois in 1952 (letter from Davis to Copeland, 12
December 2001).” (Footnote 61 in Copeland (2004) pp. 40ff)
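The impossibility result this timeline converges on can be sketched in a few lines of modern code. The sketch below is ours, not from any of the cited works: it takes a hypothetical `claimed_halts` decider as input and shows that Turing's diagonal construction refutes it, whichever answer the decider gives.

```python
def refutes(claimed_halts):
    """Build the diagonal program from a claimed halting decider and
    show that the decider must answer wrongly about that program."""
    def diagonal(x):
        # Loop forever exactly when the decider predicts halting;
        # halt immediately when it predicts looping.
        if claimed_halts(x, x):
            while True:
                pass

    answer = claimed_halts(diagonal, diagonal)
    # If answer is True, diagonal(diagonal) loops -> the decider was wrong.
    # If answer is False, diagonal(diagonal) halts -> the decider was wrong.
    actually_halts = not answer
    return answer != actually_halts  # always True: every candidate fails

# Any candidate decider, however naive or clever, is refuted the same way:
print(refutes(lambda prog, inp: True))   # prints True
print(refutes(lambda prog, inp: False))  # prints True
```

Note that `diagonal` is never actually run; the contradiction is read off from the decider's own answer, which is exactly the shape of the 1936 proofs.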
14.13 Notes
[1] In none of his work did Turing use the word “halting” or “termination”. Turing’s biographer Hodges does not have the word
“halting” or words “halting problem” in his index. The earliest known use of the words “halting problem” is in a proof by
Davis (1958, pp. 70–71):
“Theorem 2.2 There exists a Turing machine whose halting problem is recursively unsolvable.
“A related problem is the printing problem for a simple Turing machine Z with respect to a symbol Sᵢ.”
Davis adds no attribution for his proof, so one infers that it is original with him. But Davis has pointed out that a statement
of the proof exists informally in Kleene (1952, p. 382). Copeland (2004, p. 40) states that:
“The halting problem was so named (and it appears, first stated) by Martin Davis [cf Copeland footnote 61]...
(It is often said that Turing stated and proved the halting theorem in 'On Computable Numbers’, but strictly
this is not true).”
[2] Moore, Cristopher; Mertens, Stephan (2011), The Nature of Computation, Oxford University Press, pp. 236–237, ISBN
9780191620805.
[3] Stated without proof in: "Course notes for Data Compression - Kolmogorov complexity", 2005, P.B. Miltersen, p.7
14.14 References
• Alan Turing, On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the
London Mathematical Society, Series 2, Volume 42 (1937), pp. 230–265, doi:10.1112/plms/s2-42.1.230. —
Alan Turing, On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction, Proceedings
of the London Mathematical Society, Series 2, Volume 43 (1938), pp. 544–546, doi:10.1112/plms/s2-43.6.544.
Free online version of both parts. This is the epochal paper where Turing defines Turing machines,
formulates the halting problem, and shows that it (as well as the Entscheidungsproblem) is unsolvable.
• Sipser, Michael (2006). “Section 4.2: The Halting Problem”. Introduction to the Theory of Computation
(2nd ed.). PWS Publishing. pp. 173–182. ISBN 0-534-94728-X.
• c2:HaltingProblem
• B. Jack Copeland ed. (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial
Intelligence, and Artificial Life plus The Secrets of Enigma, Clarendon Press (Oxford University Press), Oxford
UK, ISBN 0-19-825079-7.
• Davis, Martin (1965). The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Problems And
Computable Functions. New York: Raven Press. Turing's paper is #3 in this volume. Papers include those by
Gödel, Church, Rosser, Kleene, and Post.
• Alfred North Whitehead and Bertrand Russell, Principia Mathematica to *56, Cambridge at the University
Press, 1962. Re: the problem of paradoxes, the authors discuss the problem of a set not being an object in any
of its “determining functions”, in particular “Introduction, Chap. 1 p. 24 '...difficulties which arise in formal
logic'”, and Chap. 2.I. “The Vicious-Circle Principle” p. 37ff, and Chap. 2.VIII. “The Contradictions” p. 60ff.
• Martin Davis, “What is a computation”, in Mathematics Today, Lynn Arthur Steen, ed., Vintage Books (Random
House), 1980. A wonderful little paper, perhaps the best ever written about Turing Machines for the non-specialist.
Davis reduces the Turing Machine to a far simpler model based on Post's model of a computation.
Discusses Chaitin's proof. Includes brief biographies of Emil Post and Julia Robinson.
• Marvin Minsky, Computation, Finite and Infinite Machines, Prentice-Hall, Inc., N.J., 1967. See chapter 8,
Section 8.2 “The Unsolvability of the Halting Problem.” Excellent, i.e. readable, sometimes fun. A classic.
• Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds and the Laws of Physics, Oxford
University Press, Oxford England, 1990 (with corrections). Cf: Chapter 2, “Algorithms and Turing Machines”.
An over-complicated presentation (see Davis’s paper for a better model), but a thorough presentation of Turing
machines and the halting problem, and Church’s Lambda Calculus.
• John Hopcroft and Jeffrey Ullman, Introduction to Automata Theory, Languages and Computation, Addison-
Wesley, Reading, Mass., 1979. See Chapter 7 “Turing Machines.” A book centered around the machine-interpretation
of “languages”, NP-Completeness, etc.
• Andrew Hodges, Alan Turing: The Enigma, Simon and Schuster, New York. Cf Chapter “The Spirit of Truth”
for a history leading to, and a discussion of, his proof.
• Constance Reid, Hilbert, Copernicus: Springer-Verlag, New York, 1996 (first published 1970). Fascinat-
ing history of German mathematics and physics from 1880s through 1930s. Hundreds of names familiar to
mathematicians, physicists and engineers appear in its pages. Perhaps marred by no overt references and few
footnotes: Reid states her sources were numerous interviews with those who personally knew Hilbert, and
Hilbert’s letters and papers.
• Edward Beltrami, What is Random? Chance and order in mathematics and life, Copernicus: Springer-Verlag,
New York, 1999. Nice, gentle read for the mathematically inclined non-specialist, puts tougher stuff at the
end. Has a Turing-machine model in it. Discusses the Chaitin contributions.
• Ernest Nagel and James R. Newman, Gödel’s Proof, New York University Press, 1958. Wonderful writing
about a very difficult subject. For the mathematically inclined non-specialist. Discusses Gentzen's proof on
pages 96–97 and footnotes. Appendices discuss the Peano Axioms briefly and gently introduce readers to formal
logic.
• Taylor Booth, Sequential Machines and Automata Theory, Wiley, New York, 1967. Cf Chapter 9, Turing
Machines. Difficult book, meant for electrical engineers and technical specialists. Discusses recursion, partial-
recursion with reference to Turing Machines, halting problem. Has a Turing Machine model in it. References
at end of Chapter 9 catch most of the older books (i.e. 1952 until 1967 including authors Martin Davis, F.
C. Hennie, H. Hermes, S. C. Kleene, M. Minsky, T. Rado) and various technical papers. See note under
Busy-Beaver Programs.
• Busy Beaver Programs are described in Scientific American, August 1984, also March 1985 p. 23. A reference
in Booth attributes them to Rado, T. (1962), On non-computable functions, Bell Systems Tech. J. 41. Booth
also defines Rado’s Busy Beaver Problem in problems 3, 4, 5, 6 of Chapter 9, p. 396.
• David Bolter, Turing’s Man: Western Culture in the Computer Age, The University of North Carolina Press,
Chapel Hill, 1984. For the general reader. May be dated. Has yet another (very simple) Turing Machine
model in it.
• Stephen Kleene, Introduction to Metamathematics, North-Holland, 1952. Chapter XIII (“Computable Func-
tions”) includes a discussion of the unsolvability of the halting problem for Turing machines. In a departure
from Turing’s terminology of circle-free nonhalting machines, Kleene refers instead to machines that “stop”,
i.e. halt.
• Logical Limitations to Machine Ethics, with Consequences to Lethal Autonomous Weapons - paper discussed
in: Does the Halting Problem Mean No Moral Robots?
• A 2-Minute Proof of the 2nd-Most Important Theorem of the 2nd Millennium - a proof in only 13 lines
Chapter 15

List of multiple discoveries
Historians and sociologists have remarked on the occurrence, in science, of "multiple independent discovery". Robert
K. Merton defined such “multiples” as instances in which similar discoveries are made by scientists working indepen-
dently of each other.[1] “Sometimes the discoveries are simultaneous or almost so; sometimes a scientist will make a
new discovery which, unknown to him, somebody else has made years before.”[2]
Commonly cited examples of multiple independent discovery are the 17th-century independent formulation of calculus
by Isaac Newton, Gottfried Wilhelm Leibniz and others, described by A. Rupert Hall;[3] the 18th-century discovery
of oxygen by Carl Wilhelm Scheele, Joseph Priestley, Antoine Lavoisier and others; and the theory of the evolution
of species, independently advanced in the 19th century by Charles Darwin and Alfred Russel Wallace.
Multiple independent discovery, however, is not limited to only a few historic instances involving giants of scientific
research. Merton believed that it is multiple discoveries, rather than unique ones, that represent the common pattern
in science.[4]
Merton contrasted a “multiple” with a “singleton”—a discovery that has been made uniquely by a single scientist or
group of scientists working together.[5]
Merton's hypothesis is also discussed extensively in Harriet Zuckerman's Scientific Elite.[6]
• Galileo Galilei and Simon Stevin: Hydrostatic paradox (Stevin ca. 1585, Galileo ca. 1610).
• Scipione dal Ferro (1520) and Niccolò Tartaglia (1535) independently developed a method for solving cubic
equations.
• Boyle’s law (sometimes referred to as the “Boyle–Mariotte law”) is one of the gas laws and a basis for deriving
the ideal gas law; it states that, at a fixed temperature, the product of the pressure and volume of a gas in a
closed system remains constant. The law was named for the chemist and physicist Robert Boyle, who published
the original law in 1662. The French physicist Edme Mariotte discovered the same law independently of Boyle
in 1676.
• Newton–Raphson method – Joseph Raphson (1690), Isaac Newton (Newton’s work was written in 1671, but
not published until 1736).
• Brachistochrone problem solved by Johann Bernoulli, Jakob Bernoulli, Isaac Newton, Gottfried Wilhelm Leib-
niz, Guillaume de l'Hôpital, and Ehrenfried Walther von Tschirnhaus. The problem was posed in 1696 by
Johann Bernoulli, and its solutions were published the following year.
• Leyden Jar – Ewald Georg von Kleist (1745) and Pieter van Musschenbroek (1745–46).
• Lightning rod – Benjamin Franklin (1749) and Prokop Diviš (1754) (debated: Diviš's apparatus is assumed to
have been more effective than Franklin’s lightning rods in 1754, but was intended for a different purpose than
lightning protection).
• Oxygen – Carl Wilhelm Scheele (Uppsala, 1773), Joseph Priestley (Wiltshire, 1774). The term was coined by
Antoine Lavoisier (1777).
• Black-hole theory: John Michell, in a 1783 paper in The Philosophical Transactions of the Royal Society, wrote:
“If the semi-diameter of a sphere of the same density as the Sun were to exceed that of the Sun in the proportion of five hundred to one, and by
supposing light to be attracted by the same force in proportion to its [mass] with other bodies, all light emitted
from such a body would be made to return towards it, by its own proper gravity.”[9] A few years later, a similar
idea was suggested independently by Pierre-Simon Laplace.[10]
• A method for measuring the specific heat of a solid substance was devised independently by Benjamin Thomp-
son, Count Rumford; and by Johan Wilcke, who published his discovery first (apparently not later than 1796,
when he died).
• In 1869, Dmitri Ivanovich Mendeleev published his periodic table of chemical elements, and the following
year Julius Lothar Meyer published his independently constructed version.
• In 1876, Oskar Hertwig and Hermann Fol independently described the entry of sperm into the egg and the
subsequent fusion of the egg and sperm nuclei to form a single new nucleus.
• In 1876, Elisha Gray and Alexander Graham Bell filed for patents on the telephone on the same day.
• In 1877 Charles Cros described the principles of the phonograph, which was independently constructed the
following year by Thomas Edison.
• The Hall–Héroult process for inexpensively producing aluminum was independently discovered in 1886 by the
American engineer-inventor Charles Martin Hall and the French scientist Paul Héroult.[16]
• Two proofs of the prime number theorem (the asymptotic law of the distribution of prime numbers) were
obtained independently by Jacques Hadamard and Charles de la Vallée-Poussin and appeared in the same year
(1896).
• Discovery of radioactivity (1896) independently by Henri Becquerel and Silvanus Thompson.[17]
• Discovery of thorium radioactivity (1898) by Gerhard Carl Schmidt and Maria Skłodowska Curie.[18]
• Linguists Filip Fyodorovich Fortunatov and Ferdinand de Saussure independently formulated the sound law
now known as the Saussure–Fortunatov law.[19]
• In mathematics, the Gelfond–Schneider theorem is a result which establishes the transcendence of a large class
of numbers. It was originally proved in 1934 by Aleksandr Gelfond and again independently proved in 1935
by Theodor Schneider.
• The Penrose triangle, also known as the “tribar”, is an impossible object. It was first created by the Swedish
artist Oscar Reutersvärd in 1934. The mathematician Roger Penrose independently devised and popularised it
in the 1950s.
• In computer science, the concept of the “universal computing machine” (now generally called the "Turing
Machine") was proposed by Alan Turing, but also independently by Emil Post,[26] both in 1936. Similar
approaches, also aiming to cover the concept of universal computing, were introduced by S.C. Kleene and by
Alonzo Church that same year. Also in 1936, Konrad Zuse tried to build a binary electrically-driven mechanical
calculator with limited programmability; however, Zuse’s machine was never fully functional. The Atanasoff–
Berry Computer (“ABC”), designed by John Vincent Atanasoff and Clifford Berry, was the first fully electronic
digital computing device;[27] while not programmable, it pioneered important elements of modern computing,
including binary arithmetic and electronic switching elements,[28][29] though its special-purpose nature and lack
of a changeable, stored program distinguish it from modern computers.
• The atom bomb was independently thought of by Leó Szilárd,[30] Józef Rotblat[31] and others.
• The jet engine was independently invented by Hans von Ohain, Secondo Campini, and Frank Whittle; their
engines first flew in working aircraft in 1939, 1940, and 1941 respectively.
• In agriculture, the ability of synthetic auxins 2,4-D, 2,4,5-T, and MCPA to act as hormone herbicides was
discovered independently by four groups in the United States and Great Britain: William G. Templeman and
coworkers (1941); Philip Nutman, Gerard Thornton, and Juda Quastel (1942); Franklin Jones (1942); and
Ezra Kraus, John W. Mitchell, and Charles L. Hamner (1943). All four groups were subject to various aspects
of wartime secrecy, and the exact order of discovery is a matter of some debate.[32]
• Polio vaccine (1950–63): Hilary Koprowski, Jonas Salk, Albert Sabin.
• The integrated circuit was devised independently by Jack Kilby in 1958[33] and half a year later by Robert
Noyce.[34] Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[35]
• The Higgs boson was developed into a full relativistic model in 1964 independently and almost simultaneously
by three groups of physicists: by François Englert and Robert Brout; by Peter Higgs; and by Gerald Guralnik,
C. R. Hagen, and Tom Kibble.
• Quantum electrodynamics and renormalization (1930s–40s): Ernst Stueckelberg, Julian Schwinger, Richard
Feynman, and Sin-Itiro Tomonaga, for which the latter three received the 1965 Nobel Prize in Physics.
• The maser, a precursor to the laser, was described by Russian scientists in 1952, and built independently by
scientists at Columbia University in 1953. The laser itself was developed independently by Gordon Gould at
Columbia University and by researchers at Bell Labs, and by the Russian scientist Aleksandr Prokhorov.
• Kolmogorov complexity, also known as “Kolmogorov–Chaitin complexity”, descriptive complexity, etc., of an
object such as a piece of text is a measure of the computational resources needed to specify the object. The
concept was independently introduced by Ray Solomonoff, Andrey Kolmogorov and Gregory Chaitin in the
1960s.[36]
• The concept of packet switching, a communications method in which discrete blocks of data (packets) are
routed between nodes over data links, was first explored by Paul Baran in the early 1960s, and then indepen-
dently a few years later by Donald Davies.
• Cosmic background radiation as a signature of the Big Bang was confirmed by Arno Penzias and Robert Wilson
of Bell Labs. Penzias and Wilson had been testing a very sensitive microwave detector when they noticed that
their equipment was picking up a strange noise that was independent of the orientation (direction) of their
instrument. At first they thought the noise was generated due to pigeon droppings in the detector, but even
after they removed the droppings the noise was still detected. Meanwhile, at nearby Princeton University two
physicists, Robert Dicke and Jim Peebles, were working on a suggestion of George Gamow's that the early
universe had been hot and dense; they believed its hot glow could still be detected but would be so red-shifted
that it would manifest as microwaves. When Penzias and Wilson learned about this, they realized that they had
already detected the red-shifted microwaves and (to the disappointment of Dicke and Peebles) were awarded
the 1978 Nobel Prize in physics.[10]
• The Cocke–Younger–Kasami algorithm was independently discovered three times: by T. Kasami (1965), by
Daniel H. Younger (1967), and by John Cocke and Jacob T. Schwartz (1970).
• The Wagner–Fischer algorithm, in computer science, was discovered and published at least six times.[37]:43
• In 1970, Howard Temin and David Baltimore independently discovered reverse transcriptase enzymes.
• The Knuth–Morris–Pratt string searching algorithm was developed by Donald Knuth and Vaughan Pratt and
independently by J. H. Morris.
• The Cook–Levin theorem (also known as “Cook’s theorem”), a result in computational complexity theory, was
proven independently by Stephen Cook (1971 in the U.S.) and by Leonid Levin (1973 in the USSR). Levin
was not aware of Cook’s achievement because of communication difficulties between East and West during the
Cold War. Conversely, Levin’s work was not widely known in the West until around 1978.[38]
• Mevastatin (compactin; ML-236B) was independently discovered by Akira Endo in Japan in a culture of Peni-
cillium citrinium[39] and by a British group in a culture of Penicillium brevicompactum.[40] Both reports were
published in 1976.
• The Bohlen–Pierce scale, a harmonic, non-octave musical scale, was independently discovered by Heinz Bohlen
(1972), Kees van Prooijen (1978) and John R. Pierce (1984).
• RSA, an algorithm suitable for signing and encryption in public-key cryptography, was publicly described in
1977 by Ron Rivest, Adi Shamir and Leonard Adleman. An equivalent system had been described in 1973
in an internal document by Clifford Cocks, a British mathematician working for the UK intelligence agency
GCHQ, but his work was not revealed until 1997 due to its top-secret classification.
• Asymptotic freedom, which states that the strong nuclear interaction between quarks decreases with decreasing
distance, was discovered in 1973 by David Gross and Frank Wilczek, and by David Politzer, and was published
in the same edition of the journal Physical Review Letters.[41] For their work the three received the Nobel Prize
in Physics in 2004.
• The J/ψ meson was independently discovered by a group at the Stanford Linear Accelerator Center, headed
by Burton Richter, and by a group at Brookhaven National Laboratory, headed by Samuel Ting of MIT. Both
announced their discoveries on November 11, 1974. For their shared discovery, Richter and Ting shared the
1976 Nobel Prize in Physics.
• The use of elliptic curves in cryptography (Elliptic curve cryptography) was suggested independently by Neal
Koblitz and Victor S. Miller in 1985.
• The Immerman–Szelepcsényi theorem, another fundamental result in computational complexity theory, was
proven independently by Neil Immerman and Róbert Szelepcsényi in 1987.[42]
• In 1989, Thomas R. Cech (Colorado) and Sidney Altman (Yale) won the Nobel Prize in chemistry for their
independent discovery in the 1980s of ribozymes – for the “discovery of catalytic properties of RNA” – using
different approaches. Catalytic RNA was an unexpected finding, something they were not looking for, and it
required rigorous proof that there was no contaminating protein enzyme.
• In 1993, groups led by Donald S. Bethune at IBM and Sumio Iijima at NEC independently discovered single-
wall carbon nanotubes and methods to produce them using transition-metal catalysts.
• Conductive polymers: Between 1963 and 1977, doped and oxidized highly conductive polyacetylene derivatives
were independently discovered, “lost”, and then rediscovered at least four times. The last rediscovery
won the 2000 Nobel Prize in Chemistry, for the “discovery and development of conductive polymers”, without
reference to the previous discoveries; see the citations in the article Conductive polymers.
15.9 Quotations
“When the time is ripe for certain things, these things appear in different places in the manner of
violets coming to light in early spring.”
— Farkas Bolyai to his son János in urging him to claim the invention of non-Euclidean geometry
without delay,
quoted in Li & Vitanyi, An introduction to Kolmogorov Complexity and Its Applications, 1st ed., p. 83.
15.11 Notes
[1] Robert K. Merton, “Resistance to the Systematic Study of Multiple Discoveries in Science”, European Journal of Sociology,
4:237–82, 1963. Reprinted in Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations,
Chicago, University of Chicago Press, 1973, pp. 371–82.
[3] A. Rupert Hall, Philosophers at War, New York, Cambridge University Press, 1980.
[4] Robert K. Merton, “Singletons and Multiples in Scientific Discovery: a Chapter in the Sociology of Science”, Proceedings
of the American Philosophical Society, 105: 470–86, 1961. Reprinted in Robert K. Merton, The Sociology of Science:
Theoretical and Empirical Investigations, Chicago, University of Chicago Press, 1973, pp. 343–70.
[6] Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, Free Press, 1979.
[7] “Copernicus seems to have drawn up some notes [on the displacement of good coin from circulation by debased coin] while
he was at Olsztyn in 1519. He made them the basis of a report on the matter, written in German, which he presented to the
Prussian Diet held in 1522 at Grudziądz... He later drew up a revised and enlarged version of his little treatise, this time in
Latin, and setting forth a general theory of money, for presentation to the Diet of 1528.” Angus Armitage, The World of
Copernicus, 1951, p. 91.
[8] Roger Penrose, The Road to Reality, Vintage Books, 2005, p. 103.
[9] Alan Ellis, “Black Holes – Part 1 – History”, Astronomical Society of Edinburgh, Journal 39, 1999. A description of
Michell’s theory of black holes.
[10] Stephen Hawking, A Brief History of Time, Bantam, 1996, pp. 43-45.
[11] Gauss, Carl Friedrich, “Nachlass: Theoria interpolationis methodo nova tractata”, Werke, Band 3, Göttingen, Königliche
Gesellschaft der Wissenschaften, 1866, pp. 265–327.
[12] Heideman, M. T., D. H. Johnson, and C. S. Burrus, “Gauss and the history of the fast Fourier transform”, Archive for
History of Exact Sciences, vol. 34, no. 3 (1985), pp. 265–277.
[13] Roger Penrose, The Road to Reality, Vintage Books, 2005, p. 81.
[15] “Aug. 18, 1868: Helium Discovered During Total Solar Eclipse”, http://www.wired.com/thisdayintech/2009/08/dayintech_0818/
[16] Isaac Asimov, Asimov’s Biographical Encyclopedia of Science and Technology, p. 933.
[17] “Had Becquerel... not [in 1896] presented his discovery to the Académie des Sciences the day after he made it, credit for
the discovery of radioactivity, and even a Nobel Prize, would have gone to Silvanus Thompson.” Robert William Reid,
Marie Curie, New York, New American Library, 1974, ISBN 0002115395, pp. 64-65.
[18] "Marie Curie was... beaten in the race to tell of her discovery that thorium gives off rays in the same way as uranium.
Unknown to her, a German, Gerhard Carl Schmidt, had published his finding in Berlin two months earlier.” Robert William
Reid, Marie Curie, New York, New American Library, 1974, ISBN 0002115395, p. 65.
[20] Barbara Goldsmith, Obsessive Genius: The Inner World of Marie Curie, New York, W.W. Norton, 2005, ISBN 0-393-
05137-4, p. 166.
[23] M.J. O'Dowd, E.E. Philipp, The History of Obstetrics & Gynaecology, London, Parthenon Publishing Group, 1994, p. 547.
[24] Eggleton, Philip; Eggleton, Grace Palmer (1927). “The inorganic phosphate and a labile form of organic phosphate in the
gastrocnemius of the frog”. Biochemical Journal 21 (1): 190–195. PMC 1251888. PMID 16743804.
[25] Fiske, Cyrus H.; Subbarow, Yellapragada (1927). “The nature of the 'inorganic phosphate' in voluntary muscle”. Science
65 (1686): 401–403. doi:10.1126/science.65.1686.401. PMID 17807679.
[26] See the “bibliographic notes” at the end of chapter 7 in Hopcroft & Ullman, Introduction to Automata, Languages, and
Computation, Addison-Wesley, 1979.
[27] Ralston, Anthony; Meek, Christopher, eds. (1976), Encyclopedia of Computer Science (second ed.), pp. 488–489, ISBN
0-88405-321-0
[28] Campbell-Kelly, Martin; Aspray, William (1996), Computer: A History of the Information Machine, New York: Basic
Books, p. 84, ISBN 0-465-02989-2.
[29] Jane Smiley, The Man Who Invented the Computer: The Biography of John Atanasoff, Digital Pioneer, 2010.
[30] Richard Rhodes, The Making of the Atomic Bomb, New York, Simon and Schuster, 1986, ISBN 0671441337, p. 27.
[32] Troyer, James (2001). “In the beginning: the multiple discovery of the first hormone herbicides”. Weed Science 49 (2):
290–297. doi:10.1614/0043-1745(2001)049[0290:ITBTMD]2.0.CO;2.
[33] The Chip that Jack Built, Texas Instruments, ca. 2008, retrieved 29 May 2008.
[34] Christophe Lécuyer, Making Silicon Valley: Innovation and the Growth of High Tech, 1930-1970, MIT Press, 2006, ISBN
0262122812, p. 129.
[35] Nobel Web AB, 10 October 2000 The Nobel Prize in Physics 2000, retrieved 29 May 2008.
[36] See Chapter 1.6 in the first edition of Li & Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, who cite
Chaitin (1975): “this definition [of Kolmogorov complexity] was independently proposed about 1965 by A.N. Kolmogorov
and me ... Both Kolmogorov and I were then unaware of related proposals made in 1960 by Ray Solomonoff.”
[37] Navarro, Gonzalo (2001). “A guided tour to approximate string matching” (PDF). ACM Computing Surveys 33 (1): 31–88.
doi:10.1145/375360.375365.
[39] Endo, Akira; Kuroda, M.; Tsujita, Y. (1976). “ML-236A, ML-236B, and ML-236C, new inhibitors of cholesterogenesis
produced by Penicillium citrinium”. Journal of Antibiotics (Tokyo) 29 (12): 1346–8. doi:10.7164/antibiotics.29.1346.
PMID 1010803.
[40] Brown, Alian G.; Smale, Terry C.; King, Trevor J.; Hasenkamp, Rainer; Thompson, Ronald H. (1976). “Crystal and
Molecular Structure of Compactin, a New Antifungal Metabolite from Penicillium brevicompactum”. J. Chem. Soc.,
Perkin Trans 1: 1165–1170. doi:10.1039/P19760001165.
[41] D. J. Gross, F. Wilczek, Ultraviolet behavior of non-abelian gauge theories, Phys. Rev. Letters 30 (1973) 1343–1346; H.
D. Politzer, Reliable perturbative results for strong interactions, Phys. Rev. Letters 30 (1973) 1346–1349.
[43] Paál, G.; Horváth, I.; Lukács, B. (1992). Astrophysics and Space Science 191: 107. Bibcode:1992Ap&SS.191..107P.
doi:10.1007/BF00644200.
15.12 References
• Armitage, Angus (1951). The World of Copernicus. New York: Mentor Books.
• Isaac Asimov, Asimov’s Biographical Encyclopedia of Science and Technology, second revised edition, New
York, Doubleday, 1982.
• N.E. Collinge (1985). The Laws of Indo-European. Amsterdam: John Benjamins. ISBN 0-915027-75-5.
(U.S.), ISBN 90-272-2102-2 (Europe).
• Michael R. Garey and David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-
Completeness. W.H. Freeman. ISBN 0-7167-1045-5.
• A. Rupert Hall, Philosophers at War, New York, Cambridge University Press, 1980.
• David Lamb, Multiple Discovery: The Pattern of Scientific Progress, Amersham, Avebury Press, 1984.
• Ming Li and Paul Vitanyi (1993). An Introduction to Kolmogorov Complexity and Its Applications. New York:
Springer-Verlag. ISBN 0-387-94053-7. (U.S.), ISBN 3-540-94053-7 (Europe).
• Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations, University of Chicago
Press, 1973.
• Robert K. Merton, On Social Structure and Science, edited and with an introduction by Piotr Sztompka,
University of Chicago Press, 1996.
• Robert William Reid, Marie Curie, New York, New American Library, 1974, ISBN 0002115395.
• Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, Free Press, 1979.
15.13 External links

• Apperceptual: The Heroic Theory of Scientific Development at the Wayback Machine (archived May 12,
2008), Peter Turney, January 15, 2007
• A Survey of Russian Approaches to Perebor (Brute-Force Searches) Algorithms, by B.A. Trakhtenbrot, in the
Annals of the History of Computing, 6(4):384-400, 1984.
Chapter 16

Logic gate
“Discrete logic” redirects here. For discrete circuitry, see Discrete circuit.
In electronics, a logic gate is an idealized or physical device implementing a Boolean function; that is, it performs a
logical operation on one or more logical inputs, and produces a single logical output. Depending on the context, the
term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer
to a non-ideal physical device[1] (see Ideal and real op-amps for comparison).
Logic gates are primarily implemented using diodes or transistors acting as electronic switches, but can also be con-
structed using vacuum tubes, electromagnetic relays (relay logic), fluidic logic, pneumatic logic, optics, molecules, or
even mechanical elements. With amplification, logic gates can be cascaded in the same way that Boolean functions
can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the
algorithms and mathematics that can be described with Boolean logic.
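This correspondence between cascaded gates and composed Boolean functions can be sketched in Python (an idealized model, ignoring timing and fan-out):

```python
# Model ideal gates as Boolean functions; cascading gates is function composition.
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

# A composite circuit: XOR built by cascading AND, OR and NOT gates.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```

Any circuit of ideal gates can be modelled this way, because each gate output depends only on its inputs.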
Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory,
all the way up through complete microprocessors, which may contain more than 100 million gates. In modern practice,
most gates are made from field-effect transistors (FETs), particularly MOSFETs (metal–oxide–semiconductor field-
effect transistors).
Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design be-
cause their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.[2]
In reversible logic, Toffoli gates are used.
To build a functionally complete logic system, relays, valves (vacuum tubes), or transistors can be used. The simplest
family of logic gates using bipolar transistors is called resistor-transistor logic (RTL). Unlike simple diode logic
gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic
functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used
in RTL were replaced by diodes resulting in diode-transistor logic (DTL). Transistor-transistor logic (TTL) then
supplanted DTL. As integrated circuits became more complex, bipolar transistors were replaced with smaller field-
effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary
chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-
channel) MOSFET devices to achieve a high speed with low power dissipation.
For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400
series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these
fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a large
number of mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic
devices such as FPGAs has removed the 'hard' property of hardware; it is now possible to change the logic design of
a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation to be changed.
16.2 Symbols
A synchronous 4-bit up/down decade counter symbol (74LS192) in accordance with ANSI/IEEE Std. 91-1984 and IEC Publication
60617-12.
There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984
and its supplement ANSI/IEEE Std 91a-1991. The “distinctive shape” set, based on traditional schematics, is used
for simple drawings, and derives from MIL-STD-806 of the 1950s and 1960s. It is sometimes unofficially described
as “military”, reflecting its origin. The “rectangular shape” set, based on ANSI Y32.14 and other early industry
standards, as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation
of a much wider range of devices than is possible with the traditional symbols.[3] The IEC standard, IEC 60617-12,
has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United
Kingdom, and DIN EN 60617-12:1998 in Germany.
The mutual goal of IEEE Std 91-1984 and IEC 60617-12 was to provide a uniform method of describing the complex
logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and
OR gates. They could range from medium-scale circuits such as a 4-bit counter to large-scale circuits such as a microprocessor.
IEC 617-12 and its successor IEC 60617-12 do not explicitly show the “distinctive shape” symbols, but do not prohibit
them.[3] These are, however, shown in ANSI/IEEE 91 (and 91a) with this note: “The distinctive-shape symbol is,
according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard.”
IEC 60617-12 correspondingly contains the note (Section 2.1) “Although non-preferred, the use of other symbols
recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall
not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form
complex symbols (for example, use as embedded symbols) is discouraged.” This compromise was reached between
the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with
one another.
A third style of symbols was in use in Europe and is still widely used in European academia. See the column “DIN
40700” in the table in the German Wikipedia.
In the 1980s, schematics were the predominant method to design both circuit boards and custom ICs known as gate
arrays. Today custom ICs and the field-programmable gate array are typically designed with Hardware Description
Languages (HDL) such as Verilog or VHDL.
The two-input exclusive-OR is true only when its two input values differ, and false when they are equal, regardless of which value they share. If there are more than two inputs, the gate generates a true at its output if the number of trues at its inputs is odd. In practice, these gates are built from combinations of simpler logic gates.
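The odd-parity behaviour of a multi-input XOR can be modelled directly; a minimal Python sketch:

```python
from functools import reduce

def xor_gate(*inputs):
    # Multi-input XOR: output is true iff an odd number of inputs are true.
    # Folding pairwise inequality over the inputs computes exactly this parity.
    return reduce(lambda a, b: a != b, inputs)
```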
The 7400 chip, containing four NANDs. The two additional pins supply power (+5 V) and connect the ground.
A De Morgan equivalent symbol shows more clearly which inputs and outputs are considered in the “signaled” (active, on) state. Consider the simplified case where a two-input NAND gate is used
to drive a motor when either of its inputs are brought low by a switch. The “signaled” state (motor on) occurs when
either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan
version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble
at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol
shows both inputs and output in the polarity that will drive the motor.
De Morgan’s theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as
combinations of only NOR gates, for economic reasons.
Logic gates can also be used to store data. A storage element can be constructed by connecting several gates in a
"latch" circuit. More complicated designs that use clock signals and that change only on a rising or falling edge of the
clock are called edge-triggered "flip-flops". The combination of multiple flip-flops in parallel, to store a multiple-bit
value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a
sequential logic system since its output can be influenced by its previous state(s).
These logic circuits are known as computer memory. They vary in performance, based on factors of speed, complex-
ity, and reliability of storage, and many different types of designs are used based on the application.
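A behavioural sketch of such a latch in Python (two cross-coupled NOR gates, iterated to a stable state; an illustrative model, not a timing-accurate one):

```python
def nor(a, b):
    return not (a or b)

def sr_latch(S, R, Q=False, Qb=True):
    # Cross-coupled NOR SR latch: Q = NOR(R, Qb), Qb = NOR(S, Q).
    # A few iterations suffice to settle for valid (non-forbidden) inputs.
    for _ in range(4):
        Q, Qb = nor(R, Qb), nor(S, Q)
    return Q

# Set the latch, then hold: it remembers its last driven state.
q = sr_latch(S=True, R=False)                  # set
q = sr_latch(S=False, R=False, Q=q, Qb=not q)  # hold
```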
A tristate buffer can be thought of as a switch. If B is on, the switch is closed. If B is off, the switch is open.
A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-
impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used
on buses of the CPU to allow multiple chips to send data. A group of three-states driving a line with a suitable control
circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in
cards.
In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive
voltage). A low output would mean the output is sinking current to the negative power terminal (zero voltage). High
impedance would mean that the output is effectively disconnected from the circuit.
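A sketch of a shared bus line in Python, with `None` standing for the high-impedance state (an illustrative model, not an electrical one):

```python
def resolve_bus(drivers):
    # Each tristate output drives True, False, or None (high-impedance, Z).
    active = [d for d in drivers if d is not None]
    if len(set(active)) > 1:
        raise ValueError("bus contention: conflicting drivers")
    return active[0] if active else None

# A control circuit would normally enable at most one driver at a time,
# which is what makes the group behave like a multiplexer.
level = resolve_bus([None, True, None])
```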
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705) and he also established
that by using the binary system, the principles of arithmetic and logic could be combined. In an 1886 letter, Charles
Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[7] Eventually,
vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can
be used as an AND logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition
5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, received part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924. Konrad Zuse designed and built
electromechanical logic gates for his computer Z1 (from 1935–38). Claude E. Shannon introduced the use of Boolean
algebra in the analysis and design of switching circuits in 1937. Active research is taking place in molecular logic
gates.
16.8 Implementations
Main article: Unconventional computing
Since the 1990s, most logic gates are made in CMOS technology (i.e. NMOS and PMOS transistors are used). Often
millions of logic gates are packaged in a single integrated circuit.
There are several logic families with different characteristics (power consumption, speed, cost, size) such as: RDL
(resistor-diode logic), RTL (resistor-transistor logic), DTL (diode-transistor logic), TTL (transistor-transistor logic)
and CMOS (complementary metal oxide semiconductor). There are also sub-variants, e.g. standard CMOS logic vs.
advanced types using still CMOS technology, but with some optimizations for avoiding loss of speed due to slower
PMOS transistors.
Non-electronic implementations are varied, though few of them are used in practical applications. Many early
electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electro-
mechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay or mechanical logic
gates, including on a molecular scale.[8] Logic gates have been made out of DNA (see DNA nanotechnology)[9] and
used to create a computer called MAYA (see MAYA II). Logic gates can be made from quantum mechanical ef-
fects (though quantum computing usually diverges from boolean design). Photonic logic gates use non-linear optical
effects.
In principle any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND
gate) can be used to make any kind of digital logic circuit. Note that the use of 3-state logic for bus systems is not
needed, and can be replaced by digital multiplexers.
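Functional completeness of the NAND gate can be demonstrated in a few lines of Python (an illustrative model of ideal gates):

```python
def nand(a, b):
    return not (a and b)

# NAND alone suffices to build the other basic gates (functional completeness).
def NOT(a):    return nand(a, a)            # NAND with both inputs tied together
def AND(a, b): return NOT(nand(a, b))       # invert the NAND output
def OR(a, b):  return nand(NOT(a), NOT(b))  # De Morgan: a ∨ b = ¬(¬a ∧ ¬b)
```

The same construction works with NOR in place of NAND, with the roles of AND and OR exchanged.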
16.10 References
[1] Jaeger, Microelectronic Circuit Design, McGraw-Hill 1997, ISBN 0-07-032482-4, pp. 226-233
[2] Tinder, Richard F. (2000). Engineering digital design: Revised Second Edition. pp. 317–319. ISBN 0-12-691295-5.
Retrieved 2008-07-04.
[3] Overview of IEEE Standard 91-1984 Explanation of Logic Symbols, Doc. No. SDYZ001A, Texas Instruments Semicon-
ductor Group, 1996
[4] Peirce, C. S. (manuscript winter of 1880–81), “A Boolean Algebra with One Constant”, published 1933 in Collected Papers
v. 4, paragraphs 12–20. Reprinted 1989 in Writings of Charles S. Peirce v. 4, pp. 218-21, Google Preview. See Roberts,
Don D. (2009), The Existential Graphs of Charles S. Peirce, p. 131.
[5] Hans Kleine Büning; Theodor Lettmann (1999). Propositional logic: deduction and algorithms. Cambridge University
Press. p. 2. ISBN 978-0-521-63017-7.
[6] John Bird (2007). Engineering mathematics. Newnes. p. 532. ISBN 978-0-7506-8555-9.
[7] Peirce, C. S., “Letter, Peirce to A. Marquand", dated 1886, Writings of Charles S. Peirce, v. 5, 1993, pp. 541–3. Google
Preview. See Burks, Arthur W., “Review: Charles S. Peirce, The new elements of mathematics", Bulletin of the American
Mathematical Society v. 84, n. 5 (1978), pp. 913–18, see 917. PDF Eprint.
• Bostock, Geoff (1988). Programmable logic devices: technology and applications. New York: McGraw-Hill.
ISBN 978-0-07-006611-3. Retrieved 28 November 2012.
• Brown, Stephen D.; Francis, Robert J.; Rose, Jonathan; Vranesic, Zvonko G. (1992). Field Programmable
Gate Arrays. Boston, MA: Kluwer Academic Publishers. ISBN 978-0-7923-9248-4. Retrieved 28 November
2012.
Chapter 17
Logical biconditional
In logic and mathematics, the logical biconditional (sometimes known as the material biconditional) is the logical
connective of two statements asserting "p if and only if q", where q is an antecedent and p is a consequent.[1] This is
often abbreviated p iff q. The operator is denoted using a double-headed arrow (↔), a prefixed E (Epq), an equality
sign (=), an equivalence sign (≡), or EQV. It is logically equivalent to (p → q) ∧ (q → p), or the XNOR (exclusive
nor) boolean operator. It is equivalent to "(not p or q) and (not q or p)". It is also logically equivalent to "(p and q) or
(not p and not q)", meaning “both or neither”.
The only difference from the material conditional is the case when the hypothesis is false but the conclusion is true. In that case, the conditional is true, yet the biconditional is false.
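The equivalences stated above can be checked exhaustively over the four truth-value assignments; a small Python sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q   # material conditional

def iff(p, q):
    return p == q         # biconditional (XNOR on Booleans)

# Verify the stated equivalent forms for every assignment of p and q.
for p, q in product([False, True], repeat=2):
    assert iff(p, q) == (implies(p, q) and implies(q, p))
    assert iff(p, q) == ((p and q) or (not p and not q))

# The single row where conditional and biconditional disagree:
# hypothesis false, conclusion true.
assert implies(False, True) and not iff(False, True)
```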
In the conceptual interpretation, a = b means “All a 's are b 's and all b 's are a 's"; in other words, the sets a and b
coincide: they are identical. This does not mean that the concepts have the same meaning. Examples: “triangle” and
“trilateral”, “equiangular trilateral” and “equilateral triangle”. The antecedent is the subject and the consequent is the
predicate of a universal affirmative proposition.
In the propositional interpretation, a ⇔ b means that a implies b and b implies a; in other words, that the propositions
are equivalent, that is to say, either true or false at the same time. This does not mean that they have the same meaning.
Example: “The triangle ABC has two equal sides”, and “The triangle ABC has two equal angles”. The antecedent is
the premise or the cause and the consequent is the consequence. When an implication is translated by a hypothetical
(or conditional) judgment the antecedent is called the hypothesis (or the condition) and the consequent is called the
thesis.
A common way of demonstrating a biconditional is to use its equivalence to the conjunction of two converse conditionals,
demonstrating these separately.
When both members of the biconditional are propositions, it can be separated into two conditionals, of which one
is called a theorem and the other its reciprocal. Thus whenever a theorem and its reciprocal are true we have a
biconditional. A simple theorem gives rise to an implication whose antecedent is the hypothesis and whose consequent
is the thesis of the theorem.
It is often said that the hypothesis is the sufficient condition of the thesis, and the thesis the necessary condition of
the hypothesis; that is to say, it is sufficient that the hypothesis be true for the thesis to be true; while it is necessary
that the thesis be true for the hypothesis to be true also. When a theorem and its reciprocal are true we say that its
hypothesis is the necessary and sufficient condition of the thesis; that is to say, that it is at the same time both cause
and consequence.
17.1 Definition
Logical equality (also known as biconditional) is an operation on two logical values, typically the values of two
propositions, that produces a value of true if and only if both operands are false or both operands are true.
x1 ↔ ... ↔ xn
meant as equivalent to
¬(¬x1 ⊕ ... ⊕ ¬xn)
x1 ↔ ... ↔ xn
meant as shorthand for
(x1 ∧ ... ∧ xn) ∨ (¬x1 ∧ ... ∧ ¬xn)
(The Venn diagrams and truth-table matrices that illustrated these operations are omitted here.)
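These two readings agree for two operands but come apart for three or more; a Python sketch, with Booleans standing for truth values:

```python
from functools import reduce
from itertools import product

def chained_iff(*xs):
    # x1 <-> ... <-> xn read as ¬(¬x1 ⊕ ... ⊕ ¬xn): a left-folded XNOR.
    return reduce(lambda a, b: a == b, xs)

def all_or_none(*xs):
    # x1 <-> ... <-> xn read as (x1 ∧ ... ∧ xn) ∨ (¬x1 ∧ ... ∧ ¬xn).
    return all(xs) or not any(xs)

# chained_iff really computes ¬(¬x1 ⊕ ... ⊕ ¬xn): check it for n = 3.
for xs in product([False, True], repeat=3):
    neg_xor = not reduce(lambda a, b: a != b, [not x for x in xs])
    assert chained_iff(*xs) == neg_xor

# The readings coincide for two arguments but differ for three or more.
assert chained_iff(True, False, False) != all_or_none(True, False, False)
```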
17.2 Properties
commutativity: yes
associativity: yes
distributivity: biconditional does not distribute over any binary function (not even itself),
but logical disjunction distributes over biconditional.
idempotency: no
monotonicity: no
truth-preserving: yes
When all inputs are true, the output is true.
falsehood-preserving: no
When all inputs are false, the output is not false.
Walsh spectrum: (2,0,0,2)
Nonlinearity: 0 (the function is linear)
Like all connectives in first-order logic, the biconditional has rules of inference that govern its use in formal proofs.
Biconditional introduction allows one to infer that, if B follows from A and A follows from B, then A if and only if B.
For example, from the statements “if I'm breathing, then I'm alive” and “if I'm alive, then I'm breathing”, it can be
inferred that “I'm breathing if and only if I'm alive” or, equally inferrable, “I'm alive if and only if I'm breathing.”
B → A, A → B ∴ A ↔ B
B → A, A → B ∴ B ↔ A
Biconditional elimination allows one to infer a conditional from a biconditional: if (A ↔ B) is true, then one may
infer either direction of the biconditional, (A → B) or (B → A).
For example, if it’s true that I'm breathing if and only if I'm alive, then it’s true that if I'm breathing, I'm alive; likewise,
it’s true that if I'm alive, I'm breathing.
Formally:
(A ↔ B) ∴ (A → B)
also
(A ↔ B) ∴ (B → A)
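Both rules can be verified exhaustively over the four truth-value assignments; a brief Python sketch:

```python
from itertools import product

# Truth-table check of biconditional introduction and elimination.
for A, B in product([False, True], repeat=2):
    impl_ab = (not A) or B      # A → B
    impl_ba = (not B) or A      # B → A
    bicond = (A == B)           # A ↔ B
    # Introduction: both directions together are equivalent to the biconditional.
    assert bicond == (impl_ab and impl_ba)
    # Elimination: from the biconditional, each direction follows.
    if bicond:
        assert impl_ab and impl_ba
```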
• Logical equivalence
• Logical equality
• XNOR gate
• Biconditional elimination
• Biconditional introduction
17.6 Notes
[1] Handbook of Logic, page 81
17.7 References
• Brennan, Joseph G. Handbook of Logic, 2nd Edition. Harper & Row. 1961
This article incorporates material from Biconditional on PlanetMath, which is licensed under the Creative Commons
Attribution/Share-Alike License.
Chapter 18
Logical conjunction
"∧" redirects here. For the logic gate, see AND gate. For exterior product, see Exterior algebra.
In logic and mathematics, and is the truth-functional operator of logical conjunction; the and of a set of operands is
true if and only if all of its operands are true. The logical connective that represents this operator is typically written
as ∧ or · .
"A and B" is true only if A is true and B is true.
An operand of a conjunction is a conjunct.
Related concepts in other fields are:
18.1 Notation
And is usually expressed with an infix operator: in mathematics and logic, ∧; in electronics, · ; and in programming
languages, & or and. In Jan Łukasiewicz's prefix notation for logic, the operator is K, for Polish koniunkcja.[1]
18.2 Definition
Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if and only if both of its operands are true.
The conjunctive identity is 1, which is to say that AND-ing an expression with 1 will never change the value of the
expression. In keeping with the concept of vacuous truth, when conjunction is defined as an operator or function of
arbitrary arity, the empty conjunction (AND-ing over an empty set of operands) is often defined as having the result
1.
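Python's built-in `all` follows exactly this convention; a minimal check:

```python
# The conjunctive identity is true: AND-ing with it never changes the result.
for x in (False, True):
    assert (x and True) == x

# The empty conjunction (AND over no operands) is defined as true (vacuous truth).
assert all([]) is True
assert all([True, False]) is False
```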
Conjunctions of the arguments on the left — The true bits form a Sierpinski triangle.
Conjunction introduction is a classically valid, simple argument form:
A,
B.
Therefore, A and B.
In logical operator notation: A, B ⊢ A ∧ B
Conjunction elimination is another classically valid, simple argument form. Intuitively, it permits the inference of
either conjunct from a conjunction.
A and B.
Therefore, A.
...or alternately,
A and B.
Therefore, B.
A ∧ B ⊢ A
...or alternately,
A ∧ B ⊢ B
18.4 Properties
commutativity: yes
associativity: yes
distributivity: with various operations, especially with or
idempotency: yes
monotonicity: yes
truth-preserving: yes
When all inputs are true, the output is true.
falsehood-preserving: yes
When all inputs are false, the output is false.
Walsh spectrum: (1,−1,−1,1)
Nonlinearity: 1 (the function is bent)
If using binary values for true (1) and false (0), then logical conjunction works exactly like normal arithmetic multiplication.
• 0 AND 0 = 0,
• 0 AND 1 = 0,
• 1 AND 0 = 0,
• 1 AND 1 = 1.
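The identification of AND with multiplication on {0, 1} can be checked directly; a brief Python sketch:

```python
from itertools import product

# With 1 for true and 0 for false, conjunction coincides with multiplication.
for a, b in product([0, 1], repeat=2):
    assert (a & b) == a * b   # bitwise AND on {0, 1} is logical AND
```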
The operation can also be applied to two binary words viewed as bitstrings of equal length, by taking the bitwise
AND of each pair of bits at corresponding positions. For example: 0110 AND 1010 = 0010.
This can be used to select part of a bitstring using a bit mask. For example, 10011101 AND 00001000 = 00001000
extracts the fifth bit of an 8-bit bitstring.
In computer networking, bit masks are used to derive the network address of a subnet within an existing network
from a given IP address, by ANDing the IP address and the subnet mask.
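Both uses can be sketched in Python; the concrete IP address and subnet mask below are illustrative values, not taken from the text:

```python
# Bit-mask extraction: AND with 00001000 keeps only the fifth bit from the left.
assert 0b10011101 & 0b00001000 == 0b00001000

# Network address of a host, derived by AND-ing the address with the subnet mask.
ip      = 0xC0A80182   # 192.168.1.130 (illustrative)
mask    = 0xFFFFFF00   # 255.255.255.0, i.e. a /24 subnet
network = ip & mask
assert network == 0xC0A80100   # 192.168.1.0
```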
Logical conjunction “AND” is also used in SQL operations to form database queries.
The Curry-Howard correspondence relates logical conjunction to product types.
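Under this correspondence a proof of a conjunction is a pair of proofs; a tiny Python sketch (the function names here are hypothetical, chosen for illustration):

```python
# Curry–Howard reading: a proof of A ∧ B is a pair (proof of A, proof of B);
# the two projections are the two conjunction-elimination rules.
def conj_intro(a, b):  return (a, b)
def conj_elim_l(p):    return p[0]
def conj_elim_r(p):    return p[1]

pf = conj_intro("proof of A", "proof of B")
```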
The word “and” can also imply a partition of a thing into parts, as “The American flag is red, white, and blue.” Here
it is not meant that the flag is at once red, white, and blue, but rather that it has a part of each color.
18.9 References
[1] Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated by Otto Bird from the French and German
editions, Dordrecht, North Holland: D. Reidel, passim.
Chapter 19
Logical connective
This article is about connectives in logical systems. For connectors in natural languages, see discourse connective.
For other logical symbols, see List of logic symbols.
In logic, a logical connective (also called a logical operator) is a symbol or word used to connect two or more
sentences (of either a formal or a natural language) in a grammatically valid way, such that the sense of the compound
sentence produced depends only on the original sentences.
The most common logical connectives are binary connectives (also called dyadic connectives) which join two
sentences which can be thought of as the function’s operands. Also commonly, negation is considered to be a unary
connective.
Logical connectives along with quantifiers are the two main types of logical constants used in formal systems such
as propositional logic and predicate logic. Semantics of a logical connective is often, but not always, presented as a
truth function.
A logical connective is similar to, but not equivalent to, a conditional operator.[1]
19.1 In language
The words and and so are grammatical conjunctions joining the sentences (A) and (B) to form the compound sentences
(C) and (D). The and in (C) is a logical connective, since the truth of (C) is completely determined by (A) and (B):
it would make no sense to affirm (A) and (B) but deny (C). However, so in (D) is not a logical connective, since it
would be quite reasonable to affirm (A) and (B) but deny (D): perhaps, after all, Jill went up the hill to fetch a pail of
water, not because Jack had gone up the hill at all.
Various English words and word pairs express logical connectives, and some of them are synonymous. Examples
(with the name of the relationship in parentheses) are:
• “and” (conjunction)
• “and then” (conjunction)
The word “not” (negation) and the phrases “it is false that” (negation) and “it is not the case that” (negation) also
express a logical connective – even though they are applied to a single statement, and do not connect two statements.
• Negation: the symbol ¬ appeared in Heyting in 1929[2][3] (compare with Frege's symbol in his Begriffsschrift);
the symbol ~ appeared in Russell in 1908;[4] an alternative notation is to add a horizontal line on top of the
formula, as in P̄; another alternative notation is to use a prime symbol, as in P′.
• Conjunction: the symbol ∧ appeared in Heyting in 1929[2] (compare to Peano's use of the set-theoretic notation
of intersection ∩[5] ); & appeared at least in Schönfinkel in 1924;[6] . comes from Boole's interpretation of logic
as an elementary algebra.
• Disjunction: the symbol ∨ appeared in Russell in 1908[4] (compare to Peano's use of the set-theoretic notation
of union ∪); the symbol + is also used, in spite of the ambiguity that the + of ordinary elementary algebra
is an exclusive or when interpreted logically in a two-element ring; occasionally in history, a + together with
a dot in the lower right corner has been used by Peirce.[7]
• Implication: the symbol → can be seen in Hilbert in 1917;[8] ⊃ was used by Russell in 1908[4] (compare to
Peano’s inverted C notation); ⇒ was used in Vax.[9]
• Biconditional: the symbol ≡ was used at least by Russell in 1908;[4] ↔ was used at least by Tarski in 1940;[10] ⇔
was used in Vax; other symbols have appeared occasionally in history, such as ⊃⊂ in Gentzen,[11] ~ in Schönfinkel[6]
or ⊂⊃ in Chazal.[12]
• True: the symbol 1 comes from Boole's interpretation of logic as an elementary algebra over the two-element
Boolean algebra; other notations include ∨, to be found in Peano.
• False: the symbol 0 comes also from Boole's interpretation of logic as a ring; other notations include ∧, to be
found in Peano.
Some authors used letters for connectives at some point in history: u. for conjunction (German "und", "and")
and o. for disjunction (German "oder", "or") in earlier works by Hilbert (1904); Np for negation, Kpq
for conjunction, Dpq for alternative denial, Apq for disjunction, Xpq for joint denial, Cpq for implication, Epq for
biconditional in Łukasiewicz (1929);[13] cf. Polish notation.
19.2.3 Redundancy
A logical connective such as converse implication "←" is actually the same as the material conditional with swapped
arguments; thus, the symbol for converse implication is redundant. In some logical calculi (notably, in classical logic),
certain essentially different compound statements are logically equivalent. A less trivial example of a redundancy is
the classical equivalence between ¬P ∨ Q and P → Q. Therefore, a classical-based logical system does not need the
conditional operator "→" if "¬" (not) and "∨" (or) are already in use, or may use the "→" only as syntactic sugar
for a compound having one negation and one disjunction.
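A sketch of this desugaring in Python, treating Booleans as truth values:

```python
from itertools import product

# "P → Q" defined as ¬P ∨ Q matches the material conditional exactly.
def implies(p, q):
    return (not p) or q

for p, q in product([False, True], repeat=2):
    # The conditional is false only when p is true and q is false.
    assert implies(p, q) == (not (p and not q))
```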
There are sixteen Boolean functions associating the input truth values P and Q with four-digit binary outputs. These
correspond to possible choices of binary logical connectives for classical logic. Different implementations of classical
logic can choose different functionally complete subsets of connectives.
One approach is to choose a minimal set, and define other connectives by some logical form, like in the example with
material conditional above. The following are the minimal functionally complete sets of operators in classical logic
whose arities do not exceed 2:
See more details about functional completeness in classical logic at Functional completeness in truth function.
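As an illustration, {¬, ∧} is one such minimal functionally complete set: disjunction, for instance, is definable from it via De Morgan's law, P ∨ Q ≡ ¬(¬P ∧ ¬Q). A quick Python check:

```python
from itertools import product

# Disjunction defined using only negation and conjunction (De Morgan's law).
for p, q in product([False, True], repeat=2):
    assert (p or q) == (not ((not p) and (not q)))
```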
Another approach is to use, with equal rights, the connectives of a certain convenient and functionally complete, but
not minimal, set. This approach requires more propositional axioms, and each equivalence between logical forms must
be either an axiom or provable as a theorem.
In intuitionistic logic the situation is more complicated. Of its five connectives, {∧, ∨, →, ¬, ⊥}, only negation "¬"
can be reduced to other connectives (see details). None of conjunction, disjunction, or material conditional has
an equivalent form constructed from the other four logical connectives.
19.3 Properties
Some logical connectives possess properties which may be expressed in the theorems containing the connective. Some
of those properties that a logical connective may have are:
• Associativity: Within an expression containing two or more of the same associative connectives in a row, the
order of the operations does not matter as long as the sequence of the operands is not changed.
• Commutativity: The operands of the connective may be swapped preserving logical equivalence to the original
expression.
• Distributivity: A connective denoted by · distributes over another connective denoted by +, if a · (b + c) = (a
· b) + (a · c) for all operands a, b, c.
• Idempotence: Whenever the operands of the operation are the same, the compound is logically equivalent to
the operand.
• Absorption: A pair of connectives ∧ , ∨ satisfies the absorption law if a ∧ (a ∨ b) = a for all operands a, b.
• Monotonicity: If f(a1 , ..., an) ≤ f(b1 , ..., bn) for all a1 , ..., an, b1 , ..., bn ∈ {0,1} such that a1 ≤ b1 , a2 ≤ b2 ,
..., an ≤ bn. E.g., ∨ , ∧ , ⊤ , ⊥ .
• Affinity: Each variable always makes a difference in the truth-value of the operation or it never makes a
difference. E.g., ¬ , ↔ , ↮ , ⊤ , ⊥ .
• Duality: To read the truth-value assignments for the operation from top to bottom on its truth table is the same
as taking the complement of reading the table of the same or another connective from bottom to top. Without
resorting to truth tables it may be formulated as g̃ (¬a1 , ..., ¬an) = ¬g(a1 , ..., an). E.g., ¬ .
• Truth-preserving: The compound all of whose arguments are tautologies is itself a tautology. E.g., ∨ , ∧ , ⊤ , →
, ↔ , ⊂. (see validity)
• Falsehood-preserving: The compound all of whose arguments are contradictions is itself a contradiction. E.g., ∨
, ∧ , ↮ , ⊥ , ⊄, ⊅. (see validity)
• Involutivity (for unary connectives): f(f(a)) = a. E.g. negation in classical logic.
For classical and intuitionistic logic, the "=" symbol means that corresponding implications "…→…" and "…←…" for
logical compounds can be both proved as theorems, and the "≤" symbol means that "…→…" for logical compounds
is a consequence of corresponding "…→…" connectives for propositional variables. Some many-valued logics may
have incompatible definitions of equivalence and order (entailment).
Both conjunction and disjunction are associative, commutative and idempotent in classical logic, most varieties of
many-valued logic and intuitionistic logic. The same is true about distributivity of conjunction over disjunction and
disjunction over conjunction, as well as for the absorption law.
In classical logic and some varieties of many-valued logic, conjunction and disjunction are dual, and negation is
self-dual, the latter is also self-dual in intuitionistic logic.
However, not all authors use the same order; for instance, an ordering in which disjunction has lower precedence than
implication or bi-implication has also been used.[15] Sometimes the precedence between conjunction and disjunction
is left unspecified, requiring it to be given explicitly in a given formula with parentheses. The order of precedence
determines which connective is the "main connective" when interpreting a non-atomic formula.
19.7 Notes
[1] Cogwheel. “What is the difference between logical and conditional /operator/". Stack Overflow. Retrieved 9 April 2015.
[3] Denis Roegel (2002), Petit panorama des notations logiques du 20e siècle (see chart on page 2).
[4] Russell (1908) Mathematical logic as based on the theory of types (American Journal of Mathematics 30, p222–262, also
in From Frege to Gödel edited by van Heijenoort).
[6] Schönfinkel (1924) Über die Bausteine der mathematischen Logik, translated as On the building blocks of mathematical
logic in From Frege to Gödel edited by van Heijenoort.
[10] Tarski (1940) Introduction to logic and to the methodology of deductive sciences.
[14] O'Donnell, John; Hall, Cordelia; Page, Rex (2007), Discrete Mathematics Using a Computer, Springer, p. 120, ISBN
9781846285981.
[15] Jackson, Daniel (2012), Software Abstractions: Logic, Language, and Analysis, MIT Press, p. 263, ISBN 9780262017152.
19.8 References
• Bocheński, Józef Maria (1959), A Précis of Mathematical Logic, translated from the French and German edi-
tions by Otto Bird, D. Reidel, Dordrecht, South Holland.
• Enderton, Herbert (2001), A Mathematical Introduction to Logic (2nd ed.), Boston, MA: Academic Press,
ISBN 978-0-12-238452-3
• Gamut, L.T.F (1991), “Chapter 2”, Logic, Language and Meaning 1, University of Chicago Press, pp. 54–64,
OCLC 21372380
• Rautenberg, W. (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer Sci-
ence+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6.
• Lloyd Humberstone (2010), "Sentence Connectives in Formal Logic", Stanford Encyclopedia of Philosophy
(An abstract algebraic logic approach to connectives.)
Logical disjunction
“Disjunction” redirects here. For the logic gate, see OR gate. For separation of chromosomes, see Meiosis. For
disjunctions in distribution, see Disjunct distribution.
In logic and mathematics, or is the truth-functional operator of (inclusive) disjunction, also known as alternation;
the or of a set of operands is true if and only if one or more of its operands is true. The logical connective that
represents this operator is typically written as ∨ or + .
"A or B" is true if A is true, or if B is true, or if both A and B are true.
In logic, or by itself means the inclusive or, as distinguished from the exclusive or, which is false when both of its
arguments are true, whereas an inclusive “or” is true in that case.
An operand of a disjunction is called a disjunct.
Related concepts in other fields are:
20.1 Notation
Or is usually expressed with an infix operator: in mathematics and logic, ∨; in electronics, +; and in programming
languages, | or or. In Jan Łukasiewicz's prefix notation for logic, the operator is A, for Polish alternatywa.[1]
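For illustration, a short Python sketch (an addition here, not part of the article) of the two programming-language notations mentioned above:

```python
# `or` is Python's logical disjunction (short-circuits on the first
# truthy operand); `|` is bitwise disjunction.
a, b = True, False
print(a or b)            # True: logical or
print(0b0101 | 0b0011)   # 7: each bit set that is 1 in either operand
```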
20.2 Definition
Logical disjunction is an operation on two logical values, typically the values of two propositions, that has a value
of false if and only if both of its operands are false. More generally, a disjunction is a logical formula that can have
one or more literals separated only by ORs. A single literal is often considered to be a degenerate disjunction.
The disjunctive identity is false, which is to say that the or of an expression with false has the same value as the original
expression. In keeping with the concept of vacuous truth, when disjunction is defined as an operator or function of
arbitrary arity, the empty disjunction (OR-ing over an empty set of operands) is generally defined as false.
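As a sketch of this convention, a variadic OR in Python (the function name is illustrative):

```python
def disjunction(*operands):
    """n-ary OR: true iff at least one operand is true; the empty
    disjunction is False, the disjunctive identity."""
    return any(operands)

print(disjunction())              # False: OR over no operands
print(disjunction(False, True))   # True
```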
Disjunctions of the arguments on the left: the false bits form a Sierpinski triangle.
20.3 Properties
• Commutativity
• Associativity
• Idempotency
• Monotonicity
• Truth-preserving validity
• False-preserving validity
If using binary values for true (1) and false (0), then logical disjunction works almost like binary addition. The only
difference is that 1 ∨ 1 = 1 , while 1 + 1 = 10 .
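The near-coincidence with binary addition can be tabulated directly (a Python sketch, added here for illustration):

```python
# Compare OR with addition on single bits.
for p in (0, 1):
    for q in (0, 1):
        print(p, q, p | q, p + q)
# Rows agree except the last: 1 | 1 == 1 while 1 + 1 == 2 (binary 10).
```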
20.4 Symbol
The mathematical symbol for logical disjunction varies in the literature. In addition to the word “or”, and the formula
“Apq", the symbol " ∨ ", deriving from the Latin word vel (“either”, “or”) is commonly used for disjunction. For
example: "A ∨ B " is read as "A or B ". Such a disjunction is false if both A and B are false. In all other cases it is
true.
All of the following are disjunctions:
A∨B
¬A ∨ B
A ∨ ¬B ∨ ¬C ∨ D ∨ ¬E.
The corresponding operation in set theory is the set-theoretic union.
OR logic gate
• 0 or 0 = 0
• 0 or 1 = 1
• 1 or 0 = 1
• 1 or 1 = 1
• 1010 or 1100 = 1110
The or operator can be used to set bits in a bit field to 1, by or-ing the field with a constant field with the relevant bits
set to 1. For example, x = x | 0b00000001 will force the final bit to 1 while leaving other bits unchanged.
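A minimal sketch of bit-setting with OR, using the mask from the text (Python shown here for illustration):

```python
x = 0b10100000
x = x | 0b00000001    # OR with a mask sets the masked bits to 1
print(bin(x))         # 0b10100001: final bit forced to 1, others unchanged
```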
20.6 Union
The membership of an element of a union set in set theory is defined in terms of a logical disjunction: x ∈ A ∪ B if
and only if (x ∈ A) ∨ (x ∈ B). Because of this, logical disjunction satisfies many of the same identities as set-theoretic
union, such as associativity, commutativity, distributivity, and de Morgan’s laws.
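The correspondence can be checked mechanically; a Python sketch using its built-in set union operator `|` (example sets are illustrative):

```python
A, B = {1, 2, 3}, {3, 4}
# x is in the union exactly when the disjunction of the memberships holds
for x in range(6):
    assert (x in A | B) == ((x in A) or (x in B))
print(sorted(A | B))   # [1, 2, 3, 4]
```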
20.9 Notes
• George Boole, closely following analogy with ordinary mathematics, premised, as a necessary condition to
the definition of “x + y”, that x and y were mutually exclusive. Jevons, and practically all mathematical logi-
cians after him, advocated, on various grounds, the definition of “logical addition” in a form which does not
necessitate mutual exclusiveness.
20.11 References
[1] Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated by Otto Bird from the French and German
editions, Dordrecht, North Holland: D. Reidel, passim.
Chapter 21
Material conditional
“Logical conditional” redirects here. For other related meanings, see Conditional statement.
Not to be confused with material inference.
The material conditional (also known as "material implication", "material consequence", or simply "implication",
"implies" or "conditional") is a logical connective (or a binary operator) that is often symbolized by a forward
arrow "→".
Venn diagram of A → B: if a member of the set described by this diagram (the red areas) is a member of A, it is in the intersection
of A and B, and it therefore is also in B.
The material conditional is used to form statements of the form "p→q" (termed a conditional statement)
which is read as “if p then q” or “p only if q” and conventionally compared to the English construction “If...then...”.
But unlike the English construction, the material conditional statement "p→q" does not specify a causal relationship
between p and q and is to be understood to mean “if p is true, then q is also true” such that the statement "p→q"
is false only when p is true and q is false.[1] Intuitively, consider that a given p being true and q being false would
prove an “if p is true, q is always also true” statement false, even when the “if p then q” does not represent a causal
relationship between p and q. Instead, the statement asserts only that q is true whenever p is true, and makes no
claim that p causes q. However, note that such a general and informal way of thinking about the material conditional
is not always acceptable, as will be discussed. As such, the material conditional is also to be distinguished from
logical consequence.
The material conditional is also symbolized using:
1. p ⊃ q (Although this symbol may be used for the superset symbol in set theory.);
2. p ⇒ q (Although this symbol is often used for logical consequence (i.e. logical implication) rather than for
material conditional.)
With respect to the material conditionals above, p is termed the antecedent, and q the consequent of the conditional.
Conditional statements may be nested such that either or both of the antecedent or the consequent may themselves
be conditional statements. In the example "(p→q) → (r→s)" both the antecedent and the consequent are conditional
statements.
In classical logic p → q is logically equivalent to ¬(p ∧ ¬q) and by De Morgan’s Law logically equivalent to ¬p ∨ q
.[2] Whereas, in minimal logic (and therefore also intuitionistic logic) p → q only logically entails ¬(p ∧ ¬q) ; and in
intuitionistic logic (but not minimal logic) ¬p ∨ q entails p → q .
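These classical equivalences can be verified exhaustively over the two truth values; a Python sketch (the helper name `implies` is illustrative):

```python
def implies(p, q):
    return (not p) or q   # material conditional rendered as ¬p ∨ q

for p in (False, True):
    for q in (False, True):
        assert implies(p, q) == (not (p and (not q)))   # agrees with ¬(p ∧ ¬q)
print("p → q, ¬(p ∧ ¬q) and ¬p ∨ q agree on all four valuations")
```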
Truth table
The truth table associated with the material conditional p→q is identical to that of ¬p∨q and is also denoted by Cpq.
It is as follows:

p  q  p → q
T  T  T
T  F  F
F  T  T
F  F  T
It may also be useful to note that in Boolean algebra, true and false can be denoted as 1 and 0 respectively with an
equivalent table.
The material conditional can also be characterized deductively in terms of the following rules of inference:
1. Modus ponens;
2. Conditional proof;
3. Classical contraposition;
4. Classical reductio ad absurdum.
Unlike the truth-functional one, this approach to logical connectives permits the examination of structurally identi-
cal propositional forms in various logical systems, where somewhat different properties may be demonstrated. For
example, in intuitionistic logic, which rejects proofs by contraposition as valid rules of inference, (p → q) ⇒ ¬p ∨ q
is not a propositional theorem, but the material conditional is used to define negation.
• Both → and |= are monotonic; i.e., if Γ |= ψ then ∆ ∪ Γ |= ψ , and if φ → ψ then (φ ∧ α) → ψ for any α,
Δ. (In terms of structural rules, this is often referred to as weakening or thinning.)
These principles do not hold in all logics, however. Obviously they do not hold in non-monotonic logics, nor do they
hold in relevance logics.
Other properties of implication (the following expressions are always true, for any logical values of variables):
• reflexivity: a → a
• totality: (a → b) ∨ (b → a)
• truth preserving: The interpretation under which all variables are assigned a truth value of 'true' produces a
truth value of 'true' as a result of material implication.
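A brute-force check of these three properties over classical truth values (a Python sketch; `implies` is an illustrative helper, not from the article):

```python
def implies(p, q):
    return (not p) or q

vals = (False, True)
assert all(implies(a, a) for a in vals)                                 # reflexivity
assert all(implies(a, b) or implies(b, a) for a in vals for b in vals)  # totality
assert implies(True, True)  # truth preserving: the all-'true' assignment yields true
print("reflexivity, totality and truth preservation hold")
```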
The truth-functional theory of the conditional was integral to Frege's new logic (1879). It was taken
up enthusiastically by Russell (who called it “material implication”), Wittgenstein in the Tractatus, and
the logical positivists, and it is now found in every logic text. It is the first theory of conditionals which
students encounter. Typically, it does not strike students as obviously correct. It is logic’s first surprise.
Yet, as the textbooks testify, it does a creditable job in many circumstances. And it has many defenders.
It is a strikingly simple theory: “If A, B" is false when A is true and B is false. In all other cases, “If A,
B" is true. It is thus equivalent to "~(A&~B)" and to "~A or B". "A ⊃ B" has, by stipulation, these truth
conditions.
— Dorothy Edgington, The Stanford Encyclopedia of Philosophy, “Conditionals”[4]
The meaning of the material conditional can sometimes be used in the natural language English “if condition then
consequence” construction (a kind of conditional sentence), where condition and consequence are to be filled with
English sentences. However, this construction also implies a “reasonable” connection between the condition (protasis)
and consequence (apodosis) (see Connexive logic).
The material conditional can yield some unexpected truths when expressed in natural language. For example, any
material conditional statement with a false antecedent is true (see vacuous truth). So the statement “if 2 is odd then 2
is even” is true. Similarly, any material conditional with a true consequent is true. So the statement “if I have a penny
in my pocket then Paris is in France” is always true, regardless of whether or not there is a penny in my pocket. These
problems are known as the paradoxes of material implication, though they are not really paradoxes in the strict sense;
that is, they do not elicit logical contradictions. These unexpected truths arise because speakers of English (and other
natural languages) are tempted to equivocate between the material conditional and the indicative conditional, or other
conditional statements, like the counterfactual conditional and the material biconditional. It is not surprising that a
rigorously defined truth-functional operator does not correspond exactly to all notions of implication or otherwise
expressed by 'if...then...' sentences in English (or their equivalents in other natural languages). For an overview of
some of the various analyses, formal and informal, of conditionals, see the “References” section below.
21.4.1 Conditionals
• Counterfactual conditional
• Indicative conditional
• Corresponding conditional
• Strict conditional
21.5 References
[1] Magnus, P.D (January 6, 2012). “forallx: An Introduction to Formal Logic” (PDF). Creative Commons. p. 25. Retrieved
28 May 2013.
[2] Teller, Paul (January 10, 1989). “A Modern Formal Logic Primer: Sentence Logic Volume 1” (PDF). Prentice Hall. p.
54. Retrieved 28 May 2013.
[3] Clarke, Matthew C. (March 1996). “A Comparison of Techniques for Introducing Material Implication”. Cornell Univer-
sity. Retrieved March 4, 2012.
[4] Edgington, Dorothy (2008). Edward N. Zalta, ed. “Conditionals”. The Stanford Encyclopedia of Philosophy (Winter 2008
ed.).
• Edgington, Dorothy (2001), “Conditionals”, in Lou Goble (ed.), The Blackwell Guide to Philosophical Logic,
Blackwell.
• Quine, W.V. (1982), Methods of Logic, (1st ed. 1950), (2nd ed. 1959), (3rd ed. 1972), 4th edition, Harvard
University Press, Cambridge, MA.
• Stalnaker, Robert, “Indicative Conditionals”, Philosophia, 5 (1975): 269–286.
Monotonic function
“Monotonicity” redirects here. For information on monotonicity as it pertains to voting systems, see monotonicity
criterion.
“Monotonic” redirects here. For other uses, see Monotone (disambiguation).
In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves
the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order
theory.
Figure 1. A monotonically increasing function. It is strictly increasing on the left and right while just non-decreasing in the middle.
The terms “non-decreasing” and “non-increasing” should not be confused with the (much weaker) negative qualifi-
cations “not decreasing” and “not increasing”. For example, the function of figure 3 first falls, then rises, then falls
again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
The term monotonic transformation can also possibly cause some confusion because it refers to a transformation
by a strictly increasing function. Notably, this is the case in economics with respect to the ordinal properties of a
utility function being preserved across a monotonic transform (see also monotone preferences).[1]
A function f (x) is said to be absolutely monotonic over an interval (a, b) if the derivatives of all orders of f are
nonnegative at all points on the interval.
A function f that is monotonic on an interval has the following properties:
• f has limits from the right and from the left at every point of its domain;
• f has a limit at positive or negative infinity ( ±∞ ) of either a real number, ∞ , or (−∞) .
• f can only have jump discontinuities;
• f can only have countably many discontinuities in its domain.
These properties are the reason why monotonic functions are useful in technical work in analysis. Two facts about
these functions are:
• if f is a monotonic function defined on an interval I , then f is differentiable almost everywhere on I , i.e. the
set {x : x ∈ I} of numbers x in I such that f is not differentiable in x has Lebesgue measure zero. In addition,
this result cannot be improved to countable: see Cantor function.
An important application of monotonic functions is in probability theory. If X is a random variable, its cumulative
distribution function FX (x) = Prob(X ≤ x) is a monotonically increasing function.
A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreas-
ing.
When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f , then there is an
inverse function on T for f .
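A strictly increasing function can be inverted numerically on an interval; a bisection sketch in Python (the function, bounds, and names are illustrative assumptions, not from the article):

```python
def inverse(f, y, lo, hi, tol=1e-12):
    """Solve f(x) = y for x in [lo, hi], assuming f is strictly increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid        # solution lies in the upper half
        else:
            hi = mid        # solution lies in the lower half
    return (lo + hi) / 2

# cube is strictly increasing on [0, 3], hence invertible there
print(round(inverse(lambda t: t ** 3, 8.0, 0.0, 3.0), 6))  # 2.0
```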
In functional analysis, an operator T : X → X∗ from a Banach space X to its dual is called a monotone operator if

(T u − T v, u − v) ≥ 0 ∀u, v ∈ X.
Kachurovskii’s theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
A subset G of X × X∗ is said to be a monotone set if for every pair [u1 ,w1 ] and [u2 ,w2 ] in G,
(w1 − w2 , u1 − u2 ) ≥ 0.
G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph
of a monotone operator G(T) is a monotone set. A monotone operator is said to be maximal monotone if its graph
is a maximal monotone set.
A function f between two ordered sets is monotone (or order-preserving) if x ≤ y implies f(x) ≤ f(y), and antitone
(or order-reversing) if x ≤ y implies f(y) ≤ f(x), for all x and y in its domain. The composite of two monotone
mappings is also monotone.
A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain
of f is a lattice, then f must be constant.
Monotone functions are central in order theory. They appear in most articles on the subject and examples from special
applications are found in these places. Some notable special monotone functions are order embeddings (functions for
which x ≤ y if and only if f(x) ≤ f(y)) and order isomorphisms (surjective order embeddings).
A heuristic function h is monotonic (or consistent) if, for every node n and every successor n′ of n, h(n) ≤ c(n, n′) +
h(n′). This is a form of triangle inequality, with n, n′, and the goal Gn closest to n. Because every monotonic heuristic
is also admissible, monotonicity is a stricter requirement than admissibility. In some heuristic algorithms, such as A*,
using a monotonic heuristic guarantees that an optimal solution is found.[2]
• Pseudo-monotone operator
• Total monotonicity
22.8 Notes
[1] See the section on Cardinal Versus Ordinal Utility in Simon & Blume (1994).
[2] Conditions for optimality: Admissibility and consistency pg. 94-95 (Russell & Norvig 2010).
22.9 Bibliography
• Bartle, Robert G. (1976). The elements of real analysis (2nd ed.).
• Grätzer, George (1971). Lattice theory: first concepts and distributive lattices. ISBN 0-7167-0442-0.
• Pemberton, Malcolm; Rau, Nicholas (2001). Mathematics for economists: an introductory textbook. Manch-
ester University Press. ISBN 0-7190-3341-1.
• Renardy, Michael and Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in
Applied Mathematics 13 (2nd ed.). New York: Springer-Verlag. p. 356. ISBN 0-387-00444-0.
• Riesz, Frigyes and Béla Szőkefalvi-Nagy (1990). Functional Analysis. Courier Dover Publications. ISBN
978-0-486-66289-3.
• Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle
River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.
• Simon, Carl P.; Blume, Lawrence (April 1994). Mathematics for Economists (1st ed.). ISBN 978-0-
393-95733-4. (Definition 9.31)
• Convergence of a Monotonic Sequence by Anik Debnath and Thomas Roxlo (The Harker School), Wolfram
Demonstrations Project.
n-ary group
In mathematics, and in particular universal algebra, the concept of n-ary group (also called n-group or multiary
group) is a generalization of the concept of group to a set G with an n-ary operation instead of a binary operation.[1]
By an n-ary operation is meant any set map f: Gn → G from the n-th Cartesian power of G to G. The axioms for
an n-ary group are defined in such a way that they reduce to those of a group in the case n = 2. The earliest work
on these structures was done in 1904 by Kasner and in 1928 by Dörnte;[2] the first systematic account of (what were
then called) polyadic groups was given in 1940 by Emil Leon Post in a famous 143-page paper in the Transactions
of the American Mathematical Society.[3]
23.1 Axioms
23.1.1 Associativity
The easiest axiom to generalize is the associative law. Ternary associativity is the polynomial identity (abc)de =
a(bcd)e = ab(cde), i.e. the equality of the three possible bracketings of the string abcde in which any three consecutive
symbols are bracketed. (Here it is understood that the equations hold for arbitrary choices of elements a,b,c,d,e in
G.) In general, n-ary associativity is the equality of the n possible bracketings of a string consisting of n + (n − 1) =
2n − 1 distinct symbols with any n consecutive symbols bracketed. A set G which is closed under an associative n-ary
operation is called an n-ary semigroup. A set G which is closed under any (not necessarily associative) n-ary
operation is called an n-ary groupoid.
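Ternary associativity can be verified exhaustively for a small example; a Python sketch using the ternary operation f(a, b, c) = (a + b + c) mod 5 on Z₅ (an illustrative choice, not from the article):

```python
def f(a, b, c):
    return (a + b + c) % 5

elems = range(5)
# check (abc)de = a(bcd)e = ab(cde) for all choices of elements
for a in elems:
    for b in elems:
        for c in elems:
            for d in elems:
                for e in elems:
                    assert f(f(a, b, c), d, e) == f(a, f(b, c, d), e) == f(a, b, f(c, d, e))
print("ternary associativity holds in Z_5 under (a + b + c) mod 5")
```

Each bracketing reduces to (a + b + c + d + e) mod 5, which is why the three expressions agree.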
An n-ary generalization of the identity is an element e (called an n-ary identity or neutral element) such that any
string of n elements consisting of all e's, apart from one place, is mapped to the element at that place. E.g., in a
quaternary group with identity e, eeae = a for every a.
An n-ary group containing a neutral element is reducible. Thus, an n-ary group that is not reducible does not contain
such elements. There exist n-ary groups with more than one neutral element. If the set of all neutral elements of an
n-ary group is non-empty it forms an n-ary subgroup.[4]
Some authors include an identity in the definition of an n-ary group but as mentioned above such n-ary operations
are just repeated binary operations. Groups with intrinsically n-ary operations do not have an identity element.[5]
23.2 Example
The following is an example of a three-element ternary group, one of four such groups.[6]
23.4 References
[1] Dudek, W.A. (2001), “On some old and new problems in n-ary groups”, Quasigroups and Related Systems 8: 15–36.
[2] W. Dörnte, Untersuchungen über einen verallgemeinerten Gruppenbegriff, Mathematische Zeitschrift, vol. 29 (1928), pp.
1-19.
[3] E. L. Post, Polyadic groups, Transactions of the American Mathematical Society 48 (1940), 208–350.
[4] Wiesław A. Dudek, Remarks to Głazek’s results on n-ary groups, Discussiones Mathematicae. General Algebra and Appli-
cations 27 (2007), 199–233.
[5] Wiesław A. Dudek and Kazimierz Głazek, Around the Hosszú-Gluskin theorem for n-ary groups, Discrete Mathematics
308 (2008), 4861–4876.
[6] http://home.comcast.net/~tamivox/dave/math/tern_quasi/assoc1234.html
Negation
In logic, negation, also called logical complement, is an operation that takes a proposition p to another proposition
“not p", written ¬p, which is interpreted intuitively as being true when p is false and false when p is true. Negation
is thus a unary (single-argument) logical connective. It may be applied as an operation on propositions, truth values,
or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes
truth to falsity and vice versa. In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation,
the negation of a proposition p is the proposition whose proofs are the refutations of p.
24.1 Definition
No agreement exists as to the possibility of defining negation, as to its logical status, function, and meaning, as to its
field of applicability..., and as to the interpretation of the negative judgment, (F.H. Heinemann 1944).[1]
Classical negation is an operation on one logical value, typically the value of a proposition, that produces a value
of true when its operand is false and a value of false when its operand is true. So, if statement A is true, then ¬A
(pronounced “not A”) would therefore be false; and conversely, if ¬A is true, then A would be false.
The truth table of ¬p is as follows:
Classical negation can be defined in terms of other logical operations. For example, ¬p can be defined as p → F, where
"→" is logical consequence and F is absolute falsehood. Conversely, one can define F as p & ¬p for any proposition
p, where "&" is logical conjunction. The idea here is that any contradiction is false. While these ideas work in both
classical and intuitionistic logic, they do not work in Brazilian logic, where contradictions are not necessarily false.
But in classical logic, we get a further identity: p → q can be defined as ¬p ∨ q, where "∨" is logical disjunction: “not
p, or q".
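A sketch in Python of negation defined via p → F (the helper names are illustrative):

```python
def implies(p, q):
    return (not p) or q

def neg(p):
    return implies(p, False)   # ¬p defined as p → F

for p in (False, True):
    assert neg(p) == (not p)          # matches primitive negation
    assert (p and neg(p)) == False    # any contradiction p ∧ ¬p is false
print("negation defined as p → F agrees with `not` classically")
```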
Algebraically, classical negation corresponds to complementation in a Boolean algebra, and intuitionistic negation to
pseudocomplementation in a Heyting algebra. These algebras provide a semantics for classical and intuitionistic logic
respectively.
24.2 Notation
The negation of a proposition p is notated in different ways in various contexts of discussion and fields of application.
Among these variants are the following:
In set theory \ is also used to indicate 'not member of': U \ A is the set of all members of U that are not members of
A.
No matter how it is notated or symbolized, the negation ¬p / −p can be read as “it is not the case that p", “not that p",
or usually more simply (though not grammatically) as “not p".
24.3 Properties
24.3.2 Distributivity
De Morgan’s laws provide a way of distributing negation over disjunction and conjunction:
24.3.3 Linearity
In Boolean algebra, a linear function is one for which there exist a0, a1, ..., an ∈ {0,1} such that f(b1, ..., bn) = a0 ⊕
(a1 ∧ b1) ⊕ ... ⊕ (an ∧ bn) for all b1, ..., bn ∈ {0,1}.
Another way to express this is that each variable always makes a difference in the truth-value of the operation or it
never makes a difference. Negation is a linear logical operator.
24.5 Programming
As in mathematics, negation is used in computer science to construct logical statements.
The "!" signifies logical NOT in B, C, and languages with a C-inspired syntax such as C++, Java, JavaScript, Perl,
and PHP. “NOT” is the operator used in ALGOL 60, BASIC, and languages with an ALGOL- or BASIC-inspired
syntax such as Pascal, Ada, Eiffel and Seed7. Some languages (C++, Perl, etc.) provide more than one operator for
negation. A few languages like PL/I and Ratfor use ¬ for negation. Some modern computers and operating systems
will display ¬ as ! on files encoded in ASCII. Most modern languages allow the above statement to be shortened from
if (!(r == t)) to if (r != t), which sometimes produces faster programs when the compiler/interpreter is not able to
optimize the longer form.
In computer science there is also bitwise negation. This takes the value given and switches all the binary 1s to 0s
and 0s to 1s; see bitwise operation. It is used to create the ones’ complement of a value ("~" in C or C++) and, by
additionally adding one, the two’s complement (often simplified to "-", the negative sign, since this is equivalent to
taking the arithmetic negative value of the number), which creates the opposite (negative-value equivalent) of the
value.
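The relationship between bitwise NOT, ones' complement and two's complement can be seen in Python, where integers are unbounded and `~x == -x - 1` (a sketch added here for illustration):

```python
x = 0b00001010             # 10
print(~x)                  # -11: bitwise NOT, since ~x == -x - 1
print(-x)                  # -10: arithmetic negative (two's complement)
print(bin(~x & 0xFF))      # 0b11110101: ones' complement within 8 bits
```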
To get the absolute (positive equivalent) value of a given integer, the following would work, as the "-" changes it from
negative to positive (it is negative because “x < 0” yields true):

unsigned int abs(int x) { if (x < 0) return -x; else return x; }
Inverting the condition and reversing the outcomes produces code that is logically equivalent to the original code, i.e.
will have identical results for any input (note that depending on the compiler used, the actual instructions performed
by the computer may differ).
This convention occasionally surfaces in written speech, as computer-related slang for not. The phrase !voting, for
example, means “not voting”.
• Logical disjunction
• NOT gate
• Bitwise NOT
• Ampheck
• Apophasis
• Cyclic negation
• Grammatical polarity
• Negation (linguistics)
• Negation as failure
• Square of opposition
• Binary opposition
24.8 References
[1] Horn, Laurence R. (2001). “Chapter 1”. A Natural History of Negation (PDF). Stanford University: CSLI Pub-
lications. p. 1. ISBN 1-57586-336-7. Retrieved 29 Dec 2013.
• Wansing, Heinrich, 2001, “Negation”, in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic, Blackwell.
• Marco Tettamanti, Rosa Manenti, Pasquale A. Della Rosa, Andrea Falini, Daniela Perani, Stefano F. Cappa
and Andrea Moro (2008). “Negation in the brain: Modulating action representation”, NeuroImage Volume
43, Issue 2, 1 November 2008, pages 358–367, http://dx.doi.org/10.1016/j.neuroimage.2008.08.004/
• Hazewinkel, Michiel, ed. (2001), “Negation”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-
010-4
• NOT, on MathWorld
Chapter 25
NOR gate
This article is about NOR in the sense of an electronic logic gate (e.g. CMOS 4001). For NOR in the purely logical
sense, see Logical NOR. For other uses of NOR or Nor, see Nor (disambiguation).
“4001” redirects here. For the year, see 5th millennium.
The NOR gate is a digital logic gate that implements logical NOR, according to the following truth table:

A  B  A NOR B
0  0  1
0  1  0
1  0  0
1  1  0

A HIGH output (1) results only if both inputs to the gate are LOW (0); if one or both inputs are HIGH (1), a LOW
output (0) results. NOR is the result of the negation of the OR operator. It can also be seen as an AND gate with all
of its inputs inverted. NOR is a functionally complete operation: NOR gates can be combined to generate any other
logical function. By contrast, the OR operator is monotonic, as it can only change LOW to HIGH but not vice versa.
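Functional completeness means NOT, OR and AND can all be built from NOR alone; a Python sketch of the standard constructions (the helper names are illustrative):

```python
def nor(a, b):
    return 0 if (a or b) else 1

def not_(a):    return nor(a, a)               # NOT a   = a NOR a
def or_(a, b):  return not_(nor(a, b))         # a OR b  = NOT (a NOR b)
def and_(a, b): return nor(not_(a), not_(b))   # a AND b = (NOT a) NOR (NOT b)

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert or_(a, b) == (a | b)
        assert and_(a, b) == (a & b)
print("NOT, OR and AND reconstructed from NOR alone")
```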
In most, but not all, circuit implementations, the negation comes for free—including CMOS and TTL. In such logic
families, OR is the more complicated operation; it may use a NOR followed by a NOT. A significant exception is
some forms of the domino logic family.
The original Apollo Guidance Computer used 4,100 ICs, each one containing only a single 3-input NOR gate.
25.1 Symbols
There are three symbols for NOR gates: the American (ANSI or 'military') symbol and the IEC ('European' or
'rectangular') symbol, as well as the deprecated DIN symbol. For more information see Logic Gate Symbols. The
ANSI symbol for the NOR gate is a standard OR gate with an inversion bubble connected.
25.2.1 Availability
These devices are available from most semiconductor manufacturers such as Fairchild Semiconductor, Philips or
Texas Instruments. These are usually available in both through-hole DIP and SOIC format. Datasheets are readily
available in most datasheet databases.
In the popular CMOS and TTL logic families, NOR gates with up to 8 inputs are available:
• CMOS
• TTL
In the older RTL and ECL families, NOR gates were efficient and most commonly used.
25.3 Implementations
The diagrams above show the construction of a 2-input NOR gate using NMOS logic circuitry. If either of the inputs
are high, the corresponding N-channel MOSFET is turned on and the output is pulled low; otherwise the output is
pulled high through the pull-up resistor.
The diagram below shows a 2-input NOR gate using CMOS technology. The diodes and resistors on the inputs are
to protect the CMOS components from damage due to electrostatic discharge (ESD) and play no part in the logical
function of the circuit.
25.3.1 Alternatives
If no specific NOR gates are available, one can be made from NAND gates, because NAND and NOR gates are
considered the “universal gates”, meaning that they can be used to make all the others.[1]
• NOT gate
• NAND gate
• XOR gate
• XNOR gate
• NOR logic
25.5 References
[1] Mano, M. Morris and Charles R. Kime. Logic and Computer Design Fundamentals, Third Edition. Prentice Hall, 2004. p.
73.
Chapter 26
Post canonical system
A Post canonical system, as created by Emil Post, is a string-manipulation system that starts with finitely many
strings and repeatedly transforms them by applying a finite set of specified rules of a certain form, thus generating a
formal language. Today they are mainly of historical relevance because every Post canonical system can be reduced to
a string rewriting system (semi-Thue system), which is a simpler formulation. Both formalisms are Turing complete.
26.1 Definition
A Post canonical system is a triplet (A,I,R), where
• A is a finite alphabet, and finite (possibly empty) strings on A are called words.
• I is a finite set of initial words.
• R is a finite set of string-transforming rules (called production rules), each rule being of the following form:
g$ → $h
In 1943 Post proved the remarkable Normal-form Theorem, which applies to the most general type of Post canonical
system:
Given any Post canonical system on an alphabet A, a Post canonical system in normal form can be
constructed from it, possibly enlarging the alphabet, such that the set of words involving only letters of
A that are generated by the normal-form system is exactly the set of words generated by the original
system.
Tag systems, which comprise a universal computational model, are notable examples of Post normal-form systems
that are also monogenic. (A canonical system is said to be monogenic if, given any string, at most one new string can be
produced from it in one step — i.e., the system is deterministic.)
In a string rewriting system, every rule replaces one fixed substring g by another, h, wherever it occurs: P1 gP2 → P1 hP2
That is, each production rule is a simple substitution rule, often written in the form g → h. It has been proved that
any Post canonical system is reducible to such a substitution system, which, as a formal grammar, is also called a
phrase-structure grammar, or a type-0 grammar in the Chomsky hierarchy.
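The substitution systems to which canonical systems reduce can be sketched with a toy rewriting engine; the rules and alphabet below are illustrative, not taken from the text.

```python
# A minimal string rewriting (semi-Thue) system: each rule g -> h rewrites
# one occurrence of g anywhere in a word (P1 g P2 -> P1 h P2).
# The instance at the bottom is an assumed example for demonstration.

def rewrite_step(word, rules):
    """Yield every word reachable from `word` in one rewriting step."""
    results = set()
    for g, h in rules:
        start = word.find(g)
        while start != -1:
            results.add(word[:start] + h + word[start + len(g):])
            start = word.find(g, start + 1)
    return results

def generate(initial, rules, max_len=6):
    """Collect all words of bounded length generated from `initial`."""
    seen, frontier = {initial}, {initial}
    while frontier:
        frontier = {w for u in frontier for w in rewrite_step(u, rules)
                    if len(w) <= max_len} - seen
        seen |= frontier
    return seen

# Rules ab -> b and b -> bb over the alphabet {a, b}:
print(sorted(generate("ab", [("ab", "b"), ("b", "bb")], max_len=4)))
```

Bounding the word length makes the generated (in general infinite) language finite enough to print.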
26.4 References
• Emil Post, “Formal Reductions of the General Combinatorial Decision Problem,” American Journal of Mathematics 65 (2): 197–215, 1943.
• Marvin Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, Inc., N.J., 1967.
Chapter 27
Post correspondence problem
Not to be confused with the other Post’s problem on the existence of incomparable r.e. Turing degrees.
Not to be confused with PCP theorem.
The Post correspondence problem is an undecidable decision problem that was introduced by Emil Post in 1946.[1]
Because it is simpler than the halting problem and the Entscheidungsproblem, it is often used in proofs of undecidability.
27.2.1 Example 1
Consider the following two lists:
A solution to this problem would be the sequence (3, 2, 3, 1), because
27.2.2 Example 2
Again using blocks to represent an instance of the problem, the following is an example that has infinitely many
solutions in addition to the kind obtained by merely “repeating” a solution.
In this instance, every sequence of the form (1, 2, 2, . . ., 2, 3) is a solution (in addition to all their repetitions):
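The original lists for these examples did not survive extraction, so the instance below is an assumed one: it is the commonly used illustration (tops a, ab, bba; bottoms baa, aa, bb), consistent with the quoted solution (3, 2, 3, 1), and can be found by a bounded brute-force search.

```python
from itertools import product

# Bounded brute-force search for a PCP match over an assumed example
# instance. Undecidability means no bound works in general; this only
# finds short solutions when they exist.
def solve_pcp(tops, bottoms, max_tiles=6):
    """Return the first (shortest) 1-based index sequence whose top and
    bottom concatenations agree, or None if none exists within the bound."""
    n = len(tops)
    for length in range(1, max_tiles + 1):
        for seq in product(range(n), repeat=length):
            if "".join(tops[i] for i in seq) == "".join(bottoms[i] for i in seq):
                return tuple(i + 1 for i in seq)
    return None

print(solve_pcp(["a", "ab", "bba"], ["baa", "aa", "bb"]))   # -> (3, 2, 3, 1)
```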
The most common proof for the undecidability of PCP describes an instance of PCP that can simulate the computation
of a Turing machine on a particular input. A match will occur if and only if the input would be accepted by the Turing
machine. Because deciding if a Turing machine will accept an input is a basic undecidable problem, PCP cannot
be decidable either. The following discussion is based on Michael Sipser's textbook Introduction to the Theory of
Computation.[2]
In more detail, the idea is that the string along the top and bottom will be a computation history of the Turing
machine’s computation. This means it will list a string describing the initial state, followed by a string describing the
next state, and so on until it ends with a string describing an accepting state. The state strings are separated by some
separator symbol (usually written #). According to the definition of a Turing machine, the full state of the machine
consists of three parts:
• The current state of the finite state machine which operates the tape head.
• The current contents of the tape.
• The position of the tape head on the tape.
Although the tape has infinitely many cells, only some finite prefix of these will be non-blank. We write these down
as part of our state. To describe the state of the finite control, we create new symbols, labelled q1 through qk, for each
of the finite state machine’s k states. We insert the correct symbol into the string describing the tape’s contents at the
position of the tape head, thereby indicating both the tape head’s position and the current state of the finite control.
For the alphabet {0,1}, a typical state might look something like:
101101110q7 00110.
A simple computation history would then look something like this:
q0 101#1q4 01#11q2 1#1q8 10.
We start out with this block, where x is the input string and q0 is the start state:
The top starts out “lagging” the bottom by one state, and keeps this lag until the very end stage. Next, for each symbol
a in the tape alphabet, as well as #, we have a “copy” block, which copies it unmodified from one state to the next:
We also have a block for each position transition the machine can make, showing how the tape head moves, how the
finite state changes, and what happens to the surrounding symbols. For example, here the tape head is over a 0 in
state 4, and then writes a 1 and moves right, changing to state 7:
Finally, when the top reaches an accepting state, the bottom needs a chance to finally catch up to complete the match.
To allow this, we extend the computation so that once an accepting state is reached, each subsequent machine step
will cause a symbol near the tape head to vanish, one at a time, until none remain. If qf is an accepting state, we can
represent this with the following transition blocks, where a is a tape alphabet symbol:
There are a number of details to work out, such as dealing with boundaries between states, making sure that our initial
tile goes first in the match, and so on, but this shows the general idea of how a static tile puzzle can simulate a Turing
machine computation.
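Under the Sipser-style construction sketched above, the tile set can be generated mechanically from the machine's transition table. The sketch below follows the shapes described (starting tile, copy tiles, transition tiles, accepting tiles); all names and the demo transition are illustrative.

```python
# Generate PCP tiles (top, bottom pairs) from a Turing machine, following
# the Sipser-style reduction. The demo machine below is an assumed example.

def pcp_tiles(tape_alphabet, delta, q0, qf, input_string):
    """delta maps (state, symbol) -> (new_state, written_symbol, 'L' or 'R')."""
    tiles = [("#", "#" + q0 + input_string + "#")]    # start tile: bottom leads
    # copy tiles carry tape symbols (and the separator #) unchanged
    for a in list(tape_alphabet) + ["#"]:
        tiles.append((a, a))
    # transition tiles, e.g. a right move q,a -> r,b gives tile (qa, br)
    for (q, a), (r, b, move) in delta.items():
        if move == "R":
            tiles.append((q + a, b + r))
        else:  # left move: tile (c qa, r c b) for every tape symbol c
            for c in tape_alphabet:
                tiles.append((c + q + a, r + c + b))
    # accepting tiles let the bottom catch up by consuming neighbours of qf
    for a in tape_alphabet:
        tiles.append((a + qf, qf))
        tiles.append((qf + a, qf))
    tiles.append((qf + "##", "#"))                    # final tile closes the match
    return tiles

demo = pcp_tiles({"0", "1"}, {("q4", "0"): ("q7", "1", "R")}, "q4", "q7", "00")
print(demo[0], len(demo))
```

The transition tile for the example in the text ("in state 4 over a 0, write 1, move right, enter state 7") comes out as the pair (q40, 1q7).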
The previous example
q0 101#1q4 01#11q2 1#1q8 10.
is represented as the following solution to the Post correspondence problem:
27.4 Variants
Many variants of PCP have been considered. One reason is that, when one tries to prove undecidability of some new
problem by reducing from PCP, it often happens that the first reduction one finds is not from PCP itself but from an
apparently weaker version.
• The problem may be phrased in terms of monoid morphisms f, g from the free monoid B∗ to the free monoid
A∗ where B is of size n. The problem is to determine whether there is a word w in B+ such that f(w) = g(w).[3]
• The condition that the alphabet A have at least two symbols is required since the problem is decidable if A has
only one symbol.
• A simple variant is to fix n, the number of tiles. This problem is decidable if n ≤ 2, but remains undecidable
for n ≥ 5. It is unknown whether the problem is decidable for 3 ≤ n ≤ 4.[4]
• The circular Post correspondence problem asks whether indexes i1 , i2 , . . ., ik can be found such that αi1 · · · αik
and βi1 · · · βik are conjugate words, i.e., they are equal modulo rotation. This variant is undecidable.[5]
• One of the most important variants of PCP is the bounded Post correspondence problem, which asks if we
can find a match using no more than k tiles, including repeated tiles. A brute force search solves the problem
in time O(2^k), but this may be difficult to improve upon, since the problem is NP-complete.[6] Unlike some
NP-complete problems like the boolean satisfiability problem, a small variation of the bounded problem was
also shown to be complete for RNP, which means that it remains hard even if the inputs are chosen at random
(it is hard on average over uniformly distributed inputs).[7]
• Another variant of PCP is called the marked Post Correspondence Problem, in which each ui must begin
with a different symbol, and each vi must also begin with a different symbol. Halava, Hirvensalo, and de Wolf
showed that this variation is decidable in exponential time. Moreover, they showed that if this requirement
is slightly loosened so that only one of the first two characters need to differ (the so-called 2-marked Post
Correspondence Problem), the problem becomes undecidable again.[8]
• The Post Embedding Problem is another variant where one looks for indexes i1 , i2 , . . . such that αi1 · · · αik
is a (scattered) subword of βi1 · · · βik . This variant is easily decidable since, when some solutions exist, in
particular a length-one solution exists. More interesting is the Regular Post Embedding Problem, a further
variant where one looks for solutions that belong to a given regular language (submitted, e.g., under the form
of a regular expression on the set {1, . . . , N } ). The Regular Post Embedding Problem is still decidable but,
because of the added regular constraint, it has a very high complexity that dominates every multiply recursive
function.[9]
• The Identity Correspondence Problem (ICP) asks whether a finite set of pairs of words (over a group alpha-
bet) can generate an identity pair by a sequence of concatenations. The problem is undecidable and equivalent
to the following Group Problem: is the semigroup generated by a finite set of pairs of words (over a group
alphabet) a group.[10]
27.5 References
[1] E. L. Post (1946). “A variant of a recursively unsolvable problem” (PDF). Bull. Amer. Math. Soc. 52.
[2] Michael Sipser (2005). “A Simple Undecidable Problem”. Introduction to the Theory of Computation (2nd ed.). Thomson
Course Technology. pp. 199–205. ISBN 0-534-95097-3.
[3] Salomaa, Arto (1981). Jewels of Formal Language Theory. Pitman Publishing. pp. 74–75. ISBN 0-273-08522-0. Zbl
0487.68064.
[4] T. Neary (2015). “Undecidability in Binary Tag Systems and the Post Correspondence Problem for Five Pairs of Words”.
In Ernst W. Mayr and Nicolas Ollinger. 32nd International Symposium on Theoretical Aspects of Computer Science (STACS
2015). STACS 2015 30. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. pp. 649–661. doi:10.4230/LIPIcs.STACS.2015.649.
[5] K. Ruohonen (1983). “On some variants of Post’s correspondence problem”. Acta Informatica (Springer) 19 (4): 357–367.
doi:10.1007/BF00290732.
[6] Michael R. Garey; David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness.
W.H. Freeman. p. 228. ISBN 0-7167-1045-5.
[7] Y. Gurevich (1991). “Average case completeness”. J. Comp. Sys. Sci. (Elsevier Science) 42 (3): 346–398. doi:10.1016/0022-
0000(91)90007-R.
[8] V. Halava; M. Hirvensalo and R. de Wolf (2001). “Marked PCP is decidable”. Theor. Comp. Sci. (Elsevier Science) 255:
193–204. doi:10.1016/S0304-3975(99)00163-2.
[9] P. Chambart; Ph. Schnoebelen (2007). “Post embedding problem is not primitive recursive, with applications to channel
systems”. Lecture Notes in Computer Science. Lecture Notes in Computer Science (Springer) 4855: 265–276. doi:10.1007/978-
3-540-77050-3_22. ISBN 978-3-540-77049-7.
[10] Paul C. Bell; Igor Potapov (2010). “On the Undecidability of the Identity Correspondence Problem and its Applications
for Word and Matrix Semigroups”. International Journal of Foundations of Computer Science (World Scientific) 21.6:
963–978. doi:10.1142/S0129054110007660.
Chapter 28
Post’s inversion formula
Post’s inversion formula for Laplace transforms, named after Emil Post, is a simple-looking but usually impractical
formula for evaluating an inverse Laplace transform.
The statement of the formula is as follows: Let f(t) be a continuous function on the interval [0, ∞) of exponential
order, i.e.
sup_{t>0} f(t)/e^{bt} < ∞
for some real number b. Then for all s > b, the Laplace transform for f(t) exists and is infinitely differentiable with
respect to s. Furthermore, if F(s) is the Laplace transform of f(t), then the inverse Laplace transform of F(s) is given
by
f(t) = L^{−1}{F(s)} = lim_{k→∞} [(−1)^k / k!] (k/t)^{k+1} F^{(k)}(k/t)
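The formula can be checked numerically. For F(s) = 1/(s+1), whose inverse transform is f(t) = e^{−t}, the derivatives are F^(k)(s) = (−1)^k k!/(s+1)^{k+1}, and the slow convergence of the approximants below illustrates why the formula is usually impractical.

```python
import math

# Post's inversion approximants for F(s) = 1/(s+1); the exact inverse
# transform is exp(-t). For this F the k-th approximant simplifies to
# (k/(k+t))^(k+1), but we evaluate the general formula directly.
def post_approximant(t, k):
    s = k / t
    deriv = (-1) ** k * math.factorial(k) / (s + 1) ** (k + 1)   # F^(k)(k/t)
    return (-1) ** k / math.factorial(k) * s ** (k + 1) * deriv

for k in (5, 25, 100):
    print(k, post_approximant(1.0, k), math.exp(-1.0))
```

Even at k = 100 the approximant agrees with e^{−1} only to about two decimal places.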
28.2 References
• Widder, D. V. (1946), The Laplace Transform, Princeton University Press
• Elementary inversion of the Laplace transform. Bryan, Kurt. Accessed June 14, 2006.
Chapter 29
Post’s lattice
In logic and universal algebra, Post’s lattice denotes the lattice of all clones on a two-element set {0, 1}, ordered
by inclusion. It is named for Emil Post, who published a complete description of the lattice in 1941.[1] The relative
simplicity of Post’s lattice is in stark contrast to the lattice of clones on a three-element (or larger) set, which has the
cardinality of the continuum, and a complicated inner structure. A modern exposition of Post’s result can be found
in Lau (2006).[2]
πkn (x1 , . . . , xn ) = xk ,
and given an m-ary function f, and n-ary functions g1 , ..., gm, we can construct another n-ary function, their composition
f (g1 (x1 , . . . , xn ), . . . , gm (x1 , . . . , xn )).
Among the functions used below are the threshold functions
thkn (x1 , . . . , xn ) = 1 if |{i | xi = 1}| ≥ k, and 0 otherwise.
For example, th1n is the large disjunction of all the variables xi , and thnn is the large conjunction. Of particular
importance is the majority function maj(x, y, z) = (x ∧ y) ∨ (x ∧ z) ∨ (y ∧ z) = th23 (x, y, z).
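The threshold functions are one-liners to compute; a quick sketch (the function name is this sketch's own):

```python
from itertools import product

# th(k, x1, ..., xn) outputs 1 iff at least k of the n inputs are 1.
# th(1, ...) is the n-ary disjunction, th(n, ...) the n-ary conjunction,
# and th(2, x, y, z) is the majority function.
def th(k, *xs):
    return 1 if sum(xs) >= k else 0

# sanity checks over all 3-bit inputs
assert all(th(1, *xs) == max(xs) for xs in product((0, 1), repeat=3))
assert all(th(3, *xs) == min(xs) for xs in product((0, 1), repeat=3))
print(th(2, 1, 0, 1))   # majority of (1, 0, 1) -> 1
```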
29.2 Naming of clones
for every i ≤ n, a, b ∈ 2n , and c, d ∈ 2. Equivalently, these are the functions expressible as f(x1 , ..., xn ) = a0 +
a1 x1 + ... + an xn for some a0 , a1 , ..., an (addition modulo 2).
• U is the set of essentially unary functions, i.e., functions which depend on at most one input variable: there
exists an i = 1, ..., n such that f(a) = f(b) whenever ai = bi.
• Λ is the set of conjunctive functions: f(a ∧ b) = f(a) ∧ f(b). The clone Λ consists of the conjunctions
f (x1 , . . . , xn ) = ∧_{i∈I} xi for all subsets I of {1, ..., n} (including the empty conjunction, i.e., the constant 1),
and the constant 0.
• V is the set of disjunctive functions: f(a ∨ b) = f(a) ∨ f(b). Equivalently, V consists of the disjunctions
f (x1 , . . . , xn ) = ∨_{i∈I} xi for all subsets I of {1, ..., n} (including the empty disjunction 0), and the constant
1.
• For any k ≥ 1, T0k is the set of functions f such that
a1 ∧ · · · ∧ ak = 0 ⇒ f (a1 ) ∧ · · · ∧ f (ak ) = 0.
Moreover, T0∞ = ∩_{k=1}^∞ T0k is the set of functions bounded above by a variable: there exists i = 1, ..., n
such that f(a) ≤ ai for all a.
As a special case, P0 = T01 is the set of 0-preserving functions: f(0) = 0.
• Dually, for any k ≥ 1, T1k is the set of functions f such that
a1 ∨ · · · ∨ ak = 1 ⇒ f (a1 ) ∨ · · · ∨ f (ak ) = 1,
and T1∞ = ∩_{k=1}^∞ T1k is the set of functions bounded below by a variable: there exists i = 1, ..., n such
that f(a) ≥ ai for all a.
The special case P1 = T11 consists of the 1-preserving functions: f(1) = 1.
• The largest clone of all functions is denoted ⊤, the smallest clone (which contains only projections) is denoted
⊥, and P = P0 ∩ P1 is the clone of constant-preserving functions.
The set of all clones is a closure system, hence it forms a complete lattice. The lattice is countably infinite, and all its
members are finitely generated. All the clones are listed in the table below.
The eight infinite families also have members with k = 1, but these appear separately in the table: T01 = P0 ,
T11 = P1 , PT01 = PT11 = P, MT01 = MP0 , MT11 = MP1 , MPT01 = MPT11 = MP.
The lattice has a natural symmetry mapping each clone C to its dual clone C d = {f d | f ∈ C}, where f d (x1 , ..., xn) =
¬f(¬x1 , ..., ¬xn) is the de Morgan dual of a Boolean function f. For example, Λd = V, (T0 k )d = T1 k , and Md = M.
29.4 Applications
The complete classification of Boolean clones given by Post helps to resolve various questions about classes of Boolean
functions. For example:
• An inspection of the lattice shows that the maximal clones different from ⊤ (often called Post’s classes) are
M, D, A, P0 , P1 , and every proper subclone of ⊤ is contained in one of them. As a set B of connectives is
functionally complete if and only if it generates ⊤, we obtain the following characterization: B is functionally
complete iff it is not included in one of the five Post’s classes.
• The satisfiability problem for Boolean formulas is NP-complete by Cook’s theorem. Consider a restricted
version of the problem: for a fixed finite set B of connectives, let B-SAT be the algorithmic problem of checking
whether a given B-formula is satisfiable. Lewis[3] used the description of Post’s lattice to show that B-SAT is
NP-complete if the function ↛ can be generated from B (i.e., [B] ⊇ T0 ∞ ), and in all the other cases B-SAT is
polynomial-time decidable.
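The functional-completeness characterization above turns into an executable test: check, for each of the five Post classes, that some connective escapes it. A sketch with truth-table encodings (all names are this sketch's own):

```python
from itertools import product

# Post's criterion: a set of Boolean functions is functionally complete iff
# it is contained in none of the five maximal clones P0, P1, M (monotone),
# D (self-dual), A (affine). Functions are dicts from input tuples to 0/1.

def preserves_0(f, n): return f[(0,) * n] == 0
def preserves_1(f, n): return f[(1,) * n] == 1

def monotone(f, n):
    pts = list(product((0, 1), repeat=n))
    return all(f[a] <= f[b] for a in pts for b in pts
               if all(x <= y for x, y in zip(a, b)))

def self_dual(f, n):
    return all(f[a] == 1 - f[tuple(1 - x for x in a)]
               for a in product((0, 1), repeat=n))

def affine(f, n):
    # affine iff f(x) = a0 + a1 x1 + ... + an xn (mod 2) for some coefficients
    a0 = f[(0,) * n]
    unit = lambda i: tuple(1 if j == i else 0 for j in range(n))
    coef = [f[unit(i)] ^ a0 for i in range(n)]
    return all(f[a] == a0 ^ (sum(c * x for c, x in zip(coef, a)) % 2)
               for a in product((0, 1), repeat=n))

def functionally_complete(funcs):
    """funcs: list of (truth_table, arity) pairs."""
    clones = (preserves_0, preserves_1, monotone, self_dual, affine)
    return all(any(not c(f, n) for f, n in funcs) for c in clones)

NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
AND  = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR   = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
print(functionally_complete([(NAND, 2)]))          # NAND alone generates ⊤
print(functionally_complete([(AND, 2), (OR, 2)]))  # both preserve 0 and 1
```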
29.5 Variants
Post originally did not work with the modern definition of clones, but with the so-called iterative systems, which are
sets of operations closed under substitution
as well as permutation and identification of variables. The main difference is that iterative systems do not necessarily
contain all projections. Every clone is an iterative system, and there are 20 non-empty iterative systems which are not
clones. (Post also excluded the empty iterative system from the classification, hence his diagram has no least element
and fails to be a lattice.) As another alternative, some authors work with the notion of a closed class, which is an
iterative system closed under introduction of dummy variables. There are four closed classes which are not clones:
the empty set, the set of constant 0 functions, the set of constant 1 functions, and the set of all constant functions.
Composition alone does not allow one to generate a nullary function from the corresponding unary constant function;
this is the technical reason why nullary functions are excluded from clones in Post’s classification. If we lift the restriction,
we get more clones. Namely, each clone C in Post’s lattice which contains at least one constant function corresponds
to two clones under the less restrictive definition: C, and C together with all nullary functions whose unary versions
are in C.
29.6 References
[1] E. L. Post, The two-valued iterative systems of mathematical logic, Annals of Mathematics studies, no. 5, Princeton Uni-
versity Press, Princeton 1941, 122 pp.
[2] D. Lau, Function algebras on finite sets: Basic course on many-valued logic and clone theory, Springer, New York, 2006,
668 pp. ISBN 978-3-540-36022-3
[3] H. R. Lewis, Satisfiability problems for propositional calculi, Mathematical Systems Theory 13 (1979), pp. 45–53.
Chapter 30
Post’s theorem
In computability theory Post’s theorem, named after Emil Post, describes the connection between the arithmetical
hierarchy and the Turing degrees.
30.1 Background
The statement of Post’s theorem uses several concepts relating to definability and recursion theory. This section gives
a brief overview of these concepts, which are covered in depth in their respective articles.
The arithmetical hierarchy classifies certain sets of natural numbers that are definable in the language of Peano arith-
metic. A formula is said to be Σ0m if it is an existential statement in prenex normal form (all quantifiers at the front)
with m alternations between existential and universal quantifiers applied to a quantifier free formula. Formally a
formula ϕ(s) in the language of Peano arithmetic is a Σ0m formula if it is of the form
∃n1 ∀n2 ∃n3 · · · Q nm ρ(n1 , . . . , nm , s),
where ρ is a quantifier free formula and Q is ∀ if m is even and ∃ if m is odd. Note that any formula of the form
(∃n11 ∃n12 · · · ∃n1j1 )(∀n21 · · · ∀n2j2 )(∃n31 · · · ) · · · (Q nm1 · · · nmjm ) ρ(n11 , . . . , nmjm , x1 , . . . , xk )
where ρ contains only bounded quantifiers is provably equivalent to a formula of the above form from the axioms of
Peano arithmetic. Thus it isn't uncommon to see Σ0m formulas defined in this alternative and technically nonequivalent
manner since in practice the distinction is rarely important.
A set of natural numbers A is said to be Σ0m if it is definable by a Σ0m formula, that is, if there is a Σ0m formula ϕ(s)
such that each number n is in A if and only if ϕ(n) holds. It is known that if a set is Σ0m then it is Σ0n for any n > m
, but for each m there is a Σ0m+1 set that is not Σ0m . Thus the number of quantifier alternations required to define a
set gives a measure of the complexity of the set.
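As a standard illustration of this quantifier counting (not from the text, but classical, using Kleene's T predicate as the decidable matrix):

```latex
% The halting set K needs one unbounded existential quantifier, so it is
% \Sigma^0_1; the set of (indices of) total computable functions needs a
% universal quantifier in front of it, landing at \Pi^0_2.
K = \{\, e \mid \varphi_e(e)\ \text{halts} \,\}
  = \{\, e \mid \exists s\; T(e, e, s) \,\} \in \Sigma^0_1,
\qquad
\mathrm{Tot} = \{\, e \mid \forall x\, \exists s\; T(e, x, s) \,\} \in \Pi^0_2 .
```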
Post’s theorem uses the relativized arithmetical hierarchy as well as the unrelativized hierarchy just defined. A set
A of natural numbers is said to be Σ0m relative to a set B, written Σ0,Bm , if A is definable by a Σ0m formula in an
extended language that includes a predicate for membership in B.
While the arithmetical hierarchy measures definability of sets of natural numbers, Turing degrees measure the level
of uncomputability of sets of natural numbers. A set A is said to be Turing reducible to a set B, written A ≤T B , if
there is an oracle Turing machine that, given an oracle for B, computes the characteristic function of A. The Turing
jump of a set A is a form of the Halting problem relative to A. Given a set A, the Turing jump A′ is the set of indices
of oracle Turing machines that halt on input 0 when run with oracle A. It is known that every set A is Turing reducible
to its Turing jump, but the Turing jump of a set is never Turing reducible to the original set.
Post’s theorem uses finitely iterated Turing jumps. For any set A of natural numbers, the notation A(n) indicates the
n-fold iterated Turing jump of A. Thus A(0) is just A, and A(n+1) is the Turing jump of A(n) .
Post’s theorem states:
1. A set B is Σ0n+1 if and only if B is recursively enumerable by an oracle Turing machine with an oracle for ∅(n) ;
that is, if and only if B is Σ01 relative to ∅(n) .
2. The set ∅(n) is Σ0n complete for every n > 0 . This means that every Σ0n set is many-one reducible to ∅(n) .
Post’s theorem has many corollaries that expose additional relationships between the arithmetical hierarchy and the
Turing degrees. These include:
1. Fix a set C. A set B is Σ0n+1 relative to C if and only if B is Σ01 relative to C (n) . This is the relativization of
the first part of Post’s theorem to the oracle C.
30.3 References
Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1; ISBN
0-07-053522-1
Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin,
1987. ISBN 3-540-15299-7
Chapter 31
Post–Turing machine
The article Turing machine gives a general introduction to Turing machines, while this article covers a
specific class of Turing machines.
A Post–Turing machine is a “program formulation” of an especially simple type of Turing machine, comprising a
variant of Emil Post's Turing-equivalent model of computation described below. (Post’s model and Turing’s model,
though very similar to one another, were developed independently. Turing’s paper was received for publication in
May 1936, followed by Post’s in October.) A Post–Turing machine uses a binary alphabet, an infinite sequence of
binary storage locations, and a primitive programming language with instructions for bi-directional movement among
the storage locations and alteration of their contents one at a time. The names “Post–Turing program” and “Post–
Turing machine” were used by Martin Davis in 1973–1974 (Davis 1973, p. 69ff). Later in 1980, Davis used the
name “Turing–Post program” (Davis, in Steen p. 241).
Specifically, the i th “direction” (instruction) given to the worker is to be one of the following forms:
(A) Perform operation Oi [Oi = (a), (b), (c) or (d)] and then follow direction ji,
(B) Perform operation (e) and according as the answer is yes or no correspondingly follow direction ji' or
ji' ' ,
(C) Stop.
(The above indented text and italics are as in the original.) Post remarks that this formulation is “in its initial stages”
of development, and mentions several possibilities for “greater flexibility” in its final “definitive form”, including
(1) replacing the infinity of boxes by a finite extensible symbol space, “extending the primitive operations
to allow for the necessary extension of the given finite symbol space as the process proceeds”,
(2) using an alphabet of more than two symbols, “having more than one way to mark a box”,
(3) introducing finitely-many “physical objects to serve as pointers, which the worker can identify and
move from box to box”.
“Our quadruplets are quintuplets in the Turing development. That is, where our standard instruction
orders either a printing (overprinting) or motion, left or right, Turing’s standard instruction always order
a printing and a motion, right, left, or none"(footnote 12, Undecidable p. 300)
Like Turing he defined erasure as printing a symbol “S0”. And so his model admitted quadruplets of only three types
(cf p. 294 Undecidable):
qi Sj L ql,
qi Sj R ql,
qi Sj Sk ql
At this time he was still retaining the Turing state-machine convention – he had not formalized the notion of an
assumed sequential execution of steps until a specific test of a symbol “branched” the execution elsewhere.
write 0
write 1
move left
move right
if scanning 0 then goto instruction i
if scanning 1 then goto instruction j
where sequential execution is assumed, and Post’s single "if ... then ... else" has been “atomised” into two “if ... then
...” statements. (Here '1' and '0' are used where Wang used “marked” and “unmarked”, respectively, and the initial
tape is assumed to contain only '0’s except for finitely-many '1’s.)
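The six Wang-style instructions above are easy to interpret directly; a minimal sketch (instruction spellings are this sketch's own, and the machine halts when execution runs past the last instruction, per Wang's convention quoted below):

```python
from collections import defaultdict

# Interpreter for a Wang-style Post-Turing program: a two-way infinite
# binary tape (blank = 0), sequential execution, and two conditional jumps.
def run(program, tape=None, max_steps=10_000):
    tape = defaultdict(int, tape or {})
    head, pc, steps = 0, 0, 0
    while 0 <= pc < len(program) and steps < max_steps:
        op, *arg = program[pc]
        if   op == "write0": tape[head] = 0
        elif op == "write1": tape[head] = 1
        elif op == "left":   head -= 1
        elif op == "right":  head += 1
        elif op == "if0goto":
            pc = arg[0] if tape[head] == 0 else pc + 1
            steps += 1; continue
        elif op == "if1goto":
            pc = arg[0] if tape[head] == 1 else pc + 1
            steps += 1; continue
        pc += 1
        steps += 1
    return tape, head

# Write three 1s moving right; the program halts by running off the end.
prog = [("write1",), ("right",), ("write1",), ("right",), ("write1",)]
tape, _ = run(prog)
print(sorted(k for k, v in tape.items() if v == 1))
```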
Wang noted the following:
• “Since there is no separate instruction for halt (stop), it is understood that the machine will stop when it has
arrived at a stage that the program contains no instruction telling the machine what to do next.” (p. 65)
31.4. 1974: FIRST DAVIS MODEL 197
• “In contrast with Turing who uses a one-way infinite tape that has a beginning, we are following Post in the use
of a 2-way infinite tape.” (p. 65)
• Unconditional gotos are easily derived from the above instructions, so “we can freely use them too”. (p. 84)
Any binary-tape Turing machine is readily converted to an equivalent “Wang program” using the above instructions.
“Write 1
“Write B
“To A if read 1
“To A if read B
“RIGHT
“LEFT
“PRINT 1
“PRINT 0
“GO RIGHT
“GO LEFT
“GO TO STEP i IF 1 IS SCANNED
“GO TO STEP i IF 0 IS SCANNED
“STOP
“A Turing–Post program is then a list of instructions, each of which is of one of these seven kinds. Of course in
an actual program the letter i in a step of either the fifth or sixth kind must be replaced with a definite (positive whole)
number.” (Davis in Steen, p. 247).
• Confusion arises if one does not realize that a “blank” tape is actually printed with all zeroes — there is no
“blank”.
• Splits Post’s "GO TO" ("branch" or “jump”) instruction into two, thus creating a larger (but easier-to-use)
instruction set of seven rather than Post’s six instructions.
• Does not mention that instructions PRINT 1, PRINT 0, GO RIGHT and GO LEFT imply that, after execution,
the “computer” must go to the next step in numerical sequence.
Note that only one type of “jump” – a conditional GOTO – is specified; for an unconditional jump a string of GOTO’s
must test each symbol.
This model reduces to the binary { 0, 1 } versions presented above, as shown here:
The following “reduction” (decomposition, atomizing) method – from 2-symbol Turing 5-tuples to a sequence of
2-symbol Post–Turing instructions – can be found in Minsky (1961). He states that this reduction to “a program ... a
sequence of Instructions" is in the spirit of Hao Wang’s B-machine (italics in original, cf Minsky (1961) p. 439).
(Minsky’s reduction to what he calls “a sub-routine” results in 5 rather than 7 Post–Turing instructions. He did not
atomize Wi0: “Write symbol Si0; go to new state Mi0”, and Wi1: “Write symbol Si1; go to new state Mi1”. The
following method further atomizes Wi0 and Wi1; in all other respects the methods are identical.)
This reduction of Turing 5-tuples to Post–Turing instructions may not result in an “efficient” Post–Turing program,
but it will be faithful to the original Turing-program.
In the following example, each Turing 5-tuple of the 2-state busy beaver converts into
The table represents just a single Turing “instruction”, but we see that it consists of two lines of 5-tuples, one for
the case “tape symbol under head = 1”, the other for the case “tape symbol under head = 0”. Turing observed
(Undecidable, p. 119) that the left-two columns – “m-configuration” and “symbol” – represent the machine’s current
“configuration” – its state including both Tape and Table at that instant – and the last three columns are its subsequent
“behavior”. As the machine cannot be in two “states” at once, the machine must “branch” to either one configuration
or the other:
After the “configuration branch” (J1 xxx) or (J0 xxx) the machine follows one of the two subsequent “behaviors”. We
list these two behaviors on one line, and number (or label) them sequentially (uniquely). Beneath each jump (branch,
go to) we place its jump-to “number” (address, location):
Per the Post–Turing machine conventions each of the Print, Erase, Left, and Right instructions consist of two actions:
And per the Post–Turing machine conventions the conditional “jumps” J0xxx, J1xxx consist of two actions:
And per the Post–Turing machine conventions the unconditional “jump” Jxxx consists of a single action, or if we
want to regularize the 2-action sequence:
Which, and how many, jumps are necessary? The unconditional jump Jxxx is simply J0 followed immediately by
J1 (or vice versa). Wang (1957) also demonstrates that only one conditional jump is required, i.e. either J0xxx or
J1xxx. However, with this restriction the machine becomes difficult to write instructions for. Often only two are
used, i.e.
but the use of all three { J0xxx, J1xxx, Jxxx } does eliminate extra instructions. In the 2-state busy beaver example
we use only { J1xxx, Jxxx }.
The state diagram of a two-state busy beaver (little drawing, right-hand corner) converts to the equivalent Post–Turing
machine with the substitution of 7 Post–Turing instructions per “Turing” state. The HALT instruction adds the 15th
state:
A “run” of the 2-state busy beaver with all the intermediate steps of the Post–Turing machine shown:
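The run can be reproduced directly from the busy beaver's quintuples; a minimal simulation (state names A, B and the halt marker H are this sketch's own):

```python
from collections import defaultdict

# The 2-state busy beaver's quintuples:
#   A: 0 -> (write 1, R, B)   1 -> (write 1, L, B)
#   B: 0 -> (write 1, L, A)   1 -> (write 1, R, HALT)
delta = {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
         ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H")}

tape, head, state, steps = defaultdict(int), 0, "A", 0
while state != "H":
    symbol, move, state = delta[(state, tape[head])]
    tape[head] = symbol     # write
    head += move            # move left (-1) or right (+1)
    steps += 1

print(steps, sum(tape.values()))   # 6 steps, 4 ones on the tape
```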
31.7 Examples of the Post–Turing machine
The following is a two-state Turing busy beaver with additional instructions 15–20 to demonstrate the use of “Erase”,
J0, etc. These will erase the 1’s written by the busy beaver:
Additional Post–Turing instructions 15 through 20 erase the symbols created by the busy beaver. These “atomic”
instructions are more “efficient” than their Turing-state equivalents (of 7 Post–Turing instructions). To accomplish
the same task a Post–Turing machine will (usually) require fewer Post–Turing states than a Turing-machine, because
(i) a jump (go-to) can occur to any Post–Turing instruction (e.g. P, E, L, R) within the Turing-state, (ii) a grouping
of move-instructions such as L, L, L, P are possible, etc.:
31.8 Example: multiply 3 × 4 with a Post–Turing machine
• Move head far right. Establish (i.e. “clear”) register c by placing a single blank and then a mark to
the right of b
• a_loop: Move head right once, test for the bottom of a' (a blank). If blank then done else erase
mark;
• Move head right to b' . Move head right once past the top mark of b' ;
• b_loop: If head is at the bottom of b' (a blank) then move head to far left of a' , else:
An example of “multiply” a × b = c on a Post–Turing machine. At the start, the tape (shown on the left) has two numbers on it –
a' = 3' (4 marks), b' = 4' (5 marks). (A single mark would represent “0”.) At the end the tape will have the product c' = 12' (13
marks) to the right of b. Note “top” and “bottom” are there just to clarify what the P–T machine is doing.
• Return to b_loop.
Multiply a × b = c, for example: 3 × 4 = 12. The scanned square is indicated by brackets around the
mark i.e. [ | ]. An extra mark serves to indicate the symbol “0":
At the start of the computation a' is 4 unary marks, then a separator blank, b' is 5 unary
marks, then a separator mark. An unbounded number of empty spaces must be available for
c to the right:
....a'.b'.... = : ....[ | ] | | | . | | | | | ....
During the computation the head shuttles back and forth from a' to b' to c' back to b'
then to c' , then back to b' , then to c' ad nauseam while the machine counts through b'
and increments c' . Multiplicand a' is slowly counted down (its marks erased – shown for
reference with x’s below). A “counter” inside b' moves to the right through b (an erased mark
shown being read by the head as [ . ] ) but is reconstructed after each pass when the head
returns from incrementing c' :
....a.b.... = : ....xxx | . | | [ . ] | | . | | | | | | | ...
At end of computation: c' is 13 marks = “successor of 12” appearing to the right of b' . a'
has vanished in the process of the computation:
....b.c = ......... | | | | | . | | | | | | | | | | | | | ...
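Stripped of the tape mechanics, the shuttle procedure above amounts to copying b' into c' once per erased mark of a'. A sketch using the article's successor convention (n is stored as n+1 marks; the function name is this sketch's own):

```python
# Mark-count model of the Post-Turing multiply: for every mark erased
# from a', one full pass through b' appends b marks to c'.
def pt_multiply(a, b):
    a_marks = ["|"] * (a + 1)      # a' = a+1 marks
    c_marks = ["|"]                # c' starts as the single mark meaning "0"
    while len(a_marks) > 1:        # stop at the last (zero) mark of a'
        a_marks.pop()              # erase one mark of a'
        c_marks.extend("|" * b)    # one shuttle pass increments c' by b
    return len(c_marks) - 1        # strip the successor mark

print(pt_multiply(3, 4))           # -> 12, i.e. c' holds 13 marks
```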
31.9 Footnotes
^ a: Difference between Turing- and Post–Turing machine models
In his chapter XIII Computable Functions, Kleene adopts the Post model; Kleene’s model uses a blank and one symbol
“tally mark ¤" (Kleene p. 358), a “treatment closer in some respects to Post 1936. Post 1936 considered computation
with a 2-way infinite tape and only 1 symbol” (Kleene p. 361). Kleene observes that Post’s treatment provided a
further reduction to “atomic acts” (Kleene p. 357) of “the Turing act” (Kleene p. 379). As described by Kleene “The
Turing act” is the combined 3 (time-sequential) actions specified on a line in a Turing table: (i) print-symbol/erase/do-
nothing followed by (ii) move-tape-left/move-tape-right/do-nothing followed by (iii) test-tape-go-to-next-instruction:
e.g. “s1Rq1” means “Print symbol ‘¤’, then move tape right, then if tape symbol is ‘¤’ then go to state q1”. (See
Kleene’s example, p. 358).
Kleene observes that Post atomized these 3 actions further into two types of 2-actions. The first type is a “print/erase”
action, the second is a “move tape left/right” action: (1.i) print-symbol/erase/do-nothing followed by (1.ii) test-tape-
go-to-next-instruction, OR (2.i) move-tape-left/move-tape-right/do-nothing followed by (2.ii) test-tape-go-to-next-
instruction.
But Kleene observes:
“Indeed it could be argued that the Turing machine act is already compound, and consists psychologically
in a printing and change in state of mind, followed by a motion and another state of mind [, and] Post
1947 does thus separate the Turing act into two; we have not here, primarily because it saves space in
the machine tables not to do so.” (Kleene p. 379)
In fact Post’s treatment (1936) is ambiguous; both (1.i) and (2.i) could be followed by “(.ii) go to next instruction in
numerical sequence”. This represents a further atomization into three types of instructions: (1) print-symbol/erase/do-
nothing then go-to-next-instruction-in-numerical-sequence, (2) move-tape-left/move-tape-right/do-nothing then go-
to-next-instruction-in-numerical-sequence, (3) test-tape then go-to-instruction-xxx-else-go-to-next-instruction-in-
numerical-sequence.
31.10 References
• Stephen C. Kleene, Introduction to Meta-Mathematics, North-Holland Publishing Company, New York, 10th
edition 1991, first published 1952. Chapter XIII is an excellent description of Turing machines; Kleene uses a
Post-like model in his description and admits the Turing model could be further atomized, see Footnote 1.
• Martin Davis, editor: The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Problems And
Computable Functions, Raven Press, New York, 1965. Papers include those by Gödel, Church, Rosser, Kleene,
and Post.
• Martin Davis, “What is a computation”, in Mathematics Today, Lynn Arthur Steen, ed., Vintage Books (Random
House), 1980. A wonderful little paper, perhaps the best ever written about Turing Machines. Davis reduces
the Turing Machine to a far-simpler model based on Post’s model of a computation. Includes a little biography
of Emil Post.
• Martin Davis, Computability: with Notes by Barry Jacobs, Courant Institute of Mathematical Sciences, New
York University, 1974.
• Martin Davis, Ron Sigal, Elaine J. Weyuker, (1994) Computability, Complexity, and Languages: Fundamentals
of Theoretical Computer Science – 2nd edition, Academic Press: Harcourt, Brace & Company, San Diego,
1994 ISBN 0-12-206382-1 (First edition, 1983).
• Fred Hennie, Introduction to Computability, Addison–Wesley, 1977.
• Marvin Minsky, (1961), Recursive Unsolvability of Post’s problem of 'Tag' and other Topics in Theory of Turing
Machines, Annals of Mathematics, Vol. 74, No. 3, November, 1961.
• Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds and the Laws of Physics, Oxford
University Press, Oxford, England, 1990 (with corrections). Cf: Chapter 2, “Algorithms and Turing Machines”.
An overly-complicated presentation (see Davis’s paper for a better model), but a thorough presentation of
Turing machines and the halting problem, and Church’s lambda calculus.
• Hao Wang (1957): “A variant to Turing’s theory of computing machines”, Journal of the Association for
Computing Machinery (JACM) 4, 63–92.
Chapter 32
Propositional calculus
Propositional calculus (also called propositional logic, sentential calculus, or sentential logic) is the branch of
mathematical logic concerned with the study of propositions (whether they are true or false) that are formed from other
propositions by means of logical connectives, and with how their truth value depends on the truth values of their components.
Logical connectives are found in natural languages. In English, for example, some examples are “and” (conjunction),
“or” (disjunction), “not” (negation) and “if” (but only when used to denote the material conditional).
The following is an example of a very simple inference within the scope of propositional logic:

If it’s raining, then it’s cloudy.
It’s raining.
Therefore, it’s cloudy.

Both premises and the conclusion are propositions. The premises are taken for granted, and with the application
of modus ponens (an inference rule) the conclusion follows.
As propositional logic is not concerned with the structure of propositions beyond the point where they can no longer
be decomposed by logical connectives, this inference can be restated by replacing those atomic statements with statement
letters, which are interpreted as variables representing statements:
P → Q
P
Q
The same can be stated succinctly in the following way:
P → Q, P ⊢ Q
When P is interpreted as “It’s raining” and Q as “it’s cloudy”, the above symbolic expressions can be seen to exactly
correspond with the original expression in natural language. Not only that, but they will also correspond with any
other inference of this form, which will be valid on the same basis that this inference is.
Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted
to represent propositions. A system of inference rules and axioms allows certain formulas to be derived. These
derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such
formulas is known as a derivation or proof and the last formula of the sequence is the theorem. The derivation may
be interpreted as proof of the proposition represented by the theorem.
When a formal system is used to represent formal logic, only statement letters are represented directly. The natural
language propositions that arise when they're interpreted are outside the scope of the system, and the relation between
the formal system and its interpretation is likewise outside the formal system itself.
Usually in truth-functional propositional logic, formulas are interpreted as having either the truth value true or the
truth value false. Truth-functional propositional logic, and systems isomorphic to it, are considered to be zeroth-
order logic.
32.1 History
Main article: History of logic
Although propositional logic (which is interchangeable with propositional calculus) had been hinted at by earlier philoso-
phers, it was developed into a formal logic by Chrysippus in the 3rd century BC[1] and expanded by the Stoics. The
logic was focused on propositions. This advancement was different from the traditional syllogistic logic, which was fo-
cused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no longer understood.
Consequently, the system was essentially reinvented by Peter Abelard in the 12th century.[2]
Propositional logic was eventually refined using symbolic logic. The 17th/18th century philosopher Gottfried Leibniz
has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his
work was the first of its kind, it was unknown to the larger logical community. Consequently, many of the advances
achieved by Leibniz were reachieved by logicians like George Boole and Augustus De Morgan completely independent
of Leibniz.[3]
Just as propositional logic can be considered an advancement from the earlier syllogistic logic, Gottlob Frege’s
predicate logic was an advancement from the earlier propositional logic. One author describes predicate logic as
combining “the distinctive features of syllogistic logic and propositional logic.”[4] Consequently, predicate logic ush-
ered in a new era in logic’s history; however, advances in propositional logic were still made after Frege, including
natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Jan
Łukasiewicz. Truth trees were invented by Evert Willem Beth.[5] The invention of truth tables, however, is of con-
troversial attribution.
Within works by Frege[6] and Bertrand Russell,[7] one finds ideas influential in bringing about the notion of truth tables.
The actual 'tabular structure' (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or
Emil Post (or both, independently).[6] Besides Frege and Russell, others credited with having ideas preceding truth-
tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure
include Łukasiewicz, Schröder, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving
Lewis.[7] Ultimately, some have concluded, like John Shosky, that “It is far from clear that any one person should be
given the title of 'inventor' of truth-tables.”[7]
32.2 Terminology
In general terms, a calculus is a formal system that consists of a set of syntactic expressions (well-formed formulas),
a distinguished subset of these expressions (axioms), plus a set of formal rules that define a specific binary relation,
intended to be interpreted as logical equivalence, on the space of expressions.
When the formal system is intended to be a logical system, the expressions are meant to be interpreted as statements,
and the rules, known as inference rules, are typically intended to be truth-preserving. In this setting, the rules (which
may include axioms) can then be used to derive (“infer”) formulas representing true statements from given formulas
representing true statements.
The set of axioms may be empty, a nonempty finite set, a countably infinite set, or be given by axiom schemata. A
formal grammar recursively defines the expressions and well-formed formulas of the language. In addition a semantics
may be given which defines truth and valuations (or interpretations).
The language of a propositional calculus consists of
1. a set of primitive symbols, variously referred to as atomic formulas, placeholders, proposition letters, or vari-
ables, and
2. a set of operator symbols, variously interpreted as logical operators or logical connectives.
A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of
operator symbols according to the rules of the grammar.
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propo-
sitional constants represent some particular proposition, while propositional variables range over the set of all atomic
propositions. Schemata, however, range over all propositions. It is common to represent propositional constants by
A, B, and C, propositional variables by P, Q, and R, and schematic letters by Greek letters, most often φ, ψ,
and χ.
Propositional calculi are then distinguished by:
1. their language, that is, the particular collection of primitive symbols and operator symbols,
2. the set of axioms, or distinguished formulas, and
3. the set of inference rules.
Any given proposition may be represented with a letter called a 'propositional constant', analogous to representing a
number by a letter in mathematics, for instance, a = 5. All propositions take exactly one of two truth-values: true
or false. For example, let P be the proposition that it is raining outside. This will be true (P) if it is raining outside
and false otherwise (¬P).
• We then define truth-functional operators, beginning with negation. ¬P represents the negation of P, which
can be thought of as the denial of P. In the example above, ¬P expresses that it is not raining outside, or by a
more standard reading: “It is not the case that it is raining outside.” When P is true, ¬P is false; and when P
is false, ¬P is true. ¬¬P always has the same truth-value as P.
• Conjunction is a truth-functional connective which forms a proposition out of two simpler propositions, for
example, P and Q. The conjunction of P and Q is written P ∧ Q, and expresses that both are true. We read P
∧ Q as “P and Q”. For any two propositions, there are four possible assignments of truth values:
1. P is true and Q is true
2. P is true and Q is false
3. P is false and Q is true
4. P is false and Q is false
The conjunction of P and Q is true in case 1 and is false otherwise. Where P is the proposition that it is
raining outside and Q is the proposition that a cold-front is over Kansas, P ∧ Q is true when it is raining
outside and there is a cold-front over Kansas. If it is not raining outside, then P ∧ Q is false; and if there
is no cold-front over Kansas, then P ∧ Q is false.
• Disjunction resembles conjunction in that it forms a proposition out of two simpler propositions. We write it
P ∨ Q, and it is read “P or Q”. It expresses that either P or Q is true. Thus, in the cases listed above, the
disjunction of P and Q is true in all cases except 4. Using the example above, the disjunction expresses that
it is either raining outside or there is a cold front over Kansas. (Note, this use of disjunction is supposed to
resemble the use of the English word “or”. However, it is most like the English inclusive “or”, which can be
used to express the truth of at least one of two propositions. It is not like the English exclusive “or”, which
expresses the truth of exactly one of two propositions. That is to say, the exclusive “or” is false when both P
and Q are true (case 1). An example of the exclusive or is: You may have a bagel or a pastry, but not both.
Often in natural language, given the appropriate context, the addendum “but not both” is omitted but implied.
In mathematics, however, “or” is always inclusive or; if exclusive or is meant it will be specified, possibly by
“xor”.)
• Material conditional also joins two simpler propositions, and we write P → Q, which is read “if P then Q”.
The proposition to the left of the arrow is called the antecedent and the proposition to the right is called
the consequent. (There is no such designation for conjunction or disjunction, since they are commutative
operations.) It expresses that Q is true whenever P is true. Thus it is true in every case above except case 2,
because this is the only case when P is true but Q is not. Using the example, if P then Q expresses that if it is
raining outside then there is a cold-front over Kansas. The material conditional is often confused with physical
causation. The material conditional, however, only relates two propositions by their truth-values—which is not
the relation of cause and effect. It is contentious in the literature whether the material implication represents
logical causation.
• Biconditional joins two simpler propositions, and we write P ↔ Q, which is read “P if and only if Q”. It
expresses that P and Q have the same truth-value, thus P if and only if Q is true in cases 1 and 4, and false
otherwise.
It is extremely helpful to look at the truth tables for these different operators, as well as the method of analytic tableaux.
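Such truth tables can be generated mechanically. The following sketch (the Python encoding is purely illustrative, not part of the calculus) prints the table for the four binary connectives, with rows in the order of the four cases listed above:

```python
from itertools import product

neg = lambda p: not p                        # negation, ¬P

# Truth functions for the binary connectives described above.
ops = {
    "P ∧ Q": lambda p, q: p and q,           # conjunction
    "P ∨ Q": lambda p, q: p or q,            # inclusive disjunction
    "P → Q": lambda p, q: (not p) or q,      # material conditional
    "P ↔ Q": lambda p, q: p == q,            # biconditional
}

print("P      Q      " + "  ".join(ops))
# product([True, False], repeat=2) yields the four cases 1-4 in order:
# (T,T), (T,F), (F,T), (F,F).
for p, q in product([True, False], repeat=2):
    cells = [str(p).ljust(5), str(q).ljust(5)]
    cells += [str(bool(f(p, q))).ljust(5) for f in ops.values()]
    print("  ".join(cells))
```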
32.3.2 Argument
The propositional calculus then defines an argument to be a set of propositions. A valid argument is a set of proposi-
tions, the last of which follows from—or is implied by—the rest. All other arguments are invalid. The simplest valid
argument is modus ponens, one instance of which is the following set of propositions:
1. P → Q
2. P
∴ Q
This is a set of three propositions; each line is a proposition, and the last follows from the rest. The first two lines are
called premises, and the last line the conclusion. We say that any proposition C follows from any set of propositions
(P1, ..., Pn), if C must be true whenever every member of the set (P1, ..., Pn) is true. In the argument above, for
any P and Q, whenever P → Q and P are true, necessarily Q is true. Notice that, when P is true, we cannot consider
cases 3 and 4 (from the truth table). When P → Q is true, we cannot consider case 2. This leaves only case 1, in
which Q is also true. Thus Q is implied by the premises.
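The case analysis in this paragraph can be carried out exhaustively by machine. The sketch below (an illustration, with names of our own choosing) checks that no truth assignment makes both premises of modus ponens true while the conclusion is false, and contrasts this with the invalid form "affirming the consequent":

```python
from itertools import product

def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# Modus ponens is valid iff every assignment making both premises
# (P → Q and P) true also makes the conclusion Q true.
valid = all(
    q                                      # conclusion
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p                 # both premises true
)
print(valid)  # True

# Contrast: "affirming the consequent" (from P → Q and Q, infer P) fails,
# because the assignment P false, Q true satisfies the premises but not P.
invalid_form = all(
    p
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q
)
print(invalid_form)  # False
```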
This generalizes schematically. Thus, where φ and ψ may be any propositions at all,
1. φ → ψ
2. φ
∴ ψ
Other argument forms are convenient, but not necessary. Given a complete set of axioms (see below for one such
set), modus ponens is sufficient to prove all other argument forms in propositional logic, so they may be considered
derivative. Note that this is not true of the extension of propositional logic to other logics like first-order logic.
First-order logic requires at least one additional rule of inference in order to obtain completeness.
The significance of argument in formal logic is that one may obtain new truths from established truths. In the first
example above, given the two premises, the truth of Q is not yet known or stated. After the argument is made, Q is
deduced. In this way, we define a deduction system to be a set of all propositions that may be deduced from another
set of propositions. For instance, given the set of propositions A = {P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R} , we can
define a deduction system, Γ, which is the set of all propositions which follow from A. Reiteration is always assumed,
so P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R ∈ Γ . Also, from the first element of A, last element, as well as modus ponens,
R is a consequence, and so R ∈ Γ . Because we have not included sufficiently complete axioms, though, nothing
else may be deduced. Thus, even though most deduction systems studied in propositional logic are able to deduce
(P ∨ Q) ↔ (¬P → Q) , this one is too weak to prove such a proposition.
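The derivation of R described above (reiteration of A plus one application of modus ponens) can be mimicked by a toy closure computation. The string encoding below is purely illustrative and handles only the shapes of formulas appearing in this example:

```python
# Formulas as plain strings; the only rule applied is modus ponens on
# top-level implications written "X -> Y".
A = {"P v Q", "~Q & R", "(P v Q) -> R"}

def close_under_mp(gamma):
    """Return the closure of gamma under reiteration and modus ponens
    (for this toy string representation only)."""
    gamma = set(gamma)            # reiteration: gamma is kept as-is
    changed = True
    while changed:
        changed = False
        for f in list(gamma):
            if " -> " in f:
                ante, cons = f.split(" -> ", 1)
                ante = ante.strip("()")   # crude: drop outer parentheses
                if ante in gamma and cons not in gamma:
                    gamma.add(cons)       # modus ponens
                    changed = True
    return gamma

closure = close_under_mp(A)
print("R" in closure)  # True: R follows from "P v Q" and "(P v Q) -> R"
```

As the surrounding text notes, nothing beyond R is derivable here; the closure is just A together with R.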
• The alpha set A is a finite set of elements called proposition symbols or propositional variables. Syntactically
speaking, these are the most basic elements of the formal language L , otherwise referred to as atomic formulas
or terminal elements. In the examples to follow, the elements of A are typically the letters p, q, r, and so on.
• The omega set Ω is a finite set of elements called operator symbols or logical connectives. The set Ω is partitioned
into disjoint subsets as follows:
Ω = Ω0 ∪ Ω1 ∪ . . . ∪ Ωj ∪ . . . ∪ Ωm .
Ω1 = {¬},
Ω2 ⊆ {∧, ∨, →, ↔}.
A frequently adopted convention treats the constant logical values as operators of arity zero, thus:
Ω0 = {0, 1}.
Some writers use the tilde (~), or N, instead of ¬; and some use the ampersand (&), the prefixed K, or
· instead of ∧. Notation varies even more for the set of logical values, with symbols like {false, true},
{F, T}, or {⊥, ⊤} all being seen in various contexts instead of {0, 1}.
• The zeta set Z is a finite set of transformation rules that are called inference rules when they acquire logical
applications.
• The iota set I is a finite set of initial points that are called axioms when they receive logical interpretations.
The language of L, also known as its set of formulas or well-formed formulas, is inductively defined by the following
rules:
Repeated application of these rules permits the construction of complex formulas. For example:
1. By rule 1, p is a formula.
2. By rule 2, ¬p is a formula.
3. By rule 1, q is a formula.
• The alpha set A , is a finite set of symbols that is large enough to supply the needs of a given discussion, for
example:
A = {p, q, r, s, t, u}.
• Of the three connectives for conjunction, disjunction, and implication ( ∧, ∨ , and →), one can be taken as
primitive and the other two can be defined in terms of it and negation (¬).[8] Indeed, all of the logical connectives
can be defined in terms of a sole sufficient operator. The biconditional (↔) can of course be defined in terms
of conjunction and implication, with a ↔ b defined as (a → b) ∧ (b → a) .
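That all connectives are definable from a sole sufficient operator can be verified exhaustively. Taking NAND (the Sheffer stroke) as the primitive, one standard choice among others, a sketch:

```python
from itertools import product

def nand(p, q):
    """Sheffer stroke: true unless both arguments are true."""
    return not (p and q)

# Standard definitions of the other connectives from NAND alone.
def NOT(p):      return nand(p, p)
def AND(p, q):   return nand(nand(p, q), nand(p, q))
def OR(p, q):    return nand(nand(p, p), nand(q, q))
def IMPL(p, q):  return nand(p, nand(q, q))          # material conditional
def IFF(p, q):   return AND(IMPL(p, q), IMPL(q, p))  # (a → b) ∧ (b → a)

# Exhaustive check against the usual truth functions.
for p, q in product([True, False], repeat=2):
    assert NOT(p)     == (not p)
    assert AND(p, q)  == (p and q)
    assert OR(p, q)   == (p or q)
    assert IMPL(p, q) == ((not p) or q)
    assert IFF(p, q)  == (p == q)
print("all definitions agree")
```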
Ω = Ω1 ∪ Ω 2
Ω1 = {¬},
Ω2 = {→}.
• An axiom system discovered by Jan Łukasiewicz formulates a propositional calculus in this language as follows.
The axioms are all substitution instances of:
• (p → (q → p))
• ((p → (q → r)) → ((p → q) → (p → r)))
• ((¬p → ¬q) → (q → p))
• The rule of inference is modus ponens (i.e., from p and (p → q), infer q). Then a ∨ b is defined as ¬a → b,
and a ∧ b is defined as ¬(a → ¬b). This system is used in the Metamath set.mm formal proof database.
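The definitions a ∨ b := ¬a → b and a ∧ b := ¬(a → ¬b) can be checked against the usual truth tables (an illustrative sketch):

```python
from itertools import product

def impl(a, b):
    """Material conditional a → b."""
    return (not a) or b

def lukas_or(a, b):   # a ∨ b  defined as  ¬a → b
    return impl(not a, b)

def lukas_and(a, b):  # a ∧ b  defined as  ¬(a → ¬b)
    return not impl(a, not b)

# Verify the defined connectives agree with the usual truth functions.
for a, b in product([True, False], repeat=2):
    assert lukas_or(a, b)  == (a or b)
    assert lukas_and(a, b) == (a and b)
print("definitions match")
```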
• The alpha set A , is a finite set of symbols that is large enough to supply the needs of a given discussion, for
example:
A = {p, q, r, s, t, u}.
Ω1 = {¬},
Ω2 = {∧, ∨, →, ↔}.
In the following example of a propositional calculus, the transformation rules are intended to be interpreted as the
inference rules of a so-called natural deduction system. The particular system presented here has no initial points,
which means that its interpretation for logical applications derives its theorems from an empty axiom set.
Our propositional calculus has ten inference rules. These rules allow us to derive other true formulas given a set of
formulas that are assumed to be true. The first nine simply state that we can infer certain well-formed formulas from
other well-formed formulas. The last rule, however, uses hypothetical reasoning, in the sense that in the premise of the
rule we temporarily assume an (unproven) hypothesis to be part of the set of inferred formulas to see if we can infer
a certain other formula. Since the first nine rules don't do this, they are usually described as non-hypothetical rules,
and the last one as a hypothetical rule.
In describing the transformation rules, we may introduce a metalanguage symbol ⊢. It is basically a convenient
shorthand for saying “infer that”. The format is Γ ⊢ ψ, in which Γ is a (possibly empty) set of formulas called
premises, and ψ is a formula called the conclusion. The transformation rule Γ ⊢ ψ means that if every proposition in Γ is
a theorem (or has the same truth value as the axioms), then ψ is also a theorem. Note that, by the rule
Conjunction introduction below, whenever Γ has more than one formula we can always safely reduce it
to one formula using conjunction. So, for short, from then on we may represent Γ as one formula instead of a
set. Another omission for convenience is made when Γ is an empty set, in which case Γ may not appear.
• One possible proof of this (which, though valid, happens to contain more steps than are necessary) may be
arranged as follows:
Interpret A ⊢ A as “Assuming A, infer A”. Read ⊢ A → A as “Assuming nothing, infer that A implies A”, or “It is
a tautology that A implies A”, or “It is always true that A implies A”.
• A satisfies (φ → ψ) if and only if it is not the case that A satisfies φ but not ψ
• A satisfies (φ ↔ ψ) if and only if A satisfies both φ and ψ or satisfies neither one of them
With this definition we can now formalize what it means for a formula φ to be implied by a certain set S of formulas.
Informally this is true if in all worlds that are possible given the set of formulas S the formula φ also holds. This
leads to the following formal definition: We say that a set S of well-formed formulas semantically entails (or implies)
a certain well-formed formula φ if all truth assignments that satisfy all the formulas in S also satisfy φ.
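This definition of semantic entailment lends itself to a brute-force check over all truth assignments. In the sketch below, formulas are encoded as Python predicates over an assignment; the encoding is our own, for illustration only:

```python
from itertools import product

def entails(S, phi, symbols):
    """S semantically entails phi iff every truth assignment that
    satisfies all formulas in S also satisfies phi."""
    for values in product([True, False], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if all(f(v) for f in S) and not phi(v):
            return False      # found a counter-assignment
    return True

# Example: {P ∨ Q, ¬P} semantically entails Q (disjunctive syllogism) ...
S = [lambda v: v["P"] or v["Q"], lambda v: not v["P"]]
print(entails(S, lambda v: v["Q"], ["P", "Q"]))  # True

# ... but {P ∨ Q} alone does not entail Q (take P true, Q false).
print(entails([lambda v: v["P"] or v["Q"]],
              lambda v: v["Q"], ["P", "Q"]))     # False
```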
Finally we define syntactical entailment such that φ is syntactically entailed by S if and only if we can derive it with
the inference rules that were presented above in a finite number of steps. This allows us to formulate exactly what it
means for the set of inference rules to be sound and complete:
Soundness: If the set of well-formed formulas S syntactically entails the well-formed formula φ then S semantically
entails φ.
Completeness: If the set of well-formed formulas S semantically entails the well-formed formula φ then S syntac-
tically entails φ.
For the above set of rules this is indeed the case.
(a) Assume for arbitrary G and A that if G proves A in n or fewer steps, then G implies A.
(b) For each possible application of a rule of inference at step n + 1, leading to a new theorem B, show that
G implies B.
Notice that Basis Step II can be omitted for natural deduction systems because they have no axioms. When used,
Step II involves showing that each of the axioms is a (semantic) logical truth.
The Basis steps demonstrate that the simplest provable sentences from G are also implied by G, for any G. (The
proof is simple, since the semantic fact that a set implies any of its members is also trivial.) The Inductive step will
systematically cover all the further sentences that might be provable—by considering each case where we might reach
a logical conclusion using an inference rule—and shows that if a new sentence is provable, it is also logically implied.
(For example, we might have a rule telling us that from “A” we can derive “A or B”. In III.a we assume that if A is
provable it is implied. We also know that if A is provable then “A or B” is provable. We have to show that then “A
or B” too is implied. We do so by appeal to the semantic definition and the assumption we just made. A is provable
from G, we assume. So it is also implied by G. So any semantic valuation making all of G true makes A true. But
any valuation making A true makes “A or B” true, by the defined semantics for “or”. So any valuation which makes
all of G true makes “A or B” true. So “A or B” is implied.) Generally, the Inductive step will consist of a lengthy but
simple case-by-case analysis of all the rules of inference, showing that each “preserves” semantic implication.
By the definition of provability, there are no sentences provable other than by being a member of G, an axiom, or
following by a rule; so if all of those are semantically implied, the deduction calculus is sound.
(a) Place an ordering on all the sentences in the language (e.g., shortest first, and equally long ones in extended
alphabetical ordering), and number them (E1, E2, ...)
(b) Define a series G of sets (G0, G1, ...) inductively:
i. G0 = G
ii. If Gk ∪ {Ek+1} proves A, then Gk+1 = Gk
iii. If Gk ∪ {Ek+1} does not prove A, then Gk+1 = Gk ∪ {Ek+1}
(c) Define G∗ as the union of all the Gk. (That is, G∗ is the set of all the sentences that are in any Gk.)
(d) It can be easily shown that
QED
For each propositional symbol a of P, under a given interpretation either
1. a is assigned T, or
2. a is assigned F.
Since P has ℵ0, that is, denumerably many propositional symbols, there are 2^ℵ0 = c, and therefore uncountably
many, distinct possible interpretations of P.[10]
• A sentence of propositional logic is true under an interpretation I iff I assigns the truth value T to that sentence.
If a sentence is true under an interpretation, then that interpretation is called a model of that sentence.
• φ is false under an interpretation I iff φ is not true under I .[10]
• A sentence of propositional logic is logically valid if it is true under every interpretation.
32.11.1 Axioms
Let φ, χ, and ψ stand for well-formed formulas. (The well-formed formulas themselves would not contain any Greek
letters, but only capital Roman letters, connective operators, and parentheses.) Then the axioms are as follows:
• Axiom THEN-2 may be considered to be a “distributive property of implication with respect to implication.”
• Axioms AND-1 and AND-2 correspond to “conjunction elimination”. The relation between AND-1 and AND-
2 reflects the commutativity of the conjunction operator.
• Axioms OR-1 and OR-2 correspond to “disjunction introduction.” The relation between OR-1 and OR-2 re-
flects the commutativity of the disjunction operator.
• Axiom NOT-3 is called "tertium non datur" (Latin: “a third is not given”) and reflects the semantic valuation
of propositional formulas: a formula can have a truth-value of either true or false. There is no third truth-value,
at least not in classical logic. Intuitionistic logicians do not accept the axiom NOT-3.
The rule of inference is modus ponens:
ϕ, ϕ → χ ⊢ χ
Let a demonstration be represented by a sequence, with hypotheses to the left of the turnstile and the conclusion to
the right of the turnstile. Then the deduction theorem can be stated as follows:
If the sequence
ϕ1, ϕ2, ..., ϕn, χ ⊢ ψ
has been demonstrated, then it is also possible to demonstrate the sequence
ϕ1, ϕ2, ..., ϕn ⊢ χ → ψ
This deduction theorem (DT) is not itself formulated with propositional calculus: it is not a theorem of propositional
calculus, but a theorem about propositional calculus. In this sense, it is a meta-theorem, comparable to theorems
about the soundness or completeness of propositional calculus.
On the other hand, DT is so useful for simplifying the syntactical proof process that it can be considered and used as
another inference rule, accompanying modus ponens. In this sense, DT corresponds to the natural conditional proof
inference rule which is part of the first version of propositional calculus introduced in this article.
The converse of DT is also valid:
If the sequence
ϕ1, ϕ2, ..., ϕn ⊢ χ → ψ
has been demonstrated, then it is also possible to demonstrate the sequence
ϕ1, ϕ2, ..., ϕn, χ ⊢ ψ
In fact, the validity of the converse of DT is almost trivial compared to that of DT:
If
ϕ1, ..., ϕn ⊢ χ → ψ
then
(1) ϕ1, ..., ϕn, χ ⊢ χ → ψ
(2) ϕ1, ..., ϕn, χ ⊢ χ
and from (1) and (2) can be deduced
(3) ϕ1, ..., ϕn, χ ⊢ ψ
The converse of DT has powerful implications: it can be used to convert an axiom into an inference rule. For example,
the axiom AND-1,
⊢ϕ∧χ→ϕ
can be transformed by means of the converse of the deduction theorem into the inference rule
ϕ∧χ⊢ϕ
which is conjunction elimination, one of the ten inference rules used in the first version (in this article) of the propo-
sitional calculus.
1. (A → ((B → A) → A)) → ((A → (B → A)) → (A → A))   (instance of THEN-2 with ϕ = A, χ = B → A, ψ = A)
2. A → ((B → A) → A)   (instance of THEN-1 with ϕ = A, χ = B → A)
3. (A → (B → A)) → (A → A)   (from (1) and (2) by modus ponens)
4. A → (B → A)   (instance of THEN-1 with ϕ = A, χ = B)
5. A → A   (from (3) and (4) by modus ponens)
An entailment
ϕ1, ϕ2, ..., ϕn ⊢ ψ
of propositional calculus translates into the inequality
ϕ1 ∧ ϕ2 ∧ ... ∧ ϕn ≤ ψ
of the corresponding algebraic logic; conversely, an inequality x ≤ y may be read as the entailment
x ⊢ y
The difference between implication x → y and inequality or entailment x ≤ y or x ⊢ y is that the former is internal
to the logic while the latter is external. Internal implication between two terms is another term of the same kind.
Entailment as external implication between two terms expresses a metatruth outside the language of the logic, and
is considered part of the metalanguage. Even when the logic under study is intuitionistic, entailment is ordinarily
understood classically as two-valued: either the left side entails, or is less-or-equal to, the right side, or it is not.
Similar but more complex translations to and from algebraic logics are possible for natural deduction systems as
described above and for the sequent calculus. The entailments of the latter can be interpreted as two-valued, but a
more insightful interpretation is as a set, the elements of which can be understood as abstract proofs organized as
the morphisms of a category. In this interpretation the cut rule of the sequent calculus corresponds to composition
in the category. Boolean and Heyting algebras enter this picture as special categories having at most one morphism
per homset, i.e., one proof per entailment, corresponding to the idea that existence of proofs is all that matters: any
proof will do and there is no point in distinguishing them.
It is possible to generalize the definition of a formal language from a set of finite sequences over a finite basis to include
many other sets of mathematical structures, so long as they are built up by finitary means from finite materials. What’s
more, many of these families of formal structures are especially well-suited for use in logic.
For example, there are many families of graphs that are close enough analogues of formal languages that the concept
of a calculus is quite easily and naturally extended to them. Indeed, many species of graphs arise as parse graphs
in the syntactic analysis of the corresponding families of text structures. The exigencies of practical computation on
formal languages frequently demand that text strings be converted into pointer structure renditions of parse graphs,
simply as a matter of checking whether strings are well-formed formulas or not. Once this is done, there are many
advantages to be gained from developing the graphical analogue of the calculus on strings. The mapping from strings
to parse graphs is called parsing and the inverse mapping from parse graphs to strings is achieved by an operation
that is called traversing the graph.
Propositional calculus is about the simplest kind of logical calculus in current use. It can be extended in several ways.
(Aristotelian “syllogistic” calculus, which is largely supplanted in modern logic, is in some ways simpler – but in other
ways more complex – than propositional calculus.) The most immediate way to develop a more complex logical
calculus is to introduce rules that are sensitive to more fine-grained details of the sentences being used.
First-order logic (a.k.a. first-order predicate logic) results when the “atomic sentences” of propositional logic are
broken up into terms, variables, predicates, and quantifiers, all keeping the rules of propositional logic with some
new ones introduced. (For example, from “All dogs are mammals” we may infer “If Rover is a dog then Rover is
a mammal”.) With the tools of first-order logic it is possible to formulate a number of theories, either with explicit
axioms or by rules of inference, that can themselves be treated as logical calculi. Arithmetic is the best known of these;
others include set theory and mereology. Second-order logic and other higher-order logics are formal extensions of
first-order logic. Thus, it makes sense to refer to propositional logic as “zeroth-order logic”, when comparing it with
these logics.
Modal logic also offers a variety of inferences that cannot be captured in propositional calculus. For example, from
“Necessarily p” we may infer that p. From p we may infer “It is possible that p”. The translation between modal
logics and algebraic logics proceeds as for classical and intuitionistic logics, but with the introduction of a unary operator on
Boolean or Heyting algebras, different from the Boolean operations, interpreting the possibility modality, and in the
case of Heyting algebra a second operator interpreting necessity (for Boolean algebra this is redundant since necessity
is the De Morgan dual of possibility). The first operator preserves 0 and disjunction while the second preserves 1 and
conjunction.
Many-valued logics are those allowing sentences to have values other than true and false. (For example, neither and
both are standard “extra values”; “continuum logic” allows each sentence to have any of an infinite number of “degrees
of truth” between true and false.) These logics often require calculational devices quite distinct from propositional
calculus. When the values form a Boolean algebra (which may have more than two or even infinitely many values),
many-valued logic reduces to classical logic; many-valued logics are therefore only of independent interest when the
values form an algebra that is not Boolean.
32.15 Solvers
Finding solutions to propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g.,
DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended
the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers.
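As an illustration of this style of search (a minimal sketch, not the DPLL of 1962 or Chaff themselves), a backtracking solver with unit propagation can be written in a few lines. The CNF encoding used here (clauses as lists of signed integers, a negative integer denoting a negated variable) is an assumption of the sketch:

```python
def simplify(clauses, lit):
    """Assign lit true: drop satisfied clauses, remove the negated literal."""
    out = []
    for c in clauses:
        if lit in c:
            continue            # clause satisfied by lit
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None         # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None."""
    if assignment is None:
        assignment = {}
    if clauses is None:
        return None             # a conflict was reached
    if not clauses:
        return assignment       # all clauses satisfied
    # Unit propagation: a one-literal clause forces an assignment.
    for c in clauses:
        if len(c) == 1:
            lit = c[0]
            return dpll(simplify(clauses, lit),
                        {**assignment, abs(lit): lit > 0})
    # Otherwise branch on the first literal of the first clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None

# (p or q) and (not p or q) and (not q or r)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```

Real solvers add conflict-driven clause learning, watched literals, and restart heuristics on top of this basic recursion.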
32.17 References
[1] Ancient Logic (Stanford Encyclopedia of Philosophy)
[2] Marenbon, John (2007). Medieval philosophy: an historical and philosophical introduction. Routledge. p. 137.
[3] Leibniz’s Influence on 19th Century Logic
[4] Hurley, Patrick (2007). A Concise Introduction to Logic 10th edition. Wadsworth Publishing. p. 392.
[5] Beth, Evert W.; “Semantic entailment and formal derivability”, series: Mededelingen van de Koninklijke Nederlandse
Akademie van Wetenschappen, Afdeling Letterkunde, Nieuwe Reeks, vol. 18, no. 13, Noord-Hollandsche Uitg. Mij.,
Amsterdam, 1955, pp. 309–42. Reprinted in Jaakko Hintikka (ed.) The Philosophy of Mathematics, Oxford University
Press, 1969
[6] Truth in Frege
[7] Russell’s Use of Truth-Tables
[8] Wernick, William (1942) “Complete Sets of Logical Functions,” Transactions of the American Mathematical Society 51,
pp. 117–132.
[9] Toida, Shunichi (2 August 2009). “Proof of Implications”. CS381 Discrete Structures/Discrete Mathematics Web Course
Material. Department Of Computer Science, Old Dominion University. Retrieved 10 March 2010.
[10] Hunter, Geoffrey (1971). Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of
California Press. ISBN 0-520-02356-0.
• Formal Predicate Calculus, contains a systematic formal development along the lines of Alternative calculus
• forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for sen-
tential logic.
Chapter 33
Recursively enumerable set
“Enumerable set” redirects here. For the set-theoretic concept, see Countable set.
In computability theory, traditionally called recursion theory, a set S of natural numbers is called recursively enu-
merable, computably enumerable, semidecidable, provable or Turing-recognizable if:
• There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S.
Or, equivalently,
• There is an algorithm that enumerates the members of S. That means that its output is simply a list of the
members of S: s1 , s2 , s3 , ... . If necessary, this algorithm may run forever.
The first condition suggests why the term semidecidable is sometimes used; the second suggests why computably
enumerable is used. The abbreviations r.e. and c.e. are often used, even in print, instead of the full phrase.
In computational complexity theory, the complexity class containing all recursively enumerable sets is RE. In recursion
theory, the lattice of r.e. sets under inclusion is denoted E .
Semidecidability:
• The set S is recursively enumerable. That is, S is the domain (co-range) of a partial recursive
function.
• There is a partial recursive function f such that:

f(x) = 1 if x ∈ S, and f(x) is undefined (the computation does not halt) if x ∉ S.
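The shape of such a partial function can be illustrated with a sketch. The choice of S here (the perfect squares, which are of course fully decidable) is purely illustrative; what matters is the form of the procedure, which halts with value 1 exactly on members and searches forever otherwise:

```python
# Semidecision procedure for S = {perfect squares}: halts with 1 iff x ∈ S.
def f(x):
    k = 0
    while True:
        if k * k == x:
            return 1      # x ∈ S: halt with value 1
        k += 1            # otherwise keep searching -- forever, if x ∉ S
```

Calling f on a non-square never returns, which is precisely the “undefined” branch of the definition above.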
Enumerability:
• The set S is the range of a total recursive function or empty. If S is infinite, the function can be
chosen to be injective.
• The set S is the range of a primitive recursive function or empty. Even if S is infinite, repetition of
values may be necessary in this case.
Diophantine:
• There is a polynomial p with integer coefficients and variables x, a, b, c, d, e, f, g, h, i ranging over
the natural numbers such that
x ∈ S ⇔ ∃a, b, c, d, e, f, g, h, i (p(x, a, b, c, d, e, f, g, h, i) = 0).
• There is a polynomial from the integers to the integers such that the set S contains exactly the
non-negative numbers in its range.
The equivalence of semidecidability and enumerability can be obtained by the technique of dovetailing.
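Dovetailing can be sketched concretely: interleave ever-longer runs of the semidecision procedure on ever-more inputs, so that no single diverging run blocks the enumeration. Here `halts_within(x, steps)` is a hypothetical stand-in for running a semidecision procedure on x for a bounded number of steps:

```python
def dovetail(halts_within, rounds):
    """Enumerate members of S by dovetailing a step-bounded semidecider."""
    seen = set()
    out = []
    for n in range(rounds):            # round n: try inputs 0..n for n steps
        for x in range(n + 1):
            if x not in seen and halts_within(x, n):
                seen.add(x)
                out.append(x)          # emit x: it has been confirmed in S
    return out

# Illustrative stand-in: x ∈ S iff x is even; "halting" takes x steps.
evens = dovetail(lambda x, steps: x % 2 == 0 and steps >= x, 10)
```

Run for unboundedly many rounds, this produces every member of S eventually, which is the enumerability half of the equivalence.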
The Diophantine characterizations of a recursively enumerable set, while not as straightforward or intuitive as the first
definitions, were found by Yuri Matiyasevich as part of the negative solution to Hilbert’s Tenth Problem. Diophantine
sets predate recursion theory and are therefore historically the first way to describe these sets (although this equivalence
was only remarked more than three decades after the introduction of recursively enumerable sets). The number of
bound variables in the above definition of the Diophantine set is the best known so far; it may be that a smaller number
suffices to define all Diophantine sets.
33.3 Examples
• Every recursive set is recursively enumerable, but it is not true that every recursively enumerable set is recursive.
For recursive sets, the algorithm must also say if an input is not in the set – this is not required of recursively
enumerable sets.
• A recursively enumerable language is a recursively enumerable subset of a formal language.
• The set of all provable sentences in an effectively presented axiomatic system is a recursively enumerable set.
• Matiyasevich’s theorem states that every recursively enumerable set is a Diophantine set (the converse is trivially
true).
• The simple sets are recursively enumerable but not recursive.
• The creative sets are recursively enumerable but not recursive.
• Any productive set is not recursively enumerable.
• Given a Gödel numbering ϕ of the computable functions, the set {⟨i, x⟩ | ϕi (x) ↓} (where ⟨i, x⟩ is the Cantor
pairing function and ϕi (x) ↓ indicates ϕi (x) is defined) is recursively enumerable (cf. picture for a fixed x).
This set encodes the halting problem as it describes the input parameters for which each Turing machine halts.
• Given a Gödel numbering ϕ of the computable functions, the set {⟨x, y, z⟩ | ϕx (y) = z} is recursively
enumerable. This set encodes the problem of deciding a function value.
• Given a partial function f from the natural numbers into the natural numbers, f is a partial recursive function
if and only if the graph of f, that is, the set of all pairs ⟨x, f (x)⟩ such that f(x) is defined, is recursively
enumerable.
33.4 Properties
If A and B are recursively enumerable sets then A ∩ B, A ∪ B and A × B (with the ordered pair of natural numbers
mapped to a single natural number with the Cantor pairing function) are recursively enumerable sets. The preimage
of a recursively enumerable set under a partial recursive function is a recursively enumerable set.
A set is recursively enumerable if and only if it is at level Σ01 of the arithmetical hierarchy.
A set T is called co-recursively enumerable or co-r.e. if its complement N \ T is recursively enumerable. Equiva-
lently, a set is co-r.e. if and only if it is at level Π01 of the arithmetical hierarchy.
A set A is recursive (synonym: computable) if and only if both A and the complement of A are recursively enumerable.
A set is recursive if and only if it is either the range of an increasing total recursive function or finite.
Some pairs of recursively enumerable sets are effectively separable and some are not.
33.5 Remarks
According to the Church–Turing thesis, any effectively calculable function is calculable by a Turing machine, and
thus a set S is recursively enumerable if and only if there is some algorithm which yields an enumeration of S. This
cannot be taken as a formal definition, however, because the Church–Turing thesis is an informal conjecture rather
than a formal axiom.
The definition of a recursively enumerable set as the domain of a partial function, rather than the range of a total
recursive function, is common in contemporary texts. This choice is motivated by the fact that in generalized recursion
theories, such as α-recursion theory, the definition corresponding to domains has been found to be more natural. Other
texts use the definition in terms of enumerations, which is equivalent for recursively enumerable sets.
33.6 References
• Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1;
ISBN 0-07-053522-1.
• Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag,
Berlin, 1987. ISBN 3-540-15299-7.
• Soare, Robert I. Recursively enumerable sets and degrees. Bull. Amer. Math. Soc. 84 (1978), no. 6, 1149–
1181.
Chapter 34
Semi-Thue system
In theoretical computer science and mathematical logic a string rewriting system (SRS), historically called a semi-
Thue system, is a rewriting system over strings from a (usually finite) alphabet. Given a binary relation R between
fixed strings over the alphabet, called rewrite rules, denoted by s → t , an SRS extends the rewriting relation to all
strings in which the left- and right-hand side of the rules appear as substrings, that is usv → utv , where s , t , u ,
and v are strings.
The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Thus they constitute a
natural framework for solving the word problem for monoids and groups.
An SRS can be defined directly as an abstract rewriting system. It can also be seen as a restricted kind of a term
rewriting system. As a formalism, string rewriting systems are Turing complete. The semi-Thue name comes from
the Norwegian mathematician Axel Thue, who introduced systematic treatment of string rewriting systems in a 1914
paper.[1] Thue introduced this notion hoping to solve the word problem for finitely presented semigroups. It was not
until 1947 that the problem was shown to be undecidable; this result was obtained independently by Emil Post and A.
A. Markov Jr.[2][3]
34.1 Definition
A string rewriting system or semi-Thue system is a tuple (Σ, R) where
• Σ is an alphabet, usually assumed finite.[4] The elements of the set Σ∗ (* is the Kleene star here) are finite
(possibly empty) strings on Σ , sometimes called words in formal languages; we will simply call them strings
here.
• R is a binary relation on strings from Σ , i.e., R ⊆ Σ∗ × Σ∗ . Each element (u, v) ∈ R is called a (rewriting)
rule and is usually written u → v .
Since →R is a relation on Σ∗ , the pair (Σ∗ , →R ) fits the definition of an abstract rewriting system. Obviously R is a
subset of →R . Some authors use a different notation for the arrow in →R (e.g. ⇒R ) in order to distinguish it from
R itself ( → ) because they later want to be able to drop the subscript and still avoid confusion between R and the
one-step rewrite induced by R .
Clearly in a semi-Thue system we can form a (finite or infinite) sequence of strings produced by starting with an
initial string s0 ∈ Σ∗ and repeatedly rewriting it by making one substring-replacement at a time:
s0 →R s1 →R s2 →R . . .
A zero-or-more-steps rewriting like this is captured by the reflexive transitive closure of →R , denoted by →∗R (see
abstract rewriting system#Basic notions). This is called the rewriting relation or reduction relation on Σ∗ induced
by R .
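One-step rewriting as just defined can be sketched directly over Python strings; the rule set and example string below are illustrative choices, not part of the formal definition:

```python
def one_step(s, rules):
    """All strings reachable from s by exactly one application of a rule
    u -> v at some occurrence of u as a substring of s."""
    results = set()
    for u, v in rules:
        start = 0
        while True:
            i = s.find(u, start)          # next occurrence of the left side
            if i == -1:
                break
            results.add(s[:i] + v + s[i + len(u):])
            start = i + 1                 # continue past this occurrence
    return results

# Example rule ab -> ba, applied at both positions where "ab" occurs.
steps = one_step("abab", [("ab", "ba")])
```

Iterating `one_step` and taking the union over all iterations computes (a finite approximation of) the reduction relation induced by R.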
A semi-Thue system is also a special type of Post canonical system, but every Post canonical system can also be
reduced to an SRS. Both formalisms are Turing complete, and thus equivalent to Noam Chomsky's unrestricted
grammars, which are sometimes called semi-Thue grammars.[7] A formal grammar differs from a semi-Thue system only
by the separation of the alphabet in terminals and non-terminals, and the fixation of a starting symbol amongst non-
terminals. A minority of authors actually define a semi-Thue system as a triple (Σ, A, R) , where A ⊆ Σ∗ is called
the set of axioms. Under this “generative” definition of semi-Thue system, an unrestricted grammar is just a semi-
Thue system with a single axiom in which one partitions the alphabet in terminals and non-terminals, and makes the
axiom a nonterminal.[8] The simple artifice of partitioning the alphabet in terminals and non-terminals is a powerful
one; it allows the definition of the Chomsky hierarchy based on what combination of terminals and non-terminals
rules contain. This was a crucial development in the theory of formal languages.
• L-system
• MU puzzle
34.8 Notes
[4] In Book and Otto a semi-Thue system is defined over a finite alphabet through most of the book, except in chapter 7,
where monoid presentations are introduced and this assumption is quietly dropped.
[7] D.I.A. Cohen, Introduction to Computer Theory, 2nd ed., Wiley-India, 2007, ISBN 81-265-1334-9, p.572
[8] Dan A. Simovici, Richard L. Tenney, Theory of formal languages with applications, World Scientific, 1999 ISBN 981-02-
3729-4, chapter 4
34.9 References
34.9.1 Monographs
• Ronald V. Book and Friedrich Otto, String-rewriting Systems, Springer, 1993, ISBN 0-387-97965-4.
• Matthias Jantzen, Confluent string rewriting, Birkhäuser, 1988, ISBN 0-387-13715-7.
34.9.2 Textbooks
• Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, complexity, and languages: fundamentals of theo-
retical computer science, 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1, chapter 7
• Elaine Rich, Automata, computability and complexity: theory and applications, Prentice Hall, 2007, ISBN
0-13-228806-0, chapter 23.5.
34.9.3 Surveys
• Samson Abramsky, Dov M. Gabbay, Thomas S. E. Maibaum (ed.), Handbook of Logic in Computer Science:
Semantic modelling, Oxford University Press, 1995, ISBN 0-19-853780-8.
• Grzegorz Rozenberg, Arto Salomaa (ed.), Handbook of Formal Languages: Word, language, grammar, Springer,
1997, ISBN 3-540-60420-0.
Chapter 35
Sheffer stroke
Venn diagram of A ↑ B
In Boolean functions and propositional calculus, the Sheffer stroke, named after Henry M. Sheffer, written "|" (see
vertical bar, not to be confused with "||", which is often used to represent disjunction), “Dpq”, or "↑" (an upwards
arrow), denotes a logical operation that is equivalent to the negation of the conjunction operation, expressed in ordinary
language as “not both”. It is also called nand (“not and”) or the alternative denial, since it says in effect that at least
one of its operands is false. In Boolean algebra and digital electronics it is known as the NAND operation.
Like its dual, the NOR operator (also known as the Peirce arrow or Quine dagger), NAND can be used by itself,
without any other logical operator, to constitute a logical formal system (making NAND functionally complete). This
property makes the NAND gate crucial to modern digital electronics, including its use in NAND flash memory and
computer processor design.
35.1 Definition
The NAND operation is a logical operation on two logical values. It produces a value of true if and only if at least
one of its operands is false.
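The definition amounts to the following truth table, checked here programmatically:

```python
def nand(p, q):
    """Sheffer stroke / NAND: true unless both arguments are true."""
    return not (p and q)

# Full truth table over the two Boolean arguments.
table = {(p, q): nand(p, q) for p in (False, True) for q in (False, True)}
```

The single false row, at (True, True), is exactly the negation of conjunction.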
35.2 History
The stroke is named after Henry M. Sheffer, who in 1913 published a paper in the Transactions of the American
Mathematical Society (Sheffer 1913) providing an axiomatization of Boolean algebras using the stroke, and proved
its equivalence to a standard formulation thereof by Huntington employing the familiar operators of propositional
logic (and, or, not). Because of self-duality of Boolean algebras, Sheffer’s axioms are equally valid for either of the
NAND or NOR operations in place of the stroke. Sheffer interpreted the stroke as a sign for non-disjunction (NOR)
in his paper, mentioning non-conjunction only in a footnote and without a special sign for it. It was Jean Nicod who
first used the stroke as a sign for non-conjunction (NAND) in a paper of 1917 and which has since become current
practice.[1] Russell and Whitehead used the Sheffer stroke in the 1927 second edition of Principia Mathematica and
suggested it as a replacement for the “or” and “not” operations of the first edition.
Charles Sanders Peirce (1880) had discovered the functional completeness of NAND or NOR more than 30 years
earlier, using the term ampheck (for “cutting both ways”), but he never published his finding.
35.3 Properties
NAND does not possess any of the following five properties, each of which is required to be absent from, and the
absence of all of which is sufficient for, at least one member of a set of functionally complete operators: truth-
preservation, falsity-preservation, linearity, monotonicity, self-duality. (An operator is truth- (falsity-) preserving
if its value is truth (falsity) whenever all of its arguments are truth (falsity).) Therefore {NAND} is a functionally
complete set.
This can also be realized as follows: All three elements of the functionally complete set {AND, OR, NOT} can be
constructed using only NAND. Thus the set {NAND} must be functionally complete as well.
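That argument can be checked mechanically. The following sketch expresses NOT, AND, and OR using NAND alone (via the standard constructions) and verifies each against the ordinary connective on all Boolean inputs:

```python
def nand(p, q):
    return not (p and q)

# The classical constructions of NOT, AND, OR from NAND alone.
def not_(p):    return nand(p, p)                       # p | p
def and_(p, q): return nand(nand(p, q), nand(p, q))     # (p|q) | (p|q)
def or_(p, q):  return nand(nand(p, p), nand(q, q))     # (p|p) | (q|q)

B = (False, True)
ok = all(not_(p) == (not p) and
         and_(p, q) == (p and q) and
         or_(p, q) == (p or q)
         for p in B for q in B)
```

Since {AND, OR, NOT} is functionally complete and each member is definable from NAND, {NAND} is functionally complete.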
35.5.1 Symbols
The symbols of the language are the atoms pn for natural numbers n, together with the stroke "|" and the parentheses
"(" and ")".
The Sheffer stroke commutes but does not associate (e.g., (T|T)|F = T, but T|(T|F) = F). Hence any formal system
including the Sheffer stroke must also include a means of indicating grouping. We shall employ '(' and ')' to this effect.
35.5.2 Syntax
Construction Rule I: For each natural number n, the symbol pn is a well-formed formula (wff), called an atom.
Construction Rule II: If X and Y are wffs, then (X|Y) is a wff.
Closure Rule: Any formulae which cannot be constructed by means of the first two Construction Rules are not wffs.
The letters U, V, W, X, and Y are metavariables standing for wffs.
A decision procedure for determining whether a formula is well-formed goes as follows: “deconstruct” the formula
by applying the Construction Rules backwards, thereby breaking the formula into smaller subformulae. Then repeat
this recursive deconstruction process to each of the subformulae. Eventually the formula should be reduced to its
atoms, but if some subformula cannot be so reduced, then the formula is not a wff.
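This deconstruction procedure can be sketched as a recursive checker. Writing atoms concretely as p0, p1, ... is an assumption of the sketch; the key step is locating the unique stroke at parenthesis depth 1 and recursing on the two sides:

```python
def is_wff(s):
    """Decide whether s is a wff: an atom pn, or (X|Y) with X, Y wffs."""
    if len(s) >= 2 and s[0] == 'p' and s[1:].isdigit():
        return True                       # Construction Rule I: an atom
    if not (s.startswith('(') and s.endswith(')')):
        return False
    depth = 0
    for i, c in enumerate(s):
        if c == '(':
            depth += 1
        elif c == ')':
            depth -= 1
        elif c == '|' and depth == 1:     # the top-level stroke of (X|Y)
            # Construction Rule II, applied backwards: recurse on X and Y.
            return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
    return False
```

Each recursive call strictly shortens the string, so the procedure always terminates, which is what makes it a decision procedure.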
35.5.3 Calculus
All wffs of the form
((U|(V|W))|((Y|(Y|Y))|((X|V)|((U|X)|(U|X)))))
are axioms. Instances of
(U|(V|W)), U ⊢ W
are the inference rules.
35.5.4 Simplification
Since the only connective of this logic is |, the symbol | could be discarded altogether, leaving only the parentheses to
group the letters. A pair of parentheses must always enclose a pair of wffs. Examples of theorems in this simplified
notation are
(p(p(q(q((pq)(pq)))))),
(p(p((qq)(pp)))).
(U) := (UU)
((U)) ≡ U
for any U. This simplification causes the need to change some rules:
(p(p(q(q((pq)(pq)))))) becomes |p|p|q|q||pq|pq, and
(p(p((qq)(pp)))) becomes |p|p||qq|pp.
This follows the same rules as the parenthesis version, with opening parenthesis replaced with a Sheffer stroke and
the (redundant) closing parenthesis removed.
• AND gate
• Boolean domain
• CMOS
• Laws of Form
• Logic gate
• Logical graph
• NAND logic
• NAND gate
• NOR gate
• NOT gate
• OR gate
• Peirce’s law
• Propositional logic
• XOR gate
• Peirce arrow
35.7 Notes
[1] Church (1956:134)
35.8 References
• Bocheński, Józef Maria (1960), Précis of Mathematical Logic, translated from the French and German editions
by Otto Bird, Dordrecht, South Holland: D. Reidel.
• Church, Alonzo (1956), Introduction to Mathematical Logic, Vol. 1, Princeton: Princeton University Press.
• Nicod, Jean G. P. (1917), “A Reduction in the Number of Primitive Propositions of Logic”, Proceedings of
the Cambridge Philosophical Society, Vol. 19, pp. 32–41.
• Charles Sanders Peirce, 1880, “A Boolian[sic] Algebra with One Constant”, in Hartshorne, C. and Weiss, P.,
eds., (1931–35) Collected Papers of Charles Sanders Peirce, Vol. 4: 12–20, Cambridge: Harvard University
Press.
• Sheffer, H. M. (1913), “A set of five independent postulates for Boolean algebras, with application to logical
constants”, Transactions of the American Mathematical Society 14: 481–488, JSTOR 1988701
Chapter 36
Singleton (mathematics)
In mathematics, a singleton, also known as a unit set,[1] is a set with exactly one element. For example, the set {0}
is a singleton.
The term is also used for a 1-tuple (a sequence with one element).
36.1 Properties
Within the framework of Zermelo–Fraenkel set theory, the axiom of regularity guarantees that no set is an element
of itself. This implies that a singleton is necessarily distinct from the element it contains,[1] thus 1 and {1} are not
the same thing, and the empty set is distinct from the set containing only the empty set. A set such as {{1, 2, 3}} is
a singleton as it contains a single element (which itself is a set, however, not a singleton).
A set is a singleton if and only if its cardinality is 1. In the standard set-theoretic construction of the natural numbers,
the number 1 is defined as the singleton {0}.
In axiomatic set theory, the existence of singletons is a consequence of the axiom of pairing: for any set A, the axiom
applied to A and A asserts the existence of {A, A}, which is the same as the singleton {A} (since it contains A, and
no other set, as an element).
If A is any set and S is any singleton, then there exists precisely one function from A to S, the function sending every
element of A to the single element of S. Thus every singleton is a terminal object in the category of sets.
A singleton has the property that every function from it to any arbitrary set is injective. The only non-singleton set
with this property is the empty set.
Structures built on singletons often serve as terminal objects or zero objects of various categories:
• The statement above shows that the singleton sets are precisely the terminal objects in the category Set of sets.
No other sets are terminal.
• Any singleton admits a unique topological space structure (both subsets are open). These singleton topological
spaces are terminal objects in the category of topological spaces and continuous functions. No other spaces are
terminal in that category.
• Any singleton admits a unique group structure (the unique element serving as identity element). These singleton
groups are zero objects in the category of groups and group homomorphisms. No other groups are terminal in
that category.
Let S be a class defined by an indicator function
b : X → {0, 1}.
Then S is called a singleton if and only if there is some y ∈ X such that for all x ∈ X,
b(x) = (x = y).
Traditionally, this definition was introduced by Whitehead and Russell[2] along with the definition of the natural
number 1, as
1 =def α̂{(∃x). α = ιȷx} , where ιȷx =def ŷ(y = x) .
36.5 References
[1] Stoll, Robert (1961). Sets, Logic and Axiomatic Theories. W. H. Freeman and Company. pp. 5–6.
[2] Whitehead, Alfred North; Bertrand Russell (1910). Principia Mathematica. p. 37.
Chapter 37
Subset
In mathematics, a set A is a subset of a set B if A is
“contained” inside B, that is, all elements of A are also elements of B. A and B may coincide. The relationship of one
set being a subset of another is called inclusion or sometimes containment.
set being a subset of another is called inclusion or sometimes containment.
The subset relation defines a partial order on sets.
The algebra of subsets forms a Boolean algebra in which the subset relation is called inclusion.
37.1 Definitions
If A and B are sets and every element of A is also an element of B, then A is a subset of B, denoted by A ⊆ B.
If A is a subset of B, but A is not equal to B (i.e. there exists at least one element of B which is not an element of A),
then A is a proper (or strict) subset of B, denoted by A ⊊ B.
For any set S, the inclusion relation ⊆ is a partial order on the set P(S) of all subsets of S (the power set of S).
When quantified, A ⊆ B is represented as: ∀x{x∈A → x∈B}.[1]
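For finite sets, the quantified definition can be checked directly by testing every element of A, as in this small sketch:

```python
def is_subset(A, B):
    """∀x (x ∈ A → x ∈ B), checked element by element for finite sets."""
    return all(x in B for x in A)

def is_proper_subset(A, B):
    """A ⊊ B: a subset that is not equal to B."""
    return is_subset(A, B) and A != B
```

(Python's built-in set operators `<=` and `<` implement the same relations.)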
37.3 Examples
• The set A = {1, 2} is a proper subset of B = {1, 2, 3}, thus both expressions A ⊆ B and A ⊊ B are true.
• The set D = {1, 2, 3} is a subset of E = {1, 2, 3}, thus D ⊆ E is true, and D ⊊ E is not true (false).
• Any set is a subset of itself, but not a proper subset. (X ⊆ X is true, and X ⊊ X is false for any set X.)
• The empty set { }, denoted by ∅, is also a subset of any given set X. It is also always a proper subset of any set
except itself.
• The set {x: x is a prime number greater than 10} is a proper subset of {x: x is an odd number greater than 10}
• The set of natural numbers is a proper subset of the set of rational numbers; likewise, the set of points in a line
segment is a proper subset of the set of points in a line. These are two examples in which both the subset and
the whole set are infinite, and the subset has the same cardinality (the concept that corresponds to size, that is,
the number of elements, of a finite set) as the whole; such cases can run counter to one’s initial intuition.
• The set of rational numbers is a proper subset of the set of real numbers. In this example, both sets are infinite
but the latter set has a larger cardinality (or power) than the former set.
[Euler diagram: the regular polygons form a subset of the polygons]
• A is a proper subset of B
• C is a subset of B, but not a proper subset of B
37.6 References
[1] Rosen, Kenneth H. (2012). Discrete Mathematics and Its Applications (7th ed.). New York: McGraw-Hill. p. 119. ISBN
978-0-07-338309-5.
[2] Rudin, Walter (1987), Real and complex analysis (3rd ed.), New York: McGraw-Hill, p. 6, ISBN 978-0-07-054234-1,
MR 924157
[Euler diagram: A ⊆ B and B ⊆ C imply A ⊆ C]
Chapter 38
Tag system
A tag system is a deterministic computational model published by Emil Leon Post in 1943 as a simple form of Post
canonical system. A tag system may also be viewed as an abstract machine, called a Post tag machine (not to be
confused with Post-Turing machines)—briefly, a finite state machine whose only tape is a FIFO queue of unbounded
length, such that in each transition the machine reads the symbol at the head of the queue, deletes a fixed number of
symbols from the head, and to the tail appends a symbol-string preassigned to the deleted symbol. (Because all of the
indicated operations are performed in each transition, a tag machine strictly has only one state.)
38.1 Definition
A tag system is a triplet (m, A, P), where
• m is a positive integer, called the deletion number.
• A is a finite alphabet of symbols, one of which is a special halting symbol. All finite (possibly empty) strings
on A are called words.
• P is a set of production rules, assigning a word P(x) (called a production) to each symbol x in A. The production
(say P(H)) assigned to the halting symbol is seen below to play no role in computations, but for convenience is
taken to be P(H) = 'H'.
The term m-tag system is often used to emphasise the deletion number m. Definitions vary somewhat in the literature
(cf. the references below); the one presented here is that of Rogozhin.
• A halting word is a word that either begins with the halting symbol or whose length is less than m.
• A transformation t (called the tag operation) is defined on the set of non-halting words, such that if x denotes
the leftmost symbol of a word S, then t(S) is the result of deleting the leftmost m symbols of S and appending
the word P(x) on the right.
• A computation by a tag system is a finite sequence of words produced by iterating the transformation t, starting
with an initially given word and halting when a halting word is produced. (By this definition, a computation
is not considered to exist unless a halting word is produced in finitely-many iterations. Alternative definitions
allow nonhalting computations, for example by using a special subset of the alphabet to identify words that
encode output.)
The use of a halting symbol in the above definition allows the output of a computation to be encoded in the final word
alone, whereas otherwise the output would be encoded in the entire sequence of words produced by iterating the tag
operation.
A common alternative definition uses no halting symbol and treats all words of length less than m as halting words.
Another definition is the original one used by Post 1943 (described in the historical note below), in which the only
halting word is the empty string.
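Under the Rogozhin-style definition above, the tag operation can be sketched in a few lines; the alphabet, productions, and initial word used here are those of the 2-tag example given below in the text:

```python
def run_tag(m, productions, word, halt_symbol='H', max_steps=10_000):
    """Return the computation as the list of words produced, ending with
    the first halting word (one shorter than m or starting with the
    halting symbol)."""
    trace = [word]
    for _ in range(max_steps):
        if len(word) < m or word[0] == halt_symbol:
            return trace                          # halting word reached
        # The tag operation t: delete the leftmost m symbols, then append
        # the production assigned to the old leftmost symbol.
        word = word[m:] + productions[word[0]]
        trace.append(word)
    raise RuntimeError("no halting word within max_steps")

# The 2-tag system with halting symbol from the example below.
trace = run_tag(2, {'a': 'ccbaH', 'b': 'cca', 'c': 'cc'}, 'baa')
```

The `max_steps` cutoff is only a practical guard: by definition a computation exists only if a halting word is reached in finitely many iterations.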
This is merely to illustrate a simple 2-tag system that uses a halting symbol.

2-tag system
Alphabet: {a,b,c,H}
Production rules:
a --> ccbaH
b --> cca
c --> cc

Computation
Initial word: baa
acca
caccbaH
ccbaHcc
baHcccc
Hcccccca (halt)
This simple 2-tag system is adapted from [De Mol, 2008]. It uses no halting symbol, but halts on any word of length
less than 2, and computes a slightly modified version of the Collatz sequence.
In the original Collatz sequence, the successor of n is either n/2 (for even n) or 3n + 1 (for odd n). The value 3n + 1
is clearly even for odd n, hence the next term after 3n + 1 is surely (3n + 1)/2. In the sequence computed by the tag
system below we skip this intermediate step, hence the successor of n is (3n + 1)/2 for odd n.
In this tag system, a positive integer n is represented by the word aa...a with n a’s.
2-tag system
Alphabet: {a,b,c}
Production rules:
a --> bc
b --> a
c --> aaa

Computation
Initial word: aaa <--> n=3
abc
cbc
caaa
aaaaa <--> 5
aaabc
abcbc
cbcbc
cbcaaa
caaaaaa
aaaaaaaa <--> 8
aaaaaabc
aaaabcbc
aabcbcbc
bcbcbcbc
bcbcbca
bcbcaa
bcaaa
aaaa <--> 4
aabc
bcbc
bca
aa <--> 2
bc
a <--> 1 (halt)
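As a check on this system, a small sketch can run it and extract the encoded values, namely the lengths of the pure runs of a's that appear. The loop terminates for the inputs tried here; for arbitrary n, termination is exactly the (unproven) Collatz conjecture:

```python
def collatz_trace(n):
    """Run the Collatz 2-tag system on the word a^n and collect the
    values encoded along the way as pure runs of a's."""
    prods = {'a': 'bc', 'b': 'a', 'c': 'aaa'}
    word = 'a' * n
    values = [n]
    while len(word) >= 2:                 # halting words have length < 2
        word = word[2:] + prods[word[0]]  # the tag operation with m = 2
        if set(word) == {'a'}:            # a pure run of a's encodes a value
            values.append(len(word))
    return values

values = collatz_trace(3)
```

Starting from 3, the extracted values follow the shortcut Collatz map n → n/2 (n even), n → (3n+1)/2 (n odd), matching the computation shown above.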
• If x denotes the leftmost symbol of a nonempty word S, then t(S) is the operation consisting of first appending
the word P(x) to the right end of S, and then deleting the leftmost m symbols of the result, deleting all of them
if there are fewer than m symbols.
The above remark concerning the Turing-completeness of the set of m-tag systems, for any m > 1, applies also to
these tag systems as originally defined by Post.
According to a footnote in Post 1943, B. P. Gill suggested the name for an earlier variant of the problem in which
the first m symbols are left untouched; instead, a check mark indicating the current position moves to the right by
m symbols every step. The problem of determining whether or not the check mark ever touches the end of the
sequence was then dubbed the “problem of tag”, referring to the children's game of tag.
A cyclic tag system is a modification of the original tag system. The alphabet consists of only two symbols, 0 and
1, and the production rules comprise a list of productions considered sequentially, cycling back to the beginning of
the list after considering the “last” production on the list. For each production, the leftmost symbol of the word is
examined—if the symbol is 1, the current production is appended to the right end of the word; if the symbol is 0,
no characters are appended to the word; in either case, the leftmost symbol is then deleted. The system halts if and
when the word becomes empty.
38.5.1 Example
Cyclic Tag System
    Productions: (010, 000, 1111)
Computation
    Initial Word: 11001

    Production   Word
    ----------   --------------
    010          11001
    000          1001010
    1111         001010000
    010          01010000
    000          1010000
    1111         010000000
    010          10000000
    ...
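The cyclic rule (append the current production only when the leftmost symbol is 1, always delete the leftmost symbol, and cycle through the production list) can be sketched in a few lines of Python; the function name is ours:

```python
from itertools import cycle

def run_cyclic_tag(word, productions, max_steps=7):
    # Cyclic tag system: cycle through the productions; if the leftmost
    # symbol is 1, append the current production; always delete the
    # leftmost symbol. Halts if the word becomes empty.
    trace = [word]
    prods = cycle(productions)
    for _ in range(max_steps):
        if not word:
            break
        p = next(prods)
        word = word[1:] + (p if word[0] == "1" else "")
        trace.append(word)
    return trace

trace = run_cyclic_tag("11001", ["010", "000", "1111"])
print(trace)
# ['11001', '1001010', '001010000', '01010000',
#  '1010000', '010000000', '10000000', '0000000010']
```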
Cyclic tag systems were created by Matthew Cook under the employ of Stephen Wolfram, and were used in Cook’s
demonstration that the Rule 110 cellular automaton is universal. A key part of the demonstration was that cyclic tag
systems can emulate a Turing-complete class of tag systems.
An m-tag system with alphabet {a1 , ..., an} and corresponding productions {P1 , ..., Pn} is emulated by a cyclic tag
system with m*n productions (Q1 , ..., Qn, -, -, ..., -), where all but the first n productions are the empty string (denoted
by '-'). The Qk are encodings of the respective Pk, obtained by replacing each symbol of the tag system alphabet by
a length-n binary string as follows (these are to be applied also to the initial word of a tag system computation):
a1 = 100...00 a2 = 010...00 . . . an = 000...01
That is, ak is encoded as a binary string with a 1 in the kth position from the left, and 0’s elsewhere. Successive lines
of a tag system computation will then occur encoded as every (m*n)th line of its emulation by the cyclic tag system.
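The encoding step of this construction can be sketched in Python (the helper names are ours); here it is applied to the Collatz-like 2-tag system from the earlier example:

```python
def encode_tag_system(alphabet, productions, m):
    """Encode an m-tag system as a cyclic tag system, following the
    construction described above: symbol a_k becomes the length-n
    binary string with a 1 in position k, and the cyclic system gets
    m*n productions, all empty except the first n."""
    n = len(alphabet)
    code = {s: "".join("1" if i == k else "0" for i in range(n))
            for k, s in enumerate(alphabet)}
    encode_word = lambda w: "".join(code[s] for s in w)
    cyclic = [encode_word(productions[s]) for s in alphabet]
    cyclic += [""] * (m * n - n)      # the remaining productions are empty
    return code, encode_word, cyclic

code, enc, cyclic = encode_tag_system("abc", {"a": "bc", "b": "a", "c": "aaa"}, m=2)
print(code)    # {'a': '100', 'b': '010', 'c': '001'}
print(cyclic)  # ['010001', '100', '100100100', '', '', '']
```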
38.8 References
• Cocke, J., and Minsky, M.: “Universality of Tag Systems with P=2”, J. Assoc. Comput. Mach. 11, 15–20,
1964.
• De Mol, L.: “Tag systems and Collatz-like functions”, Theoretical Computer Science, 390:1, 92–101, January
2008. (Preprint Nr. 314.)
• Marvin Minsky, 1961, “Recursive Unsolvability of Post’s Problem of ‘Tag’ and other Topics in Theory of Turing
Machines”, the Annals of Mathematics, 2nd ser., Vol. 74, No. 3 (Nov. 1961), pp. 437–455. Stable URL: http:
//links.jstor.org/sici?sici=0003-486X%2819611%292%3A74%3A3%3C437%3ARUOPPO%3E2.0.CO%3B2-N.
• Marvin Minsky, 1967, Computation: Finite and Infinite Machines, Prentice–Hall, Inc., Englewood Cliffs, N.J.,
no ISBN, Library of Congress Card Catalog number 67-12342.
In chapter 14, titled “Very Simple Bases for Computability”, Minsky presents a very readable
(and exampled) subsection 14.6, The Problem of “Tag” and Monogenic Canonical Systems
(pp. 267–273) (this subsection is indexed as “tag system”). Minsky relates his frustrating
experiences with the general problem: “Post found this (00, 1101) problem ‘intractable,’ and
so did I, even with the help of a computer.” He comments that an “effective way to decide, for
any string S, whether this process will ever repeat when started with S” is unknown, although
a few specific cases have been proven unsolvable. In particular, he mentions Cocke’s Theorem
and Corollary (1964).
• Post, E.: “Formal reductions of the combinatorial decision problem”, American Journal of Mathematics, 65
(2), 197–215 (1943). (Tag systems are introduced on p. 203ff.)
• Rogozhin, Yu.: “Small Universal Turing Machines”, Theoret. Comput. Sci. 168, 215–240, 1996.
• Wang, H.: “Tag Systems and Lag Systems”, Math. Annalen 152, 65–74, 1963.
Truth table
A truth table is a mathematical table used in logic—specifically in connection with Boolean algebra, Boolean
functions, and propositional calculus—to compute the functional values of logical expressions on each of their functional
arguments, that is, on each combination of values taken by their logical variables (Enderton, 2001). In particular,
truth tables can be used to show whether a propositional expression is true for all legitimate input values, that is,
logically valid.
Practically, a truth table is composed of one column for each input variable (for example, A and B), and one final
column for all of the possible results of the logical operation that the table is meant to represent (for example, A XOR
B). Each row of the truth table therefore contains one possible configuration of the input variables (for instance, A=true,
B=false) and the result of the operation for those values. See the examples below for further clarification. Ludwig
Wittgenstein is often credited with their invention in the Tractatus Logico-Philosophicus,[1] though they appeared at
least a year earlier in a paper on propositional logic by Emil Leon Post.[2]
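The layout just described (one column per input variable, one final column for the result, one row per combination of values) can be sketched in Python; this is an illustrative helper of our own, not anything from the article:

```python
from itertools import product

def truth_table(name, f, variables=("A", "B")):
    # One column per input variable plus one final column for the
    # result, and one row per combination of truth values.
    print(*variables, name)
    rows = []
    for values in product([True, False], repeat=len(variables)):
        rows.append((*values, f(*values)))
        print(*("T" if v else "F" for v in rows[-1]))
    return rows

# The A XOR B table used as the running example above:
rows = truth_table("A XOR B", lambda a, b: a != b)
```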
Logical identity is an operation on one logical value, typically the value of a proposition, that produces a value of true
if its operand is true and a value of false if its operand is false.
The truth table for the logical identity operator is as follows:
Logical negation is an operation on one logical value, typically the value of a proposition, that produces a value of
true if its operand is false and a value of false if its operand is true.
The truth table for NOT p (also written as ¬p, Np, Fpq, or ~p) is as follows:
Here is a truth table giving definitions of all 16 of the possible truth functions of two Boolean variables P and Q
(information about notation may be found in Bocheński (1959), Enderton (2001), and Quine (1982); for details about
the operators see the Key below):
where T = true and F = false. The Com row indicates whether an operator, op, is commutative: P op Q = Q op P.
The L id row shows the operator’s left identities, if it has any: values I such that I op Q = Q. The R id row shows
the operator’s right identities, if it has any: values I such that P op I = P.[note 1]
The four combinations of input values for p, q are read by row from the table above. The output function for each p,
q combination can be read, by row, from the table.
Key:
The key is oriented by column, rather than row. There are four columns rather than four rows, to display the four
combinations of p, q, as input.
p: T T F F
q: T F T F
There are 16 rows in this key, one row for each binary function of the two binary variables p, q. For example, in
row 2 of this key, the value of converse nonimplication (' ↚ ') is T only for the column denoted by the combination
p=F, q=T, and F for the three remaining columns of p, q.
The output row for ↚ is thus
2: F F T F
and the 16-row[3] key is
Logical operators can also be visualized using Venn diagrams.
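The 16 output rows of the key can also be enumerated programmatically; the following Python sketch (our own illustration) lists every binary truth function over the column order p: T T F F, q: T F T F and checks the converse-nonimplication row described above:

```python
from itertools import product

# Column order used in the key: p: T T F F, q: T F T F.
columns = [(True, True), (True, False), (False, True), (False, False)]

# All 16 binary truth functions, each as a row of four output values.
rows = [tuple(bool((k >> (3 - i)) & 1) for i in range(4)) for k in range(16)]

# Converse nonimplication is true only in the column p=F, q=T:
conv_nonimpl = tuple(q and not p for p, q in columns)
print(conv_nonimpl)  # (False, False, True, False)
```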
Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if both of its operands are true.
The truth table for p AND q (also written as p ∧ q, Kpq, p & q, or p · q) is as follows:
In ordinary language terms, if both p and q are true, then the conjunction p ∧ q is true. For all other assignments of
logical values to p and to q the conjunction p ∧ q is false.
It can also be said that if p, then p ∧ q is q, otherwise p ∧ q is p.
Logical disjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if at least one of its operands is true.
The truth table for p OR q (also written as p ∨ q, Apq, p || q, or p + q) is as follows:
Stated in English, if p, then p ∨ q is p, otherwise p ∨ q is q.
Logical implication or the material conditional are both associated with an operation on two logical values, typically
the values of two propositions, that produces a value of false just in the singular case the first operand is true and the
second operand is false.
The truth table associated with the material conditional if p then q (symbolized as p → q) and the logical implication
p implies q (symbolized as p ⇒ q, or Cpq) is as follows:
It may also be useful to note that p → q is equivalent to ¬p ∨ q.
Logical equality (also known as biconditional) is an operation on two logical values, typically the values of two
propositions, that produces a value of true if both operands are false or both operands are true.
The truth table for p XNOR q (also written as p ↔ q, Epq, p = q, or p ≡ q) is as follows:
So p EQ q is true if p and q have the same truth value (both true or both false), and false if they have different truth
values.
Exclusive disjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if one but not both of its operands is true.
The truth table for p XOR q (also written as p ⊕ q, Jpq, or p ≠ q) is as follows:
For two propositions, XOR can also be written as (p ∧ ¬q) ∨ (¬p ∧ q).
The logical NAND is an operation on two logical values, typically the values of two propositions, that produces a
value of false if both of its operands are true. In other words, it produces a value of true if at least one of its operands
is false.
The truth table for p NAND q (also written as p ↑ q, Dpq, or p | q) is as follows:
It is frequently useful to express a logical operation as a compound operation, that is, as an operation that is built up or
composed from other operations. Many such compositions are possible, depending on the operations that are taken
as basic or “primitive” and the operations that are taken as composite or “derivative”.
In the case of logical NAND, it is clearly expressible as a compound of NOT and AND.
The negation of a conjunction: ¬(p ∧ q), and the disjunction of negations: (¬p) ∨ (¬q) can be tabulated as follows:
The logical NOR is an operation on two logical values, typically the values of two propositions, that produces a value
of true if both of its operands are false. In other words, it produces a value of false if at least one of its operands is
true. The symbol ↓ is also known as the Peirce arrow after its inventor, Charles Sanders Peirce, and is a sole sufficient operator.
The truth table for p NOR q (also written as p ↓ q, Xpq, ¬(p ∨ q)) is as follows:
The negation of a disjunction ¬(p ∨ q), and the conjunction of negations (¬p) ∧ (¬q) can be tabulated as follows:
Inspection of the tabular derivations for NAND and NOR, under each assignment of logical values to the functional
arguments p and q, produces the identical patterns of functional values for ¬(p ∧ q) as for (¬p) ∨ (¬q), and for ¬(p ∨ q)
as for (¬p) ∧ (¬q). Thus the first and second expressions in each pair are logically equivalent, and may be substituted
for each other in all contexts that pertain solely to their logical values.
This equivalence is one of De Morgan’s laws.
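Both De Morgan laws can be verified exhaustively over the four assignments of truth values, as in this short Python check (our own illustration):

```python
from itertools import product

# Verify both De Morgan laws over all four assignments to (p, q).
for p, q in product([True, False], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # NAND pattern
    assert (not (p or q)) == ((not p) and (not q))   # NOR pattern
print("De Morgan's laws hold on every row")
```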
39.3 Applications
Truth tables can be used to prove many other logical equivalences. For example, consider the following truth table:
This demonstrates the fact that p → q is logically equivalent to ¬p ∨ q.
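The same row-by-row comparison can be carried out mechanically; in this Python sketch (our own illustration), p → q is defined directly by its truth table and compared against the compound expression:

```python
from itertools import product

# p -> q defined directly by its truth table (false only for T, F):
def implies(p, q):
    return {(True, True): True, (True, False): False,
            (False, True): True, (False, False): True}[(p, q)]

# It agrees with (not p) or q on every row:
assert all(implies(p, q) == ((not p) or q)
           for p, q in product([True, False], repeat=2))
print("p -> q is equivalent to (not p) or q")
```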
T = true, F = false
∧ = AND (logical conjunction)
∨ = OR (logical disjunction)
⊕ = XOR (exclusive or)
↔ = XNOR (exclusive nor)
→ = conditional “if-then”
← = conditional “(then)-if”
Note that this table does not describe the logic operations necessary to implement this operation; rather, it simply
specifies the mapping from input values to output values.
With respect to the result, this example may be arithmetically viewed as modulo 2 binary addition, and as logically
equivalent to the exclusive-or (exclusive disjunction) binary logic operation.
A truth table of this kind can be used only for very simple inputs and outputs, such as 1s and 0s. However, if the number
of possible values for the inputs increases, the size of the truth table grows accordingly.
For instance, in an addition operation, one needs two operands, A and B. Each can have one of two values, zero or
one. The number of combinations of these two values is 2×2, or four. So the result is four possible outputs of C and
R. If one were to use base 3, the size would increase to 3×3, or nine possible outputs.
The first “addition” example above is called a half-adder. A full-adder is used when the carry from the previous operation
is provided as input to the next adder. Thus, a truth table of eight rows is needed to describe a full adder’s logic:

A B C* | C R
0 0 0  | 0 0
0 1 0  | 0 1
1 0 0  | 0 1
1 1 0  | 1 0
0 0 1  | 0 1
0 1 1  | 1 0
1 0 1  | 1 0
1 1 1  | 1 1

Same as the previous table, but with C* = carry from the previous adder.
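The full-adder function specified by this table can be written directly in Python (an illustrative sketch, not from the article); R is the XOR of the three inputs and C is their majority:

```python
def full_adder(a, b, c_in):
    # Sum bit R is a XOR b XOR c_in; carry-out C is the majority bit.
    r = a ^ b ^ c_in
    c = (a & b) | (a & c_in) | (b & c_in)
    return c, r

# Reproduce the eight-row truth table above.
for c_in in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, c_in, "->", *full_adder(a, b, c_in))
```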
39.4 History
Irving Anellis has done the research to show that C.S. Peirce appears to be the earliest logician (in 1893) to devise a
truth table matrix. From the summary of his paper:
In 1997, John Shosky discovered, on the verso of a page of the typed transcript of Bertrand Russell’s
1912 lecture on “The Philosophy of Logical Atomism”, truth table matrices. The matrix for negation is
Russell’s, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein.
It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth
table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An
unpublished manuscript by Peirce identified as having been composed in 1883–84 in connection with
the composition of Peirce’s “On the Algebra of Logic: A Contribution to the Philosophy of Notation”
that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth
table for the conditional.
39.5 Notes
[1] The operators here with equal left and right identities (XOR, AND, XNOR, and OR) are also commutative monoids because
they are also associative. While this distinction may be irrelevant in a simple discussion of logic, it can be quite important
in more advanced mathematics. For example, in category theory an enriched category is described as a base category
enriched over a monoid, and any of these operators can be used for enrichment.
39.6 See also
• First-order logic
• Functional completeness
• Karnaugh maps
• Logic gate
• Logical connective
• Logical graph
• Method of analytic tableaux
• Propositional calculus
• Truth function
39.7 References
[1] Georg Henrik von Wright (1955). “Ludwig Wittgenstein, A Biographical Sketch”. The Philosophical Review 64 (4): 527–
545 (p. 532, note 9). JSTOR 2182631.
[2] Emil Post (July 1921). “Introduction to a general theory of elementary propositions”. American Journal of Mathematics
43 (3): 163–185. JSTOR 2370324.
• Enderton, H. (2001). A Mathematical Introduction to Logic, second edition, New York: Harcourt Academic
Press. ISBN 0-12-238452-0
• Quine, W.V. (1982), Methods of Logic, 4th edition, Cambridge, MA: Harvard University Press.
Truth value
“True and false” redirects here. For the book, see True and False: Heresy and Common Sense for the Actor. For the
Unix commands, see true and false (commands). For other uses, see True (disambiguation) and False (disambigua-
tion).
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a
proposition to truth.
In classical logic, with its intended semantics, the truth values are true (1 or T) and false (0 or ⊥); that
is, classical logic is a two-valued logic. This set of two values is also called the Boolean domain. Corresponding
semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. Logical
biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false.
Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan’s laws:
¬(p∧q) ⇔ ¬p ∨ ¬q
¬(p∨q) ⇔ ¬p ∧ ¬q
Propositional variables become variables in the Boolean domain. Assigning values for propositional variables is
referred to as valuation.
In intuitionistic logic, and more generally in constructive mathematics, statements are assigned a truth value only if they
can be given a constructive proof. One starts with a set of axioms; a statement is true if one can build a proof of
the statement from those axioms, and false if one can deduce a contradiction from it. This leaves open the
possibility of statements that have not yet been assigned a truth value.
Unproved statements in intuitionistic logic are not given an intermediate truth value (as is sometimes mistakenly
asserted). Indeed, one can prove that they have no third truth value, a result dating back to Glivenko in 1928.[1]
Instead, statements simply remain of unknown truth value until they are either proved or disproved.
There are various ways of interpreting intuitionistic logic, including the Brouwer–Heyting–Kolmogorov interpretation.
See also Intuitionistic Logic - Semantics.
Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions.
For example, intuitionistic logic lacks a complete set of truth values because its semantics, the Brouwer–Heyting–
Kolmogorov interpretation, is specified in terms of provability conditions, and not directly in terms of the necessary
truth of formulae.
But even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics.
The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, in contrast to the Boolean algebra
semantics of classical propositional calculus.
40.6 See also
• Bayesian probability
• Circular reasoning
• Degree of truth
• False dilemma
• Paradox
• Slingshot argument
• Supervaluationism
• Truth-value semantics
• Verisimilitude
40.7 References
[1] Proof that intuitionistic logic has no third truth value, Glivenko 1928
Turing degree
In computer science and mathematical logic the Turing degree (named after Alan Turing) or degree of unsolvability
of a set of natural numbers measures the level of algorithmic unsolvability of the set. The concept of Turing degree
is fundamental in computability theory, where sets of natural numbers are often regarded as decision problems. The
Turing degree of a set tells how difficult it is to solve the decision problem associated with the set, that is, to determine
whether an arbitrary number is in the given set.
Two sets are Turing equivalent if they have the same level of unsolvability; each Turing degree is a collection of
Turing equivalent sets, so that two sets are in different Turing degrees exactly when they are not Turing equivalent.
Furthermore, the Turing degrees are partially ordered so that if the Turing degree of a set X is less than the Turing
degree of a set Y then any (noncomputable) procedure that correctly decides whether numbers are in Y can be
effectively converted to a procedure that correctly decides whether numbers are in X. It is in this sense that the Turing
degree of a set corresponds to its level of algorithmic unsolvability.
The Turing degrees were introduced by Emil Leon Post (1944), and many fundamental results were established by
Stephen Cole Kleene and Post (1954). The Turing degrees have been an area of intense research since then. Many
proofs in the area make use of a proof technique known as the priority method.
For the rest of this article, the word set will refer to a set of natural numbers. A set X is said to be Turing reducible
to a set Y if there is an oracle Turing machine that decides membership in X when given an oracle for membership
in Y. The notation X ≤T Y indicates that X is Turing reducible to Y.
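As a degenerate illustration of the idea (using computable sets, so it says nothing about genuine unsolvability), here is a Python sketch in which an oracle for Y is packaged as a function and used to decide a set X, witnessing X ≤T Y; all names are ours:

```python
# Toy illustration of Turing reducibility: the complement of Y is
# Turing reducible to Y, because a procedure with oracle access to Y
# decides membership in the complement directly.
def reduce_complement(oracle):        # oracle: n -> bool, decides Y
    return lambda n: not oracle(n)    # decides X = complement of Y

Y = lambda n: n % 2 == 0              # Y = the even numbers (for illustration)
X = reduce_complement(Y)              # X = the odd numbers, so X <=_T Y
assert X(3) and not X(4)
```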
Two sets X and Y are defined to be Turing equivalent if X is Turing reducible to Y and Y is Turing reducible to X.
The notation X ≡T Y indicates that X and Y are Turing equivalent. The relation ≡T can be seen to be an equivalence
relation, which means that for all sets X, Y, and Z:
• X ≡T X
• X ≡T Y implies Y ≡T X
• If X ≡T Y and Y ≡T Z then X ≡T Z.
A Turing degree is an equivalence class of the relation ≡T. The notation [X] denotes the equivalence class containing
a set X. The entire collection of Turing degrees is denoted D .
The Turing degrees have a partial order ≤ defined so that [X] ≤ [Y] if and only if X ≤T Y. There is a unique Turing
degree containing all the computable sets, and this degree is less than every other degree. It is denoted 0 (zero)
because it is the least element of the poset D . (It is common to use boldface notation for Turing degrees, in order to
distinguish them from sets. When no confusion can occur, such as with [X], the boldface is not necessary.)
41.2 Basic properties of the Turing degrees
For any sets X and Y, the join of X and Y, written X ⊕ Y, is defined to be the union of the sets {2n : n ∈ X} and {2m+1 : m ∈
Y}. The Turing degree of X ⊕ Y is the least upper bound of the degrees of X and Y. Thus D is a join-semilattice.
The least upper bound of degrees a and b is denoted a ∪ b. It is known that D is not a lattice, as there are pairs of
degrees with no greatest lower bound.
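The interleaving that defines the join is easy to make concrete; in this Python sketch (our own, on finite sets for illustration), each of X and Y is recovered from X ⊕ Y, which is why the join computes both:

```python
def join(X, Y):
    # X ⊕ Y places X on the even numbers and Y on the odd numbers,
    # so each of X and Y is computable from the join.
    return {2 * n for n in X} | {2 * m + 1 for m in Y}

A = {0, 2, 4}
B = {1, 3}
J = join(A, B)            # {0, 3, 4, 7, 8}
# Recover both halves from the join:
assert {n // 2 for n in J if n % 2 == 0} == A
assert {n // 2 for n in J if n % 2 == 1} == B
```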
For any set X the notation X′ denotes the set of indices of oracle machines that halt when using X as an oracle. The
set X′ is called the Turing jump of X. The Turing jump of a degree [X] is defined to be the degree [X′]; this is a
valid definition because X′ ≡T Y′ whenever X ≡T Y. A key example is 0′, the degree of the halting problem.
• For each degree a, the set of degrees below a is at most countable. The set of degrees greater than a has size
2^ℵ0 .
• There are pairs of degrees with no greatest lower bound. Thus D is not a lattice.
• Every countable partially ordered set can be embedded in the Turing degrees.
• For any degree a there is a degree b such that a < b and b′ = a′; such a degree b is called low relative to a.
• Shore and Slaman (1999) showed that the jump operator is definable in the first-order structure of the degrees
with the language ⟨ ≤, =⟩.
• (G. E. Sacks, 1964) The r.e. degrees are dense; between any two r.e. degrees there is a third r.e. degree.
• (A. H. Lachlan, 1966a and C. E. M. Yates, 1966) There are two r.e. degrees with no greatest lower bound in
the r.e. degrees.
• (A. H. Lachlan, 1966a and C. E. M. Yates, 1966) There is a pair of nonzero r.e. degrees whose greatest lower
bound is 0.
• (S. K. Thomason, 1971) Every finite distributive lattice can be embedded into the r.e. degrees. In fact, the
countable atomless Boolean algebra can be embedded in a manner that preserves suprema and infima.
• (A. H. Lachlan and R. I. Soare, 1980) Not all finite lattices can be embedded in the r.e. degrees (via an
embedding that preserves suprema and infima). The following particular lattice cannot be embedded in the r.e.
degrees:
• (A. H. Lachlan, 1966b) There is no pair of r.e. degrees whose greatest lower bound is 0 and whose least upper
bound is 0′. This result is informally called the nondiamond theorem.
• (L. A. Harrington and T. A. Slaman, see Nies, Shore, and Slaman (1998)) The first-order theory of the r.e.
degrees in the language ⟨ 0, ≤, = ⟩ is many-one equivalent to the theory of true first-order arithmetic.
Emil Post studied the r.e. Turing degrees and asked whether there is any r.e. degree strictly between 0 and 0′. The
problem of constructing such a degree (or showing that none exist) became known as Post’s problem. This problem
was solved independently by Friedberg and Muchnik in the 1950s, who showed that these intermediate r.e. degrees
do exist. Their proofs each developed the same new method for constructing r.e. degrees which came to be known
as the priority method. The priority method is now the main technique for establishing results about r.e. sets.
The idea of the priority method for constructing an r.e. set X is to list a countable sequence of requirements that X
must satisfy. For example, to construct an r.e. set X between 0 and 0′ it is enough to satisfy the requirements Ae and
Be for each natural number e, where Ae requires that the oracle machine with index e does not compute 0′ from X
and Be requires that the Turing machine with index e (and no oracle) does not compute X. These requirements are
put into a priority ordering, which is an explicit bijection of the requirements and the natural numbers. The proof
proceeds inductively with one stage for each natural number; these stages can be thought of as steps of time during
which the set X is enumerated. At each stage, numbers may be put into X or forever prevented from entering X in an
attempt to satisfy requirements (that is, force them to hold once all of X has been enumerated). Sometimes, a number
can be enumerated into X to satisfy one requirement but doing this would cause a previously satisfied requirement to
become unsatisfied (that is, to be injured). The priority order on requirements is used to determine which requirement
to satisfy in this case. The informal idea is that if a requirement is injured then it will eventually stop being injured after
all higher priority requirements have stopped being injured, although not every priority argument has this property.
An argument must be made that the overall set X is r.e. and satisfies all the requirements. Priority arguments can be
used to prove many facts about r.e. sets; the requirements used and the manner in which they are satisfied must be
carefully chosen to produce the required result.
41.7 References
• Cutland, N. Computability. Cambridge University Press, Cambridge-New York, 1980. ISBN 0-521-22384-9;
ISBN 0-521-29465-7
• Odifreddi, P. G. (1989), Classical Recursion Theory, Studies in Logic and the Foundations of Mathematics
125, Amsterdam: North-Holland, ISBN 978-0-444-87295-1, MR 982269
• Odifreddi, P. G. (1999), Classical recursion theory. Vol. II, Studies in Logic and the Foundations of Mathe-
matics 143, Amsterdam: North-Holland, ISBN 978-0-444-50205-6, MR 1718169
• Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1;
ISBN 0-07-053522-1
• Sacks, Gerald E. Degrees of Unsolvability (Annals of Mathematics Studies), Princeton University Press. ISBN
978-0691079417
• Shore, R. The theories of the T, tt, and wtt r.e. degrees: undecidability and beyond. Proceedings of the IX
Latin American Symposium on Mathematical Logic, Part 1 (Bahía Blanca, 1992), 61–70, Notas Lógica Mat.,
38, Univ. Nac. del Sur, Bahía Blanca, 1993.
• Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag,
Berlin, 1987. ISBN 3-540-15299-7
• Soare, Robert I. Recursively enumerable sets and degrees. Bull. Amer. Math. Soc. 84 (1978), no. 6, 1149–
1181. MR 508451
• Lachlan, A.H. (1966a), “Lower Bounds for Pairs of Recursively Enumerable Degrees”, Proceedings of the
London Mathematical Society 3 (1): 537, doi:10.1112/plms/s3-16.1.537.
• Lachlan, A.H. (1966b), “The impossibility of finding relative complements for recursively enumerable de-
grees”, J. Symb. Logic 31 (3): 434–454, doi:10.2307/2270459, JSTOR 2270459.
• Lachlan, A.H.; Soare, R.I. (1980), “Not every finite lattice is embeddable in the recursively enumerable de-
grees”, Advances in Math 37: 78–82, doi:10.1016/0001-8708(80)90027-4.
• Nies, André; Shore, Richard A.; Slaman, Theodore A. (1998), “Interpretability and definability in the recur-
sively enumerable degrees”, Proceedings of the London Mathematical Society 77 (2): 241–291, doi:10.1112/S002461159800046X
ISSN 0024-6115, MR 1635141
• Post, Emil L. (1944), “Recursively enumerable sets of positive integers and their decision problems”, Bulletin
of the American Mathematical Society 50 (5): 284–316, doi:10.1090/S0002-9904-1944-08111-1, ISSN 0002-
9904, MR 0010514
• Sacks, G.E. (1964), “The recursively enumerable degrees are dense”, Annals of Mathematics, Second Series
80 (2): 300–312, doi:10.2307/1970393, JSTOR 1970393.
• Shore, Richard A.; Slaman, Theodore A. (1999), “Defining the Turing jump”, Mathematical Research Letters
6: 711–722, doi:10.4310/mrl.1999.v6.n6.a10, ISSN 1073-2780, MR 1739227
• Simpson, Stephen G. (1977), “First-order theory of the degrees of recursive unsolvability”, Annals of Math-
ematics, Second Series 105 (1): 121–139, doi:10.2307/1971028, ISSN 0003-486X, JSTOR 1971028, MR
0432435
• Thomason, S.K. (1971), “Sublattices of the recursively enumerable degrees”, Z. Math. Logik Grundlag. Math.
17: 273–280, doi:10.1002/malq.19710170131.
• Yates, C.E.M. (1966), “A minimal pair of recursively enumerable degrees”, J. Symbolic Logic 31 (2): 159–168,
doi:10.2307/2269807, JSTOR 2269807.
Chapter 42
Universal algebra
Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures
themselves, not examples (“models”) of algebraic structures. For instance, rather than take particular groups as the
object of study, in universal algebra one takes “the theory of groups” as an object of study.
42.1.1 Equations
After the operations have been specified, the nature of the algebra can be further limited by axioms, which in universal
algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary
operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. The axiom is intended to hold for all elements x, y,
and z of the set A.
42.2 Varieties
Main article: Variety (universal algebra)
An algebraic structure that can be defined by identities is called a variety, and these are sufficiently important that
some authors consider varieties the only object of study in universal algebra, while others consider them merely one
important object of study.
Restricting one’s study to varieties rules out:
• Predicate logic, notably quantification, including universal quantification ( ∀ ), except before an equation, and
existential quantification ( ∃ )
• All relations except equality, in particular inequalities, both a ≠ b and order relations
In this narrower definition, universal algebra can be seen as a special branch of model theory, typically dealing with
structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality),
and in which the language used to talk about these structures uses equations only.
Not all algebraic structures in a wider sense fall into this scope. For example ordered groups are not studied in
mainstream universal algebra because they involve an ordering relation.
A more fundamental restriction is that universal algebra cannot study the class of fields, because there is no type
(a.k.a. signature) in which all field laws can be written as equations (inverses of elements are defined for all non-zero
elements in a field, so inversion cannot simply be added to the type).
One advantage of this restriction is that the structures studied in universal algebra can be defined in any category that
has finite products. For example, a topological group is just a group in the category of topological spaces.
42.2.1 Examples
Most of the usual algebraic systems of mathematics are examples of varieties, but not always in an obvious way – the
usual definitions often involve quantification or inequalities.
Groups
To see how this works, let’s consider the definition of a group. Normally a group is defined in terms of a single binary
operation ∗, subject to these axioms:
(Some authors also use an axiom called "closure", stating that x ∗ y belongs to the set A whenever x and y do. But
from a universal algebraist’s point of view, that is already implied by calling ∗ a binary operation.)
This definition of a group is problematic from the point of view of universal algebra. The reason is that the axioms of
the identity element and inversion are not stated purely in terms of equational laws but also have clauses involving the
phrase “there exists ... such that ...”. This is inconvenient; the list of group properties can be simplified to universally
quantified equations by adding a nullary operation e and a unary operation ~ in addition to the binary operation ∗.
Then list the axioms for these three operations as follows:
• Associativity: x ∗ (y ∗ z) = (x ∗ y) ∗ z.
• Identity element: e ∗ x = x = x ∗ e; formally: ∀x. e ∗ x = x = x ∗ e.
• Inverse element: x ∗ (~x) = e = (~x) ∗ x; formally: ∀x. x ∗ (~x) = e = (~x) ∗ x.
(Of course, we usually write "x−1 " instead of "~x", which shows that the notation for operations of low arity is not
always as given in the second paragraph.)
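Because all three laws are now universally quantified equations, they can be checked mechanically on any concrete finite carrier. A minimal sketch (the carrier Z/5 and the operation names are our own illustrative choices):

```python
# A group in the universal-algebra style: a carrier set plus three
# operations of arities 2, 1 and 0, with the three equational laws
# verified by exhaustive search over the (finite) carrier.

n = 5
carrier = range(n)

op  = lambda x, y: (x + y) % n   # binary operation *
inv = lambda x: (-x) % n         # unary operation ~
e   = 0                          # nullary operation (a constant)

# Associativity: x * (y * z) = (x * y) * z
assert all(op(x, op(y, z)) == op(op(x, y), z)
           for x in carrier for y in carrier for z in carrier)

# Identity: e * x = x = x * e
assert all(op(e, x) == x == op(x, e) for x in carrier)

# Inverse: x * ~x = e = ~x * x
assert all(op(x, inv(x)) == e == op(inv(x), x) for x in carrier)

print("all three equational laws hold for Z/5")
```

The same exhaustive check works for any finite algebra of this signature; only the three lambdas change.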
To summarize, the usual definition has:
• a single binary operation (signature (2));
• 1 equational law (associativity);
• 2 quantified laws (identity and inverse).
The universal-algebra definition instead has:
• 3 operations: one binary, one unary, and one nullary (signature (2,1,0));
• 3 equational laws (associativity, identity, and inverse);
• no quantified laws (apart from the outermost universal quantifiers, which are allowed in varieties).
It is important to check that this really does capture the definition of a group. The reason that it might not is that
specifying one of these universal groups might give more information than specifying one of the usual kind of group.
After all, nothing in the usual definition said that the identity element e was unique; if there is another identity element
e', then it is ambiguous which one should be the value of the nullary operator e. Proving that it is unique is a common
beginning exercise in classical group theory textbooks. The same thing is true of inverse elements. So, the universal
algebraist’s definition of a group is equivalent to the usual definition.
At first glance this is simply a technical difference, replacing quantified laws with equational laws. However, it has
immediate practical consequences – when defining a group object in category theory, where the object in question
may not be a set, one must use equational laws (which make sense in general categories), and cannot use quantified
laws (which do not make sense, as objects in general categories do not have elements). Further, the perspective
of universal algebra insists not only that the inverse and identity exist, but that they be maps in the category. The
basic example is of a topological group – not only must the inverse exist element-wise, but the inverse map must be
continuous (some authors also require the identity map to be a closed inclusion, hence cofibration, again referring to
properties of the map).
One can also consider structures in which the operations are only partially defined, typical examples being categories and groupoids. This leads on to the subject of higher-dimensional
algebra which can be defined as the study of algebraic theories with partial operations whose domains are defined under
geometric conditions. Notable examples of these are various forms of higher-dimensional categories and groupoids.
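As a toy illustration of such a partial operation (our own sketch, not an example from the text): composition of arrows in a category is defined only when the codomain of one arrow matches the domain of the next.

```python
# A partial operation: arrow composition in a category is defined only
# when cod(f) == dom(g), i.e. the domain condition restricts where the
# operation applies.

from collections import namedtuple

Arrow = namedtuple("Arrow", ["dom", "cod", "name"])

def compose(g, f):
    """g after f; partial: defined only when cod(f) == dom(g)."""
    if f.cod != g.dom:
        return None  # undefined outside the domain condition
    return Arrow(f.dom, g.cod, f"{g.name}∘{f.name}")

f = Arrow("A", "B", "f")
g = Arrow("B", "C", "g")
h = Arrow("C", "D", "h")

print(compose(g, f))  # defined: Arrow(dom='A', cod='C', name='g∘f')
print(compose(f, g))  # undefined: None

# Composition is associative wherever both sides are defined:
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
```

The equational laws of such a structure hold only conditionally, on the subset of tuples where the operation is defined, which is exactly what takes it outside classical universal algebra.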
42.6 Generalizations
A more generalised programme along these lines is carried out by category theory. Given a list of operations and
axioms in universal algebra, the corresponding algebras and homomorphisms are the objects and morphisms of a
category. Category theory applies to many situations where universal algebra does not, extending the reach of the
theorems. Conversely, many theorems that hold in universal algebra do not generalise all the way to category theory.
Thus both fields of study are useful.
A more recent development in category theory that generalizes operations is operad theory – an operad is a set of
operations, similar to a universal algebra.
Another development is partial algebra where the operators can be partial functions.
42.7 History
In Alfred North Whitehead's book A Treatise on Universal Algebra, published in 1898, the term universal algebra
had essentially the same meaning that it has today. Whitehead credits William Rowan Hamilton and Augustus De
Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself.[1]
At the time structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic
structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: “The main idea of
the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but
rather the comparative study of their several structures.” At the time George Boole's algebra of logic made a strong
counterpoint to ordinary number algebra, so the term “universal” served to calm strained sensibilities.
Whitehead’s early work sought to unify quaternions (due to Hamilton), Grassmann's Ausdehnungslehre, and Boole’s
algebra of logic. Whitehead wrote in his book:
“Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative
study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic
symbolism in particular. The comparative study necessarily presupposes some previous separate study,
comparison being impossible without knowledge.” [2]
Whitehead, however, had no results of a general nature. Work on the subject was minimal until the early 1930s,
when Garrett Birkhoff and Øystein Ore began publishing on universal algebras. Developments in metamathematics
and category theory in the 1940s and 1950s furthered the field, particularly the work of Abraham Robinson, Alfred
Tarski, Andrzej Mostowski, and their students (Brainerd 1967).
In the period between 1935 and 1950, most papers were written along the lines suggested by Birkhoff’s papers, dealing
with free algebras, congruence and subalgebra lattices, and homomorphism theorems. Although the development of
mathematical logic had made applications to algebra possible, they came about slowly; results published by Anatoly
Maltsev in the 1940s went unnoticed because of the war. Tarski’s lecture at the 1950 International Congress of
Mathematicians in Cambridge ushered in a new period in which model-theoretic aspects were developed, mainly by
Tarski himself, as well as C.C. Chang, Leon Henkin, Bjarni Jónsson, Roger Lyndon, and others.
In the late 1950s, Edward Marczewski[3] emphasized the importance of free algebras, leading to the publication of
more than 50 papers on the algebraic theory of free algebras by Marczewski himself, together with Jan Mycielski,
Władysław Narkiewicz, Witold Nitka, J. Płonka, S. Świerczkowski, K. Urbanik, and others.
42.9 Footnotes
[1] Grätzer, George. Universal Algebra, Van Nostrand Co., Inc., 1968, p. v.
[2] Quoted in Grätzer, George. Universal Algebra, Van Nostrand Co., Inc., 1968.
[3] Marczewski, E. “A general scheme of the notions of independence in mathematics.” Bull. Acad. Polon. Sci. Ser. Sci.
Math. Astronom. Phys. 6 (1958), 731–736.
42.10 References
• Bergman, George M., 1998. An Invitation to General Algebra and Universal Constructions (pub. Henry Helson,
15 the Crescent, Berkeley CA, 94708) 398 pp. ISBN 0-9655211-4-1.
• Birkhoff, Garrett, 1946. Universal algebra. Comptes Rendus du Premier Congrès Canadien de Mathématiques,
University of Toronto Press, Toronto, pp. 310–326.
• Brainerd, Barron, Aug–Sep 1967. Review of Universal Algebra by P. M. Cohn. American Mathematical
Monthly, 74(7): 878–880.
• Burris, Stanley N., and H.P. Sankappanavar, 1981. A Course in Universal Algebra. Springer-Verlag. ISBN 3-540-90578-2. Free online edition.
• Cohn, Paul Moritz, 1981. Universal Algebra. Dordrecht, Netherlands: D. Reidel Publishing. ISBN 90-277-1213-1. (First published in 1965 by Harper & Row.)
• Freese, Ralph, and Ralph McKenzie, 1987. Commutator Theory for Congruence Modular Varieties, 1st ed.
London Mathematical Society Lecture Note Series, 125. Cambridge Univ. Press. ISBN 0-521-34832-3. Free
online second edition.
• Grätzer, George, 1968. Universal Algebra D. Van Nostrand Company, Inc.
• Higgins, P. J. Groups with multiple operators. Proc. London Math. Soc. (3) 6 (1956), 366–416.
• Higgins, P.J., Algebras with a scheme of operators. Mathematische Nachrichten (27) (1963) 115–132.
• Hobby, David, and Ralph McKenzie, 1988. The Structure of Finite Algebras American Mathematical Society.
ISBN 0-8218-3400-2. Free online edition.
• Jipsen, Peter, and Henry Rose, 1992. Varieties of Lattices, Lecture Notes in Mathematics 1533. Springer
Verlag. ISBN 0-387-56314-8. Free online edition.
• Pigozzi, Don. General Theory of Algebras.
• Smith, J.D.H., 1976. Mal'cev Varieties, Springer-Verlag.
• Whitehead, Alfred North, 1898. A Treatise on Universal Algebra, Cambridge. (Mainly of historical interest.)
Watchduck, Hans Adler, Wernhervonbraun, MrVanBot, CarsracBot, AndersBot, FiriBot, Tripsone, Meisam, Legobot, Luckas-bot, Max-
damantus, Charlatino, MauritsBot, Xqbot, Ruy Pugliesi, FrescoBot, RedBot, Gamewizard71, Dinamik-bot, EmausBot, Matthewbeckler,
2andrewknyazev, Pengkeu, ClueBot NG, Masssly, Scwarebang, PhnomPencil, CarrieVS, Fuebar, Lemnaminor and Anonymous: 86
42.12. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 269
Zero sharp, CBM, Cydebot, JAnDbot, R'n'B, VolkovBot, Kyle the bot, Jamelan, Dddenton, Addbot, Luckas-bot, ArthurBot, FrescoBot,
Abradoks, EmausBot, Mentibot, Bomazi, Helpful Pixie Bot, Palmik~enwiki and Anonymous: 4
• Post–Turing machine Source: https://en.wikipedia.org/wiki/Post%E2%80%93Turing_machine?oldid=574352346 Contributors: Zun-
dark, The Anome, Michael Hardy, Ahoerstemeier, Palfrey, Altenmann, Giftlite, Rich Farmbrough, Night Gyr, Pearle, Gene Nygaard,
Casey Abell, Trovatore, R.e.s., SmackBot, Jushi, Mhss, Robth, Wvbailey, Urebelscum, CRGreathouse, CBM, ShelfSkewed, Aruffo,
A.M.L., XyBot, Simuloid, AlleborgoBot, Mild Bill Hiccup, StevenDH, Addbot, Yobot, Pcap, AnomieBOT, Noq, AdjustShift, Leventov,
Calcyman, John of Reading, WikitanvirBot, Amr.rs, Jay8g, A.kernitsky, CarrieVS and Anonymous: 9
• Propositional calculus Source: https://en.wikipedia.org/wiki/Propositional_calculus?oldid=668343718 Contributors: The Anome, Tar-
quin, Jan Hidders, Tzartzam, Michael Hardy, JakeVortex, Kku, Justin Johnson, Minesweeper, Looxix~enwiki, AugPi, Rossami, Ev-
ercat, BAxelrod, Charles Matthews, Dysprosia, Hyacinth, UninvitedCompany, BobDrzyzgula, Robbot, Benwing, MathMartin, Rorro,
GreatWhiteNortherner, Marc Venot, Ancheta Wis, Giftlite, Lethe, Jason Quinn, Gubbubu, Gadfium, LiDaobing, Grauw, Almit39, Ku-
tulu, Creidieki, Urhixidur, PhotoBox, EricBright, Extrapiramidale, Rich Farmbrough, Guanabot, FranksValli, Paul August, Glenn Willen,
Elwikipedista~enwiki, Tompw, Chalst, BrokenSegue, Cmdrjameson, Nortexoid, Varuna, Red Winged Duck, ABCD, Xee, Nightstallion,
Bookandcoffee, Oleg Alexandrov, Japanese Searobin, Joriki, Linas, Mindmatrix, Ruud Koot, Trevor Andersen, Waldir, Graham87, Qw-
ertyus, Kbdank71, Porcher, Koavf, PlatypeanArchcow, Margosbot~enwiki, Kri, Gareth E Kegg, Roboto de Ajvol, Hairy Dude, Russell C.
Sibley, Gaius Cornelius, Ihope127, Rick Norwood, Trovatore, TechnoGuyRob, Jpbowen, Cruise, Voidxor, Jerome Kelly, Arthur Rubin,
Reyk, Teply, GrinBot~enwiki, SmackBot, Michael Meyling, Imz, Incnis Mrsi, Srnec, Mhss, Bluebot, Cybercobra, Jon Awbrey, Andeggs,
Ohconfucius, Lambiam, Wvbailey, Scientizzle, Loadmaster, Mets501, Pejman47, JulianMendez, Adriatikus, Zero sharp, JRSpriggs,
George100, Harold f, Vaughan Pratt, CBM, ShelfSkewed, Sdorrance, Gregbard, Cydebot, Julian Mendez, Taneli HUUSKONEN, Ap-
plemeister, GeePriest, Salgueiro~enwiki, JAnDbot, Thenub314, Hut 8.5, Magioladitis, Paroswiki, MetsBot, JJ Harrison, Epsilon0, San-
tiago Saint James, R'n'B, N4nojohn, Wideshanks, TomS TDotO, Created Equal, The One I Love, Our Fathers, STBotD, Mistercupcake,
VolkovBot, JohnBlackburne, TXiKiBoT, Lynxmb, The Tetrast, Philogo, Wikiisawesome, General Reader, Jmath666, VanishedUser-
ABC, Sapphic, Newbyguesses, SieBot, Iamthedeus, Дарко Максимовић, Jimmycleveland, OKBot, Svick, Huku-chan, Francvs, Clue-
Bot, Unica111, Wysprgr2005, Garyzx, Niceguyedc, Thinker1221, Shivakumar2009, Estirabot, Alejandrocaro35, Reuben.cornel, Hans
Adler, MilesAgain, Djk3, Lightbearer, Addbot, Rdanneskjold, Legobot, Yobot, Tannkrem, Stefan.vatev, Jean Santeuil, AnomieBOT,
Materialscientist, Ayda D, Doezxcty, Cwchng, Omnipaedista, SassoBot, January2009, Thehelpfulbot, FrescoBot, LucienBOT, Xenfreak,
HRoestBot, Dinamik-bot, EmausBot, John of Reading, 478jjjz, Chharvey, Chewings72, Bomazi, Tijfo098, MrKoplin, Frietjes, Help-
ful Pixie Bot, Brad7777, Wolfmanx122, Hanlon1755, Jochen Burghardt, Mark viking, Mrellisdee, Christian Nassif-Haynes, Matthew
Kastor, Marco volpe, Jwinder47, Mario Castelán Castro, Eavestn, SiriusGR and Anonymous: 148
• Recursively enumerable set Source: https://en.wikipedia.org/wiki/Recursively_enumerable_set?oldid=654867810 Contributors: Michael
Hardy, Hyacinth, Aleph4, Hmackiernan, Kiwibird, MathMartin, Dissident, Neilc, Satyadev, Bender235, MKI, Hackwrench, Caesura,
Amelio Vázquez, ByteBaron, Mathbot, NekoDaemon, BMF81, Roboto de Ajvol, YurikBot, Archelon, Trovatore, Jpbowen, DYLAN
LENNON~enwiki, That Guy, From That Show!, SmackBot, Pkirlin, Eskimbot, Mhss, Benkovsky, Viebel, Henning Makholm, Zde~enwiki,
Mets501, Zero sharp, JRSpriggs, CBM, Gregbard, Cydebot, Julian Mendez, Epbr123, Hermel, David Eppstein, Jamelan, Frank Stephan,
Justin W Smith, Wanderer57, Addbot, Pcap, Omnipaedista, VladimirReshetnikov, Per Ardua, O.fasching.logic.at, Pritish.kamath, Emaus-
Bot, WikitanvirBot, ZéroBot, Mariannealexandrino, Jochen Burghardt, Verdana Bold, DigitalRunes and Anonymous: 24
• Semi-Thue system Source: https://en.wikipedia.org/wiki/Semi-Thue_system?oldid=659670210 Contributors: Stephan Schulz, Math-
Martin, Thv, Woohookitty, Linas, SDC, Chris Pressey, NekoDaemon, Gurch, YurikBot, Dobromila, R.e.s., Ott2, Nielses, PhS, SmackBot,
Nejko, Wvbailey, The Real Marauder, R'n'B, HiEv, Stereotype441, Ivan Štambuk, Bmcm, PixelBot, Addbot, Yobot, Pcap, FrescoBot,
SchreyP, Warchiw, Tijfo098, Snotbot, Jochen Burghardt and Anonymous: 7
• Sheffer stroke Source: https://en.wikipedia.org/wiki/Sheffer_stroke?oldid=659255469 Contributors: AxelBoldt, Fubar Obfusco, David
spector, Vik-Thor, Michael Hardy, AugPi, Jouster, Dcoetzee, Dysprosia, Markhurd, Hyacinth, Cameronc, Johnleemk, Robbot, Saaska,
Rorro, Paul Murray, Snobot, Giftlite, DocWatson42, Brouhaha, Zigger, Gubbubu, Halo, Sam, Urhixidur, Ratiocinate, Rich Farmbrough,
Leibniz, Pie4all88, TheJames, SocratesJedi, Paul August, Chalst, EmilJ, Nortexoid, Redfarmer, Emvee~enwiki, Dominic, Bookandcoffee,
Drakferion, Woohookitty, Mindmatrix, Steven Luo, Ruud Koot, Wayward, BD2412, Qwertyus, Kbdank71, R.e.b., Ademkader, FlaBot,
Mathbot, George Leung, Algebraist, RobotE, Sceptre, Imagist, Archelon, Ksyrie, NormalAsylum, Dijxtra, Trovatore, Nad, Yahya Abdal-
Aziz, Prolineserver, JMRyan, Rohanmittal, Luethi, JoanneB, SmackBot, Melchoir, Mhss, Chris the speller, Bluebot, Thumperward, UU,
Cybercobra, Jon Awbrey, Lambiam, Loadmaster, Yoderj, CBM, Ezrakilty, Gregbard, Nilfanion, Rotiro, Cydebot, Julian Mendez, Ase-
nine, SpK, Royas, MER-C, Magioladitis, VoABot II, Vujke, Seba5618, Santiago Saint James, Kloisiie, Olmsfam, Somejan, Josephholsten,
The Tetrast, Philogo, Manusharma, Jamelan, Inductiveload, Dogah, CultureDrone, Francvs, ClueBot, Plastikspork, Achlaug, Watchduck,
Dspark76, Hans Adler, Addbot, Meisam, Bunnyhop11, TaBOT-zerem, Erud, M&M987, Dante Cardoso Pinto de Almeida, LittleWink,
Dhanyavaada, Omerta-ve, Dega180, Gamewizard71, RjwilmsiBot, Arielkoiman, Set theorist, Zahnradzacken, Hpvpp, SporkBot, ClueBot
NG, Masssly, Jones11235813, MerlIwBot, SOFooBah, Yamaha5, Brianlen, SarahRMadden, Sofia Koutsouveli and Anonymous: 65
• Singleton (mathematics) Source: https://en.wikipedia.org/wiki/Singleton_(mathematics)?oldid=665329666 Contributors: AxelBoldt,
Toby Bartels, Patrick, Michael Hardy, Revolver, Charles Matthews, Dysprosia, Fibonacci, Robbot, MathMartin, Merovingian, To-
bias Bergemann, Giftlite, Lethe, Danny Rathjens, Oleg Alexandrov, Dionyziz, Isnow, Jshadias, Salix alba, YurikBot, NawlinWiki,
Reyk, KnightRider~enwiki, Melchoir, Octahedron80, Lambiam, IronGargoyle, Loadmaster, Cydebot, Thijs!bot, Jamelan, SieBot, Svick,
Marc van Leeuwen, Addbot, M-J, Luckas-bot, TaBOT-zerem, Obersachsebot, Noamz, Sabarwolf39, RjwilmsiBot, EmausBot, Widr,
MKKowalczyk, Mark viking, Monkbot, GeoffreyT2000 and Anonymous: 16
• Subset Source: https://en.wikipedia.org/wiki/Subset?oldid=658760669 Contributors: Damian Yerrick, AxelBoldt, Youssefsan, XJaM,
Toby Bartels, StefanRybo~enwiki, Edward, Patrick, TeunSpaans, Michael Hardy, Wshun, Booyabazooka, Ellywa, Oddegg, Andres,
Charles Matthews, Timwi, Hyacinth, Finlay McWalter, Robbot, Romanm, Bkell, 75th Trombone, Tobias Bergemann, Tosha, Giftlite,
Fropuff, Waltpohl, Macrakis, Tyler McHenry, SatyrEyes, Rgrg, Vivacissamamente, Mormegil, EugeneZelenko, Noisy, Deh, Paul Au-
gust, Engmark, Spoon!, SpeedyGonsales, Obradovic Goran, Nsaa, Jumbuck, Raboof, ABCD, Sligocki, Mac Davis, Aquae, LFaraone,
Chamaeleon, Firsfron, Isnow, Salix alba, VKokielov, Mathbot, Harmil, BMF81, Chobot, Roboto de Ajvol, YurikBot, Alpt, Dmharvey,
KSmrq, NawlinWiki, Trovatore, Nick, Szhaider, Wasseralm, Sardanaphalus, Jacek Kendysz, BiT, Gilliam, Buck Mulligan, SMP, Or-
angeDog, Bob K, Dreadstar, Bjankuloski06en~enwiki, Loadmaster, Vedexent, Amitch, Madmath789, Newone, CBM, Jokes Free4Me,
345Kai, SuperMidget, Gregbard, WillowW, MC10, Thijs!bot, Headbomb, Marek69, RobHar, WikiSlasher, Salgueiro~enwiki, JAnDbot,
.anacondabot, Pixel ;-), Pawl Kennedy, Emw, ANONYMOUS COWARD0xC0DE, RaitisMath, JCraw, Tgeairn, Ttwo, Maurice Car-
bonaro, Acalamari, Gombang, NewEnglandYankee, Liatd41, VolkovBot, CSumit, Deleet, Rei-bot, Anonymous Dissident, James.Spudeman,
PaulTanenbaum, InformationSpace, Falcon8765, AlleborgoBot, P3d4nt, NHRHS2010, Garde, Paolo.dL, OKBot, Brennie8, Jons63,
Loren.wilton, ClueBot, GorillaWarfare, PipepBot, The Thing That Should Not Be, DragonBot, Watchduck, Hans Adler, Computer97,
Noosentaal, Versus22, PCHS-NJROTC, Andrew.Flock, Reverb123, Addbot, , Fyrael, PranksterTurtle, Numbo3-bot, Zorrobot, Jar-
ble, JakobVoss, Luckas-bot, Yobot, Synchronism, AnomieBOT, Jim1138, Materialscientist, Citation bot, Martnym, NFD9001, Char-
vest, 78.26, XQYZ, Egmontbot, Rapsar, HRoestBot, Suffusion of Yellow, Agent Smith (The Matrix), RenamedUser01302013, ZéroBot,
Alexey.kudinkin, Chharvey, Quondum, Chewings72, 28bot, ClueBot NG, Wcherowi, Matthiaspaul, Bethre, Mesoderm, O.Koslowski,
AwamerT, Minsbot, Pratyya Ghosh, YFdyh-bot, Ldfleur, ChalkboardCowboy, Saehry, Stephan Kulla, , Ilya23Ezhov, Sandshark23,
Quenhitran, Neemasri, Prince Gull, Maranuel123, Alterseemann, Rahulmr.17 and Anonymous: 179
• Tag system Source: https://en.wikipedia.org/wiki/Tag_system?oldid=647556667 Contributors: Edward, Michael Hardy, Docu, Angela,
Charles Matthews, Doradus, Giftlite, RJHall, Gene Nygaard, Dionyziz, R.e.s., KnightRider~enwiki, Mhss, Chlewbot, JohnnyNyquist,
Wvbailey, Misof, Cydebot, A.M.L., Gwern, Freekh, Niceguyedc, Spitfire, Addbot, Lightbot, PI314r, Lueckless, Ashutosh y0078 and
Anonymous: 4
• Truth table Source: https://en.wikipedia.org/wiki/Truth_table?oldid=667166695 Contributors: Mav, Bryan Derksen, Tarquin, Larry
Sanger, Webmaestro, Heron, Hephaestos, Bdesham, Patrick, Michael Hardy, Wshun, Liftarn, Ixfd64, Justin Johnson, Delirium, Jimf-
bleak, AugPi, Andres, DesertSteve, Charles Matthews, Dcoetzee, Dysprosia, Markhurd, Hyacinth, Pakaran, Banno, Robbot, RedWolf,
Snobot, Ancheta Wis, Giftlite, Lethe, Jason Quinn, Vadmium, Lst27, Antandrus, JimWae, Schnits, Creidieki, Joyous!, Rich Farm-
brough, ArnoldReinhold, Paul August, CanisRufus, Gershwinrb, Robotje, Billymac00, Nortexoid, Jonsafari, Mdd, LutzL, Alansohn,
Gary, Noosphere, Cburnett, Crystalllized, Bookandcoffee, Oleg Alexandrov, Mindmatrix, Bluemoose, Abd, Graham87, BD2412, Kb-
dank71, Xxpor, JVz, Koavf, Tangotango, Bubba73, FlaBot, Maustrauser, Fresheneesz, Aeroknight, Chobot, DVdm, Bgwhite, Yurik-
Bot, Wavelength, SpuriousQ, Trovatore, Sir48, Kyle Barbour, Cheese Sandwich, Pooryorick~enwiki, Rofthorax, CWenger, Leonar-
doRob0t, Cmglee, KnightRider~enwiki, SmackBot, InverseHypercube, KnowledgeOfSelf, Vilerage, Canthusus, The Rhymesmith, Mhss,
Gaiacarra, Can't sleep, clown will eat me, Chlewbot, Cybercobra, Uthren, Gschadow, Jon Awbrey, Antonielly, Nakke, Dr Smith, Parikshit
Narkhede, Beetstra, Dicklyon, Mets501, Richardcook, Danlev, CRGreathouse, CBM, WeggeBot, Gregbard, Slazenger, Starylon, Cyde-
bot, Flowerpotman, Julian Mendez, Lee, Letranova, Oreo Priest, AntiVandalBot, Kitty Davis, Quintote, Vantelimus, K ganju, JAnDbot,
Avaya1, Olaf, Holly golightly, Johnbrownsbody, R27smith200245, Santiago Saint James, Sevillana~enwiki, CZ Top, Aston Martian, On
This Continent, LordAnubisBOT, Bergin, NewEnglandYankee, Policron, Lights, VolkovBot, The Tetrast, AlleborgoBot, Logan, SieBot,
Paradoctor, Krawi, Djayjp, Flyer22, WimdeValk, Francvs, Someone the Person, ParisianBlade, Hans Adler, XTerminator2000, Wstorr,
Vegetator, Aitias, Qwfp, Staticshakedown, Addbot, Ghettoblaster, AgadaUrbanit, Kiril Simeonovski, C933103, Clon89, Luckas-bot,
Yobot, Nallimbot, Fox89, Racconish, Quad4rax, Xqbot, Addihockey10, RibotBOT, Jonesey95, MastiBot, TBloemink, Onel5969, Mean
as custard, ZéroBot, Tijfo098, ChuispastonBot, ClueBot NG, Akuindo, Millermk, WikiPuppies, Jk2q3jrklse, Wbm1058, Bmusician,
Ceklock, Joydeep, Supernerd11, CitationCleanerBot, Annina.kari, Achal Singh, Wolfmanx122, La marts boys, JYBot, ChamithN, Swash-
ski, Aichotoitinhyeu97, KasparBot and Anonymous: 276
• Truth value Source: https://en.wikipedia.org/wiki/Truth_value?oldid=639584536 Contributors: Mav, Toby Bartels, Patrick, Michael
Hardy, TakuyaMurata, Sethmahoney, Charles Matthews, Hyacinth, ErikDunsing, Tobias Bergemann, Giftlite, Rich Farmbrough, Chalst,
EmilJ, Alansohn, SlimVirgin, AndrejBauer, Cyro, Apokrif, BD2412, Josh Parris, Mayumashu, WhyBeNormal, Chobot, RussBot, Trova-
tore, Zwobot, Tomisti, FatherBrain, SmackBot, Incnis Mrsi, Mhss, Octahedron80, Frap, Chlewbot, Matchups, Nakon, Jon Awbrey,
Byelf2007, Wvbailey, Dbtfz, Kuru, Bjankuloski06en~enwiki, Gveret Tered, CRGreathouse, CBM, Simeon, Gregbard, Cydebot, Xcep-
ticZP, Robertinventor, Letranova, Liquid-aim-bot, JAnDbot, Jackmass, Faizhaider, R'n'B, Senu, VolkovBot, LBehounek, Andrewaskew,
T-9000, Francvs, Watchduck, Hans Adler, Addbot, Luckas-bot, Yobot, Zagothal, ArthurBot, Xqbot, Noamz, RibotBOT, FrescoBot,
Pinethicket, Thabick, EmausBot, Rami radwan, Solomonfromfinland, ZéroBot, Card Zero, Tijfo098, Thecameraguy12345678, Kephir,
ArkadiuszGlowaLaskowski, Gronk Oz and Anonymous: 23
• Turing degree Source: https://en.wikipedia.org/wiki/Turing_degree?oldid=657136623 Contributors: AxelBoldt, Michael Hardy, Gabbe,
Ciphergoth, Charles Matthews, MathMartin, Tobias Bergemann, Gro-Tsen, David Johnson, Alfeld, Gauge, Tompw, Chalst, Peter M
Gerdes, Dalf, Rjwilmsi, NekoDaemon, Tdoune, Bgwhite, Hairy Dude, Archelon, Trovatore, Dr.Halfbaked, SmackBot, Selfworm, Mhss,
Zero sharp, RSido, CmdrObot, CBM, Cydebot, Christian75, Shlomi Hillel, David Eppstein, Jeyradan, CBM2, Cab.jones, MystBot,
Addbot, TutterMouse, Luckas-bot, TaBOT-zerem, Kilom691, Mon oncle, Citation bot, Citation bot 1, Suslindisambiguator, BG19bot,
Walber026, Anrnusna, Toanboe and Anonymous: 9
• Universal algebra Source: https://en.wikipedia.org/wiki/Universal_algebra?oldid=653169020 Contributors: AxelBoldt, Bryan Derksen,
Zundark, Andre Engels, Toby~enwiki, Toby Bartels, Youandme, Michael Hardy, GTBacchus, Andres, Revolver, Charles Matthews,
Dysprosia, Aleph4, Robbot, Fredrik, Sanders muc, Kowey, Fuelbottle, Giftlite, APH, Sam Hocevar, AlexChurchill, Zaslav, Tompw,
Rgdboer, EmilJ, AshtonBenson, Msh210, ABCD, Linas, Mindmatrix, Smmurphy, Isnow, Magidin, Jrtayloriv, Wavelength, Hairy Dude,
Chaos, Wiki alf, Arthur Rubin, RonnieBrown, SmackBot, El Fahno, Alink, Nbarth, Zvar, Spakoj~enwiki, Henning Makholm, WillowW,
HStel, Sam Staton, Sadeghd, Rlupsa, Knotwork, JAnDbot, Sean Tilson, Twisted86, David Eppstein, JaGa, Pavel Jelínek, Trusilver,
TheSeven, JohnBlackburne, AllS33ingI, Popopp, Synthebot, Nicks221, SieBot, Sneakfast, Tkeu, Excirial, He7d3r, Hans Adler, Kaba3,
Algebran, Addbot, Delaszk, Loupeter, Legobot, Yobot, Ptbotgourou, AnomieBOT, RibotBOT, Oursipan, Thehelpfulbot, Ilovegroupthe-
ory, D stankov, Yaddie, Rausch, Eivuokko, Nascar1996, GoingBatty, Quondum, ClueBot NG, Bezik, Frietjes, MerlIwBot, Beaumont877,
Brad7777, Duxwing, Jochen Burghardt, NQ, JMP EAX and Anonymous: 60
42.12.2 Images
• File:2D_affine_transformation_matrix.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2c/2D_affine_transformation_
matrix.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Cmglee
• File:2_state_busy_beaver.JPG Source: https://upload.wikimedia.org/wikipedia/commons/4/4d/2_state_busy_beaver.JPG License: CC-
BY-SA-3.0 Contributors:
• Own work
• Model run in Excel, copied to Autosketch, saved as .Jpg
Original artist: Wvbailey (talk) (Uploads)
• File:2_state_busy_beaver_2.JPG Source: https://upload.wikimedia.org/wikipedia/commons/8/89/2_state_busy_beaver_2.JPG License:
CC-BY-SA-3.0 Contributors: Own work Original artist: User:Wvbailey