
Functional completeness

From Wikipedia, the free encyclopedia


Contents

1 Affine transformation 1
1.1 Mathematical definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Alternative definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Augmented matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Affine transformation of the plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Examples of affine transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5.1 Affine transformations over the real numbers . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5.2 Affine transformation over a finite field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5.3 Affine transformation in plane geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2 Arithmetical hierarchy 10
2.1 The arithmetical hierarchy of formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 The arithmetical hierarchy of sets of natural numbers . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Relativized arithmetical hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Arithmetic reducibility and degrees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.5 The arithmetical hierarchy of subsets of Cantor and Baire space . . . . . . . . . . . . . . . . . . . 12
2.6 Extensions and variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.7 Meaning of the notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.8 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.9 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.10 Relation to Turing machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.11 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.12 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

3 Arity 15
3.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.1 Nullary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

3.1.2 Unary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.3 Binary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.4 Ternary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.5 n-ary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.6 Variable arity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2 Other names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.5 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

4 Boolean domain 19
4.1 Generalizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.3 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

5 Boolean expression 21
5.1 Boolean operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.5 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

6 Boolean function 23
6.1 Boolean functions in applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

7 Cardinality of the continuum 25


7.1 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
7.1.1 Uncountability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
7.1.2 Cardinal equalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
7.1.3 Alternative explanation for c = 2^ℵ₀ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
7.2 Beth numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
7.3 The continuum hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
7.4 Sets with cardinality of the continuum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
7.5 Sets with greater cardinality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

8 Charles Sanders Peirce 30


8.1 Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
8.1.1 Early employment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
8.1.2 Johns Hopkins University . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
8.1.3 Poverty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

8.1.4 Slavery, the Civil War and racism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33


8.2 Reception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8.3 Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
8.4 Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
8.4.1 Mathematics of logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
8.4.2 Continua . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.4.3 Probability and statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.5 Philosophy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
8.5.1 Theory of categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
8.5.2 Aesthetics and ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
8.6 Philosophy: logic, or semiotic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
8.6.1 Logic as philosophical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
8.6.2 Signs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8.6.3 Modes of inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.6.4 Pragmatism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
8.7 Philosophy: metaphysics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
8.8 Science of review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
8.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
8.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
8.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

9 Clone (algebra) 59
9.1 Abstract clones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
9.2 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

10 Computability theory 61
10.1 Computable and uncomputable sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
10.2 Turing computability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.3 Areas of research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.3.1 Relative computability and the Turing degrees . . . . . . . . . . . . . . . . . . . . . . . . 63
10.3.2 Other reducibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
10.3.3 Rice’s theorem and the arithmetical hierarchy . . . . . . . . . . . . . . . . . . . . . . . . 64
10.3.4 Reverse mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.3.5 Numberings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.3.6 The priority method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.3.7 The lattice of recursively enumerable sets . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.3.8 Automorphism problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
10.3.9 Kolmogorov complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
10.3.10 Frequency computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
10.3.11 Inductive inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
10.3.12 Generalizations of Turing computability . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.3.13 Continuous computability theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

10.4 Relationships between definability, proof and computability . . . . . . . . . . . . . . . . . . . . . 67


10.5 Name of the subject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.6 Professional organizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
10.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
10.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
10.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
10.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

11 De Morgan’s laws 71
11.1 Formal notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
11.1.1 Substitution form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
11.1.2 Set theory and Boolean algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
11.1.3 Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
11.1.4 Text searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
11.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
11.3 Informal proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
11.3.1 Negation of a disjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
11.3.2 Negation of a conjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
11.4 Formal proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
11.5 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
11.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
11.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
11.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

12 Formal language 79
12.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
12.2 Words over an alphabet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
12.3 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
12.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
12.4.1 Constructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
12.5 Language-specification formalisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
12.6 Operations on languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
12.7 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
12.7.1 Programming languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
12.7.2 Formal theories, systems and proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
12.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
12.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
12.9.1 Citation footnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
12.9.2 General references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
12.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

13 Functional completeness 86

13.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86


13.2 Informal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
13.3 Characterization of functional completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
13.4 Minimal functionally complete operator sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
13.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.6 In other domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.7 Set theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

14 Halting problem 90
14.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
14.2 Importance and consequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
14.3 Representation as a set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
14.4 Sketch of proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
14.5 Proof as a corollary of the uncomputability of Kolmogorov complexity . . . . . . . . . . . . . . . 93
14.6 Common pitfalls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
14.7 Formalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
14.8 Relationship with Gödel’s incompleteness theorems . . . . . . . . . . . . . . . . . . . . . . . . . 94
14.9 Recognizing partial solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
14.10 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
14.11 Avoiding the halting problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
14.12 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
14.13 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
14.14 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
14.15 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

15 List of multiple discoveries 99


15.1 13th century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
15.2 14th century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
15.3 16th century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
15.4 17th century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
15.5 18th century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
15.6 19th century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
15.7 20th century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
15.8 21st century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
15.9 Quotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
15.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
15.11 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
15.12 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
15.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

16 Logic gate 130


16.1 Electronic gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
16.2 Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
16.3 Universal logic gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
16.4 De Morgan equivalent symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
16.5 Data storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
16.6 Three-state logic gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
16.7 History and development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
16.8 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
16.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
16.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
16.11 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

17 Logical biconditional 137


17.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
17.1.1 Truth table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
17.1.2 Venn diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
17.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
17.3 Rules of inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
17.3.1 Biconditional introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
17.3.2 Biconditional elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
17.4 Colloquial usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
17.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
17.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
17.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

18 Logical conjunction 142


18.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
18.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
18.2.1 Truth table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
18.3 Introduction and elimination rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
18.4 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
18.5 Applications in computer engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
18.6 Set-theoretic correspondence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
18.7 Natural language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
18.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
18.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
18.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

19 Logical connective 148


19.1 In language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
19.1.1 Natural language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

19.1.2 Formal languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149


19.2 Common logical connectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
19.2.1 List of common logical connectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
19.2.2 History of notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
19.2.3 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
19.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
19.4 Order of precedence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
19.5 Computer science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
19.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
19.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
19.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
19.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
19.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

20 Logical disjunction 154


20.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
20.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
20.2.1 Truth table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
20.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
20.4 Symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
20.5 Applications in computer science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
20.5.1 Bitwise operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
20.5.2 Logical operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
20.5.3 Constructive disjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
20.6 Union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
20.7 Natural language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
20.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
20.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
20.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
20.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

21 Material conditional 160


21.1 Definitions of the material conditional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
21.1.1 As a truth function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
21.1.2 As a formal connective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
21.2 Formal properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
21.3 Philosophical problems with material conditional . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
21.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
21.4.1 Conditionals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
21.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
21.6 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
21.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

22 Monotonic function 165


22.1 Monotonicity in calculus and analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
22.1.1 Some basic applications and results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
22.2 Monotonicity in Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
22.3 Monotonicity in functional analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
22.4 Monotonicity in order theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
22.5 Monotonicity in the context of search algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
22.6 Boolean functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
22.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
22.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
22.9 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
22.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

23 n-ary group 171


23.1 Axioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
23.1.1 Associativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
23.1.2 Inverses / unique solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
23.1.3 Definition of n-ary group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
23.1.4 Identity / neutral elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
23.1.5 Weaker axioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
23.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
23.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
23.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
23.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

24 Negation 173
24.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
24.2 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
24.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
24.3.1 Double negation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
24.3.2 Distributivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
24.3.3 Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
24.3.4 Self dual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
24.4 Rules of inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
24.5 Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
24.6 Kripke semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
24.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
24.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
24.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
24.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176

25 NOR gate 177



25.1 Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177


25.2 Hardware description and pinout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
25.2.1 Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
25.3 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
25.3.1 Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
25.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
25.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
25.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

26 Post canonical system 181


26.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
26.1.1 Example (well-formed bracket expressions) . . . . . . . . . . . . . . . . . . . . . . . . . 181
26.2 Normal-form theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
26.3 String rewriting systems, Type-0 formal grammars . . . . . . . . . . . . . . . . . . . . . . . . . . 182
26.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

27 Post correspondence problem 183


27.1 Definition of the problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
27.2 Example instances of the problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
27.2.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
27.2.2 Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
27.3 Proof sketch of undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
27.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
27.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
27.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

28 Post’s inversion formula 187


28.1 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
28.2 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

29 Post’s lattice 188


29.1 Basic concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
29.2 Naming of clones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
29.3 Description of the lattice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
29.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
29.5 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
29.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

30 Post’s theorem 193


30.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
30.2 Post’s theorem and corollaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
30.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

31 Post–Turing machine 195



31.1 1936: Post model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195


31.2 1947: Post’s formal reduction of the Turing 5-tuples to 4-tuples . . . . . . . . . . . . . . . . . . . 196
31.3 1954, 1957: Wang model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
31.4 1974: first Davis model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
31.5 1978 second Davis model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
31.6 1994 (2nd Edition) Davis–Sigal–Weyuker’s Post–Turing program model . . . . . . . . . . . . . . 198
31.7 Examples of the Post–Turing machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
31.7.1 Atomizing Turing quintuples into a sequence of Post–Turing instructions . . . . . . . . . . 198
31.7.2 2-state Busy Beaver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
31.7.3 Two state busy beaver followed by “tape cleanup” . . . . . . . . . . . . . . . . . . . . . . 201
31.8 Example: Multiply 3 × 4 with a Post–Turing machine . . . . . . . . . . . . . . . . . . . . . . . . 203
31.9 Footnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
31.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

32 Propositional calculus 206


32.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
32.2 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
32.3 Basic concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
32.3.1 Closure under operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
32.3.2 Argument . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
32.4 Generic description of a propositional calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
32.5 Example 1. Simple axiom system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
32.6 Example 2. Natural deduction system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
32.7 Basic and derived argument forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
32.8 Proofs in propositional calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
32.8.1 Example of a proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
32.9 Soundness and completeness of the rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
32.9.1 Sketch of a soundness proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
32.9.2 Sketch of completeness proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
32.9.3 Another outline for a completeness proof . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
32.10 Interpretation of a truth-functional propositional calculus . . . . . . . . . . . . . . . . . . . . . 216
32.10.1 Interpretation of a sentence of truth-functional propositional logic . . . . . . . . . . . . . . 216
32.11 Alternative calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
32.11.1 Axioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
32.11.2 Inference rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
32.11.3 Meta-inference rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
32.11.4 Example of a proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
32.12 Equivalence to equational logics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
32.13 Graphical calculi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
32.14 Other logical calculi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
32.15 Solvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
32.16 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221

32.16.1 Higher logical levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221


32.16.2 Related topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
32.17 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
32.18 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
32.18.1 Related works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
32.19 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222

33 Recursively enumerable set 223


33.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
33.2 Equivalent formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
33.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
33.4 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
33.5 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
33.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

34 Semi-Thue system 226


34.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
34.2 Thue congruence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
34.3 Factor monoid and monoid presentations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
34.4 The word problem for semi-Thue systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
34.5 Connections with other notions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
34.6 History and importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
34.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
34.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
34.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
34.9.1 Monographs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
34.9.2 Textbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
34.9.3 Surveys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
34.9.4 Landmark papers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229

35 Sheffer stroke 230


35.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
35.1.1 Truth table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
35.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
35.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
35.4 Introduction, elimination, and equivalencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
35.5 Formal system based on the Sheffer stroke . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
35.5.1 Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
35.5.2 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
35.5.3 Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
35.5.4 Simplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
35.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233

35.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233


35.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
35.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

36 Singleton (mathematics) 235


36.1 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
36.2 In category theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
36.3 Definition by indicator functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
36.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
36.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236

37 Subset 237
37.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
37.2 ⊂ and ⊃ symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
37.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
37.4 Other properties of inclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
37.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
37.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
37.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240

38 Tag system 241


38.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
38.1.1 Example: A simple 2-tag illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
38.1.2 Example: Computation of Collatz sequences . . . . . . . . . . . . . . . . . . . . . . . . 242
38.2 Turing-completeness of m-tag systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
38.3 The 2-tag halting problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
38.4 Historical note on the definition of tag system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
38.4.1 Origin of the name “tag” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
38.5 Cyclic tag systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
38.5.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
38.6 Emulation of tag systems by cyclic tag systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
38.6.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
38.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
38.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
38.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

39 Truth table 245


39.1 Unary operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
39.1.1 Logical false . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
39.1.2 Logical identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
39.1.3 Logical negation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
39.1.4 Logical true . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
39.2 Binary operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245

39.2.1 Truth table for all binary logical operators . . . . . . . . . . . . . . . . . . . . . . . . . . 246


39.2.2 Logical conjunction (AND) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
39.2.3 Logical disjunction (OR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
39.2.4 Logical implication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
39.2.5 Logical equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
39.2.6 Exclusive disjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
39.2.7 Logical NAND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
39.2.8 Logical NOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
39.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
39.3.1 Truth table for most commonly used logical operators . . . . . . . . . . . . . . . . . . . . 248
39.3.2 Condensed truth tables for binary operators . . . . . . . . . . . . . . . . . . . . . . . . . 248
39.3.3 Truth tables in digital logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
39.3.4 Applications of truth tables in digital electronics . . . . . . . . . . . . . . . . . . . . . . . 248
39.4 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
39.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
39.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
39.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
39.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
39.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250

40 Truth value 251


40.1 Classical logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
40.2 Intuitionistic and constructive logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
40.3 Multi-valued logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
40.4 Algebraic semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
40.5 In other theories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
40.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
40.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
40.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253

41 Turing degree 254


41.1 Turing equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
41.2 Basic properties of the Turing degrees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
41.3 Structure of the Turing degrees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
41.3.1 Order properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
41.3.2 Properties involving the jump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
41.3.3 Logical properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
41.4 Structure of the r.e. Turing degrees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
41.5 Post’s problem and the priority method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
41.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
41.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
41.7.1 Monographs (undergraduate level) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257

41.7.2 Monographs and survey articles (graduate level) . . . . . . . . . . . . . . . . . . . . . . . 257


41.7.3 Research papers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

42 Universal algebra 259


42.1 Basic idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
42.1.1 Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
42.2 Varieties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
42.2.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
42.3 Basic constructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
42.4 Some basic theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
42.5 Motivations and applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
42.6 Generalizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
42.7 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
42.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
42.9 Footnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
42.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
42.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
42.12 Text and image sources, contributors, and licenses . . . . . . . . . . . . . . . . . . . . . . . . 265
42.12.1 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
42.12.2 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
42.12.3 Content license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Chapter 1

Affine transformation

In geometry, an affine transformation, affine map[1] or an affinity (from the Latin, affinis, “connected with”) is a
function between affine spaces which preserves points, straight lines and planes. Also, sets of parallel lines remain
parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or
distances between points, though it does preserve ratios of distances between points lying on a straight line.
Examples of affine transformations include translation, scaling, homothety, similarity transformation, reflection,
rotation, shear mapping, and compositions of them in any combination and sequence.
If X and Y are affine spaces, then every affine transformation f : X → Y is of the form x ↦ M x + b , where
M is a linear transformation on X and b is a vector in Y . Unlike a purely linear transformation, an affine map
need not preserve the zero point in a linear space. Thus, every linear transformation is affine, but not every affine
transformation is linear.
For many purposes an affine space can be thought of as Euclidean space, though the concept of affine space is far more
general (i.e., all Euclidean spaces are affine, but there are affine spaces that are non-Euclidean). In affine coordinates,
which include Cartesian coordinates in Euclidean spaces, each output coordinate of an affine map is a linear function
(in the sense of calculus) of all input coordinates. Another way to deal with affine transformations systematically is to
select a point as the origin; then, any affine transformation is equivalent to a linear transformation (of position vectors)
followed by a translation.

1.1 Mathematical definition


An affine map[1] f : A → B between two affine spaces is a map on the points that acts linearly on the vectors (that
is, the vectors between points of the space). In symbols, f determines a linear transformation φ such that, for any
pair of points P, Q ∈ A :

\overrightarrow{f(P)\,f(Q)} = φ(\overrightarrow{PQ})

or

f (Q) − f (P ) = φ(Q − P )

We can interpret this definition in a few other ways, as follows.


If an origin O ∈ A is chosen, and B denotes its image f (O) ∈ B , then this means that for any vector ⃗x :

f : (O + ⃗x) ↦ (B + φ(⃗x)).

If an origin O′ ∈ B is also chosen, this can be decomposed as an affine transformation g : A → B that sends O ↦ O′
, namely


An image of a fern-like fractal that exhibits affine self-similarity. Each of the leaves of the fern is related to each other leaf by an
affine transformation. For instance, the red leaf can be transformed into both the small dark blue leaf and the large light blue leaf
by a combination of reflection, rotation, scaling, and translation.

g : (O + ⃗x) ↦ (O′ + φ(⃗x)),



followed by the translation by the vector ⃗b = \overrightarrow{O′B}.
The conclusion is that, intuitively, f consists of a translation and a linear map.

1.1.1 Alternative definition


Given two affine spaces A and B , over the same field, a function f : A → B is an affine map if and only if for every
family {(ai , λi )}i∈I of weighted points in A such that


\sum_{i∈I} λ_i = 1,

we have[2]

f\left(\sum_{i∈I} λ_i a_i\right) = \sum_{i∈I} λ_i f(a_i).

In other words, f preserves barycenters.
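This barycenter-preservation property can be checked numerically. The affine map below (a specific matrix and translation, chosen purely for illustration and not taken from the text) is applied to a weighted family of points in both orders:

```python
def f(p):
    """An arbitrary affine map f(x, y) = A(x, y) + b, chosen for illustration."""
    x, y = p
    return (2.0 * x + 1.0 * y + 5.0, -1.0 * x + 3.0 * y - 2.0)

points = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
weights = [0.2, 0.3, 0.5]          # weights sum to 1

# map the barycenter of the weighted points...
bary = tuple(sum(w * p[i] for w, p in zip(weights, points)) for i in range(2))
lhs = f(bary)

# ...versus taking the barycenter of the mapped points
rhs = tuple(sum(w * f(p)[i] for w, p in zip(weights, points)) for i in range(2))

print(all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs)))  # True
```

Both orders give the same point, as the definition requires.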

1.2 Representation
As shown above, an affine map is the composition of two functions: a translation and a linear map. Ordinary vector
algebra uses matrix multiplication to represent linear maps, and vector addition to represent translations. Formally,
in the finite-dimensional case, if the linear map is represented as a multiplication by a matrix A and the translation as
the addition of a vector ⃗b , an affine map f acting on a vector ⃗x can be represented as

⃗y = f (⃗x) = A⃗x + ⃗b.
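As a minimal sketch in code (the 2×2 rotation matrix and the translation vector below are illustrative choices, not values from the text), this "linear map plus translation" form is:

```python
def affine_apply(A, b, x):
    """Apply the affine map x -> A x + b, with A given as nested lists."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) + b[i]
            for i in range(len(A))]

A = [[0.0, -1.0],
     [1.0,  0.0]]   # 90-degree rotation
b = [2.0, 3.0]      # translation

print(affine_apply(A, b, [1.0, 0.0]))  # [2.0, 4.0]
```

The point (1, 0) is first rotated to (0, 1) and then translated by (2, 3).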

1.2.1 Augmented matrix


Using an augmented matrix and an augmented vector, it is possible to represent both the translation and the linear
map using a single matrix multiplication. The technique requires that all vectors are augmented with a “1” at the end,
and all matrices are augmented with an extra row of zeros at the bottom, an extra column—the translation vector—to
the right, and a “1” in the lower right corner. If A is a matrix,

\begin{bmatrix} \vec{y} \\ 1 \end{bmatrix} = \begin{bmatrix} A & \vec{b} \\ 0 \cdots 0 & 1 \end{bmatrix} \begin{bmatrix} \vec{x} \\ 1 \end{bmatrix}

is equivalent to the following

⃗y = A⃗x + ⃗b.

The above-mentioned augmented matrix is called affine transformation matrix, or projective transformation matrix (as
it can also be used to perform projective transformations).
This representation exhibits the set of all invertible affine transformations as the semidirect product of Kⁿ and GL(n, K). This is a group under the operation of composition of functions, called the affine group.
Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a
translation, in which the origin must necessarily be mapped to some other point. By appending the additional coor-
dinate “1” to every vector, one essentially considers the space to be mapped as a subset of a space with an additional
dimension. In that space, the original space occupies the subset in which the additional coordinate is 1. Thus the

Affine transformations on the 2D plane can be performed in three dimensions. Translation is done by shearing along the z axis, and rotation is performed around the z axis.

origin of the original space can be found at (0,0, ... 0, 1). A translation within the original space by means of a
linear transformation of the higher-dimensional space is then possible (specifically, a shear transformation). The
coordinates in the higher-dimensional space are an example of homogeneous coordinates. If the original space is
Euclidean, the higher dimensional space is a real projective space.
The advantage of using homogeneous coordinates is that one can combine any number of affine transformations into
one by multiplying the respective matrices. This property is used extensively in computer graphics, computer vision
and robotics.
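This matrix-composition idea can be sketched as follows; the helper names and the particular translate-then-scale example are assumptions chosen for illustration:

```python
def augmented(A, b):
    """Build the (n+1) x (n+1) augmented matrix [[A, b], [0..0, 1]]."""
    n = len(b)
    return [A[i] + [b[i]] for i in range(n)] + [[0.0] * n + [1.0]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

# translate by (2, 3), then scale by 2, combined into one matrix
T = augmented([[1.0, 0.0], [0.0, 1.0]], [2.0, 3.0])
S = augmented([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0])
ST = matmul(S, T)   # applies T first, then S

x = [1.0, 1.0, 1.0]  # the point (1, 1) with the homogeneous "1" appended
y = [sum(ST[i][j] * x[j] for j in range(3)) for i in range(3)]
print(y)  # [6.0, 8.0, 1.0]
```

The single matrix ST sends (1, 1) to (3, 4) by the translation and then to (6, 8) by the scaling, in one multiplication.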

Example augmented matrix

If the vectors ⃗x1 , . . . , ⃗xn+1 are a basis of the domain’s projective vector space and if ⃗y1 , . . . , ⃗yn+1 are the corre-
sponding vectors in the codomain vector space then the augmented matrix M that achieves this affine transformation

\begin{bmatrix} \vec{y} \\ 1 \end{bmatrix} = M \begin{bmatrix} \vec{x} \\ 1 \end{bmatrix}
is

M = \begin{bmatrix} \vec{y}_1 & \cdots & \vec{y}_{n+1} \\ 1 & \cdots & 1 \end{bmatrix} \begin{bmatrix} \vec{x}_1 & \cdots & \vec{x}_{n+1} \\ 1 & \cdots & 1 \end{bmatrix}^{-1}

This formulation works irrespective of whether any of the domain, codomain and image vector spaces have the same
number of dimensions.
For example, the affine transformation of a vector plane is uniquely determined from the knowledge of where the
three vertices of a non-degenerate triangle are mapped to.
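This determination can be made concrete: given three non-collinear source points and their images, the linear part and the translation can be recovered by inverting a small matrix of edge vectors. A sketch (the function name and the example data are assumptions for illustration):

```python
def affine_from_triangle(src, dst):
    """Return (A, b) with dst[i] = A @ src[i] + b, for three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = src
    # columns of P are the edge vectors of the source triangle
    P = [[x2 - x1, x3 - x1], [y2 - y1, y3 - y1]]
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]   # nonzero: non-degenerate triangle
    Pinv = [[P[1][1] / det, -P[0][1] / det], [-P[1][0] / det, P[0][0] / det]]
    (u1, v1), (u2, v2), (u3, v3) = dst
    Q = [[u2 - u1, u3 - u1], [v2 - v1, v3 - v1]]   # image edge vectors
    A = [[Q[0][0] * Pinv[0][0] + Q[0][1] * Pinv[1][0],
          Q[0][0] * Pinv[0][1] + Q[0][1] * Pinv[1][1]],
         [Q[1][0] * Pinv[0][0] + Q[1][1] * Pinv[1][0],
          Q[1][0] * Pinv[0][1] + Q[1][1] * Pinv[1][1]]]
    b = [u1 - A[0][0] * x1 - A[0][1] * y1, v1 - A[1][0] * x1 - A[1][1] * y1]
    return A, b

# a translation by (5, 7), recovered from its action on one triangle
A, b = affine_from_triangle([(0, 0), (1, 0), (0, 1)],
                            [(5, 7), (6, 7), (5, 8)])
print(A, b)  # identity linear part, translation (5, 7)
```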

1.3 Properties
An affine transformation preserves:

1. The collinearity relation between points; i.e., points which lie on the same line (called collinear points) continue
to be collinear after the transformation.
2. Ratios of vectors along a line; i.e., for distinct collinear points p₁, p₂, p₃, the ratio of \overrightarrow{p_1 p_2} and \overrightarrow{p_2 p_3} is the same as that of \overrightarrow{f(p_1) f(p_2)} and \overrightarrow{f(p_2) f(p_3)}.
3. More generally barycenters of weighted collections of points.

An affine transformation is invertible if and only if A is invertible. In the matrix representation, the inverse is:

\begin{bmatrix} A^{-1} & -A^{-1}\vec{b} \\ 0 \cdots 0 & 1 \end{bmatrix}
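The inverse formula can be checked numerically: the inverse map has linear part A⁻¹ and translation −A⁻¹⃗b, so applying the map and then its inverse recovers the original point. The specific matrix below is an arbitrary invertible example:

```python
def apply(A, b, x):
    """Apply the 2D affine map x -> A x + b."""
    return [A[0][0] * x[0] + A[0][1] * x[1] + b[0],
            A[1][0] * x[0] + A[1][1] * x[1] + b[1]]

A = [[2.0, 1.0], [1.0, 1.0]]      # invertible, det = 1
b = [3.0, -4.0]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
# the inverse translation is -A^{-1} b
binv = [-(Ainv[0][0] * b[0] + Ainv[0][1] * b[1]),
        -(Ainv[1][0] * b[0] + Ainv[1][1] * b[1])]

x = [1.0, 2.0]
roundtrip = apply(Ainv, binv, apply(A, b, x))
print(roundtrip)  # [1.0, 2.0]
```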

The invertible affine transformations (of an affine space onto itself) form the affine group, which has the general linear
group of degree n as subgroup and is itself a subgroup of the general linear group of degree n + 1.
The similarity transformations form the subgroup where A is a scalar times an orthogonal matrix. For example, if
the affine transformation acts on the plane and if the determinant of A is 1 or −1 then the transformation is an equi-areal mapping. Such transformations form a subgroup called the equi-affine group.[3] A transformation that is both
equi-affine and a similarity is an isometry of the plane taken with Euclidean distance.
Each of these groups has a subgroup of transformations which preserve orientation: those where the determinant of
A is positive. In the last case, in 3D, this is the group of rigid body motions (proper rotations and pure translations).
If there is a fixed point, we can take that as the origin, and the affine transformation reduces to a linear transformation.
This may make it easier to classify and understand the transformation. For example, describing a transformation
as a rotation by a certain angle with respect to a certain axis may give a clearer idea of the overall behavior of the
transformation than describing it as a combination of a translation and a rotation. However, this depends on application
and context.

1.4 Affine transformation of the plane


Affine transformations in two real dimensions include:

• pure translations,
• scaling in a given direction, with respect to a line in another direction (not necessarily perpendicular), combined
with translation that is not purely in the direction of scaling; taking “scaling” in a generalized sense it includes
the cases that the scale factor is zero (projection) or negative; the latter includes reflection, and combined with
translation it includes glide reflection,
• rotation combined with a homothety and a translation,
• shear mapping combined with a homothety and a translation, or

A central dilation. The triangles A1B1Z, A1C1Z, and B1C1Z get mapped to A2B2Z, A2C2Z, and B2C2Z, respectively.

• squeeze mapping combined with a homothety and a translation.

To visualise the general affine transformation of the Euclidean plane, take labelled parallelograms ABCD and A′B′C′D′.
Whatever the choices of points, there is an affine transformation T of the plane taking A to A′, and each vertex sim-
ilarly. Supposing we exclude the degenerate case where ABCD has zero area, there is a unique such affine transfor-
mation T. Drawing out a whole grid of parallelograms based on ABCD, the image T(P) of any point P is determined
by noting that T(A) = A′, T applied to the line segment AB is A′B′, T applied to the line segment AC is A′C′, and T
respects scalar multiples of vectors based at A. [If A, E, F are collinear then the ratio length(AF)/length(AE) is equal
to length(A′F′)/length(A′E′).] Geometrically T transforms the grid based on ABCD to that based on A′B′C′D′.
Affine transformations do not respect lengths or angles; they multiply area by a constant factor:

area of A′B′C′D′ / area of ABCD.

A given T may either be direct (respect orientation), or indirect (reverse orientation), and this may be determined by
its effect on signed areas (as defined, for example, by the cross product of vectors).

1.5 Examples of affine transformations

1.5.1 Affine transformations over the real numbers

Functions f : R → R, f(x) = mx + c with m and c constant, are commonplace affine transformations.

1.5.2 Affine transformation over a finite field

The following equation expresses an affine transformation in GF(2^8):

{a′} = M{a} ⊕ {v},

where M is a fixed matrix and {v} a fixed vector. For instance, the affine transformation of the element {a} = y^7 + y^6 + y^3 + y = {11001010} in big-endian binary notation = {CA} in big-endian hexadecimal notation, is calculated as follows:

a′0 = a0 ⊕ a4 ⊕ a5 ⊕ a6 ⊕ a7 ⊕ 1 = 0 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 1 = 1
a′1 = a0 ⊕ a1 ⊕ a5 ⊕ a6 ⊕ a7 ⊕ 1 = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 1 = 0
a′2 = a0 ⊕ a1 ⊕ a2 ⊕ a6 ⊕ a7 ⊕ 0 = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1
a′3 = a0 ⊕ a1 ⊕ a2 ⊕ a3 ⊕ a7 ⊕ 0 = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1
a′4 = a0 ⊕ a1 ⊕ a2 ⊕ a3 ⊕ a4 ⊕ 0 = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 = 0
a′5 = a1 ⊕ a2 ⊕ a3 ⊕ a4 ⊕ a5 ⊕ 1 = 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1 = 1
a′6 = a2 ⊕ a3 ⊕ a4 ⊕ a5 ⊕ a6 ⊕ 1 = 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 1 = 1
a′7 = a3 ⊕ a4 ⊕ a5 ⊕ a6 ⊕ a7 ⊕ 0 = 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1.
Thus, {a′} = y^7 + y^6 + y^5 + y^3 + y^2 + 1 = {11101101} = {ED}.
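The bit equations above can be sketched directly in code. Each row follows the pattern a′i = ai ⊕ a(i+4) ⊕ a(i+5) ⊕ a(i+6) ⊕ a(i+7) (indices mod 8) with the constant bits 1, 1, 0, 0, 0, 1, 1, 0, i.e. {v} = {01100011} = {63} — the same pattern that appears in the AES S-box construction. The function name is our own:

```python
def gf_affine(a, v=0x63):
    """Affine transformation over GF(2^8):
    a'_i = a_i ^ a_{i+4} ^ a_{i+5} ^ a_{i+6} ^ a_{i+7} ^ v_i (indices mod 8),
    acting on a byte whose bits are a0 (LSB) .. a7 (MSB)."""
    out = 0
    for i in range(8):
        bit = ((a >> i) ^ (a >> ((i + 4) % 8)) ^ (a >> ((i + 5) % 8))
               ^ (a >> ((i + 6) % 8)) ^ (a >> ((i + 7) % 8)) ^ (v >> i))
        out |= (bit & 1) << i
    return out

# Reproduces the worked example: {CA} maps to {ED}.
assert gf_affine(0xCA) == 0xED
```

Note that the zero byte maps to the constant itself, since every ai term vanishes.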

1.5.3 Affine transformation in plane geometry


In ℝ2, the transformation shown at left is accomplished using the map given by:

    [x]   [0 1] [x]   [−100]
    [y] ↦ [2 1] [y] + [−100]
Transforming the three corner points of the original triangle (in red) gives three new points which form the new
triangle (in blue). This transformation skews and translates the original triangle.
In fact, all triangles are related to one another by affine transformations. This is also true for all parallelograms, but
not for all quadrilaterals.
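The map above can be checked numerically. A minimal sketch (the corner points are illustrative, since the original figure is not reproduced here):

```python
A = [[0, 1],
     [2, 1]]            # linear part of the map
b = [-100, -100]        # translation part

def T(x, y):
    """Apply the affine map (x, y) |-> A(x, y) + b."""
    return (A[0][0] * x + A[0][1] * y + b[0],
            A[1][0] * x + A[1][1] * y + b[1])

corners = [(0, 0), (1, 0), (0, 1)]       # hypothetical triangle corners
image = [T(x, y) for x, y in corners]    # skewed and translated triangle
```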

1.6 See also


• The transformation matrix for an affine transformation
• Affine geometry
• 3D projection
• Homography
• Flat (geometry)

1.7 Notes
[1] Berger, Marcel (1987), p. 38.
[2] Schneider, Philip K. & Eberly, David H. (2003). Geometric Tools for Computer Graphics. Morgan Kaufmann. p. 98.
ISBN 978-1-55860-594-7.
[3] Oswald Veblen (1918) Projective Geometry, volume 2, pp. 105–7.

1.8 References
• Berger, Marcel (1987), Geometry I, Berlin: Springer, ISBN 3-540-11658-3
• Nomizu, Katsumi; Sasaki, S. (1994), Affine Differential Geometry (New ed.), Cambridge University Press,
ISBN 978-0-521-44177-3
• Sharpe, R. W. (1997). Differential Geometry: Cartan’s Generalization of Klein’s Erlangen Program. New York:
Springer. ISBN 0-387-94732-9.

A simple affine transformation on the real plane

1.9 External links


• Hazewinkel, Michiel, ed. (2001), “Affine transformation”, Encyclopedia of Mathematics, Springer, ISBN 978-
1-55608-010-4
• Geometric Operations: Affine Transform, R. Fisher, S. Perkins, A. Walker and E. Wolfart.
• Weisstein, Eric W., “Affine Transformation”, MathWorld.
• Affine Transform by Bernard Vuilleumier, Wolfram Demonstrations Project.
• Affine Transformation on PlanetMath
• Free Affine Transformation software
Effect of applying various 2D affine transformation matrices on a unit square (figure). Using homogeneous coordinates, the matrices shown are:

• No change: [1 0 0; 0 1 0; 0 0 1]
• Translate by (X, Y): [1 0 X; 0 1 Y; 0 0 1]
• Scale about origin by (W, H): [W 0 0; 0 H 0; 0 0 1]
• Rotate about origin by θ: [cos θ sin θ 0; −sin θ cos θ 0; 0 0 1]
• Shear in x direction: [1 A 0; 0 1 0; 0 0 1]
• Shear in y direction: [1 0 0; B 1 0; 0 0 1]
• Reflect about origin: [−1 0 0; 0 −1 0; 0 0 1]
• Reflect about x-axis: [1 0 0; 0 −1 0; 0 0 1]
• Reflect about y-axis: [−1 0 0; 0 1 0; 0 0 1]

Note that the reflection matrices are special cases of the scaling matrix.

• Affine Transformation with MATLAB


Chapter 2

Arithmetical hierarchy

In mathematical logic, the arithmetical hierarchy, arithmetic hierarchy or Kleene-Mostowski hierarchy classifies
certain sets based on the complexity of formulas that define them. Any set that receives a classification is called
arithmetical.
The arithmetical hierarchy is important in recursion theory, effective descriptive set theory, and the study of formal
theories such as Peano arithmetic.
The Tarski-Kuratowski algorithm provides an easy way to get an upper bound on the classifications assigned to a
formula and the set it defines.
The hyperarithmetical hierarchy and the analytical hierarchy extend the arithmetical hierarchy to classify additional
formulas and sets.

2.1 The arithmetical hierarchy of formulas

The arithmetical hierarchy assigns classifications to the formulas in the language of first-order arithmetic. The clas-
sifications are denoted Σ0n and Π0n for natural numbers n (including 0). The Greek letters here are lightface symbols,
which indicates that the formulas do not contain set parameters.
If a formula ϕ is logically equivalent to a formula with only bounded quantifiers then ϕ is assigned the classifications
Σ00 and Π00 .
The classifications Σ0n and Π0n are defined inductively for every natural number n using the following rules:

• If ϕ is logically equivalent to a formula of the form ∃n1 ∃n2 · · · ∃nk ψ , where ψ is Π0n , then ϕ is assigned the
classification Σ0n+1 .

• If ϕ is logically equivalent to a formula of the form ∀n1 ∀n2 · · · ∀nk ψ , where ψ is Σ0n , then ϕ is assigned the
classification Π0n+1 .

Also, a Σ0n formula is equivalent to a formula that begins with some existential quantifiers and alternates n − 1 times
between series of existential and universal quantifiers; while a Π0n formula is equivalent to a formula that begins with
some universal quantifiers and alternates similarly.
Because every formula is equivalent to a formula in prenex normal form, every formula with no set quantifiers is
assigned at least one classification. Because redundant quantifiers can be added to any formula, once a formula is
assigned the classification Σ0n or Π0n it will be assigned the classifications Σ0m and Π0m for every m greater than n.
The most important classification assigned to a formula is thus the one with the least n, because this is enough to
determine all the other classifications.
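The counting of alternating quantifier blocks can be illustrated with a short sketch (a toy encoding of our own: a prenex prefix is written as a string of 'E' for ∃ and 'A' for ∀):

```python
def classify(prefix):
    """Classify a prenex quantifier prefix such as 'EEA'.
    Returns ('Sigma', n) or ('Pi', n) by counting blocks of like quantifiers;
    an empty prefix (only bounded quantifiers) is both Sigma_0 and Pi_0,
    reported here as ('Delta', 0)."""
    if not prefix:
        return ('Delta', 0)
    blocks = 1 + sum(1 for a, b in zip(prefix, prefix[1:]) if a != b)
    return ('Sigma' if prefix[0] == 'E' else 'Pi', blocks)

assert classify('EEA') == ('Sigma', 2)   # exists-exists-forall: Sigma^0_2
assert classify('AAEE') == ('Pi', 2)     # a forall block then an exists block: Pi^0_2
```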


2.2 The arithmetical hierarchy of sets of natural numbers

A set X of natural numbers is defined by formula φ in the language of Peano arithmetic (the first-order language with
symbols “0” for zero, “S” for the successor function, "+" for addition, "×" for multiplication, and "=" for equality), if
the elements of X are exactly the numbers that satisfy φ. That is, for all natural numbers n,

n ∈ X ⇔ N |= ϕ(n),

where n is the numeral in the language of arithmetic corresponding to n . A set is definable in first order arithmetic
if it is defined by some formula in the language of Peano arithmetic.
Each set X of natural numbers that is definable in first order arithmetic is assigned classifications of the form Σ0n ,
Π0n , and ∆0n , where n is a natural number, as follows. If X is definable by a Σ0n formula then X is assigned the
classification Σ0n . If X is definable by a Π0n formula then X is assigned the classification Π0n . If X is both Σ0n and
Π0n then X is assigned the additional classification ∆0n .
Note that it rarely makes sense to speak of ∆0n formulas; the first quantifier of a formula is either existential or
universal. So a ∆0n set is not defined by a ∆0n formula; rather, there are both Σ0n and Π0n formulas that define the set.
A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of the natural numbers.
Instead of formulas with one free variable, formulas with k free number variables are used to define the arithmetical
hierarchy on sets of k-tuples of natural numbers.

2.3 Relativized arithmetical hierarchies

Just as we can define what it means for a set X to be recursive relative to another set Y, by allowing the computation defining X to consult Y as an oracle, we can extend this notion to the whole arithmetical hierarchy and define what it means for X to be Σ0n, ∆0n or Π0n in Y, denoted respectively Σ0,Yn, ∆0,Yn and Π0,Yn. To do so, fix a set of integers Y and add a predicate for membership in Y to the language of Peano arithmetic. We then say that X is in Σ0,Yn if it is defined by a Σ0n formula in this expanded language. In other words, X is Σ0,Yn if it is defined by a Σ0n formula allowed to ask questions about membership in Y. Alternatively, one can view the Σ0,Yn sets as those sets that can be built starting with sets recursive in Y and alternately taking unions and intersections of these sets up to n times.
For example, let Y be a set of integers. Let X be the set of numbers divisible by an element of Y. Then X is defined by the formula ϕ(n) = ∃m∃t(Y(m) ∧ m × t = n), so X is in Σ0,Y1 (actually it is in ∆0,Y0 as well, since we could bound both quantifiers by n).
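The divisibility example is easy to make concrete: because both quantifiers can be bounded by n, membership in X is decidable given an oracle for Y. A minimal sketch (the names are our own):

```python
def in_X(n, y_oracle):
    """X = {n : exists m, t <= n with Y(m) and m*t = n} --
    a bounded search that consults the oracle for membership in Y."""
    return any(y_oracle(m) and m * t == n
               for m in range(1, n + 1)
               for t in range(1, n + 1))

Y = {2, 7}
assert in_X(14, Y.__contains__)       # 14 = 2 * 7
assert not in_X(5, Y.__contains__)    # 5 is divisible by no element of {2, 7}
```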

2.4 Arithmetic reducibility and degrees

Arithmetical reducibility is an intermediate notion between Turing reducibility and hyperarithmetic reducibility.
A set is arithmetical (also arithmetic and arithmetically definable) if it is defined by some formula in the language
of Peano arithmetic. Equivalently, X is arithmetical if X is Σ0n or Π0n for some integer n. A set X is arithmetical in a set Y, denoted X ≤A Y, if X is definable by some formula in the language of Peano arithmetic extended by a predicate for membership in Y. Equivalently, X is arithmetical in Y if X is in Σ0,Yn or Π0,Yn for some integer n. A synonym for X ≤A Y is: X is arithmetically reducible to Y.
The relation X ≤A Y is reflexive and transitive, and thus the relation ≡A defined by the rule

X ≡A Y ⇔ X ≤A Y ∧ Y ≤A X

is an equivalence relation. The equivalence classes of this relation are called the arithmetic degrees; they are partially
ordered under ≤A .

2.5 The arithmetical hierarchy of subsets of Cantor and Baire space


The Cantor space, denoted 2ω, is the set of all infinite sequences of 0s and 1s; the Baire space, denoted ωω or N,
is the set of all infinite sequences of natural numbers. Note that elements of the Cantor space can be identified with
sets of integers and elements of the Baire space with functions from integers to integers.
The ordinary axiomatization of second-order arithmetic uses a set-based language in which the set quantifiers can
naturally be viewed as quantifying over Cantor space. A subset of Cantor space is assigned the classification Σ0n if it is
definable by a Σ0n formula. The set is assigned the classification Π0n if it is definable by a Π0n formula. If the set is both
Σ0n and Π0n then it is given the additional classification ∆0n . For example let O ⊂ 2ω be the set of all infinite binary
strings which aren't all 0 (or equivalently the set of all non-empty sets of integers). As O = {X ∈ 2ω |∃n(X(n) = 1)}
we see that O is defined by a Σ01 formula and hence is a Σ01 set.
Note that while both the elements of the Cantor space (regarded as sets of integers) and subsets of the Cantor space
are classified in arithmetic hierarchies, these are not the same hierarchy. In fact the relationship between the two
hierarchies is interesting and non-trivial. For instance the Π0n elements of the Cantor space are not (in general)
the same as the elements X of the Cantor space so that {X} is a Π0n subset of the Cantor space. However, many
interesting results relate the two hierarchies.
There are two ways that a subset of Baire space can be classified in the arithmetical hierarchy.

• A subset of Baire space has a corresponding subset of Cantor space under the map that takes each function from ω to ω to the characteristic function of its graph. A subset of Baire space is given the classification Σ0n, Π0n, or ∆0n if and only if the corresponding subset of Cantor space has the same classification.

• An equivalent definition of the arithmetical hierarchy on Baire space is given by defining the arithmetical hierarchy of formulas using a functional version of second-order arithmetic; then the arithmetical hierarchy on subsets of Cantor space can be defined from the hierarchy on Baire space. This alternate definition gives exactly the same classifications as the first definition.

A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of Baire space or Cantor
space, using formulas with several free variables. The arithmetical hierarchy can be defined on any effective Polish
space; the definition is particularly simple for Cantor space and Baire space because they fit with the language of
ordinary second-order arithmetic.
Note that we can also define the arithmetic hierarchy of subsets of the Cantor and Baire spaces relative to some set of integers. In fact the boldface Σ0n is just the union of Σ0,Yn for all sets of integers Y. Note that the boldface hierarchy is just the standard hierarchy of Borel sets.

2.6 Extensions and variations


It is possible to define the arithmetical hierarchy of formulas using a language extended with a function symbol for
each primitive recursive function. This variation slightly changes the classification of some sets.
A more semantic variation of the hierarchy can be defined on all finitary relations on the natural numbers; the following
definition is used. Every computable relation is defined to be Σ00 and Π00 . The classifications Σ0n and Π0n are defined
inductively with the following rules.

• If the relation R(n1 , . . . , nl , m1 , . . . , mk ) is Σ0n then the relation S(n1 , . . . , nl ) = ∀m1 · · · ∀mk R(n1 , . . . , nl , m1 , . . . , mk )
is defined to be Π0n+1

• If the relation R(n1 , . . . , nl , m1 , . . . , mk ) is Π0n then the relation S(n1 , . . . , nl ) = ∃m1 · · · ∃mk R(n1 , . . . , nl , m1 , . . . , mk )
is defined to be Σ0n+1

This variation slightly changes the classification of some sets. It can be extended to cover finitary relations on the
natural numbers, Baire space, and Cantor space.

2.7 Meaning of the notation


The following meanings can be attached to the notation for the arithmetical hierarchy on formulas.
The subscript n in the symbols Σ0n and Π0n indicates the number of alternations of blocks of universal and existential
number quantifiers that are used in a formula. Moreover, the outermost block is existential in Σ0n formulas and
universal in Π0n formulas.
The superscript 0 in the symbols Σ0n , Π0n , and ∆0n indicates the type of the objects being quantified over. Type
0 objects are natural numbers, and objects of type i + 1 are functions that map the set of objects of type i to the
natural numbers. Quantification over higher type objects, such as functions from natural numbers to natural numbers,
is described by a superscript greater than 0, as in the analytical hierarchy. The superscript 0 indicates quantifiers over
numbers, the superscript 1 would indicate quantification over functions from numbers to numbers (type 1 objects),
the superscript 2 would correspond to quantification over functions that take a type 1 object and return a number, and
so on.

2.8 Examples

• The Σ01 sets of numbers are those definable by a formula of the form ∃n1 · · · ∃nk ψ(n1 , . . . , nk , m) where ψ
has only bounded quantifiers. These are exactly the recursively enumerable sets.

• The set of natural numbers that are indices for Turing machines that compute total functions is Π02 . Intuitively,
an index e falls into this set if and only if for every m “there is an s such that the Turing machine with index
e halts on input m after s steps”. A complete proof would show that the property displayed in quotes in the
previous sentence is definable in the language of Peano arithmetic by a Σ01 formula.

• Every Σ01 subset of Baire space or Cantor space is an open set in the usual topology on the space. Moreover,
for any such set there is a computable enumeration of Gödel numbers of basic open sets whose union is the
original set. For this reason, Σ01 sets are sometimes called effectively open. Similarly, every Π01 set is closed
and the Π01 sets are sometimes called effectively closed.

• Every arithmetical subset of Cantor space or Baire space is a Borel set. The lightface Borel hierarchy extends
the arithmetical hierarchy to include additional Borel sets. For example, every Π02 subset of Cantor or Baire
space is a Gδ set (that is, a set which equals the intersection of countably many open sets). Moreover, each
of these open sets is Σ01 and the list of Gödel numbers of these open sets has a computable enumeration.
If ϕ(X, n, m) is a Σ00 formula with a free set variable X and free number variables n, m then the Π02 set
{X | ∀n∃mϕ(X, n, m)} is the intersection of the Σ01 sets of the form {X | ∃mϕ(X, n, m)} as n ranges over
the set of natural numbers.
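The first example can be made concrete: a set {n : ∃k ψ(n, k)} with a computable matrix ψ is recursively enumerable, since one can dovetail the search over all witness pairs (n, k). A minimal Python sketch (the predicate is an illustrative stand-in):

```python
def enumerate_sigma1(matrix, stages):
    """Enumerate {n : exists k with matrix(n, k)} by dovetailing:
    stage s checks every pair (n, k) with n + k = s."""
    found, out = set(), []
    for s in range(stages):
        for n in range(s + 1):
            k = s - n
            if n not in found and matrix(n, k):
                found.add(n)
                out.append(n)
    return out

# Example matrix: psi(n, k) := (k*k == n), so the enumerated set is the perfect squares.
squares = enumerate_sigma1(lambda n, k: k * k == n, 50)
```

Each element appears as soon as its witness is reached, which is exactly why Σ01 sets coincide with the recursively enumerable sets.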

2.9 Properties
The following properties hold for the arithmetical hierarchy of sets of natural numbers and the arithmetical hierarchy
of subsets of Cantor or Baire space.

• The collections Π0n and Σ0n are closed under finite unions and finite intersections of their respective elements.

• A set is Σ0n if and only if its complement is Π0n . A set is ∆0n if and only if the set is both Σ0n and Π0n , in which
case its complement will also be ∆0n .

• The inclusions ∆0n ⊊ Π0n and ∆0n ⊊ Σ0n hold for n ≥ 1 .

• The inclusions Π0n ⊊ Π0n+1 and Σ0n ⊊ Σ0n+1 hold for all n and the inclusion Σ0n ∪ Π0n ⊊ ∆0n+1 holds for
n ≥ 1 . Thus the hierarchy does not collapse.

2.10 Relation to Turing machines


The Turing computable sets of natural numbers are exactly the sets at level ∆01 of the arithmetical hierarchy. The
recursively enumerable sets are exactly the sets at level Σ01 .
No oracle machine is capable of solving its own halting problem (a variation of Turing’s proof applies). The halting problem for a ∆0,Yn oracle in fact sits in Σ0,Yn+1.

Post’s theorem establishes a close connection between the arithmetical hierarchy of sets of natural numbers and the
Turing degrees. In particular, it establishes the following facts for all n ≥ 1:

• The set ∅(n) (the nth Turing jump of the empty set) is many-one complete in Σ0n .
• The set N \ ∅(n) is many-one complete in Π0n .

• The set ∅(n−1) is Turing complete in ∆0n .

The polynomial hierarchy is a “feasible resource-bounded” version of the arithmetical hierarchy in which polyno-
mial length bounds are placed on the numbers involved (or, equivalently, polynomial time bounds are placed on the
Turing machines involved). It gives a finer classification of some sets of natural numbers that are at level ∆01 of the
arithmetical hierarchy.

2.11 See also


• Interpretability logic

• Hierarchy (mathematics)
• Polynomial hierarchy

2.12 References
• Japaridze, Giorgie (1994), “The logic of arithmetical hierarchy”, Annals of Pure and Applied Logic 66 (2):
89–112, doi:10.1016/0168-0072(94)90063-9, Zbl 0804.03045.

• Moschovakis, Yiannis N. (1980), Descriptive Set Theory, Studies in Logic and the Foundations of Mathematics
100, North Holland, ISBN 0-444-70199-0, Zbl 0433.03025.

• Nies, André (2009), Computability and randomness, Oxford Logic Guides 51, Oxford: Oxford University
Press, ISBN 978-0-19-923076-1, Zbl 1169.03034.

• Rogers, H., jr (1967), Theory of recursive functions and effective computability, Maidenhead: McGraw-Hill,
Zbl 0183.01401.
Chapter 3

Arity

In logic, mathematics, and computer science, the arity (/ˈærɨti/) of a function or operation is the number of arguments
or operands the function or operation accepts. The arity of a relation (or predicate) is the dimension of the domain
in the corresponding Cartesian product. (A function of arity n thus has arity n+1 considered as a relation.) The term
springs from words like unary, binary, ternary, etc. Unary functions or predicates may also be called “monadic”; similarly, binary functions may be called “dyadic”.
In mathematics arity may also be named rank,[1][2] but this word can have many other meanings in mathematics. In
logic and philosophy, arity is also called adicity and degree.[3][4] In linguistics, arity is usually named valency.[5]
In computer programming, there is often a syntactical distinction between operators and functions; syntactical op-
erators usually have arity 0, 1, or 2. Functions vary widely in the number of arguments, though large numbers can
become unwieldy. Some programming languages also offer support for variadic functions, i.e. functions syntactically
accepting a variable number of arguments.

3.1 Examples
The term “arity” is rarely employed in everyday usage. For example, rather than saying “the arity of the addition
operation is 2” or “addition is an operation of arity 2” one usually says “addition is a binary operation”. In general, the
naming of functions or operators with a given arity follows a convention similar to the one used for n-based numeral
systems such as binary and hexadecimal. One combines a Latin prefix with the -ary ending; for example:

• A nullary function takes no arguments.

• A unary function takes one argument.

• A binary function takes two arguments.

• A ternary function takes three arguments.

• An n-ary function takes n arguments.
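In Python, for instance, these arities correspond directly to the number of declared parameters (a sketch; the functions are toy examples of our own):

```python
import inspect

def nullary():        return 42              # arity 0: no arguments
def unary(x):         return -x              # arity 1
def binary(x, y):     return x + y           # arity 2
def ternary(c, x, y): return x if c else y   # arity 3

def arity(f):
    """Number of declared parameters of a plain function."""
    return len(inspect.signature(f).parameters)

assert [arity(f) for f in (nullary, unary, binary, ternary)] == [0, 1, 2, 3]
```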

3.1.1 Nullary

Sometimes it is useful to consider a constant to be an operation of arity 0, and hence call it nullary.
Also, in non-functional programming, a function without arguments can be meaningful and not necessarily constant
(due to side effects). Often, such functions have in fact some hidden input which might be global variables, including
the whole state of the system (time, free memory, ...). The latter are important examples which usually also exist in
“purely” functional programming languages.


3.1.2 Unary
Examples of unary operators in mathematics and in programming include the unary minus and plus, the increment
and decrement operators in C-style languages (not in logical languages), and the factorial, reciprocal, floor, ceiling,
fractional part, sign, absolute value, complex conjugate, and norm functions in mathematics. The two’s comple-
ment, address reference and the logical NOT operators are examples of unary operators in math and programming.
According to Quine, a more suitable term is “singulary”.[6]
All functions in lambda calculus and in some functional programming languages (especially those descended from
ML) are technically unary, but see n-ary below.

3.1.3 Binary
Most operators encountered in programming are of the binary form. For both programming and mathematics these
can be the multiplication operator, the addition operator, the division operator. Logical predicates such as OR, XOR,
AND, IMP are typically used as binary operators with two distinct operands.

3.1.4 Ternary
From C, C++, C#, Java, Perl and variants comes the ternary operator ?:, which is a so-called conditional operator,
taking three parameters. Forth also contains a ternary operator, */, which multiplies the first two (one-cell) numbers,
dividing by the third, with the intermediate result being a double cell number. This is used when the intermediate
result would overflow a single cell. Python has a ternary conditional expression, x if C else y. The dc calculator has several ternary operators, such as |, which will pop three values from the stack and efficiently compute x^y mod z with arbitrary precision. Additionally, many assembly language instructions are ternary or higher, such as MOV %AX, (%BX,%CX), which will load (MOV) into register AX the contents of a calculated memory location that is the sum (parentheses) of the registers BX and CX.

3.1.5 n-ary
From a mathematical point of view, a function of n arguments can always be considered as a function of one single
argument which is an element of some product space. However, it may be convenient for notation to consider n-ary
functions, as for example multilinear maps (which are not linear maps on the product space, if n≠1).
The same is true for programming languages, where functions taking several arguments could always be defined as
functions taking a single argument of some composite type such as a tuple, or in languages with higher-order functions,
by currying.
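Currying can be sketched in a couple of lines (the names are our own):

```python
def curry(f):
    """Turn a binary function into a chain of unary functions."""
    return lambda x: lambda y: f(x, y)

add = lambda x, y: x + y
assert curry(add)(3)(4) == 7          # each application supplies one argument
assert curry(add)(3)(4) == add(3, 4)  # same result as the binary call
```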

3.1.6 Variable arity


In computer science, a function accepting a variable number of arguments is called variadic. In logic and philosophy,
predicates or relations accepting a variable number of arguments are called multigrade, anadic, or variably polyadic.[7]

3.2 Other names


There are Latinate names for specific arities, primarily based on Latin distributive numbers meaning “in groups of n”, though some are based on cardinal numbers or ordinal numbers. Only binary and ternary are both commonly used and derived from distributive numbers.

• Nullary means 0-ary (from nūllus, zero not being well-understood in antiquity).
• Unary means 1-ary (from cardinal unus, rather than singulary from distributive singulī ).
• Binary means 2-ary.
• Ternary means 3-ary.

• Quaternary means 4-ary.


• Quinary means 5-ary.
• Senary means 6-ary.
• Septenary means 7-ary.
• Octonary means 8-ary (alternatively octary).
• Novenary means 9-ary (alternatively nonary, from ordinal).
• Denary means 10-ary (alternatively decenary)
• Polyadic, multary and multiary mean 2 or more operands (or parameters).
• n-ary means n operands (or parameters), but is often used as a synonym of “polyadic”.

An alternative nomenclature is derived in a similar fashion from the corresponding Greek roots; for example, niladic
(or medadic), monadic, dyadic, triadic, polyadic, and so on. Thence derive the alternative terms adicity and adinity
for the Latin-derived arity.
These words are often used to describe anything related to that number (e.g., undenary chess is a chess variant with
an 11×11 board, or the Millenary Petition of 1603).

3.3 See also


• Logic of relatives
• Binary relation
• Triadic relation
• Theory of relations
• Signature (logic)
• Parameter
• Variadic
• Valency
• n-ary code
• n-ary group

3.4 References
[1] Michiel Hazewinkel (2001). Encyclopaedia of Mathematics, Supplement III. Springer. p. 3. ISBN 978-1-4020-0198-7.

[2] Eric Schechter (1997). Handbook of Analysis and Its Foundations. Academic Press. p. 356. ISBN 978-0-12-622760-4.

[3] Michael Detlefsen; David Charles McCarty; John B. Bacon (1999). Logic from A to Z. Routledge. p. 7. ISBN 978-0-415-21375-2.

[4] Nino B. Cocchiarella; Max A. Freund (2008). Modal Logic: An Introduction to its Syntax and Semantics. Oxford University
Press. p. 121. ISBN 978-0-19-536658-7.

[5] David Crystal (2008). Dictionary of Linguistics and Phonetics (6th ed.). John Wiley & Sons. p. 507. ISBN 978-1-405-
15296-9.

[6] Quine, W. V. O. (1940), Mathematical logic, Cambridge, MA: Harvard University Press, p. 13

[7] Oliver, Alex (2004). “Multigrade Predicates”. Mind 113: 609–681. doi:10.1093/mind/113.452.609.

3.5 External links


A monograph available free online:

• Burris, Stanley N. and Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. ISBN 3-540-90578-2. Especially pp. 22–24.
Chapter 4

Boolean domain

In mathematics and abstract algebra, a Boolean domain is a set consisting of exactly two elements whose interpretations include false and true. In logic, mathematics and theoretical computer science, a Boolean domain is usually written as {0, 1},[1][2][3] {false, true}, {F, T},[4] {⊥, ⊤}[5] or B.[6][7]
The algebraic structure that naturally builds on a Boolean domain is the Boolean algebra with two elements. The
initial object in the category of bounded lattices is a Boolean domain.
In computer science, a Boolean variable is a variable that takes values in some Boolean domain. Some programming
languages feature reserved words or symbols for the elements of the Boolean domain, for example false and true.
However, many programming languages do not have a Boolean datatype in the strict sense. In C or BASIC, for
example, falsity is represented by the number 0 and truth is represented by the number 1 or −1 respectively, and all
variables that can take these values can also take any other numerical values.

4.1 Generalizations
The Boolean domain {0, 1} can be replaced by the unit interval [0,1], in which case rather than only taking values 0
or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x,
conjunction (AND) is replaced with multiplication ( xy ), and disjunction (OR) is defined via De Morgan’s law to be
1 − (1 − x)(1 − y) .
Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic
and probabilistic logic. In these interpretations, a value is interpreted as the “degree” of truth – to what extent a
proposition is true, or the probability that the proposition is true.
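These algebraic replacements are short enough to write out directly (a sketch; the function names are our own):

```python
def f_not(x):    return 1 - x
def f_and(x, y): return x * y
def f_or(x, y):  return 1 - (1 - x) * (1 - y)   # disjunction via De Morgan's law

# On the endpoints 0 and 1 these agree with the ordinary Boolean operations:
assert f_or(0, 1) == 1 and f_and(1, 1) == 1 and f_not(0) == 1
# Between the endpoints they interpolate, giving degrees of truth:
assert f_or(0.5, 0.5) == 0.75
```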

4.2 See also


• Boolean-valued function

4.3 Notes
[1] Dirk van Dalen, Logic and Structure. Springer (2004), page 15.

[2] David Makinson, Sets, Logic and Maths for Computing. Springer (2008), page 13.

[3] George S. Boolos and Richard C. Jeffrey, Computability and Logic. Cambridge University Press (1980), page 99.

[4] Elliott Mendelson, Introduction to Mathematical Logic (4th. ed.). Chapman & Hall/CRC (1997), page 11.

[5] Eric C. R. Hehner, A Practical Theory of Programming. Springer (1993, 2010), page 3.

[6] Ian Parberry (1994). Circuit Complexity and Neural Networks. MIT Press. p. 65. ISBN 978-0-262-16148-0.


[7] Jordi Cortadella et al. (2002). Logic Synthesis for Asynchronous Controllers and Interfaces. Springer Science & Business
Media. p. 73. ISBN 978-3-540-43152-7.
Chapter 5

Boolean expression

In computer science, a Boolean expression is an expression in a programming language that produces a Boolean
value when evaluated, i.e. one of true or false. A Boolean expression may be composed of a combination of the
Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions.[1]
Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits.[2]

5.1 Boolean operators


Most programming languages have the Boolean operators OR, AND and NOT; in C and some newer languages, these
are represented by "||" (double pipe character), "&&" (double ampersand) and "!" (exclamation point) respectively,
while the corresponding bitwise operations are represented by "|", "&" and "~" (tilde).[3] In the mathematical literature
the symbols used are often "+" (plus), "·" (dot) and overbar, or "∨" (cup), "∧" (cap) and "¬" or "′" (prime).
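The distinction between the logical and the bitwise forms is easy to demonstrate (Python is used here for brevity; it spells the logical operators `or`, `and`, `not` rather than `||`, `&&`, `!`):

```python
a, b = 0b1100, 0b1010

# Bitwise operators act on each bit position independently:
assert a & b == 0b1000
assert a | b == 0b1110
assert ~a == -0b1101      # two's-complement negation of an integer

# Logical operators act on truth values as a whole:
assert (a > 0) and not (b > 0xFF)
```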

5.2 Examples
• The expression “5 > 3” is evaluated as true.

• “5>=3” and “3<=5” are equivalent Boolean expressions, both of which are evaluated as true.

• Of course, most Boolean expressions will contain at least one variable (X > 3), and often more (X > Y).

5.3 See also


• Expression (programming)

• Expression (mathematics)

5.4 References
[1] Gries, David; Schneider, Fred B. (1993), “Chapter 2. Boolean Expressions”, A Logical Approach to Discrete Math, Monographs in Computer Science, Springer, p. 25ff, ISBN 9780387941158.

[2] van Melkebeek, Dieter (2000), Randomness and Completeness in Computational Complexity, Lecture Notes in Computer
Science 1950, Springer, p. 22, ISBN 9783540414926.

[3] E.g. for Java see Brogden, William B.; Green, Marcus (2003), Java 2 Programmer, Que Publishing, p. 45, ISBN
9780789728616.


5.5 External links


• The Calculus of Logic, by George Boole, Cambridge and Dublin Mathematical Journal Vol. III (1848), pp.
183–98.
Chapter 6

Boolean function

Not to be confused with Binary function.

In mathematics and logic, a (finitary) Boolean function (or switching function) is a function of the form ƒ : B^k → B, where B = {0, 1} is a Boolean domain and k is a non-negative integer called the arity of the function. In the case where k = 0, the “function” is essentially a constant element of B.
Every k-ary Boolean function can be expressed as a propositional formula in k variables x1, …, xk, and two propositional formulas are logically equivalent if and only if they express the same Boolean function. There are 2^(2^k) k-ary functions for every k.
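The count of 2^(2^k) can be checked by brute force. The following Python sketch (the function name is ours, chosen for illustration) enumerates every k-ary Boolean function as a truth table, i.e. as an assignment of an output bit to each of the 2^k input rows:

```python
from itertools import product

def all_boolean_functions(k):
    """Enumerate every k-ary Boolean function, represented as a
    truth table mapping each of the 2**k input rows to an output."""
    rows = list(product([0, 1], repeat=k))        # all 2**k input tuples
    return [dict(zip(rows, outputs))
            for outputs in product([0, 1], repeat=len(rows))]

# There are 2**(2**k) k-ary Boolean functions:
counts = [len(all_boolean_functions(k)) for k in range(4)]
print(counts)  # [2, 4, 16, 256]
assert counts == [2 ** (2 ** k) for k in range(4)]
```

For k = 2 this yields the familiar 16 binary connectives (AND, OR, XOR, NAND, and so on); for k = 0 it yields the two constants.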

6.1 Boolean functions in applications


A Boolean function describes how to determine a Boolean value output based on some logical calculation from
Boolean inputs. Such functions play a basic role in questions of complexity theory as well as the design of circuits
and chips for digital computers. The properties of Boolean functions play a critical role in cryptography, particularly
in the design of symmetric key algorithms (see substitution box).
Boolean functions are often represented by sentences in propositional logic, and sometimes as multivariate polynomials
over GF(2), but more efficient representations are binary decision diagrams (BDD), negation normal forms, and
propositional directed acyclic graphs (PDAG).
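As a small illustration of the polynomial representation over GF(2) mentioned above, the sketch below (in Python, with helper names of our own choosing) writes the usual connectives in their standard polynomial forms, where XOR is addition mod 2 and AND is multiplication mod 2, and checks them against the ordinary truth tables:

```python
# AND, XOR, OR, NOT written as polynomials over GF(2) (arithmetic mod 2).
# XOR is + and AND is * in GF(2); OR and NOT are derived polynomials.

def gf2_and(x, y): return (x * y) % 2
def gf2_xor(x, y): return (x + y) % 2
def gf2_or(x, y):  return (x + y + x * y) % 2
def gf2_not(x):    return (1 + x) % 2

# Check each polynomial against the ordinary truth tables:
for x in (0, 1):
    for y in (0, 1):
        assert gf2_and(x, y) == int(bool(x) and bool(y))
        assert gf2_xor(x, y) == int(bool(x) != bool(y))
        assert gf2_or(x, y)  == int(bool(x) or bool(y))
    assert gf2_not(x) == int(not x)
print("all four GF(2) polynomials match their truth tables")
```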
In cooperative game theory, monotone Boolean functions are called simple games (voting games); this notion is
applied to solve problems in social choice theory.

6.2 See also


• Algebra of sets

• Boolean algebra

• Boolean algebra topics

• Boolean domain

• Boolean-valued function

• Logical connective

• Truth function

• Truth table

• Symmetric Boolean function


• Decision tree model

• Evasive Boolean function


• Indicator function

• Balanced Boolean function

• 3-ary Boolean functions

6.3 References
• Crama, Y; Hammer, P. L. (2011), Boolean Functions, Cambridge University Press.

• Hazewinkel, Michiel, ed. (2001), “Boolean function”, Encyclopedia of Mathematics, Springer, ISBN 978-1-
55608-010-4

• Janković, Dragan; Stanković, Radomir S.; Moraga, Claudio (November 2003). “Arithmetic expressions optimisation using dual polarity property” (PDF). Serbian Journal of Electrical Engineering 1 (1): 71–80. Retrieved 2015-06-07.
• Mano, M. M.; Ciletti, M. D. (2013), Digital Design, Pearson.
Chapter 7

Cardinality of the continuum

In set theory, the cardinality of the continuum is the cardinality or “size” of the set of real numbers R , sometimes
called the continuum. It is an infinite cardinal number and is denoted by |R| or c (a lowercase fraktur script “c”).
The real numbers R are more numerous than the natural numbers N . Moreover, R has the same number of elements
as the power set of N . Symbolically, if the cardinality of N is denoted as ℵ0 , the cardinality of the continuum is

c = 2^ℵ0 > ℵ0 .
This was proven by Georg Cantor in his 1874 uncountability proof, part of his groundbreaking study of different
infinities, and later more simply in his diagonal argument. Cantor defined cardinality in terms of bijective functions:
two sets have the same cardinality if and only if there exists a bijective function between them.
Between any two real numbers a < b, no matter how close they are to each other, there are always infinitely many
other real numbers, and Cantor showed that they are as many as those contained in the whole set of real numbers. In
other words, the open interval (a,b) is equinumerous with R. This is also true for several other infinite sets, such as
any n-dimensional Euclidean space R^n (see space filling curve). That is,

|(a, b)| = |R| = |R^n| .


The smallest infinite cardinal number is ℵ0 (aleph-naught). The second smallest is ℵ1 (aleph-one). The continuum
hypothesis, which asserts that there are no sets whose cardinality is strictly between ℵ0 and c , implies that c = ℵ1 .

7.1 Properties

7.1.1 Uncountability
Georg Cantor introduced the concept of cardinality to compare the sizes of infinite sets. He famously showed that
the set of real numbers is uncountably infinite; i.e. c is strictly greater than the cardinality of the natural numbers, ℵ0
:

ℵ0 < c.
In other words, there are strictly more real numbers than there are integers. Cantor proved this statement in several
different ways. See Cantor’s first uncountability proof and Cantor’s diagonal argument.

7.1.2 Cardinal equalities


A variation on Cantor’s diagonal argument can be used to prove Cantor’s theorem, which states that the cardinality of
any set is strictly less than that of its power set, i.e. |A| < 2^|A| , and so the power set P(N) of the natural numbers N is
uncountable. In fact, it can be shown that the cardinality of P(N) is equal to c :


1. Define a map f : R → P(Q) from the reals to the power set of the rationals by sending each real number x to
the set {q ∈ Q | q ≤ x} of all rationals less than or equal to x (with the reals viewed as Dedekind cuts, this
is nothing other than the inclusion map in the set of sets of rationals). This map is injective since the rationals
are dense in R. Since the rationals are countable we have that c ≤ 2^ℵ0 .

2. Let {0,2}^N be the set of infinite sequences with values in the set {0,2}. This set clearly has cardinality 2^ℵ0 (the natural bijection between the set of binary sequences and P(N) is given by the indicator function). Now associate to each such sequence (ai) the unique real number in the interval [0,1] whose ternary expansion is given by the digits (ai), i.e. the i-th digit after the radix point is ai. The image of this map is called the Cantor set. It is not hard to see that this map is injective, for by avoiding points with the digit 1 in their ternary expansion we avoid conflicts created by the fact that the ternary expansion of a real number is not unique. We then have that 2^ℵ0 ≤ c .

By the Cantor–Bernstein–Schroeder theorem we conclude that

c = |P(N)| = 2^ℵ0 .

(A different proof of c = 2^ℵ0 is given in Cantor’s diagonal argument. This proof constructs a bijection from {0,1}^N
to R.)
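The mechanism at the heart of the diagonal argument can be sketched for finite lists: flipping the i-th digit of the i-th sequence produces a sequence that differs from every listed sequence somewhere, so no list can be exhaustive. A minimal Python illustration (the function name is ours):

```python
def diagonal_escape(sequences):
    """Given n binary sequences (each of length >= n), return a length-n
    sequence that differs from the i-th listed sequence at position i."""
    return [1 - seq[i] for i, seq in enumerate(sequences)]

listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
escape = diagonal_escape(listed)
print(escape)  # [1, 0, 1, 0]

# The escape sequence disagrees with the i-th sequence at position i,
# so it cannot equal any sequence in the list:
assert all(escape[i] != listed[i][i] for i in range(len(listed)))
assert escape not in listed
```

In Cantor's argument the list is a hypothetical enumeration of all of {0,1}^N; the same construction then exhibits a sequence missed by the enumeration.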
The cardinal equality c^2 = c can be demonstrated using cardinal arithmetic:

c^2 = (2^ℵ0)^2 = 2^(2×ℵ0) = 2^ℵ0 = c.

By using the rules of cardinal arithmetic one can also show that

c^ℵ0 = ℵ0^ℵ0 = n^ℵ0 = c^n = ℵ0 · c = n · c = c,

where n is any finite cardinal ≥ 2, and

c^c = (2^ℵ0)^c = 2^(c×ℵ0) = 2^c ,

where 2^c is the cardinality of the power set of R, and 2^c > c .

7.1.3 Alternative explanation for c = 2^ℵ0


Every real number has at least one infinite decimal expansion. For example,

1/2 = 0.50000...
1/3 = 0.33333...
π = 3.14159....

(This is true even when the expansion repeats as in the first two examples.) In any given case, the number of digits
is countable since they can be put into a one-to-one correspondence with the set of natural numbers N . This fact
makes it sensible to talk about (for example) the first, the one-hundredth, or the millionth digit of π . Since the natural
numbers have cardinality ℵ0 , each real number has ℵ0 digits in its expansion.
Since each real number can be broken into an integer part and a decimal fraction, we get

c ≤ ℵ0 · 10^ℵ0 ≤ 2^ℵ0 · (2^4)^ℵ0 = 2^(ℵ0 + 4·ℵ0) = 2^ℵ0

since

ℵ0 + 4 · ℵ0 = ℵ0 .

On the other hand, if we map 2 = {0, 1} to {3, 7} and consider that decimal fractions containing only 3 or 7 are only
a part of the real numbers, then we get

2^ℵ0 ≤ c

and thus

c = 2^ℵ0 .

7.2 Beth numbers


Main article: Beth number

The sequence of beth numbers is defined by setting ℶ0 = ℵ0 and ℶ_{k+1} = 2^ℶ_k . So c is the second beth number,
beth-one:

c = ℶ1 .

The third beth number, beth-two, is the cardinality of the power set of R (i.e. the set of all subsets of the real line):

2^c = ℶ2 .

7.3 The continuum hypothesis


Main article: Continuum hypothesis

The famous continuum hypothesis asserts that c is also the second aleph number ℵ1 . In other words, the continuum
hypothesis states that there is no set A whose cardinality lies strictly between ℵ0 and c

∄A : ℵ0 < |A| < c.

This statement is now known to be independent of the axioms of Zermelo–Fraenkel set theory with the axiom of
choice (ZFC). That is, both the hypothesis and its negation are consistent with these axioms. In fact, for every
nonzero natural number n, the equality c = ℵn is independent of ZFC (the case n = 1 is the continuum hypothesis).
The same is true for most other alephs, although in some cases equality can be ruled out by König’s theorem on
the grounds of cofinality, e.g., c ≠ ℵ_ω . In particular, c could be either ℵ1 or ℵ_{ω1} , where ω1 is the first uncountable
ordinal, so it could be either a successor cardinal or a limit cardinal, and either a regular cardinal or a singular cardinal.

7.4 Sets with cardinality of the continuum


A great many sets studied in mathematics have cardinality equal to c . Some common examples are the following:

• the real numbers R



• any (nondegenerate) closed or open interval in R (such as the unit interval [0, 1] )

For instance, for all a, b ∈ R such that a < b we can define the bijection

f : R → (a, b)
x ↦ ((arctan x + π/2) / π) · (b − a) + a

Now we show the cardinality of an infinite interval. For all a ∈ R we can define the bijection

f : R → (a, ∞)
x ↦ arctan x + π/2 + a,  if x < 0
x ↦ x + π/2 + a,  if x ≥ 0

and similarly for all b ∈ R

f : R → (−∞, b)
x ↦ x − π/2 + b,  if x < 0
x ↦ arctan x − π/2 + b,  if x ≥ 0

• the irrational numbers

• the transcendental numbers

We note that the set of real algebraic numbers is countably infinite (assign to each formula its Gödel
number.) So the cardinality of the real algebraic numbers is ℵ0 . Furthermore, the real algebraic numbers
and the real transcendental numbers are disjoint sets whose union is R . Thus, since the cardinality of
R is c , the cardinality of the real transcendental numbers is c − ℵ0 = c . A similar result follows for
complex transcendental numbers, once we have proved that |C| = c .

• the Cantor set

• Euclidean space R^n [1]

• the complex numbers C



We note that, per Cantor’s proof of the cardinality of Euclidean space,[1] |R^2| = c . By definition, any
c ∈ C can be uniquely expressed as a + bi for some a, b ∈ R . We therefore define the bijection

f : C → R^2
a + bi ↦ (a, b)

• the power set of the natural numbers P(N) (the set of all subsets of the natural numbers)

• the set of sequences of integers (i.e. all functions N → Z , often denoted Z^N )

• the set of sequences of real numbers, R^N

• the set of all continuous functions from R to R

• the Euclidean topology on R^n (i.e. the set of all open sets in R^n )

• the Borel σ-algebra on R (i.e. the set of all Borel sets in R ).

7.5 Sets with greater cardinality


Sets with cardinality greater than c include:

• the set of all subsets of R (i.e., power set P(R) )



• the set 2^R of indicator functions defined on subsets of the reals (the set 2^R is isomorphic to P(R) – the indicator function chooses elements of each subset to include)

• the set R^R of all functions from R to R

• the Lebesgue σ-algebra of R , i.e., the set of all Lebesgue measurable sets in R .
• the Stone–Čech compactifications of N , Q and R

• the set of all automorphisms of the field of complex numbers.

These all have cardinality 2^c = ℶ2 (beth two).

7.6 References
[1] Gouvêa, Fernando Q., “Was Cantor Surprised?"

• Paul Halmos, Naive set theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag,
New York, 1974. ISBN 0-387-90092-6 (Springer-Verlag edition).

• Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. ISBN
3-540-44085-2.

• Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. ISBN 0-444-86839-9.

This article incorporates material from cardinality of the continuum on PlanetMath, which is licensed under the Creative
Commons Attribution/Share-Alike License.
Chapter 8

Charles Sanders Peirce

Charles Sanders Peirce (/ˈpɜrs/,[9] like “purse”, September 10, 1839 – April 19, 1914) was an American philoso-
pher, logician, mathematician, and scientist who is sometimes known as “the father of pragmatism". He was educated
as a chemist and employed as a scientist for 30 years. Today he is appreciated largely for his contributions to logic,
mathematics, philosophy, scientific methodology, and semiotics, and for his founding of pragmatism.
An innovator in mathematics, statistics, philosophy, research methodology, and various sciences, Peirce considered
himself, first and foremost, a logician. He made major contributions to logic, but logic for him encompassed much
of that which is now called epistemology and philosophy of science. He saw logic as the formal branch of semiotics,
of which he is a founder, and which foreshadowed the debate among logical positivists and proponents of philosophy
of language that dominated 20th century Western philosophy; additionally, he defined the concept of abductive rea-
soning, as well as rigorously formulated mathematical induction and deductive reasoning. As early as 1886 he saw
that logical operations could be carried out by electrical switching circuits; the same idea was used decades later to
produce digital computers.[10]
In 1934, the philosopher Paul Weiss called Peirce “the most original and versatile of American philosophers and
America’s greatest logician”.[11] Webster’s Biographical Dictionary said in 1943 that Peirce was “now regarded as the
most original thinker and greatest logician of his time.”[12]

8.1 Life
Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin
Peirce, himself a professor of astronomy and mathematics at Harvard University and perhaps the first serious re-
search mathematician in America. At age 12, Charles read his older brother’s copy of Richard Whately's Elements
of Logic, then the leading English-language text on the subject. So began his lifelong fascination with logic and
reasoning.[13] He went on to earn the A.B. and A.M. from Harvard; in 1863 the Lawrence Scientific School awarded
him a B.Sc. that was Harvard’s first summa cum laude chemistry degree;[14] and otherwise his academic record was
undistinguished.[15] At Harvard, he began lifelong friendships with Francis Ellingwood Abbot, Chauncey Wright, and
William James.[16] One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce.
This opinion proved fateful, because Eliot, while President of Harvard 1869–1909—a period encompassing nearly
all of Peirce’s working life—repeatedly vetoed Harvard’s employing Peirce in any capacity.[17]
Peirce suffered from his late teens onward from a nervous condition then known as “facial neuralgia”, which would
today be diagnosed as trigeminal neuralgia. Brent says that when in the throes of its pain “he was, at first, almost
stupefied, and then aloof, cold, depressed, extremely suspicious, impatient of the slightest crossing, and subject to
violent outbursts of temper”.[18] Its consequences may have led to the social isolation which made his life’s later years
so tragic.

8.1.1 Early employment


Between 1859 and 1891, Peirce was intermittently employed in various scientific capacities by the United States Coast
Survey and its successor, the United States Coast and Geodetic Survey,[19] where he enjoyed his highly influential
father’s protection[20] until the latter’s death in 1880. That employment exempted Peirce from having to take part in


Peirce’s birthplace. Now Lesley University's Graduate School of Arts and Social Sciences

the Civil War; it would have been very awkward for him to do so, as the Boston Brahmin Peirces sympathized with
the Confederacy.[21] At the Survey, he worked mainly in geodesy and gravimetry, refining the use of pendulums to
determine small local variations in the Earth's gravity.[19] He was elected a resident fellow of the American Academy
of Arts and Sciences in January 1867.[22] The Survey sent him to Europe five times,[23] first in 1871 as part of a
group sent to observe a solar eclipse; there, he sought out Augustus De Morgan, William Stanley Jevons, and William
Kingdon Clifford,[24] British mathematicians and logicians whose turn of mind resembled his own. From 1869 to
1872, he was employed as an Assistant in Harvard’s astronomical observatory, doing important work on determining
the brightness of stars and the shape of the Milky Way.[25] On April 20, 1877 he was elected a member of the National
Academy of Sciences.[26] Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain
frequency,[27] the kind of definition employed from 1960 to 1983.
During the 1880s, Peirce’s indifference to bureaucratic detail waxed while his Survey work’s quality and timeliness
waned. Peirce took years to write reports that he should have completed in months. Meanwhile, he wrote entries,
ultimately thousands during 1883–1909, on philosophy, logic, science, and other subjects for the encyclopedic Century
Dictionary.[28] In 1885, an investigation by the Allison Commission exonerated Peirce, but led to the dismissal of
Superintendent Julius Hilgard and several other Coast Survey employees for misuse of public funds.[29] In 1891,
Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request.[30] He never again
held regular employment.

8.1.2 Johns Hopkins University

In 1879, Peirce was appointed Lecturer in logic at the new Johns Hopkins University, which had strong departments
in a number of areas that interested him, such as philosophy (Royce and Dewey completed their PhDs at Hopkins),
psychology (taught by G. Stanley Hall and studied by Joseph Jastrow, who coauthored a landmark empirical study
with Peirce), and mathematics (taught by J. J. Sylvester, who came to admire Peirce’s work on mathematics and
logic). 1883 saw publication of his Studies in Logic by Members of the Johns Hopkins University containing works by
himself and Allan Marquand, Christine Ladd, Benjamin Ives Gilman, and Oscar Howard Mitchell, several of whom
were his graduate students.[31] Peirce’s nontenured position at Hopkins was the only academic appointment he ever
32 CHAPTER 8. CHARLES SANDERS PEIRCE

held.
Brent documents something Peirce never suspected, namely that his efforts to obtain academic employment, grants,
and scientific respectability were repeatedly frustrated by the covert opposition of a major Canadian-American sci-
entist of the day, Simon Newcomb.[32] Peirce’s efforts may also have been hampered by a difficult personality; Brent
conjectures as to further psychological difficulty.[33]
Peirce’s personal life worked against his professional success. After his first wife, Harriet Melusina Fay (“Zina”),
left him in 1875,[34] Peirce, while still legally married, became involved with Juliette, whose name, given variously
as Froissy and Pourtalai[35] and nationality (she spoke French[36] ) remain uncertain.[37] When his divorce from Zina
became final in 1883, he married Juliette.[38] That year, Newcomb pointed out to a Johns Hopkins trustee that Peirce,
while a Hopkins employee, had lived and traveled with a woman to whom he was not married; the ensuing scandal
led to his dismissal in January 1884.[39] Over the years Peirce sought academic employment at various universities
without success.[40] He had no children by either marriage.[41]

Cambridge, where Peirce was born and raised, New York City, where he often visited and sometimes lived, and Milford, where he
spent the later years of his life with his second wife Juliette.

8.1.3 Poverty

In 1887 Peirce spent part of his inheritance from his parents to buy 2,000 acres (8 km2 ) of rural land near Milford,
Pennsylvania, which never yielded an economic return.[42] There he had an 1854 farmhouse remodeled to his design.[43]
The Peirces named the property "Arisbe". There they lived with few interruptions for the rest of their lives,[44] Charles
writing prolifically, much of it unpublished to this day (see Works). Living beyond their means soon led to grave
financial and legal difficulties.[45] He spent much of his last two decades unable to afford heat in winter and sub-
sisting on old bread donated by the local baker. Unable to afford new stationery, he wrote on the verso side of old
manuscripts. An outstanding warrant for assault and unpaid debts led to his being a fugitive in New York City for a
while.[46] Several people, including his brother James Mills Peirce[47] and his neighbors, relatives of Gifford Pinchot,
settled his debts and paid his property taxes and mortgage.[48]
Peirce did some scientific and engineering consulting and wrote much for meager pay, mainly encyclopedic dictionary
entries, and reviews for The Nation (with whose editor, Wendell Phillips Garrison, he became friendly). He did
translations for the Smithsonian Institution, at its director Samuel Langley's instigation. Peirce also did substantial
mathematical calculations for Langley’s research on powered flight. Hoping to make money, Peirce tried inventing.[49]
He began but did not complete a number of books.[50] In 1888, President Grover Cleveland appointed him to the
Assay Commission.[51]
From 1890 on, he had a friend and admirer in Judge Francis C. Russell of Chicago,[52] who introduced Peirce to
editor Paul Carus and owner Edward C. Hegeler of the pioneering American philosophy journal The Monist, which
eventually published at least 14 articles by Peirce.[53] He wrote many texts in James Mark Baldwin's Dictionary of
Philosophy and Psychology (1901–5); half of those credited to him appear to have been written actually by Christine
Ladd-Franklin under his supervision.[54] He applied in 1902 to the newly formed Carnegie Institution for a grant
to write a systematic book of his life’s work. The application was doomed; his nemesis Newcomb served on the
Institution’s executive committee, and its President had been the President of Johns Hopkins at the time of Peirce’s
dismissal.[55]
The one who did the most to help Peirce in these desperate times was his old friend William James, dedicating his Will
to Believe (1897) to Peirce, and arranging for Peirce to be paid to give two series of lectures at or near Harvard (1898
and 1903).[56] Most important, each year from 1907 until James’s death in 1910, James wrote to his friends in the
Boston intelligentsia to request financial aid for Peirce; the fund continued even after James died. Peirce reciprocated
by designating James’s eldest son as his heir should Juliette predecease him.[57] It has been believed that this was also
why Peirce used “Santiago” (“St. James” in English) as a middle name, but he appeared in print as early as 1890 as
Charles Santiago Peirce. (See Charles Santiago Sanders Peirce for discussion and references).
Peirce died destitute in Milford, Pennsylvania, twenty years before his widow.

8.1.4 Slavery, the Civil War and racism

Peirce grew up in a home where the supremacy of the white Anglo-Saxon male was taken for granted, Irish immigrants
were considered inferior and Negro slavery was considered natural.[58]

Arisbe in 2011

Until the outbreak of the Civil War his father described himself as a secessionist; once the war began he became a Union partisan, supporting with donations the Sanitary Commission, the leading Northern war charity. No members of the Peirce family volunteered or enlisted. Peirce shared his father’s views and liked to use the syllogism

All Men are equal in their political rights; Negroes are Men; therefore, Negroes are equal in political rights to whites

to illustrate the unreliability of traditional forms of logic.[59] See: Peirce’s law#Other proofs of Peirce’s law

8.2 Reception
Bertrand Russell (1959) wrote,[60] “Beyond doubt [...] he was one of the most original minds of the later nine-
teenth century, and certainly the greatest American thinker ever.” (Russell and Whitehead's Principia Mathematica,
published from 1910 to 1913, does not mention Peirce; Peirce’s work was not widely known until later.)[61] A. N.
Whitehead, while reading some of Peirce’s unpublished manuscripts soon after arriving at Harvard in 1924, was
struck by how Peirce had anticipated his own “process” thinking. (On Peirce and process metaphysics, see Lowe
1964.[25] ) Karl Popper viewed Peirce as “one of the greatest philosophers of all times”.[62] Yet Peirce’s achievements
were not immediately recognized. His imposing contemporaries William James and Josiah Royce[63] admired him,
and Cassius Jackson Keyser at Columbia and C. K. Ogden wrote about Peirce with respect, but to no immediate
effect.
The first scholar to give Peirce his considered professional attention was Royce’s student Morris Raphael Cohen,
the editor of an anthology of Peirce’s writings titled Chance, Love, and Logic (1923) and the author of the first
bibliography of Peirce’s scattered writings.[64] John Dewey studied under Peirce at Johns Hopkins[31] and, from 1916
onwards, Dewey’s writings repeatedly mention Peirce with deference. His 1938 Logic: The Theory of Inquiry is
much influenced by Peirce.[65] The publication of the first six volumes of the Collected Papers (1931–35), the most
important event to date in Peirce studies and one that Cohen made possible by raising the needed funds,[66] did not
prompt an outpouring of secondary studies. The editors of those volumes, Charles Hartshorne and Paul Weiss, did not
become Peirce specialists. Early landmarks of the secondary literature include the monographs by Buchler (1939),
Feibleman (1946), and Goudge (1950), the 1941 Ph.D. thesis by Arthur W. Burks (who went on to edit volumes 7
and 8), and the studies edited by Wiener and Young (1952). The Charles S. Peirce Society was founded in 1946.
Its Transactions, an academic quarterly specializing in Peirce, pragmatism, and American philosophy, has appeared
since 1965.
In 1949, while doing unrelated archival work, the historian of mathematics Carolyn Eisele (1902–2000) chanced
on an autograph letter by Peirce. So began her 40 years of research on Peirce the mathematician and scientist,
culminating in Eisele (1976, 1979, 1985). Beginning around 1960, the philosopher and historian of ideas Max Fisch
(1900–1995) emerged as an authority on Peirce; Fisch (1986)[67] includes many of his relevant articles, including a
wide-ranging survey (Fisch 1986: 422–48) of the impact of Peirce’s thought through 1983.
Peirce has gained a significant international following, marked by university research centers devoted to Peirce studies
and pragmatism in Brazil (CeneP/CIEP), Finland (HPRC, including Commens), Germany (Wirth’s group, Hoffman’s
and Otte’s group, and Deuser’s and Härle’s group[68] ), France (L'I.R.S.C.E.), Spain (GEP), and Italy (CSP). His
writings have been translated into several languages, including German, French, Finnish, Spanish, and Swedish.
Since 1950, there have been French, Italian, Spanish, British, and Brazilian Peirceans of note. For many years, the
North American philosophy department most devoted to Peirce was the University of Toronto's, thanks in good part
to the leadership of Thomas Goudge and David Savan. In recent years, U.S. Peirce scholars have clustered at Indiana
University - Purdue University Indianapolis, home of the Peirce Edition Project (PEP), and the Pennsylvania State
University.

Currently, considerable interest is being taken in Peirce’s ideas by researchers wholly outside the
arena of academic philosophy. The interest comes from industry, business, technology, intelligence or-
ganizations, and the military; and it has resulted in the existence of a substantial number of agencies,
institutes, businesses, and laboratories in which ongoing research into and development of Peircean con-
cepts are being vigorously undertaken.
—Robert Burch, 2001, updated 2010[19]

In recent years, Peirce’s trichotomy of signs is exploited by a growing number of practitioners for marketing and
design tasks.

8.3 Works
Peirce’s reputation rests largely on a number of academic papers published in American scientific and scholarly
journals such as Proceedings of the American Academy of Arts and Sciences, the Journal of Speculative Philosophy,
The Monist, Popular Science Monthly, the American Journal of Mathematics, Memoirs of the National Academy of
Sciences, The Nation, and others. See Articles by Peirce, published in his lifetime for an extensive list with links
to them online. The only full-length book (neither extract nor pamphlet) that Peirce authored and saw published
in his lifetime[69] was Photometric Researches (1878), a 181-page monograph on the applications of spectrographic
methods to astronomy. While at Johns Hopkins, he edited Studies in Logic (1883), containing chapters by himself
and his graduate students. Besides lectures during his years (1879–1884) as Lecturer in Logic at Johns Hopkins, he
gave at least nine series of lectures, many now published; see Lectures by Peirce.
Harvard University obtained from Peirce’s widow soon after his death the papers found in his study, but did not
microfilm them until 1964. Only after Richard Robin (1967)[70] catalogued this Nachlass did it become clear that
Peirce had left approximately 1650 unpublished manuscripts, totaling over 100,000 pages,[71] mostly still unpublished
except on microfilm. On the vicissitudes of Peirce’s papers, see Houser (1989).[72] Reportedly the papers remain in
unsatisfactory condition.[73]
The first published anthology of Peirce’s articles was the one-volume Chance, Love and Logic: Philosophical Essays,
edited by Morris Raphael Cohen, 1923, still in print. Other one-volume anthologies were published in 1940, 1957,
1958, 1972, 1994, and 2009, most still in print. The main posthumous editions[74] of Peirce’s works in their long
trek to light, often multi-volume, and some still in print, have included:
1931–58: Collected Papers of Charles Sanders Peirce (CP), 8 volumes, includes many published works, along with
a selection of previously unpublished work and a smattering of his correspondence. This long-time standard edition
drawn from Peirce’s work from the 1860s to 1913 remains the most comprehensive survey of his prolific output from
1893 to 1913. It is organized thematically, but texts (including lecture series) are often split up across volumes, while
texts from various stages in Peirce’s development are often combined, requiring frequent visits to editors’ notes.[75]
Edited (1–6) by Charles Hartshorne and Paul Weiss and (7–8) by Arthur Burks, in print and online.

1975–87: Charles Sanders Peirce: Contributions to The Nation, 4 volumes, includes Peirce’s more than 300 reviews
and articles published 1869–1908 in The Nation. Edited by Kenneth Laine Ketner and James Edward Cook, online.
1976: The New Elements of Mathematics by Charles S. Peirce, 4 volumes in 5, included many previously unpublished
Peirce manuscripts on mathematical subjects, along with Peirce’s important published mathematical articles. Edited
by Carolyn Eisele, back in print.
1977: Semiotic and Significs: The Correspondence between C. S. Peirce and Victoria Lady Welby (2nd edition 2001),
included Peirce’s entire correspondence (1903–1912) with Victoria, Lady Welby. Peirce’s other published corre-
spondence is largely limited to the 14 letters included in volume 8 of the Collected Papers, and the 20-odd pre-1890
items included so far in the Writings. Edited by Charles S. Hardwick with James Cook, out of print.
1982–now: Writings of Charles S. Peirce, A Chronological Edition (W), Volumes 1–6 & 8, of a projected 30. The
limited coverage, and defective editing and organization, of the Collected Papers led Max Fisch and others in the
1970s to found the Peirce Edition Project (PEP), whose mission is to prepare a more complete critical chronological
edition. Only seven volumes have appeared to date, but they cover the period from 1859–1892, when Peirce carried
out much of his best-known work. W 8 was published in November 2010; and work continues on W 7, 9, and 11. In
print and online.
1985: Historical Perspectives on Peirce’s Logic of Science: A History of Science, 2 volumes. Auspitz has said,[76] “The
extent of Peirce’s immersion in the science of his day is evident in his reviews in the Nation [...] and in his papers,
grant applications, and publishers’ prospectuses in the history and practice of science”, referring latterly to Historical
Perspectives. Edited by Carolyn Eisele, back in print.
1992: Reasoning and the Logic of Things collects in one place Peirce’s 1898 series of lectures invited by William
James. Edited by Kenneth Laine Ketner, with commentary by Hilary Putnam, in print.
1992–98: The Essential Peirce (EP), 2 volumes, is an important recent sampler of Peirce’s philosophical writings.
Edited (1) by Nathan Houser and Christian Kloesel and (2) by PEP editors, in print.
1997: Pragmatism as a Principle and Method of Right Thinking collects Peirce’s 1903 Harvard “Lectures on Prag-
matism” in a study edition, including drafts, of Peirce’s lecture manuscripts, which had been previously published in
abridged form; the lectures now also appear in EP 2. Edited by Patricia Ann Turisi, in print.
2010: Philosophy of Mathematics: Selected Writings collects important writings by Peirce on the subject, many not
previously in print. Edited by Matthew E. Moore, in print.

8.4 Mathematics
Peirce’s most important work in pure mathematics was in logical and foundational areas. He also worked on linear
algebra, matrices, various geometries, topology and Listing numbers, Bell numbers, graphs, the four-color problem,
and the nature of continuity.
He worked on applied mathematics in economics, engineering, and map projections (such as the Peirce quincuncial
projection), and was especially active in probability and statistics.[77]

Discoveries

Peirce made a number of striking discoveries in formal logic and foundational mathematics, nearly all of which came
to be appreciated only long after he died:
In 1860[78] he suggested a cardinal arithmetic for infinite numbers, years before any work by Georg Cantor (who
completed his dissertation in 1867) and without access to Bernard Bolzano's 1851 (posthumous) Paradoxien des
Unendlichen.

The Peirce arrow, symbol for “(neither)...nor...”, also called the Quine dagger.

In 1880–81[79] he showed how Boolean algebra could be carried out via a single sufficient binary operation (logical
NOR), repeatedly applied, anticipating Henry M. Sheffer by 33 years. (See also De Morgan’s laws.)
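Peirce’s discovery is the statement that NOR by itself is functionally complete. A minimal sketch in modern notation (not Peirce’s own symbolism): the other Boolean connectives can each be built from NOR alone, and the constructions can be checked row by row against the truth tables.

```python
# Functional completeness of NOR (the "Peirce arrow"): NOT, OR, and AND
# expressed using NOR as the only primitive connective.
def nor(p, q):
    return not (p or q)

def not_(p):
    return nor(p, p)                      # not p  ==  p NOR p

def or_(p, q):
    return nor(nor(p, q), nor(p, q))      # p or q  ==  not (p NOR q)

def and_(p, q):
    return nor(nor(p, p), nor(q, q))      # p and q  ==  (not p) NOR (not q)

# Check every row of the truth tables against the built-in connectives.
for p in (False, True):
    for q in (False, True):
        assert not_(p) == (not p)
        assert or_(p, q) == (p or q)
        assert and_(p, q) == (p and q)
```

The same construction works with NAND (the Sheffer stroke), the dual operation Sheffer published in 1913.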
In 1881[80] he set out the axiomatization of natural number arithmetic, a few years before Richard Dedekind and
Giuseppe Peano. In the same paper Peirce gave, years before Dedekind, the first purely cardinal definition of a finite
set in the sense now known as "Dedekind-finite", and implied by the same stroke an important formal definition of
an infinite set (Dedekind-infinite), as a set that can be put into a one-to-one correspondence with one of its proper
subsets.

The Peirce quincuncial projection of a sphere keeps angles true except at several isolated points and results in less distortion of area than in other projections.
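In modern notation (a restatement, not Peirce’s or Dedekind’s own symbolism), the paired finite/infinite definitions just described read:

```latex
% Dedekind-finite and Dedekind-infinite, stated in modern terms
\begin{itemize}
  \item A set $S$ is \emph{Dedekind-infinite} iff there is an injection
        $f\colon S \to S$ whose image $f(S)$ is a proper subset of $S$,
        i.e.\ $S$ is in one-to-one correspondence with a proper subset of itself.
  \item A set $S$ is \emph{Dedekind-finite} iff it is not Dedekind-infinite,
        i.e.\ every injection $f\colon S \to S$ is onto $S$.
\end{itemize}
```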
In 1885[81] he distinguished between first-order and second-order quantification.[82][83] In the same paper he set out
what can be read as the first (primitive) axiomatic set theory, anticipating Zermelo by about two decades (Brady
2000,[84] pp. 132–3).
In 1886 he saw that Boolean calculations could be carried out via electrical switches,[10] anticipating Claude Shannon
by more than 50 years.
By the later 1890s[85] he was devising existential graphs, a diagrammatic notation for the predicate calculus. Based
on them are John F. Sowa's conceptual graphs and Sun-Joo Shin’s diagrammatic reasoning.

The New Elements of Mathematics

Peirce wrote drafts for an introductory textbook, with the working title The New Elements of Mathematics, that
presented mathematics from an original standpoint. Those drafts and many other of his previously unpublished
mathematical manuscripts finally appeared[77] in The New Elements of Mathematics by Charles S. Peirce (1976),
edited by mathematician Carolyn Eisele.
Existential graphs: Alpha graphs

Nature of mathematics

Peirce agreed with Auguste Comte in regarding mathematics as more basic than philosophy and the special sciences
(of nature and mind). Peirce classified mathematics into three subareas: (1) mathematics of logic, (2) discrete series,
and (3) pseudo-continua (as he called them, including the real numbers) and continua. Influenced by his father
Benjamin, Peirce argued that mathematics studies purely hypothetical objects and is not just the science of quantity
but is more broadly the science which draws necessary conclusions; that mathematics aids logic, not vice versa; and
that logic itself is part of philosophy and is the science about drawing conclusions necessary and otherwise.[86]

8.4.1 Mathematics of logic

Beginning with his first paper on the “Logic of Relatives” (1870), Peirce extended the theory of relations that Augustus
De Morgan had just recently awakened from its Cinderella slumbers. Much of the mathematics of relations now taken
for granted was “borrowed” from Peirce, not always with all due credit; on that and on how the young Bertrand Russell,
especially his Principles of Mathematics and Principia Mathematica, did not do Peirce justice, see Anellis (1995).[61]
In 1918 the logician C. I. Lewis wrote, “The contributions of C.S. Peirce to symbolic logic are more numerous and
varied than those of any other writer — at least in the nineteenth century.”[87] Beginning in 1940, Alfred Tarski and
his students rediscovered aspects of Peirce’s larger vision of relational logic, developing the perspective of relation
algebra.
Relational logic gained applications. In mathematics, it influenced the abstract analysis of E. H. Moore and the lattice
theory of Garrett Birkhoff. In computer science, the relational model for databases was developed with Peircean
ideas in work of Edgar F. Codd, who was a doctoral student[88] of Arthur W. Burks, a Peirce scholar. In economics,
relational logic was used by Frank P. Ramsey, John von Neumann, and Paul Samuelson to study preferences and
utility and by Kenneth J. Arrow in Social Choice and Individual Values, following Arrow’s association with Tarski at
City College of New York.
On Peirce and his contemporaries Ernst Schröder and Gottlob Frege, Hilary Putnam (1982)[82] documented that
Frege’s work on the logic of quantifiers had little influence on his contemporaries, although it was published four
years before the work of Peirce and his student Oscar Howard Mitchell. Putnam found that mathematicians and
logicians learned about the logic of quantifiers through the independent work of Peirce and Mitchell, particularly
through Peirce’s “On the Algebra of Logic: A Contribution to the Philosophy of Notation”[81] (1885), published
in the premier American mathematical journal of the day, and cited by Peano and Schröder, among others, who
ignored Frege. They also adopted and modified Peirce’s notations, typographical variants of those now used. Peirce
apparently was ignorant of Frege’s work, despite their overlapping achievements in logic, philosophy of language, and
the foundations of mathematics.
Peirce’s work on formal logic had admirers besides Ernst Schröder:

• Philosophical algebraist William Kingdon Clifford[89] and logician William Ernest Johnson, both British;

• The Polish school of logic and foundational mathematics, including Alfred Tarski;

• Arthur Prior, who praised and studied Peirce’s logical work in a 1964 paper[25] and in Formal Logic (saying on
page 4 that Peirce “perhaps had a keener eye for essentials than any other logician before or since.”).

A philosophy of logic, grounded in his categories and semiotic, can be extracted from Peirce’s writings and, along
with Peirce’s logical work more generally, is exposited and defended in Hilary Putnam (1982);[82] the Introduction in
Nathan Houser et al. (1997);[90] and Randall Dipert’s chapter in Cheryl Misak (2004).[91]

8.4.2 Continua
Continuity and synechism are central in Peirce’s philosophy: “I did not at first suppose that it was, as I gradually came
to find it, the master-Key of philosophy”.[92]
From a mathematical point of view, he embraced infinitesimals and worked long on the mathematics of continua.
He long held that the real numbers constitute a pseudo-continuum;[93] that a true continuum is the real subject matter
of analysis situs (topology); and that a true continuum of instants exceeds—and within any lapse of time has room
for—any Aleph number (any infinite multitude as he called it) of instants.[94]
In 1908 Peirce wrote that he found that a true continuum might have or lack such room. Jérôme Havenel (2008): “It
is on May 26, 1908, that Peirce finally gave up his idea that in every continuum there is room for whatever collection
of any multitude. From now on, there are different kinds of continua, which have different properties.”[95]

8.4.3 Probability and statistics


Peirce held that science achieves statistical probabilities, not certainties, and that spontaneity (absolute chance) is
real (see Tychism on his view). Most of his statistical writings promote the frequency interpretation of probability
(objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability
when such models are not based on objective randomization.[96] Though Peirce was largely a frequentist, his possible
world semantics introduced the “propensity” theory of probability before Karl Popper.[97][98] Peirce (sometimes with
Joseph Jastrow) investigated the probability judgments of experimental subjects, “perhaps the very first” elicitation
and estimation of subjective probabilities in experimental psychology and (what came to be called) Bayesian statis-
tics.[2]
Peirce was one of the founders of statistics. He formulated modern statistics in "Illustrations of the Logic of Science"
(1877–8) and "A Theory of Probable Inference" (1883). With a repeated measures design, Charles Sanders Peirce
and Joseph Jastrow introduced blinded, controlled randomized experiments in 1884[99] (Hacking 1990:205)[1] (before
Ronald A. Fisher).[2] He invented optimal design for experiments on gravity, in which he "corrected the means". He
used correlation and smoothing. Peirce extended the work on outliers by Benjamin Peirce, his father.[2] He introduced
the terms "confidence" and "likelihood" (before Jerzy Neyman and Fisher). (See Stephen Stigler's historical books and
Ian Hacking 1990.[1])

8.5 Philosophy
It is not sufficiently recognized that Peirce’s career was that of a scientist, not a philosopher; and that
during his lifetime he was known and valued chiefly as a scientist, only secondarily as a logician, and
scarcely at all as a philosopher. Even his work in philosophy and logic will not be understood until this
fact becomes a standing premise of Peircean studies.
—Max Fisch 1964, p. 486.[25]

Peirce was a working scientist for 30 years, and arguably was a professional philosopher only during the five years he
lectured at Johns Hopkins. He learned philosophy mainly by reading, each day, a few pages of Kant's Critique of Pure
Reason, in the original German, while a Harvard undergraduate. His writings bear on a wide array of disciplines,
including mathematics, logic, philosophy, statistics, astronomy,[25] metrology,[3] geodesy, experimental psychology,[4]
economics,[5] linguistics,[6] and the history and philosophy of science. This work has enjoyed renewed interest and
approval, a revival inspired not only by his anticipations of recent scientific developments but also by his demonstration
of how philosophy can be applied effectively to human problems.
Peirce’s philosophy includes (see below in related sections) a pervasive three-category system, belief that truth is
immutable and is both independent from actual opinion (fallibilism) and discoverable (no radical skepticism), logic
as formal semiotic on signs, on arguments, and on inquiry’s ways—including philosophical pragmatism (which he
founded), critical common-sensism, and scientific method—and, in metaphysics: Scholastic realism, e.g. John Duns
Scotus, belief in God, freedom, and at least an attenuated immortality, objective idealism, and belief in the reality of
continuity and of absolute chance, mechanical necessity, and creative love. In his work, fallibilism and pragmatism
may seem to work somewhat like skepticism and positivism, respectively, in others’ work. However, for Peirce, falli-
bilism is balanced by an anti-skepticism and is a basis for belief in the reality of absolute chance and of continuity,[100]
and pragmatism commits one to anti-nominalist belief in the reality of the general (CP 5.453–7).
For Peirce, First Philosophy, which he also called cenoscopy, is less basic than mathematics and more basic than the
special sciences (of nature and mind). It studies positive phenomena in general, phenomena available to any person at
any waking moment, and does not settle questions by resorting to special experiences.[101] He divided such philosophy
into (1) phenomenology (which he also called phaneroscopy or categorics), (2) normative sciences (esthetics, ethics,
and logic), and (3) metaphysics; his views on them are discussed in order below.

8.5.1 Theory of categories

Main article: Categories (Peirce)

On May 14, 1867, the 27-year-old Peirce presented a paper entitled "On a New List of Categories" to the American
Academy of Arts and Sciences, which published it the following year. The paper outlined a theory of predication,
involving three universal categories that Peirce developed in response to reading Aristotle, Kant, and Hegel, categories
that Peirce applied throughout his work for the rest of his life.[19] Peirce scholars generally regard the “New List”
as foundational or breaking the ground for Peirce’s “architectonic”, his blueprint for a pragmatic philosophy. In the
categories one will discern, concentrated, the pattern that one finds formed by the three grades of clearness in "How
To Make Our Ideas Clear" (1878 paper foundational to pragmatism), and in numerous other trichotomies in his work.
“On a New List of Categories” is cast as a Kantian deduction; it is short but dense and difficult to summarize. The
following table is compiled from that and later works.[102] In 1893, Peirce restated most of it for a less advanced
audience.[103]
*Note: An interpretant is an interpretation (human or otherwise) in the sense of the product of an interpretive process.

8.5.2 Aesthetics and ethics

Peirce did not write extensively in aesthetics and ethics,[110] but came by 1902 to hold that aesthetics, ethics, and logic,
in that order, comprise the normative sciences.[111] He characterized aesthetics as the study of the good (grasped as
the admirable), and thus of the ends governing all conduct and thought.[112]

8.6 Philosophy: logic, or semiotic

8.6.1 Logic as philosophical

Peirce regarded logic per se as a division of philosophy, as a normative science based on esthetics and ethics, as more
basic than metaphysics,[113] and as “the art of devising methods of research”.[114] More generally, as inference, “logic
is rooted in the social principle”, since inference depends on a standpoint that, in a sense, is unlimited.[115] Peirce
called (with no sense of deprecation) “mathematics of logic” much of the kind of thing which, in current research
and applications, is called simply “logic”. He was productive in both (philosophical) logic and logic’s mathematics,
which were connected deeply in his work and thought.
Peirce argued that logic is formal semiotic, the formal study of signs in the broadest sense, not only signs that are
artificial, linguistic, or symbolic, but also signs that are semblances or are indexical such as reactions. Peirce held that
“all this universe is perfused with signs, if it is not composed exclusively of signs”,[116] along with their representational
and inferential relations. He argued that, since all thought takes time, all thought is in signs[117] and sign processes
(“semiosis”) such as the inquiry process. He divided logic into: (1) speculative grammar, or stechiology, on how signs
can be meaningful and, in relation to that, what kinds of signs there are, how they combine, and how some embody
or incorporate others; (2) logical critic, or logic proper, on the modes of inference; and (3) speculative or universal
rhetoric, or methodeutic,[118] the philosophical theory of inquiry, including pragmatism.

Presuppositions of logic

In his “F.R.L.” [First Rule of Logic] (1899), Peirce states that the first, and “in one sense, the sole”, rule of reason
is that, to learn, one needs to desire to learn and desire it without resting satisfied with that which one is inclined to
think.[113] So, the first rule is, to wonder. Peirce proceeds to a critical theme in research practices and the shaping of
theories:

...there follows one corollary which itself deserves to be inscribed upon every wall of the city of philos-
ophy:

Do not block the way of inquiry.

Peirce adds that method and economy are best in research, but that no outright sin inheres in trying any theory, in the
sense that investigation via its trial adoption can proceed unimpeded and undiscouraged, and that “the one unpardonable
offence” is a philosophical barricade against truth’s advance, an offense to which “metaphysicians in all ages have
shown themselves the most addicted”. Peirce holds in many writings that logic precedes metaphysics (ontological,
religious, and physical).
Peirce goes on to list four common barriers to inquiry: (1) Assertion of absolute certainty; (2) maintaining that
something is absolutely unknowable; (3) maintaining that something is absolutely inexplicable because absolutely
basic or ultimate; (4) holding that perfect exactitude is possible, especially such as to quite preclude unusual and
anomalous phenomena. To refuse absolute theoretical certainty is the heart of fallibilism, which Peirce unfolds into
refusals to set up any of the listed barriers. Peirce elsewhere argues (1897) that logic’s presupposition of fallibilism
leads at length to the view that chance and continuity are very real (tychism and synechism).[100]
The First Rule of Logic pertains to the mind’s presuppositions in undertaking reason and logic: presuppositions, for
instance, that truth and the real do not depend on your or my opinion of them but do depend on representational
relation and consist in the destined end in investigation taken far enough (see below). He describes such ideas as,
collectively, hopes which, in particular cases, one is unable seriously to doubt.[119]

Four incapacities

In three articles in 1868–69,[117][120][121] Peirce rejected mere verbal or hyperbolic doubt and first or ultimate prin-
ciples, and argued that we have (as he numbered them[120] ):

1. No power of Introspection. All knowledge of the internal world comes by hypothetical reasoning from known
external facts.

2. No power of Intuition (cognition without logical determination by previous cognitions). No cognitive stage is
absolutely first in a process. All mental action has the form of inference.

3. No power of thinking without signs. A cognition must be interpreted in a subsequent cognition in order to be
a cognition at all.

4. No conception of the absolutely incognizable.

(The above sense of the term “intuition” is almost Kant’s, said Peirce. It differs from the current looser sense that
encompasses instinctive or anyway half-conscious inference.)
Peirce argued that those incapacities imply the reality of the general and of the continuous, the validity of the modes
of reasoning,[121] and the falsity of philosophical Cartesianism (see below).
Peirce rejected the conception (usually ascribed to Kant) of the unknowable thing-in-itself[120] and later said that to
“dismiss make-believes” is a prerequisite for pragmatism.[122]

Logic as formal semiotic

Peirce sought, through his wide-ranging studies through the decades, formal philosophical ways to articulate thought’s
processes, and also to explain the workings of science. These inextricably entangled questions of a dynamics of inquiry
rooted in nature and nurture led him to develop his semiotic with very broadened conceptions of signs and inference,
and, as its culmination, a theory of inquiry for the task of saying 'how science works’ and devising research methods.
This would be logic by the medieval definition taught for centuries: art of arts, science of sciences, having the way
to the principles of all methods.[114] Influences radiate from points on parallel lines of inquiry in Aristotle's work,
in such loci as: the basic terminology of psychology in On the Soul; the founding description of sign relations in
On Interpretation; and the differentiation of inference into three modes that are commonly translated into English
as abduction, deduction, and induction, in the Prior Analytics, as well as inference by analogy (called paradeigma by
Aristotle), which Peirce regarded as involving the other three modes.
Peirce began writing on semiotic in the 1860s, around the time when he devised his system of three categories. He
called it both semiotic and semeiotic. Both are current in singular and plural. He based it on the conception of a triadic
sign relation, and defined semiosis as “action, or influence, which is, or involves, a cooperation of three subjects, such
as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between
pairs”.[123] As to signs in thought, Peirce emphasized the reverse:

To say, therefore, that thought cannot happen in an instant, but requires a time, is but another way
of saying that every thought must be interpreted in another, or that all thought is in signs.
—Peirce 1868.[117]

Peirce held that all thought is in signs, issuing in and from interpretation, where 'sign' is the word for the broadest
variety of conceivable semblances, diagrams, metaphors, symptoms, signals, designations, symbols, texts, even mental
concepts and ideas, all as determinations of a mind or quasi-mind, that which at least functions like a mind, as in the
work of crystals or bees[124] — the focus is on sign action in general rather than on psychology, linguistics, or social
studies (fields which he also pursued).
Inquiry is a kind of inference process, a manner of thinking and semiosis. Global divisions of ways for phenomena
to stand as signs, and the subsumption of inquiry and thinking within inference as a sign process, enable the study of
inquiry on semiotics’ three levels:

1. Conditions for meaningfulness. Study of significatory elements and combinations, their grammar.

2. Validity, conditions for true representation. Critique of arguments in their various separate modes.

3. Conditions for determining interpretations. Methodology of inquiry in its mutually interacting modes.

Peirce uses examples often from common experience, but defines and discusses such things as assertion and inter-
pretation in terms of philosophical logic. In a formal vein, Peirce said:

On the Definition of Logic. Logic is formal semiotic. A sign is something, A, which brings some-
thing, B, its interpretant sign, determined or created by it, into the same sort of correspondence (or a
lower implied sort) with something, C, its object, as that in which itself stands to C. This definition no
more involves any reference to human thought than does the definition of a line as the place within
which a particle lies during a lapse of time. It is from this definition that I deduce the principles of
logic by mathematical reasoning, and by mathematical reasoning that, I aver, will support criticism of
Weierstrassian severity, and that is perfectly evident. The word “formal” in the definition is also defined.
—Peirce, “Carnegie Application”, The New Elements of Mathematics v. 4, p. 54.

8.6.2 Signs
Main article: Semiotic elements and classes of signs (Peirce)
See also: Representation (arts) § Peirce and representation and Sign (semiotics) § Triadic signs

A list of noted writings by Peirce on signs and sign relations is at Semiotic elements and classes of signs (Peirce)#References
and further reading.

Sign relation

Peirce’s theory of signs is among the most complex semiotic theories because of the generality of its claim:
anything is a sign — not absolutely as itself, but instead in some relation or other. The sign relation is the key. It
defines three roles encompassing (1) the sign, (2) the sign’s subject matter, called its object, and (3) the sign’s meaning
or ramification as formed into a kind of effect called its interpretant (a further sign, for example a translation). It is
an irreducible triadic relation, according to Peirce. The roles are distinct even when the things that fill those roles are
not. The roles are but three; a sign of an object leads to one or more interpretants, and, as signs, they lead to further
interpretants.
Extension × intension = information. Two traditional approaches to sign relation, necessary though insufficient, are
the way of extension (a sign’s objects, also called breadth, denotation, or application) and the way of intension (the
objects’ characteristics, qualities, attributes referenced by the sign, also called depth, comprehension, significance, or
connotation). Peirce adds a third, the way of information, including change of information, to integrate the other
two approaches into a unified whole.[125] For example, because of the equation above, if a term’s total amount of
information stays the same, then the more that the term 'intends’ or signifies about objects, the fewer are the objects
to which the term 'extends’ or applies.
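The inverse relation in the last sentence can be glossed in symbols (notation assumed here, not Peirce’s own):

```latex
% Breadth (extension) times depth (intension) equals information:
B \cdot D = I
% Hence, at constant information I, breadth and depth vary inversely:
% if the depth D increases, the breadth B = I / D decreases.
```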
Determination. A sign depends on its object in such a way as to represent its object — the object enables and, in a
sense, determines the sign. A physically causal sense of this stands out when a sign consists in an indicative reaction.
The interpretant depends likewise on both the sign and the object — an object determines a sign to determine an
interpretant. But this determination is not a succession of dyadic events, like a row of toppling dominoes; sign
determination is triadic. For example, an interpretant does not merely represent something which represented an
object; instead an interpretant represents something as a sign representing the object. The object (be it a quality or
fact or law or even fictional) determines the sign to an interpretant through one’s collateral experience[126] with the
object, in which the object is found or from which it is recalled, as when a sign consists in a chance semblance of an
absent object. Peirce used the word “determine” not in a strictly deterministic sense, but in a sense of “specializes,”
bestimmt,[127] involving variable amount, like an influence.[128] Peirce came to define representation and interpretation
in terms of (triadic) determination.[129] The object determines the sign to determine another sign — the interpretant
— to be related to the object as the sign is related to the object, hence the interpretant, fulfilling its function as sign
of the object, determines a further interpretant sign. The process is logically structured to perpetuate itself, and is
definitive of sign, object, and interpretant in general.[128]

Semiotic elements

Peirce held there are exactly three basic elements in semiosis (sign action):

1. A sign (or representamen)[130] represents, in the broadest possible sense of “represents”. It is something inter-
pretable as saying something about something. It is not necessarily symbolic, linguistic, or artificial—a cloud
might be a sign of rain for instance, or ruins the sign of ancient civilization.[131] As Peirce sometimes put it (he
defined sign at least 76 times[128] ), the sign stands for the object to the interpretant. A sign represents its object
in some respect, which respect is the sign’s ground.[106]

2. An object (or semiotic object) is a subject matter of a sign and an interpretant. It can be anything thinkable,
a quality, an occurrence, a rule, etc., even fictional, such as Prince Hamlet.[132] All of those are special or
partial objects. The object most accurately is the universe of discourse to which the partial or special object
belongs.[132] For instance, a perturbation of Pluto’s orbit is a sign about Pluto but ultimately not only about
Pluto. An object either (i) is immediate to a sign and is the object as represented in the sign or (ii) is a dynamic
object, the object as it really is, on which the immediate object is founded “as on bedrock”.[133]

3. An interpretant (or interpretant sign) is a sign’s meaning or ramification as formed into a kind of idea or effect,
an interpretation, human or otherwise. An interpretant is a sign (a) of the object and (b) of the interpretant’s
“predecessor” (the interpreted sign) as a sign of the same object. An interpretant either (i) is immediate to a
sign and is a kind of quality or possibility such as a word’s usual meaning, or (ii) is a dynamic interpretant,
such as a state of agitation, or (iii) is a final or normal interpretant, a sum of the lessons which a sufficiently
considered sign would have as effects on practice, and with which an actual interpretant may at most coincide.

Some of the understanding needed by the mind depends on familiarity with the object. To know what a given sign
denotes, the mind needs some experience of that sign’s object, experience outside of, and collateral to, that sign or
sign system. In that context Peirce speaks of collateral experience, collateral observation, collateral acquaintance, all
in much the same terms.[126]

Classes of signs

Among Peirce’s many sign typologies, three stand out, interlocked. The first typology depends on the sign itself, the
second on how the sign stands for its denoted object, and the third on how the sign stands for its object to its inter-
pretant. Also, each of the three typologies is a three-way division, a trichotomy, via Peirce’s three phenomenological
categories: (1) quality of feeling, (2) reaction, resistance, and (3) representation, mediation.[134]
I. Qualisign, sinsign, legisign (also called tone, token, type, and also called potisign, actisign, famisign):[135] This ty-
pology classifies every sign according to the sign’s own phenomenological category—the qualisign is a quality, a
possibility, a “First"; the sinsign is a reaction or resistance, a singular object, an actual event or fact, a “Second"; and
the legisign is a habit, a rule, a representational relation, a “Third”.
II. Icon, index, symbol: This typology, the best known one, classifies every sign according to the category of the sign’s
way of denoting its object—the icon (also called semblance or likeness) by a quality of its own, the index by factual
connection to its object, and the symbol by a habit or rule for its interpretant.
III. Rheme, dicisign, argument (also called sumisign, dicisign, suadisign, also seme, pheme, delome,[135] and regarded as
very broadened versions of the traditional term, proposition, argument): This typology classifies every sign according
to the category which the interpretant attributes to the sign’s way of denoting its object—the rheme, for example a
term, is a sign interpreted to represent its object in respect of quality; the dicisign, for example a proposition, is a
sign interpreted to represent its object in respect of fact; and the argument is a sign interpreted to represent its object
in respect of habit or law. This is the culminating typology of the three, where the sign is understood as a structural
element of inference.
Every sign belongs to one class or another within (I) and within (II) and within (III). Thus each of the three ty-
pologies is a three-valued parameter for every sign. The three parameters are not independent of each other; many
co-classifications are absent, for reasons pertaining to the lack of either habit-taking or singular reaction in a quality,
and the lack of habit-taking in a singular reaction. The result is not 27 but instead ten classes of signs fully specified
at this level of analysis.
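
The count can be checked combinatorially. In a minimal sketch (a hypothetical formalization, not Peirce's own notation), give each sign a categorial value 1, 2, or 3 in each of the three typologies and require that the value never increase from (I) to (II) to (III), since a quality admits no singular reaction or habit-taking, and a singular reaction admits no habit-taking:

```python
from itertools import product

# Hypothetical encoding: in each typology a sign takes a categorial value
# 1 (quality), 2 (reaction/singularity), or 3 (habit/law).
# Constraint sketched in the text: the value may never increase from
# typology I to II to III.
sign_classes = [t for t in product((1, 2, 3), repeat=3)
                if t[0] >= t[1] >= t[2]]

print(len(sign_classes))  # 10 of the 27 combinations survive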

8.6.3 Modes of inference


Main article: Inquiry

Borrowing a brace of concepts from Aristotle, Peirce examined three basic modes of inference — abduction, deduction,
and induction — in his “critique of arguments” or “logic proper”. Peirce also called abduction “retroduction”, “pre-
sumption”, and, earliest of all, “hypothesis”. He characterized it as guessing and as inference to an explanatory
hypothesis. He sometimes expounded the modes of inference by transformations of the categorical syllogism Bar-
bara (AAA), for example in “Deduction, Induction, and Hypothesis” (1878).[136] He does this by rearranging the rule
(Barbara’s major premise), the case (Barbara’s minor premise), and the result (Barbara’s conclusion): deduction infers
the result from the rule and the case, induction infers the rule from the case and the result, and hypothesis (abduction)
infers the case from the rule and the result.
Peirce 1883 in “A Theory of Probable Inference” (Studies in Logic) equated hypothetical inference with the induction
of characters of objects (as he had done in effect before[120] ). Eventually dissatisfied, by 1900 he distinguished them
once and for all and also wrote that he now took the syllogistic forms and the doctrine of logical extension and
comprehension as being less basic than he had thought. In 1903 he presented the following logical form for abductive
inference:[137]

The surprising fact, C, is observed;
But if A were true, C would be a matter of course,
Hence, there is reason to suspect that A is true.

The logical form does not also cover induction, since induction neither depends on surprise nor proposes a new idea for
its conclusion. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts. “Deduction
proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that
something may be.”[138] Peirce did not remain quite convinced that one logical form covers all abduction.[139] In his
methodeutic or theory of inquiry (see below), he portrayed abduction as an economic initiative to further inference
and study, and portrayed all three modes as clarified by their coordination in essential roles in inquiry: hypothetical
explanation, deductive prediction, inductive testing.

8.6.4 Pragmatism

Main articles: Pragmaticism, Pragmatic maxim and Pragmatic theory of truth § Peirce

Peirce’s recipe for pragmatic thinking, which he called pragmatism and, later, pragmaticism, is recapitulated in several
versions of the so-called pragmatic maxim. Here is one of his more emphatic reiterations of it:

Consider what effects that might conceivably have practical bearings you conceive the objects of your
conception to have. Then, your conception of those effects is the whole of your conception of the object.

As a movement, pragmatism began in the early 1870s in discussions among Peirce, William James, and others in the
Metaphysical Club. James among others regarded some articles by Peirce such as "The Fixation of Belief" (1877)
and especially "How to Make Our Ideas Clear" (1878) as foundational to pragmatism.[140] Peirce (CP 5.11–12), like
James (Pragmatism: A New Name for Some Old Ways of Thinking, 1907), saw pragmatism as embodying familiar
attitudes, in philosophy and elsewhere, elaborated into a new deliberate method for fruitful thinking about problems.
Peirce differed from James and the early John Dewey, in some of their tangential enthusiasms, in being decidedly more
rationalistic and realistic, in several senses of those terms, throughout the preponderance of his own philosophical
moods.
In 1905 Peirce coined the new name pragmaticism “for the precise purpose of expressing the original definition”,
saying that “all went happily” with James’s and F.C.S. Schiller's variant uses of the old name “pragmatism” and that
he coined the new name because of the old name’s growing use in “literary journals, where it gets abused”. Yet
he cited as causes, in a 1906 manuscript, his differences with James and Schiller and, in a 1908 publication, his
differences with James as well as literary author Giovanni Papini's declaration of pragmatism’s indefinability. Peirce
in any case regarded his views that truth is immutable and infinity is real, as being opposed by the other pragmatists,
but he remained allied with them on other issues.[141]
Pragmatism begins with the idea that belief is that on which one is prepared to act. Peirce’s pragmatism is a method
of clarification of conceptions of objects. It equates any conception of an object to a conception of that object’s
effects to a general extent of the effects’ conceivable implications for informed practice. It is a method of sorting
out conceptual confusions occasioned, for example, by distinctions that make (sometimes needed) formal yet not
practical differences. He formulated both pragmatism and statistical principles as aspects of scientific logic, in his
“Illustrations of the Logic of Science” series of articles. In the second one, "How to Make Our Ideas Clear", Peirce
discussed three grades of clearness of conception:

1. Clearness of a conception familiar and readily used, even if unanalyzed and undeveloped.
2. Clearness of a conception in virtue of clearness of its parts, in virtue of which logicians called an idea “distinct”,
that is, clarified by analysis of just what makes it applicable. Elsewhere, echoing Kant, Peirce called a likewise
distinct definition “nominal” (CP 5.553).
3. Clearness in virtue of clearness of conceivable practical implications of the object’s conceived effects, such as
fosters fruitful reasoning, especially on difficult problems. Here he introduced that which he later called the
pragmatic maxim.

By way of example of how to clarify conceptions, he addressed conceptions about truth and the real as questions of
the presuppositions of reasoning in general. In clearness’s second grade (the “nominal” grade), he defined truth as
a sign’s correspondence to its object, and the real as the object of such correspondence, such that truth and the real
are independent of that which you or I or any actual, definite community of inquirers think. After that needful but
confined step, next in clearness’s third grade (the pragmatic, practice-oriented grade) he defined truth as that opinion
which would be reached, sooner or later but still inevitably, by research taken far enough, such that the real does
depend on that ideal final opinion—a dependence to which he appeals in theoretical arguments elsewhere, for instance
for the long-run validity of the rule of induction.[142] Peirce argued that even to argue against the independence and
discoverability of truth and the real is to presuppose that there is, about that very question under argument, a truth
with just such independence and discoverability.
Peirce said that a conception’s meaning consists in "all general modes of rational conduct" implied by “acceptance”
of the conception—that is, if one were to accept, first of all, the conception as true, then what could one conceive
to be consequent general modes of rational conduct by all who accept the conception as true?—the whole of such
consequent general modes is the whole meaning. His pragmatism does not equate a conception’s meaning, its in-
tellectual purport, with the conceived benefit or cost of the conception itself, like a meme (or, say, propaganda),
outside the perspective of its being true, nor, since a conception is general, is its meaning equated with any definite
set of actual consequences or upshots corroborating or undermining the conception or its worth. His pragmatism
also bears no resemblance to “vulgar” pragmatism, which misleadingly connotes a ruthless and Machiavellian search
for mercenary or political advantage. Instead the pragmatic maxim is the heart of his pragmatism as a method of
experimentational mental reflection[143] arriving at conceptions in terms of conceivable confirmatory and disconfir-
matory circumstances—a method hospitable to the formation of explanatory hypotheses, and conducive to the use
and improvement of verification.[144]
Peirce’s pragmatism, as method and theory of definitions and conceptual clearness, is part of his theory of inquiry,[145]
which he variously called speculative, general, formal or universal rhetoric or simply methodeutic.[118] He applied his
pragmatism as a method throughout his work.

Theory of inquiry

See also: Inquiry

Critical common-sensism Critical common-sensism,[146] treated by Peirce as a consequence of his pragmatism, is
his combination of Thomas Reid’s common-sense philosophy with a fallibilism that recognizes that propositions of
our more or less vague common sense now indubitable may later come into question, for example because of
transformations of our world through science. It includes efforts to work up in tests genuine doubts for a core group
of common indubitables that vary slowly if at all.

Rival methods of inquiry In The Fixation of Belief (1877), Peirce described inquiry in general not as the pursuit
of truth per se but as the struggle to move from irritating, inhibitory doubt born of surprise, disagreement, and the like,
and to reach a secure belief, belief being that on which one is prepared to act. That let Peirce frame scientific inquiry
as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal, quarrelsome,
or hyperbolic doubt, which he held to be fruitless. Peirce sketched four methods of settling opinion, ordered from
least to most successful:

1. The method of tenacity (policy of sticking to initial belief) — which brings comforts and decisiveness but
leads to trying to ignore contrary information and others’ views as if truth were intrinsically private, not public.
The method goes against the social impulse and easily falters since one may well notice when another’s opinion
seems as good as one’s own initial opinion. Its successes can be brilliant but tend to be transitory.
2. The method of authority — which overcomes disagreements but sometimes brutally. Its successes can be
majestic and long-lasting, but it cannot regulate people thoroughly enough to withstand doubts indefinitely,
especially when people learn about other societies present and past.
3. The method of the a priori — which promotes conformity less brutally but fosters opinions as something
like tastes, arising in conversation and comparisons of perspectives in terms of “what is agreeable to reason.”
Thereby it depends on fashion in paradigms and goes in circles over time. It is more intellectual and respectable
but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubt it.
4. The method of science — wherein inquiry supposes that the real is discoverable but independent of particular
opinion, such that, unlike in the other methods, inquiry can, by its own account, go wrong (fallibilism), not
only right, and thus purposely tests itself and criticizes, corrects, and improves itself.

Peirce held that, in practical affairs, slow and stumbling ratiocination is often dangerously inferior to instinct and
traditional sentiment, and that the scientific method is best suited to theoretical research,[147] which in turn should not
be trammeled by the other methods and practical ends; reason’s “first rule”[113] is that, in order to learn, one must desire
to learn and, as a corollary, must not block the way of inquiry. Scientific method excels the others finally by being
deliberately designed to arrive — eventually — at the most secure beliefs, upon which the most successful practices
can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt,
Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief’s integrity, seek as
truth the guidance of potential conduct correctly to its given goal, and wed themselves to the scientific method.

Scientific method Insofar as clarification by pragmatic reflection suits explanatory hypotheses and fosters predic-
tions and testing, pragmatism points beyond the usual duo of foundational alternatives: deduction from self-evident
truths, or rationalism; and induction from experiential phenomena, or empiricism.
Based on his critique of three modes of argument and different from either foundationalism or coherentism, Peirce’s
approach seeks to justify claims by a three-phase dynamic of inquiry:

1. Active, abductive genesis of theory, with no prior assurance of truth;
2. Deductive application of the contingent theory so as to clarify its practical implications;
3. Inductive testing and evaluation of the utility of the provisional theory in anticipation of future experience, in
both senses: prediction and control.

Thereby, Peirce devised an approach to inquiry far more solid than the flatter image of inductive generalization
simpliciter, which is a mere re-labeling of phenomenological patterns. Peirce’s pragmatism was the first time the
scientific method was proposed as an epistemology for philosophical questions.
A theory that succeeds better than its rivals in predicting and controlling our world is said to be nearer the truth. This
is an operational notion of truth used by scientists.
Peirce extracted the pragmatic model or theory of inquiry from its raw materials in classical logic and refined it in
parallel with the early development of symbolic logic to address problems about the nature of scientific reasoning.
Abduction, deduction, and induction make incomplete sense in isolation from one another but comprise a cycle
understandable as a whole insofar as they collaborate toward the common end of inquiry. In the pragmatic way of
thinking about conceivable practical implications, every thing has a purpose, and, as possible, its purpose should first
be denoted. Abduction hypothesizes an explanation for deduction to clarify into implications to be tested so that
induction can evaluate the hypothesis, in the struggle to move from troublesome uncertainty to more secure belief.
No matter how traditional and needful it is to study the modes of inference in abstraction from one another, the
integrity of inquiry strongly limits the effective modularity of its principal components.
Peirce’s outline of the scientific method in §III–IV of “A Neglected Argument”[148] is summarized below (except as
otherwise noted). There he also reviewed plausibility and inductive precision (issues of critique of arguments).
1. Abductive (or retroductive) phase. Guessing, inference to explanatory hypotheses for selection of those best worth
trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the
hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in
one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content
of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way
for a surprising or complicated phenomenon. The modicum of success in our guesses far exceeds that of random
luck, and seems born of attunement to nature by developed or inherent instincts, especially insofar as best guesses
are optimally plausible and simple in the sense of the “facile and natural”, as by Galileo's natural light of reason and
as distinct from “logical simplicity”.[149] Abduction is the most fertile but least secure mode of inference. Its general
rationale is inductive: it succeeds often enough and it has no substitute in expediting us toward new truths.[150] In
1903, Peirce called pragmatism “the logic of abduction”.[151] Coordinative method leads from abducting a plausible
hypothesis to judging it for its testability[152] and for how its trial would economize inquiry itself.[153] The hypothesis,
being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves
to scientific tests. A simple but unlikely guess, if not costly to test for falsity, may belong first in line for testing. A
guess is intrinsically worth testing if it has plausibility or reasonably objective probability, while subjective likelihood,
though reasoned, can be misleadingly seductive. Guesses can be selected for trial strategically, for their caution (for
which Peirce gave as example the game of Twenty Questions), breadth, or incomplexity.[154] One can discover only
that which would be revealed through their sufficient experience anyway, and so the point is to expedite it; economy
of research demands the leap, so to speak, of abduction and governs its art.[153]
2. Deductive phase. Two stages:

i. Explication. Not clearly premised, but a deductive analysis of the hypothesis so as to render its parts
as clear as possible.
ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of conse-
quences of the hypothesis as predictions about evidence to be found. Corollarial or, if needed, Theore-
matic.

3. Inductive phase. Evaluation of the hypothesis, inferring from observational or experimental tests of its deduced
consequences. The long-run validity of the rule of induction is deducible from the principle (presuppositional to
reasoning in general) that the real “is only the object of the final opinion to which sufficient investigation would
lead";[142] in other words, anything excluding such a process would never be real. Induction involving the ongoing
accumulation of evidence follows “a method which, sufficiently persisted in,” will “diminish the error below any
predesignate degree.” Three stages:

i. Classification. Not clearly premised, but an inductive classing of objects of experience under general
ideas.
ii. Probation: direct Inductive Argumentation. Crude or Gradual in procedure. Crude Induction,
founded on experience in one mass (CP 2.759), presumes that future experience on a question will
not differ utterly from all past experience (CP 2.756). Gradual Induction makes a new estimate of the
proportion of truth in the hypothesis after each test, and is Qualitative or Quantitative. Qualitative Grad-
ual Induction depends on estimating the relative evident weights of the various qualities of the subject
class under investigation (CP 2.759; see also CP 7.114–20). Quantitative Gradual Induction depends on
how often, in a fair sample of instances of S, S is found actually accompanied by P that was predicted
for S (CP 2.758). It depends on measurements, or statistics, or counting.
iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly,
then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final
judgment on the whole result”.
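
The running re-estimation in Gradual Induction can be sketched as follows (an illustrative sketch, not Peirce's own procedure; the function name and test data are invented for the example):

```python
# Illustrative sketch of Quantitative Gradual Induction: after each test of an
# instance of S, re-estimate the proportion of instances in which the
# predicted character P was actually found.
def gradual_induction(test_results):
    """Yield the running estimated proportion of truth in the hypothesis
    after each test; True means P was observed as predicted."""
    hits = 0
    for n, observed in enumerate(test_results, start=1):
        hits += observed
        yield hits / n

estimates = list(gradual_induction([True, True, False, True]))
print(estimates[-1])  # 0.75 after four tests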

Against Cartesianism Peirce drew on the methodological implications of the four incapacities — no genuine
introspection, no intuition in the sense of non-inferential cognition, no thought but in signs, and no conception of the
absolutely incognizable — to attack philosophical Cartesianism, of which he said that:[120]
1. “It teaches that philosophy must begin in universal doubt” — when, instead, we start with preconceptions, “preju-
dices [...] which it does not occur to us can be questioned”, though we may find reason to question them later. “Let
us not pretend to doubt in philosophy what we do not doubt in our hearts.”
2. “It teaches that the ultimate test of certainty is...in the individual consciousness” — when, instead, in science
a theory stays on probation till agreement is reached; after that, it has no actual doubters left. No lone individual can
reasonably hope to fulfill philosophy’s multi-generational dream. When “candid and disciplined minds” continue to
disagree on a theoretical issue, even the theory’s author should feel doubts about it.
3. It trusts to “a single thread of inference depending often upon inconspicuous premisses” — when, instead, philos-
ophy should, “like the successful sciences”, proceed only from tangible, scrutinizable premisses and trust not to any
one argument but instead to “the multitude and variety of its arguments” as forming, not a chain at least as weak as
its weakest link, but “a cable whose fibers”, soever “slender, are sufficiently numerous and intimately connected”.
4. It renders many facts “absolutely inexplicable, unless to say that 'God makes them so' is to be regarded as an
explanation”[155] — when, instead, philosophy should avoid being “unidealistic”,[156] misbelieving that something real
can defy or evade all possible ideas, and supposing, inevitably, “some absolutely inexplicable, unanalyzable ultimate”,
which explanatory surmise explains nothing and so is inadmissible.

8.7 Philosophy: metaphysics


Peirce divided metaphysics into (1) ontology or general metaphysics, (2) psychical or religious metaphysics, and (3)
physical metaphysics.
Ontology. Peirce was a Scholastic Realist, declaring for the reality of generals as early as 1868.[157] Regarding
modalities (possibility, necessity, etc.), he came in later years to regard himself as having wavered earlier as to just
how positively real the modalities are. In his 1897 “The Logic of Relatives” he wrote:

I formerly defined the possible as that which in a given state of information (real or feigned) we do
not know not to be true. But this definition today seems to me only a twisted phrase which, by means of
two negatives, conceals an anacoluthon. We know in advance of experience that certain things are not
true, because we see they are impossible.

Peirce retained, as useful for some purposes, the definitions in terms of information states, but insisted that the
pragmaticist is committed to a strong modal realism by conceiving of objects in terms of predictive general conditional
propositions about how they would behave under certain circumstances.[158]
Psychical or religious metaphysics. Peirce believed in God, and characterized such belief as founded in an instinct
explorable in musing over the worlds of ideas, brute facts, and evolving habits — and it is a belief in God not as
an actual or existent being (in Peirce’s sense of those words), but all the same as a real being.[159] In "A Neglected
Argument for the Reality of God" (1908),[148] Peirce sketches, for God’s reality, an argument to a hypothesis of God as
the Necessary Being, a hypothesis which he describes in terms of how it would tend to develop and become compelling
in musement and inquiry by a normal person who is led, by the hypothesis, to consider as being purposed the features
of the worlds of ideas, brute facts, and evolving habits (for example scientific progress), such that the thought of such
purposefulness will “stand or fall with the hypothesis"; meanwhile, according to Peirce, the hypothesis, in supposing
an “infinitely incomprehensible” being, starts off at odds with its own nature as a purportively true conception, and
so, no matter how much the hypothesis grows, it both (A) inevitably regards itself as partly true, partly vague, and
as continuing to define itself without limit, and (B) inevitably has God appearing likewise vague but growing, though
God as the Necessary Being is not vague or growing; but the hypothesis will hold it to be more false to say the
opposite, that God is purposeless. Peirce also argued that the will is free[160] and (see Synechism) that there is at least
an attenuated kind of immortality.
Physical metaphysics. Peirce held the view, which he called objective idealism, that “matter is effete mind, in-
veterate habits becoming physical laws”.[161] Peirce asserted the reality of (1) absolute chance (his tychist view), (2)
mechanical necessity (anancist view), and (3) that which he called the law of love (agapist view), echoing his categories
Firstness, Secondness, and Thirdness, respectively. He held that fortuitous variation (which he also called “sporting”),
mechanical necessity, and creative love are the three modes of evolution (modes called “tychasm”, “anancasm”, and
“agapasm”)[162] of the cosmos and its parts. He found his conception of agapasm embodied in Lamarckian evolution;
the overall idea in any case is that of evolution tending toward an end or goal, and it could also be the evolution of a
mind or a society; it is the kind of evolution which manifests workings of mind in some general sense. He said that
overall he was a synechist, holding with reality of continuity,[163] especially of space, time, and law.[164]

8.8 Science of review


Main article: Classification of the sciences (Peirce)

Peirce outlined two fields, “Cenoscopy” and “Science of Review”, both of which he called philosophy. Both included
philosophy about science. In 1903 he arranged them, from more to less theoretically basic, thus:[101]
1. Science of Discovery.

(a) Mathematics.
(b) Cenoscopy (philosophy as discussed earlier in this article—categorial, normative, metaphysical), as First
Philosophy, concerns positive phenomena in general, does not rely on findings from special sciences, and
includes the general study of inquiry and scientific method.
(c) Idioscopy, or the Special Sciences (of nature and mind).

2. Science of Review, as Ultimate Philosophy, arranges "...the results of discovery, beginning with digests, and
going on to endeavor to form a philosophy of science”. His examples included Humboldt's Cosmos, Comte's
Philosophie positive, and Spencer's Synthetic Philosophy.

3. Practical Science, or the Arts.

Peirce placed, within Science of Review, the work and theory of classifying the sciences (including mathematics and
philosophy). His classifications, on which he worked for many years, draw on argument and wide knowledge, and are
of interest both as a map for navigating his philosophy and as an accomplished polymath’s survey of research in his
time.

8.9 See also


Contemporaries associated with Peirce

8.10 Notes
[1] Hacking, Ian (1990), “A Universe of Chance”, The Taming of Chance, pp. 200–215, Cambridge U. Pr.

[2] Stigler, Stephen M. (1978). “Mathematical statistics in the early States”. Annals of Statistics 6: 239–265 [248]. doi:10.1214/aos/1176344123.
JSTOR 2958876. MR 483118.

[3] Crease, Robert P (2009). “Charles Sanders Peirce and the first absolute measurement standard: In his brilliant but troubled
life, Peirce was a pioneer in both metrology and philosophy”. Physics Today 62 (12): 39–44. doi:10.1063/1.3273015.

[4] Cadwallader, Thomas C. (1974). “Charles S. Peirce (1839–1914): The first American experimental psychologist”. Journal
of the History of the Behavioral Sciences 10 (3): 291. doi:10.1002/1520-6696(197407)10:3<291::AID-JHBS2300100304>3.0.CO;2-N.

[5] Wible, James R. (2008), "The Economic Mind of Charles Sanders Peirce", Contemporary Pragmatism, v. 5, n. 2, December,
pp. 39–67

[6] Nöth, Winfried (2000), "Charles Sanders Peirce, Pathfinder in Linguistics", Digital Encyclopedia of Charles S. Peirce.

[7] Joseph Brent (1998). Charles Sanders Peirce: A Life (2 ed.). Indiana University Press. p. 18. ISBN 9780253211613.
Peirce had strong, though unorthodox, religious convictions. Although he was a communicant in the Episcopal church for
most of his life, he expressed contempt for the theologies, metaphysics, and practices of established religions.

[8] Brent, Joseph (1998), Charles Sanders Peirce: A Life, 2nd edition, Bloomington and Indianapolis: Indiana University Press
(catalog page); also NetLibrary.

[9] “Peirce”, in the case of C.S. Peirce, always rhymes with the English-language word “terse” and so, in most dialects, is
pronounced exactly like the English-language word “purse”. See "Note on the Pronunciation of 'Peirce'", Peirce Project
Newsletter, v. 1, nos. 3/4, Dec. 1994.

[10] Peirce, C. S., “Letter, Peirce to A. Marquand", dated 1886, W 5:541–3, Google Preview. See Burks, Arthur W., “Review:
Charles S. Peirce, The new elements of mathematics", Bulletin of the American Mathematical Society v. 84, n. 5 (1978),
pp. 913–18, see 917. PDF Eprint. Also p. xliv in Houser, Nathan, Introduction, W 5.

[11] Weiss, Paul (1934), “Peirce, Charles Sanders” in the Dictionary of American Biography. Arisbe Eprint.

[12] “Peirce, Benjamin”, subheading “Charles Sanders”, in Webster’s Biographical Dictionary (1943/1960), Springfield, MA:
Merriam-Webster.
[13] Fisch, Max, "Introduction", W 1:xvii, find phrase “One episode”.

[14] “Peirce, Charles Sanders” (1898), The National Cyclopedia of American Biography, v. 8, p. 409.

[15] B:54–6

[16] B:363–4

B:19–20, 53, 75, 245

[18] B:40

[19] Burch, Robert (2001, 2010), "Charles Sanders Peirce", Stanford Encyclopedia of Philosophy.

[20] B:139

B:61–2

[22] B:69

[23] B:368

B:79–81

[25] Moore, Edward C., and Robin, Richard S., eds., (1964), Studies in the Philosophy of Charles Sanders Peirce, Second Series,
Amherst: U. of Massachusetts Press. On Peirce the astronomer, see Lenzen’s chapter.

[26] B:367

[27] Fisch, Max (1983), “Peirce as Scientist, Mathematician, Historian, Logician, and Philosopher”, Studies in Logic (new
edition), see p. x.

[28] See "Peirce Edition Project (UQÀM) - in short" from PEP-UQÀM.

[29] Houser, Nathan, "Introduction", W 5:xxviii-xxix, find “Allison”.

[30] B:202

[31] Houser, Nathan (1989), "Introduction", W 4:xxxviii, find “Eighty-nine”.

[32] B:150–4, 195, 279–80, 289

[33] B:xv

[34] B:98–101

[35] B:141

[36] B:148

[37] Houser, Nathan, "Introduction", W 6, first paragraph.

[38] B:123, 368

[39] B:150–1, 368

[40] In 1885 (B:369); in 1890 and 1900 (B:215, 273); in 1891 (B:215–16); and in 1892 (B:151–2, 222).

[41] B:77

B:191–2, 217, 270, 318, 321, 337.

[43] B:13

[44] B:369–74

[45] B:191

[46] B:246

[47] B:242

[48] B:271
[49] B:249–55

[50] B:371

[51] B:189

[52] B:370

[53] B:205–6

[54] B:374–6

[55] B:279–89

[56] B:261–4, 290–2, 324

[57] B:306–7 & 315–6

[58] Brent, Joseph (1998). Charles Sanders Peirce, a life. Bloomington, Indiana: Indiana University Press. p. 34. ISBN
0-253-21161-1.

[59] Menand, Louis (2001). The Metaphysical Club. London: Flamingo. pp. 161–162. ISBN 0-00-712690-5.

[60] Russell, Bertrand (1959), Wisdom of the West, p. 276.

[61] Anellis, Irving H. (1995), “Peirce Rustled, Russell Pierced: How Charles Peirce and Bertrand Russell Viewed Each Other’s
Work in Logic, and an Assessment of Russell’s Accuracy and Role in the Historiography of Logic”, Modern Logic 5, 270–
328. Arisbe Eprint.

[62] Popper, Karl (1972), Objective Knowledge: An Evolutionary Approach, p. 212.

[63] See Royce, Josiah, and Kernan, W. Fergus (1916), “Charles Sanders Peirce”, The Journal of Philosophy, Psychology, and
Scientific Method v. 13, pp. 701–9. Arisbe Eprint.

[64] Ketner et al. (1986), Comprehensive Bibliography, see p. iii.

[65] Hookway, Christopher (2008), "Pragmatism", Stanford Encyclopedia of Philosophy.

[66] B:8

[67] Fisch, Max (1986), Peirce, Semeiotic, and Pragmatism, Kenneth Laine Ketner and Christian J. W. Kloesel, eds., Bloom-
ington, Indiana: Indiana U. Pr.

[68] Theological Research Group in C.S. Peirce’s Philosophy (Hermann Deuser, Justus-Liebig-Universität Gießen; Wilfred
Härle, Philipps-Universität Marburg, Germany).

[69] Burks, Arthur, Introduction, CP 7, p. xi.

[70] Robin, Richard S. (1967), Annotated Catalogue of the Papers of Charles S. Peirce. Amherst MA: University of Mas-
sachusetts Press.

[71] “The manuscript material now (1997) comes to more than a hundred thousand pages. These contain many pages of no
philosophical interest, but the number of pages on philosophy certainly number much more than half of that. Also, a
significant but unknown number of manuscripts have been lost.” — Joseph Ransdell (1997), “Some Leading Ideas of
Peirce’s Semiotic”, end note 2, 1997 light revision of 1977 version in Semiotica 19:157–78.

[72] Houser, Nathan, “The Fortunes and Misfortunes of the Peirce Papers”, Fourth Congress of the IASS, Perpignan, France,
1989. Signs of Humanity, v. 3, 1992, pp. 1259–68. Eprint

[73] Memorandum to the President of Charles S. Peirce Society by Ahti-Veikko Pietarinen, U. of Helsinki, March 29, 2012.
Eprint.

[74] See for example "Collections of Peirce’s Writings" at Commens, U. of Helsinki.

[75] See 1987 review by B. Kuklick (of Peirce by Christopher Hookway), in British Journal for the Philosophy of Science, v. 38,
n. 1, pp. 117–19. First page.

[76] Auspitz, Josiah Lee (1994), “The Wasp Leaves the Bottle: Charles Sanders Peirce”, The American Scholar, v. 63, n. 4,
autumn, 602–18. Arisbe Eprint.

[77] Burks, Arthur W., “Review: Charles S. Peirce, The new elements of mathematics", Bulletin of the American Mathematical
Society v. 84, n. 5 (1978), pp. 913–18 (PDF).

[78] Peirce (1860 MS), “Orders of Infinity”, News from the Peirce Edition Project, September 2010 (PDF), p. 6, with the
manuscript’s text. Also see logic historian Irving Anellis’s November 11, 2010 comment at peirce-l.

[79] Peirce (MS, winter of 1880–81), “A Boolean Algebra with One Constant”, CP 4.12–20, W 4:218-21. Google Preview.
See Roberts, Don D. (1973), The Existential Graphs of Charles S. Peirce, p. 131.

[80] Peirce (1881), “On the Logic of Number”, American Journal of Mathematics v. 4, pp. 85−95. Reprinted (CP 3.252–88),
(W 4:299–309). See Shields, Paul (1997), “Peirce’s Axiomatization of Arithmetic”, in Houser et al., eds., Studies in
the Logic of Charles S. Peirce.

[81] Peirce (1885), “On the Algebra of Logic: A Contribution to the Philosophy of Notation”, American Journal of Mathematics
7, two parts, first part published 1885, pp. 180–202 (see Houser in linked paragraph in “Introduction” in W 4). Presented,
National Academy of Sciences, Newport, RI, 14–17 October 1884 (see EP 1, Headnote 16). 1885 is the year usually given
for this work. Reprinted CP 3.359–403, W 5:162–90, EP 1:225–8, in part.

[82] Putnam, Hilary (1982), “Peirce the Logician”, Historia Mathematica 9, 290–301. Reprinted, pp. 252–60 in Putnam
(1990), Realism with a Human Face, Harvard. Excerpt with article’s last five pages.

[83] It was in Peirce’s 1885 “On the Algebra of Logic”. See Byrnes, John (1998), “Peirce’s First-Order Logic of 1885”, Trans-
actions of the Charles S. Peirce Society v. 34, n. 4, pp. 949-76.

[84] Brady, Geraldine (2000), From Peirce to Skolem: A Neglected Chapter in the History of Logic, North-Holland/Elsevier
Science BV, Amsterdam, Netherlands.

[85] See Peirce (1898), Lecture 3, “The Logic of Relatives” (not the 1897 Monist article), Reasoning and the Logic of Things,
pp. 146–64, see 151.

[86] Peirce (1898), “The Logic of Mathematics in Relation to Education” in Educational Review v. 15, pp. 209–16 (via Internet
Archive). Reprinted CP 3.553–62. See also his “The Simplest Mathematics” (1902 MS), CP 4.227–323.

[87] Lewis, Clarence Irving (1918), A Survey of Symbolic Logic, see ch. 1, §7 “Peirce”, pp. 79–106, see p. 79 (Internet Archive).
Note that Lewis’s bibliography lists works by Frege, tagged with asterisks as important.

[88] Avery, John (2003) Information theory and evolution, p. 167; also Mitchell, Melanie, "My Scientific Ancestry".

[89] Beil, Ralph G. and Ketner, Kenneth (2003), “Peirce, Clifford, and Quantum Theory”, International Journal of Theoretical
Physics v. 42, n. 9, pp. 1957-1972.

[90] Houser, Roberts, and Van Evra, eds. (1997), Studies in the Logic of Charles Sanders Peirce, Indiana U., Bloomington, IN.

[91] Misak, ed. (2004), The Cambridge Companion to Peirce, Cambridge U., UK.

[92] Peirce (1893-1894, MS 949, p. 1)

[93] Peirce (1903 MS), CP 6.176: “But I now define a pseudo-continuum as that which modern writers on the theory of functions
call a continuum. But this is fully represented by [...] the totality of real values, rational and irrational [...].”

[94] Peirce (1902 MS) and Ransdell, Joseph, ed. (1998), “Analysis of the Methods of Mathematical Demonstration”, Memoir
4, Draft C, MS L75.90–102, see 99–100. (Once there, scroll down).

[95] See:

• Peirce (1908), “Some Amazing Mazes (Conclusion), Explanation of Curiosity the First”, The Monist, v. 18, n. 3,
pp. 416-64, see 463−4. Reprinted CP 4.594-642, see 642.
• Havenel, Jérôme (2008), “Peirce’s Clarifications on Continuity”, Transactions Winter 2008 pp. 68–133, see 119.
Abstract.

[96] Peirce condemned the use of “certain likelihoods” (EP 2:108–9) even more strongly than he criticized Bayesian methods.
Indeed Peirce used a bit of Bayesian inference in criticizing parapsychology (W 6:76).

[97] Miller, Richard W. (1975), “Propensity: Popper or Peirce?", British Journal for the Philosophy of Science (site), v. 26, n.
2, pp. 123–32. doi:10.1093/bjps/26.2.123. Eprint.

[98] Haack, Susan and Kolenda, Konstantin (1977), “Two Fallibilists in Search of the Truth”, Proceedings of the Aristotelian
Society, Supplementary Volumes, v. 51, pp. 63–104. JSTOR 4106816

[99] Peirce, C. S., and Jastrow, J. (1885), “On Small Differences in Sensation”, Memoirs of the National Academy of Sciences, v. 3, pp. 73–83.

[100] Peirce (1897) “Fallibilism, Continuity, and Evolution”, CP 1.141–75 (Eprint), placed by the CP editors directly after
“F.R.L.” (1899, CP 1.135–40).

[101] Peirce (1903), CP 1.180-202 Eprint and (1906) “The Basis of Pragmaticism”, EP 2:372–3, see "Philosophy" at CDPT.

[102] See in “Firstness”, “Secondness”, and “Thirdness” in CDPT.

[103] Peirce (1893), “The Categories” MS 403. Arisbe Eprint, edited by Joseph Ransdell, with information on the re-write, and
interleaved with the 1867 “New List” for comparison.

[104] “Minute Logic”, CP 2.87, c.1902 and A Letter to Lady Welby, CP 8.329, 1904. See relevant quotes under "Categories,
Cenopythagorean Categories" in Commens Dictionary of Peirce’s Terms (CDPT), Bergman & Paalova, eds., U. of Helsinki.

[105] See quotes under "Firstness, First [as a category]" in CDPT.

[106] The ground blackness is the pure abstraction of the quality black. Something black is something embodying blackness,
pointing us back to the abstraction. The quality black amounts to reference to its own pure abstraction, the ground black-
ness. The question is not merely of noun (the ground) versus adjective (the quality), but rather of whether we are considering
the black(ness) as abstracted away from application to an object, or instead as so applied (for instance to a stove). Yet note
that Peirce’s distinction here is not that between a property-general and a property-individual (a trope). See "On a New
List of Categories" (1867), in the section appearing in CP 1.551. Regarding the ground, cf. the Scholastic conception of
a relation’s foundation, Google limited preview Deely 1982, p. 61

[107] A quale in this sense is a such, just as a quality is a suchness. Cf. under “Use of Letters” in §3 of Peirce’s “Description of
a Notation for the Logic of Relatives”, Memoirs of the American Academy, v. 9, pp. 317–78 (1870), separately reprinted
(1870), from which see p. 6 via Google books, also reprinted in CP 3.63:

Now logical terms are of three grand classes. The first embraces those whose logical form involves only
the conception of quality, and which therefore represent a thing simply as “a —.” These discriminate objects
in the most rudimentary way, which does not involve any consciousness of discrimination. They regard an
object as it is in itself as such (quale); for example, as horse, tree, or man. These are absolute terms. (Peirce,
1870. But also see “Quale-Consciousness”, 1898, in CP 6.222–37.)

[108] See quotes under "Secondness, Second [as a category]" in CDPT.

[109] See quotes under "Thirdness, Third [as a category]" in CDPT.

[110] "Charles S. Peirce on Esthetics and Ethics: A Bibliography" (PDF) by Kelly A. Parker in 1999.

[111] Peirce (1902 MS), Carnegie Application, edited by Joseph Ransdell, Memoir 2, see table.

[112] See Esthetics at CDPT.

[113] Peirce (1899 MS), “F.R.L.” [First Rule of Logic], CP 1.135–40, Eprint

[114] Peirce (1882), “Introductory Lecture on the Study of Logic” delivered September 1882, Johns Hopkins University Circulars,
v. 2, n. 19, pp. 11–12 (via Google), November 1882. Reprinted (EP 1:210–14; W 4:378–82; CP 7.59–76). The definition
of logic quoted by Peirce is by Peter of Spain.

[115] Peirce (1878), “The Doctrine of Chances”, Popular Science Monthly, v. 12, pp. 604–15 (CP 2.645–68, W 3:276–90, EP
1:142–54).

...death makes the number of our risks, the number of our inferences, finite, and so makes their mean result
uncertain. The very idea of probability and of reasoning rests on the assumption that this number is indefinitely
great. .... ...logicality inexorably requires that our interests shall not be limited. .... Logic is rooted in the social
principle.

[116] Peirce, CP 5.448 footnote, from “The Basis of Pragmaticism” in 1906.

[117] Peirce, (1868), “Questions concerning certain Faculties claimed for Man”, Journal of Speculative Philosophy v. 2, n. 2,
pp. 103−14. On thought in signs, see p. 112. Reprinted CP 5.213-63 (on thought in signs, see 253), W 2:193-211, EP
2:11-27. Arisbe Eprint.

[118] See rhetoric definitions at CDPT.

[119] Peirce (1902), The Carnegie Institute Application, Memoir 10, MS L75.361-2, Arisbe Eprint.

[120] Peirce (1868), “Some Consequences of Four Incapacities”, Journal of Speculative Philosophy v. 2, n. 3, pp. 140−57.
Reprinted CP 5.264-317, W 2:211-42, EP 1:28-55. Arisbe Eprint.

[121] Peirce, “Grounds of Validity of the Laws of Logic: Further Consequences of Four Incapacities”, Journal of Speculative
Philosophy v. II, n. 4, pp. 193−208. Reprinted CP 5.318-357, W 2:242-272 (PEP Eprint), EP 1:56-82.

[122] Peirce (1905), “What Pragmatism Is”, The Monist, v. XV, n. 2, pp. 161-81, see 167. Reprinted CP 5.411-37, see 416.
Arisbe Eprint.

[123] Peirce 1907, CP 5.484. Reprinted, EP 2:411 in “Pragmatism” (398–433).

[124] See "Quasi-mind" in CDPT.

[125] Peirce (1867), “Upon Logical Comprehension and Extension” (CP 2.391–426), (W 2:70–86).

[126] See pp. 404–9 in “Pragmatism” in EP 2. Ten quotes on collateral experience from Peirce provided by Joseph Ransdell can
be viewed here at peirce-l’s Lyris archive. Note: Ransdell’s quotes from CP 8.178–9 are also in EP 2:493–4, which gives
their date as 1909; and his quote from CP 8.183 is also in EP 2:495–6, which gives its date as 1909.

[127] Peirce, letter to William James, dated 1909, see EP 2:492.

[128] See "76 definitions of the sign by C.S.Peirce", collected by Robert Marty (U. of Perpignan, France).

[129] Peirce, A Letter to Lady Welby (1908), Semiotic and Significs, pp. 80–1:

I define a Sign as anything which is so determined by something else, called its Object, and so determines
an effect upon a person, which effect I call its Interpretant, that the latter is thereby mediately determined by
the former. My insertion of “upon a person” is a sop to Cerberus, because I despair of making my own broader
conception understood.

[130] “Representamen”, properly with the 'a' long and stressed (/rɛprəzɛnˈteɪmən/ rep-rə-zen-TAY-mən), was adopted (not coined)
by Peirce as his technical term for the sign as covered in his theory, in case a divergence should come to light between his
theoretical version and the popular senses of the word “sign”. He eventually stopped using “representamen”. See EP
2:272–3 and Semiotic and Significs p. 193, quotes in "Representamen" at CDPT.

[131] Eco, Umberto (1984). Semiotics and the Philosophy of Language. Bloomington & Indianapolis: Indiana University Press.
p. 15. ISBN 978-0-253-20398-4.

[132] Peirce (1909), A Letter to William James, EP 2:492-502. Fictional object, 498. Object as universe of discourse, 492. See
"Dynamical Object" at CDPT.

[133] See “Immediate Object”, etc., at CDPT.

[134] Peirce (1903 MS), “Nomenclature and Divisions of Triadic Relations, as Far as They Are Determined”, under other titles
in Collected Papers (CP) v. 2, paragraphs 233–72, and reprinted under the original title in Essential Peirce (EP) v. 2, pp.
289–99. Also see image of MS 339 (August 7, 1904) supplied to peirce-l by Bernard Morand of the Institut Universitaire
de Technologie (France), Département Informatique.

[135] On the varying terminology, look up in CDPT.

[136] Popular Science Monthly, v. 13, pp. 470–82, see 472 or the book at Wikisource. CP 2.619–44, see 623.

[137] See, under "Abduction" at CDPT, the following quotes:

• On correction of “A Theory of Probable Inference”, see quotes from “Minute Logic”, CP 2.102, c. 1902, and from
the Carnegie Application (L75), 1902, Historical Perspectives on Peirce’s Logic of Science v. 2, pp. 1031–1032.
• On new logical form for abduction, see quote from Harvard Lectures on Pragmatism, 1903, CP 5.188–189.

See also Santaella, Lucia (1997) “The Development of Peirce’s Three Types of Reasoning: Abduction, Deduction, and
Induction”, 6th Congress of the IASS. Eprint.

[138] “Lectures on Pragmatism”, 1903, CP 5.171.

[139] A Letter to J. H. Kehler (dated 1911), The New Elements of Mathematics v. 3, pp. 203–4, see in "Retroduction" at CDPT.

[140] James, William (1897), The Will to Believe, see p. 124.

[141] See Pragmaticism#Pragmaticism’s name for discussion and references.

[142] “That the rule of induction will hold good in the long run may be deduced from the principle that reality is only the object
of the final opinion to which sufficient investigation would lead”, in Peirce (1878 April), “The Probability of Induction”, p.
718 (via Internet Archive ) in Popular Science Monthly, v. 12, pp. 705–18. Reprinted in CP 2.669–93, W 3:290–305, EP
1:155–69, elsewhere.

[143] Peirce (1902), CP 5.13 note 1.



[144] See CP 1.34 Eprint (in “The Spirit of Scholasticism”), where Peirce ascribed the success of modern science less to a novel
interest in verification than to the improvement of verification.

[145] See Joseph Ransdell's comments and his tabular list of titles of Peirce’s proposed list of memoirs in 1902 for his Carnegie
application, Eprint

[146] Peirce (1905), “Issues of Pragmaticism”, The Monist, v. XV, n. 4, pp. 481−99. Reprinted CP 5.438-63. Also important:
CP 5.497-525.

[147] Peirce, “Philosophy and the Conduct of Life”, Lecture 1 of the 1898 Cambridge (MA) Conferences Lectures, CP 1.616–48
in part and Reasoning and the Logic of Things, 105–22, reprinted in EP 2:27–41.

[148] Peirce (1908), "A Neglected Argument for the Reality of God", published in large part, Hibbert Journal v. 7, 90–112.
Reprinted with an unpublished part, CP 6.452–85, Selected Writings pp. 358–79, EP 2:434–50, Peirce on Signs 260–78.

[149] See also Nubiola, Jaime (2004), "Il Lume Naturale: Abduction and God", Semiotiche I/2, 91–102.

[150] Peirce (c. 1906), “PAP (Prolegomena to an Apology for Pragmatism)" (MS 293), The New Elements of Mathematics v. 4,
pp. 319–20, first quote under "Abduction" at CDPT.

[151] Peirce (1903), “Pragmatism – The Logic of Abduction”, CP 5.195–205, especially 196. Eprint.

[152] Peirce, Carnegie application, MS L75.279-280: Memoir 27, Draft B.

[153] See MS L75.329–330, from Draft D of Memoir 27 of Peirce’s application to the Carnegie Institution:

Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not
troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics.
The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of
discovery. Consequently, the conduct of abduction, which is chiefly a question of heuretic and is the first
question of heuretic, is to be governed by economical considerations.

[154] Peirce, C. S., “On the Logic of Drawing Ancient History from Documents”, EP 2, see 107-9. On Twenty Questions, see
109:

Thus, twenty skillful hypotheses will ascertain what 200,000 stupid ones might fail to do.

[155] Peirce believed in God. See the section Philosophy: metaphysics.

[156] However, Peirce disagreed with Hegelian absolute idealism. See for example CP 8.131.

[157] Peirce (1868), “Nominalism versus Realism”, Journal of Speculative Philosophy v. 2, n. 1, pp. 57−61. Reprinted (CP
6.619–24), (W 2:144–53).

[158] On developments in Peirce’s realism, see:

• Peirce (1897), “The Logic of Relatives”, The Monist v. VII, n. 2 pp. 161–217, see 206 (via Google). Reprinted CP
3.456–552.
• Peirce (1905), “Issues of Pragmaticism”, The Monist v. XV, n. 4, pp. 481–99, see 495–6 (via Google). Reprinted
(CP 5.438–63, see 453–7).
• Peirce (c. 1905), Letter to Signor Calderoni, CP 8.205–13, see 208.
• Lane, Robert (2007), “Peirce’s Modal Shift: From Set Theory to Pragmaticism”, Journal of the History of Philosophy,
v. 45, n. 4.

[159] Peirce in his 1906 “Answers to Questions concerning my Belief in God”, CP 6.495, Eprint, reprinted in part as “The
Concept of God” in Philosophical Writings of Peirce, J. Buchler, ed., 1940, pp. 375–8:

I will also take the liberty of substituting “reality” for “existence.” This is perhaps overscrupulosity; but I
myself always use exist in its strict philosophical sense of “react with the other like things in the environment.”
Of course, in that sense, it would be fetichism to say that God “exists.” The word “reality,” on the contrary,
is used in ordinary parlance in its correct philosophical sense. [....] I define the real as that which holds its
characters on such a tenure that it makes not the slightest difference what any man or men may have thought
them to be, or ever will have thought them to be, here using thought to include, imagining, opining, and willing
(as long as forcible means are not used); but the real thing’s characters will remain absolutely untouched.

[160] See his “The Doctrine of Necessity Examined” (1892) and “Reply to the Necessitarians” (1893), to both of which editor
Paul Carus responded.

[161] Peirce (1891), “The Architecture of Theories”, The Monist v. 1, pp. 161–76, see p. 170, via Internet Archive. Reprinted
(CP 6.7–34) and (EP 1:285–97, see p. 293).

[162] See “tychism”, “tychasm”, “tychasticism”, and the rest, at CDPT.

[163] Peirce (1893), “Evolutionary Love”, The Monist v. 3, pp. 176–200. Reprinted CP 6.278–317, EP 1:352–72. Arisbe Eprint

[164] See p. 115 in Reasoning and the Logic of Things (Peirce’s 1898 lectures).

8.11 External links


Charles Sanders Peirce bibliography has external links throughout to such materials as biographical and overview
articles on Peirce at encyclopedias, study sites, etc.; individual works by Peirce; and collections, bibliographies, and
Peirce’s definitions in the Baldwin dictionary.
Other useful sets of links:

• Existential graph references and external links.

• Pragmatism external links.

• Semiotics external links.

Peirce sites

• Arisbe: The Peirce Gateway, Joseph Ransdell, ed. Over 100 online writings by Peirce as of 11/24/10, with
annotations. 100s of online papers on Peirce. The peirce-l e-forum. Much else.

• Center for Applied Semiotics (CAS) (1998–2003), Donald Cunningham & Jean Umiker-Sebeok, Indiana U.

• Centro de Estudos Peirceanos (CeneP) and Centro Internacional de Estudos Peirceanos (CIEP), Lucia Santaella
et al., Pontifical Catholic U. of São Paulo (PUC-SP), Brazil. In Portuguese, some English.

• Centro StudiPeirce, Carlo Sini, Rossella Fabbrichesi, et al., U. of Milan, Italy. In Italian and English. Part of
Pragma.

• Charles S. Peirce Foundation. Co-sponsoring the 2014 Peirce International Centennial Congress (100th an-
niversary of Peirce’s death).

• Charles S. Peirce Society


—Transactions of the Charles S. Peirce Society. Quarterly journal of Peirce studies since spring 1965. Table
of Contents of all issues.

• Charles S. Peirce Studies, Brian Kariger, ed.

• Charles Sanders Peirce at the Mathematics Genealogy Project

• Collegium for the Advanced Study of Picture Act and Embodiment: The Peirce Archive. Humboldt U, Berlin,
Germany. Cataloguing Peirce’s innumerable drawings & graphic materials. More info (Prof. Aud Sissel Hoel).

• Digital Encyclopedia of Charles S. Peirce, João Queiroz (now at UFJF) & Ricardo Gudwin (at Unicamp), eds.,
U. of Campinas, Brazil, in English. 84 authors listed, 51 papers online & more listed, as of 1/31/09.

• Existential Graphs, Jay Zeman, ed., U. of Florida. Has 4 Peirce texts.

• Grupo de Estudios Peirceanos (GEP) / Peirce Studies Group, Jaime Nubiola, ed., U. of Navarra, Spain. Big
study site, Peirce & others in Spanish & English, bibliography, more.

• Helsinki Peirce Research Center (HPRC), Ahti-Veikko Pietarinen et al., U. of Helsinki, with Commens: Vir-
tual Centre for Peirce Studies, Mats Bergman & Sami Paavola, eds. 23 papers by 11 authors as of 11/24/10.
—Commens Dictionary of Peirce’s Terms (CDPT): Peirce’s own definitions, often many per term across the
decades.

• His Glassy Essence. Autobiographical Peirce. Kenneth Laine Ketner.

• Institute for Studies in Pragmaticism, Kenneth Laine Ketner, Clyde Hendrick, et al., Texas Tech U. Peirce’s
life and works.

• International Research Group on Abductive Inference, Uwe Wirth et al., eds., Goethe U., Frankfurt, Germany.
Uses frames. Click on link at bottom of its home page for English. Moved to U. of Gießen, Germany, home
page not in English but see Artikel section there.
• L'I.R.S.C.E. (1974–2003)—Institut de Recherche en Sémiotique, Communication et Éducation, Gérard Deledalle,
Joëlle Réthoré, U. of Perpignan, France.

• Minute Semeiotic, Vinicius Romanini, U. of São Paulo, Brazil. English, Portuguese.

• Peirce at Signo: Theoretical Semiotics on the Web, Louis Hébert, director, supported by U. of Québec. Theory,
application, exercises of Peirce’s Semiotics and Esthetics. English, French.

• Peirce Edition Project (PEP), Indiana U.-Purdue U. Indianapolis (IUPUI). André De Tienne, Nathan Houser,
et al. Editors of the Writings of Charles S. Peirce (W) and The Essential Peirce (EP) v. 2. Many study aids such
as the Robin Catalog of Peirce’s manuscripts & letters and:
—Biographical introductions to EP 1–2 and W 1–6 & 8
—Most of W 2 readable online.
—PEP’s branch at Université du Québec à Montréal (UQÀM). Working on W 7: Peirce’s work on the Century
Dictionary. Definition of the week.

• Peirce’s Existential Graphs, Frithjof Dau, Germany


• Peirce’s Theory of Semiosis: Toward a Logic of Mutual Affection, Joseph Esposito. Free online course.

• Pragmatism Cybrary, David Hildebrand & John Shook.


• Research Group on Semiotic Epistemology and Mathematics Education (late 1990s), Institut für Didaktik der
Mathematik (Michael Hoffman, Michael Otte, Universität Bielefeld, Germany). See Peirce Project Newsletter
v. 3, n. 1, p. 13.

• Semiotics according to Robert Marty, with 76 definitions of the sign by C. S. Peirce.


Chapter 9

Clone (algebra)

In universal algebra, a clone is a set C of finitary operations on a set A such that

• C contains all the projections π^n_k : A^n → A, defined by π^n_k(x1, …, xn) = xk,

• C is closed under (finitary multiple) composition (or “superposition”[1]): if f, g1, …, gm are members of C
such that f is m-ary and gj is n-ary for every j, then the n-ary operation h(x1, …, xn) := f(g1(x1, …, xn), …,
gm(x1, …, xn)) is in C.

Given an algebra in a signature σ, the set of operations on its carrier definable by a σ-term (the term functions) is a
clone. Conversely, every clone can be realized as the clone of term functions in a suitable algebra.
If A and B are algebras with the same carrier such that every basic function of A is a term function in B and vice versa,
then A and B have the same clone. For this reason, modern universal algebra often treats clones as a representation
of algebras which abstracts from their signature.
There is only one clone on the one-element set. The lattice of clones on a two-element set is countable, and has been
completely described by Emil Post (see Post’s lattice). Clones on larger sets do not admit a simple classification: there
are continuum-many clones on a finite set of size at least three, and 2^(2^κ) clones on an infinite set of cardinality κ.
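The two closure conditions above can be sketched concretely in code. This is an illustrative aside, not part of the article, and the names (`projection`, `superpose`, `AND`) are invented for the example; it builds a 3-ary term function of the two-element algebra ({0, 1}, AND) from projections by superposition:

```python
# Illustrative sketch of the two clone axioms on the carrier A = {0, 1}:
# n-ary operations are Python functions of n arguments, projections are
# defined directly, and "superposition" composes an m-ary f with m
# n-ary operations g_1, ..., g_m.

def projection(n, k):
    """The k-th n-ary projection pi^n_k(x1, ..., xn) = xk (1-indexed)."""
    def pi(*xs):
        assert len(xs) == n
        return xs[k - 1]
    return pi

def superpose(f, gs):
    """h(x1, ..., xn) = f(g1(x1, ..., xn), ..., gm(x1, ..., xn))."""
    def h(*xs):
        return f(*(g(*xs) for g in gs))
    return h

# Term functions of ({0, 1}, AND) form a clone; for example the 3-ary
# term function (x1 AND x2) AND x3, built from AND and projections.
AND = lambda x, y: x & y
p1, p2, p3 = (projection(3, k) for k in (1, 2, 3))
x1_and_x2 = superpose(AND, [p1, p2])
t = superpose(AND, [x1_and_x2, p3])

print(t(1, 1, 1), t(1, 0, 1))  # prints: 1 0
```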

9.1 Abstract clones


Philip Hall introduced the concept of an abstract clone.[2] An abstract clone differs from a concrete clone in that the
set A is not given. Formally, an abstract clone comprises

• a set Cn for each natural number n,

• elements πk,n in Cn for all k≤n, and

• a family of functions ∗:Cm × (Cn)m →Cn for all m and n

such that

• c ∗ (π1,n, ..., πn,n) = c

• πk,m ∗ (c1, ..., cm) = ck

• c ∗ (d1 ∗ (e1, ..., en), ..., dm ∗ (e1, ..., en)) = (c ∗ (d1, ..., dm)) ∗ (e1, ..., en).

Any concrete clone determines an abstract clone in the obvious manner.


Any algebraic theory determines an abstract clone where Cn is the set of terms in n variables, πk,n are variables, and ∗
is substitution. Two theories determine isomorphic clones if and only if the corresponding categories of algebras are


isomorphic. Conversely every abstract clone determines an algebraic theory with an n-ary operation for each element
of Cn. This gives a bijective correspondence between abstract clones and algebraic theories.
Every abstract clone C induces a Lawvere theory in which the morphisms m→n are elements of (Cm)n . This induces
a bijective correspondence between Lawvere theories and abstract clones.
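The correspondence between algebraic theories and abstract clones can be illustrated with a small sketch in which Cn is the set of terms in n variables over one binary symbol and ∗ is simultaneous substitution. All names here are invented for the example, and the assertions check only a few small instances of the three axioms, not the axioms in general:

```python
# Hedged sketch of the term-substitution clone: terms over one binary
# symbol "f" are nested tuples, var(k) plays the role of pi_{k,n}, and
# star(c, ds) simultaneously substitutes the n-ary terms ds for the
# variables of the m-ary term c.

def var(k):
    return ("var", k)

def app(a, b):
    return ("f", a, b)

def star(c, ds):
    """Substitute ds[k-1] for variable v_k throughout the term c."""
    if c[0] == "var":
        return ds[c[1] - 1]
    return ("f", star(c[1], ds), star(c[2], ds))

# Axiom 1: c * (v1, ..., vn) = c, for the 2-ary term c = f(v1, f(v2, v1)).
c = app(var(1), app(var(2), var(1)))
assert star(c, [var(1), var(2)]) == c

# Axiom 2: v_k * (c1, ..., cm) = c_k.
assert star(var(2), [var(1), c]) == c

# Axiom 3 (associativity of substitution), on one small instance.
ds = [app(var(1), var(1)), var(2)]
es = [var(2), app(var(1), var(2))]
lhs = star(c, [star(d, es) for d in ds])
rhs = star(star(c, ds), es)
assert lhs == rhs
print("abstract clone axioms hold on these instances")
```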

9.2 References
[1] Denecke, Klaus (2003), “Menger algebras and clones of terms”, East-West Journal of Mathematics, v. 5, n. 2, pp. 179–193.

[2] P. M. Cohn, Universal Algebra, D. Reidel, 2nd edition, 1981, Ch. III.

• Ralph N. McKenzie, George F. McNulty, and Walter F. Taylor, Algebras, Lattices, Varieties, Vol. 1, Wadsworth
& Brooks/Cole, Monterey, CA, 1987.

• F. William Lawvere: Functorial semantics of algebraic theories, Columbia University, 1963. Available online
at Reprints in Theory and Applications of Categories
Chapter 10

Computability theory

For the concept of computability, see Computability.

Computability theory, also called recursion theory, is a branch of mathematical logic, of computer science, and
of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees.
The basic questions addressed by recursion theory are “What does it mean for a function on the natural numbers
to be computable?" and “How can noncomputable functions be classified into a hierarchy based on their level of
noncomputability?". The answers to these questions have led to a rich theory that is still being actively researched.
The field has since grown to include the study of generalized computability and definability. The invention of the
central combinatorial object of recursion theory, the universal Turing machine, predates and prefigured the invention
of modern computers. Historically, the study of algorithmically undecidable sets and functions was motivated by
various problems in mathematics that turned out to be undecidable, such as the word problem for groups. There are
also several applications of the theory to other branches of mathematics that do not center on undecidability. Early
applications include Higman’s celebrated embedding theorem, which provides a link between recursion theory and
group theory; results of Michael O. Rabin and Anatoly Maltsev on algorithmic presentations of algebras; and the
negative solution to Hilbert’s tenth problem. More recent applications include algorithmic randomness; results of
Soare et al., who applied recursion-theoretic methods to solve a problem in algebraic geometry;[1] and recent work
of Slaman et al. on normal numbers that solves a problem in analytic number theory.
Recursion theory overlaps with proof theory, effective descriptive set theory, model theory, and abstract algebra.
Arguably, computational complexity theory is a child of recursion theory; both theories share the same technical tool,
namely the Turing machine. Recursion theorists in mathematical logic often study the theory of relative computability,
reducibility notions, and degree structures described in this article. This contrasts with the theory of subrecursive
hierarchies, formal methods, and formal languages that is common in the study of computability theory in computer
science. There is considerable overlap in knowledge and methods between the two research communities, and no
firm line can be drawn between them. For instance, parameterized complexity was invented by the complexity
theorist Michael Fellows and the recursion theorist Rod Downey.

10.1 Computable and uncomputable sets


Recursion theory originated in the 1930s, with work of Kurt Gödel, Alonzo Church, Alan Turing, Stephen Kleene
and Emil Post.[2]
The fundamental results the researchers obtained established Turing computability as the correct formalization of
the informal idea of effective calculation. These results led Stephen Kleene (1952) to coin the two names “Church’s
thesis” (Kleene 1952:300) and “Turing’s Thesis” (Kleene 1952:376). Nowadays these are often considered as a
single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a
computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis:

“Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general
recursiveness (or Turing’s computability). It seems to me that this importance is largely due to the fact


that with this concept one has for the first time succeeded in giving an absolute definition of an interesting
epistemological notion, i.e., one not depending on the formalism chosen.” (Gödel 1946 in Davis 1965:84).[3]

With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be
effectively decided. Church (1936a, 1936b) and Turing (1936), inspired by techniques used by Gödel (1931) to prove
his incompleteness theorems, independently demonstrated that the Entscheidungsproblem is not effectively decidable.
This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical
propositions are true or false.
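The undecidability behind these results is often illustrated by a diagonalization sketch. The following is a hypothetical illustration, not part of the article: Python stands in for any Turing-complete formalism, and `halts` is a stub, since the argument shows that no such total decider can exist.

```python
# Diagonalization sketch for the halting problem.  If a total procedure
# halts(f, x) could decide whether f halts on input x, the program
# `diagonal` below would halt on itself exactly when it does not.
# `halts` is a hypothetical stub: here it merely raises, because no
# correct total implementation can exist.

def halts(f, x):
    """Hypothetical halting decider; no such total program can exist."""
    raise NotImplementedError("a total halting decider is impossible")

def diagonal(f):
    # If the (hypothetical) decider says f halts on itself, loop forever;
    # otherwise halt.  So diagonal(diagonal) halts iff it does not halt.
    if halts(f, f):
        while True:
            pass
    return "halted"

# diagonal(diagonal) would halt if and only if halts(diagonal, diagonal)
# returned False -- a contradiction, so no correct `halts` exists.
```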
Many problems of mathematics have been shown to be undecidable after these initial examples were established.
In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be
effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s
that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in
a finitely presented group, will decide whether the element represented by the word is the identity element of the
group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich’s theorem, which implies
that Hilbert’s tenth problem has no effective solution; this problem asked whether there is an effective procedure
to decide whether a Diophantine equation over the integers has a solution in the integers. The list of undecidable
problems gives additional examples of problems with no computable solution.
The study of which mathematical constructions can be effectively performed is sometimes called recursive math-
ematics; the Handbook of Recursive Mathematics (Ershov et al. 1998) covers many of the known results in this
field.

10.2 Turing computability


The main form of computability studied in recursion theory was introduced by Turing (1936). A set of natural
numbers is said to be a computable set (also called a decidable, recursive, or Turing computable set) if there is a
Turing machine that, given a number n, halts with output 1 if n is in the set and halts with output 0 if n is not in the
set. A function f from the natural numbers to themselves is a recursive or (Turing) computable function if there is a
Turing machine that, on input n, halts and returns output f(n). The use of Turing machines here is not necessary;
there are many other models of computation that have the same computing power as Turing machines; for example
the μ-recursive functions obtained from primitive recursion and the μ operator.
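The definition of a computable set can be made concrete with a small sketch (a toy illustration, with the set of perfect squares standing in for an arbitrary decidable set; a Python function plays the role of the Turing machine):

```python
import math

def is_square(n: int) -> bool:
    """Characteristic function of the set of perfect squares.

    A set of naturals is computable exactly when such a total
    0/1-valued procedure exists: it halts on every input and
    answers membership correctly.
    """
    r = math.isqrt(n)
    return r * r == n

# A (total) computable function: a procedure that halts on
# every input n and returns f(n).
def f(n: int) -> int:
    return n * n
```

The essential point is totality: the procedure must halt on every input, not just on members of the set.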
The terminology for recursive functions and sets is not completely standardized. The definition in terms of μ-recursive
functions as well as a different definition of rekursiv functions by Gödel led to the traditional name recursive for sets and
functions computable by a Turing machine. The word decidable stems from the German word Entscheidungsproblem
which was used in the original papers of Turing and others. In contemporary use, the term “computable function”
has various definitions: according to Cutland (1980), it is a partial recursive function (which can be undefined for
some inputs), while according to Soare (1987) it is a total recursive (equivalently, general recursive) function. This
article follows the second of these conventions. Soare (1996) gives additional comments about the terminology.
Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing
machines that halt on input 0, is a well-known example of a noncomputable set. The existence of many noncom-
putable sets follows from the facts that there are only countably many Turing machines, and thus only countably many
computable sets, but there are uncountably many sets of natural numbers.
Although the halting problem is not computable, it is possible to simulate program execution and produce an infinite
list of the programs that do halt. Thus the halting problem is an example of a recursively enumerable set, which is a set
that can be enumerated by a Turing machine (other terms for recursively enumerable include computably enumerable
and semidecidable). Equivalently, a set is recursively enumerable if and only if it is the range of some computable
function. The recursively enumerable sets, although not decidable in general, have been studied in detail in recursion
theory.
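The "simulate and list the programs that halt" idea is the classic dovetailing technique, which can be sketched in a toy model (not a real universal machine: here a "program" is a zero-argument Python generator, advancing it once counts as one step, and exhaustion counts as halting):

```python
def dovetail(programs, max_stage):
    """Enumerate the indices of halting programs by dovetailing.

    Stage s restarts and runs programs 0..s-1 for up to s steps
    each, so every halting program is eventually listed, while
    divergent ones are only simulated for boundedly many steps
    per stage and never block the scan.
    """
    halted = []
    for s in range(1, max_stage + 1):
        for e, prog in enumerate(programs[:s]):
            if e in halted:
                continue
            g = prog()
            try:
                for _ in range(s):      # run program e for at most s steps
                    next(g)
            except StopIteration:       # program e halted within s steps
                halted.append(e)
    return halted

# Toy 'programs': each yield is one step; returning means halting.
def halts_fast():
    yield

def runs_forever():
    while True:
        yield

def halts_slow():
    for _ in range(5):
        yield
```

Running `dovetail([halts_fast, runs_forever, halts_slow], 10)` lists the halting programs `[0, 2]` without ever getting stuck on the divergent one, which is exactly why the halting problem is recursively enumerable though not decidable.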

10.3 Areas of research


Beginning with the theory of recursive sets and functions described above, the field of recursion theory has grown to
include the study of many closely related topics. These are not independent areas of research: each of these areas

draws ideas and results from the others, and most recursion theorists are familiar with the majority of them.

10.3.1 Relative computability and the Turing degrees


Main articles: Turing reduction and Turing degree

Recursion theory in mathematical logic has traditionally focused on relative computability, a generalization of Turing
computability defined using oracle Turing machines, introduced by Turing (1939). An oracle Turing machine is a
hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions
of an oracle, which is a particular set of natural numbers. The oracle machine may only ask questions of the form “Is
n in the oracle set?”. Each question will be immediately answered correctly, even if the oracle set is not computable.
Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an
oracle cannot.
Informally, a set of natural numbers A is Turing reducible to a set B if there is an oracle machine that correctly tells
whether numbers are in A when run with B as the oracle set (in this case, the set A is also said to be (relatively)
computable from B and recursive in B). If a set A is Turing reducible to a set B and B is Turing reducible to A then
the sets are said to have the same Turing degree (also called degree of unsolvability). The Turing degree of a set gives
a precise measure of how uncomputable the set is.
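A Turing reduction can be sketched as a procedure with black-box access to a membership test (a toy example, not from the article: A = {n : n ∈ B or n + 2 ∈ B}, with an arbitrary set B supplied as the oracle):

```python
def reduce_A_to_B(n: int, oracle) -> bool:
    """Decide membership of n in A = {n : n in B or n + 2 in B},
    given an oracle (a black-box membership test) for B.

    The second question is asked only if the first answer is 'no',
    so the queries are adaptive, which is the hallmark of a general
    Turing reduction; the procedure works no matter what B is,
    computable or not.
    """
    if oracle(n):          # ask: is n in B?
        return True
    return oracle(n + 2)   # ask only if needed: is n + 2 in B?

# Toy oracle: B = multiples of 3 (any set would do).
B = lambda k: k % 3 == 0
```

Because the procedure consults B only through membership queries, it witnesses that A is computable relative to B regardless of whether B itself is computable.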
The natural examples of sets that are not computable, including many different sets that encode variants of the halting
problem, have two properties in common:

1. They are recursively enumerable, and

2. Each can be translated into any other via a many-one reduction. That is, given such sets A and B, there is a
total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or
m-equivalent).
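The many-one reduction in item 2 can be sketched in a few lines (a toy example, not from the article: A the even numbers, B the multiples of 4, and f(x) = 2x, so that x ∈ A exactly when f(x) ∈ B):

```python
def f(x: int) -> int:
    """Total computable reduction function: x is in A iff f(x) is in B."""
    return 2 * x

in_B = lambda y: y % 4 == 0          # B = multiples of 4
in_A_via_B = lambda x: in_B(f(x))    # decides A = even numbers through B
```

Unlike a Turing reduction, a single non-adaptive question "is f(x) in B?" settles membership in A, which is why many-one reducibility is the stronger notion.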

Many-one reductions are “stronger” than Turing reductions: if a set A is many-one reducible to a set B, then A is
Turing reducible to B, but the converse does not always hold. Although the natural examples of noncomputable sets
are all many-one equivalent, it is possible to construct recursively enumerable sets A and B such that A is Turing
reducible to B but not many-one reducible to B. It can be shown that every recursively enumerable set is many-
one reducible to the halting problem, and thus the halting problem is the most complicated recursively enumerable
set with respect to many-one reducibility and with respect to Turing reducibility. Post (1944) asked whether every
recursively enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is
no recursively enumerable set with a Turing degree intermediate between those two.
As intermediate results, Post defined natural types of recursively enumerable sets like the simple, hypersimple and
hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem
with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other
reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of re-
cursively enumerable sets of intermediate Turing degree; this problem became known as Post’s problem. After ten
years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable
sets and the halting problem, but they failed to show that any of these degrees contains a recursively enumerable set.
Very soon after this, Friedberg and Muchnik independently solved Post’s problem by establishing the existence of
recursively enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing
degrees of the recursively enumerable sets which turned out to possess a very complicated and non-trivial structure.
There are uncountably many sets that are not recursively enumerable, and the investigation of the Turing degrees
of all sets is as central in recursion theory as the investigation of the recursively enumerable Turing degrees. Many
degrees with special properties were constructed: hyperimmune-free degrees where every function computable relative
to that degree is majorized by an (unrelativized) computable function; high degrees relative to which one can compute
a function f which dominates every computable function g in the sense that there is a constant c depending on g such
that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic
sets; and the degrees below the halting problem of limit-recursive sets.
The study of arbitrary (not necessarily recursively enumerable) Turing degrees involves the study of the Turing jump.
Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle
Turing machines running with oracle A. The Turing jump of any set is always of higher Turing degree than the

original set, and a theorem of Friedberg shows that any set that computes the halting problem can be obtained as the
Turing jump of another set. Post’s theorem establishes a close relationship between the Turing jump operation and
the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability
in arithmetic.
Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set
of Turing degrees containing recursively enumerable sets. A deep theorem of Shore and Slaman (1999) states that the
function mapping a degree x to the degree of its Turing jump is definable in the partial order of the Turing degrees.
A recent survey by Ambos-Spies and Fejer (2006) gives an overview of this research and its historical progression.

10.3.2 Other reducibilities


Main article: Reduction (recursion theory)

An ongoing area of research in recursion theory studies reducibility relations other than Turing reducibility. Post
(1944) introduced several strong reducibilities, so named because they imply truth-table reducibility. A Turing ma-
chine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with.
Weak reducibilities are those where a reduction process may not terminate for all oracles; Turing reducibility is one
example.
The strong reducibilities include:

One-one reducibility A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f
such that each n is in A if and only if f(n) is in B.
Many-one reducibility This is essentially one-one reducibility without the constraint that f be injective. A is many-
one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only
if f(n) is in B.
Truth-table reducibility A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine
that computes a total function regardless of the oracle it is given. Because of compactness of Cantor space, this
is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the
oracle simultaneously, and then having seen their answers is able to produce an output without asking additional
questions regardless of the oracle’s answer to the initial queries. Many variants of truth-table reducibility have
also been studied.
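The difference between truth-table and general Turing reducibility is the non-adaptivity of the queries, which a short sketch makes visible (a toy example: the question list and the Boolean "truth table" applied to the answers are invented for illustration):

```python
def tt_reduce(n: int, oracle) -> bool:
    """A truth-table reduction: all oracle questions are fixed in
    advance from the input alone, and a Boolean function of the
    answers gives the verdict. No answer influences which questions
    get asked, unlike a general Turing reduction.

    Toy example: n is accepted iff an odd number of n, n+1, n+2
    belong to the oracle set.
    """
    queries = [n, n + 1, n + 2]             # question list depends only on n
    answers = [oracle(q) for q in queries]  # ask all questions up front
    return sum(answers) % 2 == 1            # truth table applied to the answers

evens = lambda k: k % 2 == 0                # a sample oracle set
```

Since the procedure is total for every oracle, it is a strong reducibility in Post's sense.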

Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed
in the article Reduction (recursion theory).
The major research on strong reducibilities has been to compare their theories, both for the class of all recursively
enumerable sets as well as for the class of all subsets of the natural numbers. Furthermore, the relations between the
reducibilities have been studied. For example, it is known that every Turing degree is either a truth-table degree or is
the union of infinitely many truth-table degrees.
Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have
also been studied. The most well known are arithmetical reducibility and hyperarithmetical reducibility. These
reducibilities are closely connected to definability over the standard model of arithmetic.

10.3.3 Rice’s theorem and the arithmetical hierarchy


Rice showed that for every nontrivial class C (one containing some but not all r.e. sets) the index set E = {e : the e-th
r.e. set We is in C} has the property that either the halting problem or its complement is many-one reducible to E, that
is, can be mapped using a many-one reduction to E (see Rice’s theorem for more detail). But many of these index
sets are even more complicated than the halting problem. These types of sets can be classified using the arithmetical
hierarchy. For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class
of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3, and the index
set COMP of the class of all Turing-complete sets is on the level Σ4. These hierarchy levels are defined inductively: Σn+1
contains just all sets which are recursively enumerable relative to Σn, while Σ1 contains the recursively enumerable sets. The index
sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the
given index sets.
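The core reduction behind Rice's theorem can be sketched in a toy model where "programs" are Python source strings defining a function `prog(x)`, and the nontrivial semantic property is "returns 42 on input 0" (all names here are invented for illustration):

```python
def reduce_halting_to_property(p_src: str, x: int) -> str:
    """Many-one reduction in miniature: from 'does program p halt
    on x' to the semantic property 'returns 42 on input 0'.

    Given source code for p, build a new program q that first runs
    p on x (discarding the result) and then returns 42. Then q has
    the property iff p halts on x, so a decider for the property
    would decide the halting problem. This is the core of Rice's
    theorem.
    """
    return (
        p_src
        + "\ndef q(y):\n"
        + "    prog(" + str(x) + ")   # runs forever iff p diverges on x\n"
        + "    return 42\n"
    )

# A program that halts on every input; the constructed q therefore
# has the property.
halting_p = "def prog(x):\n    return x + 1\n"
q_src = reduce_halting_to_property(halting_p, 5)
env = {}
exec(q_src, env)
```

Only the positive direction can be demonstrated by running code (for a divergent p, the constructed q would simply never return), which is precisely the asymmetry the theorem exploits.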

10.3.4 Reverse mathematics

Main article: Reverse mathematics

The program of reverse mathematics asks which set-existence axioms are necessary to prove particular theorems
of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was
studied in detail by Stephen Simpson and others; Simpson (1999) gives a detailed discussion of the program. The
set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers
is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is recursive
comprehension, which states that the powerset of the naturals is closed under Turing reducibility.

10.3.5 Numberings

A numbering is an enumeration of functions; it has two parameters, e and x, and outputs the value of the e-th function in
the numbering on the input x. Numberings can be partial recursive, although some of their members are total recursive,
that is, computable functions. Admissible numberings are those into which all others can be translated. A Friedberg
numbering (named after its discoverer) is a one-one numbering of all partial-recursive functions; it is necessarily
not an admissible numbering. Later research dealt also with numberings of other classes like classes of recursively
enumerable sets. Goncharov discovered for example a class of recursively enumerable sets for which the numberings
fall into exactly two classes with respect to recursive isomorphisms.

10.3.6 The priority method

For further explanation, see the section Post’s problem and the priority method in the article Turing
degree.

Post’s problem was solved with a method called the priority method; a proof using this method is called a priority
argument. This method is primarily used to construct recursively enumerable sets with particular properties. To use
this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known
as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties.
Each requirement is assigned a natural number representing its priority; 0 is assigned to
the most important priority, 1 to the second most important, and so on. The set is then constructed in stages, each
stage attempting to satisfy one or more of the requirements by either adding numbers to the set or banning numbers
from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause
another to become unsatisfied; the priority order is used to decide what to do in such an event.
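A genuine priority argument such as the Friedberg–Muchnik construction is too long to sketch here, but the stage-and-requirement pattern can be illustrated by Post's simple-set construction, which uses a list of requirements but needs no injuries. This is a toy model (the real construction runs forever against a universal enumeration; here finitely many r.e. sets are supplied as Python generators, and all names are invented):

```python
from itertools import count

def simple_set_stages(enumerations, stages):
    """Stage-by-stage construction in the style of Post's simple set.

    Requirement R_e: if the e-th r.e. set W_e is infinite, the set
    S under construction must meet it.  At each stage we act for
    every unsatisfied R_e that has enumerated some element x > 2e,
    putting the least such x into S.  Restricting to x > 2e keeps
    the complement infinite: at most e numbers below 2e ever enter
    S for the sake of R_0, ..., R_{e-1}.
    """
    gens = [iter(w()) for w in enumerations]
    enumerated = [set() for _ in enumerations]   # elements of W_e seen so far
    satisfied = [False] * len(enumerations)
    S = set()
    for _ in range(stages):
        for e, g in enumerate(gens):             # one more enumeration step of each W_e
            enumerated[e].add(next(g))
        for e in range(len(enumerations)):       # act for each ready requirement
            if not satisfied[e]:
                witnesses = [x for x in enumerated[e] if x > 2 * e]
                if witnesses:
                    S.add(min(witnesses))
                    satisfied[e] = True
    return S

# Toy r.e. sets W_0 = evens, W_1 = odds, given as generator factories.
evens = lambda: (2 * n for n in count())
odds = lambda: (2 * n + 1 for n in count())
S = simple_set_stages([evens, odds], 10)
```

In a true priority argument the requirements can conflict, and acting for a high-priority requirement may "injure" a lower one, forcing it to restart; that bookkeeping is exactly what this injury-free toy omits.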
Priority arguments have been employed to solve many problems in recursion theory, and have been classified into a
hierarchy based on their complexity (Soare 1987). Because complex priority arguments can be technical and difficult
to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results
proved with priority arguments can also be proved without them. For example, Kummer published a paper on a proof
for the existence of Friedberg numberings without using the priority method.

10.3.7 The lattice of recursively enumerable sets

When Post defined the notion of a simple set as an r.e. set with an infinite complement not containing any infinite r.e.
set, he started to study the structure of the recursively enumerable sets under inclusion. This lattice became a well-
studied structure. Recursive sets can be defined in this structure by the basic result that a set is recursive if and only
if the set and its complement are both recursively enumerable. Infinite r.e. sets always have infinite recursive subsets;
but on the other hand, simple sets exist but do not have a coinfinite recursive superset. Post (1944) already introduced
hypersimple and hyperhypersimple sets; later maximal sets were constructed which are r.e. sets such that every r.e.
superset is either a finite variant of the given maximal set or is co-finite. Post’s original motivation in the study of this
lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of
the recursive sets nor in the Turing degree of the halting problem. Post did not find such a property and the solution
to his problem applied priority methods instead; Harrington and Soare (1991) found eventually such a property.

10.3.8 Automorphism problems


Another important question is the existence of automorphisms in recursion-theoretic structures. One of these structures
is that of the recursively enumerable sets under inclusion modulo finite difference; in this structure, A is below
B if and only if the set difference B − A is finite. Maximal sets (as defined in the previous paragraph) have the
property that they cannot be automorphic to non-maximal sets, that is, if there is an automorphism of the recursively
enumerable sets under the structure just mentioned, then every maximal set is mapped to another maximal set. Soare
(1974) showed that also the converse holds, that is, every two maximal sets are automorphic. So the maximal sets
form an orbit, that is, every automorphism preserves maximality and any two maximal sets are transformed into each
other by some automorphism. Harrington gave a further example of an automorphic property: that of the creative
sets, the sets which are many-one equivalent to the halting problem.
Besides the lattice of recursively enumerable sets, automorphisms are also studied for the structure of the Turing
degrees of all sets as well as for the structure of the Turing degrees of r.e. sets. In both cases, Cooper claims to have
constructed nontrivial automorphisms which map some degrees to other degrees; this construction has, however, not
been verified and some colleagues believe that the construction contains errors and that the question of whether there
is a nontrivial automorphism of the Turing degrees is still one of the main unsolved questions in this area (Slaman
and Woodin 1986, Ambos-Spies and Fejer 2006).

10.3.9 Kolmogorov complexity

Main article: Kolmogorov complexity

The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by
Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of
the research was independent, and the unity of the concept of randomness was not understood at the time). The
main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as
the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine
when an infinite sequence (equivalently, the characteristic function of a subset of the natural numbers) is random or not by
invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent
study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area.
For that reason, a recent research conference in this area was held in January 2007[4] and a list of open problems[5]
is maintained by Joseph Miller and Andre Nies.
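The "length of the shortest input p such that U(p) outputs x" is uncomputable, but any fixed compressor gives a computable upper-bound proxy, which already separates regular from irregular strings (a common illustration, not part of the article; the pseudo-random bytes are generated deterministically by chaining SHA-256 so the example is reproducible):

```python
import hashlib
import zlib

def compressed_length(x: bytes) -> int:
    """Computable upper-bound proxy for Kolmogorov complexity.

    x can be reconstructed from its zlib-compressed form plus a
    fixed decompressor, so len(zlib.compress(x)) bounds C(x) up to
    an additive constant; the true shortest description remains
    uncomputable.
    """
    return len(zlib.compress(x))

regular = b"ab" * 1000          # highly regular: admits a short description

chunks, seed = [], b"seed"      # 2048 bytes with no visible regularity
for _ in range(64):
    seed = hashlib.sha256(seed).digest()
    chunks.append(seed)
irregular = b"".join(chunks)
```

The regular string compresses to a tiny fraction of its length, while the hash-chain bytes barely compress at all, mirroring the intuition that random strings have complexity close to their own length.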

10.3.10 Frequency computation


This branch of recursion theory analyzed the following question: for fixed m and n with 0 < m < n, for which sets
A is it possible to compute for any n different inputs x1, x2, ..., xn a tuple of n numbers y1, y2, ..., yn such that at least m
of the equations A(xk) = yk are true? Such sets are known as (m, n)-recursive sets. The first major result in this branch
of recursion theory is Trakhtenbrot’s result that a set is computable if it is (m, n)-recursive for some m, n with 2m > n.
On the other hand, Jockusch’s semirecursive sets (which were already known informally before Jockusch introduced
them in 1968) are examples of sets which are (m, n)-recursive if and only if 2m < n + 1. There are uncountably many
of these sets and also some recursively enumerable but noncomputable sets of this type. Later, Degtev established a
hierarchy of recursively enumerable sets that are (1, n + 1)-recursive but not (1, n)-recursive. After a long phase of
research by Russian scientists, this subject became repopularized in the west by Beigel’s thesis on bounded queries,
which linked frequency computation to the above-mentioned bounded reducibilities and other related notions. One
of the major results was Kummer’s Cardinality Theory which states that a set A is computable if and only if there is
an n such that some algorithm enumerates for each tuple of n different numbers up to n many possible choices of the
cardinality of this set of n numbers intersected with A; these choices must contain the true cardinality but leave out
at least one false one.

10.3.11 Inductive inference


This is the recursion-theoretic branch of learning theory. It is based on Gold’s model of learning in the limit from
1967 and has developed since then more and more models of learning. The general scenario is the following: Given
a class S of computable functions, is there a learner (that is, a recursive functional) which outputs for any input of the
form (f(0), f(1), ..., f(n)) a hypothesis? A learner M learns a function f if almost all hypotheses are the same index e
of f with respect to a previously agreed on acceptable numbering of all computable functions; M learns S if M learns
every f in S. Basic results are that all recursively enumerable classes of functions are learnable while the class REC of
all computable functions is not learnable. Many related models have been considered and also the learning of classes
of recursively enumerable sets from positive data is a topic studied from Gold’s pioneering paper in 1967 onwards.
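Gold's "identification in the limit" can be sketched for one learnable class (a toy example, not from the article: functions with finite support, i.e. nonzero on only finitely many arguments; hypotheses are returned as explicit finite tables rather than program indices):

```python
def learner(data_points):
    """Gold-style learning in the limit for the class of functions
    with finite support.

    After seeing f(0), ..., f(n), the learner conjectures 'f equals
    the nonzero values seen so far, and 0 everywhere else'.  Once
    every nonzero value of f has appeared in the data, the
    conjecture never changes again, so every function in the class
    is identified in the limit.
    """
    hypotheses = []
    seen = {}
    for n, value in enumerate(data_points):
        if value != 0:
            seen[n] = value
        hypotheses.append(dict(seen))    # conjecture after seeing f(0..n)
    return hypotheses

# f has support {1, 3}: f(1) = 5, f(3) = 2, and f is zero elsewhere.
hyps = learner([0, 5, 0, 2, 0, 0, 0])
```

Note that the learner is never required to signal that it has converged; it merely stops changing its mind, which is exactly what distinguishes learning in the limit from decidable identification.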

10.3.12 Generalizations of Turing computability


Recursion theory includes the study of generalized notions of this field such as arithmetic reducibility, hyperarithmetical
reducibility and α-recursion theory, as described by Sacks (1990). These generalized notions include reducibilities
that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These
studies include approaches to investigate the analytical hierarchy which differs from the arithmetical hierarchy by per-
mitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas
are linked to the theories of well-orderings and trees; for example the set of all indices of recursive (nonbinary) trees
without infinite branches is complete for level Π11 of the analytical hierarchy. Both Turing reducibility and hyper-
arithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion
of degrees of constructibility is studied in set theory.

10.3.13 Continuous computability theory


Computability theory for digital computation is well developed. Computability theory is less well developed for
analog computation that occurs in analog computers, analog signal processing, analog electronics, neural networks
and continuous-time control theory, modelled by differential equations and continuous dynamical systems (Orponen
1997; Moore 1996).

10.4 Relationships between definability, proof and computability


There are close relationships between the Turing degree of a set of natural numbers and the difficulty (in terms of
the arithmetical hierarchy) of defining that set using a first-order formula. One such relationship is made precise by
Post’s theorem. A weaker relationship was demonstrated by Kurt Gödel in the proofs of his completeness theorem and
incompleteness theorems. Gödel’s proofs show that the set of logical consequences of an effective first-order theory is
a recursively enumerable set, and that if the theory is strong enough this set will be uncomputable. Similarly, Tarski’s
indefinability theorem can be interpreted both in terms of definability and in terms of computability.
Recursion theory is also linked to second order arithmetic, a formal theory of natural numbers and sets of natural
numbers. The fact that certain sets are computable or relatively computable often implies that these sets can be
defined in weak subsystems of second order arithmetic. The program of reverse mathematics uses these subsystems
to measure the noncomputability inherent in well known mathematical theorems. Simpson (1999) discusses many
aspects of second-order arithmetic and reverse mathematics.
The field of proof theory includes the study of second-order arithmetic and Peano arithmetic, as well as formal
theories of the natural numbers weaker than Peano arithmetic. One method of classifying the strength of these
weak systems is by characterizing which computable functions the system can prove to be total (see Fairtlough and
Wainer (1998)). For example, in primitive recursive arithmetic any computable function that is provably total is
actually primitive recursive, while Peano arithmetic proves that functions like the Ackermann function, which are not
primitive recursive, are total. Not every total computable function is provably total in Peano arithmetic, however; an
example of such a function is provided by Goodstein’s theorem.
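The Ackermann function mentioned above is short enough to write out; the sketch below uses memoization so that small values evaluate quickly, but the definition is the standard two-argument recursion:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """The Ackermann function: total and computable, but not
    primitive recursive; it eventually dominates every primitive
    recursive function.  Peano arithmetic proves it total, while
    primitive recursive arithmetic cannot.
    """
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Even modest arguments explode: ackermann(2, n) = 2n + 3 grows linearly, ackermann(3, n) = 2^(n+3) − 3 exponentially, and ackermann(4, 2) already has tens of thousands of digits.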

10.5 Name of the subject


The field of mathematical logic dealing with computability and its generalizations has been called “recursion theory”
since its early days. Robert I. Soare, a prominent researcher in the field, has proposed (Soare 1996) that the field should
be called “computability theory” instead. He argues that Turing’s terminology using the word “computable” is more
natural and more widely understood than the terminology using the word “recursive” introduced by Kleene. Many
contemporary researchers have begun to use this alternate terminology.[6] These researchers also use terminology

such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and
recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow[7] and
Simpson.[8] Some commentators argue that both the names recursion theory and computability theory fail to convey
the fact that most of the objects studied in recursion theory are not computable.[9]
Rogers (1967) has suggested that a key property of recursion theory is that its results and structures should be invariant
under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in
geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any
structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn
on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the
infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest
in recursion theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the
natural numbers.
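The claim that any two infinite computable sets are linked by a computable bijection can be witnessed directly: send the k-th element of one set to the k-th element of the other (a toy sketch with invented names; each set is given by its decision procedure, and the input is assumed to belong to the first set):

```python
from itertools import count, islice
from math import isqrt

def nth_element(in_set, k):
    """The k-th element (in increasing order) of a computable set,
    found by searching with the set's decision procedure."""
    return next(islice((n for n in count() if in_set(n)), k, None))

def bijection(in_A, in_B, a):
    """Computable bijection between two infinite computable sets:
    map the k-th element of A to the k-th element of B.  In Rogers'
    view this merely renames numbers, so all infinite computable
    sets are interchangeable.  Assumes a is a member of A."""
    k = sum(1 for n in range(a) if in_A(n))   # position of a within A
    return nth_element(in_B, k)

evens = lambda n: n % 2 == 0
squares = lambda n: isqrt(n) ** 2 == n
```

Since the search in `nth_element` terminates precisely because B is infinite and decidable, the construction is effective in both directions, which is what makes the renaming an equivalence in Rogers' sense.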

10.6 Professional organizations


The main professional organization for recursion theory is the Association for Symbolic Logic, which holds several
research conferences each year. The interdisciplinary research Association Computability in Europe (CiE) also orga-
nizes a series of annual conferences.

10.7 See also


• Recursion (computer science)

• Computability logic

• Transcomputational problem

10.8 Notes
[1] Csima, Barbara F., et al. “Bounding prime models.” The Journal of Symbolic Logic v. 69, n. 4 (2004), pp. 1117–1142.

[2] Many of these foundational papers are collected in The Undecidable (1965) edited by Martin Davis.

[3] The full paper can also be found at pages 150ff (with commentary by Charles Parsons at 144ff) in Feferman et al. editors
1990 Kurt Gödel Volume II Publications 1938-1974, Oxford University Press, New York, ISBN 978-0-19-514721-6. Both
reprintings have the following footnote * added to the Davis volume by Gödel in 1965: “To be more precise: a function
of integers is computable in any formal system containing arithmetic if and only if it is computable in arithmetic, where a
function f is called computable in S if there is in S a computable term representing f” (p. 150).

[4] Conference on Logic, Computability and Randomness, January 10–13, 2007.

[5] The homepage of Andre Nies has a list of open problems in Kolmogorov complexity

[6] Mathscinet searches for the titles like “computably enumerable” and “c.e.” show that many papers have been published
with this terminology as well as with the other one.

[7] Lance Fortnow, “Is it Recursive, Computable or Decidable?”, 2004-2-15, accessed 2006-1-9.

[8] Stephen G. Simpson, “What is computability theory?”, FOM email list, 1998-8-24, accessed 2006-1-9.

[9] Harvey Friedman, “Renaming recursion theory”, FOM email list, 1998-8-28, accessed 2006-1-9.

10.9 References
Undergraduate level texts

• S. B. Cooper, 2004. Computability Theory, Chapman & Hall/CRC. ISBN 1-58488-237-9
• N. Cutland, 1980. Computability, An introduction to recursive function theory, Cambridge University Press. ISBN 0-521-29465-7
• Y. Matiyasevich, 1993. Hilbert’s Tenth Problem, MIT Press. ISBN 0-262-13295-8

Advanced texts

• S. Jain, D. Osherson, J. Royer and A. Sharma, 1999. Systems that learn, an introduction to learning theory, second edition, Bradford Book. ISBN 0-262-10077-0
• S. Kleene, 1952. Introduction to Metamathematics, North-Holland (11th printing; 6th printing added comments). ISBN 0-7204-2103-9
• M. Lerman, 1983. Degrees of unsolvability, Perspectives in Mathematical Logic, Springer-Verlag. ISBN 3-540-12155-2
• Andre Nies, 2009. Computability and Randomness, Oxford University Press, 447 pages. ISBN 978-0-19-923076-1
• P. Odifreddi, 1989. Classical Recursion Theory, North-Holland. ISBN 0-444-87295-7
• P. Odifreddi, 1999. Classical Recursion Theory, Volume II, Elsevier. ISBN 0-444-50205-X
• H. Rogers, Jr., 1967. The Theory of Recursive Functions and Effective Computability, second edition 1987, MIT Press. ISBN 0-262-68052-1 (paperback), ISBN 0-07-053522-1
• G. Sacks, 1990. Higher Recursion Theory, Springer-Verlag. ISBN 3-540-19305-7
• S. G. Simpson, 1999. Subsystems of Second Order Arithmetic, Springer-Verlag. ISBN 3-540-64882-8
• R. I. Soare, 1987. Recursively Enumerable Sets and Degrees, Perspectives in Mathematical Logic, Springer-Verlag. ISBN 0-387-15299-7

Survey papers and collections • K. Ambos-Spies and P. Fejer, 2006. "Degrees of Unsolvability.” Unpublished
preprint.
• H. Enderton, 1977. “Elements of Recursion Theory.” Handbook of Mathematical Logic, edited by J.
Barwise, North-Holland (1977), pp. 527–566. ISBN 0-7204-2285-X
• Y. L. Ershov, S. S. Goncharov, A. Nerode, and J. B. Remmel, 1998. Handbook of Recursive Mathematics,
North-Holland (1998). ISBN 0-7204-2285-X
• M. Fairtlough and S. Wainer, 1998. “Hierarchies of Provably Recursive Functions”. In Handbook of
Proof Theory, edited by S. Buss, Elsevier (1998).
• R. I. Soare, 1996. Computability and recursion, Bulletin of Symbolic Logic v. 2 pp. 284–321.

Research papers and collections • Burgin, M. and Klinger, A. “Experience, Generations, and Limits in Machine Learning.” Theoretical Computer Science v. 317, No. 1/3, 2004, pp. 71–91
• A. Church, 1936a. “An unsolvable problem of elementary number theory.” American Journal of Mathematics v. 58, pp. 345–363. Reprinted in “The Undecidable”, M. Davis ed., 1965.
• A. Church, 1936b. “A note on the Entscheidungsproblem.” Journal of Symbolic Logic v. 1, n. 1, and v.
3, n. 3. Reprinted in “The Undecidable”, M. Davis ed., 1965.
• M. Davis, ed., 1965. The Undecidable—Basic Papers on Undecidable Propositions, Unsolvable Problems
and Computable Functions, Raven, New York. Reprint, Dover, 2004. ISBN 0-486-43228-9
• R. M. Friedberg, 1958. “Three theorems on recursive enumeration: I. Decomposition, II. Maximal Set,
III. Enumeration without repetition.” The Journal of Symbolic Logic, v. 23, pp. 309–316.
• Gold, E. Mark (1967). “Language Identification in the Limit” (PDF). Information and Control v. 10, pp. 447–474

• L. Harrington and R. I. Soare, 1991. “Post’s Program and incomplete recursively enumerable sets”,
Proceedings of the National Academy of Sciences of the USA, volume 88, pages 10242–10246.
• C. Jockusch, Jr., 1968. “Semirecursive sets and positive reducibility”, Trans. Amer. Math. Soc. 137, pp. 420–436
• S. C. Kleene and E. L. Post, 1954. “The upper semi-lattice of degrees of recursive unsolvability.”
Annals of Mathematics v. 2 n. 59, 379–407.

• Moore, C. (1996). “Recursion theory on the reals and continuous-time computation”. Theoretical
Computer Science. CiteSeerX: 10.1.1.6.5519.
• J. Myhill, 1956. “The lattice of recursively enumerable sets.” The Journal of Symbolic Logic, v.
21, pp. 215–220.
• Orponen, P. (1997). “A survey of continuous-time computation theory”. Advances in algorithms,
languages, and complexity. CiteSeerX: 10.1.1.53.1991.
• E. Post, 1944, “Recursively enumerable sets of positive integers and their decision problems”,
Bulletin of the American Mathematical Society, volume 50, pages 284–316.
• E. Post, 1947. “Recursive unsolvability of a problem of Thue.” Journal of Symbolic Logic v. 12,
pp. 1–11. Reprinted in “The Undecidable”, M. Davis ed., 1965.
• Shore, Richard A.; Slaman, Theodore A. (1999), “Defining the Turing jump” (PDF), Mathematical
Research Letters 6: 711–722, doi:10.4310/mrl.1999.v6.n6.a10, ISSN 1073-2780, MR 1739227
• T. Slaman and W. H. Woodin, 1986. "Definability in the Turing degrees.” Illinois J. Math. v. 30
n. 2, pp. 320–334.
• R. I. Soare, 1974. “Automorphisms of the lattice of recursively enumerable sets, Part I: Maximal
sets.” Annals of Mathematics, v. 100, pp. 80–120.
• A. Turing, 1937. “On computable numbers, with an application to the Entscheidungsproblem.”
Proceedings of the London Mathematics Society, ser. 2 v. 42, pp. 230–265. Corrections ibid.
v. 43 (1937) pp. 544–546. Reprinted in “The Undecidable”, M. Davis ed., 1965. PDF from
comlab.ox.ac.uk
• A. Turing, 1939. “Systems of logic based on ordinals.” Proceedings of the London Mathematics
Society, ser. 2 v. 45, pp. 161–228. Reprinted in “The Undecidable”, M. Davis ed., 1965.

10.10 External links


• Association for Symbolic Logic homepage

• Computability in Europe homepage


• Webpage on Recursion Theory Course at Graduate Level with approximately 100 pages of lecture notes

• German language lecture notes on inductive inference


Chapter 11

De Morgan’s laws

In propositional logic and boolean algebra, De Morgan’s laws[1][2][3] are a pair of transformation rules that are both
valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules
allow the expression of conjunctions and disjunctions purely in terms of each other via negation.
The rules can be expressed in English as:

The negation of a conjunction is the disjunction of the negations.


The negation of a disjunction is the conjunction of the negations.

or informally as:

"not (A and B)" is the same as "(not A) or (not B)"


also,
"not (A or B)" is the same as "(not A) and (not B)".

The rules can be expressed in formal language with two propositions P and Q as:

¬(P ∧ Q) ⇐⇒ (¬P ) ∨ (¬Q)

and

¬(P ∨ Q) ⇐⇒ (¬P ) ∧ (¬Q)

where:

• ¬ is the negation operator (NOT)


• ∧ is the conjunction operator (AND)
• ∨ is the disjunction operator (OR)
• ⇔ is a metalogical symbol meaning “can be replaced in a logical proof with”

Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs.
De Morgan’s laws are an example of a more general concept of mathematical duality.
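Since each law involves only two propositions, it can be checked exhaustively. A minimal Python sketch verifying both laws over all four truth assignments:

```python
from itertools import product

# Check both De Morgan laws on every truth assignment of P and Q.
for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))  # negation of conjunction
    assert (not (p or q)) == ((not p) and (not q))  # negation of disjunction
```

Because the connectives are truth-functional, checking the four assignments is a complete proof of both equivalences.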

11.1 Formal notation


The negation of conjunction rule may be written in sequent notation:


[Figure: De Morgan’s laws represented with Venn diagrams]

¬(P ∧ Q) ⊢ (¬P ∨ ¬Q)


The negation of disjunction rule may be written as:

¬(P ∨ Q) ⊢ (¬P ∧ ¬Q)



In rule form: negation of conjunction

¬(P ∧ Q)
∴ ¬P ∨ ¬Q
and negation of disjunction

¬(P ∨ Q)
∴ ¬P ∧ ¬Q
and expressed as a truth-functional tautology or theorem of propositional logic:

¬(P ∧ Q) → (¬P ∨ ¬Q)


¬(P ∨ Q) → (¬P ∧ ¬Q)
where P and Q are propositions expressed in some formal system.

11.1.1 Substitution form


De Morgan’s laws are normally shown in the compact form above, with negation of the output on the left and negation
of the inputs on the right. A clearer form for substitution can be stated as:

(P ∧ Q) ≡ ¬(¬P ∨ ¬Q)
(P ∨ Q) ≡ ¬(¬P ∧ ¬Q)
This emphasizes the need to invert both the inputs and the output, as well as change the operator, when doing a
substitution.
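The substitution form can likewise be verified exhaustively; the check below inverts both inputs and the output while swapping the operator:

```python
# Substitution form of De Morgan's laws: AND and OR expressed via each
# other by negating both inputs and the output.
for p in (False, True):
    for q in (False, True):
        assert (p and q) == (not ((not p) or (not q)))
        assert (p or q) == (not ((not p) and (not q)))
```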

11.1.2 Set theory and Boolean algebra


In set theory and Boolean algebra, it is often stated as “union and intersection interchange under complementation”,[4]
which can be formally expressed as:

\overline{A \cup B} \equiv \overline{A} \cap \overline{B}
\overline{A \cap B} \equiv \overline{A} \cup \overline{B}
where:

• \overline{A} is the negation of A, the overline being written above the terms to be negated
• ∩ is the intersection operator (AND)
• ∪ is the union operator (OR)

The generalized form is:

\overline{\bigcap_{i \in I} A_i} \equiv \bigcup_{i \in I} \overline{A_i}
\overline{\bigcup_{i \in I} A_i} \equiv \bigcap_{i \in I} \overline{A_i}

where I is some, possibly uncountable, indexing set.


In set notation, De Morgan’s laws can be remembered using the mnemonic “break the line, change the sign”.[5]
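The set-theoretic form, including the generalized version over an indexed family, can be illustrated with Python's built-in sets. The universe U and the particular sets below are arbitrary choices for the example:

```python
# Complement is taken relative to a fixed universe U (an arbitrary choice here).
U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}

def comp(s):
    return U - s

# Union and intersection interchange under complementation.
assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)

# Generalized form over an indexed family of sets.
family = [{0, 1}, {1, 2}, {1, 3, 4}]
assert comp(set().union(*family)) == set.intersection(*[comp(s) for s in family])
assert comp(set.intersection(*family)) == set().union(*[comp(s) for s in family])
```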

11.1.3 Engineering
In electrical and computer engineering, De Morgan’s laws are commonly written as:

\overline{A \cdot B} \equiv \overline{A} + \overline{B}

and

\overline{A + B} \equiv \overline{A} \cdot \overline{B},

where:

• · is a logical AND
• + is a logical OR
• the overbar is the logical NOT of what is underneath the overbar.

11.1.4 Text searching


De Morgan’s laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of
documents containing the words “cars” and “trucks”. De Morgan’s laws hold that these two searches will return the
same set of documents:

Search A: NOT (cars OR trucks)


Search B: (NOT cars) AND (NOT trucks)

The corpus of documents containing “cars” or “trucks” can be represented by four documents:

Document 1: Contains only the word “cars”.


Document 2: Contains only “trucks”.
Document 3: Contains both “cars” and “trucks”.
Document 4: Contains neither “cars” nor “trucks”.

To evaluate Search A, clearly the search “(cars OR trucks)” will hit on Documents 1, 2, and 3. So the negation of
that search (which is Search A) will hit everything else, which is Document 4.
Evaluating Search B, the search “(NOT cars)” will hit on documents that do not contain “cars”, which is Documents 2
and 4. Similarly the search “(NOT trucks)” will hit on Documents 1 and 4. Applying the AND operator to these two
searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4.
A similar evaluation can be applied to show that the following two searches will return the same set of documents
(Documents 1, 2, 4):

Search C: NOT (cars AND trucks)


Search D: (NOT cars) OR (NOT trucks)
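The four-document corpus can be modeled directly with sets of document identifiers; the representation and the helper `matches` below are illustrative assumptions, not a real search engine API:

```python
# The four example documents, each represented by the set of words it contains.
docs = {
    1: {"cars"},
    2: {"trucks"},
    3: {"cars", "trucks"},
    4: set(),
}
all_ids = set(docs)

def matches(word):
    # IDs of documents containing the given word.
    return {i for i, words in docs.items() if word in words}

search_a = all_ids - (matches("cars") | matches("trucks"))            # NOT (cars OR trucks)
search_b = (all_ids - matches("cars")) & (all_ids - matches("trucks"))  # (NOT cars) AND (NOT trucks)
assert search_a == search_b == {4}

search_c = all_ids - (matches("cars") & matches("trucks"))            # NOT (cars AND trucks)
search_d = (all_ids - matches("cars")) | (all_ids - matches("trucks"))  # (NOT cars) OR (NOT trucks)
assert search_c == search_d == {1, 2, 4}
```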

11.2 History
The laws are named after Augustus De Morgan (1806–1871),[6] who introduced a formal version of the laws to
classical propositional logic. De Morgan’s formulation was influenced by the algebraization of logic undertaken by George
Boole, which later cemented De Morgan’s claim to the discovery. Nevertheless, a similar observation had been made by Aristotle,
and was known to Greek and Medieval logicians.[7] For example, in the 14th century, William of Ockham wrote down

the words that would result by reading the laws out.[8] Jean Buridan, in his Summulae de Dialectica, also describes
rules of conversion that follow the lines of De Morgan’s laws.[9] Still, De Morgan is given credit for stating the laws
in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan’s laws can be
proved easily, and may even seem trivial.[10] Nonetheless, these laws are helpful in making valid inferences in proofs
and deductive arguments.

11.3 Informal proof


De Morgan’s theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part
of a formula.

11.3.1 Negation of a disjunction

In the case of its application to a disjunction, consider the following claim: “it is false that either of A or B is true”,
which is written as:

¬(A ∨ B)

In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true,
which may be written directly as:

(¬A) ∧ (¬B)

If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in
English, this follows the logic that “since two things are both false, it is also false that either of them is true”.
Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that “not
A” and “not B” are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction
must thus be true, and the result is identical to the first claim.

11.3.2 Negation of a conjunction

The application of De Morgan’s theorem to a conjunction is very similar to its application to a disjunction both in
form and rationale. Consider the following claim: “it is false that A and B are both true”, which is written as:

¬(A ∧ B)

In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction
of A and B would be true, making its negation false. Thus, one (at least) or more of A and B must be false (or
equivalently, one or more of “not A” and “not B” must be true). This may be written directly as:

(¬A) ∨ (¬B)

Presented in English, this follows the logic that “since it is false that two things are both true, at least one of them
must be false”.
Working in the opposite direction again, the second expression asserts that at least one of “not A” and “not B” must
be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their
conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression
is identical to the first claim.

11.4 Formal proof


The proof that (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ is completed in two steps, by proving both (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ and Aᶜ ∪ Bᶜ ⊆ (A ∩ B)ᶜ.
Let x ∈ (A ∩ B)ᶜ. Then x ∉ A ∩ B. Because A ∩ B = {y | y ∈ A and y ∈ B}, it must be the case that x ∉ A or x ∉ B. If x ∉ A, then x ∈ Aᶜ, so x ∈ Aᶜ ∪ Bᶜ. Similarly, if x ∉ B, then x ∈ Bᶜ, so x ∈ Aᶜ ∪ Bᶜ. Thus, ∀x (if x ∈ (A ∩ B)ᶜ, then x ∈ Aᶜ ∪ Bᶜ); that is, (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ.
To prove the reverse direction, let x ∈ Aᶜ ∪ Bᶜ, and assume x ∉ (A ∩ B)ᶜ. Under that assumption, it must be the case that x ∈ A ∩ B; it follows that x ∈ A and x ∈ B, and thus x ∉ Aᶜ and x ∉ Bᶜ. However, that means x ∉ Aᶜ ∪ Bᶜ, in contradiction to the hypothesis that x ∈ Aᶜ ∪ Bᶜ; the assumption x ∉ (A ∩ B)ᶜ must therefore be rejected, meaning that x ∈ (A ∩ B)ᶜ. Therefore, ∀x (if x ∈ Aᶜ ∪ Bᶜ, then x ∈ (A ∩ B)ᶜ); that is, Aᶜ ∪ Bᶜ ⊆ (A ∩ B)ᶜ.
Since Aᶜ ∪ Bᶜ ⊆ (A ∩ B)ᶜ and (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ, it follows that (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ; this concludes the proof of this De Morgan law.
The other De Morgan law, (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ, is proven similarly.

11.5 Extensions

[Figure: De Morgan’s laws represented as a circuit with logic gates]

In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always
find its dual), since in the presence of the identities governing negation, one may always introduce an operator that
is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely
the existence of negation normal forms: any formula is equivalent to another formula where negations only occur
applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications,
for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where
it is a prerequisite for finding the conjunctive normal form and disjunctive normal form of a formula. Computer
programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in
computations in elementary probability theory.

Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be
the operator Pd defined by

Pd (p, q, ...) = ¬P (¬p, ¬q, . . . ).

This idea can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals:

∀x P (x) ≡ ¬∃x ¬P (x),

∃x P (x) ≡ ¬∀x ¬P (x).


To relate these quantifier dualities to the De Morgan laws, set up a model with some small number of elements in its
domain D, such as

D = {a, b, c}.

Then

∀x P (x) ≡ P (a) ∧ P (b) ∧ P (c)

and

∃x P (x) ≡ P (a) ∨ P (b) ∨ P (c).

But, using De Morgan’s laws,

P (a) ∧ P (b) ∧ P (c) ≡ ¬(¬P (a) ∨ ¬P (b) ∨ ¬P (c))

and

P (a) ∨ P (b) ∨ P (c) ≡ ¬(¬P (a) ∧ ¬P (b) ∧ ¬P (c)),

verifying the quantifier dualities in the model.
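Over a finite domain the quantifier dualities can be checked for every possible predicate, since a predicate on D is determined by the subset of D on which it holds. A Python sketch of this exhaustive check:

```python
from itertools import chain, combinations

# Finite domain D = {a, b, c}; a predicate on D is determined by the
# subset of D where it is true.
D = {"a", "b", "c"}

def forall(pred):
    return all(pred(x) for x in D)

def exists(pred):
    return any(pred(x) for x in D)

# Check the dualities for every one of the 2^3 = 8 predicates over D.
for truthy in chain.from_iterable(combinations(sorted(D), r) for r in range(4)):
    pred = lambda x, t=set(truthy): x in t
    assert forall(pred) == (not exists(lambda x: not pred(x)))
    assert exists(pred) == (not forall(lambda x: not pred(x)))
```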


Then, the quantifier dualities can be extended further to modal logic, relating the box (“necessarily”) and diamond
(“possibly”) operators:

□p ≡ ¬♢¬p,

♢p ≡ ¬□¬p.
Aristotle observed this duality in its application to the alethic modalities of possibility and necessity. In normal
modal logic, the relationship of these modal operators to quantification can be understood by setting up models
using Kripke semantics.

11.6 See also


• Isomorphism (NOT operator as isomorphism between positive logic and negative logic)

• List of Boolean algebra topics



11.7 References
[1] Copi and Cohen

[2] Hurley

[3] Moore and Parker

[4] Boolean Algebra by R. L. Goodstein. ISBN 0-486-45894-6

[5] 2000 Solved Problems in Digital Electronics by S. P. Bali

[6] DeMorgan’s Theorems at mtsu.edu

[7] Bocheński’s History of Formal Logic

[8] William of Ockham, Summa Logicae, part II, sections 32 and 33.

[9] Jean Buridan, Summula de Dialectica. Trans. Gyula Klima. New Haven: Yale University Press, 2001. See especially
Treatise 1, Chapter 7, Section 5. ISBN 0-300-08425-0

[10] Augustus De Morgan (1806–1871) by Robert H. Orr

11.8 External links


• Hazewinkel, Michiel, ed. (2001), “Duality principle”, Encyclopedia of Mathematics, Springer, ISBN 978-1-
55608-010-4

• Weisstein, Eric W., “de Morgan’s Laws”, MathWorld.


• de Morgan’s laws at PlanetMath.org.
Chapter 12

Formal language

This article is about a technical term in mathematics and computer science. For related studies about natural lan-
guages, see Formal semantics (linguistics). For formal modes of speech in natural languages, see Register (sociolin-
guistics).
In mathematics, computer science, and linguistics, a formal language is a set of strings of symbols that may be
constrained by rules that are specific to it.

[Figure: parse tree showing the structure of a syntactically well-formed, although nonsensical, English sentence, “Colorless green ideas sleep furiously” (historical example from Chomsky 1957).]


The alphabet of a formal language is the set of symbols, letters, or tokens from which the strings of the language may
be formed; frequently it is required to be finite.[1] The strings formed from this alphabet are called words, and the


words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas.
A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar,
also called its formation rule.
The field of formal language theory studies primarily the purely syntactical aspects of such languages—that is,
their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the
syntactic regularities of natural languages. In computer science, formal languages are used among others as the
basis for defining the grammar of programming languages and formalized versions of subsets of natural languages
in which the words of the language represent concepts that are associated with particular meanings or semantics. In
computational complexity theory, decision problems are typically defined as formal languages, and complexity classes
are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In
logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems,
and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation
of formal languages in this way.

12.1 History

The first formal language is thought to be the one used by Gottlob Frege in his Begriffsschrift (1879), literally meaning
“concept writing”, and which Frege described as a “formal language of pure thought.”[2]
Axel Thue's early semi-Thue system, which can be used for rewriting strings, was influential on formal grammars.

12.2 Words over an alphabet

An alphabet, in the context of formal languages, can be any set, although it often makes sense to use an alphabet in the
usual sense of the word, or more generally a character set such as ASCII or Unicode. Alphabets can also be infinite;
e.g. first-order logic is often expressed using an alphabet which, besides symbols such as ∧, ¬, ∀ and parentheses,
contains infinitely many elements x0 , x1 , x2 , … that play the role of variables. The elements of an alphabet are called
its letters.
A word over an alphabet can be any finite sequence, or string, of characters or letters, which sometimes may include
spaces, and are separated by specified word separation characters. The set of all words over an alphabet Σ is usually
denoted by Σ* (using the Kleene star). The length of a word is the number of characters or letters it is composed
of. For any alphabet there is only one word of length 0, the empty word, which is often denoted by e, ε or λ. By
concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original
words. The result of concatenating a word with the empty word is the original word.
In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as
formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor.
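These properties of words (the empty word as identity for concatenation, additivity of length) map directly onto strings in most programming languages; a brief Python illustration:

```python
# Words over the alphabet {"a", "b"}, with the empty word epsilon.
epsilon = ""
u, v = "ab", "ba"

w = u + v                       # concatenation of two words
assert len(w) == len(u) + len(v)  # length of a concatenation is the sum of lengths
assert u + epsilon == u           # the empty word is the identity for concatenation
assert epsilon + u == u
assert len(epsilon) == 0          # the unique word of length 0
```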

12.3 Definition

A formal language L over an alphabet Σ is a subset of Σ* , that is, a set of words over that alphabet. Sometimes
the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of
'well-formed expressions’.
In computer science and mathematics, which do not usually deal with natural languages, the adjective “formal” is
often omitted as redundant.
While formal language theory usually concerns itself with formal languages that are described by some syntactical
rules, the actual definition of the concept “formal language” is only as above: a (possibly infinite) set of finite-length
strings composed from a given alphabet, no more nor less. In practice, there are many languages that can be described
by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the
intuitive concept of a “language,” one described by syntactic rules. By an abuse of the definition, a particular formal
language is often thought of as being equipped with a formal grammar that describes it.

12.4 Examples
The following rules describe a formal language L over the alphabet Σ = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, = }:

• Every nonempty string that does not contain "+" or "=" and does not start with “0” is in L.
• The string “0” is in L.
• A string containing "=" is in L if and only if there is exactly one "=", and it separates two valid strings of L.
• A string containing "+" but not "=" is in L if and only if every "+" in the string separates two valid strings of L.
• No string is in L other than those implied by the previous rules.

Under these rules, the string “23+4=555” is in L, but the string "=234=+" is not. This formal language expresses
natural numbers, well-formed addition statements, and well-formed addition equalities, but it expresses only what they
look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication
that “0” means the number zero, or that "+" means addition.
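The rules above can be turned into a small membership test. The helper names below (`is_number`, `is_sum`, `in_L`) are illustrative choices, not part of the original description:

```python
def is_number(s):
    # Nonempty digit string with no leading zero, or the single string "0".
    return s.isdigit() and (s == "0" or not s.startswith("0"))

def is_sum(s):
    # A number, or "+"-separated numbers: every "+" separates two valid strings of L.
    return all(is_number(part) for part in s.split("+"))

def in_L(s):
    if "=" in s:
        # Exactly one "=", separating two valid strings of L.
        left, _, right = s.partition("=")
        return "=" not in right and is_sum(left) and is_sum(right)
    return is_sum(s)

assert in_L("23+4=555")
assert not in_L("=234=+")
```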

12.4.1 Constructions
For finite languages one can explicitly enumerate all well-formed words. For example, we can describe a language L
as just L = {"a”, “b”, “ab”, “cba"}. The degenerate case of this construction is the empty language, which contains
no words at all (L = ∅).
However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are infinitely many words: “a”, “abb”,
“ababba”, “aaababbbbaab”, …. Therefore formal languages are typically infinite, and describing an infinite formal
language is not as simple as writing L = {"a”, “b”, “ab”, “cba"}. Here are some examples of formal languages:

• L = Σ* , the set of all words over Σ;


• L = {"a"}* = {"a"ⁿ}, where n ranges over the natural numbers and "a"ⁿ means "a" repeated n times (this is the
set of words consisting only of the symbol "a");
• the set of syntactically correct programs in a given programming language (the syntax of which is usually
defined by a context-free grammar);
• the set of inputs upon which a certain Turing machine halts; or
• the set of maximal strings of alphanumeric ASCII characters on this line, i.e., the set {"the”, “set”, “of”,
“maximal”, “strings”, “alphanumeric”, “ASCII”, “characters”, “on”, “this”, “line”, “i”, “e"}.
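Although languages such as Σ* are infinite, their words can still be enumerated in order of increasing length; a Python generator sketch:

```python
from itertools import count, islice, product

def words(alphabet):
    # Generate all words over the alphabet in order of increasing length:
    # the empty word, then all words of length 1, length 2, and so on.
    for n in count(0):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

first_seven = list(islice(words("ab"), 7))
# first_seven is ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```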

12.5 Language-specification formalisms


Formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned
with the study of various types of formalisms to describe languages. For instance, a language can be given as

• those strings generated by some formal grammar;


• those strings described or matched by a particular regular expression;
• those strings accepted by some automaton, such as a Turing machine or finite state automaton;
• those strings for which some decision procedure (an algorithm that asks a sequence of related YES/NO ques-
tions) produces the answer YES.
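As an example of the regular-expression style of specification, the language of strings over {a, b} containing an even number of a's (an illustrative language, not one from the text) can be described and tested as follows:

```python
import re

# An even number of a's: zero or more pairs of a's, separated by arbitrary b's.
even_as = re.compile(r"^(b*ab*ab*)*b*$")

assert even_as.match("bbb")     # zero a's
assert even_as.match("baab")    # two a's
assert not even_as.match("ab")  # one a
```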

Typical questions asked about such formalisms include:

• What is their expressive power? (Can formalism X describe every language that formalism Y can describe?
Can it describe other languages?)

• What is their recognizability? (How difficult is it to decide whether a given word belongs to a language described
by formalism X?)

• What is their comparability? (How difficult is it to decide whether two languages, one described in formalism
X and one in formalism Y, or in X again, are actually the same language?).

Surprisingly often, the answer to these decision problems is “it cannot be done at all”, or “it is extremely expen-
sive” (with a characterization of how expensive). Therefore, formal language theory is a major application area of
computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on
the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-
free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are
widely used in practical applications.

12.6 Operations on languages


Certain operations on languages are common. This includes the standard set operations, such as union, intersection,
and complement. Another class of operation is the element-wise application of string operations.
Examples: suppose L1 and L2 are languages over some common alphabet.

• The concatenation L1 L2 consists of all strings of the form vw where v is a string from L1 and w is a string from
L2 .

• The intersection L1 ∩ L2 of L1 and L2 consists of all strings which are contained in both languages

• The complement ¬L of a language with respect to a given alphabet consists of all strings over the alphabet that
are not in the language.

• The Kleene star: the language consisting of all words that are concatenations of 0 or more words in the original
language;

• Reversal:

• Let e be the empty word; then eᴿ = e, and
• for each non-empty word w = x₁…xₙ over some alphabet, let wᴿ = xₙ…x₁;
• then for a formal language L, Lᴿ = {wᴿ | w ∈ L}.

• String homomorphism

Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed
under a particular operation when the operation, applied to languages in the class, always produces a language in the
same class again. For instance, the context-free languages are known to be closed under union, concatenation, and
intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract
families of languages studies the most common closure properties of language families in their own right.[3]
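For finite languages, concatenation and reversal are straightforward to implement element-wise; a Python sketch (the final identity, that reversal distributes over concatenation with the order swapped, is a standard fact not stated above):

```python
def concat(L1, L2):
    # All strings vw with v from L1 and w from L2.
    return {v + w for v in L1 for w in L2}

def reverse(L):
    # Element-wise string reversal.
    return {w[::-1] for w in L}

L1 = {"a", "ab"}
L2 = {"b", ""}
assert concat(L1, L2) == {"ab", "a", "abb"}
assert reverse({"abc", "x"}) == {"cba", "x"}

# Reversal distributes over concatenation, with the order of the operands swapped.
assert reverse(concat(L1, L2)) == concat(reverse(L2), reverse(L1))
```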

12.7 Applications

12.7.1 Programming languages


Main articles: Syntax (programming languages) and Compiler compiler

A compiler usually has two distinct components. A lexical analyzer, generated by a tool like lex, identifies the tokens
of the programming language grammar, e.g. identifiers or keywords, which are themselves expressed in a simpler
formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, usually

generated by a parser generator like yacc, attempts to decide if the source program is valid, that is if it belongs to the
programming language for which the compiler was built. Of course, compilers do more than just parse the source
code—they usually translate it into some executable format. Because of this, a parser usually outputs more than a
yes/no answer, typically an abstract syntax tree, which is used by subsequent stages of the compiler to eventually
generate an executable containing machine code that runs directly on the hardware, or some intermediate code that
requires a virtual machine to execute.

12.7.2 Formal theories, systems and proofs

Symbols and
strings of symbols

Well-formed formulas

Theorems

This diagram shows the syntactic divisions within a formal system. Strings of symbols may be broadly divided into nonsense and
well-formed formulas. The set of well-formed formulas is divided into theorems and non-theorems.

Main articles: Theory (mathematical logic) and Formal system

In mathematical logic, a formal theory is a set of sentences expressed in a formal language.


A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a
deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation
rules which may be interpreted as valid rules of inference or a set of axioms, or have both. A formal system is used
to derive one expression from one or more other expressions. Although a formal language can be identified with
its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems FS and FS ′ may
have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic
consequence of a formula B in one but not another for instance).

A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as propositions)
each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last
sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be
interpreted as true propositions.

Interpretations and models

Main articles: Formal semantics (logic), Interpretation (logic) and Model theory

Formal languages are entirely syntactic in nature but may be given semantics that give meaning to the elements of the
language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language,
and an interpretation assigns a meaning to each of the formulas—usually, a truth value.
The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often
done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as mathematical
structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived
from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes
true.

12.8 See also


• Combinatorics on words
• Grammar framework
• Formal method
• Mathematical notation
• Associative array
• String (computer science)

12.9 References

12.9.1 Citation footnotes


[1] See e.g. Reghizzi, Stefano Crespi (2009), Formal Languages and Compilation, Texts in Computer Science, Springer, p. 8,
ISBN 9781848820500, An alphabet is a finite set.

[2] Martin Davis (1995). “Influences of Mathematical Logic on Computer Science”. In Rolf Herken. The universal Turing
machine: a half-century survey. Springer. p. 290. ISBN 978-3-211-82637-9.

[3] Hopcroft & Ullman (1979), Chapter 11: Closure properties of families of languages.

12.9.2 General references


• A. G. Hamilton, Logic for Mathematicians, Cambridge University Press, 1978, ISBN 0-521-21838-1.
• Seymour Ginsburg, Algebraic and automata theoretic properties of formal languages, North-Holland, 1975,
ISBN 0-7204-2506-9.
• Michael A. Harrison, Introduction to Formal Language Theory, Addison-Wesley, 1978.
• John E. Hopcroft and Jeffrey D. Ullman, Introduction to Automata Theory, Languages, and Computation,
Addison-Wesley Publishing, Reading Massachusetts, 1979. ISBN 81-7808-347-7.
• Rautenberg, Wolfgang (2010). A Concise Introduction to Mathematical Logic (3rd ed.). New York: Springer
Science+Business Media. doi:10.1007/978-1-4419-1221-3. ISBN 978-1-4419-1220-6.

• Grzegorz Rozenberg, Arto Salomaa, Handbook of Formal Languages: Volume I-III, Springer, 1997, ISBN
3-540-61486-9.
• Patrick Suppes, Introduction to Logic, D. Van Nostrand, 1957, ISBN 0-442-08072-7.

12.10 External links


• Hazewinkel, Michiel, ed. (2001), “Formal language”, Encyclopedia of Mathematics, Springer, ISBN 978-1-
55608-010-4
• Alphabet at PlanetMath.org.

• Language at PlanetMath.org.
• University of Maryland, Formal Language Definitions

• James Power, “Notes on Formal Language Theory and Parsing”, 29 November 2002.

• Drafts of some chapters in the “Handbook of Formal Language Theory”, Vol. 1-3, G. Rozenberg and A.
Salomaa (eds.), Springer Verlag, (1997):

• Alexandru Mateescu and Arto Salomaa, “Preface” in Vol.1, pp. v-viii, and “Formal Languages: An
Introduction and a Synopsis”, Chapter 1 in Vol. 1, pp.1-39
• Sheng Yu, “Regular Languages”, Chapter 2 in Vol. 1
• Jean-Michel Autebert, Jean Berstel, Luc Boasson, “Context-Free Languages and Push-Down Automata”,
Chapter 3 in Vol. 1
• Christian Choffrut and Juhani Karhumäki, “Combinatorics of Words”, Chapter 6 in Vol. 1
• Tero Harju and Juhani Karhumäki, “Morphisms”, Chapter 7 in Vol. 1, pp. 439 - 510
• Jean-Eric Pin, “Syntactic semigroups”, Chapter 10 in Vol. 1, pp. 679-746
• M. Crochemore and C. Hancart, “Automata for matching patterns”, Chapter 9 in Vol. 2
• Dora Giammarresi, Antonio Restivo, “Two-dimensional Languages”, Chapter 4 in Vol. 3, pp. 215 - 267
Chapter 13

Functional completeness

In logic, a functionally complete set of logical connectives or Boolean operators is one which can be used to express
all possible truth tables by combining members of the set into a Boolean expression.[1][2] A well-known complete set
of connectives is { AND, NOT }, consisting of binary conjunction and negation. The singleton sets { NAND } and
{ NOR } are also functionally complete.
In the context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate.[3]
From the point of view of digital electronics, functional completeness means that every possible logic gate can be
realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from
either only binary NAND gates, or only binary NOR gates.

13.1 Formal definition


Given the Boolean domain B = {0,1}, a set F of Boolean functions ƒᵢ: Bⁿⁱ → B is functionally complete if the clone
on B generated by the basic functions ƒᵢ contains all functions ƒ: Bⁿ → B for all n ≥ 1. In other words, the set is
functionally complete if every Boolean function that takes at least one variable can be expressed in terms of the
functions ƒᵢ. Since every Boolean function of at least one variable can be expressed in terms of binary Boolean
functions, F is functionally complete if and only if every binary Boolean function can be expressed in terms of the
functions in F.
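This definition can be illustrated by brute force (a sketch not from the original text): represent each binary function by its truth table, start from the two projections, and close the set under pointwise application of NAND. If NAND is functionally complete, the closure must contain all 16 binary Boolean functions.

```python
from itertools import product

# A binary Boolean function is represented by its truth table: the
# tuple of outputs for inputs (0,0), (0,1), (1,0), (1,1).
def nand(a, b):
    return 1 - (a & b)

proj_x = (0, 0, 1, 1)   # truth table of f(x, y) = x
proj_y = (0, 1, 0, 1)   # truth table of f(x, y) = y

# Close {x, y} under pointwise NAND: each new table is the truth table
# of a NAND-expression built from previously obtained expressions.
funcs = {proj_x, proj_y}
changed = True
while changed:
    changed = False
    for f, g in list(product(funcs, repeat=2)):
        h = tuple(nand(a, b) for a, b in zip(f, g))
        if h not in funcs:
            funcs.add(h)
            changed = True

print(len(funcs))   # 16: NAND alone generates every binary Boolean function
```

The same closure computation with, say, AND alone would stall well short of 16 tables, since AND generates only monotone functions.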
A more natural condition would be that the clone generated by F consist of all functions ƒ: Bn → B, for all integers n
≥ 0. However, the examples given above are not functionally complete in this stronger sense because it is not possible
to write a nullary function, i.e. a constant expression, in terms of F if F itself does not contain at least one nullary
function. With this stronger definition, the smallest functionally complete sets would have 2 elements.
Another natural condition would be that the clone generated by F together with the two nullary constant functions
be functionally complete or, equivalently, functionally complete in the strong sense of the previous paragraph. The
example of the Boolean function given by S(x, y, z) = z if x = y and S(x, y, z) = x otherwise shows that this condition
is strictly weaker than functional completeness.[4][5][6]

13.2 Informal definition


Modern texts on logic typically take as primitive some subset of the connectives: conjunction ( ∧ ), or Kpq; disjunction
( ∨ ), or Apq; negation ( ¬ ), or Np; material conditional ( → ), or Cpq; and possibly the biconditional ( ↔ ), or Epq.
These connectives are functionally complete. However, they do not form a minimal functionally complete set, as the
conditional and biconditional may be defined as:

A → B := ¬A ∨ B
A ↔ B := (A → B) ∧ (B → A).


So {¬, ∧, ∨} is also functionally complete. But then, ∨ can be defined as

A ∨ B := ¬(¬A ∧ ¬B).

∧ can also be defined in terms of ∨ in a similar manner.


It is also the case that ∨ can be defined in terms of → as follows:

A ∨ B := (A → B) → B.

No further simplifications are possible. Hence {¬, ∧}, {¬, ∨}, and {¬, →} are each minimal functionally complete
subsets of {¬, ∧, ∨, →, ↔} .
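The defining equivalences above can be checked mechanically by exhausting the four truth-value assignments; the sketch below is purely illustrative:

```python
from itertools import product

# Truth-table verification of the equivalences quoted in this section.
def implies(a, b):
    return (not a) or b          # A → B := ¬A ∨ B

for a, b in product((False, True), repeat=2):
    assert (implies(a, b) and implies(b, a)) == (a == b)   # A ↔ B := (A→B) ∧ (B→A)
    assert (a or b) == (not ((not a) and (not b)))         # ∨ from {¬, ∧}
    assert (a and b) == (not ((not a) or (not b)))         # ∧ from {¬, ∨}, dually
    assert (a or b) == implies(implies(a, b), b)           # ∨ from → alone
print("all equivalences hold on every assignment")
```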

13.3 Characterization of functional completeness


Further information: Post’s lattice

Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of
the following sets of connectives:

• The monotonic connectives; changing the truth value of any connected variables from F to T without changing
any from T to F never makes these connectives change their return value from T to F, e.g. ∨ , ∧ , ⊤ , ⊥ .

• The affine connectives, such that each connected variable either always or never affects the truth value these
connectives return, e.g. ¬ , ⊤ , ⊥ , ↔ , ̸↔ .

• The self-dual connectives, which are equal to their own de Morgan dual; if the truth values of all variables are
reversed, so is the truth value these connectives return, e.g. ¬ , MAJ(p,q,r).

• The truth-preserving connectives; they return the truth value T under any interpretation which assigns T to
all variables, e.g. ∨ , ∧ , ⊤ , → , ↔ .

• The falsity-preserving connectives; they return the truth value F under any interpretation which assigns F to
all variables, e.g. ∨ , ∧ , ⊥ , ̸→ , ̸↔ .

In fact, Post gave a complete description of the lattice of all clones (sets of operations closed under composition and
containing all projections) on the two-element set {T, F}, nowadays called Post’s lattice, which implies the above
result as a simple corollary: the five mentioned sets of connectives are exactly the maximal clones.
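Post's criterion lends itself to a direct mechanical test: implement membership predicates for the five maximal clones and check that, for each clone, some member of the candidate set escapes it. The sketch below is an illustrative rendition of the criterion, not Post's original formulation; connectives are given as Python functions with an explicit arity.

```python
from itertools import product

def rows(n):
    return list(product((0, 1), repeat=n))

def monotone(f, n):
    # raising any input from 0 to 1 never lowers the output
    return all(f(*u) <= f(*v) for u in rows(n) for v in rows(n)
               if all(a <= b for a, b in zip(u, v)))

def affine(f, n):
    # f(x1..xn) = c0 XOR c1*x1 XOR ... XOR cn*xn for some constants ci
    c0 = f(*((0,) * n))
    c = [f(*tuple(int(j == i) for j in range(n))) ^ c0 for i in range(n)]
    return all(f(*u) == c0 ^ (sum(ci * x for ci, x in zip(c, u)) % 2)
               for u in rows(n))

def self_dual(f, n):
    # negating every input negates the output
    return all(f(*u) != f(*tuple(1 - x for x in u)) for u in rows(n))

def truth_preserving(f, n):
    return f(*((1,) * n)) == 1

def falsity_preserving(f, n):
    return f(*((0,) * n)) == 0

def complete(connectives):
    """connectives: list of (function, arity) pairs. Functionally
    complete iff the set escapes each of Post's five maximal clones."""
    clones = [monotone, affine, self_dual, truth_preserving, falsity_preserving]
    return all(any(not inside(f, n) for f, n in connectives) for inside in clones)

def nand(a, b): return 1 - a * b
def land(a, b): return a * b
def lnot(a): return 1 - a

print(complete([(nand, 2)]))             # True: NAND escapes all five clones
print(complete([(land, 2), (lnot, 1)]))  # True: {AND, NOT}
print(complete([(land, 2)]))             # False: AND is monotone, among others
```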

13.4 Minimal functionally complete operator sets


When a single logical connective or Boolean operator is functionally complete by itself, it is called a Sheffer function[7] or sometimes a sole sufficient operator. There are no unary operators with this property; the only binary
Sheffer functions are NAND and NOR, which are dual to each other. These were discovered but not published by Charles Sanders Peirce
around 1880, and rediscovered independently and published by Henry M. Sheffer in 1913.[8] In digital electronics
terminology, the binary NAND gate and the binary NOR gate are the only binary universal logic gates.
The following are the minimal functionally complete sets of logical connectives with arity ≤ 2:[9]

One element {NAND}, {NOR}.


Two elements { ∨ , ¬}, { ∧ , ¬}, {→, ¬}, {←, ¬}, {→, ⊥ }, {←, ⊥ }, {→, ̸↔ }, {←, ̸↔ }, {→, ̸→ }, {→, ̸← },
{←, ̸→ }, {←, ̸← }, { ̸→ , ¬}, { ̸← , ¬}, { ̸→ , ⊤ }, { ̸← , ⊤ }, { ̸→ , ↔ }, { ̸← , ↔ }.

Three elements { ∨ , ↔ , ⊥ }, { ∨ , ↔ , ̸↔ }, { ∨ , ̸↔ , ⊤ }, { ∧ , ↔ , ⊥ }, { ∧ , ↔ , ̸↔ }, { ∧ , ̸↔ , ⊤ }.

There are no minimal functionally complete sets containing more than three at-most-binary logical connectives.[9]
To keep the list readable, constant connectives and binary connectives that depend on only one of their arguments
have been omitted. For example, the set consisting of binary ∨ and the binary connective given by negation of the
first argument (ignoring the second) is another minimal functionally complete set.

13.5 Examples

• Examples of expressing other connectives using only NAND:[10]

• ¬A = A NAND A
• A ∧ B = ¬(A NAND B) = (A NAND B) NAND (A NAND B)
• A ∨ B = (A NAND A) NAND (B NAND B)

• Examples of expressing other connectives using only NOR:[11]

• ¬A = A NOR A
• A ∧ B = (A NOR A) NOR (B NOR B)
• A ∨ B = (A NOR B) NOR (A NOR B)

Note that an electronic circuit or a software function can be optimized by reusing subexpressions, which reduces the
number of gates. For instance, the “A ∧ B” operation, when expressed with NAND gates, can be implemented by
reusing “A NAND B”:

X = (A NAND B); A ∧ B = X NAND X
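The constructions above can be confirmed by exhausting all four truth-value assignments; a small illustrative check:

```python
from itertools import product

# Truth-table check of the NAND and NOR constructions listed above,
# including the shared-subexpression form X = A NAND B.
def nand(a, b): return not (a and b)
def nor(a, b):  return not (a or b)

for a, b in product((False, True), repeat=2):
    assert nand(a, a) == (not a)
    x = nand(a, b)                                   # reuse X = A NAND B
    assert nand(x, x) == (a and b)
    assert nand(nand(a, a), nand(b, b)) == (a or b)

    assert nor(a, a) == (not a)
    assert nor(nor(a, a), nor(b, b)) == (a and b)
    assert nor(nor(a, b), nor(a, b)) == (a or b)
print("NAND and NOR constructions verified")
```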

13.6 In other domains


Apart from logical connectives (Boolean operators), functional completeness can be introduced in other domains.
For example, a set of reversible gates is called functionally complete if it can express every reversible operator.
The 3-input Fredkin gate is a functionally complete reversible gate by itself – a sole sufficient operator. There are many
other three-input universal logic gates, such as the Toffoli gate.

13.7 Set theory


There is an isomorphism between the algebra of sets and Boolean algebra; that is, they have the same structure.
Mapping Boolean operators to set operators therefore carries the discussion above over to sets: there are many
minimal functionally complete sets of set-theoretic operators that can generate any other set relation. The most
popular ones are {¬, ∩} (complement and intersection) and {¬, ∪} (complement and union).
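As an illustration (with an arbitrary toy universe, not from the original text), complement and intersection suffice to recover union via De Morgan's law, mirroring the Boolean set {¬, ∧}:

```python
# Complement is taken relative to a fixed toy universe U.
U = set(range(10))
A, B = {1, 2, 3}, {3, 4}

def complement(s):
    return U - s

# De Morgan: A ∪ B = ¬(¬A ∩ ¬B)
union = complement(complement(A) & complement(B))
assert union == A | B
print(sorted(union))   # [1, 2, 3, 4]
```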

13.8 See also

• Algebra of sets

• Boolean algebra

13.9 References
[1] Enderton, Herbert (2001), A mathematical introduction to logic (2nd ed.), Boston, MA: Academic Press, ISBN 978-0-12-
238452-3. (“Complete set of logical connectives”).

[2] Nolt, John; Rohatyn, Dennis; Varzi, Achille (1998), Schaum’s outline of theory and problems of logic (2nd ed.), New York:
McGraw–Hill, ISBN 978-0-07-046649-4. ("[F]unctional completeness of [a] set of logical operators”).

[3] Smith, Peter (2003), An introduction to formal logic, Cambridge University Press, ISBN 978-0-521-00804-4. (Defines
“expressively adequate”, shortened to “adequate set of connectives” in a section heading.)

[4] Wesselkamper, T.C. (1975), “A sole sufficient operator”, Notre Dame Journal of Formal Logic 16: 86–88, doi:10.1305/ndjfl/1093891614

[5] Massey, G.J. (1975), “Concerning an alleged Sheffer function”, Notre Dame Journal of Formal Logic 16 (4): 549–550,
doi:10.1305/ndjfl/1093891898

[6] Wesselkamper, T.C. (1975), “A Correction To My Paper” A. Sole Sufficient Operator”, Notre Dame Journal of Formal
Logic 16 (4): 551, doi:10.1305/ndjfl/1093891899

[7] The term was originally restricted to binary operations, but since the end of the 20th century it is used more generally.
Martin, N.M. (1989), Systems of logic, Cambridge University Press, p. 54, ISBN 978-0-521-36770-7.

[8] Scharle, T.W. (1965), “Axiomatization of propositional calculus with Sheffer functors”, Notre Dame J. Formal Logic 6 (3):
209–217, doi:10.1305/ndjfl/1093958259.

[9] Wernick, William (1942) “Complete Sets of Logical Functions,” Transactions of the American Mathematical Society 51:
117–32. In his list on the last page of the article, Wernick does not distinguish between ← and →, or between ̸← and ̸→ .

[10] “NAND Gate Operations” at http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html

[11] “NOR Gate Operations” at http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nor.html


Chapter 14

Halting problem

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary
computer program and an input, whether the program will finish running or continue to run forever.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs
cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known
as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a
decision problem.
Jack Copeland (2004) attributes the term halting problem to Martin Davis.[1]

14.1 Background
The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model
of computation, i.e., all programs that can be written in some given programming language that is general enough
to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program,
whether the program will eventually halt when run with that input. In this abstract framework, there are no resource
limitations on the amount of memory or time required for the program’s execution; it can take arbitrarily long, and
use arbitrarily much storage space, before halting. The question is simply whether the given program will ever halt
on a particular input.
For example, in pseudocode, the program

while (true) continue

does not halt; rather, it goes on forever in an infinite loop. On the other hand, the program

print "Hello, world!"

does halt.
While deciding whether these programs halt is simple, more complex programs prove problematic.
One approach to the problem might be to run the program for some number of steps and check if it halts. But if the
program does not halt, it is unknown whether the program will eventually halt or run forever.
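This bounded-simulation approach can be sketched as follows, with toy “programs” modelled as Python generators that yield once per step (an informal stand-in for real machines, not a faithful model of computation):

```python
# Run a program for at most `budget` steps; report "halts" if it
# finishes in time, otherwise "unknown" -- we cannot tell a slow
# program from a non-halting one.
def bounded_check(program, x, budget):
    gen = program(x)
    for _ in range(budget):
        try:
            next(gen)                 # one simulation step
        except StopIteration:
            return "halts"
    return "unknown"

def countdown(x):                     # halts after x steps
    while x > 0:
        x -= 1
        yield

def forever(x):                       # never halts
    while True:
        yield

print(bounded_check(countdown, 5, 100))  # halts
print(bounded_check(forever, 5, 100))    # unknown
```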
Turing proved no algorithm can exist which will always correctly decide whether, for a given arbitrary program and
its input, the program halts when run with that input; the essence of Turing’s proof is that any such algorithm can be
made to contradict itself, and therefore cannot be correct.

14.2 Importance and consequences


The halting problem is historically important because it was one of the first problems to be proved undecidable.
(Turing’s proof went to press in May 1936, whereas Alonzo Church's proof of the undecidability of a problem in


the lambda calculus had already been published in April 1936.) Subsequently, many other undecidable problems
have been described; the typical method of proving a problem to be undecidable is with the technique of reduction.
To do this, it is sufficient to show that if a solution to the new problem were found, it could be used to decide an
undecidable problem by transforming instances of the undecidable problem into instances of the new problem. Since
we already know that no method can decide the old problem, no method can decide the new problem either. Often
the new problem is reduced to solving the halting problem. (Note: the same technique is used to demonstrate that a
problem is NP complete, only in this case, rather than demonstrating that there is no solution, it demonstrates there
is no polynomial time solution, assuming P ≠ NP).
For example, one such consequence of the halting problem’s undecidability is that there cannot be a general algorithm
that decides whether a given statement about natural numbers is true or not. The reason for this is that the proposition
stating that a certain program will halt given a certain input can be converted into an equivalent statement about
natural numbers. If we had an algorithm that could solve every statement about natural numbers, it could certainly
solve this one; but that would determine whether the original program halts, which is impossible, since the halting
problem is undecidable.
Rice’s theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial prop-
erty, there is no general decision procedure that, for all programs, decides whether the partial function implemented
by the input program has that property. (A partial function is a function which may not always produce a result, and
so is used to model programs, which can either produce results or fail to halt.) For example, the property “halt for the
input 0” is undecidable. Here, “non-trivial” means that the set of partial functions that satisfy the property is neither
the empty set nor the set of all partial functions. For example, “halts or fails to halt on input 0” is clearly true of all
partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports “true.” Also, note
that this theorem holds only for properties of the partial function implemented by the program; Rice’s Theorem does
not apply to properties of the program itself. For example, “halt on input 0 within 100 steps” is not a property of
the partial function that is implemented by the program—it is a property of the program implementing the partial
function and is very much decidable.
Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally
is said to represent the probability that a randomly produced program halts. These numbers have the same Turing
degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely
computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few
digits can be calculated in simple cases.
While Turing’s proof shows that there can be no general method or algorithm to determine whether algorithms halt,
individual instances of that problem may very well be susceptible to attack. Given a specific algorithm, one can often
show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof.
But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to
determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in
an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of
research is known as automated termination analysis.
Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing ma-
chine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods.
However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle
machines). It is an open question whether there can be actual deterministic physical processes that, in the long run,
elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be
harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing
machine amongst other things. It is also an open question whether any such unknown physical processes are involved
in the working of the human brain, and whether humans can solve the halting problem (Copeland 2004, p. 15).

14.3 Representation as a set


The conventional representation of decision problems is the set of objects possessing the property in question. The
halting set

K := { (i, x) | program i halts when run on input x}

represents the halting problem.



This set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it
contains.[2] However, the complement of this set is not recursively enumerable.[2]
There are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting
problem is such a formulation. Examples of such sets include:

• { i | program i eventually halts when run with input 0 }

• { i | there is an input x such that program i eventually halts when run with input x }.
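The recursive enumerability of K can be illustrated by dovetailing: interleave one simulation step of each pair (i, x), listing a pair as soon as its run halts. In the sketch below the program list and the generator-based machine model are illustrative assumptions, not the standard construction:

```python
# Toy "programs": generators that yield once per step.
def halts_after(n):
    def prog(x):
        for _ in range(n + x):
            yield
    return prog

def loops_forever(x):
    while True:
        yield

programs = [halts_after(3), loops_forever, halts_after(0)]

def enumerate_halting_pairs(rounds):
    """List the pairs (i, x) observed to halt within `rounds` of dovetailing."""
    found, sims = [], {}
    for budget in range(1, rounds + 1):
        for i, p in enumerate(programs):
            for x in range(budget):                  # bring in new inputs gradually
                gen = sims.setdefault((i, x), p(x))
                if gen is None:
                    continue                         # this pair already halted
                try:
                    next(gen)                        # one simulation step
                except StopIteration:
                    found.append((i, x))
                    sims[(i, x)] = None
    return found

found = enumerate_halting_pairs(12)
print(found)
```

Every halting pair is eventually listed, but the procedure never certifies that a pair such as (1, 0) is absent from K; that is exactly why the complement of K is not recursively enumerable.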

14.4 Sketch of proof


The proof shows there is no total computable function that decides whether an arbitrary program i halts on arbitrary
input x; that is, the following function h is not computable (Penrose 1990, p. 57–63):

h(i, x) = 1 if program i halts on input x,
          0 otherwise.

Here program i refers to the i th program in an enumeration of all the programs of a fixed Turing-complete model of
computation.
[Figure: Possible values of a total computable function f arranged in a 2D array. The orange cells are the diagonal; the values of f(i,i) and g(i) are shown at the bottom, with U indicating that g is undefined for that input.]

The proof proceeds by directly establishing that every total computable function with two arguments differs from the
required function h. To this end, given any total computable binary function f, the following partial function g is also
computable by some program e:

g(i) = 0         if f(i, i) = 0,
       undefined otherwise.

The verification that g is computable relies on the following constructs (or their equivalents):

• computable subprograms (the program that computes f is a subprogram in program e),

• duplication of values (program e computes the inputs i,i for f from the input i for g),

• conditional branching (program e selects between two results depending on the value it computes for f(i,i)),

• not producing a defined result (for example, by looping forever),

• returning a value of 0.

The following pseudocode illustrates a straightforward way to compute g:


procedure compute_g(i):
    if f(i, i) == 0 then
        return 0
    else
        loop forever

Because g is partial computable, there must be a program e that computes g, by the assumption that the model of
computation is Turing-complete. This program is one of all the programs on which the halting function h is defined.
The next step of the proof shows that h(e,e) will not have the same value as f(e,e).
It follows from the definition of g that exactly one of the following two cases must hold:

• f(e,e) = 0 and so g(e) = 0. In this case h(e,e) = 1, because program e halts on input e.

• f(e,e) ≠ 0 and so g(e) is undefined. In this case h(e,e) = 0, because program e does not halt on input e.

In either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two
arguments, all such functions must differ from h.
This proof is analogous to Cantor’s diagonal argument. One may visualize a two-dimensional array with one column
and one row for each natural number, as indicated in the table above. The value of f(i,j) is placed at column i, row
j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The
construction of the function g can be visualized using the main diagonal of this array. If the array has a 0 at position
(i,i), then g(i) is 0. Otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of
the array corresponding to g itself. Now assume f is the halting function h. If g(e) is defined (g(e) = 0 in this case),
then program e halts on input e, so f(e,e) = 1; but g(e) = 0 only when f(e,e) = 0, a contradiction. Similarly, if g(e)
is undefined, then program e does not halt on input e, so f(e,e) = 0; but then the construction of g gives g(e) = 0,
contradicting the assumption that g(e) is undefined. In both cases a contradiction arises. Therefore no computable
function f can be the halting function h.
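The diagonalization can be rendered concretely. In the sketch below, programs are modelled informally as Python functions, and f is a deliberately naive candidate decider (it claims every program halts); the constructed g then witnesses that f disagrees with h at (e, e). This is an illustration of the argument, not a faithful enumeration of machines:

```python
def f(program, x):
    # A (necessarily wrong) total candidate for the halting function h:
    # it claims that every program halts on every input.
    return 1

def compute_g(i):
    if f(i, i) == 0:
        return 0
    while True:       # f(i, i) != 0: loop forever, so g(i) is undefined
        pass

# f asserts that compute_g halts on input compute_g (it returns 1),
# but by construction compute_g would then loop forever on that very
# input -- so f disagrees with h at (e, e). Do not actually call
# compute_g(compute_g); it would never return.
print(f(compute_g, compute_g))   # 1, yet g(g) is undefined
```

Any other total candidate f fails the same way: compute_g is built from f so that it halts exactly where f says it does not.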

14.5 Proof as a corollary of the uncomputability of Kolmogorov complexity
The undecidability of the halting problem also follows from the fact that Kolmogorov complexity is not computable. If
the halting problem were decidable, it would be possible to construct a program that generated programs of increasing
length, running those that halt and comparing their final outputs with a string parameter until one matched (which
must happen eventually, as any string can be generated by a program that contains it as data and just lists it); the length
of the matching generated program would then be the Kolmogorov complexity of the parameter, as the terminating
generated program must be the shortest (or shortest equal) such program.[3]

14.6 Common pitfalls


The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs
and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always
answers “halts” and another that always answers “doesn't halt”. For any specific program and input, one of these two
algorithms answers correctly, even though nobody may know which one.
There are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs
can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation,
which shows that the original program halted. However, an interpreter will not halt if its input program does not
halt, so this approach cannot solve the halting problem as stated. It does not successfully answer “doesn't halt” for
programs that do not halt.
The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with
finite memory. A machine with finite memory has a finite number of states, and thus any deterministic program on
it must eventually either halt or repeat a previous state:

...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repeti-
tive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the
machine... (italics in original, Minsky 1967, p. 24)

Minsky warns us, however, that machines such as computers with e.g., a million small parts, each with two states,
will have at least 2^1,000,000 possible states:

This is a 1 followed by about three hundred thousand zeroes ... Even if such a machine were to operate
at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the
time of a journey through such a cycle (Minsky 1967 p. 25):

Minsky exhorts the reader to be suspicious: although a machine may be finite, finite automata “have a number
of theoretical limitations”:

...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the
mere finiteness [of] the state diagram may not carry a great deal of significance. (Minsky p. 25)

It can also be decided automatically whether a nondeterministic machine with finite memory halts on none of, some
of, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
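The decidability argument for finite-memory machines amounts to cycle detection: simulate the machine and stop as soon as a configuration repeats. A toy sketch (the particular transition function is an arbitrary example):

```python
# Decide halting for a deterministic machine with finitely many
# configurations: either a halting state is reached, or some
# configuration repeats, after which behaviour is periodic forever.
def halts(step, start, halt_states):
    seen = set()
    state = start
    while state not in halt_states:
        if state in seen:
            return False          # revisited configuration: never halts
        seen.add(state)
        state = step(state)
    return True

# Example machine: count down modulo 5 over states {0,...,4}.
def step(s):
    return (s - 1) % 5

print(halts(step, 3, {0}))    # True: 3 -> 2 -> 1 -> 0 (halt)
print(halts(step, 3, set()))  # False: cycles 3,2,1,0,4,3,...
```

The catch, as Minsky notes, is that the number of configurations of a real computer is astronomically large, so this decision procedure is useless in practice.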

14.7 Formalization
In his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result
is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational
power to Turing machines, such as Markov algorithms, Lambda calculus, Post systems, register machines, or tag
systems.
What is important is that the formalization allows a straightforward mapping of algorithms to some data type that
the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as
Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms
define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms
to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n
characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.
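The string-to-number mapping can be made concrete with bijective base-n numerals, which give a one-to-one correspondence between strings and natural numbers; the three-letter alphabet below is an arbitrary example:

```python
# Bijective base-n numbering: each string over an n-letter alphabet
# corresponds to exactly one positive integer, and vice versa.
ALPHABET = "abc"                   # n = 3, an arbitrary example

def encode(s):
    num = 0
    for ch in s:
        num = num * len(ALPHABET) + ALPHABET.index(ch) + 1
    return num

def decode(num):
    s = ""
    while num > 0:
        num, r = divmod(num - 1, len(ALPHABET))
        s = ALPHABET[r] + s
    return s

assert decode(encode("cab")) == "cab"
print(encode("cab"))   # 32
```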

14.8 Relationship with Gödel’s incompleteness theorems


The concepts raised by Gödel’s incompleteness theorems are very similar to those raised by the halting problem, and
the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the
undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness
theorem by asserting that a complete, consistent and sound axiomatization of all statements about natural numbers is
unachievable. The “sound” part is the weakening: it means that we require the axiomatic system in question to prove
only true statements about natural numbers (it’s very important to observe that the statement of the standard form
of Gödel’s First Incompleteness Theorem is completely unconcerned with the question of truth, and only concerns
formal provability).
The weaker form of the theorem can be proven from the undecidability of the halting problem as follows. Assume
that we have a consistent and complete axiomatization of all true first-order logic statements about natural numbers.
Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that,
given a natural number n, computes a true first-order logic statement about natural numbers such that, for all the true
statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide whether the
algorithm with representation a halts on input i. By using Kleene’s T predicate, we can express the statement "a halts
on input i" as a statement H(a, i) in the language of arithmetic. Since the axiomatization is complete it follows that
either there is an n such that N(n) = H(a, i) or there is an n' such that N(n') = ¬ H(a, i). So if we iterate over all n
until we either find H(a, i) or its negation, we will always halt. This gives us an algorithm to decide the
halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a
consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.

14.9 Recognizing partial solutions


There are many programs that either return a correct answer to the halting problem or do not return an answer at all.
If it were possible to decide whether any given program gives only correct answers, one might hope to collect a large
number of such programs and run them in parallel and determine whether any programs halt. Curiously, deciding
whether a program is a partial halting solver (PHS) is as hard as the halting problem itself.
Suppose it’s possible to decide whether any given program is a partial halting solver. Then there exists a partial halting
solver recognizer, PHSR, guaranteed to terminate with an answer. Construct a program H:
input a program P
X := "input Q. if Q = P output 'halts' else loop forever"
run PHSR with X as input
By construction, program H is also guaranteed to terminate with an answer. If PHSR recognizes the constructed
program X as a partial halting solver, that means that P, the only input for which X produces a result, halts. If PHSR
fails to recognize X, then it must be because P does not halt. Therefore H can decide whether an arbitrary program
P halts; it solves the halting problem. Since this is impossible, the program PHSR cannot exist as supposed.
Therefore, it is not possible to decide whether any given program is a partial halting solver.

14.10 History
Further information: History of algorithms

• 1900: David Hilbert poses his “23 questions” (now known as Hilbert’s problems) at the Second International
Congress of Mathematicians in Paris. “Of these, the second was that of proving the consistency of the 'Peano
axioms' on which, as he had shown, the rigour of mathematics depended”. (Hodges p. 83, Davis’ commentary
in Davis, 1965, p. 108)

• 1920–1921: Emil Post explores the halting problem for tag systems, regarding it as a candidate for unsolvability.
(Absolutely unsolvable problems and relatively undecidable propositions – account of an anticipation, in Davis,
1965, pp. 340–433.) Its unsolvability was not established until much later, by Marvin Minsky (1967).

• 1928: Hilbert recasts his 'Second Problem' at the Bologna International Congress. (Reid pp. 188–189) Hodges
claims he posed three questions: i.e. #1: Was mathematics complete? #2: Was mathematics consistent? #3:
Was mathematics decidable? (Hodges p. 91). The third question is known as the Entscheidungsproblem
(Decision Problem). (Hodges p. 91, Penrose p. 34)

• 1930: Kurt Gödel announces a proof as an answer to the first two of Hilbert’s 1928 questions [cf Reid p.
198]. “At first he [Hilbert] was only angry and frustrated, but then he began to try to deal constructively with
the problem... Gödel himself felt—and expressed the thought in his paper—that his work did not contradict
Hilbert’s formalistic point of view” (Reid p. 199)

• 1931: Gödel publishes “On Formally Undecidable Propositions of Principia Mathematica and Related Systems
I”, (reprinted in Davis, 1965, p. 5ff)

• 19 April 1935: Alonzo Church publishes “An Unsolvable Problem of Elementary Number Theory”, wherein
he identifies what it means for a function to be effectively calculable. Such a function will have an algorithm,
and "...the fact that the algorithm has terminated becomes effectively known ...” (Davis, 1965, p. 100)

• 1936: Church publishes the first proof that the Entscheidungsproblem is unsolvable. (A Note on the Entschei-
dungsproblem, reprinted in Davis, 1965, p. 110.)

• 7 October 1936: Emil Post's paper “Finite Combinatory Processes. Formulation I” is received. Post adds to his
“process” an instruction "(C) Stop”. He called such a process “type 1 ... if the process it determines terminates
for each specific problem.” (Davis, 1965, p. 289ff)

• 1937: Alan Turing's paper On Computable Numbers With an Application to the Entscheidungsproblem reaches
print in January 1937 (reprinted in Davis, 1965, p. 115). Turing’s proof departs from calculation by recursive
functions and introduces the notion of computation by machine. Stephen Kleene (1952) refers to this as one
of the “first examples of decision problems proved unsolvable”.

• 1939: J. Barkley Rosser observes the essential equivalence of “effective method” defined by Gödel, Church,
and Turing (Rosser in Davis, 1965, p. 273, “Informal Exposition of Proofs of Gödel’s Theorem and Church’s
Theorem”)

• 1943: In a paper, Stephen Kleene states that “In setting up a complete algorithmic theory, what we do is
describe a procedure ... which procedure necessarily terminates and in such manner that from the outcome we
can read a definite answer, 'Yes’ or 'No,' to the question, 'Is the predicate value true?'.”

• 1952: Kleene (1952) Chapter XIII (“Computable Functions”) includes a discussion of the unsolvability of
the halting problem for Turing machines and reformulates it in terms of machines that “eventually stop”, i.e.
halt: "... there is no algorithm for deciding whether any given machine, when started from any given situation,
eventually stops.” (Kleene (1952) p. 382)

• 1952: "Martin Davis thinks it likely that he first used the term 'halting problem' in a series of lectures that he
gave at the Control Systems Laboratory at the University of Illinois in 1952 (letter from Davis to Copeland, 12
December 2001).” (Footnote 61 in Copeland (2004) pp. 40ff)

14.11 Avoiding the halting problem


In many practical situations, programmers try to avoid infinite loops—they want every subroutine to finish (halt). In
particular, in hard real-time computing, programmers attempt to write subroutines that are not only guaranteed to
finish (halt), but are guaranteed to finish before the given deadline.
Sometimes these programmers use some general-purpose (Turing-complete) programming language, but attempt to
write in a restricted style—such as MISRA C—that makes it easy to prove that the resulting subroutines finish before
the given deadline.
Other times these programmers apply the rule of least power—they deliberately use a computer language that is not
quite fully Turing-complete, often one in which every subroutine is guaranteed to finish, such as Coq.
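As a minimal illustration of such a restricted style (an assumption-laden sketch, not MISRA C itself), every loop can be given an explicit constant bound, so that termination is evident by inspection rather than requiring a solution to the halting problem:

```python
# Illustrative sketch of the restricted style: every loop carries an
# explicit constant bound chosen by the programmer, so the subroutine's
# termination can be checked by inspection.

MAX_STEPS = 1000  # assumed worst-case bound for this subroutine

def find_first_match(items, predicate):
    """Return the index of the first item satisfying `predicate`, or -1.
    The loop is explicitly bounded: the function halts after at most
    MAX_STEPS iterations regardless of input (assuming `predicate` halts)."""
    for i in range(min(len(items), MAX_STEPS)):
        if predicate(items[i]):
            return i
    return -1
```

The price of the bound is that items beyond it are never examined, a trade-off typical of hard real-time code.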

14.12 See also


• Busy beaver
• Generic-case complexity
• Geoffrey K. Pullum
• Gödel’s incompleteness theorem
• Kolmogorov complexity
• P versus NP problem
• Termination analysis
• Worst-case execution time

14.13 Notes
[1] In none of his work did Turing use the word “halting” or “termination”. Turing’s biographer Hodges does not have the word
“halting” or words “halting problem” in his index. The earliest known use of the words “halting problem” is in a proof by
Davis (1958, p. 70–71):

“Theorem 2.2 There exists a Turing machine whose halting problem is recursively unsolvable.
“A related problem is the printing problem for a simple Turing machine Z with respect to a symbol Sᵢ”.

Davis adds no attribution for his proof, so one infers that it is original with him. But Davis has pointed out that a statement
of the proof exists informally in Kleene (1952, p. 382). Copeland (2004, p 40) states that:

“The halting problem was so named (and it appears, first stated) by Martin Davis [cf Copeland footnote 61]...
(It is often said that Turing stated and proved the halting theorem in 'On Computable Numbers’, but strictly
this is not true).”

[2] Moore, Cristopher; Mertens, Stephan (2011), The Nature of Computation, Oxford University Press, pp. 236–237, ISBN
9780191620805.

[3] Stated without proof in: "Course notes for Data Compression - Kolmogorov complexity", 2005, P.B. Miltersen, p.7

14.14 References
• Alan Turing, On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the
London Mathematical Society, Series 2, Volume 42 (1937), pp 230–265, doi:10.1112/plms/s2-42.1.230. —
Alan Turing, On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction, Pro-
ceedings of the London Mathematical Society, Series 2, Volume 43 (1938), pp 544–546, doi:10.1112/plms/s2-
43.6.544 . Free online version of both parts This is the epochal paper where Turing defines Turing machines,
formulates the halting problem, and shows that it (as well as the Entscheidungsproblem) is unsolvable.

• Sipser, Michael (2006). “Section 4.2: The Halting Problem”. Introduction to the Theory of Computation
(2nd ed.). PWS Publishing. pp. 173–182. ISBN 0-534-94728-X.

• c2:HaltingProblem

• B. Jack Copeland ed. (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial
Intelligence, and Artificial Life plus The Secrets of Enigma, Clarendon Press (Oxford University Press), Oxford
UK, ISBN 0-19-825079-7.

• Davis, Martin (1965). The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Problems And
Computable Functions. New York: Raven Press. Turing’s paper is #3 in this volume. Papers include those by
Gödel, Church, Rosser, Kleene, and Post.

• Davis, Martin (1958). Computability and Unsolvability. New York: McGraw-Hill.

• Alfred North Whitehead and Bertrand Russell, Principia Mathematica to *56, Cambridge at the University
Press, 1962. Re: the problem of paradoxes, the authors discuss the problem of a set not being an object in any
of its “determining functions”, in particular “Introduction, Chap. 1 p. 24 "...difficulties which arise in formal
logic”, and Chap. 2.I. “The Vicious-Circle Principle” p. 37ff, and Chap. 2.VIII. “The Contradictions” p. 60ff.

• Martin Davis, “What is a computation”, in Mathematics Today, ed. Lynn Arthur Steen, Vintage Books (Random House), 1980. A wonderful little paper, perhaps the best ever written about Turing Machines for the non-specialist. Davis reduces the Turing Machine to a far simpler model based on Post’s model of a computation. Discusses the Chaitin proof. Includes short biographies of Emil Post and Julia Robinson.

• Marvin Minsky, Computation, Finite and Infinite Machines, Prentice-Hall, Inc., N.J., 1967. See chapter 8,
Section 8.2 “The Unsolvability of the Halting Problem.” Excellent, i.e. readable, sometimes fun. A classic.

• Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds and the Laws of Physics, Oxford
University Press, Oxford England, 1990 (with corrections). Cf: Chapter 2, “Algorithms and Turing Machines”.
An over-complicated presentation (see Davis’s paper for a better model), but a thorough presentation of Turing
machines and the halting problem, and Church’s Lambda Calculus.

• John Hopcroft and Jeffrey Ullman, Introduction to Automata Theory, Languages and Computation, Addison-
Wesley, Reading Mass, 1979. See Chapter 7 “Turing Machines.” A book centered around the machine-
interpretation of “languages”, NP-Completeness, etc.

• Andrew Hodges, Alan Turing: The Enigma, Simon and Schuster, New York. Cf Chapter “The Spirit of Truth”
for a history leading to, and a discussion of, his proof.

• Constance Reid, Hilbert, Copernicus: Springer-Verlag, New York, 1996 (first published 1970). Fascinat-
ing history of German mathematics and physics from 1880s through 1930s. Hundreds of names familiar to
mathematicians, physicists and engineers appear in its pages. Perhaps marred by no overt references and few
footnotes: Reid states her sources were numerous interviews with those who personally knew Hilbert, and
Hilbert’s letters and papers.

• Edward Beltrami, What is Random? Chance and order in mathematics and life, Copernicus: Springer-Verlag,
New York, 1999. Nice, gentle read for the mathematically inclined non-specialist, puts tougher stuff at the
end. Has a Turing-machine model in it. Discusses the Chaitin contributions.

• Ernest Nagel and James R. Newman, Gödel’s Proof, New York University Press, 1958. Wonderful writing
about a very difficult subject. For the mathematically inclined non-specialist. Discusses Gentzen’s proof on
pages 96–97 and footnotes. Appendices discuss the Peano Axioms briefly and gently introduce readers to formal
logic.

• Taylor Booth, Sequential Machines and Automata Theory, Wiley, New York, 1967. Cf Chapter 9, Turing
Machines. Difficult book, meant for electrical engineers and technical specialists. Discusses recursion, partial-
recursion with reference to Turing Machines, halting problem. Has a Turing Machine model in it. References
at end of Chapter 9 catch most of the older books (i.e. 1952 until 1967 including authors Martin Davis, F.
C. Hennie, H. Hermes, S. C. Kleene, M. Minsky, T. Rado) and various technical papers. See note under
Busy-Beaver Programs.

• Busy Beaver Programs are described in Scientific American, August 1984, also March 1985 p. 23. A reference
in Booth attributes them to Rado, T.(1962), On non-computable functions, Bell Systems Tech. J. 41. Booth
also defines Rado’s Busy Beaver Problem in problems 3, 4, 5, 6 of Chapter 9, p. 396.

• David Bolter, Turing’s Man: Western Culture in the Computer Age, The University of North Carolina Press,
Chapel Hill, 1984. For the general reader. May be dated. Has yet another (very simple) Turing Machine
model in it.
• Stephen Kleene, Introduction to Metamathematics, North-Holland, 1952. Chapter XIII (“Computable Func-
tions”) includes a discussion of the unsolvability of the halting problem for Turing machines. In a departure
from Turing’s terminology of circle-free nonhalting machines, Kleene refers instead to machines that “stop”,
i.e. halt.
• Logical Limitations to Machine Ethics, with Consequences to Lethal Autonomous Weapons - paper discussed
in: Does the Halting Problem Mean No Moral Robots?

14.15 External links


• Scooping the loop snooper - a poetic proof of undecidability of the halting problem
• animated movie - an animation explaining the proof of the undecidability of the halting problem

• A 2-Minute Proof of the 2nd-Most Important Theorem of the 2nd Millennium - a proof in only 13 lines
Chapter 15

List of multiple discoveries

Main article: Multiple discovery

Historians and sociologists have remarked on the occurrence, in science, of "multiple independent discovery". Robert
K. Merton defined such “multiples” as instances in which similar discoveries are made by scientists working indepen-
dently of each other.[1] “Sometimes the discoveries are simultaneous or almost so; sometimes a scientist will make a
new discovery which, unknown to him, somebody else has made years before.”[2]
Commonly cited examples of multiple independent discovery are the 17th-century independent formulation of calculus
by Isaac Newton, Gottfried Wilhelm Leibniz and others, described by A. Rupert Hall;[3] the 18th-century discovery
of oxygen by Carl Wilhelm Scheele, Joseph Priestley, Antoine Lavoisier and others; and the theory of the evolution
of species, independently advanced in the 19th century by Charles Darwin and Alfred Russel Wallace.
Multiple independent discovery, however, is not limited to only a few historic instances involving giants of scientific
research. Merton believed that it is multiple discoveries, rather than unique ones, that represent the common pattern
in science.[4]
Merton contrasted a “multiple” with a “singleton”—a discovery that has been made uniquely by a single scientist or
group of scientists working together.[5]
Merton's hypothesis is also discussed extensively in Harriet Zuckerman's Scientific Elite.[6]

15.1 13th century


• 1242 – first description of the function of pulmonary circulation, in Egypt, by Ibn al-Nafis. Later independently
rediscovered by the Europeans, Michael Servetus (1553) and William Harvey (1616).

15.2 14th century


• Gresham’s (Copernicus’) law: Nicole Oresme (ca. 1370); Nicolaus Copernicus (1519);[7] Thomas Gresham
(16th century); Henry Dunning Macleod (1857).

15.3 16th century


• Galileo Galilei and Simon Stevin: heavy and light balls fall together (contra Aristotle).

• Galileo Galilei and Simon Stevin: Hydrostatic paradox (Stevin ca. 1585, Galileo ca. 1610).

• Scipione dal Ferro (1520) and Niccolò Tartaglia (1535) independently developed a method for solving cubic
equations.


15.4 17th century


• Sunspots – Thomas Harriot (England, 1610), Johannes and David Fabricius (Frisia, 1611), Galileo Galilei
(Italy, 1612), Christoph Scheiner (Germany, 1612).
• Logarithms – John Napier (Scotland, 1614) and Joost Bürgi (Switzerland, 1618).
• Analytic geometry – René Descartes, Pierre de Fermat.
• Problem of points solved by Pierre de Fermat (France, 1654), Blaise Pascal (France, 1654), and Christiaan
Huygens (Holland, 1657).
• Determinants – Gottfried Wilhelm Leibniz and Seki Kōwa.
• Calculus – Isaac Newton, Gottfried Wilhelm Leibniz, Pierre de Fermat and others.[8]

• Boyle’s law (sometimes referred to as the “Boyle–Mariotte law”) is one of the gas laws and a basis for the derivation of the Ideal gas law; it states that the product of pressure and volume within a closed system is constant when temperature remains fixed. The law was named for the chemist and physicist Robert Boyle, who published the original law in 1662. The French physicist Edme Mariotte discovered the same law independently of Boyle in 1676.

• Newton–Raphson method – Joseph Raphson (1690), Isaac Newton (Newton’s work was written in 1671, but
not published until 1736).

• Brachistochrone problem solved by Johann Bernoulli, Jakob Bernoulli, Isaac Newton, Gottfried Wilhelm Leibniz, Guillaume de l'Hôpital, and Ehrenfried Walther von Tschirnhaus. The problem was posed in 1696 by Johann Bernoulli, and its solutions were published the next year.

15.5 18th century

• Platinum – Antonio de Ulloa and Charles Wood (both, 1740s).



• Leyden Jar – Ewald Georg von Kleist (1745) and Pieter van Musschenbroek (1745-46).
• Lightning rod – Benjamin Franklin (1749) and Prokop Diviš (1754) (debated: Diviš's apparatus is assumed to
have been more effective than Franklin’s lightning rods in 1754, but was intended for a different purpose than
lightning protection).
• Oxygen – Carl Wilhelm Scheele (Uppsala, 1773), Joseph Priestley (Wiltshire, 1774). The term was coined by
Antoine Lavoisier (1777).

• Black-hole theory: John Michell, in a 1783 paper in The Philosophical Transactions of the Royal Society, wrote:
“If the semi-diameter of a sphere of the same density as the Sun were to exceed that of the Sun in the proportion
of five hundred to one, and by supposing light to be attracted by the same force in proportion to its [mass] with
other bodies, all light emitted from such a body would be made to return towards it, by its own proper gravity.”[9]
A few years later, a similar idea was suggested independently by Pierre-Simon Laplace.[10]
• A method for measuring the specific heat of a solid substance was devised independently by Benjamin Thomp-
son, Count Rumford; and by Johan Wilcke, who published his discovery first (apparently not later than 1796,
when he died).

15.6 19th century


• In a treatise[11] written in 1805 and published in 1866, Carl Friedrich Gauss describes an efficient algorithm to
compute the discrete Fourier transform. James W. Cooley and John W. Tukey reinvented a similar algorithm
in 1965.[12]
• Complex plane – Geometrical representation of complex numbers was discovered independently by Caspar
Wessel (1799), Jean-Robert Argand (1806), John Warren (1828), and Carl Friedrich Gauss (1831).[13]
• Cadmium – Friedrich Strohmeyer, K.S.L. Hermann (both in 1817).
• Grotthuss–Draper law (aka the Principle of Photochemical Activation) – first proposed in 1817 by Theodor
Grotthuss, then independently, in 1842, by John William Draper. The law states that only that light which is
absorbed by a system can bring about a photochemical change.
• Beryllium – Friedrich Wöhler, A.A.B. Bussy (1828).
• Electromagnetic induction was discovered by Michael Faraday in England in 1831, and independently about
the same time by Joseph Henry in the U.S.[14]
• Chloroform – Samuel Guthrie in the United States (July 1831), and a few months later Eugène Soubeiran
(France) and Justus von Liebig (Germany), all of them using variations of the haloform reaction.
• Non-Euclidean geometry (hyperbolic geometry) – Nikolai Ivanovich Lobachevsky (1830), János Bolyai (1832);
preceded by Gauss (unpublished result) ca. 1805.
• Dandelin–Gräffe method, aka Lobachevsky method – an algorithm for finding multiple roots of a polynomial,
developed independently by Germinal Pierre Dandelin, Karl Heinrich Gräffe and Nikolai Ivanovich Lobachevsky.
• Electrical telegraph – Charles Wheatstone (England), 1837, Samuel F.B. Morse (United States), 1837.
• First law of thermodynamics – In the mid-19th century, various scientists independently stated that energy
is conserved (a formulation later qualified under subatomic conditions). Hess’s Law (Germain Hess), Julius
Robert von Mayer, and James Joule were among the first.
• In 1846, Urbain Le Verrier and John Couch Adams, studying Uranus's orbit, independently deduced that another,
more distant planet must exist. Neptune was found at the predicted position.
• Bessemer Process – The process of removing impurities from steel on an industrial level using oxidation, de-
veloped in 1851 by American William Kelly and independently developed and patented in 1855 by eponymous
Englishman Sir Henry Bessemer.
• The Möbius strip was discovered independently by the German astronomer–mathematician August Ferdinand
Möbius and the German mathematician Johann Benedict Listing in 1858.
• Theory of evolution by natural selection – Charles Darwin (discovery about 1840), Alfred Russel Wallace
(discovery about 1857-8) – joint publication, 1859.
• 109P/Swift–Tuttle, the comet generating the Perseid meteor shower, was independently discovered by Lewis
Swift on July 16, 1862, and by Horace Parnell Tuttle on July 19, 1862. The comet made a return appearance
in 1992, when it was rediscovered by Japanese astronomer Tsuruhiko Kiuchi.
• Helium – Pierre Janssen, Norman Lockyer (both in 1868).[15]

• In 1869, Dmitri Ivanovich Mendeleev published his periodic table of chemical elements, and the following
year Julius Lothar Meyer published his independently constructed version.

• In 1876, Oskar Hertwig and Hermann Fol independently described the entry of sperm into the egg and the
subsequent fusion of the egg and sperm nuclei to form a single new nucleus.

• In 1876, Elisha Gray and Alexander Graham Bell filed patent papers for the telephone on the same day.

• In 1877 Charles Cros described the principles of the phonograph, which was independently constructed the
following year by Thomas Edison.

• The Hall–Héroult process for inexpensively producing aluminum was independently discovered in 1886 by the
American engineer-inventor Charles Martin Hall and the French scientist Paul Héroult.[16]

• Two proofs of the prime number theorem (the asymptotic law of the distribution of prime numbers) were
obtained independently by Jacques Hadamard and Charles de la Vallée-Poussin and appeared in the same year
(1896).
• Discovery of radioactivity (1896) independently by Henri Becquerel and Silvanus Thompson.[17]
• Discovery of thorium radioactivity (1898) by Gerhard Carl Schmidt and Maria Skłodowska Curie.[18]
• Linguists Filip Fyodorovich Fortunatov and Ferdinand de Saussure independently formulated the sound law
now known as the Saussure–Fortunatov law.[19]

15.7 20th century


• In 1902 Walter Sutton and Theodor Boveri independently proposed that the hereditary information is carried
in the chromosomes.
• In the same year (1902) Richard Assmann and Léon Teisserenc de Bort independently discovered the stratosphere.
• E = mc², though only Einstein provided the accepted interpretation – Henri Poincaré, 1900; Olinto De Pretto,
1903; Albert Einstein, 1905; Paul Langevin, 1906.[20]
• Epinephrine – synthesized 1904 independently by Friedrich Stolz and Henry Drysdale Dakin.
• Lutetium – discovered 1907 independently by French scientist Georges Urbain and Austrian mineralogist Baron
Carl Auer von Welsbach.
• Hilbert space representation theorem, also known as Riesz representation theorem, the mathematical justifi-
cation of the Bra-ket notation in the theory of quantum mechanics – 1907 independently proved by Frigyes
Riesz and Maurice René Fréchet.
• Stark–Einstein law (aka photochemical equivalence law, or photoequivalence law) – independently formulated
between 1908 and 1913 by Johannes Stark and Albert Einstein. It states that every photon that is absorbed will
cause a (primary) chemical or physical reaction.[21]
• Chandrasekhar Limit – While travelling by ship from India to England to pursue his higher studies, Subrahmanyan Chandrasekhar calculated that a cold star of more than about 1.5 times the mass of the Sun would not be able to support itself against its own gravity. A similar claim was made by Lev Davidovich Landau at about the same time.[22]
• Frequency-hopping spread spectrum in radio work was described by Johannes Zenneck (1908), Leonard
Danilewicz (1929), Willem Broertjes (1929), and Hedy Lamarr and George Antheil (1942 US patent).
• Bacteriophages (viruses that infect bacteria) – Frederick Twort (1915), Félix d'Hérelle (1917).
• Rotor cipher machines – Theo A. van Hengel and R.P.C. Spengler (1915); Edward Hebern (1917); Arthur
Scherbius (Enigma machine, 1918); Hugo Koch (1919); Arvid Damm (1919).
• Sound film – Joseph Tykociński-Tykociner (1922), Lee De Forest (1923).
• Georgios Papanikolaou is credited with discovering as early as 1923 that cervical cancer cells can be detected
microscopically, though his invention of the Pap test went largely ignored by physicians until 1943. Aurel
Babeş of Romania independently made similar discoveries in 1927.[23]
• "Primordial soup" theory of the evolution of life from carbon-based molecules – Alexander Oparin (1924),
J.B.S. Haldane (1929).
• The discovery of phosphocreatine was reported by Grace and Philip Eggleton of the University of Cam-
bridge[24] and separately by Cyrus Fiske and Yellapragada Subbarow of the Harvard Medical School[25] in
1927.
• Undefinability theorem, an important limitative result in mathematical logic – Kurt Gödel, Alfred Tarski.
• Natural deduction, an approach to proof theory in philosophical logic – discovered independently by Gerhard
Gentzen and Stanisław Jaśkowski in 1934.

• In mathematics, the Gelfond–Schneider theorem is a result which establishes the transcendence of a large class
of numbers. It was originally proved in 1934 by Aleksandr Gelfond and again independently proved in 1935
by Theodor Schneider.
• The Penrose triangle, also known as the “tribar”, is an impossible object. It was first created by the Swedish
artist Oscar Reutersvärd in 1934. The mathematician Roger Penrose independently devised and popularised it
in the 1950s.
• In computer science, the concept of the “universal computing machine” (now generally called the "Turing
Machine") was proposed by Alan Turing, but also independently by Emil Post,[26] both in 1936. Similar
approaches, also aiming to cover the concept of universal computing, were introduced by S.C. Kleene and by
Alonzo Church that same year. Also in 1936, Konrad Zuse tried to build a binary electrically-driven mechanical
calculator with limited programmability; however, Zuse’s machine was never fully functional. The Atanasoff–Berry
Computer (“ABC”), designed by John Vincent Atanasoff and Clifford Berry, was the first fully electronic
digital computing device;[27] while not programmable, it pioneered important elements of modern computing,
including binary arithmetic and electronic switching elements,[28][29] though its special-purpose nature and lack
of a changeable, stored program distinguish it from modern computers.
• The atom bomb was independently thought of by Leó Szilárd,[30] Józef Rotblat[31] and others.
• The jet engine was independently invented and used in working aircraft by Hans von Ohain (1939),
Secondo Campini (1940) and Frank Whittle (1941).
• In agriculture, the ability of synthetic auxins 2,4-D, 2,4,5-T, and MCPA to act as hormone herbicides was
discovered independently by four groups in the United States and Great Britain: William G. Templeman and
coworkers (1941); Philip Nutman, Gerard Thornton, and Juda Quastel (1942); Franklin Jones (1942); and
Ezra Kraus, John W. Mitchell, and Charles L. Hamner (1943). All four groups were subject to various aspects
of wartime secrecy, and the exact order of discovery is a matter of some debate.[32]
• Polio vaccine (1950–63): Hilary Koprowski, Jonas Salk, Albert Sabin.
• The integrated circuit was devised independently by Jack Kilby in 1958[33] and half a year later by Robert
Noyce.[34] Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[35]
• The Higgs mechanism was developed into a full relativistic model in 1964 independently and almost simultaneously
by three groups of physicists: by François Englert and Robert Brout; by Peter Higgs; and by Gerald Guralnik,
C. R. Hagen, and Tom Kibble.
• Quantum electrodynamics and renormalization (1930s–40s): Ernst Stueckelberg, Julian Schwinger, Richard
Feynman, and Sin-Itiro Tomonaga, for which the latter three received the 1965 Nobel Prize in Physics.
• The maser, a precursor to the laser, was described by Russian scientists in 1952, and built independently by
scientists at Columbia University in 1953. The laser itself was developed independently by Gordon Gould at
Columbia University and by researchers at Bell Labs, and by the Russian scientist Aleksandr Prokhorov.
• Kolmogorov complexity, also known as “Kolmogorov–Chaitin complexity”, descriptive complexity, etc., of an
object such as a piece of text is a measure of the computational resources needed to specify the object. The
concept was independently introduced by Ray Solomonoff, Andrey Kolmogorov and Gregory Chaitin in the
1960s.[36]
• The concept of packet switching, a communications method in which discrete blocks of data (packets) are
routed between nodes over data links, was first explored by Paul Baran in the early 1960s, and then indepen-
dently a few years later by Donald Davies.
• Cosmic background radiation as a signature of the Big Bang was confirmed by Arno Penzias and Robert Wilson
of Bell Labs. Penzias and Wilson had been testing a very sensitive microwave detector when they noticed that
their equipment was picking up a strange noise that was independent of the orientation (direction) of their
instrument. At first they thought the noise was generated due to pigeon droppings in the detector, but even
after they removed the droppings the noise was still detected. Meanwhile, at nearby Princeton University two
physicists, Robert Dicke and Jim Peebles, were working on a suggestion of George Gamow's that the early
universe had been hot and dense; they believed its hot glow could still be detected but would be so red-shifted
that it would manifest as microwaves. When Penzias and Wilson learned about this, they realized that they had
already detected the red-shifted microwaves and (to the disappointment of Dicke and Peebles) were awarded
the 1978 Nobel Prize in Physics.[10]

• The Cocke–Younger–Kasami algorithm was independently discovered three times: by T. Kasami (1965), by
Daniel H. Younger (1967), and by John Cocke and Jacob T. Schwartz (1970).

• The Wagner–Fischer algorithm, in computer science, was discovered and published at least six times.[37]:43

• In 1970, Howard Temin and David Baltimore independently discovered reverse transcriptase enzymes.

• The Knuth–Morris–Pratt string searching algorithm was developed by Donald Knuth and Vaughan Pratt and
independently by J. H. Morris.

• The Cook–Levin theorem (also known as “Cook’s theorem”), a result in computational complexity theory, was
proven independently by Stephen Cook (1971 in the U.S.) and by Leonid Levin (1973 in the USSR). Levin
was not aware of Cook’s achievement because of communication difficulties between East and West during the
Cold War. Conversely, Levin’s work was not widely known in the West until around 1978.[38]

• Mevastatin (compactin; ML-236B) was independently discovered by Akira Endo in Japan in a culture of Penicillium citrinum[39] and by a British group in a culture of Penicillium brevicompactum.[40] Both reports were published in 1976.

• The Bohlen–Pierce scale, a harmonic, non-octave musical scale, was independently discovered by Heinz Bohlen
(1972), Kees van Prooijen (1978) and John R. Pierce (1984).

• RSA, an algorithm suitable for signing and encryption in public-key cryptography, was publicly described in
1977 by Ron Rivest, Adi Shamir and Leonard Adleman. An equivalent system had been described in 1973
in an internal document by Clifford Cocks, a British mathematician working for the UK intelligence agency
GCHQ, but his work was not revealed until 1997 due to its top-secret classification.

• Asymptotic freedom, which states that the strong nuclear interaction between quarks decreases with decreasing
distance, was discovered in 1973 by David Gross and Frank Wilczek, and by David Politzer, and was published
in the same edition of the journal Physical Review Letters.[41] For their work the three received the Nobel Prize
in Physics in 2004.

• The J/ψ meson was independently discovered by a group at the Stanford Linear Accelerator Center, headed
by Burton Richter, and by a group at Brookhaven National Laboratory, headed by Samuel Ting of MIT. Both
announced their discoveries on November 11, 1974. For this joint discovery, Richter and Ting shared the
1976 Nobel Prize in Physics.

• Endorphins were discovered independently in Scotland and the US in 1975.

• The use of elliptic curves in cryptography (Elliptic curve cryptography) was suggested independently by Neal
Koblitz and Victor S. Miller in 1985.

• The Immerman–Szelepcsényi theorem, another fundamental result in computational complexity theory, was
proven independently by Neil Immerman and Róbert Szelepcsényi in 1987.[42]

• In 1989, Thomas R. Cech (Colorado) and Sidney Altman (Yale) won the Nobel Prize in chemistry for their
independent discovery in the 1980s of ribozymes – for the “discovery of catalytic properties of RNA” – using
different approaches. Catalytic RNA was an unexpected finding, something they were not looking for, and it
required rigorous proof that there was no contaminating protein enzyme.

• In 1993, groups led by Donald S. Bethune at IBM and Sumio Iijima at NEC independently discovered single-
wall carbon nanotubes and methods to produce them using transition-metal catalysts.

• Conductive polymers: Between 1963 and 1977, doped and oxidized highly conductive polyacetylene deriva-
tives were independently discovered, “lost”, and then rediscovered at least four times. The last rediscovery
won the 2000 Nobel Prize in Chemistry, for the “discovery and development of conductive polymers”, without
reference to the previous discoveries. See the citations in the article “Conductive polymers”.

15.8 21st century


• In 2001, four different groups of authors published independent implementations of distributed hash tables.
• The 2006 Shaw Prize in Astronomy and the 2011 Nobel Prize in Physics were both awarded to Saul Perlmutter,
Adam G. Riess and Brian P. Schmidt for the 1998 discovery of the accelerating expansion of the universe
through observations of distant supernovae.[43]
• In 2012, the collaborations of the ATLAS and CMS experiments at the Large Hadron Collider independently
reported the discovery of a new boson with properties in agreement with the Higgs boson.

15.9 Quotations
“When the time is ripe for certain things, these things appear in different places in the manner of
violets coming to light in early spring.”
— Farkas Bolyai to his son János in urging him to claim the invention of non-Euclidean geometry
without delay,
quoted in Li & Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, 1st ed., p. 83.

15.10 See also


• Convergent and divergent production
• Historic recurrence
• History of science
• History of technology
• List of discoveries
• Logology (science of science)
• Multiple discovery
• Priority disputes
• Synchronicity

15.11 Notes
[1] Robert K. Merton, “Resistance to the Systematic Study of Multiple Discoveries in Science”, European Journal of Sociology,
4:237–82, 1963. Reprinted in Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations,
Chicago, University of Chicago Press,1973, pp. 371–82.

[2] Robert K. Merton, The Sociology of Science, 1973.

[3] A. Rupert Hall, Philosophers at War, New York, Cambridge University Press, 1980.

[4] Robert K. Merton, “Singletons and Multiples in Scientific Discovery: a Chapter in the Sociology of Science”, Proceedings
of the American Philosophical Society, 105: 470–86, 1961. Reprinted in Robert K. Merton, The Sociology of Science:
Theoretical and Empirical Investigations, Chicago, University of Chicago Press, 1973, pp. 343–70.

[5] Robert K. Merton, On Social Structure and Science, p. 307.

[6] Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, Free Press, 1979.

[7] “Copernicus seems to have drawn up some notes [on the displacement of good coin from circulation by debased coin] while
he was at Olsztyn in 1519. He made them the basis of a report on the matter, written in German, which he presented to the
Prussian Diet held in 1522 at Grudziądz... He later drew up a revised and enlarged version of his little treatise, this time in
Latin, and setting forth a general theory of money, for presentation to the Diet of 1528.” Angus Armitage, The World of
Copernicus, 1951, p. 91.

[8] Roger Penrose, The Road to Reality, Vintage Books, 2005, p. 103.

[9] Alan Ellis, “Black Holes – Part 1 – History”, Astronomical Society of Edinburgh, Journal 39, 1999. A description of
Michell’s theory of black holes.

[10] Stephen Hawking, A Brief History of Time, Bantam, 1996, pp. 43-45.

[11] Gauss, Carl Friedrich, “Nachlass: Theoria interpolationis methodo nova tractata”, Werke, Band 3, Göttingen, Königliche
Gesellschaft der Wissenschaften, 1866, pp. 265–327.

[12] Heideman, M. T., D. H. Johnson, and C. S. Burrus, “Gauss and the history of the fast Fourier transform”, Archive for
History of Exact Sciences, vol. 34, no. 3 (1985), pp. 265–277.

[13] Roger Penrose, The Road to Reality, Vintage Books, 2005, p. 81.

[14] Halliday et al., Physics, vol. 2, 2002, p. 775.

[15] “Aug. 18, 1868: Helium Discovered During Total Solar Eclipse”, http://www.wired.com/thisdayintech/2009/08/dayintech_
0818/

[16] Isaac Asimov, Asimov’s Biographical Encyclopedia of Science and Technology, p. 933.

[17] “Had Becquerel... not [in 1896] presented his discovery to the Académie des Sciences the day after he made it, credit for
the discovery of radioactivity, and even a Nobel Prize, would have gone to Silvanus Thompson.” Robert William Reid,
Marie Curie, New York, New American Library, 1974, ISBN 0002115395, pp. 64-65.

[18] "Marie Curie was... beaten in the race to tell of her discovery that thorium gives off rays in the same way as uranium.
Unknown to her, a German, Gerhard Carl Schmidt, had published his finding in Berlin two months earlier.” Robert William
Reid, Marie Curie, New York, New American Library, 1974, ISBN 0002115395, p. 65.

[19] N.E. Collinge, The Laws of Indo-European, pp. 149-52.

[20] Barbara Goldsmith, Obsessive Genius: The Inner World of Marie Curie, New York, W.W. Norton, 2005, ISBN 0-393-
05137-4, p. 166.

[21] “Photochemical equivalence law”. Encyclopædia Britannica Online. Retrieved 2009-11-07.

[22] Stephen Hawking, A Brief History of Time, Bantam, 1996, p. 88.

[23] M.J. O'Dowd, E.E. Philipp, The History of Obstetrics & Gynaecology, London, Parthenon Publishing Group, 1994, p. 547.

[24] Eggleton, Philip; Eggleton, Grace Palmer (1927). “The inorganic phosphate and a labile form of organic phosphate in the
gastrocnemius of the frog”. Biochemical Journal 21 (1): 190–195. PMC 1251888. PMID 16743804.

[25] Fiske, Cyrus H.; Subbarow, Yellapragada (1927). “The nature of the 'inorganic phosphate' in voluntary muscle”. Science
65 (1686): 401–403. doi:10.1126/science.65.1686.401. PMID 17807679.

[26] See the “bibliographic notes” at the end of chapter 7 in Hopcroft & Ullman, Introduction to Automata, Languages, and
Computation, Addison-Wesley, 1979.

[27] Ralston, Anthony; Meek, Christopher, eds. (1976), Encyclopedia of Computer Science (second ed.), pp. 488–489, ISBN
0-88405-321-0

[28] Campbell-Kelly, Martin; Aspray, William (1996), Computer: A History of the Information Machine, New York: Basic
Books, p. 84, ISBN 0-465-02989-2.

[29] Jane Smiley, The Man Who Invented the Computer: The Biography of John Atanasoff, Digital Pioneer, 2010.

[30] Richard Rhodes, The Making of the Atomic Bomb, New York, Simon and Schuster, 1986, ISBN 0671441337, p. 27.

[31] Irwin Abrams website.

[32] Troyer, James (2001). “In the beginning: the multiple discovery of the first hormone herbicides”. Weed Science 49 (2):
290–297. doi:10.1614/0043-1745(2001)049[0290:ITBTMD]2.0.CO;2.

[33] The Chip that Jack Built, ca. 2008, HTML, Texas Instruments, retrieved 29 May 2008.

[34] Christophe Lécuyer, Making Silicon Valley: Innovation and the Growth of High Tech, 1930-1970, MIT Press, 2006, ISBN
0262122812, p. 129.

[35] Nobel Web AB, 10 October 2000 The Nobel Prize in Physics 2000, retrieved 29 May 2008.

[36] See Chapter 1.6 in the first edition of Li & Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, who cite
Chaitin (1975): “this definition [of Kolmogorov complexity] was independently proposed about 1965 by A.N. Kolmogorov
and me ... Both Kolmogorov and I were then unaware of related proposals made in 1960 by Ray Solomonoff.”

[37] Navarro, Gonzalo (2001). “A guided tour to approximate string matching” (PDF). ACM Computing Surveys 33 (1): 31–88.
doi:10.1145/375360.375365.

[38] See Garey & Johnson, Computers and Intractability, p. 119. Cf. also the survey article by Trakhtenbrot (see
“External links”). Levin emigrated to the U.S. in 1978.

[39] Endo, Akira; Kuroda, M.; Tsujita, Y. (1976). “ML-236A, ML-236B, and ML-236C, new inhibitors of cholesterogenesis
produced by Penicillium citrinium”. Journal of Antibiotics (Tokyo) 29 (12): 1346–8. doi:10.7164/antibiotics.29.1346.
PMID 1010803.

[40] Brown, Allan G.; Smale, Terry C.; King, Trevor J.; Hasenkamp, Rainer; Thompson, Ronald H. (1976). “Crystal and
Molecular Structure of Compactin, a New Antifungal Metabolite from Penicillium brevicompactum”. J. Chem. Soc.,
Perkin Trans. 1: 1165–1170. doi:10.1039/P19760001165.

[41] D. J. Gross, F. Wilczek, “Ultraviolet behavior of non-abelian gauge theories”, Phys. Rev. Letters 30 (1973) 1343–1346; H.
D. Politzer, “Reliable perturbative results for strong interactions”, Phys. Rev. Letters 30 (1973) 1346–1349.

[42] See EATCS on the Gödel Prize 1995.

[43] Paál, G.; Horváth, I.; Lukács, B. (1992). Astrophysics and Space Science 191: 107. Bibcode:1992Ap&SS.191..107P.
doi:10.1007/BF00644200.

15.12 References

• Armitage, Angus (1951). The World of Copernicus. New York: Mentor Books.

• Isaac Asimov, Asimov’s Biographical Encyclopedia of Science and Technology, second revised edition, New
York, Doubleday, 1982.

• N.E. Collinge (1985). The Laws of Indo-European. Amsterdam: John Benjamins. ISBN 0-915027-75-5.
(U.S.), ISBN 90-272-2102-2 (Europe).

• Michael R. Garey and David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-
Completeness. W.H. Freeman. ISBN 0-7167-1045-5.

• A. Rupert Hall, Philosophers at War, New York, Cambridge University Press, 1980.

• David Lamb, Multiple Discovery: The Pattern of Scientific Progress, Amersham, Avebury Press, 1984.

• Ming Li and Paul Vitanyi (1993). An Introduction to Kolmogorov Complexity and Its Applications. New York:
Springer-Verlag. ISBN 0-387-94053-7. (U.S.), ISBN 3-540-94053-7 (Europe).

• Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations, University of Chicago
Press, 1973.

• Robert K. Merton, On Social Structure and Science, edited and with an introduction by Piotr Sztompka,
University of Chicago Press, 1996.

• Robert William Reid, Marie Curie, New York, New American Library, 1974, ISBN 0002115395.

• Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, Free Press, 1979.

15.13 External links


• Annals of Innovation: In the Air:Who says big ideas are rare?, Malcolm Gladwell, The New Yorker, May 12,
2008
• The Technium: Simultaneous Invention, Kevin Kelly, May 9, 2008

• Apperceptual: The Heroic Theory of Scientific Development at the Wayback Machine (archived May 12,
2008), Peter Turney, January 15, 2007
• A Survey of Russian Approaches to Perebor (Brute-Force Searches) Algorithms, by B.A. Trakhtenbrot, in the
Annals of the History of Computing, 6(4):384-400, 1984.



[Portraits accompanying the list of multiple discoveries: Lobachevsky, Darwin, Mendeleyev, Bell, Becquerel, Skłodowska Curie, Einstein, Szilárd, Noyce, Nambu, Higgs, Penzias, Baltimore, Ting, Immerman, and Perlmutter, Riess, Schmidt]


Chapter 16

Logic gate

“Discrete logic” redirects here. For discrete circuitry, see Discrete circuit.

In electronics, a logic gate is an idealized or physical device implementing a Boolean function; that is, it performs a
logical operation on one or more logical inputs, and produces a single logical output. Depending on the context, the
term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer
to a non-ideal physical device[1] (see Ideal and real op-amps for comparison).
Logic gates are primarily implemented using diodes or transistors acting as electronic switches, but can also be con-
structed using vacuum tubes, electromagnetic relays (relay logic), fluidic logic, pneumatic logic, optics, molecules, or
even mechanical elements. With amplification, logic gates can be cascaded in the same way that Boolean functions
can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the
algorithms and mathematics that can be described with Boolean logic.
Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory,
all the way up through complete microprocessors, which may contain more than 100 million gates. In modern practice,
most gates are made from field-effect transistors (FETs), particularly MOSFETs (metal–oxide–semiconductor field-
effect transistors).
Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design be-
cause their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.[2]
In reversible logic, Toffoli gates are used.

16.1 Electronic gates


Main article: Logic family

To build a functionally complete logic system, relays, valves (vacuum tubes), or transistors can be used. The simplest
family of logic gates using bipolar transistors is called resistor-transistor logic (RTL). Unlike simple diode logic
gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic
functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used
in RTL were replaced by diodes resulting in diode-transistor logic (DTL). Transistor-transistor logic (TTL) then
supplanted DTL. As integrated circuits became more complex, bipolar transistors were replaced with smaller field-
effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary
chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-
channel) MOSFET devices to achieve a high speed with low power dissipation.
For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400
series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these
fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a large
number of mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic
devices such as FPGAs has removed the 'hard' property of hardware; it is now possible to change the logic design of
a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware

implementation of a logic system to be changed.


Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume
much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental
structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction)
between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier,
which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current
to flow between the output and the input of a semiconductor logic gate.
Another important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is
that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other
gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for
the internal workings of the gates, provided the limitations of each integrated circuit are considered.
The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fanout limit'.
Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding
change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual
delays, an effect which can become a problem in high-speed circuits. Additional delay can be caused when a large
number of inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the
finite amount of current that each output can provide.
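The delay arithmetic described above can be sketched in a few lines of Python. This is a toy model, not from the article; the path structure and nanosecond values are invented for illustration. Cascaded gates add their propagation delays, and the slowest input-to-output path limits the speed of the whole circuit.

```python
def path_delay(gate_delays_ns):
    """Total delay along one cascade: approximately the sum of the
    propagation delays of the gates the signal passes through."""
    return sum(gate_delays_ns)

def critical_path_delay(paths_ns):
    """The worst-case (critical) path sets how fast the circuit can run."""
    return max(path_delay(p) for p in paths_ns)

# Two hypothetical input-to-output paths through a circuit, delays in ns.
fast_path = [9.0, 9.0]            # two gates
slow_path = [9.0, 12.0, 9.0]      # three gates, one of them slower
worst = critical_path_delay([fast_path, slow_path])  # 30.0 ns
```

Fanout loading, discussed above, would add further terms to each gate's delay; this sketch captures only the summing behaviour.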

16.2 Symbols

A synchronous 4-bit up/down decade counter symbol (74LS192) in accordance with ANSI/IEEE Std. 91-1984 and IEC Publication
60617-12.

There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984
and its supplement ANSI/IEEE Std 91a-1991. The “distinctive shape” set, based on traditional schematics, is used
for simple drawings, and derives from MIL-STD-806 of the 1950s and 1960s. It is sometimes unofficially described
as “military”, reflecting its origin. The “rectangular shape” set, based on ANSI Y32.14 and other early industry
standards, as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation
of a much wider range of devices than is possible with the traditional symbols.[3] The IEC standard, IEC 60617-12,
has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United
Kingdom, and DIN EN 60617-12:1998 in Germany.
The mutual goal of IEEE Std 91-1984 and IEC 60617-12 was to provide a uniform method of describing the complex
logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and
OR gates. They could be medium scale circuits such as a 4-bit counter to a large scale circuit such as a microprocessor.
IEC 617-12 and its successor IEC 60617-12 do not explicitly show the “distinctive shape” symbols, but do not prohibit
them.[3] These are, however, shown in ANSI/IEEE 91 (and 91a) with this note: “The distinctive-shape symbol is,
according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard.”
IEC 60617-12 correspondingly contains the note (Section 2.1) “Although non-preferred, the use of other symbols
recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall
not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form
complex symbols (for example, use as embedded symbols) is discouraged.” This compromise was reached between
the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with
one another.
A third style of symbols was in use in Europe and is still widely used in European academia. See the column “DIN
40700” in the table in the German Wikipedia.
In the 1980s, schematics were the predominant method to design both circuit boards and custom ICs known as gate
arrays. Today custom ICs and the field-programmable gate array are typically designed with Hardware Description
Languages (HDL) such as Verilog or VHDL.
The two-input exclusive-OR is true only when its two input values differ, and false when they are equal. If there
are more than two inputs, the gate generates a true at its output if the number of trues at its inputs is odd. In
practice, these gates are built from combinations of simpler logic gates.
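The odd-parity behaviour of a multi-input XOR can be checked mechanically. The Python sketch below (purely illustrative) folds two-input XOR over the inputs and compares the result with a direct count of true inputs.

```python
from functools import reduce
from itertools import product
from operator import xor

def multi_xor(*inputs):
    """Fold two-input XOR over all (boolean) inputs."""
    return bool(reduce(xor, inputs))

def odd_parity(*inputs):
    """True iff the number of true inputs is odd."""
    return sum(inputs) % 2 == 1

# The two definitions agree on every 3-input combination.
agree = all(multi_xor(*bits) == odd_parity(*bits)
            for bits in product([False, True], repeat=3))
```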

16.3 Universal logic gates


For more details on the theoretical basis, see functional completeness.
Charles Sanders Peirce (winter of 1880–81) showed that NOR gates alone (or alternatively NAND gates alone)
can be used to reproduce the functions of all the other logic gates, but his work on it was unpublished until 1933.[4]
The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called
Sheffer stroke; the logical NOR is sometimes called Peirce’s arrow.[5] Consequently, these gates are sometimes called
universal logic gates.[6]
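The universality claim can be made concrete with a short sketch (Python, illustrative only): every derived gate below is built solely from calls to a single nand function, mirroring how NAND gates alone can implement any Boolean function.

```python
def nand(a, b):
    """The only primitive used: NAND."""
    return not (a and b)

def not_(a):
    return nand(a, a)                      # NOT from one NAND

def and_(a, b):
    return not_(nand(a, b))                # AND = NOT(NAND)

def or_(a, b):
    return nand(not_(a), not_(b))          # OR via De Morgan

def nor_(a, b):
    return not_(or_(a, b))

def xor_(a, b):
    t = nand(a, b)                         # the classic 4-NAND XOR
    return nand(nand(a, t), nand(b, t))
```

An exhaustive check over both inputs confirms each derived gate matches its truth table; the dual construction works starting from NOR instead.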

16.4 De Morgan equivalent symbols


By use of De Morgan’s laws, an AND function is identical to an OR function with negated inputs and outputs. Likewise,
an OR function is identical to an AND function with negated inputs and outputs. A NAND gate is equivalent to an
OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs.
This leads to an alternative set of symbols for basic gates that use the opposite core symbol (AND or OR) but with the
inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help
to show accidental connection of an active high output to an active low input or vice versa. Any connection that has
logic negations at both ends can be replaced by a negationless connection and a suitable change of gate or vice versa.
Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead
using the De Morgan equivalent symbol at either of the two ends. When negation or polarity indicators on both ends
of a connection match, there is no logic negation in that path (effectively, bubbles “cancel”), making it easier to follow
logic states from one symbol to the next. This is commonly seen in real logic diagrams; the reader must therefore not
get into the habit of associating the shapes exclusively with OR or AND, but must also take into account the bubbles at
both inputs and outputs in order to determine the “true” logic function indicated.
[Figure: The 7400 chip, containing four NANDs. The two additional pins supply power (+5 V) and connect the ground.]

A De Morgan symbol can show more clearly a gate’s primary logical purpose and the polarity of its nodes that are
considered in the “signaled” (active, on) state. Consider the simplified case where a two-input NAND gate is used
to drive a motor when either of its inputs are brought low by a switch. The “signaled” state (motor on) occurs when
either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan
version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble
at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol
shows both inputs and output in the polarity that will drive the motor.
De Morgan’s theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as
combinations of only NOR gates, for economic reasons.
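De Morgan's laws underlying these equivalent symbols are easy to verify exhaustively. The Python fragment below (illustrative) checks that a NAND behaves as an OR with negated inputs, and a NOR as an AND with negated inputs.

```python
from itertools import product

def nand(a, b):
    return not (a and b)

def nor(a, b):
    return not (a or b)

# De Morgan: NAND(a, b) = (NOT a) OR (NOT b);  NOR(a, b) = (NOT a) AND (NOT b)
demorgan_holds = all(
    nand(a, b) == ((not a) or (not b)) and
    nor(a, b) == ((not a) and (not b))
    for a, b in product([False, True], repeat=2)
)
```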

16.5 Data storage

Main article: Sequential logic

Logic gates can also be used to store data. A storage element can be constructed by connecting several gates in a
"latch" circuit. More complicated designs that use clock signals and that change only on a rising or falling edge of the
clock are called edge-triggered "flip-flops". The combination of multiple flip-flops in parallel, to store a multiple-bit
value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a
sequential logic system since its output can be influenced by its previous state(s).
These logic circuits are known as computer memory. They vary in performance, based on factors of speed, complex-
ity, and reliability of storage, and many different types of designs are used based on the application.
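As a sketch of how cross-coupled gates store a bit, the following Python model (a deliberate simplification; real latches are analog circuits with timing constraints) iterates the two NOR equations of an SR latch until the outputs settle.

```python
def nor(a, b):
    return not (a or b)

def sr_latch(s, r, q, q_bar, steps=4):
    """Cross-coupled NOR latch: Q = NOR(R, Q'), Q' = NOR(S, Q).
    Iterating the equations a few times lets the feedback settle."""
    for _ in range(steps):
        q = nor(r, q_bar)
        q_bar = nor(s, q)
    return q, q_bar

q, q_bar = sr_latch(s=True, r=False, q=False, q_bar=True)   # set: Q becomes 1
q, q_bar = sr_latch(s=False, r=False, q=q, q_bar=q_bar)     # hold: Q stays 1
q, q_bar = sr_latch(s=False, r=True, q=q, q_bar=q_bar)      # reset: Q becomes 0
```

The "hold" step is where the memory lies: with both inputs low, the feedback loop preserves whatever state was last set.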

16.6 Three-state logic gates


A tristate buffer can be thought of as a switch. If B is on, the switch is closed. If B is off, the switch is open.

Main article: Tri-state buffer

A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-
impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used
on buses of the CPU to allow multiple chips to send data. A group of three-state gates driving a line with a suitable control
circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in
cards.
In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive
voltage). A low output would mean the output is sinking current to the negative power terminal (zero voltage). High
impedance would mean that the output is effectively disconnected from the circuit.
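A toy model of a shared bus (Python, illustrative; the string "Z" is just a sentinel standing in for the high-impedance state) shows why tristate outputs may share a line only when at most one driver is enabled at a time.

```python
HIGH_Z = "Z"   # sentinel for the high-impedance (disconnected) state

def tristate(enable, value):
    """Drive `value` onto the line when enabled; otherwise disconnect."""
    return value if enable else HIGH_Z

def resolve_bus(driver_outputs):
    """Value seen on a shared line. More than one active driver is bus
    contention (physically, outputs fighting), modeled here as an error."""
    active = [v for v in driver_outputs if v != HIGH_Z]
    if len(active) > 1:
        raise ValueError("bus contention: more than one active driver")
    return active[0] if active else HIGH_Z

# Chip A drives the bus; chips B and C sit in the high-impedance state.
bus = resolve_bus([tristate(True, 1), tristate(False, 0), tristate(False, 1)])
```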

16.7 History and development

The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705) and he also established
that by using the binary system, the principles of arithmetic and logic could be combined. In an 1886 letter, Charles
Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[7] Eventually,
vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can
be used as an AND logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition
5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, received part of
the 1954 Nobel Prize in Physics for the first modern electronic AND gate, built in 1924. Konrad Zuse designed and built
electromechanical logic gates for his computer Z1 (from 1935–38). Claude E. Shannon introduced the use of Boolean
algebra in the analysis and design of switching circuits in 1937. Active research is taking place in molecular logic
gates.

16.8 Implementations
Main article: Unconventional computing

Since the 1990s, most logic gates are made in CMOS technology (i.e. NMOS and PMOS transistors are used). Often
millions of logic gates are packaged in a single integrated circuit.
There are several logic families with different characteristics (power consumption, speed, cost, size) such as: RDL
(resistor-diode logic), RTL (resistor-transistor logic), DTL (diode-transistor logic), TTL (transistor-transistor logic)
and CMOS (complementary metal oxide semiconductor). There are also sub-variants, e.g. standard CMOS logic vs.
advanced types using still CMOS technology, but with some optimizations for avoiding loss of speed due to slower
PMOS transistors.
Non-electronic implementations are varied, though few of them are used in practical applications. Many early
electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electro-
mechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay or mechanical logic
gates, including on a molecular scale.[8] Logic gates have been made out of DNA (see DNA nanotechnology)[9] and
used to create a computer called MAYA (see MAYA II). Logic gates can be made from quantum mechanical ef-
fects (though quantum computing usually diverges from boolean design). Photonic logic gates use non-linear optical
effects.
In principle any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND
gate) can be used to make any kind of digital logic circuit. Note that the use of 3-state logic for bus systems is not
needed, and can be replaced by digital multiplexers.

16.9 See also


• And-inverter graph
• Boolean algebra topics
• Boolean function
• Digital circuit
• Espresso heuristic logic minimizer
• Fanout
• Flip-flop (electronics)
• Functional completeness
• Karnaugh map
• Combinational logic
• Logic family
• Logical graph
• NMOS logic
• Programmable Logic Controller (PLC)
• Programmable Logic Device (PLD)
• Propositional calculus
• Quantum gate
• Race hazard
• Reversible computing
• Truth table

16.10 References
[1] Jaeger, Microelectronic Circuit Design, McGraw-Hill 1997, ISBN 0-07-032482-4, pp. 226-233

[2] Tinder, Richard F. (2000). Engineering digital design: Revised Second Edition. pp. 317–319. ISBN 0-12-691295-5.
Retrieved 2008-07-04.

[3] Overview of IEEE Standard 91-1984 Explanation of Logic Symbols, Doc. No. SDYZ001A, Texas Instruments Semicon-
ductor Group, 1996

[4] Peirce, C. S. (manuscript winter of 1880–81), “A Boolean Algebra with One Constant”, published 1933 in Collected Papers
v. 4, paragraphs 12–20. Reprinted 1989 in Writings of Charles S. Peirce v. 4, pp. 218-21, Google Preview. See Roberts,
Don D. (2009), The Existential Graphs of Charles S. Peirce, p. 131.

[5] Hans Kleine Büning; Theodor Lettmann (1999). Propositional logic: deduction and algorithms. Cambridge University
Press. p. 2. ISBN 978-0-521-63017-7.

[6] John Bird (2007). Engineering mathematics. Newnes. p. 532. ISBN 978-0-7506-8555-9.

[7] Peirce, C. S., “Letter, Peirce to A. Marquand", dated 1886, Writings of Charles S. Peirce, v. 5, 1993, pp. 541–3. Google
Preview. See Burks, Arthur W., “Review: Charles S. Peirce, The new elements of mathematics", Bulletin of the American
Mathematical Society v. 84, n. 5 (1978), pp. 913–18, see 917. PDF Eprint.

[8] Mechanical Logic gates (focused on molecular scale)

[9] DNA Logic gates

16.11 Further reading


• Awschalom, D.D.; Loss, D.; Samarth, N. (5 August 2002). Semiconductor Spintronics and Quantum Compu-
tation. Berlin, Germany: Springer-Verlag. ISBN 978-3-540-42176-4. Retrieved 28 November 2012.

• Bostock, Geoff (1988). Programmable logic devices: technology and applications. New York: McGraw-Hill.
ISBN 978-0-07-006611-3. Retrieved 28 November 2012.

• Brown, Stephen D.; Francis, Robert J.; Rose, Jonathan; Vranesic, Zvonko G. (1992). Field Programmable
Gate Arrays. Boston, MA: Kluwer Academic Publishers. ISBN 978-0-7923-9248-4. Retrieved 28 November
2012.
Chapter 17

Logical biconditional

In logic and mathematics, the logical biconditional (sometimes known as the material biconditional) is the logical
connective of two statements asserting "p if and only if q", where q is an antecedent and p is a consequent.[1] This is
often abbreviated p iff q. The operator is denoted using a double-headed arrow (↔), a prefixed E (Epq), an equality
sign (=), an equivalence sign (≡), or EQV. It is logically equivalent to (p → q) ∧ (q → p), or the XNOR (exclusive
nor) boolean operator. It is equivalent to "(not p or q) and (not q or p)". It is also logically equivalent to "(p and q) or
(not p and not q)", meaning “both or neither”.
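Each of the equivalences listed above can be confirmed by exhausting the four truth assignments. The short Python check below treats the biconditional as boolean equality, an implementation convenience: for truth values, p ↔ q holds exactly when p = q.

```python
from itertools import product

def iff(p, q):
    """Biconditional: true exactly when p and q have the same truth value."""
    return p == q

equivalences_hold = all(
    iff(p, q) == ((not p or q) and (not q or p))       # (p -> q) and (q -> p)
    and iff(p, q) == ((p and q) or (not p and not q))  # "both or neither"
    and iff(p, q) == (not (p != q))                    # XNOR
    for p, q in product([False, True], repeat=2)
)
```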
The only difference from the material conditional is the case where the hypothesis is false but the conclusion is true:
there the conditional is true, yet the biconditional is false.
In the conceptual interpretation, a = b means “All a 's are b 's and all b 's are a 's"; in other words, the sets a and b
coincide: they are identical. This does not mean that the concepts have the same meaning. Examples: “triangle” and
“trilateral”, “equiangular trilateral” and “equilateral triangle”. The antecedent is the subject and the consequent is the
predicate of a universal affirmative proposition.
In the propositional interpretation, a ⇔ b means that a implies b and b implies a; in other words, that the propositions
are equivalent, that is to say, either true or false at the same time. This does not mean that they have the same meaning.
Example: “The triangle ABC has two equal sides”, and “The triangle ABC has two equal angles”. The antecedent is
the premise or the cause and the consequent is the consequence. When an implication is translated by a hypothetical
(or conditional) judgment the antecedent is called the hypothesis (or the condition) and the consequent is called the
thesis.
A common way of demonstrating a biconditional is to use its equivalence to the conjunction of two converse conditionals,
demonstrating these separately.
When both members of the biconditional are propositions, it can be separated into two conditionals, of which one
is called a theorem and the other its reciprocal. Thus whenever a theorem and its reciprocal are true we have a
biconditional. A simple theorem gives rise to an implication whose antecedent is the hypothesis and whose consequent
is the thesis of the theorem.
It is often said that the hypothesis is the sufficient condition of the thesis, and the thesis the necessary condition of
the hypothesis; that is to say, it is sufficient that the hypothesis be true for the thesis to be true; while it is necessary
that the thesis be true for the hypothesis to be true also. When a theorem and its reciprocal are true we say that its
hypothesis is the necessary and sufficient condition of the thesis; that is to say, that it is at the same time both cause
and consequence.

17.1 Definition

Logical equality (also known as biconditional) is an operation on two logical values, typically the values of two
propositions, that produces a value of true if and only if both operands are false or both operands are true.


17.1.1 Truth table

The truth table for A ↔ B (also written as A ≡ B, A = B, or A EQ B) is as follows:

A  B  A ↔ B
T  T    T
T  F    F
F  T    F
F  F    T


A chain of more than two statements combined by ↔ is ambiguous:
x1 ↔ x2 ↔ x3 ↔ ... ↔ xn may be meant as (((x1 ↔ x2 ) ↔ x3 ) ↔ ...) ↔ xn ,
or may be used to say that all xi are together true or together false: ( x1 ∧ ... ∧ xn ) ∨ (¬x1 ∧ ... ∧ ¬xn )
Only for zero or two arguments are the two readings the same.
The following truth tables show the same bit pattern only in the line with no argument and in the lines with two
arguments:
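The divergence between the two readings can be verified exhaustively; the following Python sketch (function names are illustrative) compares them:

```python
from itertools import product

def iff(a, b):
    return a == b

def chained(xs):
    """Left-associative reading: (((x1 ↔ x2) ↔ x3) ↔ ...) ↔ xn."""
    result = xs[0]
    for x in xs[1:]:
        result = iff(result, x)
    return result

def all_equal(xs):
    """Collective reading: all xi together true or together false."""
    return all(xs) or not any(xs)

# The readings agree for two arguments...
assert all(chained([a, b]) == all_equal([a, b])
           for a, b in product([False, True], repeat=2))
# ...but diverge for three: all-false gives True collectively, False chained.
assert chained([False, False, False]) != all_equal([False, False, False])
```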

x1 ↔ ... ↔ xn
meant as equivalent to
¬ (¬x1 ⊕ ... ⊕ ¬xn )
The central Venn diagram below,
and line (ABC ) in this matrix
represent the same operation.

The left Venn diagram below, and the lines (AB ) in these matrices represent the same operation.

17.1.2 Venn diagrams

Red areas stand for true (as in for and).



x1 ↔ ... ↔ xn
meant as shorthand for
( x1 ∧ ... ∧ xn )
∨ (¬x1 ∧ ... ∧ ¬xn )
The Venn diagram directly below,
and line (ABC ) in this matrix
represent the same operation.

17.2 Properties

commutativity: yes
associativity: yes
distributivity: Biconditional doesn't distribute over any binary function (not even itself),
but logical disjunction distributes over biconditional.
idempotency: no

monotonicity: no
truth-preserving: yes
When all inputs are true, the output is true.
falsehood-preserving: no
When all inputs are false, the output is not false.
Walsh spectrum: (2,0,0,2)
Nonlinearity: 0 (the function is linear)

17.3 Rules of inference


Main article: Rules of inference

Like all connectives in first-order logic, the biconditional has rules of inference that govern its use in formal proofs.

17.3.1 Biconditional introduction

Main article: Biconditional introduction

Biconditional introduction allows you to infer that, if B follows from A, and A follows from B, then A if and only if
B.
For example, from the statements “if I'm breathing, then I'm alive” and “if I'm alive, then I'm breathing”, it can be
inferred that “I'm breathing if and only if I'm alive” or, equally, that “I'm alive if and only if I'm breathing.”
B → A, A → B ∴ A ↔ B
B → A, A → B ∴ B ↔ A

17.3.2 Biconditional elimination

Biconditional elimination allows one to infer a conditional from a biconditional: if (A ↔ B) is true, then one may
infer either direction of the biconditional, (A → B) and (B → A).
For example, if it’s true that I'm breathing if and only if I'm alive, then it’s true that if I'm breathing, I'm alive; likewise,
it’s true that if I'm alive, I'm breathing.
Formally:
(A ↔ B) ∴ (A → B)
also
(A ↔ B) ∴ (B → A)
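The rule can be verified over all truth-value assignments; a small Python sketch:

```python
from itertools import product

def iff(a, b):
    return a == b

def implies(a, b):
    return (not a) or b

# Whenever A ↔ B holds, both A → B and B → A hold.
for a, b in product([False, True], repeat=2):
    if iff(a, b):
        assert implies(a, b)
        assert implies(b, a)
```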

17.4 Colloquial usage


One unambiguous way of stating a biconditional in plain English is of the form "b if a and a if b". Another is "a
if and only if b". Slightly more formally, one could say "b implies a and a implies b". The plain English “if” may
sometimes be used as a biconditional. One must weigh context heavily.
For example, “I'll buy you a new wallet if you need one” may be meant as a biconditional, since the speaker doesn't
intend buying the wallet regardless of need to be a valid outcome (as a plain conditional would allow). However, “it
is cloudy if it is raining” is not meant as a biconditional, since it can be cloudy while not raining.

17.5 See also


• If and only if

• Logical equivalence

• Logical equality

• XNOR gate

• Biconditional elimination

• Biconditional introduction

17.6 Notes
[1] Handbook of Logic, page 81

17.7 References
• Brennan, Joseph G. Handbook of Logic, 2nd Edition. Harper & Row. 1961

This article incorporates material from Biconditional on PlanetMath, which is licensed under the Creative Commons
Attribution/Share-Alike License.
Chapter 18

Logical conjunction

"∧" redirects here. For the logic gate, see AND gate. For exterior product, see Exterior algebra.
In logic and mathematics, and is the truth-functional operator of logical conjunction; the and of a set of operands is

Venn diagram of A∧B

true if and only if all of its operands are true. The logical connective that represents this operator is typically written
as ∧ or · .
"A and B" is true only if A is true and B is true.
An operand of a conjunction is a conjunct.
Related concepts in other fields are:

• In natural language, the coordinating conjunction “and”.

• In programming languages, the short-circuit and control structure.


Venn diagram of A∧B∧C

• In set theory, intersection.

• In predicate logic, universal quantification.

18.1 Notation
And is usually expressed with an infix operator: in mathematics and logic, ∧; in electronics, · ; and in programming
languages, & or and. In Jan Łukasiewicz's prefix notation for logic, the operator is K, for Polish koniunkcja.[1]

18.2 Definition
Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if and only if both of its operands are true.
The conjunctive identity is 1, which is to say that AND-ing an expression with 1 will never change the value of the
expression. In keeping with the concept of vacuous truth, when conjunction is defined as an operator or function of
arbitrary arity, the empty conjunction (AND-ing over an empty set of operands) is often defined as having the result
1.

18.2.1 Truth table

Conjunctions of the arguments on the left — The true bits form a Sierpinski triangle.

The truth table of A ∧ B :

A  B  A ∧ B
T  T    T
T  F    F
F  T    F
F  F    F

18.3 Introduction and elimination rules


As a rule of inference, conjunction introduction is a classically valid, simple argument form. The argument form has
two premises, A and B. Intuitively, it permits the inference of their conjunction.

A,
B.
Therefore, A and B.

or in logical operator notation:

A,

B

⊢A∧B

Here is an example of an argument that fits the form conjunction introduction:

Bob likes apples.


Bob likes oranges.
Therefore, Bob likes apples and oranges.

Conjunction elimination is another classically valid, simple argument form. Intuitively, it permits the inference from
any conjunction of either element of that conjunction.

A and B.
Therefore, A.

...or alternately,

A and B.
Therefore, B.

In logical operator notation:

A∧B

⊢A

...or alternately,

A∧B

⊢B

18.4 Properties
commutativity: yes
associativity: yes
distributivity: with various operations, especially with or
idempotency: yes

monotonicity: yes
truth-preserving: yes
When all inputs are true, the output is true.
falsehood-preserving: yes
When all inputs are false, the output is false.
Walsh spectrum: (1,−1,−1,1)
Nonlinearity: 1 (the function is bent)
If using binary values for true (1) and false (0), then logical conjunction works exactly like normal arithmetic multiplication.
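This correspondence can be checked exhaustively; a one-loop Python sketch:

```python
from itertools import product

# On 0/1 values, conjunction coincides with ordinary multiplication.
for a, b in product([0, 1], repeat=2):
    assert (a and b) == a * b
```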

AND logic gate

18.5 Applications in computer engineering


In high-level computer programming and digital electronics, logical conjunction is commonly represented by an infix
operator, usually as a keyword such as “AND”, an algebraic multiplication, or the ampersand symbol "&". Many
languages also provide short-circuit control structures corresponding to logical conjunction.
Logical conjunction is often used for bitwise operations, where 0 corresponds to false and 1 to true:

• 0 AND 0 = 0,
• 0 AND 1 = 0,
• 1 AND 0 = 0,
• 1 AND 1 = 1.

The operation can also be applied to two binary words viewed as bitstrings of equal length, by taking the bitwise
AND of each pair of bits at corresponding positions. For example:

• 11000110 AND 10100011 = 10000010.

This can be used to select part of a bitstring using a bit mask. For example, 10011101 AND 00001000 = 00001000
extracts the fifth bit of an 8-bit bitstring.
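The bitwise examples above can be replayed directly in Python, which uses & for bitwise AND:

```python
# Values taken from the bitstring examples above.
assert 0b11000110 & 0b10100011 == 0b10000010

# Masking: AND with 00001000 extracts the fifth bit (from the left) of 8.
x = 0b10011101
mask = 0b00001000
assert x & mask == 0b00001000  # the masked bit was set in x
```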
In computer networking, bit masks are used to derive the network address of a subnet within an existing network
from a given IP address, by ANDing the IP address and the subnet mask.
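A minimal sketch of that derivation using Python's standard ipaddress module, with a hypothetical address and the common /24 mask (the address values are illustrative, not from the article):

```python
import ipaddress

# Hypothetical address and a /24 subnet mask.
ip = int(ipaddress.IPv4Address("192.168.1.17"))
mask = int(ipaddress.IPv4Address("255.255.255.0"))

# ANDing the IP address with the subnet mask yields the network address.
network = ipaddress.IPv4Address(ip & mask)
assert str(network) == "192.168.1.0"
```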
Logical conjunction “AND” is also used in SQL operations to form database queries.
The Curry–Howard correspondence relates logical conjunction to product types.

18.6 Set-theoretic correspondence


The membership of an element of an intersection set in set theory is defined in terms of a logical conjunction: x ∈ A
∩ B if and only if (x ∈ A) ∧ (x ∈ B). Through this correspondence, set-theoretic intersection shares several properties
with logical conjunction, such as associativity, commutativity, and idempotence.
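The correspondence can be illustrated with small hypothetical sets in Python, where & is set intersection:

```python
# Membership in A ∩ B mirrors the conjunction of the two memberships.
A = {1, 2, 3}
B = {2, 3, 4}
assert all(((x in A) and (x in B)) == (x in (A & B)) for x in A | B)
```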

18.7 Natural language


As with other notions formalized in mathematical logic, the logical conjunction and is related to, but not the same
as, the grammatical conjunction and in natural languages.
English “and” has properties not captured by logical conjunction. For example, “and” sometimes implies order. For
example, “They got married and had a child” in common discourse means that the marriage came before the child.

The word “and” can also imply a partition of a thing into parts, as “The American flag is red, white, and blue.” Here
it is not meant that the flag is at once red, white, and blue, but rather that it has a part of each color.

18.8 See also


• And-inverter graph
• AND gate
• Binary and
• Bitwise AND
• Boolean algebra (logic)
• Boolean algebra topics
• Boolean conjunctive query
• Boolean domain
• Boolean function
• Boolean-valued function
• Conjunction introduction
• Conjunction elimination
• De Morgan’s laws
• First-order logic
• Fréchet inequalities
• Grammatical conjunction
• Logical disjunction
• Logical negation
• Logical graph
• Logical value
• Operation
• Peano-Russell notation
• Propositional calculus

18.9 References
[1] Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated by Otto Bird from the French and German
editions, Dordrecht, North Holland: D. Reidel, passim.

18.10 External links


• Hazewinkel, Michiel, ed. (2001), “Conjunction”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-
010-4
• Wolfram MathWorld: Conjunction
Chapter 19

Logical connective

This article is about connectives in logical systems. For connectors in natural languages, see discourse connective.
For other logical symbols, see List of logic symbols.

In logic, a logical connective (also called a logical operator) is a symbol or word used to connect two or more
sentences (of either a formal or a natural language) in a grammatically valid way, such that the sense of the compound
sentence produced depends only on the original sentences.
The most common logical connectives are binary connectives (also called dyadic connectives) which join two
sentences which can be thought of as the function’s operands. Also commonly, negation is considered to be a unary
connective.
Logical connectives along with quantifiers are the two main types of logical constants used in formal systems such
as propositional logic and predicate logic. The semantics of a logical connective is often, but not always, presented as a
truth function.[1]
A logical connective is similar to, but not equivalent to, a conditional operator.

19.1 In language

19.1.1 Natural language


In the grammar of natural languages two sentences may be joined by a grammatical conjunction to form a gram-
matically compound sentence. Some but not all such grammatical conjunctions are truth functions. For example,
consider the following sentences:

A: Jack went up the hill.


B: Jill went up the hill.
C: Jack went up the hill and Jill went up the hill.
D: Jack went up the hill so Jill went up the hill.

The words and and so are grammatical conjunctions joining the sentences (A) and (B) to form the compound sentences
(C) and (D). The and in (C) is a logical connective, since the truth of (C) is completely determined by (A) and (B):
it would make no sense to affirm (A) and (B) but deny (C). However, so in (D) is not a logical connective, since it
would be quite reasonable to affirm (A) and (B) but deny (D): perhaps, after all, Jill went up the hill to fetch a pail of
water, not because Jack had gone up the hill at all.
Various English words and word pairs express logical connectives, and some of them are synonymous. Examples
(with the name of the relationship in parentheses) are:

• “and” (conjunction)
• “and then” (conjunction)


• “and then within” (conjunction)


• “or” (disjunction)
• “either...or” (exclusive disjunction)
• “implies” (implication)
• “if...then” (implication)
• “if and only if” (equivalence)
• “only if” (implication)
• “just in case” (biconditional)
• “but” (conjunction)
• “however” (conjunction)
• “not both” (alternative denial)
• “neither...nor” (joint denial)

The word “not” (negation) and the phrases “it is false that” (negation) and “it is not the case that” (negation) also
express a logical connective – even though they are applied to a single statement, and do not connect two statements.

19.1.2 Formal languages


In formal languages, truth functions are represented by unambiguous symbols. These symbols are called “logical
connectives”, “logical operators”, “propositional operators”, or, in classical logic, "truth-functional connectives”. See
well-formed formula for the rules which allow new well-formed formulas to be constructed by joining other well-
formed formulas using truth-functional connectives.
Logical connectives can be used to link more than two statements, so one can speak of an "n-ary logical connective”.

19.2 Common logical connectives

19.2.1 List of common logical connectives


Commonly used logical connectives include

• Negation (not): ¬ , N (prefix), ~


• Conjunction (and): ∧ , K (prefix), & , ∙
• Disjunction (or): ∨ , A (prefix)
• Material implication (if...then): → , C (prefix), ⇒ , ⊃
• Biconditional (if and only if): ↔ , E (prefix), ≡ , =

Alternative names for biconditional are “iff”, “xnor” and “bi-implication”.


For example, the meaning of the statements it is raining and I am indoors is transformed when the two are combined
with logical connectives:

• It is not raining (¬P)


• It is raining and I am indoors (P ∧ Q)
• It is raining or I am indoors (P ∨ Q)

• If it is raining, then I am indoors (P → Q)


• If I am indoors, then it is raining (Q → P)
• I am indoors if and only if it is raining (P ↔ Q)

Here P = It is raining and Q = I am indoors.


It is also common to consider the always true formula and the always false formula to be connectives:

• True formula (⊤, 1, V [prefix], or T)


• False formula (⊥, 0, O [prefix], or F)

19.2.2 History of notations

• Negation: the symbol ¬ appeared in Heyting in 1929[2][3] (compare to Frege's negation symbol in his Begriffsschrift);
the symbol ~ appeared in Russell in 1908;[4] an alternative notation is to add a horizontal line on top of the
formula, as in P ; another alternative notation is to use a prime symbol as in P'.
• Conjunction: the symbol ∧ appeared in Heyting in 1929[2] (compare to Peano's use of the set-theoretic notation
of intersection ∩[5] ); & appeared at least in Schönfinkel in 1924;[6] . comes from Boole's interpretation of logic
as an elementary algebra.
• Disjunction: the symbol ∨ appeared in Russell in 1908[4] (compare to Peano's use of the set-theoretic notation
of union ∪); the symbol + is also used, in spite of the ambiguity coming from the fact that the + of ordinary
elementary algebra is an exclusive or when interpreted logically in a two-element ring; at one point in history,
a + together with a dot in the lower right corner was used by Peirce.[7]
• Implication: the symbol → can be seen in Hilbert in 1917;[8] ⊃ was used by Russell in 1908[4] (compare to
Peano’s inverted C notation); ⇒ was used in Vax.[9]
• Biconditional: the symbol ≡ was used at least by Russell in 1908;[4] ↔ was used at least by Tarski in 1940;[10] ⇔
was used in Vax; other symbols have appeared at various points in history, such as ⊃⊂ in Gentzen,[11] ~ in Schönfinkel[6]
or ⊂⊃ in Chazal.[12]
• True: the symbol 1 comes from Boole's interpretation of logic as an elementary algebra over the two-element
Boolean algebra; other notations include ∧, to be found in Peano.

• False: the symbol 0 also comes from Boole's interpretation of logic as a ring; other notations are to be
found in Peano.

Some authors have used letters for connectives at some point in history: u. for conjunction (German “und” for
“and”) and o. for disjunction (German “oder” for “or”) in earlier works by Hilbert (1904); Np for negation, Kpq
for conjunction, Dpq for alternative denial, Apq for disjunction, Xpq for joint denial, Cpq for implication, Epq for
biconditional in Łukasiewicz (1929);[13] cf. Polish notation.

19.2.3 Redundancy
A connective such as converse implication “←” is actually the same as the material conditional with swapped
arguments, so the symbol for converse implication is redundant. In some logical calculi (notably, in classical logic)
certain essentially different compound statements are logically equivalent. A less trivial example of a redundancy is
the classical equivalence between ¬P ∨ Q and P → Q. Therefore, a classical-based logical system does not need the
conditional operator "→" if "¬" (not) and "∨" (or) are already in use, or may use the "→" only as a syntactic sugar
for a compound having one negation and one disjunction.
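The classical equivalence can be verified over all four truth-value assignments; a short Python sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # P → Q defined as ¬P ∨ Q

# The definition is false exactly when P is true and Q is false,
# matching the material conditional's truth table.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == (not (p and not q))
```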
There are sixteen Boolean functions associating the input truth values P and Q with four-digit binary outputs. These
correspond to possible choices of binary logical connectives for classical logic. Different implementation of classical
logic can choose different functionally complete subsets of connectives.
One approach is to choose a minimal set, and define other connectives by some logical form, like in the example with
material conditional above. The following are the minimal functionally complete sets of operators in classical logic
whose arities do not exceed 2:

One element {↑}, {↓}.


Two elements { ∨ , ¬}, { ∧ , ¬}, {→, ¬}, {←, ¬}, {→, ⊥ }, {←, ⊥ }, {→, ↮ }, {←, ↮ }, {→, ↛ }, {→, ↚ },
{←, ↛ }, {←, ↚ }, { ↛ , ¬}, { ↚ , ¬}, { ↛ , ⊤ }, { ↚ , ⊤ }, { ↛ , ↔ }, { ↚ , ↔ }.
Three elements { ∨ , ↔ , ⊥ }, { ∨ , ↔ , ↮ }, { ∨ , ↮ , ⊤ }, { ∧ , ↔ , ⊥ }, { ∧ , ↔ , ↮ }, { ∧ , ↮ , ⊤ }.
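For instance, every connective can be built from the single element {↑} (NAND); a Python sketch of the standard constructions:

```python
from itertools import product

def nand(a, b):          # the Sheffer stroke ↑
    return not (a and b)

def neg(a):              # ¬a = a ↑ a
    return nand(a, a)

def conj(a, b):          # a ∧ b = ¬(a ↑ b)
    return neg(nand(a, b))

def disj(a, b):          # a ∨ b = ¬a ↑ ¬b
    return nand(neg(a), neg(b))

# Verify against Python's built-in connectives on all inputs.
for a, b in product([False, True], repeat=2):
    assert conj(a, b) == (a and b)
    assert disj(a, b) == (a or b)
    assert neg(a) == (not a)
```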

See more details about functional completeness in classical logic at Functional completeness in truth function.
Another approach is to use, with equal rights, connectives of a certain convenient and functionally complete, but not
minimal, set. This approach requires more propositional axioms, and each equivalence between logical forms must be
either an axiom or provable as a theorem.
The situation is more complicated in intuitionistic logic. Of its five connectives {∧, ∨, →, ¬, ⊥}, only negation ¬
can be reduced to other connectives (see details). Neither conjunction, disjunction, nor material conditional has
an equivalent form constructed out of the other four logical connectives.

19.3 Properties
Some logical connectives possess properties which may be expressed in the theorems containing the connective. Some
of those properties that a logical connective may have are:

• Associativity: Within an expression containing two or more of the same associative connectives in a row, the
order of the operations does not matter as long as the sequence of the operands is not changed.
• Commutativity: The operands of the connective may be swapped preserving logical equivalence to the original
expression.
• Distributivity: A connective denoted by · distributes over another connective denoted by +, if a · (b + c) = (a
· b) + (a · c) for all operands a, b, c.
• Idempotence: Whenever the operands of the operation are the same, the compound is logically equivalent to
the operand.
• Absorption: A pair of connectives ∧ , ∨ satisfies the absorption law if a ∧ (a ∨ b) = a for all operands a, b.
• Monotonicity: If f(a1 , ..., an) ≤ f(b1 , ..., bn) for all a1 , ..., an, b1 , ..., bn ∈ {0,1} such that a1 ≤ b1 , a2 ≤ b2 ,
..., an ≤ bn. E.g., ∨ , ∧ , ⊤ , ⊥ .
• Affinity: Each variable always makes a difference in the truth-value of the operation or it never makes a
difference. E.g., ¬ , ↔ , ↮ , ⊤ , ⊥ .
• Duality: To read the truth-value assignments for the operation from top to bottom on its truth table is the same
as taking the complement of reading the table of the same or another connective from bottom to top. Without
resorting to truth tables it may be formulated as g̃ (¬a1 , ..., ¬an) = ¬g(a1 , ..., an). E.g., ¬ .
• Truth-preserving: A compound whose arguments are all tautologies is itself a tautology. E.g., ∨ , ∧ , ⊤ , →
, ↔ , ⊂. (see validity)
• Falsehood-preserving: A compound whose arguments are all contradictions is itself a contradiction. E.g., ∨
, ∧ , ↮ , ⊥ , ⊄, ⊅. (see validity)
• Involutivity (for unary connectives): f(f(a)) = a. E.g. negation in classical logic.

For classical and intuitionistic logic, the "=" symbol means that corresponding implications "…→…" and "…←…" for
logical compounds can be both proved as theorems, and the "≤" symbol means that "…→…" for logical compounds
is a consequence of corresponding "…→…" connectives for propositional variables. Some many-valued logics may
have incompatible definitions of equivalence and order (entailment).
Both conjunction and disjunction are associative, commutative and idempotent in classical logic, most varieties of
many-valued logic and intuitionistic logic. The same is true about distributivity of conjunction over disjunction and
disjunction over conjunction, as well as for the absorption law.
In classical logic and some varieties of many-valued logic, conjunction and disjunction are dual, and negation is
self-dual, the latter is also self-dual in intuitionistic logic.

19.4 Order of precedence


As a way of reducing the number of necessary parentheses, one may introduce precedence rules: ¬ has higher
precedence than ∧ , ∧ higher than ∨ , and ∨ higher than → . So for example, P ∨ Q ∧ ¬R → S is short for
(P ∨ (Q ∧ (¬R))) → S .
Here is a table that shows a commonly used precedence of logical operators.[14]

However, not all authors use the same order; for instance, an ordering in which disjunction is lower precedence than
implication or bi-implication has also been used.[15] Sometimes the precedence between conjunction and disjunction is
left unspecified, and must then be provided explicitly in a given formula with parentheses. The order of precedence
determines which connective is the “main connective” when interpreting a non-atomic formula.
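A sketch of how the precedence convention changes the result, with an arbitrarily chosen truth-value assignment (the values are illustrative):

```python
def implies(a, b):
    return (not a) or b  # material conditional

# One illustrative truth-value assignment.
P, Q, R, S = True, False, True, False

# With the stated precedence, P ∨ Q ∧ ¬R → S parses as (P ∨ (Q ∧ (¬R))) → S.
stated = implies(P or (Q and (not R)), S)

# A different grouping, P ∨ ((Q ∧ ¬R) → S), yields a different value,
# which is why the convention matters.
other = P or implies(Q and (not R), S)

assert stated == False
assert other == True
```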

19.5 Computer science


A truth-functional approach to logical operators is implemented as logic gates in digital circuits. Practically all digital
circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates; see more
details in Truth function in computer science. Logical operators over bit vectors (corresponding to finite Boolean
algebras) are bitwise operations.
But not every usage of a logical connective in computer programming has a Boolean semantics. For example, lazy
evaluation is sometimes implemented for P ∧ Q and P ∨ Q, so these connectives are not commutative if either of the
expressions P, Q has side effects. Also, a conditional, which in some sense corresponds to the material conditional
connective, is essentially non-Boolean because for if (P) then Q; the consequent Q is not executed if the antecedent
P is false (although the compound as a whole is successful ≈ “true” in such a case). This is closer to intuitionist and
constructivist views on the material conditional than to those of classical logic.

19.6 See also

19.7 Notes
[1] Cogwheel. “What is the difference between logical and conditional /operator/". Stack Overflow. Retrieved 9 April 2015.

[2] Heyting (1929) Die formalen Regeln der intuitionistischen Logik.

[3] Denis Roegel (2002), Petit panorama des notations logiques du 20e siècle (see chart on page 2).

[4] Russell (1908) Mathematical logic as based on the theory of types (American Journal of Mathematics 30, p222–262, also
in From Frege to Gödel edited by van Heijenoort).

[5] Peano (1889) Arithmetices principia, nova methodo exposita.

[6] Schönfinkel (1924) Über die Bausteine der mathematischen Logik, translated as On the building blocks of mathematical
logic in From Frege to Gödel edited by van Heijenoort.

[7] Peirce (1867) On an improvement in Boole’s calculus of logic.

[8] Hilbert (1917/1918) Prinzipien der Mathematik (Bernays’ course notes).

[9] Vax (1982) Lexique logique, Presses Universitaires de France.

[10] Tarski (1940) Introduction to logic and to the methodology of deductive sciences.

[11] Gentzen (1934) Untersuchungen über das logische Schließen.

[12] Chazal (1996) : Éléments de logique formelle.

[13] See Roegel



[14] O'Donnell, John; Hall, Cordelia; Page, Rex (2007), Discrete Mathematics Using a Computer, Springer, p. 120, ISBN
9781846285981.

[15] Jackson, Daniel (2012), Software Abstractions: Logic, Language, and Analysis, MIT Press, p. 263, ISBN 9780262017152.

19.8 References
• Bocheński, Józef Maria (1959), A Précis of Mathematical Logic, translated from the French and German edi-
tions by Otto Bird, D. Reidel, Dordrecht, South Holland.

• Enderton, Herbert (2001), A Mathematical Introduction to Logic (2nd ed.), Boston, MA: Academic Press,
ISBN 978-0-12-238452-3

• Gamut, L.T.F (1991), “Chapter 2”, Logic, Language and Meaning 1, University of Chicago Press, pp. 54–64,
OCLC 21372380

• Rautenberg, W. (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer Sci-
ence+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6.

19.9 Further reading


• Lloyd Humberstone (2011). The Connectives. MIT Press. ISBN 978-0-262-01654-4.

19.10 External links


• Hazewinkel, Michiel, ed. (2001), “Propositional connective”, Encyclopedia of Mathematics, Springer, ISBN
978-1-55608-010-4

• Lloyd Humberstone (2010), "Sentence Connectives in Formal Logic", Stanford Encyclopedia of Philosophy
(An abstract algebraic logic approach to connectives.)

• John MacFarlane (2005), "Logical constants", Stanford Encyclopedia of Philosophy.


Chapter 20

Logical disjunction

“Disjunction” redirects here. For the logic gate, see OR gate. For separation of chromosomes, see Meiosis. For
disjunctions in distribution, see Disjunct distribution.
In logic and mathematics, or is the truth-functional operator of (inclusive) disjunction, also known as alternation;

Venn diagram of A∨B

the or of a set of operands is true if and only if one or more of its operands is true. The logical connective that
represents this operator is typically written as ∨ or + .
"A or B" is true if A is true, or if B is true, or if both A and B are true.
In logic, or by itself means the inclusive or, distinguished from an exclusive or, which is false when both of its
arguments are true, whereas an inclusive “or” is true in that case.
An operand of a disjunction is called a disjunct.
Related concepts in other fields are:


Venn diagram of A∨B∨C

• In natural language, the coordinating conjunction “or”.

• In programming languages, the short-circuit or control structure.

• In set theory, union.

• In predicate logic, existential quantification.

20.1 Notation
Or is usually expressed with an infix operator: in mathematics and logic, ∨; in electronics, +; and in programming
languages, | or or. In Jan Łukasiewicz's prefix notation for logic, the operator is A, for Polish alternatywa.[1]

20.2 Definition
Logical disjunction is an operation on two logical values, typically the values of two propositions, that has a value
of false if and only if both of its operands are false. More generally, a disjunction is a logical formula that can have
one or more literals separated only by ORs. A single literal is often considered to be a degenerate disjunction.
The disjunctive identity is false, which is to say that the or of an expression with false has the same value as the original
expression. In keeping with the concept of vacuous truth, when disjunction is defined as an operator or function of
arbitrary arity, the empty disjunction (OR-ing over an empty set of operands) is generally defined as false.

20.2.1 Truth table

Disjunctions of the arguments on the left — The false bits form a Sierpinski triangle.

The truth table of A ∨ B :

A  B  A ∨ B
T  T    T
T  F    T
F  T    T
F  F    F

20.3 Properties
• Commutativity

• Associativity

• Distributivity with various operations, especially with and

• Idempotency

• Monotonicity

• Truth-preserving validity

When all inputs are true, the output is true.

• False-preserving validity

When all inputs are false, the output is false.

• Walsh spectrum: (3,−1,−1,−1)

• Nonlinearity: 1 (the function is bent)

If using binary values for true (1) and false (0), then logical disjunction works almost like binary addition. The only
difference is that 1 ∨ 1 = 1 , while 1 + 1 = 10 .
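In other words, disjunction on 0/1 values is saturating addition; a quick Python check:

```python
# Logical OR on 0/1 values is saturating addition: 1 ∨ 1 = 1, while 1 + 1 = 2.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert (a | b) == min(a + b, 1)
```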

20.4 Symbol
The mathematical symbol for logical disjunction varies in the literature. In addition to the word “or”, and the formula
“Apq", the symbol " ∨ ", deriving from the Latin word vel (“either”, “or”) is commonly used for disjunction. For
example: "A ∨ B " is read as "A or B ". Such a disjunction is false if both A and B are false. In all other cases it is
true.
All of the following are disjunctions:

A∨B

¬A ∨ B
A ∨ ¬B ∨ ¬C ∨ D ∨ ¬E.
The corresponding operation in set theory is the set-theoretic union.

20.5 Applications in computer science

A out
B
OR logic gate

Operators corresponding to logical disjunction exist in most programming languages.

20.5.1 Bitwise operation


Disjunction is often used for bitwise operations. Examples:

• 0 or 0 = 0
• 0 or 1 = 1
• 1 or 0 = 1
• 1 or 1 = 1
• 1010 or 1100 = 1110

The or operator can be used to set bits in a bit field to 1, by or-ing the field with a constant field with the relevant bits
set to 1. For example, x = x | 0b00000001 will force the final bit to 1 while leaving other bits unchanged.
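The same bit-setting idiom in Python (the field value is a hypothetical example):

```python
x = 0b10100110        # a hypothetical bit field
x = x | 0b00000001    # force the final bit to 1, leaving other bits unchanged
assert x == 0b10100111
```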

20.5.2 Logical operation


Many languages distinguish between bitwise and logical disjunction by providing two distinct operators; in languages
following C, bitwise disjunction is performed with the single pipe (|) and logical disjunction with the double pipe (||)
operators.
Logical disjunction is usually short-circuited; that is, if the first (left) operand evaluates to true then the second (right)
operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point.
In a parallel (concurrent) language, it is possible to short-circuit both sides: they are evaluated in parallel, and if one
terminates with value true, the other is interrupted. This operator is thus called the parallel or.
Although in most languages the type of a logical disjunction expression is boolean and thus can only have the value
true or false, in some (such as Python and JavaScript) the logical disjunction operator returns one of its operands: the
first operand if it evaluates to a true value, and the second operand otherwise.
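Python's or exhibits both behaviours described here, short-circuiting and operand-returning; a small sketch:

```python
# `or` returns one of its operands rather than a strict bool.
assert (0 or "fallback") == "fallback"     # first operand falsy: second returned
assert ("value" or "fallback") == "value"  # first operand truthy: returned as-is

# Short-circuiting: the right operand is never evaluated when the left is truthy.
calls = []
def right_side():
    calls.append("evaluated")
    return True

result = True or right_side()
assert result is True
assert calls == []  # right_side() was never called
```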

20.5.3 Constructive disjunction


The Curry–Howard correspondence relates a constructivist form of disjunction to tagged union types.

20.6 Union
In set theory, membership in the union of two sets is defined in terms of a logical disjunction: x ∈ A ∪ B if
and only if (x ∈ A) ∨ (x ∈ B). Because of this, logical disjunction satisfies many of the same identities as set-theoretic
union, such as associativity, commutativity, distributivity, and De Morgan’s laws.
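The correspondence can be checked concretely with Python sets (the particular sets A, B and universe U are arbitrary illustrative choices):

```python
A = {1, 2, 3}
B = {3, 4}
U = set(range(6))   # an arbitrary finite universe containing A and B

# x ∈ A ∪ B  if and only if  (x ∈ A) ∨ (x ∈ B)
for x in U:
    assert (x in (A | B)) == ((x in A) or (x in B))

# The shared identities carry over, e.g. commutativity and De Morgan:
assert A | B == B | A
assert U - (A | B) == (U - A) & (U - B)
```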

20.7 Natural language


As with other notions formalized in mathematical logic, the meaning of the natural-language coordinating conjunction
or is closely related to, but different from the logical or. For example, “Please ring me or send an email” likely means
“do one or the other, but not both”. On the other hand, “Her grades are so good that she’s either very bright or studies
hard” does not exclude the possibility of both. In other words, in ordinary language “or” can mean the inclusive or
exclusive or.

20.8 See also

20.9 Notes
• George Boole, closely following analogy with ordinary mathematics, premised, as a necessary condition to
the definition of “x + y”, that x and y were mutually exclusive. Jevons, and practically all mathematical logi-
cians after him, advocated, on various grounds, the definition of “logical addition” in a form which does not
necessitate mutual exclusiveness.

20.10 External links


• Hazewinkel, Michiel, ed. (2001), “Disjunction”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-
010-4
• Stanford Encyclopedia of Philosophy entry

• Eric W. Weisstein. “Disjunction.” From MathWorld--A Wolfram Web Resource

20.11 References
[1] Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated by Otto Bird from the French and German
editions, Dordrecht, North Holland: D. Reidel, passim.
Chapter 21

Material conditional

“Logical conditional” redirects here. For other related meanings, see Conditional statement.
Not to be confused with material inference.
The material conditional (also known as "material implication", "material consequence", or simply "implication",

Venn diagram of A → B .
If a member of the set described by this diagram (the red areas) is a member of A , it is in the intersection of A and B , and it
therefore is also in B .

"implies" or "conditional") is a logical connective (or a binary operator) that is often symbolized by a forward ar-
row "→". The material conditional is used to form statements of the form "p→q" (termed a conditional statement)
which is read as “if p then q” or “p only if q” and conventionally compared to the English construction “If...then...”.
But unlike the English construction, the material conditional statement "p→q" does not specify a causal relationship
between p and q and is to be understood to mean “if p is true, then q is also true” such that the statement "p→q"
is false only when p is true and q is false.[1] Intuitively, consider that a given p being true and q being false would
prove an “if p is true, q is always also true” statement false, even when the “if p then q” does not represent a causal


relationship between p and q. Instead, the statement asserts only that q is true whenever p is true, and makes
no claim that p causes q. However, note that such a general and informal way of thinking about
the material conditional is not always acceptable, as will be discussed. As such, the material conditional is also to be
distinguished from logical consequence.
The material conditional is also symbolized using:

1. p ⊃ q (Although this symbol may be used for the superset symbol in set theory.);
2. p ⇒ q (Although this symbol is often used for logical consequence (i.e. logical implication) rather than for
material conditional.)

With respect to the material conditionals above, p is termed the antecedent, and q the consequent of the conditional.
Conditional statements may be nested such that either or both of the antecedent or the consequent may themselves
be conditional statements. In the example "(p→q) → (r→s)" both the antecedent and the consequent are conditional
statements.
In classical logic p → q is logically equivalent to ¬(p ∧ ¬q) and by De Morgan’s Law logically equivalent to ¬p ∨ q
.[2] Whereas, in minimal logic (and therefore also intuitionistic logic) p → q only logically entails ¬(p ∧ ¬q) ; and in
intuitionistic logic (but not minimal logic) ¬p ∨ q entails p → q .

21.1 Definitions of the material conditional


Logicians have many different views on the nature of material implication and approaches to explain its sense.[3]

21.1.1 As a truth function


In classical logic, the compound p→q is logically equivalent to the negative compound: not both p and not q. Thus
the compound p→q is false if and only if both p is true and q is false. By the same stroke, p→q is true if and only if
either p is false or q is true (or both). Thus → is a function from pairs of truth values of the components p, q to truth
values of the compound p→q, whose truth value is entirely a function of the truth values of the components. Hence,
this interpretation is called truth-functional. The compound p→q is logically equivalent also to ¬p∨q (either not p, or
q (or both)), and to ¬q→¬p (if not q then not p). But it is not equivalent to ¬p→¬q, which is equivalent to q→p.

Truth table

The truth table associated with the material conditional p→q is identical to that of ¬p∨q and is also denoted by Cpq.
It is as follows:
It may also be useful to note that in Boolean algebra, true and false can be denoted as 1 and 0 respectively with an
equivalent table.
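The truth table just described can be generated and checked mechanically; the following Python sketch uses 1 and 0 for true and false, matching the Boolean-algebra convention noted above:

```python
def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# Enumerate the four rows of the truth table of p -> q.
for p in (True, False):
    for q in (True, False):
        assert implies(p, q) == (not (p and not q))  # same as ¬(p ∧ ¬q)
        assert implies(p, q) == ((not p) or q)       # same as ¬p ∨ q

# The single false row:
assert implies(True, False) is False
```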

21.1.2 As a formal connective


The material conditional can be considered as a symbol of a formal theory, taken as a set of sentences, satisfying all
the classical inferences involving →, in particular the following characteristic rules:

1. Modus ponens;
2. Conditional proof;
3. Classical contraposition;
4. Classical reductio ad absurdum.

Unlike the truth-functional one, this approach to logical connectives permits the examination of structurally identi-
cal propositional forms in various logical systems, where somewhat different properties may be demonstrated. For
example, in intuitionistic logic which rejects proofs by contraposition as valid rules of inference, (p → q) ⇒ ¬p ∨ q
is not a propositional theorem, but the material conditional is used to define negation.

21.2 Formal properties


When studying logic formally, the material conditional is distinguished from the semantic consequence relation |= .
We say A |= B if every interpretation that makes A true also makes B true. However, there is a close relationship
between the two in most logics, including classical logic. For example, the following principles hold:

• If Γ |= ψ then ∅ |= (φ1 ∧ · · · ∧ φn → ψ) for some φ1 , . . . , φn ∈ Γ . (This is a particular form of the


deduction theorem. In words, it says that if Γ models ψ this means that ψ can be deduced just from some
subset of the theorems in Γ.)

• The converse of the above

• Both → and |= are monotonic; i.e., if Γ |= ψ then ∆ ∪ Γ |= ψ , and if φ → ψ then (φ ∧ α) → ψ for any α,
Δ. (In terms of structural rules, this is often referred to as weakening or thinning.)

These principles do not hold in all logics, however. Obviously they do not hold in non-monotonic logics, nor do they
hold in relevance logics.
Other properties of implication (the following expressions are always true, for any logical values of variables):

• distributivity: (s → (p → q)) → ((s → p) → (s → q))

• transitivity: (a → b) → ((b → c) → (a → c))

• reflexivity: a → a

• totality: (a → b) ∨ (b → a)

• truth preserving: The interpretation under which all variables are assigned a truth value of 'true' produces a
truth value of 'true' as a result of material implication.

• commutativity of antecedents: (a → (b → c)) ≡ (b → (a → c))

Note that a → (b → c) is logically equivalent to (a ∧ b) → c ; this property is sometimes called un/currying.


Because of these properties, it is convenient to adopt a right-associative notation for → where a → b → c denotes
a → (b → c) .
Comparison of Boolean truth tables shows that a → b is equivalent to ¬a ∨ b , and one is an equivalent replacement
for the other in classical logic. See material implication (rule of inference).
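Since each property above is a propositional tautology, it can be verified by brute force over all truth assignments; this short Python sketch checks a few of them (the `tautology` helper is introduced here for illustration):

```python
from itertools import product

def implies(p, q):
    """Material conditional, equivalent to ¬p ∨ q."""
    return (not p) or q

def tautology(f, arity):
    """Brute-force check that f is true under every truth assignment."""
    return all(f(*vals) for vals in product([True, False], repeat=arity))

# distributivity: (s → (p → q)) → ((s → p) → (s → q))
assert tautology(lambda s, p, q:
    implies(implies(s, implies(p, q)),
            implies(implies(s, p), implies(s, q))), 3)

# transitivity: (a → b) → ((b → c) → (a → c))
assert tautology(lambda a, b, c:
    implies(implies(a, b), implies(implies(b, c), implies(a, c))), 3)

# reflexivity and totality
assert tautology(lambda a: implies(a, a), 1)
assert tautology(lambda a, b: implies(a, b) or implies(b, a), 2)
```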

21.3 Philosophical problems with material conditional


Outside of mathematics, it is a matter of some controversy as to whether the truth function for material implica-
tion provides an adequate treatment of conditional statements in English (a sentence in the indicative mood with a
conditional clause attached, i.e., an indicative conditional, or false-to-fact sentences in the subjunctive mood, i.e., a
counterfactual conditional).[4] That is to say, critics argue that in some non-mathematical cases, the truth value of
a compound statement, “if p then q", is not adequately determined by the truth values of p and q.[4] Examples of
non-truth-functional statements include: "q because p", "p before q" and “it is possible that p".[4] “[Of] the sixteen
possible truth-functions of A and B, material implication is the only serious candidate. First, it is uncontroversial that
when A is true and B is false, “If A, B" is false. A basic rule of inference is modus ponens: from “If A, B" and A, we
can infer B. If it were possible to have A true, B false and “If A, B" true, this inference would be invalid. Second, it is
uncontroversial that “If A, B" is sometimes true when A and B are respectively (true, true), or (false, true), or (false,
false)… Non-truth-functional accounts agree that “If A, B" is false when A is true and B is false; and they agree that
the conditional is sometimes true for the other three combinations of truth-values for the components; but they deny
that the conditional is always true in each of these three cases. Some agree with the truth-functionalist that when A
and B are both true, “If A, B" must be true. Some do not, demanding a further relation between the facts that A and
that B.”[4]

The truth-functional theory of the conditional was integral to Frege's new logic (1879). It was taken
up enthusiastically by Russell (who called it “material implication”), Wittgenstein in the Tractatus, and
the logical positivists, and it is now found in every logic text. It is the first theory of conditionals which
students encounter. Typically, it does not strike students as obviously correct. It is logic’s first surprise.
Yet, as the textbooks testify, it does a creditable job in many circumstances. And it has many defenders.
It is a strikingly simple theory: “If A, B" is false when A is true and B is false. In all other cases, “If A,
B" is true. It is thus equivalent to "~(A&~B)" and to "~A or B". "A ⊃ B" has, by stipulation, these truth
conditions.
— Dorothy Edgington, The Stanford Encyclopedia of Philosophy, “Conditionals”[4]

The meaning of the material conditional can sometimes be used in the natural language English “if condition then
consequence" construction (a kind of conditional sentence), where condition and consequence are to be filled with
English sentences. However, this construction also implies a “reasonable” connection between the condition (protasis)
and consequence (apodosis) (see Connexive logic).
The material conditional can yield some unexpected truths when expressed in natural language. For example, any
material conditional statement with a false antecedent is true (see vacuous truth). So the statement “if 2 is odd then 2
is even” is true. Similarly, any material conditional with a true consequent is true. So the statement “if I have a penny
in my pocket then Paris is in France” is always true, regardless of whether or not there is a penny in my pocket. These
problems are known as the paradoxes of material implication, though they are not really paradoxes in the strict sense;
that is, they do not elicit logical contradictions. These unexpected truths arise because speakers of English (and other
natural languages) are tempted to equivocate between the material conditional and the indicative conditional, or other
conditional statements, like the counterfactual conditional and the material biconditional. It is not surprising that a
rigorously defined truth-functional operator does not correspond exactly to all notions of implication or otherwise
expressed by 'if...then...' sentences in English (or their equivalents in other natural languages). For an overview of
some of the various analyses, formal and informal, of conditionals, see the “References” section below.

21.4 See also

21.4.1 Conditionals
• Counterfactual conditional

• Indicative conditional

• Corresponding conditional

• Strict conditional

21.5 References
[1] Magnus, P.D (January 6, 2012). “forallx: An Introduction to Formal Logic” (PDF). Creative Commons. p. 25. Retrieved
28 May 2013.

[2] Teller, Paul (January 10, 1989). “A Modern Formal Logic Primer: Sentence Logic Volume 1” (PDF). Prentice Hall. p.
54. Retrieved 28 May 2013.

[3] Clarke, Matthew C. (March 1996). “A Comparison of Techniques for Introducing Material Implication”. Cornell Univer-
sity. Retrieved March 4, 2012.

[4] Edgington, Dorothy (2008). Edward N. Zalta, ed. “Conditionals”. The Stanford Encyclopedia of Philosophy (Winter 2008
ed.).

21.6 Further reading


• Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer
Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY, 2003.

• Edgington, Dorothy (2001), “Conditionals”, in Lou Goble (ed.), The Blackwell Guide to Philosophical Logic,
Blackwell.
• Quine, W.V. (1982), Methods of Logic, (1st ed. 1950), (2nd ed. 1959), (3rd ed. 1972), 4th edition, Harvard
University Press, Cambridge, MA.
• Stalnaker, Robert, “Indicative Conditionals”, Philosophia, 5 (1975): 269–286.

21.7 External links


• Conditionals entry by Edgington, Dorothy in the Stanford Encyclopedia of Philosophy
Chapter 22

Monotonic function

“Monotonicity” redirects here. For information on monotonicity as it pertains to voting systems, see monotonicity
criterion.
“Monotonic” redirects here. For other uses, see Monotone (disambiguation).
In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves

Figure 1. A monotonically increasing function. It is strictly increasing on the left and right while just non-decreasing in the middle.


Figure 2. A monotonically decreasing function

the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order
theory.

22.1 Monotonicity in calculus and analysis


In calculus, a function f defined on a subset of the real numbers with real values is called monotonic if and only if it is
either entirely increasing or decreasing. It is called monotonically increasing (also increasing or non-decreasing),
if for all x and y such that x ≤ y one has f (x) ≤ f (y) , so f preserves the order (see Figure 1). Likewise, a function
is called monotonically decreasing (also decreasing, or non-increasing) if, whenever x ≤ y , then f (x) ≥ f (y) ,
so it reverses the order (see Figure 2).
If the order ≤ in the definition of monotonicity is replaced by the strict order < , then one obtains a stronger require-
ment. A function with this property is called strictly increasing. Again, by inverting the order symbol, one finds a
corresponding concept called strictly decreasing. Functions that are strictly increasing or decreasing are one-to-one
(because for x not equal to y , either x < y or x > y and so, by monotonicity, either f (x) < f (y) or f (x) > f (y) ,
thus f (x) is not equal to f (y) .)
When functions between discrete sets are considered in combinatorics, it is not always obvious that “increasing” and
“decreasing” are taken to include the possibility of repeating the same value at successive arguments, so one finds the
terms weakly increasing and weakly decreasing to stress this possibility.
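For finitely many sample points, the weak and strict notions above can be checked directly; this Python sketch compares adjacent values of a sequence (a finite approximation of the calculus definitions, not a proof about the full function):

```python
def is_monotone_increasing(xs, strict=False):
    """Check (weak or strict) monotonic increase over a finite sequence."""
    if strict:
        return all(a < b for a, b in zip(xs, xs[1:]))
    return all(a <= b for a, b in zip(xs, xs[1:]))

samples = [0, 1, 1, 2, 3]   # non-decreasing, with a repeated value
assert is_monotone_increasing(samples)                      # weakly increasing
assert not is_monotone_increasing(samples, strict=True)     # not strictly
assert is_monotone_increasing([0, 1, 2, 3], strict=True)    # strictly increasing
```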

Figure 3. A function that is not monotonic

The terms “non-decreasing” and “non-increasing” should not be confused with the (much weaker) negative qualifi-
cations “not decreasing” and “not increasing”. For example, the function of figure 3 first falls, then rises, then falls
again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
The term monotonic transformation can also possibly cause some confusion because it refers to a transformation
by a strictly increasing function. Notably, this is the case in economics with respect to the ordinal properties of a
utility function being preserved across a monotonic transform (see also monotone preferences).[1]
A function f (x) is said to be absolutely monotonic over an interval (a, b) if the derivatives of all orders of f are
nonnegative at all points on the interval.

22.1.1 Some basic applications and results


The following properties are true for a monotonic function f : R → R :

• f has limits from the right and from the left at every point of its domain;
• f has a limit at positive or negative infinity ( ±∞ ) of either a real number, ∞ , or (−∞) .
• f can only have jump discontinuities;
• f can only have countably many discontinuities in its domain.

These properties are the reason why monotonic functions are useful in technical work in analysis. Two facts about
these functions are:

• if f is a monotonic function defined on an interval I , then f is differentiable almost everywhere on I , i.e. the
set of numbers x in I such that f is not differentiable at x has Lebesgue measure zero. In addition,
this result cannot be improved to countable: see Cantor function.

• if f is a monotonic function defined on an interval [a, b] , then f is Riemann integrable.

An important application of monotonic functions is in probability theory. If X is a random variable, its cumulative
distribution function FX (x) = Prob(X ≤ x) is a monotonically increasing function.
A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreas-
ing.
When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f , then there is an
inverse function on T for f .

22.2 Monotonicity in Topology


A map f : X → Y is said to be monotone if each of its fibers is connected i.e. for each element y in Y the (possibly
empty) set f −1 (y) is connected.

22.3 Monotonicity in functional analysis


In functional analysis on a topological vector space X, a (possibly non-linear) operator T : X → X∗ is said to be a
monotone operator if

(T u − T v, u − v) ≥ 0 ∀u, v ∈ X.

Kachurovskii’s theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
A subset G of X × X∗ is said to be a monotone set if for every pair [u1 ,w1 ] and [u2 ,w2 ] in G,

(w1 − w2 , u1 − u2 ) ≥ 0.

G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph
of a monotone operator G(T) is a monotone set. A monotone operator is said to be maximal monotone if its graph
is a maximal monotone set.

22.4 Monotonicity in order theory


Order theory deals with arbitrary partially ordered sets and preordered sets in addition to real numbers. The above
definition of monotonicity is relevant in these cases as well. However, the terms “increasing” and “decreasing” are
avoided, since their conventional pictorial representation does not apply to orders that are not total. Furthermore, the
strict relations < and > are of little use in many non-total orders and hence no additional terminology is introduced
for them.
A monotone function is also called isotone, or order-preserving. The dual notion is often called antitone, anti-
monotone, or order-reversing. Hence, an antitone function f satisfies the property

x ≤ y implies f(x) ≥ f(y),



for all x and y in its domain. The composite of two monotone mappings is also monotone.
A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain
of f is a lattice, then f must be constant.
Monotone functions are central in order theory. They appear in most articles on the subject and examples from special
applications are found in these places. Some notable special monotone functions are order embeddings (functions for
which x ≤ y if and only if f(x) ≤ f(y)) and order isomorphisms (surjective order embeddings).

22.5 Monotonicity in the context of search algorithms


In the context of search algorithms monotonicity (also called consistency) is a condition applied to heuristic functions.
A heuristic h(n) is monotonic if, for every node n and every successor n' of n generated by any action a, the estimated
cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching
the goal from n' ,

h(n) ≤ c(n, a, n′ ) + h(n′ ).

This is a form of triangle inequality, with n, n', and the goal Gn closest to n. Because every monotonic heuristic is
also admissible, monotonicity is a stricter requirement than admissibility. Some heuristic algorithms, such as A*,
can be shown to be optimal provided that the heuristic they use is monotonic.[2]
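The inequality h(n) ≤ c(n, a, n′) + h(n′) can be checked edge by edge on a finite graph; the toy graph, costs, and heuristic values below are hypothetical data chosen only to illustrate the check:

```python
def is_consistent(h, step_cost, successors, nodes):
    """Check h(n) <= c(n, n') + h(n') for every edge of a finite graph."""
    return all(h[n] <= step_cost[(n, s)] + h[s]
               for n in nodes for s in successors.get(n, []))

# A toy search graph: a -> b -> goal, plus a direct a -> goal edge.
nodes = ["a", "b", "goal"]
successors = {"a": ["b", "goal"], "b": ["goal"]}
step_cost = {("a", "b"): 1, ("b", "goal"): 1, ("a", "goal"): 3}

h = {"a": 2, "b": 1, "goal": 0}   # consistent: obeys the triangle inequality
assert is_consistent(h, step_cost, successors, nodes)

h_bad = {"a": 3, "b": 1, "goal": 0}   # violates h(a) <= c(a,b) + h(b)
assert not is_consistent(h_bad, step_cost, successors, nodes)
```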

22.6 Boolean functions


In Boolean algebra, a monotonic function is one such that for all ai and bi in {0,1}, if a1 ≤ b1 , a2 ≤ b2 , ..., an ≤ bn
(i.e. the Cartesian product {0, 1}n is ordered coordinatewise), then f(a1 , ..., an) ≤ f(b1 , ..., bn). In other words, a
Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can
only cause the output to switch from false to true and not from true to false. Graphically, this means that a Boolean
function is monotonic when in its Hasse diagram (dual of its Venn diagram), there is no 1 (red vertex) connected to
a higher 0 (white vertex).
The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs
(which may appear more than once) using only the operators and and or (in particular not is forbidden). For instance
“at least two of a,b,c hold” is a monotonic function of a,b,c, since it can be written for instance as ((a and b) or (a
and c) or (b and c)).
The number of such functions on n variables is known as the Dedekind number of n.
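The “switching one input from false to true never switches the output from true to false” criterion can be tested exhaustively for small n; the following Python sketch applies it to the majority example from the text:

```python
from itertools import product

def is_monotone(f, n):
    """f is monotone iff flipping any input 0 -> 1 never flips the output 1 -> 0."""
    for bits in product([0, 1], repeat=n):
        for i in range(n):
            if bits[i] == 0:
                flipped = bits[:i] + (1,) + bits[i + 1:]
                if f(*bits) > f(*flipped):
                    return False
    return True

# “At least two of a, b, c hold”, written with only AND and OR:
majority = lambda a, b, c: (a & b) | (a & c) | (b & c)
assert is_monotone(majority, 3)

# NOT is the canonical non-monotone function:
assert not is_monotone(lambda a: 1 - a, 1)
```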

22.7 See also

• Monotone cubic interpolation

• Pseudo-monotone operator

• Total monotonicity

22.8 Notes
[1] See the section on Cardinal Versus Ordinal Utility in Simon & Blume (1994).

[2] Conditions for optimality: Admissibility and consistency pg. 94-95 (Russell & Norvig 2010).

22.9 Bibliography
• Bartle, Robert G. (1976). The elements of real analysis (2nd ed.).

• Grätzer, George (1971). Lattice theory: first concepts and distributive lattices. ISBN 0-7167-0442-0.
• Pemberton, Malcolm; Rau, Nicholas (2001). Mathematics for economists: an introductory textbook. Manch-
ester University Press. ISBN 0-7190-3341-1.

• Renardy, Michael and Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in
Applied Mathematics 13 (2nd ed.). New York: Springer-Verlag. p. 356. ISBN 0-387-00444-0.

• Riesz, Frigyes and Béla Szőkefalvi-Nagy (1990). Functional Analysis. Courier Dover Publications. ISBN
978-0-486-66289-3.

• Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle
River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.

• Simon, Carl P.; Blume, Lawrence (April 1994). Mathematics for Economists (1st ed.). ISBN 978-0-
393-95733-4. (Definition 9.31)

22.10 External links


• Hazewinkel, Michiel, ed. (2001), “Monotone function”, Encyclopedia of Mathematics, Springer, ISBN 978-1-
55608-010-4

• Convergence of a Monotonic Sequence by Anik Debnath and Thomas Roxlo (The Harker School), Wolfram
Demonstrations Project.

• Weisstein, Eric W., “Monotonic Function”, MathWorld.


Chapter 23

n-ary group

In mathematics, and in particular universal algebra, the concept of n-ary group (also called n-group or multiary
group) is a generalization of the concept of group to a set G with an n-ary operation instead of a binary operation.[1]
By an n-ary operation is meant any set map f: Gn → G from the n-th Cartesian power of G to G. The axioms for
an n-ary group are defined in such a way that they reduce to those of a group in the case n = 2. The earliest work
on these structures was done in 1904 by Kasner and in 1928 by Dörnte;[2] the first systematic account of (what were
then called) polyadic groups was given in 1940 by Emil Leon Post in a famous 143-page paper in the Transactions
of the American Mathematical Society.[3]

23.1 Axioms

23.1.1 Associativity
The easiest axiom to generalize is the associative law. Ternary associativity is the polynomial identity (abc)de =
a(bcd)e = ab(cde), i.e. the equality of the three possible bracketings of the string abcde in which any three consecutive
symbols are bracketed. (Here it is understood that the equations hold for arbitrary choices of elements a,b,c,d,e in
G.) In general, n-ary associativity is the equality of the n possible bracketings of a string consisting of n+(n−1) =
2n−1 distinct symbols with any n consecutive symbols bracketed. A set G which is closed under an associative n-
ary operation is called an n-ary semigroup. A set G which is closed under any (not necessarily associative) n-ary
operation is called an n-ary groupoid.

23.1.2 Inverses / unique solutions


The inverse axiom is generalized as follows: in the case of binary operations the existence of an inverse means ax =
b has a unique solution for x, and likewise xa = b has a unique solution. In the ternary case we generalize this to abx
= c, axb = c and xab = c each having unique solutions, and the n-ary case follows a similar pattern of existence of
unique solutions and we get an n-ary quasigroup.

23.1.3 Definition of n-ary group


An n-ary group is an n-ary semigroup which is also an n-ary quasigroup.

23.1.4 Identity / neutral elements


In the 2-ary case, i.e. for an ordinary group, the existence of an identity element is a consequence of the associativity
and inverse axioms, however in n-ary groups for n ≥ 3 there can be zero, one, or many identity elements.
An n-ary groupoid (G, ƒ) with ƒ(x1 , x2 , . . . , xn ) = x1 ◦ x2 ◦ . . . ◦ xn , where (G, ◦) is a group, is called reducible or derived from
the group (G, ◦). In 1928 Dörnte [2] published the first main results: An n-ary groupoid which is reducible is an n-ary
group, however for all n > 2 there exist n-ary groups which are not reducible. In some n-ary groups there exists an


element e (called an n-ary identity or neutral element) such that any string of n-elements consisting of all e's, apart
from one place, is mapped to the element at that place. E.g., in a quaternary group with identity e, eeae = a for every
a.
An n-ary group containing a neutral element is reducible. Thus, an n-ary group that is not reducible does not contain
such elements. There exist n-ary groups with more than one neutral element. If the set of all neutral elements of an
n-ary group is non-empty it forms an n-ary subgroup.[4]
Some authors include an identity in the definition of an n-ary group but as mentioned above such n-ary operations
are just repeated binary operations. Groups with intrinsically n-ary operations do not have an identity element.[5]

23.1.5 Weaker axioms


The axioms of associativity and unique solutions in the definition of an n-ary group are stronger than they need to be.
Under the assumption of n-ary associativity it suffices to postulate the existence of the solution of equations with the
unknown at the start or end of the string, or at one place other than the ends; e.g., in the 6-ary case, xabcde=f and
abcdex=f, or an expression like abxcde=f. Then it can be proved that the equation has a unique solution for x in any
place in the string.[3] The associativity axiom can also be given in a weaker form - see page 17 of “On some old and
new problems in n-ary groups”.[1]

23.2 Example
The following is an example of a three element ternary group, one of four such groups[6]

aaa = a, aab = b, aac = c, aba = c, abb = a, abc = b, aca = b, acb = c, acc = a,

baa = b, bab = c, bac = a, bba = a, bbb = b, bbc = c, bca = c, bcb = a, bcc = b,


caa = c, cab = a, cac = b, cba = b, cbb = c, cbc = a, cca = a, ccb = b, ccc = c.
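The table above agrees with the operation f(x, y, z) = x − y + z (mod 3) under the identification a = 0, b = 1, c = 2 (an observation made here for illustration; the source does not state it). That makes it easy to verify ternary associativity and unique solvability by brute force in Python:

```python
from itertools import product

G = "abc"
idx = {g: i for i, g in enumerate(G)}

# f matches the multiplication table above under a=0, b=1, c=2.
f = lambda x, y, z: G[(idx[x] - idx[y] + idx[z]) % 3]
assert f("a", "b", "a") == "c" and f("c", "a", "c") == "b"  # spot-check rows

# Ternary associativity: all three bracketings of abcde agree.
for a, b, c, d, e in product(G, repeat=5):
    assert f(f(a, b, c), d, e) == f(a, f(b, c, d), e) == f(a, b, f(c, d, e))

# Unique solutions: for fixed a, b, each of xab, axb, abx ranges over all of G.
for a, b in product(G, repeat=2):
    assert {f(x, a, b) for x in G} == set(G)
    assert {f(a, x, b) for x in G} == set(G)
    assert {f(a, b, x) for x in G} == set(G)
```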

23.3 See also


• Universal algebra

23.4 References
[1] Dudek, W.A. (2001), “On some old and new problems in n-ary groups”, Quasigroups and Related Systems 8: 15–36.

[2] W. Dörnte, Untersuchungen über einen verallgemeinerten Gruppenbegriff, Mathematische Zeitschrift, vol. 29 (1928), pp.
1-19.

[3] E. L. Post, Polyadic groups, Transactions of the American Mathematical Society 48 (1940), 208–350.

[4] Wiesław A. Dudek, Remarks to Głazek’s results on n-ary groups, Discussiones Mathematicae. General Algebra and Appli-
cations 27 (2007), 199–233.

[5] Wiesław A. Dudek and Kazimierz Głazek, Around the Hosszú-Gluskin theorem for n-ary groups, Discrete Mathematics
308 (2008), 486–4876.

[6] http://home.comcast.net/~tamivox/dave/math/tern_quasi/assoc1234.html

23.5 Further reading


• S. A. Rusakov: Some applications of n-ary group theory, (Russian), Belaruskaya navuka, Minsk 1998.
Chapter 24

Negation

For other uses, see Negation (disambiguation) and NOT gate.

In logic, negation, also called logical complement, is an operation that takes a proposition p to another proposition
“not p", written ¬p, which is interpreted intuitively as being true when p is false and false when p is true. Negation
is thus a unary (single-argument) logical connective. It may be applied as an operation on propositions, truth values,
or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes
truth to falsity and vice versa. In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation,
the negation of a proposition p is the proposition whose proofs are the refutations of p.

24.1 Definition

No agreement exists as to the possibility of defining negation, as to its logical status, function, and meaning, as to its
field of applicability..., and as to the interpretation of the negative judgment, (F.H. Heinemann 1944).[1]
Classical negation is an operation on one logical value, typically the value of a proposition, that produces a value
of true when its operand is false and a value of false when its operand is true. So, if statement A is true, then ¬A
(pronounced “not A”) would therefore be false; and conversely, if ¬A is true, then A would be false.
The truth table of ¬p is as follows:
Classical negation can be defined in terms of other logical operations. For example, ¬p can be defined as p → F, where
"→" is logical consequence and F is absolute falsehood. Conversely, one can define F as p & ¬p for any proposition
p, where "&" is logical conjunction. The idea here is that any contradiction is false. While these ideas work in both
classical and intuitionistic logic, they do not work in Brazilian logic, where contradictions are not necessarily false.
But in classical logic, we get a further identity: p → q can be defined as ¬p ∨ q, where "∨" is logical disjunction: “not
p, or q".
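The definition of ¬p as p → F can be checked against the truth table directly; a minimal Python sketch, using `False` for the absolute falsehood F:

```python
F = False   # absolute falsehood

def implies(p, q):
    """Material conditional ¬p ∨ q, standing in for "→" here."""
    return (not p) or q

def neg(p):
    """Classical negation defined as p → F."""
    return implies(p, F)

for p in (True, False):
    assert neg(p) == (not p)   # matches the truth table of ¬p
    assert not (p and neg(p))  # any contradiction p ∧ ¬p is false
```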
Algebraically, classical negation corresponds to complementation in a Boolean algebra, and intuitionistic negation to
pseudocomplementation in a Heyting algebra. These algebras provide a semantics for classical and intuitionistic logic
respectively.

24.2 Notation

The negation of a proposition p is notated in different ways in various contexts of discussion and fields of application.
Among these variants are the following:
In set theory \ is also used to indicate 'not member of': U \ A is the set of all members of U that are not members of
A.
No matter how it is notated or symbolized, the negation ¬p / −p can be read as “it is not the case that p", “not that p",
or usually more simply (though not grammatically) as “not p".

174 CHAPTER 24. NEGATION

24.3 Properties

24.3.1 Double negation


Within a system of classical logic, double negation, that is, the negation of the negation of a proposition p, is logically
equivalent to p. Expressed in symbolic terms, ¬¬p ⇔ p. In intuitionistic logic, a proposition implies its double negation
but not conversely. This marks one important difference between classical and intuitionistic negation. Algebraically,
classical negation is called an involution of period two.
However, in intuitionistic logic we do have the equivalence of ¬¬¬p and ¬p. Moreover, in the propositional case, a
sentence is classically provable if its double negation is intuitionistically provable. This result is known as Glivenko’s
theorem.

24.3.2 Distributivity
De Morgan’s laws provide a way of distributing negation over disjunction and conjunction:

¬(a ∨ b) ≡ (¬a ∧ ¬b) , and


¬(a ∧ b) ≡ (¬a ∨ ¬b) .
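Both laws can be confirmed by exhausting the four truth-value pairs; a minimal Python sketch:

```python
# Check De Morgan's laws over all four pairs of truth values.
def de_morgan_holds():
    for a in (False, True):
        for b in (False, True):
            if (not (a or b)) != ((not a) and (not b)):
                return False
            if (not (a and b)) != ((not a) or (not b)):
                return False
    return True

assert de_morgan_holds()
```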

24.3.3 Linearity
In Boolean algebra, a linear function is one such that:
There exist a0, a1, ..., an ∈ {0,1} such that f(b1, ..., bn) = a0 ⊕ (a1 ∧ b1) ⊕ ... ⊕ (an ∧ bn), for all b1, ..., bn ∈ {0,1}.
Another way to express this is that each variable always makes a difference in the truth-value of the operation or it
never makes a difference. Negation is a linear logical operator.
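For negation the linear form has coefficients a0 = 1 and a1 = 1, i.e. NOT b equals 1 XOR b over {0, 1}; a quick sketch (Python used for illustration):

```python
def neg(b):                 # logical NOT on {0, 1}
    return 1 - b

a0, a1 = 1, 1               # coefficients of negation's linear (XOR) form
for b in (0, 1):
    assert neg(b) == a0 ^ (a1 & b)
```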

24.3.4 Self dual


In Boolean algebra a self dual function is one such that:
f(a1, ..., an) = ~f(~a1, ..., ~an) for all a1, ..., an ∈ {0,1}. Negation is a self dual logical operator.
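Self-duality of negation amounts to the identity ¬a = ¬(¬¬a), checked here directly (Python for illustration):

```python
def neg(a):                 # logical NOT on {0, 1}
    return 1 - a

# Self-duality: f(a) == NOT f(NOT a), with f = neg.
for a in (0, 1):
    assert neg(a) == 1 - neg(1 - a)
```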

24.4 Rules of inference


There are a number of equivalent ways to formulate rules for negation. One usual way to formulate classical negation
in a natural deduction setting is to take as primitive rules of inference negation introduction (from a derivation of p to
both q and ¬q, infer ¬p; this rule also being called reductio ad absurdum), negation elimination (from p and ¬p infer
q; this rule also being called ex falso quodlibet), and double negation elimination (from ¬¬p infer p). One obtains the
rules for intuitionistic negation the same way but by excluding double negation elimination.
Negation introduction states that if an absurdity can be drawn as conclusion from p then p must not be the case (i.e.
p is false (classically) or refutable (intuitionistically) or etc.). Negation elimination states that anything follows from
an absurdity. Sometimes negation elimination is formulated using a primitive absurdity sign ⊥. In this case the rule
says that from p and ¬p follows an absurdity. Together with double negation elimination one may infer our originally
formulated rule, namely that anything follows from an absurdity.
Typically the intuitionistic negation ¬p of p is defined as p→⊥. Then negation introduction and elimination are just
special cases of implication introduction (conditional proof) and elimination (modus ponens). In this case one must
also add as a primitive rule ex falso quodlibet.

24.5 Programming
As in mathematics, negation is used in computer science to construct logical statements.

if (!(r == t)) { /*...statements executed when r does NOT equal t...*/ }

The "!" signifies logical NOT in B, C, and languages with a C-inspired syntax such as C++, Java, JavaScript, Perl,
and PHP. “NOT” is the operator used in ALGOL 60, BASIC, and languages with an ALGOL- or BASIC-inspired
syntax such as Pascal, Ada, Eiffel and Seed7. Some languages (C++, Perl, etc.) provide more than one operator for
negation. A few languages like PL/I and Ratfor use ¬ for negation. Some modern computers and operating systems
will display ¬ as ! on files encoded in ASCII. Most modern languages allow the above statement to be shortened from
if (!(r == t)) to if (r != t), which sometimes yields faster programs when the compiler/interpreter is unable to
optimize the longer form.
In computer science there is also bitwise negation. This takes the value given and switches all the binary 1s to 0s
and 0s to 1s; see bitwise operation. It is used to form the ones’ complement, written "~" in C or C++, and the two’s
complement, usually simplified to "-" (the negative sign), since taking the two’s complement of a number is equivalent
to taking its arithmetic negative. Bitwise negation produces the mathematical complement of a value: a value and its
bitwise complement add up to an all-ones word.
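The bit-flipping behaviour is easy to demonstrate. In Python integers are unbounded, so the result of ~ must be masked to a fixed width to see the ones’ complement of that width; a sketch (this shows Python semantics, not C):

```python
x = 0b00101100                      # 44
assert ~x == -x - 1                 # two's-complement identity: ~44 == -45
assert (~x) & 0xFF == 0b11010011    # 8-bit ones' complement of x
assert x + ((~x) & 0xFF) == 0xFF    # value plus complement is all ones
```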
To get the absolute (positive) value of a given integer, the following would work, since the "-" changes it from
negative to positive (it is negative because “x < 0” yields true):
unsigned int abs(int x) { if (x < 0) return -x; else return x; }

To demonstrate logical negation:


unsigned int abs(int x) { if (!(x < 0)) return x; else return -x; }

Inverting the condition and reversing the outcomes produces code that is logically equivalent to the original code, i.e.
will have identical results for any input (note that depending on the compiler used, the actual instructions performed
by the computer may differ).
This convention occasionally surfaces in written speech, as computer-related slang for not. The phrase !voting, for
example, means “not voting”.

24.6 Kripke semantics


In Kripke semantics where the semantic values of formulae are sets of possible worlds, negation can be taken to mean
set-theoretic complementation. (See also possible world semantics.)

24.7 See also


• Logical conjunction

• Logical disjunction

• NOT gate

• Bitwise NOT

• Ampheck

• Apophasis

• Cyclic negation

• Double negative elimination

• Grammatical polarity

• Negation (linguistics)

• Negation as failure

• Square of opposition

• Binary opposition

24.8 References
[1] Horn, Laurence R (2001). “Chapter 1”. A Natural History of Negation (PDF). Stanford University: CSLI Publi-
cations. p. 1. ISBN 1-57586-336-7. Retrieved 29 Dec 2013.

24.9 Further reading


• Gabbay, Dov, and Wansing, Heinrich, eds., 1999. What is Negation?, Kluwer.

• Horn, L., 2001. A Natural History of Negation, University of Chicago Press.


• G. H. von Wright, 1953–59, “On the Logic of Negation”, Commentationes Physico-Mathematicae 22.

• Wansing, Heinrich, 2001, “Negation”, in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic, Blackwell.

• Marco Tettamanti, Rosa Manenti, Pasquale A. Della Rosa, Andrea Falini, Daniela Perani, Stefano F. Cappa
and Andrea Moro (2008). “Negation in the brain: Modulating action representation”, NeuroImage Volume
43, Issue 2, 1 November 2008, pages 358–367, http://dx.doi.org/10.1016/j.neuroimage.2008.08.004

24.10 External links


• Negation entry by Laurence R. Horn & Heinrich Wansing in the Stanford Encyclopedia of Philosophy

• Hazewinkel, Michiel, ed. (2001), “Negation”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-
010-4

• NOT, on MathWorld
Chapter 25

NOR gate

This article is about NOR in the sense of an electronic logic gate (e.g. CMOS 4001). For NOR in the purely logical
sense, see Logical NOR. For other uses of NOR or Nor, see Nor (disambiguation).
“4001” redirects here. For the year, see 5th millennium.

The NOR gate is a digital logic gate that implements logical NOR; it behaves according to the truth table to the
right. A HIGH output (1) results if both inputs to the gate are LOW (0); if one or both inputs are HIGH (1), a LOW
output (0) results. NOR is the negation of the OR operator. It can also be seen as an AND gate with all
the inputs inverted. NOR is a functionally complete operation—NOR gates can be combined to generate any other
logical function. By contrast, the OR operator is monotonic as it can only change LOW to HIGH but not vice versa.
In most, but not all, circuit implementations (including CMOS and TTL), the negation comes for free. In such logic
families, OR is the more complicated operation; it may use a NOR followed by a NOT. A significant exception is
some forms of the domino logic family.
The original Apollo Guidance Computer used 4,100 ICs, each one containing only a single 3-input NOR gate.
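The functional-completeness claim can be sketched in code: NOT, OR, and AND below are each derived from a single nor primitive and checked by exhaustive truth tables (Python used for illustration; the helper names are our own):

```python
def nor(a, b):
    return 0 if (a or b) else 1

def not_(a):
    return nor(a, a)              # NOT x = NOR(x, x)

def or_(a, b):
    return not_(nor(a, b))        # OR = NOR followed by NOT

def and_(a, b):
    return nor(not_(a), not_(b))  # AND via De Morgan: NOR of the inverted inputs

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert or_(a, b) == (a | b)
        assert and_(a, b) == (a & b)
```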

25.1 Symbols
There are three symbols for NOR gates: the American (ANSI or 'military') symbol and the IEC ('European' or
'rectangular') symbol, as well as the deprecated DIN symbol. For more information see Logic Gate Symbols. The
ANSI symbol for the NOR gate is a standard OR gate with an inversion bubble connected.

25.2 Hardware description and pinout


NOR gates are basic logic gates, and as such they are available in TTL and CMOS ICs. The standard 4000-series
CMOS IC is the 4001, which includes four independent, two-input NOR gates. The pinout diagram is as follows:

25.2.1 Availability

These devices are available from most semiconductor manufacturers such as Fairchild Semiconductor, Philips or
Texas Instruments. These are usually available in both through-hole DIP and SOIC format. Datasheets are readily
available in most datasheet databases.
In the popular CMOS and TTL logic families, NOR gates with up to 8 inputs are available:

• CMOS

• 4001: Quad 2-input NOR gate


• 4025: Triple 3-input NOR gate


[Figure: a full adder built entirely from NOR gates, with inputs A, B, C and outputs CARRY and SUM]

• 4002: Dual 4-input NOR gate


• 4078: Single 8-input NOR gate

• TTL

• 7402: Quad 2-input NOR gate



• 7427: Triple 3-input NOR gate


• 7425: Dual 4-input NOR gate (with strobe, obsolete)
• 74260: Dual 5-Input NOR Gate
• 744078: Single 8-input NOR Gate

In the older RTL and ECL families, NOR gates were efficient and most commonly used.

25.3 Implementations

The diagrams above show the construction of a 2-input NOR gate using NMOS logic circuitry. If either of the inputs
are high, the corresponding N-channel MOSFET is turned on and the output is pulled low; otherwise the output is
pulled high through the pull-up resistor.
The diagram below shows a 2-input NOR gate using CMOS technology. The diodes and resistors on the inputs are
to protect the CMOS components from damage due to electrostatic discharge (ESD) and play no part in the logical
function of the circuit.

Unbuffered CMOS two input NOR gate



25.3.1 Alternatives
If no specific NOR gates are available, one can be made from NAND gates, because NAND and NOR gates are
considered the “universal gates”, meaning that they can be used to make all the others.[1]
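As a sketch of the NAND-only construction (four NAND gates per NOR; Python used purely for illustration):

```python
def nand(a, b):
    return 0 if (a and b) else 1

# NOR from NANDs only:
#   NOT x     = NAND(x, x)
#   OR(a, b)  = NAND(NOT a, NOT b)
#   NOR(a, b) = NOT OR(a, b)        -> four NAND gates in total
def nor_from_nand(a, b):
    na, nb = nand(a, a), nand(b, b)
    o = nand(na, nb)
    return nand(o, o)

for a in (0, 1):
    for b in (0, 1):
        assert nor_from_nand(a, b) == (0 if (a or b) else 1)
```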

25.4 See also


• AND gate
• OR gate

• NOT gate
• NAND gate

• XOR gate
• XNOR gate

• Boolean algebra (logic)


• Logic gates

• NOR logic

25.5 References
[1] Mano, M. Morris and Charles R. Kime. Logic and Computer Design Fundamentals, Third Edition. Prentice Hall, 2004. p.
73.

25.6 External links


• Interactive NOR gate, Displays the logic simulation of the NOR Gate.
Chapter 26

Post canonical system

A Post canonical system, as created by Emil Post, is a string-manipulation system that starts with finitely many
strings and repeatedly transforms them by applying a finite set of specified rules of a certain form, thus generating a
formal language. Today they are mainly of historical relevance because every Post canonical system can be reduced to
a string rewriting system (semi-Thue system), which is a simpler formulation. Both formalisms are Turing complete.

26.1 Definition
A Post canonical system is a triplet (A,I,R), where

• A is a finite alphabet, and finite (possibly empty) strings on A are called words.
• I is a finite set of initial words.
• R is a finite set of string-transforming rules (called production rules), each rule being of the following form:

g10 $11 g11 $12 g12 . . . $1m1 g1m1
g20 $21 g21 $22 g22 . . . $2m2 g2m2
...
gk0 $k1 gk1 $k2 gk2 . . . $kmk gkmk
→ h0 $′1 h1 $′2 h2 . . . $′n hn


where each g and h is a specified fixed word, and each $ and $' is a variable standing for an arbitrary word. The
strings before and after the arrow in a production rule are called the rule’s antecedents and consequent, respectively.
It is required that each $' in the consequent be one of the $s in the antecedents of that rule, and that each antecedent
and consequent contain at least one variable.
In many contexts, each production rule has only one antecedent, thus taking the simpler form

g0 $1 g1 $2 g2 . . . $m gm → h0 $′1 h1 $′2 h2 . . . $′n hn


The formal language generated by a Post canonical system is the set whose elements are the initial words together with
all words obtainable from them by repeated application of the production rules. Such sets are precisely the recursively
enumerable languages.

26.1.1 Example (well-formed bracket expressions)


Alphabet: {[, ]}
Initial word: []
Production rules:
(1) $ → [$]
(2) $ → $$
(3) $1[]$2 → $1$2
Derivation of a few words in the language of well-formed bracket expressions:
[]              initial word
[][]            by (2)
[[][]]          by (1)
[[][]][[][]]    by (2)
[[][]][][[][]]  by (3)
...
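As a sketch, rules (1) and (2) can be simulated directly as string functions, and a simple balance check confirms that each derived word is a well-formed bracket expression (Python used purely for illustration; function names are our own):

```python
def rule1(w):   # $ -> [$]
    return "[" + w + "]"

def rule2(w):   # $ -> $$
    return w + w

def balanced(w):
    # A word is well formed iff the nesting depth never goes negative
    # and ends at zero.
    depth = 0
    for c in w:
        depth += 1 if c == "[" else -1
        if depth < 0:
            return False
    return depth == 0

steps = ["[]"]                    # initial word
steps.append(rule2(steps[-1]))    # [][]
steps.append(rule1(steps[-1]))    # [[][]]
steps.append(rule2(steps[-1]))    # [[][]][[][]]
assert steps == ["[]", "[][]", "[[][]]", "[[][]][[][]]"]
assert all(balanced(s) for s in steps)
```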


26.2 Normal-form theorem


A Post canonical system is said to be in normal form if it has only one initial word and every production rule is of the
simple form

g$ → $h

Post (1943) proved the remarkable Normal-form Theorem, which applies to the most general type of Post canonical
system:

Given any Post canonical system on an alphabet A, a Post canonical system in normal form can be
constructed from it, possibly enlarging the alphabet, such that the set of words involving only letters of
A that are generated by the normal-form system is exactly the set of words generated by the original
system.

Tag systems, which comprise a universal computational model, are notable examples of Post normal-form systems,
being also monogenic. (A canonical system is said to be monogenic if, given any string, at most one new string can be
produced from it in one step — i.e., the system is deterministic.)

26.3 String rewriting systems, Type-0 formal grammars


A string rewriting system is a special type of Post canonical system with a single initial word, and the productions are
each of the form

P1 gP2 → P1 hP2

That is, each production rule is a simple substitution rule, often written in the form g → h. It has been proved that
any Post canonical system is reducible to such a substitution system, which, as a formal grammar, is also called a
phrase-structure grammar, or a type-0 grammar in the Chomsky hierarchy.
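Each production g → h of such a substitution system acts by replacing one occurrence of g inside an arbitrary context P1 g P2; a minimal sketch of a single rewriting step (Python for illustration, leftmost occurrence chosen for determinism):

```python
# One rewriting step of a semi-Thue (string rewriting) system:
# replace a single occurrence of g by h inside an arbitrary context.
def rewrite_once(word, g, h):
    i = word.find(g)          # leftmost occurrence, for determinism
    if i < 0:
        return None           # the rule is not applicable
    return word[:i] + h + word[i + len(g):]

assert rewrite_once("aabbb", "ab", "ba") == "ababb"
assert rewrite_once("ccc", "ab", "x") is None
```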

26.4 References
• Emil Post, “Formal Reductions of the General Combinatorial Decision Problem,” American Journal of Math-
ematics 65 (2): 197-215, 1943.
• Marvin Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, Inc., N.J., 1967.
Chapter 27

Post correspondence problem

Not to be confused with the other Post’s problem on the existence of incomparable r.e. Turing degrees.
Not to be confused with PCP theorem.

The Post correspondence problem is an undecidable decision problem that was introduced by Emil Post in 1946.[1]
Because it is simpler than the halting problem and the Entscheidungsproblem it is often used in proofs of undecid-
ability.

27.1 Definition of the problem


The input of the problem consists of two finite lists α1 , . . . , αN and β1 , . . . , βN of words over some alphabet A having
at least two symbols. A solution to this problem is a sequence of indices (ik )1≤k≤K with K ≥ 1 and 1 ≤ ik ≤ N
for all k , such that

αi1 . . . αiK = βi1 . . . βiK .


The decision problem then is to decide whether such a solution exists or not.

27.2 Example instances of the problem

27.2.1 Example 1
Consider the following two lists: α1 = a, α2 = ab, α3 = bba and β1 = baa, β2 = aa, β3 = bb.
A solution to this problem would be the sequence (3, 2, 3, 1), because

α3 α2 α3 α1 = bba + ab + bba + a = bbaabbbaa = bb + aa + bb + baa = β3 β2 β3 β1 .


Furthermore, since (3, 2, 3, 1) is a solution, so are all of its “repetitions”, such as (3, 2, 3, 1, 3, 2, 3, 1), etc.; that is,
when a solution exists, there are infinitely many solutions of this repetitive kind.
However, if the two lists had consisted of only α2 , α3 and β2 , β3 from those sets, then there would have been no
solution (the last letter of any such α string is not the same as the letter before it, whereas β only constructs pairs of
the same letter).
A convenient way to view an instance of a Post correspondence problem is as a collection of blocks of the form
there being an unlimited supply of each type of block. Thus the above example is viewed as
where the solver has an endless supply of each of these three block types. A solution corresponds to some way of
laying blocks next to each other so that the string in the top cells corresponds to the string in the bottom cells. Then
the solution to the above example corresponds to:
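The solution can be verified mechanically: concatenating the α-words and the β-words along the index sequence (3, 2, 3, 1) yields the same string, and so does any repetition of the sequence (Python used for illustration):

```python
# The two lists from Example 1.
alpha = {1: "a", 2: "ab", 3: "bba"}
beta  = {1: "baa", 2: "aa", 3: "bb"}

def concat(words, idx):
    return "".join(words[i] for i in idx)

sol = (3, 2, 3, 1)
assert concat(alpha, sol) == concat(beta, sol) == "bbaabbbaa"
# Any repetition of a solution is again a solution.
assert concat(alpha, sol * 2) == concat(beta, sol * 2)
```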


27.2.2 Example 2

Again using blocks to represent an instance of the problem, the following is an example that has infinitely many
solutions in addition to the kind obtained by merely “repeating” a solution.
In this instance, every sequence of the form (1, 2, 2, . . ., 2, 3) is a solution (in addition to all their repetitions):

27.3 Proof sketch of undecidability

The most common proof for the undecidability of PCP describes an instance of PCP that can simulate the computation
of a Turing machine on a particular input. A match will occur if and only if the input would be accepted by the Turing
machine. Because deciding if a Turing machine will accept an input is a basic undecidable problem, PCP cannot
be decidable either. The following discussion is based on Michael Sipser's textbook Introduction to the Theory of
Computation.[2]
In more detail, the idea is that the string along the top and bottom will be a computation history of the Turing
machine’s computation. This means it will list a string describing the initial state, followed by a string describing the
next state, and so on until it ends with a string describing an accepting state. The state strings are separated by some
separator symbol (usually written #). According to the definition of a Turing machine, the full state of the machine
consists of three parts:

• The current contents of the tape.

• The current state of the finite state machine which operates the tape head.

• The current position of the tape head on the tape.

Although the tape has infinitely many cells, only some finite prefix of these will be non-blank. We write these down
as part of our state. To describe the state of the finite control, we create new symbols, labelled q1 through qk, for each
of the finite state machine’s k states. We insert the correct symbol into the string describing the tape’s contents at the
position of the tape head, thereby indicating both the tape head’s position and the current state of the finite control.
For the alphabet {0,1}, a typical state might look something like:
101101110q7 00110.
A simple computation history would then look something like this:
q0 101#1q4 01#11q2 1#1q8 10.
We start out with this block, where x is the input string and q0 is the start state:
The top starts out “lagging” the bottom by one state, and keeps this lag until the very end stage. Next, for each symbol
a in the tape alphabet, as well as #, we have a “copy” block, which copies it unmodified from one state to the next:
We also have a block for each position transition the machine can make, showing how the tape head moves, how the
finite state changes, and what happens to the surrounding symbols. For example, here the tape head is over a 0 in
state 4, and then writes a 1 and moves right, changing to state 7:
Finally, when the top reaches an accepting state, the bottom needs a chance to finally catch up to complete the match.
To allow this, we extend the computation so that once an accepting state is reached, each subsequent machine step
will cause a symbol near the tape head to vanish, one at a time, until none remain. If qf is an accepting state, we can
represent this with the following transition blocks, where a is a tape alphabet symbol:
There are a number of details to work out, such as dealing with boundaries between states, making sure that our initial
tile goes first in the match, and so on, but this shows the general idea of how a static tile puzzle can simulate a Turing
machine computation.
The previous example
q0 101#1q4 01#11q2 1#1q8 10.
is represented as the following solution to the Post correspondence problem:

27.4 Variants
Many variants of PCP have been considered. One reason is that, when one tries to prove undecidability of some new
problem by reducing from PCP, it often happens that the first reduction one finds is not from PCP itself but from an
apparently weaker version.

• The problem may be phrased in terms of monoid morphisms f, g from the free monoid B∗ to the free monoid
A∗ where B is of size n. The problem is to determine whether there is a word w in B+ such that f(w) = g(w).[3]

• The condition that the alphabet A have at least two symbols is required since the problem is decidable if A has
only one symbol.

• A simple variant is to fix n, the number of tiles. This problem is decidable if n ≤ 2, but remains undecidable
for n ≥ 5. It is unknown whether the problem is decidable for 3 ≤ n ≤ 4.[4]

• The circular Post correspondence problem asks whether indexes i1 , i2 , . . . can be found such that αi1 · · · αik
and βi1 · · · βik are conjugate words, i.e., they are equal modulo rotation. This variant is undecidable.[5]

• One of the most important variants of PCP is the bounded Post correspondence problem, which asks if we
can find a match using no more than k tiles, including repeated tiles. A brute force search solves the problem
in time O(2k ), but this may be difficult to improve upon, since the problem is NP-complete.[6] Unlike some
NP-complete problems like the boolean satisfiability problem, a small variation of the bounded problem was
also shown to be complete for RNP, which means that it remains hard even if the inputs are chosen at random
(it is hard on average over uniformly distributed inputs).[7]

• Another variant of PCP is called the marked Post Correspondence Problem, in which each ui must begin
with a different symbol, and each vi must also begin with a different symbol. Halava, Hirvensalo, and de Wolf
showed that this variation is decidable in exponential time. Moreover, they showed that if this requirement
is slightly loosened so that only one of the first two characters need to differ (the so-called 2-marked Post
Correspondence Problem), the problem becomes undecidable again.[8]

• The Post Embedding Problem is another variant where one looks for indexes i1 , i2 , . . . such that αi1 · · · αik
is a (scattered) subword of βi1 · · · βik . This variant is easily decidable since, when some solutions exist, in
particular a length-one solution exists. More interesting is the Regular Post Embedding Problem, a further
variant where one looks for solutions that belong to a given regular language (submitted, e.g., under the form
of a regular expression on the set {1, . . . , N } ). The Regular Post Embedding Problem is still decidable but,
because of the added regular constraint, it has a very high complexity that dominates every multiply recursive
function.[9]

• The Identity Correspondence Problem (ICP) asks whether a finite set of pairs of words (over a group alpha-
bet) can generate an identity pair by a sequence of concatenations. The problem is undecidable and equivalent
to the following Group Problem: is the semigroup generated by a finite set of pairs of words (over a group
alphabet) a group.[10]
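The bounded variant admits an obvious brute-force decision procedure, exponential in k since it tries every index sequence of length at most k; a sketch (function names are our own, using the lists from Example 1):

```python
from itertools import product

def bounded_pcp(alpha, beta, k):
    """Return a solution of length <= k as a list of 0-based indices,
    or None if no such solution exists."""
    n = len(alpha)
    for length in range(1, k + 1):
        for idx in product(range(n), repeat=length):
            if "".join(alpha[i] for i in idx) == "".join(beta[i] for i in idx):
                return list(idx)
    return None

alpha = ["a", "ab", "bba"]
beta  = ["baa", "aa", "bb"]
assert bounded_pcp(alpha, beta, 4) == [2, 1, 2, 0]   # (3, 2, 3, 1), 0-indexed
# Restricted to alpha2/alpha3 and beta2/beta3 there is no solution (cf. Example 1).
assert bounded_pcp(alpha[1:], beta[1:], 6) is None
```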

27.5 References
[1] E. L. Post (1946). “A variant of a recursively unsolvable problem” (PDF). Bull. Amer. Math. Soc. 52.

[2] Michael Sipser (2005). “A Simple Undecidable Problem”. Introduction to the Theory of Computation (2nd ed.). Thomson
Course Technology. pp. 199–205. ISBN 0-534-95097-3.

[3] Salomaa, Arto (1981). Jewels of Formal Language Theory. Pitman Publishing. pp. 74–75. ISBN 0-273-08522-0. Zbl
0487.68064.

[4] T. Neary (2015). “Undecidability in Binary Tag Systems and the Post Correspondence Problem for Five Pairs of Words”.
In Ernst W. Mayr and Nicolas Ollinger. 32nd International Symposium on Theoretical Aspects of Computer Science (STACS
2015). STACS 2015 30. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. pp. 649–661. doi:10.4230/LIPIcs.STACS.2015.649.

[5] K. Ruohonen (1983). “On some variants of Post’s correspondence problem”. Acta Informatica (Springer) 19 (4): 357–367.
doi:10.1007/BF00290732.

[6] Michael R. Garey; David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness.
W.H. Freeman. p. 228. ISBN 0-7167-1045-5.

[7] Y. Gurevich (1991). “Average case completeness”. J. Comp. Sys. Sci. (Elsevier Science) 42 (3): 346–398. doi:10.1016/0022-
0000(91)90007-R.

[8] V. Halava; M. Hirvensalo and R. de Wolf (2001). “Marked PCP is decidable”. Theor. Comp. Sci. (Elsevier Science) 255:
193–204. doi:10.1016/S0304-3975(99)00163-2.

[9] P. Chambart; Ph. Schnoebelen (2007). “Post embedding problem is not primitive recursive, with applications to channel
systems”. Lecture Notes in Computer Science. Lecture Notes in Computer Science (Springer) 4855: 265–276. doi:10.1007/978-
3-540-77050-3_22. ISBN 978-3-540-77049-7.

[10] Paul C. Bell; Igor Potapov (2010). “On the Undecidability of the Identity Correspondence Problem and its Applications
for Word and Matrix Semigroups”. International Journal of Foundations of Computer Science (World Scientific) 21.6:
963–978. doi:10.1142/S0129054110007660.

27.6 External links


• Eitan M. Gurari. An Introduction to the Theory of Computation, Chapter 4, Post’s Correspondence Problem.
A proof of the undecidability of PCP based on Chomsky type-0 grammars.

• Online PHP Based PCP Solver


• PCP AT HOME

• PCP - a nice problem


Chapter 28

Post’s inversion formula

Post’s inversion formula for Laplace transforms, named after Emil Post, is a simple-looking but usually impractical
formula for evaluating an inverse Laplace transform.
The statement of the formula is as follows: Let f(t) be a continuous function on the interval [0, ∞) of exponential
order, i.e.

sup_{t>0} |f(t)| / e^{bt} < ∞

for some real number b. Then for all s > b, the Laplace transform for f(t) exists and is infinitely differentiable with
respect to s. Furthermore, if F(s) is the Laplace transform of f(t), then the inverse Laplace transform of F(s) is given
by

f(t) = L^{-1}{F(s)} = lim_{k→∞} ((-1)^k / k!) (k/t)^{k+1} F^{(k)}(k/t)

for t > 0, where F (k) is the k-th derivative of F with respect to s.


As can be seen from the formula, the need to evaluate derivatives of arbitrarily high orders renders this formula
impractical for most purposes.
With the advent of powerful personal computers, the main efforts to use this formula have come from dealing with
approximations or asymptotic analysis of the Inverse Laplace transform, using the Grunwald-Letnikov differintegral
to evaluate the derivatives.
Post’s inversion has attracted interest due to improvements in computational science and the fact that it is not
necessary to know where the poles of F(s) lie, which makes it possible to calculate the asymptotic behaviour for large
x using inverse Mellin transforms for several arithmetical functions related to the Riemann Hypothesis.
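As an illustration of the formula (not a practical inversion method), take F(s) = 1/(s + 1), whose k-th derivative is known in closed form: F^(k)(s) = (-1)^k k! / (s + 1)^(k + 1). The signs and factorials cancel, leaving the approximant (k / (k + t))^(k + 1), which converges to the exact inverse transform f(t) = e^(-t) as k grows (Python sketch, computed in log space to avoid overflow):

```python
import math

def post_approx(t, k):
    # (-1)^k / k! * (k/t)^(k+1) * F^(k)(k/t) for F(s) = 1/(s+1);
    # after cancellation this equals (k / (k + t))^(k + 1).
    return math.exp((k + 1) * (math.log(k) - math.log(k + t)))

for t in (0.5, 1.0, 2.0):
    assert abs(post_approx(t, 2000) - math.exp(-t)) < 1e-3
```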

28.1 See also


• Poisson summation formula

28.2 References
• Widder, D. V. (1946), The Laplace Transform, Princeton University Press
• Elementary inversion of the Laplace transform. Bryan, Kurt. Accessed June 14, 2006.

Chapter 29

Post’s lattice

In logic and universal algebra, Post’s lattice denotes the lattice of all clones on a two-element set {0, 1}, ordered
by inclusion. It is named for Emil Post, who published a complete description of the lattice in 1941.[1] The relative
simplicity of Post’s lattice is in stark contrast to the lattice of clones on a three-element (or larger) set, which has the
cardinality of the continuum, and a complicated inner structure. A modern exposition of Post’s result can be found
in Lau (2006).[2]

29.1 Basic concepts


A Boolean function, or logical connective, is an n-ary operation f: 2^n → 2 for some n ≥ 1, where 2 denotes the
two-element set {0, 1}. Particular Boolean functions are the projections

π_k^n(x1, ..., xn) = xk,
and given an m-ary function f, and n-ary functions g1 , ..., gm, we can construct another n-ary function

h(x1 , . . . , xn ) = f (g1 (x1 , . . . , xn ), . . . , gm (x1 , . . . , xn )),


called their composition. A set of functions closed under composition, and containing all projections, is called a clone.
Let B be a set of connectives. The functions which can be defined by a formula using propositional variables and
connectives from B form a clone [B], indeed it is the smallest clone which includes B. We call [B] the clone generated
by B, and say that B is the basis of [B]. For example, [¬, ∧] is the clone of all Boolean functions, and [0, 1, ∧, ∨] is
the clone of monotone functions.
We use the operations ¬ (negation), ∧ (conjunction or meet), ∨ (disjunction or join), → (implication), ↔ (biconditional),
+ (exclusive disjunction or Boolean ring addition), ↛ (nonimplication), ?: (the ternary conditional operator) and the
constant unary functions 0 and 1. Moreover, we need the threshold functions

th_k^n(x1, ..., xn) = 1 if |{i | xi = 1}| ≥ k, and 0 otherwise.

For example, th_1^n is the large disjunction of all the variables xi, and th_n^n is the large conjunction. Of particular
importance is the majority function

maj = th_2^3 = (x ∧ y) ∨ (x ∧ z) ∨ (y ∧ z).
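The identity between the threshold function and the disjunctive form of majority can be checked exhaustively; a small sketch (Python used for illustration, function names our own):

```python
from itertools import product

def th(k, xs):
    # Threshold function th_k^n: 1 iff at least k of the inputs are 1.
    return 1 if sum(xs) >= k else 0

for x, y, z in product((0, 1), repeat=3):
    maj = (x & y) | (x & z) | (y & z)
    assert th(2, (x, y, z)) == maj

# th_1^n is the n-ary disjunction, th_n^n the n-ary conjunction.
assert th(1, (0, 0, 1)) == 1
assert th(3, (0, 1, 1)) == 0
```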


We denote elements of 2^n (i.e., truth assignments) as vectors: a = (a1, ..., an). The set 2^n carries a natural product
Boolean algebra structure; that is, ordering, meets, joins, and other operations on n-ary truth assignments are defined
pointwise:


(a1, ..., an) ≤ (b1, ..., bn) ⇐⇒ ai ≤ bi for every i = 1, ..., n,

(a1 , . . . , an ) ∧ (b1 , . . . , bn ) = (a1 ∧ b1 , . . . , an ∧ bn ).

29.2 Naming of clones


Intersection of an arbitrary number of clones is again a clone. It is convenient to denote intersection of clones by
simple juxtaposition, i.e., the clone C 1 ∩ C 2 ∩ ... ∩ Ck is denoted by C 1 C 2 ...Ck. Some special clones are introduced
below:

• M is the set of monotone functions: f(a) ≤ f(b) for every a ≤ b.


• D is the set of self-dual functions: ¬f(a) = f(¬a).
• A is the set of affine functions: the functions satisfying

f(a1, ..., ai−1, c, ai+1, ..., an) = f(a1, ..., ai−1, d, ai+1, ..., an) ⇒ f(b1, ..., bi−1, c, bi+1, ..., bn) = f(b1, ..., bi−1, d, bi+1, ..., bn)

for every i ≤ n, a, b ∈ 2^n, and c, d ∈ 2. Equivalently, the functions expressible as f(x1, ..., xn) = a0 +
a1x1 + ... + anxn for some a0, ..., an.

• U is the set of essentially unary functions, i.e., functions which depend on at most one input variable: there
exists an i = 1, ..., n such that f(a) = f(b) whenever ai = bi.
• Λ is the set of conjunctive functions: f(a ∧ b) = f(a) ∧ f(b). The clone Λ consists of the conjunctions
f(x1, ..., xn) = ⋀_{i∈I} xi for all subsets I of {1, ..., n} (including the empty conjunction, i.e., the constant 1),
and the constant 0.
• V is the set of disjunctive functions: f(a ∨ b) = f(a) ∨ f(b). Equivalently, V consists of the disjunctions
f(x1, ..., xn) = ⋁_{i∈I} xi for all subsets I of {1, ..., n} (including the empty disjunction, i.e., the constant 0),
and the constant 1.
• For any k ≥ 1, T0^k is the set of functions f such that

a1 ∧ ··· ∧ ak = 0 ⇒ f(a1) ∧ ··· ∧ f(ak) = 0.

Moreover, T0^∞ = ⋂_{k≥1} T0^k is the set of functions bounded above by a variable: there exists i = 1, ..., n
such that f(a) ≤ ai for all a.
As a special case, P0 = T0^1 is the set of 0-preserving functions: f(0) = 0.

• For any k ≥ 1, T1^k is the set of functions f such that

a1 ∨ ··· ∨ ak = 1 ⇒ f(a1) ∨ ··· ∨ f(ak) = 1,

and T1^∞ = ⋂_{k≥1} T1^k is the set of functions bounded below by a variable: there exists i = 1, ..., n such
that f(a) ≥ ai for all a.
The special case P1 = T1^1 consists of the 1-preserving functions: f(1) = 1.

• The largest clone of all functions is denoted ⊤, the smallest clone (which contains only projections) is denoted
⊥, and P = P0 P1 is the clone of constant-preserving functions.
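These definitions are all finitely checkable at any fixed arity. As a small sanity check (a sketch of ours, not part of the article), the following Python snippet enumerates all 16 binary Boolean functions as truth tables and confirms that exactly five of them satisfy the defining condition of Λ: the constants 0 and 1, the two projections, and x ∧ y.

```python
from itertools import product

def conjunctive(table, n):
    """Test the defining condition of the clone Λ:
    f(a ∧ b) = f(a) ∧ f(b), with ∧ taken componentwise on input tuples."""
    points = list(product((0, 1), repeat=n))
    return all(table[tuple(x & y for x, y in zip(a, b))] == (table[a] & table[b])
               for a in points for b in points)

points = list(product((0, 1), repeat=2))            # inputs (0,0),(0,1),(1,0),(1,1)
all_binary = [dict(zip(points, values))             # all 16 binary functions
              for values in product((0, 1), repeat=4)]
members = [f for f in all_binary if conjunctive(f, 2)]

# exactly 5 members: constant 0, constant 1, x, y, and x ∧ y
print(len(members))   # prints 5
```

The analogous check with ∨ in place of ∧ recovers the dual description of V.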
190 CHAPTER 29. POST’S LATTICE

Hasse diagram of Post’s lattice

29.3 Description of the lattice

The set of all clones is a closure system, hence it forms a complete lattice. The lattice is countably infinite, and all its
members are finitely generated. All the clones are listed in the table below.
The eight infinite families actually also have members with k = 1, but these appear separately in the table: T0^1 = P0,
T1^1 = P1, PT0^1 = PT1^1 = P, MT0^1 = MP0, MT1^1 = MP1, MPT0^1 = MPT1^1 = MP.
The lattice has a natural symmetry mapping each clone C to its dual clone C^d = {f^d | f ∈ C}, where f^d(x1, ..., xn) =
¬f(¬x1, ..., ¬xn) is the de Morgan dual of a Boolean function f. For example, Λ^d = V, (T0^k)^d = T1^k, and M^d = M.

Central part of the lattice

29.4 Applications
The complete classification of Boolean clones given by Post helps to resolve various questions about classes of Boolean
functions. For example:

• An inspection of the lattice shows that the maximal clones different from ⊤ (often called Post’s classes) are
M, D, A, P0, and P1, and every proper subclone of ⊤ is contained in one of them. As a set B of connectives is
functionally complete if and only if it generates ⊤, we obtain the following characterization: B is functionally
complete iff it is not included in any of the five Post classes.
• The satisfiability problem for Boolean formulas is NP-complete by Cook’s theorem. Consider a restricted
version of the problem: for a fixed finite set B of connectives, let B-SAT be the algorithmic problem of checking
whether a given B-formula is satisfiable. Lewis[3] used the description of Post’s lattice to show that B-SAT is
NP-complete if the function ↛ can be generated from B (i.e., [B] ⊇ T0^∞), and in all the other cases B-SAT is
polynomial-time decidable.
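The first characterization lends itself to a brute-force implementation: to decide completeness of a finite set of connectives it suffices to test containment in each of the five maximal clones. A Python sketch (the function names and the (arity, function) encoding are ours):

```python
from itertools import product

def _inputs(n):
    return product((0, 1), repeat=n)

def preserves_0(n, f):
    return f(*([0] * n)) == 0

def preserves_1(n, f):
    return f(*([1] * n)) == 1

def monotone(n, f):
    # f(a) <= f(b) whenever a <= b coordinatewise
    return all(f(*a) <= f(*b)
               for a in _inputs(n) for b in _inputs(n)
               if all(x <= y for x, y in zip(a, b)))

def self_dual(n, f):
    return all(1 - f(*a) == f(*tuple(1 - x for x in a)) for a in _inputs(n))

def affine(n, f):
    # f is affine iff it agrees everywhere with a0 + a1*x1 + ... + an*xn (mod 2);
    # the candidate coefficients are read off f at 0 and at the unit vectors
    a0 = f(*([0] * n))
    coeffs = [f(*tuple(1 if j == i else 0 for j in range(n))) ^ a0 for i in range(n)]
    def predicted(a):
        value = a0
        for c, x in zip(coeffs, a):
            value ^= c & x
        return value
    return all(f(*a) == predicted(a) for a in _inputs(n))

def functionally_complete(connectives):
    """connectives: list of (arity, function) pairs.  Complete iff the set is
    not contained in any of the five maximal clones P0, P1, M, D, A."""
    return not any(all(test(n, f) for n, f in connectives)
                   for test in (preserves_0, preserves_1, monotone, self_dual, affine))

print(functionally_complete([(2, lambda x, y: 1 - (x & y))]))   # NAND alone: True
print(functionally_complete([(2, lambda x, y: x & y),
                             (2, lambda x, y: x | y)]))         # all monotone: False
```

The same test shows, for instance, that {⊕, ¬} is incomplete because both connectives are affine.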

29.5 Variants
Post originally did not work with the modern definition of clones, but with the so-called iterative systems, which are
sets of operations closed under substitution

h(x1 , . . . , xn+m−1 ) = f (x1 , . . . , xn−1 , g(xn , . . . , xn+m−1 )),

as well as permutation and identification of variables. The main difference is that iterative systems do not necessarily
contain all projections. Every clone is an iterative system, and there are 20 non-empty iterative systems which are not
clones. (Post also excluded the empty iterative system from the classification, hence his diagram has no least element
and fails to be a lattice.) As another alternative, some authors work with the notion of a closed class, which is an
iterative system closed under introduction of dummy variables. There are four closed classes which are not clones:
the empty set, the set of constant 0 functions, the set of constant 1 functions, and the set of all constant functions.
Composition alone does not make it possible to generate a nullary function from the corresponding unary constant function;
this is the technical reason why nullary functions are excluded from clones in Post’s classification. If we lift the restriction,
we get more clones. Namely, each clone C in Post’s lattice which contains at least one constant function corresponds
to two clones under the less restrictive definition: C, and C together with all nullary functions whose unary versions
are in C.

29.6 References
[1] E. L. Post, The two-valued iterative systems of mathematical logic, Annals of Mathematics studies, no. 5, Princeton Uni-
versity Press, Princeton 1941, 122 pp.

[2] D. Lau, Function algebras on finite sets: Basic course on many-valued logic and clone theory, Springer, New York, 2006,
668 pp. ISBN 978-3-540-36022-3

[3] H. R. Lewis, Satisfiability problems for propositional calculi, Mathematical Systems Theory 13 (1979), pp. 45–53.
Chapter 30

Post’s theorem

In computability theory Post’s theorem, named after Emil Post, describes the connection between the arithmetical
hierarchy and the Turing degrees.

30.1 Background
The statement of Post’s theorem uses several concepts relating to definability and recursion theory. This section gives
a brief overview of these concepts, which are covered in depth in their respective articles.
The arithmetical hierarchy classifies certain sets of natural numbers that are definable in the language of Peano arith-
metic. A formula is said to be Σ^0_m if it is an existential statement in prenex normal form (all quantifiers at the front)
with m blocks of quantifiers, alternating between existential and universal, applied to a quantifier-free formula. Formally,
a formula ϕ(s) in the language of Peano arithmetic is a Σ^0_m formula if it is of the form

∃n1 ∀n2 ∃n3 ∀n4 · · · Q nm ρ(n1, . . . , nm, x1, . . . , xk),

where ρ is a quantifier-free formula and Q is ∀ if m is even and ∃ if m is odd. Note that any formula of the form

(∃n^1_1 ∃n^1_2 · · · ∃n^1_{j1})(∀n^2_1 · · · ∀n^2_{j2})(∃n^3_1 · · ·) · · · (Q n^m_1 · · ·) ρ(n^1_1, . . . , n^m_{jm}, x1, . . . , xk),

where ρ contains only bounded quantifiers, is provably equivalent to a formula of the above form from the axioms of
Peano arithmetic. Thus it is not uncommon to see Σ^0_m formulas defined in this alternative and technically nonequivalent
manner, since in practice the distinction is rarely important.
A set of natural numbers A is said to be Σ^0_m if it is definable by a Σ^0_m formula, that is, if there is a Σ^0_m formula ϕ(s)
such that each number n is in A if and only if ϕ(n) holds. It is known that if a set is Σ^0_m then it is Σ^0_n for any n > m,
but for each m there is a Σ^0_{m+1} set that is not Σ^0_m. Thus the number of quantifier alternations required to define a
set gives a measure of the complexity of the set.
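As a concrete illustration (a standard example not spelled out in this article; T denotes Kleene's decidable T predicate), the halting set sits at the first level of the hierarchy, since its definition needs just one unbounded existential quantifier in front of a decidable matrix:

```latex
K \;=\; \{\, e \in \mathbb{N} \;\mid\; \exists s\, T(e, e, s) \,\}
% T(e, e, s) says "s codes a halting computation of program e on input e";
% T is decidable, so the single unbounded \exists places K in \Sigma^0_1.
```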
Post’s theorem uses the relativized arithmetical hierarchy as well as the unrelativized hierarchy just defined. A set
A of natural numbers is said to be Σ^0_m relative to a set B, written Σ^{0,B}_m, if A is definable by a Σ^0_m formula in an
extended language that includes a predicate for membership in B.
While the arithmetical hierarchy measures definability of sets of natural numbers, Turing degrees measure the level
of uncomputability of sets of natural numbers. A set A is said to be Turing reducible to a set B, written A ≤T B , if
there is an oracle Turing machine that, given an oracle for B, computes the characteristic function of A. The Turing
jump of a set A is a form of the Halting problem relative to A. Given a set A, the Turing jump A′ is the set of indices
of oracle Turing machines that halt on input 0 when run with oracle A. It is known that every set A is Turing reducible
to its Turing jump, but the Turing jump of a set is never Turing reducible to the original set.
Post’s theorem uses finitely iterated Turing jumps. For any set A of natural numbers, the notation A(n) indicates the
n-fold iterated Turing jump of A. Thus A(0) is just A, and A(n+1) is the Turing jump of A(n) .


30.2 Post’s theorem and corollaries


Post’s theorem establishes a close connection between the arithmetical hierarchy and the Turing degrees of the form
∅(n) , that is, finitely iterated Turing jumps of the empty set. (The empty set could be replaced with any other
computable set without changing the truth of the theorem.)
Post’s theorem states:

1. A set B is Σ^0_{n+1} if and only if B is recursively enumerable by an oracle Turing machine with an oracle for ∅^(n),
that is, if and only if B is Σ^{0,∅^(n)}_1.

2. The set ∅^(n) is Σ^0_n-complete for every n > 0. This means that every Σ^0_n set is many-one reducible to ∅^(n).

Post’s theorem has many corollaries that expose additional relationships between the arithmetical hierarchy and the
Turing degrees. These include:

1. Fix a set C. A set B is Σ^{0,C}_{n+1} if and only if B is Σ^{0,C^(n)}_1. This is the relativization of the first part of Post’s
theorem to the oracle C.

2. A set B is Δ^0_{n+1} if and only if B ≤T ∅^(n). More generally, B is Δ^{0,C}_{n+1} if and only if B ≤T C^(n).

3. A set is defined to be arithmetical if it is Σ^0_n for some n. Post’s theorem shows that, equivalently, a set is
arithmetical if and only if it is Turing reducible to ∅^(m) for some m.

30.3 References
• Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1; ISBN 0-07-053522-1

• Soare, R. Recursively Enumerable Sets and Degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1987. ISBN 3-540-15299-7
Chapter 31

Post–Turing machine

The article Turing machine gives a general introduction to Turing machines, while this article covers a
specific class of Turing machines.

A Post–Turing machine is a “program formulation” of an especially simple type of Turing machine, comprising a
variant of Emil Post's Turing-equivalent model of computation described below. (Post’s model and Turing’s model,
though very similar to one another, were developed independently. Turing’s paper was received for publication in
May 1936, followed by Post’s in October.) A Post–Turing machine uses a binary alphabet, an infinite sequence of
binary storage locations, and a primitive programming language with instructions for bi-directional movement among
the storage locations and alteration of their contents one at a time. The names “Post–Turing program” and “Post–
Turing machine” were used by Martin Davis in 1973–1974 (Davis 1973, p. 69ff). Later in 1980, Davis used the
name “Turing–Post program” (Davis, in Steen p. 241).

31.1 1936: Post model


In his 1936 paper “Finite combinatory processes—formulation 1” (which can be found on page 289 of The Un-
decidable), Emil Post described a model of extreme simplicity which he conjectured is "logically equivalent to
recursiveness", and which was later proved to be so. The quotes in the following are from this paper.
Post’s model of a computation differs from the Turing-machine model in a further “atomization” of the acts a human
“computer” would perform during a computation.a[›]
Post’s model employs a "symbol space” consisting of a “two-way infinite sequence of spaces or boxes”, each box
capable of being in either of two possible conditions, namely “marked” (as by a single vertical stroke) and “unmarked”
(empty). Initially, finitely-many of the boxes are marked, the rest being unmarked. A “worker” is then to move among
the boxes, being in and operating in only one box at a time, according to a fixed finite “set of directions” (instructions),
which are numbered in order (1,2,3,...,n). Beginning at a box “singled out as the starting point”, the worker is to follow
the set of instructions one at a time, beginning with instruction 1.
The instructions may require the worker to perform the following “basic acts” or "operations":

(a) Marking the box he is in (assumed empty),


(b) Erasing the mark in the box he is in (assumed marked),
(c) Moving to the box on his right,
(d) Moving to the box on his left,
(e) Determining whether the box he is in, is or is not marked.

Specifically, the i th “direction” (instruction) given to the worker is to be one of the following forms:

(A) Perform operation Oi [Oi = (a), (b), (c) or (d)] and then follow direction ji,
(B) Perform operation (e) and according as the answer is yes or no correspondingly follow direction ji' or
ji' ' ,


(C) Stop.

(The above indented text and italics are as in the original.) Post remarks that this formulation is “in its initial stages”
of development, and mentions several possibilities for “greater flexibility” in its final “definitive form”, including

(1) replacing the infinity of boxes by a finite extensible symbol space, “extending the primitive operations
to allow for the necessary extension of the given finite symbol space as the process proceeds”,
(2) using an alphabet of more than two symbols, “having more than one way to mark a box”,
(3) introducing finitely-many “physical objects to serve as pointers, which the worker can identify and
move from box to box”.

31.2 1947: Post’s formal reduction of the Turing 5-tuples to 4-tuples


As briefly mentioned in the article Turing machine, Post, in his paper of 1947 (Recursive Unsolvability of a Problem
of Thue) atomized the Turing 5-tuples to 4-tuples:

“Our quadruplets are quintuplets in the Turing development. That is, where our standard instruction
orders either a printing (overprinting) or motion, left or right, Turing’s standard instruction always orders
a printing and a motion, right, left, or none” (footnote 12, Undecidable p. 300)

Like Turing he defined erasure as printing a symbol “S0”. And so his model admitted quadruplets of only three types
(cf p. 294 Undecidable):

qi Sj L ql,
qi Sj R ql,
qi Sj Sk ql

At this time he was still retaining the Turing state-machine convention – he had not formalized the notion of an
assumed sequential execution of steps until a specific test of a symbol “branched” the execution elsewhere.

31.3 1954, 1957: Wang model


For an even further reduction – to only four instructions – of the Wang model presented here see Wang B-machine.
Wang (1957, but presented to the ACM in 1954) is often cited (cf Minsky (1967) p. 200) as the source of the
“program formulation” of binary-tape Turing machines using numbered instructions from the set

write 0
write 1
move left
move right
if scanning 0 then goto instruction i
if scanning 1 then goto instruction j

where sequential execution is assumed, and Post’s single "if ... then ... else" has been “atomised” into two “if ... then
...” statements. (Here '1' and '0' are used where Wang used “marked” and “unmarked”, respectively, and the initial
tape is assumed to contain only '0’s except for finitely-many '1’s.)
Wang noted the following:

• “Since there is no separate instruction for halt (stop), it is understood that the machine will stop when it has
arrived at a stage that the program contains no instruction telling the machine what to do next.” (p. 65)

• “In contrast with Turing who uses a one-way infinite tape that has a beginning, we are following Post in the use
of a 2-way infinite tape.” (p. 65)
• Unconditional gotos are easily derived from the above instructions, so “we can freely use them too”. (p. 84)

Any binary-tape Turing machine is readily converted to an equivalent “Wang program” using the above instructions.
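Wang's six instructions are simple enough to interpret directly. The Python sketch below (the encoding and step budget are ours, not Wang's) represents the two-way infinite tape as a dictionary defaulting to 0 and, following Wang's remark quoted above, halts when control runs past the last instruction:

```python
def run_wang(program, max_steps=10000):
    """Interpret a Wang-style program over a two-way infinite binary tape.
    Instructions: 'write 0', 'write 1', 'left', 'right', and the conditional
    jumps ('if0', k) / ('if1', k), with instructions numbered from 0 here.
    There is no halt instruction: the machine stops when control runs past
    the end of the program."""
    tape, head, pc = {}, 0, 0
    for _ in range(max_steps):
        if not 0 <= pc < len(program):
            return tape                      # halted by running off the program
        ins = program[pc]
        if ins == 'write 0':
            tape[head] = 0
        elif ins == 'write 1':
            tape[head] = 1
        elif ins == 'left':
            head -= 1
        elif ins == 'right':
            head += 1
        elif ins[0] == 'if0' and tape.get(head, 0) == 0:
            pc = ins[1]
            continue
        elif ins[0] == 'if1' and tape.get(head, 0) == 1:
            pc = ins[1]
            continue
        pc += 1
    raise RuntimeError('step budget exhausted; program may not halt')

# a program that writes three 1s and then halts by running off the end
print(run_wang(['write 1', 'right', 'write 1', 'right', 'write 1']))
```

An unconditional goto to instruction k, per the derivation Wang alludes to, is just ('if0', k) followed by ('if1', k).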

31.4 1974: first Davis model


Martin Davis was an undergraduate student of Emil Post’s. Along with Stephen Kleene he completed his PhD under
Alonzo Church (Davis (2000) 1st and 2nd footnotes p. 188).
The following model he presented in a series of lectures to the Courant Institute at NYU in 1973–1974. This is
the model to which Davis formally applied the name “Post–Turing machine” with its “Post–Turing language”. The
instructions are assumed to be executed sequentially (Davis 1974, p. 71):

“Write 1
“Write B
“To A if read 1
“To A if read B
“RIGHT
“LEFT

Note that there is no “halt” or “stop”.

31.5 1978 second Davis model


The following model appears as an essay, What is a computation?, in Steen, pages 241–267. For some reason Davis
has renamed his model a “Turing–Post machine” (with one back-sliding on page 256).
In the following model Davis assigns the numbers “1” to Post’s “mark/slash” and “0” to the blank square. To quote
Davis: “We are now ready to introduce the Turing–Post Programming Language. In this language there are seven
kinds of instructions:

“PRINT 1
“PRINT 0
“GO RIGHT
“GO LEFT
“GO TO STEP i IF 1 IS SCANNED
“GO TO STEP i IF 0 IS SCANNED
“STOP

“A Turing–Post program is then a list of instructions, each of which is of one of these seven kinds. Of course in
an actual program the letter i in a step of either the fifth or sixth kind must be replaced with a definite (positive whole)
number.” (Davis in Steen, p. 247).

• Confusion arises if one does not realize that a “blank” tape is actually printed with all zeroes — there is no
“blank”.
• Davis splits Post’s “GO TO” (“branch” or “jump”) instruction into two, thus creating a larger (but easier-to-use)
instruction set of seven rather than Post’s six instructions.
• Davis does not mention that the instructions PRINT 1, PRINT 0, GO RIGHT and GO LEFT imply that, after execution,
the “computer” must go to the next step in numerical sequence.

31.6 1994 (2nd Edition) Davis–Sigal–Weyuker’s Post–Turing program model


“Although the formulation of Turing we have presented is closer in spirit to that originally given by Emil Post, it was
Turing’s analysis of the computation that has made this formulation seem so appropriate. This language has played a
fundamental role in theoretical computer science.” (Davis et al. (1994) p. 129)
This model allows for the printing of multiple symbols. The model allows for B (blank) instead of S0 . The tape is
infinite in both directions. Either the head or the tape moves, but their definitions of RIGHT and LEFT always specify
the same outcome in either case (Turing used the same convention).

PRINT σ ;Replace scanned symbol with σ


IF σ GOTO L ;IF scanned symbol is σ THEN goto “the first” instruction labelled L
RIGHT ;Scan square immediately right of the square currently scanned
LEFT ;Scan square immediately left of the square currently scanned

Note that only one type of “jump” – a conditional GOTO – is specified; for an unconditional jump a string of GOTO’s
must test each symbol.
This model reduces to the binary { 0, 1 } versions presented above, as shown here:

PRINT 0 = ERASE ;Replace scanned symbol with 0 = B = BLANK


PRINT 1 ;Replace scanned symbol with 1
IF 0 GOTO L ;IF scanned symbol is 0 THEN goto “the first” instruction labelled L
IF 1 GOTO L ;IF scanned symbol is 1 THEN goto “the first” instruction labelled L
RIGHT ;Scan square immediately right of the square currently scanned
LEFT ;Scan square immediately left of the square currently scanned

31.7 Examples of the Post–Turing machine

31.7.1 Atomizing Turing quintuples into a sequence of Post–Turing instructions

The following “reduction” (decomposition, atomizing) method – from 2-symbol Turing 5-tuples to a sequence of
2-symbol Post–Turing instructions – can be found in Minsky (1961). He states that this reduction to “a program ... a
sequence of Instructions" is in the spirit of Hao Wang’s B-machine (italics in original, cf Minsky (1961) p. 439).
(Minsky’s reduction to what he calls “a sub-routine” results in 5 rather than 7 Post–Turing instructions. He did not
atomize Wi0: “Write symbol Si0; go to new state Mi0”, and Wi1: “Write symbol Si1; go to new state Mi1”. The
following method further atomizes Wi0 and Wi1; in all other respects the methods are identical.)
This reduction of Turing 5-tuples to Post–Turing instructions may not result in an “efficient” Post–Turing program,
but it will be faithful to the original Turing-program.
In the following example, each Turing 5-tuple of the 2-state busy beaver converts into

(i) an initial conditional “jump” (goto, branch), followed by


(ii) 2 tape-action instructions for the “0” case – Print or Erase or None, followed by Left or Right or
None, followed by
(iii) an unconditional “jump” for the “0” case to its next instruction
(iv) 2 tape-action instructions for the “1” case – Print or Erase or None, followed by Left or Right or
None, followed by
(v) an unconditional “jump” for the “1” case to its next instruction

for a total of 1 + 2 + 1 + 2 + 1 = 7 instructions per Turing-state.


For example, the 2-state busy beaver’s “A” Turing-state, written as two lines of 5-tuples, is:

The table represents just a single Turing “instruction”, but we see that it consists of two lines of 5-tuples, one for
the case “tape symbol under head = 1”, the other for the case “tape symbol under head = 0”. Turing observed
(Undecidable, p. 119) that the left-two columns – “m-configuration” and “symbol” – represent the machine’s current
“configuration” – its state including both Tape and Table at that instant – and the last three columns are its subsequent
“behavior”. As the machine cannot be in two “states” at once, the machine must “branch” to either one configuration
or the other:
After the “configuration branch” (J1 xxx) or (J0 xxx) the machine follows one of the two subsequent “behaviors”. We
list these two behaviors on one line, and number (or label) them sequentially (uniquely). Beneath each jump (branch,
go to) we place its jump-to “number” (address, location):
Per the Post–Turing machine conventions each of the Print, Erase, Left, and Right instructions consist of two actions:

(i) Tape action: { P, E, L, R}, then


(ii) Table action: go to next instruction in sequence

And per the Post–Turing machine conventions the conditional “jumps” J0xxx, J1xxx consist of two actions:

(i) Tape action: look at symbol on tape under the head


(ii) Table action: If symbol is 0 (1) and J0 (J1) then go to xxx else go to next instruction in sequence

And per the Post–Turing machine conventions the unconditional “jump” Jxxx consists of a single action, or if we
want to regularize the 2-action sequence:

(i) Tape action: look at symbol on tape under the head


(ii) Table action: If symbol is 0 then go to xxx else if symbol is 1 then go to xxx.

Which, and how many, jumps are necessary? The unconditional jump Jxxx is simply J0 followed immediately by
J1 (or vice versa). Wang (1957) also demonstrates that only one conditional jump is required, i.e. either J0xxx or
J1xxx. However, with this restriction the machine becomes difficult to write instructions for. Often only two are
used, i.e.

(i) { J0xxx, J1xxx }


(ii) { J1xxx, Jxxx }
(iii) { J0xxx, Jxxx },

but the use of all three { J0xxx, J1xxx, Jxxx } does eliminate extra instructions. In the 2-state busy beaver example
below we use only { J1xxx, Jxxx }.

31.7.2 2-state Busy Beaver


The mission of the busy beaver is to print as many ones as possible before halting. The “Print” instruction writes a
1, the “Erase” instruction (not used in this example) writes a 0 (i.e. it is the same as P0). The tape moves “Left” or
“Right” (i.e. the “head” is stationary).
State table for a 2-state Turing-machine busy beaver:
Instructions for the Post–Turing version of a 2-state busy beaver: observe that all the instructions are on the same line
and in sequence. This is a significant departure from the “Turing” version and is in the same format as what is called
a “computer program":
Alternately, we might write the table as a string. The use of “parameter separators” ":" and instruction-separators
",” are entirely our choice and do not appear in the model. There are no conventions (but see Booth (1967) p. 374,
and Boolos and Jeffrey (1974, 1999) p. 23), for some useful ideas of how to combine state diagram conventions
with the instructions – i.e. to use arrows to indicate the destination of the jumps). In the example immediately
below, the instructions are sequential starting from “1”, and the parameters/"operands” are considered part of their
instructions/"opcodes":

J1:5, P, R, J:8, P, L, J:8, J1:12, P, L, J1:1, P, N, J:15, H
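This instruction string can be executed mechanically. In the Python sketch below (the tuple encoding is ours; instructions are 1-indexed as in the string), running the program halts with four marks on the tape, the expected output of the 2-state busy beaver:

```python
def run_post_turing(program, max_steps=1000):
    """Interpret Post–Turing instructions, 1-indexed as in the string above.
    ('P',) print 1; ('E',) erase; ('L',)/('R',) move; ('N',) no move;
    ('J', k) unconditional jump; ('J1', k) jump if scanning 1; ('H',) halt."""
    tape, head, pc = {}, 0, 1
    for _ in range(max_steps):
        tag = program[pc - 1][0]
        if tag == 'H':
            return tape
        if tag == 'P':
            tape[head] = 1
        elif tag == 'E':
            tape[head] = 0
        elif tag == 'L':
            head -= 1
        elif tag == 'R':
            head += 1
        elif tag == 'N':
            pass
        elif tag == 'J':
            pc = program[pc - 1][1]
            continue
        elif tag == 'J1':
            if tape.get(head, 0) == 1:
                pc = program[pc - 1][1]
                continue
        pc += 1
    raise RuntimeError('no halt within step budget')

# J1:5, P, R, J:8, P, L, J:8, J1:12, P, L, J1:1, P, N, J:15, H
busy_beaver = [('J1', 5), ('P',), ('R',), ('J', 8), ('P',), ('L',), ('J', 8),
               ('J1', 12), ('P',), ('L',), ('J1', 1), ('P',), ('N',), ('J', 15),
               ('H',)]

print(sum(run_post_turing(busy_beaver).values()))   # prints 4
```

The four marks end up on consecutive squares, matching the standard 2-state busy beaver result.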

The state diagram of a two-state busy beaver (little drawing, right-hand corner) converts to the equivalent Post–Turing
machine with the substitution of 7 Post–Turing instructions per “Turing” state. The HALT instruction adds the 15th
state:
A “run” of the 2-state busy beaver with all the intermediate steps of the Post–Turing machine shown:

31.7.3 Two state busy beaver followed by “tape cleanup”

The following is a two-state Turing busy beaver with additional instructions 15–20 to demonstrate the use of “Erase”,
J0, etc. These will erase the 1’s written by the busy beaver:
Additional Post–Turing instructions 15 through 20 erase the symbols created by the busy beaver. These “atomic”
instructions are more “efficient” than their Turing-state equivalents (of 7 Post–Turing instructions). To accomplish
the same task a Post–Turing machine will (usually) require fewer Post–Turing states than a Turing-machine, because
(i) a jump (go-to) can occur to any Post–Turing instruction (e.g. P, E, L, R) within the Turing-state, (ii) a grouping
of move-instructions such as L, L, L, P are possible, etc.:

31.8 Example: Multiply 3 × 4 with a Post–Turing machine


This example is a reference to show how a “multiply” computation would proceed on a single-tape, 2-symbol { blank,
1 } Post–Turing machine model.
This particular “multiply” algorithm works through two nested loops. The head moves; it starts at the far left (the top)
of the string of unary marks representing a' :

• Move head far right. Establish (i.e. “clear”) register c by placing a single blank and then a mark to
the right of b
• a_loop: Move head right once, test for the bottom of a' (a blank). If blank then done else erase
mark;
• Move head right to b' . Move head right once past the top mark of b' ;
• b_loop: If head is at the bottom of b' (a blank) then move head to far left of a' , else:

• Erase a mark to locate counter (a blank) in b' .


• Increment c' : Move head right to top of c' and increment c' .
• Move head left to the counter inside b' ,
• Repair counter: print a mark in the blank counter.
• Decrement b' −count: Move head right once.

An example of “multiply” a × b = c on a Post–Turing machine. At the start, the tape (shown on the left) has two numbers on it –
a' = 3' (4 marks), b' = 4' (5 marks). (A single mark would represent “0”.) At the end the tape will have the product c' = 12' (13
marks) to the right of b. Note “top” and “bottom” are there just to clarify what the P–T machine is doing.

• Return to b_loop.

Multiply a × b = c, for example: 3 × 4 = 12. The scanned square is indicated by brackets around the
mark i.e. [ | ]. An extra mark serves to indicate the symbol “0":
At the start of the computation a' is 4 unary marks, then a separator blank, b' is 5 unary
marks, then a separator mark. An unbounded number of empty spaces must be available for
c to the right:
....a'.b'.... = : ....[ | ] | | | . | | | | | ....

During the computation the head shuttles back and forth from a' to b' to c' back to b'
then to c' , then back to b' , then to c' ad nauseam while the machine counts through b'
and increments c' . Multiplicand a' is slowly counted down (its marks erased – shown for
reference with x’s below). A “counter” inside b' moves to the right through b (an erased mark
shown being read by the head as [ . ] ) but is reconstructed after each pass when the head
returns from incrementing c' :
....a.b.... = : ....xxx | . | | [ . ] | | . | | | | | | | ...

At the end of the computation: c' is 13 marks = “successor of 12” appearing to the right of b' . a'
has vanished in the process of the computation:
....b.c = ......... | | | | | . | | | | | | | | | | | | | ...

31.9 Footnotes
^ a: Difference between Turing- and Post–Turing machine models
In his chapter XIII Computable Functions, Kleene adopts the Post model; Kleene’s model uses a blank and one symbol
“tally mark ¤" (Kleene p. 358), a “treatment closer in some respects to Post 1936. Post 1936 considered computation
with a 2-way infinite tape and only 1 symbol” (Kleene p. 361). Kleene observes that Post’s treatment provided a
further reduction to “atomic acts” (Kleene p. 357) of “the Turing act” (Kleene p. 379). As described by Kleene “The
Turing act” is the combined 3 (time-sequential) actions specified on a line in a Turing table: (i) print-symbol/erase/do-
nothing followed by (ii) move-tape-left/move-tape-right/do-nothing followed by (iii) test-tape-go-to-next-instruction:

e.g. “s1Rq1” means “Print symbol "¤", then move tape right, then if tape symbol is "¤" then go to state q1”. (See
Kleene’s example, p. 358).
Kleene observes that Post atomized these 3-actions further into two types of 2-actions. The first type is a “print/erase”
action, the second is a “move tape left/right” action: (1.i) print-symbol/erase/do-nothing followed by (1.ii) test-tape-
go-to-next-instruction, OR (2.i) move-tape-left/move-tape-right/do-nothing followed by (2.ii) test-tape-go-to-next-
instruction.
But Kleene observes that

“Indeed it could be argued that the Turing machine act is already compound, and consists psychologically
in a printing and change in state of mind, followed by a motion and another state of mind [, and] Post
1947 does thus separate the Turing act into two; we have not here, primarily because it saves space in
the machine tables not to do so."(Kleene p. 379)

In fact Post’s treatment (1936) is ambiguous; both (1.i) and (2.i) could be followed by “(.ii) go to next instruction in
numerical sequence”. This represents a further atomization into three types of instructions: (1) print-symbol/erase/do-
nothing then go-to-next-instruction-in-numerical-sequence, (2) move-tape-left/move-tape-right/do-nothing then go-
to-next-instruction-in-numerical-sequence, (3) test-tape then go-to-instruction-xxx-else-go-to-next-instruction-in-numerical-
sequence.

31.10 References
• Stephen C. Kleene, Introduction to Meta-Mathematics, North-Holland Publishing Company, New York, 10th
edition 1991, first published 1952. Chapter XIII is an excellent description of Turing machines; Kleene uses a
Post-like model in his description and admits the Turing model could be further atomized, see Footnote 1.

• Martin Davis, editor: The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Problems And
Computable Functions, Raven Press, New York, 1965. Papers include those by Gödel, Church, Rosser, Kleene,
and Post.
• Martin Davis, “What is a computation”, in Mathematics Today, Lynn Arthur Steen, Vintage Books (Random
House), 1980. A wonderful little paper, perhaps the best ever written about Turing Machines. Davis reduces
the Turing Machine to a far-simpler model based on Post’s model of a computation. Includes a little biography
of Emil Post.
• Martin Davis, Computability: with Notes by Barry Jacobs, Courant Institute of Mathematical Sciences, New
York University, 1974.

• Martin Davis, Ron Sigal, Elaine J. Weyuker, (1994) Computability, Complexity, and Languages: Fundamentals
of Theoretical Computer Science – 2nd edition, Academic Press: Harcourt, Brace & Company, San Diego,
1994 ISBN 0-12-206382-1 (First edition, 1983).
• Fred Hennie, Introduction to Computability, Addison–Wesley, 1977.

• Marvin Minsky, (1961), Recursive Unsolvability of Post’s problem of 'Tag' and other Topics in Theory of Turing
Machines, Annals of Mathematics, Vol. 74, No. 3, November, 1961.

• Roger Penrose, The Emperor’s New Mind: Concerning computers, Minds and the Laws of Physics, Oxford
University Press, Oxford England, 1990 (with corrections). Cf: Chapter 2, “Algorithms and Turing Machines”.
An overly-complicated presentation (see Davis’s paper for a better model), but a thorough presentation of
Turing machines and the halting problem, and Church’s lambda calculus.

• Hao Wang (1957): “A variant to Turing’s theory of computing machines”, Journal of the Association for
Computing Machinery (JACM) 4, 63–92.
Chapter 32

Propositional calculus

Propositional calculus (also called propositional logic, sentential calculus, or sentential logic) is the branch of
mathematical logic concerned with the study of propositions (whether they are true or false) that are formed from other
propositions with the use of logical connectives, and of how their truth value depends on the truth values of their components.
Logical connectives are found in natural languages. In English, for example: “and” (conjunction),
“or” (disjunction), “not” (negation) and “if” (but only when used to denote the material conditional).
The following is an example of a very simple inference within the scope of propositional logic:

Premise 1: If it’s raining then it’s cloudy.


Premise 2: It’s raining.
Conclusion: It’s cloudy.

Both the premises and the conclusion are propositions. The premises are taken for granted, and then with the application
of modus ponens (an inference rule) the conclusion follows.
As propositional logic is not concerned with the structure of propositions beyond the point where they cannot be
decomposed any further by logical connectives, this inference can be restated by replacing those atomic statements with
statement letters, which are interpreted as variables representing statements:

P →Q
P
Q
The same can be stated succinctly in the following way:

P → Q, P ⊢ Q
When P is interpreted as “It’s raining” and Q as “it’s cloudy” the above symbolic expressions can be seen to exactly
correspond with the original expression in natural language. Not only that, but they will also correspond with any
other inference of this form, which will be valid on the same basis that this inference is.
Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted
to represent propositions. A system of inference rules and axioms allows certain formulas to be derived. These
derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such
formulas is known as a derivation or proof and the last formula of the sequence is the theorem. The derivation may
be interpreted as proof of the proposition represented by the theorem.
When a formal system is used to represent formal logic, only statement letters are represented directly. The natural
language propositions that arise when they're interpreted are outside the scope of the system, and the relation between
the formal system and its interpretation is likewise outside the formal system itself.
Usually in truth-functional propositional logic, formulas are interpreted as having either a truth value of true or a
truth value of false. Truth-functional propositional logic, and systems isomorphic to it, are considered to be zeroth-
order logic.


32.1 History
Main article: History of logic

Although propositional logic (which is interchangeable with propositional calculus) had been hinted at by earlier philoso-
phers, it was developed into a formal logic by Chrysippus in the 3rd century BC[1] and expanded by the Stoics. The
logic was focused on propositions. This advancement was different from the traditional syllogistic logic, which was fo-
cused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no longer understood.
Consequently, the system was essentially reinvented by Peter Abelard in the 12th century.[2]
Propositional logic was eventually refined using symbolic logic. The 17th/18th century philosopher Gottfried Leibniz
has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his
work was the first of its kind, it was unknown to the larger logical community. Consequently, many of the advances
achieved by Leibniz were reachieved by logicians like George Boole and Augustus De Morgan completely independent
of Leibniz.[3]
Just as propositional logic can be considered an advancement from the earlier syllogistic logic, Gottlob Frege’s
predicate logic was an advancement from the earlier propositional logic. One author describes predicate logic as
combining “the distinctive features of syllogistic logic and propositional logic.”[4] Consequently, predicate logic ush-
ered in a new era in logic’s history; however, advances in propositional logic were still made after Frege, including
Natural Deduction, Truth-Trees and Truth-Tables. Natural deduction was invented by Gerhard Gentzen and Jan
Łukasiewicz. Truth-Trees were invented by Evert Willem Beth.[5] The invention of truth-tables, however, is of con-
troversial attribution.
Within works by Frege[6] and Bertrand Russell,[7] one finds ideas influential in bringing about the notion of truth tables.
The actual 'tabular structure' (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or
Emil Post (or both, independently).[6] Besides Frege and Russell, others credited with having ideas preceding truth-
tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure
include Łukasiewicz, Schröder, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving
Lewis.[7] Ultimately, some have concluded, like John Shosky, that “It is far from clear that any one person should be
given the title of 'inventor' of truth-tables.”[7]

32.2 Terminology
In general terms, a calculus is a formal system that consists of a set of syntactic expressions (well-formed formulas),
a distinguished subset of these expressions (axioms), plus a set of formal rules that define a specific binary relation,
intended to be interpreted as logical equivalence, on the space of expressions.
When the formal system is intended to be a logical system, the expressions are meant to be interpreted as statements,
and the rules, known as inference rules, are typically intended to be truth-preserving. In this setting, the rules (which
may include axioms) can then be used to derive (“infer”) formulas representing true statements from given formulas
representing true statements.
The set of axioms may be empty, a nonempty finite set, a countably infinite set, or be given by axiom schemata. A
formal grammar recursively defines the expressions and well-formed formulas of the language. In addition a semantics
may be given which defines truth and valuations (or interpretations).
The language of a propositional calculus consists of

1. a set of primitive symbols, variously referred to as atomic formulas, placeholders, proposition letters, or vari-
ables, and

2. a set of operator symbols, variously interpreted as logical operators or logical connectives.

A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of
operator symbols according to the rules of the grammar.
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propo-
sitional constants represent some particular proposition, while propositional variables range over the set of all atomic
propositions. Schemata, however, range over all propositions. It is common to represent propositional constants by

A, B, and C, propositional variables by P, Q, and R, and schemata by Greek letters, most often φ, ψ,
and χ.

32.3 Basic concepts


The following outlines a standard propositional calculus. Many different formulations exist which are all more or less
equivalent but differ in the details of:

1. their language, that is, the particular collection of primitive symbols and operator symbols,
2. the set of axioms, or distinguished formulas, and
3. the set of inference rules.

Any given proposition may be represented with a letter called a 'propositional constant', analogous to representing a
number by a letter in mathematics (for instance, a = 5). Every proposition has exactly one of two truth-values: true
or false. For example, let P be the proposition that it is raining outside. P is then true if it is raining outside and
false otherwise (in which case ¬P is true).

• We then define truth-functional operators, beginning with negation. ¬P represents the negation of P, which
can be thought of as the denial of P. In the example above, ¬P expresses that it is not raining outside, or by a
more standard reading: “It is not the case that it is raining outside.” When P is true, ¬P is false; and when P
is false, ¬P is true. ¬¬P always has the same truth-value as P.
• Conjunction is a truth-functional connective which forms a proposition out of two simpler propositions, for
example, P and Q. The conjunction of P and Q is written P ∧ Q, and expresses that both are true. We read P
∧ Q as “P and Q”. For any two propositions, there are four possible assignments of truth values:
1. P is true and Q is true
2. P is true and Q is false
3. P is false and Q is true
4. P is false and Q is false

The conjunction of P and Q is true in case 1 and is false otherwise. Where P is the proposition that it is
raining outside and Q is the proposition that a cold-front is over Kansas, P ∧ Q is true when it is raining
outside and there is a cold-front over Kansas. If it is not raining outside, then P ∧ Q is false; and if there
is no cold-front over Kansas, then P ∧ Q is false.

• Disjunction resembles conjunction in that it forms a proposition out of two simpler propositions. We write it
P ∨ Q, and it is read “P or Q”. It expresses that either P or Q is true. Thus, in the cases listed above, the
disjunction of P and Q is true in all cases except 4. Using the example above, the disjunction expresses that
it is either raining outside or there is a cold front over Kansas. (Note, this use of disjunction is supposed to
resemble the use of the English word “or”. However, it is most like the English inclusive “or”, which can be
used to express the truth of at least one of two propositions. It is not like the English exclusive “or”, which
expresses the truth of exactly one of two propositions. That is to say, the exclusive “or” is false when both P
and Q are true (case 1). An example of the exclusive or is: You may have a bagel or a pastry, but not both.
Often in natural language, given the appropriate context, the addendum “but not both” is omitted but implied.
In mathematics, however, “or” is always inclusive or; if exclusive or is meant it will be specified, possibly by
“xor”.)
• Material conditional also joins two simpler propositions, and we write P → Q, which is read “if P then Q”.
The proposition to the left of the arrow is called the antecedent and the proposition to the right is called
the consequent. (There is no such designation for conjunction or disjunction, since they are commutative
operations.) It expresses that Q is true whenever P is true. Thus it is true in every case above except case 2,
because this is the only case when P is true but Q is not. Using the example, if P then Q expresses that if it is
raining outside then there is a cold-front over Kansas. The material conditional is often confused with physical
causation. The material conditional, however, only relates two propositions by their truth-values—which is not
the relation of cause and effect. It is contentious in the literature whether the material implication represents
logical causation.

• Biconditional joins two simpler propositions, and we write P ↔ Q, which is read “P if and only if Q”. It
expresses that P and Q have the same truth-value, thus P if and only if Q is true in cases 1 and 4, and false
otherwise.

It is extremely helpful to look at the truth tables for these different operators, as well as the method of analytic tableaux.
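As an illustration (a sketch of ours, not part of the original article), these five connectives can be written directly as Python truth functions and their combined truth table printed; the function names are our own:

```python
from itertools import product

# Truth functions for the five connectives discussed above.
def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q
def IMPLIES(p, q): return (not p) or q   # material conditional
def IFF(p, q): return p == q             # biconditional

# Print a combined truth table over the four cases for P and Q.
print("P      Q      P∧Q    P∨Q    P→Q    P↔Q")
for p, q in product([True, False], repeat=2):
    row = [p, q, AND(p, q), OR(p, q), IMPLIES(p, q), IFF(p, q)]
    print("  ".join(f"{str(v):5}" for v in row))
```

The four rows printed are exactly the four cases enumerated in the discussion of conjunction above, in the same order.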

32.3.1 Closure under operations


Propositional logic is closed under truth-functional connectives. That is to say, for any proposition φ, ¬φ is also a
proposition. Likewise, for any propositions φ and ψ, φ ∧ ψ is a proposition, and similarly for disjunction, conditional,
and biconditional. This implies that, for instance, φ ∧ ψ is a proposition, and so it can be conjoined with another
proposition. In order to represent this, we need to use parentheses to indicate which proposition is conjoined with
which. For instance, P ∧ Q ∧ R is not a well-formed formula, because we do not know if we are conjoining P ∧
Q with R or if we are conjoining P with Q ∧ R. Thus we must write either (P ∧ Q) ∧ R to represent the former, or
P ∧ (Q ∧ R) to represent the latter. By evaluating the truth conditions, we see that both expressions have the same
truth conditions (will be true in the same cases), and moreover that any proposition formed by arbitrary conjunctions
will have the same truth conditions, regardless of the location of the parentheses. This means that conjunction is
associative; however, one should not assume that parentheses never serve a purpose. For instance, the sentence P ∧
(Q ∨ R) does not have the same truth conditions as (P ∧ Q) ∨ R, so they are different sentences distinguished only by
the parentheses. One can verify this by the truth-table method referenced above.
Note: For any arbitrary number of propositional constants, we can form a finite number of cases which list their
possible truth-values. A simple way to generate these is by truth-tables, in which one writes P, Q, ..., Z for any list of
k propositional constants—that is to say, any list of propositional constants with k entries. Below this list, one writes
2^k rows, and below P one fills in the first half of the rows with true (or T) and the second half with false (or F). Below
Q one fills in one-quarter of the rows with T, then one-quarter with F, then one-quarter with T and the last quarter
with F. The next column alternates between true and false for each eighth of the rows, then sixteenths, and so on,
until the last propositional constant varies between T and F for each row. This will give a complete listing of cases or
truth-value assignments possible for those propositional constants.
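The enumeration scheme just described matches the order in which Python's itertools.product yields tuples, so the 2^k rows can be generated mechanically (a sketch of ours; the helper name `assignments` is illustrative):

```python
from itertools import product

def assignments(constants):
    """Yield all 2**k truth-value assignments for k propositional constants,
    with the first constant varying slowest, as in the scheme described above."""
    for values in product([True, False], repeat=len(constants)):
        yield dict(zip(constants, values))

rows = list(assignments(["P", "Q", "R"]))
print(len(rows))    # 8, i.e. 2**3 rows
print(rows[0])      # first row: every constant assigned True
```

Note that P is True in the first half of the rows and False in the second half, Q alternates by quarters, and R by eighths, exactly as in the construction above.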

32.3.2 Argument
The propositional calculus then defines an argument to be a set of propositions. A valid argument is a set of proposi-
tions, the last of which follows from—or is implied by—the rest. All other arguments are invalid. The simplest valid
argument is modus ponens, one instance of which is the following set of propositions:

1. P → Q
2. P
∴ Q
This is a set of three propositions, each line is a proposition, and the last follows from the rest. The first two lines are
called premises, and the last line the conclusion. We say that any proposition C follows from any set of propositions
(P1 , ..., Pn ) , if C must be true whenever every member of the set (P1 , ..., Pn ) is true. In the argument above, for
any P and Q, whenever P → Q and P are true, necessarily Q is true. Notice that, when P is true, we cannot consider
cases 3 and 4 (from the truth table). When P → Q is true, we cannot consider case 2. This leaves only case 1, in
which Q is also true. Thus Q is implied by the premises.
This generalizes schematically. Thus, where φ and ψ may be any propositions at all,

1. φ → ψ
2. φ
∴ ψ
Other argument forms are convenient, but not necessary. Given a complete set of axioms (see below for one such
set), modus ponens is sufficient to prove all other argument forms in propositional logic; thus they may be considered
derivative. Note, this is not true of the extension of propositional logic to other logics like first-order logic.
First-order logic requires at least one additional rule of inference in order to obtain completeness.

The significance of argument in formal logic is that one may obtain new truths from established truths. In the first
example above, given the two premises, the truth of Q is not yet known or stated. After the argument is made, Q is
deduced. In this way, we define a deduction system to be a set of all propositions that may be deduced from another
set of propositions. For instance, given the set of propositions A = {P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R} , we can
define a deduction system, Γ, which is the set of all propositions which follow from A. Reiteration is always assumed,
so P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R ∈ Γ . Also, from the first and the last elements of A, together with modus ponens,
R is a consequence, and so R ∈ Γ . Because we have not included sufficiently complete axioms, though, nothing
else may be deduced. Thus, even though most deduction systems studied in propositional logic are able to deduce
(P ∨ Q) ↔ (¬P → Q) , this one is too weak to prove such a proposition.
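The semantic side of this example can be checked by brute force (a sketch of ours; the encoding of A as Python truth functions is illustrative): every truth assignment satisfying all of A also satisfies R, so R is a semantic consequence of A.

```python
from itertools import product

# The set A = {P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R}, encoded as truth functions.
A = [
    lambda e: e["P"] or e["Q"],                    # P ∨ Q
    lambda e: (not e["Q"]) and e["R"],             # ¬Q ∧ R
    lambda e: (not (e["P"] or e["Q"])) or e["R"],  # (P ∨ Q) → R
]

# Collect every truth assignment that satisfies all members of A.
models = []
for vals in product([True, False], repeat=3):
    e = dict(zip("PQR", vals))
    if all(f(e) for f in A):
        models.append(e)

print(all(m["R"] for m in models))   # True: R holds in every model of A
print(models)                        # a single model: P and R true, Q false
```

The second premise alone forces Q false and R true, and the first premise then forces P true, so A has exactly one model.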

32.4 Generic description of a propositional calculus


A propositional calculus is a formal system L = L (A, Ω, Z, I) , where:

• The alpha set A is a finite set of elements called proposition symbols or propositional variables. Syntactically
speaking, these are the most basic elements of the formal language L , otherwise referred to as atomic formulas
or terminal elements. In the examples to follow, the elements of A are typically the letters p, q, r, and so on.

• The omega set Ω is a finite set of elements called operator symbols or logical connectives. The set Ω is partitioned
into disjoint subsets as follows:

Ω = Ω0 ∪ Ω1 ∪ . . . ∪ Ωj ∪ . . . ∪ Ωm .

In this partition, Ωj is the set of operator symbols of arity j.

In the more familiar propositional calculi, Ω is typically partitioned as follows:

Ω1 = {¬},

Ω2 ⊆ {∧, ∨, →, ↔}.

A frequently adopted convention treats the constant logical values as operators of arity zero, thus:

Ω0 = {0, 1}.

Some writers use the tilde (~), or N, instead of ¬; and some use the ampersand (&), the prefixed K, or
· instead of ∧ . Notation varies even more for the set of logical values, with symbols like {false, true},
{F, T}, or {⊥, ⊤} all being seen in various contexts instead of {0, 1}.

• The zeta set Z is a finite set of transformation rules that are called inference rules when they acquire logical
applications.

• The iota set I is a finite set of initial points that are called axioms when they receive logical interpretations.

The language of L , also known as its set of formulas or well-formed formulas, is inductively defined by the following
rules:

1. Base: Any element of the alpha set A is a formula of L .

2. If p1 , p2 , . . . , pj are formulas and f is in Ωj , then (f (p1 , p2 , . . . , pj )) is a formula.

3. Closed: Nothing else is a formula of L .

Repeated applications of these rules permit the construction of complex formulas. For example:

1. By rule 1, p is a formula.

2. By rule 2, ¬p is a formula.

3. By rule 1, q is a formula.

4. By rule 2, (¬p ∨ q) is a formula.
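The inductive definition can be mirrored by a short recursive check (a sketch of ours, under the assumption that formulas are encoded as nested tuples; the names are illustrative):

```python
# Formulas as nested tuples: a proposition symbol, or (operator, operand, ...).
ALPHA = {"p", "q", "r", "s", "t", "u"}              # the alpha set A
ARITY = {"¬": 1, "∧": 2, "∨": 2, "→": 2, "↔": 2}    # the omega set, by arity

def is_formula(x):
    # Rule 1 (base): any element of the alpha set is a formula.
    if x in ALPHA:
        return True
    # Rule 2: (f, p1, ..., pj) is a formula if f has arity j and each pi is one.
    if isinstance(x, tuple) and x and ARITY.get(x[0]) == len(x) - 1:
        return all(is_formula(p) for p in x[1:])
    # Rule 3 (closure): nothing else is a formula.
    return False

print(is_formula(("∨", ("¬", "p"), "q")))   # True: (¬p ∨ q), built as above
print(is_formula(("∧", "p")))               # False: wrong arity for ∧
```

The closure rule corresponds to the final `return False`: anything not reached by the base or inductive clause is rejected.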

32.5 Example 1. Simple axiom system


Let L1 = L(A, Ω, Z, I) , where A , Ω , Z , I are defined as follows:

• The alpha set A , is a finite set of symbols that is large enough to supply the needs of a given discussion, for
example:

A = {p, q, r, s, t, u}.

• Of the three connectives for conjunction, disjunction, and implication ( ∧, ∨ , and →), one can be taken as
primitive and the other two can be defined in terms of it and negation (¬).[8] Indeed, all of the logical connectives
can be defined in terms of a sole sufficient operator. The biconditional (↔) can of course be defined in terms
of conjunction and implication, with a ↔ b defined as (a → b) ∧ (b → a) .

Ω = Ω1 ∪ Ω 2

Ω1 = {¬},

Ω2 = {→}.

• An axiom system discovered by Jan Łukasiewicz formulates a propositional calculus in this language as follows.
The axioms are all substitution instances of:

• (p → (q → p))

• ((p → (q → r)) → ((p → q) → (p → r)))

• ((¬p → ¬q) → (q → p))

• The rule of inference is modus ponens (i.e., from p and (p → q) , infer q). Then a ∨ b is defined as ¬a → b ,
and a ∧ b is defined as ¬(a → ¬b) . This system is used in the Metamath set.mm formal proof database.
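Assuming the standard truth-functional semantics, one can spot-check (a sketch of ours) that these definitions of ∨ and ∧ agree with the usual truth tables, and that the three axiom schemata are tautologies:

```python
from itertools import product

def imp(a, b):
    # Material conditional as a truth function.
    return (not a) or b

# Definitions used in Łukasiewicz's system: a ∨ b := ¬a → b, a ∧ b := ¬(a → ¬b).
def or_(a, b):  return imp(not a, b)
def and_(a, b): return not imp(a, not b)

# The defined connectives match the usual truth tables.
for a, b in product([True, False], repeat=2):
    assert or_(a, b) == (a or b)
    assert and_(a, b) == (a and b)

# The three axiom schemata hold under every assignment of p, q, r.
for p, q, r in product([True, False], repeat=3):
    assert imp(p, imp(q, p))
    assert imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))
    assert imp(imp(not p, not q), imp(q, p))

print("all checks passed")
```

That the axioms are tautologies is exactly what Basis Step II of the soundness sketch in section 32.9.1 requires.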

32.6 Example 2. Natural deduction system


Let L2 = L(A, Ω, Z, I) , where A , Ω , Z , I are defined as follows:

• The alpha set A , is a finite set of symbols that is large enough to supply the needs of a given discussion, for
example:

A = {p, q, r, s, t, u}.

• The omega set Ω = Ω1 ∪ Ω2 partitions as follows:

Ω1 = {¬},

Ω2 = {∧, ∨, →, ↔}.

In the following example of a propositional calculus, the transformation rules are intended to be interpreted as the
inference rules of a so-called natural deduction system. The particular system presented here has no initial points,
which means that its interpretation for logical applications derives its theorems from an empty axiom set.

• The set of initial points is empty, that is, I = ∅ .


• The set of transformation rules, Z , is described as follows:

Our propositional calculus has ten inference rules. These rules allow us to derive other true formulas given a set of
formulas that are assumed to be true. The first nine simply state that we can infer certain well-formed formulas from
other well-formed formulas. The last rule however uses hypothetical reasoning in the sense that in the premise of the
rule we temporarily assume an (unproven) hypothesis to be part of the set of inferred formulas to see if we can infer
a certain other formula. Since the first nine rules don't do this they are usually described as non-hypothetical rules,
and the last one as a hypothetical rule.
In describing the transformation rules, we may introduce a metalanguage symbol ⊢ . It is basically a convenient
shorthand for saying “infer that”. The format is Γ ⊢ ψ , in which Γ is a (possibly empty) set of formulas called
premises, and ψ is a formula called the conclusion. The transformation rule Γ ⊢ ψ means that if every proposition in Γ is
a theorem (or has the same truth value as the axioms), then ψ is also a theorem. Note that, by the rule of Conjunction
introduction below, whenever Γ has more than one formula we can always safely reduce it to a single formula using
conjunction; so, for short, we may represent Γ as one formula instead of a set. Another convention, adopted for
convenience, is that when Γ is empty it may be omitted altogether.

Negation introduction From (p → q) and (p → ¬q) , infer ¬p .


That is, {(p → q), (p → ¬q)} ⊢ ¬p .
Negation elimination From ¬p , infer (p → r) .
That is, {¬p} ⊢ (p → r) .
Double negative elimination From ¬¬p , infer p.
That is, ¬¬p ⊢ p .
Conjunction introduction From p and q, infer (p ∧ q) .
That is, {p, q} ⊢ (p ∧ q) .
Conjunction elimination From (p ∧ q) , infer p.
From (p ∧ q) , infer q.
That is, (p ∧ q) ⊢ p and (p ∧ q) ⊢ q .
Disjunction introduction From p, infer (p ∨ q) .
From q, infer (p ∨ q) .

That is, p ⊢ (p ∨ q) and q ⊢ (p ∨ q) .


Disjunction elimination From (p ∨ q) and (p → r) and (q → r) , infer r.
That is, {p ∨ q, p → r, q → r} ⊢ r .
Biconditional introduction From (p → q) and (q → p) , infer (p ↔ q) .
That is, {p → q, q → p} ⊢ (p ↔ q) .
Biconditional elimination From (p ↔ q) , infer (p → q) .
From (p ↔ q) , infer (q → p) .
That is, (p ↔ q) ⊢ (p → q) and (p ↔ q) ⊢ (q → p) .
Modus ponens (conditional elimination) From p and (p → q) , infer q.
That is, {p, p → q} ⊢ q .
Conditional proof (conditional introduction) From [accepting p allows a proof of q], infer (p → q) .
That is, (p ⊢ q) ⊢ (p → q) .
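Each of these rules is truth-preserving: its premises semantically entail its conclusion. This can be spot-checked by brute force (a sketch of ours; the encoding of rules as lambdas is illustrative):

```python
from itertools import product

def imp(p, q):
    # Material conditional as a truth function.
    return (not p) or q

def valid(premises, conclusion, names):
    """True iff every truth assignment satisfying all premises satisfies the conclusion."""
    for values in product([True, False], repeat=len(names)):
        e = dict(zip(names, values))
        if all(p(e) for p in premises) and not conclusion(e):
            return False
    return True

# Disjunction elimination: {p ∨ q, p → r, q → r} ⊢ r
print(valid([lambda e: e["p"] or e["q"],
             lambda e: imp(e["p"], e["r"]),
             lambda e: imp(e["q"], e["r"])],
            lambda e: e["r"], ["p", "q", "r"]))     # True

# Negation introduction: {(p → q), (p → ¬q)} ⊢ ¬p
print(valid([lambda e: imp(e["p"], e["q"]),
             lambda e: imp(e["p"], not e["q"])],
            lambda e: not e["p"], ["p", "q"]))      # True
```

Checking every rule this way is, in miniature, the case-by-case analysis of the soundness proof sketched in section 32.9.1.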

32.7 Basic and derived argument forms

32.8 Proofs in propositional calculus


One of the main uses of a propositional calculus, when interpreted for logical applications, is to determine relations
of logical equivalence between propositional formulas. These relationships are determined by means of the available
transformation rules, sequences of which are called derivations or proofs.
In the discussion to follow, a proof is presented as a sequence of numbered lines, with each line consisting of a single
formula followed by a reason or justification for introducing that formula. Each premise of the argument, that is, an
assumption introduced as a hypothesis of the argument, is listed at the beginning of the sequence and is marked as
a “premise” in lieu of other justification. The conclusion is listed on the last line. A proof is complete if every line
follows from the previous ones by the correct application of a transformation rule. (For a contrasting approach, see
proof-trees).

32.8.1 Example of a proof


• To be shown that A → A.

• One possible proof of this (which, though valid, happens to contain more steps than are necessary) may be
arranged as follows:

1. A (premise)
2. A ∨ A (Disjunction introduction, from 1)
3. (A ∨ A) ∧ A (Conjunction introduction, from 1 and 2)
4. A (Conjunction elimination, from 3)
5. A ⊢ A (summary of 1 through 4)
6. ⊢ A → A (Conditional proof, from 5)

Interpret A ⊢ A as “Assuming A, infer A”. Read ⊢ A → A as “Assuming nothing, infer that A implies A”, or “It is
a tautology that A implies A”, or “It is always true that A implies A”.

32.9 Soundness and completeness of the rules


The crucial properties of this set of rules are that they are sound and complete. Informally this means that the rules
are correct and that no other rules are required. These claims can be made more formal as follows.
We define a truth assignment as a function that maps propositional variables to true or false. Informally such a
truth assignment can be understood as the description of a possible state of affairs (or possible world) where certain
statements are true and others are not. The semantics of formulas can then be formalized by defining for which “state
of affairs” they are considered to be true, which is what is done by the following definition.
We define when such a truth assignment A satisfies a certain well-formed formula with the following rules:

• A satisfies the propositional variable P if and only if A(P) = true

• A satisfies ¬φ if and only if A does not satisfy φ

• A satisfies (φ ∧ ψ) if and only if A satisfies both φ and ψ

• A satisfies (φ ∨ ψ) if and only if A satisfies at least one of either φ or ψ

• A satisfies (φ → ψ) if and only if it is not the case that A satisfies φ but not ψ

• A satisfies (φ ↔ ψ) if and only if A satisfies both φ and ψ or satisfies neither one of them
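The six clauses above translate directly into a recursive evaluator (a sketch of ours, encoding formulas as nested tuples; the function name `satisfies` is illustrative):

```python
def satisfies(A, phi):
    """Return True iff truth assignment A (a dict) satisfies formula phi.
    Formulas are variables (strings) or tuples (connective, subformula, ...)."""
    if isinstance(phi, str):                    # propositional variable
        return A[phi]
    op, *args = phi
    if op == "¬":
        return not satisfies(A, args[0])
    a, b = satisfies(A, args[0]), satisfies(A, args[1])
    if op == "∧": return a and b
    if op == "∨": return a or b
    if op == "→": return (not a) or b
    if op == "↔": return a == b
    raise ValueError(f"unknown connective: {op}")

# (P → Q) under P = true, Q = false: not satisfied (case 2 above).
print(satisfies({"P": True, "Q": False}, ("→", "P", "Q")))   # False
```

Each `if` branch corresponds to one satisfaction clause; semantic entailment, as defined next, quantifies this check over all truth assignments.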

With this definition we can now formalize what it means for a formula φ to be implied by a certain set S of formulas.
Informally this is true if in all worlds that are possible given the set of formulas S the formula φ also holds. This
leads to the following formal definition: We say that a set S of well-formed formulas semantically entails (or implies)
a certain well-formed formula φ if all truth assignments that satisfy all the formulas in S also satisfy φ.
Finally we define syntactical entailment such that φ is syntactically entailed by S if and only if we can derive it with
the inference rules that were presented above in a finite number of steps. This allows us to formulate exactly what it
means for the set of inference rules to be sound and complete:
Soundness: If the set of well-formed formulas S syntactically entails the well-formed formula φ then S semantically
entails φ.
Completeness: If the set of well-formed formulas S semantically entails the well-formed formula φ then S syntac-
tically entails φ.
For the above set of rules this is indeed the case.

32.9.1 Sketch of a soundness proof


(For most logical systems, this is the comparatively “simple” direction of proof)
Notational conventions: Let G be a variable ranging over sets of sentences. Let A, B and C range over sentences. For
“G syntactically entails A” we write “G proves A”. For “G semantically entails A” we write “G implies A”.
We want to show: for every A and every G, if G proves A, then G implies A.
We note that “G proves A” has an inductive definition, and that gives us the immediate resources for demonstrating
claims of the form “If G proves A, then ...”. So our proof proceeds by induction.

1. Basis. Show: If A is a member of G, then G implies A.

2. Basis. Show: If A is an axiom, then G implies A.

3. Inductive step (induction on n, the length of the proof):

(a) Assume for arbitrary G and A that if G proves A in n or fewer steps, then G implies A.
(b) For each possible application of a rule of inference at step n + 1, leading to a new theorem B, show that
G implies B.

Notice that Basis Step II can be omitted for natural deduction systems because they have no axioms. When used,
Step II involves showing that each of the axioms is a (semantic) logical truth.
The Basis steps demonstrate that the simplest provable sentences from G are also implied by G, for any G. (The
proof is simple, since the semantic fact that a set implies any of its members, is also trivial.) The Inductive step will
systematically cover all the further sentences that might be provable—by considering each case where we might reach
a logical conclusion using an inference rule—and shows that if a new sentence is provable, it is also logically implied.
(For example, we might have a rule telling us that from “A” we can derive “A or B”. In III(a) we assume that if A is
provable it is implied. We also know that if A is provable then “A or B” is provable. We have to show that then “A
or B” too is implied. We do so by appeal to the semantic definition and the assumption we just made. A is provable
from G, we assume. So it is also implied by G. So any semantic valuation making all of G true makes A true. But
any valuation making A true makes “A or B” true, by the defined semantics for “or”. So any valuation which makes

all of G true makes “A or B” true. So “A or B” is implied.) Generally, the Inductive step will consist of a lengthy but
simple case-by-case analysis of all the rules of inference, showing that each “preserves” semantic implication.
By the definition of provability, there are no sentences provable other than by being a member of G, an axiom, or
following by a rule; so if all of those are semantically implied, the deduction calculus is sound.

32.9.2 Sketch of completeness proof


(This is usually the much harder direction of proof.)
We adopt the same notational conventions as above.
We want to show: If G implies A, then G proves A. We proceed by contraposition: We show instead that if G does
not prove A then G does not imply A.

1. G does not prove A. (Assumption)


2. If G does not prove A, then we can construct an (infinite) Maximal Set, G∗ , which is a superset of G and
which also does not prove A.

(a) Place an ordering on all the sentences in the language (e.g., shortest first, and equally long ones in extended
alphabetical ordering), and number them (E1, E2, ...).
(b) Define a sequence of sets (G0, G1, ...) inductively:

i. G0 = G
ii. If Gk ∪ {Ek+1} proves A, then Gk+1 = Gk
iii. If Gk ∪ {Ek+1} does not prove A, then Gk+1 = Gk ∪ {Ek+1}
(c) Define G∗ as the union of all the Gk. (That is, G∗ is the set of all the sentences that are in any Gk.)
(d) It can be easily shown that

i. G∗ contains (is a superset of) G (by (b.i));


ii. G∗ does not prove A (because if it proved A, then some sentence added to some Gk would have caused
it to prove A; but this was ruled out by definition); and
iii. G∗ is a Maximal Set with respect to A: if any more sentences whatever were added to G∗, it would
prove A. (Because if it were possible to add any more sentences, they would have been added when
they were encountered during the construction of the Gk, again by definition.)
3. If G∗ is a Maximal Set with respect to A, then it is truth-like. This means that it contains C only if it does not
contain ¬C; If it contains C and contains “If C then B” then it also contains B; and so forth.
4. If G∗ is truth-like there is a G∗ -Canonical valuation of the language: one that makes every sentence in G∗ true
and everything outside G∗ false while still obeying the laws of semantic composition in the language.
5. A G∗ -canonical valuation will make our original set G all true, and make A false.
6. If there is a valuation on which G are true and A is false, then G does not (semantically) imply A.

QED

32.9.3 Another outline for a completeness proof


If a formula is a tautology, then there is a truth table for it showing that every valuation yields the value true for the
formula. Consider such a valuation. By mathematical induction on the length of the subformulas, show that the truth
or falsity of each subformula follows from the truth or falsity (as appropriate for the valuation) of each propositional
variable in the subformula. Then combine the lines of the truth table together two at a time by using "(P is true implies
S) implies ((P is false implies S) implies S)". Keep repeating this until all dependencies on propositional variables
have been eliminated. The result is that we have proved the given tautology. Since every tautology is provable, the
logic is complete.
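This brute-force use of truth tables can be sketched in code. Here formulas are modelled as Boolean functions, and the function names are illustrative rather than taken from any standard library:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Evaluate the formula under every valuation of its variables;
    it is a tautology iff every line of the truth table is true."""
    return all(formula(*valuation)
               for valuation in product([True, False], repeat=num_vars))

# (P -> Q) -> (~Q -> ~P): contraposition, a classical tautology
implies = lambda p, q: (not p) or q
contraposition = lambda p, q: implies(implies(p, q), implies(not q, not p))

print(is_tautology(contraposition, 2))               # True
print(is_tautology(lambda p, q: implies(p, q), 2))   # False (fails at P=T, Q=F)
```

Enumerating all 2^n valuations is exactly the truth-table method; the proof outline above shows how that semantic check can be turned into a syntactic derivation.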

32.10 Interpretation of a truth-functional propositional calculus


An interpretation of a truth-functional propositional calculus P is an assignment to each propositional symbol of
P of one or the other (but not both) of the truth values truth (T) and falsity (F), and an assignment to the connective
symbols of P of their usual truth-functional meanings. An interpretation of a truth-functional propositional calculus
may also be expressed in terms of truth tables.[10]
For n distinct propositional symbols there are 2^n distinct possible interpretations. For any particular symbol a , for
example, there are 2^1 = 2 possible interpretations:

1. a is assigned T, or
2. a is assigned F.

For the pair a , b there are 2^2 = 4 possible interpretations:

1. both are assigned T,


2. both are assigned F,
3. a is assigned T and b is assigned F, or
4. a is assigned F and b is assigned T.[10]

Since P has ℵ0 , that is, denumerably many propositional symbols, there are 2^ℵ0 = c , and therefore uncountably
many distinct possible interpretations of P .[10]
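The 2^n count can be confirmed by enumerating the interpretations directly; a small sketch (the helper name is invented):

```python
from itertools import product

def interpretations(symbols):
    """Every assignment of a truth value (True/False) to each propositional symbol."""
    return [dict(zip(symbols, values))
            for values in product([True, False], repeat=len(symbols))]

print(len(interpretations(["a"])))        # 2
print(len(interpretations(["a", "b"])))   # 4
```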

32.10.1 Interpretation of a sentence of truth-functional propositional logic


Main article: Interpretation (logic)

If φ and ψ are formulas of P and I is an interpretation of P then:

• A sentence of propositional logic is true under an interpretation I iff I assigns the truth value T to that sentence.
If a sentence is true under an interpretation, then that interpretation is called a model of that sentence.
• φ is false under an interpretation I iff φ is not true under I .[10]
• A sentence of propositional logic is logically valid if it is true under every interpretation.

|= ϕ means that φ is logically valid

• A sentence ψ of propositional logic is a semantic consequence of a sentence φ iff there is no interpretation


under which φ is true and ψ is false.
• A sentence of propositional logic is consistent iff it is true under at least one interpretation. It is inconsistent if
it is not consistent.

Some consequences of these definitions:

• For any given interpretation a given formula is either true or false.[10]


• No formula is both true and false under the same interpretation.[10]
• φ is false for a given interpretation iff ¬ϕ is true for that interpretation; and φ is true under an interpretation
iff ¬ϕ is false under that interpretation.[10]
• If φ and (ϕ → ψ) are both true under a given interpretation, then ψ is true under that interpretation.[10]
• If |=P ϕ and |=P (ϕ → ψ) , then |=P ψ .[10]

• ¬ϕ is true under I iff φ is not true under I .

• (ϕ → ψ) is true under I iff either φ is not true under I or ψ is true under I .[10]

• A sentence ψ of propositional logic is a semantic consequence of a sentence φ iff (ϕ → ψ) is logically valid,


that is, ϕ |=P ψ iff |=P (ϕ → ψ) .[10]
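For formulas represented as Boolean functions, the last equivalence (semantic consequence versus validity of the conditional) can be checked by brute force over all valuations. This sketch fixes two propositional symbols and uses invented helper names:

```python
from itertools import product

def valuations(n):
    return product([True, False], repeat=n)

def semantic_consequence(phi, psi, n):
    """psi is a semantic consequence of phi iff no valuation
    makes phi true and psi false."""
    return all(not (phi(*v) and not psi(*v)) for v in valuations(n))

def logically_valid(chi, n):
    return all(chi(*v) for v in valuations(n))

phi = lambda p, q: p and q                                # ϕ = P ∧ Q
psi = lambda p, q: p                                      # ψ = P
conditional = lambda p, q: (not phi(p, q)) or psi(p, q)   # ϕ → ψ

print(semantic_consequence(phi, psi, 2))  # True
print(logically_valid(conditional, 2))    # True, as the equivalence predicts
```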

32.11 Alternative calculus


It is possible to define another version of propositional calculus, which defines most of the syntax of the logical
operators by means of axioms, and which uses only one inference rule.

32.11.1 Axioms

Let φ, χ, and ψ stand for well-formed formulas. (The well-formed formulas themselves would not contain any Greek
letters, but only capital Roman letters, connective operators, and parentheses.) Then the axioms are as follows:

• THEN-1: φ → (χ → φ)
• THEN-2: (φ → (χ → ψ)) → ((φ → χ) → (φ → ψ))
• AND-1: φ ∧ χ → φ
• AND-2: φ ∧ χ → χ
• AND-3: φ → (χ → (φ ∧ χ))
• OR-1: φ → φ ∨ χ
• OR-2: χ → φ ∨ χ
• OR-3: (φ → ψ) → ((χ → ψ) → (φ ∨ χ → ψ))
• NOT-1: (φ → χ) → ((φ → ¬χ) → ¬φ)
• NOT-2: φ → (¬φ → χ)
• NOT-3: φ ∨ ¬φ

• Axiom THEN-2 may be considered to be a “distributive property of implication with respect to implication.”

• Axioms AND-1 and AND-2 correspond to “conjunction elimination”. The relation between AND-1 and AND-
2 reflects the commutativity of the conjunction operator.

• Axiom AND-3 corresponds to “conjunction introduction.”

• Axioms OR-1 and OR-2 correspond to “disjunction introduction.” The relation between OR-1 and OR-2 re-
flects the commutativity of the disjunction operator.

• Axiom NOT-1 corresponds to “reductio ad absurdum.”

• Axiom NOT-2 says that “anything can be deduced from a contradiction.”

• Axiom NOT-3 is called "tertium non datur" (Latin: “a third is not given”) and reflects the semantic valuation
of propositional formulas: a formula can have a truth-value of either true or false. There is no third truth-value,
at least not in classical logic. Intuitionistic logicians do not accept the axiom NOT-3.

32.11.2 Inference rule

The inference rule is modus ponens:

ϕ, ϕ → χ ⊢ χ

32.11.3 Meta-inference rule

Let a demonstration be represented by a sequence, with hypotheses to the left of the turnstile and the conclusion to
the right of the turnstile. Then the deduction theorem can be stated as follows:

If the sequence

ϕ1 , ϕ2 , ..., ϕn , χ ⊢ ψ

has been demonstrated, then it is also possible to demonstrate the sequence

ϕ1 , ϕ2 , ..., ϕn ⊢ χ → ψ

This deduction theorem (DT) is not itself formulated within the propositional calculus: it is not a theorem of propositional
calculus, but a theorem about propositional calculus. In this sense, it is a meta-theorem, comparable to theorems
about the soundness or completeness of propositional calculus.
On the other hand, DT is so useful for simplifying the syntactical proof process that it can be considered and used as
another inference rule, accompanying modus ponens. In this sense, DT corresponds to the natural conditional proof
inference rule which is part of the first version of propositional calculus introduced in this article.
The converse of DT is also valid:

If the sequence

ϕ1 , ϕ2 , ..., ϕn ⊢ χ → ψ

has been demonstrated, then it is also possible to demonstrate the sequence

ϕ1 , ϕ2 , ..., ϕn , χ ⊢ ψ

In fact, the validity of the converse of DT is almost trivial compared to that of DT:

If

ϕ1 , ..., ϕn ⊢ χ → ψ

then

ϕ1 , ..., ϕn , χ ⊢ χ → ψ     (1)

ϕ1 , ..., ϕn , χ ⊢ χ     (2)
and from (1) and (2) can be deduced

ϕ1 , ..., ϕn , χ ⊢ ψ

by means of modus ponens, Q.E.D.

The converse of DT has powerful implications: it can be used to convert an axiom into an inference rule. For example,
the axiom AND-1,

⊢ϕ∧χ→ϕ

can be transformed by means of the converse of the deduction theorem into the inference rule

ϕ∧χ⊢ϕ

which is conjunction elimination, one of the ten inference rules used in the first version (in this article) of the propo-
sitional calculus.

32.11.4 Example of a proof


The following is an example of a (syntactical) demonstration, involving only axioms THEN-1 and THEN-2:
Prove: A → A (Reflexivity of implication).
Proof:

1. (A → ((B → A) → A)) → ((A → (B → A)) → (A → A))

ϕ = A, χ = B → A, ψ = A

2. A → ((B → A) → A)

ϕ = A, χ = B → A

3. (A → (B → A)) → (A → A)

From (1) and (2) by modus ponens.

4. A → (B → A)

ϕ = A, χ = B

5. A → A

From (3) and (4) by modus ponens.
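The demonstration above can be replayed mechanically. The sketch below encodes formulas as nested tuples and checks each modus ponens step; the representation and names are invented for illustration:

```python
IMP = '->'   # formulas: atoms are strings, implications are ('->', a, b)

def modus_ponens(premise, conditional):
    """From phi and (phi -> chi), conclude chi; fail if the shapes differ."""
    assert conditional[0] == IMP and conditional[1] == premise
    return conditional[2]

A, BA = 'A', (IMP, 'B', 'A')                       # A and (B -> A)

step1 = (IMP, (IMP, A, (IMP, BA, A)),
              (IMP, (IMP, A, BA), (IMP, A, A)))    # THEN-2 instance
step2 = (IMP, A, (IMP, BA, A))                     # THEN-1 instance
step3 = modus_ponens(step2, step1)                 # (A -> (B -> A)) -> (A -> A)
step4 = (IMP, A, BA)                               # THEN-1 instance
step5 = modus_ponens(step4, step3)

print(step5)   # ('->', 'A', 'A'), i.e. A -> A
```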

32.12 Equivalence to equational logics


The preceding alternative calculus is an example of a Hilbert-style deduction system. In the case of propositional
systems the axioms are terms built with logical connectives and the only inference rule is modus ponens. Equational
logic as standardly used informally in high school algebra is a different kind of calculus from Hilbert systems. Its
theorems are equations and its inference rules express the properties of equality, namely that it is a congruence on
terms that admits substitution.
Classical propositional calculus as described above is equivalent to Boolean algebra, while intuitionistic propositional
calculus is equivalent to Heyting algebra. The equivalence is shown by translation in each direction of the theorems
of the respective systems. Theorems ϕ of classical or intuitionistic propositional calculus are translated as equations
ϕ = 1 of Boolean or Heyting algebra respectively. Conversely theorems x = y of Boolean or Heyting algebra are
translated as theorems (x → y) ∧ (y → x) of classical or intuitionistic calculus respectively, for which x ≡ y is a
standard abbreviation. In the case of Boolean algebra x = y can also be translated as (x ∧ y) ∨ (¬x ∧ ¬y) , but this
translation is incorrect intuitionistically.
In both Boolean and Heyting algebra, inequality x ≤ y can be used in place of equality. The equality x = y is
expressible as a pair of inequalities x ≤ y and y ≤ x . Conversely the inequality x ≤ y is expressible as the equality
x ∧ y = x , or as x ∨ y = y . The significance of inequality for Hilbert-style systems is that it corresponds to the
latter’s deduction or entailment symbol ⊢ . An entailment

ϕ1 , ϕ2 , . . . , ϕn ⊢ ψ

is translated in the inequality version of the algebraic framework as

ϕ1 ∧ ϕ2 ∧ . . . ∧ ϕn ≤ ψ

Conversely the algebraic inequality x ≤ y is translated as the entailment

x ⊢ y
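In the two-element Boolean algebra these translations can be verified exhaustively: x ≤ y coincides with x ∧ y = x and with x ∨ y = y. A minimal check:

```python
from itertools import product

# In the Boolean algebra {0, 1}: meet is &, join is |, the order is <=.
for x, y in product([0, 1], repeat=2):
    assert (x <= y) == ((x & y) == x) == ((x | y) == y)

print("x <= y, x & y == x, and x | y == y agree on {0, 1}")
```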

The difference between implication x → y and inequality or entailment x ≤ y or x ⊢ y is that the former is internal
to the logic while the latter is external. Internal implication between two terms is another term of the same kind.
Entailment as external implication between two terms expresses a metatruth outside the language of the logic, and
is considered part of the metalanguage. Even when the logic under study is intuitionistic, entailment is ordinarily
understood classically as two-valued: either the left side entails, or is less-or-equal to, the right side, or it is not.

Similar but more complex translations to and from algebraic logics are possible for natural deduction systems as
described above and for the sequent calculus. The entailments of the latter can be interpreted as two-valued, but a
more insightful interpretation is as a set, the elements of which can be understood as abstract proofs organized as
the morphisms of a category. In this interpretation the cut rule of the sequent calculus corresponds to composition
in the category. Boolean and Heyting algebras enter this picture as special categories having at most one morphism
per homset, i.e., one proof per entailment, corresponding to the idea that existence of proofs is all that matters: any
proof will do and there is no point in distinguishing them.

32.13 Graphical calculi

It is possible to generalize the definition of a formal language from a set of finite sequences over a finite basis to include
many other sets of mathematical structures, so long as they are built up by finitary means from finite materials. What’s
more, many of these families of formal structures are especially well-suited for use in logic.
For example, there are many families of graphs that are close enough analogues of formal languages that the concept
of a calculus is quite easily and naturally extended to them. Indeed, many species of graphs arise as parse graphs
in the syntactic analysis of the corresponding families of text structures. The exigencies of practical computation on
formal languages frequently demand that text strings be converted into pointer structure renditions of parse graphs,
simply as a matter of checking whether strings are well-formed formulas or not. Once this is done, there are many
advantages to be gained from developing the graphical analogue of the calculus on strings. The mapping from strings
to parse graphs is called parsing and the inverse mapping from parse graphs to strings is achieved by an operation
that is called traversing the graph.

32.14 Other logical calculi

Propositional calculus is about the simplest kind of logical calculus in current use. It can be extended in several ways.
(Aristotelian “syllogistic” calculus, which is largely supplanted in modern logic, is in some ways simpler – but in other
ways more complex – than propositional calculus.) The most immediate way to develop a more complex logical
calculus is to introduce rules that are sensitive to more fine-grained details of the sentences being used.
First-order logic (a.k.a. first-order predicate logic) results when the “atomic sentences” of propositional logic are
broken up into terms, variables, predicates, and quantifiers, all keeping the rules of propositional logic with some
new ones introduced. (For example, from “All dogs are mammals” we may infer “If Rover is a dog then Rover is
a mammal”.) With the tools of first-order logic it is possible to formulate a number of theories, either with explicit
axioms or by rules of inference, that can themselves be treated as logical calculi. Arithmetic is the best known of these;
others include set theory and mereology. Second-order logic and other higher-order logics are formal extensions of
first-order logic. Thus, it makes sense to refer to propositional logic as “zeroth-order logic”, when comparing it with
these logics.
Modal logic also offers a variety of inferences that cannot be captured in propositional calculus. For example, from
“Necessarily p” we may infer that p. From p we may infer “It is possible that p”. The translation between modal
logics and algebraic logics concerns classical and intuitionistic logics but with the introduction of a unary operator on
Boolean or Heyting algebras, different from the Boolean operations, interpreting the possibility modality, and in the
case of Heyting algebra a second operator interpreting necessity (for Boolean algebra this is redundant since necessity
is the De Morgan dual of possibility). The first operator preserves 0 and disjunction while the second preserves 1 and
conjunction.
Many-valued logics are those allowing sentences to have values other than true and false. (For example, neither and
both are standard “extra values"; “continuum logic” allows each sentence to have any of an infinite number of “degrees
of truth” between true and false.) These logics often require calculational devices quite distinct from propositional
calculus. When the values form a Boolean algebra (which may have more than two or even infinitely many values),
many-valued logic reduces to classical logic; many-valued logics are therefore only of independent interest when the
values form an algebra that is not Boolean.

32.15 Solvers
Determining whether a propositional logic formula is satisfiable (the Boolean satisfiability problem, SAT) is NP-complete.
However, practical methods exist (e.g., the DPLL algorithm, 1962; the Chaff algorithm, 2001) that are very fast for many
useful cases. Recent work has extended the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers.
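A toy version of the DPLL idea fits in a few lines. The sketch below does only case-splitting (real DPLL adds unit propagation and pure-literal elimination); the clause encoding is the usual one for SAT solvers, positive integers for variables and negative ones for negated variables:

```python
def dpll(clauses):
    """Satisfiability of a CNF formula by splitting. Clauses are lists
    of nonzero ints: positive = variable, negative = its negation."""
    if not clauses:
        return True                  # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return False                 # an empty clause cannot be satisfied
    lit = clauses[0][0]              # branch on the first literal
    for choice in (lit, -lit):
        # drop satisfied clauses, delete the now-false literal elsewhere
        simplified = [[l for l in c if l != -choice]
                      for c in clauses if choice not in c]
        if dpll(simplified):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2)  ->  unsatisfiable
print(dpll([[1, 2], [-1, 2], [-2]]))   # False
print(dpll([[1, 2], [-1]]))            # True (x1 false, x2 true)
```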

32.16 See also

32.16.1 Higher logical levels


• First-order logic
• Second-order propositional logic
• Second-order logic
• Higher-order logic

32.16.2 Related topics

32.17 References
[1] Ancient Logic (Stanford Encyclopedia of Philosophy)
[2] Marenbon, John (2007). Medieval philosophy: an historical and philosophical introduction. Routledge. p. 137.
[3] Leibniz’s Influence on 19th Century Logic
[4] Hurley, Patrick (2007). A Concise Introduction to Logic 10th edition. Wadsworth Publishing. p. 392.
[5] Beth, Evert W.; “Semantic entailment and formal derivability”, series: Mededlingen van de Koninklijke Nederlandse
Akademie van Wetenschappen, Afdeling Letterkunde, Nieuwe Reeks, vol. 18, no. 13, Noord-Hollandsche Uitg. Mij.,
Amsterdam, 1955, pp. 309–42. Reprinted in Jaakko Hintikka (ed.) The Philosophy of Mathematics, Oxford University
Press, 1969
[6] Truth in Frege
[7] Russell’s Use of Truth-Tables
[8] Wernick, William (1942) “Complete Sets of Logical Functions,” Transactions of the American Mathematical Society 51,
pp. 117–132.
[9] Toida, Shunichi (2 August 2009). “Proof of Implications”. CS381 Discrete Structures/Discrete Mathematics Web Course
Material. Department Of Computer Science, Old Dominion University. Retrieved 10 March 2010.
[10] Hunter, Geoffrey (1971). Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of
California Press. ISBN 0-520-02356-0.

32.18 Further reading


• Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer
Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY.
• Chang, C.C. and Keisler, H.J. (1973), Model Theory, North-Holland, Amsterdam, Netherlands.
• Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw–Hill, 1970. 2nd edition,
McGraw–Hill, 1978.
• Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.
• Lambek, J. and Scott, P.J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press,
Cambridge, UK.
• Mendelson, Elliot (1964), Introduction to Mathematical Logic, D. Van Nostrand Company.

32.18.1 Related works


• Hofstadter, Douglas (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. ISBN 978-0-465-
02656-2.

32.19 External links


• Klement, Kevin C. (2006), “Propositional Logic”, in James Fieser and Bradley Dowden (eds.), Internet Ency-
clopedia of Philosophy, Eprint.

• Formal Predicate Calculus, contains a systematic formal development along the lines of Alternative calculus

• forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for sen-
tential logic.

• Category:Propositional Calculus on ProofWiki (GFDLed)


• An Outline of Propositional Logic
Chapter 33

Recursively enumerable set

“Enumerable set” redirects here. For the set-theoretic concept, see Countable set.

In computability theory, traditionally called recursion theory, a set S of natural numbers is called recursively enu-
merable, computably enumerable, semidecidable, provable or Turing-recognizable if:

• There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S.

Or, equivalently,

• There is an algorithm that enumerates the members of S. That means that its output is simply a list of the
members of S: s1 , s2 , s3 , ... . If necessary, this algorithm may run forever.

The first condition suggests why the term semidecidable is sometimes used; the second suggests why computably
enumerable is used. The abbreviations r.e. and c.e. are often used, even in print, instead of the full phrase.
In computational complexity theory, the complexity class containing all recursively enumerable sets is RE. In recursion
theory, the lattice of r.e. sets under inclusion is denoted E .

33.1 Formal definition


A set S of natural numbers is called recursively enumerable if there is a partial recursive function whose domain is
exactly S, meaning that the function is defined if and only if its input is a member of S.

33.2 Equivalent formulations


The following are all equivalent properties of a set S of natural numbers:

Semidecidability:

• The set S is recursively enumerable. That is, S is the domain (co-range) of a partial recursive
function.
• There is a partial recursive function f such that:
f (x) = 1 if x ∈ S, and f (x) is undefined (does not halt) if x ∉ S
Enumerability:

• The set S is the range of a partial recursive function.


• The set S is the range of a total recursive function or empty. If S is infinite, the function can be
chosen to be injective.
• The set S is the range of a primitive recursive function or empty. Even if S is infinite, repetition of
values may be necessary in this case.
Diophantine:
• There is a polynomial p with integer coefficients and variables x, a, b, c, d, e, f, g, h, i ranging over
the natural numbers such that
x ∈ S ⇔ ∃a, b, c, d, e, f, g, h, i (p(x, a, b, c, d, e, f, g, h, i) = 0).
• There is a polynomial from the integers to the integers such that the set S contains exactly the
non-negative numbers in its range.

The equivalence of semidecidability and enumerability can be obtained by the technique of dovetailing.
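Dovetailing can be illustrated concretely: run the semidecision procedure on every input for a bounded number of steps per round, increasing the bound each round, and emit each input as soon as its run halts. The step-bounded runner `halts_within` is an assumption of this sketch:

```python
def enumerate_re_set(halts_within, max_rounds=50):
    """Enumerate {x : the procedure halts on x} by dovetailing.
    halts_within(x, steps) answers whether the procedure on input x
    halts within the given number of steps."""
    seen = set()
    out = []
    for rounds in range(1, max_rounds + 1):
        for x in range(rounds):           # inputs 0 .. rounds-1 this round
            if x not in seen and halts_within(x, rounds):
                seen.add(x)
                out.append(x)
    return out

# Toy example: the procedure on x "halts after x steps" iff x is even.
toy = lambda x, steps: x % 2 == 0 and steps >= x
print(enumerate_re_set(toy, 10))   # [0, 2, 4, 6, 8]
```

No input can block the enumeration: an input whose run never halts is simply revisited with ever-larger step bounds while other inputs continue to be emitted.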
The Diophantine characterizations of a recursively enumerable set, while not as straightforward or intuitive as the first
definitions, were found by Yuri Matiyasevich as part of the negative solution to Hilbert’s Tenth Problem. Diophantine
sets predate recursion theory and are therefore historically the first way to describe these sets (although this equivalence
was only remarked more than three decades after the introduction of recursively enumerable sets). The number of
bound variables in the above definition of the Diophantine set is the best known so far; it might be that a lower number
can be used to define all diophantine sets.

33.3 Examples
• Every recursive set is recursively enumerable, but it is not true that every recursively enumerable set is recursive.
For recursive sets, the algorithm must also say if an input is not in the set – this is not required of recursively
enumerable sets.
• A recursively enumerable language is a recursively enumerable subset of a formal language.
• The set of all provable sentences in an effectively presented axiomatic system is a recursively enumerable set.
• Matiyasevich’s theorem states that every recursively enumerable set is a Diophantine set (the converse is trivially
true).
• The simple sets are recursively enumerable but not recursive.
• The creative sets are recursively enumerable but not recursive.
• Any productive set is not recursively enumerable.
• Given a Gödel numbering ϕ of the computable functions, the set {⟨i, x⟩ | ϕi (x) ↓} (where ⟨i, x⟩ is the Cantor
pairing function and ϕi (x) ↓ indicates ϕi (x) is defined) is recursively enumerable (cf. picture for a fixed x).
This set encodes the halting problem as it describes the input parameters for which each Turing machine halts.
• Given a Gödel numbering ϕ of the computable functions, the set {⟨x, y, z⟩ | ϕx (y) = z} is recursively
enumerable. This set encodes the problem of deciding a function value.
• Given a partial function f from the natural numbers into the natural numbers, f is a partial recursive function
if and only if the graph of f, that is, the set of all pairs ⟨x, f (x)⟩ such that f(x) is defined, is recursively
enumerable.

33.4 Properties
If A and B are recursively enumerable sets then A ∩ B, A ∪ B and A × B (with the ordered pair of natural numbers
mapped to a single natural number with the Cantor pairing function) are recursively enumerable sets. The preimage
of a recursively enumerable set under a partial recursive function is a recursively enumerable set.
A set is recursively enumerable if and only if it is at level Σ⁰₁ of the arithmetical hierarchy.

A set T is called co-recursively enumerable or co-r.e. if its complement N \ T is recursively enumerable. Equivalently, a set is co-r.e. if and only if it is at level Π⁰₁ of the arithmetical hierarchy.
A set A is recursive (synonym: computable) if and only if both A and the complement of A are recursively enumerable.
A set is recursive if and only if it is either the range of an increasing total recursive function or finite.
Some pairs of recursively enumerable sets are effectively separable and some are not.

33.5 Remarks
According to the Church–Turing thesis, any effectively calculable function is calculable by a Turing machine, and
thus a set S is recursively enumerable if and only if there is some algorithm which yields an enumeration of S. This
cannot be taken as a formal definition, however, because the Church–Turing thesis is an informal conjecture rather
than a formal axiom.
The definition of a recursively enumerable set as the domain of a partial function, rather than the range of a total
recursive function, is common in contemporary texts. This choice is motivated by the fact that in generalized recursion
theories, such as α-recursion theory, the definition corresponding to domains has been found to be more natural. Other
texts use the definition in terms of enumerations, which is equivalent for recursively enumerable sets.

33.6 References
• Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1;
ISBN 0-07-053522-1.
• Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag,
Berlin, 1987. ISBN 3-540-15299-7.
• Soare, Robert I. Recursively enumerable sets and degrees. Bull. Amer. Math. Soc. 84 (1978), no. 6, 1149–
1181.
Chapter 34

Semi-Thue system

In theoretical computer science and mathematical logic a string rewriting system (SRS), historically called a semi-
Thue system, is a rewriting system over strings from a (usually finite) alphabet. Given a binary relation R between
fixed strings over the alphabet, called rewrite rules, denoted by s → t , an SRS extends the rewriting relation to all
strings in which the left- and right-hand side of the rules appear as substrings, that is usv → utv , where s , t , u ,
and v are strings.
The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Thus they constitute a
natural framework for solving the word problem for monoids and groups.
An SRS can be defined directly as an abstract rewriting system. It can also be seen as a restricted kind of a term
rewriting system. As a formalism, string rewriting systems are Turing complete. The semi-Thue name comes from
the Norwegian mathematician Axel Thue, who introduced systematic treatment of string rewriting systems in a 1914
paper.[1] Thue introduced this notion hoping to solve the word problem for finitely presented semigroups. It was not
until 1947 that the problem was shown to be undecidable; this result was obtained independently by Emil Post and A.
A. Markov Jr.[2][3]

34.1 Definition
A string rewriting system or semi-Thue system is a tuple (Σ, R) where

• Σ is an alphabet, usually assumed finite.[4] The elements of the set Σ∗ (* is the Kleene star here) are finite
(possibly empty) strings on Σ , sometimes called words in formal languages; we will simply call them strings
here.

• R is a binary relation on strings from Σ , i.e., R ⊆ Σ∗ × Σ∗ . Each element (u, v) ∈ R is called a (rewriting)
rule and is usually written u → v .

If the relation R is symmetric, then the system is called a Thue system.


The rewriting rules in R can be naturally extended to other strings in Σ∗ by allowing substrings to be rewritten
according to R . More formally, the one-step rewriting relation →R induced by R on Σ∗ is defined, for any strings s
and t in Σ∗ , by:

s →R t if and only if there exist x , y , u , v in Σ∗ such that s = xuy , t = xvy , and u → v .

Since →R is a relation on Σ∗ , the pair (Σ∗ , →R ) fits the definition of an abstract rewriting system. Obviously R is a
subset of →R . Some authors use a different notation for the arrow in →R (e.g. ⇒R ) in order to distinguish it from
R itself ( → ) because they later want to be able to drop the subscript and still avoid confusion between R and the
one-step rewrite induced by R .
Clearly in a semi-Thue system we can form a (finite or infinite) sequence of strings produced by starting with an
initial string s0 ∈ Σ∗ and repeatedly rewriting it by making one substring-replacement at a time:


s0 →R s1 →R s2 →R . . .

A zero-or-more-steps rewriting like this is captured by the reflexive transitive closure of →R , denoted by →∗R (see
abstract rewriting system#Basic notions). This is called the rewriting relation or reduction relation on Σ∗ induced
by R .
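The one-step relation s = xuy →R t = xvy is easy to sketch on Python strings; the rule set here (a single rule ab → ε, i.e. "delete the substring ab") is chosen purely for illustration:

```python
def one_step_rewrites(s, rules):
    """All strings t with s ->R t: for each rule (u, v) and each
    occurrence s = x + u + y, produce x + v + y."""
    results = set()
    for u, v in rules:
        start = 0
        while (i := s.find(u, start)) != -1:
            results.add(s[:i] + v + s[i + len(u):])
            start = i + 1
        # rules with an empty left-hand side u are not handled in this sketch
    return results

R = [("ab", "")]                       # one rule: ab -> (empty string)
print(one_step_rewrites("aabb", R))    # {'ab'}
print(one_step_rewrites("ab", R))      # {''}
```

Iterating the function from an initial string generates exactly the sequences s0 →R s1 →R s2 →R ... described above.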

34.2 Thue congruence


In general, the set Σ∗ of strings on an alphabet forms a free monoid together with the binary operation of string
concatenation (denoted as · and written multiplicatively by dropping the symbol). In an SRS, the reduction relation
→∗R is compatible with the monoid operation, meaning that x →∗R y implies uxv →∗R uyv for all strings x , y , u , v in
Σ∗ . Since →∗R is by definition a preorder, (Σ∗ , ·, →∗R ) forms a preordered monoid.

Similarly, the reflexive transitive symmetric closure of →R , denoted ↔∗R (see abstract rewriting system#Basic notions), is a congruence, meaning it is an equivalence relation (by definition) and it is also compatible with string
concatenation. The relation ↔∗R is called the Thue congruence generated by R . In a Thue system, i.e. if R is
symmetric, the rewrite relation →∗R coincides with the Thue congruence ↔∗R .

34.3 Factor monoid and monoid presentations


Since ↔∗R is a congruence, we can define the factor monoid MR = Σ∗/↔∗R of the free monoid Σ∗ by the Thue
congruence in the usual manner. If a monoid M is isomorphic with MR , then the semi-Thue system (Σ, R) is
called a monoid presentation of M .
We immediately get some very useful connections with other areas of algebra. For example, the alphabet {a, b} with
the rules { ab → ε, ba → ε }, where ε is the empty string, is a presentation of the free group on one generator. If
instead the rules are just { ab → ε }, then we obtain a presentation of the bicyclic monoid.
The importance of semi-Thue systems as presentation of monoids is made stronger by the following:
Theorem: Every monoid has a presentation of the form (Σ, R) , thus it may always be presented by a semi-Thue
system, possibly over an infinite alphabet.[5]
In this context, the set Σ is called the set of generators of M , and R is called the set of defining relations of M .
We can immediately classify monoids based on their presentation. M is called

• finitely generated if Σ is finite.


• finitely presented if both Σ and R are finite.

34.4 The word problem for semi-Thue systems


The word problem for semi-Thue systems can be stated as follows: Given a semi-Thue system T := (Σ, R) and two
words (strings) u, v ∈ Σ∗ , can u be transformed into v by applying rules from R ? This problem is undecidable, i.e.
there is no general algorithm for solving this problem. This even holds if we limit the input to finite systems.
Martin Davis offers the lay reader a two-page proof in his article “What is a Computation?" pp. 258–259 with
commentary p. 257. Davis casts the proof in this manner: “Invent [a word problem] whose solution would lead to a
solution to the halting problem.”

34.5 Connections with other notions


A semi-Thue system is also a term-rewriting system—one that has monadic words (functions) ending in the same
variable as left- and right-hand side terms,[6] e.g. a term rule f2 (f1 (x)) → g(x) is equivalent with the string rule
f1 f2 → g .

A semi-Thue system is also a special type of Post canonical system, but every Post canonical system can also be
reduced to an SRS. Both formalisms are Turing complete, and thus equivalent to Noam Chomsky's unrestricted
grammars, which are sometimes called semi-Thue grammars.[7] A formal grammar only differs from a semi-Thue system
by the separation of the alphabet in terminals and non-terminals, and the fixation of a starting symbol amongst non-
terminals. A minority of authors actually define a semi-Thue system as a triple (Σ, A, R) , where A ⊆ Σ∗ is called
the set of axioms. Under this “generative” definition of semi-Thue system, an unrestricted grammar is just a semi-
Thue system with a single axiom in which one partitions the alphabet in terminals and non-terminals, and makes the
axiom a nonterminal.[8] The simple artifice of partitioning the alphabet in terminals and non-terminals is a powerful
one; it allows the definition of the Chomsky hierarchy based on what combination of terminals and non-terminals
rules contain. This was a crucial development in the theory of formal languages.

34.6 History and importance


Semi-Thue systems were developed as part of a program to add additional constructs to logic, so as to create systems
such as propositional logic, that would allow general mathematical theorems to be expressed in a formal language,
and then proven and verified in an automatic, mechanical fashion. The hope was that the act of theorem proving could
then be reduced to a set of defined manipulations on a set of strings. It was subsequently realized that semi-Thue
systems are isomorphic to unrestricted grammars, which in turn are known to be isomorphic to Turing machines.
This method of research succeeded and now computers can be used to verify the proofs of mathematical and logical
theorems.
At the suggestion of Alonzo Church, Emil Post, in a paper published in 1947, first proved “a certain Problem of Thue”
to be unsolvable, which Martin Davis describes as "...the first unsolvability proof for a problem from classical mathematics
-- in this case the word problem for semigroups.” (Undecidable p. 292)
Davis [ibid] asserts that the proof was offered independently by A. A. Markov (C. R. (Doklady) Acad. Sci. U.S.S.R.
(n.s.) 55 (1947), pp. 583–586).

34.7 See also

• L-system

• MU puzzle

34.8 Notes

[1] Book and Otto, p. 36

[2] Abramsky et al. p. 416

[3] Salomaa et al., p.444

[4] In Book and Otto a semi-Thue system is defined over a finite alphabet through most of the book, except in chapter 7,
where monoid presentations are introduced and this assumption is quietly dropped.

[5] Book and Otto, Theorem 7.1.7, p. 149

[6] Nachum Dershowitz and Jean-Pierre Jouannaud. Rewrite Systems (1990) p. 6

[7] D.I.A. Cohen, Introduction to Computer Theory, 2nd ed., Wiley-India, 2007, ISBN 81-265-1334-9, p.572

[8] Dan A. Simovici, Richard L. Tenney, Theory of formal languages with applications, World Scientific, 1999 ISBN 981-02-
3729-4, chapter 4

34.9 References

34.9.1 Monographs
• Ronald V. Book and Friedrich Otto, String-rewriting Systems, Springer, 1993, ISBN 0-387-97965-4.
• Matthias Jantzen, Confluent string rewriting, Birkhäuser, 1988, ISBN 0-387-13715-7.

34.9.2 Textbooks
• Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, complexity, and languages: fundamentals of theo-
retical computer science, 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1, chapter 7

• Elaine Rich, Automata, computability and complexity: theory and applications, Prentice Hall, 2007, ISBN
0-13-228806-0, chapter 23.5.

34.9.3 Surveys
• Samson Abramsky, Dov M. Gabbay, Thomas S. E. Maibaum (ed.), Handbook of Logic in Computer Science:
Semantic modelling, Oxford University Press, 1995, ISBN 0-19-853780-8.

• Grzegorz Rozenberg, Arto Salomaa (ed.), Handbook of Formal Languages: Word, language, grammar, Springer,
1997, ISBN 3-540-60420-0.

34.9.4 Landmark papers


• Emil Post (1947), Recursive Unsolvability of a Problem of Thue, The Journal of Symbolic Logic, vol. 12 (1947)
pp. 1–11. Reprinted in Martin Davis ed. (1965), The Undecidable: Basic Papers on Undecidable Propositions,
Unsolvable Problems and Computable Functions, Raven Press, New York. pp. 293ff
Chapter 35

Sheffer stroke

Venn diagram of A ↑ B

In Boolean functions and propositional calculus, the Sheffer stroke, named after Henry M. Sheffer, written "|" (see
vertical bar, not to be confused with "||" which is often used to represent disjunction), "Dpq", or "↑" (an upwards
arrow), denotes a logical operation that is equivalent to the negation of the conjunction operation, expressed in ordinary
language as “not both”. It is also called nand (“not and”) or the alternative denial, since it says in effect that at least
one of its operands is false. In Boolean algebra and digital electronics it is known as the NAND operation.
Like its dual, the NOR operator (also known as the Peirce arrow or Quine dagger), NAND can be used by itself,
without any other logical operator, to constitute a logical formal system (making NAND functionally complete). This
property makes the NAND gate crucial to modern digital electronics, including its use in NAND flash memory and
computer processor design.


35.1 Definition
The NAND operation is a logical operation on two logical values. It produces a value of true if and only if at
least one of the propositions is false.

35.1.1 Truth table


The truth table of A NAND B (also written as A | B, Dpq, or A ↑ B) is as follows:

A   B   A ↑ B
T   T     F
T   F     T
F   T     T
F   F     T

35.2 History
The stroke is named after Henry M. Sheffer, who in 1913 published a paper in the Transactions of the American
Mathematical Society (Sheffer 1913) providing an axiomatization of Boolean algebras using the stroke, and proved
its equivalence to a standard formulation thereof by Huntington employing the familiar operators of propositional
logic (and, or, not). Because of self-duality of Boolean algebras, Sheffer’s axioms are equally valid for either of the
NAND or NOR operations in place of the stroke. Sheffer interpreted the stroke as a sign for non-disjunction (NOR)
in his paper, mentioning non-conjunction only in a footnote and without a special sign for it. It was Jean Nicod who
first used the stroke as a sign for non-conjunction (NAND) in a paper of 1917 and which has since become current
practice.[1] Russell and Whitehead used the Sheffer stroke in the 1927 second edition of Principia Mathematica and
suggested it as a replacement for the “or” and “not” operations of the first edition.
Charles Sanders Peirce (1880) had discovered the functional completeness of NAND or NOR more than 30 years
earlier, using the term ampheck (for “cutting both ways”), but he never published his finding.

35.3 Properties
NAND does not possess any of the following five properties, each of which is required to be absent from, and the
absence of all of which is sufficient for, at least one member of a set of functionally complete operators: truth-
preservation, falsity-preservation, linearity, monotonicity, self-duality. (An operator is truth- (falsity-) preserving
if its value is truth (falsity) whenever all of its arguments are truth (falsity).) Therefore {NAND} is a functionally
complete set.
This can also be realized as follows: All three elements of the functionally complete set {AND, OR, NOT} can be
constructed using only NAND. Thus the set {NAND} must be functionally complete as well.
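The construction mentioned above can be sketched directly. The following Python snippet (an illustrative aside, not part of the article) derives NOT, AND, and OR from NAND alone and checks them exhaustively against the built-in operators:

```python
from itertools import product

def nand(p, q):
    """Sheffer stroke: false only when both operands are true."""
    return not (p and q)

# Derived operators, using only NAND:
def not_(p):     return nand(p, p)                      # ¬p    = p ↑ p
def and_(p, q):  return nand(nand(p, q), nand(p, q))    # p ∧ q = (p ↑ q) ↑ (p ↑ q)
def or_(p, q):   return nand(nand(p, p), nand(q, q))    # p ∨ q = (p ↑ p) ↑ (q ↑ q)

# Exhaustively check the constructions on all input combinations.
for p, q in product([True, False], repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)
```

Since {AND, OR, NOT} is functionally complete and each member is expressible in NAND, this verifies the claim in miniature.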

35.4 Introduction, elimination, and equivalencies


The Sheffer stroke ↑ is the negation of the conjunction:

P ↑ Q = ¬(P ∧ Q)

Expressed in terms of NAND ↑, the usual operators of propositional logic are:

¬P    = P ↑ P
P ∧ Q = (P ↑ Q) ↑ (P ↑ Q)
P ∨ Q = (P ↑ P) ↑ (Q ↑ Q)
P → Q = P ↑ (Q ↑ Q)

35.5 Formal system based on the Sheffer stroke


The following is an example of a formal system based entirely on the Sheffer stroke, yet having the functional ex-
pressiveness of the propositional logic:

35.5.1 Symbols
pn for natural numbers n
(|)
The Sheffer stroke commutes but does not associate (e.g., (T|T)|F = T, but T|(T|F) = F). Hence any formal system
including the Sheffer stroke must also include a means of indicating grouping. We shall employ '(' and ')' to this effect.
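The failure of associativity can be checked mechanically. A small Python sketch (illustrative only, with T and F standing for the truth values):

```python
def nand(p, q):
    # Sheffer stroke: false only when both operands are true
    return not (p and q)

T, F = True, False

# Commutativity: p | q = q | p for every input pair
assert all(nand(p, q) == nand(q, p) for p in (T, F) for q in (T, F))

# Non-associativity on the triple (T, T, F):
assert nand(nand(T, T), F) == T   # (T|T)|F = F|F = T
assert nand(T, nand(T, F)) == F   # T|(T|F) = T|T = F
```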

We also write p, q, r, … instead of p0 , p1 , p2 , ….

35.5.2 Syntax
Construction Rule I: For each natural number n, the symbol pn is a well-formed formula (wff), called an atom.
Construction Rule II: If X and Y are wffs, then (X|Y) is a wff.
Closure Rule: Any formulae which cannot be constructed by means of the first two Construction Rules are not wffs.
The letters U, V, W, X, and Y are metavariables standing for wffs.
A decision procedure for determining whether a formula is well-formed goes as follows: “deconstruct” the formula
by applying the Construction Rules backwards, thereby breaking the formula into smaller subformulae. Then repeat
this recursive deconstruction process to each of the subformulae. Eventually the formula should be reduced to its
atoms, but if some subformula cannot be so reduced, then the formula is not a wff.

35.5.3 Calculus
All wffs of the form

((U|(V|W))|((Y|(Y|Y))|((X|V)|((U|X)|(U|X)))))

are axioms. Instances of

(U|(V|W)), U ⊢ W

are inference rules.

35.5.4 Simplification
Since the only connective of this logic is |, the symbol | could be discarded altogether, leaving only the parentheses to
group the letters. A pair of parentheses must always enclose a pair of wffs. Examples of theorems in this simplified
notation are

(p(p(q(q((pq)(pq)))))),

(p(p((qq)(pp)))).

The notation can be simplified further, by letting

(U) := (UU)
((U)) ≡ U

for any U. This simplification makes it necessary to change some rules:

1. More than two letters are allowed within parentheses.

2. Letters or wffs within parentheses are allowed to commute.

3. Repeated letters or wffs within a same set of parentheses can be eliminated.

The result is a parenthetical version of the Peirce existential graphs.


Another way to simplify the notation is to eliminate parentheses by using Polish notation. For example, the earlier
examples with only parentheses could be rewritten using only strokes as follows:

(p(p(q(q((pq)(pq)))))) becomes |p|p|q|q||pq|pq, and

(p(p((qq)(pp)))) becomes |p|p||qq|pp.

This follows the same rules as the parenthesized version, with each opening parenthesis replaced by a Sheffer stroke
and the (redundant) closing parentheses removed.
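Because the translation is purely mechanical (each opening parenthesis becomes a stroke and every closing parenthesis is dropped), it can be expressed as a one-line string rewrite. The following Python sketch is illustrative only:

```python
def to_polish(wff):
    """Rewrite a parenthesized simplified-notation wff into Polish (prefix)
    notation: each '(' becomes a Sheffer stroke '|' and each redundant ')'
    is removed."""
    return wff.replace("(", "|").replace(")", "")

assert to_polish("(p(p(q(q((pq)(pq))))))") == "|p|p|q|q||pq|pq"
assert to_polish("(p(p((qq)(pp))))") == "|p|p||qq|pp"
```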

35.6 See also


• List of logic symbols

• AND gate

• Boolean domain

• CMOS

• Gate equivalent (GE)

• Laws of Form

• Logic gate

• Logical graph

• NAND Flash Memory

• NAND logic

• NAND gate

• NOR gate

• NOT gate

• OR gate

• Peirce’s law

• Peirce arrow = NOR

• Propositional logic

• Sole sufficient operator

• XOR gate

• Peirce arrow

35.7 Notes
[1] Church (1956:134)

35.8 References
• Bocheński, Józef Maria (1960), Précis of Mathematical Logic, translated from the French and German editions
by Otto Bird, Dordrecht, South Holland: D. Reidel.
• Church, Alonzo, (1956) Introduction to mathematical logic, Vol. 1, Princeton: Princeton University Press.

• Nicod, Jean G. P., (1917) “A Reduction in the Number of Primitive Propositions of Logic”, Proceedings of
the Cambridge Philosophical Society, Vol. 19, pp. 32–41.
• Charles Sanders Peirce, 1880, “A Boolian[sic] Algebra with One Constant”, in Hartshorne, C. and Weiss, P.,
eds., (1931–35) Collected Papers of Charles Sanders Peirce, Vol. 4: 12–20, Cambridge: Harvard University
Press.

• Sheffer, H. M. (1913), “A set of five independent postulates for Boolean algebras, with application to logical
constants”, Transactions of the American Mathematical Society 14: 481–488, JSTOR 1988701

35.9 External links


• http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html
• Implementations of 2- and 4-input NAND gates

• Proofs of some axioms by Stroke function by Yasuo Setô @ Project Euclid


Chapter 36

Singleton (mathematics)

In mathematics, a singleton, also known as a unit set,[1] is a set with exactly one element. For example, the set {0}
is a singleton.
The term is also used for a 1-tuple (a sequence with one element).

36.1 Properties

Within the framework of Zermelo–Fraenkel set theory, the axiom of regularity guarantees that no set is an element
of itself. This implies that a singleton is necessarily distinct from the element it contains,[1] thus 1 and {1} are not
the same thing, and the empty set is distinct from the set containing only the empty set. A set such as {{1, 2, 3}} is
a singleton as it contains a single element (which itself is a set, however, not a singleton).
A set is a singleton if and only if its cardinality is 1. In the standard set-theoretic construction of the natural numbers,
the number 1 is defined as the singleton {0}.
In axiomatic set theory, the existence of singletons is a consequence of the axiom of pairing: for any set A, the axiom
applied to A and A asserts the existence of {A, A}, which is the same as the singleton {A} (since it contains A, and
no other set, as an element).
If A is any set and S is any singleton, then there exists precisely one function from A to S, the function sending every
element of A to the single element of S. Thus every singleton is a terminal object in the category of sets.
A singleton has the property that every function from it to any arbitrary set is injective. The only non-singleton set
with this property is the empty set.

36.2 In category theory

Structures built on singletons often serve as terminal objects or zero objects of various categories:

• The statement above shows that the singleton sets are precisely the terminal objects in the category Set of sets.
No other sets are terminal.

• Any singleton admits a unique topological space structure (both subsets are open). These singleton topological
spaces are terminal objects in the category of topological spaces and continuous functions. No other spaces are
terminal in that category.

• Any singleton admits a unique group structure (the unique element serving as identity element). These singleton
groups are zero objects in the category of groups and group homomorphisms. No other groups are terminal in
that category.


36.3 Definition by indicator functions


Let S be a class defined by an indicator function

b : X → {0, 1}

Then S is called a singleton if and only if there is some y ∈ X such that for all x ∈ X,

b(x) = (x = y)

Traditionally, this definition was introduced by Whitehead and Russell[2] along with the definition of the natural
number 1, as

1 =def α̂{(∃x). α = ι‘x},  where  ι‘x =def ŷ(y = x).

36.4 See also


• Class (set theory)

36.5 References
[1] Stoll, Robert (1961). Sets, Logic and Axiomatic Theories. W. H. Freeman and Company. pp. 5–6.

[2] Whitehead, Alfred North; Bertrand Russell (1910). Principia Mathematica. p. 37.
Chapter 37

Subset

“Superset” redirects here. For other uses, see Superset (disambiguation).


In mathematics, especially in set theory, a set A is a subset of a set B, or equivalently B is a superset of A, if A is

Euler diagram showing A is a proper subset of B and conversely B is a proper superset of A


“contained” inside B, that is, all elements of A are also elements of B. A and B may coincide. The relationship of one
set being a subset of another is called inclusion or sometimes containment.
The subset relation defines a partial order on sets.
The algebra of subsets forms a Boolean algebra in which the subset relation is called inclusion.

37.1 Definitions
If A and B are sets and every element of A is also an element of B, then:

• A is a subset of (or is included in) B, denoted by A ⊆ B ,


or equivalently
• B is a superset of (or includes) A, denoted by B ⊇ A.

If A is a subset of B, but A is not equal to B (i.e. there exists at least one element of B which is not an element of A),
then

• A is also a proper (or strict) subset of B; this is written as A ⊊ B.


or equivalently
• B is a proper superset of A; this is written as B ⊋ A.

For any set S, the inclusion relation ⊆ is a partial order on the set P(S) of all subsets of S (the power set of S).
When quantified, A ⊆ B is represented as: ∀x{x∈A → x∈B}.[1]

37.2 ⊂ and ⊃ symbols


Some authors use the symbols ⊂ and ⊃ to indicate subset and superset respectively; that is, with the same meaning
as, and instead of, the symbols ⊆ and ⊇.[2] So for example, for these authors, it is true of every set A that A ⊂ A.
Other authors prefer to use the symbols ⊂ and ⊃ to indicate proper subset and superset, respectively, instead of ⊊ and
⊋.[3] This usage makes ⊆ and ⊂ analogous to the inequality symbols ≤ and <. For example, if x ≤ y then x may or
may not equal y, but if x < y, then x definitely does not equal y. Similarly, using the convention that ⊂ means
proper subset, if A ⊆ B, then A may or may not equal B, but if A ⊂ B, then A definitely does not equal B.

37.3 Examples
• The set A = {1, 2} is a proper subset of B = {1, 2, 3}, thus both expressions A ⊆ B and A ⊊ B are true.
• The set D = {1, 2, 3} is a subset of E = {1, 2, 3}; thus D ⊆ E is true, and D ⊊ E is false.
• Any set is a subset of itself, but not a proper subset. (X ⊆ X is true, and X ⊊ X is false for any set X.)
• The empty set { }, denoted by ∅, is also a subset of any given set X. It is also always a proper subset of any set
except itself.
• The set {x: x is a prime number greater than 10} is a proper subset of {x: x is an odd number greater than 10}.
• The set of natural numbers is a proper subset of the set of rational numbers; likewise, the set of points in a line
segment is a proper subset of the set of points in a line. These are two examples in which both the subset and
the whole set are infinite, and the subset has the same cardinality (the concept that corresponds to size, that is,
the number of elements, of a finite set) as the whole; such cases can run counter to one’s initial intuition.
• The set of rational numbers is a proper subset of the set of real numbers. In this example, both sets are infinite
but the latter set has a larger cardinality (or power) than the former set.
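Several of these examples can be checked with Python's built-in sets, whose comparison operators <= and < mirror ⊆ and ⊊ (an illustrative aside, not part of the article; B stands in for E, since both equal {1, 2, 3}):

```python
A = {1, 2}
B = {1, 2, 3}
D = {1, 2, 3}

assert A <= B and A < B            # A ⊆ B and A ⊊ B
assert D <= B and not (D < B)      # D ⊆ E is true, D ⊊ E is false
assert set() <= A                  # the empty set is a subset of every set
assert B <= B and not (B < B)      # a set is a subset, but never a proper subset, of itself
```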


The regular polygons form a subset of the polygons

Another example in an Euler diagram:

• A is a proper subset of B
• C is a subset but not a proper subset of B

37.4 Other properties of inclusion


Inclusion is the canonical partial order in the sense that every partially ordered set (X, ⪯ ) is isomorphic to some
collection of sets ordered by inclusion. The ordinal numbers are a simple example—if each ordinal n is identified
with the set [n] of all ordinals less than or equal to n, then a ≤ b if and only if [a] ⊆ [b].
For the power set P(S) of a set S, the inclusion partial order is (up to an order isomorphism) the Cartesian product of
k = |S| (the cardinality of S) copies of the partial order on {0,1} for which 0 < 1. This can be illustrated by enumerating
S = {s1 , s2 , …, sk} and associating with each subset T ⊆ S (which is to say with each element of 2S ) the k-tuple from
{0,1}k of which the ith coordinate is 1 if and only if si is a member of T.
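This correspondence between subsets and k-tuples over {0,1} can be made concrete. The Python sketch below (illustrative only, with placeholder element names s1, s2, s3) builds the bijection and checks that coordinatewise order on the tuples agrees with inclusion of the associated subsets:

```python
from itertools import product

S = ["s1", "s2", "s3"]
k = len(S)

# Associate each k-tuple over {0,1} with the subset whose ith element is
# included exactly when the ith coordinate is 1.
subsets = {bits: {s for s, b in zip(S, bits) if b}
           for bits in product((0, 1), repeat=k)}

assert len(subsets) == 2 ** k     # the power set has 2^|S| elements

# Coordinatewise order on the tuples coincides with inclusion of the subsets.
for t1, t2 in product(subsets, repeat=2):
    pointwise_le = all(a <= b for a, b in zip(t1, t2))
    assert pointwise_le == (subsets[t1] <= subsets[t2])
```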

37.5 See also


• Containment order

37.6 References
[1] Rosen, Kenneth H. (2012). Discrete Mathematics and Its Applications (7th ed.). New York: McGraw-Hill. p. 119. ISBN
978-0-07-338309-5.

[2] Rudin, Walter (1987), Real and complex analysis (3rd ed.), New York: McGraw-Hill, p. 6, ISBN 978-0-07-054234-1,
MR 924157
A ⊆ B and B ⊆ C imply A ⊆ C

[3] Subsets and Proper Subsets (PDF), retrieved 2012-09-07

• Jech, Thomas (2002). Set Theory. Springer-Verlag. ISBN 3-540-44085-2.

37.7 External links


• Weisstein, Eric W., “Subset”, MathWorld.
Chapter 38

Tag system

A tag system is a deterministic computational model published by Emil Leon Post in 1943 as a simple form of Post
canonical system. A tag system may also be viewed as an abstract machine, called a Post tag machine (not to be
confused with Post-Turing machines)—briefly, a finite state machine whose only tape is a FIFO queue of unbounded
length, such that in each transition the machine reads the symbol at the head of the queue, deletes a fixed number of
symbols from the head, and to the tail appends a symbol-string preassigned to the deleted symbol. (Because all of the
indicated operations are performed in each transition, a tag machine strictly has only one state.)

38.1 Definition
A tag system is a triplet (m, A, P), where

• m is a positive integer, called the deletion number.

• A is a finite alphabet of symbols, one of which is a special halting symbol. All finite (possibly empty) strings
on A are called words.

• P is a set of production rules, assigning a word P(x) (called a production) to each symbol x in A. The production
(say P(H)) assigned to the halting symbol is seen below to play no role in computations, but for convenience is
taken to be P(H) = 'H'.

The term m-tag system is often used to emphasise the deletion number. Definitions vary somewhat in the literature
(cf. References); the one presented here is that of Rogozhin.

• A halting word is a word that either begins with the halting symbol or whose length is less than m.

• A transformation t (called the tag operation) is defined on the set of non-halting words, such that if x denotes
the leftmost symbol of a word S, then t(S) is the result of deleting the leftmost m symbols of S and appending
the word P(x) on the right.

• A computation by a tag system is a finite sequence of words produced by iterating the transformation t, starting
with an initially given word and halting when a halting word is produced. (By this definition, a computation
is not considered to exist unless a halting word is produced in finitely many iterations. Alternative definitions
allow nonhalting computations, for example by using a special subset of the alphabet to identify words that
encode output.)

The use of a halting symbol in the above definition allows the output of a computation to be encoded in the final word
alone, whereas otherwise the output would be encoded in the entire sequence of words produced by iterating the tag
operation.
A common alternative definition uses no halting symbol and treats all words of length less than m as halting words.
Another definition is the original one used by Post 1943 (described in the historical note below), in which the only
halting word is the empty string.


38.1.1 Example: A simple 2-tag illustration

This is merely to illustrate a simple 2-tag system that uses a halting symbol.
2-tag system
    Alphabet: {a,b,c,H}
    Production rules:
        a --> ccbaH
        b --> cca
        c --> cc

Computation
    Initial word: baa
        acca
        caccbaH
        ccbaHcc
        baHcccc
        Hcccccca (halt).
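The computation above can be reproduced with a short simulator. This Python sketch (illustrative only, not part of the article) implements the tag operation for an arbitrary m-tag system with a halting symbol:

```python
def run_tag(m, rules, word, halt="H"):
    """Simulate an m-tag system; return the sequence of words produced,
    stopping when a halting word (shorter than m, or starting with the
    halting symbol) is reached."""
    trace = [word]
    while len(word) >= m and not word.startswith(halt):
        # Delete the leftmost m symbols, append P(leftmost symbol).
        word = word[m:] + rules[word[0]]
        trace.append(word)
    return trace

rules = {"a": "ccbaH", "b": "cca", "c": "cc", "H": "H"}
print(run_tag(2, rules, "baa"))
# ['baa', 'acca', 'caccbaH', 'ccbaHcc', 'baHcccc', 'Hcccccca']
```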

38.1.2 Example: Computation of Collatz sequences

This simple 2-tag system is adapted from [De Mol, 2008]. It uses no halting symbol, but halts on any word of length
less than 2, and computes a slightly modified version of the Collatz sequence.
In the original Collatz sequence, the successor of n is either n/2 (for even n) or 3n + 1 (for odd n). The value 3n + 1
is clearly even for odd n, hence the next term after 3n + 1 is surely (3n + 1)/2. In the sequence computed by the tag
system below we skip this intermediate step, hence the successor of n is (3n + 1)/2 for odd n.
In this tag system, a positive integer n is represented by the word aa...a with n a’s.
2-tag system
    Alphabet: {a,b,c}
    Production rules:
        a --> bc
        b --> a
        c --> aaa

Computation
    Initial word: aaa  <--> n=3
        abc
        cbc
        caaa
        aaaaa  <--> 5
        aaabc
        abcbc
        cbcbc
        cbcaaa
        caaaaaa
        aaaaaaaa  <--> 8
        aaaaaabc
        aaaabcbc
        aabcbcbc
        bcbcbcbc
        bcbcbca
        bcbcaa
        bcaaa
        aaaa  <--> 4
        aabc
        bcbc
        bca
        aa  <--> 2
        bc
        a  <--> 1 (halt)
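The same tag operation, halting on any word shorter than 2, reproduces the modified Collatz trajectory. In the Python sketch below (illustrative only), the current word encodes an integer exactly when it consists solely of a's:

```python
def tag_collatz(n):
    """Run the 2-tag system (a -> bc, b -> a, c -> aaa) from the word 'a'*n,
    collecting the integers visited: the word encodes an integer exactly
    when every symbol in it is 'a'."""
    rules = {"a": "bc", "b": "a", "c": "aaa"}
    word, seen = "a" * n, [n]
    while len(word) >= 2:
        word = word[2:] + rules[word[0]]
        if set(word) == {"a"}:
            seen.append(len(word))
    return seen

print(tag_collatz(3))   # [3, 5, 8, 4, 2, 1]
```

Note the successor of an odd n appears directly as (3n + 1)/2, matching the skipped intermediate step described above.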

38.2 Turing-completeness of m-tag systems


For each m > 1, the set of m-tag systems is Turing-complete; i.e., for each m > 1, it is the case that for any given Turing
machine T, there is an m-tag system that simulates T. In particular, a 2-tag system can be constructed to simulate a
Universal Turing machine, as was done by Wang 1963 and by Cocke & Minsky 1964.
Conversely, a Turing machine can be shown to be a Universal Turing Machine by proving that it can simulate a
Turing-complete class of m-tag systems. For example, Rogozhin 1996 proved the universality of the class of 2-tag
systems with alphabet {a1, ..., an, H} and corresponding productions {anan W1, ..., anan Wn−1, anan, H}, where
the Wk are nonempty words; he then proved the universality of a very small (4-state, 6-symbol) Turing machine by
showing that it can simulate this class of tag systems.

38.3 The 2-tag halting problem


This version of the halting problem is among the simplest, most-easily described undecidable decision problems:
Given an arbitrary positive integer n and a list of n+1 arbitrary words P1, P2, ..., Pn, Q on the alphabet {1, 2, ..., n},
does repeated application of the tag operation t: ijX → XPi eventually convert Q into a word of length less than 2? That
is, does the sequence Q, t¹(Q), t²(Q), t³(Q), ... terminate?

38.4 Historical note on the definition of tag system


The above definition differs from that of Post 1943, whose tag systems use no halting symbol, but rather halt only on
the empty word, with the tag operation t being defined as follows:

• If x denotes the leftmost symbol of a nonempty word S, then t(S) is the operation consisting of first appending
the word P(x) to the right end of S, and then deleting the leftmost m symbols of the result (deleting all of them if
there are fewer than m symbols).

The above remark concerning the Turing-completeness of the set of m-tag systems, for any m > 1, applies also to
these tag systems as originally defined by Post.

38.4.1 Origin of the name “tag”

According to a footnote in Post 1943, B. P. Gill suggested the name for an earlier variant of the problem in which
the first m symbols are left untouched, but rather a check mark indicating the current position moves to the right by
m symbols every step. The problem of determining whether or not the check mark ever touches the end of the sequence
was then dubbed the “problem of tag”, referring to the children’s game of tag.

38.5 Cyclic tag systems

A cyclic tag system is a modification of the original tag system. The alphabet consists of only two symbols, 0 and
1, and the production rules comprise a list of productions considered sequentially, cycling back to the beginning of
the list after considering the “last” production on the list. For each production, the leftmost symbol of the word is
examined—if the symbol is 1, the current production is appended to the right end of the word; if the symbol is 0,
no characters are appended to the word; in either case, the leftmost symbol is then deleted. The system halts if and
when the word becomes empty.

38.5.1 Example

Cyclic tag system
    Productions: (010, 000, 1111)

Computation
    Initial word: 11001

    Production    Word
    ----------    ---------
    010           11001
    000           1001010
    1111          001010000
    010           01010000
    000           1010000
    1111          010000000
    010           10000000
    ...           ...
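A cyclic tag step differs from an ordinary tag step only in that the production is taken from a cycling list and nothing is appended when the head symbol is 0. The Python sketch below (illustrative only) reproduces the table above:

```python
from itertools import cycle

def run_cyclic_tag(productions, word, steps):
    """Run a cyclic tag system for at most `steps` steps, recording the
    (production, word) pair in effect before each step."""
    trace = []
    prods = cycle(productions)
    for _ in range(steps):
        if not word:
            break                                  # halts on the empty word
        p = next(prods)
        trace.append((p, word))
        # If the head symbol is 1, append the current production; in either
        # case, delete the head symbol.
        word = (word + p if word[0] == "1" else word)[1:]
    return trace

for p, w in run_cyclic_tag(("010", "000", "1111"), "11001", 7):
    print(p, w)
# 010 11001
# 000 1001010
# ...
```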
Cyclic tag systems were created by Matthew Cook while in the employ of Stephen Wolfram, and were used in Cook’s
demonstration that the Rule 110 cellular automaton is universal. A key part of the demonstration was that cyclic tag
systems can emulate a Turing-complete class of tag systems.

38.6 Emulation of tag systems by cyclic tag systems

An m-tag system with alphabet {a1 , ..., an} and corresponding productions {P1 , ..., Pn} is emulated by a cyclic tag
system with m*n productions (Q1 , ..., Qn, -, -, ..., -), where all but the first n productions are the empty string (denoted
by '-'). The Qk are encodings of the respective Pk, obtained by replacing each symbol of the tag system alphabet by
a length-n binary string as follows (these are to be applied also to the initial word of a tag system computation):
a1 = 100...00
a2 = 010...00
...
an = 000...01
That is, ak is encoded as a binary string with a 1 in the kth position from the left, and 0’s elsewhere. Successive lines
of a tag system computation will then occur encoded as every (m*n)th line of its emulation by the cyclic tag system.

38.6.1 Example

This is a very small example to illustrate the emulation technique.


2-tag system
    Production rules: (a --> bb, b --> abH, H --> H)
    Alphabet encoding: a = 100, b = 010, H = 001
    Production encodings: (bb = 010 010, abH = 100 010 001, H = 001)

Cyclic tag system
    Productions: (010 010, 100 010 001, 001, -, -, -)

Tag system computation
    Initial word: ba
        abH
        Hbb (halt)

Cyclic tag system computation
    Initial word: 010 100 (=ba)

    Production     Word
    -----------    -------------------------------
    * 010 010      010 100 (=ba)
    100 010 001    10 100
    001            0 100 100 010 001
    -              100 100 010 001
    -              00 100 010 001
    -              0 100 010 001
    * 010 010      100 010 001 (=abH)
    100 010 001    00 010 001 010 010
    001            0 010 001 010 010
    -              010 001 010 010
    -              10 001 010 010
    -              0 001 010 010
    * 010 010      001 010 010 (=Hbb)  <-- emulated halt
    100 010 001    01 010 010
    001            1 010 010
    -              010 010 001
    ...            ...
Every sixth line (marked by '*') produced by the cyclic tag system is the encoding of a corresponding line of the tag
system computation, until the emulated halt is reached.

38.7 See also


• Queue automaton

38.8 References
• Cocke, J., and Minsky, M.: “Universality of Tag Systems with P=2”, J. Assoc. Comput. Mach. 11, 15–20,
1964.
• De Mol, L.: “Tag systems and Collatz-like functions”, Theoretical Computer Science, 390:1, 92–101, January
2008. (Preprint Nr. 314.)
• Marvin Minsky, 1961, “Recursive Unsolvability of Post's Problem of 'Tag' and Other Topics in Theory of Turing
Machines”, Annals of Mathematics, 2nd ser., Vol. 74, No. 3 (Nov. 1961), pp. 437–455. Stable URL: http:
//links.jstor.org/sici?sici=0003-486X%2819611%292%3A74%3A3%3C437%3ARUOPPO%3E2.0.CO%3B2-N.

• Marvin Minsky, 1967, Computation: Finite and Infinite Machines, Prentice–Hall, Inc., Englewood Cliffs, N.J.,
no ISBN, Library of Congress Card Catalog number 67-12342.

In chapter 14, titled “Very Simple Bases for Computability”, Minsky presents a very readable
(and example-filled) subsection 14.6, The Problem of “Tag” and Monogenic Canonical Systems
(pp. 267–273) (this sub-section is indexed as “tag system”). Minsky relates his frustrating
experiences with the general problem: “Post found this (00, 1101) problem ‘intractable,’ and
so did I, even with the help of a computer.” He comments that an “effective way to decide, for
any string S, whether this process will ever repeat when started with S” is unknown, although
a few specific cases have been proven unsolvable. In particular he mentions Cocke’s Theorem
and Corollary 1964.

• Post, E.: “Formal reductions of the combinatorial decision problem”, American Journal of Mathematics, 65
(2), 197–215 (1943). (Tag systems are introduced on p. 203ff.)

• Rogozhin, Yu.: “Small Universal Turing Machines”, Theoret. Comput. Sci. 168, 215–240, 1996.
• Wang, H.: “Tag Systems and Lag Systems”, Math. Annalen 152, 65–74, 1963.

38.9 External links


• http://mathworld.wolfram.com/TagSystem.html
• http://mathworld.wolfram.com/CyclicTagSystem.html

• http://www.wolframscience.com/nksonline/page-95 (cyclic tag systems)

• http://www.wolframscience.com/nksonline/page-669 (emulation of tag systems)


Chapter 39

Truth table

A truth table is a mathematical table used in logic—specifically in connection with Boolean algebra, boolean func-
tions, and propositional calculus—to compute the functional values of logical expressions on each of their functional
arguments, that is, on each combination of values taken by their logical variables (Enderton, 2001). In particular,
truth tables can be used to tell whether a propositional expression is true for all legitimate input values, that is, logically
valid.
Practically, a truth table is composed of one column for each input variable (for example, A and B), and one final
column for all of the possible results of the logical operation that the table is meant to represent (for example, A XOR
B). Each row of the truth table therefore contains one possible configuration of the input variables (for instance, A=true
B=false), and the result of the operation for those values. See the examples below for further clarification. Ludwig
Wittgenstein is often credited with their invention in the Tractatus Logico-Philosophicus,[1] though they appeared at
least a year earlier in a paper on propositional logic by Emil Leon Post.[2]

39.1 Unary operations


There are 4 unary operations:

39.1.1 Logical false

39.1.2 Logical identity

Logical identity is an operation on one logical value, typically the value of a proposition, that produces a value of true
if its operand is true and a value of false if its operand is false.
The truth table for the logical identity operator is as follows:

p   p
T   T
F   F

39.1.3 Logical negation

Logical negation is an operation on one logical value, typically the value of a proposition, that produces a value of
true if its operand is false and a value of false if its operand is true.
The truth table for NOT p (also written as ¬p, Np, Fpq, or ~p) is as follows:

p   ¬p
T   F
F   T

39.1.4 Logical true

39.2 Binary operations


There are 16 possible truth functions of two binary variables:


39.2.1 Truth table for all binary logical operators

Here is a truth table giving definitions of all 16 of the possible truth functions of two binary variables (P and Q are
thus boolean variables: information about notation may be found in Bocheński (1959), Enderton (2001), and Quine
(1982); for details about the operators see the Key below):
where T = true and F = false. The Com row indicates whether an operator, op, is commutative: P op Q = Q op P.
The L id row shows the operator's left identities, if it has any: values I such that I op Q = Q. The R id row shows
the operator's right identities, if it has any: values I such that P op I = P.[note 1]
The four combinations of input values for p, q, are read by row from the table above. The output function for each p,
q combination, can be read, by row, from the table.
Key:
The key is oriented by column, rather than row. There are four columns rather than four rows, to display the four
combinations of p, q, as input.
p: T T F F
q: T F T F
There are 16 rows in this key, one row for each binary function of the two binary variables p, q. For example, in
row 2 of this key, the value of converse nonimplication (' ↚ ') is T only for the column denoted by the unique
combination p=F, q=T, and F for the three remaining columns of p, q.
The output row for ↚ is thus
2: F F T F
and the 16-row[3] key is
Logical operators can also be visualized using Venn diagrams.
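The enumeration above can be reproduced programmatically. The following sketch (illustrative Python, not part of the original article) generates all 16 binary truth functions as output rows over the four input combinations, and checks the converse-nonimplication row described above:

```python
from itertools import product

# The four input combinations, ordered as in the key above:
# (p, q) = (T, T), (T, F), (F, T), (F, F)
inputs = [(True, True), (True, False), (False, True), (False, False)]

def all_binary_truth_functions():
    """Each of the 16 binary truth functions corresponds to one choice
    of output (T or F) for each of the four input columns."""
    tables = []
    for outputs in product([False, True], repeat=4):
        tables.append(dict(zip(inputs, outputs)))
    return tables

tables = all_binary_truth_functions()
print(len(tables))  # 16

# Converse nonimplication (p ↚ q) is true only when p=F and q=T,
# giving the output row F, F, T, F described above.
nonimpl = {(p, q): (not p) and q for p, q in inputs}
print([nonimpl[i] for i in inputs])  # [False, False, True, False]
```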

39.2.2 Logical conjunction (AND)

Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if both of its operands are true.
The truth table for p AND q (also written as p ∧ q, Kpq, p & q, or p · q) is as follows:
In ordinary language terms, if both p and q are true, then the conjunction p ∧ q is true. For all other assignments of
logical values to p and to q the conjunction p ∧ q is false.
It can also be said that if p, then p ∧ q is q, otherwise p ∧ q is p.

39.2.3 Logical disjunction (OR)

Logical disjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if at least one of its operands is true.
The truth table for p OR q (also written as p ∨ q, Apq, p || q, or p + q) is as follows:
Stated in English, if p, then p ∨ q is p, otherwise p ∨ q is q.

39.2.4 Logical implication

Logical implication or the material conditional are both associated with an operation on two logical values, typically
the values of two propositions, that produces a value of false just in the singular case the first operand is true and the
second operand is false.
The truth table associated with the material conditional if p then q (symbolized as p → q) and the logical implication
p implies q (symbolized as p ⇒ q, or Cpq) is as follows:
It may also be useful to note that p → q is equivalent to ¬p ∨ q.

39.2.5 Logical equality

Logical equality (also known as biconditional) is an operation on two logical values, typically the values of two
propositions, that produces a value of true if both operands are false or both operands are true.
The truth table for p XNOR q (also written as p ↔ q, Epq, p = q, or p ≡ q) is as follows:
So p EQ q is true if p and q have the same truth value (both true or both false), and false if they have different truth
values.

39.2.6 Exclusive disjunction

Exclusive disjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if one but not both of its operands is true.
The truth table for p XOR q (also written as p ⊕ q, Jpq, or p ≠ q) is as follows:
For two propositions, XOR can also be written as (p ∧ ¬q) ∨ (¬p ∧ q).

39.2.7 Logical NAND

The logical NAND is an operation on two logical values, typically the values of two propositions, that produces a
value of false if both of its operands are true. In other words, it produces a value of true if at least one of its operands
is false.
The truth table for p NAND q (also written as p ↑ q, Dpq, or p | q) is as follows:
It is frequently useful to express a logical operation as a compound operation, that is, as an operation that is built up or
composed from other operations. Many such compositions are possible, depending on the operations that are taken
as basic or “primitive” and the operations that are taken as composite or “derivative”.
In the case of logical NAND, it is clearly expressible as a compound of NOT and AND.
The negation of a conjunction: ¬(p ∧ q), and the disjunction of negations: (¬p) ∨ (¬q) can be tabulated as follows:

39.2.8 Logical NOR

The logical NOR is an operation on two logical values, typically the values of two propositions, that produces a value
of true if both of its operands are false. In other words, it produces a value of false if at least one of its operands is
true. ↓ is also known as the Peirce arrow after its inventor, Charles Sanders Peirce, and is a sole sufficient operator.
The truth table for p NOR q (also written as p ↓ q, Xpq, ¬(p ∨ q)) is as follows:
The negation of a disjunction ¬(p ∨ q), and the conjunction of negations (¬p) ∧ (¬q) can be tabulated as follows:
Inspection of the tabular derivations for NAND and NOR, under each assignment of logical values to the functional
arguments p and q, produces the identical patterns of functional values for ¬(p ∧ q) as for (¬p) ∨ (¬q), and for ¬(p ∨ q)
as for (¬p) ∧ (¬q). Thus the first and second expressions in each pair are logically equivalent, and may be substituted
for each other in all contexts that pertain solely to their logical values.
This equivalence is one of De Morgan’s laws.

39.3 Applications

Truth tables can be used to prove many other logical equivalences. For example, consider the following truth table:
This demonstrates the fact that p → q is logically equivalent to ¬p ∨ q.
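Such equivalence proofs by truth table are easy to mechanize. A minimal sketch (illustrative Python, with a `truth_table` helper that is not part of the article) compares the output columns of p → q and ¬p ∨ q over all assignments:

```python
from itertools import product

def truth_table(f):
    """Return the output column of a binary connective f over all
    assignments to (p, q), in the order TT, TF, FT, FF."""
    return [f(p, q) for p, q in product([True, False], repeat=2)]

conditional = truth_table(lambda p, q: q if p else True)  # p -> q
disjunction = truth_table(lambda p, q: (not p) or q)      # (not p) or q

print(conditional)                 # [True, False, True, True]
print(conditional == disjunction)  # True: the columns agree everywhere
```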

39.3.1 Truth table for most commonly used logical operators


Here is a truth table giving definitions of the most commonly used 6 of the 16 possible truth functions of 2 binary
variables (P,Q are thus boolean variables):
Key:

T = true, F = false
∧ = AND (logical conjunction)
∨ = OR (logical disjunction)
⊕ = XOR (exclusive or)
≡ = XNOR (exclusive nor)
→ = conditional “if-then”
← = conditional “then-if”

⇔, the biconditional or “if-and-only-if”, is logically equivalent to ≡: XNOR (exclusive nor).

Logical operators can also be visualized using Venn diagrams.

39.3.2 Condensed truth tables for binary operators


For binary operators, a condensed form of truth table is also used, where the row headings and the column headings
specify the operands and the table cells specify the result. For example Boolean logic uses this condensed truth table
notation:
This notation is useful especially if the operations are commutative, although one can additionally specify that the
rows are the first operand and the columns are the second operand. This condensed notation is particularly useful
in discussing multi-valued extensions of logic, as it significantly cuts down on combinatoric explosion of the number
of rows otherwise needed. It also provides for quickly recognizable characteristic “shape” of the distribution of the
values in the table which can assist the reader in grasping the rules more quickly.

39.3.3 Truth tables in digital logic


Truth tables are also used to specify the functionality of hardware look-up tables (LUTs) in digital logic circuitry.
For an n-input LUT, the truth table will have 2^n values (or rows in the above tabular format), completely specifying
a boolean function for the LUT. By representing each boolean value as a bit in a binary number, truth table values
can be efficiently encoded as integer values in electronic design automation (EDA) software. For example, a 32-bit
integer can encode the truth table for a LUT with up to 5 inputs.
When using an integer representation of a truth table, the output value of the LUT can be obtained by calculating a
bit index k based on the input values of the LUT, in which case the LUT’s output value is the kth bit of the integer.
For example, to evaluate the output value of a LUT given an array of n boolean input values, the bit index of the truth
table’s output value can be computed as follows: if the ith input is true, let Vi = 1, else let Vi = 0. Then the kth bit of
the binary representation of the truth table is the LUT’s output value, where k = V0*2^0 + V1*2^1 + V2*2^2 + ...
+ V(n−1)*2^(n−1).
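The encoding and bit-index lookup described above can be sketched as follows (function names here are illustrative, not drawn from any EDA tool):

```python
def encode_lut(outputs):
    """Pack a list of 2**n boolean outputs into an integer, with the
    output for input index k stored in bit k."""
    table = 0
    for k, bit in enumerate(outputs):
        if bit:
            table |= 1 << k
    return table

def eval_lut(table, inputs):
    """Compute k = sum(V_i * 2**i) from the boolean inputs and return
    bit k of the integer truth table."""
    k = sum(1 << i for i, v in enumerate(inputs) if v)
    return (table >> k) & 1

# 2-input AND: outputs for k = 0..3, i.e. inputs (F,F), (T,F), (F,T), (T,T)
AND = encode_lut([False, False, False, True])
print(AND)                           # 8 (only bit 3 is set)
print(eval_lut(AND, [True, True]))   # 1
print(eval_lut(AND, [True, False]))  # 0
```

A 5-input LUT needs 2^5 = 32 output bits, which is why a 32-bit integer suffices for it.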
Truth tables are a simple and straightforward way to encode boolean functions; however, given the exponential growth
in size as the number of inputs increases, they are not suitable for functions with a large number of inputs. Other
representations which are more memory efficient are text equations and binary decision diagrams.

39.3.4 Applications of truth tables in digital electronics


In digital electronics and computer science (fields of applied logic engineering and mathematics), truth tables can be
used to reduce basic boolean operations to simple correlations of inputs to outputs, without the use of logic gates or
code. For example, a binary addition can be represented with the truth table:

A B | C R
1 1 | 1 0
1 0 | 0 1
0 1 | 0 1
0 0 | 0 0

where A = First Operand, B = Second Operand, C = Carry, R = Result


This truth table is read left to right:

• Value pair (A,B) equals value pair (C,R).


• Or for this example, A plus B equal result R, with the Carry C.

Note that this table does not describe the logic operations necessary to implement this operation, rather it simply
specifies the function of inputs to output values.
With respect to the result, this example may be arithmetically viewed as modulo 2 binary addition, and as logically
equivalent to the exclusive-or (exclusive disjunction) binary logic operation.
In this case it can be used for only very simple inputs and outputs, such as 1s and 0s. However, if the number of types
of values one can have on the inputs increases, the size of the truth table will increase.
For instance, in an addition operation, one needs two operands, A and B. Each can have one of two values, zero or
one. The number of combinations of these two values is 2×2, or four. So the result is four possible outputs of C and
R. If one were to use base 3, the size would increase to 3×3, or nine possible outputs.
The first “addition” example above is called a half-adder. A full adder results when the carry from the previous operation
is provided as input to the next adder. Thus, a truth table of eight rows is needed to describe a full adder's
logic:
A B C* | C R
0 0 0 | 0 0
0 1 0 | 0 1
1 0 0 | 0 1
1 1 0 | 1 0
0 0 1 | 0 1
0 1 1 | 1 0
1 0 1 | 1 0
1 1 1 | 1 1

Same as previous, but C* = carry from the previous adder.
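The eight-row table above can be regenerated by computing the arithmetic sum of the three input bits; this sketch (illustrative Python) prints the full-adder truth table row by row:

```python
from itertools import product

def full_adder(a, b, c_in):
    """Return the (carry, result) bits of one-bit binary addition
    with a carry-in, i.e. the modulo-2 sum and the overflow bit."""
    total = a + b + c_in
    return total // 2, total % 2  # (C, R)

# Reproduce the eight-row full-adder truth table.
for a, b, c_in in product([0, 1], repeat=3):
    c, r = full_adder(a, b, c_in)
    print(a, b, c_in, '|', c, r)
```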

39.4 History

Irving Anellis’s research has shown that C. S. Peirce appears to be the earliest logician (in 1893) to devise a
truth table matrix. From the summary of his paper:

In 1997, John Shosky discovered, on the verso of a page of the typed transcript of Bertrand Russell’s
1912 lecture on “The Philosophy of Logical Atomism”, truth table matrices. The matrix for negation is
Russell’s, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein.
It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth
table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An
unpublished manuscript by Peirce identified as having been composed in 1883–84 in connection with
the composition of Peirce’s “On the Algebra of Logic: A Contribution to the Philosophy of Notation”
that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth
table for the conditional.

39.5 Notes
[1] The operators here with equal left and right identities (XOR, AND, XNOR, and OR) are also commutative monoids because
they are also associative. While this distinction may be irrelevant in a simple discussion of logic, it can be quite important
in more advanced mathematics. For example, in category theory an enriched category is described as a base category
enriched over a monoid, and any of these operators can be used for enrichment.

39.6 See also


• Boolean domain
• Boolean-valued function
• Espresso heuristic logic minimizer
• Excitation table

• First-order logic

• Functional completeness
• Karnaugh maps

• Logic gate
• Logical connective

• Logical graph
• Method of analytic tableaux

• Propositional calculus
• Truth function

39.7 References
[1] Georg Henrik von Wright (1955). “Ludwig Wittgenstein, A Biographical Sketch”. The Philosophical Review 64 (4): 527–
545 (p. 532, note 9). JSTOR 2182631.

[2] Emil Post (July 1921). “Introduction to a general theory of elementary propositions”. American Journal of Mathematics
43 (3): 163–185. JSTOR 2370324.

[3] Ludwig Wittgenstein (1922) Tractatus Logico-Philosophicus Proposition 5.101

39.8 Further reading


• Bocheński, Józef Maria (1959), A Précis of Mathematical Logic, translated from the French and German edi-
tions by Otto Bird, Dordrecht, South Holland: D. Reidel.

• Enderton, H. (2001). A Mathematical Introduction to Logic, second edition, New York: Harcourt Academic
Press. ISBN 0-12-238452-0

• Quine, W.V. (1982), Methods of Logic, 4th edition, Cambridge, MA: Harvard University Press.

39.9 External links


• Hazewinkel, Michiel, ed. (2001), “Truth table”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-
010-4

• Truth Tables, Tautologies, and Logical Equivalence


• “Peirce’s Truth-functional Analysis and the Origin of Truth Tables” by Irving H. Anellis
• Converting truth tables into Boolean expressions
Chapter 40

Truth value

“True and false” redirects here. For the book, see True and False: Heresy and Common Sense for the Actor. For the
Unix commands, see true and false (commands). For other uses, see True (disambiguation) and False (disambigua-
tion).

In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a
proposition to truth.

40.1 Classical logic

In classical logic, with its intended semantics, the truth values are true (1 or ⊤) and untrue or false (0 or ⊥); that
is, classical logic is a two-valued logic. This set of two values is also called the Boolean domain. Corresponding
semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. Logical
biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false.
Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan’s laws:

¬(p∧q) ⇔ ¬p ∨ ¬q
¬(p∨q) ⇔ ¬p ∧ ¬q

Propositional variables become variables in the Boolean domain. Assigning values for propositional variables is
referred to as valuation.
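De Morgan’s laws above can be checked exhaustively, since a valuation over the Boolean domain assigns each propositional variable one of only two values. A minimal sketch in Python:

```python
from itertools import product

# Check De Morgan's laws under every valuation of p and q,
# using Python's booleans as the Boolean domain {True, False}.
for p, q in product([True, False], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))  # ¬(p∧q) ⇔ ¬p ∨ ¬q
    assert (not (p or q)) == ((not p) and (not q))  # ¬(p∨q) ⇔ ¬p ∧ ¬q
print("De Morgan's laws hold under all four valuations")
```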

40.2 Intuitionistic and constructive logic

Main article: Constructivism (mathematics)

In intuitionistic logic, and more generally, constructive mathematics, statements are assigned a truth value only if they
can be given a constructive proof. It starts with a set of axioms, and a statement is true if you can build a proof of
the statement from those axioms. A statement is false if you can deduce a contradiction from it. This leaves open the
possibility of statements that have not yet been assigned a truth value.
Unproved statements in intuitionistic logic are not given an intermediate truth value (as is sometimes mistakenly
asserted). Indeed, one can prove that they have no third truth value, a result dating back to Glivenko in 1928.[1]
Instead, statements simply remain of unknown truth value until they are either proved or disproved.
There are various ways of interpreting intuitionistic logic, including the Brouwer–Heyting–Kolmogorov interpreta-
tion. See also Intuitionistic Logic – Semantics.


40.3 Multi-valued logic


Multi-valued logics (such as fuzzy logic and relevance logic) allow for more than two truth values, possibly containing
some internal structure. For example, on the unit interval [0,1] such structure is a total order; this may be expressed
as existence of various degrees of truth.

40.4 Algebraic semantics


Main article: Algebraic logic

Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions.
For example, intuitionistic logic lacks a complete set of truth values because its semantics, the Brouwer–Heyting–
Kolmogorov interpretation, is specified in terms of provability conditions, and not directly in terms of the necessary
truth of formulae.
But even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics.
The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, compared to Boolean algebra
semantics of classical propositional calculus.

40.5 In other theories


Intuitionistic type theory uses types in the place of truth values.
Topos theory uses truth values in a special sense: the truth values of a topos are the global elements of the subobject
classifier. Having truth values in this sense does not make a logic truth valuational.

40.6 See also


• Agnosticism

• Bayesian probability

• Circular reasoning

• Degree of truth

• False dilemma

• History of logic#Algebraic period

• Paradox

• Semantic theory of truth

• Slingshot argument

• Supervaluationism

• Truth-value semantics

• Verisimilitude

40.7 References
[1] Proof that intuitionistic logic has no third truth value, Glivenko 1928

40.8 External links


• Truth Values entry by Yaroslav Shramko, Heinrich Wansing in the Stanford Encyclopedia of Philosophy
Chapter 41

Turing degree

In computer science and mathematical logic the Turing degree (named after Alan Turing) or degree of unsolvability
of a set of natural numbers measures the level of algorithmic unsolvability of the set. The concept of Turing degree
is fundamental in computability theory, where sets of natural numbers are often regarded as decision problems. The
Turing degree of a set tells how difficult it is to solve the decision problem associated with the set, that is, to determine
whether an arbitrary number is in the given set.
Two sets are Turing equivalent if they have the same level of unsolvability; each Turing degree is a collection of
Turing equivalent sets, so that two sets are in different Turing degrees exactly when they are not Turing equivalent.
Furthermore, the Turing degrees are partially ordered so that if the Turing degree of a set X is less than the Turing
degree of a set Y then any (noncomputable) procedure that correctly decides whether numbers are in Y can be
effectively converted to a procedure that correctly decides whether numbers are in X. It is in this sense that the Turing
degree of a set corresponds to its level of algorithmic unsolvability.
The Turing degrees were introduced by Emil Leon Post (1944), and many fundamental results were established by
Stephen Cole Kleene and Post (1954). The Turing degrees have been an area of intense research since then. Many
proofs in the area make use of a proof technique known as the priority method.

41.1 Turing equivalence


Main article: Turing reduction

For the rest of this article, the word set will refer to a set of natural numbers. A set X is said to be Turing reducible
to a set Y if there is an oracle Turing machine that decides membership in X when given an oracle for membership
in Y. The notation X ≤T Y indicates that X is Turing reducible to Y.
Two sets X and Y are defined to be Turing equivalent if X is Turing reducible to Y and Y is Turing reducible to X.
The notation X ≡T Y indicates that X and Y are Turing equivalent. The relation ≡T can be seen to be an equivalence
relation, which means that for all sets X, Y, and Z:

• X ≡T X

• X ≡T Y implies Y ≡T X

• If X ≡T Y and Y ≡T Z then X ≡T Z.

A Turing degree is an equivalence class of the relation ≡T. The notation [X] denotes the equivalence class containing
a set X. The entire collection of Turing degrees is denoted D .
The Turing degrees have a partial order ≤ defined so that [X] ≤ [Y] if and only if X ≤T Y. There is a unique Turing
degree containing all the computable sets, and this degree is less than every other degree. It is denoted 0 (zero)
because it is the least element of the poset D . (It is common to use boldface notation for Turing degrees, in order to
distinguish them from sets. When no confusion can occur, such as with [X], the boldface is not necessary.)


For any sets X and Y, X join Y, written X ⊕ Y, is defined to be the union of the sets {2n : n ∈ X} and {2m+1 : m ∈
Y}. The Turing degree of X ⊕ Y is the least upper bound of the degrees of X and Y. Thus D is a join-semilattice.
The least upper bound of degrees a and b is denoted a ∪ b. It is known that D is not a lattice, as there are pairs of
degrees with no greatest lower bound.
For any set X the notation X′ denotes the set of indices of oracle machines that halt when using X as an oracle. The
set X′ is called the Turing jump of X. The Turing jump of a degree [X] is defined to be the degree [X′]; this is a
valid definition because X′ ≡T Y′ whenever X ≡T Y. A key example is 0′, the degree of the halting problem.
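The interleaved encoding behind the join is concrete enough to sketch in code. The following illustrative Python (finite sets stand in for sets of naturals; the function names are not standard) shows that membership in X or Y is recovered from the join by a single parity-shifted membership query, which is why the join computes an upper bound of both sets:

```python
def join(X, Y):
    """X join Y: place X on the even numbers and Y on the odds."""
    return {2 * n for n in X} | {2 * m + 1 for m in Y}

def in_X(Z, k):
    """Answer 'k in X?' using only a membership query to the join Z."""
    return (2 * k) in Z

def in_Y(Z, k):
    """Answer 'k in Y?' using only a membership query to the join Z."""
    return (2 * k + 1) in Z

X, Y = {0, 2, 5}, {1, 2}
Z = join(X, Y)
print(sorted(Z))  # [0, 3, 4, 5, 10]
assert all(in_X(Z, n) == (n in X) for n in range(10))
assert all(in_Y(Z, n) == (n in Y) for n in range(10))
```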

41.2 Basic properties of the Turing degrees


• Every Turing degree is countably infinite, that is, it contains exactly ℵ0 sets.

• There are 2^ℵ0 distinct Turing degrees.

• For each degree a the strict inequality a < a′ holds.

• For each degree a, the set of degrees below a is at most countable. The set of degrees greater than a has size
2^ℵ0 .

41.3 Structure of the Turing degrees


A great deal of research has been conducted into the structure of the Turing degrees. The following survey lists only
some of the many known results. One general conclusion that can be drawn from the research is that the structure of
the Turing degrees is extremely complicated.

41.3.1 Order properties


• There are minimal degrees. A degree a is minimal if a is nonzero and there is no degree between 0 and a.
Thus the order relation on the degrees is not a dense order.

• For every nonzero degree a there is a degree b incomparable with a.

• There is a set of 2^ℵ0 pairwise incomparable Turing degrees.

• There are pairs of degrees with no greatest lower bound. Thus D is not a lattice.

• Every countable partially ordered set can be embedded in the Turing degrees.

• No infinite, strictly increasing sequence of degrees has a least upper bound.

41.3.2 Properties involving the jump


• For every degree a there is a degree strictly between a and a′. In fact, there is a countable sequence of pairwise
incomparable degrees between a and a′.

• A degree a is of the form b′ if and only if 0′ ≤ a.

• For any degree a there is a degree b such that a < b and b′ = a′; such a degree b is called low relative to a.

• There is an infinite sequence aᵢ of degrees such that a′ᵢ₊₁ ≤ aᵢ for each i.



41.3.3 Logical properties


• Simpson (1977) showed that the first-order theory of D in the language ⟨ ≤, = ⟩ or ⟨ ≤, ′, =⟩ is many-one
equivalent to the theory of true second-order arithmetic. This indicates that the structure of D is extremely
complicated.

• Shore and Slaman (1999) showed that the jump operator is definable in the first-order structure of the degrees
with the language ⟨ ≤, =⟩.

41.4 Structure of the r.e. Turing degrees


A degree is called r.e. (recursively enumerable) if it contains a recursively enumerable set. Every r.e. degree is less
than or equal to 0′ but not every degree less than 0′ is an r.e. degree.

• (G. E. Sacks, 1964) The r.e. degrees are dense; between any two r.e. degrees there is a third r.e. degree.

• (A. H. Lachlan, 1966a and C. E. M. Yates, 1966) There are two r.e. degrees with no greatest lower bound in
the r.e. degrees.

• (A. H. Lachlan, 1966a and C. E. M. Yates, 1966) There is a pair of nonzero r.e. degrees whose greatest lower
bound is 0.

• (S. K. Thomason, 1971) Every finite distributive lattice can be embedded into the r.e. degrees. In fact, the
countable atomless Boolean algebra can be embedded in a manner that preserves suprema and infima.

• (A. H. Lachlan and R. I. Soare, 1980) Not all finite lattices can be embedded in the r.e. degrees (via an
embedding that preserves suprema and infima). The following particular lattice cannot be embedded in the r.e.
degrees:

• (A. H. Lachlan, 1966b) There is no pair of r.e. degrees whose greatest lower bound is 0 and whose least upper
bound is 0′. This result is informally called the nondiamond theorem.

• (L. A. Harrington and T. A. Slaman, see Nies, Shore, and Slaman (1998)) The first-order theory of the r.e.
degrees in the language ⟨ 0, ≤, = ⟩ is many-one equivalent to the theory of true first-order arithmetic.

41.5 Post’s problem and the priority method


“Post’s problem” redirects here. For the other “Post’s problem”, see Post’s correspondence problem.

Emil Post studied the r.e. Turing degrees and asked whether there is any r.e. degree strictly between 0 and 0′. The
problem of constructing such a degree (or showing that none exist) became known as Post’s problem. This problem
was solved independently by Friedberg and Muchnik in the 1950s, who showed that these intermediate r.e. degrees
do exist. Their proofs each developed the same new method for constructing r.e. degrees which came to be known
as the priority method. The priority method is now the main technique for establishing results about r.e. sets.

The idea of the priority method for constructing an r.e. set X is to list a countable sequence of requirements that X
must satisfy. For example, to construct an r.e. set X between 0 and 0′ it is enough to satisfy the requirements Ae and
Be for each natural number e, where Ae requires that the oracle machine with index e does not compute 0′ from X
and Be requires that the Turing machine with index e (and no oracle) does not compute X. These requirements are
put into a priority ordering, which is an explicit bijection of the requirements and the natural numbers. The proof
proceeds inductively with one stage for each natural number; these stages can be thought of as steps of time during
which the set X is enumerated. At each stage, numbers may be put into X or forever prevented from entering X in an
attempt to satisfy requirements (that is, force them to hold once all of X has been enumerated). Sometimes, a number
can be enumerated into X to satisfy one requirement but doing this would cause a previously satisfied requirement to
become unsatisfied (that is, to be injured). The priority order on requirements is used to determine which requirement
to satisfy in this case. The informal idea is that if a requirement is injured then it will eventually stop being injured after
all higher priority requirements have stopped being injured, although not every priority argument has this property.
An argument must be made that the overall set X is r.e. and satisfies all the requirements. Priority arguments can be
used to prove many facts about r.e. sets; the requirements used and the manner in which they are satisfied must be
carefully chosen to produce the required result.

41.6 See also


• Martin measure

41.7 References

41.7.1 Monographs (undergraduate level)


• Cooper, S.B. Computability theory. Chapman & Hall/CRC, Boca Raton, FL, 2004. ISBN 1-58488-237-9

• Cutland, N. Computability. Cambridge University Press, Cambridge-New York, 1980. ISBN 0-521-22384-9;
ISBN 0-521-29465-7

41.7.2 Monographs and survey articles (graduate level)


• Ambos-Spies, K. and Fejer, P. Degrees of Unsolvability. Unpublished. http://www.cs.umb.edu/~{}fejer/
articles/History_of_Degrees.pdf

• Lerman, M. Degrees of unsolvability. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1983.


ISBN 3-540-12155-2

• Odifreddi, P. G. (1989), Classical Recursion Theory, Studies in Logic and the Foundations of Mathematics
125, Amsterdam: North-Holland, ISBN 978-0-444-87295-1, MR 982269

• Odifreddi, P. G. (1999), Classical recursion theory. Vol. II, Studies in Logic and the Foundations of Mathe-
matics 143, Amsterdam: North-Holland, ISBN 978-0-444-50205-6, MR 1718169

• Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1;
ISBN 0-07-053522-1

• Sacks, Gerald E. Degrees of Unsolvability (Annals of Mathematics Studies), Princeton University Press. ISBN
978-0691079417

• Simpson, S. Degrees of unsolvability: a survey of results. Handbook of Mathematical Logic, North-Holland,


1977, pp. 631–652.

• Shoenfield, Joseph R. Degrees of Unsolvability, North-Holland/Elsevier, ISBN 978-0720420616.



• Shore, R. The theories of the T, tt, and wtt r.e. degrees: undecidability and beyond. Proceedings of the IX
Latin American Symposium on Mathematical Logic, Part 1 (Bahía Blanca, 1992), 61–70, Notas Lógica Mat.,
38, Univ. Nac. del Sur, Bahía Blanca, 1993.

• Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag,
Berlin, 1987. ISBN 3-540-15299-7

• Soare, Robert I. Recursively enumerable sets and degrees. Bull. Amer. Math. Soc. 84 (1978), no. 6, 1149–
1181. MR 508451

41.7.3 Research papers


• Kleene, Stephen Cole; Post, Emil L. (1954), “The upper semi-lattice of degrees of recursive unsolvability”,
Annals of Mathematics, Second Series 59 (3): 379–407, doi:10.2307/1969708, ISSN 0003-486X, JSTOR
1969708, MR 0061078

• Lachlan, A.H. (1966a), “Lower Bounds for Pairs of Recursively Enumerable Degrees”, Proceedings of the
London Mathematical Society 3 (1): 537, doi:10.1112/plms/s3-16.1.537.

• Lachlan, A.H. (1966b), “The impossibility of finding relative complements for recursively enumerable de-
grees”, J. Symb. Logic 31 (3): 434–454, doi:10.2307/2270459, JSTOR 2270459.

• Lachlan, A.H.; Soare, R.I. (1980), “Not every finite lattice is embeddable in the recursively enumerable de-
grees”, Advances in Math 37: 78–82, doi:10.1016/0001-8708(80)90027-4.

• Nies, André; Shore, Richard A.; Slaman, Theodore A. (1998), “Interpretability and definability in the recur-
sively enumerable degrees”, Proceedings of the London Mathematical Society 77 (2): 241–291, doi:10.1112/S002461159800046X
ISSN 0024-6115, MR 1635141

• Post, Emil L. (1944), “Recursively enumerable sets of positive integers and their decision problems”, Bulletin
of the American Mathematical Society 50 (5): 284–316, doi:10.1090/S0002-9904-1944-08111-1, ISSN 0002-
9904, MR 0010514

• Sacks, G.E. (1964), “The recursively enumerable degrees are dense”, Annals of Mathematics, Second Series
80 (2): 300–312, doi:10.2307/1970393, JSTOR 1970393.

• Shore, Richard A.; Slaman, Theodore A. (1999), “Defining the Turing jump”, Mathematical Research Letters
6: 711–722, doi:10.4310/mrl.1999.v6.n6.a10, ISSN 1073-2780, MR 1739227

• Simpson, Stephen G. (1977), “First-order theory of the degrees of recursive unsolvability”, Annals of Math-
ematics, Second Series 105 (1): 121–139, doi:10.2307/1971028, ISSN 0003-486X, JSTOR 1971028, MR
0432435

• Thomason, S.K. (1971), “Sublattices of the recursively enumerable degrees”, Z. Math. Logik Grundlag. Math.
17: 273–280, doi:10.1002/malq.19710170131.

• Yates, C.E.M. (1966), “A minimal pair of recursively enumerable degrees”, J. Symbolic Logic 31 (2): 159–168,
doi:10.2307/2269807, JSTOR 2269807.
Chapter 42

Universal algebra

Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures
themselves, not examples (“models”) of algebraic structures. For instance, rather than take particular groups as the
object of study, in universal algebra one takes “the theory of groups” as an object of study.

42.1 Basic idea


In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A.
An n-ary operation on A is a function that takes n elements of A and returns a single element of A. Thus, a 0-ary
operation (or nullary operation) can be represented simply as an element of A, or a constant, often denoted by a
letter like a. A 1-ary operation (or unary operation) is simply a function from A to A, often denoted by a symbol
placed in front of its argument, like ~x. A 2-ary operation (or binary operation) is often denoted by a symbol placed
between its arguments, like x ∗ y. Operations of higher or unspecified arity are usually denoted by function symbols,
with the arguments placed in parentheses and separated by commas, like f(x,y,z) or f(x1, ..., xn). Some researchers
allow infinitary operations, such as ⋀α∈J xα where J is an infinite index set, thus leading into the algebraic theory
of complete lattices. One way of talking about an algebra, then, is by referring to it as an algebra of a certain type Ω,
where Ω is an ordered sequence of natural numbers representing the arity of the operations of the algebra.
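This arity-indexed view of operations can be made concrete. The following is a minimal illustrative sketch (the `Algebra` class and its operation table are our own convention, not a standard library API): an algebra is a carrier set together with operations, each declared with its arity, and the integers mod 4 with addition, negation, and the constant 0 form an algebra of type (2, 1, 0).

```python
class Algebra:
    """A carrier set together with named operations of declared arity."""

    def __init__(self, carrier, operations):
        # operations: dict mapping name -> (arity, function)
        self.carrier = set(carrier)
        self.operations = operations

    def apply(self, name, *args):
        arity, fn = self.operations[name]
        assert len(args) == arity, f"{name} expects {arity} argument(s)"
        result = fn(*args)
        # An n-ary operation must return an element of the carrier.
        assert result in self.carrier, "operations must be closed on the carrier"
        return result

# Integers mod 4 with a binary +, a unary negation, and a nullary constant 0:
# its type (signature) is the arity sequence (2, 1, 0).
Z4 = Algebra(range(4), {
    "add": (2, lambda x, y: (x + y) % 4),
    "neg": (1, lambda x: (-x) % 4),
    "zero": (0, lambda: 0),
})

print(Z4.apply("add", 3, 2))  # 1
print(Z4.apply("neg", 1))     # 3
print(Z4.apply("zero"))       # 0
```

Note that closure (the result landing back in the carrier) is checked in `apply`, matching the universal-algebraic convention that closure is part of what it means to be an operation.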

42.1.1 Equations

After the operations have been specified, the nature of the algebra can be further limited by axioms, which in universal
algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary
operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. The axiom is intended to hold for all elements x, y,
and z of the set A.
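Over a finite carrier, an identity such as associativity can be checked by brute force, since "for all elements x, y, and z" becomes a finite conjunction. A small illustrative sketch (the function name is ours):

```python
from itertools import product

def satisfies_associativity(elements, op):
    """Check the identity x*(y*z) = (x*y)*z for every triple of elements."""
    return all(op(x, op(y, z)) == op(op(x, y), z)
               for x, y, z in product(elements, repeat=3))

A = range(5)
print(satisfies_associativity(A, lambda x, y: (x + y) % 5))  # True
print(satisfies_associativity(A, lambda x, y: (x - y) % 5))  # False: subtraction is not associative
```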

42.2 Varieties
Main article: Variety (universal algebra)

An algebraic structure that can be defined by identities is called a variety, and these are sufficiently important that
some authors consider varieties the only proper object of study in universal algebra, while others regard them as only one object of study among several.
Restricting one’s study to varieties rules out:

• Predicate logic, notably quantification, including universal quantification ( ∀ ), except before an equation, and
existential quantification ( ∃ )

• All relations except equality, in particular inequalities, both a ≠ b and order relations


In this narrower definition, universal algebra can be seen as a special branch of model theory, typically dealing with
structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality),
and in which the language used to talk about these structures uses equations only.
Not all algebraic structures in a wider sense fall into this scope. For example ordered groups are not studied in
mainstream universal algebra because they involve an ordering relation.
A more fundamental restriction is that universal algebra cannot study the class of fields, because there is no type
(a.k.a. signature) in which all field laws can be written as equations (inverses of elements are defined for all non-zero
elements in a field, so inversion cannot simply be added to the type).
One advantage of this restriction is that the structures studied in universal algebra can be defined in any category that
has finite products. For example, a topological group is just a group in the category of topological spaces.

42.2.1 Examples
Most of the usual algebraic systems of mathematics are examples of varieties, but not always in an obvious way – the
usual definitions often involve quantification or inequalities.

Groups

To see how this works, let’s consider the definition of a group. Normally a group is defined in terms of a single binary
operation ∗, subject to these axioms:

• Associativity (as in the previous section): x ∗ (y ∗ z) = (x ∗ y) ∗ z; formally: ∀x,y,z. x∗(y∗z)=(x∗y)∗z.


• Identity element: There exists an element e such that for each element x, e ∗ x = x = x ∗ e; formally: ∃e ∀x.
e∗x=x=x∗e.
• Inverse element: It can easily be seen that the identity element is unique. If this unique identity element is
denoted by e then for each x, there exists an element i such that x ∗ i = e = i ∗ x; formally: ∀x ∃i. x∗i=e=i∗x.

(Some authors also use an axiom called "closure", stating that x ∗ y belongs to the set A whenever x and y do. But
from a universal algebraist’s point of view, that is already implied by calling ∗ a binary operation.)
This definition of a group is problematic from the point of view of universal algebra. The reason is that the axioms of
the identity element and inversion are not stated purely in terms of equational laws but also have clauses involving the
phrase “there exists ... such that ...”. This is inconvenient; the list of group properties can be simplified to universally
quantified equations by adding a nullary operation e and a unary operation ~ in addition to the binary operation ∗.
Then list the axioms for these three operations as follows:

• Associativity: x ∗ (y ∗ z) = (x ∗ y) ∗ z.
• Identity element: e ∗ x = x = x ∗ e; formally: ∀x. e∗x=x=x∗e.
• Inverse element: x ∗ (~x) = e = (~x) ∗ x; formally: ∀x. x∗~x=e=~x∗x.

(Of course, we usually write "x⁻¹" instead of "~x", which shows that the notation for operations of low arity is not
always as given in the second paragraph.)
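As a sanity check, the three equational laws can be verified exhaustively for a small concrete group, here the integers mod 6 under addition, taking ∗ to be addition mod 6, ~ to be negation, and e the constant 0 (an illustrative sketch, not part of the article's formal development):

```python
from itertools import product

n = 6
elems = range(n)
mul = lambda x, y: (x + y) % n   # the binary operation *
inv = lambda x: (-x) % n         # the unary operation ~
e = 0                            # the nullary operation e

# Associativity: x*(y*z) = (x*y)*z for all x, y, z
assert all(mul(x, mul(y, z)) == mul(mul(x, y), z)
           for x, y, z in product(elems, repeat=3))
# Identity: e*x = x = x*e for all x
assert all(mul(e, x) == x == mul(x, e) for x in elems)
# Inverse: x*(~x) = e = (~x)*x for all x
assert all(mul(x, inv(x)) == e == mul(inv(x), x) for x in elems)
print("all three equational laws hold")
```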
What has changed is that in the usual definition there are:

• a single binary operation (signature (2))


• 1 equational law (associativity)
• 2 quantified laws (identity and inverse)

...while in the universal algebra definition there are:

• 3 operations: one binary, one unary, and one nullary (signature (2,1,0))

• 3 equational laws (associativity, identity, and inverse)


• no quantified laws (except for outermost universal quantifiers which are allowed in varieties)

It is important to check that this really does capture the definition of a group. The reason that it might not is that
specifying one of these universal groups might give more information than specifying one of the usual kind of group.
After all, nothing in the usual definition said that the identity element e was unique; if there is another identity element
e', then it is ambiguous which one should be the value of the nullary operator e. Proving that it is unique is a common
beginning exercise in classical group theory textbooks. The same thing is true of inverse elements. So, the universal
algebraist’s definition of a group is equivalent to the usual definition.
At first glance this is simply a technical difference, replacing quantified laws with equational laws. However, it has
immediate practical consequences – when defining a group object in category theory, where the object in question
may not be a set, one must use equational laws (which make sense in general categories), and cannot use quantified
laws (which do not make sense, as objects in general categories do not have elements). Further, the perspective
of universal algebra insists not only that the inverse and identity exist, but that they be maps in the category. The
basic example is of a topological group – not only must the inverse exist element-wise, but the inverse map must be
continuous (some authors also require the identity map to be a closed inclusion, hence cofibration, again referring to
properties of the map).

42.3 Basic constructions


We assume that the type, Ω , has been fixed. Then there are three basic constructions in universal algebra: homo-
morphic image, subalgebra, and product.
A homomorphism between two algebras A and B is a function h: A → B from the set A to the set B such that, for every
operation fA of A and corresponding fB of B (of arity, say, n), h(fA(x1, ..., xn)) = fB(h(x1), ..., h(xn)). (The
subscripts on f are sometimes dropped when it is clear from context which algebra the operation belongs to.) For example, if
e is a constant (nullary operation), then h(eA) = eB. If ~ is a unary operation, then h(~x) = ~h(x). If ∗ is a binary
operation, then h(x ∗ y) = h(x) ∗ h(y). And so on. A few of the things that can be done with homomorphisms, as well
as definitions of certain special kinds of homomorphisms, are listed under the entry Homomorphism. In particular,
we can take the homomorphic image of an algebra, h(A).
A subalgebra of A is a subset of A that is closed under all the operations of A. A product of some set of algebraic
structures is the cartesian product of the sets with the operations defined coordinatewise.
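The homomorphism condition h(fA(x1, ..., xn)) = fB(h(x1), ..., h(xn)) can likewise be checked exhaustively for finite algebras. An illustrative sketch (the dictionaries-of-operations representation is our own convention): reduction mod 3 is a homomorphism from the integers mod 6 to the integers mod 3, each equipped with addition and the constant 0.

```python
from itertools import product

def is_homomorphism(h, ops_A, ops_B, carrier_A):
    """Check h(fA(x1,...,xn)) == fB(h(x1),...,h(xn)) for every operation
    and every tuple of arguments, by brute force over a finite carrier."""
    for name, (arity, fA) in ops_A.items():
        _, fB = ops_B[name]
        for args in product(carrier_A, repeat=arity):
            if h(fA(*args)) != fB(*(h(a) for a in args)):
                return False
    return True

ops_Z6 = {"add": (2, lambda x, y: (x + y) % 6), "zero": (0, lambda: 0)}
ops_Z3 = {"add": (2, lambda x, y: (x + y) % 3), "zero": (0, lambda: 0)}
print(is_homomorphism(lambda x: x % 3, ops_Z6, ops_Z3, range(6)))  # True
```

Note that the nullary case falls out of the same condition: for a constant e, the product over an empty argument tuple reduces the check to h(eA) = eB, exactly as in the text.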

42.4 Some basic theorems


• The isomorphism theorems, which encompass the isomorphism theorems of groups, rings, modules, etc.
• Birkhoff’s HSP Theorem, which states that a class of algebras is a variety if and only if it is closed under
homomorphic images, subalgebras, and arbitrary direct products.

42.5 Motivations and applications


In addition to its unifying approach, universal algebra also gives deep theorems and important examples and coun-
terexamples. It provides a useful framework for those who intend to start the study of new classes of algebras. It can
enable the use of methods invented for some particular classes of algebras to other classes of algebras, by recasting
the methods in terms of universal algebra (if possible), and then interpreting these as applied to other classes. It has
also provided conceptual clarification; as J.D.H. Smith puts it, “What looks messy and complicated in a particular
framework may turn out to be simple and obvious in the proper general one.”
In particular, universal algebra can be applied to the study of monoids, rings, and lattices. Before universal algebra
came along, many theorems (most notably the isomorphism theorems) were proved separately in all of these fields,
but with universal algebra, they can be proven once and for all for every kind of algebraic system.
The 1956 paper by Higgins referenced below has been well followed up for its framework for a range of particular
algebraic systems, while his 1963 paper is notable for its discussion of algebras with operations which are only partially
defined, typical examples for this being categories and groupoids. This leads on to the subject of higher-dimensional
algebra which can be defined as the study of algebraic theories with partial operations whose domains are defined under
geometric conditions. Notable examples of these are various forms of higher-dimensional categories and groupoids.

42.6 Generalizations

Further information: Category theory, Operad theory and Partial algebra

A more generalised programme along these lines is carried out by category theory. Given a list of operations and
axioms in universal algebra, the corresponding algebras and homomorphisms are the objects and morphisms of a
category. Category theory applies to many situations where universal algebra does not, extending the reach of the
theorems. Conversely, many theorems that hold in universal algebra do not generalise all the way to category theory.
Thus both fields of study are useful.
A more recent development in category theory that generalizes operations is operad theory – an operad is a set of
operations, similar to a universal algebra.
Another development is partial algebra where the operators can be partial functions.

42.7 History

In Alfred North Whitehead's book A Treatise on Universal Algebra, published in 1898, the term universal algebra
had essentially the same meaning that it has today. Whitehead credits William Rowan Hamilton and Augustus De
Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself.[1]
At the time structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic
structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: “The main idea of
the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but
rather the comparative study of their several structures.” At the time George Boole's algebra of logic made a strong
counterpoint to ordinary number algebra, so the term “universal” served to calm strained sensibilities.
Whitehead’s early work sought to unify quaternions (due to Hamilton), Grassmann's Ausdehnungslehre, and Boole’s
algebra of logic. Whitehead wrote in his book:

“Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative
study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic
symbolism in particular. The comparative study necessarily presupposes some previous separate study,
comparison being impossible without knowledge.” [2]

Whitehead, however, had no results of a general nature. Work on the subject was minimal until the early 1930s,
when Garrett Birkhoff and Øystein Ore began publishing on universal algebras. Developments in metamathematics
and category theory in the 1940s and 1950s furthered the field, particularly the work of Abraham Robinson, Alfred
Tarski, Andrzej Mostowski, and their students (Brainerd 1967).
In the period between 1935 and 1950, most papers were written along the lines suggested by Birkhoff’s papers, dealing
with free algebras, congruence and subalgebra lattices, and homomorphism theorems. Although the development of
mathematical logic had made applications to algebra possible, they came about slowly; results published by Anatoly
Maltsev in the 1940s went unnoticed because of the war. Tarski’s lecture at the 1950 International Congress of
Mathematicians in Cambridge ushered in a new period in which model-theoretic aspects were developed, mainly by
Tarski himself, as well as C.C. Chang, Leon Henkin, Bjarni Jónsson, Roger Lyndon, and others.
In the late 1950s, Edward Marczewski[3] emphasized the importance of free algebras, leading to the publication of
more than 50 papers on the algebraic theory of free algebras by Marczewski himself, together with Jan Mycielski,
Władysław Narkiewicz, Witold Nitka, J. Płonka, S. Świerczkowski, K. Urbanik, and others.

42.8 See also


• Graph algebra
• Homomorphism
• Lattice theory
• Signature
• Term algebra
• Variety
• Clone
• Universal algebraic geometry
• Model theory

42.9 Footnotes
[1] Grätzer, George. Universal Algebra, Van Nostrand Co., Inc., 1968, p. v.
[2] Quoted in Grätzer, George. Universal Algebra, Van Nostrand Co., Inc., 1968.
[3] Marczewski, E. “A general scheme of the notions of independence in mathematics.” Bull. Acad. Polon. Sci. Ser. Sci.
Math. Astronom. Phys. 6 (1958), 731–736.

42.10 References
• Bergman, George M., 1998. An Invitation to General Algebra and Universal Constructions (pub. Henry Helson,
15 the Crescent, Berkeley CA, 94708) 398 pp. ISBN 0-9655211-4-1.
• Birkhoff, Garrett, 1946. Universal algebra. Comptes Rendus du Premier Congrès Canadien de Mathématiques,
University of Toronto Press, Toronto, pp. 310–326.
• Brainerd, Barron, Aug–Sep 1967. Review of Universal Algebra by P. M. Cohn. American Mathematical
Monthly, 74(7): 878–880.
• Burris, Stanley N., and H.P. Sankappanavar, 1981. A Course in Universal Algebra Springer-Verlag. ISBN
3-540-90578-2 Free online edition.
• Cohn, Paul Moritz, 1981. Universal Algebra. Dordrecht, Netherlands: D. Reidel Publishing. ISBN 90-277-
1213-1 (First published in 1965 by Harper & Row)
• Freese, Ralph, and Ralph McKenzie, 1987. Commutator Theory for Congruence Modular Varieties, 1st ed.
London Mathematical Society Lecture Note Series, 125. Cambridge Univ. Press. ISBN 0-521-34832-3. Free
online second edition.
• Grätzer, George, 1968. Universal Algebra D. Van Nostrand Company, Inc.
• Higgins, P. J. Groups with multiple operators. Proc. London Math. Soc. (3) 6 (1956), 366–416.
• Higgins, P.J., Algebras with a scheme of operators. Mathematische Nachrichten (27) (1963) 115–132.
• Hobby, David, and Ralph McKenzie, 1988. The Structure of Finite Algebras American Mathematical Society.
ISBN 0-8218-3400-2. Free online edition.
• Jipsen, Peter, and Henry Rose, 1992. Varieties of Lattices, Lecture Notes in Mathematics 1533. Springer
Verlag. ISBN 0-387-56314-8. Free online edition.
• Pigozzi, Don. General Theory of Algebras.
• Smith, J.D.H., 1976. Mal'cev Varieties, Springer-Verlag.
• Whitehead, Alfred North, 1898. A Treatise on Universal Algebra, Cambridge. (Mainly of historical interest.)

42.11 External links


• Algebra Universalis—a journal dedicated to Universal Algebra.
tion bot, Clickey, Xqbot, GrouchoBot, Charvest, Wei.cs, Serberimor, MastiBot, RobinK, Full-date unlinking bot, ActivExpression, Gzorg,
Lokentaren, LoStrangolatore, Mean as custard, Ankog, Dziadgba, Architectchao, Tijfo098, ClueBot NG, Helpful Pixie Bot, Garsd, Sff9,
ChrisGualtieri, Dmunene, Chunliang Lyu, Dulaambaw, Akerbos, Jochen Burghardt, Isthatmoe, KasparBot and Anonymous: 87
• Functional completeness Source: https://en.wikipedia.org/wiki/Functional_completeness?oldid=665400365 Contributors: Slrubenstein,
Michael Hardy, Paul Murray, Ancheta Wis, Kaldari, Guppyfinsoup, EmilJ, Nortexoid, Domster, CBright, LOL, Paxsimius, Qwer-
tyus, Kbdank71, MarSch, Jameshfisher, R.e.s., RichF, SmackBot, InverseHypercube, CBM, Gregbard, Cydebot, Krauss, Swpb, Sergey
Marchenko, Joshua Issac, FMasic, Saralee Arrowood Viognier, Francvs, Hans Adler, Cnoguera, Dsimic, Addbot, Yobot, TechBot, Infvwl,
Citation bot 1, Abazgiri, Dixtosa, ZéroBot, Tijfo098, Helpful Pixie Bot and Anonymous: 21
• Halting problem Source: https://en.wikipedia.org/wiki/Halting_problem?oldid=667972227 Contributors: Damian Yerrick, AxelBoldt,
Derek Ross, LC~enwiki, Vicki Rosenzweig, Wesley, Robert Merkel, Jan Hidders, Andre Engels, Wahlau, Hephaestos, Stevertigo, Michael
Hardy, Booyabazooka, Dominus, Cole Kitchen, Wapcaplet, Graue, Fwappler, JeremyR, Cyp, Muriel Gottrop~enwiki, Salsa Shark, Qed,
Rotem Dan, Ehn, Charles Matthews, Timwi, Dcoetzee, Dysprosia, Doradus, Furrykef, David.Monniaux, Phil Boswell, DaleNixon, Rob-
bot, Craig Stuntz, RedWolf, Cogibyte, MathMartin, Rursus, Paul G, Guillermo3, Tobias Bergemann, Ramir, Solver, Ancheta Wis, Giftlite,
DavidCary, Dratman, Siroxo, Rchandra, Neilc, DNewhall, Sam Hocevar, Gazpacho, PhotoBox, Mormegil, Rich Farmbrough, Avriette,
Guanabot, ArnoldReinhold, Roodog2k, DcoetzeeBot~enwiki, Bender235, Pt, Jantangring, Army1987, Cyclist, Enric Naval, Obradovic
Goran, Jumbuck, Hackwrench, Axl, Sligocki, Suruena, Oleg Alexandrov, Ataru, MattGiuca, Ruud Koot, GregorB, Ryansking, Wul-
fila, Tslocum, Graham87, BD2412, Zoz, Oddcowboy, Bubba73, SLi, FlaBot, Mathbot, NekoDaemon, Jameshfisher, Chobot, Adam
Lindberg, Banaticus, YurikBot, Wavelength, Hairy Dude, RussBot, Spl, Robert A West, Trovatore, R.e.s., ZacBowling, Eighty~enwiki,
Thiseye, Thsgrn, Muu-karhu, Hv, Zwobot, Bota47, DaveWF, Arthur Rubin, Abeliano, Claygate, Tsiaojian lee, True Pagan Warrior,
SmackBot, Radak, Stux, InverseHypercube, Alksub, Eskimbot, Canderra, Ohnoitsjamie, Hmains, SpaceDude, Thumperward, Jfsamper,
Torzsmokus, Liontooth, Jgoulden, Anatoly Vorobey, Byelf2007, Lambiam, Wvbailey, Nagle, BenRayfield, Meor, WAREL, Zero sharp,
Atreys, FatalError, CRGreathouse, CBM, Chrisahn, Myasuda, Gregbard, Stormwyrm, Cydebot, Mon4, UberScienceNerd, Jdvelasc,
Thijs!bot, Amlz, VictorAnyakin, JAnDbot, LeedsKing, PhilipHunt, .anacondabot, Yurei-eggtart, Singularity, MetsBot, David Eppstein,
Ekotkie, Projectstann, R'n'B, Maurice Carbonaro, Aqwis, Tatrgel, Bah23, Billinghurst, Mikez302, Sapphic, HiDrNick, Macaw3, SieBot,
Ncapito, Likebox, Yahastu, TimMorley, Svick, SuperMarioBrah, CBM2, ClueBot, Jeffreykegler, James Lednik, Catuila, Gundersen53,
XLinkBot, Luke.mccrohon, SilvonenBot, Addbot, Ijriims, Ronhjones, Fieldday-sunday, Download, Verbal, PV=nRT, Yobot, Pcap, PM-
Lawrence, Bility, AnomieBOT, Garga2, PiracyFundsTerrorism, Mahtab mk, Xqbot, Martnym, Nippashish, Ehird, Thehelpfulbot, Fres-
coBot, Arlen22, Kwiki, Pinethicket, Rushbugled13, The.megapode, RedBot, Joshtch, Full-date unlinking bot, Proof Theorist, Specs112,
EmausBot, John of Reading, BillyPreset, Quondum, Zustra, Erget2005, ChuispastonBot, Jiri 1984, Widr, Helpful Pixie Bot, Bengski68,
Pedro.atmc, Marler8997, S.Chepurin, Farhanarrafi, IjonTichyIjonTichy, APerson, Jochen Burghardt, Blackbombchu, KasparBot and
Anonymous: 196
• List of multiple discoveries Source: https://en.wikipedia.org/wiki/List_of_multiple_discoveries?oldid=665549844 Contributors: Michael
Hardy, Ww, Hyacinth, Kmote, Piotrus, Rich Farmbrough, Florian Blaschke, Bender235, Eleland, Logologist, Sligocki, Richard Arthur
Norton (1958- ), Mindmatrix, Waldir, Pictureuploader, BD2412, Qwertyus, Rjwilmsi, KenBailey, RussBot, Dtrebbien, RFBailey, Peg-
ship, SmackBot, Fantasizer, Jagged 85, Hmains, Nbarth, Colonies Chris, Makyen, Dicklyon, Novangelis, Rhetth, CRGreathouse, BeenAroundAWhile,
Gregbard, Mattergy, Doug Weller, Smartse, PloniAlmoni, JamesBWatson, Nyttend, Nucleophilic, Martin Hühne, David J Wilson, Nigholith,
AdderUser, Rogerdpack, Nihil novi, Flyer22, Enyavar, Galapah, Editor2020, Siri Keeton, Candalua, Amble, Emitter~enwiki, Citation
bot, Acebulf, Abc518, Obankston, RjwilmsiBot, WildBot, John of Reading, GoingBatty, Boojum Snark, Spatrick99, Midas02, Luke-
Hancock, BG19bot, Manoguru, NanoMadScientist, Arttechlaw, Kozmokonstans, Sminthopsis84, Inanygivenhole, M.AliShoaib, Noyster,
Mfb, Filedelinkerbot, BethNaught and Anonymous: 30
• Logic gate Source: https://en.wikipedia.org/wiki/Logic_gate?oldid=665017088 Contributors: AxelBoldt, Magnus Manske, Peter Winnberg,
Derek Ross, MarXidad, The Anome, BenBaker, Jkominek, Mudlock, Heron, Stevertigo, Frecklefoot, RTC, Michael Hardy, Booy-
abazooka, Mahjongg, Dominus, SGBailey, Ixfd64, Karada, Mac, Glenn, Netsnipe, GRAHAMUK, Arteitle, Reddi, Dysprosia, Colin
Marquardt, Maximus Rex, Mrand, Omegatron, Jni, Sjorford, Robbot, Lord Kelvin, Pingveno, Bkell, Ianml, Paul Murray, Mushroom, An-
cheta Wis, Centrx, Giftlite, Andy, DavidCary, Peruvianllama, Everyking, Pashute, AJim, Andris, Espetkov, Vadmium, LucasVB, Kaldari,
CSTAR, Creidieki, Jlang, Kineox~enwiki, Mormegil, Discospinster, Rich Farmbrough, Luzian~enwiki, Roo72, LindsayH, SocratesJedi,
ESkog, ZeroOne, Plugwash, Nabla, CanisRufus, Aecis, Diomidis Spinellis, Smalljim, La goutte de pluie, Hooperbloob, Jumbuck, Guy
Harris, Arthena, Blues-harp, Lectonar, Pion, Bantman, N313t3~enwiki, BRW, Wtshymanski, Rick Sidwell, Cburnett, Deadworm222,
Bonzo, Alai, Axeman89, LunaticFringe, Bookandcoffee, Dan100, Cipherswarm, Smark33021, Boothy443, Mindmatrix, Jonathan de
Boyne Pollard, Bkkbrad, VanFowler, Kglavin, Karmosin, The Nameless, V8rik, BD2412, Syndicate, ZanderSchubert, GOD, Bruce1ee,
Ademkader, DoubleBlue, Firebug, Mirror Vax, Latka, Ewlyahoocom, Swtpc6800, Fresheneesz, Vonkje, DVdm, Bgwhite, The Ram-
bling Man, YurikBot, Adam1213, RussBot, Akamad, Stephenb, Yyy, Robchurch, FreelanceWizard, Zwobot, Rohanmittal, StuRat, Reyk,
Urocyon, HereToHelp, Anclation~enwiki, Easter Monkey, SorryGuy, AMbroodEY, JDspeeder1, Adam outler, Crystallina, SmackBot,
Eveningmist, Jcbarr, Frymaster, Canthusus, Folajimi, Andy M. Wang, Lindosland, JoeKearney, SynergyBlades, Oli Filth, MovGP0,
Lightspeedchick, Jjbeard~enwiki, Audriusa, Ian Burnet~enwiki, Can't sleep, clown will eat me, Nick Levine, KevM, Atilme, Epachamo,
Hgilbert, Jon Awbrey, Shadow148, SashatoBot, Lambiam, Kuru, MagnaMopus, Athernar, Igor Markov, Mgiganteus1, JHunterJ, Ci-
kicdragan, Robert Bond, Dicklyon, Mets501, Dacium, JYi, J Di, Aeons, Rangi42, Marysunshine, Eassin, Tawkerbot2, DonkeyKong64,
Drinibot, Circuit dreamer, Skoch3, Arnavion, Gregbard, Rajiv Beharie, Mblumber, Abhignarigala, Mello newf, Dancter, Tawkerbot4,
DumbBOT, Omicronpersei8, Lordhatrus, Thijs!bot, Epbr123, N5iln, Al Lemos, Marek69, DmitTrix, James086, Towopedia, Eleuther,
Stannered, AntiVandalBot, USPatent, MER-C, Wasell, Massimiliano Lincetto, Bongwarrior, VoABot II, JNW, Yandman, Rhdv, M
3bdelqader, Robin S, Rickterp, MartinBot, Rettetast, Glrx, J.delanoy, Jonpro, Feis-Kontrol, Zen-in, Jeepday, Eibx, Bigdumbdinosaur,
FreddieRic, Hanacy, Sunderland06, Cometstyles, Tiggerjay, Tygrrr, DorganBot, Alex:D, Barber32, Idioma-bot, VolkovBot, Hersfold,
AlnoktaBOT, Lear’s Fool, Philip Trueman, PNG crusade bot, TXiKiBoT, GLPeterson, Mamidanna, Murugango, Djkrajnik, Salvar,
The Tetrast, Corvus cornix, Jackfork, Inductiveload, Dirkbb, Updatebjarni, STEDMUNDS07, Logan, Neparis, SieBot, Niv.sarig, I Like
Cheeseburgers, ToePeu.bot, Gerakibot, Teh Naab, Berserkerus, Evaluist, Oxymoron83, Steven Zhang, WimdeValk, ClueBot, The Thing
That Should Not Be, Rilak, Boing! said Zebedee, CounterVandalismBot, Namazu-tron, Alexbot, Ftbhrygvn, EddyJ07, Dspark76, Hans
Adler, The Red, Abhishek Jacob, Horselover Frost, Versus22, Egmontaz, DumZiBoT, XLinkBot, Marylee23, MystBot, Iranway, Addbot,
Willking1979, Melab-1, A0602336, Chef Super-Hot, Ashton1983, Vishnava, Fluffernutter, Rchard2scout, Hmorris94, Tyw7, Tide rolls,
Lightbot, OlEnglish, Legobot, PlankBot, Luckas-bot, Ptbotgourou, THEN WHO WAS PHONE?, Knownot, Alienskull, AnomieBOT,
0x38I9J*, Jim1138, JackieBot, Piano non troppo, Keithbob, Materialscientist, Spirit469, Citation bot, Bean49, Xqbot, RMSfromFtC,
Sketchmoose, Big angry doggy, Capricorn42, Coretheapple, RibotBOT, Elep2009, XPEHOPE3, Joaquin008, Vdsharma12, FrescoBot,
Roman12345, Machine Elf 1735, Cannolis, Pinethicket, Jschnur, RedBot, MastiBot, SpaceFlight89, Forward Unto Dawn, Cnwilliams,
Wikitikitaka, Blackenblue, Vrenator, Zvn, Clarkcj12, MrX, Meistro god, Galloping Moses, EmausBot, John of Reading, Trinibones,
Wikipelli, Draconicfire, GOVIND SANGLI, Wayne Slam, Dmitry123456, Ontyx, Carmichael, Tijfo098, GrayFullbuster, Protoborg,
Stevenglowa, ClueBot NG, Jack Greenmaven, Morgankevinj huggle, VladikVP, Marechal Ney, Masssly, Vibhijain, Jk2q3jrklse, Helpful
Pixie Bot, Wbm1058, Lowercase sigmabot, Mark Arsten, CitationCleanerBot, Snow Blizzard, Husigeza, RscprinterBot, Safeskiboy-
dunknoe, FrederickE, Teammm, Mrt3366, Rsmary, Sha-256, Harsh 2580, Lugia2453, Itsmeshiva, Red-eyed demon, Jamesmcmahon0,
Tentinator, Lilbonanza, Mz bankie, Jianhui67, Abhinav dw6, Cdouglas32, Trax support, TerryAlex, Gfdsfgfgfg, Areyoureadylouie, Char-
liegirl321, Bobbbbbbbbvvvvvcvcv, KasparBot and Anonymous: 549
• Logical biconditional Source: https://en.wikipedia.org/wiki/Logical_biconditional?oldid=629322133 Contributors: Patrick, TakuyaMu-
rata, BAxelrod, Dysprosia, Snobot, Giftlite, DavidCary, Recentchanges, Lethe, Andycjp, Discospinster, Paul August, Elwikipedista~enwiki,
Oleg Alexandrov, Velho, MattGiuca, BD2412, Kbdank71, Jittat~enwiki, Gaius Cornelius, Arthur Rubin, SmackBot, InverseHypercube,
Melchoir, WookieInHeat, GoOdCoNtEnT, Bluebot, Qphilo, Robma, Radagast83, Jon Awbrey, Lambiam, Bjankuloski06en~enwiki, Rain-
warrior, Mets501, Adambiswanger1, CBM, Gregbard, Cydebot, Julian Mendez, Shirulashem, Letranova, Infovarius, CommonsDelinker,
J.delanoy, Freekh, Anonymous Dissident, Wolfrock, Graymornings, Tiny plastic Grey Knight, Francvs, DEMcAdams, ClueBot, Watch-
duck, Alejandrocaro35, MilesAgain, 1ForTheMoney, Addbot, Jarble, Meisam, Yobot, Worldbruce, TaBOT-zerem, AnomieBOT, Ma-
chine Elf 1735, 777sms, Mijelliott, Kuzmaka, JSquish, Chharvey, Wayne Slam, DASHBotAV, Chester Markel, Masssly, MerlIwBot,
Pine, Hanlon1755 and Anonymous: 35
• Logical conjunction Source: https://en.wikipedia.org/wiki/Logical_conjunction?oldid=650824164 Contributors: AxelBoldt, Toby Bar-
tels, Enchanter, B4hand, Mintguy, Stevertigo, Chas zzz brown, Michael Hardy, EddEdmondson, Justin Johnson, TakuyaMurata, Poor
Yorick, Andres, Dysprosia, Jitse Niesen, Fredrik, Voodoo~enwiki, Goodralph, Snobot, Giftlite, Oberiko, Lethe, Yekrats, Jason Quinn,
Macrakis, Brockert, Leonard Vertighel, ALE!, Wikimol, Rdsmith4, Poccil, Richie, RuiMalheiro, Cfailde, SocratesJedi, Paul August,
Emvee~enwiki, Rzelnik, Ling Kah Jai, Oleg Alexandrov, Mindmatrix, Bluemoose, Btyner, LimoWreck, Graham87, BD2412, Kbdank71,
VKokielov, Jameshfisher, Fresheneesz, Chobot, Hede2000, Dijxtra, Trovatore, Mditto, EAderhold, Vanished user 34958, JoanneB, Tom
Morris, Melchoir, Bluebot, Cybercobra, Richard001, Jon Awbrey, Lambiam, Clark Mobarry, TastyPoutine, JoshuaF, Happy-melon,
Daniel5127, Gregbard, Cydebot, Thijs!bot, JAnDbot, Slacka123, VoABot II, Vujke, Gwern, Oren0, Santiago Saint James, Crisneda2000,
R'n'B, CommonsDelinker, AdrienChen, On This Continent, GaborLajos, Policron, Enix150, Trevor Goodyear, Hotfeba, TXiKiBoT,
Geometry guy, Wikiisawesome, Wolfrock, SieBot, WarrenPlatts, Oxymoron83, Majorbrainy, Callowschoolboy, Francvs, Classicale-
con, DEMcAdams, Niceguyedc, Watchduck, Hans Adler, Lab-oratory, Addbot, MrOllie, CarsracBot, Meisam, Legobot, Luckas-bot,
Yobot, BG SpaceAce, АлександрВв, No names available, MastiBot, H.ehsaan, Magmalex, EmausBot, Mjaked, 2andrewknyazev, Friet-
jes, Masssly, Scwarebang, Interapple and Anonymous: 91
• Logical connective Source: https://en.wikipedia.org/wiki/Logical_connective?oldid=666225091 Contributors: AxelBoldt, Rmhermen,
Christian List, Stevertigo, Michael Hardy, Dominus, Justin Johnson, TakuyaMurata, Ahoerstemeier, AugPi, Andres, Dysprosia, Hyacinth,
Robbot, Sbisolo, Ojigiri~enwiki, Filemon, Snobot, Giftlite, DavidCary, Risk one, Siroxo, Boothinator, Wiki Wikardo, Kaldari, Sam Ho-
cevar, Indolering, Abdull, Rfl, Jiy, Guanabot, Paul August, ZeroOne, Elwikipedista~enwiki, Charm, Chalst, Shanes, EmilJ, Spoon!,
Kappa, SurrealWarrior, Suruena, Bookandcoffee, Oleg Alexandrov, Joriki, Mindmatrix, Graham87, BD2412, Kbdank71, Hiding, Fresh-
eneesz, Chobot, YurikBot, RussBot, Gaius Cornelius, Rick Norwood, Trovatore, Cullinane, Arthur Rubin, Masquatto, Nahaj, SmackBot,
Incnis Mrsi, InvictaHOG, JRSP, Chris the speller, Bluebot, Tolmaion, Jon Awbrey, Lambiam, Nishkid64, Bjankuloski06en~enwiki,
JHunterJ, RichardF, JRSpriggs, CRGreathouse, CBM, Gregbard, Cydebot, Julian Mendez, DumbBOT, Letranova, Jdm64, Danger, Dun-
canHill, David Eppstein, Nleclerc~enwiki, R'n'B, Christian424, GoatGuy, Darkvix, Arcanedude91, Policron, VolkovBot, Jeff G., TXiKi-
BoT, Anonymous Dissident, Philogo, Dmcq, Sergio01, SieBot, Gerakibot, Yintan, Skippydo, Huku-chan, Denisarona, Francvs, ClueBot,
Justin W Smith, Ktr101, Watchduck, ZuluPapa5, Hans Adler, MilesAgain, Hugo Herbelin, Djk3, Johnuniq, Addbot, Mortense, Melab-1,
Download, SpBot, Peti610botH, Loupeter, Yobot, Amirobot, AnomieBOT, Racconish, ArthurBot, Xqbot, El Caro, Luis Felipe Schenone,
Entropeter, FrescoBot, Citation bot 1, RandomDSdevel, Pinethicket, Timboat, Der Elbenkoenig, Dude1818, Orenburg1, Hriber, Green-
fernglade, Ipersite, BAICAN XXX, ‫ טוב‬,‫נו‬, Seabuoy, Mentibot, Tijfo098, Mhiji, ClueBot NG, Thebombzen, Masssly, Helpful Pixie Bot,
Owarihajimari, Weaktofu, Hanlon1755, Fuebar, Tommor7835, Everymorning, Star767, Dai Pritchard, Sk8rcoolkat6969, Student342
and Anonymous: 87
• Logical disjunction Source: https://en.wikipedia.org/wiki/Logical_disjunction?oldid=639359080 Contributors: AxelBoldt, Bryan Derk-
sen, Tarquin, Toby Bartels, B4hand, Mintguy, Patrick, D, Michael Hardy, Pit~enwiki, Stephen C. Carlson, Ixfd64, Justin Johnson,
TakuyaMurata, Poor Yorick, DesertSteve, Dysprosia, Colin Marquardt, Robbot, Kowey, Voodoo~enwiki, Tobias Bergemann, Giftlite, Re-
centchanges, Lethe, Macrakis, Espetkov, Bact, Poccil, Guanabot, SocratesJedi, Paul August, ZeroOne, Jnestorius, Daemondust, Blinken,
Obradovic Goran, Hesperian, Emvee~enwiki, Ling Kah Jai, Oleg Alexandrov, Thryduulf, Mindmatrix, Kzollman, Bluemoose, Mandarax,
LimoWreck, BD2412, Kbdank71, Xiao Li, FlaBot, Gringo300, Mathbot, Fresheneesz, Chobot, YurikBot, Gaius Cornelius, Dijxtra,
Trovatore, Tony1, Mditto, Acetic Acid, Vanished user 34958, Nahaj, Katieh5584, Tom Morris, Melchoir, BiT, Bluebot, Kurykh, Or-
angeDog, Cybercobra, Charles Merriam, Jon Awbrey, EdC~enwiki, Doc Daneeka, RekishiEJ, CBM, Gregbard, Cydebot, Julian Mendez,
PamD, Thijs!bot, Wikid77, Moulder, Nick Number, JAnDbot, Arachnocapitalist, Slacka123, Laymanal, Magioladitis, Tony Winter,
David65536, Santiago Saint James, CommonsDelinker, On This Continent, Supuhstar, Policron, Althepal, Enix150, VolkovBot, TXiKi-
BoT, Gwib, Bbukh, World.suman, SieBot, Soler97, Ctxppc, AlanUS, Anyeverybody, Francvs, Classicalecon, ClueBot, C xong, Rumping,
Watchduck, Hans Adler, Wernhervonbraun, MrVanBot, CarsracBot, AndersBot, FiriBot, Tripsone, Meisam, Legobot, Luckas-bot, Max-
damantus, Charlatino, MauritsBot, Xqbot, Ruy Pugliesi, FrescoBot, RedBot, Gamewizard71, Dinamik-bot, EmausBot, Matthewbeckler,
2andrewknyazev, Pengkeu, ClueBot NG, Masssly, Scwarebang, PhnomPencil, CarrieVS, Fuebar, Lemnaminor and Anonymous: 86
• Material conditional Source: https://en.wikipedia.org/wiki/Material_conditional?oldid=665659334 Contributors: William Avery, Dcljr,
AugPi, Charles Matthews, Dcoetzee, Doradus, Cholling, Giftlite, Jason Quinn, Nayuki, TedPavlic, Elwikipedista~enwiki, Nortexoid,
Vesal, Eric Kvaalen, BD2412, Kbdank71, Martin von Gagern, Joel D. Reid, Fresheneesz, Vonkje, NevilleDNZ, RussBot, KSchutte,
NawlinWiki, Trovatore, Avraham, Closedmouth, Arthur Rubin, SyntaxPC, Fctk~enwiki, SmackBot, Amcbride, Incnis Mrsi, Pokipsy76,
BiT, Mhss, Jaymay, Tisthammerw, Sholto Maud, Robma, Cybercobra, Jon Awbrey, Oceanofperceptions, Byelf2007, Grumpyyoung-
man01, Clark Mobarry, Beefyt, Rory O'Kane, Dreftymac, Eassin, JRSpriggs, Gregbard, FilipeS, Cydebot, Julian Mendez, Thijs!bot,
Egriffin, Jojan, Escarbot, Applemeister, WinBot, Salgueiro~enwiki, JAnDbot, Olaf, Alastair Haines, Arno Matthias, JaGa, Santiago
Saint James, Pharaoh of the Wizards, Pyrospirit, SFinside, Anonymous Dissident, The Tetrast, Cnilep, Radagast3, Newbyguesses, Light-
breather, Paradoctor, Iamthedeus, Soler97, Francvs, Classicalecon, Josang, Ruy thompson, Watchduck, Hans Adler, Djk3, Marc van
Leeuwen, Tbsdy lives, Addbot, Melab-1, Fyrael, Morriswa, SpellingBot, CarsracBot, Chzz, Jarble, Meisam, Luckas-bot, AnomieBOT,
Sonia, Pnq, Bearnfæder, FrescoBot, Greyfriars, Machine Elf 1735, RedBot, MoreNet, Beyond My Ken, John of Reading, Hgetnet, Hi-
bou57, ClueBot NG, Movses-bot, Jiri 1984, Masssly, Dooooot, Noobnubcakes, Hanlon1755, Leif Czerny, CarrieVS, Jochen Burghardt,
Lukekfreeman, ArchReader, NickDragonRyder, Indomitavis, Rathkirani, AnotherPseudonym, Xerula, Matthew Kastor, Mathematical
Truth and Anonymous: 73
• Monotonic function Source: https://en.wikipedia.org/wiki/Monotonic_function?oldid=667673899 Contributors: AxelBoldt, Andre En-
gels, Miguel~enwiki, Edemaine, Patrick, Michael Hardy, Ixfd64, TakuyaMurata, Cherkash, Charles Matthews, Dino, Fibonacci, Hen-
rygb, Epheterson, Tobias Bergemann, Tosha, Giftlite, Markus Krötzsch, BenFrantzDale, MSGJ, Macrakis, Luqui, Smyth, Paul August,
Bender235, Rgdboer, Scott Ritchie, Haham hanuka, AppleJuggler, Jumbuck, Caesura, SteinbDJ, Totalcynic, Oleg Alexandrov, MFH,
Smmurphy, Qwertyus, FreplySpang, Intgr, Kri, Chobot, RussBot, Trovatore, SmackBot, KocjoBot~enwiki, Eskimbot, Mcld, Mhss,
Roberto.zanasi, Craig t moore, Berland, Addshore, TrogdorPolitiks, SashatoBot, DafadGoch, Supertigerman, Jackzhp, CBM, Myasuda,
Simeon, Gregbard, Dugwiki, Escarbot, JAnDbot, Jdrumgoole, Albmont, Sullivan.t.j, David Eppstein, Hans Dunkelberg, Policron, Yecril,
VolkovBot, Pleasantville, Saxobob, Gavin.collins, SieBot, Gerakibot, Dawn Bard, AlanUS, ClueBot, Justin W Smith, Watchduck, Ben-
der2k14, Kausikghatak, Qwfp, Marc van Leeuwen, Addbot, Fgnievinski, Topology Expert, Econotechie, Tide rolls, PV=nRT, Luckas-
bot, TaBOT-zerem, Calle, AnomieBOT, Citation bot, Papppfaffe, Isheden, Hisoka-san, ANDROBETA, RandomDSdevel, Unbitwise,
EmausBot, 478jjjz, TuHan-Bot, ZéroBot, Zap Rowsdower, ResearchRave, ClueBot NG, Helpful Pixie Bot, Costeaeb, The1337gamer,
Miguelcruzf, ChrisGualtieri, Monkbot, Mgkrupa, Some1Redirects4You and Anonymous: 75
• N-ary group Source: https://en.wikipedia.org/wiki/N-ary_group?oldid=661804563 Contributors: Zundark, Michael Hardy, Zaslav, Il-
mari Karonen, SmackBot, LokiClock, Rumping, Hans Adler, TimothyRias, Addbot, Yobot, Charvest, RedBot, Mistory, CSJJ104 and
Nonassociative
• Negation Source: https://en.wikipedia.org/wiki/Negation?oldid=652849228 Contributors: AxelBoldt, Zundark, Arvindn, Toby Bartels,
William Avery, Ryguasu, Youandme, Stevertigo, Edward, Patrick, Ihcoyc, Andres, Hyacinth, David Shay, Omegatron, Francs2000,
Robbot, Lowellian, Hadal, Wikibot, Benc, Tobias Bergemann, Adam78, Giftlite, DocWatson42, Nayuki, Chameleon, Neilc, Jonathan
Grynspan, Poccil, Paul August, Chalst, EmilJ, Spoon!, John Vandenberg, Nortexoid, Rajah, Deryck Chan, Daf, Pazouzou, Obradovic
Goran, Pearle, Knucmo2, Musiphil, Cesarschirmer~enwiki, RainbowOfLight, Forderud, Eric Qel-Droma, Oleg Alexandrov, Mindma-
trix, Troels.jensen~enwiki, Bluemoose, BrydoF1989, TAKASUGI Shinji, BD2412, Kbdank71, FlaBot, Gparker, Chobot, DTOx, Vi-
sor, Roboto de Ajvol, YurikBot, RussBot, Xihr, RJC, Gaius Cornelius, Cookman, Trovatore, Vanished user 1029384756, Dhollm,
Mditto, SmackBot, KnowledgeOfSelf, Eskimbot, Bluebot, Iain.dalton, Thumperward, Ewjw, Furby100, Rrburke, Mr.Z-man, UU, Cy-
bercobra, Revengeful Lobster, Decltype, Jon Awbrey, Quatloo, Byelf2007, Lambiam, Christoffel K~enwiki, Loadmaster, Hans Bauer,
Adambiswanger1, Mudd1, TheTito, Andkore, Simeon, Gregbard, FilipeS, Cydebot, Reywas92, Thijs!bot, Zron, Escarbot, Djihed, Slacka123,
Catgut, WhatamIdoing, Gwern, Santiago Saint James, R'n'B, Kavadi carrier, Policron, Enix150, VolkovBot, Semmelweiss, Pasixxxx,
Cs-Reaperman, PGSONIC, TXiKiBoT, Gwib, Anonymous Dissident, Ontoraul, HeirloomGardener, AlleborgoBot, SieBot, Ivan Štam-
buk, Soler97, Francvs, ClueBot, PixelBot, Alejandrocaro35, Holothurion, AHRtbA==, HumphreyW, DumZiBoT, Mifter, Addbot, Con-
CompS, AndersBot, Gail, Jarble, Meisam, Qwertymith, Legobot, Luckas-bot, AnomieBOT, Nasnema, Oursipan, Zhentmdfan, Pinethicket,
Half price, MastiBot, DixonDBot, Mayoife, Xnn, Shadex9999, EmausBot, ClueBot NG, Wcherowi, Strcat, Wdchk, Masssly, Hadi Payami,
Victor Yus, ChrisGualtieri, YFdyh-bot, Dexbot, Arfæst Ealdwrítere, GinAndChronically, Solid Frog, Reybansingh and Anonymous: 93
• NOR gate Source: https://en.wikipedia.org/wiki/NOR_gate?oldid=667331982 Contributors: Michael Hardy, Booyabazooka, Cherkash,
HPA, Bkell, Paul Murray, Dbroadwell, Hugh Mason, Rich Farmbrough, Andros 1337, Bender235, Smalljim, Hooperbloob, Wtmitchell,
Velella, Wtshymanski, Cburnett, Matevzk, JVz, Ademkader, Fresheneesz, Korg, YurikBot, Dmharvey, RussBot, Jengelh, Toffile, Trova-
tore, Nkendrick, Anclation~enwiki, SmackBot, InverseHypercube, Gilliam, Colonies Chris, Jjbeard~enwiki, Can't sleep, clown will eat
me, Lambiam, Saxbryn, Dl2000, Tenbergen, Stannered, Anoop anooprs, Sodabottle, Cscade, RockMFR, GandalfDaGraay, NewEng-
landYankee, Kloisiie, Jamelan, Inductiveload, Aby.india, Joshua Griisser, Keilana, CultureDrone, Adrianwn, Lampak, Auntof6, 718
Bot, Watchduck, Alejandrocaro35, AlanM1, Addbot, Elfix, Erik9bot, Prari, Redrose64, Meaghan, Robert.Baruch, ‫دالبا‬, Tommy2010,
Thecheesykid, Jaredjeya, Alpha Quadrant, F457fede, ClueBot NG, Frietjes, Jcherman, Lightrace, YiFeiBot, Jrgsampaio, Jamieddd and
Anonymous: 41
• Post canonical system Source: https://en.wikipedia.org/wiki/Post_canonical_system?oldid=664888271 Contributors: R.e.s., Cydebot,
Alaibot, Addbot, Yobot, Pcap, DixonDBot, Jeffrey Bosboom and Anonymous: 4
• Post correspondence problem Source: https://en.wikipedia.org/wiki/Post_correspondence_problem?oldid=666654042 Contributors:
Jan Hidders, Arvindn, Ixfd64, Rodney Topor, Dcoetzee, Ancheta Wis, Giftlite, Gro-Tsen, Mboverload, Nayuki, Neilc, Gachet, As-
cánder, Rjmccall, Caesura, MushroomCloud, Thatha, Marudubshinki, Rjwilmsi, MarSch, NekoDaemon, Gparker, YurikBot, Trovatore,
R.e.s., Misza13, PhS, That Guy, From That Show!, SmackBot, BiT, Chris the speller, DMacks, James Van Boxtel, Geh, Chrisahn, Cyde-
bot, Xprotocol, JAnDbot, Jestbiker, Pendragon81, TXiKiBoT, Aaron Rotenberg, SieBot, Structural Induction, Sambrow, Addbot, DOI
bot, IOLJeff, Yobot, Citation bot, Citation bot 1, RjwilmsiBot, Alph Bot, Twistedcerebrum, RenamedUser01302013, ZéroBot, Frietjes,
ChrisGualtieri, Deltahedron, Jochen Burghardt, Igor potapov, PIerre.Lescanne, Fschwarze, Monkbot and Anonymous: 23
• Post’s inversion formula Source: https://en.wikipedia.org/wiki/Post%27s_inversion_formula?oldid=616923541 Contributors: KittySaturn, Billlion, Rjwilmsi, SmackBot, Mungbean, A. Pichler, Joe Schmedley, Cardamon, La Parka Your Car, FrescoBot, Ajv39 and Anonymous: 7
• Post’s lattice Source: https://en.wikipedia.org/wiki/Post%27s_lattice?oldid=648793836 Contributors: EmilJ, LOL, Rjwilmsi, David Eppstein, Addbot, SpBot, Luckas-bot, Gamewizard71, Helpful Pixie Bot, Janniksiebert and Anonymous: 2
• Post’s theorem Source: https://en.wikipedia.org/wiki/Post%27s_theorem?oldid=569002495 Contributors: Chinju, Charles Matthews,
MathMartin, Giftlite, Dratman, Alfeld, Mani1, Gauge, Peter M Gerdes, NekoDaemon, Laubrau~enwiki, Trovatore, RDBury, BeteNoir,
Zero sharp, CBM, Cydebot, JAnDbot, R'n'B, VolkovBot, Kyle the bot, Jamelan, Dddenton, Addbot, Luckas-bot, ArthurBot, FrescoBot,
Abradoks, EmausBot, Mentibot, Bomazi, Helpful Pixie Bot, Palmik~enwiki and Anonymous: 4
• Post–Turing machine Source: https://en.wikipedia.org/wiki/Post%E2%80%93Turing_machine?oldid=574352346 Contributors: Zun-
dark, The Anome, Michael Hardy, Ahoerstemeier, Palfrey, Altenmann, Giftlite, Rich Farmbrough, Night Gyr, Pearle, Gene Nygaard,
Casey Abell, Trovatore, R.e.s., SmackBot, Jushi, Mhss, Robth, Wvbailey, Urebelscum, CRGreathouse, CBM, ShelfSkewed, Aruffo,
A.M.L., XyBot, Simuloid, AlleborgoBot, Mild Bill Hiccup, StevenDH, Addbot, Yobot, Pcap, AnomieBOT, Noq, AdjustShift, Leventov,
Calcyman, John of Reading, WikitanvirBot, Amr.rs, Jay8g, A.kernitsky, CarrieVS and Anonymous: 9
• Propositional calculus Source: https://en.wikipedia.org/wiki/Propositional_calculus?oldid=668343718 Contributors: The Anome, Tar-
quin, Jan Hidders, Tzartzam, Michael Hardy, JakeVortex, Kku, Justin Johnson, Minesweeper, Looxix~enwiki, AugPi, Rossami, Ev-
ercat, BAxelrod, Charles Matthews, Dysprosia, Hyacinth, UninvitedCompany, BobDrzyzgula, Robbot, Benwing, MathMartin, Rorro,
GreatWhiteNortherner, Marc Venot, Ancheta Wis, Giftlite, Lethe, Jason Quinn, Gubbubu, Gadfium, LiDaobing, Grauw, Almit39, Ku-
tulu, Creidieki, Urhixidur, PhotoBox, EricBright, Extrapiramidale, Rich Farmbrough, Guanabot, FranksValli, Paul August, Glenn Willen,
Elwikipedista~enwiki, Tompw, Chalst, BrokenSegue, Cmdrjameson, Nortexoid, Varuna, Red Winged Duck, ABCD, Xee, Nightstallion,
Bookandcoffee, Oleg Alexandrov, Japanese Searobin, Joriki, Linas, Mindmatrix, Ruud Koot, Trevor Andersen, Waldir, Graham87, Qw-
ertyus, Kbdank71, Porcher, Koavf, PlatypeanArchcow, Margosbot~enwiki, Kri, Gareth E Kegg, Roboto de Ajvol, Hairy Dude, Russell C.
Sibley, Gaius Cornelius, Ihope127, Rick Norwood, Trovatore, TechnoGuyRob, Jpbowen, Cruise, Voidxor, Jerome Kelly, Arthur Rubin,
Reyk, Teply, GrinBot~enwiki, SmackBot, Michael Meyling, Imz, Incnis Mrsi, Srnec, Mhss, Bluebot, Cybercobra, Jon Awbrey, Andeggs,
Ohconfucius, Lambiam, Wvbailey, Scientizzle, Loadmaster, Mets501, Pejman47, JulianMendez, Adriatikus, Zero sharp, JRSpriggs,
George100, Harold f, Vaughan Pratt, CBM, ShelfSkewed, Sdorrance, Gregbard, Cydebot, Julian Mendez, Taneli HUUSKONEN, Ap-
plemeister, GeePriest, Salgueiro~enwiki, JAnDbot, Thenub314, Hut 8.5, Magioladitis, Paroswiki, MetsBot, JJ Harrison, Epsilon0, San-
tiago Saint James, R'n'B, N4nojohn, Wideshanks, TomS TDotO, Created Equal, The One I Love, Our Fathers, STBotD, Mistercupcake,
VolkovBot, JohnBlackburne, TXiKiBoT, Lynxmb, The Tetrast, Philogo, Wikiisawesome, General Reader, Jmath666, VanishedUser-
ABC, Sapphic, Newbyguesses, SieBot, Iamthedeus, Дарко Максимовић, Jimmycleveland, OKBot, Svick, Huku-chan, Francvs, Clue-
Bot, Unica111, Wysprgr2005, Garyzx, Niceguyedc, Thinker1221, Shivakumar2009, Estirabot, Alejandrocaro35, Reuben.cornel, Hans
Adler, MilesAgain, Djk3, Lightbearer, Addbot, Rdanneskjold, Legobot, Yobot, Tannkrem, Stefan.vatev, Jean Santeuil, AnomieBOT,
Materialscientist, Ayda D, Doezxcty, Cwchng, Omnipaedista, SassoBot, January2009, Thehelpfulbot, FrescoBot, LucienBOT, Xenfreak,
HRoestBot, Dinamik-bot, EmausBot, John of Reading, 478jjjz, Chharvey, Chewings72, Bomazi, Tijfo098, MrKoplin, Frietjes, Help-
ful Pixie Bot, Brad7777, Wolfmanx122, Hanlon1755, Jochen Burghardt, Mark viking, Mrellisdee, Christian Nassif-Haynes, Matthew
Kastor, Marco volpe, Jwinder47, Mario Castelán Castro, Eavestn, SiriusGR and Anonymous: 148
• Recursively enumerable set Source: https://en.wikipedia.org/wiki/Recursively_enumerable_set?oldid=654867810 Contributors: Michael
Hardy, Hyacinth, Aleph4, Hmackiernan, Kiwibird, MathMartin, Dissident, Neilc, Satyadev, Bender235, MKI, Hackwrench, Caesura,
Amelio Vázquez, ByteBaron, Mathbot, NekoDaemon, BMF81, Roboto de Ajvol, YurikBot, Archelon, Trovatore, Jpbowen, DYLAN
LENNON~enwiki, That Guy, From That Show!, SmackBot, Pkirlin, Eskimbot, Mhss, Benkovsky, Viebel, Henning Makholm, Zde~enwiki,
Mets501, Zero sharp, JRSpriggs, CBM, Gregbard, Cydebot, Julian Mendez, Epbr123, Hermel, David Eppstein, Jamelan, Frank Stephan,
Justin W Smith, Wanderer57, Addbot, Pcap, Omnipaedista, VladimirReshetnikov, Per Ardua, O.fasching.logic.at, Pritish.kamath, Emaus-
Bot, WikitanvirBot, ZéroBot, Mariannealexandrino, Jochen Burghardt, Verdana Bold, DigitalRunes and Anonymous: 24
• Semi-Thue system Source: https://en.wikipedia.org/wiki/Semi-Thue_system?oldid=659670210 Contributors: Stephan Schulz, Math-
Martin, Thv, Woohookitty, Linas, SDC, Chris Pressey, NekoDaemon, Gurch, YurikBot, Dobromila, R.e.s., Ott2, Nielses, PhS, SmackBot,
Nejko, Wvbailey, The Real Marauder, R'n'B, HiEv, Stereotype441, Ivan Štambuk, Bmcm, PixelBot, Addbot, Yobot, Pcap, FrescoBot,
SchreyP, Warchiw, Tijfo098, Snotbot, Jochen Burghardt and Anonymous: 7
• Sheffer stroke Source: https://en.wikipedia.org/wiki/Sheffer_stroke?oldid=659255469 Contributors: AxelBoldt, Fubar Obfusco, David
spector, Vik-Thor, Michael Hardy, AugPi, Jouster, Dcoetzee, Dysprosia, Markhurd, Hyacinth, Cameronc, Johnleemk, Robbot, Saaska,
Rorro, Paul Murray, Snobot, Giftlite, DocWatson42, Brouhaha, Zigger, Gubbubu, Halo, Sam, Urhixidur, Ratiocinate, Rich Farmbrough,
Leibniz, Pie4all88, TheJames, SocratesJedi, Paul August, Chalst, EmilJ, Nortexoid, Redfarmer, Emvee~enwiki, Dominic, Bookandcoffee,
Drakferion, Woohookitty, Mindmatrix, Steven Luo, Ruud Koot, Wayward, BD2412, Qwertyus, Kbdank71, R.e.b., Ademkader, FlaBot,
Mathbot, George Leung, Algebraist, RobotE, Sceptre, Imagist, Archelon, Ksyrie, NormalAsylum, Dijxtra, Trovatore, Nad, Yahya Abdal-
Aziz, Prolineserver, JMRyan, Rohanmittal, Luethi, JoanneB, SmackBot, Melchoir, Mhss, Chris the speller, Bluebot, Thumperward, UU,
Cybercobra, Jon Awbrey, Lambiam, Loadmaster, Yoderj, CBM, Ezrakilty, Gregbard, Nilfanion, Rotiro, Cydebot, Julian Mendez, Ase-
nine, SpK, Royas, MER-C, Magioladitis, VoABot II, Vujke, Seba5618, Santiago Saint James, Kloisiie, Olmsfam, Somejan, Josephholsten,
The Tetrast, Philogo, Manusharma, Jamelan, Inductiveload, Dogah, CultureDrone, Francvs, ClueBot, Plastikspork, Achlaug, Watchduck,
Dspark76, Hans Adler, Addbot, Meisam, Bunnyhop11, TaBOT-zerem, Erud, M&M987, Dante Cardoso Pinto de Almeida, LittleWink,
Dhanyavaada, Omerta-ve, Dega180, Gamewizard71, RjwilmsiBot, Arielkoiman, Set theorist, Zahnradzacken, Hpvpp, SporkBot, ClueBot
NG, Masssly, Jones11235813, MerlIwBot, SOFooBah, Yamaha5, Brianlen, SarahRMadden, Sofia Koutsouveli and Anonymous: 65
• Singleton (mathematics) Source: https://en.wikipedia.org/wiki/Singleton_(mathematics)?oldid=665329666 Contributors: AxelBoldt,
Toby Bartels, Patrick, Michael Hardy, Revolver, Charles Matthews, Dysprosia, Fibonacci, Robbot, MathMartin, Merovingian, To-
bias Bergemann, Giftlite, Lethe, Danny Rathjens, Oleg Alexandrov, Dionyziz, Isnow, Jshadias, Salix alba, YurikBot, NawlinWiki,
Reyk, KnightRider~enwiki, Melchoir, Octahedron80, Lambiam, IronGargoyle, Loadmaster, Cydebot, Thijs!bot, Jamelan, SieBot, Svick,
Marc van Leeuwen, Addbot, M-J, Luckas-bot, TaBOT-zerem, Obersachsebot, Noamz, Sabarwolf39, RjwilmsiBot, EmausBot, Widr,
MKKowalczyk, Mark viking, Monkbot, GeoffreyT2000 and Anonymous: 16
• Subset Source: https://en.wikipedia.org/wiki/Subset?oldid=658760669 Contributors: Damian Yerrick, AxelBoldt, Youssefsan, XJaM,
Toby Bartels, StefanRybo~enwiki, Edward, Patrick, TeunSpaans, Michael Hardy, Wshun, Booyabazooka, Ellywa, Oddegg, Andres,
Charles Matthews, Timwi, Hyacinth, Finlay McWalter, Robbot, Romanm, Bkell, 75th Trombone, Tobias Bergemann, Tosha, Giftlite,
Fropuff, Waltpohl, Macrakis, Tyler McHenry, SatyrEyes, Rgrg, Vivacissamamente, Mormegil, EugeneZelenko, Noisy, Deh, Paul Au-
gust, Engmark, Spoon!, SpeedyGonsales, Obradovic Goran, Nsaa, Jumbuck, Raboof, ABCD, Sligocki, Mac Davis, Aquae, LFaraone,
Chamaeleon, Firsfron, Isnow, Salix alba, VKokielov, Mathbot, Harmil, BMF81, Chobot, Roboto de Ajvol, YurikBot, Alpt, Dmharvey,
KSmrq, NawlinWiki, Trovatore, Nick, Szhaider, Wasseralm, Sardanaphalus, Jacek Kendysz, BiT, Gilliam, Buck Mulligan, SMP, Or-
angeDog, Bob K, Dreadstar, Bjankuloski06en~enwiki, Loadmaster, Vedexent, Amitch, Madmath789, Newone, CBM, Jokes Free4Me,
345Kai, SuperMidget, Gregbard, WillowW, MC10, Thijs!bot, Headbomb, Marek69, RobHar, WikiSlasher, Salgueiro~enwiki, JAnDbot,
.anacondabot, Pixel ;-), Pawl Kennedy, Emw, ANONYMOUS COWARD0xC0DE, RaitisMath, JCraw, Tgeairn, Ttwo, Maurice Car-
bonaro, Acalamari, Gombang, NewEnglandYankee, Liatd41, VolkovBot, CSumit, Deleet, Rei-bot, Anonymous Dissident, James.Spudeman,
PaulTanenbaum, InformationSpace, Falcon8765, AlleborgoBot, P3d4nt, NHRHS2010, Garde, Paolo.dL, OKBot, Brennie8, Jons63,
42.12. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 271

Loren.wilton, ClueBot, GorillaWarfare, PipepBot, The Thing That Should Not Be, DragonBot, Watchduck, Hans Adler, Computer97,
Noosentaal, Versus22, PCHS-NJROTC, Andrew.Flock, Reverb123, Addbot, , Fyrael, PranksterTurtle, Numbo3-bot, Zorrobot, Jar-
ble, JakobVoss, Luckas-bot, Yobot, Synchronism, AnomieBOT, Jim1138, Materialscientist, Citation bot, Martnym, NFD9001, Char-
vest, 78.26, XQYZ, Egmontbot, Rapsar, HRoestBot, Suffusion of Yellow, Agent Smith (The Matrix), RenamedUser01302013, ZéroBot,
Alexey.kudinkin, Chharvey, Quondum, Chewings72, 28bot, ClueBot NG, Wcherowi, Matthiaspaul, Bethre, Mesoderm, O.Koslowski,
AwamerT, Minsbot, Pratyya Ghosh, YFdyh-bot, Ldfleur, ChalkboardCowboy, Saehry, Stephan Kulla, , Ilya23Ezhov, Sandshark23,
Quenhitran, Neemasri, Prince Gull, Maranuel123, Alterseemann, Rahulmr.17 and Anonymous: 179
• Tag system Source: https://en.wikipedia.org/wiki/Tag_system?oldid=647556667 Contributors: Edward, Michael Hardy, Docu, Angela,
Charles Matthews, Doradus, Giftlite, RJHall, Gene Nygaard, Dionyziz, R.e.s., KnightRider~enwiki, Mhss, Chlewbot, JohnnyNyquist,
Wvbailey, Misof, Cydebot, A.M.L., Gwern, Freekh, Niceguyedc, Spitfire, Addbot, Lightbot, PI314r, Lueckless, Ashutosh y0078 and
Anonymous: 4
• Truth table Source: https://en.wikipedia.org/wiki/Truth_table?oldid=667166695 Contributors: Mav, Bryan Derksen, Tarquin, Larry
Sanger, Webmaestro, Heron, Hephaestos, Bdesham, Patrick, Michael Hardy, Wshun, Liftarn, Ixfd64, Justin Johnson, Delirium, Jimf-
bleak, AugPi, Andres, DesertSteve, Charles Matthews, Dcoetzee, Dysprosia, Markhurd, Hyacinth, Pakaran, Banno, Robbot, RedWolf,
Snobot, Ancheta Wis, Giftlite, Lethe, Jason Quinn, Vadmium, Lst27, Antandrus, JimWae, Schnits, Creidieki, Joyous!, Rich Farm-
brough, ArnoldReinhold, Paul August, CanisRufus, Gershwinrb, Robotje, Billymac00, Nortexoid, Jonsafari, Mdd, LutzL, Alansohn,
Gary, Noosphere, Cburnett, Crystalllized, Bookandcoffee, Oleg Alexandrov, Mindmatrix, Bluemoose, Abd, Graham87, BD2412, Kb-
dank71, Xxpor, JVz, Koavf, Tangotango, Bubba73, FlaBot, Maustrauser, Fresheneesz, Aeroknight, Chobot, DVdm, Bgwhite, Yurik-
Bot, Wavelength, SpuriousQ, Trovatore, Sir48, Kyle Barbour, Cheese Sandwich, Pooryorick~enwiki, Rofthorax, CWenger, Leonar-
doRob0t, Cmglee, KnightRider~enwiki, SmackBot, InverseHypercube, KnowledgeOfSelf, Vilerage, Canthusus, The Rhymesmith, Mhss,
Gaiacarra, Can't sleep, clown will eat me, Chlewbot, Cybercobra, Uthren, Gschadow, Jon Awbrey, Antonielly, Nakke, Dr Smith, Parikshit
Narkhede, Beetstra, Dicklyon, Mets501, Richardcook, Danlev, CRGreathouse, CBM, WeggeBot, Gregbard, Slazenger, Starylon, Cyde-
bot, Flowerpotman, Julian Mendez, Lee, Letranova, Oreo Priest, AntiVandalBot, Kitty Davis, Quintote, Vantelimus, K ganju, JAnDbot,
Avaya1, Olaf, Holly golightly, Johnbrownsbody, R27smith200245, Santiago Saint James, Sevillana~enwiki, CZ Top, Aston Martian, On
This Continent, LordAnubisBOT, Bergin, NewEnglandYankee, Policron, Lights, VolkovBot, The Tetrast, AlleborgoBot, Logan, SieBot,
Paradoctor, Krawi, Djayjp, Flyer22, WimdeValk, Francvs, Someone the Person, ParisianBlade, Hans Adler, XTerminator2000, Wstorr,
Vegetator, Aitias, Qwfp, Staticshakedown, Addbot, Ghettoblaster, AgadaUrbanit, Kiril Simeonovski, C933103, Clon89, Luckas-bot,
Yobot, Nallimbot, Fox89, Racconish, Quad4rax, Xqbot, Addihockey10, RibotBOT, Jonesey95, MastiBot, TBloemink, Onel5969, Mean
as custard, ZéroBot, Tijfo098, ChuispastonBot, ClueBot NG, Akuindo, Millermk, WikiPuppies, Jk2q3jrklse, Wbm1058, Bmusician,
Ceklock, Joydeep, Supernerd11, CitationCleanerBot, Annina.kari, Achal Singh, Wolfmanx122, La marts boys, JYBot, ChamithN, Swash-
ski, Aichotoitinhyeu97, KasparBot and Anonymous: 276
• Truth value Source: https://en.wikipedia.org/wiki/Truth_value?oldid=639584536 Contributors: Mav, Toby Bartels, Patrick, Michael
Hardy, TakuyaMurata, Sethmahoney, Charles Matthews, Hyacinth, ErikDunsing, Tobias Bergemann, Giftlite, Rich Farmbrough, Chalst,
EmilJ, Alansohn, SlimVirgin, AndrejBauer, Cyro, Apokrif, BD2412, Josh Parris, Mayumashu, WhyBeNormal, Chobot, RussBot, Trova-
tore, Zwobot, Tomisti, FatherBrain, SmackBot, Incnis Mrsi, Mhss, Octahedron80, Frap, Chlewbot, Matchups, Nakon, Jon Awbrey,
Byelf2007, Wvbailey, Dbtfz, Kuru, Bjankuloski06en~enwiki, Gveret Tered, CRGreathouse, CBM, Simeon, Gregbard, Cydebot, Xcep-
ticZP, Robertinventor, Letranova, Liquid-aim-bot, JAnDbot, Jackmass, Faizhaider, R'n'B, Senu, VolkovBot, LBehounek, Andrewaskew,
T-9000, Francvs, Watchduck, Hans Adler, Addbot, Luckas-bot, Yobot, Zagothal, ArthurBot, Xqbot, Noamz, RibotBOT, FrescoBot,
Pinethicket, Thabick, EmausBot, Rami radwan, Solomonfromfinland, ZéroBot, Card Zero, Tijfo098, Thecameraguy12345678, Kephir,
ArkadiuszGlowaLaskowski, Gronk Oz and Anonymous: 23
• Turing degree Source: https://en.wikipedia.org/wiki/Turing_degree?oldid=657136623 Contributors: AxelBoldt, Michael Hardy, Gabbe,
Ciphergoth, Charles Matthews, MathMartin, Tobias Bergemann, Gro-Tsen, David Johnson, Alfeld, Gauge, Tompw, Chalst, Peter M
Gerdes, Dalf, Rjwilmsi, NekoDaemon, Tdoune, Bgwhite, Hairy Dude, Archelon, Trovatore, Dr.Halfbaked, SmackBot, Selfworm, Mhss,
Zero sharp, RSido, CmdrObot, CBM, Cydebot, Christian75, Shlomi Hillel, David Eppstein, Jeyradan, CBM2, Cab.jones, MystBot,
Addbot, TutterMouse, Luckas-bot, TaBOT-zerem, Kilom691, Mon oncle, Citation bot, Citation bot 1, Suslindisambiguator, BG19bot,
Walber026, Anrnusna, Toanboe and Anonymous: 9
• Universal algebra Source: https://en.wikipedia.org/wiki/Universal_algebra?oldid=653169020 Contributors: AxelBoldt, Bryan Derksen,
Zundark, Andre Engels, Toby~enwiki, Toby Bartels, Youandme, Michael Hardy, GTBacchus, Andres, Revolver, Charles Matthews,
Dysprosia, Aleph4, Robbot, Fredrik, Sanders muc, Kowey, Fuelbottle, Giftlite, APH, Sam Hocevar, AlexChurchill, Zaslav, Tompw,
Rgdboer, EmilJ, AshtonBenson, Msh210, ABCD, Linas, Mindmatrix, Smmurphy, Isnow, Magidin, Jrtayloriv, Wavelength, Hairy Dude,
Chaos, Wiki alf, Arthur Rubin, RonnieBrown, SmackBot, El Fahno, Alink, Nbarth, Zvar, Spakoj~enwiki, Henning Makholm, WillowW,
HStel, Sam Staton, Sadeghd, Rlupsa, Knotwork, JAnDbot, Sean Tilson, Twisted86, David Eppstein, JaGa, Pavel Jelínek, Trusilver,
TheSeven, JohnBlackburne, AllS33ingI, Popopp, Synthebot, Nicks221, SieBot, Sneakfast, Tkeu, Excirial, He7d3r, Hans Adler, Kaba3,
Algebran, Addbot, Delaszk, Loupeter, Legobot, Yobot, Ptbotgourou, AnomieBOT, RibotBOT, Oursipan, Thehelpfulbot, Ilovegroupthe-
ory, D stankov, Yaddie, Rausch, Eivuokko, Nascar1996, GoingBatty, Quondum, ClueBot NG, Bezik, Frietjes, MerlIwBot, Beaumont877,
Brad7777, Duxwing, Jochen Burghardt, NQ, JMP EAX and Anonymous: 60

42.12.2 Images
• File:2D_affine_transformation_matrix.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2c/2D_affine_transformation_
matrix.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Cmglee
• File:2_state_busy_beaver.JPG Source: https://upload.wikimedia.org/wikipedia/commons/4/4d/2_state_busy_beaver.JPG License: CC-
BY-SA-3.0 Contributors:
• Own work
• Model run in Excel, copied to Autosketch, saved as .Jpg
Original artist: Wvbailey (talk) (Uploads)
• File:2_state_busy_beaver_2.JPG Source: https://upload.wikimedia.org/wikipedia/commons/8/89/2_state_busy_beaver_2.JPG License:
CC-BY-SA-3.0 Contributors: Own work Original artist: User:Wvbailey
• File:7400.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/26/7400.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:74LS192_Symbol.png Source: https://upload.wikimedia.org/wikipedia/commons/5/56/74LS192_Symbol.png License: Public do-
main Contributors: Transferred from en.wikipedia to Commons. Original artist: Swtpc6800 at English Wikipedia
• File:AND_ANSI.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/64/AND_ANSI.svg License: Public domain Contrib-
utors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:AND_Gate_diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/41/AND_Gate_diagram.svg License: Pub-
lic domain Contributors: ? Original artist: ?
• File:AND_IEC.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0f/AND_IEC.svg License: Public domain Contribu-
tors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:Affine_transformations.ogv Source: https://upload.wikimedia.org/wikipedia/commons/3/34/Affine_transformations.ogv License:
Public domain Contributors: Own work Original artist: LucasVB
• File:Alexander_Graham_Bell1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/12/Alexander_Graham_Bell1.jpg Li-
cense: Public domain Contributors: http://www.ideasplanet.org/ua/great.13.html Original artist: ?
• File:Algorithm_P-T_multiply_2.JPG Source: https://upload.wikimedia.org/wikipedia/en/e/eb/Algorithm_P-T_multiply_2.JPG License:
Cc-by-sa-3.0 Contributors:
Own work
Original artist:
Wvbailey (talk) (Uploads)
• File:Arno_Penzias.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/75/Arno_Penzias.jpg License: CC-BY-SA-3.0 Con-
tributors: Transferred from en.wikipedia to Commons. Original artist: Kartik J at English Wikipedia
• File:Begriffsschrift_connective1.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/67/Begriffsschrift_connective1.svg
License: Public domain Contributors: Own work Original artist: Guus Hoekman
• File:Brain.png Source: https://upload.wikimedia.org/wikipedia/commons/7/73/Nicolas_P._Rougier%27s_rendering_of_the_human_
brain.png License: GPL Contributors: http://www.loria.fr/~rougier Original artist: Nicolas Rougier
• File:CardContin.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/75/CardContin.svg License: Public domain Contrib-
utors: en:Image:CardContin.png Original artist: en:User:Trovatore, recreated by User:Stannered
• File:Carl_Friedrich_Gauss.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/Carl_Friedrich_Gauss.jpg License: Pub-
lic domain Contributors: Gauß-Gesellschaft Göttingen e.V. (Foto: A. Wittmann). Original artist: Gottlieb Biermann
A. Wittmann (photo)
• File:Central_dilation.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a1/Central_dilation.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: KlioKlein
• File:Charles_Darwin.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/59/Charles_Darwin.jpg License: Public domain
Contributors: Likely from [1] Original artist: Maull&Polyblank
• File:Charles_S._Peirce_House_near_Milford_PA.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/43/Charles_S._Peirce_
House_near_Milford_PA.jpg License: GFDL Contributors: Own work Original artist: Beyond My Ken
• File:Charles_Sanders_Peirce's_birthplace_building.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8d/Charles_
Sanders_Peirce%27s_birthplace_building.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Gruebleen
• File:Cmosunbuff.png Source: https://upload.wikimedia.org/wikipedia/commons/3/3f/Cmosunbuff.png License: Public domain Con-
tributors: I (Nkendrick (talk)) created this work entirely by myself. Original artist: Nkendrick (talk)
• File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: ? Contributors: ? Origi-
nal artist: ?
• File:David_Baltimore_NIH.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d8/David_Baltimore_NIH.jpg License:
Public domain Contributors: http://profiles.nlm.nih.gov/ps/retrieve/ResourceMetadata/DJBBRK Original artist: Unknown
• File:DeMorgan_Logic_Circuit_diagram_DIN.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/db/DeMorgan_Logic_
Circuit_diagram_DIN.svg License: Public domain Contributors: Own work Original artist: MichaelFrey
• File:Demorganlaws.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Demorganlaws.svg License: CC BY-SA 4.0
Contributors: Own work Original artist: Teknad
• File:Dimitri_Mendelejew.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/55/Dimitri_Mendelejew.jpg License: Pub-
lic domain Contributors: http://www.chemie-master.de/lex/begriffe/Bedeutende_Chemiker/Mendelejew.html Original artist: The origi-
nal uploader was ChrisM at German Wikipedia
• File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The
Tango! Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: “Andreas Nilsson, and Jakub Steiner (although
minimally).”
• File:Einstein_patentoffice.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a0/Einstein_patentoffice.jpg License: Pub-
lic domain Contributors: Transferred from en.wikipedia; transferred to Commons by User:Guerillero using CommonsHelper.
Original artist: Lucien Chavan [#cite_note-author-1 [1]] (1868 - 1942), a friend of Einstein’s when he was living in Berne.
• File:Faraday_Michael_old_age.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/40/Faraday_Michael_old_age.jpg Li-
cense: Public domain Contributors: The Life & Experiences of Sir Henry Enfield Roscoe (Macmillan: London and New York), opposite p.
132 Original artist: Henry Roscoe
• File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
• File:Formal_languages.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/da/Formal_languages.svg License: CC BY-SA 3.0 Contributors: Own work based on: en:Image:Formal languages.png by Gregbard. Original artist: MithrandirMage
• File:Fractal_fern_explained.png Source: https://upload.wikimedia.org/wikipedia/commons/4/4b/Fractal_fern_explained.png License:
Public domain Contributors: Own work Original artist: António Miguel de Campos
• File:Galilee.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/48/Galilee.jpg License: Public domain Contributors: French
WP (Utilisateur:Kelson via http://iafosun.ifsi.rm.cnr.it/~iafolla/home/homegrsp.html) Original artist: Ottavio Leoni
• File:Geometric_affine_transformation_example.png Source: https://upload.wikimedia.org/wikipedia/commons/c/cb/Geometric_affine_
transformation_example.png License: CC0 Contributors: Own work Original artist: Burningstarfour
• File:GodfreyKneller-IsaacNewton-1689.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/39/GodfreyKneller-IsaacNewton-1689.
jpg License: Public domain Contributors: http://www.newton.cam.ac.uk/art/portrait.html Original artist: Sir Godfrey Kneller
• File:Gottfried_Wilhelm_von_Leibniz.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/6a/Gottfried_Wilhelm_von_
Leibniz.jpg License: Public domain Contributors: /gbrown/philosophers/leibniz/BritannicaPages/Leibniz/LeibnizGif.html Original artist:
Christoph Bernhard Francke
• File:Higgs,_Peter_(1929)3.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/cc/Higgs%2C_Peter_%281929%293.jpg
License: CC BY-SA 2.0 de Contributors: Mathematisches Institut Oberwolfach (MFO), http://owpdb.mfo.de/detail?photo_id=12819
Original artist: Gert-Martin Greuel
• File:Internet_map_1024.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/Internet_map_1024.jpg License: CC BY
2.5 Contributors: Originally from the English Wikipedia; description page is/was here. Original artist: The Opte Project
• File:JulietteAndCharles.JPG Source: https://upload.wikimedia.org/wikipedia/commons/d/d4/JulietteAndCharles.JPG License: Pub-
lic domain Contributors: http://web.archive.org/web/20021002100515/http://www.nps.gov/dewa/InDepth/Spanning/stgIMAGE/peiJandC.
GIF Original artist: The Tetrast made a jpg from the gif at an archive.org file of _Spanning the Gap_, newsletter of the Delaware Water
Gap National Recreation Area (Vol. 22 No. 3 Fall 2000) of the National Park Service, U.S. Department of the Interior. The photo image
is not only in archive.org files of the gov't _Spanning the Gap_ newsletters but also in a pdf maintained online by the National Park Service
at http://www.nps.gov/dewa/historyculture/upload/cmsstgPEIRC.pdf, see pdf’s page 2. The archive.org version (a gif in an html file) was
of higher quality. There is no credit for the particular photo image and no copyright notice in either the pdf or in any of the archive.org files
(http://web.archive.org/web/*/http://www.nps.gov/dewa/InDepth/Spanning/stgPEIRC.html) of the article “Philosopher Charles Peirce”
which contains the photo image. The original physical photograph itself must have been made before Charles Peirce’s death in 1914.
• File:Leo_Szilard-cropped.png Source: https://upload.wikimedia.org/wikipedia/commons/2/20/Leo_Szilard-cropped.png License: Pub-
lic domain Contributors: Leo_Szilard.jpg Original artist: unknown photographer Furfur
• File:Logic.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e7/Logic.svg License: CC BY-SA 3.0 Contributors: Own
work Original artist: It Is Me Here
• File:Logic_portal.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7c/Logic_portal.svg License: CC BY-SA 3.0 Con-
tributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
• File:Logical_connectives_Hasse_diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Logical_connectives_
Hasse_diagram.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
• File:Loudspeaker.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/Loudspeaker.svg License: Public domain Con-
tributors: New version of Image:Loudspeaker.png, by AzaToth and compressed by Hautala Original artist: Nethac DIU, waves corrected
by Zoid
• File:Marie_Curie_1903.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/93/Marie_Curie_1903.jpg License: Public
domain Contributors: http://nobelprize.org/nobel_prizes/physics/laureates/1903/marie-curie-bio.html Original artist: Nobel foundation
• File:Milford_and_NYC_and_Cambridge.GIF Source: https://upload.wikimedia.org/wikipedia/commons/7/7d/Milford_and_NYC_
and_Cambridge.GIF License: Public domain Contributors: Derived, with much alteration, by me from public-domain map from w:National
Atlas of the United States Original artist: The Tetrast
• File:Monotonicity_example1.png Source: https://upload.wikimedia.org/wikipedia/commons/3/32/Monotonicity_example1.png License:
Public domain Contributors: Own work with Inkscape Original artist: Oleg Alexandrov
• File:Monotonicity_example2.png Source: https://upload.wikimedia.org/wikipedia/commons/5/59/Monotonicity_example2.png License:
Public domain Contributors: self-made with en:Inkscape Original artist: Oleg Alexandrov
• File:Monotonicity_example3.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8c/Monotonicity_example3.png License:
Public domain Contributors: self-made with en:Inkscape Original artist: Oleg Alexandrov
• File:Multigrade_operator_AND.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/74/Multigrade_operator_AND.svg
License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
• File:Multigrade_operator_OR.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/19/Multigrade_operator_OR.svg Li-
cense: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
• File:Multigrade_operator_XNOR.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/66/Multigrade_operator_XNOR.
svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
• File:Multigrade_operator_all_or_nothing.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Multigrade_operator_
all_or_nothing.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
• File:Musée_arts_LonsleSaunier_0129.JPG Source: https://upload.wikimedia.org/wikipedia/commons/f/f9/Mus%C3%A9e_arts_LonsleSaunier_
0129.JPG License: CC BY-SA 3.0 Contributors: Own work Original artist: Arnaud 25
• File:NAND_ANSI.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f2/NAND_ANSI.svg License: Public domain Con-
tributors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:NAND_IEC.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d8/NAND_IEC.svg License: Public domain Contrib-
utors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:NMOS_NOR.png Source: https://upload.wikimedia.org/wikipedia/commons/a/ab/NMOS_NOR.png License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:NOR_ANSI.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6c/NOR_ANSI.svg License: Public domain Contrib-
utors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:NOR_ANSI_Labelled.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c6/NOR_ANSI_Labelled.svg License: Pub-
lic domain Contributors: Own work Original artist: Inductiveload
• File:NOR_DIN.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/73/NOR_DIN.svg License: Public domain Contribu-
tors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:NOR_IEC.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/NOR_IEC.svg License: Public domain Contribu-
tors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:NOR_Pinout.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/2a/NOR_Pinout.jpg License: Public domain Con-
tributors: ? Original artist: ?
• File:NOR_from_NAND.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4e/NOR_from_NAND.svg License: Public
domain Contributors: Own work Original artist: Inductiveload
• File:NOR_gate_layout.png Source: https://upload.wikimedia.org/wikipedia/commons/a/aa/NOR_gate_layout.png License: Public do-
main Contributors: ? Original artist: ?
• File:NOT_ANSI.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bc/NOT_ANSI.svg License: Public domain Contrib-
utors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:NOT_IEC.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ef/NOT_IEC.svg License: Public domain Contributors:
From scratch in Inkcape 0.43 Original artist: jjbeard
• File:Neil_Immerman.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/95/Neil_Immerman.jpg License: CC0 Contrib-
utors: Own work Original artist: Dcoetzee
• File:Nikolaus_Kopernikus.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f2/Nikolaus_Kopernikus.jpg License: Pub-
lic domain Contributors: http://www.frombork.art.pl/Ang10.htm Original artist: Unknown
• File:Nikolay_Ivanovich_Lobachevsky.jpeg Source: https://upload.wikimedia.org/wikipedia/commons/c/c5/Nikolay_Ivanovich_Lobachevsky.
jpeg License: Public domain Contributors: en:Image:Nikolay_Ivanovich_Lobachevsky.jpeg Original artist: Unknown
• File:NorAdder.svg Source: https://upload.wikimedia.org/wikipedia/en/e/e6/NorAdder.svg License: PD Contributors:
I (Paul Murray (talk)) created this work entirely by myself. Original artist:
Paul Murray (talk)
• File:Noyce1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/89/Noyce1.jpg License: Public domain Contributors: US
Government Original artist: Unknown
• File:Nuvola_apps_edu_mathematics_blue-p.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Nuvola_apps_edu_
mathematics_blue-p.svg License: GPL Contributors: Derivative work from Image:Nuvola apps edu mathematics.png and Image:Nuvola
apps edu mathematics-p.svg Original artist: David Vignoni (original icon); Flamurai (SVG convertion); bayo (color)
• File:OR_ANSI.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b5/OR_ANSI.svg License: Public domain Contribu-
tors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:OR_IEC.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/42/OR_IEC.svg License: Public domain Contributors:
Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:Or-gate-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Or-gate-en.svg License: CC-BY-SA-3.0 Contrib-
utors: ? Original artist: ?
• File:Peirce-quincuncial-bright-lines.gif Source: https://upload.wikimedia.org/wikipedia/commons/7/73/Peirce-quincuncial-bright-lines.
gif License: Public domain Contributors:
• Peirce-quincuncial-projection.jpg Original artist: Peirce-quincuncial-projection.jpg: User:Mdf
• File:PeirceAlphaGraphs.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/29/PeirceAlphaGraphs.svg License: CC-BY-
SA-3.0 Contributors: from PNG file of the same name by GottschallCH Original artist: Poccil
• File:Peircelines.PNG Source: https://upload.wikimedia.org/wikipedia/commons/e/e9/Peircelines.PNG License: Public domain Con-
tributors: Own work Original artist: myself
• File:People_icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/People_icon.svg License: CC0 Contributors: Open-
Clipart Original artist: OpenClipart
• File:PolygonsSet_EN.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/de/PolygonsSet_EN.svg License: CC0 Contrib-
utors: File:PolygonsSet.svg Original artist: File:PolygonsSet.svg: kismalac
• File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors:
? Original artist: ?
• File:Portrait_of_Antoine-Henri_Becquerel.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/Portrait_of_Antoine-Henri_
Becquerel.jpg License: Public domain Contributors: Portrait of Antoine-Henri Becquerel (1852-1908), Physicist Original artist: Paul
Nadar
• File:Post-lattice-centre.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/26/Post-lattice-centre.svg License: GFDL Con-
tributors: Own work Original artist: User:EmilJ
• File:Post-lattice.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/19/Post-lattice.svg License: CC BY-SA 3.0 Contrib-
utors: Own work Original artist: EmilJ
• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0 Contributors: Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist: Tkgd2007
• File:Recursive_enumeration_of_all_halting_Turing_machines.gif Source: https://upload.wikimedia.org/wikipedia/commons/a/a8/
Recursive_enumeration_of_all_halting_Turing_machines.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Jochen
Burghardt
• File:Rehasse.png Source: https://upload.wikimedia.org/wikipedia/commons/e/ea/Rehasse.png License: Public domain Contributors:
http://en.wikipedia.org/wiki/File:Rehasse.png Original artist: CMummert
• File:Samuel_ting_10-19-10.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Samuel_ting_10-19-10.jpg License:
CC BY-SA 3.0 Contributors: Own work Original artist: Toastforbrekkie
• File:Shaw2006astro.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/1d/Shaw2006astro.jpg License: Public domain
Contributors: Transferred from en.wikipedia; transfer was stated to be made by User:Peripitus. Original artist: Original uploader was
Ariess at en.wikipedia
• File:Speakerlink-new.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Speakerlink-new.svg License: CC0 Contrib-
utors: Own work Original artist: Kelvinsong
• File:State_diagram_2_state_busy_beaver_.JPG Source: https://upload.wikimedia.org/wikipedia/en/0/0f/State_diagram_2_state_busy_
beaver_.JPG License: Cc-by-sa-3.0 Contributors: ? Original artist: ?
• File:State_diagram_2_state_busy_beaver_2_.JPG Source: https://upload.wikimedia.org/wikipedia/en/3/35/State_diagram_2_state_
busy_beaver_2_.JPG License: Cc-by-sa-3.0 Contributors: ? Original artist: ?
• File:Subset_with_expansion.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Subset_with_expansion.svg License:
CC-BY-SA-3.0 Contributors: Own work Original artist: User:J.Spudeman~commonswiki
• File:Syntax_tree.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/aa/Syntax_tree.svg License: Public domain Contrib-
utors: Own work Original artist: Aaron Rotenberg
• File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_
with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg
from the Tango project. Original artist: Benjamin D. Esham (bdesham)
• File:Tristate_buffer.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c0/Tristate_buffer.svg License: Public domain
Contributors: en:Image:Tristate buffer.png Original artist: Traced by User:Stannered
• File:Venn00.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5c/Venn00.svg License: Public domain Contributors: Own
work Original artist: Lipedia
• File:Venn0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/99/Venn0001.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn0011.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/76/Venn0011.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn01.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Venn01.svg License: Public domain Contributors: Own
work Original artist: Lipedia
• File:Venn0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/10/Venn0101.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn0110.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/46/Venn0110.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn0111.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/30/Venn0111.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn10.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/73/Venn10.svg License: Public domain Contributors: Own
work Original artist: Lipedia
• File:Venn1000.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Venn1000.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn1001.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/47/Venn1001.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn1010.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/Venn1010.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1e/Venn1011.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn11.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3f/Venn11.svg License: Public domain Contributors: Own
work Original artist: Lipedia
• File:Venn1100.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/31/Venn1100.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn1101.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/Venn1101.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn1110.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/cb/Venn1110.svg License: Public domain Contributors:
? Original artist: ?
• File:Venn_0000_0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Venn_0000_0001.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0000_0011.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fb/Venn_0000_0011.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0000_0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e5/Venn_0000_0101.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0000_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0e/Venn_0000_1111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0001_0000.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/ba/Venn_0001_0000.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0001_0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bc/Venn_0001_0001.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0001_0100.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Venn_0001_0100.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0001_0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c3/Venn_0001_0101.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0011_0000.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/cf/Venn_0011_0000.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0011_1100.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2f/Venn_0011_1100.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0011_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/56/Venn_0011_1111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0101_0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/85/Venn_0101_0101.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0101_0111.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b1/Venn_0101_0111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0101_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/25/Venn_0101_1111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0110_0110.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bd/Venn_0110_0110.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0110_1001.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ae/Venn_0110_1001.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0111_0111.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ea/Venn_0111_0111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_0111_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ee/Venn_0111_1111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1000_0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/de/Venn_1000_0001.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1001_1001.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/Venn_1001_1001.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1010_0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/be/Venn_1010_0101.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1011_1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/61/Venn_1011_1011.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1011_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/Venn_1011_1111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1100_0011.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b7/Venn_1100_0011.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1100_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e9/Venn_1100_1111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1101_0111.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e1/Venn_1101_0111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1101_1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b8/Venn_1101_1011.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1101_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/Venn_1101_1111.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_1111_1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/Venn_1111_1011.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn_A_intersect_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Venn_A_intersect_B.svg License: Public domain Contributors: Own work Original artist: Cepheus
• File:Venn_A_subset_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b0/Venn_A_subset_B.svg License: Public domain Contributors: Own work Original artist: User:Booyabazooka
• File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors:
• Wiki_letter_w.svg Original artist: Wiki_letter_w.svg: Jarkko Piiroinen
• File:Wikiquote-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikiquote-logo.svg License: Public domain
Contributors: ? Original artist: ?
• File:Wikisource-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Wikisource-logo.svg License: CC BY-SA
3.0 Contributors: Rei-artur Original artist: Nicholas Moreau
• File:Wikiversity-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/Wikiversity-logo-en.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Snorky
• File:Wiktionary-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Wiktionary-logo-en.svg License: Public
domain Contributors: Vector version of Image:Wiktionary-logo-en.png. Original artist: Vectorized by Fvasconcellos (talk · contribs),
based on original logo tossed together by Brion Vibber
• File:XNOR_ANSI.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d6/XNOR_ANSI.svg License: Public domain Contributors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:XNOR_IEC.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/56/XNOR_IEC.svg License: Public domain Contributors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:XOR_ANSI.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/01/XOR_ANSI.svg License: Public domain Contributors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:XOR_IEC.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4e/XOR_IEC.svg License: Public domain Contributors: Own Drawing, made in Inkscape 0.43 Original artist: jjbeard
• File:YoichiroNambu.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/11/YoichiroNambu.jpg License: CC BY-SA 3.0
Contributors: Own work Original artist: Betsy Devine

42.12.3 Content license


• Creative Commons Attribution-Share Alike 3.0
