Complexity Theory
– Theory of Computational Complexity –
Lviv Polytechnic National University
Winter Term 2014
November 17 - 21 and November 24 - 28
Room 107, Time: 12:00-16:00
40 hours = 5 ECTS
Prof. Dr. Klaus Wagner - University of Würzburg (Germany)
2 Complexity of Computations 12
2.1 Relating Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Complexity Bounded Computations . . . . . . . . . . . . . . . . . . . . . . 13
2.3 The Equivalence of Different Types of Algorithms . . . . . . . . . . . . . . 15
2.4 Can Problems Be Arbitrarily Complex? . . . . . . . . . . . . . . . . . . . . 16
3 Complexity Classes 17
3.1 Time and Space Complexity Classes . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Hierarchies of Complexity Classes . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Polynomially Fuzzy Classes . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4 On Time Versus Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
8 Appendix 3: Exercises 46
1 Computability and Decidability
The subject of this lecture is the complexity of computations and the complexity of objects
(functions, sets) computed by such computations. Hence, we have to start with the notion
of computation.
Roughly speaking, a computation is the application of an algorithm to a given input.
The word algorithm derives from the name of the Persian mathematician al-Khwarizmi
of the ninth century, who wrote books on computing with Indian numerals and on
solving algebraic equations.
But how can the notion of algorithm be defined?
We will restrict ourselves to algorithms which are applied to formal objects like
numbers, texts, pictures etc., and we will consider mathematically exact definitions
of the notion of algorithm.
Why do we need a mathematical definition?
• As long as people only write algorithms to solve problems, no mathematical defini-
tion is necessary.
• However, if we want to prove that there is no algorithm to solve a given problem,
then we need a mathematical definition of the notion “algorithm”.
Until the beginning of the 20th century mathematicians believed: If one has a precise
definition of a problem then there exists an algorithm to decide this problem. But then
some important problems came up for which no algorithm could be found, e.g.:
Later people discussed the possibility that such algorithms do not exist. To prove that,
an exact mathematical definition of “algorithm” was necessary. Beginning in 1936 (before
the first computers!) dozens of different such definitions were made. Surprise:
This strongly supports the famous Church’s thesis (1936) which says that these defi-
nitions of “algorithm” describe exactly what the very nature of “algorithm” is. We will
make this precise in what follows.
On this basis, it was shown that there are no algorithms to solve Hilbert's 10th problem,
the theory of arithmetic, or the halting problem.
Algorithms can process only inputs that are presented in a digitized form. Thus inputs
must be described by strings of symbols.
Natural numbers as inputs are given as their binary descriptions. For the definition of
the binary description of natural numbers we need the following
Proposition 1.3 Every natural number n ≥ 1 can be written in exactly one way as
n = a_m·2^m + a_{m−1}·2^{m−1} + · · · + a_2·2^2 + a_1·2^1 + a_0·2^0   ( = Σ_{i=0}^{m} a_i·2^i )
where a_m = 1 and a_0, a_1, a_2, . . . , a_{m−1} ∈ {0, 1}.
Proof. Omitted.
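The uniqueness of this expansion is easy to check mechanically. A small Python sketch (not part of the lecture notes; the helper name is ours) that extracts the digits a_0, ..., a_m and reconstructs n from them:

```python
def binary_digits(n):
    """Digits a_0, ..., a_m of the unique expansion n = a_m*2^m + ... + a_0*2^0."""
    assert n >= 1
    digits = []
    while n > 0:
        digits.append(n % 2)   # a_i is the remainder of n modulo 2
        n //= 2
    return digits              # the last entry a_m is always 1

digits = binary_digits(422)
assert digits[-1] == 1                                       # leading digit a_m = 1
assert sum(a * 2 ** i for i, a in enumerate(digits)) == 422  # the expansion gives n back
```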
We start with a very general definition of computability and decidability which ap-
plies to every notion of algorithm. For such an algorithm M the following must be defined:
• How an input x ∈ (Σ∗ )n (for some alphabet Σ and n ≥ 0) is presented to M ?
• What does it mean that the algorithm produces a result?
• r_M(x) =def the result of M on x, if M on input x produces a result;
              not defined, if M on input x does not produce a result.
1.4 Main Theorem and Church’s Thesis
1.5 Turing Machines
The British mathematician Alan Turing developed in 1936 a model simulating the activity
of a person who executes an algorithm. Such a person writes symbols on paper and
modifies them, following some fixed rules. In the model,
• the paper is represented by a tape with infinitely many cells with symbols, and
• the rules are located in a finite control unit which interacts with the tape.
finite control unit: – in each moment it is in one state of a finite set S of states;
– there are a special starting state and a special stopping state;
– a program controls the activities.
tape: – has infinitely many cells;
– in each cell there is a symbol from a finite alphabet Σ;
– there is a special blank symbol ␣.
head: – reads the symbol in the current cell;
– can change the symbol and move to a neighboring cell.
A Turing machine (TM) works step-by-step. In one step it does the following:
depending on
– the current state s and
– the symbol a read by the head
the TM
– changes the current state s into a state s′ (s′ = s is possible),
– changes the read symbol a into a symbol a′ (a′ = a is possible), and
– moves the head one cell left (L), one cell right (R), or does not move (O).
This behavior is described by the instruction sa → s′a′τ where τ ∈ {L, R, O}.
For every situation (state, symbol) there is exactly one such instruction.
The set of these instructions is the program of the TM.
Example 1.7 TM adds 1 to an arbitrary natural number (given in binary presen-
tation).
Start and stop with the head on the leftmost digit.
s0 – starting state, s1 – stopping state
... 1 0 1 1 0 0 0 1 0 1 0 1 ...
instructions        comments
s0 1 → s0 1 R       s0 – start, move to right
s0 0 → s0 0 R
s0 ␣ → s3 ␣ L
s3 1 → s3 0 L       s3 – carry 1, move to left
s3 0 → s2 1 L
s3 ␣ → s1 1 O
s2 1 → s2 1 L       s2 – carry 0, move to left
s2 0 → s2 0 L
s2 ␣ → s1 ␣ R
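The increment TM can be executed mechanically. Below is a minimal Python simulator (our own sketch, not part of the notes), with the program of Example 1.7 written out as a table; '_' stands for the blank symbol, and the blank-reading instructions, implicit in the printed table, are spelled out:

```python
# (state, symbol) -> (new_state, written_symbol, move); '_' is the blank symbol.
PROGRAM = {
    ('s0', '1'): ('s0', '1', 'R'), ('s0', '0'): ('s0', '0', 'R'),
    ('s0', '_'): ('s3', '_', 'L'),
    ('s3', '1'): ('s3', '0', 'L'), ('s3', '0'): ('s2', '1', 'L'),
    ('s3', '_'): ('s1', '1', 'O'),
    ('s2', '1'): ('s2', '1', 'L'), ('s2', '0'): ('s2', '0', 'L'),
    ('s2', '_'): ('s1', '_', 'R'),
}

def run(program, word, start='s0', stop='s1'):
    """Execute a TM program step by step; return (tape content, number of steps)."""
    tape = dict(enumerate(word))            # cell index -> symbol; blank elsewhere
    state, pos, steps = start, 0, 0
    while state != stop:
        state, symbol, move = program[(state, tape.get(pos, '_'))]
        tape[pos] = symbol
        pos += {'L': -1, 'R': 1, 'O': 0}[move]
        steps += 1
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, '_') for i in range(lo, hi + 1)).strip('_'), steps

assert run(PROGRAM, '1011')[0] == '1100'    # 11 + 1 = 12
assert run(PROGRAM, '111')[0] == '1000'     # 7 + 1 = 8
```

Running the simulator on other binary words gives a quick check that the carry states s2 and s3 behave as the comments in the table claim.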
Let PAL =def {w : w ∈ {a, b}^* ∧ w = w^R} be the set of palindromes in {a, b}^*.
Example 1.9 TM deciding PAL.
instructions        comments
s0 a → sa ␣ R       sa – memorize a, erase it, move right
s0 b → sb ␣ R       sb – memorize b, erase it, move right
s0 ␣ → s1 1 O       stop for a symmetric word of even length
sa a → sa a R
sa b → sa b R
sa ␣ → ra ␣ L       ra – right end of word, go one step to the left, memorize a
sb a → sb a R
sb b → sb b R
sb ␣ → rb ␣ L       rb – right end of word, go one step to the left, memorize b
ra a → s2 ␣ L       s2 – test positive, back to left end
ra b → s3 ␣ L       s3 – test negative, back to left end and erase word
ra ␣ → s1 1 O       stop for a symmetric word of odd length
rb b → s2 ␣ L
rb a → s3 ␣ L
rb ␣ → s1 1 O       stop for a symmetric word of odd length
s2 a → s2 a L
s2 b → s2 b L
s2 ␣ → s0 ␣ R       left end of the word, start next round
s3 a → s3 ␣ L
s3 b → s3 ␣ L
s3 ␣ → s1 0 O       stop for a non-symmetric word
Example 1.10 1. Let S(x) =def x + 1. Using the binary representation of natural
numbers, we have shown in Example 1.7 that S can be computed by a TM.
2. Example 1.9 shows that the set PAL of palindromes can be decided by a TM.
Turing machines are a very simple concept of “algorithm” and “computability”, much
simpler than programming languages like Java. However, the Main Theorem of algorithm
theory (Theorem 1.5) says that every function computed by a Java program can also be
computed by a Turing machine. Moreover, by Church’s Thesis, every function computed
in an intuitive sense can also be computed by a Turing machine. The big advantage of
this concept is that, using Turing machines, it is much easier to prove certain assertions
about computability.
1.6 The Halting Problem
Is every set M of words decidable? For theoretical reasons (using cardinality arguments)
this is not the case. In a sense, the overwhelming majority of sets are not decidable.
However, these arguments do not tell us which sets are decidable and which are not.
What about some given, easy to define sets? We will show that the famous halting problem
is not decidable.
We consider all TMs with tape alphabet {0, 1, }. The program of such a TM M can
easily be encoded by a word code(M ) ∈ {0, 1}∗ .
One can also define the halting problem using Java programs instead of TMs. The results
are the same. It surely would be interesting if there is an algorithm telling us whether a
given program stops on a given input. But unfortunately:
The very reason for this contradiction is the fact that we mix two different levels of
objects. Generally, assume there are two levels of objects, and the objects of the first
level (the TMs) can do something with the objects of the second level (the inputs). If
we identify these levels then an object can do something with itself. This may produce a
contradiction. Another example of this phenomenon:
Example 1.12 The barber paradox. Assume there is only one barber who lives
in town and every man in town is shaved, either by himself or by this barber.
It seems to be correct to define the job of the barber as follows:
The barber is the man who shaves all those,
and only those men in town who do not shave themselves.
Or, a little more formalized:
For every man M in town there holds:
the barber shaves M ⇐⇒ M does not shave himself.
Now, the question arises: Who shaves the barber?
Applying the above definition to the barber we can conclude:
the barber shaves himself ⇐⇒ the barber does not shave himself
A contradiction. Consequently, the above definition of a barber was not a good idea.
The contradiction was produced by the fact that we mixed the levels of “the one who
shaves” and “the one who is shaved”.
2 Complexity of Computations
2.2 Complexity Bounded Computations
The exact meaning of “running time” and “used memory” has to be defined for every
notion of algorithm.
As already mentioned, a natural number n is given in binary representation bin(n).
We set |n| =def |bin(n)| and get 2^(|n|−1) ≤ n < 2^|n| and log2 n < |n| ≤ log2 n + 1, for n ≥ 1
(see Exercise 9).
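These inequalities can be verified numerically; a quick Python sanity check (ours, not from the notes):

```python
import math

for n in range(1, 2000):
    length = len(bin(n)) - 2                         # |n| = |bin(n)|
    assert 2 ** (length - 1) <= n < 2 ** length      # 2^(|n|-1) <= n < 2^|n|
    assert math.log2(n) < length <= math.log2(n) + 1
```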
Definition. (time and space complexity for TMs) Let M be a TM. We define
t_M(x_1, . . . , x_m) =def number of steps of M on input (x_1, . . . , x_m).
s_M(x_1, . . . , x_m) =def number of tape cells used by M on input (x_1, . . . , x_m).
Notice that an input of length n uses already n tape cells. Hence s_M(n) ≥ n for every
TM M. For the time complexity we can have t_M(n) < n. But e.g. for TMs this means
that the head cannot see every letter of the input. We will not consider this case.
Example 2.3 TM M deciding the set PAL =def {w : w ∈ {a, b}^* ∧ w = w^R} of
palindromes from Example 1.9.
For symmetric words the head moves of M are as follows:
... a b a a b a a b a ...
2.3 The Equivalence of Different Types of Algorithms
When we consider a fixed problem, how do the running times and the used memory of
algorithms of different types deciding this problem differ? For example, if a Java program
decides a problem in time t, which time does a TM need to decide this problem? Or
similarly: If a TM simulates a given Java program, how much more time does the TM
need? It turns out that the running times of the different types of algorithms mentioned
on page 6 are polynomially related and that the used memories of the different types of
algorithms are linearly related.
Theorem 2.6 Consider algorithms of type A and type B from the list on page 6.
1. There exists a constant k ≥ 1 such that: If an algorithm M of type A computes
a given function (decides a given set) then there exists an algorithm M′ of
type B which computes the same function (decides the same set) such that
t_M′ ≤ae (t_M)^k.
2. There exists a constant k ≥ 1 such that: If an algorithm M of type A computes
a given function (decides a given set) then there exists an algorithm M′ of
type B which computes the same function (decides the same set) such that
s_M′ ≤ae k·s_M.
The following corollary is the time and space bounded version of Theorem 1.5 (Main
Theorem of algorithm theory).
Corollary 2.7 says that the type of algorithm does not matter for decidability in time
Pol(t) or in space O(s). Hence, we say simply a function is computable in time Pol(t) or
in space O(s), resp., and a problem is decidable in time Pol(t) or in space O(s), without
mentioning any type of algorithm.
Theorem 2.8 1. For every computable function t(n) ≥ n there exists a decidable
problem A such that no TM can decide A in time t.
2. For every computable function s(n) ≥ n there exists a decidable problem A such
that no TM can decide A in space s.
This theorem is a complexity analog of Theorem 1.11. There we proved that there are
problems which are not decidable. Here we prove that there are decidable problems which
cannot be decided within a given time or space bound. Also the proof is similar. We use
a modification of the special halting problem.
Proof. 1. For a computable function t(n) ≥ n, define the modified halting problem
Kt =def {z | z = code(M) for some TM M and
M stops on input z with result 1 and t_M(z) ≤ t(|z|)}.
Because t is computable, the problem Kt is decidable. Hence, the complement K̄t is decidable as well.
Assume that there exists a TM M which decides K̄t in time t.
Hence there exists an n0 ≥ 1 such that t_M(n) ≤ t(n) for n ≥ n0.
We modify M to get a TM M′ by adding redundant states and instructions such
that |code(M′)| ≥ n0.
Hence M′ also decides K̄t and t_M′(n) = t_M(n) ≤ t(n) for n ≥ n0. We obtain:
code(M′) ∈ K̄t ⇔ M′ stops on code(M′) with 1   (M′ decides K̄t)
⇔ M′ stops on code(M′) with 1
and t_M′(code(M′)) ≤ t(|code(M′)|)   (|code(M′)| ≥ n0)
⇔ code(M′) ∈ Kt   (definition of Kt)
This is obviously a contradiction. Hence no TM decides the decidable problem A =def K̄t in time t.
2. The space result can be proven similarly.
3 Complexity Classes
It would be really nice if for every decidable problem there were a “best” algorithm
to solve it. For example, for time complexity this would mean for every problem A: There
exists an algorithm M which decides A and for every algorithm M′ deciding A there holds
t_M ≤ae t_M′.
Unfortunately, this is not true. On the contrary: For every algorithm M which decides a
problem A with t_M(n) ≥ 2n there exists another algorithm M′ deciding A with
t_M′(n) ≤ae (1/2)·t_M(n). The same is true for space. (∗)
It is even worse. There are decidable problems A such that: For every algorithm M
deciding A there exists another algorithm M′ deciding A with t_M′(n) ≤ae log2(t_M(n)).
So, we cannot attach to every problem a minimum complexity. Hence, we go the other
way around. We fix a function t(n) ≥ n and look which problems can be decided in time
O(t) (we use O(t) instead of t because of (∗)).
Because of Corollary 2.7.2 the space classes SPACE(s) do not really depend on a given
type of algorithms, contrary to the time classes TIME(t).
The smallest time complexity class is the class of regular languages, see page 38.
Proof. Omitted.
Every time complexity class is included in the space complexity class with the same bound.
The time and space complexity classes are closed under the set operations of union,
intersection and complement.
An obvious consequence of the fact that complexity classes are defined with O:
3.2 Hierarchies of Complexity Classes
In Theorem 2.8 we have seen that problems can have arbitrary large time complexities.
What does this mean for complexity classes?
Theorem 3.6 For every computable function t(n) ≥ n there exists a computable
function t′ ≥ t such that TIME(t) ⊂ TIME(t′).
In fact, the jump from a time complexity class to a larger one is rather tight. For such
a tight result we need bounding functions that are “not hard to compute”. We restrict
ourselves to monotonic functions.
The demand on a function to be a time (space) function is not very strong. So many
functions fulfill this requirement.
Theorem 3.7 1. For every computable function f(n) ≥ n there exists a time
(space) function g ≥ f.
2. If f and g are time (space) functions then f·g is a time (space) function.
3. n^k, 2^n·n, 2^{kn}, 2^{kn}·n, 2^{n^k} (for every k ≥ 1) are time and space functions.
Proof. We prove only Statement 3 for the function n^2. We have to prove that n^2
can be computed in time (2^n)^2 = 2^{2n}. In fact, we can do it in time n^2 (see Example
2.4.2).
Theorem 3.8 (Hierarchy Theorem)
1. Let t(n) ≥ n^2 and t′(n) ≥ n.
If t is a time function and t′·log2(t′) ∈ o(t) then TIME(t′) ⊂ TIME(t).
2. Let s(n), s′(n) ≥ n.
If s is a space function and s′ ∈ o(s) then SPACE(s′) ⊂ SPACE(s).
Proof. The proof follows the idea of the proof of Theorem 2.8, but it is much more
technical.
Theorem 2.7 says that the time and space complexities of algorithms of different type for
the same problem are polynomially related (where space complexities are even linearly
related). Thus it makes sense to define “polynomially fuzzy” complexity classes which do
not depend on a given type of algorithms.
Because of their definition with Pol these classes are the union of infinitely many classes.
Proposition 3.10
1. TIME(Pol(t)) = ⋃_{k=1}^∞ TIME(t^k) for all t(n) ≥ n.
2. SPACE(Pol(s)) = ⋃_{k=1}^∞ SPACE(s^k) for all s(n) ≥ n.
3. P = ⋃_{k=1}^∞ TIME(n^k).
4. PSPACE = ⋃_{k=1}^∞ SPACE(n^k).
Proof. We conclude
A ∈ TIME(Pol(t)) ⇔ A can be decided in time Pol(t)
⇔ there exists a TM M that decides A in time Pol(t)
⇔ there exists a TM M that decides A and t_M ∈ Pol(t)
⇔ there exists a TM M that decides A and a k ≥ 1 such that t_M ≤ae t^k
⇔ there exists a k ≥ 1 such that A ∈ TIME(t^k)
⇔ A ∈ ⋃_{k=1}^∞ TIME(t^k)
3.4 On Time Versus Space
In Proposition 3.3 we have seen that time classes are included in the space classes with the
same bound, i.e., TIME(t) ⊆ SPACE(t) for every t(n) ≥ n. But what about including
space classes in time classes? Here we have an exponential blow-up of the bound.
Proof. We conclude
PSPACE =def SPACE(Pol(n)) = ⋃_{k=1}^∞ SPACE(n^k)   (3.10.4)
⊆ ⋃_{k=1}^∞ TIME(Pol(2^{n^k}))   (3.13)
= ⋃_{k=1}^∞ ⋃_{m=1}^∞ TIME((2^{n^k})^m)   (3.10.1)
= ⋃_{k=1}^∞ ⋃_{m=1}^∞ TIME(2^{m·n^k})
⊆ ⋃_{k=1}^∞ ⋃_{m=1}^∞ TIME(2^{n^{k+1}})   (3.5.1 and 2^{m·n^k} ∈ O(2^{n^{k+1}}))
= ⋃_{k=1}^∞ TIME(2^{n^{k+1}})
⊆ ⋃_{k=1}^∞ TIME(2^{n^k}) =def EXP
Between our favorite complexity classes we have the following inclusion-chain:
We emphasize that the formulation in the proposition is “. . . or . . .” and not “either . . . or . . .”, i.e., the “or” is inclusive.
Finally let us mention that equalities between complexity classes translate upwards. As
an example we prove:
Proof. For a set A ⊆ Σ^* we choose an a ∉ Σ and define B_A^k =def {x a^{2^{|x|^k} − |x|} | x ∈ A}
for every k ≥ 1. This padding of the input from length |x| to length 2^{|x|^k} reduces the
complexity because the complexity is related to the length of the input. One can prove
(Exercise 23):
(a) If A ∈ SPACE(2^{n^k}) then B_A^k ∈ SPACE(n).
(b) If B_A^k ∈ P then A ∈ TIME(2^{n^{k+1}}).
Now assume P = PSPACE. For an A ∈ EXPSPACE there exists a k ≥ 1 such
that A ∈ SPACE(2^{n^k}). By (a) we obtain B_A^k ∈ SPACE(n) ⊆ PSPACE = P.
Using (b) we get A ∈ TIME(2^{n^{k+1}}) ⊆ EXP.
4 Polynomial Time Computability
The smallest time bound we consider is given by the identity function I(n) =def n. On
page 20 we defined the class P as the class of problems which can be decided in time
Pol(n). We define an analogous class for functions.
Because of Corollary 2.7.1 the classes FP and P are independent of the choice of the type
of algorithm. To prove that a function is in FP (a problem is in P) it is sufficient to find
a polynomial time algorithm of any type for that function (problem).
The classes FP and P are of great practical interest because every function which can be
computed (every problem which can be solved) by a computer in a reasonable amount of
time is in FP (P, resp.). This becomes clear in the light of the following table where we
compare the running times of algorithms working in polynomial time and such not working
in polynomial time. The algorithms are assumed to be implemented on a computer with
200,000 MIPS, i.e. one which executes 2·10^11 instructions per second.
Faster computers can improve the data of the table only by a constant factor.
By definition every function f ∈ FP can be computed by a TM M such that t_M(x) ≤ae
|x|^k for some k ≥ 1. We show that also the length of f(x) can be bounded in such a way.
Proposition 4.2
For every f ∈ FP there exist k, n0 ≥ 1 and a TM M computing f such that
t_M(x) ≤ |x|^k and |f(x)| ≤ |x|^k for all x such that |x| ≥ n0.
Example 4.3 (polynomial time computable functions and polynomial time decid-
able problems)
1. sum, sub, mul, div ∈ FP.
2. p ∈ FP for all polynomials p(x) = Σ_{i=0}^{m} a_i·x^i (m, a_0, a_1, . . . , a_m ∈ N)
(follows from Statement 1)
3. {(x, y, z) | x, y, z ∈ N and x^2 + y^2 = z^2} ∈ P.
4. exp ∉ FP (|exp(x)| ≈ 2^{|x|} is too long).
But: {(x, y, z) : x, y, z ∈ N and x^y = z} ∈ P.
5. PRIME =def {x : x ∈ N and x is a prime number} ∈ P. (See Example 2.5.)
6. It is not known whether SF ∈ FP, where SF(n) =def smallest factor k ≥ 2 of n.
(This is related to the prime number decomposition of natural numbers. Note
that the security of cryptographic systems like RSA and PGP rests on the fact
that SF ∈ FP is not known. But their usability is due to the fact that
PRIME ∈ P.)
7. PAL ∈ P, the palindrome set. (See Example 2.4.1.)
8. Pattern matching problem
PM =def {(m, t) : m, t ∈ {a, b}^* ∧ there exist x, y such that t = xmy} ∈ P.
A TM can do it in time O(n^2); with higher programming languages or pseudocode it
can be done in O(n) by implementing the Knuth-Morris-Pratt algorithm.
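As an illustration, here is a compact Python sketch of the Knuth-Morris-Pratt idea (our own code; the notes only mention the algorithm): a failure table for the pattern m lets the scan move through the text t without ever backing up, which gives the O(n) bound.

```python
def kmp_search(pattern, text):
    """Return True iff pattern occurs in text, in time O(|pattern| + |text|)."""
    if not pattern:
        return True
    # fail[i] = length of the longest proper border of pattern[:i+1]
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text; the position in the text never moves backwards.
    k = 0
    for c in text:
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            return True
    return False

assert kmp_search('abba', 'ababba')      # t = xmy with x = 'ab', y = ''
assert not kmp_search('aab', 'ababab')
```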
Proof. 1. Let f, g ∈ FP. By Proposition 4.2 there exist k, n0 ≥ 1 and TMs M and
M′ computing f and g, resp., such that for all x such that |x| ≥ n0:
t_M(x) ≤ |x|^k, t_M′(x) ≤ |x|^k, |f(x)| ≤ |x|^k, and |g(x)| ≤ |x|^k.
A new algorithm M″ simulates first M and then M′ to compute f(x) and g(x),
resp. Finally M″ computes f(x)·g(x) in time (|f(x)|+|g(x)|)^2 (Example 2.4).
Consequently, for all x such that |x| ≥ max{n0, 6}:
t_M″(x) ≤ t_M(x)+t_M′(x)+(|f(x)|+|g(x)|)^2 ≤ 2·|x|^k+(2·|x|^k)^2 ≤ 6·|x|^{2k} ≤ |x|^{2k+1}.
So we have t_M″ ≤ae n^{2k+1} and hence f·g ∈ FP.
Subtraction, multiplication and division are analogous.
For the substitution f ◦ g let M and M′ be as above. The algorithm M″ computes
first g(x) and then f(g(x)). Consequently, for all x such that |x| ≥ max{n0, 2}:
t_M″(x) ≤ t_M′(x)+t_M(g(x)) ≤ |x|^k+(|x|^k)^k ≤ 2·|x|^{k^2} ≤ |x|^{k^2+1}.
So we have t_M″ ≤ae n^{k^2+1} and hence f ◦ g ∈ FP.
2. For A, B ∈ P = ⋃_{k=1}^∞ TIME(n^k) there exists a k ≥ 1 such that
A, B ∈ TIME(n^k). By 3.4.1 we have A ∪ B, A ∩ B, Ā ∈ TIME(n^k) ⊆ P.
4.2 Graph Problems in P
Graph problems are a very interesting and important group of problems, because many
problems in practice can be easily formulated as graph problems; e.g. transport problems
and scheduling problems.
A graph G = (V, E) consists of a set V of vertices and a binary relation E ⊆ V × V
whose elements are called edges. We say that G is an undirected graph if the edges do not
have a direction, i.e., with (u, v) ∈ E we have automatically (v, u) ∈ E (however, in the
description of the graph usually only one of them is listed). Otherwise, G is said to be a
directed graph.
We consider only finite graphs, i.e. graphs with a finite set of vertices. Such graphs can
be presented in a natural way as shown in the following example. Note that an edge (u, v)
of a directed graph is drawn as an arrow u → v whereas an edge (u, v) of an undirected
graph is drawn as a line u — v.
Example 4.5 Consider the undirected graph of the three houses and the three foun-
tains 3H3F =def (V, E) where V = {h1, h2, h3, f 1, f 2, f 3} and
E = {(h1, f 1), (h1, f 2), (h1, f 3), (h2, f 1), (h2, f 2), (h2, f 3), (h3, f 1), (h3, f 2), (h3, f 3)}.
which is represented as
h1 h2 h3
f1 f2 f3
Example 4.6 Consider the graph of the three houses and the three fountains 3H3F
from Example 4.5. Setting v1 = h1, v2 = h2, v3 = h3, v4 = f 1, v5 = f 2 and v6 = f 3 we
get the adjacency matrix
A_3H3F =def
  0 0 0 1 1 1
  0 0 0 1 1 1
  0 0 0 1 1 1
  1 1 1 0 0 0
  1 1 1 0 0 0
  1 1 1 0 0 0
and the input word w3H3F = 000111000111000111111000111000111000
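The step from the matrix to the input word is purely mechanical. A short Python sketch (the helper names are ours, not from the notes) that rebuilds the adjacency matrix of 3H3F row by row and flattens it into the word w3H3F:

```python
vertices = ['h1', 'h2', 'h3', 'f1', 'f2', 'f3']   # v1, ..., v6 as in Example 4.6
edges = {(h, f) for h in ('h1', 'h2', 'h3') for f in ('f1', 'f2', 'f3')}

def adjacency_word(vertices, edges):
    """Flatten the adjacency matrix of an undirected graph row by row into a 0/1 word."""
    def bit(u, v):
        return '1' if (u, v) in edges or (v, u) in edges else '0'
    return ''.join(bit(u, v) for u in vertices for v in vertices)

assert adjacency_word(vertices, edges) == '000111000111000111111000111000111000'
```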
Example 4.10 Graph coloring.
An undirected graph G = ({1, 2, . . . , m}, E) has a k-coloring ⇔def
there exist c_1, c_2, . . . , c_m ∈ {1, . . . , k} such that c_i ≠ c_j for all (i, j) ∈ E;
i.e., every vertex gets one of the colors 1, 2, . . . , k such that two vertices have differ-
ent colors when connected by an edge. For example, the graph in Example 4.5 has a
2-coloring, and the following graph has a 3-coloring but not a 2-coloring.
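The defining condition is easy to check in polynomial time; a small Python sketch (helper names are ours):

```python
def is_coloring(edges, colors):
    """Check c_i != c_j for all edges (i, j); vertices are numbered 1..m."""
    return all(colors[i - 1] != colors[j - 1] for (i, j) in edges)

# Three-houses-three-fountains graph from Example 4.5, vertices numbered 1..6:
edges_3h3f = [(h, f) for h in (1, 2, 3) for f in (4, 5, 6)]
assert is_coloring(edges_3h3f, [1, 1, 1, 2, 2, 2])        # a 2-coloring
assert not is_coloring(edges_3h3f, [1, 1, 1, 2, 2, 1])    # edge (1, 6) violates it
```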
In this context the famous four color problem should be mentioned. This is the problem
of whether every planar graph has a 4-coloring. An equivalent formulation of the problem
is: For any given map of geographical regions, can the regions be colored with four colors
in such a way that adjacent regions have different colors?
The figures show a 4-coloring of the regions of Ukraine as a map and as a graph. Each of
the regions corresponds to a vertex of the graph. If two regions are adjacent (neighboring)
then the corresponding vertices of the graph are connected by an edge. This results in a
planar graph.
The four color problem was a long-standing open problem. In 1976 it was answered in
the affirmative by the American mathematicians Appel and Haken.
4.3 The class NP
There are many practically relevant problems which are not known to be in P; i.e., for
which we do not know algorithms solving them in a reasonable amount of time. Many of
these problems are so-called polynomial search problems.
Example 4.11 The graph coloring problem k-COLOR for k ≥ 2 (see Example 4.10).
For a graph G = ({1, 2, . . . , m}, E) we can write:
G has a k-coloring ⇔ ∃c(c is a k-coloring for G) ⇔ ∃c((G, c) ∈ B)
where B =def {(G, c) | c is a k-coloring of G}.
Using the colors 1, 2, . . . , k, a k-coloring attaches to each vertex i ∈ {1, 2, . . . , m}
a color c_i ∈ {1, . . . , k} such that (i, j) ∈ E implies c_i ≠ c_j. Hence we can write
B = {({1, . . . , m}, E, (c_1, . . . , c_m)) | c_1, . . . , c_m ∈ {1, . . . , k} ∧ ∀i∀j((i, j) ∈ E → c_i ≠ c_j)}
It is easy to see that B ∈ P. Because we choose only one color c_i for every vertex i
there holds |(c_1, . . . , c_m)| ≤ |G|. Thus we can write equivalently
G has a k-coloring ⇔ ∃c(|c| ≤ |G| ∧ (G, c) ∈ B).
Consequently, k-COLOR is a polynomial search problem. In Example 4.10 we have
seen that 2-COLOR is in P which is not known for k ≥ 3.
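The search structure ∃c((G, c) ∈ B) can be made explicit by brute force; a Python sketch (ours, for illustration) that tries all k^m candidate colorings and checks each one in polynomial time:

```python
from itertools import product

def has_k_coloring(m, edges, k):
    """Exhaustive search over all k^m candidate certificates c = (c_1, ..., c_m);
    each single candidate is verified quickly (the set B is in P),
    but the number of candidates grows exponentially with m."""
    return any(all(c[i - 1] != c[j - 1] for (i, j) in edges)
               for c in product(range(1, k + 1), repeat=m))

triangle = [(1, 2), (2, 3), (1, 3)]
assert not has_k_coloring(3, triangle, 2)   # a triangle has no 2-coloring
assert has_k_coloring(3, triangle, 3)       # but it has a 3-coloring
```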
Example 4.12 The Hamiltonian circuit problem
(William Rowan Hamilton, Irish mathematician, 1805-1865).
A Hamiltonian circuit of an undirected graph G is a circuit which includes every
vertex exactly once.
HAMILTONIAN CIRCUIT =def {G : the graph G has a Hamiltonian circuit}
This problem seems to be similar to the Eulerian circuit problem. However, the
latter problem is in P which is not known for the Hamiltonian circuit problem.
Example 4.13 The sum of subset problem. The problem is defined as the set of all
pairs (K, b) where K is a finite subset of N and b ∈ N such that b is the sum of
a subset of K:
SOS =def {(K, b) | K ⊂ N is finite ∧ b ∈ N ∧ ∃L(L ⊆ K ∧ b = Σ_{a∈L} a)}
or, equivalently (set c_i = c_L(a_i)),
SOS =def {(a_1, . . . , a_m, b) : m, a_1, . . . , a_m, b ∈ N ∧ ∃(c_1, . . . , c_m ∈ {0, 1})(b = Σ_{i=1}^{m} c_i·a_i)}.
Using this form it is easy to see that SOS is a polynomial search problem (see Exer-
cise 28).
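The certificate structure of SOS can be sketched the same way (our own helper; exhaustive search over all 2^m vectors (c_1, ..., c_m)):

```python
from itertools import product

def in_sos(a, b):
    """Search all certificate vectors (c_1, ..., c_m) in {0, 1}^m for b = sum c_i * a_i."""
    return any(b == sum(c_i * a_i for c_i, a_i in zip(c, a))
               for c in product((0, 1), repeat=len(a)))

assert in_sos([3, 5, 8, 13], 16)      # 3 + 13 = 16
assert not in_sos([3, 5, 8], 2)       # no subset of {3, 5, 8} sums to 2
```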
Definition. NP =def {A | there exist a k ≥ 1 and a set B ∈ P such that for all x:
x ∈ A ⇔ ∃y(|y| ≤ |x|^k + k ∧ (x, y) ∈ B)}
“NP” is an acronym of nondeterministic polynomial time. This comes from the fact that
the NP problems can be characterized as exactly those problems which can be solved by
nondeterministic algorithms in polynomial time. However, we will not consider here this
type of algorithms, which is a special form of parallel algorithms.
Proposition 4.14
k-COLOR (k ≥ 3), HAMILTONIAN CIRCUIT, and SOS are in NP.
Theorem 4.16 P ⊆ NP
Many problems of practical interest are in NP; e.g., cost optimization problems, trans-
port problems, storage problems, scheduling problems, and computer operation problems.
Hence, it is of great interest whether the inclusion in Theorem 4.16 is an equation or not.
This is just the P-NP problem, the most famous and intensively attacked problem of
Theoretical Computer Science:
P-NP Problem.
What is correct: P ⊂ NP or P = NP?
Commonly believed: P ⊂ NP.
Most important unsolved problem of Theoretical Computer Science.
There is a $ 1,000,000 prize for solving this problem!
Many practical and theoretical consequences.
In Theorem 4.15 we have seen that NP is closed under union and intersection. But what
about complementation? This is another famous open problem.
NP Complement Problem:
Is NP closed under complement? I.e.: does A ∈ NP imply Ā ∈ NP?
Commonly believed: No.
Since P is closed under complement (see Theorem 4.4.2) the answer P = NP to the P-NP
problem would give an affirmative answer to the NP complement problem.
There is a third important open problem. It is connected with the relationship between
the classes NP and PSPACE. From Corollary 3.11 we know P ⊆ PSPACE. This can
be strengthened:
NP-PSPACE Problem.
What is correct: NP ⊂ PSPACE or NP = PSPACE?
Commonly believed: NP ⊂ PSPACE.
From Proposition 3.15, Theorem 4.16, and Theorem 4.17 we have the following inclusion-
chain between our favorite complexity classes:
5 Polynomial Time Reducibility and Completeness
For a moment, assume P ⊂ NP. In that case it seems that the sets from P are the
simplest sets in NP, and the most complicated sets of NP are not in P. But what do
“simpler” and “more complicated” mean? Intuitively: A problem A is simpler than or equally
simple to a set B if B ∈ C implies A ∈ C for “well-formed” complexity classes C like P,
NP, PSPACE, EXP, and EXPSPACE. I.e., A cannot be in a larger complexity class
than B. This leads us to the following definition.
Proof. 1. x ∈ A ⇔ I(x) ∈ A and I ∈ FP.
2. If A ≤ B then there exists an f ∈ FP such that x ∈ A ⇔ f(x) ∈ B.
If B ≤ C then there exists a g ∈ FP such that x ∈ B ⇔ g(x) ∈ C.
We conclude: x ∈ A ⇔ f(x) ∈ B ⇔ g(f(x)) ∈ C ⇔ (g ◦ f)(x) ∈ C.
By Theorem 4.4.1 we obtain g ◦ f ∈ FP.
3. A ≤ B ⇐⇒ there exists an f ∈ FP such that x ∈ A ⇔ f(x) ∈ B
⇐⇒ there exists an f ∈ FP such that x ∈ Ā ⇔ f(x) ∈ B̄
⇐⇒ Ā ≤ B̄
The sets in P are the simplest sets w.r.t. polynomial time reducibility:
Proof. 1. We choose some a ∈ B and b ∉ B, and we define
f(x) =def a, if x ∈ A; b, if x ∉ A.
Since A can be decided in polynomial time, f can be computed in polynomial time.
x ∈ A ⇒ f(x) = a ∈ B and x ∉ A ⇒ f(x) = b ∉ B.
Hence x ∈ A ⇔ f(x) ∈ B, and consequently A ≤ B.
2. Let B ∈ P, which means c_B ∈ FP. Assume A ≤ B. Then there exists an f ∈ FP
such that c_A = c_B ◦ f. By Theorem 4.4.1 we get c_A ∈ FP. Consequently, A ∈ P, a
contradiction. Hence A ≰ B.
Statement 3 is an immediate consequence of Statement 1.
Statements 4 and 5 are obvious.
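The reduction in Statement 1 is trivial but instructive; a Python sketch (our own toy example, not from the notes):

```python
def make_reduction(decide_A, a, b):
    """f(x) = a if x in A, else b; computable in polynomial time whenever A is."""
    return lambda x: a if decide_A(x) else b

# Toy instance: A = even numbers, B = words containing the letter 'x'.
f = make_reduction(lambda n: n % 2 == 0, 'axa', 'bbb')   # 'axa' in B, 'bbb' not in B
assert f(4) == 'axa' and f(7) == 'bbb'                   # x in A  <=>  f(x) in B
```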
Now we will see whether the notion ≤ really does the job we discussed at the beginning
of this subsection.
So we hope that the most important complexity classes are closed under polynomial time
reducibility.
Theorem 5.4 For monotonic functions t(n), s(n) ≥ n the classes ⋃_{k=1}^∞ TIME(t(n^k))
and ⋃_{k=1}^∞ SPACE(s(n^k)) are closed under polynomial time reducibility.
Proof. We prove the time case, the space case being analogous.
Let A ≤ B and B ∈ TIME(t(n^k)). There exist m, n0 ≥ 1 and a TM M deciding B
such that t_M(n) ≤ m·t(n^k) for all n ≥ n0.
Further, there is a function f ∈ FP such that x ∈ A ⇔ f(x) ∈ B for all x.
By Proposition 4.2 there exist r, n1 ≥ 1 and a TM M′ computing f such that
t_M′(x) ≤ |x|^r and |f(x)| ≤ |x|^r for all x such that |x| ≥ n1.
The following algorithm M″ obviously decides A. Let x be the input.
(a) M″ simulates M′ on input x and computes in such a way f(x).
(b) M″ simulates M on input f(x) and tests in such a way whether f(x) ∈ B.
(c) The result of M″ is given by the result of M.
What is the running time of M″ on x with |x| ≥ max{n0, n1}?
Phase (a) takes |x|^r steps.
Phase (b) takes m·t(|f(x)|^k) ≤ m·t((|x|^r)^k) steps (since t is monotonic).
So we obtain t_M″(n) ≤ n^r + m·t((n^r)^k) ≤ (m+1)·t(n^{rk}) for all n ≥ max{n0, n1}.
Hence t_M″(n) ∈ O(t(n^{rk})) and consequently A ∈ TIME(t(n^{rk})).
Theorem 5.5 The classes P, NP, EXP, PSPACE, and EXPSPACE are closed
under polynomial time reducibility.
Proof. For the classes P = ⋃_{k=1}^∞ TIME(n^k), PSPACE = ⋃_{k=1}^∞ SPACE(n^k),
EXP = ⋃_{k=1}^∞ TIME(2^(n^k)), and EXPSPACE = ⋃_{k=1}^∞ SPACE(2^(n^k)) this is an
immediate consequence of Theorem 5.4.
The class NP needs a separate treatment, using methods similar to the one in the
proof of Theorem 5.4. We omit this proof.
Finally we mention that complexity classes which are not of type ⋃_{k=1}^∞ TIME(t(n^k)) or
⋃_{k=1}^∞ SPACE(t(n^k)) are often not closed under polynomial time reducibility. For example
this is true for SPACE(n) and ⋃_{k=1}^∞ TIME(2^(k·n)). (See also Exercise 32.)
5.2 Completeness
Complexity classes are defined by a bounding function, i.e., such a class contains exactly
those problems which can be decided with a complexity that does not exceed the bounding
function. Hence even very simple problems belong to every complexity class. But what about
the hardest problems in a complexity class? Do they exist? We need a good definition.
This means: A C-complete problem is at least as hard to decide as every other problem
in C; it belongs to the hardest problems in C. Some properties of C-complete sets:
The following theorem shows that C-complete problems are really the hardest problems
in C. Namely, if a C-complete problem is in a subclass of C then all sets from C are in this
subclass. Thus, a C-complete problem incorporates the whole complexity of the class C.
C-completeness of a problem can also imply that this problem cannot be in a smaller
complexity class.
Proof. 1. Assume A ∈ P. Since A is EXP-complete, and P is closed under
polynomial time reducibility, Theorem 5.7 yields P = EXP. This contradicts
Theorem 3.12.1.
Statement 2 can be proven in the same way.
For complexity classes which are closed under polynomial time reducibility we introduced
the notion of complete problems. However, this does not automatically mean that there
really exist complete problems for these classes. What about the special classes we have
studied?
We start with the class P. From Theorem 5.3 we obtain directly a complete answer.
For the others of our favorite classes (and many others) it is not hard to prove the existence
of complete sets.
Theorem 5.11 NP, PSPACE, EXP and EXPSPACE have complete problems.
Proof. We give the proof for PSPACE. For the other classes the proofs are similar.
We use the encoding “code” of TMs we already used in Subsection 1.6. Define
U =def {(code(M ), x, z) | TM M stops on x with result 1 using at most |z| cells}.
It is not hard to see that U ∈ SPACE(n2 ) (Exercise 31).
For an A ∈ PSPACE there exists a TM M deciding A such that s_M(n) ≤ae n^k.
By Proposition 2.2.1 there exists an m ≥ 1 such that s_M(n) ≤ n^k + m for all n ≥ 0.
Consequently, x ∈ A ⇔ (code(M), x, 0^(|x|^k + m)) ∈ U ⇔ f(x) ∈ U,
where f(x) =def (code(M), x, 0^(|x|^k + m)). Since f ∈ FP, we get A ≤ U.
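The padding reduction from this proof is simple enough to sketch in Python. The string "<code of M>" below is a stand-in for the actual encoding code(M), which is not spelled out here; k and m are the constants from the space bound.

```python
# Sketch of the reduction f(x) = (code(M), x, 0^(|x|^k + m)) from the proof.
# The padding of 0's makes the space bound of M explicit in the input,
# so that the universal problem U can check it within its own space bound.
def pad_reduction(code_M, k, m):
    return lambda x: (code_M, x, "0" * (len(x) ** k + m))

f = pad_reduction("<code of M>", 2, 3)
assert f("ab") == ("<code of M>", "ab", "0" * 7)   # |x|^2 + 3 = 7 zeros
```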
However, the complete problem in the preceding proof is rather artificial. Are there
natural complete problems for the classes NP, PSPACE, EXP and EXPSPACE?
Answer: Yes!
Proof. Omitted.
More NP-complete problems can be found in Section 6 and in the book by M.R.
Garey, D.S. Johnson: Computers and Intractability, A Guide to the Theory of NP-
Completeness.
For more complete problems we need the notion of regular languages.
Definition.
1. Let Σ be an alphabet, and let L, L0 ⊆ Σ∗ .
– L · L0 =def {xy | x ∈ L ∧ y ∈ L0 } (concatenation of L and L0 )
– L^0 =def {ε} and L^(k+1) =def L^k · L for k ≥ 0
– L* =def ⋃_{k=0}^∞ L^k = {x1 x2 ... xk | k ≥ 0 ∧ x1, x2, ..., xk ∈ L} (iteration of L)
2. The regular languages over Σ are the languages which can be generated from ∅
and the sets {a} (a ∈ Σ) by repeated application of union, concatenation and
iteration. In other terms:
– the empty set ∅ is regular
– {a} is regular for every a ∈ Σ
– if L, L0 are regular then L ∪ L0 , L · L0 and L∗ are regular
3. In such a way we obtain regular languages like
(({a}·{b}∗ ) ∪ {c})∗ ∪ ({c}∗ ·(({b}·{c}) ∪ {a})∗ )
For simplicity we omit the set-braces { and } and obtain
((a·b∗ ) ∪ c)∗ ∪ (c∗ ·((b·c) ∪ a)∗ ).
This is called a regular expression describing the corresponding regular set.
Example 5.13 1. (a ∪ b)∗ a describes the set of all words over {a, b} ending with a.
2. (0*·1·0*·1)*·0* describes the set of all words over {0, 1} with an even number of 1's.
3. (a1 ∪ a2 ∪ · · · ∪ an )∗ = {a1 , a2 , . . . , an }∗ describes the set of all words over the
alphabet {a1 , a2 , . . . , an }.
4. ∅∗ = {ε}.
5. Different regular expressions can describe the same regular language, e.g.:
0·(0 ∪ 1) and (0·0) ∪ ∅ ∪ ((0·1)·∅∗ ) describe the same language {00, 01}.
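Example 5.13.5 can be spot-checked mechanically. The sketch below uses Python's regex syntax ('|' for union, juxtaposition for concatenation) rather than the ∪/· notation of the text; testing all words up to a fixed length is only evidence of equivalence, not a proof — deciding equivalence in general is exactly the problem EQ studied next.

```python
import re
from itertools import product

# Example 5.13.5 in Python regex syntax: 0·(0 ∪ 1) versus (0·0) ∪ (0·1).
e1, e2 = "0(0|1)", "(00)|(01)"

# All binary words of length < 5.
words = ["".join(w) for n in range(5) for w in product("01", repeat=n)]

same = all(bool(re.fullmatch(e1, w)) == bool(re.fullmatch(e2, w)) for w in words)
assert same

matches = [w for w in words if re.fullmatch(e1, w)]
assert matches == ["00", "01"]          # the language {00, 01}
```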
Definition.
EQ(∪, ·, *) =def {(E, E′) | the regular expressions E, E′ with operations ∪, · and *
describe the same regular language}
EQ(∪, ·, *, ²) =def {(E, E′) | the regular expressions E, E′ with operations ∪, ·, * and ²
describe the same regular language} (here E² describes the same language as E·E)
Proof. Omitted.
Since the complement of a regular language is also a regular language, one can also consider
regular expressions with complement ¯. We notice that the related problem EQ(∪, ·, *, ¯)
is ⋃_{k=1}^∞ SPACE(2^(2^(···^2)))-complete, where the tower of 2's has height log(n^k).
This is a tremendously large complexity class.
Finally let us consider the complexity of some games, a very interesting field of com-
plexity theory.
Hex. Two players alternately put stones on an empty cell of a hex board: Red has red
stones and Blue has blue stones. The players try to form a connected path of their own
stones linking the opposing sides of the board marked by their colors. The first player to
complete his connection wins the game.
Definition: HEX
Input: an n×n hex board already having blue and red stones in some of the cells
Question: Does player Blue have a winning strategy?
Proof. Omitted.
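Although the HEX algorithm itself is omitted here, the shape of the "winning strategy" question can be sketched generically: the player to move wins iff some move leads to a position where the opponent does not win. The depth-first recursion reuses space along one branch of the game tree, which is the intuition for why such generalized games tend to sit in PSPACE. The toy game below (take 1 or 2 stones from a heap; whoever takes the last stone wins) is purely illustrative and has nothing to do with hex boards.

```python
from functools import lru_cache

# Generic "does the player to move have a winning strategy?" recursion,
# shown on a toy game: players alternately take 1 or 2 stones from a heap,
# and whoever takes the last stone wins.
@lru_cache(maxsize=None)
def to_move_wins(stones):
    if stones == 0:
        return False            # the previous player took the last stone
    return any(not to_move_wins(stones - t) for t in (1, 2) if t <= stones)

# Losing positions are exactly the multiples of 3.
assert [to_move_wins(n) for n in range(1, 7)] == [True, True, False, True, True, False]
```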
Checkers and Go. In the same way we can define generalized versions of checkers and
go. That means the size of the board can be different from the original size 8×8 or 19×19,
resp. So we get the problems CHECKERS and GO.
Proof. Omitted.
Here are our favorite complexity classes with some complete sets:
In Complexity Theory many more complexity classes with interesting complete sets are
investigated. Some examples:
• Using a more sensitive notion of reducibility it turns out that not all problems in P
are equally complex. One of the hardest problems in P is the problem of evaluating
a logical circuit.
• Using nondeterministic algorithms (a special kind of parallel algorithms) the class
NL of problems solvable in logarithmic space is defined. It holds that NL ⊆ P.
One of the hardest problems in NL is the problem of whether there exists a path
between two given vertices of a graph.
• The class NEXP of problems which can be solved in exponential time by nondeter-
ministic algorithms. It holds that EXP ⊆ NEXP ⊆ EXPSPACE. The problem
EQ(∪, ·, ²) is NEXP-complete.
6 Appendix 1: List of NP-complete problems
INDEPENDENT SET
Input: undirected graph G = (V, E), k ∈ N
Question: Does G have an independent set with k vertices?
(an independent set is a subset C ⊆ V such that (C × C) ∩ E = ∅)
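A brute-force decision procedure for INDEPENDENT SET is easy to sketch; it runs in time exponential in |V|, consistent with the problem being NP-complete. Edges are treated as unordered here.

```python
from itertools import combinations

# Brute force over all k-element vertex sets C: accept iff no pair in C is an edge.
def independent_set(V, E, k):
    und = {frozenset(e) for e in E}
    return any(all(frozenset((u, v)) not in und for u, v in combinations(C, 2))
               for C in combinations(V, k))

triangle = ([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
path = ([1, 2, 3], [(1, 2), (2, 3)])
assert independent_set(*triangle, 1) and not independent_set(*triangle, 2)
assert independent_set(*path, 2)      # {1, 3} is independent
```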
VERTEX COVER
Input: undirected graph G = (V, E), k ∈ N
Question: Does G have a vertex cover with at most k vertices?
(a vertex cover is a subset K ⊆ V such that E ⊆ (K × V ) ∪ (V × K))
k-COLOR (k ≥ 3)
Input: undirected graph G = (V, E)
Question: Does G have a k-coloring?
(a k-coloring is a coloring of the vertices with k colors such that two
vertices have different colors if they are connected by an edge, i.e., it is
a function c : V → {1, 2, . . . , k} such that c(u) ≠ c(v) for every (u, v) ∈ E)
HAMILTONIAN CIRCUIT
Input: undirected graph G = (V, E)
Question: Does G have a Hamiltonian circuit?
(a Hamiltonian circuit is a closed path in G that visits every vertex
exactly once, i.e. it is a sequence v1 , v2 , . . . , vm such that m = #V ,
{v1 , v2 , . . . , vm } = V , and (v1 , v2 ), (v2 , v3 ), . . . , (vm−1 , vm ), (vm , v1 ) ∈ E)
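The definition above translates directly into a brute-force test over all m! vertex orders, which is feasible only for tiny graphs — again as expected for an NP-complete problem.

```python
from itertools import permutations

# Try every ordering of V; accept iff consecutive vertices (cyclically) are adjacent.
def hamiltonian(V, E):
    und = {frozenset(e) for e in E}
    return any(all(frozenset((p[i], p[(i + 1) % len(p)])) in und
                   for i in range(len(p)))
               for p in permutations(V))

cycle4 = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
star3 = ([1, 2, 3], [(1, 2), (1, 3)])
assert hamiltonian(*cycle4) and not hamiltonian(*star3)
```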
TRAVELLING SALESMAN
Input: m ≥ 2 and towns 1, 2, . . . , m a travelling salesman has to visit,
costs M(i, j) ∈ N for travelling from town i to town j,
bound k ∈ N on the overall cost
Question: Is there a round trip with overall cost ≤ k?
I.e., do there exist s1, s2, . . . , sm such that {s1, s2, . . . , sm} = {1, 2, . . . , m}
and Σ_{i=1}^{m−1} M(si, si+1) + M(sm, s1) ≤ k?
LONGEST PATH
Input: undirected graph G = (V, E), k ∈ N
Question: Does G have a path of length k?
I.e., does there exist a sequence of distinct vertices v0, v1, . . . , vk ∈ V
such that (v0, v1), (v1, v2), . . . , (vk−1, vk) ∈ E?
BIN PACKING
Input: items 1, 2, . . . , m with volumes a1 , a2 , . . . , am ∈ N,
k bins with volume l each
Question: Can the items be placed in the bins?
I.e.: Do there exist Z1, Z2, . . . , Zk such that
Z1 ∪ Z2 ∪ · · · ∪ Zk = {1, 2, . . . , m} and Σ_{j∈Zi} aj ≤ l for i = 1, 2, . . . , k?
SOS
Input: n, a1, a2, . . . , an, b ∈ N
Question: Does there exist an I ⊆ {1, 2, . . . , n} such that Σ_{i∈I} ai = b?
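The obvious algorithm for SOS tries all 2^n subsets — exponential, as one expects for an NP-complete problem (though note that SOS also has a pseudo-polynomial dynamic-programming algorithm, not shown here).

```python
from itertools import combinations

# Brute-force subset-sum: accept iff some subset of a sums to b.
def sos(a, b):
    return any(sum(c) == b for r in range(len(a) + 1) for c in combinations(a, r))

assert sos([3, 5, 7], 12)        # 5 + 7 = 12
assert not sos([3, 5, 7], 11)
```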
PARTITION
Input: n, a1, a2, . . . , an ∈ N
Question: Does there exist an I ⊆ {1, 2, . . . , n} such that Σ_{i∈I} ai = Σ_{i∉I} ai?
KNAPSACK
Input: m items with volumes g1, g2, . . . , gm ∈ N and values v1, v2, . . . , vm ∈ N,
volume G of a knapsack, value V to be transported
Question: Can one choose some of the items whose overall volume is not greater
than G and whose overall value is at least V? I.e.: Does there exist
an I ⊆ {1, 2, . . . , m} such that Σ_{i∈I} gi ≤ G and Σ_{i∈I} vi ≥ V?
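KNAPSACK, too, has an immediate brute-force solver over all index sets I, following the definition above literally.

```python
from itertools import combinations

# Brute force over all index sets I: accept iff some I fits the volume bound G
# and reaches the value bound V.
def knapsack(g, v, G, V):
    items = range(len(g))
    return any(sum(g[i] for i in I) <= G and sum(v[i] for i in I) >= V
               for r in range(len(g) + 1) for I in combinations(items, r))

assert knapsack([2, 3, 4], [3, 4, 5], 5, 7)        # take items 0 and 1
assert not knapsack([2, 3, 4], [3, 4, 5], 5, 8)
```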
QUADRATIC EQUATION
Input: a, b, c ∈ N
Question: Do there exist x, y ∈ N such that ax2 + by = c?
Many more NP-complete problems can be found in the book by M.R. Garey, D.S.
Johnson: Computers and Intractability, A Guide to the Theory of NP-Completeness.
7 Appendix 2: General Definitions and Notations
Logics
Simple statements and conditions are often combined into more complicated ones using
logical connectives and quantifiers. We use the following:
connective/quantifier formula formula is true if
negation ¬A A is not true
conjunction (A ∧ B) A and B are true
disjunction (A ∨ B) at least one of A and B is true
implication (A → B) (¬A ∨ B) is true
equivalence (A ↔ B) A is true if and only if B is true
existential quantifier ∃xA(x) A(x) is true for at least one x
universal quantifier ∀xA(x) A(x) is true for all x
We also write A ⇒ B instead of A → B, and A ⇔ B instead of A ↔ B.
If, for an object O, we introduce by definition a name or denotation B, we write B =def O
or O def = B. If, for a condition C, we introduce by definition a name or denotation B,
we write B ⇔def C or C def ⇔ B.
Sets
We use sets in the naive sense, i.e. as collections of objects. If an object a belongs to the
set A then we say that a is an element of A. We write
a ∈ A for a is an element of the set A
a ∉ A for ¬(a ∈ A)
A ⊆ B for ∀a((a ∈ A) → (a ∈ B)) (A is a subset of B)
A ⊈ B for ¬(A ⊆ B) (A is not a subset of B)
A ⊂ B for (A ⊆ B) ∧ (B ⊈ A) (A is a proper subset of B)
A = B for (A ⊆ B) ∧ (B ⊆ A) (A and B are equal)
A set can be described in different ways. If a set A consists of n ≥ 1 different elements
a1 , a2 , . . . , an then we write A = {a1 , a2 , . . . , an }. Such sets are finite, and we denote by
#A =def n the number of its elements. The only set which has no element is the empty
set which is denoted by ∅ with #∅ =def 0.
Another way to describe sets is by a defining property. If E is a property an object can
possess or not, then we write A = {a | a possesses the property E}.
For example, the set Q of squares of natural numbers can be written in different ways as
Q = {0, 1, 4, 9, 16, 25, 36, . . . }
= {m | m is the square of a natural number}
= {n2 | n is a natural number}
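Python's set comprehensions mirror this defining-property notation closely; a quick sanity check that the descriptions of Q agree on an initial segment:

```python
# Two of the descriptions of Q, restricted to squares up to 36.
Q_list = {0, 1, 4, 9, 16, 25, 36}
Q_comp = {n * n for n in range(7)}     # {n^2 | n a natural number, n <= 6}
assert Q_list == Q_comp
```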
For a set A we define P(A) =def {B | B ⊆ A} as the power set of A.
From given sets more complicated sets can be built using set operations.
A ∪ B =def {x | x ∈ A ∨ x ∈ B} (union)
A ∩ B =def {x | x ∈ A ∧ x ∈ B} (intersection)
Ā =def {x | x ∉ A} (complement)
A − B =def {x | x ∈ A ∧ x ∉ B} (set difference)
The following rules for set operations are important:
A ∪ B = B ∪ A                       commutativity
A ∩ B = B ∩ A                       commutativity
A ∪ (B ∪ C) = (A ∪ B) ∪ C           associativity
A ∩ (B ∩ C) = (A ∩ B) ∩ C           associativity
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)     distributivity
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)     distributivity
(A ∪ B)¯ = Ā ∩ B̄                    De Morgan's law
(A ∩ B)¯ = Ā ∪ B̄                    De Morgan's law
A ∪ A = A ∩ A = A = (Ā)¯
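These laws can be spot-checked with Python's built-in set type; the complement is taken relative to a finite universe U, since Python sets cannot represent an absolute complement. A check on one example is of course no proof.

```python
# Spot-checking De Morgan's laws and idempotence inside a finite universe U.
U = set(range(10))
A, B = {1, 2, 3}, {3, 4}
comp = lambda S: U - S          # complement relative to U

assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)
assert A | A == A & A == A
assert comp(comp(A)) == A
```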
We also consider unions and intersections of infinitely many sets. We define
⋃_{i=m}^∞ Ai =def {x | ∃i(i ≥ m ∧ x ∈ Ai)} and ⋂_{i=m}^∞ Ai =def {x | ∀i(i ≥ m → x ∈ Ai)}.
In a set there is no special order of the elements, so e. g. we have {a, b} = {b, a}. If in
a set {a1 , a2 , . . . , an } the order of the elements should be fixed as a1 , a2 , . . . , an then we
write (a1 , a2 , . . . , an ), and we call that an n-tuple. For sets A1 , A2 , . . . , An we define
A1 ×A2 ×. . .×An =def {(a1 , a2 , . . . , an ) | a1 ∈ A1 , a2 ∈ A2 , . . . , an ∈ An }
as the cartesian product of A1, A2, . . . , An. In particular, Aⁿ =def A × A × · · · × A (n times).
Relations
Functions
Functions are a special kind of sets. Intuitively, a function maps elements of a set onto
elements of another set. Let A and B be sets. A set ϕ ⊆ A×B is called a function if for
every a ∈ A there exists at most one b ∈ B such that (a, b) ∈ ϕ. We write ϕ : A → B. If,
for an a ∈ A, there exists an element b ∈ B such that (a, b) ∈ ϕ then ϕ(a) is defined and
we write ϕ(a) = b. Otherwise, ϕ(a) is said to be not defined. We set
Dϕ =def {a | a ∈ A and ϕ(a) defined} (domain of ϕ)
Wϕ =def {ϕ(a) | a ∈ A and ϕ(a) defined} (range of ϕ)
A function f : A → B is called total if Df = A. For total functions f : A → B and
g : B → C we define the function (g ◦ f ) : A → C by (g ◦ f )(a) =def g(f (a)).
The characteristic function cA of a set A is defined, for all x, as

    cA(x) =def  1, if x ∈ A
                0, if x ∉ A
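In Python the characteristic function is naturally a closure over the set; here membership is decidable because the set is finite (the names below are illustrative).

```python
# The characteristic function c_A as a closure; total on all inputs.
def char_fn(A):
    return lambda x: 1 if x in A else 0

c = char_fn({1, 2})
assert (c(1), c(2), c(5)) == (1, 1, 0)
```

This is the same object cA used earlier in the reduction argument cA = cB ◦ f of Subsection 5.1.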
Natural Numbers
Let N =def {0, 1, 2, 3, . . . } be the set of natural numbers, and let + and · denote the
operations of addition and multiplication on N. The application of the subtraction and
division does not necessarily result in a natural number. Therefore we modify these
operations in such a way that the result is in any case a natural number. We define the
modified subtraction − · and the modified division : for all x, y ∈ N by
· y =def x − y, if x ≥ y
x−
0 else
the greatest z ∈ N such that z · y ≤ x, if y > 0
x : y =def
x, if y = 0
Besides the operational symbols +, −·, ·, and : for the arithmetical operations we will also
use the functional symbols sum, sub, mul and div, which are defined for all x, y ∈ N by
sum(x, y) =def x + y, sub(x, y) =def x −· y, mul(x, y) =def x · y, and div(x, y) =def x : y.
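The modified operations translate literally into Python (the `div` below searches for the greatest z exactly as in the definition, rather than using integer division, to match the text; the convention x : 0 = x is as defined above).

```python
# Modified subtraction: stays in N by cutting off at 0.
def sub(x, y):
    return x - y if x >= y else 0

# Modified division: the greatest z with z*y <= x, and x : 0 = x by convention.
def div(x, y):
    return max(z for z in range(x + 1) if z * y <= x) if y > 0 else x

assert sub(7, 3) == 4 and sub(3, 7) == 0
assert div(7, 2) == 3 and div(7, 0) == 7
```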
Moreover we define the exponential function exp by exp(x, y) =def xy for all x, y ∈ N.
We use a special notation for sums of more than two natural numbers ai. For r ≤ s we
define Σ_{i=r}^s ai =def ar + ar+1 + · · · + as, and for a finite set I we define Σ_{i∈I} ai as the sum
of all ai such that i ∈ I.
Let f and g be functions and k ≥ 2. We define the functions (f + g), (f − g), (f · g),
(f : g), (f^g), (k·f), (f^k), and (k^f) by
(f + g)(x) =def f(x) + g(x)      (f^g)(x) =def f(x)^g(x)
(f − g)(x) =def f(x) −· g(x)     (k·f)(x) =def k·f(x)
(f · g)(x) =def f(x) · g(x)      (f^k)(x) =def f(x)^k
(f : g)(x) =def f(x) : g(x)      (k^f)(x) =def k^f(x)
Let A ⊆ N be a nonempty finite set. The greatest (smallest) element w.r.t. the order ≤
on N is called the maximum (minimum, resp.) of A, and it is denoted by max A (min A,
resp.).
8 Appendix 3: Exercises
1. How many words are in the set {x | x ∈ {a, b}* ∧ |x| ≤ n}, for n ≥ 0?
2. How many words of length n ≥ 2 are in the set {xayb | x, y ∈ {a, b}∗ }?
3. Let bin(n) = 101010 . . . 10 (2m digits). Give a formula for n (as in Example 1.4.2).
23. (See proof of Theorem 3.17.) Prove:
(a) If A ∈ SPACE(2^(n^k)) then BAk ∈ SPACE(n).
(b) If BAk ∈ P then A ∈ TIME(2^(n^(k+1))).