

6.1 Error-Free Probabilistic Programs


Randomization is an important programming tool. Intuitively, its power stems from choice. The ability to make "random choices" can be viewed as a derivative of the ability to make "nondeterministic choices."

In the nondeterministic case, each execution of an instruction must choose among a number of options. Some of the options might be "good," and others "bad." The choice must be of a good option, whenever one exists. The problem is that it does not generally seem possible to make nondeterministic choices in an efficient manner. The options in the case of random choices are similar to those in the nondeterministic case; however, no restriction is placed on the nature of the option to be chosen. Instead, each of the good and bad options is assumed to have an equal probability of being chosen. Consequently, the lack of bias among the different options enables the efficient execution of choices. The burden of increasing the probability of obtaining good choices is placed on the programmer.

Here random choices are introduced to programs through random assignment instructions of the form x := random(S), where S can be any finite set. An execution of a random assignment instruction x := random(S) assigns to x an element from S, where each of the elements of S is assumed to have an equal probability of being chosen. Programs with random assignment instructions, and no nondeterministic instructions, are called probabilistic programs. Each execution sequence of a probabilistic program is considered to be a computation. On a given input a probabilistic program might have both accepting and nonaccepting computations.

The execution of a random assignment instruction x := random(S) is assumed to take one unit of time under the uniform cost criteria, and |v| + log |S| time under the logarithmic cost criteria, where |v| denotes the length of the representation of the value v chosen from S, and |S| denotes the cardinality of S.
A probabilistic program P is said to have an expected time complexity τ(x) on input x if

τ(x) = p0(x)·0 + p1(x)·1 + p2(x)·2 + ···

where pi(x) denotes the probability that the program P on input x has a computation that takes exactly i units of time. The program P is said to have an expected time complexity T(n) if T(|x|) ≥ τ(x) for each x.
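The definition above can be evaluated directly once the distribution of running times is known. The following sketch (not from the text; the coin-flipping program is an assumed example chosen for concreteness) computes τ for a program that repeatedly executes x := random({0, 1}) until it draws 0, so that a computation takes i units of time with probability (1/2)^i.

```python
def tau(p):
    """Expected time complexity: tau = p_0*0 + p_1*1 + p_2*2 + ...,
    where p[i] is the probability of a computation taking exactly i steps."""
    return sum(i * p_i for i, p_i in enumerate(p))

# A program that runs `x := random({0, 1})` until it draws 0 takes i units of
# time with probability (1/2)**i, so its expected time is sum i/2**i = 2.
p = [0.0] + [0.5 ** i for i in range(1, 200)]
print(tau(p))  # close to 2
```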

The following example shows how probabilism can be used to guarantee an improved behavior (on average) for each input.

Example 6.1.1 Consider the deterministic program in Figure 6.1.1.

call SELECT(k, S)
procedure SELECT(k, S)
  x := first element in S
  S1 := { y | y is in S, and y < x }
  n1 := cardinality of the set stored in S1
  S2 := { y | y is in S, and y > x }
  n2 := cardinality of the set stored in S2
  n3 := (cardinality of the set stored in S) - n2
  case
    k ≤ n1      : SELECT(k, S1)
    n3 < k      : SELECT(k - n3, S2)
    n1 < k ≤ n3 : x holds the desired element
  end
end

Figure 6.1.1 A program that selects the kth smallest element in S (given in a free format, using recursion).

The program selects the kth smallest element in any given set S of finite cardinality. Let T(n) denote the time (under the uniform cost criteria) that the program takes to select an element from a set of cardinality n. T(n) satisfies, for some constant c and some integer m < n, the following inequalities.

T(1) ≤ c
T(n) ≤ T(m) + cn

From the inequalities above, T(n) ≤ T(n − 1) + cn ≤ T(n − 2) + c(n − 1) + cn ≤ ··· ≤ T(1) + c(2 + 3 + ··· + n) ≤ cn². That is, the program is of time complexity O(n²). The time requirement of the program is sensitive to the ordering of the elements in the sets in question. For instance, when searching for the smallest element, O(n) time is sufficient if the elements of the set are given in nondecreasing order. Alternatively, the program uses O(n²) time when the elements are given in nonincreasing order.
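The procedure of Figure 6.1.1 can be transcribed directly into Python (a sketch; the 1-based rank k and the use of lists for sets are representation choices, not part of the original figure):

```python
def select(k, S):
    """Return the kth smallest element of the set S (k is 1-based),
    following the SELECT procedure of Figure 6.1.1: the pivot x is
    the first element of S."""
    S = list(S)
    x = S[0]
    S1 = [y for y in S if y < x]
    n1 = len(S1)
    S2 = [y for y in S if y > x]
    n2 = len(S2)
    n3 = len(S) - n2              # number of elements that are <= x
    if k <= n1:
        return select(k, S1)
    elif n3 < k:
        return select(k - n3, S2)
    else:                         # n1 < k <= n3
        return x

print(select(3, [7, 1, 5, 9, 3]))  # -> 5
```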

This sensitivity to the order of the elements can be eliminated by assigning to x a random element of S, instead of the first element of S. In such a case, the expected time complexity τ(n) of the program satisfies, for some constant c, inequalities from which the following chain of bounds can be derived.

τ(n) ≤ ··· ≤ τ(1) + (n − 1)c + cn ≤ 2cn

That is, the modified program is probabilistic, and its expected time complexity is O(n). For every given input (k, S), the probabilistic program is guaranteed to find the kth smallest element in S within O(|S|²) time. However, on average it requires only O(|S|) time.
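The single change described above can be sketched as follows, with random.choice playing the role of x := random(S). The answer is always correct; only the running time depends on the random choices.

```python
import random

def rselect(k, S):
    """SELECT with the one modification suggested in the text: the pivot x
    is a random element of S rather than its first element."""
    S = list(S)
    x = random.choice(S)          # x := random(S)
    S1 = [y for y in S if y < x]
    S2 = [y for y in S if y > x]
    n1 = len(S1)
    n3 = len(S) - len(S2)         # number of elements that are <= x
    if k <= n1:
        return rselect(k, S1)
    elif n3 < k:
        return rselect(k - n3, S2)
    else:                         # n1 < k <= n3
        return x

print(rselect(4, [3, 1, 4, 5, 9, 2, 6]))  # -> 4, whatever pivots are drawn
```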


6.2 Probabilistic Programs That Might Err


Error Probability of Repeatedly Executed Probabilistic Programs
Outputs of Probabilistic Programs

For many practical reasons programs might be allowed a small probability of erring on some inputs.

Example 6.2.1 A brute-force algorithm for solving the nonprimality problem takes exponential time (see Example 5.1.3). The program in Figure 6.2.1 is an example of a probabilistic program that determines the nonprimality of numbers in polynomial expected time.

read x
y := random({2, ..., ⌊√x⌋})
if x is divisible by y then answer := yes /* not a prime number */
  else answer := no

Figure 6.2.1 An undesirable probabilistic program for the nonprimality problem.
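Figure 6.2.1's program can be sketched as follows (a sketch; the extracted text loses the upper limit of the random range, and ⌊√x⌋ is an assumption recovered from the error analysis that follows):

```python
import math
import random

def nonprime_once(x):
    """One run of the program of Figure 6.2.1: draw
    y := random({2, ..., r}) with r = floor(sqrt(x)) (an assumption),
    and answer yes exactly when y divides x."""
    y = random.randint(2, math.isqrt(x))
    return "yes" if x % y == 0 else "no"

# On a prime the answer is always the correct 'no'.  On m = 11**2 = 121 the
# only divisor in {2, ..., 11} is 11, so a single run errs with
# probability 1 - 1/(11 - 1) = 9/10.
print(nonprime_once(97))  # -> no (97 is prime; this answer is certain)
```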

The program has zero probability for an error on inputs that are prime numbers. However, for infinitely many nonprime numbers the program has a high probability of giving a wrong answer. Specifically, the probability for an error on a nonprime number m is 1 − s/(⌊√m⌋ − 1), where s denotes the number of distinct divisors of m in {2, ..., ⌊√m⌋}. In particular, the probability for an error reaches the value of 1 − 1/(⌊√m⌋ − 1) for those numbers m that are the square of a prime number.

The probability of getting a wrong answer for a given number m can be reduced by executing the program k times. In such a case, the number m is declared to be nonprime with full confidence if in any of the k executions the answer yes is obtained. Otherwise, m is determined to be prime, with probability of at most (1 − 1/(⌊√m⌋ − 1))^k for an error. With k = c(⌊√m⌋ − 1) this probability approaches the value of (1/e)^c < 0.37^c as m increases, where e is the constant 2.71828... However, such a value for k is forbiddingly high, because it is exponential in the length of the representation of m, that is, in log m.

An improved probabilistic program can be obtained by using the following known result.

Result Let Wm(b) be a predicate that is true if and only if either of the following two conditions holds.

a. (b^(m−1) − 1) mod m ≠ 0.
b. 1 < gcd(b^t − 1, m) < m for some t and i such that m − 1 = t·2^i.

Then for each integer m ≥ 2 the conditions below hold.

a. m is a prime number if and only if Wm(b) is false for all b such that 2 ≤ b < m.
b. If m is not prime, then the set { b | 2 ≤ b < m, and Wm(b) holds } is of cardinality at least (3/4)(m − 1).
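The predicate Wm(b) can be sketched directly from the two conditions (a sketch based on the reconstruction above; Python's three-argument pow performs modular exponentiation, which keeps every intermediate number polynomial in log m):

```python
from math import gcd

def W(m, b):
    """W_m(b): true iff (a) (b**(m-1) - 1) mod m != 0, or (b) there exist
    t and i with m - 1 = t * 2**i and 1 < gcd(b**t - 1, m) < m."""
    if (pow(b, m - 1, m) - 1) % m != 0:
        return True                            # condition (a)
    t = m - 1
    while t % 2 == 0:                          # all t with m - 1 = t * 2**i, i >= 1
        t //= 2
        g = gcd((pow(b, t, m) - 1) % m, m)
        if 1 < g < m:
            return True                        # condition (b)
    return False

# 561 = 3 * 11 * 17 satisfies b**560 mod 561 = 1 for every b coprime to it,
# so condition (a) misses it, yet condition (b) exposes it already at b = 2.
print(W(561, 2))                               # -> True
print(any(W(13, b) for b in range(2, 13)))     # -> False: 13 is prime
```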

The result implies the probabilistic program in Figure 6.2.2.

read x
y := random({2, ..., x − 1})
if Wx(y) then answer := yes
  else answer := no

Figure 6.2.2 A good probabilistic program for the nonprimality problem.

For prime numbers m the program always provides the right answer. On the other hand, for nonprime numbers m the program provides the right answer with probability of at most 1 − (3/4)(m − 1)/(m − 2) ≤ 1/4 for an error. The probability of a wrong answer can be reduced below any desired constant ε > 0 by executing the program k ≥ log_{1/4} ε times. That is, the number of times k that the program has to be executed is independent of the input m.

Checking for the condition (b^(m−1) − 1) mod m ≠ 0 can be done in polynomial time by using the relation (a + b) mod m = ((a mod m) + (b mod m)) mod m and the relation (ab) mod m = ((a mod m)(b mod m)) mod m. Checking for the condition 1 < gcd(b^t − 1, m) < m can be done in polynomial time by using Euclid's algorithm (see Exercise 5.1.1). Consequently, the program in Figure 6.2.2 is of polynomial time complexity.

Example 6.2.2 Consider the problem of deciding for any given matrices A, B, and C whether AB ≠ C. A brute-force algorithm to decide the problem can compute D = AB and E = D − C, and check whether E ≠ 0. A brute-force multiplication of A and B requires O(N³) time (under the uniform cost criteria) if A and B are of dimension N × N. Therefore, the brute-force algorithm for deciding whether AB ≠ C also takes O(N³) time. The inequality AB ≠ C holds if and only if the inequality

(AB − C)ξ ≠ 0 holds for some column vector ξ with entries from {−1, 1}. Consequently, whether AB ≠ C can be determined by a probabilistic program that determines

a. A column vector ξ of N random numbers from {−1, 1}.
b. The value of the vector v1 = Bξ.
c. The value of the vector v2 = Av1 = ABξ.
d. The value of the vector v3 = Cξ.
e. The value of the vector v = v2 − v3 = ABξ − Cξ = (AB − C)ξ.
f. Whether any of the entries of v is nonzero.

If some of the entries in the vector v = (AB − C)ξ are nonzero, then the probabilistic program reports that the inequality AB ≠ C must hold. Otherwise, the program reports that AB = C, with probability of at most 1/2 for an error. The program takes O(N²) time.

The analysis of the program for its error probability relies on the following result.

Result Let d1x1 + ··· + dN xN = 0 be a linear equation with coefficients that satisfy the inequality (d1, ..., dN) ≠ (0, ..., 0). Then the linear equation has at most 2^(N−1) solutions (a1, ..., aN) over {−1, 1}.

Proof Consider any such linear equation d1x1 + ··· + dN xN = 0. With no loss of generality assume that d1 > 0. If (a1, ..., aN) is a solution to the linear equation over {−1, 1}, then (−a1, a2, ..., aN) does not solve the equation. On the other hand, if both (a1, ..., aN) and (b1, ..., bN) solve the equation over {−1, 1}, then the inequality (−a1, a2, ..., aN) ≠ (−b1, b2, ..., bN) holds whenever (a1, ..., aN) ≠ (b1, ..., bN). As a result, each solution to the equation has an associated distinct assignment that does not solve the equation. That is, at most half of the 2^N possible assignments over {−1, 1} to (x1, ..., xN) can serve as solutions to the equation. In particular, when AB ≠ C some row d of AB − C is nonzero, and a random ξ satisfies dξ = 0, and hence can escape detection, with probability at most 1/2.
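The six steps above can be sketched as follows (a sketch; plain nested lists stand in for the matrices, and every step is a matrix-by-vector product, so one run costs O(N²)):

```python
import random

def matvec(M, v):
    """Multiply an N x N matrix (a list of rows) by a column vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def reports_unequal(A, B, C):
    """One run of the Example 6.2.2 program: test whether (AB - C)u != 0
    for a random vector u over {-1, 1}, without ever forming AB."""
    n = len(A)
    u = [random.choice((-1, 1)) for _ in range(n)]  # step a
    v1 = matvec(B, u)                               # step b: v1 = Bu
    v2 = matvec(A, v1)                              # step c: v2 = ABu
    v3 = matvec(C, u)                               # step d: v3 = Cu
    v = [v2[i] - v3[i] for i in range(n)]           # step e: v = (AB - C)u
    return any(entry != 0 for entry in v)           # step f

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
AB = [[2, 1], [4, 3]]                  # the true product, so AB = C exactly
print(reports_unequal(A, B, AB))       # -> False: a run never errs on equality
```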

The probabilistic program can be executed k times on any given triplet A, B, and C. In such a case, if any of the executions results in a nonzero vector, then AB ≠ C must hold. Otherwise, AB = C with probability of at most (1/2)^k for an error. That is, by repeatedly executing the probabilistic program, one can reduce the error probability to any desired magnitude. Moreover, the resulting error probability of (1/2)^k is guaranteed for all possible choices of the matrices A, B, and C.

Error Probability of Repeatedly Executed Probabilistic Programs

The probabilistic programs in the previous two examples exhibit the property of one-sided error probability. Specifically, their yes answers are always correct, whereas their no answers might sometimes be correct and sometimes wrong. In general, however, probabilistic programs might err on more than one kind of answer. In such a case, the answer can be arrived at by taking the majority of the answers obtained in repeated computations on the given instance. The following lemma analyzes the probability of error in repeated computations that establish an answer by absolute majority.

Lemma 6.2.1 Consider any probabilistic program P. Assume that P has probability e of providing an output that differs from y on input x. Then P has probability

ε(N, e) = Σ_{k=0}^{N} C(2N+1, k) (1 − e)^k e^(2N+1−k)

of having at least N + 1 computations with outputs that differ from y in any sequence of 2N + 1 computations of P on input x. (C(2N+1, k) denotes the binomial coefficient (2N+1)!/(k!(2N+1−k)!).)

Proof Let P, x, y, and e be as in the statement of the lemma. Let ε(N, e, k) denote the probability that, in a sequence of 2N + 1 computations on input x, P will have exactly k computations with an output that is equal to y. Let ε(N, e) denote the probability that, in a sequence of 2N + 1 computations on input x, P will have at least N + 1 computations with outputs that differ from y. By definition,

ε(N, e) = ε(N, e, 0) + ε(N, e, 1) + ··· + ε(N, e, N).

The probability of having the answer y at, and only at, the i1st, ..., ikth computations in a sequence of 2N + 1 computations of P on input x is equal to (1 − e)^k e^(2N+1−k), where i1, ..., ik are assumed to satisfy 1 ≤ i1 < i2 < ··· < ik ≤ 2N + 1.

Each collection {C1, ..., C2N+1} of 2N + 1 computations of P has (2N + 1)(2N)···(2N + 2 − k) = (2N+1)!/(2N+1−k)! possible arrangements Ci1, ..., Cik of k computations. In these arrangements only (2N+1)!/(k!(2N+1−k)!) = C(2N+1, k) satisfy the condition i1 < ··· < ik. Consequently, in each sequence of 2N + 1 computations of P there are C(2N+1, k) possible ways to obtain the output y exactly k times. The result follows because ε(N, e, k) = (the number of possible ways, in a sequence of 2N + 1 computations, for exactly k computations to have the output y) times (the probability of having a given sequence of 2N + 1 computations with exactly k outputs of y).

In general, we are interested only in probabilistic programs that run in polynomial time, and whose error probability can be reduced to a desired constant by executing the program some polynomial number of times. The following theorem considers the usefulness of probabilistic programs that might err.
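The sum in Lemma 6.2.1 can be evaluated directly (a sketch; math.comb supplies the binomial coefficient C(2N+1, k)):

```python
from math import comb

def err(N, e):
    """epsilon(N, e): the probability that at least N + 1 of 2N + 1
    independent runs output something other than y, when one run does so
    with probability e (and so agrees with y with probability 1 - e)."""
    return sum(comb(2 * N + 1, k) * (1 - e) ** k * e ** (2 * N + 1 - k)
               for k in range(N + 1))

# With a single run (N = 0) the error is just e; a 21-run majority vote
# (N = 10) drives a per-run error of 0.25 well below 0.01.
print(err(0, 0.25))           # -> 0.25
print(err(10, 0.25) < 0.01)   # -> True
```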

Theorem 6.2.1 Let ε(N, e) be as in the statement of Lemma 6.2.1. Then ε(N, e) has the following properties.

a. ε(N, 1/2) = 1/2 for all N.
b. ε(N, e) approaches 0 for each constant e < 1/2, when N approaches ∞.
c. ε(N, 1/2 − 1/N) approaches a constant that is greater than 1/(2·6⁴), when N approaches ∞.

Proof

a. The equality ε(N, 1/2) = 1/2 for all N is implied from the following relations, which use the identity C(2N+1, k) = C(2N+1, 2N+1−k).

2ε(N, 1/2) = 2 Σ_{k=0}^{N} C(2N+1, k) (1/2)^(2N+1)
           = Σ_{k=0}^{N} C(2N+1, k) (1/2)^(2N+1) + Σ_{k=N+1}^{2N+1} C(2N+1, k) (1/2)^(2N+1)
           = Σ_{k=0}^{2N+1} C(2N+1, k) (1/2)^(2N+1)
           = (1/2)^(2N+1) (1 + 1)^(2N+1)
           = 1

b. The equality Σ_{k=0}^{2N+1} C(2N+1, k) = 2^(2N+1) implies the inequality C(2N+1, k) ≤ 2^(2N+1) for 0 ≤ k ≤ 2N + 1. Consequently, the result is implied from the following relations, which hold for e = 1/2 − δ with a constant δ > 0; each of the N + 1 summands is at most 2^(2N+1) e^(N+1) (1 − e)^N, because e < 1 − e.

ε(N, e) = Σ_{k=0}^{N} C(2N+1, k) (1 − e)^k e^(2N+1−k)
        ≤ (N + 1) 2^(2N+1) e^(N+1) (1 − e)^N
        = 2e(N + 1) (1 − 2δ)^N (1 + 2δ)^N
        = 2e(N + 1) (1 − 4δ²)^N

The last quantity approaches 0 when N approaches ∞.

c. The result follows from the following relations, which hold for N ≥ 4, because 1 − t ≥ 6^(−t) for 0 ≤ t ≤ 1/2, and because Σ_{k=0}^{N} C(2N+1, k) = (1/2)(1 + 1)^(2N+1) = 2^(2N).

ε(N, 1/2 − 1/N) = Σ_{k=0}^{N} C(2N+1, k) (1/2 + 1/N)^k (1/2 − 1/N)^(2N+1−k)
               ≥ Σ_{k=0}^{N} C(2N+1, k) (1/2 − 1/N)^(2N+1)
               = 2^(2N) (1/2 − 1/N)^(2N+1)
               = (1/2)(1 − 2/N)^(2N+1)
               ≥ (1/2)·6^(−(2/N)(2N+1))
               = 1/(2·6^(4+2/N))

The last quantity approaches 1/(2·6⁴) when N approaches ∞.
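The three properties can be checked numerically against the sum from Lemma 6.2.1 (a sketch; err re-evaluates that sum):

```python
from math import comb

def err(N, e):
    """The sum of Lemma 6.2.1: probability of a wrong majority in 2N + 1
    runs when each run is wrong with probability e."""
    return sum(comb(2 * N + 1, k) * (1 - e) ** k * e ** (2 * N + 1 - k)
               for k in range(N + 1))

for N in (5, 50, 200):
    print(round(err(N, 0.5), 4),           # (a) exactly 1/2 for every N
          round(err(N, 0.4), 4),           # (b) tends to 0 for constant e < 1/2
          round(err(N, 0.5 - 1 / N), 4))   # (c) stays above 1/(2 * 6**4)
```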

According to Theorem 6.2.1(a), a probabilistic program must have an error probability e(x) smaller than 1/2 in order for repeated runs of the program to reduce the probability of obtaining a wrong solution. According to Theorem 6.2.1(b), an error probability e(x) smaller than some constant ε < 1/2 allows a reduction to any desired magnitude with a number of repetitions that is independent of the given input x. On the other hand, by Theorem 6.2.1(c), the required number of repetitions is bounded below by f(x) = 1/((1/2) − e(x)), because the probability of obtaining a wrong solution through repeatedly running the program on the order of f(x) times is greater than 1/(2·6⁴). In particular, when f(x) is more than a polynomial in |x|, then the required number of repetitions is similarly greater.

Outputs of Probabilistic Programs

The previous theorems motivate the following definitions. A probabilistic program P is said to have an output y on input x if the probability of P having a computation with output y on input x is greater than 1/2. If no such y exists, then the output of P on input x is undefined.

A probabilistic program P is said to compute a function f(x) if P on each input x has probability 1 − e(x) of an accepting computation with output f(x), where

a. e(x) < 1/2 whenever f(x) is defined.
b. e(x) is undefined whenever f(x) is undefined.

e(x) is said to be the error probability of P. The error probability e(x) is said to be a bounded-error probability if there exists a constant ε < 1/2 such that e(x) ≤ ε for all x on which e(x) is defined. P is said to be a bounded-error probabilistic program if it has a bounded-error probability. By the previous discussion it follows that the probabilistic programs that have both polynomial time complexity and bounded-error probability are "good" programs.
