A priori analysis studies an algorithm before it is implemented on any computer, based only on the algorithm itself. It gives the approximate amount of resources required to solve the problem before execution. In a priori analysis we ignore machine- and platform-dependent factors; it is always better to analyze the algorithm at this early stage of the software life cycle.
A priori analysis requires knowledge of:
Mathematical equations
Determination of the problem size
Order of magnitude of the algorithm
A posteriori analysis determines the actual statistics of an algorithm's consumption of time and space (primary memory) while it executes as a program on a machine.
Limitations of a posteriori analysis:
External factors influencing the execution of the algorithm:
Network delay
Hardware failure, etc.
The information about the target machine is not known during the design phase
The same algorithm might behave differently on different systems
Hence we cannot come to definite conclusions
Asymptotic notations (O, Ω, Θ)
Step count is used to compare the time complexities of two programs that compute the same function and to predict the growth in run time as the instance characteristics change. Determining the exact step count is difficult, and it is not necessary either. Since the values are not exact quantities, we need only comparative statements like c₁n² ≤ tₚ(n) ≤ c₂n².
For example, consider two programs with complexities c₁n² + c₂n and c₃n respectively. For small values of n, the comparison depends on the values of c₁, c₂ and c₃. But there is also an n beyond which c₃n is smaller than c₁n² + c₂n. This value of n is called the break-even point. If this point is zero, c₃n is always faster (or at least as fast).
For c₁ = 1, c₂ = 2 and c₃ = 100:
c₁n² + c₂n ≤ c₃n for n ≤ 98, and
c₁n² + c₂n > c₃n for n > 98.
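The break-even point above can be checked with a short script (a sketch; the coefficient values are the ones from the text):

```python
# Compare c1*n^2 + c2*n against c3*n for c1=1, c2=2, c3=100 (values from the text).
c1, c2, c3 = 1, 2, 100

f = lambda n: c1 * n**2 + c2 * n   # the quadratic program
g = lambda n: c3 * n               # the linear program

# f(n) <= g(n) up to the break-even point n = 98, and f(n) > g(n) beyond it.
assert all(f(n) <= g(n) for n in range(1, 99))
assert all(f(n) > g(n) for n in range(99, 200))
print("break-even point:", max(n for n in range(1, 200) if f(n) <= g(n)))  # 98
```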
The common asymptotic functions are given below.

Function    Name
1           Constant
log n       Logarithmic
n           Linear
n log n     n log n
n²          Quadratic
n³          Cubic
2ⁿ          Exponential
n!          Factorial
Definition [Big oh]: The function f(n) = O(g(n)) iff there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.
Ex1: f(n) = 2n + 8 and g(n) = n². Can we find constants c and n₀ so that 2n + 8 ≤ c·n² for all n ≥ n₀? Taking c = 1 and n₀ = 4 works: at n = 4 we get 16 ≤ 16, and the inequality continues to hold for every larger n. Since we are generalizing for large values of n, and small values (1, 2, 3) are not important, we can say that f(n) grows no faster than g(n); that is, f(n) is bounded above by g(n).
Ex2: The function 3n + 2 = O(n), as 3n + 2 ≤ 4n for all n ≥ 2.
Pb1: 3n + 3 = O(______) as 3n + 3 ≤ ______ for all ______.
Ex3: 10n² + 4n + 2 = O(n²), as 10n² + 4n + 2 ≤ 11n² for all n ≥ 5.
Pb2: 1000n² + 100n − 6 = O(______) as 1000n² + 100n − 6 ≤ ______ for all ______.
Ex4: 6·2ⁿ + n² = O(2ⁿ), as 6·2ⁿ + n² ≤ 7·2ⁿ for n ≥ 4.
Ex5: 3n + 3 = O(n²), as 3n + 3 ≤ 3n² for n ≥ 2.
Ex6: 10n² + 4n + 2 = O(n⁴), as 10n² + 4n + 2 ≤ 10n⁴ for n ≥ 2.
Ex7: 3n + 2 ≠ O(1), as 3n + 2 is not less than or equal to c for any constant c and all n ≥ n₀.
Ex8: 10n² + 4n + 2 ≠ O(n).
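The (c, n₀) witnesses in the examples above can be verified numerically over a range of n (a sketch; it checks the inequality f(n) ≤ c·g(n) on sampled values, not the definition for all n):

```python
# Each entry (f, c, g, n0) claims f(n) <= c*g(n) for all n >= n0.
claims = [
    (lambda n: 3*n + 2,            4, lambda n: n,     2),  # 3n+2 = O(n)
    (lambda n: 10*n**2 + 4*n + 2, 11, lambda n: n**2,  5),  # 10n^2+4n+2 = O(n^2)
    (lambda n: 6*2**n + n**2,      7, lambda n: 2**n,  4),  # 6*2^n+n^2 = O(2^n)
    (lambda n: 3*n + 3,            3, lambda n: n**2,  2),  # 3n+3 = O(n^2)
]

for f, c, g, n0 in claims:
    assert all(f(n) <= c * g(n) for n in range(n0, n0 + 100))
print("all O-claims hold on the sampled range")
```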
Definition [Omega]: The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist positive constants c and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀.
Definition [Theta]: The function f(n) = Θ(g(n)) (read as "f of n is theta of g of n") iff there exist positive constants c₁, c₂ and n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.
Ex1: The function 3n + 2 = Θ(n), as 3n + 2 ≥ 3n for all n ≥ 2 and 3n + 2 ≤ 4n for all n ≥ 2, so c₁ = 3, c₂ = 4 and n₀ = 2.
Pb1: 6n + 4 = Θ(___) as 6n + 4 ≥ ______ for all n ≥ ______ and 6n + 4 ≤ ______ for all n ≥ ______.
Ex2: 3n + 3 = Θ(n)
Ex3: 10n² + 4n + 2 = Θ(n²)
Ex4: 6·2ⁿ + n² = Θ(2ⁿ)
Ex5: 10·log n + 4 = Θ(log n)
Ex6: 3n + 2 ≠ Θ(1)
Ex7: 3n + 3 ≠ Θ(n²)
Ex8: 10n² + 4n + 2 ≠ Θ(n)
Theorem: If f(n) = aₘnᵐ + ... + a₃n³ + a₂n² + a₁n + a₀, then f(n) = O(nᵐ).
Proof: For n ≥ 1,
f(n) ≤ |aₘ|nᵐ + ... + |a₁|n + |a₀|
     = nᵐ (|aₘ| + |aₘ₋₁|/n + ... + |a₀|/nᵐ)
     ≤ nᵐ (|aₘ| + ... + |a₀|).
So f(n) ≤ c·nᵐ for all n ≥ 1 with c = |aₘ| + ... + |a₀|, i.e. f(n) = O(nᵐ).
The corresponding lower-bound result, f(n) = Ω(nᵐ) when aₘ > 0, is left as an exercise.
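The bound from the theorem can be spot-checked on a sample polynomial (a sketch; the coefficients are made up for illustration):

```python
# Hypothetical polynomial f(n) = 5n^3 - 2n^2 + 7n + 1, so m = 3.
coeffs = [1, 7, -2, 5]           # a0, a1, a2, a3
c = sum(abs(a) for a in coeffs)  # c = |a0| + |a1| + |a2| + |a3| = 15

def f(n):
    return sum(a * n**i for i, a in enumerate(coeffs))

# The theorem says f(n) <= c * n^m for all n >= 1.
assert all(f(n) <= c * n**3 for n in range(1, 1000))
print("f(n) <= 15*n^3 holds for n = 1..999")
```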
Definition [Little oh]: The function f(n) = o(g(n)) (read as "f of n is little oh of g of n") iff
lim (n → ∞) f(n)/g(n) = 0.
Example: The function 3n + 2 = o(n²), since lim (n → ∞) (3n + 2)/n² = 0.
Ex1: 3n + 2 = o(n log n).
Ex2: 3n + 2 = o(3ⁿ).
Ex4: 6·2ⁿ + n² = o(2ⁿ log n).
Ex5: 6·2ⁿ + n² ≠ o(2ⁿ).
The Divide and Conquer design technique works on the principle of dividing the given problem into smaller subproblems that are similar to the original problem and ideally of the same size. The strategy can be viewed as having three steps. The first step, Divide, splits the given problem into smaller subproblems that are identical in nature to the original and roughly equal in size. The second step, Conquer, solves these subproblems recursively. The third step, Combine, merges the solutions of the subproblems into a solution for the original problem.
The complexity of merge sort is O(n log n) and that of binary search is O(log n). This can be proved by repeated substitution in the recurrence relations. Suppose (for simplicity) that n = 2ᵏ for some integer k; then k = log₂n.
Merge Sort:
Let T(n) be the time needed to sort n elements. Since we can perform the splitting and the merging in linear time, these two steps take cn time for some constant c. So the recurrence relation is:
T(n) = 2T(n/2) + cn.
In the same way, T(n/2) = 2T(n/4) + cn/2, so
T(n) = 4T(n/4) + 2cn.
Continuing in this way,
T(n) = 2ᵐT(n/2ᵐ) + mcn, and finally
T(n) = 2ᵏT(n/2ᵏ) + kcn = nT(1) + cn·log₂n = O(n log n).
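The recurrence above corresponds to the following implementation (a sketch in Python; the function name is ours):

```python
def merge_sort(a):
    """Sort list a; cost obeys T(n) = 2T(n/2) + cn, i.e. O(n log n)."""
    if len(a) <= 1:                    # base case: T(1) is constant
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # first T(n/2)
    right = merge_sort(a[mid:])        # second T(n/2)
    # Merge step: linear time -- the cn term of the recurrence.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 1, 4, 2, 8, 0, 2]))  # [0, 1, 2, 2, 4, 5, 8]
```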
Binary Search:
Let T(n) be the time needed to search among n elements. Since we need to search only one of the halves, the recurrence relation is:
T(n) = T(n/2) + c.
In the same way, T(n/2) = T(n/4) + c, so
T(n) = T(n/4) + 2c.
Continuing in this way,
T(n) = T(n/2ᵐ) + mc, and finally
T(n) = T(n/2ᵏ) + kc = T(1) + c·log₂n = O(log n).
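The same recurrence is realized by an iterative binary search (a sketch; the function name is ours):

```python
def binary_search(a, key):
    """Return an index of key in sorted list a, or -1 if absent.
    Each iteration halves the range: T(n) = T(n/2) + c, i.e. O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            lo = mid + 1      # key can only be in the right half
        else:
            hi = mid - 1      # key can only be in the left half
    return -1

a = [10, 20, 30, 40, 50, 60]
print(binary_search(a, 40))   # 3
print(binary_search(a, 35))   # -1
```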
QuickSort:
Quick sort is one of the most powerful sorting algorithms. It works on the Divide and Conquer design
principle. Quick sort works by finding an element, called the pivot, in the given input array and
partitions the array into three sub arrays such that the left sub array contains all elements which are
less than or equal to the pivot. The middle sub array contains the pivot. The right sub array contains
all elements which are greater than the pivot. Now, the two sub arrays, namely the left sub array and
the right sub array are sorted recursively.
The partitioning of the given input array is part of the Divide step. The recursive calls to sort the sub
arrays are part of the Conquer step. Since the sorted sub arrays are already in the right place there is no
Combine step for the Quick sort.
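The three-subarray scheme described above can be sketched directly in Python (an illustrative, not in-place, version; the function name is ours):

```python
def quick_sort(a):
    """Quick sort via three subarrays: left <= pivot, middle = pivot, right > pivot."""
    if len(a) <= 1:
        return a                      # zero or one element: already sorted
    pivot = a[0]                      # Divide: choose a pivot and partition
    left   = [x for x in a[1:] if x <= pivot]
    middle = [pivot]
    right  = [x for x in a[1:] if x > pivot]
    # Conquer: sort the two subarrays recursively.
    # No Combine step beyond concatenation -- the pieces are already in place.
    return quick_sort(left) + middle + quick_sort(right)

print(quick_sort([65, 70, 75, 80, 85, 60, 55, 50, 45]))
# [45, 50, 55, 60, 65, 70, 75, 80, 85]
```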
Algorithm QuickSort (p, q)
// Sorts the elements a[p], ..., a[q], which reside in the global array a[1:n],
// into ascending order; a[n+1] is considered to be defined and >= all elements in a[1:n].
{
  if (p < q) then  // if there is more than one element
  {
    // divide a[p:q] into two subproblems
    j := Partition(a, p, q+1);
    // j is the position of the partitioning element
    QuickSort(p, j-1);
    QuickSort(j+1, q);
    // there is no need for combining solutions
  }
}

Algorithm Partition (a, m, p)
// Within a[m], a[m+1], ..., a[p-1] the elements are rearranged so that if
// initially v = a[m], then after completion a[q] = v for some q between m
// and p-1, a[k] <= v for m <= k < q, and a[k] >= v for q < k < p.
// q is returned; a[p] acts as a sentinel with a[p] >= v.
{
  v := a[m]; i := m; j := p;
  repeat
  {
    repeat
      i := i + 1;
    until (a[i] >= v);
    repeat
      j := j - 1;
    until (a[j] <= v);
    if (i < j) then Interchange(a, i, j);
  } until (i >= j);
  a[m] := a[j]; a[j] := v; return j;
}
Algorithm Interchange (a, i, j)
// Exchange a[i] with a[j].
{
  p := a[i];
  a[i] := a[j];
  a[j] := p;
}
Example: simulation of Partition on the array a[1:9] = (65, 70, 75, 80, 85, 60, 55, 50, 45), with a[10] = +∞ as sentinel and pivot v = a[1] = 65:

(1)  (2)  (3)  (4)  (5)  (6)  (7)  (8)  (9)  (10)
65   70   75   80   85   60   55   50   45   +∞
65   45   75   80   85   60   55   50   70   +∞
65   45   50   80   85   60   55   75   70   +∞
65   45   50   55   85   60   80   75   70   +∞
65   45   50   55   60   85   80   75   70   +∞
60   45   50   55   65   85   80   75   70   +∞

Each row shows the array after one Interchange; the last row shows the array after the pivot 65 is placed in its final position p = 5, with all smaller elements to its left and all larger elements to its right.
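The trace above can be reproduced with a Python version of the two-pointer partition (a sketch using 0-based indices, so the pivot's final index is 4, i.e. position 5 in the 1-based table):

```python
import math

def partition(a, m, p):
    """Rearrange a[m:p] around pivot v = a[m]; a[p] is a sentinel >= v.
    Returns the final index of the pivot."""
    v, i, j = a[m], m, p
    while True:
        i += 1
        while a[i] < v:            # scan right for an element >= v
            i += 1
        j -= 1
        while a[j] > v:            # scan left for an element <= v
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]    # Interchange(a, i, j)
    a[m], a[j] = a[j], v           # put the pivot into its final place
    return j

a = [65, 70, 75, 80, 85, 60, 55, 50, 45, math.inf]   # last slot: +inf sentinel
q = partition(a, 0, 9)
print(a[:9])   # [60, 45, 50, 55, 65, 85, 80, 75, 70] -- last row of the trace
print(q)       # 4 (position 5 in 1-based numbering)
```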
Matrix multiplication: if A and B are n×n matrices (n a power of 2), each can be partitioned into four n/2 × n/2 sub-matrices, and likewise the product C = A·B:

A = | A11  A12 |    B = | B11  B12 |    C = | C11  C12 |
    | A21  A22 |        | B21  B22 |        | C21  C22 |

then
C11 = A11·B11 + A12·B21
C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21
C22 = A21·B12 + A22·B22
Strassen showed that the 2×2 block multiplication can be accomplished with 7 multiplications and 18 additions or subtractions. This reduction is exploited by a Divide and Conquer approach: divide the input into disjoint subproblems, solve the subproblems recursively, and combine their solutions into a solution for the whole; the base case of the recursion is subproblems of constant size, and the analysis is done using recurrence equations. Here the matrices are divided into sub-matrices and the sub-matrices are multiplied recursively.
This method first computes the seven n/2 × n/2 matrices:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)·B11
R = A11·(B12 − B22)
S = A22·(B21 − B11)
T = (A11 + A12)·B22
U = (A21 − A11)(B11 + B12)
V = (A12 − A22)(B21 + B22)
Then the Cij are computed as
C11 = P + S − T + V
C12 = R + T
C21 = Q + S
C22 = P + R − Q + U
This gives the recurrence T(n) = 7T(n/2) + cn² for some constant c, which solves to T(n) = O(n^log₂7) ≈ O(n^2.81), an improvement over the O(n³) of the direct method.
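The seven products and four combinations can be checked on scalar 2×2 matrices (a sketch; in full Strassen the entries would themselves be n/2 × n/2 blocks):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices using Strassen's 7 multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    return [[P + S - T + V, R + T],
            [Q + S,         P + R - Q + U]]

def direct_2x2(A, B):
    """Ordinary multiplication with 8 multiplications, for comparison."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))              # [[19, 22], [43, 50]]
assert strassen_2x2(A, B) == direct_2x2(A, B)
```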