
Kinds of Analysis of Algorithms

A priori analysis is aimed at analyzing the algorithm before it is implemented on any computer, based only on the algorithm itself. It gives the approximate amount of resources required to solve the problem before execution. In a priori analysis we ignore machine- and platform-dependent factors. It is always better to analyze the algorithm at an early stage of the software life cycle.
A priori analysis requires knowledge of:
Mathematical equations
Determination of the problem size
Order of magnitude of any algorithm
A posteriori analysis is aimed at determining the actual statistics about an algorithm's consumption of time and space (primary memory) when it is executed as a program on a machine.
Limitations of posteriori analysis are:
External factors influencing the execution of the algorithm
Network delay
Hardware failure etc.
The information on the target machine is not known during the design phase
The same algorithm might behave differently on different systems
Hence we cannot come to definite conclusions

Asymptotic notations (O, Ω, Θ)
Step count is used to compare the time complexity of two programs that compute the same function, and also to predict the growth in run time as the instance characteristics change. Determining the exact step count is difficult, and it is not necessary either. Since the values are not exact quantities, we need only comparative statements like c1·n^2 ≤ tp(n) ≤ c2·n^2.
For example, consider two programs with complexities c1·n^2 + c2·n and c3·n respectively. For small values of n, the comparison depends on the values of c1, c2 and c3. But there will also be an n beyond which the complexity c3·n is better than that of c1·n^2 + c2·n. This value of n is called the break-even point. If this point is zero, c3·n is always faster (or at least as fast).
Let c1 = 1, c2 = 2 and c3 = 100.
Then c1·n^2 + c2·n ≤ c3·n for n ≤ 98, and
c1·n^2 + c2·n > c3·n for n > 98.
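The break-even point can also be located numerically. Below is a small Python sketch (the function name and the search limit are illustrative, not from the text) that scans for the last n at which the quadratic cost is still no worse than the linear one:

def break_even(c1, c2, c3, limit=10000):
    # Return the largest n (up to limit) for which c1*n^2 + c2*n <= c3*n.
    last = None
    for n in range(1, limit):
        if c1 * n * n + c2 * n <= c3 * n:
            last = n
    return last

print(break_even(1, 2, 100))  # prints 98, matching the example above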
The common asymptotic functions are given below.

Function     Name
1            Constant
log n        Logarithmic
n            Linear
n log n      n log n
n^2          Quadratic
n^3          Cubic
2^n          Exponential
n!           Factorial

The growth of the functions is as below:

1 < log n < n < n log n < n^2 < n^3 < 2^n < n!
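To see this ordering concretely, the following Python sketch tabulates each of the functions above at a few values of n:

import math

print("n   1  log n  n   n log n  n^2   n^3   2^n    n!")
for n in (2, 4, 8, 16):
    print(n, 1, round(math.log2(n), 1), n, round(n * math.log2(n), 1),
          n ** 2, n ** 3, 2 ** n, math.factorial(n))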

Definition [Big oh] The function f(n) = O(g(n)) iff there exist positive constants c and n0 such that
f(n) ≤ c·g(n) for all n ≥ n0.
Ex1: f(n) = 2n + 8 and g(n) = n^2. Can we find a constant n0 so that 2n + 8 ≤ n^2 for all n ≥ n0? The number 4 works here, giving us 16 ≤ 16 at n = 4.
For any n greater than 4, the inequality still holds. Since we are trying to generalize for large values of n, and small values (1, 2, 3) are not that important, we can say that f(n) grows more slowly than g(n); that is, f(n) is bounded above by g(n) from that point on.
Ex2: The function 3n + 2 = O(n) as 3n + 2 ≤ 4n for all n ≥ 2.
Pb1: 3n + 3 = O(_______) as 3n + 3 ≤ ______ for all ________.
Ex3: 10n^2 + 4n + 2 = O(n^2) as 10n^2 + 4n + 2 ≤ 11n^2 for all n ≥ 5.
Pb2: 1000n^2 + 100n - 6 = O(_______) as 1000n^2 + 100n - 6 ≤ ________ for all _______.
Ex4: 6·2^n + n^2 = O(2^n) as 6·2^n + n^2 ≤ 7·2^n for n ≥ 4.
Ex5: 3n + 3 = O(n^2) as 3n + 3 ≤ 3n^2 for n ≥ 2.
Ex6: 10n^2 + 4n + 2 = O(n^4) as 10n^2 + 4n + 2 ≤ 10n^4 for n ≥ 2.
Ex7: 3n + 2 ≠ O(1) as 3n + 2 is not less than or equal to c for any constant c and all n ≥ n0.
Ex8: 10n^2 + 4n + 2 ≠ O(n).
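A claimed pair of witnesses (c, n0) for a Big-oh statement can be spot-checked over a finite range, as in this Python sketch (the helper name "holds" is illustrative; a finite check cannot prove a bound for all n, but it quickly exposes wrong constants):

def holds(f, g, c, n0, upto=1000):
    # Check f(n) <= c * g(n) for all n in [n0, upto).
    return all(f(n) <= c * g(n) for n in range(n0, upto))

# Ex3 above: 10n^2 + 4n + 2 <= 11n^2 for all n >= 5
print(holds(lambda n: 10*n*n + 4*n + 2, lambda n: n*n, c=11, n0=5))  # True
# Ex8 above: 10n^2 + 4n + 2 is not O(n); any fixed c eventually fails
print(holds(lambda n: 10*n*n + 4*n + 2, lambda n: n, c=100, n0=1))   # False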
Definition [Omega] The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.

Ex1: The function 3n + 2 = Ω(n) as 3n + 2 ≥ 3n for n ≥ 1 (the inequality holds for n ≥ 0, but the definition of Ω requires an n0 > 0).
Ex2: 100n + 6 = Ω(n) as 100n + 6 ≥ 100n for n ≥ 1.
Pb1: 6n + 4 = Ω(_____) as 6n + 4 ≥ _______ for all n ≥ _____.
Ex3: 10n^2 + 4n + 2 = Ω(n^2) as 10n^2 + 4n + 2 ≥ n^2 for n ≥ 1.
Pb2: 2n^2 + 3n + 1 = Ω(_____) as 2n^2 + 3n + 1 ≥ _______ for all n ≥ _________.
Ex4: 6·2^n + n^2 = Ω(2^n) as 6·2^n + n^2 ≥ 2^n for n ≥ 1.
Pb3: 4n^2 - 64n + 288 = Ω(____) as 4n^2 - 64n + 288 ≥ _______ for all n ≥ _________.
Pb4: n^3 log n = Ω(____) as n^3 log n ≥ __________ for all n ≥ ___.

Definition [Theta] The function f(n) = Θ(g(n)) (read as "f of n is theta of g of n") iff there exist positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
Ex1: The function 3n + 2 = Θ(n) as 3n + 2 ≥ 3n for all n ≥ 2 and 3n + 2 ≤ 4n for all n ≥ 2, so c1 = 3, c2 = 4 and n0 = 2.
Pb1: 6n + 4 = Θ(___) as 6n + 4 ≥ ________ for all n ≥ _______ and 6n + 4 ≤ _______ for all n ≥ _______.
Ex2: 3n + 3 = Θ(n).
Ex3: 10n^2 + 4n + 2 = Θ(n^2).
Ex4: 6·2^n + n^2 = Θ(2^n).
Ex5: 10·log n + 4 = Θ(log n).
Ex6: 3n + 2 ≠ Θ(1).
Ex7: 3n + 3 ≠ Θ(n^2).
Ex8: 10n^2 + 4n + 2 ≠ Θ(n).
Theorem: If f(n) = am·n^m + ... + a3·n^3 + a2·n^2 + a1·n + a0, then f(n) = O(n^m).
Proof:

f(n) ≤ Σ(i = 0 to m) |ai|·n^i
     ≤ n^m · Σ(i = 0 to m) |ai|·n^(i-m)
     ≤ n^m · Σ(i = 0 to m) |ai|    for n ≥ 1

So f(n) = O(n^m) (assuming that m is fixed).
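The proof's final bound, f(n) ≤ n^m · Σ|ai| for n ≥ 1, can be spot-checked in Python for a sample polynomial (the coefficients below are chosen only for illustration):

# f(n) = 3n^3 + 2n^2 + n + 5, so m = 3 and sum(|ai|) = 11
coeffs = [5, 1, 2, 3]                       # a0, a1, a2, a3
m = len(coeffs) - 1
c = sum(abs(a) for a in coeffs)
for n in range(1, 50):
    f = sum(a * n ** i for i, a in enumerate(coeffs))
    assert f <= c * n ** m
print("f(n) <= %d * n^%d holds on the sample range" % (c, m))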

Theorem: If f(n) = am·n^m + ... + a3·n^3 + a2·n^2 + a1·n + a0 and am > 0, then f(n) = Ω(n^m).

Proof: Left as an exercise.

Definition [Little oh] The function f(n) = o(g(n)) (read as "f of n is little oh of g of n") iff
lim(n→∞) f(n)/g(n) = 0.

Example:
The function 3n + 2 = o(n^2) since lim(n→∞) (3n + 2)/n^2 = 0.
Ex1: 3n + 2 = o(n log n).
Ex2: 3n + 2 = o(n log log n).
Ex3: 6·2^n + n^2 = o(3^n).
Ex4: 6·2^n + n^2 = o(2^n log n).
Ex5: 6·2^n + n^2 ≠ o(2^n).
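These limits can also be checked symbolically, assuming the SymPy library is available:

import sympy as sp

n = sp.symbols('n', positive=True)
print(sp.limit((3 * n + 2) / n ** 2, n, sp.oo))             # 0, so 3n+2 = o(n^2)
print(sp.limit((6 * 2 ** n + n ** 2) / 2 ** n, n, sp.oo))   # 6, nonzero, so not o(2^n)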

Analogous to the o notation, the ω notation is defined as follows.

Definition [Little omega] The function f(n) = ω(g(n)) (read as "f of n is little omega of g of n") iff
lim(n→∞) g(n)/f(n) = 0.

Divide and Conquer


Divide and Conquer for finding a counterfeit coin (exactly one coin, lighter than the rest):
Algorithm CounterfeitCoin(Bunch, numberofcoins)
{
if (numberofcoins = 2) {
weigh the two coins;
if one of them weighs less, that one is the counterfeit coin;
else, there is no counterfeit coin in the Bunch;
}
else {
Divide Bunch into two halves Bunch1 and Bunch2;
if Bunch1 weighs less than Bunch2, call CounterfeitCoin(Bunch1, numberofcoins / 2);
else, call CounterfeitCoin(Bunch2, numberofcoins / 2);
}
}
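A runnable Python sketch of the same idea, assuming exactly one counterfeit (lighter) coin and a power-of-two number of coins; a weighing is simulated by summing the weights in each half:

def counterfeit_coin(coins, lo, hi):
    # Return the index of the lighter coin within coins[lo:hi].
    if hi - lo == 1:
        return lo
    mid = (lo + hi) // 2
    if sum(coins[lo:mid]) < sum(coins[mid:hi]):
        return counterfeit_coin(coins, lo, mid)
    return counterfeit_coin(coins, mid, hi)

coins = [10] * 8
coins[5] = 9                                   # plant the lighter coin
print(counterfeit_coin(coins, 0, len(coins)))  # 5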
Finding the Largest with Divide and Conquer:
Algorithm DandCLargest(a, k, j)
{
if (k = j) Return a[k];
else
{
mid := (k + j) / 2;
x1 := DandCLargest(a, k, mid);
x2 := DandCLargest(a, mid + 1, j);
if (x1 > x2) Return x1;
else Return x2;
}
}
First call for the recursion:
DandCLargest(a, 1, n);
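The same recursion transcribes directly into Python (zero-based indices, so the first call uses 0 and len(a) - 1):

def dandc_largest(a, k, j):
    # Largest element of a[k..j] by divide and conquer.
    if k == j:
        return a[k]
    mid = (k + j) // 2
    x1 = dandc_largest(a, k, mid)
    x2 = dandc_largest(a, mid + 1, j)
    return x1 if x1 > x2 else x2

a = [7, 3, 9, 1, 5]
print(dandc_largest(a, 0, len(a) - 1))  # 9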

Divide and Conquer algorithm design works on the principle of dividing the given problem into smaller subproblems which are similar to the original problem and, ideally, of the same size. The strategy can be viewed as one which has three steps. The first step is called Divide: the given problem is divided into smaller subproblems which are identical in nature to the original problem and of roughly the same size. The second step is called Conquer, wherein we solve these subproblems recursively. The third step is called Combine, wherein we combine the solutions of the subproblems to get the solution for the original problem.

Control Abstraction for Divide and Conquer Algorithms (General Method)


Algorithm DandC(P) {
if Small(P) then return S(P);
else {
Divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
Apply DandC to each of these sub-problems;
return Combine(DandC(P1), DandC(P2), ..., DandC(Pk));
}
}
Binary Search
Binary Search takes a sorted array as input.
It works by comparing the target (search key) with the middle element of the array and terminates if they are equal; otherwise it divides the array into two sub arrays and continues the search in the left (right) sub array if the target is less (greater) than the middle element of the array.
Reducing the problem size by half is part of the Divide step in Binary Search and searching in this
reduced problem space is part of the Conquer step in Binary Search. There is no Combine step in
Binary Search.
With Divide and Conquer (recursion)
Algorithm BinSrch(a, i, l, x) {
// Given an array a[i:l] of elements in non-decreasing order, 1 ≤ i ≤ l,
// determine whether x is present and, if so, return j such that x = a[j]; else return 0.
if (l = i) then { // If Small(P)
if (x = a[i]) then return i; else return 0;
}
else {
// Reduce P into a smaller subproblem.
mid := ⌊(i + l)/2⌋;
if (x = a[mid]) then return mid;
else if (x < a[mid]) then
return BinSrch(a, i, mid - 1, x);
else return BinSrch(a, mid + 1, l, x);
}
}
Iterative Binary Search (non-recursive)
Algorithm BinSearch(a, n, x) {
low := 1; high := n;
while (low ≤ high) do {
mid := ⌊(low + high)/2⌋;
if (x < a[mid]) then high := mid - 1;
else if (x > a[mid]) then low := mid + 1;
else return mid;
}
return 0;
}
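A Python sketch of the iterative version on a zero-based array; it returns -1 rather than 0 for "not found", since 0 is a valid index here:

def bin_search(a, x):
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1
        elif x > a[mid]:
            low = mid + 1
        else:
            return mid
    return -1

a = [10, 20, 30, 40, 50, 60, 70]
print(bin_search(a, 40))  # 3
print(bin_search(a, 45))  # -1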

Theorem: Algorithm BinSearch(a, n, x) works correctly.

Proof:
We assume that all statements work as expected and that comparisons such as x > a[mid] are appropriately carried out.
Initially low = 1, high = n, n ≥ 0, and a[1] ≤ a[2] ≤ ... ≤ a[n].
If n = 0, the while loop is not entered and 0 is returned.
Otherwise we observe that each time through the loop the possible elements to be checked for equality with x are a[low], a[low + 1], ..., a[mid], ..., a[high].
If x = a[mid], then the algorithm terminates successfully.
Otherwise the range is narrowed by either increasing low to mid + 1 or decreasing high to mid - 1.
Clearly this narrowing of the range does not affect the outcome of the search.
If low becomes greater than high, then x is not present and hence the loop is exited.

Performance of Binary Search

Theorem: If n is in the range [2^(k-1), 2^k), then BinSearch makes at most k element comparisons for a successful search and either k - 1 or k comparisons for an unsuccessful search. (In other words, the time for a successful search is O(log n) and for an unsuccessful search is Θ(log n).)
Proof: Consider the binary decision tree describing the action of BinSearch on n elements. All successful searches end at a circular node whereas all unsuccessful searches end at a square node.
If 2^(k-1) ≤ n < 2^k, then all circular nodes are at levels 1, 2, ..., k whereas all square nodes are at levels k and k + 1 (note that the root is at level 1). The number of element comparisons needed to terminate at a circular node on level i is i, whereas the number of element comparisons needed to terminate at a square node at level i is only i - 1. The theorem follows.
Merge Sort
Merge sort is yet another sorting algorithm which works on the Divide and Conquer design principle.
Merge sort works by dividing the given array into two sub arrays of equal size
The sub arrays are sorted independently using recursion
The sorted sub arrays are then merged to get the solution for the original array.
The breaking of the given input array into two sub arrays of equal size is part of the Divide step. The
recursive calls to sort the sub arrays are part of the Conquer step. The merging of the sub arrays to get
the solution for the original array is part of the Combine step.
The basic operations in Merge sort are comparison and copying of elements. The Merge Sort algorithm calls itself recursively. Merge Sort divides the array into sub arrays based on the position of the elements, whereas Quick Sort divides the array into sub arrays based on the value of the elements. Merge Sort requires an auxiliary array to do the merging (Combine step). The merging of two sub arrays, which are already sorted, into an auxiliary array can be done in O(n), where n is the total number of elements in both the sub arrays. This is possible because both the sub arrays are sorted.

Algorithm MergeSort(low, high) {


// Small(P) is true if there is only one element to sort .
// In this case the list is already sorted.
if (low < high) then {
// If there is more than one element:
// Divide P into subproblems.
mid := ⌊(low + high)/2⌋;
// Solve the subproblems.
MergeSort(low, mid);
MergeSort (mid + 1, high);
// Combine the solutions.
Merge(low, mid, high);
}
}

Merging in Merge Sort


Algorithm Merge(low, mid, high) {
// a[low:high] is a global array containing two sorted subsets in
// a[low:mid] and in a[mid+1:high]. The goal is to merge these
// two sets into a single set residing in a[low:high]. b[] is an
// auxiliary global array.
h := low; i := low; j := mid + 1;
while ((h ≤ mid) and (j ≤ high)) do {
if (a[h] ≤ a[j]) then {
b[i] := a[h]; h := h + 1;
}
else {
b[i] := a[j]; j := j + 1;
}
i := i + 1;
}
if (h > mid) then
for k := j to high do
{ b[i] := a[k]; i := i + 1; }
else for k := h to mid do
{ b[i] := a[k]; i := i + 1; }
for k := low to high do a[k] := b[k];
}
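A compact Python rendering of MergeSort and Merge; the auxiliary array b is built locally here rather than kept global, which keeps the sketch self-contained:

def merge(a, low, mid, high):
    # Merge the sorted runs a[low..mid] and a[mid+1..high].
    b, h, j = [], low, mid + 1
    while h <= mid and j <= high:
        if a[h] <= a[j]:
            b.append(a[h]); h += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[h:mid + 1])
    b.extend(a[j:high + 1])
    a[low:high + 1] = b

def merge_sort(a, low, high):
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid)
        merge_sort(a, mid + 1, high)
        merge(a, low, mid, high)

data = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
merge_sort(data, 0, len(data) - 1)
print(data)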

The complexity of Merge Sort is O(n log n) and that of Binary Search is O(log n).
This can be proved by repeated substitution in the recurrence relations.
Suppose (for simplicity) that n = 2^k for some integer k; then k = log2 n.
Merge Sort:
Let T(n) be the time used to sort n elements. As we can perform separation and merging in linear time, it takes cn time to perform these two steps, for some constant c. So the recurrence relation is: T(n) = 2T(n/2) + cn.
In the same way: T(n/2) = 2T(n/4) + cn/2, so
T(n) = 4T(n/4) + 2cn.
Going on in this way ...
T(n) = 2^m T(n/2^m) + mcn, and
T(n) = 2^k T(n/2^k) + kcn = nT(1) + cn log2 n = O(n log n).
Binary Search:
Let T(n) be the time used to search n elements. As we need to search only one of the halves, the recurrence relation is: T(n) = T(n/2) + c.
In the same way: T(n/2) = T(n/4) + c, so
T(n) = T(n/4) + 2c.
Going on in this way ...
T(n) = T(n/2^m) + mc, and
T(n) = T(n/2^k) + kc = T(1) + kc = kc + 1 = O(log n).
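Both substitutions can be sanity-checked by evaluating the recurrences directly in Python (taking T(1) = 1 and c = 1 for concreteness):

import math

def t_merge(n):    # T(n) = 2T(n/2) + n
    return 1 if n == 1 else 2 * t_merge(n // 2) + n

def t_bin(n):      # T(n) = T(n/2) + 1
    return 1 if n == 1 else t_bin(n // 2) + 1

for n in (8, 64, 1024):
    k = int(math.log2(n))
    # closed forms derived above: n*T(1) + n*k  and  k + T(1)
    print(n, t_merge(n) == n + n * k, t_bin(n) == k + 1)  # True True each time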

QuickSort:
Quick sort is one of the most powerful sorting algorithms. It works on the Divide and Conquer design principle. Quick sort works by choosing an element, called the pivot, in the given input array and partitioning the array into three parts such that the left sub array contains all elements which are less than or equal to the pivot, the middle part contains the pivot, and the right sub array contains all elements which are greater than the pivot. The two sub arrays, namely the left sub array and the right sub array, are then sorted recursively.
The partitioning of the given input array is part of the Divide step. The recursive calls to sort the sub
arrays are part of the Conquer step. Since the sorted sub arrays are already in the right place there is no
Combine step for the Quick sort.
Algorithm QuickSort(p, q)
// Sorts the elements a[p], ..., a[q] which reside in the global
// array a[1:n] into ascending order; a[n+1] is considered to
// be defined and must be ≥ all the elements in a[1:n].
{
if (p < q) then // if there is more than one element
{
// Divide P into two subproblems.
j := Partition(a, p, q + 1);
// j is the position of the partitioning element.
// Solve the subproblems.
QuickSort(p, j - 1);
QuickSort(j + 1, q);
// There is no need for combining solutions.
}
}
Algorithm for sorting by partitioning:

Algorithm Partition(a, m, p)
// Within a[m], a[m+1], ..., a[p-1] the elements are
// rearranged in such a manner that if initially t = a[m],
// then after completion a[q] = t for some q between m
// and p-1, a[k] ≤ t for m ≤ k < q, and a[k] ≥ t
// for q < k < p. q is returned. Set a[p] = ∞.
{
v := a[m]; i := m; j := p;
repeat
{
repeat
i := i + 1;
until (a[i] ≥ v);
repeat
j := j - 1;
until (a[j] ≤ v);
if (i < j) then Interchange(a, i, j);
} until (i ≥ j);
a[m] := a[j]; a[j] := v; return j;
}

Algorithm Interchange(a, i, j)
// Exchange a[i] with a[j].
{
p := a[i];
a[i] := a[j]; a[j] := p;
}
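The three routines combine into the following Python sketch (zero-based indices). Instead of the +∞ sentinel a[p], the inner scan bounds-checks i explicitly; otherwise the partitioning follows the pseudocode step for step:

def partition(a, m, p):
    # Partition a[m..p-1] about the pivot v = a[m]; return the pivot's
    # final position.
    v, i, j = a[m], m, p
    while True:
        i += 1
        while i < p and a[i] < v:
            i += 1
        j -= 1
        while a[j] > v:
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]   # Interchange
        else:
            break
    a[m], a[j] = a[j], v
    return j

def quick_sort(a, p, q):
    if p < q:
        j = partition(a, p, q + 1)
        quick_sort(a, p, j - 1)
        quick_sort(a, j + 1, q)

data = [65, 70, 75, 80, 85, 60, 55, 50, 45]
quick_sort(data, 0, len(data) - 1)
print(data)  # [45, 50, 55, 60, 65, 70, 75, 80, 85]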

Example: partitioning the array a[1:9] = (65, 70, 75, 80, 85, 60, 55, 50, 45) about the pivot a[m] = 65, with m = 1, p = 10 and the sentinel a[10] = +∞:

(1)  (2)  (3)  (4)  (5)  (6)  (7)  (8)  (9)  (10)
65   70   75   80   85   60   55   50   45   +∞
65   45   75   80   85   60   55   50   70   +∞
65   45   50   80   85   60   55   75   70   +∞
65   45   50   55   85   60   80   75   70   +∞
65   45   50   55   60   85   80   75   70   +∞
60   45   50   55   65   85   80   75   70   +∞

Each row shows the array after one Interchange; the last row shows the final exchange that places the pivot 65 in its correct position, j = 5, which is the value returned by Partition.

STRASSEN'S MATRIX MULTIPLICATION


Let A and B be two n × n matrices. The product matrix C = AB is also an n × n matrix whose (i, j)th element is formed by taking the elements in the ith row of A and the jth column of B, multiplying them pairwise and summing, to get
C(i, j) = Σ(1 ≤ k ≤ n) A(i, k)·B(k, j)
for all i and j between 1 and n. To compute C(i, j) using this formula, we need n multiplications. As the matrix C has n^2 elements, the time for the resulting matrix multiplication algorithm, which we refer to as the conventional method, is Θ(n^3).
If A, B and C are each partitioned into four sub-matrices

A = | A11 A12 |    B = | B11 B12 |    C = | C11 C12 |
    | A21 A22 |        | B21 B22 |        | C21 C22 |

then

C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22

Strassen showed that this 2 × 2 block matrix multiplication can be accomplished in 7 multiplications and 18 additions or subtractions. This reduction is obtained by the Divide and Conquer approach: divide the input data S into two or more disjoint subsets S1, S2, ...; solve the subproblems recursively; and combine the solutions for S1, S2, ... into a solution for S. The base cases for the recursion are subproblems of constant size. The analysis can be done using recurrence equations. Here we divide the matrices into sub-matrices and recursively multiply the sub-matrices.

This method involves first computing the seven n/2 × n/2 matrices
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)
Then the Cij are computed as
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

The resulting recurrence relation for T(n) is

T(n) = b                  for n ≤ 2
T(n) = 7T(n/2) + an^2     for n > 2

where a and b are constants. Working with this formula (with n = 2^k), we get
T(n) = an^2 [1 + 7/4 + (7/4)^2 + ... + (7/4)^(k-1)] + 7^k T(1)
     ≤ cn^2 (7/4)^(log2 n) + 7^(log2 n), c a constant
     = cn^(log2 4 + log2 7 - log2 4) + n^(log2 7)
     = O(n^(log2 7)) ≈ O(n^2.81).
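The recursion can be sketched in Python, assuming NumPy is available and n is a power of two; below a small cutoff it falls back to the conventional product:

import numpy as np

def strassen(A, B, cutoff=2):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                      # conventional multiplication
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    P = strassen(A11 + A22, B11 + B22)    # the seven half-size products
    Q = strassen(A21 + A22, B11)
    R = strassen(A11, B12 - B22)
    S = strassen(A22, B21 - B11)
    T = strassen(A11 + A12, B22)
    U = strassen(A21 - A11, B11 + B12)
    V = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = P + S - T + V
    C[:h, h:] = R + T
    C[h:, :h] = Q + S
    C[h:, h:] = P + R - Q + U
    return C

A = np.random.randint(0, 10, (8, 8))
B = np.random.randint(0, 10, (8, 8))
print(np.array_equal(strassen(A, B), A @ B))  # True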
