
Analysis of Algorithms

Day 2
Plan for Day-2

• Analyzing Algorithms
– Basic Mathematical principles
– Order of magnitude
– Introduction to Asymptotic notations
• Best case
• Worst case
• Average case

• Analysis of well known algorithms


– Algorithm design techniques (Brute force, Greedy, Divide & Conquer, Decrease &
Conquer)

ER/CORP/CRS/SE15/003
Copyright © 2004, Infosys Technologies Ltd
Version No: 2.0
Analysis of Algorithms
Unit 3 - Analyzing Algorithms
Analysis of Algorithms

• Refers to predicting the resources required by the algorithm, based on the size of the problem.

• The primary resources required are Time and Space.

• Analysis based on the time taken to execute the algorithm is called the Time complexity of the algorithm.

• Analysis based on the memory required to execute the algorithm is called the Space complexity of the algorithm.

Kinds of Analysis of Algorithms

• A priori analysis is aimed at analyzing the algorithm before it is implemented on any computer. It gives the approximate amount of resources required to solve the problem before execution.

• A posteriori analysis is aimed at determining actual statistics about the algorithm’s consumption of time and space (primary memory) when it is executed as a program on a computer.

Limitations of A posteriori Analysis

• External factors influence the execution of the algorithm
– Network delay
– Hardware failure, etc.

• Information about the target machine is not known during the design phase

• The same algorithm might behave differently on different systems
– Hence we cannot come to definite conclusions

Hence a priori analysis is emphasized more during the analysis of algorithms

A priori Analysis
A priori analysis requires knowledge of
– Mathematical equations
– Determination of the problem size
– Order of magnitude of the algorithm

Each of these is discussed in the forthcoming sections

Some Basic Mathematics
Arithmetic Progressions:

    Σ (i = 1 to n) i = 1 + 2 + 3 + … + (n − 1) + n = n(n + 1)/2

Geometric Progressions:

    Σ (i = 0 to n) x^i = (x^(n+1) − 1)/(x − 1),  if x ≠ 1

    Σ (i = 0 to ∞) x^i = 1/(1 − x),  if |x| < 1

Some Basic Mathematics (Contd…)
Logarithms:

    a = b^(log_b a)

    log_b a = (log_c a)(log_b c)

    log_b a = 1/(log_a b)

    log_b a = (log_c a)/(log_c b)
Some Basic Mathematics (Contd…)
• A few mathematical formulae:

• 1 + 2 + … + n = n(n + 1)/2

• 1² + 2² + … + n² = n(n + 1)(2n + 1)/6

• 1 + a + a² + … + aⁿ = (a^(n+1) − 1)/(a − 1)

• The floor of x, written floor(x) or ⌊x⌋, is the largest integer not greater than x

• Choice function:

    nCr = n! / (r!(n − r)!)

Some Basic Mathematics (Contd…)
• How many times must we divide the number of elements ‘n’ in half (discarding
remainders, if any) to reach 1 element?

Since n is divided by 2 repeatedly, we need to consider two cases.

Case 1: n is a power of 2:
For example, n = 8 must be halved 3 times to reach 1:
8 → 4 → 2 → 1. Similarly, 16 must be halved 4 times to reach 1:
16 → 8 → 4 → 2 → 1
Case 2: n is not a power of 2:
For example, n = 9 must be halved 3 times to reach 1:
9 → 4 → 2 → 1. Similarly, 15 must be halved 3 times to reach 1:
15 → 7 → 3 → 1. So if 2^m < n < 2^(m+1), then n must be halved m times to reach 1.
In general, n must be halved m times, where m is given by:

    m = floor(log₂ n)
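The halving rule above can be checked with a short sketch (the function name here is illustrative, not from the slides):

```python
import math

def halvings_to_one(n):
    """Count how many times n (an integer > 0) must be halved,
    discarding remainders, to reach 1."""
    count = 0
    while n > 1:
        n //= 2  # integer division discards the remainder
        count += 1
    return count

# matches m = floor(log2 n) for every n >= 1
for n in (8, 9, 15, 16, 100):
    assert halvings_to_one(n) == math.floor(math.log2(n))
```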

Growth of functions
• Algorithm complexity is represented in terms of mathematical functions,
e.g. x log x, x²

• Given the complexities log(x), x, x log(x), x², 2^x — which grows slowest?

[Graph: curves for 2^x, x², x log(x), x, and log(x), showing their relative growth rates]
Problem size
The problem size depends on the nature of the problem for which we are
developing the algorithm.

Examples:
• If we are searching for an element in an array having ‘n’ elements, the problem
size is ____
same as the size of the array ( = ‘n’)

• If we are merging 2 arrays of size ‘n’ and ‘m’, the problem size of the algorithm
is _____
the sum of the two array sizes ( = ‘n + m’)

• If we are computing the factorial of n, the problem size is _______

The problem size is ‘n’

Order of Magnitude of an algorithm
The ‘order of magnitude’ of an algorithm is the total number of times its
statements are executed.
Example 1
for( i = 0; i < n; i++ )
{
...
...
}
Assume there are ‘c’ statements inside the loop
Each statement takes 1 unit of time
Execution time for 1 iteration = c * 1 = c

Total execution time = n * c

Since ‘c’ is a constant, the order is ‘n’
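A minimal sketch of this counting argument (the function name is illustrative):

```python
def count_loop_ops(n, c):
    """Total unit-cost statements executed by a loop of n iterations,
    each iteration containing c statements."""
    ops = 0
    for _ in range(n):
        ops += c  # c unit-cost statements per iteration
    return ops

# total is n * c; since c is a constant, the order of magnitude is n
```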

Order of Magnitude of an algorithm (Cont…)
• Example 2
for( i = 0; i < n; i++ ) {
for( j = 0; j < m; j++ ) {
…. ….
}
}
Assume there are ‘c’ statements inside the inner loop
Following the same assumptions as the earlier example

Execution time for 1 iteration = c * 1

Execution time for the inner loop = m * c

Total execution time = n * (m * c)

Since c is a constant, the order is n * m

Asymptotic notations for determination of order of
magnitude of an algorithm
The limiting behavior of the complexity of a problem as the problem size
increases is called asymptotic complexity

The most common asymptotic notations are:

• ‘Big Oh’ (‘O’) notation:
It represents the upper bound of the resources required to solve a problem.

• ‘Omega’ (Ω) notation:
It represents the lower bound of the resources required to solve a problem.

Big ‘Oh’ Vs Omega notations
Case (i): A project manager requires a maximum of 100 software engineers to
finish the project on time.
Case (ii): The project manager can start the project with a minimum of 50 software
engineers, but cannot assure completion of the project on time.

Which case is preferred?


In the software industry there are always stringent time limits.
So Case (i) is preferred: finishing the project on time earns the goodwill of the client
and frees the software engineers for other projects.

Case (i) is similar to Big Oh notation, specifying the upper bound of resources
needed to do a task.
Case (ii) is similar to Omega notation, specifying the lower bound of resources
needed to do a task.

Analysis based on the nature of the problem
The analysis of the algorithm can be performed based on the nature of the
problem.

Thus we have:
• Worst case analysis
• Average case analysis
• Best case analysis

To perform the above analysis, we can use either Big Oh notation or Omega
notation.

But the most preferred notation is Big Oh notation.

In the subsequent sections, we use Big Oh notation to analyze
algorithms under the above-stated conditions.

Worst case analysis
T(n) = O(f(n)) if there are constants c and n0 such that T(n) ≤ c·f(n) when
n ≥ n0. In this Big-Oh notation for worst case analysis, c and n0 are positive
constants; n0 represents the threshold problem size.

[Graph: T(n) = O(f(n)) with f(n) = n², showing T(n) bounded within f(n)
(the upper bound of the algorithm) for problem sizes beyond the threshold
problem size n0]

Worst case analysis (contd.,)

[Graph: f(n) = n, T(n) = O(n), showing T(n) bounded within f(n), i.e. ‘n’,
for problem sizes beyond the threshold problem size n0]

Why Worst case analysis?

Worst case analysis is preferred due to the following reasons:

• It is better to bound one’s pessimism – the time of execution cannot exceed the
upper bound c·f(n). It is the maximum time the algorithm can take for any value
of the problem size.

• The worst case is much easier to compute than the best case and average case
of algorithms.

Whenever we refer to the complexity of an algorithm, we mean the
worst case complexity, expressed in terms of the Big-Oh notation.

Average case analysis

In average case analysis, the running time (execution time) of an algorithm is
arrived at by considering the average condition of the problem.

T(n) = Θ(f(n)) if there are positive constants c1, c2 and n0 such that

c2·f(n) ≤ T(n) ≤ c1·f(n), for all n ≥ n0.

Best case analysis

In best case analysis, the running time (execution time) of an algorithm is
arrived at by considering the most favorable condition of the problem.

T(n) = Ω(f(n)) if there are positive constants c and n0 such that

T(n) ≥ c·f(n) for all n ≥ n0.

Assumptions while analyzing algorithms using ‘Big
Oh’ notation
While finding the worst case complexities of algorithms using Big Oh notation, the
following assumptions are made.

Assumption I
The leading constants, the coefficient of the highest power of ‘n’, and all lower
powers of ‘n’ in f(n) are ignored.

Example:
T(n) = 100n³ + 29n² + 19n

Representing the same in Big Oh notation:

T(n) = O(n³)

Assumptions (contd.,)
Assumption II:
The time of execution of a ‘for loop’ is the running time of all statements
inside the ‘for loop’ multiplied by the number of iterations of the ‘for loop’.

Example:
for( i = 0 to n )
{
print(“hi”);
x ← x + 1;
y ← y + 1;
}

The for loop is executed n times.

So, the worst case running time of the algorithm is

T(n) = O(3 * n) = O(n)

Assumptions (contd.,)
Assumption III:
If an algorithm has a ‘nested for loop’, its analysis should start from the inner
loop and move outwards to the outer loop.
Example:
for( j = 0 to m ) {
for( i = 0 to n ) {
print(“hi”);
x ← x + 1;
y ← y + 1;
}
}
The worst case running time of the inner loop is O(3 * n)

The worst case running time of the outer loop is O(m * 3 * n)

The total running time = O(m * n)

Assumptions (contd.,)
Assumption IV:
The execution time of an ‘if-else statement’ in an algorithm comprises
• The execution time for testing the condition
• The maximum of the execution times of the ‘if’ and ‘else’ blocks (whichever is larger)
Example:
if( x > y ) {
print(“x is larger than y”);
print(“x is the value to be selected”);
z ← x;
x ← x + 1;
}
else print(“x is smaller than y”);
The execution time of the program is the execution time of testing (x > y) +
the execution time of the ‘if’ block, as the execution time of the ‘if’ block is
more than that of the ‘else’ block
Case study on analysis of algorithms

The following examples will help us to understand the concept of worst case and
average case complexities

Example – 1: Consider the following pseudocode.


To insert a given value, k at a particular index, l in an array, a[1…n]:
1. Begin
2. Copy a[l…n] to a[l+1…n+1]
3. Copy k to a[l]
4. End

Case Study (Contd…)

1. Begin
2. Copy a[l…n] to a[l+1…n+1]
3. Copy k to a[l]
4. End

• What is the average case complexity of the algorithm?

• The probability that step 2 performs 1 copy is 1/n
• The probability that step 2 performs 2 copies is 1/n, and so on
• So the average number of copies is (1 × 1/n) + (2 × 1/n) + … + ((n−1) × 1/n)
• = (1 + 2 + 3 + … + (n−1)) / n
• = (n(n − 1)/2) / n = (n − 1)/2
• So the average case complexity of the algorithm is O(n)
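The averaging above can be reproduced numerically. This sketch counts one copy per shifted element for each possible insert index l; the boundary convention differs slightly from the tally above, but the growth is linear in n either way:

```python
def average_copies(n):
    """Average number of element copies in step 2, over all n insert
    positions l (1-based), counting n - l + 1 copies for position l."""
    total = sum(n - l + 1 for l in range(1, n + 1))  # a[l..n] shifts right
    return total / n

# the average is (n + 1)/2, i.e. linear in n, so the average case is O(n)
```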

Case study (Contd…)

Example – 2: Consider the following pseudocode.

To delete the value, k, at a given index, l, in an array a[1…n]:
1. Begin
2. Copy a[l+1…n] to a[l…n−1]
3. Clear a[n]
4. End

Summary of Unit-3

• Basic Mathematics
• Analysis of Algorithms
– Problem size
– Order of magnitude of algorithm
– Best, average and worst case problem conditions
• Asymptotic notation
• Calculating the running time

Analysis of Algorithms
Unit 4 - Analysis of well known Algorithms
Analysis of well known Algorithms

• Brute Force Algorithms

• Greedy Algorithms

• Divide and Conquer Algorithms

• Decrease and Conquer Algorithms

• Dynamic Programming

Brute Force Algorithms
• The brute force approach is a straightforward approach to solving a
problem. It is based directly on the problem
statement and the underlying concepts

• It is one of the simplest algorithm design techniques
to implement and covers a wide range of
problems under its gamut

• Brute force is a simple but often very costly
technique.

Brute Force Algorithms – Selection Sort

• Sorting is the technique of rearranging a given array of data into a specific
order. When the array contains numeric data, the sorting order is either
ascending or descending. When the array contains non-numeric data,
the sorting order is typically lexicographic

• Simple sorting algorithms start with the given unsorted input array and, after
each iteration, extend the sorted part of the array by one cell. The sorting
algorithm terminates when the sorted part of the array equals the size of
the array

• In selection sort, the basic idea is to find the smallest number in the unsorted
part of the array and swap it with the leftmost cell of the unsorted part, thereby
growing the sorted part of the array by one more cell

Brute Force Algorithms – Selection Sort (Contd…)
Selection Sort:

To sort the given array a[1…n] in ascending order:

1. Begin

2. For i = 1 to n-1 do
2.1 set min = i
2.2 For j = i+1 to n do
2.2.1 If (a[j] < a[min]) then set min = j
2.3 If (i ≠ min) then swap a[i] and a[min]

3. End
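The pseudocode above can be rendered as a short Python sketch (0-based indices rather than the slides’ 1-based ones):

```python
def selection_sort(a):
    """Sort list a in ascending order, in place, using selection sort."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        # find the smallest element in the unsorted part a[i..n-1]
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        if min_idx != i:
            a[i], a[min_idx] = a[min_idx], a[i]  # swap into the sorted part
    return a
```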

Analysis of Selection Sort
• Number of times the inner ‘for loop’ gets executed = n − (i+1) + 1 = n − i
• Number of basic operations done inside the outer ‘for loop’ = number of basic
operations done inside the inner ‘for loop’ + the ‘if’ statement
• In the worst case, the ‘if’ statement does 1 swap operation every time
• → Number of basic operations per iteration of the outer loop = (n − i) + 1
= n − i + 1
• The outer loop is executed for i = 1 to n−1
• → Total number of operations done by the outer for loop is
• (n − 1 + 1) + (n − 2 + 1) + (n − 3 + 1) + … + (n − (n−1) + 1)
• = n + (n − 1) + (n − 2) + … + 2
• = n(n + 1)/2 − 1 = (n² + n − 2)/2
• = O(n²)

Brute Force Algorithms – Bubble Sort

• Bubble sort is another sorting technique based on the brute force design

• Bubble sort works by comparing adjacent elements of the array and exchanging
them if they are out of order

• After each iteration (pass), the largest element bubbles up to the last position of
the array. In the next iteration the second largest element bubbles up to the
second-last position, and so on

• Bubble sort is one of the simplest sorting techniques to implement

Brute Force Algorithms – Bubble Sort (Contd…)
Bubble Sort:

To sort the given array a[1…n] in ascending order:

1. Begin
2. For i= n-1 down to 1
2.1 For j = 1 up to i
2.2.1 If (a[ j ] > a[ j+1]) then swap a[ j ] and a[ j+1 ]
3. End
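The same passes can be sketched in Python (0-based indices):

```python
def bubble_sort(a):
    """Sort list a in ascending order, in place, using bubble sort."""
    n = len(a)
    for i in range(n - 1, 0, -1):   # i = n-1 down to 1
        for j in range(i):          # compare adjacent pairs a[j], a[j+1]
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # out of order: swap
    return a
```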

Analysis of Bubble sort
• The basic operations done inside the inner ‘For’ loop are one comparison and
one swap.
• Number of times the inner ‘For’ loop body gets executed = (i − 1) + 1 = i
• The outer loop is executed for i = n−1 down to 1
• → Total number of operations done by the outer for loop is
• (n − 1) + (n − 2) + (n − 3) + … + 2 + 1
• = (n − 1)n / 2
• = (n² − n) / 2
• = O(n²)

Brute Force Algorithms – Linear Search

• Searching is a technique wherein we find whether a target element is part of a
given set of data

• There are different searching techniques, such as linear search, binary search, etc.

• Linear search works on the design principle of brute force, and it is one of the
simplest searching algorithms as well

• Linear search is also called sequential search, as it compares the successive
elements of the given set with the search key

Brute Force Algorithms – Linear Search (Contd…)
Linear Search:

To find which element (if any) of the given array a[1…n] equals the target
element:

1. Begin

2. For i = 1 to n do

2.1 If (target = a[i]) then End with output as i

3. End with output as none
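A direct Python rendering of the pseudocode (0-based indices, returning None when the target is absent):

```python
def linear_search(a, target):
    """Return the index of target in list a, or None if it is absent."""
    for i, value in enumerate(a):
        if value == target:   # step 2.1: found, end with output i
            return i
    return None               # step 3: end with output as none
```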

Analysis of Linear search

• Best case is O( 1 )

• Worst Case is O( n )

• Average Case is O( n )

Greedy Algorithms
• The greedy design technique is primarily used in optimization problems

• The greedy approach helps in constructing a solution to a problem through a
sequence of steps, where each step yields a partial solution. This
partial solution is extended progressively to get the complete solution

• The choice made at each step of a greedy approach must satisfy the following:

• It must be feasible

• It must be locally optimal

• It must be irrevocable

Greedy Algorithms – An Activity Selection Problem

• The activity selection problem is a slight variant of the problem of scheduling a
resource among several competing activities.

Suppose that we have a set S = {1, 2, …, n} of n events that wish to use an
auditorium which can be used by only one event at a time. Each event i has a
start time si and a finish time fi, where si ≤ fi. An event i, if selected, can be
executed anytime on or after si and must necessarily end before fi. Two events
i and j are said to be compatible if they do not overlap (meaning si ≥ fj or
sj ≥ fi). The activity selection problem is to select a maximum subset of
mutually compatible activities.

To solve this problem we use the greedy approach

Greedy Algorithms – An Activity Selection Problem
(Contd…)
An Activity Selection Problem:

To find the maximum subset of mutually compatible activities from a given
set of n activities. The given set of activities is represented as two arrays, s
and f, denoting the start times and finish times respectively, assumed to be
sorted in increasing order of finish time:

1. Begin

2. Set A = {1}, j = 1

3. For i = 2 to n do

3.1 If (si ≥ fj) then A = A ∪ {i}, j = i

4. End with output as A
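The greedy selection can be sketched in Python (0-based indices; as in the classic formulation, the activities are assumed already sorted by finish time):

```python
def select_activities(s, f):
    """Greedy activity selection. s[i], f[i] are the start/finish times of
    activity i, assumed sorted so that f[0] <= f[1] <= ... Returns the
    indices of a maximum set of mutually compatible activities."""
    selected = [0]            # always pick the first (earliest-finishing) activity
    j = 0                     # index of the last selected activity
    for i in range(1, len(s)):
        if s[i] >= f[j]:      # compatible with the last selected activity
            selected.append(i)
            j = i
    return selected

# activities (1,4), (3,5), (0,6), (5,7), (8,11), sorted by finish time
print(select_activities([1, 3, 0, 5, 8], [4, 5, 6, 7, 11]))  # [0, 3, 4]
```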

Divide and Conquer Algorithms
• Divide and Conquer algorithm design works on the principle of dividing the
given problem into smaller sub problems which are similar to the original
problem. The sub problems are ideally of the same size.

• These sub problems are solved independently

• Recursion will be a handy tool in solving such problems.

• The solutions for the sub problems are combined to get the solution for the
original problem

Divide and Conquer Algorithms – Quick Sort

• Quick sort is one of the most powerful sorting algorithms. It works on the Divide
and Conquer design principle.

• Quick sort works by finding an element, called the pivot, in the given input
array and partitioning the array into three sub-arrays such that

• The left sub-array contains all elements which are less than or equal to the pivot
• The middle sub-array contains the pivot
• The right sub-array contains all elements which are greater than or equal to the pivot

• The two sub-arrays, namely the left sub-array and the right sub-array, are then
sorted recursively

Divide and Conquer Algorithms – Quick Sort
(Contd…)
Quick Sort:

To sort the given array a[1…n] in ascending order:

1. Begin
2. Set left = 1, right = n
3. If (left < right) then
3.1 Partition a[left…right] such that a[left…p-1] are all less than a[p] and
a[p+1…right] are all greater than a[p]
3.2 Quick Sort a[left…p-1]
3.3 Quick Sort a[p+1…right]
4. End

Divide and Conquer Algorithms – Quick Sort
(Contd…)
Quick Sort – Partition Algorithm:

To partition the given array a[1…n] such that every element to the left of the
pivot is less than the pivot and every element to the right of the pivot is
greater than the pivot.

1. Begin
2. Set left = 1, right = n, pivot = a[left], p = left
3. For r = left+1 to right do
3.1 If a[r] < pivot then
3.1.1 a[p] = a[r], a[r] = a[p+1], a[p+1] = pivot
3.1.2 Increment p
4. End with output as p
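The partition and quicksort pseudocode above can be rendered together as a Python sketch (0-based indices; the triple assignment in step 3.1.1 rotates the pivot one cell to the right each time a smaller element is found):

```python
def partition(a, left, right):
    """Partition a[left..right] around pivot a[left]; return the pivot's
    final index p, with smaller elements to its left."""
    pivot = a[left]
    p = left
    for r in range(left + 1, right + 1):
        if a[r] < pivot:
            a[p] = a[r]        # move the smaller element into the pivot's slot
            a[r] = a[p + 1]    # shift the displaced element outward
            a[p + 1] = pivot   # pivot advances one position to the right
            p += 1
    return p

def quick_sort(a, left=0, right=None):
    """Sort a[left..right] in place with quicksort."""
    if right is None:
        right = len(a) - 1
    if left < right:
        p = partition(a, left, right)
        quick_sort(a, left, p - 1)   # sort the left sub-array
        quick_sort(a, p + 1, right)  # sort the right sub-array
    return a
```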

Analysis of Partition algorithm

• Inside the ‘For’ loop, there is one comparison statement.

• Number of times the comparison statement gets executed = n − 1

• → Worst case complexity is O(n)

• → Also, since the complexity is O(n), the number of times the comparison
statement gets executed = c·n, where c is a constant.

Divide and Conquer Algorithms – Analysis of
Quick Sort
• Quicksort is recursive.
• The best / worst case depends on the selection of the pivot.
• If the pivot selected happens to be the median of the elements, the best case
occurs.
• If the pivot is always the smallest element, the worst case occurs.

• The pivot selection and partitioning take c·n time.

• Running time = running time of the two recursive calls + time spent on partition.
• → T(n) = T(i) + T(n − i − 1) + c·n
• T(0) = T(1) = 1

Divide and Conquer Algorithms – Analysis of
Quick Sort
Best Case Analysis:
• If the pivot happens to be the median, the best case occurs.
• The partition algorithm splits the array into 2 equal sub-arrays.
• The complexity of the partition algorithm is O(n), so we take it as c·n,
where c is a constant.
• T(n) = 2T(n/2) + c·n
• Divide both sides by n:
• T(n)/n = T(n/2)/(n/2) + c
• T(n/2)/(n/2) = T(n/4)/(n/4) + c
• T(n/4)/(n/4) = T(n/8)/(n/8) + c
• …
• T(2)/2 = T(1)/1 + c
• → T(n)/n = T(1)/1 + c·log n
• → T(n) = c·n·log n + n
• → T(n) = O(n log n)
(An alternate method of analysis is given in the notes page)

Divide and Conquer Algorithms – Analysis of
Quick Sort
Worst Case Complexity:
• The pivot is always the smallest element or the biggest element.
• So, here i = 0
• T(n) = T(n − 1) + c·n, n > 1
• T(n − 1) = T(n − 2) + c·(n − 1)
• T(n − 2) = T(n − 3) + c·(n − 2)
• …
• T(2) = T(1) + c·2
• Here we ignore T(0) as it is insignificant.
• From the above equations,
• T(n) = T(1) + c·Σ (i = 2 to n) i
• → T(n) = O(n²)

Divide and Conquer Algorithms – Merge Sort

• Merge sort is yet another sorting algorithm which works on the Divide and
Conquer design principle.

• Merge sort works by dividing the given array into two sub arrays of equal size

• The sub arrays are sorted independently using recursion

• The sorted sub arrays are then merged to get the solution for the original array

Divide and Conquer Algorithms – Merge Sort
(Contd…)
Merge Sort:

To sort the given array a[1…n] in ascending order:

1. Begin
2. Set left = 1, right = n
3. If (left < right) then
3.1 Set m = floor((left + right) / 2)
3.2 Merge Sort a[left…m]
3.3 Merge Sort a[m+1…right]
3.4 Merge a[left…m] and a[m+1…right] into an auxiliary array b
3.5 Copy the elements of b to a[left…right]
4. End
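The pseudocode can be sketched in Python; for brevity this version returns a new sorted list instead of copying the auxiliary array b back into a[left…right] (an implementation convenience, not part of the slides):

```python
def merge_sort(a):
    """Return a new list containing the elements of a in ascending order."""
    if len(a) <= 1:
        return a
    m = len(a) // 2                 # step 3.1: split point
    left = merge_sort(a[:m])        # step 3.2: sort the left half
    right = merge_sort(a[m:])       # step 3.3: sort the right half
    # steps 3.4-3.5: merge the two sorted halves into an auxiliary list b
    b, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            b.append(left[i]); i += 1
        else:
            b.append(right[j]); j += 1
    b.extend(left[i:])
    b.extend(right[j:])
    return b
```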

Divide and Conquer Algorithms – Merge Sort
• Consider the size of the array to be sorted as n.
• Running time = running time of two recursive merge sorts of size n/2 + time to
merge.

• → T(n) = 2·T(n/2) + n
• where T(1) = 1

• Find out the best case complexity.

• When will the worst case happen?
• Find out the worst case complexity.

Divide and Conquer Algorithms – Binary Search

• Binary Search works on the Divide and Conquer Strategy

• Binary Search takes a sorted array as the input

• It works by comparing the target (search key) with the middle element of the
array and terminates if they are the same; otherwise it narrows the search to
the left (right) sub-array if the target is less (greater) than the middle element
of the array.

Divide and Conquer Algorithms – Binary Search
(Contd…)
Binary Search:

To find which element (if any) of the given array a[1…n] equals the target
element:

1. Begin
2. Set left = 1, right = n
3. While (left ≤ right) do
3.1 m = floor((left + right)/2)
3.2 If (target = a[m]) then End with output as m
3.3 If (target < a[m]) then right = m−1
3.4 If (target > a[m]) then left = m+1
4. End with output as none
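A direct Python rendering of the loop above (0-based indices, returning None when the target is absent):

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or None if absent."""
    left, right = 0, len(a) - 1
    while left <= right:
        m = (left + right) // 2      # middle element
        if a[m] == target:
            return m                 # found: end with output m
        elif target < a[m]:
            right = m - 1            # continue in the left sub-array
        else:
            left = m + 1             # continue in the right sub-array
    return None                      # end with output as none
```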

Divide and Conquer Algorithms – Binary Search
• Running time = time to divide the array into two + one comparison
• → T(n) = T(n/2) + 1,
• where T(1) = 1
• T(n/2) = T(n/4) + 1
• …
• T(4) = T(2) + 1
• T(2) = T(1) + 1 = log 2 + 1
• Substituting back,
• T(4) = 2 + 1 = log 4 + 1
• …
• T(n) = log n + 1

• → T(n) = O(log n)
(An alternate method of analysis is given in the previous notes page)

Decrease and Conquer Algorithms
• Decrease and Conquer algorithm design technique exploits the relationship
between a solution for the given instance of the problem and a solution for a
smaller instance of the same problem. Based on this relationship the problem
can be solved using the top-down or bottom-up approach

• There are different variants of Decrease and Conquer design strategies

• Decrease by a constant

• Decrease by a constant factor

• Variable size decrease

Decrease and Conquer Algorithms – Insertion Sort

• Insertion sort is a sorting algorithm which works on the algorithm design
principle of Decrease and Conquer

• Insertion sort works by decreasing the problem size by a constant in each
iteration and exploits the relationship between the solution for the problem of
smaller size and that of the original problem

• Insertion sort is the sorting technique we normally use when arranging a hand
of playing cards

Decrease and Conquer Algorithms – Insertion Sort
(Contd…)
Insertion Sort:

To sort the given array a[1…n] in ascending order:

1. Begin
2. For i = 2 to n do
2.1 Set v = a[i], j = i – 1
2.2 While (j ≥ 1) and v ≤ a[j] do
2.2.1 a[j+1] = a[j], j = j – 1
2.3 a[j+1] = v
3. End
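The pseudocode above can be sketched in Python (0-based indices; this version shifts only while a[j] is strictly greater than v, the usual stable variant):

```python
def insertion_sort(a):
    """Sort list a in ascending order, in place, using insertion sort."""
    for i in range(1, len(a)):
        v = a[i]                      # step 2.1: element to insert
        j = i - 1
        # step 2.2: shift elements greater than v one cell to the right
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v                  # step 2.3: drop v into its slot
    return a
```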

Decrease and Conquer Algorithms – Insertion Sort
Based on the algorithm, solve the following:

– Best case in insertion sort
– Best case complexity
– Worst case in insertion sort
– Worst case complexity

Summary of Day-2
• Analyzing Algorithms
– Basic Mathematical principles
– Order of magnitude
– Introduction to Asymptotic notations
• Best case
• Worst case
• Average case

• Analysis of well known algorithms


– Algorithm design techniques
– Analysis of some well known algorithms

Thank You!

