
MUSSAWIR IQBAL (655 REPEATER)

5th Semester
Submitted to: Sir Mansoor

ALGORITHM ANALYSIS
ASS # 2
Working and complexity of Heap sort, Heap sort vs Quick sort and
Pseudo code

Heap Sort Algorithm


Heap Sort is one of the best sorting methods, being in-place and having no quadratic worst case. The heap sort algorithm is divided into two basic parts:

Creating a heap from the unsorted list.

Building the sorted array by repeatedly removing the largest (or smallest) element from the heap and inserting it into the array; the heap is restored after each removal.

What is a Heap?
A heap is a special tree-based data structure that satisfies the following heap properties:

1. Shape property: a heap is always a complete binary tree, which means all levels of the tree are fully filled except possibly the last level, which is filled from left to right.

2. Heap property: every node is either greater than or equal to, or less than or equal to, each of its children. If the parent nodes are greater than or equal to their children, the heap is called a max-heap; if the parent nodes are smaller than or equal to their child nodes, it is called a min-heap. (A small array-based check of the max-heap property is sketched below.)
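To make the heap property concrete, a complete binary tree can be stored in an array with the children of index i at 2*i + 1 and 2*i + 2. The following C++ sketch checks whether an array satisfies the max-heap property; it is an illustration added here, not part of the assignment's program, and the function name isMaxHeap and the sample values are assumptions of the example.

/* Sketch: checking the max-heap property of an array */
#include <iostream>
#include <vector>

// Returns true if the array, viewed as a complete binary tree,
// satisfies the max-heap property: every parent >= each of its children.
bool isMaxHeap(const std::vector<int>& a)
{
    int n = a.size();
    for (int i = 0; i < n; i++)
    {
        int l = 2 * i + 1;                        // left child index
        int r = 2 * i + 2;                        // right child index
        if (l < n && a[i] < a[l]) return false;
        if (r < n && a[i] < a[r]) return false;
    }
    return true;
}

int main()
{
    std::vector<int> heap  = {90, 70, 80, 30, 60, 50};   // a valid max-heap
    std::vector<int> plain = {10, 70, 80, 30, 60, 50};   // violates the property at the root

    std::cout << isMaxHeap(heap) << "\n";    // prints 1
    std::cout << isMaxHeap(plain) << "\n";   // prints 0
    return 0;
}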

How Heap Sort Works


On receiving an unsorted list, the first step in heap sort is to build a heap data structure from it (a max-heap or a min-heap). Once the heap is built, its first element is the largest or the smallest value (depending on whether it is a max-heap or a min-heap), so we put that first element of the heap into our array. We then rebuild the heap from the remaining elements, again take its first element, and put it into the array. We keep repeating this until the complete sorted list is in our array.
In the program below, the heapsort() function is called first; it calls buildheap() to build the heap, which in turn uses satisfyheap() to restore the heap property at each node.

Sorting using Heap Sort Algorithm


/* Below program is written in C++ language */
#include <iostream>
using namespace std;

void heapsort(int [], int);
void buildheap(int [], int);
void satisfyheap(int [], int, int);

int main()
{
    int a[10], i, size;

    // size must be at most 10, because the array holds at most 10 elements
    cout << "Enter size of list: ";
    cin >> size;

    cout << "Enter " << size << " elements: ";
    for (i = 0; i < size; i++)
    {
        cin >> a[i];
    }
    heapsort(a, size);
    return 0;
}

void heapsort(int a[], int length)
{
    int heapsize, i, temp;

    buildheap(a, length);               // phase 1: arrange the array as a max-heap
    heapsize = length - 1;              // index of the last element inside the heap
    for (i = heapsize; i >= 1; i--)
    {
        // move the current maximum (the root) to the end of the heap
        temp = a[0];
        a[0] = a[heapsize];
        a[heapsize] = temp;
        heapsize--;                     // shrink the heap by one element
        satisfyheap(a, 0, heapsize);    // restore the heap property at the root
    }
    for (i = 0; i < length; i++)
    {
        cout << "\t" << a[i];
    }
    cout << endl;
}

void buildheap(int a[], int length)
{
    int i, heapsize;
    heapsize = length - 1;
    // heapify every non-leaf node, bottom-up
    for (i = length / 2 - 1; i >= 0; i--)
    {
        satisfyheap(a, i, heapsize);
    }
}

void satisfyheap(int a[], int i, int heapsize)
{
    int l, r, largest, temp;
    l = 2 * i + 1;      // left child of i (0-based indexing)
    r = 2 * i + 2;      // right child of i

    // find the largest among node i and its children
    if (l <= heapsize && a[l] > a[i])
    {
        largest = l;
    }
    else
    {
        largest = i;
    }
    if (r <= heapsize && a[r] > a[largest])
    {
        largest = r;
    }
    // if a child is larger, swap it up and continue heapifying downwards
    if (largest != i)
    {
        temp = a[i];
        a[i] = a[largest];
        a[largest] = temp;
        satisfyheap(a, largest, heapsize);
    }
}

Complexity of Heap Sort

a) To insert a single node into an empty heap: O(1).

b) To insert a single element into a heap of n nodes: O(log n).
Let us first see where the O(log n) comes from. When an element is inserted into a heap, at most one comparison is made per level of the tree, so the number of comparisons equals the depth of the tree. The depth of an n-node heap therefore gives the time complexity of inserting one element. Because a heap is a complete binary tree:
depth 0 holds 2^0 nodes,
depth 1 holds 2^1 nodes,
depth 2 holds 2^2 nodes,
. . .
depth d holds up to 2^d nodes.
So the total number of nodes in a full tree of depth d is n = 2^0 + 2^1 + 2^2 + ... + 2^d.
This is a geometric series, and its sum gives n + 1 = 2^(d+1), which rearranged in terms of d is d = log2(n + 1) - 1. For very large n the +1 and -1 can be neglected, giving
d = log2(n).
And d, the cost of inserting one element, is therefore O(log n).
c) To insert n elements one by one into a heap: O(n log n).
d) To find the largest element in a max-heap: O(1), since it is always at the root.
e) To delete the largest element from a max-heap (remove the root and re-heapify): O(log n).
f) To delete n elements from a max-heap: O(n log n).
g) To create a heap by repeated insertion: O(n log n). (The bottom-up buildheap() used in the program above is in fact O(n), but O(n log n) is a safe upper bound.)
Sorting with a heap therefore requires:
1. Arranging the elements into a heap: O(n log n), and
2. Deleting the elements one by one, with a swap and a re-heapify after each deletion: O(n log n).
This gives heap sort complexity = O(n log n) + O(n log n) = 2 O(n log n) = O(n log n).
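These two phases can also be seen directly with the C++ standard library, where std::make_heap builds a max-heap over a range and std::sort_heap performs the repeated removals. This is only an illustrative sketch of the same two-phase structure, and the sample values are arbitrary.

/* Sketch: the two phases of heap sort using the standard library */
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v = {12, 3, 45, 7, 19, 1};

    std::make_heap(v.begin(), v.end());     // phase 1: build a max-heap, O(n)
    std::sort_heap(v.begin(), v.end());     // phase 2: n removals with re-heapify, O(n log n)

    for (int x : v)
    {
        std::cout << x << " ";              // prints: 1 3 7 12 19 45
    }
    std::cout << "\n";
    return 0;
}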
Complexity Analysis of Heap Sort
Worst Case Time Complexity: O(n log n)
Best Case Time Complexity: O(n log n)
Average Time Complexity: O(n log n)
Space Complexity: O(1) auxiliary (the array is sorted in place; the recursive satisfyheap() uses at most O(log n) stack space)

Heap sort is not a stable sort, and it requires only constant extra space to sort a list.

Heap sort is fast in practice and is used wherever a guaranteed O(n log n) running time matters.

Pseudo code

Pseudo code is a simple way of writing programming logic in English. Pseudo code is not an actual programming language. It uses short phrases to write the steps of a program before you actually create it in a specific language. Once you know what the program is about and how it will function, you can use pseudo code to write statements that achieve the required results for your program.

Understanding Pseudo code

Pseudo code makes creating programs easier. Programs can be complex and long; preparation is the key. For years, flowcharts were used to map out programs before writing a single line of code in a language. However, flowcharts were difficult to modify, and as programming languages advanced it became difficult to display all parts of a program in a flowchart. It is also challenging to find a mistake without understanding the complete flow of a program. That is where pseudo code becomes more appealing.

To use pseudocode, all you do is write what you want your program to do in English. Pseudocode lets you translate your statements into any language because it has no special commands and is not standardized. Writing programs out before you code helps you organize your thoughts and see where you may have left out needed parts of your program. All you have to do is write it out in your own words in short statements. Let's look at some examples.

Examples of Pseudocode

Let's review an example of pseudocode for a program that adds two numbers together and then displays the result.


Start Program
Enter two numbers, A, B
Add the numbers together
Print Sum

End Program
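To illustrate how such pseudocode translates into an actual language, here is one possible C++ version of the program above. The variable names A, B and Sum are taken from the pseudocode; everything else (prompts, types) is just an assumption of this sketch.

/* Sketch: the add-two-numbers pseudocode translated into C++ */
#include <iostream>
using namespace std;

int main()
{
    int A, B, Sum;

    cout << "Enter two numbers: ";      // Enter two numbers, A, B
    cin >> A >> B;

    Sum = A + B;                        // Add the numbers together
    cout << "Sum = " << Sum << endl;    // Print Sum

    return 0;                           // End Program
}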

Now, let's look at a few more simple examples of pseudocode. Here is pseudocode to compute the area of a rectangle:


Get the length, l, and width, w
Compute the area = l*w

Display the area


Now, let's look at an example of pseudocode to compute the perimeter of a rectangle:
Enter length, l
Enter width, w
Compute Perimeter = 2*l + 2*w

Display Perimeter of a rectangle
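In the same spirit, the rectangle examples above could be translated into C++ roughly as follows; the prompts and the choice of double are assumptions of this sketch.

/* Sketch: rectangle area and perimeter pseudocode in C++ */
#include <iostream>
using namespace std;

int main()
{
    double l, w;

    cout << "Enter length and width: ";
    cin >> l >> w;                                  // Get the length, l, and width, w

    double area = l * w;                            // Compute the area = l*w
    double perimeter = 2 * l + 2 * w;               // Compute Perimeter = 2*l + 2*w

    cout << "Area = " << area << endl;              // Display the area
    cout << "Perimeter = " << perimeter << endl;    // Display Perimeter of a rectangle

    return 0;
}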


Remember, writing basic pseudocode is not like writing in an actual coding language. It cannot be compiled or run like a regular program. Pseudocode can be written however you want, but some companies use a specific pseudocode syntax to keep everyone in the company on the same page. Syntax is a set of rules on how to use and organize statements in a programming language. By adhering to a specific syntax, everyone in the company can read and understand the flow of a program. This is cost effective, and less time is spent finding and correcting errors.

Comparison between Heap Sort and Quick Sort

Empirical studies that count the compare and exchange operations performed by different sorting algorithms on the same data generally show that quick sort is considerably faster than heap sort.

Thus, when an occasional "blowout" to O(n^2) is tolerable, we can expect that, on average, quick sort will provide considerably better performance, especially if one of the modified pivot-choice procedures is used.
Most commercial applications would use quicksort for its better average performance: they can tolerate an occasional long run (which just means that a report takes slightly longer to produce on full moon days in leap years) in return for shorter runs most of the time.
However, quick sort should never be used in applications which require a guarantee of response time, unless it is treated as an O(n^2) algorithm when calculating the worst-case response time. If you have to assume O(n^2) time, then for small n you are better off using insertion sort, which has simpler code and therefore smaller constant factors; and for large n you should be using heap sort, for its guaranteed O(n log n) time. Life-critical software (medical monitoring, life support in aircraft and spacecraft) and mission-critical software (monitoring and control in industrial and research plants handling dangerous materials, control for aircraft, defence, etc.) will generally have a required response time as part of the system specification. In such systems it is not acceptable to design for average performance; you must always allow for the worst case, and thus treat quicksort as O(n^2).
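Following that advice, a minimal sketch of a sort with a guaranteed worst case might simply dispatch on n: insertion sort for small inputs (simple code, small constant factors) and heap sort otherwise. The threshold of 16 and the function names below are assumptions of this illustration, not something prescribed by the notes above.

/* Sketch: guaranteed worst-case sort: insertion sort for small n, heap sort otherwise */
#include <algorithm>
#include <iostream>
#include <vector>

const std::size_t SMALL = 16;   // assumed cut-off; would be tuned empirically

void insertionSort(std::vector<int>& a)
{
    for (std::size_t i = 1; i < a.size(); i++)
    {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key)   // shift larger elements to the right
        {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

void guaranteedSort(std::vector<int>& a)
{
    if (a.size() < SMALL)
    {
        insertionSort(a);                     // O(n^2) but tiny constants for small n
    }
    else
    {
        std::make_heap(a.begin(), a.end());   // build a max-heap, O(n)
        std::sort_heap(a.begin(), a.end());   // guaranteed O(n log n) worst case
    }
}

int main()
{
    std::vector<int> data = {9, 4, 7, 1, 8, 2};
    guaranteedSort(data);
    for (int x : data)
    {
        std::cout << x << " ";                // prints: 1 2 4 7 8 9
    }
    std::cout << "\n";
    return 0;
}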
So far, our best sorting algorithm has O(n log n) performance: can we do any better?
In general, the answer is no.
However, if we know something about the items to be sorted, then we may be able to do
better.
But first, we should look at squeezing the last drop of performance out of quicksort.
