
IGNOU MCA MCS-31 Free Solved Assignments 2010

Question 1.

Solution: Insertion sort is a simple sorting algorithm, a comparison sort in which the
sorted array (or list) is built one entry at a time. It is much less efficient on large lists than
more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion
sort provides several advantages:

• simple implementation
• efficient for (quite) small data sets
• adaptive, i.e. efficient for data sets that are already substantially sorted: the time
complexity is O(n + d), where d is the number of inversions
• more efficient in practice than most other simple quadratic (i.e. O(n²)) algorithms
such as selection sort or bubble sort: the average running time is about n²/4 comparisons,
and the running time is linear in the best case
• stable, i.e. does not change the relative order of elements with equal keys
• in-place, i.e. only requires a constant amount O(1) of additional memory space
• online, i.e. can sort a list as it receives it.

insertionSort(array A)
begin
    { grow a sorted prefix A[0..i-1]; on each pass insert A[i] into it }
    for i := 1 to length[A] - 1 do
    begin
        value := A[i];              { element to insert }
        j := i - 1;
        done := false;
        repeat
            if A[j] > value then
            begin
                A[j + 1] := A[j];   { shift the larger element one slot right }
                j := j - 1;
                if j < 0 then
                    done := true;
            end
            else
                done := true;
        until done;
        A[j + 1] := value;          { drop the element into its sorted position }
    end;
end;

ii).

Selection sort is a sorting algorithm, specifically an in-place comparison sort. It has
O(n²) complexity, making it inefficient on large lists, and generally performs worse than
the similar insertion sort. Selection sort is noted for its simplicity, and also has
performance advantages over more complicated algorithms in certain situations.

The algorithm works as follows:


1. Find the minimum value in the list
2. Swap it with the value in the first position
3. Repeat the steps above for the remainder of the list (starting at the second position
and advancing each time)

Effectively, the list is divided into two parts: the sublist of items already sorted, which is
built up from left to right and is found at the beginning, and the sublist of items remaining
to be sorted, occupying the remainder of the array.
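
As an illustration of these steps, here is a short selection sort sketch in Python (our own illustration; the function name and example list are not part of the original text):

def selection_sort(a):
    # Sort list a in place by repeatedly selecting the minimum of the unsorted suffix.
    n = len(a)
    for i in range(n - 1):
        # find the index of the smallest element in a[i:]
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        # swap it into position i, extending the sorted prefix on the left
        a[i], a[min_index] = a[min_index], a[i]
    return a

print(selection_sort([29, 10, 14, 37, 13]))   # [10, 13, 14, 29, 37]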

iii).

Heapsort is a comparison-based sorting algorithm, and is part of the selection sort
family. Although somewhat slower in practice on most machines than a good
implementation of quicksort, it has the advantage of a worst-case Θ(n log n) runtime.
Heapsort is an in-place algorithm, but is not a stable sort.
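
For concreteness, the sketch below (our own Python illustration, not part of the assignment text) shows an in-place heapsort using the usual sift-down approach:

def heapsort(a):
    # In-place heapsort: build a max-heap, then repeatedly move the maximum to the end.
    def sift_down(start, end):
        # push a[start] down until the heap property holds within a[start..end]
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                      # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):     # build the max-heap
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):             # swap max to the end, shrink heap, restore
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a

print(heapsort([5, 1, 9, 3, 7]))   # [1, 3, 5, 7, 9]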

iv).

Merge sort is an O(n log n) comparison-based sorting algorithm. In most
implementations it is stable, meaning that it preserves the input order of equal elements in
the sorted output. It is an example of the divide and conquer algorithmic paradigm. It was
invented by John von Neumann in 1945.

Conceptually, a merge sort works as follows:

1. If the list is of length 0 or 1, then it is already sorted. Otherwise:
2. Divide the unsorted list into two sublists of about half the size.
3. Sort each sublist recursively by re-applying merge sort.
4. Merge the two sublists back into one sorted list.

Merge sort incorporates two main ideas to improve its runtime:

1. A small list will take fewer steps to sort than a large list.
2. Fewer steps are required to construct a sorted list from two sorted lists than two
unsorted lists. For example, you only have to traverse each list once if they're
already sorted (see the merge function below for an example implementation).
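
The following Python sketch (our own; it supplies the merge function referred to above) shows both the recursive split and the single-pass merge:

def merge_sort(items):
    # base case: a list of 0 or 1 elements is already sorted
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # sort each half recursively
    right = merge_sort(items[mid:])
    return merge(left, right)            # merge the two sorted halves

def merge(left, right):
    # merge two already-sorted lists, traversing each list only once
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]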

v).
Quicksort is a well-known sorting algorithm developed by C. A. R. Hoare that, on
average, makes Θ(n log n) comparisons to sort n items. In the worst
case, it makes Θ(n²) comparisons, though if implemented correctly this behavior is rare.
Typically, quicksort is significantly faster in practice than other Θ(n log n) algorithms,
because its inner loop can be efficiently implemented on most architectures, and for most
real-world data it is possible to make design choices which minimize the probability of
requiring quadratic time. Additionally, quicksort tends to make good use of the
memory hierarchy, taking advantage of virtual memory and available caches.
Coupled with the fact that quicksort sorts in place and needs only a small auxiliary stack, it
is very well suited to modern computer architectures.

Quicksort sorts by employing a divide and conquer strategy to divide a list into two sub-
lists.

(Figure: full example of quicksort on a random set of numbers; the boxed element is the pivot, always chosen as the last element of the partition.)

The steps are:

1. Pick an element, called a pivot, from the list.
2. Reorder the list so that all elements with values less than the pivot come before
the pivot, while all elements with values greater than the pivot come after it (equal
values can go either way). After this partitioning, the pivot is in its final position.
This is called the partition operation.
3. Recursively sort the sub-list of lesser elements and the sub-list of greater
elements.

The base cases of the recursion are lists of size zero or one, which are always sorted.
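
A minimal Python sketch of this scheme (our own illustration, using the last element of each partition as the pivot, as in the figure description above):

def quicksort(a, lo=0, hi=None):
    # In-place quicksort using Lomuto partitioning.
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)     # pivot ends up in its final position p
        quicksort(a, lo, p - 1)      # recursively sort the lesser elements
        quicksort(a, p + 1, hi)      # recursively sort the greater elements
    return a

def partition(a, lo, hi):
    pivot = a[hi]                    # pivot chosen as the last element
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]        # place the pivot between the two sub-lists
    return i

print(quicksort([7, 2, 9, 4, 1]))    # [1, 2, 4, 7, 9]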

Question 2.

a).
• Suppose:
o S(k) is true for a fixed constant k (often k = 0)
o S(n) ⇒ S(n+1) for all n >= k
• Then S(n) is true for all n >= k


Proof By Induction
• Claim: S(n) is true for all n >= k
• Basis:
o Show formula is true when n = k
• Inductive hypothesis:
o Assume formula is true for an arbitrary n

• Step:
o Show that formula is then true for n+1

Induction Example:
Gaussian Closed Form
• Prove 1 + 2 + 3 + … + n = n(n+1) / 2
o Basis:
- If n = 0, then 0 = 0(0+1) / 2
o Inductive hypothesis:
- Assume 1 + 2 + 3 + … + n = n(n+1) / 2
o Step (show true for n+1):

1 + 2 + … + n + (n+1) = (1 + 2 + … + n) + (n+1)

= n(n+1)/2 + (n+1) = [n(n+1) + 2(n+1)]/2

= (n+1)(n+2)/2 = (n+1)((n+1) + 1) / 2

Induction Example:
Geometric Closed Form
• Prove a^0 + a^1 + … + a^n = (a^(n+1) - 1)/(a - 1)
for all a ≠ 1
o Basis: show that a^0 = (a^(0+1) - 1)/(a - 1)

a^0 = 1 = (a^1 - 1)/(a - 1)

o Inductive hypothesis:
- Assume a^0 + a^1 + … + a^n = (a^(n+1) - 1)/(a - 1)
o Step (show true for n+1):

a^0 + a^1 + … + a^(n+1) = (a^0 + a^1 + … + a^n) + a^(n+1)
= (a^(n+1) - 1)/(a - 1) + a^(n+1) = (a^(n+2) - 1)/(a - 1)
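
As a quick sanity check of both closed forms, the small Python snippet below (ours, not part of the proofs) verifies them numerically for a few values of n:

# check the Gaussian and geometric closed forms for n = 0..5
for n in range(6):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    a = 3                                   # any integer a != 1 works here
    assert sum(a**k for k in range(n + 1)) == (a**(n + 1) - 1) // (a - 1)
print("both closed forms hold for n = 0..5")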

Review: Asymptotic Performance

• Asymptotic performance: how does the algorithm behave as the problem size gets very large?
o Running time
o Memory/storage requirements
• Use the RAM model:
o All memory equally expensive to access
o No concurrent operations
o All reasonable instructions take unit time (except, of course, function calls)
o Constant word size

Review: Running Time

• Number of primitive steps that are executed
o Except for the time taken by function calls, most statements require roughly the same amount of time
o We can be more exact if need be
• Worst case vs. average case


Review: Asymptotic Notation

• Upper Bound Notation:
o f(n) is O(g(n)) if there exist positive constants c and n0 such that f(n) ≤ c⋅g(n) for all n ≥ n0
o Formally, O(g(n)) = { f(n): ∃ positive constants c and n0 such that f(n) ≤ c⋅g(n) ∀ n ≥ n0 }
• Big-O fact:
o A polynomial of degree k is O(n^k)

Review: Asymptotic Notation

• Asymptotic lower bound:
o f(n) is Ω(g(n)) if ∃ positive constants c and n0 such that 0 ≤ c⋅g(n) ≤ f(n) ∀ n ≥ n0
• Asymptotic tight bound:
o f(n) is Θ(g(n)) if ∃ positive constants c1, c2, and n0 such that c1⋅g(n) ≤ f(n) ≤ c2⋅g(n) ∀ n ≥ n0
o f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) AND f(n) = Ω(g(n))
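
To make the upper-bound definition concrete, the snippet below (our own example function, not from the notes) exhibits explicit constants c and n0 for a quadratic f(n):

# f(n) = 3n^2 + 10n is O(n^2): take g(n) = n^2, c = 4, n0 = 10,
# since 3n^2 + 10n <= 4n^2 exactly when n >= 10
def f(n): return 3 * n * n + 10 * n
def g(n): return n * n

c, n0 = 4, 10
assert all(f(n) <= c * g(n) for n in range(n0, 1000))
print("f(n) <= 4*g(n) for all tested n >= 10")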

Question 2.
b).
A classic example of a recursive procedure is the function used to calculate the factorial
of an integer.

Function definition: n! = 1 if n = 0, and n! = n × (n − 1)! if n > 0.

Pseudocode (recursive):

function factorial is:
input: integer n such that n >= 0
output: [n × (n-1) × (n-2) × … × 1]

1. if n is 0, return 1
2. otherwise, return [ n × factorial(n-1) ]

end factorial
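
The same procedure translated to Python (a direct transcription of the pseudocode; the function name is ours):

def factorial(n):
    # recursive factorial for integer n >= 0
    if n == 0:                        # base case: 0! = 1
        return 1
    return n * factorial(n - 1)       # recursive case: n * (n-1)!

print(factorial(5))   # 120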

c).
A Turing machine is a theoretical device that manipulates symbols contained on a strip
of tape. Despite its simplicity, a Turing machine can be adapted to simulate the logic of
any computer algorithm, and is particularly useful in explaining the functions of a CPU
inside of a computer. The "Turing" machine was described by Alan Turing in 1937,[1]
who called it an "a(utomatic)-machine". Turing machines are not intended as a practical
computing technology, but rather as a thought experiment representing a computing
machine. They help computer scientists understand the limits of mechanical computation.

A succinct definition of the thought experiment was given by Turing in his 1948 essay,
"Intelligent Machinery". Referring back to his 1936 publication, Turing writes that the
Turing machine, here called a Logical Computing Machine, consisted of:

...an infinite memory capacity obtained in the form of an infinite tape marked out into squares on
each of which a symbol could be printed. At any moment there is one symbol in the machine; it is
called the scanned symbol. The machine can alter the scanned symbol and its behavior is in part
determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of
the machine. However, the tape can be moved back and forth through the machine, this being one
of the elementary operations of the machine. Any symbol on the tape may therefore eventually
have an innings[2]. (Turing 1948, p. 61)

(Figure: a Turing machine that is able to simulate any other Turing machine.)

The Turing machine mathematically models a machine that mechanically operates on a
tape on which symbols are written which it can read and write one at a time using a tape
head. Operation is fully determined by a finite set of elementary instructions such as "in
state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, shift to the right, and
change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;"
etc. In the original article ("On computable numbers, with an application to the
Entscheidungsproblem", see also references below), Turing imagines not a mechanical
machine, but a person whom he calls the "computer", who executes these deterministic
mechanical rules slavishly (or as Turing puts it, "in a desultory manner").

The head is always over a particular square of the tape; only a finite stretch of squares is
given. The instruction to be performed (q4) is shown over the scanned square. (Drawing
after Kleene (1952) p.375.)

Here, the internal state (q1) is shown inside the head, and the illustration describes the
tape as being infinite and pre-filled with "0", the symbol serving as blank. The system's
full state (its configuration) consists of the internal state, the contents of the shaded
squares including the blank scanned by the head ("11B"), and the position of the head.
(Drawing after Minsky (1967) p. 121).

More precisely, a Turing machine consists of:


1. A TAPE which is divided into cells, one next to the other. Each cell contains a
symbol from some finite alphabet. The alphabet contains a special blank symbol
(here written as '0') and one or more other symbols. The tape is assumed to be
arbitrarily extendable to the left and to the right, i.e., the Turing machine is always
supplied with as much tape as it needs for its computation. Cells that have not
been written to before are assumed to be filled with the blank symbol. In some
models the tape has a left end marked with a special symbol; the tape extends or is
indefinitely extensible to the right.
2. A HEAD that can read and write symbols on the tape and move the tape left and
right one (and only one) cell at a time. In some models the head moves and the
tape is stationary.
3. A finite TABLE ("action table", or transition function) of instructions (usually
quintuples [5-tuples]: qᵢaⱼ → qᵢ₁aⱼ₁dₖ, but sometimes 4-tuples) that, given the
state (qᵢ) the machine is currently in and the symbol (aⱼ) it is reading on the tape
(the symbol currently under the HEAD), tells the machine to do the following in sequence
(for the 5-tuple models):
o Either erase or write a symbol (writing aⱼ₁ in place of aⱼ), and then
o Move the head (which is described by dₖ and can have values: 'L' for one
step left or 'R' for one step right or 'N' for staying in the same place), and
then
o Assume the same or a new state as prescribed (go to state qᵢ₁).
In the 4-tuple models, erase or write a symbol (aⱼ₁) and move the head left
or right (dₖ) are specified as separate instructions. Specifically, the
TABLE tells the machine to (ia) erase or write a symbol or (ib) move the
head left or right, and then (ii) assume the same or a new state as
prescribed, but not both actions (ia) and (ib) in the same instruction. In
some models, if there is no entry in the table for the current combination
of symbol and state then the machine will halt; other models require all
entries to be filled.
4. A STATE REGISTER that stores the state of the Turing machine, one of finitely
many. There is one special start state with which the state register is initialized.
These states, writes Turing, replace the "state of mind" a person performing
computations would ordinarily be in.
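
To illustrate how the TAPE, HEAD, TABLE and STATE REGISTER fit together, here is a tiny Python simulator sketch (our own illustration; the example machine, which blanks out a block of 1s and then halts, is made up for demonstration):

# A minimal 5-tuple Turing machine: TABLE maps (state, symbol) -> (new symbol, move, new state)
def run_tm(table, tape, state, halt_state, max_steps=1000):
    cells = dict(enumerate(tape))         # the TAPE; unwritten cells read as the blank '0'
    head = 0                              # the HEAD position
    for _ in range(max_steps):
        if state == halt_state:           # the STATE REGISTER reached the halting state
            break
        symbol = cells.get(head, '0')     # scanned symbol
        new_symbol, move, state = table[(state, symbol)]   # look up the TABLE entry
        cells[head] = new_symbol          # write (erasing means writing the blank)
        head += {'L': -1, 'R': 1, 'N': 0}[move]            # move the head one cell (or stay)
    return ''.join(cells[i] for i in sorted(cells))

# Example machine: turn every 1 into 0, then halt on the first blank.
table = {
    ('flip', '1'): ('0', 'R', 'flip'),
    ('flip', '0'): ('0', 'N', 'halt'),
}
print(run_tm(table, '111', 'flip', 'halt'))   # prints '0000'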

Question 3.
a)i).
a)ii).
Load balancing algorithms based on gradient methods and their analysis through algebraic graph theory

Andrey G. Bronevich (a) and Wolfgang Meyer (b)
(a) Taganrog State University of Radio-Engineering, Nekrasovskij street 44, 347928 Taganrog, Russia
(b) Institut für Automatisierungstechnik (2-05), Schwarzenbergstrasse 95, D-21073 Hamburg, Germany
Received 5 May 2006; revised 4 September 2007; accepted 6 September 2007. Available online 21 September 2007.

Abstract
The main results of this paper are based on the idea that most load balancing algorithms
can be described in the framework of optimization theory. This makes it possible to apply
classical results concerning convergence, its speed, and other properties. We emphasize that
these classical results were found independently and that, until now, this connection has not
been shown clearly. In this paper, we analyze the load balancing algorithm based on the
steepest descent algorithm. The analysis shows that the speed of convergence is
determined by eigenvalues of the Laplacian for the graph of a given load balancing
system. This consideration also leads to the problems of choosing an optimal structure for
a load balancing system. We prove that these optimal graphs have special Laplacians: the
multiplicities of their minimal and maximal positive eigenvalues must be greater than
one. Such a property is essential for strongly regular graphs, investigated in algebraic
graph theory.

b).
Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree
for a connected weighted graph. This means it finds a subset of the edges that forms a
tree that includes every vertex, where the total weight of all the edges in the tree is
minimized. If the graph is not connected, then it finds a minimum spanning forest (a
minimum spanning tree for each connected component). Kruskal's algorithm is an
example of a greedy algorithm.
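
A compact Python sketch of Kruskal's algorithm using a union-find structure (our own illustration; the example graph is made up):

def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v); returns the edges of a minimum spanning tree/forest
    parent = list(range(num_vertices))

    def find(x):                        # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):       # consider edges in order of increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # the edge joins two different trees, so no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))   # [(0, 1, 1), (2, 3, 2), (1, 2, 3)], total weight 6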

Question 4.a).

Quicksort is a well-known sorting algorithm developed by C. A. R. Hoare that, on
average, makes Θ(n log n) comparisons to sort n items. In the worst
case, it makes Θ(n²) comparisons, though if implemented correctly this behavior is rare.
Typically, quicksort is significantly faster in practice than other Θ(n log n) algorithms,
because its inner loop can be efficiently implemented on most architectures, and for most
real-world data it is possible to make design choices which minimize the probability of
requiring quadratic time. Additionally, quicksort tends to make good use of the
memory hierarchy, taking advantage of virtual memory and available caches.
Coupled with the fact that quicksort sorts in place and needs only a small auxiliary stack, it
is very well suited to modern computer architectures.

Formal analysis
From the initial description it's not obvious that quicksort takes Θ(n log n) time on
average. It's not hard to see that the partition operation, which simply loops over the
elements of the array once, uses Θ(n) time. In versions that perform concatenation, this
operation is also Θ(n).

In the best case, each time we perform a partition we divide the list into two nearly equal
pieces. This means each recursive call processes a list of half the size. Consequently, we
can make only log n nested calls before we reach a list of size 1. This means that the
depth of the call tree is Θ(log n). But no two calls at the same level of the call tree
process the same part of the original list; thus, each level of calls needs only Θ(n) time
all together (each call has some constant overhead, but since there are only Θ(n) calls at
each level, this is subsumed in the Θ(n) factor). The result is that the algorithm uses only
Θ(n log n) time.

An alternate approach is to set up a recurrence relation for T(n), the time
needed to sort a list of size n. Because a single quicksort call involves Θ(n) work for the
partition plus two recursive calls on lists of size n / 2 in the best case, the relation would be:

T(n) = 2T(n/2) + Θ(n).

The master theorem tells us that T(n) = Θ(n log n).
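
Spelled out (our own expansion of the step above), the master theorem applies as follows:

\[
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + \Theta(n),
\qquad a = 2,\ b = 2,\ f(n) = \Theta(n),
\]
\[
n^{\log_b a} = n^{\log_2 2} = n = \Theta(f(n))
\;\Longrightarrow\; T(n) = \Theta(n \log n) \quad \text{(master theorem, case 2).}
\]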


In fact, it's not necessary to divide the list this precisely; even if each pivot splits the
elements with 99% on one side and 1% on the other (or any other fixed fraction), the call
depth is still limited to 100 log n, so the total running time is still Θ(n log n).

In the worst case, however, the two sublists have size 1 and n − 1 (for example, if all the
elements of the array are equal), and the call tree becomes a linear chain of
n nested calls. The i-th call does Θ(n − i) work, and the total over all calls, Σ (n − i), is Θ(n²).
The recurrence relation is:

T(n) = Θ(n) + T(0) + T(n − 1) = O(n) + T(n − 1).

This is the same relation as for insertion sort and selection sort, and it solves to T(n) =
Θ(n²). Given knowledge of which comparisons are performed by the sort, there are
adaptive algorithms that are effective at generating worst-case input for quicksort on-the-
fly, regardless of the pivot selection strategy.[4]

Question 4.b).

A greedy algorithm is any algorithm that follows the problem solving metaheuristic of
making the locally optimal choice at each stage[1] with the hope of finding the global
optimum.

For example, applying the greedy strategy to the traveling salesman problem yields the
following algorithm: "At each stage visit the unvisited city nearest to the current city".

Optimal substructure
"A problem exhibits optimal substructure if an optimal solution to the problem
contains optimal solutions to the sub-problems."[2] Said differently, a problem has
optimal substructure if the best next move always leads to the optimal solution.
An example of 'non-optimal substructure' would be a situation where capturing a
queen in chess (good next move) will eventually lead to the loss of the game (bad
overall move).

When greedy-type algorithms fail

For many other problems, greedy algorithms fail to produce the optimal solution, and
may even produce the unique worst possible solution. One example is the nearest
neighbor algorithm mentioned above: for each number of cities there is an assignment of
distances between the cities for which the nearest neighbor heuristic produces the unique
worst possible tour.[3]
Imagine making change with only 25-cent, 10-cent, and 4-cent coins. The greedy
algorithm would not be able to make change for 41 cents, since after committing to use
one 25-cent coin and one 10-cent coin it would be impossible to use 4-cent coins for the
balance of 6 cents. A person, or a more sophisticated algorithm, could instead make
change for 41 cents with one 25-cent coin and four 4-cent coins.
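
The failure can be reproduced directly with a few lines of Python (our own demonstration):

def greedy_change(amount, coins):
    # greedy change-making: always take the largest coin that still fits
    taken = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            taken.append(coin)
    return taken, amount                 # a positive leftover means the greedy choice failed

picked, leftover = greedy_change(41, [25, 10, 4])
print(picked, leftover)                  # [25, 10, 4] 2  -- 2 cents left uncovered
print(sum([25, 4, 4, 4, 4]))             # 41: the non-greedy solution does exist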

Question 4.d).
In computational complexity theory, the complexity class NP-complete (abbreviated NP-
C or NPC) is a class of problems having two properties:

• Any given solution to the problem can be verified quickly (in polynomial time);
the set of problems with this property is called NP (nondeterministic polynomial
time).
• If the problem can be solved quickly (in polynomial time), then so can every
problem in NP.

Although any given solution to such a problem can be verified quickly, there is no known
efficient way to locate a solution in the first place; indeed, the most notable characteristic
of NP-complete problems is that no fast solution to them is known. That is, the time
required to solve the problem using any currently known algorithm increases very
quickly as the size of the problem grows. As a result, the time required to solve even
moderately large versions of many of these problems easily reaches into the billions or
trillions of years, using any amount of computing power available today. As a
consequence, determining whether or not it is possible to solve these problems quickly is
one of the principal unsolved problems in computer science today.

While a method for computing the solutions to NP-complete problems using a reasonable
amount of time remains undiscovered, computer scientists and programmers still
frequently encounter NP-complete problems. An expert programmer should be able to
recognize an NP-complete problem so that he or she does not unknowingly waste time
trying to solve a problem which so far has eluded generations of computer scientists.
Instead, NP-complete problems are often addressed by using approximation algorithms.
