
Question 1:

1. (a) What is Randomized Quicksort? Analyse the expected running time of Randomized Quicksort, with the help of a suitable example.

Answer: Randomized quicksort is a simple modification of quicksort in which a random element of the subarray being considered is moved to the last position of the subarray, and then Partition is invoked.

RandPartition(A, p, r)
  i = Random(p, r)
  interchange A[i] and A[r]
  return Partition(A, p, r)

Correctness of RandPartition is immediate once we assume the correctness of Partition, since A[i] is guaranteed to be an element of the subarray A[p..r]; the running time remains linear in the size of the subarray.

RandomizedQuickSort(A, p, r)
INPUT: An array A[1..n] of elements from a totally ordered set, and integers p and r with 1 <= p <= r <= n.
OUTPUT: The elements of the subarray A[p..r] are rearranged in sorted order; the elements of A outside A[p..r] are unchanged.
  if p < r then
    q = RandPartition(A, p, r)
    RandomizedQuickSort(A, p, q - 1)
    RandomizedQuickSort(A, q + 1, r)
  fi

Expected running time of Randomized Quicksort: Let X be the number of comparisons between pairs of elements of the array A made by RandomizedQuickSort on an input of size n, where we assume that all elements of the input array are distinct. Since the total number of operations executed by RandomizedQuickSort is within a constant factor of the number of comparisons it performs, bounding E[X] bounds the expected running time; it can be shown that E[X] = O(n log n), so the expected running time is O(n log n). For example, on an already sorted array (a worst case for deterministic quicksort with a fixed first or last pivot, which takes O(n²) time), the random choice of pivot keeps the expected number of comparisons at O(n log n).
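As an illustration, here is a minimal Python sketch of randomized quicksort along the lines described above; the Lomuto-style partition and the sample array are assumptions for the example, not part of the original answer.

import random

def rand_partition(a, p, r):
    """Move a random element of a[p..r] to position r, then partition around it."""
    i = random.randint(p, r)            # random pivot index in [p, r]
    a[i], a[r] = a[r], a[i]             # interchange A[i] and A[r]
    pivot = a[r]
    store = p
    for j in range(p, r):               # standard partition step
        if a[j] <= pivot:
            a[j], a[store] = a[store], a[j]
            store += 1
    a[store], a[r] = a[r], a[store]
    return store                        # final position q of the pivot

def randomized_quicksort(a, p, r):
    if p < r:
        q = rand_partition(a, p, r)
        randomized_quicksort(a, p, q - 1)
        randomized_quicksort(a, q + 1, r)

data = [5, 2, 9, 1, 7, 3]
randomized_quicksort(data, 0, len(data) - 1)
print(data)   # [1, 2, 3, 5, 7, 9]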

(b) Explain the Greedy Structure algorithm. Give an example in which the Greedy technique fails to deliver an optimal solution.

Answer: Greedy approach: the greedy approach works by making the decision that seems most promising at the moment; it never reconsiders this decision, whatever situation may arise later.
Step 1. Initially the set of chosen items (the solution set) is empty.
Step 2. At each step an item is added to the solution set using a selection function. If the resulting set would no longer be feasible, reject the item under consideration (it is never considered again); otherwise, if the set is still feasible, add the current item.
Step 3. END.
Example of failure: for coin change with denominations {1, 3, 4} and a target of 6, the greedy strategy of always taking the largest usable coin chooses 4 + 1 + 1 (three coins), whereas the optimal solution is 3 + 3 (two coins), as the sketch below illustrates.
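A minimal Python sketch of this failure, assuming the denominations and target from the example above:

def greedy_change(amount, denominations):
    """Greedy: repeatedly take the largest coin that still fits."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_change(6, [1, 3, 4]))   # [4, 1, 1] -> 3 coins, but 3 + 3 uses only 2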

(c) Describe the two properties that characterise a good dynamic programming problem.

Answer: The following are the two properties of a good dynamic programming problem.
Optimal substructure: this means that optimal solutions of subproblems can be used to find the optimal solution of the overall problem. For example, the shortest path to a goal from a vertex in an acyclic graph can be found by first computing the shortest path to the goal from all adjacent vertices, and then using this to pick the best overall path. Process to solve a problem using optimal substructure: break the problem into smaller subproblems; solve these subproblems optimally using this three-step process recursively; use these optimal solutions to construct an optimal solution for the original problem.
Overlapping subproblems: to say that a problem has overlapping subproblems is to say that the same subproblems are used to solve many different larger problems. For example, in the Fibonacci sequence, F3 = F1 + F2 and F4 = F2 + F3; computing each number involves computing the previous two numbers. This applies whenever overlapping subproblems are present: a naive approach may waste time recomputing optimal solutions to subproblems it has already solved (a short memoised sketch is given after part (d)).

(d) State the Travelling Salesperson problem. Comment on the nature of the solution to the problem.

Answer: The Travelling Salesman Problem (TSP) asks: given a set of cities and the distances between every pair of them, find a shortest tour that visits each city exactly once and returns to the starting city. TSP is an NP-hard problem. A problem that is NP-complete has the property that it can be solved in polynomial time if and only if all other NP-complete problems can also be solved in polynomial time. If an NP-hard problem can be solved in polynomial time, then all NP-complete problems can be solved in polynomial time. All NP-complete problems are NP-hard, but some NP-hard problems are not NP-complete.
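To make the overlapping-subproblems property from part (c) concrete, here is a minimal memoised Fibonacci sketch (illustrative only):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each value is computed once and then reused, so overlapping subproblems cost nothing extra."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55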

Question 2:

2. Give an analysis of each one of the following with examples: (i) Insertion Sort (ii) Selection Sort (iii) Heap Sort (iv) Merge Sort (v) Quick Sort

Answer:
(i) Insertion Sort
One of the simplest methods of sorting an array is insertion sort. An example of an insertion sort occurs in everyday life while playing cards: to sort the cards in your hand you extract a card, shift the remaining cards, and then insert the extracted card in the correct place. This process is repeated until all the cards are in the correct sequence. Both the average-case and worst-case times are O(n²).
EXAMPLE: Let there be an array of four integers to be sorted. Starting near the top of the array we extract the 3. Then the elements above it are shifted down until we find the correct place to insert the 3. This process repeats with the next number. Finally, we complete the sort by inserting 2 in the correct place.
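A minimal Python sketch of insertion sort (the four-integer array below is an assumed example):

def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]                      # extract the current element
        j = i - 1
        while j >= 0 and a[j] > key:    # shift larger elements one place right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                  # insert the extracted element in place
    return a

print(insertion_sort([4, 3, 1, 2]))   # [1, 2, 3, 4]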

Assuming there are n elements in the array, we must index through n − 1 entries. For each entry, we may need to examine and shift up to n − 1 other entries, resulting in an O(n²) algorithm. Insertion sort is an in-place sort: the array is sorted in place and no extra memory is required. Insertion sort is also a stable sort: stable sorts retain the original ordering of keys when identical keys are present in the input data.
(ii) Selection Sort
Selection sort searches all of the elements in a list until it finds the smallest/largest element, then swaps that element with the first element in the list. Next, it finds the smallest/largest of the remaining elements and swaps it with the second element. The same process is repeated until the last two elements in the list are compared. As we are selecting the smallest/largest element by searching the entire list, the name is selection sort.
Example: Let us take a set of 7 numbers to sort in ascending order: 2 6 5 4 1 7 3
Pass 1: 2 6 5 4 1 7 3. First, search for the smallest element in the list. Here 1 is the smallest element; swap it with the first element of the list. The list becomes 1 6 5 4 2 7 3.
Pass 2: 1 6 5 4 2 7 3. Now we search for the smallest element from the 2nd element onwards. The smallest element is 2 and has to be swapped with the 2nd element of the list, i.e. 6. The list becomes 1 2 5 4 6 7 3.
Pass 3: 1 2 5 4 6 7 3.

Now we search for the smallest element from the 3rd element onwards. The smallest element is 3 and has to be swapped with the 3rd element of the list, i.e. 5. The list becomes 1 2 3 4 6 7 5.
Pass 4: 1 2 3 4 6 7 5. Now we search for the smallest element from the 4th element onwards. The smallest element is 4, which would be swapped with the 4th element of the list, i.e. 4. As it is already in the correct place, no swap is needed.
Pass 5: 1 2 3 4 6 7 5. Now we search for the smallest element from the 5th element onwards. The smallest element is 5 and has to be swapped with the 5th element of the list, i.e. 6. The list becomes 1 2 3 4 5 7 6.
Pass 6: 1 2 3 4 5 7 6. Now we search for the smallest element from the 6th element onwards. The smallest element is 6 and has to be swapped with the 6th element of the list, i.e. 7. The list becomes 1 2 3 4 5 6 7.
In total N − 1 passes are needed to sort all the elements, where N is the number of elements in the list. In each pass we search for the smallest/largest element in the remaining list and swap it with the element in the position equal to the number of the pass.
Complexity: The number of comparisons is N − 1 in the first pass, N − 2 in the second pass, N − 3 in the third pass and N − i in the i-th pass, so the complexity is O(N²).
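A minimal Python sketch of selection sort, using the seven-number example above:

def selection_sort(a):
    n = len(a)
    for i in range(n - 1):              # N - 1 passes
        smallest = i
        for j in range(i + 1, n):       # search the rest of the list for the smallest element
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]   # swap it into position i
    return a

print(selection_sort([2, 6, 5, 4, 1, 7, 3]))   # [1, 2, 3, 4, 5, 6, 7]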

(iii) Heap Sort
We can sort an array by first constructing a heap from it, as in the first part of the algorithm below. It can be shown that this takes O(n) steps, where n is the array size. Next, the largest value is removed from the heap and the heap is resized. This requires a siftDown() so that the smaller heap is still a heap, and this operation takes at most log₂ n steps. The largest value is then placed at the end of the array, in the spot freed by the removal. This is repeated, with the next largest value being placed in the second-last position, and so on. The process is repeated n − 1 times, so heap sort takes at most n + (n − 1) log₂ n steps, i.e. O(n log₂ n). This is the same as the average cost of quicksort.
EXAMPLE: Let there be an array of eight integers to be sorted. The first thing to do in heapsort is to convert the array into a heap, as illustrated in the diagram.

Next, the array a[ ] is sorted by removing the highest priority value from the heap and placing it at the end of the array. Remember, on heap removal, the value at the bottom of the heap is moved to the top and then sifted down to reestablish the heap. The first 3 iterations of this loop are illustrated next.
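A minimal Python sketch of heapsort along these lines (the eight-integer array below is an assumed example, since the original diagrams are not reproduced here):

def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at start, within a[0..end]."""
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                       # pick the larger child
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heap_sort(a):
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):  # build the heap (takes O(n))
        sift_down(a, start, n - 1)
    for end in range(n - 1, 0, -1):          # repeatedly move the largest value to the end
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)
    return a

print(heap_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]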

(iv) Merge Sort
Merge sort works by dividing the dataset into two portions and applying merge sort to each half using this recursive technique. The base case is when there are two items in the dataset being considered; these can be ordered by an exchange if needed. Once all of the pairs of data have been sorted, pairs of sorted data can be merged. As the recursion 'unwinds', larger and larger sorted datasets are merged.
EXAMPLE: To illustrate this, consider the following dataset: (6, 5, 8, 1, 4, 3, 7, 2). For the first call of the function, the data is partitioned into two lists, one of 6, 5, 8 and 1, and a second of 4, 3, 7 and 2. The list (6 5 8 1) is then partitioned into two smaller lists (6 5) and (8 1). The base case has now been reached, and we can sort (6 5) into (5 6) and (8 1) into (1 8). We can now merge (5 6) and (1 8): we compare 1 and 5, so the first element of the merged sequence is 1; we next compare 5 and 8, so the second element is 5; next we compare 6 and 8, so the third element is 6, leaving 8 as the last element. The merged result is (1 5 6 8). We now turn our attention to the second half of the original dataset. Again we partition (4 3 7 2) into (4 3) and (7 2); sorting these we get (3 4) and (2 7), and merging these we get (2 3 4 7). We now have the two halves of the data sorted as (1 5 6 8) and (2 3 4 7). All that remains is to merge the two halves together: (1 2 3 4 5 6 7 8).
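A minimal Python sketch of merge sort, using the same dataset as the example above:

def merge_sort(a):
    if len(a) <= 1:                      # base case: nothing to sort
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])           # sort each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([6, 5, 8, 1, 4, 3, 7, 2]))   # [1, 2, 3, 4, 5, 6, 7, 8]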

(v) Quick Sort
One of the most popular sorting algorithms is quicksort. Quicksort executes in O(n log n) time on average and O(n²) in the worst case; however, with proper precautions, worst-case behaviour is very unlikely. Quicksort is not a stable sort, and it is not strictly an in-place sort since stack space is required for the recursion. The quicksort algorithm works by partitioning the array to be sorted and then recursively sorting each partition. In Partition, one of the array elements is selected as a pivot value; values smaller than the pivot are placed to its left, while larger values are placed to its right.
EXAMPLE: Let there be an array of five integers to be sorted, with 3 selected as the pivot. Indices are run starting at both ends of the array: one index starts on the left and selects an element that is larger than the pivot, while the other starts on the right and selects an element that is smaller than the pivot. In this case, the numbers 4 and 1 are selected and then exchanged, as shown in the figure. This process repeats until all elements to the left of the pivot are less than or equal to the pivot, and all elements to its right are greater than or equal to the pivot. Quicksort then recursively sorts the two sub-arrays, resulting in the sorted array shown in the figure.
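A minimal Python sketch of this two-index partitioning idea (roughly Hoare's scheme; the five-integer array and the pivot choice below are assumptions, since the original figures are not reproduced here):

def hoare_partition(a, lo, hi):
    pivot = a[(lo + hi) // 2]        # pick a pivot (here: the middle element)
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:          # left index looks for an element >= pivot
            i += 1
        j -= 1
        while a[j] > pivot:          # right index looks for an element <= pivot
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]      # exchange the out-of-place pair

def quicksort(a, lo, hi):
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quicksort(a, lo, p)          # with Hoare partition the left part includes index p
        quicksort(a, p + 1, hi)

data = [4, 3, 1, 5, 2]
quicksort(data, 0, len(data) - 1)
print(data)   # [1, 2, 3, 4, 5]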

Question 3:

3. (a) Consider the Best-first search technique and the Breadth-first search technique. Answer the following with respect to these techniques, giving justification for your answer in each case. (i) Which algorithm has some knowledge of the problem space? (ii) Which algorithm has the property that if a wrong path is chosen, it can be corrected afterwards?

Answer:
(i) Best-first search has some knowledge of the problem space: it uses a heuristic evaluation function to decide which vertex on the frontier looks most promising and expands that vertex next, so domain knowledge guides the order in which the vertices are explored. Breadth-first search, by contrast, is an uninformed search that simply explores the vertices level by level from the start vertex; if a vertex has several neighbours it is equally correct to go through them in any order.
(ii) Breadth-first search has the property that a wrong path can be corrected afterwards. Every edge of G can be classified into one of three groups: some edges are in the tree themselves, some connect two vertices at the same level of the tree, and the remaining ones connect two vertices on two adjacent levels. It is not possible for an edge to skip a level. Therefore the breadth-first search tree really is a shortest-path tree starting from its root: every vertex has a path to the root with path length equal to its level, and since no path can skip a level, an earlier wrong choice cannot prevent the shortest path from being found.
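A minimal Python sketch of breadth-first search recording levels (the adjacency-list graph below is an assumed example), showing that each vertex's level equals its shortest-path distance from the root:

from collections import deque

def bfs_levels(graph, root):
    """Breadth-first search; level[v] is the length of the shortest path from root to v."""
    level = {root: 0}
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:                  # visit neighbours level by level
            if v not in level:
                level[v] = level[u] + 1
                parent[v] = u
                queue.append(v)
    return level, parent

g = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(bfs_levels(g, 'A')[0])   # {'A': 0, 'B': 1, 'C': 1, 'D': 2}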

(b) Describe the difference between a Deterministic Finite Automaton and a Non-Deterministic Finite Automaton. In general, which one is expected to have fewer states?

Answer: In a deterministic automaton, every state has exactly one transition for each possible input symbol. In a non-deterministic automaton, an input symbol can lead to one, more than one, or no transition from a given state. This distinction is relevant in practice but not in theory, as there exists an algorithm (the subset construction) which can transform any NFA into a generally larger DFA with identical functionality. In general, the NFA is expected to have fewer states: for example, the language of binary strings whose k-th symbol from the end is 1 is accepted by an NFA with k + 1 states, whereas any equivalent DFA needs at least 2^k states.

(c) Explain the "Turing Thesis" in detail.

Answer: The Turing thesis (also called the Church-Turing thesis) states that any function that can be computed by an effective, mechanical procedure can be computed by a Turing machine; in other words, the Turing machine captures the informal notion of an algorithm. A Turing machine is a theoretical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and it is particularly useful in explaining the functions of a CPU inside a computer. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, shift to the right and change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6", etc. Turing imagines not a mechanism but a person, whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or, as Turing puts it, "in a desultory manner").

Question 4:

4. (a) Write a randomized algorithm to find the i-th order statistic (Select) in a set of n elements.

Answer: A randomized algorithm is one which employs a degree of randomness as part of its logic. In common practice, this means that the machine implementing the algorithm has access to a pseudo-random number generator. The algorithm typically uses the random bits as an auxiliary input to guide its behaviour, in the hope of achieving good performance in the average case; the algorithm's performance is then a random variable determined by the random bits. For example, a randomized contraction algorithm on an undirected graph G works as follows:
  while there are more than 2 nodes in G do
    pick an edge (u, v) at random in G
    contract the edge, while preserving multi-edges
    remove all loops
  output the remaining edges
For randomized selection, the idea is similar to randomized quicksort: pick a random pivot, partition the n elements around it, and then recurse only into the side that contains the i-th smallest element, which gives an expected running time of O(n).
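A minimal Python sketch of randomized selection along these lines (the sample list and the 1-indexed rank convention are assumptions for the example):

import random

def randomized_select(a, i):
    """Return the i-th smallest element (1-indexed) of list a, in O(n) expected time."""
    if len(a) == 1:
        return a[0]
    pivot = random.choice(a)                      # random pivot
    lows = [x for x in a if x < pivot]
    highs = [x for x in a if x > pivot]
    pivots = [x for x in a if x == pivot]
    if i <= len(lows):                            # answer lies among the smaller elements
        return randomized_select(lows, i)
    elif i <= len(lows) + len(pivots):            # the pivot itself is the answer
        return pivot
    else:                                         # recurse into the larger elements
        return randomized_select(highs, i - len(lows) - len(pivots))

print(randomized_select([7, 2, 9, 4, 1, 5], 3))   # 4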

(b) Write a recursive procedure to compute the factorial of a number.

Answer: A classic example of a recursive procedure is the function used to calculate the factorial of an integer.
function factorial is:
  input: integer n such that n >= 0
  output: n × (n − 1) × (n − 2) × … × 1
  1. if n is 0, return 1
  2. otherwise, return n × factorial(n − 1)
end factorial

(c) Design a Turing Machine that increments a binary number which is stored on the input tape.

Answer: A Turing machine is a theoretical device that manipulates symbols contained on a strip of tape. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and it is particularly useful in explaining the functions of a CPU inside a computer. Turing machines are not intended as a practical computing technology, but rather as a thought experiment representing a computing machine; they help computer scientists understand the limits of mechanical computation.
Process of incrementing a binary number: let the binary number be 101, which represents five. Incrementing five gives six, whose binary equivalent is 110; that is, we have to change the trailing 1 to 0 and the 0 before it to 1. Suppose the binary number 101 is stored on the tape as
1 0 1
After the increment the tape holds
1 1 0
The initial configuration of the Turing machine is therefore 1 0 1. The required Turing machine is

M = (Q, Σ, Γ, δ, q0, h), with Q = {q0, q1, q2, q3, h}, Σ = {0, 1} and Γ = {0, 1, #}.
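The original transition diagram is not reproduced here; as a sketch, one possible set of transitions for such an increment machine, simulated in Python, is shown below (the state names and moves are illustrative assumptions, not necessarily the machine intended above):

BLANK = '#'

# delta[(state, symbol)] = (new_state, symbol_to_write, head_move)
delta = {
    # q0: scan right to the end of the number
    ('q0', '0'): ('q0', '0', +1),
    ('q0', '1'): ('q0', '1', +1),
    ('q0', BLANK): ('q1', BLANK, -1),
    # q1: add one, propagating the carry to the left
    ('q1', '1'): ('q1', '0', -1),   # 1 plus carry -> 0, carry continues
    ('q1', '0'): ('h', '1', 0),     # 0 plus carry -> 1, halt
    ('q1', BLANK): ('h', '1', 0),   # carry past the leftmost digit (e.g. 111 -> 1000)
}

def increment(tape_string):
    """Run the machine on the given binary string and return the new tape contents."""
    tape = [BLANK] + list(tape_string) + [BLANK]   # pad with blanks on both sides
    state, head = 'q0', 1                          # head starts on the leftmost digit
    while state != 'h':
        state, symbol, move = delta[(state, tape[head])]
        tape[head] = symbol
        head += move
    return ''.join(tape).strip(BLANK)

print(increment('101'))   # prints 110
print(increment('111'))   # prints 1000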
