
Ques 1: Explain the complexity of an algorithm in detail. Ans.

The complexity of an algorithm is the study of how long a program will take to run, as a function of the size of its input and the number of loop iterations made inside the code. Specifically, the complexity of an algorithm is a measure of how long it takes to complete (give an answer) relative to increasing sizes of input data. Complexity is thus not concerned with how long the algorithm takes to run on X amount of data in absolute terms; rather, it is concerned with the relationship between the runtimes for X amount of data, 2X amounts of data, 10X amounts of data, and so on. While complexity usually refers to execution time, it can also be applied to other resource usage (for example, memory allocation). In all cases, complexity relates the rate of increase in resource consumption to the rate of increase in the size of the data set being worked on.

Worst-case performance analysis and average-case performance analysis have some similarities, but in practice they usually require different tools and approaches. Determining what "average input" means is difficult, and often the average input has properties which make it difficult to characterise mathematically (consider, for instance, algorithms that are designed to operate on strings of text). Even when a sensible description of a particular average case is possible (one which will probably apply only to some uses of the algorithm), it tends to result in equations that are more difficult to analyse.

Worst-case analysis has similar problems: it is typically impossible to determine the exact worst-case scenario. Instead, a scenario is considered that is at least as bad as the worst case. For example, when analysing an algorithm, it may be possible to find the longest possible path through the algorithm (by considering the maximum number of loops, for instance) even if it is not possible to determine the exact input that would generate this path (indeed, such an input may not exist). This gives a safe analysis (the worst case is never underestimated), but a pessimistic one, since no input may actually require this path. Alternatively, a scenario may be considered which is thought to be close to (but not necessarily worse than) the real worst case. This may lead to an optimistic result, meaning that the analysis may actually underestimate the true worst case. In some situations a pessimistic analysis is necessary in order to guarantee safety. Often, however, a pessimistic analysis is too pessimistic, and an analysis that gets closer to the real value but may be optimistic (perhaps with some known low probability of failure) can be a much more practical approach.

When analysing algorithms which usually complete quickly but periodically require a much longer time, amortized analysis can be used to determine the worst-case running time over a (possibly infinite) series of operations. This amortized worst-case cost can be much closer to the average-case cost, while still providing a guaranteed upper limit on the running time.

Example: Linear search on a list of n elements. In the worst case, the search must visit every element once. This happens when the value being searched for is either the last element in the list, or is not in the list. However, on average, assuming the value searched for is in the list and each list element is equally likely to be the value searched for, the search visits only n/2 elements.
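As an illustration, here is a minimal C sketch of linear search (the function name linear_search and the sample data are illustrative, not part of the original answer):

#include <stdio.h>

/* Return the index of key in arr[0..n-1], or -1 if it is absent.
   Worst case: n comparisons (key is last, or not in the list).
   Average case (key present, all positions equally likely): about n/2. */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {4, 8, 15, 16, 23, 42};
    printf("%d\n", linear_search(a, 6, 23)); /* prints 4 */
    printf("%d\n", linear_search(a, 6, 7));  /* prints -1: a worst case */
    return 0;
}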

Insertion sort applied to a list of n elements, assumed to be all different and initially in random order. On average, half the elements in a sorted sub-list A1, ..., Aj are less than the element A(j+1), and half are greater. Therefore the algorithm compares the (j+1)-st element to be inserted, on average, with half of the already sorted sub-list, so tj = j/2. Working out the resulting average-case running time yields a quadratic function of the input size, just like the worst-case running time.
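A short C sketch of insertion sort under the same assumptions (an array of distinct integers); the comments relate the inner loop to the average tj = j/2 above. The function name is illustrative:

#include <stdio.h>

/* Sort arr[0..n-1] in place. Inserting element j+1 compares it against
   the sorted prefix A1..Aj; on random input about half of it on average,
   so tj = j/2, giving a quadratic running time in both the average and
   the worst case. */
void insertion_sort(int arr[], int n)
{
    for (int j = 1; j < n; j++) {
        int key = arr[j];
        int i = j - 1;
        while (i >= 0 && arr[i] > key) { /* shift larger elements right */
            arr[i + 1] = arr[i];
            i--;
        }
        arr[i + 1] = key;                /* drop key into its place */
    }
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7};
    insertion_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);             /* prints 1 2 5 7 9 */
    printf("\n");
    return 0;
}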

Quicksort applied to a list of n elements, again assumed to be all different and initially in random order. This popular sorting algorithm has an average-case performance of O(n log n), which contributes to making it a very fast algorithm in practice. But given a worst-case input, its performance degrades to O(n^2).
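A minimal quicksort sketch in C, using the common Lomuto partition scheme (an assumption; the note above does not fix a partition method):

#include <stdio.h>

/* Lomuto-partition quicksort on arr[lo..hi]. Average case O(n log n);
   the O(n^2) worst case occurs when every partition is maximally
   unbalanced, e.g. already-sorted input with this pivot choice. */
void quicksort(int arr[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int pivot = arr[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        if (arr[j] < pivot) {            /* move smaller elements left */
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
            i++;
        }
    }
    int t = arr[i]; arr[i] = arr[hi]; arr[hi] = t; /* place the pivot */
    quicksort(arr, lo, i - 1);
    quicksort(arr, i + 1, hi);
}

int main(void)
{
    int a[] = {9, 2, 7, 4, 1};
    quicksort(a, 0, 4);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);             /* prints 1 2 4 7 9 */
    printf("\n");
    return 0;
}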

Ques.2: Convert the following infix expression (7-5)*(9/2) to postfix. Ans:

Symbol Scanned    Stack          Expression P
(                 (
7                 (              7
-                 (,-            7
5                 (,-            7,5
)                 empty          7,5,-
*                 *              7,5,-
(                 *,(            7,5,-
9                 *,(            7,5,-,9
/                 *,(,/          7,5,-,9
2                 *,(,/          7,5,-,9,2
)                 *              7,5,-,9,2,/
(end of input)    empty          7,5,-,9,2,/,*

The postfix expression is P = 7 5 - 9 2 / *
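The same conversion can be carried out by a stack-based routine. Below is a sketch in C that handles single-digit operands and the operators + - * / ( ) only; the function and variable names are illustrative, not from the original answer:

#include <stdio.h>

/* Operator precedence: * and / bind tighter than + and -. */
static int prec(char op)
{
    return (op == '*' || op == '/') ? 2 : 1;
}

/* Convert an infix expression with single-digit operands into postfix,
   using an explicit operator stack. */
void infix_to_postfix(const char *in, char *out)
{
    char stack[64];
    int top = -1, k = 0;
    for (int i = 0; in[i] != '\0'; i++) {
        char c = in[i];
        if (c >= '0' && c <= '9') {
            out[k++] = c;                 /* operands go straight to output */
        } else if (c == '(') {
            stack[++top] = c;
        } else if (c == ')') {
            while (top >= 0 && stack[top] != '(')
                out[k++] = stack[top--];  /* pop until the matching ( */
            top--;                        /* discard the ( itself */
        } else {                          /* operator */
            while (top >= 0 && stack[top] != '(' &&
                   prec(stack[top]) >= prec(c))
                out[k++] = stack[top--];
            stack[++top] = c;
        }
    }
    while (top >= 0)
        out[k++] = stack[top--];          /* flush remaining operators */
    out[k] = '\0';
}

int main(void)
{
    char post[64];
    infix_to_postfix("(7-5)*(9/2)", post);
    printf("%s\n", post);                 /* prints 75-92/* */
    return 0;
}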

Ques.3: Write the algorithm to insert an element at the end of a linked list. Ans. The algorithm to insert an element at the end of a linked list:

void insertatend(node **head, int item)
{
    node *ptr, *loc;
    ptr = (node *)malloc(sizeof(node));   /* allocate the new node */
    ptr->info = item;                     /* store the item */
    ptr->next = NULL;                     /* new node will be the last node */
    if (*head == NULL)
        *head = ptr;                      /* empty list: new node becomes head */
    else {
        loc = *head;
        while (loc->next != NULL)         /* walk to the last node */
            loc = loc->next;
        loc->next = ptr;                  /* link the new node at the end */
    }
}
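For completeness, a self-contained runnable version is sketched below. The node structure with info and next fields is assumed from the routine above, and the main driver is illustrative:

#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    int info;
    struct node *next;
} node;

void insertatend(node **head, int item)
{
    node *ptr = (node *)malloc(sizeof(node));
    ptr->info = item;
    ptr->next = NULL;
    if (*head == NULL) {
        *head = ptr;                /* empty list: new node becomes head */
    } else {
        node *loc = *head;
        while (loc->next != NULL)
            loc = loc->next;        /* walk to the last node */
        loc->next = ptr;            /* link the new node at the end */
    }
}

int main(void)
{
    node *head = NULL;
    insertatend(&head, 10);
    insertatend(&head, 20);
    insertatend(&head, 30);
    for (node *p = head; p != NULL; p = p->next)
        printf("%d ", p->info);     /* prints 10 20 30 */
    printf("\n");
    return 0;
}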

Ques 4: Write short notes on: *Binary Search *Overflow & Underflow Ans.

BINARY SEARCH: Binary search is applied to an ordered (sorted) list. In binary search, we first compare the key with the item in the middle position of the array. If there is a match, we can return immediately. If the key is less than the middle item, the item sought must lie in the lower half of the array; if it is greater, the item sought must lie in the upper half. So we repeat the procedure on the lower (or upper) half of the array. The variables BEG and END denote, respectively, the beginning and end locations of the segment under consideration. The algorithm compares ITEM with the middle element DATA[MID] of the segment, where MID is obtained by:

MID = INT((BEG + END) / 2)

OVERFLOW: Sometimes new data are to be inserted into the data structure but there is no available space, i.e. the free-storage list is empty. This situation is usually called overflow. The programmer may handle overflow by printing the message OVERFLOW; in such a case, the program may then be modified by adding space to the underlying arrays. Observe that overflow will occur with our linked lists when AVAIL = NULL and there is an insertion; here AVAIL is the linked list of available space. Before inserting data we must therefore first check for available space:

[Overflow] If AVAIL = NULL, then: Write OVERFLOW

UNDERFLOW: The term underflow refers to the situation where one wants to delete data from a data structure that is empty. The programmer may handle underflow by printing the message UNDERFLOW. Observe that underflow will occur with our linked list when START = NULL and there is a deletion. Before deleting data we must first check whether the list is empty: if it is not empty there is no problem, but if it is empty and we attempt a deletion, underflow occurs. This can be checked using:

[Underflow] If START = NULL, then: Write UNDERFLOW
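A minimal C sketch of the binary search described above, with beg, end and mid playing the roles of BEG, END and MID (the function name and sample data are illustrative):

#include <stdio.h>

/* Return the index of item in the sorted array data[0..n-1], or -1.
   beg and end bound the segment under consideration; the middle
   position is mid = (beg + end) / 2. */
int binary_search(const int data[], int n, int item)
{
    int beg = 0, end = n - 1;
    while (beg <= end) {
        int mid = (beg + end) / 2;
        if (data[mid] == item)
            return mid;           /* match found */
        else if (item < data[mid])
            end = mid - 1;        /* search the lower half */
        else
            beg = mid + 1;        /* search the upper half */
    }
    return -1;                    /* item not present */
}

int main(void)
{
    int a[] = {2, 5, 8, 12, 16, 23, 38};
    printf("%d\n", binary_search(a, 7, 23)); /* prints 5 */
    printf("%d\n", binary_search(a, 7, 4));  /* prints -1 */
    return 0;
}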
