
A1 a) BIG-O NOTATION (O) Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that f(x) <= C g(x) whenever x > k. For example, when we say the running time T(n) of some program is O(n^2), read "big oh of n squared" or just "oh of n squared", we mean that there are positive constants c and n0 such that for all n greater than or equal to n0 we have T(n) <= c n^2. A few examples help to visualize the effect of this definition.

BIG-OMEGA NOTATION (Ω) Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Ω(g(x)) if there are constants C and k such that f(x) >= C g(x) whenever x > k.

BIG-THETA NOTATION (Θ) For functions f and g as in the two definitions above, we say that f(x) is Θ(g(x)) if there are constants C1, C2 and k such that 0 <= C1 g(x) <= f(x) <= C2 g(x) whenever x > k. Since Θ(g(x)) bounds a function from both the upper and lower sides, it is also called a tight bound for the function f(x).

b) A linear search is the simplest possible search algorithm: you simply iterate through a list of items until you find the item you are looking for. Clearly, the time it takes to find an item will be proportional to the number of items in the list (say n). If an item occurs just once in the list then, on average, (n + 1)/2 items will need to be examined before a match is found. For example, if a list of ints contains the 5 numbers 1, 2, 3, 4 and 5 and you are looking for a random number in this range then, on average, you will need to examine (5 + 1)/2 = 3 numbers until you find the one you are looking for. In 'big O' notation, a linear search algorithm is O(n). c) In computer science, a binary search tree (BST), sometimes also called an ordered or sorted binary tree, is a node-based binary tree data structure which has the following properties:[1]

The left subtree of a node contains only nodes with keys less than the node's key.

The right subtree of a node contains only nodes with keys greater than the node's key.

The left and right subtree each must also be a binary search tree.

There must be no duplicate nodes.
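These ordering properties are what make searching a BST efficient: at every node, half of the remaining tree can be discarded. A minimal C sketch (the `node` struct with an integer `key` is an assumption; a real node would hold the full record):

```c
#include <stddef.h>

/* Hypothetical node layout: the record is reduced to its integer key. */
struct node {
    int key;
    struct node *left, *right;
};

/* Relies on the BST properties above: keys smaller than the current
   node's key are always in the left subtree, larger keys in the right. */
struct node *bst_search(struct node *root, int key) {
    while (root != NULL) {
        if (key == root->key)
            return root;
        root = (key < root->key) ? root->left : root->right;
    }
    return NULL;   /* key not present */
}
```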

Generally, the information represented by each node is a record rather than a single data element. However, for sequencing purposes, nodes are compared according to their keys rather than any part of their associated records. The major advantage of binary search trees over other data structures is that the related sorting algorithms and search algorithms such as in-order traversal can be very efficient.

d) To analyze a graph it is important to look at the degree of a vertex. One way to find the degree is to count the number of edges which have that vertex as an endpoint. An easy way to do this is to draw a circle around the vertex and count the number of edges that cross the circle. To find the degree of a graph, work out all of the vertex degrees. The degree of the graph is its largest vertex degree.

e) Postorder traversal: To traverse a binary tree in postorder, the following operations are carried out: (i) traverse the left subtree in postorder, (ii) traverse the right subtree in postorder, and (iii) visit the root.

f) An AVL tree (Adelson-Velskii and Landis' tree, named after the inventors) is a self-balancing binary search tree. It was the first such data structure to be invented.[1] In an AVL tree, the heights of the two child subtrees of any node differ by at most one; if at any time they differ by more than one, rebalancing is done to restore this property. Lookup, insertion, and deletion all take O(log n) time in both the average and worst cases, where n is the number of nodes in the tree prior to the operation. Insertions and deletions may require the tree to be rebalanced by one or more tree rotations. The AVL tree is named after its two Soviet inventors, G. M. Adelson-Velskii and E. M. Landis, who published it in their 1962 paper "An algorithm for the organization of information".[2]

g) A B-tree is a tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree is a generalization of a binary search tree in that a node can have more than two children. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data. It is commonly used in databases and file systems.

h) A binary tree is a tree in which each node has at most two child nodes (denoted as the left child and the right child). Nodes with children are referred to as parent nodes, and child nodes may contain references to their parents. Following this convention, you can define ancestral relationships in a tree: that is, for example, one node can be an ancestor of another node, a descendant, or a great-grandchild of another node. The root node is the ancestor of all nodes of the tree, and any node in the tree can be reached from the root node. A tree which does not have any node other than the root node is called a null tree. In a binary tree, the degree of every node can be at most two. A tree with n nodes has exactly n-1 branches. i) In a standard queue data structure a re-buffering problem occurs for each dequeue operation. This problem is solved by joining the front and rear ends of the queue to make it a circular queue. A circular queue is a linear data structure. It follows the FIFO principle.

In circular queue the last node is connected back to the first node to make a circle.

A circular queue follows the First In First Out principle: elements are added at the rear end and deleted at the front end of the queue.

Both the front and the rear pointers point to the beginning of the array. It is also called a ring buffer. Items can be inserted into and deleted from the queue in O(1) time.

j) For any two-dimensional m*n array A, the computer keeps track of Base(A), the address of the first element A[0,0] of A, and computes the address ADD(A[i,j]) of A[i,j] using the formulas:

Column-major order: ADD(A[i,j]) = Base(A) + w[m(j - l2) + (i - l1)]
Row-major order:    ADD(A[i,j]) = Base(A) + w[n(i - l1) + (j - l2)]

Again, w denotes the number of words per memory location for the array A. Note that the formulas are linear in i and j. The C language uses row-major ordering and, with lower bounds l1 = l2 = 0, the formula:

ADD(A[i,j]) = Base(A) + w(n * i + j)

A2

A3 The restrictions on a queue imply that the first element which is inserted into the queue will be the first one to be removed. Thus A is the first letter to be removed, and queues are known as First In First Out (FIFO) lists.

Addition into a queue:

procedure addq (item : items);
{add item to the queue q}
begin
  if rear = n then queuefull
  else begin
    rear := rear + 1;
    q[rear] := item;
  end;
end; {of addq}

Deletion in a queue:

procedure deleteq (var item : items);
{delete from the front of q and put into item}
begin
  if front = rear then queueempty
  else begin
    front := front + 1;
    item := q[front];
  end;
end; {of deleteq}

A4

A5

void insert_at_position(node *head)
{
    int flag = 1, ele, num;
    node *temp;

    printf("\nEnter the element after which you want to insert: ");
    scanf("%d", &num);
    while (head != NULL) {
        if (head->info == num) {
            flag = 2;
            break;
        }
        head = head->next;
    }
    if (flag == 1) {
        printf("\nNo. is not in the list.\n");
        return;
    } else {
        printf("\nEnter the element to insert: ");
        scanf("%d", &ele);
        temp = (node *)malloc(sizeof(node));
        temp->next = head->next;
        temp->prev = head;
        temp->info = ele;
        if (temp->next != NULL)      /* guard: inserting after the last node */
            temp->next->prev = temp;
        head->next = temp;
    }
}

A6 In mathematics and computer science, an algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An

algorithm is an effective method expressed as a finite list[1] of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. By following the steps below we can write a simple algorithm:

* Understand the problem.
* Decide between exact and approximate solving.
* Design an algorithm.
* Prove correctness.
* Analyze the algorithm.
* Code the algorithm.

In computer science, the analysis of algorithms is the determination of the amount of resources (such as time and storage) necessary to execute them. Most algorithms are designed to work with inputs of arbitrary length. Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity). Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms. In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big-O notation, Big-Omega notation and Big-Theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the length of the list being searched, or in O(log(n)), colloquially "in logarithmic time".
Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant. Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g., a Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2 n + 1 time units are needed to return an answer.

A7 TRAVERSALS OF A BINARY TREE

A traversal of a graph is to visit each node exactly once. In this section we shall discuss traversal of a binary tree. It is useful in many applications, for example in searching for particular nodes. Compilers commonly build binary trees in the process of scanning, parsing, generating code and evaluating arithmetic expressions. Let T be a binary tree. There are a number of different ways to proceed. The methods differ primarily in the order in which they visit the nodes. The four different traversals of T are in order, post order, preorder and level-by-level traversal.

IN ORDER TRAVERSAL

It follows the general strategy of Left-Root-Right. In this traversal, if T is not empty, we first traverse (in order) the left sub tree; then visit the root node of T, and then traverse (in order) the right sub tree. Consider the binary tree given below.

Expression Tree

This is an example of an expression tree for (A + B*C) - (D*E):

A binary tree can be used to represent arithmetic expressions if the node values can be either operators or operand values, such that each operator node has exactly two branches and each operand node has no branches; such trees are called expression trees.

The tree T, at the start, is rooted at '-'. Since left(T) is not empty, current T becomes rooted at '+'. Since left(T) is not empty, current T becomes rooted at 'A'. Since left(T) is empty, we visit the root, i.e. A. We move back and visit T's root, i.e. '+'. We now perform an in order traversal of right(T). Current T becomes rooted at '*'. Since left(T) is not empty, current T becomes rooted at 'B'. Since left(T) is empty, we visit its root, i.e. B; we check right(T), which is empty, so we move back to the parent tree. We visit its root, i.e. '*'. Now an in order traversal of right(T) is performed, which gives us 'C'. We move back and visit T's root, i.e. '-', and perform an in order traversal of right(T), which gives us 'D', '*' and 'E'. Therefore, the complete listing is A + B * C - D * E.

You may note that the expression is in infix notation. The in order traversal produces a (parenthesized) left expression, then prints out the operator at the root and then a (parenthesized) right expression. This method of traversal is probably the most widely used. The following is a Pascal procedure for in order traversal of a binary tree:

procedure INORDER (TREE : BINTREE);
begin
  if TREE <> nil then
  begin
    INORDER(TREE^.LEFT);
    writeln(TREE^.DATA);
    INORDER(TREE^.RIGHT);
  end
end;

Figure 10 gives a trace of the in order traversal of the tree given in Figure 9, expanding, for each node, its (possibly empty) left and right sub trees. Output:

A + B * C - D * E

Figure 10: Trace of in order traversal of tree given in Figure 9

Please notice that this procedure, like the definition of traversal, is recursive.

POST ORDER TRAVERSAL

In this traversal we first traverse left(T) (in post order); then traverse right(T) (in post order); and finally visit the root. It is a Left-Right-Root strategy, i.e.:

Traverse the left sub tree in post order.
Traverse the right sub tree in post order.
Visit the root.

For example, a post order traversal of the tree given in Figure 9 would be

ABC*+DE*-

You may notice that it is the postfix notation of the expression (A + (B*C)) - (D*E). We leave the details of the post order traversal method as an exercise. You may also implement it using Pascal or C language.

PREORDER TRAVERSAL

In this traversal, we visit the root first; then recursively perform a preorder traversal of left(T); followed by a preorder traversal of right(T). It is a Root-Left-Right strategy, i.e.:

Visit the root.
Traverse the left sub tree in preorder.
Traverse the right sub tree in preorder.

A preorder traversal of the tree given in Figure 9 would yield -+A*BC*DE. It is the prefix notation of the expression (A + (B*C)) - (D*E). Preorder traversal is employed in Depth First Search (see Unit 4, Block 4). For example, suppose we make a depth first search of the binary tree given in Figure 11.

Figure 12: Binary tree example for depth first search

We shall visit a node, then go as deeply as possible to the left before searching to its right. The order in which the nodes would be visited is A B D E C F H I J K G.

A8 a) SPANNING TREES

1) A tree is a connected graph which contains no cycles.
2) The concept of a spanning tree of a graph originated with optimization problems in communication networks.
   a) A communication network can be represented by a graph in which the vertices are the stations and the edges are the communication lines between stations.
   b) A subnetwork that connects all the stations without any redundancy will be a tree.
3) A spanning tree for a connected graph is a tree whose vertex set is the same as the vertex set of the given graph, and whose edge set is a subset of the edge set of the given graph.
4) Any connected graph will have a spanning tree.

b) Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking. Example: for the following graph,

a depth-first search starting at A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory. Performing the same search without remembering previously visited nodes results in visiting nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G.

c) In graph theory, breadth-first search (BFS) is a strategy for searching in a graph when search is limited to essentially two operations: (a) visit and inspect a node of a graph; (b) gain access to visit the nodes that neighbor the currently visited node. The BFS begins at a root node and inspects all the neighboring nodes. Then for each of those neighbor nodes in turn, it inspects their neighbor nodes which were unvisited, and so on. Compare BFS with the equivalent, but more memory-efficient, iterative deepening depth-first search and contrast with depth-first search. The algorithm uses a queue data structure to store intermediate results as it traverses the graph, as follows:

1. Enqueue the root node.
2. Dequeue a node and examine it.

If the element sought is found in this node, quit the search and return a result. Otherwise enqueue any successors (the direct child nodes) that have not yet been discovered.

3. If the queue is empty, every node in the graph has been examined; quit the search and return "not found".
4. If the queue is not empty, repeat from step 2.

A9

/* Function to get the count of leaf nodes in a binary tree */
unsigned int getLeafCount(struct node* node)
{
    if (node == NULL)
        return 0;
    if (node->left == NULL && node->right == NULL)
        return 1;
    else
        return getLeafCount(node->left) + getLeafCount(node->right);
}

int FullNodes(TreeNode* root)
{
    if (root == NULL)                                 /* tree is empty */
        return 0;
    if (root->left == NULL && root->right == NULL)    /* leaf node */
        return 0;
    if (root->left != NULL && root->right != NULL)    /* full node */
        return 1 + FullNodes(root->left) + FullNodes(root->right);
    if (root->left == NULL)                           /* node with no left child */
        return FullNodes(root->right);
    return FullNodes(root->left);                     /* node with no right child */
}
