
Chapter 1
INTRODUCTION

1.1. What is an algorithm?

The word "algorithm" derives from the name of the ninth-century Persian mathematician Abu Jafar Mohammed ibn Musa al-Khowarizmi. The word al-Khowarizmi, when written in Latin, became Algorismus, from which "algorithm" is but a small step. Definition: an algorithm is a finite set of rules or instructions for carrying out some calculation, either by hand or, more usually, on a machine, to solve a particular computational task.

An algorithm must also satisfy the following criteria:
a) Input: zero or more quantities are supplied externally.
b) Output: at least one quantity is produced.
c) Definiteness: each instruction is clear and unambiguous.
d) Finiteness: for all valid inputs, the algorithm must terminate after a finite number of steps, and each step must be executable in a finite amount of time.
e) Effectiveness: this criterion is used to compare and choose among two or more algorithms that perform the same task. The effectiveness of an algorithm is generally estimated on the basis of its execution time, but it may sometimes be estimated on both space and time.
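As a minimal sketch of these criteria (the example, the function name gcd, and the test values are our own, not from the text), consider Euclid's greatest-common-divisor algorithm in C:

#include <stdio.h>

/* Euclid's algorithm, annotated with the five criteria above.
 * Input:        two externally supplied quantities a and b (criterion a).
 * Output:       one quantity, their greatest common divisor (criterion b).
 * Definiteness: every instruction is unambiguous (criterion c).
 * Finiteness:   b strictly decreases each iteration, so the loop
 *               terminates after finitely many steps (criterion d).
 * Effectiveness: each step is a basic operation executable in
 *               finite time (criterion e). */
unsigned int gcd(unsigned int a, unsigned int b)
{
    while (b != 0) {
        unsigned int r = a % b;  /* remainder is strictly smaller than b */
        a = b;
        b = r;
    }
    return a;
}

int main(void)
{
    printf("gcd(48, 18) = %u\n", gcd(48, 18));  /* prints 6 */
    return 0;
}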

1.2. Algorithm analysis


Our main concern with algorithms for use on a computer is computational efficiency. We focus primarily on efficiency in running time: we want algorithms that run quickly. But it is also important that algorithms be efficient in their use of other resources, in particular the amount of space (or memory) they use. Efficiency is platform-independent, instance-independent, and of predictive value with respect to increasing input sizes. The following are some of the basic factors in designing and analysing algorithms:

(a) Designing algorithms: this part consists of designing and writing algorithms for solving problems.

(b) Validation of algorithms: algorithm validation checks whether the algorithm computes the correct answer for all possible valid inputs.

(c) Analyzing the algorithm: analysis of algorithms, or performance analysis, involves estimating the computing time and storage space requirements.

(d) Testing a program: write a program, execute it with some sample data, and check whether you get the expected output; otherwise, make the necessary corrections and execute it again. This process, called "debugging", continues until you get the expected output. Even then, it does not mean that the program works correctly for all other legally valid input data sets.

1.2.1 Algorithm designing techniques

The techniques mentioned below are useful for both sequential and parallel algorithms:

(a) Brute force: the brute force approach typically involves trying all possibilities to solve a problem, without any restriction. For example, x^y can be computed by repeatedly multiplying by x:

    result = 1;
    while (y > 0) {
        result = result * x;
        y--;
    }

(b) Backtracking: almost any problem can be cast in some form as a backtracking algorithm. In backtracking, you consider all possible choices to solve a problem and recursively solve sub-problems under the assumption that a given choice is taken. The set of recursive calls generates a tree in which each set of choices is considered in turn. Consequently, if a solution exists, it will eventually be found. Backtracking uses the brute force strategy to solve a problem.

(c) Inductive techniques: the idea behind inductive techniques is to solve one or more smaller problems whose solutions can be used to solve the original problem. These techniques most often use recursion to solve the sub-problems and can be proved correct using (strong) induction. Common techniques in this category include:

Divide-and-conquer: divide a problem of size n into k > 1 subproblems of sizes n1, n2, ..., nk, solve each subproblem recursively, and combine the solutions to get the solution to the original problem. The merge sort algorithm is an example (see the sketch after this list): the given unsorted array is
i. divided into smaller subarrays repeatedly, until each subarray contains only one element,
ii. each subarray is sorted individually, and then
iii. the sorted subarrays are repeatedly merged (conquered), yielding the sorted array as output.

Greedy: for a problem of size n, use some greedy criterion to pull out one element, leaving a problem of size n - 1, then solve the smaller problem.

Contraction: for a problem of size n, generate a significantly smaller (contracted) instance (e.g. of size n/2), solve the smaller instance, and then use the result to solve the original problem. Contraction differs from divide-and-conquer only in that it makes one recursive call instead of several.

Dynamic programming: like divide-and-conquer, dynamic programming divides the problem into smaller subproblems and then combines their solutions to find a solution to the original problem. The difference is that the solutions to subproblems are reused multiple times. It is therefore important to store the solutions for reuse, either using memoization or by building up a table of solutions (see the sketch after this list).

(d) Hill climbing: the basic idea is to start with a poor solution to a problem and then repeatedly apply optimizations to that solution until it becomes optimal or meets some other requirement. Network flow problems are one example.

(e) Collection types: some techniques make heavy use of operations on abstract data types representing collections of values. Abstract collection types covered in this course include sequences, sets, priority queues, graphs, and sparse matrices.

(f) Randomization: randomization is a powerful technique for obtaining simple solutions to problems.

(g) Reducing to another problem: sometimes the easiest approach is simply to reduce the problem to another problem for which known algorithms exist.

(h) Approximation algorithms: approximation algorithms do not compute optimal solutions; instead, they compute solutions that are "good enough." We often use approximation algorithms for problems that are computationally expensive but too significant to give up on altogether. The traveling-salesman problem is one example of a problem usually solved with an approximation algorithm.
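The following is a minimal divide-and-conquer sketch of the merge sort steps i-iii above (the function name merge_sort and the test data are our own choices, not from the text):

#include <stdio.h>
#include <string.h>

/* Divide-and-conquer merge sort, following steps i-iii above:
 * divide the array in half, sort each half recursively,
 * then merge the two sorted halves. */
static void merge_sort(int a[], int n)
{
    if (n <= 1)                       /* base case: one element is sorted */
        return;

    int mid = n / 2;
    merge_sort(a, mid);               /* i./ii. sort the left half  */
    merge_sort(a + mid, n - mid);     /* i./ii. sort the right half */

    int tmp[n];                       /* iii. merge the sorted halves */
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof(int));
}

int main(void)
{
    int a[] = {5, 2, 4, 6, 1, 3};
    merge_sort(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);          /* prints 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}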
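Likewise, here is a minimal dynamic-programming sketch (our own example, not from the text): computing Fibonacci numbers with memoization, so each subproblem's solution is stored and reused rather than recomputed.

#include <stdio.h>

#define N 50

static long long memo[N];   /* memo[i] == 0 means "not computed yet" */

/* Dynamic programming with memoization: each subproblem fib(i)
 * is solved once, stored, and reused on later calls, turning an
 * exponential-time recursion into a linear-time one. */
long long fib(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] == 0)                       /* not cached yet */
        memo[n] = fib(n - 1) + fib(n - 2);  /* solve and store */
    return memo[n];
}

int main(void)
{
    printf("fib(40) = %lld\n", fib(40));    /* prints 102334155 */
    return 0;
}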

1.2.2 Validation
Once an algorithm has been specified, you have to prove its correctness: that is, you have to prove that the algorithm yields the required result for every legitimate input in a finite amount of time. We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:

Initialization: it is true prior to the first iteration of the loop.
Maintenance: if it is true before an iteration of the loop, it remains true before the next iteration.
Termination: when the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

Let us test the correctness of insertion sort.
INSERTION-SORT(A)
1. for j = 2 to A.length
2.     key = A[j]
3.     i = j - 1
4.     while i > 0 and A[i] > key
5.         A[i+1] = A[i]
6.         i = i - 1
7.     A[i+1] = key

Initialization: we start by showing that the loop invariant holds before the first loop iteration, when j = 2. The subarray A[1..j-1] then consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is (trivially) sorted, which shows that the loop invariant holds prior to the first iteration of the loop.

Maintenance: next, we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the for loop works by moving A[j-1], A[j-2], A[j-3], and so on, one position to the right until it finds the proper position for A[j] (lines 4-7), at which point it inserts the value of A[j] (line 7). The subarray A[1..j] then consists of the elements originally in A[1..j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant. A more formal treatment of the second property would require us to state and show a loop invariant for the while loop of lines 4-6. At this point, however, we prefer not to get bogged down in such formalism, so we rely on this informal analysis to show that the second property holds for the outer loop.

Termination: finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n + 1 at that time. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order. Observing that the subarray A[1..n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct.
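As a hedged supplement (not part of the original text), here is the same algorithm as a runnable C program, with comments marking where the loop invariant holds; the test data are our own. Note that C arrays are 0-indexed, unlike the 1-indexed pseudocode above.

#include <stdio.h>

/* Insertion sort on a[0..n-1]. */
void insertion_sort(int a[], int n)
{
    for (int j = 1; j < n; j++) {
        /* Invariant: a[0..j-1] holds the elements originally in
         * a[0..j-1], in sorted order. */
        int key = a[j];
        int i = j - 1;
        while (i >= 0 && a[i] > key) {
            a[i + 1] = a[i];   /* shift larger elements one place right */
            i = i - 1;
        }
        a[i + 1] = key;        /* insert key in its proper position */
    }
    /* Termination: j == n, so a[0..n-1] (the whole array) is sorted. */
}

int main(void)
{
    int a[] = {5, 2, 4, 6, 1, 3};
    insertion_sort(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);   /* prints 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}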

1.3. Computational model

Among the factors mentioned above, our main concern is the analysis of algorithms. Before we can analyze an algorithm, we must have a model of computation on which to implement and reason about it. For analysing an algorithm, we use the computational model called the Random Access Machine (RAM).

RAM
Definition: the RAM is a theoretical model of computer hardware that has the following properties:
a) It has a single processing unit, i.e. instructions are executed one after another, with no concurrent operations.
b) Each simple operation (+, *, -, =, if, call) takes exactly one time step.
c) Loops and subroutines are not considered simple operations. Instead, they are compositions of many single-step operations; it makes no sense for sort to be a single-step operation.
d) The model has as much memory as is needed for the execution of the algorithm.
e) The model takes no notice of whether an item is in cache or on disk.
f) Each memory access takes exactly one time step.

The demerit of the RAM model is that its assumptions, unit time for accessing any memory location and for performing any operation, do not hold on real machines. On a real machine, some types of memory access require more time than others; for example, register addressing takes less time than indirect addressing. Some operations also take more time than others; for example, a multiplication takes more time than an addition.
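To make the unit-cost assumption concrete, here is a small sketch (our own example, not from the text) counting RAM steps for summing an array, where each assignment, comparison, addition, and memory access is charged one time step.

#include <stdio.h>

/* Under the RAM model, each simple operation costs one time step.
 * Summing n elements therefore takes on the order of n steps:
 *   1 step for the initialization s = 0,
 *   n+1 loop-condition tests i < n,
 *   n memory accesses a[i], n additions, and n increments of i,
 * so the running time grows linearly with n. */
int sum(const int a[], int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void)
{
    int a[] = {1, 2, 3, 4};
    printf("sum = %d\n", sum(a, 4));  /* prints sum = 10 */
    return 0;
}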
