By
SAGAR BHASIN (CSE-EVENING)
A2345915043
Mentor
AMITY UNIVERSITY, NOIDA, UTTAR PRADESH

BI-LINEAR SEARCHING
An algorithm is a step-by-step method of solving a problem. It is commonly used for data
processing, calculation and other related computer and mathematical operations.
An algorithm is also used to manipulate data in various ways, such as inserting a new data
item, searching for a particular item or sorting items.
Technically, computers use algorithms to list the detailed instructions for carrying out an
operation. For example, to compute an employee's paycheck, the computer uses an
algorithm. To accomplish this task, appropriate data must be entered into the system. In
terms of efficiency, various algorithms are able to accomplish operations or problem
solving easily and quickly.
Given two algorithms for a task, how do we find out which one is better?
One naive way of doing this is to implement both algorithms and run the two programs
on your computer for different inputs and see which one takes less time. There are many
problems with this approach to the analysis of algorithms.
1) It might be possible that for some inputs the first algorithm performs better than the
second, and for some inputs the second performs better.
2) It might also be possible that for some inputs the first algorithm performs better on one
machine while the second works better on another machine for some other inputs.
Asymptotic Analysis is the big idea that handles the above issues in analyzing algorithms.
In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size
(we don't measure the actual running time). We calculate how the time (or space)
taken by an algorithm increases with the input size.
For example, let us consider the search problem (searching for a given item) in a sorted
array. One way to search is Linear Search (order of growth is linear) and the other way is
Binary Search (order of growth is logarithmic). To understand how Asymptotic Analysis
solves the above-mentioned problems in analyzing algorithms, let us say we run Linear
Search on a fast computer and Binary Search on a slow computer. For small values of
input array size n, the fast computer may take less time. But, after a certain value of input
array size, Binary Search will definitely start taking less time compared to Linear Search
even though Binary Search is being run on a slow machine. The reason is that the order of
growth of Binary Search with respect to input size is logarithmic while the order of growth
of Linear Search is linear. So the machine-dependent constants can always be ignored
after certain values of input size.
Asymptotic Analysis is not perfect, but it is the best way available for analyzing
algorithms. For example, say there are two sorting algorithms that take 1000nLogn
and 2nLogn time respectively on a machine. Both are asymptotically the same (order of
growth is nLogn), so with Asymptotic Analysis we cannot judge which one is better, as
constants are ignored.
Also, in Asymptotic Analysis, we always talk about input sizes larger than a constant
value. It might be possible that those large inputs are never given to your software,
and an algorithm which is asymptotically slower always performs better for your
particular situation. So, you may end up choosing an algorithm that is asymptotically
slower but faster for your software.
There are three cases to analyze an algorithm:
1) Worst Case
2) Average Case
3) Best Case
Worst Case Analysis (Usually Done)
In the worst case analysis, we calculate an upper bound on the running time of an
algorithm. We must know the case that causes the maximum number of operations to be
executed. For Linear Search, the worst case happens when the element to be searched
(x in the code) is not present in the array. When x is not present, the search() function
compares it with all the elements of arr[] one by one. Therefore, the worst case time
complexity of Linear Search would be Θ(n).
Average Case Analysis (Sometimes Done)
In average case analysis, we take all possible inputs and calculate the computing time for
all of the inputs. We sum all the calculated values and divide the sum by the total number
of inputs. We must know (or predict) the distribution of cases. For the Linear Search
problem, let us assume that all cases are uniformly distributed (including the case of x not
being present in the array). So we sum all the cases and divide the sum by (n+1), which
gives an average case time complexity of Θ(n).
Best Case Analysis (Bogus)
In the best case analysis, we calculate a lower bound on the running time of an algorithm.
We must know the case that causes the minimum number of operations to be executed. In
the Linear Search problem, the best case occurs when x is present at the first location.
The number of operations in the best case is constant (not dependent on n), so the time
complexity in the best case would be Θ(1).
Most of the time, we do worst case analysis to analyze algorithms. In worst case analysis,
we guarantee an upper bound on the running time of an algorithm, which is
good information.
The average case analysis is not easy to do in most practical cases and it is
rarely done. In the average case analysis, we must know (or predict) the mathematical
distribution of all possible inputs.
The best case analysis is bogus. Guaranteeing a lower bound on an algorithm doesn't
provide any information, as in the worst case an algorithm may take years to run.
For some algorithms, all the cases are asymptotically the same, i.e., there are no worst
and best cases. For example, Merge Sort does Θ(nLogn) operations in all
cases. Most of the other sorting algorithms have worst and best cases. For example, in
the typical implementation of Quick Sort (where the pivot is chosen as a corner element),
the worst case occurs when the input array is already sorted and the best case occurs when
the pivot elements always divide the array into two halves. For Insertion Sort, the worst
case occurs when the array is reverse sorted and the best case occurs when the array is
already sorted.
The following 3 asymptotic notations are mostly used to represent the time complexity of
algorithms.
1) Θ Notation: The theta notation bounds a function from above and below, so it defines
exact asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low order terms and
ignore leading constants. Dropping lower order terms is always fine because there will
always be an n0 after which Θ(n^3) has higher values than Θ(n^2) irrespective of the
constants involved.
For a given function g(n), Θ(g(n)) denotes the following set of functions:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such
          that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means that if f(n) is theta of g(n), then the value f(n) is always
between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta
also requires that f(n) must be non-negative for values of n greater than n0.
2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it
bounds a function only from above. For example, consider the case of Insertion Sort.
It takes linear time in the best case and quadratic time in the worst case. We can safely
say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers
linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use
two statements for the best and worst cases: the worst case time complexity is Θ(n^2)
and the best case time complexity is Θ(n).
The Big O notation is useful when we only have an upper bound on the time complexity of
an algorithm. Many times we easily find an upper bound by simply looking at the
algorithm.
3) Ω Notation: Just as the Big O notation provides an asymptotic upper bound on a
function, the Ω notation provides an asymptotic lower bound.
Let us consider the same Insertion Sort example here. The time complexity of
Insertion Sort can be written as Ω(n), but it is not very useful information about
Insertion Sort, as we are generally interested in the worst case and sometimes in the
average case.
1) O(1): The time complexity of a function (or set of statements) is considered as O(1) if
it doesn't contain a loop, recursion or a call to any other non-constant time function.
A loop or recursion that runs a constant number of times is also considered as O(1).
2) O(n): The time complexity of a loop is considered as O(n) if the loop variables are
incremented/decremented by a constant amount.
3) O(n^c): The time complexity of nested loops is equal to the number of times the
innermost statement is executed. Loops nested c deep, each running up to n times, have
O(n^c) time complexity. For example, Selection Sort and Insertion Sort have O(n^2)
time complexity.
4) O(Logn): The time complexity of a loop is considered as O(Logn) if the loop variables
are divided/multiplied by a constant amount. Let us see mathematically how it is O(Log n).
The series that we get in such a loop is 1, c, c^2, c^3, …, c^k. If we put k equal to
Logc(n), we get c^(Logc(n)) which is n, so the loop runs O(Log n) times.
There is also a loop pattern with even lower O(LogLogn) complexity, where the loop
variable is reduced by a constant root in every iteration:
   // Here fun is sqrt or cuberoot or any other constant root
   for (int i = n; i > 2; i = fun(i)) {
       // some O(1) expressions
   }
How to calculate time complexity when there are many if, else statements inside
loops?
As discussed here, worst case time complexity is the most useful among best, average
and worst. Therefore we need to consider the worst case. We evaluate the situation when
the values in the if-else conditions cause the maximum number of statements to be
executed. When the code is too complex to consider all if-else cases, we can get an upper
bound by ignoring the if-else and other complex control statements.
Many algorithms are recursive in nature. When we analyze them, we get a recurrence
relation for time complexity: we get the running time on an input of size n as a function
of n and the running time on inputs of smaller sizes. For example, in Merge Sort, to
sort a given array, we divide it into two halves and recursively repeat the process for the
two halves. Finally, we merge the results. The time complexity of Merge Sort can be
written as T(n) = 2T(n/2) + cn. There are many other algorithms like Binary Search that
are naturally recursive.
Substitution Method: We make a guess for the solution and then we use mathematical
induction to prove that the guess is correct. For example, for the Merge Sort recurrence
T(n) = 2T(n/2) + cn we guess the solution as T(n) = O(nLogn), so we need to prove that
T(n) <= cnLogn. We can assume that it is true for values smaller than n.
3) Master Method:
The Master Method is a direct way to get the solution. It works only for recurrences of
the following type: T(n) = aT(n/b) + f(n), where a >= 1, b > 1 and f(n) = Θ(n^c).
Merge Sort: T(n) = 2T(n/2) + Θ(n). It falls in case 2, as c is 1 and Logb(a) is also 1.
So the solution is Θ(nLogn).
Binary Search: T(n) = T(n/2) + Θ(1). It also falls in case 2, as c is 0 and Logb(a) is
also 0. So the solution is Θ(Logn).
Notes:
1) It is not necessary that a recurrence of the form T(n) = aT(n/b) + f(n) can be solved
using the Master Theorem. The given three cases have some gaps between them. For
example, the recurrence T(n) = 2T(n/2) + n/Logn cannot be solved using the master
method.
2) Case 2 can be extended: if f(n) = Θ(n^c Log^k(n)) for some constant k >= 0 and
c = Logb(a), then T(n) = Θ(n^c Log^(k+1)(n)).
AMORTIZED ANALYSIS
Amortized Analysis is used for algorithms where an occasional operation is very
slow, but most of the other operations are faster. In Amortized Analysis, we analyze a
sequence of operations and guarantee a worst case average time which is lower than
the worst case time of a particularly costly operation.
Let us consider an example of simple hash table insertions. How do we decide the table
size? There is a trade-off between space and time: if we make the hash table size big,
search time becomes fast, but the space required becomes large.
Dynamic Table
The solution to this trade-off problem is to use a Dynamic Table (or Array). The idea
is to increase the size of the table whenever it becomes full. Following are the steps to
follow when the table becomes full:
1) Allocate memory for a larger table, typically twice the size of the old table.
2) Copy the contents of the old table to the new table.
3) Free the old table.
If the table has space available, we simply insert the new item in the available space.
What is the time complexity of n insertions using the above scheme?
If we use simple analysis, the worst case cost of an insertion is O(n). Therefore, the worst
case cost of n inserts is n * O(n), which is O(n^2). This analysis gives an upper bound,
but not a tight upper bound for n insertions, as all insertions don't take Θ(n) time.
Using Amortized Analysis, we can prove that the Dynamic Table scheme has O(1)
amortized insertion time, which is a great result used in hashing. Also, the concept of a
dynamic table is used in vectors in C++ and ArrayList in Java.
Notes:
1) The amortized cost of a sequence of operations can be seen as the expenses of a salaried
person. The average monthly expense of the person is less than or equal to the salary,
but the person can spend more money in a particular month by buying a car or
something. In other months, he or she saves money for the expensive month.
2) The above Amortized Analysis done for the Dynamic Array example is called the
Aggregate Method. There are two more powerful ways to do Amortized Analysis,
called the Accounting Method and the Potential Method.
3) The amortized analysis doesn't involve probability. There is also a different
notion of average case running time, where algorithms use randomization to make
them faster and the expected running time is faster than the worst case running time.
Logical AND (&&)
While using && (logical AND), we must put first the condition whose probability of
being false is higher, so that the compiler doesn't need to check the second condition if
the first condition is false.
#include <iostream>
using namespace std;

// isEven() and isPrime() reconstructed here as simple helpers
bool isEven(int n) { return n % 2 == 0; }
bool isPrime(int n) { if (n < 2) return false; for (int d = 2; d * d <= n; d++) if (n % d == 0) return false; return true; }

int main()
{
    int cnt = 0, n = 10;

    // Implementation 1: check even/odd first
    for (int i = 2; i <= n; i++)
        if (!isEven(i) && isPrime(i))
            cnt++;

    cnt = 0;

    // Implementation 2: check primality first
    for (int i = 2; i <= n; i++)
        if (isPrime(i) && !isEven(i))
            cnt++;

    return 0;
}
In implementation 1, we avoid checking even numbers for primality, as a primality test
requires more computation than checking a number for even/odd. The probability of a
number being odd is higher than that of it being prime, so we first check whether the
number is odd. In implementation 2, we check whether a number is prime before checking
whether it is odd, which causes unnecessary computation: all even numbers other than 2
are not prime, but the implementation still checks them for primality.
Logical OR (||)
While using || (logical OR), we must put first the condition whose probability of being
true is higher, so that the compiler doesn't need to check the second condition if the first
condition is true.
#include <iostream>
using namespace std;

// isEven() and isPrime() reconstructed here as simple helpers
bool isEven(int n) { return n % 2 == 0; }
bool isPrime(int n) { if (n < 2) return false; for (int d = 2; d * d <= n; d++) if (n % d == 0) return false; return true; }

int main()
{
    int cnt = 0, n = 10;

    // Implementation 1: check even/odd first
    for (int i = 2; i <= n; i++)
        if (isEven(i) || !isPrime(i))
            cnt++;

    return 0;
}
As described earlier, the probability of a number being even is higher than that of it
being non-prime. The current order of execution of the statements means even numbers
greater than 2 never need to be checked for being non-prime (as they are all
non-primes).
Note: For larger inputs, the order of execution of the statements can affect the overall
execution time significantly.
Properties of Algorithms
Input: An algorithm has input values from a specified set.
Output: For each set of input values, an algorithm produces output values from a
specified set. The output values are the solution to the problem.
Finiteness: An algorithm should produce the desired output after a finite (but perhaps
large) number of steps for any input in the set.
Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the minimum
element (considering ascending order) from the unsorted part and putting it at the
beginning. The algorithm maintains two subarrays in a given array: the subarray which is
already sorted, and the remaining subarray which is unsorted. In every iteration, the
minimum element (considering ascending order) from the unsorted subarray is picked and
moved to the sorted subarray.
The good thing about selection sort is that it never makes more than O(n) swaps and can
be useful when memory writes are a costly operation.
BUBBLE SORT
Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent
elements if they are in the wrong order.
Example:
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), The algorithm compares the first two elements and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm does not swap them.
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is completed.
The algorithm needs one whole pass without any swap to know it is sorted.
Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Worst and Average Case Time Complexity: O(n*n). The worst case occurs when the
array is reverse sorted.
Best Case Time Complexity: O(n). The best case occurs when the array is already sorted.
Auxiliary Space: O(1)
Boundary Cases: Bubble sort takes minimum time (order of n) when elements are
already sorted.
Stable: Yes
Due to its simplicity, bubble sort is often used to introduce the concept of a sorting
algorithm.
In computer graphics, it is popular for its capability to detect a very small error (like a
swap of just two elements) in almost-sorted arrays and fix it with just linear
complexity. For example, it is used in a polygon filling algorithm, where bounding lines
are sorted by their x coordinate at a specific scan line (a line parallel to the
x axis) and with incrementing y their order changes (two elements are swapped)
only at intersections of two lines.
INSERTION SORT
Insertion sort is a simple sorting algorithm that works the way we sort playing cards in
our hands.
Boundary Cases: Insertion sort takes maximum time to sort if elements are sorted in
reverse order, and it takes minimum time (order of n) when elements are already
sorted.
Stable: Yes
Online: Yes
Uses: Insertion sort is used when the number of elements is small. It can also be useful
when the input array is almost sorted and only a few elements are misplaced in a
complete big array.
We can use binary search to reduce the number of comparisons in normal insertion
sort. Binary Insertion Sort uses binary search to find the proper location to insert the
selected item at each iteration. In normal insertion sort, finding the location takes O(i)
comparisons (at the i-th iteration) in the worst case. We can reduce this to O(log i) by
using binary search. The algorithm, as a whole, still has a worst case running time of
O(n^2) because of the series of swaps required for each insertion.
SEARCHING ALGORITHMS
Search algorithms form an important part of many programs. Some searches involve
looking for an entry in a database, such as looking up your record in the IRS database.
Other search algorithms trawl through a virtual space, such as those hunting for the
best chess moves. Although programmers can choose from numerous search types,
they select the algorithm that best matches the size and structure of the database.
The general searching problem is: locate an element x in a list of distinct elements
a1, a2, ..., an, or determine that it is not in the list. The solution to this search problem is
the location of the term in the list that equals x, and is 0 if x is not in the list.
LINEAR SEARCH
Given an array arr[] of n elements, write a function to search for a given element x in
arr[].
Examples:
Input : arr[] = {10, 20, 80, 30, 60, 50, 110, 100, 130, 170}, x = 110
Output : 6 (element x is present at index 6)
Input : arr[] = {10, 20, 80, 30, 60, 50, 110, 100, 130, 170}, x = 175
Output : -1 (element x is not present in arr[])
A simple approach is to do a linear search: start from the leftmost element of arr[] and
one by one compare x with each element of arr[]. If x matches with an element, return
the index; if x doesn't match with any of the elements, return -1.
Linear search is rarely used practically because other search algorithms such as the
binary search algorithm and hash tables allow significantly faster searching.
BINARY SEARCH
Given a sorted array arr[] of n elements, write a function to search for a given element x
in arr[].
A simple approach is to do a linear search; the time complexity of that approach is
O(n). Another approach to perform the same task is using Binary Search.
Binary Search: Search a sorted array by repeatedly dividing the search interval in
half. Begin with an interval covering the whole array. If the value of the search key is
less than the item in the middle of the interval, narrow the interval to the lower half.
Otherwise narrow it to the upper half. Repeatedly check until the value is found or the
interval is empty.
The idea of binary search is to use the information that the array is sorted and reduce
the time complexity to O(Log n).
JUMP SEARCH
Like Binary Search, Jump Search is a searching algorithm for sorted arrays. The basic
idea is to check fewer elements (than linear search) by jumping ahead by fixed steps.
For example, suppose we have an array arr[] of size n and a block (to be jumped) of size
m. Then we search at the indexes arr[0], arr[m], arr[2m], ..., arr[km] and so on. Once
we find the interval (arr[km] < x < arr[(k+1)m]), we perform a linear search operation
within that block to find the element x.
Let's consider the following array: (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377,
610). The length of the array is 16. Jump search will find the value 55 with the
following steps, assuming that the block size to be jumped is 4:
STEP 1: Jump from index 0 to index 4.
STEP 2: Jump from index 4 to index 8.
STEP 3: Jump from index 8 to index 12.
STEP 4: Since the element at index 12 is greater than 55, we jump back a step to
come to index 8.
STEP 5: Perform linear search from index 8 to get the element 55.
In the worst case, we have to do n/m jumps, and if the last checked value is greater than
the element to be searched for, we perform m-1 more comparisons for the linear search.
Therefore, the total number of comparisons in the worst case will be ((n/m) + m-1).
The value of the function ((n/m) + m-1) is minimized when m = √n. Therefore, the best
block size is m = √n, giving a time complexity of O(√n).
#include <bits/stdc++.h>
using namespace std;

int jumpSearch(int arr[], int x, int n)
{
    // Finding the block size to be jumped
    int step = sqrt(n);

    // Finding the block where the element is present (if it is present)
    int prev = 0;
    while (arr[min(step, n) - 1] < x)
    {
        prev = step;
        step += sqrt(n);
        if (prev >= n)
            return -1;
    }

    // Doing a linear search for x in the block beginning with prev
    while (arr[prev] < x)
    {
        prev++;
        // If we reached the next block or end of array,
        // element is not present
        if (prev == min(step, n))
            return -1;
    }

    // If element is found
    if (arr[prev] == x)
        return prev;

    return -1;
}

int main()
{
    int arr[] = { 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55,
                  89, 144, 233, 377, 610 };
    int x = 55;
    int n = sizeof(arr) / sizeof(arr[0]);

    // Find the index of x using Jump Search
    int index = jumpSearch(arr, x, n);

    cout << "\nNumber " << x << " is at index " << index;
    return 0;
}
INTERPOLATION SEARCH
Given a sorted array of n uniformly distributed values arr[], write a function to search
for a particular element x in the array.
Linear Search finds the element in O(n) time, Jump Search takes O(√n) time and Binary
Search takes O(Log n) time.
Interpolation Search is an improvement over Binary Search for instances where
the values in a sorted array are uniformly distributed. Binary Search always goes to
the middle element to check. Interpolation search, on the other hand, may go to
different locations according to the value of the key being searched. For example, if
the value of the key is closer to the last element, interpolation search is likely to start
the search toward the end side.
Algorithm
The probe position formula estimates where the key would lie if values are uniformly
distributed:
   pos = lo + (((x - arr[lo]) * (hi - lo)) / (arr[hi] - arr[lo]))
The rest of the Interpolation Search algorithm is the same as Binary Search except the
above partition logic.
Step 1: In a loop, calculate the value of "pos" using the probe position formula.
Step 2: If it is a match, return the index of the item and exit.
Step 3: If the item is less than arr[pos], calculate the probe position of the left
sub-array; otherwise calculate it in the right sub-array.
Step 4: Repeat until a match is found or the sub-array reduces to zero.
#include <stdio.h>

// If x is present in arr[0..n-1], then returns its index, else returns -1
int interpolationSearch(int arr[], int n, int x)
{
    int lo = 0, hi = (n - 1);

    // Since the array is sorted, an element present in the array
    // must be in the range defined by the corner values
    while (lo <= hi && x >= arr[lo] && x <= arr[hi])
    {
        // Probing the position, keeping uniform distribution in mind
        int pos = lo + (((double)(hi - lo) /
                  (arr[hi] - arr[lo])) * (x - arr[lo]));

        // Condition of target found
        if (arr[pos] == x)
            return pos;

        // If x is larger, x is in the right sub-array
        if (arr[pos] < x)
            lo = pos + 1;
        // If x is smaller, x is in the left sub-array
        else
            hi = pos - 1;
    }
    return -1;
}

// Driver Code
int main()
{
    // Array of items on which the search will be conducted.
    int arr[] = {10, 12, 13, 16, 18, 19, 20, 21, 22, 23,
                 24, 33, 35, 42, 47};
    int n = sizeof(arr)/sizeof(arr[0]);
    int x = 18; // element to be searched
    int index = interpolationSearch(arr, n, x);

    if (index != -1)
        printf("Element found at index %d", index);
    else
        printf("Element not found.");
    return 0;
}
BI-LINEAR SEARCH
In our day-to-day life, sometimes a situation arises when we need to find some item
from a collection of things. Searching is the process that works behind this. It is the
operation of accessing records; the access can be sequential or random. There are many
searching algorithms, such as Linear, Binary, Jump and Interpolation Search. Both
Interpolation and Binary Search work randomly and execute faster than Linear Search,
the algorithm that runs sequentially. Hence, they are highly efficient. But, for both
algorithms, the list of items should be in sorted order. So, our main objective is to
overcome these drawbacks, and thus the Bi-linear Search exists. This technique executes
sequentially, whether the array is sorted or unsorted.
The Bi-linear Search works from both ends of the array. In the first iteration, both the
first and last elements are compared simultaneously with the search key. If the data is
found, the searching method stops. Otherwise, the second element and the second to last
element are compared simultaneously, and so on. Thus, the number of steps/comparisons
required is n/2, i.e. half of what is required by Linear Search.
Let us consider an array of eight elements in which the item 7 is stored at the seventh
position. In the first iteration, the first and last elements are compared with 7; in the
second iteration, the second and second-to-last elements are compared, the element 7 is
found, and the Bi-linear Search stops. Thus, only two executions are required. But if the
same thing were done by Linear Search, the latter would have taken seven steps. The
Bi-linear Search is the best among all the others in this regard.
PSEUDO CODE
Let us consider an array a of size n. Let i and j be the loop variables and item be the
element to be searched. The pseudo code of the Bi-linear Search is as follows:
Step 1: Input the array a of size n and the element item to be searched.
Step 2: Initialize i = 0 and j = n-1.
Step 3: Compare a[i] and a[j] with item.
Step 4: Increment i and decrement j while i and j have not crossed n/2.
Step 5: If equal then break the loop and go to step 6, otherwise go to step 3.
Step 6: If j is equal to ((n/2)-1) then print "Item Not Found" and go to step 9.
Step 7: Otherwise, if a[i] equals item set x = i+1, else set x = j+1.
Step 8: Print "Item Found at position x".
Step 9: End
import java.io.*;
class search
{
    public static void main(String args[]) throws IOException
    {
        BufferedReader buf = new BufferedReader(new InputStreamReader(System.in));
        int a[] = new int[100];
        int n = Integer.parseInt(buf.readLine());
        int i, j;
        for (i = 0; i < n; i++)
            a[i] = Integer.parseInt(buf.readLine());
        for (i = 0; i < n; i++)
            System.out.print(a[i] + " ");
        int item, x;
        item = Integer.parseInt(buf.readLine());
        // compare from both ends simultaneously
        for (i = 0, j = n - 1; i <= (n / 2) && j >= (n / 2); i++, j--)
        {
            if ((a[i] == item) || (a[j] == item))
                break;
        }
        if (j == ((n / 2) - 1))
            System.out.println("Item Not Found");
        else
        {
            if (a[i] == item)
                x = i + 1;
            else
                x = j + 1;
            System.out.println("Item Found at position " + x);
        }
    }
}
TIME COMPLEXITY
The time complexity of the Bi-linear Search depends upon the number of
comparisons made. Let the number of elements of an array be n [1, 2, 3, 4]. In a
single iteration, as the value of i increases, the value of j decreases simultaneously. The
algorithm makes only one comparison for the first and last positions of the array. It
makes two comparisons for the second and second to last positions of the array. Thus,
for two positions of the array only one comparison is made, and for four positions of
the array two comparisons are made, and so on. So, if there are n elements,
the loop iterates n/2 times. Hence, the average case time complexity of
Bi-linear Search is O(n). If the item is present in the first or last position of the array,
then only one comparison is made: the first execution of the loop finds the item and
the search process stops. Hence, the best case time complexity of Bi-linear Search is
O(1), while the worst case remains O(n).
COMPARISON ANALYSIS
The working efficiency of Bi-linear Search is very high compared to other searching
algorithms. If we make a study of the pseudo codes of the basic searching algorithms,
we see that the average case behavior of both Binary and Interpolation Search is the
best [7,9]. Jump Search is better than Binary Search when the array is very large and
the search key lies close to the starting element. Binary Search directly checks the
middle of the array and then accesses the key from the middle, which takes a lot of time,
whereas Jump Search does not do the same [10]. But Binary Search, Interpolation
Search and Jump Search are applicable only when the array is sorted [11]. If the list is
not sorted, then it has to be sorted first before the searching operation is performed.
Thus, the above three algorithms consume more time [1,5]. Whereas, Linear and
Bi-linear Search algorithms run for both sorted and unsorted arrays. We have made a
brief comparison study of Bi-linear Search with other searching algorithms like Linear
Search.
Linear Search performs in sequential order [5, 6]. Thus, if the number of elements is very
large then the execution time increases, and to search the last element, n
comparisons are made [2, 8]. As a result, its worst case time complexity is O(n). The same
happens in the case of Interpolation Search [12]. Jump Search also uses the linear
search mechanism in its sub-lists. Thus, if an element is present second to the last
element, then Jump Search makes an initial comparison of n. In this case,
Binary Search is advantageous [4]. It requires random access of the data and has
time complexity O(log n) [5]. But Binary Search requires more space in the stack due to
its recursive calls and needs sorted data [3,6]. The Bi-linear Search beats Binary Search
with a time complexity of O(1) when the searched item is present in the last position of
the array. Therefore, the Bi-linear Search's execution time is very fast and it is highly
efficient for searching items.
CONCLUSION
In this paper, we have established a new searching technique (i.e. the Bi-linear Search)
and made a brief comparison analysis with different searching algorithms. The paper has
shown that the Bi-linear Search takes less time to search an item from a large collection
of records than a normal Linear Search. Even in a sorted list, it sometimes beats the
Binary Search, especially when the element is present in the last position of the array.
Necessity is the mother of invention; ideas should be based on the needful demand. The
Bi-linear Search is basically a step towards the development of more efficient searching
techniques.