GROWTH OF FUNCTIONS
INTRODUCTION
There are two main methods of measuring the running time of an algorithm.
The first is a mathematical analysis of the algorithm, called asymptotic
analysis, which captures the gross efficiency of the algorithm over all
possible inputs but does not give exact execution times.
The second is an empirical analysis of an actual implementation, which
determines exact running times for a sample of specific inputs, but
cannot predict the performance of the algorithm on all inputs.
We are interested in the behavior for large n because the main purpose
of designing efficient algorithms is to be able to solve problems on
large instances.
For large instances of size n, an algorithm whose running time has a
smaller growth rate than the running time of another algorithm will be
superior.
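As a small illustrative sketch (the cost models 5n + 20 and n^2 + 3 are assumed, not taken from the text), counting the basic steps of two hypothetical algorithms shows why the smaller growth rate wins for large n even if it loses for small n:

```python
# Hypothetical step-count models (assumed for illustration):
# a linear-time algorithm with larger constants, and a
# quadratic-time algorithm with smaller constants.
def linear_steps(n):
    return 5 * n + 20      # grows like n

def quadratic_steps(n):
    return n * n + 3       # grows like n^2

for n in [1, 10, 100, 10_000]:
    print(n, linear_steps(n), quadratic_steps(n))
# For small n the quadratic algorithm can be cheaper, but for
# large n the algorithm with the smaller growth rate wins.
```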
ORDERS OF GROWTH
When two algorithms are compared with respect to their behavior on
large input sizes, a useful measure is the order of growth.
The order of growth can be estimated by taking the dominant term of
the running time of the algorithm.
Here we only specify how the running time increases as the input
grows, rather than the exact relation between an algorithm's input and
its running time.
For example, if the running time of an algorithm is 9n^2 for input
size n, then we say that its running time grows as n^2.
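To see why only the dominant term matters, consider a hypothetical running time with a lower-order term (the model 9n^2 + 500n is an assumed example): the ratio to n^2 approaches the constant 9 as n grows, so the lower-order term becomes negligible.

```python
def T(n):
    # Assumed running-time model: the 9n^2 term dominates for large n.
    return 9 * n * n + 500 * n

for n in [10, 1000, 100_000]:
    print(n, T(n) / (n * n))   # ratio approaches the constant 9
```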
Example 1.
T(n) = an + b
T(kn) = k(an) + b
Hence we can say that the running time T(n) has a linear order of growth:
multiplying the input size by k multiplies the dominant term by k.
Example 2.
T(n) = an^2 + b
T(kn) = k^2(an^2) + b
Hence we can say that the running time T(n) has a quadratic order of
growth.
Example 3.
T(n) = a lg n + b
T(kn) = a lg n + a lg k + b
Hence the running time of the algorithm has a logarithmic order of growth:
multiplying the input size by k only adds the constant a lg k.
Example 4.
T(n) = a(3^n) + b
T(kn) = a(3^n)^k + b
Hence the running time has an exponential order of growth: multiplying the
input size by k raises the dominant term to the k-th power.
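The four examples can be spot-checked numerically. The sketch below (constants a = 7, b = 3 are assumed) measures how much each model grows when the input size doubles:

```python
import math

def ratio(T, n):
    """How much the running time grows when the input size doubles."""
    return T(2 * n) / T(n)

print(ratio(lambda n: 7 * n + 3, 1000))             # linear: ~2
print(ratio(lambda n: 7 * n * n + 3, 1000))         # quadratic: ~4
print(ratio(lambda n: 7 * math.log2(n) + 3, 1000))  # logarithmic: ~1.1
print(ratio(lambda n: 3.0 ** n, 20))                # exponential: 3^20
```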
The growth rates of some well-known functions of the input size n are
given in the table below.

Function   Name
1          constant
lg n       logarithmic
n          linear
n lg n     linearithmic
n^2        quadratic
n^3        cubic
2^n        exponential
n!         factorial
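As a rough numerical illustration, the sketch below evaluates several of the common growth functions for a couple of input sizes (the choice of functions and sizes is for illustration only):

```python
import math

# Common growth functions, from slowest- to fastest-growing.
funcs = [
    ("1",      lambda n: 1),
    ("lg n",   lambda n: math.log2(n)),
    ("n",      lambda n: n),
    ("n lg n", lambda n: n * math.log2(n)),
    ("n^2",    lambda n: n ** 2),
    ("n^3",    lambda n: n ** 3),
    ("2^n",    lambda n: 2 ** n),
]
for name, f in funcs:
    print(f"{name:7s}", f(10), f(20))
```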
ASYMPTOTIC NOTATION
A tool for analyzing the time and space usage of algorithms.
Assumes the input size is a variable, say n, and gives time and space
bounds as a function of n.
Ignores multiplicative and additive constants.
Compares the growth rates of two expressions for running time.
Concerned only with the rate of growth.
Theta notation
Definition: Let f(n) and g(n) be functions that map positive integers to
positive real numbers. We write f(n) = Θ(g(n)) (or f(n) ∈ Θ(g(n))) when
f(n) belongs to the set
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that
0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0 }.
The Θ-notation asymptotically bounds a function from above and
below.
A function f(n) belongs to the set Θ(g(n)) if there exist positive
constants c1, c2 such that it can be sandwiched between c1 g(n) and
c2 g(n) for sufficiently large n.
The fact that f(n) ∈ Θ(g(n)) means that f(n) and g(n) have the same
order of growth (i.e., they are asymptotically equivalent).
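The sandwich can be spot-checked numerically for a concrete case. In this assumed example, f(n) = 9n^2 + 5n is sandwiched between 9n^2 and 10n^2 once n ≥ 5 (a finite check over a range, not a proof):

```python
# Assumed example: f(n) = 9n^2 + 5n, g(n) = n^2,
# witness constants c1 = 9, c2 = 10, threshold n0 = 5.
def f(n): return 9 * n * n + 5 * n
def g(n): return n * n

c1, c2, n0 = 9, 10, 5
ok = all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print(ok)  # the sandwich holds on the checked range
```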
Big Oh notation:
Definition: Let f(n) and g(n) be functions that map positive integers to
positive real numbers. We write f(n) = O(g(n)) (or f(n) ∈ O(g(n))),
pronounced "big-oh of g of n", when f(n) belongs to the set
O(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c g(n) for all n ≥ n0 }.
Here the function g(n) is an asymptotic upper bound for the function
f(n).
When we have only an asymptotic upper bound, we use O-notation.
Big oh notation provides an upper bound for the function. We write f
(n) = O (g (n)) to indicate that a function f (n) is a member of the set
O (g (n)).
The upper bound, or maximum level of order, is termed the worst-case
efficiency.
O(g(n)) is the set of all functions whose rate of growth is the same as
or lower than that of g(n).
Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a
stronger notion than O-notation.
For a given function g(n) we have Θ(g(n)) ⊆ O(g(n)).
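A membership claim like f(n) = O(g(n)) can be spot-checked by testing the defining inequality on a finite range (the functions and witness constants below are assumed examples; a finite check is evidence, not a proof):

```python
def is_big_oh(f, g, c, n0, n_max=10_000):
    """Check f(n) <= c*g(n) for all n in [n0, n_max).
    A finite spot-check of the O-definition, not a proof."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

f = lambda n: 3 * n + 10
print(is_big_oh(f, lambda n: n, c=4, n0=10))      # f(n) = O(n)
print(is_big_oh(f, lambda n: n * n, c=1, n0=6))   # also f(n) = O(n^2)
```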
Omega notation:
Definition: Let f(n) and g(n) be functions that map positive integers to
positive real numbers. We write f(n) = Ω(g(n)) (or f(n) ∈ Ω(g(n))),
pronounced "big-omega of g of n", when f(n) belongs to the set
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ c g(n) ≤ f(n) for all n ≥ n0 }.
Here the function g (n) is an asymptotic lower bound for the function
f (n).
Ω-notation provides an asymptotic lower bound.
The lower bound, or minimum level of order, is termed the best-case
efficiency.
Ω(g(n)) is the set of all functions whose rate of growth is the same as
or higher than that of g(n).
For two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if
f(n) = O(g(n)) and f(n) = Ω(g(n)).
For a given function g(n) we have Θ(g(n)) ⊆ Ω(g(n)).
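The "Θ if and only if O and Ω" characterization can be spot-checked for a concrete case (f(n) = 2n^2 + n and the witness constants 2 and 3 are assumed examples; the check is over a finite range):

```python
def holds(pred, n0=1, n_max=5_000):
    """Spot-check a predicate for all n in [n0, n_max)."""
    return all(pred(n) for n in range(n0, n_max))

f = lambda n: 2 * n * n + n
g = lambda n: n * n

upper = holds(lambda n: f(n) <= 3 * g(n))  # O(n^2) witness c = 3
lower = holds(lambda n: f(n) >= 2 * g(n))  # Omega(n^2) witness c = 2
print(upper and lower)  # both hold, consistent with f(n) = Theta(n^2)
```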
RELATIONSHIPS BETWEEN O, o, Ω, ω, AND Θ NOTATIONS
o(g(n)) is an upper bound that is not asymptotically tight: f(n) = o(g(n))
means that for every positive constant c, 0 ≤ f(n) < c g(n) for all
sufficiently large n. Similarly, ω(g(n)) is a lower bound that is not
asymptotically tight: f(n) = ω(g(n)) means that for every positive
constant c, 0 ≤ c g(n) < f(n) for all sufficiently large n. Thus o is to
O as ω is to Ω.
ASYMPTOTIC NOTATION PROPERTIES
Let f(n) and g(n) be asymptotically positive functions; some relational
properties of real numbers also apply to asymptotic comparisons.
These asymptotic notation properties are given below.
Reflexivity
Symmetry
Transitivity
Transpose symmetry
Reflexivity: f(n) = Θ(f(n)); f(n) = O(f(n)); f(n) = Ω(f(n)).
Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
Transitivity: f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n));
the same holds for O, Ω, o, and ω.
Transpose symmetry: f(n) = O(g(n)) if and only if g(n) = Ω(f(n));
f(n) = o(g(n)) if and only if g(n) = ω(f(n)).
Since all of the above properties hold for asymptotic notations, we can
draw an analogy between the comparison of two real numbers a and b and
the asymptotic comparison of two functions f and g:
f(n) = O(g(n)) is like a ≤ b; f(n) = Ω(g(n)) is like a ≥ b;
f(n) = Θ(g(n)) is like a = b; f(n) = o(g(n)) is like a < b;
f(n) = ω(g(n)) is like a > b.
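As an illustration, transitivity for O can be spot-checked with explicit witness constants (the functions and constants below are assumed examples): if f(n) ≤ c1·g(n) and g(n) ≤ c2·h(n), then f(n) ≤ c1·c2·h(n).

```python
# Assumed example chain: 5n = O(n^2), n^2 = O(n^3), hence 5n = O(n^3).
f = lambda n: 5 * n
g = lambda n: n * n
h = lambda n: n ** 3

c1, c2 = 5, 1
assert all(f(n) <= c1 * g(n) for n in range(1, 1000))        # f = O(g)
assert all(g(n) <= c2 * h(n) for n in range(1, 1000))        # g = O(h)
assert all(f(n) <= c1 * c2 * h(n) for n in range(1, 1000))   # f = O(h)
print("transitivity witnesses check out")
```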
MATHEMATICAL BACKGROUND
Asymptotic notations:
Asymptotic notation lets us compare running times while ignoring
constant factors. For instance, if one running time grows as n^2 while
two others grow linearly, then the first algorithm is significantly
slower for large n, while the other two are comparable, up to a constant
factor.
Focus on large n:
In asymptotic analysis we consider trends for large values of n, so the
fastest-growing term is the only one that needs to be considered. For
example, n^3 + 5n^2 + 7 = Θ(n^3).
Logarithms:
lg(xy) = lg x + lg y
lg(x/y) = lg x − lg y
lg x^a = a lg x
log_b x = log_c x / log_c b (change of base)
a^(log_b c) = c^(log_b a)
In each equation above, the logarithm bases are positive and not equal to 1.
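The standard logarithm identities (product rule, power rule, change of base) can be spot-checked numerically; a small sketch using Python's math module, with arbitrarily chosen values:

```python
import math

x, y, b = 8.0, 32.0, 2.0
# product rule: log_b(xy) = log_b x + log_b y
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
# power rule: log_b(x^3) = 3 log_b x
assert math.isclose(math.log(x ** 3, b), 3 * math.log(x, b))
# change of base: log_b x = log_10 x / log_10 b
assert math.isclose(math.log(x, b), math.log(x, 10) / math.log(b, 10))
print("log identities hold")
```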
Exponentials:
a^0 = 1, a^1 = a, a^(−1) = 1/a
(a^m)^n = a^(mn)
a^m a^n = a^(m+n)
For all constants a > 1 and b, n^b = o(a^n): every exponential function
with base greater than 1 grows faster than every polynomial.
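A quick numerical sketch (with assumed constants a = 2, b = 10) of the standard fact that n^b = o(a^n), i.e. exponentials eventually outgrow polynomials:

```python
# The ratio n^10 / 2^n grows at first but then tends to 0,
# illustrating n^b = o(a^n) for a = 2, b = 10.
for n in [10, 50, 100, 200]:
    print(n, n ** 10 / 2.0 ** n)
```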
Factorials:
n! = 1 · 2 · 3 · … · n, with 0! = 1
Stirling's approximation: n! = sqrt(2πn) (n/e)^n (1 + Θ(1/n))
Consequently, n! = o(n^n), n! = ω(2^n), and lg(n!) = Θ(n lg n).
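Stirling's approximation can be checked against exact factorials; the sketch below compares the two and shows the ratio approaching 1 as n grows:

```python
import math

# Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n
for n in [5, 10, 20]:
    exact = math.factorial(n)
    approx = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(n, exact, approx, approx / exact)  # ratio approaches 1
```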