
Lecture 3

GROWTH OF FUNCTIONS
INTRODUCTION

There are two main methods of measuring the running time of an algorithm.
One is a mathematical analysis of the algorithm, called an asymptotic analysis, which captures the gross aspects of efficiency for all possible inputs but does not give exact execution times.
The second approach is an empirical analysis of an actual implementation, which determines exact running times for a sample of specific inputs; with this method, however, we cannot predict the performance of the algorithm for all inputs.
We are interested in the behavior for large n because the main purpose of designing efficient algorithms is to be able to solve problems on instances of large size.
For a large instance size n, an algorithm whose running time has a smaller growth rate than that of another algorithm will be superior.
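As a concrete illustration of the empirical approach, the sketch below times a small placeholder workload for a few sample sizes; the function solve and the chosen sizes are our own illustrative assumptions, not part of the lecture.

    import time

    def solve(n):
        # Placeholder workload: sum the first n integers.
        total = 0
        for i in range(n):
            total += i
        return total

    for n in (1_000, 10_000, 100_000):
        start = time.perf_counter()
        solve(n)
        elapsed = time.perf_counter() - start
        print(f"n = {n:>7}: {elapsed:.6f} s")

Timings like these describe only the sampled inputs on one machine, which is exactly the limitation noted above.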

ORDERS OF GROWTH

To analyze the efficiency of an algorithm we are interested in how the running time increases as the input size increases; a detailed analysis of the exact running time of the algorithm is not necessary.

When two algorithms are compared with respect to their behavior for large input sizes, a useful measure is the order of growth.
The order of growth can be estimated by taking the dominant term of the running time of the algorithm.
Here we only specify how the running time increases as the input increases, rather than specifying the exact relation between an algorithm's input and its running time.
For example, if the running time of an algorithm is 9n² for input size n, we say that its running time grows as n²; the constant factor 9 is ignored.
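As a quick illustration (our own sketch, not from the lecture), doubling the input size multiplies T(n) = 9n² by exactly 4, independent of the constant 9:

    def T(n):
        return 9 * n * n

    for n in (100, 1000, 10000):
        # Doubling the input multiplies the running time by 2**2 = 4.
        print(n, T(2 * n) / T(n))   # prints 4.0 each time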

Example 1.

Let the running time of an algorithm be given by T(n) = an + b, where a and b are positive constants and n is the input size (n ≥ 0). If the input size n is multiplied by a factor k, then the dominant term of T(n), namely an, is also multiplied by k:

T(kn) = k(an) + b

Hence the running time T(n) has a linear order of growth.

Example 2.

Let the running time of an algorithm be given by T(n) = an² + bn + c, where a, b and c are positive constants and n is the input size (n ≥ 0). If the input size n is multiplied by a factor k, then the dominant term of T(n), namely an², is multiplied by k²:

T(kn) = a(kn)² + b(kn) + c
      = k²(an²) + k(bn) + c

Hence the running time T(n) has a quadratic order of growth.
Example 3.

Let the running time of an algorithm be given by the logarithmic expression T(n) = a lg n + b, where a and b are positive constants and n is the input size (n > 0). If the input size n is multiplied by a factor k, the dominant term of T(n), namely a lg n, does not change its form; the running time changes only by an additive constant:

T(n) = a lg n + b
T(kn) = a lg(kn) + b = a lg n + a lg k + b

Hence the running time of the algorithm has a logarithmic order of growth.

Example 4.

Let the running time of an algorithm be given by T(n) = a·3ⁿ + b, where a and b are positive constants and n is the input size (n ≥ 0). If the input size n is multiplied by a factor k, then T(kn) can be expressed as:

T(kn) = a·3^(kn) + b = a(3ⁿ)ᵏ + b

Hence the running time has an exponential order of growth.
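The four growth behaviours derived in Examples 1-4 can be checked numerically. In the sketch below the constants a, b, c and the factor k are arbitrary illustrative choices:

    import math

    a, b, c = 2.0, 5.0, 7.0
    k, n = 3, 10_000

    def linear(n):      return a * n + b              # Example 1
    def quadratic(n):   return a * n**2 + b * n + c   # Example 2
    def logarithmic(n): return a * math.log2(n) + b   # Example 3
    def exponential(n): return a * 3**n + b           # Example 4

    print(linear(k * n) / linear(n))            # close to k = 3
    print(quadratic(k * n) / quadratic(n))      # close to k**2 = 9
    print(logarithmic(k * n) - logarithmic(n))  # a * log2(k): an additive constant
    print(exponential(8) / exponential(4))      # close to 3**4 = 81 (small inputs avoid overflow)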

The growth of the running time for several common functions, as the input size n grows, is shown in the table below (values rounded):

n      log₂ n   n·log₂ n   n²     n³     2ⁿ          n!
10     3.3      3.3·10¹    10²    10³    10³         3.6·10⁶
10²    6.6      6.6·10²    10⁴    10⁶    1.3·10³⁰    9.3·10¹⁵⁷
10³    10       1.0·10⁴    10⁶    10⁹    1.1·10³⁰¹   4.0·10²⁵⁶⁷
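The entries of such a table can be reproduced with a few lines of Python (a sketch; the sample sizes are illustrative, and n! is omitted because it overflows floating point for large n):

    import math

    print(f"{'n':>6} {'log2 n':>8} {'n log2 n':>10} {'n^2':>8} {'2^n':>10}")
    for n in (10, 100, 1000):
        print(f"{n:>6} {math.log2(n):>8.1f} {n * math.log2(n):>10.3g} "
              f"{n**2:>8.3g} {2.0**n:>10.3g}")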

ASYMPTOTIC NOTATION

Time and space complexity are measured in terms of asymptotic notations.
Asymptotic notation describes the time and space complexity accurately for large instance characteristics.
For a given algorithm, asymptotic notation provides upper and/or lower time and space bounds. An upper bound gives the maximum time/space required, and a lower bound gives the minimum time/space required.

Asymptotic notation is a tool for analyzing the time and space usage of algorithms:
It assumes the input size is a variable, say n, and gives time and space bounds as a function of n.
It ignores multiplicative and additive constants.
It compares the growth rates of two running-time expressions.
It is concerned only with the rate of growth.

Theta notation

Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We write f(n) = Θ(g(n)) (or f(n) ∈ Θ(g(n))), pronounced "theta of g of n", where Θ(g(n)) is the set:

Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀ }

Here the function g(n) is an asymptotically tight bound for the function f(n).

The Θ-notation asymptotically bounds a function from above and below.
A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c₁, c₂ such that it can be sandwiched between c₁g(n) and c₂g(n) for sufficiently large n.
The fact that f(n) ∈ Θ(g(n)) means that f(n) and g(n) have the same order of growth (i.e., they are asymptotically equivalent).
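As a sanity check of this definition, the sketch below tests the sandwich condition for f(n) = 3n² + 10n against g(n) = n², using hand-picked constants c₁ = 3, c₂ = 4 and n₀ = 10 (the functions and constants are our own illustrative choices):

    def f(n): return 3 * n**2 + 10 * n
    def g(n): return n**2

    c1, c2, n0 = 3, 4, 10
    # Check c1*g(n) <= f(n) <= c2*g(n) over a sample of n >= n0.
    assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
    print("f(n) is sandwiched between 3*g(n) and 4*g(n) for n >= 10")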

Big Oh notation:

Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We write f(n) = O(g(n)) (or f(n) ∈ O(g(n))), pronounced "big-oh of g of n", where O(g(n)) is the set:

O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }

Here the function g(n) is an asymptotic upper bound for the function f(n).
When we have only an asymptotic upper bound, we use O-notation.
Big-oh notation provides an upper bound for the function. We write f(n) = O(g(n)) to indicate that the function f(n) is a member of the set O(g(n)).
An upper bound, or maximum order, corresponds to worst-case efficiency.
O(g(n)) represents the set of all functions whose rate of growth is the same as or lower than that of g(n).
Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation.
For a given function g(n) we have Θ(g(n)) ⊆ O(g(n)).

Omega notation:

Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We write f(n) = Ω(g(n)) (or f(n) ∈ Ω(g(n))), pronounced "big-omega of g of n", where Ω(g(n)) is the set:

Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }

Here the function g(n) is an asymptotic lower bound for the function f(n).
Ω-notation provides an asymptotic lower bound.
A lower bound, or minimum order, corresponds to best-case efficiency.
Ω(g(n)) is the set of all functions whose rate of growth is the same as or higher than that of g(n).
For two functions f(n) and g(n) we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
For a given function g(n) we have Θ(g(n)) ⊆ Ω(g(n)).
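The last two facts can be illustrated numerically: for f(n) = 3n² + 10n and g(n) = n², the same constants that witness f(n) = Θ(g(n)) also witness f(n) = O(g(n)) and f(n) = Ω(g(n)) separately (a sketch with hand-picked constants):

    def f(n): return 3 * n**2 + 10 * n
    def g(n): return n**2

    ns = range(10, 10_000)
    is_upper = all(f(n) <= 4 * g(n) for n in ns)   # f(n) = O(g(n)) with c = 4
    is_lower = all(f(n) >= 3 * g(n) for n in ns)   # f(n) = Omega(g(n)) with c = 3
    print(is_upper and is_lower)                   # True: together they give Theta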

RELATIONSHIPS BETWEEN O, o, Ω, ω, AND Θ NOTATIONS

With the help of the definitions of the O, o, Ω, ω, and Θ notations we can draw a graph which visualizes the relationships between them.

[Figure: diagram of the relationships among the O, o, Ω, ω, and Θ notations]

ASYMPTOTIC NOTATION PROPERTIES

Let f(n) and g(n) be asymptotically positive functions; some relational properties of real numbers also apply to asymptotic comparisons. These asymptotic notation properties are given below.

Reflexivity
Symmetry
Transitivity
Transpose symmetry

Reflexivity:

f(n) = Θ(f(n));  f(n) = O(f(n));  f(n) = Ω(f(n))

Symmetry:

f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))

Transitivity:

f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)); the same holds for O, Ω, o and ω

Transpose symmetry:

f(n) = O(g(n)) if and only if g(n) = Ω(f(n));  f(n) = o(g(n)) if and only if g(n) = ω(f(n))
Since all of the above properties hold for asymptotic notations, we can draw an analogy between the comparison of two real numbers a and b and the asymptotic comparison of two functions f and g:

f(n) = O(g(n))  is like  a ≤ b
f(n) = Ω(g(n))  is like  a ≥ b
f(n) = Θ(g(n))  is like  a = b
f(n) = o(g(n))  is like  a < b
f(n) = ω(g(n))  is like  a > b
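When the limit of f(n)/g(n) as n → ∞ exists, it gives a practical test consistent with this analogy: a limit of 0 suggests f(n) = o(g(n)), a finite positive limit suggests f(n) = Θ(g(n)), and an infinite limit suggests f(n) = ω(g(n)). A sketch using the sympy library (our choice of tool, not part of the lecture):

    from sympy import symbols, limit, log, oo

    n = symbols('n', positive=True)

    # The limit of the ratio f/g as n -> infinity classifies the relation.
    print(limit((3*n**2 + 10*n) / n**2, n, oo))   # 3  -> Theta(n^2)
    print(limit(log(n) / n, n, oo))               # 0  -> o(n)
    print(limit(2**n / n**3, n, oo))              # oo -> omega(n^3)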

MATHEMATICAL BACKGROUND

Asymptotic notations:

Asymptotic notations involve O, o, Ω, ω, and Θ. These notations provide us with a way to simplify the functions that arise in analyzing algorithm running times, by ignoring constant factors and concentrating on the trends for large values of n. For example, consider three algorithms with running times T₁(n) = 4n², T₂(n) = 30n lg n, and T₃(n) = n lg n.

Thus, the first algorithm is significantly slower for large n, while the other
two are comparable, up to a constant factor.

Ignore constant factors:

In asymptotic notation, multiplicative constant factors are ignored. For example, 47n is O(n). Constant factors appearing in exponents, however, cannot be ignored. For example, 2⁴ⁿ is not O(2ⁿ).
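A quick numeric check (a sketch) makes the difference visible: the ratio 2⁴ⁿ/2ⁿ = 8ⁿ grows without bound, while 47n/n stays at the constant 47:

    for n in (1, 5, 10, 20):
        # The constant factor 47 never changes; the exponent constant explodes.
        print(n, (47 * n) / n, 2**(4 * n) / 2**n)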

Focus on large n:

In asymptotic analysis we consider trends for large values of n, so only the fastest-growing term of a function needs to be considered. For example, n³ + 2n² + 10n + 100 is Θ(n³), since for large n the lower-order terms are negligible compared with n³.
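Concretely, for the illustrative polynomial above, the ratio of the full expression to its fastest-growing term approaches 1 (a small sketch):

    def T(n): return n**3 + 2 * n**2 + 10 * n + 100

    for n in (10, 100, 1000, 10_000):
        print(n, T(n) / n**3)   # approaches 1 as n grows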

Logarithms:

The following logarithmic formulas are useful for simplifying terms involving logarithms. For all real a > 0, b > 0, c > 0, and n:

a = b^(log_b a)
log_c(ab) = log_c a + log_c b
log_b(aⁿ) = n·log_b a
log_b a = log_c a / log_c b
log_b(1/a) = −log_b a
log_b a = 1 / log_a b
a^(log_b c) = c^(log_b a)

In each equation above, the logarithm bases are not 1.
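These identities are easy to spot-check numerically; the sketch below uses arbitrary values a = 8, b = 2, c = 10, n = 3 (our own choices):

    import math

    a, b, c, n = 8.0, 2.0, 10.0, 3.0

    print(math.isclose(a, b ** math.log(a, b)))                               # a = b^(log_b a)
    print(math.isclose(math.log(a * b, c), math.log(a, c) + math.log(b, c)))  # log of a product
    print(math.isclose(math.log(a ** n, b), n * math.log(a, b)))              # log of a power
    print(math.isclose(math.log(a, b), math.log(a, c) / math.log(b, c)))      # change of base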

Exponentials:

For all real a > 0, m, and n, we have the following identities:

a⁰ = 1
a¹ = a
a⁻¹ = 1/a
(aᵐ)ⁿ = aᵐⁿ
(aᵐ)ⁿ = (aⁿ)ᵐ
aᵐ·aⁿ = aᵐ⁺ⁿ

Factorials:

The notation n! (read "n factorial") is defined as n! = 1·2·3···n for n ≥ 1, with 0! = 1. A useful tool for bounding factorials is Stirling's approximation:

n! ≈ √(2πn)·(n/e)ⁿ

which shows, for example, that lg(n!) = Θ(n lg n).
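A small sketch comparing the exact factorial with the leading term of Stirling's approximation:

    import math

    def stirling(n):
        # Leading term of Stirling's approximation.
        return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

    for n in (5, 10, 20):
        print(n, math.factorial(n), round(stirling(n), 1))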

ASYMPTOTIC EFFICIENCY CLASSES

The basic asymptotic efficiency classes, in increasing order of growth, are:

1          constant
log n      logarithmic
n          linear
n log n    linearithmic ("n-log-n")
n²         quadratic
n³         cubic
2ⁿ         exponential
n!         factorial
