
COMPUTER ORIENTED NUMERICAL METHODS

ITERATIVE METHODS
High order polynomial equations and non-polynomial equations can be solved by a trial and error method. For a computer this is the most natural technique for solving such problems. In a trial and error method of solution a sequence of steps is repeated over and over again, and such a procedure is concisely expressible as a computational algorithm. It is possible to formulate algorithms which tackle a whole class of similar problems. Rounding errors are negligible in trial and error procedures, since an error made in one iteration is corrected in the iterations that follow.
METHOD OF SUCCESSIVE BISECTION
In this method we begin the iterative cycle by picking two trial points which enclose the root. If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite signs, then there exists at least one root between a and b. Let f(a) be -ve and f(b) be +ve. Then the root lies between a and b and its approximate value is given by
x0 = (a + b)/2
If f(x0) = 0, then x0 is a root of the equation f(x) = 0. Otherwise the root lies between x0 and b when f(x0) is -ve, or between x0 and a when f(x0) is +ve. Let us designate this new interval as (a1, b1), whose length is (b-a)/2. As before, this is bisected at x1 and the new interval will be half the length of the previous one. The process is repeated until the latest interval is as small as the allowed error, say e. It is clear that the interval width is reduced by one half at each step; at the end of the nth step the new interval (an, bn) has length (b-a)/2^n, where n is the number of iterations.
Algorithm:
1. Read x0, x1, e
2. y0 ← f(x0)
3. y1 ← f(x1)
4. i ← 0
5. if (sign(y0) = sign(y1)) then
begin Write "Starting values unsuitable"
Write x0, x1, y0, y1
Stop end
6. while ( |(x1 - x0)/x1| > e ) do
begin
7. x2 ← (x0 + x1)/2
8. y2 ← f(x2)
9. i ← i + 1
10. if (sign(y0) = sign(y2)) then x0 ← x2
else x1 ← x2
end
11. Write "Solution converges to a root"
12. Write "Number of iterations =", i
13. Write x2, y2
14. Stop
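
A minimal Python sketch of this bisection procedure; the sample function x^3 - x - 2 and the tolerance are illustrative assumptions, not part of the text:

def bisect(f, x0, x1, e):
    # Trial values must bracket the root: f(x0) and f(x1) of opposite signs.
    y0, y1 = f(x0), f(x1)
    if (y0 > 0) == (y1 > 0):
        raise ValueError("starting values unsuitable")
    i = 0
    x2 = (x0 + x1) / 2
    while abs((x1 - x0) / x1) > e:
        x2 = (x0 + x1) / 2          # bisect the current interval
        y2 = f(x2)
        i += 1
        if (y0 > 0) == (y2 > 0):    # root lies between x2 and x1
            x0, y0 = x2, y2
        else:                       # root lies between x0 and x2
            x1, y1 = x2, y2
    return x2, i

# Example: the root of x^3 - x - 2 = 0 between 1 and 2 (about 1.5214)
print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0, 1e-6))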

NEWTON-RAPHSON ITERATIVE METHOD
Let x0 be an approximate root of f(x) = 0. Let the correct root be x1 = x0 + h, so that f(x1) = 0.
Expanding f(x0 + h) by Taylor series, we get,
f(x1) = f(x0 + h) = f(x0) + h f'(x0) + h^2/2! f''(x0) + ... = 0
Neglecting the terms containing higher order derivatives we get, f(x0) + h f'(x0) = 0.
Therefore, h = -f(x0) / f'(x0).
A better approximation than x0 is therefore given by, x1 = x0 + h = x0 - f(x0) / f'(x0).
Successive approximations are given by, x(n+1) = xn - f(xn) / f'(xn).
This is the Newton-Raphson formula. In the iteration method we write x(n+1) = φ(xn).
Hence, φ(xn) = xn - f(xn) / f'(xn)
Differentiating this expression we get, φ'(xn) = f(xn) f''(xn) / [f'(xn)]^2.
To examine convergence we assume that f(x), f'(x) and f''(x) are continuous in the interval containing the root x = α. Here also the condition |φ'(xn)| < 1 must hold. The Newton-Raphson formula converges provided that the initial approximation x0 is sufficiently close to α. If f''(xn)/f'(xn) is finite and not zero then the Newton-Raphson method is second order convergent. Physically, second order convergence means that in each iteration the number of significant digits in the approximation doubles.
Algorithm:
1. Read x0, epsilon, delta, n
where x0 is the initial guess, epsilon is the allowed relative error, delta is the lower bound on |f'| and n is the maximum number of iterations allowed.
2. for i = 1 to n in steps of 1 do
3. f0 ← f(x0)
4. df0 ← f'(x0)
5. if |df0| ≤ delta then GOTO 11
6. x1 ← x0 - (f0/df0)
7. if |(x1 - x0)/x1| < epsilon then GOTO 13
8. x0 ← x1
endfor
9. Write "Does not converge in n iterations", f0, df0, x0, x1
10. Stop
11. Write "Slope too small", x0, f0, df0, i
12. Stop
13. Write "Convergent solution", x1, f(x1), i
14. Stop
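
A minimal Python sketch of this algorithm; the derivative is supplied analytically, and the sample function (x^2 - 2, giving √2) and tolerances are illustrative assumptions:

def newton_raphson(f, df, x0, epsilon, delta, n):
    for i in range(1, n + 1):
        f0, d0 = f(x0), df(x0)
        if abs(d0) <= delta:                  # slope too small: step would blow up
            raise ArithmeticError("slope too small at x = %g" % x0)
        x1 = x0 - f0 / d0                     # Newton step
        if abs((x1 - x0) / x1) < epsilon:     # relative error criterion met
            return x1, i
        x0 = x1
    raise ArithmeticError("does not converge in %d iterations" % n)

# Example: the square root of 2 as the root of x^2 - 2 = 0
print(newton_raphson(lambda x: x*x - 2, lambda x: 2*x, 1.0, 1e-10, 1e-12, 50))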
SOLUTION OF POLYNOMIAL EQUATION
A polynomial equation can be written as, f(x) = a0 + a1 x + a2 x^2 + ... + an x^n = 0 _____ (1)
We may rewrite it as, f(x) = g(x)(x - x0) + R ______ (2)
where g(x) is an (n-1)th order polynomial obtained by dividing f(x) by (x - x0) and R is a constant which is the remainder after division. Differentiating,
f'(x) = g'(x)(x - x0) + g(x) _____ (3)
and when x = x0, f'(x0) = g(x0) ______ (4)
We can write, g(x) = b0 + b1 x + b2 x^2 + ... + b(n-1) x^(n-1) ______ (5)
The coefficients of this polynomial are obtained as, b(n-1) = an; b(n-2) = a(n-1) + x0 b(n-1); b(n-3) = a(n-2) + x0 b(n-2); and finally b0 = a1 + x0 b1.
Hence eqn (2) can be written as,
f(x) = a0 + a1 x + a2 x^2 + ... + an x^n = (b0 + b1 x + b2 x^2 + ... + b(n-1) x^(n-1))(x - x0) + R = 0 __________ (6)
Equating constant terms on both sides,
a0 = -b0 x0 + R and hence R = a0 + b0 x0 ________ (7)
From eqn (2), f(x0) = R = a0 + b0 x0 _________ (8)
Also we get, f'(x0) = g(x0) = b0 + b1 x0 + b2 x0^2 + ... + b(n-1) x0^(n-1) ___________ (9)
Now the roots are found using the Newton-Raphson technique,
x(i+1) ← xi - f(xi)/f'(xi) = xi - f(xi)/g(xi)
where i = 0, 1, ..., N until |(x(i+1) - xi)/x(i+1)| < e, the allowed error.
Algorithm:
1. Read x0, e, n, N
where x0 is the initial guess, e is the allowed relative error, n is the order of the polynomial and N is the maximum number of iterations allowed.
2. for i = 0 to n in steps of 1 do Read ai endfor
3. for k = 1 to N in steps of 1 do
4. b(n-1) ← an
5. S ← b(n-1)
6. for i = 1 to n-1 in steps of 1 do
7. b(n-(i+1)) ← a(n-i) + x0 b(n-i)
8. S ← b(n-(i+1)) + x0 S
endfor
9. P ← a0 + b0 x0
10. x1 ← x0 - (P/S)
11. if |(x1 - x0)/x1| < e then GOTO 15
else
12. x0 ← x1
endif endfor
13. Write "Root not found in N iterations", S, P, x1, x0
14. Stop
15. Write "Root found in k iterations"
16. x0 ← x1
17. Write x0, S, P
18. Stop
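
A minimal Python sketch of this scheme. Inside each Newton step the b coefficients are built by synthetic division, S accumulates g(x0) = f'(x0) by Horner's rule, and P is the remainder R = f(x0). The sample polynomial x^3 - 2x - 5 and the tolerances are illustrative assumptions:

def poly_root(a, x0, e=1e-10, N=100):
    # a[i] is the coefficient of x^i, so f(x) = a[0] + a[1] x + ... + a[n] x^n
    n = len(a) - 1
    for k in range(1, N + 1):
        b = [0.0] * n
        b[n - 1] = a[n]
        S = b[n - 1]
        for i in range(1, n):              # i = 1 .. n-1
            b[n - i - 1] = a[n - i] + x0 * b[n - i]
            S = b[n - i - 1] + x0 * S      # Horner evaluation of g(x0)
        P = a[0] + b[0] * x0               # remainder R = f(x0)
        x1 = x0 - P / S                    # Newton-Raphson step
        if abs((x1 - x0) / x1) < e:
            return x1, k
        x0 = x1
    raise ArithmeticError("root not found in %d iterations" % N)

# Example: f(x) = x^3 - 2x - 5 has a root near 2.0946
print(poly_root([-5.0, -2.0, 0.0, 1.0], 2.0))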
INTERPOLATION
Interpolation is the computation of points or values between ones that are known. In numerical analysis interpolation is a method of constructing new data points within the range of a discrete set of known data points. Interpolation provides a means of estimating the function at intermediate points.
LAGRANGE INTERPOLATION
The usual technique for interpolating is to fit a polynomial through the points surrounding the point x at which the value of the function is to be found. This polynomial is an approximation of the function and is used to find f(x) there.
Let y(x) be a continuous function, differentiable (n+1) times in the interval (a, b). Given the (n+1) points (x0, y0), (x1, y1), ..., (xn, yn), where the values of x need not necessarily be equally spaced, we have to find a polynomial of degree n, say Ln(x), such that
Ln(xi) = y(xi) = yi, where i = 0, 1, ..., n
Consider the eqn of a straight line passing through two points (x0, y0) and (x1, y1). Then
L1(x) = [(x - x1)/(x0 - x1)] y0 + [(x - x0)/(x1 - x0)] y1
This eqn represents the Lagrange polynomial of degree one passing through the two points (x0, y0) and (x1, y1).
The Lagrange polynomial may be generalised to the nth order as
Ln(x) = Σ (i = 0 to n) [ Π (j = 0 to n, j ≠ i) (x - xj)/(xi - xj) ] yi
Algorithm:
1. Read x, n
2. for i = 1 to (n+1) in steps of 1 do Read xi and fi endfor
3. sum ← 0
4. for i = 1 to (n+1) in steps of 1 do
5. prodfunc ← 1
6. for j = 1 to (n+1) in steps of 1 do
7. if (j ≠ i) then prodfunc ← prodfunc × (x - xj)/(xi - xj)
endfor
8. sum ← sum + fi × prodfunc
endfor
9. Write x, sum
10. Stop
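
A minimal Python sketch of the algorithm above; the sample table (four points of y = x^3) is an illustrative assumption:

def lagrange(xs, fs, x):
    total = 0.0
    for i in range(len(xs)):
        prodfunc = 1.0
        for j in range(len(xs)):
            if j != i:                     # skip the j = i factor
                prodfunc *= (x - xs[j]) / (xs[i] - xs[j])
        total += fs[i] * prodfunc          # f_i times its basis polynomial
    return total

# A cubic through four points of y = x^3 reproduces 2.5^3 = 15.625 exactly
print(lagrange([1.0, 2.0, 3.0, 4.0], [1.0, 8.0, 27.0, 64.0], 2.5))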


DIFFERENCE TABLE
Assume that we are given a table of values (xi, yi), where i = 0, 1, 2, ..., n, of a function y = f(x). Here the values of x are equally spaced, i.e., xi = x0 + ih. Suppose we are required to recover the values of f(x) for some intermediate values of x, or it is required to obtain the derivative of f(x) for some x in the range x0 ≤ x ≤ xn. The methods for solving these problems are based on the concept of the differences of a function.
If y0, y1, ..., yn denote a set of values of y, then y1 - y0, y2 - y1, ..., yn - y(n-1) are called the differences of y, and
Δy0 = y1 - y0, Δy1 = y2 - y1, ..., Δy(n-1) = yn - y(n-1) are called first forward differences.
Δ^2 y0 = Δy1 - Δy0, Δ^2 y1 = Δy2 - Δy1, ..., Δ^2 y(n-2) = Δy(n-1) - Δy(n-2) are called second forward differences.
Similarly one can define third, fourth, etc. forward differences. In terms of the tabulated values,
Δ^2 y0 = Δy1 - Δy0 = (y2 - y1) - (y1 - y0) = y2 - 2y1 + y0
Δ^3 y0 = Δ^2 y1 - Δ^2 y0 = (y3 - 2y2 + y1) - (y2 - 2y1 + y0) = y3 - 3y2 + 3y1 - y0
With tabulation at equal intervals a difference table for (n+1) points may be expressed as shown:

x        y     Δ                Δ^2                     Δ^3                        Δ^4
x0       y0
               Δy0 = y1 - y0
x0+h     y1                     Δ^2 y0 = Δy1 - Δy0
               Δy1 = y2 - y1                            Δ^3 y0 = Δ^2 y1 - Δ^2 y0
x0+2h    y2                     Δ^2 y1 = Δy2 - Δy1                                 Δ^4 y0 = Δ^3 y1 - Δ^3 y0
               Δy2 = y3 - y2                            Δ^3 y1 = Δ^2 y2 - Δ^2 y1
x0+3h    y3                     Δ^2 y2 = Δy3 - Δy2
               Δy3 = y4 - y3
.        .     .
.        .     .
x0+nh    yn
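
A minimal Python sketch that builds this table column by column; the sample values (y = x^3 at x = 1, 2, 3, 4, 5) are an illustrative assumption:

def difference_table(ys):
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]                   # previous difference column
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table                           # table[k][i] holds Δ^k y_i

for column in difference_table([1.0, 8.0, 27.0, 64.0, 125.0]):
    print(column)

For a cubic the third differences come out constant and the fourth differences vanish, as the structure of the table above suggests.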


NEWTON-GREGORY FORWARD INTERPOLATION FORMULA
Let a set of (n+1) values, namely (x0, y0), (x1, y1), ..., (xn, yn), of x and y be given. It is required to find yn(x), a polynomial of nth degree, where the values of x are equidistant, xi = x0 + ih, i = 0, 1, ..., n. Since yn(x) is a polynomial of nth degree it may be written as,
yn(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1) + a3(x - x0)(x - x1)(x - x2) + ... + an(x - x0)(x - x1)...(x - x(n-1))
Imposing now the condition that y and yn(x) should agree at the set of tabulated points we obtain,
n=0: y0 = a0, hence a0 = y0
n=1: y1 = a0 + a1(x1 - x0), hence
a1 = (y1 - y0)/(x1 - x0) = Δy0 / h
n=2: y2 = a0 + a1(x2 - x0) + a2(x2 - x0)(x2 - x1) = y0 + (Δy0/h)·2h + a2·2h^2, hence
a2 = (y2 - 2y1 + y0)/(2h^2) = Δ^2 y0 / (h^2 2!)
Similarly, an = Δ^n y0 / (h^n n!)
Setting x = x0 + uh and substituting the values of a0, a1, a2, ..., an we get
yn(x) = y0 + u Δy0 + [u(u-1)/2!] Δ^2 y0 + [u(u-1)(u-2)/3!] Δ^3 y0 + ... + [u(u-1)(u-2)...(u-n+1)/n!] Δ^n y0
This eqn is called the Newton-Gregory forward interpolation formula.
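
A minimal Python sketch of the formula; it takes the leading differences Δ^k y0 from the difference table and accumulates the u(u-1)...(u-k+1)/k! terms. The sample table (y = x^2 at x = 0, 1, 2, 3) is an illustrative assumption:

def newton_gregory(x0, h, ys, x):
    # leading diagonal of the difference table: y0, Δy0, Δ^2 y0, ...
    diffs, col = [ys[0]], list(ys)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])
    u = (x - x0) / h
    term, total = 1.0, diffs[0]
    for k in range(1, len(diffs)):
        term *= (u - (k - 1)) / k          # builds u(u-1)...(u-k+1)/k!
        total += term * diffs[k]
    return total

# Example: y = x^2 tabulated at x = 0, 1, 2, 3; interpolate at x = 2.5
print(newton_gregory(0.0, 1.0, [0.0, 1.0, 4.0, 9.0], 2.5))   # exact: 6.25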
LEAST SQUARES APPROXIMATION OF FUNCTIONS
Suppose x1, x2, ..., xn are the values of an independent variable and y1, y2, ..., yn are the corresponding values of the dependent variable. Let ȳ = f(x) be an approximation to the function. The function f(x) is to be chosen in such a way that the errors are small. This is done by fitting the function f(x) to the tabulated values so as to minimise the sum of the squares of the deviations. Such a fit is called a least squares fit.
LINEAR REGRESSION
Consider the eqn of a straight line, ȳ = a1 x + a0
We want to find the values of a0 and a1 which lead to the best straight line. It is assumed that x is the independent or controlled variable and thus has no errors in it. The variable y is the dependent or measured variable and is assumed to have statistical errors. The deviation (yi - ȳi) measures the error between the measured and the fitted value along the y direction. The linear least squares fit obtained in this case is called the linear regression of y on x.
Let S denote the sum of the squares of the errors, S = Σ (yi - ȳi)^2 = Σ (yi - (a1 xi + a0))^2
To minimise S we take the partial derivatives of S with respect to a0 and a1 and set them to zero. Thus,
∂S/∂a0 = -2 Σ (yi - a1 xi - a0) = 0
∂S/∂a1 = -2 Σ xi (yi - a1 xi - a0) = 0
Rearranging the terms we get,
n a0 + a1 Σ xi = Σ yi
a0 Σ xi + a1 Σ xi^2 = Σ xi yi
Therefore,
a1 = (n Σ xi yi - Σ xi Σ yi) / (n Σ xi^2 - (Σ xi)^2)
a0 = (Σ yi Σ xi^2 - Σ xi Σ xi yi) / (n Σ xi^2 - (Σ xi)^2)
These two eqns are called the normal eqns and give the coefficients of the linear least squares fit line, the so-called regression coefficients.
Algorithm:
1. Read n
2. sumx ← 0
3. sumxsq ← 0
4. sumy ← 0
5. sumxy ← 0
6. for i = 1 to n do
7. Read x, y
8. sumx ← sumx + x
9. sumxsq ← sumxsq + x^2
10. sumy ← sumy + y
11. sumxy ← sumxy + x·y
endfor
12. denom ← n·sumxsq - sumx·sumx
13. a0 ← (sumy·sumxsq - sumx·sumxy) / denom
14. a1 ← (n·sumxy - sumx·sumy) / denom
15. Write a1, a0
16. Stop
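
A minimal Python sketch of the algorithm above; the sample data (roughly y = 2x) are an illustrative assumption:

def linear_regression(points):
    n = len(points)
    sumx = sumxsq = sumy = sumxy = 0.0
    for x, y in points:
        sumx += x
        sumxsq += x * x
        sumy += y
        sumxy += x * y
    denom = n * sumxsq - sumx * sumx
    a0 = (sumy * sumxsq - sumx * sumxy) / denom   # intercept
    a1 = (n * sumxy - sumx * sumy) / denom        # slope
    return a0, a1

print(linear_regression([(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]))   # ~ (0.15, 1.94)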

POLYNOMIAL REGRESSION
It may be necessary to fit a higher degree polynomial rather than a straight line. Consider that a quadratic is to be fitted. Assume that n pairs of coordinates (xi, yi) are given which are to be approximated by a quadratic. Let the quadratic curve be represented by,
y = a2 x^2 + a1 x + a0
When x = xi, the fitted value is, ȳi = a2 xi^2 + a1 xi + a0
The sum of the squares of the deviations is given by,
S = Σ (yi - ȳi)^2 = Σ (yi - (a2 xi^2 + a1 xi + a0))^2
Differentiating S with respect to a0, a1, a2 respectively and setting each of these derivatives equal to zero we get,
n a0 + a1 Σ xi + a2 Σ xi^2 = Σ yi
a0 Σ xi + a1 Σ xi^2 + a2 Σ xi^3 = Σ xi yi
a0 Σ xi^2 + a1 Σ xi^3 + a2 Σ xi^4 = Σ xi^2 yi
These are three linear eqns in three unknowns. They are called the normal eqns for quadratic regression.
This may be extended to fit any nth degree polynomial. In the case of an nth degree polynomial there will be (n+1) simultaneous eqns in (n+1) unknowns.
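
A minimal sketch of quadratic regression: form the three normal eqns above and solve the 3×3 system (numpy's linear solver is used for brevity; the sample data, near y = 2x^2 + 1, are an illustrative assumption):

import numpy as np

def quadratic_fit(xs, ys):
    x = np.asarray(xs, dtype=float)
    y = np.asarray(ys, dtype=float)
    n = len(x)
    # coefficient matrix and right hand side of the normal equations
    A = np.array([[n,            x.sum(),      (x**2).sum()],
                  [x.sum(),      (x**2).sum(), (x**3).sum()],
                  [(x**2).sum(), (x**3).sum(), (x**4).sum()]])
    rhs = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])
    return np.linalg.solve(A, rhs)          # a0, a1, a2

print(quadratic_fit([0, 1, 2, 3, 4], [1.1, 2.9, 9.2, 19.1, 32.8]))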
FITTING AN EXPONENTIAL CURVE
Let y = a e^(-bx) be the curve to be fitted.
Taking natural logarithms, z = log y = log(a e^(-bx)) = log a + (-bx)
Let a0 = log a and a1 = -b.
Hence, z = a0 + a1 x
This eqn is linear, hence we can use the normal eqns for linear regression:
n a0 + a1 Σ xi = Σ zi = Σ log yi
a0 Σ xi + a1 Σ xi^2 = Σ xi zi = Σ xi log yi
Solving the two eqns,
a1 = (n Σ xi zi - Σ xi Σ zi) / (n Σ xi^2 - (Σ xi)^2)
a0 = (Σ zi Σ xi^2 - Σ xi Σ xi zi) / (n Σ xi^2 - (Σ xi)^2)
From a0 and a1 we obtain,
a = e^(a0) and b = -a1
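
A minimal Python sketch of this fit, assuming natural logarithms; the sample data are generated from y = 3 e^(-0.5x) as an illustrative assumption:

import math

def fit_exponential(xs, ys):
    zs = [math.log(y) for y in ys]         # z = log y
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sz = sum(zs)
    sxz = sum(x * z for x, z in zip(xs, zs))
    denom = n * sxx - sx * sx
    a0 = (sz * sxx - sx * sxz) / denom
    a1 = (n * sxz - sx * sz) / denom
    return math.exp(a0), -a1               # a = e^a0, b = -a1

xs = [0.0, 1.0, 2.0, 3.0]
ys = [3.0 * math.exp(-0.5 * x) for x in xs]
print(fit_exponential(xs, ys))             # ~ (3.0, 0.5)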



FITTING A TRIGONOMETRIC FUNCTION
Assume the eqn of the curve to be, y = A sin(ωx + φ)
This has three parameters A, ω, φ. Assume that ω is known. Expanding the above eqn we get,
y = A (sin ωx cos φ + cos ωx sin φ) = a1 sin ωx + a2 cos ωx
where a1 = A cos φ and a2 = A sin φ.
Minimizing the sum of the squares of the errors, S = Σ (yi - a1 sin ωxi - a2 cos ωxi)^2
Taking the partial derivatives of S with respect to a1 and a2 and setting them equal to zero,
a1 Σ sin^2 ωxi + a2 Σ sin ωxi cos ωxi = Σ yi sin ωxi
a1 Σ sin ωxi cos ωxi + a2 Σ cos^2 ωxi = Σ yi cos ωxi
Solving these two simultaneous linear eqns gives the values of a1 and a2. From these solutions,
A = √(a1^2 + a2^2) and φ = tan^-1 (a2/a1)
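
A minimal Python sketch for a known ω: accumulate the five sums, solve the 2×2 system by Cramer's rule, and recover A and φ. The sample data (from y = 2 sin(3x + 0.7)) are an illustrative assumption:

import math

def fit_sinusoid(xs, ys, w):
    s2 = sc = c2 = ys_sin = ys_cos = 0.0
    for x, y in zip(xs, ys):
        s, c = math.sin(w * x), math.cos(w * x)
        s2 += s * s
        sc += s * c
        c2 += c * c
        ys_sin += y * s
        ys_cos += y * c
    det = s2 * c2 - sc * sc                 # determinant of the 2x2 system
    a1 = (ys_sin * c2 - sc * ys_cos) / det
    a2 = (s2 * ys_cos - ys_sin * sc) / det
    A = math.hypot(a1, a2)                  # sqrt(a1^2 + a2^2)
    phi = math.atan2(a2, a1)                # quadrant-aware tan^-1(a2/a1)
    return A, phi

xs = [0.1 * k for k in range(50)]
ys = [2.0 * math.sin(3.0 * x + 0.7) for x in xs]
print(fit_sinusoid(xs, ys, 3.0))            # ~ (2.0, 0.7)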

NUMERICAL DIFFERENTIATION AND INTEGRATION
If a function is defined as an expression, its derivative or integral may often be determined using the techniques of calculus. However, it is sometimes difficult to get a closed form formula by calculus, and only an infinite series approximation may be available for the function.
Formulae for numerical differentiation are obtained by passing an interpolating polynomial through the appropriate set of points. Let the values of a function be available at x1, x2, ..., xn and let its derivative be required at some x1 < x < x3. Then
f(x) = f(x1) + (Δf1 / h)(x - x1) + (Δ^2 f1 / 2h^2)(x - x1)(x - x2) + R(x)
where Δf1 and Δ^2 f1 are the forward differences and R(x) is the error term,
R(x) = [(x - x1)(x - x2)(x - x3)/3!] f'''(ξ), with x1 < ξ < x3
Differentiating the above eqn with respect to x we get,
f'(x) = Δf1 / h + (Δ^2 f1 / 2h^2)(2x - x1 - x2) + R'(x)
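
A minimal sketch of this derivative formula with the error term dropped; the sample table (y = x^2 at x = 1.0, 1.1, 1.2) is an illustrative assumption:

def forward_diff_derivative(f1, f2, f3, h, x, x1, x2):
    d1 = f2 - f1                            # Δf1
    d2 = f3 - 2.0 * f2 + f1                 # Δ^2 f1
    return d1 / h + (d2 / (2.0 * h * h)) * (2.0 * x - x1 - x2)

# y = x^2 tabulated at 1.0, 1.1, 1.2; derivative at x = 1.0 (exact: 2.0)
print(forward_diff_derivative(1.0, 1.21, 1.44, 0.1, 1.0, 1.0, 1.1))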
TRAPEZOIDAL RULE
Formulae for numerical integration, called quadrature, are based on fitting a polynomial through a specified set of points and integrating this approximating function. Assume that the values of a function f(x) are given at x1, x1+h, x1+2h, ..., x1+nh and that it is required to find the integral of f(x) between x1 and x1+nh. The simplest technique is to fit straight lines through f(x1), f(x1+h), ... and determine the area under this approximating function.
The eqn of the straight line between xi and xi+h is given by,
f(x) = f(xi) + [(f(xi+h) - f(xi))/h] (x - xi)
Integrating this line over each interval and summing over the n intervals,
S = ∫ f(x) dx from x1 to x1+nh = (h/2) [f1 + 2f2 + 2f3 + ... + 2fn + f(n+1)]
This is known as the Trapezoidal rule. The integral may be rewritten as,
S = (f1/2 + f2 + f3 + f4 + ... + fn + f(n+1)/2) h
Thus to evaluate the integral the value of the function is needed at (n+1) points. This rule gives the exact value of the integral only if the function f(x) is linear.
Algorithm: (i) Integrating a tabulated function
1. for i = 1 to n+1 do Read fi endfor
2. sum ← (f1 + f(n+1))/2
3. for j = 2 to n do
4. sum ← sum + fj
endfor
5. Integral ← h·sum
6. Write Integral
7. Stop
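
A minimal Python sketch of algorithm (i); the sample table (y = x^2 on [0, 1] with h = 0.25) is an illustrative assumption:

def trapezoid(fs, h):
    s = (fs[0] + fs[-1]) / 2.0              # end points carry weight 1/2
    s += sum(fs[1:-1])                      # interior points carry weight 1
    return h * s

fs = [0.0, 0.0625, 0.25, 0.5625, 1.0]       # x^2 at x = 0, 0.25, 0.5, 0.75, 1
print(trapezoid(fs, 0.25))                  # 0.34375, versus the exact 1/3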
Algorithm: (ii) Evaluation of the integral of a known function
1. Read x1, x2, e
2. h ← (x2 - x1)
3. S1 ← (f(x1) + f(x2))/2
4. I1 ← h·S1
5. i ← 1
repeat
6. x ← x1 + h/2
7. for j = 1 to i do
8. S1 ← S1 + f(x)
9. x ← x + h
endfor
10. h ← h/2
11. i ← 2i
12. I0 ← I1
13. I1 ← h·S1
14. until |I1 - I0| ≤ e·|I1|
15. Write I1, h, i
16. Stop
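
A minimal Python sketch of algorithm (ii): each pass adds the new midpoints to S1, halves h, and stops when two successive estimates agree to within the relative tolerance e. The same interval-halving pattern reappears in algorithm (ii) of Simpson's rule below. The sample integrand is an illustrative assumption:

import math

def trapezoid_adaptive(f, x1, x2, e):
    h = x2 - x1
    S1 = (f(x1) + f(x2)) / 2.0
    I1 = h * S1
    i = 1
    while True:
        x = x1 + h / 2.0
        for _ in range(i):                  # add the i new midpoints
            S1 += f(x)
            x += h
        h /= 2.0
        i *= 2
        I0, I1 = I1, h * S1
        if abs(I1 - I0) <= e * abs(I1):
            return I1, h, i

print(trapezoid_adaptive(math.sin, 0.0, math.pi, 1e-8))   # exact value is 2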

SIMPSON'S RULE
Simpson's rule is a popular numerical integration technique. It is based on approximating the function f(x) by fitting quadratics through sets of three points. Integrating the quadratic through (xi, fi), (xi+h, f(i+1)) and (xi+2h, f(i+2)) over that pair of intervals gives (h/3)(fi + 4f(i+1) + f(i+2)); summing over successive pairs of intervals,
S = ∫ f(x) dx from x1 to x1+nh = (h/3) [f1 + 4f2 + 2f3 + 4f4 + ... + 2f(n-1) + 4fn + f(n+1)]
This is called Simpson's rule. In using the above formula it is implied that f is tabulated at an odd number of points. It may be observed that Simpson's rule is exact for a cubic even though it was derived for a quadratic.

Algorithm: (i) Integrating a tabulated function
Assume the function is tabulated at an odd number of points, i.e. (n+1) is odd.
1. for i = 1 to n+1 do Read fi endfor
2. sum ← (f1 + f(n+1))
3. for i = 2 to n in steps of 2 do
4. sum ← sum + 4fi
endfor
5. for i = 3 to n-1 in steps of 2 do
6. sum ← sum + 2fi
endfor
7. Integral ← h·sum/3
8. Write Integral
9. Stop
If (n+1) is not odd then the integral over the first interval is computed as (f1 + f2)·h/2 and added to the integral computed using Simpson's rule for the rest of the points.
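
A minimal Python sketch of algorithm (i); the sample table (y = x^2 on [0, 1], h = 0.25) is an illustrative assumption:

def simpson(fs, h):
    if len(fs) % 2 == 0:
        raise ValueError("needs an odd number of points")
    s = fs[0] + fs[-1]
    s += 4.0 * sum(fs[1:-1:2])              # f2, f4, ... carry weight 4
    s += 2.0 * sum(fs[2:-1:2])              # f3, f5, ... carry weight 2
    return h * s / 3.0

fs = [0.0, 0.0625, 0.25, 0.5625, 1.0]
print(simpson(fs, 0.25))                    # exactly 1/3, since x^2 is a quadratic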
Algorithm: (ii) Evaluation of the integral of a known function
1. Read x1, x2, e
2. h ← (x2 - x1)/2
3. i ← 2
4. S1 ← f(x1) + f(x2)
5. S2 ← 0
6. S4 ← f(x1 + h)
7. I0 ← 0
8. In ← (S1 + 4S4)·h/3
repeat
9. S2 ← S2 + S4
10. S4 ← 0
11. x ← x1 + h/2
12. for j = 1 to i do
13. S4 ← S4 + f(x)
14. x ← x + h
endfor
15. h ← h/2
16. i ← 2i
17. I0 ← In
18. In ← (S1 + 2S2 + 4S4)·h/3
19. until |In - I0| ≤ e·|In|
20. Write In, h, i
21. Stop

NUMERICAL SOLUTION OF DIFFERENTIAL EQUATIONS
Consider a differential equation of the type,
dy/dx = f(x, y)
with an initial condition, y = y1 at x = x1.
The function f(x, y) may be a general non-linear function of (x, y) or may be a table of values. When the value of y is given at x = x1 and the solution is required for x1 ≤ x ≤ xf, the problem is called an initial value problem. On the other hand, if y is given at x = xf and the solution is required over xf ≤ x ≤ x1, the problem is called a boundary value problem. We deal here only with initial value problems. A solution is a curve y = g(x) in the (x, y) plane whose slope at every point (x, y) in the specified region is given by the eqn dy/dx = f(x, y).
EULER'S METHOD
Consider the differential equation,
dy/dx = f(x, y)
with an initial condition, y(x1) = y1.
Given (x1, y1), the slope at this point is obtained as,
(dy/dx) at (x1, y1) = f(x1, y1)
The next point y2 on the solution curve may be extrapolated by taking a small step h in the direction given by the above slope. Thus,
y(x1 + h) = y2 = y1 + h f(x1, y1)
In general, y(i+1) = yi + h f(xi, yi)
This eqn is called Euler's eqn.
The error propagation in successive steps of the Euler technique is governed by,
e(i+1) = (1 + h ∂f/∂y) ei
If |1 + h ∂f/∂y| < 1 the errors damp down with successive iterations; in this case the Euler method is said to be stable. Otherwise the errors increase in successive iterations and the procedure is unstable.
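
A minimal Python sketch of Euler's method; the sample equation dy/dx = y with y(0) = 1 (exact solution e^x) is an illustrative assumption:

def euler(f, x1, y1, h, xf):
    n = round((xf - x1) / h)                # number of equal steps
    for _ in range(n):
        y1 = y1 + h * f(x1, y1)             # step along the local slope
        x1 = x1 + h
    return x1, y1

print(euler(lambda x, y: y, 0.0, 1.0, 0.1, 1.0))   # y(1) ~ 2.5937, versus e = 2.7183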
RUNGE-KUTTA METHOD (SECOND ORDER)
Consider the differential equation,
dy/dx = f(x, y)
with an initial condition, y(x1) = y1. Let s1 = f(x1, y1) be the slope at (x1, y1). Let the line through (x1, y1) with slope s1 cut the vertical line through x2 = x1 + h at (x1 + h, ȳ2). Let the slope of the solution curve y(x) at (x1 + h, ȳ2) be s2 = f(x2, ȳ2). Now we can draw a straight line from (x1, y1) with the slope (s1 + s2)/2. The point y2 where this straight line cuts the vertical line at x1 + h is taken as the approximate solution of the differential eqn at x1 + h. Thus,
y2 = y1 + h (s1 + s2)/2
In general, y(i+1) = yi + h (si + s(i+1))/2, where si = f(xi, yi) and s(i+1) = f(x(i+1), yi + si h)
This method is called the second order Runge-Kutta method.
Algorithm:
1. Read x1, y1, h, xf
2. while x1 ≤ xf do
begin
3. Write x1, y1
4. s1 ← f(x1, y1)
5. x2 ← x1 + h
6. y2 ← y1 + h·s1
7. s2 ← f(x2, y2)
8. y2 ← y1 + h·(s1 + s2)/2
9. x1 ← x2
10. y1 ← y2
end
11. Stop
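
A minimal Python sketch of this algorithm; the sample equation dy/dx = y with y(0) = 1 is an illustrative assumption. Note how the error shrinks compared with the Euler sketch earlier:

def rk2(f, x1, y1, h, xf):
    n = round((xf - x1) / h)
    for _ in range(n):
        s1 = f(x1, y1)                      # slope at the current point
        x2 = x1 + h
        y2 = y1 + h * s1                    # Euler predictor
        s2 = f(x2, y2)                      # slope at the predicted point
        y1 = y1 + h * (s1 + s2) / 2.0       # average the two slopes
        x1 = x2
    return x1, y1

print(rk2(lambda x, y: y, 0.0, 1.0, 0.1, 1.0))     # y(1) ~ 2.7141, versus e = 2.7183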
