
Numerical Methods and Analysis

Pre-requisites: Advanced Engineering Mathematics; Computer Fundamentals and Programming


I. Roots of Equations
1. Bisection Method
2. Newton-Raphson Method
3. Modified Newton's Method
4. Secant Method; Regula Falsi
5. Root Solving as inverse interpolation
6. Simple One-point iteration
II. Solution of Simultaneous Linear Algebraic Equations and Matrix Inversion
1. Basic Matrix Terminology and Operations;
2. Matrix representation and formal solution of simultaneous linear equations;
3. Gauss elimination; Gauss-Jordan elimination;
4. Matrix inversion by Gauss-Jordan elimination;
5. Gauss-Seidel iteration and concepts of relaxation;
III. Least-Squares Curve Fitting and Functional Approximation
1. Least-squares fitting of discrete points;
2. Approximation of continuous functions;
3. Non-Linear models
4. Multiple Regression
5. Interpolation
IV. Interpolation and Extrapolation
1. Generation of difference tables;
2. Gregory-Newton interpolation formula
3. Interpolation with central differences
4. Interpolation with unequally spaced data
5. Lagrange interpolation
6. Chebyshev polynomial interpolation with cubic spline functions
7. Romberg Algorithm
V. Numerical Integration
1. Trapezoidal rule; Simpson's rule
2. Romberg integration
3. Gauss quadrature
4. Multiple integrals; dealing with singularities
5. Improper Integrals (Extended Midpoint Rule)
VI. Ordinary Differential Equations
1. General initial-value problem
2. Euler method; Heun's; Ralston's;
3. Runge-Kutta type formula; Adams open (Adams-Bashforth) and closed (Adams-Moulton)
formulas;
4. Predictor-corrector methods
Reference:
1. Chapra, S. C., Canale, R. P., Numerical Methods for Engineers, 6th Ed.
Note: the underlined subtopics are subject to change or elimination.
Grading System

MG = CSM(35%) + LQ(35%) + ME(30%)

CSM = (ASS + SW)(40%) + Quizzes(60%)

FG = CSF(35%) + MG(35%) + FE(30%)

CSF = (ASS + SW)(30%) + Quizzes(35%) + LQ(35%)


Numerical Methods
Approximations and Errors
Understanding the concept of error is essential to the effective use of numerical methods.
A classic example is the falling parachutist, whose velocity can be determined both analytically
and numerically. Although the numerical techniques yield estimates that are close to the exact
analytic solution, there is always a discrepancy, or error, because the numerical method
involves an approximation. In this case the analytic result allows us to compute that error
exactly.
Error definitions
A numerical error arises from the use of approximations to represent exact
mathematical operations and quantities:

True value = approximation + error

By rearranging, we find that the numerical error equals the discrepancy between the truth
and the approximation, as in

E_t = true value − approximation    (TRUE ERROR)

where E_t is used to designate the exact value of the error. One way to account for the
magnitudes of the quantities being evaluated is to normalize the error to the true value, as in

Fractional relative error = error / true value

The relative error can also be multiplied by 100 percent to express it as the true percent
relative error:

ε_t = (true error / true value) × 100%    (TRUE PERCENT RELATIVE ERROR)
Example: Suppose that you have the task of measuring the length of a bridge and of a rivet and
come up with 9,999 cm and 9 cm, respectively. If the true values are 10,000 cm and 10 cm,
compute (a) the true error and (b) the true percent relative error for each case.
Solution:
a. The true error for the bridge is

E_t = 10,000 − 9,999 = 1 cm

and for the rivet

E_t = 10 − 9 = 1 cm

b. The true percent relative error for the bridge is

ε_t = (1 / 10,000) × 100% = 0.01%

and for the rivet

ε_t = (1 / 10) × 100% = 10%

Thus, although both measurements have an error of 1 cm, the relative error for the rivet is much
greater. We would conclude that we have done an adequate job of measuring the bridge, whereas
our estimate for the rivet leaves something to be desired.
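The two definitions above can be sketched as a short Python snippet (the function names are chosen here for illustration, not taken from the text):

```python
# True error and true percent relative error for the bridge/rivet example.
def true_error(true_value, approximation):
    return true_value - approximation

def percent_relative_error(true_value, approximation):
    # Normalize the error to the true value, expressed as a percentage.
    return (true_value - approximation) / true_value * 100

# Bridge: true 10,000 cm, measured 9,999 cm
print(true_error(10000, 9999), percent_relative_error(10000, 9999))  # 1, 0.01
# Rivet: true 10 cm, measured 9 cm
print(true_error(10, 9), percent_relative_error(10, 9))              # 1, 10.0
```

Both measurements share the same absolute error, but the relative error exposes how much worse the rivet measurement is.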
An alternative is to normalize the error using the best available estimate of the true value, that
is, the approximation itself:

ε_a = (approximate error / approximation) × 100%

where the subscript a signifies that the error is normalized to an approximate value.


For iterative computations, the approximate percent relative error is determined according to

ε_a = ((present approximation − previous approximation) / present approximation) × 100%

Often, when performing computations, we may not be concerned with the sign of the error but
only with whether its absolute value is lower than a pre-specified tolerance ε_s:

|ε_a| < ε_s

If this relationship holds, our result is assumed to be within the pre-specified acceptable
level ε_s, and we can be assured that the result is correct to at least n significant figures,
where

ε_s = (0.5 × 10^(2−n))%
Example: In mathematics, functions can often be represented by infinite series. For example, the
exponential function can be computed using

e^x = 1 + x + x^2/2! + x^3/3! + … + x^n/n!

Thus, as more terms are added in sequence, the approximation becomes a better and better
estimate of the true value of e^x. Starting with the simplest version, e^x ≈ 1, add the terms
one at a time in order to estimate e^0.5. After each new term is added, compute the true and
approximate percent relative errors. Note that the true value is e^0.5 = 1.648721271. Add terms
until the absolute value of the approximate error estimate ε_a falls below a pre-specified
error criterion ε_s conforming to three significant figures.


Solution:
For three significant figures, n = 3:

ε_s = (0.5 × 10^(2−3))% = 0.05%

Thus, we will add terms to the series until ε_a falls below this level. The first estimate is
simply e^0.5 ≈ 1. The second is generated by adding the second term, e^x ≈ 1 + x, or for
x = 0.5:

e^0.5 ≈ 1 + 0.5 = 1.5

Solving for the true percent relative error gives

ε_t = (1.648721271 − 1.5) / 1.648721271 × 100% = 9.02%

and solving for the approximate estimate of the error,

ε_a = (1.5 − 1) / 1.5 × 100% = 33.3%

Because ε_a is not less than the required value ε_s, we continue the computation by adding
another term, x^2/2!, and repeat the error calculations. The process is continued until
ε_a < ε_s:

Terms   Result        ε_t (%)   ε_a (%)
1       1             39.3      —
2       1.5           9.02      33.3
3       1.625         1.44      7.69
4       1.645833333   0.175     1.27
5       1.648437500   0.0172    0.158
6       1.648697917   0.00142   0.0158

After six terms, |ε_a| < ε_s (0.0158% < 0.05%), and the computation is terminated.
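The iteration above can be sketched in Python; the loop mirrors the worked example, and `maclaurin_exp` is a name chosen here for illustration:

```python
import math

# Add terms of e^x = sum x**n / n! at x = 0.5 until the approximate
# percent relative error falls below eps_s = 0.5 * 10**(2 - 3) = 0.05 %.
def maclaurin_exp(x, eps_s):
    approx = 0.0
    term = 1.0          # x**0 / 0!
    n = 0
    while True:
        old = approx
        approx += term
        n += 1
        term *= x / n   # next term: x**n / n!
        eps_a = abs((approx - old) / approx) * 100
        if n > 1 and eps_a < eps_s:
            break
    eps_t = abs((math.exp(x) - approx) / math.exp(x)) * 100
    return n, approx, eps_t, eps_a

terms, estimate, eps_t, eps_a = maclaurin_exp(0.5, 0.05)
print(terms, estimate, eps_t, eps_a)  # 6 terms, estimate ~1.648697917
```

Running it reproduces the last row of the table: six terms, an estimate of 1.648697917, and |ε_a| ≈ 0.0158% < 0.05%.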

Round-off Errors
Round-off errors result when approximate numbers are used to represent exact numbers. Numbers
such as π, e, or √7 cannot be represented by a fixed number of significant figures. Therefore,
they cannot be represented exactly by the computer. The discrepancy introduced by this
omission of significant figures is called round-off error.
Computer representation of numbers
Numerical round-off errors are directly related to the manner in which numbers are stored
in a computer. The fundamental unit whereby information is represented is called a word. This is
an entity that consists of a string of binary digits, or bits. Numbers are typically stored in one or
more words. To understand how this is accomplished, we must first review some material related
to number systems.
A number system is merely a convention for representing quantities.
The base-10 (decimal) number system uses the number 10 as a reference for constructing the
system; it uses ten digits, 0 through 9.
Example: If we have the number 86,409, then we have eight groups of 10,000, six groups of
1,000, four groups of 100, zero groups of 10, and nine more units, or

(8 × 10^4) + (6 × 10^3) + (4 × 10^2) + (0 × 10^1) + (9 × 10^0) = 86,409

The binary, or base-2, system uses only the digits 0 and 1.

Example: the binary number 11 is equivalent to

(1 × 2^1) + (1 × 2^0) = 2 + 1 = 3

in the base-10 system.

Example: Show that the binary number 10101101 is equivalent to the decimal number 173.
Solution:

2^7: 128 × 1 = 128
2^6:  64 × 0 =   0
2^5:  32 × 1 =  32
2^4:  16 × 0 =   0
2^3:   8 × 1 =   8
2^2:   4 × 1 =   4
2^1:   2 × 0 =   0
2^0:   1 × 1 =   1
              ----
               173
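The same conversion can be checked in Python, summing bit × 2^position exactly as in the table (the built-in `int(..., 2)` confirms the result):

```python
# Convert binary 10101101 to decimal by summing bit * 2**position,
# enumerating from the least significant bit on the right.
bits = "10101101"
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)         # 173
print(int(bits, 2))  # 173, using Python's built-in base conversion
```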
Integer representation employs the first bit of a word to indicate the sign, with a 0 for positive
and a 1 for negative; this is called the signed magnitude method.
For example, the integer value −173 would be stored on a 16-bit computer as

1 0 0 0 0 0 0 0 1 0 1 0 1 1 0 1

where the first box is the sign and the remaining boxes hold the binary representation of 173.

Range of integers
Example: Determine the range of base-10 integers that can be represented on a 16-bit computer.
Solution: Consider the largest value, 0111111111111111 (a sign bit of 0 followed by fifteen 1s).
Its decimal equivalent is

(1 × 2^14) + (1 × 2^13) + (1 × 2^12) + … + (1 × 2^1) + (1 × 2^0) = 32,767

Thus, a 16-bit computer word can store decimal integers ranging from −32,767 to +32,767.
Truncation Errors
Truncation errors are those that result from using an approximation in place of an exact
mathematical procedure. For example, we approximated the derivative of the velocity of a falling
parachutist by a finite-divided-difference equation of the form

dv/dt ≈ Δv/Δt = [v(t_{i+1}) − v(t_i)] / (t_{i+1} − t_i)

A truncation error was introduced into the numerical solution because the difference equation
only approximates the true value of the derivative. To gain insight into the properties of
such errors, we now turn to a mathematical formulation that is used widely in numerical methods
to express functions in an approximate fashion: the Taylor series.
The Taylor Series

A useful way to gain insight into the Taylor series is to build it term by term. The first term
in the series is

f(x_{i+1}) ≈ f(x_i)

This relationship, which is called the zero-order approximation, indicates that the value of f
at the new point is the same as the value at the old point.

Additional terms of the Taylor series are required to provide a better estimate. The first-order
approximation is developed by adding another term to yield

f(x_{i+1}) ≈ f(x_i) + f′(x_i)(x_{i+1} − x_i)

Although this equation can predict a change, it is exact only for a straight-line, or
linear, trend. Therefore, a second-order term is added to the series in order to capture some of
the curvature that the function might exhibit:

f(x_{i+1}) ≈ f(x_i) + f′(x_i)(x_{i+1} − x_i) + [f″(x_i)/2!](x_{i+1} − x_i)^2

In a similar manner, additional terms can be included to complete the series:

f(x_{i+1}) ≈ f(x_i) + f′(x_i)(x_{i+1} − x_i) + [f″(x_i)/2!](x_{i+1} − x_i)^2 + … + [f^(n)(x_i)/n!](x_{i+1} − x_i)^n + R_n

where the remainder term is

R_n = [f^(n+1)(ξ)/(n+1)!](x_{i+1} − x_i)^(n+1)

and ξ is a value of x that lies somewhere between x_i and x_{i+1}. It is often convenient to
simplify the Taylor series by defining a step size h = x_{i+1} − x_i and expressing it as

f(x_{i+1}) ≈ f(x_i) + f′(x_i)h + [f″(x_i)/2!]h^2 + … + [f^(n)(x_i)/n!]h^n + R_n

where

R_n = [f^(n+1)(ξ)/(n+1)!]h^(n+1)
Example: Use zero- through fourth-order Taylor series expansions to approximate the function

f(x) = −0.1x^4 − 0.15x^3 − 0.5x^2 − 0.25x + 1.2

from x_i = 0 with h = 1, that is, predict the function's value at x_{i+1} = 1.

Solution:
Because f(0) = 1.2 and f(1) = 0.2, the true value that we are trying to predict is 0.2.

The zero-order Taylor series approximation (n = 0) is

f(x_{i+1}) ≈ 1.2

Using this formulation results in a truncation error at x = 1 of

E_t = 0.2 − 1.2 = −1.0

For n = 1, the first derivative must be determined and evaluated at x_i = 0:

f′(0) = −0.4(0.0)^3 − 0.45(0.0)^2 − 1.0(0.0) − 0.25 = −0.25

Therefore, the first-order approximation is

f(x_{i+1}) ≈ 1.2 − 0.25h

which at x = 1 gives f(1) = 0.95. Thus,

E_t = 0.2 − 0.95 = −0.75

For n = 2, the second derivative must be determined and evaluated at x_i = 0:

f″(0) = −1.2(0.0)^2 − 0.9(0.0) − 1.0 = −1.0

Therefore, the second-order approximation is

f(x_{i+1}) ≈ 1.2 − 0.25h − 0.5h^2

which at x = 1 gives f(1) = 0.45. Thus,

E_t = 0.2 − 0.45 = −0.25

Continuing the process, the fourth-order expansion yields exactly the same polynomial we started
with:

f(x_{i+1}) ≈ 1.2 − 0.25h − 0.5h^2 − 0.15h^3 − 0.1h^4

where the remainder term is

R_4 = [f^(5)(ξ)/5!]h^5 = 0

because the fifth derivative of a fourth-order polynomial is zero. Thus, at x_{i+1} = 1:

f(1) = 1.2 − 0.25(1) − 0.5(1)^2 − 0.15(1)^3 − 0.1(1)^4 = 0.2
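The zero- through fourth-order estimates can be generated from the Maclaurin coefficients of the polynomial; `taylor_estimate` is a name chosen here for illustration:

```python
# Taylor approximations of f(x) = -0.1x^4 - 0.15x^3 - 0.5x^2 - 0.25x + 1.2
# about x_i = 0 with step h = 1. Because the expansion point is 0, the
# coefficients f^(k)(0)/k! are just the polynomial's own coefficients.
def taylor_estimate(order, h=1.0):
    coeffs = [1.2, -0.25, -0.5, -0.15, -0.1]
    return sum(c * h**k for k, c in enumerate(coeffs[:order + 1]))

for n in range(5):
    print(n, taylor_estimate(n))
# Mathematically: 1.2, 0.95, 0.45, 0.3, 0.2 (up to float round-off)
```

The fourth-order result recovers the true value 0.2 exactly, since R_4 = 0 for a fourth-order polynomial.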

Use of the Taylor expansion to approximate a function with an infinite number of derivatives:

Example: Use Taylor series expansions with n = 0 to 6 to approximate f(x) = cos x at
x_{i+1} = π/3 on the basis of the value of f(x) and its derivatives at x_i = π/4. Note that this
means the step size is h = π/3 − π/4 = π/12.

Solution:
The true value is f(π/3) = cos(π/3) = 0.5.

The zero-order approximation is

f(π/3) ≈ cos(π/4) = 0.707106781

which represents a percent relative error of

ε_t = (0.5 − 0.707106781) / 0.5 × 100% = −41.4%

For the first-order approximation, f′(x) = −sin x, so

f(π/3) ≈ cos(π/4) − sin(π/4)(π/12) = 0.521986659

which has ε_t = −4.40%.

For the second-order approximation, f″(x) = −cos x, so

f(π/3) ≈ cos(π/4) − sin(π/4)(π/12) − [cos(π/4)/2!](π/12)^2 = 0.497754491

with ε_t = 0.449%. The process can be continued, with the results summarized below:

Order n   f^(n)(x)   f(π/3)        ε_t (%)
0         cos x      0.707106781   −41.4
1         −sin x     0.521986659   −4.4
2         −cos x     0.497754491   0.449
3         sin x      0.499869147   2.62×10^−2
4         cos x      0.500007551   −1.51×10^−3
5         −sin x     0.500000304   −6.08×10^−5
6         −cos x     0.499999988   2.40×10^−6
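One way to reproduce the table is to exploit the fact that the derivatives of cos x cycle through cos, −sin, −cos, sin; the sketch below (with the illustrative name `cos_taylor`) sums the expansion about π/4:

```python
import math

# Taylor expansion of cos x about x_i = pi/4, evaluated at x = pi/3,
# so h = pi/12. The nth derivative of cos cycles with period 4.
def cos_taylor(order):
    xi, h = math.pi / 4, math.pi / 12
    cycle = [math.cos,
             lambda x: -math.sin(x),
             lambda x: -math.cos(x),
             math.sin]
    return sum(cycle[n % 4](xi) * h**n / math.factorial(n)
               for n in range(order + 1))

for n in range(7):
    est = cos_taylor(n)
    print(n, est, (0.5 - est) / 0.5 * 100)  # order, estimate, eps_t in %
```

Each row matches the table above, with ε_t shrinking rapidly as terms are added.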

Using the Taylor Series to Estimate Truncation Errors

Other useful forms can be derived from the Taylor series. For the velocity v(t) of the falling
parachutist, a first-order expansion gives

v(t_{i+1}) = v(t_i) + v′(t_i)(t_{i+1} − t_i) + R_1

which can be solved for

v′(t_i) = [v(t_{i+1}) − v(t_i)] / (t_{i+1} − t_i) − R_1 / (t_{i+1} − t_i)

where the first term is the first-order approximation of the derivative and the second is the
truncation error. From the remainder term,

R_1 / (t_{i+1} − t_i) = [v″(ξ)/2!](t_{i+1} − t_i)

or

R_1 / (t_{i+1} − t_i) = O(t_{i+1} − t_i)

Thus, the truncation error is of order t_{i+1} − t_i: the error of our derivative approximation
should be proportional to the step size.


The Effect of Nonlinearity and Step Size on the Taylor Series Approximation

Example: Consider a plot of the function f(x) = x^m for m = 1, 2, 3, 4 over the range from
x = 1 to 2. Notice that for m = 1 the function is linear, and that as m increases, more
curvature, or nonlinearity, is introduced. Examine how the first-order Taylor series
approximation performs as the function becomes more nonlinear, and how the remainder R_1
changes as the step size is varied.

Solution:
The first-order Taylor series expansion is

f(x_{i+1}) ≈ f(x_i) + m x_i^(m−1) h

which has a remainder

R_1 = [f″(x_i)/2!]h^2 + [f‴(x_i)/3!]h^3 + …

First, we can examine how the approximation performs as m increases, that is, as the function
becomes more nonlinear.

For m = 1, the actual value of the function at x = 2 is 2. The Taylor series yields

f(2) ≈ 1 + 1(1) = 2

and R_1 = 0. The remainder is zero because the second and higher derivatives of a linear
function are zero. Thus, as expected, the first-order Taylor series expansion is perfect when
the underlying function is linear.

For m = 2, the actual value is f(2) = 2^2 = 4. The first-order Taylor series approximation is

f(2) ≈ 1 + 2(1) = 3

and

R_1 = (2/2)(1)^2 + 0 + 0 + … = 1

Thus, because the function is a parabola, the straight-line approximation results in a
discrepancy. Note that the remainder is determined exactly.

For m = 3, the actual value is f(2) = 2^3 = 8. The first-order Taylor series approximation is

f(2) ≈ 1 + 3(1)^2(1) = 4

and

R_1 = (6/2)(1)^2 + (6/6)(1)^3 + 0 + … = 4

For m = 4, the actual value is f(2) = 2^4 = 16. The first-order Taylor series approximation is

f(2) ≈ 1 + 4(1)^3(1) = 5

and

R_1 = (12/2)(1)^2 + (24/6)(1)^3 + (24/24)(1)^4 + 0 + … = 11

Observe that R_1 increases as the function becomes more nonlinear. This example permits a
complete determination of the Taylor series remainder.

Next, we examine f(x) = x^m for the case m = 4 and observe how R_1 changes as the step size h
is varied. The first-order approximation is

f(x_{i+1}) ≈ f(x_i) + 4x_i^3 h

If x_i = 1, then f(x_i) = 1 and this equation can be expressed as

f(1 + h) ≈ 1 + 4h

with a remainder of

R_1 = 6h^2 + 4h^3 + h^4
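Because the function is a polynomial, the first-order remainder is exact and can be verified numerically; a quick check (with the illustrative name `remainder`) is:

```python
# For f(x) = x^4 expanded about x_i = 1, the first-order remainder is
# f(1+h) - (1 + 4h), which should equal 6h^2 + 4h^3 + h^4 exactly.
def remainder(h):
    return (1 + h)**4 - (1 + 4 * h)

for h in [1.0, 0.5, 0.25, 0.125]:
    predicted = 6 * h**2 + 4 * h**3 + h**4
    print(h, remainder(h), predicted)  # the two columns agree
```

At h = 1 the remainder is 11, matching the m = 4 case above; halving h shrinks R_1 roughly by a factor of four, since the h^2 term dominates for small h.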

Bracketing Methods
These methods exploit the fact that a function typically changes sign in the vicinity of a root.
They are called bracketing methods because two initial guesses for the root are required. As the
name implies, these guesses must bracket, or be on either side of, the root.
Graphical Methods

A simple method for obtaining an estimate of the root of the equation f(x) = 0 is to make
a plot of the function and observe where it crosses the x axis. This point, which represents the
x value for which f(x) = 0, provides a rough approximation of the root.

Example: Use the graphical approach to determine the drag coefficient c needed for a
parachutist of mass m = 68.1 kg to have a velocity of 40 m/s after free falling for time
t = 10 s. Note: the acceleration due to gravity is 9.8 m/s^2.

Given: t = 10 s, v = 40 m/s, g = 9.8 m/s^2, m = 68.1 kg

Solution:
Using the formula

v(t) = (gm/c)(1 − e^(−(c/m)t))

and moving everything to one side, the required c is a root of

f(c) = (gm/c)(1 − e^(−(c/m)t)) − v

f(c) = [9.8(68.1)/c](1 − e^(−(c/68.1)(10))) − 40

f(c) = (667.38/c)(1 − e^(−0.146843c)) − 40

Evaluating at several values of c:

c      f(c)
4      34.115
8      17.653
12     6.067
16     −2.269
20     −8.401

Plotting these points shows that the curve crosses the c axis between 12 and 16, at roughly
c ≈ 14.75. Applying c = 14.75:

f(14.75) = (667.38/14.75)(1 − e^(−0.146843(14.75))) − 40 = 0.059

which is close to zero. It can also be checked by substituting c = 14.75 into the velocity
equation:

v = [9.8(68.1)/14.75](1 − e^(−(14.75/68.1)(10))) = 40.059 m/s
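The tabulated values of f(c) can be generated directly; the sketch below defines f with the example's parameters as defaults:

```python
import math

# f(c) from the example: the drag coefficient c that gives v = 40 m/s
# after 10 s of free fall (g = 9.8 m/s^2, m = 68.1 kg) is a root of f.
def f(c, g=9.8, m=68.1, t=10.0, v=40.0):
    return g * m / c * (1 - math.exp(-c / m * t)) - v

for c in [4, 8, 12, 16, 20]:
    print(c, round(f(c), 3))   # reproduces the table above
print(round(f(14.75), 3))      # close to zero, about 0.059
```

The sign change between c = 12 and c = 16 confirms that the root is bracketed there.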

The Bisection Method

The bisection method, which is alternatively called binary chopping, interval halving, or
Bolzano's method, is one type of incremental search method in which the interval is always
divided in half. If a function changes sign over an interval, the function value at the midpoint
is evaluated. The location of the root is then determined as lying at the midpoint of the
subinterval within which the sign change occurs. The process is repeated to obtain refined
estimates.
An algorithm for bisection
Step 1: Choose lower x_l and upper x_u guesses for the root, so that the function changes
sign over the interval. This can be checked by ensuring that f(x_l)f(x_u) < 0.
Step 2: An estimate of the root x_r is determined by

x_r = (x_l + x_u) / 2

Step 3: Make the following evaluations to determine in which subinterval the root lies:
a. If f(x_l)f(x_r) < 0, the root lies in the lower subinterval. Therefore, set x_u = x_r and
return to Step 2.
b. If f(x_l)f(x_r) > 0, the root lies in the upper subinterval. Therefore, set x_l = x_r and
return to Step 2.
c. If f(x_l)f(x_r) = 0, the root equals x_r; terminate the computation.
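The three steps above can be sketched as a minimal Python implementation, applied here to the parachutist function from the graphical-method example (the stopping tolerance on the interval half-width is an added convention, not part of the text's algorithm):

```python
import math

# Parachutist function: root is the drag coefficient giving v = 40 m/s.
def f(c):
    return 9.8 * 68.1 / c * (1 - math.exp(-c / 68.1 * 10)) - 40.0

def bisect(func, xl, xu, tol=1e-6, max_iter=100):
    if func(xl) * func(xu) > 0:          # Step 1: root must be bracketed
        raise ValueError("root is not bracketed")
    for _ in range(max_iter):
        xr = (xl + xu) / 2               # Step 2: midpoint estimate
        test = func(xl) * func(xr)       # Step 3: locate the sign change
        if test < 0:
            xu = xr                      # root in lower subinterval
        elif test > 0:
            xl = xr                      # root in upper subinterval
        else:
            return xr                    # exact root found
        if (xu - xl) / 2 < tol:
            break
    return (xl + xu) / 2

root = bisect(f, 12.0, 16.0)
print(round(root, 4))  # roughly 14.78, refining the graphical estimate
```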
