
YANI'S ENGG 407 NUMERICAL METHODS NOTES (WINTER 2010)
DANIEL CHIN
UNIVERSITY OF CALGARY
MECH ENGG
(DACHIN@UCALGARY.CA)

Lecture notes from Yani's (Pouyan Jazayeri) lectures, winter 2010, at the University of Calgary

Date: October 4, 2010.




For future emails (after I graduate): mr.dan.chin@gmail.com


Part 1. Introduction
This is the ENGG 407 Numerical Methods notes package that I am compiling from my handwritten notes from class. This is my first attempt at using LaTeX to create a document. In the future perhaps I will be able to fully use LaTeX in all of my mathematical courses to type notes. Unfortunately I cannot easily include figures from classes dealing with physical mechanical systems, etc. I hope that during this learning process with LaTeX, I will be able to more efficiently type my notes, especially mathematical formulas.
Last updated: October 4, 2010: Finished! As of yet there are no cheap plastic Yani notes.

Crappy scanned versions included


1. Notes
Curriculum changes in 2009-2010 to gear the course to non-zoo students mean that memory and all notes regarding how computers work will not be tested.



Part 2. January Notes


2. Numerical Methods
Wednesday January 13, 2010
Methods in which a complex mathematical problem is reformulated so it can be solved by simple arithmetic operations.
Inevitably there will be some discrepancies, or errors, between the numerical results & the true results.
2.1. Error. The True Error Et is defined as:

Et = true value − approximation

The shortcoming is that the order of magnitude of the error is not accounted for, so we have the True Fractional Relative Error:

Etf = (true value − approximation) / (true value)

The problem is that the true value is rarely known in real-world applications, so we will use the relative approximation approach to error.
Relative approximation error EA:

EA = (current approximation − previous approximation) / (current approximation) × 100%

The sign of EA (+/−) is not significant, so we use |EA|.
Typically we would like |EA| to be smaller than a predetermined threshold (tolerance) of ES (stopping condition), therefore |EA| < ES

In application of numerical methods, we need to focus on two types of errors


(1) Round off errors
(2) Truncation errors
Round Off Errors
These errors are introduced since computers can only represent some numbers approximately. Because each number is assigned a finite (fixed) number of bits, we are limited to a finite number of significant digits.

2.2. Review on how binary works (not tested in 2009-2010). Binary is a base 2 number system.
-if we have 16 bits for storing signed integers, the largest signed number we can store is:
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
The first bit is for the sign, therefore the largest number is:

2^14 + 2^13 + . . . + 2^1 + 2^0 = 32767
Note that we are restricted to a finite range of numbers
Real numbers are stored in a floating point format:

m × b^e

where m is the mantissa (significand), b is the base of the number system used (in computers it is 2) and e is the exponent used.
The mantissa is usually normalized before it is stored, such as by removing the leading zeroes after the decimal point.
Example: 1/19 = 0.05263 in a base 10 system with a 5 digit mantissa.
We store this as 5.2631 × 10^−2 instead of 0.05263 × 10^0 to gain 2 extra significant digits.
In a typical computer the real numbers are stored as per the IEEE 754 floating point format.
In the double precision format 64 bits are allocated for each number. The divisions are as follows:
1st bit is the sign bit of the mantissa
next 11 bits are the signed exponent
the final 52 bits are the mantissa itself
The largest value available for a floating point double is 1.7977 × 10^308
And the smallest available (normalized) number is 2.2251 × 10^−308
Any number between 0 and 2.2251 × 10^−308 is therefore rounded to the nearest available number, or is clipped (chopped) [truncated]
This is the finite precision available with computers
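These limits are easy to check live. Python floats are IEEE 754 doubles, so (as a side demonstration, not part of the original notes) we can query them directly:

```python
import sys

# Double-precision limits from IEEE 754, as exposed by Python
# (Python floats are 64-bit doubles).
print(sys.float_info.max)       # largest double, about 1.7977e308
print(sys.float_info.min)       # smallest normalized double, about 2.2251e-308
print(sys.float_info.epsilon)   # machine epsilon, about 2.22e-16

# Finite precision in action: adding a number far smaller than
# epsilon to 1.0 is lost entirely.
print(1.0 + 1e-20 == 1.0)       # True
```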


3. Truncation
Friday January 15 2010
Machine epsilon: parameter used to indicate the level of precision offered by a computer with the IEEE 754 format. For doubles machine epsilon is 2^−52 ≈ 2.22 × 10^−16.
This refers to the smallest relative difference that we can represent (the gap between 1 and the next representable number).
Significant digits are the digits of a number that can be used with confidence. To remove uncertainty we use scientific notation to show trailing zeroes.
#               Sig. digits
5000            1, 2, 3, or 4 (ambiguous)
5 × 10^3        1
5.000 × 10^3    4
5008            4
5008.5          5
0.005           1
Truncation Error: these are the result of using an approximation instead of an exact mathematical formula.

Ex: e^x = 1 + x + x²/2! + . . . + x^n/n! + . . . = Σ (n = 0 to ∞) x^n/n!

If we compute e^1.2, approximating with the first three terms we have

e^1.2 ≈ 1 + (1.2) + (1.2)²/2! = 2.92

[the real value is a bit different: e^1.2 ≈ 3.3201]

The Truncation Error is all of the remaining terms:

(1.2)³/3! + (1.2)⁴/4! + . . .

If we now use the first 4 terms:

e^1.2 ≈ 1 + (1.2) + (1.2)²/2! + (1.2)³/3! = 3.208

The relative approximation error is EA1 = |3.208 − 2.92| / 3.208 × 100% = 8.98%
Another example is the derivative of a function, which at a point x is defined as:

f'(x) = lim (Δx → 0) [f(x + Δx) − f(x)] / Δx

f'(x) ≈ [f(x + Δx) − f(x)] / Δx

The truncation error is introduced by using a finite (practical) value of Δx instead of the (impractical) Δx → 0


3.1. Taylor Series Polynomial. Very Important

-Very fundamental to numerical methods
-Suppose we have the value of a function at a single point xi and we know the value of all of its derivatives (1st, 2nd, . . . ) at the point xi
-Then we can calculate the value of that function at any other point xi+1 from our Taylor series:

f(xi+1) = f(xi) + f'(xi)h + f''(xi)h²/2! + f'''(xi)h³/3! + . . .

where h = xi+1 − xi
The function must also be smooth between xi and xi+1
For example:
f(x) = sin(x)
f'(x) = cos(x)
We have at the point xi = 0:
f(0) = sin(0) = 0
f'(0) = cos(0) = 1
f''(0) = −sin(0) = 0
f⁽³⁾(0) = −cos(0) = −1
f⁽⁴⁾(0) = sin(0) = 0
f⁽⁵⁾(0) = cos(0) = 1

Using the first six terms of the Taylor series approximation we can find sin(π/3):

xi = 0, xi+1 = π/3, h = π/3

f(π/3) = 0 + 1(π/3) + (0/2!)(π/3)² + (−1/3!)(π/3)³ + (0/4!)(π/3)⁴ + (1/5!)(π/3)⁵ + (H.O.T.)


Where H.O.T. means higher order terms. If we drop the higher order terms (as Yani says, drop it like it's hot¹) we then have truncation error and the Taylor series approximation:

f(π/3) ≈ π/3 − (π/3)³/3! + (π/3)⁵/5! = 0.866295

While our exact value is √3/2 = 0.866025
Therefore our truncation error ≈ 2.69 × 10⁻⁴

¹Actual quotation
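As a quick check (not part of the original notes), the three-term approximation of sin(π/3) can be computed directly:

```python
import math

def sin_taylor(x, terms=3):
    """Approximate sin(x) with the first `terms` nonzero Maclaurin terms."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

approx = sin_taylor(math.pi / 3)    # x - x^3/3! + x^5/5!
exact = math.sin(math.pi / 3)       # sqrt(3)/2 = 0.866025...
print(round(approx, 6), round(exact, 6), abs(approx - exact))
```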


4. Taylor Series Polynomial


Monday January 18, 2010
f(xi+1) = f(xi) + f'(xi)h + f''(xi)h²/2! + f'''(xi)h³/3! + . . .

General Equation:

f(xi+1) = Σ (k = 0 to ∞) f⁽ᵏ⁾(xi) hᵏ/k!

f(xi+1) = f(xi) + f'(xi)h + f''(xi)h²/2! + f'''(xi)h³/3! + . . .

Truncating after the first term gives the Zero Order Approximation; after the second term, the First Order Approximation; after the third, the Second Order Approximation; after the fourth, the Third Order Approximation; and so on.

4.1. Zero Order Approximation.

In the zero order equation we only consider the zeroth term:

f(xi+1) ≈ f(xi)


The induced truncation error (E) is the portion of the Taylor series that we did not account for. This is as follows:

E = f'(xi)h + f''(xi)h²/2! + . . .
4.2. First Order Approximation.

We now take into account the first derivative of the function (we add the slope):

f(xi+1) ≈ f(xi) + f'(xi)h

Again the induced truncation error (E) is the portion of the Taylor series that we truncated (ignored):

E = f''(xi)h²/2! + . . .
4.3. Improving the accuracy of taylor series approximation. We have two
different ways of increasing the accuracy of our taylor series approximation:
- increase number of terms (higher order of approximation, higher order derivatives)
- smaller step size (h)
4.4. Maclaurin series. A Taylor series for the case where the initial point is xi = 0.
ex. f(x) = sin(x), xi = 0, h = xi+1 − xi = xi+1
We know from trigonometry that sin'(x) = cos(x) and cos'(x) = −sin(x)
f(0) = 0, f'(0) = 1, f''(0) = 0, f⁽³⁾(0) = −1, f⁽⁴⁾(0) = 0, f⁽⁵⁾(0) = 1
We can already see that because of trigonometric patterns, we have a much simplified solution


If we now replace xi+1 with x we have the following formula, the formula that calculators in fact use to calculate the value of sin:

sin(x) = x − x³/3! + x⁵/5! − . . .
4.5. General Taylor Series information. If only n derivatives of a function are available, then

f(xi+1) ≈ f(xi) + f'(xi)h + . . . + f⁽ⁿ⁾(xi) hⁿ/n!

f(xi+1) = f(xi) + f'(xi)h + . . . + f⁽ⁿ⁾(xi) hⁿ/n! + Rn

where Rn is the remainder term, or truncation error.
From the first term of the truncated part of the Taylor series, the closed form expression for Rn can be expressed as:

Rn = f⁽ⁿ⁺¹⁾(c) hⁿ⁺¹/(n + 1)!

where c = some value between xi and xi+1.
From the relationship that we have gathered, we now know that Rn is proportional to a high order of the step size (h):

Rn ∝ hⁿ⁺¹

Or:

Rn = O(hⁿ⁺¹)

where O refers to "order of" in mathematical notation.


5. Behavior of Error and Numerical Differentiation


Wednesday January 20, 2010
5.1. Big OH in terms of error. In an approximation to a mathematical function, big OH shows how the error will behave as h → 0.
Example:

R2 = f'''(xi)h³/3! + f⁽⁴⁾(xi)h⁴/4! + . . .

where the expression is a function of h only (one variable).
The h³ term dominates the expression as h → 0,
therefore |R2| ≤ const·|h³| and R2 = O(h³)
Example: Taylor series for f(x) = eˣ at xi = 0:

f(x) = eˣ = 1 + x + x²/2! + x³/3! + . . .

xi = 0 and xi+1 = x
-Use the fourth order Taylor approximation to estimate e²
-Use the remainder expression to find the bounds on the truncation error

e² = 1 + 2 + 4/2! + 8/3! + 16/4! + R4 ≈ 1 + 2 + 2 + 8/6 + 16/24 = 7

|R4| ≤ k|h⁵|

R4 = f⁽⁵⁾(c) h⁵/5! = eᶜ · 2⁵/5!

since h = xi+1 − xi = 2 − 0 = 2 and the 5th derivative of eˣ is eˣ:

R4 = (4/15) eᶜ

where:
xi < c < xi+1
0 < c < 2
and since eᶜ increases continuously between 0 and 2, our error bounds are:

(4/15) e⁰ = 4/15 ≤ R4 ≤ (4/15) e² = 1.97

4/15 < R4 < 1.97


If we double check our answers:

e² = 7.389
|ε| = 7.389 − 7 = 0.389
4/15 < 0.389 < 1.97

We can see that our range is correct!
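The same check can be run numerically (a Python sketch, not from the original notes):

```python
import math

def exp_taylor4(x):
    """Fourth order Taylor approximation of e^x about 0."""
    return sum(x**n / math.factorial(n) for n in range(5))

approx = exp_taylor4(2)                # 1 + 2 + 4/2! + 8/3! + 16/4! = 7
true_err = math.e**2 - approx          # about 0.389
lower = 4 / 15                         # R4 bound with c = 0
upper = (4 / 15) * math.e**2           # R4 bound with c = 2, about 1.97

print(approx, round(true_err, 3), lower < true_err < upper)
```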
5.2. Numerical Differentiation. With the Taylor series

f(xi+1) = f(xi) + f'(xi)h + f''(xi)h²/2! + f'''(xi)h³/3! + . . .

we rearrange our terms to solve for f'(xi):

f'(xi) = [f(xi+1) − f(xi)]/h − f''(xi)h/2! − f'''(xi)h²/3! − . . .

(Forward Difference Formula)

f'(xi) ≈ [f(xi+1) − f(xi)]/h

The truncation error is O(h)

Backward Difference Formula
-1st backwards Taylor series expansion
(note that xi+1 is replaced with xi−1)

f(xi−1) = f(xi) − f'(xi)h + f''(xi)h²/2! − f'''(xi)h³/3! + . . .

(Backward Difference Formula)

f'(xi) ≈ [f(xi) − f(xi−1)]/h

Truncation error is still O(h)

Central Difference Method
The best approximation combines the two formulas (subtract the backward expansion from the forward one):

(Central Difference Formula)

f'(xi) ≈ [f(xi+1) − f(xi−1)]/(2h)

The error is now of the order E = O(h²)


6. Numerical Differentiation and N dimensional Taylor series


Friday January 22, 2010
The numerical differentiation techniques that we have used so far are called finite differences in numerical methods.
Find the numerical derivative at x1 given:
f(x0) = 2.2874 at x0 = 1.0
f(x1) = 2.6773 at x1 = 1.1
f(x2) = 3.0945 at x2 = 1.2

Backward Difference

f'(x1) ≈ [f(x1) − f(x0)]/(x1 − x0) = 3.899

Forward Difference

f'(x1) ≈ [f(x2) − f(x1)]/(x2 − x1) = 4.172

Central Difference

f'(x1) ≈ [f(x2) − f(x0)]/(x2 − x0) = 4.0355

6.1. Finite Difference Approximation of Higher Derivatives.

xi+2 = xi + 2h
Taylor series:

f(xi+2) ≈ f(xi) + f'(xi)(2h) + f''(xi)(2h)²/2!

And:

f(xi+1) ≈ f(xi) + f'(xi)h + f''(xi)h²/2!

Subtracting twice the second from the first:

f(xi+2) − 2f(xi+1) = −f(xi) + f''(xi)h²

Solving for f''(xi) we can derive equations to find the second derivative of the function:

(Forward Difference)

f''(xi) ≈ [f(xi+2) − 2f(xi+1) + f(xi)]/h²

(Backward Difference)

f''(xi) ≈ [f(xi) − 2f(xi−1) + f(xi−2)]/h²

(Central Difference)

f''(xi) ≈ [f(xi+1) − 2f(xi) + f(xi−1)]/h²

Truncation error is now O(h²)


Another way of writing the central difference method is as the change in the derivative compared to the change in the x value.
6.2. N-Dimensional Taylor Series - 1st order approximation. For 1-D functions we only need f(x), but for an N dimensional function f(x1, x2, . . . , xn) we use vectors to define our values of x as a vector x̄:

x̄ = [x1, x2, . . . , xn]ᵀ    Therefore: f(x1, x2, . . . , xn) = f(x̄)

We will now look at the differences between 1-Dimensional and N-Dimensional functions:

Quality             1-Dimensional       N-Dimensional
Function Variable   x                   x̄ = [x1, x2, . . . , xN]ᵀ
Expansion Point     xi                  x̄i = [x1i, x2i, . . . , xNi]ᵀ
Step Size           h = xi+1 − xi       h̄ = x̄i+1 − x̄i = [x1(i+1) − x1i, . . . , xN(i+1) − xNi]ᵀ
First Derivative    f'(x)               f'(x̄i) = [∂f(x̄)/∂x1, . . . , ∂f(x̄)/∂xN]ᵀ = J(x̄)

The first derivative of a multi-dimensional function is called the Jacobian:
J(x̄i) = Jacobian of the function f(x̄) at x̄ = x̄i


The Taylor series expansion at this point (to first order) for a multi-dimensional function is:

f(xi+1) ≈ f(xi) + f'(xi)h    (1-D case)

f(x̄i+1) ≈ f(x̄i) + J(x̄i)ᵀ h̄

where T represents transposition:

f(x̄i+1) ≈ f(x̄i) + [∂f(x̄)/∂x1 . . . ∂f(x̄)/∂xN] · [x1(i+1) − x1i, . . . , xN(i+1) − xNi]ᵀ


7. Functions of Multiple Dimensions


Monday January 25, 2010
7.1. Example 1.

x̄i = [0, 0, 0]ᵀ,  x̄i+1 = [0.1, 0.1, 0.1]ᵀ,  f(x̄) = 3x1 + sin(x2) + 2x2x3 + 1

If f(x̄i) = 1, what is f(x̄i+1)?

J(x̄) = [∂f/∂x1, ∂f/∂x2, ∂f/∂x3]ᵀ = [3, cos(x2) + 2x3, 2x2]ᵀ

And at the point x̄i:

J(x̄i) = [3, cos(0), 0]ᵀ = [3, 1, 0]ᵀ

And due to our 3-dimensional (first order) Taylor series:

f(x̄i+1) ≈ f(x̄i) + J(x̄i)ᵀ h̄ ≈ 1 + [3 1 0] · [0.1, 0.1, 0.1]ᵀ ≈ 1.4
We will now try the second order approximation.
-We need to determine the second order derivative of the function f(x̄). It is the N × N matrix of second partial derivatives:

        [ ∂²f/∂x1²     ∂²f/∂x1∂x2   . . .  ∂²f/∂x1∂xN ]
H(x̄) = [ ∂²f/∂x2∂x1   ∂²f/∂x2²     . . .  ∂²f/∂x2∂xN ]
        [ ...          ...          . . .  ...        ]
        [ ∂²f/∂xN∂x1   ∂²f/∂xN∂x2   . . .  ∂²f/∂xN²   ]

where H is called the Hessian.
The Hessian also can be written as:

H(x̄) = [∂J(x̄)/∂x1   ∂J(x̄)/∂x2   . . .   ∂J(x̄)/∂xN]


Note that the Hessian has symmetry: Hᵀ = H.
Therefore H(i, j) = H(j, i), since

∂²f(x̄)/∂xi∂xj = ∂²f(x̄)/∂xj∂xi

Now we can use this result to determine the second order Taylor polynomial:

f(xi+1) ≈ f(xi) + f'(xi)h + f''(xi)h²/2!    (1-D case)

f(x̄i+1) ≈ f(x̄i) + J(x̄i)ᵀ h̄ + h̄ᵀ H(x̄i) h̄ / 2!

Note that as h → 0 we may get round off errors and increase total error.
Example: use the second order approximation to estimate f(x̄i+1):

x̄i = [0, 0, 0]ᵀ,  h̄ = [0.1, 0.1, 0.1]ᵀ,  f(x̄i) = 1

J(x̄) = [3, cos(x2) + 2x3, 2x2]ᵀ,  J(x̄i) = J(0̄) = [3, 1, 0]ᵀ

        [ 0     0        0 ]                      [ 0  0  0 ]
H(x̄) = [ 0  −sin(x2)    2 ] ,  H(x̄i) = H(0̄) = [ 0  0  2 ]
        [ 0     2        0 ]                      [ 0  2  0 ]

Therefore:

f(x̄i+1) ≈ 1 + [3 1 0] · [0.1, 0.1, 0.1]ᵀ + (1/2) [0.1 0.1 0.1] · H(0̄) · [0.1, 0.1, 0.1]ᵀ
        ≈ 1 + 0.4 + (1/2) [0.1 0.1 0.1] · [0, 0.2, 0.2]ᵀ
        ≈ 1 + 0.4 + (1/2)(0.04)
        ≈ 1.42
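The arithmetic of this second-order estimate can be checked with plain lists (a Python sketch, not from the original notes):

```python
import math

# Second-order multivariable Taylor estimate for
# f(x) = 3*x1 + sin(x2) + 2*x2*x3 + 1 about x_i = (0, 0, 0).
h = [0.1, 0.1, 0.1]
J = [3.0, 1.0, 0.0]                       # Jacobian at x_i
H = [[0, 0, 0], [0, 0, 2], [0, 2, 0]]     # Hessian at x_i

first = sum(j * hk for j, hk in zip(J, h))                      # J^T h = 0.4
Hh = [sum(H[r][c] * h[c] for c in range(3)) for r in range(3)]
second = sum(hk * v for hk, v in zip(h, Hh)) / 2                # h^T H h / 2!

estimate = 1 + first + second
exact = 3*0.1 + math.sin(0.1) + 2*0.1*0.1 + 1
print(round(estimate, 2), round(exact, 4))
```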
7.2. Roots of Functions. The roots of a single variable function have many applications:
Optimization of a single variable function where f'(x) = 0
For multivariable functions it is more difficult to find optima and roots.
For roots of single variable functions we will use the following methods:
Bracketing Methods
  Bisection
  False Position
Open Methods
  Newton-Raphson
  Secant
Polynomials - Muller
Bracketing
We find solutions of f(x) = c and assume x ∈ [xA, xB].
Define g(x) = f(x) − c and find roots of g(x) in the [xA, xB] bracket, and minimize the bracket.


8. Bracketing Methods
Wednesday January 27, 2010
8.1. Bisection.

-The idea is to halve the interval and discard the part that does not have the root.
Observe that if a root exists between xA and xB, there will be a sign change between g(xA) and g(xB). Therefore g(xA)·g(xB) < 0.
8.1.1. Bisection Algorithm.
(1) Find the midpoint between xA & xB: xm = (xA + xB)/2
(2) Find g(xm)
(3) Test to see if the midpoint is zero or close to it:
If |g(xm)| = 0 or ≤ E_tolerance, we are done!
xr = xm = root
(4) Otherwise, see whether the sign change takes place in the first half of the bracket:
Else if g(xA)·g(xm) > 0
A positive sign means the root is in the upper half of the bracket (xm < xr < xB)
We now change the bounds of the bracket to the upper half:
xA = xm, xB = xB
Return to step 1 and repeat
Else if g(xA)·g(xm) < 0
A negative sign means the root is in the lower half of the bracket (xA < xr < xm)
Now we take the lower half of our bracket as our new bracket:
xA = xA, xB = xm
Return to step 1 and repeat

Note: if g(xA)·g(xB) > 0 for the initial interval we either have:
-No root between them
-An even number of roots between them
If we have an odd number of roots, bisection will only find one root.
Tip: Try to graph the function first before choosing the interval.
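The steps above can be sketched in Python (the test function x² − 2 is my own example, not from the notes):

```python
def bisection(g, xa, xb, tol=1e-8, max_iter=100):
    """Bisection root finding; requires a sign change on [xa, xb]."""
    if g(xa) * g(xb) > 0:
        raise ValueError("no sign change on the bracket")
    for _ in range(max_iter):
        xm = (xa + xb) / 2
        if abs(g(xm)) <= tol or (xb - xa) / 2 <= tol:
            return xm
        if g(xa) * g(xm) > 0:
            xa = xm          # root is in the upper half
        else:
            xb = xm          # root is in the lower half
    return (xa + xb) / 2

# Root of g(x) = x^2 - 2 on [1, 2] is sqrt(2).
r = bisection(lambda x: x**2 - 2, 1, 2)
print(r)
```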

8.2. False Position. (FP)

-Bisection, we can see, is not very efficient


-We use the false position root instead of the midpoint, but otherwise the method is the same.
We use a similar triangle relationship to find our false position midpoint:

g(xA)/(xm − xA) = g(xB)/(xB − xm),  xm = unknown

xm = [g(xA)xB − g(xB)xA] / [g(xA) − g(xB)]

xm = xB − g(xB)(xA − xB) / [g(xA) − g(xB)]

8.3. Bracket Summary. Bracket midpoint update terms:

Bisection:       xm = (xA + xB)/2
False Position:  xm = xB − g(xB)(xA − xB) / [g(xA) − g(xB)]
FP places xm closer to root, but slower in functions with strong curvature
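For comparison with bisection, a false position sketch (again, the test function is my own example):

```python
def false_position(g, xa, xb, tol=1e-10, max_iter=200):
    """False position: like bisection, but the next point is where the
    secant line through (xa, g(xa)) and (xb, g(xb)) crosses zero."""
    if g(xa) * g(xb) > 0:
        raise ValueError("no sign change on the bracket")
    xm = xa
    for _ in range(max_iter):
        xm = xb - g(xb) * (xa - xb) / (g(xa) - g(xb))
        if abs(g(xm)) <= tol:
            return xm
        if g(xa) * g(xm) > 0:
            xa = xm
        else:
            xb = xm
    return xm

r = false_position(lambda x: x**2 - 2, 1, 2)
print(r)
```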
8.4. Open Methods. :
No bracket is required, only 1 initial point is needed (a guess)
Usually faster than bracketing methods
May not always find a given root, may not converge correctly (if the guess is bad)

8.4.1. Newton-Raphson. 1st order approximation of the Taylor series:

g(xi+1) ≈ g(xi) + g'(xi)(xi+1 − xi)

xi = current point, xi+1 = next point for the root.
We are looking for g(xi+1) = 0.
Therefore: 0 ≈ g(xi) + g'(xi)(xi+1 − xi)

xi+1 ≈ xi − g(xi)/g'(xi)    (Newton-Raphson Update Term)

9. Open Methods
Friday January 29, 2010
9.1. Newton-Raphson. (NR)
Graphically, the slope at xi is:

g'(xi) = g(xi)/(xi − xi+1)

xi+1 = xi − g(xi)/g'(xi)

The same NR update, as an algorithm:

(1) x = x0
(2) if g(x) = 0 or |g(x)| ≤ E_tol we are done:
xr = x
(3) x_new = x − g(x)/g'(x)
x = x_new
Return to step 2 and repeat
Example:
g(x) = eˣ − 10, exact root ≈ 2.302 (= ln 10)
x0 = 4

g'(x) = eˣ

g(x0) = e⁴ − 10 ≠ 0,  E = 1.70

x1 = x0 − g(x0)/g'(x0) = 4 − (e⁴ − 10)/e⁴ = 3.18,  E = 0.88

x2 = 3.18 − (e³·¹⁸ − 10)/e³·¹⁸ = 2.59,  E = 0.293

x3 = 2.34,  E = 0.037
We can see that the solution for x converges, and error from true value decreases.
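The same iteration, sketched in Python (not part of the original notes):

```python
import math

def newton_raphson(g, dg, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson: follow the tangent line to its zero crossing."""
    x = x0
    for _ in range(max_iter):
        if abs(g(x)) <= tol:
            return x
        x = x - g(x) / dg(x)   # NR update term
    return x

root = newton_raphson(lambda x: math.exp(x) - 10, math.exp, 4)
print(round(root, 4))
```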
9.2. Convergence of open methods. The bisection error is roughly decreased by 1/2 every iteration: Linear Convergence.
In NR the error is roughly proportional to the square of the previous error: Quadratic Convergence.
Warning: watch for inflection points and local min/max, since if g'(xi) = 0 it's bad!
9.3. Secant Method. In the NR update

xi+1 = xi − g(xi)/g'(xi)

we use the derivative. When the derivative is unknown, we use the backwards difference method to approximate the derivative g'(x):

g'(xi) ≈ [g(xi−1) − g(xi)]/(xi−1 − xi)

Therefore:

xi+1 = xi − g(xi)(xi−1 − xi) / [g(xi−1) − g(xi)]


10. Cheap Plastic Yani Notes


Muller's method and root finding

Hurrah, it looks like the notes are finally done!


[Scanned handout; text recovered where legible.] Muller's method and root finding: use Muller's method to find a root of f(x) = cos(x), starting from the three initial points x0 = 0, x1 = 0.5, x2 = 1.

First, the MATLAB code implementing this method:
x0 = 0;
x1 = 0.5;
x2 = 1;
err = 100;      % arbitrary large number
counter = 0;

while err > 0.001
    % Evaluate f at current points
    fx0 = cos(x0);
    fx1 = cos(x1);
    fx2 = cos(x2);
    % Calculate the coeffs for the 2nd order polynomial
    d0 = (fx1 - fx0)/(x1 - x0);
    d1 = (fx2 - fx1)/(x2 - x1);
    h0 = x1 - x0;
    h1 = x2 - x1;
    a = (d1 - d0)/(h1 + h0);
    b = a*h1 + d1;
    c = fx2;
    % Calculate the roots of the inverted 2nd order polynomial
    z1 = (-b + sqrt(b^2 - 4*a*c))/(2*c);
    z2 = (-b - sqrt(b^2 - 4*a*c))/(2*c);
    % Pick the larger of the two in magnitude, keeping its sign
    % (taking max(abs(z1), abs(z2)) would discard the sign)
    if abs(z1) > abs(z2)
        z = z1;
    else
        z = z2;
    end
    % Calculate the next point and the error
    x3 = x2 + 1/z;
    err = abs(x3 - x2);
    % Update points
    x0 = x1;
    x1 = x2;
    x2 = x3;
    counter = counter + 1;
end


[Scanned output: the values of a, b, c and x3 after the first and second iterations; the numbers are unreadable in the scan.]


Finding roots of non-linear single-variable functions with MATLAB

The fzero function: single-variable non-linear root finding.

The built-in MATLAB function uses a combination of bracketing and open techniques to find the roots of non-linear functions.

Simplified syntax: X = FZERO(FUN,X0) tries to find a zero of the function FUN near X0, where X0 is the initial guess. FUN can be specified in a number of ways. To find the roots of f(x) = x^2*sin(x) you can use:

f = @(x) x^2*sin(x);
r1 = fzero (f , 3);

% Create an anonymous function
% Find the root with an initial guess of 3.

Or, do it all at once:

r1 = fzero(@(x) x^2*sin(x) , 3);

Example: Finding the roots of f(x) = x^2*sin(x) using MATLAB:

>> f = @(x) x^2*sin(x);
>> r = fzero(f,4);
>> r = fzero(f,0.5);
>> r = fzero(f,10);

Roots found: a different one for each starting point (values unreadable in the scan).

Lesson: the root obtained from fzero depends on the initial guess.

The deconv function: deflating polynomials

All the methods discussed so far (Bisection, False Position, NR, Secant) can be applied to finding real roots of polynomials.
For polynomials with multiple roots, polynomial deflation can be used in combination with the above methods to avoid the need for coming up with proper initial guess values.

Example: Finding the roots of f(x) = x⁴ − 5x³ + 5x² + 5x − 6

Suppose we start with an initial guess of x0 = 2.2 and use NR to determine that x = 2 is a root. This also means that (x − 2) is a factor of f(x). We can now deflate f(x) with the factor (x − 2):

(x⁴ − 5x³ + 5x² + 5x − 6) / (x − 2) = x³ − 3x² − x + 3


Therefore, f(x) = (x − 2)(x³ − 3x² − x + 3).
Let's use MATLAB's deconv to perform the deflation.

Polynomials can be entered into MATLAB by storing the coefficients as a vector. So for f(x) = x⁴ − 5x³ + 5x² + 5x − 6 we can create a vector called f:

>> f = [1 -5 5 5 -6];
% Coefficients in descending power

And for (x − 2) we can create a second vector called fr1:

>> fr1 = [1 -2];

The deconv function can now be used to perform the polynomial division from the last page:

>> f2 = deconv(f, fr1);

This will produce:
f2 = [ 1 -3 -1 3]

which translates to the polynomial x³ − 3x² − x + 3.

Back to our problem:
f(x) = (x − 2)(x³ − 3x² − x + 3) = (x − 2)·f2(x)

f2(x) is the new function for which we need to find a root. Apply NR again. Suppose we converge on the root at x = 1. Now we need to deflate f2(x) by (x − 1):

(x³ − 3x² − x + 3) / (x − 1) = x² − 2x − 3

Therefore, f2(x) = (x − 1)(x² − 2x − 3).

Let's have MATLAB deflate f2:

>> f2 = [ 1 -3 -1 3];
% f2 = x^3 - 3x^2 - x + 3
>> fr2 = [1 -1];
% vector for polynomial (x - 1)
>> f3 = deconv(f2, fr2);

This will produce:
f3 = [1 -2 -3]

which translates to the polynomial x² − 2x − 3.



Part 3. February Notes


11. Roots of Polynomials
Monday February 1, 2010
We can use bracketing or open methods for a given polynomial.
Combine with polynomial deflation to avoid finding the same root twice.
Problems with these methods are slow convergence where there is strong curvature, and complex roots.

11.1. Muller's Method. We use a second order Taylor approximation (a parabola) to determine the next estimate.
We can find real, complex, single or multiple roots using this method.
This method can be applied to any function; it is very general. We require 3 initial points.
Parabola:

f(x) = Dx² + Ex + F = A(x − x2)² + B(x − x2) + C

We model our parabola with the given points.
A, B and C can be found using linear algebra:


f(x0) = Ax0² + Bx0 + C
f(x1) = Ax1² + Bx1 + C
f(x2) = Ax2² + Bx2 + C

or:

f(x0) = A(x0 − x2)² + B(x0 − x2) + C
f(x1) = A(x1 − x2)² + B(x1 − x2) + C
f(x2) = C

where:

δ0 = [f(x1) − f(x0)]/(x1 − x0),  h0 = x1 − x0
δ1 = [f(x2) − f(x1)]/(x2 − x1),  h1 = x2 − x1

A = (δ1 − δ0)/(h1 + h0)
B = A·h1 + δ1
C = f(x2)

Therefore, at the root xr:

0 = A(xr − x2)² + B(xr − x2) + C

(xr − x2) = [−B ± √(B² − 4AC)] / (2A)

We will get two roots; we choose the one that is closest to x2 (smallest |xr − x2|).
Once xr is determined, we update our three points and repeat.

11.2. Complication With Muller's Method. If B² >>> 4AC,

then −B + √(B² − 4AC) is very small → round off errors.

We then let

z = 1/(xr − x2),  z⁻¹ = xr − x2

0 = A z⁻² + B z⁻¹ + C

0 = C z² + B z + A

where, to avoid round off:

z = [−B ± √(B² − 4AC)] / (2C);  take the largest root |z|

z = 1/(xr − x2) → the root with the smallest |xr − x2|

To avoid calculating both roots (and the subtractive cancellation):

z = [−B − √(B² − 4AC)] / (2C)   if B > 0
z = [−B + √(B² − 4AC)] / (2C)   if B < 0

12. Golden Section Search


Wednesday February 3, 2010
12.1. How It Works. - similar to a bracketing method

Assume a minimum exists between the upper and lower bounds xu and xl:

(1) Choose a distance d [d < (xu − xl)]
(2) Set up 2 overlapping intervals of length d: [xl, x1] & [x2, xu]
    x1 = xl + d
    x2 = xu − d
(3) Determine f(x1) and f(x2)
(4) if f(x1) < f(x2): the minimum is in the upper interval [x2, xu] (x2 becomes the new xl)
    if f(x1) > f(x2): the minimum is in the lower interval [xl, x1] (x1 becomes the new xu)
(5) Return to #2 and repeat

12.2. How To Choose d. Range of d values: (xu − xl)/2 < d < xu − xl
2
A smaller d value means smaller overlap with faster convergence, but may be inaccurate.
A larger d value means larger overlap with more accurate results, but will be slow.
The Golden Ratio is the most efficient ratio that has been found.
If the interval is divided according to the golden ratio:

(l1 + l2)/l1 = l1/l2 = η

1 + 1/η = η

η² − η − 1 = 0

η = (1 + √5)/2 ≈ 1.618,  1/η = (√5 − 1)/2 ≈ 0.618

Therefore d = [(√5 − 1)/2] (xu − xl)

12.3. Golden Section Maximum Search.


if f(x2) < f(x1): the maximum is in the upper interval [x2, xu] (x2 becomes the new xl)
if f(x1) < f(x2): the maximum is in the lower interval [xl, x1] (x1 becomes the new xu)

12.4. Example. Optimize (maximize) f(x) = 1 − x² using the golden ratio on the interval [−0.5, 1]:

d = [(√5 − 1)/2] × 1.5 = 0.927

x1 = xl + d = −0.5 + 0.927 = 0.427
x2 = xu − d = 1 − 0.927 = 0.073

f(x1) = 1 − (0.427)² = 0.818
f(x2) = 1 − (0.073)² = 0.995

Because we are looking for the max: f(x1) < f(x2) (the function increases to the left).
Now the interval is the lower one: [xl, x1] = [−0.5, 0.427]
Second iteration:

d = [(√5 − 1)/2] × 0.927 = 0.573

x1 = xl + d = −0.5 + 0.573 = 0.073
x2 = xu − d = 0.427 − 0.573 = −0.146

f(x1) = 1 − (0.073)² = 0.995
f(x2) = 1 − (−0.146)² = 0.978

f(x1) > f(x2), so the maximum is in the upper interval [x2, xu] = [−0.146, 0.427].

And we continue to decrease the size of the interval, until stopping conditions
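The whole loop can be sketched in Python (not from the original notes; the stopping condition here is simply the interval width):

```python
import math

def golden_section_max(f, xl, xu, tol=1e-8):
    """Golden-section search for a maximum of a unimodal f on [xl, xu]."""
    ratio = (math.sqrt(5) - 1) / 2          # about 0.618
    while (xu - xl) > tol:
        d = ratio * (xu - xl)
        x1 = xl + d
        x2 = xu - d
        if f(x2) < f(x1):
            xl = x2      # maximum is in the upper interval
        else:
            xu = x1      # maximum is in the lower interval
    return (xl + xu) / 2

# Maximum of f(x) = 1 - x^2 on [-0.5, 1] is at x = 0.
x_max = golden_section_max(lambda x: 1 - x**2, -0.5, 1)
print(x_max)
```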


13. Parabolic Curve Fit


Friday February 5, 2010
13.1. How It Works. - similar to Muller's method
(1) Fit a parabola to 3 points given from the function
(2) Find the min/max by taking the derivative of the parabola approximation
(3) Based on the location of the min/max found, set up 3 new points and repeat until it reaches stability or a small enough interval

g(x) = Ax² + Bx + C

where A, B, C are unknowns. We use f(x1), f(x2), f(x3) to solve:

[ f(x1) ]   [ x1²  x1  1 ] [ A ]
[ f(x2) ] = [ x2²  x2  1 ] [ B ]
[ f(x3) ]   [ x3²  x3  1 ] [ C ]

And we solve for A, B, and C.
Because:

g(x) = Ax² + Bx + C

d/dx (Ax² + Bx + C) = 2Ax + B = 0

xp = −B/(2A)


For the next iteration, we use xp , x1 , x2 , x3


Rule: choose xp and its 2 closest neighbours

13.2. Example. f(x) = |x + 1| + 1

Initial points: x1 = −2, x2 = 0, x3 = 1, with y1 = f(−2) = 2, y2 = f(0) = 2, y3 = f(1) = 3

[ 4  −2  1 ] [ A ]   [ 2 ]
[ 0   0  1 ] [ B ] = [ 2 ]
[ 1   1  1 ] [ C ]   [ 3 ]

Solving (e.g. by Cramer's rule) gives:

A = 1/3,  B = 2/3,  C = 2

g(x) = (1/3)x² + (2/3)x + 2

min: g'(x) = (2/3)x + 2/3 = 0

xp = −B/(2A) = −1

We know whether it is an optimum via an analytic approach or graphs.
xp = −1

We can see that the two closest points to xp are x1 = −2 and x2 = 0.

new initial points: x1 = −2, y1 = 2;  x2 = −1, y2 = 1;  x3 = 0, y3 = 2

[ 4  −2  1 ] [ A ]   [ 2 ]
[ 1  −1  1 ] [ B ] = [ 1 ]
[ 0   0  1 ] [ C ]   [ 2 ]

A = 1,  B = 2,  C = 2

g(x) = x² + 2x + 2
g'(x) = 2x + 2

xp = −B/(2A) = −2/2 = −1

Since xp is unchanged, we are done!
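One parabolic-fit step can be written compactly using divided differences instead of the full 3×3 solve (a Python sketch, equivalent to the linear-algebra route above):

```python
def parabolic_step(x1, x2, x3, f):
    """Fit g(x) = A x^2 + B x + C through three points of f and
    return the vertex xp = -B / (2A)."""
    y1, y2, y3 = f(x1), f(x2), f(x3)
    # Coefficients of the fitted parabola in Newton (divided-difference) form.
    d1 = (y2 - y1) / (x2 - x1)
    d2 = (y3 - y2) / (x3 - x2)
    a = (d2 - d1) / (x3 - x1)
    b = d1 - a * (x1 + x2)
    return -b / (2 * a)

f = lambda x: abs(x + 1) + 1
xp = parabolic_step(-2, 0, 1, f)   # vertex of the first fitted parabola
print(xp)
```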


13.3. Derivative Method. If g(x) = d/dx f(x) is known, this is the preferred method.
The slope of g(x) (the curvature of f(x)) will determine if there is a min or max.
Ex:

f(t) = eᵗ cos(t)

g(t) = eᵗ cos(t) − eᵗ sin(t) = eᵗ (cos(t) − sin(t))

We then use NR or bisection to solve for g(t) = 0.
13.4. Multi Dimensional. We only focus on cases where the derivative (Jacobian) exists.
The objective is to find where the Jacobian is 0.
This converts a multi-dimensional optimization into solving a (possibly non-linear) system of equations.
13.5. Case 1: the Jacobian is a linear system of equations. :
Example 1: optimization of f(x̄) = x1² + x2² + 3

J(x̄) = ∇f(x̄) = [∂f/∂x1, ∂f/∂x2]ᵀ = [2x1, 2x2]ᵀ

When J(x̄) = 0:

[2x1, 2x2]ᵀ = [0, 0]ᵀ  →  x1 = 0, x2 = 0


Example 2: optimization of f(x̄) = x1² + x2² + x1x2 + 2x1

J(x̄) = [∂f/∂x1, ∂f/∂x2]ᵀ = [2x1 + x2 + 2, x1 + 2x2]ᵀ

Setting J(x̄) = 0 gives the linear system

2x1 + x2 = −2
x1 + 2x2 = 0

Row reducing:

[ 2  1 | −2 ]      [ 1  0 | −4/3 ]
[ 1  2 |  0 ]  →  [ 0  1 |  2/3 ]

x1 = −4/3,  x2 = 2/3


14. Complicated Jacobian: non-linear system of equations


Monday February 8, 2010
14.1. Example 1. Optimum of f(x̄) = 4x1 + 2x2 − x1⁴ + 2x1x2 + x2²:

J(x̄) = [4 − 4x1³ + 2x2, 2 + 2x1 + 2x2]ᵀ — a system of non-linear equations

We need to find x̄ so that J(x̄) = 0.

If x̄i is our current estimate for the root, the Taylor Series expansion of the Jacobian at x̄i is:

J(x̄i+1) ≈ J(x̄i) + H(x̄i)(x̄i+1 − x̄i)

where H is the Hessian. We want our next point x̄i+1 to be the root of the Jacobian, J(x̄i+1) = 0:

0 ≈ J(x̄i) + H(x̄i)(x̄i+1 − x̄i)

−J(x̄i) = H(x̄i)(x̄i+1 − x̄i)

x̄i+1 = x̄i − H⁻¹(x̄i) J(x̄i)   } Newton-Raphson update term (or Steepest Descent update)

H(x̄) = d/dx̄ J(x̄) = [∂J(x̄)/∂x1  ∂J(x̄)/∂x2] = [ −12x1²  2 ]
                                                 [    2    2 ]

Let:

x̄0 = [1, 0]ᵀ

J(x̄0) = [4 − 4, 2 + 2]ᵀ = [0, 4]ᵀ   (note at x̄0, J(x̄) ≠ 0)

H(x̄0) = [ −12  2 ]
         [   2  2 ]

x̄1 = x̄0 − H⁻¹(x̄0) J(x̄0) = [1, 0]ᵀ − [ −12  2 ]⁻¹ [ 0 ]  =  [  0.714 ]
                                          [   2  2 ]   [ 4 ]     [ −1.714 ]




x̄1 = [0.714, −1.714]ᵀ,   J(x̄1) = [−0.886, 0]ᵀ
x̄2 = [0.605, −1.605]ᵀ,   J(x̄2) = [−0.097, 0]ᵀ
x̄3 = [0.59, −1.59]ᵀ,     J(x̄3) = [−0.0017, 0]ᵀ
x̄4 = [0.5898, −1.5898]ᵀ, J(x̄4) = [0, 0]ᵀ

The optimized value of x̄ is therefore [0.5898, −1.5898]ᵀ,
because the Jacobian (derivative) J(x̄) = 0.
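The iteration above can be sketched in Python with the 2×2 inverse written out by hand (not part of the original notes):

```python
def newton_2d(jac, hess, x, iters=20):
    """Newton-Raphson for a 2-variable optimum: x <- x - H^-1 J,
    with the 2x2 matrix inverse written out explicitly."""
    for _ in range(iters):
        j1, j2 = jac(x)
        (a, b), (c, d) = hess(x)
        det = a * d - b * c
        # Components of H^-1 J for a 2x2 matrix
        s1 = (d * j1 - b * j2) / det
        s2 = (-c * j1 + a * j2) / det
        x = (x[0] - s1, x[1] - s2)
    return x

# f(x) = 4x1 + 2x2 - x1^4 + 2x1x2 + x2^2 (from the example above)
jac = lambda x: (4 - 4 * x[0]**3 + 2 * x[1], 2 + 2 * x[0] + 2 * x[1])
hess = lambda x: ((-12 * x[0]**2, 2), (2, 2))

x_opt = newton_2d(jac, hess, (1.0, 0.0))
print(round(x_opt[0], 4), round(x_opt[1], 4))
```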
14.2. Solutions to Linear Algebraic Equations. In general:
m equations, n unknowns (general case)

[ a11  . . .  a1n ] [ x1 ]   [ b1 ]
[ a21  . . .  a2n ] [ x2 ]   [ b2 ]
[ ...         ... ] [ .. ] = [ .. ]
[ am1  . . .  amn ] [ xn ]   [ bm ]

a11x1 + a12x2 + . . . + a1nxn = b1
. . .
am1x1 + am2x2 + . . . + amnxn = bm

The system will be defined by the relationship between the m and n values:

m < n — underdetermined system
m = n — solution at a point (a square matrix)
m > n — (parametric) overdetermined system; best fit solution

Example:
3x1 + 2x2 = 18
−x1 + 2x2 = 2

3 2
1 2

 
18
0 8

2
1 2

 
24
0 1

2
1 0

 
3
1 0

4
0 1


4
3


14.3. Terms to know.
- main diagonal (a11, a22, a33)
- upper triangular matrix (see below):

  [ a11 a12 a13 ]
  [ 0   a22 a23 ]
  [ 0   0   a33 ]

- matrix inversion

15. Linear Equations: Things that can go wrong


Wednesday February 10, 2010
Midterm: March 1st 6:30-8
15.1. Singular Matrices.

[1 1; 2 2]·[x1; x2] = [1; 2]

The system is linearly dependent; A is a singular matrix: det(A) = 0.
Coincident lines: no unique simultaneous solution (every point on the line satisfies both equations).

[1 1; 2 2]·[x1; x2] = [b1; b2] with b2 ≠ 2·b1,   det(A) = 0

Parallel lines: no solution to the equations at all.


15.2. Near-Singular Cases. With |ε| ≪ 1:

[1    1]·[x1]   [1]
[1+ε  1] [x2] = [2]

det(A) = 1·1 − 1·(1+ε) = −ε

[x1; x2] = A⁻¹·[1; 2],   A⁻¹ = (1/−ε)·[1  −1; −(1+ε)  1]

[x1; x2] = (1/−ε)·[1 − 2; −(1+ε) + 2] = [1/ε; −(1 − ε)/ε] ≃ [1/ε; −1/ε]

The solution is very sensitive to ε as ε → 0:
- round-off errors are critical
- this is an ill-conditioned system
- the system will still technically have a solution in these extreme cases

15.3. Naive Gaussian Elimination Algorithm. Square matrix: n variables, n equations. (This should be review of linear algebra.)

[a11 ... a1n; a21 ... a2n; ...; an1 ... ann]·[x1; ...; xn] = [b1; ...; bn]

Goal: apply simple operations on A —
- multiply (or divide) rows
- add or subtract rows
- switch rows
— to create an upper triangular matrix, then use back substitution to find [x].

1) Create a combined (augmented) matrix [A | b].
For rows m = 2, 3, 4 ... n:
  row m ← row m − (a_m1 / a11)·row 1
a11 = the pivot element in step 1.

2) The first column is now zero below a11:

[ a11 a12 ... a1n | b1 ]
[ 0   a22 ... a2n | b2 ]
[ ...             | .. ]
[ 0   an2 ... ann | bn ]

If the pivot element = 0 (or is small), we switch rows.
For m = 3 ... n:
  row m ← row m − (a_m2 / a22)·row 2

Continue until we have a triangular matrix:

[ a11 a12 ... ... | b1 ]
[ 0   a22 ... ... | b2 ]
[ 0   0   a33 ... | .. ]
[ 0   0   0   ann | bn ]

then use back substitution, starting from the last row, to isolate the unknowns on the main diagonal one at a time.

Naive GE problems: if pivot elements are small or zero, we have to exchange rows.
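The algorithm above can be sketched directly. A minimal Python version (no pivoting, so it fails if a pivot is zero — exactly the "naive" weakness noted):

```python
# Naive Gaussian elimination with back substitution, following the steps above.

def gauss_solve(A, b):
    n = len(A)
    # 1) combined (augmented) matrix [A | b]
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    # forward elimination: zero out entries below each pivot
    for k in range(n - 1):
        for m in range(k + 1, n):
            factor = M[m][k] / M[k][k]        # a_mk / a_kk (crashes if pivot is 0)
            for j in range(k, n + 1):
                M[m][j] -= factor * M[k][j]
    # back substitution from the last row up
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# the 2x2 example from section 14.2: 3*x1 + 2*x2 = 18, -x1 + 2*x2 = 2
x = gauss_solve([[3, 2], [-1, 2]], [18, 2])
# x ~ [4.0, 3.0]
```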

YANIS ENGG 407 NUMERICAL METHODS NOTES (WINTER 2010)

49

16. Systems of Linear Equations


Monday February 22, 2010
A[x] = B
(1) Gaussian elimination: reduce to an upper triangular matrix and back-substitute.
(2) LU: decompose matrix A only.

16.1. Daniel's LU method (from linear algebra). Apply the row reductions to A while recording each multiplier against an identity matrix:

A = [1 2 3; 1 0 1; 2 1 0]

r2 ← r2 − r1, r3 ← r3 − 2·r1:
A′ = [1 2 3; 0 −2 −2; 0 −3 −6]   (record +r1 and +2r1 in the identity)

r3 ← r3 − (3/2)·r2:
A″ = [1 2 3; 0 −2 −2; 0 0 −3]   (record +(3/2)r2)

Therefore:
U = [1 2 3; 0 −2 −2; 0 0 −3],   L = [1 0 0; 1 1 0; 2 3/2 1]

Test: A = LU:
[1 2 3; 1 0 1; 2 1 0] = [1 0 0; 1 1 0; 2 3/2 1]·[1 2 3; 0 −2 −2; 0 0 −3]
Matrix multiplication proves us true.
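The same bookkeeping in code — a Doolittle-style sketch (no pivoting) that records each elimination multiplier in L while reducing A to U:

```python
# LU decomposition sketch matching the worked example above.

def lu(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for m in range(k + 1, n):
            L[m][k] = U[m][k] / U[k][k]       # multiplier applied to row m
            for j in range(k, n):
                U[m][j] -= L[m][k] * U[k][j]
    return L, U

A = [[1, 2, 3], [1, 0, 1], [2, 1, 0]]
L, U = lu(A)
# L = [[1,0,0],[1,1,0],[2,1.5,1]],  U = [[1,2,3],[0,-2,-2],[0,0,-3]]
```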

16.2. Eigenvalues and Eigenvectors. From [A] we obtain λ = eigenvalues and V = eigenvectors.

Relationship: [A]·V = λ·V
([A] − λ[I])·V = 0
Either V = 0 (the trivial answer), or a non-trivial V will exist if:
det(A − λI) = 0

det [ a11−λ  ...  a1n ]
    [ a21  a22−λ  ... ] = 0
    [ an1  ...  ann−λ ]

This gives a polynomial of order n in terms of λ.

16.3. Eigen Example.

A = [1 2; 3 4]

det [1−λ  2; 3  4−λ] = (1 − λ)(4 − λ) − 6 = 0
0 = λ² − 5λ + 4 − 6 = λ² − 5λ − 2
λ = (5 ± √33)/2 = { −0.37, 5.37 }

(A − λ1·I)·V1 = 0:
[1 − (−0.37)   2        ]·[a]   [0]
[3             4−(−0.37)] [b] = [0]

[1.37  2; 3  4.37]·[a; b] = [0; 0]
{Let a = 1}:  b = −1.37/2,  so  V1 = [1; −1.37/2]

For the typical normalization we use the unit vector:
V1 = (1/√(a² + b²))·[a; b] = [0.825; −0.566]

Similarly for λ2 = 5.37:  (A − λ2·I) = [−4.37  2; 3  −1.37],  V2 ∝ [1; 4.37/2]:
V2 = [0.416; 0.909]
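The worked 2×2 example can be checked by solving the characteristic polynomial directly — a sketch using the quadratic formula rather than any library eigensolver:

```python
import math

# Eigenpair of A = [[1, 2], [3, 4]] from lambda^2 - 5*lambda - 2 = 0.
a, b, c = 1.0, -5.0, -2.0                 # coefficients of det(A - lambda*I)
disc = math.sqrt(b * b - 4 * a * c)       # sqrt(33)
lam1 = (-b - disc) / (2 * a)              # ~ -0.372
lam2 = (-b + disc) / (2 * a)              # ~  5.372

# Eigenvector for lam1: (1 - lam1)*v1 + 2*v2 = 0, set v1 = 1, then normalize.
v = [1.0, -(1 - lam1) / 2]
norm = math.hypot(v[0], v[1])
v = [v[0] / norm, v[1] / norm]            # ~ [0.825, -0.566]
```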


17. Curve Fitting


Wednesday February 24, 2010
Curve fitting finds a best-fit line (or curve) for noisy data, while polynomial interpolation fits all points exactly.

17.1. Linear Regression. We begin with lots of measurements:

x: x1 x2 ... xn
y: y1 y2 ... yn

We know physical properties prior to analysis: a linear relationship between x & y:
y = p1 + p2·x
The objective is to find the p1 & p2 that best represent the data (try to reduce the variance V = Σ_{i} Ei²).
Even if the data were error-free, this is an overdetermined system of equations:
y1 = p1 + p2·x1
y2 = p1 + p2·x2
...
yn = p1 + p2·xn

In matrix form, M = A·P:
[y1; y2; ...; yn] = [1 x1; 1 x2; ...; 1 xn]·[p1; p2]

17.2. Pseudo-Inverse. We need to solve M = AP for P:
AᵀM = AᵀA·P   (AᵀA is a square matrix)
If (AᵀA)⁻¹ exists:
P = (AᵀA)⁻¹Aᵀ·M
where (AᵀA)⁻¹Aᵀ is the pseudo-inverse of A.

17.3. Linear regression example. Noisy data (roughly y = x + 1):

x: 0  1    2    3
y: 1  2.1  2.9  3.8

Find P1, P2 if we know y = P1 + P2·x:

[1; 2.1; 2.9; 3.8] = [1 0; 1 1; 1 2; 1 3]·[P1; P2]   (M = A·P)

Multiply both sides by Aᵀ:
[1 1 1 1; 0 1 2 3]·[1; 2.1; 2.9; 3.8] = [1 1 1 1; 0 1 2 3]·[1 0; 1 1; 1 2; 1 3]·[P1; P2]
[9.8; 19.3] = [4 6; 6 14]·[P1; P2]   (right-hand matrix = AᵀA)

Inverting AᵀA (Daniel's method — Gauss-Jordan against the identity):

[ 4  6 | 1 0 ]  →  [ 1 3/2 | 1/4  0 ]  →  [ 1 0 | 7/10  −3/10 ]
[ 6 14 | 0 1 ]     [ 0  5  | −3/2 1 ]     [ 0 1 | −3/10  1/5  ]

(AᵀA)⁻¹ = [7/10 −3/10; −3/10 1/5]

Therefore:
[P1; P2] = [7/10 −3/10; −3/10 1/5]·[9.8; 19.3] = [1.07; 0.92]
y = 1.07 + 0.92x
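The same fit in code, written out as the explicit 2×2 normal equations (AᵀA)P = AᵀM rather than a library call:

```python
# Least-squares line fit sketch for the example data above.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.1, 2.9, 3.8]

n = len(xs)
sx = sum(xs)
sxx = sum(x * x for x in xs)
sy = sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))

# Normal equations: [[n, sx], [sx, sxx]] [P1, P2]^T = [sy, sxy]^T
det = n * sxx - sx * sx
P1 = (sxx * sy - sx * sxy) / det      # intercept
P2 = (n * sxy - sx * sy) / det        # slope
# P1 ~ 1.07, P2 ~ 0.92  ->  y = 1.07 + 0.92 x
```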
17.4. Consider N equally spaced points. Take x = 1, 2, ..., N:
y = P1 + P2·x, or: 1·P1 + x·P2

[y1; y2; ...; yN] = [1 1; 1 2; ...; 1 N]·[P1; P2]
(the columns of A are f1(x) = 1 and f2(x) = x)

P = (AᵀA)⁻¹AᵀM:

AᵀA = [1 1 ... 1; 1 2 ... N]·[1 1; 1 2; ...; 1 N] = [N  Σn; Σn  Σn²]
AᵀM = [Σ yn; Σ n·yn]

P = [N  Σn; Σn  Σn²]⁻¹ · [Σ yn; Σ n·yn]   (sums running over n = 1 ... N)

18. General Least Squares Formulation


Friday February 26, 2010
Extend the methodology to models that are non-linear in x (but still linear in the parameters).

18.1. Model.
y = p1·f1(x) + p2·f2(x) + ... + pn·fn(x)

[y1; y2; ...; yN] = [f1(x1) ... fn(x1); f1(x2) ... fn(x2); ...; f1(xN) ... fn(xN)]·[P1; P2; ...; Pn]
(measurement vector M = model matrix A · parameter vector P)

Once again we use the least-squares solution to solve for P:
P = (AᵀA)⁻¹Aᵀ·M,   where (AᵀA)⁻¹Aᵀ is the pseudo-inverse matrix of A.

18.2. Special Case: Polynomial Model of degree Q.
y = P0 + P1·x + P2·x² + ... + PQ·x^Q

A = [1 x1 x1² ... x1^Q; 1 x2 x2² ... x2^Q; ...; 1 xN xN² ... xN^Q]

Example: y = P1·sin(x). Goal: find the amplitude of the function (P1).
A = [sin(x1); ...; sin(xN)],  M = [y1; ...; yN],  P = [P1]
P = (AᵀA)⁻¹AᵀM = ( Σ_{i=1}^{N} sin²(xi) )⁻¹ · ( Σ_{i=1}^{N} sin(xi)·yi )


18.3. Derivation of Pseudo-Inverse.
P = (AᵀA)⁻¹AᵀM
We claim that this is the least-squares approximation. Showing it for the one-parameter model y = P1·x:
The error associated with any point n is
En = P1·xn [model] − yn [measurement]
J = Σ_{n=1}^{N} En² = Σ_{n=1}^{N} (P1·xn − yn)²

18.4. Objective: to find P that minimizes the total squared error.
∂J/∂P1 = 0 = 2 Σ_{n=1}^{N} (P1·xn − yn)·xn
P1 · Σ_{n=1}^{N} xn² = Σ_{n=1}^{N} yn·xn
P1 = ( Σ xn² )⁻¹ · Σ yn·xn
Therefore, with A = [x1; ...; xN] (so that AᵀA = Σxn² and AᵀM = Σxn·yn), this is exactly
P = (AᵀA)⁻¹AᵀM


18.5. Midterm contents. Covers notes, practice and problems:
- 5 short answer questions
- 10 multiple choice — choose the obvious
- Taylor series
- Root finding (optimizing)
  - Single variable
  - Multi variable
  - Golden ratio
  - Parabolic curve fit
- Multi-variable Jacobian and Hessian

Part 4. March Notes


19. Midterm Review Session
Monday March 1, 2010
19.1. Definitions. Accuracy = closeness to the true value. Precision = closeness to other results.

19.2. Taylor Series.
f(x) = f(x0) + f'(x0)(x − x0) + f''(x0)(x − x0)²/2! + ...

Example: write f(x) = e^x + sin(x) as a third-order polynomial about x0 = 1:
f'(x) = e^x + cos(x)
f''(x) = e^x − sin(x)
f'''(x) = e^x − cos(x)
f(x) ≃ (e + sin(1)) + (e + cos(1))(x − 1) + ½(e − sin(1))(x − 1)² + (1/6)(e − cos(1))(x − 1)³
Good enough for the test: don't simplify.

Truncation error (the first omitted term dominates):
R3 ≃ | f''''(1)·(x − 1)⁴ / 4! | = | (e + sin(1))(x − 1)⁴ / 24 |
We don't care about the sign!

19.3. Numerical Derivatives, 1st order.
Forward:              f'(xi) ≃ (f(x_{i+1}) − f(xi)) / (x_{i+1} − xi)
Reverse:              f'(xi) ≃ (f(xi) − f(x_{i−1})) / (xi − x_{i−1})
Central difference:   f'(xi) ≃ (f(x_{i+1}) − f(x_{i−1})) / (x_{i+1} − x_{i−1})
Central difference is more accurate: error ∝ h².
Use simple answers, since some questions need simple responses.
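A quick numerical check of the accuracy claim, using the same f(x) = e^x + sin(x) from the Taylor example:

```python
import math

# Forward vs central difference at x = 1 for f(x) = exp(x) + sin(x).
f = lambda x: math.exp(x) + math.sin(x)
exact = math.exp(1) + math.cos(1)          # true f'(1)

h = 0.01
forward = (f(1 + h) - f(1)) / h            # error ~ h
central = (f(1 + h) - f(1 - h)) / (2 * h)  # error ~ h^2

# |central - exact| comes out a couple of orders of magnitude smaller
# than |forward - exact| at this step size.
```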


19.4. Root Finding.
Bracketing methods:
- Bisection
- False position
Open methods:
- Newton-Raphson
- Secant
- Muller
(1-2 iterations by hand will do.)

Bracketing: the initial bracket is [xa, xb]; it needs an odd number of roots inside: f(xa)·f(xb) < 0

NR:   x_{i+1} = xi − f(xi)/f'(xi)
Secant (use when the derivative of the function is unknown):
x_{i+1} = xi − f(xi)·(x_{i−1} − xi) / (f(x_{i−1}) − f(xi))

Muller: can handle strong curvature and complex roots.
- Needs 3 initial points
- Goal is to find A, B, and C:
  f(x) = Ax² + Bx + C     (for finding roots)
  f'(x) = 2Ax + B         (for finding the optimum)
[y1; y2; y3] = [Ax1² + Bx1 + C; Ax2² + Bx2 + C; Ax3² + Bx3 + C]
The new point x_new is where the fitted parabola = 0. The next iteration will utilize this new point, as well as the two closest points.

19.5. Single Variable Optimization. The optimum is where the derivative of the function is f'(x) = 0.

Golden Search. Place two interior points in the bracket, xL < x2 < x1 < xU:
- if f(x1) < f(x2): a maximum lies in [xL, x1]; a minimum lies in [x2, xU]
- if f(x1) > f(x2): a maximum lies in [x2, xU]; a minimum lies in [xL, x1]
d can be anywhere in (0.5, 1] of the interval, so long as the two interior points overlap.
Golden ratio: d = ((√5 − 1)/2)·(xU − xL) ≃ 0.62 of the total interval.

Parabolic (Muller): covered already.

19.6. Multi-Variable Optimization.
x_{i+1} = xi − H⁻¹(xi)·J(xi)
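The golden-search bookkeeping above can be sketched as code. A minimal version for a minimum, reusing each surviving interior point so only one new function evaluation is conceptually needed per step:

```python
import math

# Golden-section search for a minimum of f on [xL, xU].
def golden_min(f, xL, xU, tol=1e-8):
    R = (math.sqrt(5) - 1) / 2            # ~ 0.618
    x1 = xL + R * (xU - xL)               # upper interior point
    x2 = xU - R * (xU - xL)               # lower interior point (x2 < x1)
    while xU - xL > tol:
        if f(x1) < f(x2):                 # minimum lies in [x2, xU]
            xL, x2 = x2, x1
            x1 = xL + R * (xU - xL)
        else:                             # minimum lies in [xL, x1]
            xU, x1 = x1, x2
            x2 = xU - R * (xU - xL)
    return (xL + xU) / 2

# e.g. the minimum of (x - 2)^2 on [0, 5] is found at x ~ 2
```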

20. Interpolation
Wednesday March 3, 2010
20.1. Definition. A smooth curve that fits all given data points.

20.2. Polynomial Interpolation. The most common type.
- Nth-order polynomial: N+1 data points admit exactly one Nth-order polynomial that passes through all the points.
- The coefficients of the polynomial are what we need to solve for.

Example: with 4 data points we get a cubic function y = Ax³ + Bx² + Cx + D (4 unknown coefficients A, B, C, D):

[ x1³ x1² x1 1 ]   [A]   [y1]
[ x2³ x2² x2 1 ] · [B] = [y2]
[ x3³ x3² x3 1 ]   [C]   [y3]
[ x4³ x4² x4 1 ]   [D]   [y4]

We solve this system of linear equations to get A, B, C, D.
- Similar to linear regression, but now with a polynomial model and number of points = number of coefficients.
- Shortcomings: as N increases there is potential for round-off error in the model, since x^N becomes either very small or very big. This leads to problems (an ill-conditioned system).

20.3. Lagrange Interpolation. Used to avoid the numerical problems in determining the coefficients of the interpolating polynomial. We still find an Nth-order function that passes through N+1 data points:

f_N(x) = Σ_{n=0}^{N} yn·Ln(x)

where:   Ln(x) = Π_{i=0, i≠n}^{N} (x − xi)/(xn − xi)

e.g.:   L1(x) = (x − x0)/(x1 − x0) · (x − x2)/(x1 − x2) · (x − x3)/(x1 − x3) · ... · (x − xN)/(x1 − xN)

Note that there is no (x − x1)/(x1 − x1) term, since that denominator is zero.

Example: if we have two points (x0, y0), (x1, y1): N+1 = 2, N = 1:

f1 = Σ_{n=0}^{1} yn·Ln(x) = y0·L0(x) + y1·L1(x) = y0·(x − x1)/(x0 − x1) + y1·(x − x0)/(x1 − x0)

If, for example, the data is a linear relationship: x0 = 1, y0 = 3, x1 = 2, y1 = 6:

f = 3·(x − 2)/(1 − 2) + 6·(x − 1)/(2 − 1) = −3(x − 2) + 6(x − 1) = 3x,  as expected.

For more points we just need to expand further.

Example: three points   x: 1 2 3,   y: 1 4 9:
f2(x) = y0·L0 + y1·L1 + y2·L2
L0 = (x − x1)(x − x2)/((x0 − x1)(x0 − x2)) = (x − 2)(x − 3)/((1 − 2)(1 − 3))
L1 = (x − x0)(x − x2)/((x1 − x0)(x1 − x2)) = (x − 1)(x − 3)/((2 − 1)(2 − 3))
L2 = (x − x0)(x − x1)/((x2 − x0)(x2 − x1)) = (x − 1)(x − 2)/((3 − 1)(3 − 2))

f2(x) = 1·(x − 2)(x − 3)/((1 − 2)(1 − 3)) + 4·(x − 1)(x − 3)/((2 − 1)(2 − 3)) + 9·(x − 1)(x − 2)/((3 − 1)(3 − 2)) = x²

But if, for example:   x: 1 2 3,   y: 1 8 27:

f2(x) = 1·(x − 2)(x − 3)/((1 − 2)(1 − 3)) + 8·(x − 1)(x − 3)/((2 − 1)(2 − 3)) + 27·(x − 1)(x − 2)/((3 − 1)(3 − 2)) = 6x² − 11x + 6

So we see that the Lagrangian method does not determine the coefficients directly, and no round-off errors affect their sensitivity.
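The formula f_N(x) = Σ yn·Ln(x) translates almost line-for-line into code:

```python
# Lagrange interpolation sketch: evaluate the interpolant at a point x
# without ever forming the polynomial's coefficients.

def lagrange(xs, ys, x):
    total = 0.0
    for n, (xn, yn) in enumerate(zip(xs, ys)):
        Ln = 1.0
        for i, xi in enumerate(xs):
            if i != n:                       # skip the (x - x_n)/(x_n - x_n) term
                Ln *= (x - xi) / (xn - xi)
        total += yn * Ln
    return total

# The cubic-data example: points (1,1), (2,8), (3,27) give 6x^2 - 11x + 6.
print(lagrange([1, 2, 3], [1, 8, 27], 4))    # -> 58.0  (6*16 - 11*4 + 6)
```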


21. Spline Interpolation


Friday March 5, 2010

- Previously we have seen polynomial interpolation (Lagrangian and normal), where an nth-order polynomial fits n+1 points.
- The problem is that abrupt changes in the data lead to oscillatory behaviour of higher-order polynomials.

21.1. Definition. Piecewise spline interpolation uses lower-order polynomials for subsets of the data points. We use a collection of lower-order polynomials to estimate the function, instead of one higher-order polynomial that passes through all the points.

21.2. Linear Spline (1st order). Each point is connected with a straight line to the next point:

f(x) = { f1(x) = y1 + m1(x − x1),   x1 ≤ x ≤ x2
         f2(x) = y2 + m2(x − x2),   x2 ≤ x ≤ x3
         ...
         fn(x) = yn + mn(x − xn),   xn ≤ x ≤ x_{n+1} }

where:  mi = (y_{i+1} − yi) / (x_{i+1} − xi)

Closer data points create smaller error. The problem is the discontinuity of the 1st derivative at the knots — f(x) is not smooth.

21.3. Quadratic Spline (2nd order).

f(x) = { f1(x) = a1x² + b1x + c1,   x1 ≤ x ≤ x2
         f2(x) = a2x² + b2x + c2,   x2 ≤ x ≤ x3
         ...
         fn(x) = anx² + bnx + cn,   xn ≤ x ≤ x_{n+1} }

Each quadratic spline segment fi(x) contains 3 unknowns: ai, bi, ci.
N+1 data points means there are N segments and 3N unknowns; therefore we require 3N equations.

22. Spline Conditions


Monday March 8, 2010
For a quadratic spline of the form:

f(x) = { fi(x) = ai·x² + bi·x + ci,   xi ≤ x ≤ x_{i+1},   i = 1 ... N }

22.1. Condition 1. The polynomial on each interval must pass through its 2 endpoints. E.g. for f1(x):
y1 = a1x1² + b1x1 + c1   (start)
y2 = a1x2² + b1x2 + c1   (end)
Or in general:
yi = ai·xi² + bi·xi + ci
y_{i+1} = ai·x_{i+1}² + bi·x_{i+1} + ci
There are a total of N intervals and 2 equations per interval, for a total of 2N equations.

22.2. Condition 2. The 1st derivative at interior knots must be equal (continuous slope as the curve switches from one interval to the next). Example:
f1'(x2) = f2'(x2)
2x2·a1 + b1 = 2x2·a2 + b2
Or in general:
2a_{i−1}·xi + b_{i−1} = 2ai·xi + bi,   for 2 ≤ i ≤ N
The (N−1) interior knots give N−1 equations.
From these imposed conditions we have 3N−1 equations.

22.3. Condition 3. We need one more equation to solve the system. We arbitrarily set the second derivative to zero at the 1st point x1:
f1''(x1) = 2a1 = 0,  so  a1 = 0:
f1(x) = 0·x² + b1x + c1 = b1x + c1
So there is a straight line between x1 and x2.
We now have to solve for the 3N polynomial coefficients: either set up the full system of equations, or use a recursive solution:

1st interval f1(x), fitting (x1, y1), (x2, y2):
a1 = 0   (from condition 3)
y1 = b1x1 + c1
y2 = b1x2 + c1
→ solve for b1, c1

2nd interval f2(x), fitting (x2, y2), (x3, y3):
y2 = a2x2² + b2x2 + c2
y3 = a2x3² + b2x3 + c2
f1'(x2) = f2'(x2):  2x2a1 + b1 = 2x2a2 + b2
Both a1 and b1 are already known: 3 equations and 3 unknowns to solve.
Repeat N times.

22.4. Quadratic spline example. Interpolate f(x) = e^x between 0 and 2, with knots at x = 0, 1, 2:

f1(x) = a1x² + b1x + c1   on [0, 1]
f2(x) = a2x² + b2x + c2   on [1, 2]

Condition 1: f(x) must pass through the points:
f1:  1 = a1(0²) + b1(0) + c1,    e = a1(1²) + b1(1) + c1
f2:  e = a2(1²) + b2(1) + c2,    e² = a2(2²) + b2(2) + c2

Condition 2: derivatives at the internal knot must be equal:
f1'(1) = f2'(1):  2a1(1) + b1 = 2a2(1) + b2

Condition 3: arbitrarily set the second derivative at x1 to zero, therefore a1 = 0.

Setting up the system of equations (unknowns a1 b1 c1 a2 b2 c2):

[ 0  0  1  0  0  0 | 1  ]
[ 1  1  1  0  0  0 | e  ]
[ 0  0  0  1  1  1 | e  ]
[ 0  0  0  4  2  1 | e² ]
[ 2  1  0 −2 −1  0 | 0  ]
[ 1  0  0  0  0  0 | 0  ]

The first four equations are from condition 1; the last two are from conditions 2 and 3 respectively.
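Solving that 6×6 system numerically reproduces the spline. A sketch with a small elimination routine (partial pivoting added so the zero in the top-left corner is harmless):

```python
import math

# Solve the quadratic-spline system for f(x) = e^x with knots 0, 1, 2.
# Unknown order: a1, b1, c1, a2, b2, c2.

def solve(A, b):
    """Small Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for m in range(k + 1, n):
            f = M[m][k] / M[k][k]
            for j in range(k, n + 1):
                M[m][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

e = math.e
A = [[0, 0, 1, 0, 0, 0],       # f1(0) = 1
     [1, 1, 1, 0, 0, 0],       # f1(1) = e
     [0, 0, 0, 1, 1, 1],       # f2(1) = e
     [0, 0, 0, 4, 2, 1],       # f2(2) = e^2
     [2, 1, 0, -2, -1, 0],     # f1'(1) = f2'(1)
     [1, 0, 0, 0, 0, 0]]       # a1 = 0
a1, b1, c1, a2, b2, c2 = solve(A, [1, e, e, e * e, 0, 0])
# a1 = 0 makes the first piece a straight line, as condition 3 demands
```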
22.5. Cubic Splines.
fi(x) = ai·x³ + bi·x² + ci·x + di
N+1 points, N segments; there are 4N unknowns.

22.6. Condition 1. Each polynomial must pass through both of its endpoints:
yi = ai·xi³ + bi·xi² + ci·xi + di   (start)
y_{i+1} = ai·x_{i+1}³ + bi·x_{i+1}² + ci·x_{i+1} + di   (end)
This gives us 2N equations.

22.7. Condition 2. The first derivative at all interior knots must be the same:
f'_{i−1}(xi) = f'_i(xi)
3a_{i−1}xi² + 2b_{i−1}xi + c_{i−1} = 3ai·xi² + 2bi·xi + ci
For 2 ≤ i ≤ N we get N−1 equations.

22.8. Condition 3. Set the second derivatives at all interior knots equal to each other:
f''_{i−1}(xi) = f''_i(xi)
6a_{i−1}xi + 2b_{i−1} = 6ai·xi + 2bi
For 2 ≤ i ≤ N we get N−1 equations.

22.9. Condition 4. We still need 2 more equations.
Option 1: set the second derivative = 0 at the start and end points:
6a1x1 + 2b1 = 0 = 6aN·xN + 2bN
Option 2: set the second and third derivatives = 0 at the 1st point:
6a1x1 + 2b1 = 0
6a1 = 0
⇒ a1 = 0, b1 = 0
Therefore f1 = c1x + d1 for the 1st interval (a straight line).
Now a recursive solution is available.

23. Numerical Integration


Wednesday March 10, 2010

If the original function is unknown, or difficult to integrate, we may have to use data points from the function to solve. The idea is to approximate the complicated (or non-existing) function with an easy function fn(x) — an nth-order polynomial — and then integrate.

List of methods to cover:
- Rectangular
- Trapezoidal
- Simpson's 1/3 rule
- Simpson's 3/8 rule
- Gaussian Quadrature

23.1. Taylor Series Based Approximations. Using the Taylor series at a, we integrate both sides of the equation:

f(x) = f(a) + f'(a)(x − a) + f''(a)·(x − a)²/2 + ...

∫ₐᵇ f(x)dx = ∫ₐᵇ f(a)dx + ∫ₐᵇ f'(a)(x − a)dx + ...

(0th-order Taylor term = rectangle; through the 1st-order term = trapezoidal)

23.2. Rectangular Approximation.

∫ₐᵇ f(x)dx ≃ ∫ₐᵇ f(a)dx = f(a)·(b − a)

Error ≃ ∫ₐᵇ f'(a)(x − a)dx = (f'(a)/2)·(b − a)²

23.3. Trapezoidal Approximation.

f(x) ≃ f(a) + f'(a)(x − a)   [1st-order approximation]   + f''(a)(x − a)²/2   [error]

∫ₐᵇ f(x)dx ≃ ∫ₐᵇ f(a)dx + ∫ₐᵇ f'(a)(x − a)dx = f(a)(b − a) + (f'(a)/2)(b − a)²

Forward-difference approximation:  f'(a) ≃ (f(b) − f(a)) / (b − a), so

∫ₐᵇ f(x)dx ≃ (b − a)·(f(a) + f(b))/2 = area of a trapezoid

Error:
Error = ∫ₐᵇ f''(a)·(x − a)²/2 dx = (f''(a)/2)·[(x − a)³/3]ₐᵇ = (1/6)·f''(a)·(b − a)³
Error ∝ h³

23.4. Example. Find ∫₀^{π/2} sin(x)dx using both the rectangular and trapezoidal methods.

Rectangular:  f(a) = sin(0) = 0
∫₀^{π/2} f(x)dx ≃ f(a)·(b − a) = 0

Trapezoidal:
∫ f(x)dx ≃ ∫ f(a)dx + ∫ f'(a)(x − a)dx ≃ 0 + ½·(f(b) − f(a))·(b − a) = ½·(1 − 0)·(π/2) = π/4
Or using the formula directly:
(b − a)·(f(a) + f(b))/2 = (π/2)·(0 + 1)/2 = π/4
(The exact value is 1, so both of these single-panel approximations carry substantial error.)
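The worked example is easy to verify in code, and splitting the interval into many small trapezoids shows how quickly the error disappears:

```python
import math

# Single-panel rectangular and trapezoidal estimates of int_0^{pi/2} sin(x) dx.
a, b = 0.0, math.pi / 2
rect = math.sin(a) * (b - a)                        # 0
trap = (b - a) * (math.sin(a) + math.sin(b)) / 2    # pi/4 ~ 0.785

# Composite trapezoid with 1000 panels recovers the exact value (= 1).
n = 1000
h = (b - a) / n
composite = sum(h * (math.sin(a + i * h) + math.sin(a + (i + 1) * h)) / 2
                for i in range(n))
```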

24. More Numerical Integration


Friday March 12, 2010
24.1. Other Options. Higher-order polynomials than rectangular and trapezoidal can be used. If the data points from f(x) are known, yet f(x) itself is unknown but smooth on [a, b], we can use a cubic spline to approximate the function.

First we determine a cubic spline interpolation for the points. Each interval xi → x_{i+1} is then approximated by fi(x) = ai·x³ + bi·x² + ci·x + di. Therefore:

∫ₐᵇ f(x)dx ≃ ∫_{x1}^{x2} f1(x)dx + ∫_{x2}^{x3} f2(x)dx + ∫_{x3}^{x4} f3(x)dx + ... + ∫_{xN}^{xN+1} fN(x)dx

For each interval:

∫_{xi}^{x_{i+1}} fi(x)dx = ∫_{xi}^{x_{i+1}} (ai·x³ + bi·x² + ci·x + di)dx
= (ai/4)(x_{i+1}⁴ − xi⁴) + (bi/3)(x_{i+1}³ − xi³) + (ci/2)(x_{i+1}² − xi²) + di·(x_{i+1} − xi)
24.2. Simpson's 1/3 Rule. Fits a 2nd-order (Lagrange) polynomial to 3 equally spaced points of f(x), then integrates. With points at x = 0, h, 2h:

f2(x) = f(x0)·L0(x) + f(x1)·L1(x) + f(x2)·L2(x)

where:
L0(x) = (x − x1)(x − x2)/((x0 − x1)(x0 − x2)) = (x − h)(x − 2h)/((−h)(−2h)) = (x − h)(x − 2h)/(2h²)
L1(x) = (x − x0)(x − x2)/((x1 − x0)(x1 − x2)) = x(x − 2h)/(h·(−h)) = −x(x − 2h)/h²
L2(x) = (x − x0)(x − x1)/((x2 − x0)(x2 − x1)) = x(x − h)/((2h)·h) = x(x − h)/(2h²)

So all together:
f2(x) = (f(0)/2h²)·(x − h)(x − 2h) − (f(h)/h²)·x(x − 2h) + (f(2h)/2h²)·x(x − h)

∫₀^{2h} f(x)dx ≃ ∫₀^{2h} f2(x)dx
= (f(0)/2h²)·∫₀^{2h}(x − h)(x − 2h)dx − (f(h)/h²)·∫₀^{2h} x(x − 2h)dx + (f(2h)/2h²)·∫₀^{2h} x(x − h)dx
= (1/3)f(0)·h + (4/3)f(h)·h + (1/3)f(2h)·h
= (h/3)·(f(0) + 4f(h) + f(2h))

Hence the "1/3" in the name.
Simpson's 1/3 Rule:  ∫₀^{2h} f(x)dx ≃ (h/3)·(f(0) + 4f(h) + f(2h))

24.3. Splitting Up The Function. Each panel has to contain 3 points. So if there are a total of 5 points:

I1 = (h/3)·(f(x0) + 4f(x1) + f(x2))
I2 = (h/3)·(f(x2) + 4f(x3) + f(x4))
Overall integral I = I1 + I2 = (h/3)·(f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + f(x4))

Note the factors for each point:
- 2 = even interior data points
- 4 = every odd data point
- 1 = endpoints

25. Simpson's Rule


Monday March 15, 2010

25.1. Using Simpson's 1/3 Rule for Multiple Panels. Simpson's 1/3 rule (for 1 panel):
I ≃ (h/3)·(f(x0) + 4f(x1) + f(x2))
Points must be uniformly spaced so that h is constant. Given N+1 uniformly spaced data points, N being an even number:
I ≃ (h/3)·(f(x0) + 4f(x1) + 2f(x2) + ... + f(xN))
Factors for each point: endpoints = 1, odd indices = 4, even interior indices = 2:

I = (h/3)·[ f(x0) + 2·Σ_{i=2,4,6,...}^{N−2} f(xi) + 4·Σ_{j=1,3,5,...}^{N−1} f(xj) + f(xN) ]

error ≃ (N·h⁵/180)·f''''  =  ((b − a)·h⁴/180)·f''''

Test: if f(x) = G (constant) for all x, x0 ≤ x ≤ xN, N even:

I = (G·h/3)·( 1 + 2·Σ_{even interior} 1 + 4·Σ_{odd} 1 + 1 )
  = (G·h/3)·( 1 + 2·(N − 2)/2 + 4·(N/2) + 1 )
  = (G·h/3)·( 2 + N − 2 + 2N )
  = G·N·h   (= height × width, as expected)

25.2. Simpson's 1/3 Rule Summary.
- h must be uniform
- the number of intervals N must be even (the total number of points must be odd)
- x0 is the 1st point, xN is the last point
- minimum 3 points

I = (h/3)·[ f(x0) + 2·Σ_{i=2,4,6,...}^{N−2} f(xi) + 4·Σ_{j=1,3,5,...}^{N−1} f(xj) + f(xN) ]
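The summary translates into a short routine — endpoint factor 1, odd indices 4, even interior indices 2:

```python
# Composite Simpson's 1/3 rule over N uniform intervals (N must be even).

def simpson13(f, a, b, N):
    assert N % 2 == 0, "Simpson's 1/3 needs an even number of intervals"
    h = (b - a) / N
    total = f(a) + f(b)                       # endpoints get factor 1
    for i in range(1, N):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * total / 3

# sanity checks from the notes: a constant G integrates to G*(b - a),
# and the rule is exact for cubics
print(simpson13(lambda x: 7.0, 0, 3, 6))      # -> 21.0
print(simpson13(lambda x: x ** 3, 0, 2, 2))   # -> 4.0
```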

25.3. Simpson's 3/8 Rule. Now every panel contains 4 equally spaced points, and a 3rd-order Lagrangian polynomial is used to interpolate the function:

f3(x) = f(0)·L0(x) + f(h)·L1(x) + f(2h)·L2(x) + f(3h)·L3(x)

where:  L0(x) = (x − h)(x − 2h)(x − 3h)/((0 − h)(0 − 2h)(0 − 3h)),   L1(x) = (x − 0)(x − 2h)(x − 3h)/((h − 0)(h − 2h)(h − 3h)),  ...

∫₀^{3h} f(x)dx ≃ ∫₀^{3h} f3(x)dx = (complicated)

I = (3h/8)·(f(0) + 3f(h) + 3f(2h) + f(3h))

Given N+1 uniformly spaced points (N a multiple of 3):

∫₀^{Nh} f(x)dx ≃ I = (3h/8)·[ f(x0) + f(xN) + 3·Σ_{i=1,2,4,5,7,...  (i ≠ multiple of 3)} f(xi) + 2·Σ_{j=3,6,9,...}^{N−3} f(xj) ]

Simpson's 1/3 and 3/8 rules can be used together to cover all cases, for both even and odd numbers of data points. E.g. if N = 5 we cannot use just one rule, since N isn't a multiple of 2 or 3 — so use one 1/3 panel (2 intervals) plus one 3/8 panel (3 intervals).

26. Gaussian Quadrature


Wednesday March 17, 2010

Gaussian quadrature provides a very simple formula for numerical integration, but requires knowledge of f(x) itself — not just fixed data points.

Suppose we are integrating f(x) from −1 to 1 using the trapezoidal rule, and we attempt to minimize the total error. There will be 2 interior points x0, x1 such that the area of the trapezoid going through f(x0), f(x1) equals ∫₋₁¹ f(x)dx. We need to find the values of x0, x1 so that

(Area A + Area C)  [overestimation error]  =  Area B  [underestimation error]

since if A + C = B, there is no overall error.

The base of the trapezoid = 2 (constant). The two sides of the trapezoid can be expressed as f(x0), f(x1). Area of the trapezoid (i.e. the integral of the function):

∫₋₁¹ f(x)dx = I = C0·f(x0) + C1·f(x1)

There are 4 unknowns: x0, C0, x1, C1, so 4 equations are needed. Setting f(x) arbitrarily:

f(x) = 1:    C0 + C1 = ∫₋₁¹ 1 dx = 2                (1)
f(x) = x:    C0·x0 + C1·x1 = ∫₋₁¹ x dx = 0          (2)
f(x) = x²:   C0·x0² + C1·x1² = ∫₋₁¹ x² dx = 2/3     (3)
f(x) = x³:   C0·x0³ + C1·x1³ = ∫₋₁¹ x³ dx = 0       (4)

Solving the previous 4 equations we have:

C0 = C1 = 1
x0 = −1/√3,   x1 = 1/√3

I = ∫₋₁¹ f(x)dx = f(−1/√3) + f(1/√3)

This is exact for any f(x) up to and including 3rd order.
We can now extend the idea to include any arbitrary interval; the actual function to be integrated can have multiple panels of arbitrary length. For a panel [xi, x_{i+1}] of width hi:

x0 = xi + di
x1 = x_{i+1} − di

Ratio:  di/hi = (1 − 1/√3)/2,  so  di = (hi/2)·(1 − 1/√3) ≃ 0.21135·hi

Therefore:  Ii = ∫_{xi}^{x_{i+1}} f(x)dx ≃ (hi/2)·(f(x0) + f(x1))

26.1. Example. Integrate f(x) = x² from 0 to 2 using two panels of equal length.

Panel 1 ([0, 1]):  h1 = 1,  d1 = 0.21135·h1 = 0.21135,  x0 = 0 + d1 = d1,  x1 = 1 − d1
Panel 2 ([1, 2]):  h2 = 1,  d2 = d1 = 0.21135,  x0 = 1 + d1,  x1 = 2 − d1

I1 = ½·(f(x0) + f(x1)) = ½·(d1² + (1 − d1)²) = 0.333...
I2 = ½·(f(x0) + f(x1)) = ½·((1 + d1)² + (2 − d1)²) = 2.333...

I = I1 + I2 = 2.667

Theoretically:
∫₀² x²dx = [x³/3]₀² = 8/3 = 2.667 ✓

In general, to achieve better accuracy:
- reduce the panel size
- use a higher-order quadrature (more points, i.e. higher-order interpolation) in each panel
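The example reproduced in code, with the panel rule written exactly as above:

```python
import math

# Two-point Gauss quadrature per panel: x0 = xi + d, x1 = x_{i+1} - d,
# d = (h/2)*(1 - 1/sqrt(3)) ~ 0.21135*h, Ii = (h/2)*(f(x0) + f(x1)).

def gauss2_panel(f, xi, xi1):
    h = xi1 - xi
    d = h * (1 - 1 / math.sqrt(3)) / 2
    return h * (f(xi + d) + f(xi1 - d)) / 2

# f(x) = x^2 on [0, 2] with two unit panels: exact, since the rule
# integrates polynomials up to 3rd order exactly.
I = gauss2_panel(lambda x: x * x, 0, 1) + gauss2_panel(lambda x: x * x, 1, 2)
# I -> 2.666... = 8/3
```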
26.2. Summary of Gaussian 3 Point Quadrature. (Look on Yani's cheap plastic handout.)

∫₋₁¹ f(x)dx = C0·f(−d) + C1·f(0) + C0·f(d)

Substitute f(x) = 1, x, x², x³, x⁴, x⁵ in turn to get enough equations to solve for C0, C1, d.


27. Cheap Plastic Yani Notes


3 Point Gaussian Quadrature

Yani's cheap, plastic handout on 3-point Gauss quadrature.

Starting point: there are 3 points, −d, 0, and d, such that:

∫₋₁¹ f(x)dx = c0·f(−d) + c1·f(0) + c0·f(d)     (i)

Goal: determine c0, c1, and d.
Approach: try the following functions for f(x): 1, x, x², x³, x⁴, x⁵. Evaluate eq. (i) for each of these instances of f(x). This will give us enough equations to solve for c0, c1, and d.

f(x) | ∫₋₁¹ f(x)dx | c0·f(−d) + c1·f(0) + c0·f(d) | Equation
1    | 2           | c0 + c1 + c0 = 2c0 + c1      | 2 = 2c0 + c1    (1)
x    | 0           | c0(−d) + 0 + c0(d) = 0       | 0 = 0    Doh!
x²   | 2/3         | c0d² + 0 + c0d² = 2c0d²      | c0d² = 1/3    (2)
x³   | 0           | c0(−d)³ + 0 + c0d³ = 0       | 0 = 0    Doh!
x⁴   | 2/5         | c0d⁴ + 0 + c0d⁴ = 2c0d⁴      | c0d⁴ = 1/5    (3)
x⁵   | 0           | c0(−d)⁵ + 0 + c0d⁵ = 0       | 0 = 0    Doh!

From eqs. (2) and (3):
c0d⁴ / c0d² = (1/5)/(1/3)  ⇒  d² = 3/5,  d = √(3/5)

Sub d into (2):  c0 = (1/3)·(1/d²) = (1/3)·(5/3) = 5/9

Sub c0 into (1):  c1 = 2 − 2c0 = 2 − 10/9 = 8/9

Put it all together:

∫₋₁¹ f(x)dx = (5/9)·f(−√(3/5)) + (8/9)·f(0) + (5/9)·f(√(3/5))

28. Ordinary Differential Equations


Friday March 19, 2010
Differential equations (DEs) are equations that contain derivatives.
ODEs are differential equations with only 1 independent variable, e.g.:

d²x/dt² + (k/m)·x = y
t = independent variable, x = dependent variable

or:
d²x(t)/dt² + dy(t)/dt + G = x(t)·y(t)
t = independent variable; x, y = dependent variables

A partial differential equation (PDE) has 2 or more independent variables, e.g. (2nd order):

∂²φ(x,y)/∂x² + ∂²φ(x,y)/∂y² = P(x,y)

The order of a DE is the highest order of derivative present.

28.1. 1st Order ODE (of 1 variable). Problem:
dy/dx = f(x, y)
with initial condition (x0, y0).
The solution is a function y(x) that satisfies the equation and the initial condition.
Rewrite:
dy = f(x, y)dx
∫dy = ∫f(x, y)dx
We can only integrate the function directly if it is separable — the x and y dependence can be split apart:

Case 1:  f(x, y) = f(x), a function of only 1 variable — easy to solve!
∫_{y0}^{y(x)} dy = ∫_{x0}^{x} f(x)dx = y(x) − y0

Case 2:  f(x, y) = g(x)·h(y) = dy/dx:
∫_{y0}^{y} (1/h(y))·dy = ∫_{x0}^{x} g(x)dx

However, many 1st order ODEs are not in these classes: dy/dx = f(x, y), not separable. If not separable, numerical solutions are needed.
There are infinitely many solutions to a differential equation (like flow lines): f(x, y) = slope of the function (field/arrow diagram). At each value of x and y the slope of the function changes — hence the need for initial conditions. The same equation with different initial values gives a different unique function. For each unique function:
- start at the initial value (x0, y0)
- take a small step along the function slope to a new (x, y)
- at the new point, repeat

29. ODE Numerical Methods


Monday March 22, 2010
Truncation errors in numerically solving ODEs: numerical solutions are calculated in steps, and there are 2 types of truncation error — local and propagated.

29.1. Local Truncation Error. Error in the trajectory of the solution due to having a finite step size (Δx). With step size Δx, our next estimate of y(x) may be inaccurate; a more accurate point in the trajectory of y(x) will occur if the step size is smaller.

29.2. Propagated (Accumulated) Truncation Error. Each additional step increases the truncation error:
- 1st step, from x0 to x0 + Δx: e1 = local truncation error of step 1
- 2nd step, from x0 to x0 + 2Δx: e2 = local truncation error of step 1 + local truncation error of step 2
- 3rd step, from x0 to x0 + 3Δx: e3 = local truncation errors of steps 1, 2 and 3
The approximate value of y at each additional point is no longer on the true trajectory through (x0, y0), so the slope used for the next step is slightly in error.

29.3. Forward Euler's Method. Objective: find y(x) so that
dy/dx = f(x, y),  with initial value (x0, y0).

Methodology: calculate the slope at (x0, y0) from the DE:
slope = dy/dx |_{x0} = f(x0, y0)
To estimate the next point (x1, y1) on the numerical solution, assume a linear slope between (x0, y0) and (x1, y1), using the slope from the previous step:
y1 = y0 + f(x0, y0)·(x1 − x0)   [slope × step h1]
Next step:  slope = f(x1, y1) ≠ real slope:
y2 = y1 + f(x1, y1)·(x2 − x1)   [step h2]
There is a small error in the slope evaluation, since the calculated slope is not equal to that of the real trajectory.

In general:  y_{i+1} = yi [old answer] + f(xi, yi)·(x_{i+1} − xi) [slope × step]

Notice that this is the 1st-order Taylor series approximation:
y_{i+1} = yi + (dy/dx)·(x_{i+1} − xi) + (1/2)·(d²y/dx²)·h² + ...
Therefore the local truncation error is the second-order term:
E = (1/2)·(d²y/dx²)·h²
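The method above in code, with a check of the 1st-order error claim — halving h roughly halves the global error:

```python
import math

# Forward Euler sketch: y_{i+1} = y_i + f(x_i, y_i)*h,
# demonstrated on dy/dx = y, y(0) = 1 (exact solution e^x).

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y = y + f(x, y) * h       # slope evaluated at the *current* point
        x = x + h
    return y

err1 = abs(euler(lambda x, y: y, 0, 1, 0.01, 100) - math.e)    # h = 0.01
err2 = abs(euler(lambda x, y: y, 0, 1, 0.005, 200) - math.e)   # h = 0.005
# err2 / err1 ~ 0.5, consistent with a 1st-order method
```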


30. 1st Order ODE Example


Wednesday March 24, 2010
t = independent variable, x = dependent variable:
dx/dt = f(t, x),   t0 = 0, x0 = 0
Let dx/dt = v(t, x)   (a velocity)

If dx/dt = 3 = constant, at t = 1, 2: x = ?
x1 ≃ x0 + v(t0, x0)·(t1 − t0) = 0 + 3·1 = 3
x2 ≃ x1 + v(t1, x1)·(t2 − t1) = 3 + 3·1 = 6

If dx/dt = t + 1, at t = 1, 2, 3: x = ?
x1 ≃ x0 + v(t0, x0)·(t1 − t0) = 0 + (0 + 1)·1 = 1
x2 ≃ 1 + (1 + 1)·1 = 3
x3 ≃ 3 + (2 + 1)·1 = 6

Euler's forward:
x_{i+1} = xi + v(ti, xi)·(t_{i+1} − ti)
Central difference (trapezoidal):
x_{i+1} = xi + ½·[v(ti, xi) + v(t_{i+1}, x_{i+1})]·(t_{i+1} − ti)

(Footnote: class was getting boring at this point. Yani was getting ready to go to the hospital for his son to be born, and we got a sub for a bit. More to follow.)

Part 5. Missing Notes

Yani had his baby son Brady and took about a week off. I didn't like the sub
and didn't go to class for a bit. The missing notes are about linear algebra and differential
equations.

Shortly after Yani returned, I had to attend my grandfather's funeral [doing Matlab
assignments on the plane =( ] and I believe I missed two more classes.
Anyone willing to donate the missing notes is welcome; use the email provided in the
introduction. I might release the TeX document later too...


Part 6. April Notes

31. I Didn't Know What Was Going On
Wednesday April 7, 2010

x t² (d³x/dt³) + d²x/dt² + t² x = 0

Solve via the 4th order classical RK method:

x t² (d³x/dt³) = -d²x/dt² - t² x

d³x/dt³ = -(1/(x t²)) (d²x/dt² + t² x)

31.1. Step 1. Transform the 3rd order ODE into a system of 1st order ODEs using vectors.

Change of variables:
x1 = x
x2 = dx/dt
x3 = d²x/dt²

Then:
dx1/dt = x2
dx2/dt = x3
dx3/dt = d³x/dt³ = -x3/(x1 t²) - 1

In the form (rows separated by semicolons):

d/dt [x1; x2; x3] = [ x2 ;
                      x3 ;
                      -x3/(x1 t²) - 1 ]

i.e.

d/dt (x) = F(x, t)


31.2. At iteration i:

k1,i = F(ti, xi) = [ x2,i ;
                     x3,i ;
                     -x3,i/(x1,i ti²) - 1 ]

k2,i = F( ti + Δt/2, xi + (Δt/2) k1,i )
         (plus half a step in t; estimate of the next x)

     = F( ti + Δt/2, [ x1,i + (Δt/2) x2,i ;
                       x2,i + (Δt/2) x3,i ;
                       x3,i + (Δt/2)(-x3,i/(x1,i ti²) - 1) ] )

Simplify: call the three components of the estimated state U, V, W. Therefore:

k2,i = F( ti + Δt/2, [U; V; W] ) = [ V ;
                                     W ;
                                     -W/(U (ti + Δt/2)²) - 1 ]

A similar approach gives k3 and k4 (very tedious). Then:

x_{i+1} = x_i + (Δt/6)(k1 + 2 k2 + 2 k3 + k4)
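Rather than expanding k3 and k4 by hand, the same bookkeeping can be done in code. A Python sketch of the classical RK4 step for this system (my own names; the initial conditions are hypothetical, since the notes give none, and t = 0 is avoided because the ODE divides by t²):

```python
def F(t, x):
    """Right-hand side of the 1st order system from Step 1."""
    x1, x2, x3 = x
    return [x2, x3, -x3 / (x1 * t**2) - 1.0]

def rk4_step(F, t, x, dt):
    """One classical 4th order Runge-Kutta step for a vector state x."""
    nudge = lambda state, k, s: [xi + s * ki for xi, ki in zip(state, k)]
    k1 = F(t, x)
    k2 = F(t + dt / 2, nudge(x, k1, dt / 2))
    k3 = F(t + dt / 2, nudge(x, k2, dt / 2))
    k4 = F(t + dt, nudge(x, k3, dt))
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# hypothetical initial conditions at t = 1
t, x = 1.0, [1.0, 0.0, 0.0]
for _ in range(10):
    x = rk4_step(F, t, x, 0.01)
    t += 0.01
```

The `nudge` helper is exactly the "estimate next x" operation written out above for k2; RK4 just applies it with different sub-steps and averages the four slopes.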


31.3. Example. Write the recursive equation for solving the angle of a pendulum in time.
A) Forward Euler

M L² (d²θ/dt²) + c (dθ/dt) + M L g sin(θ) = 0

d²θ/dt² = -(c/(M L²)) (dθ/dt) - (g/L) sin(θ)

Let: x1 = θ, x2 = θ'

d/dt [x] = d/dt [x1; x2] = [ x2 ;
                             -(c/(M L²)) x2 - (g/L) sin(x1) ]

With a = -c/(M L²) and b = -g/L:

d/dt [x] = [ x2 ;
             a x2 + b sin(x1) ] = F(x, t)

Forward Euler: x_{i+1} = x_i + Δt F(x_i, t_i)

[x1,i+1; x2,i+1] = [x1,i; x2,i] + Δt [x2,i; a x2,i + b sin(x1,i)]
                 = [x1,i + Δt x2,i; x2,i + Δt (a x2,i + b sin(x1,i))]
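The recursion above is straightforward to run. A Python sketch (the parameter values for M, L, the damping coefficient c, and g are made up for illustration; the initial angle is likewise hypothetical):

```python
import math

# hypothetical parameter values (mass M, arm length L, damping c, gravity g)
M, L, c, g = 1.0, 1.0, 0.1, 9.81
a = -c / (M * L**2)
b = -g / L

def F(x, t):
    """F(x, t) for the pendulum system: x = [theta, d(theta)/dt]."""
    theta, omega = x
    return [omega, a * omega + b * math.sin(theta)]

# forward Euler recursion x_{i+1} = x_i + dt * F(x_i, t_i)
dt, t = 0.001, 0.0
x = [0.5, 0.0]  # released from rest at theta = 0.5 rad
for _ in range(5000):
    x = [xi + dt * fi for xi, fi in zip(x, F(x, t))]
    t += dt
```

With damping present the oscillation amplitude decays; note that forward Euler by itself tends to add a little energy each step, so for an undamped pendulum (c = 0) this scheme would slowly blow up and a smaller Δt or a better method would be needed.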


32. More Examples

Monday April 12, 2010

32.1. Adaptive Step Size. Will not be tested on the final.

A constant step size can be inefficient for functions that contain both abrupt changes
and slow gradual changes:
Slow changes: a larger step size is more efficient
Abrupt changes: smaller step sizes are needed for accuracy

32.2. Example.

Time (t) | Measurement (x)
    0    |        2
    1    |        1
    2    |        0
    3    |        1

Find the trend using:
A) Least squares method with the model x(t) = a + b t
B) LU decomposition
C) Polynomial interpolation
D) Quadratic spline between 0 ≤ t ≤ 2 and 2 ≤ t ≤ 3



Model (rows separated by semicolons):

[y1; ...; yN] = [ f1(x1) ... fK(x1) ;
                  ...               ;
                  f1(xN) ... fK(xN) ] [P1; ...; PK]

M = A P

Pseudo-inverse: Aᵀ M = (Aᵀ A) P

P = (Aᵀ A)⁻¹ Aᵀ M


In our case:

f1(t) = 1,   f2(t) = t,   P = [a; b]

A = [1 0; 1 1; 1 2; 1 3],   M = [2; 1; 0; 1]

P = (Aᵀ A)⁻¹ Aᵀ M

Aᵀ A = [4 6; 6 14],   Aᵀ M = [4; 4]

so the normal equations are:

[4 6; 6 14] [a; b] = [4; 4]

To solve using LU decomposition, eliminate with r2 ← r2 - (3/2) r1 and record the
multiplier in the identity matrix:

[4 6; 6 14] → [4 6; 0 5],   [1 0; 0 1] → [1 0; 3/2 1]

Therefore:

L = [1 0; 3/2 1],   U = [4 6; 0 5]

Test: L U = [1 0; 3/2 1] [4 6; 0 5] = [4 6; 6 14] = X ✓


Now:

L U [a; b] = [4; 4]

[1 0; 3/2 1] [4 6; 0 5] [a; b] = [4; 4],   where Y = U [a; b]

Forward substitution, L Y = [4; 4]:
y1 = 4
(3/2) y1 + y2 = 4  ⟹  y2 = -2

Back substitution, U [a; b] = Y = [4; -2]:
5 b = -2  ⟹  b = -2/5
4 a + 6 b = 4  ⟹  a = 1.6

Therefore the trend is:

x(t) = -(2/5) t + 1.6
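The whole least-squares-plus-elimination calculation fits in a few lines. A Python sketch checking the result above (variable names are my own):

```python
# the (t, x) table from this example
ts = [0.0, 1.0, 2.0, 3.0]
ms = [2.0, 1.0, 0.0, 1.0]

# normal equations (A^T A) P = A^T M for the model x(t) = a + b t
n = len(ts)
AtA = [[n, sum(ts)], [sum(ts), sum(t * t for t in ts)]]  # [[4, 6], [6, 14]]
AtM = [sum(ms), sum(t * m for t, m in zip(ts, ms))]      # [4, 4]

# forward/back substitution, exactly as in the LU solve above
mult = AtA[1][0] / AtA[0][0]              # multiplier 3/2
u22 = AtA[1][1] - mult * AtA[0][1]        # pivot 5
y2 = AtM[1] - mult * AtM[0]               # -2
b = y2 / u22                              # -2/5
a = (AtM[0] - AtA[0][1] * b) / AtA[0][0]  # 1.6
```

Running this recovers a = 1.6 and b = -0.4, matching the hand calculation.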
Quadratic Spline: 3 equations are needed for each 2nd order polynomial.
Between 0 ≤ t ≤ 2:
f1(t) = a1 t² + b1 t + c1
f1(0) = 2 = c1
f1(1) = 1 = a1 + b1 + 2
f1(2) = 0 = 4 a1 + 2 b1 + 2

Or: [2; 1; 0] = [0 0 1; 1 1 1; 4 2 1] [a1; b1; c1]


Solve for the coefficients. Between 2 ≤ t ≤ 3:

f2(t) = a2 t² + b2 t + c2
f2(2) = 0 = 4 a2 + 2 b2 + c2
f2(3) = 1 = 9 a2 + 3 b2 + c2
Additional equation: f1'(2) = f2'(2)
2 a1 (2) + b1 = 2 a2 (2) + b2
Solve for all coefficients.


Part 7. Final Review Sheet

1 question on each part:
1) Taylor series
2) Linear equations
-Naive Gaussian elimination
-Pivoting
-LU decomposition
-Eigenvectors / eigenvalues
3) Linear regression
-linear and non-linear
-least squares
4) Interpolation
-Polynomial
-Lagrangian. Example:
f(x) = Σ_{n=0}^{N} Yn Ln(x) = Y0 L0(x) + Y1 L1(x) + ...

     = Y0 ∏_{i=0, i≠0}^{N} (x - xi)/(x0 - xi) + Y1 ∏_{i=0, i≠1}^{N} (x - xi)/(x1 - xi) + ...

     = Y0 [(x - x1)(x - x2)...] / [(x0 - x1)(x0 - x2)...]
     + Y1 [(x - x0)(x - x2)...] / [(x1 - x0)(x1 - x2)...] + ...
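The double product above translates almost line for line into code. A minimal Python sketch (my own function name), evaluated here on the data table from Section 32.2:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for n, (xn, yn) in enumerate(zip(xs, ys)):
        Ln = 1.0  # build the n-th basis polynomial L_n(x)
        for i, xi in enumerate(xs):
            if i != n:
                Ln *= (x - xi) / (xn - xi)
        total += yn * Ln
    return total

# the table from Section 32.2: the interpolant passes exactly through every point
value = lagrange([0.0, 1.0, 2.0, 3.0], [2.0, 1.0, 0.0, 1.0], 1.5)
```

At each data point xn every other basis polynomial contains a zero factor, which is why the interpolant reproduces the data exactly.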

-Spline interpolation
1 equation per point
make the first derivatives match at interior points: lim_{x→x⁻} f'(x) = lim_{x→x⁺} f'(x)
if cubic, also set the second derivatives equal
set f''(x) = 0 at the first point
5) Integration
-Rectangular
-Trapezoidal
-Simpson's
-Gaussian quadrature
(Simpson's and quadrature are given on the formula sheet)
6) ODE
-Euler
Forward: f(x_{i+1}) ≈ f(x0) + f'(x0) Δx
Backward: f(x_{i+1}) ≈ f(x0) + f'(x_guess) Δx
-Heun's (central difference, kinda): f(x_{i+1}) ≈ f(x0) + [(f'(x0) + f'(x_guess))/2] Δx
-Classical RK (formula sheet)
-Higher order: make into a system of equations
Fin!
