
Lecture 22 - Exam 2 Review

CVEN 302
July 29, 2002
Lecture’s Goals
• Chapter 6 - LU Decomposition
• Chapter 7 - Eigen-analysis
• Chapter 8 - Interpolation
• Chapter 9 - Approximation
• Chapter 11 - Numerical Differentiation and Integration
Chapter 6
LU Decomposition of
Matrices
LU Decomposition
• LU decomposition is a modification of the elimination method: the technique rewrites the matrix as the product of two matrices,

$$[A] = [L][U]$$
LU Decomposition
There are variations of the technique using different methods:
– Crout's reduction (U has ones on the diagonal).
– Doolittle's method (L has ones on the diagonal).
– Cholesky's method (the diagonal terms are the same value for the L and U matrices).
LU Decomposition Solving
Using the LU decomposition
[A]{x} = [L][U]{x} = [L]{[U]{x}} = {b}
Solve
[L]{y} = {b}
and then solve
[U]{x} = {y}
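A minimal Python/NumPy sketch of this two-step solve (the function name and array handling are illustrative; it assumes [L] and [U] have already been computed):

```python
import numpy as np

def lu_solve(L, U, b):
    """Solve [L][U]{x} = {b} in two triangular steps."""
    n = len(b)
    # Forward substitution: [L]{y} = {b}
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    # Back substitution: [U]{x} = {y}
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```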
LU Decomposition
The matrices are represented by the forms below (3×3 case shown).
LU Decomposition (Crout's reduction)

Matrix decomposition ([U] has ones on the diagonal):

$$[A] = \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix}\begin{bmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{bmatrix}$$

LU Decomposition (Doolittle's Method)

Matrix decomposition ([L] has ones on the diagonal):

$$[A] = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$$
Cholesky’s Method
The matrix is decomposed into:

$$[A] = [L][U] = [L][L]^T$$

where $l_{ii} = u_{ii}$, i.e. $[U] = [L]^T$.
Tridiagonal Matrix
For a banded matrix using Doolittle's method, i.e. a tridiagonal matrix:

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ 0 & l_{32} & 1 & 0 \\ 0 & 0 & l_{43} & 1 \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & 0 & 0 \\ 0 & u_{22} & u_{23} & 0 \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 \\ 0 & a_{32} & a_{33} & a_{34} \\ 0 & 0 & a_{43} & a_{44} \end{bmatrix}$$
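For the tridiagonal case, Doolittle's method reduces to the Thomas algorithm. A sketch, assuming the three diagonals are stored as separate arrays (an illustrative storage choice):

```python
import numpy as np

def tridiag_lu_solve(sub, diag, sup, b):
    """Doolittle LU of a tridiagonal matrix, then solve [L][U]{x} = {b}.
    sub = (a21, a32, ...), diag = (a11, ..., ann), sup = (a12, a23, ...)."""
    n = len(diag)
    u = np.zeros(n)      # diagonal of [U]; its superdiagonal equals sup
    l = np.zeros(n - 1)  # subdiagonal of [L]; its diagonal is all ones
    u[0] = diag[0]
    for i in range(1, n):
        l[i - 1] = sub[i - 1] / u[i - 1]
        u[i] = diag[i] - l[i - 1] * sup[i - 1]
    y = np.zeros(n)      # forward substitution [L]{y} = {b}
    y[0] = b[0]
    for i in range(1, n):
        y[i] = b[i] - l[i - 1] * y[i - 1]
    x = np.zeros(n)      # back substitution [U]{x} = {y}
    x[-1] = y[-1] / u[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - sup[i] * x[i + 1]) / u[i]
    return x

# 4x4 system with diagonals (1, 4, 1); the exact solution is all ones
print(tridiag_lu_solve(np.ones(3), 4 * np.ones(4), np.ones(3),
                       np.array([5., 6., 6., 5.])))
```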
Pivoting of the LU Decomposition
• Still need pivoting in LU decomposition
• Row exchanges mess up the order of [L]
• What to do?
• Pivot both [L] and a permutation matrix [P]
• Initialize [P] as the identity matrix; whenever [A] is pivoted, pivot [L] and [P] the same way
Pivoting of the LU Decomposition
• Permutation matrix [P] - a permutation of the identity matrix [I]
• The permutation matrix performs the "bookkeeping" associated with the row exchanges
• Permuted matrix: [P][A]
• LU factorization of the permuted matrix: [P][A] = [L][U] (see the sketch below)
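A quick check of this bookkeeping with SciPy (note the convention: scipy.linalg.lu returns factors with A = P·L·U, so its P is the transpose of the [P] in [P][A] = [L][U]):

```python
import numpy as np
from scipy.linalg import lu

# a11 = 0 forces a row exchange during factorization
A = np.array([[0., 2., 1.],
              [1., 1., 0.],
              [2., 1., 3.]])

P, L, U = lu(A)                      # A = P @ L @ U
print(np.allclose(P.T @ A, L @ U))   # True: [P].T plays the role in [P][A] = [L][U]
```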
Chapter 7
Eigen-analysis
Eigen-Analysis
• Matrix eigenvalues arise from discrete models of physical systems
• Discrete models
– A finite number of degrees of freedom results in a finite number of eigenvalues and eigenvectors.
Eigenvalues
Computing eigenvalues of a matrix is important in numerous applications.
– In numerical analysis, the convergence of an iterative sequence involving matrices is determined by the size of the eigenvalues of the iteration matrix.
– In dynamic systems, the eigenvalues indicate whether a system is oscillatory, stable (decaying oscillations), or unstable (growing oscillations).
– In oscillatory systems, the eigenvalues of the differential equations or of the coefficient matrix of a finite element model are directly related to the natural frequencies of the system.
– In regression analysis, eigenvectors of the correlation matrix are used to select new predictor variables that are linear combinations of the original predictor variables.
General Form of the Equations
The general form of the equations:

$$Ax = \lambda x$$
$$Ax - \lambda I x = 0$$
$$(A - \lambda I)x = 0$$
$$\det(A - \lambda I) = 0$$
Power Method
The basic computation of the power method is summarized as

$$u_k = \frac{Au_{k-1}}{\|Au_{k-1}\|}\,, \qquad \lim_{k\to\infty} u_k = u_1$$

where $u_1$ is the eigenvector of the largest eigenvalue $\lambda_1$. The eigenvalue estimate can be written as:

$$Au_{k-1} \approx \lambda_1 u_{k-1} \;\Rightarrow\; \lambda_1 = \frac{\|Au_{k-1}\|}{\|u_{k-1}\|}$$
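A minimal sketch of the iteration (tolerance, starting vector, and iteration cap are illustrative; it returns the magnitude of λ₁ and the normalized eigenvector):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=500):
    u = np.ones(A.shape[0])          # starting guess
    lam = 0.0
    for _ in range(max_iter):
        w = A @ u                    # w = A u_{k-1}
        lam_new = np.linalg.norm(w)  # ||A u_{k-1}|| estimates |lambda_1|
        u = w / lam_new              # u_k = A u_{k-1} / ||A u_{k-1}||
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, u
```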
Shift Method
It is possible to obtain another eigenvalue from the set of equations by using a technique known as shifting the matrix:

$$Ax = \lambda x$$

Subtract $sx$ (that is, $s[I]x$) from each side, thereby changing the maximum eigenvalue:

$$(A - sI)x = (\lambda - s)x$$
Shift Method
The shift, s, is taken as the maximum eigenvalue of the matrix A (found by the power method). The matrix is rewritten in the form

$$[B] = [A] - \lambda_{\max}[I]$$

Use the power method to obtain the largest eigenvalue of [B].
Inverse Power Method
The inverse power method is similar to the power method, except that it finds the smallest eigenvalue, using the following technique:

$$Ax = \lambda x \;\Rightarrow\; A^{-1}Ax = \lambda A^{-1}x$$

$$x = \lambda A^{-1}x \;\Rightarrow\; \frac{1}{\lambda}x = A^{-1}x = Bx$$
Inverse Power Method
The algorithm is the same as the power method, applied to $B = A^{-1}$. The iteration converges to the eigenvector of the smallest eigenvalue, but the computed "eigenvalue" is its reciprocal. To obtain the smallest eigenvalue of A:

$$\lambda_{\min} = \frac{1}{\lambda_B}$$
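A sketch of the inverse power method; rather than forming $A^{-1}$ explicitly, each step solves $Aw = u$ (an implementation choice, not part of the slide):

```python
import numpy as np

def inverse_power_method(A, tol=1e-10, max_iter=500):
    """Power method on B = inv(A); lambda_min(A) = 1 / lambda_B."""
    u = np.ones(A.shape[0])
    lam_B = 0.0
    for _ in range(max_iter):
        w = np.linalg.solve(A, u)    # w = B u, without inverting A
        lam_new = np.linalg.norm(w)
        u = w / lam_new
        if abs(lam_new - lam_B) < tol:
            break
        lam_B = lam_new
    return 1.0 / lam_new             # smallest eigenvalue magnitude of A
```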
Accelerated Power Method
The power method can be accelerated by using the Rayleigh quotient instead of the largest $w_k$ value:

$$w = Az_1$$

The Rayleigh quotient is defined as:

$$\lambda_1 = \frac{z'w}{z'z}$$
Accelerated Power Method
The value of the next z term is defined as:

$$z_2 = \frac{w}{\lambda_1}$$
The Power method is adapted to use the new value.
QR Factorization
• Another form of factorization

A = Q*R
• Produces an orthogonal matrix (“Q”) and a
right upper triangular matrix (“R”)
• Orthogonal matrix - the inverse is the transpose:

$$Q^{-1} = Q^T$$
QR Factorization
Why do we care?
We can use Q and R to find eigenvalues
1. Get Q and R (A = Q*R)
2. Let A = R*Q
3. Diagonal elements of A are eigenvalue
approximations
4. Iterate until converged
Note: the QR eigenvalue method gives all eigenvalues simultaneously, not just the dominant one (sketched below).
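A minimal sketch of steps 1-4 (the fixed iteration count is illustrative; the basic iteration as written assumes real eigenvalues of distinct magnitude, e.g. a symmetric matrix):

```python
import numpy as np

def qr_eigenvalues(A, iterations=200):
    A = A.astype(float).copy()
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)   # step 1: factor A = Q R
        A = R @ Q                # step 2: reverse the product
    return np.diag(A)            # steps 3-4: diagonal -> eigenvalue estimates

A = np.array([[4., 1.], [1., 3.]])
print(qr_eigenvalues(A))         # compare with np.linalg.eigvals(A)
```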
Householder Matrix
• The Householder matrix reduces $z_{k+1}, \ldots, z_n$ to zero:

$$H = I - 2ww^T\,; \qquad w = \frac{v}{\|v\|_2}$$

$$x = \begin{bmatrix} x_1 & x_2 & \cdots & x_k & x_{k+1} & \cdots & x_n \end{bmatrix}^T$$
$$y = Hx = \begin{bmatrix} y_1 & y_2 & \cdots & y_k & 0 & \cdots & 0 \end{bmatrix}^T$$
Householder Matrix
• To achieve the above operation, v must be a linear combination of x and $e_k$:

$$e_k = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}^T$$
$$v = x + \alpha e_k = \begin{bmatrix} x_1 & x_2 & \cdots & x_{k-1} & x_k + \alpha & x_{k+1} & \cdots & x_n \end{bmatrix}^T$$
Chapter 8
Interpolation
Interpolation Methods

Interpolation uses the data to approximate a function that fits all of the data points. All of the data is used to approximate the values of the function inside the bounds of the data.
We will look at polynomial and rational function interpolation of the data, and at piecewise interpolation of the data.
Polynomial Interpolation
Methods
• Lagrange Interpolation Polynomial - a straightforward, but computationally awkward, way to construct an interpolating polynomial (see the sketch below).

• Newton Interpolation Polynomial - there is no difference between the Newton and Lagrange results. The difference between the two is the approach to obtaining the coefficients.
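A short sketch of direct Lagrange evaluation (names are illustrative; the Newton form would give the same values with a different coefficient computation):

```python
def lagrange_eval(xd, yd, x):
    """Evaluate the interpolating polynomial through (xd[i], yd[i]) at x."""
    total = 0.0
    for i in range(len(xd)):
        Li = 1.0                 # L_i(x): 1 at xd[i], 0 at the other points
        for j in range(len(xd)):
            if j != i:
                Li *= (x - xd[j]) / (xd[i] - xd[j])
        total += Li * yd[i]
    return total

# The quadratic through three points of y = x^2 reproduces it exactly
print(lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.25
```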
Hermite Interpolation
The advantages:
• The segments of the piecewise Hermite polynomial have a continuous first derivative at the support points.
• The shape of the function being interpolated is better matched, because the tangent of this function and the tangent of the Hermite polynomial agree at the support points.
Rational Function Interpolation
Polynomials are not always the best match for the data. A rational function, the ratio of two polynomials, can be used instead; it is better at representing steep steps and is useful when fitting functions of a complex variable z = x + iy. The Bulirsch-Stoer algorithm creates a function where the numerator is of the same order as the denominator, or one less.
Rational Function Interpolation
For rational function interpolation, both the locations and the function values need to be known.

$$P_i(x) = \frac{x^3 + a_1x^2 + b_1x + c_1}{x^3 + a_2x^2 + b_2x + c_2}
\qquad\text{or}\qquad
P_j(x) = \frac{x^2 + b_1x + c_1}{x^3 + a_2x^2 + b_2x + c_2}$$
Cubic Spline Interpolation
Hermite polynomials produce a smooth interpolation, but they have the disadvantage that the slope of the input function must be specified at each breakpoint. Cubic spline interpolation uses only the data points while maintaining the desired smoothness of the function, and it is piecewise continuous.
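A sketch using SciPy's CubicSpline, which needs only the data points (no slopes), assuming the default not-a-knot end conditions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 2 * np.pi, 9)   # breakpoints: data only, no derivatives
y = np.sin(x)

cs = CubicSpline(x, y)             # piecewise cubic with continuous f' and f''
print(cs(1.0), np.sin(1.0))        # value tracks the underlying function
print(cs(1.0, 1), np.cos(1.0))     # first derivative is continuous as well
```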
Chapter 9
Approximation
Approximation Methods
What is the difference between approximation and interpolation?

• Interpolation matches the data points exactly. In the case of experimental data, this assumption is often not appropriate.
• Approximation - we want to consider the curve that will fit the data with the smallest "error".
Least Square Fit Approximations
The solution is the minimization of the sum of the squares of the errors. This gives a least squares solution:

$$S = \sum_k e_k^2$$

This is known as the Maximum Likelihood Principle.
Least Square Error
How do you minimize the error? Take the derivative with respect to the coefficients and set it equal to zero:

$$\frac{dS}{da} = 0\,, \qquad \frac{dS}{db} = 0$$
Least Square Coefficients for
Quadratic Fit
The equations can be written as:

$$\begin{bmatrix}
\sum_{i=1}^{N} x_i^4 & \sum_{i=1}^{N} x_i^3 & \sum_{i=1}^{N} x_i^2 \\
\sum_{i=1}^{N} x_i^3 & \sum_{i=1}^{N} x_i^2 & \sum_{i=1}^{N} x_i \\
\sum_{i=1}^{N} x_i^2 & \sum_{i=1}^{N} x_i & N
\end{bmatrix}
\begin{Bmatrix} a \\ b \\ c \end{Bmatrix}
=
\begin{Bmatrix}
\sum_{i=1}^{N} x_i^2 Y_i \\ \sum_{i=1}^{N} x_i Y_i \\ \sum_{i=1}^{N} Y_i
\end{Bmatrix}$$
Polynomial Least Square
The technique can be applied to all polynomials of the form:

$$y = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$$

$$\begin{bmatrix}
N & \sum x_i & \cdots & \sum x_i^n \\
\sum x_i & \sum x_i^2 & \cdots & \sum x_i^{n+1} \\
\vdots & \vdots & \ddots & \vdots \\
\sum x_i^n & \sum x_i^{n+1} & \cdots & \sum x_i^{2n}
\end{bmatrix}
\begin{Bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{Bmatrix}
=
\begin{Bmatrix}
\sum Y_i \\ \sum x_iY_i \\ \vdots \\ \sum x_i^nY_i
\end{Bmatrix}$$

(all sums taken over i = 1, …, N)
Polynomial Least Square

Solving large sets of linear equations is not a simple task. They can have the undesirable property known as ill-conditioning: round-off errors in solving for the coefficients cause unusually large errors in the curve fits.
Polynomial Least Square
A measure of the variance of the problem is

$$\sigma^2 = \frac{1}{N - n - 1}\sum_{k=1}^{N}\left(Y_k - y_k\right)^2$$

where n is the degree of the polynomial, N is the number of data points, $Y_k$ are the data values, and

$$y_k = \sum_{j=0}^{n} a_j x_k^j$$
Nonlinear Least Squared
Approximation Method
How would you handle a problem which is modeled as:

$$y = bx^a \qquad\text{or}\qquad y = be^{ax}$$
Nonlinear Least Squared
Approximation Method
Take the natural log of the equations:

$$y = bx^a \;\Rightarrow\; \ln y = \ln b + a\ln x \;\Rightarrow\; \tilde{y} = \tilde{b} + a\tilde{x}$$

and

$$y = be^{ax} \;\Rightarrow\; \ln y = \ln b + ax \;\Rightarrow\; \tilde{y} = \tilde{b} + ax$$

Each is now linear in the transformed variables.
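A sketch of the linearized fit for $y = be^{ax}$, regressing ln y on x (illustrative data; np.polyfit returns the slope first, then the intercept):

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = 3.0 * np.exp(0.8 * x)                # data generated from b = 3, a = 0.8

a, ln_b = np.polyfit(x, np.log(y), 1)    # ln y = ln b + a x is linear in x
print(a, np.exp(ln_b))                   # recovers ~0.8 and ~3.0
```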
Continuous Least Square
Functions
Instead of modeling a known complex function over a region, we would like to model the values with a simple polynomial. This technique uses least squares over a continuous region. The coefficients of the polynomial can be determined using the same technique that was used in the discrete method.
Continuous Least Square
Functions
The technique minimizes the error of the function using an integral:

$$E = \int_a^b \left[f(x) - s(x)\right]^2 dx$$

where

$$f(x) = a_0 + a_1x + a_2x^2$$
Continuous Least Square
Functions
Take the derivative of the error with respect to the coefficients and set it equal to zero:

$$\frac{dE}{da_i} = \int_a^b 2\left[f(x) - s(x)\right]\frac{df(x)}{da_i}\,dx = 0$$

Then compute the components of the coefficient matrix. The right-hand side of the system is the integral of the function being modeled times a power of x.
Continuous Least Square
Function
There are other forms of equations which can be used to represent continuous functions. Examples of these functions are:
• Legendre polynomials
• Tchebyshev polynomials
• Cosines and sines
Legendre Polynomial
The Legendre polynomials are a set of orthogonal functions which can be used to represent a function as a sum of components:

$$f(x) = a_0P_0(x) + a_1P_1(x) + \cdots + a_nP_n(x)$$
Legendre Polynomial
These functions are orthogonal over the range [-1, 1]. This range can be scaled to fit the function. The orthogonality condition is:

$$\int_{-1}^{1} P_i(x)P_j(x)\,dx =
\begin{cases} \dfrac{2}{2i + 1} & \text{if } i = j \\[4pt] 0 & \text{if } i \neq j \end{cases}$$
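The orthogonality relation gives the coefficients directly as $a_i = \frac{2i+1}{2}\int_{-1}^{1} f(x)P_i(x)\,dx$. A sketch (the helper name is illustrative), using NumPy's Legendre basis and SciPy quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

def legendre_coeffs(f, n):
    """Continuous least-squares coefficients of f on [-1, 1]."""
    coeffs = []
    for i in range(n + 1):
        Pi = Legendre.basis(i)                        # i-th Legendre polynomial
        val, _ = quad(lambda t: f(t) * Pi(t), -1, 1)  # integral of f * P_i
        coeffs.append((2 * i + 1) / 2 * val)
    return coeffs

print(legendre_coeffs(np.exp, 3))   # first four coefficients for e^x
```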
Continuous Functions

Other forms of orthogonal functions are sines and cosines, which are used in Fourier approximation. The advantage of sines and cosines is that they can model large time scales. You will need to clip the ends of the series so that it has zeros at the ends.
Chapter 11
Numerical Differentiation
and Integration
Numerical Differentiation
A Taylor series or Lagrange interpolation of points can be used to find the derivatives. The Taylor series expansion is defined as:

$$f(x_i) = f(x_0) + \Delta x\left.\frac{df}{dx}\right|_{x=x_0} + \frac{(\Delta x)^2}{2!}\left.\frac{d^2f}{dx^2}\right|_{x=x_0} + \frac{(\Delta x)^3}{3!}\left.\frac{d^3f}{dx^3}\right|_{x=x_0} + \cdots$$

where $\Delta x = x_i - x_0$, i.e.

$$f(x_i) = f(x_0) + (x_i - x_0)f'(x_0) + \frac{(x_i - x_0)^2}{2!}f''(x_0) + \frac{(x_i - x_0)^3}{3!}f'''(x_0) + \cdots$$
Numerical Differentiation
Assume that the data points are equally spaced; the equations can be written as:

$$f(x_{i+1}) = f(x_i) + \Delta x\,f'(x_i) + \frac{(\Delta x)^2}{2!}f''(x_i) + \frac{(\Delta x)^3}{3!}f'''(x_i) + \cdots \quad (1)$$

$$f'(x_i) \approx \frac{f(x_{i+1}) - f(x_i)}{\Delta x} \quad (2)$$

$$f(x_{i-1}) = f(x_i) - \Delta x\,f'(x_i) + \frac{(\Delta x)^2}{2!}f''(x_i) - \frac{(\Delta x)^3}{3!}f'''(x_i) + \cdots \quad (3)$$
Differential Error
Notice that the forward and backward 1st-derivative equations have an error of order O(Δx), while the central difference has an error of order O(Δx²). The central difference therefore has better accuracy and lower error than the others. This can be improved further by using more terms to model the first derivative.
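A quick numerical check of these error orders (illustrative function and step size):

```python
import numpy as np

f, df, x0, dx = np.sin, np.cos, 1.0, 0.1

forward  = (f(x0 + dx) - f(x0)) / dx              # error O(dx)
backward = (f(x0) - f(x0 - dx)) / dx              # error O(dx)
central  = (f(x0 + dx) - f(x0 - dx)) / (2 * dx)   # error O(dx^2)

for name, val in [("forward", forward), ("backward", backward),
                  ("central", central)]:
    print(name, abs(val - df(x0)))   # the central error is much smaller
```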
Higher Order Derivatives

To find higher derivatives, use the Taylor series expansions at several points and combine the equations to eliminate the unwanted terms. To improve the error in the problem, add additional terms.
Lagrange Differentiation
Another form of differentiation is to use the Lagrange interpolation between three points. The values can be determined for unevenly spaced points. Given:

$$L(x) = L_1(x)y_1 + L_2(x)y_2 + L_3(x)y_3 = \frac{(x - x_2)(x - x_3)}{(x_1 - x_2)(x_1 - x_3)}y_1 + \frac{(x - x_1)(x - x_3)}{(x_2 - x_1)(x_2 - x_3)}y_2 + \frac{(x - x_1)(x - x_2)}{(x_3 - x_1)(x_3 - x_2)}y_3$$
Lagrange Differentiation
Differentiate the Lagrange interpolation:

$$f'(x) \approx L'(x) = \frac{2x - x_2 - x_3}{(x_1 - x_2)(x_1 - x_3)}y_1 + \frac{2x - x_1 - x_3}{(x_2 - x_1)(x_2 - x_3)}y_2 + \frac{2x - x_1 - x_2}{(x_3 - x_1)(x_3 - x_2)}y_3$$

Assume a constant spacing Δx:

$$f'(x) \approx \frac{2x - x_2 - x_3}{2\Delta x^2}y_1 - \frac{2x - x_1 - x_3}{\Delta x^2}y_2 + \frac{2x - x_1 - x_2}{2\Delta x^2}y_3$$
Richardson Extrapolation
This technique uses the concept of variable grid sizes to reduce the error. The technique uses a simple method for eliminating the leading error term. Consider a second-order central difference; write the equation in the form:

$$f''(x_i) = \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{\Delta x^2} + a_1\Delta x^2 + a_2\Delta x^4 + \cdots$$
Richardson Extrapolation
The central difference can be defined as

$$A = f''(x_i) = \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{\Delta x^2} + a_1\Delta x^2 + a_2\Delta x^4 + \cdots$$

Write the equation with different grid sizes:

$$A = A(\Delta x) + a_1\Delta x^2 + a_2\Delta x^4 + \cdots$$

$$A = A\!\left(\frac{\Delta x}{2}\right) + a_1\!\left(\frac{\Delta x}{2}\right)^{2} + a_2\!\left(\frac{\Delta x}{2}\right)^{4} + \cdots$$
Richardson Extrapolation
Combining the two (four times the fine-grid equation, minus the coarse-grid equation, divided by 3) eliminates the Δx² term:

$$A = \frac{4A(\Delta x/2) - A(\Delta x)}{3} - a_2\frac{\Delta x^4}{4} - \cdots$$

It can be rewritten in the form

$$A = B(\Delta x) + b_1\Delta x^4 + b_2\Delta x^6 + \cdots$$
Richardson Extrapolation
The technique can be extended to eliminate the higher-order error terms by using a finer grid:

$$A = \frac{16B(\Delta x/2) - B(\Delta x)}{15} + O(\Delta x^6)$$
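A sketch of one extrapolation level applied to the central difference for the second derivative (illustrative function and step):

```python
import numpy as np

def second_deriv(f, x, dx):
    """Second-order central difference for f''(x)."""
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2

f, x, dx = np.sin, 1.0, 0.2
A_h  = second_deriv(f, x, dx)        # error O(dx^2)
A_h2 = second_deriv(f, x, dx / 2)    # error O(dx^2 / 4)
B    = (4 * A_h2 - A_h) / 3          # leading error eliminated: O(dx^4)

exact = -np.sin(1.0)
print(abs(A_h - exact), abs(B - exact))   # extrapolated value is far closer
```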
Trapezoid Rule
Integrate the linear Lagrange interpolation to obtain the rule:

$$\int_a^b f(x)\,dx \approx \int_a^b L(x)\,dx = h\int_0^1 L(\xi)\,d\xi = f(a)\,h\int_0^1 (1 - \xi)\,d\xi + f(b)\,h\int_0^1 \xi\,d\xi$$

$$= f(a)\,h\left(\xi - \frac{\xi^2}{2}\right)\bigg|_0^1 + f(b)\,h\,\frac{\xi^2}{2}\bigg|_0^1 = \frac{h}{2}\left[f(a) + f(b)\right]$$
Simpson’s 1/3-Rule
Integrate the quadratic Lagrange interpolation (nodes at ξ = -1, 0, 1):

$$\int_a^b f(x)\,dx \approx h\int_{-1}^{1} L(\xi)\,d\xi = f(x_0)\,\frac{h}{2}\int_{-1}^{1}\xi(\xi - 1)\,d\xi + f(x_1)\,h\int_{-1}^{1}(1 - \xi^2)\,d\xi + f(x_2)\,\frac{h}{2}\int_{-1}^{1}\xi(\xi + 1)\,d\xi$$

$$= f(x_0)\,\frac{h}{2}\left(\frac{\xi^3}{3} - \frac{\xi^2}{2}\right)\bigg|_{-1}^{1} + f(x_1)\,h\left(\xi - \frac{\xi^3}{3}\right)\bigg|_{-1}^{1} + f(x_2)\,\frac{h}{2}\left(\frac{\xi^3}{3} + \frac{\xi^2}{2}\right)\bigg|_{-1}^{1}$$

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + f(x_2)\right]$$
Simpson’s 3/8-Rule
$$L(x) = \frac{(x - x_1)(x - x_2)(x - x_3)}{(x_0 - x_1)(x_0 - x_2)(x_0 - x_3)}f(x_0) + \frac{(x - x_0)(x - x_2)(x - x_3)}{(x_1 - x_0)(x_1 - x_2)(x_1 - x_3)}f(x_1)$$
$$+ \frac{(x - x_0)(x - x_1)(x - x_3)}{(x_2 - x_0)(x_2 - x_1)(x_2 - x_3)}f(x_2) + \frac{(x - x_0)(x - x_1)(x - x_2)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)}f(x_3)$$

$$\int_a^b f(x)\,dx \approx \int_a^b L(x)\,dx\,, \qquad h = \frac{b - a}{3}$$

$$= \frac{3h}{8}\left[f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3)\right]$$
Midpoint Rule
Newton-Cotes Open Formula
$$\int_a^b f(x)\,dx \approx (b - a)\,f(x_m) = (b - a)\,f\!\left(\frac{a + b}{2}\right) + \frac{(b - a)^3}{24}f''(\xi)$$

where $x_m$ is the midpoint of the interval.
Composite Trapezoid Rule
$$\int_a^b f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \cdots + \int_{x_{n-1}}^{x_n} f(x)\,dx$$

$$= \frac{h}{2}\left[f(x_0) + f(x_1)\right] + \frac{h}{2}\left[f(x_1) + f(x_2)\right] + \cdots + \frac{h}{2}\left[f(x_{n-1}) + f(x_n)\right]$$

$$= \frac{h}{2}\left[f(x_0) + 2f(x_1) + \cdots + 2f(x_i) + \cdots + 2f(x_{n-1}) + f(x_n)\right]\,, \qquad h = \frac{b - a}{n}$$
Composite Simpson’s Rule
Multiple applications of Simpson’s rule
$$\int_a^b f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \cdots + \int_{x_{n-2}}^{x_n} f(x)\,dx$$

$$= \frac{h}{3}\left[f(x_0) + 4f(x_1) + f(x_2)\right] + \frac{h}{3}\left[f(x_2) + 4f(x_3) + f(x_4)\right] + \cdots + \frac{h}{3}\left[f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\right]$$

$$= \frac{h}{3}\left[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \cdots + 4f(x_{2i-1}) + 2f(x_{2i}) + 4f(x_{2i+1}) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\right]$$
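Sketches of both composite rules, assuming equally spaced points (and, for Simpson's rule, an even number of panels):

```python
import numpy as np

def composite_trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    y = f(x)
    return h / 2 * (y[0] + 2 * np.sum(y[1:-1]) + y[-1])

def composite_simpson(f, a, b, n):          # n must be even
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    y = f(x)
    return h / 3 * (y[0] + 4 * np.sum(y[1:-1:2])
                    + 2 * np.sum(y[2:-1:2]) + y[-1])

print(composite_trapezoid(np.sin, 0, np.pi, 100))   # ~2, error O(h^2)
print(composite_simpson(np.sin, 0, np.pi, 100))     # ~2, error O(h^4)
```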
Richardson Extrapolation
Use the trapezoidal rule as an example
– subintervals: n = 2^j = 1, 2, 4, 8, 16, …

$$\int_a^b f(x)\,dx = \frac{h_j}{2}\left[f(x_0) + 2f(x_1) + \cdots + 2f(x_{n-1}) + f(x_n)\right] + \sum_{m \ge 1} c_m h_j^{2m}\,, \qquad h_j = \frac{b - a}{2^j}$$

With h = b - a:

j = 0, n = 1:  $I_0 = \frac{h}{2}\left[f(a) + f(b)\right]$
j = 1, n = 2:  $I_1 = \frac{h}{4}\left[f(a) + 2f(x_1) + f(b)\right]$
j = 2, n = 4:  $I_2 = \frac{h}{8}\left[f(a) + 2f(x_1) + 2f(x_2) + 2f(x_3) + f(b)\right]$
j = 3, n = 8:  $I_3 = \frac{h}{16}\left[f(a) + 2f(x_1) + \cdots + 2f(x_7) + f(b)\right]$
⋮
$I_j = \frac{h}{2^{j+1}}\left[f(a) + 2f(x_1) + \cdots + 2f(x_{n-1}) + f(b)\right]$, with $n = 2^j$
Richardson Extrapolation
For trapezoidal rule
$$A = \int_a^b f(x)\,dx = A(h) + c_1h^2 + c_2h^4 + \cdots$$

$$A = A\!\left(\frac{h}{2}\right) + c_1\!\left(\frac{h}{2}\right)^2 + c_2\!\left(\frac{h}{2}\right)^4 + \cdots$$

Eliminate the $h^2$ term:

$$A = \frac{1}{3}\left[4A\!\left(\frac{h}{2}\right) - A(h)\right] - \frac{c_2}{4}h^4 - \cdots = B(h) + b_2h^4 + \cdots$$

Repeating with B:

$$A = B(h) + b_2h^4 + \cdots\,, \qquad A = B\!\left(\frac{h}{2}\right) + b_2\!\left(\frac{h}{2}\right)^4 + \cdots \;\Rightarrow\; C(h) = \frac{1}{15}\left[16B\!\left(\frac{h}{2}\right) - B(h)\right]$$
Richardson Extrapolation
kth level of extrapolation

$$D(h) = \frac{4^k C(h/2) - C(h)}{4^k - 1}$$
Romberg Integration
Accelerated Trapezoid Rule
$$I_{j,k} = \frac{4^k I_{j+1,k-1} - I_{j,k-1}}{4^k - 1}\,; \qquad k = 1, 2, 3, \ldots$$

              k = 0        k = 1        k = 2      k = 3     k = 4
              Trapezoid    Simpson's    Boole's
              O(h^2)       O(h^4)       O(h^6)     O(h^8)    O(h^10)
    h         I_{0,0}      I_{0,1}      I_{0,2}    I_{0,3}   I_{0,4}
    h/2       I_{1,0}      I_{1,1}      I_{1,2}    I_{1,3}
    h/4       I_{2,0}      I_{2,1}      I_{2,2}
    h/8       I_{3,0}      I_{3,1}
    h/16      I_{4,0}

Column formulas:

$$I_{j,1} = \frac{4I_{j+1,0} - I_{j,0}}{3}\,,\quad I_{j,2} = \frac{16I_{j+1,1} - I_{j,1}}{15}\,,\quad I_{j,3} = \frac{64I_{j+1,2} - I_{j,2}}{63}\,,\quad I_{j,4} = \frac{256I_{j+1,3} - I_{j,3}}{255}$$
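A sketch of the full table (the level count is illustrative): the first column holds trapezoid estimates with 2^j panels, and each later column applies the 4^k acceleration formula:

```python
import numpy as np

def romberg(f, a, b, levels=5):
    I = [[0.0] * levels for _ in range(levels)]
    for j in range(levels):                  # column k = 0: trapezoid rule
        n = 2 ** j
        x = np.linspace(a, b, n + 1)
        h = (b - a) / n
        y = f(x)
        I[j][0] = h / 2 * (y[0] + 2 * np.sum(y[1:-1]) + y[-1])
    for k in range(1, levels):               # accelerated columns
        for j in range(levels - k):
            I[j][k] = (4**k * I[j + 1][k - 1] - I[j][k - 1]) / (4**k - 1)
    return I[0][levels - 1]

print(romberg(np.sin, 0, np.pi))             # ~2.0
```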
Gaussian Quadratures
• Newton-Cotes Formulae
– use evenly-spaced functional values

• Gaussian Quadratures
– select functional values at non-uniformly distributed points to achieve higher accuracy
– change of variables so that the interval of integration is [-1, 1]
– Gauss-Legendre formulae
Gaussian Quadrature on [-1, 1]
n = 2:

$$\int_{-1}^{1} f(x)\,dx \approx c_1f(x_1) + c_2f(x_2)$$

Exact integral for f = x⁰, x¹, x², x³ – four equations for four unknowns:

$$f = 1:\quad \int_{-1}^{1} 1\,dx = 2 = c_1 + c_2$$
$$f = x:\quad \int_{-1}^{1} x\,dx = 0 = c_1x_1 + c_2x_2$$
$$f = x^2:\quad \int_{-1}^{1} x^2\,dx = \tfrac{2}{3} = c_1x_1^2 + c_2x_2^2$$
$$f = x^3:\quad \int_{-1}^{1} x^3\,dx = 0 = c_1x_1^3 + c_2x_2^3$$

$$\Rightarrow\quad c_1 = c_2 = 1\,, \qquad x_1 = -\frac{1}{\sqrt{3}}\,, \quad x_2 = \frac{1}{\sqrt{3}}$$
Gaussian Quadrature on [-1, 1]
n = 2: exact for f = x⁰, x¹, x², x³:

$$I = \int_{-1}^{1} f(x)\,dx \approx f\!\left(-\frac{1}{\sqrt{3}}\right) + f\!\left(\frac{1}{\sqrt{3}}\right)$$
Gaussian Quadrature on [-1, 1]

Exact integral for f = x⁰, x¹, x², x³, x⁴, x⁵:

$$I = \int_{-1}^{1} f(x)\,dx \approx \frac{5}{9}f\!\left(-\sqrt{\frac{3}{5}}\right) + \frac{8}{9}f(0) + \frac{5}{9}f\!\left(\sqrt{\frac{3}{5}}\right)$$
Summary
• Open book and open notes.
• The exam will be 5-8 problems.
• Short-answer-type problems: use a table to differentiate between the techniques.
• Problems are not going to be excessive.
• Make a short summary of the material.
• Use your notes only when you have forgotten something; do not depend on them.