CVEN 302
July 29, 2002
Lecture’s Goals
• Chapter 6 - LU Decomposition
• Chapter 7 - Eigen-analysis
• Chapter 8 - Interpolation
• Chapter 9 - Approximation
• Chapter 11 - Numerical Differentiation and
Integration
Chapter 6
LU Decomposition of
Matrices
LU Decomposition
• A modification of the elimination method, called LU decomposition, rewrites the matrix as the product of two triangular matrices:
A = LU
LU Decomposition
There are variations of the technique using different methods:
– Crout’s reduction (U has ones on the diagonal).
– Doolittle’s method (L has ones on the diagonal).
– Cholesky’s method (the diagonal terms are the same value for the L and U matrices).
LU Decomposition Solving
Using the LU decomposition
[A]{x} = [L][U]{x} = [L]{[U]{x}} = {b}
Solve
[L]{y} = {b}
and then solve
[U]{x} = {y}
LU Decomposition
The matrices are represented by a lower triangular factor L and an upper triangular factor U.
LU Decomposition (Crout’s reduction)
Matrix decomposition with ones on the diagonal of U (u_{ii} = 1).
LU Decomposition (Doolittle’s Method)
Matrix decomposition with ones on the diagonal of L (l_{ii} = 1).
Cholesky’s Method
Matrix is decomposed into A = L L^T, so U = L^T (A must be symmetric positive definite).
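To make the factor-then-substitute workflow concrete, here is a minimal pure-Python sketch of Doolittle’s method (ones on the diagonal of L) followed by the two triangular solves [L]{y} = {b} and [U]{x} = {y}. The 2×2 example system is illustrative, not from the slides:

```python
def lu_doolittle(A):
    """Doolittle LU decomposition: A = L*U with ones on the diagonal of L.
    No pivoting, so it assumes the leading minors of A are nonsingular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve L y = b (forward substitution), then U x = y (back substitution)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
b = [10.0, 12.0]
L, U = lu_doolittle(A)
x = lu_solve(L, U, b)
print(x)   # solves the system: x = [1.0, 2.0]
```

In practice a pivoted library routine (e.g. scipy.linalg.lu_factor) is preferable; this sketch only mirrors the steps on the slides.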
Chapter 7
Eigen-analysis
• Discrete models
– A finite number of degrees of freedom results in a finite number of eigenvalues and eigenvectors.
Eigenvalues
Computing eigenvalues of a matrix is important in numerous
applications.
– In numerical analysis, the convergence of an iterative sequence involving matrices is determined by the size of the eigenvalues of the iteration matrix.
– In dynamic systems, the eigenvalues indicate whether a system is oscillatory, stable (decaying oscillations), or unstable (growing oscillations).
– In oscillatory systems, the eigenvalues of the differential equations or of the coefficient matrix of a finite element model are directly related to the natural frequencies of the system.
– In regression analysis, the eigenvectors of the correlation matrix are used to select new predictor variables that are linear combinations of the original predictor variables.
General Form of the Equations
The general form of the equations:
Ax = \lambda x
Ax - \lambda I x = 0
(A - \lambda I) x = 0
\det(A - \lambda I) = 0
Power Method
The basic computation of the power method is summarized as
u_k = \frac{A u_{k-1}}{\|A u_{k-1}\|} \quad \text{and} \quad \lim_{k \to \infty} u_k = x_1
where x_1 is the eigenvector corresponding to the dominant eigenvalue \lambda_1.
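A minimal sketch of this iteration in Python; the example matrix and the scaling by the largest-magnitude component are illustrative choices:

```python
def power_method(A, u, iters=100):
    """Power method: repeatedly multiply by A and rescale; u converges to the
    dominant eigenvector and the scale factor to the dominant eigenvalue."""
    n = len(u)
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
        lam = max(w, key=abs)          # largest-magnitude component of w
        u = [c / lam for c in w]       # normalize so the iterate stays bounded
    return lam, u

A = [[2.0, 1.0], [1.0, 2.0]]           # symmetric, eigenvalues 3 and 1
lam, u = power_method(A, [1.0, 0.0])
print(lam, u)                          # approaches 3 and the direction [1, 1]
```

Convergence is geometric with ratio |λ₂/λ₁|, which is why a well-separated dominant eigenvalue converges quickly.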
Inverse Power Method
The algorithm is the same as the power method applied to A^{-1}; the iteration converges to the eigenvector associated with the smallest-magnitude eigenvalue of A. To obtain the smallest eigenvalue from the power method:
\lambda_{\min}(A) = \frac{1}{\lambda_{\max}(A^{-1})}
Accelerated Power Method
The Power method can be accelerated by using the Rayleigh quotient instead of the largest w_k value. With w = A z_1, the Rayleigh quotient is defined as:
\lambda_1 = \frac{z' w}{z' z}
Accelerated Power Method
The value of the next z term is defined as:
z_2 = \frac{w}{\lambda_1}
The Power method is adapted to use the new value.
QR Factorization
• Another form of factorization
A = Q*R
• Produces an orthogonal matrix (“Q”) and a
right upper triangular matrix (“R”)
• Orthogonal matrix - inverse is transpose
Q^{-1} = Q^T
QR Factorization
Why do we care?
We can use Q and R to find eigenvalues
1. Get Q and R (A = Q*R)
2. Let A = R*Q
3. Diagonal elements of A are eigenvalue
approximations
4. Iterate until converged
Note: QR eigenvalue method gives all eigenvalues
simultaneously, not just the dominant
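A small sketch of steps 1-4 using a classical Gram-Schmidt QR (adequate for a demo; Householder-based QR is more stable in practice). The 2×2 symmetric example matrix is an assumption for illustration:

```python
def qr_gram_schmidt(A):
    """QR factorization by classical Gram-Schmidt (fine for a small demo)."""
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]          # j-th column of A
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = sum(c * c for c in v) ** 0.5
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0], [1.0, 2.0]]    # eigenvalues 3 and 1
for _ in range(30):
    Q, R = qr_gram_schmidt(A)   # step 1: A = Q*R
    A = matmul(R, Q)            # step 2: A <- R*Q (similar to the original A)
eigs = sorted(A[i][i] for i in range(2))
print(eigs)                     # diagonal approaches the eigenvalues [1, 3]
```

Because R·Q = Qᵀ·(Q·R)·Q, each step is a similarity transform, so the eigenvalues are preserved while the off-diagonal entries shrink.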
Householder Matrix
• The Householder matrix reduces z_{k+1}, \ldots, z_n to zero
H = I - 2ww', \qquad w = \frac{v}{\|v\|_2}
x = (x_1, x_2, \ldots, x_k, x_{k+1}, \ldots, x_n)
y = Hx = (y_1, y_2, \ldots, y_k, 0, \ldots, 0)
Householder Matrix
• To achieve the above operation, v must be a linear combination of x and e_k:
e_k = (0, 0, \ldots, 0, 1, 0, \ldots, 0)^T
v = x + \alpha e_k = (x_1, x_2, \ldots, x_{k-1}, x_k + \alpha, x_{k+1}, \ldots, x_n)
Chapter 8
Interpolation
Interpolation Methods
The interpolant is built from piecewise cubic polynomials, one per subinterval, e.g.
P_i(x) = x^3 + a_1 x^2 + b_1 x + c_1 \quad \text{or} \quad P_j(x) = x^3 + a_2 x^2 + b_2 x + c_2
with different coefficients on each interval.
Cubic Spline Interpolation
Hermite polynomials produce a smooth interpolation, but they have the disadvantage that the slope of the input function must be specified at each breakpoint.
Cubic spline interpolation uses only the data points, maintains the desired smoothness of the function, and is piecewise continuous.
Chapter 9
Approximation
Approximation Methods
What is the difference between approximation
and interpolation?
S = \sum e_k^2
Take the derivative with respect to the coefficients and set it equal to zero:
\frac{dS}{da} = 0, \qquad \frac{dS}{db} = 0
db
Least Square Coefficients for
Quadratic Fit
For the quadratic fit y = a x^2 + b x + c, the equations can be written as:
\begin{bmatrix} \sum x_i^4 & \sum x_i^3 & \sum x_i^2 \\ \sum x_i^3 & \sum x_i^2 & \sum x_i \\ \sum x_i^2 & \sum x_i & N \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} \sum x_i^2 Y_i \\ \sum x_i Y_i \\ \sum Y_i \end{bmatrix}
(all sums run from i = 1 to N)
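A sketch of assembling and solving these normal equations in pure Python; the 3×3 Gaussian elimination and the sample data are illustrative:

```python
def quad_fit(xs, ys):
    """Least-squares quadratic fit y = a*x^2 + b*x + c via the normal
    equations, solved with Gaussian elimination on the 3x3 system."""
    N = len(xs)
    S = lambda p: sum(x**p for x in xs)                      # sum of x^p
    Sy = lambda p: sum(x**p * y for x, y in zip(xs, ys))     # sum of x^p * y
    M = [[S(4), S(3), S(2), Sy(2)],
         [S(3), S(2), S(1), Sy(1)],
         [S(2), S(1), N,    Sy(0)]]
    # forward elimination (no pivoting; fine for this well-posed demo)
    for i in range(3):
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    # back substitution
    coef = [0.0] * 3
    for i in reversed(range(3)):
        coef[i] = (M[i][3] - sum(M[i][j] * coef[j] for j in range(i + 1, 3))) / M[i][i]
    return coef  # [a, b, c]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]      # exactly y = x^2 + 1
a, b, c = quad_fit(xs, ys)
print(a, b, c)                  # recovers a = 1, b = 0, c = 1
```

Because the sample data lie exactly on a quadratic, the fit reproduces the generating coefficients; with noisy data it would return the least-squares best fit instead.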
Polynomial Least Square
The technique can be applied to all polynomials of the form:
y = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n
\begin{bmatrix} N & \sum x_i & \cdots & \sum x_i^n \\ \sum x_i & \sum x_i^2 & \cdots & \sum x_i^{n+1} \\ \vdots & \vdots & & \vdots \\ \sum x_i^n & \sum x_i^{n+1} & \cdots & \sum x_i^{2n} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} \sum Y_i \\ \sum x_i Y_i \\ \vdots \\ \sum x_i^n Y_i \end{bmatrix}
Polynomial Least Square
N \ge n + 1
where n is the degree of the polynomial, N is the number of data points, Y_k are the data values, and
y_k = \sum_{j=0}^{n} a_j x_k^j
Nonlinear Least Squared
Approximation Method
How would you handle a problem which is modeled as:
y = b x^a
or
y = b e^{ax}
Nonlinear Least Squared
Approximation Method
Take the natural log of the equations
y = b x^a \;\Rightarrow\; \ln y = \ln b + a \ln x
and
y = b e^{ax} \;\Rightarrow\; \ln y = \ln b + a x
Both forms are now linear in the unknowns, so the linear least-squares technique applies.
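A sketch of the second transformation in Python: fit y = b·e^{ax} by ordinary linear least squares on the pairs (x, ln y). The sample data are synthetic, generated from known coefficients:

```python
import math

def fit_exponential(xs, ys):
    """Fit y = b * exp(a*x) by linearizing: ln y = ln b + a*x,
    then ordinary linear least squares on (x, ln y)."""
    N = len(xs)
    lys = [math.log(y) for y in ys]
    sx, sy = sum(xs), sum(lys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, lys))
    a = (N * sxy - sx * sy) / (N * sxx - sx * sx)   # slope
    lnb = (sy - a * sx) / N                          # intercept = ln b
    return a, math.exp(lnb)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # exact data for y = 2 e^{0.5x}
a, b = fit_exponential(xs, ys)
print(a, b)                                   # recovers a = 0.5, b = 2
```

Note the caveat of the linearization: it minimizes the squared error in ln y, not in y, so large y values are weighted less than a direct nonlinear fit would weight them.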
Continuous Least Square
Functions
Instead of modeling a known complex function over a region, we would like to model the values with a simple polynomial. This technique uses least squares over a continuous region.
The coefficients of the polynomial can be determined using the same technique that was used in the discrete method.
Continuous Least Square
Functions
The technique minimizes the error of the function using an integral:
E = \int_a^b \left[ f(x) - s(x) \right]^2 dx
where
f(x) = a_0 + a_1 x + a_2 x^2 + \cdots
Continuous Least Square
Functions
Take the derivative of the error with respect to the coefficients and set it equal to zero:
\frac{dE}{da_i} = \int_a^b 2 \left[ f(x) - s(x) \right] \frac{df(x)}{da_i} \, dx = 0
The model can also be written in terms of orthogonal polynomials:
f(x) = a_0 P_0(x) + a_1 P_1(x) + \cdots + a_n P_n(x)
Legendre Polynomial
These functions are orthogonal over the range [-1, 1]. This range can be scaled to fit the function. The orthogonal functions are defined as:
\int_{-1}^{1} P_i(x) P_j(x) \, dx = \begin{cases} \dfrac{2}{2i+1} & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}
Continuous Functions
L(x) = L_1(x) y_1 + L_2(x) y_2 + L_3(x) y_3
= \frac{(x-x_2)(x-x_3)}{(x_1-x_2)(x_1-x_3)} y_1 + \frac{(x-x_1)(x-x_3)}{(x_2-x_1)(x_2-x_3)} y_2 + \frac{(x-x_1)(x-x_2)}{(x_3-x_2)(x_3-x_1)} y_3
Lagrange Differentiation
Differentiate the Lagrange interpolation:
f'(x) \approx L'(x) = \frac{2x - x_2 - x_3}{(x_1-x_2)(x_1-x_3)} y_1 + \frac{2x - x_1 - x_3}{(x_2-x_1)(x_2-x_3)} y_2 + \frac{2x - x_1 - x_2}{(x_3-x_2)(x_3-x_1)} y_3
Assume a constant spacing \Delta x:
f'(x) \approx \frac{2x - x_2 - x_3}{2\Delta x^2} y_1 - \frac{2x - x_1 - x_3}{\Delta x^2} y_2 + \frac{2x - x_1 - x_2}{2\Delta x^2} y_3
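As a quick check of the differentiated formula: for data sampled from f(x) = x², the quadratic interpolant is x² itself, so the formula reproduces the exact derivative. A minimal sketch (the helper name is illustrative):

```python
def lagrange_derivative(x, x1, y1, x2, y2, x3, y3):
    """Derivative of the quadratic Lagrange interpolant through three points."""
    return ((2*x - x2 - x3) / ((x1 - x2) * (x1 - x3)) * y1
          + (2*x - x1 - x3) / ((x2 - x1) * (x2 - x3)) * y2
          + (2*x - x1 - x2) / ((x3 - x2) * (x3 - x1)) * y3)

# equally spaced samples of f(x) = x^2, so f'(2) = 4 exactly
d = lagrange_derivative(2.0, 1.0, 1.0, 2.0, 4.0, 3.0, 9.0)
print(d)   # 4.0
```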
Richardson Extrapolation
This technique uses the concept of variable grid sizes to reduce the error. The technique uses a simple method for eliminating the error. Consider a second order central difference technique. Write the equation in the form:
f''(x_i) = \frac{f(x_{i+1}) - 2 f(x_i) + f(x_{i-1})}{\Delta x^2} + a_1 \Delta x^2 + a_2 \Delta x^4 + \cdots
2
Richardson Extrapolation
The central difference can be defined as
D(\Delta x) = \frac{f(x_{i+1}) - 2 f(x_i) + f(x_{i-1})}{\Delta x^2} = A + B \Delta x^2 + b_1 \Delta x^4 + b_2 \Delta x^6 + \cdots
where A is the exact value of f''(x_i).
Richardson Extrapolation
The technique can be extrapolated to include the higher order error elimination by using a finer grid:
A = \frac{16 \, B(\Delta x / 2) - B(\Delta x)}{15} + O(\Delta x^6)
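A sketch of one level of this extrapolation applied to the second-order central difference; the function names and test point are illustrative:

```python
import math

def second_diff(f, x, h):
    """Second-order central difference approximation to f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def richardson(f, x, h):
    """One level of Richardson extrapolation: combine step sizes h and h/2
    to cancel the leading O(h^2) error term, leaving O(h^4)."""
    return (4.0 * second_diff(f, x, h / 2) - second_diff(f, x, h)) / 3.0

# f'' of sin is -sin, so the exact answer at x = 1 is -sin(1)
approx = richardson(math.sin, 1.0, 0.1)
print(approx, -math.sin(1.0))
```

With h = 0.1, the plain central difference is accurate to about 1e-4 here, while the extrapolated value is accurate to better than 1e-6.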
Trapezoid Rule
Integrate the linear Lagrange interpolation to obtain the rule:
\int_a^b f(x) \, dx \approx \int_a^b L(x) \, dx = h \int_0^1 L(\xi) \, d\xi
= f(a) \, h \int_0^1 (1 - \xi) \, d\xi + f(b) \, h \int_0^1 \xi \, d\xi
= f(a) \, h \left[ \xi - \frac{\xi^2}{2} \right]_0^1 + f(b) \, h \left[ \frac{\xi^2}{2} \right]_0^1 = \frac{h}{2} \left[ f(a) + f(b) \right]
Simpson’s 1/3-Rule
Integrate the quadratic Lagrange interpolation, using the local coordinate \xi = (x - x_1)/h on [-1, 1]:
\int_a^b f(x) \, dx = h \int_{-1}^{1} L(\xi) \, d\xi = f(x_0) \, \frac{h}{2} \int_{-1}^{1} \xi (\xi - 1) \, d\xi + f(x_1) \, h \int_{-1}^{1} (1 - \xi^2) \, d\xi + f(x_2) \, \frac{h}{2} \int_{-1}^{1} \xi (\xi + 1) \, d\xi
= f(x_0) \, \frac{h}{2} \left[ \frac{\xi^3}{3} - \frac{\xi^2}{2} \right]_{-1}^{1} + f(x_1) \, h \left[ \xi - \frac{\xi^3}{3} \right]_{-1}^{1} + f(x_2) \, \frac{h}{2} \left[ \frac{\xi^3}{3} + \frac{\xi^2}{2} \right]_{-1}^{1}
= \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right]
Simpson’s 3/8-Rule
\int_a^b f(x) \, dx \approx \int_a^b L(x) \, dx = \frac{3h}{8} \left[ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) \right], \qquad h = \frac{b-a}{3}
Midpoint Rule
Newton-Cotes Open Formula
\int_a^b f(x) \, dx = (b-a) \, f(x_m) + \frac{(b-a)^3}{24} f''(\xi), \qquad x_m = \frac{a+b}{2}
(Figure: f(x) over [a, b] evaluated at the midpoint x_m.)
Composite Trapezoid Rule
\int_a^b f(x) \, dx = \int_{x_0}^{x_1} f(x) \, dx + \int_{x_1}^{x_2} f(x) \, dx + \cdots + \int_{x_{n-1}}^{x_n} f(x) \, dx
= \frac{h}{2} \left[ f(x_0) + f(x_1) \right] + \frac{h}{2} \left[ f(x_1) + f(x_2) \right] + \cdots + \frac{h}{2} \left[ f(x_{n-1}) + f(x_n) \right]
= \frac{h}{2} \left[ f(x_0) + 2 f(x_1) + 2 f(x_2) + \cdots + 2 f(x_{n-1}) + f(x_n) \right], \qquad h = \frac{b-a}{n}
(Figure: f(x) sampled at x_0, x_1, \ldots, x_n with uniform spacing h.)
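The composite rule translates directly into code; a minimal sketch with an illustrative test integrand:

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals of width h = (b-a)/n."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))       # endpoints weighted by 1/2
    for i in range(1, n):
        total += f(a + i * h)         # interior points weighted by 1
    return h * total

approx = composite_trapezoid(math.sin, 0.0, math.pi, 100)
print(approx)   # the exact integral of sin over [0, pi] is 2
```

With n = 100 the error is on the order of h², about 2e-4 here; doubling n quarters the error, which is exactly what Richardson extrapolation exploits below.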
Composite Simpson’s Rule
Multiple applications of Simpson’s rule over successive pairs of subintervals (n even):
\int_a^b f(x) \, dx = \int_{x_0}^{x_2} f(x) \, dx + \int_{x_2}^{x_4} f(x) \, dx + \cdots + \int_{x_{n-2}}^{x_n} f(x) \, dx
= \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right] + \frac{h}{3} \left[ f(x_2) + 4 f(x_3) + f(x_4) \right] + \cdots + \frac{h}{3} \left[ f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) \right]
= \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + 2 f(x_4) + \cdots + 2 f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) \right]
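A matching sketch of the composite Simpson’s rule, with the 4-2-4-2 weight pattern generated from the index parity (the test integrand is illustrative):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)   # odd -> 4, even -> 2
    return h * total / 3.0

approx = composite_simpson(math.sin, 0.0, math.pi, 20)
print(approx)   # the exact integral of sin over [0, pi] is 2
```

Note the O(h⁴) accuracy: only 20 panels already beat the 100-panel trapezoid result.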
Richardson Extrapolation
Use the trapezoidal rule as an example
– subintervals: n = 2^j = 1, 2, 4, 8, 16, …
\int_a^b f(x) \, dx = \frac{h}{2} \left[ f(x_0) + 2 f(x_1) + \cdots + 2 f(x_{n-1}) + f(x_n) \right] + c_1 h^2 + c_2 h^4 + \cdots
j = 0, n = 1: I_0 = (h/2) \left[ f(a) + f(b) \right]
j = 1, n = 2: I_1 = (h/4) \left[ f(a) + 2 f(x_1) + f(b) \right]
j = 2, n = 4: I_2 = (h/8) \left[ f(a) + 2 f(x_1) + 2 f(x_2) + 2 f(x_3) + f(b) \right]
j = 3, n = 8: I_3 = (h/16) \left[ f(a) + 2 f(x_1) + \cdots + 2 f(x_7) + f(b) \right]
In general: I_j = (h/2^{j+1}) \left[ f(a) + 2 f(x_1) + \cdots + 2 f(x_{n-1}) + f(b) \right]
Richardson Extrapolation
For the trapezoidal rule:
A = \int_a^b f(x) \, dx = A(h) + c_1 h^2 + c_2 h^4 + \cdots
A = A(h/2) + c_1 (h/2)^2 + c_2 (h/2)^4 + \cdots
Eliminate the h^2 term:
B(h) = \frac{4 A(h/2) - A(h)}{3}, \qquad A = B(h) + b_2 h^4 + \cdots
Eliminate the h^4 term:
C(h) = \frac{16 B(h/2) - B(h)}{15}, \qquad A = C(h) + O(h^6)
Richardson Extrapolation
kth level of extrapolation:
D(h) = \frac{4^k C(h/2) - C(h)}{4^k - 1}
Romberg Integration
Accelerated Trapezoid Rule
I_{j,k} = \frac{4^k I_{j+1,k-1} - I_{j,k-1}}{4^k - 1}, \qquad k = 1, 2, 3, \ldots
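A compact sketch of the full Romberg table, built from trapezoid estimates with 1, 2, 4, … panels and then extrapolated column by column (the level count and test integrand are illustrative):

```python
import math

def romberg(f, a, b, levels):
    """Romberg integration: trapezoid estimates with n = 2^j panels,
    accelerated by I[j,k] = (4^k I[j+1,k-1] - I[j,k-1]) / (4^k - 1)."""
    # first column: composite trapezoid with 2^j panels
    T = []
    for j in range(levels):
        n = 2 ** j
        h = (b - a) / n
        s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        T.append(h * s)
    # extrapolate: each pass shortens the column by one and gains two orders
    for k in range(1, levels):
        T = [(4**k * T[j + 1] - T[j]) / (4**k - 1) for j in range(len(T) - 1)]
    return T[0]

approx = romberg(math.sin, 0.0, math.pi, 5)
print(approx)   # the exact integral of sin over [0, pi] is 2
```

Five levels (at most 16 panels, 17 function evaluations) already reach roughly 8-digit accuracy here, far beyond what the raw trapezoid values provide.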
• Gaussian Quadratures
– select functional values at non-uniformly distributed points to achieve higher accuracy
– change of variables so that the interval of integration is [-1, 1]
– Gauss-Legendre formulae
Gaussian Quadrature on [-1, 1]
n = 2:
\int_{-1}^{1} f(x) \, dx = c_1 f(x_1) + c_2 f(x_2)
Exact integral for f = x^0, x^1, x^2, x^3
– Four equations for four unknowns:
f = 1: \int_{-1}^{1} 1 \, dx = 2 = c_1 + c_2
f = x: \int_{-1}^{1} x \, dx = 0 = c_1 x_1 + c_2 x_2
f = x^2: \int_{-1}^{1} x^2 \, dx = \frac{2}{3} = c_1 x_1^2 + c_2 x_2^2
f = x^3: \int_{-1}^{1} x^3 \, dx = 0 = c_1 x_1^3 + c_2 x_2^3
Solution: c_1 = c_2 = 1, \qquad x_1 = -\frac{1}{\sqrt{3}}, \qquad x_2 = \frac{1}{\sqrt{3}}
Gaussian Quadrature on [-1, 1]
n = 2:
I = \int_{-1}^{1} f(x) \, dx = f\!\left( -\frac{1}{\sqrt{3}} \right) + f\!\left( \frac{1}{\sqrt{3}} \right)
Gaussian Quadrature on [-1, 1]
n = 3:
I = \int_{-1}^{1} f(x) \, dx = \frac{5}{9} f\!\left( -\sqrt{\frac{3}{5}} \right) + \frac{8}{9} f(0) + \frac{5}{9} f\!\left( \sqrt{\frac{3}{5}} \right)
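Both formulas are short enough to sketch directly; the test integrands are illustrative (a cubic, which the two-point rule integrates exactly, and a quartic for the three-point rule):

```python
import math

def gauss2(f):
    """Two-point Gauss-Legendre quadrature on [-1, 1]; exact for cubics."""
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

def gauss3(f):
    """Three-point Gauss-Legendre quadrature on [-1, 1]; exact for quintics."""
    x = math.sqrt(3.0 / 5.0)
    return (5.0 * f(-x) + 8.0 * f(0.0) + 5.0 * f(x)) / 9.0

# integral of x^3 + 1 over [-1, 1] is exactly 2
print(gauss2(lambda t: t**3 + 1.0))
# integral of x^4 over [-1, 1] is exactly 2/5
print(gauss3(lambda t: t**4))
```

With n points the rule is exact for polynomials up to degree 2n - 1, which is the payoff for placing the points non-uniformly.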
Summary
• Open book and open notes.
• The exam will be 5-8 problems.
• For short-answer problems, use a table to differentiate between the techniques.
• Problems are not going to be excessive.
• Make a short summary of the material.
• Use your notes only when you have forgotten something; do not depend on them.