6.3
6.4
Suppose $A$ is an $n \times n$ matrix:
(i) If any row or column of $A$ has only zero entries, then $\det A = 0$.
(ii) If $A$ has two rows or two columns the same, then $\det A = 0$.
(iii) If $\tilde{A}$ is obtained from $A$ by the operation $(E_i) \leftrightarrow (E_j)$, with $i \neq j$, then $\det \tilde{A} = -\det A$.
(iv) If $\tilde{A}$ is obtained from $A$ by the operation $(\lambda E_i) \to (E_i)$, then $\det \tilde{A} = \lambda \det A$.
(v) If $\tilde{A}$ is obtained from $A$ by the operation $(E_i + \lambda E_j) \to (E_i)$ with $i \neq j$, then $\det \tilde{A} = \det A$.
(vi) If $B$ is also an $n \times n$ matrix, then $\det AB = \det A \det B$.
(vii) $\det A^T = \det A$.
(viii) When $A^{-1}$ exists, $\det A^{-1} = (\det A)^{-1}$.
(ix) If $A$ is an upper triangular, lower triangular, or diagonal matrix, then $\det A = \prod_{i=1}^{n} a_{ii}$.
6.5
Matrix Factorization
To find the LU decomposition of a matrix $A$, use Gaussian elimination to find the upper triangular matrix $U$. For $L$, the diagonal entries will all be 1. Fill in the rest of the lower entries with unknowns $l_1, l_2, \ldots$ and then multiply $L$ by $U$ to recover $A$. This gives a system of equations to solve for all entries of $L$.
To use the LU decomposition to solve an equation $Ax = b$, first solve $Ly = b$ for $y$, then solve $Ux = y$ for $x$.
Once $L$ and $U$ are known, solving a system uses $O(2n^2)$ operations rather than the $O(n^3/3)$ of Gaussian elimination.
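A minimal Python sketch of this procedure, assuming no row interchanges are needed; it records the elimination multipliers directly as the entries of $L$ (equivalent to solving the symbolic system described above). The matrix and right-hand side are illustrative.

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = LU,
    L unit lower triangular, U upper triangular."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # row-reduction step
    return L, U

def lu_solve(L, U, b):
    """Solve Ax = b via Ly = b (forward), then Ux = y (backward)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                        # forward substitution
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # backward substitution
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])
L, U = lu_decompose(A)
print(lu_solve(L, U, b))   # [1. 2.], agrees with np.linalg.solve(A, b)
```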
6.6
An $n \times n$ matrix $A$ is said to be diagonally dominant when
$|a_{ii}| \ge \sum_{j=1,\, j \neq i}^{n} |a_{ij}|$, for each $i = 1, 2, \ldots, n$.
A diagonally dominant matrix is said to be strictly diagonally dominant when the inequality above is strict for each $i$.
A strictly diagonally dominant matrix A is nonsingular. Moreover, in this case, Gaussian elimination
can be performed on any linear system of the form Ax = b to obtain its unique solution without row
or column interchanges, and the computations will be stable with respect to the growth of round-off
errors.
A matrix $A$ is positive definite if it is symmetric and if $x^T A x > 0$ for every $n$-dimensional vector $x \neq 0$.
If $A$ is an $n \times n$ positive definite matrix, then
(i) $A$ has an inverse;
(ii) $a_{ii} > 0$, for each $i = 1, 2, \ldots, n$;
(iii) $\max_{1 \le k, j \le n} |a_{kj}| \le \max_{1 \le i \le n} |a_{ii}|$;
(iv) $(a_{ij})^2 < a_{ii} a_{jj}$, for each $i \neq j$.
Cholesky: a positive definite matrix $A$ can be factored as $A = LL^T$, where $L$ is lower triangular.
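A minimal Python sketch of this factorization (the $2 \times 2$ test matrix is illustrative):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T (A symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)      # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]     # below-diagonal entry
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
print(cholesky(A))   # [[2.0, 0.0], [1.0, 1.414...]]
```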
5.1
Suppose $D = \{(t, y) \mid a \le t \le b,\ -\infty < y < \infty\}$ and that $f(t, y)$ is continuous and satisfies a Lipschitz condition in the variable $y$ on $D$. Then the initial-value problem
$y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha,$
is well-posed.
5.2
Euler's Method
Euler's method is the most elementary approximation technique for solving initial-value problems.
Euler's method is: for $w_0 = \alpha$,
$w_{i+1} = w_i + h f(t_i, w_i), \quad \text{for each } i = 0, 1, \ldots, N - 1,$
where $w_i \approx y(t_i)$.

Suppose $f$ is continuous and satisfies a Lipschitz condition with constant $L$ on $D = \{(t, y) \mid a \le t \le b,\ -\infty < y < \infty\}$, and that a constant $M$ exists with $|y''(t)| \le M$ for all $t \in [a, b]$, where $y(t)$ denotes the unique solution to
$y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha.$
Let $w_0, w_1, \ldots, w_N$ be the approximations generated by Euler's method for some positive integer $N$. Then, for each $i = 0, 1, 2, \ldots, N$,
$|y(t_i) - w_i| \le \frac{hM}{2L} \left[ e^{L(t_i - a)} - 1 \right].$
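A minimal Python sketch of Euler's method; the test problem $y' = y - t^2 + 1$, $y(0) = 0.5$ is a standard illustration, not one taken from these notes.

```python
def euler(f, a, b, alpha, N):
    """Approximate y' = f(t, y), y(a) = alpha on [a, b] with N steps."""
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for _ in range(N):
        w = w + h * f(t, w)   # w_{i+1} = w_i + h f(t_i, w_i)
        t = t + h
        ws.append(w)
    return ws

# y' = y - t^2 + 1, y(0) = 0.5; exact solution y(t) = (t + 1)^2 - 0.5 e^t
approx = euler(lambda t, y: y - t**2 + 1, 0.0, 2.0, 0.5, 10)
print(approx[-1])   # about 4.866, versus the exact value y(2) = 5.305...
```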
5.3
Taylor's method of order $n$: $w_0 = \alpha$, $w_{i+1} = w_i + h\, T^{(n)}(t_i, w_i)$ for each $i = 0, 1, \ldots, N - 1$, where
$T^{(n)}(t_i, w_i) = f(t_i, w_i) + \frac{h}{2} f'(t_i, w_i) + \cdots + \frac{h^{n-1}}{n!} f^{(n-1)}(t_i, w_i).$
If Taylor's method of order $n$ is used to approximate the solution to
$y'(t) = f(t, y(t)), \quad a \le t \le b, \quad y(a) = \alpha,$
with step size $h$ and if $y \in C^{n+1}[a, b]$, then the local truncation error is $O(h^n)$.
The error for Euler's method decreases only at a linear rate in the step size, while the higher-order Taylor methods have less error.
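As a quick illustration, here is a minimal sketch of Taylor's method of order 2 applied to the same illustrative problem $y' = y - t^2 + 1$, where the derivative $f'(t, y) = \frac{d}{dt} f(t, y(t)) = y - t^2 + 1 - 2t$ has been worked out by hand (this test problem is my own choice, not from the notes):

```python
def taylor2(f, fp, a, b, alpha, N):
    """Taylor method of order 2: w_{i+1} = w_i + h [f + (h/2) f'](t_i, w_i)."""
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        w = w + h * (f(t, w) + (h / 2) * fp(t, w))
        t = t + h
    return w

f  = lambda t, y: y - t**2 + 1
fp = lambda t, y: y - t**2 + 1 - 2*t   # total derivative of f along the solution
print(taylor2(f, fp, 0.0, 2.0, 0.5, 10))  # much closer to exact 5.305... than Euler
```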
5.4
Runge-Kutta Methods
Runge-Kutta methods have the high-order local truncation error of the Taylor methods but eliminate the need to compute and evaluate the derivatives of f (t, y).
Midpoint method: $w_0 = \alpha$,
$w_{i+1} = w_i + h\, f\!\left(t_i + \frac{h}{2},\ w_i + \frac{h}{2} f(t_i, w_i)\right).$
Modified Euler method: $w_0 = \alpha$,
$w_{i+1} = w_i + \frac{h}{2} \left[ f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i)) \right],$
for $i = 0, 1, \ldots, N - 1$.
Heun's method: $w_0 = \alpha$,
$w_{i+1} = w_i + \frac{h}{4} \left[ f(t_i, w_i) + 3 f\!\left(t_i + \tfrac{2h}{3},\ w_i + \tfrac{2h}{3} f\!\left(t_i + \tfrac{h}{3},\ w_i + \tfrac{h}{3} f(t_i, w_i)\right)\right) \right],$
for $i = 0, 1, \ldots, N - 1$.
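A minimal Python sketch of the Midpoint method above (same illustrative test problem as before):

```python
def midpoint(f, a, b, alpha, N):
    """Midpoint method: w_{i+1} = w_i + h f(t_i + h/2, w_i + (h/2) f(t_i, w_i))."""
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        w = w + h * f(t + h / 2, w + (h / 2) * f(t, w))
        t = t + h
    return w

print(midpoint(lambda t, y: y - t**2 + 1, 0.0, 2.0, 0.5, 10))
# O(h^2) accuracy without computing any derivatives of f
```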
5.6
Multistep Methods
A multistep method for solving the initial-value problem
$y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha,$
has a difference equation for finding the approximation $w_{i+1}$ at the mesh point $t_{i+1}$, represented by some big equation involving several of the previous approximations $w_i, w_{i-1}, \ldots$
Fourth-order Adams-Bashforth technique:
$w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad w_3 = \alpha_3,$
$w_{i+1} = w_i + \frac{h}{24} \left[ 55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3}) \right],$
for each $i = 3, 4, \ldots, N - 1$.
The multistep methods require you to calculate starting values with other methods, which is not ideal. However, the final result is much more accurate, especially with the predictor-corrector methods.
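A minimal Python sketch of the fourth-order Adams-Bashforth method. The starting values $w_1, w_2, w_3$ are generated here with a classical fourth-order Runge-Kutta step (the usual choice, though not derived in these notes), and the test problem is the same illustrative one as before:

```python
def rk4_step(f, t, w, h):
    """One classical fourth-order Runge-Kutta step (used only as a starter)."""
    k1 = f(t, w)
    k2 = f(t + h / 2, w + h / 2 * k1)
    k3 = f(t + h / 2, w + h / 2 * k2)
    k4 = f(t + h, w + h * k3)
    return w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adams_bashforth4(f, a, b, alpha, N):
    """w_{i+1} = w_i + (h/24)[55 f_i - 59 f_{i-1} + 37 f_{i-2} - 9 f_{i-3}]."""
    h = (b - a) / N
    t = [a + i * h for i in range(N + 1)]
    w = [alpha]
    for i in range(3):                       # starting values w1, w2, w3
        w.append(rk4_step(f, t[i], w[i], h))
    for i in range(3, N):
        w.append(w[i] + h / 24 * (55 * f(t[i], w[i]) - 59 * f(t[i-1], w[i-1])
                                  + 37 * f(t[i-2], w[i-2]) - 9 * f(t[i-3], w[i-3])))
    return w

w = adams_bashforth4(lambda t, y: y - t**2 + 1, 0.0, 2.0, 0.5, 10)
print(w[-1])   # close to the exact value y(2) = 5.305...
```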
5.9
7.1
The $l_2$ and $l_\infty$ vector norms are
$\|x\|_2 = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2} \quad \text{and} \quad \|x\|_\infty = \max_{1 \le i \le n} |x_i|.$
For each $x$ and $y$ in $\mathbb{R}^n$ (the Cauchy-Schwarz inequality),
$x^T y = \sum_{i=1}^{n} x_i y_i \le \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2} \left( \sum_{i=1}^{n} y_i^2 \right)^{1/2} = \|x\|_2\, \|y\|_2.$
A matrix norm on the set of all $n \times n$ matrices is a real-valued function, $\|\cdot\|$, defined on this set, satisfying for all $n \times n$ matrices $A$ and $B$ and all real numbers $\alpha$:
(i) $\|A\| \ge 0$;
(ii) $\|A\| = 0$, iff $A$ is $O$, the matrix with all 0 entries;
(iii) $\|\alpha A\| = |\alpha|\, \|A\|$;
(iv) $\|A + B\| \le \|A\| + \|B\|$;
(v) $\|AB\| \le \|A\|\, \|B\|$.
If $\|\cdot\|$ is a vector norm on $\mathbb{R}^n$, then
$\|A\| = \max_{\|x\| = 1} \|Ax\|$
is a matrix norm.
For any vector $z \neq 0$, matrix $A$, and any natural norm $\|\cdot\|$, we have
$\|Az\| \le \|A\|\, \|z\|.$
If $A = (a_{ij})$ is an $n \times n$ matrix, then
$\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|.$
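A minimal Python sketch of this row-sum computation (the matrix is illustrative):

```python
def inf_norm(A):
    """||A||_inf: maximum over rows of the sum of absolute entries in that row."""
    return max(sum(abs(a) for a in row) for row in A)

A = [[1.0, 2.0, -1.0],
     [0.0, 3.0,  4.0],
     [5.0, -1.0, 1.0]]
print(inf_norm(A))   # 7.0, attained by the second (and third) row
```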
7.2
7.3
The Jacobi iterative method is obtained by solving the $i$th equation in $Ax = b$ for $x_i$ to obtain (provided $a_{ii} \neq 0$)
$x_i = \sum_{j=1,\, j \neq i}^{n} \left( -\frac{a_{ij} x_j}{a_{ii}} \right) + \frac{b_i}{a_{ii}}, \quad \text{for } i = 1, 2, \ldots, n.$
For each $k \ge 1$, generate the components $x_i^{(k)}$ of $x^{(k)}$ from the components of $x^{(k-1)}$ by
$x_i^{(k)} = \frac{1}{a_{ii}} \left[ \sum_{j=1,\, j \neq i}^{n} \left( -a_{ij} x_j^{(k-1)} \right) + b_i \right], \quad \text{for } i = 1, 2, \ldots, n.$
The Gauss-Seidel method instead uses the most recently computed components:
$x_i^{(k)} = \frac{1}{a_{ii}} \left[ -\sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + b_i \right],$
for each $i = 1, 2, \ldots, n$.
One advantage of Gauss-Seidel over Jacobi is that we gain a more accurate approximation from using components of $x^{(k)}$ that have already been calculated.
If $A$ is strictly diagonally dominant, then for any choice of $x^{(0)}$, both the Jacobi and Gauss-Seidel methods give sequences $\{x^{(k)}\}_{k=0}^{\infty}$ that converge to the unique solution of $Ax = b$.
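A minimal Python sketch of the Jacobi iteration for a strictly diagonally dominant system (the matrix and right-hand side are illustrative):

```python
def jacobi(A, b, x0, iterations):
    """x_i^(k) = (b_i - sum_{j != i} a_ij x_j^(k-1)) / a_ii."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[10.0, -1.0,  2.0],
     [-1.0, 11.0, -1.0],
     [ 2.0, -1.0, 10.0]]
b = [6.0, 25.0, -11.0]
print(jacobi(A, b, [0.0, 0.0, 0.0], 25))  # converges: A is strictly diagonally dominant
```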
7.4
SOR (Successive Over-Relaxation) is used to accelerate the convergence of systems that are convergent by the Gauss-Seidel technique.
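The update itself is not recorded in these notes; for reference, the standard SOR iteration weights the Gauss-Seidel update with a relaxation parameter $\omega$ (with $\omega = 1$ recovering Gauss-Seidel):
$x_i^{(k)} = (1 - \omega)\, x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right].$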
7.5
7.6
The conjugate gradient method of Hestenes and Stiefel chooses the search directions $\{v^{(k)}\}$ during the iterative process so that the residual vectors $\{r^{(k)}\}$ are mutually orthogonal.
If the matrix $A$ is ill-conditioned, the conjugate gradient method is highly susceptible to rounding errors. So, although the answer should be obtained in $n$ steps, this is not usually the case. As a direct method, the conjugate gradient method is not as good as Gaussian elimination with pivoting. The main use of the conjugate gradient method is as an iterative method applied to a better-conditioned system. In this case an acceptable approximate solution is often obtained in about $\sqrt{n}$ steps.
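A minimal Python sketch of the (unpreconditioned) conjugate gradient iteration for a symmetric positive definite system; the matrix below is illustrative:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """CG for symmetric positive definite A; residuals r^(k) are mutually orthogonal."""
    max_iter = max_iter or len(b)
    x = x0.astype(float)
    r = b - A @ x            # initial residual
    v = r.copy()             # first search direction
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Av = A @ v
        t = (r @ r) / (v @ Av)           # step length along v
        x = x + t * v
        r_new = r - t * Av
        s = (r_new @ r_new) / (r @ r)    # direction-update coefficient
        v = r_new + s * v
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b, np.zeros(2)))   # ~ [0.0909, 0.6364]
```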
8.1
The general problem of approximating a set of data, $\{(x_i, y_i) : i = 1, 2, \ldots, m\}$, with an algebraic polynomial
$P_n(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0,$
uses $(n + 1)$ normal equations of the following format:
$a_0 \sum_{i=1}^{m} x_i^n + a_1 \sum_{i=1}^{m} x_i^{n+1} + a_2 \sum_{i=1}^{m} x_i^{n+2} + \cdots + a_n \sum_{i=1}^{m} x_i^{2n} = \sum_{i=1}^{m} y_i x_i^n,$
where m represents the number of data points and n represents the degree of the polynomial.
The error is
$E = \sum_{i=1}^{m} \left( y_i - P_n(x_i) \right)^2.$
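A minimal Python sketch that builds and solves these normal equations for a degree-$n$ fit; the data points (samples of $e^x$) are illustrative:

```python
import numpy as np

def least_squares_poly(xs, ys, n):
    """Fit a_0 + a_1 x + ... + a_n x^n by solving the (n+1) normal equations."""
    G = np.zeros((n + 1, n + 1))
    c = np.zeros(n + 1)
    for j in range(n + 1):
        for k in range(n + 1):
            G[j, k] = sum(x ** (j + k) for x in xs)     # sum_i x_i^{j+k}
        c[j] = sum(y * x ** j for x, y in zip(xs, ys))  # sum_i y_i x_i^j
    return np.linalg.solve(G, c)   # coefficients a_0, ..., a_n

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [1.0, 1.284, 1.6487, 2.117, 2.7183]   # samples of e^x
print(least_squares_poly(xs, ys, 2))        # roughly [1.005, 0.865, 0.843]
```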
8.2
For the continuous least squares approximation of a function $f$ on $[a, b]$ by a polynomial $\sum_{k=0}^{n} a_k x^k$, the normal equations are
$\sum_{k=0}^{n} a_k \int_a^b x^{j+k} \, dx = \int_a^b x^j f(x) \, dx, \quad \text{for each } j = 0, 1, \ldots, n,$
and the error is
$E = \int_a^b \left( f(x) - \sum_{k=0}^{n} a_k x^k \right)^2 dx.$

9.1
(Gershgorin Circle Theorem) Let $A$ be an $n \times n$ matrix and let $R_i$ denote the circle in the complex plane with center $a_{ii}$ and radius $\sum_{j=1,\, j \neq i}^{n} |a_{ij}|$; that is,
$R_i = \left\{ z \in \mathbb{C} : |z - a_{ii}| \le \sum_{j=1,\, j \neq i}^{n} |a_{ij}| \right\},$
where $\mathbb{C}$ denotes the complex plane. The eigenvalues of $A$ are contained within the union of these circles, $R = \cup_{i=1}^{n} R_i$. Moreover, the union of any $k$ of the circles that do not intersect the remaining $(n - k)$ contains precisely $k$ (counting multiplicities) of the eigenvalues.
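A minimal Python sketch that computes the Gershgorin centers and radii (the matrix is illustrative):

```python
def gershgorin_circles(A):
    """Return (center, radius) for each Gershgorin circle of A."""
    n = len(A)
    return [(A[i][i], sum(abs(A[i][j]) for j in range(n) if j != i))
            for i in range(n)]

A = [[ 4.0, 1.0, 1.0],
     [ 0.0, 2.0, 1.0],
     [-2.0, 0.0, 9.0]]
print(gershgorin_circles(A))   # [(4.0, 2.0), (2.0, 1.0), (9.0, 2.0)]
```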
If $A$ is a matrix and $\lambda_1, \ldots, \lambda_k$ are distinct eigenvalues of $A$ with associated eigenvectors $x^{(1)}, x^{(2)}, \ldots, x^{(k)}$, then $\{x^{(1)}, x^{(2)}, \ldots, x^{(k)}\}$ is a linearly independent set.
A set of vectors $\{v^{(1)}, v^{(2)}, \ldots, v^{(k)}\}$ is called orthogonal if $(v^{(i)})^T v^{(j)} = 0$, for all $i \neq j$. If, in addition, $(v^{(i)})^T v^{(i)} = 1$, for all $i = 1, 2, \ldots, k$, then the set is called orthonormal.
An orthogonal set of nonzero vectors is linearly independent.
Gram-Schmidt. Let $\{x_1, x_2, \ldots, x_k\}$ be a set of $k$ linearly independent vectors in $\mathbb{R}^n$. Then $\{v_1, v_2, \ldots, v_k\}$ defined by
$v_1 = x_1, \quad \ldots, \quad v_k = x_k - \sum_{i=1}^{k-1} \frac{v_i^T x_k}{v_i^T v_i} v_i$
is an orthogonal set of vectors in $\mathbb{R}^n$.
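A minimal Python sketch of this Gram-Schmidt process (the input vectors are illustrative):

```python
import numpy as np

def gram_schmidt(X):
    """Orthogonalize linearly independent rows of X: v_k = x_k - sum of projections."""
    V = []
    for x in X:
        v = x.astype(float)
        for u in V:
            v = v - (u @ x) / (u @ u) * u   # subtract projection onto earlier v_i
        V.append(v)
    return V

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
for v in gram_schmidt(X):
    print(v)   # pairwise dot products of the outputs are (numerically) zero
```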
9.2
A matrix $Q$ is said to be orthogonal if its columns $\{q_1, q_2, \ldots, q_n\}$ form an orthonormal set in $\mathbb{R}^n$.
Suppose that $Q$ is an orthogonal $n \times n$ matrix. Then
(i) $Q$ is invertible with $Q^{-1} = Q^T$;
(ii) for any $x$ and $y$ in $\mathbb{R}^n$, $(Qx)^T Qy = x^T y$;
(iii) for any $x$ in $\mathbb{R}^n$, $\|Qx\|_2 = \|x\|_2$.
Any invertible matrix $Q$ with $Q^{-1} = Q^T$ is orthogonal.
The $n \times n$ matrix $A$ is symmetric if and only if there exists a diagonal matrix $D$ and an orthogonal matrix $Q$ with $A = QDQ^T$.
Suppose that A is a symmetric n n matrix. There exist n eigenvectors of A that form an orthonormal
set, and the eigenvalues of A are real numbers.
A symmetric matrix A is positive definite if and only if all the eigenvalues of A are positive.
9.3
The Power Method is an iterative technique used to determine the dominant eigenvalue of a matrix, that is, the eigenvalue with the largest magnitude. One useful feature of the Power method is that it produces not only an eigenvalue, but also an associated eigenvector. In fact, the Power method is often applied to find an eigenvector for an eigenvalue that is determined by some other means.
The Inverse Power Method is a modification of the Power method that gives faster convergence.
It is used to determine the eigenvalue of A that is closest to a specified number q.
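A minimal Python sketch of the Power method with $l_\infty$ scaling (the matrix and starting vector are illustrative):

```python
import numpy as np

def power_method(A, x0, iterations=50, tol=1e-10):
    """Approximate the dominant eigenvalue of A and an associated eigenvector."""
    x = x0 / np.max(np.abs(x0))
    mu = 0.0
    for _ in range(iterations):
        y = A @ x
        mu_new = y[np.argmax(np.abs(y))]   # scaling factor -> dominant eigenvalue
        x_new = y / mu_new                 # rescale so the largest entry is 1
        if abs(mu_new - mu) < tol:
            return mu_new, x_new
        mu, x = mu_new, x_new
    return mu, x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
mu, x = power_method(A, np.array([1.0, 0.0]))
print(mu, x)   # dominant eigenvalue 3, eigenvector proportional to (1, 1)
```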
9.6