1. Eigenvalues and eigenvectors. If a non-zero vector $v$ satisfies
\[ Av = \lambda v, \]
then $v$ is called an eigenvector of $A$ and $\lambda$ is an eigenvalue of $A$.
Does a matrix always have eigenvalues and eigenvectors? To answer this question, note that a number $\lambda$ is an eigenvalue of $A$ if and only if
\[ (A - \lambda I)v = 0 \]
for some non-zero vector $v$. Hence it follows that $\det(A - \lambda I) = 0$. Note that if one considers $\lambda$ as a variable, then $\det(A - \lambda I)$ becomes a polynomial of degree $n$. Therefore we can conclude that there are at least 1 and at most $n$ distinct (complex) eigenvalues of an $n \times n$ matrix.
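The characteristic-polynomial characterization can be checked numerically; a minimal sketch (the $2 \times 2$ matrix is an illustrative choice, not from the notes):

```python
import numpy as np

# Illustrative 2x2 symmetric matrix; its eigenvalues are the roots of
# det(A - lambda*I) = lambda^2 - tr(A)*lambda + det(A).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = [1.0, -np.trace(A), np.linalg.det(A)]   # characteristic polynomial
roots = np.roots(coeffs)                         # its roots
eigvals = np.linalg.eigvals(A)                   # library eigenvalues

print(sorted(roots), sorted(eigvals))            # both approximately [1.0, 3.0]
```

As the degree-$n$ polynomial suggests, a larger matrix would give up to $n$ (possibly complex) roots.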
Note that the matrix $A$ applied to one of its eigenvectors acts as scalar multiplication. Thus eigenvectors are the "important" directions of a matrix, when considered as an operator. The largest eigenvalue also has the following property.
Theorem 2. Suppose that $A = P^{-1} D P$ is diagonalizable with $P$ orthogonal. Then
\[ \max_{\|v\| = 1} \|Av\| = \max |\lambda|, \]
where the second maximum is taken over all eigenvalues $\lambda$ of $A$.
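Theorem 2 admits a quick numerical sanity check; a sketch, where the random symmetric matrix is my illustrative assumption (symmetric matrices are orthogonally diagonalizable, so the theorem applies to them):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                              # random symmetric test matrix

op_norm = np.linalg.norm(A, 2)                 # max_{||v||=1} ||Av|| (spectral norm)
max_abs_eig = np.abs(np.linalg.eigvals(A)).max()

assert np.isclose(op_norm, max_abs_eig)        # the two maxima agree
```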
Lecture 2
2. Symmetric matrices. A matrix $A$ is symmetric if $A = A^T$. The eigenvalues and eigenvectors of symmetric matrices will be of particular interest to us, since we will encounter many such matrices throughout the course. The following proposition establishes some properties of eigenvalues and eigenvectors of symmetric matrices.
Proposition 4. Let $A$ be an $n \times n$ real symmetric matrix. Then every eigenvalue of $A$ is real, and there exists an orthogonal matrix $U$ whose columns are eigenvectors of $A$; in particular, $U^T A U$ is diagonal.

Proof. Suppose that $Av = \lambda v$ for a non-zero (possibly complex) vector $v$. On one hand,
\[ \bar{v}^T A v = \bar{v}^T (\lambda v) = \lambda \|v\|^2. \]
On the other hand, since $A$ is real and symmetric,
\[ \bar{v}^T A v = (A \bar{v})^T v = (\bar{\lambda} \bar{v})^T v = \bar{\lambda} \|v\|^2. \]
Hence $\lambda = \bar{\lambda}$, so $\lambda$ is real. For the second claim, take a unit eigenvector $u_1$ with eigenvalue $\lambda_1$, extend it to an orthonormal basis forming the columns of a matrix $U_1$, and note that
\[ U_1^T A U_1 = \begin{pmatrix} \lambda_1 & 0 \\ 0 & A_1 \end{pmatrix}, \]
where $A_1$ is a symmetric $(n-1) \times (n-1)$ matrix. The claim follows by induction on $n$. $\square$
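Proposition 4 can be spot-checked with numpy's eigensolver for symmetric matrices; a sketch, with the matrix and seed chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                          # real symmetric matrix

w, U = np.linalg.eigh(A)                   # eigensolver specialized to symmetric input

assert np.all(np.isreal(w))                          # eigenvalues are real
assert np.allclose(U.T @ U, np.eye(5))               # eigenvectors are orthonormal
assert np.allclose(U @ np.diag(w) @ U.T, A)          # A = U D U^T
```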
For a diagonalizable matrix $A$, we refer to $D = U^{-1} A U$ as the eigenvalue decomposition of $A$. We saw that symmetric matrices are diagonalizable. Can we do something similar for general $n \times n$ matrices? What about $m \times n$ matrices? Singular value decomposition is a generalization of the eigenvalue decomposition that achieves this goal. It is a tool which allows one to understand a matrix through a diagonal matrix.
Theorem 5 (Singular value decomposition). Every $m \times n$ real matrix $A$ can be expressed as
\[ A = U \Sigma V^T, \]
where $U$ is an $m \times m$ orthogonal matrix, $V$ is an $n \times n$ orthogonal matrix, and $\Sigma$ is an $m \times n$ diagonal matrix with non-negative diagonal entries $\sigma_1 \ge \sigma_2 \ge \cdots \ge 0$, called the singular values of $A$.
Before proving the theorem, let us first compare it to the diagonalization (eigenvalue decomposition) $A = U D U^T$ of a matrix.
Both decompositions provide a method of understanding a matrix through a diagonal matrix. The eigenvalue decomposition of a matrix is more powerful: it gives a single basis on which the matrix acts like scalar multiplication. On the other hand, the singular value decomposition gives a basis of $\mathbb{R}^n$ and a basis of $\mathbb{R}^m$, where the action of the matrix on the first basis can be understood through scalar multiplication applied to the second basis.
Proof. Suppose that $A$ has rank $r \le \min\{m, n\}$. Note that $A^T A$ also has rank $r$. Let $\sigma_1^2, \ldots, \sigma_r^2$ be the non-zero eigenvalues of $A^T A$, and let $v_i$ be orthonormal eigenvectors (where $v_i$ is the eigenvector corresponding to $\sigma_i^2$ for $1 \le i \le r$). Complete the collection $v_1, v_2, \ldots, v_r$ into an orthonormal basis of $\mathbb{R}^n$ by adding vectors $v_{r+1}, \ldots, v_n$.

For $1 \le i \le r$, let $u_i = \frac{1}{\sigma_i} A v_i$. Note that for distinct $i, j$ we have
\[ u_i^T u_j = \frac{1}{\sigma_i \sigma_j} v_i^T A^T A v_j = \frac{1}{\sigma_i \sigma_j} v_i^T (\sigma_j^2 v_j) = 0, \]
since $v_j$ and $v_i$ are orthogonal. Therefore, $u_1, \ldots, u_r$ is an orthonormal set of vectors. Complete this collection into an orthonormal basis by adding vectors $u_{r+1}, \ldots, u_m$, and let $\sigma_i = 0$ for all $i > r$. Let $U = [u_1\ u_2\ \cdots\ u_m]$ and $V = [v_1\ v_2\ \cdots\ v_n]$, and note that
\[ U^T A V = U^T [\sigma_1 u_1\ \ \sigma_2 u_2\ \ \cdots\ \ \sigma_r u_r\ \ 0\ \cdots\ 0] = [\sigma_1 e_1\ \ \sigma_2 e_2\ \ \cdots\ \ \sigma_r e_r\ \ 0\ \cdots\ 0] = \Sigma, \]
which gives $A = U \Sigma V^T$. $\square$
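The proof is constructive and translates directly into code; a sketch of that construction (the function name, tolerance, and QR-based basis completion are my own choices, not part of the notes):

```python
import numpy as np

def svd_via_gram(A, tol=1e-12):
    """SVD following the proof: eigen-decompose A^T A, set u_i = A v_i / sigma_i,
    and complete u_1..u_r to an orthonormal basis of R^m."""
    m, n = A.shape
    w, V = np.linalg.eigh(A.T @ A)              # eigenvalues of A^T A, ascending
    w, V = w[::-1], V[:, ::-1]                  # reorder so sigma_1 >= sigma_2 >= ...
    sigma = np.sqrt(np.clip(w, 0.0, None))
    r = int(np.sum(sigma > tol))                # rank of A
    U = np.zeros((m, m))
    U[:, :r] = (A @ V[:, :r]) / sigma[:r]       # u_i = A v_i / sigma_i
    if r < m:                                   # complete to an orthonormal basis via QR
        rng = np.random.default_rng(0)
        Q, R = np.linalg.qr(np.hstack([U[:, :r], rng.standard_normal((m, m - r))]))
        U = Q * np.sign(np.diag(R))             # undo QR's sign flips on the first r columns
    S = np.zeros((m, n))
    k = min(m, n)
    S[np.arange(k), np.arange(k)] = sigma[:k]
    return U, S, V

A = np.array([[3.0, 2.0], [2.0, 3.0], [2.0, -2.0]])
U, S, V = svd_via_gram(A)
assert np.allclose(U @ S @ V.T, A)              # reconstructs A
assert np.allclose(U.T @ U, np.eye(3))          # U is orthogonal
```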
Exercise 7. Prove that the singular values can also be obtained by considering the matrix $AA^T$.
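A numerical spot check of the exercise's claim (not a proof; the random matrix is for illustration): the non-zero eigenvalues of $AA^T$ and $A^T A$ coincide, so either product yields the singular values.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))

small = np.sort(np.linalg.eigvalsh(A @ A.T))        # all 3 eigenvalues of A A^T
large = np.sort(np.linalg.eigvalsh(A.T @ A))[-3:]   # top 3 of the 5 eigenvalues of A^T A

assert np.allclose(small, large)    # same non-zero spectrum, hence same singular values
```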
Recall the example given in the introduction where we consider a matrix whose rows are indexed by stocks and columns by dates. Is it more sensible to consider an eigenvalue decomposition or a singular value decomposition?
We will mostly use matrices with $m \le n$. In this case, we can further simplify the decomposition as follows.
Corollary 8. Every $m \times n$ matrix $A$ with $m \le n$ can be expressed as
\[ A_{m \times n} = U_{m \times m} \Sigma_{m \times m} V_{n \times m}^T \]
for an orthogonal $m \times m$ matrix $U_{m \times m}$, a diagonal $m \times m$ matrix $\Sigma_{m \times m}$, and an $n \times m$ matrix $V_{n \times m}$ with orthonormal columns.

Proof. Take the decomposition $A = U_{m \times m} \Sigma_{m \times n} V_{n \times n}^T$ given by Theorem 5. Since $m \le n$, only the first $m$ columns of $\Sigma_{m \times n}$ can contain non-zero entries. Hence if we remove the $(m+1)$-th column to the $n$-th column of $\Sigma_{m \times n}$, and remove the $(m+1)$-th column to the $n$-th column of $V_{n \times n}$, the outcome of the product $\Sigma_{m \times m} V_{n \times m}^T$ is unchanged, and the claim follows. $\square$
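Corollary 8 corresponds to what numpy calls the reduced ("economy") SVD; a sketch on an $m \le n$ example, with the shapes chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 7))                     # m = 3 <= n = 7

U, s, Vt = np.linalg.svd(A, full_matrices=False)    # reduced form, as in Corollary 8

assert U.shape == (3, 3) and s.shape == (3,) and Vt.shape == (3, 7)   # Vt = V_{n x m}^T
assert np.allclose(U @ np.diag(s) @ Vt, A)
```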
3. An example. Consider the matrix
\[ A = \begin{pmatrix} 3 & 2 \\ 2 & 3 \\ 2 & -2 \end{pmatrix}. \]
We see that
\[ AA^T = \begin{pmatrix} 3 & 2 \\ 2 & 3 \\ 2 & -2 \end{pmatrix} \begin{pmatrix} 3 & 2 & 2 \\ 2 & 3 & -2 \end{pmatrix} = \begin{pmatrix} 13 & 12 & 2 \\ 12 & 13 & -2 \\ 2 & -2 & 8 \end{pmatrix}. \]
The eigenvalues of this matrix are 0, 9, 25, and the singular values are 5, 3. We can also use $A^T A$, which is a $2 \times 2$ matrix, to obtain the singular values.
To obtain the singular value decomposition, we need to find the eigenvectors of $AA^T$. To find the eigenvector corresponding to the eigenvalue 25, note that
\[ AA^T - 25I = \begin{pmatrix} -12 & 12 & 2 \\ 12 & -12 & -2 \\ 2 & -2 & -17 \end{pmatrix}, \]
which row reduces to
\[ \begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}. \]
Hence
\[ u_1 = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{pmatrix}, \]
and the same computation for the eigenvalue 9 gives
\[ u_2 = \begin{pmatrix} 1/\sqrt{18} \\ -1/\sqrt{18} \\ 4/\sqrt{18} \end{pmatrix}. \]
Hence,
\[ U = [u_1\ u_2] \quad \text{and} \quad V = [v_1\ v_2], \]
where $v_1 = (1/\sqrt{2},\, 1/\sqrt{2})^T$ and $v_2 = (1/\sqrt{2},\, -1/\sqrt{2})^T$ are the unit eigenvectors of $A^T A$ corresponding to 25 and 9, and
\[ A = U \Sigma V^T = \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{18} \\ 1/\sqrt{2} & -1/\sqrt{18} \\ 0 & 4/\sqrt{18} \end{pmatrix} \begin{pmatrix} 5 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix}. \]
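The worked example can be verified numerically; a sketch, with the matrix $A$ and the displayed factors restated so the check is self-contained:

```python
import numpy as np

A = np.array([[3.0,  2.0],
              [2.0,  3.0],
              [2.0, -2.0]])

# Eigenvalues of A A^T and the singular values of A.
print(np.sort(np.linalg.eigvalsh(A @ A.T)))    # approximately [0, 9, 25]
print(np.linalg.svd(A, compute_uv=False))      # approximately [5, 3]

# Reassemble the decomposition A = U Sigma V^T.
U = np.array([[1/np.sqrt(2),  1/np.sqrt(18)],
              [1/np.sqrt(2), -1/np.sqrt(18)],
              [0.0,           4/np.sqrt(18)]])
S = np.diag([5.0, 3.0])
V = np.array([[1/np.sqrt(2),  1/np.sqrt(2)],
              [1/np.sqrt(2), -1/np.sqrt(2)]])

assert np.allclose(U @ S @ V.T, A)
```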
4. The Perron-Frobenius theorem.

Theorem 11 (Perron-Frobenius). Let $A$ be a symmetric matrix with positive entries, and let $\lambda_+$ be its largest eigenvalue. Then the following properties hold.
1. $\lambda_+ > |\lambda|$ for every other eigenvalue $\lambda$ of $A$.
2. There is an eigenvector corresponding to $\lambda_+$ all of whose entries are positive.
3. The eigenspace corresponding to $\lambda_+$ is one-dimensional.

Proof. Suppose, towards contradiction, that $\lambda \ne \lambda_+$ is an eigenvalue with $|\lambda| \ge \lambda_+$, and let $v$ be a corresponding unit eigenvector, so that by Theorem 2, $|\lambda|$ attains the maximum of the ratio
\[ \frac{\|Av\|}{\|v\|} \]
over all non-zero vectors. The vector $v$ has real entries. Without loss of generality assume that $v \cdot (1, 1, \ldots, 1) \ge 0$; note that $v$ must have a negative entry, since otherwise $Av = \lambda v$ would force $\lambda = \lambda_+$. Since $A$ has positive entries, whenever we switch the sign of a negative entry of $v$, we obtain a vector of the same norm as $v$ having a larger value of $\|Av\|$. This contradicts the maximality of $|\lambda|$. Therefore, we must have $\lambda_+ > |\lambda|$.
A similar argument shows that all entries of an eigenvector $w$ corresponding to $\lambda_+$ need to be non-negative. Moreover, since $A$ has positive entries, the only way $w$ can have a zero entry is if $w$ is the all-zero vector. Hence we have Property 2.
Finally, to see Property 3, suppose that there are two linearly independent unit eigenvectors $w_1$ and $w_2$ both corresponding to $\lambda_+$. Since $w_1 \ne w_2$ and $w_1, w_2$ are both unit vectors, we see that $w_1 - w_2$ must have both positive and negative entries. Moreover, $w_1 - w_2$ is also an eigenvector corresponding to $\lambda_+$. This is impossible by the argument above saying that all entries of an eigenvector corresponding to $\lambda_+$ must have the same sign. $\square$
The Perron-Frobenius theorem holds for a more general class of matrices (see, e.g., Wikipedia).
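The theorem's three properties can be spot-checked numerically; a sketch, using a random entrywise-positive matrix of my own choosing (per the remark above, positivity alone suffices, so the test matrix need not be symmetric):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((5, 5)) + 0.1            # matrix with strictly positive entries

w, V = np.linalg.eig(A)
top = int(np.argmax(np.abs(w)))

# Property 1: the dominant eigenvalue is real and strictly largest in modulus.
assert np.isreal(w[top])
assert all(abs(w[i]) < w[top].real for i in range(len(w)) if i != top)

# Property 2: its eigenvector has entries of a single sign, none of them zero.
v = np.real(V[:, top])
assert np.all(v > 1e-12) or np.all(v < -1e-12)
```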
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.