
EIGENVECTORS, EIGENVALUES, AND DIAGONALIZATION (SOLUTIONS)

1. Review
[omitted]

2. Practice

(1) Diagonalize \begin{pmatrix} 2 & 3 \\ 4 & 1 \end{pmatrix}, if possible. If not possible, explain why not.

Answer:

A = \begin{pmatrix} 1 & 3 \\ 1 & -4 \end{pmatrix} \begin{pmatrix} 5 & 0 \\ 0 & -2 \end{pmatrix} \begin{pmatrix} 4/7 & 3/7 \\ 1/7 & -1/7 \end{pmatrix}
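For readers who want to double-check with a computer, here is a minimal numerical sketch (assuming numpy is available; not part of the original solution):

    import numpy as np

    A = np.array([[2., 3.], [4., 1.]])
    P = np.array([[1., 3.], [1., -4.]])   # columns: a 5-eigenvector and a (-2)-eigenvector
    D = np.diag([5., -2.])

    print(np.linalg.eigvals(A))                        # approximately [5., -2.]
    print(np.allclose(P @ D @ np.linalg.inv(P), A))    # True: A = P D P^{-1}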

(2) Diagonalize \begin{pmatrix} 4 & 0 & 2 \\ 2 & 4 & 2 \\ 0 & 0 & 4 \end{pmatrix}, if possible. If not possible, explain why not.


This matrix is not diagonalizable. It has the eigenvalue λ = 4, which occurs with multiplicity three.
But the 4-eigenspace is only one-dimensional, so we cannot find a linearly independent set of more
than one eigenvector, let alone the three that diagonalization would require.
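A sketch of the same check by machine (assuming numpy; not part of the original solution): the rank of A - 4I is 2, so the 4-eigenspace has dimension 3 - 2 = 1.

    import numpy as np

    A = np.array([[4., 0., 2.],
                  [2., 4., 2.],
                  [0., 0., 4.]])

    print(np.linalg.eigvals(A))                          # [4., 4., 4.]
    print(3 - np.linalg.matrix_rank(A - 4 * np.eye(3)))  # dimension of the 4-eigenspace: 1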
(3) Find examples of each of the following:
(a) A 2 × 2 matrix with no real eigenvalues.

\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}

(b) A 2 × 2 matrix with exactly one real eigenvalue, whose eigenspace is two-dimensional.

\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}

(c) A 2 × 2 matrix with exactly one real eigenvalue, whose eigenspace is one-dimensional.

\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
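One way to verify all three examples (a numpy sketch, not part of the original solutions): look at the eigenvalues, and for the repeated ones at the dimension of the eigenspace.

    import numpy as np

    a = np.array([[0., -1.], [1., 0.]])   # (a) rotation by 90 degrees: no real eigenvalues
    b = np.zeros((2, 2))                  # (b) zero matrix: eigenvalue 0, eigenspace is all of R^2
    c = np.array([[1., 1.], [0., 1.]])    # (c) shear: eigenvalue 1, eigenspace one-dimensional

    print(np.linalg.eigvals(a))                          # [0.+1.j, 0.-1.j]
    print(2 - np.linalg.matrix_rank(b - 0 * np.eye(2)))  # 2 = dim of the 0-eigenspace
    print(2 - np.linalg.matrix_rank(c - 1 * np.eye(2)))  # 1 = dim of the 1-eigenspace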

(4) Is there a basis for P2 with respect to which the differential operator d/dx : P2 → P2 is diagonal,
i.e., is represented by a diagonal matrix? Explain.
Answer:
No. The standard matrix for d/dx (with respect to the basis {1, x, x^2}) is

\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}

Its only eigenvalue is zero, and the 0-eigenspace is the same as the nullspace of the above matrix.
[NB - the 0-eigenspace is always the same as the nullspace.] This nullspace is 1-dimensional, since
there are two pivots in the matrix. Therefore we cannot find three independent eigenvectors, so the
matrix is not diagonalizable.
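The same pivot count can be done by machine (a sketch assuming numpy; not part of the original solution):

    import numpy as np

    # Standard matrix of d/dx on P2 with respect to the basis {1, x, x^2}
    Ddx = np.array([[0., 1., 0.],
                    [0., 0., 2.],
                    [0., 0., 0.]])

    print(np.linalg.eigvals(Ddx))          # [0., 0., 0.]
    print(3 - np.linalg.matrix_rank(Ddx))  # dimension of the 0-eigenspace (the nullspace): 1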


3. True or False
(1) An n × n matrix always has n distinct eigenvectors. TRUE
A square matrix always has at least one nonzero eigenvector. [Note that we have to allow complex
eigenvectors (and eigenvalues) for this to be true; but we do allow these.] Any nonzero scalar multiple
of this vector is also an eigenvector. Therefore there are actually infinitely many distinct eigenvectors,
hence there are certainly n distinct eigenvectors.
(2) An n × n matrix always has n independent eigenvectors. FALSE
Only diagonalizable n × n matrices have n linearly independent eigenvectors.
(3) If v1 and v2 form a basis for the nullspace of A - 17I, then for any x such that Ax = 17x, we have
x = c1 v1 + c2 v2 for some scalars c1 and c2. TRUE
The nullspace of A - 17I is the same as the 17-eigenspace of A. If Ax = 17x, then x is in the
17-eigenspace of A, hence can be written as a linear combination of v1 and v2.
(4) If P is invertible and AP = P D, where D is diagonal, then the columns of P are all eigenvectors of
A. TRUE
Let P = [v1 . . . vn ], and let the diagonal entries of D be d1 , . . . , dn . Then AP = [Av1 . . . Avn ], while
P D = [d1 v1 . . . dn vn ]. So AP and P D are equal if and only if their columns are the same, which is
to say Av1 = d1 v1 , etc... In other words, they are equal if and only if each vi is an eigenvector of
A (with eigenvalue di ).
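A concrete instance of the column-by-column argument (a numpy sketch, reusing the P and D found in Practice problem (1); not part of the original solution):

    import numpy as np

    A = np.array([[2., 3.], [4., 1.]])
    P = np.array([[1., 3.], [1., -4.]])
    D = np.diag([5., -2.])

    # Column i of AP is A @ P[:, i]; column i of PD is D[i, i] * P[:, i].
    for i in range(2):
        print(np.allclose(A @ P[:, i], D[i, i] * P[:, i]))   # True, True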
(5) An n n matrix is diagonalizable if and only if the sum of the dimensions of its eigenspaces is n.
TRUE The matrix is diagonalizable if and only if there exist n linearly independent eigenvectors.
A maximal independent set of eigenvectors can be found by taking a basis for each eigenspace and
aggregating them into one set. This set will consist of n vectors if and only if the dimensions of the
eigenspaces add up to n.
(6) Two similar matrices have the same eigenvalues. [Remember that M and N are similar if M =
P N P^{-1} for some invertible matrix P.] TRUE
If M and N are similar, then M = P N P^{-1} for some invertible matrix P. Now suppose λ is an
eigenvalue of M, say M v = λv. Then P N P^{-1} v = λv implies N (P^{-1} v) = λ(P^{-1} v), showing that
P^{-1} v is an eigenvector of N with eigenvalue λ. This shows that every eigenvalue of M is also an
eigenvalue of N. The other direction is similar.
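A quick numerical illustration (a sketch assuming numpy; the particular matrices here are just examples, not from the original text):

    import numpy as np

    N = np.array([[2., 3.], [4., 1.]])
    P = np.array([[1., 2.], [0., 1.]])      # any invertible matrix
    M = P @ N @ np.linalg.inv(P)

    print(np.sort(np.linalg.eigvals(N)))    # approximately [-2., 5.]
    print(np.sort(np.linalg.eigvals(M)))    # approximately [-2., 5.]: same eigenvalues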
(7) If a matrix has the three eigenvalues 1, 2 and 3, and if v1 is a 1-eigenvector, v2 a 2-eigenvector, and
v3 a 3-eigenvector for this matrix, then {v1 , v2 , v3 } is linearly independent. TRUE
Eigenvectors corresponding to distinct eigenvalues form a linearly independent set (proof omitted).
(8) The eigenvalues of a matrix cannot tell you whether the matrix is invertible or not. FALSE
A matrix is invertible if and only if it does not have 0 as an eigenvalue. Reason: the 0-eigenspace is
the nullspace, and a square matrix is invertible exactly when its nullspace is trivial.

0 1
(9) The matrix
has two distinct eigenvalues. TRUE
1 0
The eigenvalues are complex numbers: = i
(10) If A = P DP^{-1}, and the columns of an n × n matrix P form the basis B for Rn, then D is the matrix
for the linear transformation T (x) = Ax with respect to the basis B. TRUE
P is invertible, so we can think of it as a change of basis matrix, from B to the standard basis. Then
the equation A = P DP^{-1} expresses the following change of basis diagram:

    Rn (std basis) ---A---> Rn (std basis)
         |                       ^
      P^{-1}                     | P
         v                       |
    Rn (basis B)  ---D---> Rn (basis B)

That is, converting to B-coordinates, applying D, and converting back to standard coordinates is the
same as applying A.
(11) If A = M DM^{-1}, and also A = N D'N^{-1}, with D and D' both diagonal, then D and D' have the
same entries, but possibly in different orders. TRUE
According to the diagonalization procedure, the diagonal entries of both D and D' must be eigenvalues
of A. However, they might appear in different orders if the bases of eigenvectors used to construct
M and N are ordered differently.
(12) If the 2 × 2 matrix A represents a reflection across the line L (through the origin) in R2, then the
line perpendicular to L is an eigenspace of A. TRUE Points on the line perpendicular to L stay on
that line when reflected across L. In fact, if v is on this perpendicular line, then its image under the
reflection is -v. So this line is the (-1)-eigenspace of A.
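For instance (a numpy sketch, not part of the original solution), take L to be the x-axis:

    import numpy as np

    R = np.array([[1., 0.], [0., -1.]])   # reflection across the x-axis (the line L)
    v = np.array([0., 1.])                # a vector on the line perpendicular to L

    print(np.allclose(R @ v, -v))         # True: v is a (-1)-eigenvector
    print(np.linalg.eigvals(R))           # [ 1., -1.]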
4. Towards the Jordan Normal Form (optional)
Not every matrix can be diagonalized, but we can always get pretty close to a diagonal matrix by
choosing the right basis. Here are a few ideas related to this claim.


2 1
(1) The matrix
is a 2 2 Jordan block. Find its eigenvalue, and determine the corresponding
0 2
eigenspace.
(2) Hopefully you found that the eigenspace is only one-dimensional. Therefore J2 is not diagonalizable,
since there is no basis for R2 consisting of eigenvectors for J2. However, we can make a basis of
generalized eigenvectors. The usual eigenvectors v satisfy (A - λI)v = 0. A generalized eigenvector
is a vector w such that (A - λI)^k w = 0 for some positive integer k. In the case of J2, try to find a
generalized eigenvector w such that (A - 2I)^2 w = 0.
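One way to carry out this search by machine (a sympy sketch with exact arithmetic; not part of the original exercise):

    import sympy as sp

    J2 = sp.Matrix([[2, 1], [0, 2]])
    N = J2 - 2 * sp.eye(2)                # N = A - 2I

    print(N.nullspace())                  # [Matrix([[1], [0]])]: the ordinary 2-eigenvectors
    print((N**2).nullspace())             # all of R^2, so e.g. w = (0, 1) works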
(3) Now consider the 3 × 3 Jordan block J3 = \begin{pmatrix} 4 & 1 & 0 \\ 0 & 4 & 1 \\ 0 & 0 & 4 \end{pmatrix}. It has only one eigenvalue, 4, and the 4-eigenspace
is one-dimensional. Find a basis {v1} for this eigenspace, and extend it to a basis {v1, v2, v3}
consisting of generalized eigenvectors for J3. That is to say, (A - 4I)^k v_k = 0 for k = 1, 2, and 3.
Equivalently, we have (A - 4I)v1 = 0, (A - 4I)v2 = v1, and (A - 4I)v3 = v2.
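A sketch of one possible chain (sympy, exact arithmetic; the specific vectors below are one choice, not prescribed by the original text): the standard basis v1 = e1, v2 = e2, v3 = e3 already works.

    import sympy as sp

    J3 = sp.Matrix([[4, 1, 0], [0, 4, 1], [0, 0, 4]])
    N = J3 - 4 * sp.eye(3)                # N = A - 4I

    v1 = sp.Matrix([1, 0, 0])             # spans the 4-eigenspace
    v2 = sp.Matrix([0, 1, 0])
    v3 = sp.Matrix([0, 0, 1])

    print(N * v1 == sp.zeros(3, 1))       # True
    print(N * v2 == v1, N * v3 == v2)     # True True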
(4) Why do the generalized eigenvectors produced in this way always form a basis? Prove that if
v1, . . . , vn ∈ Rn, and Av1 = 0, Av2 = v1, Av3 = v2, etc., with v1 ≠ 0, then {v1, . . . , vn} is an
independent set.
(5) You have shown how to extend an eigenbasis for a Jordan block matrix as above to a basis for
Rn. Now prove that if A is any matrix (say 3 × 3 for simplicity) with only one eigenvalue λ, with
the λ-eigenspace one-dimensional, then there is an invertible matrix P such that A = P JP^{-1}, where here

J = \begin{pmatrix} λ & 1 & 0 \\ 0 & λ & 1 \\ 0 & 0 & λ \end{pmatrix}.

[Hint: you would know what the columns of P were if A had 3 independent
eigenvectors - what should they be in this more general case?] This is not a diagonalization of A, but
it's pretty close.
