
Eigenvalues and eigenvectors (Part 1)

Definitions and examples

Definition Let A be a square matrix. An eigenvalue of A is a number r which,
when subtracted from each of the diagonal entries of A, converts A into a
singular matrix. Doing this is the same as subtracting r times the identity
matrix I from A. Therefore, r is an eigenvalue of A if and only if A - rI is a
singular matrix.

Example: Consider the matrix [2 0; 0 3]. Subtracting 2 from each of the
diagonal entries yields the singular matrix [0 0; 0 1]. Subtracting 3 from
each of the diagonal entries yields the singular matrix [-1 0; 0 0].
Therefore, 2 and 3 are eigenvalues of [2 0; 0 3].
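The definition can be checked numerically: a candidate r is an eigenvalue exactly when A - rI has determinant zero. A minimal sketch in plain Python (the matrix is the one from the example above):

```python
# Verify that subtracting an eigenvalue r from the diagonal of
# A = [[2, 0], [0, 3]] produces a singular matrix (determinant 0).

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def shift_diagonal(m, r):
    """Return A - r*I for a 2x2 matrix A."""
    return [[m[0][0] - r, m[0][1]],
            [m[1][0], m[1][1] - r]]

A = [[2, 0], [0, 3]]
print(det2(shift_diagonal(A, 2)))  # 0 -> singular, so 2 is an eigenvalue
print(det2(shift_diagonal(A, 3)))  # 0 -> singular, so 3 is an eigenvalue
print(det2(shift_diagonal(A, 5)))  # 6 -> nonsingular, so 5 is not
```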

Example: Consider the matrix [1 1; 1 1]. Calculating the determinant gives
1·1 - 1·1 = 0, so this is a singular matrix; recalling Theorem 23.2, 0 is one
of the eigenvalues of the matrix. Subtracting 2 from each diagonal entry
yields [-1 1; 1 -1], which is also singular, so 2 is the other eigenvalue.

A matrix M whose entries are nonnegative and whose columns (or rows) each
sum to 1 is called a Markov matrix. r = 1 is an eigenvalue of every Markov
matrix.
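A quick numeric check of this claim, using an arbitrary illustrative 2x2 Markov matrix (the particular entries are not from the text):

```python
# For a Markov matrix M (nonnegative entries, columns summing to 1),
# r = 1 is an eigenvalue, i.e. det(M - 1*I) = 0.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

M = [[0.7, 0.4],
     [0.3, 0.6]]                 # each column sums to 1

shifted = [[M[0][0] - 1, M[0][1]],
           [M[1][0], M[1][1] - 1]]
print(det2(shifted))             # 0 (up to floating-point rounding)
```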

For most matrices, one can't just look at the matrix to see what number to
subtract from its diagonal entries to make the matrix singular. The principal
technique for determining whether or not a given matrix is singular is the
determinant. Therefore, r is an eigenvalue of A if and only if: det(A-rI)=0.
For an n×n matrix A, the left-hand side of this equation is an nth-order
polynomial in the variable r, called the characteristic polynomial of A.

For example, for a general 2×2 matrix [a b; c d], the characteristic
polynomial is det(A-rI) = (a-r)(d-r) - bc = r² - (a+d)r + (ad-bc),

a second-order polynomial.
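Since the 2×2 characteristic polynomial is a quadratic, its roots (the eigenvalues) can be found with the quadratic formula. A minimal sketch, restricted to the real-eigenvalue case:

```python
# Eigenvalues of a 2x2 matrix [[a, b], [c, d]] as the roots of the
# characteristic polynomial r^2 - (a + d) r + (ad - bc).

import math

def eigenvalues_2x2(a, b, c, d):
    trace = a + d
    det = a * d - b * c
    disc = trace * trace - 4 * det   # discriminant of the quadratic
    if disc < 0:
        raise ValueError("complex eigenvalues; not handled in this sketch")
    root = math.sqrt(disc)
    return (trace - root) / 2, (trace + root) / 2

print(eigenvalues_2x2(2, 0, 0, 3))  # (2.0, 3.0), as in the first example
print(eigenvalues_2x2(1, 1, 1, 1))  # (0.0, 2.0), as in the second example
```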

The fact that the square matrix A-rI is singular when r is an eigenvalue
of A means that the system of equations (A-rI)v=0 has a solution other than
v=0. When r is an eigenvalue of A, a nonzero vector v such that (A-rI)v=0 is
called an eigenvector of A corresponding to the eigenvalue r. We demand that v
be nonzero because the zero vector, v=0, is a solution of (A-rI)v=0 for any r.

NOTES: In general, one chooses the simplest of the nonzero candidates. The
set of all solutions of the linear system (A-rI)v=0 (including v=0) is called
the eigenspace of A with respect to the eigenvalue r.
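In the 2×2 case, finding a nonzero solution of (A-rI)v=0 is easy by hand: if the first row of A-rI is (p, q) ≠ (0, 0), then v = (q, -p) kills that row, and singularity of A-rI guarantees the second row is proportional to the first. A minimal sketch (assumes r really is an eigenvalue):

```python
# Find an eigenvector of a 2x2 matrix A for a known eigenvalue r by
# solving (A - rI) v = 0.

def eigenvector_2x2(A, r):
    p, q = A[0][0] - r, A[0][1]      # first row of A - rI
    s, t = A[1][0], A[1][1] - r      # second row of A - rI
    if (p, q) != (0, 0):
        return (q, -p)               # kills the first row; second row is
                                     # proportional since A - rI is singular
    if (s, t) != (0, 0):
        return (t, -s)
    return (1, 0)                    # A - rI = 0: every nonzero v works

A = [[1, 1], [1, 1]]                 # the second example above
print(eigenvector_2x2(A, 2))         # (1, 1)
print(eigenvector_2x2(A, 0))         # (1, -1)
```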

Solving Linear Difference Equations

One dimensional equations

In this section we will see how eigenvalues and eigenvectors are used to solve
k-dimensional dynamical problems modeled with linear difference equations.
Linear difference equations in one dimension are very easy to solve. A typical
such equation is y_{n+1} = a·y_n for some constant a.

This difference equation states that the amount of y in any period is
proportional to the amount that existed in the previous period, with
proportionality constant a. A solution of this difference equation is an
expression for y_n in terms of the initial amount y_0, a, and n; iterating
the equation gives y_n = aⁿ·y_0.
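A small check that iterating the recurrence reproduces the closed-form solution (the values of a and y_0 are illustrative, not from the text):

```python
# Iterate y_{n+1} = a * y_n from y_0 and compare with y_n = a**n * y_0.

def iterate(a, y0, n):
    y = y0
    for _ in range(n):
        y = a * y
    return y

a, y0 = 2, 5                    # hypothetical constant and initial amount
print(iterate(a, y0, 10))       # 5120
print(a ** 10 * y0)             # 5120, matching the closed form
```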

Two-Dimensional Systems: An example

Leslie Population Model

Let's work out a concrete example of using a change of coordinates to solve a
particular linear system of difference equations.

Consider an organism that lives for two years.

Let:

b1 = birth rate of individuals in their first year

b2 = birth rate of individuals in their second year

d1 = death rate of first-year individuals

x_n = number of first-year individuals

y_n = number of second-year individuals

The dynamics over time of this population are described by the system of
difference equations:

x_{n+1} = b1·x_n + b2·y_n
y_{n+1} = (1 - d1)·x_n

In matrix form, (x_{n+1}; y_{n+1}) = A(x_n; y_n) with A = [b1 b2; 1-d1 0].

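The system can be simulated directly by iterating the two update rules x_{n+1} = b1·x_n + b2·y_n and y_{n+1} = (1 - d1)·x_n. The rates and initial population below are hypothetical (chosen as exact binary fractions so the arithmetic is exact), not values from the text:

```python
# Simulate the two-age-class Leslie system for 5 generations.

b1, b2, d1 = 0.5, 1.5, 0.25   # hypothetical birth and death rates
x, y = 64.0, 0.0              # hypothetical initial population

for n in range(5):
    # simultaneous update: the new y uses the old x
    x, y = b1 * x + b2 * y, (1 - d1) * x
    print(n + 1, x, y)
```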
Abstract Two-dimensional Systems

Let's redo this whole process for an abstract system of difference equations
z_{n+1} = A·z_n.

Write P and P^{-1} for the change-of-coordinate matrices: z = P·Z and
Z = P^{-1}·z, where the original variables are written as z and the
transformed variables as Z. Now write out the original difference equation in
the new variables Z:

Z_{n+1} = P^{-1}·z_{n+1} = P^{-1}·A·z_n = (P^{-1}AP)·Z_n.

Our goal now is to choose the transformation P so that the transformed system
is as simple as possible. If P^{-1}AP were a diagonal matrix D, the
transformed system Z_{n+1} = D·Z_n would decouple into independent
one-dimensional equations and would be easily solved.

Let's compute, in the two-dimensional case, what kind of matrix P will lead to
a diagonal P^{-1}AP. Let v1 and v2 be the two columns of the 2×2 matrix P;
that is, write P as [v1 v2] and write the diagonal matrix D as
D = [r1 0; 0 r2]. Now, the equation P^{-1}AP = D is equivalent to the equation
AP = PD for an invertible matrix P. Write this equation column by column:

A[v1 v2] = [v1 v2][r1 0; 0 r2], that is, A·v1 = r1·v1 and A·v2 = r2·v2.

This shows that if P is a nonsingular matrix such that P^{-1}AP is a diagonal
matrix, then r1 and r2 are eigenvalues of A and the columns of P are the
corresponding eigenvectors.
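This can be checked concretely. For the hypothetical matrix A = [2 1; 0 3] (not from the text), the eigenvalues are 2 and 3 with eigenvectors (1, 0) and (1, 1); stacking those eigenvectors as the columns of P makes P^{-1}AP diagonal:

```python
# Check that P built from eigenvectors diagonalizes A.

def matmul2(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of a 2x2 matrix (assumed nonsingular)."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

A = [[2, 1], [0, 3]]            # eigenvalues 2 and 3
P = [[1, 1], [0, 1]]            # columns: eigenvectors (1,0) and (1,1)

D = matmul2(inv2(P), matmul2(A, P))
print(D)                        # [[2.0, 0.0], [0.0, 3.0]]
```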

K-Dimensional Systems
