Contents
1. Motivation
1.1. Diagonal matrices
1.2. Example: solving linear differential equations
2. Eigenvalues and eigenvectors: definition and calculation
2.1. Definitions
2.2. The characteristic equation
2.3. Geometric interpretation of eigenvalues and eigenvectors
2.4. Digression: the fundamental theorem of algebra
2.5. Diagonal matrices
2.6. Similar matrices have the same eigenvalues
2.7. Projections
2.8. Trace, determinant and eigenvalues
2.9. The eigenvalue zero
2.10. Eigenvectors corresponding to different eigenvalues are independent
2.11. Diagonalization of matrices with linearly independent eigenvectors
2.12. Eigenspaces
2.13. Real matrices with complex eigenvalues; decomplexification
2.14. Jordan normal form
2.15. Block matrices
3. Solutions of linear differential equations with constant coefficients
3.1. The case when M is diagonalizable
3.2. Non-diagonalizable matrix
3.3. Fundamental facts on linear differential systems
3.4. Eigenvalues and eigenvectors of the exponential of a matrix
3.5. Higher order linear differential equations; companion matrix
3.6. Stability in differential equations
4. Difference equations (Discrete dynamical systems)
4.1. Linear difference equations with constant coefficients
4.2. Solutions of linear difference equations
4.3. Stability
4.4. Example: Fibonacci numbers
4.5. Positive matrices
4.6. Markov chains
5. More functional calculus
RODICA D. COSTIN
5.1.
5.2.
5.3.
1. Motivation
1.1. Diagonal matrices. Perhaps the simplest type of linear transformation is one whose matrix is diagonal (in some basis). Consider for example the matrices

(1) $M=\begin{bmatrix} a_1 & 0\\ 0 & a_2\end{bmatrix},\qquad N=\begin{bmatrix} b_1 & 0\\ 0 & b_2\end{bmatrix}$

It can be easily checked that
$$M+N=\begin{bmatrix} a_1+b_1 & 0\\ 0 & a_2+b_2\end{bmatrix},\qquad MN=\begin{bmatrix} a_1b_1 & 0\\ 0 & a_2b_2\end{bmatrix}$$
and
$$M^{-1}=\begin{bmatrix} 1/a_1 & 0\\ 0 & 1/a_2\end{bmatrix},\qquad M^{k}=\begin{bmatrix} a_1^k & 0\\ 0 & a_2^k\end{bmatrix}$$
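These entrywise rules are easy to confirm numerically; a minimal NumPy check (the diagonal entries below are sample values chosen for illustration):

```python
import numpy as np

# Diagonal matrices as in (1), with sample diagonal entries
M = np.diag([2.0, 5.0])
N = np.diag([3.0, 7.0])

# Sum, product, inverse and powers all act entry-by-entry on the diagonal
assert np.allclose(M + N, np.diag([5.0, 12.0]))
assert np.allclose(M @ N, np.diag([6.0, 35.0]))
assert np.allclose(np.linalg.inv(M), np.diag([0.5, 0.2]))
assert np.allclose(np.linalg.matrix_power(M, 3), np.diag([8.0, 125.0]))
```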
$\lambda v = Mv$, or

(4) $(M-\lambda I)v=0$

This homogeneous system has nontrivial solutions $v\neq 0$ if and only if
$$\det(M-\lambda I)=0$$
$$\det(M-\lambda I)=\begin{vmatrix} M_{11}-\lambda & M_{12} & \dots & M_{1n}\\ M_{21} & M_{22}-\lambda & \dots & M_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ M_{n1} & M_{n2} & \dots & M_{nn}-\lambda\end{vmatrix}$$
For a $2\times 2$ matrix,
$$\begin{vmatrix} M_{11}-\lambda & M_{12}\\ M_{21} & M_{22}-\lambda\end{vmatrix}=(M_{11}-\lambda)(M_{22}-\lambda)-M_{12}M_{21}$$
For a $3\times 3$ matrix, expanding along the first row,
$$\begin{vmatrix} M_{11}-\lambda & M_{12} & M_{13}\\ M_{21} & M_{22}-\lambda & M_{23}\\ M_{31} & M_{32} & M_{33}-\lambda\end{vmatrix}=(M_{11}-\lambda)\begin{vmatrix} M_{22}-\lambda & M_{23}\\ M_{32} & M_{33}-\lambda\end{vmatrix}$$
$$+(-1)^{1+2}M_{12}\begin{vmatrix} M_{21} & M_{23}\\ M_{31} & M_{33}-\lambda\end{vmatrix}+(-1)^{1+3}M_{13}\begin{vmatrix} M_{21} & M_{22}-\lambda\\ M_{31} & M_{32}\end{vmatrix}+\dots
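Numerically, the roots of the characteristic polynomial are exactly the eigenvalues; a quick NumPy check on a sample $2\times2$ matrix:

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# For 2x2 matrices, det(M - lambda I) = lambda^2 - Tr(M) lambda + det(M)
coeffs = [1.0, -np.trace(M), np.linalg.det(M)]
roots = np.sort(np.roots(coeffs))

eigs = np.sort(np.linalg.eigvals(M))
assert np.allclose(roots, eigs)   # both are [2, 5]
```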
If $b^2-4ac\geq 0$ then $p(x)$ factors as $a(x-x_1)(x-x_2)$ with real $x_1,x_2$; if $b^2-4ac<0$ there is no such real factorization of $p(x)$. However, writing $b^2-4ac=(-1)(-b^2+4ac)$ and introducing the symbol $i$ for $\sqrt{-1}$ we can write the zeroes of $p(x)$ as
$$x_{1,2}=\frac{-b\pm i\sqrt{-b^2+4ac}}{2a}=-\frac{b}{2a}\pm i\,\frac{\sqrt{-b^2+4ac}}{2a}\in\mathbb{R}+i\mathbb{R}=\mathbb{C}$$
Considering the two roots $x_1,x_2$ complex, we can still factor $ax^2+bx+c=a(x-x_1)(x-x_2)$, only now the factors have complex coefficients. Within complex numbers every polynomial of degree two is reducible!
Note that the two roots of a quadratic polynomial with real coefficients are complex conjugate: if $a,b,c\in\mathbb{R}$ and $x_{1,2}\not\in\mathbb{R}$ then $x_2=\overline{x_1}$.
2.4.3. The fundamental theorem of algebra. It is absolutely remarkable that
any polynomial can be completely factored using complex numbers:
Theorem 3. The fundamental theorem of algebra
Any polynomial $p(x)=a_nx^n+a_{n-1}x^{n-1}+\dots+a_0$ with coefficients $a_j\in\mathbb{C}$ can be factored

(8) $a_nx^n+a_{n-1}x^{n-1}+\dots+a_0=a_n(x-x_1)(x-x_2)\dots(x-x_n)$

Note that the nonreal roots of a polynomial with real coefficients come in conjugate pairs $z,\overline{z}$, and
$$(x-z)(x-\overline{z})=x^2-(z+\overline{z})\,x+|z|^2$$
has real coefficients.
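The factorization (8) can be illustrated numerically; below, `np.roots` finds the $n$ complex roots, and the product of the factors reproduces the polynomial (the sample cubic is our choice):

```python
import numpy as np

# x^3 - 1 = (x - 1)(x^2 + x + 1): one real root and a complex-conjugate pair
roots = np.roots([1.0, 0.0, 0.0, -1.0])
assert len(roots) == 3

# The factored form a_n (x - x_1)...(x - x_n) agrees with the polynomial
x = 2.7
assert np.isclose(np.prod(x - roots).real, x**3 - 1)

# The nonreal roots of this real polynomial are conjugates of each other
nonreal = [r for r in roots if abs(r.imag) > 1e-9]
assert len(nonreal) == 2 and np.isclose(nonreal[0], np.conj(nonreal[1]))
```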
For a diagonal matrix $D=\operatorname{diag}(d_1,\dots,d_n)$,
$$\det(D-\lambda I)=\begin{vmatrix} d_1-\lambda & 0 & \dots & 0\\ 0 & d_2-\lambda & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & d_n-\lambda\end{vmatrix}=(d_1-\lambda)(d_2-\lambda)\dots(d_n-\lambda)$$
The eigenvalues are precisely the diagonal elements, and the eigenvector corresponding to $d_j$ is $e_j$ (as is easy to check). The principal axes of diagonal matrices are the coordinate axes. Vectors in the direction of one of these axes preserve their direction and are stretched or compressed: if $x=ce_k$ then $Dx=d_kx$.
Diagonal matrices are easy to work with: what was noted for the $2\times 2$ matrices in §1 is true in general, and one can easily check that any power $D^k$ is the diagonal matrix having $d_j^k$ on the diagonal.
If $p(t)$ is a polynomial
$$p(t)=a_kt^k+a_{k-1}t^{k-1}+\dots+a_1t+a_0$$
then for any square matrix $M$ one can define
$$p(M)=a_kM^k+a_{k-1}M^{k-1}+\dots+a_1M+a_0I$$
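For a diagonal matrix this gives $p(D)=\operatorname{diag}(p(d_1),\dots,p(d_n))$. A small sketch of evaluating $p(M)$ (the helper name `poly_of_matrix` is ours, not from the notes):

```python
import numpy as np

def poly_of_matrix(coeffs, M):
    """Evaluate p(M) = a_0 I + a_1 M + ... + a_k M^k, with coeffs = [a_0, ..., a_k]."""
    result = np.zeros_like(M)
    for k, a in enumerate(coeffs):
        result = result + a * np.linalg.matrix_power(M, k)
    return result

# For a diagonal matrix, p(D) is diagonal with p(d_j) on the diagonal
D = np.diag([2.0, 3.0])
pD = poly_of_matrix([1.0, -1.0, 1.0], D)     # p(t) = t^2 - t + 1
assert np.allclose(pD, np.diag([3.0, 7.0]))  # p(2) = 3, p(3) = 7
```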
Proposition 5. Two similar matrices have the same eigenvalues: if $M,\tilde M$ are similar, $\tilde M=S^{-1}MS$, then the eigenvalues of $M$ and of $\tilde M$ coincide.
Indeed,
$$\det(\tilde M-\lambda I)=\det(S^{-1}MS-\lambda I)=\det\left(S^{-1}(M-\lambda I)S\right)=\det S^{-1}\,\det(M-\lambda I)\,\det S=\det(M-\lambda I)$$
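A numerical illustration of Proposition 5 (the matrices below are sample choices):

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])
S = np.array([[1.0, 2.0],
              [0.0, 1.0]])        # any invertible matrix

M_similar = np.linalg.inv(S) @ M @ S

# Similar matrices share their eigenvalues (here 2 and 5)
e1 = np.sort(np.linalg.eigvals(M_similar).real)
e2 = np.sort(np.linalg.eigvals(M).real)
assert np.allclose(e1, e2)
```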
Recall that
$$\operatorname{Tr}M=\sum_{j=1}^{n}M_{jj}$$
For an $n\times n$ matrix $M$ let $\lambda_1,\dots,\lambda_n$ be its $n$ eigenvalues. Then
$$\det M=\lambda_1\lambda_2\dots\lambda_n$$
and

(12) $\operatorname{Tr}M=\lambda_1+\dots+\lambda_n$

In particular, the traces of similar matrices are equal, and so are their determinants.
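Relation (12) and $\det M=\lambda_1\dots\lambda_n$ are easy to check numerically:

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # eigenvalues 2 and 5
eigs = np.linalg.eigvals(M)

assert np.isclose(eigs.sum().real, np.trace(M))        # 2 + 5 = 7 = Tr M
assert np.isclose(eigs.prod().real, np.linalg.det(M))  # 2 * 5 = 10 = det M
```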
Consider the polynomial with roots $\lambda_1,\dots,\lambda_n$:
$$p(x)=(x-\lambda_1)(x-\lambda_2)\dots(x-\lambda_n)=x^n-(\lambda_1+\dots+\lambda_n)x^{n-1}+\dots+(-1)^n(\lambda_1\lambda_2\dots\lambda_n)$$
Then
$$\det(M-\lambda I)=(-1)^n(\lambda-\lambda_1)\dots(\lambda-\lambda_n)=(-1)^np(\lambda)$$
On the other hand, the expansion of the determinant shows that
$$\det(M-\lambda I)=(M_{11}-\lambda)(M_{22}-\lambda)\dots(M_{nn}-\lambda)+\text{lower powers of }\lambda$$
and comparing the coefficients of $\lambda^{n-1}$ gives (12).
Proof.
Assume, to obtain a contradiction, that the eigenvectors are linearly dependent: there are $c_1,\dots,c_k\in F$, not all zero, such that

(14) $c_1v_1+\dots+c_kv_k=0$

Step I. We can assume that all $c_j$ are nonzero; otherwise we just remove those $v_j$ from (14) and we have a similar equation with a smaller $k$.
If after this procedure we are left with $k=1$, then this implies $c_1v_1=0$, which contradicts the fact that not all $c_j$ are zero or the fact that $v_1\neq 0$. Otherwise, for $k\geq 2$ we continue as follows.
Step II. We can solve (14) for $v_k$:

(15) $v_k=c'_1v_1+\dots+c'_{k-1}v_{k-1}$

Applying $M$ to (15),

(16) $\lambda_kv_k=c'_1\lambda_1v_1+\dots+c'_{k-1}\lambda_{k-1}v_{k-1}$

Multiplying (15) by $\lambda_k$ and subtracting from (16),

(17) $0=c'_1(\lambda_1-\lambda_k)v_1+\dots+c'_{k-1}(\lambda_{k-1}-\lambda_k)v_{k-1}$

Note that all $c'_j(\lambda_j-\lambda_k)$ are nonzero (since all $c'_j$ are nonzero, and $\lambda_j\neq\lambda_k$).
If $k=2$, then this implies $v_1=0$, which is a contradiction. If $k>2$ we go to Step I with a lower $k$.
The procedure decreases $k$, therefore it must end, and we have a contradiction. $\Box$
2.11. Diagonalization of matrices with linearly independent eigenvectors. Suppose that the $n\times n$ matrix $M$ has $n$ independent eigenvectors $v_1,\dots,v_n$.
Note that, by Theorem 10, this is the case if we work in $F=\mathbb{C}$ and all the eigenvalues are distinct (recall that this happens with probability one). Also this is the case if we work in $F=\mathbb{R}$ and all the eigenvalues are real and distinct.
Let $S$ be the matrix with columns $v_1,\dots,v_n$:
$$S=[v_1,\dots,v_n]$$
which is invertible, since $v_1,\dots,v_n$ are linearly independent. Since $Mv_j=\lambda_jv_j$, then

(18) $M[v_1,\dots,v_n]=[\lambda_1v_1,\dots,\lambda_nv_n]$
The left side of (18) equals $MS$. To identify the matrix on the right side of (18), note that since $Se_j=v_j$ then $S(\lambda_je_j)=\lambda_jv_j$, so
$$[\lambda_1v_1,\dots,\lambda_nv_n]=S[\lambda_1e_1,\dots,\lambda_ne_n]=S\Lambda,\quad\text{where }\Lambda=\begin{bmatrix}\lambda_1 & 0 & \dots & 0\\ 0 & \lambda_2 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & \lambda_n\end{bmatrix}$$
Relation (18) is therefore
$$MS=S\Lambda,\qquad\text{or}\qquad S^{-1}MS=\Lambda=\text{diagonal}$$
Note that the matrix $S$ which diagonalizes a matrix is not unique. For example, we can replace any eigenvector by a scalar multiple of it. Also, we can use different orders for the eigenvectors (this will result in a diagonal matrix with the same values on the diagonal, but in different positions).
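In NumPy, `np.linalg.eig` returns exactly such a pair: an eigenvalue array and an eigenvector matrix $S$, so the relation $MS=S\Lambda$ can be checked directly:

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, S = np.linalg.eig(M)    # columns of S are eigenvectors
Lambda = np.diag(lam)

assert np.allclose(M @ S, S @ Lambda)                 # relation (18)
assert np.allclose(np.linalg.inv(S) @ M @ S, Lambda)  # S^{-1} M S is diagonal
```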
Example 1. Consider the matrix (6), for which we found the eigenvalues $\lambda_1=-1$ and $\lambda_2=2$ and the corresponding eigenvectors $v_1=(0,1)^T$, $v_2=(1,1)^T$. Taking
$$S=\begin{bmatrix} 0 & 1\\ 1 & 1\end{bmatrix}$$
we have
$$S^{-1}MS=\begin{bmatrix} -1 & 0\\ 0 & 2\end{bmatrix}$$
Not all matrices are diagonalizable: certainly those with distinct eigenvalues are, and so are some matrices with multiple eigenvalues.
Example 2. The matrix

(19) $N=\begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}$

has eigenvalues $\lambda_1=\lambda_2=0$ but only one (up to a scalar multiple) eigenvector $v_1=e_1$.
Multiple eigenvalues are not guaranteed to have an equal number of independent eigenvectors!
$N$ is not diagonalizable. Indeed, assume the contrary, to arrive at a contradiction. Suppose there exists an invertible matrix $S$ so that $S^{-1}NS=\Lambda$ where $\Lambda$ is diagonal; hence it has the eigenvalues of $N$ on its diagonal, and therefore it is the zero matrix: $S^{-1}NS=0$, which multiplied by $S$ to the left and $S^{-1}$ to the right gives $N=0$, which is a contradiction.
Some matrices with multiple eigenvalues may still be diagonalized; the next section explores when this is the case.
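The defect of the matrix (19) can also be seen numerically: both eigenvalues vanish, but the eigenspace $\mathcal N(N)$ is only one-dimensional:

```python
import numpy as np

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Both eigenvalues are 0
assert np.allclose(np.linalg.eigvals(N), [0.0, 0.0])

# dim ker(N - 0 I) = 2 - rank(N) = 1 < 2: only one independent eigenvector,
# so N cannot be diagonalized
assert np.linalg.matrix_rank(N) == 1
```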
2.12. Eigenspaces. For an eigenvalue $\lambda_j$ of $M$, the eigenspace is
$$V_{\lambda_j}=\{x\in F^n\,|\,Mx=\lambda_jx\}$$
Suppose the characteristic polynomial factors as
$$\det(M-\lambda I)=\prod_{j=1}^{k}(\lambda_j-\lambda)^{r_j},\qquad r_1+\dots+r_k=n$$
Then
$$\dim V_{\lambda_j}\leq r_j$$
and $M$ is diagonalizable in $F^n$ if and only if $\dim V_{\lambda_j}=r_j$ for all $j=1,\dots,k$, and then
$$V_{\lambda_1}\oplus\dots\oplus V_{\lambda_k}=F^n$$
Example. For the matrix $M$ of (20) we have
$$\det(M-\lambda I)=-(\lambda+1)(\lambda-2)^2$$
so the eigenvalues are $\lambda_1=-1$ (with $r_1=1$) and $\lambda_2=2$ (with $r_2=2$).
A calculation shows that there are two independent eigenvectors corresponding to the eigenvalue $\lambda_2=2$, for example $v_2=(1,0,1)^T$ and $v_3=(2,1,0)^T$ (the null space of $M-2I$ is two-dimensional). Let
$$S=[v_1,v_2,v_3]=\begin{bmatrix} 0 & 1 & 2\\ 1 & 0 & 1\\ 1 & 1 & 0\end{bmatrix}$$
then
$$S^{-1}MS=\begin{bmatrix} -1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2\end{bmatrix}$$
Consider the rotation matrix

(21) $R_\theta=\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}$

To find its eigenvalues calculate
$$\det(R_\theta-\lambda I)=\begin{vmatrix}\cos\theta-\lambda & -\sin\theta\\ \sin\theta & \cos\theta-\lambda\end{vmatrix}=(\cos\theta-\lambda)^2+\sin^2\theta=\lambda^2-2\lambda\cos\theta+1$$
For a further example, consider a matrix $M$ with
$$\det(M-\lambda I)=-\lambda^3-2\lambda^2+4\lambda-16=-(\lambda+4)(\lambda^2-2\lambda+4)$$
Its eigenvalues are $\lambda_{1,2}=1\pm i\sqrt{3}=2e^{\pm i\pi/3}$ and $\lambda_3=-4$, with corresponding eigenvectors $v_{1,2}=(-1\pm 2i,1,0)^T$ and $v_3=e_3$. The matrix $S=[v_1,v_2,v_3]$ diagonalizes the matrix: $S^{-1}MS$ is the diagonal matrix having the eigenvalues on the diagonal, but all these are complex matrices.
To understand how the matrix acts on $\mathbb{R}^3$, we consider the real and imaginary parts of $v_1$: let $x_1=\Re\, v_1=\frac12(v_1+v_2)=(-1,1,0)^T$ and $y_1=\Im\, v_1=\frac1{2i}(v_1-v_2)=(2,0,0)^T$. Since the eigenspaces are invariant under $M$, then so is $Sp(x_1,y_1)$, over the complex and even over the real numbers (since $M$ has real elements). The span over the real numbers is the $xy$-plane, and it is invariant under $M$. The figure shows the image of the unit circle in the $xy$-plane under the matrix $M$: it is an ellipse.

Figure 1. The image of the unit circle in the $xy$-plane.

Along the direction of the third eigenvector (the $z$-axis) the matrix multiplies any $ce_3$ by $-4$.
In the basis $x_1,y_1,v_3$ the matrix of the linear transformation has its simplest form: using $S_R=[x_1,y_1,v_3]$ we obtain the matrix of the transformation in this new basis as
$$S_R^{-1}MS_R=\begin{bmatrix} 1 & \sqrt{3} & 0\\ -\sqrt{3} & 1 & 0\\ 0 & 0 & -4\end{bmatrix}$$
whose upper-left block is $2R_{\pi/3}$ (a rotation composed with a dilation).
Suppose that some eigenvalues are not real. Then the matrices $S$ and $\Lambda$ are not real either, and the diagonalization of $M$ must be done in $\mathbb{C}^n$.
Suppose that we want to work in $\mathbb{R}^n$ only. Recall that the nonreal eigenvalues and eigenvectors of real matrices come in pairs of complex-conjugate ones. In the complex diagonal form one can replace diagonal $2\times 2$ blocks
$$\begin{bmatrix}\lambda_j & 0\\ 0 & \overline{\lambda_j}\end{bmatrix}$$
by a $2\times 2$ matrix which is not diagonal, but has real entries.
To see how this is done, suppose $\lambda_1\in\mathbb{C}\setminus\mathbb{R}$ and $\lambda_2=\overline{\lambda_1}$, $v_2=\overline{v_1}$. Splitting into real and imaginary parts, write $\lambda_1=\alpha_1+i\beta_1$ and $v_1=x_1+iy_1$. Then from $M(x_1+iy_1)=(\alpha_1+i\beta_1)(x_1+iy_1)$, identifying the real and imaginary parts, we obtain
$$Mx_1+iMy_1=(\alpha_1x_1-\beta_1y_1)+i(\beta_1x_1+\alpha_1y_1)$$
so that, replacing $v_1,v_2$ by $x_1,y_1$ in the basis, the diagonal block $\operatorname{diag}(\lambda_1,\overline{\lambda_1})$ is replaced by the real block
$$\begin{bmatrix}\alpha_1 & \beta_1\\ -\beta_1 & \alpha_1\end{bmatrix}$$
while the remaining diagonal entries $\lambda_3,\dots,\lambda_m$ are unchanged.
We can similarly replace any pair of complex conjugate eigenvalues with $2\times 2$ real blocks.
Exercise. Show that each $2\times 2$ real block obtained through decomplexification has the form $\rho\,R_\theta$.
The Jordan blocks which appear on the diagonal of a Jordan normal form are as follows.
$1\times 1$ Jordan blocks are just $[\lambda]$.
$2\times 2$ Jordan blocks have the form

(22) $J_2(\lambda)=\begin{bmatrix}\lambda & 1\\ 0 & \lambda\end{bmatrix}$

For example, the matrix (19) is a Jordan block $J_2(0)$.
$3\times 3$ Jordan blocks have the form

(23) $J_3(\lambda)=\begin{bmatrix}\lambda & 1 & 0\\ 0 & \lambda & 1\\ 0 & 0 & \lambda\end{bmatrix}$

Note that $J_2(\lambda)e_1=\lambda e_1$ and
$$J_2(\lambda)e_2=e_1+\lambda e_2$$
Similarly, $J_3(\lambda)e_1=\lambda e_1$,
$$J_3(\lambda)e_2=e_1+\lambda e_2$$
so $(J_3(\lambda)-\lambda I)^2e_2=(J_3(\lambda)-\lambda I)e_1=0$. Finally,
$$J_3(\lambda)e_3=e_2+\lambda e_3$$
2.14.2. The generalized eigenspace. Defective matrices are similar to a matrix which is block-diagonal, having Jordan blocks on its diagonal. An appropriate basis is formed using generalized eigenvectors:
Definition 13. A generalized eigenvector of $M$ corresponding to the eigenvalue $\lambda$ is a vector $x\neq 0$ so that

(27) $(M-\lambda I)^kx=0$

for some positive integer $k$. The generalized eigenvectors span the generalized eigenspaces $E_\lambda$, and
$$E_{\lambda_1}\oplus\dots\oplus E_{\lambda_k}=\mathbb{C}^n$$
2.14.3. How to find a basis for each $E_\lambda$ that can be used to conjugate a matrix to a Jordan normal form.
Example 1. The matrix

(28) $M=\begin{bmatrix} 1+a & 1\\ -1 & a-1\end{bmatrix}$

is defective: it has eigenvalues $a,a$ and only one independent eigenvector, $(1,-1)^T$. It is therefore similar to $J_2(a)$. To find a basis $x_1,x_2$ in which the matrix takes this form, let $x_1=(1,-1)^T$ (the eigenvector); to find $x_2$ we solve $(M-aI)x_2=x_1$ (as seen in (24) and in (25)). The solutions are $x_2\in(1,0)^T+\mathcal{N}(M-aI)$, and any vector in this space works, for example $x_2=(1,0)^T$. For

(29) $S=[x_1,x_2]=\begin{bmatrix} 1 & 1\\ -1 & 0\end{bmatrix}$

we have $S^{-1}MS=J_2(a)$.
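The computation in Example 1 can be verified numerically, for instance with $a=2$:

```python
import numpy as np

a = 2.0
M = np.array([[1 + a, 1.0],
              [-1.0, a - 1]])      # the matrix (28) with a = 2

x1 = np.array([1.0, -1.0])         # eigenvector for the double eigenvalue a
x2 = np.array([1.0, 0.0])          # generalized eigenvector: (M - aI) x2 = x1
assert np.allclose(M @ x1, a * x1)
assert np.allclose((M - a * np.eye(2)) @ x2, x1)

S = np.column_stack([x1, x2])
J2 = np.array([[a, 1.0],
               [0.0, a]])
assert np.allclose(np.linalg.inv(S) @ M @ S, J2)
```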
Example 2. A defective $3\times 3$ matrix is treated by the same general procedure: choose a basis for the eigenspace $V_\lambda$; starting from an eigenvector $x_1$ in this basis, solve successively
$$(M-\lambda I)x_{k+1}=x_k,\qquad k=1,2,\dots$$
Note that $x_1$ satisfies (27) for $k=1$, $x_2$ satisfies (27) for $k=2$, etc. At some step $k_1$ the system $(M-\lambda I)x_{k_1+1}=x_{k_1}$ will have no solution; we have found the generalized eigenvectors $x_1,\dots,x_{k_1}$ (which give a $k_1\times k_1$ Jordan block). We then repeat the procedure for a different eigenvector in the chosen basis for $V_\lambda$, and obtain a new set of generalized eigenvectors, corresponding to a new Jordan block.
Note: the Jordan form is not unique.
2.14.4. Real Jordan normal form. If a real matrix has multiple complex eigenvalues and is defective, then its Jordan form can be replaced with an upper block diagonal matrix, in a way similar to the diagonalizable case illustrated in §2.13.2, by replacing the generalized eigenvectors with their real and imaginary parts.
For example, a real matrix which can be brought to the complex Jordan normal form
$$\begin{bmatrix}\alpha+i\beta & 1 & 0 & 0\\ 0 & \alpha+i\beta & 0 & 0\\ 0 & 0 & \alpha-i\beta & 1\\ 0 & 0 & 0 & \alpha-i\beta\end{bmatrix}$$
can be conjugated (by a real matrix) to the real matrix
$$\begin{bmatrix}\alpha & \beta & 1 & 0\\ -\beta & \alpha & 0 & 1\\ 0 & 0 & \alpha & \beta\\ 0 & 0 & -\beta & \alpha\end{bmatrix}$$
Block triangular matrices have the form
$$M=\begin{bmatrix} A & B\\ 0 & D\end{bmatrix}\qquad\text{or}\qquad M=\begin{bmatrix} A & 0\\ C & D\end{bmatrix}$$
For a general block matrix
$$M=\begin{bmatrix} A & B\\ C & D\end{bmatrix}$$
with $D$ invertible, the identity
$$\begin{bmatrix} A & B\\ C & D\end{bmatrix}=\begin{bmatrix} A-BD^{-1}C & B\\ 0 & D\end{bmatrix}\begin{bmatrix} I & 0\\ D^{-1}C & I\end{bmatrix}$$
shows that
$$\det\begin{bmatrix} A & B\\ C & D\end{bmatrix}=\det(A-BD^{-1}C)\,\det D$$
(2000), pp. 460-467, and P.D. Powell, Calculating Determinants of Block Matrices,
http://arxiv.org/pdf/1112.4379v1.pdf
(33) $u(0)=u_0$

The general solution of (32) is
$$u(t)=a_1e^{\lambda_1t}v_1+\dots+a_me^{\lambda_mt}v_m,\qquad a_j\ \text{arbitrary constants}$$
The matrix

(35) $U(t)=[e^{\lambda_1t}v_1,\dots,e^{\lambda_mt}v_m]$

is a fundamental matrix solution, and
$$u(t)=U(t)a,\qquad\text{where }a=(a_1,\dots,a_m)^T$$
The initial condition (33) determines the constants $a$, since (33) implies $U(0)a=u_0$. Noting that $U(0)=[v_1,\dots,v_m]=S$, therefore $a=S^{-1}u_0$ and the initial value problem (32), (33) has the solution
$$u(t)=U(t)S^{-1}u_0$$
Example. Solve the initial value problem

(37) $\dfrac{dx}{dt}=x-2y,\qquad \dfrac{dy}{dt}=-2x+y,\qquad x(0)=\alpha,\ y(0)=\beta$

This is the system $u'=Mu$ with
$$M=\begin{bmatrix} 1 & -2\\ -2 & 1\end{bmatrix},\qquad u(0)=\begin{bmatrix}\alpha\\ \beta\end{bmatrix}$$
Calculating the eigenvalues of $M$, we obtain $\lambda_1=-1$, $\lambda_2=3$, and corresponding eigenvectors $v_1=(1,1)^T$, $v_2=(-1,1)^T$. There are two independent solutions of the differential system:
$$u_1(t)=e^{-t}\begin{bmatrix}1\\1\end{bmatrix},\qquad u_2(t)=e^{3t}\begin{bmatrix}-1\\1\end{bmatrix}$$
and a fundamental matrix solution is

(39) $U(t)=\begin{bmatrix} e^{-t} & -e^{3t}\\ e^{-t} & e^{3t}\end{bmatrix}$

The general solution is
$$u(t)=a_1e^{-t}\begin{bmatrix}1\\1\end{bmatrix}+a_2e^{3t}\begin{bmatrix}-1\\1\end{bmatrix}=U(t)\begin{bmatrix}a_1\\a_2\end{bmatrix}$$
This solution satisfies the initial condition if
$$a_1\begin{bmatrix}1\\1\end{bmatrix}+a_2\begin{bmatrix}-1\\1\end{bmatrix}=\begin{bmatrix}\alpha\\\beta\end{bmatrix}$$
which is solved for $a_1,a_2$: from
$$\begin{bmatrix}1 & -1\\ 1 & 1\end{bmatrix}\begin{bmatrix}a_1\\a_2\end{bmatrix}=\begin{bmatrix}\alpha\\\beta\end{bmatrix}$$
it follows that
$$\begin{bmatrix}a_1\\a_2\end{bmatrix}=\begin{bmatrix}1 & -1\\ 1 & 1\end{bmatrix}^{-1}\begin{bmatrix}\alpha\\\beta\end{bmatrix}=\begin{bmatrix}\frac{\alpha+\beta}{2}\\[4pt] \frac{\beta-\alpha}{2}\end{bmatrix}$$
therefore

(40) $u(t)=\dfrac{\alpha+\beta}{2}\,e^{-t}\begin{bmatrix}1\\1\end{bmatrix}+\dfrac{\beta-\alpha}{2}\,e^{3t}\begin{bmatrix}-1\\1\end{bmatrix}$

so
$$x(t)=\frac{\alpha+\beta}{2}\,e^{-t}-\frac{\beta-\alpha}{2}\,e^{3t},\qquad y(t)=\frac{\alpha+\beta}{2}\,e^{-t}+\frac{\beta-\alpha}{2}\,e^{3t}$$
3.1.2. The matrix $e^{Mt}$. It is often preferable to work with a matrix of independent solutions $U(t)$ rather than with a set of independent solutions. Note that the $m\times m$ matrix $U(t)$ satisfies

(41) $\dfrac{d}{dt}U(t)=MU(t)$

In dimension one this equation reads $\frac{du}{dt}=\lambda u$, having its general solution $u(t)=Ce^{\lambda t}$. Let us check this fact using the fact that the exponential is the sum of its Taylor series:
$$e^x=1+\frac{1}{1!}x+\frac{1}{2!}x^2+\dots+\frac{1}{n!}x^n+\dots=\sum_{n=0}^{\infty}\frac{1}{n!}x^n$$
so
$$e^{\lambda t}=1+\frac{1}{1!}\lambda t+\frac{1}{2!}\lambda^2t^2+\dots+\frac{1}{n!}\lambda^nt^n+\dots=\sum_{n=0}^{\infty}\frac{1}{n!}\lambda^nt^n$$
and, differentiating term by term,
$$\frac{d}{dt}e^{\lambda t}=\sum_{n=1}^{\infty}\frac{1}{n!}\,n\,\lambda^nt^{n-1}=\lambda e^{\lambda t}$$
Perhaps one can define, similarly, the exponential of a matrix and obtain
solutions to (41)?
For any square matrix $M$, one can define polynomials, as in (10), and it is natural to define

(42) $e^{tM}=I+\dfrac{1}{1!}tM+\dfrac{1}{2!}t^2M^2+\dots+\dfrac{1}{n!}t^nM^n+\dots=\displaystyle\sum_{n=0}^{\infty}\frac{1}{n!}t^nM^n$

provided that the series converges. If, furthermore, the series can be differentiated term by term, then this matrix is a solution of (41), since

(43) $\dfrac{d}{dt}e^{tM}=\dfrac{d}{dt}\displaystyle\sum_{n=0}^{\infty}\frac{1}{n!}t^nM^n=\sum_{n=1}^{\infty}\frac{n}{n!}t^{n-1}M^n=Me^{tM}$
If $M$ is diagonalizable, $M=S\Lambda S^{-1}$, then
$$M^2=S\Lambda S^{-1}S\Lambda S^{-1}=S\Lambda^2S^{-1},\qquad M^3=M^2M=S\Lambda^2S^{-1}S\Lambda S^{-1}=S\Lambda^3S^{-1}$$
and so on; for any power
$$M^n=S\Lambda^nS^{-1}$$
Then

(45) $e^{tM}=\displaystyle\sum_{n=0}^{\infty}\frac{1}{n!}t^nM^n=\sum_{n=0}^{\infty}\frac{1}{n!}t^nS\Lambda^nS^{-1}=S\left(\sum_{n=0}^{\infty}\frac{1}{n!}t^n\Lambda^n\right)S^{-1}$

For $\Lambda=\operatorname{diag}(\lambda_1,\dots,\lambda_m)$,

(46) $\Lambda^n=\begin{bmatrix}\lambda_1^n & 0 & \dots & 0\\ 0 & \lambda_2^n & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & \lambda_m^n\end{bmatrix}\quad\text{for }n=1,2,3,\dots$

therefore

(47) $\displaystyle\sum_{n=0}^{\infty}\frac{1}{n!}t^n\Lambda^n=\begin{bmatrix}\sum_{n=0}^{\infty}\frac{1}{n!}t^n\lambda_1^n & 0 & \dots & 0\\ 0 & \sum_{n=0}^{\infty}\frac{1}{n!}t^n\lambda_2^n & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & \sum_{n=0}^{\infty}\frac{1}{n!}t^n\lambda_m^n\end{bmatrix}=\begin{bmatrix}e^{t\lambda_1} & 0 & \dots & 0\\ 0 & e^{t\lambda_2} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & e^{t\lambda_m}\end{bmatrix}=e^{t\Lambda}$

and

(48) $e^{tM}=Se^{t\Lambda}S^{-1}$
which shows that the series defining the matrix $e^{tM}$ converges and can be differentiated term-by-term (since these are true for each of the series in (47)). Therefore $e^{tM}$ is a solution of the differential equation (41).
Multiplying by an arbitrary constant vector $b$ we obtain vector solutions of (32):

(50) $u(t)=e^{tM}b$

Noting that $u(0)=b$, it follows that the solution of the initial value problem (32), (33) is
$$u(t)=e^{tM}u_0$$
Note: the fundamental matrix $U(t)$ in (35) is linked to the fundamental matrix $e^{tM}$ by

(51) $e^{tM}=U(t)S^{-1}$
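A sketch of (48) in NumPy for a diagonalizable matrix (here the matrix of example (37); the helper name `expm_via_eig` is ours, not from the notes):

```python
import numpy as np

M = np.array([[1.0, -2.0],
              [-2.0, 1.0]])     # the matrix of example (37)

def expm_via_eig(t, M):
    """e^{tM} = S e^{t Lambda} S^{-1} for diagonalizable M, as in (48)."""
    lam, S = np.linalg.eig(M)
    return (S * np.exp(t * lam)) @ np.linalg.inv(S)   # scale columns of S

# e^{0 M} = I, and the semigroup property e^{(s+t)M} = e^{sM} e^{tM}
assert np.allclose(expm_via_eig(0.0, M), np.eye(2))
assert np.allclose(expm_via_eig(0.7, M), expm_via_eig(0.3, M) @ expm_via_eig(0.4, M))

# d/dt e^{tM} = M e^{tM}, checked by central finite differences
t, h = 0.5, 1e-6
deriv = (expm_via_eig(t + h, M) - expm_via_eig(t - h, M)) / (2 * h)
assert np.allclose(deriv, M @ expm_via_eig(t, M), atol=1e-4)
```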
For the example (37), with $u_1(t)=e^{-t}(1,1)^T$ and $u_2(t)=e^{3t}(-1,1)^T$, this gives
$$e^{tM}=U(t)S^{-1}=\begin{bmatrix}\tfrac12e^{-t}+\tfrac12e^{3t} & \tfrac12e^{-t}-\tfrac12e^{3t}\\[4pt] \tfrac12e^{-t}-\tfrac12e^{3t} & \tfrac12e^{-t}+\tfrac12e^{3t}\end{bmatrix}$$
For a non-diagonalizable matrix, $M=SJS^{-1}$ with $J$ a Jordan normal form, and similarly
$$e^{tM}=Se^{tJ}S^{-1}$$
It only remains to check that the series defining the exponential of a Jordan form converges, and that it can be differentiated term by term.
Also to be determined are $m$ linearly independent solutions, since if $M$ is not diagonalizable, then there are fewer than $m$ independent eigenvectors, hence fewer than $m$ independent solutions of pure exponential type. This can be done using the analogue of (51), namely by considering the matrix

(53) $U(t)=Se^{tJ}$

The columns of the matrix (53) are linearly independent solutions, and we will see that among them are the purely exponential ones multiplying the eigenvectors of $M$.
Since $J$ is block diagonal (with Jordan blocks along its diagonal), its exponential will be block diagonal as well, with exponentials of each Jordan block (see §2.15.1 for multiplication of block matrices).
3.2.1. Example: $2\times 2$ blocks. For

(54) $J=\begin{bmatrix}\lambda & 1\\ 0 & \lambda\end{bmatrix}$

the powers are
$$J^2=\begin{bmatrix}\lambda^2 & 2\lambda\\ 0 & \lambda^2\end{bmatrix},\quad J^3=\begin{bmatrix}\lambda^3 & 3\lambda^2\\ 0 & \lambda^3\end{bmatrix},\ \dots,\ J^k=\begin{bmatrix}\lambda^k & k\lambda^{k-1}\\ 0 & \lambda^k\end{bmatrix},\dots$$
and then

(56) $e^{tJ}=\displaystyle\sum_{k=0}^{\infty}\frac{1}{k!}t^kJ^k=\begin{bmatrix}e^{t\lambda} & te^{t\lambda}\\ 0 & e^{t\lambda}\end{bmatrix}$
Therefore, for a matrix similar to $J_2(\lambda)$ with basis $x_1,x_2$, the general solution is
$$u(t)=a_1e^{\lambda t}x_1+a_2e^{\lambda t}(tx_1+x_2)$$
Example. Solve the initial value problem
$$\frac{dx}{dt}=(1+a)x+y,\qquad \frac{dy}{dt}=-x+(a-1)y,\qquad x(0)=\alpha,\ y(0)=\beta$$
The matrix of the system is the matrix (28), similar to $J_2(a)$, with $x_1=(1,-1)^T$ and $x_2=(1,0)^T$ found earlier. Two independent solutions are $u_1(t)=e^{at}x_1$ and $u_2(t)=e^{at}(tx_1+x_2)$, so a fundamental matrix solution is
$$U(t)=[u_1(t),u_2(t)]=e^{at}\begin{bmatrix}1 & t+1\\ -1 & -t\end{bmatrix}$$
and the general solution has the form $u(t)=U(t)c$ for an arbitrary constant vector $c$. We now determine $c$ so that $u(0)=(\alpha,\beta)^T$, so we solve
$$\begin{bmatrix}1 & 1\\ -1 & 0\end{bmatrix}c=\begin{bmatrix}\alpha\\\beta\end{bmatrix}$$
which gives
$$c=\begin{bmatrix}1 & 1\\ -1 & 0\end{bmatrix}^{-1}\begin{bmatrix}\alpha\\\beta\end{bmatrix}=\begin{bmatrix}-\beta\\ \alpha+\beta\end{bmatrix}$$
so
$$u(t)=e^{at}\begin{bmatrix}1 & t+1\\ -1 & -t\end{bmatrix}\begin{bmatrix}-\beta\\ \alpha+\beta\end{bmatrix}=e^{at}\begin{bmatrix}t(\alpha+\beta)+\alpha\\ -t(\alpha+\beta)+\beta\end{bmatrix}$$
or
$$x(t)=e^{at}\bigl(t(\alpha+\beta)+\alpha\bigr),\qquad y(t)=e^{at}\bigl(-t(\alpha+\beta)+\beta\bigr)$$
For the $3\times 3$ Jordan block

(60) $J=J_3(\lambda)=\begin{bmatrix}\lambda & 1 & 0\\ 0 & \lambda & 1\\ 0 & 0 & \lambda\end{bmatrix}$

a similar calculation of the powers $J^2,J^3,\dots$ gives

(62) $e^{tJ}=\displaystyle\sum_{k=0}^{\infty}\frac{1}{k!}t^kJ^k=\begin{bmatrix}e^{t\lambda} & te^{t\lambda} & \tfrac12t^2e^{t\lambda}\\ 0 & e^{t\lambda} & te^{t\lambda}\\ 0 & 0 & e^{t\lambda}\end{bmatrix}$
(64) $\dfrac{d}{dt}\det U(t)=(\operatorname{Tr}M)\,\det U(t)$

therefore

(65) $\det U(t)=e^{t\operatorname{Tr}M}\det U_0=e^{t\sum_{j=1}^{n}\lambda_j}\det U_0$
(67) $y^{(n)}+a_{n-1}y^{(n-1)}+\dots+a_1y'+a_0y=0$

can be reduced to a first order system by setting

(68) $u=(y,y',\dots,y^{(n-1)})^T$

so that

(69) $u'=Mu,\quad\text{where }M=\begin{bmatrix}0 & 1 & 0 & \dots & 0\\ 0 & 0 & 1 & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \dots & 1\\ -a_0 & -a_1 & -a_2 & \dots & -a_{n-1}\end{bmatrix}$

the companion matrix. Indeed, the first $n-1$ rows state that $x_1'=x_2,\dots,x_{n-1}'=x_n$, while the last row is $x_n'=-a_0x_1-a_1x_2-\dots-a_{n-1}x_n$, which is (67). The characteristic equation of $M$ is
$$\lambda^n+a_{n-1}\lambda^{n-1}+\dots+a_1\lambda+a_0=0$$
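A sketch of the companion matrix construction (the helper name `companion` is ours), checking that its eigenvalues are the roots of the characteristic polynomial:

```python
import numpy as np

def companion(a):
    """Companion matrix of lambda^n + a_{n-1} lambda^{n-1} + ... + a_0,
    with a = [a_0, ..., a_{n-1}]."""
    n = len(a)
    M = np.zeros((n, n))
    M[:-1, 1:] = np.eye(n - 1)   # superdiagonal of ones
    M[-1, :] = -np.asarray(a)    # last row: -a_0, ..., -a_{n-1}
    return M

# lambda^2 - 5 lambda + 6 = (lambda - 2)(lambda - 3): eigenvalues 2 and 3
M = companion([6.0, -5.0])
assert np.allclose(np.sort(np.linalg.eigvals(M).real), [2.0, 3.0])
```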
A set of functions which is not linearly dependent on I is called linearly independent on I. This means that if relation (71) holds for some constants c1, . . . , cn, then necessarily all c1, . . . , cn are zero.
If all the functions f1, . . . , fn are sufficiently many times differentiable, then there is a simple way to check linear dependence or independence:
For functions $f_1,\dots,f_n$ which are $n-1$ times differentiable on $I$, their Wronskian is
$$W(t)=\begin{vmatrix} f_1(t) & \dots & f_n(t)\\ f_1'(t) & \dots & f_n'(t)\\ \vdots & & \vdots\\ f_1^{(n-1)}(t) & \dots & f_n^{(n-1)}(t)\end{vmatrix}$$
Example. For the equation $y'''-3y''+4y=0$ the characteristic equation is $\lambda^3-3\lambda^2+4=(\lambda+1)(\lambda-2)^2=0$, giving the solutions $e^{-t}$, $e^{2t}$, $te^{2t}$; their Wronskian $W(0)$ is nonzero, so they are linearly independent.
(ii) y1 (t), . . . , yn (t) are linearly independent if and only if their Wronskian
is not zero.
3.5.4. Decomplexification. Suppose equation (67) has real coefficients, $a_j\in\mathbb{R}$, but there are nonreal eigenvalues, say $\lambda_{1,2}=\alpha_1\pm i\beta_1$. Then there are two independent solutions $y_{1,2}(t)=e^{t(\alpha_1\pm i\beta_1)}=e^{t\alpha_1}[\cos(t\beta_1)\pm i\sin(t\beta_1)]$. If real valued solutions are needed (or desired), note that $Sp(y_1,y_2)=Sp(y_c,y_s)$, where
$$y_c(t)=e^{t\alpha_1}\cos(t\beta_1),\qquad y_s(t)=e^{t\alpha_1}\sin(t\beta_1)$$
and $y_c,y_s$ are two independent solutions (real valued).
Furthermore, any solution in $Sp(y_c,y_s)$ can be written as

(72) $C_1y_c(t)+C_2y_s(t)=Ae^{t\alpha_1}\cos(t\beta_1+B)$

where
$$A=\sqrt{C_1^2+C_2^2},\qquad \cos B=\frac{C_1}{\sqrt{C_1^2+C_2^2}},\quad \sin B=-\frac{C_2}{\sqrt{C_1^2+C_2^2}}$$
Example. For the system $x'=y$, $y'=-k^2x$, where $k\in\mathbb{R}$, the eigenvalues are $\pm ik$ and the real-valued solutions are
$$x(t)=A\sin(kt+B),\qquad y(t)=x'(t)=Ak\cos(kt+B)$$
Therefore:
(i) if all $\lambda_j$ have negative real parts, then any solution $u(t)$ of (75) converges to zero: $\lim_{t\to\infty}u(t)=0$, and 0 is asymptotically stable.
(ii) If all $\Re\lambda_j\leq 0$, some real parts are zero, and each eigenvalue with zero real part has the dimension of its eigenspace equal to the multiplicity of the eigenvalue, then 0 is stable.
(iii) If any eigenvalue has a positive real part, then 0 is unstable.
As examples, let us consider $2\times 2$ systems with real coefficients.
Example 1: an asymptotically stable case, with all eigenvalues real. For
$$M=\begin{bmatrix}-5 & -2\\ -1 & -4\end{bmatrix},\qquad M=S\Lambda S^{-1},\ \text{with}\ \Lambda=\begin{bmatrix}-3 & 0\\ 0 & -6\end{bmatrix},\ S=\begin{bmatrix}1 & 2\\ -1 & 1\end{bmatrix}$$
The figure shows the field plot (a representation of the linear transformation $x\to Mx$ of $\mathbb{R}^2$). The trajectories are tangent to the line field, and they are going towards the origin. Solutions with initial conditions along the directions of the two eigenvectors of $M$ are straight half-lines (two such solutions are shown in the picture); these are the solutions $u(t)=ce^{\lambda_jt}v_j$. (Solutions with any other initial conditions are not straight lines.)
The first order linear recurrence
$$x_{k+1}=Mx_k$$
has the solution $x_k=M^kx_0$. A second order recurrence

(77) $x_{k+2}=M_1x_{k+1}+M_0x_k$

can be reduced to first order: letting
$$y_k=\begin{bmatrix}x_k\\ x_{k+1}\end{bmatrix}$$
then $y_k$ satisfies the recurrence
$$y_{k+1}=My_k,\qquad\text{where }M=\begin{bmatrix}0 & I\\ M_0 & M_1\end{bmatrix}$$
and similarly for a recurrence of order $p$,
$$x_{k+p}=M_{p-1}x_{k+p-1}+\dots+M_1x_{k+1}+M_0x_k$$
If $M$ is diagonalizable, $M=S\Lambda S^{-1}$ with $S=[v_1,\dots,v_n]$, then $M^k=S\Lambda^kS^{-1}$, and denoting $S^{-1}x_0=b$,
$$x_k=S\Lambda^kb=b_1\lambda_1^kv_1+\dots+b_n\lambda_n^kv_n$$
If $M$ is not diagonalizable, say with a $2\times 2$ Jordan block $J_2(\lambda)$ and corresponding basis vectors $y_1,y_2$ (eigenvector and generalized eigenvector), then since
$$J_2(\lambda)^k=\begin{bmatrix}\lambda^k & k\lambda^{k-1}\\ 0 & \lambda^k\end{bmatrix}$$
we obtain
$$x_k=[y_1,y_2]\,J_2(\lambda)^k\begin{bmatrix}b_1\\ b_2\end{bmatrix}=b_1\lambda^ky_1+b_2\left(k\lambda^{k-1}y_1+\lambda^ky_2\right)$$
and the recurrence has two linearly independent solutions of the form $\lambda^ky_1$ and $q(k)\lambda^k$, where $q(k)$ is a polynomial in $k$ of degree one.
In a similar way, for $p\times p$ Jordan blocks there are $p$ linearly independent solutions of the form $q(k)\lambda^k$, where $q(k)$ are polynomials in $k$ of degree at most $p-1$, one of them being constant, equal to the eigenvector.
Example. Solve the recurrence relation $y_{n+3}=9y_{n+2}-24y_{n+1}+20y_n$.
Looking for solutions of the type $y_n=\lambda^n$ we obtain $\lambda^{n+3}=9\lambda^{n+2}-24\lambda^{n+1}+20\lambda^n$, which implies (disregarding $\lambda=0$) that $\lambda^3-9\lambda^2+24\lambda-20=0$, which factors as $(\lambda-5)(\lambda-2)^2=0$; therefore $\lambda_1=5$ and $\lambda_2=\lambda_3=2$. The general solution is $z_n=c_15^n+c_22^n+c_3n2^n$.
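The three basis solutions can be verified directly against the recurrence:

```python
# Check that 5^n, 2^n and n*2^n all satisfy y_{n+3} = 9 y_{n+2} - 24 y_{n+1} + 20 y_n
def satisfies(seq, n_max=20):
    return all(seq(n + 3) == 9 * seq(n + 2) - 24 * seq(n + 1) + 20 * seq(n)
               for n in range(n_max))

assert satisfies(lambda n: 5**n)
assert satisfies(lambda n: 2**n)
assert satisfies(lambda n: n * 2**n)

# Hence any combination c1 5^n + c2 2^n + c3 n 2^n is a solution as well
assert satisfies(lambda n: 3 * 5**n - 7 * 2**n + 2 * n * 2**n)
```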
4.3. Stability. Clearly the constant zero sequence $x_k=0$ is a solution of any linear homogeneous discrete equation (76): 0 is an equilibrium point (a steady state).
$$F_{k+2}=F_{k+1}+F_k$$
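The Fibonacci recurrence fits the framework of §4.2: with $y_k=(F_k,F_{k+1})^T$ it becomes $y_{k+1}=My_k$ with $M=\begin{bmatrix}0&1\\1&1\end{bmatrix}$, whose eigenvalues are $(1\pm\sqrt5)/2$; this leads to Binet's closed form, sketched below:

```python
import math

# Eigenvalues of the companion matrix [[0, 1], [1, 1]] of F_{k+2} = F_{k+1} + F_k
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

def fib_closed(k):
    # Binet's formula: coefficients chosen so that F_0 = 0, F_1 = 1
    return round((phi**k - psi**k) / math.sqrt(5))

# Compare with the recurrence itself
F = [0, 1]
for k in range(2, 30):
    F.append(F[-1] + F[-2])
assert all(fib_closed(k) == F[k] for k in range(30))
```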