
Math 121, Fall 2009, Prof. Bhat
Homework 1 Solutions

1. By diagonalizing the matrix and then using the matrix exponential, solve
$$\frac{dx}{dt} = \begin{pmatrix} 3 & -2 \\ 0 & -6 \end{pmatrix} x(t)$$
for $x(t)$.

Solution: The matrix is upper-triangular, so the eigenvalues are the diagonal entries, $3$ and $-6$. By inspection we see that the eigenvector $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ corresponds to the eigenvalue $3$. As for the eigenvalue $-6$, to find the corresponding eigenvector we consider the null space of the matrix obtained by subtracting $-6$ from the diagonal of the original matrix:
$$\begin{pmatrix} 9 & -2 \\ 0 & 0 \end{pmatrix}.$$
The null space is spanned by the vector $\begin{pmatrix} 2/9 \\ 1 \end{pmatrix}$, and one may check that this is indeed an eigenvector of the original matrix corresponding to the eigenvalue $-6$. Putting the eigenvectors together column by column, we write down the diagonalization of the original matrix:
$$\begin{pmatrix} 3 & -2 \\ 0 & -6 \end{pmatrix} = \begin{pmatrix} 1 & 2/9 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 3 & 0 \\ 0 & -6 \end{pmatrix} \begin{pmatrix} 1 & 2/9 \\ 0 & 1 \end{pmatrix}^{-1}.$$
With the diagonalization in hand, we can write down the matrix exponential:
$$\exp\left( \begin{pmatrix} 3 & -2 \\ 0 & -6 \end{pmatrix} t \right) = \begin{pmatrix} 1 & 2/9 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} e^{3t} & 0 \\ 0 & e^{-6t} \end{pmatrix} \begin{pmatrix} 1 & 2/9 \\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} e^{3t} & \frac{2}{9}\left(e^{-6t} - e^{3t}\right) \\ 0 & e^{-6t} \end{pmatrix}.$$
This means that the solution of the ODE is
$$x(t) = \begin{pmatrix} e^{3t} & \frac{2}{9}\left(e^{-6t} - e^{3t}\right) \\ 0 & e^{-6t} \end{pmatrix} x(0).$$
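As a quick numerical sanity check (not part of the original solution), the closed-form exponential above can be compared against `scipy.linalg.expm`, which computes the matrix exponential by an independent algorithm:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -2.0],
              [0.0, -6.0]])
t = 0.7  # arbitrary test time

# Closed form derived above:
# [[e^{3t}, (2/9)(e^{-6t} - e^{3t})], [0, e^{-6t}]]
closed = np.array([
    [np.exp(3*t), (2/9)*(np.exp(-6*t) - np.exp(3*t))],
    [0.0,         np.exp(-6*t)],
])

assert np.allclose(expm(A*t), closed)
```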

2. Consider the matrix
$$A = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}.$$
(a) Find all eigenvalues of A. Solution: Again, the matrix is upper-triangular, so the
eigenvalues are given by the diagonal entries. In this case, there is a repeated
eigenvalue of 1.
(b) Find all eigenvectors of $A$. Do the eigenvectors of $A$ form a basis for $\mathbb{R}^2$? Solution: Subtracting $1$ from the diagonal of the matrix gives
$$\begin{pmatrix} 0 & a \\ 0 & 0 \end{pmatrix}.$$
Assuming $a \neq 0$, the null space of this matrix is spanned by $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Hence $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ is the only eigenvector (of the original matrix) corresponding to the eigenvalue $1$. Since there is only one eigenvector, it is impossible for the eigenvectors to form a basis for the two-dimensional vector space $\mathbb{R}^2$.
(c) Using any pencil-and-paper method of your choice, find the general solution of
$$\frac{dx}{dt} = Ax(t), \quad x(0) = q.$$
Solution: We write the components of the vector $x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}$. Then the equation $dx/dt = Ax(t)$ is equivalent to the system
$$\frac{d}{dt} x_1(t) = x_1(t) + a\, x_2(t)$$
$$\frac{d}{dt} x_2(t) = x_2(t).$$
The second equation has the solution $x_2(t) = e^t x_2(0)$, where $x_2(0)$ is an arbitrary initial condition. Plugging this solution into the first equation gives, after moving $x_1(t)$ to the left-hand side of the equation,
$$\frac{d}{dt} x_1(t) - x_1(t) = a\, x_2(0)\, e^t.$$
We now multiply through by the integrating factor $e^{-t}$. This yields
$$e^{-t} \frac{d}{dt} x_1(t) - e^{-t} x_1(t) = a\, x_2(0).$$
This enables us to use the product rule in reverse on the left-hand side and thereby write
$$\frac{d}{dt} \left[ e^{-t} x_1(t) \right] = a\, x_2(0).$$
Now integrating both sides with respect to $t$ from $t = 0$ to $t = s$ gives, by the fundamental theorem of calculus,
$$e^{-s} x_1(s) - x_1(0) = s\, a\, x_2(0).$$
We replace $s$ by $t$ and solve for $x_1(t)$, obtaining
$$x_1(t) = e^t x_1(0) + a t e^t x_2(0).$$
Putting everything together, we have found the solution:
$$x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} e^t & a t e^t \\ 0 & e^t \end{pmatrix} \begin{pmatrix} x_1(0) \\ x_2(0) \end{pmatrix} = \begin{pmatrix} e^t & a t e^t \\ 0 & e^t \end{pmatrix} q.$$
(d) Based on your answer to the previous part, what must $\exp(At)$ be? Solution: By inspection of the general solution found in the previous part, we deduce that
$$\exp(At) = \begin{pmatrix} e^t & a t e^t \\ 0 & e^t \end{pmatrix}.$$
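This formula can be confirmed symbolically, keeping $a$ as a free symbol. The sketch below (using sympy, not part of the original solution) checks the two properties that characterize $\exp(At)$: it satisfies $E'(t) = AE(t)$ and $E(0) = I$.

```python
import sympy as sp

a, t = sp.symbols('a t')
A = sp.Matrix([[1, a],
               [0, 1]])

# Claimed closed form from part (d): e^t * [[1, a t], [0, 1]]
E = sp.exp(t) * sp.Matrix([[1, a*t],
                           [0, 1]])

# exp(At) is the unique matrix solution of E'(t) = A E(t), E(0) = I
assert sp.simplify(E.diff(t) - A*E) == sp.zeros(2, 2)
assert E.subs(t, 0) == sp.eye(2)
```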

3. Using the matrix exponential and associated Duhamel formula, solve
$$\frac{dx}{dt} = \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix} x(t) + \begin{pmatrix} 2 \\ -2 \end{pmatrix} e^{-t}, \quad x(0) = \begin{pmatrix} a \\ b \end{pmatrix}.$$
Solution: Now, from the notes in "matexp.pdf", we know that Duhamel's formula says that the answer is
$$x(t) = \exp\left( \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix} t \right) \left[ x(0) + \int_0^t \exp\left( \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix} (-s) \right) \begin{pmatrix} 2 \\ -2 \end{pmatrix} e^{-s} \, ds \right].$$
So as usual we must first go through the business of computing the matrix exponential. After a bit of linear algebra, we find that the matrix has an eigenvector $\begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}$ corresponding to the eigenvalue $-4$ and also an eigenvector $\begin{pmatrix} 1/\sqrt{2} \\ 1 \end{pmatrix}$ corresponding to the eigenvalue $-1$. Hence the original matrix may be diagonalized:
$$\begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix} = \begin{pmatrix} -\sqrt{2} & 1/\sqrt{2} \\ 1 & 1 \end{pmatrix} \begin{pmatrix} -4 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} -\sqrt{2} & 1/\sqrt{2} \\ 1 & 1 \end{pmatrix}^{-1}.$$
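The eigenpairs and the diagonalization can be double-checked numerically (a sketch assuming numpy is available; not part of the original solution):

```python
import numpy as np

s2 = np.sqrt(2.0)
A = np.array([[-3.0, s2],
              [s2, -2.0]])

# Eigenvectors as columns, ordered to match the eigenvalues -4, -1
V = np.array([[-s2, 1.0/s2],
              [1.0, 1.0]])
D = np.diag([-4.0, -1.0])

# A V = V D: each column of V is an eigenvector of A
assert np.allclose(A @ V, V @ D)
# hence A = V D V^{-1}
assert np.allclose(A, V @ D @ np.linalg.inv(V))
```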
Therefore,
$$\exp\left( \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix} t \right) = \begin{pmatrix} -\sqrt{2} & 1/\sqrt{2} \\ 1 & 1 \end{pmatrix} \begin{pmatrix} e^{-4t} & 0 \\ 0 & e^{-t} \end{pmatrix} \begin{pmatrix} -\sqrt{2} & 1/\sqrt{2} \\ 1 & 1 \end{pmatrix}^{-1} = \frac{1}{3} \begin{pmatrix} 2e^{-4t} + e^{-t} & \sqrt{2}\left(e^{-t} - e^{-4t}\right) \\ \sqrt{2}\left(e^{-t} - e^{-4t}\right) & e^{-4t} + 2e^{-t} \end{pmatrix}.$$
(The exponential is symmetric, as it must be, since the original matrix is symmetric.) Changing $t$ to $(-s)$ everywhere, we have
$$\exp\left( \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix} (-s) \right) = \frac{1}{3} \begin{pmatrix} 2e^{4s} + e^{s} & \sqrt{2}\left(e^{s} - e^{4s}\right) \\ \sqrt{2}\left(e^{s} - e^{4s}\right) & e^{4s} + 2e^{s} \end{pmatrix},$$
so, after a bit of algebra and a bit of calculus, we obtain
$$\int_0^t \exp\left( \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix} (-s) \right) \begin{pmatrix} 2 \\ -2 \end{pmatrix} e^{-s} \, ds = \frac{2}{9} \begin{pmatrix} (2+\sqrt{2})\left(e^{3t}-1\right) + 3(1-\sqrt{2})\, t \\ -(1+\sqrt{2})\left(e^{3t}-1\right) + 3(\sqrt{2}-2)\, t \end{pmatrix}.$$
Putting everything together, we arrive at the solution
$$x(t) = \frac{1}{3} \begin{pmatrix} 2e^{-4t} + e^{-t} & \sqrt{2}\left(e^{-t} - e^{-4t}\right) \\ \sqrt{2}\left(e^{-t} - e^{-4t}\right) & e^{-4t} + 2e^{-t} \end{pmatrix} \left[ x(0) + \frac{2}{9} \begin{pmatrix} (2+\sqrt{2})\left(e^{3t}-1\right) + 3(1-\sqrt{2})\, t \\ -(1+\sqrt{2})\left(e^{3t}-1\right) + 3(\sqrt{2}-2)\, t \end{pmatrix} \right].$$
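Because the algebra here is easy to get wrong, it is worth checking the closed-form answer against a direct numerical integration of the ODE. The sketch below (not part of the original solution) picks stand-in values for the initial condition $(a, b)$ and compares the Duhamel formula against `scipy.integrate.solve_ivp`:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

s2 = np.sqrt(2.0)
A = np.array([[-3.0, s2],
              [s2, -2.0]])
f = np.array([2.0, -2.0])
q = np.array([1.5, -0.5])   # stand-in values for the initial condition (a, b)

t = 1.0
# Closed form of the Duhamel integral  int_0^t exp(-As) f e^{-s} ds
I = (2.0/9.0) * np.array([
    (2 + s2)*(np.exp(3*t) - 1) + 3*(1 - s2)*t,
    -(1 + s2)*(np.exp(3*t) - 1) + 3*(s2 - 2)*t,
])
x_formula = expm(A*t) @ (q + I)

# Independent check: integrate dx/dt = A x + f e^{-t} numerically
sol = solve_ivp(lambda s, x: A @ x + f*np.exp(-s), (0.0, t), q,
                rtol=1e-10, atol=1e-12)
assert np.allclose(x_formula, sol.y[:, -1], atol=1e-6)
```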
4. Suppose $M$ is an $n \times n$ matrix with $n$ distinct eigenvalues $\lambda_1, \ldots, \lambda_n$. Suppose that $\lambda_j < 0$ for all $j$, i.e., all the eigenvalues are negative. Let $x(t)$ be the solution of
$$\frac{dx}{dt} = M x(t), \quad x(0) = q.$$
Prove that no matter what we choose for the initial condition $q$, it is always true that
$$\lim_{t \to \infty} x(t) = 0.$$

Solution: Since the $n$ eigenvalues of $M$ are distinct, eigenvectors corresponding to distinct eigenvalues form a linearly independent set. In other words, there exist $n$ linearly independent eigenvectors of $M$, which is enough vectors to form a basis for the $n$-dimensional space $\mathbb{R}^n$. This means that $M$ is diagonalizable, i.e.,
$$M = V D V^{-1},$$
where $V$ is a matrix formed by concatenating all $n$ eigenvectors (written as column vectors), and $D$ is a diagonal matrix containing the eigenvalues:
$$D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}.$$

Hence the solution $x(t)$ must take the form
$$x(t) = V \begin{pmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{pmatrix} V^{-1} q.$$
Since each $\lambda_i$ is negative, we see that $\lim_{t \to \infty} e^{\lambda_i t} = 0$. This is enough to force
$$\lim_{t \to \infty} x(t) = 0$$
regardless of $q$, finishing the proof.
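The decay argument can be illustrated numerically. The sketch below (not part of the original solution) builds a matrix with prescribed distinct negative eigenvalues by conjugating a diagonal matrix with a random invertible matrix, then watches $\|x(t)\| = \|\exp(Mt)\, q\|$ shrink:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Distinct negative eigenvalues, hidden behind a random change of basis
eigs = np.array([-1.0, -2.5, -0.3])
V = rng.standard_normal((3, 3))          # almost surely invertible
M = V @ np.diag(eigs) @ np.linalg.inv(V)

q = rng.standard_normal(3)               # arbitrary initial condition

# ||exp(Mt) q|| decays toward 0 as t grows, at a rate set by the
# slowest eigenvalue (-0.3 here)
norms = [np.linalg.norm(expm(M*t) @ q) for t in (0.0, 10.0, 100.0)]
assert norms[-1] < norms[0]
assert norms[-1] < 1e-8
```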

5. Consider the system
$$\frac{dx}{dt} = \begin{pmatrix} \dfrac{2}{t} & -\dfrac{1}{t} \\ \dfrac{3}{t} & -\dfrac{2}{t} \end{pmatrix} x(t) + \begin{pmatrix} 1 - t^2 \\ 2t \end{pmatrix}.$$
First show that
$$x_h(t) = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} t + c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} t^{-1}$$
solves the homogeneous system. Then solve the nonhomogeneous system. Throughout this problem, assume $t > 0$.
   
Solution: Multiplying the original matrix by $c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} t$ gives $c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, which is the same as the derivative of $c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} t$ with respect to $t$.
Similarly, multiplying the original matrix by $c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} t^{-1}$ gives $c_2 \begin{pmatrix} -1 \\ -3 \end{pmatrix} t^{-2}$, which is the same as the derivative of $c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} t^{-1}$ with respect to $t$.
Putting the two facts together, we see that $x_h(t)$, as defined in the statement of the problem, is the general solution of the homogeneous system.
Using the homogeneous solution vectors, let us define the matrix
$$\Psi(t) = \begin{pmatrix} t & t^{-1} \\ t & 3t^{-1} \end{pmatrix}.$$
The arguments we made above immediately confirm that
$$\frac{d}{dt} \Psi(t) = \begin{pmatrix} \dfrac{2}{t} & -\dfrac{1}{t} \\ \dfrac{3}{t} & -\dfrac{2}{t} \end{pmatrix} \Psi(t),$$
i.e., $\Psi(t)$ is a homogeneous solution matrix.
Now suppose an initial condition $x(\varepsilon) = q$ is given to us, for some $\varepsilon > 0$. (Note that we cannot use $t = 0$ as the point in time where the initial condition is given, because the matrix in the original differential equation blows up at $t = 0$. Moreover, the statement of the problem indicates that $t > 0$ always.) With this initial condition at time $t = \varepsilon > 0$, we can define
$$x_h(t) = \Psi(t) \left[ \Psi(\varepsilon) \right]^{-1} q.$$
As you may check, this particular $x_h(t)$ solves the homogeneous problem and also satisfies $x_h(\varepsilon) = q$.
To solve the inhomogeneous system, we follow the usual prescription of writing $x(t) = \Psi(t) u(t)$. Differentiating both sides with respect to $t$ gives
$$\frac{dx}{dt} = \frac{d\Psi(t)}{dt} u(t) + \Psi(t) \frac{du}{dt}.$$
Using the previous equation for the time-derivative of $\Psi(t)$, we get
$$\frac{dx}{dt} = \begin{pmatrix} \dfrac{2}{t} & -\dfrac{1}{t} \\ \dfrac{3}{t} & -\dfrac{2}{t} \end{pmatrix} \Psi(t) u(t) + \Psi(t) \frac{du}{dt} = \begin{pmatrix} \dfrac{2}{t} & -\dfrac{1}{t} \\ \dfrac{3}{t} & -\dfrac{2}{t} \end{pmatrix} x(t) + \Psi(t) \frac{du}{dt}.$$
We compare this last equation against the original inhomogeneous differential equation and conclude that
$$\Psi(t) \frac{du}{dt} = \begin{pmatrix} 1 - t^2 \\ 2t \end{pmatrix}.$$
We multiply through by $\left[ \Psi(t) \right]^{-1}$ and integrate both sides from $t = \varepsilon > 0$ to $t = s$, resulting in
$$u(s) - u(\varepsilon) = \int_{t=\varepsilon}^{t=s} \left[ \Psi(t) \right]^{-1} \begin{pmatrix} 1 - t^2 \\ 2t \end{pmatrix} dt.$$
This implies that
$$u(s) = u(\varepsilon) + \begin{pmatrix} -s - \frac{3}{4} s^2 + \varepsilon + \frac{3}{4} \varepsilon^2 + \frac{3}{2} \log(s/\varepsilon) \\ -\frac{s^2}{4} + \frac{s^3}{3} + \frac{s^4}{8} + \frac{\varepsilon^2}{4} - \frac{\varepsilon^3}{3} - \frac{\varepsilon^4}{8} \end{pmatrix}.$$
This in turn implies
$$u(t) = \left[ \Psi(\varepsilon) \right]^{-1} x(\varepsilon) + \begin{pmatrix} -t - \frac{3}{4} t^2 + \varepsilon + \frac{3}{4} \varepsilon^2 + \frac{3}{2} \log(t/\varepsilon) \\ -\frac{t^2}{4} + \frac{t^3}{3} + \frac{t^4}{8} + \frac{\varepsilon^2}{4} - \frac{\varepsilon^3}{3} - \frac{\varepsilon^4}{8} \end{pmatrix}.$$
Hence we obtain the general solution
$$x(t) = \Psi(t) \left[ \left[ \Psi(\varepsilon) \right]^{-1} x(\varepsilon) + \begin{pmatrix} -t - \frac{3}{4} t^2 + \varepsilon + \frac{3}{4} \varepsilon^2 + \frac{3}{2} \log(t/\varepsilon) \\ -\frac{t^2}{4} + \frac{t^3}{3} + \frac{t^4}{8} + \frac{\varepsilon^2}{4} - \frac{\varepsilon^3}{3} - \frac{\varepsilon^4}{8} \end{pmatrix} \right].$$
