Differential Equations

In this supplement, we use the methods we have developed so far to study differential equations.
1 Existence and Uniqueness of Solutions
Definition 1 A differential equation is an equation of the form $y'(t) = F(y(t), t)$ where $U$ is an open subset of $\mathbf{R}^n \times \mathbf{R}$ and $F : U \to \mathbf{R}^n$. An initial value problem is a differential equation combined with an initial condition $y(t_0) = y_0$ with $(y_0, t_0) \in U$. A solution of the initial value problem is a differentiable function $y : (a, b) \to \mathbf{R}^n$ such that $t_0 \in (a, b)$, $y(t_0) = y_0$ and, for all $t \in (a, b)$, $\frac{dy}{dt} = F(y(t), t)$. The general solution of the differential equation is the family of all solutions for all initial values $(y_0, t_0) \in U$.
Theorem 2 Consider the initial value problem
$$y'(t) = F(y(t), t), \quad y(t_0) = y_0$$
Suppose there exist $r, \hat\delta > 0$ such that $F$ is continuous, and Lipschitz in $y$ with constant $K$, on the region $R = \{(y, t) : |y - y_0| \le r,\ |t - t_0| \le \hat\delta\}$. Let
$$\delta = \min\left\{\hat\delta, \frac{1}{2K}, \frac{r}{M}\right\}, \quad \text{where } M = \sup\{|F(y, t)| : (y, t) \in R\}$$
Then the initial value problem has a unique solution on $[t_0 - \delta, t_0 + \delta]$.

Proof: Let $C = C([t_0 - \delta, t_0 + \delta])$ denote the set of continuous functions from $[t_0 - \delta, t_0 + \delta]$ to $\mathbf{R}^n$, with the sup norm
$$\|f\|_\infty = \sup\{|f(t)| : t \in [t_0 - \delta, t_0 + \delta]\}$$
Let
$$S = \{z \in C([t_0 - \delta, t_0 + \delta]) : (z(s), s) \in R \text{ for all } s \in [t_0 - \delta, t_0 + \delta]\}$$
$S$ is a closed subset of the complete metric space $C$, so $S$ is a complete metric space. Consider the function $I : S \to C$ defined by
$$I(z)(t) = y_0 + \int_{t_0}^t F(z(s), s)\,ds$$
$I(z)$ is defined and continuous because $F$ is bounded and continuous on $R$. Observe that if $(z(s), s) \in R$ for all $s \in [t_0 - \delta, t_0 + \delta]$, then
$$|I(z)(t) - y_0| = \left|\int_{t_0}^t F(z(s), s)\,ds\right| \le |t - t_0| \max\{|F(y, s)| : (y, s) \in R\} \le \delta M \le r$$
so $(I(z)(t), t) \in R$ for all $t \in [t_0 - \delta, t_0 + \delta]$. Thus, $I : S \to S$.
Given two functions $x, z \in S$ and $t \in [t_0 - \delta, t_0 + \delta]$,
$$|I(z)(t) - I(x)(t)| = \left|y_0 + \int_{t_0}^t F(z(s), s)\,ds - y_0 - \int_{t_0}^t F(x(s), s)\,ds\right|$$
$$= \left|\int_{t_0}^t \left(F(z(s), s) - F(x(s), s)\right)ds\right|$$
$$\le |t - t_0| \sup\{|F(z(s), s) - F(x(s), s)| : s \in [t_0 - \delta, t_0 + \delta]\}$$
$$\le \delta K \sup\{|z(s) - x(s)| : s \in [t_0 - \delta, t_0 + \delta]\} \le \frac{\|z - x\|_\infty}{2}$$
Therefore, $\|I(z) - I(x)\|_\infty \le \frac{\|z - x\|_\infty}{2}$, so $I$ is a contraction. Since $S$ is a complete metric space, $I$ has a unique fixed point $y \in S$. Therefore, for all $t \in [t_0 - \delta, t_0 + \delta]$, we have
$$y(t) = y_0 + \int_{t_0}^t F(y(s), s)\,ds$$
$F$ is continuous, so the Fundamental Theorem of Calculus implies that
$$y'(t) = F(y(t), t)$$
for all $t \in (t_0 - \delta, t_0 + \delta)$. Since we also have
$$y(t_0) = y_0 + \int_{t_0}^{t_0} F(y(s), s)\,ds = y_0$$
$y$ (restricted to $(t_0 - \delta, t_0 + \delta)$) is a solution of the initial value problem.
On the other hand, suppose that $\tilde y$ is any solution of the initial value problem on $(t_0 - \delta, t_0 + \delta)$. It is easy to check that $(\tilde y(s), s) \in R$ for all $s \in (t_0 - \delta, t_0 + \delta)$, so we have $|F(\tilde y(s), s)| \le M$; this implies that $\tilde y$ has an extension to a continuous function (still denoted $\tilde y$) in $S$. Since $\tilde y$ is a solution of the initial value problem, the Fundamental Theorem of Calculus implies that $I(\tilde y) = \tilde y$. Since $y$ is the unique fixed point of $I$, $\tilde y = y$.
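The fixed-point construction in the proof can be carried out numerically. The sketch below is illustrative and not part of the notes: it iterates the map $I$ on a uniform grid, approximating the integral with the trapezoid rule, for the test problem $y' = y$, $y(0) = 1$, whose exact solution is $e^t$; the grid size and number of iterations are arbitrary choices.

```python
import math

def picard_step(F, y0, ts, z):
    """One application of I(z)(t) = y0 + integral from t0 to t of F(z(s), s) ds,
    with the integral approximated by the trapezoid rule on the grid ts."""
    out = [y0]
    acc = 0.0
    for i in range(1, len(ts)):
        h = ts[i] - ts[i - 1]
        acc += 0.5 * h * (F(z[i - 1], ts[i - 1]) + F(z[i], ts[i]))
        out.append(y0 + acc)
    return out

# Test problem: y' = y, y(0) = 1, exact solution e^t, on [0, 0.5].
F = lambda y, t: y
n = 200
ts = [0.5 * i / n for i in range(n + 1)]
z = [1.0] * len(ts)            # initial guess: the constant function 1
for _ in range(30):            # iterate the contraction toward the fixed point
    z = picard_step(F, 1.0, ts, z)

err = max(abs(z[i] - math.exp(ts[i])) for i in range(len(ts)))
```

After a few iterations the grid function is essentially the trapezoid-rule solution of the integral equation, which is within $O(h^2)$ of $e^t$.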
Example 3 Consider the initial value problem
$$y'(t) = 1 + y^2(t), \quad y(0) = 0$$
Here, we have $F(y, t) = 1 + y^2$, which is Lipschitz in $y$ over sets $U = V \times \mathbf{R}$, provided that $V$ is bounded, but not over all of $\mathbf{R} \times \mathbf{R}$. The theorem tells us that the initial value problem has a unique solution over some interval of times $(a, b)$, with $0 \in (a, b)$. We claim the unique solution is $y(t) = \tan t$. To see this, note that
$$y'(t) = \frac{d}{dt} \tan t = \frac{d}{dt} \frac{\sin t}{\cos t} = \frac{\cos t \cos t - \sin t(-\sin t)}{\cos^2 t} = \frac{\cos^2 t + \sin^2 t}{\cos^2 t} = 1 + \frac{\sin^2 t}{\cos^2 t} = 1 + \tan^2 t = 1 + (y(t))^2$$
$$y(0) = \tan 0 = 0$$
Notice that $y(t)$ is defined for $t \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$, but
$$\lim_{t \to -\frac{\pi}{2}^+} y(t) = -\infty \quad \text{and} \quad \lim_{t \to \frac{\pi}{2}^-} y(t) = \infty$$
Thus, the solution of the initial value problem cannot be extended beyond the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$.
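The blow-up is easy to see numerically. A sketch (not part of the original text; forward Euler with an illustrative step size):

```python
import math

# Forward Euler for y' = 1 + y^2, y(0) = 0; exact solution y(t) = tan t.
# We stop at t = 1.5, just short of pi/2 ~ 1.5708, where tan t blows up.
n = 15000
h = 1.5 / n
y = 0.0
for _ in range(n):
    y += h * (1.0 + y * y)

# y now approximates tan(1.5) ~ 14.1; the solution is already very steep
```

Pushing the integration past $\pi/2$ produces astronomically large values, reflecting the fact that the solution cannot be extended.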
Next, consider the initial value problem
$$y'(t) = 2\sqrt{|y(t)|}, \quad y(0) = 0$$
Here $F(y) = 2\sqrt{|y|}$, so
$$|F(y) - F(0)| = 2\sqrt{|y|} = \frac{2}{\sqrt{|y|}}\,|y| = \frac{2}{\sqrt{|y|}}\,|y - 0|$$
which is not a bounded multiple of $|y - 0|$: $F$ is not Lipschitz in $y$ near $y = 0$. Given any $\alpha \ge 0$, let
$$y_\alpha(t) = \begin{cases} 0 & \text{if } t \le \alpha \\ (t - \alpha)^2 & \text{if } t > \alpha \end{cases}$$
We claim that $y_\alpha$ is a solution of the initial value problem. For $t < \alpha$, $y_\alpha'(t) = 0 = 2\sqrt{|0|} = 2\sqrt{|y_\alpha(t)|}$. For $t > \alpha$, $y_\alpha'(t) = 2(t - \alpha) = 2\sqrt{(t - \alpha)^2} = 2\sqrt{|y_\alpha(t)|}$. For $t = \alpha$,
$$\lim_{h \to 0^+} \frac{y_\alpha(\alpha + h) - y_\alpha(\alpha)}{h} = \lim_{h \to 0^+} \frac{h^2}{h} = 0$$
$$\lim_{h \to 0^-} \frac{y_\alpha(\alpha + h) - y_\alpha(\alpha)}{h} = \lim_{h \to 0^-} \frac{0}{h} = 0$$
so $y_\alpha'(\alpha) = 0 = 2\sqrt{|y_\alpha(\alpha)|}$. Finally, $y_\alpha(0) = 0$, so $y_\alpha$ is a solution of the initial value problem for every $\alpha \ge 0$: when the Lipschitz condition fails, the solution of an initial value problem need not be unique.
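The whole family $y_\alpha$ can be checked numerically. A sketch (the sample values of $\alpha$ and $t$ are arbitrary; away from the kink at $t = \alpha$ a central difference recovers the derivative):

```python
import math

def y_alpha(alpha, t):
    """The family of solutions from the example: 0 up to time alpha,
    then (t - alpha)^2 afterwards."""
    return 0.0 if t <= alpha else (t - alpha) ** 2

def deriv(f, t, h=1e-6):
    # central-difference approximation of f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

# Each member satisfies y' = 2*sqrt(|y|) with y(0) = 0, so the
# initial value problem has infinitely many solutions.
checks = []
for alpha in (0.5, 1.0, 2.0):
    f = lambda t: y_alpha(alpha, t)
    for t in (0.1, alpha + 0.5, alpha + 1.0):
        checks.append(abs(deriv(f, t) - 2.0 * math.sqrt(abs(f(t)))))
max_err = max(checks)
```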
2 Autonomous Differential Equations

An autonomous differential equation is a differential equation of the form $y'(t) = F(y(t))$, where $F : \mathbf{R}^n \to \mathbf{R}^n$ depends on $t$ only through the value of $y(t)$. A stationary point of an autonomous differential equation is a point $y_s \in \mathbf{R}^n$ such that
$$F(y_s) = 0$$
We can study the qualitative properties of solutions of autonomous differential equations by looking for stationary points. Note that the constant function
$$y(t) = y_s$$
is a solution of the initial value problem
$$y' = F(y), \quad y(t_0) = y_s$$
If $F$ satisfies a Lipschitz condition, the constant function is the unique solution of the initial value problem.
If $F$ is $C^2$, then from Taylor's Theorem, we know that near a stationary point $y_s$, we have
$$F(y_s + h) = F(y_s) + DF(y_s)h + O\left(|h|^2\right) = DF(y_s)h + O\left(|h|^2\right)$$
Thus, when we are sufficiently close to the stationary point, the solutions of the differential equation are closely approximated by the solutions of the linear differential equation
$$y' = (y - y_s)' = DF(y_s)(y - y_s)$$
3 Complex Exponentials
The exponential function $e^x$ (for $x \in \mathbf{R}$ or $x \in \mathbf{C}$) is given by the Taylor series
$$e^x = \sum_{k=0}^\infty \frac{x^k}{k!}$$
For $x, y \in \mathbf{C}$, we have
$$e^{x+y} = e^x e^y$$
If $x \in \mathbf{C}$, $x = a + ib$ for $a, b \in \mathbf{R}$, so
$$e^x = e^{a+ib} = e^a e^{ib} = e^a \sum_{k=0}^\infty \frac{(ib)^k}{k!} = e^a \left(\sum_{k=0}^\infty \frac{(ib)^{2k}}{(2k)!} + \sum_{k=0}^\infty \frac{(ib)^{2k+1}}{(2k+1)!}\right)$$
$$= e^a \left(\sum_{k=0}^\infty \frac{i^{2k} b^{2k}}{(2k)!} + i \sum_{k=0}^\infty \frac{i^{2k} b^{2k+1}}{(2k+1)!}\right) = e^a \left(\sum_{k=0}^\infty \frac{(-1)^k b^{2k}}{(2k)!} + i \sum_{k=0}^\infty \frac{(-1)^k b^{2k+1}}{(2k+1)!}\right)$$
$$= e^a (\cos b + i \sin b)$$
Now suppose that $t \in \mathbf{R}$, so
$$e^{tx} = e^{ta + itb} = e^{ta}(\cos tb + i \sin tb)$$
If $a < 0$, then $e^{tx} \to 0$ as $t \to \infty$.
If $a > 0$, then $|e^{tx}| \to \infty$ as $t \to \infty$.
If $a = 0$, then $|e^{tx}| = 1$ for all $t \in \mathbf{R}$.
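These facts are easy to confirm with Python's cmath module (the values of $a$, $b$, and $t$ below are illustrative):

```python
import cmath, math

a, b = -0.3, 2.0
x = complex(a, b)

# Euler's formula: e^{a+ib} = e^a (cos b + i sin b)
lhs = cmath.exp(x)
rhs = math.exp(a) * complex(math.cos(b), math.sin(b))
euler_err = abs(lhs - rhs)

# The three asymptotic cases follow from |e^{tx}| = e^{ta}:
t = 10.0
decay = abs(cmath.exp(t * complex(-0.3, 2.0)))   # a < 0: magnitude tends to 0
unit = abs(cmath.exp(t * complex(0.0, 2.0)))     # a = 0: magnitude stays 1
```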
4 Homogeneous Linear Differential Equations with Constant Coefficients

Let $M$ be a constant $n \times n$ real matrix. Linear differential equations of the type
$$y' = (y - y_s)' = M(y - y_s) \quad (3)$$
are called homogeneous linear differential equations with constant coefficients.¹ Differential Equation (3) has a complete solution in closed form. Note that the matrix representing $DF(y_s)$ need not be symmetric, so it need not be diagonalizable. However, if the $n \times n$ matrix $DF(y_s)$ is diagonalizable over $\mathbf{C}$, the complete solution takes the following particularly simple form:
Theorem 7 Consider the linear differential equation
$$y' = (y - y_s)' = M(y - y_s)$$
where $M$ is a real $n \times n$ matrix. Suppose that $M$ can be diagonalized over the complex field $\mathbf{C}$. Let $U$ be the standard basis of $\mathbf{R}^n$ and $V = \{v_1, \ldots, v_n\}$ be a basis of (complex) eigenvectors corresponding to the eigenvalues $\lambda_1, \ldots, \lambda_n \in \mathbf{C}$. Then the solution of the initial value problem is given by
$$y(t) = y_s + \mathrm{Mtx}_{U,V}(\mathrm{id}) \begin{pmatrix} e^{\lambda_1(t-t_0)} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2(t-t_0)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n(t-t_0)} \end{pmatrix} \mathrm{Mtx}_{V,U}(\mathrm{id})\,(y(t_0) - y_s) \quad (4)$$

¹ Later, we shall study equations of the type $y' = (y - y_s)' = M(y - y_s) + H(t)$, where $H : \mathbf{R} \to \mathbf{R}^n$; such equations are called inhomogeneous; homogeneous equations are ones in which $H$ is identically zero.
and the general complex solution is obtained by allowing $y(t_0)$ to vary over $\mathbf{C}^n$; it has $n$ complex degrees of freedom. The general real solution is obtained by allowing $y(t_0)$ to vary over $\mathbf{R}^n$; it has $n$ real degrees of freedom. Every real solution is a linear combination of the real and imaginary parts of a complex solution. In particular,

1. If the real part of each eigenvalue is less than zero, all solutions converge to $y_s$
2. If the real part of each eigenvalue is greater than zero, all solutions diverge from $y_s$ and tend to infinity
3. If the real parts of some eigenvalues are less than zero and the real parts of other eigenvalues are greater than zero, solutions follow hyperbolic paths
4. If the real parts of all eigenvalues are zero, all solutions follow closed cycles around $y_s$
Remark 8 If one or more of the eigenvalues are complex, each of the three matrices in Equation (4) will contain complex entries, but the product of the three matrices is real. Thus, if the initial condition $y_0$ is real, Equation (4) gives us a real solution; indeed, it gives us the unique solution of the initial value problem.
Remark 9 Given a fixed time $t_0$, the general real solution is obtained by varying the initial values of $y(t_0)$ over $\mathbf{R}^n$, which provides $n$ real degrees of freedom. You might think that varying $t_0$ provides one additional degree of freedom, but it doesn't. Given any solution satisfying the initial condition $y(t_0) = y_0$, the solution is defined on some interval $(t_0 - \delta, t_0 + \delta)$; given $t_1 \in (t_0 - \delta, t_0 + \delta)$, let $y_1 = y(t_1)$; then the solution with initial condition $y(t_1) = y_1$ is the same as the solution with initial condition $y(t_0) = y_0$. The same holds true for the general complex solution.
Proof: We rewrite the differential equation in terms of a new variable
$$z = \mathrm{Mtx}_{V,U}(\mathrm{id})\,y$$
which is the representation of the solution with respect to the basis $V$ of eigenvectors. Let $z_s = \mathrm{Mtx}_{V,U}(\mathrm{id})\,y_s$. Then we have
$$z - z_s = \mathrm{Mtx}_{V,U}(\mathrm{id})\,(y - y_s)$$
$$(z - z_s)' = z' = \mathrm{Mtx}_{V,U}(\mathrm{id})\,y' = \mathrm{Mtx}_{V,U}(\mathrm{id})\,M\,(y - y_s) = \mathrm{Mtx}_{V,U}(\mathrm{id})\,M\,\mathrm{Mtx}_{U,V}(\mathrm{id})\,(z - z_s) = B(z - z_s)$$
where
$$B = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$
Thus, the $i$th component of $(z(t) - z_s)$ satisfies the differential equation
$$(z(t) - z_s)_i' = \lambda_i (z(t) - z_s)_i$$
so
$$(z(t) - z_s)_i = e^{\lambda_i(t-t_0)} (z(t_0) - z_s)_i$$
so
$$z(t) - z_s = \begin{pmatrix} e^{\lambda_1(t-t_0)} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2(t-t_0)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n(t-t_0)} \end{pmatrix} (z(t_0) - z_s)$$
$$y(t) - y_s = \mathrm{Mtx}_{U,V}(\mathrm{id})\,(z(t) - z_s) = \mathrm{Mtx}_{U,V}(\mathrm{id}) \begin{pmatrix} e^{\lambda_1(t-t_0)} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{\lambda_n(t-t_0)} \end{pmatrix} (z(t_0) - z_s)$$
$$= \mathrm{Mtx}_{U,V}(\mathrm{id}) \begin{pmatrix} e^{\lambda_1(t-t_0)} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{\lambda_n(t-t_0)} \end{pmatrix} \mathrm{Mtx}_{V,U}(\mathrm{id})\,(y(t_0) - y_s)$$
which shows that the solution to the initial value problem is as claimed; the general complex solution is found by varying $y(t_0)$ over $\mathbf{C}^n$. Given a differentiable function $y : [t_0, \infty) \to \mathbf{C}^n$, write $y(t) = w(t) + ix(t)$ as the sum of its real and imaginary parts. Then $y'(t) = w'(t) + ix'(t)$. Since the matrix $M$ is real, $y$ is a complex solution if and only if $w$ and $x$ are real solutions. A solution is real if and only if $y(t_0) \in \mathbf{R}^n$, so the general real solution has $n$ real degrees of freedom; the solution of the initial value problem is determined from the $n$ real initial conditions $y(t_0)$. Statements (1)-(4) about the asymptotic properties of the solutions follow from the properties of complex exponentials in Section 3.
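Theorem 7 can be sketched numerically for a small system. The example below is illustrative and not from the notes: it takes the $2 \times 2$ matrix $M = [[0, 1], [-1, 0]]$, whose eigenvalues $\pm i$ come from the quadratic formula, builds the three-matrix product of Equation (4) by hand, and checks that the result is real and matches the exact solution $(\cos t, -\sin t)$ of $y' = My$, $y(0) = (1, 0)$.

```python
import cmath, math

# Illustrative system: y' = M y, M = [[0, 1], [-1, 0]], y(0) = (1, 0).
tr, det = 0.0, 1.0                               # trace and determinant of M
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2    # eigenvalues +i and -i

# For this M, an eigenvector for lam is (1, lam): the first row of
# (M - lam*I)v = 0 reads -lam*v1 + v2 = 0.  P = [[1, 1], [lam1, lam2]].
def solve(t, y0):
    detP = lam2 - lam1
    c1 = (lam2 * y0[0] - y0[1]) / detP           # z(0) = P^{-1} y0
    c2 = (-lam1 * y0[0] + y0[1]) / detP
    e1 = cmath.exp(lam1 * t) * c1                # propagate in the eigenbasis
    e2 = cmath.exp(lam2 * t) * c2
    return (e1 + e2, lam1 * e1 + lam2 * e2)      # map back: y = P (e1, e2)

y = solve(1.0, (1.0, 0.0))
err = max(abs(y[0] - math.cos(1.0)), abs(y[1] + math.sin(1.0)))
```

Although the intermediate matrices are complex, the final answer has (numerically) zero imaginary part, as Remark 8 predicts.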
5 The Form of Real Solutions

When the eigenvalues of a real matrix $M$ are complex, the corresponding eigenvectors must be complex also. Consequently, even for a real solution $y$ of the undiagonalized equation, the solution $z$ of the diagonalized equation, which is the representation of $y$ in terms of the basis $V$ of eigenvectors, will be complex. However, as noted in the previous section, the product of the three matrices in Equation (4) is real, and hence the equation gives us a real solution, indeed the unique solution of the initial value problem. The general solution can be obtained by varying the initial condition $y_0$.

We can determine the form of the real solutions once we know the eigenvalues, and in one important case, this provides us with an easier way to find the general solution of the differential equation, or the unique solution of the initial value problem.
Theorem 10 Consider the differential equation
$$y' = (y - y_s)' = M(y - y_s)$$
Suppose that the matrix $M$ can be diagonalized over $\mathbf{C}$. Let the eigenvalues of $M$ be
$$a_1 + ib_1, a_1 - ib_1, \ldots, a_m + ib_m, a_m - ib_m, a_{m+1}, \ldots, a_{n-m}$$
Then for each fixed $i = 1, \ldots, n$, every real solution is of the form
$$(y(t) - y_s)_i = \sum_{j=1}^m e^{a_j(t-t_0)} \left(C_{ij} \cos b_j(t - t_0) + D_{ij} \sin b_j(t - t_0)\right) + \sum_{j=m+1}^{n-m} C_{ij} e^{a_j(t-t_0)}$$
The $n^2$ parameters
$$\{C_{ij} : i = 1, \ldots, n;\ j = 1, \ldots, n - m\} \cup \{D_{ij} : i = 1, \ldots, n;\ j = 1, \ldots, m\}$$
have $n$ real degrees of freedom. The parameters are uniquely determined from the $n$ real initial conditions of an Initial Value Problem.
Proof: Rewrite the expression for the solution $y$ as
$$(y(t) - y_s)_i = \sum_{j=1}^n \gamma_{ij} e^{\lambda_j(t-t_0)}$$
Recall that the non-real eigenvalues occur in conjugate pairs, so suppose that
$$\lambda_j = a + ib, \quad \lambda_k = a - ib$$
so the expression for $(y(t) - y_s)_i$ contains the pair of terms
$$\gamma_{ij} e^{\lambda_j(t-t_0)} + \gamma_{ik} e^{\lambda_k(t-t_0)} = \gamma_{ij} e^{a(t-t_0)}\left(\cos b(t-t_0) + i \sin b(t-t_0)\right) + \gamma_{ik} e^{a(t-t_0)}\left(\cos b(t-t_0) - i \sin b(t-t_0)\right)$$
$$= e^{a(t-t_0)}\left((\gamma_{ij} + \gamma_{ik}) \cos b(t-t_0) + i(\gamma_{ij} - \gamma_{ik}) \sin b(t-t_0)\right)$$
$$= e^{a(t-t_0)}\left(C_{ij} \cos b(t-t_0) + D_{ij} \sin b(t-t_0)\right)$$
Since this must be real for all $t$, we must have
$$C_{ij} = \gamma_{ij} + \gamma_{ik} \in \mathbf{R} \quad \text{and} \quad D_{ij} = i(\gamma_{ij} - \gamma_{ik}) \in \mathbf{R}$$
so $\gamma_{ij}$ and $\gamma_{ik}$ are complex conjugates; this can also be shown directly from the matrix formula for $y$ in terms of $z$.
Thus, if the eigenvalues $\lambda_1, \ldots, \lambda_n$ are
$$a_1 + ib_1, a_1 - ib_1, a_2 + ib_2, a_2 - ib_2, \ldots, a_m + ib_m, a_m - ib_m, a_{m+1}, \ldots, a_{n-m}$$
every real solution will be of the form
$$(y(t) - y_s)_i = \sum_{j=1}^m e^{a_j(t-t_0)}\left(C_{ij} \cos b_j(t-t_0) + D_{ij} \sin b_j(t-t_0)\right) + \sum_{j=m+1}^{n-m} C_{ij} e^{a_j(t-t_0)}$$
Since the differential equation satisfies a Lipschitz condition, the Initial Value Problem has a unique solution determined by the $n$ real initial conditions. Thus, the general solution has exactly $n$ real degrees of freedom in the $n^2$ coefficients.
Remark 11 The constraints among the coefficients $C_{ij}, D_{ij}$ can be complicated. One cannot just solve for the coefficients of $y_1$ from the initial conditions, then derive the coefficients for $y_2, \ldots, y_n$. For example, consider the differential equation
$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}' = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$$
However, when the system arises from a single $n$th order equation via
$$y = \begin{pmatrix} y \\ y' \\ \vdots \\ y^{(n-1)} \end{pmatrix}$$
the coefficients
$$C_{11}, \ldots, C_{1(n-m)}, D_{11}, \ldots, D_{1m}$$
in the general solution are arbitrary real numbers; once they are set, the other coefficients are determined. Write
$$y(t) - y_s = \sum_{j=1}^m e^{a_j(t-t_0)}\left(C_j \cos b_j(t-t_0) + D_j \sin b_j(t-t_0)\right) + \sum_{j=m+1}^{n-m} C_j e^{a_j(t-t_0)}$$
For the Initial Value Problem, compute the first $n - 1$ derivatives of $y$ at $t_0$ and set them equal to the initial conditions. This yields $n$ linear equations in the $n$ coefficients, which have a unique solution.
Note also that
$$C_j e^{a_j(t-t_0)} = \left(C_j e^{-a_j t_0}\right) e^{a_j t}$$
$$\cos b_j(t - t_0) = \cos(b_j t - b_j t_0) = \cos b_j t \cos b_j t_0 + \sin b_j t \sin b_j t_0$$
$$\sin b_j(t - t_0) = \sin(b_j t - b_j t_0) = \sin b_j t \cos b_j t_0 - \cos b_j t \sin b_j t_0$$
so we can also write
$$y(t) - y_s = \sum_{j=1}^m e^{a_j t}\left(\tilde C_j \cos b_j t + \tilde D_j \sin b_j t\right) + \sum_{j=m+1}^{n-m} \tilde C_j e^{a_j t}$$
for suitable constants $\tilde C_j, \tilde D_j$.
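The combination of a conjugate pair into a real term, which drives Theorem 10, can be checked directly (the eigenvalue, coefficient, and sample times below are illustrative):

```python
import cmath, math

# A conjugate pair of eigenvalue terms combines into a real
# exponential-times-trig term.
a, b = -0.5, 3.0
lam = complex(a, b)
gamma = complex(0.7, -1.2)       # coefficient; its conjugate pairs with conj(lam)

C = 2 * gamma.real               # C = gamma + conj(gamma)
D = -2 * gamma.imag              # D = i*(gamma - conj(gamma))

def pair_term(t):
    return gamma * cmath.exp(lam * t) + gamma.conjugate() * cmath.exp(lam.conjugate() * t)

def real_form(t):
    return math.exp(a * t) * (C * math.cos(b * t) + D * math.sin(b * t))

err = max(abs(pair_term(t) - real_form(t)) for t in (0.0, 0.4, 1.3, 2.7))
```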
6 Second Order Linear Differential Equations

Consider the second order differential equation $y'' = cy + by'$. Let
$$y(t) = \begin{pmatrix} y(t) \\ y'(t) \end{pmatrix}$$
Then we have
$$y'(t) = \begin{pmatrix} y'(t) \\ y''(t) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ c & b \end{pmatrix} \begin{pmatrix} y(t) \\ y'(t) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ c & b \end{pmatrix} y$$
The eigenvalues are $\frac{b \pm \sqrt{b^2 + 4c}}{2}$, the roots of the characteristic polynomial $\lambda^2 - b\lambda - c$.
Consider the second order differential equation $y'' = 2y + y'$. We let
$$y = \begin{pmatrix} y \\ y' \end{pmatrix}, \qquad y' = \begin{pmatrix} 0 & 1 \\ 2 & 1 \end{pmatrix} y$$
The eigenvalues are roots of the characteristic polynomial
$$\lambda^2 - \lambda - 2 = 0$$
so the eigenvalues and corresponding eigenvectors are given by
$$\lambda_1 = 2,\ v_1 = (1, 2) \qquad \lambda_2 = -1,\ v_2 = (1, -1)$$
From this information alone, we know the qualitative properties of the solutions are as given in the phase plane diagram:

The solutions are roughly hyperbolic in shape with asymptotes along the eigenvectors. Along the eigenvector $v_1$, the solutions flow off to infinity; along the eigenvector $v_2$, the solutions flow to zero.

The solutions flow in directions consistent with the flows along the asymptotes.

On the $y$-axis (where $y' = 0$), we have $y'' = 2y$, which is positive where $y > 0$ and negative where $y < 0$, so $y'$ is increasing or decreasing accordingly along the direction of the solution.

Along the line $y' = -2y$, $y'' = 2y - 2y = 0$, so $y'$ achieves a minimum or maximum where the solution crosses that line.
We seek to find the general solution, and we have two methods. The first method does not depend on the special structure $y = \begin{pmatrix} y \\ y' \end{pmatrix}$.
From Equation (4), the general solution is given by
$$\begin{pmatrix} y(t) \\ y'(t) \end{pmatrix} = \mathrm{Mtx}_{U,V}(\mathrm{id}) \begin{pmatrix} e^{2(t-t_0)} & 0 \\ 0 & e^{-(t-t_0)} \end{pmatrix} \mathrm{Mtx}_{V,U}(\mathrm{id}) \begin{pmatrix} y(t_0) \\ y'(t_0) \end{pmatrix}$$
$$= \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} e^{2(t-t_0)} & 0 \\ 0 & e^{-(t-t_0)} \end{pmatrix} \begin{pmatrix} 1/3 & 1/3 \\ 2/3 & -1/3 \end{pmatrix} \begin{pmatrix} y(t_0) \\ y'(t_0) \end{pmatrix}$$
$$= \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} \frac{e^{2(t-t_0)}}{3} & \frac{e^{2(t-t_0)}}{3} \\ \frac{2e^{-(t-t_0)}}{3} & -\frac{e^{-(t-t_0)}}{3} \end{pmatrix} \begin{pmatrix} y(t_0) \\ y'(t_0) \end{pmatrix}$$
$$= \begin{pmatrix} \frac{e^{2(t-t_0)} + 2e^{-(t-t_0)}}{3} & \frac{e^{2(t-t_0)} - e^{-(t-t_0)}}{3} \\ \frac{2e^{2(t-t_0)} - 2e^{-(t-t_0)}}{3} & \frac{2e^{2(t-t_0)} + e^{-(t-t_0)}}{3} \end{pmatrix} \begin{pmatrix} y(t_0) \\ y'(t_0) \end{pmatrix}$$
$$= \begin{pmatrix} \frac{y(t_0) + y'(t_0)}{3}\, e^{2(t-t_0)} + \frac{2y(t_0) - y'(t_0)}{3}\, e^{-(t-t_0)} \\ \frac{2y(t_0) + 2y'(t_0)}{3}\, e^{2(t-t_0)} + \frac{-2y(t_0) + y'(t_0)}{3}\, e^{-(t-t_0)} \end{pmatrix}$$
The general solution has two real degrees of freedom; a specific solution is determined by specifying the initial conditions $y(t_0)$ and $y'(t_0)$.
The second method depends on the special structure $y = \begin{pmatrix} y \\ y' \end{pmatrix}$. Since the eigenvalues are $2$ and $-1$, every solution of the equation is of the form
$$y(t) = C_1 e^{2(t-t_0)} + C_2 e^{-(t-t_0)}$$
so
$$y(t_0) = C_1 + C_2, \qquad y'(t) = 2C_1 e^{2(t-t_0)} - C_2 e^{-(t-t_0)}, \qquad y'(t_0) = 2C_1 - C_2$$
Solving for the coefficients,
$$C_1 = \frac{y(t_0) + y'(t_0)}{3}, \qquad C_2 = \frac{2y(t_0) - y'(t_0)}{3}$$
so
$$y(t) = \frac{y(t_0) + y'(t_0)}{3}\, e^{2(t-t_0)} + \frac{2y(t_0) - y'(t_0)}{3}\, e^{-(t-t_0)}$$
Typically, we are not interested in the solution for $y'$; however, we can always obtain it by differentiating the solution for $y$.
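The closed form just obtained can be checked numerically. A sketch (the initial conditions are illustrative; finite differences stand in for the derivatives):

```python
import math

# Solution of y'' = 2y + y' with t0 = 0:
#   y(t) = ((y0 + v0)/3) e^{2t} + ((2 y0 - v0)/3) e^{-t}
def y(t, y0, v0):
    return (y0 + v0) / 3 * math.exp(2 * t) + (2 * y0 - v0) / 3 * math.exp(-t)

def residual(t, y0, v0, h=1e-4):
    # central differences for y'' and y'
    ypp = (y(t + h, y0, v0) - 2 * y(t, y0, v0) + y(t - h, y0, v0)) / (h * h)
    yp = (y(t + h, y0, v0) - y(t - h, y0, v0)) / (2 * h)
    return abs(ypp - (2 * y(t, y0, v0) + yp))

err = max(residual(t, 1.0, -0.5) for t in (0.0, 0.5, 1.0))
ic_err = abs(y(0.0, 1.0, -0.5) - 1.0)
```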
7 Inhomogeneous Linear Differential Equations, with Nonconstant Coefficients

Consider the inhomogeneous linear differential equation
$$y' = M(t)y + H(t) \quad (5)$$
and the corresponding homogeneous linear differential equation
$$y' = M(t)y \quad (6)$$

Theorem 13 The general solution of the inhomogeneous linear differential equation (5) is
$$y_h + y_p$$
where $y_h$ is the general solution of the homogeneous linear differential equation (6) and $y_p$ is any particular solution of the inhomogeneous linear differential equation (5).
Proof: Fix any particular solution $y_p$ of the inhomogeneous differential equation (5). Suppose that $y_h$ is any solution of the corresponding homogeneous differential equation (6). Let $y_i(t) = y_h(t) + y_p(t)$. Then
$$y_i'(t) = y_h'(t) + y_p'(t) = M(t)y_h(t) + M(t)y_p(t) + H(t) = M(t)\left(y_h(t) + y_p(t)\right) + H(t) = M(t)y_i(t) + H(t)$$
so $y_i$ is a solution of the inhomogeneous differential equation (5).
Conversely, suppose that $y_i$ is any solution of the inhomogeneous differential equation (5). Let $y_h(t) = y_i(t) - y_p(t)$. Then
$$y_h'(t) = y_i'(t) - y_p'(t) = M(t)y_i(t) + H(t) - M(t)y_p(t) - H(t) = M(t)\left(y_i(t) - y_p(t)\right) = M(t)y_h(t)$$
so $y_h$ is a solution of the homogeneous differential equation (6) and $y_i = y_h + y_p$.

Thus, in order to find the general solution of the inhomogeneous equation, one needs to find the general solution of the homogeneous equation and a particular solution of the inhomogeneous equation.

A particular solution of the inhomogeneous equation in the constant coefficient case has the following simple form:
Theorem 14 Consider the inhomogeneous linear differential equation (5), and suppose that $M(t)$ is a constant matrix $M$, independent of $t$. A particular solution of the inhomogeneous linear differential equation (5), satisfying the initial condition $y_p(t_0) = y_0$, is given by
$$y_p(t) = e^{(t-t_0)M} y_0 + \int_{t_0}^t e^{(t-s)M} H(s)\,ds \quad (7)$$
Proof: We verify that $y_p$ as given by (7) solves (5):
$$y_p(t) = e^{(t-t_0)M} y_0 + \int_{t_0}^t e^{(t-s)M} H(s)\,ds = e^{(t-t_0)M} y_0 + \int_{t_0}^t e^{(t-t_0)M} e^{(t_0-s)M} H(s)\,ds$$
$$= e^{(t-t_0)M} \left( y_0 + \int_{t_0}^t e^{(t_0-s)M} H(s)\,ds \right)$$
$$y_p'(t) = M e^{(t-t_0)M} \left( y_0 + \int_{t_0}^t e^{(t_0-s)M} H(s)\,ds \right) + e^{(t-t_0)M} e^{(t_0-t)M} H(t) = M y_p(t) + H(t)$$
$$y_p(t_0) = e^{(t_0-t_0)M} y_0 + \int_{t_0}^{t_0} e^{(t_0-s)M} H(s)\,ds = y_0$$
Example 15 Consider the inhomogeneous linear differential equation
$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}' = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$$
Take $t_0 = 0$ and $y_0 = (1, 1)$. From Equation (7),
$$y_p(t) = \begin{pmatrix} e^{-t} & 0 \\ 0 & e^{t} \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + \int_0^t \begin{pmatrix} e^{-(t-s)} & 0 \\ 0 & e^{t-s} \end{pmatrix} \begin{pmatrix} \sin s \\ \cos s \end{pmatrix} ds = \begin{pmatrix} e^{-t}\left(1 + \int_0^t e^{s} \sin s\,ds\right) \\ e^{t}\left(1 + \int_0^t e^{-s} \cos s\,ds\right) \end{pmatrix}$$
Integrating by parts twice,
$$\int_0^t e^{s} \sin s\,ds = \left[e^{s} \sin s\right]_0^t - \int_0^t e^{s} \cos s\,ds = e^{t} \sin t - \left[e^{s} \cos s\right]_0^t - \int_0^t e^{s} \sin s\,ds$$
$$= e^{t} \sin t - e^{t} \cos t + 1 - \int_0^t e^{s} \sin s\,ds$$
so
$$2\int_0^t e^{s} \sin s\,ds = e^{t}(\sin t - \cos t) + 1, \qquad \int_0^t e^{s} \sin s\,ds = \frac{e^{t}(\sin t - \cos t) + 1}{2}$$
Similarly,
$$\int_0^t e^{-s} \cos s\,ds = \left[-e^{-s} \cos s\right]_0^t - \int_0^t e^{-s} \sin s\,ds = -e^{-t} \cos t + 1 + e^{-t} \sin t - \int_0^t e^{-s} \cos s\,ds$$
so
$$2\int_0^t e^{-s} \cos s\,ds = e^{-t}(\sin t - \cos t) + 1, \qquad \int_0^t e^{-s} \cos s\,ds = \frac{e^{-t}(\sin t - \cos t) + 1}{2}$$
Therefore
$$y_p(t) = \begin{pmatrix} e^{-t}\left(1 + \frac{e^{t}(\sin t - \cos t) + 1}{2}\right) \\ e^{t}\left(1 + \frac{e^{-t}(\sin t - \cos t) + 1}{2}\right) \end{pmatrix} = \begin{pmatrix} \frac{3e^{-t} + \sin t - \cos t}{2} \\ \frac{3e^{t} + \sin t - \cos t}{2} \end{pmatrix}$$
The general solution is
$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} C_1 e^{-t} \\ C_2 e^{t} \end{pmatrix} + \begin{pmatrix} \frac{3e^{-t} + \sin t - \cos t}{2} \\ \frac{3e^{t} + \sin t - \cos t}{2} \end{pmatrix} = \begin{pmatrix} D_1 e^{-t} + \frac{\sin t - \cos t}{2} \\ D_2 e^{t} + \frac{\sin t - \cos t}{2} \end{pmatrix}$$
where $D_1$ and $D_2$ are arbitrary real constants.
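A quick numerical check of this particular solution (assuming the diagonal system $y_1' = -y_1 + \sin t$, $y_2' = y_2 + \cos t$ with $y_p(0) = (1,1)$, as reconstructed here; a sketch, not part of the original):

```python
import math

# Candidate particular solution:
#   y1(t) = (3 e^{-t} + sin t - cos t)/2  for  y1' = -y1 + sin t
#   y2(t) = (3 e^{t}  + sin t - cos t)/2  for  y2' =  y2 + cos t
def y1(t): return (3 * math.exp(-t) + math.sin(t) - math.cos(t)) / 2
def y2(t): return (3 * math.exp(t) + math.sin(t) - math.cos(t)) / 2

def d(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

errs = []
for t in (0.0, 0.7, 1.9):
    errs.append(abs(d(y1, t) - (-y1(t) + math.sin(t))))
    errs.append(abs(d(y2, t) - (y2(t) + math.cos(t))))
err = max(errs)
init_err = max(abs(y1(0.0) - 1.0), abs(y2(0.0) - 1.0))
```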
8 Nonlinear Differential Equations: Linearization

Nonlinear differential equations may be very difficult to solve in closed form. There are a number of specific techniques that solve special classes of equations, and there are numerical methods to compute numerical solutions of any ordinary differential equation. Unfortunately, time does not permit us to discuss either these special classes of equations, or numerical methods.

Linearization allows us to obtain qualitative information about the solutions of nonlinear autonomous equations. As we noted in Section 2, the idea is to find stationary points of the equation, then study the solutions of the linearized equation near the stationary points. This gives us a reasonably, but not completely, reliable guide to the behavior of the solutions of the original equation.
Example 16 Consider the second order nonlinear autonomous differential equation
$$y'' = -\gamma^2 \sin y, \quad \gamma > 0$$
This is the equation of motion of a pendulum, but it has much in common with all cyclical processes, including processes such as business cycles. The equation is very difficult to solve because of the nonlinearity of the function $\sin y$. We define
$$y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} = \begin{pmatrix} y(t) \\ y'(t) \end{pmatrix}, \qquad y'(t) = \begin{pmatrix} y_2(t) \\ -\gamma^2 \sin y_1(t) \end{pmatrix}$$
Let
$$F(y) = \begin{pmatrix} y_2 \\ -\gamma^2 \sin y_1 \end{pmatrix}$$
Then
$$F(y) = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \iff \sin y_1 = 0 \text{ and } y_2 = 0 \iff y_1 = n\pi \text{ and } y_2 = 0$$
so the set of stationary points is
$$\{(n\pi, 0) : n \in \mathbf{Z}\}$$
We linearize the equation around each of the stationary points. In other words, we take the first order Taylor polynomial for $F$:
$$F(n\pi + h, 0 + k) = F(n\pi, 0) + \begin{pmatrix} \frac{\partial F_1}{\partial y_1} & \frac{\partial F_1}{\partial y_2} \\ \frac{\partial F_2}{\partial y_1} & \frac{\partial F_2}{\partial y_2} \end{pmatrix} \begin{pmatrix} h \\ k \end{pmatrix} + o(|h| + |k|)$$
$$= \begin{pmatrix} 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ -\gamma^2 \cos n\pi & 0 \end{pmatrix} \begin{pmatrix} h \\ k \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ (-1)^{n+1} \gamma^2 & 0 \end{pmatrix} \begin{pmatrix} h \\ k \end{pmatrix}$$
For $n$ even, the eigenvalues are the roots of
$$\lambda^2 + \gamma^2 = 0, \qquad \lambda_1 = i\gamma, \quad \lambda_2 = -i\gamma$$
Close to $(n\pi, 0)$ for $n$ even, the solutions spiral around the stationary point. For $y_2 = y_1' > 0$, $y_1$ is increasing, so the solutions move in a clockwise direction.

For $n$ odd, the eigenvalues and eigenvectors are given by
$$\lambda^2 - \gamma^2 = 0, \qquad \lambda_1 = \gamma, \quad \lambda_2 = -\gamma, \qquad v_1 = (1, \gamma), \quad v_2 = (1, -\gamma)$$
Close to $(n\pi, 0)$ for $n$ odd, the solutions are roughly hyperbolic; along $v_2$, they converge to the stationary point, while along $v_1$, they diverge from the stationary point.
From this information alone, we know the qualitative properties of the solutions are as given in the phase plane diagram on the next page:

The solutions flow in directions consistent with the flows along the asymptotes.

On the $y_1$-axis (where $y_2 = 0$), the first component of $F$ vanishes, so the flow is vertical there.
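The two types of stationary points can be seen in a short simulation. A sketch (the choice $\gamma = 1$, the semi-implicit Euler scheme, and the step size are all illustrative assumptions, not from the notes):

```python
import math

def simulate(y0, v0, gamma=1.0, h=1e-3, T=20.0):
    """Semi-implicit Euler for the pendulum y'' = -gamma^2 sin y."""
    y, v = y0, v0
    ys = [y]
    for _ in range(int(T / h)):
        v -= h * gamma ** 2 * math.sin(y)
        y += h * v
        ys.append(y)
    return ys

# Near the stationary point (0, 0) (n even) the solution stays nearby ...
near_center = simulate(0.1, 0.0)
center_dev = max(abs(y) for y in near_center)

# ... while near (pi, 0) (n odd) a tiny perturbation is pushed away.
near_saddle = simulate(math.pi - 0.01, 0.0)
saddle_dev = max(abs(y - math.pi) for y in near_saddle)
```

Starting $0.1$ from the center, the amplitude never grows much beyond $0.1$; starting $0.01$ from the inverted position, the pendulum falls away and the deviation becomes order one.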
Example 17 Consider the initial value problem
$$\begin{pmatrix} y_1'(t) \\ y_2'(t) \end{pmatrix} = \begin{pmatrix} -9y_2(t) + 4y_1^3(t) + 4y_1(t)y_2^2(t) \\ 4y_1(t) + 9y_1^2(t)y_2(t) + 9y_2^3(t) \end{pmatrix}, \quad y_1(0) = 3,\ y_2(0) = 0 \quad (8)$$
The only stationary point is
$$y_s = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
and the linearized equation at $y_s$ is
$$y'(t) = \begin{pmatrix} 0 & -9 \\ 4 & 0 \end{pmatrix} y$$
The characteristic equation is $\lambda^2 + 36 = 0$, so the matrix has distinct eigenvalues $\lambda_1 = 6i$ and $\lambda_2 = -6i$; since both of these have real part zero, we know the solutions of the linearized differential equation follow closed curves around zero. The associated eigenvectors are $v_1 = \begin{pmatrix} 3i/2 \\ 1 \end{pmatrix}$ and $v_2 = \begin{pmatrix} -3i/2 \\ 1 \end{pmatrix}$, so the change of basis matrices are
$$\mathrm{Mtx}_{U,V}(\mathrm{id}) = \begin{pmatrix} 3i/2 & -3i/2 \\ 1 & 1 \end{pmatrix} \quad \text{and} \quad \mathrm{Mtx}_{V,U}(\mathrm{id}) = \begin{pmatrix} -i/3 & 1/2 \\ i/3 & 1/2 \end{pmatrix}$$
The solution of the linearized initial value problem is
$$y(t) = \begin{pmatrix} 3i/2 & -3i/2 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} e^{6ti} & 0 \\ 0 & e^{-6ti} \end{pmatrix} \begin{pmatrix} -i/3 & 1/2 \\ i/3 & 1/2 \end{pmatrix} \begin{pmatrix} 3 \\ 0 \end{pmatrix}$$
$$= \begin{pmatrix} \frac{e^{6ti} + e^{-6ti}}{2} & \frac{3i\left(e^{6ti} - e^{-6ti}\right)}{4} \\ -\frac{i\left(e^{6ti} - e^{-6ti}\right)}{3} & \frac{e^{6ti} + e^{-6ti}}{2} \end{pmatrix} \begin{pmatrix} 3 \\ 0 \end{pmatrix} = \begin{pmatrix} \cos 6t & -\frac{3}{2}\sin 6t \\ \frac{2}{3}\sin 6t & \cos 6t \end{pmatrix} \begin{pmatrix} 3 \\ 0 \end{pmatrix} = \begin{pmatrix} 3\cos 6t \\ 2\sin 6t \end{pmatrix}$$
since
$$e^{6ti} + e^{-6ti} = \cos 6t + i \sin 6t + \cos(-6t) + i \sin(-6t) = \cos 6t + i \sin 6t + \cos 6t - i \sin 6t = 2\cos 6t$$
$$e^{6ti} - e^{-6ti} = \cos 6t + i \sin 6t - \cos(-6t) - i \sin(-6t) = \cos 6t + i \sin 6t - \cos 6t + i \sin 6t = 2i \sin 6t$$
Notice that
$$\frac{y_1^2(t)}{9} + \frac{y_2^2(t)}{4} = \frac{9\cos^2 6t}{9} + \frac{4\sin^2 6t}{4} = \cos^2 6t + \sin^2 6t = 1$$
so the solution of the linearized initial value problem is a closed curve running counterclockwise around the ellipse with principal axes along the $y_1$ and $y_2$ axes, of lengths 3 and 2 respectively.

Let
$$G(y) = \frac{y_1^2}{9} + \frac{y_2^2}{4}$$
so that the solution runs around a level set of $G$. We can get more information about the asymptotic behavior of the solution of the original, unlinearized, initial value problem (8) by computing $\frac{dG(y(t))}{dt}$:
$$\frac{dG(y(t))}{dt} = \begin{pmatrix} \frac{\partial G}{\partial y_1} & \frac{\partial G}{\partial y_2} \end{pmatrix} \begin{pmatrix} y_1'(t) \\ y_2'(t) \end{pmatrix} = \begin{pmatrix} \frac{2y_1(t)}{9} & \frac{y_2(t)}{2} \end{pmatrix} \begin{pmatrix} -9y_2(t) + 4y_1^3(t) + 4y_1(t)y_2^2(t) \\ 4y_1(t) + 9y_1^2(t)y_2(t) + 9y_2^3(t) \end{pmatrix}$$
$$= -2y_1(t)y_2(t) + \frac{8y_1^4(t)}{9} + \frac{8y_1^2(t)y_2^2(t)}{9} + 2y_1(t)y_2(t) + \frac{9y_1^2(t)y_2^2(t)}{2} + \frac{9y_2^4(t)}{2}$$
$$= \frac{8y_1^4(t)}{9} + \frac{97y_1^2(t)y_2^2(t)}{18} + \frac{9y_2^4(t)}{2} > 0 \text{ unless } y(t) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
Away from the origin, $\frac{dG(y(t))}{dt}$ is bounded below by a positive multiple of $\left(y_1^2(t) + y_2^2(t)\right)^2$, so along the solution $\frac{dG(y(t))}{dt}$ is uniformly bounded away from zero; thus
$$G(y(t)) = G(y(0)) + \int_0^t \frac{dG(y(s))}{ds}\,ds \to \infty$$
as $t \to \infty$. The linear terms become dwarfed by the higher order terms, which will determine whether the solution continues to spiral as it heads off into the distance.
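A short integration makes the growth of $G$ visible. This sketch uses the right-hand side of (8) as reconstructed above and forward Euler with an illustrative step size; the solution blows up quickly, so we only integrate a little way:

```python
def F(y1, y2):
    # right-hand side of the initial value problem (8), as reconstructed
    return (-9 * y2 + 4 * y1 ** 3 + 4 * y1 * y2 ** 2,
            4 * y1 + 9 * y1 ** 2 * y2 + 9 * y2 ** 3)

def G(y1, y2):
    return y1 ** 2 / 9 + y2 ** 2 / 4

y1, y2, h = 3.0, 0.0, 1e-5
Gs = [G(y1, y2)]
for _ in range(1000):            # integrate only to t = 0.01
    f1, f2 = F(y1, y2)
    y1, y2 = y1 + h * f1, y2 + h * f2
    Gs.append(G(y1, y2))

monotone = all(Gs[i] < Gs[i + 1] for i in range(len(Gs) - 1))
```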
Suppose now we consider instead the initial value problem
$$\begin{pmatrix} y_1'(t) \\ y_2'(t) \end{pmatrix} = \begin{pmatrix} -9y_2(t) - 4y_1^3(t) - 4y_1(t)y_2^2(t) \\ 4y_1(t) - 9y_1^2(t)y_2(t) - 9y_2^3(t) \end{pmatrix}, \quad y_1(0) = 3,\ y_2(0) = 0 \quad (9)$$
The linearized initial value problem has not changed. As before, we compute
$$\frac{dG(y(t))}{dt} = \begin{pmatrix} \frac{2y_1(t)}{9} & \frac{y_2(t)}{2} \end{pmatrix} \begin{pmatrix} -9y_2(t) - 4y_1^3(t) - 4y_1(t)y_2^2(t) \\ 4y_1(t) - 9y_1^2(t)y_2(t) - 9y_2^3(t) \end{pmatrix}$$
$$= -2y_1(t)y_2(t) - \frac{8y_1^4(t)}{9} - \frac{8y_1^2(t)y_2^2(t)}{9} + 2y_1(t)y_2(t) - \frac{9y_1^2(t)y_2^2(t)}{2} - \frac{9y_2^4(t)}{2}$$
$$= -\frac{8y_1^4(t)}{9} - \frac{97y_1^2(t)y_2^2(t)}{18} - \frac{9y_2^4(t)}{2} < 0 \text{ unless } y(t) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
As before, $G(y(t))$ is strictly decreasing along any nonzero solution. Given any $C > 0$, let
$$-\varepsilon = \sup\left\{\frac{dG(y)}{dt} : C \le G(y) \le G(y(0))\right\} < 0$$
since the region $\{y : C \le G(y) \le G(y(0))\}$ is compact. If $G(y(t)) \ge C$ for all $t$,
$$G(y(t)) = G(y(0)) + \int_0^t \frac{dG(y(s))}{ds}\,ds \le G(y(0)) - \varepsilon t \to -\infty$$
as $t \to \infty$, which is a contradiction. Thus, $G(y(t)) \to 0$ and the solution of the initial value problem (9) converges to the stationary point $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ as $t \to \infty$.
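The convergence can be confirmed numerically. A sketch using the right-hand side of (9) as reconstructed above (forward Euler; the step size and horizon are illustrative):

```python
def F(y1, y2):
    # right-hand side of the initial value problem (9), as reconstructed
    return (-9 * y2 - 4 * y1 ** 3 - 4 * y1 * y2 ** 2,
            4 * y1 - 9 * y1 ** 2 * y2 - 9 * y2 ** 3)

def G(y1, y2):
    return y1 ** 2 / 9 + y2 ** 2 / 4

y1, y2, h = 3.0, 0.0, 1e-4
G0 = G(y1, y2)
for _ in range(200000):          # integrate to t = 20
    f1, f2 = F(y1, y2)
    y1, y2 = y1 + h * f1, y2 + h * f2
Gfinal = G(y1, y2)
```

By $t = 20$ the value of $G$ along the numerical solution has decayed by orders of magnitude, consistent with convergence to the stationary point.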
Example 18 This example was assigned as a problem in 2007, much to the regret of the students and the professor; the latter did penance by writing out the solution. The principal axes of elliptical solutions of the linearized equation align with the eigenvectors. In Example 17, the eigenvectors aligned with the standard basis. When the eigenvectors do not align with the standard basis, the algebraic computations become quite challenging and tedious to do by hand. This example shows that they can in fact be done by hand; with an appropriate symbolic manipulation computer program, they would be straightforward.

Consider the initial value problem
$$\begin{pmatrix} y_1'(t) \\ y_2'(t) \end{pmatrix} = \begin{pmatrix} 5y_1(t) - 13y_2(t) - y_1^3(t) - y_1(t)y_2^2(t) \\ 13y_1(t) - 5y_2(t) - y_1^2(t)y_2(t) - y_2^3(t) \end{pmatrix}, \quad y_1(0) = 3,\ y_2(0) = 3 \quad (10)$$
The linearization around the stationary point $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ is
$$y'(t) = \begin{pmatrix} 5 & -13 \\ 13 & -5 \end{pmatrix} y$$
The characteristic equation is $(5 - \lambda)(-5 - \lambda) + 169 = 0$, i.e. $\lambda^2 + 144 = 0$, so there are distinct eigenvalues $\lambda_1 = 12i$ and $\lambda_2 = -12i$; since both of these have real part zero, we know the solution of the linearized initial value problem follows a closed curve around zero. The associated eigenvectors are
$$v_1 = \begin{pmatrix} 1 \\ \frac{5 - 12i}{13} \end{pmatrix} \quad \text{and} \quad v_2 = \begin{pmatrix} 1 \\ \frac{5 + 12i}{13} \end{pmatrix}$$
so the change of basis matrices are
$$\mathrm{Mtx}_{U,V}(\mathrm{id}) = \begin{pmatrix} 1 & 1 \\ \frac{5-12i}{13} & \frac{5+12i}{13} \end{pmatrix} \quad \text{and} \quad \mathrm{Mtx}_{V,U}(\mathrm{id}) = \begin{pmatrix} \frac{12-5i}{24} & \frac{13i}{24} \\ \frac{12+5i}{24} & -\frac{13i}{24} \end{pmatrix}$$
The solution of the linearized initial value problem is
$$y(t) = \begin{pmatrix} 1 & 1 \\ \frac{5-12i}{13} & \frac{5+12i}{13} \end{pmatrix} \begin{pmatrix} e^{12ti} & 0 \\ 0 & e^{-12ti} \end{pmatrix} \begin{pmatrix} \frac{12-5i}{24} & \frac{13i}{24} \\ \frac{12+5i}{24} & -\frac{13i}{24} \end{pmatrix} \begin{pmatrix} 3 \\ 3 \end{pmatrix}$$
$$= \begin{pmatrix} 1 & 1 \\ \frac{5-12i}{13} & \frac{5+12i}{13} \end{pmatrix} \begin{pmatrix} \frac{(12-5i)e^{12ti}}{24} & \frac{13ie^{12ti}}{24} \\ \frac{(12+5i)e^{-12ti}}{24} & -\frac{13ie^{-12ti}}{24} \end{pmatrix} \begin{pmatrix} 3 \\ 3 \end{pmatrix}$$
$$= \begin{pmatrix} \frac{e^{12ti}+e^{-12ti}}{2} - \frac{5i\left(e^{12ti}-e^{-12ti}\right)}{24} & \frac{13i\left(e^{12ti}-e^{-12ti}\right)}{24} \\ -\frac{13i\left(e^{12ti}-e^{-12ti}\right)}{24} & \frac{e^{12ti}+e^{-12ti}}{2} + \frac{5i\left(e^{12ti}-e^{-12ti}\right)}{24} \end{pmatrix} \begin{pmatrix} 3 \\ 3 \end{pmatrix} = \begin{pmatrix} 3\cos 12t - 2\sin 12t \\ 3\cos 12t + 2\sin 12t \end{pmatrix}$$
since
$$e^{12ti} + e^{-12ti} = \cos 12t + i\sin 12t + \cos(-12t) + i\sin(-12t) = 2\cos 12t$$
$$e^{12ti} - e^{-12ti} = \cos 12t + i\sin 12t - \cos(-12t) - i\sin(-12t) = 2i\sin 12t$$
Let
$$\begin{pmatrix} z_1(t) \\ z_2(t) \end{pmatrix} = \begin{pmatrix} y_1(t) + y_2(t) \\ y_2(t) - y_1(t) \end{pmatrix} = \begin{pmatrix} 6\cos 12t \\ 4\sin 12t \end{pmatrix}$$
so
$$\frac{z_1^2(t)}{36} + \frac{z_2^2(t)}{16} = \cos^2 12t + \sin^2 12t = 1$$
This shows that the solution of the linearized initial value problem follows an ellipse with principal axes of length $3\sqrt{2}$ and $2\sqrt{2}$, along the directions $(1, 1)$ and $(-1, 1)$.
Let
$$G(y) = 144\left(\frac{z_1^2}{36} + \frac{z_2^2}{16}\right) = 4(y_1 + y_2)^2 + 9(y_1 - y_2)^2 = 4\left(y_1^2 + 2y_1y_2 + y_2^2\right) + 9\left(y_1^2 - 2y_1y_2 + y_2^2\right) = 13y_1^2 - 10y_1y_2 + 13y_2^2$$
so that the solution of the linearized equation runs around a level set of $G$. As before, we compute
$$\frac{dG(y(t))}{dt} = \begin{pmatrix} 26y_1(t) - 10y_2(t) & 26y_2(t) - 10y_1(t) \end{pmatrix} \begin{pmatrix} 5y_1(t) - 13y_2(t) - y_1^3(t) - y_1(t)y_2^2(t) \\ 13y_1(t) - 5y_2(t) - y_1^2(t)y_2(t) - y_2^3(t) \end{pmatrix}$$
$$= 130y_1^2(t) - 338y_1(t)y_2(t) - 26y_1^4(t) - 26y_1^2(t)y_2^2(t) - 50y_1(t)y_2(t) + 130y_2^2(t) + 10y_2(t)y_1^3(t) + 10y_1(t)y_2^3(t)$$
$$\quad + 338y_1(t)y_2(t) - 130y_2^2(t) - 26y_1^2(t)y_2^2(t) - 26y_2^4(t) - 130y_1^2(t) + 50y_1(t)y_2(t) + 10y_1^3(t)y_2(t) + 10y_1(t)y_2^3(t)$$
$$= -26\left(y_1^4(t) + 2y_1^2(t)y_2^2(t) + y_2^4(t)\right) + 20y_1(t)y_2(t)\left(y_1^2(t) + y_2^2(t)\right)$$
$$= -26\left(y_1^2(t) + y_2^2(t)\right)^2 + 20y_1(t)y_2(t)\left(y_1^2(t) + y_2^2(t)\right)$$
$$= -16\left(y_1^2(t) + y_2^2(t)\right)^2 - 10\left(y_1^2(t) + y_2^2(t)\right)\left(y_1^2(t) - 2y_1(t)y_2(t) + y_2^2(t)\right)$$
$$= -16\left(y_1^2(t) + y_2^2(t)\right)^2 - 10\left(y_1^2(t) + y_2^2(t)\right)\left(y_1(t) - y_2(t)\right)^2 < 0 \text{ unless } y = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
Therefore, the tangent to the solution of the nonlinear equation always points inward to the elliptical level sets of the linearized equation and, as in Example 17, the solution of the initial value problem (10) converges to the stationary point $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ as $t \to \infty$.
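The algebraic identity just derived for $\frac{dG}{dt}$ can be spot-checked at random points (a sketch; the sampling range is arbitrary):

```python
import random

def F(y1, y2):
    # right-hand side of the initial value problem (10)
    return (5 * y1 - 13 * y2 - y1 ** 3 - y1 * y2 ** 2,
            13 * y1 - 5 * y2 - y1 ** 2 * y2 - y2 ** 3)

def grad_G(y1, y2):
    # G(y) = 13 y1^2 - 10 y1 y2 + 13 y2^2
    return (26 * y1 - 10 * y2, 26 * y2 - 10 * y1)

def closed_form(y1, y2):
    r2 = y1 ** 2 + y2 ** 2
    return -16 * r2 ** 2 - 10 * r2 * (y1 - y2) ** 2

random.seed(0)
err = 0.0
for _ in range(100):
    y1, y2 = random.uniform(-2, 2), random.uniform(-2, 2)
    g1, g2 = grad_G(y1, y2)
    f1, f2 = F(y1, y2)
    err = max(err, abs(g1 * f1 + g2 * f2 - closed_form(y1, y2)))
```

The gradient-dot-vector-field expression agrees with the closed form, which is manifestly negative away from the origin.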
In the initial value problems (8), (9) and (10), we were lucky to some extent. We took $G$ to be the function whose level sets are the solutions of the linearized differential equation, and found that the tangent to the solution always pointed outside the level curve in (8) and always pointed inside the level curve in (9). It is not hard to construct examples in which the tangent points outward at some points and inward at others, so the value $G(y(t))$ is not monotonic. One may be able to show by calculation that $G(y(t)) \to \infty$, so the solution disappears off into the distance, or that $G(y(t)) \to 0$, so the solution converges to the stationary point. An alternative method is to choose a different function $G$, whose level sets are not the solutions of the linearized differential equation, but for which one can prove that $\frac{dG(y(t))}{dt}$ is always positive or always negative; this is called Liapunov's Second Method (see Ritger and Rose [2]).
References

[1] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955.

[2] Paul D. Ritger and Nicholas J. Rose, Differential Equations with Applications, McGraw-Hill, New York, 1968.