Author: Vincent Huang
Essentially I record lecture material (but not example problems) as well as the exercises from the
textbook which I think look interesting or difficult. Since I skip the easy problems and exercises,
these notes are probably not actually helpful if you’re trying to learn the content.
The book we use is Differential Equations with Boundary-Value Problems by Dennis G. Zill
and Warren S. Wright, Eighth Edition. This course (MATH 2320) is being taught by Professor
Jay Gutzler from Collin College, and it takes place at Plano West Senior High School on Mondays-
Thursdays in the spring.
Differential Equations Vincent Huang Page 3
Theorem 1.1 (Picard-Lindelof Theorem). Consider the IVP y'(x) = f(x, y), y(x₀) = y₀. Let R be the rectangle given by a ≤ x ≤ b, c ≤ y ≤ d, centered at (x₀, y₀). If f and ∂f/∂y are continuous on R, there exists some interval I₀ = (x₀ − h, x₀ + h), h > 0, contained in [a, b] and some unique y defined on I₀ which is a solution to the IVP.
Proof. The proof taken from here is nice; however, it is pretty long, so I'll only sketch the main points.
Let M = max |f| and L = max |∂f/∂y| over R. We'll choose h < min((d − c)/(2M), 1/(2L)). By the MVT it follows that for any y₁, y₂, we have |f(x, y₁) − f(x, y₂)| ≤ L|y₁ − y₂|.
Now let Y be the space of all continuous functions on (x₀ − h, x₀ + h) whose graphs lie in R. For y ∈ Y we define the operator
(T y)(x) = y₀ + ∫_{x₀}^{x} f(t, y(t)) dt.
Then it follows that T is a mapping from Y → Y, since
|T y − y₀| = |∫_{x₀}^{x} f(t, y(t)) dt| ≤ M|x − x₀| ≤ Mh < (d − c)/2,
so T y remains in R.
Clearly the fixed points of T are exactly the solutions to the IVP, so it suffices to show there
is a unique fixed point.
To do this we define a metric δ(y₁, y₂) = max |y₁(x) − y₂(x)|. It's not hard to see that δ measures the distance between two functions y₁, y₂ and that δ satisfies the Triangle Inequality, so it makes Y a (complete) metric space. The main lemma about δ is the following:
Lemma: δ(T y₁, T y₂) ≤ (1/2) δ(y₁, y₂).
Proof: Clearly |T y₁ − T y₂| = |∫_{x₀}^{x} (f(t, y₁(t)) − f(t, y₂(t))) dt| ≤ L ∫_{x₀}^{x} |y₁(t) − y₂(t)| dt ≤ Lh · δ(y₁, y₂) < (1/2) δ(y₁, y₂). Thus T is a contraction on Y, and by the Banach fixed-point theorem (iterate T starting from any function in Y) it has a unique fixed point, proving the theorem.
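The fixed-point iteration in the proof can be carried out concretely. Below is a small SymPy sketch; the IVP y' = y, y(0) = 1 and the choice of six iterations are my own illustrations, not from the notes. Each application of T appends the next term of the series for eˣ.

```python
# Picard iteration y -> T y for the IVP y' = y, y(0) = 1, where
# (T y)(x) = 1 + \int_0^x y(t) dt. The iterates are partial sums of e^x.
import sympy as sp

x = sp.symbols('x')
y = sp.Integer(1)                         # start from the constant function y0
for _ in range(6):
    y = 1 + sp.integrate(y, (x, 0, x))    # one application of the operator T

# y is now the partial sum of the e^x series through x**6/720.
err = float((y - sp.exp(x)).subs(x, sp.Rational(1, 2)))
print(y)
print(abs(err))   # tiny: the iterates converge to e^x on a small interval
```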
Theorem 1.2 (First-Order Linear ODEs). There is a straightforward algorithm to solve any first-order linear ODE in standard form y' + P(x)y = Q(x): multiply both sides by the integrating factor μ = e^{∫P dx}, so the equation becomes (μy)' = μQ, and integrate.
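The standard algorithm multiplies through by the integrating factor μ = e^{∫P dx}, so that (μy)' = μQ. A SymPy sketch follows; the particular choices P = 2 and Q = e⁻ˣ are mine, not from the notes.

```python
# Integrating-factor algorithm for y' + P(x) y = Q(x).
import sympy as sp

x, C = sp.symbols('x C')
P, Q = 2, sp.exp(-x)                 # illustrative: solve y' + 2y = e^{-x}

mu = sp.exp(sp.integrate(P, x))      # integrating factor mu = e^{2x}
y = (sp.integrate(mu * Q, x) + C) / mu

# Check that y really solves the ODE.
residual = sp.simplify(sp.diff(y, x) + P * y - Q)
print(sp.simplify(y))   # C*e^{-2x} + e^{-x}
print(residual)         # 0
```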
Note that exact differentials form another class of solvable differential equations; for example, if M(x, y) dx + N(x, y) dy = 0 and the left side is the total differential df of some function f, then the solutions are the level curves f(x, y) = c for a constant c. Thus it's worth classifying all exact differentials.
Theorem 1.3 (Exact Differentials). If M, N have continuous partial derivatives in some region, then M(x, y) dx + N(x, y) dy is an exact differential if and only if ∂M/∂y = ∂N/∂x.
Proof. The only if direction follows from Clairaut. Therefore it suffices to construct f given ∂M/∂y = ∂N/∂x.
Let the common value of the above two partials be T(x, y). Then I claim
f = ∫₀^x M(u, y) du + ∫₀^y N(x, v) dv − ∫₀^x ∫₀^y T(u, v) dv du
works. Indeed, we compute
∂f/∂x = M(x, y) + d/dx ∫₀^y N(x, v) dv − ∫₀^y T(x, v) dv = M(x, y) + ∫₀^y ∂N/∂x(x, v) dv − ∫₀^y T(x, v) dv = M(x, y),
and similarly for the partial derivative of f with respect to y.
There are also special cases where an equation can be manipulated into an exact differential. E.g., if (M_y − N_x)/N doesn't depend on y, then we can find some μ(x) = e^{∫ (M_y − N_x)/N dx} for which M_y μ = N_x μ + N μ_x, so that Mμ dx + Nμ dy becomes an exact differential.
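As a quick illustration of this integrating-factor recipe, here is a SymPy check on the equation y dx − x dy = 0 (M = y, N = −x are my own choices): the ratio (M_y − N_x)/N is free of y, and the resulting μ makes the equation exact.

```python
# Integrating factor mu(x) = e^{\int (M_y - N_x)/N dx} for y dx - x dy = 0.
import sympy as sp

x, y = sp.symbols('x y')
M, N = y, -x

ratio = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)   # -2/x, no y appears
mu = sp.exp(sp.integrate(ratio, x))                        # mu(x) = x**(-2)

# Exactness test from Theorem 1.3 applied to mu*M dx + mu*N dy.
exact = sp.simplify(sp.diff(mu * M, y) - sp.diff(mu * N, x))
print(ratio)
print(exact)   # 0
```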
Homogeneous equations, in which M and N are both homogeneous of the same degree n, are another class of equations that we can solve. We do this by letting y = ux; dividing through by xⁿ turns the equation into M(1, u) dx + N(1, u) dy = 0. Since dy = u dx + x du, we're left with (M(1, u) + uN(1, u)) dx + xN(1, u) du = 0, and this is now separable in x, u.
Theorem 1.4 (Bernoulli's Equation). Solve the equation dy/dx + P(x)y = f(x)yⁿ.
Proof. Let z = y^{1−n}, so that
dz/dx = (1 − n)y^{−n} dy/dx ⟹ dz/dx + (1 − n)P(x)z = (1 − n)f(x),
where the implication comes from multiplying the original equation by (1 − n)y^{−n}. This is now a linear ODE in z, so we're done.
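As a concrete check of the substitution: for the Bernoulli equation y' + y = y² (P = 1, f = 1, n = 2, all my own illustrative choices), z = 1/y satisfies z' − z = −1, giving z = 1 + Ceˣ and hence y = 1/(1 + Ceˣ). SymPy can confirm this solves the original equation:

```python
# Verify that y = 1/(1 + C e^x) solves the Bernoulli equation y' + y = y^2.
import sympy as sp

x, C = sp.symbols('x C')
y = 1 / (1 + C * sp.exp(x))

residual = sp.simplify(sp.diff(y, x) + y - y**2)
print(residual)   # 0
```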
dy
Example 1.1 (Page 75, #35, Riccati’s Equation). Consider Riccati’s Equation = P (x) +
dx
Q(x)y + R(x)y 2 . Solve the equation given an initial solution y1 .
• Newton's Law of Cooling states that an object of temperature T(t) in a room of temperature T_m satisfies dT/dt = k(T − T_m). Assuming T_m is constant (e.g. the object has little impact on its surroundings), we can transform this into the earlier linear equation to get T = T_m + ce^{kt}.
• An LR series circuit obeys the equation L di/dt + Ri = V, while an RC circuit obeys R dq/dt + (1/C)q = V. Both are first-order linear equations like the above. Here R is resistance, C is capacitance, i is current, L is inductance, V is voltage, and q is charge.
Similarly, we have some first-order non-linear equations of interest.
• The logistic equation dP/dt = P(a − bP) is a more realistic model of population growth, where P is population. Using separation and partial fractions, we see it has solutions
P(t) = a / (b + Ce^{−at}).
• Similarly, if X is the number of grams formed of a chemical during a chemical reaction, the law of mass action says dX/dt = k(α − X)(β − X).
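The logistic solution P(t) = a/(b + Ce^{−at}) can be verified by substitution; a short SymPy check with symbolic a, b, C:

```python
# Verify that P(t) = a/(b + C e^{-a t}) solves dP/dt = P(a - bP).
import sympy as sp

t, a, b, C = sp.symbols('t a b C')
P = a / (b + C * sp.exp(-a * t))

residual = sp.simplify(sp.diff(P, t) - P * (a - b * P))
print(residual)   # 0
```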
Example 1.2 (Page 102, #15). We can model the fall of an object of mass m and velocity v in the presence of gravitational acceleration g with air resistance as m dv/dt = mg − kv² for a constant k. Solve the equation and determine the terminal velocity.
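The model can be explored numerically: at terminal velocity the right-hand side vanishes, giving v = √(mg/k). A plain-Python Euler sketch (the values of m, g, k are my own illustrative choices):

```python
# Euler integration of m v' = m g - k v^2; v(t) approaches sqrt(m g / k).
import math

m, g, k = 2.0, 9.8, 0.5
v, dt = 0.0, 1e-3
for _ in range(20000):              # integrate out to t = 20 seconds
    v += dt * (g - (k / m) * v * v)

v_term = math.sqrt(m * g / k)       # terminal velocity where the RHS is zero
print(v, v_term)
```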
In the higher-order case, linear ODEs again prove easier to solve than non-linear ones.
Theorem 2.1 (Existence and Uniqueness for Linear Equations). Let a_n, a_{n−1}, …, a₀, g be continuous functions on an interval I with a_n nonzero on I. If x = x₀ ∈ I, then for any choice of y₀, y₁, …, y_{n−1}, there is a unique solution to a_n(x)y^{(n)} + a_{n−1}(x)y^{(n−1)} + ⋯ + a₀(x)y = g(x) with y^{(i)}(x₀) = y_i for each i.
Proof. The ideas for the proof are taken from here as well as here.
We'll let Y = (y, y', y'', …, y^{(n−1)})ᵀ be the column vector of y and its first n − 1 derivatives. Then the equation can be rewritten in the form Y' = AY + B(x), where
\[
A = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-\frac{a_0}{a_n} & -\frac{a_1}{a_n} & -\frac{a_2}{a_n} & \cdots & -\frac{a_{n-1}}{a_n}
\end{pmatrix},
\qquad
B = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \frac{g(x)}{a_n} \end{pmatrix}.
\]
The proof of Picard-Lindelof from section 1 actually extends to general solution spaces, so we get a unique solution with Y(x₀) = Y₁ for any Y₁ ∈ ℝⁿ, defined on some interval J ⊆ I. Let J be a maximal such interval (i.e. the union of all such valid intervals). Our goal is to show that J = I.
First we show that Y must be unique on I. Suppose not, so two solutions Y₁, Y₂ exist. Let J* be the largest interval on which Y₁, Y₂ agree. If J* ≠ I then J* has an endpoint in the interior of I, so by continuity we get that Y₁, Y₂ agree at that endpoint; hence we can use Picard-Lindelof to show that Y₁ = Y₂ on a neighborhood of the endpoint of J*, contradicting maximality.
The above paragraph shows at most one solution Y can exist on I. Take a solution Y defined on J, so we want to extend Y to the rest of I. I claim that if J cannot be extended, then at its endpoints Y must diverge. Indeed, WLOG let α be the upper endpoint of J and suppose it's not the upper endpoint of I. Then we have Y(x) = Y(x₀) + ∫_{x₀}^{x} AY dt + ∫_{x₀}^{x} B(t) dt. Since A, B are continuous on I they are bounded on J, so supposing that Y doesn't diverge at α, we see that Y can be extended to α by continuity. Furthermore, because of the integral equation, Y is differentiable at α, so Picard-Lindelof extends Y past α.
As a result, it suffices to show that Y is bounded on J. By the Triangle Inequality, we have |Y| ≤ A₁ ∫_{x₀}^{x} |Y| dt + B₁, where A₁ is the maximal value of ‖A‖ and B₁ bounds |Y(x₀)| plus the integral of ‖B‖. Letting Y₁(x) = ∫_{x₀}^{x} |Y| dt, this says Y₁' ≤ A₁Y₁ + B₁, and Gronwall's inequality gives
|Y| ≤ A₁Y₁ + B₁ ≤ B₁e^{A₁(x−x₀)},
which is bounded on J, as desired.
Theorem 2.2 (Superposition). The set of solutions to a homogeneous linear ODE forms a vector space.
Proof. Clearly homogeneous equations can be written in the form Lf(x) = 0 for a linear differential operator L. Since L is a linear operator, the solution set of f, the kernel of L, forms a vector space.
Definition 2.4 (Wronskian). Suppose f₁, f₂, …, f_n each possess at least n − 1 derivatives. Then we define the Wronskian of the f_i as
\[
W(f_1, f_2, \ldots, f_n) = \det \begin{pmatrix}
f_1 & f_2 & \cdots & f_n \\
f_1' & f_2' & \cdots & f_n' \\
\vdots & \vdots & \ddots & \vdots \\
f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)}
\end{pmatrix}.
\]
Theorem 2.3 (Wronskian Test). Solutions f₁, f₂, …, f_n to a homogeneous nth-order linear ODE (with continuous coefficients and a_n nonzero) are linearly independent if and only if their Wronskian is nonzero.
Proof. Clearly if the Wronskian is always nonzero, there do not exist constants c₁, c₂, …, c_n, not all zero, for which Σ c_i f_i is identically zero (such a relation would make the columns of the Wronskian matrix dependent), hence the f_i are linearly independent.
Now it suffices to show that if the Wronskian is zero at some point, the f_i are linearly dependent and W is zero everywhere. This isn't hard: if the Wronskian is zero at x₀, then WLOG we can find constants c_i with f_n = c₁f₁ + ⋯ + c_{n−1}f_{n−1} together with the same relation among the first n − 1 derivatives at x₀. But now c₁f₁ + ⋯ + c_{n−1}f_{n−1} is also a solution to the homogeneous equation by additivity, and it agrees with f_n (along with its derivatives) at x₀, so by uniqueness in Theorem 2.1 we see f_n is identically c₁f₁ + ⋯ + c_{n−1}f_{n−1}, so the f_i are linearly dependent.
Definition 2.5 (Fundamental Set). A set of n linearly independent solutions to an nth-order
linear ODE is called a fundamental set.
Theorem 2.4 (Existence of Fundamental Set). If an is nonzero and the ai (x) are continuous,
there always exists a fundamental set f1 , f2 , . . . , fn of n solutions to an nth-order linear ODE.
Consequently, any solution f to the ODE has form f = c1 f1 + c2 f2 + · · · + cn fn .
Proof. Let g(f) = (f(x₀), f'(x₀), …, f^{(n−1)}(x₀)) for some x₀ ∈ I. By the existence part of Theorem 2.1 we can find f₁, f₂, …, f_n such that the g(f_i) form a basis of ℝⁿ. By Theorem 2.3, these f_i are linearly independent. Given any solution f, we can write g(f) as a linear combination of the g(f_i), so by the uniqueness part of Theorem 2.1, f equals the corresponding linear combination of the f_i; hence the f_i span all solutions and form a basis as desired.
This now allows us to classify solutions to a non-homogeneous equation.
Theorem 2.5 (General Solution of Non-Homogeneous Equations). If y_p is any particular solution and y₁, …, y_n is a fundamental set for the corresponding homogeneous equation, then every solution has the form y = y_p + c₁y₁ + ⋯ + c_ny_n.
Proof. Because derivatives are additive, we know from subtraction that y is a solution of the non-homogeneous ODE if and only if y − y_p solves the homogeneous one, i.e. y − y_p = c₁y₁ + c₂y₂ + ⋯ + c_ny_n, as desired.
Example 2.1 (Page 129, #39, Be Careful!). Solve the equation x²y'' − 4xy' + 6y = 0 on (−∞, ∞).
Proof. We must be extremely careful here because the leading coefficient a₂(x) = x² isn't nonzero on all of ℝ, so we can't directly apply the theory above.
Instead we solve over ℝ⁺ and ℝ⁻ separately. Over each separate interval, we have the independent set {x², x³}. (Motivation for finding this is to just substitute y(x) = P(x) for a polynomial P, noting each term has the same degree in x.) Meanwhile, x = 0 yields y(0) = 0. Therefore, on ℝ⁺ we take y = c₁x³ + c₂x², and on ℝ⁻ we take y = c₃x³ + c₄x², adjoined by y(0) = 0. It's not hard to see that this piecewise definition of y is twice differentiable at zero iff c₂ = c₄, which yields our general solution set.
Now that we’ve covered the general theory, we can move onto solving specific ODEs.
Theorem 2.6 (Reduction of Order). Given a solution y₁ to the 2nd-order standard-form ODE y'' + P(x)y' + Q(x)y = 0, find another solution y₂ independent of y₁.
Proof. Write y = uy₁, so that we have u''y₁ + 2u'y₁' + uy₁'' + P(x)(uy₁' + u'y₁) + Q(x)uy₁ = 0. Since y₁ solves the ODE, this reduces to u''y₁ + u'(2y₁' + P(x)y₁) = 0, which is now a first-order linear ODE in u'. Solving gives
u' = c₁ / (y₁² e^{∫P dx}), hence u = c₁ ∫ dx / (y₁² e^{∫P dx}) + c₂.
We don't really care about u up to linearity, so WLOG c₁ = 1, c₂ = 0, and we get
y₂ = y₁ ∫ (e^{−∫P dx} / y₁²) dx,
which works.
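The final formula can be exercised on a concrete case of my own choosing: y'' − 2y' + y = 0 (so P = −2) with known solution y₁ = eˣ, where the formula should recover the familiar second solution y₂ = xeˣ.

```python
# Reduction of order: y2 = y1 * \int e^{-\int P dx} / y1^2 dx.
import sympy as sp

x = sp.symbols('x')
P, Q = -2, 1                      # the ODE y'' - 2y' + y = 0 in standard form
y1 = sp.exp(x)                    # one known solution

y2 = y1 * sp.integrate(sp.exp(-sp.integrate(P, x)) / y1**2, x)
residual = sp.simplify(sp.diff(y2, x, 2) + P * sp.diff(y2, x) + Q * y2)
print(y2)         # x*exp(x)
print(residual)   # 0
```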
Now we turn to general linear homogeneous ODEs with constant coefficients. We’ve already
done the first order case, so now we cover the second order case, which is extremely common in
applications and which motivates the general case.
Theorem 2.7 (Second-Order Constant Coefficients). Consider ay'' + by' + cy = 0, and let m₁, m₂ be the roots of the characteristic polynomial am² + bm + c.
• If m₁ ≠ m₂ are real, the solutions are c₁e^{m₁x} + c₂e^{m₂x}.
• If m₁ = m₂ = m, the solutions are (c₁ + c₂x)e^{mx}.
• If m₁, m₂ are complex, the solutions are e^{αx}(c₁ cos βx + c₂ sin βx), where m₁, m₂ = α ± βi.
Proof. The first case is easy: letting y = e^{mx}, we see y is a solution iff m is a root of am² + bm + c, so we're done. The second case follows similarly, except we obtain the second independent solution xe^{mx} from the theory of finite differences. In the last case, our solutions are
y = C₁e^{(α+βi)x} + C₂e^{(α−βi)x} = C₁e^{αx}(cos βx + i sin βx) + C₂e^{αx}(cos βx − i sin βx),
and choosing C₁, C₂ to be complex conjugates yields exactly the real solutions e^{αx}(c₁ cos βx + c₂ sin βx).
Theorem 2.8 (General Linear Homogeneous ODEs). Consider the ODE a_ny^{(n)} + a_{n−1}y^{(n−1)} + ⋯ + a₁y' + a₀y = 0. Then we can determine the general solutions to the ODE.
Proof. Let the characteristic polynomial a_nmⁿ + a_{n−1}m^{n−1} + ⋯ + a₁m + a₀ have root r with multiplicity k. Then the root r generates the solution space c₁e^{rx} + c₂xe^{rx} + ⋯ + c_kx^{k−1}e^{rx} in a similar manner to the above, so taking the direct sum of all such solution spaces over the different roots r gives the result.
Now we turn to non-homogeneous equations where the coefficients a_i are constant and g has a "nice" form like a polynomial or exponential, possibly multiplied by a trig function. Then we can guess that a particular solution y will have a similar form to g, and solve for coefficients. This is best explained through an example:
Example 2.2 (Page 141, #2). Find a particular solution of y'' − y' + y = 2 sin 3x.
Proof. We'll guess that y = a sin 3x + b cos 3x. Then the left hand side equals −9a sin 3x − 9b cos 3x − 3a cos 3x + 3b sin 3x + a sin 3x + b cos 3x = 2 sin 3x. Thus we have −9a + 3b + a = 2 and −9b − 3a + b = 0. Therefore b = 6/73, a = −16/73, hence y = −(16/73) sin 3x + (6/73) cos 3x.
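The coefficients a = −16/73, b = 6/73 found in Example 2.2 can be double-checked symbolically:

```python
# Verify the particular solution of y'' - y' + y = 2 sin 3x.
import sympy as sp

x = sp.symbols('x')
y = sp.Rational(-16, 73) * sp.sin(3*x) + sp.Rational(6, 73) * sp.cos(3*x)

residual = sp.simplify(sp.diff(y, x, 2) - sp.diff(y, x) + y - 2*sp.sin(3*x))
print(residual)   # 0
```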
Note that in general, the solution form we guess for y shouldn’t have any terms in common with
solutions to the corresponding homogeneous equation (as otherwise we could just subtract them
out). Also, if g is a sum of multiple “different” terms like x2 , sin x, we can split g into the separate
components, solve separately, and add the solutions together.
Example 2.3. Solve y⁽⁴⁾ − y'' = 4x + 2xe⁻ˣ.
Proof. Note that the solution to the corresponding homogeneous equation is c₁ + c₂x + c₃eˣ + c₄e⁻ˣ.
Now we solve y⁽⁴⁾ − y'' = 4x. This is not hard: it's easy to see that y = −(2/3)x³ works. Next we solve y⁽⁴⁾ − y'' = 2xe⁻ˣ. We guess the solution has form Axe⁻ˣ + Bx²e⁻ˣ (one could also include higher powers of x times e⁻ˣ). The first term has first derivative e⁻ˣ − xe⁻ˣ, second derivative −2e⁻ˣ + xe⁻ˣ, third derivative 3e⁻ˣ − xe⁻ˣ, and fourth derivative −4e⁻ˣ + xe⁻ˣ. The second term has derivative 2xe⁻ˣ − x²e⁻ˣ, second derivative 2e⁻ˣ − 4xe⁻ˣ + x²e⁻ˣ, third derivative −6e⁻ˣ + 6xe⁻ˣ − x²e⁻ˣ, and fourth derivative 12e⁻ˣ − 8xe⁻ˣ + x²e⁻ˣ. We realize that this is actually enough information to solve the problem, as it yields the system −2A + 10B = 0, −4B = 2, so B = −1/2 and A = −5/2. Thus the general solution is −(2/3)x³ − (5/2)xe⁻ˣ − (1/2)x²e⁻ˣ + c₁ + c₂x + c₃eˣ + c₄e⁻ˣ, and we're done.
Letting D = d/dx be the differential operator, we see that the LHS of a homogeneous linear constant-coefficient ODE can be written in the form Ly = 0 for some polynomial L in D. Note that since D commutes with itself and is a linear operator, we can factor L as a polynomial in D.
Definition 2.6 (Annihilator, Annihilates). If L is a linear differential operator with L(f (x)) = 0,
then we say that L annihilates f and is an annihilator of f (x).
Note that L = Dⁿ annihilates x^k for 0 ≤ k < n while L = (D − α)ⁿ annihilates x^k e^{αx} for 0 ≤ k < n. Similarly, by the same reasoning as in our analysis of second-order linear ODEs, we see that (D² − 2αD + (α² + β²))ⁿ annihilates e^{αx}x^k cos βx and e^{αx}x^k sin βx for 0 ≤ k < n.
Definition 2.7 (Complementary Solution). Given an equation L(y) = g(x), we say the set of all
solutions L(y) = 0 is the set of complementary solutions to the equation, in the sense that adding
a complementary solution to a solution to the original equation will generate all the other solutions
to the original equation.
Theorem 2.9 (Annihilator Method). Given an equation L(y) = g(x) for some constant-coefficient differential operator L and function g of the form x^k e^{αx} cos βx (or the analogous sine form), we can determine the general form of the solution y.
Proof. Let L₁ be the smallest-order operator which annihilates g, so that (L₁ ∘ L)y = 0. Then since we know L₁ and L, we can compute the general form of all functions y annihilated by L₁ ∘ L, yielding the general solution form for y in a more systematic manner.
Note that the roots corresponding to L give the complementary solutions, while the roots contributed by L₁ give the form of a particular solution to the new equation.
Example 2.4 (Page 156, #64). Solve the equation y⁽⁴⁾ − 4y'' = 5x² − e^{2x}.
Proof. 5x² − e^{2x} is annihilated by D³(D − 2), while the left side corresponds to D⁴ − 4D² = D²(D − 2)(D + 2), so we know that y must be annihilated by D⁵(D − 2)²(D + 2). Therefore y is within the span of 1, x, x², x³, x⁴, e^{−2x}, e^{2x}, xe^{2x}.
Differential Equations Vincent Huang Page 11
By our previous discussion, we know the coefficients of 1, x, e^{±2x} are arbitrary, so we can focus on the other basis terms Ax² + Bx³ + Cx⁴ + Dxe^{2x}.
We see with some work that y'' = 2A + 6Bx + 12Cx² + 4De^{2x} + 4Dxe^{2x} and y'''' = 24C + 32De^{2x} + 16Dxe^{2x}. Setting y'''' − 4y'' equal to 5x² − e^{2x} eventually lets us find B = 0, A = −5/16, C = −5/48, D = −1/16, hence the general solution is
y = c₀ + c₁x + c₂e^{−2x} + c₃e^{2x} − (5/16)x² − (5/48)x⁴ − (1/16)xe^{2x},
as desired.
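The particular part of the Example 2.4 solution can be double-checked symbolically:

```python
# Verify that y_p = -(5/16)x^2 - (5/48)x^4 - (1/16)x e^{2x}
# satisfies y'''' - 4y'' = 5x^2 - e^{2x}.
import sympy as sp

x = sp.symbols('x')
yp = (-sp.Rational(5, 16)*x**2 - sp.Rational(5, 48)*x**4
      - sp.Rational(1, 16)*x*sp.exp(2*x))

residual = sp.simplify(sp.diff(yp, x, 4) - 4*sp.diff(yp, x, 2)
                       - (5*x**2 - sp.exp(2*x)))
print(residual)   # 0
```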
Now we turn to a new method, the Variation of Parameters.
Theorem 2.10 (Variation of Parameters). Given an ODE in standard form y^{(n)} + P_{n−1}y^{(n−1)} + ⋯ + P₀y = f(x) along with the form of the complementary solution c₁y₁ + c₂y₂ + ⋯ + c_ny_n, solve the original ODE.
Proof. We look for a particular solution of the form y = Σ u_iy_i, with the u_i functions to determine. Each time we differentiate, we impose the condition that the terms involving the u_i' vanish; continuing inductively in this manner, we end up assuming that Σ u_i'y_i^{(k)} = 0 for 0 ≤ k ≤ n − 2. Then we are left with the equation
Σ u_i(P₀y_i + P₁y_i' + ⋯ + y_i^{(n)}) + Σ u_i'y_i^{(n−1)} = f.
The first term is identically zero, since each y_i solves the homogeneous equation, so we get an nth equation Σ u_i'y_i^{(n−1)} = f.
Now let
\[
U = \begin{pmatrix} u_1' \\ u_2' \\ \vdots \\ u_n' \end{pmatrix}, \qquad
A = \begin{pmatrix}
y_1 & y_2 & \cdots & y_n \\
y_1' & y_2' & \cdots & y_n' \\
\vdots & \vdots & \ddots & \vdots \\
y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)}
\end{pmatrix}, \qquad
F = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ f \end{pmatrix}.
\]
Then we just need to solve AU = F. Let A_i be A with the ith column replaced by F, let W be the Wronskian of the y_i, and let W_i = det A_i, so that u_i' = W_i/W for each i by Cramer's Rule. Since the y_i are linearly independent, W is never zero, so these solutions are well-defined. Thus integrating gives the original u_i, and therefore a solution to the ODE.
Once again this is best explained with an example.
Example 2.5 (Page 162, #28). Solve y''' − 3y'' + 2y' = e^{2x}/(1 + eˣ).
Proof. We see that the complementary solutions are given by y₁ = 1, y₂ = eˣ, y₃ = e^{2x}. I'm not going to write out all the matrices and Wronskians, but it's straightforward to compute that
W = 2e^{3x}, W₁ = e^{5x}/(1 + eˣ), W₂ = −2e^{4x}/(1 + eˣ), W₃ = e^{3x}/(1 + eˣ).
We then deduce that u₁ = ∫ (W₁/W) dx = (1/2)eˣ − (1/2) ln(eˣ + 1). Similarly we have u₂ = ∫ (W₂/W) dx = −ln(1 + eˣ) and u₃ = ∫ (W₃/W) dx = (1/2)(x − ln(1 + eˣ)). Therefore we conclude the general solution set is given by
y = c₁ + c₂eˣ + c₃e^{2x} − (1/2) ln(eˣ + 1) − eˣ ln(1 + eˣ) + (e^{2x}/2)(x − ln(1 + eˣ)),
where the (1/2)eˣ term from u₁ has been absorbed into c₂eˣ.
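The particular solution in Example 2.5 can be sanity-checked numerically (a fully symbolic simplification works too, but a pointwise check of the residual is enough here):

```python
# Check y_p = -(1/2)ln(e^x+1) - e^x ln(1+e^x) + (e^{2x}/2)(x - ln(1+e^x))
# against y''' - 3y'' + 2y' = e^{2x}/(1+e^x) at several sample points.
import sympy as sp

x = sp.symbols('x')
L = sp.log(1 + sp.exp(x))
yp = -L/2 - sp.exp(x)*L + sp.exp(2*x)/2 * (x - L)

resid = (sp.diff(yp, x, 3) - 3*sp.diff(yp, x, 2) + 2*sp.diff(yp, x)
         - sp.exp(2*x)/(1 + sp.exp(x)))
vals = [abs(float(resid.subs(x, v))) for v in (-1, sp.Rational(1, 2), 2)]
print(max(vals))   # numerically zero
```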
Theorem 2.11 (Cauchy-Euler Equation). Consider the Cauchy-Euler equation a_nxⁿy^{(n)} + a_{n−1}x^{n−1}y^{(n−1)} + ⋯ + a₀y = g(x), in which the kth coefficient is a constant multiple of x^k. We can find the general form of the solution when g = 0.
Proof. Let x = e^z. I claim by induction that if D is the differential operator with respect to z, then x^ky^{(k)} = D(D − 1)⋯(D − k + 1)y.
The base case k = 0 is clear. For the inductive step, suppose x^ky^{(k)} = D(D − 1)⋯(D − k + 1)y, i.e. y^{(k)} = e^{−kz}D(D − 1)⋯(D − k + 1)y. Then
x^{k+1}y^{(k+1)} = x^{k+1}(e^{−kz}D(D − 1)⋯(D − k + 1)y)' = x^{k+1}D(e^{−kz}D(D − 1)⋯(D − k + 1)y)(dz/dx),
the last step by the Chain Rule applied to x, z. Since dz/dx = 1/x, this in turn equals x^kD(e^{−kz}D(D − 1)⋯(D − k + 1)y), which by the Product Rule is
e^{kz} · e^{−kz}(D − k)D(D − 1)⋯(D − k + 1)y = D(D − 1)⋯(D − k + 1)(D − k)y,
completing the induction. The substitution therefore converts the equation into a constant-coefficient linear ODE in z, and a root r of the resulting characteristic polynomial yields the solution e^{rz} = x^r.
Note that if r is complex, we can write it as α + βi, so that α − βi is also a root. Then, since x^{βi} = e^{βi ln x} = cos(β ln x) + i sin(β ln x), we get solutions of the form C₁x^{α+βi} + C₂x^{α−βi} = x^α((C₁ + C₂) cos(β ln x) + (C₁ − C₂)i sin(β ln x)). For this to be real we need C₁, C₂ to be conjugates again, hence we get solutions of the form x^α(c₁ cos(β ln x) + c₂ sin(β ln x)).
Now we will examine solving equations of the form y'' + P(x)y' + Q(x)y = f(x) in terms of a general f(x), without even knowing f(x) explicitly.
Theorem 2.12 (Second-Order ODEs). Show that, if y₁, y₂ are linearly independent solutions to Ly = 0, then
G(x, t) = (y₂(x)y₁(t) − y₁(x)y₂(t)) / W(y₁, y₂)(t)
is Green's function for L.
Proof. We recall from the Variation of Parameters method that y_p = u₁y₁ + u₂y₂ where u₁' = −y₂f/W and u₂' = y₁f/W. Integrating to retrieve u₁, u₂ and rearranging terms gives the desired form of Green's Function.
Note that additionally, y_p(x) = ∫_{x₀}^{x} G(x, t)f(t) dt as defined in Definition 2.9 satisfies y_p(x₀) = y_p'(x₀) = 0, which also allows us to solve IVP problems more easily.
Example 2.6 (Page 179, #30). Solve the IVP x²y'' − xy' + y = 0 with initial conditions y(1) = 4, y'(1) = 3.
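No solution is recorded here; as a sketch, this is a Cauchy-Euler equation with repeated characteristic root m = 1, giving the fundamental set {x, x ln x} on x > 0, and fitting the initial conditions suggests the candidate y = 4x − x ln x, which can be verified symbolically:

```python
# Verify the candidate y = 4x - x ln x for x^2 y'' - x y' + y = 0,
# y(1) = 4, y'(1) = 3.
import sympy as sp

x = sp.symbols('x', positive=True)
y = 4*x - x*sp.log(x)

residual = sp.simplify(x**2*sp.diff(y, x, 2) - x*sp.diff(y, x) + y)
print(residual)                                  # 0
print(y.subs(x, 1), sp.diff(y, x).subs(x, 1))    # 4 3
```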
Example 2.7 (Page 184, #16). Solve the system D²x − 2(D² + D)y = sin t and x + Dy = 0.
Proof. We can multiply the second equation by 2D + 2 and add to get (D² + 2D + 2)x = sin t. By the Annihilator method we get (D² + 1)(D² + 2D + 2)x = 0, hence we find x = c₁ cos t + c₂ sin t + e⁻ᵗ(c₃ cos t + c₄ sin t). The latter two terms correspond to solutions of D² + 2D + 2, hence the constants can vary, but the first two terms are fixed by (D² + 2D + 2)x = sin t. It's not hard to solve the resulting system to get c₁ = −2/5, c₂ = 1/5. Hence x = −(2/5) cos t + (1/5) sin t + e⁻ᵗ(c₃ cos t + c₄ sin t).
Now we note that Dy = −x, so integrating gives y = c₅ − ∫ x dt = c₅ + (2/5) sin t + (1/5) cos t − ∫ e⁻ᵗ(c₃ cos t + c₄ sin t) dt. Letting A = ∫ e⁻ᵗ cos t dt, B = ∫ e⁻ᵗ sin t dt, we have by integration by parts that A = e⁻ᵗ sin t + B and B = −e⁻ᵗ cos t − A. It follows that A = (e⁻ᵗ/2)(sin t − cos t) and B = −(e⁻ᵗ/2)(sin t + cos t). Substituting back in, we eventually see that y has form
y = c₅ + (2/5) sin t + (1/5) cos t + (1/2)e⁻ᵗ(c₄ − c₃) sin t + (1/2)e⁻ᵗ(c₃ + c₄) cos t.
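As a check, here is the solution pair with the signs I obtain when integrating Dy = −x; both equations of the original system should be satisfied identically:

```python
# Verify x, y against D^2 x - 2(D^2 + D) y = sin t and x + D y = 0.
import sympy as sp

t, c3, c4, c5 = sp.symbols('t c3 c4 c5')
x = (-sp.Rational(2, 5)*sp.cos(t) + sp.Rational(1, 5)*sp.sin(t)
     + sp.exp(-t)*(c3*sp.cos(t) + c4*sp.sin(t)))
y = (c5 + sp.Rational(2, 5)*sp.sin(t) + sp.Rational(1, 5)*sp.cos(t)
     + sp.exp(-t)/2*((c4 - c3)*sp.sin(t) + (c3 + c4)*sp.cos(t)))

eq1 = sp.simplify(sp.diff(x, t, 2) - 2*(sp.diff(y, t, 2) + sp.diff(y, t))
                  - sp.sin(t))
eq2 = sp.simplify(x + sp.diff(y, t))
print(eq1, eq2)   # 0 0
```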
We can also sometimes solve non-linear ODEs if we get lucky (e.g. through substitutions, integrating factors, or Taylor series), though this is not nearly as easy.
Example 2.8. Solve xy'' = y' + x(y')².
Proof. We first let u = y', so that xu' = u + xu². This becomes a Bernoulli equation, so dividing by u² and letting v = 1/u (so that v' = −u'/u²), we have
xu'/u² = 1/u + x ⟹ −xv' = v + x ⟹ (xv)' = −x.
We therefore see that xv = −x²/2 + c, so
v = −x/2 + c/x ⟹ u = −2x/(x² − c₁) ⟹ y = −ln(x² − c₁) + c₂.
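The solution can be double-checked by substitution:

```python
# Verify that y = -ln(x^2 - c1) + c2 satisfies x y'' = y' + x (y')^2.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = -sp.log(x**2 - c1) + c2

residual = sp.simplify(x*sp.diff(y, x, 2) - sp.diff(y, x)
                       - x*sp.diff(y, x)**2)
print(residual)   # 0
```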
• We can model simple harmonic motion or free undamped motion by d²x/dt² + ω²x = 0. For example, in the case of a spring with spring constant k and a mass m hanging off the spring, we have ω² = k/m by Hooke's Law and Newton's Second Law. Thus x = c₁ cos ωt + c₂ sin ωt, so the period of motion is T = 2π/ω and the frequency is f = 1/T = ω/2π. We sometimes also refer to ω, in radians per second, as the angular or natural frequency of the system. With a bit of trigonometry we see this actually implies x = A sin(ωt + φ), where the amplitude A is √(c₁² + c₂²) and the phase angle φ satisfies tan φ = c₁/c₂.
• Suppose we have a scenario like the above, except that a body of velocity v generates a resistive (e.g. frictional) force −βv. So we have a free damped motion given by d²x/dt² + 2λ(dx/dt) + ω²x = 0, where 2λ = β/m. The roots of the characteristic equation are thus −λ ± √(λ² − ω²). If λ² − ω² > 0 we say the system is overdamped because β is large, causing x(t) = e^{−λt}(c₁e^{√(λ²−ω²)t} + c₂e^{−√(λ²−ω²)t}) to drop to zero. If λ² − ω² = 0 we call the system critically damped and our solution is now x(t) = e^{−λt}(c₁ + c₂t). Once again x(t) tends to zero, but at a slower rate than an overdamped system. If λ² − ω² < 0 our system is underdamped, and we now have x(t) = e^{−λt}(c₁ cos(√(ω²−λ²)t) + c₂ sin(√(ω²−λ²)t)), so x(t) oscillates but still eventually goes to zero.
• Similarly, in an LRC circuit, we have L(d²q/dt²) + R(dq/dt) + (1/C)q = E(t) by summing voltages, where E(t) is the emf generated by the source. We have similar notions of overdamped, critically damped, and underdamped here.
• For a beam bending as it carries a load w(x), we know its deflection y(x) satisfies EIy⁽⁴⁾ = w(x). Here E is Young's modulus of elasticity while I is the moment of inertia around a central axis; we call EI the flexural rigidity of the system. If the beam's endpoint is embedded, we have y = y' = 0 at the endpoint; if it's free, we have y'' = y''' = 0; if it's simply supported or hinged, we have y = y'' = 0.
Definition 2.10 (Eigenvalue, Eigenfunction). For a given boundary-value problem involving some
parameter λ, we say λ is an eigenvalue if it yields a nontrivial solution y, which we call the
corresponding eigenfunction.
• A nonlinear spring acts in the same way as a spring, except the restoring force isn’t kx (eg.
it might be kx2 ).
• In a simple pendulum, we have the equation d²θ/dt² + (g/l) sin θ = 0. We can linearize this for small values of θ by replacing sin θ with either θ or θ − θ³/6.
• In physics problems where mass varies, instead of using F = ma we must use F = (mv)0 =
m0 v + ma, which gives a more complicated differential equation.