Read the information sheet. It contains a lot of important information
about how this course will work this term.
Pick up the two course packets from Graphic Arts in the basement of
Building 11.
We'll use Edwards and Penney (EP, 5th or 4th ed) but not the freely
bundled Polking.
This was handled by the UMO (Undergraduate Mathematics Office), by the
way, and the registrar messed up this process. If you got conflicting
letters from the registrar and from the UMO, the UMO is right.
between us. It's private; only I can see the numbers you put up, pretty
much.
Any questions?
2.016 Hydrodynamics
3.23 Electrical, Optical, and Magnetic Properties of Materials
6.630 Electromagnetics
18.100 Analysis I
                         'Solving'
  Differential Equation ---------------> Behavior over time
  (short term information)
           ^                                  |
            \                                 |
      Model  \                                | Interpretation
              \                               v
               +------- Physical World <------+
[4] In this first Unit we will study ODEs involving only the first
derivative:
We have written down the "general solutions." Their graphs fill up the
plane.
A "particular solution" arises from choosing a specific value for the
constant of integration. Often the constant is determined by specifying
a point (a,b) you want the solution curve to pass through. This is an
"initial value."
An INTEGRAL CURVE is a curve in the plane that has the given slope
at every point it passes through.
m = 1 : x = y^2 - 1
m = -1 : x = y^2 + 1 .
I invoked the Mathlet Isoclines and showed the example.
I drew some solution curves. Many get trapped along the bottom branch
of the parabola. Can we explain this? I cleared the solutions and
called attention to the fact that everywhere along the nullcline, the
direction field points into the region between the isoclines m = 0 and
m = -1 , and everywhere to the right of some point (actually it's
(5/4, -1/2) ) on m = -1 , the direction field also points into the
region.
Take a point (x,y) . The slope of the line from (0,0) to it is y/x .
The slope perpendicular to this is given by the negative reciprocal,
-x/y . So the slope field of y' = -x/y goes around the origin: the
integral curves look like circles centered at (0,0) .
The Existence and Uniqueness theorem says that there is just one
integral curve through each point: crossing of integral curves is NOT
ALLOWED.
Direction fields let you visualize this, but we also want to be able to
solve equations explicitly when possible.
Step 1: put all the x's on one side, y's on the other:
(If this can't be done, the equation isn't separable and this method
doesn't work.)

For y' = -x/y : y dy = - x dx , and integrating both sides,

    y^2/2 + c1 = - x^2/2 + c2 .

Clean up by combining constants of integration:

    x^2 + y^2 = c
This is the method of separation of variables.
Each integral curve contains TWO solution functions: one above, one
below.
Your initial condition tells you which one you are on. There is NO
solution of the differential equation through points with y = 0 : the
equation y' = -x/y breaks down there. Example: find the solution with
initial value y(1) = 2 , that is, the solution curve through the point
(1,2) . We can get this by computing what c must be to make the curve
pass through it: c = 1^2 + 2^2 = 5 , so x^2 + y^2 = 5 and
y = sqrt(5 - x^2) .
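Here is a quick numerical spot-check (a Python sketch, not part of the original notes); it assumes the equation in this example is y' = -x/y , whose integral curves are x^2 + y^2 = c :

```python
import math

# The integral curve through (1,2): x^2 + y^2 = c with c = 5, upper
# branch y = sqrt(5 - x^2). Check y(1) = 2 and y' = -x/y numerically.
c = 1**2 + 2**2
y = lambda x: math.sqrt(c - x**2)

assert c == 5
assert abs(y(1.0) - 2.0) < 1e-12

h = 1e-5
for x in (0.3, 1.0, 1.7):
    dydx = (y(x + h) - y(x - h)) / (2 * h)   # centered difference
    assert abs(dydx - (-x / y(x))) < 1e-7
```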
Numerical Methods
We return to y' = x - y^2 , with the initial condition y(0) = 1 this
time. This solution seems to be one of those trapped in the funnel, so
for large x it is close to sqrt(x) .

But what about y(1) ?
Well, we know that the integral curve is NOT straight. What to do?
Approximate it by a polygon!
So use the tangent line approximation to go half way, and then check
the direction field again:
We follow the line segment from there with this slope for a run of .5
to get to the point with x = 1 and
y = .5 + (slope)(run) = .5 + (.25)(.5) = .625 .
[2] This is a general method for computing y(b) from y(a) and the
direction field. In fact, it approximates y(x) for all x between a
and b.
Pick a step size h ( 1 or 1/2 above). It should probably be small
enough that the direction field doesn't change much over each step.
Then compute the successive approximations in an organized way:
In the line n = 1 , y1 = y0 + h A0
In the line n = 2 , y2 = y1 + h A1
    k    xk     yk      Ak = xk - yk^2    h Ak
    __________________________________________
    0    0      1       -1                -.5
    1    .5     .5      .25               .125
    2    1.0    .625
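The table can be reproduced in a few lines of Python (a sketch; the equation y' = x - y^2 and the data come from the table above):

```python
# Euler's method for y' = x - y^2 , y(0) = 1 , step size h = 0.5.
f = lambda x, y: x - y**2

x, y, h = 0.0, 1.0, 0.5
rows = []
for k in range(2):
    A = f(x, y)                       # slope at the current point
    rows.append((k, x, y, A, h * A))
    x, y = x + h, y + h * A           # follow the tangent for a run h

assert rows[0] == (0, 0.0, 1.0, -1.0, -0.5)
assert rows[1] == (1, 0.5, 0.5, 0.25, 0.125)
assert (x, y) == (1.0, 0.625)         # the Euler estimate of y(1)
```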
Euler's method is rarely exact. Let's identify some sources for error.
Much of numerical analysis is understanding and bounding potential
error.
(2) The Euler polygon is not an exact solution, and the direction field
at its vertices differs more and more from the direction field under
the actual solution. At places where the direction field is changing
rapidly, this produces very bad approximations quickly.
For short intervals, at least, we can predict whether the Euler method
will give an answer that is too large or too small. In this example,
the direction field turns upward under the polygon while it is running,
so the actual value of y(1) is bigger than the Euler estimate: the
Euler answer is too small. This is a general thing.
1. Too high
2. Too low
If the integral curve is bending up, the Euler approximation is too low.
If it's bending down, the Euler approximation is too high.
[4] The way to address the problem of variability of the direction
field is to poll the values of the direction field in the area that the
next strut in the Euler polygon plans to traverse, and use that
information to give a better slope for the next strut. These polling
methods can be very clever.
If you poll once more, at the far end of the prospective strut, and
average the two slopes, you get the "improved Euler method" (also known
as Heun's method).

If you poll four times sequentially (in a certain very clever way) you
get "RK4," the fourth-order Runge-Kutta method.
Equal cost comparison: the error for the Euler method is around 1/1000 ,
even using 1000 steps. For the same number of slope evaluations,
improved Euler does much better, and RK4 will always win. There are
still higher order methods, but they involve more overhead as well, and
experience has shown that RK4 is a good compromise.

Beware! ODE solvers are tricky, and good ones work to avoid errors like
this. One trick: when the direction field is steep, use smaller
stepsizes.
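To see the polling methods win, here is a Python sketch comparing Euler, improved Euler, and RK4 on the test equation y' = y , y(0) = 1 (a made-up test problem, not the example from class); the exact value at t = 1 is e :

```python
import math

# Euler, improved Euler (Heun), and RK4 on y' = y, y(0) = 1,
# using the same number of steps for each method.
f = lambda t, y: y

def euler(n):
    t, y, h = 0.0, 1.0, 1.0 / n
    for _ in range(n):
        y += h * f(t, y); t += h
    return y

def heun(n):
    t, y, h = 0.0, 1.0, 1.0 / n
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + h, y + h * k1)   # poll at the far end
        y += h * (k1 + k2) / 2; t += h            # and average
    return y

def rk4(n):
    t, y, h = 0.0, 1.0, 1.0 / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6; t += h
    return y

errs = [abs(m(20) - math.e) for m in (euler, heun, rk4)]
assert errs[0] > errs[1] > errs[2]    # each refinement does better
```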
18.03 Class 3, Feb 13, 2006
Now is the moment to let the interest period Delta t tend to zero:
x' = I x + q
Note: q(t) can certainly vary in time. The interest rate can too.
so I = I(t) , q = q(t) .
x' - I x = q
The system responds to the input signal and yields the function x(t),
the "output signal." Here's a picture:
              initial condition
                     |
                     |
                     v
              ______________
             |              |
  input ---->|    system    |----> output
             |______________|
The greater the temperature difference between inside and outside, the
faster T(t) changes.
This happens often: the input signal is a product, and one of the
factors is p (which is k here). The other factor then has the same
units as the output signal. k is a "coupling constant."

The system here is the cooler. The input signal is k times the
external temperature.
1. None
2. (a) only
3. (b) only
4. (c) only
5. All
6. All but (a)
7. All but (b)
8. All but (c)
Blank. Don't know.
Separate variables in x' + p(t) x = 0 :

    dx/x = - p(t) dt
    ln|x| = - P(t) + c    ( P an antiderivative of p )
    x = +- e^c e^{-P(t)}
    x = C e^{-P(t)}
[1] Definition: A "linear ODE" is one that can be put in the "standard
form"

 _____________________________
|                             |
|     x' + p(t) x = q(t)      |
|_____________________________|

The "associated homogeneous equation" is

    x' + p(t) x = 0 .

The inhomogeneous equation isn't separable: it's something new. We'll
describe a method which uses a nonzero homogeneous solution xh :
"variation of parameters," the substitution x = xh u .
Let's see how this works in our example. The associated homogeneous
equation is x' + (1/3) x = 0, which has nonzero solution
xh = e^{-t/3}
__________________________________________
    20 + 2t = e^{-t/3} u'

so u' = e^{t/3} ( 20 + 2t ) . Integrate by parts, with

    v = 2t ,       dw = e^{t/3} dt
    dv = 2 dt ,    w = 3 e^{t/3}    [another place where we can take c = 0!]

to get u = 60 e^{t/3} + 6t e^{t/3} - 18 e^{t/3} = ( 42 + 6t ) e^{t/3} , so
x = e^{-t/3} u = 42 + 6t
x = e^{-t/3} u = 42 + 6t + c e^{-t/3}
x = 42 + 6t - 10 e^{-t/3}
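A quick numerical check of this answer (a Python sketch; it assumes, as the computation above indicates, that the equation is x' + (1/3) x = 20 + 2t ):

```python
import math

# Check x(t) = 42 + 6t - 10 e^{-t/3} against x' + (1/3) x = 20 + 2t,
# using a centered difference for x'. Note x(0) = 32.
x = lambda t: 42 + 6 * t - 10 * math.exp(-t / 3)

h = 1e-5
worst = 0.0
for t in (0.0, 1.0, 5.0):
    xp = (x(t + h) - x(t - h)) / (2 * h)
    worst = max(worst, abs(xp + x(t) / 3 - (20 + 2 * t)))
assert worst < 1e-8
```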
      x' = xh u' + xh' u
     p x =         p xh u
    ____________________
       q = ( xh' + p xh ) u + xh u'

Since xh solves the homogeneous equation, xh' + p xh = 0 , so
q = xh u' , and

    u = integral xh^{-1} q dt
    x = xh integral xh^{-1} q dt
This one has an easier way: the left hand side is (xy)'
Integrate: xy = ln x + c
[6] The method of "integrating factors" finds a function that you can
multiply both sides by to get into this position.
Solve x' + tx = 2t . The integrating factor here is e^{t^2/2} .
Let's check:

    ( e^{t^2/2} x )' = e^{t^2/2} x' + t e^{t^2/2} x = e^{t^2/2} ( x' + tx )

as claimed. So we have

    ( e^{t^2/2} x )' = 2t e^{t^2/2}
    e^{t^2/2} x = 2 e^{t^2/2} + c

so x = 2 + c e^{ - t^2 / 2 }
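A numerical spot-check of this general solution (Python sketch):

```python
import math

# Check that x(t) = 2 + c e^{-t^2/2} solves x' + t x = 2t for any c.
def x(t, c):
    return 2 + c * math.exp(-t**2 / 2)

h = 1e-5
worst = 0.0
for c in (-3.0, 0.0, 5.0):
    for t in (0.0, 0.7, 2.0):
        xp = (x(t + h, c) - x(t - h, c)) / (2 * h)   # centered x'
        worst = max(worst, abs(xp + t * x(t, c) - 2 * t))
assert worst < 1e-7
```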
Maybe complex numbers seem obscure because you are used to imagining
numbers by giving them units: 5 cars, or -3 miles. Complex numbers do
not accept units. Also, there is no ordering on complex numbers,
no "<."
(*) encourages us to define the "complex conjugate" bar(a+bi) = a - bi ,
and in these terms it reads:

    z bar(z) = |z|^2 .

Divide by |z|^2 and by z to see 1/z = bar(z) / |z|^2 .

Conjugation satisfies bar(w+z) = bar(w) + bar(z) and
bar(wz) = bar(w) bar(z) .
Question 2: If bar(z) = - z , then
1. z is purely imaginary
2. z is real
4. z = 0
I'll check this in case w and z are both on the unit circle:
w = cos a + i sin a and z = cos b + i sin b . Then

wz = ((cos a)(cos b) - (sin a)(sin b)) + i ((cos a)(sin b) + (sin a)(cos b))
   = cos(a+b) + i sin(a+b) .

The real and imaginary parts here are exactly the angle addition
formulas for sin and cos , and if you understand complex
multiplication you never need to memorize them again.
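This can be spot-checked numerically (Python sketch):

```python
import math

# (cos a + i sin a)(cos b + i sin b) = cos(a+b) + i sin(a+b)
a, b = 0.8, 2.1
w = complex(math.cos(a), math.sin(a))
z = complex(math.cos(b), math.sin(b))
prod = w * z
err = abs(prod - complex(math.cos(a + b), math.sin(a + b)))

assert err < 1e-12
assert abs(abs(prod) - 1) < 1e-12   # the product stays on the unit circle
```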
Question 3. (1+i)^4 =
1. - 1
2. 4
3. - 4
4. - sqrt(2)
5. 4 i
You went straight to the answer, but let's tabulate some powers:
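Tabulating the powers in Python (a sketch):

```python
# Powers of 1 + i: each multiplication by 1+i rotates by 45 degrees
# and scales by sqrt(2).
z = 1 + 1j
powers = [z**n for n in range(1, 5)]

assert z**2 == 2j          # (1+i)^2 = 2i
assert z**3 == -2 + 2j
assert z**4 == -4          # so (1+i)^4 = -4
```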
The derivative of a complex-valued function is computed for each
component, and gives you the velocity vector of the moving point.

(**)   z' = i z ,   z(0) = 1 .

So we will write the solution to (**) as e^{it} .
    z = cos t + i sin t

To check, compute

    z' = - sin t + i cos t   and   iz = i cos t - sin t ,

which agree; and z(0) = 1 , as required.
In fact, for any complex number a+bi you can compute that the solution
to z' = (a+bi)z , z(0) = 1 , is

    z = e^{at} ( cos(bt) + i sin(bt) ) ,

which we therefore write as e^{(a+bi)t} .
n = 2 : z = +- 1 .

n = 3 : z^3 = 1 means z = 1 or z^2 + z + 1 = 0 . The quadratic
formula gives

    z = ( -1 + sqrt(3) i ) / 2   or   z = ( -1 - sqrt(3) i ) / 2 .

That's it; there's no other way for it to happen. The cube roots of
unity start with 1 and divide the unit circle evenly into 3 parts.
In general, the nth roots of unity divide the circle into n equal
parts.
Now the magnitude must be a positive real fourth root of 16, namely, 2:
all the 4th roots of 16 lie on the circle of radius 2. 2 itself is one.
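A sketch in Python of the general recipe (magnitude the positive real n-th root, angles equally spaced):

```python
import cmath, math

# The 4th roots of 16: magnitude 16**(1/4) = 2, angles 2 pi k / 4.
R, n = 16, 4
r = R ** (1 / n)
roots = [r * cmath.exp(2j * math.pi * k / n) for k in range(n)]

assert abs(r - 2) < 1e-12
for z in roots:
    assert abs(z**4 - 16) < 1e-9    # each one really is a 4th root
```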
z = cos(t) + i sin(t)
We saw this geometrically but you can also just check it:
In fact the same easy check shows that for any complex number a+bi
the solution of z' = (a+bi) z with z(0) = 1 is
e^{(a+bi)t} = e^{at} ( cos(bt) + i sin(bt) ) .
You can see this using the usual rule for real exponentials together
with the angle addition formulas, or by using the uniqueness theorem
for solutions to ODEs. See the Supplementary Notes.
    Re(z) = ( z + bar(z) ) / 2 ,    Im(z) = ( z - bar(z) ) / (2i) .

Proof by diagram.
Apply this to z = e^{it} . I will need to know what bar( e^{it} ) is.
Reflecting across the real axis reverses the angle: so

    bar( e^{it} ) = e^{-it} .
From Euler's formula, (**), and the "general fact" at the start, we find

    cos(t) = ( e^{it} + e^{-it} ) / 2 ,   sin(t) = ( e^{it} - e^{-it} ) / (2i) .

Anything you want to know about sines and cosines can be obtained from
these formulas and the properties of the complex exponential.
[3] Sinusoids

A "sinusoid" is a shifted and scaled (co)sine wave,

    f(t) = A cos(omega t - phi) ,

with parameters the amplitude A , the angular frequency omega , and
the phase lag phi . The period is P = 2 pi / omega , and the time lag
is t_0 = phi / omega . This is the time at which f(t_0) = A .
Usually you make sure 0 <= t_0 < P .
Applications of C :

[1] Integration. Integrals like integral e^{at} cos(bt) dt usually
call for repeated integration by parts; writing cos(bt) as the real
part of e^{ibt} reduces them to integrating a single exponential
e^{(a+ib)t} .
2 x = 2 e^{-2t} u
let's use the method of optimism, or the inspired guess. The inspiration
here is based on the fact that differentiation reproduces exponentials:
d
-- e^{rt} = r e^{rt}
dt
Since the right hand side is an exponential, maybe the output signal x
will be too: TRY x = A e^{3t} . I don't know what A is yet, but:

     x' = A 3 e^{3t}
    2 x = 2 A e^{3t}
    -----------------
    x' + 2x = (3 + 2) A e^{3t} = 5 A e^{3t} ,

so matching 5 A e^{3t} against the right hand side determines A .
    x' + 2x = cos(t)

Since cos(t) = Re( e^{it} ) , solve the complex equation

    z' + 2z = e^{it}

instead. Trying z = A e^{it} gives ( i + 2 ) A e^{it} = e^{it} , so
A = 1/(2+i) and

    z_p = ( 1/(2+i) ) e^{it} .

To get a solution to the original equation we should take the real part
of this! Expand each factor in real and imaginary parts:

    z_p = ((2-i)/5) ( cos(t) + i sin(t) )

so

    x_p = Re( z_p ) = ( 2 cos(t) + sin(t) ) / 5 ,

as before!
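A quick check that this real part actually solves the original equation (Python sketch):

```python
import math

# x_p(t) = Re(z_p) = (2 cos t + sin t)/5 should satisfy x' + 2x = cos t.
xp = lambda t: (2 * math.cos(t) + math.sin(t)) / 5

h = 1e-5
worst = 0.0
for t in (0.0, 1.0, 2.5):
    d = (xp(t + h) - xp(t - h)) / (2 * h)    # centered difference
    worst = max(worst, abs(d + 2 * xp(t) - math.cos(t)))
assert worst < 1e-8
```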
x' = A r e^{rt}
kx = k A e^{rt}
______________________
B e^{rt} = A (r + k) e^{rt}
A = B / (r + k) so:
as long as r + k is not 0 .
I claim that

    a cos(omega t) + b sin(omega t) = A cos(omega t - phi)

for suitable constants A and phi . Take A cos(omega t - phi) and
try to find out what I should take for A and phi . Expand this
using the cosine difference formula:

    A cos(omega t - phi) = (A cos(phi)) cos(omega t) + (A sin(phi)) sin(omega t) ,

so we need a = A cos(phi) and b = A sin(phi) .
There's a name for such A , phi: they are the polar coordinates of
the point (a,b) in the plane. They do exist!
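A numerical check of this amplitude-phase identity (Python sketch; math.atan2 handles the quadrant of (a,b) automatically):

```python
import math

# a cos(wt) + b sin(wt) = A cos(wt - phi), with (A, phi) the polar
# coordinates of (a, b).
a, b, w = 3.0, -4.0, 2.0
A   = math.hypot(a, b)      # sqrt(a^2 + b^2) = 5
phi = math.atan2(b, a)

worst = max(abs(a * math.cos(w * t) + b * math.sin(w * t)
                - A * math.cos(w * t - phi))
            for t in (0.0, 0.3, 1.7))
assert A == 5.0
assert worst < 1e-12
```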
Autonomous equations
Autonomous means conditions are constant in time, though they may
depend on the current value of y . Example: logistic population
growth. As y rises toward the carrying capacity p the growth rate
decreases to zero, and when y > p the growth rate becomes negative:

    k(y) = k0 (1 - (y/p)) .

The resulting equation, y' = k0 (1 - (y/p)) y = g(y) , is nonlinear.
Values of y such that g(y) = 0 are called "critical points" for the
equation y' = g(y) . The horizontal lines with these values of y
form the graphs of the constant solutions, or "equilibria."

To see the rest of the direction field, plot the graph of g(y) . For
the logistic equation this is a downward parabola, crossing the
axis at y = 0 and y = p .
I drew some isoclines and some solutions. This is the "Logistic" or
"S" curve. On a vertical y axis, draw an upward pointing arrow where
g(y) > 0 and a downward pointing arrow where g(y) < 0 . This simple
diagram tells you roughly how the system behaves. It's called the
"phase line."
Question 8.1. In the autonomous equation y' = g(y) , where g(y) has
a critical point p with g'(p) < 0 , the equilibrium y = p is
stable.

Example with constant harvesting: the equation is

    y' = (1-y)y - a .

As the harvest rate a increases, the two critical points move closer
together. This says that the range of populations which are sustainable
shrinks; when a exceeds the maximum value of (1-y)y , there are no
equilibria left, and every population crashes to zero.
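Back to the unharvested logistic equation: a crude Euler integration (Python sketch; k0 = 1 and p = 2 are made-up values) confirms the phase line picture, with solutions approaching the equilibrium y = p from either side:

```python
# Logistic equation y' = g(y) = k0 (1 - y/p) y.
k0, p = 1.0, 2.0
g = lambda y: k0 * (1 - y / p) * y

def run(y, h=0.01, n=2000):
    # crude Euler integration out to time n*h = 20
    for _ in range(n):
        y += h * g(y)
    return y

assert g(0) == 0 and g(p) == 0      # the two critical points
assert abs(run(0.1) - p) < 1e-2     # climbs the "S" curve up to p
assert abs(run(3.5) - p) < 1e-2     # decays down to p
```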
y = e^{k_0 t} , y = 0 , y = - e^{k_0 t}
1. True
2. False
1. g(y)=0
2. g'(y)=0
3. g''(y)=0
x = x_p + c x_h
- If no:
2 x = 2 e^{2t}
--------------------------------
or u' = t + 1 so u = t^2/2 + t + c
so t^2/2 + t + c = e^{2t} x
Try z = A e^{2it}
z' = A 2i e^{2it}
2 z = 2 A e^{2it}
----------------------
4 e^{2it} = A ( 2 + 2i ) e^{2it}
so A = 4 / ( 2 + 2i ) = 2 / (1 + i )
z_p = ( 2 / ( 1 + i ) ) e^{2it}
There are two ways to get the real part out of this.
Which to use depends upon what you want.
- x^{-1} = t + c
x^{-1} = c - t
x = 1 / (c - t)
Jeff Xia showed that a certain 5-body system moves off to infinity
in finite time!
18.03 Class 11, March 3, 2006
 ||                                                ||
 ||              |-------> F_ext                   ||
 ||           ___|___          _____________       ||
 ||          |       |        |    _____    |      ||
 ||---VVVVVVV|       |--------|---|_____|---|------||
 ||          |_______|        |_____________|      ||
 ||             O   O                              ||
 ||                                                ||
       |-------> x
If x = 0 , F_spr(x) = 0
The dashpot force is frictional. This means that it depends only on
the velocity x' : if x' = 0 , F_dash(x') = 0 .

The simplest way to model this behavior (and one which is valid in
general for small x' , by the tangent line approximation) is
F_dash = - b x' ; similarly F_spr = - k x for small x .
So the equation is
mx" + bx' + kx = F_ext
Most of the time we will assume that the "coefficients" b(t) and
k(t) (and m ) are constant. Then we can look for exponential
solutions. Try x = c e^{rt} :

    k ] x   = c e^{rt}
    b ] x'  = c r e^{rt}
    m ] x'' = c r^2 e^{rt}
    ___________________
    mx'' + bx' + kx = c ( m r^2 + b r + k ) e^{rt} = c p(r) e^{rt} ,

where

    p(s) = ms^2 + bs + k

is the "characteristic polynomial." So x = c e^{rt} is a solution
exactly when p(r) = 0 .
Example: p(s) = (s+1)(s+4) has roots -1 and -4 , giving solutions

    c1 e^{-t} + c2 e^{-4t}
x = c1 x1 + c2 x2 "linear combination"
Just two solutions determine all solutions. This is like saying that
for any two vectors in the plane such that neither is a multiple of the
other, every vector in the plane is a linear combination of them.
x = c1 e^{-t} + c2 e^{-4t}
1 = x(0) = c1 + c2
2 = x'(0) = - c1 - 4 c2
3 = -3 c2 so c2 = - 1
and then c1 = 1 - c2 = 2:
x = 2 e^{-t} - e^{-4t}
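The same computation in Python (a sketch):

```python
import math

# Match initial conditions x(0) = 1, x'(0) = 2 for
# x = c1 e^{-t} + c2 e^{-4t}:
#   c1 + c2 = 1 ,  -c1 - 4 c2 = 2 ;  adding gives -3 c2 = 3.
c2 = -1.0
c1 = 1 - c2
x  = lambda t:  c1 * math.exp(-t) +     c2 * math.exp(-4 * t)
xp = lambda t: -c1 * math.exp(-t) - 4 * c2 * math.exp(-4 * t)

assert (c1, c2) == (2, -1)
assert abs(x(0) - 1) < 1e-12 and abs(xp(0) - 2) < 1e-12
```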
18.03 Class 12, March 6, 2006
Topics: characteristic roots and damping criteria.
p(s) = s^2 + bs + k
All solutions go to zero: no oscillation here. When the roots are real
and not equal to each other the system is called "Overdamped."
r = -2 +- sqrt(4-5) = -2 +- i
e^{(-2+i)t} , e^{(-2-i)t}
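A sketch in Python checking these roots with the quadratic formula:

```python
import cmath

# Roots of p(s) = s^2 + 4s + 5: r = -2 +- sqrt(4 - 5) = -2 +- i.
b, k = 4, 5
disc = cmath.sqrt(b * b - 4 * k)          # sqrt(-4) = 2i
r1, r2 = (-b + disc) / 2, (-b - disc) / 2

assert abs(r1 - (-2 + 1j)) < 1e-12
assert abs(r2 - (-2 - 1j)) < 1e-12
assert abs(r1 + r2 + b) < 1e-12           # sum of the roots is -b
```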
If x = u + iv is a complex-valued solution of mx" + bx' + kx = 0 ,
where m , b , and k are real, then the real and imaginary parts of
x are also solutions:

    k ] x  = u   + i v
    b ] x' = u'  + i v'
    m ] x" = u'' + i v''
    ___________________
    0 = ( mu'' + bu' + ku ) + i ( mv'' + bv' + kv )

Both things in parentheses are real, so the only way this can happen is
for both of them to be zero.
So in our situation,
x = a cos(omega_n t) + b sin(omega_n t)
= A cos(omega_n t - phi)
so
-- sum of the roots is - b : so the average of the roots is -b/2 .
1. oscillate
2: This is tricky: If the roots are not real, then the solutions
are e^{-bt/2} times a sinusoid, so they die away. If they are real,
then the solutions are combinations of e^{r1 t} and e^{r2 t} with
r1 and r2 both negative, so they die away too.

The case b^2/4 = k is transitional; the roots coincide at -b/2 ,
and the characteristic polynomial gives just one exponential solution,
e^{-bt/2} . The fact is that we can write down another solution,
namely t e^{-bt/2} . The general solution is ( c1 t + c2 ) e^{-bt/2} ,
and it dies away in that case too.
    Name*         Condition    Roots               Solutions
    ------------------------------------------------------------------
    Overdamped    b^2/4 > k    two diff. real      e^{r1 t} , e^{r2 t}
    Critically    b^2/4 = k    one repeated        e^{rt} , t e^{rt}
      damped                   real root r
    Underdamped   b^2/4 < k    complex pair        e^{-bt/2} cos(omega_d t) ,
                               (omega_d below)     e^{-bt/2} sin(omega_d t)
* The name here is appropriate under the assumption that b and k
are both non-negative. The rest of the table makes sense in general,
but it doesn't have a good interpretation in terms of a mechanical
system.
r = (- b +- sqrt(b^2 - 4k)) / 2
= -b/2 +- sqrt((b/2)^2 - k) so
If b > 0 and k > 0 , then all solutions die off. They are "transients."
In the underdamped case, the imaginary part of the roots,
omega_d = sqrt( k - b^2/4 ) , is the "damped circular frequency," and
the real part of the roots is the "growth rate" -b/2 :

    roots = -b/2 +- i omega_d
                            Im
                             |
     -b/2 + i omega_d   *    |
                             |
    _________________________|__________________ Re
                             |
     -b/2 - i omega_d   *    |
                             |
[Diagram: the plane with horizontal axis -b and vertical axis k ,
with the parabola k = b^2/4 drawn dotted. Above the parabola: complex
roots, with Re < 0 to the left of the k axis and Re > 0 to the
right; on the k axis: purely imaginary roots; on the parabola: a
repeated root (when k = b^2/4 ); between the parabola and the
horizontal axis: real roots, negative on the left and positive on the
right; below the horizontal axis: real roots of opposite sign; on the
horizontal axis: at least one zero root.]
[The same diagram, relabeled by behavior of solutions: above the
parabola, stable oscillating (left) and unstable oscillating (right);
on the k axis, sinusoidal solutions; on the parabola, t e^{rt}
solutions too if k = b^2/4 ; between the parabola and the horizontal
axis, stable not oscillating (left) and unstable not oscillating
(right); below the horizontal axis, most solutions grow; on the
horizontal axis, some nonzero constant solutions.]
3. This system is linear but the coefficients are not constant in time.
4. Either 2 or 3 holds but we can't say which.
Constant
Sinusoidal
Polynomial
Sums of these
k) x = xp + xh
b) x' = xp' + xh'
m) x" = xp" + xh"
__________________
mx" + bx' + kx = (m xp" + b xp' + k xp) + (m xh" + b xh' + k xh)
               = F_ext + 0 ,

as we wanted.
(2) find the general solution of (*)_h (which we have worked on for
a while).
There are two frequencies here: the natural frequency of the system
and the frequency omega of the input signal.
I showed what happens with a weight on a rubber band: for small omega
the weight follows the motion of my hand; it passes "resonance," where
the response amplitude is large; and when omega is larger the response
is exactly anti-phase. Why? And what's this resonance?
Try xp = B cos(omega t) in x" + omega_n^2 x = A cos(omega t) :

    omega_n^2 ] xp  =   B cos(omega t)
            1 ] xp" = - B omega^2 cos(omega t)
    -----------------------------------
    A cos(omega t) = xp" + omega_n^2 xp = B (omega_n^2 - omega^2) cos(omega t)

so

    B = A / (omega_n^2 - omega^2) .
When omega > omega_n , the gain falls back towards zero.
Also: when omega < omega_n the denominator is positive, and the
output is a positive multiple of the input. When omega > omega_n the
denominator is negative, and the output signal is a negative multiple
of the input: this is PHASE REVERSAL.
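These three behaviors can be read off numerically (Python sketch; A = 1 and omega_n = 3 are made-up sample values):

```python
# B = A / (omega_n^2 - omega^2): resonance near omega_n, positive
# multiple below it, negative multiple (phase reversal) above it.
A, omega_n = 1.0, 3.0
B = lambda omega: A / (omega_n**2 - omega**2)

assert B(1.0) > 0                   # omega < omega_n : in phase
assert B(5.0) < 0                   # omega > omega_n : phase reversal
assert abs(B(2.9)) > abs(B(1.0))    # gain grows near resonance
assert abs(B(10.0)) < abs(B(5.0))   # and falls back toward zero
```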
Try xp = B e^{rt} in m x" + b x' + k x = A e^{rt} :

    k ] xp  = B e^{rt}
    b ] xp' = B r e^{rt}
    m ] xp" = B r^2 e^{rt}
    ___________________
    A e^{rt} = B p(r) e^{rt}

or xp = ( A / p(r) ) e^{rt} , provided p(r) is not 0 . For example,
for x" + 2x' + 2x = 4 e^{3t} we have p(3) = 17 , so

    xp = ( 4 / 17 ) e^{3t} .
Of course this will let us solve (*) for sinusoidal signals as well:
We want the imaginary part. Let's do it by writing out real and
imaginary parts:
A better model for the weight at the end of the rubber band is:
| |
> |
< | y
> |
< V
|
|
---------
| m |------------------------- x = 0
---------
|
|
| x
m x" = k ( y - x )
m x" + k x = k y(t)
Most systems are more general than the simple spring/dashpot/mass system
we have been looking at. For example,
-----------------------------------------------
| |
> |
k1 < | y(t)
> |
< V
|
|
---------
| m_1
|-------------------------
--------- |
| |
> | x_1
k2 < |
> V
<
---------
| m_2
|------------------------
--------- |
|
| x_2
m2 x2" = k2 ( x1 - x2 )
If you differentiate the second equation twice and plug in the first,
you get a single fourth-order equation for x2 alone.

The theory of such systems is just like the theory we have developed
for first and second order linear constant coefficient equations.
The left hand side represents the system; the numbers ak are the
"coefficients." The system is called a "Linear Time Invariant" or LTI
system. It has a "characteristic polynomial,"
D^2 + 2D + 2I = p(D)
xp = ( A / p(r) ) e^{rt}
and (a_n D^n + ... + a_0 I) e^{rt} = (a_n r^n + ... + a_0) e^{rt}
In terms of operators: D(vu) = v Du + u Dv , so with v = e^{rt} ,

    D ( e^{rt} u ) = e^{rt} Du + r e^{rt} u
                   = e^{rt} ( Du + ru )
                   = e^{rt} ( D + rI ) u .

Apply D again:

    D^2 ( e^{rt} u ) = D ( e^{rt} (D+rI) u )
                     = e^{rt} (D+rI)(D+rI) u
                     = e^{rt} (D+rI)^2 u .
Example: solve x" - x = e^{-t} . Since p(-1) = 0 , the exponential
response formula fails; try x = e^{-t} u instead:

    -1 ] x = e^{-t} u
    -------------------------
    e^{-t} = ( D^2 - I )( e^{-t} u ) = e^{-t} ( (D-I)^2 - I ) u

so we want ( (D-I)^2 - I ) u = 1 , or ( D^2 - 2D ) u = 1 , i.e.
u" - 2u' = 1 . One solution is u = - t/2 , giving xp = - (t/2) e^{-t} .
Notice that if p(s) = a_n s^n + a_(n-1) s^{n-1} + ... + a_1 s + a_0
then p(0) = a_0.
Proof by example: solve x" + x' = t . Try x = at^2 + bt + c :

    x   = at^2 + bt + c
    x'  = 2at + b
    x"  = 2a
    _________________________
    x" + x' = 2at + (2a + b) = t

so a = 1/2 , b = -1 , and c is free: x = t^2/2 - t + c .
Equivalently, since p(0) = 0 here, substitute u = x' and try
u = at + b :

    u  = at + b
    u' = a
    ----------------
    t = u' + u = at + (a+b)

so a = 1 , b = -1 , u = t - 1 , and x = integral u dt = t^2/2 - t + c .
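A numerical check of the particular solution x = t^2/2 - t (Python sketch, using finite differences):

```python
# Check x(t) = t^2/2 - t against x'' + x' = t with finite differences.
x = lambda t: t**2 / 2 - t

h = 1e-3
worst = 0.0
for t in (0.0, 1.0, 4.5):
    xp  = (x(t + h) - x(t - h)) / (2 * h)           # centered x'
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2   # centered x''
    worst = max(worst, abs(xpp + xp - t))
assert worst < 1e-5
```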
Frequency response
The "gain" is the ratio of the output amplitude to the physical signal
amplitude. In this case,

    gain = |B| / |A| = 1 / | omega_n^2 - omega^2 | .
The traditional thing to graph is - phi rather than phi: this graph
is constant zero for omega < omega_n and then switches discontinuously
to - phi = -pi for omega > omega_n .
[3] Damped systems: Frequency response
I displayed "Amplitude and Phase, Second Order" and set k = 3.24 and
b = .5 . In it, B = 1 .
The gain is largest when omega is near to the natural frequency of
the system. As b grows larger, the damping term dominates, and for
modest values of omega there is no longer a resonant peak.
The gain won't be maximal then (think of the case of b large), but you
should expect it to be relatively large.
18.03 Class 17, March 17, 2006
| b m k |/
| |----------- ______ |/
|---| |=======------|______|----/\/\/\------|/
| |----------- | |/
| | |/
|-------> |-------->
y(t) x(t)
We are driving the system by motion of the far end of the dashpot, while
keeping the far end of the spring constant.
[2] Let's see the equation of motion for this system. Arrange the
position parameter x so that x = 0
when the spring is relaxed.
The dashpot exerts a force proportional to the speed at which the piston
is moving through the cylinder.
This speed is (y-x)' . When y' > x' , the force is positive, so the
dashpot force is b(y-x)'
We know that there will be a sinusoidal system response, and that that
is the response we'll see very quickly, since the transients damp out.
We also know that we should try to express the sinusoidal system
response in terms of a gain and a phase lag with respect to the
physical input signal. Despite the appearance of the right hand side
of the equation, it's clear that we should take as physical input
signal the function y = B cos(omega t) , so we look to find a gain
g and phase lag phi such that the steady state response is
x = g B cos(omega t - phi) .

We also know that the gain and phi are computed by finding the
"complex gain" W(i omega) .
[4] The system was subjected to several input frequencies. One odd
thing appeared: for small frequency, the system response is *ahead* of
the system input: the phase lag phi is < 0 in that case. Also,
maximal gain seems to happen when the phase lag is zero.
As omega varies, this sweeps out a curve in the complex plane. To see
what that curve is, look first at the denominator. Its real part is
always 1 . When omega > omega_n , the imaginary part is positive:
the direction of movement is upward.
upward.
The angle gets reversed. So W(i omega) moves clockwise along that
circle.
omega = omega_n , and then comes to rest near - pi/2 as omega --->
infty.
4] z = e^{2it} u
0] z' = e^{2it} ( u' + 2i u )
1] z" = e^{2it} ( u" + 2i u' + 2i u' + (2i)^2 u )
------------------------------------------------------
e^{2it} t = e^{2it} ( u" + 4i u' + (4-4) u )
so u" + 4i u' = t
4i] v = at + b
v' = a
----------------
t = 4iat + (a + 4ib)
Suppose that I open TWO bank accounts and proceed to save at rates
q_1(t) and q_2(t) . Is this any different than opening ONE bank
account and saving at the rate q_1(t) + q_2(t) ? No:

    x1' - I x1 = q1
    x2' - I x2 = q2
    --------------------------
    (x1 + x2)' - I (x1 + x2) = q1 + q2 .

This is superposition. It lets you break up the input signal into
constituent parts, solve for them separately, and then put the results
back together. This is why it isn't so bad that we spent all that time
studying very special input signals.
Arg(a+bi) = arctan(b/a)   (valid for a > 0 ; otherwise adjust by pi)
Frequency response is about the amplitude and phase lag of a sinusoidal
(steady state) response of a system to a sinusoidal signal of some
frequency.
p(i omega) = (3 - omega^2) + 2i omega   [ for p(s) = s^2 + 2s + 3 ]
for this.
Good luck!
So strictly speaking the examples given are not periodic, but rather
they coincide with periodic functions for some period of time. Our
methods will apply to the genuinely periodic functions. Note that a
function of period P is also periodic of period 2P , 3P , and so
on; when we speak of THE period, we choose the smallest one.
[2] Sines and cosines are basic periodic functions. For this reason a
natural period to start with is P = 2 pi .
The numbers a0, a1, b1, a2, b2, ... are the "Fourier Coefficients"
of f(t) . We'll see why the odd choice of a0/2 for the constant
term shortly.
illustrate
this:
There are integrals for computing these coefficients, but using the
symmetry and averaging tricks below you can often avoid them. For
example, the constant term is the average value:

    Ave(f(t)) = a0/2 .
The only function which is both even and odd is the zero function. For
f(t) = f(-t) and f(t) = -f(-t) together imply that f(t) = -f(t) .
[The same argument shows that if a polynomial is even then it's a sum of
even powers of t ; if it's odd then it's a sum of odd powers of t . ]
The first of these is easy, since the product is odd and the interval
you are integrating over is symmetric. The others require some trig
identity which you can find in Edwards and Penney.
Only one of all these terms is nonzero: the cosine term with m = n ,
and since then am = an , we discover

    an = (1/pi) integral_{-pi}^{pi} f(t) cos(nt) dt .
1 -1 2
2 1 0
3 -1 2
[1] If f(t) is any decent periodic function of period 2pi , it has
exactly one expression as a Fourier series (*). There are integrals
for the coefficients, but one can often discover them without
evaluating these integrals.
[2] Example: the "standard squarewave" sq(t) = 1 for 0 < t < pi ,
-1 for -pi < t < 0 .
[3] Any way to get an expression (*) will give the same answer!
How to write it like (*) ? Well, there's a trig identity we can use:
A cos(omega t - phi) = a cos(omega t) + b sin(omega t) with

    a = A cos(phi) ,  b = A sin(phi) ;

(A, phi) are the polar coordinates of (a, b) again.
[Graph: the line t = (pi/L) x in the (x,t) plane, rising from the
origin to height pi at x = L .]
To find out, I got Matlab to sum the first 10 nonzero terms of the
Fourier series for (pi/4) sq(t) . At a jump t = a , the series
converges to the average value

    (f(a+) + f(a-))/2 .

The graph is even flatter along the plateaus, but still shows a sharp
overshoot near the discontinuities in sq(t) .
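The same experiment in Python instead of Matlab (a sketch):

```python
import math

# First 10 nonzero Fourier terms of (pi/4) sq(t):
# sin(t) + sin(3t)/3 + ... + sin(19t)/19.
def S(t, terms=10):
    return sum(math.sin(k * t) / k for k in range(1, 2 * terms, 2))

# At t = pi/2 this is 1 - 1/3 + 1/5 - ... , close to pi/4.
S_half = S(math.pi / 2)
assert abs(S_half - math.pi / 4) < 0.05

# The overshoot near the jump at t = 0 (the Gibbs phenomenon):
peak = max(S(0.01 * i) for i in range(1, 315))
assert peak > math.pi / 4
```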
[1] My muddy point from the last lecture: I claimed that the Fourier
series for f(t) converges wherever f is continuous. What does this
really say?
For example, evaluate the series for (pi/4) sq(t) at t = pi/2 ,
where sq(pi/2) = 1 :

    1 - 1/3 + 1/5 - 1/7 + ... = pi/4 ,

so it converges to pi/4 . And so on: there are infinitely many
summations like this contained in (*) .
We could calculate the coefficients, using the fact that f(t) is even
To find the constant term, remember that it's the average value of
f(t) , which is 0 :
[Example: the sum of ALL the reciprocal squares. This can be obtained
from the sum of odd reciprocal squares using the geometric series -
can you see how?]
In general, the integral of a periodic function is not periodic. But
it IS periodic if the average value of the function is zero. If you
think of this one term at a time, the point is that the integral of
each sine or cosine term is again periodic; only a nonzero constant
term integrates to something non-periodic.
x" + kx = k f(t)
x = B sin(omega t)
I showed the Harmonic Frequency Response Applet, with sine input, and
varied the natural frequency omega_n .
By superposition and Fourier series we can now handle ANY periodic
input signal. When the natural frequency omega_n of the system
matches the frequency of one of the sinusoidal terms in the Fourier
series of the input signal, we expect a near-resonant response: for
the squarewave,

    omega_n = 1, 3, 5, ...
The system is detecting information about the timbre of the input signal
here.
We can use Fourier series to analyze the system response more closely:
When omega_n^2 is very near to k^2 , k odd, but less than k^2 ,
the term

    sin(kt)/(omega_n^2 - k^2)

is a large negative multiple of sin(kt) . This appears on the applet.
Then when omega_n^2 passes k^2 the dominant term flips sign and
becomes a large positive multiple of sin(kt) .
You have been using this system for the past 40 minutes: this is how
the ear works. In the cochlea there is a row of hairs of different
lengths. They act like springs, with different natural frequencies,
and various hairs vibrate more intensely in response to various
different frequencies. Your ear acts as a Fourier analyzer: the
omega_n axis is the axis along the cochlea.
18.03 Class 23, April 7
[1] Model of an on/off process: a light turns on; first it is dark,
then it is light. The basic model is the Heaviside unit step function.
    lim_{t-->a} f(t)

You will also often care about the values just to the left of t = a ,
or just to the right. These are captured by the one-sided limits

    f(a-) = lim_{t-->a-} f(t)   and   f(a+) = lim_{t-->a+} f(t) .
Q1: What is the equation for the function which agrees with f(t)
between a and b ( a < b ) and is zero outside this window?

Ans: (3), that is ( u(t-a) - u(t-b) ) f(t) , which

    = 0 for t < a and for t > b .
Q'(t) = q(t) , where Q(t) is the total amount deposited in the
account. Say I deposit steadily at a rate of 1 dollar/year, and also
deposit 30 dollars all at once at t = 1 . I can model the cumulative
total deposit using the step function:

    Q(t) = c + t + 30 u(t-1)
What about the rate? For this we would need to be able to talk about
the derivative of u(t) , in such a way that its integral recovers
u(t) .
    delta(t) = u'(t)

The details of exactly how the deposit gets made don't matter. Maybe
the bank adds one dollar per millisecond: I don't care. All that
matters is that there is area 1 under the graph, and the nonzero
values are concentrated around t = 0 .
Using this we can write down a formula for the new rate of savings:
q(t) = 1 + 30 delta(t-1)
[4] We'll call piecewise continuous functions "regular." We can now add
in combinations of delta functions, called "singularity functions."
A combination of a regular function and a combination of delta functions
is a "generalized function":
For example,
q_r(t) = 1
It makes sense to say that Q'(t) = q(t) . Whenever you have a gap
in the graph of f(t) , so that f(a+) is different from f(a-) ,
the derivative will have a delta contribution:
[This is Oliver Heaviside's step function. He was the one who wrote
down Maxwell's equations in the compact vector form you see now on
"Let there be light" T-shirts.]
[6] When you fire a gun, you exert a very large force on the bullet
over a very short period of time. If we integrate F = ma = mx"
we see that a large force over a short time creates a sudden change in
the momentum, mx' . This is called an "impulse."
Q2: What does the graph of the generalized derivative of v(t) look
like?
(1) ^ ^
| |
v_0 | | v_0
| |
----------| |----------------
|
|
|________|
(2) ^
|
v_0 |
|
----------| ---------------
| |
|________|
|
| v_0
|
v
(3) ^
|
v_0 |
|
----------| --------------
| |
|_________|
(4)
---------- -------------
| |
|_________|
Ans: (1).
[7] People often want to know what the delta function REALLY IS.
One answer is that it is a symbol, representing a certain approximation
to reality and obeying certain rules.
The most we can ever know about the function f(t) is the collection
of readings we can take of it using various measuring devices.
I will make the assumption that the function m(t) itself is continuous
(or better).
[1] In real life one often encounters a system with unknown system
parameters.
Then we apply some input signal, and solve from this starting point.
For x' + kx = u(t) : when t > 0 , xp = 1/k , xh = c e^{-kt} , so,
with rest initial conditions,

    x = (1/k)(1 - e^{-kt})   for t > 0
      = 0                    for t < 0 .

This solution to (*) is called the "unit step response" of the system.
I'll denote the unit step response by v(t) today.
NB: the solution is continuous, despite the fact that the signal is not.
Solving ODEs increases the degree of regularity.
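A quick check of the unit step response (Python sketch; k = 2 is a made-up sample value):

```python
import math

# Unit step response of x' + k x = u(t), rest initial conditions:
# v(t) = (1/k)(1 - e^{-kt}) for t > 0. Compare with Euler integration.
k = 2.0
v = lambda t: (1 - math.exp(-k * t)) / k

t, x, h = 0.0, 0.0, 1e-4        # rest initial conditions
while t < 3.0:
    x += h * (1 - k * x)        # x' = u(t) - k x, with u(t) = 1 here
    t += h
assert abs(x - v(3.0)) < 1e-3   # continuous despite the jump in u(t)
```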
The same can be done with delta(t) as the input signal: the solution
is the "unit impulse response," or "weight function,"

    w(t) = e^{-kt}   for t > 0
         = 0         for t < 0 .
(a) It doesn't really matter when you start the clock, if the system
you are looking at is time-invariant: the response to u(t-a) is
v(t-a) , which is

    = 0 for t < a .
the derivative of the unit step response is the unit impulse response.
The unit step and unit impulse functions are very simple signals,
and the system response gives a very clean view of the system itself.
They determine the system (assuming it is LTI), and we'll see next how
the unit impulse response can be used to reconstruct the system
response to ANY signal. This process will work for p(D) of any order.
which we solve using the usual methods. This solution (times u(t) )
is the "unit impulse response" or "weight function" of mD^2 + bD + kI .
[6] Convolution. I claim that the weight function w(t) --- the
solution to p(D)x = delta(t) with rest initial conditions --- contains
complete data about the LTI operator p(D) (and so about the system it
represents).
Strike a system and watch it ring. That gives you enough information to
predict the system response to ANY input signal!
In fact there is a formula which gives the system response (with rest
initial conditions) to any input signal q(t) as a kind of "product"
of w(t) with q(t) .
Convolution
Suppose phosphates from a farm run off fields into a lake, at a rate
q(t) which varies with the seasons. For definiteness let's say
q(t) = 1 + cos(bt)
Once in the lake, the phosphate decays: it's carried out by the
outgoing stream at a rate proportional to the amount in the lake:

    x' + a x = q(t) .
The weight function of this system is w(t) = e^{-at} (for t > 0 ).
It tells you how much of each pound is left after t units of time
have elapsed. If c pounds go in at time tau , then at time t

    w(t-tau) c

pounds remain.
Fix a time t . We'll think about how x(t) gets built up from the
contributions made during the time between 0 and t . We'll need
another variable, tau , to denote the time at which a contribution to
the input was made.

Divide time into very small intervals of length Delta tau (1 second
maybe). The deposit made during the interval near tau is about
q(tau) Delta tau , and by time t it has decayed to
w(t - tau) q(tau) Delta tau . Adding up the contributions and letting
Delta tau --> 0 gives

    (#)   x(t) = integral_0^t w(t - tau) q(tau) d tau .

We derived this for a first order example, but (#) works in general,
and for systems of any order, not just first order systems. It's
superposition of infinitesimals.
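The formula (#) can be tested numerically (Python sketch; the decay rate a = 1 and the input q(t) = 1 + cos(2t) are made-up values in the spirit of the example):

```python
import math

# (#): x(t) = integral_0^t w(t-tau) q(tau) d tau with w(t) = e^{-a t}
# should reproduce the solution of x' + a x = q(t), x(0) = 0.
a = 1.0
w = lambda t: math.exp(-a * t)
q = lambda t: 1 + math.cos(2 * t)

def conv(t, n=4000):
    # midpoint Riemann sum for the convolution integral
    d = t / n
    return d * sum(w(t - (i + 0.5) * d) * q((i + 0.5) * d)
                   for i in range(n))

# Independent check: Euler integration of x' = q(t) - a x.
t, x, h = 0.0, 0.0, 1e-4
while t < 2.0:
    x += h * (q(t) - a * x)
    t += h

diff = abs(conv(2.0) - x)
assert diff < 1e-3
```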
x(0) = 0
The graphs in the Mathlet show this effect; in fact from the graph we
could deduce the values of a and b that the programmer chose.

This works for any order. In the second order case, you think of the
input signal as made up of many little blows, each producing a
decaying ringing into the future. They get added up by superposition,
and this is the convolution integral. It is sometimes called the
superposition integral.
 _____________________________________________________
|                                                     |
|  f(t) * g(t) = integral_0^t f(tau) g(t-tau) d tau   |
|_____________________________________________________|

In particular,

 _____________________________________________________
|                                                     |
|  x(t) = w(t) * q(t)   (rest initial conditions)     |
|_____________________________________________________|
(f(t)*g(t))*h(t) = f(t)*(g(t)*h(t))
The book carries out the integration manipulation you need to do to see
this.
            ____________                      ____________
           |            |                    |            |
h(t) ----->|   "g(t)"   |---> g(t)*h(t) ---->|   "f(t)"   |---> f(t)*(g(t)*h(t))
           |____________|                    |____________|

            ____________                      ____________
           |            |                    |            |
delta(t)-->|   "g(t)"   |---> g(t) --------->|   "f(t)"   |---> f(t)*g(t)
           |____________|                    |____________|
* is also commutative: f(t)*g(t) = g(t)*f(t) . And delta(t) acts as
a unit for * :

    w(t)*delta(t) = w(t) ,

consistent with the fact that the response to delta(t) is how we
compute w(t) .
so v^{(n)}(0+) = 1/a_n
poles
------------------------------------------------------------------------
| The t domain
|
| functions f(t) of time
|
| ODEs relating them
|
| convolution f(t)*g(t)
|
------------------------------------------------------------------------
            |                        ^
          L |                        | L^{-1}
            v                        |
------------------------------------------------------------------------
| The s domain
|
| s is complex
|
| beautiful functions F(s) , often rational = poly/poly
|
| and algebraic equations relating them
|
| ordinary multiplication of functions
|
| systems represented by their transfer functions W(s)
|
------------------------------------------------------------------------
The use in ODEs will be to apply L to an ODE, solve the resulting very
simple algebraic equation in the s world, and then return to reality
using the "inverse Laplace transform" L^{-1}.
The "s-shift rule": for a constant z ,
L[e^{zt} f(t)] = F(s-z).   (*)
This calculation (*) is more powerful than you may imagine at first,
since z may be complex. Using linearity and
cos(omega t) = (e^{i omega t} + e^{-i omega t})/2
we find
L[cos(omega t)] = s/(s^2 + omega^2).
Using
sin(omega t) = (e^{i omega t} - e^{-i omega t})/(2i)
we find
L[sin(omega t)] = omega/(s^2 + omega^2).
Also
L[delta(t-b)] = e^{-bs}
In particular,
L[delta(t)] = 1
Compute: integrate by parts with
u = e^{-st}       du = -s e^{-st} dt
dv = f'(t) dt     v = f(t)
to find (under rest initial conditions) L[f'(t)] = ... = s F(s)
and so
Definition:
Rules:
of f'(t) at t = 0 .
Computations:
L[1] = 1/s
L[e^{at}] = 1/(s-a)
L[delta(t-a)] = e^{-as}
Compute: integrate by parts with
u = e^{-st}       du = -s e^{-st} dt
dv = f'(t) dt     v = f(t)
We continue to assume that f(t) doesn't grow too fast with t (so that
the integral defining F(s) converges for Re(s) sufficiently large).
This means that for s sufficiently large, the evaluation of the first
term at infinity becomes zero. Since we are always assuming rest
initial conditions, the evaluation at zero is also zero. Thus
... = s F(s)
Now, what is f'(t) ? If f(t) has discontinuities, we must mean the
generalized derivative. There is one discontinuity in f(t) that we
can't just wish away: f(0-) = 0 , while we had better let f(0+) be
whatever it wants to be. We have to expect a discontinuity at t = 0 .
and so
This is what the book tells us, and this is typically what we will use.
But remember, (1) it is only good if f(t) is continuous for t > 0 ,
and (2) it does NOT compute the LT of f'(t), but rather of (f')_r(t) .
[2] In summary the use of Laplace transform in solving ODEs goes like
this:
ODE for x(t)  --- L --->  algebraic equation for X(s)
                               |
                               | solve
                               v
    x(t)  <--- L^{-1} ---   X(s)
For this to work we have to recover information about f(t) from F(s).
There isn't a formula for L^{-1}; what one does is look for parts of
F(s) in our table of computations. It's an art, like integration.
There is no free lunch.
Theorem: If f(t) and g(t) are generalized functions with the same
Laplace transform, then f(a+) = g(a+), f(a-) = g(a-) for every a ,
and the singular parts coincide: f_s(t) = g_s(t) -- that is, any
occurrences of delta functions are the same in f(t) as in g(t).
so X = 5/(s+3) + 1/((s+1)(s+3))
We look for a partial fractions decomposition
1/((s+1)(s+3)) = a/(s+1) + b/(s+3) .
Multiply through by (s+1) :
1/(s+3) = a + (s+1)(b/(s+3))
and set s = -1 :
1/(-1+3) = a + 0 : a = 1/2 .
This process "covers up" occurrences of the factor (s+1), and also
all unwanted unknown coefficients. It gives b too: set s = -3 :
1/(-3+1) = 0 + b : b = -1/2.
So X = (1/2)/(s+1) + (9/2)/(s+3)
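The cover-up arithmetic can be verified by evaluating both forms of X at a few sample points (a quick numerical check, not part of the lecture):

```python
# X before partial fractions ...
def lhs(s):
    return 5/(s + 3) + 1/((s + 1)*(s + 3))

# ... and after the cover-up method: (1/2)/(s+1) + (9/2)/(s+3)
def rhs(s):
    return 0.5/(s + 1) + 4.5/(s + 3)

# agreement at any non-pole sample points confirms the decomposition
checks = [abs(lhs(s) - rhs(s)) for s in (0.0, 1.0, 2.5, 10.0)]
```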
The graph of f_a(t) is the same as the graph of f(t) but shifted to
the right by a units. For t < a , f_a(t) = 0 . a >= 0 for us.
L[u(t-a)] = e^{-as}/s
Rules:
(ignoring singularities at t = 0 )
Computations:
L[1] = 1/s
L[e^{at}] = 1/(s-a)
L[delta(t-a)] = e^{-as}
We'll assume that f(t) and f'(t) are continuous for t > 0 and
ignore singularities at t = 0 , so that
So then
L[f"(t)] = s (s F(s) - f(0+)) - f'(0+) = s^2 F(s) - s f(0+) - f'(0+)
Note that this is EASY to solve using our old linear methods:
by inspection (or undetermined coefficients, or the Key Formula)
xp = 1/5 is a solution; the general solution is this plus a homogeneous
solution, which you choose to satisfy the initial conditions.
Nevertheless we have some technique to show you in working it out
using LT.
use
+- 2i .)
F(s) = (2s+5)/(s^2+4)
Now look at second term. We'll use partial fractions for it, but notice
that we also complete the square: there are constants a, b, c, such
that
Note that I've completed the square and written the numerator using
(s+1), in anticipation that I'll need things in that form when it
comes time to appeal again to the s-shift rule to recognize things as
Laplace transforms.
5/(1+4) = a or a = 1.
5/(-1+2i) = b(2i) + c.
Notice how useful it was to have things expressed in terms of s+1 here.
We can use this to solve for b and c, which are supposed to be real.
Rationalizing the denominator,
We can either find L^{-1} of this and add it to what we did before,
or (better) not have rushed to find L^{-1} before and assemble things
now:
x = 1 + e^{-t}(cos(2t) + 2 sin(2t)) .
L[w(t)] = 1/p(s)
is the "transfer function." It has the property that for any complex
number r, x = W(r)e^{rt} satisfies p(D)x = e^{rt} .
And the unit step response v(t) is the solution to p(D)v = u(t)
with rest initial conditions: apply LT:
W(s) = 1/(s^2 + 2s + 5)
On Monday I'll try to put this all together, and talk about what
the Laplace transform is really good at.
18.03 Class 29, Apr 24
[1] I introduced the weight function = unit impulse response with the
mantra
that you know a system by how it responds, so if you let it respond to
the
simplest possible signal (with the simplest possible initial conditions)
then you should be able to determine the system parameters.
How?
p(s) W(s) = 1
That is, Laplace transform is the device for extracting the system
parameters from the unit impulse response.
W(s) = (2)/((s+1)^2+4)
and
so we discover, if you like, that the mass is 1/2, the damping constant
is 1, and the spring constant is 5/2.
[Of course we knew that, too: the impulse response is (for t > 0) a
homogeneous system response, so the roots of the characteristic
polynomial
are visible and must be - 1 +- 2i . The roots don't quite determine
the polynomial, since you can always multiply through by a constant
and get another polynomial with the same roots. If you normalize to
s^2 + bs + k then
b = - (sum of roots) = 2
k = product of roots = 5
The constant is the mass, and this can be derived too, from w'(0) = 2 :
the change in momentum is 1, so if the change in velocity is to be 2,
the mass must be 1/2.]
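This extraction of system parameters can be sketched in a few lines of code (an illustration under the assumptions just stated: roots -1 +- 2i read off from the ringing, and w'(0+) = 2):

```python
# Roots of the characteristic polynomial, read off the impulse response.
r1 = complex(-1, 2)
r2 = complex(-1, -2)

b = -(r1 + r2).real    # normalized damping coefficient: 2
k = (r1 * r2).real     # normalized spring coefficient: 5

# A unit impulse delivers unit momentum, so m * w'(0+) = 1.
w_prime_0 = 2.0
m = 1 / w_prime_0

# p(s) = m s^2 + (m b) s + (m k): mass, damping constant, spring constant.
coeffs = (m, m * b, m * k)
```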
[2] A few weeks ago we described the system response (with rest initial
conditions) to a general input signal q(t) in terms of the unit
impulse
response w(t):
p(s) X = F(s)
so X = W(s)F(s)
W(i omega) is the "complex gain." (Here we are supposing that the
"physical input signal," with respect to which we should be measuring
the
gain and the phase lag, is just the input signal.)
Just try to understand |1/s| ; put the argument aside for another day.
The graph of |1/s| over the complex s plane looks like a tent, with a
tent post stuck in the ground at s = 0 . Maybe it's for this reason
that such a point is called a "pole." In the example W(s) = 1/p(s)
above,
r1 = - 1 + 2i
r2 = - 1 - 2i
are the roots. W(s) then becomes infinite when s comes to be one of
r1 or r2 . It falls off towards zero when s moves away. The graph
of |1/p(s)| is a tent with two poles.
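The tent picture can be probed numerically: sampling |W(i omega)| along the imaginary axis for p(s) = s^2 + 2s + 5 (the polynomial with roots -1 +- 2i), the gain should peak near the omega where i*omega passes closest to the pole at -1 + 2i. A short sketch:

```python
def p(s):
    # characteristic polynomial with roots -1 +- 2i
    return s*s + 2*s + 5

# Sample the amplitude response |W(i omega)| = |1/p(i omega)|.
omegas = [k * 0.01 for k in range(1, 1000)]
gains = [abs(1 / p(complex(0, om))) for om in omegas]
peak_omega = omegas[max(range(len(gains)), key=gains.__getitem__)]
# Calculus gives the exact peak at omega = sqrt(3).
```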
[5] Here's the vision that unifies most of what we have done in this
course
so far:
You have a system (a black box, with springs and masses and dashpots,
for example) which you wish to understand. This means really that you
want to predict how it responds to input signals.
We will only be able to analyze systems which are LINEAR and TIME
INVARIANT: superposition of input signals produces superposition of
responses, and delaying an input signal just delays the response.
Graph |W(s)| . This will be a surface lying over the complex plane.
The intersection of the graph of |W(s)| with the vertical plane lying
over the imaginary axis is the amplitude response curve (extended to an
even function, allowing negative omega).
If you increase the damping, the poles move deeper into negative real
part
space, and eventually the two humps in the frequency response curve
merge.
If you have a higher order system, you get more poles, and a more
complicated
amplitude response curve.
18.03 Class 31, April 28, 2006
[1] There are two fields in which rabbits are breeding like rabbits.
Field 1 contains x(t) rabbits, field 2 contains y(t) rabbits.
In both fields the rabbits breed at a rate of 3 rabbits per rabbit per
year.
Note that the rabbits cancel, so the units are (year)^{-1} .
They can also leap over the hedge between
the fields. The grass is greener in field 2, so rabbits from field 1
jump at the rate of 5 yr^{-1} , while rabbits from field 2 jump only at
the
rate of 1 yr^{-1}.
The net growth rate of the field 1 population is 3 - 5 = -2 because of
all the jumping, and the net growth rate in field 2 is 3 - 1 = 2. On
the other hand, each population gains the rabbits jumping in from the
other field. So the system looks like
x' = -2x + y
y' = 5x + 2y ,
an example of the general form
x' = ax + by
y' = cx + dy
x_1 = e^{3t} ,  y_1 = 5e^{3t}
x_2 = e^{-3t} ,  y_2 = -e^{-3t}
x = a e^{3t} + b e^{-3t}
y = 5a e^{3t} - b e^{-3t}
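If I'm reading the rates correctly, the system behind these solutions is x' = -2x + y, y' = 5x + 2y (net rates -2 and +2, cross terms from the jumping). Here is a finite-difference check that the claimed general solution satisfies it (my own verification sketch):

```python
import math

def x(t, a=1.0, b=1.0):
    return a*math.exp(3*t) + b*math.exp(-3*t)

def y(t, a=1.0, b=1.0):
    return 5*a*math.exp(3*t) - b*math.exp(-3*t)

# Centered differences approximate x' and y' at t = 0.7.
h, t = 1e-6, 0.7
xdot = (x(t+h) - x(t-h)) / (2*h)
ydot = (y(t+h) - y(t-h)) / (2*h)
err1 = abs(xdot - (-2*x(t) + y(t)))   # residual of x' = -2x + y
err2 = abs(ydot - (5*x(t) + 2*y(t)))  # residual of y' = 5x + 2y
```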
We are studying
x' = ax + by
(*)
y' = cx + dy
A = | a b |
| c d |
In these notes I will use Matlab notation and write this array as
[ a b ; c d ]
We write the unknowns as a column vector u = [ x ; y ] . Then
| a b | | x |   | ax + by |
| c d | | y | = | cx + dy |
or [ a b ; c d ][ x ; y ] = [ ax+by ; cx+dy ]
u' = Au
x" - x' + 4x = 0
We can derive a first order linear system from this, by the trick of
defining
y = x'
so then
Together we have
x' = y
y' = -4 x + y
A = [ 0 1 ; -4 1 ]
We can see more precisely what the trajectories are in this case, by
solving x" - x' + 4 x = 0 directly.
[5] It turns out that the same system models the relationship between
Romeo and Juliet. The MIT Humanities Department has analyzed the plot of
Shakespeare's play and found the following. If R denotes Romeo's love
for Juliet, and J denotes Juliet's love for Romeo, then
R' = J
J' = -R + 4J
Romeo is a simple soul: his love grows in proportion to how much she
loves him. Juliet is the more complex. She has a healthy self
awareness; if she loves him, that very fact causes her to love him
more. On the other hand, if he seems to love her, she gets frightened
and starts to love him less.
Let's start the action at (1,0). So Romeo is fond of Juliet but she is
neutral
towards him. However, she does notice that he is fond of her, and this
makes
her somewhat hostile. As she becomes more distant, his affection wanes.
This continues; presently he stays away from her, and this very fact
makes her more interested. Her interest starts to grow. He then starts
to feel better towards her, but still stays away, and now both his
attitude and hers cause her to feel progressively more well disposed.
By the time the romance comes back around to where it started, its
amplitude has increased: the cycle is a spiral, and each swing around
it is wilder than the one before.
det(A) = ad - bc
We have found:
The "Linear Phase Portraits: Matrix Entry" Mathlet shows that some
trajectories seem to be along straight lines. Let's find them first.
That is to say, we are going to look for a solution of the form
u(t) = r(t) v
One thing for sure: the velocity vector u'(t) also points in the same
(or reverse) direction as u(t). So for any vector v on this
trajectory,
A v = lambda v
for some number lambda. This Greek letter is always used in this
context.
lambda v = (lambda I) v
0 = A v - (lambda I) v = (A - lambda I) v
det(A - lambda I) = 0
(the condition for the system (A - lambda I) v = 0 to have a nonzero
solution).
In our example, det(A - lambda I) = (1 - lambda)^2 - 4
= 1 - 2 lambda + lambda^2 - 4
= lambda^2 - 2 lambda - 3 = (lambda - 3)(lambda + 1) ,
so the roots are 3 and -1 . This is the "characteristic polynomial"
of A .
[4] Now we can find those special directions. There is one line for
lambda_1 and another for lambda_2 . We have to find nonzero solution
v to
(A - lambda I) v = 0
For lambda = -1 : (A + I) [ 1 ; -1 ] = 0
and you can check this directly. Any such v (even zero) is called
an "eigenvector" of A.
so (since v is nonzero)
r' = lambda r
r = c e^{lambda t}
u = e^{-t} [1;-1]
In fact we've found all solutions which occur along that line:
u = c e^{-t} [1;-1]
General fact: the eigenvalue turns out to play a much more important
role than it looked like it would: the straight line solutions are
*exponential* solutions, e^{lambda t} v , where lambda is an
eigenvalue of the matrix and v is a nonzero eigenvector for this
eigenvalue.
For lambda = 3 , A - 3I is proportional to [ -1 1 ; 1 -1 ] ,
giving eigenvector [ 1 ; 1 ] and eigensolution
e^{3t} [1;1]
The general solution is a linear combination of
the two eigensolutions (as long as there are two distinct eigenvalues).
In our example,
u = a e^{-t} [1;-1] + b e^{3t} [1;1] ; take a = b = 1 for definiteness.
When t is very negative, -10, say, the first term is very big and the
second tiny: the solution is very near the line through [1 ; -1].
As t gets near zero, the two terms become comparable and the solution
curves around. As t gets large, 10, say, the second term is very big
and the first is tiny: the solution becomes asymptotic to the line
through
[1 ; 1].
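The dominance of one exponential term at each end of time can be seen numerically (a small sketch; I take the combination with both coefficients equal to 1):

```python
import math

# u(t) = e^{-t} [1, -1] + e^{3t} [1, 1]
def u(t):
    return (math.exp(-t) + math.exp(3*t), -math.exp(-t) + math.exp(3*t))

# Slope y/x of the position vector: near -1 in the far past
# (first term dominates), near +1 in the far future (second term).
slope_past = u(-10)[1] / u(-10)[0]
slope_future = u(10)[1] / u(10)[0]
```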
[6] Comments:
p_A(lambda) = (a-lambda)(d-lambda) - bc
= lambda^2 - (a+d) lambda + (ad - bc) = lambda^2 - (tr A) lambda + det A,
so the characteristic polynomial is determined by the trace and the
determinant.
Both these facts hold in higher dimensions as well. Most real numbers we
know about are eigenvalues of symmetric matrices - the mass of an
elementary
particle, for example.
18.03 Class 33, May 3
[1] The method for solving u' = Au that we devised on Monday is this:
(3) For each eigenvalue find a nonzero eigenvector --- v such that
Av = lambda v or (A - lambda I) v = 0
These are also called "normal modes." The general solution is a linear
combination of them.
[2] This makes you think there are always ray solutions. But what about
the Romeo and Juliet example, which spirals and obviously has no such
solution? Or, what about
A = [ 1 2 ; -2 1 ] .
Let's apply the method and see what happens. tr(A) = 2 , det(A) = 5,
so
(As always for real polynomials, the roots (if not real) come as complex
conjugate pairs.)
We could abandon the effort at this point, but we had so much fun and
success with complex numbers earlier that it seems we should carry on.
A - (1+2i)I : [ - 2i , 2 ; -2 , -2i ][ ? ; ? ] = [ 0 ; 0 ]
Standard method: use the entries in the top row in reverse order with
one sign changed: [ 2 ; 2i ] or, easier, in this case,
v_1 = [ 1 ; i ].
(Check with the second row: -2 (1) - 2i (i) = -2 + 2 = 0 .)
So we get the complex eigensolution
e^{(1+2i)t} [ 1 ; i ] .
Its real and imaginary parts are themselves solutions of the system of
equations:
e^t [ cos(2t) ; -sin(2t) ]  and  e^t [ sin(2t) ; cos(2t) ] .
These are two independent real solutions. Both spiral around the origin,
clockwise, while fleeing away from it exponentially. They satisfy
u(0) = [ 1 ; 0 ] and u(0) = [ 0 ; 1 ] respectively.
For the other eigenvalue, 1-2i , the same method gives the eigenvector
v_2 = [ 1 ; -i ]
so another normal mode is e^{(1-2i)t} [ 1 ; -i ] .
This is complex conjugate to the one we had before, so its real and
imaginary parts give the same solutions we had before (up to sign).
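The real and imaginary parts of e^{(1+2i)t} [1; i] are e^t [cos 2t; -sin 2t] and e^t [sin 2t; cos 2t]; here is a finite-difference check that both satisfy u' = Au (my own verification sketch):

```python
import math

A = ((1, 2), (-2, 1))

def u1(t):
    # real part of e^{(1+2i)t} [1; i]
    return (math.exp(t)*math.cos(2*t), -math.exp(t)*math.sin(2*t))

def u2(t):
    # imaginary part of e^{(1+2i)t} [1; i]
    return (math.exp(t)*math.sin(2*t), math.exp(t)*math.cos(2*t))

def residual(u, t, h=1e-6):
    # max component of u'(t) - A u(t), with u' by centered difference
    du = [(u(t+h)[i] - u(t-h)[i]) / (2*h) for i in range(2)]
    Au = [A[i][0]*u(t)[0] + A[i][1]*u(t)[1] for i in range(2)]
    return max(abs(du[i] - Au[i]) for i in range(2))

res = max(residual(u1, 0.3), residual(u2, 0.3))
```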
Since the real part of the eigenvalues is positive, all these solutions
spiral outward: the origin is unstable ("unstable").
[3] Another apparent problem: repeated eigenvalues. This too is not a
failure of the method at all. There are in fact ray solutions, but they
all lie along a single line. Take for example
A = [ -2 1 ; -1 0 ]
A - (-1)I = [ -1 1 ; -1 1 ][ ? ; ? ] = [ 0 ; 0 ] : v_1 = [ 1 ; 1 ]
u_1 = e^{-t} [ 1 ; 1 ]
But we need another solution. Here is how to find one; I won't go into
details, just give you the method.
Write down the same matrix A - lambda_1 I but now find a vector w
such that
(A - lambda_1 I) w = v_1 .
Then
u_2 = e^{lambda_1 t} ( t v_1 + w )
is a second solution.
In our case:
[ -1 1 ; -1 1 ] [ ? ; ? ] = [ 1 ; 1 ] : w = [ 0 ; 1 ] , so
u_2 = e^{-t} ( t [1;1] + [0;1] ) = e^{-t} [ t ; t+1 ]
[0;1] isn't the only vector that works here; [0;1] + c v_1 does too
u = a u_1 + b u_2 .
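A quick check that the generalized-eigenvector recipe really produced a solution (verification sketch for A = [-2 1; -1 0], v_1 = [1;1], w = [0;1]):

```python
import math

A = ((-2, 1), (-1, 0))

def u2(t):
    # candidate second solution e^{-t}(t v_1 + w) = e^{-t} [t ; t+1]
    return (t*math.exp(-t), (t + 1)*math.exp(-t))

h, t = 1e-6, 0.5
du = [(u2(t+h)[i] - u2(t-h)[i]) / (2*h) for i in range(2)]
Au = [A[i][0]*u2(t)[0] + A[i][1]*u2(t)[1] for i in range(2)]
err = max(abs(du[i] - Au[i]) for i in range(2))
```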
A = [ 2 0 ; 0 2 ]
A - lambda_1 I : [ 0 0 ; 0 0 ] [ ? ; ? ] = [ 0 ; 0 ]
Now ANY vector is an eigenvector! Instead of only one line you get the
entire plane. For any vector v ,
e^{2t} v
In the 2x2 case, if the eigenvalue is repeated you are in the defective
case unless the matrix is precisely [ lambda_1 , 0 ; 0 , lambda_1 ]
For larger square matrices this becomes the story of Jordan form.
To learn more about all this you should take 18.06 or 18.700.
Comparing coefficients,
so the two numbers tr(A) and det(A) , extracted from the four numbers
a, b, c, d, are determined by the eigenvalues. Conversely, they
determine the eigenvalues, as the roots: by the quadratic formula,
lambda = ( tr(A) +- sqrt( tr(A)^2 - 4 det(A) ) ) / 2 .
The eigenvalues are real and different from each other if
det(A) < tr(A)^2 / 4 .
Notice that if the eigenvalues are not real, their real part is
tr(A)/2 .
If the eigenvalues are real, they have the same sign exactly when their
product is positive, and that sign is positive if their sum is also
positive.
Relationship between tr and det vs eigenvalues:

              det
               ^
        .      |<----- purely imaginary
         .  complex roots  .
          .    |    .
           .   |   .
            .  |  .
             ..|..
  ------------ ....... ------------> tr

The dotted curve is the critical parabola det = tr^2/4 : complex roots
above it, real roots below it.
[2] Spirals. We have seen that when the eigenvalues are non-real, we get
spirals. Two more comments on this:
(2) We can tell which you get by thinking about what u' is at some
point, say [ 1 ; 0 ] , where u' = [ a ; c ] (the first column of
A). If c > 0 the spiral runs counterclockwise;
if c < 0 , it runs clockwise.
[3] Saddles. When the eigenvalues are real and of opposite sign, we
have a saddle. There are two eigenlines, one with the positive
eigenvalue and the other with negative. Normal modes along one move
out, and along the other move in. The general solution is a
combination of these two.
Every other solution eventually converges to the outgoing eigenline.
By moving the trace slider you change the relative size of the
eigenvalues; the eigenlines themselves are not determined by the
eigenvalues. The upper left box lets us explore them. One thing you
can do is rotate the whole picture. The other thing you can do is
change the angle between the eigenlines.
[4] Nodes: When the eigenvalues are real and of the same sign, but
distinct, we get a node.
Eg A = [ -2 , 1 ; 1 , -2 ] has characteristic polynomial
lambda^2 + 4 lambda + 3 = (lambda + 1)(lambda + 3) ,
so the eigenvalues are -1 and -3. On the Mathlet (with tr = -4,
det = 3, s = 0, theta = pi/2) you can see that the eigenlines are
again of slope +- 1 : slope +1 with eigenvalue -1 and the other with
eigenvalue -3. You can check this:
[ -2 , 1 ; 1 , -2 ][ 1 ; 1 ] = [ -1 ; -1 ]
[ -2 , 1 ; 1 , -2 ][ 1 ; -1 ] = [ -3 ; 3 ]
Both normal modes decay to zero, but the one with eigenvalue -3
decays much faster: so the non-normal mode trajectories become tangent
to the eigenline with smaller |lambda|.
det
^
. |<-----centers .
. | . <--- stars or
. stable | unstable . defective nodes
. spirals | spirals .
. | .
. | .
stable . | . unstable
nodes .. | .. nodes
.. | ..
------------ ....... ------------> tr
The Supplementary Notes describe the phase portraits occurring in each
of these regions: the type of portrait is determined by the
eigenvalues.
[6] Stability: All linear systems fall into one of the following
categories:
x" + bx' + kx = 0
x' = y
y' = - kx - by
A = [ 0 1 ; -k -b ]
For the companion matrix, A - lambda_1 I = [ -lambda_1 , 1 ; * , * ] ,
so
[ -lambda_1 , 1 ; * , * ] v = 0
gives the eigenvector v = [ 1 ; lambda_1 ] .
Question: where in the (tr, det) plane do we have k > 0 and
overdamping?
Ans: The part of the upper left quadrant which is below the critical
parabola. There the eigenvalues are real and negative, which means
that a solution x(t) crosses the axis
x = 0
at most once, and can have at most one critical point (zero slope).
For example, take x" + (3/2)x' + (1/2)x = 0 . The roots of the
characteristic polynomial, i.e. eigenvalues, are -1/2 and -1 , with
eigenvectors [ 1 ; -1/2 ] and [ 1 ; -1 ] respectively, so basic
solutions are
e^{-t/2} [ 1 ; -1/2 ]  and  e^{-t} [ 1 ; -1 ] .
A general solution x = c e^{-t/2} + d e^{-t} crosses zero at most
once, has at most one critical point, and decays to zero as
t ---> infty.
For t large the solution is dominated by the slower term c e^{-t/2} ,
whose derivative is (-c/2) e^{-t/2} . This explains why for t >> 0 the
trajectory becomes tangent to the eigenline through [ 1 ; -1/2 ] .
Recall from day one that x' = ax with initial condition x(0) has
solution
x = x(0)e^{at} . (*)
equations?
Note that the initial value u(0) is a vector, and u(t) is a
vector-valued function. So the expression e^{At} must denote a matrix,
or rather a matrix-valued function.
What could e^{At} be? What is its first column? Recall that the
first column of any matrix B is the product B[1;0] . Combining
this with (**) we see: the first column of e^{At} is the solution
with initial condition [ 1 ; 0 ] .
Similarly, the second column is the solution with initial condition
[ 0 ; 1 ] . Each column is therefore a combination of the basic
solutions, Phi(t) c_1 and Phi(t) c_2 ; that is, e^{At} = Phi(t) C
for a constant matrix C .
The right hand matrix is there to get the initial conditions right.
Evaluate at t = 0 :
I = Phi(0) C , so C = Phi(0)^{-1} and e^{At} = Phi(t) Phi(0)^{-1} .
[ a b ; c d ]^{-1} = (1/det A) [ d -b ; -c a ]
--- the diagonal terms get their positions reversed, and the
off-diagonal terms change sign.
Phi(0) = [ 1 , 1 ; -1/2 , -1 ]
Phi(0)^{-1} = [ 2 , 2 ; -1 , -2 ] .
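Putting the pieces together for this example, e^{At} = Phi(t) Phi(0)^{-1} can be built directly; at t = 0 it must give the identity matrix (a small sketch using the two basic solutions above):

```python
import math

def phi(t):
    # fundamental matrix whose columns are the basic solutions
    # e^{-t/2} [1; -1/2] and e^{-t} [1; -1]
    return ((math.exp(-t/2), math.exp(-t)),
            (-0.5*math.exp(-t/2), -math.exp(-t)))

def matmul(P, Q):
    return tuple(tuple(sum(P[i][k]*Q[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

phi0_inv = ((2, 2), (-1, -2))   # (1/det)[d -b; -c a] with det = -1/2

def exp_At(t):
    return matmul(phi(t), phi0_inv)

E0 = exp_At(0.0)   # should be the identity matrix
```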
If the number of columns in A is the same as the number of rows in
B , then we can form the "product matrix" AB.
[2] I know you don't want to hear more about Romeo and Juliet. This is
about amorous armadillos, named Xena and Yan.
x' = - x + 3y
y' = -3x - y
Matrix [ -1 , 3 ; -3 , -1 ] .
Eigenvalues: -1 +- 3i
so the trajectories are stable spirals; at [ 1 ; 0 ] the velocity is
[ -1 ; -3 ] , so they run clockwise.
Do we want more? We already know that this romance will peter out into
dull acceptance. Maybe that's enough information about X and Y's love life
for us.
One real solution, for the record, is e^{-t}[sin(3t) ; cos(3t)] .
e^{A0} = I
(e^{At})^{-1} = e^{-At}
x' = - x + 3 y + a
y' = -3 x - y + b
u' = A u + c
u_p = - [-.1,-.3;.3,-.1] c
Specifically, then,
u_p = [.1,.3;-.3,.1] [10;40] = [13;1] .
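A one-line check that u_p really is an equilibrium of u' = Au + c (verification sketch):

```python
# A u_p + c should be the zero vector.
A = ((-1, 3), (-3, -1))
c = (10, 40)
u_p = (13, 1)

residual = tuple(A[i][0]*u_p[0] + A[i][1]*u_p[1] + c[i] for i in range(2))
```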
u' = Au + q(t)
x' = g(x) = (3-x) x
(the deer population by itself, in suitable units). When 0 < x < 3 ,
x' > 0 .
When x > 3 , x' < 0 , and, unrealistically, when x < 0 , x' < 0 too.
------<-----*---->------*------<--------
[2] One day, a wolf swims across from the neighboring island, pulls
himself
up the steep rocky shore, shakes the water off his fur, and sniffs the
air.
Two wolves, actually.
Wolves eat deer, and this has a depressing effect on the growth rate of
deer.
Let's model it by
x' = (3-x-y) x
y' = (1-y+x) y
x' = f(x,y)
y' = g(x,y)
case,
in which
f(x,y) = ax + by
g(x,y) = cx + dy
Solutions are parametrized curves such that the velocity vector at the
point v is given by F(v), both in direction and magnitude.
You can see some solutions, and how they thread their way through the
vector field.
[4] The fact is that normally a process spends most of its time near
equilibrium. Not exactly AT equilibrium, but near to it. It is always
in the act of returning to equilibrium after being jarred off it by
something.
It is horizontal where y' = 0 : along y = 0 and along the line
y = 1 + x . It is vertical where x' = 0 : along x = 0 and along the
line y = 3 - x . Critical points occur where both happen: at (0,0) ,
(0,1) , (3,0) ,
and the place where both y = 3-x and y = 1+x , which is at (1,2) .
Notice that along the x axis, where y = 0 , we get exactly the phase
line
of the deer population without wolves, and along the y axis, where x
= 0,
we get the phase line of the wolf population without deer. These two
phase
lines sit inside the phase portrait of the deer/wolf system.
[5] We'll study the behavior near the critical point (0,0).
Expanding out,
x' = 3x - x^2 - xy
y' = y - y^2 + xy .
Near the origin the quadratic terms in the vector field are
insignificant, so "to first order"
x' = 3x
y' = y
The deer and wolf populations both expand by natural growth. This is a
homogeneous linear system with matrix [ 3 0 ; 0 1 ] . The eigenvalues
are 1 and 3 , so we have a node. The eigenline for the smaller
eigenvalue
is the y axis, so this is the line to which all solutions (except for
the other normal mode!) become tangent as they approach the origin.
18.03 Class 38, May 15
L length of pendulum
m mass of bob
g acceleration of gravity
[Figure: a pendulum hung from a pivot; rod of length L at angle theta
from the vertical, bob of mass m at the end. The component of the
gravitational force mg along the direction of motion is mg sin(theta).]
s = L theta ,   s' = L theta' ,   F = ms" = mL theta"
Friction is very nonlinear, in fact, but for the moment let's suppose
that we are restricting to small enough values of theta' so that the
behavior is linear. (It's surely zero when theta' = 0.) So:
x = theta , y = x'
x' = y
y' = - k sin(x) - by
[2] We studied the vector field for the deer/wolf population model,
and linearized it near each critical point. Near a critical point
(a,b) , the deviation v = u - [a;b] satisfies, to first order,
v' = J(a,b) v
where J is the Jacobian matrix of the vector field.
For the pendulum system, J(x,y) = [ 0 , 1 ; -k cos(x) , -b ] , so at
the two types of equilibria
J(0,0) = [ 0 1 ; -k -b ]
J(pi,0) = [ 0 1 ; k -b ]
With, say, k = 65 and b = 2 :
J(0,0) = [ 0 , 1 ; -65 , -2 ]
with characteristic polynomial
lambda^2 + 2 lambda + 65
and eigenvalues -1 +- 8i : a stable spiral. In each period 2 pi / 8
of oscillation the amplitude falls by the factor e^{-2 pi / 8} , which
is about half.
J(pi,0) = [ 0 , 1 ; 65 , -2 ]
lambda^2 + 2 lambda - 65
We have a saddle.
The eigenlines are both pretty steep, making a sharp V tilted somewhat
off the vertical.
Trajectories coming down from the left represent the pendulum swinging
around in counterclockwise complete circles. Trajectories coming up from
the right represent the pendulum swinging around in clockwise circles.
Thus J(x,y) = [ 3 - 2x - y , -x ; y , 1 + x - 2y ]
We can do this with any of the critical points. For another example,
J(3,0) = [ -3 -3 ; 0 4 ]
J(1,2) = [ -1 -1 ; 2 -2 ]
lambda^2 + 3 lambda + 4
and eigenvalues -(3/2) +- i sqrt(7)/2 .
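The Jacobian computations for the deer/wolf system x' = (3-x-y)x, y' = (1-y+x)y can be automated (a sketch; the quadratic-formula eigenvalue routine is my own helper):

```python
import cmath

def J(x, y):
    # Jacobian of x' = (3-x-y)x , y' = (1-y+x)y
    return ((3 - 2*x - y, -x), (y, 1 + x - 2*y))

def eigenvalues(M):
    (a, b), (c, d) = M
    tr, det = a + d, a*d - b*c
    disc = cmath.sqrt(tr*tr - 4*det)
    return ((tr + disc)/2, (tr - disc)/2)

ev_origin = eigenvalues(J(0, 0))    # expect 3 and 1: unstable node
ev_coexist = eigenvalues(J(1, 2))   # expect (-3 +- i sqrt(7))/2: stable spiral
```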
so (1,2) is a stable spiral, and that is what we see in the nonlinear
phase portrait near that point. For the nonlinear system,
we see that the entire upper right quadrant is a "basin," with the
populations tending to these stable levels no matter what the initial
populations are (as long as both start positive). As they approach the
limiting value, they oscillate about it. We can say very exactly
how this happens: the solutions for the deer look like
x(t) ~ 1 + A e^{-3t/2} cos( (sqrt(7)/2) t - phi ) .
Things get tricky if the trace and determinant of the Jacobian matrix
lie on one of the dividing curves in the (tr,det) plane.
None of this is too bad, but there is one case where the actual phase
portrait may not resemble the linear one at all. This is when the
Jacobian matrix is degenerate, i.e. has determinant zero, i.e. has at
least one eigenvalue which is zero.
For example,
x' = 4xy
y' = x^2 - y^2
so J(0,0) = [ 0 0 ; 0 0 ]
What is the phase portrait of this linear system? It just says u' = 0
On the other hand, I showed a Matlab plot of the phase portrait: it has
six rays running into or out of the origin -- nothing like the phase
portrait of any linear system.
The lesson: If you find det J(a,b) = 0 you have to assume that the
phase portrait near the critical point (a,b) will look different from
the linear phase portrait. If det J(a,b) is nonzero, it will look the
same.
[We can verify that there are ray solutions to this equation by finding
the slope and substituting. If y = mx, then x' = 4mx^2 and y' = (1-m^
2)x^2.
The slope of the vector field must also be m , in order to keep the
solution
from wandering off the ray, so
m = (1-m^2)/(4m) , i.e. 5m^2 = 1 , m = +- 1/sqrt(5) .
There are solutions along the lines through the origin with these
slopes.]
x' = y
y' = - x - c (x^2-1) y
This system turns out to continue to have periodic solutions. When
c > 0 the situation is in fact even better: there is ONLY ONE periodic
trajectory, and all other nonzero solutions converge to it. This is a
"limit cycle."
The human heart (and I promise that this is the last time I'll mention
it) is in fact controlled by an equation like this. This is why it
returns to a normal periodic pattern after being disturbed.
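The limit cycle can be seen in a simple simulation. The sketch below integrates the system with a hand-rolled RK4 stepper (my choice of c = 0.5 and step size, not from the lecture) and records the late-time amplitude, which for this equation settles near 2:

```python
def f(u, c=0.5):
    # x' = y , y' = -x - c (x^2 - 1) y
    x, y = u
    return (y, -x - c*(x*x - 1)*y)

def rk4_step(u, dt):
    k1 = f(u)
    k2 = f((u[0] + dt/2*k1[0], u[1] + dt/2*k1[1]))
    k3 = f((u[0] + dt/2*k2[0], u[1] + dt/2*k2[1]))
    k4 = f((u[0] + dt*k3[0], u[1] + dt*k3[1]))
    return (u[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            u[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# Start well inside the cycle and integrate for a long time.
u, dt = (0.1, 0.0), 0.01
xs = []
for step in range(8000):
    u = rk4_step(u, dt)
    if step >= 6000:           # keep only the late-time behavior
        xs.append(u[0])
amplitude = max(abs(x) for x in xs)
```

Starting instead far outside the cycle (say at (4, 0)) gives the same late-time amplitude: all nonzero solutions converge to the one periodic trajectory.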
[3] It can be shown that limit cycles are typical for 2D systems.
But when you move to 3D things are much more complicated. The first
such system was discovered right here at MIT by Edward Lorenz, who was
modeling "convection rolls" in the upper atmosphere. In 1963 he wrote
down a fairly simple model, a nonlinear autonomous system in 3
dimensions:
x' = -ax + ay
y' = -xz + rx - y
z' = xy - bz
The solutions don't ever settle down to a periodic orbit; but neither
do they run off to infinity. They just wrap pretty crazily around the
two nonzero unstable equilibria which exist provided that r > 1 .
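Writing a = sigma in the classical notation, the two nonzero equilibria sit at x = y = +- sqrt(b(r-1)), z = r-1, which is easy to confirm by plugging in (a sketch with the standard parameter values sigma = 10, r = 28, b = 8/3; these particular values are my assumption, not from the lecture):

```python
import math

a, r, b = 10.0, 28.0, 8/3

def F(u):
    # the Lorenz vector field
    x, y, z = u
    return (a*(y - x), r*x - y - x*z, x*y - b*z)

q = math.sqrt(b*(r - 1))
eq_plus = (q, q, r - 1)
eq_minus = (-q, -q, r - 1)

# both candidate equilibria should make the vector field vanish
res = max(abs(v) for eq in (eq_plus, eq_minus) for v in F(eq))
```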
[4] The unifying theme of the course has been the exponential function.
See how many ways you can find it among the Ten Essential Skills:
Complex Arithmetic
(Complex conjugation, magnitude of a complex number, division by complex numbers)
Cartesian and Polar Forms
Euler’s Formula
De Moivre’s Formula
Differentiation of Complex Functions
One of the most important numbers in complex analysis is i. The number
i is a solution of the equation x^2 + 1 = 0 and has the property that
i^2 = -1. (Of course, another solution of x^2 + 1 = 0 is -i.)
We write a complex number z as z = x + iy (or x + yi), where x and y are real numbers.
We call x the real part of z and y the imaginary part of z. We write Re z = x and Im z = y.
Whereas the set of all real numbers is denoted by R, the set of all complex numbers is
denoted by C.
Every real number x can be considered as a complex number x + i0. In other words, a real
number is just a complex number with vanishing imaginary part.
Two complex numbers are said to be equal if they have the same real and imaginary
parts. In other words, the complex numbers z1 = x1 + iy1 and z2 = x2 + iy2 are equal if and
only if x1 = x2 and y1 = y2 .
Example: 2 - 3i ≠ 2 + 3i
If z = x + iy ∈ C then x, y ∈ R and the absolute value of z is
|z| = (x^2 + y^2)^{1/2} and the conjugate of z is z̄ = x - iy.
Clearly then
z z̄ = (x + iy)(x - iy) = x^2 + y^2 = |z|^2
† This handout is an amended version of the original.
For example, (a) follows from the definition of addition in C and the commutative law for
real numbers:
z1 + z2 = (x1 + iy1) + (x2 + iy2)
        = (x1 + x2) + i(y1 + y2)
        = (x2 + x1) + i(y2 + y1)
        = (x2 + iy2) + (x1 + iy1)
        = z2 + z1
The other properties (b)-(g) may be similarly verified.
Because addition and multiplication in C satisfy (a)-(g), the set C, together with the two
operations of addition and multiplication, is a field. The set R of real numbers is also a field.
The operation of taking complex conjugates satisfies two basic algebraic rules:
(z1 + z2)^- = z̄1 + z̄2 ,   (z1 z2)^- = z̄1 z̄2
(the bar on the left-hand sides is over the whole expression).
Note that if z = x + iy then z + z̄ = x + iy + x − iy = 2x and z − z̄ = x + iy − x + iy = 2iy.
From these two equalities we have
Re z = x = (z + z̄)/2 ,   Im z = y = (z - z̄)/(2i)
We can also find the real and imaginary parts of z1/z2 using complex
conjugates. If z2 ≠ 0,
z1/z2 = z1 · (1/z2) = z1 · z̄2/|z2|^2 = (z1 z̄2)/(z2 z̄2)
In terms of real and imaginary parts,
(x1 + iy1)/(x2 + iy2) = (x1 + iy1)/(x2 + iy2) · (x2 - iy2)/(x2 - iy2)
                      = ((x1 + iy1)(x2 - iy2))/(x2^2 + y2^2)
                      = (x1 x2 + y1 y2)/(x2^2 + y2^2)
                        + i (x2 y1 - x1 y2)/(x2^2 + y2^2)
Example:
(-3 + i)/(-1 + 2i) = (-3 + i)/(-1 + 2i) · (-1 - 2i)/(-1 - 2i)
                   = (3 + 6i - i - 2i^2)/(1 - 4i^2) = (5 + 5i)/5
                   = 1 + i
Example: |-3 + 2i| = sqrt(9 + 4) = sqrt(13)
We can therefore solve any polynomial equation completely by using
complex numbers. We can't say the same thing for reals. Consider
x^2 + x + 1 = 0. The quadratic formula yields
x = (-1 ± sqrt(1 - 4))/2 = (-1 ± i sqrt(3))/2
These numbers are also really not “imaginary.” You can visualize z = x + iy as the point
(x, y) ∈ R2 , which is the (two dimensional) Cartesian plane.
[Figure: the complex plane, with the imaginary axis Im vertical and the
real axis Re horizontal. The point z = x + iy = re^{iθ} is plotted at
(x, y), at distance r from the origin O and angle θ above the positive
real axis; its conjugate z̄ = x - iy = re^{-iθ} is its mirror image
below the real axis.]
Polar Coordinates
Let z = x + iy be a non-zero complex number. The point (x, y) has polar coordinates (r, θ),
x = r cos θ, y = r sin θ
r(cos θ + i sin θ) is a way of expressing a complex number by using polar coordinates. The
positive number r is just the modulus of z and the angle θ is called the argument of z. It is
determined up to multiples of 2π.
Examples‡
1 + i = sqrt(2) (cos(π/4) + i sin(π/4)) ,
-1 + i sqrt(3) = 2 (cos(2π/3) + i sin(2π/3))
Euler’s Formula
We are all quite familiar with the Taylor series of the exponential
function e^a when the exponent a is real§
e^a = 1 + a + a^2/2! + a^3/3! + a^4/4! + a^5/5! + ···
‡ These relations are derived in an example below.
§ Here, n! (n factorial) is defined by n! = n(n-1)(n-2)···1 with
0! = 1. Hence, 1! = 1, 2! = 2·1 = 2, 3! = 3·2·1 = 6, 4! = 4·3·2·1 = 24,
5! = 5·4·3·2·1 = 120, etc.
However, we may also use the same Taylor series to define the
exponential function of an imaginary exponent (and complex exponents
as well). Let θ be a real number and consider the exponential series
when we replace a by iθ. We then have
e^{iθ} = 1 + iθ + (iθ)^2/2! + (iθ)^3/3! + (iθ)^4/4! + (iθ)^5/5! + ···
Now let's reduce this series by using the relation i^2 = -1 and collect
the real terms together and the imaginary terms together. We obtain
(after adding some more terms to make the final result more obvious)
e^{iθ} = 1 + iθ - θ^2/2! - i θ^3/3! + θ^4/4! + i θ^5/5!
           - θ^6/6! - i θ^7/7! + ···
       = (1 - θ^2/2! + θ^4/4! - θ^6/6! + ···)
         + i (θ - θ^3/3! + θ^5/5! - θ^7/7! + ···)
We now have that the real part of e^{iθ} and the imaginary part of
e^{iθ} are each a familiar Taylor series. These are
cos θ = 1 - θ^2/2! + θ^4/4! - θ^6/6! + ···
sin θ = θ - θ^3/3! + θ^5/5! - θ^7/7! + ···
Thus we have obtained the famous formula of Euler:
e^{iθ} = cos θ + i sin θ
for any real θ. In other words, every complex number of the form e^{iθ}
lies on the unit circle x^2 + y^2 = 1 in the xy-plane.
Examples
e^{iπ/6} = cos(π/6) + i sin(π/6) = sqrt(3)/2 + i/2
e^{iπ/4} = cos(π/4) + i sin(π/4) = 1/sqrt(2) + i/sqrt(2)
e^{iπ} = -1
e^{i n 2π} = 1 (n integer)
e^{i(θ + n 2π)} = e^{iθ} (θ ∈ R, n integer)
The last result follows from the periodicity of cos θ and sin θ; if n is an integer then cos(θ +
n2π) = cos θ and sin(θ + n2π) = sin θ.
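Python's cmath module makes these facts easy to spot-check (a quick illustration):

```python
import cmath
import math

# Euler's formula: e^{i theta} = cos(theta) + i sin(theta)
theta = 2*math.pi/3
euler_err = abs(cmath.exp(1j*theta)
                - complex(math.cos(theta), math.sin(theta)))

# Polar data of the example -1 + i sqrt(3): modulus 2, argument 2 pi / 3
z = complex(-1, math.sqrt(3))
r, arg = abs(z), cmath.phase(z)
```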
So now the complex number z written in polar coordinates r(cos θ + i sin θ) can be written
as reiθ . This latter form will be called the polar form of the complex number z.
When two complex numbers are in polar form, it is very easy to compute
their product. Suppose that z1 = r1 e^{iθ1} = r1(cos θ1 + i sin θ1)
and z2 = r2 e^{iθ2} = r2(cos θ2 + i sin θ2) are two non-zero complex
numbers. We first compute their product the hard way as follows:
z1 z2 = r1 r2 (cos θ1 + i sin θ1)(cos θ2 + i sin θ2)
      = r1 r2 [(cos θ1 cos θ2 - sin θ1 sin θ2)
               + i (sin θ1 cos θ2 + cos θ1 sin θ2)]
      = r1 r2 [cos(θ1 + θ2) + i sin(θ1 + θ2)] = r1 r2 e^{i(θ1 + θ2)}
(We used trigonometric addition formulas to simplify in the second to
last step.) So now we have a very easy formula for multiplying two
complex numbers in polar form. We simply multiply their moduli and add
their arguments.
From the rule for multiplication of complex numbers in polar form
follows a rule for division. We can easily see that
r e^{iθ} · (1/r) e^{-iθ} = e^{i(θ - θ)} = e^0 = 1
Hence
(r e^{iθ})^{-1} = r^{-1} e^{-iθ}
We now turn our attention to dividing two complex numbers in polar
form:
z1/z2 = (r1/r2) e^{i(θ1 - θ2)} .
For example, write
1 + i = sqrt(2) e^{iθ1} = sqrt(2) (cos θ1 + i sin θ1) ,
-1 + i sqrt(3) = 2 e^{iθ2} = 2 (cos θ2 + i sin θ2) .
The following formula follows from the rules for multiplication and division of complex
numbers in polar form.
De Moivre's Formula
(e^{iθ})^n = e^{inθ} ,   n = 0, ±1, ±2, . . .
i.e. (cos θ + i sin θ)^n = cos nθ + i sin nθ ,   n = 0, ±1, ±2, . . .
Note that this gives us a very easy means for calculating the multiple
angle formulas in trigonometry. For example, let n = 3 in the above.
Then, since (a+b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3,
(cos θ + i sin θ)^3 = cos^3 θ + 3i cos^2 θ sin θ - 3 cos θ sin^2 θ
                      - i sin^3 θ = cos 3θ + i sin 3θ .
Clearly the real part gives cos 3θ in terms of cos θ and sin θ while the imaginary part gives
the corresponding expression for sin 3θ. Thus if you know De Moivre’s formula, and the
binomial expansion (a + b)n , you can always calculate any of the multiple angle formulas.
One can also define a general complex exponential function. The argument of the exponential
can be complex in general in which case
ea+ib := ea eib = ea (cos b + i sin b)
The complex exponential function has the following properties:
(e^{a+ib})^n = (e^a e^{ib})^n = e^{an} e^{ibn} = e^{an+ibn} = e^{(a+ib)n}
The details of the last equality are left to the reader. Thus the laws of exponents for real
exponential functions also hold for complex exponential functions.
In polar form
1 + i = sqrt(2) (cos(π/4) + i sin(π/4)) = sqrt(2) e^{iπ/4}
and hence by the laws of exponents
(1 + i)^{15} = (sqrt(2))^{15} e^{i 15π/4}
             = (sqrt(2))^{15} [cos(15π/4) + i sin(15π/4)]
Now (sqrt(2))^{15} = 2^{15/2} = 2^7 sqrt(2) = 128 sqrt(2). Also
15π/4 = 4π - π/4. Therefore
(1 + i)^{15} = 128 sqrt(2) [cos(-π/4) + i sin(-π/4)]
             = 128 sqrt(2) (1/sqrt(2) - i/sqrt(2)) = 128 - 128i
To solve this equation, start with z = r e^(iθ); then, expressing 2i in polar form also, gives
    z^3 = r^3 e^(i3θ) = 2i = 2 e^(iπ/2) = 2 e^(i(π/2 + 2nπ))
Note the 2nπ term in the phase. We don't know that the phase on the right-hand side
is equal to the phase on the left-hand side; we only know that the phases agree up to a
multiple of 2π. Equating moduli and arguments in the above equation gives
    r^3 = 2,   3θ = π/2 + 2nπ
so
    r = 2^(1/3),   θ = π/6 + n(2π/3)
Plugging in n = 0, 1, 2 we obtain the roots
    z1 = 2^(1/3) e^(iπ/6)  = 2^(1/3) (√3/2 + i/2)
    z2 = 2^(1/3) e^(i5π/6) = 2^(1/3) (−√3/2 + i/2)
    z3 = 2^(1/3) e^(i3π/2) = −i 2^(1/3)
These roots are evenly spaced at 2π/3 = 120° intervals on a circle of radius 2^(1/3) ≈ 1.26.
Note that if we continue and take n = 3, 4, 5, . . . we will not obtain any more or different roots.
(Try it and see.)
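"Try it and see" is easy to do in Python. This sketch (ours) builds the three roots from r = 2^(1/3) and θ = π/6 + 2πn/3, checks that each really cubes to 2i, and confirms that n = 3 just reproduces the n = 0 root.

```python
import cmath
import math

r = 2 ** (1 / 3)
roots = [r * cmath.exp(1j * (math.pi / 6 + 2 * math.pi * n / 3)) for n in range(3)]
for z in roots:
    assert abs(z**3 - 2j) < 1e-12       # each root satisfies z^3 = 2i

# taking n = 3 gives theta = pi/6 + 2*pi, i.e. the n = 0 root again
z_repeat = r * cmath.exp(1j * (math.pi / 6 + 2 * math.pi))
assert abs(z_repeat - roots[0]) < 1e-12
```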
A function f that depends on a real variable t can take on complex values. Such a complex
function has the form
f(t) = u(t) + iv(t)
where u and v are real functions of t. The derivative of f with respect to t is given by
    f′(t) = u′(t) + i v′(t)
Example
    f(t) = t^2 + i cos t,   f′(t) = 2t − i sin t
Let us see what happens if we differentiate the complex exponential function. We proceed
by first decomposing the complex function into real and imaginary parts, differentiating, and
then finally reassembling the complex exponential. The individual steps are as follows:
    [e^((a+ib)t)]′ = [e^(at+ibt)]′
                   = [e^(at)(cos bt + i sin bt)]′
                   = (e^(at) cos bt)′ + i (e^(at) sin bt)′
                   = a e^(at) cos bt + b e^(at)(−sin bt) + i (a e^(at) sin bt + b e^(at) cos bt)
                   = (a + ib) e^(at) (cos bt + i sin bt)
                   = (a + ib) e^((a+ib)t)
Therefore, complex exponential functions have the same kind of differentiation properties as
real exponential functions.
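The differentiation rule d/dt e^((a+ib)t) = (a+ib) e^((a+ib)t) can be verified numerically. The sketch below (ours; a, b, t are arbitrary sample values) compares a central difference quotient against the claimed derivative.

```python
import cmath

a, b, t = 0.5, 2.0, 1.3
w = a + 1j * b
h = 1e-6

# central difference approximation of d/dt e^{wt}
numeric = (cmath.exp(w * (t + h)) - cmath.exp(w * (t - h))) / (2 * h)
exact = w * cmath.exp(w * t)
assert abs(numeric - exact) < 1e-6
```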
Let us now do an example directly related to ODE’s. Let’s take a complex function and see
if it can be a solution of an ODE.
Exercises
2. Compute
(a) |3 + 4i|
(b) |i(2 + i)| + |1 + 4i|
3. Solve
(a) z 2 − 2z + 5 = 0
(b) z 4 − 1 = 0
5. Compute
(a) (1 − i)11
(b) [ (1/2)(−1 + i√3) ]^13
6. Solve completely
(a) z 5 = 1
(b) z 3 = −i
7. Show that
(a) the conjugate of z̄ equals z (conjugating twice returns z)
(c) z is real if and only if z = z̄
9. Let L = D2 + D + 1. Compute
(a) L(eit )
(b) L(t e^(it))
(c) L(e^((−1/2 + i√3/2) t))
0 Linear Equations
2x1 + x2 = −1
(1)
x1 − x2 = 2
Each equation of the linear system (1) represents a straight line. The system has:
(a) no solution (the two lines are parallel), or
(b) one unique solution (the two lines intersect), or
(c) an infinite number of solutions (the two lines are coincident).
2x1 + 2x2 − 3x3 = 0
−x1 − x2 + x3 = 1 (2)
x1 + x2 − 2x3 = −1
Each equation of the linear system (2) represents a plane. The system has:
(a) no solution (the three planes are parallel or have no common intersection), or
(b) one unique solution (the three planes intersect at just one point), or
(c) an infinite number of solutions (the three planes are coincident or intersect in a line).
Note that in (1) and (2) the unknowns, the coordinates x1, x2 and x3, appear by
themselves with exponent 1. The following systems are nonlinear:
    x1^2 + x2 = 1
    x1 + x2 = 0                                        (3)

    x1 x2 + x2 = −1
    x1 − x2^2 = 0                                      (4)
We shall only concern ourselves with linear systems such as (1) and (2) that have the same
number of equations as unknowns.
1 2 × 2 Systems
We first note that equations (7) and (11) are always valid. If ad − bc = 0, then (7) and (11)
become
0x1 = de − bf (13)
and
0x2 = af − ce (14)
If the right-hand side of either (13) or (14) is nonzero, we would have the ridiculous equation
zero=nonzero, which means that the system (5) is not solvable.
Conclusion
If ad − bc ≠ 0, the 2 × 2 system
ax1 + bx2 = e
cx1 + dx2 = f
is always solvable. The solution is unique and is given by
    x1 = (de − bf) / (ad − bc)
    x2 = (af − ce) / (ad − bc)                         (15)
If ad − bc = 0, then the system is in general not solvable.
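Formula (15) translates directly into code. The sketch below (ours; the function name is hypothetical) uses exact rational arithmetic and reproduces the solution of the introductory system (1).

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve a x1 + b x2 = e, c x1 + d x2 = f via formula (15)."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: system is in general not solvable")
    return Fraction(d * e - b * f, det), Fraction(a * f - c * e, det)

# the example system 2 x1 + x2 = -1, x1 - x2 = 2
x1, x2 = solve_2x2(2, 1, 1, -1, -1, 2)
assert (x1, x2) == (Fraction(1, 3), Fraction(-5, 3))
```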
2 Cramer’s Rule
We all know that the determinant of the 2 × 2 matrix
    ( a  b )
    ( c  d )
is given by the formula
    | a  b |
    | c  d |  =  ad − bc                               (16)
The solvability condition for the 2 × 2 system
    a x1 + b x2 = e
    c x1 + d x2 = f
is just
    | a  b |
    | c  d |  ≠  0                                     (17)
We can therefore state that a 2 × 2 system is always solvable if and only if the determinant
of the coefficient matrix is nonzero.
Not only can we describe the solvability condition in terms of a determinant, we can also
represent the solution by determinants. All we have to do is to recognize that
    | e  b |
    | f  d |  =  ed − bf
                                                       (18)
    | a  e |
    | c  f |  =  af − ce
where, for example, the matrix
    ( a  e )
    ( c  f )
can be obtained by replacing the second column of the coefficient matrix with the
right-hand side.
Example
Solve the system
    2x1 + x2 = −1
    x1 − x2 = 2
which, in matrix form, is
    ( 2   1 ) ( x1 )   ( −1 )
    ( 1  −1 ) ( x2 ) = (  2 )
Solution
    x1 = det(−1, 1; 2, −1) / det(2, 1; 1, −1) = ((−1)(−1) − (1)(2)) / ((2)(−1) − (1)(1)) = −1/−3 = 1/3
    x2 = det(2, −1; 1, 2) / det(2, 1; 1, −1) = ((2)(2) − (−1)(1)) / (−3) = 5/(−3) = −5/3
(writing det(a, b; c, d) for the 2 × 2 determinant with rows (a, b) and (c, d))
Now that we have a good understanding of 2 × 2 systems, the natural question to ask is:
“What about 3 × 3, 4 × 4, . . . , n × n systems?” The answer is that Cramer’s rule works in
the general case.
The derivation of the general Cramer’s rule is nontrivial. You will see it in MA232/339.
Here we ask you to accept it on the basis of the 2 × 2 case.
In order to use the general Cramer’s rule we must know how to compute the determinant of
n × n matrices. This is dealt with in the next section.
3 Determinants of n × n Matrices
Example
Compute
    | 1  2  3 |
    | 4  5  6 |
    | 7  8  9 |
Solution
    | 1  2  3 |       | 5  6 |       | 4  6 |       | 4  5 |
    | 4  5  6 |  =  1 | 8  9 |  − 2  | 7  9 |  + 3  | 7  8 |
    | 7  8  9 |
                 = 1(5 · 9 − 8 · 6) − 2(4 · 9 − 7 · 6) + 3(4 · 8 − 7 · 5)
                 = 1(−3) − 2(−6) + 3(−3)
                 = 0
What about the determinants of 4 × 4 matrices? We use the same three steps since we now
know how to compute the determinants of 3 × 3 matrices.
Example
Compute
    |  1  −1   2   3 |
    |  0   1   3   2 |
    | −1   1   0   1 |
    |  0   2   1   0 |
Solution
    |  1  −1   2   3 |       | 1  3  2 |          |  0  3  2 |       |  0  1  2 |       |  0  1  3 |
    |  0   1   3   2 |  =  1 | 1  0  1 |  − (−1)  | −1  0  1 |  + 2  | −1  1  1 |  − 3  | −1  1  0 |
    | −1   1   0   1 |       | 2  1  0 |          |  0  1  0 |       |  0  2  0 |       |  0  2  1 |
    |  0   2   1   0 |
                       = 1(7) − (−1)(−2) + 2(−4) − 3(−5)
                       = 12
There is nothing that can stop us now. Once we know how to handle 4 × 4 determinants,
we can compute 5 × 5 determinants using Steps I-III above, and so on.
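The "expand along the first row, recurse on the minors" procedure used in both examples above can be written as a short recursive function. This sketch is ours; as the next paragraph warns, cofactor expansion is fine for small matrices but far less efficient than the row-reduction methods introduced later.

```python
def det(m):
    """Determinant by cofactor expansion along the first row (O(n!) work)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

# the two worked examples from the text
assert det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) == 0
assert det([[1, -1, 2, 3], [0, 1, 3, 2], [-1, 1, 0, 1], [0, 2, 1, 0]]) == 12
```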
We must point out that there are many properties of determinants that we have not discussed.
In particular, there are more efficient ways of computing n × n determinants. You will learn
more about determinants in MA312/339.
4 Gaussian Elimination
Cramer’s rule is an important theoretical result because it enables us to represent the solution
of an n × n linear system in terms of determinants. But it is not an efficient way to compute
the value of the solution when n ≥ 3. The method of Gaussian elimination is more efficient
in these cases.
In the everyday world, linear systems of equations are represented in matrix form and are
solved using matrix algebra. For this reason, Gaussian elimination is introduced here using
matrices.
The augmented matrix of the 2 × 2 system
    a x1 + b x2 = e
    c x1 + d x2 = f
is
    ( a  b | e )
    ( c  d | f )                                       (22)
Notice that the xi's, in this case x1 and x2, are placeholders that can be omitted until the
end of the elimination process. An invaluable aid to the latter is a check column, each
entry of which is simply the sum of all the terms of the corresponding row of the augmented
matrix:
    ( a  b | e )   a + b + e
    ( c  d | f )   c + d + f                           (23)
The check column is shown here to the right of the augmented matrix. Its usefulness will
become clear as we proceed.
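The reason the check column works is that every elementary row operation, applied to the whole row including its check entry, preserves the property "check entry = sum of the other entries." The sketch below (ours; names are illustrative) demonstrates the invariant on one row operation.

```python
def with_check(row):
    """Append the check entry: the sum of all terms in the row."""
    return row + [sum(row)]

def check_ok(rows):
    """Verify each check entry still equals the sum of its row."""
    return all(abs(sum(r[:-1]) - r[-1]) < 1e-9 for r in rows)

m = [with_check([2.0, 1.0, -1.0]),   # 2 x1 + x2 = -1
     with_check([1.0, -1.0, 2.0])]   #  x1 - x2 =  2
assert check_ok(m)

# R2 <- R2 - (1/2) R1, applied to the whole row, check entry included
m[1] = [x - 0.5 * y for x, y in zip(m[1], m[0])]
assert check_ok(m)   # the invariant survives the row operation
```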
Definition
In matrix algebra, an elementary row operation entails:
(I) Interchanging two rows,
(II) Multiplying a row by a nonzero real number,
(III) Adding a multiple of one row to another.
Definition
A matrix is said to be in row echelon form if
(a) The first nonzero entry in each row is 1,
(b) If row k does not consist entirely of zeros, the number of leading zero entries in row k + 1
is greater than the number of leading zero entries in row k, and
(c) If there are rows whose entries are all zero, they are below the rows having nonzero
entries.
Definition
The process of using elementary row operations I, II, and III to transform a linear system
into one whose augmented matrix is in row echelon form is called Gaussian elimination.
Note that elementary row operation II is necessary in order to scale the rows so that the
leading coefficients are all 1. If the row echelon matrix contains a row of the form (0 0 · · · 0 | 1),
the system is inconsistent. Otherwise, the system will be consistent. If the system is
consistent and the nonzero rows of the row echelon matrix form a triangular system, the
system will have a unique solution.
Definition
A matrix is said to be in reduced row echelon form if:
(a) The matrix is in row echelon form, and
(b) The first nonzero entry in each row is the only nonzero entry in its column.
Definition
The process of using elementary row operations to transform a matrix into reduced row
echelon form is called Gauss-Jordan reduction. For a consistent linear system, Gauss-
Jordan reduction transforms the coefficient matrix into an identity matrix.
    2x1 + x2 = −1
    x1 − x2 = 2                                        (24)

        ( 2   1 | −1 )   2
        ( 1  −1 |  2 )   2

      R2 ( 1  −1 |  2 )   2
 ,→   R1 ( 2   1 | −1 )   2

           R1 ( 1  −1 |  2 )    2
 ,→  −2R1+R2 ( 0   3 | −5 )   −2

     3R1+R2 ( 3  0 |  1 )    4
 ,→      R2 ( 0  3 | −5 )   −2

     R1/3 ( 1  0 |  1/3 )    4/3
 ,→  R2/3 ( 0  1 | −5/3 )   −2/3

Note that at each stage of the reduction, R1 and R2 refer to the preceding matrix.
Note also that you should always check the solution set by substituting it back into the
original system of equations. In this case,
    2(1/3) + (−5/3) = −1 ✓
    (1/3) − (−5/3) = 2 ✓
∗ Often, elsewhere in this class, for 2 × 2 linear systems, we will use straightforward elimination, which in
this case proceeds as follows. We observe that adding the two rows eliminates x2 and gives the equation
3x1 = 1, from which x1 = 1/3. Then from the second equation (either equation suffices) x2 = x1 − 2 = −5/3.
x1 + 2x2 − 3x3 = 0
−x1 − x2 + x3 = 1 (25)
x1 + x2 − 2x3 = −1
    (  1   2  −3 |  0 )    0
    ( −1  −1   1 |  1 )    0
    (  1   1  −2 | −1 )   −1

         R1 ( 1  2  −3 |  0 )    0
 ,→   R1+R2 ( 0  1  −2 |  1 )    0
      R2+R3 ( 0  0  −1 |  0 )   −1

         R1 ( 1  2  −3 |  0 )    0
 ,→      R2 ( 0  1  −2 |  1 )    0
        −R3 ( 0  0   1 |  0 )    1

     R1+3R3 ( 1  2  0 |  0 )    3
 ,→  R2+2R3 ( 0  1  0 |  1 )    2
         R3 ( 0  0  1 |  0 )    1

     R1−2R2 ( 1  0  0 | −2 )   −1
 ,→      R2 ( 0  1  0 |  1 )    2
         R3 ( 0  0  1 |  0 )    1

    {x1 = −2, x2 = 1, x3 = 0}
    ( 0  1  1  1 |  1 )    4
    ( 1  3  0  2 | −1 )    5
    ( 1  3  3  3 |  0 )   10
    ( 0  2  0  1 |  2 )    5

     R2 ( 1  3  0  2 | −1 )    5
     R1 ( 0  1  1  1 |  1 )    4
 ,→  R4 ( 0  2  0  1 |  2 )    5
     R3 ( 1  3  3  3 |  0 )   10

          R1 ( 1  3  0  2 | −1 )   5
          R2 ( 0  1  1  1 |  1 )   4
 ,→   2R2−R3 ( 0  0  2  1 |  0 )   3
      −R1+R4 ( 0  0  3  1 |  1 )   5

           R1 ( 1  3  0  2 | −1 )    5
           R2 ( 0  1  1  1 |  1 )    4
 ,→        R3 ( 0  0  2  1 |  0 )    3
      3R3−2R4 ( 0  0  0  1 | −2 )   −1

     R1−2R4 ( 1  3  0  0 |  3 )    7
      R2−R4 ( 0  1  1  0 |  3 )    5
 ,→   R3−R4 ( 0  0  2  0 |  2 )    4
         R4 ( 0  0  0  1 | −2 )   −1

          R1 ( 1  3  0  0 |  3 )    7
      2R2−R3 ( 0  2  0  0 |  4 )    6
 ,→       R3 ( 0  0  2  0 |  2 )    4
          R4 ( 0  0  0  1 | −2 )   −1

     2R1−3R2 ( 2  0  0  0 | −6 )   −4
          R2 ( 0  2  0  0 |  4 )    6
 ,→       R3 ( 0  0  2  0 |  2 )    4
          R4 ( 0  0  0  1 | −2 )   −1

       R1/2 ( 1  0  0  0 | −3 )   −2
       R2/2 ( 0  1  0  0 |  2 )    3
 ,→    R3/2 ( 0  0  1  0 |  1 )    2
         R4 ( 0  0  0  1 | −2 )   −1

    {x1 = −3, x2 = 2, x3 = 1, x4 = −2}
5 Exercises
4. Compute
    | 1  −1   0   1 |
    | 0   2  −1   1 |
    | 1   3   2   0 |        Ans: −11
    | 1   0   1  −1 |
x1 + 3x2 = 0
5. Ans: {x1 = 9, x2 = −3}
2x1 + 4x2 = 6
2x1 + x2 + x3 = 8
6. 4x1 + x2 = 11 Ans: {x1 = 2, x2 = 3, x3 = 1}
−2x1 + 2x2 + x3 = 3
x1 + 3x2 = 0
7. Ans: {x1 = 9, x2 = −3}
2x1 + 4x2 = 6
2x1 + x2 + x3 = 8
8. 4x1 + x2 = 11 Ans: {x1 = 2, x2 = 3, x3 = 1}
−2x1 + 2x2 + x3 = 3
2x1 − x2 = 6
9. −x1 + 2x2 − x3 = 0 Ans: {x1 = 3, x2 = 0, x3 = −3}
−x2 + 2x3 = −6
     3x2 + 3x3 + x4 = −2
     2x1 + 4x2 + 2x4 = 3
10.                              Ans: {x1 = 1/2, x2 = 0, x3 = −1, x4 = 1}
     2x1 + 7x2 + 9x3 + 7x4 = −1
     6x3 + 5x4 = −1
mẍ + bẋ + kx = ky .
Now suppose instead that we fix the top of the spring and drive the system by
moving the bottom of the dashpot instead. Here’s a frequency response analysis
of this problem. This time I’ll keep m around, instead of setting it equal to 1 or
dividing through by it. A new Mathlet, Amplitude and Phase: Second Order,
II, illustrates this system with m = 1.
Suppose that the position of the bottom of the dashpot is given by y(t), and again
the mass is at x(t), now arranged so that x = 0 when the spring is relaxed. Then the
force on the mass is given by
    m ẍ = −kx + b (d/dt)(y − x)
since the force exerted by a dashpot is supposed to be proportional to the speed of
the piston moving through it. This can be rewritten
y = B cos(ωt) .
biω
W (iω) = .
p(iω)
so that
zp = W (iω)Beiωt .
Using the natural frequency ωn = √(k/m),
    W(iω) = biω / ( m(ωn^2 − ω^2) + biω )
xp = gain · B cos(ωt − φ) .
Thus both the gain and the phase are displayed by the curve parametrized by
the complex valued function W (iω). To understand this curve, divide numerator and
denominator in the expression for W (iω) by biω:
    W(iω) = [ 1 − i (ωn^2 − ω^2) / ((b/m) ω) ]^(−1)
As ω goes from 0 to ∞, (ωn2 − ω 2 )/ω goes from +∞ to −∞, so the expression inside
the brackets follows the vertical straight line in the complex plane with real part 1,
moving upwards. As z follows this line, 1/z follows a circle of radius 1/2 and center
1/2, traversed clockwise (exercise!). It crosses the real axis when ω = ωn .
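The "exercise!" claim — that 1/z traces a circle of radius 1/2 centered at 1/2 as z follows the vertical line Re z = 1 — is easy to verify numerically. This sketch (ours) samples several points on the line.

```python
import math

# z = 1 + i*y on the vertical line Re z = 1; check |1/z - 1/2| = 1/2
for y in (-10.0, -1.0, 0.0, 0.5, 3.0, 100.0):
    z = complex(1.0, y)
    w = 1 / z
    assert math.isclose(abs(w - 0.5), 0.5, rel_tol=1e-12)
```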
This circle is the “Nyquist plot.” It shows that the gain starts small, grows to a
maximum value of 1 exactly when ω = ωn (in contrast to the spring-driven situation,
where the resonant peak is not exactly at ωn and can be either very large or nonexistent,
depending on the strength of the damping), and then falls back to zero. For
large ω, W(iω) is approximately −ib/(mω), so the gain falls off like (b/m)ω^(−1).
The Nyquist plot also shows that −φ = Arg (W (iω)) moves from near π/2 when
ω is small, through 0 when ω = ωn , to near −π/2 when ω is large.
And it shows that these two effects are linked to each other. Thus a narrow
resonant peak corresponds to a rapid sweep across the far edge of the circle, which in
turn corresponds to an abrupt phase transition from −φ near π/2 to −φ near −π/2.
MA 232 Differential Equations Fall 1999 Kevin Dempsey
Test I
Please do not detach pages. Loose pages will NOT be graded.
Put a check mark (√) next to your recitation section.
Section Day Period Time Room Leader
11 Monday 2 9—9:50 SC 386 Scott Gregowske
12 Monday 4 11—11:50 SC 386 Sovia Lau
13 Monday 4 11—11:50 SC 342 Chiu Wah Cheng
14 Monday 7 3—3:50 Rowley 150 Chaya Tuok
15 Monday 8 4—4:50 SC 303 Phillip Allen
16 Tuesday 1 8—8:50 SC 356 Elizabeth Baker
17 Tuesday 5 1—1:50 SC 344 Tom Pearsall
18 Tuesday 7 3—3:50 SC 344 Wesley Kent
19 Tuesday 8 4—4:50 SC 305 Ed Landry
1 (20 pts)
(i) The differential equation dy/dx + x^100 y = 0 is linear                              T
(ii) The differential equation d^2y/dx^2 + 4y = sin x is nonlinear                       F
(iii) ∂^2u/∂t^2 − ∂^2u/∂x^2 = 0 is a partial differential equation                       T
(iv) If dy/dx + xy = 2 and y(1) = 3, then y′(1) = 0                                      F
(v) The differential equation dy/dt = e^(y^2) sin^2 y is autonomous                      T
(vi) The line p = 1/2 is an isocline of the differential equation dp/dt = p(1 − p)       T
(vii) If the Wronskian W[y1, y2](x) of two functions y1(x) and y2(x) is identically zero on an
interval (a, b), then the two functions are linearly dependent on that interval          F
(viii) If the two functions y1(x) and y2(x) are solutions of the second-order variable-coefficient
ordinary differential equation y″ + p(x)y′ + q(x)y = 0 on an interval (a, b), then the set {y1, y2}
is a fundamental solution set on (a, b)                                                  F
(ix) y(t) = t and y(t) = 2t + 1 are both solutions of dy/dt = (y + 1)/(t + 1)            T
2 (20 pts)
    dy/dt = y/(1 + y^2)
Ans
    ((1 + y^2)/y) dy = dt
    ∫ (1/y + y) dy = ∫ dt
    ln |y| + y^2/2 = t + C
Ans
    = 1 | −1  2 |  − 2 | 4  2 |  − 3 | 4  −1 |
        |  3  1 |      | 0  1 |      | 0   3 |
    = 1(−1 − 6) − 2(4 − 0) − 3(12 − 0)
    = −7 − 8 − 36
    = −51
3 (20 pts)
Ans
    dy/dx + (3/x) y = 3x − 2
    µ(x) = e^(∫(3/x) dx) = e^(3 ln|x|) = e^(ln|x|^3) = |x|^3
         = x^3 for x ≥ 0,   −x^3 for x < 0
Take µ(x) = x^3 (since the sign cancels)
    y(x) = (1/x^3) { ∫ x^3 (3x − 2) dx + C }
         = (1/x^3) { ∫ (3x^4 − 2x^3) dx + C }
         = (1/x^3) ( (3/5)x^5 − (1/2)x^4 + C )
         = (3/5)x^2 − x/2 + C/x^3
    y(1) = 3/5 − 1/2 + C/1   →   C = 9/10
    y(x) = (3/5)x^2 − x/2 + (9/10)x^(−3)
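The solution can be checked by substituting it back into the equation. This sketch (ours) verifies that y(x) = (3/5)x^2 − x/2 + (9/10)x^(−3) satisfies y′ + (3/x)y = 3x − 2, and that C = 9/10 corresponds to the value y(1) = 3/5 − 1/2 + 9/10 = 1.

```python
import math

def y(x):
    return 3 * x**2 / 5 - x / 2 + 9 / (10 * x**3)

def yprime(x):
    return 6 * x / 5 - 0.5 - 27 / (10 * x**4)

assert math.isclose(y(1.0), 1.0)
for x in (0.5, 1.0, 2.0, 3.0):
    # residual of y' + (3/x) y - (3x - 2) should vanish
    assert abs(yprime(x) + 3 * y(x) / x - (3 * x - 2)) < 1e-9
```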
4 (20 pts)
Ans
    (  1  −1   1 | −2 )   −1
    ( −1   3  −1 |  3 )    4
    ( −1   1   5 |  1 )    6

         R1 ( 1  −1  1 | −2 )   −1
 ,→   R1+R2 ( 0   2  0 |  1 )    3
      R1+R3 ( 0   0  6 | −1 )    5

     6R1−R3 ( 6  −6  0 | −11 )  −11
 ,→      R2 ( 0   2  0 |   1 )    3
         R3 ( 0   0  6 |  −1 )    5

     R1+3R2 ( 6  0  0 | −8 )   −2
 ,→      R2 ( 0  2  0 |  1 )    3
         R3 ( 0  0  6 | −1 )    5

       R1/6 ( 1  0  0 | −4/3 )  −1/3
 ,→    R2/2 ( 0  1  0 |  1/2 )   3/2
       R3/6 ( 0  0  1 | −1/6 )   5/6
5 (20 pts)
Does the set {x, x2 , x3 , x4 } contain a fundamental solution set for the differential equation
x2 y 00 − 4xy 0 + 6y = 0
Ans
Let Ly = x^2 y″ − 4x y′ + 6y. Then L(x^2) = 2x^2 − 8x^2 + 6x^2 = 0 and L(x^3) = 6x^3 − 12x^3 + 6x^3 = 0,
while L(x) = 2x ≠ 0 and L(x^4) = 2x^4 ≠ 0. The Wronskian is W[x^2, x^3](x) = x^2(3x^2) − x^3(2x) = x^4.
Since x^4 ≠ 0 on (−∞, 0) and (0, +∞), {x^2, x^3} is a fundamental solution set for the given
ODE on the intervals (−∞, 0) and (0, +∞).
Test II
1 (20 pts)
(a)(3 pts) Does the complex number −i lie on the unit circle? (The unit circle is the circle
of radius 1 centered at the origin.)
(b)(3 pts) What does the Fundamental Theorem of Algebra say about the complex polyno-
mial z^5 + i = 0?
Ans Yes
Ans
    z^5 + i = 0
    z^5 = r^5 e^(i5θ) = −i = e^(i(3π/2 + 2nπ)),   n = 0, 1, 2, . . .
    r = 1
    5θ = 3π/2 + 2nπ
    θ = 3π/10 + n(2π/5)
      = 3π/10, 7π/10, 11π/10, 15π/10, 19π/10
      = 3π/10, 7π/10, 11π/10, 3π/2, 19π/10
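The sketch below (ours) builds the five roots from the angles just listed and confirms both that each satisfies z^5 = −i and that each lies on the unit circle, which also answers part (e).

```python
import cmath
import math

# z = e^{i(3*pi/10 + 2*pi*n/5)}, n = 0..4 (r = 1 for every root)
roots = [cmath.exp(1j * (3 * math.pi / 10 + 2 * math.pi * n / 5)) for n in range(5)]
for z in roots:
    assert abs(z**5 - (-1j)) < 1e-12    # z^5 = -i
    assert math.isclose(abs(z), 1.0)    # on the unit circle
```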
(e)(3 pts) Do the roots in (d) all lie on the unit circle?
2 (20 pts)
y 00 + 4y 0 + 13y = 0
y(0) = −1
y 0 (0) = 1
Ans
Substituting y = erx gives the auxiliary equation r2 + 4r + 13 = 0 which has roots r =
−2 ± 3i so the fundamental solution set for the given homogeneous differential equation is
{e^(−2x) cos 3x, e^(−2x) sin 3x} and a general solution of the equation is therefore
    y = c1 e^(−2x) cos 3x + c2 e^(−2x) sin 3x
where the constants c1 and c2 are arbitrary. We have to find the values of c1 and c2 so
that the solution passes through the point (0, −1) with a slope of 1.
    y′ = (−2c1 e^(−2x) cos 3x − 3c1 e^(−2x) sin 3x) + (−2c2 e^(−2x) sin 3x + 3c2 e^(−2x) cos 3x)
    y(0) = c1 = −1
    y′(0) = −2c1 + 3c2 = 1   →   c2 = −1/3
    y = −e^(−2x) cos 3x − (1/3) e^(−2x) sin 3x
3
3 (20 pts)
using the method of undetermined coefficients (not just the form of the particular solution,
you have to explicitly determine the constants).
Ans
Substituting y = erx into the corresponding homogeneous differential equation y 00 +2y 0 +y = 0
gives the auxiliary equation r2 + 2r + 1 = (r + 1)2 = 0 which has repeated roots r = −1, −1
so the fundamental solution set for the homogeneous equation is {e−x , xe−x }. Now the RHS
of the given nonhomogeneous equation is (x2 + x + 1)e−x which in Table 4.1 on Page 197 of
the text is Type (IV) so the particular solution form is
    yp = x^2 (Ax^2 + Bx + C) e^(−x)
4 (20 pts)
y 00 + y = sec θ
Ans
Substituting y = erθ into the corresponding homogeneous equation y 00 + y = 0 gives the
auxiliary equation r2 + 1 = 0 which has roots r = ±i so the fundamental solution set for
the homogeneous equation is {y1 , y2 } = {cos θ, sin θ}. The Wronskian of the fundamental
solutions is
    W[y1, y2](θ) = |  cos θ  sin θ |
                   | −sin θ  cos θ |  =  cos^2 θ + sin^2 θ = 1
Now the method of variation of parameters gives
    v1 = ∫ (−g y2 / W) dθ = ∫ (−sec θ)(sin θ) / 1 dθ = −∫ tan θ dθ = ln |cos θ| + C1
    v2 = ∫ (g y1 / W) dθ = ∫ (sec θ)(cos θ) / 1 dθ = ∫ 1 dθ = θ + C2
    yp = v1 y1 + v2 y2
5 (20 pts)
Use the elimination method to find a general solution of the first-order linear system
x0 + y = sin t
−x + y 0 = sin t
Ans
In operator notation with D ≡ d/dt, the given system is
    D[x] + y = sin t
    −x + D[y] = sin t
From Table 4.1 the RHS of this equation is Type (III) and, because the fundamental solution
set for the corresponding homogeneous equation x00 + x = 0 is {cos t, sin t}, the particular
solution form is
xp = tf where f = A cos t + B sin t
Use the method of undetermined coefficients to find the constants A and B
xp = tf
x0p = f + tf 0
x00p = f 0 + f 0 + tf 00 = 2f 0 − tf = 2f 0 − xp
    y = −x′ + sin t
      = −{ −c1 sin t + c2 cos t + (1/2)(cos t + sin t) + (1/2) t(−sin t + cos t) } + sin t
      = (c1 + 1/2) sin t − (c2 + 1/2) cos t + (1/2) t (sin t − cos t)
Alternatively, solving for y first (with arbitrary constants a1 and a2) gives
    x = y′ − sin t
      = (a2 − 1/2) cos t − (a1 + 1/2) sin t + (1/2) t (cos t + sin t)
2 2 2
Test III
1a (10 pts)
A mass is attached to a spring suspended from a high ceiling, thereby stretching the spring
a certain distance on coming to rest at equilibrium. When the mass is given an initial
displacement and an initial velocity, the equation of simple harmonic motion of the mass is
    x(t) = 1.1 sin(7t − π/4)
[This equation assumes that x(t) is positive upwards; x(t) is the displacement in meters of
the mass above the equilibrium position t seconds after the motion begins.]
(a)(2 pts) What was the initial displacement (direction and magnitude) of the mass? Explain
your answer in words too.
Ans
    x(0) = 1.1 sin(−π/4) = −1.1/√2 = −0.778 m
The mass was displaced 0.778 m downwards.
(b)(2 pts) What was the initial velocity (direction and magnitude) of the mass? Explain
your answer in words too.
Ans
    x′(t) = 7.7 cos(7t − π/4)
    x′(0) = 7.7 cos(−π/4) = 7.7/√2 = 5.444 m/sec
The mass was given an upward velocity of 5.444 m/sec.
(c)(6 pts) When will the mass first pass through the equilibrium position in a downwards
direction?
Ans
When the mass first goes through the equilibrium position x(t) = 1.1 sin(7t − π/4) = 0 and
7t − π/4 = 0 but at this time x0 (t) = 7.7 m/sec (upwards!). The next time the mass goes
through the equilibrium position x(t) = 1.1 sin(7t − π/4) = 0 and 7t − π/4 = π at which
time x0 (t) = −7.7 m/sec (downwards). Hence the first time that the mass passes downwards
through the equilibrium position is given by t = 5π/28 = 0.561 seconds.
1b (10 pts)
Express the complex number (1 − 3i)/(2 + i) in polar form.
Ans
    (1 − 3i)/(2 + i) = (√10 e^(iθ1)) / (√5 e^(iθ2)) = √2 e^(i(θ1−θ2))
    cos θ1 = 1/√10,   sin θ1 = −3/√10
    θ1 = tan^(−1)(tan θ1) = tan^(−1)(−3) = −1.25 rad (= −0.40π = −71.57°)
    cos θ2 = 2/√5,   sin θ2 = 1/√5
    θ2 = tan^(−1)(tan θ2) = tan^(−1)(1/2) = 0.46 rad (= 0.15π = 26.57°)
    (1 − 3i)/(2 + i) = (√10 e^(−1.25i)) / (√5 e^(0.46i)) = √2 e^(−1.71i) = √2 e^(i4.57) (= √2 e^(−i0.55π) = √2 e^(i1.45π))
Ans
    (1 − 3i)/(2 + i) = ((1 − 3i)/(2 + i)) · ((2 − i)/(2 − i)) = (2 − i − 6i + 3i^2)/(4 − i^2) = (−1 − 7i)/5 = −1/5 − (7/5)i = √2 e^(iθ)
    cos θ = −1/(5√2),   sin θ = −7/(5√2)
    θ = tan^(−1)(tan θ) + π = tan^(−1)(7) + π = 1.43 + π = 4.57 rad (= 1.45π = 261.87°)
    (1 − 3i)/(2 + i) = √2 e^(i4.57) = √2 e^(i1.45π)
2 (20 pts)
Find the Laplace Transform F(s) = L{f(t)}(s) = ∫₀^∞ e^(−st) f(t) dt of
    f(t) = { 1         0 < t < 3
           { e^(2t)    3 < t
Ans
    F(s) = ∫₀^3 e^(−st) · 1 dt + ∫₃^∞ e^(−st) e^(2t) dt
         = [ e^(−st)/(−s) ]₀^3 + lim_{N→∞} [ e^(−(s−2)t)/(−(s−2)) ]₃^N
         = ( e^(−3s)/(−s) − 1/(−s) ) + lim_{N→∞} ( e^(−(s−2)N)/(−(s−2)) − e^(−3(s−2))/(−(s−2)) )
         = 1/s − e^(−3s)/s + e^(6−3s)/(s − 2)        (s > 2)
Note that F(2) DNE:
    F(2) = ∫₀^∞ e^(−2t) f(t) dt = ∫₀^3 e^(−2t) · 1 dt + ∫₃^∞ e^(−2t) · e^(2t) dt = ∫₀^3 e^(−2t) dt + lim_{N→∞} ∫₃^N 1 dt
which diverges.
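The closed form can be spot-checked against a direct numerical integration of the definition. This sketch (ours) splits the integral at the jump in f at t = 3, uses the trapezoidal rule, and truncates at t = 60, where the integrand is negligible for s = 3.

```python
import math

s = 3.0

def trapezoid(g, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

# integral split at t = 3: f = 1 on (0, 3), f = e^{2t} on (3, inf)
numeric = (trapezoid(lambda t: math.exp(-s * t), 0.0, 3.0, 20000)
           + trapezoid(lambda t: math.exp(-s * t) * math.exp(2 * t), 3.0, 60.0, 20000))

closed_form = 1 / s - math.exp(-3 * s) / s + math.exp(6 - 3 * s) / (s - 2)
assert abs(numeric - closed_form) < 1e-5
```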
3 (20 pts)
Ans
4 (20 pts)
y 00 + 3y 0 + 5y = cos 3t
y(0) = −1
y 0 (0) = 2
Ans
    [ s^2 Y(s) − s y(0) − y′(0) ] + 3[ s Y(s) − y(0) ] + 5 Y(s) = s/(s^2 + 9)
    [ s^2 Y(s) − s(−1) − 2 ] + 3[ s Y(s) − (−1) ] + 5 Y(s) = s/(s^2 + 9)
    (s^2 + 3s + 5) Y(s) + s − 2 + 3 = s/(s^2 + 9)
    (s^2 + 3s + 5) Y(s) = s/(s^2 + 9) − s − 1
                        = ( s − (s + 1)(s^2 + 9) ) / (s^2 + 9)
                        = ( s − [s^3 + 9s + s^2 + 9] ) / (s^2 + 9)
    Y(s) = − (s^3 + s^2 + 8s + 9) / ( (s^2 + 3s + 5)(s^2 + 9) )
5 (20 pts)
    (2s^2 − 3s + 13) / ( (s^2 − 2s + 5)(s + 3) )
Ans
Write
    (2s^2 − 3s + 13) / ( (s^2 − 2s + 5)(s + 3) ) = ( A(s − 1) + 2B ) / ( (s − 1)^2 + 2^2 ) + C/(s + 3)
with A = 0 and C = 2 (from comparing s^2 coefficients and setting s = −3), and
    s = 1   →   2 − 3 + 13 = [0 + B(2)](4) + (2)[0 + 4]   →   B = 1/2
so
    (2s^2 − 3s + 13) / ( (s^2 − 2s + 5)(s + 3) ) = (1/2) · 2/((s − 1)^2 + 2^2) + 2/(s + 3)
    L^(−1){ (2s^2 − 3s + 13) / ( (s^2 − 2s + 5)(s + 3) ) } = (1/2) L^(−1){ 2/((s − 1)^2 + 2^2) } + L^(−1){ 2/(s + 3) }
        = (1/2) e^t sin 2t + 2 e^(−3t)
TEST I
Please show your working, use correct notation, and check to see that you have
indeed answered the question.
Three blank pages are provided for additional workspace; please DO NOT
hand in loose pages, they WILL NOT be graded.
Solution
(a)(5 pts)
y ≡ 0 clearly satisfies the ODE and therefore is a solution.
(b)(15 pts)
If y ≠ 0 , the variables x and y can be separated and the ODE can be written
as the equality of two differentials as
    −dy/y^2 = sin x dx
Integrating both sides of this equation gives
    ∫ −dy/y^2 = ∫ sin x dx
    1/y = −cos x + C
Hence, the general solution for y not identically zero is
    y^(−1) + cos x = C
Solution
Let t be the time in minutes [min] after the salt solution begins to flow, and let
x (t ) represent the amount of salt in the tank at time t . Then
    dx/dt = 10 g/L · 4 L/min − 2 L/min · x(t)/(20 + 2t) g/L = 40 − x/(10 + t)
In the standard form dx/dt + P(t)x = Q(t), this equation is
    dx/dt + (1/(10 + t)) x = 40                        (*)
wherein P(t) = 1/(10 + t) and Q(t) = 40. Now ∫ P(t) dt = ∫ dt/(10 + t) = ln(10 + t), so the
integrating factor is µ(t) = e^(∫ P(t) dt) = e^(ln(10+t)) = 10 + t, and the general solution of
(*) is
    x(t) = (1/µ(t)) { ∫ µ(t) Q(t) dt + C } = (1/(10 + t)) { ∫ 40(10 + t) dt + C }
         = (1/(10 + t)) { 20(10 + t)^2 + C }
         = 20(10 + t) + C/(10 + t)
Now at time zero there is no salt in the tank so
    x(0) = 200 + C/10 = 0   →   C = −2000
Thus the amount of salt in the tank at any time t is given by
    x(t) = 20(10 + t) − 2000/(10 + t)
After 10 minutes, as the tank is about to overflow, the amount of salt in the
tank is
    x(10) = 400 − 2000/20 = 400 − 100 = 300 g.
Alternatively, one might do the integration above differently as per
    x(t) = (1/(10 + t)) { ∫ (400 + 40t) dt + C } = (1/(10 + t)) { 400t + 20t^2 + C }
    x(0) = (1/(10 + 0)) { 0 + 0 + C } = C/10 = 0   →   C = 0
    x(t) = (400t + 20t^2)/(10 + t)
    x(10) = (4000 + 2000)/20 = 300 g
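Both forms of the answer can be verified in a few lines. This sketch (ours) confirms that x(t) = 20(10 + t) − 2000/(10 + t) satisfies the ODE dx/dt + x/(10 + t) = 40, the initial condition x(0) = 0, and the final value x(10) = 300 g.

```python
import math

def x(t):
    return 20 * (10 + t) - 2000 / (10 + t)

def xprime(t):
    return 20 + 2000 / (10 + t) ** 2

assert x(0) == 0
assert x(10) == 300
for t in (1.0, 5.0, 9.9):
    assert math.isclose(xprime(t) + x(t) / (10 + t), 40.0)
```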
20
Solution
    ( 3  −2   2 |  9 )
    ( 1  −2   1 |  5 )
    ( 2  −1  −2 | −1 )

       R2 ( 1  −2   2... |  ) — see corrected steps below:
       R2 ( 1  −2   1 |  5 )
 →     R1 ( 3  −2   2 |  9 )
       R3 ( 2  −1  −2 | −1 )

           R1 ( 1  −2   1 |   5 )
 →     R2−3R1 ( 0   4  −1 |  −6 )
       R3−2R1 ( 0   3  −4 | −11 )

           R1 ( 1  −2   1 |  5 )
 →         R2 ( 0   4  −1 | −6 )
      3R2−4R3 ( 0   0  13 | 26 )

           R1 ( 1  −2   1 |  5 )
 →         R2 ( 0   4  −1 | −6 )
        R3/13 ( 0   0   1 |  2 )

        R1−R3 ( 1  −2  0 |  3 )
 →      R2+R3 ( 0   4  0 | −4 )
           R3 ( 0   0  1 |  2 )

           R1 ( 1  −2  0 |  3 )
 →       R2/4 ( 0   1  0 | −1 )
           R3 ( 0   0  1 |  2 )

       R1+2R2 ( 1  0  0 |  1 )
 →         R2 ( 0  1  0 | −1 )
           R3 ( 0  0  1 |  2 )
Solution
(a)(13 pts)
Writing the ODE in the standard form d^2y/dt^2 + p(t) dy/dt + q(t) y = g(t) of Theorem 2
(Page 157) gives (for t away from 1)
    d^2y/dt^2 − (3t/(t − 1)) dy/dt + (4/(t − 1)) y = sin t/(t − 1)
so by comparison
    p(t) = −3t/(t − 1),   q(t) = 4/(t − 1),   g(t) = sin t/(t − 1)
These functions are continuous on (−∞, 1) and (1, +∞), but by Theorem 2, in
order to guarantee existence and uniqueness, the initial point x0 = −2 has to be
in the interval in question. Hence, the largest interval on which the given initial
value problem is guaranteed to have a unique solution is (−∞, 1) = {x : −∞ < x < 1}.
(b) (7 pts)
    L(y1 + y2) = (y1 + y2)″ + (y1 + y2)^2
               = y1″ + y2″ + y1^2 + 2 y1 y2 + y2^2 = L(y1) + L(y2) + 2 y1 y2 ≠ L(y1) + L(y2)
    L(cy) = (cy)″ + (cy)^2 = c y″ + c^2 y^2 ≠ c L(y)
(b)(5 pts)
Also by virtue of the same Wronskian result, {y1, y2} is a fundamental
solution set on (−∞, ∞) and therefore it is immediately true that
    y(x) = c1 e^(2x) + c2 e^(3x)
wherein c1 and c2 are arbitrary constants, is a general solution of (*).
(c)(5 pts)
    y(x) = c1 e^(2x) + c2 e^(3x)
    y′(x) = 2c1 e^(2x) + 3c2 e^(3x)
    y(0) = c1 + c2 = −1
    y′(0) = 2c1 + 3c2 = −4
    c1 = 1   →   c2 = −2
    y(x) = e^(2x) − 2e^(3x)
Test II
Wednesday, March 24, 1999
8:30—9:30 p.m.
SC 360 & 362
Use synthetic division to find a general solution of the third order constant coefficient dif-
ferential equation
y 000 − 5y 00 + 8y 0 − 6y = 0.
Solution
The auxiliary equation is r3 − 5r2 + 8r − 6 = 0; synthetic division for the factor (r − 3) gives
3 1 -5 8 -6
0 3 -6 6
1 -2 2 0
indicating that the cubic factorizes to (r − 3)(r^2 − 2r + 2) = 0, so the roots of the auxiliary
equation are r = 1 ± i, 3, and a general solution of the third order differential equation is
    y = c1 e^(3x) + e^x (c2 cos x + c3 sin x)
(a)(4 pts) Show that {e2x , xe2x } is a fundamental solution set for the homogeneous second
order constant coefficient differential equation
y 00 − 4y 0 + 4y = 0.
(b)(16 pts) Using Table 4.1 (see the Formula Sheet), determine the form of a particular
solution yp (x) for the nonhomogeneous second order constant coefficient differential equation
y 00 − 4y 0 + 4y = g(x),
for
(i) g(x) = 3 cos 2x, (ii) g(x) = x2 sin 2x, (iii) g(x) = e3x , (iv) g(x) = x2 e2x − e2x .
Solution
(a)(10 pts) Find a particular solution yp (x) of the nonhomogeneous second order constant
coefficient differential equation
y 00 − 2y 0 + y = x2 + 1
y 00 − 2y 0 + y = x−2 ex + x2 + 1
(a)
    yp(x) = Ax^2 + Bx + C
    yp′(x) = 2Ax + B
    yp″(x) = 2A
    yp″ − 2yp′ + yp = 2A − 2(2Ax + B) + Ax^2 + Bx + C
                    = Ax^2 + (B − 4A)x + (2A − 2B + C) = x^2 + 1
A = 1
B − 4A = 0 → B=4
2A − 2B + C = 1 → C =7
yp (x) = x2 + 4x + 7
(b) By linearity and the principle of superposition we need only find a particular solution
for y 00 − 2y 0 + y = x−2 ex using the method of variation of parameters and then add the result
obtained in part (a). The homogeneous ODE y 00 − 2y 0 + y = 0 has the auxiliary equation
r2 − 2r + 1 = (r − 1)2 = 0 so the fundamental solution set is {y1 , y2 } = {ex , xex }. The
Wronskian W[y1, y2](x) = e^(2x), as can be readily verified. Thus
    v1(x) = ∫ (−x^(−2) e^x · x e^x) / e^(2x) dx = −∫ x^(−1) dx = −ln |x|
    v2(x) = ∫ (x^(−2) e^x · e^x) / e^(2x) dx = ∫ x^(−2) dx = −1/x
    yp(x) = (−ln |x|) e^x + (−1/x)(x e^x) = −e^x (1 + ln |x|)
    yp(x) = −e^x ln |x|
The last equation holds because ex satisfies the homogeneous equation. Hence, a particular
solution for x > 0 is
yp (x) = −ex ln x + x2 + 4x + 7
while for x < 0 it is
yp (x) = −ex ln(−x) + x2 + 4x + 7
Problem 4 (20 pts) Student ID #
Use the elimination method to find a general solution for the first order linear system
dx
− y = t2 ,
dt
dy
x+ = 1.
dt
Solution
    D[x] − y = t^2
    x + D[y] = 1
    D^2[x] − D[y] = 2t
    x + D[y] = 1
    D^2[x] + x = x″ + x = 2t + 1.
A mass of 4 kg is attached to a spring hanging from a ceiling, thereby stretching the spring
9.8 cm on coming to rest at equilibrium. The mass is then lifted up 10 cm above the
equilibrium point and given a downward velocity of 1 m/sec. Determine the equation for
the simple harmonic motion of the mass. When will the mass first reach its minimum height
after being set in motion.
Solution
Test III
Wednesday, April 21, 1999
8:30—9:30 p.m.
SC 360 & 362
A linear homogeneous differential equation with constant coefficients has the auxiliary equation
    (r + 1)(r − 3)^2 (r + 2)^3 ((r + 1)^2 + 4)^2 r^4 = 0
of degree 14.
(b)(3 pts) What does the fundamental theorem of algebra tell us about this auxiliary equation?
(c)(5 pts) List the roots of this auxiliary equation. If a root has multiplicity m, repeat it m times.
(d)(7 pts) Write down a general solution of the linear homogeneous differential equation whose
auxiliary equation is shown above.
    c1 e^(−x)
    + (c2 + c3 x) e^(3x)
    + (c4 + c5 x + c6 x^2) e^(−2x)
    + (c7 + c8 x) e^(−x) cos 2x + (c9 + c10 x) e^(−x) sin 2x
    + c11 + c12 x + c13 x^2 + c14 x^3
(e)(2 pts) Should the number of arbitrary constants in your general solution agree with the number
of roots (counting multiplicities) listed in Part (c)?
Yes
Problem 2 (20 pts)
(a)(4 pts) Let f(t) be a function defined on [0, +∞). The Laplace transform of f is the function
F defined by the integral
    F(s) = ∫₀^∞ e^(−st) f(t) dt . . . (∗)
(c)(4 pts) Explain mathematically what it means to say that the integral in (*) exists as an
improper integral.
    ∫₀^∞ e^(−st) f(t) dt = lim_{N→∞} ∫₀^N e^(−st) f(t) dt
(d)(4 pts) Determine the Laplace transform of the function f(t) = cos bt, where b is a nonzero
constant.
    F(s) = ∫₀^∞ e^(−st) cos bt dt = lim_{N→∞} ∫₀^N e^(−st) cos bt dt
         = lim_{N→∞} [ (e^(−st)/(s^2 + b^2)) (−s cos bt + b sin bt) ]₀^N
         = lim_{N→∞} [ (e^(−sN)/(s^2 + b^2)) (−s cos bN + b sin bN) − (1/(s^2 + b^2)) (−s) ]
         = s/(s^2 + b^2)   for s > 0
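The formula L{cos bt}(s) = s/(s^2 + b^2) is easy to spot-check numerically. This sketch (ours; s and b are arbitrary sample values) approximates the defining integral with the trapezoidal rule, truncated where e^(−st) has decayed to negligible size.

```python
import math

s, b = 2.0, 3.0

def g(t):
    return math.exp(-s * t) * math.cos(b * t)

# trapezoidal rule on [0, 20]; e^{-2*20} is far below the tolerance
n, T = 200000, 20.0
h = T / n
numeric = h * (0.5 * (g(0) + g(T)) + sum(g(i * h) for i in range(1, n)))
assert abs(numeric - s / (s**2 + b**2)) < 1e-6
```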
(e)(4 pts) What is the domain of the Laplace transform determined in Part (d)? Explain why.
(a)(5 pts) If c1 , c2 are arbitrary constants, and if f1 , f2 are functions whose Laplace transforms
exist for s > α, what does the linearity of the Laplace transform operator L tell us about
L {c1 f1 + c2 f2 }?
    = 5 · 1/s + 3 · 1/(s − 2) + 5 · 3!/(s − 2)^4 + 7 · s/(s^2 + 3^2) + 9 · (s + 5)/((s + 5)^2 + 3^2)
(c)(5 pts) What is the domain of the Laplace transform in (b)? Explain why.
(a)(10 pts) Use an appropriate trigonometric identity to determine the Laplace transform
L\{\sqrt{t}\}(s) = F(s) = \frac{1}{2}\,\frac{\sqrt{\pi}}{s^{3/2}}
L\{-t\sqrt{t}\}(s) = \frac{dF}{ds}(s) = -\frac{3\sqrt{\pi}}{4\,s^{5/2}}
L\{t^2\sqrt{t}\}(s) = \frac{d^2F}{ds^2}(s) = \frac{15\sqrt{\pi}}{8\,s^{7/2}}
Problem 5 (20 pts)
(a)(6 pts) Given that Y(s) = L\{y\}(s), where y(t) satisfies the initial value problem
y'' - 2y' + 7y = t^2 e^t, \qquad y(0) = 0, \quad y'(0) = 0,
find the rational function P(s)/Q(s) that Y(s) is equal to.
then the Laplace transform of y(t), namely, Y(s) = L\{y\}(s), has the partial fraction expansion
Y(s) = \frac{2}{(s^2-2s+7)(s-1)^3} = \frac{1}{18}\,\frac{s-1}{s^2-2s+7} - \frac{1}{18}\,\frac{1}{s-1} + \frac{1}{3}\,\frac{1}{(s-1)^3}.
Find the solution of the initial value problem by inverting this Laplace transform.
y(t) = \frac{1}{18}\,e^t\cos\sqrt{6}\,t - \frac{1}{18}\,e^t + \frac{1}{6}\,t^2 e^t
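As a sanity check outside the exam, finite differences (at arbitrarily chosen sample points) confirm that this y(t) satisfies the initial value problem:

```python
import math

def y(t):
    # Candidate solution of y'' - 2y' + 7y = t^2 e^t, y(0) = y'(0) = 0
    return (math.exp(t) * math.cos(math.sqrt(6) * t) / 18
            - math.exp(t) / 18
            + t**2 * math.exp(t) / 6)

def check_ode(t, h=1e-4):
    yp = (y(t + h) - y(t - h)) / (2 * h)            # y'
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2    # y''
    residual = ypp - 2 * yp + 7 * y(t) - t**2 * math.exp(t)
    assert abs(residual) < 1e-5, (t, residual)

for t in [0.5, 1.0, 2.0]:
    check_ode(t)

assert abs(y(0.0)) < 1e-12                           # y(0) = 0
assert abs((y(1e-6) - y(-1e-6)) / 2e-6) < 1e-5       # y'(0) = 0
print("IVP verified")
```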
Problem 5 (Cont’d)
(c)(10 pts) Determine the partial fraction expansion for the rational function
Y(s) = \frac{4s^2+13s+19}{(s-1)(s^2+4s+13)} = \frac{4s^2+13s+19}{(s-1)\left[(s+2)^2+3^2\right]} = \frac{A}{s-1} + \frac{B(s+2)+3C}{(s+2)^2+3^2}
Clearing denominators: 4s^2 + 13s + 19 = A\left[(s+2)^2+9\right] + \left[B(s+2)+3C\right](s-1).
s = 1: \quad 4 + 13 + 19 = A(3^2 + 9) \;\Rightarrow\; 36 = 18A \;\Rightarrow\; A = 2
s = -2: \quad 16 - 26 + 19 = (2)(9) + (3C)(-3) \;\Rightarrow\; 9 = 18 - 9C \;\Rightarrow\; C = 1
s^2 coefficients: \quad 4 = A + B \;\Rightarrow\; B = 2
Y(s) = \frac{2}{s-1} + \frac{2(s+2)+3}{(s+2)^2+3^2}
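The expansion can be spot-checked exactly with rational arithmetic (a verification sketch, not part of the exam; the sample points are arbitrary):

```python
from fractions import Fraction as Fr

def Y(s):
    # Original rational function from Problem 5(c)
    return (4 * s**2 + 13 * s + 19) / ((s - 1) * (s**2 + 4 * s + 13))

def pf(s):
    # Claimed partial fraction expansion
    return 2 / (s - 1) + (2 * (s + 2) + 3) / ((s + 2)**2 + 9)

# s^2 + 4s + 13 has no rational roots, so any rational s except s = 1 is safe
for s in [Fr(2), Fr(3), Fr(-5), Fr(1, 2)]:
    assert Y(s) == pf(s)
print("partial fractions verified")
```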
MA232 Differential Equations Spring 1999
Final
Name
Student ID#
Signature
Problem Score
1
2
3
4
5
6
7
8
9
10
Bonus
Total
\frac{dy}{dx} = \frac{y\cos x}{1+2y^2}
(b)(5 pts) Use your answer to Part (a) to solve the initial value problem
\frac{dy}{dx} = \frac{y\cos x}{1+2y^2}, \qquad y(0) = 1
(a) Separating variables:
\frac{1+2y^2}{y}\,dy = \cos x\,dx
\ln|y| + y^2 = \sin x + C
(b) The initial condition y(0) = 1 gives 0 + 1 = 0 + C, so C = 1:
\ln|y| + y^2 = \sin x + 1
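A quick check (added here, not in the exam): along any curve where y' equals the right-hand side of the ODE, the implicit quantity ln|y| + y² − sin x should have zero x-derivative, and y(0) = 1 should force C = 1:

```python
import math

def rhs(x, y):
    # dy/dx from the differential equation
    return y * math.cos(x) / (1 + 2 * y**2)

def implicit_drift(x, y):
    # d/dx [ln|y| + y^2 - sin x] along a solution, using y' = rhs(x, y)
    return (1 / y + 2 * y) * rhs(x, y) - math.cos(x)

# Arbitrary sample points with y > 0
for x, y in [(0.0, 1.0), (1.2, 0.5), (-0.7, 2.0)]:
    assert abs(implicit_drift(x, y)) < 1e-12

# Initial condition y(0) = 1: ln 1 + 1 = sin 0 + C, so C = 1
C = math.log(1.0) + 1.0**2 - math.sin(0.0)
assert C == 1.0
print("separable solution verified")
```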
2 (20 pts)
A 60 L tank initially contains 30 L of pure water. A solution containing 10 g/L of salt flows into the tank
at a rate of 6 L/min, and the well-stirred mixture flows out at a rate of 3 L/min. How much salt is in the
tank just before it overflows?
Let x(t) be the grams of salt in the tank at time t. The volume is 30 + (6-3)t = 30 + 3t L, so the tank is full when 30 + 3t = 60, i.e., at t = 10 min.
\frac{dx}{dt} = (10)(6) - \frac{x(t)}{30+3t}\cdot 3 = 60 - \frac{x(t)}{10+t}
\frac{dx}{dt} + \frac{1}{10+t}\,x(t) = 60
\mu(t) = e^{\int \frac{dt}{10+t}} = e^{\ln(10+t)} = 10+t
x(t) = \frac{1}{\mu(t)}\left[\int \mu(t)\,Q(t)\,dt + C\right] = \frac{1}{10+t}\left[\int (10+t)\,60\,dt + C\right] = \frac{1}{10+t}\left[30(10+t)^2 + C\right] = 30(10+t) + \frac{C}{10+t}
x(0) = 0 = 300 + \frac{C}{10} \;\Rightarrow\; C = -3000
x(t) = 30(10+t) - \frac{3000}{10+t}
x(10) = 600 - \frac{3000}{20} = 600 - 150 = 450 \text{ g}
3 (20 pts)
y'' - 2y' + 5y = 0.
(b)(16 pts) Using Table 4.1 (see the Test II Formula Sheet), determine the form of a particular solution
y_p(x) of the corresponding nonhomogeneous second order constant coefficient differential equation
y'' - 2y' + 5y = g(x),
(a)
r^2 - 2r + 5 = 0
(r - 1)^2 + 4 = 0
r = 1 \pm 2i
(b)
(a)(4 pts) Find a general solution of the linear second order constant coefficient homogeneous differential
equation
y'' - 6y' + 9y = 0
(b)(12 pts) Find a particular solution y_p(x) of the nonhomogeneous differential equation
y'' - 6y' + 9y = x^{-3} e^{3x}
(a)
r^2 - 6r + 9 = 0
(r - 3)^2 = 0
r = 3, 3
(b) \{y_1(x), y_2(x)\} = \{e^{3x}, xe^{3x}\} is a fundamental solution set on (-\infty, \infty), as the following
nonzero Wronskian attests.
W(x) = \begin{vmatrix} e^{3x} & xe^{3x} \\ 3e^{3x} & e^{3x}+3xe^{3x} \end{vmatrix} = e^{6x} + 3xe^{6x} - 3xe^{6x} = e^{6x}
v_1(x) = \int \frac{-g(x)\,y_2(x)}{W(x)}\,dx = \int \frac{-x^{-3}e^{3x}\cdot xe^{3x}}{e^{6x}}\,dx = \int -x^{-2}\,dx = x^{-1}
v_2(x) = \int \frac{g(x)\,y_1(x)}{W(x)}\,dx = \int \frac{x^{-3}e^{3x}\,e^{3x}}{e^{6x}}\,dx = \int x^{-3}\,dx = \frac{x^{-2}}{-2}
y_p(x) = v_1(x)\,y_1(x) + v_2(x)\,y_2(x) = x^{-1}e^{3x} - \frac{1}{2}\,x^{-2}\cdot xe^{3x} = \frac{1}{2}\,x^{-1}e^{3x}
(c)
y(x) = c_1 e^{3x} + c_2 xe^{3x} + \frac{1}{2}\,x^{-1}e^{3x}
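The particular solution from variation of parameters can be checked numerically (a sketch added here, not part of the exam; the finite-difference step and sample points are arbitrary):

```python
import math

def yp(x):
    # Particular solution from variation of parameters: (1/2) x^(-1) e^(3x)
    return 0.5 * math.exp(3 * x) / x

def lhs(x, h=1e-4):
    # y'' - 6y' + 9y by central differences
    ypp = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    ypr = (yp(x + h) - yp(x - h)) / (2 * h)
    return ypp - 6 * ypr + 9 * yp(x)

# Should reproduce the forcing term x^(-3) e^(3x)
for x in [0.5, 1.0, 2.0]:
    assert abs(lhs(x) - x**(-3) * math.exp(3 * x)) < 1e-3
print("particular solution verified")
```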
5 (20 pts)
Use the elimination method to find a general solution for the first order linear system
\frac{dx}{dt} + y = t^2, \qquad x + \frac{dy}{dt} = 1.
In operator form:
D[x] + y = t^2
x + D[y] = 1
Applying D to the first equation gives D^2[x] + D[y] = 2t; subtracting the second equation eliminates y:
x'' - x = 2t - 1
r^2 - 1 = 0 \;\Rightarrow\; r = \pm 1, \qquad x_p = 1 - 2t
x(t) = c_1 e^{-t} + c_2 e^t + 1 - 2t
y(t) = t^2 - \frac{dx}{dt} = c_1 e^{-t} - c_2 e^t + 2 + t^2
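Since the derivatives are available in closed form, both equations of the system can be checked for arbitrary constants (a verification added here, not in the exam):

```python
import math

def x(t, c1, c2):
    return c1 * math.exp(-t) + c2 * math.exp(t) + 1 - 2 * t

def y(t, c1, c2):
    return c1 * math.exp(-t) - c2 * math.exp(t) + 2 + t**2

def xprime(t, c1, c2):
    return -c1 * math.exp(-t) + c2 * math.exp(t) - 2

def yprime(t, c1, c2):
    return -c1 * math.exp(-t) - c2 * math.exp(t) + 2 * t

for c1, c2 in [(1.0, -2.0), (0.5, 3.0)]:
    for t in [0.0, 0.7, 1.5]:
        # dx/dt + y = t^2 and x + dy/dt = 1
        assert abs(xprime(t, c1, c2) + y(t, c1, c2) - t**2) < 1e-9
        assert abs(x(t, c1, c2) + yprime(t, c1, c2) - 1) < 1e-9
print("system solution verified")
```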
6 (20 pts)
A mass of 4 kg is attached to a spring hanging from a ceiling, thereby stretching the spring 9.8 cm on
coming to rest at equilibrium. The mass is then pulled down 10 cm below the equilibrium point and given
an upward velocity of 1 m/sec. Determine the equation for the simple harmonic motion of the mass. When
will the mass first reach its maximum height after being set in motion?
Taking x(t) as displacement below equilibrium (in meters):
mg = k\ell \;\Rightarrow\; (4)(9.8) = k\,\frac{9.8}{100} \;\Rightarrow\; k = 400 \text{ N/m}
4x'' + 400x = 0
x'' + 100x = 0
r^2 + 100 = 0 \;\Rightarrow\; r = \pm 10i
x(t) = c_1\cos 10t + c_2\sin 10t = A\sin(10t + \phi)
x'(t) = 10A\cos(10t + \phi)
A\sin\phi = x(0) = \frac{1}{10}
A\cos\phi = \frac{x'(0)}{10} = -\frac{1}{10}
Since \sin\phi > 0 and \cos\phi < 0, the phase lies in the second quadrant:
\phi = \frac{3\pi}{4}
The mass is at its maximum height when x(t) = -A, i.e., when \sin(10t + \phi) = -1:
10t + \phi = \frac{3\pi}{2}
t = \frac{3\pi/2 - \phi}{10} = \frac{3\pi}{40} \approx 0.236 \text{ sec}
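A numeric check of the amplitude-phase form and the first-maximum time (added here, not in the exam):

```python
import math

A = math.sqrt(2) / 10          # amplitude: sqrt((1/10)^2 + (1/10)^2)
phi = 3 * math.pi / 4

def x(t):
    # x(t) = (1/10) cos 10t - (1/10) sin 10t, displacement below equilibrium
    return 0.1 * math.cos(10 * t) - 0.1 * math.sin(10 * t)

# Initial conditions: pulled 10 cm down, released with 1 m/s upward velocity
assert abs(x(0) - 0.1) < 1e-12
h = 1e-6
assert abs((x(h) - x(-h)) / (2 * h) + 1.0) < 1e-6   # x'(0) = -1

# The two forms of the solution agree
for t in [0.0, 0.1, 0.3]:
    assert abs(x(t) - A * math.sin(10 * t + phi)) < 1e-12

# First maximum height (x = -A) occurs at t = 3*pi/40
tstar = 3 * math.pi / 40
assert abs(x(tstar) + A) < 1e-12
print(round(tstar, 3))  # → 0.236
```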
7 (20 pts)
(a)(6 pts) Given that Y(s) = L\{y(t)\}(s), where y(t) satisfies the initial value problem
y'' - 2y' + 5y = \sin 2t, \qquad y(0) = -2, \quad y'(0) = 1,
find the rational function P(s)/Q(s) that Y(s) is equal to. (Fully expand the numerator polynomial P(s).)
\left[s^2 Y(s) - s\,y(0) - y'(0)\right] - 2\left[sY(s) - y(0)\right] + 5Y(s) = \frac{2}{s^2+2^2}
\left[s^2 Y(s) - s(-2) - (1)\right] - 2\left[sY(s) - (-2)\right] + 5Y(s) = \frac{2}{s^2+4}
\left(s^2 - 2s + 5\right)Y(s) + 2s - 1 - 4 = \frac{2}{s^2+4}
\left(s^2 - 2s + 5\right)Y(s) = \frac{2}{s^2+4} - 2s + 5
Y(s) = \frac{2 - (2s-5)(s^2+4)}{(s^2-2s+5)(s^2+4)} = \frac{-2s^3 + 5s^2 - 8s + 22}{(s^2-2s+5)(s^2+4)}
Y(s) = \frac{1}{s} - \frac{1}{s+4} + \frac{-6(s-1)+1}{(s-1)^2+2^2}
y(t) = 1 - e^{-4t} - 6e^t\cos 2t + \frac{1}{2}\,e^t\sin 2t
7 (Cont’d)
(c)(10 pts) Determine the partial fraction expansion for the rational function
Y(s) = \frac{3s+5}{s(s^2 - 2s + 5)}
\frac{3s+5}{s(s^2-2s+5)} = \frac{A}{s} + \frac{B(s-1)+2C}{(s-1)^2+2^2}
s = 0: \quad 5 = 5A \;\Rightarrow\; A = 1
s = 1: \quad 8 = (1)(4) + (0 + 2C)(1) \;\Rightarrow\; 2C = 4 \;\Rightarrow\; C = 2
s^2 coefficients: \quad 0 = A + B \;\Rightarrow\; B = -1
\frac{3s+5}{s(s^2-2s+5)} = \frac{1}{s} + \frac{-(s-1)+2(2)}{(s-1)^2+2^2} = \frac{1}{s} + \frac{5-s}{s^2-2s+5}
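Again the expansion can be spot-checked with exact rational arithmetic (a check added here, not in the exam):

```python
from fractions import Fraction as Fr

def Y(s):
    return (3 * s + 5) / (s * (s**2 - 2 * s + 5))

def pf(s):
    # Claimed expansion: 1/s + (5 - s)/(s^2 - 2s + 5)
    return 1 / s + (5 - s) / (s**2 - 2 * s + 5)

# s^2 - 2s + 5 has no rational roots, so any nonzero rational s is safe
for s in [Fr(2), Fr(3), Fr(-1), Fr(1, 3)]:
    assert Y(s) == pf(s)
print("partial fractions verified")
```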
8 (20 pts)
(a)(15 pts)
\begin{pmatrix} 1 & -2 & 3 \\ -1 & 1 & -2 \\ 2 & -1 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 7 \\ -5 \\ 4 \end{pmatrix}
Given the matrix equation Ax = b above, use augmented matrices and elementary row operations to
simultaneously determine A^{-1} and x.
(b)(5 pts) Compute the matrix product shown below.
\begin{pmatrix} 1 & -2 & 3 \\ -1 & 1 & -2 \\ 2 & -1 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}
(A|I|b) =
  R1 [  1  -2   3 |  1  0  0 |  7 ]
  R2 [ -1   1  -2 |  0  1  0 | -5 ]
  R3 [  2  -1  -1 |  0  0  1 |  4 ]

→ R1      [  1  -2   3 |  1  0  0 |  7 ]
  R1+R2   [  0  -1   1 |  1  1  0 |  2 ]
  2R2+R3  [  0   1  -5 |  0  2  1 | -6 ]

→ R1      [  1  -2   3 |  1  0  0 |  7 ]
  R2      [  0  -1   1 |  1  1  0 |  2 ]
  R2+R3   [  0   0  -4 |  1  3  1 | -4 ]

→ 4R1+3R3 [  4  -8   0 |  7  9  3 | 16 ]
  4R2+R3  [  0  -4   0 |  5  7  1 |  4 ]
  R3      [  0   0  -4 |  1  3  1 | -4 ]

→ R1-2R2  [  4   0   0 | -3 -5  1 |  8 ]
  R2      [  0  -4   0 |  5  7  1 |  4 ]
  R3      [  0   0  -4 |  1  3  1 | -4 ]

→ R1/4    [  1   0   0 | -3/4 -5/4  1/4 |  2 ]
  -R2/4   [  0   1   0 | -5/4 -7/4 -1/4 | -1 ]   = (I | A^{-1} | x)
  -R3/4   [  0   0   1 | -1/4 -3/4 -1/4 |  1 ]
\begin{pmatrix} 1 & -2 & 3 \\ -1 & 1 & -2 \\ 2 & -1 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x - 2y + 3z \\ -x + y - 2z \\ 2x - y - z \end{pmatrix}
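The inverse and solution read off the final augmented matrix can be verified exactly (a check added here, not in the exam):

```python
from fractions import Fraction as Fr

A = [[1, -2, 3], [-1, 1, -2], [2, -1, -1]]
b = [7, -5, 4]
# From the final augmented matrix (I | A^-1 | x)
Ainv = [[Fr(-3, 4), Fr(-5, 4), Fr(1, 4)],
        [Fr(-5, 4), Fr(-7, 4), Fr(-1, 4)],
        [Fr(-1, 4), Fr(-3, 4), Fr(-1, 4)]]
x = [2, -1, 1]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

I = [[Fr(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(A, Ainv) == I
assert matmul(Ainv, A) == I
assert [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)] == b
print("inverse and solution verified")
```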
9 (20 pts)
x_1 = w, \quad x_1' = w' = x_2
x_2 = w', \quad x_2' = w'' = x_3
|A - rI| = \begin{vmatrix} 5-r & -1 \\ 3 & 1-r \end{vmatrix} = (5-r)(1-r) + 3 = r^2 - 6r + 8 = (r-2)(r-4) = 0 \;\Rightarrow\; r = 2, 4
r_1 = 2: \quad (A - 2I)u_1 = \begin{pmatrix} 3 & -1 \\ 3 & -1 \end{pmatrix} u_1 = 0 \;\Rightarrow\; u_1 = \begin{pmatrix} 1 \\ 3 \end{pmatrix}
r_2 = 4: \quad (A - 4I)u_2 = \begin{pmatrix} 1 & -1 \\ 3 & -3 \end{pmatrix} u_2 = 0 \;\Rightarrow\; u_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}
x(t) = c_1 e^{2t} \begin{pmatrix} 1 \\ 3 \end{pmatrix} + c_2 e^{4t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
x(0) = c_1 \begin{pmatrix} 1 \\ 3 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}

[ 1  1 |  2 ]  →  R1     [ 1 1 | 2 ]  →  2R1-R2 [ 2 0 | -3 ]
[ 3  1 | -1 ]     3R1-R2 [ 0 2 | 7 ]     R2     [ 0 2 |  7 ]

\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} -3/2 \\ 7/2 \end{pmatrix}

x(t) = -\frac{3}{2}\,e^{2t}\begin{pmatrix} 1 \\ 3 \end{pmatrix} + \frac{7}{2}\,e^{4t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} -3e^{2t}+7e^{4t} \\ -9e^{2t}+7e^{4t} \end{pmatrix}
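With A = [[5, -1], [3, 1]] as in the characteristic-polynomial step above, the closed-form solution can be checked against x' = Ax and the initial condition (a verification added here, not in the exam):

```python
import math

def x1(t):
    return (-3 * math.exp(2 * t) + 7 * math.exp(4 * t)) / 2

def x2(t):
    return (-9 * math.exp(2 * t) + 7 * math.exp(4 * t)) / 2

def x1p(t):  # exact derivative of x1
    return (-6 * math.exp(2 * t) + 28 * math.exp(4 * t)) / 2

def x2p(t):  # exact derivative of x2
    return (-18 * math.exp(2 * t) + 28 * math.exp(4 * t)) / 2

assert x1(0) == 2 and x2(0) == -1
for t in [0.0, 0.5, 1.0]:
    # x' = Ax with A = [[5, -1], [3, 1]]
    assert abs(x1p(t) - (5 * x1(t) - x2(t))) < 1e-8
    assert abs(x2p(t) - (3 * x1(t) + x2(t))) < 1e-8
print("eigenvalue solution verified")
```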
Bonus (20 pts)
(a)(15 pts) Use \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2} to show that
L\{t^{-1/2}\}(s) = \sqrt{\frac{\pi}{s}}\,, \qquad s > 0
(b)(5 pts) Use Part (a) to show that
L\{t^{1/2}\}(s) = \frac{\sqrt{\pi}}{2s^{3/2}}\,, \qquad s > 0
(a) Letting t = x^2/s (which immediately implies that s > 0, since t \in [0, +\infty)) gives
L\{t^{-1/2}\}(s) = \lim_{N\to\infty} \int_0^N e^{-st}\,t^{-1/2}\,dt
= \lim_{N\to\infty} \int_0^{\sqrt{sN}} e^{-x^2}\,\frac{\sqrt{s}}{x}\cdot\frac{2x}{s}\,dx, \qquad s > 0
= \frac{2}{\sqrt{s}} \int_0^\infty e^{-x^2}\,dx, \qquad s > 0
= \sqrt{\frac{\pi}{s}}\,, \qquad s > 0
(b)
L\{t f(t)\}(s) = -\frac{dF}{ds}(s)
L\{t^{1/2}\}(s) = -\frac{d}{ds}\left(\sqrt{\pi}\,s^{-1/2}\right), \qquad s > 0
= \frac{1}{2}\,\sqrt{\pi}\,s^{-3/2}, \qquad s > 0
= \frac{\sqrt{\pi}}{2s^{3/2}}\,, \qquad s > 0
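These half-integer transforms are instances of the general formula L\{t^a\}(s) = \Gamma(a+1)/s^{a+1} (a standard fact, not stated in the exam); the bonus results correspond to \Gamma(1/2) = \sqrt{\pi} and \Gamma(3/2) = \sqrt{\pi}/2, which the standard library can confirm:

```python
import math

# L{t^a}(s) = Gamma(a+1)/s^(a+1); the bonus covers a = -1/2 and a = 1/2
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
assert abs(math.gamma(1.5) - math.sqrt(math.pi) / 2) < 1e-12

# e.g. at s = 2: L{t^(-1/2)}(2) = Gamma(1/2)/2^(1/2) = sqrt(pi/2)
s = 2.0
assert abs(math.gamma(0.5) / s**0.5 - math.sqrt(math.pi / s)) < 1e-12
print("gamma identities verified")
```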
MA 232 Differential Equations Fall 1999 Kevin Dempsey
Suppose P (x) and Q(x) are continuous on the interval (a, b) that contains the point x0 .
Then, for any choice of initial value y0 , there exists a unique solution y(x) on (a, b) to the
initial value problem
\frac{dy}{dx} + P(x)\,y = Q(x), \qquad y(x_0) = y_0.
In fact, the solution is given by
y(x) = \frac{1}{\mu(x)}\left\{\int \mu(x)\,Q(x)\,dx + C\right\}, \qquad \mu(x) = e^{\int P(x)\,dx},
where the arbitrary constant C is determined by the initial condition y(x_0) = y_0.
Suppose p(x), q(x) and g(x) are continuous on an interval (a, b) that contains the point x0 .
Then, for any choice of the initial values y0 and y1 , there exists a unique solution y(x) on
the interval (a, b) to the initial value problem
\frac{d^2y}{dx^2} + p(x)\,\frac{dy}{dx} + q(x)\,y = g(x), \qquad y(x_0) = y_0, \quad y'(x_0) = y_1.
Fundamental Solution Sets, Linear Independence, and the Wronskian (Page 165)
If y1 and y2 are solutions of y 00 + py 0 + qy = 0 on (a, b), then the following are equivalent:
1. {y1 , y2 } is a fundamental solution set on (a, b).
2. y1 and y2 are linearly independent on (a, b).
3. W(x) = \begin{vmatrix} y_1(x) & y_2(x) \\ y_1'(x) & y_2'(x) \end{vmatrix} = y_1(x)\,y_2'(x) - y_1'(x)\,y_2(x) \neq 0 for all x in (a, b).
Table 4.1 (Page 197) Method of Undetermined Coefficients for L[y](x) = g(x)
(V)   p_n(x)\cos\beta x + q_m(x)\sin\beta x  →  x^s\{P_N(x)\cos\beta x + Q_N(x)\sin\beta x\},
      where q_m(x) = b_m x^m + \cdots + b_1 x + b_0, \; Q_N(x) = B_N x^N + \cdots + B_1 x + B_0,
      and N = max(n, m)
(VI)  ae^{\alpha x}\cos\beta x + be^{\alpha x}\sin\beta x  →  x^s\{Ae^{\alpha x}\cos\beta x + Be^{\alpha x}\sin\beta x\}
(VII) p_n(x)e^{\alpha x}\cos\beta x + q_m(x)e^{\alpha x}\sin\beta x  →  x^s e^{\alpha x}\{P_N(x)\cos\beta x + Q_N(x)\sin\beta x\},
      where N = max(n, m)
The nonnegative integer s is chosen to be the smallest integer so that no term in the
particular solution y_p(x) is a solution of the corresponding homogeneous equation
L[y](x) = 0.
† P_N(x) must include all its terms even if p_n(x) has some terms that are zero.
If {y1 , y2 } is a fundamental solution set for the homogeneous second order not necessarily
constant coefficient differential equation y 00 + p(x)y 0 + q(x)y = 0, then a particular solution
yp of its nonhomogeneous counterpart y 00 + p(x)y 0 + q(x)y = g(x) is given by
yp (x) = v1 (x)y1 (x) + v2 (x)y2 (x),
where
v_1(x) = \int \frac{-g(x)\,y_2(x)}{W[y_1, y_2](x)}\,dx, \qquad v_2(x) = \int \frac{g(x)\,y_1(x)}{W[y_1, y_2](x)}\,dx.
Here, the Wronskian
W[y_1, y_2](x) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} = y_1 y_2' - y_1' y_2,
being nonzero is a property satisfied by fundamental solutions. Having determined yp , a
general solution of the nonhomogeneous differential equation is given by
y(x) = c_1 y_1(x) + c_2 y_2(x) + y_p(x),
the first two terms of which comprise the general solution of the homogeneous equation.
Miscellaneous
\int u\,dv = uv - \int v\,du \qquad \text{(Integration by Parts)}
\frac{d}{dx}\,\tan^{-1}\frac{x}{a} = \frac{a}{a^2 + x^2}
\int \frac{dx}{a^2 + b^2x^2} = \frac{1}{ab}\,\tan^{-1}\frac{bx}{a}
\int e^{ax}\cos bx\,dx = \frac{e^{ax}}{a^2+b^2}\,(a\cos bx + b\sin bx)
\int e^{ax}\sin bx\,dx = \frac{e^{ax}}{a^2+b^2}\,(a\sin bx - b\cos bx)