This introductory module comprises four lectures. In these four lectures, we introduce
to the readers some basic concepts from multivariable calculus and some essential results
from ordinary differential equations (ODEs). Some geometrical concepts necessary for the
subsequent modules are also discussed.
Module 1 is organised as follows. In Lecture 1, we review some basic definitions and
results from multivariable calculus. In Lecture 2, we discuss some essential formulas for
solving linear first-order and second-order (with constant coefficients only) ODEs. In
addition, we review the basic existence and uniqueness theorems for initial value problems
(IVPs) for ODEs and systems of ODEs. In Lecture 3, we discuss some geometrical concepts
like surfaces, normals and integral curves and surfaces of vector fields. Finally, Lecture 4
is devoted to methods for finding the integral curves of a vector field by solving systems
of ODEs.
Lecture 1
In this lecture, we recall some basic concepts from multivariable calculus. The concepts
of limits, continuity, partial derivatives, directional derivatives, chain rules, tangent planes
and normals are discussed.
For any (x, y), (x0, y0) ∈ R², we write lim_{(x,y)→(x0,y0)} f(x, y) = L if the values
f(x, y) approach L as (x, y) approaches (x0, y0). The function f is continuous at (x0, y0) if

    lim_{(x,y)→(x0,y0)} f(x, y) = f(x0, y0).
The partial derivative of f with respect to x is defined by

    f_x(x, y) := lim_{h→0} [f(x + h, y) − f(x, y)]/h,

and we write

    f_x = ∂f/∂x = ∂z/∂x = z_x,   f_y = ∂f/∂y = ∂z/∂y = z_y.
Partial derivatives can also be defined for functions of three or more variables. In general,
if z is a function of n variables, z = f(x1, x2, . . . , xn), its partial derivative with respect
to the i-th variable xi is

    ∂z/∂xi := lim_{h→0} [f(x1, . . . , xi−1, xi + h, xi+1, . . . , xn) − f(x1, . . . , xi, . . . , xn)]/h.
We also write

    z_{xi} = ∂z/∂xi = ∂f/∂xi = f_{xi}.
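The limit definition above can be illustrated numerically. The following sketch approximates f_x and f_y by difference quotients; the function f and the evaluation point are sample choices of ours, not from the text.

```python
# Numerical illustration of the limit definition of partial derivatives.
# Sample function (our choice): f(x, y) = x**2 * y + y**3,
# so that f_x = 2*x*y and f_y = x**2 + 3*y**2 exactly.
def f(x, y):
    return x**2 * y + y**3

def partial_x(f, x, y, h=1e-6):
    # central difference quotient in the x-variable
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # central difference quotient in the y-variable
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

assert abs(partial_x(f, 1.0, 2.0) - 4.0) < 1e-6    # f_x(1, 2) = 4
assert abs(partial_y(f, 1.0, 2.0) - 13.0) < 1e-6   # f_y(1, 2) = 13
```

Central differences are used instead of the one-sided quotient of the definition only because they converge faster; both tend to the same limit.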
Since the partial derivatives are themselves functions, we can take their partial derivatives
to obtain higher derivatives. If z = f (x, y), we may compute
    f_xx(x, y) = ∂/∂x (∂z/∂x) = ∂²z/∂x²,   f_yy(x, y) = ∂/∂y (∂z/∂y) = ∂²z/∂y²,

    f_xy(x, y) = ∂/∂y (∂z/∂x) = ∂²z/∂y∂x,   f_yx(x, y) = ∂/∂x (∂z/∂y) = ∂²z/∂x∂y.
In general, f_xy ≠ f_yx. However, the following theorem gives a condition under which we can
assert that f_xy = f_yx.

THEOREM 4. Let f : D_r(x0, y0) → R, where D_r(x0, y0) is the disc of radius r centered at
(x0, y0). If f_xy and f_yx are both continuous at (x0, y0), then

    f_xy(x0, y0) = f_yx(x0, y0).
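Theorem 4 can be checked symbolically on any smooth sample function; the function below is our own choice, used only for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x*y) + x**3 * sp.sin(y)   # a smooth sample function (our choice)

fxy = sp.diff(f, x, y)               # differentiate in x, then in y
fyx = sp.diff(f, y, x)               # differentiate in y, then in x
assert sp.simplify(fxy - fyx) == 0   # mixed partials agree for smooth f
```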
DEFINITION 5. (Chain rule) Let z1 = f1(x1, . . . , xn), . . . , zm = fm(x1, . . . , xn) be m
functions of n variables, and let x1 = g1(t1, . . . , tk), . . . , xn = gn(t1, . . . , tk) be n functions
of k variables, all with continuous partial derivatives.
Consider the zi's as functions of the tj's by

    zi = fi(g1(t1, . . . , tk), . . . , gn(t1, . . . , tk)).
Then

    ∂zi/∂tj = (∂zi/∂x1)(∂x1/∂tj) + (∂zi/∂x2)(∂x2/∂tj) + · · · + (∂zi/∂xn)(∂xn/∂tj).
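The chain rule can be verified symbolically on a sample case; the functions x1, x2 and f below are our own choices.

```python
import sympy as sp

t1, t2, u1, u2 = sp.symbols('t1 t2 u1 u2')
# Sample choices (ours): x1, x2 as functions of (t1, t2), and z = f(x1, x2)
x1, x2 = t1*t2, t1 + t2**2
f = u1**2 + sp.sin(u2)

direct = sp.diff(f.subs({u1: x1, u2: x2}), t1)   # substitute first, then differentiate
chain = (sp.diff(f, u1).subs({u1: x1, u2: x2}) * sp.diff(x1, t1)
         + sp.diff(f, u2).subs({u1: x1, u2: x2}) * sp.diff(x2, t1))
assert sp.simplify(direct - chain) == 0          # chain rule holds
```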
DEFINITION 6. If z = f(x, y) is a function of two variables, its gradient vector field ∇f
is defined by

    ∇f(x, y) := (f_x(x, y), f_y(x, y)) = (∂z/∂x, ∂z/∂y).

Similarly, for a function u of three variables,

    ∇u := (∂u/∂x, ∂u/∂y, ∂u/∂z).
If a relation

    F(x, y) = 0 (1)

defines y implicitly as a differentiable function of x, then differentiating with respect to x gives

    (∂F/∂x)(dx/dx) + (∂F/∂y)(dy/dx) = 0, i.e., ∂F/∂x + (∂F/∂y)(dy/dx) = 0,

so that dy/dx = −F_x/F_y wherever F_y ≠ 0.

The directional derivative of f at (x, y) in the direction of a unit vector u = (u1, u2) is defined by

    D_u f(x, y) := lim_{h→0} [f(x + h u1, y + h u2) − f(x, y)]/h. (2)
From (2), one can show that

    D_u f(x, y) = ∇f(x, y) · u.

This expresses the directional derivative in the direction of u as the scalar projection
of the gradient vector onto u. Hence

    D_u f(x, y) = ∇f(x, y) · u = |∇f||u| cos θ = |∇f| cos θ,

where θ is the angle between ∇f and u. The maximum value of cos θ is 1 and this occurs
when θ = 0. Therefore, the maximum value of D_u f(x, y) is |∇f| and it occurs when θ = 0,
i.e., when u has the same direction as ∇f.
Similarly, the directional derivative of a function of three variables in the direction of a unit vector
u = (u1, u2, u3) can be written as

    D_u f(x, y, z) = ∇f(x, y, z) · u.
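The fact that D_u f is largest in the direction of the gradient can be seen numerically; the sample function f(x, y) = x² + y² and the point (1, 2) are our own choices.

```python
import math

def grad_f(x, y):
    # gradient of the sample function f(x, y) = x**2 + y**2
    return (2*x, 2*y)

def directional_derivative(x, y, u):
    # D_u f = grad f . u for a unit vector u
    norm = math.hypot(u[0], u[1])
    ux, uy = u[0]/norm, u[1]/norm
    gx, gy = grad_f(x, y)
    return gx*ux + gy*uy

gx, gy = grad_f(1.0, 2.0)
grad_norm = math.hypot(gx, gy)
# Along the gradient direction the directional derivative equals |grad f| ...
assert abs(directional_derivative(1.0, 2.0, (gx, gy)) - grad_norm) < 1e-12
# ... and in any other direction it is no larger:
assert directional_derivative(1.0, 2.0, (1.0, 0.0)) <= grad_norm
```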
We now introduce the concept of differentiability for functions of several variables; let us
first recall the definition of differentiability in the one-variable case.
Let D be an open subset of R. The function f : D → R is said to be differentiable at
x0 ∈ D if

    lim_{x→x0} [f(x) − f(x0)]/(x − x0)

exists. The value of this limit is called the derivative of f at x0 and is denoted by f′(x0).
The above definition may be restated as follows: the function f : D → R is differentiable at x0 ∈ D if there is a number f′(x0) such that

    lim_{x→x0} [f(x) − f(x0) − f′(x0)(x − x0)]/(x − x0) = 0. (3)

It is this formulation that generalizes to functions of several variables.
For example, let f(x1, x2) = (f1, f2, f3) = (x1², x1x2, x2²). Then

    ∂f1/∂x1 = 2x1, ∂f2/∂x1 = x2, ∂f3/∂x1 = 0,
    ∂f1/∂x2 = 0, ∂f2/∂x2 = x1, ∂f3/∂x2 = 2x2.

Therefore, the Jacobian matrix of f is

    Jf(x1, x2) = [ 2x1   0  ]
                 [ x2    x1 ]
                 [ 0    2x2 ].
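The same Jacobian can be computed with sympy:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = sp.Matrix([x1**2, x1*x2, x2**2])       # f = (f1, f2, f3) from the example
J = F.jacobian([x1, x2])
assert J == sp.Matrix([[2*x1, 0], [x2, x1], [0, 2*x2]])
```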
Practice Problems

1. Show that lim_{(x,y)→(0,0)} …

2. Using the ε–δ definition, prove that f(x, y) = |x| is continuous at (0, 0).

3. Let

    f(x, y) = { xy(x² − y²)/(x² + y²), (x, y) ≠ (0, 0);
                0,                      (x, y) = (0, 0). }
Lecture 2
In this lecture, we recall some methods of solving first-order IVPs in ODEs (separable and
linear) and homogeneous second-order linear ODEs with constant coefficients. These results will be useful while solving linear homogeneous PDEs using the variables separable
method in the subsequent modules (cf. Module 5, Module 6 and Module 7). Some fundamental results on existence, uniqueness and continuous dependence of solutions on given
data will also be discussed.
First-Order ODEs: A first-order ODE is separable if it can be written in the form

    f(y) dy/dx = g(x). (1)

Integrating both sides with respect to x,

    ∫ f(y) (dy/dx) dx = ∫ g(x) dx  ⟹  ∫ f(y) dy = ∫ g(x) dx,

from which we find solutions y(x).
Consider a first-order linear nonhomogeneous ODE in the standard form

    y′(x) + p(x) y(x) = q(x).

When q(x) = 0, the resulting equation

    y′(x) + p(x) y(x) = 0

is called the homogeneous equation, which can be put in the separable form

    dy/y = −p(x) dx  (y ≠ 0),

with general solution

    y(x) = C exp(−∫ p(x) dx). (2)
For q(x) ≠ 0, multiplying by the integrating factor μ(x) = exp(∫ p(x) dx) gives the general solution

    y(x) = (1/μ(x)) { ∫ μ(x) q(x) dx + C1 }. (3)

As an example, consider

    y′ + [2x/(1 + x²)] y = 5x⁴/(1 + x²). (4)

Here the integrating factor is

    μ(x) = exp[ ∫ {2x/(1 + x²)} dx ] = exp[log(1 + x²)] = 1 + x²,

and (3) gives y(x) = (x⁵ + C1)/(1 + x²).
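This integrating-factor computation can be reproduced with sympy. The sketch below uses equation (4) as reconstructed above; the check is symbolic.

```python
import sympy as sp

x = sp.symbols('x', real=True)
C1 = sp.Symbol('C1')

p = 2*x/(1 + x**2)
q = 5*x**4/(1 + x**2)

# Integrating factor mu = exp(∫ p dx) = 1 + x**2
mu = sp.exp(sp.integrate(p, x))
assert sp.simplify(mu - (1 + x**2)) == 0

# General solution y = (∫ mu*q dx + C1)/mu = (x**5 + C1)/(1 + x**2)
sol = (sp.integrate(mu*q, x) + C1)/mu
assert sp.simplify(sol.diff(x) + p*sol - q) == 0   # sol satisfies (4)
```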
Second-Order ODEs: Consider the second-order linear homogeneous ODE with constant coefficients

    a y″(x) + b y′(x) + c y(x) = 0, (5)

where the coefficients a, b, and c are real constants with a ≠ 0. Let m1 and m2 be the
roots of the associated auxiliary equation

    a m² + b m + c = 0.
If m1 and m2 are real and distinct (b² − 4ac > 0), then the general solution of (5) is

    y(x) = c1 e^{m1 x} + c2 e^{m2 x}.

If m1 = m2 = m (b² − 4ac = 0), then the general solution of (5) is

    y(x) = c3 e^{mx} + c4 x e^{mx}.

If m1 = α + iβ and m2 = α − iβ (b² − 4ac < 0), then the general solution of (5) is

    y(x) = e^{αx} [c5 cos(βx) + c6 sin(βx)].

Here, ci, i = 1, 2, 3, 4, 5, 6, are arbitrary constants.
On Existence and Uniqueness of IVPs: Consider the following IVPs:

    |y′| + 2|y| = 0, y(0) = 1. (6)
    y′(x) = x, y(0) = 1. (7)
    x y′ = y − 1, y(0) = 1. (8)

Note that the IVP (6) has no solution, the problem (7) has precisely one solution, namely
y = x²/2 + 1, and the problem (8) has infinitely many solutions, namely y = 1 + cx, where
c is an arbitrary constant. From the above three IVPs, we observe that an IVP

    y′(x) = f(x, y), y(x0) = y0

may have none, precisely one, or more than one solution. This leads to the following
fundamental results.
THEOREM 2. (Existence) Let R : |x − x0| < a, |y − y0| < b be a rectangle. If f(x, y)
is continuous and bounded in R, i.e., there is a number K such that

    |f(x, y)| ≤ K ∀(x, y) ∈ R,

then the IVP

    y′(x) = f(x, y), y(x0) = y0 (9)

has at least one solution y(x). This solution is defined for all x in the interval

    |x − x0| < α, where α = min{a, b/K}.
THEOREM 3. (Uniqueness) Let R : |x − x0| < a, |y − y0| < b be a rectangle. If f and ∂f/∂y
are continuous and bounded in R, i.e., there exist two numbers K and M such that

    |f(x, y)| ≤ K ∀(x, y) ∈ R, (10)
    |∂f/∂y (x, y)| ≤ M ∀(x, y) ∈ R, (11)

then the IVP (9) has a unique solution y(x). This solution is defined for all x in the
interval

    |x − x0| < α, where α = min{a, b/K}.
EXAMPLE 4. Let R : |x| < 5, |y| < 3 be the rectangle. Consider the IVP

    y′ = 1 + y², y(0) = 0

over R. Here, a = 5, b = 3. Then

    max_{(x,y)∈R} |f(x, y)| = max_{(x,y)∈R} |1 + y²| ≤ 10 (= K),
    max_{(x,y)∈R} |∂f/∂y| = max_{(x,y)∈R} 2|y| ≤ 6 (= M),

and

    α = min{a, b/K} = min{5, 3/10} = 0.3 < 5.

Note that the solution of the IVP is y = tan x. This solution is valid in the interval
|x| < 0.3 instead of the entire interval |x| < 5. It is easy to check that the solution y = tan x
is discontinuous at x = ±π/2, and hence there is no continuous solution valid in the entire
interval |x| < 5.
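A quick numerical sketch (a hand-rolled RK4 step, so no external solver is assumed) confirms that the computed solution tracks tan x inside the guaranteed interval |x| < 0.3:

```python
import math

def rk4(f, x0, y0, x_end, n=1000):
    """Classical 4th-order Runge-Kutta for y' = f(x, y)."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

f = lambda x, y: 1 + y**2          # right-hand side of the IVP in Example 4
# Inside |x| < 0.3 the numerical solution agrees with the exact y = tan x:
assert abs(rk4(f, 0.0, 0.0, 0.3) - math.tan(0.3)) < 1e-9
```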
The conditions in Theorem 3 are sufficient conditions rather than necessary ones, and
can be weakened. By the mean value theorem of differential calculus, we have

    f(x, y2) − f(x, y1) = (y2 − y1) ∂f/∂y (x, ỹ),

where (x, y1), (x, y2) ∈ R and ỹ lies between y1 and y2. In view of the condition

    |∂f/∂y (x, y)| ≤ M ∀(x, y) ∈ R,

it follows that

    |f(x, y2) − f(x, y1)| ≤ M |y2 − y1|, (12)

which is known as a Lipschitz condition. Thus, the condition (11) can be weakened to
obtain the following existence and uniqueness result.
THEOREM 5. (Picard's Theorem) Let R : |x − x0| < a, |y − y0| < b be a rectangle.
Let f(x, y) be continuous and bounded in R, i.e., there exists a number K such that

    |f(x, y)| ≤ K ∀(x, y) ∈ R.

Further, let f satisfy the Lipschitz condition with respect to y in R, i.e., there exists a
number M such that

    |f(x, y2) − f(x, y1)| ≤ M |y2 − y1| ∀(x, y1), (x, y2) ∈ R. (13)

Then the IVP (9) has a unique solution y(x). This solution is defined for all x in the
interval

    |x − x0| < α, where α = min{a, b/K}.
Note that the continuity of f is not enough to guarantee the uniqueness of the solution,
as can be seen from the following example.

EXAMPLE 6. (Nonuniqueness) Consider the IVP:

    y′ = √|y|, y(0) = 0.

Note that y ≡ 0 and

    y = { x²/4,  x ≥ 0;
          −x²/4, x < 0 }

are two solutions of the given IVP. The uniqueness fails because the Lipschitz condition
(13) is violated in any region which includes the line y = 0. With y1 = 0 and y2 > 0, we
note that

    |f(x, y2) − f(x, y1)| / |y2 − y1| = |f(x, y2) − f(x, 0)| / |y2| = √y2 / y2 = 1/√y2,

which becomes arbitrarily large as y2 → 0, so no Lipschitz constant M can work.
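Both candidate solutions can be checked symbolically; the sketch below treats the branch x > 0 (the branch x < 0 is analogous).

```python
import sympy as sp

x = sp.symbols('x', positive=True)        # check the branch x > 0
for y in (sp.Integer(0), x**2/4):         # two distinct solutions of the IVP
    # each satisfies y' = sqrt(|y|) (with y(0) = 0 at the endpoint)
    assert sp.simplify(sp.diff(y, x) - sp.sqrt(sp.Abs(y))) == 0
```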
These results extend to systems of ODEs. Consider the IVP

    y′i(x) = fi(x, y1, . . . , yn), i = 1, . . . , n, (14)
    yi(x0) = y⁰i, i = 1, . . . , n, (15)

where the functions fi are defined in a region Q containing the initial point.
Let each of the functions f1, . . . , fn be continuous and bounded in Q, and satisfy the following Lipschitz condition with respect to the variables y1, y2, . . . , yn: there exist
constants L1, . . . , Ln such that

    |fi(x, y1¹, . . . , yn¹) − fi(x, y1², . . . , yn²)| ≤ L1 |y1¹ − y1²| + · · · + Ln |yn¹ − yn²|

for all pairs of points (x, y1¹, . . . , yn¹), (x, y1², . . . , yn²) ∈ Q. Then there exists a unique set of
functions y1(x), . . . , yn(x), defined for x in some interval |x − x0| < h, 0 < h < a, such
that y1(x), . . . , yn(x) solve (14)-(15).
Practice Problems
1. Determine whether the given differential equation is separable:
(a) dy/dx = y e^{x+y}/(x² + y); (b) x dy/dx = 1 + y²; (c) dy/dx = sin(x + y).
2. Solve the following first-order linear equations subject to the given conditions:
(a) dy/dx − y/x = 1, y(2) = 1; (b) 4 dy/dx + 3xy = 5, y(0) = 1; (c) sin x dy/dx + y cos x = x sin x, y(π/2) = 2.
3. Find the general solution of the following second-order homogeneous linear ODEs:
(a) d²y/dx² + 4 dy/dx + 5y = 0; (b) d²y/dx² − dy/dx = 0; (c) d²y/dx² − 2 dy/dx + 4y = 0.
4. Does f(x, y) = |x| + |y| satisfy a Lipschitz condition in the xy-plane? Does ∂f/∂y
exist?

5. Find all solutions of the IVP dy/dx = 2√y, y(1) = 0.
Lecture 3
In Lecture 3, we recall some geometrical concepts that are essential for understanding
the nature of solutions of partial differential equations to be discussed in the subsequent
lectures.
Surface: A surface is the locus of a point moving in space with two degrees of freedom.
Generally, we use implicit and explicit representations for describing such a locus by
mathematical formulas.
In the implicit representation we describe a surface as a set
S = {(x, y, z) | F (x, y, z) = 0},
i.e., a set of points (x, y, z) satisfying an equation of the form F (x, y, z) = 0.
Sometimes we can solve such an equation for one of the coordinates in terms of the
other two, say for z in terms of x and y. When this is possible we obtain an explicit
representation of the form z = f (x, y).
EXAMPLE 1. A sphere of radius 1 and center at the origin has the implicit representation

    x² + y² + z² − 1 = 0.

When this equation is solved for z, it leads to two solutions:

    z = √(1 − x² − y²) and z = −√(1 − x² − y²).

The first equation gives an explicit representation of the upper hemisphere and the
second of the lower hemisphere.
We now describe here a class of surfaces more general than surfaces obtained as graphs
of functions. For simplicity, we restrict the discussion to the case of three dimensions.
Let Ω ⊆ R³ and let F(x, y, z) ∈ C¹(Ω), where C¹(Ω) := {F(x, y, z) ∈ C(Ω) :
Fx, Fy, Fz ∈ C(Ω)}. The gradient of F, denoted by ∇F, is the vector-valued
function defined by

    ∇F = (∂F/∂x, ∂F/∂y, ∂F/∂z).

One can visualize ∇F as a field of vectors (a vector field), with one vector, ∇F, emanating
from each point (x, y, z) ∈ Ω. Assume that

    ∇F(x, y, z) ≠ (0, 0, 0) ∀(x, y, z) ∈ Ω. (1)

This means that the partial derivatives of F do not vanish simultaneously at any point of
Ω.
DEFINITION 2. (Level surface) The set

    Sc = {(x, y, z) | (x, y, z) ∈ Ω and F(x, y, z) = c},

for some appropriate value of the constant c, is a surface in Ω. This surface is called a
level surface of F.

Note: When Ω ⊆ R², the set Sc = {(x, y) | (x, y) ∈ Ω and F(x, y) = c} is called a
level curve in Ω.

Let (x0, y0, z0) ∈ Ω and set c = F(x0, y0, z0). The equation

    F(x, y, z) = c (2)

represents a surface in Ω passing through the point (x0, y0, z0). For different values of c,
(2) represents different surfaces in Ω. Each point of Ω lies on exactly one level surface of
F. Any two points (x0, y0, z0) and (x1, y1, z1) of Ω lie on the same level surface if and only
if

    F(x0, y0, z0) = F(x1, y1, z1).

Thus, one may visualize Ω as being laminated by the level surfaces of F. The equation
(2) represents a one-parameter family of surfaces in Ω.
EXAMPLE 3. Take Ω = R³ \ {(0, 0, 0)} and let F(x, y, z) = x² + y² + z². Then

    ∇F(x, y, z) = (2x, 2y, 2z).

Note that the condition (1) is satisfied ∀(x, y, z) ∈ Ω. The level surfaces of F are spheres
with center at the origin.

EXAMPLE 4. Take Ω = R³ and let F(x, y, z) = z. Then ∇F(x, y, z) = (0, 0, 1). The condition (1) is satisfied
at every point of Ω. The level surfaces are planes parallel to the (x, y)-plane.
Consider the surface given by the equation (2) and let the point (x0, y0, z0) lie on this
surface. We now ask the following question: Is it possible to describe Sc by an equation
of the form

    z = f(x, y)? (3)

By the implicit function theorem, this is possible near any point where Fz ≠ 0. For example, consider the unit sphere

    F(x, y, z) = x² + y² + z² − 1 = 0. (4)

Note that the point (0, 0, 1) lies on this surface and Fz(0, 0, 1) = 2. By the implicit
function theorem, we can solve (4) for z near the point (0, 0, 1). In fact, we have

    z = +√(1 − x² − y²), x² + y² < 1. (5)

In the upper half space z > 0, (4) and (5) describe the same surface.
The point (0, 0, −1) is also on the surface (4) and Fz(0, 0, −1) = −2. Near (0, 0, −1),
we have

    z = −√(1 − x² − y²), x² + y² < 1. (6)

In the lower half space z < 0, (4) and (6) represent the same surface.
On the other hand, at the point (1, 0, 0), we have Fz(1, 0, 0) = 0. Clearly, it is not
possible to solve (4) for z in terms of x and y near this point.
Note that the set of points satisfying the pair of equations

    F(x, y, z) = c1, G(x, y, z) = c2 (7)

must lie on the intersection of these two surfaces. If ∇F and ∇G are not collinear at any
point of the domain Ω where both F and G are defined, i.e.,

    ∇F(x, y, z) × ∇G(x, y, z) ≠ 0 ∀(x, y, z) ∈ Ω, (8)

then the intersection of the two surfaces given by (7) is always a curve. Since

    ∇F × ∇G = ( ∂(F, G)/∂(y, z), ∂(F, G)/∂(z, x), ∂(F, G)/∂(x, y) ), (9)

where, for instance, ∂(F, G)/∂(y, z) = Fy Gz − Fz Gy, the condition (8) means that at every point of Ω at least one of the Jacobians on the right
side of (9) is different from zero.
EXAMPLE 7. Let

    F(x, y, z) = x² + y² − z, G(x, y, z) = z.

Note that ∇F = (2x, 2y, −1) and ∇G = (0, 0, 1). It is easy to see that if Ω = R³ with
the z-axis removed, then the condition (8) is satisfied in Ω. The pair of equations

    x² + y² − z = 0, z = 1

represents a circle which is the intersection of the paraboloidal surface represented by the
first equation and the plane represented by the second equation.
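The cross-product condition (8) for this example is easy to verify with sympy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x**2 + y**2 - z                                   # the paraboloid
G = z                                                 # the plane
gradF = sp.Matrix([sp.diff(F, v) for v in (x, y, z)])
gradG = sp.Matrix([sp.diff(G, v) for v in (x, y, z)])
cross = gradF.cross(gradG)
# The cross product vanishes only where x = y = 0, i.e., on the z-axis
assert list(cross) == [2*y, -2*x, 0]
```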
Systems of Surfaces: A one-parameter system of surfaces is represented by an equation
of the form

    f(x, y, z, c) = 0. (10)

Two members of the system, corresponding to the parameter values c and c + δc, intersect in a curve along which, in addition to (10),

    lim_{δc→0} [f(x, y, z, c + δc) − f(x, y, z, c)]/δc = ∂f/∂c (x, y, z, c) = 0. (11)

The limiting curve, given by the pair of equations

    f(x, y, z, c) = 0, ∂f/∂c (x, y, z, c) = 0, (12)

is called the characteristic curve of the system, and the surface obtained by eliminating c
from the two equations in (12) is called the envelope of the one-parameter system (10).

For example, for the family of unit spheres centered on the z-axis,

    f(x, y, z, c) = x² + y² + (z − c)² − 1 = 0,

the equations f = 0 and ∂f/∂c = 0, i.e., z = c,
describe the characteristic curve. Eliminating the parameter c, the envelope
of this family is the cylinder

    x² + y² = 1.
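The elimination of c for this family can be reproduced with sympy:

```python
import sympy as sp

x, y, z, c = sp.symbols('x y z c')
f = x**2 + y**2 + (z - c)**2 - 1       # unit spheres centered on the z-axis
fc = sp.diff(f, c)                     # ∂f/∂c = -2(z - c)
c_val = sp.solve(fc, c)[0]             # characteristic: z = c
envelope = f.subs(c, c_val)            # eliminate the parameter c
assert sp.simplify(envelope - (x**2 + y**2 - 1)) == 0   # the cylinder x² + y² = 1
```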
Now consider the two-parameter system of surfaces defined by the equation

    f(x, y, z, c, d) = 0. (13)

For fixed values of the parameters, the equations

    f(x, y, z, c, d) = 0, ∂f/∂c (x, y, z, c, d) = 0, ∂f/∂d (x, y, z, c, d) = 0

determine a point. This point is called the characteristic point of the two-parameter system (13). As the
parameters c and d vary, this point generates a surface which is called the envelope of the
surfaces (13).

DEFINITION 11. (Envelope of a two-parameter system)
The surface obtained by eliminating c and d from the equations

    f(x, y, z, c, d) = 0, ∂f/∂c (x, y, z, c, d) = 0, ∂f/∂d (x, y, z, c, d) = 0

is called the envelope of the two-parameter system (13).
Integral Curves of Vector Fields: Let V(x, y, z) = (P(x, y, z), Q(x, y, z), R(x, y, z))
be a vector field defined in some domain Ω ⊆ R³ satisfying the following two conditions:

(i) V ≠ 0 in Ω, i.e., the component functions P, Q and R of V do not vanish simultaneously at any point of Ω;
(ii) P, Q, R ∈ C¹(Ω).

DEFINITION 13. A curve C in Ω is an integral curve of the vector field V if V is tangent
to C at each of its points.

EXAMPLE 14. 1. The integral curves of the constant vector field V = (1, 0, 0) are lines
parallel to the x-axis (see Fig. 1.1).
2. The integral curves of V = (−y, x, 0) are circles parallel to the (x, y)-plane and
centered on the z-axis (see Fig. 1.1).
Integral curves of V may be found by solving the associated system of ODEs

    dx/dt = P(x, y, z), dy/dt = Q(x, y, z), dz/dt = R(x, y, z). (14)
A solution (x(t), y(t), z(t)) of the system (14), defined for t in some interval I, may be
regarded as a curve in Ω. We call this curve a solution curve of the system (14). Every
solution curve of the system (14) is an integral curve of the vector field V. Conversely, if
C is an integral curve of V, then there is a parametric representation

    x = x(t), y = y(t), z = z(t); t ∈ I,

of C such that (x(t), y(t), z(t)) is a solution of the system of equations (14). Thus, every
integral curve of V, if parametrized appropriately, is a solution curve of the associated
system of equations (14).
It is customary to write the system (14) in the form

    dx/P = dy/Q = dz/R. (15)
EXAMPLE 16. The systems associated with the vector fields V = (x, y, z) and V =
(−y, x, 0), respectively, are

    dx/x = dy/y = dz/z, (16)

    dx/(−y) = dy/x = dz/0. (17)

Note that the zero which appears in the denominator of (17) should not be disturbing. It
simply means that dz/dx = dz/dy = dz/dt = 0.
Before we discuss the method of solution of (15), let us introduce some basic definitions and facts (cf. [11]).

DEFINITION 17. Two functions u(x, y, z), v(x, y, z) ∈ C¹(Ω) are functionally independent
in Ω ⊆ R³ if

    ∇u(x, y, z) × ∇v(x, y, z) ≠ 0 ∀(x, y, z) ∈ Ω. (18)

Geometrically, condition (18) means that ∇u and ∇v are not parallel at any point of
Ω.

DEFINITION 18. A function u ∈ C¹(Ω) is called a first integral of the vector field V =
(P, Q, R) (or of its associated system (15)) in Ω if at each point of Ω, V is orthogonal to
∇u, i.e.,

    V · ∇u = 0  ⟺  P ∂u/∂x + Q ∂u/∂y + R ∂u/∂z = 0 in Ω.
THEOREM 19. Let u1 and u2 be any two functionally independent first integrals of V in
Ω. Then the equations

    u1(x, y, z) = c1, u2(x, y, z) = c2 (19)

describe the collection of all integral curves of V in Ω.
Observe that if u is a first integral and w = f(u) for a differentiable function f, then w is also a first integral, since

    P ∂w/∂x + Q ∂w/∂y + R ∂w/∂z = P f′(u) ∂u/∂x + Q f′(u) ∂u/∂y + R f′(u) ∂u/∂z
                                 = f′(u) ( P ∂u/∂x + Q ∂u/∂y + R ∂u/∂z ) = 0.

For the field V = (1, 0, 0), the functions u1(x, y, z) = y and u2(x, y, z) = z
are two solutions which are functionally independent. The integral curves of V are described by the equations

    y = c1, z = c2,

which are lines parallel to the x-axis. Similarly, for V = (−y, x, 0),

    u1(x, y, z) = x² + y² and u2(x, y, z) = z (20)

are two functionally independent first integrals of V. Therefore, the integral curves of V
in Ω are given by

    x² + y² = c1, z = c2. (21)

The above equations describe circles parallel to the (x, y)-plane and centered on the z-axis
(see the second figure of Fig. 1.1).
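That u1 = x² + y² and u2 = z are first integrals of V = (−y, x, 0) can be verified symbolically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = -y, x, 0                      # the field V = (-y, x, 0)
for u in (x**2 + y**2, z):              # candidate first integrals from (20)
    ux, uy, uz = (sp.diff(u, v) for v in (x, y, z))
    assert sp.simplify(P*ux + Q*uy + R*uz) == 0   # V . grad u = 0
```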
Practice Problems 3

1. Find a vector V(x, y, z) normal to the surface z = √(x² + y²) + (x² + y²)^{3/2}.

2. If ∇f(x, y, z) is always parallel to the vector (x, y, z), show that f must assume equal
values at the points (0, 0, a) and (0, 0, −a).

3. Find ∇F, where F(x, y, z) = z² − x² − y². Find the largest set in which grad F does
not vanish.

4. Find a vector normal to the surface z² − x² − y² = 0 at the point (1, 0, 1).

5. If possible, solve the equation z² − x² − y² = 0 in terms of x, y near the following
points: …

6. Find the integral curves of the vector fields: (a) …; (b) V = …; (c) V = (2, 3y², 0).

7. … d/dt {u(x(t), y(t), z(t))} …

8. If V is the vector field given by V = (x, y, z) and Ω is the octant x > 0, y > 0,
z > 0, show that u1(x, y, z) = y/x and u2(x, y, z) = z/x are two first integrals of V in Ω.
Lecture 4
In the previous lecture, we have seen that the integral curves of the set of differential
equations

    dx/P = dy/Q = dz/R (1)

form a two-parameter family of curves: if u1 and u2 are two functionally independent first
integrals and

    u1(x, y, z) = c1, u2(x, y, z) = c2, (2)

then varying c1 and c2 we get a two-parameter family of curves satisfying (1). In this
lecture, we shall describe methods for finding integral curves of the set of differential
equations (1).
Method I: Along any tangential direction through a point (x, y, z) to the surface u1(x, y, z) = c1,
we have

    (∂u1/∂x) dx + (∂u1/∂y) dy + (∂u1/∂z) dz = 0. (3)

If u1(x, y, z) = c1 is a suitable one-parameter system of surfaces, then the tangential direction to the integral curve through the point (x, y, z) is also a tangential direction to this
surface. Hence

    (P, Q, R) · ∇u1 = 0, i.e., P ∂u1/∂x + Q ∂u1/∂y + R ∂u1/∂z = 0.

To find u1 (and, similarly, u2), we therefore look for three functions P1, Q1, R1 such that

    P P1 + Q Q1 + R R1 = 0 (4)

and such that there exists a function u1 with

    P1 = ∂u1/∂x, Q1 = ∂u1/∂y, R1 = ∂u1/∂z. (5)
REMARK 1. The method described above for finding solutions of (1) is by inspection. A
good deal of intuition is required to determine the forms of the functions P1, Q1 and R1
(cf. [10]).
EXAMPLE 2. Find the integral curves of the equations

    dx/(y(x + y)) = dy/(x(x + y)) = dz/(z(x + y)). (6)

Here P = y(x + y), Q = x(x + y), R = z(x + y). If we choose

    P1 = 1/z, Q1 = 1/z, R1 = −(x + y)/z²,

then the condition P P1 + Q Q1 + R R1 = 0 is satisfied. The function u1(x, y, z) is then determined from (5); indeed,

    u1(x, y, z) = (x + y)/z

satisfies ∂u1/∂x = 1/z, ∂u1/∂y = 1/z and ∂u1/∂z = −(x + y)/z², so that

    (x + y)/z = c

along the integral curves. Similarly, choose P1 = x, Q1 = −y and R1 = 0 and verify that the condition (4) is
satisfied. The function u2 is then determined as

    u2(x, y, z) = (1/2)(x² − y²).

Thus, the integral curves of the differential equations (6) are the members of the two-parameter family of curves

    x + y = c1 z, x² − y² = c2.
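Both first integrals found in Example 2 can be checked against the field (P, Q, R):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = y*(x + y), x*(x + y), z*(x + y)     # the field from (6)
for u in ((x + y)/z, (x**2 - y**2)/2):        # u1 and u2
    ux, uy, uz = (sp.diff(u, v) for v in (x, y, z))
    assert sp.simplify(P*ux + Q*uy + R*uz) == 0   # first integral: V . grad u = 0
```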
Method II: Suppose that we are able to find three functions P1, Q1 and R1 such that
each of the ratios in (1) equals

    (P1 dx + Q1 dy + R1 dz) / (P P1 + Q Q1 + R R1), (7)

and that the numerator is an exact differential,

    P1 dx + Q1 dy + R1 dz = dW1. (8)

In particular, if P1, Q1, R1 can be chosen so that P P1 + Q Q1 + R R1 = 0, then dW1 = 0
along the integral curves, and W1(x, y, z) = c1 is a first integral of (1).

As an example, consider the equations

    dx/(y − x) = dy/(x + y) = dz/((x² + y²)/z). (9)

Observe that P + Q = 2y. Hence

    dy/(x + y) = (dx + dy)/(P + Q) = (dx + dy)/(2y),

which gives

    (x + y)(dx + dy) = 2y dy, i.e., (1/2) d{(x + y)²} = 2y dy.

Integrating, we obtain the first integral

    (x + y)²/2 − y² = c1.

Similarly, with P1 = x, Q1 = −y, R1 = z we have P P1 + Q Q1 + R R1 = 0, and
x dx − y dy + z dz = (1/2) d(x² − y² + z²) is exact, so that

    x² − y² + z² = c2

is a second first integral.
Method III: When one of the variables is absent from (1), we can derive the integral
curves in a simple way.
For the sake of definiteness, let P and Q be functions of x and y only. Then the
equation

    dx/P = dy/Q

may be written as

    dy/dx = f(x, y), where f(x, y) = Q/P.

Let this equation have a solution of the form

    φ(x, y, c1) = 0. (10)

Solving (10) for y in terms of x and c1 and substituting into dz/dx = R/P gives a
first-order ODE for z as a function of x; its solution introduces a second arbitrary constant
c2, and the two relations together describe the integral curves.
For example, suppose that the first two ratios reduce to

    dy/dx − y/x = x, i.e., d/dx (y/x) = 1,

so that y/x = x + c1, i.e., y = x² + c1 x. Substituting this into the remaining equation,

    dz/dx = y/x + z/x = c1 + x + z/x, i.e., d/dx (z/x) = c1/x + 1,

and integrating, we obtain

    z = c1 x log x + c2 x + x².

The two relations y = x² + c1 x and z = c1 x log x + c2 x + x² describe the integral curves.
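The resulting expression for z can be verified against the reduced equation dz/dx = c1 + x + z/x:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', positive=True)
z = c1*x*sp.log(x) + c2*x + x**2
# z solves dz/dx = c1 + x + z/x, the reduced equation from the example
assert sp.simplify(sp.diff(z, x) - (c1 + x + z/x)) == 0
```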
Practice Problems 4

Find the integral curves of the following systems of ODEs:

1. dx/(yz) = dy/(zx) = dz/(xy)

2. dx/z = dy/(xz) = dz/y

3. dx/(xz − y) = dy/(yz − x) = dz/(xy − z)

4. dx/(y + 3z) = dy/(z + 5x) = dz/(x + 7y)
Lecture 1
A first-order PDE in two independent variables x, y and the dependent variable z can be
written in the form

    f(x, y, z, ∂z/∂x, ∂z/∂y) = 0. (1)

Introducing the notation

    p = ∂z/∂x, q = ∂z/∂y,

equation (1) takes the form

    f(x, y, z, p, q) = 0. (2)
Equations of the type (2) arise in many applications in geometry and physics. For
instance, consider the following geometrical problem.

EXAMPLE 1. Find all functions z(x, y) such that the tangent plane to the graph z = z(x, y)
at any arbitrary point (x0, y0, z(x0, y0)) passes through the origin. Such functions are characterized by the PDE

    x zx + y zy − z = 0.

The equation of the tangent plane to the graph at (x0, y0, z(x0, y0)) is

    zx(x0, y0)(x − x0) + zy(x0, y0)(y − y0) − (z − z(x0, y0)) = 0.

This plane passes through the origin (0, 0, 0) and hence we must have

    −zx(x0, y0) x0 − zy(x0, y0) y0 + z(x0, y0) = 0. (3)

For the equation (3) to hold for all (x0, y0) in the domain of z, z must satisfy

    x zx + y zy − z = 0,

which is a first-order PDE.
EXAMPLE 2. The set of all spheres with centers on the z-axis is characterized by the
first-order PDE yp − xq = 0.
The equation

    x² + y² + (z − c)² = r², (4)

where r and c are arbitrary constants, represents the set of all spheres whose centers lie
on the z-axis. Differentiating (4) with respect to x, we obtain

    2 ( x + (z − c) ∂z/∂x ) = 2 ( x + (z − c) p ) = 0. (5)

Similarly, differentiating (4) with respect to y, we obtain

    y + (z − c) q = 0. (6)

Eliminating the arbitrary constant c from (5) and (6), we obtain the first-order PDE

    yp − xq = 0. (7)

EXAMPLE 3. Consider the surfaces described by an equation of the form

    z = f(x² + y²), (8)

where f is an arbitrary function. Writing u = x² + y² and f′ = df/du, we compute

    p = 2x f′(u), q = 2y f′(u).

Eliminating f′(u) from the above two equations, we obtain the same PDE yp − xq = 0.
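Both derivations can be confirmed symbolically, here for the upper branch of a sphere from the family (4) and for a surface z = f(x² + y²) with f arbitrary:

```python
import sympy as sp

x, y, c, r = sp.symbols('x y c r', positive=True)

# Upper branch of a sphere from the family (4)
z = c + sp.sqrt(r**2 - x**2 - y**2)
p, q = sp.diff(z, x), sp.diff(z, y)
assert sp.simplify(y*p - x*q) == 0     # satisfies y p - x q = 0

# Surfaces z = f(x**2 + y**2) for an arbitrary function f
f = sp.Function('f')
z2 = f(x**2 + y**2)
p2, q2 = sp.diff(z2, x), sp.diff(z2, y)
assert sp.simplify(y*p2 - x*q2) == 0   # the same PDE
```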
The application of conservation principles often yields a first-order PDE. We have seen
in the previous two examples that a first-order PDE can be formed either by eliminating arbitrary constants or by eliminating an arbitrary function. Below, we generalize the
arguments of Example 2 and Example 3 to show how a first-order PDE can be formed.
Method I (Eliminating arbitrary constants): Consider a two-parameter family of surfaces described by the equation

    F(x, y, z, a, b) = 0, (9)

where a and b are arbitrary constants. Equation (9) may be thought of as a generalization
of the relation (4). Differentiating (9) with respect to x and with respect to y, we obtain

    Fx + p Fz = 0, (10)
    Fy + q Fz = 0. (11)

Eliminate the constants a, b from equations (9), (10) and (11) to obtain a first-order PDE
of the form

    f(x, y, z, p, q) = 0. (12)

This shows that a family of surfaces described by the relation (9) gives rise to a first-order
PDE (12).
Method II (Eliminating an arbitrary function): Now consider the generalization of Example 3. Let u(x, y, z) = c1 and v(x, y, z) = c2 be two known functions of x, y and z
satisfying a relation of the form

    F(u, v) = 0, (13)

where F is an arbitrary function of u and v. Differentiating (13) with respect to x and y and eliminating
the derivatives of F, we obtain the first-order PDE

    p ∂(u, v)/∂(y, z) + q ∂(u, v)/∂(z, x) = ∂(u, v)/∂(x, y), (14)

where ∂(u, v)/∂(x, y) = ux vy − uy vx, and similarly for the other Jacobians.
We classify the equation (1) depending on the special form of the function f. If (1) is of
the form

    a(x, y) ∂z/∂x + b(x, y) ∂z/∂y + c(x, y) z = d(x, y),

then it is called a linear first-order PDE. Note that the function f is linear in ∂z/∂x, ∂z/∂y and
z, with all coefficients depending on the independent variables x and y only.
If (1) has the form

    a(x, y) ∂z/∂x + b(x, y) ∂z/∂y = c(x, y, z),

then it is called a semilinear PDE. Here f is linear in the derivatives ∂z/∂x and ∂z/∂y,
but the right side may depend nonlinearly on z.
If (1) has the form

    a(x, y, z) ∂z/∂x + b(x, y, z) ∂z/∂y = c(x, y, z),

then it is called a quasi-linear PDE. Here the function f is linear in the derivatives ∂z/∂x and
∂z/∂y, with the coefficients depending on the independent variables x and y as
well as on the unknown z. Note that linear and semilinear equations are special cases of
quasi-linear equations.
Any equation that does not fit into one of these forms is called nonlinear.

EXAMPLE 5.
1. x zx + y zy = z (linear)
2. x zx + y zy = z² (semilinear)
3. zx + (x + y) zy = xy (linear)
4. z zx + zy = 0 (quasilinear)
5. x zx² + y zy² = 2 (nonlinear)
Recall that the initial value problem for a first-order ODE asks for a solution of the
equation that takes a given value at a given point of R. The IVP for a first-order PDE asks
for a solution of (2) which has given values on a curve in R². The conditions to be satisfied
in the case of the IVP for a first-order PDE are formulated in the classic problem of Cauchy,
which may be stated as follows:
Let C be a given curve in R² described parametrically by the equations

    x = x0(s), y = y0(s); s ∈ I, (15)

where x0(s), y0(s) are in C¹(I). Let z0(s) be a given function in C¹(I). The IVP or
Cauchy's problem for the first-order PDE

    f(x, y, z, p, q) = 0 (16)

is to find a function u = u(x, y) with the following properties:
(i) u(x, y) and its partial derivatives with respect to x and y are continuous in a region
Ω of R² containing the curve C.

(ii) u = u(x, y) is a solution of (16) in Ω, i.e.,

    f(x, y, u(x, y), ux(x, y), uy(x, y)) = 0 in Ω.

(iii) On the curve C,

    u(x0(s), y0(s)) = z0(s), s ∈ I. (17)

The curve C is called the initial curve of the problem and the function z0(s) is called the
initial data. Equation (17) is called the initial condition of the problem.
NOTE: Geometrically, Cauchy's problem may be interpreted as follows: find a solution
surface u = u(x, y) of (16) which passes through the curve C whose parametric equations
are

    x = x0(s), y = y0(s), z = z0(s), (18)

and at every point of which the direction (p, q, −1) of the normal is such that
f(x, y, z, p, q) = 0.
The proof of existence of a solution of (16) passing through a curve with equations
(18) requires some additional assumptions on the function f and on the nature of the curve C.
We now state the classical theorem due to Kowalewski (cf. [10]).
THEOREM 6. (Kowalewski) If g(y) and all its derivatives are continuous for |y − y0| < δ,
if x0 is a given number and z0 = g(y0), q0 = g′(y0), and if f(x, y, z, q) and all its partial
derivatives are continuous in a region S defined by

    |x − x0| < δ, |y − y0| < δ, |q − q0| < δ,

then there exists a unique function φ(x, y), continuous together with all its partial derivatives
in a region containing (x0, y0), such that z = φ(x, y) is a solution of the equation
zx = f(x, y, z, zy) and φ(x0, y) = g(y) for all y near y0.

DEFINITION 7. (A complete integral) Any relation of the form

    f(x, y, z, a, b) = 0 (19)

which contains two arbitrary constants a and b and is a solution of a first-order PDE is
called a complete solution or a complete integral of that first-order PDE.
DEFINITION 8. (A general solution or a general integral) Any relation of the form

    F(u, v) = 0

involving an arbitrary function F connecting two known functions u(x, y, z) and v(x, y, z)
and providing a solution of a first-order PDE is called a general solution or a general
integral of that first-order PDE.

It is possible to derive a general integral of the PDE once a complete integral is known.
With b = φ(a), if we take any one-parameter subsystem

    f(x, y, z, a, φ(a)) = 0

of the system (19) and form its envelope, we obtain a solution of equation (16). When
φ(a) is arbitrary, the solution obtained is called the general integral of (16) corresponding
to the complete integral (19).
When a definite φ(a) is used, we obtain a particular solution.

DEFINITION 9. (A singular integral) The envelope of the two-parameter system (19)
is also a solution of the equation (16). It is called the singular integral or singular solution
of the equation.

NOTE: The general solution of an equation of type (1) can be obtained by solving
systems of ODEs. This is not true for higher-order equations or for systems of first-order
equations.
Practice Problems

1. Classify whether the following PDE is linear, quasi-linear or nonlinear:
(a) z zx − 2xy zy = 0; (b) zx² + z zy = 2; (c) zx + 2 zy = 5z; (d) x zx + y zy = z².

2. Eliminate the arbitrary constants a and b from the following equations to form the
PDE:
(a) ax² + by² + z² = 1; (b) z = (x² + a)(y² + b).

3. Form a first-order PDE by eliminating the arbitrary function f from z = f(xy/z).
Lecture 2
Consider the linear first-order PDE

    a(x, y) zx + b(x, y) zy − c(x, y) z = d(x, y), (1)

where a, b, c, and d are given functions of x and y. These functions are assumed to be
continuously differentiable. Rewriting (1) as

    a(x, y) zx + b(x, y) zy = c(x, y) z + d(x, y), (2)

observe that the left side of (2) is the derivative of z in the direction of the vector (a, b).
The ODEs

    dx/dt = a(x, y), dy/dt = b(x, y) (3)
determine a family of curves x = x(t), y = y(t) whose tangent vector (dx/dt, dy/dt) coincides
with the direction of the vector (a, b). Therefore, the derivative of z(x, y) along these
curves becomes

    dz/dt = d/dt {z(x(t), y(t))} = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt)
          = zx(x(t), y(t)) a(x(t), y(t)) + zy(x(t), y(t)) b(x(t), y(t))
          = c(x(t), y(t)) z(x(t), y(t)) + d(x(t), y(t))
          = c(t) z(t) + d(t),

where we have used the chain rule and (1). Thus, along these curves, z(t) = z(x(t), y(t))
satisfies the ODE

    dz/dt = c(t) z(t) + d(t), (4)

whose solution is

    z(t) = e^{∫₀ᵗ c(τ) dτ} [ ∫₀ᵗ e^{−∫₀ˢ c(τ) dτ} d(s) ds + z(0) ]. (5)
The approach described above to solve (1) by using the solutions of (3)-(4) is called the
method of characteristics. It is based on the geometric interpretation of the partial
differential equation (1).

NOTE: (i) The ODEs (3) are known as the characteristic equations for the PDE (1). The
solution curves of the characteristic equations are the characteristic curves for (1).
(ii) Observe that c(t) and d(t) depend only on the values of c(x, y) and d(x, y) along
the characteristic curve x = x(t), y = y(t). Thus, equation (5) shows that the values z(t)
of the solution z along the entire characteristic curve are completely determined once
the value z(0) = z(x(0), y(0)) is prescribed.
(iii) Assuming certain smoothness conditions on the functions a, b, c, and d, the existence and uniqueness theory for ODEs guarantees that a unique solution curve (x(t), y(t), z(t))
of (3)-(4) (i.e., a characteristic curve) passes through a given point (x0, y0, z0) in (x, y, z)-space.
In practice we are not interested in determining a general solution of the partial differential
equation (1) but rather a specific solution z = z(x, y) that passes through, or contains, a
given curve C. This problem is known as the initial value problem for (1). The method
of characteristics for solving the initial value problem for (1) proceeds as follows.
Let the initial curve C be given parametrically as

    x = x(s), y = y(s), z = z(s) (6)

for a given range of values of the parameter s. The curve may be of finite or infinite extent
and is required to have a continuous tangent vector at each point.
Every value of s fixes a point on C through which a unique characteristic curve passes
(see Fig. 2.1). The family of characteristic curves determined by the points of C may be
parameterized as

    x = x(s, t), y = y(s, t), z = z(s, t),

with t = 0 corresponding to the initial curve C. That is, we have

    x(s, 0) = x(s), y(s, 0) = y(s), z(s, 0) = z(s).
The functions x(s, t) and y(s, t) are the solutions of the characteristic
system (for each fixed s)

    d/dt x(s, t) = a(x(s, t), y(s, t)), d/dt y(s, t) = b(x(s, t), y(s, t)), (7)

with given initial values x(s, 0) and y(s, 0).
Suppose that
z(x(s, 0), y(s, 0)) = g(s),
(8)
where g(s) is a given function. We obtain z(x(s, t), y(s, t)) as follows. Let
z(s, t) = z(x(s, t), y(s, t)),  (9)
c(s, t) = c(x(s, t), y(s, t)), d(s, t) = d(x(s, t), y(s, t)),  (10)
and
μ(s, t) = exp[∫ c(s, t) dt].  (11)
z(s, t) is the value of z at the point (x(s, t), y(s, t)). Thus, as s and t vary, the point
(x, y, z), in xyz-space, given by
x = x(s, t), y = y(s, t), z = z(s, t),  (12)
traces out the graph of the solution z of the PDE (1) which meets the initial curve (8). The equations (12) constitute the parametric form of the solution of (1) satisfying the initial condition (8) [i.e., a surface in (x, y, z)-space that contains the initial curve].
NOTE: If the Jacobian J(s, t) = x_s y_t − x_t y_s ≠ 0, then the equations x = x(s, t) and y = y(s, t) can be inverted to give s and t as (smooth) functions of x and y, i.e., s = s(x, y) and t = t(x, y). The resulting function z = z(x, y) = z(s(x, y), t(x, y)) satisfies the PDE (1) in a neighborhood of the curve C (in view of (4) and the initial condition (6)) and is the unique solution of the IVP.
EXAMPLE 1. Determine the solution of the following IVP:
∂z/∂y + c ∂z/∂x = 0, z(x, 0) = f(x).  (13)
The initial curve may be parameterized as x = s, y = 0, z = f(s).
The family of characteristic curves (x(s, t), y(s, t)) is determined by solving the ODEs
(d/dt) x(s, t) = c, (d/dt) y(s, t) = 1,
with x(s, 0) = s and y(s, 0) = 0. Integrating gives x(s, t) = ct + c1(s) and y(s, t) = t + c2(s); the initial conditions force c1(s) = s and c2(s) = 0, and hence
x(s, t) = ct + s and y(s, t) = t.
Step 4. (Expressing z(s, t) in terms of z(x, y)) Expressing s and t as s = s(x, y) and
t = t(x, y), we have
s = x − cy, t = y.
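Back-substituting, since z is constant along each characteristic here, the solution is z(x, y) = f(x − cy). The following SymPy sketch (our own check, not part of the lecture) verifies the PDE and the initial condition symbolically:

```python
import sympy as sp

x, y, c = sp.symbols('x y c')
f = sp.Function('f')

# Candidate solution from the method of characteristics: z(x, y) = f(x - c*y)
z = f(x - c*y)

# PDE residual for z_y + c*z_x = 0 must vanish identically
residual = sp.diff(z, y) + c*sp.diff(z, x)
assert sp.simplify(residual) == 0

# Initial condition z(x, 0) = f(x)
assert z.subs(y, 0) == f(x)
```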
The family of characteristic curves (x(s, t), y(s, t)) is determined by solving
(d/dt) x(s, t) = −y(s, t), (d/dt) y(s, t) = x(s, t),
with x(s, 0) = s and y(s, 0) = s² (s > 0). Integrating gives x = c1(s) cos t − c2(s) sin t and y = c1(s) sin t + c2(s) cos t; the initial conditions force c1(s) = s and c2(s) = s², and hence
x(s, t) = s cos(t) − s² sin(t) and y(s, t) = s sin(t) + s² cos(t).
Step 3. (Writing the parametric form of the solution)
Comparing with (1), we note that c(x, y) = 0 and d(x, y) = 0. Therefore, using (10) and (11), it follows that
d(s, t) = 0, μ(s, t) = 1.
In view of the given initial curve, we obtain
z(x(s, 0), y(s, 0)) = z(s, s²) = g(s) = s³, and hence z(s, t) = s³.
Step 4. (Expressing the solution in terms of x and y) Since x² + y² = s² + s⁴, solving for s² gives s² = [−1 + (1 + 4(x² + y²))^(1/2)]/2, and hence
z(x, y) = (1/√8)[−1 + (1 + 4(x² + y²))^(1/2)]^(3/2).
Practice Problems
1. Find the general solution of the following PDE in the indicated domain.
(A) xzx + 2yzy = 0, for x > 0, y > 0
(B) yz_x − 4xz_y = 2xy, for all (x, y)
(C) xz_x − xyz_y = z, for all (x, y)
2. Find a particular solution of the following PDEs satisfying the given side conditions.
z(s2 , s) = s3
Lecture 3
In this lecture, we consider quasi-linear first-order PDEs of the form
a(x, y, z) ∂z/∂x + b(x, y, z) ∂z/∂y = c(x, y, z).  (1)
Such equations occur in a variety of nonlinear wave propagation problems. Let us assume that an integral surface z = z(x, y) of (1) can be found. Writing this integral surface in implicit form as
F(x, y, z) = z(x, y) − z = 0,
note that the gradient vector ∇F = (z_x, z_y, −1) is normal to the integral surface F(x, y, z) = 0. The equation (1) may then be written as
az_x + bz_y − c = (a, b, c) · (z_x, z_y, −1) = 0.  (2)
This shows that the vector (a, b, c) and the gradient vector ∇F are orthogonal. In other words, the vector (a, b, c) lies in the tangent plane of the integral surface z = z(x, y) at each point in (x, y, z)-space where ∇F ≠ 0.
At each point (x, y, z), the vector (a, b, c) determines a direction in (x, y, z)-space called the characteristic direction. We can construct a family of curves that have the
characteristic direction at each point. If the parametric form of these curves is
x = x(t), y = y(t), and z = z(t),  (3)
then they must satisfy
dx/dt = a(x(t), y(t), z(t)), dy/dt = b(x(t), y(t), z(t)), dz/dt = c(x(t), y(t), z(t)),  (4)
because (dx/dt, dy/dt, dz/dt) is the tangent vector along the curves. The solutions of (4)
are called the characteristic curves of the quasilinear equation (1).
We assume that a(x, y, z), b(x, y, z), and c(x, y, z) are sufficiently smooth and do not all vanish at the same point. Then, the theory of ordinary differential equations ensures that a unique characteristic curve passes through each point (x0, y0, z0). The IVP for (1) requires that z(x, y) be specified on a given curve in (x, y)-space, which determines a curve C in (x, y, z)-space referred to as the initial curve. To solve this IVP, we pass a characteristic curve through each point of the initial curve C. These curves generate a surface known as the integral surface, which is the solution of the IVP.
REMARK 1. (i) The characteristic equations (4) for x and y are not, in general, decoupled from the equation for z, and hence differ from those in the linear case (cf. Eq. (3) of Lecture 2).
(ii) The characteristic equations (4) can be expressed in the nonparametric form as
dx/a = dy/b = dz/c.  (5)
Below, we shall describe a method for finding the general solution of (1). This method is due to Lagrange; hence it is usually referred to as the method of characteristics or the method of Lagrange.
THEOREM 2. The general solution of the PDE (1) is
F(u, v) = 0,  (6)
where F is an arbitrary function and u(x, y, z) = c1 and v(x, y, z) = c2 are two independent solutions of the equations
dx/a = dy/b = dz/c.  (7)
Proof. If u(x, y, z) = c1 and v(x, y, z) = c2 satisfy the equations (7), then the equations
u_x dx + u_y dy + u_z dz = 0, v_x dx + v_y dy + v_z dz = 0
are compatible with (7). Thus, we must have
au_x + bu_y + cu_z = 0,
av_x + bv_y + cv_z = 0.
Solving these equations for a, b and c, we obtain
a / [∂(u, v)/∂(y, z)] = b / [∂(u, v)/∂(z, x)] = c / [∂(u, v)/∂(x, y)].  (8)
Eliminating ∂F/∂u and ∂F/∂v from the relations obtained by differentiating F(u, v) = 0 with respect to x and y, and using (8), we arrive at
a ∂z/∂x + b ∂z/∂y = c.  (9)
Thus, we find that F(u, v) = 0 is a solution of the equation (1). This completes the proof.
REMARK 3. All integral surfaces of the equation (1) are generated by the integral curves of the equations (7).
Consequently, we have
d{xy − z²/2} = 0 and d{(x² − y² − z²)/2} = 0.
THEOREM 4. Let the initial curve C be given parametrically as x = x(s), y = y(s), z = z(s), and suppose that
a[x(s), y(s), z(s)] (dy/ds) − b[x(s), y(s), z(s)] (dx/ds) ≠ 0  (10)
on C. Then, there exists a unique solution z = z(x, y), defined in some neighborhood of the initial curve C, that satisfies (1) and the initial condition z(x(s), y(s)) = z(s).
Proof. The characteristic system (4) with initial conditions at t = 0 given as x = x(s), y = y(s), and z = z(s) has a unique solution of the form
x = x(s, t), y = y(s, t), z = z(s, t),
with continuous derivatives in s and t, and
x(s, 0) = x(s), y(s, 0) = y(s), z(s, 0) = z(s).
This follows from the existence and uniqueness theory for ODEs. The Jacobian of the transformation x = x(s, t), y = y(s, t) at t = 0 is
J(s) = J(s, t)|_{t=0} = (x_s y_t − y_s x_t)|_{t=0} = (dx/ds) b − (dy/ds) a ≠ 0,  (11)
by (10), since x_t(s, 0) = a and y_t(s, 0) = b. Hence s and t can be expressed as smooth functions of x and y near the initial curve. Moreover, along the characteristics,
dz/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt) = a (∂z/∂x) + b (∂z/∂y),
where we have used (4). The uniqueness of the solution follows from the fact that any two integral surfaces that contain the same initial curve must coincide along all the characteristic curves passing through the initial curve. This is a consequence of the uniqueness theorem for the IVP for (4). This completes our proof.
EXAMPLE 7. Solve the quasi-linear PDE z z_x + z_y = 0 subject to the initial curve x = s, y = 0, z = f(s). The characteristic system is
dx/dt = z, dy/dt = 1, dz/dt = 0.
Let the solutions be denoted as x(s, t), y(s, t), and z(s, t). We immediately find that
z(s, t) = f(s), y(s, t) = t, x(s, t) = zt + c1(s) = f(s)t + s.
In particular, t(x, y) = y. Since t = y and s = x − tf(s) = x − yz, the solution can also be given in implicit form as
z = f(x − yz).
EXAMPLE 8. Solve the following quasi-linear PDE:
zz_x + yz_y = x, (x, y) ∈ R²,
subject to the initial condition
z(x, 1) = 2x, x ∈ R.
Solution. Here a(x, y, z) = z, b(x, y, z) = y, and c(x, y, z) = x. Parameterizing the initial curve as x(s, 0) = s, y(s, 0) = 1, z(s, 0) = 2s, the characteristic equations are
dx/dt = z, x(s, 0) = s,
dy/dt = y, y(s, 0) = 1,
dz/dt = x, z(s, 0) = 2s.
On solving the above ODEs, we obtain
x(s, t) = (s/2)(3e^t − e^(−t)), y(s, t) = e^t, z(s, t) = (s/2)(3e^t + e^(−t)).
Solving for (s, t) in terms of (x, y), we obtain
s(x, y) = 2xy/(3y² − 1), t(x, y) = ln(y),
and hence the solution is
z(x, y) = (3y² + 1)x / (3y² − 1).
Note that the characteristic variables imply that y must be positive (y = e^t). In fact, the solution z is valid only for 3y² − 1 > 0, i.e., for y > 1/√3.
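As a check of the computation above (a SymPy sketch in our notation, not part of the original text), one can verify directly that z(x, y) = (3y² + 1)x/(3y² − 1) satisfies zz_x + yz_y = x and the initial condition z(x, 1) = 2x:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Solution obtained by the method of characteristics above
z = (3*y**2 + 1)*x / (3*y**2 - 1)

# PDE residual for z*z_x + y*z_y = x must vanish identically
residual = z*sp.diff(z, x) + y*sp.diff(z, y) - x
assert sp.simplify(residual) == 0

# Initial condition z(x, 1) = 2x
assert sp.simplify(z.subs(y, 1) - 2*x) == 0
```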
Practice Problems
1. Find a solution of the PDE zx + zzy = 6x satisfying the condition z(0, y) = 3y.
2. Find the general integral of the PDE
(2xy − 1)z_x + (z − 2x²)z_y = 2(x − yz)
and also the particular integral which passes through the line x = 1, y = 0.
3. Solve zx + zzy = 2x, z(0, y) = f (y).
4. Find the solution of the equation zx + zzy = 1 with the data
x(s, 0) = 2s,
y(s, 0) = s2 , z(0, s2 ) = s.
5. Find the characteristics of the equation zx zy = z, and determine the integral surface
which passes through the parabola x = 0, y 2 = z.
Lecture 4
In this lecture, we develop the method of characteristics for the first-order nonlinear PDE
F(x, y, z, p, q) = 0, where p = z_x and q = z_y.  (1)
For the linear equation a(x, y)z_x + b(x, y)z_y = c(x, y), we may take
F(x, y, z, p, q) = a(x, y)p + b(x, y)q − c(x, y),  (2)
and the characteristic curves satisfy
x′(t) = a(x(t), y(t)), y′(t) = b(x(t), y(t)).  (3)
Note that Fp = a(x, y) and Fq = b(x, y). Hence, (3) may be written as
x (t) = Fp and y (t) = Fq .
(4)
For solving the first-order nonlinear PDE (1), the relation (4) motivates us to define the characteristic curves as solutions of the system
x (t) = Fp (x(t), y(t), z(t), p(t), q(t)) and y (t) = Fq (x(t), y(t), z(t), p(t), q(t)),
(5)
where z(t) = z(x(t), y(t)), p(t) = z_x(x(t), y(t)), q(t) = z_y(x(t), y(t)). However, unlike the linear case, the right sides of (5) depend not only on x(t) and y(t), but also on z(t), p(t) and q(t). Thus, we expect a larger system of five ODEs for the five unknowns x(t), y(t), z(t), p(t) and q(t). For the remaining three equations, notice that
z′(t) = (d/dt){z(x(t), y(t))} = z_x x′(t) + z_y y′(t) = p(t)F_p + q(t)F_q.  (6)
Similarly,
p′(t) = (d/dt){z_x(x(t), y(t))} = z_xx x′(t) + z_xy y′(t) = z_xx F_p(x(t), y(t), z(t), p(t), q(t)) + z_xy F_q(x(t), y(t), z(t), p(t), q(t)).  (7)
Using the fact that z(x, y) should solve the PDE (1), we obtain
0 = (d/dx){F(x, y, z(x, y), z_x(x, y), z_y(x, y))} = F_x + F_z z_x + F_p z_xx + F_q z_yx.  (8)
Therefore,
p′(t) = z_xx F_p + z_xy F_q = −(F_x + pF_z).  (9)
Similarly, q′(t) = −(F_y + qF_z). Written out in full,
p′(t) = −{F_x(x(t), y(t), z(t), p(t), q(t)) + p(t)F_z(x(t), y(t), z(t), p(t), q(t))},
q′(t) = −{F_y(x(t), y(t), z(t), p(t), q(t)) + q(t)F_z(x(t), y(t), z(t), p(t), q(t))}.  (10)
These equations constitute the characteristic system of the PDE (1) and are known as the characteristic equations associated with the PDE (1).
NOTE: If the functions which appear in equations (10) satisfy a Lipschitz condition, there is a unique solution of the equations for each prescribed set of initial values of the variables. Therefore, the characteristic strip is uniquely determined by any initial element (x(t0), y(t0), z(t0), p(t0), q(t0)) at any initial value t0 of t.
An important result about characteristic strips is given below.
THEOREM 1. The function F(x, y, z, p, q) is constant along every characteristic strip of the equation F(x, y, z, p, q) = 0.
Consider now the initial value problem of finding a solution of F(x, y, z, p, q) = 0 that assumes prescribed values along an initial curve:
x = f(s), y = g(s), z = G(s),  (11)
where G(s) is a continuously differentiable function. Such a problem may have no solution (e.g., the PDE z_x² + z_y² + 1 = 0). However, if a solution exists in some neighborhood of the initial curve, then such a solution can often be determined using the following steps (cf. [1]).
Step 1: Find functions h(s) and k(s) (if possible) such that
F(f(s), g(s), G(s), h(s), k(s)) = 0, G′(s) = h(s)f′(s) + k(s)g′(s), and
F_p(f(s), g(s), G(s), h(s), k(s)) g′(s) − F_q(f(s), g(s), G(s), h(s), k(s)) f′(s) ≠ 0.  (12)
Note that if h(s) and k(s) do not exist, then (11) has no solution. If there are several choices for (h(s), k(s)), then a solution of (11) exists for each such choice.
Step 2: For each fixed s, solve the following characteristic system for x(s, t), y(s, t), z(s, t), p(s, t), q(s, t), with initial conditions x(s, 0) = f(s), y(s, 0) = g(s), z(s, 0) = G(s), p(s, 0) = h(s), q(s, 0) = k(s), where h(s) and k(s) are the functions found in Step 1.
(d/dt) x(s, t) = F_p(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)),
(d/dt) y(s, t) = F_q(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)),
(d/dt) z(s, t) = p(s, t)F_p(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)) + q(s, t)F_q(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)),
(d/dt) p(s, t) = −[F_x(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)) + p(s, t)F_z(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))],
(d/dt) q(s, t) = −[F_y(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)) + q(s, t)F_z(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))].  (13)
Step 3: As s and t vary, the point
x = x(s, t), y = y(s, t), z = z(s, t)  (14)
traces out the graph of a solution z of (11) in the xyz-space, in a neighborhood of the
curve traced out by (f(s), g(s), G(s)). In some cases, one can use the first two equations
in (14) to solve for s and t in terms of x and y (say, s = s(x, y) and t = t(x, y)) to obtain a
solution z(x, y) = z(s(x, y), t(x, y)), for (x, y) in a neighborhood of the curve (f (s), g(s)).
To illustrate the above steps, let us consider the following example.
EXAMPLE 2. Solve the PDE z_x z_y − z = 0 subject to the condition z(s, −s) = 1.
Solution. Here, we have
F(x, y, z, p, q) = pq − z.
The characteristic system (13) takes the form
dx/dt = F_p = q(t), dy/dt = F_q = p(t), dz/dt = pF_p + qF_q = 2p(t)q(t),
dp/dt = −[F_x + p(t)F_z] = p(t), dq/dt = −[F_y + q(t)F_z] = q(t).
Note that
dp/dt = p(t) ⟹ p(t) = ce^t and dq/dt = q(t) ⟹ q(t) = de^t,
where c and d are arbitrary constants. Since we are looking for a characteristic strip (i.e., F(x, y, z, p, q) = 0), we set z(t) = p(t)q(t) = cde^(2t). The equations for the characteristic strip are then
x(t) = de^t + d1, y(t) = ce^t + d2, z(t) = cde^(2t), p(t) = ce^t, q(t) = de^t,
where d1 and d2 are arbitrary constants.
If we choose h(s) = 1 and k(s) = 1, then p(s, t) = e^t and q(s, t) = e^t, and the solution is given by
z(x, y) = e^(2t) = (x + y + 2)²/4.
If we choose h(s) = −1 and k(s) = −1, we obtain instead
z(x, y) = (x + y − 2)²/4.
Practice Problems
Solve the following Cauchy problems:
1. pq − z = 0, z(x, x) = x
2. p + zq = 2x, z(0, y) = f (y)
3. Find the solution of the equation p + zq = 1 with the data
x(s, 0) = 2s,
y(s, 0) = s2 , z(0, s2 ) = s.
4. Find the characteristics of the equation pq = z, and determine the integral surface
which passes through the parabola x = 0, y 2 = z.
Lecture 5
In this lecture, we shall study compatible systems of first-order PDEs and Charpit's method for solving nonlinear PDEs. Let's begin with the following definition.
DEFINITION 1. (Compatible systems of first-order PDEs) A system of two first-order PDEs
f(x, y, z, p, q) = 0  (1)
and
g(x, y, z, p, q) = 0  (2)
is said to be compatible on a domain D if
(i) J ≡ ∂(f, g)/∂(p, q) = f_p g_q − f_q g_p ≠ 0 on D;
(ii) p and q can be explicitly solved from (1) and (2) as p = φ(x, y, z) and q = ψ(x, y, z), and further, the equation
dz = φ(x, y, z)dx + ψ(x, y, z)dy
is integrable.
THEOREM 3. A necessary and sufficient condition for the integrability of the equation dz = φ(x, y, z)dx + ψ(x, y, z)dy is
[f, g] ≡ ∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) + p ∂(f, g)/∂(z, p) + q ∂(f, g)/∂(z, q) = 0.  (3)
In other words, the equations (1) and (2) are compatible iff (3) holds.
EXAMPLE 4. Show that the equations
xp − yq = 0, z(xp + yq) = 2xy
are compatible and solve them.
Solution. Set f ≡ xp − yq = 0 and g ≡ z(xp + yq) − 2xy = 0. Then
f_x = p, f_y = −q, f_z = 0, f_p = x, f_q = −y,
and
g_x = zp − 2y, g_y = zq − 2x, g_z = xp + yq, g_p = zx, g_q = zy.
Compute
J ≡ ∂(f, g)/∂(p, q) = f_p g_q − f_q g_p = (x)(zy) − (−y)(zx) = 2zxy ≠ 0
for x ≠ 0, y ≠ 0, z ≠ 0. Further,
∂(f, g)/∂(x, p) = f_x g_p − f_p g_x = p(zx) − x(zp − 2y) = 2xy,
∂(f, g)/∂(z, p) = f_z g_p − f_p g_z = 0 − x(xp + yq) = −x²p − xyq,
∂(f, g)/∂(y, q) = f_y g_q − f_q g_y = −q(zy) + y(zq − 2x) = −2xy,
∂(f, g)/∂(z, q) = f_z g_q − f_q g_z = 0 + y(xp + yq) = y²q + xyp.
Therefore, using xp = yq,
[f, g] = 2xy − 2xy + p(−x²p − xyq) + q(y²q + xyp) = y²q² − x²p² = 0.
So the equations are compatible.
The next step is to determine p and q from the two equations xp − yq = 0 and z(xp + yq) = 2xy. Using these two equations, we have
zxp + zyq − 2xy = 0 ⟹ xp + yq = 2xy/z,
and since xp = yq, it follows that
2xp = 2xy/z ⟹ p = y/z = φ(x, y, z)
and
q = xp/y = x/z = ψ(x, y, z).
Substituting in dz = φ dx + ψ dy gives z dz = y dx + x dy, and on integrating we obtain the common solution z² = 2xy + c, where c is an arbitrary constant.
EXAMPLE 5. The equations
f ≡ xp − yq − x = 0  (4)
and
g ≡ x²p + q − xz = 0  (5)
are compatible. They have common solutions z = x + c(1 + xy), where c is an arbitrary constant. Note that z = x(y + 1) is a solution of (4) but not of (5).
Charpit's Method: It is a general method for finding the complete integral of a nonlinear PDE of first order of the form
f(x, y, z, p, q) = 0.  (6)
Basic Idea: The basic idea of this method is to introduce another first-order partial differential equation
g(x, y, z, p, q, a) = 0,  (7)
containing an arbitrary constant a, such that (6) and (7) can be solved for
p = p(x, y, z, a), q = q(x, y, z, a),
and the equation
dz = p(x, y, z, a)dx + q(x, y, z, a)dy  (8)
is integrable. When such a function g is found, the solution
F(x, y, z, a, b) = 0
of (8), containing two arbitrary constants a and b, will be a solution of (6).
Note: Notice that in Charpit's method another PDE g is introduced so that the equations f and g are compatible, and the common solutions of f and g are then determined.
The equations (6) and (7) are compatible if
[f, g] ≡ ∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) + p ∂(f, g)/∂(z, p) + q ∂(f, g)/∂(z, q) = 0.
Expanding this condition, we obtain
f_p ∂g/∂x + f_q ∂g/∂y + (pf_p + qf_q) ∂g/∂z − (f_x + pf_z) ∂g/∂p − (f_y + qf_z) ∂g/∂q = 0.  (9)
Now solve (9) to determine g by finding the integrals of the following auxiliary equations:
dx/f_p = dy/f_q = dz/(pf_p + qf_q) = dp/[−(f_x + pf_z)] = dq/[−(f_y + qf_z)].  (10)
These equations are known as Charpit's equations; they are equivalent to the characteristic equations of the previous Lecture 4.
Once an integral g(x, y, z, p, q, a) of this kind has been found, the problem reduces to
solving for p and q, and then integrating equation (8).
REMARK 5. 1. For finding the integrals, not all of Charpit's equations (10) need be used.
2. However, p or q must occur in the integral obtained from (10).
EXAMPLE 6. Find a complete integral of
p²x + q²y = z.
Solution. To find a complete integral, we proceed as follows.
Step 1: (Computing f_x, f_y, f_z, f_p, f_q).
Set f ≡ p²x + q²y − z = 0. Then
f_x = p², f_y = q², f_z = −1, f_p = 2px, f_q = 2qy.
Step 2: (Writing Charpit's equations and finding an integral g). The auxiliary equations
dx/f_p = dy/f_q = dz/(pf_p + qf_q) = dp/[−(f_x + pf_z)] = dq/[−(f_y + qf_z)]
become
dx/(2px) = dy/(2qy) = dz/(2(p²x + q²y)) = dp/(−p² + p) = dq/(−q² + q).
Combining the dx and dp ratios (and likewise the dy and dq ratios),
(p² dx + 2px dp)/(2p³x + 2px(p − p²)) = (q² dy + 2qy dq)/(2q³y + 2qy(q − q²)),
that is,
d(p²x)/(2p²x) = d(q²y)/(2q²y).
On integrating, we obtain
log(p²x) = log(q²y) + log a ⟹ p²x = aq²y,  (12)
where a is an arbitrary constant.
Step 3: (Solving for p and q). Using p²x = aq²y in the given PDE, we have
aq²y + q²y = z ⟹ q²y(1 + a) = z ⟹ q² = z/[(1 + a)y] ⟹ q = [z/((1 + a)y)]^(1/2),
and
p² = aq²(y/x) = az/[(1 + a)x] ⟹ p = [az/((1 + a)x)]^(1/2).
Step 4: (Writing dz = p(x, y, z, a)dx + q(x, y, z, a)dy and finding its solution). Writing
dz = [az/((1 + a)x)]^(1/2) dx + [z/((1 + a)y)]^(1/2) dy
⟹ [(1 + a)/z]^(1/2) dz = (a/x)^(1/2) dx + (1/y)^(1/2) dy,
and integrating, we obtain the complete integral
[(1 + a)z]^(1/2) = (ax)^(1/2) + y^(1/2) + b,
where a and b are arbitrary constants.
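To double-check this complete integral (a SymPy sketch in our notation, not part of the lecture), solve [(1 + a)z]^(1/2) = (ax)^(1/2) + y^(1/2) + b for z explicitly and verify that p²x + q²y = z holds identically:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)

# Complete integral sqrt((1+a)*z) = sqrt(a*x) + sqrt(y) + b, solved for z
z = (sp.sqrt(a*x) + sp.sqrt(y) + b)**2 / (1 + a)

p = sp.diff(z, x)  # p = z_x
q = sp.diff(z, y)  # q = z_y

# The PDE p^2*x + q^2*y = z must hold identically in x, y, a, b
assert sp.simplify(p**2 * x + q**2 * y - z) == 0
```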
Practice Problems
1. Show that the equations xp − yq = x and x²p + q = xz are compatible and solve them.
2. Show that the equations f(x, y, p, q) = 0 and g(x, y, p, q) = 0 are compatible if
∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) = 0.
3. Find complete integrals of the equations:
(i) p = (z + qy)2 ;
(ii) (p2 + q 2 )y = qz
Lecture 6
We shall consider some special types of first-order partial differential equations whose solutions may be obtained easily by Charpit's method.
Type (a): (Equations involving only p and q)
If the equation is of the form
f(p, q) = 0,  (1)
then z = ax + by + c is a complete integral, where the constants a and b are connected by
f(a, b) = 0.  (2)
For example, a complete integral of pq = 1 is z = ax + y/a + c.
Type (b): (Equations not involving x and y)
If the equation is of the form
f(z, p, q) = 0,  (3)
we seek a solution with
p = aq,  (4)
where a is an arbitrary constant. Solving (3) and (4) for p and q, we obtain
q = Q(a, z) ⟹ p = aQ(a, z).
Now
dz = pdx + qdy = Q(a, z)(a dx + dy) ⟹ ∫ dz/Q(a, z) = ax + y + b,  (5)
which gives a complete integral.
EXAMPLE 2. Find a complete integral of the PDE z²p² + q² = 1.
Solution. Setting p = aq in the equation, we obtain
q²(1 + a²z²) = 1 ⟹ q = (1 + a²z²)^(−1/2).
Now,
p² = (1 − q²)/z² = [1 − 1/(1 + a²z²)]/z² = a²/(1 + a²z²) ⟹ p = a(1 + a²z²)^(−1/2).
The relation dz = pdx + qdy then gives
(1 + a²z²)^(1/2) dz = a dx + dy,
and on integrating we obtain the complete integral
(1/(2a)){az(1 + a²z²)^(1/2) + log[az + (1 + a²z²)^(1/2)]} = ax + y + b.
Type (c): (Separable equations) An equation is separable if it can be written in the form
f(x, p) = g(y, q).  (6)
That is, a PDE in which z is absent and the terms containing x and p can be separated from those containing y and q. For this type of equation, Charpit's equations become
dx/f_p = dy/(−g_q) = dz/(pf_p − qg_q) = dp/(−f_x) = dq/g_y.
From the dx and dp relations, we obtain the ODE
dp/(−f_x) = dx/f_p ⟹ dp/dx + f_x/f_p = 0,  (7)
whose solution is given by f(x, p) = a, where a is an arbitrary constant. We then solve
f(x, p) = a, g(y, q) = a
for p and q, and use the relation dz = pdx + qdy to determine a complete integral.
EXAMPLE 3. Find a complete integral of p²y(1 + x²) = qx².
Solution. First we write the given PDE in the separable form
p²(1 + x²)/x² = q/y.
It follows that
p²(1 + x²)/x² = a² ⟹ p = ax/(1 + x²)^(1/2),
where a is an arbitrary constant. Similarly,
q/y = a² ⟹ q = a²y.
Now, the relation dz = pdx + qdy yields
dz = [ax/(1 + x²)^(1/2)] dx + a²y dy ⟹ z = a(1 + x²)^(1/2) + a²y²/2 + b,
where a and b are arbitrary constants; this is a complete integral for the given PDE.
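One can likewise verify this complete integral symbolically (a SymPy sketch in our notation, not part of the lecture):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

# Complete integral of p^2*y*(1 + x^2) = q*x^2 obtained above
z = a*sp.sqrt(1 + x**2) + a**2 * y**2 / 2 + b

p = sp.diff(z, x)  # p = a*x/sqrt(1 + x^2)
q = sp.diff(z, y)  # q = a^2*y

# PDE residual must vanish identically
assert sp.simplify(p**2 * y * (1 + x**2) - q * x**2) == 0
```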
Type (d): (Clairaut equation) An equation of the form
z = px + qy + f(p, q)  (8)
is known as a Clairaut equation; a complete integral is given by z = ax + by + f(a, b). For instance, a complete integral of
z = px + qy + 1/(p + q)  (9)
is
z = ax + by + 1/(a + b),
where a and b are arbitrary constants.
Practice Problems
Find complete integrals of the following PDEs.
1. p + q = pq
2. √p + √q = 1
3. z = p²q²
4. p(1 + q) = qz
5. p² + q² = x + y
6. z = px + qy + (1 + p² + q²)^(1/2)
7. zpq = p²(xq + p²) + q²(yp + q²)
Lecture 7
In this lecture, we describe Jacobi's method for solving PDEs of the form
f(x, y, z, u_x, u_y, u_z) = 0,  (1)
where x, y, z are independent variables and u is the dependent variable. Note that the dependent variable u itself does not appear in the PDE (1).
Idea of Jacobi's Method: The fundamental idea of Jacobi's method is to introduce two first-order PDEs involving two arbitrary constants a and b of the following form:
h1 (x, y, z, ux , uy , uz , a) = 0,
(2)
h2 (x, y, z, ux , uy , uz , b) = 0
(3)
such that
∂(f, h1, h2)/∂(u_x, u_y, u_z) ≠ 0,  (4)
and
(i) equations (1), (2) and (3) can be solved for u_x, u_y, u_z;
(ii) the equation
du = u_x dx + u_y dy + u_z dz  (5)
is integrable.
If h1 = 0 and h2 = 0 are compatible with f = 0, then h1 and h2 satisfy
∂(f, h)/∂(x, u_x) + ∂(f, h)/∂(y, u_y) + ∂(f, h)/∂(z, u_z) = 0,  (6)
which, on expanding, becomes
f_{u_x} ∂h/∂x + f_{u_y} ∂h/∂y + f_{u_z} ∂h/∂z − f_x ∂h/∂u_x − f_y ∂h/∂u_y − f_z ∂h/∂u_z = 0.  (7)
Solutions h1 and h2 of (7) can be obtained by finding integrals of the auxiliary equations
dx/f_{u_x} = dy/f_{u_y} = dz/f_{u_z} = du_x/(−f_x) = du_y/(−f_y) = du_z/(−f_z).  (8)
The method just described above can also be applied to solve first-order equations of the form
f(x, y, z, p, q) = 0.  (9)
Next, we shall show how to transform the equation f (x, y, z, p, q) = 0 into the equation
g(x, y, z, ux , uy , uz ) = 0 so that the above procedure can be applied.
If u(x, y, z) = 0 is a relation between x, y and z that defines a solution z of (9) implicitly, then
u_x + u_z z_x = 0 ⟹ u_x + u_z p = 0 ⟹ p = −u_x/u_z,
u_y + u_z z_y = 0 ⟹ u_y + u_z q = 0 ⟹ q = −u_y/u_z.
Substituting
p = −u_x/u_z and q = −u_y/u_z
in (9) we obtain an equation
g(x, y, z, ux , uy , uz ) = 0
(10)
EXAMPLE. Find a complete integral of the PDE xp² + yq² = z.
Step 1: (Transforming the PDE). Substituting p = −u_x/u_z and q = −u_y/u_z, the PDE becomes
x u_x²/u_z² + y u_y²/u_z² = z, i.e., xu_x² + yu_y² − zu_z² = 0.  (11)
Thus,
f(x, y, z, u_x, u_y, u_z) = xu_x² + yu_y² − zu_z² = 0.
Step 2: Solving PDE (11) by Jacobi's method.
Step 2(a): Compute f_{u_x}, f_{u_y}, f_{u_z}, f_x, f_y, f_z:
f_{u_x} = 2xu_x, f_{u_y} = 2yu_y, f_{u_z} = −2zu_z, f_x = u_x², f_y = u_y², f_z = −u_z².
Step 2(b): Writing the auxiliary equations and solving for u_x, u_y and u_z.
The auxiliary equations are given by
dx/f_{u_x} = dy/f_{u_y} = dz/f_{u_z} = du_x/(−f_x) = du_y/(−f_y) = du_z/(−f_z),
that is,
dx/(2xu_x) = dy/(2yu_y) = dz/(−2zu_z) = du_x/(−u_x²) = du_y/(−u_y²) = du_z/u_z².
Now,
dx/(2xu_x) = du_x/(−u_x²) ⟹ −u_x² dx = 2xu_x du_x ⟹ dx/x = −2 du_x/u_x
⟹ log x = −2 log(u_x) + log a ⟹ xu_x² = a ⟹ u_x = (a/x)^(1/2).
Similarly, we get
yu_y² = b ⟹ u_y = (b/y)^(1/2),
and then f = 0 gives zu_z² = xu_x² + yu_y² = a + b, so that
u_z = [(a + b)/z]^(1/2).
Step 2(c): Writing du = u_x dx + u_y dy + u_z dz and finding its solution:
du = (a/x)^(1/2) dx + (b/y)^(1/2) dy + [(a + b)/z]^(1/2) dz,
and on integrating we obtain
u = 2(ax)^(1/2) + 2(by)^(1/2) + 2[(a + b)z]^(1/2).  (12)
Step 3: (Finding the solution of the given PDE from the solution of PDE (11)).
Writing u = c in (12) and solving for z, we get a complete integral of the given PDE in the form
z = [(ax/(a + b))^(1/2) + (by/(a + b))^(1/2)]².
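As a hedged check (a SymPy sketch in our notation, not part of the lecture), the complete integral z = [(ax/(a + b))^(1/2) + (by/(a + b))^(1/2)]² satisfies the original PDE xz_x² + yz_y² = z:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)

# Complete integral obtained via Jacobi's method for x*p^2 + y*q^2 = z
z = (sp.sqrt(a*x/(a + b)) + sp.sqrt(b*y/(a + b)))**2

p = sp.diff(z, x)  # p = z_x
q = sp.diff(z, y)  # q = z_y

# PDE residual must vanish identically in x, y, a, b
assert sp.simplify(x*p**2 + y*q**2 - z) == 0
```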
Practice Problems
Find the complete integral of the following PDEs:
1. (p2 + q 2 )y = qz
2. z 2 = pqxy
3. 2(y + zq) = q(xp + yq)
Lecture 1
Classification of PDEs is an important concept because the general theory and methods of solution usually apply only to a given class of equations. Let us first discuss the classification of PDEs involving two independent variables.
Consider the following general second-order linear PDE in two independent variables:
A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + Fu + G = 0,  (1)
which may also be written as
Au_xx + Bu_xy + Cu_yy + Du_x + Eu_y + Fu + G = 0,  (2)
where
u_x = ∂u/∂x, u_y = ∂u/∂y, u_xx = ∂²u/∂x², u_xy = ∂²u/∂x∂y, u_yy = ∂²u/∂y².
Assume that A, B and C are continuous functions of x and y possessing continuous partial derivatives of as high an order as necessary. The classification of (2) is motivated by the classification of the general second-degree curve
ax² + bxy + cy² + dx + ey + f = 0.  (3)
We know that the nature of the curve (3) is decided by its principal part ax² + bxy + cy², i.e., the terms of highest degree. Depending on the sign of the discriminant b² − 4ac, we classify the curve as follows:
If b² − 4ac > 0, then the curve is a hyperbola.
If b² − 4ac = 0, then the curve is a parabola.
If b² − 4ac < 0, then the curve is an ellipse.
With a suitable transformation, we can transform (3) into one of the following normal forms:
x²/a² − y²/b² = 1 (hyperbola),
x² = y (parabola),
x²/a² + y²/b² = 1 (ellipse).
Linear PDE with constant coefficients. Let us first consider the following general linear second-order PDE in two independent variables x and y with constant coefficients:
Au_xx + Bu_xy + Cu_yy + Du_x + Eu_y + Fu + G = 0,  (4)
where the coefficients A, B, C, D, E, F and G are constants. The nature of the equation (4) is determined by the principal part containing the highest partial derivatives, i.e.,
Lu ≡ Au_xx + Bu_xy + Cu_yy.  (5)
For classification, we attach to (5) the symbol P(x, y) = Ax² + Bxy + Cy² (as if we had replaced ∂/∂x by x and ∂/∂y by y), and classify (4) as follows:
B² − 4AC > 0 ⟹ Eq. (4) is of hyperbolic type,  (6)
B² − 4AC = 0 ⟹ Eq. (4) is of parabolic type,  (7)
B² − 4AC < 0 ⟹ Eq. (4) is of elliptic type.  (8)
Linear PDE with variable coefficients. The above classification of (4) is still valid if the coefficients A, B, C, D, E and F depend on x and y. In this case, the conditions (6), (7) and (8) should be satisfied at each point (x, y) in the region where we want to describe the nature of the equation; e.g., for ellipticity we need to verify
B²(x, y) − 4A(x, y)C(x, y) < 0
for each (x, y) in the region of interest. Thus, we classify linear PDEs with variable coefficients as follows:
B²(x, y) − 4A(x, y)C(x, y) > 0 at (x, y) ⟹ Eq. (4) is hyperbolic at (x, y),  (9)
B²(x, y) − 4A(x, y)C(x, y) = 0 at (x, y) ⟹ Eq. (4) is parabolic at (x, y),  (10)
B²(x, y) − 4A(x, y)C(x, y) < 0 at (x, y) ⟹ Eq. (4) is elliptic at (x, y).  (11)
Note: Whether Eq. (4) is hyperbolic, parabolic, or elliptic depends only on the coefficients of the second derivatives. It has nothing to do with the first-derivative terms, the term in u, or the nonhomogeneous term.
EXAMPLE 1.
1. u_xx + u_yy = 0 (Laplace equation). Here, A = 1, B = 0, C = 1 and B² − 4AC = −4 < 0. Therefore, it is of elliptic type.
4. u_xx + xu_yy = 0. Here B² − 4AC = −4x, so the equation is hyperbolic for x < 0 and elliptic for x > 0. This example shows that equations with variable coefficients can change type in different regions of the domain.
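The pointwise discriminant test can be sketched in a few lines of code (the helper name `classify` is ours, not from the lecture):

```python
import sympy as sp

x, y = sp.symbols('x y')

def classify(A, B, C, point):
    """Classify A*u_xx + B*u_xy + C*u_yy + ... at a point via the sign of B^2 - 4AC."""
    disc = (B**2 - 4*A*C).subs({x: point[0], y: point[1]})
    if disc > 0:
        return 'hyperbolic'
    if disc == 0:
        return 'parabolic'
    return 'elliptic'

one, zero = sp.Integer(1), sp.Integer(0)

# Laplace equation u_xx + u_yy = 0: elliptic everywhere
assert classify(one, zero, one, (0, 0)) == 'elliptic'

# u_xx + x*u_yy = 0: type changes with the sign of x
assert classify(one, zero, x, (-1, 0)) == 'hyperbolic'
assert classify(one, zero, x, (2, 0)) == 'elliptic'
assert classify(one, zero, x, (0, 0)) == 'parabolic'
```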
Consider now the general second-order PDE in n independent variables:
Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij ∂²u/(∂x_i ∂x_j) + Σ_{i=1}^{n} b_i ∂u/∂x_i + cu + d = 0,  (12)
where the coefficients a_ij, b_i, c and d are functions of x = (x1, x2, . . . , xn) alone and u = u(x1, x2, . . . , xn).
Its principal part is
L ≡ Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij ∂²/(∂x_i ∂x_j).  (13)
Note that ∂²u/(∂x_i ∂x_j) = ∂²u/(∂x_j ∂x_i), so we may assume that the matrix A = (a_ij) is symmetric; otherwise, replace a_ij by (a_ij + a_ji)/2, which leaves (13) unchanged:
L ≡ Σ_{i=1}^{n} Σ_{j=1}^{n} ā_ij ∂²/(∂x_i ∂x_j).  (14)
As in the two-variable case, we attach to L the symbol (as if replacing ∂/∂x_i by x_i)
P(x1, x2, . . . , xn) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j.  (15)
Since A is a real-valued symmetric (a_ij = a_ji) matrix, it is diagonalizable with real eigenvalues λ1, λ2, . . . , λn (counted with their multiplicities). In other words, there exists a corresponding orthonormal set of n eigenvectors, say v1, v2, . . . , vn; with R = [v1 v2 · · · vn], we have
RᵀAR = diag(λ1, λ2, . . . , λn) = D.  (16)
We now classify (12) depending on the signs of the eigenvalues of A:
(a) If λ_i > 0 for all i, or λ_i < 0 for all i, then (12) is of elliptic type.
(b) If one or more of the λ_i = 0, then (12) is of parabolic type.
(c) If exactly one of the λ_i is negative (or positive) and all the remaining ones have the opposite sign, then (12) is said to be of hyperbolic type.
EXAMPLE 2.
1. ∇²u = u_xx + u_yy + u_zz = 0. In this case, λ_i = 1 > 0 for all i = 1, 2, 3. It is an elliptic PDE since all eigenvalues are of one sign.
2. It is an easy exercise to check that u_t − ∇²u = 0 is of parabolic type.
3. The equation u_tt − ∇²u = 0 is of hyperbolic type.
EXAMPLE 3. Classify u_{x1 x1} + 2(1 + cx2)u_{x2 x3} = 0.
To symmetrize, write it as
u_{x1 x1} + (1 + cx2)u_{x2 x3} + (1 + cx2)u_{x3 x2} = 0,
so that the symbol is xᵀAx = 0, where
A = [1, 0, 0; 0, 0, 1 + cx2; 0, 1 + cx2, 0],  x = (x1, x2, x3)ᵀ.
The eigenvalues of A are
λ1 = 1, λ2 = 1 + cx2, λ3 = −(1 + cx2),
with an orthonormal set of eigenvectors forming
R = [1, 0, 0; 0, 1/√2, 1/√2; 0, 1/√2, −1/√2].
Note that R = Rᵀ = R⁻¹. So
RᵀAR = diag(1, 1 + cx2, −(1 + cx2)) = D.
Hence the equation is of hyperbolic type at points where 1 + cx2 ≠ 0 and of parabolic type where 1 + cx2 = 0.
Lecture 2
By a suitable change of the independent variables, we shall show that any equation of the form
Au_xx + Bu_xy + Cu_yy + Du_x + Eu_y + Fu + G = 0  (1)
can be reduced to a canonical form. Introduce the new variables
ξ = ξ(x, y), η = η(x, y),  (2)
with nonvanishing Jacobian
ξ_x η_y − ξ_y η_x ≠ 0.  (3)
Under this transformation, (1) becomes
Āu_ξξ + B̄u_ξη + C̄u_ηη + D̄u_ξ + Ēu_η + F̄u + Ḡ = 0,  (4)
where, in particular, Ā = Aξ_x² + Bξ_xξ_y + Cξ_y², B̄ = 2Aξ_xη_x + B(ξ_xη_y + ξ_yη_x) + 2Cξ_yη_y, and C̄ = Aη_x² + Bη_xη_y + Cη_y². A direct computation shows that
B̄² − 4ĀC̄ = (B² − 4AC)(ξ_xη_y − ξ_yη_x)².  (5)
The equation (5) shows that the transformation of the independent variables does not modify the type of the PDE.
We shall determine ξ and η so that (4) takes the simplest possible form. We now consider the following cases:
Case I: B² − 4AC > 0 (Hyperbolic type)
Case II: B² − 4AC = 0 (Parabolic type)
Case III: B² − 4AC < 0 (Elliptic type)
Case I: Note that B² − 4AC > 0 implies that the equation Aλ² + Bλ + C = 0 has two real and distinct roots, say λ1 and λ2. Now, choose ξ and η such that
∂ξ/∂x = λ1 ∂ξ/∂y and ∂η/∂x = λ2 ∂η/∂y.  (6)
The functions ξ and η can be obtained by solving the associated characteristic ODEs
dy/dx = −λ_i, i = 1, 2.  (7)
Let the family of curves determined by the solutions of (7) for i = 1 and i = 2 be
f1(x, y) = c1 and f2(x, y) = c2,  (8)
respectively. These families of curves are called the characteristic curves of the PDE. With this choice of ξ and η we have Ā = C̄ = 0; dividing (4) throughout by B̄ (B̄ ≠ 0, since B̄² − 4ĀC̄ > 0) and using (7)-(8), we obtain the canonical form
u_ξη = Φ(ξ, η, u, u_ξ, u_η).  (9)
For instance, in Example 1 the transformed equation reads
4x² u_ξη = u_ξ − u_η, and since ξ − η = x², equivalently
4(ξ − η) u_ξη = u_ξ − u_η, i.e., u_ξη = (u_ξ − u_η)/(4(ξ − η)),  (10)
which is the canonical form.
Case II: When B² − 4AC = 0, the equation Aλ² + Bλ + C = 0 has a repeated root λ, giving a single family of characteristics dy/dx = −λ. For the equation at hand, the characteristic equation dy/dx − 1 = 0 gives x − y = c1; take ξ = x − y and choose η = x + y. Then proceed as in Example 1 to obtain u_ηη = 0, which is the canonical form of the given PDE.
Case III: When B² − 4AC < 0, the roots of Aλ² + Bλ + C = 0 are complex. Following the procedure as in Case I, we find that
u_ξη = Φ1(ξ, η, u, u_ξ, u_η).  (11)
The variables ξ, η are in fact complex conjugates. To get a real canonical form, use the transformation
α = (ξ + η)/2, β = (ξ − η)/(2i)  (12)
to obtain
u_ξη = (1/4)(u_αα + u_ββ),
which follows from the following calculation:
u_ξ = (1/2)u_α + (1/(2i))u_β, u_η = (1/2)u_α − (1/(2i))u_β,
u_ξη = [(1/2) ∂/∂α + (1/(2i)) ∂/∂β][(1/2)u_α − (1/(2i))u_β] = (1/4)(u_αα + u_ββ).
The desired canonical form is
u_αα + u_ββ = Ψ(α, β, u(α, β), u_α(α, β), u_β(α, β)).  (13)
Practice Problems
1. Reduce the following equations to canonical/normal form:
(A) 2u_xx − 4u_xy + 2u_yy + 3u = 0.
(B) u_xx + yu_yy = 0.
(C) u_xy + u_x + u_y = 2x.
2. Show that the equation
u_xx − 6u_xy + 12u_yy + 4u_x − u = sin(xy)
is of elliptic type and obtain its canonical form.
3. Determine the regions where Tricomi's equation u_xx + xu_yy = 0 is of elliptic, parabolic, and hyperbolic type. Obtain its characteristics and its canonical form in the hyperbolic region.
Lecture 3
A very important fact concerning linear PDEs is the superposition principle, which is
stated below.
A linear PDE can be written in the form
L[u] = f,
(1)
where L[u] denotes a linear combination of u and some of its partial derivatives, with
coecients which are given functions of the independent variables.
DEFINITION 1. (Superposition principle) Let u1 be a solution of the linear PDE
L[u] = f1
and let u2 be a solution of the linear PDE
L[u] = f2 .
Then, for any constants c1 and c2, c1 u1 + c2 u2 is a solution of
L[u] = c1 f1 + c2 f2 .
That is,
L[c1 u1 + c2 u2 ] = c1 f1 + c2 f2 .
(2)
In particular, when f1 = 0 and f2 = 0, (2) implies that if u1 and u2 are solutions of the
homogeneous linear PDE L[u] = 0, then c1 u1 + c2 u2 will also be a solution of L[u] = 0.
EXAMPLE 2. Observe that u1(x, y) = x³ is a solution of the linear PDE u_xx − u_y = 6x, and u2(x, y) = y² is a solution of u_xx − u_y = −2y. Then, using the superposition principle, it is easy to verify that 3u1(x, y) − 4u2(x, y) is a solution of u_xx − u_y = 18x + 8y.
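This instance of the superposition principle is easy to check symbolically (a SymPy sketch, not part of the original text):

```python
import sympy as sp

x, y = sp.symbols('x y')

def L(u):
    """The linear operator L[u] = u_xx - u_y."""
    return sp.diff(u, x, 2) - sp.diff(u, y)

u1, u2 = x**3, y**2

assert L(u1) == 6*x                              # u1 solves L[u] = 6x
assert L(u2) == -2*y                             # u2 solves L[u] = -2y
assert sp.expand(L(3*u1 - 4*u2)) == 18*x + 8*y   # superposition: 3*(6x) + (-4)*(-2y)
```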
REMARK 3. Note that the principle of superposition is not valid for nonlinear partial differential equations. This failure makes it difficult to form families of new solutions from an original pair of solutions.
EXAMPLE 4. Consider the nonlinear first-order PDE u_x u_y − u(u_x + u_y) + u² = 0. Note that e^x and e^y are two solutions of this equation. However, c1 e^x + c2 e^y will not be a solution, unless c1 = 0 or c2 = 0.
Solution. Define D[u] := (u_x − u)(u_y − u). For any u, v ∈ C¹, we have
D[u + v] = (u_x + v_x − u − v)(u_y + v_y − u − v)
= D[u] + D[v] + (u_y − u)(v_x − v) + (u_x − u)(v_y − v).
The computation shows that D[u + v] ≠ D[u] + D[v] in general. Taking u = c1 e^x and v = c2 e^y, an easy computation shows that
D[c1 e^x + c2 e^y] = D[c1 e^x] + D[c2 e^y] + c1 c2 e^(x+y) = c1 c2 e^(x+y).
Thus, D[c1 e^x + c2 e^y] = 0 only if c1 = 0 or c2 = 0.
Well-posed problems
A set of conditions was proposed by Hadamard (cf. [12]), who listed three requirements that must be met when formulating an initial and/or boundary value problem. A problem is said to be well posed or correctly posed if the following three conditions are satisfied:
1. The solution must exist.
2. The solution should be unique.
3. The solution should depend continuously on the initial and/or boundary data.
If a problem fails to meet these requirements, it is incorrectly posed.
The conditions (1)-(2) require that the equation plus the data for the problem must
be such that one and only one solution exists. The third condition states that a small
variation of the data for the problem should cause small variation in the solution. As data
are generally obtained experimentally and may be subject to numerical approximations,
we require that the solution be stable under small variations in initial and/or boundary
values. That is, we cannot allow large variations to occur in the solution if the data are
altered slightly.
A simple example of an ill-posed problem is given below.
EXAMPLE 5. Consider the Cauchy problem for Laplace's equation in y ≥ 0:
∂²u/∂x² + ∂²u/∂y² = 0,  (3)
u(x, 0) = 0,  (4)
u_y(x, 0) = (1/n) sin nx,  (5)
where n is a positive integer. This problem is not well posed. Its solution is
u(x, y) = (1/n²) sin(nx) sinh(ny),
so that for large n the Cauchy data u(x, 0) and u_y(x, 0) can be made arbitrarily small in magnitude. However, the solution u(x, y) oscillates with an amplitude that grows exponentially like e^(ny) as n → ∞. Thus, arbitrarily small data can lead to arbitrarily large variations in the solution, and hence the solution is unstable. This violates condition (3), i.e., the continuous dependence of the solution on the data.
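The claimed exact solution can be checked symbolically (a SymPy sketch in our notation, not part of the lecture): u = (1/n²) sin(nx) sinh(ny) satisfies Laplace's equation together with the Cauchy data (4)-(5):

```python
import sympy as sp

x, y, n = sp.symbols('x y n', positive=True)

# Exact solution of the Cauchy problem (3)-(5)
u = sp.sin(n*x) * sp.sinh(n*y) / n**2

# Laplace's equation u_xx + u_yy = 0
assert sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)) == 0

# Cauchy data: u(x, 0) = 0 and u_y(x, 0) = sin(n*x)/n
assert u.subs(y, 0) == 0
assert sp.simplify(sp.diff(u, y).subs(y, 0) - sp.sin(n*x)/n) == 0
```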
Boundary value problems are not well posed for hyperbolic and parabolic equations.
This follows because these are, in general, equations whose solutions evolve in time and
their behavior at later times is predicted by their previous states.
EXAMPLE 6. Consider the hyperbolic equation
uxy = 0 in 0 < x < 1, 0 < y < 1
with the boundary conditions
u(x, 0) = f1(x), u(x, 1) = f2(x) for 0 ≤ x ≤ 1,
u(0, y) = g1(y), u(1, y) = g2(y) for 0 ≤ y ≤ 1.
We shall show that this problem has no solution if the data are prescribed arbitrarily.
Since u_xy = 0 implies that u_x(x, y) is independent of y, we have
u_x(x, 0) = u_x(x, 1).
In view of the given BCs, we have
u_x(x, 0) = f1′(x) and u_x(x, 1) = f2′(x).
Thus, unless f1(x) and f2(x) are prescribed such that f1′(x) = f2′(x), the BVP cannot be solved. Therefore, it is incorrectly posed.
Method of factorization
No general method is available for obtaining the general solution of a second-order PDE. Sometimes, however, a second-order PDE can be factorized into two first-order equations. The equation

yuxx + (x + y)uxy + xuyy = 0

is an example of such an equation. It is often much easier to factorize an equation when it is in its canonical form, but equations with constant coefficients can often be factorized directly. The method of factorization can be a useful method of solution for hyperbolic and parabolic equations.
EXAMPLE 7. The equation

uxx − uyy + 4(ux + u) = 0

can be written as

(∂/∂x + ∂/∂y + 2)(∂/∂x − ∂/∂y + 2) u = 0.

It is equivalent to the pair of first-order equations

ux − uy + 2u = v,
vx + vy + 2v = 0.
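The factorization can be sanity-checked on exponentials u = e^{ax+by}, for which ∂/∂x and ∂/∂y act as multiplication by a and b, so each operator reduces to a polynomial in (a, b). A small sketch (plain Python; the sample values of a and b are arbitrary):

```python
# For u = exp(a*x + b*y), the operator u_xx - u_yy + 4*(u_x + u) multiplies u by
# a**2 - b**2 + 4*a + 4, while the factored operator
# (d/dx + d/dy + 2)(d/dx - d/dy + 2) multiplies u by (a + b + 2) * (a - b + 2).
def full_symbol(a, b):
    return a**2 - b**2 + 4*a + 4

def factored_symbol(a, b):
    return (a + b + 2) * (a - b + 2)

for a, b in [(1.0, 2.0), (-0.5, 3.0), (2.5, -1.5)]:
    assert abs(full_symbol(a, b) - factored_symbol(a, b)) < 1e-12
print("factorization verified on exponentials")
```

The two polynomials agree identically, which is precisely the operator identity above.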
EXAMPLE 8. The hyperbolic equation

ac uxy + aux + cuy + u = 0

can be written as

(a ∂/∂x + 1)(c ∂/∂y + 1) u = 0.

It is equivalent to

cuy + u = v,
avx + v = 0.
Note: Unlike the case when the coefficients are constant, the differential operators need not commute.
Practice Problems
1. If u1(x, y) = x² solves uxx + uyy = 2 and u2(x, y) = cx³ + dy³ solves uxx + uyy = 6cx + 6dy for real constants c and d, then find a solution of uxx + uyy = ax + by + c for given real constants a, b and c.
2. Let u1(x, y) be the solution to the Cauchy problem

uxx + uyy = 0, u(x, 0) = f(x), uy(x, 0) = g(x).

Show that the boundary value problem with the data

u(0, y) = u(l, y) = 0, 0 ≤ y ≤ T,
u(x, 0) = u(x, T) = 0, 0 ≤ x ≤ l,

is not well-posed.
Lecture 1
In this lecture, we shall discuss a class of expansions which is particularly useful in the study of solutions of PDEs. To begin with, we review some properties of functions that are particularly relevant to this study.
DEFINITION 1. (Periodic function) A function f is periodic of period L if f(x + L) = f(x) for all x in the domain of f.
The smallest positive value of L is called the fundamental period. The trigonometric functions sin x and cos x are examples of periodic functions with fundamental period 2π, and tan x is periodic with fundamental period π. A constant function is a periodic function with arbitrary period L.
It is easy to verify that if the functions f1 , . . . , fn are periodic of period L, then any
linear combination
c1 f1 (x) + + cn fn (x)
is also periodic of period L. Furthermore, if the infinite series

(1/2) a0 + Σ_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)],

consisting of 2L-periodic functions, converges for all x, then the function to which it converges will be periodic of period 2L.
There are two symmetry properties of functions that will be useful in the study of
Fourier series.
DEFINITION 2. (Even function and Odd function) Let f : [−L, L] → R. Then f(x) is called even if f(−x) = f(x) for all x ∈ [−L, L]; f(x) is called odd if f(−x) = −f(x) for all x ∈ [−L, L].
Note: The graph of an even function is symmetric with respect to the y-axis. Note that if (x, f(x)) is on the graph of an even function f(x), then (−x, f(x)) will also be on the graph (i.e., the graph is invariant under reflection in the y-axis), see Figure 4.1.
The graph of an odd function is symmetric with respect to the origin. If f(x) is odd, then (x, f(x)) is on the graph if and only if (−x, −f(x)) is on the graph. That is, the graph is invariant under reflection through the origin, see Figure 4.2.
EXAMPLE 3. The functions f(x) = x^{2n}, n = 0, 1, 2, . . . , are even functions, whereas f(x) = x^{2n+1}, n = 0, 1, 2, . . . , are odd functions. The functions sin x and tan x are odd functions, and cos x is an even function.
If f is even, then ∫_{−L}^{L} f(x) dx = 2 ∫_0^{L} f(x) dx; if f is odd, then ∫_{−L}^{L} f(x) dx = 0.
The following integrals will be useful in computing Fourier coefficients. First,

∫_{−L}^{L} sin(nπx/L) dx = 0,  ∫_{−L}^{L} cos(nπx/L) dx = 0,  n = 1, 2, . . . .

For m, n = 1, 2, . . . , we have

∫_{−L}^{L} sin(nπx/L) cos(mπx/L) dx = 0,  (1)

∫_{−L}^{L} sin(nπx/L) sin(mπx/L) dx = { 0, m ≠ n;  L, m = n },  (2)

∫_{−L}^{L} cos(nπx/L) cos(mπx/L) dx = { 0, m ≠ n;  L, m = n }.  (3)
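These orthogonality relations are easy to confirm numerically. A minimal sketch using the composite trapezoidal rule (plain Python; L = 2, the mode indices, and the grid size are arbitrary choices):

```python
import math

def trapezoid(f, a, b, n=4000):
    # Composite trapezoidal rule on n subintervals.
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

L = 2.0
s = lambda k: (lambda x: math.sin(k * math.pi * x / L))
c = lambda k: (lambda x: math.cos(k * math.pi * x / L))

mixed   = trapezoid(lambda x: s(2)(x) * c(3)(x), -L, L)   # sin-cos: always 0
ss_diff = trapezoid(lambda x: s(1)(x) * s(2)(x), -L, L)   # sin-sin, m != n: 0
ss_same = trapezoid(lambda x: s(2)(x) ** 2, -L, L)        # sin-sin, m == n: L
print(mixed, ss_diff, ss_same)
```

Since every integrand is smooth and 2L-periodic, the trapezoidal rule is extremely accurate here, and the three values come out as (approximately) 0, 0 and L.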
More generally, a family of functions {fn} is said to be orthogonal on [a, b] with respect to a weight function w(x) if

∫_a^b fm(x) fn(x) w(x) dx = 0,  whenever m ≠ n.  (4)

The norm of f with respect to w is [∫_a^b f(x)² w(x) dx]^{1/2}, and the family is orthonormal if

∫_a^b fm(x) fn(x) w(x) dx = { 0, m ≠ n;  1, m = n }.  (5)
The series

(1/2) a0 + Σ_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)],  (6)

where

an = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, n = 0, 1, 2, 3, . . . ,

and

bn = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx, n = 1, 2, 3, . . . ,

is called the Fourier series of f(x). This series is named after the outstanding French mathematical physicist Joseph Fourier (1768-1830).
Suppose that f(x) is of the form

f(x) = (1/2) a0 + Σ_{n=1}^N [an cos(nπx/L) + bn sin(nπx/L)].  (7)

Multiplying (7) by cos(nπx/L) or sin(nπx/L), integrating over [−L, L] and using the relations (1)-(3), we obtain

an = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, n = 0, 1, . . . ,  (8)

bn = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx, n = 1, 2, . . . .  (9)
REMARK 5. Not every function f(x) has a representation of the form (7). The right side of (7) is smooth, i.e., C^∞ (infinitely differentiable), but many functions have graphs with jumps or corners. We will encounter functions f(x) for which the integrals (8) and (9) are not zero for infinitely many values of n. In such cases, f(x) cannot be represented as a finite sum as in (7). Also, even if N → ∞, the sum (7) might not converge to f(x) unless some additional assumptions are made (cf. [1]).
DEFINITION 6. (Fourier series) Let f : [−L, L] → R be such that the integrals

an = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, n = 0, 1, 2, 3, . . . ,  (10)

and

bn = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx, n = 1, 2, 3, . . . ,  (11)

exist and are finite. Then the Fourier series (FS) of f on [−L, L] is the expression

f(x) ∼ (1/2) a0 + Σ_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)].  (12)
EXAMPLE 7. Find the FS of the function

f(x) = { −1, −π < x < 0;  1, 0 < x < π. }
Solution. Here L = π. Note that f(x) is an odd function. Since the product of an odd function and an even function is odd, f(x) cos nx is also an odd function. Hence

an = (1/π) ∫_{−π}^{π} f(x) cos nx dx = 0,  n = 0, 1, 2, . . . .

Since f(x) sin nx is an even function (as the product of two odd functions), we have

bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (2/π) ∫_0^{π} sin nx dx
   = (2/π) [−cos nx / n]_0^{π} = 2[1 − (−1)^n]/(nπ),  n = 1, 2, 3, . . . ,
   = { 0, n even;  4/(nπ), n odd. }

Thus

f(x) ∼ Σ_{n=1}^∞ (2[1 − (−1)^n]/(nπ)) sin nx = (4/π)[sin x + (1/3) sin 3x + (1/5) sin 5x + · · ·].
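The partial sums of this series can be computed directly; note how the series returns the average value 0 at the jump x = 0. A quick sketch (plain Python; the truncation level is an arbitrary choice):

```python
import math

def partial_sum(x, N):
    # Partial sum of (4/pi) * [sin x + sin(3x)/3 + sin(5x)/5 + ...] up to n = N.
    return (4 / math.pi) * sum(math.sin(n * x) / n for n in range(1, N + 1, 2))

print(partial_sum(math.pi / 2, 201))   # approaches f(pi/2) = 1
print(partial_sum(0.0, 201))           # exactly 0, the average of -1 and +1 at the jump
```

Away from the jump the sums converge to ±1; at x = 0 every term vanishes, so the series converges to the midpoint of the two one-sided limits, in line with the convergence theorems of the next lecture.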
REMARK 8.
If f is any odd function, then its FS consists only of sine terms (see Example 7).
If f is an even function, then its FS consists only of cosine terms (including cos 0x = 1).
EXAMPLE 9. Find the FS of the function f(x) = x for −L ≤ x ≤ L.
Solution. Since x cos(nπx/L) is an odd function, an = 0 for n = 1, 2, . . . . For n = 0, we get

a0 = (1/L) ∫_{−L}^{L} x dx = (1/L) [x²/2]_{−L}^{L} = 0.

Since x sin(nπx/L) is an even function, we have

bn = (1/L) ∫_{−L}^{L} x sin(nπx/L) dx = (2/L) ∫_0^{L} x sin(nπx/L) dx
   = (2/L) { [−(xL/(nπ)) cos(nπx/L)]_0^{L} + (L/(nπ)) ∫_0^{L} cos(nπx/L) dx }
   = −(2L/(nπ)) cos(nπ) + (2L/(nπ)²) [sin(nπx/L)]_0^{L}
   = (2L/(nπ)) (−1)^{n+1},  n = 1, 2, 3, . . . ,

where we have used the fact cos(nπ) = (−1)^n. Thus, the FS of f(x) is given by

f(x) ∼ Σ_{n=1}^∞ (2L/(nπ)) (−1)^{n+1} sin(nπx/L) = (2L/π) Σ_{n=1}^∞ ((−1)^{n+1}/n) sin(nπx/L).
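The closed form bn = (2L/(nπ))(−1)^{n+1} can be confirmed by numerical integration of the defining formula (11). A sketch (plain Python; L = 1.5 is an arbitrary test value):

```python
import math

def bn_numeric(n, L, steps=20000):
    # b_n = (1/L) * integral from -L to L of x*sin(n*pi*x/L) dx  (midpoint rule).
    h = 2 * L / steps
    total = 0.0
    for i in range(steps):
        x = -L + (i + 0.5) * h
        total += x * math.sin(n * math.pi * x / L)
    return total * h / L

def bn_exact(n, L):
    return (2 * L / (n * math.pi)) * (-1) ** (n + 1)

L = 1.5
for n in (1, 2, 5):
    print(n, bn_numeric(n, L), bn_exact(n, L))   # the two columns agree
```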
Practice Problems
1. Find the FS of the following functions:
(a) f(x) = { 0, −2 ≤ x ≤ 1;  1, 1 < x ≤ 2 };  (b) f(x) = { x², −1 ≤ x ≤ 0;  1 + x, 0 < x ≤ 1 };
(c) f(x) = |sin x|, −π < x < π.
2. If the 2π-periodic even function is given by f(x) = |x| for −π < x < π, show that

f(x) = π/2 − (4/π) Σ_{n=1}^∞ cos((2n − 1)x)/(2n − 1)².
3. Show that

1 − 1/3 + 1/5 − 1/7 + · · · = π/4.
Lecture 2
In this lecture, we shall discuss the convergence of FS (without proofs) and the properties of termwise differentiation and integration.
In Example 9 (cf. Lecture 1 of Module 4), we notice that the FS of f at x = 0 equals 0 = f(0). However, at x = L the FS equals 0 ≠ L = f(L). Thus, the FS of f(x) does not equal f(x) at x = ±L.
Under various assumptions on a function f(x), one can prove that the Fourier series of f(x) does converge to f(x) for all x in [−L, L]. We now have the following results:

THEOREM 1. (The convergence of FS) Let f ∈ C²([−L, L]) (a twice continuously differentiable function on the interval [−L, L]) be such that f(−L) = f(L) and f′(−L) = f′(L). Let an and bn be the Fourier coefficients of f(x), and let

M = max_{x∈[−L,L]} |f″(x)|.

Then, for every N,

| f(x) − ( (1/2) a0 + Σ_{n=1}^N [an cos(nπx/L) + bn sin(nπx/L)] ) | ≤ 4L²M/(π²N),  (1)

and in particular

f(x) = (1/2) a0 + Σ_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)].
The inequality (1) tells us how many terms of the Fourier series of f(x) will suffice in order to approximate f(x) to within a certain error.
The conditions f(−L) = f(L) and f′(−L) = f′(L) ensure that if the interval [−L, L] is bent into a circle by joining x = −L to x = L, then the graph of f(x) above this circle is continuous and has a well-defined tangent line (or derivative) at the juncture where L and −L are identified.
Periodic extensions of functions: We begin with some facts concerning periodic functions and periodic extensions of functions.
DEFINITION 3. Let f : [−L, L] → R be such that f(−L) = f(L). Then the periodic extension of f(x) is the unique periodic function fe(x) of period 2L such that fe(x) = f(x) for −L ≤ x ≤ L.
In order for the periodic extension to exist, we must have f(−L) = f(L). Thus, note that the function f(x) = x, defined for −L ≤ x ≤ L, does not have a periodic extension. To remedy this situation, we redefine

f(±L) = (1/2){f(L−) + f((−L)+)},

where

f(L−) := lim_{x→L−} f(x) = lim_{h→0+} f(L − h)

and

f((−L)+) := lim_{x→(−L)+} f(x) = lim_{h→0+} f(−L + h).
We now state a sequence of convergence results without proofs. For a proof, readers
may refer to [1, 9].
THEOREM 4. (Pointwise convergence of FS) Let f(x) ∈ C¹([−L, L]) and assume that f(−L) = f(L) and f′(−L) = f′(L). Then the FS of f(x) converges to f(x) for every x ∈ [−L, L]. That is, if the N-th partial sum of the Fourier series of f(x) is denoted by

SN(x) = (1/2) a0 + Σ_{n=1}^N [an cos(nπx/L) + bn sin(nπx/L)],

where

an = (1/L) ∫_{−L}^{L} f(y) cos(nπy/L) dy and bn = (1/L) ∫_{−L}^{L} f(y) sin(nπy/L) dy,

then

lim_{N→∞} SN(x) = f(x),  x ∈ [−L, L].
REMARK 5. Note that when f and f′ are piecewise continuous on [−L, L], the Fourier series converges to f(x) at the points where f is continuous, and to the average of the left- and right-hand limits at the points where f is discontinuous.
THEOREM 6. (Uniform convergence of FS) Let f(x) ∈ C²([−L, L]) be such that f(−L) = f(L) and f′(−L) = f′(L). Then the FS of f(x) converges uniformly to f(x). That is, the sequence of partial sums S1(x), S2(x), . . . of the Fourier series of f(x) converges uniformly to f(x). Indeed,

|f(x) − SN(x)| ≤ 4L²M/(π²N),  x ∈ [−L, L],

where

M = max_{−L≤x≤L} |f″(x)|.
For example, the function

f(x) = { x, 0 < x < 1;  2, 1 < x < 2;  (x − 2)², 2 ≤ x ≤ 3 }

is piecewise continuous on [0, 3].
DEFINITION 9. f(x) is piecewise C¹ on [a, b] if f(x) and f′(x) are piecewise continuous on [a, b].
Note: If f(x) is piecewise C¹ on [a, b], then f(x) is automatically piecewise continuous on [a, b].
The following definition is convenient in the formulation of the convergence theorem for the FS of piecewise C¹ functions.
DEFINITION 10. Let f : [−L, L] → R be a piecewise C¹ function. Define the adjusted function f̂(x) as follows:

f̂(x) = { (1/2)[f(x−) + f(x+)], −L < x < L;  (1/2)[f(L−) + f((−L)+)], x = ±L. }  (2)

Note: The above definition tells us that f̂(x) coincides with f(x) at all points of (−L, L) where f(x) is continuous, while f̂(x) is the average of the left-hand and right-hand limits of f(x) at points of discontinuity in (−L, L). The value of f̂(x) at x = ±L can be thought of as an average of left-hand and right-hand limits if we bend the interval [−L, L] into a circle.
THEOREM 11. Let f : [−L, L] → R be a piecewise C¹ function and let f̂(x) be the adjusted function defined in (2). Then the FS of f(x) converges to f̂(x) for all x ∈ [−L, L]. In fact, if f̃(x) denotes the 2L-periodic extension of f̂(x), then

FS of f(x) = f̃(x),  −∞ < x < ∞.  (3)
The term-by-term differentiation of a Fourier series is not always permissible. For example, the FS of f(x) = x, −π < x < π (see Example 9), is

f(x) ∼ 2 Σ_{n=1}^∞ (−1)^{n+1} sin(nx)/n,

but the termwise differentiated series

2 Σ_{n=1}^∞ (−1)^{n+1} cos nx

diverges, since its terms do not tend to 0; in particular it cannot represent f′(x) = 1.
THEOREM. (Differentiation of FS) Let f be continuous on [−L, L] with f(−L) = f(L), and let f′ be piecewise continuous. If

f(x) ∼ (1/2) a0 + Σ_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)],

then

f′(x) ∼ Σ_{n=1}^∞ (nπ/L) { −an sin(nπx/L) + bn cos(nπx/L) }.
Notice that the above theorem does not apply to Example 7, as the 2π-periodic extension of f(x) fails to be continuous on (−∞, ∞).
Termwise integration of a FS is permissible under much weaker conditions.

THEOREM 15. (Integration of FS) Let f : [−L, L] → R be a piecewise continuous function with FS

f(x) ∼ (1/2) a0 + Σ_{n=1}^∞ { an cos(nπx/L) + bn sin(nπx/L) }.

Then, for every x ∈ [−L, L],

∫_{−L}^{x} f(t) dt = ∫_{−L}^{x} (a0/2) dt + Σ_{n=1}^∞ ∫_{−L}^{x} { an cos(nπt/L) + bn sin(nπt/L) } dt.
Practice Problems
1. Discuss the convergence of the FS of the following functions f:
(a) f(x) = { x, 0 < x < π;  −x, π < x < 2π };  (b) f(x) = { x, 0 < x < π;  0, π < x ≤ 2π }.
2. Determine the FS of the following functions by differentiating the appropriate FS:
(i) sin²x, 0 < x < π;  (ii) cos x + cos 2x, 0 < x < π.
3. Differentiate the FS of f(x) = |sin x| term by term to prove that

cos x = (8/π) Σ_{n=1}^∞ n sin(2nx)/(4n² − 1),  0 < x < π.

4. Find the function represented by the new series obtained by termwise integration of the following series from 0 to x:

Σ_{k=1}^∞ ((−1)^{k+1}/k) cos(kx) = ln(2 cos(x/2)),  −π < x < π.
Lecture 3
Recall that the FS of an odd/even function defined on [−L, L] consists entirely of sine/cosine terms. A function f defined on 0 ≤ x ≤ L can be extended to the interval [−L, L] in such a way that the extended function is odd/even. This leads to the following definitions.
DEFINITION 1. (Even and odd extensions) Let f : [0, L] → R. The even extension of f(x) is the unique even function fe(x) defined for x ∈ [−L, L] with fe(x) = f(x) for x ∈ [0, L], i.e.,

fe(x) = { f(x), if 0 ≤ x ≤ L;  f(−x), if −L ≤ x ≤ 0. }

If f(0) = 0, then the odd extension fo(x) is the unique odd function defined for x ∈ [−L, L] such that fo(x) = f(x) for x ∈ [0, L], i.e.,

fo(x) = { f(x), if 0 ≤ x ≤ L;  −f(−x), if −L ≤ x ≤ 0. }
The Fourier coefficients of an extension of f on [−L, L] are

an = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx and bn = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx.

For the even extension fe, every bn = 0 and

an = (2/L) ∫_0^{L} f(x) cos(nπx/L) dx, (n = 0, 1, 2, . . .).  (1)

For the odd extension fo, every an = 0 and

bn = (2/L) ∫_0^{L} f(x) sin(nπx/L) dx, (n = 1, 2, 3, . . .).  (2)
DEFINITION 3. Let f : [0, L] → R be such that the integrals (1) and (2) exist. Then the Fourier sine series (FSS) of f(x) is the expression

FSS of f(x) = Σ_{n=1}^∞ bn sin(nπx/L), where bn = (2/L) ∫_0^{L} f(x) sin(nπx/L) dx,  (3)

and the Fourier cosine series (FCS) of f(x) is the expression

FCS of f(x) = (1/2) a0 + Σ_{n=1}^∞ an cos(nπx/L), where an = (2/L) ∫_0^{L} f(x) cos(nπx/L) dx.  (4)
THEOREM 4. Let f : [0, L] → R. Suppose that the integrals in (1) and (2) exist. Then (redefining f(0) = 0 if necessary) the FSS of f(x) is the Fourier series of the odd extension fo(x) defined on [−L, L], i.e.,

FSS of f(x) = FS of fo(x).

The FCS of f(x) is the Fourier series of the even extension fe(x) defined on [−L, L], i.e.,

FCS of f(x) = FS of fe(x).

Since FSS of f(x) = FS of fo(x) and FCS of f(x) = FS of fe(x), we can obtain convergence results for the FSS and FCS of f(x) by applying Theorem 11 (see the previous lecture) to the extensions fo(x) and fe(x).
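The identity FSS of f(x) = FS of fo(x) can be checked coefficient by coefficient: the half-range formula (2) over [0, L] must agree with the full-range formula applied to the odd extension. A sketch (plain Python; the test function f(x) = x(1 − x) is an arbitrary choice):

```python
import math

L = 1.0
def f(x):                 # test function on [0, L]
    return x * (1.0 - x)

def fo(x):                # odd extension of f to [-L, L]
    return f(x) if x >= 0 else -f(-x)

def integral(g, a, b, steps=20000):
    # Midpoint rule.
    h = (b - a) / steps
    return h * sum(g(a + (i + 0.5) * h) for i in range(steps))

for n in (1, 2, 3):
    half = (2 / L) * integral(lambda x: f(x) * math.sin(n * math.pi * x / L), 0, L)
    full = (1 / L) * integral(lambda x: fo(x) * math.sin(n * math.pi * x / L), -L, L)
    assert abs(half - full) < 1e-6
print("half-range and full-range coefficients agree")
```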
THEOREM 5. Let f : [0, L] → R be a piecewise C¹ function. Then

FSS of f(x) = { (1/2)[f(x−) + f(x+)], 0 < x < L;  0, x = 0 or L. }  (5)

If f(x) is also continuous on [0, L] with f(0) = 0 and f(L) = 0, then the partial sums SN(x) of the FSS of f(x) converge uniformly to f(x) on [0, L], i.e.,

max_{0≤x≤L} |f(x) − SN(x)| → 0 as N → ∞.  (6)

Similarly,

FCS of f(x) = { (1/2)[f(x−) + f(x+)], 0 < x < L;  f(0+), x = 0;  f(L−), x = L. }  (7)

If f(x) is also continuous on [0, L], then the partial sums SN(x) of the FCS of f(x) converge uniformly to f(x).
A related expansion uses the quarter-wave sine functions:

FSS of f(x) = Σ_{n=0}^∞ cn sin[(n + 1/2)πx/L],  (8)

where

cn = (2/L) ∫_0^{L} f(x) sin[(n + 1/2)πx/L] dx, n = 0, 1, 2, . . . .  (9)

For a piecewise C¹ function f, this series converges to

(1/2)[f(x−) + f(x+)], 0 < x < L, and 0 at x = 0,  (10)

and to

f(L−) at x = L.  (11)

Moreover, if f(x) is piecewise C¹ and continuous with f(0) = 0, then the partial sums SN(x) of (8) converge uniformly to f(x) on [0, L].
Practice Problems
1. Construct the FCS for the following functions:
(a) f(x) = { …, 0 ≤ x ≤ 1;  1, 1 < x ≤ 2 };  (b) f(x) = { 2 + x, 0 ≤ x ≤ 1;  1 − x, 1 < x ≤ 2 };
(c) f(x) = { …, 0 ≤ x ≤ π/2;  2, π/2 < x ≤ π. }
Lecture 1
We shall derive the heat equation from the principle of conservation of energy and the fact that heat flows from hot regions to cold regions.
Consider a wire or rod of length L which is made of some heat-conducting material and is insulated on the outside, except possibly over the ends at x = 0 and x = L. Let u(x, t) denote the temperature at x at time t; u(x, t) is assumed to be constant on each cross-section at each time. By the principle of conservation of energy (heat energy), the net change of heat inside the segment PQ (between x and x + Δx) is equal to the net heat flux across the boundaries plus the total heat generated inside PQ. If c is the thermal capacity of the rod, ρ is the density of the rod, A is the cross-sectional area of the rod, k is the thermal conductivity of the rod and f(x, t) is the external heat source, then we calculate these terms as follows:
The total heat inside PQ at time t is

H(t) = ∫_x^{x+Δx} cρA u(ξ, t) dξ,

and its rate of change is

dH/dt = (d/dt) ∫_x^{x+Δx} cρA u(ξ, t) dξ = cρA ∫_x^{x+Δx} ut(ξ, t) dξ.

By conservation of energy,

cρA ∫_x^{x+Δx} ut(ξ, t) dξ = kA[ux(x + Δx, t) − ux(x, t)] + A ∫_x^{x+Δx} f(ξ, t) dξ.  (1)

Dividing by cρAΔx, applying the mean value theorem for integrals, and letting Δx → 0, we obtain

ut = α² uxx + (1/cρ) f(x, t),  (2)

where α² = k/(cρ) is the thermal diffusivity.
Here we used the mean value theorem for integrals: if f(x) is continuous on [a, b], then there exists at least one number ξ in (a, b) such that

∫_a^b f(x) dx = f(ξ)(b − a).

For a rod made of two different materials, the diffusivity is a piecewise constant function α(x) taking the values α1 and α2, where α1 and α2 are the thermal diffusivity coefficients of copper and steel, respectively.
Types of BCs: There are three types of boundary conditions that can occur for heat flow problems. They are:
Dirichlet boundary conditions (the temperature is specified on the boundary): Consider a heat flow problem in a rod (0 ≤ x ≤ L). The specification of the temperatures u(0, t) and u(L, t) at the ends is classified as a Dirichlet-type BC.
Neumann boundary conditions (the heat flow across the boundary is specified): The specification of the normal derivative ∂u/∂n (i.e., the derivative normal to the boundary) on the boundary is classified as a Neumann-type BC. For instance, if the end points of a rod are insulated (i.e., we do not allow any flow of heat across the boundary), the BCs are

ux(0, t) = 0,  ux(L, t) = 0,  0 < t < ∞.

Robin (mixed) boundary conditions (a linear combination of the temperature and the heat flow is specified on the boundary).
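Insulated (Neumann) ends admit a simple numerical illustration: with no heat crossing the boundary, the temperature relaxes to the mean of the initial data. A small explicit finite-difference sketch (plain Python; the grid, the initial profile 1 + cos(πx), and the run length are arbitrary choices, with the time step obeying the stability condition α²Δt/Δx² ≤ 1/2):

```python
import math

# Explicit finite differences for u_t = alpha2 * u_xx with insulated ends
# u_x(0, t) = u_x(L, t) = 0: the solution relaxes to the mean of the initial data.
alpha2, length = 1.0, 1.0
nx = 50
dx = length / nx
dt = 0.25 * dx * dx / alpha2      # stability: alpha2 * dt / dx**2 = 0.25 <= 0.5

u = [1.0 + math.cos(math.pi * i * dx / length) for i in range(nx + 1)]
for _ in range(5000):
    new = u[:]
    for i in range(1, nx):
        new[i] = u[i] + alpha2 * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    new[0], new[nx] = new[1], new[nx - 1]   # reflect: u_x = 0 at both ends
    u = new

# The mean of 1 + cos(pi*x) over [0, 1] is 1, so u should be nearly flat at 1.
print(min(u), max(u))
```

After enough steps the profile is essentially the constant 1: the cosine mode has decayed while the mean value, which insulation protects, survives.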
Lecture 2
In this lecture, we shall prove the maximum and minimum principles for the heat equation. These properties can be used to prove uniqueness and continuous dependence on the data of the solutions of these equations.
To begin with, we shall first prove the maximum principle for the inhomogeneous heat equation (F ≠ 0).
THEOREM 1. (The maximum principle) Let R : 0 ≤ x ≤ L, 0 ≤ t ≤ T be a closed region and let u(x, t) be a solution of

ut − α² uxx = F(x, t),  (x, t) ∈ R,  (1)

which is continuous in the closed region R. If F < 0 in R, then u(x, t) attains its maximum value on t = 0, x = 0 or x = L, and not in the interior of the region or at t = T. If F > 0 in R, then u(x, t) attains its minimum value on t = 0, x = 0 or x = L, and not in the interior of the region or at t = T.
Proof. We shall show that if a maximum or minimum occurs at an interior point 0 < x0 < L, 0 < t0 ≤ T, then we arrive at a contradiction. Let us consider the following cases.
Case I: First, consider the case F < 0. Since u(x, t) is continuous in the closed and bounded region R, u(x, t) must attain its maximum in R. Let (x0, t0) be an interior maximum point. Then we must have

uxx(x0, t0) ≤ 0,  ut(x0, t0) ≥ 0.  (2)

Substituting (2) into (1) gives F(x0, t0) = ut(x0, t0) − α² uxx(x0, t0) ≥ 0, which contradicts F < 0. Hence the maximum must be attained on the initial line or on the boundary.
Case II: Consider the case F > 0. Let there be an interior minimum point (x0, t0) in R, so that

uxx(x0, t0) ≥ 0,  ut(x0, t0) ≤ 0.  (3)

Note that the inequalities (3) are the same as (2) with the signs reversed. Arguing as before, this leads to a contradiction, hence the minimum must be attained on the initial line or on the boundary.
Note: When F = 0, i.e., for the homogeneous equation, the inequalities (2) at a maximum or (3) at a minimum do not lead to a contradiction when they are inserted into (1), as uxx and ut may both vanish at (x0, t0).
Below, we present a proof of the maximum principle for the homogeneous heat equation.

THEOREM 2. (The maximum principle) Let u(x, t) be a solution of

ut = α² uxx,  0 ≤ x ≤ L, 0 < t ≤ T.  (4)

Then u(x, t) attains its maximum value on t = 0, x = 0 or x = L.
Proof. Let

M = max{u(x, t)} on t = 0, x = 0, and x = L,

i.e., M is the maximum value of u on the initial line and the boundary lines, and consider

v(x, t) = u(x, t) + εx²,  0 ≤ x ≤ L, 0 ≤ t ≤ T,  (5)

where ε > 0 is a constant and u satisfies (4). Note that v(x, t) is continuous in R and hence it has a maximum at some point (x1, t1) in the region R. Since vt − α² vxx = −2εα² < 0, the argument of Theorem 1 applies to v: if (x1, t1) were an interior point with 0 < x1 < L and 0 < t1 ≤ T, then we would have

vt(x1, t1) ≥ 0,  vxx(x1, t1) ≤ 0,  (6)

contradicting vt − α² vxx < 0. Hence the maximum of v is attained on t = 0, x = 0 or x = L, so that

u(x, t) ≤ v(x, t) ≤ M + εL², for 0 ≤ x ≤ L, 0 ≤ t ≤ T.

Letting ε → 0 gives u(x, t) ≤ M.
As a consequence of the maximum principle, we can show that the heat flow problem has a unique solution which depends continuously on the given initial and boundary data.

THEOREM 4. (Uniqueness result) Let u1(x, t) and u2(x, t) be solutions of the following problem:

PDE: ut = α² uxx, 0 < x < L, t > 0,
BC: u(0, t) = g(t), u(L, t) = h(t), t > 0,  (11)
IC: u(x, 0) = f(x), 0 ≤ x ≤ L,

where f(x), g(t) and h(t) are given functions. Then u1(x, t) = u2(x, t) for all 0 ≤ x ≤ L and t ≥ 0.
Proof. Let u1(x, t) and u2(x, t) be two solutions of (11). Set w(x, t) = u1(x, t) − u2(x, t). Then w satisfies

wt = α² wxx, 0 < x < L, t > 0,
w(0, t) = 0, w(L, t) = 0,
w(x, 0) = 0.

By the maximum principle (cf. Theorem 2), we must have

w(x, t) ≤ 0, i.e., u1(x, t) ≤ u2(x, t), for all 0 ≤ x ≤ L, t ≥ 0.

A similar argument with w̃ = u2 − u1 yields

u2(x, t) ≤ u1(x, t) for all 0 ≤ x ≤ L, t ≥ 0.

Therefore, we have

u1(x, t) = u2(x, t) for all 0 ≤ x ≤ L, t ≥ 0,

and this completes the proof.
THEOREM 5. (Continuous dependence on the IC and BC) Let u1(x, t) and u2(x, t), respectively, be solutions of the problems

ut = α² uxx;  u(0, t) = g1(t), u(L, t) = h1(t);  u(x, 0) = f1(x);
ut = α² uxx;  u(0, t) = g2(t), u(L, t) = h2(t);  u(x, 0) = f2(x),  (12)

in the region 0 ≤ x ≤ L, t ≥ 0. If

|f1(x) − f2(x)| ≤ ε for all x, 0 ≤ x ≤ L,

and

|g1(t) − g2(t)| ≤ ε and |h1(t) − h2(t)| ≤ ε for all t, 0 ≤ t ≤ T,

for some ε ≥ 0, then we have

|u1(x, t) − u2(x, t)| ≤ ε for all x and t, where 0 ≤ x ≤ L, 0 ≤ t ≤ T.
Proof. Let v(x, t) = u1(x, t) − u2(x, t). Then vt = α² vxx and we obtain

|v(x, 0)| = |f1(x) − f2(x)| ≤ ε, 0 ≤ x ≤ L,
|v(0, t)| = |g1(t) − g2(t)| ≤ ε, 0 ≤ t ≤ T,
|v(L, t)| = |h1(t) − h2(t)| ≤ ε, 0 ≤ t ≤ T.

Note that the maximum of v on t = 0 (0 ≤ x ≤ L) and on x = 0 and x = L (0 ≤ t ≤ T) is not greater than ε. The minimum of v on these boundary lines is not less than −ε. Hence, the maximum/minimum principle yields

−ε ≤ v(x, t) ≤ ε, i.e., |u1(x, t) − u2(x, t)| = |v(x, t)| ≤ ε.

Note: (i) We observe that when ε = 0, the problems in (12) are identical. We conclude that |u1(x, t) − u2(x, t)| ≤ 0 (i.e., u1 = u2). This proves the uniqueness result.
(ii) Suppose a certain initial/boundary value problem has a unique solution. Then a small change in the initial and/or boundary conditions yields only a small change in the solution.
For the inhomogeneous equation (1), we have seen that the maximum or minimum values must be attained either on the initial line or on the boundary lines, and that they cannot be attained in the interior. The following sharper result is known as a strong maximum (or minimum) principle.
THEOREM 6. (Strong maximum principle) Let u(x, t) be a solution of the heat equation in the rectangle R : 0 ≤ x ≤ L, 0 ≤ t ≤ T. If u(x, t) achieves its maximum at (x*, T), where 0 < x* < L, then u must be constant in R.
Practice Problems
1. Use the maximum/minimum principle to show that the solution u of the problem

ut = uxx, 0 < x < π, t > 0,
ux(0, t) = 0, ux(π, t) = 0, t > 0,
u(x, 0) = sin x + (1/2) sin 2x, 0 ≤ x ≤ π,

satisfies 0 ≤ u(x, t) ≤ 3√3/4 for t ≥ 0.
Lecture 3
Separation of variables is one of the oldest techniques for solving initial-boundary value problems (IBVPs). It applies to problems where
the PDE is linear and homogeneous (not necessarily with constant coefficients), and
the BCs are linear and homogeneous.
Basic idea: Seek a solution of the form

u(x, t) = X(x)T(t),

where X(x) is some function of x and T(t) is some function of t. The solutions are simple because any temperature u(x, t) of this form will retain its basic "shape" for different values of time t. Separation of variables reduces the problem of solving the PDE to solving two ODEs: one second-order ODE involving the independent variable x and one first-order ODE involving t. These ODEs are then solved using the given initial and boundary conditions.
To illustrate this method, let us apply it to a specific problem. Consider the following IBVP:

PDE: ut = α² uxx, 0 ≤ x ≤ L, 0 < t < ∞,  (1)
BC: u(0, t) = 0, u(L, t) = 0, 0 < t < ∞,  (2)
IC: u(x, 0) = f(x), 0 ≤ x ≤ L.  (3)
Step 1 (Reducing to the ODEs): Assume that equation (1) has solutions of the form

u(x, t) = X(x)T(t),

where X is a function of x alone and T is a function of t alone. Note that

ut = X(x)T′(t) and uxx = X″(x)T(t).

Now, substituting these expressions into ut = α² uxx and separating variables, we obtain

X(x)T′(t) = α² X″(x)T(t)  ⟹  T′(t)/(α² T(t)) = X″(x)/X(x).

A function of t can equal a function of x for all x and t only when both functions are constant. Thus,

T′(t)/(α² T(t)) = X″(x)/X(x) = c

for some constant c. This leads to the following two ODEs:

T′(t) − α² c T(t) = 0,  (4)
X″(x) − c X(x) = 0.  (5)

Thus, the problem of solving the PDE (1) is now reduced to solving the two ODEs.
Step 2 (Applying the BCs): Since the product solutions u(x, t) = X(x)T(t) are to satisfy the BC (2), we have

u(0, t) = X(0)T(t) = 0 and u(L, t) = X(L)T(t) = 0, t > 0.

Thus, either T(t) = 0 for all t > 0, which implies that u(x, t) ≡ 0, or X(0) = X(L) = 0. Ignoring the trivial solution u(x, t) ≡ 0, we combine the boundary conditions X(0) = X(L) = 0 with the differential equation for X in (5) to obtain the BVP:

X″(x) − c X(x) = 0, X(0) = X(L) = 0.  (6)

There are three cases, c < 0, c > 0 and c = 0, which are discussed below. It is convenient to set c = −λ² when c < 0 and c = λ² when c > 0, for some constant λ > 0.
Case 1 (c = λ² > 0 for some λ > 0): In this case, a general solution to the differential equation (5) is

X(x) = C1 e^{λx} + C2 e^{−λx},

where C1 and C2 are arbitrary constants. To determine C1 and C2, we use the BC X(0) = 0, X(L) = 0 to get

X(0) = C1 + C2 = 0,  (7)
X(L) = C1 e^{λL} + C2 e^{−λL} = 0.  (8)

From the first equation, it follows that C2 = −C1. The second equation then leads to

C1(e^{λL} − e^{−λL}) = 0, i.e., C1(e^{2λL} − 1) = 0,

whence C1 = 0 and hence C2 = 0, so only the trivial solution results.
Case 2 (c = 0): Here X(x) = C1 + C2 x, and the BC again force C1 = C2 = 0, giving only the trivial solution.
Case 3 (c = −λ² < 0): The general solution of (5) is

X(x) = C1 cos(λx) + C2 sin(λx).

The BC X(0) = 0 gives C1 = 0, and X(L) = C2 sin(λL) = 0 yields, for a nontrivial solution,

λL = nπ, n = 0, 1, 2, . . . , i.e., λ = nπ/L, n = 1, 2, 3, . . . .

Here, we exclude n = 0, since it makes c = 0. Therefore, the nontrivial solutions (eigenfunctions) Xn corresponding to the eigenvalues c = −λ² = −(nπ/L)² are given by

Xn(x) = an sin(nπx/L).  (9)
For c = −(nπ/L)², the solution of the ODE (4) is

Tn(t) = bn e^{−α²(nπ/L)² t}.
Combining this with (9), the product solution u(x, t) = X(x)T(t) becomes

un(x, t) := Xn(x)Tn(t) = an sin(nπx/L) bn e^{−α²(nπ/L)² t} = cn e^{−α²(nπ/L)² t} sin(nπx/L), n = 1, 2, 3, . . . ,

where cn is an arbitrary constant.
Since the problem (1)-(2) is linear and homogeneous, an application of the superposition principle gives

u(x, t) = Σ_{n=1}^∞ un(x, t) = Σ_{n=1}^∞ cn e^{−α²(nπ/L)² t} sin(nπx/L),  (10)

which will be a solution to (1)-(3), provided the infinite series has the proper convergence behavior.
Since the solution (10) is to satisfy the IC (3), we must have

u(x, 0) = Σ_{n=1}^∞ cn sin(nπx/L) = f(x), 0 < x < L.  (11)

The series Σ_{n=1}^∞ cn sin(nπx/L) is a Fourier sine series (FSS), with the cn given by the formula

cn = (2/L) ∫_0^{L} f(x) sin(nπx/L) dx.  (12)

Then the infinite series (10) with the coefficients cn given by (12) is a solution to the problem (1)-(3).
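The recipe (10)-(12) can be implemented directly: compute the cn by numerical integration, then sum a truncated series. A sketch (plain Python; the data f(x) = x(L − x), the constants, and the truncation level are arbitrary choices):

```python
import math

alpha2, L = 1.0, 1.0
def f(x):
    return x * (L - x)      # initial temperature, f(0) = f(L) = 0

def cn(n, steps=5000):
    # c_n = (2/L) * integral from 0 to L of f(x)*sin(n*pi*x/L) dx  (midpoint rule).
    h = L / steps
    return (2 / L) * h * sum(
        f((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h / L)
        for i in range(steps))

coeffs = [cn(n) for n in range(1, 40)]

def u(x, t):
    # Truncated series (10) with coefficients from (12).
    return sum(c * math.exp(-alpha2 * ((k + 1) * math.pi / L) ** 2 * t)
               * math.sin((k + 1) * math.pi * x / L)
               for k, c in enumerate(coeffs))

print(u(0.5, 0.0))   # close to f(0.5) = 0.25
print(u(0.5, 1.0))   # tiny: every mode has decayed
```

At t = 0 the truncated series reproduces the initial data; for t > 0 the exponential factors damp every mode, with the higher modes dying out fastest.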
EXAMPLE 1. Find the solution to the following IBVP:

ut = 3uxx, 0 ≤ x ≤ π, 0 < t < ∞,  (13)
u(0, t) = 0, u(π, t) = 0, 0 < t < ∞,  (14)
u(x, 0) = f(x), 0 ≤ x ≤ π.  (15)

Solution. Comparing (13) with (1), we notice that α² = 3 and L = π. Using formula (10), we write the solution u(x, t) as

u(x, t) = Σ_{n=1}^∞ cn e^{−3n² t} sin(nx),

where, by (11), the coefficients cn are determined so that u(x, 0) = Σ_{n=1}^∞ cn sin(nx) is the FSS of the initial data.
Practice Problems
1. Solve the following IBVP:

ut = 16uxx, 0 < x < 1, t > 0,
u(0, t) = 0, u(1, t) = 0, t > 0,
u(x, 0) = (1 − x)x, 0 < x < 1.

2. Solve the following IBVP:

ut = uxx, 0 < x < π, t > 0,
ux(0, t) = ux(π, t) = 0, t > 0,
u(x, 0) = 1 − sin x, 0 < x < π.
Lecture 4
Time-Independent Homogeneous BC
Consider the IBVP with time-independent inhomogeneous BC:

PDE: ut = α² uxx, 0 < x < L, 0 < t < ∞,  (1)
BC: u(0, t) = a, u(L, t) = b, 0 < t < ∞,  (2)
IC: u(x, 0) = f(x), 0 ≤ x ≤ L.  (3)

We first look for a particular solution of the form up(x, t) = cx + d. The BC force

d = a and c = (b − a)/L.

Thus,

up(x, t) = (b − a)x/L + a

solves the PDE with the BCs being satisfied.
Consider the related homogeneous problem (i.e., with homogeneous PDE and BC):

PDE: vt = α² vxx, 0 < x < L, 0 < t < ∞,
BC: v(0, t) = 0, v(L, t) = 0, 0 < t < ∞,  (4)
IC: v(x, 0) = f(x) − up(x, 0), 0 ≤ x ≤ L.

Its solution (cf. Lecture 3) is

v(x, t) = Σ_{n=1}^∞ cn e^{−(nπ/L)² α² t} sin(nπx/L).
Now, set u(x, t) = up(x, t) + v(x, t). Then it is easy to verify that u(x, t) solves (1): indeed, u(x, t) solves the PDE by the superposition principle, satisfies the BC u(0, t) = a, u(L, t) = b, and satisfies the IC u(x, 0) = up(x, 0) + v(x, 0) = f(x).
REMARK 1. (i) It is necessary to subtract up(x, 0) from f(x) to form the initial condition for the related problem (4), so that the initial condition (3) is satisfied.
(ii) Since any particular solution will do, for simplicity one should consider a particular solution of the form cx + d and find the constants using the BC. Note that this formula only applies to the BC of (2); for other BCs we obtain other particular solutions. For example, if ux(0, t) = a, u(L, t) = b, then up(x, t) = a(x − L) + b.
EXAMPLE 2. Consider an IBVP of the above type, (5)-(7), whose related homogeneous problem has the BC

vx(0, t) = 0, v(1, t) = 0,

and whose transient part works out to

v(x, t) = e^{−9π² t/2} [1 + (1/2) cos(3πx/2)].

Then

u(x, t) = x − 3 + e^{−9π² t/2} [1 + (1/2) cos(3πx/2)].

From the above examples, we notice that the particular solution is time-independent, i.e., a steady state.
Note: Any steady-state solution of the heat equation ut = α² uxx is of the form cx + d.
The solutions u(x, t) are sums of a steady-state particular solution of the PDE and BC and the solution v(x, t) of the related homogeneous problem, which is transient in the sense that v(x, t) → 0 as t → ∞. Thus

u(x, t) → up(x, t) (the steady-state solution) as t → ∞.

That is, the solution u approaches the steady-state solution as t → ∞. However, for some types of BC there are no steady-state particular solutions, as illustrated in the following example.
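The approach to steady state can be observed numerically: with fixed end temperatures a and b, the computed solution relaxes to the line (b − a)x/L + a regardless of the initial data. A sketch (plain Python, explicit finite differences; a = 2, b = 5 and the grid are arbitrary choices):

```python
import math

alpha2, length, a, b = 1.0, 1.0, 2.0, 5.0
nx = 50
dx = length / nx
dt = 0.25 * dx * dx / alpha2      # stable explicit step

# Arbitrary initial data that already matches the BC at the two endpoints.
u = [a if i == 0 else (b if i == nx else 0.0) for i in range(nx + 1)]
for _ in range(5000):
    new = u[:]
    for i in range(1, nx):
        new[i] = u[i] + alpha2 * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = new                        # endpoints never change: Dirichlet BC

steady = [(b - a) * i * dx / length + a for i in range(nx + 1)]
err = max(abs(ui - si) for ui, si in zip(u, steady))
print(err)    # small: the transient part has died out
```

The remaining error is exactly the transient v(x, t), whose slowest mode decays like e^{−(π/L)²α²t}.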
EXAMPLE 3. Consider the problem

PDE: ut = α² uxx, 0 < x < L, t > 0,  (8)
BC: ux(0, t) = a, ux(L, t) = b, t > 0,  (9)
IC: u(x, 0) = f(x), 0 ≤ x ≤ L.  (10)

A time-independent particular solution up = h(x) would require α² h″(x) = 0 with h′(0) = a and h′(L) = b, which is impossible unless a = b. Instead, we look for a particular solution of the form up(x, t) = h(x) + ct. Substituting into the PDE gives c = α² h″(x), so that

h(x) = (c/(2α²)) x² + dx + e.

The BC give a = (up)x(0, t) = h′(0) = d and

b = (up)x(L, t) = h′(L) = cL/α² + d,

so that c = α²(b − a)/L and d = a. Thus, a particular solution (taking e = 0, for simplicity) is obtained as:

up(x, t) = (α²(b − a)/L) t + ((b − a)/(2L)) x² + ax = ((b − a)/L) [α² t + x²/2] + ax.
The solution is then

u(x, t) = up(x, t) + Σ_{n=0}^∞ cn e^{−(nπ/L)² α² t} cos(nπx/L),  (11)

where the cosine modes solve the related homogeneous problem with insulated ends.
Practice Problems
1. Solve the following IBVP:

ut = uxx, 0 < x < L, t > 0,
u(0, t) = a, u(L, t) = b, t > 0,
u(x, 0) = a + bx, 0 ≤ x ≤ L.

2. Solve the following IBVP:

ut = 4uxx, 0 < x < π, t > 0,
u(0, t) = 5, u(π, t) = 10, t > 0,
u(x, 0) = sin x − sin 3x, 0 < x < π.
Lecture 5
Time-Dependent BC
In this lecture we shall learn how to solve the inhomogeneous heat equation

ut − α² uxx = h(x, t)

with time-dependent BC. To begin with, let us consider the following IBVP with time-dependent BC:

PDE: ut = α² uxx, 0 < x < L, 0 < t < ∞,  (1)
BC: u(0, t) = a(t), u(L, t) = b(t), 0 < t < ∞,  (2)
IC: u(x, 0) = f(x), 0 ≤ x ≤ L.  (3)
In the previous lecture, we discussed the solution of this problem in the case where a(t) and b(t) are constant functions (independent of t) and f(x) is a suitable given function. Notice that the function w(x, t) defined by

w(x, t) = [(b(t) − a(t))/L] x + a(t)

satisfies the BC (2). However, w(x, t) will not satisfy the PDE (1) unless a(t) and b(t) are constant. In fact,

wt − α² wxx = [(b′(t) − a′(t))/L] x + a′(t).
Thus, writing u(x, t) = w(x, t) + v(x, t), the function v(x, t) must satisfy the following related problem with homogeneous BC but an inhomogeneous PDE:

PDE: vt − α² vxx = −(wt − α² wxx), 0 ≤ x ≤ L, 0 < t < ∞,  (4)
BC: v(0, t) = 0, v(L, t) = 0, 0 < t < ∞,  (5)
IC: v(x, 0) = f(x) − w(x, 0), 0 ≤ x ≤ L.  (6)

Note: When a(t) and b(t) are constants, this PDE is homogeneous; in the time-dependent case, v(x, t) satisfies a nonhomogeneous PDE.
The problem (4)-(6) is a special case of the following general problem:

PDE: vt − α² vxx = h(x, t), 0 < x < L, 0 < t < ∞,  (7)
BC: v(0, t) = 0, v(L, t) = 0, 0 < t < ∞,  (8)
IC: v(x, 0) = g(x), 0 ≤ x ≤ L.  (9)

The solution procedure for the above problem was given by the French mathematician and physicist Jean-Marie-Constant Duhamel (1797-1872). The method is known as Duhamel's principle.
Suppose u1 and u2 are solutions of the following problems:

(P1): PDE: ut = α² uxx;  BC: u1(0, t) = 0, u1(L, t) = 0;  IC: u1(x, 0) = g(x).
(P2): PDE: ut − α² uxx = h(x, t);  BC: u2(0, t) = 0, u2(L, t) = 0;  IC: u2(x, 0) = 0.

It is easy to check that v(x, t) = u1(x, t) + u2(x, t) solves (7)-(9). The solution u1 to the problem (P1) is known (cf. Lecture 4 in Module 5). It remains only to solve the problem (P2) for u2.
The above observations lead to the following result (cf. [1]).

THEOREM 1. A solution to problem (1)-(3) is given by

u(x, t) = w(x, t) + u1(x, t) + u2(x, t),

where

w(x, t) = [(b(t) − a(t))/L] x + a(t)

is the particular solution for the BC, u1(x, t) solves (P1) with g(x) = f(x) − w(x, 0), and u2(x, t) solves (P2) with h(x, t) = −(wt − α² wxx) = −[b′(t) − a′(t)] x/L − a′(t).
Duhamel's principle
The basic idea of Duhamel's principle is to transfer the source term h(x, t) to the initial conditions of a family of related problems. This is done in the following manner. For each fixed s ≥ 0, let v(x, t; s) denote the solution of the homogeneous problem

vt = α² vxx, 0 ≤ x ≤ L, 0 < t < ∞,
v(0, t; s) = 0, v(L, t; s) = 0,
v(x, 0; s) = h(x, s).

Note that both the PDE and the BC are homogeneous; the source enters only through the initial condition. Using translation in time, we then form

u(x, t) = ∫_0^t v(x, t − s; s) ds.

(For instance, for a source term h(x, t) = t sin(πx), the family has the initial data v(x, 0; s) = h(x, s) = s sin(πx).)
23
(23)
(24)
(25)
has a solution v(x, t; s), where v(x, t; s), vt (x, t; s) and vxx (x, t; s) are continuous (in all
three variables). Then the unique solution of the problem
PDE:
BC:
IC:
0 < t < ,
u(x, 0) = 0.
is given by
(26)
(27)
(28)
u(x, t) =
v(x, t; s)ds.
(29)
satises the IC u(x, 0) = 0 and the BC u(0, t) = u(L, t) = 0. Observe that v(x, t; s)
satises the BC (24). Now, with g(t, s) = v(x, t; s), where x xed, we have
t
ut (x, t) = v(x, t; t) +
vt (x, t; s)ds
0
t
= h(x, t) +
2 vxx (x, t; s)ds.
0
24
EXAMPLE. Consider

ut = uxx + t sin(2πx), 0 < x < 1, t > 0,
u(0, t) = 0, u(1, t) = 0, t > 0,
u(x, 0) = sin(πx), 0 < x < 1.

Write u = u1 + u2, where u1 solves (P1):

u1(0, t) = 0, u1(1, t) = 0, u1(x, 0) = sin(πx),

so that u1(x, t) = e^{−π² t} sin(πx), and u2 solves (P2):

u2(0, t) = 0, u2(1, t) = 0, u2(x, 0) = 0.

By Duhamel's principle,

u2(x, t) = ∫_0^t v(x, t − s; s) ds, where v(x, t; s) = s e^{−4π² t} sin(2πx). Thus,

u2(x, t) = ∫_0^t s e^{−4π²(t − s)} sin(2πx) ds = e^{−4π² t} sin(2πx) ∫_0^t s e^{4π² s} ds
         = (4π²)^{−2} [4π² t + e^{−4π² t} − 1] sin(2πx).
Note: The same procedure applies with other homogeneous BCs, for example

u(0, t) = 0, ux(L, t) = 0, or ux(0, t) = 0, ux(L, t) = 0.
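Duhamel's formula is easy to test on a single mode. For a source h(x, t) = t sin(2πx) with α = 1 and L = 1 (a setting consistent with the example above), the family solutions are v(x, t; s) = s e^{−4π²t} sin(2πx), and the integral can be evaluated numerically and compared with the closed form. A sketch (plain Python):

```python
import math

A = 4 * math.pi ** 2     # decay rate of the sin(2*pi*x) mode when alpha = 1, L = 1

def u2_duhamel(x, t, steps=2000):
    # u2(x, t) = integral from 0 to t of v(x, t-s; s) ds,
    # with v(x, t; s) = s * exp(-A*t) * sin(2*pi*x)  (midpoint rule in s).
    h = t / steps
    total = sum((i + 0.5) * h * math.exp(-A * (t - (i + 0.5) * h))
                for i in range(steps))
    return total * h * math.sin(2 * math.pi * x)

def u2_exact(x, t):
    return (A * t + math.exp(-A * t) - 1) / A**2 * math.sin(2 * math.pi * x)

x, t = 0.3, 0.2
print(u2_duhamel(x, t), u2_exact(x, t))   # the two values agree
```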
Practice Problems
1. Solve the following IBVP:

ut = α² uxx + cos(3t),  0 < x < 1,  t > 0,
ux(0, t) = 0,  ux(1, t) = 1,  t > 0,
u(x, 0) = cos(πx) + (1/2)x² − x,  0 < x < 1.
2. Solve the following IBVP:

ut = 4uxx + e^{−t} sin(x/2) − sin(t),  0 < x < π,  t > 0,
u(0, t) = cos(t),  u(π, t) = 0,  t > 0,
u(x, 0) = 1,  0 < x < π.
Lecture 1

We begin by studying the one-dimensional wave equation, which describes the transverse
vibrations of a string. Consider the small vibrations of a string that is fastened at each
end (see Fig. 6.1). We make the following assumptions:

The string is made of a homogeneous material (i.e., the mass per unit length of the
string is constant).

There is no effect of gravity or of other external forces.

The vibration takes place in a plane.

The mathematical model under these assumptions describes small vibrations of
the string. Consider the forces acting on a small portion PQ of the string. Since the string
offers no resistance to bending, the tension is tangential to the curve of the string at each
point. Let T1 and T2 be the tensions at the endpoints P and Q. As there is no motion in
the horizontal direction, the horizontal components of the tension are constant:

T1 cos θ1 = T2 cos θ2 = T = const.    (1)
Let −T1 sin θ1 and T2 sin θ2 be the components of T1 and T2, respectively, in the vertical
direction. The minus sign indicates that the component at P is directed downward. By
Newton's second law, the resultant of these two forces is equal to the mass ρΔx of the
portion times the acceleration utt, evaluated at some point between x and x + Δx. If ρ is
the mass of the undeflected string per unit length and Δx is the length of the portion of the
undeflected string, then we have

T2 sin θ2 − T1 sin θ1 = ρΔx utt.

Dividing by (1), i.e., by T2 cos θ2 = T1 cos θ1 = T, gives

tan θ2 − tan θ1 = (ρΔx/T) utt.    (2)

Note that tan θ1 and tan θ2 are the slopes of the curve of the string at x and x + Δx, i.e.,

tan θ1 = (ux)_P,  tan θ2 = (ux)_Q.

Here, partial derivatives are used because u also depends on t. Dividing (2) by Δx, we
have

(1/Δx)[(ux)_{x+Δx} − (ux)_x] = (ρ/T) utt.    (3)

Letting Δx → 0 yields the one-dimensional wave equation

utt = c² uxx,  c² = T/ρ.

NOTE: The notation c² (instead of c) for the physical constant T/ρ has been chosen to
indicate that this constant is positive. The constant c² depends on the density and the tension
of the string.
As the problem is linear, it is enough to prove the uniqueness of the solution. The uniqueness result is proved in the following theorem.

THEOREM 1. Let u1(x, t) and u2(x, t) be two solutions of

PDE:  utt = c² uxx,  0 ≤ x ≤ L,  −∞ < t < ∞,
BC:   u(0, t) = u(L, t) = 0,
IC:   u(x, 0) = f(x),  ut(x, 0) = g(x).

Then u1 = u2.

Proof. Set v(x, t) = u1(x, t) − u2(x, t). Then v solves the same problem with zero BC and IC, and

v(x, t) = ∫_0^t vt(x, τ) dτ.    (4)

We now claim that vt(x, t) = 0 for all x in [0, L] and for all t. Construct the energy function

H(t) = ∫_0^L [c² vx²(x, t) + vt²(x, t)] dx.    (5)

Differentiating,

H′(t) = ∫_0^L {2c² vx vxt + 2vt vtt} dx
      = 2c² ∫_0^L {vx vxt + vt vxx} dx    (using vtt = c² vxx)
      = 2c² ∫_0^L ∂/∂x (vx vt) dx
      = 2c² [vx(x, t) vt(x, t)]_{x=0}^{x=L}
      = 0,

where in the last step we have used vt(0, t) = (d/dt) v(0, t) = 0 and, similarly, vt(L, t) = 0.
Thus,

H′(t) = 0  ⟹  H(t) = C,

where C is a constant. Since H(0) = 0, we have C = 0 and hence H(t) = 0 for all t. From (5),
vx = vt = 0 everywhere, so by (4),

v(x, t) = 0,  0 ≤ x ≤ L,  t ∈ R,

i.e., u1 = u2.
Lecture 2

In this lecture, we shall show that the solution of the wave equation

utt = c² uxx

can be obtained immediately with a suitable transformation of the independent variables.
We shall derive D'Alembert's formula for the solution of the wave equation for an infinite
string (−∞ < x < ∞) with IC u(x, 0) = f(x) and ut(x, 0) = g(x).
Consider the following IVP:

PDE:  utt = c² uxx,  −∞ < x < ∞,  t ≥ 0,    (1)
IC:   u(x, 0) = f(x),  ut(x, 0) = g(x).    (2)

Introducing the new independent variables

ξ = x + ct,  η = x − ct,

we note that

ux = uξ ξx + uη ηx = uξ + uη,
uxx = (uξ + uη)x
    = (uξ + uη)ξ ξx + (uξ + uη)η ηx
    = uξξ + 2uξη + uηη.

Similarly,

utt = c² (uξξ − 2uξη + uηη).

Substituting the expressions for uxx and utt into utt = c² uxx yields

uξη = 0.    (3)

Integrating (3) with respect to ξ and then with respect to η gives the general solution of uξη = 0,

u(ξ, η) = φ(ξ) + ψ(η),    (4)

where ψ(η) is the antiderivative of the function arising in the first integration, and φ(ξ) is any function of ξ. In the original variables,

u(x, t) = φ(x + ct) + ψ(x − ct).    (5)
This is the general solution of the wave equation. We may interpret (5) as the sum of
two moving waves, each moving in the opposite direction with speed c.

Step 4. (Applying the IC to the general solution): In order to solve the IVP (1)-(2), the general
solution u(x, t) is required to satisfy the two initial conditions

u(x, 0) = φ(x) + ψ(x) = f(x),    (6)
ut(x, 0) = cφ′(x) − cψ′(x) = g(x).    (7)

Integrating (7) from x0 to x gives

cφ(x) − cψ(x) = ∫_{x0}^x g(τ) dτ + K.    (8)
Solving for φ(x) and ψ(x) from (6) and (8), we obtain

ψ(x) = (1/2) f(x) − (1/2c) ∫_{x0}^x g(τ) dτ,    (9)
φ(x) = (1/2) f(x) + (1/2c) ∫_{x0}^x g(τ) dτ,    (10)

where the constant K has been omitted, since it cancels in the sum. Substituting (9)-(10) into (5),

u(x, t) = (1/2)[f(x + ct) + f(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ.    (11)

The equation (11) is known as D'Alembert's solution to the IVP (1)-(2). This formula is
of great interest in itself, and it avoids the problem of convergence of infinite series in the
Fourier series approach.
Odd initial data yield odd solutions.
If f(x) and g(x) are odd, then, replacing x by −x in (11),

u(−x, t) = (1/2)[f(−x + ct) + f(−x − ct)] + (1/2c) ∫_{−x−ct}^{−x+ct} g(r) dr
         = (1/2)[−f(x − ct) − f(x + ct)] − (1/2c) ∫_{x−ct}^{x+ct} g(s) ds    (substituting s = −r)
         = −u(x, t).

Similarly, we can show that if f(x) and g(x) are even then u(x, t) is even in x, i.e.,
u(−x, t) = u(x, t).
Periodic initial data yield periodic solutions.
If f(x + 2L) = f(x) and g(x + 2L) = g(x), then u(x + 2L, t) = u(x, t). That is, if f
and g are periodic of period 2L, then u(x, t) is also periodic of period 2L in x. This
follows easily from D'Alembert's formula. This fact is useful in dealing with finite
strings.
It can be shown that if f(x) and g(x) are periodic of period 2L and

∫_{−L}^{L} g(x) dx = 0,

then u(x, t) is not only periodic in x of period 2L, but also periodic in t of period
2L/c.
Consider the term

(1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ

in D'Alembert's formula. The solution u at (x, t) may be interpreted as integrating the initial velocity between x − ct
and x + ct on the initial line t = 0.
Let us consider the following examples.
EXAMPLE 2. (Zero initial velocity) Solve the IVP:

PDE:  utt = c² uxx,  −∞ < x < ∞,  t > 0,
IC:   u(x, 0) = sin(x),  ut(x, 0) = 0.

Solution: Applying D'Alembert's formula (11) with f(x) = sin(x) and g(x) = 0, we
obtain

u(x, t) = (1/2)[sin(x − ct) + sin(x + ct)].
EXAMPLE 3. (Zero initial displacement) Consider the IVP:

PDE:  utt = c² uxx,  −∞ < x < ∞,  t > 0,
IC:   u(x, 0) = 0,  ut(x, 0) = sin(x).

Solution: Here the string is initially straight (u(x, 0) = 0), but has a variable velocity
at t = 0 (ut(x, 0) = sin(x)). Thus, applying D'Alembert's formula (11) with f(x) = 0 and
g(x) = sin(x), we obtain

u(x, t) = (1/2c) ∫_{x−ct}^{x+ct} sin(τ) dτ = (1/2c)[cos(x − ct) − cos(x + ct)].
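The solutions in Examples 2 and 3 can be checked numerically. The following sketch combines both data sets (f(x) = sin x and g(x) = sin x), assumes an arbitrarily chosen wave speed c = 2, and verifies both the PDE (by central differences) and the initial condition:

```python
import math

c = 2.0  # hypothetical wave speed, chosen only for this check

def u(x, t):
    """D'Alembert solution (11) for f(x) = sin x and g(x) = sin x."""
    return (0.5 * (math.sin(x + c * t) + math.sin(x - c * t))
            + (math.cos(x - c * t) - math.cos(x + c * t)) / (2 * c))

def residual(x, t, h=1e-4):
    """Central-difference estimate of u_tt - c^2 u_xx; should be ~0."""
    utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return utt - c * c * uxx

pde_err = abs(residual(0.7, 0.3))
ic_err = abs(u(0.7, 0.0) - math.sin(0.7))  # u(x, 0) = sin x
```

The residual is at the level of the finite-difference truncation error, confirming that (11) solves the wave equation.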
Practice Problems
1. Solve the following IVP:

utt = 9uxx,  −∞ < x < ∞,  t > 0,
u(x, 0) = sin x,  ut(x, 0) = cos x,  −∞ < x < ∞.

2. Solve the following IVP:

utt = c² uxx,  −∞ < x < ∞,  t > 0,
Lecture 3
Before we introduce the semi-innite string problem, let us look at some special cases of
DAlemberts formula derived in the previous lecture.
EXAMPLE 1. Consider the problem for the semi-infinite string (0 ≤ x < ∞) with fixed
end at x = 0:

PDE:  utt = c² uxx,  0 < x < ∞,  t > 0,
BC:   u(0, t) = 0,
IC:   u(x, 0) = f(x),  ut(x, 0) = 0.

Solution. Note that f(x) is defined for x ≥ 0. Consider the odd extension f0(x),
−∞ < x < ∞, defined as follows:

f0(x) =  f(x)     for x ≥ 0,
        −f(−x)    for x < 0.

Now solve the infinite-string problem with data f0 and zero initial velocity. By
D'Alembert's formula, u(x, t) = (1/2)[f0(x + ct) + f0(x − ct)], which is odd in x and hence
satisfies u(0, t) = 0. At t = 0,

u(x, 0) = (1/2)[f0(x + c·0) + f0(x − c·0)] = f0(x),

which is the same as f(x) when x ≥ 0.
Semi-infinite string problem: We shall find the solution of the following wave equation
whose left end is fixed at zero and which has given initial conditions:

PDE:  utt = c² uxx,  0 < x < ∞,  t > 0,
BC:   u(0, t) = 0,  t ≥ 0,
IC:   u(x, 0) = f(x),  ut(x, 0) = g(x),  0 < x < ∞.
Recall that the general solution of the PDE is given by (see (5), Lecture 2 of this module)

u(x, t) = ψ(x − ct) + φ(x + ct).    (1)

Substituting the general solution into the initial conditions, we arrive at (cf. (9)-(10), Lecture
2 of this module)

ψ(x − ct) = (1/2) f(x − ct) − (1/2c) ∫_{x0}^{x−ct} g(τ) dτ,    (2)
φ(x + ct) = (1/2) f(x + ct) + (1/2c) ∫_{x0}^{x+ct} g(τ) dτ.    (3)

Since we are looking for the solution u(x, t) everywhere in the first quadrant (x > 0, t > 0)
of the xt-plane, we must determine ψ(x − ct) for −∞ < x − ct < ∞ and φ(x + ct) for 0 < x + ct < ∞.
Using (1), (2) and (3), for x − ct ≥ 0, it follows that

u(x, t) = ψ(x − ct) + φ(x + ct)
        = (1/2)[f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ.

When x < ct, use of the BC u(0, t) = 0 leads to

ψ(−ct) = −φ(ct),

and hence,

ψ(x − ct) = −φ(ct − x) = −(1/2) f(ct − x) − (1/2c) ∫_{x0}^{ct−x} g(τ) dτ.

Thus, the solution of the semi-infinite string problem is

u(x, t) = (1/2)[f(x + ct) + f(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ,  x ≥ ct,
u(x, t) = (1/2)[f(x + ct) − f(ct − x)] + (1/2c) ∫_{ct−x}^{x+ct} g(τ) dτ,  x < ct.
For example, with c = 1, g = 0 and f(x) = |sin x|, for x ≥ t,

u(x, t) = (1/2)(f(x + t) + f(x − t))
        = (1/2)(|sin(x + t)| + |sin(x − t)|).

For x < t,

u(x, t) = (1/2)(f(x + t) − f(t − x))
        = (1/2)(|sin(x + t)| − |sin(t − x)|).
Practice Problems

1. Solve the following IBVP:

utt = uxx,  0 < x < ∞,  t > 0,
ux(0, t) = 0,  t ≥ 0,
u(x, 0) = cos x,  ut(x, 0) = 0,  0 ≤ x < ∞.

2. Solve the following IBVP:

utt = c² uxx,  0 < x < ∞,  t > 0,
u(0, t) = 0,  t ≥ 0,
u(x, 0) = x²,  ut(x, 0) = 0,  0 ≤ x < ∞.
Lecture 4

In this lecture, we shall study the transverse vibrations of a finite string. If u(x, t) represents
the displacement (deflection) of the string, and the ends of the string are held fixed,
then the motion of the string is described by the following initial-boundary value problem
(IBVP):
PDE:  utt = c² uxx,  0 < x < L,  t > 0,    (1)
BC:   u(0, t) = 0,  u(L, t) = 0,  t ≥ 0,    (2)
IC:   u(x, 0) = f(x),  ut(x, 0) = g(x),  0 ≤ x ≤ L.    (3)
While studying the wave equation in a bounded region of space 0 < x < L, it is to be
noted that the waves no longer appear to be moving, due to their repeated interaction with the
boundaries. These waves are known as standing waves (e.g., a guitar string fixed at both
ends). The boundary conditions in (2) reflect the fact that the string is held fixed at the two
end points x = 0 and x = L.
We shall apply the method of separation of variables to solve this problem.
Step 1. (Reducing to a system of ODEs): We seek solutions of the form

u(x, t) = X(x)T(t).    (4)

Substituting (4) into (1) and separating variables gives

T″(t)/(c² T(t)) = X″(x)/X(x) = k,    (5)

for some separation constant k, leading to the two ODEs

T″(t) − c² k T(t) = 0,  X″(x) − k X(x) = 0.    (6)

The ODE X″ − kX = 0 is solved for X(x) in a manner similar to that for the heat equation
(see Lecture 3 of Module 5), but the solutions of the ODE T″ − c²kT = 0 for T(t) are
different, because of the second-order time derivative.

Step 2. (Solving the ODEs): Investigating the solutions of these two ODEs for all different
values of k leads to the following cases.
For k = 0, X(x) = Cx + D.

This case is of no interest because use of the BC yields the trivial solution u ≡ 0. Hence, for a
nontrivial solution, we are left with the possibility of choosing k < 0.
Case III: Let k < 0. Set k = −λ² for some λ ∈ R, λ ≠ 0.
The solution of T″(t) + c²λ² T(t) = 0 is given by

T(t) = A sin(cλt) + B cos(cλt).

The solution of X″(x) + λ² X(x) = 0 is

X(x) = C sin(λx) + D cos(λx),

where A, B, C and D are constants. Then

u(x, t) = [A sin(cλt) + B cos(cλt)][C sin(λx) + D cos(λx)].

Our goal is to find the constants A, B, C and D and the negative separation constant k = −λ²
so that the expression

u(x, t) = [C sin(λx) + D cos(λx)][A sin(cλt) + B cos(cλt)]    (7)

satisfies the BC. As u(x, t) has to satisfy the BC (2), substituting (7) into u(0, t) =
u(L, t) = 0 gives

u(0, t) = X(0)T(t) = D[A sin(cλt) + B cos(cλt)] = 0  ⟹  D = 0.
Note that the choice C = 0 in (7) would lead to X(x)T(t) ≡ 0, so the condition u(L, t) = 0
forces sin(λL) = 0, i.e., λ = nπ/L, n = 1, 2, 3, . . .. Thus, we obtain the sequence of
solutions

un(x, t) = Xn(x)Tn(t)
         = sin(nπx/L) [an sin(nπct/L) + bn cos(nπct/L)],  n = 1, 2, 3, . . ..

As the PDE is linear, by the superposition principle we write

u(x, t) = Σ_{n=1}^∞ sin(nπx/L) [an sin(nπct/L) + bn cos(nπct/L)].    (8)
Setting t = 0 in (8) and in its t-derivative, the initial conditions (3) give

Σ_{n=1}^∞ bn sin(nπx/L) = f(x),
Σ_{n=1}^∞ an (nπc/L) sin(nπx/L) = g(x),

which represent the Fourier sine expansions of f(x) and g(x), respectively. The coefficients
an and bn are given by

an = (2/(nπc)) ∫_0^L g(x) sin(nπx/L) dx,    (9)
bn = (2/L) ∫_0^L f(x) sin(nπx/L) dx.    (10)
Thus,

u(x, t) = Σ_{n=1}^∞ sin(nπx/L) [an sin(nπct/L) + bn cos(nπct/L)].    (11)

The function u(x, t) given by (11), with coefficients (9) and (10), is a
solution of (1) that satisfies the conditions (2) and (3), provided that the series (11)
converges, and also that the series obtained by differentiating (11) twice (term-wise)
with respect to x and t converge and have sums uxx and utt, respectively, which
are continuous.
Note that each un in (8) represents a harmonic motion having the frequency ωn/2π =
cn/2L cycles per unit time, where ωn = nπc/L. This motion is called the nth normal mode of the string.
The first normal mode (n = 1) is known as the fundamental mode, and the others
are known as overtones.
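The coefficient formulas (9)-(10) and the series (11) can be exercised numerically. The following is a sketch under assumed values L = c = 1, zero initial velocity (so an = 0), and a single-mode initial displacement, for which the series is exact:

```python
import math

L_len, c = 1.0, 1.0  # hypothetical string length and wave speed

def f(x):
    """Single-mode initial displacement, so only b_2 is nonzero (b_2 = 0.3)."""
    return 0.3 * math.sin(2 * math.pi * x / L_len)

def b_n(n, m=2000):
    """b_n = (2/L) * integral of f(x) sin(n pi x / L) over [0, L], trapezoid rule (eq. (10))."""
    h = L_len / m
    s = sum((0.5 if i in (0, m) else 1.0) * f(i * h)
            * math.sin(n * math.pi * i * h / L_len) for i in range(m + 1))
    return 2.0 / L_len * s * h

def u(x, t, nmax=8):
    """Series (11) with a_n = 0 (zero initial velocity)."""
    return sum(b_n(n) * math.sin(n * math.pi * x / L_len)
               * math.cos(n * math.pi * c * t / L_len) for n in range(1, nmax + 1))

exact = 0.3 * math.sin(2 * math.pi * 0.25) * math.cos(2 * math.pi * 0.4)
err = abs(u(0.25, 0.4) - exact)
```

Because the data consist of one eigenmode, the numerically computed series reduces to that single standing wave, oscillating at its normal-mode frequency.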
Practice Problems
1. Solve the following IBVP:
utt = uxx,  0 < x < 1,  t > 0,
Lecture 5

Recall Duhamel's principle for the inhomogeneous heat equation, which arises due to internal
heat sources. We solved the inhomogeneous heat equation by solving a family of
related problems in which the source appears in the initial conditions instead of the
differential equation. The same idea works for inhomogeneous wave equations. To illustrate
the procedure, let us consider the following infinite string problem:

PDE:  utt = c² uxx + h(x, t),  −∞ < x < ∞,  t > 0,    (1)
IC:   u(x, 0) = 0,  ut(x, 0) = 0.    (2)

To motivate the method of Duhamel for the string problem, let the acceleration h(x, s) be
applied to the string at t = s − Δs and let the acceleration be turned off at t = s. The
string will then acquire a velocity of h(x, s)Δs, and its change in position is h(x, s)(Δs)²/2.
Assuming Δs to be small enough, the change in position can be neglected. The effect of
the imposed acceleration is v(x, t; s)Δs, where v(x, t; s) is the solution of

PDE:  vtt = c² vxx,  −∞ < x < ∞,  t ≥ s,    (3)
IC:   v(x, s; s) = 0,  vt(x, s; s) = h(x, s).    (4)
This problem has initial conditions given at the arbitrary time t = s, instead of t = 0. We
can write v(x, t; s) = V(x, t − s; s), where V(x, t; s) solves

PDE:  Vtt = c² Vxx,  −∞ < x < ∞,  t ≥ 0,    (5)
IC:   V(x, 0; s) = 0,  Vt(x, 0; s) = h(x, s).    (6)

By D'Alembert's formula,

V(x, t; s) = (1/2c) ∫_{x−ct}^{x+ct} h(r, s) dr,    (7)

so that

v(x, t; s) = (1/2c) ∫_{x−c(t−s)}^{x+c(t−s)} h(r, s) dr.

Hence,

u(x, t) = ∫_0^t v(x, t; s) ds = ∫_0^t V(x, t − s; s) ds = (1/2c) ∫_0^t ∫_{x−c(t−s)}^{x+c(t−s)} h(r, s) dr ds.    (8)
Note that V(x, t; s) is in C² since h(x, t) is assumed to be in C¹. Differentiating (8) with
respect to t (Leibniz rule) gives

ut(x, t) = V(x, 0; t) + ∫_0^t Vt(x, t − s; s) ds = ∫_0^t Vt(x, t − s; s) ds,    (9)

and differentiating once more,

utt(x, t) = Vt(x, 0; t) + ∫_0^t Vtt(x, t − s; s) ds = h(x, t) + c² uxx(x, t),

so the function u defined by (8) indeed solves (1)-(2).
EXAMPLE. Solve

utt − uxx = x − t,  −∞ < x, t < ∞,  u(x, 0) = x⁴,  ut(x, 0) = sin(x).    (10)

Solution. Split the problem (10) into two problems, where u1(x, t) and u2(x, t)
solve

(u1)tt − (u1)xx = 0,  u1(x, 0) = x⁴,  (u1)t(x, 0) = sin(x),

and

(u2)tt − (u2)xx = x − t,  u2(x, 0) = 0,  (u2)t(x, 0) = 0,

respectively. The solution of (10) is then u(x, t) = u1(x, t) + u2(x, t). By D'Alembert's
formula,

u1(x, t) = (1/2)[(x + t)⁴ + (x − t)⁴] − (1/2)[cos(x + t) − cos(x − t)].
By Duhamel's formula (8), with c = 1 and h(r, s) = r − s,

u2(x, t) = (1/2) ∫_0^t ∫_{x−(t−s)}^{x+(t−s)} (r − s) dr ds
         = (1/2) ∫_0^t [ r²/2 − sr ]_{r = x−t+s}^{r = x+t−s} ds
         = (1/2) ∫_0^t [ (x + t − s)²/2 − (x + s − t)²/2 − s(x + t − s) + s(x + s − t) ] ds
         = (1/2) ∫_0^t [ 2x(t − s) − 2s(t − s) ] ds
         = ∫_0^t (t − s)(x − s) ds
         = −t³/6 + t²x/2.

Thus,

u(x, t) = (1/2)[(x + t)⁴ + (x − t)⁴] − (1/2)[cos(x + t) − cos(x − t)] − t³/6 + t²x/2.
Practice Problems
1. Solve the following nonhomogeneous IBVP:
utt = uxx + x sin t,  0 < x < 1,  t > 0,
∇²u := Σ_{i=1}^n ∂²u/∂xi² = 0.    (1)

The study of solutions of Laplace's equation is called potential theory. The equation (1) is often referred to as the
potential equation, as the function u is frequently a potential function. Solutions of (1)
that have continuous second-order partial derivatives are called harmonic functions. For
ease of exposition, we shall study Laplace's equation in two dimensions.
This module consists of six lectures. The first lecture introduces some basic concepts
and the maximum and minimum principle for boundary value problems (BVP). In the
second lecture, we discuss Green's identities, the fundamental solution of the Laplace
equation and the Poisson integral formula. The solution of the Laplace equation on a
rectangular region is discussed in the third lecture. The mixed BVP for a rectangle is
discussed in the fourth lecture. In the fifth lecture, we solve the Laplace equation on the
annular region between concentric circles. Finally, the sixth lecture is devoted to the
interior and exterior Dirichlet problems for the Laplace equation.
Lecture 1

Let Ω be an open region in R². The Laplace equation in two dimensions is of the form

∇²u(x, y) = 0,  (x, y) ∈ Ω,    (1)

where ∇² := ∂²/∂x² + ∂²/∂y² is the Laplace operator or the Laplacian. Equations of the type
(1) play an important role in a variety of physical contexts, such as gravitation theory,
electrostatics, steady-state heat conduction problems and fluid flow problems.

Some examples of physical problems (cf. [10]):
Some examples of physical problems(cf. [10]):
EXAMPLE 1. (Gravitation theory) The force of attraction F, both inside and outside the
attracting matter, can be expressed in terms of a gravitational potential u by the equation

F = ∇u.

In empty space, u satisfies Laplace's equation

∇²u = 0.
EXAMPLE 2. (Steady-state heat flow problem) In the theory of heat conduction, if the
temperature u does not vary with time, then u satisfies the equation

∇ · (κ ∇u) = 0,

where κ is the thermal conductivity. If κ is constant throughout the medium, then

∇²u = 0.
EXAMPLE 3. (Fluid flow problem) The velocity q of a perfect fluid in irrotational motion
can be expressed in terms of a velocity potential u by the equation

q = ∇u.

If there are no sources or sinks at any point of the fluid, the function u satisfies Laplace's
equation

∇²u = 0.
The inhomogeneous Laplace equation

∇²u(x, y) = f(x, y) in Ω,

where f is a given function, is known as the Poisson equation.
Types of BVP

Because these solutions do not depend on time, initial conditions are irrelevant and only
boundary conditions are specified. There are three basic types of boundary conditions
that are usually associated with Laplace's equation:

Dirichlet BVP: The BC are of Dirichlet type if the solution u(x, y) to the Laplace
equation in a domain Ω is specified on the boundary ∂Ω, i.e.,

u(x, y) = f(x, y) on ∂Ω,

where f(x, y) is a given function. The Laplace equation together with a Dirichlet BC
is called the Dirichlet problem / Dirichlet BVP. The Dirichlet problem for the
Laplace equation is of the form

∇²u(x, y) = 0 in Ω;  u(x, y) = f(x, y) on ∂Ω.
Neumann BVP: The BC are of Neumann type if the directional derivative ∂u/∂n along
the outward normal is specified on the boundary, i.e.,

∂u/∂n (x, y) = g(x, y) for (x, y) ∈ ∂Ω.

In physical terms, the normal component of the solution gradient is known on the
boundary. In a steady-state heat flow problem, a Neumann BC means that the rate of heat
loss or gain through the boundary points is prescribed.

The Laplace equation together with a Neumann BC is called the Neumann BVP /
Neumann problem, which is written as

∇²u = 0 in Ω;  ∂u/∂n (x, y) = g(x, y) for (x, y) ∈ ∂Ω.

The Neumann problem has no solution unless the average
value of the function g on ∂Ω is zero. This assumption is known as the compatibility
condition:

∫_∂Ω ∂u/∂n ds = ∫_∂Ω g ds = 0,

which will be discussed in the next lecture.
Robin BVP: The boundary conditions are of Robin (mixed) type if a Dirichlet BC is
specified on part of the boundary and a Neumann-type BC is
specified on the remaining part of the boundary. For example,

∂u/∂n + c(u − g) = 0,

where c is a constant and g is a given function that can vary over the boundary. The
Laplace equation together with a Robin/mixed BC is known as the Robin BVP /
mixed BVP.
The maximum/minimum principle for Laplace's equation is stated in the following theorem.

THEOREM 4. (The maximum/minimum principle for Laplace's equation)
Let u(x, y) ∈ C²(Ω) ∩ C(Ω̄) be a solution of Laplace's equation

∇²u(x, y) := uxx + uyy = 0    (2)

in a bounded region Ω with boundary ∂Ω. Then the maximum and minimum values of u
are attained on ∂Ω. That is,

max_Ω̄ u(x, y) = max_∂Ω u(x, y)  and  min_Ω̄ u(x, y) = min_∂Ω u(x, y).    (3)

Sketch of proof. Suppose, to the contrary, that the maximum M0 = u(x0, y0) is attained at an
interior point (x0, y0) and exceeds the boundary maximum Mb. Consider

v(x, y) = u(x, y) + ε[(x − x0)² + (y − y0)²],  0 < ε < (M0 − Mb)/d²,

where d is the diameter of Ω. For such ε, the maximum of v cannot occur on ∂Ω because

M0 = v(x0, y0) > max_∂Ω v(x, y).    (4)

So v attains its maximum at an interior point, where vxx(x0, y0) ≤ 0 and vyy(x0, y0) ≤ 0, i.e.,
∇²v ≤ 0; but ∇²v = ∇²u + 4ε = 4ε > 0, a contradiction.
THEOREM. (Uniqueness) The solution of the Dirichlet problem

∇²u = 0 in Ω,  u(x, y) = g(x, y) on ∂Ω,    (6)

if it exists, is unique.

Proof. Let u1(x, y) and u2(x, y) be two solutions of (6). Set v(x, y) = u1(x, y) −
u2(x, y). Then v satisfies

∇²v = 0 in Ω,  v = 0 on ∂Ω.

The maximum/minimum principle (cf. Theorem 4) yields

v = 0 in Ω  ⟹  u1 − u2 = 0 in Ω.

Thus, we have u1 = u2, which proves the uniqueness.
Next, we shall prove the continuous dependence of the solution on the boundary data.
THEOREM 7. The solution of the Dirichlet problem depends continuously on the boundary
data.
Proof. Let ui, i = 1, 2, be the solutions of

∇²ui = F in Ω ⊂ R²,  ui = fi on ∂Ω.

Then the function v = u1 − u2 solves

∇²v = 0 in Ω with v = f1 − f2 on ∂Ω.

By the maximum/minimum principle, v attains its maximum/minimum on ∂Ω. Thus, for
all (x, y) ∈ Ω̄, we have

−max_∂Ω |f1 − f2| ≤ min_∂Ω (f1 − f2) ≤ v(x, y) ≤ max_∂Ω (f1 − f2) ≤ max_∂Ω |f1 − f2|.

Therefore,

max_∂Ω |f1 − f2| < ε  ⟹  |v(x, y)| < ε  for all (x, y) ∈ Ω̄.

This completes the proof.
Practice Problems
1. Let u satisfy the Laplace equation in the disk Ω = {(x, y) | x² + y² < 1} and be continuous
on Ω̄. If u(cos θ, sin θ) ≤ sin θ + cos(2θ), then show that

u(x, y) ≤ y + x² − y²,  (x, y) ∈ Ω.
Lecture 2
In this lecture, we shall learn about some important identities known as Greens identities
and its special forms. As a consequence of these identities we can prove the uniqueness of
the solution to the Dirichlet problem and the compatibility conditions for the Neumann
problems. The fundamental solutions for the Laplace equation will be discussed.
Let Ω be a bounded domain in R² with smooth boundary ∂Ω. Recall the following
consequence of the Gauss divergence theorem: for u, v ∈ C¹(Ω̄),

∫_Ω u ∂v/∂xk dx = ∫_∂Ω u v nk ds − ∫_Ω v ∂u/∂xk dx,    (1)

where n = (n1, n2) is the outward unit normal to the boundary ∂Ω and ds is the element of arc length.
As a consequence of the Gauss divergence theorem, the following identity, known as Green's first
identity, holds true:

∫_Ω v ∇²u dx = ∫_∂Ω v ∂u/∂n ds − ∫_Ω ∇u · ∇v dx.    (2)
Integrating the second term on the right-hand side once more by parts, we obtain Green's second identity

∫_Ω v ∇²u dx = ∫_Ω u ∇²v dx + ∫_∂Ω ( v ∂u/∂n − u ∂v/∂n ) ds.    (3)

Choosing v = 1 in (2) gives the special case

∫_Ω ∇²u dx = ∫_∂Ω ∂u/∂n ds.    (4)
Another special case of interest is obtained by choosing v = u. In this case, equation (2) yields
the energy identity

∫_Ω u ∇²u dx = ∫_∂Ω u ∂u/∂n ds − ∫_Ω |∇u|² dx.    (5)

If ∇²u = 0 in Ω, then for u ∈ C²(Ω̄) with u = 0 (or ∂u/∂n = 0) on ∂Ω, it follows from (5) that

∫_Ω |∇u|² dx = 0  ⟹  ∇u = 0  ⟹  u = constant.

This observation leads to uniqueness theorems for the Dirichlet problem and the Neumann
problem.
REMARK 1. Using Green's identity (2), one can easily prove that:

(i) A solution u ∈ C²(Ω̄) of the Dirichlet problem is determined uniquely.

(ii) A solution u ∈ C²(Ω̄) of the Neumann problem is determined uniquely up to an
additive constant.

Observe that a solution of the Neumann problem can only exist if the data satisfy
a condition known as the compatibility condition. For example, the compatibility condition
for the Neumann problem

∇²u = 0 in Ω,  ∂u/∂n = g on ∂Ω

is

∫_∂Ω g ds = 0,    (6)

which follows from (4).
An important property of the Laplace operator is its spherical symmetry: the Laplace equation is preserved under rotations about a point ξ.
Therefore, it is reasonable to look for special solutions v(x) of Laplace's equation that
are invariant under rotations about ξ. Such solutions would be of the form

v = ψ(r),    (7)

where

r = |x − ξ| = sqrt( Σ_{i=1}^n (xi − ξi)² )

represents the Euclidean distance between x and ξ. By the chain rule of differentiation,
we find that

∂r/∂xi = (1/2) ( Σ_{i=1}^n (xi − ξi)² )^{−1/2} · 2(xi − ξi) = (xi − ξi)/r,

so that

v_{xi} = ψ′(r) (xi − ξi)/r,
v_{xi xi} = ψ″(r) (xi − ξi)²/r² + ψ′(r) ( 1/r − (xi − ξi)²/r³ ).

Hence,

∇²v = Σ_{i=1}^n v_{xi xi} = ψ″(r) + ((n − 1)/r) ψ′(r) = 0.
If ψ′(r) ≠ 0, we have

ψ″(r)/ψ′(r) = (1 − n)/r.

For n = 2, integrating gives ψ′(r) = c/r and hence ψ(r) = c log r + d. In particular,

v(x) = (1/2π) log r,  r > 0,

is a fundamental solution of the two-dimensional Laplace equation. For a proof, see [5].
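That (1/2π) log r is harmonic away from the origin can be checked with a five-point finite-difference Laplacian. This is a numerical sketch; the sample point and the step size h = 10⁻³ are arbitrary:

```python
import math

def v(x, y):
    """Candidate fundamental solution (1/(2*pi)) * log r, with r = sqrt(x^2 + y^2)."""
    return math.log(math.hypot(x, y)) / (2 * math.pi)

def laplacian(x, y, h=1e-3):
    """Five-point finite-difference approximation of v_xx + v_yy."""
    return (v(x + h, y) + v(x - h, y) + v(x, y + h) + v(x, y - h)
            - 4 * v(x, y)) / h ** 2

lap = abs(laplacian(0.8, -0.6))  # a point away from the singularity at the origin
```

The discrete Laplacian vanishes to truncation-error accuracy everywhere except at the singular point ξ = 0, which is exactly what "fundamental solution" expresses.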
The Poisson Integral Formula. We know that a function u ∈ C²(Ω) satisfying the Laplace
equation ∇²u = 0 is harmonic. The following result expresses the solution of the Dirichlet
problem on a disk in terms of an integral known as the Poisson integral formula.

THEOREM 2. (The Poisson integral formula) Let f(θ) be a continuous function with
f(θ + 2π) = f(θ). Define

u(r, θ) = (1/2π) ∫_0^{2π} (r0² − r²) f(s) / (r0² − 2rr0 cos(θ − s) + r²) ds,  r < r0,
u(r0, θ) = f(θ),  r = r0,

where u(r, θ) = u(x, y) = u(r cos θ, r sin θ). Then u(r, θ) is harmonic on the open disk
D = {(x, y) | (x² + y²)^{1/2} < r0}.
Some consequences of the Poisson integral formula are given below.
THEOREM 3. (Mean value property) Let u be a harmonic function on some region Ω. The value of u at the
center of any disk D with D̄ ⊂ Ω is the average (or mean) of the values of u on the
circular boundary ∂D of D.
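Theorem 3 is easy to test numerically. A sketch using the harmonic polynomial u = x² − y² (the function and the disk are arbitrary choices) and an equispaced sample of the boundary circle:

```python
import math

def u(x, y):
    """A harmonic polynomial: Re((x + iy)^2) = x^2 - y^2."""
    return x * x - y * y

def circle_average(cx, cy, rho, m=1000):
    """Average of u over m equispaced points of the circle of radius rho about (cx, cy)."""
    total = 0.0
    for k in range(m):
        th = 2 * math.pi * k / m
        total += u(cx + rho * math.cos(th), cy + rho * math.sin(th))
    return total / m

err = abs(circle_average(0.3, -0.2, 0.5) - u(0.3, -0.2))
```

For this polynomial the ρ²-terms cancel exactly on the circle, so the discrete average matches the center value to machine precision.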
Note: The mean value property can be used to prove the maximum and minimum principle for solutions of Laplace's equation. It can be used to show that whenever the
maximum or minimum is attained in the interior of the region, the solution u must be
identically constant. This is the strong maximum and minimum principle for Laplace's
equation.
THEOREM 4. (The strong maximum/minimum principle) Let u be a harmonic
function on an open connected set . Suppose that the maximum or minimum of u is
attained at some point in . Then u must be constant throughout .
By definition, a harmonic function u on an open region Ω is only required to
be C²(Ω). But u is actually C∞(Ω) (infinitely differentiable). Thus, we have the
following result.
THEOREM 5. (Regularity result) If u is harmonic on an open region , then u
C ().
Practice Problems
1. Prove that a solution of the Neumann problem

∇²u = f in Ω,  ∂u/∂n = g on ∂Ω

differs from another solution by a constant.
2. Prove that u1(x, y) = 1 + log(x² + y²) and u2(x, y) = 1 − log(x² + y²) are harmonic
where defined. Note that u1 = u2 on the circle x² + y² = 1, but they are unequal inside
the circle. Why does this not contradict the uniqueness theorem for the Dirichlet
problem?
3. Let u be harmonic in the disk x² + y² < r0². If u achieves its maximum at the point
(0, 0), then show that u must be constant throughout this disk.
Lecture 3

In this lecture, we shall discuss the solution of the Laplace equation with Dirichlet-type
BC in Cartesian coordinates.
Consider the following Dirichlet problem in a rectangle:

PDE:  uxx + uyy = 0,  0 < x < a,  0 < y < b,    (1)
BC:   u(x, 0) = f1(x),  u(x, b) = f2(x),  0 ≤ x ≤ a,    (2)
      with data also prescribed on the sides x = 0 and x = a.

By linearity, it suffices to treat the case in which only one boundary function is nonzero. We
therefore first solve

PDE:  uxx + uyy = 0,  0 < x < a,  0 < y < b,    (3)
BC:   u(x, 0) = f(x),  u(x, b) = 0,  0 ≤ x ≤ a,    (4)
      u(0, y) = 0,  u(a, y) = 0,  0 ≤ y ≤ b.
Apply the method of separation of variables to solve this problem. The step-wise
solution procedure is given below.
Step 1: (Reducing to ODEs)
Separating variables, we seek a solution of the form

u(x, y) = X(x)Y(y).

Substituting this into (3), we obtain

X″(x)Y(y) + X(x)Y″(y) = 0

and hence,

X″(x)/X(x) = −Y″(y)/Y(y) = k,

for some constant k, which is called the separation constant. This leads to the two ODEs

X″(x) − kX(x) = 0,    (5)
Y″(y) + kY(y) = 0.    (6)
Case 2: Suppose k = 0. Then X(x) = A + Bx and Y(y) = C + Dy. Therefore,

u(x, y) = (A + Bx)(C + Dy).

Case 3: Suppose k < 0; set k = −λ², where λ > 0.
The solutions of the ODEs are given by

X(x) = A cos(λx) + B sin(λx),
Y(y) = C e^{λy} + D e^{−λy}.

Thus, the solution of the PDE is

u(x, y) = [A cos(λx) + B sin(λx)][C e^{λy} + D e^{−λy}].
Step 3: (Applying the BC)
Using the boundary conditions u(0, y) = 0 and u(a, y) = 0 on the product solution
obtained for the case k > 0 leads to the equations

A + B = 0,  A e^{λa} + B e^{−λa} = 0,

which have only the trivial solution A = B = 0. Thus, only the trivial solution u(x, y) = 0
is possible. Similarly, use of the boundary conditions u(0, y) = 0 and u(a, y) = 0 also leads
to the trivial solution u(x, y) = 0 for the case k = 0. Let us examine the product solution
obtained in Case 3 (for k < 0), i.e.,

u(x, y) = [A cos(λx) + B sin(λx)][C e^{λy} + D e^{−λy}].

Using the boundary condition u(0, y) = 0 yields A = 0. The condition u(a, y) = 0 gives

B sin(λa)[C e^{λy} + D e^{−λy}] = 0.
For a non-trivial solution,

B ≠ 0  ⟹  sin(λa) = 0  ⟹  λa = nπ, i.e., λ = nπ/a,  n = 1, 2, 3, . . ..

Thus, we obtain the solutions

un(x, y) = sin(nπx/a) [Cn e^{nπy/a} + Dn e^{−nπy/a}].

The boundary condition u(x, b) = 0 requires Cn e^{nπb/a} + Dn e^{−nπb/a} = 0, i.e.,

Dn = −Cn e^{2nπb/a},  n = 1, 2, . . ..

Absorbing the constants into cn, the solutions can be written as

un(x, y) = cn sin(nπx/a) sinh(nπ(b − y)/a),

and, by superposition,

u(x, y) = Σ_{n=1}^∞ cn sin(nπx/a) sinh(nπ(b − y)/a).
Applying the BC u(x, 0) = f(x) gives

Σ_{n=1}^∞ cn sinh(nπb/a) sin(nπx/a) = f(x),

so that

cn sinh(nπb/a) = (2/a) ∫_0^a f(x) sin(nπx/a) dx,

and this implies

cn = 2 / (a sinh(nπb/a)) ∫_0^a f(x) sin(nπx/a) dx.    (7)

Thus, the solution of (3)-(4) is

u(x, y) = Σ_{n=1}^∞ cn sin(nπx/a) sinh(nπ(b − y)/a),

with the cn given by (7).
More generally, the solution of

uxx + uyy = 0,  0 < x < a,  0 < y < b,

with boundary data on all four sides, is

u(x, y) = Σ_{n=1}^∞ [ An sin(nπx/a) sinh(nπ(b − y)/a)
        + Bn sin(nπx/a) sinh(nπy/a)
        + Cn sin(nπy/b) sinh(nπ(a − x)/b)
        + Dn sin(nπy/b) sinh(nπx/b) ],

where

An = an / sinh(nπb/a),  Bn = bn / sinh(nπb/a),
Cn = cn / sinh(nπa/b),  Dn = dn / sinh(nπa/b),

and an, bn, cn, dn are the Fourier sine coefficients of the data on the four sides.
Practice Problems
1. Solve the following BVP:

uxx + uyy = 0,  0 < x < π,  0 < y < 1,
u(0, y) = 0,  u(π, y) = 0,  0 ≤ y ≤ 1,
u(x, 0) = sin x,  u(x, 1) = sin x,  0 ≤ x ≤ π.
Lecture 4
In this lecture, we shall consider solving the mixed BVP for the Laplace equation. To
begin with, let us consider the following Neumann problem for a rectangle:

PDE:  uxx + uyy = 0,  0 < x < a,  0 < y < b,    (1)
BC:   uy(x, 0) = f(x),  uy(x, b) = g(x),  0 ≤ x ≤ a,    (2)
      ux(0, y) = h(y),  ux(a, y) = k(y),  0 ≤ y ≤ b.

This problem has no solution unless the following compatibility condition holds:

∫_0^a g(x) dx − ∫_0^a f(x) dx + ∫_0^b k(y) dy − ∫_0^b h(y) dy = 0,
where we have used the fundamental theorem of calculus and Fubini's theorem.

REMARK 1. By Green's theorem,

∫_C ∇u · n ds = ∫_C ux dy − uy dx = ∫∫_Ω ∇²u dx dy,

i.e., the flux of the gradient of u through the boundary is the integral of ∇²u in the
interior.

Note that we only require that ux and uy be continuous on the closed rectangle.
Further, we do not demand that the second partials of u extend continuously to the
closed rectangle.
We now consider solving the Laplace equation with mixed-type boundary conditions.

EXAMPLE 2. Solve the following BVP:

PDE:  uxx + uyy = 0,  0 < x < a,  0 < y < b,    (3)
BC:   u(x, 0) = 0,  u(x, b) = 0,  0 ≤ x ≤ a,    (4)
      u(0, y) = g(y),  ux(a, y) = h(y),  0 ≤ y ≤ b.

By linearity, write u = u1 + u2, where u1 satisfies (3) with

(BC)1:  u1(x, 0) = u1(x, b) = 0,  u1(0, y) = g(y),  u1x(a, y) = 0,

and u2 satisfies (3) with

(BC)2:  u2(x, 0) = u2(x, b) = 0,  u2(0, y) = 0,  u2x(a, y) = h(y).

Step 1. (Solving for u1): Suppose u1(x, y) = X(x)Y(y) satisfies (3) and (BC)1. Separation of
variables gives

X″(x)/X(x) + Y″(y)/Y(y) = 0,    (5)

with the boundary conditions

Y(0) = Y(b) = 0;  X′(a) = 0.
The eigenvalue problem for Y gives

Yn(y) = sin(nπy/b),  with separation constant (nπ/b)²,  n ∈ N.

The corresponding equation X″(x) − (nπ/b)² X(x) = 0 with X′(a) = 0 has the solutions

Xn(x) = an [ cosh(nπx/b) − tanh(nπa/b) sinh(nπx/b) ].

Thus,

u1(x, y) = Σ_{n=1}^∞ an [ cosh(nπx/b) − tanh(nπa/b) sinh(nπx/b) ] sin(nπy/b).    (9)

The condition u1(0, y) = g(y) gives

Σ_{n=1}^∞ an sin(nπy/b) = g(y),  0 ≤ y ≤ b,

with the an's given by

an = (2/b) ∫_0^b g(y) sin(nπy/b) dy.    (10)
Step 2. (Solving for u2): Suppose u2(x, y) = X(x)Y(y) satisfies (3) and (BC)2. Arguing
as before, we have the separated equation (5) for X(x) and Y(y) with the boundary conditions

Y(0) = Y(b) = 0;  X(0) = 0.

The eigenfunctions are again

Yn(y) = sin(nπy/b),  n ∈ N,

and the equation X″(x) − (nπ/b)² X(x) = 0 with X(0) = 0 gives

Xn(x) = sinh(nπx/b),  n ∈ N.

Thus,

u2(x, y) = Σ_{n=1}^∞ bn sinh(nπx/b) sin(nπy/b),    (11)

which must satisfy the boundary condition u2x(a, y) = h(y). This leads to

bn = 2 / (nπ cosh(nπa/b)) ∫_0^b h(y) sin(nπy/b) dy.    (12)
Practice Problems

1. Solve the following Neumann BVP:

uxx + uyy = 0,  0 < x < a,  0 < y < b.

What compatibility condition on the data is needed?

2. Find a solution of the Neumann BVP:

uxx + uyy = 0,  0 < x < a,  0 < y < b,

where the data h satisfies ∫_0^a h(x) dx = 0.
Lecture 5

In this lecture, we solve the Laplace equation on the annulus r1 < r < r2. In polar
coordinates x = r cos θ and y = r sin θ, the problem reads

Urr + Ur/r + Uθθ/r² = 0,  r1 < r < r2,    (1)
U(r1, θ) = g(θ),  U(r2, θ) = f(θ),  f(θ + 2π) = f(θ),  g(θ + 2π) = g(θ),    (2)
PC:  U(r, θ + 2π) = U(r, θ),  r1 < r < r2,

where −π < θ ≤ π and PC stands for the periodicity condition. Here, f and g are
continuous periodic functions with period 2π. Separating variables, U(r, θ) = R(r)T(θ) leads to

T″(θ) + c T(θ) = 0,    (3)
r² R″(r) + r R′(r) − c R(r) = 0.    (4)
Note that we get periodic solutions of period 2π when c = n², for
n = 0, 1, 2, . . .. In this case, solving (3) we obtain

Tn(θ) = an cos(nθ) + bn sin(nθ),  n = 0, 1, 2, . . .,    (5)

where an and bn are arbitrary constants. With c = n², equation (4) for R(r) is the
Cauchy-Euler equation

r² R″(r) + r R′(r) − n² R(r) = 0.    (6)

This equation can be solved by taking R(r) = r^m. Substituting this into (6), we get

r² m(m − 1) r^{m−2} + r m r^{m−1} − n² r^m = 0,

i.e.,

(m² − n²) r^m = 0,    (7)

so r^m is a solution if m = ±n; for n = 0 the two solutions are 1 and log r. Thus,

Un(r, θ) = (an r^n + αn r^{−n}) cos(nθ) + (bn r^n + βn r^{−n}) sin(nθ),    (8)

and

U(r, θ) = a0 + α0 log r + Σ_{n=1}^∞ Un(r, θ).    (9)
Expand the boundary data in Fourier series:

f(θ) = A0/2 + Σ_{n=1}^∞ [An cos(nθ) + Bn sin(nθ)],    (10)
g(θ) = C0/2 + Σ_{n=1}^∞ [Cn cos(nθ) + Dn sin(nθ)].    (11)

Matching (9) with the boundary conditions at r = r2 and r = r1 gives, for each n,

a0 + α0 log(r2) = A0/2,  a0 + α0 log(r1) = C0/2,    (12)
an r2^n + αn r2^{−n} = An,  an r1^n + αn r1^{−n} = Cn,    (13)
bn r2^n + βn r2^{−n} = Bn,  bn r1^n + βn r1^{−n} = Dn.    (14)
Solving for a0, α0 from (12), an, αn from (13) and bn, βn from (14), we obtain

a0 = ( (1/2) C0 log r2 − (1/2) A0 log r1 ) / log Q,    α0 = ( (1/2) A0 − (1/2) C0 ) / log Q,    (15)
an = (An r1^{−n} − Cn r2^{−n}) / (Q^n − Q^{−n}),    αn = (Cn r2^n − An r1^n) / (Q^n − Q^{−n}),    (16)
bn = (Bn r1^{−n} − Dn r2^{−n}) / (Q^n − Q^{−n}),    βn = (Dn r2^n − Bn r1^n) / (Q^n − Q^{−n}),    (17)

where Q = r2/r1. This provides us with the constants an, αn, bn, βn in terms of the given
Fourier coefficients An, Bn, Cn, Dn of f(θ) and g(θ).
Thus, the solution of (2), where f(θ) and g(θ) are given by (10)-(11), is

U(r, θ) = a0 + α0 log r + Σ_{n=1}^∞ { [an r^n + αn r^{−n}] cos(nθ) + [bn r^n + βn r^{−n}] sin(nθ) },

where a0, α0, an, αn, bn, βn are defined by (15)-(17).
EXAMPLE 1. Solve the following Dirichlet problem:

Urr + Ur/r + Uθθ/r² = 0,  1 < r < 2,
U(1, θ) = 1 + 4 cos(2θ),  U(2, θ) = 2 + 5 sin(θ),    (18)
U(r, θ + 2π) = U(r, θ),  1 < r < 2.

Solution. Here r1 = 1 and r2 = 2, so the nonzero Fourier coefficients of the data are
C0 = 2, C2 = 4 (from U(1, θ)) and A0 = 4, B1 = 5 (from U(2, θ)). The systems (12)-(14)
reduce to

a0 + α0 log(2) = 2,  a0 = 1,
2b1 + β1/2 = 5,  b1 + β1 = 0,
4a2 + α2/4 = 0,  a2 + α2 = 4.

Solving these, we obtain

a0 = 1,  α0 = 1/log(2),  b1 = 10/3,  β1 = −10/3,  a2 = −4/15,  α2 = 64/15.

All other systems in (15)-(17) have only zero solutions. The solution of (18) is then

U(r, θ) = 1 + log(r)/log(2) + (10r/3 − 10r^{−1}/3) sin(θ)
          + (−4r²/15 + 64r^{−2}/15) cos(2θ).
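The stated solution can be checked directly against both pieces of boundary data (a small numerical sketch, sampling one arbitrary angle):

```python
import math

def U(r, th):
    """Series solution of Example 1 on the annulus 1 < r < 2."""
    return (1 + math.log(r) / math.log(2)
            + (10 * r / 3 - 10 / (3 * r)) * math.sin(th)
            + (-4 * r ** 2 / 15 + 64 / (15 * r ** 2)) * math.cos(2 * th))

th = 0.9
inner_err = abs(U(1.0, th) - (1 + 4 * math.cos(2 * th)))  # U(1, th) = 1 + 4 cos(2 th)
outer_err = abs(U(2.0, th) - (2 + 5 * math.sin(th)))      # U(2, th) = 2 + 5 sin(th)
```

At r = 1 the sin(θ) pair cancels and the cos(2θ) pair sums to 4; at r = 2 the roles reverse, confirming the coefficient computation.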
EXAMPLE 2. Solve the following problem:

Urr + Ur/r + Uθθ/r² = 0,  1 < r < 2,
U(1, θ) = 0,  U(2, θ) = sin(θ),  0 ≤ θ ≤ 2π.

Practice Problems

1. Solve the BVP

Urr + (1/r) Ur + (1/r²) Uθθ = 0,  1 < r < 2,
U(2, θ) = 1 + 4 cos θ + cos(2θ),  U(1, θ) = sin(2θ),    (19)
U(r, θ + 2π) = U(r, θ).
Lecture 6

The Dirichlet problem in a disk of radius r0 centered at (0, 0) can be expressed as

PDE:  Urr + Ur/r + Uθθ/r² = 0,  0 ≤ r < r0,  −π ≤ θ ≤ π,    (1)
BC:   U(r0, θ) = f(θ),  −π ≤ θ ≤ π,    (2)
PC:   U(r, θ + 2π) = U(r, θ).    (3)

Separating variables as in the previous lecture, the periodicity condition selects the
separation constants k = n², n = 1, 2, . . ..    (4)
Boundedness at r = 0 rules out the solutions r^{−n} and log r; the admissible radial
solutions are r^n, corresponding to k = n², n = 1, 2, . . ..    (5)

For n = 0, the bounded solution is the constant A0/2.    (6)

Thus,

U(r, θ) = A0/2 + Σ_{n=1}^∞ Cn r^n [An cos(nθ) + Bn sin(nθ)].

Normalizing Cn = r0^{−n}, we write

U(r, θ) = A0/2 + Σ_{n=1}^∞ (r/r0)^n [An cos(nθ) + Bn sin(nθ)],    (7)

where the An's and Bn's are constants that can be determined from the
boundary condition. With r = r0 in (7), we have

f(θ) = A0/2 + Σ_{n=1}^∞ [An cos(nθ) + Bn sin(nθ)].
Hence, An and Bn are the Fourier coefficients of f:

An = (1/π) ∫_{−π}^{π} f(θ) cos(nθ) dθ,  n = 0, 1, . . .,    (8)
Bn = (1/π) ∫_{−π}^{π} f(θ) sin(nθ) dθ,  n = 1, 2, . . ..    (9)
EXAMPLE 1. Solve

Urr + Ur/r + Uθθ/r² = 0,  0 ≤ r < 1,
U(1, θ) = f(θ) = 1 + sin θ + (1/2) sin(3θ) + cos(4θ).

Solution. Here r0 = 1. Note that f(θ) is already in the form of a Fourier series, with

An = 2 for n = 0,  1 for n = 4,  0 for other n;
Bn = 1 for n = 1,  1/2 for n = 3,  0 for other n.

The solution of the BVP is

U(r, θ) = A0/2 + Σ_{n=1}^∞ (r/r0)^n [An cos(nθ) + Bn sin(nθ)]
        = 1 + r sin θ + (r³/2) sin(3θ) + r⁴ cos(4θ).
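A quick check that this series solution matches the boundary data, and that its value at the center (which is 1) equals its average over a concentric circle, as the mean value property requires:

```python
import math

def U(r, th):
    """Solution of the disk example."""
    return (1 + r * math.sin(th) + r ** 3 / 2 * math.sin(3 * th)
            + r ** 4 * math.cos(4 * th))

def f(th):
    """Prescribed boundary data."""
    return 1 + math.sin(th) + 0.5 * math.sin(3 * th) + math.cos(4 * th)

bc_err = abs(U(1.0, -1.2) - f(-1.2))  # boundary values match at r = 1

# mean value property: the average over any concentric circle equals U at the center
avg = sum(U(0.5, 2 * math.pi * k / 720) for k in range(720)) / 720
mv_err = abs(avg - 1.0)
```

Every trigonometric mode averages to zero over a full circle, so only the constant term 1 survives in the average.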
Exterior Dirichlet Problem: We shall now discuss the exterior Dirichlet problem, i.e., the
Dirichlet problem outside the circle. The exterior Dirichlet problem is given by

PDE:  Urr + Ur/r + Uθθ/r² = 0,  1 ≤ r < ∞,
BC:   U(1, θ) = f(θ),  0 ≤ θ ≤ 2π.
This problem is solved exactly in a manner similar to the interior Dirichlet problem. We
assume that the solutions are bounded as r → ∞. Basically, we throw out the solutions

r^n cos(nθ),  r^n sin(nθ),  log r,

and obtain

U(r, θ) = A0 + Σ_{n=1}^∞ r^{−n} [An cos(nθ) + Bn sin(nθ)],    (10)

where

A0 = (1/2π) ∫_0^{2π} f(θ) dθ,
An = (1/π) ∫_0^{2π} f(θ) cos(nθ) dθ,
Bn = (1/π) ∫_0^{2π} f(θ) sin(nθ) dθ.
Practice Problems

1. Solve the Dirichlet problem

Uxx + Uyy = 0,  x² + y² < 1,
U(1, θ) = sin² θ,  −π ≤ θ ≤ π,

for the disk r ≤ 1.

2. Solve the BVP

Urr + Ur/r + Uθθ/r² = 0,  0 ≤ r < 2,  −π < θ < π,
U(2, θ) = 1 + 8 sin θ − 32 cos(4θ),  −π < θ < π.

3. Solve the exterior Dirichlet problem with U(1, θ) = sin θ + 3 sin(3θ), whose solution is

U(r, θ) = (1/r) sin θ + (3/r³) sin(3θ).    (11)
Lecture 1
Fourier Transform

Recall that if $f$ is a periodic function with period $2L$ on $\mathbb{R}$, then $f$ has a Fourier series (FS) representation of the form
$$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right), \tag{1}$$
where
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \quad n = 0, 1, 2, \ldots$$
and
$$b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx, \quad n = 1, 2, \ldots.$$
Fourier series are powerful tools for treating various problems involving periodic functions. Many practical problems, however, do not involve periodic functions. Therefore, it is desirable to generalize the method of Fourier series to include non-periodic functions. If $f$ is not periodic, then we may regard it as periodic with an infinite period, i.e., we would like to see what happens if we let $L \to \infty$. We shall do this for reasons of motivation as well as for making it plausible that for a non-periodic function, one should expect an integral representation (Fourier integral) instead of a Fourier series.
Set $\omega_n = \frac{n\pi}{L}$. Then
$$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(\omega_n x) + b_n\sin(\omega_n x)\right]$$
$$= \frac{1}{2L}\int_{-L}^{L} f(x)\,dx + \frac{1}{L}\sum_{n=1}^{\infty}\left[\cos(\omega_n x)\int_{-L}^{L} f(t)\cos(\omega_n t)\,dt + \sin(\omega_n x)\int_{-L}^{L} f(t)\sin(\omega_n t)\,dt\right].$$
Note that
$$\omega_{n+1} - \omega_n = \frac{(n+1)\pi}{L} - \frac{n\pi}{L} = \frac{\pi}{L}.$$
Setting $\Delta\omega = \omega_{n+1} - \omega_n = \frac{\pi}{L}$,
$$f(x) = \frac{1}{2L}\int_{-L}^{L} f(x)\,dx + \frac{1}{\pi}\sum_{n=1}^{\infty}\Delta\omega\left[\cos(\omega_n x)\int_{-L}^{L} f(t)\cos(\omega_n t)\,dt + \sin(\omega_n x)\int_{-L}^{L} f(t)\sin(\omega_n t)\,dt\right].$$
This representation is valid for any fixed $L$, arbitrarily large, but finite. Letting $L \to \infty$ and assuming that the resulting non-periodic function
$$f(x) = \lim_{L\to\infty} f_L(x)$$
is absolutely integrable, it seems plausible that the infinite series (1) becomes an integral from $0$ to $\infty$, i.e.,
$$f(x) = \frac{1}{\pi}\int_0^{\infty}\left[\cos(\omega x)\int_{-\infty}^{\infty} f(t)\cos(\omega t)\,dt + \sin(\omega x)\int_{-\infty}^{\infty} f(t)\sin(\omega t)\,dt\right]d\omega. \tag{2}$$
Thus
$$f(x) = \frac{1}{\pi}\int_0^{\infty}\left[A(\omega)\cos(\omega x) + B(\omega)\sin(\omega x)\right]d\omega, \tag{3}$$
which is called the Fourier integral of $f$, where
$$A(\omega) = \int_{-\infty}^{\infty} f(t)\cos(\omega t)\,dt, \qquad B(\omega) = \int_{-\infty}^{\infty} f(t)\sin(\omega t)\,dt. \tag{4}$$
In order to motivate the definition of the Fourier transform, we first express the complex form of the Fourier integral as follows. Using the identity $\cos(a - b) = \cos a\cos b + \sin a\sin b$, we write the integral (3) as
$$f(x) = \frac{1}{\pi}\int_0^{\infty}\left[\int_{-\infty}^{\infty} f(t)\cos(\omega(x - t))\,dt\right]d\omega. \tag{5}$$
Since $\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}$, we obtain
$$f(x) = \frac{1}{2\pi}\int_0^{\infty}\int_{-\infty}^{\infty} f(t)\left\{e^{i\omega(x-t)} + e^{-i\omega(x-t)}\right\}dt\,d\omega = \frac{1}{2\pi}\int_0^{\infty}\int_{-\infty}^{\infty} f(t)e^{i\omega(x-t)}\,dt\,d\omega + \frac{1}{2\pi}\int_0^{\infty}\int_{-\infty}^{\infty} f(t)e^{-i\omega(x-t)}\,dt\,d\omega.$$
Replacing $\omega$ by $-\omega$ in the second term on the right-hand side and adjusting the limits from $-\infty$ to $0$, we obtain
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(t)e^{i\omega(x-t)}\,dt\,d\omega = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{i\omega x}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt\right]d\omega,$$
which is the complex form of the Fourier integral of $f$. This leads to the following pair of transforms.

DEFINITION 2. (The Fourier Transform)
Let $f : (-\infty, \infty) \to \mathbb{R}$ or $\mathbb{C}$. The Fourier transform (FT) of $f(x)$ is defined by
$$\mathcal{F}(f)(\omega) = \hat{f}(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)e^{-i\omega x}\,dx \quad (-\infty < \omega < \infty), \tag{6}$$
and the inverse Fourier transform by
$$\mathcal{F}^{-1}(\hat{f}) = f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(\omega)e^{i\omega x}\,d\omega \quad (-\infty < x < \infty),$$
provided these integrals exist.
EXAMPLE 5. Find the FT of the function
$$f(x) = \begin{cases} 1 & \text{for } |x| \le L, \\ 0 & \text{for } |x| > L. \end{cases} \tag{7}$$
Solution.
$$\hat{f}(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-L}^{L} e^{-i\omega x}\,dx = \frac{1}{\sqrt{2\pi}}\,\frac{e^{i\omega L} - e^{-i\omega L}}{i\omega} = \frac{1}{\sqrt{2\pi}}\,\frac{2\sin(\omega L)}{\omega}.$$
Note that even though $f(x)$ vanishes for $x$ outside the interval $[-L, L]$, the same is not true of $\hat{f}(\omega)$. In general, it can be shown that if both $f$ and $\hat{f}$ vanish outside $[-L, L]$, then $f \equiv 0$.
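Example 5 can be confirmed numerically. The sketch below (an illustration added here; the grid parameters are arbitrary choices) approximates the defining integral by the trapezoid rule and compares it with the closed form $\sqrt{2/\pi}\,\sin(\omega L)/\omega$.

```python
import numpy as np

# FT of the indicator of [-L, L] with the 1/sqrt(2 pi) convention:
# it should equal sqrt(2/pi) * sin(w L) / w.
L, w = 1.5, 2.3
x = np.linspace(-L, L, 200001)
dx = x[1] - x[0]
g = np.exp(-1j * w * x)                 # integrand e^{-i w x} on [-L, L]
numeric = (np.sum(g) - 0.5*(g[0] + g[-1])) * dx / np.sqrt(2*np.pi)  # trapezoid
exact = np.sqrt(2/np.pi) * np.sin(w*L) / w
err = abs(numeric - exact)
```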
Some Basic Properties of Fourier Transforms:

Linearity: $\mathcal{F}$ is a linear transformation. For any two functions $f_1$ and $f_2$ with FTs $\mathcal{F}[f_1]$ and $\mathcal{F}[f_2]$, respectively, and any constants $c_1$ and $c_2$, we have
$$\mathcal{F}[c_1 f_1 + c_2 f_2] = c_1\mathcal{F}[f_1] + c_2\mathcal{F}[f_2].$$

Conjugation: Let $f(x)$ be a function with FT $\mathcal{F}[f]$. Then the FT of the complex conjugate $\overline{f}(x)$ is given by
$$\mathcal{F}[\overline{f}](\omega) = \overline{\mathcal{F}[f](-\omega)}.$$

Continuity: Let $f(x)$ be an absolutely integrable function with FT $\hat{f}(\omega)$. Then $\hat{f}(\omega)$ is a continuous function.

Convolution: Recall that the convolution of the functions $f$ and $g$, denoted by $f * g$, is defined by
$$(f * g)(x) = \int_{-\infty}^{\infty} f(x - t)g(t)\,dt,$$
provided the integral exists for each $x$ (e.g., if $f$ is bounded and $g$ is absolutely integrable). Let $\hat{f}(\omega)$ and $\hat{g}(\omega)$ be the FTs of $f$ and $g$, respectively. Then, with the symmetric convention (6),
$$\mathcal{F}(f * g) = \sqrt{2\pi}\,\mathcal{F}(f)\mathcal{F}(g) = \sqrt{2\pi}\,\hat{f}(\omega)\hat{g}(\omega).$$
Note: In general, $\mathcal{F}[f(x)g(x)] \ne \mathcal{F}[f]\mathcal{F}[g]$.

Parseval's identity: For any two functions $f(x)$ and $g(x)$ with FTs $\hat{f}(\omega)$ and $\hat{g}(\omega)$, respectively,
$$\int_{-\infty}^{\infty} f(x)\,\overline{g}(x)\,dx = \int_{-\infty}^{\infty}\hat{f}(\omega)\,\overline{\hat{g}}(\omega)\,d\omega.$$
In particular,
$$\int_{-\infty}^{\infty} |f(x)|^2\,dx = \int_{-\infty}^{\infty} |\hat{f}(\omega)|^2\,d\omega.$$

Time differentiation: If $u = u(x, t)$, then
$$\mathcal{F}[u_t](\omega, t) = \frac{d}{dt}\{\mathcal{F}[u]\}(\omega, t) = \frac{d}{dt}\hat{u}(\omega, t).$$
The Fourier transform of a time derivative equals the time derivative of the Fourier transform. This shows that time differentiation and the FT with respect to $x$ commute.
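These properties are easy to probe numerically. The sketch below (an added illustration; the grids are arbitrary) computes the FT of the Gaussian $f(x) = e^{-x^2/2}$ on a grid — its transform under the symmetric convention is again $e^{-\omega^2/2}$ — and checks Parseval's identity.

```python
import numpy as np

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)                     # Gaussian, self-transform

w = np.linspace(-8, 8, 401)
dw = w[1] - w[0]
# discrete approximation of (1/sqrt(2 pi)) * int f(x) e^{-i w x} dx
fhat = np.array([np.sum(f * np.exp(-1j*wi*x)) * dx for wi in w]) / np.sqrt(2*np.pi)

ft_err = np.max(np.abs(fhat - np.exp(-w**2 / 2)))   # known transform
lhs = np.sum(np.abs(f)**2) * dx                     # int |f|^2 dx
rhs = np.sum(np.abs(fhat)**2) * dw                  # int |fhat|^2 dw
parseval_err = abs(lhs - rhs)
```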
Practice Problems
1. Compute the complex Fourier series of the function $f(x) = e^{ax}\cos(bx)$.
2. Show that if $f(x)$ is absolutely integrable on $(-\infty, \infty)$, then
$$\mathcal{F}[e^{ibx}f(ax)](\omega) = \frac{1}{|a|}\,\mathcal{F}[f]\!\left(\frac{\omega - b}{a}\right), \quad a, b \in \mathbb{R},\; a \ne 0.$$
3. Find the FT of
(A) $f(x) = e^{-cx^2}$, where $c > 0$ is a constant.
Lecture 2

In this lecture we shall discuss the Fourier sine and cosine transforms and their properties. These transforms are appropriate for problems over semi-infinite intervals in a spatial variable in which the function or its derivative is prescribed on the boundary.

If $f$ is an even or an odd function, then $f$ can be represented by a Fourier integral which takes a simpler form than in the case of an arbitrary function.

If $f(x)$ is an even function, then $B(\omega) = 0$ in (3), and
$$A(\omega) = 2\int_0^{\infty} f(t)\cos(\omega t)\,dt, \qquad f(x) = \frac{1}{\pi}\int_0^{\infty} A(\omega)\cos(\omega x)\,d\omega.$$
Similarly, if $f(x)$ is odd, then $A(\omega) = 0$ in (3), and
$$B(\omega) = 2\int_0^{\infty} f(t)\sin(\omega t)\,dt, \qquad f(x) = \frac{1}{\pi}\int_0^{\infty} B(\omega)\sin(\omega x)\,d\omega.$$
These Fourier integrals motivate us to define the Fourier cosine transform (FCT) and the Fourier sine transform (FST). The FT of an even function $f$ is called the FCT of $f$; the FT of an odd function $f$ is called the FST of $f$.

DEFINITION 1. (Fourier Cosine Transform) The FCT of a function $f : [0, \infty) \to \mathbb{R}$ is defined as
$$\mathcal{F}_c(f) = \hat{f}_c(\omega) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} f(x)\cos(\omega x)\,dx \quad (0 \le \omega < \infty), \tag{1}$$
with inverse
$$\mathcal{F}_c^{-1}(\hat{f}_c) = f(x) = \sqrt{\frac{2}{\pi}}\int_0^{\infty}\hat{f}_c(\omega)\cos(\omega x)\,d\omega \quad (0 \le x < \infty). \tag{2}$$

DEFINITION 2. (Fourier Sine Transform) The FST of a function $f : [0, \infty) \to \mathbb{R}$ is defined as
$$\mathcal{F}_s(f) = \hat{f}_s(\omega) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} f(x)\sin(\omega x)\,dx \quad (0 \le \omega < \infty), \tag{3}$$
with inverse
$$\mathcal{F}_s^{-1}(\hat{f}_s) = f(x) = \sqrt{\frac{2}{\pi}}\int_0^{\infty}\hat{f}_s(\omega)\sin(\omega x)\,d\omega \quad (0 \le x < \infty). \tag{4}$$
Transforms of derivatives: Assume that $f(x) \to 0$ as $x \to \infty$. Integrating by parts,
$$\mathcal{F}_s[f'](\omega) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} f'(x)\sin(\omega x)\,dx = \sqrt{\frac{2}{\pi}}\Big[\sin(\omega x)f(x)\Big]_{x=0}^{x=\infty} - \sqrt{\frac{2}{\pi}}\,\omega\int_0^{\infty} f(x)\cos(\omega x)\,dx = -\omega\,\mathcal{F}_c[f](\omega).$$
If we assume in addition that $f'(x) \to 0$ as $x \to \infty$, then
$$\mathcal{F}_s[f''](\omega) = \sqrt{\frac{2}{\pi}}\Big[\sin(\omega x)f'(x)\Big]_{x=0}^{x=\infty} - \sqrt{\frac{2}{\pi}}\,\omega\int_0^{\infty} f'(x)\cos(\omega x)\,dx$$
$$= -\sqrt{\frac{2}{\pi}}\,\omega\Big[\cos(\omega x)f(x)\Big]_{x=0}^{x=\infty} - \sqrt{\frac{2}{\pi}}\,\omega^2\int_0^{\infty} f(x)\sin(\omega x)\,dx = \sqrt{\frac{2}{\pi}}\,\omega f(0) - \omega^2\,\mathcal{F}_s[f](\omega).$$
Thus, we have
$$\mathcal{F}_s[f'(x)] = -\omega\,\mathcal{F}_c[f],$$
$$\mathcal{F}_s[f''(x)] = -\omega^2\,\mathcal{F}_s[f] + \sqrt{\frac{2}{\pi}}\,\omega f(0).$$
Similarly,
$$\mathcal{F}_c[f'(x)] = \omega\,\mathcal{F}_s[f] - \sqrt{\frac{2}{\pi}}\,f(0),$$
$$\mathcal{F}_c[f''(x)] = -\omega^2\,\mathcal{F}_c[f] - \sqrt{\frac{2}{\pi}}\,f'(0).$$
Note: Observe that the FST of a first derivative of a function is given in terms of the FCT of the function itself. However, the FST of a second derivative is given in terms of the sine transform of the function, with an additional boundary term $\sqrt{2/\pi}\,\omega f(0)$.
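The first-derivative identity can be checked numerically. The sketch below (an added illustration; `trap` is our own helper) uses $f(x) = e^{-x}$, whose sine and cosine transforms are known in closed form.

```python
import numpy as np

# Verify Fs[f'] = -w * Fc[f] for f(x) = exp(-x) (so f -> 0 as x -> inf).
# Known: Fc[exp(-x)](w) = sqrt(2/pi)/(1+w^2).
w = 1.7
x = np.linspace(0.0, 40.0, 400001)
dx = x[1] - x[0]
f = np.exp(-x)
fp = -np.exp(-x)                       # f'(x)

def trap(y):                           # trapezoid rule on the grid
    return (np.sum(y) - 0.5*(y[0] + y[-1])) * dx

Fc_f  = np.sqrt(2/np.pi) * trap(f  * np.cos(w*x))
Fs_fp = np.sqrt(2/np.pi) * trap(fp * np.sin(w*x))

err_identity = abs(Fs_fp - (-w * Fc_f))
err_closed = abs(Fc_f - np.sqrt(2/np.pi) / (1 + w**2))
```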
Transformation of partial derivatives:
(i) Let $u = u(x, t)$ be a function defined for $x \ge 0$ and $t \ge 0$. If $u(x, t) \to 0$ as $x \to \infty$, and $\mathcal{F}_s[u](\omega, t) = \hat{u}_s(\omega, t)$, then
$$\mathcal{F}_s[u_x](\omega, t) = -\omega\,\mathcal{F}_c[u](\omega, t),$$
$$\mathcal{F}_c[u_x](\omega, t) = \omega\,\mathcal{F}_s[u](\omega, t) - \sqrt{\frac{2}{\pi}}\,u(0, t),$$
$$\mathcal{F}_s[u_{xx}](\omega, t) = -\omega^2\,\mathcal{F}_s[u](\omega, t) + \sqrt{\frac{2}{\pi}}\,\omega\,u(0, t),$$
$$\mathcal{F}_c[u_{xx}](\omega, t) = -\omega^2\,\mathcal{F}_c[u](\omega, t) - \sqrt{\frac{2}{\pi}}\,u_x(0, t).$$
(ii) If we transform the partial derivative $u_t(x, t)$ (the variable of integration in the transformation being $x$), then the transformation is given by
$$\mathcal{F}_s[u_t](\omega, t) = \frac{d}{dt}\{\mathcal{F}_s[u]\}(\omega, t), \qquad \mathcal{F}_c[u_t](\omega, t) = \frac{d}{dt}\{\mathcal{F}_c[u]\}(\omega, t).$$
Thus, time differentiation commutes with both the Fourier cosine and sine transformations.

Practice Problems
1. Find the FST and FCT of the function
$$f(x) = \begin{cases} 1, & 0 \le x \le 2, \\ 0, & x > 2. \end{cases}$$
2. If $u = u(x, t)$ and $u(x, t) \to 0$ as $x \to \infty$, show that
(A) $\mathcal{F}_s[u_x](\omega, t) = -\omega\,\mathcal{F}_c[u](\omega, t)$,
(B) $\mathcal{F}_c[u_x](\omega, t) = -\sqrt{2/\pi}\,u(0, t) + \omega\,\mathcal{F}_s[u](\omega, t)$.
3. If $u(x, t)$ and $u_x(x, t) \to 0$ as $x \to \infty$, then …
Lecture 3

In this lecture we shall study some applications of the Fourier transform in solving heat flow problems where the spatial domain is infinite or semi-infinite.

Consider the heat flow in an infinite rod where the initial temperature is $u(x, 0) = f(x)$. We shall prove that if the function $f(x)$ is continuous and either absolutely integrable, i.e.,
$$\int_{-\infty}^{\infty} |f(x)|\,dx < \infty,$$
or bounded (i.e., $|f(x)| \le M$ for all $x$), then the following IVP has a solution $u(x, t)$ which is continuous throughout the half-plane $t \ge 0$, $-\infty < x < \infty$:

PDE: $$u_t = \alpha^2 u_{xx}, \quad -\infty < x < \infty,\; t > 0, \tag{1}$$
IC: $$u(x, 0) = f(x), \quad -\infty < x < \infty. \tag{2}$$

Taking the FT of both sides of the PDE (1) and the IC (2) with respect to the $x$ variable, we obtain
$$\mathcal{F}[u_t] = \alpha^2\mathcal{F}[u_{xx}], \qquad \mathcal{F}[u(x, 0)] = \mathcal{F}[f(x)].$$
Using the properties of the FT,
$$\mathcal{F}[u_t] = \frac{d}{dt}\hat{u}(\omega, t), \qquad \mathcal{F}[u_{xx}] = -\omega^2\hat{u}(\omega, t),$$
we have
$$\frac{d}{dt}\hat{u}(\omega, t) = -\alpha^2\omega^2\hat{u}(\omega, t), \tag{3}$$
$$\hat{u}(\omega, 0) = \hat{f}(\omega). \tag{4}$$
Solving this ODE in $t$ gives
$$\hat{u}(\omega, t) = \hat{f}(\omega)\,e^{-\alpha^2\omega^2 t}. \tag{5}$$
Inverting the transform and using the convolution theorem,
$$u(x, t) = \mathcal{F}^{-1}\left[\hat{f}(\omega)\,e^{-\alpha^2\omega^2 t}\right] = \frac{1}{\sqrt{2\pi}}\,\mathcal{F}^{-1}[\hat{f}\,] * \mathcal{F}^{-1}\left[e^{-\alpha^2\omega^2 t}\right] = \frac{1}{\sqrt{2\pi}}\,f(x) * \left[\frac{1}{\sqrt{2\alpha^2 t}}\,e^{-\frac{x^2}{4\alpha^2 t}}\right] = \frac{1}{2\alpha\sqrt{\pi t}}\int_{-\infty}^{\infty} f(\xi)\,e^{-\frac{(x - \xi)^2}{4\alpha^2 t}}\,d\xi.$$

REMARK 1. Note that the integrand is made up of two terms: the initial temperature $f(\xi)$ and the function
$$G(x, t) = \frac{1}{2\alpha\sqrt{\pi t}}\,e^{-\frac{x^2}{4\alpha^2 t}},$$
known as the heat kernel.
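The convolution formula can be checked against a case with a known closed-form answer. For $f(x) = e^{-x^2}$, convolving with the heat kernel gives $u(x, t) = e^{-x^2/(1 + 4\alpha^2 t)}/\sqrt{1 + 4\alpha^2 t}$ (a standard Gaussian-convolution identity). The sketch below is an added illustration; the grid and parameter values are arbitrary.

```python
import numpy as np

# Evaluate u = (1/(2 a sqrt(pi t))) * int f(xi) exp(-(x-xi)^2/(4 a^2 t)) dxi
# for f(x) = exp(-x^2) and compare with the closed form.
a, t, x = 1.3, 0.5, 0.7
xi = np.linspace(-20.0, 20.0, 400001)
dxi = xi[1] - xi[0]
f = np.exp(-xi**2)

u_num = np.sum(f * np.exp(-(x - xi)**2 / (4*a**2*t))) * dxi / (2*a*np.sqrt(np.pi*t))
u_exact = np.exp(-x**2 / (1 + 4*a**2*t)) / np.sqrt(1 + 4*a**2*t)
err = abs(u_num - u_exact)
```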
Next consider heat flow in a semi-infinite rod whose end $x = 0$ is held at the constant temperature $b_0$:

PDE: $$u_t = \alpha^2 u_{xx}, \quad 0 < x < \infty,\; t > 0, \tag{6}$$
BC: $$u(0, t) = b_0, \quad t > 0, \tag{7}$$
IC: $$u(x, 0) = 0, \quad 0 < x < \infty. \tag{8}$$

Since $u$ is prescribed at $x = 0$, we take the FST with respect to $x$. Using
$$\mathcal{F}_s[u_t] = \frac{d}{dt}\hat{u}_s(\omega, t)$$
and
$$\mathcal{F}_s[u_{xx}] = -\omega^2\hat{u}_s(\omega, t) + \sqrt{\frac{2}{\pi}}\,\omega\,u(0, t) = -\omega^2\hat{u}_s(\omega, t) + \sqrt{\frac{2}{\pi}}\,\omega\,b_0,$$
where in the last step we have used the BC $u(0, t) = b_0$, we arrive at the ODE
$$\frac{d}{dt}\hat{u}_s(\omega, t) + \alpha^2\omega^2\,\hat{u}_s(\omega, t) = \sqrt{\frac{2}{\pi}}\,\alpha^2\omega\,b_0, \qquad \hat{u}_s(\omega, 0) = 0.$$
Its solution is
$$\hat{u}_s(\omega, t) = \sqrt{\frac{2}{\pi}}\,\frac{b_0}{\omega}\left(1 - e^{-\alpha^2\omega^2 t}\right). \tag{9}$$
Inverting the FST,
$$u(x, t) = \sqrt{\frac{2}{\pi}}\int_0^{\infty}\hat{u}_s(\omega, t)\sin(\omega x)\,d\omega = \frac{2b_0}{\pi}\int_0^{\infty}\frac{\sin(\omega x)}{\omega}\left(1 - e^{-\alpha^2\omega^2 t}\right)d\omega = b_0\,\mathrm{erfc}\!\left(\frac{x}{2\alpha\sqrt{t}}\right),$$
where $\mathrm{erfc}(y)$ is the complementary error function given by
$$\mathrm{erfc}(y) = \frac{2}{\sqrt{\pi}}\int_y^{\infty} e^{-\sigma^2}\,d\sigma.$$
Hence, the solution of the heat conduction problem is
$$u(x, t) = b_0\,\mathrm{erfc}\!\left(\frac{x}{2\alpha\sqrt{t}}\right).$$
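The erfc solution can be verified directly. The sketch below (an added illustration using the standard library `math.erfc`) checks the PDE by centered finite differences and the boundary condition at $x = 0$.

```python
import math

# u(x, t) = b0 * erfc(x / (2 a sqrt(t))) should satisfy u_t = a^2 u_xx
# and u(0, t) = b0.
a, b0 = 0.8, 3.0

def u(x, t):
    return b0 * math.erfc(x / (2*a*math.sqrt(t)))

x, t, h = 0.5, 0.4, 1e-4
u_t  = (u(x, t+h) - u(x, t-h)) / (2*h)
u_xx = (u(x+h, t) - 2*u(x, t) + u(x-h, t)) / h**2
pde_err = abs(u_t - a**2 * u_xx)
bc_err = abs(u(0.0, 0.4) - b0)      # erfc(0) = 1, so u(0, t) = b0 exactly
```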
Practice Problems
1. Solve the following IVP:
$$u_t = u_{xx}, \quad -\infty < x < \infty,\; t > 0,$$
$$u(x, 0) = 2x, \quad -\infty < x < \infty,$$
$$u(x, t),\; u_x(x, t) \to 0 \text{ as } |x| \to \infty,\; t > 0.$$
2. Let $f(x) \in C(\mathbb{R})$ be an odd function. If $f(x)$ is absolutely integrable on $\mathbb{R}$ and $f(x) \to 0$ as $|x| \to \infty$, then show that the unique continuous solution of the problem
$$u_t = \alpha^2 u_{xx}, \quad -\infty < x < \infty,\; t > 0,$$
$$u(x, 0) = f(x), \quad -\infty < x < \infty,$$
$$u(x, t),\; u_x(x, t) \to 0 \text{ as } |x| \to \infty,\; t > 0,$$
is also odd in the variable $x$. Show that the conclusion is false if the BC is dropped.
Lecture 4

In this lecture we shall learn how Fourier transforms can be used to solve one-dimensional wave equations on an infinite (or semi-infinite) interval. More precisely, we shall derive d'Alembert's formula using the FT method.

Consider the following one-dimensional wave equation:

PDE: $$u_{tt}(x, t) = c^2 u_{xx}(x, t), \quad -\infty < x < \infty,\; t > 0, \tag{1}$$
IC: $$u(x, 0) = f(x), \quad u_t(x, 0) = g(x), \quad -\infty < x < \infty. \tag{2}$$

Write $\mathcal{F}[f(x)] = \hat{f}(\omega)$ and $\mathcal{F}[g(x)] = \hat{g}(\omega)$. Taking the FT of both sides of the PDE (1) and the ICs with respect to the $x$ variable, we obtain
$$\mathcal{F}[u_{tt}] = c^2\mathcal{F}[u_{xx}], \qquad \mathcal{F}[u(x, 0)] = \mathcal{F}[f(x)], \qquad \mathcal{F}[u_t(x, 0)] = \mathcal{F}[g(x)].$$
Using
$$\mathcal{F}[u_{tt}] = \frac{d^2}{dt^2}\hat{u}(\omega, t), \qquad \mathcal{F}[u_{xx}] = -\omega^2\hat{u}(\omega, t),$$
we have
$$\frac{d^2}{dt^2}\hat{u}(\omega, t) + c^2\omega^2\hat{u}(\omega, t) = 0, \tag{3}$$
$$\hat{u}(\omega, 0) = \hat{f}(\omega), \qquad \hat{u}_t(\omega, 0) = \hat{g}(\omega). \tag{4}$$
The general solution of (3) is
$$\hat{u}(\omega, t) = C(\omega)\cos(c\omega t) + D(\omega)\sin(c\omega t). \tag{5}$$
The condition $\hat{u}(\omega, 0) = \hat{f}(\omega)$ yields
$$C(\omega) = \hat{u}(\omega, 0) = \hat{f}(\omega).$$
Differentiate (5) to have
$$\hat{u}_t(\omega, t) = -c\omega\,C(\omega)\sin(c\omega t) + c\omega\,D(\omega)\cos(c\omega t), \tag{6}$$
so the condition $\hat{u}_t(\omega, 0) = \hat{g}(\omega)$ gives $D(\omega) = \hat{g}(\omega)/(c\omega)$. Hence
$$\hat{u}(\omega, t) = \hat{f}(\omega)\cos(c\omega t) + \hat{g}(\omega)\,\frac{\sin(c\omega t)}{c\omega}. \tag{7}$$
Therefore
$$u(x, t) = \mathcal{F}^{-1}[\hat{u}(\omega, t)] = \mathcal{F}^{-1}[\hat{f}(\omega)\cos(c\omega t)] + \mathcal{F}^{-1}\left[\hat{g}(\omega)\,\frac{\sin(c\omega t)}{c\omega}\right] := I_1 + I_2.$$
For the first term,
$$I_1 = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(\omega)\,\frac{e^{ic\omega t} + e^{-ic\omega t}}{2}\,e^{i\omega x}\,d\omega = \frac{1}{2}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(\omega)\,e^{i\omega(x + ct)}\,d\omega + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(\omega)\,e^{i\omega(x - ct)}\,d\omega\right] = \frac{1}{2}\left[f(x + ct) + f(x - ct)\right]. \tag{8}$$
For the second term $I_2$, we use the convolution theorem as follows. Note that if
$$h(x) = \begin{cases} \dfrac{1}{2c}, & |x| \le ct, \\[2pt] 0, & |x| > ct, \end{cases} \tag{9}$$
then its FT is given by
$$\hat{h}(\omega) = \frac{1}{\sqrt{2\pi}}\,\frac{\sin(c\omega t)}{c\omega}.$$
A use of the convolution theorem now yields
$$I_2 = \mathcal{F}^{-1}\left[\hat{g}(\omega)\,\frac{\sin(c\omega t)}{c\omega}\right](x) = (g * h)(x) = \int_{-\infty}^{\infty} g(y)\,h(x - y)\,dy = \frac{1}{2c}\int_{x - ct}^{x + ct} g(y)\,dy. \tag{10}$$
Combining $I_1$ and $I_2$, we obtain d'Alembert's formula
$$u(x, t) = \frac{1}{2}\left[f(x + ct) + f(x - ct)\right] + \frac{1}{2c}\int_{x - ct}^{x + ct} g(y)\,dy. \tag{11}$$
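Formula (11) can be checked numerically for data whose integral is available in closed form. The sketch below (an added illustration; $f = \sin$, $g = \cos$ are our own choices) verifies the wave equation and both initial conditions by finite differences.

```python
import math

# d'Alembert solution for f(x) = sin(x), g(x) = cos(x):
# the integral of g from x-ct to x+ct is sin(x+ct) - sin(x-ct).
c = 2.0

def u(x, t):
    left, right = x - c*t, x + c*t
    return 0.5*(math.sin(right) + math.sin(left)) \
         + (math.sin(right) - math.sin(left)) / (2*c)

x, t, h = 0.3, 0.9, 1e-4
u_tt = (u(x, t+h) - 2*u(x, t) + u(x, t-h)) / h**2
u_xx = (u(x+h, t) - 2*u(x, t) + u(x-h, t)) / h**2
wave_err = abs(u_tt - c**2 * u_xx)            # u_tt = c^2 u_xx

ic_err  = abs(u(x, 0.0) - math.sin(x))        # u(x, 0) = f(x)
vel_err = abs((u(x, h) - u(x, -h)) / (2*h) - math.cos(x))  # u_t(x, 0) = g(x)
```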
Practice Problems
1. Solve the following IVP:
$$u_{tt} = 4u_{xx}, \quad -\infty < x < \infty,\; t > 0.$$
Lecture 5

The steady-state temperature distribution for $y > 0$ with the prescribed temperature $u(x, 0) = f(x)$ on an infinite wall, $y = 0$, is described by:

PDE: $$u_{xx} + u_{yy} = 0, \quad -\infty < x < \infty,\; y > 0, \tag{1}$$
BC: $$u(x, 0) = f(x), \quad -\infty < x < \infty, \tag{2}$$
together with the requirement that $u(x, y)$ remain bounded as $y \to \infty$. \hfill (3)

Taking the FT of the PDE with respect to $x$,
$$-\omega^2\mathcal{F}[u](\omega, y) + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} u_{yy}\,e^{-i\omega x}\,dx = 0, \tag{4}$$
i.e.,
$$-\omega^2\hat{u}(\omega, y) + \frac{\partial^2}{\partial y^2}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} u(x, y)\,e^{-i\omega x}\,dx\right] = 0, \tag{5}$$
so that
$$\frac{d^2}{dy^2}\hat{u}(\omega, y) - \omega^2\hat{u}(\omega, y) = 0. \tag{6}$$
The solution that remains bounded as $y \to \infty$ and satisfies the BC (2) is
$$\hat{u}(\omega, y) = \hat{f}(\omega)\,e^{-|\omega| y}. \tag{7}$$
Inverting,
$$u(x, y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(\omega)\,e^{-|\omega| y}\,e^{i\omega x}\,d\omega \tag{8}$$
$$= \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} f(\xi)\,e^{-i\omega\xi}\,d\xi\right]e^{-|\omega| y}\,e^{i\omega x}\,d\omega \tag{9}$$
$$= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi)\left[\int_{-\infty}^{\infty} e^{-i\omega(\xi - x) - |\omega| y}\,d\omega\right]d\xi. \tag{10}$$
Now
$$\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega(\xi - x) - |\omega| y}\,d\omega = \frac{1}{2\pi}\int_0^{\infty} e^{-[y + i(\xi - x)]\omega}\,d\omega + \frac{1}{2\pi}\int_0^{\infty} e^{-[y - i(\xi - x)]\omega}\,d\omega$$
$$= \frac{1}{2\pi}\left[\frac{1}{y + i(\xi - x)} + \frac{1}{y - i(\xi - x)}\right] = \frac{1}{\pi}\,\frac{y}{(\xi - x)^2 + y^2}. \tag{11}$$
Therefore
$$u(x, y) = \frac{y}{\pi}\int_{-\infty}^{\infty}\frac{f(\xi)}{(\xi - x)^2 + y^2}\,d\xi. \tag{12}$$
EXAMPLE. Solve
$$u_{xx} + u_{yy} = 0, \quad -\infty < x < \infty,\; y > 0, \tag{13}$$
$$u(x, 0) = e^{-x^2}, \tag{14}$$
and show that
$$\int_{-\infty}^{\infty} u(x, y)\,dx = \sqrt{\pi}, \quad \text{for each } y \ge 0.$$

Solution. Since $e^{-x^2}$ is bounded and continuous, apply formula (12) to obtain (for $y > 0$)
$$u(x, y) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{y}{y^2 + (x - \xi)^2}\,e^{-\xi^2}\,d\xi. \tag{15}$$
Interchanging the order of integration (this is possible because the integrand is absolutely integrable for $y > 0$) and using $\int_{-\infty}^{\infty} e^{-\xi^2}\,d\xi = \sqrt{\pi}$, we obtain
$$\int_{-\infty}^{\infty} u(x, y)\,dx = \frac{1}{\pi}\int_{-\infty}^{\infty} e^{-\xi^2}\left[\int_{-\infty}^{\infty}\frac{y}{y^2 + (x - \xi)^2}\,dx\right]d\xi = \int_{-\infty}^{\infty} e^{-\xi^2}\,d\xi = \sqrt{\pi}, \tag{16}$$
since
$$\int_{-\infty}^{\infty}\frac{y}{y^2 + (x - \xi)^2}\,dx = \pi.$$
For $y = 0$, $\int_{-\infty}^{\infty} u(x, 0)\,dx = \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$, which proves the result.
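For this particular boundary datum, the Poisson-kernel integral (15) has a known closed form: it is a Voigt-type integral, equal to the real part of the Faddeeva function, $\operatorname{Re} w(x + iy)$. The sketch below (an added illustration assuming SciPy is available) compares the quadrature value with `scipy.special.wofz`.

```python
import numpy as np
from scipy.special import wofz   # Faddeeva function w(z) = exp(-z^2) erfc(-iz)

# u(x, y) = (y/pi) * int exp(-xi^2) / ((xi - x)^2 + y^2) dxi = Re[w(x + i y)]
xi = np.linspace(-15.0, 15.0, 400001)
dxi = xi[1] - xi[0]
f = np.exp(-xi**2)

def u(x, y):
    return y/np.pi * np.sum(f / ((xi - x)**2 + y**2)) * dxi

x0, y0 = 0.4, 0.8
err = abs(u(x0, y0) - wofz(x0 + 1j*y0).real)
```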
Practice Problems
1. Solve
$$u_{xx} + u_{yy} = 0, \quad 0 < x, y < \infty,$$
$$u(0, y) = 0, \quad u(x, 0) = f(x),$$
where $u(x, y)$ is assumed to be bounded and $u(\infty, y) = u_x(\infty, y) = 0$, $0 \le y < \infty$.
2. Solve
$$u_{xx} + u_{yy} = 0, \quad -\infty < x < \infty,\; 0 < y < 2,$$
$$u(x, 0) = f(x), \quad u(x, 2) = 0, \quad -\infty < x < \infty.$$
Lecture 1

Consider the Laplace equation
$$\nabla^2 u = 0 \ \text{in } \Omega, \tag{1}$$
satisfying the BC
$$\alpha u + \beta\frac{\partial u}{\partial n} = B \ \text{on } \partial\Omega. \tag{2}$$
Here, $\alpha(x)$, $\beta(x)$, and $B$ are given functions evaluated on the boundary $\partial\Omega$ of the region $\Omega$. The term $\frac{\partial u}{\partial n}$ denotes the exterior normal derivative on $\partial\Omega$. The boundary condition (2) relates the values of $u$ on $\partial\Omega$ and the flux of $u$ through $\partial\Omega$. We assume that $\alpha(x) \ge 0$ and $\beta(x) \ge 0$ on $\partial\Omega$.

If $\alpha \ne 0$, $\beta = 0$, then (2) is referred to as a Dirichlet BC. If $\alpha = 0$, $\beta \ne 0$, then (2) is referred to as a Neumann BC. If $\alpha \ne 0$, $\beta \ne 0$, then the condition (2) is known as a Robin type BC or mixed BC. We assume that $\partial\Omega$ is subdivided into three disjoint subsets $\partial\Omega_1$, $\partial\Omega_2$ and $\partial\Omega_3$. On $\partial\Omega_1$, $\partial\Omega_2$ and $\partial\Omega_3$, $u$ satisfies a boundary condition of the first kind (Dirichlet type), second kind (Neumann type) and third kind (mixed type), respectively.

Introducing a function $w(x)$ (whose properties we shall specify later) and applying Green's theorem, we note that
$$\int_{\Omega}\left(w\nabla^2 u - u\nabla^2 w\right)dx = \int_{\partial\Omega}\left(w\frac{\partial u}{\partial n} - u\frac{\partial w}{\partial n}\right)ds, \tag{3}$$
where $n$ is the exterior unit normal to $\partial\Omega$. The equation (3) is the basic integral theorem from which the Green's function method proceeds in the elliptic case.

Now the function $w(x)$ is to be determined so that (3) expresses $u$ at an arbitrary point $\xi$ in the region $\Omega$ in terms of $w$ and known functions in (1) and (2).

Let $w(x)$ be a solution of
$$\nabla^2 w = \delta(x - \xi). \tag{4}$$
Then
$$\int_{\Omega} u\,\nabla^2 w\,dx = \int_{\Omega} u\,\delta(x - \xi)\,dx = u(\xi), \tag{5}$$
while, by (1),
$$\int_{\Omega} w\,\nabla^2 u\,dx = 0. \tag{6}$$

It now remains to choose boundary conditions for $w(x)$ on $\partial\Omega$ so that the boundary integral in (3) involves only $w(x)$ and known functions. This can be accomplished by requiring $w(x)$ to satisfy the homogeneous form of the given boundary condition (2), i.e.,
$$\alpha w + \beta\frac{\partial w}{\partial n} = 0 \ \text{on } \partial\Omega. \tag{7}$$
On the portion of $\partial\Omega$ where $\alpha \ne 0$ (in particular on $\partial\Omega_1$), we then have
$$w\frac{\partial u}{\partial n} - u\frac{\partial w}{\partial n} = \frac{1}{\alpha}\left(\alpha w\frac{\partial u}{\partial n} - \alpha u\frac{\partial w}{\partial n}\right) = \frac{1}{\alpha}\left(-\beta\frac{\partial w}{\partial n}\frac{\partial u}{\partial n} - \left(B - \beta\frac{\partial u}{\partial n}\right)\frac{\partial w}{\partial n}\right) = -\frac{B}{\alpha}\frac{\partial w}{\partial n}, \tag{8}$$
and similarly, on the portion where $\beta \ne 0$,
$$w\frac{\partial u}{\partial n} - u\frac{\partial w}{\partial n} = \frac{1}{\beta}\left(\beta w\frac{\partial u}{\partial n} - \beta u\frac{\partial w}{\partial n}\right) = \frac{B}{\beta}\,w. \tag{9}$$

The function $w(x)$ is called the Green's function for the boundary value problem (1)-(2). To indicate its dependence on the point $\xi$, we denote the Green's function by
$$w = G(x; \xi). \tag{10}$$
Combining (3), (5), (6), (8) and (9),
$$u(\xi) = \int_{\partial\Omega_1}\frac{B}{\alpha}\frac{\partial G}{\partial n}\,ds - \int_{\partial\Omega_2\cup\partial\Omega_3}\frac{B}{\beta}\,G\,ds, \tag{11}$$
where $G$ satisfies
$$\nabla^2 G = \delta(x - \xi), \quad x, \xi \in \Omega, \tag{12}$$
$$\alpha G + \beta\frac{\partial G}{\partial n} = 0 \ \text{on } \partial\Omega. \tag{13}$$
The Poisson equation in a rectangle: Let $\Omega : 0 < x < a,\; 0 < y < b$ be a rectangular domain in $\mathbb{R}^2$ with boundary $\partial\Omega$. Consider the following BVP:
$$\nabla^2 u = f(x, y) \ \text{in } \Omega, \tag{14}$$
with the BC
$$u(x, 0) = u(x, b) = 0, \quad 0 < x < a, \qquad u(0, y) = u(a, y) = 0, \quad 0 < y < b. \tag{15}$$
The associated Green's function $G(x, y; \xi, \eta)$ satisfies
$$\nabla^2 G = \delta(x - \xi)\,\delta(y - \eta) \ \text{in } \Omega, \tag{16}$$
with the BC
$$G(x, 0; \xi, \eta) = G(x, b; \xi, \eta) = 0, \quad 0 < x < a, \qquad G(0, y; \xi, \eta) = G(a, y; \xi, \eta) = 0, \quad 0 < y < b. \tag{17}$$
In two dimensions, Green's theorem reads
$$\iint_{\Omega}\left(w\nabla^2 u - u\nabla^2 w\right)dx\,dy = \int_{\partial\Omega}\left(w\frac{\partial u}{\partial n} - u\frac{\partial w}{\partial n}\right)ds. \tag{18}$$
Equation (18) is Green's formula for functions of two space variables. If $u$ is the solution of the given BVP and $w$ is replaced by $G$, then the homogeneous BCs satisfied by both $u$ and $G$ make the right-hand side in (18) vanish, and the formula reduces to
$$\iint_{\Omega}\left(G\nabla^2 u - u\nabla^2 G\right)dx\,dy = 0. \tag{19}$$
This yields
$$u(\xi, \eta) = \iint_{\Omega} G(x, y; \xi, \eta)\,f(x, y)\,dx\,dy. \tag{20}$$
Applying (18) with $u(x, y)$ replaced by $G(x, y; \xi, \eta)$ and $w(x, y)$ replaced by $G(x, y; \xi^*, \eta^*)$, we obtain the symmetry relation
$$G(\xi, \eta; \xi^*, \eta^*) = G(\xi^*, \eta^*; \xi, \eta). \tag{21}$$
By this symmetry, (20) may be rewritten as
$$u(x, y) = \iint_{\Omega} G(x, y; \xi, \eta)\,f(\xi, \eta)\,d\xi\,d\eta. \tag{22}$$
$G(x, y; \xi, \eta)$ is called the Green's function of the given BVP. Formula (22) shows the effect of all the sources in $\Omega$ on the temperature at the point $(x, y)$.
To construct the Green's function $G$, recall the two-dimensional eigenvalue problem associated with the BVP (14)-(15):
$$U_{xx} + U_{yy} + \lambda U = 0, \quad 0 < x < a,\; 0 < y < b,$$
$$U(0, y) = U(a, y) = 0, \quad 0 < y < b, \qquad U(x, 0) = U(x, b) = 0, \quad 0 < x < a,$$
with eigenvalues and eigenfunctions
$$\lambda_{nm} = \left(\frac{n\pi}{a}\right)^2 + \left(\frac{m\pi}{b}\right)^2, \qquad U_{nm} = \sin\frac{n\pi x}{a}\,\sin\frac{m\pi y}{b}, \quad n, m = 1, 2, \ldots.$$
Expanding $G$ in these eigenfunctions,
$$G(x, y; \xi, \eta) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} c_{nm}(\xi, \eta)\,\sin\frac{n\pi x}{a}\,\sin\frac{m\pi y}{b}, \tag{23}$$
and substituting into (16), we get
$$-\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\lambda_{nm}\,c_{nm}(\xi, \eta)\,\sin\frac{n\pi x}{a}\,\sin\frac{m\pi y}{b} = \delta(x - \xi)\,\delta(y - \eta). \tag{24}$$
Multiplying both sides of (24) by $U_{pq}$, integrating over $\Omega$, and using the property of the Dirac delta function, we obtain
$$c_{pq}(\xi, \eta) = -\frac{4}{ab\,\lambda_{pq}}\,U_{pq}(\xi, \eta). \tag{25}$$
Hence
$$G(x, y; \xi, \eta) = -\frac{4}{ab}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{\sin\dfrac{n\pi x}{a}\,\sin\dfrac{n\pi\xi}{a}\,\sin\dfrac{m\pi y}{b}\,\sin\dfrac{m\pi\eta}{b}}{\left(\dfrac{n\pi}{a}\right)^2 + \left(\dfrac{m\pi}{b}\right)^2}.$$
EXAMPLE. Solve the BVP
$$\nabla^2 u = \sin(\pi x)\sin(2\pi y), \quad 0 < x < 1,\; 0 < y < 2,$$
with $u = 0$ on the boundary.

Solution. Here $a = 1$, $b = 2$, so
$$G(x, y; \xi, \eta) = -2\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{\sin(n\pi x)\sin(n\pi\xi)\sin(m\pi y/2)\sin(m\pi\eta/2)}{(n\pi)^2 + (m\pi)^2/4}. \tag{26}$$
Then
$$u(x, y) = \int_0^2\int_0^1 G(x, y; \xi, \eta)\sin(\pi\xi)\sin(2\pi\eta)\,d\xi\,d\eta$$
$$= -\frac{8}{\pi^2}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{\sin(n\pi x)\sin(m\pi y/2)}{4n^2 + m^2}\left(\int_0^1\sin(\pi\xi)\sin(n\pi\xi)\,d\xi\right)\left(\int_0^2\sin(2\pi\eta)\sin(m\pi\eta/2)\,d\eta\right).$$
The $\xi$-integral vanishes unless $n = 1$, when it equals $\tfrac{1}{2}$, and the $\eta$-integral vanishes unless $m = 4$, when it equals $1$. Hence
$$u(x, y) = -\frac{8}{\pi^2}\cdot\frac{1}{2}\cdot\frac{1}{4\cdot 1^2 + 4^2}\,\sin(\pi x)\sin(2\pi y) = -\frac{1}{5\pi^2}\,\sin(\pi x)\sin(2\pi y).$$
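The example's answer can be verified directly. The sketch below (an added illustration) checks by centered finite differences that $u = -\sin(\pi x)\sin(2\pi y)/(5\pi^2)$ satisfies the Poisson equation with the given right-hand side, and that it vanishes on the boundary.

```python
import math

def u(x, y):
    return -math.sin(math.pi*x) * math.sin(2*math.pi*y) / (5*math.pi**2)

x, y, h = 0.3, 0.7, 1e-4
u_xx = (u(x+h, y) - 2*u(x, y) + u(x-h, y)) / h**2
u_yy = (u(x, y+h) - 2*u(x, y) + u(x, y-h)) / h**2
rhs = math.sin(math.pi*x) * math.sin(2*math.pi*y)
err = abs(u_xx + u_yy - rhs)

# u should vanish on all four sides of [0, 1] x [0, 2]
bvals = [u(0, 0.5), u(1, 0.5), u(0.5, 0), u(0.5, 2)]
max_bval = max(abs(b) for b in bvals)
```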
REMARK 2. We know that separation of variables cannot be performed if the PDE and/or the BCs are not homogeneous. The eigenfunction expansion technique is used to deal with such problems, where the PDE is nonhomogeneous and the BCs are zero.

Practice Problems
1. Use the Green's function method to find the solution of the Dirichlet BVP:
$$u_{xx} + u_{yy} = x^2 + y^2, \quad 0 < x < 1,\; 0 < y < 1,$$
$$u(x, 0) = u(x, 1) = 0, \quad 0 < x < 1,$$
$$u(0, y) = u(1, y) = 0, \quad 0 < y < 1.$$
2. Use the Green's function method to find the solution of the BVP:
$$u_{xx} + u_{yy} = 0, \quad 0 < x < 1,\; 0 < y < 1,$$
$$u_x(x, 0) = u_x(x, 1) = 0, \quad 0 < x < 1,$$
$$u(0, y) = u(1, y) = 0, \quad 0 < y < 1.$$
Lecture 2

Consider the wave equation
$$u_{tt} - c^2\nabla^2 u = 0, \quad (x, t) \in Q = \Omega\times(0, T], \tag{1}$$
subject to the IC
$$u(x, 0) = f(x), \quad u_t(x, 0) = g(x), \quad x \in \Omega, \tag{2}$$
and the BC
$$\alpha u + \beta\frac{\partial u}{\partial n} = B(x, t) \ \text{on } \partial\Omega, \quad 0 < t \le T. \tag{3}$$

Notice that the problem is defined over the bounded (cylindrical) region $Q$ in $(x, t)$-space (see Fig. 9.1). The lateral boundary of $Q$ is denoted by $\partial Q_x$ and the two caps of the cylinder, which are portions of the planes $t = 0$ and $t = T$, are denoted by $\partial Q_0$ and $\partial Q_T$, respectively. The boundary conditions for $u(x, t)$ are assigned on $\partial Q_x$. The exterior unit normal $n$ to $\partial Q$ has the form $n = (n_x, 0)$ on $\partial Q_x$, $n = (0, -1)$ on $\partial Q_0$, and $n = (0, 1)$ on $\partial Q_T$.

As a consequence of the divergence theorem,
$$\iint_Q\left[w\left(u_{tt} - c^2\nabla^2 u\right) - u\left(w_{tt} - c^2\nabla^2 w\right)\right]dx\,dt = \int_{\partial Q}\left(-c^2 w\nabla u + c^2 u\nabla w,\; w u_t - u w_t\right)\cdot n\,ds$$
$$= \int_{\partial Q_x} c^2\left(-w\frac{\partial u}{\partial n} + u\frac{\partial w}{\partial n}\right)ds + \int_{\partial Q_T}\left(w u_t - u w_t\right)dx - \int_{\partial Q_0}\left(w u_t - u w_t\right)dx. \tag{4}$$
(In one space dimension over a rectangle $[x_0, x_1]\times[t_0, t_1]$, this is the familiar identity obtained by integrating $w(u_{tt} - u_{xx}) - u(w_{tt} - w_{xx})$ by parts, with contributions from the sides $x = x_0, x_1$ and $t = t_0, t_1$.)

Next, we show how $w(x, t)$ is determined so that the solution $u(x, t)$ of (1)-(3) can be specified at an arbitrary point $(\xi, \tau)$ in the region $Q$ from (4). For this, we first require that $w(x, t)$ be a solution of
$$w_{tt} - c^2\nabla^2 w = \delta(x - \xi)\,\delta(t - \tau), \quad \xi \in \Omega,\; 0 < \tau < T. \tag{5}$$
Then
$$\iint_Q u\left(w_{tt} - c^2\nabla^2 w\right)dx\,dt = \iint_Q u\,\delta(x - \xi)\,\delta(t - \tau)\,dx\,dt = u(\xi, \tau), \tag{6}$$
while, by (1),
$$\iint_Q w\left(u_{tt} - c^2\nabla^2 u\right)dx\,dt = 0. \tag{7}$$
Since
$$\int_{\partial Q_x}\left(-w\nabla u + u\nabla w\right)\cdot n_x\,ds = \int_{\partial Q_x}\left(-w\frac{\partial u}{\partial n} + u\frac{\partial w}{\partial n}\right)ds, \tag{8}$$
arguing exactly as in the elliptic case we obtain
$$\int_{\partial Q_x}\left(-w\frac{\partial u}{\partial n} + u\frac{\partial w}{\partial n}\right)ds = \int_{S_1}\frac{B}{\alpha}\frac{\partial w}{\partial n}\,ds - \int_{S_2\cup S_3}\frac{B}{\beta}\,w\,ds, \tag{9}$$
provided $w$ satisfies the homogeneous form of the given boundary condition,
$$\alpha w + \beta\frac{\partial w}{\partial n} = 0 \ \text{on } \partial Q_x. \tag{10}$$
Here $S_1$, $S_2$, $S_3$ denote the portions of $\partial Q_x$ on which $u$ satisfies boundary conditions of the first, second and third kind, respectively. Finally, to remove the unknown values of $u$ and $u_t$ on $\partial Q_T$, we require
$$w(x, T) = 0, \qquad w_t(x, T) = 0, \tag{11}$$
so that the integral over $\partial Q_T$ vanishes. The function $w(x, t)$ determined from (5), (10), and (11) is called the Green's function for the initial and boundary value problem (1)-(3) for $u(x, t)$. It is denoted as $w(x, t) = G(x, t; \xi, \tau)$.

Once the initial and boundary value problem for $G$ is solved, the values of $G$ and $G_t$ at $t = 0$ are known. Then the solution $u$ at an (arbitrary) point $(\xi, \tau)$ is given by
$$u(\xi, \tau) = \int_{\partial Q_0}\left(G g - G_t f\right)dx - c^2\int_{S_1}\frac{B}{\alpha}\frac{\partial G}{\partial n}\,ds + c^2\int_{S_2\cup S_3}\frac{B}{\beta}\,G\,ds. \tag{12}$$

Thus the Green's function $G(x, t; \xi, \tau)$ satisfies
$$G_{tt} - c^2\nabla^2 G = \delta(x - \xi)\,\delta(t - \tau), \quad x, \xi \in \Omega,\; t, \tau < T,\; \tau > 0, \tag{13}$$
$$G(x, T; \xi, \tau) = 0, \qquad G_t(x, T; \xi, \tau) = 0, \tag{14}$$
$$\alpha G + \beta\frac{\partial G}{\partial n} = 0 \ \text{on } \partial Q_x, \quad t < T. \tag{15}$$
Since $G(x, t; \xi, \tau) = G(\xi, \tau; x, t)$, $G$ satisfies the same differential equation but with time running forwards instead of backwards.
We now construct $G$ for the one-dimensional wave equation on $-\infty < x < \infty$. In this case we require
$$G(x, t; \xi, \tau),\; G_x(x, t; \xi, \tau) \to 0 \ \text{as } |x| \to \infty, \tag{20}$$
and
$$G_{tt} - c^2 G_{xx} = \delta(x - \xi, t - \tau), \tag{21}$$
where $\delta(x - \xi, t - \tau) = \delta(x - \xi)\,\delta(t - \tau)$.

Note that an application of the Fourier transform yields
$$\mathcal{F}[\delta(x - \xi)](\omega) = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}.$$
Taking the FT of (21) with respect to $x$ gives
$$\frac{d^2}{dt^2}\hat{G}(\omega, t; \xi, \tau) + c^2\omega^2\hat{G}(\omega, t; \xi, \tau) = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}\,\delta(t - \tau), \tag{22}$$
together with the causality condition
$$\hat{G}(\omega, t; \xi, \tau) = 0, \quad t < \tau. \tag{23}$$
Since $\delta(t - \tau) = 0$ for $t \ne \tau$, the solution of (22) is
$$\hat{G}(\omega, t; \xi, \tau) = \begin{cases} 0, & t < \tau, \\ C_1\cos(c\omega(t - \tau)) + C_2\sin(c\omega(t - \tau)), & t > \tau, \end{cases} \tag{24}$$
where $C_1$ and $C_2$ are arbitrary functions of $\omega$, $\xi$, and $\tau$. Requiring $\hat{G}$ to be continuous at $t = \tau$ yields $C_1 = 0$. To find $C_2$, we consider an interval $[\tau_1, \tau_2]$ such that $0 < \tau_1 < \tau < \tau_2$ and integrate (22) with respect to $t$ over this interval:
$$\hat{G}_t(\omega, \tau_2; \xi, \tau) - \hat{G}_t(\omega, \tau_1; \xi, \tau) + c^2\omega^2\int_{\tau_1}^{\tau_2}\hat{G}(\omega, t; \xi, \tau)\,dt = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}\int_{\tau_1}^{\tau_2}\delta(t - \tau)\,dt = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}.$$
By (24),
$$\hat{G}_t(\omega, \tau_1; \xi, \tau) = 0, \qquad \hat{G}_t(\omega, \tau_2; \xi, \tau) = c\omega\,C_2\cos(c\omega(\tau_2 - \tau)).$$
Letting $\tau_1, \tau_2 \to \tau$, we find $c\omega\,C_2 = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}$, i.e., $C_2 = e^{-i\omega\xi}/(\sqrt{2\pi}\,c\omega)$. Hence
$$\hat{G}(\omega, t; \xi, \tau) = \begin{cases} 0, & t < \tau, \\[2pt] \dfrac{1}{\sqrt{2\pi}}\,e^{-i\omega\xi}\,\dfrac{\sin(c\omega(t - \tau))}{c\omega}, & t > \tau. \end{cases}$$
Note that
$$\mathcal{F}^{-1}\left[\frac{\sin(a\omega)}{\omega}\right](x) = \sqrt{\frac{\pi}{2}}\,H(a - |x|), \qquad \mathcal{F}^{-1}\left[e^{-ia\omega}\,\mathcal{F}[f](\omega)\right](x) = f(x - a),$$
where $H$ is the Heaviside function. Hence
$$G(x, t; \xi, \tau) = \frac{1}{2c}\,H\!\left(c(t - \tau) - |x - \xi|\right). \tag{25}$$
The values of $G$ in the upper half $(t > 0)$ of the $(x, t)$-plane given by (25) can be written in the form
$$G(x, t; \xi, \tau) = \frac{1}{2c}\left[H\!\left((x - \xi) + c(t - \tau)\right) - H\!\left((x - \xi) - c(t - \tau)\right)\right]. \tag{26}$$
Writing $u$ in terms of $G$, we obtain
$$u(x, t) = \int_a^b\left[G(x, t; \xi, 0)\,u_\tau(\xi, 0) - G_\tau(x, t; \xi, 0)\,u(\xi, 0)\right]d\xi, \tag{27}$$
where $G(x, t; \xi, \tau)$ is called the Green's function for the wave equation.

When $-\infty < x < \infty$, the corresponding formula is obtained from (27) by letting $a \to -\infty$ and $b \to \infty$ and taking into account that $G(x, t; \xi, \tau) = 0$ for $|\xi|$ sufficiently large. In this case, the formula (27) reduces to
$$u(x, t) = \int_{-\infty}^{\infty}\left[G(x, t; \xi, 0)\,u_\tau(\xi, 0) - G_\tau(x, t; \xi, 0)\,u(\xi, 0)\right]d\xi. \tag{28}$$
This formula can be simplified by using the explicit form of $G$. Using (26) and the fact that
$$H'(\sigma - a) = \delta(\sigma - a),$$
we obtain
$$G_\tau(x, t; \xi, \tau) = -\frac{1}{2}\left[\delta\!\left((x - \xi) + c(t - \tau)\right) + \delta\!\left((x - \xi) - c(t - \tau)\right)\right].$$
With $u(\xi, 0) = f(\xi)$ and $u_\tau(\xi, 0) = g(\xi)$, this gives
$$u(x, t) = \frac{1}{2}\int_{-\infty}^{\infty}\left[\delta(x - \xi + ct) + \delta(x - \xi - ct)\right]f(\xi)\,d\xi + \frac{1}{2c}\int_{-\infty}^{\infty}\left[H(x - \xi + ct) - H(x - \xi - ct)\right]g(\xi)\,d\xi$$
$$= \frac{1}{2}\left[f(x + ct) + f(x - ct)\right] + \frac{1}{2c}\int_{x - ct}^{x + ct} g(\xi)\,d\xi,$$
which is d'Alembert's formula (see Module 6, Eq. (11)).
Practice Problems
1. Use the Green's function method to solve the IBVP:
$$u_{tt} = u_{xx}, \quad -\infty < x < \infty,\; t > 0,$$
$$u(x, t),\; u_x(x, t) \to 0 \ \text{as } |x| \to \infty,\; t > 0,$$
$$u(x, 0) = 0, \quad u_t(x, 0) = x, \quad -\infty < x < \infty.$$
2. Use the Green's function method to solve the IBVP:
$$u_{tt} = u_{xx} + f(x, t), \quad -\infty < x < \infty,\; t > 0,$$
$$u(x, t),\; u_x(x, t) \to 0 \ \text{as } |x| \to \infty,\; t > 0,$$
$$u(x, 0) = u_t(x, 0) = 0, \quad -\infty < x < \infty,$$
where
$$f(x, t) = \begin{cases} t, & -1 < x < 1, \\ 0, & \text{otherwise}. \end{cases}$$
Lecture 3

Consider the heat equation
$$u_t - \nabla^2 u = 0, \quad (x, t) \in Q = \Omega\times(0, T], \tag{1}$$
subject to the IC
$$u(x, 0) = f(x), \quad x \in \Omega, \tag{2}$$
and the BC
$$\alpha u + \beta\frac{\partial u}{\partial n} = B(x, t) \ \text{on } \partial\Omega, \quad 0 < t \le T. \tag{3}$$

The above problem can be treated in the same way as the wave equation discussed in the previous lecture. Let $Q$ be a cylindrical region in $(x, t)$-space obtained by extending the region $\Omega$ parallel to itself from $t = 0$ to $t = T$ (cf. Fig. 9.1). The lateral boundary of $Q$ is denoted by $\partial Q_x$. The two caps of the cylinder, which are portions of the planes $t = 0$ and $t = T$, are denoted by $\partial Q_0$ and $\partial Q_T$, respectively.

The boundary conditions for $u(x, t)$ are assigned on $\partial Q_x$. The exterior unit normal $n$ to $\partial Q$ has the form $n = (n_x, 0)$ on $\partial Q_x$, where $n_x$ is the exterior unit normal to $\partial\Omega$. On $\partial Q_0$, $n$ has the form $n = (0, -1)$, and on $\partial Q_T$, it has the form $n = (0, 1)$.

As a consequence of the divergence theorem, we have the integral identity
$$\iint_Q\left[w\left(u_t - \nabla^2 u\right) - u\left(-w_t - \nabla^2 w\right)\right]dx\,dt \tag{4}$$
$$= \iint_Q\widetilde{\nabla}\cdot\left(-w\nabla u + u\nabla w,\; u w\right)dx\,dt = \int_{\partial Q_x}\left(-w\frac{\partial u}{\partial n} + u\frac{\partial w}{\partial n}\right)ds + \int_{\partial Q_T} u w\,dx - \int_{\partial Q_0} u w\,dx, \tag{5}$$
where $\widetilde{\nabla} = (\nabla, \frac{\partial}{\partial t})$ is the gradient operator in space-time.

Observe that the operator $\frac{\partial}{\partial t} - \nabla^2$ is not self-adjoint, as was the case for the Laplace and wave equations. The adjoint operator of $\frac{\partial}{\partial t} - \nabla^2$ is given as $-\frac{\partial}{\partial t} - \nabla^2$. With this choice for the adjoint operator we find that $w$ must satisfy the backward problem
$$-w_t - \nabla^2 w = \delta(x - \xi)\,\delta(t - \tau), \quad \xi \in \Omega,\; 0 < \tau < T, \tag{6}$$
with the terminal condition
$$w(x, T) = 0 \tag{7}$$
and the BC
$$\alpha w + \beta\frac{\partial w}{\partial n} = 0 \ \text{on } \partial Q_x. \tag{8}$$
Then, exactly as before,
$$u(\xi, \tau) = \int_{\partial Q_0} f\,G\,dx - \int_{S_1}\frac{B}{\alpha}\frac{\partial G}{\partial n}\,ds + \int_{S_2\cup S_3}\frac{B}{\beta}\,G\,ds, \tag{9}$$
where we have set $w(x, t) = G(x, t; \xi, \tau)$; $G(x, t; \xi, \tau)$ is the Green's function for the initial and boundary value problem (1)-(3). Thus the Green's function $G(x, t; \xi, \tau)$ satisfies the equation
$$-G_t - \nabla^2 G = \delta(x - \xi)\,\delta(t - \tau), \quad x, \xi \in \Omega,\; t, \tau < T,\; \tau > 0, \tag{10}$$
$$\alpha G + \beta\frac{\partial G}{\partial n} = 0 \ \text{on } \partial Q_x, \quad t < T, \tag{11}$$
$$G(x, T; \xi, \tau) = 0. \tag{12}$$
The equation (10) satisfied by the Green's function $G$ is a backward heat equation. Since the problem for the Green's function is to be solved backwards in time, starting from the data at $t = T$, the initial and boundary value problem (10)-(12) for $G$ is well posed. Once $G$ is determined, all the terms on the right side of (9) are known and the solution $u(x, t)$ of the initial and boundary value problem (1)-(3) is completely determined.
Consider the following IBVP:
$$u_t = \alpha^2 u_{xx}, \quad 0 < x < L,\; t > 0, \tag{13}$$
$$u(0, t) = 0, \quad u(L, t) = 0, \quad t > 0, \tag{14}$$
$$u(x, 0) = f(x), \quad 0 < x < L. \tag{15}$$
By separation of variables, the solution is
$$u(x, t) = \sum_{n=1}^{\infty} c_n\,e^{-\alpha^2(n\pi/L)^2 t}\sin\frac{n\pi x}{L} = \sum_{n=1}^{\infty}\left\{\left[\frac{2}{L}\int_0^L f(\xi)\sin\frac{n\pi\xi}{L}\,d\xi\right]e^{-\alpha^2(n\pi/L)^2 t}\sin\frac{n\pi x}{L}\right\}$$
$$= \int_0^L f(\xi)\left[\frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\,\sin\frac{n\pi\xi}{L}\,e^{-\alpha^2(n\pi/L)^2 t}\right]d\xi. \tag{16}$$
Thus the Green's function for this problem is
$$G(x, t; \xi, 0) = \frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\,\sin\frac{n\pi\xi}{L}\,e^{-\alpha^2(n\pi/L)^2 t}, \tag{17}$$
and $u(x, t) = \int_0^L G(x, t; \xi, 0)\,f(\xi)\,d\xi$.

EXAMPLE. Take $\alpha = 1$, $L = 1$, and $f(x) = x$. Then
$$G(x, t; \xi, 0) = 2\sum_{n=1}^{\infty}\sin(n\pi x)\sin(n\pi\xi)\,e^{-n^2\pi^2 t}. \tag{18}$$
Thus,
$$u(x, t) = \int_0^1 G(x, t; \xi, 0)\,\xi\,d\xi \tag{19}$$
$$= \sum_{n=1}^{\infty}\left[2\sin(n\pi x)\int_0^1\xi\sin(n\pi\xi)\,d\xi\right]e^{-n^2\pi^2 t} \tag{20}$$
$$= \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\,e^{-n^2\pi^2 t}\sin(n\pi x). \tag{21}$$
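The series (21) can be checked numerically. The sketch below (an added illustration; the truncation levels are arbitrary) verifies that the series reproduces the initial condition $f(x) = x$ as $t \to 0$ and satisfies $u_t = u_{xx}$ for $t > 0$.

```python
import math

def u(x, t, N=200):
    return sum(2*(-1)**(n+1)/(n*math.pi) * math.exp(-n**2*math.pi**2*t)
               * math.sin(n*math.pi*x) for n in range(1, N+1))

# At t = 0 the series is the Fourier sine series of f(x) = x on (0, 1);
# convergence is slow there, so we take many terms and a loose tolerance.
ic_err = abs(u(0.4, 0.0, N=20000) - 0.4)

# For t > 0 the series converges rapidly; check the PDE by finite differences.
x, t, h = 0.4, 0.1, 1e-4
u_t  = (u(x, t+h) - u(x, t-h)) / (2*h)
u_xx = (u(x+h, t) - 2*u(x, t) + u(x-h, t)) / h**2
pde_err = abs(u_t - u_xx)
```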
Practice Problems
1. Use the Green's function method to solve the IBVP:
$$u_t = u_{xx}, \quad 0 < x < 1,\; t > 0,$$
$$u_x(0, t) = 0, \quad u_x(1, t) = 0, \quad t > 0,$$
$$u(x, 0) = x, \quad 0 < x < 1.$$
Bibliography

[1] D. Bleecker and G. Csordas, Basic Partial Differential Equations, Van Nostrand Reinhold, New York, 1992.
[2] C. Constanda, Solution Techniques for Elementary Partial Differential Equations, Chapman & Hall/CRC, New York, 2002.
[3] L. J. Corwin and R. H. Szczarba, Multivariable Calculus, Marcel Dekker, Inc., New York, 1982.
[4] S. J. Farlow, Partial Differential Equations for Scientists and Engineers, Birkhäuser, New York, 1993.
[5] F. John, Partial Differential Equations, Springer-Verlag, New York, 1982.
[6] E. Kreyszig, Advanced Engineering Mathematics, Wiley, 2011.
[7] J. Marsden and A. Weinstein, Calculus III, Springer-Verlag, New York, 1985.
[8] T. Myint-U and L. Debnath, Linear Partial Differential Equations for Scientists and Engineers, Birkhäuser, Boston, 2007.
[9] R. K. Nagle and E. B. Saff, Fundamentals of Differential Equations and Boundary Value Problems, Addison-Wesley, New York, 1996.
[10] I. N. Sneddon, Elements of Partial Differential Equations, Dover Publications, New York, 2006.
[11] E. C. Zachmanoglou and D. W. Thoe, Introduction to Partial Differential Equations with Applications, Dover Publications, Inc., New York, 1986.
[12] E. Zauderer, Partial Differential Equations of Applied Mathematics, Second Edition, John Wiley & Sons, New York, 1989.