
Module 1: Mathematical Preliminaries

This introductory module comprises four lectures. In these four lectures, we introduce to the readers some basic concepts from multivariable calculus and some essential results from ordinary differential equations (ODEs). Some geometrical concepts necessary for the subsequent modules are also discussed.
Module 1 is organised as follows. In Lecture 1, we review some basic definitions and results from multivariable calculus. In Lecture 2, we discuss some essential formulas for solving linear first-order and second-order (with constant coefficients only) ODEs. In addition, we review the basic existence and uniqueness theorems for initial value problems (IVPs) for ODEs and systems of ODEs. In Lecture 3, we discuss some geometrical concepts such as surfaces, normals, and integral curves and surfaces of vector fields. Finally, Lecture 4 is devoted to methods for finding the integral curves of a vector field by solving systems of ODEs.

MODULE 1: MATHEMATICAL PRELIMINARIES

Lecture 1

A Review of Multivariable Calculus

In this lecture, we recall some basic concepts from multivariable calculus. The concepts of limits, continuity, partial derivatives, directional derivatives, the chain rule, tangent planes and normals are discussed.
For any (x, y), (x₀, y₀) ∈ ℝ², let us denote by

  d((x, y), (x₀, y₀)) = √((x − x₀)² + (y − y₀)²)

the distance between the two points (x, y) and (x₀, y₀). A disk D_r(x₀, y₀) of radius r centered at (x₀, y₀) is defined as

  D_r(x₀, y₀) = {(x, y) | d((x, y), (x₀, y₀)) < r}.

The concept of limit can now be defined by the same ε, δ technique as in one-variable calculus.
DEFINITION 1. (The ε, δ definition of limit) Let f(x, y) be a real-valued function of two variables defined on a disk D_r(x₀, y₀), except possibly at (x₀, y₀). Then

  lim_{(x,y)→(x₀,y₀)} f(x, y) = l

if for every ε > 0 there is a δ > 0 such that

  |f(x, y) − l| < ε whenever 0 < d((x, y), (x₀, y₀)) < δ.

Definition 1 means that the distance between f(x, y) and l can be made arbitrarily small by making the distance from (x, y) to (x₀, y₀) sufficiently small (but not 0). That is, if any small interval (l − ε, l + ε) is given around l, then we can find a disk D_δ(x₀, y₀) with center (x₀, y₀) and radius δ > 0 such that f maps all the points in D_δ(x₀, y₀) [except possibly (x₀, y₀)] into the interval (l − ε, l + ε).
The definition of a limit can be extended to functions of three or more variables. Using vector notation, the definition can be written in a compact form as follows: Let f : D_r(x₀) ⊂ ℝⁿ → ℝ. Then

  lim_{x→x₀} f(x) = l

if for every ε > 0 there is a δ > 0 such that

  |f(x) − l| < ε whenever 0 < d(x, x₀) < δ.

DEFINITION 2. (Continuity) Let f(x, y) be a real-valued function of two variables defined in a disk D_r(x₀, y₀) with center (x₀, y₀). Then f is continuous at (x₀, y₀) if

  lim_{(x,y)→(x₀,y₀)} f(x, y) = f(x₀, y₀).

We say f is continuous in D_r(x₀, y₀) if f is continuous at every point (x, y) in D_r(x₀, y₀).


The intuitive meaning of continuity is that if the point (x, y) changes by a small amount,
then the value of f (x, y) changes by a small amount. Geometrically, this means that a
surface that is the graph of a continuous function has no holes or breaks.
DEFINITION 3. (Partial derivatives) Let f : D_r(x₀, y₀) → ℝ. The partial derivatives of f are the functions f_x and f_y defined by

  f_x(x, y) := lim_{h→0} [f(x + h, y) − f(x, y)] / h,
  f_y(x, y) := lim_{h→0} [f(x, y + h) − f(x, y)] / h.

To find f_x, treat y as a constant and differentiate f(x, y) with respect to x. Similarly, to find f_y, treat x as a constant and differentiate f(x, y) with respect to y. If z = f(x, y), we write

  f_x = ∂f/∂x = ∂z/∂x = z_x,   f_y = ∂f/∂y = ∂z/∂y = z_y.

Partial derivatives can also be defined for functions of three or more variables. In general, if z is a function of n variables, z = f(x₁, x₂, …, xₙ), its partial derivative with respect to the i-th variable xᵢ is

  ∂z/∂xᵢ := lim_{h→0} [f(x₁, …, x_{i−1}, xᵢ + h, x_{i+1}, …, xₙ) − f(x₁, …, xᵢ, …, xₙ)] / h.

We also write

  z_{xᵢ} = ∂z/∂xᵢ = ∂f/∂xᵢ = f_{xᵢ}.
Since the partial derivatives are themselves functions, we can take their partial derivatives to obtain higher derivatives. If z = f(x, y), we may compute

  f_xx(x, y) = ∂/∂x(∂z/∂x) = ∂²z/∂x²,   f_yy(x, y) = ∂/∂y(∂z/∂y) = ∂²z/∂y²,
  f_xy(x, y) = ∂/∂y(∂z/∂x) = ∂²z/∂y∂x,   f_yx(x, y) = ∂/∂x(∂z/∂y) = ∂²z/∂x∂y.

In general, f_xy ≠ f_yx. However, the following theorem gives a condition under which we can assert that f_xy = f_yx.


THEOREM 4. Let f : D_r(x₀, y₀) → ℝ. If f_xy and f_yx are both continuous at (x₀, y₀), then

  f_xy(x₀, y₀) = f_yx(x₀, y₀).
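As a quick illustration of Theorem 4 (not part of the original notes), the two mixed partials of a smooth function can be compared symbolically. This sketch assumes the third-party sympy library and an arbitrary smooth test function:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x * y) + x**2 * y  # any smooth (C^2) function will do

fxy = sp.diff(f, x, y)  # differentiate in x first, then in y
fyx = sp.diff(f, y, x)  # differentiate in y first, then in x

# For smooth f the mixed partials agree, as Theorem 4 asserts.
assert sp.simplify(fxy - fyx) == 0
```

For functions that are not C², such as the function in Practice Problem 3 below, the mixed partials at the bad point can genuinely differ.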
DEFINITION 5. (Chain rule) Let z₁ = f₁(x₁, …, xₙ), …, z_m = f_m(x₁, …, xₙ) be m functions of n variables, and let x₁ = g₁(t₁, …, t_k), …, xₙ = gₙ(t₁, …, t_k) be n functions of k variables, all with continuous partial derivatives. Consider the zᵢ's as functions of the tⱼ's by

  zᵢ = fᵢ(g₁(t₁, …, t_k), …, gₙ(t₁, …, t_k)).

Then

  ∂zᵢ/∂tⱼ = (∂zᵢ/∂x₁)(∂x₁/∂tⱼ) + (∂zᵢ/∂x₂)(∂x₂/∂tⱼ) + ⋯ + (∂zᵢ/∂xₙ)(∂xₙ/∂tⱼ).
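The chain rule can be checked on a concrete example (an illustration added here, using the third-party sympy library; the functions chosen are arbitrary):

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.symbols('x y')

x_expr, y_expr = sp.cos(t), sp.sin(t)  # inner functions x(t), y(t)
z = x**2 + y**2                        # outer function z = f(x, y)

# Chain rule: dz/dt = z_x * dx/dt + z_y * dy/dt
chain = (sp.diff(z, x).subs({x: x_expr, y: y_expr}) * sp.diff(x_expr, t)
         + sp.diff(z, y).subs({x: x_expr, y: y_expr}) * sp.diff(y_expr, t))

# Direct differentiation after substituting x(t), y(t)
direct = sp.diff(z.subs({x: x_expr, y: y_expr}), t)

assert sp.simplify(chain - direct) == 0
```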
DEFINITION 6. If z = f(x, y) is a function of two variables, its gradient vector field ∇f is defined by

  ∇f(x, y) := (f_x(x, y), f_y(x, y)) = (∂z/∂x, ∂z/∂y).

If u = f(x, y, z) is a function of three variables, its gradient vector field ∇f is defined by

  ∇f(x, y, z) := (f_x(x, y, z), f_y(x, y, z), f_z(x, y, z)) = (∂u/∂x, ∂u/∂y, ∂u/∂z).

DEFINITION 7. (Implicit differentiation) If y = f(x) is a function satisfying the relation z = F(x, y) = 0, then

  dy/dx = − F_x(x, f(x)) / F_y(x, f(x)).    (1)

Differentiating F(x, y) = 0 with respect to x using the chain rule gives

  (∂F/∂x)(dx/dx) + (∂F/∂y)(dy/dx) = 0,
i.e.,
  ∂F/∂x + (∂F/∂y)(dy/dx) = 0,

which yields (1).
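A minimal symbolic sketch of formula (1), added for illustration (it assumes the third-party sympy library; the circle is an arbitrary choice of F, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative relation: the circle F(x, y) = x^2 + y^2 - 1 = 0
F = x**2 + y**2 - 1

# Formula (1): dy/dx = -F_x / F_y
dydx = -sp.diff(F, x) / sp.diff(F, y)

# On the circle this gives the familiar slope dy/dx = -x/y.
assert sp.simplify(dydx + x / y) == 0
```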


DEFINITION 8. (Directional derivatives) The directional derivative of f at (x₀, y₀) in the direction of a unit vector u = (u₁, u₂) is

  D_u f(x₀, y₀) := lim_{h→0} [f(x₀ + hu₁, y₀ + hu₂) − f(x₀, y₀)] / h,

if this limit exists.


Note that if u = (1, 0), then D_u f = f_x, and if u = (0, 1), then D_u f = f_y. In other words, the partial derivatives of f with respect to x and y are just special cases of the directional derivatives.
THEOREM 9. If f(x, y) is a differentiable function of x and y, then f has a directional derivative in the direction of any unit vector u = (u₁, u₂), and

  D_u f(x, y) = f_x(x, y)u₁ + f_y(x, y)u₂.
The directional derivative can be written as

  D_u f(x, y) = f_x(x, y)u₁ + f_y(x, y)u₂
             = (f_x(x, y), f_y(x, y)) · (u₁, u₂)
             = ∇f(x, y) · u.    (2)

This expresses the directional derivative in the direction of u as the scalar projection of the gradient vector onto u. From (2), we have

  D_u f(x, y) = ∇f(x, y) · u = |∇f||u| cos θ = |∇f| cos θ,

where θ is the angle between ∇f and u. The maximum value of cos θ is 1, and this occurs when θ = 0. Therefore, the maximum value of D_u f(x, y) is |∇f|, and it occurs when θ = 0, i.e., when u has the same direction as ∇f.
Similarly, the directional derivative of a function of three variables in the direction of a unit vector u = (u₁, u₂, u₃) can be written as

  D_u f(x, y, z) = ∇f(x, y, z) · u.
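The gradient-dot-u formula of Theorem 9 can be exercised numerically (an added sketch, assuming the third-party sympy library; the function, point and direction are arbitrary choices):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y  # illustrative function

grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])       # grad f = (2xy, x^2)
u = sp.Matrix([sp.Rational(3, 5), sp.Rational(4, 5)])  # a unit vector

Du = grad.dot(u)                 # D_u f = grad f . u
val = Du.subs({x: 1, y: 2})      # evaluate at the point (1, 2)

# grad f(1, 2) = (4, 1), so D_u f(1, 2) = 4*(3/5) + 1*(4/5) = 16/5.
assert val == sp.Rational(16, 5)
```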
We now introduce the concept of differentiability for functions of several variables. Let us first recall the definition of differentiability in the one-variable case.
Let D be an open subset of ℝ. The function f : D → ℝ is said to be differentiable at x₀ ∈ D if

  lim_{x→x₀} [f(x) − f(x₀)] / (x − x₀)

exists. The value of this limit is called the derivative of f at x₀ and is denoted by f′(x₀).


The above definition may be restated as follows: the function f : D → ℝ is differentiable at x₀ ∈ D if there is a number f′(x₀) such that

  lim_{x→x₀} |f(x) − f(x₀) − f′(x₀)(x − x₀)| / |x − x₀| = 0.    (3)

Any real number a₀ determines a linear transformation L : ℝ → ℝ defined by Lx = a₀x. In particular, f′(x₀) determines a linear transformation L : ℝ → ℝ given by Lx = f′(x₀)x. Therefore, with this linear transformation, we may rewrite (3) as

  lim_{x→x₀} |f(x) − f(x₀) − L(x − x₀)| / |x − x₀| = 0.    (4)

We now use (4) to define differentiability of a function f : ℝⁿ → ℝᵐ.

DEFINITION 10. (Differentiability) Let D ⊂ ℝⁿ be an open subset and let f : D → ℝᵐ. We say that f is differentiable at x₀ ∈ D if there is a linear transformation L : ℝⁿ → ℝᵐ such that

  lim_{x→x₀} ‖f(x) − f(x₀) − L(x − x₀)‖ / ‖x − x₀‖ = 0.    (5)

The linear transformation L of (5) is called the derivative of f at x₀. We denote it by f′(x₀). We say that f is differentiable in D if it is differentiable at every point of D.
DEFINITION 11. (Jacobian matrix) Let f : D ⊂ ℝⁿ → ℝᵐ be defined by

  f(x) = (f₁(x), …, f_m(x)),

where fᵢ : D → ℝ, 1 ≤ i ≤ m. For each x ∈ D, we define the Jacobian matrix of f at x by

  J_f(x) := (a_ij),  where a_ij = (∂fᵢ/∂xⱼ)(x), 1 ≤ i ≤ m, 1 ≤ j ≤ n.
EXAMPLE 12. Let f : ℝ² → ℝ³ be given by

  f(x₁, x₂) = (x₁², x₁x₂, x₂²).

Here, f₁(x₁, x₂) = x₁², f₂(x₁, x₂) = x₁x₂, f₃(x₁, x₂) = x₂². Then

  ∂f₁/∂x₁ = 2x₁,  ∂f₂/∂x₁ = x₂,  ∂f₃/∂x₁ = 0,
  ∂f₁/∂x₂ = 0,   ∂f₂/∂x₂ = x₁,  ∂f₃/∂x₂ = 2x₂.

Therefore,

  J_f(x₁, x₂) = ⎡ 2x₁   0  ⎤
                ⎢ x₂   x₁  ⎥
                ⎣  0   2x₂ ⎦
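The Jacobian of Example 12 can be reproduced symbolically (an added check, assuming the third-party sympy library):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x1**2, x1 * x2, x2**2])  # the map of Example 12

J = f.jacobian([x1, x2])  # 3x2 matrix of partials (d f_i / d x_j)

assert J == sp.Matrix([[2*x1, 0], [x2, x1], [0, 2*x2]])
```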

The following theorem gives a formula for computing the derivative.


THEOREM 13. (Computing the derivative) Let D be an open subset of ℝⁿ and let f : D → ℝᵐ be defined by f(x) = (f₁(x), …, f_m(x)), where fᵢ : D → ℝ, 1 ≤ i ≤ m. If f is differentiable at a point x₀ in D, then each of the partial derivatives (∂fᵢ/∂xⱼ)(x₀) exists, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Furthermore,

  f′(x₀) = J_f(x₀).

Note that the linear transformation L is defined by the Jacobian matrix of f at x₀. In particular, for m = 1, we have

  L = f′(x₀) = ∇f(x₀).
The following theorem gives a sufficient condition for differentiability of f.
THEOREM 14. (Sufficient condition for differentiability) Let D ⊂ ℝⁿ be an open set and let f : D → ℝᵐ be defined by f(x) = (f₁(x), …, f_m(x)), where fᵢ : D → ℝ, 1 ≤ i ≤ m. Suppose that each (∂fᵢ/∂xⱼ) exists and is continuous on D, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Then f is differentiable on D.
We shall conclude this lecture by stating some results on maxima and minima in the
case of a function of several variables. We restrict our discussion to functions of two
variables only.
DEFINITION 15. (Maxima and minima) Let f(x, y) be a function of two variables. A point (x₀, y₀) is a local minimum point for f if there is a disk D_δ(x₀, y₀) about (x₀, y₀) such that

  f(x, y) ≥ f(x₀, y₀) for all (x, y) ∈ D_δ(x₀, y₀).

The number f(x₀, y₀) is called a local minimum value.


Similarly, if there is a disk D_δ(x₀, y₀) about (x₀, y₀) such that

  f(x, y) ≤ f(x₀, y₀) for all (x, y) ∈ D_δ(x₀, y₀),

then the point (x₀, y₀) is a local maximum point for f.
A point which is either a local maximum or minimum point is called a local extremum.
The following is the two-variable analogue of the first derivative test for one variable.
First Derivative Test:
If (x₀, y₀) is a local extremum of f and the partial derivatives of f exist at (x₀, y₀), then

  f_x(x₀, y₀) = f_y(x₀, y₀) = 0.

Such a point (x₀, y₀) is also called a critical point of f.
Second Derivative Test:
Let f(x, y) have continuous second-order partial derivatives, and suppose that (x₀, y₀) is a critical point of f, i.e.,

  f_x(x₀, y₀) = 0 and f_y(x₀, y₀) = 0.

Let A = f_xx(x₀, y₀), B = f_xy(x₀, y₀), and C = f_yy(x₀, y₀). Then the following statements are true.
(a) If A > 0 and AC − B² > 0, then (x₀, y₀) is a local minimum.
(b) If A < 0 and AC − B² > 0, then (x₀, y₀) is a local maximum.
(c) If AC − B² < 0, then (x₀, y₀) is a saddle point.
(d) If AC − B² = 0, then the test is inconclusive.
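The second derivative test can be run mechanically on a simple function (an added sketch, assuming the third-party sympy library; the function x² + y² is an arbitrary illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2  # illustrative: a paraboloid with a minimum at the origin

fx, fy = sp.diff(f, x), sp.diff(f, y)
crit = sp.solve([fx, fy], [x, y], dict=True)[0]  # the critical point {x: 0, y: 0}

A = sp.diff(f, x, 2).subs(crit)
B = sp.diff(f, x, y).subs(crit)
C = sp.diff(f, y, 2).subs(crit)

# A > 0 and AC - B^2 > 0, so case (a) applies: (0, 0) is a local minimum.
assert A > 0 and A * C - B**2 > 0
```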

Practice Problems
1. Show that lim_{(x,y)→(0,0)} xy/(x² + y²) does not exist.

2. Using the ε and δ definition, prove that f(x, y) = |x| is continuous at (0, 0).
3. Let

  f(x, y) = { xy(x² − y²)/(x² + y²),  (x, y) ≠ (0, 0);
              0,                      (x, y) = (0, 0). }

(a) If (x, y) ≠ (0, 0), compute f_x and f_y.
(b) What are the values of f(x, 0) and f(0, y)?
(c) Show that f_x(0, 0) = 0 = f_y(0, 0).
(d) Show that f_yx(0, 0) = 1 and f_xy(0, 0) = −1.
(e) What went wrong? Why are the mixed partials not equal?


4. Find the derivative of the function f : ℝ² → ℝ² defined by

  f(x, y) = (x² + xy, x − y²).

5. Find the maxima, minima and saddle points of f(x, y) = (x² − y²)e^{(−x²−y²)/2}.


Lecture 2

Essential Ordinary Differential Equations

In this lecture, we recall some methods of solving first-order IVPs in ODEs (separable and linear) and homogeneous second-order linear ODEs with constant coefficients. These results will be useful when solving linear homogeneous PDEs by the separation of variables method in the subsequent modules (cf. Module 5, Module 6 and Module 7). Some fundamental results on existence, uniqueness and continuous dependence of solutions on given data will also be discussed.
First-Order ODEs: A first-order ODE is separable if it can be written in the form

  f(y) dy/dx = g(x),    (1)

where y is an unknown function of the independent variable x. Integrating both sides (if possible) with respect to x gives

  ∫ f(y) (dy/dx) dx = ∫ g(x) dx  ⟹  ∫ f(y) dy = ∫ g(x) dx,

from which we find solutions y(x).
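As a small illustration of the separable method (added here; it assumes the third-party sympy library, and the equation y′ = xy is an arbitrary separable example):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Separable example: y' = x * y, i.e. (1/y) dy = x dx
ode = sp.Eq(y(x).diff(x), x * y(x))
sol = sp.dsolve(ode, y(x))  # sympy integrates both sides for us

# Verify the returned solution actually satisfies the ODE.
assert sp.simplify(sol.rhs.diff(x) - x * sol.rhs) == 0
```

Integrating by hand gives log|y| = x²/2 + const, i.e. y(x) = C·exp(x²/2), which matches the symbolic result.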
Consider a first-order linear nonhomogeneous ODE in the standard form

  y′(x) + p(x)y(x) = q(x).    (2)

When q(x) = 0, the resulting equation

  y′(x) + p(x)y(x) = 0

is called the homogeneous equation, which can be put in the separable form

  dy/y = −p(x) dx,  (y ≠ 0).

Its solution is thus given by

  y_h(x) = C exp(−∫ p(x) dx),

where C is an arbitrary constant. To solve (2), an integrating factor is given by

  μ(x) = exp(∫ p(x) dx).


The general solution of (2) is given by

  y(x) = (1/μ(x)) { ∫ μ(x)q(x) dx + C₁ }.    (3)

EXAMPLE 1. Solve (1 + x²)y′ + 2xy = 5x⁴.
Solution. Putting the equation into the standard form (2), we have

  y′ + {2x/(1 + x²)}y = 5x⁴/(1 + x²).    (4)

Here, p(x) = 2x/(1 + x²) and q(x) = 5x⁴/(1 + x²). An integrating factor for (4) is

  μ(x) = exp[∫ {2x/(1 + x²)} dx] = exp[log(1 + x²)] = 1 + x².

Multiplying both sides of (4) by μ(x), we obtain

  d/dx [μ(x)y(x)] = μ(x)q(x) = 5x⁴.

Integrating both sides of the above equation gives

  μ(x)y(x) = x⁵ + C  ⟹  y(x) = (x⁵ + C)/(1 + x²).
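The solution obtained in Example 1 can be verified symbolically (an added check, assuming the third-party sympy library):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = (x**5 + C) / (1 + x**2)  # the solution found in Example 1

# Substitute back into (1 + x^2) y' + 2 x y = 5 x^4.
lhs = (1 + x**2) * sp.diff(y, x) + 2 * x * y
assert sp.simplify(lhs - 5 * x**4) == 0
```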

Second-Order Linear ODEs with Constant Coefficients: We recall some basic results on the homogeneous second-order linear ODE of the form

  ay″(x) + by′(x) + cy(x) = 0,    (5)

where the coefficients a, b, and c are real constants with a ≠ 0. Let m₁ and m₂ be the roots of the associated auxiliary equation

  am² + bm + c = 0.

If m₁ and m₂ are real and distinct (b² − 4ac > 0), then the general solution of (5) is

  y(x) = c₁e^{m₁x} + c₂e^{m₂x}.

If m₁ = m₂ = m (b² − 4ac = 0), then the general solution of (5) is

  y(x) = c₃e^{mx} + c₄xe^{mx}.

If m₁ = α + iβ and m₂ = α − iβ (b² − 4ac < 0), then the general solution of (5) is

  y(x) = e^{αx}[c₅ cos(βx) + c₆ sin(βx)].

Here, cᵢ, i = 1, …, 6, are arbitrary constants.
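One of the three cases can be sketched concretely (added here, assuming the third-party sympy library; the equation y″ − 3y′ + 2y = 0, with distinct real auxiliary roots m = 1, 2, is an arbitrary illustration):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' - 3y' + 2y = 0: auxiliary equation m^2 - 3m + 2 = 0, roots m = 1, 2
sol = sp.dsolve(y(x).diff(x, 2) - 3 * y(x).diff(x) + 2 * y(x), y(x))
yx = sol.rhs  # expected form: C1*exp(x) + C2*exp(2*x)

# Verify the general solution satisfies the ODE for arbitrary constants.
residual = yx.diff(x, 2) - 3 * yx.diff(x) + 2 * yx
assert sp.simplify(residual) == 0
```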
On Existence and Uniqueness of IVPs: Consider the following IVPs:

  |y′| + 2|y| = 0,  y(0) = 1.    (6)
  y′(x) = x,  y(0) = 1.    (7)
  xy′ = y − 1,  y(0) = 1.    (8)

Note that the IVP (6) has no solution, the problem (7) has precisely one solution, namely y = x²/2 + 1, and the problem (8) has infinitely many solutions, namely y = 1 + cx, where c is an arbitrary constant. From the above three IVPs, we observe that an IVP

  y′(x) = f(x, y),  y(x₀) = y₀

may have no solution, precisely one solution, or more than one solution. This leads to the following fundamental results.
THEOREM 2. (Existence) Let R : |x − x₀| < a, |y − y₀| < b be a rectangle. If f(x, y) is continuous and bounded in R, i.e., there is a number K such that

  |f(x, y)| ≤ K  ∀(x, y) ∈ R,

then the IVP

  y′(x) = f(x, y),  y(x₀) = y₀    (9)

has at least one solution y(x). This solution is defined for all x in the interval

  |x − x₀| < α,  where α = min{a, b/K}.

THEOREM 3. (Uniqueness) Let R : |x − x₀| < a, |y − y₀| < b be a rectangle. If f(x, y) and ∂f/∂y are continuous and bounded in R, i.e., there exist two numbers K and M such that

  |f(x, y)| ≤ K  ∀(x, y) ∈ R,    (10)
  |∂f/∂y| ≤ M  ∀(x, y) ∈ R,    (11)

then the IVP (9) has a unique solution y(x). This solution is defined for all x in the interval

  |x − x₀| < α,  where α = min{a, b/K}.


EXAMPLE 4. Let R : |x| < 5, |y| < 3 be the rectangle. Consider the IVP

  y′ = 1 + y²,  y(0) = 0

over R. Here, a = 5, b = 3. Then

  max_{(x,y)∈R} |f(x, y)| = max_{(x,y)∈R} |1 + y²| ≤ 10 (= K),
  max_{(x,y)∈R} |∂f/∂y| = max_{(x,y)∈R} 2|y| ≤ 6 (= M),
  α = min{a, b/K} = min{5, 3/10} = 0.3 < 5.

Note that the solution of the IVP is y = tan x. This solution is valid in the interval |x| < 0.3 instead of the entire interval |x| < 5. It is easy to check that the solution y = tan x is discontinuous at x = ±π/2, and hence there is no continuous solution valid in the entire interval |x| < 5.
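The claimed solution of Example 4 can be substituted back symbolically (an added check, assuming the third-party sympy library):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.tan(x)  # the solution of Example 4

assert sp.simplify(sp.diff(y, x) - (1 + y**2)) == 0  # y' = 1 + y^2
assert y.subs(x, 0) == 0                             # y(0) = 0
```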
The conditions in Theorem 3 are sufficient conditions rather than necessary ones, and can be weakened. By the mean value theorem of differential calculus, we have

  f(x, y₂) − f(x, y₁) = (y₂ − y₁) (∂f/∂y)(x, ξ),

where (x, y₁), (x, y₂) ∈ R and ξ lies between y₁ and y₂. In view of the condition

  |∂f/∂y| ≤ M  ∀(x, y) ∈ R,

it follows that

  |f(x, y₂) − f(x, y₁)| ≤ M|y₂ − y₁|,    (12)

which is known as a Lipschitz condition. Thus, the condition (11) can be weakened to obtain the following existence and uniqueness result.
THEOREM 5. (Picard's Theorem) Let R : |x − x₀| < a, |y − y₀| < b be a rectangle. Let f(x, y) be continuous and bounded in R, i.e., there exists a number K such that

  |f(x, y)| ≤ K  ∀(x, y) ∈ R.

Further, let f satisfy the Lipschitz condition with respect to y in R, i.e., there exists a number M such that

  |f(x, y₂) − f(x, y₁)| ≤ M|y₂ − y₁|  ∀(x, y₁), (x, y₂) ∈ R.    (13)

Then the IVP (9) has a unique solution y(x). This solution is defined for all x in the interval

  |x − x₀| < α,  where α = min{a, b/K}.

Note that the continuity of f is not enough to guarantee the uniqueness of the solution, as the following example shows.
EXAMPLE 6. (Nonuniqueness) Consider the IVP:

  y′ = √|y|,  y(0) = 0.

Note that f(x, y) = √|y| is continuous for all y. However,

  y ≡ 0  and  y = { x²/4,  x ≥ 0;
                    −x²/4, x < 0 }

are two solutions of the given IVP. The uniqueness fails because the Lipschitz condition (13) is violated in any region which includes the line y = 0. With y₁ = 0 and y₂ > 0, we note that

  |f(x, y₂) − f(x, y₁)| / |y₂ − y₁| = |f(x, y₂) − f(x, 0)| / |y₂| = √y₂ / y₂ = 1/√y₂,

which can be made arbitrarily large by choosing y₂ → 0. Thus, it is not possible to find a fixed constant M such that the condition (13) holds, and hence the IVP has a solution but it is not unique.
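Both candidate solutions of Example 6 can be checked symbolically on x ≥ 0 (an added sketch, assuming the third-party sympy library):

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)  # restrict to the branch x >= 0

# On x >= 0, both y = 0 and y = x^2/4 satisfy y' = sqrt(|y|), y(0) = 0.
for y in (sp.Integer(0), x**2 / 4):
    assert sp.simplify(sp.diff(y, x) - sp.sqrt(sp.Abs(y))) == 0
    assert y.subs(x, 0) == 0
```

Two distinct solutions through the same initial point is exactly the failure of uniqueness that the violated Lipschitz condition predicts.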
Next, we generalize the above result to a system of n first-order ordinary differential equations in n unknowns of the form

  dyᵢ(x)/dx = fᵢ(x, y₁, …, yₙ),  i = 1, …, n,    (14)

satisfying the initial conditions

  y₁(x₀) = y₁⁰, …, yₙ(x₀) = yₙ⁰,    (15)

where y₁⁰, …, yₙ⁰ are the given initial values.
The fundamental result concerning the existence and uniqueness of solutions of the system (14)-(15) is essentially the same as Theorem 5.
THEOREM 7. Let Q be a box in ℝⁿ⁺¹ defined by

  Q : |x − x₀| < a, |y₁ − y₁⁰| < b₁, …, |yₙ − yₙ⁰| < bₙ.


Let each of the functions f₁, …, fₙ be continuous and bounded in Q, and satisfy the following Lipschitz condition with respect to the variables y₁, y₂, …, yₙ: there exist constants L₁, …, Lₙ such that

  |fᵢ(x, y₁¹, …, yₙ¹) − fᵢ(x, y₁², …, yₙ²)| ≤ L₁|y₁¹ − y₁²| + ⋯ + Lₙ|yₙ¹ − yₙ²|

for all pairs of points (x, y₁¹, …, yₙ¹), (x, y₁², …, yₙ²) ∈ Q. Then there exists a unique set of functions y₁(x), …, yₙ(x), defined for x in some interval |x − x₀| < h, 0 < h < a, such that y₁(x), …, yₙ(x) solve (14)-(15).

Practice Problems
1. Determine whether the given differential equation is separable.
  (a) dy/dx = y eˣ⁺ʸ/(x² + y);  (b) x dy/dx = 1 + y²;  (c) dy/dx = sin(x + y).
2. Solve the following first-order linear equations subject to the given conditions:
  (a) dy/dx − y/x = 1, y(2) = 1;  (b) 4 dy/dx + 3xy = 5, y(0) = 1;  (c) sin x dy/dx + y cos x = x sin x, y(π/2) = 2.
3. Find the general solution of the following second-order homogeneous linear ODEs.
  (a) d²y/dx² + 4 dy/dx + 5y = 0;  (b) d²y/dx² − dy/dx = 0;  (c) d²y/dx² − 2 dy/dx + 4y = 0.
4. Does f(x, y) = |x| + |y| satisfy a Lipschitz condition in the xy-plane? Does ∂f/∂y exist?
5. Find all solutions of the IVP dy/dx = 2√y, y(1) = 0.


Lecture 3

Surfaces and Integral Curves

In Lecture 3, we recall some geometrical concepts that are essential for understanding the nature of solutions of partial differential equations to be discussed in the subsequent lectures.
Surface: A surface is the locus of a point moving in space with two degrees of freedom. Generally, we use implicit and explicit representations to describe such a locus by mathematical formulas.
In the implicit representation we describe a surface as a set

  S = {(x, y, z) | F(x, y, z) = 0},

i.e., a set of points (x, y, z) satisfying an equation of the form F(x, y, z) = 0.
Sometimes we can solve such an equation for one of the coordinates in terms of the other two, say for z in terms of x and y. When this is possible, we obtain an explicit representation of the form z = f(x, y).
EXAMPLE 1. A sphere of radius 1 centered at the origin has the implicit representation

  x² + y² + z² − 1 = 0.

When this equation is solved for z, it leads to two solutions:

  z = √(1 − x² − y²)  and  z = −√(1 − x² − y²).

The first equation gives an explicit representation of the upper hemisphere, and the second of the lower hemisphere.
We now describe a class of surfaces more general than surfaces obtained as graphs of functions. For simplicity, we restrict the discussion to the case of three dimensions.
Let Ω ⊂ ℝ³ and let F(x, y, z) ∈ C¹(Ω), where C¹(Ω) := {F(x, y, z) ∈ C(Ω) : F_x, F_y, F_z ∈ C(Ω)}. The gradient of F, denoted by ∇F, is a vector-valued function defined by

  ∇F = (∂F/∂x, ∂F/∂y, ∂F/∂z).

One can visualize ∇F as a field of vectors (a vector field), with one vector, ∇F, emanating from each point (x, y, z) ∈ Ω. Assume that

  ∇F(x, y, z) ≠ (0, 0, 0),  ∀(x, y, z) ∈ Ω.    (1)


This means that the partial derivatives of F do not vanish simultaneously at any point of Ω.
DEFINITION 2. (Level surface) The set

  S_c = {(x, y, z) | (x, y, z) ∈ Ω and F(x, y, z) = c},

for some appropriate value of the constant c, is a surface in Ω. This surface is called a level surface of F.
Note: When Ω ⊂ ℝ², the set S_c = {(x, y) | (x, y) ∈ Ω and F(x, y) = c} is called a level curve in Ω.
Let (x₀, y₀, z₀) ∈ Ω and set c = F(x₀, y₀, z₀). The equation

  F(x, y, z) = c    (2)

represents a surface in Ω passing through the point (x₀, y₀, z₀). For different values of c, (2) represents different surfaces in Ω. Each point of Ω lies on exactly one level surface of F. Any two points (x₀, y₀, z₀) and (x₁, y₁, z₁) of Ω lie on the same level surface if and only if

  F(x₀, y₀, z₀) = F(x₁, y₁, z₁).

Thus, one may visualize Ω as being laminated by the level surfaces of F. The equation (2) represents a one-parameter family of surfaces in Ω.
EXAMPLE 3. Take Ω = ℝ³ \ {(0, 0, 0)} and let F(x, y, z) = x² + y² + z². Then

  ∇F(x, y, z) = (2x, 2y, 2z).

Note that the condition (1) is satisfied ∀(x, y, z) ∈ Ω. The level surfaces of F are spheres centered at the origin.
EXAMPLE 4. Take Ω = ℝ³ and F(x, y, z) = z. Then ∇F(x, y, z) = (0, 0, 1). The condition (1) is satisfied at every point of Ω. The level surfaces are planes parallel to the (x, y)-plane.
Consider the surface given by the equation (2) and let the point (x₀, y₀, z₀) lie on this surface. We now ask the following question: Is it possible to describe S_c by an equation of the form

  z = f(x, y),    (3)

so that S_c is the graph of f? This is equivalent to asking whether it is possible to solve (2) for z in terms of x and y. An answer to this question is contained in the following theorem.


THEOREM 5. (Implicit Function Theorem) If F is defined within a sphere containing the point (x₀, y₀, z₀), where F(x₀, y₀, z₀) = 0, F_z(x₀, y₀, z₀) ≠ 0, and F_x, F_y and F_z are continuous inside the sphere, then the equation F(x, y, z) = 0 defines z = f(x, y) near the point (x₀, y₀, z₀).
EXAMPLE 6. Consider the unit sphere

  x² + y² + z² = 1.    (4)

Note that the point (0, 0, 1) lies on this surface and F_z(0, 0, 1) = 2 ≠ 0. By the implicit function theorem, we can solve (4) for z near the point (0, 0, 1). In fact, we have

  z = +√(1 − x² − y²),  x² + y² < 1.    (5)

In the upper half space z > 0, (4) and (5) describe the same surface.
The point (0, 0, −1) also lies on the surface (4), and F_z(0, 0, −1) = −2 ≠ 0. Near (0, 0, −1), we have

  z = −√(1 − x² − y²),  x² + y² < 1.    (6)

In the lower half space z < 0, (4) and (6) represent the same surface.
On the other hand, at the point (1, 0, 0), we have F_z(1, 0, 0) = 0. Clearly, it is not possible to solve (4) for z in terms of x and y near this point.
Note that the set of points satisfying the equations

  F(x, y, z) = c₁,  G(x, y, z) = c₂    (7)

must lie on the intersection of these two surfaces. If ∇F and ∇G are not colinear at any point of the domain Ω where both F and G are defined, i.e.,

  ∇F(x, y, z) × ∇G(x, y, z) ≠ (0, 0, 0),  ∀(x, y, z) ∈ Ω,    (8)

then the intersection of the two surfaces given by (7) is always a curve.
Since

  ∇F × ∇G = ( ∂(F, G)/∂(y, z), ∂(F, G)/∂(z, x), ∂(F, G)/∂(x, y) ),    (9)

where the Jacobian

  ∂(F, G)/∂(y, z) = (∂F/∂y)(∂G/∂z) − (∂F/∂z)(∂G/∂y),

the condition (8) means that at every point of Ω at least one of the Jacobians on the right side of (9) is different from zero.


EXAMPLE 7. Let

  F(x, y, z) = x² + y² − z,  G(x, y, z) = z.

Note that ∇F = (2x, 2y, −1) and ∇G = (0, 0, 1). It is easy to see that if Ω = ℝ³ with the z-axis removed, then the condition (8) is satisfied in Ω. The pair of equations

  x² + y² − z = 0,  z = 1

represents a circle: the intersection of the paraboloidal surface represented by the first equation and the plane represented by the second equation.
Systems of Surfaces: A one-parameter system of surfaces is represented by an equation of the form

  f(x, y, z, c) = 0.    (10)

Consider the system of surfaces described by the equation

  f(x, y, z, c + δc) = 0,    (11)

corresponding to the slightly different value c + δc.
Note that these two surfaces will intersect in a curve whose equations are (10) and (11). This curve may be considered to be the intersection of the equations

  f(x, y, z, c) = 0,  lim_{δc→0} [f(x, y, z, c + δc) − f(x, y, z, c)] / δc = 0.

The limiting curve, described by the set of equations

  f(x, y, z, c) = 0,  (∂f/∂c)(x, y, z, c) = 0,    (12)

is called the characteristic curve (cf. [10]) of (10).


REMARK 8. Geometrically, the characteristic curve is the curve on the surface (10) approached by the intersection of (10) and (11) as δc → 0. Note that as c varies, the characteristic curves (12) trace out a surface whose equation is of the form

  g(x, y, z) = 0.
DEFINITION 9. (Envelope of a one-parameter system) The surface determined by eliminating the parameter c between the equations

  f(x, y, z, c) = 0,  (∂f/∂c)(x, y, z, c) = 0

is called the envelope of the one-parameter system f(x, y, z, c) = 0.


EXAMPLE 10. Consider the equation

  x² + y² + (z − c)² = 1.

This equation represents the family of spheres of unit radius with centers on the z-axis. Set

  f(x, y, z, c) = x² + y² + (z − c)² − 1.

Then ∂f/∂c = −2(z − c). The set of equations

  x² + y² + (z − c)² = 1,  z = c

describes the characteristic curve of the surface. Eliminating the parameter c, the envelope of this family is the cylinder

  x² + y² = 1.
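The elimination in Example 10 can be carried out symbolically (an added sketch, assuming the third-party sympy library):

```python
import sympy as sp

x, y, z, c = sp.symbols('x y z c')

# The family of Example 10: x^2 + y^2 + (z - c)^2 - 1 = 0
f = x**2 + y**2 + (z - c)**2 - 1
fc = sp.diff(f, c)  # -2*(z - c), so the characteristic curve has z = c

# Eliminate the parameter c between f = 0 and df/dc = 0.
envelope = f.subs(c, sp.solve(fc, c)[0])

# The envelope is the cylinder x^2 + y^2 = 1.
assert sp.expand(envelope) == x**2 + y**2 - 1
```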
Now consider the two-parameter system of surfaces defined by the equation

  f(x, y, z, c, d) = 0,    (13)

where c and d are parameters.
In a similar way, the characteristic curve of the surface (13) passes through the point defined by the equations

  f(x, y, z, c, d) = 0,  (∂f/∂c)(x, y, z, c, d) = 0,  (∂f/∂d)(x, y, z, c, d) = 0.

This point is called the characteristic point of the two-parameter system (13). As the parameters c and d vary, this point generates a surface which is called the envelope of the surfaces (13).
DEFINITION 11. (Envelope of a two-parameter system) The surface obtained by eliminating c and d from the equations

  f(x, y, z, c, d) = 0,  (∂f/∂c)(x, y, z, c, d) = 0,  (∂f/∂d)(x, y, z, c, d) = 0

is called the envelope of the two-parameter system f(x, y, z, c, d) = 0.


EXAMPLE 12. Consider the equation

  (x − c)² + (y − d)² + z² = 1,

where c and d are parameters. Observe that

  (x − c)² + (y − d)² + z² = 1,  x − c = 0,  y − d = 0.

The characteristic points of the two-parameter system are (c, d, ±1). Eliminating c and d, the envelope is the pair of parallel planes z = ±1.


Integral Curves of Vector Fields: Let V(x, y, z) = (P(x, y, z), Q(x, y, z), R(x, y, z)) be a vector field defined in some domain Ω ⊂ ℝ³ satisfying the following two conditions:
(a) V ≠ 0 in Ω, i.e., the component functions P, Q and R of V do not vanish simultaneously at any point of Ω;
(b) P, Q, R ∈ C¹(Ω).
DEFINITION 13. A curve C in Ω is an integral curve of the vector field V if V is tangent to C at each of its points.
EXAMPLE 14. 1. The integral curves of the constant vector field V = (1, 0, 0) are lines parallel to the x-axis (see Fig. 1.1).
2. The integral curves of V = (−y, x, 0) are circles parallel to the (x, y)-plane and centered on the z-axis (see Fig. 1.1).

Figure 1.1: Integral curves of V = (1, 0, 0) and V = (−y, x, 0)


REMARK 15. In physics, if V is a force field, the integral curves of V are called lines of force. If V is the velocity field of a fluid flow, the integral curves of V are called lines of flow. These are the paths of motion of the fluid particles.
With V = (P, Q, R), associate the system of ODEs:

  dx/dt = P(x, y, z),  dy/dt = Q(x, y, z),  dz/dt = R(x, y, z).    (14)

A solution (x(t), y(t), z(t)) of the system (14), defined for t in some interval I, may be regarded as a curve in Ω. We call this curve a solution curve of the system (14). Every solution curve of the system (14) is an integral curve of the vector field V. Conversely, if C is an integral curve of V, then there is a parametric representation

  x = x(t),  y = y(t),  z = z(t);  t ∈ I,


of C such that (x(t), y(t), z(t)) is a solution of the system of equations (14). Thus, every integral curve of V, if parametrized appropriately, is a solution curve of the associated system of equations (14).
It is customary to write the system (14) in the form

  dx/P = dy/Q = dz/R.    (15)

EXAMPLE 16. The systems associated with the vector fields V = (x, y, z) and V = (−y, x, 0), respectively, are

  dx/x = dy/y = dz/z,    (16)
  dx/(−y) = dy/x = dz/0.    (17)

Note that the zero which appears in the denominator of (17) should not be disturbing. It simply means that dz/dx = dz/dy = dz/dt = 0.
Before we discuss the method of solution of (15), let us introduce some basic definitions and facts (cf. [11]).
DEFINITION 17. Two functions φ(x, y, z), ψ(x, y, z) ∈ C¹(Ω) are functionally independent in Ω ⊂ ℝ³ if

  ∇φ(x, y, z) × ∇ψ(x, y, z) ≠ 0,  ∀(x, y, z) ∈ Ω.    (18)

Geometrically, condition (18) means that ∇φ and ∇ψ are not parallel at any point of Ω.
DEFINITION 18. A function φ ∈ C¹(Ω) is called a first integral of the vector field V = (P, Q, R) (or of its associated system (15)) in Ω if at each point of Ω, V is orthogonal to ∇φ, i.e.,

  V · ∇φ = 0  ⟺  P ∂φ/∂x + Q ∂φ/∂y + R ∂φ/∂z = 0 in Ω.

THEOREM 19. Let φ₁ and φ₂ be any two functionally independent first integrals of V in Ω. Then the equations

  φ₁(x, y, z) = c₁,  φ₂(x, y, z) = c₂    (19)

describe the collection of all integral curves of V in Ω.


If φ(x, y, z) is a first integral of V and f(φ) is a C¹ function of a single variable, then w(x, y, z) = f(φ(x, y, z)) is also a first integral of V. This follows from the fact that

  P ∂w/∂x + Q ∂w/∂y + R ∂w/∂z = P f′(φ) ∂φ/∂x + Q f′(φ) ∂φ/∂y + R f′(φ) ∂φ/∂z
    = f′(φ) (P ∂φ/∂x + Q ∂φ/∂y + R ∂φ/∂z) = 0.

Similarly, if f(u, v) is a C¹ function of two variables and φ₁(x, y, z) and φ₂(x, y, z) are any two first integrals of V, then w(x, y, z) = f(φ₁(x, y, z), φ₂(x, y, z)) is also a first integral of V.
EXAMPLE 20. Let V = (1, 0, 0) be a vector field and let Ω = ℝ³. A first integral of V is a solution of the equation

  ∂φ/∂x = 0.

Any function of y and z only is a solution of this equation. For example,

  φ₁ = y,  φ₂ = z

are two solutions which are functionally independent. The integral curves of V are described by the equations

  y = c₁,  z = c₂,

and are straight lines parallel to the x-axis.


EXAMPLE 21. Let V = (y, −x, 0) be a vector field and let Ω = R³ ∖ {z-axis}. A first integral of V is a solution of the equation

y ∂u/∂x − x ∂u/∂y = 0.

It is easy to verify that

u₁(x, y, z) = x² + y²,   u₂(x, y, z) = z    (20)

are two functionally independent first integrals of V. Therefore, the integral curves of V in Ω are given by

x² + y² = c₁,   z = c₂.    (21)

The above equations describe circles parallel to the (x, y)-plane and centered on the z-axis (see the second figure of Fig. 1.1).

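The claims of Example 21 can be checked symbolically. The sketch below is not part of the original notes; it uses sympy to verify that u₁ = x² + y² and u₂ = z are first integrals of V = (y, −x, 0) and that their gradients are nowhere parallel off the z-axis:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = sp.Matrix([y, -x, 0])        # the vector field of Example 21
u1 = x**2 + y**2                 # candidate first integrals
u2 = z

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

# First-integral condition of Definition 18: V . grad(u) = 0
assert sp.simplify(V.dot(grad(u1))) == 0
assert sp.simplify(V.dot(grad(u2))) == 0

# Functional independence (Definition 17): grad(u1) x grad(u2) != 0
cross = grad(u1).cross(grad(u2))
print(cross.T)   # Matrix([[2*y, -2*x, 0]]), nonzero away from the z-axis
```

Since the cross product vanishes only where x = y = 0, the two first integrals are functionally independent exactly on Ω = R³ ∖ {z-axis}, matching the choice of Ω in the example.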

Practice Problems 3

1. Find a vector V(x, y, z) normal to the surface z = √(x² + y²) + (x² + y²)^{3/2}.

2. If ∇f(x, y, z) is always parallel to the vector (x, y, z), show that f must assume equal values at the points (0, 0, a) and (0, 0, −a).

3. Find ∇F, where F(x, y, z) = z² − x² − y². Find the largest set on which grad F does not vanish.

4. Find a vector normal to the surface z² − x² − y² = 0 at the point (1, 0, 1).

5. If possible, solve the equation z² − x² − y² = 0 for z in terms of x, y near the following points: (a) (1, 1, √2); (b) (1, 1, −√2); (c) (0, 0, 0).

6. Find the integral curves of the following vector fields: (a) V = (x, 0, z); (b) V = (x², y³, 0); (c) V = (2, 3y², 0).

7. Let u(x, y, z) be a first integral of V and let C be an integral curve of V given by x = x(t), y = y(t), z = z(t), t ∈ I. Show that C must lie on some level surface of u. [Hint: Compute d/dt {u(x(t), y(t), z(t))}.]

8. Let V be the vector field given by V = (x, y, z) and let Ω be the octant x > 0, y > 0, z > 0. Show that u₁(x, y, z) = y/x and u₂(x, y, z) = z/x are functionally independent first integrals of V in Ω.


Lecture 4

Solving Equations dx/P = dy/Q = dz/R

In the previous lecture, we have seen that the integral curves of the set of differential equations

dx/P = dy/Q = dz/R    (1)

form a two-parameter family of curves in three-dimensional space. If we can derive two relations of the form

u₁(x, y, z) = c₁,   u₂(x, y, z) = c₂,    (2)

then by varying c₁ and c₂ we obtain a two-parameter family of curves satisfying (1). In this lecture, we shall describe methods for finding the integral curves of the set of differential equations (1).
Method I: Along any tangential direction through a point (x, y, z) to the surface u₁(x, y, z) = c₁, we have

(∂u₁/∂x) dx + (∂u₁/∂y) dy + (∂u₁/∂z) dz = 0.    (3)

If u₁(x, y, z) = c₁ is a suitable one-parameter system of surfaces, then the tangential direction to the integral curve through the point (x, y, z) is also a tangential direction to this surface. Hence

(P, Q, R) · ∇u₁ = 0,  i.e.,  P ∂u₁/∂x + Q ∂u₁/∂y + R ∂u₁/∂z = 0.

To find u₁, choose functions P₁, Q₁ and R₁ such that

(P, Q, R) · (P₁, Q₁, R₁) = 0,  i.e.,  P P₁ + Q Q₁ + R R₁ = 0.    (4)

Then there exists a function u₁ such that

P₁ = ∂u₁/∂x,   Q₁ = ∂u₁/∂y,   R₁ = ∂u₁/∂z,

and this leads to the equation

du₁ = P₁ dx + Q₁ dy + R₁ dz,    (5)

which is an exact differential.


REMARK 1. The method described above for finding solutions of (1) is essentially by inspection. A good deal of intuition is required to determine the forms of the functions P₁, Q₁ and R₁ (cf. [10]).
EXAMPLE 2. Find the integral curves of the equations

dx/(y(x + y)) = dy/(x(x + y)) = dz/(z(x + y)).    (6)

Solution. Comparing with (1), we find that

P = y(x + y),   Q = x(x + y),   R = z(x + y).

If we choose

P₁ = 1/z,   Q₁ = 1/z,   R₁ = −(x + y)/z²,

then the condition P P₁ + Q Q₁ + R R₁ = 0 is satisfied. The function u₁(x, y, z) is then determined as follows:

du₁ = (1/z) dx + (1/z) dy − ((x + y)/z²) dz = d( (x + y)/z ),

so that

u₁(x, y, z) = (x + y)/z = c.
Similarly, choosing P₁ = x, Q₁ = −y and R₁ = 0, one verifies that condition (4) is satisfied. The function u₂ is then determined as

u₂(x, y, z) = (x² − y²)/2.
Thus, the integral curves of the differential equations (6) are the members of the two-parameter family of curves

x + y = c₁ z,   x² − y² = c₂.

Method II: Suppose that we are able to find three functions P₁, Q₁ and R₁ such that

(P₁ dx + Q₁ dy + R₁ dz) / (P P₁ + Q Q₁ + R R₁) = dW₁    (7)

is an exact differential. Similarly, suppose we can find another three functions P₂, Q₂ and R₂ such that

(P₂ dx + Q₂ dy + R₂ dz) / (P P₂ + Q Q₂ + R R₂) = dW₂    (8)


is also an exact differential. Since each of the ratios

(P₁ dx + Q₁ dy + R₁ dz)/(P P₁ + Q Q₁ + R R₁) = (P₂ dx + Q₂ dy + R₂ dz)/(P P₂ + Q Q₂ + R R₂)

equals dx/P = dy/Q = dz/R, it now follows that

dW₁ = dW₂,

which yields the relation

W₁(x, y, z) = W₂(x, y, z) + c₁,

where c₁ denotes an arbitrary constant.
EXAMPLE 3. Find the integral curves of the equations

dx/(y − x) = dy/(x + y) = z dz/(x² + y²).    (9)

Solution. Here P = y − x, Q = x + y and R = (x² + y²)/z. Observe that P + Q = 2y.

Now choose P₁ = 1, Q₁ = 1 and R₁ = 0 to obtain

(dx + dy)/2y = dy/(x + y)
⟹ (x + y)(dx + dy) = 2y dy
⟹ (1/2) d{(x + y)²} = 2y dy.

This has a solution of the form

u₁(x, y, z) = (x + y)²/2 − y² = c₁.

Similarly, with P₂ = x, Q₂ = −y and R₂ = z, we find that

x dx − y dy + z dz = 0,

which has solution

u₂(x, y, z) = x² − y² + z² = c₂.
The equations

(x + y)²/2 − y² = c₁,   x² − y² + z² = c₂

constitute the integral curves of (9).

Method III: When one of the variables is absent from (1), we can derive the integral curves in a simpler way. For the sake of definiteness, let P and Q be functions of x and y only. Then the equation

dx/P = dy/Q


may be written as

dy/dx = f(x, y),  where f(x, y) = Q/P.

Suppose this equation has a solution of the form

φ(x, y, c₁) = 0.    (10)

Solving (10) for x and substituting the value of x in the equation

dy/Q = dz/R

leads to an equation of the form

dy/dz = g(y, z, c₁).    (11)

Let the solution of (11) be expressed by

ψ(y, z, c₁, c₂) = 0.    (12)

EXAMPLE 4. Find the integral curves of the equations

dx/x = dy/(y + x²) = dz/(y + z).    (13)

Solution. The first two equations may be expressed as

dy/dx − y/x = x,  i.e.,  d/dx (y/x) = 1,

which has solution

y = c₁ x + x².
Using the first and third equations of (13), we note that

dz/dx = y/x + z/x = c₁ + x + z/x,  i.e.,  d/dx (z/x) = c₁/x + 1,

which has solution

z = c₁ x log x + c₂ x + x².

Hence, the integral curves of the differential equations (13) are given by the equations

y = c₁ x + x²,   z = c₁ x log x + c₂ x + x².
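The integral curves obtained in Example 4 can be verified by substitution: y = c₁x + x² must satisfy dy/dx = (y + x²)/x, and z = c₁x log x + c₂x + x² must satisfy dz/dx = (y + z)/x. A sympy check (not part of the original notes):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', positive=True)

# Integral curves of Example 4
y = c1*x + x**2
z = c1*x*sp.log(x) + c2*x + x**2

# Along the curves: dy/dx = (y + x^2)/x and dz/dx = (y + z)/x
assert sp.simplify(sp.diff(y, x) - (y + x**2)/x) == 0
assert sp.simplify(sp.diff(z, x) - (y + z)/x) == 0
```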


Practice Problems 4

Find the integral curves of the following systems of ODEs:

1. dx/(yz) = dy/(zx) = dz/(xy)

2. dx/z = dy/(xz) = dz/y

3. dx/(xz − y) = dy/(yz − x) = dz/(xy − z)

4. dx/(y + 3z) = dy/(z + 5x) = dz/(x + 7y)

Module 2: First-Order Partial Differential Equations

The mathematical formulations of many problems in science and engineering reduce to the study of first-order PDEs. For instance, first-order PDEs arise in gas flow problems, traffic flow problems, the phenomenon of shock waves, the motion of wave fronts, Hamilton-Jacobi theory, nonlinear continuum mechanics, quantum mechanics, etc. It is therefore essential to study the theory of first-order PDEs and the nature of their solutions in order to analyze the related physical problems.

In Module 2, we shall study first-order linear, quasi-linear and nonlinear PDEs and methods of solving these equations. An important tool, the method of characteristics, is explained for these equations; it reduces solving a PDE to solving a system of ODEs along characteristic curves. Further, Charpit's method and Jacobi's method for nonlinear first-order PDEs are discussed.

This module consists of seven lectures. Lecture 1 introduces some basic concepts of first-order PDEs such as the formation of PDEs, the classification of first-order PDEs and Cauchy's problem for first-order PDEs. In Lecture 2, we study first-order linear PDEs and the parametric form of solutions of first-order PDEs. In Lecture 3, we study first-order quasi-linear PDEs and discuss the method of characteristics for them. Lecture 4 is devoted to nonlinear first-order PDEs and Cauchy's method of characteristics for finding solutions of these equations. Lecture 5 is focused on compatible systems of equations and Charpit's method for solving nonlinear equations. In Lecture 6, we consider some special types of PDEs and methods of obtaining their general integrals. Finally, Jacobi's method for solving nonlinear PDEs is discussed in Lecture 7.

MODULE 2: FIRST-ORDER PARTIAL DIFFERENTIAL EQUATIONS

Lecture 1

First-Order Partial Differential Equations

A first-order PDE in two independent variables x, y and the dependent variable z can be written in the form

f(x, y, z, ∂z/∂x, ∂z/∂y) = 0.    (1)

For convenience, we set

p = ∂z/∂x,   q = ∂z/∂y.

Equation (1) then takes the form

f(x, y, z, p, q) = 0.    (2)

Equations of the type (2) arise in many applications in geometry and physics. For instance, consider the following geometrical problem.

EXAMPLE 1. Find all functions z(x, y) such that the tangent plane to the graph z = z(x, y) at any arbitrary point (x₀, y₀, z(x₀, y₀)) passes through the origin. Such functions are characterized by the PDE x zx + y zy − z = 0.

The equation of the tangent plane to the graph at (x₀, y₀, z(x₀, y₀)) is

zx(x₀, y₀)(x − x₀) + zy(x₀, y₀)(y − y₀) − (z − z(x₀, y₀)) = 0.

This plane passes through the origin (0, 0, 0), and hence we must have

−zx(x₀, y₀) x₀ − zy(x₀, y₀) y₀ + z(x₀, y₀) = 0.    (3)

For equation (3) to hold for all (x₀, y₀) in the domain of z, z must satisfy

x zx + y zy − z = 0,

which is a first-order PDE.
EXAMPLE 2. The set of all spheres with centers on the z-axis is characterized by the first-order PDE yp − xq = 0.

The equation

x² + y² + (z − c)² = r²,    (4)

where r and c are arbitrary constants, represents the set of all spheres whose centers lie on the z-axis. Differentiating (4) with respect to x, we obtain

2( x + (z − c) ∂z/∂x ) = 2( x + (z − c) p ) = 0.    (5)


Differentiating (4) with respect to y, we have

y + (z − c) q = 0.    (6)

Eliminating the arbitrary constant c from (5) and (6), we obtain the first-order PDE

yp − xq = 0.    (7)

Equation (4), in some sense, characterizes the first-order PDE (7).


EXAMPLE 3. The surfaces described by an equation of the form

z = f(x² + y²),    (8)

where f is an arbitrary function, are also described by a first-order PDE.

Writing u = x² + y² and differentiating (8) with respect to x and y, it follows that

p = 2x f′(u),   q = 2y f′(u),

where f′(u) = df/du. Eliminating f′(u) from the above two equations, we obtain the same first-order PDE as in (7).


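The elimination in Example 3 can be confirmed symbolically: any surface z = f(x² + y²), for arbitrary differentiable f, satisfies yp − xq = 0. A sympy check (not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')          # arbitrary C^1 function

z = f(x**2 + y**2)            # the family of surfaces (8)

# PDE (7): y*z_x - x*z_y should vanish identically
residual = y*sp.diff(z, x) - x*sp.diff(z, y)
assert sp.simplify(residual) == 0
```

Because f is left as an undetermined function, the vanishing residual shows that (7) holds for the entire family at once, which is exactly what "eliminating the arbitrary function" means.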
REMARK 4. The function z described by each of the equations (4) and (8) is, in some sense, a solution to the PDE (7). Observe that, in Example 2, PDE (7) is formed by eliminating arbitrary constants from (4), whereas in Example 3, PDE (7) is formed by eliminating an arbitrary function.

Formation of first-order PDEs

The application of conservation principles often yields a first-order PDE. We have seen in the previous two examples that a first-order PDE can be formed either by eliminating the arbitrary constants or the arbitrary function involved. Below, we generalize the arguments of Example 2 and Example 3 to show how a first-order PDE can be formed.

Method I (Eliminating arbitrary constants): Consider a two-parameter family of surfaces described by the equation

F(x, y, z, a, b) = 0,    (9)

where a and b are arbitrary constants. Equation (9) may be thought of as a generalization of the relation (4).


Differentiating (9) with respect to x and y, we obtain

∂F/∂x + p ∂F/∂z = 0,    (10)
∂F/∂y + q ∂F/∂z = 0.    (11)

Eliminating the constants a, b from equations (9), (10) and (11), we obtain a first-order PDE of the form

f(x, y, z, p, q) = 0.    (12)

This shows that a family of surfaces described by the relation (9) gives rise to a first-order PDE (12).
Method II (Eliminating an arbitrary function): Now consider the generalization of Example 3. Let u(x, y, z) = c₁ and v(x, y, z) = c₂ be two known functions of x, y and z satisfying a relation of the form

F(u, v) = 0,    (13)

where F is an arbitrary function of u and v. Differentiating (13) with respect to x and y leads to the equations

Fu (ux + uz p) + Fv (vx + vz p) = 0,
Fu (uy + uz q) + Fv (vy + vz q) = 0.

Eliminating Fu and Fv from the above two equations, we obtain

p ∂(u, v)/∂(y, z) + q ∂(u, v)/∂(z, x) = ∂(u, v)/∂(x, y),    (14)

which is a first-order PDE of the form f(x, y, z, p, q) = 0. Here, ∂(u, v)/∂(x, y) = ux vy − uy vx.

Classification of first-order PDEs

We classify equation (1) depending on the special form of the function f. If (1) is of the form

a(x, y) ∂z/∂x + b(x, y) ∂z/∂y + c(x, y) z = d(x, y),

then it is called a linear first-order PDE. Note that the function f is linear in ∂z/∂x, ∂z/∂y and z, with all coefficients depending on the independent variables x and y only.

If (1) has the form

a(x, y) ∂z/∂x + b(x, y) ∂z/∂y = c(x, y, z),

then it is called semilinear, because it is linear in the leading (highest-order) terms ∂z/∂x and ∂z/∂y. However, it need not be linear in z. Note that the coefficients of ∂z/∂x and ∂z/∂y are functions of the independent variables only.


If (1) has the form

a(x, y, z) ∂z/∂x + b(x, y, z) ∂z/∂y = c(x, y, z),

then it is called a quasi-linear PDE. Here the function f is linear in the derivatives ∂z/∂x and ∂z/∂y, with the coefficients a, b and c depending on the independent variables x and y as well as on the unknown z. Note that linear and semilinear equations are special cases of quasi-linear equations.

Any equation that does not fit into one of these forms is called nonlinear.

EXAMPLE 5.
1. x zx + y zy = z (linear)
2. x zx + y zy = z² (semilinear)
3. zx + (x + y) zy = xy (linear)
4. z zx + zy = 0 (quasi-linear)
5. x zx² + y zy² = 2 (nonlinear)

Cauchy's problem or IVP for first-order PDEs

Recall the initial value problem for a first-order ODE, which asks for a solution of the equation that takes a given value at a given point of R. The IVP for a first-order PDE asks for a solution of (2) which has given values on a curve in R². The conditions to be satisfied in the case of the IVP for a first-order PDE are formulated in the classic problem of Cauchy, which may be stated as follows:

Let C be a given curve in R² described parametrically by the equations

x = x₀(s),   y = y₀(s);   s ∈ I,    (15)

where x₀(s), y₀(s) are in C¹(I). Let z₀(s) be a given function in C¹(I). The IVP or Cauchy's problem for the first-order PDE

f(x, y, z, p, q) = 0    (16)

is to find a function u = u(x, y) with the following properties:


• u(x, y) and its partial derivatives with respect to x and y are continuous in a region Ω of R² containing the curve C.

• u = u(x, y) is a solution of (16) in Ω, i.e.,

f(x, y, u(x, y), ux(x, y), uy(x, y)) = 0 in Ω.

• On the curve C,

u(x₀(s), y₀(s)) = z₀(s),   s ∈ I.    (17)

The curve C is called the initial curve of the problem and the function z₀(s) is called the initial data. Equation (17) is called the initial condition of the problem.

NOTE: Geometrically, Cauchy's problem may be interpreted as follows: find a solution surface u = u(x, y) of (16) which passes through the curve C whose parametric equations are

x = x₀(s),   y = y₀(s),   z = z₀(s),    (18)

and at every point of which the direction (p, q, −1) of the normal is such that f(x, y, z, p, q) = 0.
The proof of existence of a solution of (16) passing through a curve with equations (18) requires further assumptions on the function f and the nature of the curve C. We now state the classic theorem due to Kowalewski (cf. [10]).

THEOREM 6. (Kowalewski) If g(y) and all its derivatives are continuous for |y − y₀| < δ, if x₀ is a given number, z₀ = g(y₀), q₀ = g′(y₀), and if f(x, y, z, q) and all its partial derivatives are continuous in a region S defined by

|x − x₀| < δ,   |y − y₀| < δ,   |q − q₀| < δ,

then there exists a unique function φ(x, y) such that:

(a) φ(x, y) and all its partial derivatives are continuous in a region Ω: |x − x₀| < δ₁, |y − y₀| < δ₂;

(b) for all (x, y) in Ω, z = φ(x, y) is a solution of the equation

∂z/∂x = f(x, y, z, ∂z/∂y);

(c) for all values of y in the interval |y − y₀| < δ₂, φ(x₀, y) = g(y).
We conclude this lecture by introducing different kinds of solutions of a first-order PDE.


DEFINITION 7. (A complete solution or a complete integral) Any relation of the form

F(x, y, z, a, b) = 0    (19)

which contains two arbitrary constants a and b and is a solution of a first-order PDE is called a complete solution or a complete integral of that first-order PDE.
DEFINITION 8. (A general solution or a general integral) Any relation of the form

F(u, v) = 0

involving an arbitrary function F connecting two known functions u(x, y, z) and v(x, y, z) and providing a solution of a first-order PDE is called a general solution or a general integral of that first-order PDE.

It is possible to derive a general integral of the PDE once a complete integral is known. With b = φ(a), if we take any one-parameter subsystem

F(x, y, z, a, φ(a)) = 0

of the system (19) and form its envelope, we obtain a solution of equation (16). When φ(a) is arbitrary, the solution obtained is called the general integral of (16) corresponding to the complete integral (19). When a definite φ(a) is used, we obtain a particular solution.

DEFINITION 9. (A singular integral) The envelope of the two-parameter system (19) is also a solution of equation (16). It is called the singular integral or singular solution of the equation.
NOTE: The general solution of an equation of type (1) can be obtained by solving systems of ODEs. This is not true for higher-order equations or for systems of first-order equations.

Practice Problems

1. Classify whether the following PDEs are linear, quasi-linear or nonlinear:
(a) z zx − 2xy zy = 0; (b) zx² + z zy = 2; (c) zx + 2zy = 5z; (d) x zx + y zy = z².

2. Eliminate the arbitrary constants a and b from the following equations to form a PDE:
(a) ax² + by² + z² = 1; (b) z = (x² + a)(y² + b).


3. Show that z = f(xy), where f is an arbitrary differentiable function, satisfies

x zx − y zy = 0,

and hence verify that the functions sin(xy), cos(xy), log(xy) and e^{xy} are solutions.

4. Eliminate the arbitrary function f from the following and form a PDE:
(a) z = x + y + f(xy); (b) z = f(xy/z).


Lecture 2

Linear First-Order PDEs

The most general first-order linear PDE has the form

a(x, y) zx + b(x, y) zy + c(x, y) z = d(x, y),    (1)

where a, b, c, and d are given functions of x and y. These functions are assumed to be continuously differentiable. Rewriting (1) as

a(x, y) zx + b(x, y) zy = −c(x, y) z + d(x, y),    (2)

we observe that the left-hand side of (2), i.e.,

a(x, y) zx + b(x, y) zy = ∇z · (a, b),

is (essentially) a directional derivative of z(x, y) in the direction of the vector (a, b), wherever (a, b) is defined and nonzero. When a and b are constants, the vector (a, b) has a fixed direction and magnitude, but now the vector can change as its base point (x, y) varies. Thus, (a, b) is a vector field on the plane.

The equations

dx/dt = a(x, y),   dy/dt = b(x, y)    (3)

determine a family of curves x = x(t), y = y(t) whose tangent vector (dx/dt, dy/dt) coincides with the direction of the vector (a, b). Therefore, the derivative of z(x, y) along these curves becomes

dz/dt = d/dt {z(x(t), y(t))} = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt)
      = zx(x(t), y(t)) a(x(t), y(t)) + zy(x(t), y(t)) b(x(t), y(t))
      = −c(x(t), y(t)) z(x(t), y(t)) + d(x(t), y(t))
      = −c(t) z(t) + d(t),

where we have used the chain rule and (1). Thus, along these curves, z(t) = z(x(t), y(t)) satisfies the ODE

z′(t) + c(t) z(t) = d(t).    (4)

Let μ(t) = exp( ∫₀ᵗ c(τ) dτ ) be an integrating factor for (4). Then the solution is given by

z(t) = (1/μ(t)) [ ∫₀ᵗ μ(τ) d(τ) dτ + z(0) ].    (5)


The approach described above, solving (1) by using the solutions of (3)-(4), is called the method of characteristics. It is based on the geometric interpretation of the partial differential equation (1).

NOTE: (i) The ODEs (3) are known as the characteristic equations for the PDE (1). The solution curves of the characteristic equations are the characteristic curves for (1).

(ii) Observe that μ(t) and d(t) depend only on the values of c(x, y) and d(x, y) along the characteristic curve x = x(t), y = y(t). Thus, equation (5) shows that the values z(t) of the solution z along the entire characteristic curve are completely determined once the value z(0) = z(x(0), y(0)) is prescribed.

(iii) Assuming certain smoothness conditions on the functions a, b, c, and d, the existence and uniqueness theory for ODEs guarantees that a unique solution curve (x(t), y(t), z(t)) of (3)-(4) (i.e., a characteristic curve) passes through a given point (x₀, y₀, z₀) in (x, y, z)-space.

The method of characteristics for solving a linear first-order IVP

In practice, we are not interested in determining a general solution of the partial differential equation (1) but rather a specific solution z = z(x, y) that passes through, or contains, a given curve C. This problem is known as the initial value problem for (1). The method of characteristics for solving the initial value problem for (1) proceeds as follows.

Let the initial curve C be given parametrically as

x = x(s),   y = y(s),   z = z(s)    (6)

for a given range of values of the parameter s. The curve may be of finite or infinite extent and is required to have a continuous tangent vector at each point.

Every value of s fixes a point on C through which a unique characteristic curve passes (see Fig. 2.1). The family of characteristic curves determined by the points of C may be parameterized as

x = x(s, t),   y = y(s, t),   z = z(s, t),

with t = 0 corresponding to the initial curve C. That is, we have

x(s, 0) = x(s),   y(s, 0) = y(s),   z(s, 0) = z(s).

In other words, we have the following:


Figure 2.1: Characteristic curves and construction of the integral surface

The functions x(s, t) and y(s, t) are the solutions of the characteristic system (for each fixed s)

d/dt x(s, t) = a(x(s, t), y(s, t)),   d/dt y(s, t) = b(x(s, t), y(s, t))    (7)

with given initial values x(s, 0) and y(s, 0).

Suppose that

z(x(s, 0), y(s, 0)) = g(s),    (8)

where g(s) is a given function. We obtain z(x(s, t), y(s, t)) as follows. Let

z(s, t) = z(x(s, t), y(s, t)),   c(s, t) = c(x(s, t), y(s, t)),   d(s, t) = d(x(s, t), y(s, t))    (9)

and

μ(s, t) = exp[ ∫₀ᵗ c(s, τ) dτ ].    (10)

Analogous to formula (5), for each fixed s, we obtain

z(s, t) = (1/μ(s, t)) [ ∫₀ᵗ μ(s, τ) d(s, τ) dτ + g(s) ].    (11)

Here z(s, t) is the value of z at the point (x(s, t), y(s, t)). Thus, as s and t vary, the point (x, y, z) in xyz-space given by

x = x(s, t),   y = y(s, t),   z = z(s, t)    (12)


traces out the surface of the graph of the solution z of the PDE (1) which meets the initial curve (6). The equations (12) constitute the parametric form of the solution of (1) satisfying the initial condition (8) [i.e., a surface in (x, y, z)-space that contains the initial curve].

NOTE: If the Jacobian J(s, t) = xs yt − xt ys ≠ 0, then the equations x = x(s, t) and y = y(s, t) can be inverted to give s and t as (smooth) functions of x and y, i.e., s = s(x, y) and t = t(x, y). The resulting function z = z(x, y) = z(s(x, y), t(x, y)) satisfies the PDE (1) in a neighborhood of the curve C (in view of (4) and the initial condition (6)) and is the unique solution of the IVP.
EXAMPLE 1. Determine the solution of the following IVP:

∂z/∂y + c ∂z/∂x = 0,   z(x, 0) = f(x),

where f(x) is a given function and c is a constant.

Solution. A step-by-step procedure for finding the solution is given below.

Step 1. (Finding characteristic curves)
To apply the method of characteristics, parameterize the initial curve C as follows:

x = s,   y = 0,   z = f(s).    (13)

The family of characteristic curves (x(s, t), y(s, t)) is determined by solving the ODEs

d/dt x(s, t) = c,   d/dt y(s, t) = 1.

The solution of the system is

x(s, t) = ct + c₁(s)  and  y(s, t) = t + c₂(s).

Step 2. (Applying the IC)
Using the initial conditions

x(s, 0) = s,   y(s, 0) = 0,

we find that

c₁(s) = s,   c₂(s) = 0,

and hence

x(s, t) = ct + s  and  y(s, t) = t.


Step 3. (Writing the parametric form of the solution)
Comparing with (1), we have c(x, y) = 0 and d(x, y) = 0. Therefore, using (10) and (11), we find that

d(s, t) = 0,   μ(s, t) = 1.

Since z(x(s, 0), y(s, 0)) = z(s, 0) = g(s) = f(s), we obtain z(s, t) = f(s). Thus, the parametric form of the solution of the problem is given by

x(s, t) = ct + s,   y(s, t) = t,   z(s, t) = f(s).

Step 4. (Expressing z(s, t) in terms of z(x, y))
Expressing s and t as s = s(x, y) and t = t(x, y), we have

s = x − cy,   t = y.

We now write the solution in the explicit form

z(x, y) = z(s(x, y), t(x, y)) = f(x − cy).

Clearly, if f(x) is differentiable, the solution z(x, y) = f(x − cy) satisfies the given PDE as well as the initial condition.
NOTE: Example 1 describes unidirectional wave motion with velocity c. If we consider the initial function z(x, 0) = f(x) to represent a waveform, the solution z(x, y) = f(x − cy) shows that a point x for which x − cy = constant will always occupy the same position on the waveform. If c > 0, the entire initial waveform f(x) moves to the right without changing its shape with speed c (if c < 0, the direction of motion is reversed).
EXAMPLE 2. Find the parametric form of the solution of the problem

−y zx + x zy = 0

with the condition

z(s, s²) = s³   (s > 0).

Solution. To find the solution, let us proceed as follows.

Step 1. (Finding characteristic curves)
The family of characteristic curves (x(s, t), y(s, t)) is determined by solving

d/dt x(s, t) = −y(s, t),   d/dt y(s, t) = x(s, t)

with initial conditions

x(s, 0) = s,   y(s, 0) = s².


The general solution of the system is

x(s, t) = c₁(s) cos t + c₂(s) sin t  and  y(s, t) = c₁(s) sin t − c₂(s) cos t.

Step 2. (Applying the IC)
Using the ICs, we find that

c₁(s) = s,   c₂(s) = −s²,

and hence

x(s, t) = s cos t − s² sin t  and  y(s, t) = s sin t + s² cos t.

Step 3. (Writing the parametric form of the solution)
Comparing with (1), we note that c(x, y) = 0 and d(x, y) = 0. Therefore, using (10) and (11), it follows that

d(s, t) = 0,   μ(s, t) = 1.

In view of the given condition and z = z(s, t), we obtain

z(x(s, 0), y(s, 0)) = z(s, s²) = g(s) = s³,  so that  z(s, t) = s³.

Thus, the parametric form of the solution of the problem is given by

x(s, t) = s cos t − s² sin t,   y(s, t) = s sin t + s² cos t,   z(s, t) = s³.

Step 4. (Expressing z(s, t) in terms of z(x, y))
Writing s and t as functions of x and y, it is an easy exercise to show that

z(x, y) = (1/√8) [ −1 + √(1 + 4(x² + y²)) ]^{3/2}.

Practice Problems

1. Find the general solution of the following PDEs in the indicated domain.
(A) x zx + 2y zy = 0, for x > 0, y > 0
(B) y zx − 4x zy = 2xy, for all (x, y)
(C) x zx − xy zy = z, for all (x, y)

2. Find a particular solution of the following PDEs satisfying the given side conditions.
(A) x zx + 2y zy = 0, z(x, 1/x) = x, for x > 0, y > 0
(B) x zx − xy zy = z, z(x, x) = x² eˣ, for all (x, y)

3. Find the parametric form of the solutions of the PDEs.
(A) x zx − xy zy = z, for all (x, y), z(s², s) = s³
(B) (y + x) zx + (y − x) zy = z, z(cos s, sin s) = 1, for 0 ≤ s ≤ 2π

4. Show that the problem y zx + x zy = 0, z(x, 0) = 3x has no solution.


Lecture 3

Quasilinear First-Order PDEs

A first-order quasilinear PDE is of the form

a(x, y, z) ∂z/∂x + b(x, y, z) ∂z/∂y = c(x, y, z).    (1)

Such equations occur in a variety of nonlinear wave propagation problems. Let us assume that an integral surface z = z(x, y) of (1) can be found. Write this integral surface in implicit form as

F(x, y, z) = z(x, y) − z = 0.

Note that the gradient vector ∇F = (zx, zy, −1) is normal to the integral surface F(x, y, z) = 0. Equation (1) may be written as

a zx + b zy − c = (a, b, c) · (zx, zy, −1) = 0.    (2)

This shows that the vector (a, b, c) and the gradient vector ∇F are orthogonal. In other words, the vector (a, b, c) lies in the tangent plane of the integral surface z = z(x, y) at each point in (x, y, z)-space where ∇F ≠ 0.
At each point (x, y, z), the vector (a, b, c) determines a direction in (x, y, z)-space, which is called the characteristic direction. We can construct a family of curves that have the characteristic direction at each point. If the parametric form of these curves is

x = x(t),   y = y(t),   z = z(t),    (3)

then we must have

dx/dt = a(x(t), y(t), z(t)),   dy/dt = b(x(t), y(t), z(t)),   dz/dt = c(x(t), y(t), z(t)),    (4)

because (dx/dt, dy/dt, dz/dt) is the tangent vector along the curves. The solutions of (4) are called the characteristic curves of the quasilinear equation (1).

We assume that a(x, y, z), b(x, y, z), and c(x, y, z) are sufficiently smooth and do not all vanish at the same point. Then, the theory of ordinary differential equations ensures that a unique characteristic curve passes through each point (x₀, y₀, z₀). The IVP for (1) requires that z(x, y) be specified on a given curve in (x, y)-space, which determines a curve C in (x, y, z)-space referred to as the initial curve. To solve this IVP, we pass a characteristic curve through each point of the initial curve C. These curves generate a surface known as an integral surface, and this integral surface is the solution of the IVP.


REMARK 1. (i) The characteristic equations (4) for x and y are not, in general, uncoupled from the equation for z, and hence differ from those in the linear case (cf. Eq. (3) of Lecture 2).

(ii) The characteristic equations (4) can be expressed in the nonparametric form

dx/a = dy/b = dz/c.    (5)

Below, we shall describe a method for finding the general solution of (1). This method is due to Lagrange; hence it is usually referred to as the method of characteristics or the method of Lagrange.

The method of characteristics

The method of solution of a quasi-linear PDE is stated in the following result.

THEOREM 2. The general solution of the quasi-linear PDE (1) is

F(u, v) = 0,    (6)

where F is an arbitrary function and u(x, y, z) = c₁ and v(x, y, z) = c₂ form a solution of the equations

dx/a = dy/b = dz/c.    (7)

Proof. If u(x, y, z) = c₁ and v(x, y, z) = c₂ satisfy the equations (7), then the equations

ux dx + uy dy + uz dz = 0,   vx dx + vy dy + vz dz = 0

are compatible with (7). Thus, we must have

a ux + b uy + c uz = 0,   a vx + b vy + c vz = 0.

Solving these equations for a, b and c, we obtain

a / (∂(u, v)/∂(y, z)) = b / (∂(u, v)/∂(z, x)) = c / (∂(u, v)/∂(x, y)).    (8)

Differentiating F(u, v) = 0 with respect to x and y, respectively, we have

(∂F/∂u){∂u/∂x + (∂u/∂z)(∂z/∂x)} + (∂F/∂v){∂v/∂x + (∂v/∂z)(∂z/∂x)} = 0,
(∂F/∂u){∂u/∂y + (∂u/∂z)(∂z/∂y)} + (∂F/∂v){∂v/∂y + (∂v/∂z)(∂z/∂y)} = 0.

Eliminating ∂F/∂u and ∂F/∂v from these equations, we obtain

(∂z/∂x) ∂(u, v)/∂(y, z) + (∂z/∂y) ∂(u, v)/∂(z, x) = ∂(u, v)/∂(x, y).    (9)

In view of (8), equation (9) yields

a ∂z/∂x + b ∂z/∂y = c.

Thus, we find that F(u, v) = 0 is a solution of equation (1). This completes the proof.

REMARK 3. • All integral surfaces of the equation (1) are generated by the integral curves of the equations (4).

• All surfaces generated by integral curves of the equations (4) are integral surfaces of the equation (1).
EXAMPLE 4. Find the general integral of x zx + y zy = z.

Solution. The associated system of equations is

dx/x = dy/y = dz/z.

From the first two relations we have

dx/x = dy/y ⟹ ln x = ln y + ln c₁ ⟹ x/y = c₁.

Similarly,

dz/z = dy/y ⟹ z/y = c₂.

Take u₁ = x/y and u₂ = z/y. The general integral is given by

F(x/y, z/y) = 0.

EXAMPLE 5. Find the general integral of the equation

z(x + y) zx + z(x − y) zy = x² + y².

Solution. The characteristic equations are

dx/(z(x + y)) = dy/(z(x − y)) = dz/(x² + y²).

Each of these ratios is equivalent to

(y dx + x dy − z dz)/0 = (x dx − y dy − z dz)/0.


Consequently, we have

d{xy − z^2/2} = 0 and d{(x^2 − y^2 − z^2)/2} = 0.

Integrating, we obtain two integrals

2xy − z^2 = c_1 and x^2 − y^2 − z^2 = c_2,

where c_1 and c_2 are arbitrary constants. Thus, the general solution is

F(2xy − z^2, x^2 − y^2 − z^2) = 0,

where F is an arbitrary function.
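Both integrals u_1 = 2xy − z^2 and u_2 = x^2 − y^2 − z^2 can be checked to be constant along the characteristic direction (a, b, c). A short sympy verification (added here as an illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b, c = z*(x + y), z*(x - y), x**2 + y**2  # coefficients of the PDE

# Each integral u satisfies a*u_x + b*u_y + c*u_z = 0,
# i.e. u is constant along the characteristic curves.
for u in (2*x*y - z**2, x**2 - y**2 - z**2):
    directional = a*sp.diff(u, x) + b*sp.diff(u, y) + c*sp.diff(u, z)
    assert sp.simplify(directional) == 0
print('both integrals verified')
```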
Next, we shall discuss a method for solving a Cauchy problem for the first-order quasi-linear PDE (1). The following theorem gives conditions under which a unique solution of
the initial value problem for (1) can be obtained.
THEOREM 6. Let a(x, y, z), b(x, y, z) and c(x, y, z) in (1) have continuous partial derivatives with respect to the x, y and z variables. Let the initial curve C be described parametrically
as

x = x(s),  y = y(s),  z = z(s) = z(x(s), y(s)).

Suppose the initial curve C has a continuous tangent vector and

J(s) = (dx/ds) b[x(s), y(s), z(s)] − (dy/ds) a[x(s), y(s), z(s)] ≠ 0   (10)

on C. Then there exists a unique solution z = z(x, y), defined in some neighborhood of
the initial curve C, which satisfies (1) and the initial condition z(x(s), y(s)) = z(s).
Proof. The characteristic system (4) with initial conditions at t = 0 given as x =
x(s), y = y(s), and z = z(s) has a unique solution of the form
x = x(s, t), y = y(s, t), z = z(s, t),
with continuous derivatives in s and t, and
x(s, 0) = x(s), y(s, 0) = y(s), z(s, 0) = z(s).
This follows from the existence and uniqueness theory for ODEs. The Jacobian of the
transformation x = x(s, t), y = y(s, t) at t = 0 is

J(s) = J(s, t)|_{t=0} = det[ ∂x/∂s  ∂x/∂t ; ∂y/∂s  ∂y/∂t ]_{t=0} = (dx/ds) b − (dy/ds) a ≠ 0,   (11)

MODULE 2: FIRST-ORDER PARTIAL DIFFERENTIAL EQUATIONS

20

in view of (10). By the continuity assumption, the Jacobian J ≠ 0 in a neighborhood
of the initial curve. Thus, by the implicit function theorem, we can solve for s and t as
functions of x and y near the initial curve. Then

Z(x, y) = z(s(x, y), t(x, y))

is a solution of (1), which can be easily seen as

c = dz/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt) = a ∂z/∂x + b ∂z/∂y,
where we have used (4). The uniqueness of the solution follows from the fact that any two
integral surfaces that contain the same initial curve must coincide along all the characteristic curves passing through the initial curve. This is a consequence of the uniqueness
theorem for the IVP for (4). This completes our proof. □

EXAMPLE 7. Consider the IVP:

∂z/∂y + z ∂z/∂x = 0,
z(x, 0) = f(x),

where f(x) is a given smooth function.
Solution. We solve this problem using the following steps.
Step 1. (Finding characteristic curves)
To solve the IVP, we parameterize the initial curve as
x = s,

y = 0, z = f (s).

The characteristic equations are


dx
= z,
dt

dy
= 1,
dt

dz
= 0.
dt

Let the solutions be denoted as x(s, t), t(s, t), and z(s, t). We immediately nd that
x(s, t) = zt + c1 (s),

y(s, t) = t + c2 (s), z(s, t) = c3 (s),

where ci , i = 1, 2, 3 are constants to be determined using IC.


Step 2. (Applying IC) The initial conditions at s = 0 are given by
x(s, 0) = s,

y(s, 0) = 0, z(s, 0) = f (s).


Using these conditions, we obtain

x(s, t) = zt + s,  y(s, t) = t,  z(s, t) = f(s).

Step 3. (Writing the parametric form of the solution)
The solutions are thus given by

x(s, t) = zt + s = f(s)t + s,  y(s, t) = t,  z(s, t) = f(s).

Step 4. (Expressing z(s, t) in terms of z(x, y)) Applying the condition (10), we find that
J(s) = 1 ≠ 0 along the entire initial curve. We can immediately solve for s(x, y) and
t(x, y) to obtain

s(x, y) = x − tf(s),  t(x, y) = y.

Since t = y and s = x − tf(s) = x − yz, the solution can also be given in implicit form as

z = f(x − yz).
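As a concrete check of the implicit formula, take the particular choice f(s) = s (our choice, not in the original text): then z = x − yz can be solved explicitly as z = x/(1 + y), which indeed satisfies the IVP:

```python
import sympy as sp

x, y = sp.symbols('x y')

# With f(s) = s, the implicit relation z = f(x - y z) = x - y z gives z = x/(1 + y)
z = x / (1 + y)

assert sp.simplify(sp.diff(z, y) + z*sp.diff(z, x)) == 0  # PDE: z_y + z z_x = 0
assert z.subs(y, 0) == x                                   # IC:  z(x, 0) = f(x) = x
print('implicit solution verified for f(s) = s')
```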
EXAMPLE 8. Solve the following quasi-linear PDE:

z z_x + y z_y = x,  (x, y) ∈ R^2,

subject to the initial condition

z(x, 1) = 2x,  x ∈ R.

Solution. Here a(x, y, z) = z, b(x, y, z) = y, c(x, y, z) = x. The characteristic
equations are

dx/dt = z,  x(s, 0) = s,
dy/dt = y,  y(s, 0) = 1,
dz/dt = x,  z(s, 0) = 2s.

On solving the above ODEs, we obtain

x(s, t) = (s/2)(3e^t − e^{−t}),  y(s, t) = e^t,  z(s, t) = (s/2)(3e^t + e^{−t}).

Solving for (s, t) in terms of (x, y), we obtain

s(x, y) = 2xy/(3y^2 − 1),  t(x, y) = ln(y),

z(x, y) = z(s(x, y), t(x, y)) = (3y^2 + 1)x / (3y^2 − 1).


Note that the characteristic variables imply that y must be positive (y = e^t). In fact, the
solution z is valid only for 3y^2 − 1 > 0, i.e., for y > 1/√3 > 0. Observe that the change of
variables is valid only where

det[ x_s(s, t)  x_t(s, t) ; y_s(s, t)  y_t(s, t) ] ≠ 0.

It is easy to verify that this condition leads to y ≠ 1/√3.
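The closed-form solution can be checked directly against the PDE and the initial data, for instance with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
z = (3*y**2 + 1)*x/(3*y**2 - 1)

# PDE: z z_x + y z_y = x
assert sp.simplify(z*sp.diff(z, x) + y*sp.diff(z, y) - x) == 0
# Initial condition: z(x, 1) = 2x
assert sp.simplify(z.subs(y, 1) - 2*x) == 0
print('solution of Example 8 verified')
```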

Practice Problems
1. Find a solution of the PDE z_x + z z_y = 6x satisfying the condition z(0, y) = 3y.
2. Find the general integral of the PDE
(2xy − 1) z_x + (z − 2x^2) z_y = 2(x − yz)
and also the particular integral which passes through the line x = 1, y = 0.
3. Solve z_x + z z_y = 2x, z(0, y) = f(y).
4. Find the solution of the equation z_x + z z_y = 1 with the data
x(s, 0) = 2s,  y(s, 0) = s^2,  z(0, s^2) = s.
5. Find the characteristics of the equation z_x − z_y = z, and determine the integral surface
which passes through the parabola x = 0, y^2 = z.

Lecture 4

Nonlinear First-Order PDEs

The general nonlinear first-order PDE is written in the form

F(x, y, z, z_x, z_y) = 0,   (1)

where F is not linear in z_x and z_y. Setting z_x = p and z_y = q, rewrite (1) as

F(x, y, z, p, q) = 0.   (2)

The method of characteristics for nonlinear PDEs

Recall the method of characteristics for solving the first-order linear PDE:

F(x, y, z, p, q) = a(x, y)p + b(x, y)q + c(x, y)z − d(x, y) = 0.

In this method, the PDE becomes an ODE along the characteristic curves, which may
be regarded as the solutions of the system

x'(t) = a(x(t), y(t)) and y'(t) = b(x(t), y(t)).   (3)

Note that F_p = a(x, y) and F_q = b(x, y). Hence, (3) may be written as

x'(t) = F_p and y'(t) = F_q.   (4)

For solving the first-order nonlinear PDE (1), the relation (4) motivates us to define characteristic curves as solutions of the system

x'(t) = F_p(x(t), y(t), z(t), p(t), q(t)) and y'(t) = F_q(x(t), y(t), z(t), p(t), q(t)),   (5)

where z(t) = z(x(t), y(t)), p(t) = z_x(x(t), y(t)), q(t) = z_y(x(t), y(t)). However, unlike the
linear case, the right sides of (5) depend not only on x(t) and y(t), but also on z(t), p(t)
and q(t). Thus, we expect a system of five ODEs for the five unknowns x(t), y(t),
z(t), p(t) and q(t). For the remaining three equations, notice that
z'(t) = d/dt {z(x(t), y(t))}
      = z_x x'(t) + z_y y'(t)
      = p(t) x'(t) + q(t) y'(t)
      = p(t) F_p(x(t), y(t), z(t), p(t), q(t)) + q(t) F_q(x(t), y(t), z(t), p(t), q(t)).   (6)


Along a characteristic, p is a function of t. The equation for p'(t) is obtained as follows:

p'(t) = d/dt {z_x(x(t), y(t))}
      = z_xx x'(t) + z_xy y'(t)
      = z_xx F_p(x(t), y(t), z(t), p(t), q(t)) + z_xy F_q(x(t), y(t), z(t), p(t), q(t)).   (7)

Using the fact that z(x, y) should solve the PDE (1), we obtain

0 = d/dx {F(x, y, z(x, y), z_x(x, y), z_y(x, y))}
  = F_x + F_z z_x + F_p z_xx + F_q z_yx.

Therefore,

p'(t) = z_xx F_p + z_xy F_q = −(F_x + p F_z).   (8)

Similarly,

q'(t) = −[F_y + q F_z].   (9)

Similarly,

Thus, we have the following system of five ODEs:

x'(t) = F_p(x(t), y(t), z(t), p(t), q(t))
y'(t) = F_q(x(t), y(t), z(t), p(t), q(t))
z'(t) = p(t) F_p(x(t), y(t), z(t), p(t), q(t)) + q(t) F_q(x(t), y(t), z(t), p(t), q(t))   (10)
p'(t) = −{F_x(x(t), y(t), z(t), p(t), q(t)) + p(t) F_z(x(t), y(t), z(t), p(t), q(t))}
q'(t) = −{F_y(x(t), y(t), z(t), p(t), q(t)) + q(t) F_z(x(t), y(t), z(t), p(t), q(t))}

These equations constitute the characteristic system of the PDE (1) and are known as
the characteristic equations associated with the PDE (1).
NOTE: If the functions which appear in equations (10) satisfy a Lipschitz condition,
there is a unique solution of the equations for each prescribed set of initial values of the
variables. Therefore, the characteristic strip is uniquely determined by any initial element
(x(t_0), y(t_0), z(t_0), p(t_0), q(t_0)) at any initial point t_0 of t.
An important result about characteristic strips is given below.
THEOREM 1. The function F(x, y, z, p, q) is constant along every characteristic strip
of the equation F(x, y, z, p, q) = 0.


Proof. Along a characteristic strip, we have

d/dt {F(x(t), y(t), z(t), p(t), q(t))} = F_x x'(t) + F_y y'(t) + F_z z'(t) + F_p p'(t) + F_q q'(t)
= F_x F_p + F_y F_q + F_z (p F_p + q F_q) − F_p (F_x + p F_z) − F_q (F_y + q F_z)
= 0.

This implies F(x, y, z, p, q) = k, a constant, along the strip. □

Solving Cauchy's problem for nonlinear PDEs

The objective of this section is to solve the PDE

F(x, y, z, z_x, z_y) = 0

subject to an appropriate initial condition (i.e., z assumes prescribed values on some curve).
Let (f(s), g(s)) trace out a regular curve in the xy-plane as s varies. We regard this
curve as being an initial curve. We seek a solution z(x, y) of the following problem (known
as Cauchy's problem):

F(x, y, z, z_x, z_y) = 0,  z(f(s), g(s)) = G(s),   (11)

where G(s) is a continuously differentiable function. Such a problem may have no solution
(e.g., the PDE z_x^2 + z_y^2 + 1 = 0). However, if a solution exists in some neighborhood of the
initial curve, then such a solution can often be determined using the following steps (cf.
[1]).
Step 1: Find functions h(s) and k(s) (if possible) such that

F(f(s), g(s), G(s), h(s), k(s)) = 0,  G'(s) = h(s) f'(s) + k(s) g'(s), and
F_p(f(s), g(s), G(s), h(s), k(s)) g'(s) − F_q(f(s), g(s), G(s), h(s), k(s)) f'(s) ≠ 0.   (12)

Note that if h(s) and k(s) do not exist, then (11) has no solution. If there are several
choices for (h(s), k(s)), then a solution of (11) exists for each such choice.
Step 2: For each fixed s, solve the following characteristic system for x(s, t), y(s, t),
z(s, t), p(s, t), q(s, t) with the given initial conditions p(s, 0) = h(s), q(s, 0) = k(s), where
h(s) and k(s) are the functions found in Step 1.


d/dt x(s, t) = F_p(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))
d/dt y(s, t) = F_q(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))
d/dt z(s, t) = p(s, t) F_p(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))
             + q(s, t) F_q(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))   (13)
d/dt p(s, t) = −[F_x(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))
             + p(s, t) F_z(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))]
d/dt q(s, t) = −[F_y(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))
             + q(s, t) F_z(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t))]

Step 3: As s and t vary, the point (x, y, z), defined by

x = x(s, t),  y = y(s, t),  z = z(s, t),   (14)

traces out the graph of a solution z of (11) in xyz-space, in a neighborhood of the
curve traced out by (f(s), g(s), G(s)). In some cases, one can use the first two equations
in (14) to solve for s and t in terms of x and y (say, s = s(x, y) and t = t(x, y)) to obtain a
solution z(x, y) = z(s(x, y), t(x, y)) for (x, y) in a neighborhood of the curve (f(s), g(s)).
To illustrate the above steps, let us consider the following example.
EXAMPLE 2. Solve the PDE z_x z_y − z = 0 subject to the condition z(s, −s) = 1.
Solution. Here, we have

F(x, y, z, p, q) = pq − z.

The characteristic system (13) takes the form

dx/dt = F_p = q(t),  dy/dt = F_q = p(t),  dz/dt = p F_p + q F_q = 2 p(t) q(t),
dp/dt = −[F_x + p(t) F_z] = p(t),  dq/dt = −[F_y + q(t) F_z] = q(t).

Note that

dp/dt = p(t) =⇒ p(t) = c e^t  and  dq/dt = q(t) =⇒ q(t) = d e^t,

where c and d are arbitrary constants. Since we are looking for a characteristic strip (i.e.,
F(x, y, z, p, q) = 0), we set z(t) = p(t) q(t) = cd e^{2t}. The equations for the characteristic
strip are:

x(t) = d e^t + d_1,  y(t) = c e^t + c_1,  z(t) = cd e^{2t},  p(t) = c e^t,  q(t) = d e^t,


where c_1 and d_1 are constants.
The initial condition z(s, −s) = 1 is given on the line y = −x traced out by (s, −s);
in the notation of (11), we have f(s) = s and g(s) = −s. We must find h(s) and k(s) such that

1 = G(s) = h(s) k(s),
0 = G'(s) = h(s) f'(s) + k(s) g'(s) = h(s) − k(s),
0 ≠ F_p(. . .) g'(s) − F_q(. . .) f'(s) = −k(s) − h(s).

Thus, we have two choices: h(s) = 1 and k(s) = 1, or h(s) = −1 and k(s) = −1. For the
choice h(s) = 1 and k(s) = 1, we obtain

x(s, t) = e^t − 1 + s,  y(s, t) = e^t − 1 − s,  z(s, t) = e^{2t},  p(s, t) = e^t,  q(s, t) = e^t.

From the first two equations, we obtain

e^t = (x + y + 2)/2.

Then the solution is

z(x, y) = e^{2t} = (x + y + 2)^2 / 4.

If we choose h(s) = −1 and k(s) = −1, the solution is given by

z(x, y) = (x + y − 2)^2 / 4.

Practice Problems
Solve the following Cauchy problems:
1. pq − z = 0,  z(x, x) = x
2. p + zq = 2x,  z(0, y) = f(y)
3. Find the solution of the equation p + zq = 1 with the data
x(s, 0) = 2s,  y(s, 0) = s^2,  z(0, s^2) = s.
4. Find the characteristics of the equation pq = z, and determine the integral surface
which passes through the parabola x = 0, y^2 = z.

Lecture 5

Compatible Systems and Charpit's Method

In this lecture, we shall study compatible systems of first-order PDEs and Charpit's
method for solving nonlinear PDEs. Let us begin with the following definition.
DEFINITION 1. (Compatible systems of first-order PDEs) A system of two first-order PDEs

f(x, y, z, p, q) = 0   (1)

and

g(x, y, z, p, q) = 0   (2)

is said to be compatible if they have a common solution.


THEOREM 2. The equations f(x, y, z, p, q) = 0 and g(x, y, z, p, q) = 0 are compatible on
a domain D if

(i) J = ∂(f, g)/∂(p, q) = det[ f_p  f_q ; g_p  g_q ] ≠ 0 on D;

(ii) p and q can be explicitly solved from (1) and (2) as p = φ(x, y, z) and q = ψ(x, y, z).
Further, the equation

dz = φ(x, y, z) dx + ψ(x, y, z) dy

is integrable.
THEOREM 3. A necessary and sufficient condition for the integrability of the equation
dz = φ(x, y, z) dx + ψ(x, y, z) dy is

[f, g] ≡ ∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) + p ∂(f, g)/∂(z, p) + q ∂(f, g)/∂(z, q) = 0.   (3)

In other words, the equations (1) and (2) are compatible iff (3) holds.
EXAMPLE 4. Show that the equations

xp − yq = 0,  z(xp + yq) = 2xy

are compatible and solve them.
Solution. Take f ≡ xp − yq = 0, g ≡ z(xp + yq) − 2xy = 0. Note that

f_x = p,  f_y = −q,  f_z = 0,  f_p = x,  f_q = −y

and

g_x = zp − 2y,  g_y = zq − 2x,  g_z = xp + yq,  g_p = zx,  g_q = zy.


Compute

J = ∂(f, g)/∂(p, q) = det[ f_p  f_q ; g_p  g_q ] = det[ x  −y ; zx  zy ] = zxy + zxy = 2zxy ≠ 0

for x ≠ 0, y ≠ 0, z ≠ 0. Further,

∂(f, g)/∂(x, p) = det[ f_x  f_p ; g_x  g_p ] = det[ p  x ; zp − 2y  zx ] = zxp − x(zp − 2y) = 2xy,

∂(f, g)/∂(y, q) = det[ f_y  f_q ; g_y  g_q ] = det[ −q  −y ; zq − 2x  zy ] = −qzy + y(zq − 2x) = −2xy,

∂(f, g)/∂(z, p) = det[ f_z  f_p ; g_z  g_p ] = det[ 0  x ; xp + yq  zx ] = −x(xp + yq) = −x^2 p − xyq,

∂(f, g)/∂(z, q) = det[ f_z  f_q ; g_z  g_q ] = det[ 0  −y ; xp + yq  zy ] = y(xp + yq) = y^2 q + xyp.

It is an easy exercise to verify that

[f, g] = ∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) + p ∂(f, g)/∂(z, p) + q ∂(f, g)/∂(z, q)
       = 2xy − 2xy + p(−x^2 p − xyq) + q(y^2 q + xyp)
       = y^2 q^2 − x^2 p^2
       = 0,

since xp = yq by the first equation. So the equations are compatible.
The next step is to determine p and q from the two equations xp − yq = 0 and z(xp + yq) = 2xy.
Using these two equations, we have

zxp + zyq − 2xy = 0 =⇒ xp + yq = 2xy/z
=⇒ 2xp = 2xy/z =⇒ p = y/z = φ(x, y, z),

and

xp − yq = 0 =⇒ q = xp/y = xy/(yz) = x/z = ψ(x, y, z).

Substituting p and q in dz = p dx + q dy, we get

z dz = y dx + x dy = d(xy),


and hence, integrating, we obtain

z^2 = 2xy + k,

where k is a constant.
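One can confirm that z = (2xy + k)^{1/2} solves both equations of the compatible system; for instance with sympy:

```python
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)
z = sp.sqrt(2*x*y + k)
p, q = sp.diff(z, x), sp.diff(z, y)

assert sp.simplify(x*p - y*q) == 0              # x p - y q = 0
assert sp.simplify(z*(x*p + y*q) - 2*x*y) == 0  # z(x p + y q) = 2xy
print('common solution verified')
```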
NOTE: For the compatibility of f(x, y, z, p, q) = 0 and g(x, y, z, p, q) = 0, it is not necessary that every solution of f(x, y, z, p, q) = 0 be a solution of g(x, y, z, p, q) = 0, or
vice versa, as is generally believed. For instance, the equations

f ≡ xp − yq − x = 0   (4)

g ≡ x^2 p + q − xz = 0   (5)

are compatible. They have common solutions z = x + c(1 + xy), where c is an arbitrary
constant. Note that z = x(y + 1) is a solution of (4) but not of (5).
Charpit's Method: This is a general method for finding the complete integral of a
nonlinear first-order PDE of the form

f(x, y, z, p, q) = 0.   (6)

Basic Idea: The basic idea of this method is to introduce another first-order partial differential equation

g(x, y, z, p, q, a) = 0,   (7)

which contains an arbitrary constant a and is such that

(i) Equations (6) and (7) can be solved for p and q to obtain

p = p(x, y, z, a),  q = q(x, y, z, a);

(ii) The equation

dz = p(x, y, z, a) dx + q(x, y, z, a) dy   (8)

is integrable.
When such a function g is found, the solution

F(x, y, z, a, b) = 0

of (8), containing two arbitrary constants a and b, will be a solution of (6).
Note: In Charpit's method, another PDE g is introduced so that the equations f and g are
compatible, and then the common solutions of f and g are determined. The equations (6)
and (7) are compatible if

[f, g] ≡ ∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) + p ∂(f, g)/∂(z, p) + q ∂(f, g)/∂(z, q) = 0.


Expanding it, we are led to the linear PDE

f_p ∂g/∂x + f_q ∂g/∂y + (p f_p + q f_q) ∂g/∂z − (f_x + p f_z) ∂g/∂p − (f_y + q f_z) ∂g/∂q = 0.   (9)

Now solve (9) to determine g by finding the integrals of the following auxiliary equations:

dx/f_p = dy/f_q = dz/(p f_p + q f_q) = dp/[−(f_x + p f_z)] = dq/[−(f_y + q f_z)].   (10)

These equations are known as Charpit's equations, and they are equivalent to the characteristic equations (10) of the previous Lecture 4.
Once an integral g(x, y, z, p, q, a) of this kind has been found, the problem reduces to
solving for p and q, and then integrating equation (8).
REMARK 5. 1. For finding integrals, not all of Charpit's equations (10) need be used.
2. p or q must occur in the solution obtained from (10).
EXAMPLE 6. Find a complete integral of

p^2 x + q^2 y = z.   (11)

Solution. To find a complete integral, we proceed as follows.
Step 1: (Computing f_x, f_y, f_z, f_p, f_q)
Set f ≡ p^2 x + q^2 y − z = 0. Then

f_x = p^2,  f_y = q^2,  f_z = −1,  f_p = 2px,  f_q = 2qy,

so that

p f_p + q f_q = 2p^2 x + 2q^2 y,  −(f_x + p f_z) = −p^2 + p,  −(f_y + q f_z) = −q^2 + q.


Step 2: (Writing Charpit's equations and finding a solution g(x, y, z, p, q, a))
The Charpit (or auxiliary) equations are:

dx/f_p = dy/f_q = dz/(p f_p + q f_q) = dp/[−(f_x + p f_z)] = dq/[−(f_y + q f_z)]

=⇒ dx/(2px) = dy/(2qy) = dz/[2(p^2 x + q^2 y)] = dp/(p − p^2) = dq/(q − q^2),

from which it follows that

(p^2 dx + 2px dp)/(2p^3 x + 2p^2 x − 2p^3 x) = (q^2 dy + 2qy dq)/(2q^3 y + 2q^2 y − 2q^3 y)

=⇒ (p^2 dx + 2px dp)/(p^2 x) = (q^2 dy + 2qy dq)/(q^2 y).

On integrating, we obtain

log(p^2 x) = log(q^2 y) + log a
=⇒ p^2 x = a q^2 y,   (12)

where a is an arbitrary constant.


Step 3: (Solving for p and q)
Using (11) and (12), we find that

p^2 x + q^2 y = z,  p^2 x = a q^2 y
=⇒ (a q^2 y) + q^2 y = z =⇒ q^2 y (1 + a) = z
=⇒ q^2 = z/[(1 + a) y] =⇒ q = [ z/((1 + a) y) ]^{1/2},

and

p^2 = a q^2 y/x = a z/[(1 + a) x]
=⇒ p = [ a z/((1 + a) x) ]^{1/2}.

Step 4: (Writing dz = p(x, y, z, a) dx + q(x, y, z, a) dy and finding its solution)
Writing

dz = [ a z/((1 + a) x) ]^{1/2} dx + [ z/((1 + a) y) ]^{1/2} dy
=⇒ ( (1 + a)/z )^{1/2} dz = ( a/x )^{1/2} dx + ( 1/y )^{1/2} dy,

integrate to have

[(1 + a) z]^{1/2} = (ax)^{1/2} + y^{1/2} + b,

which is a complete integral of the equation (11).
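Solving the complete integral for z gives z = ((ax)^{1/2} + y^{1/2} + b)^2/(1 + a); substituting back confirms that it satisfies p^2 x + q^2 y = z for all a and b. A sympy check (added for illustration):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
z = (sp.sqrt(a*x) + sp.sqrt(y) + b)**2/(1 + a)
p, q = sp.diff(z, x), sp.diff(z, y)

# PDE: p^2 x + q^2 y = z holds identically in x, y, a, b
assert sp.simplify(p**2*x + q**2*y - z) == 0
print('complete integral of Example 6 verified')
```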

Practice Problems
1. Show that the equations xp − yq = x and x^2 p + q = xz are compatible and solve them.
2. Show that the equations f(x, y, p, q) = 0 and g(x, y, p, q) = 0 are compatible if
∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) = 0.
3. Find complete integrals of the equations:
(i) p = (z + qy)^2;  (ii) (p^2 + q^2) y = qz.

Lecture 6

Some Special Types of First-Order PDEs

We shall consider some special types of first-order partial differential equations whose
solutions may be obtained easily by Charpit's method.
Type (a): (Equations involving only p and q)
If the equation is of the form

f(p, q) = 0,   (1)

then Charpit's equations take the form

dx/f_p = dy/f_q = dz/(p f_p + q f_q) = dp/0 = dq/0.

An immediate solution is given by p = a, where a is an arbitrary constant. Substituting
p = a in (1), we obtain a relation

q = Q(a).

Then, integrating the expression

dz = a dx + Q(a) dy,

we obtain

z = ax + Q(a) y + b,   (2)

where b is a constant. Thus, (2) is a complete integral of (1).


Note: Instead of taking dp = 0, we can take dq = 0 =⇒ q = a. In some problems, taking
dq = 0 may reduce the amount of computation considerably.
EXAMPLE 1. Find a complete integral of the equation pq = 1.
Solution. If p = a then pq = 1 =⇒ q = 1/a. In this case, Q(a) = 1/a. From (2), we
obtain a complete integral as

z = ax + y/a + b
=⇒ a^2 x + y − az = c,
where a and c are arbitrary constants.
Type (b): (Equations not involving the independent variables)
For equations of the type

f(z, p, q) = 0,   (3)


Charpit's equations become

dx/f_p = dy/f_q = dz/(p f_p + q f_q) = dp/(−p f_z) = dq/(−q f_z).

From the last two relations, we have

dp/(−p f_z) = dq/(−q f_z) =⇒ dp/p = dq/q =⇒ p = aq,   (4)

where a is an arbitrary constant. Solving (3) and (4) for p and q, we obtain

q = Q(a, z) =⇒ p = a Q(a, z).

Now,

dz = p dx + q dy
=⇒ dz = a Q(a, z) dx + Q(a, z) dy
=⇒ dz = Q(a, z) [a dx + dy].

This gives the complete integral

∫ dz/Q(a, z) = ax + y + b,   (5)

where b is an arbitrary constant.


EXAMPLE 2. Find a complete integral of the PDE p^2 z^2 + q^2 = 1.
Solution. Putting p = aq in the given PDE, we obtain

a^2 q^2 z^2 + q^2 = 1 =⇒ q^2 (1 + a^2 z^2) = 1 =⇒ q = (1 + a^2 z^2)^{−1/2}.

Now,

p^2 = (1 − q^2)/z^2 = ( 1 − 1/(1 + a^2 z^2) ) (1/z^2) = a^2/(1 + a^2 z^2)
=⇒ p = a (1 + a^2 z^2)^{−1/2}.

Substituting p and q in dz = p dx + q dy, we obtain

dz = a (1 + a^2 z^2)^{−1/2} dx + (1 + a^2 z^2)^{−1/2} dy
=⇒ (1 + a^2 z^2)^{1/2} dz = a dx + dy
=⇒ (1/(2a)) { az (1 + a^2 z^2)^{1/2} + log[ az + (1 + a^2 z^2)^{1/2} ] } = ax + y + b,

which is the complete integral of the given PDE.
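Before integrating, one can confirm that the expressions found for p and q really satisfy the PDE p^2 z^2 + q^2 = 1; a quick sympy check (added for illustration):

```python
import sympy as sp

z, a = sp.symbols('z a', positive=True)
q = (1 + a**2*z**2)**sp.Rational(-1, 2)
p = a*q

# p^2 z^2 + q^2 = (a^2 z^2 + 1)/(1 + a^2 z^2) = 1
assert sp.simplify(p**2*z**2 + q**2 - 1) == 0
print('p and q satisfy the PDE')
```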


Type (c): (Separable equations)
A first-order PDE is separable if it can be written in the form

f(x, p) = g(y, q).   (6)

That is, a PDE in which z is absent and the terms containing x and p can be separated
from those containing y and q. For this type of equation, Charpit's equations become

dx/f_p = dy/(−g_q) = dz/(p f_p − q g_q) = dp/(−f_x) = dq/g_y.

From the first and fourth ratios, we obtain an ODE

dp/(−f_x) = dx/f_p =⇒ dp/dx + f_x/f_p = 0,   (7)

which may be solved to yield p as a function of x and an arbitrary constant a. Writing
(7) in the form f_p dp + f_x dx = 0, we see that its solution is f(x, p) = a. Similarly, we get
g(y, q) = a. Determine p and q from the equations

f(x, p) = a,  g(y, q) = a,

and then use the relation dz = p dx + q dy to determine a complete integral.
EXAMPLE 3. Find a complete integral of p^2 y (1 + x^2) = q x^2.
Solution. First we write the given PDE in the form

p^2 (1 + x^2)/x^2 = q/y  (a separable equation).

It follows that

p^2 (1 + x^2)/x^2 = a^2 =⇒ p = ax/(1 + x^2)^{1/2},

where a is an arbitrary constant. Similarly,

q/y = a^2 =⇒ q = a^2 y.

Now, the relation dz = p dx + q dy yields

dz = [ ax/(1 + x^2)^{1/2} ] dx + a^2 y dy =⇒ z = a (1 + x^2)^{1/2} + a^2 y^2/2 + b,

where a and b are arbitrary constants; this is a complete integral for the given PDE.
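The two-parameter family found above can be substituted back into the PDE; a sympy verification (added as an illustration):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
z = a*sp.sqrt(1 + x**2) + a**2*y**2/2 + b
p, q = sp.diff(z, x), sp.diff(z, y)

# PDE: p^2 y (1 + x^2) = q x^2
assert sp.simplify(p**2*y*(1 + x**2) - q*x**2) == 0
print('complete integral of Example 3 verified')
```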


Type (d): (Clairaut's equation)
A first-order PDE is said to be in Clairaut form if it can be written as

z = px + qy + f(p, q).   (8)

Charpit's equations take the form

dx/(x + f_p) = dy/(y + f_q) = dz/(px + qy + p f_p + q f_q) = dp/0 = dq/0.

Now, dp = 0 =⇒ p = a, where a is an arbitrary constant, and dq = 0 =⇒ q = b, where b
is an arbitrary constant. Substituting the values of p and q in (8), we obtain the required
complete integral

z = ax + by + f(a, b).
EXAMPLE 4. Find a complete integral of (p + q)(z − xp − yq) = 1.
Solution. The given PDE can be put in the form

z = xp + yq + 1/(p + q),   (9)

which is of Clairaut type. Putting p = a and q = b in (9), a complete integral is given by

z = ax + by + 1/(a + b),

where a and b are arbitrary constants.
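As with the other types, the complete integral can be checked by direct substitution (here p = a and q = b, so z − xp − yq = 1/(a + b)); a sympy check added for illustration:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
z = a*x + b*y + 1/(a + b)
p, q = sp.diff(z, x), sp.diff(z, y)

# PDE: (p + q)(z - x p - y q) = 1
assert sp.simplify((p + q)*(z - x*p - y*q) - 1) == 0
print('Clairaut complete integral verified')
```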

Practice Problems
Find complete integrals of the following PDEs:
1. p + q = pq
2. √p + √q = 1
3. z = p^2 q^2
4. p(1 + q) = qz
5. p^2 + q^2 = x + y
6. z = px + qy + (1 + p^2 + q^2)^{1/2}
7. zpq = p^2 (xq + p^2) + q^2 (yp + q^2)

Lecture 7

Jacobi Method for Nonlinear First-Order PDEs

Consider the following first-order PDE of the form

f(x, y, z, u_x, u_y, u_z) = 0,   (1)

where x, y, z are independent variables and u is the dependent variable. Note that the
dependent variable u itself does not appear in the PDE (1).
Idea of Jacobi's Method: The fundamental idea of Jacobi's method is to introduce
two first-order PDEs involving two arbitrary constants a and b, of the form

h_1(x, y, z, u_x, u_y, u_z, a) = 0,   (2)

h_2(x, y, z, u_x, u_y, u_z, b) = 0,   (3)

such that

∂(f, h_1, h_2)/∂(u_x, u_y, u_z) ≠ 0,   (4)

and
• Equations (1), (2) and (3) can be solved for u_x, u_y, u_z;
• The equation

du = u_x dx + u_y dy + u_z dz   (5)

is integrable.
If h_1 = 0 and h_2 = 0 are compatible with f = 0, then h_1 and h_2 satisfy

∂(f, h)/∂(x, u_x) + ∂(f, h)/∂(y, u_y) + ∂(f, h)/∂(z, u_z) = 0   (6)

for h = h_i, i = 1, 2. Equation (6) leads to a semi-linear PDE of the form

f_{u_x} ∂h/∂x + f_{u_y} ∂h/∂y + f_{u_z} ∂h/∂z − f_x ∂h/∂u_x − f_y ∂h/∂u_y − f_z ∂h/∂u_z = 0   (7)

for h = h_i, i = 1, 2. The associated auxiliary equations are given by

dx/f_{u_x} = dy/f_{u_y} = dz/f_{u_z} = du_x/(−f_x) = du_y/(−f_y) = du_z/(−f_z).   (8)

The rest of the procedure is the same as in Charpit's method.


The method just described can also be applied to solve a first-order equation of the
form

f(x, y, z, p, q) = 0.   (9)

Next, we shall show how to transform the equation f(x, y, z, p, q) = 0 into an equation
g(x, y, z, u_x, u_y, u_z) = 0 so that the above procedure can be applied.
If u(x, y, z) is a relation between x, y and z and satisfies (9), then we have

u_x + u_z z_x = 0 =⇒ u_x + u_z p = 0 =⇒ p = −u_x/u_z,
u_y + u_z z_y = 0 =⇒ u_y + u_z q = 0 =⇒ q = −u_y/u_z.

Substituting

p = −u_x/u_z and q = −u_y/u_z

in (9), we obtain an equation

g(x, y, z, u_x, u_y, u_z) = 0,   (10)

which can be solved by Jacobi's method.


EXAMPLE 1. Find a complete integral of p^2 x + q^2 y = z by Jacobi's method.
Step 1: (Converting the given PDE into the form f(x, y, z, u_x, u_y, u_z) = 0)
Set p = −u_x/u_z and q = −u_y/u_z in the given PDE to obtain

(u_x^2/u_z^2) x + (u_y^2/u_z^2) y = z =⇒ x u_x^2 + y u_y^2 − z u_z^2 = 0.

Thus,

f(x, y, z, u_x, u_y, u_z) = x u_x^2 + y u_y^2 − z u_z^2 = 0.   (11)

Step 2: Solving PDE (11) by Jacobi's method.
Step 2(a): Compute f_{u_x}, f_{u_y}, f_{u_z}, f_x, f_y, f_z:

f_{u_x} = 2x u_x,  f_{u_y} = 2y u_y,  f_{u_z} = −2z u_z,  f_x = u_x^2,  f_y = u_y^2,  f_z = −u_z^2.
Step 2(b): Writing auxiliary equation and solving for ux , uy and uz .
The auxiliary equations are given by

duy
dx
dy
dz
dux
duz
=
=
=
=
=
fux
fuy
fuz
fx
fy
fz
du
dx
dy
dz
dux
duz
y
=
=
=
=
= 2
2xux
2yuy
2zuz
u2x
u2y
uz

(11)


Now,

dx/(2x u_x) = du_x/(−u_x^2) =⇒ u_x^2 dx = −2x u_x du_x
=⇒ dx/x = −2 du_x/u_x
=⇒ log x = −2 log(u_x) + log(a)
=⇒ log x + log(u_x^2) = log(a)
=⇒ x u_x^2 = a =⇒ u_x = (a/x)^{1/2}.

Similarly, we get

y u_y^2 = b =⇒ u_y = (b/y)^{1/2},

and, using (11),

u_z = [ (a + b)/z ]^{1/2}.

Step 2(c): Solving the equation du = u_x dx + u_y dy + u_z dz.

du = (a/x)^{1/2} dx + (b/y)^{1/2} dy + ( (a + b)/z )^{1/2} dz
=⇒ u = 2(ax)^{1/2} + 2(by)^{1/2} + 2((a + b)z)^{1/2} + c.   (12)

Step 3: (Finding a solution of the given PDE from the solution of PDE (11))
Setting u equal to a constant in (12), we get the complete integral of the given PDE as

z = [ ( ax/(a + b) )^{1/2} + ( by/(a + b) )^{1/2} ]^2.
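This complete integral can be substituted back into the original PDE p^2 x + q^2 y = z; a sympy verification (added as an illustration):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
z = (sp.sqrt(a*x/(a + b)) + sp.sqrt(b*y/(a + b)))**2
p, q = sp.diff(z, x), sp.diff(z, y)

# PDE: p^2 x + q^2 y = z holds identically in x, y, a, b
assert sp.simplify(p**2*x + q**2*y - z) == 0
print('Jacobi complete integral verified')
```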

Practice Problems
Find the complete integrals of the following PDEs:
1. (p^2 + q^2) y = qz
2. z^2 = pqxy
3. 2(y + zq) = q(xp + yq)

Module 3: Second-Order Partial Differential Equations

In Module 3, we shall discuss some general concepts associated with second-order linear
PDEs. These types of PDEs arise in connection with various physical problems such as
the motion of a vibrating string, heat flow, electricity, magnetism and fluid dynamics.
Second-order partial differential equations are classified into three different types. Further,
simplified canonical/normal forms are obtained for second-order linear equations in
two independent variables.
The lectures are organized as follows. The first lecture is devoted to the classification
of second-order linear PDEs in two or more independent variables. These equations are
classified into hyperbolic, parabolic and elliptic types. The reduction to the canonical form
(or normal form) of second-order equations in two independent variables is discussed in
the second lecture. Finally, the third lecture is devoted to well-posedness, the superposition
principle and the method of factorization for these types of equations.

MODULE 3: SECOND-ORDER PARTIAL DIFFERENTIAL EQUATIONS

Lecture 1

Classification of Second-Order PDEs

Classification of PDEs is an important concept because the general theory and methods
of solution usually apply only to a given class of equations. Let us first discuss the
classification of PDEs involving two independent variables.

Classification with two independent variables

Consider the following general second-order linear PDE in two independent variables:
A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + F u + G = 0,   (1)

where A, B, C, D, E, F and G are functions of the independent variables x and y. The
equation (1) may be written in the form

A u_xx + B u_xy + C u_yy + f(x, y, u_x, u_y, u) = 0,   (2)

where

u_x = ∂u/∂x,  u_y = ∂u/∂y,  u_xx = ∂²u/∂x²,  u_xy = ∂²u/∂x∂y,  u_yy = ∂²u/∂y².

Assume that A, B and C are continuous functions of x and y possessing continuous partial

derivatives of as high order as necessary.


The classification of PDEs is motivated by the classification of second-order algebraic
equations in two variables:

ax^2 + bxy + cy^2 + dx + ey + f = 0.   (3)

We know that the nature of the curve is decided by the principal part ax^2 + bxy + cy^2,
i.e., the terms of highest degree. Depending on the sign of the discriminant b^2 − 4ac,
we classify the curve as follows:
• If b^2 − 4ac > 0, the curve traces a hyperbola.
• If b^2 − 4ac = 0, the curve traces a parabola.
• If b^2 − 4ac < 0, the curve traces an ellipse.
With a suitable transformation, we can transform (3) into one of the following normal forms:

x^2/a^2 − y^2/b^2 = 1 (hyperbola),
x^2 = y (parabola),
x^2/a^2 + y^2/b^2 = 1 (ellipse).

MODULE 3: SECOND-ORDER PARTIAL DIFFERENTIAL EQUATIONS

Linear PDE with constant coefficients. Let us first consider the following general
linear second-order PDE in two independent variables x and y with constant coefficients:

A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u + G = 0,   (4)

where the coefficients A, B, C, D, E, F and G are constants. The nature of the equation
(4) is determined by the principal part containing the highest partial derivatives, i.e.,

Lu ≡ A u_xx + B u_xy + C u_yy.   (5)

For classification, we attach a symbol to (5): P(x, y) = A x^2 + B xy + C y^2 (as if we have
replaced ∂/∂x by x and ∂/∂y by y). Now, depending on the sign of the discriminant B^2 − 4AC,
the classification of (4) is done as follows:

B^2 − 4AC > 0 =⇒ Eq. (4) is hyperbolic,   (6)

B^2 − 4AC = 0 =⇒ Eq. (4) is parabolic,   (7)

B^2 − 4AC < 0 =⇒ Eq. (4) is elliptic.   (8)

Linear PDE with variable coefficients. The above classification of (4) is still valid if
the coefficients A, B, C, D, E and F depend on x and y. In this case, the conditions (6), (7)
and (8) should be satisfied at each point (x, y) in the region where we want to describe the
equation's nature; e.g., for ellipticity we need to verify

B^2(x, y) − 4A(x, y) C(x, y) < 0

for each (x, y) in the region of interest. Thus, we classify linear PDEs with variable coefficients as follows:

B^2(x, y) − 4A(x, y) C(x, y) > 0 at (x, y) =⇒ Eq. (4) is hyperbolic at (x, y),   (9)

B^2(x, y) − 4A(x, y) C(x, y) = 0 at (x, y) =⇒ Eq. (4) is parabolic at (x, y),   (10)

B^2(x, y) − 4A(x, y) C(x, y) < 0 at (x, y) =⇒ Eq. (4) is elliptic at (x, y).   (11)

Note: Whether Eq. (4) is hyperbolic, parabolic, or elliptic depends only on the coefficients of the
second derivatives. It has nothing to do with the first-derivative terms, the term in u, or
the nonhomogeneous term.
EXAMPLE 1.
1. uxx + uyy = 0 (Laplace equation). Here, A = 1, B = 0, C = 1 and B 2 4AC =
4 < 0. Therefore, it is an elliptic type.

MODULE 3: SECOND-ORDER PARTIAL DIFFERENTIAL EQUATIONS

2. ut = uxx (Heat equation). Here, A = 1, B = 0, C = 0. B 2 4AC = 0. Thus, it is


of parabolic type.
3. utt uxx = 0 (Wave equation). In this case, A = 1, B = 0, C = 1 and B 2 4AC =
4 > 0. Hence, it is of hyperbolic type.
x = 0 (Tricomi equation). B 2 4AC = 4x. Given PDE is

4. uxx + xuyy = 0,

hyperbolic for x < 0 and elliptic for x > 0. This example shows that equations with
variable coecients can change form in the dierent regions of the domain.
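The discriminant test above is easy to mechanize. The following sketch (the function name is ours, not from the text) evaluates $B^2 - 4AC$ at a point and reports the type, reproducing the four examples:

```python
def classify(A, B, C):
    """Classify A*u_xx + B*u_xy + C*u_yy + ... = 0 via the discriminant B^2 - 4AC."""
    disc = B**2 - 4*A*C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

print(classify(1, 0, 1))     # Laplace equation: elliptic
print(classify(1, 0, 0))     # heat equation: parabolic
print(classify(1, 0, -1))    # wave equation: hyperbolic
# Tricomi equation u_xx + x*u_yy = 0: the type depends on the point
print(classify(1, 0, -2.0))  # at x = -2: hyperbolic
print(classify(1, 0, 3.0))   # at x = 3: elliptic
```

For a variable-coefficient equation the function is simply called with the coefficients evaluated at each point of interest.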

Classication with more than two variables

Consider the second-order PDE in general form:
$$\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,\frac{\partial^2 u}{\partial x_i \partial x_j} + \sum_{i=1}^{n} b_i\,\frac{\partial u}{\partial x_i} + cu + d = 0, \qquad (12)$$
where the coefficients $a_{ij}$, $b_i$, $c$ and $d$ are functions of $x = (x_1, x_2, \ldots, x_n)$ alone and $u = u(x_1, x_2, \ldots, x_n)$.
Its principal part is
$$L \equiv \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,\frac{\partial^2}{\partial x_i \partial x_j}. \qquad (13)$$
It is enough to assume that $A = [a_{ij}]$ is symmetric; if not, let $\bar{a}_{ij} = \frac{1}{2}(a_{ij} + a_{ji})$ and rewrite
$$L \equiv \sum_{i=1}^{n}\sum_{j=1}^{n} \bar{a}_{ij}\,\frac{\partial^2}{\partial x_i \partial x_j}. \qquad (14)$$
Note that (13) agrees with (14), since $\frac{\partial^2 u}{\partial x_i \partial x_j} = \frac{\partial^2 u}{\partial x_j \partial x_i}$. As in two space
dimensions, let us attach a quadratic form $P$ (replacing $\partial/\partial x_i$ by $x_i$):
$$P(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\, x_i x_j. \qquad (15)$$
Since $A$ is a real symmetric ($a_{ij} = a_{ji}$) matrix, it is diagonalizable with real
eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ (counted with their multiplicities). In other words, there exists
a corresponding orthonormal set of $n$ eigenvectors, say $\xi_1, \xi_2, \ldots, \xi_n$, such that with $R = [\xi_1, \xi_2, \ldots, \xi_n]$ (eigenvectors as columns) we have
$$R^T A R = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) = D. \qquad (16)$$
We now classify (12) depending on the signs of the eigenvalues of $A$:
(a) If $\lambda_i > 0$ for all $i$, or $\lambda_i < 0$ for all $i$, then (12) is of elliptic type.
(b) If one or more of the $\lambda_i = 0$, then (12) is of parabolic type.
(c) If exactly one of the $\lambda_i$ is negative (or positive) and all the remaining eigenvalues have the
opposite sign, then (12) is of hyperbolic type.
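The eigenvalue test above can be checked numerically. A hedged sketch (ours, not from the text) using NumPy's `eigvalsh`; the leftover case with more than one eigenvalue of each sign, sometimes called ultrahyperbolic, falls outside the three classes of the text:

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify by the eigenvalue signs of the symmetric principal-part matrix A."""
    lam = np.linalg.eigvalsh(np.asarray(A, dtype=float))
    pos = np.sum(lam > tol)
    neg = np.sum(lam < -tol)
    zero = len(lam) - pos - neg
    if zero > 0:
        return "parabolic"
    if pos == len(lam) or neg == len(lam):
        return "elliptic"
    if pos == 1 or neg == 1:
        return "hyperbolic"
    return "ultrahyperbolic"  # more than one eigenvalue of each sign

print(classify(np.eye(3)))                 # Laplace operator: elliptic
print(classify(np.diag([0, -1, -1, -1])))  # heat operator u_t - Lap(u): parabolic
print(classify(np.diag([1, -1, -1, -1])))  # wave operator u_tt - Lap(u): hyperbolic
```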
EXAMPLE 2.
1. $\nabla^2 u = u_{xx} + u_{yy} + u_{zz} = 0$. In this case, $\lambda_i = 1 > 0$ for all $i = 1, 2, 3$. It is an elliptic
PDE since all eigenvalues are of one sign.
2. It is an easy exercise to check that $u_t - \nabla^2 u = 0$ is of parabolic type.
3. The equation $u_{tt} - \nabla^2 u = 0$ is of hyperbolic type.
EXAMPLE 3. Classify $u_{x_1 x_1} + 2(1 + cx_2)u_{x_2 x_3} = 0$.
To symmetrize, write it as
$$u_{x_1 x_1} + (1 + cx_2)u_{x_2 x_3} + (1 + cx_2)u_{x_3 x_2} = 0,$$
whose associated quadratic form is $x^T A x$, where
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 + cx_2 \\ 0 & 1 + cx_2 & 0 \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$
The eigenvalues are $\lambda_1 = 1$, $\lambda_2 = 1 + cx_2$, $\lambda_3 = -(1 + cx_2)$, with normalized eigenvectors
$$\xi_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \xi_2 = \begin{pmatrix} 0 \\ 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}, \quad \xi_3 = \begin{pmatrix} 0 \\ 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}.$$
So
$$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/\sqrt{2} & 1/\sqrt{2} \\ 0 & 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix}.$$
Note that $R = R^T = R^{-1}$, and
$$R^T A R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 + cx_2 & 0 \\ 0 & 0 & -(1 + cx_2) \end{pmatrix} = D.$$
The equation is parabolic if $x_2 = -\frac{1}{c}$ ($c \neq 0$), and hyperbolic if $x_2 > -\frac{1}{c}$ or $x_2 < -\frac{1}{c}$. For $c = 0$,
$\lambda_1 = \lambda_2 = 1$ and $\lambda_3 = -1$, so it is of hyperbolic type.
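The diagonalization in Example 3 can be double-checked numerically at a sample point. This sketch is our own, with $c = 1$ and $x_2 = 2$ chosen arbitrarily; it verifies $R = R^T = R^{-1}$ and $R^T A R = D$:

```python
import numpy as np

c, x2 = 1.0, 2.0
s = 1.0 + c * x2          # s = 1 + c*x2 = 3 at the sample point
r = 1.0 / np.sqrt(2.0)

A = np.array([[1, 0, 0],
              [0, 0, s],
              [0, s, 0]])
R = np.array([[1, 0, 0],
              [0, r, r],
              [0, r, -r]])
D = np.diag([1.0, s, -s])

assert np.allclose(R, R.T) and np.allclose(R @ R, np.eye(3))  # R = R^T = R^{-1}
assert np.allclose(R.T @ A @ R, D)                            # R^T A R = D
print(np.linalg.eigvalsh(A))  # eigenvalues 1, 1+c*x2, -(1+c*x2), in ascending order
```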
Practice Problems
1. Classify the following equations into hyperbolic, elliptic or parabolic type.
(A) $5u_{xx} - 3u_{yy} + (\cos x)u_x + e^y u_y + u = 0$.
(B) $e^x u_{xx} + e^y u_{yy} - u = 0$.
(C) $xu_{xx} + u_{yy} = 0$.
(D) $8u_{xx} + u_{yy} - u_x + [\log(2 + x^2)]u = 0$.
(E) $\sin^2 x\, u_{xx} + \sin 2x\, u_{xy} + \cos^2 x\, u_{yy} = x$.
2. Classify the following equations into elliptic, parabolic, or hyperbolic type.
(A) $u_{xx} + 2u_{yz} + (\cos x)u_z - e^{y^2} u = \cosh z$.
(B) $u_{xx} + 2u_{xy} + u_{yy} + 2u_{zz} - (1 + xy)u = 0$.
(C) $e^z u_{xy} - u_{xx} = \log[x^2 + y^2 + z^2 + 1]$.
3. Determine the regions where $u_{xx} - 2x^2 u_{xz} + u_{yy} + u_{zz} = 0$ is of hyperbolic, elliptic,
and parabolic type.


Lecture 2

Canonical Forms or Normal Forms

By a suitable change of the independent variables, we shall show that any equation of the
form
$$Au_{xx} + Bu_{xy} + Cu_{yy} + Du_x + Eu_y + Fu + G = 0, \qquad (1)$$
where $A, B, C, D, E, F$ and $G$ are functions of the variables $x$ and $y$, can be reduced to a
canonical form or normal form. The transformed equation assumes a simple form, so that
the subsequent analysis of solving the equation becomes easy.
Consider the transformation of the independent variables from $(x, y)$ to $(\xi, \eta)$ given by
$$\xi = \xi(x, y), \qquad \eta = \eta(x, y). \qquad (2)$$
Here, the functions $\xi$ and $\eta$ are continuously differentiable and the Jacobian
$$J = \frac{\partial(\xi, \eta)}{\partial(x, y)} = \begin{vmatrix} \xi_x & \xi_y \\ \eta_x & \eta_y \end{vmatrix} = \xi_x \eta_y - \xi_y \eta_x \neq 0 \qquad (3)$$
in the domain where (1) holds.


Using the chain rule, we notice that
$$u_x = u_\xi \xi_x + u_\eta \eta_x,$$
$$u_y = u_\xi \xi_y + u_\eta \eta_y,$$
$$u_{xx} = u_{\xi\xi} \xi_x^2 + 2u_{\xi\eta} \xi_x \eta_x + u_{\eta\eta} \eta_x^2 + u_\xi \xi_{xx} + u_\eta \eta_{xx},$$
$$u_{xy} = u_{\xi\xi} \xi_x \xi_y + u_{\xi\eta}(\xi_x \eta_y + \xi_y \eta_x) + u_{\eta\eta} \eta_x \eta_y + u_\xi \xi_{xy} + u_\eta \eta_{xy},$$
$$u_{yy} = u_{\xi\xi} \xi_y^2 + 2u_{\xi\eta} \xi_y \eta_y + u_{\eta\eta} \eta_y^2 + u_\xi \xi_{yy} + u_\eta \eta_{yy}.$$
Substituting these expressions into (1), we obtain
$$\tilde{A}(\xi_x, \xi_y)u_{\xi\xi} + \tilde{B}(\xi_x, \xi_y; \eta_x, \eta_y)u_{\xi\eta} + \tilde{C}(\eta_x, \eta_y)u_{\eta\eta} = F(\xi, \eta, u(\xi, \eta), u_\xi(\xi, \eta), u_\eta(\xi, \eta)), \qquad (4)$$
where
$$\tilde{A}(\xi_x, \xi_y) = A\xi_x^2 + B\xi_x \xi_y + C\xi_y^2,$$
$$\tilde{B}(\xi_x, \xi_y; \eta_x, \eta_y) = 2A\xi_x \eta_x + B(\xi_x \eta_y + \xi_y \eta_x) + 2C\xi_y \eta_y,$$
$$\tilde{C}(\eta_x, \eta_y) = A\eta_x^2 + B\eta_x \eta_y + C\eta_y^2.$$
An easy calculation shows that
$$\tilde{B}^2 - 4\tilde{A}\tilde{C} = (\xi_x \eta_y - \xi_y \eta_x)^2 (B^2 - 4AC). \qquad (5)$$
The identity (5) shows that a transformation of the independent variables does not
modify the type of the PDE.
We shall determine $\xi$ and $\eta$ so that (4) takes the simplest possible form. We now
consider the following cases:
Case I: $B^2 - 4AC > 0$ (hyperbolic type)
Case II: $B^2 - 4AC = 0$ (parabolic type)
Case III: $B^2 - 4AC < 0$ (elliptic type)
Case I: Note that $B^2 - 4AC > 0$ implies that the equation $A\lambda^2 + B\lambda + C = 0$ has two real
and distinct roots, say $\lambda_1$ and $\lambda_2$. Now, choose $\xi$ and $\eta$ such that
$$\frac{\xi_x}{\xi_y} = \lambda_1 \quad \text{and} \quad \frac{\eta_x}{\eta_y} = \lambda_2. \qquad (6)$$
Then the coefficients of $u_{\xi\xi}$ and $u_{\eta\eta}$ will be zero because
$$\tilde{A} = A\xi_x^2 + B\xi_x \xi_y + C\xi_y^2 = (A\lambda_1^2 + B\lambda_1 + C)\xi_y^2 = 0,$$
$$\tilde{C} = A\eta_x^2 + B\eta_x \eta_y + C\eta_y^2 = (A\lambda_2^2 + B\lambda_2 + C)\eta_y^2 = 0.$$
Thus, (5) reduces to
$$\tilde{B}^2 = (B^2 - 4AC)(\xi_x \eta_y - \xi_y \eta_x)^2 > 0,$$
as $B^2 - 4AC > 0$. Note that (6) is a pair of first-order linear PDEs for $\xi$ and $\eta$ whose characteristic
curves satisfy the first-order ODEs
$$\frac{dy}{dx} + \lambda_i(x, y) = 0, \quad i = 1, 2. \qquad (7)$$
Let the families of curves determined by the solutions of (7) for $i = 1$ and $i = 2$ be
$$f_1(x, y) = c_1 \quad \text{and} \quad f_2(x, y) = c_2, \qquad (8)$$
respectively. These families of curves are called the characteristic curves of PDE (1). With
this choice of $\xi = f_1(x, y)$ and $\eta = f_2(x, y)$, divide (4) throughout by $\tilde{B}$ (as $\tilde{B}^2 > 0$) and use (7)-(8) to obtain
$$\frac{\partial^2 u}{\partial \xi\, \partial \eta} = \Phi(\xi, \eta, u, u_\xi, u_\eta), \qquad (9)$$
which is the canonical form of the hyperbolic equation.


EXAMPLE 1. Reduce the equation $u_{xx} = x^2 u_{yy}$ to its canonical form.
Solution. Comparing with (1), we find that $A = 1$, $B = 0$, $C = -x^2$.
The roots of the equation $A\lambda^2 + B\lambda + C = 0$, i.e., $\lambda^2 - x^2 = 0$, are given by $\lambda_{1,2} = \pm x$.
The differential equations for the families of characteristic curves are
$$\frac{dy}{dx} \pm x = 0,$$
whose solutions are $y + \frac{1}{2}x^2 = c_1$ and $y - \frac{1}{2}x^2 = c_2$. Choose
$$\xi = y + \frac{1}{2}x^2, \qquad \eta = y - \frac{1}{2}x^2.$$
An easy computation shows that
$$u_x = u_\xi \xi_x + u_\eta \eta_x,$$
$$u_{xx} = u_{\xi\xi} \xi_x^2 + 2u_{\xi\eta} \xi_x \eta_x + u_{\eta\eta} \eta_x^2 + u_\xi \xi_{xx} + u_\eta \eta_{xx}
= x^2 u_{\xi\xi} - 2x^2 u_{\xi\eta} + x^2 u_{\eta\eta} + u_\xi - u_\eta,$$
$$u_{yy} = u_{\xi\xi} \xi_y^2 + 2u_{\xi\eta} \xi_y \eta_y + u_{\eta\eta} \eta_y^2 + u_\xi \xi_{yy} + u_\eta \eta_{yy}
= u_{\xi\xi} + 2u_{\xi\eta} + u_{\eta\eta}.$$
Substituting these expressions into the equation $u_{xx} = x^2 u_{yy}$ yields
$$4x^2 u_{\xi\eta} = u_\xi - u_\eta,$$
or, since $x^2 = \xi - \eta$,
$$4(\xi - \eta)u_{\xi\eta} = u_\xi - u_\eta,$$
i.e.,
$$u_{\xi\eta} = \frac{1}{4(\xi - \eta)}(u_\xi - u_\eta),$$
which is the required canonical form.
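As a sanity check (ours, not part of the text), the chain-rule identities of Example 1 can be verified symbolically: with $\xi = y + x^2/2$ and $\eta = y - x^2/2$, the combination $u_{xx} - x^2 u_{yy}$ should collapse to $-4x^2 u_{\xi\eta} + u_\xi - u_\eta$. Here the second-order derivatives of $u$ are treated as independent symbols:

```python
import sympy as sp

x, y = sp.symbols('x y')
u_xi, u_eta, u_xixi, u_xieta, u_etaeta = sp.symbols(
    'u_xi u_eta u_xixi u_xieta u_etaeta')

xi = y + x**2 / 2
eta = y - x**2 / 2

xi_x, xi_y = sp.diff(xi, x), sp.diff(xi, y)
eta_x, eta_y = sp.diff(eta, x), sp.diff(eta, y)

# chain rule for the second derivatives, first-derivative terms included
u_xx = (u_xixi*xi_x**2 + 2*u_xieta*xi_x*eta_x + u_etaeta*eta_x**2
        + u_xi*sp.diff(xi, x, 2) + u_eta*sp.diff(eta, x, 2))
u_yy = (u_xixi*xi_y**2 + 2*u_xieta*xi_y*eta_y + u_etaeta*eta_y**2
        + u_xi*sp.diff(xi, y, 2) + u_eta*sp.diff(eta, y, 2))

residual = sp.expand(u_xx - x**2*u_yy - (-4*x**2*u_xieta + u_xi - u_eta))
print(residual)  # 0
```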


CASE II: $B^2 - 4AC = 0$ implies that the equation $A\lambda^2 + B\lambda + C = 0$ has two equal roots, say
$\lambda_1 = \lambda_2 = \lambda$. Let $f_1(x, y) = c_1$ be the solution of $\frac{dy}{dx} + \lambda(x, y) = 0$. Take $\xi = f_1(x, y)$ and
take $\eta$ to be any function of $x$ and $y$ which is independent of $\xi$.
As before, $\tilde{A}(\xi_x, \xi_y) = 0$, and hence from equation (5) we obtain $\tilde{B} = 0$. Note that
$\tilde{C}(\eta_x, \eta_y) \neq 0$, otherwise $\eta$ would be a function of $\xi$. Dividing (4) by $\tilde{C}$, the canonical form
of (1) is
$$u_{\eta\eta} = \Phi(\xi, \eta, u, u_\xi, u_\eta), \qquad (10)$$
which is the canonical form of the parabolic equation.
EXAMPLE 2. Reduce the equation $u_{xx} + 2u_{xy} + u_{yy} = 0$ to canonical form.
Solution. In this case, $A = 1$, $B = 2$, $C = 1$. The equation $\lambda^2 + 2\lambda + 1 = 0$ has the equal
roots $\lambda = -1$. The solution of $\frac{dy}{dx} - 1 = 0$ is $x - y = c_1$. Take $\xi = x - y$ and choose $\eta = x + y$.
Proceeding as in Example 1, we obtain $u_{\eta\eta} = 0$, which is the canonical form of the given PDE.
CASE III: When $B^2 - 4AC < 0$, the roots of $A\lambda^2 + B\lambda + C = 0$ are complex. Following
the procedure of CASE I, we find that
$$u_{\xi\eta} = \Phi_1(\xi, \eta, u, u_\xi, u_\eta). \qquad (11)$$
The variables $\xi, \eta$ are in fact complex conjugates. To get a real canonical form, use the
transformation
$$\alpha = \frac{1}{2}(\xi + \eta), \qquad \beta = \frac{1}{2i}(\xi - \eta) \qquad (12)$$
to obtain
$$u_{\xi\eta} = \frac{1}{4}(u_{\alpha\alpha} + u_{\beta\beta}),$$
which follows from the following calculation:
$$u_\xi = u_\alpha \alpha_\xi + u_\beta \beta_\xi = \frac{1}{2}u_\alpha + \frac{1}{2i}u_\beta,$$
$$u_{\xi\eta} = \frac{1}{2}\left(\frac{1}{2}u_{\alpha\alpha} - \frac{1}{2i}u_{\alpha\beta}\right) + \frac{1}{2i}\left(\frac{1}{2}u_{\beta\alpha} - \frac{1}{2i}u_{\beta\beta}\right) = \frac{1}{4}(u_{\alpha\alpha} + u_{\beta\beta}).$$
The desired canonical form is
$$u_{\alpha\alpha} + u_{\beta\beta} = \Phi(\alpha, \beta, u(\alpha, \beta), u_\alpha(\alpha, \beta), u_\beta(\alpha, \beta)). \qquad (13)$$
EXAMPLE 3. Reduce the equation $u_{xx} + x^2 u_{yy} = 0$ to canonical form.
Solution. In this case, $A = 1$, $B = 0$, $C = x^2$. The roots are $\lambda_1 = ix$, $\lambda_2 = -ix$.
Take $\xi = iy + \frac{1}{2}x^2$, $\eta = -iy + \frac{1}{2}x^2$. Then $\alpha = \frac{1}{2}x^2$, $\beta = y$. The canonical form is
$$u_{\alpha\alpha} + u_{\beta\beta} = -\frac{1}{2\alpha}u_\alpha.$$

Practice Problems
1. Reduce the following equations to canonical/normal form:
(A) $2u_{xx} - 4u_{xy} + 2u_{yy} + 3u = 0$.
(B) $u_{xx} + yu_{yy} = 0$.
(C) $u_{xy} + u_x + u_y = 2x$.
2. Show that the equation
$$u_{xx} - 6u_{xy} + 12u_{yy} + 4u_x - u = \sin(xy)$$
is of elliptic type and obtain its canonical form.
3. Determine the regions where Tricomi's equation $u_{xx} + xu_{yy} = 0$ is of elliptic,
parabolic, and hyperbolic type. Obtain its characteristics and its canonical form in
the hyperbolic region.


Lecture 3


Superposition Principle and Well-posedness

A very important fact concerning linear PDEs is the superposition principle, which is
stated below.
A linear PDE can be written in the form
$$L[u] = f, \qquad (1)$$
where $L[u]$ denotes a linear combination of $u$ and some of its partial derivatives, with
coefficients which are given functions of the independent variables.
DEFINITION 1. (Superposition principle) Let $u_1$ be a solution of the linear PDE
$$L[u] = f_1$$
and let $u_2$ be a solution of the linear PDE
$$L[u] = f_2.$$
Then, for any constants $c_1$ and $c_2$, $c_1 u_1 + c_2 u_2$ is a solution of
$$L[u] = c_1 f_1 + c_2 f_2.$$
That is,
$$L[c_1 u_1 + c_2 u_2] = c_1 f_1 + c_2 f_2. \qquad (2)$$

In particular, when f1 = 0 and f2 = 0, (2) implies that if u1 and u2 are solutions of the
homogeneous linear PDE L[u] = 0, then c1 u1 + c2 u2 will also be a solution of L[u] = 0.
EXAMPLE 2. Observe that $u_1(x, y) = x^3$ is a solution of the linear PDE $u_{xx} - u_y = 6x$,
and $u_2(x, y) = y^2$ is a solution of $u_{xx} - u_y = -2y$. Then, using the superposition principle, it
is easy to verify that $3u_1(x, y) - 4u_2(x, y)$ is a solution of $u_{xx} - u_y = 18x + 8y$.
REMARK 3. Note that the principle of superposition is not valid for nonlinear partial
differential equations. This failure makes it difficult to form families of new solutions
from an original pair of solutions.
EXAMPLE 4. Consider the nonlinear first-order PDE $u_x u_y - u(u_x + u_y) + u^2 = 0$. Note
that $e^x$ and $e^y$ are two solutions of this equation. However, $c_1 e^x + c_2 e^y$ will not be a
solution unless $c_1 = 0$ or $c_2 = 0$.
Solution. Define $D[u] := (u_x - u)(u_y - u)$. For any $u, v \in C^1$, we have
$$D[u + v] = (u_x + v_x - u - v)(u_y + v_y - u - v) = D[u] + D[v] + (u_y - u)(v_x - v) + (u_x - u)(v_y - v).$$
The computation shows that $D[u + v] \neq D[u] + D[v]$ in general. Taking $u = c_1 e^x$ and
$v = c_2 e^y$, an easy computation shows that
$$D[c_1 e^x + c_2 e^y] = D[c_1 e^x] + D[c_2 e^y] + (-c_1 e^x)(-c_2 e^y) = c_1 c_2 e^{x+y}.$$
Thus, $D[c_1 e^x + c_2 e^y] = 0$ only if $c_1 = 0$ or $c_2 = 0$.
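The computation in Example 4 can be confirmed symbolically. A short sketch (ours) that evaluates $D[u] = (u_x - u)(u_y - u)$ on the attempted superposition:

```python
import sympy as sp

x, y, c1, c2 = sp.symbols('x y c1 c2')

def D(u):
    # D[u] = (u_x - u)*(u_y - u); the PDE of Example 4 reads D[u] = 0
    return (sp.diff(u, x) - u) * (sp.diff(u, y) - u)

u = c1*sp.exp(x) + c2*sp.exp(y)
print(sp.simplify(D(sp.exp(x))))  # e^x alone is a solution
print(sp.simplify(D(u)))          # c1*c2*exp(x + y): nonzero unless c1 or c2 vanishes
```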

Well-posed problems

A set of conditions was proposed by Hadamard (cf. [12]), who listed three requirements
that must be met when formulating an initial and/or boundary value problem. A problem
for which the PDE and the data lead to a solution is said to be well posed or correctly
posed if the following three conditions are satisfied:
1. The solution must exist.
2. The solution should be unique.
3. The solution should depend continuously on the initial and/or boundary data.
If a problem fails to meet these requirements, it is incorrectly posed.
Conditions (1)-(2) require that the equation together with the data for the problem must
be such that one and only one solution exists. The third condition states that a small
variation of the data for the problem should cause only a small variation in the solution. As data
are generally obtained experimentally and may be subject to numerical approximation,
we require that the solution be stable under small variations in initial and/or boundary
values. That is, we cannot allow large variations to occur in the solution if the data are
altered slightly.
A simple example of an ill-posed problem is given below.
EXAMPLE 5. The Cauchy problem for Laplace's equation in $y \geq 0$:
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \qquad (3)$$
$$u(x, 0) = 0, \qquad (4)$$
$$u_y(x, 0) = \frac{1}{n}\sin nx, \qquad (5)$$
where $n$ is a positive integer, is not well posed.
The solution is given by $u(x, y) = \frac{1}{n^2}\sin(nx)\sinh(ny)$. Now, as $n \to \infty$, $u_y(x, 0) \to 0$,
so that for large $n$ the Cauchy data $u(x, 0)$ and $u_y(x, 0)$ can be made arbitrarily small
in magnitude. However, the solution $u(x, y)$ oscillates with an amplitude that grows
exponentially like $e^{ny}$ as $n \to \infty$. Thus, arbitrarily small data can lead to arbitrarily large
variations in the solution, and hence the solution is unstable. This violates condition (3)
above, i.e., the continuous dependence of the solution on the data.
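Hadamard's instability in Example 5 is easy to see numerically. This hedged sketch (ours) compares the size of the data $u_y(x, 0) = \sin(nx)/n$ with the size of the solution $u(x, y) = \sin(nx)\sinh(ny)/n^2$ at a fixed height $y = 0.5$:

```python
import math

def data_size(n):
    # max over x of |u_y(x, 0)| = |sin(n x)/n|
    return 1.0 / n

def solution_size(n, y=0.5):
    # max over x of |u(x, y)| = sinh(n y)/n**2
    return math.sinh(n * y) / n**2

for n in (1, 10, 50):
    print(n, data_size(n), solution_size(n))
```

The data shrink like $1/n$ while the solution at $y = 0.5$ grows like $e^{n/2}/n^2$: tiny perturbations of the data produce huge changes in the solution.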
Boundary value problems are not well posed for hyperbolic and parabolic equations.
This is because these are, in general, equations whose solutions evolve in time, and
their behavior at later times is determined by their previous states.
EXAMPLE 6. Consider the hyperbolic equation
$$u_{xy} = 0 \quad \text{in } 0 < x < 1,\ 0 < y < 1,$$
with the boundary conditions
$$u(x, 0) = f_1(x), \quad u(x, 1) = f_2(x) \quad \text{for } 0 \leq x \leq 1,$$
$$u(0, y) = g_1(y), \quad u(1, y) = g_2(y) \quad \text{for } 0 \leq y \leq 1.$$
We shall show that this problem has no solution if the data are prescribed arbitrarily.
Since $u_{xy} = 0$ implies that $u_x(x, y)$ is independent of $y$, we have
$$u_x(x, 0) = u_x(x, 1).$$
In view of the given BC, we have
$$u_x(x, 0) = f_1'(x) \quad \text{and} \quad u_x(x, 1) = f_2'(x).$$
Thus, unless $f_1$ and $f_2$ are prescribed such that $f_1'(x) = f_2'(x)$, the BVP cannot be
solved. Therefore, it is incorrectly posed.

Method of factorization

There is no general method available for obtaining the general solution of a second-order PDE. Sometimes a PDE of second order can be factorized into two first-order equations. The equations
$$u = 0, \qquad yu_{xx} + (x + y)u_{xy} + xu_{yy} = 0$$
are examples of such equations. It is often much easier to factorize an equation when it is
in its canonical form, but we can often factorize equations with constant coefficients
directly. The method of factorization can be a useful method of solution for hyperbolic
and parabolic equations.
EXAMPLE 7. The equation
$$u_{xx} - u_{yy} + 4(u_x + u) = 0$$
can be written as
$$\left(\frac{\partial}{\partial x} + \frac{\partial}{\partial y} + 2\right)\left(\frac{\partial}{\partial x} - \frac{\partial}{\partial y} + 2\right)u = 0.$$
It is equivalent to the pair of first-order equations
$$u_x - u_y + 2u = v \qquad \text{and} \qquad v_x + v_y + 2v = 0.$$
EXAMPLE 8. The hyperbolic equation
$$ac\,u_{xy} + a\,u_x + c\,u_y + u = 0$$
can be written as
$$\left(a\frac{\partial}{\partial x} + 1\right)\left(c\frac{\partial}{\partial y} + 1\right)u = 0.$$
It is equivalent to
$$c\,u_y + u = v, \qquad a\,v_x + v = 0.$$
Note: Unlike the case when the coefficients are constant, the differential operators need
not commute.
Practice Problems
Practice Problems
1. If $u_1(x, y) = x^2$ solves $u_{xx} + u_{yy} = 2$ and $u_2(x, y) = cx^3 + dy^3$ solves $u_{xx} + u_{yy} = 6cx + 6dy$ for real constants $c$ and $d$, then find a solution of $u_{xx} + u_{yy} = ax + by + c$
for given real constants $a$, $b$ and $c$.
2. Let $u_1(x, y)$ be the solution to the Cauchy problem
$$u_{xx} + u_{yy} = 0, \quad u(x, 0) = f(x), \quad u_y(x, 0) = g(x),$$
and let $u_2(x, y)$ be the solution of the following Cauchy problem:
$$u_{xx} + u_{yy} = 0, \quad u(x, 0) = f(x), \quad u_y(x, 0) = g(x) + \frac{1}{n}\sin(nx).$$
Show that $u_2 - u_1 = \frac{1}{n^2}\sinh(ny)\sin(nx)$, and hence that the solution to the Cauchy problem for
Laplace's equation does not depend continuously on the initial data.
3. Show that the Dirichlet problem for the wave equation
$$u_{xx} - u_{yy} = 0, \quad 0 < x < l,\ 0 < y < T,$$
$$u(0, y) = u(l, y) = 0, \quad 0 \leq y \leq T,$$
$$u(x, 0) = u(x, T) = 0, \quad 0 \leq x \leq l,$$
is not well posed.

Module 4: Fourier Series


Periodic functions occur frequently in engineering problems. Their representation in terms
of simple periodic functions, such as sine and cosine, leads to Fourier series (FS).
Fourier series is a very powerful tool in connection with various problems involving partial
differential equations. Applications of Fourier series to solving PDEs are discussed in
subsequent modules. In this module, we shall learn basic concepts, facts and techniques in
connection with Fourier series.
Module 4 is organised as follows. The first lecture introduces the FS; the
convergence of FS and the properties of termwise differentiation and integration of FS are
discussed in the second lecture. The third lecture is devoted to the Fourier sine series (FSS) and
Fourier cosine series (FCS) of functions.

MODULE 4: FOURIER SERIES

Lecture 1

Introduction to Fourier Series

In this lecture, we shall discuss a class of expansions which are particularly useful in the
study of solutions of PDEs. To begin with, we review some function properties that are
particularly relevant to this study.
DEFINITION 1. (Periodic function) A function $f$ is periodic of period $L$ if $f(x + L) = f(x)$
for all $x$ in the domain of $f$.
The smallest positive value of $L$ is called the fundamental period. The trigonometric
functions $\sin x$ and $\cos x$ are examples of periodic functions with fundamental period $2\pi$,
and $\tan x$ is periodic with fundamental period $\pi$. A constant function is a periodic function
with arbitrary period $L$.
It is easy to verify that if the functions $f_1, \ldots, f_n$ are periodic of period $L$, then any
linear combination
$$c_1 f_1(x) + \cdots + c_n f_n(x)$$
is also periodic. Furthermore, if the infinite series
$$\frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right],$$
consisting of $2L$-periodic functions, converges for all $x$, then the function to which it
converges will be periodic of period $2L$.
There are two symmetry properties of functions that will be useful in the study of
Fourier series.
DEFINITION 2. (Even and odd functions) Let $f : [-L, L] \to \mathbb{R}$. Then $f(x)$
is called even if $f(-x) = f(x)$ for all $x \in [-L, L]$, and $f(x)$ is called odd if $f(-x) = -f(x)$
for all $x \in [-L, L]$.
Note: The graph of an even function is symmetric with respect to the $y$-axis. Note that
if $(x, f(x))$ is on the graph of an even function $f(x)$, then $(-x, f(x))$ will also be on the
graph (i.e., the graph is invariant under reflection in the $y$-axis); see Figure 4.1.
The graph of an odd function is symmetric with respect to the origin. If $f(x)$ is odd,
then $(x, f(x))$ is on the graph if and only if $(-x, -f(x))$ is on the graph. That is, the
graph is invariant under reflection through the origin; see Figure 4.2.
EXAMPLE 3. The functions $f(x) = x^{2n}$, $n = 0, 1, 2, \ldots$, are even functions, whereas $f(x) = x^{2n+1}$, $n = 0, 1, 2, \ldots$, are odd functions. The functions $\sin x$ and $\tan x$ are odd functions
and $\cos x$ is an even function.


We collect some facts concerning even and odd functions.
- The product of two even functions is even.
- The product of two odd functions is even.
- The product of an odd function and an even function is odd.
- If $f(x)$ is odd on $-L \leq x \leq L$, then $\int_{-L}^{L} f(x)\,dx = 0$, if the integral exists.
- If $f(x)$ is even on $-L \leq x \leq L$, then $\int_{-L}^{L} f(x)\,dx = 2\int_0^L f(x)\,dx$, if the integral exists.

It is easy to verify that
$$\int_{-L}^{L} \sin\frac{n\pi x}{L}\,dx = 0, \qquad \int_{-L}^{L} \cos\frac{n\pi x}{L}\,dx = 0, \qquad n = 1, 2, \ldots. \qquad (1)$$
For $m, n = 1, 2, \ldots$, we have
$$\int_{-L}^{L} \sin\frac{m\pi x}{L}\cos\frac{n\pi x}{L}\,dx = 0, \qquad
\int_{-L}^{L} \sin\frac{m\pi x}{L}\sin\frac{n\pi x}{L}\,dx = \begin{cases} 0, & m \neq n, \\ L, & m = n, \end{cases} \qquad (2)$$
$$\int_{-L}^{L} \cos\frac{m\pi x}{L}\cos\frac{n\pi x}{L}\,dx = \begin{cases} 0, & m \neq n, \\ L, & m = n. \end{cases} \qquad (3)$$
Equations (1)-(3) express an orthogonality condition satisfied by the set of trigonometric
functions $\{\cos x, \sin x, \cos 2x, \sin 2x, \ldots\}$, where $L = \pi$.
DEFINITION 4. (Orthogonal functions) A set of functions $\{f_n(x)\}_{n=1}^{\infty}$ is said to be
orthogonal with respect to the nonnegative weight function $w(x)$ on the interval $[a, b]$ if
$$\int_a^b f_m(x)f_n(x)w(x)\,dx = 0 \quad \text{whenever } m \neq n. \qquad (4)$$
We have already seen that the set of trigonometric functions
$$\{1, \cos x, \sin x, \cos 2x, \sin 2x, \ldots\}$$
is orthogonal on $[-\pi, \pi]$ with respect to the weight function $w(x) = 1$.
Define the norm of $f$ as
$$\|f\| := \left[\int_a^b f^2(x)w(x)\,dx\right]^{1/2}.$$
We say that a set of functions $\{f_n(x)\}_{n=1}^{\infty}$ is an orthonormal system with respect to $w(x)$
if (4) holds and $\|f_n\| = 1$ for each $n$. That is,
$$\int_a^b f_m(x)f_n(x)w(x)\,dx = \begin{cases} 0, & m \neq n, \\ 1, & m = n. \end{cases} \qquad (5)$$

An infinite series of the form
$$\frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right], \qquad (6)$$
where
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \quad n = 0, 1, 2, 3, \ldots,$$
and
$$b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx, \quad n = 1, 2, 3, \ldots,$$
is called the Fourier series of $f(x)$. This series is named after the outstanding French
mathematical physicist Joseph Fourier (1768-1830).
Suppose that $f(x)$ is of the form
$$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{N}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right]. \qquad (7)$$
Then the coefficients $a_n$ and $b_n$ are uniquely determined by the formulas
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \quad n = 0, 1, \ldots, \qquad (8)$$
$$b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx, \quad n = 1, 2, \ldots. \qquad (9)$$
REMARK 5.
Not every function $f(x)$ has a representation of the form (7). The right side of (7) is
smooth, i.e., $C^\infty$ (infinitely differentiable), but many functions have graphs
with jumps or corners. We will encounter functions $f(x)$ for which the integrals (8)
and (9) are nonzero for infinitely many values of $n$. In such cases, $f(x)$ cannot be
represented as a finite sum as in (7). Also, even if $N \to \infty$, the sum (7) might not
converge to $f(x)$, unless some additional assumptions are made (cf. [1]).
DEFINITION 6. (Fourier series) Let $f : [-L, L] \to \mathbb{R}$ be such that the integrals
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \quad n = 0, 1, 2, 3, \ldots, \qquad (10)$$
and
$$b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx, \quad n = 1, 2, 3, \ldots, \qquad (11)$$
exist and are finite. Then the Fourier series (FS) of $f$ on $[-L, L]$ is the expression
$$f(x) \sim \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right]. \qquad (12)$$
The coefficients $a_0, a_n, b_n$ $(n = 1, 2, 3, \ldots)$ are known as the Fourier coefficients of $f$. The
symbol $\sim$ means "has the Fourier series".
EXAMPLE 7. Find the FS of
$$f(x) = \begin{cases} -1, & -\pi < x < 0, \\ 1, & 0 < x < \pi. \end{cases}$$
Solution. Here $L = \pi$. Note that $f(x)$ is an odd function. Since the product of an
odd function and an even function is odd, $f(x)\cos nx$ is also an odd function. Hence
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = 0, \quad n = 0, 1, 2, \ldots.$$
Since $f(x)\sin nx$ is an even function (as the product of two odd functions), we have
$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx = \frac{2}{\pi}\int_0^{\pi} \sin nx\,dx
= \frac{2}{\pi}\left[-\frac{\cos nx}{n}\right]_0^{\pi} = \frac{2}{\pi}\cdot\frac{1 - (-1)^n}{n}
= \begin{cases} 0, & n \text{ even}, \\ \dfrac{4}{n\pi}, & n \text{ odd}, \end{cases}$$
for $n = 1, 2, 3, \ldots$. Thus
$$f(x) \sim \sum_{n=1}^{\infty} \frac{2[1 - (-1)^n]}{n\pi}\sin nx = \frac{4}{\pi}\left[\sin x + \frac{1}{3}\sin 3x + \frac{1}{5}\sin 5x + \cdots\right].$$
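The partial sums of the series in Example 7 can be evaluated directly. A small sketch (ours): at an interior point such as $x = \pi/2$ the sums approach $f(x) = 1$, while at the jump $x = 0$ every partial sum equals $0$, the average of $-1$ and $1$:

```python
import math

def S(x, N):
    """Partial sum (4/pi) * sum over odd n <= N of sin(n x)/n of the series above."""
    return (4.0 / math.pi) * sum(math.sin(n * x) / n for n in range(1, N + 1, 2))

print(S(math.pi / 2, 2001))  # approaches f(pi/2) = 1
print(S(0.0, 2001))          # exactly 0, the midpoint of the jump
```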

REMARK 8.
If $f$ is any odd function, then its FS consists only of sine terms (see Example 7).
If $f$ is an even function, then its FS consists only of cosine terms (including $\cos 0x$).
EXAMPLE 9. Find the FS of the function $f(x) = x$ for $-L \leq x \leq L$.

Solution. We first compute the Fourier coefficients $a_n$ for $n \geq 1$:
$$a_n = \frac{1}{L}\int_{-L}^{L} x\cos\frac{n\pi x}{L}\,dx
= \frac{1}{L}\left[\frac{Lx}{n\pi}\sin\frac{n\pi x}{L}\right]_{-L}^{L} - \frac{1}{n\pi}\int_{-L}^{L}\sin\frac{n\pi x}{L}\,dx$$
$$= 0 + \frac{L}{(n\pi)^2}\left[\cos\frac{n\pi x}{L}\right]_{-L}^{L} = 0, \quad n = 1, 2, 3, \ldots.$$
For $n = 0$, we get
$$a_0 = \frac{1}{L}\int_{-L}^{L} x\,dx = \frac{1}{L}\left[\frac{x^2}{2}\right]_{-L}^{L} = 0.$$
Thus, $a_n = 0$ for $n = 0, 1, 2, 3, \ldots$. Next, to compute $b_n$, we have
$$b_n = \frac{1}{L}\int_{-L}^{L} x\sin\frac{n\pi x}{L}\,dx
= \frac{1}{L}\left[-\frac{Lx}{n\pi}\cos\frac{n\pi x}{L}\right]_{-L}^{L} + \frac{1}{n\pi}\int_{-L}^{L}\cos\frac{n\pi x}{L}\,dx$$
$$= -\frac{2L}{n\pi}\cos(n\pi) + \frac{L}{(n\pi)^2}\left[\sin\frac{n\pi x}{L}\right]_{-L}^{L}
= \frac{2L}{n\pi}(-1)^{n+1}, \quad n = 1, 2, 3, \ldots,$$
where we have used the fact $\cos(n\pi) = (-1)^n$. Thus, the FS of $f(x)$ is given by
$$f(x) \sim \sum_{n=1}^{\infty} (-1)^{n+1}\frac{2L}{n\pi}\sin\frac{n\pi x}{L} = \frac{2L}{\pi}\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\sin\frac{n\pi x}{L}.$$
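The coefficient formula $b_n = \frac{2L}{n\pi}(-1)^{n+1}$ of Example 9 can be re-derived symbolically. A short check (ours) with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
L = sp.symbols('L', positive=True)
n = sp.symbols('n', integer=True, positive=True)

# b_n = (1/L) * integral_{-L}^{L} x sin(n pi x / L) dx
bn = sp.simplify(sp.integrate(x * sp.sin(n * sp.pi * x / L), (x, -L, L)) / L)
print(bn)  # equivalent to 2*L*(-1)**(n+1)/(pi*n)
```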

Practice Problems
1. Find the FS of the following functions:
(a) $f(x) = \begin{cases} 0, & -2 \leq x \leq 1, \\ 1, & 1 < x \leq 2; \end{cases}$ \quad (b) $f(x) = \begin{cases} x^2, & -1 \leq x \leq 0, \\ 1 + x, & 0 < x \leq 1; \end{cases}$ \quad (c) $f(x) = |\sin x|$, $-\pi < x < \pi$.
2. If the $2\pi$-periodic even function is given by $f(x) = |x|$ for $-\pi < x < \pi$, show that
$$f(x) = \frac{\pi}{2} - \frac{4}{\pi}\sum_{n=1}^{\infty} \frac{\cos(2n - 1)x}{(2n - 1)^2}.$$
3. Show that
$$1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \frac{\pi}{4}.$$

Lecture 2

Convergence of Fourier Series

In this lecture, we shall discuss the convergence of FS (without proofs) and the properties
of termwise differentiation and integration.

Convergence of FS for continuous functions

In Example 9 (cf. Lecture 1 of Module 4), we notice that at $x = 0$ the FS of $f(x) = x$ takes the value $0 = f(0)$.
However, at $x = L$ the FS takes the value $0 \neq L = f(L)$. Thus, the FS of $f(x)$ is not $f(x)$ at
$x = L$.
Under various assumptions on a function $f(x)$, one can prove that the Fourier series
of $f(x)$ does converge to $f(x)$ for all $x$ in $[-L, L]$. We now have the following results.
THEOREM 1. (Convergence of FS) Let $f \in C^2([-L, L])$ (a twice continuously
differentiable function on the interval $[-L, L]$) be such that $f(-L) = f(L)$ and $f'(-L) = f'(L)$.
Let $a_n$ and $b_n$ be the Fourier coefficients of $f(x)$, and let
$$M = \max_{x \in [-L, L]} |f''(x)|.$$
Then for any $N \geq 1$,
$$\left|f(x) - \left\{\frac{a_0}{2} + \sum_{n=1}^{N}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right]\right\}\right| \leq \frac{4L^2 M}{\pi^2 N}, \qquad (1)$$
for all $x \in [-L, L]$.


REMARK 2.
As $N \to \infty$, the right-hand side of (1) tends to $0$, which implies that the sum in braces
on the left side of (1) converges to $f(x)$ as $N \to \infty$, i.e.,
$$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right].$$
The inequality (1) tells us how many terms of the Fourier series of $f(x)$ will suffice
in order to approximate $f(x)$ to within a certain error.
The conditions $f(-L) = f(L)$ and $f'(-L) = f'(L)$ ensure that if the interval $[-L, L]$
is bent into a circle by joining $x = -L$ to $x = L$, then the graph of $f(x)$ above this
circle is continuous and has a well-defined tangent line (or derivative) at the juncture
where $-L$ and $L$ are identified.


Periodic extensions of functions: We begin with some facts concerning periodic
functions and periodic extensions of functions.
DEFINITION 3. Let $f : [-L, L] \to \mathbb{R}$ be such that $f(-L) = f(L)$. Then the periodic
extension of $f(x)$ is the unique periodic function $\tilde{f}(x)$ of period $2L$ such that $\tilde{f}(x) = f(x)$
for $-L \leq x \leq L$.
For the periodic extension to exist, we must have $f(-L) = f(L)$. Thus, note
that the function $f(x) = x$, defined for $-L \leq x \leq L$, does not have a periodic extension.
To remedy this situation, we redefine
$$f(\pm L) = \frac{1}{2}\{f(L^-) + f((-L)^+)\},$$
where
$$f(L^-) := \lim_{x \to L^-} f(x) = \lim_{h \to 0^+} f(L - h)$$
and
$$f((-L)^+) := \lim_{x \to (-L)^+} f(x) = \lim_{h \to 0^+} f(-L + h).$$
We now state a sequence of convergence results without proofs. For proofs, readers
may refer to [1, 9].
THEOREM 4. (Pointwise convergence of FS) Let $f(x) \in C^1([-L, L])$ and assume that
$f(-L) = f(L)$ and $f'(-L) = f'(L)$. Then the FS of $f(x)$ equals $f(x)$ for all $x \in [-L, L]$.
That is, if the $N$-th partial sum of the Fourier series of $f(x)$ is denoted by
$$S_N(x) = \frac{1}{2}a_0 + \sum_{n=1}^{N}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right],$$
where
$$a_n = \frac{1}{L}\int_{-L}^{L} f(y)\cos\frac{n\pi y}{L}\,dy \quad \text{and} \quad b_n = \frac{1}{L}\int_{-L}^{L} f(y)\sin\frac{n\pi y}{L}\,dy,$$
then, for any fixed $x \in [-L, L]$, we have
$$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right] = \lim_{N \to \infty} S_N(x).$$

REMARK 5. Note that when $f$ and $f'$ are piecewise continuous on $[-L, L]$, the Fourier
series converges to $f(x)$ at points where $f$ is continuous, and to the average of the left- and right-hand limits at
points where $f$ is discontinuous.
THEOREM 6. (Uniform convergence of FS) Let $f(x) \in C^2([-L, L])$ be such that
$f(-L) = f(L)$ and $f'(-L) = f'(L)$. Then the FS of $f(x)$ converges uniformly to $f(x)$.
That is, the sequence of partial sums $S_1(x), S_2(x), \ldots$ of the Fourier series of $f(x)$ converges uniformly to $f(x)$. Indeed,
$$|f(x) - S_N(x)| \leq \frac{4L^2 M}{\pi^2 N}, \quad x \in [-L, L],$$
where
$$M = \max_{-L \leq x \leq L} |f''(x)|.$$
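The error bound can be tested numerically. This sketch is ours: we pick $f(x) = x^3 - x$ on $[-1, 1]$, which satisfies $f(-1) = f(1)$ and $f'(-1) = f'(1)$ with $M = \max|f''| = 6$, compute the first $N$ sine coefficients by quadrature ($f$ is odd, so $a_n = 0$), and check that the observed maximum error sits below $4L^2M/(\pi^2 N)$:

```python
import math

L, M = 1.0, 6.0              # f(x) = x**3 - x on [-1, 1]; M = max |f''| = 6

def f(x):
    return x**3 - x

def b(n, samples=4000):
    # b_n = (1/L) * integral_{-L}^{L} f(x) sin(n pi x / L) dx, left Riemann sum
    # (f vanishes at both endpoints, so this is effectively the trapezoid rule)
    h = 2 * L / samples
    return sum(f(-L + k*h) * math.sin(n*math.pi*(-L + k*h)/L)
               for k in range(samples)) * h / L

N = 10
coeffs = [b(n) for n in range(1, N + 1)]

def S(x):
    return sum(coeffs[n - 1] * math.sin(n*math.pi*x/L) for n in range(1, N + 1))

grid = [i / 200 - 1 for i in range(401)]
err = max(abs(f(x) - S(x)) for x in grid)
bound = 4 * L * L * M / (math.pi**2 * N)
print(err, bound)  # observed error is well below the theoretical bound
```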

Convergence of FS for piecewise continuous functions

DEFINITION 7. A function $f : [a, b] \to \mathbb{R}$ is piecewise continuous on $[a, b]$ if
(i) $f(x)$ is continuous at all but a finite number of points in $[a, b]$,
(ii) for every $x_0 \in (a, b)$, the limits $f(x_0^+)$ and $f(x_0^-)$ exist, and
(iii) the limits $f(a^+)$ and $f(b^-)$ exist.
EXAMPLE 8. The function
$$f(x) = \begin{cases} x, & 0 < x < 1, \\ 2, & 1 < x < 2, \\ (x - 2)^2, & 2 \leq x \leq 3 \end{cases}$$
is piecewise continuous on $[0, 3]$.
DEFINITION 9. $f(x)$ is piecewise $C^1$ on $[a, b]$ if $f(x)$ and $f'(x)$ are piecewise continuous
on $[a, b]$.
Note: If $f(x)$ is piecewise $C^1$ on $[a, b]$, then $f(x)$ is automatically piecewise
continuous on $[a, b]$.
The following definition is convenient in the formulation of the convergence theorem
for FS of piecewise $C^1$ functions.
DEFINITION 10. Let $f(x) : [-L, L] \to \mathbb{R}$ be a piecewise $C^1$ function. Define the adjusted
function $\hat{f}(x)$ as follows:
$$\hat{f}(x) = \begin{cases} \frac{1}{2}[f(x^-) + f(x^+)], & -L < x < L, \\ \frac{1}{2}[f(L^-) + f((-L)^+)], & x = \pm L. \end{cases} \qquad (2)$$
Note: The above definition tells us that $\hat{f}(x)$ coincides with $f(x)$ at all points in $(-L, L)$ where $f(x)$
is continuous, while $\hat{f}(x)$ is the average of the left-hand and right-hand limits of $f(x)$ at
points of discontinuity in $(-L, L)$. The value of $\hat{f}(x)$ at $x = \pm L$ can be thought of as an
average of left-hand and right-hand limits if we bend the interval $[-L, L]$ into a circle.


THEOREM 11. Let $f : [-L, L] \to \mathbb{R}$ be a piecewise $C^1$ function and let $\hat{f}(x)$ be the adjusted
function as defined in (2). Then the FS of $f(x)$ equals $\hat{f}(x)$ for all $x \in [-L, L]$. In fact, we have
$$\text{FS of } f(x) = \tilde{f}(x), \quad -\infty < x < \infty,$$
where $\tilde{f}(x)$ is the periodic extension of the adjusted function $\hat{f}(x)$.
If $f(x)$ is piecewise $C^1$, has no discontinuities in $[-L, L]$, and $f(-L) = f(L)$, then
there is still hope for uniform convergence of the FS of $f(x)$ to $f(x)$ on $[-L, L]$. The following
theorem provides the uniform convergence of the FS of $f(x)$ to $f(x)$ for such functions.
THEOREM 12. Let $f(x) : [-L, L] \to \mathbb{R}$ be a continuous, piecewise $C^1$ function such that $f(-L) = f(L)$. Then the FS of $f(x)$ converges uniformly to $f(x)$ on $[-L, L]$. That is,
$$\max_{-L \leq x \leq L} |f(x) - S_N(x)| \to 0 \quad \text{as } N \to \infty, \qquad (3)$$
where $S_N(x)$ is the $N$-th partial sum of the FS of $f(x)$.
REMARK 13.
Note that in the above theorem we have not assumed that $f'(x)$ is continuous everywhere: the graph of $f(x)$ may have corners. However, the continuity assumption ensures that the graph of $f(x)$ has no gaps. Moreover, the assumption
$f(-L) = f(L)$ ensures that the periodic extension $\tilde{f}(x)$ has no gaps (i.e., $f(x)$ yields
a continuous function on the circle of circumference $2L$).

Differentiation and integration of FS

The term-by-term differentiation of a Fourier series is not always permissible. For example,
the FS for $f(x) = x$, $-\pi < x < \pi$ (see Example 9), is
$$f(x) \sim 2\sum_{n=1}^{\infty} (-1)^{n+1}\frac{\sin nx}{n},$$
which converges for all $x$, whereas its derived series
$$2\sum_{n=1}^{\infty} (-1)^{n+1}\cos nx$$
diverges for every $x$.


The following theorem gives sufficient conditions for using termwise differentiation.


THEOREM 14. (Differentiation of FS) Let $f(x) : \mathbb{R} \to \mathbb{R}$ be continuous with $f(x + 2L) = f(x)$. Let $f'(x)$ and $f''(x)$ be piecewise continuous on $[-L, L]$. Then the FS of $f'(x)$ can
be obtained from the FS for $f(x)$ by termwise differentiation. In particular, if
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left\{a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right\},$$
then
$$f'(x) \sim \sum_{n=1}^{\infty} \frac{n\pi}{L}\left\{-a_n \sin\frac{n\pi x}{L} + b_n \cos\frac{n\pi x}{L}\right\}.$$
Notice that the above theorem does not apply to Example 9, as the $2\pi$-periodic extension of $f(x) = x$ fails to be continuous on $(-\infty, \infty)$.
Termwise integration of a FS is permissible under much weaker conditions.
THEOREM 15. (Integration of FS) Let $f(x) : [-L, L] \to \mathbb{R}$ be a piecewise continuous
function with FS
$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left\{a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right\}.$$
Then, for any $x \in [-L, L]$, we have
$$\int_{-L}^{x} f(t)\,dt = \int_{-L}^{x} \frac{a_0}{2}\,dt + \sum_{n=1}^{\infty} \int_{-L}^{x}\left\{a_n \cos\frac{n\pi t}{L} + b_n \sin\frac{n\pi t}{L}\right\}dt.$$

Practice Problems
1. Discuss the convergence of the FS of the following functions $f$:
(a) $f(x) = \begin{cases} x, & 0 < x < \pi, \\ -x, & \pi < x < 2\pi; \end{cases}$ \quad (b) $f(x) = \begin{cases} x, & 0 < x < \pi, \\ 0, & \pi < x \leq 2\pi. \end{cases}$
2. Determine the FS of the following functions by differentiating the appropriate FS:
(i) $\sin^2 x$, $0 < x < \pi$; (ii) $\cos x + \cos(2x)$, $0 < x < \pi$.
3. Differentiate the FS of $f(x) = |\sin x|$ term by term to prove that
$$\cos x = \frac{8}{\pi}\sum_{n=1}^{\infty} \frac{n\sin(2nx)}{4n^2 - 1}.$$
4. Find the function represented by the new series obtained by termwise
integration of the following series from $0$ to $x$:
$$\sum_{k=1}^{\infty} (-1)^{k+1}\frac{\cos(kx)}{k} = \ln\left(2\cos\frac{x}{2}\right), \quad -\pi < x < \pi.$$


Lecture 3

Fourier Cosine and Sine Series

Recall that the FS of an odd/even function defined on $[-L, L]$ consists entirely of sine/cosine
terms. A function $f$ defined on $0 \leq x \leq L$ can be extended to the interval $[-L, L]$ in such
a way that the extended function is odd/even. This leads to the following definitions.
DEFINITION 1. (Even and odd extensions) Let $f : [0, L] \to \mathbb{R}$. The even extension
of $f(x)$ is the unique even function $f_e(x)$ defined for $x \in [-L, L]$ with $f_e(x) = f(x)$ for
$x \in [0, L]$, i.e.,
$$f_e(x) = \begin{cases} f(x) & \text{if } 0 \leq x \leq L, \\ f(-x) & \text{if } -L \leq x \leq 0. \end{cases}$$
If $f(0) = 0$, then the odd extension $f_o(x)$ is the unique odd function defined for $x \in [-L, L]$
such that $f_o(x) = f(x)$ for $x \in [0, L]$, i.e.,
$$f_o(x) = \begin{cases} f(x) & \text{if } 0 \leq x \leq L, \\ -f(-x) & \text{if } -L \leq x \leq 0. \end{cases}$$
An example of the even and odd extensions of $f(x) = \sqrt{x}$, $0 \leq x \leq L$, is given in Fig. 4.1.
Figure 4.1: $f(x) = \sqrt{x}$, $0 \leq x \leq L$; even extension $f_e(x)$; odd extension $f_o(x)$

THEOREM 2. Let $f : [-L, L] \to \mathbb{R}$ be a function with Fourier coefficients
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx \quad \text{and} \quad b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx.$$
If $f(x)$ is even, then $b_n = 0$ $(n = 1, 2, 3, \ldots)$ and
$$a_n = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi x}{L}\,dx, \quad (n = 0, 1, 2, \ldots). \qquad (1)$$
If $f(x)$ is odd, then $a_n = 0$ $(n = 0, 1, 2, \ldots)$ and
$$b_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx, \quad (n = 1, 2, 3, \ldots). \qquad (2)$$

DEFINITION 3. Let f : [0, L] → R be such that the integrals in (1) and (2) exist. Then the Fourier sine series (FSS) of f(x) is the expression

\text{FSS of } f(x) = \sum_{n=1}^{\infty} b_n\sin\Big(\frac{n\pi x}{L}\Big), \quad \text{where } b_n = \frac{2}{L}\int_{0}^{L} f(x)\sin\Big(\frac{n\pi x}{L}\Big)\,dx. \qquad (3)

The Fourier cosine series (FCS) of f(x) is the expression

\text{FCS of } f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n\cos\Big(\frac{n\pi x}{L}\Big), \quad \text{where } a_n = \frac{2}{L}\int_{0}^{L} f(x)\cos\Big(\frac{n\pi x}{L}\Big)\,dx. \qquad (4)

THEOREM 4. Let f : [0, L] → R. Suppose that the integrals in (1) and (2) exist. Then (redefining f(0) = 0 if necessary) the FSS of f(x) is the Fourier series of the odd extension f_o(x) defined on [−L, L], i.e.,

FSS of f(x) = FS of f_o(x).

The FCS of f(x) is the Fourier series of the even extension f_e(x) defined on [−L, L], i.e.,

FCS of f(x) = FS of f_e(x).

Since FSS of f(x) = FS of f_o(x) and FCS of f(x) = FS of f_e(x), we can obtain convergence results for the FSS and FCS of f(x) by applying Theorem 11 (see the previous lecture) to the extensions f_o(x) and f_e(x).
THEOREM 5. Let f : [0, L] → R be a piecewise C¹ function. Then

\text{FSS of } f(x) = \begin{cases} \frac{1}{2}[f(x^-) + f(x^+)], & 0 < x < L,\\ 0, & x = 0 \text{ or } L. \end{cases} \qquad (5)

If f(x) is also continuous on [0, L] with f(0) = 0 and f(L) = 0, then the partial sums S_N(x) of the FSS of f(x) converge uniformly to f(x) on [0, L], i.e.,

\max_{0 \le x \le L}\{|f(x) - S_N(x)|\} \to 0 \ \text{as } N \to \infty.

THEOREM 6. Let f : [0, L] → R be a piecewise C¹ function. Then

\text{FCS of } f(x) = \begin{cases} \frac{1}{2}[f(x^-) + f(x^+)], & 0 < x < L,\\ f(0^+), & x = 0,\\ f(L^-), & x = L. \end{cases} \qquad (6)

If f(x) is also continuous on [0, L], then the partial sums S_N(x) of the FCS of f(x) converge uniformly to f(x).
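The endpoint behavior in Theorem 5 can be observed numerically. The sketch below (Python with NumPy; the sample f(x) = x on [0, 1], whose FSS coefficients work out to b_n = 2L(−1)^{n+1}/(nπ), is an assumption made for illustration) shows the FSS converging to f in the interior while vanishing identically at the endpoints, even though f(L) ≠ 0:

```python
import numpy as np

# FSS of f(x) = x on [0, L]: b_n = 2L(-1)^(n+1)/(n pi), known in closed form.
L, N = 1.0, 4000
n = np.arange(1, N + 1)
bn = 2 * L * (-1.0) ** (n + 1) / (n * np.pi)

def fss(x):
    # Partial sum of the Fourier sine series at a point x
    return np.sum(bn * np.sin(n * np.pi * x / L))

assert abs(fss(0.3) - 0.3) < 1e-3      # interior point: FSS -> f(x)
assert abs(fss(0.0)) < 1e-9            # x = 0: every term vanishes
assert abs(fss(L)) < 1e-9              # x = L: FSS is 0, although f(L) = 1
```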


THEOREM 7. The Fourier sine series of f^e(x) on [0, 2L] is given by

\text{FSS of } f^e(x) = \sum_{n=0}^{\infty} c_n \sin\Big[\Big(n + \tfrac{1}{2}\Big)\frac{\pi x}{L}\Big], \qquad (8)

where

c_n = \frac{2}{L}\int_{0}^{L} f(x)\sin\Big[\Big(n + \tfrac{1}{2}\Big)\frac{\pi x}{L}\Big]\,dx, \quad n = 0, 1, 2, .... \qquad (9)

If f(x) is piecewise C¹, then

\text{FSS of } f^e(x) = \begin{cases} \frac{1}{2}[f^e(x^-) + f^e(x^+)], & 0 < x < 2L,\\ 0, & x = 0. \end{cases} \qquad (10)

In particular, for 0 ≤ x ≤ L, we have

\text{FSS of } f^e(x) = \begin{cases} \frac{1}{2}[f(x^-) + f(x^+)], & 0 < x < L,\\ 0, & x = 0,\\ f(L^-), & x = L. \end{cases} \qquad (11)

Moreover, if f(x) is piecewise C¹ and continuous with f(0) = 0, then the partial sums S_N(x) of the FSS of f^e(x) converge uniformly to f(x) on [0, L].

Practice Problems

1. Construct the FCS for the following functions:

(a) f(x) = \begin{cases} \pi, & 0 \le x \le 1,\\ 1, & 1 < x \le 2; \end{cases} \qquad (b) f(x) = \begin{cases} 2 + x, & 0 \le x \le 1,\\ 1 - x, & 1 < x \le 2. \end{cases}

2. Construct the FSS for the following functions:

(a) f(x) = \begin{cases} \pi, & 0 < x < \pi/2,\\ 2, & \pi/2 < x \le \pi. \end{cases} \qquad (b) f(x) = x², \ 0 < x < L.

Module 5: Heat Equation

In this module, we shall study the one-dimensional heat equation. It is a parabolic partial differential equation which describes diffusion processes such as heat conduction and chemical concentration, and hence the heat equation is often called the diffusion equation.

This module consists of five lectures and is organized as follows. The first lecture is devoted to the derivation of the heat equation from the principle of conservation of energy. Uniqueness results and the maximum principle for the heat equation will be discussed in the second lecture. The third lecture discusses the method of solution by separation of variables. The fourth and fifth lectures are devoted to the cases where the boundary conditions do not change with time and to time-dependent boundary conditions, respectively.

MODULE 5: HEAT EQUATION

Lecture 1

Modeling the Heat Equation

We shall derive the heat equation from the principle of conservation of energy and the fact that heat flows from hot regions to cold regions.

Consider a wire or rod of length L which is made of some heat-conducting material and is insulated on the outside, except possibly over the ends at x = 0 and x = L. Let u(x, t) denote the temperature at x at time t; u(x, t) is assumed to be constant on each cross section at each time. By the principle of conservation of energy (heat energy), the

Figure 5.1: A thin rod of length L

net change of heat inside the segment PQ (between x and x + Δx) is equal to the net heat flux across the boundaries plus the total heat generated inside PQ. If c is the thermal capacity of the rod, ρ is the density of the rod, A is the cross-sectional area of the rod, k is the thermal conductivity of the rod, and f(x, t) is the external heat source, then we calculate these terms as follows:

Total amount of heat inside the segment PQ at time t = \int_{x}^{x+\Delta x} c\rho A\, u(\xi, t)\,d\xi.

Net change of heat inside PQ = \frac{d}{dt}\int_{x}^{x+\Delta x} c\rho A\, u(\xi, t)\,d\xi = c\rho A\int_{x}^{x+\Delta x} u_t(\xi, t)\,d\xi.

Net flux of heat across the boundaries = kA\big[u_x(x+\Delta x, t) - u_x(x, t)\big].

Heat generated due to the external heat source inside PQ = A\int_{x}^{x+\Delta x} f(\xi, t)\,d\xi.

By the principle of conservation of energy, we write

\frac{d}{dt}\int_{x}^{x+\Delta x} c\rho A\, u(\xi, t)\,d\xi = c\rho A\int_{x}^{x+\Delta x} u_t(\xi, t)\,d\xi = kA\big[u_x(x+\Delta x, t) - u_x(x, t)\big] + A\int_{x}^{x+\Delta x} f(\xi, t)\,d\xi. \qquad (1)


Applying the Mean Value Theorem for integrals¹, we obtain

c\rho A\, u_t(\xi_1, t)\,\Delta x = kA\big[u_x(x+\Delta x, t) - u_x(x, t)\big] + A f(\xi_2, t)\,\Delta x,

where ξ₁, ξ₂ ∈ (x, x + Δx), and hence,

u_t(\xi_1, t) = \frac{k}{c\rho}\Big[\frac{u_x(x+\Delta x, t) - u_x(x, t)}{\Delta x}\Big] + \frac{1}{c\rho} f(\xi_2, t).

Now, letting Δx → 0, we arrive at

u_t(x, t) = \alpha^2 u_{xx}(x, t) + F(x, t), \qquad (2)

where α² = k/(cρ) is called the thermal diffusivity of the rod and F(x, t) = f(x, t)/(cρ) is called the heat source density.
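As a sanity check of (2) with F = 0, one can verify numerically that u(x, t) = e^{−α²t} sin x satisfies u_t = α²u_xx. In the sketch below (Python with NumPy), the value α² = 0.5 and the sample point are arbitrary choices:

```python
import numpy as np

# u(x,t) = exp(-alpha^2 t) sin(x) is a classical solution of u_t = alpha^2 u_xx;
# check the PDE at one point with centered finite differences.
alpha2 = 0.5
u = lambda x, t: np.exp(-alpha2 * t) * np.sin(x)

x, t, h = 0.8, 0.3, 1e-4
ut  = (u(x, t + h) - u(x, t - h)) / (2 * h)             # approximates u_t
uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2  # approximates u_xx

assert abs(ut - alpha2 * uxx) < 1e-6
```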


REMARK 1.

• When the rod is not laterally insulated and we allow heat to flow in and out across the lateral boundary at a rate proportional to the difference between the temperature u(x, t) and that of the surrounding medium, the conservation of heat principle yields

u_t = \alpha^2 u_{xx} - \beta(u - u_0), \quad \beta > 0.

The heat loss (u > u₀) or gain (u < u₀) is proportional to the difference between the temperature u(x, t) of the rod and that of the surrounding medium u₀. Here, β is the constant of proportionality.

• If the material of the rod is uniform, then k is independent of x. For some materials, the value of k depends on the temperature u, and hence the resulting heat equation

u_t = \frac{1}{c\rho}\frac{\partial}{\partial x}\Big\{k(u)\frac{\partial u}{\partial x}\Big\}

is nonlinear.

• If the material is nonhomogeneous, the diffusion within the rod depends on x. For example, suppose half of the rod is made of copper and the other half is made of steel; then the PDE that describes the heat flow is given by

u_t = \alpha^2(x)\, u_{xx}, \quad 0 < x < L,
¹If f(x) is continuous on [a, b], then there exists at least one number ξ in (a, b) such that \int_a^b f(x)\,dx = f(\xi)(b - a).


with

\alpha(x) = \begin{cases} \alpha_1, & 0 < x < L/2,\\ \alpha_2, & L/2 < x < L, \end{cases}

where α₁ and α₂ are the thermal diffusivity coefficients of copper and steel, respectively.
Types of BCs: There are three types of boundary conditions that can occur for heat flow problems. They are:

• Dirichlet boundary conditions (temperature is specified on the boundary): Consider the heat flow problem in a rod (0 ≤ x ≤ L). The specification of the temperatures u(0, t) and u(L, t) at the ends is classified as a Dirichlet type BC.

• Neumann boundary conditions (heat flow across the boundary is specified): The specification of the normal derivative ∂u/∂n (where n is the outward normal to the boundary) on the boundary is classified as a Neumann type BC. For instance, if the end points of a rod are insulated (i.e., we do not allow any flow of heat across the boundary), the BCs are

u_x(0, t) = 0, \quad u_x(L, t) = 0, \quad 0 < t < \infty.

• Robin or mixed boundary conditions: If the condition on the boundary is a mixture of both Dirichlet and Neumann types, i.e.,

\frac{\partial u}{\partial n} = -h(u - g(t)),

then it is called a Robin BC or mixed BC. Here, h is a constant and g(t) is a given function that can vary over the boundary. The mixed BC may be interpreted as saying that the inward flux across the boundary is proportional to the difference between the temperature u and some specified temperature g. If the temperature u on the boundary is greater than the specified boundary temperature g, then the flow of heat is outward. If u is less than the specified boundary temperature g, then heat flows inward.

MODULE 5: HEAT EQUATION

Lecture 2

The Maximum and Minimum Principle

In this lecture, we shall prove the maximum and minimum principles for the heat equation. These properties can be used to prove uniqueness and continuous dependence on the data of the solutions of these equations.

To begin with, we shall first prove the maximum principle for the inhomogeneous heat equation (F ≠ 0).

THEOREM 1. (The maximum principle) Let R : 0 ≤ x ≤ L, 0 ≤ t ≤ T be a closed region and let u(x, t) be a solution of

u_t - \alpha^2 u_{xx} = F(x, t), \quad (x, t) \in R, \qquad (1)

which is continuous in the closed region R. If F < 0 in R, then u(x, t) attains its maximum values on t = 0, x = 0 or x = L, and not in the interior of the region or at t = T. If F > 0 in R, then u(x, t) attains its minimum values on t = 0, x = 0 or x = L, and not in the interior of the region or at t = T.
Proof. We shall show that if a maximum or minimum occurs at an interior point (x₀, t₀) with 0 < x₀ < L and 0 < t₀ ≤ T, then we arrive at a contradiction. Let us consider the following cases.

Case I: First, consider the case with F < 0. Since u(x, t) is continuous in the closed and bounded region R, u(x, t) must attain its maximum in R. Let (x₀, t₀) be the interior maximum point. Then we must have

u_{xx}(x_0, t_0) \le 0, \quad u_t(x_0, t_0) \ge 0. \qquad (2)

Indeed, at an interior maximum, u_x(x₀, t₀) = 0 and

u_t(x_0, t_0) = 0 \quad \text{if } t_0 < T.

If t₀ = T, the point (x₀, t₀) = (x₀, T) is on the boundary of R, but we still claim that

u_t(x_0, t_0) \ge 0,

as u may be increasing at (x₀, t₀). Substituting (2) in (1), we find that the left side of the equation (1) is non-negative while the right side is strictly negative. This leads to a contradiction, and hence the maximum must be assumed on the initial line or on the boundary.


Case II: Consider the case with F > 0. Let there be an interior minimum point (x₀, t₀) in R. Then

u_{xx}(x_0, t_0) \ge 0, \quad u_t(x_0, t_0) \le 0. \qquad (3)

Note that the inequalities (3) are the same as (2) with the signs reversed. Again arguing as before, this leads to a contradiction; hence the minimum must be assumed on the initial line or on the boundary.

Note: When F = 0, i.e., for the homogeneous equation, the inequalities (2) at a maximum or (3) at a minimum do not lead to a contradiction when they are inserted into (1), as u_{xx} and u_t may both vanish at (x₀, t₀).
Below, we present a proof of the maximum principle for the homogeneous heat equation.
THEOREM 2. (The maximum principle) Let u(x, t) be a solution of

u_t = \alpha^2 u_{xx}, \quad 0 \le x \le L, \ 0 < t \le T, \qquad (4)

which is continuous in the closed region R : 0 ≤ x ≤ L and 0 ≤ t ≤ T. The maximum and minimum values of u(x, t) are assumed on the initial line t = 0 or at the points on the boundary x = 0 or x = L.

Proof. Let us introduce the auxiliary function

v(x, t) = u(x, t) + \epsilon x^2, \qquad (5)

where ε > 0 is a constant and u satisfies (4). Note that v(x, t) is continuous in R and hence it has a maximum at some point (x₁, t₁) in the region R.

Assume that (x₁, t₁) is an interior point with 0 < x₁ < L and 0 < t₁ ≤ T. Then we find that

v_t(x_1, t_1) \ge 0, \quad v_{xx}(x_1, t_1) \le 0. \qquad (6)

Since u satisfies (4), we have

v_t - \alpha^2 v_{xx} = u_t - \alpha^2 u_{xx} - 2\epsilon\alpha^2 = -2\epsilon\alpha^2 < 0. \qquad (7)

Combining (6) with (7) now leads to

0 \le v_t - \alpha^2 v_{xx} < 0,

which is a contradiction, since the left side is non-negative and the right side is strictly negative. Therefore, v(x, t) assumes its maximum on the initial line or on the boundary, since v satisfies (1) with F < 0.


Let

M = \max\{u(x, t)\} \ \text{on } t = 0, \ x = 0, \ \text{and } x = L,

i.e., M is the maximum value of u on the initial line and boundary lines. Then

v(x, t) = u(x, t) + \epsilon x^2 \le M + \epsilon L^2, \quad \text{for } 0 \le x \le L, \ 0 \le t \le T. \qquad (8)

Since v has its maximum on t = 0, x = 0, or x = L, we obtain

u(x, t) = v(x, t) - \epsilon x^2 \le v(x, t) \le M + \epsilon L^2. \qquad (9)

Since ε is arbitrary, letting ε → 0, we conclude that

u(x, t) \le M \quad \text{for all } (x, t) \in R, \qquad (10)

and this completes the proof.


REMARK 3.

• The minimum principle for the heat equation can be obtained by replacing the function u(x, t) by −u(x, t), where u(x, t) is a solution of (4). Clearly, −u is also a solution of (4), and the maximum values of −u correspond to the minimum values of u. Since −u satisfies the maximum principle, we conclude that u assumes its minimum values on the initial line or on the boundary lines. In particular, this implies that if the initial and boundary data for the problem are non-negative, then the solution must be non-negative.

• In geometrical terms, the maximum principle states that if a solution of the problem (4) is graphed in xtu-space, then the surface u = u(x, t) achieves its maximum height above one of the three sides x = 0, x = L, t = 0 of the rectangle 0 ≤ x ≤ L, 0 ≤ t ≤ T.

• From a physical perspective, the maximum principle states that the temperature at any point x inside the rod at any time t (0 ≤ t ≤ T) is less than the maximum of the initial temperature distribution or the maximum of the temperatures prescribed at the ends during the time interval [0, T].

Uniqueness and continuous dependence

As a consequence of the maximum principle, we can show that the heat flow problem has a unique solution which depends continuously on the given initial and boundary data.


THEOREM 4. (Uniqueness result) Let u₁(x, t) and u₂(x, t) be solutions of the following problem:

PDE: u_t = \alpha^2 u_{xx}, \quad 0 < x < L, \ t > 0,
BC: u(0, t) = g(t), \quad u(L, t) = h(t), \qquad (11)
IC: u(x, 0) = f(x),

where f(x), g(t) and h(t) are given functions. Then u₁(x, t) = u₂(x, t) for all 0 ≤ x ≤ L and t ≥ 0.
Proof. Let u₁(x, t) and u₂(x, t) be two solutions of (11). Set w(x, t) = u₁(x, t) − u₂(x, t). Then w satisfies

w_t = \alpha^2 w_{xx}, \quad 0 < x < L, \ t > 0,
w(0, t) = 0, \quad w(L, t) = 0,
w(x, 0) = 0.

By the maximum principle (cf. Theorem 2), we must have

w(x, t) \le 0 \implies u_1(x, t) \le u_2(x, t), \quad \text{for all } 0 \le x \le L, \ t \ge 0.

A similar argument with \tilde{w} = u_2 - u_1 yields

u_2(x, t) \le u_1(x, t) \quad \text{for all } 0 \le x \le L, \ t \ge 0.

Therefore, we have

u_1(x, t) = u_2(x, t) \quad \text{for all } 0 \le x \le L, \ t \ge 0,

and this completes the proof.
THEOREM 5. (Continuous dependence on the IC and BC) Let u₁(x, t) and u₂(x, t), respectively, be solutions of the problems

u_t = \alpha^2 u_{xx}; \quad u(0, t) = g_1(t), \ u(L, t) = h_1(t); \quad u(x, 0) = f_1(x); \qquad (12)
u_t = \alpha^2 u_{xx}; \quad u(0, t) = g_2(t), \ u(L, t) = h_2(t); \quad u(x, 0) = f_2(x),

in the region 0 ≤ x ≤ L, t ≥ 0. If

|f_1(x) - f_2(x)| \le \epsilon \quad \text{for all } x, \ 0 \le x \le L,


and

|g_1(t) - g_2(t)| \le \epsilon \ \text{and} \ |h_1(t) - h_2(t)| \le \epsilon \quad \text{for all } t, \ 0 \le t \le T,

for some ε ≥ 0, then we have

|u_1(x, t) - u_2(x, t)| \le \epsilon \quad \text{for all } x \text{ and } t, \text{ where } 0 \le x \le L, \ 0 \le t \le T.

Proof. Let v(x, t) = u₁(x, t) − u₂(x, t). Then v_t = α²v_{xx} and we obtain

|v(x, 0)| = |f_1(x) - f_2(x)| \le \epsilon, \quad 0 \le x \le L,
|v(0, t)| = |g_1(t) - g_2(t)| \le \epsilon, \quad 0 \le t \le T,
|v(L, t)| = |h_1(t) - h_2(t)| \le \epsilon, \quad 0 \le t \le T.

Note that the maximum of v on t = 0 (0 ≤ x ≤ L) and on x = 0 and x = L (0 ≤ t ≤ T) is not greater than ε. The minimum of v on these boundary lines is not less than −ε. Hence, the maximum/minimum principle yields

-\epsilon \le v(x, t) \le \epsilon \implies |u_1(x, t) - u_2(x, t)| = |v(x, t)| \le \epsilon.

Note: (i) We observe that when ε = 0, the problems in (12) are identical. We conclude that |u₁(x, t) − u₂(x, t)| ≤ 0, i.e., u₁ = u₂. This proves the uniqueness result.

(ii) Suppose a certain initial/boundary value problem has a unique solution. Then a small change in the initial and/or boundary conditions yields only a small change in the solution.

For the inhomogeneous equation (1), we have seen that the maximum or minimum values must be attained either on the initial line or the boundary lines and that they cannot be assumed in the interior. This result is known as a strong maximum or minimum principle.
THEOREM 6. (Strong maximum principle) Let u(x, t) be a solution of the heat equation in the rectangle R : 0 ≤ x ≤ L, 0 ≤ t ≤ T. If u(x, t) achieves its maximum at (x*, T), where 0 < x* < L, then u must be constant in R.

Practice Problems

1. Use the maximum/minimum principle to show that the solution u of the problem

u_t = u_{xx}, \quad 0 < x < \pi, \ t > 0,
u_x(0, t) = 0, \quad u_x(\pi, t) = 0, \quad t > 0,
u(x, 0) = \sin(x) + \tfrac{1}{2}\sin(2x), \quad 0 \le x \le \pi,

satisfies 0 \le u(x, t) \le \frac{3\sqrt{3}}{4}, \ t \ge 0.


2. Let Q = {(x, t) | 0 < x < π, 0 < t ≤ T}. Let u solve

u_t = u_{xx} \quad \text{in } Q,
u(0, t) = 0, \quad u(\pi, t) = 0, \quad 0 \le t \le T,
u(x, 0) = \sin^2(x), \quad 0 \le x \le \pi.

Use the maximum principle to show that 0 \le u(x, t) \le e^{-t}\sin x in Q.


Lecture 3

Method of Separation of Variables

Separation of variables is one of the oldest techniques for solving initial-boundary value problems (IBVP) and applies to problems where

• the PDE is linear and homogeneous (not necessarily with constant coefficients), and
• the BC are linear and homogeneous.

Basic Idea: Seek a solution of the form

u(x, t) = X(x)T(t),

where X(x) is some function of x and T(t) is some function of t. The solutions are simple because any temperature u(x, t) of this form will retain its basic shape for different values of time t. Separation of variables reduces the problem of solving the PDE to solving two ODEs: one second-order ODE involving the independent variable x and one first-order ODE involving t. These ODEs are then solved using the given initial and boundary conditions.
To illustrate this method, let us apply it to a specific problem. Consider the following IBVP:

PDE: u_t = \alpha^2 u_{xx}, \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (1)
BC: u(0, t) = 0, \quad u(L, t) = 0, \quad 0 < t < \infty, \qquad (2)
IC: u(x, 0) = f(x), \quad 0 \le x \le L. \qquad (3)

Step 1 (Reducing to the ODEs): Assume that equation (1) has solutions of the form

u(x, t) = X(x)T(t),

where X is a function of x alone and T is a function of t alone. Note that

u_t = X(x)T'(t) \quad \text{and} \quad u_{xx} = X''(x)T(t).

Now, substituting these expressions into u_t = α²u_{xx} and separating variables, we obtain

X(x)T'(t) = \alpha^2 X''(x)T(t) \implies \frac{T'(t)}{\alpha^2 T(t)} = \frac{X''(x)}{X(x)}.


Since a function of t can equal a function of x only when both functions are constant, we have

\frac{T'(t)}{\alpha^2 T(t)} = \frac{X''(x)}{X(x)} = c

for some constant c. This leads to the following two ODEs:

T'(t) - \alpha^2 c\, T(t) = 0, \qquad (4)
X''(x) - c\, X(x) = 0. \qquad (5)

Thus, the problem of solving the PDE (1) is now reduced to solving the two ODEs.
Step 2 (Applying the BCs): Since the product solutions u(x, t) = X(x)T(t) are to satisfy the BC (2), we have

u(0, t) = X(0)T(t) = 0 \quad \text{and} \quad X(L)T(t) = 0, \quad t > 0.

Thus, either T(t) = 0 for all t > 0, which implies that u(x, t) = 0, or X(0) = X(L) = 0. Ignoring the trivial solution u(x, t) = 0, we combine the boundary conditions X(0) = X(L) = 0 with the differential equation for X in (5) to obtain the BVP:

X''(x) - c\, X(x) = 0, \quad X(0) = X(L) = 0. \qquad (6)

There are three cases, c < 0, c > 0 and c = 0, which will be discussed below. It is convenient to set c = −λ² when c < 0 and c = λ² when c > 0, for some constant λ > 0.
Case 1 (c = λ² > 0 for some λ > 0): In this case, a general solution to the differential equation (5) is

X(x) = C_1 e^{\lambda x} + C_2 e^{-\lambda x},

where C₁ and C₂ are arbitrary constants. To determine C₁ and C₂, we use the BC X(0) = 0, X(L) = 0 to get

X(0) = C_1 + C_2 = 0, \qquad (7)
X(L) = C_1 e^{\lambda L} + C_2 e^{-\lambda L} = 0. \qquad (8)

From the first equation, it follows that C₂ = −C₁. The second equation then leads to

C_1(e^{\lambda L} - e^{-\lambda L}) = 0 \implies C_1(e^{2\lambda L} - 1) = 0 \implies C_1 = 0,

since e^{2λL} − 1 > 0 as λ > 0. Therefore, we have C₁ = 0 and hence C₂ = 0. Consequently X(x) = 0, and this implies u(x, t) = 0, i.e., there is no nontrivial solution to (5) for the case c > 0.

Case 2 (c = 0): The general solution to (5) is given by

X(x) = C_3 + C_4 x.

Applying the BC yields C₃ = C₄ = 0 and hence X(x) = 0. Again, u(x, t) = X(x)T(t) = 0. Thus, there is no nontrivial solution to (5) for c = 0.
Case 3 (c = −λ² < 0 for some λ > 0): The general solution to (5) is

X(x) = C_5\cos(\lambda x) + C_6\sin(\lambda x).

This time the BC X(0) = 0, X(L) = 0 gives the system

C_5 = 0,
C_5\cos(\lambda L) + C_6\sin(\lambda L) = 0.

As C₅ = 0, the system reduces to solving C₆ sin(λL) = 0. Hence, either sin(λL) = 0 or C₆ = 0. Now

\sin(\lambda L) = 0 \implies \lambda L = n\pi, \quad n = 0, \pm 1, \pm 2, ....

Therefore, (5) has a nontrivial solution (C₆ ≠ 0) when

\lambda L = n\pi \ \text{or} \ \lambda = \frac{n\pi}{L}, \quad n = 1, 2, 3, ....

Here, we exclude n = 0, since it makes c = 0. Therefore, the nontrivial solutions (eigenfunctions) X_n corresponding to the eigenvalues c = −(nπ/L)² are given by

X_n(x) = a_n\sin\Big(\frac{n\pi x}{L}\Big), \qquad (9)

where the a_n are arbitrary constants.


Step 3 (Applying the IC): Let us consider solving equation (4). The general solution to (4) with c = −λ² = −(nπ/L)² is

T_n(t) = b_n e^{-\alpha^2(n\pi/L)^2 t}.

Combining this with (9), the product solution u(x, t) = X(x)T(t) becomes

u_n(x, t) := X_n(x)T_n(t) = a_n\sin\Big(\frac{n\pi x}{L}\Big)\, b_n e^{-\alpha^2(n\pi/L)^2 t} = c_n e^{-\alpha^2(n\pi/L)^2 t}\sin\Big(\frac{n\pi x}{L}\Big), \quad n = 1, 2, 3, ...,

where c_n = a_n b_n is an arbitrary constant.

Since the problem is linear and homogeneous, an application of the superposition principle gives

u(x, t) = \sum_{n=1}^{\infty} u_n(x, t) = \sum_{n=1}^{\infty} c_n e^{-\alpha^2(n\pi/L)^2 t}\sin\Big(\frac{n\pi x}{L}\Big), \qquad (10)

which will be a solution to (1)-(3), provided the infinite series has the proper convergence behavior.

which will be a solution to (1)-(3), provided the innite series has the proper convergence
behavior.
Since the solution (10) is to satisfy IC (3), we must have
u(x, 0) =

cn sin

( nx )

n=1

= f (x),

0 < x < L.

Thus, if f (x) has an expansion of the form


f (x) =

cn sin

( nx )
L

n=1

which is called a Fourier sine series (FSS) with cn s are given by the formula

nx
2 L
cn =
f (x) sin(
)dx.
L 0
L

(11)

(12)

Then the innite series (10) with the coecients cn given by (12) is a solution to the
problem (1)-(3).
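The three steps above can be sketched end to end in a few lines (Python with NumPy). The data f(x) = x(L − x), α = 1, L = 1 and the truncation level N are assumptions made for the example; the coefficients (12) are computed by a midpoint-rule quadrature so that any reasonable f could be substituted:

```python
import numpy as np

# Truncated series solution (10) with coefficients (12), sample data f(x) = x(L - x).
alpha, L, N = 1.0, 1.0, 25
f = lambda x: x * (L - x)

# Midpoint rule for c_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx.
m = 4000
xq = (np.arange(m) + 0.5) * (L / m)
n = np.arange(1, N + 1)
cn = (2 / L) * (np.sin(np.outer(n, np.pi * xq / L)) * f(xq)).sum(axis=1) * (L / m)

def u(x, t):
    return np.sum(cn * np.exp(-(alpha * n * np.pi / L) ** 2 * t) * np.sin(n * np.pi * x / L))

assert abs(u(0.25, 0.0) - f(0.25)) < 1e-3   # at t = 0 the series reproduces f
assert abs(u(0.0, 0.1)) < 1e-12             # BC u(0, t) = 0 holds for every term
assert u(0.5, 1.0) < u(0.5, 0.0)            # the temperature decays in time
```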
EXAMPLE 1. Find the solution to the following IBVP:

u_t = 3u_{xx}, \quad 0 \le x \le \pi, \ 0 < t < \infty, \qquad (13)
u(0, t) = u(\pi, t) = 0, \quad 0 < t < \infty, \qquad (14)
u(x, 0) = 3\sin 2x - 6\sin 5x, \quad 0 \le x \le \pi. \qquad (15)

Solution. Comparing (13) with (1), we notice that α² = 3 and L = π. Using formula (10), we write a solution u(x, t) as

u(x, t) = \sum_{n=1}^{\infty} c_n e^{-3n^2 t}\sin(nx).

To determine the c_n, we use the IC (15) to get

u(x, 0) = 3\sin 2x - 6\sin 5x = \sum_{n=1}^{\infty} c_n\sin(nx).

Comparing the coefficients of like terms, we obtain

c_2 = 3 \quad \text{and} \quad c_5 = -6,

and the remaining c_n are zero. Hence, the solution to the problem (13)-(15) is

u(x, t) = c_2 e^{-3(2)^2 t}\sin(2x) + c_5 e^{-3(5)^2 t}\sin(5x) = 3e^{-12t}\sin(2x) - 6e^{-75t}\sin(5x).
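The closed form from Example 1 can be checked directly: the sketch below (Python with NumPy) verifies the PDE at one sample point by finite differences, as well as the BC and IC.

```python
import numpy as np

# u(x,t) = 3 e^{-12 t} sin 2x - 6 e^{-75 t} sin 5x from Example 1;
# it should satisfy u_t = 3 u_xx, the BC at x = 0, pi, and the IC.
u = lambda x, t: 3 * np.exp(-12 * t) * np.sin(2 * x) - 6 * np.exp(-75 * t) * np.sin(5 * x)

x, t, h = 1.1, 0.05, 1e-4
ut  = (u(x, t + h) - u(x, t - h)) / (2 * h)
uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
assert abs(ut - 3 * uxx) < 1e-3                              # PDE u_t = 3 u_xx
assert abs(u(0.0, 0.2)) < 1e-12 and abs(u(np.pi, 0.2)) < 1e-12   # BC
assert abs(u(0.4, 0.0) - (3 * np.sin(0.8) - 6 * np.sin(2.0))) < 1e-12  # IC
```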

Practice Problems

1. Solve the following IBVP:

u_t = 16u_{xx}, \quad 0 < x < 1, \ t > 0,
u(0, t) = 0, \quad u(1, t) = 0, \quad t > 0,
u(x, 0) = (1 - x)x, \quad 0 < x < 1.

2. Solve the following IBVP:

u_t = u_{xx}, \quad 0 < x < \pi, \ t > 0,
u_x(0, t) = u_x(\pi, t) = 0, \quad t > 0,
u(x, 0) = 1 - \sin x, \quad 0 < x < \pi.


MODULE 5: HEAT EQUATION

Lecture 4

Time-Independent Nonhomogeneous BC

The boundary conditions in the previous lecture were assumed to be homogeneous, which allowed us to use the superposition principle in forming general solutions of the PDE. We now turn to the situation where the BC are not both homogeneous, but are independent of the time variable t. The method of solution consists of the following steps:

Step 1: Find a particular solution of the PDE and BC.

Step 2: Find the solution of a related problem with homogeneous BC. Then, add this solution to the particular solution obtained in Step 1.

The procedure is illustrated in the following example.
PDE: u_t = \alpha^2 u_{xx}, \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (1)
BC: u(0, t) = a, \quad u(L, t) = b, \quad 0 < t < \infty, \qquad (2)
IC: u(x, 0) = f(x), \quad 0 \le x \le L, \qquad (3)

where a and b are arbitrary constants and f(x) is a given function.


Solution. Seek a particular solution u_p(x, t) of the form u_p(x, t) = cx + d, where c and d are chosen so that the BC are satisfied:

a = u_p(0, t) = c \cdot 0 + d = d,
b = u_p(L, t) = cL + d = cL + a
\implies d = a \ \text{and} \ c = (b - a)/L.

Thus,

u_p(x, t) = (b - a)x/L + a

solves the PDE with the BCs being satisfied.
Consider the related homogeneous problem (i.e., with homogeneous PDE and BC):

PDE: v_t = \alpha^2 v_{xx}, \quad 0 \le x \le L, \ 0 < t < \infty,
BC: v(0, t) = 0, \quad v(L, t) = 0, \quad 0 < t < \infty,
IC: v(x, 0) = f(x) - u_p(x, 0), \quad 0 \le x \le L.

If f(x) − u_p(x, 0) is of the form \sum_{n=1}^{\infty} c_n\sin(n\pi x/L), then its solution is given by

v(x, t) = \sum_{n=1}^{\infty} c_n e^{-\alpha^2(n\pi/L)^2 t}\sin(n\pi x/L). \qquad (4)

Now, set u(x, t) = u_p(x, t) + v(x, t). Then u(x, t) solves (1) by the superposition principle. Further, we have

BC: u(0, t) = u_p(0, t) + v(0, t) = a + 0 = a, \quad u(L, t) = u_p(L, t) + v(L, t) = b + 0 = b,
IC: u(x, 0) = u_p(x, 0) + v(x, 0) = u_p(x, 0) + f(x) - u_p(x, 0) = f(x).

REMARK 1. (i) It is necessary to subtract u_p(x, 0) from f(x) to form the initial condition for the related problem so that the initial condition (3) is satisfied.

(ii) Since any particular solution will do, for simplicity one should consider a particular solution of the form cx + d and find the constants using the BC. Note that the formula above only applies to the BC of (2); for other BC, we obtain other particular solutions. For example, if u_x(0, t) = a, u(L, t) = b, then u_p(x, t) = a(x - L) + b.
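The two-step recipe can be sketched as follows (Python with NumPy). The numbers a, b, L, α and the IC f(x) = u_p(x) + sin(πx/L), chosen so that the related problem has the single coefficient c₁ = 1, are assumptions made for the example:

```python
import numpy as np

# Step 1: steady-state particular solution; Step 2: add the decaying transient.
a, b, L, alpha = 1.0, 4.0, 2.0, 1.0
up = lambda x: (b - a) * x / L + a                     # u_p(x) = (b-a)x/L + a
v  = lambda x, t: np.exp(-(np.pi * alpha / L) ** 2 * t) * np.sin(np.pi * x / L)
u  = lambda x, t: up(x) + v(x, t)                      # u = u_p + v

assert u(0.0, 1.0) == a and abs(u(L, 1.0) - b) < 1e-12           # BC hold for all t
assert abs(u(0.7, 0.0) - (up(0.7) + np.sin(np.pi * 0.7 / L))) < 1e-12  # IC at t = 0
assert abs(u(0.7, 50.0) - up(0.7)) < 1e-12             # u -> steady state as t grows
```

The last assertion illustrates the point made below: the transient v decays, so u approaches the steady-state solution u_p.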
EXAMPLE 2.

PDE: u_t = 2u_{xx}, \quad 0 \le x \le 1, \ 0 < t < \infty, \qquad (5)
BC: u_x(0, t) = 1, \quad u(1, t) = -2, \quad 0 < t < \infty, \qquad (6)
IC: u(x, 0) = x + \cos^2(3\pi x/4) - 5/2. \qquad (7)

Solution. Take u_p(x, t) = cx + d. The first BC u_x(0, t) = 1 yields c = 1, while u_p(1, t) = 1 + d = −2 yields d = −3 by the second BC. Thus, u_p(x, t) = x − 3. The related homogeneous problem is

v_t = 2v_{xx}, \quad 0 \le x \le 1, \ 0 < t < \infty,
v_x(0, t) = 0, \quad v(1, t) = 0, \quad 0 < t < \infty,
v(x, 0) = \big[x + \cos^2(3\pi x/4) - 5/2\big] - (x - 3) = \tfrac{1}{2} + \tfrac{1}{2}\cos(3\pi x/2) - 5/2 + 3 = 1 + \tfrac{1}{2}\cos(3\pi x/2).

The eigenfunctions of this related problem are \cos((2n+1)\pi x/2), n = 0, 1, 2, .... The term \tfrac{1}{2}\cos(3\pi x/2) is already of this form (n = 1), while the constant 1 must be expanded in these eigenfunctions,

1 = \sum_{n=0}^{\infty}\frac{4(-1)^n}{(2n+1)\pi}\cos\big((2n+1)\pi x/2\big) \quad \text{on } [0, 1].

Hence the solution of the related homogeneous problem is

v(x, t) = \tfrac{1}{2}e^{-9\pi^2 t/2}\cos(3\pi x/2) + \sum_{n=0}^{\infty}\frac{4(-1)^n}{(2n+1)\pi}\,e^{-(2n+1)^2\pi^2 t/2}\cos\big((2n+1)\pi x/2\big).

Then

u(x, t) = x - 3 + v(x, t).

From the above examples, we notice that the particular solution is time-independent, i.e., in steady state.
Note: Any steady-state solution of the heat equation u_t = α²u_{xx} is of the form cx + d.

The solutions u(x, t) are sums of a steady-state particular solution of the PDE and BC and the solution v(x, t) of the related homogeneous problem, which is transient in the sense that v(x, t) → 0 as t → ∞. Thus

u(x, t) = \underbrace{u_p(x, t)}_{\text{steady-state solution}} + \underbrace{v(x, t)}_{\text{transient solution}} \ \to\ u_p(x, t), \quad \text{as } t \to \infty.

That is, the solution u approaches the steady-state solution as t → ∞. However, for some types of BC there are no steady-state particular solutions, as illustrated in the following example.
EXAMPLE 3. Consider the problem

PDE: u_t = \alpha^2 u_{xx}, \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (8)
BC: u_x(0, t) = a, \quad u_x(L, t) = b, \qquad (9)
IC: u(x, 0) = f(x), \qquad (10)

where a and b are constants, and f(x) is a given function.

Solution. Let u_p(x, t) = cx + d. Then, using the BC, we obtain c = a and c = b, which is impossible unless a = b.

NOTE: Observe that the boundary conditions state that heat is being drained out of the end x = 0 at a rate u_x(0, t) = a and heat is flowing into the end x = L at a rate u_x(L, t) = b. If b > a, then heat energy is being added to the rod at a constant rate. If b < a, the rod loses heat at a constant rate. Thus, we cannot expect a steady-state solution of the PDE and BC unless a = b.
The simplest form for a particular solution that reflects the fact that the heat energy is changing at a constant rate is

u_p(x, t) = ct + h(x),

where c is a constant and h(x) is a function of x. The constant c and the function h(x) can be determined from the PDE and BC. Thus,

c = (u_p)_t = \alpha^2 (u_p)_{xx} = \alpha^2 h''(x)
\implies h''(x) = \frac{c}{\alpha^2}
\implies h(x) = \frac{c}{2\alpha^2}x^2 + dx + e,

for constants d and e. Using the BC, we note that

a = (u_p)_x(0, t) = h'(0) = d \implies d = a.


Similarly,

b = (u_p)_x(L, t) = h'(L) = \frac{c}{\alpha^2}L + d \implies c = \frac{(b - a)\alpha^2}{L}.

Thus, a particular solution (taking e = 0, for simplicity) is obtained as

u_p(x, t) = \frac{(b - a)\alpha^2}{L}t + \frac{(b - a)}{2L}x^2 + ax = \frac{(b - a)}{L}\Big[\alpha^2 t + \frac{x^2}{2}\Big] + ax. \qquad (11)

The related homogeneous problem is

v_t = \alpha^2 v_{xx}, \quad 0 \le x \le L, \ t > 0,
v_x(0, t) = 0, \quad v_x(L, t) = 0, \quad 0 < t < \infty,
v(x, 0) = f(x) - u_p(x, 0) = f(x) - \Big[\frac{(b - a)}{2L}x^2 + ax\Big].

If f(x) − u_p(x, 0) is of the form \sum_{n=0}^{\infty} c_n\cos(n\pi x/L), we have the solution

u(x, t) = u_p(x, t) + v(x, t) = u_p(x, t) + \sum_{n=0}^{\infty} c_n e^{-\alpha^2(n\pi/L)^2 t}\cos(n\pi x/L),

where u_p(x, t) is given by (11).
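The particular solution (11) can be checked numerically. The sketch below (Python with NumPy; the values of a, b, L and α² are arbitrary sample numbers) verifies that u_p satisfies both the PDE and the Neumann BC (9):

```python
import numpy as np

# u_p = (b-a) alpha^2 t / L + (b-a) x^2 / (2L) + a x  -- formula (11) with e = 0.
a, b, L, alpha2 = 1.0, 3.0, 2.0, 0.7
up = lambda x, t: (b - a) * alpha2 * t / L + (b - a) * x**2 / (2 * L) + a * x

h = 1e-5
x, t = 0.6, 0.4
upt  = (up(x, t + h) - up(x, t - h)) / (2 * h)
upxx = (up(x + h, t) - 2 * up(x, t) + up(x - h, t)) / h**2
assert abs(upt - alpha2 * upxx) < 1e-4              # solves u_t = alpha^2 u_xx

upx = lambda x, t: (up(x + h, t) - up(x - h, t)) / (2 * h)
assert abs(upx(0.0, t) - a) < 1e-8                  # u_x(0, t) = a
assert abs(upx(L, t) - b) < 1e-8                    # u_x(L, t) = b
```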

Practice Problems

1. Solve the following IBVP:

u_t = u_{xx}, \quad 0 < x < L, \ t > 0,
u(0, t) = a, \quad u(L, t) = b, \quad t > 0,
u(x, 0) = a + bx, \quad 0 \le x \le L.

2. Solve the following IBVP:

u_t = 4u_{xx}, \quad 0 < x < \pi, \ t > 0,
u(0, t) = 5, \quad u(\pi, t) = 10, \quad t > 0,
u(x, 0) = \sin x - \sin 3x, \quad 0 < x < \pi.

MODULE 5: HEAT EQUATION

Lecture 5

Time-Dependent BC

In this lecture, we shall learn how to solve the inhomogeneous heat equation

u_t - \alpha^2 u_{xx} = h(x, t)

with time-dependent BC. To begin with, let us consider the following IBVP with time-dependent BC:

PDE: u_t = \alpha^2 u_{xx}, \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (1)
BC: u(0, t) = a(t), \quad u(L, t) = b(t), \quad 0 < t < \infty, \qquad (2)
IC: u(x, 0) = f(x). \qquad (3)

In the previous lecture, we discussed the solution of this problem in the case where a(t) and b(t) are constant functions (independent of t) and f(x) is a suitable given function. Notice that the function w(x, t) defined by

w(x, t) = \Big[\frac{b(t) - a(t)}{L}\Big]x + a(t)

satisfies the BC (2). However, w(x, t) will not satisfy the PDE (1) unless a(t) and b(t) are constant. In fact,

w_t - \alpha^2 w_{xx} = \Big[\frac{b'(t) - a'(t)}{L}\Big]x + a'(t).

We now attempt to find a solution of the problem (1)-(3) of the form

u(x, t) = w(x, t) + v(x, t),

where v(x, t) satisfies

v_t - \alpha^2 v_{xx} = u_t - \alpha^2 u_{xx} - (w_t - \alpha^2 w_{xx}) = -(w_t - \alpha^2 w_{xx}) = -[b'(t) - a'(t)]x/L - a'(t).

Further,

v(0, t) = u(0, t) - w(0, t) = a(t) - a(t) = 0,
v(L, t) = b(t) - b(t) = 0.


Thus, the function v(x, t) must satisfy the following related problem with homogeneous BC, but an inhomogeneous PDE:

PDE: v_t - \alpha^2 v_{xx} = -[b'(t) - a'(t)]x/L - a'(t), \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (4)
BC: v(0, t) = 0, \quad v(L, t) = 0, \quad 0 < t < \infty, \qquad (5)
IC: v(x, 0) = u(x, 0) - w(x, 0) = f(x) - [b(0) - a(0)]x/L - a(0). \qquad (6)

Note: When a(t) and b(t) are constants, the PDE (4) is homogeneous. In the general case, v(x, t) satisfies a nonhomogeneous PDE.

The problem (4)-(6) is a special case of the following general problem:

PDE: v_t - \alpha^2 v_{xx} = h(x, t), \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (7)
BC: v(0, t) = 0, \quad v(L, t) = 0, \quad 0 < t < \infty, \qquad (8)
IC: v(x, 0) = g(x). \qquad (9)

The solution procedure for the above problem was given by the French mathematician and physicist Jean-Marie-Constant Duhamel (1797-1872). The method is known as Duhamel's principle.

Suppose u₁ and u₂ are solutions of the following problems:

(P1): \quad (u_1)_t - \alpha^2 (u_1)_{xx} = 0, \quad u_1(0, t) = 0, \ u_1(L, t) = 0, \quad u_1(x, 0) = g(x);

(P2): \quad (u_2)_t - \alpha^2 (u_2)_{xx} = h(x, t), \quad u_2(0, t) = 0, \ u_2(L, t) = 0, \quad u_2(x, 0) = 0. \qquad (10)

It is easy to check that v(x, t) = u₁(x, t) + u₂(x, t) solves (7)-(9). The solution u₁ to the problem (P1) is known (cf. Lecture 4 in Module 5). It remains only to solve the problem (P2) for u₂.
The above observation leads to the following result (cf. [1]).

THEOREM 1. A solution to the problem (1)-(3) is given by

u(x, t) = w(x, t) + u_1(x, t) + u_2(x, t),

where

w(x, t) = \Big[\frac{b(t) - a(t)}{L}\Big]x + a(t)

is the particular solution of the BC, u₁(x, t) solves (P1) with g(x) = f(x) − w(x, 0), and u₂(x, t) solves (P2) with h(x, t) = −(w_t − α²w_{xx}) = −[b′(t) − a′(t)]x/L − a′(t).


Duhamel's principle

The basic idea of Duhamel's principle is to transfer the source term h(x, t) to the initial condition of related problems. This is done in the following manner. The function defined by

u(x, t) = \int_0^t v(x, t; s)\,ds

is a solution of (7)-(9) provided v(x, t; s) is a solution of the problem

PDE: v_t = \alpha^2 v_{xx}, \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (11)
BC: v(0, t; s) = 0, \quad v(L, t; s) = 0, \quad 0 < t < \infty, \qquad (12)
IC: v(x, s; s) = h(x, s). \qquad (13)

Note that both the PDE and BC are homogeneous. We use the translation in time

u(x, t) = \int_0^t v(x, t - s; s)\,ds

to obtain an IC at t = 0 instead of at t = s. Rewriting (11)-(13) in terms of v, we now reduce the problem to the following associated problem with IC at t = 0:

PDE: v_t = \alpha^2 v_{xx}, \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (14)
BC: v(0, t; s) = 0, \quad v(L, t; s) = 0, \quad 0 < t < \infty, \qquad (15)
IC: v(x, 0; s) = h(x, s). \qquad (16)

To illustrate the procedure, let us consider the following example.

EXAMPLE 2. Solve

PDE: u_t - \alpha^2 u_{xx} = t\sin(x), \quad 0 \le x \le \pi, \ 0 < t < \infty, \qquad (17)
BC: u(0, t) = 0, \quad u(\pi, t) = 0, \quad 0 < t < \infty, \qquad (18)
IC: u(x, 0) = 0. \qquad (19)

Solution. Here h(x, t) = t sin(x). We solve the related problem:

PDE: v_t = \alpha^2 v_{xx}, \quad 0 \le x \le \pi, \ 0 < t < \infty, \qquad (20)
BC: v(0, t; s) = 0, \quad v(\pi, t; s) = 0, \quad 0 < t < \infty, \qquad (21)
IC: v(x, 0; s) = h(x, s) = s\sin(x). \qquad (22)


Treating s as a constant, we easily obtain v(x, t; s) = s e^{-\alpha^2 t}\sin(x). Note that

u(x, t) = \int_0^t v(x, t - s; s)\,ds = \int_0^t s e^{-\alpha^2(t - s)}\sin(x)\,ds
        = e^{-\alpha^2 t}\sin(x)\int_0^t s e^{\alpha^2 s}\,ds = \Big[(\alpha^2)^{-1} t + (\alpha^2)^{-2}\big(e^{-\alpha^2 t} - 1\big)\Big]\sin(x),

which satisfies (17)-(19).
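The closed form just obtained can be verified directly. In the sketch below (Python with NumPy), α = 1 is an assumed sample value, so the solution reduces to u(x, t) = [t + e^{−t} − 1] sin x; the PDE, BC and IC are all checked:

```python
import numpy as np

# With alpha = 1 the Duhamel solution of u_t - u_xx = t sin x,
# u(0,t) = u(pi,t) = 0, u(x,0) = 0, is u = [t + e^{-t} - 1] sin x.
u = lambda x, t: (t + np.exp(-t) - 1) * np.sin(x)

x, t, h = 1.2, 0.8, 1e-4
ut  = (u(x, t + h) - u(x, t - h)) / (2 * h)
uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
assert abs(ut - uxx - t * np.sin(x)) < 1e-6     # PDE with source t sin x
assert abs(u(0.0, t)) < 1e-12                   # BC at x = 0
assert abs(u(x, 0.0)) < 1e-12                   # IC u(x, 0) = 0
```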


THEOREM 3. (Duhamel's principle, [1]) Let h(x, t) be a twice continuously differentiable function in 0 ≤ x ≤ L, t ≥ 0. Assume that, for each s ≥ 0, the IBVP

PDE: v_t = \alpha^2 v_{xx}, \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (23)
BC: v(0, t; s) = 0, \quad v(L, t; s) = 0, \quad 0 < t < \infty, \qquad (24)
IC: v(x, s; s) = h(x, s) \qquad (25)

has a solution v(x, t; s), where v(x, t; s), v_t(x, t; s) and v_{xx}(x, t; s) are continuous (in all three variables). Then the unique solution of the problem

PDE: u_t - \alpha^2 u_{xx} = h(x, t), \quad 0 \le x \le L, \ 0 < t < \infty, \qquad (26)
BC: u(0, t) = 0, \quad u(L, t) = 0, \quad 0 < t < \infty, \qquad (27)
IC: u(x, 0) = 0 \qquad (28)

is given by

u(x, t) = \int_0^t v(x, t; s)\,ds. \qquad (29)

Proof. Note that the function u(x, t) dened by


t
u(x, t) =
v(x, t; s)ds
0

satises the IC u(x, 0) = 0 and the BC u(0, t) = u(L, t) = 0. Observe that v(x, t; s)
satises the BC (24). Now, with g(t, s) = v(x, t; s), where x xed, we have
t
ut (x, t) = v(x, t; t) +
vt (x, t; s)ds
0
t
= h(x, t) +
2 vxx (x, t; s)ds.
0

Apply Leibnizs rule to obtain


ut (x, t) = h(x, t) + 2 uxx (x, t).
By the hypothese on v(x, t; s), it follows that u(x, t) is in C 2 . For the uniqueness, see
Theorem 4 (of Lecture 2 of Module 5).


REMARK 4. The solution u in (29) may be written as

u(x, t) = ∫_0^t ṽ(x, t − s; s) ds,

where ṽ solves (14)-(16).


EXAMPLE 5. Solve the IBVP:

ut − α² uxx = t[sin(2πx) + 2x], 0 ≤ x ≤ 1, 0 < t < ∞,
u(0, t) = 1, u(1, t) = t², 0 < t < ∞,
u(x, 0) = 1 + sin(πx) − x.

Solution. The function that satisfies the BC is

w(x, t) = (t² − 1)x + 1.

Then u(x, t) = w(x, t) + v(x, t), where v(x, t) solves the related problem with homogeneous
BC:

vt − α² vxx = ut − α² uxx − (wt − α² wxx) = t sin(2πx),
v(0, t) = u(0, t) − w(0, t) = 0,
v(1, t) = u(1, t) − w(1, t) = 0,
v(x, 0) = u(x, 0) − w(x, 0) = sin(πx).

Now, v = u1 + u2, where u1 and u2, respectively, solve

(a) (u1)t − α²(u1)xx = 0,
    u1(0, t) = 0, u1(1, t) = 0,
    u1(x, 0) = sin(πx);

(b) (u2)t − α²(u2)xx = t sin(2πx),
    u2(0, t) = 0, u2(1, t) = 0,
    u2(x, 0) = 0.

We know that u1(x, t) = e^{−π²α²t} sin(πx). The function u2 is found via Duhamel's
principle: it is given by

u2(x, t) = ∫_0^t v(x, t − s; s) ds,

where v solves the problem

vt = α² vxx,
v(0, t; s) = 0, v(1, t; s) = 0,
v(x, 0; s) = s sin(2πx).

We know that v(x, t; s) = s e^{−4π²α²t} sin(2πx). Thus,

u2(x, t) = ∫_0^t s e^{−4π²α²(t−s)} sin(2πx) ds
         = e^{−4π²α²t} sin(2πx) ∫_0^t s e^{4π²α²s} ds
         = (4π²α²)^{−2} [4π²α²t + e^{−4π²α²t} − 1] sin(2πx).

The solution is then given by

u(x, t) = w(x, t) + u1(x, t) + u2(x, t).
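The assembled solution can be verified symbolically. The following sketch (an addition, not part of the original text; alpha2 stands for α²) checks the PDE, BC and IC of Example 5:

```python
import sympy as sp

x, t, a2 = sp.symbols('x t alpha2', positive=True)

# The three pieces of the solution of Example 5 (alpha2 stands for alpha^2)
w = (t**2 - 1) * x + 1
u1 = sp.exp(-sp.pi**2 * a2 * t) * sp.sin(sp.pi * x)
k = 4 * sp.pi**2 * a2
u2 = k**-2 * (k * t + sp.exp(-k * t) - 1) * sp.sin(2 * sp.pi * x)
u = w + u1 + u2

# Check the PDE u_t - alpha2*u_xx = t[sin(2 pi x) + 2x] ...
residual = sp.diff(u, t) - a2 * sp.diff(u, x, 2) - t * (sp.sin(2*sp.pi*x) + 2*x)
assert sp.simplify(residual) == 0

# ... the BC u(0, t) = 1, u(1, t) = t^2, and the IC u(x, 0) = 1 + sin(pi x) - x
assert sp.simplify(u.subs(x, 0) - 1) == 0
assert sp.simplify(u.subs(x, 1) - t**2) == 0
assert sp.simplify(u.subs(t, 0) - (1 + sp.sin(sp.pi*x) - x)) == 0
```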
REMARK 6. Duhamel's principle is also applicable to problems with PDE ut − α² uxx =
h(x, t) and homogeneous BC of the forms:

ux(0, t) = 0, u(L, t) = 0;    u(0, t) = 0, ux(L, t) = 0;    ux(0, t) = 0, ux(L, t) = 0.

Practice Problems

1. Solve the following IBVP:

   ut = α² uxx + cos(3t), 0 < x < 1, t > 0,
   ux(0, t) = 0, ux(1, t) = 1, t > 0,
   u(x, 0) = cos(πx) + (1/2)x² − x, 0 < x < 1.

2. Solve the following IBVP:

   ut = 4uxx + e^{−t} sin(x/2) sin(t), 0 < x < π, t > 0,
   u(0, t) = cos(t), u(π, t) = 0, t > 0,
   u(x, 0) = 1, 0 < x < π.

Module 6: The Wave Equation

In this module we shall study the one-dimensional wave equation, which describes transverse vibrations of an elastic string. This module is organized as follows. In the first
lecture, we shall discuss the mathematical formulation of this model using Newton's second law of motion. Further, we shall also establish the uniqueness of solutions by
proving that the energy is conserved. In the second lecture, we shall derive D'Alembert's
formula for the solution of initial value problems for the infinite string. The third lecture
deals with some special cases of D'Alembert's formula and the semi-infinite string problem. The fourth lecture is devoted to solving the initial and boundary value problem for a
string with fixed ends. Finally, in the last lecture, we shall discuss Duhamel's principle
for inhomogeneous wave equations.

MODULE 6: THE WAVE EQUATION

Lecture 1

Mathematical Formulation and Uniqueness Result

We begin by studying the one-dimensional wave equation, which describes the transverse
vibrations of a string. Consider the small vibrations of a string that is fastened at each
end (see Fig. 6.1). We now make the following assumptions:

The string is made of a homogeneous material (i.e., the mass per unit length of the
string is constant).

There is no effect of gravity and external forces.

The vibration takes place in a plane.

The mathematical model equation under these assumptions describes small vibrations of
the string. Consider the forces acting on a small portion PQ of the string. Since the string

Figure 6.1: Vibrations of a string problem

does not offer resistance to bending, the tension is tangential to the curve of the string at
each point. Let T1 and T2, respectively, be the tensions at the endpoints P and Q. Since
there is no motion in the horizontal direction, the horizontal components of the tension
must be constant. From Fig. 6.1, we obtain

T1 cos θ1 = T2 cos θ2 = T = constant.                           (1)

Let −T1 sin θ1 and T2 sin θ2 be the components of T1 and T2, respectively, in the vertical
direction. The minus sign indicates that the component at P is directed downward. By
Newton's second law, the resultant of these two forces is equal to the mass ρΔx of the
portion times the acceleration utt, evaluated at some point between x and x + Δx. If ρ is
the mass of the undeflected string per unit length and Δx is the length of the portion of
the undeflected string, then we have

T2 sin θ2 − T1 sin θ1 = ρΔx utt.


In view of (1), we obtain

(T2 sin θ2)/(T2 cos θ2) − (T1 sin θ1)/(T1 cos θ1) = tan θ2 − tan θ1 = (ρΔx/T) utt.   (2)

Note that tan θ1 and tan θ2 are the slopes of the curve of the string at x and x + Δx, i.e.,

tan θ1 = (ux)_P,  tan θ2 = (ux)_Q.

Here, partial derivatives are used because u also depends on t. Dividing (2) by Δx, we
have

(1/Δx)[ux(x + Δx, t) − ux(x, t)] = (ρ/T) utt.

Letting Δx → 0, we obtain

utt = c² uxx,                                                   (3)

where c² = T/ρ.

NOTE: The notation c² (instead of c) for the physical constant T/ρ has been chosen to
indicate that this constant is positive. The constant c² depends on the density and tension
of the string.
As the problem is linear, it is enough to prove the uniqueness of the solution. The
uniqueness result is proved in the following theorem.

THEOREM 1. Let u1(x, t) and u2(x, t) be two solutions of

PDE: utt = c² uxx, 0 ≤ x ≤ L, −∞ < t < ∞,
BC:  u(0, t) = a(t), u(L, t) = b(t),
IC:  u(x, 0) = f(x), ut(x, 0) = g(x).

Then u1(x, t) = u2(x, t) for all 0 ≤ x ≤ L, −∞ < t < ∞.


Proof. Let v(x, t) = u1(x, t) − u2(x, t). Note that v satisfies the homogeneous problem

vtt = c² vxx, 0 ≤ x ≤ L, −∞ < t < ∞,
v(0, t) = 0, v(L, t) = 0,
v(x, 0) = 0, vt(x, 0) = 0.

We need to show that v(x, t) = 0 for all t. We write

v(x, t) = v(x, t) − v(x, 0) = ∫_0^t vt(x, τ) dτ.                (4)

We now claim that vt(x, t) = 0 for all x in [0, L] and for all t. Construct the energy
function

H(t) = ∫_0^L {c² vx²(x, t) + vt²(x, t)} dx.                     (5)

Differentiating with respect to t and using vtt = c² vxx, we obtain

H′(t) = ∫_0^L {2c² vx vxt + 2vt vtt} dx
      = 2c² ∫_0^L {vx vxt + vt vxx} dx
      = 2c² ∫_0^L ∂/∂x (vx vt) dx
      = 2c² [vx(x, t) vt(x, t)]_{x=0}^{x=L}
      = 0,

where in the last step we have used vt(0, t) = d/dt v(0, t) = 0 and, similarly, vt(L, t) = 0.
Thus,

H′(t) = 0 ⟹ H(t) = C,

where C is a constant. Since H(0) = 0, we have C = 0 and, hence, H(t) = 0. Thus, (5)
becomes

∫_0^L {c² vx²(x, t) + vt²(x, t)} dx = 0 ⟹ vt(x, t) = 0 for all x ∈ [0, L], t ∈ R.

In view of (4), we obtain

v(x, t) = ∫_0^t vt(x, τ) dτ = 0 ⟹ u1(x, t) = u2(x, t).

This completes the proof.
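The key step of the proof is that the energy H(t) in (5) is constant in time for any solution with fixed ends. A quick numerical sketch (an addition, not from the text; the standing-wave solution u(x, t) = sin(πx/L) cos(πct/L) and the parameter values are illustrative choices):

```python
import numpy as np

# Exact fixed-end solution of utt = c^2 uxx: u = sin(pi x/L) cos(pi c t/L)
L, c = 1.0, 2.0
x = np.linspace(0.0, L, 2001)

def trapezoid(f, x):
    # simple trapezoidal quadrature
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def energy(t):
    vx = (np.pi / L) * np.cos(np.pi * x / L) * np.cos(np.pi * c * t / L)
    vt = -(np.pi * c / L) * np.sin(np.pi * x / L) * np.sin(np.pi * c * t / L)
    # H(t) = integral over [0, L] of c^2 vx^2 + vt^2, cf. (5)
    return trapezoid(c**2 * vx**2 + vt**2, x)

H = np.array([energy(t) for t in np.linspace(0.0, 3.0, 20)])
assert np.allclose(H, H[0], rtol=1e-8)  # energy is conserved
```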


Lecture 2

The Infinite String Problem

In this lecture, we shall show that the solution of the wave equation

utt = c² uxx

can be obtained immediately via a suitable transformation of the independent variables.
We shall derive D'Alembert's formula for the solution of the wave equation for an infinite
string (−∞ < x < ∞) with IC u(x, 0) = f(x) and ut(x, 0) = g(x).
Consider the following IVP:

PDE: utt = c² uxx, −∞ < x < ∞, t ≥ 0,                           (1)
IC:  u(x, 0) = f(x) (initial displacement),                     (2)
     ut(x, 0) = g(x) (initial velocity).

Step 1. (Transforming to its canonical form): Introducing the transformation

ξ = x + ct,  η = x − ct,

we note that

ux = uξ ξx + uη ηx = uξ + uη,
uxx = (uξ + uη)x
    = (uξ + uη)ξ ξx + (uξ + uη)η ηx
    = uξξ + 2uξη + uηη.

Similarly,

utt = c²(uξξ − 2uξη + uηη).

Substituting the expressions for uxx and utt in utt = c² uxx yields

uξη = 0,                                                        (3)

which is known as the canonical form of (1).


Step 2. (Solving the transformed equation (3)): Integrating (3) with respect to ξ gives
uη(ξ, η) = h(η) for some function h of η alone; integrating once more, with respect to η,
we obtain the general solution of uξη = 0:

u(ξ, η) = φ(η) + ψ(ξ),                                          (4)

where φ(η) is an antiderivative of h(η), and φ(η), ψ(ξ) are arbitrary functions of η and ξ,
respectively.

Step 3. (Transforming back to the original variables x and t): Substituting ξ = x + ct
and η = x − ct in (4), we get

u(x, t) = φ(x − ct) + ψ(x + ct).                                (5)

This is the general solution of the wave equation. We may interpret (5) as the sum of
two waves moving in opposite directions with velocity c.

Step 4. (Applying IC to the general solution): In order to solve the IVP (1)-(2), the
general solution u(x, t) is required to satisfy the two initial conditions

u(x, 0) = f(x),  ut(x, 0) = g(x).

These conditions lead to the following equations:

φ(x) + ψ(x) = f(x),                                             (6)
−cφ′(x) + cψ′(x) = g(x).                                        (7)

Integrating (7) from x0 to x, we obtain

−cφ(x) + cψ(x) = ∫_{x0}^{x} g(τ) dτ + K.                        (8)

Solving for φ(x) and ψ(x) from (6) and (8) (the constant K cancels when forming u and
may be dropped), we obtain

φ(x) = (1/2) f(x) − (1/2c) ∫_{x0}^{x} g(τ) dτ,                  (9)
ψ(x) = (1/2) f(x) + (1/2c) ∫_{x0}^{x} g(τ) dτ.                  (10)

Thus, the solution to the IVP (1)-(2) is given by

u(x, t) = (1/2)[f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ.   (11)

The equation (11) is known as D'Alembert's solution to the IVP (1)-(2). This formula is
of great interest in itself, and it avoids the problem of convergence of infinite series in the
Fourier series approach.
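D'Alembert's formula can be verified symbolically. In the sketch below (an addition, not from the text), G denotes an antiderivative of g, so the integral in (11) equals G(x + ct) − G(x − ct):

```python
import sympy as sp

x, t = sp.symbols('x t')
c = sp.symbols('c', positive=True)
f = sp.Function('f')
G = sp.Function('G')  # antiderivative of g: the integral in (11) is G(x+ct) - G(x-ct)

u = (f(x - c*t) + f(x + c*t)) / 2 + (G(x + c*t) - G(x - c*t)) / (2*c)

# u solves the wave equation (1) ...
assert sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)) == 0

# ... with u(x, 0) = f(x) and u_t(x, 0) = G'(x) = g(x), i.e. the IC (2)
assert sp.simplify(u.subs(t, 0) - f(x)) == 0
ut0 = sp.diff(u, t).subs(t, 0).doit()
assert sp.simplify(ut0 - sp.Derivative(G(x), x).doit()) == 0
```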


REMARK 1. D'Alembert's formula yields a number of properties of solutions of the wave
problem for the infinite string.

Disturbances propagate with speed c.
The value u(x0, t0) depends only on the values of g in the interval [x0 − ct0, x0 + ct0]
and on the values of f at the endpoints of this interval. Geometrically, this is the
interval cut out by the characteristic lines that pass through the point (x0, t0). The
interval [x0 − ct0, x0 + ct0] is called the interval of dependence for the point (x0, t0)
(since u(x0, t0) depends only on the values u(x, 0) and ut(x, 0) for x in this interval).

Odd initial data yield odd solutions and even initial data yield even solutions.
If f(x) and g(x) are odd, then u(x, t) is odd in the x-variable, since

u(−x, t) = (1/2)[f(−(x − ct)) + f(−(x + ct))] + (1/2c) ∫_{−(x+ct)}^{−(x−ct)} g(r) dr
         = −(1/2)[f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(−s) ds   (substituting r = −s)
         = −(1/2)[f(x − ct) + f(x + ct)] − (1/2c) ∫_{x−ct}^{x+ct} g(s) ds
         = −u(x, t).

Similarly, we can show that if f(x) and g(x) are even, then u(x, t) is even, i.e.,
u(−x, t) = u(x, t).

Periodic initial data yield periodic solutions.
If f(x + 2L) = f(x) and g(x + 2L) = g(x), then u(x + 2L, t) = u(x, t). That is, if f
and g are periodic of period 2L, then u(x, t) is also periodic of period 2L in x. This
follows easily from D'Alembert's formula. This fact is useful in dealing with finite
strings.

It can be shown that if f(x) and g(x) are periodic of period 2L and

∫_{−L}^{L} g(x) dx = 0,

then u(x, t) is not only periodic in x of period 2L, but also periodic in t of period
2L/c.


Special cases of D'Alembert's formula:

CASE I. (Initial velocity zero) Suppose the string has IC

u(x, 0) = f(x),  ut(x, 0) = 0.

The D'Alembert solution is

u(x, t) = (1/2)[f(x − ct) + f(x + ct)].

Thus, the solution u at a point (x0, t0) can be interpreted as the average of the initial
displacement f(x) at the points (x0 − ct0, 0) and (x0 + ct0, 0), found by backtracking the
characteristic curves x − ct = x0 − ct0 and x + ct = x0 + ct0.

CASE II. (Initial displacement zero) Suppose the string has the following IC:

u(x, 0) = 0,  ut(x, 0) = g(x).

In this case, the solution is

u(x, t) = (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ.

The solution u at (x, t) may be interpreted as integrating the initial velocity between
x − ct and x + ct on the initial line t = 0.
Let us consider the following examples.
EXAMPLE 2. (Zero initial velocity) Solve the IVP:

PDE: utt = c² uxx, −∞ < x, t < ∞,
IC:  u(x, 0) = sin(x), ut(x, 0) = 0.

Solution: Applying D'Alembert's formula (11) with f(x) = sin(x) and g(x) = 0, we
obtain

u(x, t) = (1/2)[sin(x − ct) + sin(x + ct)].

EXAMPLE 3. (Zero initial displacement) Consider the IVP:

PDE: utt = c² uxx, −∞ < x, t < ∞,
IC:  u(x, 0) = 0, ut(x, 0) = sin(x).

Solution: Here the string is initially straight (u(x, 0) = 0), but has a variable velocity
at t = 0 (ut(x, 0) = sin(x)). Thus, applying D'Alembert's formula (11) with f(x) = 0 and
g(x) = sin(x), we obtain

u(x, t) = (1/2c) ∫_{x−ct}^{x+ct} sin(τ) dτ = (1/2c)[cos(x − ct) − cos(x + ct)].

Practice Problems

1. Solve the following IVP:

   utt = 9uxx, −∞ < x < ∞, t > 0,
   u(x, 0) = sin x, ut(x, 0) = cos x, −∞ < x < ∞.

2. Solve the following IVP:

   utt = c² uxx, −∞ < x < ∞, t > 0,
   u(x, 0) = 0, ut(x, 0) = sin²(x), −∞ < x < ∞.

3. Let u(x, t) be the solution of

   utt = c² uxx, −∞ < x < ∞, t > 0,
   u(x, 0) = f(x), ut(x, 0) = g(x), −∞ < x < ∞,

   where f and g are even functions. Use D'Alembert's formula to show that u is even in x.


Lecture 3

The Semi-Infinite String Problem

Before we introduce the semi-infinite string problem, let us look at a special case of
D'Alembert's formula derived in the previous lecture.

EXAMPLE 1. Consider the problem for the semi-infinite string (0 ≤ x < ∞) with fixed
end at x = 0:

PDE: utt = c² uxx, 0 ≤ x < ∞, −∞ < t < ∞,
BC:  u(0, t) = 0,
IC:  u(x, 0) = f(x), ut(x, 0) = 0.

Solution. Note that f(x) is defined for x ≥ 0. Consider the odd extension f0(x),
−∞ < x < ∞, defined as follows:

f0(x) = { f(x)     for x ≥ 0,
        { −f(−x)   for x < 0.

The related extended problem is

PDE: utt = c² uxx, −∞ < x, t < ∞,
IC:  u(x, 0) = f0(x), ut(x, 0) = 0.

By D'Alembert's formula, the solution of this problem is

u(x, t) = (1/2)[f0(x + ct) + f0(x − ct)].

Note that u(x, t) is odd in x, since f0(x) is odd. Thus, u(0, t) = 0 and so u(x, t) satisfies
the BC. Moreover,

u(x, 0) = (1/2)[f0(x + c·0) + f0(x − c·0)] = f0(x),

which is the same as f(x) when x ≥ 0.
Semi-infinite string problem: We shall find the solution of the following wave equation
whose left end is fixed at zero and which has given initial conditions:

PDE: utt = c² uxx, 0 < x < ∞, 0 < t < ∞,
BC:  u(0, t) = 0, 0 < t < ∞,
IC:  u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < ∞.

Recall that the solution of the PDE (1) is given by (see (5), Lecture 2 of this module)

u(x, t) = φ(x − ct) + ψ(x + ct).                                (1)

Substituting the general solution into the initial conditions, we arrive at (cf. (9)-(10),
Lecture 2 of this module)

φ(x − ct) = (1/2) f(x − ct) − (1/2c) ∫_{x0}^{x−ct} g(τ) dτ,     (2)
ψ(x + ct) = (1/2) f(x + ct) + (1/2c) ∫_{x0}^{x+ct} g(τ) dτ.     (3)

Since we are looking for the solution u(x, t) everywhere in the first quadrant (x > 0, t > 0)
of the xt-plane, we must find φ(x − ct) for −∞ < x − ct < ∞ and ψ(x + ct) for
0 < x + ct < ∞. Using (1), (2) and (3), for x − ct ≥ 0, it follows that

u(x, t) = φ(x − ct) + ψ(x + ct)
        = (1/2)[f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ.

When x < ct, use of the BC u(0, t) = 0 leads to

φ(−ct) = −ψ(ct),

and hence,

φ(x − ct) = −(1/2) f(ct − x) − (1/2c) ∫_{x0}^{ct−x} g(τ) dτ.

Substituting this value of φ into the general solution (1) yields

u(x, t) = (1/2)[f(x + ct) − f(ct − x)] + (1/2c) ∫_{ct−x}^{x+ct} g(τ) dτ.

Thus, for x ≥ ct and x < ct, we have

u(x, t) = { (1/2)[f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ,  x ≥ ct,
          { (1/2)[f(x + ct) − f(ct − x)] + (1/2c) ∫_{ct−x}^{x+ct} g(τ) dτ,  x < ct.

EXAMPLE 2. Find the solution of the following IBVP:

utt = uxx, 0 < x < ∞, t > 0,
u(0, t) = 0, t > 0,
u(x, 0) = |sin x|, ut(x, 0) = 0, 0 < x < ∞.

Solution. For x > t,

u(x, t) = (1/2)(f(x + t) + f(x − t))
        = (1/2)(|sin(x + t)| + |sin(x − t)|).

For x < t,

u(x, t) = (1/2)(f(x + t) − f(t − x))
        = (1/2)(|sin(x + t)| − |sin(t − x)|).

Observe that u(0, t) = 0 is satisfied by u(x, t) for x < t. Thus,

u(x, t) = { (1/2)(|sin(x + t)| + |sin(x − t)|),  x > t,
          { (1/2)(|sin(x + t)| − |sin(t − x)|),  x < t.

Practice Problems

1. Solve the following IBVP:

   utt = uxx, 0 < x < ∞, t > 0,
   ux(0, t) = 0, t ≥ 0,
   u(x, 0) = cos x, ut(x, 0) = 0, 0 ≤ x < ∞.

2. Solve the following IBVP:

   utt = c² uxx, 0 < x < ∞, t > 0,
   u(0, t) = 0, t ≥ 0,
   u(x, 0) = x², ut(x, 0) = 0, 0 ≤ x < ∞.


Lecture 4

The Finite Vibrating String Problem

In this lecture, we shall study the transverse vibrations of a finite string. If u(x, t)
represents the displacement (deflection) of the string and the ends of the string are held
fixed, then the motion of the string is described by the following initial-boundary value
problem (IBVP):

PDE: utt = c² uxx, 0 < x < L, 0 < t < ∞,                        (1)
BC:  u(0, t) = 0, u(L, t) = 0, 0 < t < ∞,                       (2)
IC:  u(x, 0) = f(x), ut(x, 0) = g(x), 0 ≤ x ≤ L.                (3)

While studying the wave equation in a bounded region of space 0 < x < L, it is to be
noted that the waves no longer appear to be moving, due to their repeated interaction
with the boundaries. These waves are known as standing waves (e.g., a guitar string fixed
at both ends). The boundary conditions in (2) reflect the fact that the string is held fixed
at the two end points x = 0 and x = L.

We shall apply the method of separation of variables to solve this problem.

Step 1. (Reducing to a system of ODEs): We seek solutions of the form

u(x, t) = X(x)T(t).                                             (4)

Substituting (4) into utt = c² uxx and separating variables, we get

X(x)T″(t) = c² X″(x)T(t)

or

T″(t)/(c² T(t)) = X″(x)/X(x) = k,

where the separation constant k can be any number, −∞ < k < ∞. This leads to two
ODEs:

T″(t) − c² k T(t) = 0,                                          (5)
X″(x) − k X(x) = 0.                                             (6)

The ODE X″ − kX = 0 is solved for X(x) in a manner similar to that of the heat equation
(see Lecture 3 of Module 5), but the solutions of the ODE T″ − c² kT = 0 for T(t) are
different, because of the second-order time derivative.

Step 2. (Solving the ODEs): Investigating the solutions of these two ODEs for all
different values of k leads to the following cases.

Case I: Let k > 0. Set k = λ². The solutions are given by

T(t) = A e^{cλt} + B e^{−cλt},
X(x) = C e^{λx} + D e^{−λx}.

Application of the BC yields u ≡ 0.

Case II: Let k = 0. In this case, the solutions are linear and given by

T(t) = At + B,  X(x) = Cx + D.

This case is of no interest because use of the BC yields the trivial solution u ≡ 0. Hence,
for a nontrivial solution, we are left with the possibility of choosing k < 0.

Case III: Let k < 0. Set k = −λ² for some λ ∈ R, λ ≠ 0. The solution of
T″(t) + c²λ² T(t) = 0 is given by

T(t) = A sin(cλt) + B cos(cλt).

The solution of X″(x) + λ² X(x) = 0 is

X(x) = C sin(λx) + D cos(λx),

where A, B, C and D are constants. Our goal is to find the constants A, B, C and D and
the negative separation constant k = −λ² so that the expression

u(x, t) = [C sin(λx) + D cos(λx)][A sin(cλt) + B cos(cλt)]      (7)

satisfies the BC. As u(x, t) has to satisfy the BC (2), substituting (7) into u(0, t) =
u(L, t) = 0 gives

u(0, t) = X(0)T(t) = D[A sin(cλt) + B cos(cλt)] = 0 ⟹ D = 0,

u(L, t) = 0 ⟹ X(L)T(t) = 0
        ⟹ C sin(λL)[A sin(cλt) + B cos(cλt)] = 0
        ⟹ sin(λL) = 0
        ⟹ λL = nπ, n = 0, 1, 2, . . . ,

or λn = nπ/L, n = 0, 1, 2, . . . .


Note that the choice C = 0 in (7) would lead to X(x)T(t) ≡ 0. Thus, we obtain the
sequence of solutions

un(x, t) = Xn(x)Tn(t)
         = sin(nπx/L)[an sin(nπct/L) + bn cos(nπct/L)],  n = 1, 2, 3, . . . .

As the PDE is linear, by the superposition principle we write

u(x, t) = Σ_{n=1}^{∞} sin(nπx/L)[an sin(nπct/L) + bn cos(nπct/L)].   (8)

These solutions are called eigenfunctions, and the values λn = nπ/L are called the
eigenvalues of the vibrating string.


Step 3. (Applying the IC): Substituting (8) into the IC u(x, 0) = f(x), ut(x, 0) = g(x)
yields the two equations:

Σ_{n=1}^{∞} bn sin(nπx/L) = f(x),
Σ_{n=1}^{∞} an (nπc/L) sin(nπx/L) = g(x),

which represent the Fourier sine expansions of f(x) and g(x), respectively. The coefficients
an and bn are given by

an = (2/(nπc)) ∫_0^L g(x) sin(nπx/L) dx,                        (9)
bn = (2/L) ∫_0^L f(x) sin(nπx/L) dx.                            (10)

Thus, the solution is

u(x, t) = Σ_{n=1}^{∞} sin(nπx/L)[an sin(nπct/L) + bn cos(nπct/L)],   (11)

where an and bn are given by (9) and (10), respectively.
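The coefficient formulas (9)-(10) can be checked numerically. The sketch below (an addition, not from the text; the data f(x) = x(1 − x), g = 0 and L = c = 1 are illustrative choices) computes bn by quadrature and verifies that the partial sum of (11) reproduces the initial displacement at t = 0:

```python
import numpy as np

# Illustrative data: f(x) = x(1 - x), g(x) = 0, with L = 1, c = 1.
L, c = 1.0, 1.0
f = lambda x: x * (1.0 - x)

xs = np.linspace(0.0, L, 4001)

def b_n(n):
    # bn = (2/L) * integral of f(x) sin(n pi x / L) over [0, L], trapezoidal rule
    integrand = f(xs) * np.sin(n * np.pi * xs / L)
    return (2.0 / L) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs))

# With g = 0 all an vanish, so (11) reduces to a cosine-in-time sine series.
def u(x, t, terms=50):
    return sum(b_n(n) * np.sin(n * np.pi * x / L) * np.cos(n * np.pi * c * t / L)
               for n in range(1, terms + 1))

x_check = np.linspace(0.05, 0.95, 10)
assert np.allclose(u(x_check, 0.0), f(x_check), atol=1e-4)  # IC is reproduced
```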


REMARK 1. The function u(x, t) given by (11), with coefficients (9) and (10), is a
solution of (1) that satisfies the conditions (2) and (3), provided that the series (11)
converges, and also that the series obtained by differentiating (11) twice (term-wise)
with respect to x and t converge and have the sums uxx and utt, respectively, which
are continuous.

Note that each un in (8) represents a harmonic motion having the frequency λnc/(2π) =
cn/(2L) cycles per unit time. This motion is called the nth normal mode of the string.
The first normal mode (n = 1) is known as the fundamental mode, and the others
are known as overtones.

Practice Problems

1. Solve the following IBVP:

   utt = uxx, 0 < x < 1, t > 0,
   u(0, t) = u(1, t) = 0, t > 0,
   u(x, 0) = x(1 − x), ut(x, 0) = 0, 0 ≤ x ≤ 1.

2. Solve the following IBVP:

   utt = 4uxx, 0 < x < π, t > 0,
   u(0, t) = u(π, t) = 0, t > 0,
   u(x, 0) = 0, ut(x, 0) = sin x, 0 ≤ x ≤ π.


Lecture 5

The Inhomogeneous Wave Equation

Recall Duhamel's principle for inhomogeneous heat equations that arise due to internal
heat sources. We solved the inhomogeneous heat equation by solving a family of related
problems in which the source appears in the initial conditions instead of the differential
equation. The same idea works for inhomogeneous wave equations. To illustrate the
procedure, let us consider the following infinite string problem:

PDE: utt = c² uxx + h(x, t), −∞ < x, t < ∞,                     (1)
IC:  u(x, 0) = 0, ut(x, 0) = 0.                                 (2)

To motivate the method of Duhamel for the string problem, let the acceleration h(x, s) be
applied to the string at t = s − Δs and let the acceleration be turned off at t = s. The
string will then acquire a velocity of h(x, s)Δs, and its position change is h(x, s)(Δs)²/2.
Assuming Δs to be small enough, the change in position can be neglected. The effect of
the imposed acceleration is v(x, t; s)Δs, where v(x, t; s) is the solution of

PDE: vtt = c² vxx, −∞ < x < ∞, t ≥ s,                           (3)
IC:  v(x, s; s) = 0, vt(x, s; s) = h(x, s).                     (4)

This problem has initial conditions given at the arbitrary time t = s, instead of t = 0. We
can write v(x, t; s) = ṽ(x, t − s; s), where ṽ(x, t; s) solves

PDE: ṽtt = c² ṽxx, −∞ < x < ∞, t ≥ 0,                           (5)
IC:  ṽ(x, 0; s) = 0, ṽt(x, 0; s) = h(x, s).                     (6)

By D'Alembert's formula, the solution of (5) is given by

ṽ(x, t; s) = (1/2c) ∫_{x−ct}^{x+ct} h(r, s) dr,                 (7)

and hence, the solution of (3) is

v(x, t; s) = ṽ(x, t − s; s) = (1/2c) ∫_{x−c(t−s)}^{x+c(t−s)} h(r, s) dr.

THEOREM 1. (Duhamel's principle for the wave equation, [1]) Let h(x, t) be a C¹
function, −∞ < x, t < ∞. Then the unique solution of the problem (1) satisfying the
conditions (2) is given by

u(x, t) = ∫_0^t v(x, t; s) ds = ∫_0^t ṽ(x, t − s; s) ds
        = (1/2c) ∫_0^t ∫_{x−c(t−s)}^{x+c(t−s)} h(r, s) dr ds.   (8)

Proof. By D'Alembert's formula, we know

ṽ(x, t; s) = (1/2c) ∫_{x−ct}^{x+ct} h(r, s) dr.

Note that ṽ(x, t; s) is in C² since h(x, t) is assumed to be in C¹. Differentiating (8) with
respect to t, we obtain

ut(x, t) = ṽ(x, 0; t) + ∫_0^t ṽt(x, t − s; s) ds = ∫_0^t ṽt(x, t − s; s) ds,   (9)

and

utt(x, t) = ṽt(x, 0; t) + ∫_0^t ṽtt(x, t − s; s) ds
          = h(x, t) + ∫_0^t c² ṽxx(x, t − s; s) ds
          = h(x, t) + c² uxx(x, t),

where we have used (5) and (6). This shows that u(x, t) is a C² solution of (1). By (8),
we have u(x, 0) = 0, and (9) yields ut(x, 0) = 0.

To prove the uniqueness, let u1 and u2 be two solutions of (1)-(2). The function
v = u1 − u2 satisfies vtt = c² vxx with IC v(x, 0) = 0 and vt(x, 0) = 0. Hence, v ≡ 0, i.e.,
u1 = u2. This completes the proof.
EXAMPLE 2. Solve

PDE: utt − uxx = x − t, −∞ < x, t < ∞,                          (10)
IC:  u(x, 0) = x⁴, ut(x, 0) = sin(x).

Solution. We split the problem (10) into two subproblems: u1(x, t) and u2(x, t) solve

(u1)tt − (u1)xx = 0,
u1(x, 0) = x⁴, (u1)t(x, 0) = sin(x),

and

(u2)tt − (u2)xx = x − t,
u2(x, 0) = 0, (u2)t(x, 0) = 0,

respectively. The solution of (10) is then u(x, t) = u1(x, t) + u2(x, t). By D'Alembert's
formula,

u1(x, t) = (1/2)[(x + t)⁴ + (x − t)⁴] − (1/2)[cos(x + t) − cos(x − t)].

Applying Theorem 1, we compute u2(x, t) as follows:

u2(x, t) = (1/2) ∫_0^t ∫_{x−(t−s)}^{x+(t−s)} (r − s) dr ds
         = (1/2) ∫_0^t [r²/2 − sr]_{r=x−t+s}^{r=x+t−s} ds
         = (1/2) ∫_0^t [((x + t − s)² − (x − t + s)²)/2 − s(x + t − s) + s(x − t + s)] ds
         = (1/2) ∫_0^t [2x(t − s) − 2s(t − s)] ds
         = t²x/2 − t³/6.

The solution u(x, t) = u1(x, t) + u2(x, t) can easily be verified.
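The verification suggested at the end of the example can be carried out symbolically; a short sketch (an addition, not part of the original text):

```python
import sympy as sp

x, t, r, s = sp.symbols('x t r s')

# u1 from D'Alembert's formula, u2 from Duhamel's formula (8) with c = 1, h = x - t
u1 = ((x + t)**4 + (x - t)**4) / 2 - (sp.cos(x + t) - sp.cos(x - t)) / 2
u2 = sp.integrate(sp.integrate(r - s, (r, x - (t - s), x + (t - s))), (s, 0, t)) / 2
assert sp.expand(u2 - (t**2 * x / 2 - t**3 / 6)) == 0

# u = u1 + u2 solves u_tt - u_xx = x - t with u(x, 0) = x^4, u_t(x, 0) = sin(x)
u = u1 + u2
assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - (x - t)) == 0
assert sp.simplify(u.subs(t, 0) - x**4) == 0
assert sp.simplify(sp.diff(u, t).subs(t, 0) - sp.sin(x)) == 0
```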


REMARK 3. Duhamel's principle also applies in the case of a finite string. As in Example
2, one can handle the case where both the differential equation and the BC are
inhomogeneous. This is done by splitting the problem into two parts and then adding the
solutions of the two parts to obtain the desired solution.

Practice Problems

1. Solve the following nonhomogeneous IBVP:

   utt = uxx + x sin t, 0 < x < 1, t > 0,
   u(x, 0) = x(1 − x), ut(x, 0) = 0, 0 ≤ x ≤ 1,
   u(0, t) = u(1, t) = 0, t > 0.

2. Solve the following nonhomogeneous IBVP:

   utt = uxx + 2, 0 < x < 1, t > 0,
   u(x, 0) = x, ut(x, 0) = 0, 0 ≤ x ≤ 1,
   u(0, t) = 0, ux(1, t) = t, t ≥ 0.

Module 7: The Laplace Equation

In this module, we shall study one of the most important partial differential equations in
physics, known as the Laplace equation

∇²u = 0 in Ω ⊂ Rⁿ,                                              (1)

where ∇²u := Σ_{i=1}^{n} ∂²u/∂xi² is the Laplacian of the function u. The theory of the
solutions of Laplace's equation is called potential theory. The equation (1) is often referred
to as the potential equation, as the function u is frequently a potential function. Solutions
of (1) that have continuous second-order partial derivatives are called harmonic functions.
For ease of exposition, we shall study Laplace's equation in two dimensions.

This module consists of six lectures. The first lecture introduces some basic concepts
and the maximum and minimum principle for boundary value problems (BVP). In the
second lecture, we discuss the Green's identities, the fundamental solution of the Laplace
equation and the Poisson integral formula. The solution of the Laplace equation for a
rectangular region is discussed in the third lecture. The mixed BVP for a rectangle is
discussed in the fourth lecture. In the fifth lecture, we solve the Laplace equation for
the annular region between concentric circles. Finally, the sixth lecture is devoted to the
interior and exterior Dirichlet problems for the Laplace equation.

MODULE 7: THE LAPLACE EQUATION

Lecture 1

Basic Concepts and The Maximum/Minimum Principle

Let Ω be an open region in R². The Laplace equation in two dimensions is of the form

∇²u(x, y) = 0, (x, y) ∈ Ω,                                      (1)

where ∇² := ∂²/∂x² + ∂²/∂y² is the Laplace operator or the Laplacian. Equations of the
type (1) play an important role in a variety of physical contexts, such as gravitation
theory, electrostatics, steady-state heat conduction problems and fluid flow problems.

Some examples of physical problems (cf. [10]):

EXAMPLE 1. (Gravitation theory) The force of attraction F, both inside and outside the
attracting matter, can be expressed in terms of a gravitational potential u by the equation

F = ∇u.

In empty space, u satisfies Laplace's equation

∇²u = 0.

EXAMPLE 2. (Steady-state heat flow problem) In the theory of heat conduction, if the
temperature u does not vary with time, then u satisfies the equation

∇·(κ∇u) = 0,

where κ is the thermal conductivity. If κ is constant throughout the medium, then

∇²u = 0.

EXAMPLE 3. (Fluid flow problem) The velocity q of a perfect fluid in irrotational motion
can be expressed in terms of a velocity potential u by the equation

q = ∇u.

If there are no sources or sinks at any point of the fluid, the function u satisfies Laplace's
equation

∇²u = 0.

The inhomogeneous Laplace equation

∇²u(x, y) = f(x, y) in Ω,

where f is a given function, is known as the Poisson equation.


Types of BVP

Because these solutions do not depend on time, initial conditions are irrelevant and only
boundary conditions are specified. There are three basic types of boundary conditions
that are usually associated with Laplace's equation. They are:

Dirichlet BVP: The BC are of Dirichlet type if the solution u(x, y) to the Laplace
equation in a domain Ω is specified on the boundary ∂Ω, i.e.,

u(x, y) = f(x, y) on ∂Ω,

where f(x, y) is a given function. The Laplace equation together with a Dirichlet BC
is called the Dirichlet problem / Dirichlet BVP. The Dirichlet problem for the
Laplace equation is of the form

∇²u(x, y) = 0 in Ω;  u(x, y) = f(x, y) on ∂Ω.

Neumann BVP: The BC are of Neumann type if the directional derivative ∂u/∂n
along the outward normal to the boundary is specified on ∂Ω, i.e.,

∂u/∂n (x, y) = g(x, y) for (x, y) ∈ ∂Ω.

In physical terms, the normal component of the solution gradient is known on the
boundary. In the steady-state heat flow problem, a Neumann BC means the rate of heat
loss or gain through the boundary points is prescribed.

The Laplace equation together with a Neumann BC is called the Neumann BVP /
Neumann problem, which is written as

∇²u = 0 in Ω;  ∂u/∂n (x, y) = g(x, y) for (x, y) ∈ ∂Ω.

The Neumann problem will have no solution unless we assume that the average
value of the function g on ∂Ω is zero. This assumption is known as the compatibility
condition:

∫_{∂Ω} ∂u/∂n ds = ∫_{∂Ω} g ds = 0,

which will be discussed in the next lecture.

Robin BVP: The boundary conditions are called Robin type or mixed type if a
Dirichlet BC is specified on part of the boundary and a Neumann type BC is specified
on the remaining part of the boundary ∂Ω. For example,

∂u/∂n + c(u − g) = 0,

where c is a constant and g is a given function that can vary over the boundary. The
Laplace equation together with the Robin/mixed BC is known as the Robin BVP /
Mixed BVP.

The maximum/minimum principle

The maximum/minimum principle for Laplace's equation is stated in the following
theorem.

THEOREM 4. (The maximum/minimum principle for Laplace's equation)
Let u(x, y) ∈ C²(Ω) ∩ C(Ω̄) be a solution of Laplace's equation

∇²u(x, y) := uxx + uyy = 0                                      (2)

in a bounded region Ω with boundary ∂Ω. Then the maximum and minimum values of u
are attained on ∂Ω. That is,

max_{Ω̄} u(x, y) = max_{∂Ω} u(x, y)  and  min_{Ω̄} u(x, y) = min_{∂Ω} u(x, y).

Proof. Since u is continuous in Ω̄, it attains its maximum either in Ω or on ∂Ω.
Suppose u achieves its maximum at some point (x0, y0) ∈ Ω, and that

u(x0, y0) = max_{Ω̄} u(x, y) = M0 > Mb,

where Mb = max_{∂Ω} u(x, y). Consider the function

v(x, y) = u(x, y) + ε[(x − x0)² + (y − y0)²],                   (3)

for some ε > 0. Note that v(x0, y0) = u(x0, y0) = M0 and

max_{∂Ω} v(x, y) ≤ Mb + εd²,

where d is the diameter of Ω. For ε such that 0 < ε < (M0 − Mb)/d², the maximum of v
cannot occur on ∂Ω, because

M0 = v(x0, y0) > max_{∂Ω} v(x, y).

Hence v attains its maximum at some interior point. Let

v(x1, y1) = max_{Ω̄} v(x, y), with (x1, y1) ∈ Ω.

At (x1, y1), we must have

vxx ≤ 0 and vyy ≤ 0 ⟹ vxx + vyy ≤ 0.                            (4)

From (3), we observe that

vxx + vyy = uxx + uyy + 2ε + 2ε = 4ε > 0,

where we have used the fact that uxx + uyy = 0. This leads to a contradiction with (4).
Thus,

max_{Ω̄} u(x, y) = max_{∂Ω} u(x, y).

To prove that the minimum of u is also achieved on the boundary ∂Ω, replace u by
−u in the above argument to obtain

min_{Ω̄} u = −max_{Ω̄}(−u) = −max_{∂Ω}(−u) = min_{∂Ω} u.

This completes the proof.
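The maximum principle has a discrete analogue that is easy to observe numerically. The following sketch (an addition, not from the text; the grid size, boundary data and iteration count are illustrative choices) solves the five-point discrete Laplace equation on the unit square by Jacobi iteration and checks that the extreme values occur on the boundary:

```python
import numpy as np

# Discrete Laplace equation on the unit square with Dirichlet data
# x**2 - y**2 on the boundary (an illustrative, harmonic choice).
n = 41
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing='ij')
u = np.zeros((n, n))
boundary = X**2 - Y**2
for edge in (np.s_[0, :], np.s_[-1, :], np.s_[:, 0], np.s_[:, -1]):
    u[edge] = boundary[edge]

# Jacobi iteration: each interior value becomes the average of its 4 neighbours
for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])

# Maximum principle: the max and min of u occur on the boundary
interior = u[1:-1, 1:-1]
bdry_vals = np.concatenate([u[0, :], u[-1, :], u[:, 0], u[:, -1]])
assert interior.max() <= bdry_vals.max() + 1e-9
assert interior.min() >= bdry_vals.min() - 1e-9
```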


We now discuss the maximum and minimum principle for Poissons equation
2 u(x, y) = f (x, y) in .

(5)

THEOREM 5. (The maximum/minimum principle for Poissons equation)


Let be a bounded domain in R2 with boundary . Then the maximum values of a
solution u of (5) attain on if f (x, y) > 0 in and the minimum values of u occur on
if f (x, y) < 0 in .
Proof. Since u is continuous on the closed and bounded set Ω̄, it must assume its maximum in Ω or on ∂Ω. Suppose that f(x, y) > 0 in Ω and that the maximum is assumed at a point (x0, y0) in Ω, i.e.,

u(x0, y0) = max_{Ω̄} u(x, y).

Then at (x0, y0) ∈ Ω we must have

u_xx(x0, y0) ≤ 0,  u_yy(x0, y0) ≤ 0,

so that u_xx + u_yy ≤ 0 at (x0, y0). As f > 0, it follows from (5) that

u_xx + u_yy > 0,

which is a contradiction. Hence, the maximum of u(x, y) must occur on ∂Ω.

To show that the minimum of u(x, y) is attained on ∂Ω if f(x, y) < 0 in Ω, replace u by −u in the preceding argument. This is equivalent to replacing f by −f in (5). Since f < 0, we have −f > 0, and we conclude that −u assumes its maximum on ∂Ω. Therefore, u assumes its minimum on ∂Ω, and this completes the proof.

The maximum/minimum principle can be used to prove uniqueness and continuous dependence of the solution for the Dirichlet problem.


THEOREM 6. Let Ω be a bounded domain in ℝ² with boundary ∂Ω. The solution of the Dirichlet problem

∇²u(x, y) = f(x, y) in Ω,  u(x, y) = g(x, y) on ∂Ω,   (6)

if it exists, is unique.

Proof. Let u1(x, y) and u2(x, y) be two solutions of (6). Set v(x, y) = u1(x, y) − u2(x, y). Then v satisfies

∇²v = 0 in Ω,  v = 0 on ∂Ω.

The maximum/minimum principle (cf. Theorem 4) yields

v = 0 in Ω  ⟹  u1 − u2 = 0 in Ω.

Thus, we have u1 = u2, which proves the uniqueness.
Next, we shall prove the continuous dependence of the solution on the boundary data.

THEOREM 7. The solution of the Dirichlet problem depends continuously on the boundary data.

Proof. Let u_i, i = 1, 2, be the solutions of

∇²u_i = F in Ω ⊂ ℝ²,  u_i = f_i on ∂Ω.

Then the function v = u1 − u2 solves

∇²v = 0 in Ω with v = f1 − f2 on ∂Ω.

By the maximum/minimum principle, v attains its maximum and minimum on ∂Ω. Thus, for all (x, y) ∈ Ω̄, we have

−max_{∂Ω}|f1 − f2| ≤ min_{∂Ω}(f1 − f2) ≤ v(x, y) ≤ max_{∂Ω}(f1 − f2) ≤ max_{∂Ω}|f1 − f2|.

If |f1 − f2| < ε on ∂Ω, then

−ε < min_{Ω̄} v(x, y) ≤ v(x, y) ≤ max_{Ω̄} v(x, y) < ε.

Therefore,

|f1 − f2| < ε on ∂Ω  ⟹  |v(x, y)| < ε for all (x, y) ∈ Ω̄.

This completes the proof.


Practice Problems

1. Let u satisfy the Laplace equation in the disk Ω = {(x, y) | x² + y² < 1} and be continuous on Ω̄. If u(cos θ, sin θ) ≤ sin θ + cos(2θ), then show that

u(x, y) ≤ y + x² − y²,  (x, y) ∈ Ω̄.

2. Consider the elliptic equation

−∇·(p∇u) = F,  p > 0,

in a bounded region Ω ⊂ ℝ² with boundary ∂Ω. Show that if F < 0 in Ω, the solution u assumes its maximum on ∂Ω, and if F > 0 in Ω, the solution u assumes its minimum on ∂Ω.

3. Let Ω be a bounded region in ℝ². Use the maximum principle to prove continuous dependence on the data for the Dirichlet problem for the elliptic equation

−∇·(p∇u) = F in Ω

with p > 0.


Lecture 2

Green's Identities and Fundamental Solutions

In this lecture, we shall learn about some important identities known as Green's identities and their special forms. As a consequence of these identities, we can prove the uniqueness of the solution of the Dirichlet problem and the compatibility condition for the Neumann problem. The fundamental solutions of the Laplace equation will also be discussed.

Let Ω be a bounded domain in ℝ² with smooth boundary ∂Ω. Recall the following consequence of the Gauss divergence theorem: for u, v ∈ C¹(Ω̄),

∫_Ω u (∂v/∂x_k) dx = ∫_{∂Ω} u v n_k ds − ∫_Ω v (∂u/∂x_k) dx,   (1)

where n = (n1, n2) is the outward unit normal to the boundary ∂Ω and ds is the element of arc length. As a consequence of (1), the following identity, known as Green's first identity, holds true:

∫_Ω v ∇²u dx = ∫_{∂Ω} v (∂u/∂n) ds − ∫_Ω ∇u · ∇v dx.   (2)

Integrating the second term on the right-hand side of (2) once more by parts, we obtain Green's second identity

∫_Ω (v ∇²u − u ∇²v) dx = ∫_{∂Ω} ( v ∂u/∂n − u ∂v/∂n ) ds.   (3)

Here, ∂/∂n indicates differentiation in the direction of the exterior normal to ∂Ω.

From the identity (2), the special case v = 1 yields

∫_Ω ∇²u dx = ∫_{∂Ω} (∂u/∂n) ds.   (4)

Another special case of interest is obtained by choosing v = u. In this case, the identity (2) yields the energy identity

∫_Ω |∇u|² dx + ∫_Ω u ∇²u dx = ∫_{∂Ω} u (∂u/∂n) ds.   (5)

If ∇²u = 0 in Ω and, in addition, u = 0 (or ∂u/∂n = 0) on ∂Ω, then for u ∈ C²(Ω̄) it follows from (5) that

∫_Ω |∇u|² dx = 0  ⟹  ∇u = 0  ⟹  u = constant in Ω.

This observation leads to uniqueness theorems for the Dirichlet problem and the Neumann problem.


REMARK 1. Using Green's identity (2), one can easily prove that:

(i) A solution u ∈ C²(Ω̄) of the Dirichlet problem is determined uniquely.

(ii) A solution u ∈ C²(Ω̄) of the Neumann problem is determined uniquely up to an additive constant.

Observe that a solution of the Neumann problem can exist only if the data satisfy a condition known as the compatibility condition. For example, the compatibility condition for the Neumann problem

∇²u = 0 in Ω,  ∂u/∂n = g on ∂Ω

is

∫_{∂Ω} g ds = 0,

which immediately follows from the identity (4).


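As a numerical illustration of the compatibility condition (a sanity check, not part of the notes), the flux of ∇u through the unit circle vanishes for the harmonic function u = x² − y²:

```python
import math

# Illustration: for the harmonic u(x, y) = x^2 - y^2, the line integral of
# grad(u) . n over the unit circle (the compatibility integral) is zero.
N = 2000
flux = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    x, y = math.cos(t), math.sin(t)
    ux, uy = 2 * x, -2 * y                          # grad u
    flux += (ux * x + uy * y) * (2 * math.pi / N)   # (grad u . n) ds
print(flux)
```

On the unit circle the integrand reduces to 2cos(2t), whose integral over a full period is zero, so the computed flux is zero up to roundoff.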
Fundamental Solutions: One of the principal features of the Laplace equation

∇²u = 0   (6)

is its spherical symmetry: the equation is preserved under rotations about a point ξ. Therefore, it is reasonable to look for special solutions v(x) of (6) that are invariant under rotations about ξ. Such solutions are of the form

v = ψ(r),   (7)

where

r = |x − ξ| = ( Σ_{i=1}^n (x_i − ξ_i)² )^{1/2}

is the Euclidean distance between x and ξ. By the chain rule of differentiation, we find that

∂r/∂x_i = (1/2) ( Σ_{i=1}^n (x_i − ξ_i)² )^{−1/2} · 2(x_i − ξ_i) = (x_i − ξ_i)/r.

Further, we note that

v_{x_i} = ψ′(r) ∂r/∂x_i = ψ′(r) (x_i − ξ_i)/r,
v_{x_i x_i} = ψ″(r) (x_i − ξ_i)²/r² + ψ′(r) ( 1/r − (x_i − ξ_i)²/r³ ).

Hence,

∇²v = Σ_{i=1}^n v_{x_i x_i} = ψ″(r) + ((n − 1)/r) ψ′(r) = 0.


If ψ′(r) ≠ 0, we have

ψ″(r)/ψ′(r) = (1 − n)/r.

On solving, we arrive at ψ′(r) = C r^{1−n} and hence

ψ(r) = C log r + C1 for n = 2,  ψ(r) = C r^{2−n}/(2 − n) + C1 for n > 2,

where C and C1 are constants.

The function v(x) = ψ(r) satisfies (6) for r > 0, that is, for x ≠ ξ, but becomes infinite as x → ξ. For a suitable choice of the constant C, the function v is a fundamental solution for the operator ∇², satisfying the equation

∇²v = δ(x − ξ),

where δ is the Dirac delta function. The function

ψ(r) = (1/2π) log r,  r > 0,

is a fundamental solution of the two-dimensional Laplace equation (6). For a proof, see [5].
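That ψ is harmonic away from the singularity can be checked numerically. The sketch below (illustrative; the sample point and step size are arbitrary choices) applies a five-point finite-difference Laplacian to ψ(x, y) = (1/2π) log r:

```python
import math

# Five-point discrete Laplacian of psi(x, y) = log(r) / (2*pi) at a point
# away from the origin; for a harmonic function the result is ~0.
def psi(x, y):
    return math.log(math.hypot(x, y)) / (2 * math.pi)

h = 1e-4
x0, y0 = 0.7, 0.4
lap = (psi(x0 + h, y0) + psi(x0 - h, y0)
       + psi(x0, y0 + h) + psi(x0, y0 - h) - 4 * psi(x0, y0)) / h**2
print(lap)
```

The discrete Laplacian is of size O(h²) plus roundoff, i.e., negligibly small, consistent with ∇²ψ = 0 for r > 0.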
The Poisson Integral Formula. We know that a function u ∈ C²(Ω) satisfying the Laplace equation ∇²u = 0 is harmonic. The following result expresses the solution of the Dirichlet problem in a disk in terms of an integral known as the Poisson integral formula.

THEOREM 2. (The Poisson integral formula) Let f(θ) be a continuous function with f(θ + 2π) = f(θ). Define

u(r, θ) = (1/2π) ∫_{−π}^{π} (r0² − r²) f(s) / (r0² − 2 r r0 cos(θ − s) + r²) ds for r < r0,
u(r0, θ) = f(θ) for r = r0.

Then u(r, θ) solves the following Dirichlet problem:

∇²u(x, y) = 0,  (x² + y²)^{1/2} < r0,
u(r0, θ) = f(θ),  f(θ + 2π) = f(θ),

where u(r, θ) = u(x, y) = u(r cos θ, r sin θ). That is, u(r, θ) is harmonic on the open disk D = {(x, y) | (x² + y²)^{1/2} < r0}.

Some consequences of the Poisson integral formula are given below.

THEOREM 3. (The mean value property) Let u be a harmonic function on some region Ω. The value of u at the center of any disk D with D̄ ⊂ Ω is the average (or mean) of the values of u on the circular boundary ∂D of D.

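As a sanity check of the Poisson integral formula (illustrative, not part of the notes), it can be evaluated by quadrature for the boundary data f(s) = sin s with r0 = 1, whose harmonic extension is u(r, θ) = r sin θ:

```python
import math

# Quadrature of the Poisson integral for f(s) = sin(s), r0 = 1; the
# exact harmonic extension is u(r, theta) = r * sin(theta).
def poisson(r, theta, r0=1.0, N=400):
    total = 0.0
    for k in range(N):
        s = 2 * math.pi * k / N
        num = (r0**2 - r**2) * math.sin(s)
        den = r0**2 - 2 * r * r0 * math.cos(theta - s) + r**2
        total += num / den
    return total * (2 * math.pi / N) / (2 * math.pi)

approx = poisson(0.5, 1.0)
exact = 0.5 * math.sin(1.0)
print(approx, exact)
```

Because the integrand is smooth and periodic in s, the uniform-grid sum converges extremely fast, and the two printed values agree to machine precision.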

Note: The mean value property can be used to prove the maximum and minimum principles for solutions of Laplace's equation. It can be used to show that whenever the maximum or minimum is attained in the interior of the region, the solution u must be identically constant. This is the strong maximum and minimum principle for Laplace's equation.

THEOREM 4. (The strong maximum/minimum principle) Let u be a harmonic function on an open connected set Ω. Suppose that the maximum or minimum of u is attained at some point in Ω. Then u must be constant throughout Ω.

By definition, a harmonic function u on an open region Ω is only required to be in C²(Ω). But u is actually in C^∞(Ω) (infinitely differentiable). Thus, we have the following result.

THEOREM 5. (Regularity result) If u is harmonic on an open region Ω, then u ∈ C^∞(Ω).

Practice Problems

1. Prove that a solution of the Neumann problem

∇²u = f in Ω,  ∂u/∂n = g on ∂Ω

differs from any other solution by a constant.

2. Prove that u1(x, y) = 1 + log(x² + y²) and u2(x, y) = 1 − log(x² + y²) are harmonic where defined. Note that u1 = u2 on the circle x² + y² = 1, but u1 ≠ u2 inside the circle. Why does this not contradict the uniqueness theorem for the Dirichlet problem?

3. Let u be harmonic in the disk x² + y² < r0². If u achieves its maximum at the point (0, 0), then show that u must be constant throughout this disk.


Lecture 3

The Dirichlet BVP for a Rectangle

In this lecture, we shall discuss the solution of the Laplace equation with Dirichlet-type BC in Cartesian coordinates.

Consider the following Dirichlet problem in a rectangle:

PDE: u_xx + u_yy = 0,  0 < x < a, 0 < y < b,   (1)
BC: u(x, 0) = f1(x),  u(x, b) = f2(x),  0 ≤ x ≤ a,   (2)
    u(0, y) = g1(y),  u(a, y) = g2(y),  0 ≤ y ≤ b.

We shall study how the method of separation of variables is still applicable for this BVP. Since the BC are nonhomogeneous, some preliminary work is required. By the principle of superposition, we seek the solution of the BVP (1)-(2) as

u(x, y) = u1(x, y) + u2(x, y) + u3(x, y) + u4(x, y),

where each of u1, u2, u3 and u4 satisfies the PDE together with one of the original nonhomogeneous BC and the homogeneous versions of the remaining three BC. These four problems are then solved by the method of separation of variables.

Let us consider solving the following example problem:
Let us consider solving the following example problem:
EXAMPLE 1. Solve the Dirichlet BVP:

PDE: u_xx + u_yy = 0,  0 < x < a, 0 < y < b,   (3)
BC: u(x, 0) = f(x),  u(x, b) = 0,  0 ≤ x ≤ a,   (4)
    u(0, y) = 0,  u(a, y) = 0,  0 ≤ y ≤ b.

We apply the method of separation of variables. The step-wise solution procedure is given below.

Step 1 (Reducing to ODEs): Separating variables, we seek a solution of the form

u(x, y) = X(x)Y(y).

Substituting this into (3), we obtain

X″(x)Y(y) + X(x)Y″(y) = 0


and hence

X″(x)/X(x) = −Y″(y)/Y(y) = k,

for some constant k, called the separation constant. This leads to the two ODEs

X″(x) − kX(x) = 0,   (5)
Y″(y) + kY(y) = 0.   (6)

Step 2 (Solving the resulting ODEs):

Case 1: When k > 0, set k = λ² with λ ≠ 0. In this case, the solutions of the ODEs are

X(x) = A e^{λx} + B e^{−λx},  Y(y) = C cos(λy) + D sin(λy).

Therefore, the product solutions u(x, y) are given by

u(x, y) = [A e^{λx} + B e^{−λx}][C cos(λy) + D sin(λy)].

Case 2: When k = 0, the solutions of the ODEs are linear:

X(x) = A + Bx,  Y(y) = C + Dy.

Therefore,

u(x, y) = (A + Bx)(C + Dy).

Case 3: When k < 0, set k = −λ² with λ > 0. The solutions of the ODEs are given by

X(x) = A cos(λx) + B sin(λx),  Y(y) = C e^{λy} + D e^{−λy}.

Thus, the product solution of the PDE is

u(x, y) = [A cos(λx) + B sin(λx)][C e^{λy} + D e^{−λy}].

Step 3 (Applying the BC): Using the boundary conditions u(0, y) = 0 and u(a, y) = 0 in the product solution obtained for the case k > 0 leads to the equations

A + B = 0,  A e^{λa} + B e^{−λa} = 0,


which have only the trivial solution A = 0 and B = 0. Thus, only the trivial solution u(x, y) = 0 is possible in this case. Similarly, the boundary conditions u(0, y) = 0 and u(a, y) = 0 lead to the trivial solution u(x, y) = 0 in the case k = 0. Let us therefore examine the product solution obtained in Case 3 (k < 0), i.e.,

u(x, y) = [A cos(λx) + B sin(λx)][C e^{λy} + D e^{−λy}].

Using the boundary condition u(0, y) = 0 yields A = 0. The condition u(a, y) = 0 gives

B sin(λa)[C e^{λy} + D e^{−λy}] = 0.

For a non-trivial solution,

B ≠ 0  ⟹  sin(λa) = 0  ⟹  λa = nπ, i.e., λ = nπ/a,  n = 1, 2, 3, ….

Therefore, the sequence of non-trivial solutions is given by

u_n(x, y) = sin(nπx/a)[C_n e^{nπy/a} + D_n e^{−nπy/a}].

Applying the BC u(x, b) = 0, we obtain

sin(nπx/a)[C_n e^{nπb/a} + D_n e^{−nπb/a}] = 0
⟹ C_n e^{nπb/a} + D_n e^{−nπb/a} = 0
⟹ D_n = −C_n e^{2nπb/a},  n = 1, 2, ….

Therefore, the solution now takes the form

u_n(x, y) = sin(nπx/a) · 2C_n e^{nπb/a} · [e^{nπ(y−b)/a} − e^{−nπ(y−b)/a}]/2
          = 2C_n e^{nπb/a} sin(nπx/a) sinh(nπ(y − b)/a).

Setting c_n = 2C_n e^{nπb/a} and using the superposition principle, we obtain

u(x, y) = Σ_{n=1}^∞ c_n sin(nπx/a) sinh(nπ(y − b)/a).

To satisfy the remaining nonhomogeneous BC, we must have

u(x, 0) = f(x) = Σ_{n=1}^∞ c_n sin(nπx/a) sinh(−nπb/a),


which is a half-range Fourier sine series. Therefore,

c_n sinh(−nπb/a) = (2/a) ∫₀^a f(x) sin(nπx/a) dx,

and this implies

c_n = −(2 / (a sinh(nπb/a))) ∫₀^a f(x) sin(nπx/a) dx.   (7)

Therefore, the required solution of the problem (3)-(4) is

u(x, y) = Σ_{n=1}^∞ c_n sin(nπx/a) sinh(nπ(y − b)/a)

with the coefficients c_n computed from (7).


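A worked instance may make the formulas concrete: with a = b = 1 and f(x) = sin(πx), only the n = 1 term survives and c1 sinh(−π) = 1, so u(x, y) = sin(πx) sinh(π(y − 1))/sinh(−π). The sketch below (illustrative) evaluates this one-term solution and checks the boundary conditions:

```python
import math

# With a = b = 1 and f(x) = sin(pi*x), only c_1 is nonzero and
# c_1 * sinh(-pi) = 1, giving u(x, y) = sin(pi*x)*sinh(pi*(y-1))/sinh(-pi).
def u(x, y):
    return math.sin(math.pi * x) * math.sinh(math.pi * (y - 1)) / math.sinh(-math.pi)

print(u(0.3, 0.0), math.sin(0.3 * math.pi))   # matches f at y = 0
print(u(0.0, 0.5), u(0.3, 1.0))               # vanishes on the other sides
```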
As a consequence of the superposition principle, we obtain the following result.

THEOREM 2. Let a_n, b_n, c_n and d_n be the Fourier sine coefficients of f(x), g(x), h(y) and k(y), respectively. Then the solution of the Dirichlet problem

PDE: u_xx + u_yy = 0,  0 < x < a, 0 < y < b,
BC: u(x, 0) = f(x),  u(x, b) = g(x),  0 ≤ x ≤ a,
    u(0, y) = h(y),  u(a, y) = k(y),  0 ≤ y ≤ b,

is

u(x, y) = Σ_{n=1}^∞ [ A_n sin(nπx/a) sinh(nπ(b − y)/a) + B_n sin(nπx/a) sinh(nπy/a)
        + C_n sin(nπy/b) sinh(nπ(a − x)/b) + D_n sin(nπy/b) sinh(nπx/b) ],

where

A_n = a_n / sinh(nπb/a),  B_n = b_n / sinh(nπb/a),
C_n = c_n / sinh(nπa/b),  D_n = d_n / sinh(nπa/b).

Practice Problems

1. Solve the following BVP:

u_xx + u_yy = 0,  0 < x < 1, 0 < y < 1,
u(x, 0) = x(x − 1),  u(x, 1) = 0,  0 ≤ x ≤ 1,
u(0, y) = 0,  u(1, y) = 0,  0 ≤ y ≤ 1.


2. Solve the following BVP:

u_xx + u_yy = 0,  0 < x < π, 0 < y < π,
u(x, 0) = sin x,  u(x, π) = sin x,  0 ≤ x ≤ π,
u(0, y) = sin y,  u(π, y) = sin y,  0 ≤ y ≤ π.


Lecture 4

The Mixed BVP for a Rectangle

In this lecture, we shall consider solving mixed BVPs for the Laplace equation. To begin with, let us consider the following Neumann problem for a rectangle:

PDE: u_xx + u_yy = 0,  0 < x < a, 0 < y < b,   (1)
BC: u_y(x, 0) = f(x),  u_y(x, b) = g(x),  0 ≤ x ≤ a,   (2)
    u_x(0, y) = h(y),  u_x(a, y) = k(y),  0 ≤ y ≤ b.

This problem has no solution unless the following compatibility condition holds:

∫₀^a g(x) dx − ∫₀^a f(x) dx + ∫₀^b k(y) dy − ∫₀^b h(y) dy = 0.

Solution. If u(x, y) is a solution of (1), then

0 = ∫₀^b ∫₀^a (u_xx + u_yy) dx dy = ∫₀^b ∫₀^a u_xx dx dy + ∫₀^a ∫₀^b u_yy dy dx
  = ∫₀^b [u_x(a, y) − u_x(0, y)] dy + ∫₀^a [u_y(x, b) − u_y(x, 0)] dx
  = ∫₀^b k(y) dy − ∫₀^b h(y) dy + ∫₀^a g(x) dx − ∫₀^a f(x) dx,

where we have used the fundamental theorem of calculus and Fubini's theorem.
REMARK 1. The compatibility condition is an immediate consequence of the following special case of Green's theorem:

∮_C ∇u · n ds = ∮_C u_x dy − u_y dx = ∬_R (u_xx + u_yy) dx dy,

i.e., the flux of the gradient of u through the boundary equals the integral of ∇²u over the interior.

Note that we only require u_x and u_y to be continuous on the closed rectangle; we do not demand that the second partials of u extend continuously to the closed rectangle.
We now consider solving the Laplace equation with mixed boundary conditions.

EXAMPLE 2. Solve the following BVP:

PDE: u_xx + u_yy = 0,  0 < x < a, 0 < y < b,   (3)
BC: u(x, 0) = 0,  u(x, b) = 0,  0 ≤ x ≤ a,   (4)
    u(0, y) = g(y),  u_x(a, y) = h(y),  0 ≤ y ≤ b.


Solution. The solution of this problem has the form

u(x, y) = u1(x, y) + u2(x, y),

where u1 and u2 satisfy (3) with the BC

(BC)₁: u1(x, 0) = u1(x, b) = 0, 0 ≤ x ≤ a;  u1(0, y) = g(y), u1x(a, y) = 0, 0 ≤ y ≤ b,

and

(BC)₂: u2(x, 0) = u2(x, b) = 0, 0 ≤ x ≤ a;  u2(0, y) = 0, u2x(a, y) = h(y), 0 ≤ y ≤ b.

We shall determine each of u1 and u2 by the method of separation of variables.

Step 1 (Solving for u1): Separating variables as u1(x, y) = X(x)Y(y) and substituting in (3), we obtain

X″(x)/X(x) + Y″(y)/Y(y) = 0.

This leads to the following ODEs:

X″(x) − λX(x) = 0,  0 < x < a,   (5)
Y″(y) + λY(y) = 0,  0 < y < b,   (6)

for a constant λ. Since u1 satisfies (BC)₁, we must have

Y(0) = Y(b) = 0,   (7)
X′(a) = 0.   (8)

Nontrivial solutions of (6) with BC (7) are

Y_n(y) = sin(nπy/b),

corresponding to

λ = λ_n = (nπ/b)²,  n ∈ ℕ.

The differential equation for X(x),

X″(x) − (nπ/b)² X(x) = 0,

has solutions of the form

X(x) = C1 cosh(nπx/b) + C2 sinh(nπx/b).

The condition (8) yields C2/C1 = −tanh(nπa/b). Thus, a sequence of solutions X(x) is given by

X_n(x) = a_n [ cosh(nπx/b) − tanh(nπa/b) sinh(nπx/b) ].

By the superposition principle, the solution u1 is expressed as

u1(x, y) = Σ_{n=1}^∞ a_n [ cosh(nπx/b) − tanh(nπa/b) sinh(nπx/b) ] sin(nπy/b).   (9)

The boundary condition u1(0, y) = g(y), 0 ≤ y ≤ b, yields

u1(0, y) = Σ_{n=1}^∞ a_n sin(nπy/b) = g(y),  0 ≤ y ≤ b,

with the a_n's given by

a_n = (2/b) ∫₀^b g(y) sin(nπy/b) dy.   (10)

Step 2 (Solving for u2): Suppose u2(x, y) = X(x)Y(y) satisfies (3) and (BC)₂. Arguing as before, we obtain the ODEs (5) and (6) for X(x) and Y(y) with the boundary conditions

Y(0) = Y(b) = 0;  X(0) = 0.

The non-trivial solutions corresponding to

λ = λ_n = (nπ/b)²,  n ∈ ℕ,

are

Y_n(y) = sin(nπy/b).

For X(x), we have the ODE

X″(x) − (nπ/b)² X(x) = 0,  X(0) = 0.

It has solutions of the form

X_n(x) = b_n sinh(nπx/b),  n ∈ ℕ.

Thus, u2(x, y) is given by

u2(x, y) = Σ_{n=1}^∞ b_n sinh(nπx/b) sin(nπy/b),   (11)


which must satisfy the boundary condition u2x(a, y) = h(y). This leads to

b_n = (2 / (nπ cosh(nπa/b))) ∫₀^b h(y) sin(nπy/b) dy.   (12)

Step 3 (Writing the solution): The solution of (3)-(4) is obtained as

u(x, y) = u1(x, y) + u2(x, y),

where a_n and b_n are determined by (10) and (12), respectively.

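As an illustrative instance of this construction (an assumption-laden sketch, not part of the notes), take a = b = 1, g(y) = sin(πy) and h = 0; then a1 = 1 is the only nonzero coefficient and u = u1. The code checks the Dirichlet data at x = 0 and the Neumann condition at x = a:

```python
import math

# Worked case a = b = 1, g(y) = sin(pi*y), h = 0:
# u(x, y) = [cosh(pi*x) - tanh(pi*a)*sinh(pi*x)] * sin(pi*y).
def u(x, y, a=1.0):
    X = math.cosh(math.pi * x) - math.tanh(math.pi * a) * math.sinh(math.pi * x)
    return X * math.sin(math.pi * y)

def ux(x, y, a=1.0):  # x-derivative, to check the Neumann condition at x = a
    Xp = math.pi * (math.sinh(math.pi * x) - math.tanh(math.pi * a) * math.cosh(math.pi * x))
    return Xp * math.sin(math.pi * y)

print(u(0.0, 0.3), math.sin(0.3 * math.pi))  # Dirichlet data at x = 0
print(ux(1.0, 0.3))                          # vanishes at x = a
```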
Practice Problems

1. Solve the following Neumann BVP:

u_xx + u_yy = 0,  0 < x < a, 0 < y < b,
u_y(x, 0) = 0,  u_y(x, b) = h(x),  0 ≤ x ≤ a,
u_x(0, y) = 0,  u_x(a, y) = 0,  0 ≤ y ≤ b,

given that h(x) is continuous and ∫₀^a h(x) dx = 0. Why is this assumption needed?

2. Find a solution of the Neumann BVP:

u_xx + u_yy = 0,  0 < x < π, 0 < y < π,
u_y(x, 0) = cos x,  u_y(x, π) = 0,  0 ≤ x ≤ π,
u_x(0, y) = 0,  u_x(π, y) = 0,  0 ≤ y ≤ π.

By adding a constant, find a solution such that u(0, 0) = 0.

3. Solve the following mixed BVP:

u_xx + u_yy = 0,  0 < x < a, 0 < y < b,
u(x, 0) = 2x,  u(x, b) = x²,  0 ≤ x ≤ a,
u_x(0, y) = 0,  u_x(a, y) = 0,  0 ≤ y ≤ b.


Lecture 5

The Dirichlet Problems for Annuli

Let us consider an annular region or a disk D in ℝ². To solve the Dirichlet problem in D, it is most natural to use polar coordinates. Polar coordinates (r, θ) of a point in the plane are related to its Cartesian coordinates (x, y) by

x = r cos θ and y = r sin θ,   (1)

where r = (x² + y²)^{1/2}. Set

U(r, θ) ≡ u(x, y) = u(r cos θ, r sin θ).

Observe that

U_r = u_x x_r + u_y y_r = u_x cos θ + u_y sin θ

and

U_θ = u_x x_θ + u_y y_θ = −u_x r sin θ + u_y r cos θ.

Hence,

U_rr = u_xx cos²θ + 2u_xy sin θ cos θ + u_yy sin²θ

and

U_θθ = −(u_xx x_θ + u_xy y_θ) r sin θ − u_x r cos θ + (u_yx x_θ + u_yy y_θ) r cos θ − u_y r sin θ
     = r²(u_xx sin²θ − 2u_xy cos θ sin θ + u_yy cos²θ) − r(u_x cos θ + u_y sin θ).

Thus,

U_rr + (1/r) U_r + (1/r²) U_θθ = u_xx + u_yy.
The Dirichlet problem for the annulus: We formulate the Dirichlet problem for the annulus in polar coordinates as follows:

PDE: U_rr + U_r/r + U_θθ/r² = 0,  r1 < r < r2,   (2)
BC: U(r2, θ) = f(θ),  f(θ + 2π) = f(θ),
    U(r1, θ) = g(θ),  g(θ + 2π) = g(θ),
PC: U(r, θ + 2π) = U(r, θ),  r1 < r < r2,

where −∞ < θ < ∞ and PC stands for the periodicity condition. Here, f and g are continuous periodic functions with period 2π.


Using separation of variables, we seek solutions of the form

U(r, θ) = R(r)T(θ).

Substituting this into the PDE in (2) and separating variables, we obtain

R″(r)T(θ) + r^{−1}R′(r)T(θ) + r^{−2}R(r)T″(θ) = 0
⟹ (r²R″(r) + rR′(r))/R(r) = −T″(θ)/T(θ) = c = b²  (b > 0).

This leads to the two ODEs

T″(θ) + cT(θ) = 0,   (3)
r²R″(r) + rR′(r) − cR(r) = 0.   (4)

We get periodic solutions of period 2π when b = n and c = b² = n², for n = 0, 1, 2, …. In this case, solving (3), we obtain

T_n(θ) = a_n cos(nθ) + b_n sin(nθ),  n = 0, 1, 2, …,   (5)

where a_n and b_n are arbitrary constants. With c = n², equation (4) for R(r) is the Cauchy-Euler equation

r²R″(r) + rR′(r) − n²R(r) = 0.   (6)

This equation can be solved by taking R(r) = r^m. Substituting this into (6), we get

r² m(m − 1) r^{m−2} + r m r^{m−1} − n² r^m = 0  ⟹  (m² − n²) r^m = 0,

so r^m is a solution if m = ±n. The general solution is

R_n(r) = c_n r^n + d_n r^{−n} for n = 1, 2, 3, …,  and  R_0(r) = c_0 + d_0 log r for n = 0.

Putting together the expressions for T_n(θ) and R_n(r) in U(r, θ), we obtain

U_0(r, θ) = a_0 + α_0 log r,   (7)
U_n(r, θ) = (a_n r^n + α_n r^{−n}) cos(nθ) + (b_n r^n + β_n r^{−n}) sin(nθ),  n ≥ 1.   (8)

By the superposition principle, we obtain a more general solution of (2) as

U(r, θ) = U_0(r, θ) + Σ_{n=1}^∞ U_n(r, θ).   (9)


Suppose that f(θ) and g(θ) have Fourier series of the form

f(θ) = A_0/2 + Σ_{n=1}^∞ [A_n cos(nθ) + B_n sin(nθ)],   (10)
g(θ) = C_0/2 + Σ_{n=1}^∞ [C_n cos(nθ) + D_n sin(nθ)].   (11)

Comparing Fourier coefficients in the equations U(r2, θ) = f(θ) and U(r1, θ) = g(θ), we obtain

a_0 + α_0 log r2 = A_0/2,  a_0 + α_0 log r1 = C_0/2,   (12)
a_n r2^n + α_n r2^{−n} = A_n,  a_n r1^n + α_n r1^{−n} = C_n,   (13)
b_n r2^n + β_n r2^{−n} = B_n,  b_n r1^n + β_n r1^{−n} = D_n,  n = 1, 2, ….   (14)

Solving for a_0, α_0 from (12), for a_n, α_n from (13) and for b_n, β_n from (14), we obtain

a_0 = (½C_0 log r2 − ½A_0 log r1)/log Q,  α_0 = (½A_0 − ½C_0)/log Q,   (15)
a_n = (A_n r1^{−n} − C_n r2^{−n})/(Q^n − Q^{−n}),  α_n = (C_n r2^n − A_n r1^n)/(Q^n − Q^{−n}),   (16)
b_n = (B_n r1^{−n} − D_n r2^{−n})/(Q^n − Q^{−n}),  β_n = (D_n r2^n − B_n r1^n)/(Q^n − Q^{−n}),   (17)

where Q = r2/r1. This provides the constants a_n, α_n, b_n, β_n in terms of the given Fourier coefficients A_n, B_n, C_n, D_n of f(θ) and g(θ).

Thus, the solution of (2), where f(θ) and g(θ) are given by (10)-(11), is

U(r, θ) = a_0 + α_0 log r + Σ_{n=1}^∞ { [a_n r^n + α_n r^{−n}] cos(nθ) + [b_n r^n + β_n r^{−n}] sin(nθ) },   (18)

where a_0, α_0, a_n, α_n, b_n, β_n are defined by (15)-(17).
EXAMPLE 1. Solve the following Dirichlet problem:

U_rr + U_r/r + U_θθ/r² = 0,  1 < r < 2,
U(1, θ) = 1 + 4 cos(2θ),  U(2, θ) = 2 + 5 sin θ,
U(r, θ + 2π) = U(r, θ),  1 < r < 2.


Solution. Here r1 = 1 and r2 = 2, so Q = r2/r1 = 2. The boundary data give A_0 = 4, B_1 = 5 (from U(2, θ) = 2 + 5 sin θ) and C_0 = 2, C_2 = 4 (from U(1, θ) = 1 + 4 cos(2θ)); all other A_n, B_n, C_n and D_n are equal to 0.

Equating the Fourier coefficients in the BC with those of U(r, θ) in (18), we obtain

a_0 + α_0 log 1 = 1,  a_0 + α_0 log 2 = 2,
b_1 + β_1 = 0,  2b_1 + (1/2)β_1 = 5,
a_2 + α_2 = 4,  2²a_2 + 2^{−2}α_2 = 0.

Solving these systems ((15) for a_0 and α_0, (17) for b_1 and β_1, and (16) for a_2 and α_2), we obtain

a_0 = 1,  α_0 = 1/log 2,  b_1 = 10/3,  β_1 = −10/3,  a_2 = −4/15,  α_2 = 64/15.

All other systems in (15)-(17) have only zero solutions. The solution of the problem is then

U(r, θ) = 1 + log r/log 2 + (10r/3 − 10/(3r)) sin θ + (−4r²/15 + 64/(15r²)) cos(2θ).
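The small linear systems of Example 1 can also be solved mechanically. The sketch below (illustrative, not part of the notes) reproduces the coefficients directly from the pairs of equations:

```python
import math

# Example 1 data: r1 = 1, r2 = 2, inner data 1 + 4*cos(2t), outer data 2 + 5*sin(t).
log2 = math.log(2.0)

# a0 + alpha0*log(1) = 1  and  a0 + alpha0*log(2) = 2
a0 = 1.0
alpha0 = (2.0 - a0) / log2

# b1 + beta1 = 0  and  2*b1 + beta1/2 = 5  =>  b1*(2 - 1/2) = 5
b1 = 5.0 / (2.0 - 0.5)
beta1 = -b1

# a2 + alpha2 = 4  and  4*a2 + alpha2/4 = 0  =>  15*a2 = -4
a2 = -4.0 / 15.0
alpha2 = 4.0 - a2
print(a0, alpha0, b1, beta1, a2, alpha2)
```

The printed values match those in the worked solution, and the pair (a2, α2) indeed satisfies the outer-boundary equation 4a2 + α2/4 = 0.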
EXAMPLE 2. Solve the following problem:

U_rr + U_r/r + U_θθ/r² = 0,  1 < r < 2,
U(1, θ) = 0,  U(2, θ) = sin θ,  0 ≤ θ ≤ 2π.

Solution. Here B_1 = 1 and all other Fourier coefficients of the data vanish, so by (15)-(17) the only nonzero coefficients are

b_1 = 2/3,  β_1 = −2/3.

Substituting these values in (18) yields the solution

U(r, θ) = (2/3)(r − 1/r) sin θ.

It is easy to verify that U(r, θ) satisfies Laplace's equation and the given BC.

Practice Problems

1. Solve the BVP:

U_rr + (1/r)U_r + (1/r²)U_θθ = 0,  1 < r < 2,
U(2, θ) = 1 + 4 cos θ + cos(2θ),  U(1, θ) = sin(2θ),
U(r, θ + 2π) = U(r, θ).


2. Solve the BVP:

U_rr + (1/r)U_r + (1/r²)U_θθ = 0,  1 < r < 2, 0 ≤ θ < 2π,
U_r(1, θ) = sin θ,  U_r(2, θ) = cos θ,  0 ≤ θ ≤ 2π.

3. Solve the BVP:

U_rr + (1/r)U_r + (1/r²)U_θθ = 0,  1 < r < 2, −π < θ < π,
U(1, θ) = cos²θ,  U_r(2, θ) = 0,  −π < θ < π.


Lecture 6

The Dirichlet Problem for the Disk

The Dirichlet problem in a disk of radius r0 centered at (0, 0) can be expressed as

PDE: U_rr + U_r/r + U_θθ/r² = 0,  0 < r < r0, −π ≤ θ ≤ π,   (1)
BC: U(r0, θ) = f(θ),  −π ≤ θ ≤ π,

where f(θ) is a given continuous periodic function of period 2π (f(θ + 2π) = f(θ)). To solve the above problem, we use the method of separation of variables.

Step 1 (Writing the ODEs): Seek solutions of the form

U(r, θ) = R(r)T(θ),

where 0 ≤ r ≤ r0 and −π ≤ θ ≤ π. Substituting into (1) and separating variables yield

R″(r)T(θ) + r^{−1}R′(r)T(θ) + r^{−2}R(r)T″(θ) = 0
⟹ (r²R″(r) + rR′(r))/R(r) = −T″(θ)/T(θ) = k,

which leads to the following two ODEs:

T″(θ) + kT(θ) = 0,   (2)
r²R″(r) + rR′(r) − kR(r) = 0.   (3)

Step 2 (Solving the ODEs):

Case (a): When k < 0, the general solution of (2) is the sum of two real exponentials. Hence we have only trivial 2π-periodic solutions (see Lecture 5).

Case (b): When k = 0, we find that T(θ) = Aθ + B is the solution of (2). This linear function is periodic only when A = 0; that is, T_0(θ) = B is the only 2π-periodic solution corresponding to k = 0.

Case (c): When k > 0, the general solution of (2) is

T(θ) = A cos(√k θ) + B sin(√k θ).

In this case we get a nontrivial 2π-periodic solution only when √k = n, n = 1, 2, …. Hence, we obtain the nontrivial 2π-periodic solutions

T_n(θ) = A_n cos(nθ) + B_n sin(nθ)   (4)


corresponding to √k = n, n = 1, 2, ….

Now for k = n², n = 0, 1, 2, …, equation (3) is the Cauchy-Euler equation

r²R″(r) + rR′(r) − n²R(r) = 0.   (5)

When n = 0, the general solution is

R_0(r) = C + D ln r.

Since ln r → −∞ as r → 0+, this solution is unbounded near r = 0 when D ≠ 0. Therefore, we must choose D = 0 if U(r, θ) is to be continuous at r = 0. We then have R_0(r) = C, and so U_0(r, θ) = R_0(r)T_0(θ) = CB. For convenience, we write U_0(r, θ) in the form

U_0(r, θ) = A_0/2,   (6)

where A_0 is an arbitrary constant.

When k = n², n = 1, 2, …, the general solution of (3) is given by

R_n(r) = C_n r^n + D_n r^{−n}.

Since r^{−n} → ∞ as r → 0+, we must set D_n = 0 in order for U(r, θ) to be bounded at r = 0. Thus R_n(r) = C_n r^n, and for each n = 1, 2, … we have the solutions

U_n(r, θ) = R_n(r)T_n(θ) = C_n r^n [A_n cos(nθ) + B_n sin(nθ)].

By the superposition principle, we write

U(r, θ) = A_0/2 + Σ_{n=1}^∞ C_n r^n [A_n cos(nθ) + B_n sin(nθ)].

This series may be written in the equivalent form

U(r, θ) = A_0/2 + Σ_{n=1}^∞ (r/r0)^n [A_n cos(nθ) + B_n sin(nθ)],   (7)

where the A_n's and B_n's are constants that can be determined from the boundary condition. With r = r0 in (7), we have

f(θ) = A_0/2 + Σ_{n=1}^∞ [A_n cos(nθ) + B_n sin(nθ)].


Since f(θ) is 2π-periodic, we recognize A_n and B_n as Fourier coefficients:

A_n = (1/π) ∫_{−π}^{π} f(θ) cos(nθ) dθ,  n = 0, 1, …,   (8)
B_n = (1/π) ∫_{−π}^{π} f(θ) sin(nθ) dθ,  n = 1, 2, ….   (9)

We now summarize the solution of the Dirichlet problem for a disk as follows: in the Dirichlet problem (1), if

f(θ) = A_0/2 + Σ_{n=1}^∞ [A_n cos(nθ) + B_n sin(nθ)],

then the solution is given by

U(r, θ) = A_0/2 + Σ_{n=1}^∞ (r/r0)^n [A_n cos(nθ) + B_n sin(nθ)],

where A_n and B_n are given by (8) and (9), respectively.

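As an illustration (not part of the notes), the coefficients (8)-(9) can be computed by quadrature and the series summed. For the boundary data f(θ) = sin²θ = 1/2 − (1/2)cos(2θ) with r0 = 1, the exact solution is U = 1/2 − (r²/2)cos(2θ):

```python
import math

# Fourier coefficients (8)-(9) by quadrature for f(theta) = sin(theta)**2,
# then the disk series; exact: U = 1/2 - (r**2/2)*cos(2*theta) for r0 = 1.
def coeff(n, kind, N=4000):
    s = 0.0
    for k in range(N):
        t = -math.pi + 2 * math.pi * k / N
        w = math.cos(n * t) if kind == "A" else math.sin(n * t)
        s += math.sin(t) ** 2 * w
    return s * (2 * math.pi / N) / math.pi

def U(r, theta, r0=1.0, terms=6):
    total = coeff(0, "A") / 2
    for n in range(1, terms):
        total += (r / r0) ** n * (coeff(n, "A") * math.cos(n * theta)
                                  + coeff(n, "B") * math.sin(n * theta))
    return total

exact = 0.5 - 0.5 ** 2 / 2 * math.cos(2 * 0.7)
print(U(0.5, 0.7), exact)
```

Only A_0 = 1 and A_2 = −1/2 are nonzero, so the truncated series with six terms is already exact up to quadrature roundoff.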

EXAMPLE 1. Solve the following BVP:

PDE: U_rr + U_r/r + U_θθ/r² = 0,  0 ≤ r < 1,
BC: U(1, θ) = f(θ),

where f(θ) = 1 + sin θ + (1/2) sin(3θ) + cos(4θ).

Solution. Here r0 = 1. Note that f(θ) is already in the form of a Fourier series, with

A_0 = 2,  A_4 = 1,  B_1 = 1,  B_3 = 1/2,

and all other A_n, B_n equal to 0. The solution of the BVP is

U(r, θ) = A_0/2 + Σ_{n=1}^∞ (r/r0)^n [A_n cos(nθ) + B_n sin(nθ)]
        = 1 + r sin θ + (r³/2) sin(3θ) + r⁴ cos(4θ).

Exterior Dirichlet Problem: We shall now discuss the exterior Dirichlet problem, i.e., the Dirichlet problem outside the circle:

PDE: U_rr + U_r/r + U_θθ/r² = 0,  1 ≤ r < ∞,
BC: U(1, θ) = f(θ),  0 ≤ θ ≤ 2π.


This problem is solved in a manner similar to the interior Dirichlet problem. We require that the solutions remain bounded as r → ∞; that is, we discard the solutions

r^n cos(nθ),  r^n sin(nθ),  ln r

that are unbounded as r → ∞. The solution is given by

U(r, θ) = Σ_{n=0}^∞ r^{−n} [A_n cos(nθ) + B_n sin(nθ)],   (10)

where A_n and B_n are given by

A_0 = (1/2π) ∫₀^{2π} f(θ) dθ,
A_n = (1/π) ∫₀^{2π} f(θ) cos(nθ) dθ,
B_n = (1/π) ∫₀^{2π} f(θ) sin(nθ) dθ.

The detailed procedure is left as an exercise.

Practice Problems

1. Solve the Dirichlet problem

U_xx + U_yy = 0,  x² + y² < 1,
U(1, θ) = sin²θ,  −π ≤ θ ≤ π,

for the disk r ≤ 1.

2. Solve the BVP

U_rr + U_r/r + U_θθ/r² = 0,  0 ≤ r < 2, −π < θ < π,
U(2, θ) = 1 + 8 sin θ − 32 cos(4θ),  −π < θ < π.

3. Show that the exterior Dirichlet problem

U_rr + U_r/r + U_θθ/r² = 0,  1 ≤ r < ∞,
U(1, θ) = 1 + sin θ + cos(3θ),  0 ≤ θ < 2π,

has the solution

U(r, θ) = 1 + (1/r) sin θ + (1/r³) cos(3θ).

Module 8: The Fourier Transform Methods for PDEs

In the previous modules (Modules 5-7), the method of separation of variables was used to obtain solutions of initial and boundary value problems for partial differential equations posed on bounded spatial regions. The present module deals with partial differential equations defined on unbounded spatial regions. The mathematical tools used for solving initial and boundary value problems on unbounded spatial regions are integral transforms: the Fourier transform (FT), the Fourier sine transform (FST) and the Fourier cosine transform (FCT).

The outline of this module is as follows. The first lecture introduces the necessary background for the definition of the FT. The FST and FCT are introduced in the second lecture. Applications of the FT, FST and FCT to the three types of PDEs, viz. the heat equation, the wave equation and the Laplace equation, are described in the third, fourth and fifth lectures, respectively.

Lecture 1

The Fourier Transform

Recall that if f is a periodic function with period 2L on ℝ, then f has a Fourier series (FS) representation of the form

f(x) = (1/2)a_0 + Σ_{n=1}^∞ [ a_n cos(nπx/L) + b_n sin(nπx/L) ],   (1)

where

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx,  n = 0, 1, 2, …,

and

b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx,  n = 1, 2, ….

Fourier series are powerful tools for treating various problems involving periodic functions. Many practical problems, however, do not involve periodic functions. It is therefore desirable to generalize the method of Fourier series so as to include non-periodic functions. If f is not periodic, we may regard it as periodic with an infinite period, i.e., we would like to see what happens if we let L → ∞. We do this for reasons of motivation, as well as to make it plausible that for a non-periodic function one should expect an integral representation (a Fourier integral) instead of a Fourier series.

Set ω_n = nπ/L. Then (1) can be rewritten as

f(x) = (1/2)a_0 + Σ_{n=1}^∞ [a_n cos(ω_n x) + b_n sin(ω_n x)]
     = (1/2L) ∫_{−L}^{L} f(x) dx + (1/L) Σ_{n=1}^∞ [ cos(ω_n x) ∫_{−L}^{L} f(t) cos(ω_n t) dt + sin(ω_n x) ∫_{−L}^{L} f(t) sin(ω_n t) dt ].

Note that

ω_{n+1} − ω_n = (n + 1)π/L − nπ/L = π/L.

Setting Δω = ω_{n+1} − ω_n = π/L, we write the Fourier series in the form

f(x) = (1/2L) ∫_{−L}^{L} f(x) dx + (1/π) Σ_{n=1}^∞ [ cos(ω_n x) Δω ∫_{−L}^{L} f(t) cos(ω_n t) dt + sin(ω_n x) Δω ∫_{−L}^{L} f(t) sin(ω_n t) dt ].


This representation is valid for any fixed L, arbitrarily large but finite. Letting L → ∞ and assuming that the resulting non-periodic function f is absolutely integrable over (−∞, ∞), i.e.,

∫_{−∞}^{∞} |f(x)| dx < ∞,

it seems plausible that the infinite series (1) becomes an integral from 0 to ∞, i.e.,

f(x) = (1/π) ∫₀^∞ [ cos(ωx) ∫_{−∞}^{∞} f(t) cos(ωt) dt + sin(ωx) ∫_{−∞}^{∞} f(t) sin(ωt) dt ] dω.

The above equation can be put in the form

f(x) = ∫₀^∞ [A(ω) cos(ωx) + B(ω) sin(ωx)] dω,   (2)

which is called the Fourier integral of f, where

A(ω) = (1/π) ∫_{−∞}^{∞} f(t) cos(ωt) dt,  B(ω) = (1/π) ∫_{−∞}^{∞} f(t) sin(ωt) dt.

This motivates the following result.

THEOREM 1. If f is piecewise continuous in every bounded interval of ℝ and

∫_{−∞}^{∞} |f(x)| dx < ∞,

then f can be represented by a Fourier integral

f(x) = ∫₀^∞ [A(ω) cos(ωx) + B(ω) sin(ωx)] dω,   (3)

where

A(ω) = (1/π) ∫_{−∞}^{∞} f(t) cos(ωt) dt,  B(ω) = (1/π) ∫_{−∞}^{∞} f(t) sin(ωt) dt.   (4)

The Fourier integral of f converges to f(x) at every point x where f is continuous. If f is discontinuous at a point x, the integral converges to

[f(x+) + f(x−)]/2,

i.e., the average value of the left- and right-hand limits of f at that point.


In order to motivate the definition of the Fourier transform, we first express the complex form of the Fourier integral. Using the identity cos(a − b) = cos a cos b + sin a sin b, we write the integral (3) as

f(x) = (1/π) ∫₀^∞ [ ∫_{−∞}^{∞} f(t) cos(ω(x − t)) dt ] dω.   (5)

Since cos θ = (e^{iθ} + e^{−iθ})/2, we obtain

f(x) = (1/2π) ∫₀^∞ ∫_{−∞}^{∞} f(t) { e^{iω(x−t)} + e^{−iω(x−t)} } dt dω
     = (1/2π) ∫₀^∞ ∫_{−∞}^{∞} f(t) e^{iω(x−t)} dt dω + (1/2π) ∫₀^∞ ∫_{−∞}^{∞} f(t) e^{−iω(x−t)} dt dω.

Replacing ω by −ω in the second term and adjusting the limits from −∞ to 0, we obtain

f(x) = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(t) e^{iω(x−t)} dt dω
     = (1/√(2π)) ∫_{−∞}^{∞} e^{iωx} [ (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{−iωt} dt ] dω,

which is the complex form of the Fourier integral of f. This leads to the following pair of transforms.

DEFINITION 2. (The Fourier Transform) Let f : (−∞, ∞) → ℝ or ℂ. The Fourier transform (FT) of f(x) is defined by

F(f)(ω) = f̂(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iωx} dx  (−∞ < ω < ∞),   (6)

provided this integral exists.


REMARK 3. Not every function has a Fourier transform. For example, the constant function C (C ≠ 0), sin x, e^x and x² do not have an FT. Only functions that tend to zero sufficiently fast as |x| → ∞ (rapidly decreasing functions) have an FT.

DEFINITION 4. (The Inverse Fourier Transform) The inverse Fourier transform of a function f̂(ω) (−∞ < ω < ∞) is defined as

F^{−1}(f̂)(x) = f(x) = (1/√(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωx} dω  (−∞ < x < ∞),   (7)

provided this integral exists.

EXAMPLE 5. Find the FT of the function

f(x) = 1 for |x| ≤ L,  f(x) = 0 for |x| > L.


Solution. Using the definition of the FT, we have

    f̂(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iωx} dx
          = (1/√(2π)) ∫_{−L}^{L} e^{−iωx} dx
          = (1/√(2π)) [ e^{−iωx} / (−iω) ]_{x=−L}^{x=L}
          = (1/√(2π)) (e^{iωL} − e^{−iωL}) / (iω)
          = (1/√(2π)) · 2 sin(Lω)/ω.

Note that even though f(x) vanishes for x outside the interval [−L, L], the same is not
true of f̂(ω). In general, it can be shown that if f and f̂ both vanish outside [−L, L],
then f ≡ 0.
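As a quick sanity check (an illustrative sketch, not part of the original notes), the transform found in Example 5 can be verified numerically. The snippet below, assuming NumPy and SciPy are available, compares a quadrature evaluation of the defining integral against the closed form √(2/π) sin(Lω)/ω:

```python
# Numerical check of Example 5: FT of the box function on [-L, L].
import numpy as np
from scipy.integrate import quad

L = 1.5

def ft_box(w):
    # f_hat(w) = (1/sqrt(2*pi)) * integral_{-L}^{L} e^{-i w x} dx,
    # computed via separate real and imaginary parts
    re, _ = quad(lambda x: np.cos(w * x), -L, L)
    im, _ = quad(lambda x: -np.sin(w * x), -L, L)
    return (re + 1j * im) / np.sqrt(2 * np.pi)

for w in [0.3, 1.0, 4.7]:
    exact = np.sqrt(2 / np.pi) * np.sin(L * w) / w
    assert abs(ft_box(w) - exact) < 1e-10
```

The imaginary part vanishes because the box function is even, consistent with the real-valued closed form.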
Some Basic Properties of Fourier Transforms:
Linearity: F is a linear transformation. For any two functions f₁ and f₂ with FTs
F[f₁] and F[f₂], respectively, and any constants c₁ and c₂, we have
    F[c₁f₁ + c₂f₂] = c₁F[f₁] + c₂F[f₂].
Conjugation: Let f(x) be a function with FT F[f]. Then the FT of the complex
conjugate f̄(x) is the complex conjugate of F[f](−ω):
    F[f̄(x)](ω) = \overline{F[f](−ω)}.
Continuity: Let f(x) be an absolutely integrable function with FT f̂(ω). Then
f̂(ω) is a continuous function of ω.
Convolution: We know that the convolution of the functions f and g, denoted by
f ∗ g, is defined by
    (f ∗ g)(x) = ∫_{−∞}^{∞} f(x − t) g(t) dt,
provided the integral exists for each x (e.g., if f is bounded and g is absolutely
integrable). Let f̂(ω) and ĝ(ω) be the FTs of f and g, respectively. Then
    F(f ∗ g) = √(2π) F(f) F(g) = √(2π) f̂(ω) ĝ(ω).
Note: In general, F[f(x)g(x)] ≠ F[f] F[g].
Parseval's identity: For any two functions f(x) and g(x) with FTs f̂(ω) and ĝ(ω),
respectively, we have
    ∫_{−∞}^{∞} f(x) \overline{g(x)} dx = ∫_{−∞}^{∞} f̂(ω) \overline{ĝ(ω)} dω.
In particular,
    ∫_{−∞}^{∞} |f(x)|² dx = ∫_{−∞}^{∞} |f̂(ω)|² dω.


Transformation of partial derivatives:
(i) Let u = u(x, t) be a function defined for −∞ < x < ∞ and t ≥ 0. If u(x, t) → 0
as x → ±∞, and F[u](ω, t) = U(ω, t), then
    F[u_x](ω, t) = iω F[u] = iω U(ω, t).
If, in addition, u_x(x, t) → 0 as x → ±∞, then
    F[u_xx](ω, t) = −ω² F[u] = −ω² U(ω, t).
(ii) If we transform the partial derivative u_t(x, t) (the variable of integration in the
transform being x), then the transform is given by
    F[u_t](ω, t) = d/dt {F[u]}(ω, t) = dU/dt (ω, t).
The Fourier transform of a time derivative equals the time derivative of the Fourier
transform. This shows that time differentiation and the FT with respect to x commute.

Practice Problems
1. Compute the complex Fourier series of the function f(x) = e^{ax} cos(bx).
2. Show that if f(x) is absolutely integrable on (−∞, ∞), then
       F[e^{ibx} f(ax)](ω) = (1/|a|) F[f]((ω − b)/a),   a, b ∈ R, a ≠ 0.
3. Find the FT of
   (A) f(x) = e^{−cx²}, where c > 0 is a constant.
   (B) f(x) = e^{−|x|}.
   (C) f(x) = sin(x²).
4. Verify the following properties of the FT:
   (A) F[u_x] = iω F[u]
   (B) F[u_xx] = −ω² F[u]
   (C) F[u(x + c)] = e^{icω} F[u], for any c ∈ R


Lecture 2

Fourier Sine and Cosine Transformations

In this lecture we shall discuss the Fourier sine and cosine transforms and their properties.
These transforms are appropriate for problems over semi-infinite intervals in a spatial
variable in which the function or its derivative is prescribed on the boundary.
If f is an even or an odd function, then f can be represented by a Fourier integral
which takes a simpler form than in the case of an arbitrary function.
If f(x) is an even function, then B(ω) = 0 in (3), and
    A(ω) = 2 ∫₀^∞ f(t) cos(ωt) dt.
Hence, the Fourier integral reduces to the simpler form
    f(x) = (1/π) ∫₀^∞ A(ω) cos(ωx) dω.
Similarly, if f(x) is odd, then A(ω) = 0 in (3), and
    B(ω) = 2 ∫₀^∞ f(t) sin(ωt) dt.
Thus, (3) becomes
    f(x) = (1/π) ∫₀^∞ B(ω) sin(ωx) dω.
These Fourier integrals motivate the definitions of the Fourier cosine transform (FCT)
and the Fourier sine transform (FST). The FT of an even function f is called the FCT
of f; the FT of an odd function f is called the FST of f.
DEFINITION 1. (Fourier Cosine Transform) The FCT of a function f : [0, ∞) → R
is defined as
    F_c(f) = f̂_c(ω) = F_c(ω) = √(2/π) ∫₀^∞ f(x) cos(ωx) dx    (0 ≤ ω < ∞).    (1)

DEFINITION 2. (Inverse Fourier Cosine Transform) The inverse FCT (IFCT) of a
function f̂_c(ω) (0 ≤ ω < ∞) is defined as
    F_c^{−1}[f̂_c] = f(x) = √(2/π) ∫₀^∞ f̂_c(ω) cos(ωx) dω    (0 ≤ x < ∞).    (2)

DEFINITION 3. (Fourier Sine Transform) The FST of a function f : [0, ∞) → R is
defined as
    F_s(f) = f̂_s(ω) = F_s(ω) = √(2/π) ∫₀^∞ f(x) sin(ωx) dx    (0 ≤ ω < ∞).    (3)

DEFINITION 4. (Inverse Fourier Sine Transform) The inverse FST (IFST) of a
function f̂_s(ω) (0 ≤ ω < ∞) is defined as
    F_s^{−1}[f̂_s] = f(x) = √(2/π) ∫₀^∞ f̂_s(ω) sin(ωx) dω    (0 ≤ x < ∞).    (4)

Basic Properties of Fourier Cosine and Sine Transforms:

Linearity:
    F_c[af + bg] = a F_c[f] + b F_c[g],
    F_s[af + bg] = a F_s[f] + b F_s[g].

Transforms of derivatives: Let f be a function defined for x ≥ 0 with f(x) → 0 as
x → ∞. Then
    F_s[f′(x)] = √(2/π) ∫₀^∞ f′(x) sin(ωx) dx
               = √(2/π) [ sin(ωx) f(x) ]_{x=0}^{x=∞} − ω √(2/π) ∫₀^∞ f(x) cos(ωx) dx
               = −ω F_c[f].
If we assume in addition that f′(x) → 0 as x → ∞, then
    F_s[f″(x)] = √(2/π) ∫₀^∞ f″(x) sin(ωx) dx
               = √(2/π) [ sin(ωx) f′(x) ]_{x=0}^{x=∞} − ω √(2/π) ∫₀^∞ f′(x) cos(ωx) dx
               = −ω √(2/π) [ cos(ωx) f(x) ]_{x=0}^{x=∞} − ω² √(2/π) ∫₀^∞ f(x) sin(ωx) dx
               = √(2/π) ω f(0) − ω² F_s[f].
Thus, we have
    F_s[f′(x)] = −ω F_c[f],
    F_s[f″(x)] = −ω² F_s[f] + √(2/π) ω f(0).
A similar computation gives the corresponding results for the Fourier cosine transform:
    F_c[f′(x)] = ω F_s[f] − √(2/π) f(0),
    F_c[f″(x)] = −ω² F_c[f] − √(2/π) f′(0).


Note: Observe that the FST of the first derivative of a function is given in terms of
the FCT of the function itself, whereas the FST of the second derivative is given in
terms of the sine transform of the function, with an additional boundary term
√(2/π) ω f(0).
Transformation of partial derivatives:
(i) Let u = u(x, t) be a function defined for x ≥ 0 and t ≥ 0. If u(x, t) → 0 as
x → ∞, and F_s[u](ω, t) = û_s(ω, t), then
    F_s[u_x](ω, t) = −ω F_c[u](ω, t),
    F_c[u_x](ω, t) = ω F_s[u](ω, t) − √(2/π) u(0, t).
If, in addition, u_x(x, t) → 0 as x → ∞, then
    F_s[u_xx](ω, t) = −ω² F_s[u](ω, t) + √(2/π) ω u(0, t),
    F_c[u_xx](ω, t) = −ω² F_c[u](ω, t) − √(2/π) u_x(0, t).
(ii) If we transform the partial derivative u_t(x, t) (the variable of integration in the
transform being x), then the transform is given by
    F_s[u_t](ω, t) = d/dt {F_s[u]}(ω, t),
    F_c[u_t](ω, t) = d/dt {F_c[u]}(ω, t).
Thus, time differentiation commutes with both the Fourier cosine and sine transforms.
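The derivative rule F_s[f′] = −ω F_c[f] can be checked numerically. The sketch below (an illustration, not part of the notes) uses f(x) = e^{−x}, which satisfies f(x) → 0 as x → ∞, and evaluates both sides by quadrature:

```python
# Numerical check of the rule F_s[f'] = -w * F_c[f] for f(x) = e^{-x}.
import numpy as np
from scipy.integrate import quad

def fc(w):
    # F_c[f](w) = sqrt(2/pi) * int_0^inf e^{-x} cos(w x) dx
    val, _ = quad(lambda x: np.exp(-x) * np.cos(w * x), 0, np.inf)
    return np.sqrt(2 / np.pi) * val

def fs_of_fprime(w):
    # F_s[f'](w), with f'(x) = -e^{-x}
    val, _ = quad(lambda x: -np.exp(-x) * np.sin(w * x), 0, np.inf)
    return np.sqrt(2 / np.pi) * val

for w in [0.5, 1.0, 3.2]:
    assert abs(fs_of_fprime(w) - (-w * fc(w))) < 1e-8
```

For this particular f, both sides also have the closed form −√(2/π) ω/(1 + ω²), which the quadrature reproduces.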

Practice Problems
1. Find the FST and FCT of the function
       f(x) = { 1,  0 ≤ x ≤ 2,
              { 0,  x > 2.
2. Show that if u = u(x, t) and u(x, t) → 0 as x → ∞, then
   (A) F_s[u_x](ω, t) = −ω F_c[u](ω, t),
   (B) F_c[u_x](ω, t) = −√(2/π) u(0, t) + ω F_s[u](ω, t).
3. Show that if u(x, t) and u_x(x, t) → 0 as x → ∞, then
   (A) F_s[u_xx](ω, t) = −ω² F_s[u](ω, t) + √(2/π) ω u(0, t),
   (B) F_c[u_xx](ω, t) = −ω² F_c[u](ω, t) − √(2/π) u_x(0, t).


Lecture 3


Heat Flow Problems

In this lecture we shall study some applications of the Fourier transform to heat flow
problems in which the spatial domain is infinite or semi-infinite.

Heat flow problem in an infinite rod

Consider heat flow in an infinite rod with initial temperature u(x, 0) = f(x). We shall
prove that if the function f(x) is continuous and either absolutely integrable, i.e.,
    ∫_{−∞}^{∞} |f(x)| dx < ∞,
or bounded (i.e., |f(x)| ≤ M for all x), then the following IVP has a solution u(x, t)
which is continuous throughout the half-plane t ≥ 0, −∞ < x < ∞.
PDE:  u_t(x, t) = α² u_xx(x, t),   −∞ < x < ∞, t > 0,    (1)
IC:   u(x, 0) = f(x),              −∞ < x < ∞,           (2)

with u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0.


The stepwise solution procedure is given below.
Step 1. (Transforming the problem to an IVP in ODE)
We apply the FT F to the PDE (1) and IC (2) and use the properties of the FT to reduce
the given Cauchy problem to an IVP for an ODE. Let
    F[u] = û(ω, t),   F[f(x)] = f̂(ω).
Taking the FT of both sides of the PDE (1) and IC (2) with respect to the x variable, we
obtain
    F[u_t] = α² F[u_xx],   F[u(x, 0)] = F[f(x)].
Using the properties of the FT
    F[u_t] = d/dt û(ω, t),   F[u_xx] = −ω² û(ω, t),
we have
    d/dt û(ω, t) = −α²ω² û(ω, t),    (3)
    û(ω, 0) = f̂(ω).                  (4)



Step 2. (Solving the transformed problem)
Note that (3) is a first-order IVP for an ODE in t for each fixed ω. The solution to this
problem is given by
    û(ω, t) = f̂(ω) e^{−α²ω²t}.    (5)

Step 3. (Finding the inverse transform)
To find the solution u(x, t), we take the inverse transform, with t fixed, to obtain
    u(x, t) = F^{−1}[û(ω, t)] = F^{−1}[f̂(ω) e^{−α²ω²t}].

Step 4. (Using the convolution property of the inverse FT)
Using the convolution property of F^{−1}, we write
    u(x, t) = F^{−1}[f̂(ω) e^{−α²ω²t}]
            = (1/√(2π)) F^{−1}[f̂(ω)] ∗ F^{−1}[e^{−α²ω²t}]
            = (1/√(2π)) f(x) ∗ [ (1/√(2α²t)) e^{−x²/(4α²t)} ]
            = (1/(2α√(πt))) ∫_{−∞}^{∞} f(ξ) e^{−(x−ξ)²/(4α²t)} dξ.

REMARK 1.
Note that the integrand is made up of two factors: the initial temperature f(ξ) and
the function
    G(x, t; ξ) = (1/(2α√(πt))) e^{−(x−ξ)²/(4α²t)}.
The function G is called the Green's function or impulse-response function; it is the
temperature response to an initial temperature impulse at x = ξ.
The major drawback of the FT method is that not all functions can be transformed.
Only functions that damp to zero sufficiently fast as |x| → ∞ have FTs.
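The solution formula of Step 4 can be exercised numerically. The sketch below (illustrative, not from the notes) takes the Gaussian initial temperature f(x) = e^{−x²}, for which the convolution with the heat kernel has the known closed form u(x,t) = e^{−x²/(1+4α²t)} / √(1+4α²t):

```python
# Evaluate u(x,t) = (1/(2*alpha*sqrt(pi*t))) * int f(xi) e^{-(x-xi)^2/(4 alpha^2 t)} dxi
# for f(x) = exp(-x^2) and compare with the exact Gaussian-spreading solution.
import numpy as np
from scipy.integrate import quad

alpha = 1.0

def kernel_solution(x, t):
    integrand = lambda xi: np.exp(-xi**2) * np.exp(-(x - xi)**2 / (4 * alpha**2 * t))
    val, _ = quad(integrand, -np.inf, np.inf)
    return val / (2 * alpha * np.sqrt(np.pi * t))

x, t = 0.7, 0.25
exact = np.exp(-x**2 / (1 + 4 * alpha**2 * t)) / np.sqrt(1 + 4 * alpha**2 * t)
assert abs(kernel_solution(x, t) - exact) < 1e-10
```

The Gaussian initial profile simply widens and flattens with time, as the closed form makes explicit.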

Heat flow problem in a semi-infinite rod

Consider heat flow in a semi-infinite region with the temperature prescribed as a
function of time at x = 0.



EXAMPLE 2. Solve the problem

PDE:  u_t(x, t) = α² u_xx(x, t),   0 < x < ∞, t > 0,    (6)
BC:   u(0, t) = b₀,                t > 0,                (7)
IC:   u(x, 0) = 0,                 0 < x < ∞,            (8)

with u(x, t), u_x(x, t) → 0 as x → ∞.

Since 0 < x < ∞, we may wish to use a sine or cosine transform. Since u itself is
specified at x = 0, we should use the Fourier sine transform (and not the Fourier cosine
transform). We solve this problem with the following steps.
Step 1. (Transforming the problem)
Let F_s[u] = û_s(ω, t). Taking the FST of both sides of (6) and noting the following
properties of the FST:
    F_s[u_t] = √(2/π) ∫₀^∞ u_t(x, t) sin(ωx) dx
             = d/dt [ √(2/π) ∫₀^∞ u(x, t) sin(ωx) dx ]
             = d/dt F_s[u] = d/dt û_s(ω, t),
and
    F_s[u_xx] = −ω² F_s[u] + √(2/π) ω u(0, t)
              = −ω² û_s(ω, t) + √(2/π) ω b₀,
where in the last step we have used the BC u(0, t) = b₀, we arrive at the ODE
    d/dt û_s(ω, t) = α² ( −ω² û_s(ω, t) + √(2/π) ω b₀ ).
Next, taking the FST of the IC (8), we obtain
    F_s[u(x, 0)] = F_s[0]  ⟹  û_s(ω, 0) = 0.
Thus, we transform the original problem (6)-(8) to an IVP for an ODE:
    d/dt û_s(ω, t) + α²ω² û_s(ω, t) = √(2/π) α²ω b₀,
    û_s(ω, 0) = 0.



Step 2. (Solving the transformed problem)
Using the standard method for linear first-order ODEs, the solution is given by
    û_s(ω, t) = √(2/π) (b₀/ω) (1 − e^{−α²ω²t}).    (9)

Step 3. (Finding the inverse transform)
Applying the inverse FST to both sides of (9), we find that
    u(x, t) = F_s^{−1}[û_s(ω, t)] = F_s^{−1}[ √(2/π) (b₀/ω)(1 − e^{−α²ω²t}) ]
            = (2b₀/π) ∫₀^∞ (sin(ωx)/ω) (1 − e^{−α²ω²t}) dω
            = b₀ erfc( x/(2α√t) ),
where erfc(y) is the complementary error function given by
    erfc(y) = (2/√π) ∫_y^∞ e^{−η²} dη.
Hence, the solution of the heat conduction problem is
    u(x, t) = b₀ erfc( x/(2α√t) ).
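The closing identity of Step 3 can be verified numerically. The sketch below (illustrative, not from the notes) splits off the Dirichlet integral (2/π)∫₀^∞ sin(ωx)/ω dω = 1 (for x > 0) so that the remaining integrand decays like a Gaussian and quadrature converges quickly:

```python
# Check that (2 b0/pi) * int_0^inf (sin(wx)/w)(1 - e^{-alpha^2 w^2 t}) dw
# equals b0 * erfc(x / (2*alpha*sqrt(t))).
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

b0, alpha, x, t = 3.0, 1.0, 0.8, 0.5

# sin(w x)/w written via np.sinc to avoid the removable singularity at w = 0
tail = lambda w: x * np.sinc(w * x / np.pi) * np.exp(-alpha**2 * w**2 * t)
val, _ = quad(tail, 0, np.inf)
u = b0 * (1 - (2 / np.pi) * val)

assert abs(u - b0 * erfc(x / (2 * alpha * np.sqrt(t)))) < 1e-8
```

Note that u → b₀ as t → ∞ for each fixed x, reflecting the rod warming up to the boundary temperature.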

Practice Problems
1. Solve the following IVP:
       u_t = u_xx,   −∞ < x < ∞, t > 0,
       u(x, 0) = 2x,   −∞ < x < ∞,
       u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0.
2. Let f(x) ∈ C(R) be an odd function. If f(x) is absolutely integrable on R and
   f(x) → 0 as x → ±∞, then show that the unique continuous solution of the problem
       u_t = α² u_xx,   −∞ < x < ∞, t > 0,
       u(x, 0) = f(x),   −∞ < x < ∞,
       u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0,
   is also odd in the variable x. Show that the conclusion is false if the BC is dropped.


3. Apply an appropriate FT to solve the IBVP:
       u_t = u_xx,   x > 0, t > 0,
       u(x, 0) = x,   x > 0,
       u_x(0, t) = e^{−t},   t > 0,
       u(x, t), u_x(x, t) → 0 as x → ∞, t > 0.



Lecture 4


Vibration of an Infinite String

In this lecture we shall learn how Fourier transforms can be used to solve the one-dimensional
wave equation on an infinite (or semi-infinite) interval. More precisely, we shall derive
d'Alembert's formula using the FT method.
Consider the following one-dimensional wave equation:

PDE:  u_tt(x, t) = c² u_xx(x, t),   −∞ < x < ∞, t > 0,    (1)
IC:   u(x, 0) = f(x), u_t(x, 0) = g(x),   −∞ < x < ∞,     (2)

with u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0.


Step 1. (Transforming the problem to a second-order IVP in ODE)
Let
    F[u] = û(ω, t),   F[f(x)] = f̂(ω),   F[g(x)] = ĝ(ω).
Taking the FT of both sides of the PDE (1) and the ICs (2) with respect to the x variable,
we obtain
    F[u_tt] = c² F[u_xx],   F[u(x, 0)] = F[f(x)],   F[u_t(x, 0)] = F[g(x)].
Using the properties of the FT
    F[u_tt] = d²/dt² û(ω, t),   F[u_xx] = −ω² û(ω, t),
we have
    d²/dt² û(ω, t) + c²ω² û(ω, t) = 0,    (3)
    û(ω, 0) = f̂(ω),   û_t(ω, 0) = ĝ(ω).  (4)

Step 2. (Solving the transformed problem) The general solution of (3) is given by
    û(ω, t) = C(ω) cos(cωt) + D(ω) sin(cωt).    (5)
The condition û(ω, 0) = f̂(ω) yields
    C(ω) = û(ω, 0) = f̂(ω).
Differentiating (5), we have
    û_t(ω, t) = −cω C(ω) sin(cωt) + cω D(ω) cos(cωt).    (6)



From the condition û_t(ω, 0) = ĝ(ω), we obtain
    ĝ(ω) = û_t(ω, 0) = cω D(ω).
Thus, we write the solution of the IVP as
    û(ω, t) = f̂(ω) cos(cωt) + ĝ(ω) sin(cωt)/(cω).    (7)

Step 3. (Taking the inverse transform)
Taking the inverse transform of both sides of (7) and using the linearity property, we have
    u(x, t) = F^{−1}[û(ω, t)]
            = F^{−1}[f̂(ω) cos(cωt)] + F^{−1}[ ĝ(ω) sin(cωt)/(cω) ]
            := I₁ + I₂.
For I₁, we note that
    I₁ = F^{−1}[f̂(ω) cos(cωt)](x)
       = (1/√(2π)) ∫_{−∞}^{∞} f̂(ω) cos(cωt) e^{iωx} dω
       = (1/(2√(2π))) ∫_{−∞}^{∞} f̂(ω) (e^{icωt} + e^{−icωt}) e^{iωx} dω    (8)
       = (1/(2√(2π))) ∫_{−∞}^{∞} f̂(ω) e^{iω(x+ct)} dω + (1/(2√(2π))) ∫_{−∞}^{∞} f̂(ω) e^{iω(x−ct)} dω
       = (1/2) [ f(x + ct) + f(x − ct) ].    (9)

For the second term I₂, we use the convolution theorem as follows. Note that if
    h(x) = { 1/(2c),  |x| ≤ ct,
           { 0,       |x| > ct,
then its FT is given by
    ĥ(ω) = (1/√(2π)) sin(cωt)/(cω).
Since ĝ(ω) sin(cωt)/(cω) = √(2π) ĝ(ω) ĥ(ω), an application of the convolution theorem
now yields
    I₂ = F^{−1}[ ĝ(ω) sin(cωt)/(cω) ](x)
       = (g ∗ h)(x) = ∫_{−∞}^{∞} g(y) h(x − y) dy = (1/(2c)) ∫_{x−ct}^{x+ct} g(y) dy.    (10)
Adding (9) and (10) yields d'Alembert's formula:
    u(x, t) = (1/2) [ f(x + ct) + f(x − ct) ] + (1/(2c)) ∫_{x−ct}^{x+ct} g(y) dy.    (11)
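The formula derived above is easy to evaluate directly; the sketch below (an illustration, with example data f and g chosen here, not taken from the notes) implements it with a quadrature for the velocity term:

```python
# d'Alembert's formula: u(x,t) = (f(x+ct)+f(x-ct))/2 + (1/(2c)) * int_{x-ct}^{x+ct} g(y) dy
import numpy as np
from scipy.integrate import quad

c = 2.0
f = lambda x: np.exp(-x**2)   # example initial displacement
g = lambda x: np.cos(x)       # example initial velocity

def dalembert(x, t):
    travelling = 0.5 * (f(x + c * t) + f(x - c * t))
    integral, _ = quad(g, x - c * t, x + c * t)
    return travelling + integral / (2 * c)

# For g = cos the velocity contribution is (sin(x+ct) - sin(x-ct))/(2c):
x, t = 0.4, 0.9
vel_part = dalembert(x, t) - 0.5 * (f(x + c * t) + f(x - c * t))
assert abs(vel_part - (np.sin(x + c*t) - np.sin(x - c*t)) / (2*c)) < 1e-10
```

At t = 0 the formula reduces to u(x, 0) = f(x), as the initial condition requires.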


Practice Problems
1. Solve the following IVP:
       u_tt = 4 u_xx,   −∞ < x < ∞, t > 0,
       u(x, 0) = x², u_t(x, 0) = 0,   −∞ < x < ∞,
       u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0.
2. Solve the following IBVP:
       u_tt = c² u_xx,   0 < x < ∞, t > 0,
       u(x, 0) = u_t(x, 0) = 0,   0 < x < ∞,
       u(0, t) = 1,   t > 0,
       u(x, t), u_x(x, t) → 0 as x → ∞, t > 0.



Lecture 5


Laplace's Equation in a Half-Plane

The steady-state temperature distribution for y > 0 with prescribed temperature
u(x, 0) = f(x) on an infinite wall, y = 0, is described by the problem:

PDE:  u_xx + u_yy = 0,   −∞ < x < ∞, y > 0,    (1)
BC:   u(x, 0) = f(x),    −∞ < x < ∞,           (2)

where u is bounded as y → ∞, and both u and u_x → 0 as |x| → ∞.


Solution. To solve this problem, we proceed as follows. Let
    F[u](ω) = û(ω, y),   F[f(x)] = f̂(ω).

Step 1. (Transforming the problem using the FT)
Taking the FT of the PDE (1) in the variable x and using the linearity property, we have
    F[u_xx] + F[u_yy] = 0.    (3)
Since u and u_x → 0 as |x| → ∞, it follows that
    −ω² F[u](ω, y) + (1/√(2π)) ∫_{−∞}^{∞} u_yy e^{−iωx} dx = 0
    ⟹ −ω² û(ω, y) + ∂²/∂y² [ (1/√(2π)) ∫_{−∞}^{∞} u(x, y) e^{−iωx} dx ] = 0
    ⟹ d²/dy² û(ω, y) − ω² û(ω, y) = 0,    (4)
which is a second-order linear ODE in y. Taking the FT of the BC yields
    û(ω, 0) = F[f(x)] = f̂(ω).    (5)

Step 2. (Solving the transformed problem)
The general solution of (4) is given by
    û(ω, y) = A(ω) e^{ωy} + B(ω) e^{−ωy},    (6)
where A(ω) and B(ω) are to be determined. Since u is bounded as y → ∞, its FT û(ω, y)
must be bounded as y → ∞. This implies A(ω) = 0 for ω > 0, and B(ω) = 0 for ω < 0.
Thus,
    û(ω, y) = K e^{−|ω|y},   where K = K(ω) is independent of y.    (7)



Using (7) and (5), we obtain
    f̂(ω) = K.    (8)
Therefore,
    û(ω, y) = f̂(ω) e^{−|ω|y} = [ (1/√(2π)) ∫_{−∞}^{∞} f(ξ) e^{−iωξ} dξ ] e^{−|ω|y}.    (9)

Step 3. (Applying the inverse FT)
Taking the inverse FT of both sides of (9), we obtain
    u(x, y) = (1/√(2π)) ∫_{−∞}^{∞} f̂(ω) e^{−|ω|y} e^{iωx} dω
            = (1/√(2π)) ∫_{−∞}^{∞} [ (1/√(2π)) ∫_{−∞}^{∞} f(ξ) e^{−iωξ} dξ ] e^{−|ω|y} e^{iωx} dω
            = (1/(2π)) ∫_{−∞}^{∞} f(ξ) dξ ∫_{−∞}^{∞} e^{iω(x−ξ) − |ω|y} dω.    (10)
An easy computation shows that
    (1/(2π)) ∫_{−∞}^{∞} e^{iω(x−ξ) − |ω|y} dω
        = (1/(2π)) ∫_{−∞}^{0} e^{ω[y + i(x−ξ)]} dω + (1/(2π)) ∫_{0}^{∞} e^{−ω[y − i(x−ξ)]} dω
        = (1/(2π)) [ 1/(y + i(x−ξ)) + 1/(y − i(x−ξ)) ]
        = (1/π) · y / ((x−ξ)² + y²).    (11)
Substituting (11) into (10), we conclude that
    u(x, y) = (y/π) ∫_{−∞}^{∞} f(ξ) / ((x−ξ)² + y²) dξ.    (12)
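Formula (12) is the Poisson integral for the half-plane, and it is straightforward to evaluate numerically. The sketch below (illustrative, with example boundary data not taken from the notes) checks two known facts: the kernel integrates to 1, and the harmonic extension of f(ξ) = 1/(1+ξ²) is (1+y)/(x² + (1+y)²):

```python
# Numerical evaluation of u(x,y) = (y/pi) * int f(xi) / ((x-xi)^2 + y^2) dxi.
import numpy as np
from scipy.integrate import quad

def poisson_halfplane(f, x, y):
    integrand = lambda xi: f(xi) * y / ((x - xi)**2 + y**2) / np.pi
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

# Constant boundary data stays constant (the kernel integrates to 1):
assert abs(poisson_halfplane(lambda xi: 1.0, 0.3, 2.0) - 1.0) < 1e-8

# For f(xi) = 1/(1+xi^2) the closed form is (1+y)/(x^2 + (1+y)^2):
x, y = 0.5, 1.5
exact = (1 + y) / (x**2 + (1 + y)**2)
assert abs(poisson_halfplane(lambda xi: 1 / (1 + xi**2), x, y) - exact) < 1e-8
```

The second check reflects the semigroup property of the Poisson kernel: extending a kernel profile upward just shifts its height parameter.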

Let us consider the following example.

EXAMPLE 1. Let u(x, y) solve
    u_xx + u_yy = 0,   −∞ < x < ∞, y > 0,    (13)
    u(x, 0) = e^{−x²}.                        (14)
If u(x, y) is continuous and bounded, then show that
    ∫_{−∞}^{∞} u(x, y) dx = √π,   for each y ≥ 0.



Solution. Since e^{−x²} is bounded and continuous, we apply formula (12) to obtain
(for y > 0)
    u(x, y) = (1/π) ∫_{−∞}^{∞} [ y / (y² + (x−ξ)²) ] e^{−ξ²} dξ.    (15)
Integrating both sides of (15) with respect to x, we obtain
    ∫_{−∞}^{∞} u(x, y) dx = (1/π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} [ y / (y² + (x−ξ)²) ] e^{−ξ²} dξ dx.    (16)
Interchanging the order of integration (this is possible because the integrand is absolutely
integrable for y > 0) and noting that
    ∫_{−∞}^{∞} y / (y² + (x−ξ)²) dx = π,
we obtain, using ∫_{−∞}^{∞} e^{−ξ²} dξ = √π,
    ∫_{−∞}^{∞} u(x, y) dx = (1/π) ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} y / (y² + (x−ξ)²) dx ] e^{−ξ²} dξ
                          = ∫_{−∞}^{∞} e^{−ξ²} dξ = √π.
Note that ∫_{−∞}^{∞} u(x, 0) dx = ∫_{−∞}^{∞} e^{−x²} dx = √π by (14), so the identity also
holds for y = 0. Hence the result.

Practice Problems
1. Solve
       u_xx + u_yy = 0,   0 < x, y < ∞,
       u(0, y) = 0, u(x, 0) = f(x),
   where u(x, y) is assumed to be bounded and u(∞, y) = u_x(∞, y) = 0, 0 ≤ y < ∞.
2. Solve
       u_xx + u_yy = 0,   −∞ < x < ∞, 0 < y < 2,
       u(x, 0) = f(x), u(x, 2) = 0,   −∞ < x < ∞,
       u(x, y) → 0 uniformly in y as |x| → ∞.

Module 9: The Method of Green's Functions

The method of Green's functions is an important technique for solving boundary value
problems and initial-boundary value problems for partial differential equations.
In this module, we shall learn the Green's function method for finding the solutions of
partial differential equations. This is accomplished by constructing integral identities
(Green's theorems) appropriate for second-order differential equations. These integral
theorems will then be used to show how BVPs and IBVPs can be solved in terms of
appropriately defined Green's functions for these problems. More precisely, we shall study
the construction and use of Green's functions for the Laplace, heat and wave equations.

MODULE 9: THE METHOD OF GREENS FUNCTIONS

Lecture 1

The Laplace Equation

Let Ω be a bounded domain in R². Consider the Laplace equation
    ∇²u = 0  in Ω,    (1)
satisfying the BC
    α u + β ∂u/∂n = B  on ∂Ω.    (2)
Here, α(x), β(x) and B are given functions evaluated on the boundary ∂Ω. The term
∂u/∂n denotes the exterior normal derivative on ∂Ω. The boundary condition (2) relates
the values of u on ∂Ω and the flux of u through ∂Ω. We assume that α(x) ≥ 0 and
β(x) ≥ 0 on ∂Ω.
If α ≠ 0, β = 0 then (2) is referred to as a Dirichlet BC. If α = 0, β ≠ 0 then (2) is
referred to as a Neumann BC. If α ≠ 0, β ≠ 0 then the condition (2) is known as a Robin
(or mixed) BC. We assume that ∂Ω is subdivided into three disjoint subsets ∂Ω₁,
∂Ω₂ and ∂Ω₃. On ∂Ω₁, ∂Ω₂ and ∂Ω₃, u satisfies a boundary condition of the first kind
(Dirichlet type), second kind (Neumann type) and third kind (mixed type), respectively.
Introducing a function w(x) (whose properties we shall specify later) and applying
Green's theorem, we note that
    ∫_Ω ( w ∇²u − u ∇²w ) dx = ∫_{∂Ω} ( w ∂u/∂n − u ∂w/∂n ) ds,    (3)
where n is the exterior unit normal to ∂Ω. Equation (3) is the basic integral theorem
from which the Green's function method proceeds in the elliptic case.
Now the function w(x) is to be determined so that (3) expresses u at an arbitrary
point ξ in the region Ω in terms of w and the known functions in (1) and (2).
Let w(x) be a solution of
    ∇²w = δ(x − ξ),    (4)
where δ(x − ξ) is the two-dimensional Dirac delta function. Using the property of the
Dirac delta function, we have
    ∫_Ω u ∇²w dx = ∫_Ω u δ(x − ξ) dx = u(ξ).    (5)
In view of (1), we have
    ∫_Ω w ∇²u dx = 0.    (6)


It now remains to choose boundary conditions for w(x) on ∂Ω so that the boundary
integral in (3) involves only w(x) and known functions. This can be accomplished by
requiring w(x) to satisfy the homogeneous form of the given boundary condition (2), i.e.,
    α w + β ∂w/∂n = 0  on ∂Ω.    (7)
If x ∈ ∂Ω₁ (where β = 0 and hence, by (7), w = 0), we use (2) in the form u = B/α
to obtain
    w ∂u/∂n − u ∂w/∂n = −(B/α) ∂w/∂n.    (8)
If x ∈ ∂Ω₂ ∪ ∂Ω₃ (where β ≠ 0), then (7) gives ∂w/∂n = −(α/β) w, and using (2) we
obtain
    w ∂u/∂n − u ∂w/∂n = (1/β) w ( α u + β ∂u/∂n ) = (1/β) B w.    (9)

The function w(x) is called the Green's function for the boundary value problem (1)-(2).
To indicate its dependence on the point ξ, we denote the Green's function by
    w = G(x; ξ).    (10)
In terms of G(x; ξ), combining (3), (5), (6), (8) and (9), the solution takes the form
    u(ξ) = ∫_{∂Ω₁} (B/α) ∂G/∂n ds − ∫_{∂Ω₂ ∪ ∂Ω₃} (B/β) G ds.    (11)
The Green's function G(x; ξ) thus satisfies the equation
    ∇²G = δ(x − ξ),   x, ξ ∈ Ω,    (12)
and the BC
    α G + β ∂G/∂n = 0  on ∂Ω.    (13)

The Poisson equation in a rectangle: Let Ω : 0 < x < a, 0 < y < b be a rectangular
domain in R² with boundary ∂Ω. Consider the following BVP:
    ∇²u = f(x, y)  in Ω,    (14)
with the BC
    u(x, 0) = u(x, b) = 0,   0 < x < a,
    u(0, y) = u(a, y) = 0,   0 < y < b.    (15)


Let G(x, y; ξ, η) be the solution of the BVP
    ∇²G = δ(x − ξ, y − η)  in Ω    (16)
with the BC
    G(x, 0; ξ, η) = G(x, b; ξ, η) = 0,   0 < x < a,
    G(0, y; ξ, η) = G(a, y; ξ, η) = 0,   0 < y < b,    (17)
where δ(x − ξ, y − η) = δ(x − ξ) δ(y − η). Since ∂Ω is piecewise smooth, we use the
divergence theorem to find that, for a pair of smooth functions u and w,
    ∫_Ω ( w ∇²u − u ∇²w ) dx = ∫_{∂Ω} ( w ∂u/∂n − u ∂w/∂n ) ds.    (18)

Equation (18) is Green's formula for functions of two space variables. If u is the solution
of the given BVP and w is replaced by G, then the homogeneous BCs satisfied by both u
and G make the right-hand side of (18) vanish, and the formula reduces to
    ∫_Ω [ u(x, y) δ(x − ξ) δ(y − η) − f(x, y) G(x, y; ξ, η) ] dx dy = 0.    (19)
This yields
    u(ξ, η) = ∫_Ω G(x, y; ξ, η) f(x, y) dx dy.    (20)
Applying (18) with u(x, y) replaced by G(x, y; ξ, η) and w(x, y) replaced by G(x, y; ξ′, η′),
we obtain the symmetry relation
    G(ξ, η; ξ′, η′) = G(ξ′, η′; ξ, η).    (21)
Applying a simple interchange of variables, (20) becomes
    u(x, y) = ∫_Ω G(x, y; ξ, η) f(ξ, η) dξ dη.    (22)

G(x, y; ξ, η) is called the Green's function of the given BVP. Formula (22) shows the
effect of all the sources in Ω on the value of u at the point (x, y).
To construct the Green's function G, recall the two-dimensional eigenvalue problem
associated with the BVP (14)-(15):
    U_xx + U_yy + λU = 0,   0 < x < a, 0 < y < b,
    U(0, y) = 0, U(a, y) = 0,   0 < y < b,
    U(x, 0) = 0, U(x, b) = 0,   0 < x < a.


The eigenvalues and the corresponding eigenfunctions are given by
    λ_{nm} = (nπ/a)² + (mπ/b)²,   U_{nm} = sin(nπx/a) sin(mπy/b),   n, m = 1, 2, . . .
We now seek an expansion of the form
    G(x, y; ξ, η) = Σ_{n=1}^∞ Σ_{m=1}^∞ c_{nm}(ξ, η) U_{nm}(x, y)
                  = Σ_{n=1}^∞ Σ_{m=1}^∞ c_{nm}(ξ, η) sin(nπx/a) sin(mπy/b).    (23)

Putting (23) in (16), we obtain
    ∇²G(x, y; ξ, η) = Σ_{n=1}^∞ Σ_{m=1}^∞ c_{nm}(ξ, η) (∇²U_{nm})(x, y)
                    = −Σ_{n=1}^∞ Σ_{m=1}^∞ λ_{nm} c_{nm}(ξ, η) U_{nm}(x, y)
                    = δ(x − ξ) δ(y − η).    (24)
Multiplying both sides of (24) by U_{pq}, integrating over Ω, and using the orthogonality
relation ∫_Ω U_{nm} U_{pq} dx dy = (ab/4) δ_{np} δ_{mq} together with the property of the
Dirac delta function, we obtain
    c_{pq}(ξ, η) = −(4/(ab λ_{pq})) U_{pq}(ξ, η).    (25)
In view of (23), we now conclude
    G(x, y; ξ, η) = −(4/ab) Σ_{n=1}^∞ Σ_{m=1}^∞ [ sin(nπξ/a) sin(mπη/b) / ((nπ/a)² + (mπ/b)²) ] sin(nπx/a) sin(mπy/b).

EXAMPLE 1. Use the Green's function method to solve
    u_xx + u_yy = 2 sin(πx) sin(2πy),   0 < x < 1, 0 < y < 2,
    u(x, 0) = 0, u(x, 2) = 0,   0 < x < 1,
    u(0, y) = 0, u(1, y) = 0,   0 < y < 2.

Here, a = 1, b = 2 and f(x, y) = 2 sin(πx) sin(2πy). By (23) and (25), we have
    G(x, y; ξ, η) = −2 Σ_{n=1}^∞ Σ_{m=1}^∞ [ sin(nπξ) sin(mπη/2) / ((nπ)² + (mπ/2)²) ] sin(nπx) sin(mπy/2).    (26)


It now follows from (22) that
    u(x, y) = ∫₀² ∫₀¹ G(x, y; ξ, η) · 2 sin(πξ) sin(2πη) dξ dη
            = −4 Σ_{n=1}^∞ Σ_{m=1}^∞ [ sin(nπx) sin(mπy/2) / ((nπ)² + (mπ/2)²) ]
                  ( ∫₀¹ sin(πξ) sin(nπξ) dξ ) ( ∫₀² sin(2πη) sin(mπη/2) dη ).
By orthogonality, the ξ-integral vanishes unless n = 1, where it equals 1/2, and the
η-integral vanishes unless m = 4, where it equals 1. Hence
    u(x, y) = −4 · (1/2) · 1 · sin(πx) sin(2πy) / (π² + 4π²)
            = −(2/(5π²)) sin(πx) sin(2πy).
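The eigenfunction-series construction used in the example above can be checked in code. The sketch below (illustrative, not part of the notes) sums the series u = Σ Σ (−4/(abλ_{nm})) ⟨f, U_{nm}⟩ U_{nm}, where the inner products ⟨f, U_{nm}⟩ are evaluated analytically by orthogonality (they vanish except for the (n, m) = (1, 4) mode, where the value is 1):

```python
# Series solution of u_xx + u_yy = 2 sin(pi x) sin(2 pi y) on (0,1) x (0,2)
# with homogeneous Dirichlet BCs, via the Green's function eigenfunction series.
import numpy as np

def u_series(x, y, N=20, M=20):
    a, b = 1.0, 2.0
    total = 0.0
    for n in range(1, N + 1):
        for m in range(1, M + 1):
            lam = (n * np.pi / a)**2 + (m * np.pi / b)**2
            # <f, U_nm> by orthogonality: (1/2)*1*2 = 1 for (n,m) = (1,4), else 0
            coeff = 1.0 if (n, m) == (1, 4) else 0.0
            total += -(4 / (a * b * lam)) * coeff \
                     * np.sin(n * np.pi * x / a) * np.sin(m * np.pi * y / b)
    return total

x, y = 0.3, 0.7
exact = -(2 / (5 * np.pi**2)) * np.sin(np.pi * x) * np.sin(2 * np.pi * y)
assert abs(u_series(x, y) - exact) < 1e-12
```

Because the source term is itself a single eigenfunction, the double series collapses to one term, matching the closed-form answer exactly.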

REMARK 2. We know that separation of variables cannot be performed if the PDE and/or
BCs are not homogeneous. The eigenfunction expansion technique is used to deal with
IBVPs, where the PDE is nonhomogeneous and the BCs are zero.

Practice Problems
1. Use the Green's function method to find the solution of the Dirichlet BVP:
       u_xx + u_yy = x² + y²,   0 < x < 1, 0 < y < 1,
       u(x, 0) = u(x, 1) = 0,   0 < x < 1,
       u(0, y) = u(1, y) = 0,   0 < y < 1.
2. Use the Green's function method to find the solution of the mixed BVP:
       u_xx + u_yy = 0,   0 < x < 1, 0 < y < 1,
       u_y(x, 0) = u_y(x, 1) = 0,   0 < x < 1,
       u(0, y) = u(1, y) = 0,   0 < y < 1.


Lecture 2

The Wave Equation

Let Ω ⊂ R² be a bounded domain. Consider the wave equation
    u_tt − ∇²u = 0,   (x, t) ∈ Q = Ω × (0, T],    (1)
subject to the IC
    u(x, 0) = f(x), u_t(x, 0) = g(x),   x ∈ Ω,    (2)
and BC
    α u + β ∂u/∂n = B(x, t)  on ∂Ω, 0 < t ≤ T.    (3)
Notice that the problem is defined over the bounded (cylindrical) region Q in (x, t)-space
(see Fig. 9.1). The lateral boundary of Q is denoted by ∂Q_x, and the two caps of the
cylinder, which are portions of the planes t = 0 and t = T, are denoted by ∂Q₀ and ∂Q_T,
respectively. The boundary conditions for u(x, t) are assigned on ∂Q_x.

[Figure 9.1: The region Q]

The exterior unit normal n to ∂Q has the form n = (n_x, 0) on ∂Q_x, where n_x is the
exterior unit normal to ∂Ω. On ∂Q₀, n has the form n = (0, −1), and on ∂Q_T, it has the
form n = (0, 1).


Using the divergence theorem, it follows that
    ∫_Q [ w ( u_tt − ∇²u ) − u ( w_tt − ∇²w ) ] dx dt
        = ∫_Q ∇̃ · ( −w∇u + u∇w, w u_t − u w_t ) dx dt
        = ∫_{∂Q} ( −w∇u + u∇w, w u_t − u w_t ) · n ds
        = −∫_{∂Q_x} ( w∇u − u∇w ) · n_x ds + ∫_{∂Q_T} ( w u_t − u w_t ) dx
          − ∫_{∂Q₀} ( w u_t − u w_t ) dx,    (4)
where ∇̃ = (∇, ∂/∂t) is the gradient operator in space-time. The integral relation (4)
forms the basis for the Green's function method for solving the initial and boundary
value problem (1)-(3).
REMARK 1. When Ω = (x₀, x₁) (i.e., in one space dimension) and Q = (x₀, x₁) × (t₀, t₁),
equation (4) takes the form
    ∫_Q [ w ( u_tt − u_xx ) − u ( w_tt − w_xx ) ] dx dt
        = −∫_{t₀}^{t₁} [ w(x₁, t) u_x(x₁, t) − w(x₀, t) u_x(x₀, t) ] dt
          + ∫_{t₀}^{t₁} [ w_x(x₁, t) u(x₁, t) − w_x(x₀, t) u(x₀, t) ] dt
          + ∫_{x₀}^{x₁} [ w(x, t₁) u_t(x, t₁) − u(x, t₁) w_t(x, t₁) ] dx
          − ∫_{x₀}^{x₁} [ w(x, t₀) u_t(x, t₀) − u(x, t₀) w_t(x, t₀) ] dx.

Next, we show how w(x, t) is determined so that the solution u(x, t) of (1)-(3) can be
recovered at an arbitrary point (ξ, τ) in the region Q from (4). For this, we first
require that w(x, t) be a solution of
    w_tt − ∇²w = δ(x − ξ) δ(t − τ),   ξ ∈ Ω, 0 < τ < T.    (5)
Using the property of the Dirac delta function, we have
    ∫_Q u ( w_tt − ∇²w ) dx dt = ∫_Q u δ(x − ξ) δ(t − τ) dx dt = u(ξ, τ).    (6)
Further, from (1), we have
    ∫_Q w ( u_tt − ∇²u ) dx dt = 0.    (7)

Since
    ∫_{∂Q_x} ( w∇u − u∇w ) · n_x ds = ∫_{∂Q_x} ( w ∂u/∂n − u ∂w/∂n ) ds,    (8)
we require that
    α w + β ∂w/∂n = 0  on ∂Q_x,    (9)
and obtain, exactly as in the elliptic case,
    −∫_{∂Q_x} ( w ∂u/∂n − u ∂w/∂n ) ds = ∫_{S₁} (B/α) ∂w/∂n ds − ∫_{S₂ ∪ S₃} (B/β) w ds,    (10)
where S₁, S₂ and S₃ are the portions of ∂Q_x that correspond to ∂Ω₁, ∂Ω₂ and ∂Ω₃ on ∂Ω,


respectively. If w(x, 0) and wt (x, 0) are specied, the integral over Q0 in (4) is completely
determined since u(x, 0) and ut (x, 0) are known. However, u and ut , at t = T (i.e., on
QT ) are not known. Observe that if we specify w and wt at t = T in such a way that
the unknown values of u and ut play no role in the integral over QT . This leads to the
only possible choice is to set
w(x, T ) = 0,

wt (x, T ) = 0,

(11)

so that integral over QT vanishes. The function w(x, t) determined from (5), (10), and
(11) is called the Greens function for the initial and boundary value problem(1)-(3) for
u(x, t). It is denoted as w(x, t) = G(x, t; , ).
Once the initial and boundary value problem for G is solved, the values of G and G_t
at t = 0 are known. Then the solution u at an (arbitrary) point (ξ, τ) is given by
    u(ξ, τ) = ∫_{∂Q₀} ( G g − G_t f ) dx − ∫_{S₁} (B/α) ∂G/∂n ds + ∫_{S₂ ∪ S₃} (B/β) G ds.    (12)

Thus, the Green's function G(x, t; ξ, τ) satisfies the equation
    G_tt − ∇²G = δ(x − ξ) δ(t − τ),   x, ξ ∈ Ω, t, τ < T, τ > 0,    (13)
with the end conditions
    G(x, T; ξ, τ) = 0,   G_t(x, T; ξ, τ) = 0,    (14)
and the BC
    α G + β ∂G/∂n = 0  on ∂Q_x, t < T.    (15)
Since G(x, t; ξ, τ) = G(ξ, τ; x, t), G satisfies the same differential equation but with
time running forwards instead of backwards.



D'Alembert's formula via Green's function. Consider the following IVP:
    u_tt = c² u_xx,   −∞ < x < ∞, t > 0,    (16)
    u(x, t) → 0, u_x(x, t) → 0 as x → ±∞, t > 0,    (17)
    u(x, 0) = f(x), u_t(x, 0) = g(x),   −∞ < x < ∞.    (18)
The Green's function G(x, t; ξ, τ) associated with (16)-(18) is a solution of
    G_tt = c² G_xx + δ(x − ξ, t − τ),   −∞ < x < ∞, t > 0,    (19)
    G(x, t; ξ, τ), G_x(x, t; ξ, τ) → 0 as x → ±∞,    (20)
    G(x, t; ξ, τ) = 0,   −∞ < x < ∞, t < τ,    (21)
where δ(x − ξ, t − τ) = δ(x − ξ) δ(t − τ).
Note that an application of the Fourier transform yields
    F[δ(x − ξ)](ω) = (1/√(2π)) e^{−iωξ}.
Writing F[G](ω, t; ξ, τ) = Ĝ(ω, t; ξ, τ) and applying the FT to the PDE (19) and the
condition (21) satisfied by G, we obtain
    Ĝ_tt + ω²c² Ĝ = (1/√(2π)) e^{−iωξ} δ(t − τ),   t > 0,    (22)
    Ĝ(ω, t; ξ, τ) = 0,   t < τ.    (23)
Since δ(t − τ) = 0 for t ≠ τ, the solution of (22) is
    Ĝ(ω, t; ξ, τ) = { 0,   t < τ,
                    { C₁ cos(cω(t − τ)) + C₂ sin(cω(t − τ)),   t > τ,    (24)

where C₁ and C₂ are arbitrary functions of ω, ξ and τ. Requiring Ĝ to be continuous at
t = τ yields C₁ = 0. To find C₂, we consider an interval [τ₁, τ₂] such that 0 < τ₁ < τ < τ₂
and integrate (22) with respect to t over this interval:
    Ĝ_t(ω, τ₂; ξ, τ) − Ĝ_t(ω, τ₁; ξ, τ) + c²ω² ∫_{τ₁}^{τ₂} Ĝ(ω, t; ξ, τ) dt
        = (1/√(2π)) e^{−iωξ} ∫_{τ₁}^{τ₂} δ(t − τ) dt = (1/√(2π)) e^{−iωξ}.
By (24),
    Ĝ_t(ω, τ₁; ξ, τ) = 0,   Ĝ_t(ω, τ₂; ξ, τ) = cω C₂ cos(cω(τ₂ − τ)).



Letting τ₁, τ₂ → τ and using the continuity of Ĝ at t = τ, it follows that
C₂ = e^{−iωξ}/(√(2π) cω). Hence
    Ĝ(ω, t; ξ, τ) = { 0,   t < τ,
                    { (1/√(2π)) e^{−iωξ} sin(cω(t − τ))/(cω),   t > τ.

Note that
    F^{−1}[ (1/√(2π)) sin(aω)/ω ](x) = (1/2) H(a − |x|),
    F^{−1}[ e^{−iaω} F[f](ω) ](x) = f(x − a).
Setting a = c(t − τ) and a = ξ, respectively, we obtain
    G(x, t; ξ, τ) = (1/(2c)) H( c(t − τ) − |x − ξ| ).    (25)

For values in the upper half (t > 0) of the (x, t)-plane, (25) can be written in the form
    G(x, t; ξ, τ) = (1/(2c)) [ H((x − ξ) + c(t − τ)) − H((x − ξ) − c(t − τ)) ].    (26)
Writing u in terms of G, we obtain
    u(x, t) = ∫_a^b [ G(x, t; ξ, 0) u_τ(ξ, 0) − G_τ(x, t; ξ, 0) u(ξ, 0) ] dξ
              − c² ∫_0^t [ G_ξ(x, t; ξ, τ) u(ξ, τ) − G(x, t; ξ, τ) u_ξ(ξ, τ) ]_{ξ=a}^{ξ=b} dτ,    (27)

where G(x, t; ξ, τ) is called the Green's function for the wave equation.
When −∞ < x < ∞, the corresponding formula is obtained from (27) by letting
a → −∞ and b → ∞ and taking into account that G(x, t; ξ, τ) = 0 for |ξ| sufficiently
large. In this case, the formula (27) reduces to
    u(x, t) = ∫_{−∞}^{∞} [ G(x, t; ξ, 0) u_τ(ξ, 0) − G_τ(x, t; ξ, 0) u(ξ, 0) ] dξ.    (28)
This formula can be simplified by using the explicit form of G. Using (26) and the fact
that
    H′(σ − a) = δ(σ − a),
we obtain
    G_τ(x, t; ξ, τ) = −(1/2) [ δ((x − ξ) + c(t − τ)) + δ((x − ξ) − c(t − τ)) ].


Using the definition of the Dirac delta function and of H, (28) becomes
    u(x, t) = (1/2) ∫_{−∞}^{∞} [ δ(x − ξ + ct) + δ(x − ξ − ct) ] f(ξ) dξ
              + (1/(2c)) ∫_{−∞}^{∞} [ H(x − ξ + ct) − H(x − ξ − ct) ] g(ξ) dξ
            = (1/2) [ f(x + ct) + f(x − ct) ] + (1/(2c)) ∫_{x−ct}^{x+ct} g(ξ) dξ,
which is d'Alembert's formula (see Module 6, Eq. (11)).

Practice Problems
1. Use the Green's function method to solve the IVP:
       u_tt = u_xx,   −∞ < x < ∞, t > 0,
       u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0,
       u(x, 0) = 0, u_t(x, 0) = x,   −∞ < x < ∞.
2. Use the Green's function method to solve the IVP:
       u_tt = u_xx + f(x, t),   −∞ < x < ∞, t > 0,
       u(x, t), u_x(x, t) → 0 as x → ±∞, t > 0,
       u(x, 0) = u_t(x, 0) = 0,   −∞ < x < ∞,
   where
       f(x, t) = { t,   −1 < x < 1,
                 { 0,   otherwise.




Lecture 3

The Heat Equation

Consider the heat equation
    u_t − ∇²u = 0,   (x, t) ∈ Q = Ω × (0, T],    (1)
with the initial condition
    u(x, 0) = f(x),   x ∈ Ω,    (2)
and the BC
    α u + β ∂u/∂n = B(x, t)  on ∂Ω, 0 < t ≤ T.    (3)

The above problem can be treated in the same way as the wave equation discussed in the
previous lecture. Let Q be the cylindrical region in (x, t)-space obtained by extending the
region Ω parallel to itself from t = 0 to t = T (cf. Fig. 9.1). The lateral boundary of Q
is denoted by ∂Q_x. The two caps of the cylinder, which are portions of the planes t = 0
and t = T, are denoted by ∂Q₀ and ∂Q_T, respectively.
The boundary conditions for u(x, t) are assigned on ∂Q_x. The exterior unit normal n
to ∂Q has the form n = (n_x, 0) on ∂Q_x, where n_x is the exterior unit normal to ∂Ω. On
∂Q₀, n has the form n = (0, −1), and on ∂Q_T, it has the form n = (0, 1).
As a consequence of the divergence theorem, we have the integral identity
    ∫_Q [ w ( u_t − ∇²u ) − u ( −w_t − ∇²w ) ] dx dt
        = ∫_Q ∇̃ · ( −w∇u + u∇w, uw ) dx dt    (4)
        = −∫_{∂Q_x} ( w ∂u/∂n − u ∂w/∂n ) ds + ∫_{∂Q_T} uw dx − ∫_{∂Q₀} uw dx,    (5)
where ∇̃ = (∇, ∂/∂t) is the gradient operator in space-time.
Observe that the operator ∂/∂t − ∇² that occurs in the heat equation (1) is not self-
adjoint, as was the case for the Laplace and wave equations. The adjoint operator of
∂/∂t − ∇² is −∂/∂t − ∇². With this choice for the adjoint operator we find that
w(u_t − ∇²u) − u(−w_t − ∇²w) is a divergence expression.


Let w(x, t) be a solution of
    −w_t − ∇²w = δ(x − ξ) δ(t − τ),   ξ ∈ Ω, 0 < τ < T,    (6)



with the end condition
    w(x, T) = 0    (7)
and the BC
    α w + β ∂w/∂n = 0.    (8)
As before, we obtain from (4)-(5)
    u(ξ, τ) = ∫_{∂Q₀} f G dx − ∫_{S₁} (B/α) ∂G/∂n ds + ∫_{S₂ ∪ S₃} (B/β) G ds,    (9)
where we have set w(x, t) = G(x, t; ξ, τ); G(x, t; ξ, τ) is the Green's function for the
initial and boundary value problem (1)-(3). Thus the Green's function G(x, t; ξ, τ)
satisfies the equation
    −G_t − ∇²G = δ(x − ξ) δ(t − τ),   x, ξ ∈ Ω, t, τ < T, τ > 0,    (10)

with the end condition
    G(x, T; ξ, τ) = 0    (11)
and the BC
    α G + β ∂G/∂n = 0,   t < T.    (12)
The equation (10) satisfied by the Green's function G is a backward heat equation. Since
the problem for the Green's function is solved backwards in time, the initial and
boundary value problem (10)-(12) for G is well posed. Once G is determined, all the
terms on the right-hand side of (9) are known, and the solution u(x, t) of the initial and
boundary value problem (1)-(3) is completely determined.
Consider the following IBVP:

$$
u_t = c^2 u_{xx}, \qquad 0 < x < L,\ t > 0, \qquad (13)
$$
$$
u(0, t) = 0, \quad u(L, t) = 0, \qquad t > 0, \qquad (14)
$$
$$
u(x, 0) = f(x), \qquad 0 < x < L.
$$
The method of separation of variables yields

$$
u(x, t) = \sum_{n=1}^{\infty} c_n\, e^{-c^2 (n\pi/L)^2 t} \sin\frac{n\pi x}{L} \qquad (15)
$$
$$
= \sum_{n=1}^{\infty} \left\{\left[\frac{2}{L}\int_0^L f(\xi)\sin\frac{n\pi\xi}{L}\, d\xi\right] e^{-c^2 (n\pi/L)^2 t}\sin\frac{n\pi x}{L}\right\}
$$
$$
= \int_0^L f(\xi)\left[\frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\,\sin\frac{n\pi\xi}{L}\, e^{-c^2 (n\pi/L)^2 t}\right] d\xi.
$$


Define the Green's function of this problem by

$$
G(x, t; \xi, 0) = \frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\,\sin\frac{n\pi\xi}{L}\, e^{-c^2 (n\pi/L)^2 t}. \qquad (16)
$$

Then the solution of the IBVP can be expressed in the form

$$
u(x, t) = \int_0^L G(x, t; \xi, 0)\, f(\xi)\, d\xi. \qquad (17)
$$
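As a sanity check on (16)-(17), the series for G can be truncated and the integral in (17) evaluated numerically. In the sketch below the choices of c, L, the initial data f, the truncation level N, and the evaluation point are all arbitrary; for f(x) = sin(πx/L) the exact solution is known in closed form, since only the n = 1 mode survives:

```python
import numpy as np

# Truncated Green's function series (16)
c, L, N = 1.0, 1.0, 60

def G(x, t, xi):
    n = np.arange(1, N + 1)[:, None]
    return ((2.0 / L) * np.sin(n * np.pi * x / L)
            * np.sin(n * np.pi * xi / L)
            * np.exp(-(c * n * np.pi / L) ** 2 * t)).sum(axis=0)

f = lambda xi: np.sin(np.pi * xi / L)   # sample initial data

# Evaluate (17):  u(x, t) = integral of G(x, t; xi, 0) f(xi) d xi
x, t = 0.3, 0.1
xi = np.linspace(0.0, L, 4001)
u = np.sum(G(x, t, xi) * f(xi)) * (xi[1] - xi[0])   # Riemann sum

# For this f, the exact solution is e^{-(c pi/L)^2 t} sin(pi x/L).
exact = np.exp(-(c * np.pi / L) ** 2 * t) * np.sin(np.pi * x / L)
print(u, exact)   # the two values agree closely
```

The truncation level N matters mostly for small t; once t > 0 the factor $e^{-c^2 (n\pi/L)^2 t}$ makes the series converge very rapidly.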

EXAMPLE 1. Consider the following IBVP:

$$
u_t = u_{xx}, \qquad 0 < x < 1,\ t > 0,
$$
$$
u(0, t) = 0, \quad u(1, t) = 0, \qquad t > 0,
$$
$$
u(x, 0) = x, \qquad 0 < x < 1.
$$

Solution. Here $c^2 = 1$, $L = 1$ and $f(x) = x$. So

$$
G(x, t; \xi, 0) = \sum_{n=1}^{\infty} 2\sin(n\pi x)\sin(n\pi\xi)\, e^{-n^2\pi^2 t}. \qquad (18)
$$

Thus,

$$
u(x, t) = \int_0^1 G(x, t; \xi, 0)\,\xi\, d\xi \qquad (19)
$$
$$
= \sum_{n=1}^{\infty}\left[2\sin(n\pi x)\int_0^1 \xi\sin(n\pi\xi)\, d\xi\right] e^{-n^2\pi^2 t}. \qquad (20)
$$

Integrating by parts, we notice that

$$
\int_0^1 \xi\sin(n\pi\xi)\, d\xi
= \left[-\frac{\xi}{n\pi}\cos(n\pi\xi)\right]_0^1 + \frac{1}{n\pi}\int_0^1 \cos(n\pi\xi)\, d\xi
= \frac{1}{n\pi}(-1)^{n+1}.
$$
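The value of this integral is easy to confirm numerically; the following quick check (the discretization size is an arbitrary choice) compares a Riemann sum against $(-1)^{n+1}/(n\pi)$ for several n:

```python
import numpy as np

# Check  integral_0^1  xi sin(n*pi*xi) d xi  =  (-1)^(n+1) / (n*pi)
xi = np.linspace(0.0, 1.0, 100001)
dx = xi[1] - xi[0]
for n in range(1, 6):
    numeric = np.sum(xi * np.sin(n * np.pi * xi)) * dx
    exact = (-1) ** (n + 1) / (n * np.pi)
    assert abs(numeric - exact) < 1e-4
print("verified for n = 1..5")
```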
Thus, the solution is given by

$$
u(x, t) = \sum_{n=1}^{\infty} \frac{2}{n\pi}(-1)^{n+1}\, e^{-n^2\pi^2 t}\sin(n\pi x).
$$
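Truncating this series gives a quick way to inspect the solution numerically. The sketch below (the truncation level N is an arbitrary choice) checks that for very small t the partial sum is close to the initial data u(x, 0) = x at interior points:

```python
import numpy as np

# Partial sum of
#   u(x,t) = sum_{n>=1} (2/(n*pi)) (-1)^(n+1) e^{-n^2 pi^2 t} sin(n*pi*x)
def u(x, t, N=400):
    n = np.arange(1, N + 1)
    return np.sum((2.0 / (n * np.pi)) * (-1) ** (n + 1)
                  * np.exp(-(n * np.pi) ** 2 * t) * np.sin(n * np.pi * x))

# For small t the solution should be close to u(x, 0) = x away from x = 1,
# where the sine series converges to 0 rather than 1.
for x0 in (0.2, 0.5, 0.8):
    assert abs(u(x0, 1e-6) - x0) < 1e-2
print(u(0.5, 0.1))
```

The exponential factor damps the high modes, so for moderate t only a handful of terms of the series are needed.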

Practice Problems
1. Use the Green's function method to solve the IBVP:

$$
u_t = u_{xx}, \qquad 0 < x < 1,\ t > 0,
$$
$$
u_x(0, t) = 0, \quad u_x(1, t) = 0, \qquad t > 0, \qquad (21)
$$
$$
u(x, 0) = x, \qquad 0 < x < 1.
$$


2. Use the Green's function method to solve the IBVP:

$$
u_t = 4u_{xx}, \qquad 0 < x < 1,\ t > 0,
$$
$$
u(0, t) = 0, \quad u(1, t) = 0, \qquad t > 0,
$$
$$
u(x, 0) = 1, \qquad 0 < x < 1.
$$



