
Analytic Solutions of Partial Differential Equations
MATH3414
School of Mathematics, University of Leeds

15 credits. Taught Semester 1, year running 2003/04.

Pre-requisites: MATH2360 or MATH2420 or equivalent.
Co-requisites: None.

Objectives: To provide an understanding of, and methods of solution for, the most important types of partial differential equations that arise in Mathematical Physics. On completion of this module, students should be able to: a) use the method of characteristics to solve first-order hyperbolic equations; b) classify a second order PDE as elliptic, parabolic or hyperbolic; c) use Green's functions to solve elliptic equations; d) have a basic understanding of diffusion; e) obtain a priori bounds for reaction-diffusion equations.

Syllabus: The majority of physical phenomena can be described by partial differential equations (e.g. the Navier-Stokes equation of fluid dynamics, Maxwell's equations of electromagnetism). This module considers the properties of, and analytical methods of solution for, some of the most common first and second order PDEs of Mathematical Physics. In particular, we shall look in detail at elliptic equations (Laplace's equation), describing steady-state phenomena, and the diffusion / heat conduction equation, describing the slow spread of concentration or heat. The topics covered are: First order PDEs. Semilinear and quasilinear PDEs; method of characteristics. Characteristics crossing. Second order PDEs. Classification and standard forms. Elliptic equations: weak and strong minimum and maximum principles; Green's functions. Parabolic equations: exemplified by solutions of the diffusion equation. Bounds on solutions of reaction-diffusion equations.

Form of teaching: Lectures: 26 hours. 7 examples classes.

Form of assessment: One 3 hour examination at end of semester (100%).
Details: Evy Kersale. Office: 9.22e. Phone: 0113 343 5149. E-mail: kersale@maths.leeds.ac.uk. WWW: http://www.maths.leeds.ac.uk/~kersale/

Schedule: three lectures every week, for eleven weeks (from 27/09 to 10/12).
Tuesday 13:00-14:00, RSLT 03. Wednesday 10:00-11:00, RSLT 04. Friday 11:00-12:00, RSLT 06.
Pre-requisite: elementary differential calculus and several variables calculus (e.g. partial differentiation with change of variables, parametric curves, integration), elementary algebra (e.g. partial fractions, linear eigenvalue problems), ordinary differential equations (e.g. change of variable, integrating factor), and vector calculus (e.g. vector identities, Green's theorem).
Outline of course:
Introduction: definitions, examples.
First order PDEs: linear & semilinear, characteristics, quasilinear, nonlinear, systems of equations.
Second order linear PDEs: classification, elliptic, parabolic.
Book
list:
P.
Prasad
&
R.
Ravindran,
\Partial
Dierential
Equations",
Wiley
Eastern,
1985.
W.
E.
Williams,
\Partial
Dierential
Equations",
Oxford
University
Press,
1980.
P.
R.
Garabedian,
\Partial
Dierential
Equations",
Wiley,
1964.
Thanks to Prof. D. W. Hughes, Prof. J. H. Merkin and Dr. R. Sturman for their lecture notes.
Course Summary

- Definitions of the different types of PDE (linear, quasilinear, semilinear, nonlinear).
- Existence and uniqueness of solutions.
- Solving PDEs analytically is generally based on finding a change of variable to transform the equation into something soluble, or on finding an integral form of the solution.

First order PDEs
$$a\frac{\partial u}{\partial x} + b\frac{\partial u}{\partial y} = c.$$
Linear equations: change coordinates using $\eta(x,y)$, defined by the characteristic equation
$$\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{b}{a},$$
and $\xi(x,y)$ independent (usually $\xi = x$), to transform the PDE into an ODE.
Quasilinear equations: change coordinates using the solutions of
$$\frac{\mathrm{d}x}{\mathrm{d}s} = a, \quad \frac{\mathrm{d}y}{\mathrm{d}s} = b \quad \text{and} \quad \frac{\mathrm{d}u}{\mathrm{d}s} = c$$
to get an implicit form of the solution $\phi(x,y,u) = F(\psi(x,y,u))$.
Nonlinear waves: region of solution.
Systems of linear equations: linear algebra to decouple the equations.

Second order PDEs
$$a\frac{\partial^2 u}{\partial x^2} + 2b\frac{\partial^2 u}{\partial x\,\partial y} + c\frac{\partial^2 u}{\partial y^2} + d\frac{\partial u}{\partial x} + e\frac{\partial u}{\partial y} + fu = g.$$
Classification:

$b^2 - ac > 0$: Hyperbolic; canonical form $\partial^2 u/\partial\xi\partial\eta + \ldots = 0$; characteristics $\mathrm{d}y/\mathrm{d}x = (b \pm \sqrt{b^2 - ac})/a$.
$b^2 - ac = 0$: Parabolic; canonical form $\partial^2 u/\partial\eta^2 + \ldots = 0$; characteristics $\mathrm{d}y/\mathrm{d}x = b/a$, with $\eta = x$ (say).
$b^2 - ac < 0$: Elliptic; canonical form $\partial^2 u/\partial\alpha^2 + \partial^2 u/\partial\beta^2 + \ldots = 0$; (complex) characteristics $\mathrm{d}y/\mathrm{d}x = (b \pm \sqrt{b^2 - ac})/a$, with $\alpha = (\xi + \eta)/2$, $\beta = (\xi - \eta)/2i$.

Elliptic equations: (Laplace equation.) Maximum Principle. Solutions using Green's functions (uses new variables and the Dirac $\delta$-function to pick out the solution). Method of images.

Parabolic equations: (heat conduction, diffusion equation.) Derive a fundamental solution in integral form, or make use of the similarity properties of the equation to find the solution in terms of the diffusion variable
$$\eta = \frac{x}{2\sqrt{t}}.$$
First and Second Maximum Principles and Comparison Theorem give bounds on the solution, and one can then construct invariant sets.
Contents

1 Introduction
  1.1 Motivation
  1.2 Reminder
  1.3 Definitions
  1.4 Examples
    1.4.1 Wave Equations
    1.4.2 Diffusion or Heat Conduction Equations
    1.4.3 Laplace's Equation
    1.4.4 Other Common Second Order Linear PDEs
    1.4.5 Nonlinear PDEs
    1.4.6 System of PDEs
  1.5 Existence and Uniqueness

2 First Order Equations
  2.1 Linear and Semilinear Equations
    2.1.1 Method of Characteristic
    2.1.2 Equivalent set of ODEs
    2.1.3 Characteristic Curves
  2.2 Quasilinear Equations
    2.2.1 Interpretation of Quasilinear Equation
    2.2.2 General solution
  2.3 Wave Equation
    2.3.1 Linear Waves
    2.3.2 Nonlinear Waves
    2.3.3 Weak Solution
  2.4 Systems of Equations
    2.4.1 Linear and Semilinear Equations
    2.4.2 Quasilinear Equations

3 Second Order Linear and Semilinear Equations in Two Variables
  3.1 Classification and Standard Form Reduction
  3.2 Extensions of the Theory
    3.2.1 Linear second order equations in n variables
    3.2.2 The Cauchy Problem

4 Elliptic Equations
  4.1 Definitions
  4.2 Properties of Laplace's and Poisson's Equations
    4.2.1 Mean Value Property
    4.2.2 Maximum-Minimum Principle
  4.3 Solving Poisson Equation Using Green's Functions
    4.3.1 Definition of Green's Functions
    4.3.2 Green's function for Laplace Operator
    4.3.3 Free Space Green's Function
    4.3.4 Method of Images
  4.4 Extensions of Theory

5 Parabolic Equations
  5.1 Definitions and Properties
    5.1.1 Well-Posed Cauchy Problem (Initial Value Problem)
    5.1.2 Well-Posed Initial-Boundary Value Problem
    5.1.3 Time Irreversibility of the Heat Equation
    5.1.4 Uniqueness of Solution for Cauchy Problem
    5.1.5 Uniqueness of Solution for Initial-Boundary Value Problem
  5.2 Fundamental Solution of the Heat Equation
    5.2.1 Integral Form of the General Solution
    5.2.2 Properties of the Fundamental Solution
    5.2.3 Behaviour at large t
  5.3 Similarity Solution
    5.3.1 Infinite Region
    5.3.2 Semi-Infinite Region
  5.4 Maximum Principles and Comparison Theorems
    5.4.1 First Maximum Principle

A Integral of $e^{-x^2}$ in $\mathbb{R}$
Chapter 1
Introduction

Contents: 1.1 Motivation; 1.2 Reminder; 1.3 Definitions; 1.4 Examples; 1.5 Existence and Uniqueness.
1.1 Motivation

Why do we study partial differential equations (PDEs) and in particular analytic solutions? We are interested in PDEs because most of mathematical physics is described by such equations: for example, fluid dynamics (and more generally continuous media dynamics), electromagnetic theory, quantum mechanics, traffic flow. Typically, a given PDE will only be accessible to numerical solution (with one obvious exception: exam questions!) and analytic solutions in a practical or research scenario are often impossible. However, it is vital to understand the general theory in order to conduct a sensible investigation. For example, we may need to understand what type of PDE we have to ensure the numerical solution is valid. Indeed, certain types of equations need appropriate boundary conditions; without a knowledge of the general theory it is possible that the problem may be ill-posed or that the method of solution is erroneous.
1.2 Reminder

Partial derivatives: The differential (or differential form) of a function $f$ of $n$ independent variables, $(x_1, x_2, \ldots, x_n)$, is a linear combination of the basis forms $(\mathrm{d}x_1, \mathrm{d}x_2, \ldots, \mathrm{d}x_n)$,
$$\mathrm{d}f = \sum_{i=1}^{n}\frac{\partial f}{\partial x_i}\,\mathrm{d}x_i = \frac{\partial f}{\partial x_1}\mathrm{d}x_1 + \frac{\partial f}{\partial x_2}\mathrm{d}x_2 + \cdots + \frac{\partial f}{\partial x_n}\mathrm{d}x_n,$$
where the partial derivatives are defined by
$$\frac{\partial f}{\partial x_i} = \lim_{h\to 0}\frac{f(x_1, x_2, \ldots, x_i + h, \ldots, x_n) - f(x_1, x_2, \ldots, x_i, \ldots, x_n)}{h}.$$
The usual differentiation identities apply to the partial differentiations (sum, product, quotient, chain rules, etc.).

Notations: I shall use interchangeably the notations
$$\frac{\partial f}{\partial x_i} \equiv \partial_{x_i} f \equiv f_{x_i}, \qquad \frac{\partial^2 f}{\partial x_i\,\partial x_j} \equiv \partial^2_{x_i x_j} f \equiv f_{x_i x_j},$$
for the first order and second order partial derivatives respectively. We shall also use interchangeably the notations
$$\vec{u} \equiv \mathbf{u} \equiv \underline{u}$$
for vectors.

Vector differential operators: in the three dimensional Cartesian coordinate system $(\mathbf{i}, \mathbf{j}, \mathbf{k})$ we consider $f(x,y,z): \mathbb{R}^3 \to \mathbb{R}$ and $\mathbf{u} = [u_x(x,y,z), u_y(x,y,z), u_z(x,y,z)]: \mathbb{R}^3 \to \mathbb{R}^3$.

Gradient: $\nabla f = \partial_x f\,\mathbf{i} + \partial_y f\,\mathbf{j} + \partial_z f\,\mathbf{k}$.
Divergence: $\operatorname{div}\mathbf{u} \equiv \nabla\cdot\mathbf{u} = \partial_x u_x + \partial_y u_y + \partial_z u_z$.
Curl: $\nabla\times\mathbf{u} = (\partial_y u_z - \partial_z u_y)\,\mathbf{i} + (\partial_z u_x - \partial_x u_z)\,\mathbf{j} + (\partial_x u_y - \partial_y u_x)\,\mathbf{k}$.
Laplacian: $\Delta f \equiv \nabla^2 f = \partial^2_x f + \partial^2_y f + \partial^2_z f$.
Laplacian of a vector: $\Delta\mathbf{u} \equiv \nabla^2\mathbf{u} = \nabla^2 u_x\,\mathbf{i} + \nabla^2 u_y\,\mathbf{j} + \nabla^2 u_z\,\mathbf{k}$.

Note that these operators are different in other systems of coordinates (cylindrical or spherical, say).
1.3 Definitions

A partial differential equation (PDE) is an equation for some quantity $u$ (dependent variable) which depends on the independent variables $x_1, x_2, x_3, \ldots, x_n$, $n \ge 2$, and involves derivatives of $u$ with respect to at least some of the independent variables:
$$F(x_1, \ldots, x_n, \partial_{x_1}u, \ldots, \partial_{x_n}u, \partial^2_{x_1}u, \partial^2_{x_1 x_2}u, \ldots, \partial^n_{x_1\cdots x_n}u) = 0.$$

Note:
1. In applications the $x_i$ are often space variables (e.g. $x, y, z$) and a solution may be required in some region $\Omega$ of space. In this case there will be some conditions to be satisfied on the boundary $\partial\Omega$; these are called boundary conditions (BCs).
2. Also in applications, one of the independent variables can be time ($t$ say); then there will be some initial conditions (ICs) to be satisfied (i.e., $u$ is given at $t = 0$ everywhere in $\Omega$).
3. Again in applications, systems of PDEs can arise involving the dependent variables $u_1, u_2, u_3, \ldots, u_m$, $m \ge 1$, with some (at least) of the equations involving more than one $u_i$.
The order of the PDE is the order of the highest (partial) differential coefficient in the equation.

As with ordinary differential equations (ODEs) it is important to be able to distinguish between linear and nonlinear equations. A linear equation is one in which the equation and any boundary or initial conditions do not include any product of the dependent variables or their derivatives; an equation that is not linear is a nonlinear equation.
$$\frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x} = 0, \quad \text{first order linear PDE (simplest wave equation)},$$
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = \Phi(x,y), \quad \text{second order linear PDE (Poisson)}.$$

A nonlinear equation is semilinear if the coefficients of the highest derivatives are functions of the independent variables only:
$$(x^2 + 3)\frac{\partial u}{\partial x} + xy\frac{\partial u}{\partial y} = u^3,$$
$$x\frac{\partial^2 u}{\partial x^2} + (xy + y^2)\frac{\partial^2 u}{\partial y^2} + u\frac{\partial u}{\partial x} + u^2\frac{\partial u}{\partial y} = u^4.$$

A nonlinear PDE of order $m$ is quasilinear if it is linear in the derivatives of order $m$, with coefficients depending only on $x, y, \ldots$ and derivatives of order $< m$:
$$\left[1 + \left(\frac{\partial u}{\partial y}\right)^2\right]\frac{\partial^2 u}{\partial x^2} - 2\frac{\partial u}{\partial x}\frac{\partial u}{\partial y}\frac{\partial^2 u}{\partial x\,\partial y} + \left[1 + \left(\frac{\partial u}{\partial x}\right)^2\right]\frac{\partial^2 u}{\partial y^2} = 0.$$

Principle of superposition: A linear equation has the useful property that if $u_1$ and $u_2$ both satisfy the equation then so does $\alpha u_1 + \beta u_2$ for any $\alpha, \beta \in \mathbb{R}$. This is often used in constructing solutions to linear equations (for example, so as to satisfy boundary or initial conditions; c.f. Fourier series methods). This is not true for nonlinear equations, which helps to make this sort of equation more interesting, but much more difficult to deal with.
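To make the superposition property concrete, here is a small SymPy sketch (my addition, not part of the original notes; it assumes SymPy is available) checking that an arbitrary linear combination of two solutions of Laplace's equation, introduced in Section 1.4.3 below, is again a solution:

```python
import sympy as sp

x, y, alpha, beta = sp.symbols('x y alpha beta')

# Two solutions of the linear equation u_xx + u_yy = 0 (Laplace's equation):
u1 = x**2 - y**2
u2 = x*y

u = alpha*u1 + beta*u2   # arbitrary linear combination
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))   # 0: superposition holds
```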
1.4 Examples

1.4.1 Wave Equations
Waves on a string, sound waves, waves on stretched membranes, electromagnetic waves, etc.:
$$\frac{\partial^2 u}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2},$$
or more generally
$$\frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = \nabla^2 u,$$
where $c$ is a constant (wave speed).

1.4.2 Diffusion or Heat Conduction Equations
$$\frac{\partial u}{\partial t} = \kappa\frac{\partial^2 u}{\partial x^2},$$
or more generally
$$\frac{\partial u}{\partial t} = \kappa\nabla^2 u,$$
or even
$$\frac{\partial u}{\partial t} = \nabla\cdot(\kappa\nabla u),$$
where $\kappa$ is a constant (diffusion coefficient or thermometric conductivity).

Both these equations (wave and diffusion) are linear equations and involve time ($t$). They require some initial conditions (and possibly some boundary conditions) for their solution.
1.4.3 Laplace's Equation
Another example of a second order linear equation is the following:
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0,$$
or more generally
$$\nabla^2 u = 0.$$
This equation usually describes steady processes and is solved subject to some boundary conditions.

One aspect that we shall consider is: why do these similar looking equations describe essentially different physical processes? What is there about the equations that makes this the case?
1.4.4 Other Common Second Order Linear PDEs
Poisson's equation is just the Laplace equation (homogeneous) with a known source term (e.g. electric potential in the presence of a density of charge):
$$\nabla^2 u = \rho.$$
The Helmholtz equation may be regarded as a stationary wave equation:
$$\nabla^2 u + k^2 u = 0.$$
The Schrödinger equation is the fundamental equation of physics for describing quantum mechanical behaviour; the Schrödinger wave equation is a PDE that describes how the wavefunction of a physical system evolves over time:
$$-\nabla^2 u + Vu = i\frac{\partial u}{\partial t}.$$
1.4.5 Nonlinear PDEs
An example of a nonlinear equation is the equation for the propagation of reaction-diffusion waves:
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + u(1 - u) \quad \text{(2nd order)},$$
or for nonlinear wave propagation:
$$\frac{\partial u}{\partial t} + (u + c)\frac{\partial u}{\partial x} = 0 \quad \text{(1st order)}.$$
The equation
$$x^2 u\frac{\partial u}{\partial x} + (y + u)\frac{\partial u}{\partial y} = u^3$$
is an example of a quasilinear equation, and
$$y\frac{\partial u}{\partial x} + (x^3 + y)\frac{\partial u}{\partial y} = u^3$$
is an example of a semilinear equation.
1.4.6 System of PDEs
Maxwell's equations constitute a system of linear PDEs:
$$\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0}, \quad \nabla\times\mathbf{B} = \mu_0\mathbf{j} + \frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t},$$
$$\nabla\cdot\mathbf{B} = 0, \quad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}.$$
In empty space (free of charges and currents) this system can be rearranged to give the equations of propagation of the electromagnetic field,
$$\frac{\partial^2\mathbf{E}}{\partial t^2} = c^2\nabla^2\mathbf{E}, \quad \frac{\partial^2\mathbf{B}}{\partial t^2} = c^2\nabla^2\mathbf{B}.$$

The incompressible magnetohydrodynamic (MHD) equations combine the Navier-Stokes equation (including the Lorentz force), the induction equation, as well as the solenoidal constraints,
$$\frac{\partial\mathbf{U}}{\partial t} + \mathbf{U}\cdot\nabla\mathbf{U} = -\nabla\Pi + \mathbf{B}\cdot\nabla\mathbf{B} + \nu\nabla^2\mathbf{U} + \mathbf{F},$$
$$\frac{\partial\mathbf{B}}{\partial t} = \nabla\times(\mathbf{U}\times\mathbf{B}) + \eta\nabla^2\mathbf{B},$$
$$\nabla\cdot\mathbf{U} = 0, \quad \nabla\cdot\mathbf{B} = 0.$$
Both systems involve space and time; they require some initial and boundary conditions for their solution.
1.5 Existence and Uniqueness

Before attempting to solve a problem involving a PDE we would like to know if a solution exists, and, if it exists, whether the solution is unique. Also, in problems involving time, whether a solution exists for all $t > 0$ (global existence) or only up to a given value of $t$, i.e. only for $0 < t < t_0$ (finite time blow-up, shock formation). As well as the equation there could be certain boundary and initial conditions. We would also like to know whether the solution of the problem depends continuously on the prescribed data, i.e. whether small changes in boundary or initial conditions produce only small changes in the solution.

Illustration from ODEs:
1. $\mathrm{d}u/\mathrm{d}t = u$, $u(0) = 1$. Solution: $u = e^t$ exists for $0 \le t < \infty$.
2. $\mathrm{d}u/\mathrm{d}t = u^2$, $u(0) = 1$. Solution: $u = 1/(1 - t)$ exists for $0 \le t < 1$.
3. $\mathrm{d}u/\mathrm{d}t = \sqrt{u}$, $u(0) = 0$; has two solutions: $u \equiv 0$ and $u = t^2/4$ (non uniqueness).
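A quick SymPy sketch (my addition, not from the notes; it assumes SymPy's dsolve handles this separable ODE with an initial condition) reproduces the finite-time blow-up in illustration 2:

```python
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')

# Illustration 2: du/dt = u^2 with u(0) = 1; the solution exists only for 0 <= t < 1.
sol = sp.dsolve(sp.Eq(u(t).diff(t), u(t)**2), u(t), ics={u(0): 1})
print(sol)                       # Eq(u(t), -1/(t - 1)), i.e. u = 1/(1 - t)
print(sol.rhs.limit(t, 1, '-'))  # oo: finite-time blow-up as t -> 1
```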
We say that the PDE with boundary or initial condition is well-formed (or well-posed) if its solution exists (globally), is unique and depends continuously on the assigned data. If any of these three properties (existence, uniqueness and stability) is not satisfied, the problem (PDE, BCs and ICs) is said to be ill-posed. Usually problems involving linear systems are well-formed, but this may not always be the case for nonlinear systems (bifurcation of solutions, etc.).

Example: A simple example showing uniqueness is provided by
$$\nabla^2 u = F \quad \text{in } \Omega \quad \text{(Poisson's equation)},$$
with $u = 0$ on $\partial\Omega$, the boundary of $\Omega$, and $F$ some given function of $\mathbf{x}$.

Suppose $u_1$ and $u_2$ are two solutions satisfying the equation and the boundary conditions. Then consider $w = u_1 - u_2$; $\nabla^2 w = 0$ in $\Omega$ and $w = 0$ on $\partial\Omega$. Now the divergence theorem gives
$$\int_{\partial\Omega} w\,\nabla w\cdot\mathbf{n}\,\mathrm{d}S = \int_{\Omega}\nabla\cdot(w\nabla w)\,\mathrm{d}V = \int_{\Omega}\left[w\nabla^2 w + (\nabla w)^2\right]\mathrm{d}V,$$
where $\mathbf{n}$ is a unit normal outwards from $\Omega$. Since $\nabla^2 w = 0$ in $\Omega$ and $w = 0$ on $\partial\Omega$, this reduces to
$$\int_{\Omega}(\nabla w)^2\,\mathrm{d}V = \int_{\partial\Omega}w\frac{\partial w}{\partial n}\,\mathrm{d}S = 0.$$
Now the integrand $(\nabla w)^2$ is non-negative in $\Omega$ and hence, for the equality to hold, we must have $\nabla w \equiv 0$; i.e. $w = \text{constant}$ in $\Omega$. Since $w = 0$ on $\partial\Omega$ and the solution is smooth, we must have $w \equiv 0$ in $\Omega$; i.e. $u_1 = u_2$. The same proof works if $\partial u/\partial n$ is given on $\partial\Omega$, or for mixed conditions.
Chapter 2
First Order Equations

Contents: 2.1 Linear and Semilinear Equations; 2.2 Quasilinear Equations; 2.3 Wave Equation; 2.4 Systems of Equations.
2.1 Linear and Semilinear Equations

2.1.1 Method of Characteristic
We consider the linear first order partial differential equation in two independent variables:
$$a(x,y)\frac{\partial u}{\partial x} + b(x,y)\frac{\partial u}{\partial y} + c(x,y)u = f(x,y), \tag{2.1}$$
where $a$, $b$, $c$ and $f$ are continuous in some region of the plane and we assume that $a(x,y)$ and $b(x,y)$ are not both zero at the same $(x,y)$. In fact, we could consider a semilinear first order equation (where the nonlinearity is present only in the right-hand side) such as
$$a(x,y)\frac{\partial u}{\partial x} + b(x,y)\frac{\partial u}{\partial y} = \kappa(x,y,u), \tag{2.2}$$
instead of a linear equation, as the theory of the former does not require any special treatment as compared to that of the latter.

The key to the solution of equation (2.1) is to find a change of variables (or a change of coordinates)
$$\xi \equiv \xi(x,y), \quad \eta \equiv \eta(x,y)$$
which transforms (2.1) into the simpler equation
$$\frac{\partial w}{\partial\xi} + h(\xi,\eta)w = F(\xi,\eta), \tag{2.3}$$
where $w(\xi,\eta) = u(x(\xi,\eta), y(\xi,\eta))$.

We shall define this transformation so that it is one-to-one, at least for all $(x,y)$ in some set $D$ of points in the $(x\text{-}y)$ plane. Then, on $D$ we can (in theory) solve for $x$ and $y$ as functions of $\xi$, $\eta$. To ensure that we can do this, we require that the Jacobian of the transformation does not vanish in $D$:
$$J = \begin{vmatrix}\dfrac{\partial\xi}{\partial x} & \dfrac{\partial\xi}{\partial y}\\[4pt] \dfrac{\partial\eta}{\partial x} & \dfrac{\partial\eta}{\partial y}\end{vmatrix} = \frac{\partial\xi}{\partial x}\frac{\partial\eta}{\partial y} - \frac{\partial\xi}{\partial y}\frac{\partial\eta}{\partial x} \notin \{0, \infty\}$$
for $(x,y)$ in $D$. We begin looking for a suitable transformation by computing derivatives via the chain rule
$$\frac{\partial u}{\partial x} = \frac{\partial w}{\partial\xi}\frac{\partial\xi}{\partial x} + \frac{\partial w}{\partial\eta}\frac{\partial\eta}{\partial x} \quad \text{and} \quad \frac{\partial u}{\partial y} = \frac{\partial w}{\partial\xi}\frac{\partial\xi}{\partial y} + \frac{\partial w}{\partial\eta}\frac{\partial\eta}{\partial y}.$$
We substitute these into equation (2.1) to obtain
$$a\left(\frac{\partial w}{\partial\xi}\frac{\partial\xi}{\partial x} + \frac{\partial w}{\partial\eta}\frac{\partial\eta}{\partial x}\right) + b\left(\frac{\partial w}{\partial\xi}\frac{\partial\xi}{\partial y} + \frac{\partial w}{\partial\eta}\frac{\partial\eta}{\partial y}\right) + cw = f.$$
We can rearrange this as
$$\left(a\frac{\partial\xi}{\partial x} + b\frac{\partial\xi}{\partial y}\right)\frac{\partial w}{\partial\xi} + \left(a\frac{\partial\eta}{\partial x} + b\frac{\partial\eta}{\partial y}\right)\frac{\partial w}{\partial\eta} + cw = f. \tag{2.4}$$
This is close to the form of equation (2.3) if we can choose $\eta \equiv \eta(x,y)$ so that
$$a\frac{\partial\eta}{\partial x} + b\frac{\partial\eta}{\partial y} = 0 \quad \text{for } (x,y) \text{ in } D.$$
Provided that $\partial\eta/\partial y \neq 0$ we can express this required property of $\eta$ as
$$\frac{\partial\eta/\partial x}{\partial\eta/\partial y} = -\frac{b}{a}.$$
Suppose we can define a new variable (or coordinate) $\eta$ which satisfies this constraint. What is the equation describing the curves of constant $\eta$? Putting $\eta \equiv \eta(x,y) = k$ ($k$ an arbitrary constant), then
$$\mathrm{d}\eta = \frac{\partial\eta}{\partial x}\mathrm{d}x + \frac{\partial\eta}{\partial y}\mathrm{d}y = 0$$
implies that $\mathrm{d}y/\mathrm{d}x = -\eta_x/\eta_y = b/a$. So, the equation $\eta(x,y) = k$ defines solutions of the ODE
$$\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{b(x,y)}{a(x,y)}. \tag{2.5}$$
Equation (2.5) is called the characteristic equation of the linear equation (2.1). Its solution can be written in the form $F(x, y, \eta) = 0$ (where $\eta$ is the constant of integration) and defines a family of curves in the plane called characteristics or characteristic curves of (2.1). (More on characteristics later.) Characteristics represent curves along which the independent variable $\eta$ of the new coordinate system $(\xi, \eta)$ is constant.

So, we have made the coefficient of $\partial w/\partial\eta$ vanish in the transformed equation (2.4), by choosing $\eta \equiv \eta(x,y)$, with $\eta(x,y) = k$ an equation defining the solution of the characteristic equation (2.5). We can now choose $\xi$ arbitrarily (or at least to suit our convenience), providing we still have $J \neq 0$. An obvious choice is $\xi \equiv \xi(x,y) = x$.
Then
$$J = \begin{vmatrix}1 & 0\\ \dfrac{\partial\eta}{\partial x} & \dfrac{\partial\eta}{\partial y}\end{vmatrix} = \frac{\partial\eta}{\partial y},$$
and we have already assumed this non-zero.

Now we see from equation (2.4) that this change of variables, $\xi = x$, $\eta \equiv \eta(x,y)$, transforms equation (2.1) to
$$\alpha(x,y)\frac{\partial w}{\partial\xi} + c(x,y)w = f(x,y), \quad \text{where } \alpha = a\frac{\partial\xi}{\partial x} + b\frac{\partial\xi}{\partial y}.$$
To complete the transformation to the form of equation (2.3), we first write $\alpha(x,y)$, $c(x,y)$ and $f(x,y)$ in terms of $\xi$ and $\eta$ to obtain
$$A(\xi,\eta)\frac{\partial w}{\partial\xi} + C(\xi,\eta)w = \rho(\xi,\eta).$$
Finally, restricting the variables to a set in which $A(\xi,\eta) \neq 0$, we have
$$\frac{\partial w}{\partial\xi} + \frac{C}{A}w = \frac{\rho}{A},$$
which is in the form of (2.3) with
$$h(\xi,\eta) = \frac{C(\xi,\eta)}{A(\xi,\eta)} \quad \text{and} \quad F(\xi,\eta) = \frac{\rho(\xi,\eta)}{A(\xi,\eta)}.$$

The characteristic method applies to the first order semilinear equation (2.2) as well as the linear equation (2.1); a similar change of variables and basic algebra transform equation (2.2) to
$$\frac{\partial w}{\partial\xi} = \frac{K}{A},$$
where the nonlinear term $K(\xi,\eta,w) = \kappa(x,y,u)$, and restricting again the variables to a set in which $A(\xi,\eta) = \alpha(x,y) \neq 0$.

Notation: It is very convenient to use the function $u$ in places where rigorously the function $w$ should be used. E.g., the equation here above can identically be written as $\partial u/\partial\xi = K/A$.

Example: Consider the linear first order equation
$$x^2\frac{\partial u}{\partial x} + y\frac{\partial u}{\partial y} + xyu = 1.$$
This is equation (2.1) with $a(x,y) = x^2$, $b(x,y) = y$, $c(x,y) = xy$ and $f(x,y) = 1$. The characteristic equation is
$$\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{b}{a} = \frac{y}{x^2}.$$
Solve this by separation of variables:
$$\int\frac{1}{y}\,\mathrm{d}y = \int\frac{1}{x^2}\,\mathrm{d}x \implies \ln y + \frac{1}{x} = k, \quad \text{for } y > 0 \text{ and } x \neq 0.$$
This is an integral of the characteristic equation describing curves of constant $\eta$, and so we choose
$$\eta \equiv \eta(x,y) = \ln y + \frac{1}{x}.$$
Graphs of $\ln y + 1/x$ are the characteristics of this PDE. Choosing $\xi = x$ we have the Jacobian
$$J = \frac{\partial\eta}{\partial y} = \frac{1}{y} \neq 0 \quad \text{as required}.$$
Since $\xi = x$,
$$\eta = \ln y + \frac{1}{\xi} \implies y = e^{\eta - 1/\xi}.$$
Now we apply the transformation $\xi = x$, $\eta = \ln y + 1/x$ with $w(\xi,\eta) = u(x,y)$ and we have
$$\frac{\partial u}{\partial x} = \frac{\partial w}{\partial\xi}\frac{\partial\xi}{\partial x} + \frac{\partial w}{\partial\eta}\frac{\partial\eta}{\partial x} = \frac{\partial w}{\partial\xi} - \frac{1}{x^2}\frac{\partial w}{\partial\eta} = \frac{\partial w}{\partial\xi} - \frac{1}{\xi^2}\frac{\partial w}{\partial\eta},$$
$$\frac{\partial u}{\partial y} = \frac{\partial w}{\partial\xi}\frac{\partial\xi}{\partial y} + \frac{\partial w}{\partial\eta}\frac{\partial\eta}{\partial y} = 0 + \frac{1}{y}\frac{\partial w}{\partial\eta} = \frac{1}{e^{\eta - 1/\xi}}\frac{\partial w}{\partial\eta}.$$
Then the PDE becomes
$$\xi^2\left(\frac{\partial w}{\partial\xi} - \frac{1}{\xi^2}\frac{\partial w}{\partial\eta}\right) + e^{\eta - 1/\xi}\,\frac{1}{e^{\eta - 1/\xi}}\frac{\partial w}{\partial\eta} + \xi\,e^{\eta - 1/\xi}w = 1,$$
which simplifies to
$$\xi^2\frac{\partial w}{\partial\xi} + \xi\,e^{\eta - 1/\xi}w = 1 \quad \text{and then to} \quad \frac{\partial w}{\partial\xi} + \frac{1}{\xi}e^{\eta - 1/\xi}w = \frac{1}{\xi^2}.$$
We have transformed the equation into the form of equation (2.3), for any region of $(\xi,\eta)$-space with $\xi \neq 0$.
2.1.2 Equivalent set of ODEs
The point of this transformation is that we can solve equation (2.3). Think of
$$\frac{\partial w}{\partial\xi} + h(\xi,\eta)w = F(\xi,\eta)$$
as a linear first order ordinary differential equation in $\xi$, with $\eta$ carried along as a parameter. Thus we use the integrating factor method
$$e^{\int h(\xi,\eta)\,\mathrm{d}\xi}\frac{\partial w}{\partial\xi} + h(\xi,\eta)\,e^{\int h(\xi,\eta)\,\mathrm{d}\xi}w = F(\xi,\eta)\,e^{\int h(\xi,\eta)\,\mathrm{d}\xi},$$
$$\frac{\partial}{\partial\xi}\left(e^{\int h(\xi,\eta)\,\mathrm{d}\xi}\,w\right) = F(\xi,\eta)\,e^{\int h(\xi,\eta)\,\mathrm{d}\xi}.$$
Now we integrate with respect to $\xi$. Since $\eta$ is being carried as a parameter, the constant of integration may depend on $\eta$:
$$e^{\int h(\xi,\eta)\,\mathrm{d}\xi}\,w = \int F(\xi,\eta)\,e^{\int h(\xi,\eta)\,\mathrm{d}\xi}\,\mathrm{d}\xi + g(\eta),$$
in which $g$ is an arbitrary differentiable function of one variable. Now the general solution of the transformed equation is
$$w(\xi,\eta) = e^{-\int h(\xi,\eta)\,\mathrm{d}\xi}\int F(\xi,\eta)\,e^{\int h(\xi,\eta)\,\mathrm{d}\xi}\,\mathrm{d}\xi + g(\eta)\,e^{-\int h(\xi,\eta)\,\mathrm{d}\xi}.$$
We obtain the general form of the solution of the original equation by substituting back $\xi(x,y)$ and $\eta(x,y)$ to get
$$u(x,y) = e^{\alpha(x,y)}\left[\beta(x,y) + g(\eta(x,y))\right]. \tag{2.6}$$
A certain class of first order PDEs (linear and semilinear PDEs) can then be reduced to a set of ODEs. This makes use of the general philosophy that ODEs are easier to solve than PDEs.

Example: Consider the constant coefficient equation
$$a\frac{\partial u}{\partial x} + b\frac{\partial u}{\partial y} + cu = 0$$
where $a, b, c \in \mathbb{R}$. Assume $a \neq 0$; the characteristic equation is $\mathrm{d}y/\mathrm{d}x = b/a$ with general solution defined by the equation $bx - ay = k$, $k$ constant. So the characteristics of the PDE are the straight line graphs of $bx - ay = k$ and we make the transformation with
$$\xi = x, \quad \eta = bx - ay.$$
Using the substitution we find the equation transforms to
$$\frac{\partial w}{\partial\xi} + \frac{c}{a}w = 0.$$
The integrating factor method gives
$$\frac{\partial}{\partial\xi}\left(e^{c\xi/a}\,w\right) = 0,$$
and integrating with respect to $\xi$ gives
$$e^{c\xi/a}\,w = g(\eta),$$
where $g$ is any differentiable function of one variable. Then
$$w = g(\eta)\,e^{-c\xi/a},$$
and in terms of $x$ and $y$ we back transform
$$u(x,y) = g(bx - ay)\,e^{-cx/a}.$$
Exercise: Verify the solution by substituting back into the PDE.
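The exercise can also be checked symbolically; the following SymPy sketch (my addition, not part of the original notes) substitutes the general solution back into the constant-coefficient PDE:

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, c = sp.symbols('a b c', nonzero=True)
g = sp.Function('g')   # arbitrary differentiable function of one variable

# General solution found above: u(x, y) = g(bx - ay) * exp(-c x / a)
u = g(b*x - a*y) * sp.exp(-c*x/a)

# Residual of  a u_x + b u_y + c u = 0
residual = a*sp.diff(u, x) + b*sp.diff(u, y) + c*u
print(sp.simplify(residual))   # 0: the general solution satisfies the PDE
```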
Note: Consider the difference between the general solution for linear ODEs and the general solution for linear PDEs. For ODEs, the general solution of
$$\frac{\mathrm{d}y}{\mathrm{d}x} + q(x)y = p(x)$$
contains an arbitrary constant of integration. For different constants you get different curves of solution in the $(x\text{-}y)$-plane. To pick out a unique solution you use some initial condition (say $y(x_0) = y_0$) to specify the constant.

For PDEs, if $u$ is the general solution to equation (2.1), then $z = u(x,y)$ defines a family of integral surfaces in 3D-space, each surface corresponding to a choice of arbitrary function $g$ in (2.6). We need some kind of information to pick out a unique solution; i.e., to choose the arbitrary function $g$.
2.1.3 Characteristic Curves
We investigate the significance of characteristics which, defined by the ODE
$$\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{b(x,y)}{a(x,y)},$$
represent a one parameter family of curves whose tangent at each point is in the direction of the vector $\mathbf{e} = (a,b)$. (Note that the left-hand side of equation (2.2) is the derivative of $u$ in the direction of the vector $\mathbf{e}$, $\mathbf{e}\cdot\nabla u$.) Their parametric representation is $(x = x(s), y = y(s))$ where $x(s)$ and $y(s)$ satisfy the pair of ODEs
$$\frac{\mathrm{d}x}{\mathrm{d}s} = a(x,y), \quad \frac{\mathrm{d}y}{\mathrm{d}s} = b(x,y). \tag{2.7}$$
The variation of $u$ with respect to $x = \xi$ along these characteristic curves is given by
$$\frac{\mathrm{d}u}{\mathrm{d}x} = \frac{\partial u}{\partial x} + \frac{\mathrm{d}y}{\mathrm{d}x}\frac{\partial u}{\partial y} = \frac{\partial u}{\partial x} + \frac{b}{a}\frac{\partial u}{\partial y} = \frac{\kappa(x,y,u)}{a(x,y)} \quad \text{from equation (2.2)},$$
such that, in terms of the curvilinear coordinate $s$, the variation of $u$ along the curves becomes
$$\frac{\mathrm{d}u}{\mathrm{d}s} = \frac{\mathrm{d}u}{\mathrm{d}x}\frac{\mathrm{d}x}{\mathrm{d}s} = \kappa(x,y,u).$$
The one parameter family of characteristic curves is parameterised by $\eta$ (each value of $\eta$ represents one unique characteristic). The solution of equation (2.2) reduces to the solution of the family of ODEs
$$\frac{\mathrm{d}u}{\mathrm{d}s} = \kappa(x,y,u) \quad \text{or similarly} \quad \frac{\mathrm{d}u}{\mathrm{d}x} = \frac{\mathrm{d}u}{\mathrm{d}\xi} = \frac{\kappa(x,y,u)}{a(x,y)} \tag{2.8}$$
along each characteristic (i.e. for each value of $\eta$). The characteristic equations (2.7) have to be solved together with equation (2.8), called the compatibility equation, to find a solution to the semilinear equation (2.2).

Cauchy Problem: Consider a curve $\Gamma$ in the $(x,y)$-plane whose parametric form is $(x = x_0(\sigma), y = y_0(\sigma))$. The Cauchy problem is to determine a solution of the equation
$$F(x, y, u, \partial_x u, \partial_y u) = 0$$
in a neighbourhood of $\Gamma$ such that $u$ takes prescribed values $u_0(\sigma)$, called Cauchy data, on $\Gamma$.

[Figure: the data curve $\Gamma$ with Cauchy data $u_0(\sigma)$, and the characteristics ($\eta$ = const) drawn through its endpoints.]

Notes:
1. $u$ can only be found in the region between the characteristics drawn through the endpoints of $\Gamma$.
2. Characteristics are curves on which the values of $u$ combined with the equation are not sufficient to determine the normal derivative of $u$.
3. A discontinuity in the initial data propagates onto the solution along the characteristics. These are curves across which the derivatives of $u$ can jump while $u$ itself remains continuous.

Existence & Uniqueness: Why do some choices of $\Gamma$ in $(x,y)$-space give a solution and others give no solution or an infinite number of solutions? It is due to the fact that the Cauchy data (initial conditions) may be prescribed on a curve $\Gamma$ which is a characteristic of the PDE. To understand the definition of characteristics in the context of existence and uniqueness of solutions, return to the general solution (2.6) of the linear PDE:
$$u(x,y) = e^{\alpha(x,y)}\left[\beta(x,y) + g(\eta(x,y))\right].$$
Consider the Cauchy data, $u_0$, prescribed along the curve $\Gamma$ whose parametric form is $(x = x_0(\sigma), y = y_0(\sigma))$ and suppose $u_0(x_0(\sigma), y_0(\sigma)) = q(\sigma)$. If $\Gamma$ is not a characteristic, the problem is well-posed and there is a unique function $g$ which satisfies the condition
$$q(\sigma) = e^{\alpha(x_0(\sigma), y_0(\sigma))}\left[\beta(x_0(\sigma), y_0(\sigma)) + g(\eta(x_0(\sigma), y_0(\sigma)))\right].$$
If on the other hand $(x = x_0(\sigma), y = y_0(\sigma))$ is the parametrisation of a characteristic ($\eta(x,y) = k$, say), the relation between the initial conditions $q$ and $g$ becomes
$$q(\sigma) = e^{\alpha(x_0(\sigma), y_0(\sigma))}\left[\beta(x_0(\sigma), y_0(\sigma)) + G\right], \tag{2.9}$$
where $G = g(k)$ is a constant; the problem is ill-posed. The functions $\alpha(x,y)$ and $\beta(x,y)$ are determined by the PDE, so equation (2.9) places a constraint on the given data function $q$. If $q(\sigma)$ is not of this form for any constant $G$, then there is no solution taking on these prescribed values on $\Gamma$. On the other hand, if $q(\sigma)$ is of this form for some $G$, then there are infinitely many such solutions, because we can choose for $g$ any differentiable function so that $g(k) = G$.
Example 1: Consider
$$2\frac{\partial u}{\partial x} + 3\frac{\partial u}{\partial y} + 8u = 0.$$
The characteristic equation is
$$\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{3}{2}$$
and the characteristics are the straight line graphs $3x - 2y = c$. Hence we take $\eta = 3x - 2y$ and $\xi = x$.

[Figure: the characteristics $\eta$ = const and the lines $\xi$ = const in the $(x,y)$-plane.]

(We can see that curves of constant $\eta$ and constant $\xi$ cross only once; they are independent, i.e. $J \neq 0$, so $\eta$ and $\xi$ have been properly chosen.)

This gives the solution
$$u(x,y) = e^{-4x}g(3x - 2y)$$
where $g$ is a differentiable function defined over the real line. Simply specifying the solution at a given point (as in ODEs) does not uniquely determine $g$; we need to take a curve of initial conditions.

Suppose we specify values of $u(x,y)$ along a curve $\Gamma$ in the plane. For example, let's choose $\Gamma$ as the $x$-axis and give values of $u(x,y)$ at points on $\Gamma$, say
$$u(x, 0) = \sin(x).$$
Then we need
$$u(x,0) = e^{-4x}g(3x) = \sin(x), \quad \text{i.e.} \quad g(3x) = \sin(x)\,e^{4x},$$
and putting $t = 3x$,
$$g(t) = \sin(t/3)\,e^{4t/3}.$$
This determines $g$ and the solution satisfying the condition $u(x,0) = \sin(x)$ on $\Gamma$ is
$$u(x,y) = \sin(x - 2y/3)\,e^{-8y/3}.$$
We have determined the unique solution of the PDE with $u$ specified along the $x$-axis.

We do not have to choose an axis; say, along $x = y$, $u(x,y) = u(x,x) = x^4$. From the general solution this requires
$$u(x,x) = e^{-4x}g(x) = x^4, \quad \text{so} \quad g(x) = x^4 e^{4x},$$
to give the unique solution
$$u(x,y) = (3x - 2y)^4\,e^{8(x-y)}$$
satisfying $u(x,x) = x^4$.

However, not every curve in the plane can be used to determine $g$. Suppose we choose $\Gamma$ to be the line $3x - 2y = 1$ and prescribe values of $u$ along this line, say
$$u(x,y) = u(x, (3x-1)/2) = x^2.$$
Now we must choose $g$ so that
$$e^{-4x}g(3x - (3x-1)) = x^2.$$
This requires $g(1) = x^2 e^{4x}$ (for all $x$). This is impossible and hence there is no solution taking the value $x^2$ at points $(x,y)$ on this line.

Last, we consider again $\Gamma$ to be the line $3x - 2y = 1$ but choose the values of $u$ along this line to be
$$u(x,y) = u(x, (3x-1)/2) = e^{-4x}.$$
Now we must choose $g$ so that
$$e^{-4x}g(3x - (3x-1)) = e^{-4x}.$$
This requires $g(1) = 1$, a condition satisfied by an infinite number of functions, and hence there is an infinite number of solutions taking the values $e^{-4x}$ on the line $3x - 2y = 1$.

Depending on the initial conditions, the PDE has one unique solution, no solution at all, or an infinite number of solutions. The difference is that the $x$-axis and the line $y = x$ are not characteristics of the PDE, while the line $3x - 2y = 1$ is a characteristic.
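As a check (my addition, assuming SymPy is available), the unique solution found above can be substituted back into the PDE and the Cauchy data:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Unique solution of  2 u_x + 3 u_y + 8 u = 0  with  u(x, 0) = sin(x)
u = sp.sin(x - 2*y/3) * sp.exp(-8*y/3)

pde = 2*sp.diff(u, x) + 3*sp.diff(u, y) + 8*u
print(sp.simplify(pde))           # 0: u satisfies the PDE
print(sp.simplify(u.subs(y, 0)))  # sin(x): u matches the data on the x-axis
```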
Example 2:
$$x\frac{\partial u}{\partial x} - y\frac{\partial u}{\partial y} = u \quad \text{with } u = x^2 \text{ on } y = x,\ 1 \le y \le 2.$$
Characteristics:
$$\frac{\mathrm{d}y}{\mathrm{d}x} = -\frac{y}{x} \implies \mathrm{d}(xy) = 0 \implies xy = c, \text{ constant}.$$
So, take $\eta = xy$ and $\xi = x$. Then the equation becomes
$$xy\frac{\partial w}{\partial\eta} + x\frac{\partial w}{\partial\xi} - xy\frac{\partial w}{\partial\eta} = w \implies \xi\frac{\partial w}{\partial\xi} - w = 0 \implies \frac{\partial}{\partial\xi}\left(\frac{w}{\xi}\right) = 0.$$
Finally the general solution is $w = \xi\,g(\eta)$, or equivalently $u(x,y) = x\,g(xy)$. When $y = x$ with $1 \le y \le 2$, $u = x^2$; so $x^2 = x\,g(x^2)$, i.e. $g(x) = \sqrt{x}$, and the solution is
$$u(x,y) = x\sqrt{xy}.$$

[Figure: the characteristic curves $\eta = xy$ = constant and the lines $\xi$ = const. The red characteristics show the domain where the initial conditions permit us to determine the solution.]

Alternative approach to solving example 2:
$$x\frac{\partial u}{\partial x} - y\frac{\partial u}{\partial y} = u \quad \text{with } u = x^2 \text{ on } y = x,\ 1 \le y \le 2.$$
This method is not suitable for finding general solutions but it works for Cauchy problems. The idea is to integrate directly the characteristic and compatibility equations in curvilinear coordinates. (See also the alternative method for solving the characteristic equations for quasilinear equations hereafter.) The solution of the characteristic equations
$$\frac{\mathrm{d}x}{\mathrm{d}s} = x \quad \text{and} \quad \frac{\mathrm{d}y}{\mathrm{d}s} = -y$$
gives the parametric form of the characteristic curves, while the integration of the compatibility equation
$$\frac{\mathrm{d}u}{\mathrm{d}s} = u$$
gives the solution $u(s)$ along these characteristic curves.

The solution of the characteristic equations is
$$x = c_1 e^{s} \quad \text{and} \quad y = c_2 e^{-s},$$
where the parametric form of the data curve $\Gamma$ permits us to find the two constants of integration $c_1$ & $c_2$ in terms of the curvilinear coordinate along $\Gamma$. The curve $\Gamma$ is described by $x_0(\theta) = \theta$ and $y_0(\theta) = \theta$ with $\theta \in [1,2]$, and we consider the points on $\Gamma$ to be the origin of the coordinate $s$ along the characteristics (i.e. $s = 0$ on $\Gamma$). So, on $\Gamma$ ($s = 0$): $\theta = c_1$ and $\theta = c_2$, giving
$$x(s,\theta) = \theta e^{s}, \quad y(s,\theta) = \theta e^{-s}, \quad \forall\theta \in [1,2].$$
For linear or semilinear problems we can solve the compatibility equation independently of the characteristic equations. (This property is not true for quasilinear equations.) Along the characteristics $u$ is determined by
$$\frac{\mathrm{d}u}{\mathrm{d}s} = u \implies u = c_3 e^{s}.$$
Now we can make use of the Cauchy data to determine the constant of integration $c_3$: on $\Gamma$, at $s = 0$, $u_0(x_0(\theta), y_0(\theta)) \equiv u_0(\theta) = \theta^2 = c_3$. Then, we have the parametric forms of the characteristic curves and the solution
$$x(s,\theta) = \theta e^{s}, \quad y(s,\theta) = \theta e^{-s} \quad \text{and} \quad u(s,\theta) = \theta^2 e^{s},$$
in terms of two parameters: $s$, the curvilinear coordinate along the characteristic curves, and $\theta$, the curvilinear coordinate along the data curve $\Gamma$. From the first two we get $s$ and $\theta$ in terms of $x$ and $y$:
$$\frac{x}{y} = e^{2s} \implies s = \frac{1}{2}\ln\frac{x}{y} \quad \text{and} \quad xy = \theta^2 \implies \theta = \sqrt{xy} \quad (\theta \ge 0).$$
Then, we substitute $s$ and $\theta$ in $u(s,\theta)$ to find
$$u(x,y) = xy\exp\left(\frac{1}{2}\ln\frac{x}{y}\right) = xy\sqrt{\frac{x}{y}} = x\sqrt{xy}.$$
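The same answer can be recovered numerically by integrating the characteristic and compatibility equations directly, which is exactly the alternative approach above. The following SciPy sketch (my addition, not part of the notes; SciPy and NumPy assumed available) launches a few characteristics from the data curve and compares $u$ with the closed form $x\sqrt{xy}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Characteristic + compatibility system for  x u_x - y u_y = u  (example 2):
# dx/ds = x, dy/ds = -y, du/ds = u, starting on the data curve y = x, u = x^2.
def rhs(s, v):
    x, y, u = v
    return [x, -y, u]

for theta in (1.0, 1.5, 2.0):       # points on the data curve, 1 <= theta <= 2
    sol = solve_ivp(rhs, [0.0, 0.8], [theta, theta, theta**2], rtol=1e-10)
    x, y, u = sol.y[:, -1]
    print(f"theta={theta}: u={u:.6f}, x*sqrt(x*y)={x*np.sqrt(x*y):.6f}")
# The two columns agree, matching the closed-form solution u = x*sqrt(x*y).
```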
2.2 Quasilinear Equations

Consider the first order quasilinear PDE
$$a(x,y,u)\frac{\partial u}{\partial x} + b(x,y,u)\frac{\partial u}{\partial y} = c(x,y,u), \tag{2.10}$$
where the functions $a$, $b$ and $c$ can involve $u$ but not its derivatives.

2.2.1 Interpretation of Quasilinear Equation
We can represent the solutions $u(x,y)$ by the integral surfaces of the PDE, $z = u(x,y)$, in $(x,y,z)$-space. Define the Monge direction by the vector $(a,b,c)$ and recall that the normal to the integral surface is $(\partial_x u, \partial_y u, -1)$. Thus the quasilinear equation (2.10) says that the normal to the integral surface is perpendicular to the Monge direction; i.e. integral surfaces are surfaces that at each point are tangent to the Monge direction,
$$\begin{pmatrix}a\\ b\\ c\end{pmatrix}\cdot\begin{pmatrix}\partial_x u\\ \partial_y u\\ -1\end{pmatrix} = a(x,y,u)\frac{\partial u}{\partial x} + b(x,y,u)\frac{\partial u}{\partial y} - c(x,y,u) = 0.$$
With the field of Monge directions, with direction numbers $(a,b,c)$, we can associate the family of Monge curves which at each point are tangent to that direction field. These are defined by
$$\begin{pmatrix}\mathrm{d}x\\ \mathrm{d}y\\ \mathrm{d}z\end{pmatrix}\times\begin{pmatrix}a\\ b\\ c\end{pmatrix} = \begin{pmatrix}c\,\mathrm{d}y - b\,\mathrm{d}z\\ a\,\mathrm{d}z - c\,\mathrm{d}x\\ b\,\mathrm{d}x - a\,\mathrm{d}y\end{pmatrix} = 0 \iff \frac{\mathrm{d}x}{a(x,y,u)} = \frac{\mathrm{d}y}{b(x,y,u)} = \frac{\mathrm{d}z}{c(x,y,u)} \ (= \mathrm{d}s),$$
where $\mathrm{d}\mathbf{l} = (\mathrm{d}x, \mathrm{d}y, \mathrm{d}z)$ is an arbitrary infinitesimal vector parallel to the Monge direction.

In the linear case, characteristics were curves in the $(x,y)$-plane (see § 2.1.3). For the quasilinear equation, we consider Monge curves in $(x,y,u)$-space defined by
$$\frac{\mathrm{d}x}{\mathrm{d}s} = a(x,y,u), \quad \frac{\mathrm{d}y}{\mathrm{d}s} = b(x,y,u), \quad \frac{\mathrm{d}u}{\mathrm{d}s} = c(x,y,u).$$
The characteristic equations ($\mathrm{d}\{x,y\}/\mathrm{d}s$) and the compatibility equation ($\mathrm{d}u/\mathrm{d}s$) are simultaneous first order ODEs in terms of a dummy variable $s$ (curvilinear coordinate along the characteristics); we cannot solve the characteristic equations and the compatibility equation independently, as is the case for a semilinear equation. Note that, in cases where $c \equiv 0$, the solution remains constant on the characteristics.

The rough idea in solving the PDE is thus to build up the integral surface from the Monge curves, obtained by solution of the ODEs. Note that we make the distinction between the Monge curve or direction in $(x,y,z)$-space and the characteristic curve or direction, their projections in $(x,y)$-space.
2.2.2 General solution:
Suppose that the characteristic and compatibility equations that we have defined have two independent first integrals (functions $f(x,y,u)$ constant along the Monge curves),
$$\phi(x,y,u) = c_1 \quad \text{and} \quad \psi(x,y,u) = c_2.$$
Then the solution of equation (2.10) satisfies $F(\phi,\psi) = 0$ for some arbitrary function $F$ (equivalently, $\phi = G(\psi)$ for some arbitrary $G$), where the form of $F$ (or $G$) depends on the initial conditions.

Proof: Since $\phi$ and $\psi$ are first integrals of the equation,
$$\phi(x,y,u) = \phi(x(s), y(s), u(s)) = \phi(s) = c_1.$$
We have the chain rule
$$\frac{\mathrm{d}\phi}{\mathrm{d}s} = \frac{\partial\phi}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}s} + \frac{\partial\phi}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}s} + \frac{\partial\phi}{\partial u}\frac{\mathrm{d}u}{\mathrm{d}s} = 0,$$
and then from the characteristic equations
$$a\frac{\partial\phi}{\partial x} + b\frac{\partial\phi}{\partial y} + c\frac{\partial\phi}{\partial u} = 0.$$
And similarly for $\psi$,
$$a\frac{\partial\psi}{\partial x} + b\frac{\partial\psi}{\partial y} + c\frac{\partial\psi}{\partial u} = 0.$$
Solving for $c$ gives
$$a\left(\frac{\partial\phi}{\partial x}\frac{\partial\psi}{\partial u} - \frac{\partial\psi}{\partial x}\frac{\partial\phi}{\partial u}\right) + b\left(\frac{\partial\phi}{\partial y}\frac{\partial\psi}{\partial u} - \frac{\partial\psi}{\partial y}\frac{\partial\phi}{\partial u}\right) = 0,$$
or $a\,J[u,x] = b\,J[y,u]$ where
$$J[x_1, x_2] = \begin{vmatrix}\dfrac{\partial\phi}{\partial x_1} & \dfrac{\partial\phi}{\partial x_2}\\[4pt] \dfrac{\partial\psi}{\partial x_1} & \dfrac{\partial\psi}{\partial x_2}\end{vmatrix}.$$
And similarly, solving for $a$, $b\,J[x,y] = c\,J[u,x]$. Thus, we have
$$J[u,x] = J[x,y]\,\frac{b}{c} \quad \text{and} \quad J[y,u] = J[u,x]\,\frac{a}{b} = J[x,y]\,\frac{a}{c}.$$
Now consider $F(\phi,\psi) = 0$ — remember $F(\phi(x,y,u(x,y)), \psi(x,y,u(x,y)))$ — and differentiate:
$$\mathrm{d}F = \frac{\partial F}{\partial x}\mathrm{d}x + \frac{\partial F}{\partial y}\mathrm{d}y = 0.$$
Then, the derivative with respect to $x$ is zero,
$$\frac{\partial F}{\partial x} = \frac{\partial F}{\partial\phi}\left(\frac{\partial\phi}{\partial x} + \frac{\partial\phi}{\partial u}\frac{\partial u}{\partial x}\right) + \frac{\partial F}{\partial\psi}\left(\frac{\partial\psi}{\partial x} + \frac{\partial\psi}{\partial u}\frac{\partial u}{\partial x}\right) = 0,$$
as well as the derivative with respect to $y$,
$$\frac{\partial F}{\partial y} = \frac{\partial F}{\partial\phi}\left(\frac{\partial\phi}{\partial y} + \frac{\partial\phi}{\partial u}\frac{\partial u}{\partial y}\right) + \frac{\partial F}{\partial\psi}\left(\frac{\partial\psi}{\partial y} + \frac{\partial\psi}{\partial u}\frac{\partial u}{\partial y}\right) = 0.$$
For a non-trivial solution of this we must have
$$\left(\frac{\partial\phi}{\partial x} + \frac{\partial\phi}{\partial u}\frac{\partial u}{\partial x}\right)\left(\frac{\partial\psi}{\partial y} + \frac{\partial\psi}{\partial u}\frac{\partial u}{\partial y}\right) - \left(\frac{\partial\phi}{\partial y} + \frac{\partial\phi}{\partial u}\frac{\partial u}{\partial y}\right)\left(\frac{\partial\psi}{\partial x} + \frac{\partial\psi}{\partial u}\frac{\partial u}{\partial x}\right) = 0,$$
which, on expanding, gives
$$\frac{\partial u}{\partial x}\,J[y,u] + \frac{\partial u}{\partial y}\,J[u,x] = J[x,y].$$
Then from the previous expressions relating the Jacobians to $a$, $b$, and $c$,
$$a\frac{\partial u}{\partial x} + b\frac{\partial u}{\partial y} = c;$$
i.e., $F(\phi,\psi) = 0$ defines a solution of the original equation.

Example 1:
$$(y+u)\frac{\partial u}{\partial x} + y\frac{\partial u}{\partial y} = x - y \quad \text{in } y > 0,\ -\infty < x < \infty,$$
with $u = 1 + x$ on $y = 1$.

We first look for the general solution of the PDE before applying the initial conditions. Combining the characteristic and compatibility equations,
$$\frac{\mathrm{d}x}{\mathrm{d}s} = y + u, \tag{2.11}$$
$$\frac{\mathrm{d}y}{\mathrm{d}s} = y, \tag{2.12}$$
$$\frac{\mathrm{d}u}{\mathrm{d}s} = x - y, \tag{2.13}$$
we seek two independent first integrals. Equations (2.11) and (2.13) give
$$\frac{\mathrm{d}}{\mathrm{d}s}(x + u) = x + u,$$
and equation (2.12)
$$\frac{1}{y}\frac{\mathrm{d}y}{\mathrm{d}s} = 1.$$
Now, consider
$$\frac{\mathrm{d}}{\mathrm{d}s}\left(\frac{x+u}{y}\right) = \frac{1}{y}\frac{\mathrm{d}}{\mathrm{d}s}(x+u) - \frac{x+u}{y^2}\frac{\mathrm{d}y}{\mathrm{d}s} = \frac{x+u}{y} - \frac{x+u}{y} = 0.$$
So, $(x+u)/y = c_1$ is constant. This defines a family of solutions of the PDE; so, we can choose
$$\phi(x,y,u) = \frac{x+u}{y},$$
such that $\phi = c_1$ determines one particular family of solutions. Also, equations (2.11) and (2.12) give
$$\frac{\mathrm{d}}{\mathrm{d}s}(x - y) = u,$$
and equation (2.13) gives
$$u\frac{\mathrm{d}u}{\mathrm{d}s} = u(x - y).$$
Now, consider
$$\frac{\mathrm{d}}{\mathrm{d}s}\left[(x-y)^2 - u^2\right] = 2(x-y)\frac{\mathrm{d}}{\mathrm{d}s}(x-y) - 2u\frac{\mathrm{d}u}{\mathrm{d}s} = 2(x-y)u - 2u(x-y) = 0.$$
Then, $(x-y)^2 - u^2 = c_2$ is constant and defines another family of solutions of the PDE. So, we can take
$$\psi(x,y,u) = (x-y)^2 - u^2.$$
The general solution is
$$F\left(\frac{x+u}{y},\,(x-y)^2 - u^2\right) = 0 \quad \text{or} \quad (x-y)^2 - u^2 = G\left(\frac{x+u}{y}\right),$$
for some arbitrary functions $F$ or $G$. Now, to get a particular solution, apply the initial conditions ($u = 1 + x$ when $y = 1$):
$$(x-1)^2 - (x+1)^2 = G(2x+1) \implies G(2x+1) = -4x.$$
Substitute $\theta = 2x + 1$, i.e. $x = (\theta - 1)/2$, so $G(\theta) = 2(1 - \theta)$. Hence,
$$(x-y)^2 - u^2 = 2\left(1 - \frac{x+u}{y}\right) = \frac{2}{y}(y - x - u).$$
We can regard this as a quadratic equation for $u$:
$$u^2 - \frac{2}{y}u - \left[(x-y)^2 + \frac{2}{y}(x-y)\right] = 0, \quad \text{i.e.} \quad u^2 - \frac{2}{y}u - \left(x - y + \frac{1}{y}\right)^2 + \frac{1}{y^2} = 0.$$
Then,
$$u = \frac{1}{y} \pm \sqrt{\frac{1}{y^2} + \left(x - y + \frac{1}{y}\right)^2 - \frac{1}{y^2}} = \frac{1}{y} \pm \left(x - y + \frac{1}{y}\right).$$
Consider again the initial condition $u = 1 + x$ on $y = 1$:
$$u(x, y = 1) = 1 \pm (x - 1 + 1) = 1 \pm x \implies \text{take the positive root}.$$
Hence,
$$u(x,y) = x - y + \frac{2}{y}.$$
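A short SymPy check (my addition, not from the notes) confirms that $u = x - y + 2/y$ satisfies both the quasilinear PDE and the Cauchy data of Example 1:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
u = x - y + 2/y                       # solution found above

lhs = (y + u)*sp.diff(u, x) + y*sp.diff(u, y)
print(sp.simplify(lhs - (x - y)))     # 0: the PDE (y+u) u_x + y u_y = x - y holds
print(sp.simplify(u.subs(y, 1)))      # x + 1: initial data u = 1 + x on y = 1
```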
Example 2: using the same procedure solve
$$x(y-u)\frac{\partial u}{\partial x} + y(x+u)\frac{\partial u}{\partial y} = (x+y)u \quad \text{with } u = x^2 + 1 \text{ on } y = x.$$
Characteristic equations:
$$\frac{\mathrm{d}x}{\mathrm{d}s} = x(y-u), \tag{2.14}$$
$$\frac{\mathrm{d}y}{\mathrm{d}s} = y(x+u), \tag{2.15}$$
$$\frac{\mathrm{d}u}{\mathrm{d}s} = (x+y)u. \tag{2.16}$$
Again, we seek two independent first integrals. On the one hand, equations (2.14) and (2.15) give
$$y\frac{\mathrm{d}x}{\mathrm{d}s} + x\frac{\mathrm{d}y}{\mathrm{d}s} = xy^2 - xyu + yx^2 + xyu = xy(x+y) = xy\,\frac{1}{u}\frac{\mathrm{d}u}{\mathrm{d}s},$$
using equation (2.16). Now, consider
$$\frac{1}{x}\frac{\mathrm{d}x}{\mathrm{d}s} + \frac{1}{y}\frac{\mathrm{d}y}{\mathrm{d}s} - \frac{1}{u}\frac{\mathrm{d}u}{\mathrm{d}s} = 0 \implies \frac{\mathrm{d}}{\mathrm{d}s}\ln\left(\frac{xy}{u}\right) = 0.$$
Hence, $xy/u = c_1$ is constant and
$$\phi(x,y,u) = \frac{xy}{u}$$
is a first integral of the PDE. On the other hand,
$$\frac{\mathrm{d}x}{\mathrm{d}s} - \frac{\mathrm{d}y}{\mathrm{d}s} = xy - xu - xy - yu = -u(x+y) = -\frac{\mathrm{d}u}{\mathrm{d}s} \implies \frac{\mathrm{d}}{\mathrm{d}s}(x + u - y) = 0.$$
Hence, $x + u - y = c_2$ is also a constant on the Monge curves and another first integral is given by $\psi(x,y,u) = x + u - y$, so the general solution is
$$\frac{xy}{u} = G(x + u - y).$$
Now, we make use of the initial conditions, $u = x^2 + 1$ on $y = x$, to determine $G$:
$$\frac{x^2}{1 + x^2} = G(x^2 + 1);$$
set $\theta = x^2 + 1$, i.e. $x^2 = \theta - 1$, then
$$G(\theta) = \frac{\theta - 1}{\theta},$$
and finally the solution is
$$\frac{xy}{u} = \frac{x + u - y - 1}{x + u - y}. \quad \text{Rearrange to finish!}$$

Alternative approach: Solving the characteristic equations. Illustration by an example,
$$x^2\frac{\partial u}{\partial x} + u\frac{\partial u}{\partial y} = 1, \quad \text{with } u = 0 \text{ on } x + y = 1.$$
The characteristic equations are
$$\frac{\mathrm{d}x}{\mathrm{d}s} = x^2, \quad \frac{\mathrm{d}y}{\mathrm{d}s} = u \quad \text{and} \quad \frac{\mathrm{d}u}{\mathrm{d}s} = 1,$$
which we can solve to get
$$x = \frac{1}{c_1 - s}, \tag{2.17}$$
$$y = \frac{s^2}{2} + c_2 s + c_3, \tag{2.18}$$
$$u = c_2 + s, \tag{2.19}$$
for constants $c_1, c_2, c_3$. We now parameterise the initial line in terms of $\theta$: $x = \theta$, $y = 1 - \theta$, and apply the initial data at $s = 0$. Hence,
(2.17) gives $\theta = 1/c_1 \implies c_1 = 1/\theta$; (2.18) gives $1 - \theta = c_3$; (2.19) gives $0 = c_2$.
Hence, we have found the parametric form of the integral surface,
$$x = \frac{\theta}{1 - s\theta}, \quad y = \frac{s^2}{2} + 1 - \theta \quad \text{and} \quad u = s.$$
Eliminate $s$ and $\theta$:
$$x = \frac{\theta}{1 - s\theta} \implies \theta = \frac{x}{1 + sx},$$
then
$$y = \frac{u^2}{2} + 1 - \frac{x}{1 + ux}.$$
Invariants, or first integrals, are (from the solutions (2.17), (2.18) and (2.19))
$$\phi = \frac{u^2}{2} - y \quad \text{and} \quad \psi = \frac{x}{1 + ux}.$$

Alternative approach to example 1:
$$(y+u)\frac{\partial u}{\partial x} + y\frac{\partial u}{\partial y} = x - y \quad \text{in } y > 0,\ -\infty < x < \infty,$$
with $u = 1 + x$ on $y = 1$. Characteristic equations:
$$\frac{\mathrm{d}x}{\mathrm{d}s} = y + u, \tag{2.20}$$
$$\frac{\mathrm{d}y}{\mathrm{d}s} = y, \tag{2.21}$$
$$\frac{\mathrm{d}u}{\mathrm{d}s} = x - y. \tag{2.22}$$
Solve with respect to the dummy variable $s$; (2.21) gives
$$y = c_1 e^{s},$$
(2.20) and (2.22) give
$$\frac{\mathrm{d}}{\mathrm{d}s}(x+u) = x + u \implies x + u = c_2 e^{s},$$
and (2.20) gives
$$\frac{\mathrm{d}x}{\mathrm{d}s} = c_1 e^{s} + c_2 e^{s} - x,$$
so
$$x = c_3 e^{-s} + \frac{1}{2}(c_1 + c_2)e^{s} \quad \text{and} \quad u = -c_3 e^{-s} + \frac{1}{2}(c_2 - c_1)e^{s}.$$
Now, at $s = 0$, $y = 1$ and $x = \theta$, $u = 1 + \theta$ (parameterising the initial line $\Gamma$), so
$$c_1 = 1, \quad c_2 = 1 + 2\theta \quad \text{and} \quad c_3 = -1.$$
Hence, the parametric form of the integral surface is
$$x = -e^{-s} + (1+\theta)e^{s}, \quad y = e^{s} \quad \text{and} \quad u = e^{-s} + \theta e^{s}.$$
Then eliminate $\theta$ and $s$:
$$x = -\frac{1}{y} + (1+\theta)y \implies \theta = \frac{1}{y}\left(x - y + \frac{1}{y}\right),$$
so
$$u = \frac{1}{y} + \frac{1}{y}\left(x - y + \frac{1}{y}\right)y = \frac{1}{y} + x - y + \frac{1}{y}.$$
Finally,
$$u = x - y + \frac{2}{y}, \quad \text{as before}.$$
To find invariants, return to the solved characteristic equations and solve for the constants in terms of $x$, $y$ and $u$. We only need two, so put for instance $c_1 = 1$ and so $y = e^{s}$. Then,
$$x = \frac{c_3}{y} + \frac{1}{2}(1 + c_2)y \quad \text{and} \quad u = -\frac{c_3}{y} + \frac{1}{2}(c_2 - 1)y.$$
Solve for $c_2$:
$$c_2 = \frac{x+u}{y}, \quad \text{so} \quad \phi = \frac{x+u}{y},$$
and solve for $c_3$:
$$c_3 = \frac{1}{2}(x - u - y)\,y, \quad \text{so} \quad \psi = (x - u - y)\,y.$$
Observe that $\psi$ is different from last time, but this does not matter as we only require two independent choices for $\phi$ and $\psi$. In fact we can show that our previous $\psi$ is also constant:
$$(x-y)^2 - u^2 = (x - y + u)(x - y - u) = (y\phi - y)\,\frac{\psi}{y} = \psi(\phi - 1),$$
which is also constant.

Summary: Solving the characteristic equations — two approaches.
1. Manipulate the equations to get them in a "directly integrable" form, e.g.
$$\frac{1}{x+u}\frac{\mathrm{d}}{\mathrm{d}s}(x+u) = 1,$$
and find some combination of the variables which differentiates to zero (first integral), e.g.
$$\frac{\mathrm{d}}{\mathrm{d}s}\left(\frac{x+u}{y}\right) = 0.$$
2. Solve the equations with respect to the dummy variable $s$, and apply the initial data (parameterised by $\theta$) at $s = 0$. Eliminate $\theta$ and $s$; find invariants by solving for the constants.
2.3 Wave Equation

We consider the equation
$$\frac{\partial u}{\partial t} + (u + c)\frac{\partial u}{\partial x} = 0 \quad \text{with } u(0,x) = f(x),$$
where $c$ is some positive constant.

2.3.1 Linear Waves
If $u$ is small (i.e. $u^2 \ll u$), then the equation approximates to the linear wave equation
$$\frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x} = 0 \quad \text{with } u(x,0) = f(x).$$
The solution of the equation of characteristics, $\mathrm{d}x/\mathrm{d}t = c$, gives the first integral of the PDE, $\eta(x,t) = x - ct$, and then the general solution $u(x,t) = g(x - ct)$, where the function $g$ is determined by the initial conditions. Applying $u(x,0) = f(x)$ we find that the linear wave equation has the solution $u(x,t) = f(x - ct)$, which represents a wave (of unchanging shape) propagating with constant wave speed $c$.

[Figure: the characteristics $\eta = x - ct$ = const in the $(x,t)$-plane through the data curve $\Gamma$, and the initial profile $f(x)$ translated a distance $ct_1$ at time $t = t_1$.]

Note that $u$ is constant where $x - ct$ = constant, i.e. on the characteristics.

2.3.2 Nonlinear Waves
For the nonlinear equation,
$$\frac{\partial u}{\partial t} + (u + c)\frac{\partial u}{\partial x} = 0,$$
the characteristics are defined by
$$\frac{\mathrm{d}t}{\mathrm{d}s} = 1, \quad \frac{\mathrm{d}x}{\mathrm{d}s} = c + u \quad \text{and} \quad \frac{\mathrm{d}u}{\mathrm{d}s} = 0,$$
which we can solve to give two independent first integrals $\phi = u$ and $\psi = x - (u+c)t$. So,
$$u = f[x - (u+c)t],$$
according to the initial conditions $u(x,0) = f(x)$. This is similar to the previous result, but now the "wave speed" involves $u$. However, this form of the solution is not very helpful; it is more instructive to consider the characteristic curves. (The PDE is homogeneous, so the solution $u$ is constant along the Monge curves — this is not the case in general — which can then be reduced to their projections in the $(x,t)$-plane.) By definition, $\psi = x - (c+u)t$ is constant on the characteristics (as well as $u$); differentiate $\psi$ to find that the characteristics are described by
$$\frac{\mathrm{d}x}{\mathrm{d}t} = u + c.$$
These are straight lines,
$$x = (f(\theta) + c)\,t + \theta,$$
expressed in terms of a parameter $\theta$. (If we make use of the parametric form of the data curve $\Gamma$: $\{x = \theta, t = 0, \theta \in \mathbb{R}\}$, and solve the Cauchy problem directly in terms of the coordinate $s = t$, we similarly find $u = f(\theta)$ and $x = (u+c)t + \theta$.) The slope of the characteristics, $1/(c+u)$, varies from one line to another, and so two curves can intersect.

[Figure: characteristics $x = (f(\theta)+c)t + \theta$ from the data curve $\{x = \theta, t = 0\}$; where $f'(\theta) < 0$ neighbouring characteristics converge and cross at times $t \ge t_{\min}$.]

Consider two crossing characteristics expressed in terms of $\theta_1$ and $\theta_2$, i.e.
$$x = (f(\theta_1) + c)\,t + \theta_1, \quad x = (f(\theta_2) + c)\,t + \theta_2.$$
(These correspond to initial values given at $x = \theta_1$ and $x = \theta_2$.) These characteristics intersect at the time
$$t = -\frac{\theta_1 - \theta_2}{f(\theta_1) - f(\theta_2)},$$
and if this is positive it will be in the region of solution. At this point $u$ will not be single-valued and the solution breaks down. By letting $\theta_2 \to \theta_1$ we can see that the characteristics intersect at
$$t = -\frac{1}{f'(\theta)},$$
and the minimum time for which the solution becomes multi-valued is
$$t_{\min} = \frac{1}{\max[-f'(\theta)]};$$
i.e. the solution is single valued (i.e. is physical) only for $0 \le t < t_{\min}$. Hence, when $f'(\theta) < 0$ we can expect the solution to exist only for a finite time. In physical terms, the equation considered is purely advective; in real waves, such as shock waves in gases, when very large gradients are formed then diffusive terms (e.g. $\partial_{xx}u$) become vitally important.

[Figure: the profile $u(x,t)$ steepening as $t$ increases and becoming multi-valued (wave breaking).]

To illustrate finite time solutions of the nonlinear wave equation, consider
$$f(\theta) = \theta(1 - \theta) \quad (0 \le \theta \le 1), \quad f'(\theta) = 1 - 2\theta.$$
So, $f'(\theta) < 0$ for $1/2 < \theta < 1$ and we can expect the solution not to remain single-valued for all values of $t$. ($\max[-f'(\theta)] = 1$ so $t_{\min} = 1$.) Now,
$$u = f(x - (u+c)t), \quad \text{so} \quad u = [x - (u+c)t]\,[1 - x + (u+c)t] \quad (ct \le x \le 1 + ct),$$
which we can express as
$$t^2u^2 + (1 + t - 2xt + 2ct^2)\,u + (x^2 - x - 2ctx + ct + c^2t^2) = 0,$$
and solving for $u$ (we take the positive root from the initial data)
$$u = \frac{1}{2t^2}\left[2t(x - ct) - (1 + t) + \sqrt{(1+t)^2 - 4t(x - ct)}\right].$$
Now, at $t = 1$,
$$u = x - (c+1) + \sqrt{1 + c - x},$$
so the solution becomes singular as $t \to 1$ and $x \to 1 + c$.
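A small NumPy sketch (my addition, not in the original notes) estimates the breaking time for $f(\theta) = \theta(1-\theta)$ by evaluating $-1/f'(\theta)$ over the initial interval, reproducing $t_{\min} = 1$:

```python
import numpy as np

f_prime = lambda th: 1 - 2*th          # f(theta) = theta*(1 - theta)
theta = np.linspace(0.0, 1.0, 2001)

# Neighbouring characteristics x = (f(theta) + c) t + theta first cross at
# t = -1/f'(theta) wherever f'(theta) < 0; the earliest such time is t_min.
fp = f_prime(theta)
t_cross = -1.0 / fp[fp < 0]
print(t_cross.min())                   # 1.0, i.e. t_min = 1/max(-f') = 1
```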
2.3.3 Weak Solution
When wave breaking occurs (multi-valued solutions) we must re-think the assumptions in our model. Consider again the nonlinear wave equation,
$$\frac{\partial u}{\partial t} + (u + c)\frac{\partial u}{\partial x} = 0,$$
and put $w(x,t) = u(x,t) + c$; hence the PDE becomes the inviscid Burgers equation
$$\frac{\partial w}{\partial t} + w\frac{\partial w}{\partial x} = 0,$$
or equivalently, in conservative form,
$$\frac{\partial w}{\partial t} + \frac{\partial}{\partial x}\left(\frac{w^2}{2}\right) = 0,$$
where $w^2/2$ is the flux function. We now consider its integral form,
$$\int_{x_1}^{x_2}\left[\frac{\partial w}{\partial t} + \frac{\partial}{\partial x}\left(\frac{w^2}{2}\right)\right]\mathrm{d}x = 0 \iff \frac{\mathrm{d}}{\mathrm{d}t}\int_{x_1}^{x_2}w(x,t)\,\mathrm{d}x = -\int_{x_1}^{x_2}\frac{\partial}{\partial x}\left(\frac{w^2}{2}\right)\mathrm{d}x,$$
where $x_2 > x_1$ are real. Then,
$$\frac{\mathrm{d}}{\mathrm{d}t}\int_{x_1}^{x_2}w(x,t)\,\mathrm{d}x = \frac{w^2(x_1,t)}{2} - \frac{w^2(x_2,t)}{2}.$$
Let us now relax the assumption regarding the differentiability of our solution; suppose that $w$ has a discontinuity at $x = s(t)$ with $x_1 < s(t) < x_2$.

[Figure: a profile $w(x,t)$ with a jump at $x = s(t)$, taking the values $w(s^-,t)$ and $w(s^+,t)$ on either side.]

Thus, splitting the interval $[x_1, x_2]$ in two parts, we have
$$\frac{w^2(x_1,t)}{2} - \frac{w^2(x_2,t)}{2} = \frac{\mathrm{d}}{\mathrm{d}t}\int_{x_1}^{s(t)}w(x,t)\,\mathrm{d}x + \frac{\mathrm{d}}{\mathrm{d}t}\int_{s(t)}^{x_2}w(x,t)\,\mathrm{d}x$$
$$= w(s^-,t)\,\dot{s}(t) + \int_{x_1}^{s(t)}\frac{\partial w}{\partial t}\,\mathrm{d}x - w(s^+,t)\,\dot{s}(t) + \int_{s(t)}^{x_2}\frac{\partial w}{\partial t}\,\mathrm{d}x,$$
where $w(s^-(t),t)$ and $w(s^+(t),t)$ are the values of $w$ as $x \to s$ from below and above respectively; $\dot{s} = \mathrm{d}s/\mathrm{d}t$.

Now, take the limit $x_1 \to s^-(t)$ and $x_2 \to s^+(t)$. Since $\partial w/\partial t$ is bounded, the two integrals tend to zero. We then have
$$\frac{w^2(s^-,t)}{2} - \frac{w^2(s^+,t)}{2} = \dot{s}\left[w(s^-,t) - w(s^+,t)\right].$$
The velocity of the discontinuity, the shock velocity, is $U = \dot{s}$. If $[\,\cdot\,]$ indicates the jump across the shock then this condition may be written in the form
$$U\,[w] = \left[\frac{w^2}{2}\right].$$
The shock velocity for the Burgers equation is
$$U = \frac{1}{2}\,\frac{w^2(s^+) - w^2(s^-)}{w(s^+) - w(s^-)} = \frac{w(s^+) + w(s^-)}{2}.$$
The problem then reduces to fitting shock discontinuities into the solution in such a way that the jump condition is satisfied and multi-valued solutions are avoided. A solution that satisfies the original equation in regions and which satisfies the integral form of the equation is called a weak solution or generalised solution.

Example: Consider the inviscid Burgers equation
$$\frac{\partial w}{\partial t} + w\frac{\partial w}{\partial x} = 0,$$
with initial conditions
$$w(x = \theta, t = 0) = f(\theta) = \begin{cases}1 & \text{for } \theta \le 0,\\ 1 - \theta & \text{for } 0 \le \theta \le 1,\\ 0 & \text{for } \theta \ge 1.\end{cases}$$
As seen before, the characteristics are curves on which $w = f(\theta)$ as well as $x - f(\theta)t = \theta$ are constant, where $\theta$ is the parameter of the parametric form of the curve of initial data, $\Gamma$. For all $\theta \in (0,1)$, $f'(\theta) = -1$ is negative ($f' = 0$ elsewhere), so we can expect that all the characteristics corresponding to these values of $\theta$ intersect at the same point; the solution of the inviscid Burgers equation becomes multi-valued at the time
$$t_{\min} = \frac{1}{\max[-f'(\theta)]} = 1, \quad \forall\theta \in (0,1).$$
Then, the position where the singularity develops at $t = 1$ is
$$x = f(\theta)\,t + \theta = 1 - \theta + \theta = 1.$$

[Figure: characteristics in the $(x,t)$-plane ($w = 1$ on $x - t$ = const for $\theta \le 0$, $w = 0$ on $x$ = const for $\theta \ge 1$), with all characteristics from $0 < \theta < 1$ meeting at $(x,t) = (1,1)$; for $t > 1$ the multi-valued profile is replaced by a shock travelling at speed $U$ (weak solution).]

As time increases, the slope of the solution,
$$w(x,t) = \begin{cases}1 & \text{for } x \le t,\\[2pt] \dfrac{1-x}{1-t} & \text{for } t \le x \le 1, \text{ with } 0 \le t < 1,\\[4pt] 0 & \text{for } x \ge 1,\end{cases}$$
becomes steeper and steeper until it becomes vertical at $t = 1$; then the solution is multi-valued. Nevertheless, we can define a generalised solution, valid for all positive time, by introducing a shock wave.

Suppose the shock is at $s(t) = Ut + \text{constant}$, with $w(s^-,t) = 1$ and $w(s^+,t) = 0$. The jump condition gives the shock velocity,
$$U = \frac{w(s^+) + w(s^-)}{2} = \frac{1}{2};$$
furthermore, the shock starts at $x = 1$, $t = 1$, so the constant equals $1 - 1/2 = 1/2$. Hence, the weak solution of the problem is, for $t \ge 1$,
$$w(x,t) = \begin{cases}1 & \text{for } x < s(t),\\ 0 & \text{for } x > s(t),\end{cases} \quad \text{where } s(t) = \frac{1}{2}(t + 1).$$
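The jump condition can be checked directly; this tiny sketch (my addition, not from the notes) evaluates the Rankine-Hugoniot shock speed for the states $w(s^-) = 1$, $w(s^+) = 0$ used above:

```python
# Jump condition for the inviscid Burgers equation w_t + (w^2/2)_x = 0:
# U [w] = [w^2/2], so U = (w_plus + w_minus)/2.
w_minus, w_plus = 1.0, 0.0
U = (w_plus**2/2 - w_minus**2/2) / (w_plus - w_minus)
print(U)   # 0.5, consistent with the shock path s(t) = (t + 1)/2 found above
```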
2.4 Systems of Equations

2.4.1 Linear and Semilinear Equations
These are equations of the form
$$\sum_{j=1}^{n}\left(a_{ij}\,u^{(j)}_x + b_{ij}\,u^{(j)}_y\right) = c_i, \quad i = 1, 2, \ldots, n \quad \left(u^{(j)}_x \equiv \frac{\partial u^{(j)}}{\partial x}\right),$$
for the unknowns $u^{(1)}, u^{(2)}, \ldots, u^{(n)}$, when the coefficients $a_{ij}$ and $b_{ij}$ are functions only of $x$ and $y$. (Though the $c_i$ could also involve $u^{(k)}$.) In matrix notation
$$A\mathbf{u}_x + B\mathbf{u}_y = \mathbf{c},$$
where
$$A = (a_{ij}) = \begin{pmatrix}a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \cdots & a_{nn}\end{pmatrix}, \quad B = (b_{ij}) = \begin{pmatrix}b_{11} & \cdots & b_{1n}\\ \vdots & \ddots & \vdots\\ b_{n1} & \cdots & b_{nn}\end{pmatrix}, \quad \mathbf{c} = \begin{pmatrix}c_1\\ c_2\\ \vdots\\ c_n\end{pmatrix} \quad \text{and} \quad \mathbf{u} = \begin{pmatrix}u^{(1)}\\ u^{(2)}\\ \vdots\\ u^{(n)}\end{pmatrix}.$$
E.g.,
$$u^{(1)}_x - 2u^{(2)}_x + 3u^{(1)}_y - u^{(2)}_y = x + y,$$
$$u^{(1)}_x + u^{(2)}_x - 5u^{(1)}_y + 2u^{(2)}_y = x^2 + y^2,$$
can be expressed as
$$\begin{pmatrix}1 & -2\\ 1 & 1\end{pmatrix}\begin{pmatrix}u^{(1)}_x\\ u^{(2)}_x\end{pmatrix} + \begin{pmatrix}3 & -1\\ -5 & 2\end{pmatrix}\begin{pmatrix}u^{(1)}_y\\ u^{(2)}_y\end{pmatrix} = \begin{pmatrix}x + y\\ x^2 + y^2\end{pmatrix},$$
or $A\mathbf{u}_x + B\mathbf{u}_y = \mathbf{c}$ where
$$A = \begin{pmatrix}1 & -2\\ 1 & 1\end{pmatrix}, \quad B = \begin{pmatrix}3 & -1\\ -5 & 2\end{pmatrix} \quad \text{and} \quad \mathbf{c} = \begin{pmatrix}x + y\\ x^2 + y^2\end{pmatrix}.$$
If we multiply by
$$A^{-1} = \begin{pmatrix}1/3 & 2/3\\ -1/3 & 1/3\end{pmatrix},$$
$$A^{-1}A\mathbf{u}_x + A^{-1}B\mathbf{u}_y = A^{-1}\mathbf{c},$$
we obtain $\mathbf{u}_x + D\mathbf{u}_y = \mathbf{d}$, where
$$D = A^{-1}B = \begin{pmatrix}1/3 & 2/3\\ -1/3 & 1/3\end{pmatrix}\begin{pmatrix}3 & -1\\ -5 & 2\end{pmatrix} = \begin{pmatrix}-7/3 & 1\\ -8/3 & 1\end{pmatrix} \quad \text{and} \quad \mathbf{d} = A^{-1}\mathbf{c}.$$
We now assume that the matrix $A$ is non-singular (i.e., the inverse $A^{-1}$ exists), at least in some region of the $(x,y)$-plane. Hence, we need only consider systems of the form
$$\mathbf{u}_x + D\mathbf{u}_y = \mathbf{d}.$$
We also limit our attention to totally hyperbolic systems, i.e. systems where the matrix $D$ has $n$ distinct real eigenvalues (or at least there is some region of the plane where this holds).

$D$ has the $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, where $\det(\lambda_i I - D) = 0$ ($i = 1, \ldots, n$), with $\lambda_i \neq \lambda_j$ ($i \neq j$), and the $n$ corresponding eigenvectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$ with
$$D\mathbf{e}_i = \lambda_i\mathbf{e}_i.$$
The matrix $P = [\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n]$ diagonalises $D$ via $P^{-1}DP = \Lambda$,
$$\Lambda = \begin{pmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & & \vdots\\ \vdots & & \ddots & 0\\ 0 & \cdots & 0 & \lambda_n\end{pmatrix}.$$
We now put $\mathbf{u} = P\mathbf{v}$; then
$$P\mathbf{v}_x + P_x\mathbf{v} + DP\mathbf{v}_y + DP_y\mathbf{v} = \mathbf{d},$$
and
$$P^{-1}P\mathbf{v}_x + P^{-1}P_x\mathbf{v} + P^{-1}DP\mathbf{v}_y + P^{-1}DP_y\mathbf{v} = P^{-1}\mathbf{d},$$
which is of the form
$$\mathbf{v}_x + \Lambda\mathbf{v}_y = \mathbf{q}, \quad \text{where} \quad \mathbf{q} = P^{-1}\mathbf{d} - P^{-1}P_x\mathbf{v} - P^{-1}DP_y\mathbf{v}.$$
The system is now of the form
$$v^{(i)}_x + \lambda_i v^{(i)}_y = q_i \quad (i = 1, \ldots, n),$$
where $q_i$ can involve $\{v^{(1)}, v^{(2)}, \ldots, v^{(n)}\}$, and with $n$ characteristics given by
$$\frac{\mathrm{d}y}{\mathrm{d}x} = \lambda_i.$$
This is the canonical form of the equations.

Example 1: Consider the linear system
$$u^{(1)}_x + 4u^{(2)}_y = 0, \quad u^{(2)}_x + 9u^{(1)}_y = 0, \quad \text{with initial conditions } \mathbf{u} = [2x, 3x]^T \text{ on } y = 0.$$
Here, $\mathbf{u}_x + D\mathbf{u}_y = 0$ with
$$D = \begin{pmatrix}0 & 4\\ 9 & 0\end{pmatrix}.$$
Eigenvalues: $\det(D - \lambda I) = 0 \implies \lambda^2 - 36 = 0 \implies \lambda = \pm 6$.
Eigenvectors:
$$\begin{pmatrix}-6 & 4\\ 9 & -6\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix} \implies \begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}2\\ 3\end{pmatrix} \text{ for } \lambda = 6; \qquad \begin{pmatrix}6 & 4\\ 9 & 6\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix} \implies \begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}2\\ -3\end{pmatrix} \text{ for } \lambda = -6.$$
Then,
$$P = \begin{pmatrix}2 & 2\\ 3 & -3\end{pmatrix}, \quad P^{-1} = \frac{1}{12}\begin{pmatrix}3 & 2\\ 3 & -2\end{pmatrix} \quad \text{and} \quad P^{-1}DP = \begin{pmatrix}6 & 0\\ 0 & -6\end{pmatrix}.$$
So we put
$$\mathbf{u} = \begin{pmatrix}2 & 2\\ 3 & -3\end{pmatrix}\mathbf{v} \quad \text{and} \quad \mathbf{v}_x + \begin{pmatrix}6 & 0\\ 0 & -6\end{pmatrix}\mathbf{v}_y = 0,$$
which has general solution
$$v^{(1)} = f(6x - y) \quad \text{and} \quad v^{(2)} = g(6x + y),$$
i.e.
$$u^{(1)} = 2v^{(1)} + 2v^{(2)} \quad \text{and} \quad u^{(2)} = 3v^{(1)} - 3v^{(2)}.$$
The initial conditions give
$$2x = 2f(6x) + 2g(6x), \quad 3x = 3f(6x) - 3g(6x),$$
so $f(x) = x/6$ and $g(x) = 0$; then
$$u^{(1)} = \frac{1}{3}(6x - y), \quad u^{(2)} = \frac{1}{2}(6x - y).$$
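A NumPy sketch (my addition, not part of the notes) reproduces the diagonalisation used in Example 1; note that numpy.linalg.eig returns normalised eigenvectors, so P may differ from the hand-computed one by column scalings and ordering, which does not affect the diagonal form:

```python
import numpy as np

# System u_x + D u_y = 0 from Example 1, with D = [[0, 4], [9, 0]].
D = np.array([[0.0, 4.0],
              [9.0, 0.0]])
lam, P = np.linalg.eig(D)
print(lam)                          # eigenvalues +6 and -6
print(np.linalg.inv(P) @ D @ P)     # diagonal matrix with +-6 on the diagonal
```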
Example 2: Reduce the linear system
$$\mathbf{u}_x + \begin{pmatrix}4y - x & 2x - 2y\\ 2y - 2x & 4x - y\end{pmatrix}\mathbf{u}_y = 0$$
to canonical form in the region of $(x,y)$-space where it is totally hyperbolic.
Eigenvalues:
$$\det\begin{pmatrix}4y - x - \lambda & 2x - 2y\\ 2y - 2x & 4x - y - \lambda\end{pmatrix} = 0 \implies \lambda \in \{3x, 3y\}.$$
The system is totally hyperbolic everywhere except where $x = y$.
Eigenvectors:
$$\lambda_1 = 3x \implies \mathbf{e}_1 = [1, 2]^T, \quad \lambda_2 = 3y \implies \mathbf{e}_2 = [2, 1]^T.$$
So,
$$P = \begin{pmatrix}1 & 2\\ 2 & 1\end{pmatrix}, \quad P^{-1} = \frac{1}{3}\begin{pmatrix}-1 & 2\\ 2 & -1\end{pmatrix} \quad \text{and} \quad P^{-1}DP = \begin{pmatrix}3x & 0\\ 0 & 3y\end{pmatrix}.$$
Then, with
$$\mathbf{u} = \begin{pmatrix}1 & 2\\ 2 & 1\end{pmatrix}\mathbf{v}$$
we obtain
$$\mathbf{v}_x + \begin{pmatrix}3x & 0\\ 0 & 3y\end{pmatrix}\mathbf{v}_y = 0.$$

2.4.2 Quasilinear Equations
We consider systems of $n$ equations, involving $n$ functions $u^{(i)}(x,y)$ ($i = 1, \ldots, n$), of the form
$$\mathbf{u}_x + D\mathbf{u}_y = \mathbf{d},$$
where $D$ as well as $\mathbf{d}$ may now depend on $\mathbf{u}$. (We have already shown how to reduce a more general system $A\mathbf{u}_x + B\mathbf{u}_y = \mathbf{c}$ to that simpler form.) Again, we limit our attention to totally hyperbolic systems; then
$$\Lambda = P^{-1}DP \iff D = P\Lambda P^{-1},$$
using the same definitions of $P$, $P^{-1}$ and the diagonal matrix $\Lambda$ as for the linear and semilinear cases. So, we can transform the system into its normal form as
$$P^{-1}\mathbf{u}_x + \Lambda P^{-1}\mathbf{u}_y = P^{-1}\mathbf{d},$$
such that it can be written in component form as
$$\sum_{j=1}^{n}P^{-1}_{ij}\left(\frac{\partial}{\partial x}u^{(j)} + \lambda_i\frac{\partial}{\partial y}u^{(j)}\right) = \sum_{j=1}^{n}P^{-1}_{ij}d_j \quad (i = 1, \ldots, n),$$
where $\lambda_i$ is the $i$th eigenvalue of the matrix $D$ and wherein the $i$th equation involves differentiation only in a single direction, the direction $\mathrm{d}y/\mathrm{d}x = \lambda_i$. We define the $i$th characteristic, with curvilinear coordinate $s_i$, as the curve in the $(x,y)$-plane along which
$$\frac{\mathrm{d}x}{\mathrm{d}s_i} = 1, \quad \frac{\mathrm{d}y}{\mathrm{d}s_i} = \lambda_i \quad \text{or equivalently} \quad \frac{\mathrm{d}y}{\mathrm{d}x} = \lambda_i.$$
Hence, the directional derivative parallel to the characteristic is
$$\frac{\mathrm{d}}{\mathrm{d}s_i}u^{(j)} = \frac{\partial}{\partial x}u^{(j)} + \lambda_i\frac{\partial}{\partial y}u^{(j)},$$
and the system in normal form reduces to $n$ ODEs involving different components of $\mathbf{u}$,
$$\sum_{j=1}^{n}P^{-1}_{ij}\frac{\mathrm{d}}{\mathrm{d}s_i}u^{(j)} = \sum_{j=1}^{n}P^{-1}_{ij}d_j \quad (i = 1, \ldots, n).$$

Example: Unsteady, one-dimensional motion of an inviscid compressible adiabatic gas. Consider the equation of motion (Euler equation)
$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = -\frac{1}{\rho}\frac{\partial P}{\partial x},$$
and the continuity equation
$$\frac{\partial\rho}{\partial t} + \rho\frac{\partial u}{\partial x} + u\frac{\partial\rho}{\partial x} = 0.$$
If the entropy is the same everywhere in the motion then $P\rho^{-\gamma}$ = constant, and the motion equation becomes
$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + \frac{c^2}{\rho}\frac{\partial\rho}{\partial x} = 0,$$
where $c^2 = \mathrm{d}P/\mathrm{d}\rho = \gamma P/\rho$ is the sound speed. We have then a system of two first order quasilinear PDEs; we can write these as
$$\frac{\partial\mathbf{w}}{\partial t} + D\frac{\partial\mathbf{w}}{\partial x} = 0, \quad \text{with} \quad \mathbf{w} = \begin{pmatrix}u\\ \rho\end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix}u & c^2/\rho\\ \rho & u\end{pmatrix}.$$
The two characteristics of this hyperbolic system are given by $\mathrm{d}x/\mathrm{d}t = \lambda$ where $\lambda$ are the eigenvalues of $D$:
$$\det(D - \lambda I) = \begin{vmatrix}u - \lambda & c^2/\rho\\ \rho & u - \lambda\end{vmatrix} = 0 \implies (u - \lambda)^2 = c^2 \quad \text{and} \quad \lambda = u \pm c.$$
The eigenvectors are $[c, -\rho]^T$ for $\lambda_-$ and $[c, \rho]^T$ for $\lambda_+$, such that the usual matrices are
$$T = \begin{pmatrix}c & c\\ -\rho & \rho\end{pmatrix}, \quad T^{-1} = \frac{1}{2c\rho}\begin{pmatrix}\rho & -c\\ \rho & c\end{pmatrix}, \quad \Lambda = T^{-1}DT = \begin{pmatrix}u - c & 0\\ 0 & u + c\end{pmatrix}.$$
Put $\beta$ and $\alpha$ for the curvilinear coordinates along the characteristics $\mathrm{d}x/\mathrm{d}t = u - c$ and $\mathrm{d}x/\mathrm{d}t = u + c$ respectively; then the system transforms to the canonical form
$$\frac{\mathrm{d}t}{\mathrm{d}\beta} = \frac{\mathrm{d}t}{\mathrm{d}\alpha} = 1, \quad \frac{\mathrm{d}x}{\mathrm{d}\beta} = u - c, \quad \frac{\mathrm{d}x}{\mathrm{d}\alpha} = u + c, \quad \rho\frac{\mathrm{d}u}{\mathrm{d}\beta} - c\frac{\mathrm{d}\rho}{\mathrm{d}\beta} = 0 \quad \text{and} \quad \rho\frac{\mathrm{d}u}{\mathrm{d}\alpha} + c\frac{\mathrm{d}\rho}{\mathrm{d}\alpha} = 0.$$
2.4
Systems
of
Equations
Chapter 3
Second Order Linear and Semilinear Equations in Two Variables

Contents
3.1 Classification and Standard Form Reduction . . . . . . . . . . . . 37
3.2 Extensions of the Theory . . . . . . . . . . . . . . . . . . . . . . 44

3.1 Classification and Standard Form Reduction
Consider a general second order linear equation in two independent variables
  a(x,y) ∂²u/∂x² + 2b(x,y) ∂²u/∂x∂y + c(x,y) ∂²u/∂y² + d(x,y) ∂u/∂x + e(x,y) ∂u/∂y + f(x,y) u = g(x,y);
in the case of a semilinear equation, the coefficients d, e, f and g could be functions of ∂u/∂x, ∂u/∂y and u as well.
Recall that, for a first order linear or semilinear equation, a ∂u/∂x + b ∂u/∂y = c, we could define new independent variables ξ(x,y) and η(x,y), with J = ∂(ξ,η)/∂(x,y) ∉ {0, ∞}, to reduce the equation to the simpler form ∂u/∂ξ = κ(ξ,η).
For the second order equation, can we also transform the variables from (x,y) to (ξ,η) to put the equation into a simpler form?
So, consider the coordinate transform (x,y) → (ξ,η), where ξ and η are such that the Jacobian
  J = ∂(ξ,η)/∂(x,y) = | ∂ξ/∂x  ∂ξ/∂y ; ∂η/∂x  ∂η/∂y | ∉ {0, ∞}.
Then, by the inverse function theorem, there is an open neighbourhood of (x,y) and another neighbourhood of (ξ,η) such that the transformation is invertible and one-to-one on these neighbourhoods. As before we compute the chain rule derivatives
  ∂u/∂x = ∂u/∂ξ ∂ξ/∂x + ∂u/∂η ∂η/∂x,   ∂u/∂y = ∂u/∂ξ ∂ξ/∂y + ∂u/∂η ∂η/∂y,
  ∂²u/∂x² = ∂²u/∂ξ² (∂ξ/∂x)² + 2 ∂²u/∂ξ∂η ∂ξ/∂x ∂η/∂x + ∂²u/∂η² (∂η/∂x)² + ∂u/∂ξ ∂²ξ/∂x² + ∂u/∂η ∂²η/∂x²,
  ∂²u/∂y² = ∂²u/∂ξ² (∂ξ/∂y)² + 2 ∂²u/∂ξ∂η ∂ξ/∂y ∂η/∂y + ∂²u/∂η² (∂η/∂y)² + ∂u/∂ξ ∂²ξ/∂y² + ∂u/∂η ∂²η/∂y²,
  ∂²u/∂x∂y = ∂²u/∂ξ² ∂ξ/∂x ∂ξ/∂y + ∂²u/∂ξ∂η (∂ξ/∂x ∂η/∂y + ∂ξ/∂y ∂η/∂x) + ∂²u/∂η² ∂η/∂x ∂η/∂y + ∂u/∂ξ ∂²ξ/∂x∂y + ∂u/∂η ∂²η/∂x∂y.
The equation becomes
  A ∂²u/∂ξ² + 2B ∂²u/∂ξ∂η + C ∂²u/∂η² + F(∂u/∂ξ, ∂u/∂η, u, ξ, η) = 0,   (3.1)
where
  A = a (∂ξ/∂x)² + 2b ∂ξ/∂x ∂ξ/∂y + c (∂ξ/∂y)²,
  B = a ∂ξ/∂x ∂η/∂x + b (∂ξ/∂x ∂η/∂y + ∂ξ/∂y ∂η/∂x) + c ∂ξ/∂y ∂η/∂y,
  C = a (∂η/∂x)² + 2b ∂η/∂x ∂η/∂y + c (∂η/∂y)².
We write explicitly only the principal part of the PDE, involving the highest-order derivatives of u (terms of second order).
It is easy to verify that
  B² − AC = (b² − ac) (∂ξ/∂x ∂η/∂y − ∂ξ/∂y ∂η/∂x)²,
where (∂ξ/∂x ∂η/∂y − ∂ξ/∂y ∂η/∂x)² is just the Jacobian squared. So, provided J ≠ 0, we see that the sign of the discriminant b² − ac is invariant under coordinate transformations. We can use this invariance property to classify the equation.
Equation (3.1) can be simplified if we can choose ξ and η so that some of the coefficients A, B or C are zero. Let us define
  D_ξ = (∂ξ/∂x)/(∂ξ/∂y)  and  D_η = (∂η/∂x)/(∂η/∂y);
then we can write
  A = (a D_ξ² + 2b D_ξ + c) (∂ξ/∂y)²,
  B = (a D_ξ D_η + b (D_ξ + D_η) + c) ∂ξ/∂y ∂η/∂y,
  C = (a D_η² + 2b D_η + c) (∂η/∂y)².
Now consider the quadratic equation
  a D² + 2b D + c = 0,   (3.2)
whose solution is given by
  D = (−b ± √(b² − ac)) / a.
If the discriminant b² − ac ≠ 0, equation (3.2) has two distinct roots; so, we can make both coefficients A and C zero if we arbitrarily take the root with the negative sign for D_ξ and the one with the positive sign for D_η,
  D_ξ = (∂ξ/∂x)/(∂ξ/∂y) = (−b − √(b² − ac))/a ⇒ A = 0,   (3.3)
  D_η = (∂η/∂x)/(∂η/∂y) = (−b + √(b² − ac))/a ⇒ C = 0.
Then, using D_ξ D_η = c/a and D_ξ + D_η = −2b/a, we have
  B = 2 ((ac − b²)/a) ∂ξ/∂y ∂η/∂y ⇒ B ≠ 0.
Furthermore, if the discriminant b² − ac > 0, then D_ξ and D_η, as well as ξ and η, are real. So, we can define two families of one-parameter characteristics of the PDE as the curves described by the equation ξ(x,y) = constant and the equation η(x,y) = constant. Differentiate ξ along the characteristic curves given by ξ = constant,
  dξ = ∂ξ/∂x dx + ∂ξ/∂y dy = 0,
and make use of (3.3) to find that these characteristics satisfy
  dy/dx = (b + √(b² − ac))/a.   (3.4)
Similarly we find that the characteristic curves described by η(x,y) = constant satisfy
  dy/dx = (b − √(b² − ac))/a.   (3.5)
If the discriminant b² − ac = 0, equation (3.2) has one unique root and, if we take this root for D_ξ say, we can make the coefficient A zero,
  D_ξ = (∂ξ/∂x)/(∂ξ/∂y) = −b/a ⇒ A = 0.
To get η independent of ξ, D_η has to be different from D_ξ, so C ≠ 0 in this case, but B is now given by
  B = (−a D_η b/a + b (−b/a + D_η) + c) ∂ξ/∂y ∂η/∂y = (c − b²/a) ∂ξ/∂y ∂η/∂y,
so that B = 0. When b² − ac = 0 the PDE has only one family of characteristic curves, for ξ(x,y) = constant, whose equation is now
  dy/dx = b/a.   (3.6)
Thus we have to consider three different cases.
1. If b² > ac we can apply the change of variables (x,y) → (ξ,η) to transform the original PDE to
  ∂²u/∂ξ∂η + (lower order terms) = 0.
In this case the equation is said to be hyperbolic and has two families of characteristics, given by equation (3.4) and equation (3.5).
2. If b² = ac, a suitable choice for ξ still simplifies the PDE, but now we can choose η arbitrarily, provided η and ξ are independent, and the equation reduces to the form
  ∂²u/∂η² + (lower order terms) = 0.
The equation is said to be parabolic and has only one family of characteristics, given by equation (3.6).
3. If b² < ac we can again apply the change of variables (x,y) → (ξ,η) to simplify the equation, but now these functions will be complex conjugates. To keep the transformation real, we apply a further change of variables (ξ,η) → (α,β) via
  α = ξ + η = 2 Re(ξ),  β = i(ξ − η) = −2 Im(ξ),
i.e., via the chain rule,
  ∂²u/∂ξ∂η = ∂²u/∂α² + ∂²u/∂β²;
so the equation can be reduced to
  ∂²u/∂α² + ∂²u/∂β² + (lower order terms) = 0.
In this case the equation is said to be elliptic and has no real characteristics.
The above forms are called the canonical (or standard) forms of the second order linear or semilinear equations (in two variables).
Summary:
  b² − ac > 0: canonical form ∂²u/∂ξ∂η + … = 0, type hyperbolic;
  b² − ac = 0: canonical form ∂²u/∂η² + … = 0, type parabolic;
  b² − ac < 0: canonical form ∂²u/∂α² + ∂²u/∂β² + … = 0, type elliptic.
E.g.
• The wave equation,
  ∂²u/∂t² − c_w² ∂²u/∂x² = 0,
is hyperbolic (b² − ac = c_w² > 0) and the two families of characteristics are described by dx/dt = ±c_w, i.e. ξ = x − c_w t and η = x + c_w t. So, the equation transforms into its canonical form ∂²u/∂ξ∂η = 0, whose solutions are waves travelling in opposite directions at speed c_w.
• The diffusion (heat conduction) equation,
  ∂²u/∂x² − (1/κ) ∂u/∂t = 0,
is parabolic (b² − ac = 0). The characteristics are given by dt/dx = 0, i.e. ξ = t = constant.
• Laplace's equation,
  ∂²u/∂x² + ∂²u/∂y² = 0,
is elliptic (b² − ac = −1 < 0).
The type of a PDE is a local property. So, an equation can change its form in different regions of the plane or as a parameter is changed. E.g. Tricomi's equation
  y ∂²u/∂x² + ∂²u/∂y² = 0   (b² − ac = 0 − y = −y)
is elliptic in y > 0, parabolic for y = 0 and hyperbolic in y < 0; and, for small disturbances in compressible (inviscid) flow,
  (1/(1 − m²)) ∂²u/∂x² + ∂²u/∂y² = 0   (b² − ac = −1/(1 − m²))
is elliptic if m < 1 and hyperbolic if m > 1.
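The classification above is purely a test on the sign of the discriminant, so it is easy to automate. The small Python sketch below (not part of the original notes) applies it point by point to Tricomi's equation, whose type changes with the sign of y.

def classify(a, b, c):
    # type of a u_xx + 2 b u_xy + c u_yy + ... = 0 at a point, from b^2 - ac
    disc = b*b - a*c
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

for y in (-1.0, 0.0, 2.0):
    print(y, classify(a=y, b=0.0, c=1.0))
# -> -1.0 hyperbolic, 0.0 parabolic, 2.0 elliptic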
Example 1: Reduce to the canonical form
  y² ∂²u/∂x² − 2xy ∂²u/∂x∂y + x² ∂²u/∂y² = (1/xy) (y³ ∂u/∂x + x³ ∂u/∂y).
Here a = y², b = −xy, c = x², so b² − ac = (xy)² − x²y² = 0 ⇒ parabolic equation.
On ξ = constant,
  dy/dx = (b + √(b² − ac))/a = b/a = −x/y ⇒ ξ = x² + y².
We can choose η arbitrarily provided ξ and η are independent. We choose η = y. (Exercise: try it with η = x.) Then
  ∂u/∂x = 2x ∂u/∂ξ,  ∂u/∂y = 2y ∂u/∂ξ + ∂u/∂η,
  ∂²u/∂x² = 2 ∂u/∂ξ + 4x² ∂²u/∂ξ²,
  ∂²u/∂x∂y = 4xy ∂²u/∂ξ² + 2x ∂²u/∂ξ∂η,
  ∂²u/∂y² = 2 ∂u/∂ξ + 4y² ∂²u/∂ξ² + 4y ∂²u/∂ξ∂η + ∂²u/∂η²;
and the equation becomes
  2y² ∂u/∂ξ + 4x²y² ∂²u/∂ξ² − 8x²y² ∂²u/∂ξ² − 4x²y ∂²u/∂ξ∂η + 2x² ∂u/∂ξ + 4x²y² ∂²u/∂ξ²
    + 4x²y ∂²u/∂ξ∂η + x² ∂²u/∂η² = 2xy² ∂u/∂ξ ... wait, more simply, = (1/xy)(2xy³ ∂u/∂ξ + 2x³y ∂u/∂ξ + x³ ∂u/∂η),
i.e.
  ∂²u/∂η² − (1/η) ∂u/∂η = 0   (canonical form).
This has solution u = f(ξ) + η² g(ξ), where f and g are arbitrary functions (via the integrating factor method), i.e.
  u = f(x² + y²) + y² g(x² + y²).
We need to impose two conditions on u or its partial derivatives to determine the functions f and g, i.e. to find a particular solution.
Example 2: Reduce to canonical form and then solve
  ∂²u/∂x² + ∂²u/∂x∂y − 2 ∂²u/∂y² + 1 = 0  in 0 ≤ x ≤ 1, y > 0,  with u = ∂u/∂y = x on y = 0.
Here a = 1, b = 1/2, c = −2, so b² − ac = 9/4 (> 0) ⇒ the equation is hyperbolic.
Characteristics:
  dy/dx = 1/2 ± 3/2 = 2 or −1.
Two methods of solving:
1. directly: dy/dx = 2 ⇒ x − y/2 = constant, and dy/dx = −1 ⇒ x + y = constant.
2. simultaneous equations: solve ∂ξ/∂x = −2 ∂ξ/∂y and ∂η/∂x = ∂η/∂y, which gives
  ξ = x − y/2,  η = x + y  (equivalently x = (2ξ + η)/3, y = 2(η − ξ)/3).
Then
  ∂u/∂x = ∂u/∂ξ + ∂u/∂η,  ∂u/∂y = −(1/2) ∂u/∂ξ + ∂u/∂η,
  ∂²u/∂x² = ∂²u/∂ξ² + 2 ∂²u/∂ξ∂η + ∂²u/∂η²,
  ∂²u/∂x∂y = −(1/2) ∂²u/∂ξ² + (1/2) ∂²u/∂ξ∂η + ∂²u/∂η²,
  ∂²u/∂y² = (1/4) ∂²u/∂ξ² − ∂²u/∂ξ∂η + ∂²u/∂η²;
and the equation becomes
  (9/2) ∂²u/∂ξ∂η + 1 = 0,   the canonical form.
So ∂²u/∂ξ∂η = −2/9 and the general solution is given by
  u(ξ, η) = −(2/9) ξη + f(ξ) + g(η),
where f and g are arbitrary functions; now, we need to apply the two conditions to determine these functions.
When y = 0, ξ = η = x, so the condition u = x at y = 0 gives
  u(ξ = x, η = x) = −(2/9) x² + f(x) + g(x) = x ⇒ f(x) + g(x) = x + (2/9) x².   (3.7)
Also, using the relation
  ∂u/∂y = −(1/2) ∂u/∂ξ + ∂u/∂η = (1/9) η − (1/2) f'(ξ) − (2/9) ξ + g'(η),
the condition ∂u/∂y = x at y = 0 gives
  (1/9) x − (2/9) x − (1/2) f'(x) + g'(x) = x ⇒ g'(x) − (1/2) f'(x) = (10/9) x,
and after integration,
  g(x) − (1/2) f(x) = (5/9) x² + k,   (3.8)
where k is a constant. Solving equation (3.7) and equation (3.8) simultaneously gives
  f(x) = (2/3) x − (2/9) x² − (2/3) k  and  g(x) = (1/3) x + (4/9) x² + (2/3) k,
or, in terms of ξ and η,
  f(ξ) = (2/3) ξ − (2/9) ξ² − (2/3) k  and  g(η) = (1/3) η + (4/9) η² + (2/3) k.
So, the full solution is
  u(ξ, η) = −(2/9) ξη + (2/3) ξ − (2/9) ξ² + (1/3) η + (4/9) η²
          = (1/3)(2ξ + η) + (2/9)(η − ξ)(2η + ξ),
and hence
  u(x, y) = x + xy + y²/2   (check this solution).
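The "check this solution" step can be done by hand or symbolically. The sketch below (not in the original notes; it assumes sympy is available) verifies the PDE and both boundary conditions for u = x + xy + y²/2.

import sympy as sp

x, y = sp.symbols('x y')
u = x + x*y + y**2/2
pde = sp.diff(u, x, 2) + sp.diff(u, x, y) - 2*sp.diff(u, y, 2) + 1
print(sp.simplify(pde))                           # -> 0
print(u.subs(y, 0), sp.diff(u, y).subs(y, 0))     # -> x, x  (the Cauchy data on y = 0)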
Example 3: Reduce to canonical form
  ∂²u/∂x² + ∂²u/∂x∂y + ∂²u/∂y² = 0.
Here a = 1, b = 1/2, c = 1, so b² − ac = −3/4 (< 0) ⇒ the equation is elliptic.
Find ξ and η via
  ξ = constant on dy/dx = (1 + i√3)/2 ⇒ ξ = y − (1/2)(1 + i√3) x,
  η = constant on dy/dx = (1 − i√3)/2 ⇒ η = y − (1/2)(1 − i√3) x.
To obtain a real transformation, put
  α = ξ + η = 2y − x  and  β = i(ξ − η) = x√3.
So,
  ∂u/∂x = −∂u/∂α + √3 ∂u/∂β,  ∂u/∂y = 2 ∂u/∂α,
  ∂²u/∂x² = ∂²u/∂α² − 2√3 ∂²u/∂α∂β + 3 ∂²u/∂β²,
  ∂²u/∂x∂y = −2 ∂²u/∂α² + 2√3 ∂²u/∂α∂β,
  ∂²u/∂y² = 4 ∂²u/∂α²;
and the equation transforms to
  ∂²u/∂α² − 2√3 ∂²u/∂α∂β + 3 ∂²u/∂β² − 2 ∂²u/∂α² + 2√3 ∂²u/∂α∂β + 4 ∂²u/∂α² = 0,
i.e.
  ∂²u/∂α² + ∂²u/∂β² = 0,   the canonical form.
3.2 Extensions of the Theory

3.2.1 Linear second order equations in n variables
There are two obvious ways in which we might wish to extend the theory.
To consider quasilinear second order equations (still in two independent variables): such equations can be classified in an invariant way according to rules analogous to those developed above for linear equations. However, since a, b and c are now functions of ∂u/∂x, ∂u/∂y and u, the type turns out to depend in general on the particular solution sought, and not just on the values of the independent variables.
To consider linear second order equations in more than two independent variables: in such cases it is not usually possible to reduce the equation to a simple canonical form. However, for the case of an equation with constant coefficients such a reduction is possible. Although this seems a rather restrictive class of equations, we can regard the classification obtained as a local one, at a particular point.
Consider the linear PDE
  Σ_{i,j=1}^n a_ij ∂²u/∂x_i∂x_j + Σ_{i=1}^n b_i ∂u/∂x_i + c u = d.
Without loss of generality we can take the matrix A = (a_ij), i, j = 1 … n, to be symmetric (assuming derivatives commute). For any real symmetric matrix A, there is an associated orthogonal matrix P such that P^T A P = Λ, where Λ is a diagonal matrix whose elements are the eigenvalues λ_i of A, and the columns of P are the linearly independent eigenvectors of A, e_i = (e_1i, e_2i, …, e_ni). So P = (e_ij) and Λ = (λ_i δ_ij), i, j = 1, …, n.
Now consider the transformation x = Pξ, i.e. ξ = P⁻¹x = P^T x (P orthogonal), where x = (x_1, x_2, …, x_n) and ξ = (ξ_1, ξ_2, …, ξ_n); this can be written as
  x_i = Σ_{j=1}^n e_ij ξ_j  and  ξ_j = Σ_{i=1}^n e_ij x_i.
So,
  ∂u/∂x_i = Σ_{k=1}^n e_ik ∂u/∂ξ_k  and  ∂²u/∂x_i∂x_j = Σ_{k,r=1}^n e_ik e_jr ∂²u/∂ξ_k∂ξ_r.
The original equation becomes
  Σ_{i,j=1}^n Σ_{k,r=1}^n a_ij e_ik e_jr ∂²u/∂ξ_k∂ξ_r + (lower order terms) = 0.
But, by definition of the eigenvectors of A,
  Σ_{i,j=1}^n e_ik a_ij e_jr = e_k^T A e_r = λ_r δ_rk.
Then the equation simplifies to
  Σ_{k=1}^n λ_k ∂²u/∂ξ_k² + (lower order terms) = 0.
We are now in a position to classify the equation.
• The equation is elliptic if and only if all the λ_k are non-zero and have the same sign. E.g. Laplace's equation
  ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = 0.
• When all the λ_k are non-zero and have the same sign except for precisely one of them, the equation is hyperbolic. E.g. the wave equation
  ∂²u/∂t² − c² (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²) = 0.
• When all the λ_k are non-zero and there are at least two of each sign, the equation is ultra-hyperbolic. E.g.
  ∂²u/∂x_1² + ∂²u/∂x_2² = ∂²u/∂x_3² + ∂²u/∂x_4²;
such equations do not often arise in mathematical physics.
• If any of the λ_k vanish the equation is parabolic. E.g. the heat equation
  ∂u/∂t − κ (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²) = 0.
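This eigenvalue test translates directly into a few lines of code. The following sketch (not part of the notes; numpy is assumed) classifies a constant-coefficient second order PDE in n variables from the signs of the eigenvalues of its symmetric coefficient matrix A = (a_ij), exactly as in the list above.

import numpy as np

def classify(A, tol=1e-12):
    lam = np.linalg.eigvalsh(np.asarray(A, dtype=float))   # A assumed symmetric
    pos = np.sum(lam > tol)
    neg = np.sum(lam < -tol)
    zero = np.sum(abs(lam) <= tol)
    if zero > 0:
        return "parabolic"
    if pos == len(lam) or neg == len(lam):
        return "elliptic"
    if min(pos, neg) == 1:
        return "hyperbolic"
    return "ultra-hyperbolic"

print(classify(np.eye(3)))                          # Laplace's equation: elliptic
print(classify(np.diag([1.0, -1.0, -1.0, -1.0])))   # wave equation: hyperbolic
print(classify(np.diag([1.0, 1.0, -1.0, -1.0])))    # ultra-hyperbolic
print(classify(np.diag([0.0, 1.0, 1.0])))           # heat equation: parabolic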
3.2.2 The Cauchy Problem
Consider the problem of finding the solution of the equation
  a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² + F(∂u/∂x, ∂u/∂y, u, x, y) = 0
which takes prescribed values on a given curve Γ, which we assume is represented parametrically in the form
  x = φ(σ),  y = ψ(σ),  for σ ∈ I,
where I is an interval, σ_0 ≤ σ ≤ σ_1 say. (We usually consider piecewise smooth curves.)
We specify Cauchy data on Γ: u, ∂u/∂x and ∂u/∂y are given for all σ ∈ I, but note that we cannot specify all these quantities arbitrarily. To show this, suppose u is given on Γ by u = f(σ); then the derivative tangent to Γ, du/dσ, can be calculated from du/dσ = f'(σ), but also
  du/dσ = ∂u/∂x dx/dσ + ∂u/∂y dy/dσ = ∂u/∂x φ'(σ) + ∂u/∂y ψ'(σ) = f'(σ);
so, on Γ, the partial derivatives ∂u/∂x, ∂u/∂y and u are connected by the above relation. Only derivatives normal to Γ and u can be prescribed independently.
So, the Cauchy problem consists of finding the solution u(x, y) which satisfies the conditions
  u(φ(σ), ψ(σ)) = f(σ)  and  ∂u/∂n (φ(σ), ψ(σ)) = g(σ),
where σ ∈ I and ∂/∂n = n·∇ denotes a normal derivative to Γ (e.g. n = [ψ', −φ']^T); the partial derivatives ∂u/∂x and ∂u/∂y are uniquely determined on Γ by these conditions.
Set p = ∂u/∂x and q = ∂u/∂y, so that on Γ, p and q are known; then
  dp/dσ = ∂²u/∂x² dx/dσ + ∂²u/∂x∂y dy/dσ  and  dq/dσ = ∂²u/∂x∂y dx/dσ + ∂²u/∂y² dy/dσ.
Combining these two equations with the original PDE gives the following system of equations for ∂²u/∂x², ∂²u/∂x∂y and ∂²u/∂y² on Γ (in matrix form),
  M [∂²u/∂x²; ∂²u/∂x∂y; ∂²u/∂y²] = [−F; dp/dσ; dq/dσ],  where  M = [a   2b   c; dx/dσ   dy/dσ   0; 0   dx/dσ   dy/dσ].
So, if det(M) ≠ 0 we can solve the equations uniquely and find ∂²u/∂x², ∂²u/∂x∂y and ∂²u/∂y² on Γ. By successive differentiations of these equations it can be shown that the derivatives of u of all orders are uniquely determined at each point on Γ for which det(M) ≠ 0. The values of u at neighbouring points can be obtained using Taylor's theorem. So, we conclude that the equation can be solved uniquely in the vicinity of Γ provided det(M) ≠ 0 (the Cauchy-Kowaleski theorem provides a majorant series ensuring convergence of the Taylor expansion).
Consider what happens when det(M) = 0, so that M is singular and we cannot solve uniquely for the second order derivatives on Γ. In this case the condition det(M) = 0 gives
  a (dy/dσ)² − 2b (dx/dσ)(dy/dσ) + c (dx/dσ)² = 0.
But dy/dx = (dy/dσ)/(dx/dσ) and so, dividing through by (dx/dσ)², dy/dx satisfies the equation
  a (dy/dx)² − 2b dy/dx + c = 0,  i.e.  dy/dx = (b ± √(b² − ac))/a.
The exceptional curves Γ on which, if u and its normal derivative are prescribed, no unique solution can be found satisfying these conditions, are the characteristic curves.
Chapter 4
Elliptic Equations

Contents
4.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.2 Properties of Laplace's and Poisson's Equations . . . . . . . . . . 50
4.3 Solving Poisson Equation Using Green's Functions . . . . . . . . . 54
4.4 Extensions of Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4.1 Definitions
Elliptic equations are typically associated with steady-state behaviour. The archetypal elliptic equation is Laplace's equation
  ∇²u = 0,  e.g.  ∂²u/∂x² + ∂²u/∂y² = 0 in 2-D,
and describes
• steady, irrotational flows,
• the electrostatic potential in the absence of charge,
• the equilibrium temperature distribution in a medium.
Because of their physical origin, elliptic equations typically arise as boundary value problems (BVPs). Solving a BVP for the general elliptic equation
  L[u] = Σ_{i,j=1}^n a_ij ∂²u/∂x_i∂x_j + Σ_{i=1}^n b_i ∂u/∂x_i + c u = F
(recall: all the eigenvalues of the matrix A = (a_ij), i, j = 1 … n, are non-zero and have the same sign) means finding a solution u in some open region Ω of space, with conditions imposed on ∂Ω (the boundary of Ω) or at infinity. E.g. inviscid flow past a sphere is determined by boundary conditions on the sphere (u·n = 0) and at infinity (u = constant).
There are three types of boundary conditions for well-posed BVPs:
1. Dirichlet condition: u takes prescribed values on the boundary ∂Ω (first BVP).
2. Neumann conditions: the normal derivative, ∂u/∂n = n·∇u, is prescribed on the boundary ∂Ω (second BVP).
In this case we have compatibility conditions (i.e. global constraints). E.g., suppose u satisfies ∇²u = F in Ω and n·∇u = ∂u/∂n = f on ∂Ω. Then,
  ∫_Ω ∇²u dV = ∫_Ω ∇·∇u dV = ∫_∂Ω ∇u·n dS = ∫_∂Ω ∂u/∂n dS   (divergence theorem),
so
  ∫_Ω F dV = ∫_∂Ω f dS
must hold for the problem to be well-defined.
3. Robin conditions: a combination of u and its normal derivative, such as ∂u/∂n + αu, is prescribed on the boundary ∂Ω (third BVP).
Sometimes we may have a mixed problem, in which u is given on part of ∂Ω and ∂u/∂n on the rest of ∂Ω.
If Ω encloses a finite region, we have an interior problem; if, however, Ω is unbounded, we have an exterior problem, and we must impose conditions 'at infinity'.
Note that initial conditions are irrelevant for these BVPs, and that the Cauchy problem for elliptic equations is not always well-posed (even if the Cauchy-Kowaleski theorem states that the solution exists and is unique).
As a general rule, it is hard to deal with elliptic equations since the solution is global, affected by all parts of the domain. (Hyperbolic equations, posed as initial value or Cauchy problems, are more localised.)
From now on, we shall deal mainly with the Helmholtz equation ∇²u + Pu = F, where P and F are functions of x, and particularly with the special case P = 0, Poisson's equation, or Laplace's equation if F = 0 too. This is not too severe a restriction; recall that any linear elliptic equation can be put into the canonical form
  Σ_{k=1}^n ∂²u/∂x_k² + … = 0,
and that the lower order derivatives do not alter the overall properties of the solution.

4.2 Properties of Laplace's and Poisson's Equations
Definition: A continuous function satisfying Laplace's equation in an open region Ω, with continuous first and second order derivatives, is called a harmonic function. Functions u in C²(Ω) with ∇²u ≥ 0 (respectively ∇²u ≤ 0) are called subharmonic (respectively superharmonic).
4.2.1 Mean Value Property
Definition: Let x_0 be a point in Ω and let B_R(x_0) denote the open ball having centre x_0 and radius R. Let Σ_R(x_0) denote the boundary of B_R(x_0) and let A(R) be the surface area of Σ_R(x_0). Then a function u has the mean value property at a point x_0 ∈ Ω if
  u(x_0) = (1/A(R)) ∫_{Σ_R} u(x) dS
for every R > 0 such that B_R(x_0) is contained in Ω. If instead u(x_0) satisfies
  u(x_0) = (1/V(R)) ∫_{B_R} u(x) dV,
where V(R) is the volume of the open ball B_R(x_0), we say that u(x_0) has the second mean value property at a point x_0 ∈ Ω. The two mean value properties are equivalent.
[Figure: ball B_R(x_0) with boundary Σ_R(x_0) contained in Ω.]
Theorem: If u is harmonic in Ω, an open region of R^n, then u has the mean value property on Ω.
Proof: We need to make use of Green's theorem, which says
  ∫_S (v ∂u/∂n − u ∂v/∂n) dS = ∫_V (v ∇²u − u ∇²v) dV.   (4.1)
(Recall: apply the divergence theorem to the function v∇u − u∇v to obtain Green's theorem.)
Since u is harmonic, it follows from equation (4.1), with v = 1, that
  ∫_S ∂u/∂n dS = 0.
Now, take v = 1/r, where r = |x − x_0|, and the domain V to be B_R(x_0) − B_ε(x_0), 0 < ε < R. Then, in R³ − {x_0},
  ∇²v = (1/r²) ∂/∂r (r² ∂/∂r (1/r)) = 0,
so v is harmonic too and equation (4.1) becomes
  ∫_{Σ_R} u ∂v/∂n dS + ∫_{Σ_ε} u ∂v/∂n dS = ∫_{Σ_R} u ∂v/∂r dS − ∫_{Σ_ε} u ∂v/∂r dS = 0,
so that
  ∫_{Σ_ε} u ∂v/∂r dS = ∫_{Σ_R} u ∂v/∂r dS,  i.e.  (1/ε²) ∫_{Σ_ε} u dS = (1/R²) ∫_{Σ_R} u dS.
Since u is continuous, then as ε → 0 the left-hand side converges to 4π u(x_0, y_0, z_0) (with n = 3, say), so
  u(x_0) = (1/A(R)) ∫_{Σ_R} u dS.
Recovering the second mean value property (with n = 3, say) is straightforward:
  (4π/3) R³ u(x_0) = ∫_0^R 4πr² u(x_0) dr = ∫_0^R ( ∫_{Σ_r} u dS ) dr = ∫_{B_R} u dV.
The converse of this theorem holds too, but is harder to prove: if u has the mean value property then u is harmonic.
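The mean value property is easy to test numerically. The sketch below (not from the notes; numpy assumed) averages the harmonic function u = x² − y² over a circle and compares the result with the value at the centre; the two agree for any radius.

import numpy as np

def u(x, y):
    return x**2 - y**2            # harmonic: u_xx + u_yy = 2 - 2 = 0

x0, y0, R = 1.3, -0.4, 0.75
theta = np.linspace(0.0, 2*np.pi, 20000, endpoint=False)
mean_on_circle = u(x0 + R*np.cos(theta), y0 + R*np.sin(theta)).mean()
print(mean_on_circle, u(x0, y0))   # both approximately 1.53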
4.2.2 Maximum-Minimum Principle
One of the most important features of elliptic equations is that it is possible to prove theorems concerning the boundedness of the solutions.
Theorem: Suppose that the subharmonic function u satisfies ∇²u = F in Ω, with F > 0 in Ω. Then u(x, y) attains its maximum on ∂Ω.
Proof: (The theorem is stated in 2-D but holds in higher dimensions.) Suppose, for a contradiction, that u attains its maximum at an interior point (x_0, y_0) of Ω. Then at (x_0, y_0),
  ∂u/∂x = 0,  ∂u/∂y = 0,  ∂²u/∂x² ≤ 0  and  ∂²u/∂y² ≤ 0,
since it is a maximum. So,
  ∂²u/∂x² + ∂²u/∂y² ≤ 0,
which contradicts F > 0 in Ω. Hence u must attain its maximum on ∂Ω, i.e. if u ≤ M on ∂Ω, then u < M in Ω.
Theorem (the weak Maximum-Minimum Principle for Laplace's equation): Suppose that u satisfies ∇²u = 0 in a bounded region Ω; if m ≤ u ≤ M on ∂Ω, then m ≤ u ≤ M in Ω.
Proof: (The theorem is stated in 2-D but holds in higher dimensions.) Consider the function v = u + ε(x² + y²), for any ε > 0. Then ∇²v = 4ε > 0 in Ω (since ∇²(x² + y²) = 4), and, using the previous theorem, v ≤ M + εR² in Ω, where u ≤ M on ∂Ω and R is the radius of the circle containing Ω. As this holds for any ε, let ε → 0 to obtain u ≤ M in Ω; i.e., if u satisfies ∇²u = 0 in Ω, then u cannot exceed M, the maximum value of u on ∂Ω.
Also, if u is a solution of ∇²u = 0, so is −u. Thus, we can apply all of the above to −u to get a minimum principle: if u ≥ m on ∂Ω, then u ≥ m in Ω.
This theorem does not say that a harmonic function cannot also attain m and M inside Ω, though. We shall now progress to the strong Maximum-Minimum Principle.
Theorem: Suppose that u has the mean value property in a bounded region Ω and that u is continuous in Ω̄ = Ω ∪ ∂Ω. If u is not constant in Ω then u attains its maximum value on the boundary ∂Ω of Ω, not in the interior of Ω.
Proof: Since u is continuous in the closed, bounded domain Ω̄, it attains its maximum M somewhere in Ω̄. Our aim is to show that, if u attains its maximum at an interior point of Ω, then u is constant in Ω̄.
Suppose u(x_0) = M and let x* be some other point of Ω. Join these points with a path covered by a sequence of overlapping balls B_r.
[Figure: points x_0, x_1, …, x* joined by a chain of overlapping balls inside Ω.]
Consider the ball with x_0 at its centre. Since u has the mean value property,
  M = u(x_0) = (1/A(r)) ∫_{Σ_r} u dS ≤ M.
Equality must hold throughout this statement, and u = M throughout the sphere surrounding x_0. Since the balls overlap, there is x_1, the centre of the next ball, such that u(x_1) = M; the mean value property implies that u = M in this sphere also. Continuing like this gives u(x*) = M. Since x* is arbitrary, we conclude that u = M throughout Ω, and by continuity throughout Ω̄. Thus, if u is not a constant in Ω, it can attain its maximum value only on the boundary ∂Ω.
Corollary: Applying the above theorem to −u establishes that if u is non-constant it can attain its minimum only on ∂Ω. Also, as a simple corollary, we can state the following theorem. (The proof follows immediately from the previous theorem and the weak Maximum-Minimum Principle.)
Theorem (the strong Maximum-Minimum Principle for Laplace's equation): Let u be harmonic in Ω, i.e. a solution of ∇²u = 0 in Ω, and continuous in Ω̄, with M and m the maximum and minimum values respectively of u on the boundary ∂Ω. Then either m < u < M in Ω or else m = u = M in Ω.
Note that it is important that Ω be bounded for the theorem to hold. E.g., consider u(x, y) = e^x sin y with Ω = {(x, y) | −∞ < x < +∞, 0 < y < 2π}. Then ∇²u = 0 and on the boundary of Ω we have u = 0, so that m = M = 0. But of course u is not identically zero in Ω.
Corollary: If u = C is constant on ∂Ω, then u = C is constant in Ω.
Armed with the above theorems we are in a position to prove the uniqueness and the stability of the solution of the Dirichlet problem for Poisson's equation.
Consider the Dirichlet BVP
  ∇²u = F in Ω  with  u = f on ∂Ω,
and suppose u_1, u_2 are two solutions to the problem. Then v = u_1 − u_2 satisfies
  ∇²v = ∇²(u_1 − u_2) = 0 in Ω,  with v = 0 on ∂Ω.
Thus, v ≡ 0 in Ω, i.e. u_1 = u_2; the solution is unique.
To establish the continuous dependence of the solution on the prescribed data (i.e. the stability of the solution), let u_1 and u_2 satisfy ∇²u_{1,2} = F in Ω with u_{1,2} = f_{1,2} on ∂Ω, with max |f_1 − f_2| = ε. Then v = u_1 − u_2 is harmonic with v = f_1 − f_2 on ∂Ω. As before, v must have its maximum and minimum values on ∂Ω; hence |u_1 − u_2| ≤ ε in Ω. So, the solution is stable: small changes in the boundary data lead to small changes in the solution.
We may use the Maximum-Minimum Principle to put bounds on the solution of an equation without solving it.
The strong Maximum-Minimum Principle may be extended to more general linear elliptic equations
  L[u] = Σ_{i,j=1}^n a_ij ∂²u/∂x_i∂x_j + Σ_{i=1}^n b_i ∂u/∂x_i + c u = F,
and, as for Poisson's equation, it is then possible to prove that the solution to the Dirichlet BVP is unique and stable.
4.3 Solving Poisson Equation Using Green's Functions
We shall develop a formal representation for solutions to boundary value problems for Poisson's equation.

4.3.1 Definition of Green's Functions
Consider a general linear PDE in the form
  L(x) u(x) = F(x) in V,
where L(x) is a linear (self-adjoint) differential operator, u(x) is the unknown and F(x) is the known inhomogeneous term.
(Recall: L is self-adjoint if L = L*, where L* is defined by ⟨v|Lu⟩ = ⟨L*v|u⟩ and where ⟨v|u⟩ = ∫ v(x) w(x) u(x) dx, w(x) being the weight function.)
The solution to the equation can be written formally as
  u(x) = L⁻¹ F(x),
where L⁻¹, the inverse of L, is some integral operator. (We can expect to have L L⁻¹ = L⁻¹ L = I, the identity.) We define the inverse L⁻¹ using a Green's function: let
  u(x) = L⁻¹ F(x) = −∫ G(x, ξ) F(ξ) dξ,   (4.2)
where G(x, ξ) is the Green's function associated with L (G is the kernel). Note that G depends on both the independent variables x and the new independent variables ξ, over which we integrate.
Recall the Dirac δ-function (more precisely, a distribution or generalised function) δ(x), which has the properties
  ∫_{R^n} δ(x) dx = 1  and  ∫_{R^n} δ(x − ξ) h(ξ) dξ = h(x).
Now, applying L to equation (4.2), we get
  L u(x) = F(x) = −∫ L G(x, ξ) F(ξ) dξ;
hence, the Green's function G(x, ξ) satisfies
  u(x) = −∫ G(x, ξ) F(ξ) dξ  with  L G(x, ξ) = −δ(x − ξ)  and  x, ξ ∈ V.
4.3.2 Green's function for the Laplace Operator
Consider Poisson's equation in the open bounded region V with boundary S,
  ∇²u = F in V.   (4.3)
[Figure: region V with boundary S and outward normal n.]
Then Green's theorem (n is the normal to S, outward from V), which states
  ∫_V (u ∇²v − v ∇²u) dV = ∫_S (u ∂v/∂n − v ∂u/∂n) dS
for any functions u and v, with ∂h/∂n = n·∇h, becomes
  ∫_V u ∇²v dV = ∫_V v F dV + ∫_S (u ∂v/∂n − v ∂u/∂n) dS;
so, if we choose v ≡ v(x, ξ), singular at x = ξ, such that ∇²v = −δ(x − ξ), then u is a solution of the equation
  u(ξ) = −∫_V v F dV − ∫_S (u ∂v/∂n − v ∂u/∂n) dS,   (4.4)
which is an integral equation since u appears in the integrand. To address this we consider another function, w ≡ w(x, ξ), regular at x = ξ, such that ∇²w = 0 in V. Hence, applying Green's theorem to the functions u and w,
  ∫_V (u ∇²w − w ∇²u) dV = ∫_S (u ∂w/∂n − w ∂u/∂n) dS  ⇒  ∫_S (u ∂w/∂n − w ∂u/∂n) dS = −∫_V w F dV.
Combining this equation with equation (4.4), we find
  u(ξ) = −∫_V (v + w) F dV − ∫_S ( u ∂(v + w)/∂n − (v + w) ∂u/∂n ) dS;
so, if we consider the fundamental solution of Laplace's equation, G = v + w, such that ∇²G = −δ(x − ξ) in V,
  u(ξ) = −∫_V G F dV − ∫_S ( u ∂G/∂n − G ∂u/∂n ) dS.   (4.5)
Note that if F, f and the solution u are sufficiently well-behaved at infinity, this integral equation is also valid for unbounded regions (i.e. for the exterior BVP for Poisson's equation).
The way to remove u or ∂u/∂n from the right-hand side of the above equation depends on the choice of boundary conditions.
Dirichlet Boundary Conditions
Here, the solution to equation (4.3) satisfies the condition u = f on S. So, we choose w such that w = −v on S, i.e. G = 0 on S, in order to eliminate ∂u/∂n from the right-hand side of equation (4.5). Then the solution of the Dirichlet BVP for Poisson's equation,
  ∇²u = F in V  with  u = f on S,
is
  u(ξ) = −∫_V G F dV − ∫_S f ∂G/∂n dS,
where G = v + w (w regular at x = ξ) with ∇²v = −δ(x − ξ) and ∇²w = 0 in V, and v + w = 0 on S. So, the Green's function G is the solution of the Dirichlet BVP
  ∇²G = −δ(x − ξ) in V,  with  G = 0 on S.
Neumann Boundary Conditions
Here, the solution to equation (4.3) satisfies the condition ∂u/∂n = f on S. So, we would choose w such that ∂w/∂n = −∂v/∂n on S, i.e. ∂G/∂n = 0 on S, in order to eliminate u from the right-hand side of equation (4.5). However, the Neumann BVP
  ∇²G = −δ(x − ξ) in V,  with  ∂G/∂n = 0 on S,
which does not satisfy a compatibility equation, has no solution. Recall that the Neumann BVP ∇²u = F in V, with ∂u/∂n = f on S, is ill-posed if
  ∫_V F dV ≠ ∫_S f dS.
We need to alter the Green's function a little to satisfy the compatibility equation; put ∇²G = −δ + C, where C is a constant, then the compatibility equation for the Neumann BVP for G is
  ∫_V (−δ + C) dV = ∫_S 0 dS = 0  ⇒  C = 1/V,
where V is the volume of V. Now, applying Green's theorem to G and u,
  ∫_V (G ∇²u − u ∇²G) dV = ∫_S (G ∂u/∂n − u ∂G/∂n) dS,
we get
  u(ξ) = −∫_V G F dV + ∫_S G f dS + (1/V) ∫_V u dV.
This shows that, whereas the solution of Poisson's equation with Dirichlet boundary conditions is unique, the solution of the Neumann problem is unique only up to an additive constant ū, which is the mean value of u over Ω.
Thus, the solution of the Neumann BVP for Poisson's equation,
  ∇²u = F in V  with  ∂u/∂n = f on S,
is
  u(ξ) = ū − ∫_V G F dV + ∫_S G f dS,
where G = v + w (w regular at x = ξ) with ∇²v = −δ(x − ξ), ∇²w = 1/V in V and ∂w/∂n = −∂v/∂n on S. So, the Green's function G is the solution of the Neumann BVP
  ∇²G = −δ(x − ξ) + 1/V in V,  with  ∂G/∂n = 0 on S.
Robin Boundary Conditions
Here, the solution to equation (4.3) satisfies the condition ∂u/∂n + αu = f on S. So, we choose w such that ∂w/∂n + αw = −∂v/∂n − αv on S, i.e. ∂G/∂n + αG = 0 on S. Then,
  ∫_S ( u ∂G/∂n − G ∂u/∂n ) dS = ∫_S ( u (∂G/∂n + αG) − G f ) dS = −∫_S G f dS.
Hence, the solution of the Robin BVP for Poisson's equation,
  ∇²u = F in V  with  ∂u/∂n + αu = f on S,
is
  u(ξ) = −∫_V G F dV + ∫_S G f dS,
where G = v + w (w regular at x = ξ) with ∇²v = −δ(x − ξ) and ∇²w = 0 in V, and ∂w/∂n + αw = −∂v/∂n − αv on S. So, the Green's function G is the solution of the Robin BVP
  ∇²G = −δ(x − ξ) in V,  with  ∂G/∂n + αG = 0 on S.

Symmetry of Green's Functions
The Green's function is symmetric (i.e. G(x, ξ) = G(ξ, x)). To show this, consider two Green's functions, G_1(x) ≡ G(x, ξ_1) and G_2(x) ≡ G(x, ξ_2), and apply Green's theorem to these,
  ∫_V (G_1 ∇²G_2 − G_2 ∇²G_1) dV = ∫_S (G_1 ∂G_2/∂n − G_2 ∂G_1/∂n) dS.
Now, since G_1 and G_2 are by definition Green's functions, G_1 = G_2 = 0 on S for Dirichlet boundary conditions, ∂G_1/∂n = ∂G_2/∂n = 0 on S for Neumann boundary conditions, or G_2 ∂G_1/∂n = G_1 ∂G_2/∂n on S for Robin boundary conditions; so in any case the right-hand side is equal to zero. Also, ∇²G_1 = −δ(x − ξ_1), ∇²G_2 = −δ(x − ξ_2), and the equation becomes
  ∫_V G(x, ξ_1) δ(x − ξ_2) dV = ∫_V G(x, ξ_2) δ(x − ξ_1) dV,
so
  G(ξ_2, ξ_1) = G(ξ_1, ξ_2).
Nevertheless, note that for Neumann BVPs the term 1/V, which provides the additive constant to the solution of Poisson's equation, breaks the symmetry of G.
Example: Consider the two-dimensional Dirichlet problem for Laplace's equation,
  ∇²u = 0 in V,  with u = f on S (the boundary of V).
Since u is harmonic in V (i.e. ∇²u = 0) and u = f on S, Green's theorem gives
  ∫_V u ∇²v dV = ∫_S ( f ∂v/∂n − v ∂u/∂n ) dS.
Note that we have no information about ∂u/∂n on S or about u in V. Suppose we choose
  v = −(1/4π) ln[ (x − ξ)² + (y − η)² ];
then ∇²v = 0 in V for all points except P ≡ (x = ξ, y = η), where it is undefined. To eliminate this singularity, we cut the point P out: we surround P by a small circle of radius ε = √((x − ξ)² + (y − η)²) and denote the circle by Γ, whose parametric form in polar coordinates is
  Γ : { x − ξ = ε cos θ, y − η = ε sin θ, with ε > 0 and θ ∈ (0, 2π) }.
[Figure: region V with boundary S and the small circle Γ of radius ε around (ξ, η).]
Hence, v = −(1/2π) ln ε and dv/dε = −1/(2πε), and applying Green's theorem to u and v in this new region V* (with boundaries S and Γ), we get
  ∫_S ( f ∂v/∂n − v ∂u/∂n ) dS + ∫_Γ ( u ∂v/∂n − v ∂u/∂n ) dS = 0,   (4.6)
since ∇²u = ∇²v = 0 for all points in V*. By transforming to polar coordinates, dS = ε dθ and ∂u/∂n = −∂u/∂ε on Γ (the unit normal points in the direction −ε); then
  ∫_Γ v ∂u/∂n dS = (ε ln ε / 2π) ∫_0^{2π} ∂u/∂ε dθ → 0  as ε → 0,
and also
  ∫_Γ u ∂v/∂n dS = −∫_0^{2π} u ∂v/∂ε ε dθ = ∫_0^{2π} u (1/2πε) ε dθ = (1/2π) ∫_0^{2π} u dθ → u(ξ, η)  as ε → 0;
and so, in the limit ε → 0, equation (4.6) gives
  u(ξ, η) = ∫_S ( v ∂u/∂n − f ∂v/∂n ) dS,  where  v = −(1/4π) ln[ (x − ξ)² + (y − η)² ].
Now, consider w such that ∇²w = 0 in V but with w regular at (x = ξ, y = η), and with w = −v on S. Then Green's theorem gives
  ∫_V (u ∇²w − w ∇²u) dV = ∫_S ( u ∂w/∂n − w ∂u/∂n ) dS  ⇒  ∫_S ( f ∂w/∂n + v ∂u/∂n ) dS = 0,
since ∇²u = ∇²w = 0 in V and w = −v on S. Then, subtract this equation from the equation above to get
  u(ξ, η) = ∫_S ( v ∂u/∂n − f ∂v/∂n ) dS − ∫_S ( f ∂w/∂n + v ∂u/∂n ) dS = −∫_S f ∂(v + w)/∂n dS.
Setting G(x, y; ξ, η) = v + w, then
  u(ξ, η) = −∫_S f ∂G/∂n dS.
Such a function G then has the properties
  ∇²G = −δ(x − ξ) in V,  with  G = 0 on S.
4.3.3 Free Space Green's Function
We seek a Green's function G such that
  G(x, ξ) = v(x, ξ) + w(x, ξ),  where  ∇²v = −δ(x − ξ) in V.
How do we find the free space Green's function v, defined such that ∇²v = −δ(x − ξ) in V? Note that it does not depend on the form of the boundary. (The function v is a 'source term'; for Laplace's equation it is the potential due to a point source at the point x = ξ.)
As an illustration of the method, we can derive that, in two dimensions,
  v = −(1/4π) ln[ (x − ξ)² + (y − η)² ],
as we have already seen. We move to polar coordinates around (ξ, η),
  x − ξ = r cos θ  and  y − η = r sin θ,
and look for a solution of Laplace's equation which is independent of θ and which is singular as r → 0.
[Figure: small disc D_r of radius r, with boundary C_r, centred on (ξ, η).]
Laplace's equation in polar coordinates is
  (1/r) ∂/∂r ( r ∂v/∂r ) = ∂²v/∂r² + (1/r) ∂v/∂r = 0,
which has solution v = B ln r + A, with A and B constant. Put A = 0 and, to determine the constant B, apply Green's theorem to v and 1 in a small disc D_r (with boundary C_r) of radius r around the origin (ξ, η),
  ∫_{C_r} ∂v/∂n dS = ∫_{D_r} ∇²v dV = −∫_{D_r} δ(x − ξ) dV = −1;
so we choose B to make
  ∫_{C_r} ∂v/∂n dS = −1.
Now, in polar coordinates, ∂v/∂n = ∂v/∂r = B/r and dS = r dθ (going around the circle C_r). So,
  ∫_0^{2π} (B/r) r dθ = ∫_0^{2π} B dθ = −1  ⇒  B = −1/2π.
Hence,
  v = −(1/2π) ln r = −(1/4π) ln r² = −(1/4π) ln[ (x − ξ)² + (y − η)² ].
(We do not use the boundary condition in finding v.)
Similar (but more complicated) methods lead to the free-space Green's function v for the Laplace equation in n dimensions. In particular,
  v(x, ξ) = −(1/2) |x − ξ|                      for n = 1,
  v(x, ξ) = −(1/4π) ln |x − ξ|²                 for n = 2,
  v(x, ξ) = −|x − ξ|^(2−n) / ((2 − n) A_n(1))   for n ≥ 3,
where x and ξ are distinct points and A_n(1) denotes the area of the unit n-sphere. We shall restrict ourselves to two dimensions for this course.
Note that Poisson's equation, ∇²u = F, is solved in unbounded R^n by
  u(x) = −∫_{R^n} v(x, ξ) F(ξ) dξ,
where, from equation (4.2), the free space Green's function v defined above serves as the Green's function for the differential operator ∇² when no boundaries are present.

4.3.4 Method of Images
In order to solve BVPs for Poisson's equation, such as ∇²u = F in an open region V with some conditions on the boundary S, we seek a Green's function G such that, in V,
  G(x, ξ) = v(x, ξ) + w(x, ξ),  where  ∇²v = −δ(x − ξ)  and  ∇²w = 0 or 1/V(V).
Having found the free space Green's function v, which does not depend on the boundary conditions and so is the same for all problems, we still need to find the function w, a solution of Laplace's equation regular at x = ξ, which fixes the boundary conditions (v does not satisfy the boundary conditions required for G by itself). So, we look for the function which satisfies
  ∇²w = 0 or 1/V(V) in V  (ensuring w is regular at (ξ, η)),
with w = −v (i.e. G = 0) on S for Dirichlet boundary conditions,
or ∂w/∂n = −∂v/∂n (i.e. ∂G/∂n = 0) on S for Neumann boundary conditions.
To obtain such a function we superpose functions with singularities at the image points of (ξ, η). (This may be regarded as adding appropriate point sources and sinks to satisfy the boundary conditions.) Note also that, since G and v are symmetric, then w must be symmetric too (i.e. w(x, ξ) = w(ξ, x)).
Example 1
Suppose we wish to solve the Dirichlet BVP for Laplace's equation
  ∇²u = ∂²u/∂x² + ∂²u/∂y² = 0 in y > 0,  with  u = f(x) on y = 0.
We know that in 2-D the free space function is
  v = −(1/4π) ln[ (x − ξ)² + (y − η)² ].
If we superpose onto v the function
  w = +(1/4π) ln[ (x − ξ)² + (y + η)² ],
a solution of ∇²w = 0 in V and regular at (x = ξ, y = η), then
  G(x, y, ξ, η) = v + w = −(1/4π) ln[ ((x − ξ)² + (y − η)²) / ((x − ξ)² + (y + η)²) ].
[Figure: source at (ξ, η) in y > 0 and image of opposite sign at (ξ, −η); G = v + w.]
Note that setting y = 0 in this gives
  G(x, 0, ξ, η) = −(1/4π) ln[ ((x − ξ)² + η²) / ((x − ξ)² + η²) ] = 0,  as required.
The solution is then given by
  u(ξ, η) = −∫_S f ∂G/∂n dS.
Now, we want ∂G/∂n on the boundary y = 0, which is
  ∂G/∂n|_S = −∂G/∂y|_{y=0} = −(1/π) η / ((x − ξ)² + η²)   (exercise: check this).
Thus,
  u(ξ, η) = (η/π) ∫_{−∞}^{+∞} f(x) / ((x − ξ)² + η²) dx,
and we can relabel to get, in the original variables,
  u(x, y) = (y/π) ∫_{−∞}^{+∞} f(λ) / ((λ − x)² + y²) dλ.
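The half-plane formula above (the Poisson kernel of the upper half-plane) is easy to check numerically. The sketch below (not in the notes; scipy and numpy assumed) evaluates the integral for the boundary data f(x) = 1/(1 + x²), whose bounded harmonic extension to y > 0 is (y + 1)/(x² + (y + 1)²), and compares the two.

import numpy as np
from scipy.integrate import quad

def f(s):
    return 1.0/(1.0 + s*s)

def u(x, y):
    integrand = lambda lam: f(lam)/((lam - x)**2 + y**2)
    val, _ = quad(integrand, -np.inf, np.inf)
    return y*val/np.pi

x, y = 0.7, 1.5
print(u(x, y), (y + 1.0)/(x**2 + (y + 1.0)**2))   # both approximately 0.3709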
Example 2
Find the Green's function for the Dirichlet BVP
  ∇²u = ∂²u/∂x² + ∂²u/∂y² = F in the quadrant x > 0, y > 0.
We use the same technique, but now we have three images.
[Figure: source at (ξ, η) and images at (ξ, −η), (−ξ, −η) and (−ξ, η).]
Then the Green's function G is
  G(x, y, ξ, η) = −(1/4π) ln[ (x − ξ)² + (y − η)² ] + (1/4π) ln[ (x − ξ)² + (y + η)² ]
                  −(1/4π) ln[ (x + ξ)² + (y + η)² ] + (1/4π) ln[ (x + ξ)² + (y − η)² ].
So,
  G(x, y, ξ, η) = −(1/4π) ln[ ((x − ξ)² + (y − η)²)((x + ξ)² + (y + η)²) / ( ((x − ξ)² + (y + η)²)((x + ξ)² + (y − η)²) ) ],
and again we can check that G(0, y, ξ, η) = G(x, 0, ξ, η) = 0, as required for a Dirichlet BVP.
Example 3
Consider the Neumann BVP for Laplace's equation in the upper half-plane,
  ∇²u = ∂²u/∂x² + ∂²u/∂y² = 0 in y > 0,  with  ∂u/∂n = −∂u/∂y = f(x) on y = 0.
[Figure: source at (ξ, η) and image of the same sign at (ξ, −η); G = v + w.]
Add an image to make ∂G/∂y = 0 on the boundary:
  G(x, y, ξ, η) = −(1/4π) ln[ (x − ξ)² + (y − η)² ] − (1/4π) ln[ (x − ξ)² + (y + η)² ].
Note that
  ∂G/∂y = −(1/4π) [ 2(y − η)/((x − ξ)² + (y − η)²) + 2(y + η)/((x − ξ)² + (y + η)²) ],
and, as required for the Neumann BVP,
  ∂G/∂n|_S = −∂G/∂y|_{y=0} = (1/4π) [ −2η/((x − ξ)² + η²) + 2η/((x − ξ)² + η²) ] = 0.
Then, since G(x, 0, ξ, η) = −(1/2π) ln[ (x − ξ)² + η² ],
  u(ξ, η) = −(1/2π) ∫_{−∞}^{+∞} f(x) ln[ (x − ξ)² + η² ] dx,
i.e.
  u(x, y) = −(1/2π) ∫_{−∞}^{+∞} f(λ) ln[ (λ − x)² + y² ] dλ.
Remember that all the theory on Green's functions has been developed for the case when the equation is given in a bounded open domain. In an infinite domain (i.e. for exterior problems) we have to be a bit careful, since we have not given conditions on G and ∂G/∂n at infinity. For instance, we can think of the boundary of the upper half-plane as a semi-circle with R → +∞.
[Figure: half-disc of radius R with boundary split into S_1 (along the x-axis) and S_2 (the semi-circular arc).]
Green's theorem in the half-disc, for u and G, is
  ∫_V (G ∇²u − u ∇²G) dV = ∫_S ( G ∂u/∂n − u ∂G/∂n ) dS.
Split S into S_1, the portion along the x-axis, and S_2, the semi-circular arc. Then, in the above equation, we have to consider the behaviour of the integrals
  (1) ∫_{S_2} G ∂u/∂n dS = ∫_0^π G ∂u/∂R R dθ  and  (2) ∫_{S_2} u ∂G/∂n dS = ∫_0^π u ∂G/∂R R dθ
as R → +∞. The Green's function G is O(ln R) on S_2, so from integral (1) we need ∂u/∂R to fall off sufficiently rapidly with distance: faster than 1/(R ln R), i.e. u must fall off faster than ln(ln R). In integral (2), ∂G/∂R = O(1/R) on S_2 provides a more stringent constraint, since u must fall off more rapidly than O(1) at large R. If both integrals over S_2 vanish as R → +∞, then we recover the previously stated results on the Green's function.
function.
Example
4
Solve
the
Dirichlet
problem
for
Laplace's
equation
in
a
disc
of
radius
a,
.
@2
1
.
@u
1
u
r2
u
=
r
+
=0in
r<a
with
u
=
f()
on
r
=
a.
r@r
@r
r2
@2
xyrSV(x;y)Q+..(;)P
Consider
image
of
point
P
at
inverse
point
Q
P
=(d
cos
,
d
sin
),
Q
=(q
cos
,
q
sin
),
2
with
q
=
a(i.e.
OP

OQ
=
a2).
1
G(x,
y,
,
)=
-
ln
(x
-
)2
+(y
-
)2.
4.
.
22
.
+
1ln
(x
-
acos
)2
+(y
-
asin
)2
+
h(x,
y,
,
)
(with
2
+
2
=
2).
4d
d
We
need
to
consider
the
function
h(x,
y,
,
)
to
make
G
symmetric
and
zero
on
the
boundary.
We
can
express
this
in
polar
coordinates,
x
=
r
cos
,
y
=
r
sin
,
1
(r
cos

-
a2=d
cos
)2
+(r
sin

-
a2=d
sin
)2
G(r,
,
,
)=
ln
+
h;
4.
(r
cos

-
d
cos
)2
+(r
sin

-
d
sin
)2
.
2

1
r+
a4=2
-
2a2r=d
cos(
-
)
=ln
+
h.
4r2
+
2
-
2rd
cos(
-
)
4.3
Solving
Poisson
Equation
Using
Green's
Functions
Choose
h
such
that
G
=0
on
r
=
a,
2
+
a4=2
-
2a3=d
cos(
-
)
1
a
Gjr=a
=
ln
4.
+
h,
a
2
+
2
-
2ad
cos(
-
)
2
2
2
+
a2
-
2ad
cos(
-
)
1
1
a
=
ln
+
h
=0
v
h
=
ln
.
2
2
+
a2
-
2ad
cos(
-
)
2
4.
4a
Note
that,
2
42
1
1
a
a
r
2
w(r,
,
,
)=
ln
4.
+
-
2
cos(
-
)+
ln
r
2
2
d
4a
r
2
ln
a
+
22
1
-
2rd
cos(
-
)
=
2
4a
is
symmetric,
regular
and
solution
of
r2w
=0
in
V
.
So,
2
+
r22=a2
-
2rd
cos(
-
)
1
a
G(r,
,
,
)=
v
+
w
=
ln
;
2
+
2
-
2rd
cos(
-
)
4r
G
is
symmetric
and
zero
on
the
boundary.
This
enable
us
to
get
the
result
for
Dirichlet
problem
for
a
circle,
.
2a
@G
a
d,
u(,
)=
-
f()
0
@r
r=a
where
so
2r2=a2
-
2d
cos(
-
)
@G
1
2r
-
2d
cos(
-
)
..
=
,
2
+
r22=a2
-
2rd
cos(
-
)
2
+
2
-
2rd
cos(
-
)
@r
4a
r
2=a
-
d
cos(
-
)
@G
@n
@G
1
a
-
d
cos(
-
)
..
=
=
,
2
+
2
-
2ad
cos(
-
)
2
+
2
-
2ad
cos(
-
)
@r
2a
a
S
r=a
1
2
-
a2
=
.
2a
a2
+
2
-
2ad
cos(
-
)
Then
1
.
2a
a2
-
2
u(,
)=
f()d,
2a2
+
2
-
2ad
cos(
-
)
0
and
relabelling,
2
.
2
a2
-
rf()
u(r,
)=
d.
2a2
+
r2
-
2ar
cos(
-
)
0
Note
that,
from
the
integral
form
of
u(r,
)
above,
we
can
recover
the
Mean
Value
Theorem.
If
we
put
r
=
0
(centre
of
the
circle)
then,
.
2
1
u(0)
=
f()d;
2.
0
i.e.
the
average
of
an
harmonic
function
of
two
variables
over
a
circle
is
equal
to
its
value
at
the
centre.
Chapter
4

Elliptic
Equations
Furthermore
we
may
introduce
more
subtle
inequalities
within
the
class
of
positive
harmonic
functions
u
~
0.
Since
..1
.
cos(
..)
.
1
then
(a..r)2
.
a2
..2ar
cos(
..)+r2
.
(a+r)2
.
Thus,
the
kernel
of
the
integrand
in
the
integral
form
of
the
solution
u(r,
)
can
be
bounded
1
a
-
r
1
a2
-
r2
1
a
+
r
.
:
2
2a
+
r
2.
(a
-
r)2
.
a2
-
2ar
cos(
-
)+
r2a
-
r
For
positive
harmonic
functions
u,
we
may
use
these
inequalities
to
bound
the
solution
of
Dirichlet
problem
for
Laplace's
equation
in
a
disc
.
2a
.
2
1
a
-
r
1
a
+
r
f()d
.
u(r,
)
.
f()d,
2a
+
r
2a
-
r
00
i.e.
using
the
Mean
Value
Theorem
we
obtain
Harnack's
inequalities
a
-
r
a
+
r
a
+
r
u(0)
.
u(r,
)
.
a
-
r
u(0).
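The Poisson integral for the disc can be checked numerically. The sketch below (not part of the notes; numpy assumed) evaluates the formula for the boundary data f(φ) = a² cos(2φ), whose interior harmonic extension is r² cos(2θ), and compares the two at an interior point.

import numpy as np

a = 2.0
phi = np.linspace(0.0, 2*np.pi, 200000, endpoint=False)
f = a**2 * np.cos(2*phi)

def u(r, theta):
    kern = (a**2 - r**2) / (a**2 + r**2 - 2*a*r*np.cos(theta - phi))
    return (kern * f).mean()         # mean over phi approximates (1/2*pi) * integral

r, theta = 1.2, 0.8
print(u(r, theta), r**2 * np.cos(2*theta))   # both approximately -0.042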
Example 5
Interior Neumann problem for Laplace's equation in a disc,
  ∇²u = (1/r) ∂/∂r ( r ∂u/∂r ) + (1/r²) ∂²u/∂θ² = 0 in r < a,  with  ∂u/∂n = f(θ) on r = a.
Here, we need
  ∇²G = −δ(x − ξ) δ(y − η) + 1/V,  with  ∂G/∂r|_{r=a} = 0,
where V = πa² is the surface area of the disc. In order to deal with the 1/V term we solve
  ∇²ψ(r) = (1/r) d/dr ( r dψ/dr ) = 1/(πa²)  ⇒  ψ(r) = r²/(4πa²) + c_1 ln r + c_2,
and take the particular solution with c_1 = c_2 = 0. Then add in a source at the inverse point and an arbitrary function h to fix the symmetry and the boundary condition of G:
  G(r, θ, ρ, φ) = −(1/4π) ln[ r² + ρ² − 2rρ cos(θ − φ) ] − (1/4π) ln[ r² + a⁴/ρ² − 2(a²r/ρ) cos(θ − φ) ] + r²/(4πa²) + h.
Differentiating with respect to r, the contributions of the two logarithmic sources and of the term r²/(4πa²) cancel on r = a, so that
  ∂G/∂r|_{r=a} = ∂h/∂r|_{r=a},
and the boundary condition ∂G/∂r|_{r=a} = 0 requires ∂h/∂r = 0 on the boundary. Then put h ≡ (1/2π) ln(a/ρ); the Green's function becomes
  G(r, θ, ρ, φ) = −(1/4π) ln{ [ r² + ρ² − 2rρ cos(θ − φ) ] [ a² + r²ρ²/a² − 2rρ cos(θ − φ) ] } + r²/(4πa²).
On r = a,
  G|_{r=a} = −(1/2π) ln[ a² + ρ² − 2aρ cos(θ − φ) ] + 1/(4π) = −(1/2π) ( ln[ a² + ρ² − 2aρ cos(θ − φ) ] − 1/2 ).
Then,
  u(ρ, φ) = ū + ∫_0^{2π} f(θ) G|_{r=a} a dθ = ū − (a/2π) ∫_0^{2π} ( ln[ a² + ρ² − 2aρ cos(θ − φ) ] − 1/2 ) f(θ) dθ.
Now, recall the Neumann problem compatibility condition,
  ∫_0^{2π} f(θ) dθ = 0.
Indeed, ∫_V ∇²u dV = ∫_S ∂u/∂n dS by the divergence theorem, which forces ∫_0^{2π} f(θ) dθ = 0. So the term involving ∫_0^{2π} f(θ) dθ in the solution u(ρ, φ) vanishes; hence
  u(ρ, φ) = ū − (a/2π) ∫_0^{2π} ln[ a² + ρ² − 2aρ cos(θ − φ) ] f(θ) dθ,
or
  u(r, θ) = ū − (a/2π) ∫_0^{2π} ln[ a² + r² − 2ar cos(θ − φ) ] f(φ) dφ.
Exercise: for the exterior Neumann problem for Laplace's equation in a disc,
  u(r, θ) = (a/2π) ∫_0^{2π} ln[ a² + r² − 2ar cos(θ − φ) ] f(φ) dφ.
4.4 Extensions of Theory:
• Alternatives to the method of images for determining the Green's function G: (a) the eigenfunction method, in which G is expanded on the basis of the eigenfunctions of the Laplacian operator; (b) conformal mapping of the complex plane for solving 2-D problems.
• Green's functions for more general operators.
Chapter 5
Parabolic Equations

Contents
5.1 Definitions and Properties . . . . . . . . . . . . . . . . . . . . . . . 69
5.2 Fundamental Solution of the Heat Equation . . . . . . . . . . . . . 72
5.3 Similarity Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.4 Maximum Principles and Comparison Theorems . . . . . . . . . . . 78

5.1 Definitions and Properties
Unlike elliptic equations, which describe a steady state, parabolic (and hyperbolic) evolution equations describe processes that are evolving in time. For such an equation the initial state of the system is part of the auxiliary data for a well-posed problem.
The archetypal parabolic evolution equation is the "heat conduction" or "diffusion" equation
  ∂u/∂t = ∂²u/∂x²   (1-dimensional),
or, more generally, for κ > 0,
  ∂u/∂t = ∇·(κ∇u) = κ ∇²u  (κ constant) = κ ∂²u/∂x²  (1-D).
Problems which are well-posed for the heat equation will be well-posed for more general parabolic equations.

5.1.1 Well-Posed Cauchy Problem (Initial Value Problem)
Consider κ > 0 and
  ∂u/∂t = κ ∇²u in R^n, t > 0,
with u = f(x) in R^n at t = 0, and |u| < ∞ in R^n, t > 0.
Note that we require the solution u(x, t) to be bounded in R^n for all t. In particular, we assume that the boundedness of the smooth function u at infinity gives ∇u → 0 there. We also impose conditions on f,
  ∫_{R^n} |f(x)|² dx < ∞  ⇒  f(x) → 0 as |x| → ∞.
Sometimes f(x) has compact support, i.e. f(x) = 0 outside some finite region (e.g., in 1-D, a profile which vanishes outside a finite interval).

5.1.2 Well-Posed Initial-Boundary Value Problem
Consider an open bounded region Ω of R^n and κ > 0;
  ∂u/∂t = κ ∇²u in Ω, t > 0,
with u = f(x) at t = 0 in Ω,
and
  α u(x, t) + β ∂u/∂n (x, t) = g(x, t) on the boundary ∂Ω.
Then β = 0 gives the Dirichlet problem, α = 0 gives the Neumann problem (∂u/∂n = 0 on the boundary is the zero-flux condition), and α ≠ 0, β ≠ 0 gives the Robin or radiation problem. (The problem can also have mixed boundary conditions.)
If Ω is not bounded (e.g. a half-plane), then an additional behaviour-at-infinity condition may be needed.
5.1.3 Time Irreversibility of the Heat Equation
If the initial conditions in a well-posed initial value or initial-boundary value problem for an evolution equation are replaced by conditions on the solution at a time other than the initial time, the resulting problem may not be well-posed (even when the total number of auxiliary conditions is unchanged). E.g. the backward heat equation in 1-D is ill-posed; this problem,
  ∂u/∂t = κ ∂²u/∂x² in 0 < x < l, 0 < t < T,
with u = f(x) at t = T, x ∈ (0, l), and u(0, t) = u(l, t) = 0 for t ∈ (0, T),
which is to find the previous states u(x, t), t < T, which will have evolved into the state f(x), has no solution for arbitrary f(x). Even when a solution exists, it does not depend continuously on the data.
The heat equation is irreversible in the mathematical sense that forward time is distinguishable from backward time (i.e. it models physical processes irreversible in the sense of the Second Law of Thermodynamics).
5.1.4 Uniqueness of Solution for the Cauchy Problem
The 1-D initial value problem
  ∂u/∂t = ∂²u/∂x²,  x ∈ R, t > 0,
with u = f(x) at t = 0 (x ∈ R), such that ∫_{−∞}^{+∞} |f(x)|² dx < ∞, has a unique solution.
Proof: We can prove the uniqueness of the solution of the Cauchy problem using the energy method. Suppose that u_1 and u_2 are two bounded solutions. Consider w = u_1 − u_2; then w satisfies
  ∂w/∂t = ∂²w/∂x²   (−∞ < x < ∞, t > 0),
with w = 0 at t = 0 (−∞ < x < ∞) and ∂w/∂x → 0 as |x| → ∞, for all t.
Consider the function of time
  I(t) = (1/2) ∫_{−∞}^{+∞} w²(x, t) dx,  such that I(0) = 0 and I(t) ≥ 0 for all t (as w² ≥ 0),
which represents the energy of the function w. Then,
  dI/dt = (1/2) ∫_{−∞}^{+∞} ∂w²/∂t dx = ∫_{−∞}^{+∞} w ∂w/∂t dx = ∫_{−∞}^{+∞} w ∂²w/∂x² dx   (from the heat equation)
        = [ w ∂w/∂x ]_{−∞}^{+∞} − ∫_{−∞}^{+∞} (∂w/∂x)² dx   (integration by parts)
        = −∫_{−∞}^{+∞} (∂w/∂x)² dx ≤ 0,  since ∂w/∂x → 0 at infinity.
Then 0 ≤ I(t) ≤ I(0) = 0 for all t > 0, since dI/dt ≤ 0. So I(t) = 0 and w ≡ 0, i.e. u_1 = u_2, for all t > 0.

5.1.5 Uniqueness of Solution for the Initial-Boundary Value Problem
Similarly, we can make use of the energy method to prove the uniqueness of the solution of the 1-D Dirichlet or Neumann problem
  ∂u/∂t = ∂²u/∂x² in 0 < x < l, t > 0,
with u = f(x) at t = 0, x ∈ (0, l),
  u(0, t) = g_0(t) and u(l, t) = g_l(t), for all t > 0   (Dirichlet),
or
  ∂u/∂x (0, t) = g_0(t) and ∂u/∂x (l, t) = g_l(t), for all t > 0   (Neumann).
Suppose that u_1 and u_2 are two solutions and consider w = u_1 − u_2; then w satisfies
  ∂w/∂t = ∂²w/∂x²   (0 < x < l, t > 0),
with w = 0 at t = 0 (0 < x < l),
and w(0, t) = w(l, t) = 0 for all t > 0 (Dirichlet), or ∂w/∂x (0, t) = ∂w/∂x (l, t) = 0 for all t > 0 (Neumann).
Consider the function of time
  I(t) = (1/2) ∫_0^l w²(x, t) dx,  such that I(0) = 0 and I(t) ≥ 0 for all t (as w² ≥ 0),
which represents the energy of the function w. Then,
  dI/dt = (1/2) ∫_0^l ∂w²/∂t dx = ∫_0^l w ∂²w/∂x² dx = [ w ∂w/∂x ]_0^l − ∫_0^l (∂w/∂x)² dx = −∫_0^l (∂w/∂x)² dx ≤ 0.
Then 0 ≤ I(t) ≤ I(0) = 0 for all t > 0, since dI/dt ≤ 0. So I(t) = 0 for all t, w ≡ 0 and u_1 = u_2.
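The decay of the energy I(t) can also be seen in a simple numerical experiment. The sketch below (not part of the notes; numpy assumed; grid sizes are arbitrary choices) evolves an explicit finite-difference discretisation of u_t = u_xx with homogeneous Dirichlet data and checks that the discrete energy never increases.

import numpy as np

l, nx = 1.0, 201
x = np.linspace(0.0, l, nx)
dx = x[1] - x[0]
dt = 0.4*dx*dx                   # within the explicit stability limit dt <= dx^2/2
u = np.sin(np.pi*x) + 0.5*np.sin(3*np.pi*x)    # vanishes at both ends

def energy(v):
    return 0.5*np.sum(v**2)*dx

I_prev = energy(u)
for n in range(500):
    u[1:-1] += dt*(u[2:] - 2*u[1:-1] + u[:-2])/dx**2
    I_now = energy(u)
    assert I_now <= I_prev + 1e-15   # dI/dt <= 0, as in the proof above
    I_prev = I_now
print("energy decayed to", I_prev)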
5.2 Fundamental Solution of the Heat Equation
Consider the 1-D Cauchy problem,
  ∂u/∂t = ∂²u/∂x² on −∞ < x < ∞, t > 0,
with u = f(x) at t = 0 (−∞ < x < ∞), such that ∫_{−∞}^{+∞} |f(x)|² dx < ∞.
Example: To illustrate the typical behaviour of the solution of this Cauchy problem, consider the specific case where u(x, 0) = f(x) = exp(−x²); the solution is
  u(x, t) = (1 + 4t)^(−1/2) exp( −x²/(1 + 4t) )   (exercise: check this).
Starting with u(x, 0) = exp(−x²) at t = 0, the solution becomes u(x, t) ≈ t^(−1/2) exp(−x²/4t)/2 for t large, i.e. the amplitude of the solution scales as 1/√t and its width scales as √t.
[Figure: profiles of u(x, t) at t = 0, t = 1 and t = 10, showing the spreading and decay.]
Spreading of the Solution: The solution of the Cauchy problem for the heat equation spreads in such a way that its integral remains constant:
  Q(t) = ∫_{−∞}^{+∞} u dx = constant.
Proof: Consider
  dQ/dt = ∫_{−∞}^{+∞} ∂u/∂t dx = ∫_{−∞}^{+∞} ∂²u/∂x² dx   (from the equation)
        = [ ∂u/∂x ]_{−∞}^{+∞} = 0   (from the conditions on u).
So, Q = constant.
5.2.1 Integral Form of the General Solution
To find the general solution of the Cauchy problem we define the Fourier transform of u(x, t) and its inverse by
  U(k, t) = (1/√(2π)) ∫_{−∞}^{+∞} u(x, t) e^{−ikx} dx,
  u(x, t) = (1/√(2π)) ∫_{−∞}^{+∞} U(k, t) e^{ikx} dk.
So, the heat equation gives
  (1/√(2π)) ∫_{−∞}^{+∞} [ ∂U(k, t)/∂t + k² U(k, t) ] e^{ikx} dk = 0  for all x,
which implies that the Fourier transform U(k, t) satisfies the equation
  ∂U(k, t)/∂t + k² U(k, t) = 0.
The solution of this linear equation is
  U(k, t) = F(k) e^{−k²t},
where F(k) is the Fourier transform of the initial data, u(x, t = 0),
  F(k) = (1/√(2π)) ∫_{−∞}^{+∞} f(x) e^{−ikx} dx.
(This requires ∫_{−∞}^{+∞} |f(x)|² dx < ∞.) Then we back-substitute U(k, t) in the integral form of u(x, t) to find
  u(x, t) = (1/√(2π)) ∫_{−∞}^{+∞} F(k) e^{−k²t} e^{ikx} dk = (1/2π) ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(λ) e^{−ikλ} e^{−k²t} e^{ikx} dλ dk
          = (1/2π) ∫_{−∞}^{+∞} f(λ) ( ∫_{−∞}^{+∞} e^{−k²t} e^{ik(x−λ)} dk ) dλ.
Now consider
  H(x, t, λ) = ∫_{−∞}^{+∞} e^{−k²t} e^{ik(x−λ)} dk = ∫_{−∞}^{+∞} exp[ −t ( k − i(x − λ)/2t )² − (x − λ)²/4t ] dk,
since the exponent satisfies
  −k²t + ik(x − λ) = −t [ k² − ik(x − λ)/t ] = −t [ ( k − i(x − λ)/2t )² + (x − λ)²/4t² ],
and set k − i(x − λ)/2t = s/√t, with dk = ds/√t, such that
  H(x, t, λ) = exp( −(x − λ)²/4t ) ∫_{−∞}^{+∞} e^{−s²} ds/√t = √(π/t) e^{−(x−λ)²/4t},
since ∫_{−∞}^{+∞} e^{−s²} ds = √π (see appendix A). So,
  u(x, t) = (1/√(4πt)) ∫_{−∞}^{+∞} f(λ) exp( −(x − λ)²/4t ) dλ = ∫_{−∞}^{+∞} K(x − λ, t) f(λ) dλ,
where the function
  K(x, t) = (1/√(4πt)) exp( −x²/4t )
is called the fundamental solution (or source function, Green's function, propagator, diffusion kernel) of the heat equation.
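The convolution formula can be checked against the explicit solution quoted earlier for Gaussian initial data. The sketch below (not in the notes; numpy assumed) approximates the integral by a Riemann sum and compares it with u = (1 + 4t)^(-1/2) exp(-x^2/(1 + 4t)).

import numpy as np

def K(x, t):
    return np.exp(-x**2/(4*t))/np.sqrt(4*np.pi*t)

lam = np.linspace(-20.0, 20.0, 40001)
dlam = lam[1] - lam[0]
f = np.exp(-lam**2)               # initial data f(x) = exp(-x^2)

def u(x, t):
    return np.sum(K(x - lam, t)*f)*dlam

x, t = 0.8, 0.5
print(u(x, t), np.exp(-x**2/(1 + 4*t))/np.sqrt(1 + 4*t))   # both approximately 0.466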
5.2.2 Properties of the Fundamental Solution
The function K(x, t) is a (positive) solution of the heat equation for t > 0 (check this) and has a singularity only at x = 0, t = 0:
1. K(x, t) → 0 as t → 0+ with x ≠ 0 (K = O(t^(−1/2) exp(−1/t)));
2. K(x, t) → +∞ as t → 0+ with x = 0 (K = O(t^(−1/2)));
3. K(x, t) → 0 as t → +∞ (K = O(t^(−1/2)));
4. ∫_{−∞}^{+∞} K(x − λ, t) dλ = 1.
At any time t > 0 (no matter how small), the solution to the initial value problem for the heat equation at an arbitrary point x depends on all of the initial data, i.e. the data propagate with infinite speed. (As a consequence, the problem is well-posed only if behaviour-at-infinity conditions are imposed.) However, the influence of the initial state dies out very rapidly with distance (as exp(−r²)).

5.2.3 Behaviour at large t
Suppose that the initial data have compact support, or decay to zero sufficiently quickly as |x| → ∞, and that we look at the solution of the heat equation on spatial scales x large compared to the spatial scale of the data λ, and at t large. Thus, we assume the ordering x²/t = O(1) and λ²/t = O(ε), where ε ≪ 1 (so that xλ/t = O(ε^(1/2))). Then the solution
  u(x, t) = (1/√(4πt)) ∫_{−∞}^{+∞} f(λ) e^{−(x−λ)²/4t} dλ = (1/√(4πt)) ∫_{−∞}^{+∞} f(λ) e^{−λ²/4t} e^{xλ/2t} e^{−x²/4t} dλ
          ≈ (e^{−x²/4t}/√(4πt)) ∫_{−∞}^{+∞} f(λ) dλ = (F(0)/√(2t)) exp( −x²/4t ),
where F(0) is the Fourier transform of f at k = 0, i.e.
  F(k) = (1/√(2π)) ∫_{−∞}^{+∞} f(x) e^{−ikx} dx  ⇒  ∫_{−∞}^{+∞} f(x) dx = √(2π) F(0).
So, at large t, on large spatial scales x, the solution evolves as u ≈ (u_0/√t) exp(−η²), where u_0 is a constant and η = x/(2√t) is the diffusion variable. This solution spreads and decreases as t increases.
5.3
Similarity
Solution
For
some
equations,
like
the
heat
equation,
the
solution
depends
on
a
certain
grouping
of
the
independent
variables
rather
than
depending
on
each
of
the
independent
variables
independently.
Consider
the
heat
equation
in
1-D
@u
@2u
-
D
=0;
@t
@x2
and
introduce
the
dilatation
transformation
.
=
"a
x,
f
=
"b
t
and
w(,
)=
"c
u("..a,
"..b);p
=
R.
This
change
of
variables
gives
@u
@w
@f
@w
@u
@w
@.
@w
=
"..c
=
"b..c
,
=
"..c
=
"a..c
@t
@f
@t
@f
@x
@.
@x
@.
@2u@2w@.
@2w
=
"a..c
=
"2a..c
and
.
@x2
@2
@x
@2
So,
the
heat
equation
transforms
into
@2@2
@w
w
@w
w
"b..c
-
"2a..cD"b..c
-
"2a..bD
=0
i.e.
=0;
@f
@2
@f
@2
5.3
Similarity
Solution
and
is
invariant
under
the
dilatation
transformation
(i.e.
8")
if
b
=2a.
Thus,
if
u
solves
the
equation
at
x,
t
then
w
=
"..cu
solve
the
equation
at
x
=
"..a
,
t
=
"..b
.
Note
also
that
we
can
build
some
groupings
of
independent
variables
which
are
invariant
under
this
transformation,
such
as
"a
.
xx
==
a=b
a=b
ta=b
("b
t)
=
which
de
nes
the
dimensionless
similarity
variable
(x,
t)=
x/
2Dt,
since
b
=2a.(s
!.
if
x
!.
or
t
.
0
and
s
=0
if
x
=
0.)
Also,
"c
=
==
v()
w
uu
c=b
c=b
tc=b
("b
t)
suggests
that
we
look
for
a
solution
of
the
heat
equation
of
the
form
u
=
tc=2a
v().
Indeed,
since
the
heat
equation
is
invariant
under
the
dilatation
transformation,
then
we
also
expect
the
solution
to
be
invariant
under
that
transformation.
Hence, the partial derivatives become
$$ \frac{\partial u}{\partial t} = \frac{c}{2a}\, t^{c/2a-1}\, v(s) + t^{c/2a}\, v'(s)\, \frac{\partial s}{\partial t} = t^{c/2a-1}\left( \frac{c}{2a}\, v(s) - \frac{s}{2}\, v'(s) \right), $$
since $\partial s/\partial t = -x/(2t\sqrt{2Dt}) = -s/(2t)$, and
$$ \frac{\partial u}{\partial x} = t^{c/2a}\, v'(s)\, \frac{\partial s}{\partial x} = \frac{t^{c/2a-1/2}}{\sqrt{2D}}\, v'(s), \qquad \frac{\partial^2 u}{\partial x^2} = \frac{t^{c/2a-1}}{2D}\, v''(s). $$
Then, the heat equation reduces to an ODE,
$$ t^{\alpha/2-1}\left( \frac{1}{2}\, v''(s) + \frac{s}{2}\, v'(s) - \frac{\alpha}{2}\, v(s) \right) = 0, \qquad (5.1) $$
with $\alpha = c/a$, such that $u = t^{\alpha/2} v$ and $s = x/\sqrt{2Dt}$. So, we may be able to solve the heat equation through (5.1) if we can write the auxiliary conditions on $u$, $x$ and $t$ as conditions on $v$ and $s$.
Note that, in general, the integral transform method is able to deal with more general boundary conditions; on the other hand, looking for a similarity solution makes it possible to solve other types of problems (e.g. weak solutions).
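As a check on this reduction, not part of the notes and assuming SymPy is available: for $\alpha = 0$, equation (5.1) reduces to $v'' + s v' = 0$, which is solved by $v(s) = \operatorname{erf}(s/\sqrt{2})$, and the corresponding similarity solution $u = v(x/\sqrt{2Dt})$ does satisfy the heat equation.

```python
# A sanity check of the reduction, not part of the notes (SymPy assumed available).
# For alpha = 0, (5.1) reduces to v'' + s v' = 0, solved by v(s) = erf(s/sqrt(2));
# the corresponding u(x, t) = v(x/sqrt(2 D t)) then solves the heat equation.
import sympy as sp

x, t, D, s = sp.symbols('x t D s', positive=True)

v = sp.erf(s / sp.sqrt(2))
ode_residual = sp.Rational(1, 2) * v.diff(s, 2) + s / 2 * v.diff(s)   # (5.1), alpha = 0
print(sp.simplify(ode_residual))          # 0: v solves the reduced ODE

u = v.subs(s, x / sp.sqrt(2 * D * t))
pde_residual = sp.diff(u, t) - D * sp.diff(u, x, 2)
print(sp.simplify(pde_residual))          # 0: the similarity solution solves the PDE
```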
5.3.1 Infinite Region

Consider the problem
$$ \frac{\partial u}{\partial t} = D\, \frac{\partial^2 u}{\partial x^2} \quad \text{on} \quad -\infty < x < \infty, \; t > 0, $$
with $u = u_0$ at $t = 0$, $x \in \mathbb{R}^-$, $u = 0$ at $t = 0$, $x \in \mathbb{R}^+$, and
$$ u \to u_0 \text{ as } x \to -\infty; \qquad u \to 0 \text{ as } x \to \infty, \quad \forall t > 0. $$
[Figure: the initial profile at $t = 0$, a step of height $u_0$ for $x < 0$ dropping to $0$ for $x > 0$.]
We look for a solution of the form $u = t^{\alpha/2} v(s)$, where $s(x,t) = x/\sqrt{2Dt}$, such that $v(s)$ is a solution of equation (5.1). Moreover, since $u = t^{\alpha/2} v(s) \to u_0$ as $s \to -\infty$, where $u_0$ does not depend on $t$, $\alpha$ must be zero. Hence, $v$ is a solution of the linear second order ODE
$$ v''(s) + s\, v'(s) = 0 \quad \text{with} \quad v \to u_0 \text{ as } s \to -\infty \quad \text{and} \quad v \to 0 \text{ as } s \to +\infty. $$
Making use of the integrating factor method,
$$ e^{s^2/2}\, v''(s) + s\, e^{s^2/2}\, v'(s) = \frac{d}{ds}\!\left( e^{s^2/2}\, v'(s) \right) = 0 \;\Rightarrow\; v'(s) = \kappa_2\, e^{-s^2/2} \;\Rightarrow\; v(s) = \kappa_2 \int_{-\infty}^{s} e^{-h^2/2}\, dh + \kappa_1. $$
Now, apply the initial conditions to determine the constants $\kappa_2$ and $\kappa_1$. As $s \to -\infty$ we have $v = \kappa_1 = u_0$, and as $s \to +\infty$, $v = \kappa_2\sqrt{2\pi} + u_0 = 0$, so $\kappa_2 = -u_0/\sqrt{2\pi}$. Hence, the solution to this Cauchy problem in the infinite region is
$$ v(s) = u_0\left( 1 - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{s} e^{-h^2/2}\, dh \right) \quad \text{i.e.} \quad u(x,t) = u_0\left( 1 - \frac{1}{\sqrt{\pi}} \int_{-\infty}^{x/\sqrt{4Dt}} e^{-h^2}\, dh \right). $$
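Since $1 - \frac{1}{\sqrt{\pi}}\int_{-\infty}^{z} e^{-h^2}\,dh = \frac{1}{2}\operatorname{erfc}(z)$, this solution may also be written $u(x,t) = \frac{u_0}{2}\operatorname{erfc}\!\left(x/\sqrt{4Dt}\right)$. The following sketch, not part of the notes, checks this form numerically; $u_0$ and $D$ are arbitrary test values.

```python
# A numerical sketch, not part of the notes: the step-data solution written as
# u(x, t) = (u0/2) erfc(x / sqrt(4 D t)) satisfies the heat equation and the
# far-field conditions (u0, D arbitrary).
import math

u0, D = 3.0, 0.5

def u(x, t):
    return 0.5 * u0 * math.erfc(x / math.sqrt(4 * D * t))

x0, t0, h = 0.4, 1.3, 1e-4
u_t  = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print(u_t, D * u_xx)                 # agree to discretisation error

print(u(-30.0, 1.0), u(30.0, 1.0))   # ~u0 as x -> -infinity, ~0 as x -> +infinity
```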
5.3.2 Semi-Infinite Region

Consider the problem
$$ \frac{\partial u}{\partial t} = D\, \frac{\partial^2 u}{\partial x^2} \quad \text{on} \quad 0 < x < \infty, \; t > 0, $$
with $u = 0$ at $t = 0$, $x \in \mathbb{R}^+$, and
$$ \frac{\partial u}{\partial x} = -q \;\text{ at }\; x = 0, \; t > 0; \qquad u \to 0 \text{ as } x \to \infty, \quad \forall t > 0. $$
Again, we look for a solution of the form $u = t^{\alpha/2} v(s)$, where $s(x,t) = x/\sqrt{2Dt}$, such that $v(s)$ is a solution of equation (5.1). However, the boundary conditions are now different:
$$ \frac{\partial u}{\partial x} = t^{\alpha/2}\, v'(s)\, \frac{\partial s}{\partial x} = \frac{t^{(\alpha-1)/2}}{\sqrt{2D}}\, v'(s) \;\Rightarrow\; \left.\frac{\partial u}{\partial x}\right|_{x=0} = \frac{t^{(\alpha-1)/2}}{\sqrt{2D}}\, v'(0) = -q; $$
since $q$ does not depend on $t$, $\alpha - 1$ must be zero. Hence, from equation (5.1), the function $v$, such that $u = \sqrt{t}\, v$, is a solution of the linear second order ODE
$$ v''(s) + s\, v'(s) - v(s) = 0 \quad \text{with} \quad v'(0) = -q\sqrt{2D} \quad \text{and} \quad v \to 0 \text{ as } s \to +\infty. $$
Since the function $v = s$ is a solution of the above ODE, we seek solutions of the form $v(s) = s\, \phi(s)$, such that
$$ v' = s\, \phi' + \phi \quad \text{and} \quad v'' = s\, \phi'' + 2\phi'. $$
Then, back-substitute in the ODE:
$$ s\, \phi'' + 2\phi' + s^2 \phi' + s\, \phi - s\, \phi = 0 \quad \text{i.e.} \quad \frac{\phi''}{\phi'} = -s - \frac{2}{s}. $$
After integration (integrating factor method or otherwise), we get
$$ \ln|\phi'| = -\frac{s^2}{2} - 2\ln s + k \;\Rightarrow\; \phi'(s) = \kappa_0\, \frac{e^{-s^2/2}}{s^2} \;\Rightarrow\; \phi(s) = \kappa_0 \int^{s} \frac{e^{-h^2/2}}{h^2}\, dh + \kappa_1. $$
An integration by parts gives
$$ \phi(s) = \kappa_0\left( -\frac{e^{-s^2/2}}{s} - \int_0^{s} e^{-h^2/2}\, dh \right) + \kappa_1 = \kappa_2\left( \frac{e^{-s^2/2}}{s} + \int_0^{s} e^{-h^2/2}\, dh \right) + \kappa_3. $$
Hence, the solution becomes
$$ v(s) = s\, \phi(s) = \kappa_2\left( e^{-s^2/2} + s \int_0^{s} e^{-h^2/2}\, dh \right) + \kappa_3\, s, $$
where the constants of integration $\kappa_2$ and $\kappa_3$ are determined by the initial conditions:
$$ v'(s) = \kappa_2\left( -s\, e^{-s^2/2} + \int_0^{s} e^{-h^2/2}\, dh + s\, e^{-s^2/2} \right) + \kappa_3 = \kappa_2 \int_0^{s} e^{-h^2/2}\, dh + \kappa_3, $$
so that $v'(0) = \kappa_3 = -q\sqrt{2D}$. Also, as $s \to +\infty$,
$$ v \simeq s\left( \kappa_2 \int_0^{+\infty} e^{-h^2/2}\, dh + \kappa_3 \right) = 0 \;\Rightarrow\; \kappa_2 = -\kappa_3\sqrt{\frac{2}{\pi}}, \quad \text{since} \quad \int_0^{+\infty} e^{-h^2/2}\, dh = \sqrt{2} \int_0^{+\infty} e^{-h^2}\, dh = \sqrt{\frac{\pi}{2}}. $$
The solution of the equation becomes
$$ v(s) = \kappa_2\left( e^{-s^2/2} + s \int_0^{s/\sqrt{2}} \sqrt{2}\, e^{-h^2}\, dh \right) + \kappa_3\, s = \kappa_2\left( e^{-s^2/2} - \sqrt{2}\, s \int_{s/\sqrt{2}}^{+\infty} e^{-h^2}\, dh \right) + \left( \kappa_2\sqrt{\frac{\pi}{2}} + \kappa_3 \right) s, $$
where the last term vanishes since $\kappa_2\sqrt{\pi/2} = -\kappa_3$; hence, with $\kappa_2 = q\sqrt{4D/\pi}$,
$$ v(s) = q\sqrt{\frac{4D}{\pi}}\left( e^{-s^2/2} - \sqrt{2}\, s \int_{s/\sqrt{2}}^{+\infty} e^{-h^2}\, dh \right), $$
i.e., since $u = \sqrt{t}\, v(s)$ and $s = x/\sqrt{2Dt}$,
$$ u(x,t) = q\left( \sqrt{\frac{4Dt}{\pi}}\, e^{-x^2/4Dt} - \frac{2x}{\sqrt{\pi}} \int_{x/\sqrt{4Dt}}^{+\infty} e^{-h^2}\, dh \right). $$
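In terms of the complementary error function, $\operatorname{erfc}(z) = \frac{2}{\sqrt{\pi}}\int_z^{+\infty} e^{-h^2}dh$, this reads $u(x,t) = q\left(\sqrt{4Dt/\pi}\,e^{-x^2/4Dt} - x\operatorname{erfc}(x/\sqrt{4Dt})\right)$. The sketch below, not part of the notes, checks the flux and initial conditions numerically; $q$ and $D$ are arbitrary test values.

```python
# A numerical sketch, not part of the notes, for
# u(x, t) = q * ( sqrt(4 D t / pi) exp(-x^2/(4 D t)) - x erfc(x / sqrt(4 D t)) ):
# check the flux condition du/dx = -q at x = 0 and the initial condition.
import math

q, D = 2.0, 0.7

def u(x, t):
    z = x / math.sqrt(4 * D * t)
    return q * (math.sqrt(4 * D * t / math.pi) * math.exp(-z * z) - x * math.erfc(z))

t0, h = 0.9, 1e-5
du_dx = (u(h, t0) - u(0.0, t0)) / h       # one-sided difference at the boundary
print(du_dx, -q)                          # ~ -q

print(u(1.0, 1e-6))                       # ~0: u -> 0 as t -> 0+ for fixed x > 0
```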
5.4 Maximum Principles and Comparison Theorems

Like elliptic PDEs, the heat equation (and parabolic equations of the most general form) satisfies a maximum-minimum principle. Consider the Cauchy problem
$$ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} \quad \text{in} \quad -\infty < x < \infty, \; 0 \le t \le T, $$
and define the two sets $V$ and $V_T$ as
$$ V = \{(x,t) \in (-\infty, +\infty) \times (0, T)\} \quad \text{and} \quad V_T = \{(x,t) \in (-\infty, +\infty) \times (0, T]\}. $$
Lemma: Suppose
$$ \frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} < 0 \text{ in } V \quad \text{and} \quad u(x, 0) \le M; $$
then $u(x,t) < M$ in $V_T$.

Proof: Suppose $u(x,t)$ achieves a maximum in $V$, at the point $(x_0, t_0)$. Then, at this point,
$$ \frac{\partial u}{\partial t} = 0, \qquad \frac{\partial u}{\partial x} = 0 \qquad \text{and} \qquad \frac{\partial^2 u}{\partial x^2} \le 0. $$
But $\partial^2 u/\partial x^2 \le 0$ contradicts the hypothesis $\partial^2 u/\partial x^2 > \partial u/\partial t = 0$ at $(x_0, t_0)$. Moreover, if we now suppose that the maximum occurs at $t = T$ then, at this point,
$$ \frac{\partial u}{\partial t} \ge 0, \qquad \frac{\partial u}{\partial x} = 0 \qquad \text{and} \qquad \frac{\partial^2 u}{\partial x^2} \le 0, $$
which again leads to a contradiction.
5.4.1 First Maximum Principle

Suppose
$$ \frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} \le 0 \text{ in } V \quad \text{and} \quad u(x, 0) \le M; $$
then $u(x,t) \le M$ in $V_T$.
Proof: Suppose there is some point $(x_0, t_0)$ in $V_T$ ($0 < t_0 \le T$) at which $u(x_0, t_0) = M_1 > M$. Put
$$ w(x,t) = u(x,t) - \varepsilon\,(t - t_0), \quad \text{where} \quad \varepsilon = \frac{M_1 - M}{t_0} > 0. $$
Then,
$$ \frac{\partial w}{\partial t} - \frac{\partial^2 w}{\partial x^2} = \underbrace{\frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2}}_{\le\, 0} - \underbrace{\varepsilon}_{>\, 0} < 0 \quad \text{(in the form of the lemma)}, $$
and by the lemma,
$$ w(x,t) < \max\{w(x,0)\} \le M + \varepsilon\, t_0 = M + \frac{M_1 - M}{t_0}\, t_0 = M_1 \quad \text{in } V_T. $$
But
$$ w(x_0, t_0) = u(x_0, t_0) - \varepsilon\,(t_0 - t_0) = u(x_0, t_0) = M_1; $$
since $(x_0, t_0) \in V_T$ we have a contradiction.
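A discrete analogue, not from the notes, may make the result feel natural: for the explicit finite-difference scheme applied to $u_t = u_{xx}$, each updated value is a convex combination of neighbouring old values whenever $\Delta t \le \Delta x^2/2$, so the numerical solution can never exceed the maximum of its initial data. The grid sizes and initial profile below are arbitrary choices.

```python
# A discrete maximum principle, not from the notes: with the explicit scheme for
# u_t = u_xx (stable when dt <= dx^2/2) each update is a convex combination of
# old values, so the solution never exceeds the maximum of the initial data.
import numpy as np

dx, dt = 0.02, 0.0001            # dt / dx^2 = 0.25 <= 1/2
x = np.arange(-2.0, 2.0 + dx, dx)
u = np.exp(-10 * x**2)           # initial data, maximum M = 1
M = u.max()

r = dt / dx**2
for _ in range(2000):
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    assert u.max() <= M + 1e-12  # the discrete maximum principle

print(u.max())                   # well below M after diffusion has acted
```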
Appendix A: Integral of $e^{-x^2}$ in $\mathbb{R}$

Consider the integrals
$$ I(R) = \int_0^R e^{-s^2}\, ds \quad \text{and} \quad I = \int_0^{+\infty} e^{-s^2}\, ds, $$
such that $I(R) \to I$ as $R \to +\infty$.
Then,
$$ I^2(R) = \int_0^R e^{-x^2}\, dx \int_0^R e^{-y^2}\, dy = \int_0^R\!\!\int_0^R e^{-(x^2+y^2)}\, dx\, dy. $$
Since its integrand is positive, $I^2(R)$ is bounded by the following integrals:
$$ \iint_{\Delta^-} e^{-(x^2+y^2)}\, dx\, dy \;<\; I^2(R) \;<\; \iint_{\Delta^+} e^{-(x^2+y^2)}\, dx\, dy, $$
where $\Delta^- : \{x \in \mathbb{R}^+, y \in \mathbb{R}^+ \,|\, x^2 + y^2 \le R^2\}$ and $\Delta^+ : \{x \in \mathbb{R}^+, y \in \mathbb{R}^+ \,|\, x^2 + y^2 \le 2R^2\}$.
[Figure: the square $[0,R]\times[0,R]$ lies between the quarter-disc of radius $R$ and the quarter-disc of radius $R\sqrt{2}$.]
Hence, after transformation to polar coordinates, $(x = \rho\cos\theta, \; y = \rho\sin\theta)$, with $dx\, dy = \rho\, d\rho\, d\theta$ and $x^2 + y^2 = \rho^2$, this relation becomes
$$ \int_0^{\pi/2}\! d\theta \int_0^{R} \rho\, e^{-\rho^2}\, d\rho \;<\; I^2(R) \;<\; \int_0^{\pi/2}\! d\theta \int_0^{R\sqrt{2}} \rho\, e^{-\rho^2}\, d\rho. $$
Put $s = \rho^2$, so that $ds = 2\rho\, d\rho$, to get
$$ \frac{\pi}{4} \int_0^{R^2} e^{-s}\, ds \;<\; I^2(R) \;<\; \frac{\pi}{4} \int_0^{2R^2} e^{-s}\, ds \quad \text{i.e.} \quad \frac{\pi}{4}\left( 1 - e^{-R^2} \right) \;<\; I^2(R) \;<\; \frac{\pi}{4}\left( 1 - e^{-2R^2} \right). $$
Take the limit $R \to +\infty$ to state that
$$ \frac{\pi}{4} \le I^2 \le \frac{\pi}{4} \quad \text{i.e.} \quad I^2 = \frac{\pi}{4} \quad \text{and} \quad I = \frac{\sqrt{\pi}}{2} \quad (I > 0). $$
So,
$$ \int_0^{+\infty} e^{-s^2}\, ds = \frac{\sqrt{\pi}}{2} \;\Rightarrow\; \int_{-\infty}^{+\infty} e^{-s^2}\, ds = \sqrt{\pi}, $$
since $\exp(-x^2)$ is even on $\mathbb{R}$.
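A numerical illustration of this squeeze argument, not part of the notes (NumPy assumed available):

```python
# For finite R, I(R)^2 lies between (pi/4)(1 - exp(-R^2)) and (pi/4)(1 - exp(-2R^2)),
# and I(R) -> sqrt(pi)/2 as R grows.
import numpy as np

def I(R, n=200001):
    # trapezoidal approximation of the integral of exp(-s^2) over [0, R]
    s = np.linspace(0.0, R, n)
    f = np.exp(-s**2)
    return np.sum((f[:-1] + f[1:]) / 2) * (s[1] - s[0])

for R in (1.0, 2.0, 3.0):
    lower = np.pi / 4 * (1 - np.exp(-R**2))
    upper = np.pi / 4 * (1 - np.exp(-2 * R**2))
    print(lower, I(R)**2, upper)          # lower < I(R)^2 < upper

print(I(10.0), np.sqrt(np.pi) / 2)        # I approaches sqrt(pi)/2
```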