
Mathematics-III

MTH-203
T. Muthukumar
tmk@iitk.ac.in
I Semester, 2011-12
Contents
1 Second Order ODE: Linear-Algebraic view
  1.1 Wronskian
2 Lecture-23
  2.1 Eigenvalue Problem
  2.2 Sturm-Liouville Problem
  2.3 Orthogonality
  2.4 Singular Problems
3 Sturm-Liouville Problem: general boundary conditions
4 Lecture-24
  4.1 Periodic Functions
  4.2 Fourier Series
  4.3 Piecewise Smooth
5 Lecture-25
  5.1 Orthogonality
  5.2 Odd-Even Functions
  5.3 Fourier Sine-Cosine Series
  5.4 Fourier Integral
6 Lecture-26
  6.1 PDE-Introduction
  6.2 Gradient and Hessian
  6.3 PDE
  6.4 Types of PDE
  6.5 PDE-Solution
  6.6 Well Posedness
7 Lecture - 27
  7.1 First order PDE
  7.2 Solving First Order PDE
8 Lecture - 28
  8.1 Transport Equation
  8.2 Cauchy Problem
9 Lecture - 29
  9.1 Second Order PDE
  9.2 Classification
  9.3 Standard Forms
10 Lecture - 30
  10.1 Three Basic Linear PDE
  10.2 Laplace Equation
11 Lecture - 31
  11.1 Dirichlet Problem
  11.2 DP On Rectangle
12 Lecture - 32
  12.1 Laplace Equation
  12.2 Laplacian on a 2D-Disk
13 Lecture - 33
  13.1 Laplace Equation
  13.2 Laplacian on a 3D-Sphere
14 Lecture-34
  14.1 Eigenvalues of Laplacian
  14.2 Computing Eigenvalues
  14.3 In Rectangle
  14.4 In Disk
  14.5 Bessel's Function
15 Lecture - 35
  15.1 1D Heat Equation
  15.2 Solving for Circular Wire
16 Lecture - 36
  16.1 1D Wave Equation
17 Lecture - 37
  17.1 Duhamel's Principle
18 Lecture - 38
  18.1 d'Alembert's Formula
1 Second Order ODE: Linear-Algebraic view
We have already seen/studied various methods for solving a second order ODE. We shall now understand them from the linear-algebraic viewpoint.¹

¹This section is purely for the sake of understanding and elaborates on the material explained on the black-(green?)board during the lecture.
Consider a second order linear homogeneous ODE
$$y'' + P(x)y' + Q(x)y = 0 \qquad (1)$$
on an interval $I \subset \mathbb{R}$ such that both $P$ and $Q$ are continuous on $I$. If we set
$$L = \frac{d^2}{dx^2} + P(x)\frac{d}{dx} + Q(x),$$
then the ODE is simply written as $Ly = 0$. Now, let
$$C(I) = \{\, y : I \to \mathbb{R} \mid y \text{ is continuous} \,\}$$
denote the set of all real valued continuous functions on $I$. Note that $C(I)$ is a vector space over $\mathbb{R}$. Also, let $C^k(I)$ be the set of all $k$-times continuously differentiable functions; $C^k(I) \subset C(I)$ is a subspace of $C(I)$.
Remark 1.1. How big is $C(I)$? Note that the functions $1, x, x^2, \ldots, x^k, \ldots$ all belong to $C(I)$, and they are all linearly independent. Every finite linear combination of these functions is a polynomial of finite degree, and the set of all polynomials of degree up to $k$ is a $(k+1)$-dimensional subspace of $C(I)$. Thus, it is clear that $C(I)$ is not finite dimensional.
In our context, when we say the given second order ODE (1) is linear, we mean the corresponding map $L : C^2(I) \subset C(I) \to C(I)$ is a linear map, i.e., $L(\alpha y_1 + \beta y_2) = \alpha Ly_1 + \beta Ly_2$ for all $y_1, y_2 \in C^2(I)$ and $\alpha, \beta \in \mathbb{R}$.

When we say we wish to solve (1), we basically want to find the null-space (kernel) of $L$, denoted as $S$, i.e.,
$$S = \{\, y \in C^2(I) \subset C(I) \mid Ly = 0 \,\}.$$
Thus, solving a second order linear ODE is equivalent to finding the solution space $S$ corresponding to $L$.
Since $y \equiv 0$ is always a (trivial) solution to (1), we have $0 \in S$. Recall (or prove as an exercise) that $S$, the null-space of $L$, is a subspace of $C^2(I)$. By doing this exercise, you would have actually proved the following theorem:

Theorem 1.2 (Principle of Superposition). If $y_1$ and $y_2$ are solutions of (1), then any linear combination of $y_1$ and $y_2$, i.e.,
$$\alpha y_1 + \beta y_2, \qquad \alpha, \beta \in \mathbb{R},$$
is also a solution of (1).

Now, the question of interest is: how big can $S$ be? For instance, if $S$ is zero-dimensional, then (1) has no non-trivial solution. We will show that $S$ is at most two dimensional. To do so, we will use the uniqueness theorem.
Solving an ODE depends on the initial conditions prescribed. Since we are dealing with a second order ODE, we need two initial conditions. One can convince oneself of this by arguing heuristically: to find $y$ we need to integrate twice, which will yield two integration constants; thus, we need two pieces of information to find the two constants. In the language of time, we prescribe the initial position at $x_0$, $y(x_0) = y_0$, and the initial velocity at $x_0$, $y'(x_0) = y_0'$.

Theorem 1.3 (Existence/Uniqueness). Let $P$ and $Q$ be continuous on $I$ (assume $I$ a closed interval). Then for any point $x_0 \in I$, the ODE (1) with initial conditions
$$y(x_0) = y_0 \quad \text{and} \quad y'(x_0) = y_0'$$
has a unique solution.
Using the above theorem, we shall make a statement on the dimensionality of S.
Theorem 1.4 (Dimensionality Theorem). Let $L$ be the second order linear differential operator $L : C^2(I) \to C(I)$ of the form
$$L = \frac{d^2}{dx^2} + P(x)\frac{d}{dx} + Q(x).$$
Then its solution space $S$ is of dimension two.

Proof. For any fixed point $x_0 \in I$, consider the linear transformation $T : S \to \mathbb{R}^2$ defined as
$$T(y) := (y(x_0), y'(x_0)) = (y_0, y_0').$$
This definition of $T$ makes sense under the hypotheses of the uniqueness theorem. By the uniqueness theorem, if $T(y) = 0$ then $y \equiv 0$; therefore $T$ is one-to-one (injective). Again by the uniqueness theorem, $T$ is onto (surjective). Thus $S$ is isomorphic to the image of $T$, which is $\mathbb{R}^2$. Hence $S$ is two dimensional.
All the arguments above can be generalised to any $k$-th order linear homogeneous ODE, and one can conclude that the solution space is of dimension $k$.

Thus, for a second order linear homogeneous ODE, any two linearly independent solutions of (1) will span the space of solutions $S$. In two dimensions, checking whether two solutions are linearly independent is trivial, but this check gets harder in higher dimensions. Thus, we introduce the notion of the Wronskian, which works in all dimensions.
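As a numerical aside (not part of the original notes), the sketch below takes the concrete example $y'' + y = 0$ (so $P \equiv 0$, $Q \equiv 1$) with basis $y_1 = \cos x$, $y_2 = \sin x$, and checks that the combination pinned down by two initial conditions satisfies both the conditions and the ODE. The initial data $(2, -3)$ and the test point $1.3$ are arbitrary choices.

```python
import math

# y'' + y = 0 has basis y1 = cos x, y2 = sin x. The unique solution with
# y(0) = y0 and y'(0) = y0p is y(x) = y0*cos x + y0p*sin x: the two
# initial conditions fix the two coordinates in the 2-dimensional space S.
y0, y0p = 2.0, -3.0                       # prescribed initial data at x0 = 0
y  = lambda x: y0*math.cos(x) + y0p*math.sin(x)
yp = lambda x: -y0*math.sin(x) + y0p*math.cos(x)

def second_derivative(f, x, h=1e-5):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2

# y matches the initial conditions exactly ...
residual_ic = (y(0.0) - y0, yp(0.0) - y0p)
# ... and satisfies the ODE y'' + y = 0 up to discretisation error.
residual_ode = second_derivative(y, 1.3) + y(1.3)
```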
1.1 Wronskian
If we know the general solution of (1), i.e.,
$$y(x) = \alpha y_1(x) + \beta y_2(x),$$
how does one find the constants $\alpha$ and $\beta$ such that $y$ satisfies the initial conditions $y(x_0) = y_0$ and $y'(x_0) = y_0'$, for a given $x_0 \in I$? Thus, we want to find $\alpha$ and $\beta$ such that
$$y(x_0) = \alpha y_1(x_0) + \beta y_2(x_0) = y_0,$$
$$y'(x_0) = \alpha y_1'(x_0) + \beta y_2'(x_0) = y_0'.$$
This is equivalent to saying
$$\begin{pmatrix} y_1(x_0) & y_2(x_0) \\ y_1'(x_0) & y_2'(x_0) \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} y_0 \\ y_0' \end{pmatrix}.$$
The above matrix equation has a unique solution iff the $2 \times 2$ matrix has non-zero determinant (is invertible). We define the determinant of this matrix to be the Wronskian:
$$W(y_1, y_2)(x_0) = \begin{vmatrix} y_1(x_0) & y_2(x_0) \\ y_1'(x_0) & y_2'(x_0) \end{vmatrix}.$$
Theorem 1.5. Two solutions $y_1$ and $y_2$ of (1) are linearly independent (and hence span the solution space $S$) if and only if there is a point $x_0 \in I$ at which $W(y_1, y_2)(x_0) \neq 0$.

A word of caution: the "if and only if" is true only for solutions of the ODE.

Theorem 1.6. Let $f_1$ and $f_2$ be two differentiable functions on $I$. If there is a point $x_0 \in I$ such that $W(f_1, f_2)(x_0) \neq 0$, then $f_1$ and $f_2$ are linearly independent in $C(I)$.

The converse is not true unless they are solutions of an ODE. You have already seen an example during the lectures on the Wronskian.
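The Wronskian is easy to evaluate numerically. The sketch below (an illustration not in the original notes) checks that $W(\cos, \sin) \equiv 1$, and also checks one standard counterexample to the converse of Theorem 1.6, $f_1(x) = x^2$ and $f_2(x) = x|x|$, which are linearly independent on $[-1,1]$ yet have identically zero Wronskian; whether this is the example from the lectures is an assumption.

```python
import math

def wronskian(f, g, x, h=1e-6):
    """W(f, g)(x) = f(x) g'(x) - f'(x) g(x), derivatives by central differences."""
    fp = (f(x + h) - f(x - h)) / (2*h)
    gp = (g(x + h) - g(x - h)) / (2*h)
    return f(x)*gp - fp*g(x)

# cos and sin solve y'' + y = 0; their Wronskian is identically
# cos^2 x + sin^2 x = 1, hence they span the solution space.
w_solutions = wronskian(math.cos, math.sin, 0.7)

# f1(x) = x^2 and f2(x) = x|x| are linearly independent on [-1, 1],
# yet W(f1, f2)(x) = 0 for every x: they are not both solutions of one ODE (1).
f1 = lambda x: x*x
f2 = lambda x: x*abs(x)
w_counterexample = max(abs(wronskian(f1, f2, x/10)) for x in range(-9, 10))
```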
2 Lecture-23
2.1 Eigenvalue Problem
Motivation
Consider the problem
$$y''(x) + \lambda y(x) = 0, \qquad x \in (a, b).$$
For a given $\lambda \in \mathbb{R}$, we know the general solution, depending on whether $\lambda < 0$, $\lambda = 0$ or $\lambda > 0$. What if $\lambda$ is unknown too? Note that $y \equiv 0$ is a trivial solution, for all $\lambda \in \mathbb{R}$.

Eigenvalue problem

Definition 2.1. For a differential operator $L$, we say
$$Ly(x) = \lambda y(x)$$
is the eigenvalue problem (EVP) corresponding to the differential operator $L$, where both $\lambda$ and $y$ are unknown.

In an EVP we need to find all $\lambda \in \mathbb{R}$ for which the given ODE (equation) is solvable.

Does the EVP ring any bell? Any similarity with diagonalisation of matrices from Linear Algebra? Think about it!
Eigenvalues and Eigen Functions
Example 2.1. For instance, if $L = \frac{d^2}{dx^2}$, then its corresponding eigenvalue problem is
$$y'' = \lambda y.$$

Definition 2.2. A $\lambda \in \mathbb{R}$ for which the EVP corresponding to $L$ admits a non-trivial solution $y_\lambda$ is called an eigenvalue of the operator $L$, and $y_\lambda$ is said to be an eigen function corresponding to $\lambda$.
Explicit Computation
Consider the boundary value problem
$$\begin{cases} y'' + \lambda y = 0, & x \in (0, a) \\ y(0) = y(a) = 0. \end{cases}$$
This is a second order ODE with constant coefficients. Its characteristic equation is $m^2 + \lambda = 0$. Solving for $m$, we get $m = \pm\sqrt{-\lambda}$. Note that $\lambda$ can be either zero, positive or negative.

If $\lambda = 0$, then $y'' = 0$ and the general solution is $y(x) = \alpha x + \beta$, for some constants $\alpha$ and $\beta$. Since $y(0) = y(a) = 0$ and $a \neq 0$, we get $\alpha = \beta = 0$. Thus, we have no non-trivial solution corresponding to $\lambda = 0$.
$\lambda < 0$, Negative

If $\lambda < 0$, then $\mu = -\lambda > 0$. Hence $y(x) = \alpha e^{\sqrt{\mu}x} + \beta e^{-\sqrt{\mu}x}$. Using the boundary conditions $y(0) = y(a) = 0$, we get $\alpha = \beta = 0$, and hence we have no non-trivial solution corresponding to negative $\lambda$'s.

$\lambda > 0$, Positive

If $\lambda > 0$, then $m = \pm i\sqrt{\lambda}$ and $y(x) = \alpha\cos(\sqrt{\lambda}x) + \beta\sin(\sqrt{\lambda}x)$. Using the boundary condition $y(0) = 0$, we get $\alpha = 0$ and $y(x) = \beta\sin(\sqrt{\lambda}x)$. Using $y(a) = 0$ (and since $\beta = 0$ yields the trivial solution), we require $\sin(\sqrt{\lambda}a) = 0$. Thus, $\lambda = (k\pi/a)^2$ for each non-zero $k \in \mathbb{N}$ (since $\lambda > 0$). Hence, for each $k \in \mathbb{N}$, there is a solution $(y_k, \lambda_k)$ with
$$y_k(x) = \sin\left(\frac{k\pi x}{a}\right), \qquad \lambda_k = (k\pi/a)^2.$$
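A quick numerical sanity check of the eigenpairs $(y_k, \lambda_k)$, not in the original notes; the interval length $a = 2.0$, the index $k = 3$ and the interior test point $x = 0.77$ are arbitrary choices.

```python
import math

a = 2.0                                    # interval (0, a); value is arbitrary
k = 3
lam = (k*math.pi/a)**2                     # eigenvalue lambda_k = (k*pi/a)^2
y = lambda x: math.sin(k*math.pi*x/a)      # eigenfunction y_k

# boundary conditions y(0) = y(a) = 0
bc = (y(0.0), y(a))

# y'' + lam*y = 0, checked by a central difference at an interior point
h = 1e-5
x = 0.77
residual = (y(x + h) - 2*y(x) + y(x - h))/h**2 + lam*y(x)
```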
Properties of Eigenvalues

Notice the following properties of the eigenvalues and eigen functions.

- We have a discrete set of $\lambda$'s such that $0 < \lambda_1 < \lambda_2 < \lambda_3 < \ldots$ and $\lambda_n \to \infty$.
- The eigen functions $y_\lambda$ corresponding to $\lambda$ form a subspace of dimension one (Assignment!), i.e., if $y_\lambda$ is an eigen function corresponding to $\lambda$, then $\alpha y_\lambda$, for all $\alpha \in \mathbb{R}$, is also an eigen function corresponding to $\lambda$.

All the operators $L$ (in one dimension) to which these properties can be generalised are classified as Sturm-Liouville operators.
2.2 Sturm-Liouville Problem

Sturm-Liouville Operator

We say an operator $L$ is a Sturm-Liouville (S-L) operator if
$$L = -\frac{1}{q(x)}\frac{d}{dx}\left(p(x)\frac{d}{dx}\right),$$
where $p, q : [a, b] \to \mathbb{R}$ are continuous functions such that $p(x) > 0$ and $q(x) > 0$, and $p$ is continuously differentiable in $(a, b)$. If $p \equiv q \equiv 1$, we get the operator $-\frac{d^2}{dx^2}$.
Sturm-Liouville Problem

Consider the Sturm-Liouville (S-L) problem
$$\begin{cases} \frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + \lambda q(x)y = 0, & x \in (a, b) \\ y(a) = y(b) = 0. \end{cases}$$
Note that, for all $\lambda \in \mathbb{R}$, zero is a trivial solution of the S-L problem. Thus, we are interested in the $\lambda$'s for which the S-L problem has non-trivial solutions.
Solution Space and Eigen Space

Let $V_0$ be the real vector space of all $y : [a, b] \to \mathbb{R}$ such that $y(a) = y(b) = 0$. If $\lambda$ is an eigenvalue of the S-L operator, we define the subspace of $V_0$
$$W_\lambda = \{\, y \in V_0 \mid y \text{ solves the S-L problem for } \lambda \,\}.$$

Existence

Theorem 2.3. Under the hypotheses on $p$ and $q$, there exists an increasing sequence of eigenvalues $0 < \lambda_1 < \lambda_2 < \lambda_3 < \ldots < \lambda_n < \ldots$ with $\lambda_n \to \infty$, and each $W_n = W_{\lambda_n}$ is one-dimensional. Conversely, any solution $y$ of the S-L problem is in $W_n$, for some $n$.
2.3 Orthogonality

Inner Product

We define the following inner product on the solution space $V_0$:
$$\langle f, g \rangle := \int_a^b q(x) f(x) g(x)\, dx.$$

Definition 2.4. We say two functions $f$ and $g$ are perpendicular or orthogonal with weight $q$ if $\langle f, g \rangle = 0$. We say $f$ is of unit length if its norm $\|f\| = \sqrt{\langle f, f \rangle} = 1$.

Orthogonality of Eigenfunctions

Theorem 2.5. With respect to the inner product defined above on $V_0$, the eigen functions corresponding to distinct eigenvalues of the S-L problem are orthogonal.
Proof. Let $y_i$ and $y_j$ be eigen functions corresponding to distinct eigenvalues $\lambda_i$ and $\lambda_j$. We need to show that $\langle y_i, y_j \rangle = 0$. Recall that $L$ is the S-L operator and hence $Ly_k = \lambda_k y_k$, for $k = i, j$. Consider (integrating by parts, with the boundary terms vanishing since $y_i(a) = y_i(b) = y_j(a) = y_j(b) = 0$)
$$\lambda_i \langle y_i, y_j \rangle = \langle Ly_i, y_j \rangle = \int_a^b q\,(Ly_i)\, y_j\, dx = -\int_a^b \frac{d}{dx}\left(p(x)\frac{dy_i}{dx}\right) y_j(x)\, dx$$
$$= \int_a^b p(x)\frac{dy_i}{dx}\frac{dy_j}{dx}\, dx = -\int_a^b y_i(x)\frac{d}{dx}\left(p(x)\frac{dy_j}{dx}\right) dx = \langle y_i, Ly_j \rangle = \lambda_j \langle y_i, y_j \rangle.$$
Thus $(\lambda_i - \lambda_j)\langle y_i, y_j \rangle = 0$. But $\lambda_i - \lambda_j \neq 0$, hence $\langle y_i, y_j \rangle = 0$.
2.4 Singular Problems

Singular Problems

What we have seen so far are the regular S-L problems. The problem is regular because the interval under consideration $(a, b)$ was finite and the functions $p(x)$ and $q(x)$ were positive and continuous on the whole interval. We say a problem is singular if:

- the interval is infinite, or
- the interval is finite, but $p$ or $q$ vanishes at one (or both) endpoints, or
- the interval is finite, but $p$ or $q$ is discontinuous at one (or both) endpoints.
Legendre Equation

The Legendre equation
$$(1 - x^2)y'' - 2xy' + \lambda y = 0 \quad \text{for } x \in [-1, 1]$$
is an example of a singular S-L problem. This is easily seen by rewriting the Legendre equation as
$$\frac{d}{dx}\left((1 - x^2)\frac{dy}{dx}\right) + \lambda y = 0 \quad \text{for } x \in [-1, 1].$$
Here $q \equiv 1$ and $p(x) = 1 - x^2$ vanishes at the endpoints $x = \pm 1$.
Solving the Legendre Equation

The end points $x = \pm 1$ are regular singular points. The coefficients $P(x) = \frac{-2x}{1 - x^2}$ and $Q(x) = \frac{\lambda}{1 - x^2}$ are analytic at $x = 0$, the origin, with radius of convergence $R = 1$. We look for solutions in power series form, $y(x) = \sum_{k=0}^\infty a_k x^k$. Substituting gives
$$a_2 = \frac{-\lambda a_0}{2}, \qquad a_3 = \frac{(2 - \lambda)a_1}{6}, \qquad \text{and for } k \geq 2, \quad a_{k+2} = \frac{(k(k+1) - \lambda)a_k}{(k+2)(k+1)}.$$
Thus $y(x) = a_0 y_1 + a_1 y_2$, where $y_1, y_2$ are infinite series containing only even and odd powers of $x$, respectively. In particular, $y_1$ and $y_2$ are solutions to the Legendre equation, obtained by choosing $a_0 = 1$, $a_1 = 0$ and vice versa.
Legendre Polynomial

Note that, for $k \geq 2$,
$$a_{k+2} = \frac{(k(k+1) - \lambda)a_k}{(k+2)(k+1)}.$$
Hence, for any $n \geq 2$, if $\lambda = n(n+1)$, then $a_{n+2} = 0$ and hence every successive (even or odd, according to the parity of $n$) term is zero. Also, if $\lambda = 1(1+1) = 2$, then $a_3 = 0$, and if $\lambda = 0(0+1) = 0$, then $a_2 = 0$. Thus, for each $n \in \mathbb{N}_0$, we have $\lambda_n = n(n+1)$ and a polynomial $P_n$ of degree $n$ which is a solution to the Legendre equation.
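The recurrence can be run directly to produce these polynomial solutions; a sketch, not in the original notes. The normalisation here ($a_0 = 1$ or $a_1 = 1$) is an arbitrary choice, so the output is proportional to, but not equal to, the conventionally scaled $P_n$.

```python
def legendre_coeffs(n):
    """Coefficients a_0..a_n of a degree-n polynomial solution of the
    Legendre equation with lambda = n(n+1), via
    a_{k+2} = (k(k+1) - lambda) a_k / ((k+2)(k+1)).
    Starts the even series with a_0 = 1 (n even) or the odd series with
    a_1 = 1 (n odd); this is not the standard P_n normalisation."""
    lam = n*(n + 1)
    a = [0.0]*(n + 1)
    a[n % 2] = 1.0
    for k in range(n % 2, n - 1, 2):
        a[k + 2] = (k*(k + 1) - lam)*a[k] / ((k + 2)*(k + 1))
    return a

c2 = legendre_coeffs(2)   # 1 - 3x^2, proportional to P_2 = (3x^2 - 1)/2
c3 = legendre_coeffs(3)   # x - (5/3)x^3, proportional to P_3 = (5x^3 - 3x)/2
```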
3 Sturm-Liouville Problem: general boundary conditions
The most general form of the Sturm-Liouville problem is
$$\begin{cases} -\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + r(x)y = \lambda q(x)y, & x \in (a, b) \\ c_1 y(a) + c_2 y'(a) = 0, & c_1^2 + c_2^2 > 0 \\ d_1 y(b) + d_2 y'(b) = 0, & d_1^2 + d_2^2 > 0. \end{cases}$$
The conditions on the constants mean that $c_1$ and $c_2$ cannot both be zero simultaneously; similarly for $d_1, d_2$. Also, we have the following hypotheses:

1. $p, q, r : [a, b] \to \mathbb{R}$ are continuous functions,
2. $p(x) > 0$ and $q(x) > 0$,
3. $p$ is continuously differentiable in $(a, b)$.

Thus, the S-L operator is now given as
$$L = -\frac{1}{q(x)}\frac{d}{dx}\left(p(x)\frac{d}{dx}\right) + \frac{r(x)}{q(x)}.$$

Theorem 3.1. Under the hypotheses on $p$, $q$ and $r$:

1. there exists an increasing sequence of eigenvalues $0 < \lambda_1 < \lambda_2 < \lambda_3 < \ldots < \lambda_k < \ldots$ with $\lambda_k \to \infty$,
2. each $W_k = W_{\lambda_k}$ is at most two dimensional,
3. the eigen spaces $W_k$ corresponding to distinct eigenvalues $\lambda_k$ are orthogonal,
4. conversely, any solution $y$ of the S-L problem is in $W_k$, for some $k$.
Exercise 3.1. Consider the Sturm-Liouville problem
$$-\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + r(x)y = \lambda q(x)y \quad \text{in } (a, b)$$
with $p(x), q(x) > 0$ on $[a, b]$, and with $y(a) \neq y(b)$ and $y'(a) \neq y'(b)$. Show that the eigen spaces are all one dimensional.
4 Lecture-24
4.1 Periodic Functions
Periodic Functions
Definition 4.1. A function $f : \mathbb{R} \to \mathbb{R}$ is said to be periodic of period $T$ if $T > 0$ is the smallest number such that
$$f(t + T) = f(t) \quad \forall t \in \mathbb{R}.$$
Such functions are also called $T$-periodic functions.

Example 4.1. $\sin t$ and $\cos t$ are $2\pi$-periodic functions; $\sin 2t$ and $\cos 2t$ are $\pi$-periodic functions.

Constructing T-periodic Functions

Given an $L$-periodic function $g$, one can always construct a $T$-periodic function as $f(t) = g(Lt/T)$. For instance, $f(t) = \sin\left(\frac{2\pi t}{T}\right)$ is a $T$-periodic function:
$$\sin\left(\frac{2\pi(t + T)}{T}\right) = \sin\left(\frac{2\pi t}{T} + 2\pi\right) = \sin\left(\frac{2\pi t}{T}\right).$$
In fact, for any positive integer $k$, $\sin\left(\frac{2\pi kt}{T}\right)$ and $\cos\left(\frac{2\pi kt}{T}\right)$ are $T$-periodic functions.
Periodic Sturm-Liouville Problem

Consider the boundary value problem
$$\begin{cases} y'' + \lambda y = 0 & \text{in } (-\pi, \pi) \\ y(-\pi) = y(\pi) \\ y'(-\pi) = y'(\pi). \end{cases}$$
Its characteristic equation is $m^2 + \lambda = 0$. Solving for $m$, we get $m = \pm\sqrt{-\lambda}$. Note that $\lambda$ can be either zero, positive or negative.

If $\lambda = 0$, then $y'' = 0$ and the general solution is $y(x) = \alpha x + \beta$, for some constants $\alpha$ and $\beta$. Since $y(-\pi) = y(\pi)$, we get $\alpha = 0$. Thus, for $\lambda = 0$, $y \equiv$ a constant is the only non-trivial solution.
$\lambda < 0$, Negative

If $\lambda < 0$, then $\mu = -\lambda > 0$. Hence $y(x) = \alpha e^{\sqrt{\mu}x} + \beta e^{-\sqrt{\mu}x}$. Using the boundary condition $y(-\pi) = y(\pi)$, we get $\alpha = \beta$, and using the other boundary condition we get $\alpha = \beta = 0$. Hence we have no non-trivial solution corresponding to negative $\lambda$'s.

$\lambda > 0$, Positive

If $\lambda > 0$, then $m = \pm i\sqrt{\lambda}$ and $y(x) = \alpha\cos(\sqrt{\lambda}x) + \beta\sin(\sqrt{\lambda}x)$. Using the boundary conditions, we get
$$\alpha\cos(\sqrt{\lambda}\pi) - \beta\sin(\sqrt{\lambda}\pi) = \alpha\cos(\sqrt{\lambda}\pi) + \beta\sin(\sqrt{\lambda}\pi)$$
and
$$\alpha\sin(\sqrt{\lambda}\pi) + \beta\cos(\sqrt{\lambda}\pi) = -\alpha\sin(\sqrt{\lambda}\pi) + \beta\cos(\sqrt{\lambda}\pi).$$
Thus, $\beta\sin(\sqrt{\lambda}\pi) = \alpha\sin(\sqrt{\lambda}\pi) = 0$. For a non-trivial solution, we must have $\sin(\sqrt{\lambda}\pi) = 0$. Thus, $\lambda = k^2$ for each non-zero $k \in \mathbb{N}$ (since $\lambda > 0$).

Hence, for each $k \in \mathbb{N}$, there is a solution $(y_k, \lambda_k)$ with
$$y_k(x) = \alpha_k\cos kx + \beta_k\sin kx, \qquad \lambda_k = k^2,$$
and for $\lambda_0 = 0$ we have $y_0 = \alpha_0$.

Consider the series (eigen function expansion)
$$y(x) \sim \sum_{k=0}^\infty a_k y_k = a_0\alpha_0 + \sum_{k=1}^\infty a_k(\alpha_k\cos kx + \beta_k\sin kx).$$
4.2 Fourier Series

Fourier Series

Let $f : \mathbb{R} \to \mathbb{R}$ be a $T$-periodic function. We also know that, for any positive integer $k$, $\sin\left(\frac{2\pi kt}{T}\right)$ and $\cos\left(\frac{2\pi kt}{T}\right)$ are $T$-periodic functions. Can we find sequences $\{a_k\}$ and $\{b_k\}$ in $\mathbb{R}$, and $a_0 \in \mathbb{R}$, such that the infinite series
$$a_0 + \sum_{k=1}^\infty \left(a_k\cos\left(\frac{2\pi kt}{T}\right) + b_k\sin\left(\frac{2\pi kt}{T}\right)\right)$$
converges to $f(t)$ for some or all $t \in \mathbb{R}$?

Computing Fourier coefficients

To simplify notation, let us consider a $2\pi$-periodic function $f$; the same ideas work for a $T$-periodic function. Let $f$ be a function such that the infinite series
$$a_0 + \sum_{k=1}^\infty (a_k\cos kt + b_k\sin kt)$$
converges uniformly to $f$. Thus,
$$f(t) = a_0 + \sum_{k=1}^\infty (a_k\cos kt + b_k\sin kt). \qquad (2)$$
Formulae for $a_0$, the $a_k$'s and the $b_k$'s

By integrating both sides of (2) from $-\pi$ to $\pi$,
$$\int_{-\pi}^{\pi} f(t)\, dt = \int_{-\pi}^{\pi} \left(a_0 + \sum_{k=1}^\infty (a_k\cos kt + b_k\sin kt)\right) dt.$$
Since the series converges uniformly to $f$, we can interchange integral and series. Thus,
$$\int_{-\pi}^{\pi} f(t)\, dt = a_0(2\pi) + \sum_{k=1}^\infty \left(\int_{-\pi}^{\pi} (a_k\cos kt + b_k\sin kt)\, dt\right).$$
But we know that
$$\int_{-\pi}^{\pi} \sin kt\, dt = \int_{-\pi}^{\pi} \cos kt\, dt = 0, \quad \forall k \in \mathbb{N}. \ (\text{Exercise!})$$
Hence,
$$a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\, dt.$$
To find the coefficients $a_k$, for each fixed $k$, we multiply both sides of (2) by $\cos kt$ and integrate from $-\pi$ to $\pi$. Consequently, we get
$$\int_{-\pi}^{\pi} f(t)\cos kt\, dt = a_0\int_{-\pi}^{\pi}\cos kt\, dt + \sum_{j=1}^\infty \int_{-\pi}^{\pi} (a_j\cos jt\cos kt + b_j\sin jt\cos kt)\, dt = \int_{-\pi}^{\pi} a_k\cos^2 kt\, dt = a_k\pi.$$
A similar argument after multiplying by $\sin kt$ gives the formula for the $b_k$'s. Thus we have derived, for all $k \in \mathbb{N}$,
$$a_k = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos kt\, dt, \qquad b_k = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin kt\, dt, \qquad a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\, dt.$$
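These formulae are easy to check numerically; the sketch below (not part of the original notes) approximates the integrals by the midpoint rule for the $2\pi$-periodic sawtooth $f(t) = t$ on $(-\pi, \pi)$, whose coefficients are known in closed form: $a_k = 0$ since $f$ is odd, and $b_k = 2(-1)^{k+1}/k$.

```python
import math

def fourier_coeffs(f, k, n=4000):
    """(a_k, b_k) of a 2*pi-periodic f by midpoint quadrature on (-pi, pi)."""
    h = 2*math.pi/n
    ts = [-math.pi + (i + 0.5)*h for i in range(n)]
    ak = sum(f(t)*math.cos(k*t) for t in ts)*h/math.pi
    bk = sum(f(t)*math.sin(k*t) for t in ts)*h/math.pi
    return ak, bk

# sawtooth f(t) = t on (-pi, pi): a_k = 0, b_k = 2*(-1)^(k+1)/k
a1, b1 = fourier_coeffs(lambda t: t, 1)   # expect (0, 2)
a2, b2 = fourier_coeffs(lambda t: t, 2)   # expect (0, -1)
```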
Exercises

For integers $m \geq 0$ and $n \geq 1$:
$$\int_{-\pi}^{\pi} \cos nt\cos mt\, dt = \begin{cases} \pi, & m = n \\ 0, & m \neq n. \end{cases}$$
Hence $\frac{\cos kt}{\sqrt{\pi}}$ is of unit length.
$$\int_{-\pi}^{\pi} \sin nt\sin mt\, dt = \begin{cases} \pi, & m = n \\ 0, & m \neq n. \end{cases}$$
Hence $\frac{\sin kt}{\sqrt{\pi}}$ is of unit length.
$$\int_{-\pi}^{\pi} \sin nt\cos mt\, dt = 0.$$
Fourier Series and Coefficients

Definition 4.2. For any $T$-periodic function $f : \mathbb{R} \to \mathbb{R}$, the numbers $a_0$, $a_k$ and $b_k$, for all $k \in \mathbb{N}$, as defined previously, are called the Fourier coefficients of $f$. Further, the infinite series
$$a_0 + \sum_{k=1}^\infty \left(a_k\cos\left(\frac{2\pi kt}{T}\right) + b_k\sin\left(\frac{2\pi kt}{T}\right)\right) \qquad (3)$$
is called the Fourier series of $f$.
Some Questions

Given a $2\pi$-periodic function $f : \mathbb{R} \to \mathbb{R}$, we know how to find the Fourier coefficients of $f$. But:

- Will the Fourier series of $f$,
$$a_0 + \sum_{k=1}^\infty (a_k\cos kt + b_k\sin kt),$$
converge?
- If it converges, will it converge to $f$?
- If so, is the convergence point-wise or uniform, etc.?

These are questions one can ask, and they will not be dealt with in this course.

An Answer

Answering our question in all generality is rather difficult at this stage. However, we shall answer it in a simple version which will suffice for our purposes:

Theorem 4.3. If $f : \mathbb{R} \to \mathbb{R}$ is a continuously differentiable (the derivative $f'$ exists and is continuous) $T$-periodic function, then the Fourier series of $f$ converges to $f(t)$, for every $t \in \mathbb{R}$.
4.3 Piecewise Smooth

Piecewise Smooth

Is continuity necessary for a function to admit a Fourier expansion?

Definition 4.4. A function $f : [a, b] \to \mathbb{R}$ is said to be piecewise continuously differentiable if it has a continuous derivative $f'$ in $(a, b)$, except at finitely many points of the interval $[a, b]$, and at each of these finitely many points the right-hand and left-hand limits of both $f$ and $f'$ exist.

Example

- The function $f : [-1, 1] \to \mathbb{R}$ defined as $f(t) = |t|$ is continuous. It is not differentiable at $0$, but it is piecewise continuously differentiable.
- Consider the function $f : [-1, 1] \to \mathbb{R}$ defined as
$$f(t) = \begin{cases} -1, & -1 < t < 0, \\ 1, & 0 < t < 1, \\ 0, & t = 0, -1, 1. \end{cases}$$
It is not continuous, but it is piecewise continuous. It is also piecewise continuously differentiable.

Theorem 4.5. If $f$ is a $T$-periodic piecewise continuously differentiable function, then the Fourier series of $f$ converges to $f(t)$ for every $t$ at which $f$ is smooth. At a non-smooth point $t_0$, the Fourier series of $f$ will converge to the average of the right and left limits of $f$ at $t_0$.
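Theorem 4.5 can be observed on the square wave above, whose Fourier series is the known sine expansion $\frac{4}{\pi}\sum_{k \text{ odd}} \frac{\sin kt}{k}$. The sketch below (a numerical aside, not in the original notes) evaluates partial sums at the jump $t = 0$, where they equal the average $(-1 + 1)/2 = 0$, and at the smooth point $t = \pi/2$, where they approach $f(\pi/2) = 1$.

```python
import math

def square_wave_partial(t, N):
    """Partial Fourier sum of the square wave (-1 on (-pi,0), +1 on (0,pi)):
    S_N(t) = (4/pi) * sum over odd k <= N of sin(k t)/k."""
    return (4/math.pi)*sum(math.sin(k*t)/k for k in range(1, N + 1, 2))

# At the jump t = 0 every partial sum is exactly the average (-1 + 1)/2 = 0.
at_jump = square_wave_partial(0.0, 99)

# At the smooth point t = pi/2 the partial sums approach f(pi/2) = 1.
at_smooth = square_wave_partial(math.pi/2, 2001)
```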
5 Lecture-25
5.1 Orthogonality
Orthogonality

Let $V$ be the real vector space of all $2\pi$-periodic real valued continuous functions on $\mathbb{R}$. We introduce an inner product on $V$: for any two elements $f, g \in V$, we define
$$\langle f, g \rangle := \int_{-\pi}^{\pi} f(t)g(t)\, dt.$$
This inner product generalises to $V$ the properties of the scalar product on $\mathbb{R}^n$ (Exercise!).

Recall the definition:

Definition 5.1. We say two functions $f$ and $g$ are perpendicular or orthogonal if $\langle f, g \rangle = 0$. We say $f$ is of unit length if its norm $\|f\| = \sqrt{\langle f, f \rangle} = 1$.

Set, for $k \in \mathbb{N}$,
$$e_0(t) = \frac{1}{\sqrt{2\pi}}, \qquad e_k(t) = \frac{\cos kt}{\sqrt{\pi}} \qquad \text{and} \qquad f_k(t) = \frac{\sin kt}{\sqrt{\pi}}.$$

Example 5.1. $e_0$, $e_k$ and $f_k$ are all of unit length. $\langle e_0, e_k \rangle = 0$ and $\langle e_0, f_k \rangle = 0$. Also, $\langle e_m, e_n \rangle = 0$ and $\langle f_m, f_n \rangle = 0$ for $m \neq n$. Further, $\langle e_m, f_n \rangle = 0$ for all $m, n$.

In this new formulation, we can rewrite the formulae for the Fourier coefficients as
$$a_0 = \frac{1}{\sqrt{2\pi}}\langle f, e_0 \rangle, \qquad a_k = \frac{1}{\sqrt{\pi}}\langle f, e_k \rangle \qquad \text{and} \qquad b_k = \frac{1}{\sqrt{\pi}}\langle f, f_k \rangle,$$
and the Fourier series of $f$ takes the form
$$f(t) = \langle f, e_0 \rangle\frac{1}{\sqrt{2\pi}} + \frac{1}{\sqrt{\pi}}\sum_{k=1}^\infty \left(\langle f, e_k \rangle\cos kt + \langle f, f_k \rangle\sin kt\right).$$
5.2 Odd-Even Functions

Odd and Even functions

Definition 5.2. We say a function $f : \mathbb{R} \to \mathbb{R}$ is odd if $f(-t) = -f(t)$ and even if $f(-t) = f(t)$.

Example 5.2. All constant functions are even functions. For all $k \in \mathbb{N}$, $\sin kt$ are odd functions and $\cos kt$ are even functions.

Any odd function is always orthogonal to an even function (Exercise!). The Fourier series of an odd or even function will contain only sine or cosine parts, respectively. Indeed, if $f$ is odd,
$$\langle f, 1 \rangle = 0 \quad \text{and} \quad \langle f, \cos kt \rangle = 0,$$
and hence $a_0 = 0$ and $a_k = 0$. If $f$ is even,
$$\langle f, \sin kt \rangle = 0,$$
and hence $b_k = 0$.
5.3 Fourier Sine-Cosine Series

Fourier Sine-Cosine Series

Let $f : [0, T] \to \mathbb{R}$ be a piecewise smooth function such that $f(0) = f(T) = 0$. Then we claim that $f$ has a Fourier series consisting only of sine terms, or only of cosine terms. To compute the Fourier sine series of $f$, we extend $f$ to $[-T, T]$ as
$$\tilde{f}(t) = \begin{cases} f(t), & t \in [0, T] \\ -f(-t), & t \in [-T, 0]. \end{cases}$$

Fourier Sine Series

$\tilde{f}$ is a $2T$-periodic function once we extend it to all of $\mathbb{R}$ as a $2T$-periodic function. By our construction, $\tilde{f}$ is an odd function. Since $\tilde{f}$ is odd, the cosine coefficients $a_k$ and the constant term $a_0$ vanish in the Fourier series of $\tilde{f}$. The restriction of the Fourier series of $\tilde{f}$ to the interval $[0, T]$ gives the Fourier sine series of $f$.
Fourier Sine Series
$$f(t) = \sum_{k=1}^\infty b_k\sin\left(\frac{k\pi t}{T}\right), \quad \text{where} \qquad (4)$$
$$b_k = \frac{1}{T}\left\langle \tilde{f}, \sin\left(\frac{k\pi t}{T}\right)\right\rangle = \frac{1}{T}\int_{-T}^{T}\tilde{f}(t)\sin\left(\frac{k\pi t}{T}\right) dt$$
$$= \frac{1}{T}\left(\int_{-T}^{0} -f(-t)\sin\left(\frac{k\pi t}{T}\right) dt + \int_{0}^{T} f(t)\sin\left(\frac{k\pi t}{T}\right) dt\right)$$
$$= \frac{1}{T}\left(\int_{0}^{T} f(t)\sin\left(\frac{k\pi t}{T}\right) dt + \int_{0}^{T} f(t)\sin\left(\frac{k\pi t}{T}\right) dt\right) = \frac{2}{T}\int_{0}^{T} f(t)\sin\left(\frac{k\pi t}{T}\right) dt.$$
Fourier Cosine Series

Similarly, we could have extended $f$ to $\tilde{f}$ as
$$\tilde{f}(t) = \begin{cases} f(t), & t \in [0, T] \\ f(-t), & t \in [-T, 0]. \end{cases}$$
Now $\tilde{f}$ is an even function, which can be extended as a $2T$-periodic function to all of $\mathbb{R}$. The Fourier series of $\tilde{f}$ has no sine coefficients: $b_k = 0$. The restriction of the Fourier series of $\tilde{f}$ to the interval $[0, T]$ gives the Fourier cosine series of $f$:
$$f(t) = a_0 + \sum_{k=1}^\infty a_k\cos\left(\frac{k\pi t}{T}\right), \qquad (5)$$
where
$$a_k = \frac{2}{T}\int_0^T f(t)\cos\left(\frac{k\pi t}{T}\right) dt \qquad \text{and} \qquad a_0 = \frac{1}{T}\int_0^T f(t)\, dt.$$
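The sine-coefficient formula can be checked on a function whose sine series is known exactly; a sketch not in the original notes. For $f(t) = \sin(\pi t/T)$, which vanishes at $0$ and $T$, the sine series is the function itself, so $b_1 = 1$ and $b_k = 0$ for $k \geq 2$; the period $T = 3$ is an arbitrary choice.

```python
import math

T = 3.0   # arbitrary

def sine_coeff(f, k, n=4000):
    """b_k = (2/T) * integral_0^T f(t) sin(k*pi*t/T) dt, midpoint rule."""
    h = T/n
    return (2/T)*h*sum(f((i + 0.5)*h)*math.sin(k*math.pi*(i + 0.5)*h/T)
                       for i in range(n))

f = lambda t: math.sin(math.pi*t/T)   # its sine series is itself
b1 = sine_coeff(f, 1)                 # expect 1
b2 = sine_coeff(f, 2)                 # expect 0
```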
5.4 Fourier Integral

Non-periodic functions

We know that the Fourier series of a $2\pi$-periodic function $f$ is given as
$$f(t) = a_0 + \sum_{k=1}^\infty (a_k\cos kt + b_k\sin kt),$$
where $a_0$, $a_k$ and $b_k$ can be computed from $f$. Can we generalise the notion of Fourier series to a non-periodic function $f$? Yes! How? Note that the periodicity of $f$ is captured by the integer $k$ appearing in the arguments of $\sin$ and $\cos$. To generalise to non-periodic functions, we replace $k$ with a real number $\omega$.

Fourier Integral

Note that when we replace $k$ with $\omega$, the sequences $a_k$, $b_k$ become functions of $\omega$, and the series is replaced by an integral over $\mathbb{R}$.

Definition 5.3. If $f : \mathbb{R} \to \mathbb{R}$ is a piecewise continuous function which vanishes outside a finite interval, then its Fourier integral is defined as
$$f(t) = \int_0^\infty (a(\omega)\cos\omega t + b(\omega)\sin\omega t)\, d\omega,$$
where
$$a(\omega) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(t)\cos\omega t\, dt, \qquad b(\omega) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(t)\sin\omega t\, dt.$$
6 Lecture-26
6.1 PDE-Introduction
Partial Derivatives

Let $u : \mathbb{R}^2 \to \mathbb{R}$ be a function of two variables. Its partial derivative with respect to $x$ (if the limit exists) is given as
$$u_x = \frac{\partial u}{\partial x}(x, y) := \lim_{h \to 0} \frac{u(x + h, y) - u(x, y)}{h}.$$
Similarly, one can consider the partial derivative w.r.t. the $y$-variable, and higher order derivatives as well.
Multi-index Notation

Let $\alpha = (\alpha_1, \ldots, \alpha_n)$ be an $n$-tuple of non-negative integers and let $|\alpha| = \alpha_1 + \ldots + \alpha_n$. Consider the derivative of order $|\alpha|$:
$$\partial_{x_1}^{\alpha_1}\cdots\partial_{x_n}^{\alpha_n} = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}} = D^\alpha.$$

- If $\alpha = (1, 2)$ then $|\alpha| = 3$ and $D^\alpha = \frac{\partial^3}{\partial x\,\partial y^2}$.
- If $\alpha = (2, 1)$ then $|\alpha| = 3$ and $D^\alpha = \frac{\partial^3}{\partial x^2\,\partial y}$.
- If $\alpha = (1, 1)$ then $|\alpha| = 2$ and $D^\alpha = \frac{\partial^2}{\partial x\,\partial y} = \frac{\partial^2}{\partial y\,\partial x}$.

6.2 Gradient and Hessian

Gradient and Hessian Matrix

Let an $n$-variable function $u$ admit partial derivatives. Then
$$Du = \nabla u := \left(\frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_n}\right) = (u_{x_1}, \ldots, u_{x_n})$$
is called the gradient vector of $u$. If $u$ admits second order partial derivatives, then we arrange them in an $n \times n$ matrix (called the Hessian matrix):
$$D^2 u = \begin{pmatrix} \frac{\partial^2 u}{\partial x_1^2} & \cdots & \frac{\partial^2 u}{\partial x_1\,\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 u}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 u}{\partial x_n^2} \end{pmatrix}_{n \times n} = \begin{pmatrix} u_{x_1 x_1} & \cdots & u_{x_n x_1} \\ \vdots & \ddots & \vdots \\ u_{x_1 x_n} & \cdots & u_{x_n x_n} \end{pmatrix}_{n \times n}.$$
Example

Example 6.1. Let $u : \mathbb{R}^2 \to \mathbb{R}$ be $u(x, y) = ax^2 + by^2$. Then
$$\nabla u = (u_x, u_y) = (2ax, 2by),$$
$$D^2 u = \begin{pmatrix} u_{xx} & u_{yx} \\ u_{xy} & u_{yy} \end{pmatrix} = \begin{pmatrix} 2a & 0 \\ 0 & 2b \end{pmatrix}.$$
Note that, for convenience, we can view $\nabla u : \mathbb{R}^2 \to \mathbb{R}^2$ and $D^2 u : \mathbb{R}^2 \to \mathbb{R}^4 = \mathbb{R}^{2 \times 2}$, by assigning some ordering to the partial derivatives.
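Example 6.1 can be verified with finite differences; a numerical aside not in the original notes. The coefficients $(a, b) = (1.5, -0.5)$ and the evaluation point $(0.3, -0.7)$ are arbitrary choices.

```python
a, b = 1.5, -0.5
u = lambda x, y: a*x*x + b*y*y

h = 1e-5
x0, y0 = 0.3, -0.7

# gradient by central differences: should equal (2*a*x0, 2*b*y0) = (0.9, 0.7)
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2*h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2*h)

# Hessian entries: the constant matrix diag(2a, 2b), with zero mixed entry
uxx = (u(x0 + h, y0) - 2*u(x0, y0) + u(x0 - h, y0)) / h**2
uyy = (u(x0, y0 + h) - 2*u(x0, y0) + u(x0, y0 - h)) / h**2
uxy = (u(x0 + h, y0 + h) - u(x0 + h, y0 - h)
       - u(x0 - h, y0 + h) + u(x0 - h, y0 - h)) / (4*h*h)
```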
6.3 PDE

Definition

Definition 6.1. Let $\Omega$ be an open subset of $\mathbb{R}^n$. A $k$-th order PDE $F$ is a given map
$$F : \mathbb{R}^{n^k} \times \mathbb{R}^{n^{k-1}} \times \cdots \times \mathbb{R}^n \times \mathbb{R} \times \Omega \to \mathbb{R}$$
having the form
$$F\left(D^k u(x), D^{k-1}u(x), \ldots, Du(x), u(x), x\right) = 0, \qquad (6)$$
for each $x \in \Omega$, where $u : \Omega \to \mathbb{R}$ is the unknown.
Example

Example 6.2. A first order PDE for a two variable function $u(x, y)$ will be of the form
$$F(u_x, u_y, u, x, y) = 0.$$
If $u(x, y, z)$ is a three variable function, then
$$F(u_x, u_y, u_z, u, x, y, z) = 0.$$
6.4 Types of PDE

Linear PDE

Definition 6.2. We say $F$ is linear if (6) has the form
$$\sum_{|\alpha| \leq k} a_\alpha(x) D^\alpha u(x) = f(x) \quad \text{for } x \in \Omega,$$
for some given functions $f$ and $a_\alpha$ ($|\alpha| \leq k$). If $f \equiv 0$, we say $F$ is homogeneous; otherwise $F$ is inhomogeneous or non-homogeneous.
Example

Example 6.3.
1. $a_1(x)u_{xx} + a_2(x)u_{xy} + a_3(x)u_{yy} + a_4(x)u_x + a_5(x)u_y = a_6(x)u$.
2. $xu_y - yu_x = u$.
Semilinear PDE

Definition 6.3. $F$ is said to be semilinear if it is linear in the highest order terms, i.e., if $F$ has the form
$$\sum_{|\alpha| = k} a_\alpha(x) D^\alpha u(x) + a_0(D^{k-1}u, \ldots, Du, u, x) = 0.$$

Example 6.4. $u_x + u_y = u^2$.
Quasi and Non-linear

Definition 6.4. We say $F$ is quasilinear if it has the form
$$\sum_{|\alpha| = k} a_\alpha(D^{k-1}u(x), \ldots, Du(x), u(x), x)\, D^\alpha u + a_0(D^{k-1}u, \ldots, Du, u, x) = 0.$$
Finally, we say $F$ is fully nonlinear if it is of neither of the earlier forms.

Example 6.5. $u_x + uu_y = u^2$ is quasilinear and $u_x u_y = u$ is fully nonlinear.
6.5 PDE-Solution

Notion of Solution

Definition 6.5. We say $u : \Omega \to \mathbb{R}$ is a solution to the $k$-th order PDE (6) if $u$ is $k$-times differentiable with the $k$-th derivative continuous, and $u$ satisfies the equation (6).

Henceforth, whenever we refer to a function as smooth, we mean that we are given as much differentiability and continuity as we need.
6.6 Well Posedness

BVP, IVP and Well-posedness

A problem involving a PDE could be a boundary value problem (we look for a solution with prescribed boundary values) or an initial value problem (a solution whose value at the initial time is known). A BVP or IVP is said to be well-posed, in the sense of Hadamard, if it:

(a) has a solution (existence),
(b) has a unique solution (uniqueness),
(c) has a solution that depends continuously on the given data (stability).
7 Lecture - 27
7.1 First order PDE
First Order PDE: Origin

Let $A \subset \mathbb{R}^2$ be an open subset. Consider
$$u : \mathbb{R}^2 \times A \to \mathbb{R},$$
a two-parameter family of smooth surfaces in $\mathbb{R}^3$, $u(x, y, a, b)$, where $(a, b) \in A$. For instance, $u(x, y, a, b) = (x - a)^2 + (y - b)^2$ is a family of circles with centre at $(a, b)$. Differentiating w.r.t. $x$ and $y$, we get $u_x(x, y, a, b)$ and $u_y(x, y, a, b)$, respectively. Eliminating $a$ and $b$ from the two equations, we get a first order PDE
$$F(u_x, u_y, u, x, y) = 0$$
whose solutions are the given surfaces $u$.

Example

Consider the family of circles
$$u(x, y, a, b) = (x - a)^2 + (y - b)^2.$$
Then $u_x = 2(x - a)$ and $u_y = 2(y - b)$, and eliminating $a$ and $b$ we get the first order PDE
$$u_x^2 + u_y^2 - 4u = 0.$$
Do the assignments for more examples.
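The elimination can be double-checked by direct substitution; a small sketch, not in the original notes. The centre $(a, b) = (1, -2)$ and the evaluation point $(2, 3)$ are arbitrary choices; any member of the family at any point gives a zero residual.

```python
# One member of the family u(x, y, a, b) = (x - a)^2 + (y - b)^2
a, b = 1.0, -2.0                       # hypothetical centre; any choice works
u  = lambda x, y: (x - a)**2 + (y - b)**2
ux = lambda x, y: 2*(x - a)            # partial derivative in x
uy = lambda x, y: 2*(y - b)            # partial derivative in y

# every member of the family satisfies u_x^2 + u_y^2 - 4u = 0
residual = ux(2.0, 3.0)**2 + uy(2.0, 3.0)**2 - 4*u(2.0, 3.0)
```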
7.2 Solving First Order PDE
Method of Characteristics
We restrict ourselves to functions of two variables to fix ideas (and to visualize geometrically); however, the ideas carry over to functions of several variables.
The method of characteristics is a technique that reduces a given first order PDE to a system of ODEs; one then solves the ODEs by known methods to obtain the solution of the first order PDE.
Linear First Order PDE
Consider the first order linear equation in two variables:
a(x, y)u_x + b(x, y)u_y = c(x, y). (7)
We need to find u(x, y) that solves the above equation. This is equivalent to finding the graph (surface) S = {(x, y, u(x, y))} of the function u in R³.
Integral Surface
If u is a solution of (7), then at each (x, y) in the domain of u,
a(x, y)u_x + b(x, y)u_y = c(x, y)
⟹ a(x, y)u_x + b(x, y)u_y − c(x, y) = 0
⟹ (a(x, y), b(x, y), c(x, y)) · (u_x, u_y, −1) = 0
⟹ (a(x, y), b(x, y), c(x, y)) · (∇u(x, y), −1) = 0.
But (∇u(x, y), −1) is normal to S at the point (x, y, u(x, y)). Hence, the coefficient vector (a(x, y), b(x, y), c(x, y)) is perpendicular to the normal; that is, it lies on the tangent plane to S at (x, y, u(x, y)).
Characteristic Equations
Solving the given first order linear PDE amounts to finding the surface S for which (a(x, y), b(x, y), c(x, y)) lies on the tangent plane to S at each point (x, y, z) of S.
The surface is the union of curves satisfying this property of S. Thus, we look for curves Γ ⊂ S such that at each point of Γ, the vector V(x, y) = (a(x, y), b(x, y), c(x, y)) is tangent to the curve.
Parametrizing the curve by the variable s, we seek Γ = {(x(s), y(s), z(s))} ⊂ R³ such that
dx/ds = a(x(s), y(s)), dy/ds = b(x(s), y(s)), and dz/ds = c(x(s), y(s)).
Example: Transport Equation
The three ODEs obtained above are called the characteristic equations. The union of these characteristic (integral) curves gives us the integral surface.
Example: Consider the linear transport equation, for a given constant a,
u_t + a u_x = 0, x ∈ R and t ∈ (0, ∞).
Thus, the vector field is V(x, t) = (a, 1, 0), and the characteristic equations are
dx/ds = a, dt/ds = 1, and dz/ds = 0.
Solving the Transport Equation
Solving the three ODEs, we get
x(s) = as + c₁, t(s) = s + c₂, and z(s) = c₃.
Eliminating the parameter s, we get the curves (lines) x − at = constant and z = constant: z = u(x, t) is constant along each line x − at = constant.
That is, z is a function of x − at alone; it changes value only when we move from one line x − at = constant to another. Thus, for any (smooth enough) function g,
u(x, t) = g(x − at)
is a general solution of the transport equation, because
u_t + a u_x = g′(x − at)(−a) + a g′(x − at) = 0.
Also, u(x, 0) = g(x).
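The verification can also be done numerically; a minimal sketch with an arbitrary smooth profile g (a Gaussian, chosen purely for illustration):

```python
import numpy as np

# Check by central finite differences that u(x, t) = g(x - a t)
# solves the transport equation u_t + a u_x = 0.
a = 2.0
g = lambda s: np.exp(-s**2)        # any smooth profile works
u = lambda x, t: g(x - a * t)      # the travelling-wave solution

x0, t0, h = 0.7, 0.3, 1e-5
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)   # approximate u_t
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)   # approximate u_x
print(abs(u_t + a * u_x) < 1e-8)   # -> True
```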
8 Lecture - 28
8.1 Transport Equation
Inhomogeneous Transport Equation
Given a constant a ∈ R and a function f(x, t), we wish to solve the inhomogeneous linear transport equation
u_t(x, t) + a u_x(x, t) = f(x, t), x ∈ R and t ∈ (0, ∞).
As before, the first two ODEs give the projection of the characteristic curve onto the xt-plane, x − at = constant, and the third ODE becomes
dz(s)/ds = f(x(s), t(s)).
Let's say we need to find the value of u at the point (x₀, t₀). The line passing through (x₀, t₀) with slope 1/a is given by the equation x − at = ξ, where ξ = x₀ − at₀.
If z has to be on the integral curve, then z(s) = u(ξ + as, s). Hence set z(s) := u(ξ + as, s), and let Γ(s) = ξ + as be the line joining (ξ, 0) and (x₀, t₀) as s varies from 0 to t₀. The third ODE becomes
dz(s)/ds = f(Γ(s), s) = f(ξ + as, s).
Integrating both sides from 0 to t₀, we get
∫₀^{t₀} f(x₀ − a(t₀ − s), s) ds = z(t₀) − z(0) = u(x₀, t₀) − u(x₀ − at₀, 0).
Thus,
u(x, t) = u(x − at, 0) + ∫₀^t f(x − a(t − s), s) ds.
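This representation formula can be checked numerically; a minimal sketch, with an illustrative source f(x, t) = x and a Gaussian initial profile (both are my choices, not from the notes):

```python
import numpy as np

# Check that u(x,t) = g(x - a t) + int_0^t f(x - a(t - s), s) ds
# solves u_t + a u_x = f, using midpoint quadrature for the integral.
a = 2.0
g = lambda x: np.exp(-x**2)
f = lambda x, t: x                 # simple source term for the test

def u(x, t, n=2000):
    ds = t / n
    s = (np.arange(n) + 0.5) * ds  # midpoint quadrature nodes on [0, t]
    return g(x - a * t) + np.sum(f(x - a * (t - s), s)) * ds

x0, t0, h = 0.4, 0.5, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
print(abs(u_t + a * u_x - f(x0, t0)) < 1e-3)   # residual vanishes up to discretization error
```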
8.2 Cauchy Problem
Cauchy Problem
Recall that the general solution of the transport equation depends on the value of u at time t = 0, i.e., the value of u on the curve {(x, 0)} in the xt-plane.
Thus, the problem of finding a function u satisfying the first order PDE
a(x, y)u_x + b(x, y)u_y = c(x, y)
such that u is known on a curve Γ in the xy-plane is called the Cauchy problem.
The question that arises at this moment is: does the knowledge of u on an arbitrary curve Γ lead to a solution of the first order PDE? The answer is: no.
Non-Characteristic Boundary Data
Suppose, in the transport problem, we choose the curve Γ = {(x, t) | x − at = 0}; then we have no information to determine u off the line x − at = 0.
The characteristic curves should emanate from Γ in order to determine u. Thus, only those curves Γ are allowed which are not characteristic curves, i.e., (a, b) is nowhere tangent to the curve.
Definition 8.1. We say Γ = {(γ₁(r), γ₂(r))} ⊂ R² is noncharacteristic for the Cauchy problem
a(x, y)u_x + b(x, y)u_y = c(x, y), (x, y) ∈ R²,
u = φ on Γ,
if Γ is nowhere tangent to (a(γ₁, γ₂), b(γ₁, γ₂)), i.e.,
(a(γ₁, γ₂), b(γ₁, γ₂)) · (−γ₂′, γ₁′) ≠ 0.
If Γ fails to be noncharacteristic, then the Cauchy problem is not well-posed.
Transport Equation: IVP
For any given (smooth enough) function φ : R → R, consider
u_t + a u_x = 0, x ∈ R and t ∈ (0, ∞),
u(x, 0) = φ(x), x ∈ R.
We know that the general solution of the transport equation is u(x, t) = g(x − at) for some g. In the IVP, in addition, we want the initial condition u(x, 0) to be satisfied. Thus,
u(x, 0) = g(x) = φ(x),
and by choosing g = φ, we get the precise solution of the IVP.
Let Γ be the (boundary) curve where the initial value is given, i.e., {(x, 0)}, the x-axis of the xt-plane. We have been given the value of u on Γ. Thus, (Γ, φ) = {(x, 0, φ(x))} is the known curve on the solution surface of u.
We parametrize the curve with the variable r, i.e., Γ = {(γ₁(r), γ₂(r))} = {(r, 0)}. Γ is noncharacteristic, because (a, 1) · (0, 1) = 1 ≠ 0.
Thus, in this setup the ODEs are:
dx(r, s)/ds = a, dt(r, s)/ds = 1, and dz(r, s)/ds = 0,
with initial conditions
x(r, 0) = r, t(r, 0) = 0, and z(r, 0) = φ(r).
Solving the ODEs, we get
x(r, s) = as + c₁(r), t(r, s) = s + c₂(r), and z(r, s) = c₃(r),
with initial conditions c₁(r) = r, c₂(r) = 0, and c₃(r) = φ(r). Therefore,
x(r, s) = as + r, t(r, s) = s, and z(r, s) = φ(r).
We solve for r, s in terms of x, t and set u(x, t) = z(r(x, t), s(x, t)):
r(x, t) = x − at and s(x, t) = t.
Therefore, u(x, t) = z(r, s) = φ(r) = φ(x − at).
9 Lecture - 29
9.1 Second Order PDE
Second Order PDE
A general second order PDE for an n-variable function u is of the form
F(D²u, Du, u, x) = 0, x ∈ Rⁿ.
Before we attempt to solve second order PDEs, we shall classify the second order linear PDEs. We restrict ourselves to functions of two variables to fix ideas; however, the ideas carry over to functions of several variables.
9.2 Classification
Classification of II Order PDE
Consider the second order linear PDE in two variables (x, y) ∈ R²,
A(x, y)u_xx + 2B(x, y)u_xy + C(x, y)u_yy = D(x, y, u, u_x, u_y), (8)
where u, u_x, u_y appear linearly in the function D, and at least one of the coefficients A, B, C is identically non-zero (to make the PDE second order).
The classification of PDEs is founded on the observation that the representation of a PDE depends on the choice of the coordinate system.
Change of Variable
Our first aim is to rewrite the given second order PDE in a different coordinate system. Let w(x, y), z(x, y) be a new pair of independent variables such that w, z are both continuous and twice differentiable w.r.t. (x, y). We also assume that the Jacobian J,
J = det( w_x  w_y ; z_x  z_y ) = w_x z_y − w_y z_x ≠ 0,
because a nonvanishing Jacobian ensures the existence of a one-to-one transformation between (x, y) and (w, z).
We get
u_x = u_w w_x + u_z z_x,
u_y = u_w w_y + u_z z_y,
u_xx = u_ww w_x² + 2u_wz w_x z_x + u_zz z_x² + u_w w_xx + u_z z_xx,
u_yy = u_ww w_y² + 2u_wz w_y z_y + u_zz z_y² + u_w w_yy + u_z z_yy,
u_xy = u_ww w_x w_y + u_wz (w_x z_y + w_y z_x) + u_zz z_x z_y + u_w w_xy + u_z z_xy.
Substituting the above expressions in (8), we get
a(w, z)u_ww + 2b(w, z)u_wz + c(w, z)u_zz = d(w, z, u, u_w, u_z),
where D transforms into d and
a(w, z) = Aw_x² + 2Bw_x w_y + Cw_y²,
b(w, z) = Aw_x z_x + B(w_x z_y + w_y z_x) + Cw_y z_y,
c(w, z) = Az_x² + 2Bz_x z_y + Cz_y².
The coefficients in the new coordinate system satisfy
b² − ac = (B² − AC)J².
Since J ≠ 0, we observe that the sign of the discriminant, b² − ac and B² − AC, of the PDE is invariant under the change of variables. We therefore classify a second order linear PDE based on the sign of its discriminant d = B² − AC. We say a PDE is of
hyperbolic type if d > 0,
parabolic type if d = 0, and
elliptic type if d < 0.
These names give no indication of the geometry of the solutions of the PDE; they merely record the correspondence with the second degree algebraic equation
Ax² + 2Bxy + Cy² + Dx + Ey + F = 0.
With d = B² − AC as the discriminant of this algebraic equation, the curve it represents is a
hyperbola if d > 0,
parabola if d = 0, and
ellipse if d < 0.
The classification of a PDE depends on its coefficients, which may vary from region to region. For constant coefficients, the type of the PDE remains unchanged throughout the region. For variable coefficients, however, the PDE may change its classification from region to region.
Example 9.1 (Tricomi equation). Consider
u_xx + x u_yy = 0.
Here A = 1, B = 0 and C = x, so the discriminant is d = −x. The equation is hyperbolic when x < 0 and elliptic when x > 0. On the y-axis (x = 0), the equation degenerates to u_xx = 0, an ODE; we say the equation is degenerately parabolic when x = 0, i.e., on the y-axis.
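The pointwise classification is easy to mechanise; a minimal sketch (the function name is my own):

```python
# Classify A u_xx + 2B u_xy + C u_yy = D by the sign of the
# discriminant d = B^2 - AC at a point.
def classify(A, B, C):
    d = B * B - A * C
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

# Tricomi: u_xx + x u_yy = 0, i.e. A = 1, B = 0, C = x, so d = -x.
print(classify(1, 0, -1.0))  # x = -1 -> hyperbolic
print(classify(1, 0,  0.0))  # x =  0 -> parabolic (degenerate)
print(classify(1, 0,  1.0))  # x =  1 -> elliptic
```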
9.3 Standard Forms
Standard or Canonical Form
The advantage of the above classification is that it helps us reduce a given PDE to simple forms. How? Given a PDE, we compute the sign of the discriminant B² − AC and, depending on its classification, choose a coordinate transformation (w, z) such that
a = c = 0 for the hyperbolic type,
a = b = 0 or c = b = 0 for the parabolic type, and
a = c and b = 0 for the elliptic type.
If the given second order PDE (8) is such that A = C = 0, then (8) is of hyperbolic type, and a division by 2B (since B ≠ 0) gives
u_xy = D̃(x, y, u, u_x, u_y),
where D̃ = D/2B. The above form is the first standard form of the second order hyperbolic equation.
If we introduce the linear change of variables X = x + y and Y = x − y in the first standard form, we get the second standard form of the hyperbolic PDE
u_XX − u_YY = D̃(X, Y, u, u_X, u_Y).
If the given second order PDE (8) is such that A = B = 0, then (8) is of parabolic type, and a division by C (since C ≠ 0) gives
u_yy = D̃(x, y, u, u_x, u_y),
where D̃ = D/C. This is the standard form of the second order parabolic equation.
If the given second order PDE (8) is such that A = C and B = 0, then (8) is of elliptic type, and a division by A (since A ≠ 0) gives
u_xx + u_yy = D̃(x, y, u, u_x, u_y),
where D̃ = D/A. This is the standard form of the second order elliptic equation.
Note that the standard forms (except the hyperbolic one of the first kind) of a second order linear PDE are expressions with no mixed derivatives. These classification ideas generalise to n-variable quasilinear second order PDEs, since D played no crucial role here.
How to Reduce to Standard Form?
Consider a second order PDE not in standard form. We look for a transformation w = w(x, y), z = z(x, y), with non-vanishing Jacobian, such that the reduced form is the standard form. Recall that
a(w, z) = Aw_x² + 2Bw_x w_y + Cw_y²,
b(w, z) = Aw_x z_x + B(w_x z_y + w_y z_x) + Cw_y z_y,
c(w, z) = Az_x² + 2Bz_x z_y + Cz_y².
Hyperbolic
If B² − AC > 0, then to make a = c = 0 we need w_x/w_y and z_x/z_y to be the roots μ of the quadratic equation Aμ² + 2Bμ + C = 0, i.e.,
μ = (−B ± √(B² − AC))/A.
Thus,
w_x/w_y = (−B + √(B² − AC))/A and z_x/z_y = (−B − √(B² − AC))/A.
Along a curve on which w = constant, we have
0 = dw/dx = w_x + w_y (dy/dx),
and hence dy/dx = −w_x/w_y. Similarly, along z = constant, dy/dx = −z_x/z_y. The characteristic curves are therefore given by
dy/dx = −μ = (B ∓ √(B² − AC))/A.
In the parabolic case, B² − AC = 0 and we have μ = −B/A. Thus, along the curve w = constant we solve
dy/dx = −μ
and choose z such that the Jacobian J ≠ 0.
In the elliptic case, B² − AC < 0, so μ has a real and an imaginary part. We solve
dy/dx = −μ
and choose the real part of the solution to be w and the imaginary part to be z.
10 Lecture - 30
10.1 Three Basic Linear PDE
Three Basic Second Order Linear PDE
For x ∈ Rⁿ:
The Laplace equation, Δu(x) = 0, where Δ := Σᵢ₌₁ⁿ ∂²/∂xᵢ² is the trace of the Hessian matrix; and the Poisson equation, Δu(x) = f(x).
The heat equation for a homogeneous material, u_t(x, t) − c²Δu(x, t) = 0, for t ≥ 0, where c is a non-zero constant.
The wave equation with normalised constants,
u_tt(x, t) − c²Δu(x, t) = 0,
for t ≥ 0.
Superposition Principle
The three basic second order PDEs are linear and satisfy the superposition principle: if u₁, u₂ are solutions of one of these equations, then λ₁u₁ + λ₂u₂ is also a solution, for all constants λ₁, λ₂ ∈ R.
10.2 Laplace Equation
Δu = 0
The one dimensional Laplace equation is an ODE, solvable with solutions u(x) = ax + b for some constants a and b. In higher dimensions, solving the Laplace equation is not so simple. For instance, the two dimensional Laplace equation
u_xx + u_yy = 0
has as trivial solutions all degree-one polynomials in two variables. In addition, xy, x² − y², eˣ sin y and eˣ cos y are all solutions of the Laplace equation.
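These claims are easy to check numerically with a finite-difference Laplacian; a minimal sketch:

```python
import numpy as np

# Verify that xy, x^2 - y^2 and e^x sin y are harmonic, using the
# second-order five-point approximation of the Laplacian at a point.
def laplacian(f, x, y, h=1e-3):
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

for f in (lambda x, y: x * y,
          lambda x, y: x**2 - y**2,
          lambda x, y: np.exp(x) * np.sin(y)):
    print(abs(laplacian(f, 0.3, 0.7)) < 1e-6)   # -> True for each
```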
Harmonic Functions
Definition 10.1. An n-variable function u whose second order derivatives exist and are continuous is said to be harmonic if Δu(x) = 0 on its domain.
A detailed study of harmonic functions is beyond the scope of this course; we shall just state one important property of harmonic functions.
Maximum Principle
Theorem 10.2 (Maximum Principle). Let Ω be a bounded open subset of Rⁿ. Let u : Ω̄ → R be a continuous function which is twice continuously differentiable in Ω, such that u is harmonic in Ω. Then
max_{Ω̄} u = max_{∂Ω} u.
11 Lecture - 31
11.1 Dirichlet Problem
Dirichlet Problem (DP) - BVP
Let Ω ⊂ Rⁿ be a bounded open subset with boundary ∂Ω, and let g : ∂Ω → R be a continuous function. The Dirichlet problem is to find a harmonic function u : Ω̄ → R such that
Δu(x) = 0, x ∈ Ω,
u(x) = g(x), x ∈ ∂Ω. (9)
11.2 DP On Rectangle
Dirichlet Problem on a 2D Rectangle
Let
Ω = {(x, y) ∈ R² | 0 ≤ x ≤ a and 0 ≤ y ≤ b}
be a rectangle of sides a, b. Let g : ∂Ω → R vanish on three sides of the rectangle, i.e.,
g(0, y) = g(x, 0) = g(a, y) = 0,
and g(x, b) = h(x), where h is a continuous function with h(0) = h(a) = 0. We want to solve the DP (9) on this rectangle with the given boundary value g.
Separation of Variables
We begin by looking for solutions u(x, y) whose variables are separated, i.e., u(x, y) = v(x)w(y). Substituting this form of u in the Laplace equation, we get
v″(x)w(y) + v(x)w″(y) = 0,
hence
v″(x)/v(x) = −w″(y)/w(y).
Since the LHS is a function of x and the RHS a function of y, they must equal a constant, say λ. Thus,
v″(x)/v(x) = −w″(y)/w(y) = λ.
Solving for v
Using the boundary condition on u, u(0, y) = g(0, y) = g(a, y) = u(a, y) = 0, we get v(0)w(y) = v(a)w(y) = 0. If w ≡ 0, then u ≡ 0, which is not a solution of (9). Hence w ≢ 0 and v(0) = v(a) = 0. Thus, we need to solve
v″(x) = λv(x), x ∈ (0, a),
v(0) = v(a) = 0,
the eigenvalue problem for the second order differential operator.
Solving the Eigenvalue Problem
Note that λ can be zero, positive or negative. If λ = 0, then v″ = 0 and the general solution is v(x) = αx + β, for some constants α and β. Since v(0) = 0, we get β = 0, and v(a) = 0 with a ≠ 0 implies α = 0. Thus, v ≡ 0 and hence u ≡ 0; but this cannot be a solution of (9).
λ > 0, Positive
If λ > 0, then v(x) = αe^{√λ x} + βe^{−√λ x}, or equivalently
v(x) = c₁ cosh(√λ x) + c₂ sinh(√λ x),
where α = (c₁ + c₂)/2 and β = (c₁ − c₂)/2. Using the boundary condition v(0) = 0, we get c₁ = 0, and hence
v(x) = c₂ sinh(√λ x).
Now using v(a) = 0, we have c₂ sinh(√λ a) = 0. Thus c₂ = 0 and v ≡ 0, which we have seen cannot be a solution.
λ < 0, Negative
If λ < 0, then set ω = √(−λ). We need to solve
v″(x) + ω²v(x) = 0, x ∈ (0, a),
v(0) = v(a) = 0.
The general solution is
v(x) = α cos(ωx) + β sin(ωx).
Using the boundary condition v(0) = 0, we get α = 0, and hence v(x) = β sin(ωx). Now using v(a) = 0, we have β sin(ωa) = 0. Thus, either β = 0 or sin(ωa) = 0; but β = 0 does not yield a solution. Hence ωa = kπ, i.e., ω = kπ/a, for all non-zero k ∈ Z. Hence, for each k ∈ N, there is a solution (v_k, λ_k) of the eigenvalue problem, with
v_k(x) = β_k sin(kπx/a),
for some constant β_k, and λ_k = −(kπ/a)².
We have solved for v; it now remains to solve for w for these λ_k. For each k ∈ N, we solve for w_k in the ODE
w_k″(y) = (kπ/a)² w_k(y), y ∈ (0, b),
w(0) = 0.
Thus, w_k(y) = c_k sinh(kπy/a).
General Solution to DP
For each k ∈ N,
u_k(x, y) = β_k sin(kπx/a) sinh(kπy/a)
is a solution of (9). The general solution is of the form (principle of superposition) (convergence?)
u(x, y) = Σ_{k=1}^∞ β_k sin(kπx/a) sinh(kπy/a).
Final Solution to DP on Rectangle
We now use the condition u(x, b) = h(x) to find the solution of the Dirichlet problem (9):
h(x) = u(x, b) = Σ_{k=1}^∞ β_k sinh(kπb/a) sin(kπx/a).
Since h(0) = h(a) = 0, we know that h admits a Fourier sine series. Thus β_k sinh(kπb/a) is the k-th Fourier sine coefficient of h, i.e.,
β_k = [sinh(kπb/a)]⁻¹ (2/a) ∫₀^a h(x) sin(kπx/a) dx.
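The recipe (sine coefficients of h divided by sinh(kπb/a)) can be sanity-checked numerically; a minimal sketch with the illustrative boundary data h(x) = sin(πx/a), for which only the k = 1 term survives:

```python
import numpy as np

# Series solution of the Dirichlet problem on [0,a] x [0,b] with u = h on the
# top edge and u = 0 on the other three sides.
a, b, N = 1.0, 2.0, 20
h = lambda x: np.sin(np.pi * x / a)
xq = np.linspace(0.0, a, 4001)

def trap(f):                       # trapezoid rule on the grid xq
    return np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(xq))

def beta(k):
    # beta_k = [sinh(k pi b / a)]^{-1} (2/a) int_0^a h(x) sin(k pi x / a) dx
    return (2 / a) * trap(h(xq) * np.sin(k * np.pi * xq / a)) / np.sinh(k * np.pi * b / a)

def u(x, y):
    return sum(beta(k) * np.sin(k * np.pi * x / a) * np.sinh(k * np.pi * y / a)
               for k in range(1, N + 1))

print(abs(u(0.37, b) - h(0.37)) < 1e-6)   # boundary data recovered on the top edge
print(abs(u(0.37, 0.0)) < 1e-12)          # u vanishes on the bottom edge
```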
12 Lecture - 32
12.1 Laplace Equation
Laplacian in Polar Coordinates
Now that we have solved the Dirichlet problem on a 2D rectangular domain, we intend to solve the Dirichlet problem on a 2D disk. The Laplace operator in polar coordinates (two dimensions) is
Δ := (1/r) ∂/∂r (r ∂/∂r) + (1/r²) ∂²/∂θ²,
where r is the magnitude component and θ is the direction component.
12.2 Laplacian on a 2D-Disk
Dirichlet Problem on a 2D Disk
Consider the unit disk in R²,
Ω = {(x, y) ∈ R² | x² + y² < 1},
whose boundary ∂Ω is the circle of radius one. The DP is to find u(r, θ) : Ω → R, well-behaved near r = 0, such that
(1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ² = 0 in Ω,
u(r, θ + 2π) = u(r, θ) in Ω,
u(1, θ) = g(θ) on ∂Ω, (10)
where g is a 2π-periodic function.
Separation of Variables
We look for solutions u(r, θ) whose variables can be separated, i.e., u(r, θ) = v(r)w(θ), with both v and w non-zero. Substituting into the polar form of the Laplacian, we get
(w/r) d/dr (r dv/dr) + (v/r²) d²w/dθ² = 0,
and hence
(r/v) d/dr (r dv/dr) = −(1/w) d²w/dθ².
Since the LHS is a function of r and the RHS a function of θ, they must equal a constant, say −λ, so that w″(θ) = λw(θ) and r d/dr (r dv/dr) = −λv.
Solving for w
We need to solve the eigenvalue problem
w″(θ) − λw(θ) = 0, θ ∈ R,
w(θ + 2π) = w(θ), θ ∈ R.
Note that λ can be zero, positive or negative. If λ = 0, then w″ = 0 and the general solution is w(θ) = αθ + β, for some constants α and β. Using the periodicity of w,
αθ + β = w(θ) = w(θ + 2π) = αθ + 2πα + β
implies that α = 0. Thus, the pair λ = 0 and w(θ) = β is a solution.
λ > 0, Positive
If λ > 0, then
w(θ) = αe^{√λ θ} + βe^{−√λ θ}.
If either α or β is non-zero, then w(θ) → ±∞ as θ → ±∞, which contradicts the periodicity of w. Thus α = β = 0 and w ≡ 0, which cannot be a solution.
λ < 0, Negative
If λ < 0, then set ω = √(−λ) and the equation becomes
w″(θ) + ω²w(θ) = 0, θ ∈ R,
w(θ + 2π) = w(θ).
Its general solution is
w(θ) = α cos(ωθ) + β sin(ωθ).
Using the periodicity of w, we get ω = k, where k is an integer. For each k ∈ N, we have the solution (w_k, λ_k), where
λ_k = −k² and w_k(θ) = α_k cos(kθ) + β_k sin(kθ).
Solving for v
For these λ_k, we solve for v_k, for each k = 0, 1, 2, . . .,
r d/dr (r dv_k/dr) = k²v_k.
For k = 0, we get v₀(r) = α log r + β. But log r blows up as r → 0, and we want u well-behaved near the origin; thus we must have α = 0, so v₀ is constant.
Cauchy-Euler Equation
For k ∈ N, we need to solve for v_k in
r d/dr (r dv_k/dr) = k²v_k.
Use the change of variable r = e^s. Then e^s (ds/dr) = 1 and
d/dr = (d/ds)(ds/dr) = e^{−s} d/ds, hence r d/dr = d/ds.
The equation becomes d²v_k/ds² = k²v_k, so
v_k(e^s) = αe^{ks} + βe^{−ks}, i.e., v_k(r) = αr^k + βr^{−k}.
Since r^{−k} blows up as r → 0, we must have β = 0. Thus, v_k = αr^k. Therefore, for each k = 0, 1, 2, . . .,
u_k(r, θ) = a_k r^k cos(kθ) + b_k r^k sin(kθ).
Final Solution for DP on Disk
The general solution is
u(r, θ) = a₀/2 + Σ_{k=1}^∞ [a_k r^k cos(kθ) + b_k r^k sin(kθ)].
To find the constants, we use u(1, θ) = g(θ), hence
g(θ) = a₀/2 + Σ_{k=1}^∞ [a_k cos(kθ) + b_k sin(kθ)].
Since g is 2π-periodic, it admits a Fourier series expansion, and hence
a_k = (1/π) ∫_{−π}^{π} g(θ) cos(kθ) dθ,  b_k = (1/π) ∫_{−π}^{π} g(θ) sin(kθ) dθ.
13 Lecture - 33
13.1 Laplace Equation
Laplacian in Spherical Coordinates
Now that we have solved the Dirichlet problem on a 2D disk, we intend to solve the Dirichlet problem on the 3D sphere. The Laplace operator in spherical coordinates (three dimensions) is
Δ := (1/r²) ∂/∂r (r² ∂/∂r) + (1/(r² sin φ)) ∂/∂φ (sin φ ∂/∂φ) + (1/(r² sin²φ)) ∂²/∂θ²,
where r is the magnitude component, φ is the inclination (elevation) in the vertical plane and θ is the azimuth angle (the direction in the horizontal plane).
13.2 Laplacian on a 3D-Sphere
Laplacian on a 3D Sphere
Consider the unit ball in R³,
Ω = {(x, y, z) ∈ R³ | x² + y² + z² < 1},
whose boundary ∂Ω is the sphere of radius one. The DP is to find u(r, φ, θ) : Ω → R, well-behaved near r = 0, such that
(1/r²) ∂/∂r (r² ∂u/∂r) + (1/(r² sin φ)) ∂/∂φ (sin φ ∂u/∂φ) + (1/(r² sin²φ)) ∂²u/∂θ² = 0 in Ω,
u(r, φ + 2π, θ + 2π) = u(r, φ, θ) in Ω,
u(1, φ, θ) = g(φ, θ) on ∂Ω, (11)
where g is a 2π-periodic function in both variables.
Separation of Variables
We look for solutions u(r, φ, θ) whose variables can be separated, i.e., u(r, φ, θ) = v(r)w(φ)z(θ), with v, w and z non-zero. Substituting into the spherical form of the Laplacian, we get
(wz/r²) d/dr (r² dv/dr) + (vz/(r² sin φ)) d/dφ (sin φ dw/dφ) + (vw/(r² sin²φ)) d²z/dθ² = 0,
and hence
(1/v) d/dr (r² dv/dr) = −(1/(w sin φ)) d/dφ (sin φ dw/dφ) − (1/(z sin²φ)) d²z/dθ².
Since the LHS is a function of r and the RHS a function of (φ, θ), they must equal a constant, say λ.
Since LHS is a function of r and RHS is a function of (, ), they must equal a constant, say .
Azimuthal Symmetry
If Azimuthal symmetry is present then z() is constant and hence
dz
d
= 0.
We need to solve for w,
_
sin w

() + cos w

() +sin w() = 0 R
w( + 2) = w() .
Set x = cos .
Then
dx
d
= sin

() = sin
dw
dx
and w

() = sin
2

d
2
w
dx
2
cos
dw
dx
Legendre Equation
In the new variable x, we get the Legendre equation
(1 − x²)w″(x) − 2xw′(x) + λw(x) = 0, x ∈ [−1, 1].
We have already seen that this is a singular problem (while studying S-L problems). For each k ∈ N ∪ {0}, we have the solution (w_k, λ_k), where
λ_k = k(k + 1) and w_k(φ) = P_k(cos φ).
Solving for v
For these λ_k, we solve for v_k, for each k = 0, 1, 2, . . .,
d/dr (r² dv_k/dr) = k(k + 1)v_k.
For k = 0, we get v₀(r) = −α/r + β. But 1/r blows up as r → 0, and we want u well-behaved near the origin; thus we must have α = 0, so v₀ is constant.
Cauchy-Euler Equation
For k ∈ N, we need to solve for v_k in
d/dr (r² dv_k/dr) = k(k + 1)v_k.
Use the change of variable r = e^s. Then e^s (ds/dr) = 1 and d/dr = (d/ds)(ds/dr) = e^{−s} d/ds, hence r d/dr = d/ds. Trying v = e^{ms}, we solve for m in the quadratic equation m² + m = k(k + 1), giving
m₁ = k and m₂ = −k − 1.
Thus v_k(e^s) = αe^{ks} + βe^{−(k+1)s}, i.e., v_k(r) = αr^k + βr^{−k−1}.
Since r^{−k−1} blows up as r → 0, we must have β = 0. Thus, v_k = αr^k. Therefore, for each k = 0, 1, 2, . . .,
u_k(r, φ, θ) = a_k r^k P_k(cos φ).
Final Solution for Laplacian on Sphere
The general solution is
u(r, φ, θ) = Σ_{k=0}^∞ a_k r^k P_k(cos φ).
Since we have azimuthal symmetry, g(φ, θ) = g(φ). To find the constants, we use u(1, φ, θ) = g(φ), hence
g(φ) = Σ_{k=0}^∞ a_k P_k(cos φ).
Using the orthogonality of the P_k, we have
a_k = ((2k + 1)/2) ∫₀^π g(φ) P_k(cos φ) sin φ dφ.
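The coefficient formula can be checked numerically via the substitution x = cos φ; a minimal sketch with the illustrative data g(φ) = cos φ, i.e. g(x) = x = P₁(x), so that orthogonality forces a₁ = 1 and all other a_k = 0:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# a_k = (2k+1)/2 int_0^pi g(phi) P_k(cos phi) sin phi dphi
#     = (2k+1)/2 int_{-1}^{1} g(x) P_k(x) dx, with x = cos phi.
x = np.linspace(-1.0, 1.0, 20001)
gx = x                                    # g expressed in the variable x = cos phi

def coeff(k):
    Pk = Legendre.basis(k)(x)             # Legendre polynomial P_k on the grid
    f = gx * Pk
    integral = np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(x))  # trapezoid rule
    return (2 * k + 1) / 2.0 * integral

print(abs(coeff(1) - 1) < 1e-6, abs(coeff(0)) < 1e-6, abs(coeff(2)) < 1e-6)  # -> True True True
```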
14 Lecture - 34
14.1 Eigenvalues of Laplacian
Eigenvalue Problem of the Laplacian
Recall that we studied the eigenvalue problem for the Sturm-Liouville operator, which was one-dimensional. A similar result is true for the Laplacian in all dimensions; however, we shall just state it in two dimensions. For a given open bounded subset Ω ⊂ R², the Dirichlet eigenvalue problem is
−Δu(x, y) = λu(x, y), (x, y) ∈ Ω,
u(x, y) = 0, (x, y) ∈ ∂Ω.
Eigenvalue and Eigenfunction
Note that, for every λ ∈ R, zero is a trivial solution of this problem. Thus, we are interested in those λ for which the problem has non-trivial solutions. Such a λ is called an eigenvalue, and the corresponding solution u_λ is called an eigenfunction. Note that if u_λ is an eigenfunction corresponding to λ, then αu_λ, for every non-zero α ∈ R, is also an eigenfunction corresponding to λ.
Existence
Let W be the real vector space of all continuous (smooth, as required) functions u : Ω̄ → R such that u(x, y) = 0 on ∂Ω. For each eigenvalue λ of the Laplacian, we define the subspace of W
W_λ = {u ∈ W | u solves the Dirichlet EVP for the given λ}.
Theorem 14.1. There exists an increasing sequence of positive numbers 0 < λ₁ < λ₂ < λ₃ < . . . < λ_n < . . . with λ_n → ∞ which are eigenvalues of the Laplacian, and each W_n = W_{λ_n} is finite dimensional. Conversely, any solution u of the Dirichlet EVP is in W_n, for some n.
14.2 Computing Eigenvalues
Specific Domains
Though the theorem assures the existence of eigenvalues of the Laplacian, it is usually difficult to compute them for a given Ω. In this course, we compute the eigenvalues when Ω is a 2D rectangle and a 2D disk.
14.3 In Rectangle
Eigenvalues of the Laplacian in a Rectangle
Let the rectangle be Ω = {(x, y) ∈ R² | 0 < x < a, 0 < y < b}. We wish to solve the Dirichlet EVP in the rectangle:
−Δu(x, y) = λu(x, y), (x, y) ∈ Ω,
u(x, y) = 0, (x, y) ∈ ∂Ω.
The boundary condition amounts to saying
u(x, 0) = u(a, y) = u(x, b) = u(0, y) = 0.
Separation of Variables
We look for solutions of the form u(x, y) = v(x)w(y) (variables separated). Substituting u in separated form in the equation, we get
−v″(x)w(y) − v(x)w″(y) = λv(x)w(y).
Hence
−v″(x)/v(x) = λ + w″(y)/w(y).
Since the LHS is a function of x and the RHS a function of y, and they are equal, they must equal some constant, say μ. We need to solve the EVPs
−v″(x) = μv(x) and −w″(y) = (λ − μ)w(y)
under the boundary conditions v(0) = v(a) = 0 and w(0) = w(b) = 0.
Solving for v
As seen before, while solving for v, we have only trivial solutions for μ ≤ 0. If μ > 0, then v(x) = c₁ cos(√μ x) + c₂ sin(√μ x). Using the boundary condition v(0) = 0, we get c₁ = 0. Now using v(a) = 0, we have c₂ sin(√μ a) = 0; thus either c₂ = 0 or sin(√μ a) = 0. We have a non-trivial solution if c₂ ≠ 0, in which case √μ a = kπ, i.e., √μ = kπ/a, for k ∈ Z. For each k ∈ N, we have v_k(x) = sin(kπx/a) and μ_k = (kπ/a)².
Solving for w
We solve for w for each μ_k. For each k, l ∈ N, we have w_kl(y) = sin(lπy/b) and λ_kl = (kπ/a)² + (lπ/b)². Hence, for each k, l ∈ N,
u_kl(x, y) = sin(kπx/a) sin(lπy/b)
and λ_kl = (kπ/a)² + (lπ/b)².
14.4 In Disk
Eigenvalues of the Laplacian in a Disk
Let the disk of radius a be Ω = {(x, y) ∈ R² | x² + y² < a²}. We wish to solve the Dirichlet EVP in the disk:
−(1/r) ∂/∂r (r ∂u/∂r) − (1/r²) ∂²u/∂θ² = λu(r, θ), (r, θ) ∈ Ω,
u(θ) = u(θ + 2π), θ ∈ R,
u(a, θ) = 0, θ ∈ R.
We look for solutions of the form u(r, θ) = v(r)w(θ) (variables separated). Substituting u in separated form in the equation, we get
−(w/r) d/dr (r dv/dr) − (v/r²) w″(θ) = λv(r)w(θ).
Hence, dividing by vw and multiplying by r², we get
−(r/v) d/dr (r dv/dr) − (1/w) w″(θ) = λr²,
i.e.,
(r/v) d/dr (r dv/dr) + λr² = −(1/w) w″(θ) = μ.
Solving for non-trivial w, using the periodicity of w, we get: for μ = 0, w(θ) = a₀/2, and for each k ∈ N, μ = k² and
w(θ) = a_k cos(kθ) + b_k sin(kθ).
Solving for v
For each k ∈ N ∪ {0}, we have the equation
r d/dr (r dv/dr) + (λr² − k²)v = 0.
Introduce the change of variable x = √λ r, so that x² = λr² and
r d/dr = x d/dx.
Rewriting the equation in the new variable with y(x) = v(r),
x d/dx (x dy/dx) + (x² − k²)y(x) = 0.
Note that this is none other than Bessel's equation.
14.5 Bessel Functions
Zeroes of Bessel Functions
We already know that for each k ∈ N ∪ {0}, the Bessel function J_k is a solution of Bessel's equation. Recall the boundary condition on v: v(a) = 0. Thus y(√λ a) = 0, and hence √λ a should be a zero of the Bessel function.
Theorem 14.2. For each non-negative integer k, J_k has infinitely many positive zeroes.
For each k ∈ N ∪ {0}, let z_kl be the l-th zero of J_k. Hence √λ a = z_kl, and so λ_kl = z_kl²/a² and y(x) = J_k(x). Therefore, v(r) = J_k(z_kl r/a). For each k, l ∈ N ∪ {0}, we have
u_kl(r, θ) = J_k(z_kl r/a) sin(kθ) or J_k(z_kl r/a) cos(kθ)
and λ_kl = z_kl²/a².
15 Lecture - 35
15.1 1D Heat Equation
One Dimensional Heat Equation
The equation governing heat propagation in a bar of length L is
u_t = (1/(σ(x)ρ(x))) ∂/∂x (κ(x) ∂u/∂x),
where σ(x) is the specific heat at x, ρ(x) is the density of the bar at x, and κ(x) is the thermal conductivity of the bar at x. If the bar is homogeneous, i.e., its properties are the same at every point, then
u_t = (κ/(σρ)) ∂²u/∂x²,
with σ, ρ, κ being constants.
IVP for Heat Equation
Let L be the length of a homogeneous rod, insulated along its sides, whose ends are kept at zero temperature. Then the temperature u(x, t) at every point of the rod, 0 ≤ x ≤ L, and time t ≥ 0 is given by the equation
u_t = c² ∂²u/∂x²,
where c is a constant. The zero temperature at the end points is given by the Dirichlet boundary condition
u(0, t) = u(L, t) = 0.
Also given is the initial temperature of the rod at time t = 0, u(x, 0) = g(x), where g is given (or known) such that g(0) = g(L) = 0.
Dirichlet Problem for Heat Equation
Given g : [0, L] → R such that g(0) = g(L) = 0, we look for all solutions of the Dirichlet problem
u_t(x, t) − c²u_xx(x, t) = 0 in (0, L) × (0, ∞),
u(0, t) = u(L, t) = 0 in (0, ∞),
u(x, 0) = g(x) on [0, L].
We look for u(x, t) = v(x)w(t) (variables separated). Substituting u in separated form in the equation, we get
v(x)w′(t) = c²v″(x)w(t), i.e., w′(t)/(c²w(t)) = v″(x)/v(x).
Dirichlet Problem for Heat Equation
Since the LHS is a function of t and the RHS a function of x, and they are equal, they must be some constant, say λ. Thus,
w′(t)/(c²w(t)) = v″(x)/v(x) = λ.
Thus we need to solve two ODEs to get v and w:
w′(t) = λc²w(t) and v″(x) = λv(x).
But we already know how to solve the eigenvalue problem involving v.
Solving for v and w
For each k ∈ N, we have the pair (λ_k, v_k) as solutions of the EVP involving v, where
λ_k = −(kπ)²/L² and v_k(x) = sin(kπx/L).
For each k ∈ N, we solve for w_k to get
ln w_k(t) = λ_k c² t + ln α,
where α is a constant of integration; thus w_k(t) = αe^{−(kπc/L)²t}. Hence,
u_k(x, t) = v_k(x)w_k(t) = β_k sin(kπx/L) e^{−(kπc/L)²t},
for some constants β_k, is a solution of the heat equation. By the superposition principle, the general solution is
u(x, t) = Σ_{k=1}^∞ u_k(x, t) = Σ_{k=1}^∞ β_k sin(kπx/L) e^{−(kπc/L)²t}.
Particular Solution of Heat Equation
We now use the initial temperature of the rod, given as g : [0, L] → R, to find the particular solution of the heat equation. We are given u(x, 0) = g(x). Thus,
g(x) = u(x, 0) = Σ_{k=1}^∞ β_k sin(kπx/L).
Since g(0) = g(L) = 0, we know that g admits a Fourier sine expansion, and hence its coefficients β_k are given by
β_k = (2/L) ∫₀^L g(x) sin(kπx/L) dx.
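The whole recipe can be exercised numerically; a minimal sketch with the illustrative initial data g(x) = sin(πx/L), so that β₁ = 1 and all other coefficients vanish:

```python
import numpy as np

# Series solution of the heat equation on a rod of length L with zero-temperature ends.
L, c = 1.0, 0.5

def u(x, t, N=10):
    xq = np.linspace(0.0, L, 4001)
    total = 0.0
    for k in range(1, N + 1):
        f = np.sin(np.pi * xq / L) * np.sin(k * np.pi * xq / L)   # g(x) sin(k pi x / L)
        beta_k = (2 / L) * np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(xq))
        total += beta_k * np.sin(k * np.pi * x / L) * np.exp(-(k * np.pi * c / L)**2 * t)
    return total

x0, t0 = 0.3, 0.2
exact = np.sin(np.pi * x0 / L) * np.exp(-(np.pi * c / L)**2 * t0)
print(abs(u(x0, t0) - exact) < 1e-6)   # -> True: single-mode decay recovered
print(u(x0, 10.0) < 1e-8)              # temperature decays to zero
```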
15.2 Solving for Circular Wire
Heat Equation of a Circular Wire
We intend to solve the heat equation on a circle (circular wire) of radius one which is insulated along its sides. Then the temperature u(θ, t) at every point θ ∈ R of the circle and time t ≥ 0 is given by the equation
u_t = c² ∂²u/∂θ²,
where c is a constant. We note that u(θ, t) is 2π-periodic in the variable θ. Thus,
u(θ + 2π, t) = u(θ, t), θ ∈ R, t ≥ 0.
Let the initial temperature of the wire at time t = 0 be u(θ, 0) = g(θ), where g is a given 2π-periodic function.
IVP
Given a 2π-periodic function g : R → R, we look for all solutions of
u_t(θ, t) − c²u_θθ(θ, t) = 0 in R × (0, ∞),
u(θ + 2π, t) = u(θ, t) in R × (0, ∞),
u(θ, 0) = g(θ) on R × {t = 0}.
We look for u(θ, t) = v(θ)w(t) with variables separated. Substituting for u in the equation, we get
w′(t)/(c²w(t)) = v″(θ)/v(θ) = λ.
Solving for v and w
For each k ∈ N ∪ {0}, the pair (λ_k, v_k) is a solution of the EVP, where λ_k = −k² and
v_k(θ) = a_k cos(kθ) + b_k sin(kθ).
For each k ∈ N ∪ {0}, we get w_k(t) = e^{−(kc)²t}.
General Solution
For k = 0,
u₀(θ, t) = a₀/2 (to maintain consistency with the Fourier series),
and for each k ∈ N, we have
u_k(θ, t) = [a_k cos(kθ) + b_k sin(kθ)] e^{−k²c²t}.
Therefore, the general solution is
u(θ, t) = a₀/2 + Σ_{k=1}^∞ [a_k cos(kθ) + b_k sin(kθ)] e^{−k²c²t}.
Particular Solution
We now use the initial temperature on the circle to find the particular solution. We are given u(θ, 0) = g(θ). Thus,
g(θ) = u(θ, 0) = a₀/2 + Σ_{k=1}^∞ [a_k cos(kθ) + b_k sin(kθ)].
Since g is 2π-periodic, it admits a Fourier series expansion, and hence
a_k = (1/π) ∫_{−π}^{π} g(θ) cos(kθ) dθ,  b_k = (1/π) ∫_{−π}^{π} g(θ) sin(kθ) dθ.
Note that as t → ∞ the temperature of the wire approaches the constant a₀/2.
Exercises!
Solve the heat equation for
2D Rectangle.
2D Disk.
16 Lecture - 36
16.1 1D Wave Equation
One Dimensional Wave Equation
Let us consider a string of length L, stretched along the x-axis, with one end fixed at x = 0 and the other end at x = L. We assume that the string is free to move only in the vertical direction. The vertical displacement u(x, t) of the string at the point x and time t is governed by the equation
∂²u/∂t² = (T/ρ) ∂²u/∂x²,
where T is the tension and ρ is the density of the string. Equivalently,
∂²u/∂t² = c² ∂²u/∂x²,
where c² = T/ρ.
IVP for Wave Equation
The fact that the endpoints are fixed is given by the Dirichlet boundary condition
u(0, t) = u(L, t) = 0.
Also given are the initial position u(x, 0) = g(x) and the initial velocity u_t(x, 0) = h(x) of the string at time t = 0.
Dirichlet Problem for Wave Equation
Given g, h : [0, L] → R such that g(0) = g(L) = 0, we need to solve
u_tt(x, t) − c²u_xx(x, t) = 0 in (0, L) × (0, ∞),
u(0, t) = u(L, t) = 0 in [0, ∞),
u(x, 0) = g(x) in [0, L],
u_t(x, 0) = h(x) in (0, L).
We seek u(x, t) = v(x)w(t) (variables separated). Substituting u in separated form in the equation, we get
v(x)w″(t) = c²v″(x)w(t),
hence
w″(t)/(c²w(t)) = v″(x)/v(x) = λ.
Solving for v and w
For each $k \in \mathbb{N}$, we obtain the non-trivial solutions $(\lambda_k, v_k)$, where
\[
v_k(x) = \sin\left(\frac{k\pi x}{L}\right) \quad \text{and} \quad \lambda_k = -(k\pi/L)^2.
\]
For each $k \in \mathbb{N}$, we solve for $w_k$ in
\[
w_k''(t) + (k\pi/L)^2 c^2 w_k(t) = 0.
\]
Hence
\[
w_k(t) = a_k \cos(k\pi ct/L) + b_k \sin(k\pi ct/L).
\]
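A throwaway finite-difference check (my addition; the values of $k$, $c$, $L$ and the amplitudes are arbitrary) that $w_k$ really solves $w_k'' + (k\pi/L)^2 c^2 w_k = 0$:

```python
import math

k, c, L = 2, 3.0, 1.5          # arbitrary mode number and constants
ak, bk = 0.7, -0.4             # arbitrary amplitude coefficients
om = k * math.pi * c / L       # angular frequency k*pi*c/L

def w(t):
    return ak * math.cos(om * t) + bk * math.sin(om * t)

def residual(t, h=1e-4):
    # central second difference approximates w''(t)
    wpp = (w(t + h) - 2 * w(t) + w(t - h)) / h**2
    return wpp + om**2 * w(t)
```

The residual is zero up to discretization error, confirming the ODE solution.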
General Solution of Wave Equation
For each $k \in \mathbb{N}$, we have
\[
u_k(x, t) = \left[a_k \cos(k\pi ct/L) + b_k \sin(k\pi ct/L)\right] \sin\left(\frac{k\pi x}{L}\right)
\]
for some constants $a_k$ and $b_k$. Hence, the general solution is
\[
u(x, t) = \sum_{k=1}^{\infty} \left[a_k \cos(k\pi ct/L) + b_k \sin(k\pi ct/L)\right] \sin\left(\frac{k\pi x}{L}\right).
\]
The frequency of the fundamental mode is
\[
\frac{1}{2\pi} \cdot \frac{\pi c}{L} = \frac{c}{2L} = \frac{\sqrt{T/\rho}}{2L},
\]
and the frequencies of the higher modes are integer multiples of this frequency.
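In code, the mode frequencies follow directly from $c = \sqrt{T/\rho}$ (a small sketch; the sample values of $T$, $\rho$, $L$ are invented for illustration):

```python
import math

def string_frequencies(T, rho, L, modes=4):
    """Frequencies k*c/(2L) of the first few modes, with wave speed c = sqrt(T/rho)."""
    c = math.sqrt(T / rho)
    f1 = c / (2 * L)           # fundamental frequency
    return [k * f1 for k in range((1), modes + 1)]

# e.g. T = 100 N, rho = 0.01 kg/m, L = 0.5 m gives c = 100 m/s and f1 = 100 Hz
fs = string_frequencies(100, 0.01, 0.5)
```

Doubling the tension raises every frequency by a factor of $\sqrt{2}$, while doubling the length halves them, in line with the formula.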
Particular Solution of Wave Equation
We now use the initial position $g$ and initial velocity $h$ of the string to find the particular solution of the wave equation. We are given $u(x, 0) = g(x)$ and $u_t(x, 0) = h(x)$. Thus,
\[
g(x) = u(x, 0) = \sum_{k=1}^{\infty} a_k \sin\left(\frac{k\pi x}{L}\right).
\]
Since $g(0) = g(L) = 0$, we know that $g$ admits a Fourier sine expansion, and hence its coefficients $a_k$ are given by
\[
a_k = \frac{2}{L} \int_0^L g(x) \sin\left(\frac{k\pi x}{L}\right) dx.
\]
Differentiating $u$ w.r.t. $t$, we get
\[
u_t(x, t) = \sum_{k=1}^{\infty} \frac{k\pi c}{L} \left[b_k \cos(k\pi ct/L) - a_k \sin(k\pi ct/L)\right] \sin\left(\frac{k\pi x}{L}\right).
\]
Thus,
\[
h(x) = u_t(x, 0) = \sum_{k=1}^{\infty} b_k \frac{k\pi c}{L} \sin\left(\frac{k\pi x}{L}\right)
\]
and
\[
b_k = \frac{2}{k\pi c} \int_0^L h(x) \sin\left(\frac{k\pi x}{L}\right) dx.
\]
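A numerical sketch of these coefficient formulas (my addition; $g$ and $h$ are chosen as single sine modes so the exact answers $a_1 = 1$ and $b_2 = L/(2\pi c)$ are known in advance):

```python
import math

def sine_coeff(func, k, L, n=2000):
    """Approximate (2/L) * integral_0^L func(x) sin(k*pi*x/L) dx by the midpoint rule."""
    h = L / n
    return (2 / L) * sum(
        func((i + 0.5) * h) * math.sin(k * math.pi * (i + 0.5) * h / L)
        for i in range(n)
    ) * h

L, c = 2.0, 1.0
g = lambda x: math.sin(math.pi * x / L)        # initial position: pure first mode
h = lambda x: math.sin(2 * math.pi * x / L)    # initial velocity: pure second mode

# a_k from the sine expansion of g; b_k carries the extra factor L/(k*pi*c)
a = {k: sine_coeff(g, k, L) for k in (1, 2, 3)}
b = {k: sine_coeff(h, k, L) / (k * math.pi * c / L) for k in (1, 2, 3)}
```

As expected, only $a_1$ and $b_2$ are nonzero, matching the orthogonality of the sine modes.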
Exercises!
Solve the wave equation for:
- 2D Rectangle.
- 2D Disk.
17 Lecture - 37
17.1 Duhamel's Principle
Duhamel's Principle
Recall that we have studied the homogeneous IVPs for the heat and wave equations with non-zero initial conditions.
Duhamel's principle states that one can obtain a solution of the inhomogeneous IVP for the heat and wave equations from the corresponding homogeneous IVP.
Duhamel's Principle for the Heat Equation
Let us illustrate the principle for the heat equation. Let $u(x, t)$ be the solution of the inhomogeneous heat equation, for a given $f$,
\[
\begin{cases}
u_t(x, t) - c^2 \Delta u(x, t) = f(x, t) & \text{in } \Omega \times (0, \infty)\\
u(x, t) = 0 & \text{on } \partial\Omega \times (0, \infty)\\
u(x, 0) = 0 & \text{in } \Omega.
\end{cases}
\]
Consider, for each $s \in (0, \infty)$, $w^s(x, t) = w(x, t; s)$ as the solution of the homogeneous (auxiliary) problem
\[
\begin{cases}
w_t^s(x, t) - c^2 \Delta w^s(x, t) = 0 & \text{in } \Omega \times (s, \infty)\\
w^s(x, t) = 0 & \text{on } \partial\Omega \times (s, \infty)\\
w^s(x, s) = f(x, s) & \text{on } \Omega.
\end{cases}
\]
Since $t \in (s, \infty)$, introducing the change of variable $r = t - s$, we have $w^s(x, t) = w(x, t - s)$, which solves
\[
\begin{cases}
w_r(x, r) - c^2 \Delta w(x, r) = 0 & \text{in } \Omega \times (0, \infty)\\
w(x, r) = 0 & \text{on } \partial\Omega \times (0, \infty)\\
w(x, 0) = f(x, s) & \text{on } \Omega.
\end{cases}
\]
Duhamel's principle states that
\[
u(x, t) = \int_0^t w^s(x, t)\, ds = \int_0^t w(x, t - s)\, ds.
\]
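The structure of the principle is perhaps clearest in a scalar ODE analogue, where the operator $c^2\Delta$ is replaced by multiplication by a constant $a$ (this analogy is my illustration, not part of the notes). The auxiliary solution started at time $s$ from the datum $f(s)$ is $w^s(t) = f(s)e^{a(t-s)}$, and $u(t) = \int_0^t w^s(t)\, ds$ solves $u' = au + f$ with $u(0) = 0$:

```python
import math

a = -2.0                       # stands in for c^2 * Laplacian (a constant here)
f = lambda t: 1.0              # constant forcing, chosen so a closed form exists

def duhamel(t, n=4000):
    """u(t) = integral_0^t w^s(t) ds with w^s(t) = f(s) * exp(a*(t-s)), midpoint rule."""
    h = t / n
    return sum(
        f((i + 0.5) * h) * math.exp(a * (t - (i + 0.5) * h)) for i in range(n)
    ) * h

# closed form for f = 1: u(t) = (exp(a*t) - 1) / a
```

The quadrature result agrees with the closed-form solution of $u' = au + 1$, $u(0) = 0$.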
Proof
Let us prove that $u$ defined as
\[
u(x, t) = \int_0^t w(x, t - s)\, ds
\]
solves the inhomogeneous heat equation. Assuming $w$ is $C^2$, we get
\begin{align*}
u_t(x, t) &= \frac{\partial}{\partial t} \int_0^t w(x, t - s)\, ds\\
&= \int_0^t w_t(x, t - s)\, ds + w(x, t - t)\frac{d(t)}{dt} - w(x, t - 0)\frac{d(0)}{dt}\\
&= \int_0^t w_t(x, t - s)\, ds + w(x, 0)\\
&= \int_0^t w_t(x, t - s)\, ds + f(x, t),
\end{align*}
since $w(x, 0)$ here is the datum of the auxiliary problem with $s = t$, namely $f(x, t)$. Similarly,
\[
\Delta u(x, t) = \int_0^t \Delta w(x, t - s)\, ds.
\]
Thus,
\begin{align*}
u_t - c^2 \Delta u &= f(x, t) + \int_0^t \left[ w_t(x, t - s) - c^2 \Delta w(x, t - s) \right] ds\\
&= f(x, t).
\end{align*}
Duhamel's Principle for the Wave Equation (Exercise!)
The principle states that the solution $u(x, t)$ of the inhomogeneous wave equation, for a given $f$,
\[
\begin{cases}
u_{tt}(x, t) - \Delta u(x, t) = f(x, t) & \text{in } \Omega \times (0, \infty)\\
u(x, t) = 0 & \text{on } \partial\Omega \times (0, \infty)\\
u(x, 0) = u_t(x, 0) = 0 & \text{in } \Omega,
\end{cases}
\]
is
\[
u(x, t) = \int_0^t w(x, t - s)\, ds,
\]
where $w(x, t - s)$ is the solution of the homogeneous equation
\[
\begin{cases}
w_{tt}(x, t - s) - \Delta w(x, t - s) = 0 & \text{in } \Omega \times (0, \infty)\\
w(x, t - s) = 0 & \text{on } \partial\Omega \times (0, \infty)\\
w(x, 0) = 0 & \text{on } \Omega\\
w_t(x, 0) = f(x, s) & \text{on } \Omega.
\end{cases}
\]
Example
Consider the wave equation
\[
\begin{cases}
u_{tt}(x, t) - c^2 u_{xx}(x, t) = \sin 3x & \text{in } (0, \pi) \times (0, \infty)\\
u(0, t) = u(\pi, t) = 0 & \text{in } (0, \infty)\\
u(x, 0) = u_t(x, 0) = 0 & \text{in } (0, \pi).
\end{cases}
\]
We look for the solution of the homogeneous wave equation
\[
\begin{cases}
w_{tt}(x, t) - c^2 w_{xx}(x, t) = 0 & \text{in } (0, \pi) \times (0, \infty)\\
w(0, t) = w(\pi, t) = 0 & \text{in } (0, \infty)\\
w(x, 0) = 0 & \text{in } (0, \pi)\\
w_t(x, 0) = \sin 3x & \text{in } (0, \pi).
\end{cases}
\]
Solving Homogeneous Equation
We know that the general solution for $w$ is
\[
w(x, t) = \sum_{k=1}^{\infty} \left[a_k \cos(kct) + b_k \sin(kct)\right] \sin(kx).
\]
Hence
\[
w(x, 0) = \sum_{k=1}^{\infty} a_k \sin(kx) = 0,
\]
so $a_k = 0$ for all $k$. Also,
\[
w_t(x, 0) = \sum_{k=1}^{\infty} b_k ck \sin(kx) = \sin 3x.
\]
Hence the $b_k$ are all zero except for $k = 3$, and $b_3 = 1/(3c)$. Thus,
\[
w(x, t) = \frac{1}{3c} \sin(3ct) \sin(3x).
\]
Solving Inhomogeneous Equation
\begin{align*}
u(x, t) &= \int_0^t w(x, t - s)\, ds\\
&= \frac{1}{3c} \int_0^t \sin(3c(t - s)) \sin 3x\, ds\\
&= \frac{\sin 3x}{3c} \int_0^t \sin(3c(t - s))\, ds\\
&= \frac{\sin 3x}{3c} \left[ \frac{\cos(3c(t - s))}{3c} \right]_0^t\\
&= \frac{\sin 3x}{9c^2} \left(1 - \cos 3ct\right).
\end{align*}
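As a sanity check (a sketch added for illustration; $c = 2$ is an arbitrary choice), one can confirm by finite differences that this $u$ satisfies $u_{tt} - c^2 u_{xx} = \sin 3x$ with zero initial data:

```python
import math

c = 2.0                                    # arbitrary wave speed

def u(x, t):
    # particular solution obtained above
    return math.sin(3 * x) / (9 * c**2) * (1 - math.cos(3 * c * t))

def pde_residual(x, t, h=1e-4):
    # central second differences in t and x approximate u_tt and u_xx
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_tt - c**2 * u_xx - math.sin(3 * x)
```

The residual vanishes up to discretization error, and $u(x, 0) = 0$ holds exactly since $1 - \cos 0 = 0$.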
18 Lecture - 38
18.1 d'Alembert's Formula
d'Alembert's Formula: 1D Wave Equation
Consider the IVP
\[
\begin{cases}
u_{tt}(x, t) = c^2 u_{xx}(x, t) & \text{in } \mathbb{R} \times (0, \infty)\\
u(x, 0) = g(x) & \text{in } \mathbb{R} \times \{t = 0\}\\
u_t(x, 0) = h(x) & \text{in } \mathbb{R} \times \{t = 0\},
\end{cases}
\]
where $g, h : \mathbb{R} \to \mathbb{R}$ are given functions. Note that the PDE can be factored as
\[
\left( \frac{\partial}{\partial t} + c \frac{\partial}{\partial x} \right)\left( \frac{\partial}{\partial t} - c \frac{\partial}{\partial x} \right) u = u_{tt} - c^2 u_{xx} = 0.
\]
We set $v(x, t) = \left( \frac{\partial}{\partial t} - c \frac{\partial}{\partial x} \right) u(x, t)$, and hence
\[
v_t(x, t) + c v_x(x, t) = 0 \quad \text{in } \mathbb{R} \times (0, \infty).
\]
Solve Two Transport Equations
Notice that the first-order PDE obtained is in the form of the homogeneous transport equation, which we know how to solve. Hence, for some smooth function $f$,
\[
v(x, t) = f(x - ct), \quad \text{where } f(x) := v(x, 0).
\]
Using $v$ in the original equation, we get the inhomogeneous transport equation
\[
u_t(x, t) - c u_x(x, t) = f(x - ct).
\]
Recall the solution formula for the inhomogeneous transport equation:
\[
u(x, t) = g(x - at) + \int_0^t f(x - a(t - s), s)\, ds.
\]
Since $u(x, 0) = g(x)$ and $a = -c$, in our case the solution reduces to
\begin{align*}
u(x, t) &= g(x + ct) + \int_0^t f(x + c(t - s) - cs)\, ds\\
&= g(x + ct) + \int_0^t f(x + ct - 2cs)\, ds\\
&= g(x + ct) - \frac{1}{2c} \int_{x+ct}^{x-ct} f(y)\, dy\\
&= g(x + ct) + \frac{1}{2c} \int_{x-ct}^{x+ct} f(y)\, dy.
\end{align*}
But $f(x) = v(x, 0) = u_t(x, 0) - c u_x(x, 0) = h(x) - c g'(x)$, and substituting this in the formula for $u$, we get
\begin{align*}
u(x, t) &= g(x + ct) + \frac{1}{2c} \int_{x-ct}^{x+ct} \left( h(y) - c g'(y) \right) dy\\
&= g(x + ct) + \frac{1}{2} \left( g(x - ct) - g(x + ct) \right) + \frac{1}{2c} \int_{x-ct}^{x+ct} h(y)\, dy\\
&= \frac{1}{2} \left( g(x - ct) + g(x + ct) \right) + \frac{1}{2c} \int_{x-ct}^{x+ct} h(y)\, dy.
\end{align*}
If $c = 1$, we have
\[
u(x, t) = \frac{1}{2} \left( g(x - t) + g(x + t) \right) + \frac{1}{2} \int_{x-t}^{x+t} h(y)\, dy.
\]
This is called d'Alembert's formula.
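The formula is straightforward to implement. In the sketch below (my addition), the data $g = \sin$ and $h = \cos$ with $c = 1$ are chosen because then the formula collapses to the exact solution $u(x, t) = \sin(x + t)$:

```python
import math

def dalembert(g, h, x, t, n=2000):
    """d'Alembert's formula with c = 1; the integral of h uses the midpoint rule."""
    lo, hi = x - t, x + t
    dy = (hi - lo) / n
    integral = sum(h(lo + (i + 0.5) * dy) for i in range(n)) * dy
    return 0.5 * (g(x - t) + g(x + t)) + 0.5 * integral

# with g = sin, h = cos:
#   u = sin(x)cos(t) + (1/2)(sin(x+t) - sin(x-t)) = sin(x + t)
```

At $t = 0$ the averaging term returns $g(x)$ and the integral vanishes, so the initial position is matched exactly.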