
UPPSALA UNIVERSITY - Department of Mathematics 2005-06-14

LULEÅ TECHNICAL UNIVERSITY - Department of Mathematics


Fredrik Strömberg
Johan Byström
Lars-Erik Persson

Applied Mathematics

This book is based on lecture notes by Professor Lars-Erik Persson, from a course in applied mathematics
given at Luleå Technical University and Uppsala University. The course has been given for graduate
students in areas outside mathematics, but the Internet version is aimed at “gymnasielärare” (approximately
college teachers) in mathematics.
The only prerequisites are the basic standard mathematics courses at the university level, i.e. basic algebra,
linear algebra and analysis.
The blue links point to external sources outside this document, and we are not responsible
for the availability and information content of these pages. At present there is only a very limited selection
of exercises, but that will improve soon.
Fredrik Strömberg – Responsible for this document (and the lectures appearing here).
Johan Byström – Responsible for the rest of the lectures available on the web.
Lars-Erik Persson – Inventor of the course and source of inspiration for these notes.
CHAPTER 1

Introduction to Dimensional Analysis and Scaling

CHAPTER 2

Introduction to Perturbation Methods

CHAPTER 3

Introduction to Calculus of Variations
CHAPTER 4

Introduction to Partial Differential Equations

4.1. Some Examples


Example 4.1. (The one-dimensional heat conduction equation)
We consider the heat conduction problem (see Chapter 1) in an (infinitely) thin rod of length
l (see Fig. 4.1.1). Let the heat at the point x and time t be given by u(x,t). Assume that the
heat distribution in the rod at the time t = 0 is given by the function f(x), and that the heat at the
endpoints x = 0 and x = l is given by the functions h(t) and g(t), respectively (in practice h and g
are measured quantities). Then u(x,t) is described by the heat conduction equation:
u'_t − ku''_xx = 0, t > 0, 0 < x < l,
u(x, 0) = f(x), 0 < x < l,
u(0,t) = h(t), t > 0,
u(l,t) = g(t), t > 0.

FIGURE 4.1.1. One-dimensional heat conduction (a rod along the x-axis from 0 to l)
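The problem above can be approximated numerically. The following is an illustrative sketch (not part of the original notes; the function name and parameter choices are ours) of an explicit finite-difference scheme for the heat problem with initial data f and boundary data h, g:

```python
import math

# Hedged illustration (not from the notes): explicit finite differences for
#   u'_t = k u''_xx, 0 < x < l, u(x,0) = f(x), u(0,t) = h(t), u(l,t) = g(t).
def solve_heat(f, h, g, k=1.0, l=1.0, nx=21, dt=1e-4, steps=200):
    dx = l / (nx - 1)
    xs = [i * dx for i in range(nx)]
    u = [f(x) for x in xs]
    r = k * dt / dx**2               # stability requires r <= 1/2
    assert r <= 0.5, "explicit scheme unstable: decrease dt or increase dx"
    for n in range(1, steps + 1):
        t = n * dt
        inner = [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
                 for i in range(1, nx - 1)]
        u = [h(t)] + inner + [g(t)]  # impose the boundary values
    return xs, u
```

With f(x) = sin(πx), h = g = 0, k = l = 1, the exact solution is sin(πx)e^(−π²t), which the scheme reproduces to a few digits at moderate times.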


Example 4.2. (The inhomogeneous one-dimensional heat conduction equation)
Suppose that we have the same system as in the previous example, but that we also add the heat
v(x,t) at the point x and time t (see Fig. 4.1.2). In this case u(x,t) is described by the inhomogeneous
heat conduction equation:
u'_t − ku''_xx = v(x,t), t > 0, 0 < x < l,
u(x, 0) = f (x), 0 < x < l,
u(0,t) = h(t), t > 0,
u(l,t) = g(t), t > 0.

FIGURE 4.1.2. One-dimensional inhomogeneous heat conduction (heat v = v(x,t) added along the rod from 0 to l)

Example 4.3. (The two-dimensional inhomogeneous heat conduction equation)


We now consider heat conduction in a two-dimensional region D. Let the heat at the point
(x, y) ∈ D at the time t be given by u(x, y,t). Assume that the heat distribution at t = 0 is described
by the function f (x, y), and that the heat at the boundary of D is constant over time and given
by g(x, y) (in practice this is obtained by transfer of heat into or out from the system through the
boundary). Assume also that the heat v(x, y,t) is added to the point (x, y) at the time t. Then u(x, y,t)
is described by the two-dimensional inhomogeneous heat conduction equation:

u'_t − k(u''_xx + u''_yy) = v(x, y, t), (x, y) ∈ D, t > 0,
u(x, y, 0) = f(x, y), (x, y) ∈ D,
u(x, y, t) = g(x, y), (x, y) ∈ ∂D, t > 0.

Example 4.4. (The three-dimensional heat conduction equation)
We now consider heat conduction in a three-dimensional region V . We use the same notation
as above, with the addition of a z-coordinate. Then u(x, y, z,t) is described by the three-dimensional
heat conduction equation:
(4.1.1) u'_t − div(k grad u) = v(x, y, z, t), (x, y, z) ∈ V, t > 0,
u(x, y, z, 0) = f(x, y, z), (x, y, z) ∈ V,
u(x, y, z, t) = g(x, y, z), (x, y, z) ∈ ∂V, t > 0.

REMARK 1. Note that the gradient “grad” of the function u(x, y, z) is given by the vector
grad u = ∇u = (u'_x, u'_y, u'_z) = (∂u/∂x, ∂u/∂y, ∂u/∂z) = (∂/∂x, ∂/∂y, ∂/∂z) u.
If ∇ is written as
∇ = (∂/∂x, ∂/∂y, ∂/∂z),
the divergence “div” of a vector field F = (F_x, F_y, F_z) is given by
div F = ∇ · F = ∂F_x/∂x + ∂F_y/∂y + ∂F_z/∂z.
Thus, the divergence of the gradient of u(x, y, z) is given by
div(grad u) = ∇ · ∇u = ∇²u = Δu = u''_xx + u''_yy + u''_zz.
Hence, if k = k(x, y, z) = k₀ is constant, (4.1.1) can be written as
u'_t − k₀(u''_xx + u''_yy + u''_zz) = v ⇔ u'_t − k₀Δu = v.


REMARK 2. Observe that the equation
u'_t − κΔu = v
in general describes a diffusion process. Heat conduction implies a diffusion (transport) of heat, and is
one example of such a process. Some other examples of diffusion processes are

• Mixing of one liquid in another (e.g. milk in a cup of tea).
• Diffusion of a gaseous substance in air (e.g. a poisonous gas released into the air).
• Propagation of elementary particles in a solid material (e.g. neutrons in a nuclear reactor).

Since the equations are the same, all methods we consider here for solving the heat equation in various
cases can also be applied to these alternative diffusion problems. Another PDE which is as important as
the diffusion equation is the wave equation, which we will now consider in some examples.

Example 4.5. (The one-dimensional wave equation)


Consider a vibrating (elastic) string of length l which is fixed at both endpoints. Arrange the
string along the x-axis and let u(x,t) describe the position (relative to the equilibrium) of the string
at the coordinate x and time t. At the initial time t = 0 the position and velocity of the string are
given by the functions f(x) and g(x) respectively. The vibrations of the string are described by the
one-dimensional wave equation:

u''_tt − ku''_xx = 0, 0 < x < l, t > 0,
u(0,t) = u(l,t) = 0, t > 0,
u(x, 0) = f(x), 0 < x < l,
u'_t(x, 0) = g(x), 0 < x < l.

FIGURE 4.1.3. Vibrating string (displacement u(x,t) above the interval from 0 to l)

Example 4.6. (The two-dimensional wave equation)


Consider a vibrating membrane which is fixed at the boundary (e.g. a drum skin fastened in
a drum). Arrange the membrane so that it covers a domain D in the xy-plane, and let u(x, y,t)
describe the position (relative to the equilibrium) of the membrane at the point (x, y) at the time t
(see Fig. 4.1.4). At the initial time t = 0 the position and velocity of the membrane are given by the
functions f (x, y) and g(x, y) respectively. The vibrations of the membrane are then described by the
two-dimensional wave equation:

u''_tt − k(u''_xx + u''_yy) = 0, (x, y) ∈ D, t > 0,
u(x, y, t) = 0, (x, y) ∈ ∂D, t > 0,
u(x, y, 0) = f(x, y), (x, y) ∈ D,
u'_t(x, y, 0) = g(x, y), (x, y) ∈ D.

FIGURE 4.1.4. Vibrating membrane (surface z = u(x, y, t) over the domain D with boundary ∂D)

Example 4.7. (The two-dimensional Laplace equation)


Assume that we have a two-dimensional domain, as in Example 4.3, and that we want to investigate
the heat distribution in the system at thermal equilibrium, i.e. after such a long time that the heat
distribution no longer changes with time. Assume also that we do not add any heat. This implies
that we have to set u'_t = 0 and v = 0 in Example 4.3, which gives us the Laplace equation, which
we can write in the following three equivalent ways:

(4.1.2) u''_xx + u''_yy = 0 ⇔ ∇²u = 0 ⇔ Δu = 0.

Δ is usually called the Laplace operator or simply the Laplacian, and is of great importance in
both pure and applied mathematics. The solution u(x, y) of (4.1.2) describes the heat at the point
(x, y) after thermal equilibrium. This is usually called a stationary solution to the heat conduction
problem.

Example 4.8. (The two-dimensional Poisson equation)

The Poisson equation is an inhomogeneous Laplace equation. It can be written in the following
three equivalent ways:

u''_xx + u''_yy = f ⇔ ∇²u = f ⇔ Δu = f.

Here u'_t = 0 and v(x, y, t) = −k f(x, y) (independent of t) in Example 4.3, so the Poisson equation can be interpreted
as the inhomogeneous heat conduction equation at thermal equilibrium (u'_t = 0), where we at all
times t add the heat v = −k f(x, y) at the point (x, y).

Example 4.9. (The three-dimensional Poisson equation)

u''_xx + u''_yy + u''_zz = f ⇔ ∇²u = f ⇔ Δu = f.

In this case we have u'_t = 0 and v = −k₀ f in Example 4.4, and as in the two-dimensional case above,
the three-dimensional Poisson equation can be interpreted as the heat conduction equation at thermal
equilibrium, where at all times t we add the heat −k₀ f(x, y, z) at the point (x, y, z).

REMARK 3. If the added heat in the examples above has a negative sign, the obvious physical interpretation
is that we cool down the system.

4.2. A General Partial Differential Equation of the Second Order

A general partial differential equation (PDE) can be written as

(4.2.1) G(x, t, u, u'_x, u'_t, u''_xx, u''_xt, u''_tt) = 0.

The basic questions we now ask ourselves are:

1. Does there exist a solution to the PDE?
2. Is the solution unique?
3. Is the solution stable under small perturbations?
4. Which methods are available to construct and illustrate solutions?

Example 4.10. The problems in Examples 4.1-4.6 have unique solutions, but the problems in Examples
4.7-4.9 do not have unique solutions.

REMARK 4. A PDE of the type (4.2.1) usually has an infinite number of solutions, and the general solution
depends on a number of arbitrary functions (to be compared with the fact that solutions to ODEs usually
depend on arbitrary constants).

Example 4.11. The equation


u''_tx = tx
has the solutions
u = (1/4)t²x² + g(t) + h(x).



Example 4.12. The two-dimensional Laplace equation


u''_xx + u''_yy = 0,
has, for example, the solutions
u(x, y) = x² − y²,
u(x, y) = eˣ cos y,
u(x, y) = ln(x² + y²).

REMARK 5. A solution u(x, y) to the Laplace equation is called a harmonic function. To find harmonic
functions one can use the fact that if f(z) = f(x + iy) is an analytic (holomorphic) function,
i.e. if (d/dz) f(z) exists, then the real part u(x, y) = ℜ f(x + iy) and the imaginary part
v(x, y) = ℑ f(x + iy) of f are both harmonic functions.
In the example above we used f(z) = z², eᶻ and log z² respectively.

4.3. Linearity and Non-linearity

A partial differential equation can be written as


(*) Lu = f,
where L is a so-called differential operator.

Example 4.13. Let L = ∂/∂t − k ∂²/∂x². Then (*) becomes
u'_t − ku''_xx = f,
which is a one-dimensional heat conduction equation (cf. Example 4.2).

Example 4.14. Consider the differential operator
L(u) = u ∂u/∂t + 2txu.
Then the equation (*) becomes
u ∂u/∂t + 2txu = f(x, t).

DEFINITION 4.1. We say that the PDE (*) is linear if the operator L has the properties
(1) L(u + v) = Lu + Lv,
(2) L(cu) = cLu.
If these conditions are not both satisfied we say that (*) is non-linear.

Example 4.15. The heat conduction equation in Example 4.13 is linear.

Proof: We must verify that L = ∂/∂t − k ∂²/∂x² satisfies (1) and (2) above.
(1) L(u + v) = ∂(u + v)/∂t − k ∂²(u + v)/∂x² = (∂u/∂t − k ∂²u/∂x²) + (∂v/∂t − k ∂²v/∂x²) = Lu + Lv.
(2) L(cu) = ∂(cu)/∂t − k ∂²(cu)/∂x² = c ∂u/∂t − kc ∂²u/∂x² = c(∂u/∂t − k ∂²u/∂x²) = cLu.

Hence, since L satisfies both (1) and (2), the equation
Lu = f
is linear. □

Example 4.16. The PDE in Example 4.14 is non-linear.

Proof: We start by verifying property (1):
L(u + v) = (u + v)(u + v)'_t + 2tx(u + v) = uu'_t + uv'_t + vu'_t + vv'_t + 2txu + 2txv, and
Lu + Lv = uu'_t + 2txu + vv'_t + 2txv.
Since L(u + v) − (Lu + Lv) = uv'_t + vu'_t ≠ 0, property (1) is not satisfied and hence the equation
is non-linear. □

4.4. Classification of PDEs

A general linear second order PDE can be written as

(4.4.1) a(x,t)u''_tt + b(x,t)u''_xt + c(x,t)u''_xx + d(x,t)u'_t + e(x,t)u'_x + q(x,t)u = f(x,t), (x,t) ∈ D.
Set
D(x,t) = (b(x,t))² − 4a(x,t)c(x,t).
We say that the PDE (4.4.1) is

• Elliptic if D(x,t) < 0 in D,
• Parabolic if D(x,t) = 0 in D,
• Hyperbolic if D(x,t) > 0 in D.

Example 4.17. Consider the two-dimensional Laplace equation
u''_xx + u''_yy = 0.
Here D(x, y) = 0² − 4 · 1 · 1 = −4 < 0, and hence the equation is elliptic.

Example 4.18. Consider the heat conduction equation
u'_t − u''_xx = 0.
Here D(x,t) = 0² − 4 · 0 · (−1) = 0, and hence the equation is parabolic.

Example 4.19. Consider the one-dimensional wave equation
u''_tt − u''_xx = 0.
Here D(x,t) = 0² − 4 · 1 · (−1) = 4 > 0, and hence the equation is hyperbolic.
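The discriminant test in the three examples above is mechanical enough to express in a few lines of code. This is our own illustrative sketch (the function name is an assumption, not notation from the notes):

```python
# Hedged sketch: classify a second order PDE
#   a u''_tt + b u''_xt + c u''_xx + (lower order terms) = f
# at a point via the discriminant D = b^2 - 4ac.
def classify(a, b, c):
    D = b * b - 4 * a * c
    if D < 0:
        return "elliptic"
    if D == 0:
        return "parabolic"
    return "hyperbolic"
```

For example, the Laplace equation has (a, b, c) = (1, 0, 1), the heat equation (0, 0, −1), and the wave equation (1, 0, −1), reproducing the classifications above.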

4.5. The Superposition Principle

Consider a linear and homogeneous (i.e. the right hand side is 0) PDE:
(*) Lu = 0.
Suppose that u1 , u2 , . . . are solutions of (*) and that u is a finite linear combination of these:
u = c1 u1 + c2 u2 + · · · + cn un .
Then u is also a solution to (*) since
Lu = L (c1 u1 + · · · + cn un ) = c1 Lu1 + · · · + cn Lun = 0 + · · · + 0 = 0.
This is called the superposition principle, and it is true also for infinite sums:
u = c₁u₁ + c₂u₂ + · · · + cₙuₙ + · · · ,
provided that certain convergence properties hold¹.
The continuous superposition principle:
Assume that u_α(x,t) satisfies Lu_α = 0 for all α, a ≤ α ≤ b, and define
u(x,t) = ∫_a^b c(α)u_α(x,t) dα,
where c(α) is an arbitrary (integrable) function. Then
Lu = 0.
Proof:
Lu = L(∫_a^b c(α)u_α(x,t) dα) = ∫_a^b c(α)Lu_α(x,t) dα = ∫_a^b c(α) · 0 dα = 0. □

Example 4.20. It is easy to verify that for each −∞ < α < ∞, the function
u_α(x,t) = (1/√(4πkt)) exp(−(x − α)²/(4kt))
satisfies the heat conduction equation
u'_t − ku''_xx = 0.
Hence this equation is also satisfied by the function
u(x,t) = (1/√(4πkt)) ∫_{−∞}^{∞} c(α) exp(−(x − α)²/(4kt)) dα,
for any arbitrary, integrable function c(α).

¹E.g. if we have uniform convergence in: sₙ(x) = Σ_{j=1}^{n} u_j(x) → u, s'ₙ(x) = Σ_{j=1}^{n} u'_j(x) → u', etc. for all occurring derivatives.

4.6. Well-Posed Problems

A boundary or initial value problem is said to be well-posed if

(a) there exists a solution,


(b) the solution is unique, and
(c) the solution is stable.

A problem that is not well-posed is said to be ill-posed.

Example 4.21. Consider the initial-values problem which consists of the equation
u''_tt + u''_xx = 0, t > 0, −∞ < x < ∞,
together with the initial values
(4.6.1) u(x, 0) = 0, u'_t(x, 0) = 0, −∞ < x < ∞.
The unique solution is given by the function which is constant 0:
u(x,t) ≡ 0, t ≥ 0, −∞ < x < ∞.
Let us now make a little perturbation of the initial values (4.6.1):
(4.6.2) u(x, 0) = 0, u'_t(x, 0) = 10⁻⁴ sin(10⁴x).
The solution to this new problem is given by
u(x,t) = 10⁻⁸ sin(10⁴x) sinh(10⁴t).
For large t we know that sinh(10⁴t) is approximately (1/2)exp(10⁴t). The tiny change in the initial
values gave rise to a change in the solution from the constant 0 to a function which grows exponentially
(from the sinh-factor) and oscillates rapidly (from the sine-factor). A really
dramatic change! This implies that the solution is not stable, and hence the problem is ill-posed.
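The size of this instability is easy to illustrate numerically. A minimal sketch (ours, not from the notes), tracking the amplitude 10⁻⁸ sinh(10⁴t) of the perturbed solution:

```python
import math

# Hedged illustration: amplitude of the perturbed solution
#   u(x,t) = 10^-8 sin(10^4 x) sinh(10^4 t).
def amplitude(t):
    return 1e-8 * math.sinh(1e4 * t)
```

At t = 0 the amplitude is 0, matching the unperturbed solution; at t = 0.001 it is still of order 10⁻⁴, but already at t = 0.002 it exceeds 1, even though the initial perturbation had size 10⁻⁴.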

Example 4.22. Show that the boundary-value problem



u'_t − ku''_xx = 0, 0 < x < l, 0 < t < T,
u(x, 0) = f(x), 0 < x < l,
u(0,t) = g(t), u(l,t) = h(t), 0 < t < T,

where f ∈ C[0, l] and g, h ∈ C[0, T], has a unique solution, u(x,t), in the rectangle
R : 0 ≤ x ≤ l, 0 ≤ t ≤ T.
Solution: Later on we will construct a solution to this problem (in Example 5.9)!
But for now, assume that we have two different solutions to the problem: u1 (x,t) and u2 (x,t). It
is then clear that the function
w(x,t) = u₁(x,t) − u₂(x,t)
must satisfy the boundary-value problem:
w'_t − kw''_xx = 0, 0 < x < l, 0 < t < T,
w(x, 0) = 0, 0 < x < l,
w(0,t) = w(l,t) = 0, 0 < t < T.


We now form the “energy integral”
E(t) = ∫₀ˡ w²(x,t) dx.
Observe that E(t) ≥ 0, E(0) = 0, and
E′(t) = ∫₀ˡ 2ww'_t dx = 2k ∫₀ˡ ww''_xx dx = [2kww'_x]₀ˡ − 2k ∫₀ˡ (w'_x)² dx = −2k ∫₀ˡ (w'_x)² dx ≤ 0.
Hence, the function E is decreasing from E(0) = 0, and since E ≥ 0 we must have E(t) ≡ 0. This
implies that also w(x,t) ≡ 0, i.e. u1 (x,t) = u2 (x,t) for all x,t. Since we assumed that the solutions
u1 and u2 were different we have arrived at a contradiction! Hence the problem must have a unique
solution! ♦
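The monotone decay of the energy integral can also be observed numerically. Below is our own sketch (the function name and discretization parameters are assumptions): an explicit finite-difference evolution of w with zero boundary values, recording E = ∫ w² dx after each step.

```python
import math

# Hedged numerical check of the energy argument: evolve
#   w'_t = k w''_xx, w(0,t) = w(l,t) = 0
# by explicit finite differences and record E = ∫ w^2 dx after each step.
def energy_decay(nx=41, dt=1e-4, steps=100, k=1.0, l=1.0):
    dx = l / (nx - 1)
    w = [math.sin(2 * math.pi * i * dx / l) for i in range(nx)]
    r = k * dt / dx**2
    assert r <= 0.5            # stability of the explicit scheme
    energies = []
    for _ in range(steps):
        w = [0.0] + [w[i] + r * (w[i - 1] - 2 * w[i] + w[i + 1])
                     for i in range(1, nx - 1)] + [0.0]
        energies.append(sum(v * v for v in w) * dx)
    return energies
```

The recorded energies decrease monotonically, in agreement with E′(t) ≤ 0 above.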

4.7. Some Remarks On Fourier Series

Consider a function f(x), −l < x < l. The Fourier coefficients of f are defined as
a₀ = (1/2l) ∫_{−l}^{l} f(x) dx,
aₙ = (1/l) ∫_{−l}^{l} f(x) cos(nπx/l) dx, n = 1, 2, . . . ,
bₙ = (1/l) ∫_{−l}^{l} f(x) sin(nπx/l) dx, n = 1, 2, . . . ,
and the Fourier series of f is defined by
S(x) = a₀ + Σ_{n=1}^{∞} (aₙ cos(nπx/l) + bₙ sin(nπx/l)).
For a more detailed discussion of Fourier series see Section 6.1. See also Fig. 4.7.
Assume that f(x) is infinitely differentiable in the interval −l < x < l, except at a finite number
of points of discontinuity. Then we have:
(a) S(x) = S(x + 2l), for all x,
(b) S(x) = f(x) at the points where f is continuous,
(c) S(x) = (1/2)[f(x+) + f(x−)] at points of discontinuity².

²Here f(x+) = lim_{y→x, y>x} f(y), and we define f(x−) similarly.

FIGURE: (a) A discontinuous function f(x) on (−l, l); (b) its Fourier series S(x).

When making a graph of a discontinuous function it is customary to indicate the value which is attained
by the function with a filled circle and the value which is not attained by an unfilled circle.

FIGURE 4.7.1. A square wave:
f(x) = k, 0 < x < l; f(x) = −k, −l < x ≤ 0.

Example 4.23. Consider the function f (x) from Fig. 4.7.1:


f(x) = k, 0 < x < l,
f(x) = −k, −l < x ≤ 0.
Note that f(x) is odd, i.e. f(−x) = −f(x). Since cos x is even, the function f(x) cos(nπx/l) is
odd, and we know that the integral of an odd function over a symmetric interval is always 0 (the “negative”
area cancels the “positive” area), hence a₀ = aₙ = 0 for all n. And we have

bₙ = (1/l) ∫_{−l}^{l} f(x) sin(nπx/l) dx
= (1/l) ∫_{−l}^{0} (−k) sin(nπx/l) dx + (1/l) ∫_{0}^{l} k sin(nπx/l) dx
= (2k/l) ∫_{0}^{l} sin(nπx/l) dx
= (2k/l) [−(l/nπ) cos(nπx/l)]₀ˡ
= (2k/nπ)(1 − cos nπ)
= (2k/nπ)(1 − (−1)ⁿ).

I.e.

b₁ = 4k/π, b₂ = 0, b₃ = 4k/3π, b₄ = 0, b₅ = 4k/5π, . . . ,

and the Fourier series of f is

S(x) = Σ_{n=1}^{∞} bₙ sin(nπx/l) = (4k/π) Σ_{m=0}^{∞} (1/(2m + 1)) sin((2m + 1)πx/l)
= (4k/π)(sin(πx/l) + (1/3) sin(3πx/l) + (1/5) sin(5πx/l) + · · ·).

See Fig. 4.7.2 for an illustration of some of the partial sums (containing only a finite number of
terms) of S(x).
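The partial sums of this series are easy to compute. A minimal sketch (ours; the function name and truncation level N are assumptions):

```python
import math

# Hedged sketch: the N-term partial sum of the square-wave Fourier series
#   S(x) = (4k/pi) * sum_{m>=0} sin((2m+1) pi x / l) / (2m+1).
def square_wave_partial_sum(x, N, k=1.0, l=math.pi):
    return (4 * k / math.pi) * sum(
        math.sin((2 * m + 1) * math.pi * x / l) / (2 * m + 1) for m in range(N))
```

At x = l/2, where f is continuous, the partial sums converge to the function value k (slowly, like an alternating series), while at x = 0 every partial sum is 0, the mean of the jump, in agreement with property (c) above.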

FIGURE 4.7.2. Partial Fourier series of the square wave: (a) the first term S₁(x) = (4k/π) sin x; (b) more terms, S₂(x) and S₃(x), obtained by adding (4k/3π) sin 3x and (4k/5π) sin 5x.

4.8. Separation of Variables

Separation of variables is a common method to solve certain types of PDEs. Since it originated from an
idea of Fourier it is also sometimes called Fourier’s method.

Model example: Solve the problem

(1) u'_t − ku''_xx = 0, 0 < x < l, t > 0,
(2) u(x, 0) = f(x), 0 < x < l,
(3) u(0,t) = u(l,t) = 0, t > 0.

What we mean by separating the variables in (1) is to seek a solution u(x,t) which can be factored as
u(x,t) = X(x)T(t),
where X(x) and T(t) are functions depending only on x and t respectively. Assume now that we can write
u in this way. If we differentiate u = XT we get u'_t(x,t) = X(x)T′(t) and u''_xx(x,t) = X′′(x)T(t), and if we
substitute these expressions into (1) we get the equation:
X(x)T′(t) − kX′′(x)T(t) = 0,
which can be rewritten as
T′(t)/(kT(t)) = X′′(x)/X(x).
We see that the left-hand side is a function of t only and the right-hand side is a function of x only. Hence,
the only possibility is that both sides equal a constant:
T′(t)/(kT(t)) = X′′(x)/X(x) = −λ,
for some constant λ (which we have to determine later). Instead of the PDE (1) we now have two ODEs:
T′(t) = −λkT(t),
X′′(x) = −λX(x),
with the general solutions
T(t) = Ce^{−λkt}, and X(x) = A sin(√λ x) + B cos(√λ x).

The boundary values (3) imply that either T ≡ 0 or X(0) = X(l) = 0. Since the first alternative only gives
us the solution which is constant 0, we see that X must satisfy the boundary conditions X(0) = X(l) = 0,
i.e.
X(0) = B = 0,
which tells us that B = 0, and we also see that
X(l) = A sin(√λ l) = 0.
To once again avoid the trivial solution X ≡ 0 (i.e. with A = 0) we must have sin(√λ l) = 0, which
implies that
√λ l = nπ, n ∈ Z₊,
or equivalently
λ = n²π²/l²,
for some positive integer n.
We have shown that if a solution to (1) can be factored as X(x)T(t) then it can be written as
K sin(nπx/l) exp(−n²π²kt/l²),

where n is a positive integer and K a constant. By the superposition principle (Section 4.5) the general solution
to (1) satisfying the boundary values (3) can be written as
u(x,t) = Σ_{n=1}^{∞} bₙ sin(nπx/l) exp(−n²π²kt/l²),
where the Fourier coefficients, {bₙ}_{n=1}^{∞}, are determined by the initial condition (2):
(*) u(x, 0) = f(x) = Σ_{n=1}^{∞} bₙ sin(nπx/l).

Let us for simplicity assume that l = π and consider some examples of initial values f (x) in the above
model example.

Example 4.24. Let f(x) = 2 sin x + 4 sin 3x. Then (*) is satisfied if b₁ = 2, b₂ = 0, b₃ = 4, b₄ = b₅ =
· · · = 0. Hence, the solution to the model example is

u(x,t) = 2 sin(x)e^{−kt} + 4 sin(3x)e^{−9kt}.


 
Example 4.25. Let f(x) = 1 = (4/π)(sin x + (1/3) sin 3x + (1/5) sin 5x + · · ·). Then (*) is satisfied if b₁ = 4/π,
b₂ = 0, b₃ = 4/3π, b₄ = 0, b₅ = 4/5π, b₆ = 0, etc. In this case, the solution to the model example is
given by
u(x,t) = (4/π)(sin(x)e^{−kt} + (1/3) sin(3x)e^{−9kt} + (1/5) sin(5x)e^{−25kt} + · · ·)
= (4/π) Σ_{n=1}^{∞} (1/(2n − 1)) sin((2n − 1)x) e^{−(2n−1)²kt}.
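A truncation of this series can be evaluated directly. The sketch below is our own illustration (the function name and the truncation level `terms` are assumptions):

```python
import math

# Hedged evaluation of the truncated series solution with f(x) = 1 on (0, pi):
#   u(x,t) ≈ (4/pi) * sum_{n=1}^{terms} sin((2n-1)x) e^{-(2n-1)^2 k t} / (2n-1)
def u_const_initial(x, t, k=1.0, terms=200):
    return (4 / math.pi) * sum(
        math.sin((2 * n - 1) * x) * math.exp(-(2 * n - 1) ** 2 * k * t) / (2 * n - 1)
        for n in range(1, terms + 1))
```

At t = 0 the truncated series is close to the initial value 1 away from the endpoints, it vanishes at x = 0 for all t, and for larger t the solution decays toward 0, dominated by the slowest mode (4/π) sin(x) e^(−kt).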

Example 4.26. If we have an arbitrary initial-value function f(x), 0 ≤ x ≤ π, the solution to the model
example is given by
u(x,t) = Σ_{n=1}^{∞} bₙ sin(nx) exp(−n²kt),
where
bₙ = (1/π) ∫_{−π}^{π} f_u(x) sin nx dx = (2/π) ∫_{0}^{π} f(x) sin nx dx.
Here f_u(x) is the extension of f(x) to an odd function on the interval −π < x < π, i.e. f_u(x) = f(x) if
x > 0 and f_u(x) = −f(−x) if x ≤ 0 (cf. Fig. 4.8.1).

FIGURE 4.8.1. Construction of an odd extension of a function: f(x) on (0, π) and its odd extension f_u(x) on (−π, π).

4.9. Exercises

4.1. [S] Determine, for each of the following differential equations, if it is linear or non-linear:
a) u'_t(x,t) + x²u''_xx(x,t) = 0.
b) ∂²u/∂t² + u ∂u/∂x = f(x,t).
c) uΔu − u'_t = 0.
d) ∂³u/∂t³ + ∂²u/∂t² + ∂u/∂t = u'_x.

4.2.* Determine, for each of the following partial differential equations, the regions where it is hyperbolic,
elliptic or parabolic:
a) u''_tt + xu''_xx + 2u'_x = f(x,t), (x,t) ∈ R².
b) y²u''_xx + u''_yy = 0, x ∈ R, y > 0.
c) ∂²u/∂t² = c²(∂²u/∂r² + (1/r) ∂u/∂r), t > 0, r > 0, and c ∈ R a constant.
d) sin x · u''_tt + 2u''_xt + cos x · u''_xx = tan x, t ∈ R, |x| ≤ π.

4.3. [S] Let u(x,t), t > 0, x > 0 denote the temperature in an infinitely long rod with heat conductance
coefficient k, and which we heat up by increasing the temperature at the end point such that u(0,t) =
t. Use the fact that u_α(x,t) = (4πkt)^{−1/2} e^{−(x−α)²/4kt} is a solution of u'_t − ku''_xx = 0 for each α ∈ R,
together with the superposition principle, to determine u(x,t). I.e. solve the problem
u'_t − ku''_xx = 0, x > 0, t > 0,
u(0,t) = t, t > 0.

4.4. Determine whether the following problems are well-posed or ill-posed:

a) u''_tt = u''_xx, u(0,t) = u(π,t) = u(x, 0) = u(x, π), x,t ∈ [0, π].
b) u'_t − ku''_xx = 0, u(0,t) = u(π,t) = 0, u(x, 0) = sin x, x ∈ [0, π], t > 0.
c) u'_t − ku''_xx = 0, u(0,t) = 0, u(x, 0) = sin(x/2), x ∈ [0, π], t > 0.
d) u'_t − ku''_xx = 0, u(0,t) = u(π,t) = 0, u(x, 0) = sin(x/2), x ∈ [0, π], t > 0.

4.5. [S] a) Determine the Fourier series for the function f(x) which in the interval −π < x < π is given
by f(x) = x².
b) Use a) to show that π²/12 = −Σ_{k=1}^{∞} (−1)ᵏ/k².

4.6.* Determine the Fourier series of f (t) = | sint|.

4.7. [S] Consider a rod of length L = 1 with heat conduction coefficient k = 1. At the beginning the
rod has the constant temperature 1. We then (instantaneously) cool down the ends of the rod to the
temperature 0, where we then keep them during the continuation of the experiment.
a) Formulate this problem mathematically.
b) Find an expression for the temperature of the rod at the point x at the time t.
(Hint: for the Fourier series expansion of the constant 1 use an odd periodic extension
in the interval.)

4.8.* Solve the following problem by separation of variables:

u'_t = u''_xx, 0 < x < 3, t > 0,
u(x, 0) = sin(πx) − 2 sin(πx/3), 0 < x < 3,
u(0,t) = u(3,t) = 0, t > 0.

4.9. [S] Solve the following problem

u'_t = u''_xx, 0 < x < π, t > 0,
u(x, 0) = sin²x, 0 < x < π,
u'_x(0,t) = u'_x(π,t) = 0, t > 0.

4.10. Solve the following problem

u''_tt = u''_xx, 0 < x < π, t > 0,
u(x, 0) = sin x, 0 < x < π,
u'_t(x, 0) = 1, 0 < x < π,
u(0,t) = u(π,t) = 0, t > 0.
CHAPTER 5

Introduction to Sturm-Liouville Theory and the Theory of


Generalized Fourier Series

We start with some introductory examples.

5.1. Cauchy’s equation

The homogeneous Euler-Cauchy equation (after Leonhard Euler and Augustin-Louis Cauchy) is a linear homogeneous ODE which can be written as
(*) x²y′′ + axy′ + by = 0.

Example 5.1. Solve the equation (*).

Solution: Set y(x) = xʳ; then y′(x) = rx^{r−1} and y′′(x) = r(r − 1)x^{r−2}. If we insert this into (*) we get
r(r − 1)xʳ + arxʳ + bxʳ = 0,
which gives us the equation
(**) r(r − 1) + ar + b = 0.
This is the so-called characteristic equation corresponding to (*). Assume that the solutions of (**) are
r₁ and r₂. We have three different cases:

1. If r₁ and r₂ are real and different, r₁ ≠ r₂, then
y(x) = Ax^{r₁} + Bx^{r₂}.
2. If r₁ and r₂ are real and equal, r₁ = r₂ = r, then
y(x) = Axʳ + Bxʳ ln x.
3. If r₁ and r₂ are complex conjugates, r₁ = α + iβ, r₂ = α − iβ, then
y(x) = Ax^{α+iβ} + Bx^{α−iβ}.


REMARK 6. Observe that
x^{α+iβ} = x^α e^{iβ ln x} = x^α(cos(β ln x) + i sin(β ln x))
and
x^{α−iβ} = x^α(cos(β ln x) − i sin(β ln x)),
hence we can write the solution of case 3 in the example above as
y(x) = x^α((A + B) cos(β ln x) + i(A − B) sin(β ln x)).

If we only consider constants A and B such that C = A + B and D = i (A − B) are real numbers then we
see that
y(x) = xα (C cos (β ln x) + D sin (β ln x))
is a real-valued solution to (*).

Example 5.2. Solve the differential equation
x²y′′ + 2xy′ − 6y = 0.
Solution: The characteristic equation is
r(r − 1) + 2r − 6 = 0,
i.e.
r² + r − 6 = 0,
which has the solutions
r₁ = 2, r₂ = −3.
Since we have two different real solutions we are in case 1 above, and the general solution to the
differential equation is given by
y(x) = Ax² + Bx⁻³.
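The general solution can be verified by substituting it back into the equation. A minimal sketch (ours; the function name is an assumption) that computes the residual of y = Ax² + Bx⁻³ using the exact derivatives:

```python
# Hedged check: residual of y = A x^2 + B x^-3 in x^2 y'' + 2x y' - 6y = 0,
# using the exact derivatives of y.
def cauchy_residual(A, B, x):
    y = A * x**2 + B * x**-3
    yp = 2 * A * x - 3 * B * x**-4    # y'
    ypp = 2 * A + 12 * B * x**-5      # y''
    return x**2 * ypp + 2 * x * yp - 6 * y
```

For any constants A, B and any x > 0 the residual vanishes (up to rounding), confirming that both xʳ¹ = x² and xʳ² = x⁻³ solve the equation.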

Example 5.3. Solve the equation
x²y′′ + 2xy′ + λy = 0, λ > 1/4.
Solution: The characteristic equation is
r² + r + λ = 0,
with solutions
r = −1/2 ± √(1/4 − λ) = −1/2 ± i√(λ − 1/4).
Since we now have two complex conjugate solutions r₁ = α + iβ and r₂ = α − iβ we are in case 3
above, and the general solution to the differential equation is given by
y(x) = x^{−1/2}(A sin(√(λ − 1/4) ln x) + B cos(√(λ − 1/4) ln x)).

5.2. Examples of Sturm-Liouville Problems

In the next section we will describe in more detail what is meant by a Sturm-Liouville problem (after
Charles-François Sturm and Joseph Liouville), but first we will look at some examples.

Example 5.4. Solve
y′′ + λy = 0,
y(0) = y(l) = 0.
Solution: Previously (cf. Section 4.8) we saw that this problem can be solved if and only if
λ = λₙ = (nπ/l)², n = 1, 2, 3, . . . (eigenvalues)
with the corresponding solutions
yₙ(x) = aₙ sin(nπx/l) (eigenfunctions).

Example 5.5. Solve
X′′(x) − λX(x) = 0, 0 ≤ x ≤ 1,
X(0) = 0,
X′(1) = −3X(1).
Solution: We have three different cases:

λ = 0: X(x) = Ax + B, and X(0) = 0 ⇒ B = 0, X′(1) = −3X(1) ⇒ A = −3A ⇒ A = 0. Hence, we
only get the trivial solution X(x) ≡ 0.
λ > 0: With λ = p² the solutions are given by X(x) = Ae^{px} + Be^{−px}. The boundary conditions
X(0) = 0 and X′(1) = −3X(1) give us the system
X(0) = A + B = 0,
X′(1) + 3X(1) = A(pe^p + pe^{−p}) + 3A(e^p − e^{−p}) = 0,
i.e. B = −A, and A = 0 or e^p(p + 3) + e^{−p}(p − 3) = 0, but this expression is never 0 for p ≠ 0
(show this!) and hence we must have A = −B = 0; also in this case we only get the trivial
solution X ≡ 0.
λ < 0: With λ = −p² we get the solution X(x) = A cos px + B sin px, and the boundary conditions
are X(0) = A = 0, and X′(1) = −3X(1), which gives pB cos p = −3B sin p, i.e.
B(p cos p + 3 sin p) = 0,
hence either B = 0 (and we get the trivial solution), or
p cos p + 3 sin p = 0,
i.e. p must satisfy the equation tan p = −p/3.
Thus, we see that we only have non-trivial solutions when λ is an eigenvalue λ = λₙ = −pₙ², n = 1, 2, . . . ,
where pₙ is a solution of tan p = −p/3 (see Fig. 5.2.1), and then we have the corresponding eigenfunctions
Xₙ(x) = aₙ sin pₙx.
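The roots pₙ of tan p = −p/3 have no closed form, but they are easy to compute numerically. A hedged sketch (ours): rewriting the condition as g(p) = p cos p + 3 sin p = 0 avoids the poles of tan p, so ordinary bisection applies.

```python
import math

# tan p = -p/3 is equivalent to g(p) = p cos p + 3 sin p = 0 (where cos p != 0);
# g is continuous, so we can bisect without worrying about the poles of tan.
def g(p):
    return p * math.cos(p) + 3 * math.sin(p)

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# The first root p_1 lies in (pi/2, pi): g(pi/2) = 3 > 0 and g(pi) = -pi < 0.
p1 = bisect(g, math.pi / 2, math.pi)
```

The computed p₁ ≈ 2.456 indeed satisfies tan p₁ = −p₁/3, and further roots pₙ can be bracketed in the same way between consecutive branches of tan p (cf. Fig. 5.2.1).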


Example 5.6. Solve
x²X′′(x) + 2xX′(x) + λX = 0,
X(1) = 0, X(e) = 0.
Solution: The characteristic equation is
r(r − 1) + 2r + λ = 0, i.e.
r² + r + λ = 0,

FIGURE 5.2.1. Solutions to tan p = −p/3: the intersections p₁, p₂, p₃, . . . of the curves y = tan p and y = −p/3.

which has the solutions
r = −1/2 ± √(1/4 − λ) = −1/2 ± i√(λ − 1/4),
hence the cases we must investigate are λ < 1/4, λ = 1/4 and λ > 1/4 (cf. Example 5.3).

λ < 1/4: With r₁,₂ = −1/2 ± √(1/4 − λ) (different real roots) we get the solutions X(x) = Ax^{r₁} + Bx^{r₂} and the
boundary conditions give
X(1) = 0, X(e) = 0 ⇔ A + B = 0, Ae^{r₁} + Be^{r₂} = 0 ⇔ A = −B, A(e^{r₁} − e^{r₂}) = 0,
i.e. since e^{r₁} ≠ e^{r₂} we must have A = 0, and we only get the trivial solution X ≡ 0.
λ = 1/4: Now we get a double root r = −1/2 and the solutions are X(x) = Ax^{−1/2} + Bx^{−1/2} ln x. The
boundary conditions give X(1) = A = 0 and X(e) = Be^{−1/2} = 0, i.e. A = B = 0 and we only
get the trivial solution X ≡ 0.
λ > 1/4: The two complex roots r = −1/2 ± i√(λ − 1/4) give the solutions
X(x) = (A/√x) sin(√(λ − 1/4) ln x) + (B/√x) cos(√(λ − 1/4) ln x),
and we get X(1) = B = 0, and X(e) = (A/√e) sin(√(λ − 1/4)) = 0, hence λ must satisfy
√(λ − 1/4) = nπ,
for some positive integer n. We get the eigenvalues
λₙ = 1/4 + (nπ)², n ∈ Z₊,

FIGURE 5.2.2. The Bessel function J₀(x) and its zeros α₁, α₂, α₃, α₄, α₅.
and the corresponding eigenfunctions
Xₙ(x) = (Aₙ/√x) sin(nπ ln x).

Example 5.7. (The Bessel equation)

An important ordinary differential equation in mathematical physics is the Bessel equation (after Wilhelm
Bessel) of order m:
r²w′′ + rw′ + (r² − m²)w = 0.
The solutions (there are two linearly independent ones) to this equation are called Bessel functions of
order m. (For more information see e.g. Bessel functions at engineering fundamentals.) Here we will
only consider a special case.
Solve the following problem involving the Bessel equation of order 0:
d²w/dr² + (1/r) dw/dr + k²w = 0,
w(R) = 0, |w(r)| < ∞.

Solution: A general solution is given by
w(r) = C₁J₀(kr) + C₂Y₀(kr),
where J₀ and Y₀ are the Bessel functions of the first and second kind of order 0. It is known that
Y₀ is not bounded (near r = 0), and if we impose the condition that w(r) must be bounded we get C₂ = 0. The
boundary condition implies that
w(R) = C₁J₀(kR) = 0,
and if we want a non-trivial solution (C₁ ≠ 0) then k and R must satisfy J₀(kR) = 0. It is well-known
that J₀ has infinitely many zeros αₙ (α₁ = 2.4048 . . . , α₂ = 5.5201 . . . , α₃ = 8.6537 . . . , etc., see
Fig. 5.2.2). Hence we only get non-trivial solutions for the eigenvalues
kₙ = αₙ/R, n ∈ Z₊,
with the corresponding eigenfunctions
wₙ(r) = J₀(αₙr/R), n ∈ Z₊.

5.3. Inner Product and Norm

To construct an orthonormal basis in a vector space we must be able to measure lengths and angles.
Hence we must introduce an inner product (a scalar product). With the help of an inner product we can
easily determine which elements are orthogonal to each other. We will consider two examples of vector spaces with inner products here: the plane R² together with the usual scalar product, and a vector space consisting of functions on an interval together with an inner product defined by an integral.
Vectors in R²
If we have two vectors ~x = (x1, x2) and ~y = (y1, y2), the inner product of ~x and ~y is defined by

    ~x · ~y = x1y1 + x2y2.

The norm of ~x, |~x|, is defined by

    |~x|² = ~x · ~x = x1² + x2²,

and the distance between ~x and ~y, |~x − ~y|, is given by

    |~x − ~y|² = (x1 − y1)² + (x2 − y2)².

The angle θ between ~x and ~y can now be computed using the relation

    ~x · ~y = |~x||~y| cos θ,

and we say that two vectors are orthogonal (perpendicular to each other), ~x ⊥ ~y, if θ = π/2, i.e. if

    ~x · ~y = 0.

A Function Space
We now consider the vector space consisting of functions f(x) defined on the interval [0, l] (for some l > 0) together with a positive weight-function r(x). The generalizations of the concepts above are

    ⟨f, g⟩ = ∫₀ˡ f(x)g(x)r(x) dx,  (inner product)
    ‖f‖² = ∫₀ˡ |f(x)|² r(x) dx,  (norm)
    ‖f − g‖² = ∫₀ˡ |f(x) − g(x)|² r(x) dx,  (distance)
    ⟨f, g⟩ = ‖f‖ ‖g‖ cos θ,  (angle)
    f ⊥ g ⇔ ⟨f, g⟩ = 0 ⇔ ∫₀ˡ f(x)g(x)r(x) dx = 0.  (orthogonality)
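The integral inner product is easy to experiment with numerically. The sketch below is our own illustration (not part of the original notes): it approximates ⟨f, g⟩ by composite Simpson quadrature and checks that the eigenfunctions Xn(x) = sin(nπ ln x)/√x of Example 5.6 are orthogonal on [1, e] with weight r(x) = 1, and that ‖X1‖² = 1/2.

```python
import math

def inner(f, g, r, a, b, n=2000):
    # composite Simpson approximation of <f, g> = int_a^b f(x) g(x) r(x) dx
    h = (b - a) / n
    s = f(a)*g(a)*r(a) + f(b)*g(b)*r(b)
    for k in range(1, n):
        x = a + k*h
        s += (4 if k % 2 else 2) * f(x)*g(x)*r(x)
    return s * h / 3

def X(n):
    # eigenfunctions from Example 5.6: X_n(x) = sin(n*pi*ln x)/sqrt(x) on [1, e]
    return lambda x: math.sin(n*math.pi*math.log(x)) / math.sqrt(x)

weight = lambda x: 1.0   # here the weight function is r(x) = 1

print(inner(X(1), X(2), weight, 1.0, math.e))   # ~0: X_1 and X_2 are orthogonal
print(inner(X(1), X(1), weight, 1.0, math.e))   # ~0.5: the squared norm ||X_1||^2
```

The substitution u = ln x turns both integrals into the familiar ∫₀¹ sin(nπu) sin(mπu) du, which is why the values 0 and 1/2 appear.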

5.4. Sturm-Liouville Problems

A general Sturm-Liouville problem can be written as

    (P(x)y′)′ + (−q(x) + λr(x))y = 0,  0 < x < l,
    c1y(0) + c2y′(0) = 0,
    c3y(l) + c4y′(l) = 0.

Here r(x), q(x) and P(x) are given functions, c1, . . . , c4 given constants and λ a constant which can only take certain values, the eigenvalues corresponding to the problem. r(x) is usually called a weight function. It is also customary to assume that r(x) > 0.

If P(x) > 0 and neither (c1, c2) nor (c3, c4) is (0, 0), we say that the problem is regular, and if P or r is 0 at some endpoint we say that it is singular (note that there are other examples of both regular and singular SL problems, e.g. the following problem is regular).

Example 5.8. r(x) = 1, P(x) = 1, q(x) = 0, c1 = c3 = 1, c2 = c4 = 0:

    y″ + λy = 0,
    y(0) = 0,
    y(l) = 0.

(Cf. Example 5.4). In this case we have

    λn = (nπ/l)²,  n = 1, 2, 3, . . . ,  (eigenvalues)
    yn = sin(nπx/l),  (eigenfunctions)

and

    ⟨yn, ym⟩ = ∫₀ˡ sin(nπx/l) sin(mπx/l) dx = 0,  if n ≠ m,
    ‖yn‖² = ∫₀ˡ sin²(nπx/l) dx = ∫₀ˡ (1/2)(1 − cos(2nπx/l)) dx = l/2.

If f is a function on the interval [0, l] we can define the Fourier series of f, S(x), by (cf. Section 6.1):

    S(x) = Σ_{n=1}^∞ cn sin(nπx/l),  where
    cn = (1/‖yn‖²) ⟨f, yn⟩ = (2/l) ∫₀ˡ f(x) sin(nπx/l) dx.

REMARK 7. Examples 5.5-5.7 are also Sturm-Liouville problems.
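For a concrete feeling for these coefficient formulas, the sketch below (our own illustration, with f chosen for convenience) computes cn for f(x) = x(1 − x) on [0, 1] by the midpoint rule; for this f the exact coefficients are cn = 8/(nπ)³ for odd n and 0 for even n, and the partial sum reproduces f at interior points.

```python
import math

def sine_coeff(n, f, l=1.0, quad=4000):
    # c_n = (2/l) * int_0^l f(x) sin(n*pi*x/l) dx, midpoint rule
    h = l / quad
    return (2.0/l) * sum(f((j + 0.5)*h) * math.sin(n*math.pi*(j + 0.5)*h/l)
                         for j in range(quad)) * h

f = lambda x: x*(1 - x)                    # sample function on [0, 1]
coeffs = [sine_coeff(n, f) for n in range(1, 51)]
S = lambda x: sum(c * math.sin((n + 1)*math.pi*x) for n, c in enumerate(coeffs))

print(coeffs[0], 8/math.pi**3)   # c_1 matches the exact value 8/pi^3
print(S(0.5), f(0.5))            # the partial sum reproduces f at x = 1/2
```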

For a regular Sturm-Liouville problem we have:

(i) The eigenvalues are real and to every eigenvalue the corresponding eigenfunction is unique
up to a constant multiple.
(ii) The eigenvalues form an infinite sequence λ1 , λ2 , . . . and they can be ordered as

0 ≤ λ1 < λ2 < λ3 < · · · ,

with
lim λn = ∞.
n→∞

(iii) If y1 and y2 are two eigenfunctions corresponding to two different eigenvalues, λ1 ≠ λ2, they are orthogonal with respect to the inner product defined by r(x), i.e.

    ⟨y1, y2⟩ = ∫₀ˡ y1(x)y2(x)r(x) dx = 0.
0

5.5. Generalized Fourier Series

We will now see how we can generalize the concept of Fourier series from the usual trigonometric basis
functions to an orthonormal basis consisting of eigenfunctions to a Sturm-Liouville problem.
Assume that we have an infinite linear combination

    f(x) = Σ_{n=1}^∞ cn yn(x),

where yn ⊥ ym for n ≠ m. Then

    ⟨f, ym⟩ = ⟨Σ_{n=1}^∞ cn yn, ym⟩ = Σ_{n=1}^∞ cn ⟨yn, ym⟩ = cm ⟨ym, ym⟩ = cm ‖ym‖².

Let f be an arbitrary function on [0, l]. Then we define the generalized Fourier series for f as

    S(x) = Σ_{n=1}^∞ cn yn(x),

where

    cn = (1/‖yn‖²) ⟨f, yn⟩

are the generalized Fourier coefficients.
Let y1, y2, . . . be a set of orthogonal eigenfunctions of a regular Sturm-Liouville problem, and let f be a piece-wise smooth function on [0, l]. Then, for each x in [0, l], we have that

    (a) S(x) = f(x) if f is continuous at x, and
    (b) S(x) = (1/2)(f(x+) + f(x−)) if f has a discontinuity at x.

5.6. Some Applications


Example 5.9. Consider a rod of length l, with constant density ρ, specific heat cv and thermal conduc-
tance κ. Let the temperature of the rod at the time t and the distance x (from, say, the left end point)
be denoted by u(x,t). Assume that the temperature at the end points of the rod are given by
u(0,t) = u(l,t) = 0, t > 0,
and that the temperature distribution in the rod at the initial time t = 0 is given by
u(x, 0) = f (x), 0 ≤ x ≤ l.
Determine u(x,t) for 0 ≤ x ≤ l, and t ≥ 0.
Solution: We have seen (cf. Chapter 1) that the mathematical formulation of this problem is

    (*)  u′t(x,t) − ku″xx(x,t) = 0,  0 ≤ x ≤ l, t ≥ 0,  k = κ/(cvρ),
         u(0,t) = u(l,t) = 0,  t > 0,
         u(x,0) = f(x),  0 ≤ x ≤ l.

We begin by performing the following natural scaling of the problem (cf. Chapter 1):

    (5.6.1)  t̃ = (k/l²)t,  x̃ = x/l.

Then we arrive at the following standard problem to solve:

    (1)  ũ′t(x,t) − ũ″xx(x,t) = 0,  0 ≤ x ≤ 1, t ≥ 0,
    (2)  ũ(0,t) = ũ(1,t) = 0,  t > 0,
    (3)  ũ(x,0) = f̃(x),  0 ≤ x ≤ 1,
where f˜(x) = f (xl). We can now use Fourier’s method to solve this problem (cf. section 4.8).
Step 1: Try to find solutions of the type
ũ(x,t) = X(x)T (t).
If we insert this expression in (1) above we get

    T′(t)/T(t) = X″(x)/X(x) = −λ,

i.e. the two equations

    (A)  X″(x) + λX(x) = 0, and
    (B)  T′(t) + λT(t) = 0.
The function u(x,t) = X(x)T(t) must also satisfy the boundary conditions (2):

    X(0)T(t) = X(1)T(t) = 0,  t ≥ 0,

and if we want a non-trivial solution (T ≢ 0) we conclude that X(0) = X(1) = 0.
This boundary condition together with (A) gives us the Sturm-Liouville problem

    (**)  X″(x) + λX(x) = 0,  X(0) = X(1) = 0.
Step 2: We get three cases depending on the value of λ: λ < 0, λ = 0, and λ > 0.

λ < 0: We get only the trivial solution X(x) ≡ 0.

λ = 0: We get only the trivial solution X(x) ≡ 0.

λ > 0: Then

    X(x) = A sin(√λ x) + B cos(√λ x),

and X(0) = 0 ⇒ B = 0, and X(1) = 0 ⇒ A sin √λ = 0 ⇒ A = 0 or √λ = nπ, n ∈ Z⁺.
Thus the SL-problem (**) has the following eigenvalues

    λn = (nπ)²,  n ∈ Z⁺,

and the corresponding eigenfunctions

    Xn(x) = sin(nπx).

Furthermore, for these values λ = λn, the equation (B) has the solution

    Tn(t) = e^{−(nπ)²t},

and we conclude that the general (separable) solution of (1) and (2) can be written as

    ũn(x,t) = cn sin(nπx) e^{−(nπ)²t}.
Step 3: The superposition principle (cf. section 4.5) tells us that the function

    ũ(x,t) = Σ_{n=1}^∞ b̃n sin(nπx) e^{−(nπ)²t}

also satisfies (1) and (2). We will now see that this function also satisfies the initial condition (3) if we choose appropriate constants b̃n. It is clear that

    ũ(x,0) = Σ_{n=1}^∞ b̃n sin(nπx),

and if we choose the b̃n's as the Fourier coefficients of f̃, i.e.

    b̃n = 2 ∫₀¹ f̃(x) sin(nπx) dx,

we actually get

    ũ(x,0) = Σ_{n=1}^∞ b̃n sin(nπx) = f̃(x).

We conclude that the function

    ũ(x,t) = Σ_{n=1}^∞ b̃n sin(nπx) e^{−(nπ)²t},

with b̃n as above, satisfies (1), (2) and (3).


Final step: By using the scaling (5.6.1) we see that the solution to the original problem is given by

    u(x,t) = Σ_{n=1}^∞ bn sin(nπx/l) e^{−(nπ/l)²kt},

where

    bn = (2/l) ∫₀ˡ f(x) sin(nπx/l) dx.
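The final series is straightforward to evaluate numerically. The following sketch is ours (truncating at N terms and computing bn by the midpoint rule); it is checked against the exactly solvable case f(x) = sin(πx/l), for which b1 = 1 and u(x,t) = sin(πx/l)e^{−(π/l)²kt}.

```python
import math

def heat_series(f, l, k, N=30, quad=2000):
    """Truncated series solution u(x,t) = sum b_n sin(n*pi*x/l) exp(-(n*pi/l)^2 k t)."""
    def b(n):
        # midpoint rule for b_n = (2/l) int_0^l f(x) sin(n*pi*x/l) dx
        h = l / quad
        return (2.0/l) * sum(f((j + 0.5)*h) * math.sin(n*math.pi*(j + 0.5)*h/l)
                             for j in range(quad)) * h
    bs = [(n, b(n)) for n in range(1, N + 1)]
    return lambda x, t: sum(bn * math.sin(n*math.pi*x/l) * math.exp(-(n*math.pi/l)**2 * k * t)
                            for n, bn in bs)

# check against the exactly solvable initial value f(x) = sin(pi*x), where b_1 = 1
u = heat_series(lambda x: math.sin(math.pi*x), l=1.0, k=1.0)
print(u(0.5, 0.1), math.exp(-math.pi**2 * 0.1))   # the two values agree
```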


Example 5.10. Consider a rod between x = 1 and x = e. Let u(x,t) denote the temperature of the rod
at the point x and time t. Assume that the end points are kept at the constant temperature 0, that at
the initial time t = 0 the rod has a heat distribution given by
u(x, 0) = f (x), 1 < x < e,
that no heat is added and that the rod has constant density ρ and specific heat cv . Assume also
that the rod has heat conductance K which varies as K(x) = x2 . The equation which determines the
temperature u(x,t) is in this case

    (1)  cvρ u′t = ∂/∂x (x²u′x),  1 < x < e, t > 0.

Determine u(x,t) for 1 ≤ x ≤ e and t > 0.
Solution: We apply Fourier's method of separating the variables and assume that we can find a solution of the form u(x,t) = X(x)T(t). Inserting this expression in (1) above we get

    cvρ T′/T = (1/X) d/dx (x²X′) = −λ,

where λ is a constant and X satisfies the boundary condition

    (2)  X(1) = X(e) = 0.

Thus T satisfies the equation

    (3)  T′ = −(λ/(cvρ)) T,

and X satisfies

    (x²X′)′ + λX = 0,  1 < x < e,

i.e.

    (4)  x²X″ + 2xX′ + λX = 0,  1 < x < e.

The equation (4) together with the boundary condition (2) gives us a regular Sturm-Liouville problem on [1, e]. The characteristic equation is

    r(r − 1) + 2r + λ = 0,

with the roots

    r1,2 = −1/2 ± √(1/4 − λ).

As in Example 5.6 we get three different cases depending on the value of λ:

λ = 1/4: We have a double root r = −1/2, and the solutions are given by X(x) = Ax^{−1/2} + Bx^{−1/2} ln x. The boundary condition (2) gives X(1) = A = 0 and X(e) = Be^{−1/2} = 0, i.e. we get only the trivial solution X ≡ 0.

λ < 1/4: The roots are now real and different, r1 ≠ r2, and the solutions are

    X(x) = Ax^{r1} + Bx^{r2}.

The boundary conditions give us

    X(1) = A + B = 0 and X(e) = Ae^{r1} + Be^{r2} = 0, i.e. A = −B and A(e^{r1} − e^{r2}) = 0,

and since r1 ≠ r2 we must have A = 0, and we only get the trivial solution X ≡ 0.

λ > 1/4: We have two complex roots r = −1/2 ± i√(λ − 1/4), and the general solution is

    X(x) = (A/√x) sin(√(λ − 1/4) ln x) + (B/√x) cos(√(λ − 1/4) ln x).

The boundary conditions imply that X(1) = B = 0 and X(e) = Ae^{−1/2} sin(√(λ − 1/4)) = 0, which gives us that

    √(λ − 1/4) = nπ,  n ∈ Z⁺.

Observe that the case n = 0 is the same as λ = 1/4. Hence the eigenvalues of the Sturm-Liouville problem (4), (2) are

    λn = 1/4 + n²π²,  n ∈ Z⁺,

and the corresponding eigenfunctions are

    Xn(x) = (1/√x) sin(nπ ln x),  n ∈ Z⁺.
x
For every fixed n, the equation (3) is

    T′n = −(λn/(cvρ)) Tn,

with the solutions

    Tn(t) = e^{−λnt/(cvρ)},  n ∈ Z⁺.

We conclude that the functions

    un(x,t) = Tn(t)Xn(x) = (1/√x) sin(nπ ln x) e^{−λnt/(cvρ)},  n ∈ Z⁺,

are solutions to the original equation, and they also satisfy the boundary conditions. The superposition principle implies that the function

    u(x,t) = Σ_{n=1}^∞ an (1/√x) sin(nπ ln x) e^{−λnt/(cvρ)}

is also a solution of the equation which satisfies the boundary conditions. Finally, to accommodate the initial values we must choose the constants an so that

    u(x,0) = Σ_{n=1}^∞ an (1/√x) sin(nπ ln x) = f(x).

This holds if we choose the constants an as

    an = (1/‖Xn‖²) ∫₁ᵉ f(x)Xn(x) dx = 2 ∫₁ᵉ f(x) (1/√x) sin(nπ ln x) dx.

(Note that ‖Xn‖² = ∫₁ᵉ (1/x) sin²(nπ ln x) dx = 1/2.) The wanted temperature distribution is thus given by

    u(x,t) = Σ_{n=1}^∞ an (1/√x) sin(nπ ln x) e^{−λnt/(cvρ)},

where

    an = 2 ∫₁ᵉ (f(x)/√x) sin(nπ ln x) dx.

Example 5.11. Solve the problem:

    (1)  ∂u/∂t = ∂²u/∂x²,
    (2)  u(0,t) = 0,
    (3)  u′x(1,t) = −3u(1,t),
    (4)  u(x,0) = f(x).

Solution: We use Fourier's method of separation of variables.
Step 1: Assume that u(x,t) = X(x)T(t) and insert this into (1). In the same way as in the previous examples we obtain the equation

    T′(t)/T(t) = X″(x)/X(x) = λ,

which gives us the two equations

    (5)  T′(t) − λT(t) = 0,
    (6)  X″(x) − λX(x) = 0.
Step 2: There are three cases of λ we must study.

λ = 0: The solutions to (5) and (6) are then T = constant and X = Ax + B, i.e. u(x,t) = Ax + B for some constants A and B. The boundary value (2) gives u(0,t) = B = 0, and (3) gives u′x(1,t) = A = −3u(1,t) = −3A, i.e. A = 0 and we only get the trivial solution u(x,t) ≡ 0.

λ > 0: The solutions to (5) are T(t) = Ae^{λt} and the solutions to (6) are X(x) = Be^{√λx} + Ce^{−√λx}. The boundary value (2) gives u(0,t) = T(t)X(0) = Ae^{λt}(B + C) = 0, hence either A = 0 (which implies u ≡ 0) or B = −C. The condition (3) is now equivalent to

    Ae^{λt} √λ B(e^{√λ} + e^{−√λ}) = −3Ae^{λt} B(e^{√λ} − e^{−√λ}),

i.e.

    ABe^{2√λ}(3 + √λ) = AB(3 − √λ),

which is only satisfied if AB = 0 (show this!), and in this case we only get the trivial solution u ≡ 0.

λ < 0: If we set λ = −p² we get (in the same manner as in Example 5.5) the solutions

    (*)  un(x,t) = Bn e^{−pn²t} sin(pn x),  n = 1, 2, 3, . . . ,

where the pn are solutions to the equation

    tan p = −p/3.
Step 3: All functions defined by (*) satisfy (1), (2) and (3). According to the superposition principle the function

    u(x,t) = Σ_{n=1}^∞ Bn e^{−pn²t} sin(pn x)

also satisfies (1), (2) and (3). Furthermore, (6) with the corresponding boundary conditions is a regular Sturm-Liouville problem and the theory of generalized Fourier series implies that u(x,t) will satisfy (4):

    u(x,0) = Σ_{n=1}^∞ Bn sin(pn x) = f(x)

if we choose the constants Bn as

    (**)  Bn = ⟨f(x), sin(pn x)⟩ / ‖sin(pn x)‖² = (∫₀¹ f(x) sin(pn x) dx) / (∫₀¹ sin²(pn x) dx).

Thus, the solution to the problem is

    u(x,t) = Σ_{n=1}^∞ Bn e^{−pn²t} sin(pn x),

where the pn are the positive solutions of tan p = −p/3, p1 < p2 < · · · (see Fig. 5.2.1), and Bn is defined by (**).
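The transcendental equation tan p = −p/3 has no closed-form roots, but each pn is easy to bracket and compute numerically. A small sketch of ours, using the fact that the n-th positive root lies between (2n − 1)π/2 and nπ:

```python
import math

def p_root(n):
    # n-th positive root of tan(p) = -p/3; it lies in ((2n-1)*pi/2, n*pi)
    g = lambda p: math.tan(p) + p/3.0
    lo, hi = (2*n - 1)*math.pi/2 + 1e-9, n*math.pi
    for _ in range(200):            # plain bisection
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

p1, p2 = p_root(1), p_root(2)
print(p1, p2)                  # p_1 < p_2, as in Fig. 5.2.1
print(math.tan(p1) + p1/3)     # residual ~0
```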



Example 5.12. (The wave equation) A vibrating circular membrane with radius R is described by the following equation together with boundary and initial values:

    (1)  u″tt = c²(u″xx + u″yy),  t > 0,  r = √(x² + y²) ≤ R,
    (2)  u(R,t) = 0,  t > 0,  (fixed boundary)
    (3)  u(r,0) = f(r),  r ≤ R,  (initial position)
    (4)  ∂u/∂t (r,0) = g(r),  r ≤ R.  (initial velocity)

Observe that the initial conditions only depend on r = √(x² + y²), the distance from the center of the membrane to the point (x,y). If we introduce polar coordinates

    x = r cos θ,  y = r sin θ,

we see that (1) can be written as

    ∂²u/∂t² = c²(∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂θ²).

If we also make the assumption that u(r,θ,t) is radially symmetric (i.e. that u is independent of the angle θ) we can write (1) as

    (1′)  ∂²u/∂t² = c²(∂²u/∂r² + (1/r) ∂u/∂r).
To solve the problem we continue as before and use Fourier's method to separate the variables. With the function u(r,t) = W(r)G(t) inserted into (1′) we get the equations

    (5)  W″ + (1/r)W′ + k²W = 0,  0 ≤ r ≤ R,
    (6)  G″ + (ck)²G = 0,  t > 0.

Furthermore, we get the following boundary value from (2):

    (7)  W(R) = 0,

and (5) together with (7) is a Sturm-Liouville problem (singular, since the coefficient vanishes at r = 0) which gives us the eigenfunctions

    Wn(r) = J0(αn r/R),

where αn = knR are solutions of J0(kR) = 0 (see Example 5.7). Observe that if we write (5) in the general form we see that the weight function is r, i.e. the inner product is given by

    ⟨f, g⟩ = ∫₀ᴿ f(r)g(r) r dr.
By solving (6) for k = kn and using the superposition principle we see that

    (*)  u(r,t) = Σ_{n=1}^∞ (An cos(cαnt/R) + Bn sin(cαnt/R)) J0(αn r/R)

is a solution to (1) and (2). We can also choose the constants An so that (3) is satisfied, i.e.

    u(r,0) = Σ_{n=1}^∞ An J0(αn r/R) = f(r),

if

    (**)  An = (1 / ∫₀ᴿ J0(αn r/R)² r dr) ∫₀ᴿ f(r) J0(αn r/R) r dr.

In the same way we see that (4) is satisfied if we choose Bn so that

    (***)  Bn = (R/(cαn)) (1 / ∫₀ᴿ J0(αn r/R)² r dr) ∫₀ᴿ g(r) J0(αn r/R) r dr.

Hence, the answer to the problem is given by (*) where An and Bn are chosen as in (**) and (***).
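The zeros αn of J0 that this example and Example 5.7 rely on can be computed from the power series J0(x) = Σ (−1)^k (x/2)^{2k}/(k!)². A small sketch of ours, with brackets chosen by inspecting Fig. 5.2.2:

```python
import math

def J0(x):
    # power series J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2
    term, s = 1.0, 1.0
    for k in range(1, 60):
        term *= -(x*x/4.0) / (k*k)
        s += term
    return s

def bessel_zero(lo, hi):
    # bisection for a zero of J0 bracketed in [lo, hi]
    for _ in range(100):
        mid = (lo + hi) / 2
        if J0(lo) * J0(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

alpha1 = bessel_zero(2.0, 3.0)
alpha2 = bessel_zero(5.0, 6.0)
print(alpha1, alpha2)   # ~2.4048 and ~5.5201, cf. Example 5.7
```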

5.7. Exercises

5.1. [S] Solve the following S-L problems by determining the eigenvalues and eigenfunctions:

    (a) (x²u′(x))′ + λu(x) = 0, 1 < x < e^L, u(1) = u(e^L) = 0,
    (b) (x²u′(x))′ + λu(x) = 0, 1 < x < e^L, u(1) = u′(e) = 0.

5.2.* Solve the following S-L problems by determining the eigenvalues and eigenfunctions:

    (a) u″(x) + λu(x) = 0, 0 < x < l, u′(0) = u′(l) = 0,
    (b) u″(x) + λu(x) = 0, 0 < x < l, u′(0) = u(l) = 0.

5.3. [S] Use Fourier’s method to solve the following problem:


ut0 = u00xx , 0 ≤ x ≤ l, t > 0,
0 0
ux (0,t) = ux (l,t) = 0, t > 0,
u(x, 0) = f (x), 0 < x < l.

5.4.* A rod between x = 1 and x = e has √ constant temperature 0 at the endpoints, and at the time
t = 0 the heat distribution is given by x, 1 < x < e. The rod has a constant density ρ and constant
specific heat C, but its thermical conductance varies like K = x2 , 1 < x < e. Formulate an initial and
boundary values problem for the temperature of the rod, u(x,t). Then use fourier’s method to solve
the problem.

5.5.*
(a) Solve the problem

    u′t = 4u″xx, 0 ≤ x ≤ 1, t > 0,
    u(0,t) = 0, t > 0,
    u′x(1,t) = −cu(1,t), t > 0,
    u(x,0) = x for 0 ≤ x < 1/2, and u(x,0) = 1 − x for x ≥ 1/2.

(b) Give a physical interpretation of the problem in (a).

5.6. [S] Consider an ideal liquid, flowing orthogonally towards an infinitely long cylinder of radius a. Since the problem is uniform in the axial coordinate we can treat the problem in plane polar coordinates.

[Figure: cross-section of the cylinder of radius a; the flow is along the x-axis.]

The velocity of the liquid, ~v(r,θ), is then given by the equation

    ~v(r,θ) = −grad ψ,

where ψ is a solution of the Laplace equation

    Δψ = 0.

At the surface of the cylinder we have the boundary condition

    ∂ψ/∂r |_{r=a} = 0,

and as r → ∞ we have the following asymptotic boundary condition

    lim_{r→∞} ψ/x = lim_{r→∞} ψ/(r cos θ) = −v0,

where v0 is a constant.

a) Show, using separation of variables, that in polar coordinates the assumption ψ(r,θ) = R(r)Θ(θ) transforms the Laplace equation into the following two equations

    Θ″(θ) + m²Θ(θ) = 0,
    R″(r) + (1/r)R′(r) − (m²/r²)R(r) = 0,

where m is an integer.
b) Use a) to find ψ and ~v.
CHAPTER 6

Introduction to Transform Theory with Applications

6.1. Transforms of Fourier Series Type


Example 6.1. (The classical form)
If f(t) is defined for t ∈ [−l, l] (or alternatively periodic with period 2l) we can construct a (classical) Fourier series (Joseph Fourier) for f:

    Fcl : f(t) → {a0, a1, b1, . . . , an, bn, . . .},

where

    a0 = (1/2l) ∫_{−l}^{l} f(t) dt,
    an = (1/l) ∫_{−l}^{l} f(t) cos(nΩt) dt,  n = 1, 2, . . . , and
    bn = (1/l) ∫_{−l}^{l} f(t) sin(nΩt) dt,  n = 1, 2, . . . ,

are the Fourier coefficients (amplitudes). Here we have defined Ω = π/l. The "signal" f(t) can be reconstructed (in points of continuity) in the following way:

    Fcl⁻¹ : f(t) = a0 + Σ_{n=1}^∞ (an cos(nΩt) + bn sin(nΩt)).

DEFINITION. (Generalized form)
In the generalized form we use, for example, eigenfunctions from a Sturm-Liouville problem (Section 5.4) instead of the sine and cosine functions.
Let {yn(t)}_{n=1}^∞ be an orthogonal system (basis functions), i.e.

    ⟨yn, ym⟩ = 0 if n ≠ m, and ⟨yn, yn⟩ = ‖yn‖².

We then define

    Fd : f(t) → {a*n}_{n=1}^∞,

where

    a*n = (1/‖yn‖²) ⟨f, yn⟩

are the Fourier coefficients. Under rather general assumptions we can reconstruct the signal f(t) (in points of continuity) by

    Fd⁻¹ : f(t) = Σ_{n=1}^∞ a*n yn(t).
n=1


REMARK 8. The classical form in Example 6.1 is obtained by considering

    {yn(t)}_{n=1}^∞ = {1, cos Ωt, sin Ωt, . . . , cos nΩt, sin nΩt, . . .}.

Note that in this case we have

    ⟨f, cos nΩt⟩ = ∫_{−l}^{l} f(t) cos(nΩt) dt, and
    ‖cos nΩt‖² = ∫_{−l}^{l} cos²(nΩt) dt = ∫_{−l}^{l} (1 + cos(2nΩt))/2 dt = l.

Observe also that the integrals can be taken over any period of f, e.g. [0, 2l].
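As a numerical sanity check on these coefficient formulas, the sketch below (our own illustration) computes a1 and b1 for the sample signal f(t) = t on [−1, 1] by the midpoint rule; since f is odd, an = 0 and bn = 2(−1)^{n+1}/(nπ).

```python
import math

def fourier_coeffs(f, l, n, quad=4000):
    # a_n and b_n of the classical Fourier series on [-l, l], with Omega = pi/l
    Omega, h = math.pi/l, 2*l/quad
    a = b = 0.0
    for j in range(quad):                      # midpoint rule
        t = -l + (j + 0.5)*h
        a += f(t) * math.cos(n*Omega*t) * h / l
        b += f(t) * math.sin(n*Omega*t) * h / l
    return a, b

a1, b1 = fourier_coeffs(lambda t: t, 1.0, 1)
print(a1, b1)   # ~0 and ~2/pi: f(t) = t is odd, so only sine terms appear
```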
Example 6.2. (Classical complex form)

    Fc : f(t) → {cn}_{n=−∞}^∞,

where

    c0 = (1/2l) ∫_{−l}^{l} f(t) dt, and
    cn = (1/2l) ∫_{−l}^{l} f(t)e^{−inΩt} dt,  n = ±1, ±2, . . . .

Here we have the reconstruction formula:

    Fc⁻¹ : f(t) = Σ_{n=−∞}^∞ cn e^{inΩt}.

REMARK 9. The complex form in Example 6.2 can be deduced from the formulas in Example 6.1 and Euler's formulas:

    (6.1.1)  sin t = (e^{it} − e^{−it})/(2i),  cos t = (e^{it} + e^{−it})/2,
             or equivalently  e^{it} = cos t + i sin t,  e^{−it} = cos t − i sin t.

We have

    f(t) = a0 + Σ_{n=1}^∞ (an cos nΩt + bn sin nΩt)
         = a0 + Σ_{n=1}^∞ (an (e^{inΩt} + e^{−inΩt})/2 + bn (e^{inΩt} − e^{−inΩt})/(2i))
         = a0 + Σ_{n=1}^∞ ((an/2 + bn/(2i)) e^{inΩt} + (an/2 − bn/(2i)) e^{−inΩt})
         = c0 + Σ_{n=1}^∞ (cn e^{inΩt} + c̄n e^{−inΩt}),

where we let c0 = a0 and cn = an/2 + bn/(2i). (Observe that an and bn are real numbers.) If we additionally define

    c−n = c̄n,

we get

    f(t) = Σ_{n=−∞}^∞ cn e^{inΩt}.

Moreover:

    n > 0:  cn = an/2 − i bn/2 = (1/2l) ∫_{−l}^{l} f(t)e^{−inΩt} dt,
    n = 0:  c0 = a0,
    n < 0:  cn = c̄−n = a−n/2 + i b−n/2 = (1/2l) ∫_{−l}^{l} f(t) cos(−nΩt) dt + (i/2l) ∫_{−l}^{l} f(t) sin(−nΩt) dt
          = (1/2l) ∫_{−l}^{l} f(t)e^{−inΩt} dt.

6.2. The Laplace Transform

If f(t) is defined for t ≥ 0, the (unilateral) Laplace transform (Pierre-Simon Laplace) L and its inverse L⁻¹ are defined by:

    L : f(t) ↦ F(s) = L{f(t)}(s) = ∫₀^∞ e^{−st} f(t) dt,
    L⁻¹ : F(s) ↦ f(t) = L⁻¹{F(s)}(t) = (1/(2πi)) ∫_{a−i∞}^{a+i∞} F(s)e^{st} ds.

Note that if f(t)e^{−σ0t} → 0 as t → ∞, then the first integral converges for all complex numbers s with real part greater than σ0, and in the second integral we then demand that a > σ0.
REMARK 10. In applications the inverse transforms are usually computed by using a table (see e.g. Appendix A-1, p. 90). When computing the inverse transform it is sometimes also useful to remember how to compute partial fraction decompositions (see e.g. Appendix A-6, p. 99).

It is obvious that the Laplace transform is linear, i.e.

    L{af(t) + bg(t)} = aL{f(t)} + bL{g(t)}.

Apart from computing the Laplace transform of a function by using the integral in the definition above, one can also use the general properties stated below, which also illustrate some important properties of the Laplace transform.
Differentiation

    L{f′(t)}(s) = sL{f(t)}(s) − f(0),
    L{f″(t)}(s) = s²L{f(t)}(s) − sf(0) − f′(0),
    . . .
    L{f⁽ⁿ⁾(t)}(s) = sⁿL{f}(s) − s^{n−1}f(0) − s^{n−2}f′(0) − · · · − f^{(n−1)}(0).

Convolution
The convolution product of two functions f and g, f ⋆ g, over a finite interval [0, t] is defined as

    (f ⋆ g)(t) = ∫₀ᵗ f(u)g(t − u) du.

For the Laplace transform we then have

    L{f ⋆ g} = L{f} L{g}.

In fact

    L{f ⋆ g} = ∫₀^∞ (∫₀ᵗ f(u)g(t − u) du) e^{−st} dt
             = ∫₀^∞ ∫ᵤ^∞ f(u)g(t − u)e^{−st} dt du
             = ∫₀^∞ f(u)e^{−su} (∫ᵤ^∞ g(t − u)e^{−s(t−u)} dt) du
             = {x = t − u} = ∫₀^∞ f(u)e^{−su} (∫₀^∞ g(x)e^{−sx} dx) du
             = L{f} L{g}.

Observe that in the second equality we used the identity ∫₀^∞ ∫₀ᵗ du dt = ∫₀^∞ ∫ᵤ^∞ dt du, which follows from the fact that both sides represent an area integral in the (u,t)-plane over the octant between the positive t-axis and the line t = u.
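The convolution rule is easy to check numerically. The sketch below (ours) approximates both sides for f = g = t at s = 2, where f ⋆ g = t³/6 and L{f}(s)² = 1/s⁴ = 0.0625; the integrals are truncated and evaluated with the midpoint rule, so agreement is only approximate.

```python
import math

def laplace(f, s, T=30.0, n=5000):
    # truncated midpoint approximation of int_0^infty f(t) e^{-s t} dt
    h = T / n
    return sum(f((j + 0.5)*h) * math.exp(-s*(j + 0.5)*h) for j in range(n)) * h

def conv(f, g, t, n=300):
    # (f * g)(t) = int_0^t f(u) g(t - u) du, midpoint rule
    h = t / n
    return sum(f((j + 0.5)*h) * g(t - (j + 0.5)*h) for j in range(n)) * h

f = lambda t: t
s = 2.0
lhs = laplace(lambda t: conv(f, f, t), s)   # L{f * g}(s)
rhs = laplace(f, s) ** 2                    # L{f}(s) * L{g}(s)
print(lhs, rhs)   # both ~ 1/s**4 = 0.0625
```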
Damping
By damping a "signal" f(t) exponentially, i.e. multiplying f(t) by e^{−at}, one obtains a translation of the Laplace transform of f:

    L{e^{−at}f(t)}(s) = ∫₀^∞ e^{−at}f(t)e^{−st} dt = ∫₀^∞ f(t)e^{−(s+a)t} dt = L{f}(s + a).

I.e. we have the following formula:

    (6.2.1)  L{e^{−at}f(t)}(s) = L{f}(s + a).

Time delay
Heaviside's function is defined by

    θ(t) = 0 for t < 0, and θ(t) = 1 for t ≥ 0,

and for a ∈ R the function t ↦ θ(t − a) takes the value 0 when t < a and 1 when t ≥ a (see Fig. 6.2.1). The purpose of the function θ(t − a) is to switch on a signal at time t = a, and one can also form the function θ(t − a) − θ(t − b), which switches on a signal at time t = a and switches it off at time t = b:

    f(t)(θ(t − a) − θ(t − b)) = f(t) for a ≤ t ≤ b, and 0 otherwise.

Another use of Heaviside's function is time delay. To translate a function f(t) which is defined for t ≥ 0 (i.e. delay the signal) one can form the function t ↦ f(t − a)θ(t − a), which is 0 when t < a and f(t − a) when t ≥ a. The Laplace transform of this function is given by a damping on the transform side:

    L{f(t − a)θ(t − a)}(s) = ∫₀^∞ f(t − a)θ(t − a)e^{−st} dt = ∫ₐ^∞ f(t − a)e^{−st} dt
                           = [u = t − a] = ∫₀^∞ f(u)e^{−s(u+a)} du = e^{−as} L{f}(s),

[Figure 6.2.1: Shifted Heaviside's function, equal to 0 before the shift point and 1 after it.]

i.e. we have the relation

    (6.2.2)  L{f(t − a)θ(t − a)}(s) = e^{−as} L{f}(s).

Example 6.3. We have L{1}(s) = 1/s, L{t}(s) = 1/s², . . . , L{tⁿ}(s) = n!/s^{n+1}, since

    L{1}(s) = ∫₀^∞ e^{−st} dt = [e^{−st}/(−s)]₀^∞ = 1/s,
    L{t}(s) = ∫₀^∞ t e^{−st} dt = [t e^{−st}/(−s)]₀^∞ + (1/s) ∫₀^∞ e^{−st} dt = 0 + (1/s)(1/s) = 1/s², etc.

Observe that when we calculate the integral from 0 to ∞ of tⁿe^{−st}, each integration by parts gives us an s in the denominator and a factor k in the numerator, and since tᵏe^{−st} vanishes at both limits of the integral (when k > 0) all terms vanish except the last, (n!/sⁿ) ∫₀^∞ e^{−st} dt.

By using this example and the damping formula (6.2.1) we can easily compute for example

    L{e^{−at}}(s) = 1/(s + a),  L{te^{−at}}(s) = 1/(s + a)², etc.


Example 6.4. Let f(t) = e^{iat}, where a is a constant and t ≥ 0. The Laplace transform of f is then given by:

    L{e^{iat}}(s) = ∫₀^∞ e^{iat}e^{−st} dt = ∫₀^∞ e^{(ia−s)t} dt = [e^{(ia−s)t}/(ia − s)]₀^∞
                 = 1/(s − ia) = (s + ia)/(s² + a²) = s/(s² + a²) + i a/(s² + a²),

and since L is linear, Euler's formulas (6.1.1) imply that

    L{cos at}(s) = s/(s² + a²),
    L{sin at}(s) = a/(s² + a²).

Example 6.5. Solve the initial value problem

    y″ + y = 1,
    y(0) = y′(0) = 0.

Solution: Let L{y(t)} = Y(s). Then L{y″(t)} = s²Y(s), and if we (Laplace-)transform the equation above we get

    L{y″ + y} = s²Y(s) + Y(s) = L{1} = 1/s,

i.e.

    Y(s) = (1/s)/(1 + s²) = 1/(s(1 + s²)) = 1/s − s/(1 + s²).

If we apply the inverse Laplace transform we get

    y(t) = L⁻¹{Y(s)}(t) = L⁻¹{1/s}(t) − L⁻¹{s/(1 + s²)}(t) = 1 − cos t.
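A quick way to gain confidence in an inverse-transform computation is to substitute the result back into the ODE. The sketch below (ours) checks y(t) = 1 − cos t against y″ + y = 1 using a central-difference second derivative.

```python
import math

y = lambda t: 1 - math.cos(t)    # candidate solution from the Laplace calculation

def ypp(t, h=1e-5):
    # central-difference approximation of y''(t)
    return (y(t + h) - 2*y(t) + y(t - h)) / (h*h)

for t in (0.3, 1.0, 2.5):
    print(ypp(t) + y(t))   # ~1 at every t, so y'' + y = 1
print(y(0))                # initial condition y(0) = 0
```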

Example 6.6. (The heat conduction equation)
Consider the boundary value problem

    ut − kuxx = 0,  t > 0, x > 0,
    u(x,0) = 0,  x > 0,
    u(0,t) = 1,  t > 0,

where u(x,t) is a bounded function (u(x,t) gives the heat at the point x at time t). Now transform the entire equation in the time variable, and let U(x,s) denote the Laplace transform of u(x,t). The equation ut − kuxx = 0 can then be written as

    sU(x,s) − kUxx(x,s) = 0,

and if we solve this ordinary differential equation (in the x-variable), we get

    U(x,s) = Ae^{−√(s/k) x} + Be^{√(s/k) x},

where A and B are functions of s. Since we assumed u to be bounded (in both variables) the term containing e^{√(s/k) x} must vanish, i.e. B = 0, and we get

    U(x,s) = A(s)e^{−√(s/k) x}.

The boundary condition implies U(0,s) = L{u(0,t)}(s) = L{1}(s) = 1/s, but we see that U(0,s) = A(s), so A(s) = 1/s and

    U(x,s) = (1/s)e^{−√(s/k) x}.

To find u we must now apply the inverse transform to U. For this purpose it is convenient to use a table, and using Appendix 1, p. 90 we see that

    u(x,t) = erfc(x/(2√(kt))),

where erfc is the complementary error function,

    erfc(t) = 1 − erf(t),  erf(t) = (2/√π) ∫₀ᵗ e^{−z²} dz.

6.3. The Fourier Transform

The counterpart of Fourier series for functions f(t) defined on R is the Fourier transform, F{f}, which we define as

    F : f(t) ↦ f̂(ω) = F{f(t)} = ∫_{−∞}^∞ f(t)e^{−iωt} dt,

for functions f(t) such that the integral converges. We also have an inverse transform

    F⁻¹ : f(t) = (1/2π) ∫_{−∞}^∞ f̂(ω)e^{iωt} dω.
R EMARK 11. We can still interpret the formula as if we reconstruct the signal f (t) as a sum of waves
(basis functions) eiωt , with amplitudes fˆ(ω).
R EMARK 12. In applications it is customary to find the inverse transform using appropriate tables (see
Appendix 2, p. 91).

In the same manner as for the Laplace transform we can derive a number of useful general properties for
the Fourier transform.
• Linearity

    F{af(t) + bg(t)} = aF{f(t)} + bF{g(t)}.

• Differentiation

    F{f′(t)} = iωF{f(t)},
    F{f″(t)} = (iω)²F{f(t)},
    . . .
    F{f⁽ⁿ⁾(t)} = (iω)ⁿF{f(t)}.

• Convolution

    F{f ⋆ g} = F{f(t)}F{g(t)},

where the convolution over R is defined by

    (f ⋆ g)(t) = ∫_{−∞}^∞ f(t − u)g(u) du.

• Frequency modulation

    F{e^{iat}f(t)}(ω) = F{f(t)}(ω − a) = f̂(ω − a).

• Time delay

    F{f(t − a)} = e^{−iωa} f̂(ω).

Example 6.7. Let f(t) = θ(t)e^{−t} (θ(t) is defined as on p. 46); then

    F{θ(t)e^{−t}}(ω) = ∫_{−∞}^∞ θ(t)e^{−t}e^{−iωt} dt = ∫₀^∞ e^{−(1+iω)t} dt
                     = [−e^{−(1+iω)t}/(1 + iω)]₀^∞ = 1/(1 + iω).

I.e. f̂(ω) = 1/(1 + iω).
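This transform pair can be verified by direct numerical integration. The sketch below (ours) truncates the integral at T = 40, where e^{−t} is negligible, and compares with 1/(1 + iω):

```python
import cmath

def fourier_hat(omega, T=40.0, n=40000):
    # midpoint approximation of int_0^infty e^{-t} e^{-i*omega*t} dt
    h = T / n
    return sum(cmath.exp(-(1 + 1j*omega) * (j + 0.5) * h) for j in range(n)) * h

for w in (0.0, 1.0, 3.0):
    print(fourier_hat(w), 1/(1 + 1j*w))   # each pair agrees
```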

Example 6.8. (Heat conduction equation with an initial temperature distribution)
Assume that we have an infinitely long rod with temperature distribution in the point x at the
time t given by u(x,t), x ∈ R, t ≥ 0. Assume also that at the initial time t = 0 the temperature is
distributed according to the function f (x), i.e. u(x, 0) = f (x). To determine u we must solve the
following initial value problem:

    (6.3.1)  u′t − ku″xx = 0,  −∞ < x < ∞, t > 0,
             u(x,0) = f(x),  −∞ < x < ∞.

Solution: By using the Fourier transform in the same way as the Laplace transform in Example 6.6 we get (after some calculations) that

    u(x,t) = (1/√(4πkt)) ∫_{−∞}^∞ f(z)e^{−(x−z)²/4kt} dz.


REMARK 13. The function

    G(x,t) = (1/√(4πkt)) e^{−(x−y)²/4kt}

is the so-called Green's function or the unit impulse solution to the following problem:

    G′t − kG″xx = 0,
    G(x,0) = δy(x).

Here δy(x) is the Dirac delta function (Paul Dirac), which is usually characterized by the property that

    ∫_{−∞}^∞ g(x)δy(x) dx = g(y),

or alternatively formulated

    (g ⋆ δy)(u) = g(u − y).

Green's method: The solution to (6.3.1) is given by

    u = f ⋆ G.

Observe that δy(x) is not a function strictly speaking, but a distribution. If y = 0 we simply write δ0(x) = δ(x). In connection with applications δy(x) is usually called a unit impulse (at the point x = y). When considering a physical system, the occurrence of δ(t) should be viewed as meaning that the system is subjected to a short (momentary) force. (For example, if you hit a pendulum with a hammer at time 0, the system will be described by an equation of the type mÿ + aẏ + by = cδ(t).)

Sampling
Sampling here means that we reconstruct a continuous function from a set of discrete (measured/sampled) function values:

    S : f(t) → {f(nδ)},  where δ is the length of the sampling interval.

[Figure 6.3.1: A function sampled at the uniformly spaced points −2δ, −δ, 0, δ, 2δ, 3δ.]

DEFINITION 6.1. A function f(t) is said to be band limited if the Fourier transform of f, F(f), only contains frequencies in a bounded interval, i.e. if f̂(ω) = 0 for |ω| ≥ c for some constant c. (The counterpart for periodic functions is of course that the Fourier series is a finite sum.)
THEOREM. (The sampling theorem)
A continuous band limited signal f(t) can be uniquely reconstructed from its values at uniformly distributed points (sampling points) if the distance between two neighboring points is at most π/c. In this case we have:

    S⁻¹ : f(t) = Σ_{k=−∞}^∞ f(kπ/c) · sin(ct − kπ)/(ct − kπ).

(Here the sampling is performed over the points xk = kπ/c.)
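The reconstruction formula can be tried out directly. The sketch below (ours) takes the band limited signal f(t) = sinc(t − 0.3) with band limit c = 1, samples it at the points kπ, and rebuilds it from a large but truncated number of samples; the truncation is why the agreement is only to a few decimals.

```python
import math

def sinc(x):
    return 1.0 if abs(x) < 1e-12 else math.sin(x)/x

c, a = 1.0, 0.3
f = lambda t: sinc(c*(t - a))     # band limited: its transform vanishes for |omega| > c

def reconstruct(t, K=20000):
    # truncated sampling-theorem sum over the points x_k = k*pi/c
    return sum(f(k*math.pi/c) * sinc(c*t - k*math.pi) for k in range(-K, K + 1))

for t in (0.0, 0.7, 2.0):
    print(f(t), reconstruct(t))   # each pair agrees to several decimals
```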
R EMARK 14. In connection with the sampling theorem we should also mention two other discrete Fourier
transforms:
• The Discrete Fourier Transform (DFT).
• The Fast Fourier Transform (FFT).
These transforms are very useful in many practical applications, but we do not have the time to go into
more details concerning these in this short introduction (in short one can say that practically the entire
information society of today relies on the FFT). Some references:
• Mathematics of the DFT. A good and extensive online-book on DFT and applications,
http://ccrma-www.stanford.edu/~jos/r320/.

• Fourier Transforms, DFTs, and FFTs. Another extensive text on mainly DFT and FFT with
examples and applications,
http://www.me.psu.edu/me82/Learning/FFT/FFT.html.

6.4. The Z-transform

Consider discrete signals, {xn}_{n=0}^∞ = {x0, x1, x2, . . .}, or {xn}_{n=−∞}^∞ = {. . . , x−2, x−1, x0, x1, x2, . . .}. The notation

    {1, 2, 5̲, 6, −1, . . .}

(where the marked entry indicates position n = 0) implies that x0 = 5. The Z-transform of the sequence {xn} is defined by

    Z : {xn} → X(z) = Σ_{n=0}^∞ xn z^{−n},
    Z⁻¹ : Z⁻¹[X] = {xn}_{n=0}^∞.
The Z-transform can be considered as a discrete version of the Laplace transform and therefore it is not
surprising that similar general properties hold. For example we have:

• Linearity
Z [a{x_n} + b{y_n}] = aZ [{x_n}] + bZ [{y_n}].
• Damping
Z [{a^n x_n}] = X(z/a).
• Convolution
Z [{x_n} ⋆ {y_n}] = Z [{x_n}] · Z [{y_n}],
where (the discrete) convolution of two sequences is defined by
{x_n} ⋆ {y_n} = {z_n}, with z_n = ∑_{k=0}^{n} x_{n−k} y_k, n = 0, 1, 2, . . . .

• Differentiation
X 0 (z) = Z [{0, 0, −x1 , −2x2 , −3x3 , . . .}] .
• Forward shift
Z [{0, x0 , x1 , x2 , x3 , . . .}] = z−1 X(z),
Z [{0, 0, x0 , x1 , x2 , x3 , . . .}] = z−2 X(z), etc.
• Backward shift
Z [{x_1, x_2, x_3, x_4, . . .}] = zX(z) − x_0 z,
Z [{x_2, x_3, x_4, x_5, . . .}] = z²X(z) − x_0 z² − x_1 z, etc.

When comparing with the formulas for the Laplace transform we see that the forward shift corresponds
to time delay and the backward shift corresponds to differentiation in the continuous case. Since the shift

operations might feel a little different as compared to their continuous counterparts we prove the second
last equality:

Z [{x_1, x_2, x_3, . . .}] = x_1 + x_2 z^{−1} + x_3 z^{−2} + · · ·
= x_0 z + x_1 + x_2 z^{−1} + x_3 z^{−2} + · · · − x_0 z
= zX(z) − x_0 z.

Example 6.9. (Some examples on the Z-transform)



a) Unit step sequence. Let {σ_n} = {. . . , 0, 0, 1, 1, 1, . . .}, i.e. σ_n = 1 for n ≥ 0. Then
Z [{σ_n}] = 1 + 1/z + 1/z² + · · · = 1/(1 − 1/z) = z/(z − 1), |z| > 1.

b) Unit impulse sequence. Let {δ_n} = {. . . , 0, 0, 1, 0, 0, . . .}, i.e. δ_0 = 1 and δ_n = 0 otherwise.
Then {δ_{n−2}} = {. . . , 0, 0, 0, 0, 1, 0, 0, . . .}, and we get
Z [{δ_n}] = 1,
Z [{δ_{n−2}}] = 1/z², etc.

c) Unit ramp sequence. Let {r_n} = {. . . , 0, 0, 0, 1, 2, . . .}, i.e. r_n = n for n ≥ 0. Then
Z [{r_n}] = 0 + 1/z + 2/z² + 3/z³ + · · · .
To sum this series, note that
1/(1 − z) = 1 + z + z² + z³ + · · · , |z| < 1,
and (differentiate both sides)
1/(1 − z)² = 1 + 2z + 3z² + · · · ,
which gives
z/(1 − z)² = z + 2z² + 3z³ + · · · ,
and if we set 1/z instead of z here we see that
Z [{r_n}] = z/(z − 1)², |z| > 1.

R EMARK 15. The Z-transform is very useful for solving difference equations and for treating discrete
linear systems.
REMARK 16. The discrete Fourier transform (DFT) that was mentioned earlier is a special case of the
Z-transform, evaluated at the points z = e^{2πik/N}, k = 0, 1, . . . , N − 1, on the unit circle.
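This relationship is easy to check numerically: for a finite sequence, evaluating X(z) = ∑ x_n z^{−n} at the N-th roots of unity reproduces the DFT, here computed with NumPy's FFT (the sequence values below are arbitrary):

```python
import numpy as np

# The DFT of a finite sequence equals its Z-transform evaluated at the
# points z_k = e^{2πik/N} on the unit circle.
x = np.array([1.0, 2.0, 5.0, 6.0, -1.0])
N = len(x)
n = np.arange(N)
dft = np.fft.fft(x)               # X_k = sum_n x_n e^{-2πikn/N}

for kk in range(N):
    z = np.exp(2j * np.pi * kk / N)
    Xz = np.sum(x * z ** (-n))    # X(z) = sum_n x_n z^{-n}
    assert abs(Xz - dft[kk]) < 1e-9
```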
R EMARK 17. More examples of useful transform pairs and general properties can be found in Appendix
3.

6.5. Wavelet transforms

The idea of wavelets is relatively new, but it has already shown itself to be much more effective than many
other transforms, e.g. for applications in
• Signal processing, and
• Image processing.
In these cases the story begins with what is now called the mother wavelet, ψ. Typically the function ψ
has the following properties:
* ∫_{−∞}^∞ ψ(t)dt = 0,
** ψ is well localized in both time and frequency, and in addition satisfies some further (technical) conditions.
It can then be shown that the following system
{ψ_{j,k}(t)}_{j,k=−∞}^∞,
where
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k)
are translations, dilatations and normalizations of the original mother wavelet, is a (complete) orthogonal
basis. A signal f (t) can be reconstructed by using the usual (generalized) Fourier idea:

W^{−1} : f (t) = ∑_{j,k=−∞}^∞ ⟨ f, ψ_{j,k}⟩ ψ_{j,k}(t),

and we also have

W : f (t) → {⟨ f, ψ_{j,k}⟩}_{j,k=−∞}^∞,

where the “Fourier coefficients” are given by the scalar products ⟨ f, ψ_{j,k}⟩ = ∫_{−∞}^∞ f (t)ψ_{j,k}(t)dt.
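A minimal numerical sketch with the classical Haar wavelet as ψ (see also Appendix 4) checks that the functions 2^{j/2}ψ(2^j t − k) are orthonormal; the grid and tolerances below are arbitrary choices:

```python
import numpy as np

# Haar mother wavelet and its translated/dilated versions
# ψ_{j,k}(t) = 2^{j/2} ψ(2^j t - k); we check orthonormality numerically.
def psi(t):
    return np.where((0 <= t) & (t < 0.5), 1.0,
           np.where((0.5 <= t) & (t < 1), -1.0, 0.0))

def psi_jk(t, j, k):
    return 2.0 ** (j / 2) * psi(2.0 ** j * t - k)

t = np.linspace(-4, 4, 800001)
dt = t[1] - t[0]

def inner(f, g):                  # <f, g> = ∫ f g dt (Riemann sum)
    return np.sum(f * g) * dt

assert abs(inner(psi_jk(t, 0, 0), psi_jk(t, 0, 0)) - 1) < 1e-3  # normalized
assert abs(inner(psi_jk(t, 1, 0), psi_jk(t, 0, 0))) < 1e-3      # orthogonal
assert abs(inner(psi_jk(t, 0, 1), psi_jk(t, 0, 0))) < 1e-3      # disjoint supports
```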

R EMARK 18. A problem with the Fourier series transform is that a signal f (t) which is well localized in
time results in an outgoing signal fˆ(ω) which is dispersed in the frequency range (e.g. the Fourier series
for the delta function δ(t) contains all frequencies) and vice versa. The advantage with the wavelet trans-
form is that you can “compromise” and obtain localization in both time and frequency simultaneously (at
least in certain cases).
R EMARK 19. In Appendix 4 we have included a motivation and illustration which makes it easier to
understand the terminology and formulas above. The motivation is obtained by a natural approximation
procedure, with the classical Haar wavelet as mother wavelet.
R EMARK 20. The transform W above corresponds to the Fourier series transform, but there also exists
a similar integral transform corresponding to the Fourier transform.
REMARK 21. The wavelet transforms are not so useful if you have to do all calculations by hand, but
nowadays there are easily available computer programs which make them very powerful for certain
applications. The following web addresses provide information about a few such programs:
• http://www.wavelet.org (Wavelet Digest+search engine+links+...)
• http://www.finah.com/ (Many practical applications)
• http://www.tyche.math.univie.ac.at/Gabor/index.html (Gabor analysis)
• http://www.sm.luth.se/~grip/ (Licentiate and PhD thesis of Niklas Grip)
Some research groups in Sweden which are working with wavelets and applications (also industrially):

• KTH: Jan-Olov Strömberg (janolov@math.kth.se)


• Chalmers: Jöran Bergh (math.chalmers.se)
• LTU (and Uppsala): Lars-Erik Persson (larserik@sm.luth.se)
Some books on wavelets
• Wavelets, J. Bergh and F. Ekstedt and M. Lindberg ((1))
• A Wavelet Tour of Signal Processing, S.G. Mallat ((6))
• Introduction to Wavelets and Wavelet Transforms, A Primer, C.S. Burrus and R.A. Gopinath
and H. Guo ((2))
• Foundations of Time-Frequency Analysis, K. Gröchenig ((4))

6.6. The General Transform Idea

f (t); {a_n} −→ f̃(s); {c_n}

The general scheme is:
• The original problem (difficult or impossible to solve directly) is mapped by the L-transform (easy) to a transformed problem.
• The transformed problem is solved (easy).
• The solution of the transformed problem is mapped back by the L^{−1}-transform (easy) to the solution of the original problem.

When we want to solve a given problem the key to success is to choose a suitable transform for the problem
in question. In this chapter we have presented some useful transforms but there are other examples in the
literature. In Appendix 5 we present some further transforms (mainly taken from (3)). In most cases
we have also included a formula for the inverse transform and the corresponding useful tables are also
included.

6.7. Continuous Linear Systems

FIGURE 6.7.1. A schematic picture of a continuous linear system: the incoming signal x(t) enters the system, which produces the outgoing signal y(t).

Many linear systems, e.g. in technical applications, can be described by a linear differential equation:
(6.7.1) a_n y^{(n)}(t) + a_{n−1} y^{(n−1)}(t) + · · · + a_0 y(t) = b_k x^{(k)}(t) + b_{k−1} x^{(k−1)}(t) + · · · + b_0 x(t),
together with the initial values
y(0) = y′(0) = · · · = y^{(n−1)}(0) = 0.
Set Y (s) = L {y(t)}(s) and X(s) = L {x(t)}(s) and transform (6.7.1). Using the initial values we get:
(a_n s^n + a_{n−1} s^{n−1} + · · · + a_0)Y(s) = (b_k s^k + b_{k−1} s^{k−1} + · · · + b_0)X(s),
which gives
Y(s)/X(s) = (b_k s^k + b_{k−1} s^{k−1} + · · · + b_0)/(a_n s^n + a_{n−1} s^{n−1} + · · · + a_0).
We define the Transfer function, H(s), by Y(s) = H(s)X(s), i.e.
H(s) = (b_k s^k + b_{k−1} s^{k−1} + · · · + b_0)/(a_n s^n + a_{n−1} s^{n−1} + · · · + a_0).
For every incoming signal (with transform X(s)) we get the corresponding solution (outgoing signal)
Y(s) = H(s)X(s), and if we invert the transform we see that
y(t) = h(t) ⋆ x(t), where h(t) = L^{−1}{H(s)}(t).
How do we find H(s)?
For a unit impulse δ(t) we have
L{δ(t)} = ∫_0^∞ δ(t)e^{−st} dt = e^0 = 1.
This implies that if we send in a unit impulse the system will respond in the following way:
y(t) = h(t) ⋆ δ(t) = h(t),
Y(s) = H(s).
In technical applications h(t) is usually called the unit impulse solution.

Example 6.10. (Driven harmonic oscillator)

FIGURE 6.7.2. Hanging spring: a weight m hangs from a vertically suspended spring; x(t) is the motion of the point of attachment and y(t) is the displacement of the weight.

We consider the system illustrated in Figure 6.7.2, i.e. a weight m which is attached to the end
of vertically suspended spring. The weight has an equilibrium point relative to a moving reference
system (e.g. the point of attachment for the spring), and the distance from this equilibrium point is
denoted by y(t). The movement of the reference system (relative to some absolute reference system)
is denoted by x(t).
(A concrete example of such a system with a moving reference system is obtained by attaching the
spring to a wooden board and then moving that board up and down.)
It can be shown that the system can be described by the following linear differential equation:
mÿ(t) + cẏ(t) + ky(t) = cẋ(t) + ax(t).
If we apply the Laplace transform to both sides of this equation we get
(ms² + cs + k)Y(s) = (cs + a)X(s),
and the transfer function is
H(s) = (cs + a)/(ms² + cs + k).
Suppose, for example, that we have the incoming signal x(t) = sin ωt and that m = 1.00 kg, c = 0,
k = a = 1000 N/m and ω = 2π. Then X(s) = ω/(s² + ω²),
Y(s) = H(s)X(s) = (cs + a)/(ms² + cs + k) · ω/(s² + ω²),
and if we insert the values we get
Y(s) = 1000/(s² + 1000) · 2π/(s² + 4π²) = D/(s² + 4π²) − D/(s² + 1000),
where D = 2000π/(1000 − 4π²). Thus
y(t) = (D/2π) sin 2πt − (D/√1000) sin(√1000 t) ≈ 1.04 sin 6.28t − 0.207 sin 31.6t.
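With c = 0 and the values above, the model equation reduces to ÿ(t) + 1000 y(t) = 1000 sin 2πt, so the computed y(t) can be checked by direct substitution (a sketch using exact derivatives evaluated on a grid):

```python
import numpy as np

# Check that y(t) = (D/2π) sin 2πt - (D/√1000) sin(√1000 t) solves
# ÿ + 1000 y = 1000 sin 2πt  (m = 1, c = 0, k = a = 1000, ω = 2π).
w0 = np.sqrt(1000.0)                 # natural frequency √(k/m)
w = 2 * np.pi                        # driving frequency
D = 2000 * np.pi / (1000 - 4 * np.pi ** 2)

t = np.linspace(0, 3, 1000)
y = (D / w) * np.sin(w * t) - (D / w0) * np.sin(w0 * t)
ydd = -(D * w) * np.sin(w * t) + (D * w0) * np.sin(w0 * t)   # exact ÿ

residual = ydd + 1000 * y - 1000 * np.sin(w * t)
assert np.max(np.abs(residual)) < 1e-8
```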

It is sometimes also useful to compute the unit step solution, i.e. the reaction of the system to the
incoming signal
θ(t) = 1 for t > 0, and θ(t) = 0 for t ≤ 0.
We know that L{θ(t)} = 1/s, hence Y(s) = (1/s) · H(s).
Example 6.11. A system has the transfer function
3
H(s) = .
(s + 1)(s + 3)
Compute the unit step solution!
Solution: We know that
Y(s) = (1/s)H(s) = 3/(s(s + 1)(s + 3)) = 1/s − 3/(2(s + 1)) + 1/(2(s + 3)),
and hence y(t) = 1 − (3/2)e^{−t} + (1/2)e^{−3t} for t ≥ 0 (and y(t) = 0 for t < 0), i.e.
y(t) = (1 − (3/2)e^{−t} + (1/2)e^{−3t}) θ(t),

see Fig. 6.7.3.

FIGURE 6.7.3. The unit step solution y(t) = (1 − (3/2)e^{−t} + (1/2)e^{−3t}) θ(t), which rises from 0 towards the level 1.
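Since (s + 1)(s + 3) = s² + 4s + 3, this transfer function corresponds to the differential equation y″ + 4y′ + 3y = 3x(t), and the unit step solution can be checked by substitution:

```python
import numpy as np

# H(s) = 3/((s+1)(s+3)) = 3/(s^2+4s+3) corresponds to the ODE
# y'' + 4y' + 3y = 3x(t); check that the unit step solution
# y(t) = 1 - (3/2)e^{-t} + (1/2)e^{-3t} satisfies it for t > 0.
t = np.linspace(0.0, 5.0, 501)
y   = 1 - 1.5 * np.exp(-t) + 0.5 * np.exp(-3 * t)
yd  =     1.5 * np.exp(-t) - 1.5 * np.exp(-3 * t)   # y'
ydd =    -1.5 * np.exp(-t) + 4.5 * np.exp(-3 * t)   # y''

assert np.max(np.abs(ydd + 4 * yd + 3 * y - 3.0)) < 1e-12
assert abs(y[0]) < 1e-12 and abs(yd[0]) < 1e-12     # zero initial conditions
```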

6.8. Discrete Linear Systems

FIGURE 6.8.1. A schematic image of a discrete linear system: the incoming sequence {x_n} enters the system, which produces the outgoing sequence {y_n}.

A discrete linear system can be described by a linear difference equation:


(6.8.1) a0 yn + a1 yn−1 + · · · + am yn−m = b0 xn + b1 xn−1 + · · · + bk xn−k ,
alternatively this equation can be formulated as
{a_k} ⋆ {y_k} = {b_k} ⋆ {x_k}.
Let Y(z) = Z [{y_n}](z), and X(z) = Z [{x_n}](z). A Z-transform of (6.8.1) gives the equation
(a_0 + a_1/z + · · · + a_m/z^m)Y(z) = (b_0 + b_1/z + · · · + b_k/z^k)X(z),
i.e.
Y(z)/X(z) = (b_0 + b_1/z + · · · + b_k/z^k)/(a_0 + a_1/z + · · · + a_m/z^m),
and in the same way as before we can define a transfer function, H(z), by
H(z) = (b_0 + b_1/z + · · · + b_k/z^k)/(a_0 + a_1/z + · · · + a_m/z^m).

For every incoming signal (with Z-transform X(z)) we get the solution (outgoing signal)
Y (z) = H(z)X(z),
which gives
{y_n} = {h_n} ⋆ {x_n}.
How do we find H(z)?

For the unit impulse sequence, {δ_n}, we have Z [{δ_n}] = 1 + 0 · (1/z) + · · · = 1, which implies that the system
will respond in the following way:
{y_n} = {h_n} ⋆ {δ_n} = {h_n},
i.e.
Y(z) = H(z).
In technical applications {hn } is called the unit impulse response.
Example 6.12. A linear discrete system has the transfer function H(z) = 1/(z + 0.8). Compute the unit
step response!
Solution: The unit step sequence is {σ_n} = {1, 1, 1, . . .}, and we have
X(z) = Z [{σ_n}] = z/(z − 1),
and thus we get
Y(z) = H(z)X(z) = z/((z − 1)(z + 0.8)) = (5z/9) (1/(z − 1) − 1/(z + 0.8)).
The inverse transform gives the answer, Z^{−1}[Y(z)] = {y_n}, where
y_n = (5/9) (1 − (−0.8)^n).
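Equivalently, H(z) = 1/(z + 0.8) corresponds to the difference equation y_n + 0.8y_{n−1} = x_{n−1} (with y_{−1} = x_{−1} = 0), so the answer can be checked by simply running the recursion on the unit step sequence:

```python
# H(z) = 1/(z + 0.8) corresponds to y_n + 0.8 y_{n-1} = x_{n-1};
# run it on the unit step sequence and compare with the closed form
# y_n = (5/9)(1 - (-0.8)^n).
N = 30
x = [1.0] * N                 # unit step sequence {σ_n}
y = [0.0] * N                 # y_0 = 0: no input has reached the output yet
for n in range(1, N):
    y[n] = x[n - 1] - 0.8 * y[n - 1]

for n in range(N):
    assert abs(y[n] - (5.0 / 9.0) * (1 - (-0.8) ** n)) < 1e-12
```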

6.9. Further Examples


Example 6.13. Compute the integral ∫_0^∞ sin ax/(x(1 + x²)) dx for a > 0.
Solution: Consider
f (t) = ∫_0^∞ sin tx/(x(1 + x²)) dx, t > 0,
and its Laplace transform
L{ f (t)} = f̃(s) = ∫_0^∞ (∫_0^∞ sin tx/(x(1 + x²)) dx) e^{−st} dt
= ∫_0^∞ (∫_0^∞ sin(tx)e^{−st} dt) · 1/(x(1 + x²)) dx
= ∫_0^∞ L(sin tx)(s) · 1/(x(1 + x²)) dx
= ∫_0^∞ x/(x² + s²) · 1/(x(1 + x²)) dx
= 1/(s² − 1) ∫_0^∞ (1/(1 + x²) − 1/(x² + s²)) dx
= 1/(s² − 1) · (π/2 − π/(2s)) = (π/2) (1/s − 1/(s + 1)).
By applying the inverse transform we see that
f (t) = (π/2)(1 − e^{−t}),
i.e.
∫_0^∞ sin ax/(x(1 + x²)) dx = (π/2)(1 − e^{−a}), a > 0.
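The answer can also be checked by numerical quadrature (a sketch; the truncation point x = 200 and the grid are arbitrary choices, and the neglected tail is of order 1/200²):

```python
import numpy as np

# Numerical check of ∫_0^∞ sin(ax)/(x(1+x^2)) dx = (π/2)(1 - e^{-a})
# for a = 1; the integral is truncated at x = 200 (tiny tail).
a = 1.0
x = np.linspace(1e-8, 200.0, 2_000_001)
integrand = np.sin(a * x) / (x * (1 + x ** 2))
value = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x))  # trapezoidal rule

assert abs(value - (np.pi / 2) * (1 - np.exp(-a))) < 1e-3
```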

Example 6.14. (The Dirichlet problem (Lejeune Dirichlet) for a half-plane)
Solve
(6.9.1) u″_xx + u″_yy = 0, −∞ < x < ∞, y ≥ 0,
u(x, 0) = f (x),
u(x, y) → 0, when |x| → ∞ or y → ∞.
Solution: We start by applying the Fourier transform (with respect to x) to u. We denote this
operation with F_x{u} = F{x ↦ u(x, y)} and we get
U = U(ω, y) = F_x{u}(ω) = ∫_{−∞}^∞ u(x, y)e^{−iωx} dx,
and (6.9.1) is then transformed into
d²U/dy² − ω²U = 0,
U(ω, 0) = f̂(ω),
U(ω, y) → 0, when y → ∞.
The solution to this transformed problem is given by
U(ω, y) = f̂(ω)e^{−|ω|y}.
If we use the convolution property (F( f ⋆ g) = F( f )F(g)) we see that
u(x, y) = F^{−1}(U) = F^{−1}{ f̂(ω)e^{−|ω|y} } = F^{−1}{F( f )F(g_y)} = ∫_{−∞}^∞ f (z)g_y(x − z)dz,
where g_y(x) is the inverse Fourier transform of e^{−|ω|y}, i.e.
g_y(x) = (1/π) · y/(x² + y²).
Hence the wanted solution is
u(x, y) = (y/π) ∫_{−∞}^∞ f (z)/((x − z)² + y²) dz, y > 0.
(This is the famous Poisson integral formula.)
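The key step, that g_y(x) = (1/π) y/(x² + y²) is the inverse Fourier transform of e^{−|ω|y}, can be checked numerically (with the convention F^{−1}{G}(x) = (1/2π)∫ G(ω)e^{iωx} dω; the truncation at |ω| = 60 and the value y = 0.7 are arbitrary choices):

```python
import numpy as np

# Check numerically that the inverse Fourier transform of e^{-|ω|y}
# is g_y(x) = (1/π) y/(x^2 + y^2).
y = 0.7
w = np.linspace(-60.0, 60.0, 600001)   # e^{-|ω|y} is negligible beyond |ω| = 60
for xval in [0.0, 0.5, -1.3]:
    G = np.exp(-np.abs(w) * y) * np.exp(1j * w * xval)
    g = np.sum((G[1:] + G[:-1]) / 2 * np.diff(w)).real / (2 * np.pi)
    assert abs(g - y / (np.pi * (xval ** 2 + y ** 2))) < 1e-6
```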



6.10. Exercises
6.1. [S] a) Compute the inverse Laplace transform of
F(s) = e^{−2s} · 1/(s² + 8s + 15).
b) Find the unit step response to a system with the transfer function
H(s) = 3/((s + 1)(s + 3)).

6.2.* Use the Laplace transform to solve:
u′_t(x,t) = u″_xx(x,t), 0 ≤ x < 1, t > 0,
u(0,t) = u(1,t) = 1, t > 0,
u(x, 0) = 1 + sin πx, 0 < x < 1.

6.3. [S] Use the Laplace transform to solve:
y″ + 2y′ + 2y = u(t), y(0) = y′(0) = 0,
a) when u(t) = θ(t),
b) when u(t) = ρ(t), where ρ(t) = 0 for t ≤ 0 and ρ(t) = t for t > 0.

6.4. Use Fourier series to solve:
u″_tt(x,t) = u″_xx(x,t), 0 < x < 1, t > 0,
u(0,t) = u(1,t) = 0, t > 0,
u(x, 0) = sin πx, u′_t(x, 0) = sin 3πx, 0 < x < 1.

6.5. [S] Compute the Fourier transform of the signal


f (t) = θ(t − 3)e−(t−3) .

6.6.*
a) Prove the convolution formula, F{ f ⋆ g} = F{ f }F{g}, for the Fourier transform.
b) Define f (t) = θ(t)e^{−t}, let f_1(t) = f (t) and for n ≥ 2 let f_n(t) = ( f_{n−1} ⋆ f )(t). Compute
f_n(t).

6.7. [S] Compute the Fourier transform, F(ω), of
f (t) = sin ω_0 t for |t| ≤ a, and f (t) = 0 otherwise.

6.8. Compute the Fourier transform, F(ω), of
f (t) = cos ω_0 t for |t| ≤ a, and f (t) = 0 otherwise.

6.9. [S] Solve the following difference equation


y(n + 2) − y(n + 1) − 2y(n) = 0, y(0) = 2, y(1) = 1.

6.10. Determine the sequence y(n), n ≥ 0, which has the Z-transform Y(z) = 1/(z² + 1).

6.11. [S] Let f (x) = e^{−|x|} and compute the convolution product ( f ⋆ f )(x).

6.12. Use the function e^{−|t|} to
a) compute the Fourier transform of f (t) = 1/(1 + t²),
b) compute the Fourier transform of g(t) = α/(α² + t²), α > 0,
c) compute the Fourier transform of h(t) = (t² − α²)/((α² + t²)²), α ≠ 0.

6.13. [S] Use the Laplace transform to solve the following system of differential equations
x′ − 2x + 3y = 0, x(0) = 8,
y′ − y + 2x = 0, y(0) = 3.

6.14.
a) Define the Haar-scaling function ϕ and the Haar-wavelet function ψ.
b) Illustrate ψ(t − 2),ψ(4t),ψ(4t − 1),ψ(4t − 3) and 2ψ(4t − 2) in the ty-plane.
c) Explain how a signal f (t) can be represented by a system of basis functions constructed
by translating, dilating and normalizing the Haar wavelet.

6.15. [S] A continuous system has the transfer function
H(s) = 1/(1 + sT).
Compute the response, y(t), to the signal x(t) = sin ωt.

6.16.* A discrete linear system has the transfer function
H(z) = 1/(2z + 1).
Compute the unit impulse response.

6.17. [S] A discrete linear system has the unit impulse response {0.7^n}. Compute the system's response
to the signal {a^n}, a ≠ 0.7.


6.18. Let f : R → R be a continuous function such that ∑_{n=−∞}^∞ f̂(n) is absolutely convergent, and such that
g(x) = ∑_{n=−∞}^∞ f (2πn + x), x ∈ [−π, π], defines a continuous function.
a) Show that g(x) has the period 2π.
b) Compute the Fourier series for g(x) and use this to show the following formula (the
Poisson summation formula):
∑_{n=−∞}^∞ f̂(n) = 2π ∑_{n=−∞}^∞ f (2πn).
c) Use the Fourier series for g from b) to show that if f (x) = 0 for |x| ≥ π then we have
the following formula:
f (x) = (1/2π) ∑_{m=−∞}^∞ f̂(m) e^{imx} for |x| ≤ π, and f (x) = 0 for |x| ≥ π.

6.19. Use the previous exercise to show a version of the Sampling theorem. Suppose that f : R → C
has a Fourier transform and is band-limited, i.e. f̂(ω) = 0 for |ω| ≥ c. Show that f is uniquely
determined by its values at (for example) the points kπ/c, k ∈ Z, according to the following formula:
f (x) = ∑_{k=−∞}^∞ f (kπ/c) · sin(cx − kπ)/(cx − kπ).

6.20. [S] The dispersion of smoke from a smoke pipe with the height h, when the wind direction and
wind speed are constant, can be modelled by the following equation
v ∂c/∂x = d (∂²c/∂x² + ∂²c/∂z²),
where c(x, z) is the concentration of smoke at the height z, counted from the base of the pipe, and
at the distance x from the pipe in the direction of the wind. Here d is a diffusion coefficient and v is the wind
speed (in m/s). If we also assume that the rate of change of c in the x-direction is much smaller than
the rate of change in the z-direction, we get the simplified equation
∂c/∂x = k ∂²c/∂z².
The rate of change in the concentration at ground level and infinitely high up can be viewed as
negligible, which gives us the boundary values
∂c/∂z (x, 0) = lim_{z→∞} ∂c/∂z (x, z) = 0.
The concentration of smoke can also be neglected infinitely far away in the x-direction. At the
location of the pipe the concentration is 0, except at the height h where the smoke drifts out of it
with a flow rate q kg m⁻² s⁻¹. Thus we also get the boundary values:
c(0, z) = (q/v) δ(z − h), lim_{x→∞} c(x, z) = 0.
a) Rewrite the problem using dimensionless quantities.
b) Use the Laplace transform to find the concentration at ground level, c(x, 0). (Hint: split into
two cases, z ≷ 1, and observe that the derivative of the Laplace transform of c is not continuous
everywhere.)
c) At which range from the pipe is the concentration at ground level highest?
CHAPTER 7

Introduction to Hamiltonian Theory and Isoperimetric Problems

CHAPTER 8

Integral Equations

8.1. Introduction

Integral equations appear in most applied areas and are as important as differential equations. In fact,
as we will see, many problems can be formulated (equivalently) as either a differential or an integral
equation.
Example 8.1. Examples of integral equations are:
(a) y(x) = x − ∫_0^x (x − t)y(t)dt.
(b) y(x) = f (x) + λ ∫_0^x k(x − t)y(t)dt, where f (x) and k(x) are specified functions.
(c) y(x) = λ ∫_0^1 k(x,t)y(t)dt, where
k(x,t) = x(1 − t), 0 ≤ x ≤ t, and k(x,t) = t(1 − x), t ≤ x ≤ 1.
(d) y(x) = λ ∫_0^1 (1 − 3xt) y(t)dt.
(e) y(x) = f (x) + λ ∫_0^1 (1 − 3xt) y(t)dt.


A general integral equation for an unknown function y(x) can be written as
Z b
f (x) = a(x)y(x) + k(x,t)y(t)dt,
a
where f (x), a(x) and k(x,t) are given functions (the function f (x) corresponds to an external force). The
function k(x,t) is called the kernel. There are different types of integral equations. We can classify a
given equation in the following three ways.
• The equation is said to be of the First kind if the unknown function only appears under the
integral sign, i.e. if a(x) ≡ 0, and otherwise of the Second kind.
• The equation is said to be a Fredholm equation if the integration limits a and b are constants,
and a Volterra equation if a and b are functions of x.
• The equation is said to be homogeneous if f (x) ≡ 0, otherwise inhomogeneous.
Example 8.2. A Fredholm equation (Ivar Fredholm):
Z b
k(x,t)y(t)dt + a(x)y(x) = f (x).
a



Example 8.3. A Volterra equation (Vito Volterra):
Z x
k(x,t)y(t)dt + a(x)y(x) = f (x).
a


Example 8.4. The storekeeper’s control problem.
To use the storage space optimally a storekeeper wants to keep the store's stock of goods constant. It
can be shown that to manage this there is actually an integral equation that has to be solved. Assume
that we have the following definitions:
a = number of products in stock at time t = 0,
k(t) = remainder of products in stock (in percent) at the time t,
u(t) = the velocity (products/time unit) with which new products are purchased,
u(τ)∆τ = the amount of purchased products during the time interval ∆τ.
The total amount of products in stock at the time t is then given by:
ak(t) + ∫_0^t k(t − τ)u(τ)dτ,
and the amount of products in stock is constant if, for some constant c_0, we have
ak(t) + ∫_0^t k(t − τ)u(τ)dτ = c_0.

To find out how fast we need to purchase new products (i.e. u(t)) to keep the stock constant we thus need
to solve the above Volterra equation of the first kind.

Example 8.5. (Potential)
Let V (x, y, z) be the potential in the point (x, y, z) coming from a mass distribution ρ(ξ, η, ζ) in
Ω (see Fig. 8.1.1). Then
V(x, y, z) = −G ∫∫∫_Ω ρ(ξ, η, ζ)/r dξ dη dζ.
The inverse problem, to determine ρ from a given potential V, gives rise to an integral equation.
Furthermore ρ and V are related via Poisson's equation
∇²V = 4πGρ.

FIGURE 8.1.1. A potential from a mass distribution: the point (x, y, z) lies at the distance r from the point (ξ, η, ζ) in the body Ω.

8.2. Integral Equations of Convolution Type

We will now consider integral equations of the following type:
y(x) = f (x) + ∫_0^x k(x − t)y(t)dt = f (x) + (k ⋆ y)(x),
where (k ⋆ y)(x) is the convolution product of k and y (see p. 45). The most important technique when
working with convolutions is the Laplace transform (see Sec. 6.2).

Example 8.6. Solve the equation
y(x) = x − ∫_0^x (x − t)y(t)dt.
Solution: The equation is of convolution type with f (x) = x and k(x) = x. We observe that
L(x) = 1/s², and Laplace transforming the equation gives us
L[y] = 1/s² − L[x ⋆ y] = 1/s² − L[x] L[y] = 1/s² − (1/s²)L[y], i.e.
L[y] = 1/(1 + s²),
and thus y(x) = L^{−1}[1/(1 + s²)] = sin x.
Answer: y(x) = sin x.
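The answer is easy to verify by inserting y(x) = sin x into the equation and evaluating the integral numerically:

```python
import numpy as np

# Check that y(x) = sin x satisfies y(x) = x - ∫_0^x (x - t) y(t) dt
# (the integral is evaluated with a trapezoidal rule on [0, x]).
for xval in [0.5, 1.0, 2.0]:
    t = np.linspace(0.0, xval, 10001)
    integrand = (xval - t) * np.sin(t)
    integral = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))
    assert abs(np.sin(xval) - (xval - integral)) < 1e-6
```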

Example 8.7. Solve the equation
y(x) = f (x) + λ ∫_0^x k(x − t)y(t)dt,
where f (x) and k(x) are fixed, given functions.
Solution: The equation is of convolution type, and applying the Laplace transform yields
L[y] = L[ f ] + λL[k] L[y], i.e.
L[y] = L[ f ]/(1 − λL[k]).
Answer: y(x) = L^{−1}[L[ f ]/(1 − λL[k])].



8.3. The Connection Between Differential and Integral Equations (First-Order)


Example 8.8. Consider the differential equation (initial value problem)
(
y0 (x) = f (x, y),
(8.3.1)
y(x0 ) = y0 .
By integrating from x_0 to x we obtain
∫_{x_0}^x y′(t)dt = ∫_{x_0}^x f (t, y(t))dt,
i.e.
(8.3.2) y(x) = y_0 + ∫_{x_0}^x f (t, y(t))dt.
On the other hand, if (8.3.2) holds we see that y(x0 ) = y0 , and
y0 (x) = f (x, y(x)),
which implies that (8.3.1) holds! Thus the problems (8.3.1) and (8.3.2) are equivalent.

In fact, it is possible to formulate many initial and boundary value problems as integral equations and
vice versa. In general we have:


Initial value problem 
⇒ The Volterra equation,
Dynamical system 
Boundary value problem ⇒ The Fredholm equation.

Picard’s method (Emile Picard)


Problem: Solve the initial value problem
y′ = f (x, y), y(x_0) = A.
Or equivalently, solve the integral equation :
y(x) = A + ∫_{x_0}^x f (t, y(t))dt.
We will solve this integral equation by constructing a sequence of successive approximations to y(x).
First choose an initial approximation, y0 (x) (it is common to use y0 (x) = y(x0 )), then define the sequence:
y1 (x), y2 (x), . . . , yn (x) by
y_1(x) = A + ∫_{x_0}^x f (t, y_0(t))dt,
y_2(x) = A + ∫_{x_0}^x f (t, y_1(t))dt,
. . .
y_n(x) = A + ∫_{x_0}^x f (t, y_{n−1}(t))dt.
Our hope is now that
y(x) ≈ yn (x).

By a famous theorem (Picard’s theorem) we know that under certain conditions on f (x, y) we have
y(x) = lim yn (x).
n→∞

Example 8.9. Solve the equation
y′(x) = 2x(1 + y), y(0) = 0.
Solution: (With Picard's method) We have the integral equation
y(x) = ∫_0^x 2t(1 + y(t))dt,
and as the initial approximation we take y_0(x) ≡ 0. We then get
y_1(x) = ∫_0^x 2t(1 + y_0(t))dt = ∫_0^x 2t(1 + 0)dt = ∫_0^x 2t dt = x²,
y_2(x) = ∫_0^x 2t(1 + y_1(t))dt = ∫_0^x 2t(1 + t²)dt = ∫_0^x (2t + 2t³)dt = x² + x⁴/2,
y_3(x) = ∫_0^x 2t(1 + t² + t⁴/2)dt = x² + x⁴/2 + x⁶/6,
. . .
y_n(x) = x² + x⁴/2 + x⁶/6 + · · · + x^{2n}/n!.
We see that
lim_{n→∞} y_n(x) = e^{x²} − 1.
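Picard's method is also easy to carry out numerically: replace the integrals by cumulative quadrature on a grid and iterate. A sketch for this example (the grid size and iteration count are arbitrary choices):

```python
import numpy as np

# Picard iteration for y' = 2x(1 + y), y(0) = 0; the iterates should
# approach the exact solution y(x) = e^{x^2} - 1.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def integrate(values):
    # cumulative trapezoidal integral from 0 to x
    out = np.zeros_like(values)
    out[1:] = np.cumsum((values[1:] + values[:-1]) / 2 * dx)
    return out

y = np.zeros_like(x)                  # initial approximation y_0 ≡ 0
for _ in range(15):
    y = integrate(2 * x * (1 + y))    # y_{n+1}(x) = ∫_0^x 2t(1 + y_n(t)) dt

assert np.max(np.abs(y - (np.exp(x ** 2) - 1))) < 1e-5
```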

2
R EMARK 22. Observe that y(x) = ex − 1 is the exact solution to the equation. (Show this!)
REMARK 23. In case one can guess a general formula for y_n(x), that formula can often be verified by,
for example, induction.
LEMMA 8.1. If f (x) is continuous for x ≥ a then:
∫_a^x ∫_a^s f (y)dy ds = ∫_a^x f (y)(x − y)dy.

PROOF. Let F(s) = ∫_a^s f (y)dy. Then we see that:
∫_a^x ∫_a^s f (y)dy ds = ∫_a^x F(s)ds = ∫_a^x 1 · F(s)ds
{integration by parts} = [sF(s)]_a^x − ∫_a^x sF′(s)ds
= xF(x) − aF(a) − ∫_a^x s f (s)ds
= x ∫_a^x f (y)dy − 0 − ∫_a^x y f (y)dy
= ∫_a^x f (y)(x − y)dy. □
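The lemma can be sanity-checked numerically, e.g. with f(y) = cos y, for which the inner integral is F(s) = sin s − sin a:

```python
import numpy as np

# Numerical check of Lemma 8.1 with f(y) = cos y, a = 0.3, x = 1.2:
# ∫_a^x ∫_a^s f(y) dy ds  =  ∫_a^x f(y)(x - y) dy.
a, xv = 0.3, 1.2
s = np.linspace(a, xv, 2001)

def trap(vals, grid):
    return np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))

# left: iterated integral, with the inner integral F(s) = sin s - sin a
left = trap(np.sin(s) - np.sin(a), s)
right = trap(np.cos(s) * (xv - s), s)
assert abs(left - right) < 1e-6
```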



8.4. The Connection Between Differential and Integral Equations (Second-Order)


Example 8.10. Assume that we want to solve the initial value problem
(8.4.1) u″(x) + u(x)q(x) = f (x), x > a, u(a) = u_0, u′(a) = u_1.
We integrate the equation from a to x and get
u′(x) − u_1 = ∫_a^x [ f (y) − q(y)u(y)] dy,
and another integration yields
∫_a^x u′(s)ds = ∫_a^x u_1 ds + ∫_a^x ∫_a^s [ f (y) − q(y)u(y)] dy ds.
By Lemma 8.1 we get
u(x) − u_0 = u_1(x − a) + ∫_a^x [ f (y) − q(y)u(y)] (x − y)dy,
which we can write as
u(x) = u_0 + u_1(x − a) + ∫_a^x f (y)(x − y)dy + ∫_a^x q(y)(y − x)u(y)dy
= F(x) + ∫_a^x k(x, y)u(y)dy,
where
F(x) = u_0 + u_1(x − a) + ∫_a^x f (y)(x − y)dy, and
k(x, y) = q(y)(y − x).
This implies that (8.4.1) can be written as a Volterra equation:
u(x) = F(x) + ∫_a^x k(x, y)u(y)dy.

R EMARK 24. Example 8.10 shows how an initial value problem can be transformed to an integral equa-
tion. In example 8.12 below we will show that an integral equation can be transformed to a differential
equation, but first we need a lemma.

L EMMA 8.2. (Leibniz’s formula)


d/dt ∫_{a(t)}^{b(t)} u(x,t)dx = ∫_{a(t)}^{b(t)} u′_t(x,t)dx + u(b(t),t)b′(t) − u(a(t),t)a′(t).

P ROOF. Let
Z b
G(t, a, b) = u(x,t)dx,
a
where (
a = a(t),
b = b(t).

The chain rule now gives
d/dt G = G′_t(t, a, b) + G′_a(t, a, b)a′(t) + G′_b(t, a, b)b′(t)
= ∫_a^b u′_t(x,t)dx − u(a(t),t)a′(t) + u(b(t),t)b′(t). □

Example 8.11. Let
F(t) = ∫_{√t}^{t²} sin(xt)dx.
Then
F′(t) = ∫_{√t}^{t²} x cos(xt)dx + sin(t³) · 2t − sin(t√t) · 1/(2√t).

Example 8.12. Consider the equation
(*) y(x) = λ ∫_0^1 k(x,t)y(t)dt,
where
k(x,t) = x(1 − t), x ≤ t ≤ 1, and k(x,t) = t(1 − x), 0 ≤ t ≤ x.
I.e. we have
y(x) = λ ∫_0^x t(1 − x)y(t)dt + λ ∫_x^1 x(1 − t)y(t)dt.
If we differentiate y(x) we get (using Leibniz's formula)
y′(x) = λ ∫_0^x (−t)y(t)dt + λx(1 − x)y(x) + λ ∫_x^1 (1 − t)y(t)dt − λx(1 − x)y(x)
= λ ∫_0^x (−t)y(t)dt + λ ∫_x^1 (1 − t)y(t)dt,
and one further differentiation gives us
y″(x) = −λxy(x) − λ(1 − x)y(x) = −λy(x).
Furthermore we see that y(0) = y(1) = 0. Thus the integral equation (*) is equivalent to the boundary
value problem
y″(x) + λy(x) = 0,
y(0) = y(1) = 0.
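The boundary value problem has the eigenpairs λ = (nπ)², y(x) = sin nπx, so by the equivalence these should also solve the integral equation (*); a numerical check for n = 1:

```python
import numpy as np

# Check that y(x) = sin(πx), λ = π^2 satisfies the integral equation
# y(x) = λ ∫_0^1 k(x,t) y(t) dt with the kernel of Example 8.12.
def k(x, t):
    return np.where(t >= x, x * (1 - t), t * (1 - x))

t = np.linspace(0.0, 1.0, 20001)
lam = np.pi ** 2
for xv in [0.25, 0.5, 0.8]:
    vals = k(xv, t) * np.sin(np.pi * t)
    integral = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(t))
    assert abs(lam * integral - np.sin(np.pi * xv)) < 1e-6
```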

8.5. A General Technique to Solve a Fredholm Integral Equation of the Second Kind

We consider the equation:
(8.5.1) y(x) = f (x) + λ ∫_a^b k(x, ξ)y(ξ)dξ.

Assume that the kernel k(x, ξ) is separable, which means that it can be written as
k(x, ξ) = ∑_{j=1}^n α_j(x)β_j(ξ).
If we insert this into (8.5.1) we get
y(x) = f (x) + λ ∑_{j=1}^n α_j(x) ∫_a^b β_j(ξ)y(ξ)dξ
(8.5.2) = f (x) + λ ∑_{j=1}^n c_j α_j(x).

Observe that y(x) as in (8.5.2) gives us a solution to (8.5.1) as soon as we know the coefficients c j . How
can we find c j ?
Multiplying (8.5.2) with β_i(x) and integrating gives us
∫_a^b y(x)β_i(x)dx = ∫_a^b f (x)β_i(x)dx + λ ∑_{j=1}^n c_j ∫_a^b α_j(x)β_i(x)dx,
or equivalently (with f_i = ∫_a^b f (x)β_i(x)dx and a_{ij} = ∫_a^b α_j(x)β_i(x)dx)
c_i = f_i + λ ∑_{j=1}^n c_j a_{ij}.
Thus we have a linear system with n unknown variables c_1, . . . , c_n and n equations
c_i = f_i + λ ∑_{j=1}^n c_j a_{ij}, 1 ≤ i ≤ n. In matrix form we can write this as
(I − λA)~c = ~f,
where A = (a_{ij}) is the n × n matrix with the entries a_{ij}, ~f = (f_1, . . . , f_n)^T and ~c = (c_1, . . . , c_n)^T.
Some well-known facts from linear algebra
Suppose that we have a linear system of equations
(*) B~x = ~b.
Depending on whether the right hand side ~b is the zero vector or not we get the following alternatives.
1. If ~b = ~0 then:
a) det B 6= 0 ⇒ ~x = ~0,
b) det B = 0 ⇒ (*) has an infinite number of solutions ~x.
2. If ~b 6= 0 then:
c) det B 6= 0 ⇒ (*) has a unique solution ~x,
d) det B = 0 ⇒ (*) has no solution or an infinite number of solutions.

The famous Fredholm Alternative Theorem is simply a reformulation of the fact stated above to the setting
of a Fredholm equation.

Example 8.13. Consider the equation


Z 1
(*) y(x) = λ (1 − 3xξ)y(ξ)dξ.
0
Here we have
k(x, ξ) = 1 − 3xξ = α1 (x)β1 (ξ) + α2 (x)β2 (ξ),
i.e. (
α1 (x) = 1, α2 (x) = −3x,
β1 (ξ) = 1, β2 (ξ) = ξ.
We thus get
A = ( ∫_0^1 β_1(x)α_1(x)dx  ∫_0^1 β_1(x)α_2(x)dx ; ∫_0^1 β_2(x)α_1(x)dx  ∫_0^1 β_2(x)α_2(x)dx ) = ( 1  −3/2 ; 1/2  −1 ),
and
det(I − λA) = det( 1 − λ  (3/2)λ ; −(1/2)λ  1 + λ ) = 1 − λ²/4 = 0
⇔ λ = ±2.
The Fredholm Alternative Theorem tells us that we have the following alternatives:

λ ≠ ±2: then (*) has only the trivial solution y(x) = 0, and
λ = 2: then the system (I − λA)~c = ~0 looks like
−c_1 + 3c_2 = 0,
−c_1 + 3c_2 = 0,
which has an infinite number of solutions: c_2 = a and c_1 = 3a, for any constant a. From
(8.5.2) we see that the solutions y(x) are
y(x) = 0 + 2(3a · 1 + a(−3x)) = 6a(1 − x) = b(1 − x).
We conclude that every function y(x) = b(1 − x) is a solution of (*).
λ = −2: then the system (I − λA)~c = ~0 looks like
3c_1 − 3c_2 = 0,
c_1 − c_2 = 0,
which has an infinite number of solutions c_1 = c_2 = a for any constant a. From (8.5.2) we
once again see that the solutions y(x) are
y(x) = 0 − 2(a · 1 + a(−3x)) = −2a(1 − 3x) = b(1 − 3x),
and we see that every function y(x) of the form y(x) = b(1 − 3x) is a solution of (*).

As always when solving a differential or integral equation one should test the solutions by inserting them
into the equation in question. If we insert y(x) = 1 − x and y(x) = 1 − 3x in (*) we can confirm that they
are indeed solutions corresponding to λ = 2 and −2 respectively.
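Such a check is also easy to do numerically; the sketch below inserts y(x) = 1 − x (with λ = 2) and y(x) = 1 − 3x (with λ = −2) into the equation:

```python
import numpy as np

# Check that y(x) = 1 - x solves y(x) = 2 ∫_0^1 (1 - 3xξ) y(ξ) dξ and
# that y(x) = 1 - 3x solves the same equation with λ = -2.
xi = np.linspace(0.0, 1.0, 10001)

def trap(vals):
    return np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(xi))

for lam, y in [(2.0, lambda s: 1 - s), (-2.0, lambda s: 1 - 3 * s)]:
    for xv in [0.0, 0.3, 0.9]:
        rhs = lam * trap((1 - 3 * xv * xi) * y(xi))
        assert abs(rhs - y(xv)) < 1e-6
```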

Example 8.14. Consider the equation
(*) y(x) = f (x) + λ ∫_0^1 (1 − 3xξ)y(ξ)dξ.
Note that the basis functions α_j and β_j, and hence the matrix A, are the same as in the previous example,
and hence det(I − λA) = 0 ⇔ λ = ±2. The Fredholm Alternative Theorem gives us the following
possibilities:
1° ∫_0^1 f (x) · 1 dx ≠ 0 or ∫_0^1 f (x) · x dx ≠ 0, and λ ≠ ±2. Then (*) has a unique solution
y(x) = f (x) + λ ∑_{i=1}^2 c_i α_i(x) = f (x) + λc_1 − 3λc_2 x,
where c_1 and c_2 are (the unique) solution of the system
(1 − λ)c_1 + (3/2)λc_2 = ∫_0^1 f (x)dx,
−(1/2)λc_1 + (1 + λ)c_2 = ∫_0^1 x f (x)dx.
2° ∫_0^1 f (x) · 1 dx ≠ 0 or ∫_0^1 f (x) · x dx ≠ 0, and λ = −2. We get the system
3c_1 − 3c_2 = ∫_0^1 f (x)dx,
c_1 − c_2 = ∫_0^1 x f (x)dx.
Since the left hand side of the topmost equation is a multiple of the left hand side of the
bottom equation there are no solutions if ∫_0^1 f (x)dx ≠ 3 ∫_0^1 x f (x)dx, and there are an infinite
number of solutions if ∫_0^1 f (x)dx = 3 ∫_0^1 x f (x)dx. Let 3c_2 = a, then 3c_1 = a + ∫_0^1 f (x)dx,
which gives the solutions
y(x) = f (x) − 2 [c_1 α_1(x) + c_2 α_2(x)]
= f (x) − 2 [(a/3 + (1/3) ∫_0^1 f (x)dx) · 1 + (a/3)(−3x)]
= f (x) − (2/3) ∫_0^1 f (x)dx − a (2/3 − 2x).
3° ∫_0^1 f (x) · 1 dx ≠ 0 or ∫_0^1 f (x) · x dx ≠ 0, and λ = 2. We get the system
−c_1 + 3c_2 = ∫_0^1 f (x)dx,
−c_1 + 3c_2 = ∫_0^1 x f (x)dx.
The left hand sides are identical so there are no solutions if ∫_0^1 x f (x)dx ≠ ∫_0^1 f (x)dx, other-
wise we have an infinite number of solutions. Let c_2 = a, c_1 = 3a − ∫_0^1 f (x)dx, then we get
the solution
y(x) = f (x) + 2 [(3a − ∫_0^1 f (x)dx) · 1 + a(−3x)]
= f (x) − 2 ∫_0^1 f (x)dx + 6a(1 − x).
Z 1 Z 1
4◦ x f (x)dx = f (x)dx = 0, λ 6= ±2. Then y(x) = f (x) is the unique solution.
5°  ∫₀¹ x f(x) dx = ∫₀¹ f(x) dx = 0 and λ = −2. We get the system

3c₁ − 3c₂ = 0,
c₁ − c₂ = 0,

i.e. c₁ = c₂ = a for an arbitrary constant a. We thus get an infinite number of solutions of the form

y(x) = f(x) − 2[a·1 + a(−3x)] = f(x) − 2a(1 − 3x).
6°  ∫₀¹ x f(x) dx = ∫₀¹ f(x) dx = 0 and λ = 2. We get the system

−c₁ + 3c₂ = 0,
−c₁ + 3c₂ = 0,

so c₂ = a and c₁ = 3a for an arbitrary constant a. We thus get an infinite number of solutions of the form

y(x) = f(x) + 2[3a·1 + a(−3x)] = f(x) + 6a(1 − x).
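The case analysis above is easy to check numerically. The following sketch (Python with NumPy — our addition, not part of the notes) solves the separable equation of case 1° by reducing it to the 2×2 linear system for c₁ = ∫₀¹ y dξ and c₂ = ∫₀¹ ξy dξ:

```python
import numpy as np

# Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
t, w = np.polynomial.legendre.leggauss(20)
xi, wi = 0.5 * (t + 1), 0.5 * w

def solve_separable(f, lam):
    # Reduce y = f + lam*Int (1 - 3*x*xi) y(xi) dxi to the system of case 1:
    #   (1 - lam)c1 + (3/2)lam c2 = Int f,   -(lam/2)c1 + (1 + lam)c2 = Int xi*f
    A = np.array([[1 - lam, 1.5 * lam], [-0.5 * lam, 1 + lam]])
    b = np.array([np.sum(wi * f(xi)), np.sum(wi * xi * f(xi))])
    c1, c2 = np.linalg.solve(A, b)
    return lambda x: f(x) + lam * (c1 - 3 * c2 * x)

f = lambda x: x**2
y = solve_separable(f, 1.0)   # lam = 1 is not an exceptional value (+-2)
# residual of the integral equation at x = 0.3 should vanish
x0 = 0.3
res = y(x0) - (f(x0) + np.sum(wi * (1 - 3 * x0 * xi) * y(xi)))
assert abs(res) < 1e-12
```

For λ = ±2 the matrix A − hypothetically I − λA in the notes' notation − becomes singular and `np.linalg.solve` fails, which is exactly the Fredholm alternative.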

8.6. Integral Equations with Symmetrical Kernels

Consider the equation

(*)  y(x) = λ ∫_a^b k(x,ξ) y(ξ) dξ,

where

k(x,ξ) = k(ξ,x)

is real and continuous. We will now see how we can adapt the theory from the previous sections to the case when k(x,ξ) is not separable but instead is symmetric, i.e. k(x,ξ) = k(ξ,x). If λ and y(x) satisfy (*) we say that λ is an eigenvalue and y(x) is the corresponding eigenfunction. We have the following theorem.
THEOREM 8.3. The following holds for eigenvalues and eigenfunctions of (*):
(i) If λ_m and λ_n are eigenvalues with corresponding eigenfunctions y_m(x) and y_n(x), then

λ_n ≠ λ_m ⇒ ∫_a^b y_m(x) y_n(x) dx = 0,

i.e. eigenfunctions corresponding to different eigenvalues are orthogonal (y_m(x) ⊥ y_n(x)).
(ii) The eigenvalues λ are real.
(iii) If the kernel k is not separable then there are infinitely many eigenvalues

λ₁, λ₂, …, λ_n, …,

with 0 < |λ₁| ≤ |λ₂| ≤ ⋯ and lim_{n→∞} |λ_n| = ∞.
(iv) To every eigenvalue corresponds at most a finite number of linearly independent eigenfunctions.

PROOF. (i) We have

y_m(x) = λ_m ∫_a^b k(x,ξ) y_m(ξ) dξ  and  y_n(x) = λ_n ∫_a^b k(x,ξ) y_n(ξ) dξ,

which gives

∫_a^b y_m(x)y_n(x) dx = λ_m ∫_a^b y_n(x) (∫_a^b k(x,ξ) y_m(ξ) dξ) dx
  = λ_m ∫_a^b (∫_a^b y_n(x) k(x,ξ) dx) y_m(ξ) dξ
  [k(x,ξ) = k(ξ,x)]
  = λ_m ∫_a^b (∫_a^b k(ξ,x) y_n(x) dx) y_m(ξ) dξ
  = λ_m ∫_a^b y_n(ξ) (1/λ_n) y_m(ξ) dξ
  = (λ_m/λ_n) ∫_a^b y_m(ξ) y_n(ξ) dξ.

We conclude that

(1 − λ_m/λ_n) ∫_a^b y_m(x) y_n(x) dx = 0,

and if λ_m ≠ λ_n then we must have ∫_a^b y_m(x) y_n(x) dx = 0. □
Example 8.15. Solve the equation

y(x) = λ ∫₀¹ k(x,ξ) y(ξ) dξ,

where

k(x,ξ) = x(1 − ξ) for x ≤ ξ ≤ 1,  and  k(x,ξ) = ξ(1 − x) for 0 ≤ ξ ≤ x.

From Example 8.12 we know that the integral equation is equivalent to

y″(x) + λy(x) = 0,  y(0) = y(1) = 0.

If λ > 0 we have the solutions y(x) = c₁ cos(√λ x) + c₂ sin(√λ x); y(0) = 0 ⇒ c₁ = 0, and y(1) = 0 ⇒ c₂ sin √λ = 0, hence either c₂ = 0 (which only gives the trivial solution y ≡ 0) or √λ = nπ for some integer n, i.e. λ = n²π². Thus, the eigenvalues are

λ_n = n²π²,

and the corresponding eigenfunctions are

y_n(x) = sin(nπx).
Observe that if m ≠ n it is well known that

∫₀¹ sin(nπx) sin(mπx) dx = 0.

8.7. Hilbert-Schmidt Theory to Solve a Fredholm Equation

We will now describe a method for solving a Fredholm equation of the type

(*)  y(x) = f(x) + λ ∫_a^b k(x,t) y(t) dt.

LEMMA 8.4. (Hilbert–Schmidt's Lemma) Assume that there is a continuous function g(x) such that

F(x) = ∫_a^b k(x,t) g(t) dt,

where k is symmetric (i.e. k(x,t) = k(t,x)). Then F(x) can be expanded in a Fourier series

F(x) = Σ_{n=1}^∞ c_n y_n(x),

where the y_n(x) are the normalized eigenfunctions of the equation

y(x) = λ ∫_a^b k(x,t) y(t) dt.

(Cf. Thm. 8.3.)

THEOREM 8.5. (The Hilbert–Schmidt Theorem) Assume that λ is not an eigenvalue of (*) and that y(x) is a solution of (*). Then

y(x) = f(x) + λ Σ_{n=1}^∞ (f_n/(λ_n − λ)) y_n(x),

where λ_n and y_n(x) are the eigenvalues and eigenfunctions of the corresponding homogeneous equation (i.e. (*) with f ≡ 0) and f_n = ∫_a^b f(x) y_n(x) dx.

PROOF. From (*) we see immediately that

y(x) − f(x) = λ ∫_a^b k(x,ξ) y(ξ) dξ,

and according to the Hilbert–Schmidt Lemma (8.4) we can expand y(x) − f(x) in a Fourier series:

y(x) − f(x) = Σ_{n=1}^∞ c_n y_n(x),

where

c_n = ∫_a^b (y(x) − f(x)) y_n(x) dx = ∫_a^b y(x) y_n(x) dx − f_n.
Hence

∫_a^b y(x) y_n(x) dx = f_n + ∫_a^b (y(x) − f(x)) y_n(x) dx
  = f_n + λ ∫_a^b (∫_a^b k(x,ξ) y(ξ) dξ) y_n(x) dx
  {k(x,ξ) = k(ξ,x)}
  = f_n + λ ∫_a^b (∫_a^b k(ξ,x) y_n(x) dx) y(ξ) dξ
  = f_n + (λ/λ_n) ∫_a^b y_n(ξ) y(ξ) dξ.

Thus

∫_a^b y(x) y_n(x) dx = f_n/(1 − λ/λ_n) = λ_n f_n/(λ_n − λ),

and we conclude that

c_n = λ_n f_n/(λ_n − λ) − f_n = λ f_n/(λ_n − λ),

i.e. we can write y(x) as

y(x) = f(x) + λ Σ_{n=1}^∞ (f_n/(λ_n − λ)) y_n(x). □
Example 8.16. Solve the equation

y(x) = x + λ ∫₀¹ k(x,ξ) y(ξ) dξ,

where λ ≠ n²π², n = 1, 2, …, and

k(x,ξ) = x(1 − ξ) for x ≤ ξ ≤ 1,  and  k(x,ξ) = ξ(1 − x) for 0 ≤ ξ ≤ x.

Solution: From Example 8.15 we know that the normalized eigenfunctions of the homogeneous equation

y(x) = λ ∫₀¹ k(x,ξ) y(ξ) dξ

are

y_n(x) = √2 sin(nπx),

corresponding to the eigenvalues λ_n = n²π², n = 1, 2, … . In addition we see that

f_n = ∫₀¹ f(x) y_n(x) dx = ∫₀¹ x √2 sin(nπx) dx = (−1)^{n+1} √2/(nπ),

hence

y(x) = x + (2λ/π) Σ_{n=1}^∞ ((−1)^{n+1}/(n(n²π² − λ))) sin(nπx),  λ ≠ n²π².
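Differentiating twice (as in Example 8.12) shows that the solution also has the closed form y(x) = sin(√λ x)/sin √λ for 0 < λ ≠ n²π², since y″ = −λy with y(0) = 0, y(1) = 1. The small sketch below (Python/NumPy, our addition) checks that the truncated Hilbert–Schmidt series agrees with this closed form:

```python
import numpy as np

def y_series(x, lam, N=4000):
    """Truncated Hilbert-Schmidt series for y(x) = x + lam*Int k y (Example 8.16)."""
    n = np.arange(1, N + 1)
    c = (-1.0) ** (n + 1) / (n * (n**2 * np.pi**2 - lam))
    return x + (2 * lam / np.pi) * np.sin(np.pi * np.outer(x, n)) @ c

lam = 5.0  # not an eigenvalue n^2 pi^2
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
closed = np.sin(np.sqrt(lam) * x) / np.sin(np.sqrt(lam))
assert np.allclose(y_series(x, lam), closed, atol=1e-6)
```

The coefficients decay like 1/n³, so a few thousand terms already give six-digit agreement at interior points.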


Finally we observe that by using practically the same ideas as before we can also prove the following theorem (cf. [5, pp. 246–247]).
THEOREM 8.6. Let f and k be continuous functions and define the operator K acting on the function y(x) by

Ky(x) = ∫_a^x k(x,ξ) y(ξ) dξ,

and then define positive powers of K by

K^m y(x) = K(K^{m−1} y)(x),  m = 2, 3, … .

Then the equation

y(x) = f(x) + λ ∫_a^x k(x,ξ) y(ξ) dξ

has the solution

y(x) = f(x) + Σ_{n=1}^∞ λⁿ Kⁿ(f)(x).

This type of series expansion is called a Neumann series.
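The Neumann series is the limit of the successive approximations y_{m+1} = f + λK y_m starting from y₀ = f. A small sketch for a general Volterra kernel (Python/NumPy with trapezoidal quadrature — our addition, not part of the notes):

```python
import numpy as np

def neumann_solve(f, k, x, lam, iters=30):
    """Successive approximations y_{m+1} = f + lam*K y_m for the Volterra
    equation y(x) = f(x) + lam*Int_a^x k(x,xi) y(xi) dxi on a uniform grid x;
    the m-th iterate equals the Neumann series truncated after lam^m K^m(f)."""
    h = x[1] - x[0]
    y = f(x).astype(float)
    for _ in range(iters):
        Ky = np.zeros_like(y)
        for i in range(1, len(x)):
            g = k(x[i], x[:i + 1]) * y[:i + 1]
            Ky[i] = h * (g.sum() - 0.5 * (g[0] + g[-1]))  # trapezoidal rule
        y = f(x) + lam * Ky
    return y

x = np.linspace(0.0, 1.0, 401)
y = neumann_solve(lambda t: t, lambda s, xi: s - xi, x, 1.0)
# for k(x,xi) = x - xi, f(x) = x, lam = 1 the exact solution is sinh x
# (cf. Example 8.17 below)
assert np.max(np.abs(y - np.sinh(x))) < 1e-5
```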

Example 8.17. Solve the equation

y(x) = x + λ ∫₀ˣ (x − ξ) y(ξ) dξ.

Solution (by Neumann series):

K(x) = ∫₀ˣ (x − ξ) ξ dξ = x³/3!,
K²(x) = ∫₀ˣ (x − ξ) (ξ³/3!) dξ = x⁵/5!,
⋮
Kⁿ(x) = ∫₀ˣ (x − ξ) (ξ^{2n−1}/(2n − 1)!) dξ = x^{2n+1}/(2n + 1)!,

hence

y(x) = x + Σ_{n=1}^∞ λⁿ Kⁿ(x) = x + λ x³/3! + λ² x⁵/5! + ⋯ + λⁿ x^{2n+1}/(2n + 1)! + ⋯ .
Solution (by the Laplace transform): We observe that the operator

K(x) = ∫₀ˣ (x − ξ) y(ξ) dξ

is a convolution of the function y(x) with the identity function t ↦ t, i.e. K(x) = ((t ↦ t) ⋆ y)(x), which implies that L[K(x)] = L[x] L[y], and since y(x) = x + λK(x) we get

L(y) = L(x) + λ L(x) L(y) = 1/s² + λ (1/s²) L(y),
L(y) = 1/(s² − λ) = (1/(2√λ)) (1/(s − √λ) − 1/(s + √λ)),

and by inverting the transform we get

y(x) = (1/(2√λ)) (e^{√λ x} − e^{−√λ x}).
Observe that we obtain the same solution independent of method. This is easiest seen by looking at the Taylor expansion of the second solution. More precisely we have

e^{−√λ x} = 1 − √λ x + (1/2)(√λ x)² − (1/3!)(√λ x)³ + ⋯,
e^{√λ x} = 1 + √λ x + (1/2)(√λ x)² + (1/3!)(√λ x)³ + ⋯,

i.e.

y(x) = (1/(2√λ)) (e^{√λ x} − e^{−√λ x})
     = (1/(2√λ)) (2√λ x + (2/3!)(√λ x)³ + (2/5!)(√λ x)⁵ + ⋯)
     = x + λ x³/3! + λ² x⁵/5! + ⋯ .
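The agreement of the two forms is also easy to confirm numerically; the following small check (Python, our addition) compares a partial sum of the Neumann series with the closed-form expression:

```python
import math

def neumann_partial(x, lam, N):
    # partial sum x + lam*x^3/3! + ... + lam^N * x^(2N+1)/(2N+1)!
    return sum(lam**n * x**(2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(N + 1))

lam, x = 2.0, 0.8
closed = (math.exp(math.sqrt(lam) * x) - math.exp(-math.sqrt(lam) * x)) / (2 * math.sqrt(lam))
assert abs(neumann_partial(x, lam, 15) - closed) < 1e-12
```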

8.8. Exercises

8.1. [S] Rewrite the following second order initial value problem as an integral equation:

u″(x) + p(x)u′(x) + q(x)u(x) = f(x),  x > a,
u(a) = u₀, u′(a) = u₁.

8.2. Consider the initial value problem

u″(x) + ω²u(x) = f(x),  x > 0,
u(0) = 0, u′(0) = 1.

a) Rewrite this equation as an integral equation.
b) Use the Laplace transform to give the solution for a general f(x) with Laplace transform F(s).
c) Give the solution u(x) for f(x) = sin(ax) with a ∈ R, a ≠ ω.

8.3. [S] Rewrite the initial value problem

y″(x) + ω²y = 0,  0 ≤ x ≤ 1,
y(0) = 1, y′(0) = 0

as an integral equation of Volterra type and give those solutions which also satisfy y(1) = 0.

8.4. Rewrite the boundary value problem

y″(x) + λp(x)y = q(x),  a ≤ x ≤ b,
y(a) = y(b) = 0

as an integral equation of Fredholm type. (Hint: Use y(b) = 0 to determine y′(a).)
8.5. [S] Let α ≥ 0 and consider the probability that a randomly chosen integer between 1 and x has its largest prime factor ≤ x^α. As x → ∞ this probability distribution tends to a limit distribution with distribution function F(α), the so-called Dickman function (note that F(α) = 1 for α ≥ 1). The function F(α) is a solution of the following integral equation:

F(α) = ∫₀^α F(t/(1 − t)) (1/t) dt,  0 ≤ α ≤ 1.

Compute F(α) for 1/2 ≤ α ≤ 1.

8.6.* Consider the Volterra equation

u(x) = x + µ ∫₀ˣ (x − y) u(y) dy.

a) Compute the first three non-zero terms in the Neumann series of the solution.
b) Give the solution of the equation (for example by using a) to guess a solution and then verify it).

8.7. [S] Solve the following integral equation:

x = ∫₀ˣ e^{x−ξ} y(ξ) dξ.

8.8. Use the Laplace transform to solve:

a) y(x) = f(x) + λ ∫₀ˣ e^{x−ξ} y(ξ) dξ,
b) y(x) = 1 + ∫₀ˣ e^{x−ξ} y(ξ) dξ.

8.9. [S] Write a Neumann series for the solution of the integral equation

u(x) = f(x) + λ ∫₀¹ u(t) dt,

and give the solution of the equation for f(x) = eˣ − e/2 + 1/2 and λ = 1/2.
8.10. Solve the following integral equations:

a) y(x) = x² + ∫₀¹ (1 − 3xξ) y(ξ) dξ,
b) y(x) = x² + λ ∫₀¹ (1 − 3xξ) y(ξ) dξ for all values of λ.

8.11. [S] Solve the following integral equation

u(x) = f(x) + λ ∫₀^π sin(x) sin(2y) u(y) dy

when
a) f(x) = 0,
b) f(x) = sin x,
c) f(x) = sin 2x.
8.12. Consider the equation

u(x) = f(x) + λ ∫₀ˣ u(t) dt,  0 ≤ x ≤ 1.

a) Show that for f(x) ≡ 0 the equation has only the trivial solution in C²[0,1].
b) Give a function f(x) such that the equation has a non-trivial solution for all values of λ, and compute this solution.

8.13. [S] Let a > 0 and consider the integral equation

u(x) = 1 + λ ∫₀^{x−a} θ(x − y + a)(x − y) u(y) dy,  x ≥ a.

Use the Laplace transform to determine the eigenvalues and the corresponding eigenfunctions of this equation.

8.14.* The current in an LRC circuit with L = 3, R = 2, C = 0.2 (SI units), where a voltage is applied at the time t = 1, satisfies the following integral equation:

I(t) = 6θ(t − 1)(t − 1) + 2t + 3 − ∫₀ᵗ (2 + 5(t − y)) I(y) dy.

Determine I(t) using the Laplace transform.

8.15. [S] Consider (again) the salesman's control problem (Example 8.4). Assume that the number of products in stock at the time t = 0 is a and that the products are sold at a constant rate such that all products are sold out in T (time units). Now let u(t) be the rate (products/time unit) with which we have to purchase new products in order to keep a constant number of a products in stock.

a) Write the integral equation which is satisfied by u(t).
b) Solve the equation from a) and find u(t). (Answer: u(t) = (a/T) e^{t/T}.)

8.16.*
a) Write the integral equation

(*)  y(x) = λ ∫₀¹ k(x,ξ) y(ξ) dξ,

where

k(x,ξ) = x(1 − ξ) for x ≤ ξ ≤ 1,  and  k(x,ξ) = ξ(1 − x) for 0 ≤ ξ ≤ x,

as a boundary value problem.
b) Find the eigenvalues and the normalized eigenfunctions of the problem in a).
Solve the equation

y(x) = f(x) + λ ∫₀¹ k(x,ξ) y(ξ) dξ,

where k(x,ξ) is as in a) and λ ≠ n²π², for
c) f(x) = sin(πkx), k ∈ Z, and
d) f(x) = x².
8.17. [S] Consider the Fredholm equation

u(x) = f(x) + λ ∫₀^{2π} cos(x + t) u(t) dt.

Determine solutions for all values of λ and give sufficient conditions (if there are any) that f(x) has to satisfy in order for solutions to exist.

8.18. Show that the equation

g(s) = λ ∫₀^π (sin s sin 2t) g(t) dt

only has the trivial solution.

8.19. [S] Solve the integral equation

sin s = (1/π) ∫*_{−∞}^{∞} u(t)/(t − s) dt,

where ∫* means that we consider the principal value of the integral (since the integrand has a singularity at t = s). (Hint: Use the residue theorem on the integral ∫_{−∞}^{∞} e^{it}/(s − t) dt.)
8.20. Give the Laplace transform of the non-trivial solution of the following integral equation:

g(s) = ∫₀ˢ (s² − t²) g(t) dt.

(Hint: Rewrite the kernel in convolution form and use the differentiation rule.)
CHAPTER 9

Introduction to the theory of Dynamical Systems, Chaos, Stability and Bifurcations
APPENDIX A

Appendices


A-1. General Properties of the Laplace Transform:

TABLE 1. General Properties of the Laplace Transform


f(t) ↦ F(s) = L[f(t)](s)

Definition: f(t) ↦ ∫₀^∞ f(t) e^{−st} dt
Inverse: f(t) = (1/(2πi)) ∫_{a−i∞}^{a+i∞} F(s) e^{st} ds
Linearity: a f(t) + b g(t) ↦ a L[f(t)](s) + b L[g(t)](s)
Scaling: f(at) ↦ (1/|a|) F(s/a)
Sign change: f(−t) ↦ F(−s)
Time delay: f(t − a)θ(t − a) ↦ e^{−as} F(s)
Ampl. modulation: f(t) cos Ωt ↦ (1/2)(F(s − iΩ) + F(s + iΩ))
Damping: e^{−at} f(t) ↦ F(s + a)
Convolution: f ⋆ g(t) = ∫₀ᵗ f(τ)g(t − τ) dτ ↦ L[f(t)] L[g(t)]
Differentiation: f^{(n)}(t) ↦ sⁿF(s) − s^{n−1}f(0) − ⋯ − f^{(n−1)}(0)
Differentiation: tⁿ f(t) ↦ (−1)ⁿ F^{(n)}(s)

Transform pairs

Constant: 1 ↦ s^{−1}, s > 0
Exponential: e^{at} ↦ 1/(s − a), s > a
Power: tⁿ, n ∈ Z₊ ↦ n!/s^{n+1}, s > 0
Trig.: sin at and cos at ↦ a/(s² + a²) and s/(s² + a²), s > 0
Hyp. trig.: sinh at and cosh at ↦ a/(s² − a²) and s/(s² − a²), s > |a|
Exp.×trig.: e^{at} sin bt ↦ b/((s − a)² + b²), s > a
Exp.×trig.: e^{at} cos bt ↦ (s − a)/((s − a)² + b²), s > a
Exp.×power: tⁿ e^{at} ↦ n!/(s − a)^{n+1}, s > a
Heaviside's function: θ(t − a) ↦ s^{−1} e^{−as}, s > 0
Delta function: δ(t − a) ↦ e^{−as}
Error function: erf √t = (2/√π) ∫₀^{√t} e^{−z²} dz ↦ 1/(s√(1 + s)), s > 0
Normal dist./Gaussian: (1/√t) e^{−a²/(4t)} ↦ √(π/s) e^{−a√s}, s > 0
Compl. erf.: erfc(a/(2√t)) = 1 − erf(a/(2√t)) ↦ (1/s) e^{−a√s}, s > 0
×Normal dist.: (a/(2t^{3/2})) e^{−a²/(4t)} ↦ √π e^{−a√s}, s > 0
A-2. General Properties of the Fourier Transform

TABLE 2. General Properties of the Fourier Transform


f(t) ↦ f̂(ω)

Definition: f(t) ↦ f̂(ω) = ∫_{−∞}^{∞} f(t) e^{−iωt} dt
Inverse: f(t) = (1/(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω
Linearity: a f(t) + b g(t) ↦ a f̂(ω) + b ĝ(ω)
Scaling: f(at), a ≠ 0 ↦ (1/|a|) f̂(ω/a)
Sign change: f(−t) ↦ f̂(−ω)
Complex conjugation: conj f(t) ↦ conj f̂(−ω)
Time delay: f(t − T) ↦ e^{−iωT} f̂(ω)
Freq. translation: e^{iΩt} f(t) ↦ f̂(ω − Ω)
Ampl. modulation: f(t) cos Ωt ↦ (1/2)(f̂(ω − Ω) + f̂(ω + Ω))
Ampl. modulation: f(t) sin Ωt ↦ (1/(2i))(f̂(ω − Ω) − f̂(ω + Ω))
Symmetry: f̂(t) ↦ 2π f(−ω)
Time differentiation: f^{(n)}(t) ↦ (iω)ⁿ f̂(ω)
Freq. differentiation: (−it)ⁿ f(t) ↦ f̂^{(n)}(ω)
Time convolution: f(t) ⋆ g(t) ↦ f̂(ω) ĝ(ω)
Freq. convolution: f(t) g(t) ↦ (1/(2π)) f̂(ω) ⋆ ĝ(ω)

Transform pairs

Delta function: δ(t) ↦ 1
Derivative of delta fn.: δ^{(n)}(t) ↦ (iω)ⁿ
Exponential: θ(t) e^{−at} ↦ 1/(a + iω), a > 0
Exponential: (1 − θ(t)) e^{at} ↦ 1/(a − iω), a > 0
Exponential: e^{−a|t|} ↦ 2a/(a² + ω²), a > 0
Heaviside's function: θ(t) ↦ πδ(ω) + 1/(iω)
Constant: 1 ↦ 2πδ(ω)
Filtering (sinc): sin(Ωt)/(πt) ↦ θ(ω + Ω) − θ(ω − Ω)
Normal dist./Gaussian: (1/√(4πA)) e^{−t²/(4A)} ↦ e^{−Aω²}, A > 0
A-3. General Properties of the Z-transform

TABLE 3. General Properties of the Z-transform


{xₙ}_{n=0}^∞ ↦ X(z) = Z[{xₙ}](z)

Definition: xₙ ↦ X(z) = Σ_{n=0}^∞ xₙ z^{−n}
Linearity: a{xₙ} + b{yₙ} ↦ aZ[{xₙ}] + bZ[{yₙ}]
Damping: a^{−n} xₙ ↦ X(az), a > 0
Differentiation: n xₙ ↦ −z X′(z)
Differentiation: (1 − n) x_{n−1} σ_{n−1} ↦ X′(z)
Convolution: {xₙ} ⋆ {yₙ} ↦ X(z) Y(z)
Forward translation: x_{n−k} σ_{n−k} (k ≥ 0) ↦ z^{−k} X(z)
Backward translation: x_{n+k} (k ≥ 0) ↦ zᵏ X(z) − Σ_{j=0}^{k−1} x_j z^{k−j}

Transform pairs

Unit step: σₙ ↦ z/(z − 1)
Unit pulse: δₙ ↦ 1
Delayed unit pulse: δ_{n−k} ↦ z^{−k}
Exponential: aⁿ ↦ z/(z − a)
Ramp function: rₙ = nσₙ ↦ z/(z − 1)²
Sine: sin nθ ↦ z sin θ/(z² − 2z cos θ + 1)
Damped sine: aⁿ sin nθ ↦ az sin θ/(z² − 2za cos θ + a²)
Cosine: cos nθ ↦ z(z − cos θ)/(z² − 2z cos θ + 1)
Damped cosine: aⁿ cos nθ ↦ z(z − a cos θ)/(z² − 2za cos θ + a²)
A-4. The Haar Wavelet

1. The Haar Wavelet. The mother wavelet ψ and the scaling function ϕ are in this case very simple functions, taking the values 0, 1 and −1, and 0 and 1, respectively (see Fig. 1.4.1):

ψ(t) = 1 for 0 ≤ t ≤ 1/2,  ψ(t) = −1 for 1/2 < t ≤ 1,  ψ(t) = 0 otherwise;
ϕ(t) = 1 for 0 ≤ t < 1,  ϕ(t) = 0 otherwise.

The different operations performed on the mother wavelet to construct a basis are illustrated in Figs. 1.4.2,
1.4.3, 1.4.4 and 1.4.5.

FIGURE 1.4.1. The Haar wavelet and scaling function: (a) the Haar scaling function ϕ(t), (b) the Haar wavelet ψ(t).

FIGURE 1.4.2. Translations of ϕ: y = ϕ(t − 1) and y = ϕ(t − k).

FIGURE 1.4.3. Dilatations of ϕ: y = ϕ(2²t) and y = ϕ(2ᵏt).

FIGURE 1.4.4. Dilatations and translations: y = ϕ(2²t − 1) and y = ϕ(2²t − 3).

FIGURE 1.4.5. Dilatations, translation and normalization: y = 2^{2/2} ϕ(2²t − 3).

REMARK 25. Note that we have e.g.

ϕ(t) = ϕ(2t) + ϕ(2t − 1),
ψ(t) = ϕ(2t) − ϕ(2t − 1).
2. An Approximation Example. We will now see how to approximate a function by step functions. Observe that in the figures that illustrate the different cases we have used the function f(t) = t².

a) Approximation by the mean value (see Fig. 1.4.6):

f(t) ≈ A₀(t) = (∫₀¹ f(s) ds) ϕ(t).

FIGURE 1.4.6. Approximation by the mean value: y = f(t) and y = A₀(t).

b) Approximation by a step function (2 steps) (see Fig. 1.4.7):

f(t) ≈ A₁(t) = 2∫₀^{1/2} f(s) ds ϕ(2t) + 2∫_{1/2}^{1} f(s) ds ϕ(2t − 1)
 = (∫₀¹ f(s) √2 ϕ(2s) ds) √2 ϕ(2t) + (∫₀¹ f(s) √2 ϕ(2s − 1) ds) √2 ϕ(2t − 1)
 = a₀ϕ₀(t) + a₁ϕ₁(t).

FIGURE 1.4.7. Approximation by a step function (2 steps): y = f(t) and y = A₁(t).

c) Approximation by a step function (4 steps) (see Fig. 1.4.8):

f(t) ≈ A₂(t)
 = 4∫₀^{1/4} f(s) ds ϕ(4t) + 4∫_{1/4}^{1/2} f(s) ds ϕ(4t − 1) + 4∫_{1/2}^{3/4} f(s) ds ϕ(4t − 2) + 4∫_{3/4}^{1} f(s) ds ϕ(4t − 3)
 = (∫₀¹ f(s) 2ϕ(4s) ds) 2ϕ(4t) + (∫₀¹ f(s) 2ϕ(4s − 1) ds) 2ϕ(4t − 1) + (∫₀¹ f(s) 2ϕ(4s − 2) ds) 2ϕ(4t − 2) + (∫₀¹ f(s) 2ϕ(4s − 3) ds) 2ϕ(4t − 3)
 = a₀ϕ₀(t) + a₁ϕ₁(t) + a₂ϕ₂(t) + a₃ϕ₃(t).

FIGURE 1.4.8. Approximation by a step function (4 steps): y = f(t) and y = A₂(t).

d) Approximation by a step function (2ⁿ steps):

f(t) ≈ Σ_{k=0}^{2ⁿ−1} aₖ ϕₖ(t),

where the "Fourier coefficients" are

aₖ = ∫₀¹ f(s) 2^{n/2} ϕ(2ⁿs − k) ds,

and the "basis functions" are

ϕₖ(t) = 2^{n/2} ϕ(2ⁿt − k).
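The approximation in d) is easy to compute directly, since aₖϕₖ reduces to the mean of f on the dyadic interval [k/2ⁿ, (k+1)/2ⁿ]. A small sketch (Python/NumPy — our addition, with f(t) = t² as in the figures):

```python
import numpy as np

def haar_step_approx(f, n, t):
    """A_n(t) = sum_k a_k phi_k(t) with phi_k(t) = 2^{n/2} phi(2^n t - k);
    a_k * phi_k(t) equals the mean of f over the k-th dyadic cell."""
    N = 2 ** n
    s = (np.arange(4096) + 0.5) / 4096          # fine midpoint grid on [0, 1]
    means = f(s).reshape(N, -1).mean(axis=1)    # approx. 2^n * Int f over each cell
    k = np.minimum((t * N).astype(int), N - 1)  # which cell does each t fall in
    return means[k]

f = lambda s: s ** 2
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
for n in (1, 2, 5):
    err = np.max(np.abs(haar_step_approx(f, n, t) - f(t)))
    assert err <= 2.0 / 2 ** n   # sup-error decays like the cell width
```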

3. Approximation by wavelets. The basic idea is that we can write f(t) as

f(t) ≈ Aₙ(t) = (Aₙ(t) − Aₙ₋₁(t)) + (Aₙ₋₁(t) − Aₙ₋₂(t)) + ⋯ + (A₂(t) − A₁(t)) + (A₁(t) − A₀(t)) + A₀(t).

E.g. for n = 2 we have

f(t) ≈ A₂(t) = (A₂(t) − A₁(t)) + (A₁(t) − A₀(t)) + A₀(t),
where

A₁(t) − A₀(t) = 2∫₀¹ f(s)ϕ(2s) ds ϕ(2t) + 2∫₀¹ f(s)ϕ(2s − 1) ds ϕ(2t − 1) − ∫₀¹ f(s)ϕ(s) ds ϕ(t)
 = [ϕ(t) = ϕ(2t) + ϕ(2t − 1)]
 = ∫₀¹ f(s)(ϕ(2s) − ϕ(2s − 1)) ds ϕ(2t) − ∫₀¹ f(s)(ϕ(2s) − ϕ(2s − 1)) ds ϕ(2t − 1)
 = ∫₀¹ f(s)ψ(s) ds ψ(t),

where ψ(t) is the Haar wavelet as defined on p. 93. Similarly one can also show that

A₂(t) − A₁(t) = (∫₀¹ f(s) √2 ψ(2s) ds) √2 ψ(2t) + (∫₀¹ f(s) √2 ψ(2s − 1) ds) √2 ψ(2t − 1).

By continuing in this manner we find that f(t) can be approximated by Aₙ(t), which can be expressed as

Aₙ(t) = A₀(t) + Σ_{j,k=0}^{n} ⟨f, ψ_{j,k}⟩ ψ_{j,k}(t),

where

ψ_{j,k}(t) = 2^{j/2} ψ(2ʲt − k)  and  ⟨f, ψ_{j,k}⟩ = ∫₀¹ f(s) ψ_{j,k}(s) ds.

A-5. Additional Transforms

We present here some additional examples of transforms. For more information and applications cf., e.g., L. Debnath, Integral Transforms and Their Applications [3].

1. The Fourier Cosine Transform

F_c: f(t) → f̂_c(ω) = √(2/π) ∫₀^∞ f(t) cos(ωt) dt,
F_c^{−1}: f(t) = √(2/π) ∫₀^∞ f̂_c(ω) cos(ωt) dω.

2. The Fourier Sine Transform

F_s: f(t) → f̂_s(ω) = √(2/π) ∫₀^∞ f(t) sin(ωt) dt,
F_s^{−1}: f(t) = √(2/π) ∫₀^∞ f̂_s(ω) sin(ωt) dω.

3. The Hankel Transforms (defined by the Bessel functions J_n, n = 0, 1, …)

H_n: f(r) → f̂_n(y) = ∫₀^∞ f(r) J_n(yr) r dr,
H_n^{−1}: f(r) = ∫₀^∞ f̂_n(y) J_n(yr) y dy.
4. The Mellin Transform

M: f(x) → f̃(α) = ∫₀^∞ x^{α−1} f(x) dx,
M^{−1}: f(x) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} x^{−α} f̃(α) dα,

here α is complex and c is chosen such that the integral converges.

5. The Hilbert Transform

H: f(t) → f̂_H(x) = (1/π) ∫_{−∞}^{∞} f(t)/(t − x) dt,
H^{−1}: f(t) = −(1/π) ∫_{−∞}^{∞} f̂_H(x)/(x − t) dx.

6. The Stieltjes Transform

S: f(t) → f̃(z) = ∫₀^∞ f(t)/(t + z) dt,  |arg z| < π.

Remark: This operation can be inverted, but we don't get any simple integral formula as before, hence we don't write the inverse transform explicitly here.

7. The Generalized Stieltjes Transform

S_ρ: f(t) → f̃_ρ(z) = ∫₀^∞ f(t)/(t + z)^ρ dt,  |arg z| < π.

The same remark as above applies.

8. The Legendre Transform

L: f(x) → {f̃(n)},  f̃(n) = ∫_{−1}^{1} P_n(x) f(x) dx,
L^{−1}: f(x) = Σ_{n=0}^∞ ((2n + 1)/2) f̃(n) P_n(x).

Here P_n(x) is the Legendre polynomial of degree n, which we can write explicitly as

P_n(x) = 2^{−n} Σ_{k=0}^{[n/2]} (−1)ᵏ C(n, k) C(2n − 2k, n) x^{n−2k},  n = 0, 1, …,

where C(n, k) denotes the binomial coefficient, and the "Fourier coefficients" are a_n = ((2n + 1)/2) f̃(n).

9. The Jacobi Transform

J: f(x) → {f^{α,β}(n)},  f^{α,β}(n) = ∫_{−1}^{1} (1 − x)^α (1 + x)^β P_n^{α,β}(x) f(x) dx,
J^{−1}: f(x) = Σ_{n=0}^∞ (δ_n)^{−1} f^{α,β}(n) P_n^{α,β}(x).

Here P_n^{α,β}(x) is the Jacobi polynomial of degree n and order (α, β), which can be written explicitly as

P_n^{α,β}(x) = 2^{−n} Σ_{k=0}^{n} C(n + α, k) C(n + β, n − k) (x − 1)^{n−k} (x + 1)ᵏ,  n = 0, 1, …,

and the "Fourier coefficients" are a_n = (δ_n)^{−1} f^{α,β}(n), where

δ_n = 2^{α+β+1} Γ(n + α + 1) Γ(n + β + 1) / (n! (α + β + 2n + 1) Γ(n + α + β + 1)).

10. The Laguerre Transform

L: f(x) → {f̃_α(n)},  f̃_α(n) = ∫₀^∞ e^{−x} x^α L_n^α(x) f(x) dx,
L^{−1}: f(x) = Σ_{n=0}^∞ (δ_n)^{−1} f̃_α(n) L_n^α(x).

Here L_n^α(x) is the Laguerre polynomial of degree n ≥ 0 and order α > −1, and the "Fourier coefficients" are a_n = (δ_n)^{−1} f̃_α(n), where

δ_n = Γ(n + α + 1)/n!.

11. The Hermite Transform

H*: f(x) → {f_H(n)},  f_H(n) = ∫_{−∞}^{∞} e^{−x²} H_n(x) f(x) dx,
(H*)^{−1}: f(x) = Σ_{n=0}^∞ δ_n^{−1} f_H(n) H_n(x).

Here H_n(x) is the Hermite polynomial of degree n, and the "Fourier coefficients" are a_n = δ_n^{−1} f_H(n), where

δ_n = n! 2ⁿ √π.
R EMARK 26. Observe that the transforms 8-11 are special cases of the earlier theory for generalized
Fourier series (cf. Def. 6.1).

A-6. Partial Fraction Decompositions

It is quite common, especially when dealing with the Laplace or Z transform, that one wants to apply the inverse transform to a rational function

P(s)/Q(s).

If none of the standard rules apply directly, the standard approach is to first perform polynomial division if the degree of P is greater than or equal to the degree of Q. After this step it is usually best to make a partial fraction decomposition.
Suppose now that deg P < deg Q. We know that the polynomial Q can be factored (over R) into linear factors (s − a) and quadratic factors ((s − a)² + b²). Remember that a partial fraction decomposition is of the form

P(s)/Q(s) = p₁/q₁ + ⋯ + p_M/q_M,

where p₁, …, p_M are constants or linear polynomials, and q₁, …, q_M consist of the linear and quadratic factors of Q (with all multiplicities). The following two general rules apply:

• A linear factor (s − a) of multiplicity n contributes with

A₁/(s − a) + A₂/(s − a)² + ⋯ + A_n/(s − a)ⁿ.
• A quadratic factor (s − a)² + b² of multiplicity n contributes with

(A₁s + B₁)/((s − a)² + b²) + (A₂s + B₂)/((s − a)² + b²)² + ⋯ + (A_n s + B_n)/((s − a)² + b²)ⁿ.

The coefficients of the polynomials p_j are usually computed by putting the right-hand side on a common denominator and comparing the resulting coefficients with P(s).
Example 1.1. We consider the rational function

P(s)/Q(s) = (3s² + 1)/(s(s² + 1)(s − 1)²).

The factors of Q are the linear factors s and (s − 1) (the latter of multiplicity 2) and the quadratic factor (s² + 1). Hence the partial fraction decomposition is

P(s)/Q(s) = (3s² + 1)/(s(s² + 1)(s − 1)²) = A/s + B/(s − 1) + C/(s − 1)² + (Ds + E)/(s² + 1),

and if we put the right-hand side on a common denominator we get

(3s² + 1)/(s(s² + 1)(s − 1)²) = (A(s − 1)²(s² + 1) + Bs(s − 1)(s² + 1) + Cs(s² + 1) + (Ds + E)s(s − 1)²)/(s(s² + 1)(s − 1)²),

and hence

3s² + 1 = A(s − 1)²(s² + 1) + Bs(s − 1)(s² + 1) + Cs(s² + 1) + (Ds + E)s(s − 1)².

We can solve for A and C immediately: if we set s = 1 we see that

3 + 1 = A·0 + B·0 + 2C + (D + E)·0 = 2C,

hence C = 2, and if we set s = 0 we see that 1 = A. Hence we must have

3s² + 1 = (s − 1)²(s² + 1) + Bs(s − 1)(s² + 1) + 2s(s² + 1) + (Ds + E)s(s − 1)²
 = (s² + 1 − 2s)(s² + 1) + B(s² − s)(s² + 1) + 2s³ + 2s + (Ds² + Es)(s² − 2s + 1)
 = s⁴ − 2s³ + 2s² − 2s + 1 + B(s⁴ − s³ + s² − s) + 2s³ + 2s + Ds⁴ − 2Ds³ + Ds² + Es³ − 2Es² + Es
 = s⁴(1 + B + D) + s³(−2 − B + 2 − 2D + E) + s²(2 + B + D − 2E) + s(−2 − B + 2 + E) + 1.

And we get the following equations for the coefficients:

1 = 1,
E − B = 0,
B + D − 2E + 2 = 3,
E − 2D − B = 0,
B + D + 1 = 0,

and using standard linear algebra we see that B = E = −1 and D = 0. Hence the partial fraction decomposition becomes

P(s)/Q(s) = (3s² + 1)/(s(s² + 1)(s − 1)²) = 1/s − 1/(s − 1) + 2/(s − 1)² − 1/(s² + 1).

This can (and should) be verified by multiplying together all factors of the right-hand side again.
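Instead of expanding by hand one can determine the coefficients numerically: evaluating the identity 3s² + 1 = A(s−1)²(s²+1) + Bs(s−1)(s²+1) + Cs(s²+1) + (Ds+E)s(s−1)² at five sample points gives a linear system. A sketch (Python/NumPy — our addition, not part of the notes):

```python
import numpy as np

def coeff_basis(s):
    # the five polynomials multiplying A, B, C, D, E after clearing denominators
    return [(s - 1)**2 * (s**2 + 1),
            s * (s - 1) * (s**2 + 1),
            s * (s**2 + 1),
            s**2 * (s - 1)**2,
            s * (s - 1)**2]

pts = np.array([2.0, 3.0, -1.0, -2.0, 0.5])  # any 5 distinct points work
M = np.array([coeff_basis(s) for s in pts])
rhs = 3 * pts**2 + 1
A, B, C, D, E = np.linalg.solve(M, rhs)
assert np.allclose([A, B, C, D, E], [1, -1, 2, 0, -1])
```

This is the same "compare coefficients" computation as above, just phrased as point evaluation instead of symbolic expansion.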
Bibliography

[1] J. Bergh, F. Ekstedt, and M. Lindberg. Wavelets. Studentlitteratur, 1999.
[2] C.S. Burrus, R.A. Gopinath, and H. Guo. Introduction to Wavelets and Wavelet Transforms: A Primer. Prentice Hall, 1998.
[3] L. Debnath. Integral Transforms and Their Applications. CRC Press, 1995.
[4] K. Gröchenig. Foundations of Time-Frequency Analysis. Birkhäuser, 2000.
[5] J.D. Logan. Applied Mathematics. Wiley, 2nd edition, 1996.
[6] S.G. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999.

A-7. Answers to exercises

4.1. a) Linear. b) Non-linear, c) Non-linear, d) Linear.


4.3. u(x,t) = ∫_{−∞}^{∞} c(α) u_α(x,t) dα with c(α) = α²/(2k√π), which gives u(x,t) = (1/√(4k³π)) ∫_{−∞}^{∞} α² e^{−(x−α)²/(4kt)} dα.

4.5. a) S(t) = π²/3 + 4 Σ_{n=1}^∞ ((−1)ⁿ/n²) cos(nt). b) Use f(0) = 0 = S(0).

4.7. a) We get the equation u′_t = u″_xx, 0 < x < 1, t > 0, the initial value u(x,0) = 1, and the boundary values u(0,t) = u(1,t) = 0.
b) u(x,t) = (4/π) Σ_{k=0}^∞ (1/(2k + 1)) e^{−π²(2k+1)²t} sin(π(2k + 1)x).

4.9. u(x,t) = (1/2)(1 − e^{−4t} cos 2x).
5.1. (a) The eigenvalues are λ_n = 1/4 + n²π²/L² and the eigenfunctions are u_n(x) = (A_n/√x) cos((nπ/L) ln x).
(b) The eigenvalues are λ_n = p_n² + 1/4, where the p_n are solutions of tan p_n = 2p_n, and the eigenfunctions are u_n(x) = (A_n/√x) sin(p_n ln x).
5.3. u(x,t) = Σ_{k=0}^∞ a_k cos(πkx/l) e^{−π²k²t/l²} with a_k = (2/l) ∫₀ˡ f(x) cos(πkx/l) dx.

5.6. a) The reason that m must be an integer is the periodicity: Θ(θ + 2π) = Θ(θ).
b) ψ(r,θ) = −v₀(r + a²/r) cos θ, and ~v = (v_r, v_θ) with v_r = v₀(1 − a²/r²) cos θ and v_θ = v₀(1 + a²/r²) sin θ.
 
6.1. a) f(t) = ((1/2)e^{−3(t−2)} − (1/2)e^{−5(t−2)}) θ(t − 2).
b) y(t) = 1 − (3/2)e^{−t} + (1/2)e^{−3t}.
6.3. a) y(t) = (1/2)(1 − e^{−t}(cos t + sin t)), b) y(t) = (1/2)(e^{−t} cos t − 1 + t).
6.5. f̂(ω) = e^{−3iω}/(1 + iω).
 
6.7. F(ω) = i (sin((ω + ω₀)a)/(ω + ω₀) − sin((ω − ω₀)a)/(ω − ω₀)).
6.9. y(n) = 2ⁿ + (−1)ⁿ, n ≥ 1.

6.11. (f ⋆ f)(x) = (1 + |x|)e^{−|x|}.


6.13. x(t) = 5e^{−t} + 3e^{4t} (X(s) = 5/(s + 1) + 3/(s − 4)), y(t) = 5e^{−t} − 2e^{4t} (Y(s) = 5/(s + 1) − 2/(s − 4)).
6.15. y(t) = (1/(1 + ω²T²)) (ωT e^{−t/T} + sin ωt − ωT cos ωt).

6.17. y_n = (10/(10a − 7)) (a^{n+1} − 0.7^{n+1}) σ_n.
6.20. a) With the scalings z̃ = z/h, x̃ = Dx/(vh²), c̃ = cv/q the equation becomes

∂c̃/∂x̃ = ∂²c̃/∂z̃²,

with the boundary conditions

∂c̃/∂z̃|_{z̃=0} = ∂c̃/∂z̃|_{z̃→∞} = 0,

and

c̃|_{x̃=0} = δ(z̃ − 1),  c̃|_{x̃→∞} = 0.

b) c̃(x̃, 0) = (1/√(πx̃)) e^{−1/(4x̃)}, which gives c(x, 0) = (qh/√(πdvx)) e^{−vh²/(4dx)} kg m⁻³.
c) The maximum is attained at x̃ = 1/2, i.e. x = vh²/(2d) m.
8.1. We get a Volterra equation of the form u(x) = F(x) + ∫_a^x k(x,y) u(y) dy with F(x) = ∫_a^x (x − y) f(y) dy + (p(a)u₀ + u₁)(x − a) + u₀ and k(x,y) = (x − y)(p′(y) − q(y)) − p(y).


8.3. The integral equation is y(x) = 1 − ω² ∫₀ˣ (x − t) y(t) dt, and the solutions are y(x) = cos ωx; if y(1) = 0 we must have cos ω = 0, i.e. ω = (2n + 1)π/2, and we thus get the eigenfunctions y_n(x) = cos((2n + 1)πx/2), n ∈ Z.
8.5. For 1/2 ≤ α ≤ 1 we have F(α) = ∫₀^α F(t/(1 − t)) (1/t) dt = 1 − ∫_α^1 (1/t) dt = 1 + ln α.

8.7. y = 1 − x
8.9. u(x) = f(x) + Σ_{n=1}^∞ λⁿ f₁, where f₁ = ∫₀¹ f(t) dt. For the given function f(x) and λ = 1/2 we get u(x) = eˣ − e/2 + 1/2 + Σ_{n=1}^∞ (1/2ⁿ)((e − 1)/2) = eˣ.

8.11. a) u(x) ≡ 0. b) u(x) = sin x. c) u(x) = sin 2x + (λπ/2) sin x.
8.13. The eigenvalues are λ_n = −(2πn/a)² and the eigenfunctions are u_n(x) = cos((2πn/a)x), for n ∈ Z.

8.15. a) With k(t) = 1 − t/T for 0 ≤ t ≤ T we get the Volterra equation

a = a k(t) + ∫₀ᵗ k(t − y) u(y) dy,  0 ≤ t ≤ T,

which can be solved either by rewriting it as a differential equation or by using the Laplace transform.

8.17. The eigenvalues are λ = ±1/π. If we define f₁ = ∫₀^{2π} f(t) cos t dt and f₂ = ∫₀^{2π} f(t) sin t dt we get the following cases:

For λ = 1/π there are solutions if f₁ = 0 (e.g. if f(x) is odd, or ≡ 0), and these are then given by

u(x) = f(x) − (f₂/(2π)) sin x + c cos x

for some constant c.

For λ = −1/π there are solutions if f₂ = 0 (e.g. if f(x) is even, or ≡ 0), and these are then given by

u(x) = f(x) − (f₁/(2π)) cos x + d sin x,

where d is a constant.

For λ ≠ ±1/π we have the unique solution

u(x) = f(x) + (λf₁/(1 − λπ)) cos x − (λf₂/(1 + λπ)) sin x.
8.19. u(t) = cos t.
