
Chapter 3 Differential Equations

3.1 Introduction
Almost all the elementary and numerous advanced parts of theoretical physics
are formulated in terms of differential equations (DEs):
Newton's laws,
Maxwell's equations,
the Schrödinger and Dirac equations, etc.

Since the dynamics of many physical systems involve just two derivatives,
DEs of second order occur most frequently in physics, e.g.,
the acceleration in classical mechanics,

F = ma = m \frac{d^2 x}{dt^2},

and the kinetic energy operator, -\frac{\hbar^2}{2m}\nabla^2, in quantum mechanics.

A DE may be an ordinary differential equation (ODE) or a partial differential equation (PDE).

Examples of PDEs
1. Laplace's eq., \nabla^2 \psi = 0.
This very common and important eq. occurs in studies of
a. electromagnetic phenomena,
b. hydrodynamics,
c. heat flow,
d. gravitation.
2. Poisson's eq., \nabla^2 \psi = -\rho/\epsilon_0.
In contrast to the homogeneous Laplace eq., Poisson's eq. is
non-homogeneous, with a source term -\rho/\epsilon_0.
3. The wave (Helmholtz) and time-independent diffusion eqs., \nabla^2 \psi + k^2 \psi = 0.


These eqs. appear in such diverse phenomena as
a. elastic waves in solids,
b. sound or acoustics,
c. electromagnetic waves,
d. nuclear reactors.
4. The time-dependent diffusion eq.,

\nabla^2 \psi = \frac{1}{a^2}\frac{\partial \psi}{\partial t}.

5. The time-dependent wave eq.,

\partial^2 \psi \equiv \nabla^2 \psi - \frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2} = 0,

where \partial^2 is a four-dimensional analog of the Laplacian.


6. The scalar potential eq., \partial^2 \psi = \rho/\epsilon_0.
7. The Klein-Gordon eq., \partial^2 \psi = \mu^2 \psi, and the corresponding vector eqs. in
which the scalar function \psi is replaced by a vector function.
8. The Schrödinger wave eq.,

-\frac{\hbar^2}{2m}\nabla^2 \psi + V\psi = i\hbar\frac{\partial \psi}{\partial t}

and

-\frac{\hbar^2}{2m}\nabla^2 \psi + V\psi = E\psi

for the time-independent case.

Some general techniques for solving second-order PDEs:

1. Separation of variables, where the PDE is split into ODEs that are related
by common constants which appear as eigenvalues of linear operators,
L\psi = \lambda\psi, usually in one variable.
2. Conversion of a PDE into an integral eq. using Green's functions;
this applies to inhomogeneous PDEs.
3. Other analytical methods, such as the use of integral transforms.
4. Numerical calculations.
Nonlinear PDEs
Notice that the above-mentioned PDEs are linear (in \psi). Nonlinear ODEs and
PDEs are a rapidly growing and important field.
The simplest nonlinear wave eq. is

\frac{\partial \psi}{\partial t} + c(\psi)\frac{\partial \psi}{\partial x} = 0.

Perhaps the best-known nonlinear eq. is the Korteweg-de Vries (KdV) eq.,

\frac{\partial \psi}{\partial t} + \psi\frac{\partial \psi}{\partial x} + \frac{\partial^3 \psi}{\partial x^3} = 0.

3.2 First-order Differential Equations

We consider here the general form of a first-order DE:

\frac{dy}{dx} = f(x, y) = -\frac{P(x, y)}{Q(x, y)}.    (3.1)

The eq. is clearly a first-order ODE. It may or may not be linear, although we shall
treat the linear case explicitly later.
Separable variables

Frequently, the above eq. will have the special form

\frac{dy}{dx} = f(x, y) = -\frac{P(x)}{Q(y)},

or

P(x)dx + Q(y)dy = 0.

Integrating from (x_0, y_0) to (x, y) yields

\int_{x_0}^{x} P(x)dx + \int_{y_0}^{y} Q(y)dy = 0.

Since the lower limits x_0 and y_0 contribute constants, we may ignore the lower
limits of integration and simply add a constant of integration.
Example: Boyle's Law
In differential form, Boyle's gas law is

\frac{dV}{V} = -\frac{dP}{P},

with V the volume and P the pressure. Integrating gives
ln V + ln P = C.
If we set C = ln k, then PV = k.
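A quick numerical sketch (illustrative, not part of the original notes) confirms the separated solution: along the Boyle's-law curve PV = k, the quantity ln V + ln P is constant and dV/V = -dP/P holds to finite-difference accuracy. The value of k below is a hypothetical choice.

```python
import math

k = 8.0  # hypothetical value of the constant PV = k

def V(P):
    """Volume along the Boyle's-law curve PV = k."""
    return k / P

# ln V + ln P should equal the same constant C = ln k for every P.
consts = [math.log(V(P)) + math.log(P) for P in (0.5, 1.0, 2.0, 7.3)]
assert all(abs(c - math.log(k)) < 1e-12 for c in consts)

# Finite-difference check of the separated form dV/V = -dP/P.
P, h = 2.0, 1e-6
dV = V(P + h) - V(P - h)          # central difference in V
dP = 2 * h                        # corresponding change in P
assert abs(dV / V(P) + dP / P) < 1e-9
print("Boyle's law checks pass")
```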

Exact Differential Equations

Consider

P(x, y)dx + Q(x, y)dy = 0.

This eq. is said to be exact if we can match its LHS to a differential d\varphi,

d\varphi = \frac{\partial \varphi}{\partial x}dx + \frac{\partial \varphi}{\partial y}dy.

Since the RHS is zero, we look for an unknown function \varphi(x, y) = const. with d\varphi = 0. Matching

P(x, y)dx + Q(x, y)dy = \frac{\partial \varphi}{\partial x}dx + \frac{\partial \varphi}{\partial y}dy,

we have

\frac{\partial \varphi}{\partial x} = P(x, y),  \frac{\partial \varphi}{\partial y} = Q(x, y).

The necessary and sufficient condition for our eq. to be exact is that the second, mixed
partial derivatives of \varphi (assumed continuous) are independent of the
order of differentiation:

\frac{\partial^2 \varphi}{\partial y \partial x} = \frac{\partial P(x, y)}{\partial y} = \frac{\partial Q(x, y)}{\partial x} = \frac{\partial^2 \varphi}{\partial x \partial y}.

If such a \varphi(x, y) exists, then the solution is \varphi(x, y) = C.

Even when the equation is not exact, there always exists at least one integrating factor, \alpha(x, y), such that

\alpha(x, y)P(x, y)dx + \alpha(x, y)Q(x, y)dy = 0

is exact.
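The exactness test and the role of \varphi can be sketched numerically. In this hypothetical example (not from the text), P = 2xy and Q = x^2, for which \varphi(x, y) = x^2 y works:

```python
# Minimal numerical sketch: for P dx + Q dy = 0 with P = 2xy and Q = x**2,
# the exactness test dP/dy = dQ/dx holds, and phi(x, y) = x**2 * y
# satisfies dphi/dx = P and dphi/dy = Q.

def P(x, y):
    return 2 * x * y

def Q(x, y):
    return x ** 2

def phi(x, y):           # candidate potential for this example
    return x ** 2 * y

h = 1e-6
x0, y0 = 1.3, -0.7

# Exactness condition via central differences: dP/dy == dQ/dx.
dP_dy = (P(x0, y0 + h) - P(x0, y0 - h)) / (2 * h)
dQ_dx = (Q(x0 + h, y0) - Q(x0 - h, y0)) / (2 * h)
assert abs(dP_dy - dQ_dx) < 1e-6

# phi reproduces P and Q as its partial derivatives.
dphi_dx = (phi(x0 + h, y0) - phi(x0 - h, y0)) / (2 * h)
dphi_dy = (phi(x0, y0 + h) - phi(x0, y0 - h)) / (2 * h)
assert abs(dphi_dx - P(x0, y0)) < 1e-6
assert abs(dphi_dy - Q(x0, y0)) < 1e-6
print("exactness checks pass")
```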


Linear First-order ODEs
If f(x, y) has the form -p(x)y + q(x), then Eq.(3.1) becomes

\frac{dy}{dx} + p(x)y = q(x).    (3.2)

This is the most general linear first-order ODE. If q(x) = 0, Eq.(3.2) is
homogeneous (in y). A nonzero q(x) may represent a source or driving term.
The equation is linear: each term is linear in y or dy/dx; there are no higher
powers, such as y^2, and no products, such as y\,dy/dx. This eq. may be solved exactly.

Let us look for an integrating factor \alpha(x) so that

\alpha(x)\frac{dy}{dx} + \alpha(x)p(x)y = \alpha(x)q(x)    (3.3)

may be rewritten as

\frac{d}{dx}[\alpha(x)y] = \alpha(x)q(x).    (3.4)

The purpose of this is to make the left-hand side of Eq.(3.2) a
derivative so that it can be integrated by inspection.
Expanding Eq.(3.4), we obtain

\alpha(x)\frac{dy}{dx} + \frac{d\alpha}{dx}y = \alpha(x)q(x).

Comparison with Eq.(3.3) shows that we must require

\frac{d\alpha(x)}{dx} = \alpha(x)p(x).

Here is a differential equation for \alpha(x), with the variables \alpha and x
separable. We separate variables, integrate, and obtain

\alpha(x) = \exp\left[\int^{x} p(t)dt\right]    (3.5)

as our integrating factor.


With \alpha(x) known we proceed to integrate Eq.(3.4). This, of course, was the
point of introducing \alpha(x) in the first place. We have

\int^{x} \frac{d}{dx}[\alpha(x)y]\,dx = \int^{x} \alpha(x)q(x)dx.

Now integrating by inspection, we have

\alpha(x)y(x) = \int^{x} \alpha(x)q(x)dx + C.

The constants from a constant lower limit of integration are lumped into the
constant C. Dividing by \alpha(x), we obtain

y(x) = \frac{1}{\alpha(x)}\left[\int^{x} \alpha(x)q(x)dx + C\right].

Finally, substituting in Eq.(3.5) for \alpha yields

y(x) = \exp\left[-\int^{x} p(t)dt\right]\left\{\int^{x} \exp\left[\int^{s} p(t)dt\right]q(s)ds + C\right\}.    (3.6)

Equation (3.6) is the complete general solution of the linear,
first-order differential equation, Eq.(3.2). The portion

y_1(x) = C\exp\left[-\int^{x} p(t)dt\right]

corresponds to the case q(x) = 0 and is the general solution of the homogeneous
differential equation. The other term in Eq.(3.6),

y_2(x) = \exp\left[-\int^{x} p(t)dt\right]\int^{x} \exp\left[\int^{s} p(t)dt\right]q(s)ds,

is a particular solution corresponding to the specific source term q(x).
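Equation (3.6) can be exercised numerically. In the sketch below (an illustrative choice, not from the text), p(x) = 1 and q(x) = x with lower limits taken at 0, for which the exact solution of y' + y = x with y(0) = C is y = x - 1 + (1 + C)e^{-x}:

```python
import math

def integral(f, a, b, n=400):
    """Composite midpoint rule for a definite integral."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

p = lambda t: 1.0     # hypothetical example: p(x) = 1
q = lambda s: s       # hypothetical example: q(x) = x

def y(x, C=2.0):
    """General solution, Eq.(3.6), with both quadratures done numerically."""
    outer = math.exp(-integral(p, 0.0, x))
    inner = integral(lambda s: math.exp(integral(p, 0.0, s)) * q(s), 0.0, x)
    return outer * (inner + C)

# Exact solution with these lower limits: y = x - 1 + (1 + C) exp(-x).
for x in (0.0, 0.5, 1.5):
    exact = x - 1.0 + 3.0 * math.exp(-x)
    assert abs(y(x) - exact) < 1e-3
print("Eq.(3.6) reproduces x - 1 + (1 + C) e^{-x}")
```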

Example: RL Circuit
For a resistance-inductance circuit, Kirchhoff's law leads to

L\frac{dI(t)}{dt} + RI(t) = V(t)

for the current I(t), where L is the inductance and R the resistance, both
constant. V(t) is the time-dependent impressed voltage.
From Eq.(3.5) our integrating factor \alpha(t) is

\alpha(t) = \exp\left[\int^{t} \frac{R}{L}dt\right] = e^{Rt/L}.

Then by Eq.(3.6),

I(t) = e^{-Rt/L}\left[\int^{t} e^{Rt'/L}\frac{V(t')}{L}dt' + C\right],

with the constant C to be determined by an initial condition (a boundary condition).

For the special case V(t) = V_0, a constant,

I(t) = e^{-Rt/L}\left[\frac{V_0}{L}\cdot\frac{L}{R}e^{Rt/L} + C\right] = \frac{V_0}{R} + Ce^{-Rt/L}.

If the initial condition is I(0) = 0, then C = -V_0/R, and

I(t) = \frac{V_0}{R}\left(1 - e^{-Rt/L}\right).
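A finite-difference sketch (with hypothetical parameter values) verifies that the closed-form current satisfies Kirchhoff's equation, the initial condition, and the steady-state limit V_0/R:

```python
import math

# Hypothetical parameter values for the sketch.
R, L_ind, V0 = 3.0, 0.5, 12.0

def I(t):
    """Closed-form RL current for constant drive, with I(0) = 0."""
    return (V0 / R) * (1.0 - math.exp(-R * t / L_ind))

# Check the ODE  L dI/dt + R I = V0  by a central difference at several times.
h = 1e-6
for t in (0.1, 0.5, 2.0):
    dI_dt = (I(t + h) - I(t - h)) / (2 * h)
    assert abs(L_ind * dI_dt + R * I(t) - V0) < 1e-6

# Initial condition and the steady-state limit I -> V0 / R.
assert I(0.0) == 0.0
assert abs(I(50.0) - V0 / R) < 1e-9
print("RL solution satisfies the circuit equation")
```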

3.3 SEPARATION OF VARIABLES

A very important PDE in physics is

\nabla^2 \psi + f\psi = 0.

For f = k^2, a constant, this is the Helmholtz wave equation, which governs
electromagnetic fields, wave propagation, etc. For

f = k^2(r) = \frac{2m}{\hbar^2}\left(E + \frac{e^2}{4\pi\epsilon_0 r}\right)

in spherical coordinates, it is the Schrödinger equation of the hydrogen atom,

\nabla^2 \psi + \frac{2m}{\hbar^2}\left(E + \frac{e^2}{4\pi\epsilon_0 r}\right)\psi = 0,

where m is the mass of the electron, \hbar the Planck constant, and \epsilon_0
the electric permittivity of vacuum.
We treat in turn Cartesian, cylindrical, and spherical coordinates.

Certain partial differential equations can be solved by separation of variables.
The method splits a partial differential equation in n variables into
ordinary differential equations. Each separation introduces an arbitrary
constant of separation. If we have n variables, we have to introduce n-1
constants, determined by the conditions imposed in the problem being
solved.
Cartesian Coordinates
In Cartesian coordinates the Helmholtz equation becomes

\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} + \frac{\partial^2 \psi}{\partial z^2} + k^2\psi = 0,    (3.7)

using \nabla^2 \psi(x, y, z) for the Laplacian. For the present, let k^2 be a constant.

Let

\psi(x, y, z) = X(x)Y(y)Z(z).    (3.7a)

Then

YZ\frac{d^2X}{dx^2} + XZ\frac{d^2Y}{dy^2} + XY\frac{d^2Z}{dz^2} + k^2 XYZ = 0.

Dividing by XYZ and rearranging terms, we obtain

\frac{1}{X}\frac{d^2X}{dx^2} = -k^2 - \frac{1}{Y}\frac{d^2Y}{dy^2} - \frac{1}{Z}\frac{d^2Z}{dz^2}.    (3.8)

The left-hand side is a function of x alone, whereas the right-hand side depends
only on y and z, but x, y, and z are all independent coordinates. The only
possibility is setting each side equal to a constant, a constant of separation. We
choose

\frac{1}{X}\frac{d^2X}{dx^2} = -l^2,    (3.9)

-k^2 - \frac{1}{Y}\frac{d^2Y}{dy^2} - \frac{1}{Z}\frac{d^2Z}{dz^2} = -l^2.    (3.10)

Rearranging Eq.(3.10),

\frac{1}{Y}\frac{d^2Y}{dy^2} = -k^2 + l^2 - \frac{1}{Z}\frac{d^2Z}{dz^2},

where a second separation constant can now be introduced.

Similarly,

\frac{1}{Y}\frac{d^2Y}{dy^2} = -m^2,    (3.11)

\frac{1}{Z}\frac{d^2Z}{dz^2} = -n^2,    (3.12)

introducing a constant n^2 by k^2 = l^2 + m^2 + n^2 to produce a symmetric set of
equations. Now we have three ordinary differential equations ((3.9), (3.11), and
(3.12)) to replace Eq.(3.7). Our solution should be labeled according to the choice
of our constants l, m, and n; that is,

\psi_{lmn}(x, y, z) = X_l(x)Y_m(y)Z_n(z),    (3.13)

subject to the conditions of the problem being solved.
We may develop the most general solution of Eq.(3.7) by taking a linear
combination of solutions \psi_{lmn},

\Psi = \sum_{l,m,n} a_{lmn}\psi_{lmn},    (3.14)

where the constant coefficients a_{lmn} are determined by the boundary conditions.
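The separated product solution can be checked directly. The sketch below (with illustrative separation constants) verifies by central differences that a product X Y Z with k^2 = l^2 + m^2 + n^2 satisfies the Helmholtz equation:

```python
import math

# Hypothetical separation constants for the sketch.
l, m, n = 1.0, 2.0, 0.5
k2 = l**2 + m**2 + n**2

def psi(x, y, z):
    """Separated product solution X(x) Y(y) Z(z)."""
    return math.sin(l * x) * math.sin(m * y) * math.sin(n * z)

def laplacian(f, x, y, z, h=1e-4):
    """Second-order central-difference Laplacian."""
    return (
        (f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z)) / h**2
        + (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z)) / h**2
        + (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h)) / h**2
    )

# Residual of the Helmholtz equation at a few arbitrary points.
for pt in [(0.3, 0.7, 1.1), (1.2, 0.4, 0.9)]:
    r = laplacian(psi, *pt) + k2 * psi(*pt)
    assert abs(r) < 1e-5
print("product solution satisfies the Helmholtz equation")
```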

Example: Laplace equation in rectangular coordinates

\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} + \frac{\partial^2 \psi}{\partial z^2} = 0,  \psi(x, y, z) = X(x)Y(y)Z(z).

Consider a rectangular box with dimensions (a, b, c) in the (x, y, z)
directions. All surfaces of the box are kept at zero potential, except
the surface z = c, which is at a potential V(x, y).
It is required to find the potential everywhere inside the box.

Separation, \psi = X(x)Y(y)Z(z), gives

\frac{1}{X}\frac{d^2X}{dx^2} = -\alpha^2,  \frac{1}{Y}\frac{d^2Y}{dy^2} = -\beta^2,  \frac{1}{Z}\frac{d^2Z}{dz^2} = \gamma^2,

where \gamma^2 = \alpha^2 + \beta^2. The solutions of the three ordinary differential equations are then

X \sim e^{\pm i\alpha x},  Y \sim e^{\pm i\beta y},  Z \sim e^{\pm\sqrt{\alpha^2+\beta^2}\,z}.

To have \psi = 0 at x = a and y = b, we must have \alpha a = n\pi and \beta b = m\pi. Then, with

\alpha_n = n\pi/a,  \beta_m = m\pi/b,  \gamma_{nm} = \pi\sqrt{\frac{n^2}{a^2} + \frac{m^2}{b^2}},

\psi_{nm} = \sin(\alpha_n x)\sin(\beta_m y)\sinh(\gamma_{nm} z)

and

\psi(x, y, z) = \sum_{n,m=1}^{\infty} A_{nm}\sin(\alpha_n x)\sin(\beta_m y)\sinh(\gamma_{nm} z).

Since \psi = V(x, y) at z = c,

V(x, y) = \sum_{n,m=1}^{\infty} A_{nm}\sin(\alpha_n x)\sin(\beta_m y)\sinh(\gamma_{nm} c),

and we have the coefficients

A_{nm} = \frac{4}{ab\sinh(\gamma_{nm} c)}\int_0^a dx\int_0^b dy\, V(x, y)\sin(\alpha_n x)\sin(\beta_m y).

Here the orthogonality features of the Fourier series have been used.
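The coefficient formula can be tested numerically. For the hypothetical special case V(x, y) = V_0, a constant, the double integral evaluates in closed form to A_nm = 16 V_0 / (n m \pi^2 \sinh(\gamma_{nm} c)) for odd n and m (zero otherwise), and a midpoint-rule quadrature reproduces this:

```python
import math

# Hypothetical box and potential for the sketch: V(x, y) = V0, a constant.
a, b, c, V0 = 1.0, 2.0, 1.5, 5.0

def gamma(n, m):
    return math.pi * math.sqrt(n**2 / a**2 + m**2 / b**2)

def A_numeric(n, m, N=200):
    """Midpoint-rule evaluation of the coefficient integral."""
    hx, hy = a / N, b / N
    s = 0.0
    for i in range(N):
        x = (i + 0.5) * hx
        sx = math.sin(n * math.pi * x / a)
        for j in range(N):
            y = (j + 0.5) * hy
            s += V0 * sx * math.sin(m * math.pi * y / b)
    return 4.0 * s * hx * hy / (a * b * math.sinh(gamma(n, m) * c))

def A_closed(n, m):
    """Closed form for constant V0: nonzero only for odd n and odd m."""
    if n % 2 == 0 or m % 2 == 0:
        return 0.0
    return 16.0 * V0 / (n * m * math.pi**2 * math.sinh(gamma(n, m) * c))

for n, m in [(1, 1), (1, 3), (2, 1), (3, 3)]:
    assert abs(A_numeric(n, m) - A_closed(n, m)) < 1e-4
print("Fourier coefficients match the closed form")
```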

Circular Cylindrical Coordinates

With our unknown function \psi dependent on \rho, \varphi, and z, the Helmholtz
equation becomes

\nabla^2 \psi(\rho, \varphi, z) + k^2 \psi(\rho, \varphi, z) = 0,    (3.15)

or

\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial\psi}{\partial\rho}\right) + \frac{1}{\rho^2}\frac{\partial^2\psi}{\partial\varphi^2} + \frac{\partial^2\psi}{\partial z^2} + k^2\psi = 0.    (3.16)

As before, we assume a factored form for \psi,

\psi(\rho, \varphi, z) = P(\rho)\Phi(\varphi)Z(z).    (3.17)

Substituting into Eq.(3.16), we have

\frac{\Phi Z}{\rho}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + \frac{PZ}{\rho^2}\frac{d^2\Phi}{d\varphi^2} + P\Phi\frac{d^2Z}{dz^2} + k^2 P\Phi Z = 0.    (3.18)

All the partial derivatives have become ordinary derivatives. Dividing by P\Phi Z
and moving the z derivative to the right-hand side yields

\frac{1}{\rho P}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + \frac{1}{\rho^2\Phi}\frac{d^2\Phi}{d\varphi^2} + k^2 = -\frac{1}{Z}\frac{d^2Z}{dz^2}.    (3.19)

Then

\frac{d^2Z}{dz^2} = l^2 Z    (3.20)

and

\frac{1}{\rho P}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + \frac{1}{\rho^2\Phi}\frac{d^2\Phi}{d\varphi^2} + k^2 = -l^2.    (3.21)

Setting k^2 + l^2 = n^2, multiplying by \rho^2, and rearranging terms, we obtain

\frac{\rho}{P}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + n^2\rho^2 = -\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2}.    (3.22)

We may set the right-hand side equal to m^2, so that

\frac{d^2\Phi}{d\varphi^2} = -m^2\Phi.    (3.23)

Finally, for the \rho dependence we have

\rho\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + (n^2\rho^2 - m^2)P = 0.    (3.24)

This is Bessel's differential equation. The solutions and their properties are
presented in Chapter 6.

The original Helmholtz equation has been replaced by three ordinary differential
equations. A solution of the Helmholtz equation is \psi(\rho, \varphi, z) = P(\rho)\Phi(\varphi)Z(z).
A general solution is

\Psi(\rho, \varphi, z) = \sum_{m,n} a_{mn} P_{mn}(\rho)\Phi_m(\varphi)Z_n(z).    (3.26)
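As a numerical sketch (not from the text), one can build J_m from its power series, given later in Eq.(3.48), and confirm by central differences that P(\rho) = J_m(n\rho) satisfies Eq.(3.24); the constants m and n below are arbitrary illustrative choices:

```python
import math

def J(m, x, terms=30):
    """Bessel function of integer order m from its defining power series."""
    return sum(
        (-1) ** j / (math.factorial(j) * math.factorial(j + m)) * (x / 2) ** (2 * j + m)
        for j in range(terms)
    )

m, n = 2, 1.5          # hypothetical separation constants
P = lambda rho: J(m, n * rho)

# Check  rho d/drho(rho dP/drho) + (n^2 rho^2 - m^2) P = 0  numerically.
h = 1e-5
for rho in (0.7, 1.3, 2.4):
    dP = lambda r: (P(r + h) - P(r - h)) / (2 * h)
    term = rho * ((rho + h) * dP(rho + h) - (rho - h) * dP(rho - h)) / (2 * h)
    residual = term + (n**2 * rho**2 - m**2) * P(rho)
    assert abs(residual) < 1e-4
print("J_m(n rho) satisfies Bessel's equation (3.24)")
```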

Spherical Polar Coordinates

Let us try to separate the Helmholtz equation in spherical polar coordinates:

\frac{1}{r^2\sin\theta}\left[\sin\theta\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right) + \frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{\sin\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right] = -k^2\psi.    (3.27)

We try \psi(r, \theta, \varphi) = R(r)\Theta(\theta)\Phi(\varphi). By substituting back into Eq.(3.27) and
dividing by R\Theta\Phi, we have

\frac{1}{Rr^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + \frac{1}{\Theta r^2\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right) + \frac{1}{\Phi r^2\sin^2\theta}\frac{d^2\Phi}{d\varphi^2} = -k^2.    (3.29)

Note that all derivatives are now ordinary derivatives rather than partials.
By multiplying by r^2\sin^2\theta, we can isolate (1/\Phi)(d^2\Phi/d\varphi^2) to get

\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2} = r^2\sin^2\theta\left[-k^2 - \frac{1}{Rr^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) - \frac{1}{\Theta r^2\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right)\right].    (3.30)

Equation (3.30) equates a function of \varphi alone to a function of r and \theta, so
we use m^2 as the separation constant. Any constant will do, but this
one will make life a little easier. Then

\frac{1}{\Phi}\frac{d^2\Phi(\varphi)}{d\varphi^2} = -m^2    (3.31)

and

\frac{1}{Rr^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + \frac{1}{\Theta r^2\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right) - \frac{m^2}{r^2\sin^2\theta} = -k^2.    (3.32)

Multiplying Eq.(3.32) by r^2 and rearranging terms, we obtain

\frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + r^2k^2 = -\frac{1}{\Theta\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right) + \frac{m^2}{\sin^2\theta}.    (3.33)

Again, the variables are separated. We equate each side to a constant Q and
finally obtain

\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right) - \frac{m^2}{\sin^2\theta}\Theta + Q\Theta = 0,    (3.34)

\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + k^2R - \frac{QR}{r^2} = 0.    (3.35)

Once more we have replaced a partial differential equation of three variables
by three ordinary differential equations. Eq.(3.34) is identified as the
associated Legendre equation, in which the constant Q becomes l(l+1); l is
an integer. If k^2 is a (positive) constant, Eq.(3.35) becomes the spherical
Bessel equation.
Again, our most general solution may be written

\Psi(r, \theta, \varphi) = \sum_{Q,m} R_Q(r)\Theta_{Qm}(\theta)\Phi_m(\varphi).    (3.36)
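A small numerical sketch (illustrative, not from the text): for l = m = 1 the associated Legendre solution is proportional to \sin\theta (up to sign convention), and substituting \Theta = \sin\theta into Eq.(3.34) with Q = l(l+1) = 2 gives a vanishing residual:

```python
import math

# For l = 1, m = 1 the angular solution is Theta ~ sin(theta), Q = l(l+1) = 2.
l, m = 1, 1
Q = l * (l + 1)
Theta = math.sin

h = 1e-5
for theta in (0.4, 1.0, 2.3):
    dT = lambda t: (Theta(t + h) - Theta(t - h)) / (2 * h)
    # (1/sin theta) d/dtheta (sin theta dTheta/dtheta), by central differences
    term = (
        (math.sin(theta + h) * dT(theta + h) - math.sin(theta - h) * dT(theta - h))
        / (2 * h)
    ) / math.sin(theta)
    residual = term - m**2 / math.sin(theta) ** 2 * Theta(theta) + Q * Theta(theta)
    assert abs(residual) < 1e-4
print("sin(theta) satisfies the associated Legendre equation for l = m = 1")
```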

The restriction that k^2 be a constant is unnecessarily severe. The separation process will
still be possible for k^2 as general as

k^2 = f(r) + \frac{1}{r^2}g(\theta) + \frac{1}{r^2\sin^2\theta}h(\varphi) + k'^2.

In the hydrogen atom problem, one of the most important examples of the Schrödinger
wave equation with a closed-form solution, we have k^2 = f(r).
Finally, as an illustration of how the constant m in Eq.(3.31) is restricted, we note
that \varphi in cylindrical and spherical polar coordinates is an azimuth angle. If this is a
classical problem, we shall certainly require that the azimuthal solution \Phi(\varphi) be
single-valued; that is,

\Phi(\varphi + 2\pi) = \Phi(\varphi).

This is equivalent to requiring the azimuthal solution to have a
period of 2\pi or some integral multiple of it. Therefore m must be
an integer. Which integer it is depends on the details of the
problem. Whenever a coordinate corresponds to an axis of
translation or to an azimuth angle, the separated equation always
has the form

\frac{d^2\Phi_m(\varphi)}{d\varphi^2} = -m^2\Phi_m(\varphi).

3.4 Singular Points

Let us consider a general second-order homogeneous DE (in y),

y'' + P(x)y' + Q(x)y = 0,    (3.40)

where y' = dy/dx. Now, if P(x) and Q(x) remain finite at x = x_0, point x = x_0
is an ordinary point. However, if either P(x) or Q(x) (or both) diverges as
x approaches x_0, then x_0 is a singular point.
Using Eq.(3.40), we are able to distinguish between two kinds of singular points:
(1) If either P(x) or Q(x) diverges as x \to x_0, but (x - x_0)P(x) and (x - x_0)^2 Q(x)
remain finite, then x_0 is called a regular or non-essential singular point.
(2) If (x - x_0)P(x) or (x - x_0)^2 Q(x) still diverges as x \to x_0,
then x_0 is labeled an irregular or essential singularity.

These definitions hold for all finite values of x_0. The analysis of x \to \infty is similar
to the treatment of functions of a complex variable. We set x = 1/z, substitute into
the DE, and then let z \to 0. By changing variables in the derivative, we have

\frac{dy(x)}{dx} = \frac{dy(z^{-1})}{dz}\frac{dz}{dx} = -\frac{1}{x^2}\frac{dy(z^{-1})}{dz} = -z^2\frac{dy(z^{-1})}{dz},    (3.41)

\frac{d^2y(x)}{dx^2} = \frac{d}{dz}\left[\frac{dy(x)}{dx}\right]\frac{dz}{dx} = (-z^2)\left[-2z\frac{dy(z^{-1})}{dz} - z^2\frac{d^2y(z^{-1})}{dz^2}\right] = 2z^3\frac{dy(z^{-1})}{dz} + z^4\frac{d^2y(z^{-1})}{dz^2}.    (3.42)

Using these results, we transform Eq.(3.40) into

z^4\frac{d^2y}{dz^2} + \left[2z^3 - z^2P(z^{-1})\right]\frac{dy}{dz} + Q(z^{-1})y = 0.    (3.43)

The behavior at x = \infty (z = 0) then depends on the behavior of the new
coefficients

\tilde{P}(z) = \frac{2z - P(z^{-1})}{z^2}  and  \tilde{Q}(z) = \frac{Q(z^{-1})}{z^4}

as z \to 0. If these two expressions remain finite, point x = \infty is an ordinary
point. If they diverge no more rapidly than 1/z and 1/z^2, respectively, x = \infty
is a regular singular point; otherwise it is an irregular singular point.
Example
Bessel's eq. is

x^2y'' + xy' + (x^2 - n^2)y = 0.

Comparing it with Eq.(3.40), we have

P(x) = 1/x,  Q(x) = 1 - n^2/x^2,

which shows that point x = 0 is a regular singularity. As x \to \infty (z \to 0),
from Eq.(3.43) we have the coefficients

\tilde{P}(z) = \frac{1}{z}  and  \tilde{Q}(z) = \frac{1 - n^2z^2}{z^4}.

Since \tilde{Q}(z) diverges as 1/z^4, point x = \infty is an irregular or essential singularity.

We list, in Table 3.4, several typical ODEs and their singular points.

Table 3.4
Equation                                          Regular singularity   Irregular singularity
1. Hypergeometric
   x(x-1)y'' + [(1+a+b)x - c]y' + aby = 0         0, 1, \infty          ---
2. Legendre
   (1-x^2)y'' - 2xy' + l(l+1)y = 0                -1, 1, \infty         ---
3. Chebyshev
   (1-x^2)y'' - xy' + n^2y = 0                    -1, 1, \infty         ---
4. Confluent hypergeometric
   xy'' + (c-x)y' - ay = 0                        0                     \infty
5. Bessel
   x^2y'' + xy' + (x^2-n^2)y = 0                  0                     \infty
6. Laguerre
   xy'' + (1-x)y' + ay = 0                        0                     \infty
7. Simple harmonic oscillator
   y'' + \omega^2 y = 0                           ---                   \infty
8. Hermite
   y'' - 2xy' + 2\alpha y = 0                     ---                   \infty

3.5 Series Solutions

Recall that \nabla^2\psi + k^2\psi = 0 in spherical coordinates gave the angular equation

\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right) - \frac{m^2}{\sin^2\theta}\Theta + Q\Theta = 0.    (A)

Letting x = \cos\theta, it can be written as the standard form of the associated Legendre eq.,

(1-x^2)y'' - 2xy' + \left[l(l+1) - \frac{m^2}{1-x^2}\right]y = 0,

with Q = l(l+1). The radial equation,

\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + k^2R - \frac{QR}{r^2} = 0,    (B)

becomes, with the substitution R(r) = y(kr)/\sqrt{kr}, the Bessel equation

x^2y'' + xy' + [x^2 - (l+1/2)^2]y = 0.

Both are linear second-order homogeneous ODEs of the general form

y'' + P(x)y' + Q(x)y = 0.

Linear second-order ODEs

In this section, we develop a series-expansion method for
obtaining one solution of the linear, second-order, homogeneous DE.
A linear, second-order, homogeneous ODE may be written in the form

y'' + P(x)y' + Q(x)y = 0,

and its most general solution may be written as

y(x) = c_1y_1(x) + c_2y_2(x).

Our physical problem may instead lead to a nonhomogeneous, linear, second-order DE,

y'' + P(x)y' + Q(x)y = F(x).

A specific solution of this eq., y_p, could be obtained by some special
techniques. Obviously, we may add to y_p any solution of the
corresponding homogeneous eq. Hence

y(x) = c_1y_1(x) + c_2y_2(x) + y_p(x).

The constants c_1 and c_2 will eventually be fixed by boundary conditions.

We seek a solution of the form

y(x) = x^k(a_0 + a_1x + a_2x^2 + \cdots) = \sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda},  a_0 \neq 0.

Fuchs's Theorem:
We can always get at least one power-series solution, provided we are
expanding about a point that is an ordinary point or at worst a regular
singular point.

To illustrate the series solution, we apply the method to two important DEs.
First, the linear oscillator eq.,

y'' + \omega^2 y = 0,    (3.44)

with known solutions y = \sin\omega x, \cos\omega x.

We try

y(x) = x^k(a_0 + a_1x + a_2x^2 + \cdots) = \sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda},

with k and a_\lambda still undetermined. Note that k need not be an integer. By
differentiating twice, we obtain

y'' = \sum_{\lambda=0}^{\infty} a_\lambda(k+\lambda)(k+\lambda-1)x^{k+\lambda-2}.

By substituting into Eq.(3.44), we have

\sum_{\lambda=0}^{\infty} a_\lambda(k+\lambda)(k+\lambda-1)x^{k+\lambda-2} + \omega^2\sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda} = 0.    (3.45)

From the uniqueness of power series, the coefficient of each power of x on
the LHS must vanish individually.
The lowest power is x^{k-2}, for \lambda = 0 in the first summation. The requirement
that its coefficient vanish yields

a_0 k(k-1) = 0.

Since, by definition, a_0 \neq 0, we have

k(k-1) = 0.

This eq., coming from the coefficient of the lowest power of x, is called the
indicial equation. The indicial eq. and its roots are of critical importance to
our analysis: here k = 0 or k = 1.
If k = 1, the coefficient a_1(k+1)k of x^{k-1} must vanish, so that
a_1 = 0.
We set \lambda = j+2 in the first summation and \lambda = j in the second. This
results in

a_{j+2}(k+j+2)(k+j+1) + \omega^2 a_j = 0,

or

a_{j+2} = -a_j\frac{\omega^2}{(k+j+2)(k+j+1)}.

This is a two-term recurrence relation.

We first try the solution k = 0. The recurrence relation becomes

a_{j+2} = -a_j\frac{\omega^2}{(j+2)(j+1)},

which leads to

a_2 = -a_0\frac{\omega^2}{2\cdot 1} = -\frac{\omega^2}{2!}a_0,  a_4 = -a_2\frac{\omega^2}{4\cdot 3} = +\frac{\omega^4}{4!}a_0,

and in general

a_{2n} = (-1)^n\frac{\omega^{2n}}{(2n)!}a_0.

So our solution is

y(x)_{k=0} = a_0\left[1 - \frac{(\omega x)^2}{2!} + \frac{(\omega x)^4}{4!} - \cdots\right] = a_0\sum_{n=0}^{\infty}(-1)^n\frac{(\omega x)^{2n}}{(2n)!} = a_0\cos(\omega x).

If we choose the indicial eq. root k = 1, the recurrence relation becomes

a_{j+2} = -a_j\frac{\omega^2}{(j+3)(j+2)}.

Again, we have

a_{2n} = (-1)^n\frac{\omega^{2n}}{(2n+1)!}a_0.

For this choice, k = 1, we obtain

y(x)_{k=1} = a_0x\left[1 - \frac{(\omega x)^2}{3!} + \frac{(\omega x)^4}{5!} - \cdots\right] = \frac{a_0}{\omega}\sum_{n=0}^{\infty}(-1)^n\frac{(\omega x)^{2n+1}}{(2n+1)!} = \frac{a_0}{\omega}\sin(\omega x).
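The recurrence relation itself can be iterated numerically; the sketch below (with an arbitrary \omega) confirms that the k = 0 series sums to a_0 \cos(\omega x):

```python
import math

# Iterate the k = 0 recurrence a_{j+2} = -a_j w^2 / ((j+2)(j+1))
# and confirm the resulting series sums to a0 * cos(w x).

def series_solution(x, w=1.7, a0=1.0, nterms=40):
    coeffs = {0: a0}
    for j in range(0, 2 * nterms, 2):
        coeffs[j + 2] = -coeffs[j] * w**2 / ((j + 2) * (j + 1))
    return sum(a * x**j for j, a in coeffs.items())

for x in (-1.0, 0.3, 2.5):
    assert abs(series_solution(x) - math.cos(1.7 * x)) < 1e-10
print("recurrence reproduces cos(w x)")
```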

This series substitution, known as Frobenius' method, has given us two
series solutions of the linear oscillator eq. However, two points must be
strongly emphasized:
(1) The series solution should be substituted back into the DE, to see if it
works.
(2) The acceptability of a series solution depends on its convergence
(including asymptotic convergence).
Expansion about x_0
It is perfectly possible to write

y(x) = \sum_{\lambda=0}^{\infty} a_\lambda(x - x_0)^{k+\lambda},  a_0 \neq 0.

Indeed, for the Legendre eq. the choice x_0 = 1 has some advantages. The point x_0
should not be chosen at an essential singularity, or the method will probably fail.

Limitations of the Series Approach

This attack on the linear oscillator eq. was perhaps a bit too easy.
To get some idea of what can happen, we try to solve Bessel's eq.,

x^2y'' + xy' + (x^2 - n^2)y = 0,    (3.46)

again assuming a solution of the form

y(x) = \sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda}.

We differentiate and substitute into Eq.(3.46). The result is

\sum_{\lambda=0}^{\infty} a_\lambda(k+\lambda)(k+\lambda-1)x^{k+\lambda} + \sum_{\lambda=0}^{\infty} a_\lambda(k+\lambda)x^{k+\lambda} + \sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda+2} - n^2\sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda} = 0.

By setting \lambda = 0, we get the coefficient of x^k,

a_0[k(k-1) + k - n^2] = 0,

giving the indicial equation

k^2 - n^2 = 0,

with solutions k = +n and k = -n. For the coefficient of x^{k+1}, we obtain

a_1[(k+1)k + (k+1) - n^2] = 0.

For k = +n or -n (with k \neq -1/2), the bracket does not vanish and we must require
a_1 = 0.
Proceeding to the coefficient of x^{k+j} for k = n, we set \lambda = j in the 1st, 2nd,
and 4th terms and \lambda = j-2 in the 3rd term. By requiring the resultant
coefficient of x^{k+j} to vanish, we obtain

a_j[(n+j)(n+j-1) + (n+j) - n^2] + a_{j-2} = 0.

Replacing j by j+2, this can be written for j \geq 0 as

a_{j+2} = -a_j\frac{1}{(j+2)(2n+j+2)},    (3.47)

which is the desired recurrence relation. Repeated application of this recurrence
relation leads to

a_2 = -a_0\frac{1}{2(2n+2)} = -\frac{a_0\,n!}{2^2\,1!(n+1)!},

a_4 = -a_2\frac{1}{4(2n+4)} = +\frac{a_0\,n!}{2^4\,2!(n+2)!},

a_6 = -a_4\frac{1}{6(2n+6)} = -\frac{a_0\,n!}{2^6\,3!(n+3)!},  and so on, and in general

a_{2p} = (-1)^p\frac{a_0\,n!}{2^{2p}p!(n+p)!}.

Inserting these coefficients in our assumed series solution, we have

y(x) = a_0x^n\left[1 - \frac{n!\,x^2}{2^2\,1!(n+1)!} + \frac{n!\,x^4}{2^4\,2!(n+2)!} - \cdots\right].

In summation form,

y(x) = a_0\sum_{j=0}^{\infty}(-1)^j\frac{n!\,x^{n+2j}}{2^{2j}j!(n+j)!} = a_0 2^n n!\sum_{j=0}^{\infty}\frac{(-1)^j}{j!(n+j)!}\left(\frac{x}{2}\right)^{n+2j}.    (3.48)

The final summation is identified as the Bessel function J_n(x).

When k = -n and n is not an integer, we may generate a second
distinct series, to be labeled J_{-n}(x). However, when -n is a
negative integer, trouble develops:

J_{-n}(x) = \sum_{j=0}^{\infty}\frac{(-1)^j}{j!(j-n)!}\left(\frac{x}{2}\right)^{2j-n} = \sum_{j=n}^{\infty}\frac{(-1)^j}{j!(j-n)!}\left(\frac{x}{2}\right)^{2j-n} = (-1)^nJ_n(x),

since 1/(j-n)! vanishes for j < n. The second solution simply reproduces the first. We have failed
to construct a second independent solution for Bessel's eq. by
this series technique when n is an integer.
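This collapse can be seen numerically. The sketch below builds J_n from the series (3.48), treating 1/(j+n)! as zero when the argument is a negative integer, and confirms J_{-n} = (-1)^n J_n for integer n:

```python
import math

# Build J_n from the series (3.48); for negative order, terms with a
# negative factorial argument are dropped, since 1/(j+n)! = 0 there.

def J(n, x, terms=40):
    total = 0.0
    for j in range(terms):
        if j + n < 0:          # 1/(j + n)! = 0 for negative integer argument
            continue
        total += (-1) ** j / (math.factorial(j) * math.factorial(j + n)) \
                 * (x / 2) ** (2 * j + n)
    return total

# For integer n, the "second solution" is not independent: J_{-n} = (-1)^n J_n.
for n in (1, 2, 3):
    for x in (0.5, 1.0, 3.0):
        assert abs(J(-n, x) - (-1) ** n * J(n, x)) < 1e-12
print("J_{-n} = (-1)^n J_n for integer n")
```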

Will this method always work? The answer is no!

SUMMARY
If we are expanding about an ordinary point or at worst about a regular
singularity, the series substitution approach will yield at least one solution
(Fuchs's theorem).
Whether we get one or two distinct solutions depends on the roots of the
indicial equation.
1. If the two roots of the indicial equation are equal, we can obtain only
one solution by this series substitution method.
2. If the two roots differ by a noninteger, two independent
solutions may be obtained.
3. If the two roots differ by an integer, the larger of the two will yield a
solution; the smaller may or may not give a solution, depending on the behavior of
the coefficients. In the linear oscillator equation we obtained two solutions;
for Bessel's equation, only one solution.

Regular and Irregular Singularities

The success of the series substitution method depends on the roots of the
indicial eq. and the degree of singularity. To have a clear understanding of
this point, consider four simple eqs.:

y'' - \frac{6}{x^2}y = 0,    (3.49a)

y'' - \frac{6}{x^3}y = 0,    (3.49b)

y'' + \frac{1}{x}y' - \frac{a^2}{x^2}y = 0,    (3.49c)

y'' + \frac{1}{x^2}y' - \frac{a^2}{x^2}y = 0.    (3.49d)

For the 1st eq., the indicial eq. is

k(k-1) - 6 = 0,

giving k = 3, -2. Since the eq. is homogeneous in x (counting d^2/dx^2 as x^{-2}),
there is no recurrence relation, and we are left with two perfectly
good solutions, x^3 and x^{-2}.
For the 2nd eq., we have -6a_0 = 0, with no solution at all, for we have
agreed that a_0 \neq 0. The series substitution broke down at Eq.(3.49b), which
has an irregular singular point at the origin.
Continuing with Eq.(3.49c), we have added a term y'/x. The indicial eq. is
k^2 - a^2 = 0, but again there is no recurrence relation. The solutions are
y = x^a, x^{-a}, both perfectly acceptable one-term series.
For Eq.(3.49d) (y'/x replaced by y'/x^2), the indicial eq. becomes k = 0, and there is a
recurrence relation,

a_{j+1} = a_j\frac{a^2 - j(j-1)}{j+1}.

Unless the parameter a is selected to make the series terminate, we have

\lim_{j\to\infty}\left|\frac{a_{j+1}}{a_j}\right| = \lim_{j\to\infty}\frac{j(j-1)}{j+1} = \infty.

Hence our series solution diverges for all x \neq 0.

3.6 A Second Solution

In this section we develop two methods of obtaining a second
independent solution: an integral method and a power series containing a
logarithmic term. First, however, we consider the question of independence
of a set of functions.
Linear Independence of Solutions
Given a set of functions \varphi_\lambda, the criterion for linear dependence is
the existence of a relation of the form

\sum_\lambda k_\lambda\varphi_\lambda = 0,    (3.50)

in which not all the coefficients k_\lambda are zero. On the other hand, if the only
solution of Eq.(3.50) is k_\lambda = 0 for all \lambda, the set of functions \varphi_\lambda is said to be
linearly independent.
Let us assume that the functions \varphi_\lambda are differentiable. Then,
differentiating Eq.(3.50) repeatedly, we generate a set of equations

\sum_\lambda k_\lambda\varphi_\lambda' = 0,  \sum_\lambda k_\lambda\varphi_\lambda'' = 0,  \ldots,  \sum_\lambda k_\lambda\varphi_\lambda^{(n-1)} = 0.

This gives us a set of homogeneous linear eqs. in which the k_\lambda are the unknown
quantities. There is a solution k_\lambda \neq 0 only if the determinant of the coefficients
of the k_\lambda's vanishes:

\begin{vmatrix} \varphi_1 & \varphi_2 & \cdots & \varphi_n \\ \varphi_1' & \varphi_2' & \cdots & \varphi_n' \\ \vdots & & & \vdots \\ \varphi_1^{(n-1)} & \varphi_2^{(n-1)} & \cdots & \varphi_n^{(n-1)} \end{vmatrix} = 0.

This determinant is called the Wronskian.

1. If the Wronskian is not equal to zero, then Eq.(3.50) has no solution other than k_\lambda =
0. The set of functions is therefore independent.
2. If the Wronskian vanishes at isolated values of the argument, this does not
necessarily prove linear dependence (unless the set of functions has only two functions).
However, if the Wronskian is zero over the entire range of the variable, the functions
are linearly dependent over this range.
Example: Linear Independence
The solutions of the linear oscillator eq. are f_1 = \sin\omega x, f_2 = \cos\omega x. The
Wronskian becomes

\begin{vmatrix} \sin\omega x & \cos\omega x \\ \omega\cos\omega x & -\omega\sin\omega x \end{vmatrix} = -\omega \neq 0,

and f_1 and f_2 are therefore linearly independent. For just two functions this means that
one is not a multiple of the other, which is obviously true in this case.

You know that

\sin\omega x = (1 - \cos^2\omega x)^{1/2},

but this is not a linear relation.
Example: Linear Dependence
Consider the solutions of the one-dimensional diffusion eq. y'' - y = 0.
We have f_1 = e^x and f_2 = e^{-x}, and we add f_3 = \cosh x, also a solution.
Then

\begin{vmatrix} e^x & e^{-x} & \cosh x \\ e^x & -e^{-x} & \sinh x \\ e^x & e^{-x} & \cosh x \end{vmatrix} = 0,

because the first and third rows are identical. Hence the functions are linearly
dependent, and indeed we have

e^x + e^{-x} - 2\cosh x = 0  with  k_\lambda \neq 0.
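Both Wronskian examples are easy to check numerically (an illustrative sketch, not from the text):

```python
import math

def det3(M):
    """3x3 determinant by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

w, x = 2.0, 0.8

# Independence: W(sin wx, cos wx) = -w, nonzero everywhere.
W2 = math.sin(w * x) * (-w * math.sin(w * x)) - math.cos(w * x) * (w * math.cos(w * x))
assert abs(W2 - (-w)) < 1e-12

# Dependence: derivative rows for e^x, e^-x, cosh x give a zero determinant,
# since the row of second derivatives repeats the row of the functions.
M = [
    [math.exp(x), math.exp(-x), math.cosh(x)],
    [math.exp(x), -math.exp(-x), math.sinh(x)],
    [math.exp(x), math.exp(-x), math.cosh(x)],
]
assert abs(det3(M)) < 1e-12
print("Wronskian checks pass")
```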

A Second Solution
Returning to our 2nd-order ODE

y'' + P(x)y' + Q(x)y = 0,

let y_1 and y_2 be two independent solutions. Then the Wronskian is

W = y_1y_2' - y_1'y_2.

By differentiating the Wronskian, we obtain

W' = y_1y_2'' - y_1''y_2 = y_1[-P(x)y_2' - Q(x)y_2] - y_2[-P(x)y_1' - Q(x)y_1] = -P(x)(y_1y_2' - y_1'y_2).

In the special case that P(x) = 0, i.e.,

y'' + Q(x)y = 0,    (3.52)

the Wronskian W = y_1y_2' - y_1'y_2 = constant.

Since our original eq. is homogeneous, we may multiply the solutions y_1 and y_2
by whatever constants we wish and arrange to have W = 1 (or -1). This
case, P(x) = 0, appears more frequently than might be expected (\nabla^2 in
Cartesian coordinates, and the radial dependence of \nabla^2(r\psi) in spherical polar
coordinates, lack a first derivative). Finally, every linear 2nd-order ODE can
be transformed into an eq. of the form of Eq.(3.52).
For the general case, let us now assume that we have one solution by a
series substitution (or by guessing). We now proceed to develop a 2nd,
independent solution for which W \neq 0. We have

\frac{dW}{W} = -P(x_1)dx_1.

We integrate from x_1 = a to x_1 = x to obtain

\ln\frac{W(x)}{W(a)} = -\int_a^x P(x_1)dx_1,

or

W(x) = W(a)\exp\left[-\int_a^x P(x_1)dx_1\right].    (3.53)

But

W(x) = y_1y_2' - y_1'y_2 = y_1^2\frac{d}{dx}\left(\frac{y_2}{y_1}\right).    (3.54)

By combining Eqs.(3.53) and (3.54), we have

\frac{d}{dx}\left(\frac{y_2}{y_1}\right) = W(a)\frac{\exp\left[-\int_a^x P(x_1)dx_1\right]}{y_1^2}.    (3.55)

Finally, by integrating Eq.(3.55) from x_2 = b to x_2 = x, we get

y_2(x) = y_1(x)W(a)\int_b^x\frac{\exp\left[-\int_a^{x_2}P(x_1)dx_1\right]}{[y_1(x_2)]^2}dx_2.

Here a and b are arbitrary constants, and a term y_1(x)y_2(b)/y_1(b) has been
dropped, for it leads to nothing new. As mentioned before, we can set
W(a) = 1 and write

y_2(x) = y_1(x)\int^x\frac{\exp\left[-\int^{x_2}P(x_1)dx_1\right]}{[y_1(x_2)]^2}dx_2.    (3.56)

In the important special case of P(x) = 0, the above eq. reduces to

y_2(x) = y_1(x)\int^x\frac{dx_2}{[y_1(x_2)]^2}.

Now we can take one known solution and, by integrating, can generate a
second independent solution.
Example: A Second Solution for the Linear Oscillator Eq.
From d^2y/dx^2 + y = 0 with P(x) = 0, let one solution be y_1 = \sin x. Then

y_2(x) = \sin x\int^x\frac{dx_2}{\sin^2 x_2} = \sin x\,(-\cot x) = -\cos x.
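The integral formula can be checked by quadrature. In the sketch below the lower limit \pi/2 is an arbitrary choice (it only fixes the additive multiple of y_1 that gets dropped); the result reproduces -\cos x:

```python
import math

# Apply the P(x) = 0 second-solution formula y2 = y1 * Integral^x dt / y1(t)^2
# with y1 = sin x, and confirm it reproduces -cos x.

def integral(f, a, b, n=4000):
    """Composite midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a = math.pi / 2            # arbitrary lower limit, kept away from sin = 0

def y2(x):
    return math.sin(x) * integral(lambda t: 1.0 / math.sin(t) ** 2, a, x)

# Exact antiderivative of csc^2 is -cot, and cot(pi/2) = 0, so y2 = -cos x.
for x in (0.5, 1.0, 2.5):
    assert abs(y2(x) - (-math.cos(x))) < 1e-5
print("second-solution formula gives -cos x")
```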

Chapter 4. Orthogonal Functions* (Optional Reading)

4.1 Hermitian Operators (HO)
HO in quantum mechanics (QM)

As we know, p = -i\hbar\,\partial/\partial x is a Hermitian operator. As is customary in QM, we
simply assume that the wave functions satisfy appropriate boundary conditions:
vanishing sufficiently strongly at infinity or having periodic behavior. The
operator L is called Hermitian if

\int \psi_1^* L\psi_2\, d\tau = \int (L\psi_1)^* \psi_2\, d\tau.

The adjoint A^\dagger of an operator A is defined by

\int \psi_1^* A^\dagger \psi_2\, d\tau = \int (A\psi_1)^* \psi_2\, d\tau.

Clearly if A^\dagger = A (self-adjoint) and \psi satisfies the above-mentioned boundary
conditions, then A is Hermitian. The expectation value of an operator L is
defined as