
Downloaded 11/22/13 to 190.144.171.70. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

4. Inverse Problems in Potential Scattering
Khosrow Chadan

4.1. Introduction
Since the pioneering work of Gardner, Greene, Kruskal, and Miura in 1967, of Zakharov and Shabat in 1972, and of others, who showed that the inverse-problem techniques of potential scattering can be used to solve several nonlinear partial differential equations of mathematical physics (the Korteweg–de Vries equation for the motion of shallow water waves, the nonlinear Schrödinger equation for solitons in optical fibers, the sine-Gordon equation, the Boussinesq equation, etc.) and demonstrated the universality of the concept of solitons, a great amount of work has been devoted to these problems. One is now able to solve a great number of these equations, with applications to various branches of physics where nonlinear equations are derived by modeling important physical problems.

In other domains of physics as well, such as electromagnetism (radar detection, medical imaging by electric fields) or acoustics (applications in geophysics, tomography), other techniques have been developed in recent years and are now in current use with great success.

The purpose of this chapter is to give an account of the inverse-problem techniques of potential scattering. There are now many good books on scattering theory, inverse problems, and solvable nonlinear partial differential equations. For those who wish to learn more about these problems and related subjects, a list of references is given at the end.
4.2. Physical Background and Formulation of the Inverse Scattering Problem
We consider here the scattering of a nonrelativistic particle by a fixed center of force, or, what amounts to the same thing, the scattering of two such particles by each other after the motion of the center of mass has been removed. In appropriate units of length, time, and mass, and when the interaction between
the two particles is local and represented by a local potential V(x), we have to deal with the time-dependent Schrödinger equation

(2.1)   i ∂ψ(t)/∂t = H ψ(t),

where the Hamiltonian is given by the sum of the kinetic energy and the potential energy:

(2.2)   H = −Δ + V,
Δ being the Laplacian in three dimensions. According to the first principle of quantum mechanics, the state of a system at each instant of time is represented by a vector (the wave function) ψ(t) in the Hilbert space of states. In the case we deal with here, the Hilbert space of states is just L²(ℝ³). Since the Hamiltonian is an unbounded operator in this space, care must be taken with the domain of H. This domain depends, of course, on V and is usually extremely complicated, except when V ∈ L², in which case D(H) = H², H² being the Sobolev space. A way to avoid the difficulties which arise is to use quadratic forms, which are perfectly well suited for defining the self-adjoint operators that play a crucial role in quantum mechanics. Indeed, another basic principle of quantum mechanics states that to each observable of the system (positions of the particles, their momenta, energy) there corresponds a self-adjoint operator in the Hilbert space of states. Observables must have only real eigenvalues (or generalized eigenvalues). For details, see [5], [6].
For physical reasons, the potential V must be a real function. The first point is to look at the self-adjointness of the Hamiltonian. Indeed, the Hamiltonian operator corresponds to the total energy of the system, and since the total energy is an observable, it must correspond to a self-adjoint operator; this is one of the basic principles of quantum mechanics.

There are a variety of conditions under which H is self-adjoint. An illustrative example in one dimension is given in Chapter 1. Since we are interested in scattering theory, i.e., particles moving freely when they are far apart from each other, we must assume that V → 0 as |x| → ∞. One such condition is V ∈ L²(ℝ³). A better one, more suitable for scattering theory, is V ∈ L¹ ∩ L². It can be shown that this implies the condition

(2.3)   ∫∫ [ |V(x)| |V(y)| / |x − y|² ] d³x d³y < ∞.

More precisely, V ∈ L¹ ∩ L² implies V ∈ L^{3/2}, and then the Sobolev inequality leads to (2.3). As we shall see below, this condition, named after Rollnik, turns out to be very important in scattering theory and dispersion relations. Henceforth, we shall always assume (2.3).

Once the self-adjointness of the Hamiltonian is established, we must look at its spectrum. Under the above conditions, for instance the Rollnik condition (2.3), one can show that the spectrum consists of two parts: a continuous part (absolutely continuous) extending from 0 to +∞, and a discrete part containing a finite number of eigenvalues, each with finite multiplicity, all negative. For the proof of this fact in the case where V is a characteristic function of an interval, see §1.8 in Chapter 1. The negative eigenvalues correspond to bound systems, in which the two particles stay together for all times. For this reason, these states are called bound states.
What can be said, then, about the (absolutely) continuous part of the spectrum? A careful study shows that it corresponds to scattering states, i.e., to ψ(x,t) such that, for any bounded region Ω of space,

(2.4)   lim_{|t|→∞} ∫_Ω |ψ(x,t)|² d³x = 0,

where

(2.5)   ψ(x,t) = ∫ e^{−iEt} ψ(k,x) f(k) d³k,

E = k², f(k) ∈ L²(ℝ³), and ψ(k,x) is the solution of


(2.6)   H ψ(k,x) = E ψ(k,x).

What one would like to have is that (2.4) holds for all f(k) ∈ L². This would mean that no matter how the wave packet ψ(x,t) is made, the particles go far apart from each other when time becomes large, so that the probability of finding them together in an arbitrary finite region of space goes to zero. If this is the case, then all of the continuous spectrum is made of scattering states, and this is called asymptotic completeness.

Asymptotic completeness can also be shown to hold under various conditions on the potential. For instance, the condition V ∈ L¹ ∩ L² is sufficient to guarantee asymptotic completeness. Another such condition is again the Rollnik condition (2.3).
Once asymptotic completeness is established, one can go from the time-dependent description (2.1) to the time-independent description. Combining the Schrödinger equation (2.6) with the asymptotic condition that, at t = −∞, the particles move toward each other with relative momentum k (more precisely, that we have only an incoming plane wave, without any incoming spherical wave), one is led to the celebrated Lippmann–Schwinger equation (k = |k|):

(2.7a)   ψ(k,x) = e^{ik·x} − (1/4π) ∫ [e^{ik|x−y|} / |x − y|] V(y) ψ(k,y) d³y.

If the solution of this integral equation, assumed to be unique, is then used in (2.5), one can show that, as t → +∞, the wave packet consists of the same incoming wave packet, attenuated, plus an outgoing spherical wave, which is the scattered wave produced by the interaction at finite times. Equivalently, one can show that the solution of (2.7a) satisfies the asymptotic condition (the Sommerfeld radiation condition)

(2.7b)   lim_{r→∞} r (∂ψ_s/∂r − ik ψ_s) = 0,   r = |x|,

where ψ_s = ψ(k,x) − e^{ik·x}.


This integral equation has many defects. The first one is that its kernel is singular; it becomes infinite at x = y. However, this singularity is mild, and there are well-known mathematical tools to deal with it. It is easily seen that it suffices to iterate the integral equation (2.7) once in order to get a finite kernel. The second defect is that the inhomogeneous term is not square-integrable. This also can easily be taken care of by the Birman–Schwinger trick, which is to multiply both sides of (2.7) by |V|^{1/2}. Writing sgn(V) = V/|V|, which is a bounded function taking the two values ±1, we then get, for φ = |V|^{1/2} ψ,

(2.8)   φ(k,x) = φ₀(k,x) − (1/4π) ∫ G(k,x,y) sgn(V(y)) φ(k,y) d³y,

where

(2.9)   φ₀(k,x) = |V(x)|^{1/2} e^{ik·x}

and

(2.10)   G(k,x,y) = |V(x)|^{1/2} [e^{ik|x−y|} / |x − y|] |V(y)|^{1/2}.
When V ∈ L¹, φ₀ is in L², whatever (real) k is. If the Rollnik condition (2.3) also holds, the operator with kernel G is easily seen to be a Hilbert–Schmidt operator, in particular a compact operator, for all real k ≥ 0, and this simplifies life greatly: we can then use all the classical theorems about Fredholm integral equations and show the existence and uniqueness of the solution. This is why the Rollnik condition plays an important role in scattering theory. For a detailed study of all these problems, we refer the reader to Simon [5].

After having shown the existence and uniqueness of the solution of (2.7) or (2.8) for all real k, one can then show that the scattering amplitude A(k,k′), defined by the asymptotic form of ψ(k,x) as |x| → ∞:

(2.11)   ψ(k,x) = e^{ik·x} + A(k,k′) e^{ik|x|}/|x| + ⋯ ,

where k′ = k x/|x|, is given by

(2.12)   A(k,k′) = −(1/4π) ∫ e^{−ik′·x} V(x) ψ(k,x) d³x.

In short, one can take the limit |x| → ∞ in (2.7a) under the integral sign. Note that k² = k′² = E.
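As an aside on (2.12): in the first Born approximation one replaces ψ(k,y) under the integral by the incident plane wave e^{ik·y}; for a spherically symmetric potential the angular integration then reduces (2.12) to A_B = −(1/q) ∫_0^∞ r sin(qr) V(r) dr, with q = |k − k′| the momentum transfer. The sketch below checks this against the Yukawa potential V(r) = g e^{−μr}/r, whose Born amplitude has the closed form −g/(q² + μ²); the potential and parameter values are illustrative choices, not taken from the text.

```python
import numpy as np
from scipy.integrate import quad

def born_amplitude(V, q):
    """First Born approximation for a spherically symmetric potential:
    A_B(q) = -(1/q) * integral_0^inf r sin(qr) V(r) dr."""
    val, _ = quad(lambda r: r * np.sin(q * r) * V(r), 0.0, np.inf, limit=200)
    return -val / q

g, mu = 1.5, 2.0                          # illustrative Yukawa parameters
yukawa = lambda r: g * np.exp(-mu * r) / r

q = 3.0
numeric = born_amplitude(yukawa, q)
exact = -g / (q**2 + mu**2)               # closed form for the Yukawa case
print(numeric, exact)
```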
Once the scattering amplitude is found, one shows that the cross-section dσ, i.e., the probability of finding the particles moving outward far from the scattering center, with momentum k′, in the solid angle dΩ around the direction k′, per unit of time and per unit flux of the incoming particles with momentum k, is given by

(2.13)   dσ = |A(k,k′)|² dΩ.

The quantity dσ is what is measured experimentally by putting counters far from the scattering center to measure the flux of the outgoing particles. An important problem particle physicists are confronted with is then to find A from dσ, i.e., to find A from its absolute value. This important problem, which is not yet solved in full generality, lies outside the scope of this chapter; we refer the reader to [7, Chap. 10] for a detailed study of it and references.
So far, we have been dealing with the (unique) solution of (2.7) for real k, with k² = E belonging to the continuous part of the spectrum of H. As for the isolated eigenvalues of H, corresponding to the solutions of the homogeneous equation, i.e., (2.7) without the first term on the right-hand side, they are all situated on E < 0, are finite in number, and each has finite multiplicity. Let us call them E_1 < E_2 < E_3 < ⋯ < E_n < 0.

The general inverse problem, which is finding the potential from the scattering data

{A(k,k′) | k, k′ ∈ ℝ³, k² = k′²} ∪ {E_j | j = 1, 2, …, n},

is a very complicated problem and has not yet been completely solved, although great progress has been made in recent years toward its complete solution. For the state of the art on this general problem, we refer the reader to [9].
The simplest inverse problems, which have been given complete and satisfactory solutions and are the subject of these lectures, are obtained when the potential has spherical symmetry: V(x) = V(|x|). In this case, one can decompose the wave function, the solution of (2.7), in partial waves:
(2.14)   ψ(k,x) = (1/kr) Σ_{ℓ=0}^∞ (2ℓ+1) i^ℓ P_ℓ(cos θ) ψ_ℓ(k,r),   |x| = r,

where θ is the angle between x and k, cos θ = (k·x)/kr, and ψ_ℓ is the solution of the radial Schrödinger equation

(2.15)   [ −d²/dr² + ℓ(ℓ+1)/r² + V(r) ] ψ_ℓ(k,r) = E ψ_ℓ(k,r),

to which one has to add the boundary condition

(2.16)   ψ_ℓ(k,0) = 0.

The asymptotic behavior of ψ_ℓ as r → ∞ is then shown to be

(2.17)   ψ_ℓ(k,r) = e^{iδ_ℓ} sin(kr − ℓπ/2 + δ_ℓ) + o(1).

The quantity δ_ℓ, called the phase shift, is a real function of k for each ℓ. In terms of δ_ℓ, the scattering amplitude can be written

(2.18)   A(k,k′) = (1/k) Σ_{ℓ=0}^∞ (2ℓ+1) e^{iδ_ℓ} sin δ_ℓ P_ℓ(cos θ),

where cos θ = (k·k′)/k², since k² = k′² = E, E being the energy of the incoming or outgoing particles. The partial-wave Hamiltonian for each ℓ,

(2.19)   H_ℓ = −d²/dr² + ℓ(ℓ+1)/r² + V(r),

under suitable conditions on V, is a self-adjoint operator in L²(0,∞), and its spectrum consists of a continuum extending from 0 to +∞ and a few negative eigenvalues E_1 < E_2 < ⋯ < E_n < 0. One suitable condition on the potential is the following:

(2.20)   ∫_0^∞ r |V(r)| dr < ∞,

and we shall use it extensively throughout this chapter. It can easily be shown that this condition implies the Rollnik condition (2.3).
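Formulas (2.13) and (2.18) are straightforward to exercise numerically. The sketch below builds A(k,k′) from a made-up, rapidly decreasing set of phase shifts δ_ℓ (illustrative values, not derived from any particular potential) and checks the optical theorem Im A(θ = 0) = k σ_tot/4π, where σ_tot = (4π/k²) Σ (2ℓ+1) sin² δ_ℓ is the total cross-section obtained by integrating (2.13) over angles.

```python
import numpy as np
from numpy.polynomial.legendre import legval

k = 1.3
deltas = np.array([0.8, 0.4, 0.15, 0.05])     # illustrative phase shifts, l = 0..3
ell = np.arange(len(deltas))

def amplitude(cos_theta):
    """A(k,k') = (1/k) sum_l (2l+1) e^{i d_l} sin(d_l) P_l(cos theta)  -- (2.18)."""
    coeffs = (2 * ell + 1) * np.exp(1j * deltas) * np.sin(deltas) / k
    return legval(cos_theta, coeffs)           # Legendre series sum_l c_l P_l(x)

theta = np.linspace(0.0, np.pi, 181)
dsigma = np.abs(amplitude(np.cos(theta)))**2   # differential cross-section -- (2.13)

sigma_tot = 4 * np.pi / k**2 * np.sum((2 * ell + 1) * np.sin(deltas)**2)
print(amplitude(1.0).imag, k * sigma_tot / (4 * np.pi))   # optical theorem check
```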

Here, two natural inverse problems occur. The first one is at fixed energy. We perform scattering experiments at a given energy E and obtain the scattering amplitude (2.18), where k = √E. Projecting A on the Legendre polynomials, we obtain, in principle, all the phase shifts δ_ℓ. One may then ask the following question: knowing all these numbers δ_ℓ, would it be possible to obtain the potential? This problem is of some interest in nuclear physics and has been completely solved. However, the solution is in general not unique, and there are many ambiguities.
The second inverse problem is at fixed ℓ. We are dealing with (2.15) for a given ℓ. Knowing V(r), we can solve it and get δ_ℓ(k) for all values of k, as well as the energies of the possible bound states (the negative point spectrum of H_ℓ). Conversely, knowing δ_ℓ(k) for all k ∈ [0,∞) and the negative spectrum E_1, …, E_n for this same ℓ, we would like to find the potential. This is the problem we are going to study in this chapter. We shall see how to compute the potential from the scattering data {δ_ℓ(k) | k ∈ [0,∞)} ∪ {E_1, E_2, …, E_n}. This inverse problem on the half-line r ∈ [0,∞), when extended to the full line x ∈ (−∞,∞), has been instrumental in the solution of many partial differential equations of mathematical physics.
4.3. Scattering Theory for Partial Waves
We have to study the radial Schrödinger equation for a given ℓ = 0, 1, 2, …:

(3.1)   −ψ_ℓ″(k,r) + [ V(r) + ℓ(ℓ+1)/r² ] ψ_ℓ(k,r) = E ψ_ℓ(k,r),   r ∈ [0,∞),

(3.2)   ψ_ℓ(k,0) = 0,   E = k²,

and find the asymptotic properties of the solution for large r, (2.17), as well as the properties of the phase shift δ_ℓ(k) as a function of the momentum k.

We shall begin with some general remarks on the differential operator (the Hamiltonian)

(3.3)   H_ℓ = −d²/dr² + ℓ(ℓ+1)/r² + V(r)

together with the Dirichlet boundary condition given in (3.2). It can be shown that if the potential, assumed to be real, satisfies the condition

(3.4)   ∫_0^∞ r |V(r)| dr < ∞,

the operator H_ℓ is self-adjoint in L²(ℝ₊). Therefore, its spectrum is real. It contains a continuous (absolutely continuous) part E ≥ 0 extending to infinity.
Then, depending on the sign of the potential V, it may or may not have a point spectrum on the negative E-axis. Indeed, assume first that the potential is positive everywhere (a repulsive potential, in the terminology of physicists). Now, we know that the operator −d²/dr² acting in L²(0,∞) with Dirichlet boundary condition at r = 0 is a positive operator. Therefore, if the potential is positive everywhere, the operator H_ℓ given by (3.3) with Dirichlet boundary condition at r = 0 is also positive for all ℓ ≥ 0. It follows that H_ℓ cannot have negative eigenvalues. In order to have negative eigenvalues, the total potential

(3.5)   V_tot ≡ V(r) + ℓ(ℓ+1)/r²

must be negative somewhere (attractive, in the language of physicists). If this negative part is strong enough, one may then have some negative eigenvalues. In any case, if (3.4) is satisfied, one can show that the point spectrum is finite, nondegenerate, and bounded from below. The number of these eigenvalues is denoted by n, and we may order them in increasing order: −∞ < E_n < E_{n−1} < ⋯ < E_2 < E_1 < 0.
An upper bound for n is given by the Bargmann bound

(3.6)   n ≤ [1/(2ℓ+1)] ∫_0^∞ r |V⁻(r)| dr,   V⁻(r) = V θ(−V),

where θ is the Heaviside function (V⁻ is the negative part of V) [1], [3].
We may remark in passing that, from (3.6), it is clear that whatever the potential may be, the point spectrum is empty if ℓ is large enough, i.e., when the right-hand side of (3.6) becomes less than one. This is intuitively clear from (3.5).
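As a concrete check of (3.6): for the S-wave square well V(r) = −V₀ for r < a (and 0 beyond), the exact number of ℓ = 0 bound states is the number of odd multiples of π/2 lying below √V₀·a, i.e. n = ⌊√V₀ a/π + 1/2⌋, while the right-hand side of (3.6) is V₀a²/2. A quick comparison (the well parameters are illustrative):

```python
import numpy as np

def n_bound_square_well(V0, a):
    """Exact count of S-wave bound states of the well V(r) = -V0 for r < a:
    one state per odd multiple of pi/2 below sqrt(V0)*a."""
    return int(np.floor(np.sqrt(V0) * a / np.pi + 0.5))

def bargmann_rhs(V0, a, ell=0):
    """Right-hand side of (3.6): (1/(2l+1)) * int r |V^-(r)| dr = V0 a^2/2 for l=0."""
    return V0 * a**2 / (2 * (2 * ell + 1))

cases = [(1.0, 1.0), (30.0, 1.0), (200.0, 1.5)]
results = [(n_bound_square_well(V0, a), bargmann_rhs(V0, a)) for V0, a in cases]
print(results)   # in each pair the exact count never exceeds the Bargmann bound
```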
A better bound when the potential has a large negative part, and therefore when there is a large point spectrum, is given by

(3.7)   2n ≤ [ (2/π) ∫_0^∞ |V(r)|^{1/2} dr + 1 ] [ 1 + (2/π)² ℓ(ℓ+1) ]^{−1/2},

but one has to assume here that the negative part of the potential is an increasing function: V′(r) ≥ 0. Here one sees again that, if ℓ is large enough, the point spectrum becomes empty.
Let us now go back to the positive part of the spectrum. Again, under the condition (3.4), one can show that the radial Schrödinger equation, (3.1) and (3.2), has a unique solution, up to a multiplicative constant factor, for every fixed k = √E ≥ 0. We call this solution, as normalized by (2.17) at infinity,

(3.8)   ψ_ℓ(k,r) = e^{iδ_ℓ(k)} sin(kr − ℓπ/2 + δ_ℓ(k)) + o(1),   r → ∞,

the physical solution, or the scattering solution; δ_ℓ(k) is, of course, a real function of k. But this solution is defined by two boundary conditions, one at r = 0 and one at r = ∞.
Remember that we are dealing with a second-order differential equation. Therefore, a unique solution is defined by two boundary conditions: either by the value of the solution and its first derivative at a given point, or by the values of the solution at two different points. However, it is usually more convenient to define each solution of a linear differential equation by sufficient boundary conditions at one point, and then to relate it to a different solution defined by boundary conditions at another point. As we shall see, this will provide a precise definition of the phase shift δ_ℓ(k) in terms of solutions of (3.1) and will enable us to study its properties as a function of k. In doing so, we shall in fact find many properties of the solutions of the Schrödinger equation as functions of k. These, in turn, will provide us with the key to the solution of the inverse problem.
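To make this concrete, here is a numerical sketch for the simplest case ℓ = 0 (treated in the next subsection): integrate the radial equation with the boundary conditions u(0) = 0, u′(0) = 1 imposed at one point, and read the phase shift off the solution at a point R beyond the range of the potential, where u ∝ sin(kr + δ₀). For a square well the result can be compared with the textbook closed form tan(ka + δ₀) = (k/K) tan(Ka), K = √(k² + V₀). The well parameters below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

V0, a = 1.0, 1.0      # illustrative square well: V(r) = -V0 for r < a, 0 beyond

def delta0(k, R=20.0):
    """S-wave phase shift: integrate u'' = (V - k^2) u, u(0)=0, u'(0)=1,
    in two pieces (to respect the jump at r = a), then use
    tan(kR + delta0) = k u(R) / u'(R) at a point R outside the well."""
    s1 = solve_ivp(lambda r, y: [y[1], (-V0 - k**2) * y[0]],
                   (0.0, a), [0.0, 1.0], rtol=1e-11, atol=1e-13)
    s2 = solve_ivp(lambda r, y: [y[1], -k**2 * y[0]],
                   (a, R), [s1.y[0][-1], s1.y[1][-1]], rtol=1e-11, atol=1e-13)
    u, du = s2.y[0][-1], s2.y[1][-1]
    return np.arctan2(k * u, du) - k * R        # defined modulo pi

k = 0.5
K = np.sqrt(k**2 + V0)
delta_exact = np.arctan(k / K * np.tan(K * a)) - k * a
print(np.tan(delta0(k)), np.tan(delta_exact))   # compare modulo pi
```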
It turns out that the case ℓ ≠ 0 is not much more complicated to study than the case ℓ = 0. Only the algebra is more complicated, but the essential points are very similar in all cases. We shall therefore study first the simple case ℓ = 0, called the S-wave by physicists.
4.3.1. S-Wave. We have to study the differential equation

(3.9)   φ″(k,r) + k² φ(k,r) = V(r) φ(k,r)

on r ∈ [0,∞), together with

(3.10)   φ(k,0) = 0.

Because the differential equation is of second order, we get a unique solution by also specifying the value of the derivative at the origin. Taking this value to be one, we get what is called the regular solution φ:

(3.11)   φ(0) = 0,   φ′(0) = 1.

One way to study (3.9) and (3.11) is to combine them into an integral equation of Volterra type. Consider the integral equation

(3.12)   ϕ(k,r) = sin kr / k + ∫_0^r [ sin k(r−t) / k ] V(t) ϕ(k,t) dt
and its formal derivative



(3.13)   ϕ′(k,r) = cos kr + ∫_0^r cos k(r−t) V(t) ϕ(k,t) dt.

From (3.12) and (3.13) we then get ϕ(0) = 0 and ϕ′(0) = 1. If we differentiate (3.13) again, formally, we get the differential equation (3.9). It follows that, provided we can justify all the assumptions we made and all the formal differentiations we performed, the solution of (3.12) gives us the regular solution φ:

(3.14)   ϕ(k,r) ≡ φ(k,r).

The usual method for studying Volterra integral equations is the well-known method of iteration. One starts with the zeroth-order solution (the solution when V ≡ 0)

(3.15a)   ϕ^{(0)} = sin kr / k

and one defines the higher-order iterations by

(3.15b)   ϕ^{(n)} = sin kr / k + ∫_0^r [ sin k(r−t) / k ] V(t) ϕ^{(n−1)}(k,t) dt,

n = 1, 2, …. If this sequence of functions ϕ^{(n)} converges uniformly on an interval [0,R] for some fixed value of k, then the limit function exists and satisfies the integral equation for this value of k. Moreover, if at each step ϕ^{(n)}(0) = 0 and ϕ^{(n)′}(0) = 1, the same is true for the limit function ϕ. To go from the integral equation to the differential equation, we must justify that ϕ^{(n)} is twice differentiable at each step. If this can be done, we have proved that ϕ is the unique solution of (3.9).
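The iteration just described is easy to carry out numerically. The sketch below discretizes (3.15b) with the trapezoidal rule and iterates until the sequence ϕ^{(n)} stabilizes, then checks the boundary conditions (3.11), i.e. ϕ(0) = 0 and ϕ(r) ≈ r near the origin. The potential is an illustrative choice satisfying (3.4).

```python
import numpy as np
from scipy.integrate import trapezoid

V = lambda r: -3.0 * np.exp(-r)       # illustrative potential with finite int r|V| dr

k, R, N = 2.0, 6.0, 600
r = np.linspace(0.0, R, N + 1)
h = r[1] - r[0]

free = np.sin(k * r) / k              # zeroth iterate (3.15a)
phi = free.copy()
converged = False
for n in range(60):                   # iterates phi^(n) of (3.15b)
    new = free.copy()
    for j in range(1, N + 1):
        t = r[:j + 1]
        integrand = np.sin(k * (r[j] - t)) / k * V(t) * phi[:j + 1]
        new[j] = free[j] + trapezoid(integrand, t)
    if np.max(np.abs(new - phi)) < 1e-12:
        phi, converged = new, True
        break
    phi = new

print(converged, phi[0], phi[1] / h)  # expect True, 0, and a ratio close to 1
```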
Remark. Before moving to the proof of uniform convergence of our sequence ϕ^{(n)}, we must note the following point: for all values of r and 0 ≤ t ≤ r in a finite interval [0,R], the functions sin kr/k and sin k(r−t)/k are holomorphic in the variable k in any compact domain D of the complex k-plane. Therefore, if at each step of the iteration ϕ^{(n)} is holomorphic in k in the same domain, and if the convergence is also uniform in D, then the limit function is also holomorphic in k in D.

Assuming condition (3.4) on the potential, all of these uniform convergences and the validity of the differentiations under the integral signs can be shown globally using the bound, valid for all real finite r and all finite k,

(3.16)   |sin kr / k| ≤ C [ r / (1 + |k| r) ] e^{|Im k| r},
which also gives, for t ≤ r,

(3.17)   |sin k(r−t) / k| ≤ C [ (r−t) / (1 + |k|(r−t)) ] e^{|Im k|(r−t)} ≤ C [ r / (1 + |k| r) ] e^{|Im k| r}.

C is an absolute constant, independent of r and k.
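Bound (3.16) is elementary: |sin z| ≤ min(1, |z|) e^{|Im z|} and min(1, |z|) ≤ 2|z|/(1 + |z|), so the constant C = 2 already works (the explicit value 2 is my choice here; the text only asserts that some absolute constant exists). A quick random sampling confirms it:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 2.0   # one admissible absolute constant

# check |sin(kr)/k| <= C * r/(1+|k|r) * exp(|Im k| r) over random complex k, r > 0
for _ in range(5000):
    k = complex(rng.uniform(-20, 20), rng.uniform(-5, 5))
    r = rng.uniform(1e-3, 10.0)
    lhs = abs(np.sin(k * r) / k)
    rhs = C * r / (1 + abs(k) * r) * np.exp(abs(k.imag) * r)
    assert lhs <= rhs
print("bound (3.16) holds with C = 2 on all samples")
```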


The use of the above bounds and of the formula


x1

f (x1 )dx1

(3.18)


f (x2 )dx2

xn1

1
f (xn )dxn =
n!

f (x)dx
0

leads then to


(3.19)   |ϕ^{(n)}(k,r)| < C [ r / (1 + |k| r) ] e^{|Im k| r} [ 1 + C I(k,r) + (C²/2!) I²(k,r) + ⋯ + (C^n/n!) I^n(k,r) ],

where I is given by

(3.20)   I(k,r) = ∫_0^r [ t |V(t)| / (1 + |k| t) ] dt ≤ ∫_0^r t |V(t)| dt ≤ ∫_0^∞ t |V(t)| dt < ∞.

This establishes all our assertions about the uniform convergence and the analyticity in k of our sequence ϕ^{(n)}(k,r). Also, from

I(k,r) ≤ ∫_0^δ t |V(t)| dt + (1/|k|) ∫_δ^r |V(t)| dt,   valid for any δ ∈ (0,r),

we easily get

(3.21)   lim_{|k|→∞} I(k,r) = 0.

This also gives us the asymptotic properties of our sequence ϕ^{(n)} and of ϕ.
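The identity (3.18) is what produces the factorials in (3.19), and it is easy to confirm numerically by nesting cumulative trapezoidal integrals; the integrand f below is an arbitrary example.

```python
import numpy as np
from math import factorial
from scipy.integrate import cumulative_trapezoid

f = lambda t: np.exp(-t) * (1.0 + t)             # arbitrary example integrand
x = np.linspace(0.0, 2.0, 20001)

F = cumulative_trapezoid(f(x), x, initial=0.0)   # F(x) = int_0^x f
n = 4
inner = np.ones_like(x)
for _ in range(n):                               # nest the integral n times
    inner = cumulative_trapezoid(f(x) * inner, x, initial=0.0)

lhs, rhs = inner[-1], F[-1]**n / factorial(n)    # both sides of (3.18) at x = 2
print(lhs, rhs)
```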
Because the differential equation is invariant under the change k → −k, and the boundary conditions (3.11) are independent of k, we have, for any real or complex k,

ϕ(−k,r) = ϕ(k,r) :

ϕ is an even function of k. Also, because the differential equation is real for real k, the same is true for the solution ϕ. The Schwarz reflection principle then gives

ϕ(k*,r) = [ϕ(k,r)]*,

where * means the complex conjugate. Putting these results together, we end up with the following theorem.
Theorem 4.3.1.
(a) The regular solution φ(k,r), the solution of (3.9) and (3.11), exists and is unique for all values of r ∈ ℝ₊ and all k if the potential satisfies (3.4). For each fixed r, φ is holomorphic in the finite k-plane.
(b) We have the symmetry and reality properties

(3.22)   φ(k,r) = φ(−k,r) = [φ(k*,r)]*.

(c) We have the bounds

(3.23)   |φ(k,r)| ≤ C [ r / (1 + |k| r) ] e^{|Im k| r} e^{C I(k,r)},

which, when used in (3.12), also give

(3.24)   |φ(k,r) − sin kr / k| < C [ r / (1 + |k| r) ] I(k,r) e^{|Im k| r}.

(d) From this last bound, we see that the asymptotic behavior of φ for large |k| is given by

(3.25)   φ(k,r) = (sin kr / k) [1 + o(1)],   |k| → ∞.

This means that, for large E, the dominant part of φ is the free solution ϕ^{(0)} = sin kr/k. In fact, it is easily seen from (3.24) that, when k is real, we have

(3.26)   ∫_{−∞}^∞ |φ(k,r) − sin kr/k| dk = 2 ∫_0^∞ |φ(k,r) − sin kr/k| dk < ∞,

whereas φ itself is L²(−∞,∞) in the k variable for each fixed value of r.

To show (3.26), it suffices to use (3.24) and the definition (3.20) of I(k,r), and then exchange the two integrations.
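The statements (3.24)–(3.25) can be watched directly: integrating (3.9) for increasing k, the quantity sup_r |ϕ(k,r) − sin kr/k| shrinks toward zero. A numerical sketch (the potential is an illustrative choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

V = lambda r: -5.0 * np.exp(-r)      # illustrative potential satisfying (3.4)
R = 10.0
r_eval = np.linspace(1e-6, R, 2000)

def max_deviation(k):
    """sup_r |phi(k,r) - sin(kr)/k| for the regular solution phi(0)=0, phi'(0)=1."""
    sol = solve_ivp(lambda r, y: [y[1], (V(r) - k**2) * y[0]],
                    (0.0, R), [0.0, 1.0], t_eval=r_eval,
                    rtol=1e-10, atol=1e-12, max_step=0.01)
    return np.max(np.abs(sol.y[0] - np.sin(k * sol.t) / k))

devs = [max_deviation(k) for k in (2.0, 8.0, 32.0)]
print(devs)   # decreasing toward zero, as (3.25) predicts
```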

4.3.2. Integral Representation for φ. The integral representation we are now going to derive for φ(k,r) was already discussed in Chapter 3. Here we give a general derivation. The regular solution, being holomorphic in k everywhere in the finite k-plane, is what is called an entire function. Entire functions are classified according to the rate of their growth at infinity. Roughly speaking, if f(z) is an entire function of z, and its growth at infinity is given by

|f(z)| ≈ exp[σ |z|^ρ],   |z| → ∞,

we say that f is of order ρ and type σ (see Chapter 1 for details).

Entire functions of order one are called entire functions of exponential type, or simply exponential functions, and it is obvious that φ(k,r) is an exponential function of type r in the variable k.
Consider now the function h(k,r) = φ(k,r) − sin kr/k. According to (3.26), h ∈ L¹(−∞,∞) in k for each fixed finite value of r. According to the theory of analytic (holomorphic) functions, the integral of h(k,r) e^{ikt} over any closed contour in the k-plane vanishes, whatever (finite) t is. Taking a contour made of the real axis between −K and K, K > 0, and the half-circle of radius K in the upper k-plane, on which k = K e^{iθ}, 0 ≤ θ ≤ π, we get

∫_{−K}^{K} h(k,r) e^{ikt} dk + i ∫_0^π h(K e^{iθ}, r) e^{ikt} K e^{iθ} dθ = 0.

Now, if t > r, the second integral goes to zero as K → ∞, because of (3.24) and the damping factor |e^{ikt}| = exp(−K t sin θ). Therefore,

(3.27)   F(t,r) ≡ ∫_{−∞}^∞ h(k,r) e^{ikt} dk = 0,   t > r,

no matter how close t is to r. Moreover, since the integrand is L¹(−∞,∞), we know that F(t,r) is a continuous function of t. Therefore, F(r,r) = 0 by continuity. The same conclusions can be reached for t < −r if we close the contour of integration by a large half-circle in the lower half of the k-plane. And again, F(−r,r) = 0.
Thus F(t,r), for each real finite value of r, is a continuous function of t and vanishes outside the interval (−r, r). Inverting (3.27), which causes no convergence problem, and remembering that φ(k,r) is a real and even function of k (so that F(t,r) is real and even in t), we get

φ(k,r) − sin kr/k = (1/2π) ∫_{−r}^{r} F(t,r) e^{ikt} dt = (1/2π) ∫_{−r}^{r} F(t,r) e^{−ikt} dt
   = (1/2π) ∫_0^r [F(t,r) + F(−t,r)] cos kt dt = (1/π) ∫_0^r F(t,r) d(sin kt / k)
   = −(1/π) ∫_0^r [∂F(t,r)/∂t] (sin kt / k) dt,

the boundary terms vanishing because F(r,r) = 0 and sin 0 = 0. Writing

(3.28)   K(r,t) ≡ −(1/π) ∂F(t,r)/∂t,

we have the following theorem.


Theorem 4.3.2. The regular solution has the integral representation

(3.29)   φ(k,r) = sin kr/k + ∫_0^r K(r,t) (sin kt / k) dt,

where K(r,t), given by (3.28), is independent of k and is an integrable function of t.
We must show that (3.28) is meaningful, that is, that we can differentiate F with respect to t. This is indeed the case for −r < t < r. The reason is that we have (3.24) and (3.21). Differentiating (3.27) with respect to t under the integral sign, we get an extra factor k. So, for t < r, we still have an oscillating integrand which goes to zero as k → ∞. By the Abel theorem, the integral is (simply) convergent. For t = r, the integral is usually infinite, but what we really need is the integrability of K(r,t) at t = r, that is, of ∂F(t,r)/∂t at t = r, and this too can be shown by a more careful analysis. We refer the reader to the literature for more details; see [7].
As is obvious from (3.29), we could have started from a Fourier sine transform instead of (3.27) to define K directly:

(3.28′)   K(r,t) = (2/π) ∫_0^∞ [φ(k,r) − sin kr/k] (sin kt / k) k² dk
                 = (1/2i)(2/π) ∫_{−∞}^∞ [φ(k,r) − sin kr/k] (e^{ikt} / k) k² dk.

The reason for introducing the factor (2/π)k² will become clear in §4.3.3. We can again close the contour by a large half-circle in the upper k-plane, and we find that K(r,t) = 0 for all t > r. Inverting the Fourier sine transform (3.28′), we get, of course, (3.29). Both (3.28) and (3.28′) are useful, as we shall see later, for generalizing (3.29), and it is easily checked that the K defined by (3.28′) is identical to that of (3.28).

From (3.28) one can show that

(3.29′)   K(r,0) = 0,   (∂K(r,t)/∂t)|_{t=0} = 0.

This will also be shown by a different method below, based on (3.34).


Theorem 4.3.2 is, in essence, the following celebrated Paley–Wiener theorem for entire functions of exponential type which are L² on the real axis.

Theorem 4.3.3 (Paley–Wiener). The entire function f(z) is of exponential type ≤ σ and belongs to L²(−∞,∞) on the real axis if and only if

f(z) = ∫_{−σ}^{σ} e^{izt} g(t) dt,   where g(t) ∈ L²(−σ, σ).

Moreover, if f(z) is given by the above representation and g(t) does not vanish almost everywhere in any neighborhood of σ (or −σ), then f(z) is of order 1 and type σ (that is, it is not of exponential type less than σ).

What we have shown for φ(k,r) is, in fact, nothing more than the first half of the above theorem. The second part of the theorem guarantees that, if the type of the function is σ, then the support of g cannot be smaller than (−σ, σ), at least at one end of the interval. This is indeed the case here for K at t = r. We should note here that h = φ − sin kr/k is analytic in k and goes to zero as k → ∞. Therefore, if it is L¹ at ∞, it is also L² there.
The integral representation (3.29) is the first of the three ingredients for solving the inverse problem. The second is the relation between the kernel K and the potential, which we shall now establish.

So far, (3.29) is purely formal, in the sense that any exponential function of k with the appropriate properties can be written in that form, without any relation to a second-order differential equation. We must now take into account that φ, given by the above representation, indeed satisfies the Schrödinger equation (3.9). Differentiating twice with respect to r under the integral sign and doing two integrations by parts (with respect to t), we find formally

(3.30)   ∂²K/∂r² − ∂²K/∂t² = V(r) K(r,t),   0 ≤ t ≤ r,
and

(3.31)   V(r) = 2 (d/dr) K(r,r) = 2 [ ∂K/∂r + ∂K/∂t ]_{t=r},
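The computation behind (3.30) and (3.31) is worth recording once. Writing s(t) = sin kt/k, so that s″ = −k²s, s(0) = 0, s′(0) = 1, and inserting (3.29) into (3.9), one finds (a standard calculation, sketched here):

```latex
\[
\varphi'' + k^2\varphi
  = \frac{d}{dr}K(r,r)\,s(r) + K(r,r)\,s'(r) + K_r(r,r)\,s(r)
    + \int_0^r K_{rr}(r,t)\,s(t)\,dt + k^2\!\int_0^r K(r,t)\,s(t)\,dt .
\]
Two integrations by parts in $t$, using $s'' = -k^2 s$, give
\[
\int_0^r K_{tt}\,s\,dt
  = K_t(r,r)\,s(r) - K(r,r)\,s'(r) + K(r,0) - k^2\!\int_0^r K\,s\,dt ,
\]
and therefore, since $\tfrac{d}{dr}K(r,r) = K_r(r,r) + K_t(r,r)$,
\[
\varphi'' + k^2\varphi
  = 2\,\frac{d}{dr}K(r,r)\,s(r) + K(r,0)
    + \int_0^r \bigl[K_{rr} - K_{tt}\bigr]\,s(t)\,dt .
\]
Matching this against $V\varphi = V s + \int_0^r V K s\,dt$ yields
(3.31), (3.30), and the condition $K(r,0) = 0$.
```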

provided K(r,0) = 0 and (∂K/∂t)|_{t=0} = 0. The relation (3.31) is the second ingredient for the solution of the inverse problem. Equation (3.30) is a partial differential equation of hyperbolic type, which so far we consider in 0 ≤ t ≤ r. However, if K(r,0) = 0, we can extend the definition of K to −r ≤ t ≤ 0 simply by writing

(3.32)   K(r,−t) = −K(r,t).

In this way, K becomes an odd function of t in −r ≤ t ≤ r. Now, the two characteristic curves of our hyperbolic equation are just the lines t = ±r. Since K is odd in t, we have K(r,−r) = −K(r,r), r ≥ 0. But this last quantity is, by (3.31), just the integral of V. If V is integrable at the origin, which may not be the case in general if we assume only (3.4), then

(3.33)   K(r,r) = −K(r,−r) = (1/2) ∫_0^r V(t) dt.
If we have only (3.4), we can keep the differential version (3.31).

In any case, we now have a hyperbolic equation in the fundamental domain −r ≤ t ≤ r bounded by the two characteristics t = ±r, and we are given the value of the function on these two characteristics, or its total derivative with respect to r. In both cases, it is known that the hyperbolic equation has a unique solution under our conditions on these boundary values. Similar conclusions were also reached in Chapter 3.
Remark. Formula (3.29) should not be confused with (3.12). The former is a (Fourier) integral representation; the latter is a Volterra integral equation. Given the potential V, the integral equation (3.12) provides us with the solution φ of the differential equation (3.9). Alternatively, given V, we can solve the partial differential equation (3.30), which is now independent of E (or k), and from its solution K obtain φ from (3.29).

An equivalent way of doing the same thing is to use the integral equation (3.12) instead of the differential equation (3.9), since the two are equivalent. Substituting (3.29) into (3.12) and taking the Fourier sine transform, we get the integral equation in t, for each fixed r,

1
(3.34) K(r, t) =
2

r+t
2

V (s)ds +
rt
2

r+t
2

ds
rt
2

rt
2

V (s + u) K(s + u, s u) du,

which can also be used to prove the existence and uniqueness of K. Setting
t = r, we get, as expected, (3.33) i.e., the integral version of (3.31). Setting
t = 0 in (3.34), we check that K(r, 0) = 0. We do the same for (tK/t)|t=0 =
0. Dierentiating twice with respect to r and t, we nd, as expected, (3.30).


To show the existence and uniqueness of the solution of (3.34), one can
again use the iteration method, and everything goes through without any
problem. One can then check all the assumptions made on K and its derivatives
and get the bound

(3.35)  |K(r, t)| < (1/2) ∫_{(r−t)/2}^{(r+t)/2} |V(s)| ds · exp[ ∫_0^{(r+t)/2} u |V(u)| du ].
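Since the argument here is purely analytic, a small numerical sketch may help. In the following Python fragment, everything (the weak potential εe^{−r}, the momentum k = 1, and the grid sizes) is an assumption made for illustration: the first term of (3.34) is taken as an approximate kernel K, and the representation (3.29) built from it is checked against the regular solution obtained by integrating (3.9) directly; the two agree to second order in the potential strength.

```python
import math

# Assumed weak test potential (not from the text): V(r) = EPS * exp(-r).
EPS = 0.01

def V(r):
    return EPS * math.exp(-r)

def K0(r, t):
    # First term of (3.34): (1/2) * integral of V over [(r-t)/2, (r+t)/2],
    # here in closed form; it approximates K to first order in EPS.
    return 0.5 * EPS * (math.exp(-(r - t) / 2) - math.exp(-(r + t) / 2))

def phi_rep(k, r, n=2000):
    # Representation (3.29): sin(kr)/k + int_0^r K(r,t) sin(kt)/k dt (trapezoid).
    h = r / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * K0(r, t) * math.sin(k * t) / k
    return math.sin(k * r) / k + h * s

def phi_ode(k, r, n=2000):
    # Direct RK4 integration of -phi'' + V*phi = k^2*phi, phi(0)=0, phi'(0)=1.
    h = r / n
    y, yp, x = 0.0, 1.0, 0.0
    rhs = lambda x, y: (V(x) - k * k) * y
    for _ in range(n):
        a1, b1 = yp, rhs(x, y)
        a2, b2 = yp + h * b1 / 2, rhs(x + h / 2, y + h * a1 / 2)
        a3, b3 = yp + h * b2 / 2, rhs(x + h / 2, y + h * a2 / 2)
        a4, b4 = yp + h * b3, rhs(x + h, y + h * a3)
        y += h * (a1 + 2 * a2 + 2 * a3 + a4) / 6
        yp += h * (b1 + 2 * b2 + 2 * b3 + b4) / 6
        x += h
    return y
```

Both evaluations of φ(k = 1, r = 2) agree to O(EPS²), while each differs from the free solution sin(kr)/k at order EPS, so the check is not vacuous.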

4.3.3. Completeness Relation. We are concerned here with the eigenfunction expansion of functions in the Hilbert space L²(0, ∞). It is known that
self-adjoint operators in L² have sufficiently many eigenfunctions to form a
complete basis, either in the usual sense of a countable set of eigenfunctions,
or in the generalized sense, as in Fourier exponential, sine, or cosine transforms.
Let us begin with the simple case where the potential is absent. We then have
just the differential operator −d²/dr² acting in L²(0, ∞), together with the
boundary condition φ(0) = 0. The generalized eigenfunctions are sin kr, k ∈
[0, ∞), and any L² function can be expanded in terms of these. More precisely,
given a function f(r) ∈ L²(0, ∞), if we define its Fourier sine transform by

(3.36)  f̃(k) = (2/π)^{1/2} ∫_0^∞ f(r) sin kr dr,

then

(3.37)  f(r) = (2/π)^{1/2} ∫_0^∞ f̃(k) sin kr dk

in the L² sense, i.e., convergence in the mean,

(3.38)  ∫_0^∞ | f(r) − (2/π)^{1/2} ∫_0^K f̃(k) sin kr dk |² dr → 0,  K → ∞,

and one has the Parseval equality

(3.39)  ∫_0^∞ |f|² dr = ∫_0^∞ |f̃|² dk.
Putting (3.36) in the right-hand side of (3.37) and exchanging the order of
integrations, one gets symbolically the completeness of the set {sin kr}:

(3.40)  (2/π) ∫_0^∞ sin kr sin kt dk = δ(r − t),

where δ is the Dirac delta function. This relation is of course formal, but
can be given a precise meaning through the usual machinery of the theory of
distributions or integral transforms.
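The Parseval equality (3.39) can be checked numerically on a concrete function. In this sketch, f(r) = e^{−r} is an assumed example; its sine transform is elementary, and both sides of (3.39) equal 1/2 up to truncation of the integrals.

```python
import math

def f(r):
    # Assumed example function for the check: f(r) = exp(-r) on (0, inf).
    return math.exp(-r)

def f_tilde(k):
    # sqrt(2/pi) * int_0^inf exp(-r) sin(kr) dr = sqrt(2/pi) * k/(1+k^2).
    return math.sqrt(2.0 / math.pi) * k / (1.0 + k * k)

def norm_sq(g, upper, n=200000):
    # Trapezoid approximation of int_0^upper g(x)^2 dx.
    h = upper / n
    s = 0.5 * (g(0.0) ** 2 + g(upper) ** 2)
    for i in range(1, n):
        s += g(i * h) ** 2
    return h * s
```

Note that |f̃(k)|² decays only like 1/k², so the k-side integral needs a large cutoff before it settles near the exact value 1/2.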
In order to make everything more precise, we can use the complete boundary conditions as given by (3.11):

(3.41)  φ(0) = 0,  φ′(0) = 1.

Then the eigenfunctions in the free case (V ≡ 0) are given by (sin kr/k), and
if we define

(3.36′)  f̃(k) = ∫_0^∞ f(r) (sin kr/k) dr,

then

(3.37′)  f(r) = (2/π) ∫_0^∞ f̃(k) (sin kr/k) k² dk,

and the completeness of the set of generalized eigenfunctions {sin kr/k} becomes

(3.40′)  (2/π) ∫_0^∞ (sin kr/k)(sin kt/k) k² dk = δ(r − t).

This is also written, more appropriately, as

(3.40″)  ∫_0^∞ (sin √E r/√E)(sin √E t/√E) (1/π) √E dE = δ(r − t),

since the generalized eigenvalue is E = k². The quantity

(3.41a)  ρ_0(E) = (1/π) ∫_0^E √E′ dE′

is often called the spectral measure (in our case, the free spectral measure), and
dρ_0/dE the spectral density. The Parseval equality then becomes

(3.41b)  ∫_0^∞ |f(r)|² dr = ∫_0^∞ |f̃(√E)|² dρ_0(E).

When V is present in the differential operator, we saw that we may have,
in addition to the continuous spectrum E ≥ 0 as in the free case, a finite
negative point spectrum corresponding to true eigenvalues E₁, E₂, …, E_n. If
we call the corresponding eigenfunctions φ_j(r) ≡ φ(iγ_j, r), j = 1, 2, …, n, and
define the normalization constants C_j by

(3.42)  C_j ∫_0^∞ φ_j²(r) dr = 1,
it can then be shown that the spectral measure ρ(E) (cf. 1.6 in Chapter 1) is
given by

(3.43a)  ρ(E) = ∫^E dρ(E′),

(3.43b)  dρ(E)/dE = (1/π) √E/|F(√E)|²,  E = k² ≥ 0;
         dρ(E)/dE = Σ_{j=1}^n C_j δ(E − E_j),  E < 0,

where F is the Jost function, to be defined later, and for E < 0, √E is defined
to be i√(−E), so that φ(√E_j, r) = φ(iγ_j, r) = φ_j(r). Setting V = 0, we should
find, of course, F ≡ 1 and no bound states, and we recover ρ_0(E). In any case,
the completeness relation (3.40″) is now replaced by

(3.44)  ∫ φ(√E, r) φ(√E, t) dρ(E) = δ(r − t).

Given any function f(r) ∈ L²(0, ∞), we have the Parseval equality

(3.45)  ∫_0^∞ |f(r)|² dr = ∫ |f̃(E)|² dρ(E),

where f̃(E) is the integral transform of f(r):

(3.46)  f̃(E) = ∫_0^∞ f(r) φ(√E, r) dr,  E ≥ 0;
        f̃(E_j) = ∫_0^∞ f(r) φ_j(r) dr,  j = 1, …, n,  E < 0.

We recover f(r) from its components f̃(E) by

(3.47)  f(r) = ∫ f̃(E) φ(√E, r) dρ(E),


which is the generalization of (3.37). What we have in (3.44), (3.45), and
(3.47) is simply the mathematical statement that our self-adjoint differential
operator has enough (generalized) eigenfunctions to span the Hilbert space
L²(0, ∞).

The completeness relations (3.40″) and (3.44) are the third (and last)
ingredient for solving the inverse problem. These relations, together with
(3.29), lead to the Gelfand–Levitan integral equation that we shall establish
later. But before doing this, we must define the Jost function, which enters
into (3.44), and study its properties.

We close this section with the following remark: while in ordinary Fourier
transforms r and k play a symmetrical role, this is no longer the case for the
integral transform defined by (3.46) and (3.47). For the Fourier sine transform,
if we exchange the roles of r and k, we can write (3.40) as


(3.48a)  ∫_0^∞ (sin kr/k)(sin k′r/k′) dr = ∫_0^∞ φ^(0)(√E, r) φ^(0)(√E′, r) dr = (π/2E) δ(√E′ − √E) = δ(E′ − E)/(dρ_0/dE).
In the general case, this becomes, again for E > 0, E′ > 0,

(3.48b)  ∫_0^∞ φ(√E, r) φ(√E′, r) dr = δ(E′ − E)/(dρ/dE).

The above relations can be interpreted as the generalized orthogonality
relations between the (generalized) eigenfunctions. For the true eigenfunctions
(point spectrum), we have, of course,

(3.48c)  ∫_0^∞ φ_j(r) φ_l(r) dr = C_j^{−1} δ_{jl},

and

(3.48d)  ∫_0^∞ φ(√E, r) φ_j(r) dr = 0,  E ≥ 0.

When we introduce the physical solution (3.74) later, we will see that (3.48b)
can be written similarly to (3.40).
4.3.4. The Jost Solution. The Jost solution f(k, r) of the Schrödinger
equation (3.9) is defined by the boundary conditions at infinity:

(3.48)  lim_{r→∞} e^{−ikr} f(k, r) = 1,

(3.49)  lim_{r→∞} e^{−ikr} f′(k, r) = ik.

Again, we can try to combine the differential equation with these boundary
conditions. The procedure is the same as for the regular solution φ. It is easily
verified, formally, that we get

(3.50)  f(k, r) = e^{ikr} − ∫_r^∞ (sin k(r − t)/k) V(t) f(k, t) dt

and

(3.51)  f′(k, r) = ik e^{ikr} − ∫_r^∞ cos k(r − t) V(t) f(k, t) dt.

The boundary conditions are satisfied if the integrals are convergent.


To solve the Volterra integral equation (3.50), we again use the iteration
method, starting from

(3.52)  f^(0)(k, r) = e^{ikr}

and writing

(3.53)  f^(n)(k, r) = e^{ikr} − ∫_r^∞ (sin k(r − t)/k) V(t) f^(n−1)(k, t) dt.

Using the bounds

(3.54)  |f^(0)(k, r)| = e^{−Im k·r}

and

(3.55)  |sin k(r − t)/k| < C [(t − r)/(1 + |k|(t − r))] e^{|Im k|(t−r)},

valid for t ≥ r, we find, for Im k ≥ 0,

(3.56)  |f^(n)(k, r)| ≤ e^{−Im k·r} [ 1 + C J(k, r) + (C²/2!) J²(k, r) + ⋯ + (Cⁿ/n!) Jⁿ(k, r) ],

where

(3.57a)  J(k, r) = ∫_r^∞ [t/(1 + |k|t)] |V(t)| dt ≤ ∫_0^∞ t |V(t)| dt < ∞.

Obviously, we also have

(3.57b)  J ≤ (1/|k|) ∫_0^∞ |V(t)| dt → 0  as |k| → ∞.

Again, as for the regular solution φ, we see that, under condition (3.4)
on the potential, our sequence of functions {f^(n)} converges uniformly for all
finite r and all finite k in Im k ≥ 0. This uniform convergence entails that the
limit function f is the unique solution of the integral equation. Obviously, one
has the bound

(3.58)  |f(k, r)| ≤ e^{−Im k·r} e^{C J(k, r)} ≤ C′ e^{−Im k·r},

and a similar bound for the derivative f′(k, r), where C′ is another absolute
constant independent of r and k. It is then easily verified that f satisfies the
boundary conditions (3.48) and (3.49). Moreover, using the above bound in the
right-hand side of (3.50), we find the asymptotic behavior of f in the upper
k-plane. For each fixed r,

(3.59)  f(k, r) = e^{ikr} [1 + o(1)],  |k| → ∞,  Im k ≥ 0.

Since each f^(n), starting from f^(0), is holomorphic in k in Im k > 0 for
each fixed value of r, and the convergence is uniform, the Jost solution is
also holomorphic in Im k > 0, and continuous in Im k ≥ 0. Also, since
f^(0)(−k*, r) = [f^(0)(k, r)]*, the same is true for f^(n), and therefore for f. Note
that if Im k ≥ 0, we also have Im(−k*) ≥ 0, and both k and −k* are in
the domain of definition of f. All this can be summarized in the following
theorem.

Theorem 4.3.4.
(a) Under the condition (3.4), the Jost solution defined by the boundary conditions (3.48) and (3.49) exists and is unique.
(b) It is analytic in Im k > 0 and continuous in Im k ≥ 0, and we have
the bound (3.58).
(c) In Im k ≥ 0, we have

(3.60)  f(−k*, r) = [f(k, r)]*.

(d) In the closed upper half-plane Im k ≥ 0, and for each fixed finite r, we have
the asymptotic behaviors

(3.61)  f(k, r) = e^{ikr} [1 + o(1)],  |k| → ∞,

(3.62)  f′(k, r) = ik e^{ikr} [1 + o(1)],  |k| → ∞.

Remark. The integral in (3.50) extends to infinity. Therefore, contrary to
the case of φ, we must restrict k to Im k ≥ 0. Otherwise, in the sequence of
successive iterations f^(n), we would get integrals of the form
∫_r^∞ |V(t)| exp[(−Im k + |Im k|)(t − r)] dt, which are generally divergent for
Im k < 0. In the upper k-plane, the exponential disappears, and we get (3.56) and (3.57).

Now that we have precise solutions φ and f of the differential equation,
corresponding to boundary conditions at two different points, we can try to see
the relations which may exist between them. This leads us to the definition
of the Jost function, which plays a crucial role in both the direct and inverse
problems.
4.3.5. Jost Function. When k is real and ≠ 0, the two solutions f₊ =
f(k, r) and f₋ = f(−k, r) are independent of each other. Indeed, if we calculate
their Wronskian, which is independent of r and can therefore be calculated at
r = ∞, using (3.48) and (3.49), we find

(3.63)  W[f₋, f₊] ≡ f₋ f′₊ − f′₋ f₊ = 2ik ≠ 0.

Therefore, any other solution of the differential equation can be written as a
linear combination, with coefficients independent of r, of f₊ and f₋, in the same
way as cosine and sine are written as linear combinations of e^{ikr} and e^{−ikr}.
For instance, the regular solution can be written as

φ(k, r) = (1/2ik) [G(k) f₊ − F(k) f₋],

where we have introduced 2ik for convenience. Taking into account that φ is
an even function of k, we find G(k) = F(−k). So, finally, for all real k,

(3.64)  φ(k, r) = (1/2ik) [F(−k) f(k, r) − F(k) f(−k, r)].

The function F(k) is called the Jost function. It is given, by definition, by
the Wronskian

(3.65)  F(k) = W[f(k, r), φ(k, r)].

This definition of F(k) makes sense for all k in Im k ≥ 0. From the symmetry
properties of φ and f, (3.22) and (3.60), it follows that

(3.66)  F(−k*) = [F(k)]*,  Im k ≥ 0.

Since φ is holomorphic for all finite k, and f is holomorphic in Im k > 0 and
continuous in Im k ≥ 0, it follows that the Jost function is holomorphic in
Im k > 0 and continuous in Im k ≥ 0. When k is real and positive, we can write

(3.67)  F(k) = |F(k)| e^{−iδ(k)},  k ≥ 0,

where δ(k), defined modulo 2π, is minus the phase of F. We will soon see how to
define it in a precise way.

From the bounds for f and f′, it is easily verified that r f′(k, r) → 0 as
r → 0. This, when used in (3.65), gives

(3.68)  F(k) = f(k, 0).

Another expression for the Jost function is obtained by using (3.12) in (3.65)
and letting r → ∞. From the boundary conditions (3.48) and (3.49) for f, it
follows that

(3.69)  F(k) = 1 + ∫_0^∞ e^{ikr} V(r) φ(k, r) dr.

This integral representation is quite useful. If we use the bound (3.23) for φ
in (3.69), we immediately find the asymptotic behavior of F(k):

(3.70)  F(k) = 1 + o(1),  |k| → ∞,  Im k ≥ 0.

We can therefore choose, accordingly, the determination

(3.71)  δ(∞) = 0

and then find, by continuity, δ(k) for finite values of k ≥ 0. Because of (3.66),
we also have

(3.72)  δ(−k) = −δ(k),  k ≥ 0.
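The two characterizations of the Jost function, F(k) = f(k, 0) from (3.68) and the representation (3.69), can be cross-checked numerically. In the sketch below, the sample potential −1.5 e^{−2r}, the cutoff R, and the step sizes are all assumptions; f(k, r) is obtained by integrating the Schrödinger equation inward from the free asymptotics (3.48) and (3.49).

```python
import math, cmath

def V(r):
    # Assumed sample potential for the sketch.
    return -1.5 * math.exp(-2.0 * r)

def rk4_step(x, y, yp, h, k):
    # One RK4 step for y'' = (V - k^2) y; works for real or complex y.
    g = lambda x, y: (V(x) - k * k) * y
    a1, b1 = yp, g(x, y)
    a2, b2 = yp + h * b1 / 2, g(x + h / 2, y + h * a1 / 2)
    a3, b3 = yp + h * b2 / 2, g(x + h / 2, y + h * a2 / 2)
    a4, b4 = yp + h * b3, g(x + h, y + h * a3)
    return (y + h * (a1 + 2 * a2 + 2 * a3 + a4) / 6,
            yp + h * (b1 + 2 * b2 + 2 * b3 + b4) / 6)

def F_from_jost(k, R=12.0, n=24000):
    # Integrate inward from f ~ e^{ikR}, f' ~ ik e^{ikR}; then (3.68): F(k) = f(k, 0).
    h = -R / n
    y, yp, x = cmath.exp(1j * k * R), 1j * k * cmath.exp(1j * k * R), R
    for _ in range(n):
        y, yp = rk4_step(x, y, yp, h, k)
        x += h
    return y

def F_from_representation(k, R=12.0, n=24000):
    # (3.69): F(k) = 1 + int_0^inf e^{ikr} V(r) phi(k, r) dr,
    # with phi (phi(0)=0, phi'(0)=1) integrated alongside the quadrature.
    h = R / n
    y, yp, x = 0.0, 1.0, 0.0
    acc = 0.0  # integrand vanishes at r = 0 since phi(k, 0) = 0
    for i in range(n):
        y, yp = rk4_step(x, y, yp, h, k)
        x += h
        w = 0.5 if i == n - 1 else 1.0
        acc += w * cmath.exp(1j * k * x) * V(x) * y
    return 1.0 + h * acc
```

For real k the two numbers should agree to quadrature accuracy, and both differ appreciably from the free value F = 1.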

Another property of F(k) is that it never vanishes on the real axis. Indeed,
if F(k₀) = 0, k₀ > 0, then, by (3.66), F(−k₀) = 0. It then follows from
(3.64) that φ(k₀, r) = 0 for all r, and this contradicts the boundary condition
φ′(k₀, 0) = 1, because φ and φ′ are both continuous functions of r.
4.3.6. Zeros of F(k) in Im k > 0. Suppose that F(k₀) = 0, Im k₀ > 0.
This means, according to (3.65), that φ(k₀, r) and f(k₀, r) are not independent
solutions. Therefore, φ(k₀, r) = A f(k₀, r), where A is a constant ≠ 0. Since,
for r → ∞, the right-hand side behaves as e^{−Im k₀·r}, the same is true for φ.
It follows that φ(k₀, r) ∈ L²(0, ∞). Since φ satisfies the differential equation
and the boundary conditions, it follows that E₀ = k₀² is an eigenvalue. However,
we saw that the differential operator together with the boundary condition is
self-adjoint. This means that E₀ = k₀² is real and negative and, therefore, that
Re k₀ = 0, Im k₀ > 0. E₀ belongs to the point spectrum of the Hamiltonian,
{E_j, j = 1, …, n}, and φ(k₀, r) is its eigenfunction.

Conversely, suppose that E_j = −γ_j² is one of the eigenvalues. Then one
must have F(iγ_j) = 0. Indeed, if this were not the case, it would mean that
φ_j ≡ φ(iγ_j, r) and f_j ≡ f(iγ_j, r) are independent of each other. By definition,
φ_j is the eigenfunction corresponding to the eigenvalue E_j because we saw
that, for all finite values of E, the solution is unique. Therefore, φ_j is square
integrable, which means that φ_j(∞) = 0. From the differential equation,
it follows that φ″_j(∞) = 0. Now since both φ_j and φ″_j are continuous and
φ_j(∞) = φ″_j(∞) = 0, it follows that φ′_j(∞) = 0. If we now use these results
in (3.65) for r → ∞, we find that F(iγ_j) = 0. There is therefore a one-to-one
correspondence between the eigenvalues E_j = −γ_j² and the zeros of the Jost
function on the positive imaginary axis. It can also be shown that all these
zeros are simple, and that the spectrum is nondegenerate. We therefore have
the following theorem.
the following theorem.
Theorem 4.3.5.
(a) The Jost function F(k), defined by (3.65), (3.68), or (3.69), is holomorphic in Im k > 0 and continuous in Im k ≥ 0.
(b) We have the symmetry property (3.66) and the asymptotic property
(3.70).
(c) The phase of the Jost function on the real axis can be defined by continuity for all real k from (3.71).
(d) There is a one-to-one correspondence between the eigenvalues E_j =
(iγ_j)² and the zeros k_j of the Jost function on the positive imaginary axis:
k_j = iγ_j, γ_j > 0, j = 1, 2, …, n.
(e) There are no other zeros of F in Im k ≥ 0. In particular, F cannot
vanish for real values of k ≠ 0.

Remark. It may happen that one of the zeros of the Jost function is at
k = 0. This does not correspond to a true bound state with a square-integrable
eigenfunction, but to a resonance at zero energy.
4.3.7. The Physical Solution and the Phase Shift. We now consider
the asymptotic behavior of the regular solution φ(k, r) for k real positive and
r → ∞. Using the formula (3.64) together with (3.48) and (3.67), we find

(3.73)  φ(k, r) = (|F(k)|/k) sin(kr + δ(k)) + o(1).

Now both the physical solution ψ given by (2.17) and φ vanish at r = 0.
They are therefore proportional to each other, and we have, for Im k ≥ 0,

(3.74)  ψ(k, r) = (k/F(k)) φ(k, r).

This means that the phase shift, defined for k real as the phase of ψ,
is identical to δ(k), which is minus the phase of the Jost function. In (3.74),
although φ is holomorphic in all of the finite k-plane, F(k) is analytic only in
Im k > 0. Therefore, in general, the relation makes sense only in Im k ≥ 0. In
this half-plane, the poles of ψ, i.e., the zeros of F(k), correspond to the bound
states and vice versa.
Remark. All these properties show the importance of the Jost function
F(k). From this single function, we can get the phase shift, given by minus the
phase of F for k real positive (positive energies), and the eigenvalues (negative-energy bound states), given by its zeros on the positive imaginary axis.
It is easily shown that the physical solution satisfies the integral equation

(3.75)  ψ(k, r) = sin kr − ∫_0^∞ (sin k r_< e^{i k r_>}/k) V(r′) ψ(k, r′) dr′,

where r_< = min(r, r′) and r_> = max(r, r′). Contrary to (3.12), this is now a
Fredholm integral equation, and therefore we have the Fredholm alternative.
The eigenfunctions ψ_j(r) are solutions of the homogeneous equation. It can be
shown that the Jost function is the Fredholm determinant of the above integral
equation. To be more precise, if we introduce a parameter λ in front of the
integral operator in (3.75), and write it symbolically as ψ = ψ₀ + λKψ, we
know that if the kernel K is L², the solution of (3.75) is given in general by
ψ = ψ₀ + N(λ)ψ₀/D(λ), where N is an integral operator acting on ψ₀ and
D is a number. N and D are called Fredholm determinants (numerator and
denominator, respectively) and are both entire functions of λ. If λ₀ is a zero
of D, D(λ₀) = 0, then the homogeneous equation has solutions and 1/λ₀ is
an eigenvalue of the kernel K. For (3.75) with λ in front of the integral, one
can show that F(λ; k) ≡ D(λ; k). It is therefore quite natural that the zeros
of the Jost function should give the eigenvalues. We conclude this section by
rewriting the completeness relation (3.44) in terms of the physical solution ψ.
From (3.75) it can be shown that if k_j = iγ_j corresponds to the eigenvalue
E_j = −γ_j², then ψ_j(r) is normalized at infinity by

lim_{r→∞} e^{γ_j r} ψ_j(r) = 1.

Defining C_j by

C_j ∫_0^∞ ψ_j²(r) dr = 1,

we get

(2/π) ∫_0^∞ ψ(k, r) ψ(k, t) dk + Σ_j C_j ψ_j(r) ψ_j(t) = δ(r − t),

which looks more like the familiar completeness relation (3.40) for Fourier
transforms. Similarly, the generalized orthogonality (3.48b) now becomes like
the relation for the sine transform:

(2/π) ∫_0^∞ ψ(k, r) ψ(k′, r) dr = δ(k′ − k).
4.3.8. The Levinson Theorem. We know that the Jost function F(k) is
holomorphic in the upper half-plane Im k > 0 and continuous in Im k ≥ 0.
Suppose we have n eigenvalues (bound states) corresponding to the zeros k_j =
iγ_j, 0 < γ₁ < γ₂ < ⋯ < γ_n, and let us make a vertical cut joining k_n = iγ_n to
the origin. In this half-plane with the cut, Log F(k) is also holomorphic, and
Log F(∞) = 0.

Because the phase shift is minus Im Log F, we get, by following the closed
contour shown in Figure 1, the variation shown. Therefore, remembering that
the zeros are all simple, we get

δ(+∞) − δ(+0) + 2nπ + [δ(−0) − δ(−∞)] = 0.

Figure 1. Variation Δ Log F = 0.

Because the phase shift is an odd function of k, we finally get

(3.76)  δ(+0) − δ(∞) = nπ.

This is the Levinson theorem, which relates the number of bound states to
δ(+0) − δ(∞). Choosing the determination (3.71), we have

(3.77)  δ(0) = nπ.

Remark. It may happen that F(0) = 0, i.e., one of the zeros of F is just
at the origin. It can be shown that this zero is also simple. In this case, we get

(3.78)  δ(0) = (n + 1/2)π,

n being the number of true bound states with negative energies, k = 0
corresponding to a resonance at zero energy.
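Levinson's theorem can be illustrated on the s-wave square well, a standard solvable case. In the sketch below, the depth V₀ = 4 and range a = 1 are assumptions chosen so that exactly one bound state exists; the phase shift follows from matching logarithmic derivatives at r = a, and the branch is fixed by continuity downward from δ(∞) = 0, as in (3.71).

```python
import math

V0, a = 4.0, 1.0  # assumed square well V(r) = -V0 for r < a; exactly one bound state

def delta_raw(k):
    # Match u = sin(kappa*r) (inside) to sin(kr + delta) (outside) at r = a:
    # k*cot(ka + delta) = kappa*cot(kappa*a), which fixes delta modulo pi.
    kappa = math.sqrt(k * k + V0)
    return math.atan2(k * math.sin(kappa * a), kappa * math.cos(kappa * a)) - k * a

def phase_shift_curve(k_hi=30.0, k_lo=0.01, n=20000):
    # Fix the branch by continuity, marching down from delta(k_hi) ~ 0.
    ks = [k_hi + (k_lo - k_hi) * i / n for i in range(n + 1)]
    r0 = delta_raw(ks[0])
    d = r0 - round(r0 / math.pi) * math.pi  # fold first value into [-pi/2, pi/2]
    curve = [(ks[0], d)]
    for k in ks[1:]:
        r = delta_raw(k)
        d = r + round((d - r) / math.pi) * math.pi  # nearest branch, mod pi
        curve.append((k, d))
    return curve
```

The resulting δ(k) climbs from approximately 0 at large k to approximately π as k → 0, in accordance with (3.77) for n = 1.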
4.3.9. Some Integral Representations. We have already seen the integral
representation (3.29) for the regular solution φ. It is a consequence of the
Paley–Wiener theorem and its variants for entire functions. For other functions
of interest (the Jost solution, the Jost function, the phase shift, the S-matrix,
etc.), some of them analytic in the upper half-plane and some defined in general
only for real values of k, it is possible to obtain similar integral representations.

The first integral representation is for the Jost solution, and we will see
that from it we can deduce all the other integral representations. From the
analyticity of f(k, r) in the upper k-plane and its asymptotic behavior (3.59),
it is clear that the integral of [f(k, r) − e^{ikr}] e^{−ikt}, t < r, over a contour
made of the real axis and a large semicircle in the upper half-plane vanishes.
Because the integral over this large semicircle is itself zero for t < r, it follows
that

(3.79)  2π A(r, t) ≡ ∫_{−∞}^{∞} [f(k, r) − e^{ikr}] e^{−ikt} dk = 0,  t < r.

Now, from (3.50), (3.55), and (3.57b), it is easily seen that, for k real and
large, we have

|f(k, r) − e^{ikr}| < (C/|k|) ∫_r^∞ t |V(t)| dt.

Therefore, the integrand in (3.79) is L²(−∞, ∞). It follows that we can invert
(3.79):

(3.80)  f(k, r) = e^{ikr} + ∫_r^∞ A(r, t) e^{ikt} dt,

where A(r, t) ∈ L²(r, ∞) in the variable t. This is, in essence, part of the
following general theorem.
Theorem 4.3.6 (Titchmarsh). A necessary and sufficient condition for
F(x) ∈ L²(−∞, ∞) to be the limit as y → 0 of a function F(z = x + iy)
analytic in y > 0 such that

∫ |F(x + iy)|² dx = O(e^{−2τy})

is that

∫ F(x) e^{−itx} dx = 0

for all values of t < τ.

The above integral representation (3.80) is the basis of the Marchenko
method for solving the inverse problem, as we shall see later. For the time
being, let us set r = 0 in (3.80). We then get, for the Jost function, the
integral representation

(3.81)  F(k) = f(k, 0) = 1 + ∫_0^∞ A(t) e^{ikt} dt,

where A(t) ≡ A(0, t) ∈ L²(0, ∞). However, we can do better concerning the
behavior of A(t) as t → ∞ and replace L² by L¹. For this purpose, we use
the representation (3.69) for the Jost function, together with (3.29). We then
obtain

F(k) = 1 + ∫_0^∞ e^{ikr} V(r) (sin kr/k) dr + ∫_0^∞ e^{ikr} V(r) [ ∫_0^r K(r, t) (sin kt/k) dt ] dr.

Let us look now at the first integral of this formula. It can be written as

∫_0^∞ V(r) dr (1/2) ∫_0^{2r} e^{ikt} dt = (1/2) ∫_0^∞ e^{ikt} [ ∫_{t/2}^∞ V(r) dr ] dt.

Now the integrand satisfies

∫_0^∞ dt | ∫_{t/2}^∞ V(r) dr | ≤ ∫_0^∞ dt ∫_{t/2}^∞ |V(r)| dr = 2 ∫_0^∞ r |V(r)| dr < ∞.

It follows that the first integral is indeed the Fourier integral of an L¹ function.
For the second integral,

∫_0^∞ e^{ikr} V(r) dr ∫_0^r K(r, t) (sin kt/k) dt = (1/2) ∫_0^∞ V(r) dr ∫_0^r K(r, t) dt ∫_{r−t}^{r+t} e^{iku} du,

we can also reach the same conclusion by now using the bound (3.35). In
conclusion, under the condition (2.20), we have

(3.82)  A(t) ∈ L¹(0, ∞) ∩ L²(0, ∞).

A different proof of this result will be given later in 4.5.

What we have just seen is a general phenomenon. Whenever a quantity
is defined by an integral containing a solution of the differential equation
(3.9), and one finds that replacing that solution by the first term of its
(convergent) series expansion yields interesting properties such as integral
representations, asymptotic properties, and so on, one can then show that the
conclusion holds in general for the full solution. We shall see in 4.5 that one
can obtain (3.81) directly.

Once the integral representation (3.80) with an L¹ kernel is established,
one can deduce from it similar integral representations for other quantities of
interest using the following theorem.
Theorem 4.3.7 (Wiener–Levy). Let H(z) be analytic in a domain D of
the complex plane, and let F(k) be such that the curve z = F(k), −∞ ≤ k ≤ ∞,
lies inside D. If F(k) is representable in the form

F(k) = C + ∫_{−∞}^{∞} f(t) e^{ikt} dt,  k real,

with f(t) ∈ L¹(−∞, ∞), then H(F(k)) also possesses a similar representation.

Because of the Riemann–Lebesgue lemma, we have, of course, C = F(±∞).
We now apply this theorem to the logarithm of the Jost function. More
precisely, we consider

(3.83)  G(k) = Π_{j=1}^n [(k + iγ_j)/(k − iγ_j)] F(k),

where the product is over the zeros of F(k) on the imaginary axis in the upper
half-plane. We have seen that these zeros correspond to the bound states.

Now, G(k) is analytic (holomorphic) in the upper half-plane, continuous on
the real k-axis (Im k = 0), and does not vanish in the closed upper half-plane
Im k ≥ 0. Moreover,

(3.84)  lim_{|k|→∞} G(k) = 1,  Im k ≥ 0.

We can therefore apply Theorem 4.3.7 to Log G(k), with Log G(∞) = 0.
The net result is

Log G(k) = ∫_0^∞ g(t) e^{ikt} dt,

where g(t) ∈ L¹(0, ∞). The fact that g(t) vanishes for t < 0 is again a
consequence of the analyticity of Log G(k) in the upper k-plane. Going back
now to F(k), we find, for k in the upper half-plane Im k ≥ 0,

(3.85)  F(k) = Π_{j=1}^n [(k − iγ_j)/(k + iγ_j)] exp[ ∫_0^∞ g(t) e^{ikt} dt ],  g(t) ∈ L¹(0, ∞).

As we already mentioned, the completeness relation (3.44) is one of the
essential tools for solving the inverse problem. It contains the modulus of
the Jost function for real values of k, and we have to show that this can be
calculated from the phase shift. There are two methods for doing this. The
first is to recall (3.67). It follows that, for k real,

(3.86)  δ(k) = −Im Log F(k) = i Σ_{j=1}^n Log[(k − iγ_j)/(k + iγ_j)] − ∫_0^∞ g(t) sin kt dt,

g(t) ∈ L¹(0, ∞).
From the knowledge of δ(k) for all k ≥ 0 and the bound-state energies, we can
now calculate g(t) by inverting the above Fourier sine transform. Once g(t) is
known, we then have F(k) from (3.85).
The second method is to use Hilbert transforms (called dispersion relations by physicists). The main theorem to be used now is the following.

Theorem 4.3.8. Suppose f(x) ∈ L¹(0, ∞), and let f̃(k) be its Fourier
transform,

f̃(k) = ∫_0^∞ f(x) e^{ikx} dx.

We know that f̃ is a continuous function of the real variable k. Let us denote
by X(k) and Y(k) the real and imaginary parts of f̃, both continuous. Then,
for all k, we have
X(k) = (1/π) lim_{a→∞} P ∫_{−a}^{a} [Y(k′)/(k′ − k)] dk′,

Y(k) = −(1/π) lim_{a→∞} P ∫_{−a}^{a} [X(k′)/(k′ − k)] dk′,

where the symbol P in front of the integrals means the principal value. These
two relations are called Kramers–Kronig relations.
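The Kramers–Kronig pair in Theorem 4.3.8 can be verified numerically on a simple causal example (an assumption for this sketch): f(x) = e^{−x} for x > 0 gives f̃(k) = 1/(1 − ik), i.e., X(k) = 1/(1 + k²) and Y(k) = k/(1 + k²), which form an exact Hilbert-transform pair. The principal value is handled by a grid placed symmetrically about the singular point.

```python
import math

# X and Y below are the real and imaginary parts of the transform of the
# assumed causal function f(x) = exp(-x), x > 0.
def X(k):
    return 1.0 / (1.0 + k * k)

def Y(k):
    return k / (1.0 + k * k)

def hilbert_of_Y(k0, span=2000.0, n=400000):
    # (1/pi) * P-integral of Y(k')/(k'-k0): midpoint rule on a grid symmetric
    # about k0, so the singular contributions cancel in pairs.
    h = 2.0 * span / n
    s = 0.0
    for i in range(n):
        kp = k0 - span + (i + 0.5) * h
        s += Y(kp) / (kp - k0)
    return s * h / math.pi
```

The computed transform of Y reproduces X to within the truncation error of the (slowly converging) tail.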
We can apply this theorem directly to Log G(k), given by (3.83) and (3.84).
The result is

Re Log G(k) = Log |F(k)| = −(1/π) P ∫_{−∞}^{∞} δ̄(k′)/(k′ − k) dk′,

where, according to (3.83) and (3.67), δ̄(k) = −Im Log G(k) is given by

δ̄(k) = δ(k) − 2 Σ_{j=1}^n Arctg(γ_j/k).

In the above integral, we need the values of δ(k) for negative k, and these are
provided by (3.72). Putting all the pieces together, we find

(3.87a)  |F(k)| = Π_j (1 + γ_j²/k²) exp[ −(1/π) P ∫_{−∞}^{∞} δ(k′)/(k′ − k) dk′ ]

and

(3.87b)  F(k) = Π_j (1 + γ_j²/k²) exp[ −(1/π) ∫_{−∞}^{∞} δ(k′)/(k′ − k) dk′ ].

In the first formula, k is real, whereas in the last formula, Im k > 0. Putting
k = Re k + iε and making ε → 0, we recover (3.87a) and (3.67) in the limit by
using

lim_{ε→0} 1/(k′ − k − iε) = P [1/(k′ − k)] + iπ δ(k′ − k).

Another integral representation, which will be useful later, is the one for the
S-matrix, defined for real values of k only:

(3.88)  S(k) = F(−k)/F(k) = e^{2iδ(k)},  k real.

We already have (3.81) and (3.82) for F(k). Consider now the function 1/F(k).
In the Wiener–Levy theorem, we can choose H(z) = 1/z, which is
analytic everywhere except at z = 0. Since F(k) does not vanish when k varies
from −∞ to +∞ on the real axis, and F(±∞) = 1, we can apply the theorem
and we get

(3.89)  1/F(k) = 1 + ∫_{−∞}^{∞} B(t) e^{ikt} dt,  B(t) ∈ L¹(−∞, ∞).

Therefore,

S(k) = F(−k)/F(k) = [1 + ∫_0^∞ A(t) e^{−ikt} dt][1 + ∫_{−∞}^{∞} B(u) e^{iku} du]
 = 1 + ∫_0^∞ A(t) e^{−ikt} dt + ∫_{−∞}^{∞} B(t) e^{ikt} dt + ∫_0^∞ A(t) e^{−ikt} dt ∫_{−∞}^{∞} B(u) e^{iku} du.

The last term is the product of the Fourier transforms of two L¹ functions,
and it can be written as the Fourier transform of the convolution of A and B:

∫_{−∞}^{∞} e^{ikx} [ ∫_0^∞ A(t) B(x + t) dt ] dx.

The integral in the bracket is the convolution of two L¹ functions,
and we have the following theorem.
Theorem 4.3.9. If f(x) and g(x) belong to L¹(−∞, ∞), then so does
their convolution

h(x) = ∫_{−∞}^{∞} g(u) f(x − u) du,

and its Fourier transform H(k) is the product of the Fourier transforms of f
and g: H(k) = F(k) G(k).
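Theorem 4.3.9 can be checked on an assumed pair of Gaussians, whose convolution and Fourier transforms are known in closed form: with f = g = e^{−x²}, one has (f∗g)(x) = √(π/2) e^{−x²/2}, and the transform of the convolution factors into the product of the individual transforms.

```python
import math

def f(x):
    # Assumed example: f = g = exp(-x^2).
    return math.exp(-x * x)

def convolve_at(x, span=8.0, n=16000):
    # h(x) = int f(u) f(x-u) du by the trapezoid rule on [-span, span].
    step = 2.0 * span / n
    s = 0.0
    for i in range(n + 1):
        u = -span + i * step
        w = 0.5 if i in (0, n) else 1.0
        s += w * f(u) * f(x - u)
    return s * step

def ft(func, k, span=8.0, n=16000):
    # Fourier transform of a real even function: int func(x) cos(kx) dx.
    step = 2.0 * span / n
    s = 0.0
    for i in range(n + 1):
        x = -span + i * step
        w = 0.5 if i in (0, n) else 1.0
        s += w * func(x) * math.cos(k * x)
    return s * step

exact_conv = lambda x: math.sqrt(math.pi / 2.0) * math.exp(-x * x / 2.0)
```

The numerical convolution matches the closed form, and ft(exact_conv, k) equals ft(f, k)² to quadrature accuracy, which is exactly the statement H(k) = F(k) G(k).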
Using this theorem and putting all the pieces together, we find

(3.90)  S(k) = e^{2iδ(k)} = 1 + ∫_{−∞}^{∞} s(t) e^{ikt} dt,  s(t) ∈ L¹(−∞, ∞).

The same method also gives us, again for k real,

(3.91)  1/(F(k) F(−k)) = 1/|F(k)|² = 1 + ∫_0^∞ C(t) cos kt dt,  C ∈ L¹(0, ∞),

where we have used the fact that the function is real and even in k.

Remark. If there are no bound states, then 1/F(k) is also analytic in
Im k > 0. It follows that in (3.89), B(t) = 0 for t < 0.

4.4. Gelfand–Levitan Integral Equation

As was previously mentioned, the three ingredients for deriving the Gelfand–Levitan equation are the representation (3.29), the relation (3.31) between the
kernel K and the potential, and the completeness relation (3.44) with (3.43b).
In this last relation, the modulus of the Jost function, |F(√E)|, E ≥ 0, is
given in terms of the phase shift by (3.87). The purpose is to determine the
potential from what is commonly called the scattering data:

(4.1)  {δ(k), k ≥ 0} ∪ {γ_j > 0, C_j > 0, j = 1, 2, …, n},

where δ(k) must satisfy, of course, the Levinson theorem (3.76).


In order to establish the Gelfand–Levitan integral equation, we start from
the integral transform defined by the complete set of (generalized) eigenfunctions φ, (3.46) and (3.47). Taking the integral transform of Φ(k, t) =
φ(k, t) − (sin kt/k), which satisfies the same boundary condition as φ, namely
Φ(k, 0) = 0, we define, for each fixed finite value of t,

(4.2)  Φ̃(r, t) = ∫ Φ(√E, t) φ(√E, r) dρ(E),

and we are going to show that, as for the analogous transform defined by (3.27),
we have Φ̃(r, t) = 0 for all r > t. To show this, we use (3.64) and (3.43b) in the
above integral. Taking into account that φ, and hence Φ, is an even function
of k, we find

Φ̃(r, t) = (1/iπ) ∫_{−∞}^{∞} Φ(k, t) f(k, r) (k/F(k)) dk + Σ_j C_j Φ(iγ_j, t) φ(iγ_j, r).
j

Now, if r > t, we can close the contour in the above integral by a large
semicircle in the upper k-plane, and we know that, because of (3.59), the
contribution of this large semicircle goes to zero as its radius goes to innity.
Therefore, the above integral on the real k-axis is equal to the sum of the
residues of the (simple) poles due to the (simple) zeros of F (k), kj = ij .
On the other hand, we know that, at kj = ij , (ij , r) and f (ij , r) are
proportional to each other. Since is normalized by  (k, 0) = 1 for all k, we
have

(4.3)

(ij , r) =

f (ij , r)
.
f  (ij , 0)

Putting all these pieces together, we find, for r > t,

Φ̃(r, t) = Σ_{j=1}^{n} [ Cj/f′(iγj, 0) + 2iγj/Ḟ(iγj) ] Φ(iγj, t) f(iγj, r),

where Ḟ = dF/dk. It can be shown (see the appendix) that the sum inside the bracket vanishes for each j. Therefore,

(4.4)    Φ̃(r, t) = ∫ [φ(√E, t) − (sin √E t)/√E] φ(√E, r) dρ(E) = 0,    r > t.

In this formula, the integral is conditionally convergent at E = ∞ because, when E = k² → ∞, the integrand is oscillating and goes to zero, due to

(4.5)    φ(k, r) − (sin kr)/k = (1/k) o(1).

We replace φ in (4.4) by its integral representation (3.29) and introduce

(4.6)    dσ(E) = dρ(E) − dρ0(E).

Relation (4.4) then leads, for r > t, to

δ(r − t) + ∫ (sin √E t/√E)(sin √E r/√E) dσ(E)
    + ∫_0^r K(r, s) [ δ(t − s) + ∫ (sin √E t/√E)(sin √E s/√E) dσ(E) ] ds = 0,

where the δ(r − t) term vanishes because r > t, and the δ(t − s) term integrates to K(r, t). Introducing the symmetric kernel

(4.7)    G(r, t) = ∫ (sin √E r/√E)(sin √E t/√E) dσ(E),

we find, for t < r,

(4.8)    K(r, t) + G(r, t) + ∫_0^r K(r, s) G(s, t) ds = 0.

This is the Gelfand–Levitan integral equation, which gives the full solution of the inverse problem: finding the potential from the scattering data (4.1). Given these data, we can calculate the Jost function by (3.87a). We then have dρ(E) by (3.43b). Using (3.41a), we obtain

(4.9)    dρ(E) = (1/π) [√E/|F(√E)|²] dE,    E = k² ≥ 0,
         dρ(E) = Σ_{j=1}^{n} Cj δ(E − Ej) dE,    E < 0,

from which we can calculate the kernel G by (4.7). Having G, we must solve the integral equation (4.8) for K, and then we get, finally,

(4.10)    V(r) = 2 (d/dr) K(r, r).
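As an illustration of the procedure (not part of the original text), the chain (4.7)–(4.10) can be run numerically. The sketch below takes scattering data with an unchanged continuum (δ(k) ≡ 0) and a single bound state of energy −κ² with normalization C, relative to V = 0; then dσ is a single δ-function, G(r, t) = C (sinh κr/κ)(sinh κt/κ) is a kernel of rank one, and the result can be checked against the closed form V = −2 (d²/dr²) Log R(r) worked out in §4.4.2 below. All numerical parameters are arbitrary choices.

```python
import numpy as np

# Scattering data: delta(k) = 0 for all k, one bound state E = -kappa**2
# with normalization C.  Then d(sigma) is a single delta-function and
# G(r, t) = C * phi0(r) * phi0(t) with phi0(r) = sinh(kappa r)/kappa.
kappa, C = 1.0, 2.0
h, n = 0.02, 300                       # radial grid on (0, 6]
x = h * np.arange(1, n + 1)
phi0 = np.sinh(kappa * x) / kappa
G = C * np.outer(phi0, phi0)           # degenerate kernel of rank one

K_diag = np.empty(n)
for i in range(n):                     # fix r = x[i]; solve (4.8) in t on (0, r)
    m = i + 1
    M = np.eye(m) + h * G[:m, :m]      # rectangle rule for int_0^r K(r,s) G(s,t) ds
    K = np.linalg.solve(M, -G[i, :m])
    K_diag[i] = K[i]                   # K(r, r)

V = 2.0 * np.gradient(K_diag, h)       # (4.10)

# Closed form for this data (worked out in Sec. 4.4.2):
R = 1.0 + C * np.cumsum(phi0**2) * h
V_exact = -2.0 * np.gradient(np.gradient(np.log(R), h), h)
print(np.max(np.abs(V[5:-5] - V_exact[5:-5])))
```

The printed number is the maximum discrepancy between the two reconstructions; it shrinks as the grid is refined.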

Remark. The Gelfand–Levitan integral equation is, for each fixed value of r, a Fredholm integral equation for K in the second variable t, which has the structure

K(t) + G(t) + ∫_0^r G(t, s) K(s) ds = 0.

In order to show that the scattering data determine the potential in a unique way, we have to show that the integral equation has a unique solution. This can be shown to be the case in full generality, whatever the scattering data (4.1) may be, provided that
(i) the phase shift δ(k) is a continuous function of k and satisfies the conditions (3.71) and (3.79);
(ii) when bound states are present, the phase shift satisfies the Levinson theorem (3.77). Of course, it is implicitly assumed that the number of bound states is finite.
There are no other constraints on the scattering data. In particular, the positive quantities γj and Cj can be chosen arbitrarily. This means that, given the phase shift and the binding energies, we obtain a full class of so-called phase-equivalent potentials V(r; Cj), depending on n arbitrary positive constants Cj and all having the same phase shift and bound-state energies. And except for the Levinson theorem, there is also no relation between the phase shift and the binding energies.
Remark. In deriving the integral representation (3.29) and the Gelfand–Levitan integral equation (4.8), we have mainly used the theory of analytic
functions and contour integration in the complex k-plane. An alternative way
of deriving (4.8) is to use the resolvent of the equation (3.29), considered as an
integral equation for (sin kr/k). We refer the reader to the references quoted
at the end for this alternative derivation.
4.4.1. More General Equations. So far, the starting point in our method has been the free case where the potential V = 0. The radial Schrödinger
equation was solved in terms of the free solutions sine and exponential, and we obtained the most important ingredient for the solution of the inverse problem, namely the representation (3.29) in terms of the free solution φ^(0) = (sin kr)/k.

Suppose now we start from a given potential V1 satisfying (3.4), for which we assume that we can solve the Schrödinger equation completely, and for which, therefore, we know everything explicitly: the phase shift for all positive energies, the energies of all the bound states and the corresponding normalization constants, the regular solution φ1, the kernel K1 of (3.29), etc. We may then ask whether it is possible to start from V1, instead of starting from the free case V = 0, and determine the potential V from the spectral data (4.1). The advantage of starting from V1 is that this potential may sometimes be chosen close to V, or chosen to reproduce some characteristics of V.

The first step is to establish an integral representation similar to (3.29) for φ in terms of φ1 (instead of (sin kr)/k):


(4.11)    φ(√E, r) = φ1(√E, r) + ∫_0^r K(r, t) φ1(√E, t) dt.

For this purpose, we proceed along similar lines, and consider the integral

(4.12)    Ω(t, r) = ∫ [φ(√E, r) − φ1(√E, r)] φ1(√E, t) dρ1(E)
    = (1/iπ) ∫_0^{∞} [φ(k, r) − φ1(k, r)] [F1(−k) f1(k, t) − F1(k) f1(−k, t)] [k/(F1(k) F1(−k))] dk
      + Σ_{j=1}^{n1} Cj(1) [φ(iγj(1), r) − φ1(iγj(1), r)] φ1(iγj(1), t)
    = (1/iπ) ∫_{−∞}^{∞} [φ(k, r) − φ1(k, r)] f1(k, t) [k/F1(k)] dk
      + Σ_{j=1}^{n1} Cj(1) [φ(iγj(1), r) − φ1(iγj(1), r)] φ1(iγj(1), t).

Now, in the last integral, if t > r, we can close the contour by a large semicircle in the upper half-plane, and we know that its contribution is zero. Therefore, the integral is given by the sum of the residues of the (simple) poles of the integrand due to the (simple) zeros of F1(k). Therefore, using (4.3) for φ1,

(4.13)    Ω(t, r) = Σ_j [ Cj(1)/f1′(iγj(1), 0) + 2iγj(1)/Ḟ1(iγj(1)) ] [φ(iγj(1), r) − φ1(iγj(1), r)] f1(iγj(1), t) = 0,    t > r.

If we now invert the integral transform (4.12) defining Ω(t, r), we indeed get (4.11), with K(r, t) ≡ Ω(t, r). After establishing (4.11), it is easy to establish the generalized Gelfand–Levitan integral equation by proceeding exactly as in the derivation of (4.8). This time, we consider

(4.14)    Φ̃(r, t) = ∫ [φ(√E, t) − φ1(√E, t)] φ(√E, r) dρ(E),

and we again find that Φ̃(r, t) = 0 for r > t. Again, as in (4.4), the integral is conditionally convergent at E = ∞ since, as E → ∞, the integrand is oscillating and goes to zero because of

(4.15)    φ(k, t) − φ1(k, t) = (1/k) o(1).

We now replace φ by its integral representation (4.11) and define

(4.16)    dσ(E) = dρ(E) − dρ1(E)

and

(4.17)    G(r, t) = ∫ φ1(√E, r) φ1(√E, t) dσ(E).

We then get, as for (4.8), the desired integral equation

(4.18)    K(r, t) + G(r, t) + ∫_0^r K(r, s) G(s, t) ds = 0.

The same analysis which led to (3.31) for the potential leads here to

(4.19)    ∂²K/∂r² − ∂²K/∂t² = [V(r) − V1(t)] K(r, t)

and

(4.20)    V(r) − V1(r) = 2 (d/dr) K(r, r).

Making V1 = 0, we recover our previous results. Note that in the present case V and V1 may have a different number of bound states. We shall now indicate some applications.

4.4.2. Identical Continua. Here we have dρ(E) ≡ dρ1(E) for E ≥ 0. In (4.16), we are then left with a finite sum of δ-functions. The kernel G is then a degenerate kernel of finite rank, and we can solve the Gelfand–Levitan integral equation algebraically. An interesting instance is when V1 has no bound states, whereas V has n. The simplest example is n = 1: V has a bound state with energy −κ², and the normalization constant is C. If we define

(4.21)    R(r) = 1 + C ∫_0^r φ1²(iκ, s) ds,

then

(4.22)    V − V1 = −2 (d²/dr²) Log R(r)

and

(4.23)    φ(k, r) = φ1(k, r) − C φ1(iκ, r) [∫_0^r φ1(iκ, t) φ1(k, t) dt] [1/R(r)].

Making k = iκ in this relation, we find that the wave function of the bound state is

(4.24)    φ(iκ, r) = φ1(iκ, r)/R(r),

where R is given by (4.21). It is easily checked here that C is indeed the normalization constant of the bound-state wave function. We should notice here that φ1(iκ, r) ∼ e^{κr} as r → ∞. This, when used in (4.23) and (4.21), shows that φ(iκ, r) ∼ e^{−κr}, as expected. The next simple case is described in §4.4.3.
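That C is indeed the normalization constant can also be checked numerically. A minimal sketch (an illustration, not from the text), assuming V1 = 0 so that φ1(iκ, r) = sinh(κr)/κ, with arbitrary values of κ and C:

```python
import numpy as np

# With V1 = 0 we have phi1(i kappa, r) = sinh(kappa r)/kappa.  Build R(r)
# from (4.21), the bound-state wave function psi = phi1/R from (4.24), and
# verify that C is its normalization: C * int psi^2 dr = 1.
kappa, C = 1.5, 3.0
r = np.linspace(0.0, 20.0, 200001)
h = r[1] - r[0]
phi1 = np.sinh(kappa * r) / kappa

cum = np.concatenate(([0.0], np.cumsum(0.5 * (phi1[1:]**2 + phi1[:-1]**2) * h)))
R = 1.0 + C * cum                       # (4.21), trapezoidal rule
psi = phi1 / R                          # (4.24); decays like exp(-kappa r)

norm = C * np.sum(0.5 * (psi[1:]**2 + psi[:-1]**2) * h)
print(norm)                             # close to 1
```

The identity is in fact exact: since R′ = C φ1², the integrand is R′/R², and C ∫_0^∞ ψ² dr = 1 − 1/R(∞) = 1.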
4.4.3. Phase-Equivalent Potentials. This is a particular case in which we
also have the same number of bound states in V and V1 , with identical binding
energies. The two potentials differ only by the normalizing constants Cj's, but
have the same Jost function.
A more interesting and more general case, still soluble explicitly in terms of φ1 and F1, is the following.
4.4.4. Bargmann Potentials. Bargmann potentials attached to a given potential V1 are the potentials for which the Jost function is of the form

F(k) = (rational function of k) · F1(k),

where F1 is the Jost function of V1. Since F(∞) = F1(∞) = 1, it is obvious that the rational function has the same number of zeros and poles, so that

(4.25)    F(k) = Π_{j=1}^{N} [(k + i aj)/(k + i bj)] F1(k).

Since both F1 and F are holomorphic in Im k > 0 and continuous on Im k = 0, we must assume Re bj > 0. Also, either aj < 0, which means that V has a bound state with energy −aj², which V1 does not have (if V1 also had this bound state, then F would have a double zero at k = −iaj, and this is forbidden, as we know), or we must have Re aj > 0.

The problem here is, knowing everything about V1 (scattering data, the wave function φ1(E, r), etc.) and the product on the right-hand side of (4.25), how one calculates V. Because of the symmetry property (3.66) of the Jost function, complex a's and b's always appear in pairs (a, a*) and (b, b*). To show the essence of the method, we shall treat here the simple case where we have one a and one b, both real and positive: no new bound state, but one more zero and one more pole, both in Im k < 0. We also assume that V1 has no bound states itself, for this would not alter the result. Therefore

(4.26)    F(k) = [(k + ia)/(k + ib)] F1(k),    a > 0, b > 0.

According to (4.17), we have

(4.27)    G(r, t) = (2/π) ∫_0^{∞} φ1(k, r) φ1(k, t) [(b² − a²)/(k² + a²)] [k²/|F1(k)|²] dk.

It is now easy to recognize that the above expression is, up to the factor (b² − a²), the resolvent kernel of the Schrödinger equation (3.9) with V1 and energy −a². Indeed, if we apply the differential operator D1 = −(d²/dr²) + V1 + a² to the integral and use the completeness relation for φ1(E, r), we obtain

(4.28)    D1 G(r, t) = (b² − a²) δ(r − t).

Moreover, G is symmetric in its arguments, and G(0, t) = 0, so that it satisfies the Dirichlet boundary condition. It is easily checked that one has

(4.29)    G(r, s) = (b² − a²) [φ1(ia, r<) f1(ia, r>)/F1(ia)],

where f1 is the Jost solution and r< (r>) denotes the smaller (larger) of r and s. The kernel G is now a separable kernel of rank one, and the Gelfand–Levitan equation is trivially solved, as was done above,

and we can compute K(r, t) and the potential V explicitly in terms of φ1 and f1.

It can be shown that in the general case, where we have N poles and N zeros, the kernel G reduces to a sum of N terms analogous to (4.29), and again we can solve the integral equation algebraically and get V explicitly.
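For the simple case (4.26) the whole construction can be tested numerically when V1 = 0, so that φ1(k, r) = (sin kr)/k, f1(k, r) = e^{ikr}, and F1 ≡ 1. The sketch below (an illustration under these assumptions, with arbitrarily chosen a, b, k, and grid) builds G from (4.29), solves the Gelfand–Levitan equation for K(r, r), forms V by (4.20), and then integrates the Schrödinger equation with this V to check that the phase shift agrees with δ(k) = arctan(b/k) − arctan(a/k), the value implied by (4.26) with the convention F = |F| e^{−iδ}:

```python
import numpy as np

# Bargmann potential for F(k) = (k + ia)/(k + ib), built on V1 = 0.
a, b = 0.5, 1.5
h, n = 0.02, 300
x = h * np.arange(1, n + 1)            # grid on (0, 6]

# Separable kernel (4.29): G(r,s) = (b^2 - a^2) phi1(ia, r<) f1(ia, r>), F1 = 1
lo = np.minimum.outer(x, x)
hi = np.maximum.outer(x, x)
G = (b**2 - a**2) * (np.sinh(a * lo) / a) * np.exp(-a * hi)

K_diag = np.empty(n)
for i in range(n):                     # solve the Gelfand-Levitan equation at r = x[i]
    m = i + 1
    M = np.eye(m) + h * G[:m, :m]
    K = np.linalg.solve(M, -G[i, :m])
    K_diag[i] = K[i]
V = 2.0 * np.gradient(K_diag, h)       # (4.20) with V1 = 0

# Integrate phi'' + k^2 phi = V phi outward and read off the phase shift.
k = 2.0
phi, dphi = 0.0, 1.0
for i in range(n - 20):                # semi-implicit Euler steps of size h
    dphi += h * (V[i] - k**2) * phi
    phi += h * dphi
r_end = x[n - 21]
delta = np.arctan2(k * phi, dphi) - k * r_end
delta = (delta + np.pi / 2) % np.pi - np.pi / 2          # phase shift mod pi
print(delta, np.arctan(b / k) - np.arctan(a / k))        # should agree
```

The agreement improves with a finer grid and a larger cutoff radius; the residual error comes from the rectangle-rule quadrature and from truncating the potential tail.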
4.4.5. The Discrete Case. So far, we have assumed that the potential vanishes at infinity. This leads to the existence of the continuum in the energy spectrum, and therefore to scattering. Suppose now that the potential is infinite at infinity, V(∞) = +∞, for instance, V(r) ≈ λ r^α, λ > 0, α > 0, as r → ∞. We have now what is called a confining potential, and whatever the boundary condition at r = 0 (Dirichlet, Neumann, or mixed), the spectrum is discrete and goes to +∞: E1 < E2 < · · · < En < · · ·. This case has attracted much interest because of its applications in particle physics (quark models, where the interaction between quarks is known to be confining, and where, at least for heavy quarks, the nonrelativistic Schrödinger equation is a good approximation), and in other domains such as acoustics, etc. A similar, equivalent case is when we are dealing with the Schrödinger equation in a box, i.e., in a finite interval. Here also, if the potential has no strong singularities in the interval, it is well known that the spectrum is discrete whatever the boundary conditions at the ends of the interval are. And again the eigenvalues are nondegenerate and accumulate at infinity: En → ∞ as n → ∞. The inverse problem is now to find the potential from the energy spectrum and the appropriate normalization constants of the eigenfunctions. This problem is treated in Chapter 3.
4.4.6. Properties of the Potential. It is clear from the techniques used to establish the Gelfand–Levitan integral equation that Fourier analysis plays an essential role. Moreover, when the potential satisfies condition (2.20), most of the quantities of interest we have to use in order to get the potential from the scattering data are given by various Fourier integrals of L1 functions, for instance, (3.81), (3.82), (3.85), (3.86), etc. Here we are going to show that it is possible to obtain precise properties of the potential from the properties of these Fourier transforms.

As we saw before, from the integral representations (3.81) and (3.82) for the Jost function it is possible to obtain all the other integral representations (3.85), (3.86), etc. Let us therefore consider the Jost function defined by (3.69) and (3.81):

(4.30)    F(k) = 1 + ∫_0^{∞} e^{ikr} V(r) φ(k, r) dr,

(4.31)    F(k) = 1 + ∫_0^{∞} e^{ikt} A(t) dt,    A(t) ∈ L1(0, ∞).

In (4.30), we can replace φ by its series expansion studied in §3.1, more precisely, by the sequence φ(n) given by (3.15b). In this way we get a sequence of F(n) which is absolutely and uniformly convergent in Im k ≥ 0, the domain where F(k) is defined and continuous. We know from (3.70) that the integral in (4.30) goes to zero as k → ∞, and the same is of course true for the integral in (4.31). More precisely, using (3.23) and (3.24) in (4.30), we immediately see that, for k → ∞, the main terms of the expansion of F are the first two:

(4.32)    F(k) = [1 + ∫_0^{∞} e^{ikr} (sin kr/k) V(r) dr] (1 + o(1)),

o(1) meaning here a constant times I(k, ∞), and

(4.33)    I(k, ∞) = ∫_0^{∞} [r |V(r)|/(1 + |k| r)] dr → 0,    k → ∞,

as we saw with (3.21).


Let us introduce now the integral of V:

(4.34)    W(r) = ∫_r^{∞} V(t) dt.

By definition, since V is locally L1, except perhaps at the origin, and because of our assumption on the potential, (2.20), W is an absolutely continuous function of r for r > 0, and also W ∈ L1(0, ∞), since

(4.35)    ∫_0^{∞} |W| dr ≤ ∫_0^{∞} dr ∫_r^{∞} |V(t)| dt = ∫_0^{∞} |V(t)| dt ∫_0^t dr = ∫_0^{∞} t |V(t)| dt < ∞.

Using now

(4.36)    e^{ikr} (sin kr)/k = ∫_0^r e^{2ikt} dt

in (4.32), and exchanging the two integrals, we find, for k → ∞,

(4.37)    F(k) = 1 + ∫_0^{∞} e^{2ikr} W(r) dr + · · · .

The similarity between this formula and (4.31) shows that, as far as integrability properties are concerned, A(r) and W(r) are equivalent, and one can show quite rigorously that

(4.38)    A(r) ∈ L1(0, ∞) ⟺ W(r) ∈ L1(0, ∞).

Other relations of this kind can be obtained along the same lines. For instance, it is clear from (4.37) and (3.86) that one also has

(4.39)    g(r) ∈ L1(0, ∞) ⟺ W(r) ∈ L1(0, ∞).

Let us also note that we get from (4.37) and (3.86)

(4.40)    δ(k) = −∫_0^{∞} W(r) sin 2kr dr + · · · ,    k → ∞.

In order to go further and establish (3.4), one has to differentiate the above quantities A(r), g(r), etc., since, by definition, V(r) = −W′(r). It is then possible to show rigorously that

(4.41)    r A′(r) ∈ L1(0, ∞) ⟺ r V(r) ∈ L1(0, ∞)

and

(4.42)    r g′(r) ∈ L1(0, ∞) ⟺ r V(r) ∈ L1(0, ∞).

These equivalence relations provide necessary and sufficient conditions to be imposed on the Fourier transforms of the phase shift or the Jost function in order to obtain a potential which satisfies (3.4).
We conclude this section by showing how Tauberian theorems can lead from the properties of the potential near the origin to the properties of the Jost function and the phase shift near k = ∞. We need the following two theorems.

Theorem 4.4.1. Let f(x) = x^{−α} ψ(x), where 0 < α < 1 and ψ is of bounded variation in (0, ∞). Then

∫_0^{∞} f(x) cos kx dx ≈ ψ(+0) Γ(1 − α) sin(απ/2) k^{α−1}    (k → ∞),
∫_0^{∞} f(x) cos kx dx ≈ ψ(∞) Γ(1 − α) sin(απ/2) k^{α−1}    (k → 0).

The Fourier sine transform satisfies similar relations with sin(απ/2) replaced by cos(απ/2).

Theorem 4.4.2. Let f(x) and f′(x) be integrable over any finite interval not ending at x = 0; let x^{α+1} f′(x) be bounded for all x, and let f(x) ≈ x^{−α} as x → ∞ (respectively, x → 0). Then

∫_0^{∞} f(x) cos kx dx ≈ Γ(1 − α) sin(απ/2) k^{α−1}

as k → 0 (respectively, k → ∞). The Fourier sine transform satisfies similar relations with sin(απ/2) replaced by cos(απ/2).
As an application, let us assume that

(4.43)    lim_{r→0} r^{1+α} V(r) = V0,    0 < α < 1.

We then find from (4.40) that

(4.44)    δ(k) ≈ −(V0/α) cos(απ/2) Γ(1 − α) (2k)^{α−1},    k → ∞.

These asymptotic properties can be formulated in terms of g′(r) and A′(r), as well as in terms of W(r), A(r), and g(r). One should note that the above two theorems are two-sided, so that (4.44) implies (4.43).
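Here is a quick numerical illustration (not from the text) of Theorem 4.4.1, with f(x) = x^{−1/2} e^{−x}, i.e., α = 1/2, ψ(x) = e^{−x}, ψ(+0) = 1. For this f the cosine transform has the closed form √π Re (1 + ik)^{−1/2}, so the predicted large-k behavior Γ(1/2) sin(π/4) k^{−1/2} can be checked directly:

```python
import numpy as np

# Theorem 4.4.1 for f(x) = x^{-1/2} exp(-x): alpha = 1/2, psi = exp(-x),
# psi(+0) = 1.  Closed form of the cosine transform vs. predicted asymptote.
for k in (10.0, 100.0, 1000.0):
    exact = np.sqrt(np.pi) * ((1 + 1j * k) ** -0.5).real
    predicted = np.sqrt(np.pi) * np.sin(np.pi / 4) * k ** -0.5
    print(k, exact / predicted)       # ratio tends to 1 as k grows
```

The ratios approach 1 like 1 + O(1/k), exactly the kind of statement the theorem makes.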
Another result we can get from the above analysis concerns potentials of finite range: V(r) = 0 for r > R. It can be seen very easily, on the basis of (4.30) and (3.24) or (3.29), that the Jost function then becomes an entire function of exponential order 1 and type 2R, so that, by the Paley–Wiener theorem, we have in (4.31)

A(t) = 0,    t > 2R.

Conversely, it can also be shown easily that if the scattering data are such that the Jost function is of exponential type 2R, then V(r) = 0 for r > R.
4.5. Marchenko Equation

An equivalent method for solving the inverse problem is to use the Jost solution f(k, r) instead of the regular solution φ. Now, the Jost solution, for each fixed value of r, is an analytic function of k in Im k > 0 and is continuous and bounded on the real k-axis. Moreover, it has the asymptotic behavior shown in (3.61), and we have seen that we have the representation (3.80),

(5.1)    f(k, r) = e^{ikr} + ∫_r^{∞} A(r, t) e^{ikt} dt,

where A(r, t) is, for each fixed r, an L2 function of t.

Like the representation (3.29) for φ, which was the starting point for the Gelfand–Levitan method, the above representation is the essential ingredient of the Marchenko method. To go further, we must check, as for (3.29), that we indeed have a solution of the Schrödinger equation. Proceeding as in §3, we end up with

(5.2)    ∂²A/∂r² − ∂²A/∂t² = V(r) A(r, t)

and

(5.3)    V(r) = −2 (d/dr) A(r, r).

The discussion of the existence of the solution of these equations is quite similar to that for (3.30), and we reach similar conclusions. Also, using (5.1) in (3.50) and taking the Fourier transform of both sides, we get the integral equation

(5.4)    A(r, t) = (1/2) ∫_{(r+t)/2}^{∞} V(s) ds + ∫_{(r+t)/2}^{∞} ds ∫_0^{(t−r)/2} V(s − u) A(s − u, s + u) du,    t ≥ r.

One can directly check that this integral equation leads to (5.2) and (5.3). Iterating (5.4), we get

(5.5)    |A(r, t)| < (1/2) [∫_{(r+t)/2}^{∞} |V(s)| ds] exp[∫_r^{∞} u |V(u)| du],

which is analogous to (3.35). We see again that if rV ∈ L1(0, ∞), we are dealing with meaningful equations and can replace solving the Schrödinger equation by solving (5.2) and (5.3), or (5.4), and then using (5.1) to get f(k, r). Also, making r = 0 in (5.5), we see that A(t) = A(0, t) is in L1(0, ∞), as was shown by a different method in §3. Thus A(t) ∈ L1 ∩ L2.
We now have to formulate the completeness relation (3.44) in terms of the Jost solution f. This is most easily done by replacing, in its continuum part ∫_0^{∞} . . . dρ(E), φ(√E, r) by (3.64). We then get two terms. Changing k into −k in the second integral, we get, remembering that |F(k)|² = F(k) F(−k),

(1/iπ) ∫_{−∞}^{∞} f(k, r) φ(k, t) [k/F(k)] dk.

We now replace φ(k, t) again by (3.64) and use the definition of the S-matrix (3.88), S(k) = F(−k)/F(k), k real. The final result is that (3.44) can be written as

(5.6)    (1/2π) ∫_{−∞}^{∞} f(k, r) [f(−k, t) − S(k) f(k, t)] dk + Σ_{j=1}^{n} Cj ψj(r) ψj(t) = δ(r − t),

where ψj ≡ φ(iγj, r) are the bound-state wave functions. Expressing this sum in terms of fj(r) ≡ f(iγj, r) is easy. We have, since φ′(k, 0) = 1 for all values of k (remember (3.11)),

(5.7)    ψj(r) = f(iγj, r)/f′(iγj, 0).

Using now (see the appendix)

(5.8)    Cj⁻¹ = ∫_0^{∞} ψj²(r) dr = [f′(iγj, 0)]⁻² ∫_0^{∞} fj²(r) dr = i Ḟ(iγj)/[2γj f′(iγj, 0)],

the sum in (5.6) can be written


(5.9)    Σ_{j=1}^{n} [−2iγj/(Ḟ(iγj) f′(iγj, 0))] fj(r) fj(t) ≡ Σ_j sj fj(r) fj(t).

We must notice here that fj(r), which has the precise asymptotic behavior

(5.10)    fj(r) = e^{−γj r} + · · · ,    r → ∞,

is real because of (3.60), and therefore that sj > 0. The right-hand side of (5.8) gives us

(5.11)    sj ∫_0^{∞} fj² dr = 1,

which is the exact analogue of (3.42).


Consider now (5.6) for r < t, and assume rst for simplicity that there are
no bound states. We then get

f (k, r) [f (k, t) S(k) f (k, t)] dk = 0.

If we now replace the Jost solutions f (k, t) and f (k, t) inside the bracket by
their integral representation (5.1), we get


(5.12)

"

f (k, r) [eikt S(k) eikt ] dk

A(t, u)
t

#
f (k, r) [eiku S(k) eiku ] dk du = 0.

Inverse Problems in Potential Scattering

177

Downloaded 11/22/13 to 190.144.171.70. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

With obvious notation, this is, for each fixed r (< t), a homogeneous integral equation of the form

(5.13)    Gr(t) + ∫_t^{∞} A(t, u) Gr(u) du = 0,

where A(t, u) ∈ L2(t, ∞) in u. In other words, for each fixed r, Gr(t) for t > r satisfies a homogeneous Volterra integral equation with an L2 kernel. Therefore, from the theory of Volterra integral equations, we infer that Gr(t) = 0; that is, given r, we have

(5.14)    (1/2π) ∫_{−∞}^{∞} f(k, r) [e^{−ikt} − S(k) e^{ikt}] dk = 0    for all t > r.

We have been reasoning as though Gr(t) in (5.13) were an ordinary function, which is not the case. We are dealing here with a distribution, but with a little care the conclusion can be shown to hold. Since the starting point was the completeness relation (3.44), or (5.6), which has to be understood in the framework of the theory of distributions, (5.14) likewise has to be understood in the same sense: given an appropriate function F(t) which is C^{∞} and whose support is outside [0, r], we have

∫_0^{∞} F(t) Gr(t) dt = 0.

When bound states are present, we must also use

(5.15)    fj(t) = f(iγj, t) = e^{−γj t} + ∫_t^{∞} A(t, u) e^{−γj u} du;

the sum over the bound states will be included in the definition of Gr(t), and we again get (5.13). The full result is then

(5.16)    (1/2π) ∫_{−∞}^{∞} f(k, r) [e^{−ikt} − S(k) e^{ikt}] dk + Σ_{j=1}^{n} sj fj(r) e^{−γj t} = 0,    r < t.

If we replace the Jost solution f(k, r) by its representation (5.1), we obtain the Marchenko integral equation

(5.17)    A(r, t) + A0(r + t) + ∫_r^{∞} A(r, u) A0(u + t) du = 0,    t ≥ r,

where

(5.18)    A0(t) = (1/2π) ∫_{−∞}^{∞} [1 − S(k)] e^{ikt} dk + Σ_{j=1}^{n} sj e^{−γj t} = s(t) + Σ_{j=1}^{n} sj e^{−γj t},

with s(t) the Fourier transform of 1 − S(k), (3.90). Since S → 1 as k → ±∞, s(t) is a well-defined bounded function and is in L1(−∞, ∞), as we saw before.
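As in the Gelfand–Levitan case, a degenerate A0 makes the Marchenko equation explicitly solvable, and it is also easy to solve numerically. The sketch below (an illustration with arbitrarily chosen numbers) takes the reflectionless data S(k) ≡ 1 plus one bound state E = −γ² of weight s1; the kernel is then separable, and a short calculation gives the closed form V(r) = −2 (d²/dr²) log(1 + (s1/2γ) e^{−2γr}), against which the discretized solution of (5.17) and (5.3) can be compared:

```python
import numpy as np

# Reflectionless data: S(k) = 1 and one bound state E = -g**2 of weight s1,
# so (5.18) reduces to A0(t) = s1 * exp(-g*t) with s(t) = 0.
g, s1 = 1.0, 4.0
h, n = 0.05, 300
r = h * np.arange(n)                          # grid on [0, 15)
A0 = lambda t: s1 * np.exp(-g * t)

A_diag = np.empty(n)
for i in range(n):                            # fix r = r[i]; solve in t >= r
    t = r[i:]
    M = np.eye(t.size) + h * A0(np.add.outer(t, t))
    A = np.linalg.solve(M, -A0(r[i] + t))     # A(r,t) + A0(r+t) + integral = 0
    A_diag[i] = A[0]                          # A(r, r)

V = -2.0 * np.gradient(A_diag, h)             # (5.3)

# Closed form obtained from the separable kernel:
V_exact = -2.0 * np.gradient(np.gradient(np.log1p(s1 / (2 * g) * np.exp(-2 * g * r)), h), h)
print(np.max(np.abs(V[3:-3] - V_exact[3:-3])))
```

Note that δ ≡ 0 modulo π is compatible with the Levinson theorem, so these data are admissible; the resulting potential is a member of the family that is phase-equivalent to V = 0.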
The above equations are analogous to the Gelfand–Levitan equation. Each one has its own advantages. It turns out that the Marchenko equation is more useful in some cases, for instance, when dealing with the one-dimensional problem on the full line (−∞, ∞). Here there is no origin, and we cannot use boundary conditions at some finite point. We shall see more details in the next section.

Remark. It can be shown that the Marchenko equation has a unique solution, whatever s(t) ∈ L1(−∞, ∞) and the positive numbers sj and γj may be, provided that the Levinson theorem is satisfied. Moreover, as we saw before, according to (4.41),

∫_0^{∞} r |V| dr < ∞    and    ∫ t |A0(t)| dt < ∞

are completely equivalent. Another important point is that one can show the complete equivalence between the Gelfand–Levitan equation and the Marchenko equation.
4.6. Inverse Problem on the Line

The inverse problem on the line IR has become important because of its usefulness for solving explicitly many nonlinear partial differential equations of mathematical physics in one space and one time dimension. Both the direct and the inverse problems here have much in common with what we saw in the previous sections with the Jost solution and the Marchenko integral equation. Thus, we will skip the details and give here only the essential points.

One important difference with the radial Schrödinger equation studied previously is that the two boundary points are both situated at infinity and are distinct from each other. This makes the Schrödinger equation similar to a system of two coupled equations on a half-line, so that the S-matrix is not a number but a 2-by-2 matrix. In the time-dependent description of scattering, one can send particles at t = −∞ toward the target (scattering region) either from x = −∞ (particles coming in from the left) or from x = +∞ (particles coming in from the right), and then one has to study what happens when t → +∞. In the time-independent description, given the time-independent Schrödinger equation

(6.1)    ψ′′ + Eψ = V(x) ψ,    −∞ < x < ∞,

we have two physical solutions, ψ−(x) and ψ+(x). ψ−(x) corresponds to incoming particles coming from x = −∞ at t = −∞. At t = +∞, it has the asymptotic behavior (E = k² > 0)

(6.2)    ψ−(k, x) ≅ tR(k) e^{ikx},    x → +∞,
         ψ−(k, x) ≅ e^{ikx} + rL(k) e^{−ikx},    x → −∞.

tR(k) is the transmission coefficient to the right; rL(k) is the reflection coefficient to the left. In the second formula, e^{ikx} represents the continuous beam of incoming particles from the left, propagating to the right.
Likewise, we have the solution ψ+, which corresponds at t = −∞ to incoming particles from the right, propagating to the left: e^{−ikx}. Part of it is reflected back to the right at t = +∞, and another part goes through to the left and is found at t = +∞ at x = −∞. The asymptotic behavior of ψ+ is

(6.3)    ψ+(k, x) ≅ tL(k) e^{−ikx},    x → −∞,
         ψ+(k, x) ≅ e^{−ikx} + rR(k) e^{ikx},    x → +∞.

Since we have independent asymptotic regions x = ±∞, we are in fact dealing with a kind of two-channel problem. The S-matrix is

(6.4)    S(k) = ( s11(k)  s12(k) )   ( tR(k)  rL(k) )
                ( s21(k)  s22(k) ) = ( rR(k)  tL(k) ).

When the potential is real (elastic scattering, i.e., conservation of the number of particles, or of probabilities), the S-matrix is unitary. Also, because of time-reversal invariance, one can show that

(6.5)    s11(k) = tR(k) = s22(k) = tL(k):

the transmission coefficients to the left and to the right are equal. All of this will be shown below.
The solutions ψ+ and ψ− are the physical solutions and are similar to the physical solution of the radial case, (3.1) and (3.8). We can now define the Jost solutions of (6.1), f±(k, x), by precise asymptotic conditions at x = ±∞:

(6.6a)    lim_{x→+∞} e^{−ikx} f+(k, x) = 1,

(6.6b)    lim_{x→−∞} e^{ikx} f−(k, x) = 1,

and similar conditions for the derivatives, as in (3.49).

In the radial case, we assumed that the potential satisfies (3.4). In the present case, since the origin x = 0 does not play any important role, it turns out that the condition which guarantees the existence of all these solutions, and therefore a good scattering theory, is

(6.7)    ∫_{−∞}^{∞} (1 + |x|) |V(x)| dx < ∞.

In other words, V is L1, and its first absolute moment is also finite. As was previously shown for the radial case, we can now combine the differential equation (6.1) with the boundary conditions (6.6a) or (6.6b) to obtain the following integral equations:

(6.8a)    f+(k, x) = e^{ikx} − ∫_x^{∞} [sin k(x − t)/k] V(t) f+(k, t) dt,

(6.8b)    f−(k, x) = e^{−ikx} + ∫_{−∞}^x [sin k(x − t)/k] V(t) f−(k, t) dt.

From these equations we can then deduce all the properties of the solutions. We leave the details for the reader to check, by following exactly what was done previously for the Jost solution, and we summarize the results in the following theorem.

Theorem 4.6.1. Under condition (6.7) the Jost solutions f+ and f− exist and are unique. f+ is defined and continuous in Im k ≥ 0 and holomorphic in Im k > 0. It satisfies there the bound

(6.9)    |f+(k, x) − e^{ikx}| < C [e^{−x Im k}/(1 + |k|)] ∫_x^{∞} (1 + |u|) |V(u)| du.

The same is true for f−, again in Im k ≥ 0, where we have the bound

(6.10)    |f−(k, x) − e^{−ikx}| < C [e^{x Im k}/(1 + |k|)] ∫_{−∞}^x (1 + |u|) |V(u)| du.

From the above bounds, again using the Titchmarsh theorem (Theorem 4.3.6), we obtain, in complete agreement with the radial case, the integral representations

(6.11a)    f+(k, x) = e^{ikx} + ∫_x^{∞} A+(x, y) e^{iky} dy,

(6.11b)    f−(k, x) = e^{−ikx} + ∫_{−∞}^x A−(x, y) e^{−iky} dy,

valid in Im k ≥ 0, where, for each fixed value of x, A+ is square integrable in y on (x, ∞) and A− on (−∞, x). The first representation is more useful for x > 0 and the second for x < 0.

If we now use (6.11a) and (6.11b) in (6.8a) and (6.8b) and take the inverse Fourier transform, we obtain integral equations for A+ and A− quite similar to (5.4), and we can again deduce bounds similar to (5.5). Writing that f+ and f− are solutions of the Schrödinger equation now leads to

(6.12)    [∂²/∂x² − ∂²/∂y²] A±(x, y) = V(x) A±(x, y),

together with the boundary conditions

(6.13)    A+(x, x) = (1/2) ∫_x^{∞} V(t) dt,

(6.14)    A−(x, x) = (1/2) ∫_{−∞}^x V(t) dt,

and

(6.15)    A±(x, ±∞) = 0.

The above partial differential equation can be analyzed as in Chapter 3. Together with the conditions (6.13)–(6.15), we can show again that there exists a unique solution A±(x, y) which, when used in (6.11a) or (6.11b), gives us the appropriate solution of the Schrödinger equation.

We now must find the relations between the solutions f± and the physical solutions ψ±. First of all, we must notice that from f+ alone, as was done for the Jost solutions in the radial case, we can obtain a complete set of solutions of (6.1) by taking f+(±k, x). Indeed, from the asymptotic condition (6.6a), we get the Wronskian (with W[u, v] ≡ u v′ − u′ v)

(6.16)    W[f+(k, x), f+(−k, x)] = −2ik,

which is different from zero for k ≠ 0. Likewise, for f− we get

(6.17)    W[f−(k, x), f−(−k, x)] = 2ik.

To these relations we must also add

(6.18)    f+(−k, x) = [f+(k, x)]*,    f−(−k, x) = [f−(k, x)]*,    k real,

which are obvious from (6.8a) and (6.8b), since we assume the potential V to be real.
From the definition of the physical solutions and their asymptotic behaviors at x = ±∞, we have

(6.19)    ψ−(k, x) = s11(k) f+(k, x) = f−(−k, x) + s12(k) f−(k, x)

and

(6.20)    ψ+(k, x) = s22(k) f−(k, x) = f+(−k, x) + s21(k) f+(k, x).

Using the Wronskians (6.16) and (6.17), we obtain

(6.21)    2ik/s11(k) = W[f−(k, x), f+(k, x)],

(6.22)    2ik s12(k)/s11(k) = W[f+(k, x), f−(−k, x)],

(6.23)    2ik s21(k)/s11(k) = W[f+(−k, x), f−(k, x)].

From these relations, using the integral equations (6.8a) and (6.8b), it is now
easy to obtain integral representations for the above functions, from which
we can deduce the analytic and asymptotic properties of the transmission and
reflection coefficients as functions of k, as in Chapter 3. Since the four functions
f+(±k, x) and f−(±k, x) are not independent of each other (remember that
a linear second-order differential equation has only two independent solutions,
so that any two of these solutions can be written as linear combinations of
two others, and vice versa), there should exist some relationships between the
sij. They are very easily obtained from (6.21)–(6.23) and read

(6.24)    s11(k) = s22(k),

(6.25)    s11(k) s21*(k) + s12(k) s22*(k) = 0,

and

(6.26)    |s11(k)|² + |s12(k)|² = 1.

Also, the reality of the potential implies that

(6.27)    sij(−k) = [sij(k)]*.

As was stated at the beginning of this chapter, the above relations imply the
unitarity of the S-matrix.
We now must study the bound states, if any. This can be done along
similar lines as in the radial case. There the S-matrix was defined for each
partial wave (each ℓ) by (3.85),

(6.28)    S(k) = F(−k) / F(k),    k real.

Because of (3.60) we have

(6.29)    S(−k) = [S(k)]*,    k real.

When F(k) is analytic in the whole k-plane, which happens if V ≡ 0 for
r > R (R finite), or when V decreases very fast at infinity, for instance like
exp(−x²), the S-matrix becomes a meromorphic function in the whole plane
and its poles in Im k > 0 (all on the imaginary axis) correspond to the zeros of
F(k), i.e., to bound states. They are simple, as we saw, and there are no other
poles in Im k > 0.
The situation is similar in the present case. A careful study of the analytic and asymptotic properties of the various Jost solutions leads to the following
general theorem.

Theorem 4.6.2. The elements of the S-matrix, sij(k), are continuous
functions of k on the real axis and satisfy the symmetry relation

(6.30)    sij(−k) = [sij(k)]*,    k real.

When k → ±∞, we have

(6.31)    s12(k) = O(1/|k|),    s21(k) = O(1/|k|),    s11(k) = s22(k) = 1 + O(1/|k|).

The function s11(k) for k real is the limiting value of an analytic function in
the upper half-plane, with the exception of a finite number of simple poles at
the points kj = iγj, j = 1, 2, . . . , n. The residues at these poles are given by

(6.32)    Res s11(k)|_{k=iγj} = iβj = i [ ∫_{−∞}^∞ f+(iγj, x) f−(iγj, x) dx ]^{−1}.

The S-matrix is unitary, which means that, for k real, we have (6.25) and
(6.26).
Before going further, it is appropriate to make some remarks. First of all,
because the S-matrix is unitary because of the conservation of probabilities,
and satisfies condition (6.5) because of time-reversal invariance, it depends
on three real functions of k, 0 ≤ k < ∞. However, there is one more relation among these three functions. It is provided by the analyticity of s11(k).
Because of (6.26), we have, for k real,

(6.33)    |s11(k)| = [1 − |s12(k)|²]^{1/2},

s11(k) being analytic in Im k > 0 and continuous on Im k = 0, except for simple
poles at kj = iγj. Thus we have the dispersion relations (Hilbert relations) in
Im k > 0:

(6.34)    s11(k) = ∏_{j=1}^n [(k + iγj)/(k − iγj)] exp{ (1/2πi) ∫_{−∞}^∞ Log[1 − |s12(k′)|²]^{1/2} / (k′ − k) dk′ }

and, of course,

(6.35)    s11(k) = lim_{ε→0} s11(k + iε),    k real.

From (6.25), we then obtain

(6.36)    s21(k) = − s12(−k) s11(k) / s11(−k),

which is valid for k real and compatible with (6.31). In conclusion, all the
elements of the S-matrix can be determined from s12(k), i.e., from two real
functions and, of course, the poles kj = iγj.
This remark is very important for the inverse problem. As we said at the
beginning of this section, we have to deal with a problem which is similar to
a two-channel problem. We must find the potential on x ∈ [0, ∞) and on
x ∈ (−∞, 0]. To find these two potentials, we need something equivalent, i.e., two
other real functions, and they are indeed the real and the imaginary parts of
s12(k), together with the bound state energies Ej = −γj² and the corresponding
residues βj.
We should mention before continuing that a careful analysis of the elements
of the S-matrix at k = 0 turns out to be necessary in order to make the previous
analysis completely rigorous, both for the direct and the inverse problem. And
for this purpose, one has to assume that the potential decreases at infinity
faster than shown in (6.7). The extra condition to be imposed on the potential
is

    ∫_{−∞}^∞ x² |V(x)| dx < ∞.

With this extra condition, it is then found that one can obtain equivalence
relations similar to (4.41) and (4.42) for the radial case.
For solving the inverse problem, the starting points are the relations (6.21)–
(6.23) between the elements of the S-matrix and the Jost solutions f+ and f−.
We omit the details and refer the reader to the references. Proceeding as in
the radial case, we define the kernels

(6.37)    Ω+(t) = (1/2π) ∫_{−∞}^∞ s21(k) e^{2ikt} dk + Σ_{j=1}^n Mj^{(1)} e^{−2γj t},

(6.38)    Ω−(t) = (1/2π) ∫_{−∞}^∞ s12(k) e^{−2ikt} dk + Σ_{j=1}^n Mj^{(2)} e^{2γj t},

where

(6.39)    Mj^{(1)} = [ ∫_{−∞}^∞ f+²(iγj, x) dx ]^{−1}

and

(6.40)    Mj^{(2)} = [ ∫_{−∞}^∞ f−²(iγj, x) dx ]^{−1}.

Of course, at k = iγj, f+ and f− are proportional to each other, and since
one vanishes at x = +∞ and the other one at x = −∞, they both vanish
at x = ±∞, which means that we have indeed a bound state, i.e., a square-integrable wave-function on the whole line x ∈ (−∞, ∞). In fact, one can
prove that

(6.41)    Mj^{(1)} Mj^{(2)} = βj²,

where βj is the residue given by (6.32).


We then have the following two integral equations:

(6.42)    A±(x, y) + ∫_0^∞ A±(x, t) Ω±(x + y + t) dt + Ω±(x + y) = 0,

the first one, with the + signs, appropriate for y > 0, and the second, with the
− signs, for y < 0. One can again show that each of these integral equations has a unique
solution. These two solutions give us the potential by

(6.43a)    A+(x, 0) = ∫_x^∞ V(t) dt,

(6.43b)    A−(x, 0) = ∫_{−∞}^x V(t) dt.

In conclusion, from the knowledge of both the function s12(k) for real
values of k and the energies of the bound states, together with the normalizing
constants Mj^{(1)} and Mj^{(2)}, we obtain the potential in a unique way.
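To make (6.42) and (6.43a) concrete, here is a small numerical sketch of our own (not in the original text) for the simplest case: a single bound state and zero reflection, so that Ω+(t) reduces to M e^{−2γt}. The exact answer for these data is the one-soliton potential −2γ² sech² γ(x − x0), with x0 fixed by M; the choice γ = 1, M = 4γ below centers it at x0 = 0, and the grids are ad hoc.

```python
import numpy as np

def marchenko_reflectionless(gamma, M, xs, T=10.0, n=400):
    """Solve A(x,y) + Omega(x+y) + \int_0^inf A(x,t) Omega(x+y+t) dt = 0
    with Omega(t) = M exp(-2 gamma t), then return A(x, 0), cf. (6.43a)."""
    t, h = np.linspace(0.0, T, n, retstep=True)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2                        # trapezoid weights
    Omega = lambda s: M * np.exp(-2.0 * gamma * s)
    A0 = []
    for x in xs:
        # discretized equation: (I + K) a = -Omega(x + t),
        # K[i, j] = Omega(x + t_i + t_j) * w_j
        K = Omega(x + t[:, None] + t[None, :]) * w[None, :]
        a = np.linalg.solve(np.eye(n) + K, -Omega(x + t))
        A0.append(a[0])                         # A(x, y = 0)
    return np.array(A0)

gamma, M = 1.0, 4.0                             # M = 4*gamma centers V at x = 0
xs = np.linspace(-3.0, 3.0, 121)
A0 = marchenko_reflectionless(gamma, M, xs)
V = -np.gradient(A0, xs)                        # V(x) = -dA(x,0)/dx, from (6.43a)
err = np.max(np.abs(V + 2 * gamma ** 2 / np.cosh(gamma * xs) ** 2))
print(err)   # small: limited by the finite-difference derivative
```

Because the kernel is degenerate (rank one) here, the linear system is very well conditioned; for data with a nonzero reflection coefficient the same discretization works, with the Fourier integral in (6.37) evaluated numerically.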
4.7. Nonlinear Evolution Equations
As we said in the introduction, the most important applications of the techniques of inverse problems in quantum scattering theory (as opposed to scattering of acoustic, elastic, and electromagnetic waves) turn out to be in the
field of nonlinear evolution equations governing many interesting (classical)
physical phenomena, a partial list of which was also given. Here, as an example, we briefly treat the Korteweg-de Vries equation (KdV) in one time and
one space dimension, which governs the motion of water waves in a shallow
channel. This equation turns out to be useful also in plasma physics. It was
discovered in the last century (exactly 100 years ago) and is the archetype of
evolution equations solvable by our inverse problem techniques. It reads

(7.1)    ut − 6uux + uxxx = 0,

where the lower index means the derivative and u(x, t) is the amplitude of the
wave.

The KdV equation has many kinds of solutions (oscillatory waves, solitary
waves, etc.). What is of interest to us is the solitary wave: a single wave
traveling with some speed and keeping its shape. It is easily seen that (7.1)
admits such a solution, given by

(7.2)    u(x, t) = − (C/2) / cosh²[ (√C/2)(x − Ct − x0) ].

Here, x0 is a constant of integration and is the center of the wave at time
t = 0. Indeed, it is obvious that the amplitude is maximum at t = 0 for x = x0.
As time goes on (t > 0), the center of the wave moves to the right at the speed
C, and at time t = T is found at the point X = CT + x0.
Several remarks are in order here. Equation (7.1) is nonlinear, thus the
amplitude of a wave and the speed at which it moves are not independent.
This is seen in (7.2), where the amplitude (maximum) is C/2 and the speed is
C. For other nonlinear equations, the relation between the two is, of course,
different. Also, the shape of the wave cannot be arbitrary and depends on the
speed. This also is clear in (7.2). The half-width of the wave (i.e., the distance
from the center where the amplitude is half of the maximum) is given by

(7.3)    Δ = (2/√C) cosh⁻¹(√2),

i.e., inversely proportional to the square root of C. This is also another general
characteristic of nonlinear equations admitting solitary waves. Henceforth, x0
plays no important role, and we take it to be x0 = 0. What is important to
remember is that the amplitude, the speed, and the shape (half-width) of a
solitary wave are not independent of each other. There is one free parameter,
which can be taken to be the speed C. This free parameter can be chosen at
will: there are solitary waves corresponding to any C > 0.

So far, we have been looking at (7.2) for t ≥ 0 and x ≥ x0 (= 0), i.e., we
assume that somehow a solitary wave, governed by equation (7.1), is generated
at t = 0 at the point x = 0 and then moves on with the speed C to the right as
time goes on. We can of course look at it for all t and all x and put the creation
of the solitary wave in the remote past (t = −∞), at the point x = −∞, and
then let it evolve, moving to the right as time goes on. With this in mind, let
us look at the solution of (7.1) given by

(7.4)    u(x, t) = −12 [ 3 + 4 cosh(2x − 8t) + cosh(4x − 64t) ] / [ 3 cosh(x − 28t) + cosh(3x − 36t) ]².

At t = 0, this solution is u(x, 0) = −6/cosh² x, has a bell-shaped form centered
around x = 0, and looks again like a solitary wave. However, let us look at
it as t → ±∞. It is easily seen that the wave then separates into two
solitons, each of the form (7.2), with speeds C1 = 4 and C2 = 16. The solitary
wave which has the higher speed, C2, and therefore the larger amplitude, is
much farther to the left than the one with the speed C1. If we imagine that, by
some mechanism in the remote past, t = −T (T very large), these waves have
been created at a distance Δx = (C2 − C1)T from each other, the higher wave
being to the left of the smaller, then as time goes on we find that at t = 0 they
overlap completely. Then at t = +T, they separate again completely and the
higher one, which is faster, is found much farther to the right. However, the
distance between the two solitary waves is now Δx = (C2 − C1)T + L, where
L = 2 Log 3. This shift, which would not exist for linear equations, is due to
the interaction of the two solitary waves.
What comes out of this simple example, and is true in general, is that if
the solution u is made of several solitary waves of the form (7.2) in the remote
past, with different speeds and located far away at x = −∞, with locations and
separations in accordance with their speeds, then when t → +∞, they separate
again, keeping their shapes, but are shifted with respect to each other by
Δxij = (Cj − Ci)T + Lij, where the extra shift Lij, due to the interaction, is
a function of the speeds of the solitary waves involved.
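The asymptotic splitting can be seen directly by evaluating (7.4) at a large time. The short sketch below (our own, with t = 8 chosen arbitrarily) locates the two troughs and checks that their depths and speeds are those of single solitary waves (7.2) with C1 = 4 and C2 = 16, i.e., depths −C/2 = −2 and −8:

```python
import numpy as np

def u2(x, t):
    # the two-soliton solution (7.4)
    num = 3 + 4 * np.cosh(2 * x - 8 * t) + np.cosh(4 * x - 64 * t)
    den = 3 * np.cosh(x - 28 * t) + np.cosh(3 * x - 36 * t)
    return -12 * num / den ** 2

t = 8.0
x = np.linspace(0.0, 180.0, 180001)
w = u2(x, t)
i2 = np.argmin(w)                                   # deeper trough: fast soliton
i1 = np.argmin(np.where(np.abs(x - x[i2]) > 5.0, w, 0.0))
print(w[i2], x[i2] / t)                             # depth ≈ -8, speed ≈ 16
print(w[i1], x[i1] / t)                             # depth ≈ -2, speed ≈ 4
```

At t = 8 the two pulses are separated by roughly (C2 − C1)t = 96, so the overlap corrections are exponentially small and the fitted depths agree with −C/2 to many digits.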
Now the question is, how can we solve the equation (7.1) in general? The
very ingenious way found by Kruskal et al. is to look at (7.1) as an initial value
problem, i.e., we are given, at the time t = 0,

(7.5)    u0(x) = u(x, 0),

which we assume to be a very smooth function having the necessary number
of continuous derivatives with respect to x for all x ∈ (−∞, ∞) and going fast
to zero as x → ±∞, together with all these derivatives. Such a nice function
can now be considered as the potential in a fictitious Schrödinger equation on
the full line (6.1):

(7.6)    ψ″ + Eψ = u0(x) ψ.

The potential u0(x) is a nice potential, thus we can solve (7.6) and find the
scattering data {s12(k); γj, Mj^{(1)}, Mj^{(2)}, j = 1, 2, . . . , n}. Of course, solving the
Marchenko equations of the inverse problem with those scattering data, we get
back the potential u0(x).
On the other hand, using u0(x) as the initial value at t = 0 for (7.1), we
can find the solution u(x, t), at least for some finite interval of time around t = 0.
This solution u(x, t) is a (complicated) functional of u0(x). Suppose now we use
u(x, t) as the potential in (7.6), t being considered merely as a parameter. We
then find the scattering data {s12(k, t); γj(t), Mj^{(1)}(t), Mj^{(2)}(t), j = 1, . . . , n}.
One may then question the relation between these scattering data and those
corresponding to u0(x) = u(x, t = 0). It turns out, and this was the great
discovery of Kruskal et al., that if u(x, t) evolves according to (7.1), with
u(x, t = 0) = u0(x), then

(7.7a)    γj(t) = γj(0) = γj,

(7.7b)    Mj^{(1)}(t) = e^{8γj³ t} Mj^{(1)}(0),

(7.7c)    s22(k, t) = s22(k, 0),

(7.7d)    s21(k, t) = s21(k, 0) e^{8ik³ t}.

These results are extremely simple in view of the fact that the functional
dependence of u(x, t) on u0(x) is very complicated and cannot be expressed in
closed form.
The strategy for finding u(x, t), the solution of (7.1), from the initial data
u0(x) is now clear. We begin by solving the Schrödinger equation (7.6) with
the potential u0(x). Then we use the corresponding scattering data in (7.7a)–
(7.7d) to find the scattering data for t ≠ 0. We now use these in the inverse
problem of §4.6 with the Marchenko equations. These equations then lead to
the potential u(x, t), the solution of (7.1). As we see here, to go from u0(x)
to u(x, t) we have two steps: first solve a second-order ordinary differential
equation and then a (Fredholm) integral equation. However, both are linear
equations, and this is the real beauty of the method.
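The whole scheme can be carried out explicitly for reflectionless initial data. The sketch below is our own illustration, not from the text: for u0(x) = −6/cosh² x, the Schrödinger problem (7.6) has bound states γ1 = 1, γ2 = 2 with norming constants M1(0) = 6 and M2(0) = 12 (values quoted from the standard treatment of this example). Since s21 = 0, the Marchenko equation has a degenerate kernel and its solution is the classical determinant formula u = −2 (d²/dx²) log det(I + G); evolving the norming constants by (7.7b) then reproduces the closed form (7.4).

```python
import numpy as np

kap = np.array([1.0, 2.0])     # bound states of u0 = -6/cosh^2 x: E_j = -kap_j^2
M0 = np.array([6.0, 12.0])     # norming constants M_j^(1) at t = 0 (standard values)

def u(x, t):
    # reflectionless reconstruction: u = -2 (d^2/dx^2) log det(I + G(x, t)),
    # G_mn = sqrt(M_m M_n) e^{-(kap_m + kap_n) x} / (kap_m + kap_n)
    def logdet(xx):
        M = M0 * np.exp(8.0 * kap ** 3 * t)     # time evolution (7.7b)
        ksum = kap[:, None] + kap[None, :]
        G = np.sqrt(np.outer(M, M)) * np.exp(-ksum * xx) / ksum
        return np.log(np.linalg.det(np.eye(2) + G))
    h = 1e-4                                     # second derivative by differences
    return -2.0 * (logdet(x + h) - 2.0 * logdet(x) + logdet(x - h)) / h ** 2

def u_exact(x, t):                               # the closed form (7.4)
    return -12 * (3 + 4 * np.cosh(2 * x - 8 * t) + np.cosh(4 * x - 64 * t)) \
           / (3 * np.cosh(x - 28 * t) + np.cosh(3 * x - 36 * t)) ** 2

for xx in (-1.0, 0.0, 0.5, 2.0):
    print(u(xx, 0.1), u_exact(xx, 0.1))          # the two agree
```

Note that the only time dependence enters through the scattering data, exactly as described above: the differential equation and the integral (here, determinant) step are both linear.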
We did not give the proofs of the relations (7.7a)–(7.7d), but they are simple
and need only elementary algebra. Once these relations are established, one
can then show that there is a one-to-one correspondence between the bound
states of u0(x), {−γj², j = 1, . . . , n}, and the solitary waves of the equation
(7.1) with the initial condition u0(x). To each bound state with energy −γj²
there corresponds a solitary wave with the speed Cj = 4γj². At time t = 0, we
may have solitons interacting and also some transmission and reflection due to
s22(k) and s21(k). However, as time goes on, because of the very oscillatory
character of exp(8ik³t) in (7.7d), the effect of the reflection becomes more and
more negligible and we are left with superpositions of pure soliton solutions
moving to the right according to what we saw before, with some amount of
transmission to the left.
The inverse method technique shown above to solve the KdV equation
applies to the solution of many other nonlinear evolution equations. It allows us to reduce the solution of many complicated nonlinear partial differential
equations to the solution of a linear differential equation (not necessarily of
Schrödinger type, but also coupled first-order equations, etc.) and
then the solution of a linear (Fredholm-type) integral equation, a great simplification indeed. We refer the reader to the vast literature on the subject. For
an elementary and comprehensive account, we recommend the book of Drazin
and Johnson [11].


4.8. Closing Remarks


For a general study of quantum scattering theory, the Lippmann-Schwinger equation, partial waves, the S-matrix, the Jost function, bound states, etc., see [3]. The
mathematical study of quantum scattering can be found in [5] and in volume
III of [17]. The physics of quantum scattering theory is also well explained in [1].
The derivation of the Gelfand-Levitan and Marchenko equations presented
here makes use mostly of integration in the complex plane and the residue theorem. Although we have quoted the Paley-Wiener and Titchmarsh theorems,
which are commonly used in the literature, we do not need them for our derivations. We quote them for the sake of completeness.
For a good history of the solution of the inverse problem and of the Gelfand-Levitan and Marchenko methods, we refer the reader to the review article of
L. D. Faddeev, J. Math. Phys., 4 (1963), pp. 72-104.
The complete and rigorous solution of the inverse problem in one dimension (the full line) was first given by L. D. Faddeev, Trudy Mat. Inst. Steklov,
73 (1964), pp. 314-336, translated in Amer. Math. Soc. Transl., 65 (1967), pp. 139-166. For another approach, with more details on characterization and reconstruction problems, see the article of P. Deift and E. Trubowitz, Comm. Pure
Appl. Math., 32 (1979), pp. 121-251. More references on earlier works of Kay
and Moses, Moses, Krein, etc., are found in [7]. In the present chapter, the
presentation of the inverse problem on the full line follows that of Faddeev.
For differential equations, differential operators, and eigenfunction expansions, see [13], [14], [15], [25], and [26]. Integral equations of the Volterra and
Fredholm types are well studied in [18], [21]-[23]. For a more abstract treatment,
see [20].
For entire functions and their properties, as well as the Paley-Wiener theorem and its variants, see [16]. For the theory of Fourier integrals, including
the theory of Hilbert transforms, convolutions, as well as the Tauberian theorems
we used in §4.6, we refer the reader to [13]. See also [24] for a collection of
many useful theorems.
The lectures presented here concern only what are called inverse problems
in energy (fixed angular momentum). For all other inverse problems (fixed
energy, various classes of potentials including nonlocal potentials, etc.) and
other approaches than those of Gelfand-Levitan and Marchenko, as well as
applications of the Gelfand-Levitan and Marchenko methods to various classes
of potentials, see [7]. Finally, for the state of the art on the formidable full
inverse problem in three dimensions, see [9].
A. Appendix
Here we must show that the expression inside the bracket after formula
(4.3) vanishes. Suppose iγ is a zero of the Jost function, corresponding therefore to a bound state of energy E = −γ². By definition, we have

(A.1)    F(iγ) ≡ f(iγ, r = 0) = 0,

where f is the Jost solution, which satisfies the Schrödinger equation

(A.2)    f″ + k² f = V f,    k = iγ.

At infinity, f decreases like exp(−γr). On the other hand, k = iγ is inside
the domain of analyticity of f. We can therefore differentiate f and f′ with
respect to k. Denoting this differentiation with a dot and differentiating (A.2),
we find

(A.3)    f [(A.2)·] − ḟ [(A.2)] = f ḟ″ − ḟ f″ + 2k f² = 0.

Integrating, we get, after integration by parts and using (A.1),

(A.4)    2iγ ∫_0^∞ f² dr + Ḟ(iγ) f′(iγ, 0) = 0.

Using (4.3), we finally get, for the normalization constant C of the bound state
wave function,

(A.5)    1/C = ∫_0^∞ ψ²(iγ, r) dr = (1/[f′(iγ, 0)]²) ∫_0^∞ f²(iγ, r) dr = − Ḟ(iγ) / (2iγ f′(iγ, 0)),

which is the desired result. This relation applies to each bound state individually.
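The identity (A.4) lends itself to a direct numerical check. The sketch below is our own addition; the exponential well −8e^{−r} is chosen only as an example of a potential with a bound state, and the bracket for the bisection is an assumption about where a zero of F lies. On the imaginary axis F is real, and Ḟ = dF/dk = −i dF/dγ, so (A.4) becomes ∫_0^∞ f² dr = (dF/dγ) f′(iγ, 0)/(2γ), which is what the code verifies.

```python
import numpy as np

def jost(gamma, V, R=30.0, n=3000):
    """Integrate f'' = (V + gamma^2) f from r = R down to 0, starting from the
    asymptotics f ~ e^{-gamma r} (k = i gamma); accumulate \int_0^R f^2 dr
    by the trapezoid rule. Returns (F = f(0), f'(0), \int f^2 dr)."""
    h = R / n
    r = R
    y = np.array([np.exp(-gamma * R), -gamma * np.exp(-gamma * R)])
    def rhs(r, y):
        return np.array([y[1], (V(r) + gamma ** 2) * y[0]])
    I = 0.0
    for _ in range(n):
        I += 0.5 * h * y[0] ** 2
        k1 = rhs(r, y)
        k2 = rhs(r - h / 2, y - h / 2 * k1)
        k3 = rhs(r - h / 2, y - h / 2 * k2)
        k4 = rhs(r - h, y - h * k3)
        y = y - h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r -= h
        I += 0.5 * h * y[0] ** 2
    return y[0], y[1], I

V = lambda r: -8.0 * np.exp(-r)     # sample potential with a bound state
lo, hi = 0.5, 2.5                   # bisection bracket for a zero of F(gamma)
Flo = jost(lo, V)[0]
for _ in range(40):
    mid = 0.5 * (lo + hi)
    Fmid = jost(mid, V)[0]
    if Flo * Fmid < 0.0:
        hi = mid
    else:
        lo, Flo = mid, Fmid
g = 0.5 * (lo + hi)                 # bound state: E = -g^2

F0, fp0, I2 = jost(g, V)
eps = 1e-5
dFdg = (jost(g + eps, V)[0] - jost(g - eps, V)[0]) / (2 * eps)
print(I2, dFdg * fp0 / (2 * g))     # the two sides of (A.4) agree
```

Integrating downward from large r follows the growing direction of the solution, so the decaying Jost solution is computed stably; this is the standard way to evaluate F(iγ) in practice.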
Acknowledgments. These lectures, given at the University of Oulu in
June 1994, were written in part when the author was a guest of the Research
Institute for Theoretical Physics of the University of Helsinki. He thanks both
universities and Professors Lassi Päivärinta and Masud Chaichian for their warm
hospitality and financial support.
References
1. Quantum Mechanics
[1] A. Galindo and P. Pascual, Quantum Mechanics, 2 volumes, Springer-Verlag,
Berlin, 1990. This is an excellent treatise on quantum mechanics, at the graduate
level, for theoretical and mathematical physicists. The physics of scattering
theory is well explained here, as well as in Newton's book listed below.
[2] W. Thirring, Quantum Mechanics of Atoms and Molecules, Springer-Verlag,
Berlin, 1981. The mathematical structure of quantum mechanics is explained
well in this book, and it is well suited to students of mathematics.
2. Quantum Scattering Theory
[3] R. G. Newton, Scattering Theory of Waves and Particles, Springer-Verlag,
Berlin, 1982. This is an excellent treatise for theoretical physicists.
[4] M. Schechter, Operator Methods in Quantum Mechanics, North-Holland, Amsterdam, 1981. This book is on scattering theory in one space dimension, but it
contains much of the mathematics needed in the general case.
[5] B. Simon, Quantum Mechanics for Hamiltonians Defined as Quadratic Forms,
Princeton University Press, Princeton, NJ, 1971.
[6] M. Reed and B. Simon, vol. III of [17].
3. Inverse Problems
[7] K. Chadan and P. C. Sabatier, Inverse Problems in Quantum Scattering Theory,
Springer-Verlag, Berlin, 1989.
[8] V. A. Marchenko, Sturm-Liouville Operators and Applications, Birkhäuser,
Basel, 1986.
[9] R. G. Newton, Inverse Schrödinger Scattering in Three Dimensions, Springer-Verlag, New York, 1989. See also, by the same author, Factorization of
the S-matrix, J. Math. Phys., 31 (1990), pp. 2414-2424, and his review paper
in Inverse Problems in Mathematical Physics, L. Päivärinta and E. Somersalo,
eds., Springer-Verlag, Berlin, 1993.
4. Nonlinear Equations and Solitons
[10] M. J. Ablowitz and H. Segur, Solitons and the Inverse Scattering Transform,
SIAM, Philadelphia, 1981.
[11] P. G. Drazin and R. S. Johnson, Solitons: An Introduction, Cambridge University Press, Cambridge, 1993.
[12] A. C. Newell, Solitons in Mathematics and Physics, SIAM, Philadelphia, 1985.
5. Mathematics
[13] E. C. Titchmarsh, Theory of Fourier Integrals, 2nd ed., Oxford University Press,
Oxford, 1967.
[14] ———, Eigenfunction Expansions, I, 2nd ed., Oxford University Press, Oxford,
1962.
[15] G. Hellwig, Differential Operators of Mathematical Physics, Addison-Wesley,
Reading, MA, 1967.
[16] R. P. Boas, Entire Functions, Academic Press, New York, 1954.
[17] M. Reed and B. Simon, Methods of Modern Mathematical Physics, Vols. I-IV,
Academic Press, New York, 1972-1979.
[18] F. Smithies, Integral Equations, Cambridge University Press, London, 1965.
[19] D. Colton and R. Kress, Integral Equation Methods in Scattering Theory, Wiley,
New York, 1983.
[20] R. Kress, Linear Integral Equations, Springer-Verlag, Berlin, 1989.
[21] F. G. Tricomi, Integral Equations, Interscience Publishers, New York, 1957.
[22] W. Pogorzelski, Integral Equations, Pergamon Press, London, 1966.
[23] S. G. Mikhlin, Integral Equations, Pergamon Press, London, 1966.
[24] D. C. Champeney, A Handbook of Fourier Theorems, Cambridge University
Press, Cambridge, 1990.
[25] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations,
McGraw-Hill, New York, 1955.
[26] E. Hille, Lectures on Ordinary Differential Equations, Addison-Wesley, Reading,
MA, 1969.