4. Inverse Problems in Potential Scattering
Khosrow Chadan
4.1. Introduction
Since the pioneering work of Gardner, Greene, Kruskal, and Miura in 1968, Zakharov and Shabat in 1972, and others, who showed that inverse problem techniques of potential scattering can be used to solve several nonlinear partial differential equations of mathematical physics (Korteweg-de Vries equation for the motion of shallow water waves, nonlinear Schrödinger equation for solitons in optical fibers, sine-Gordon equation, Boussinesq equation, etc.) and the universality of the concept of solitons, a great amount of work has been devoted to these problems. One is now able to solve a great number of these equations, with applications to various branches of physics where nonlinear equations are derived by modeling important physical problems.
In other domains of physics also, such as electromagnetism (radar detection, medical imaging by electric fields) or acoustics (applications in geophysics, tomography), other techniques have been developed during recent years, and are now used with great success.
The purpose of this chapter is to give an account of the inverse problem techniques in potential scattering. There are now many good books on scattering theory, inverse problems, and solvable nonlinear partial differential equations. For those who wish to learn more about these problems and related subjects, a list of references is given at the end.
4.2. Physical Background and Formulation of the Inverse Scattering Problem
We consider here the scattering of a nonrelativistic particle by a fixed center of force, or, what amounts to the same, the scattering of two such particles by each other, after the motion of the center of mass has been removed. In appropriate units of length, time, and mass, and when the interaction between
Downloaded 11/22/13 to 190.144.171.70. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
the two particles is local and represented by a potential V(x), we have to deal with the time-dependent Schrödinger equation
(2.1)  i ∂ψ(t)/∂t = H ψ(t),
where the Hamiltonian is given by the sum of kinetic energy and potential energy:
(2.2)  H = −Δ + V.
We assume that the potential satisfies
(2.3)  ∬ [|V(x)| |V(y)| / |x − y|²] d³x d³y < ∞.
More precisely, V ∈ L¹ ∩ L² implies V ∈ L^{3/2}, and then the Sobolev inequality leads to (2.3). As we shall see below, this condition, named after Rollnik, turns out to be very important in scattering theory and dispersion relations. Henceforth, we shall always assume (2.3).
A state ψ is called a scattering state if
(2.4)  lim_{|t|→∞} ∫_B |ψ(x, t)|² d³x = 0   for every bounded region B,
where
(2.5)  ψ(x, t) = ∫ f(k) e^{i(k·x − k²t)} d³k.
What one would like to have is that (2.4) holds for all f(k) ∈ L². This would mean that, no matter how the wave-packet ψ(x, t) is made, particles go far apart from each other when time becomes large, so that the probability of finding them together in an arbitrary finite region of space goes to zero. If this is the case, then all of the continuous spectrum is made of scattering states, and this is called asymptotic completeness.
Asymptotic completeness can be shown to hold under various conditions on the potential. For instance, the condition V ∈ L¹ ∩ L² is sufficient to guarantee asymptotic completeness. Another condition is again the Rollnik condition (2.3).
Once asymptotic completeness is established, one can go from the time-dependent description (2.1) to the time-independent one,
(2.6)  −Δψ(k, x) + V(x) ψ(k, x) = k² ψ(k, x).
Combining the Schrödinger equation (2.6) with the asymptotic condition that, at t = −∞, particles move toward each other with relative momentum k (more precisely, that we have only an incoming plane wave without any spherical incoming wave), one is led to the celebrated Lippmann-Schwinger equation (k = |k|):
(2.7a)  ψ(k, x) = e^{ik·x} − (1/4π) ∫ [e^{ik|x−y|} / |x − y|] V(y) ψ(k, y) d³y.
Equivalently, the scattered wave ψ_s = ψ − e^{ik·x} satisfies the Sommerfeld radiation condition
(2.7b)  lim_{r→∞} r [∂ψ_s/∂r − ik ψ_s] = 0,   r = |x|.
For a Rollnik potential, it is convenient to symmetrize the kernel of (2.7a), and one is led to
(2.8)  Φ(k, x) = Φ₀(k, x) − (1/4π) ∫ G(k, x, y) sgn(V(y)) Φ(k, y) d³y,
where
(2.9)  Φ(k, x) = |V(x)|^{1/2} ψ(k, x),   Φ₀(k, x) = |V(x)|^{1/2} e^{ik·x},
and
(2.10)  G(k, x, y) = |V(x)|^{1/2} [e^{ik|x−y|} / |x − y|] |V(y)|^{1/2}.
Letting |x| → ∞ in (2.7a), one finds the asymptotic form
(2.11)  ψ(k, x) = e^{ik·x} + A(k, k′) e^{ik|x|}/|x| + o(1/|x|),
where k′ = k x/|x| is the momentum in the direction of observation, and the scattering amplitude is given by
(2.12)  A(k, k′) = −(1/4π) ∫ e^{−ik′·x} V(x) ψ(k, x) d³x.
In short, one can take the limit |x| → ∞ in (2.7a) under the integral sign. Note that k² = k′² = E.
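Formula (2.12) can be made concrete in the Born approximation, in which ψ(k, x) under the integral is replaced by the plane wave e^{ik·x}; this approximation is not part of the text above and is used here only as an illustration. For a spherically symmetric potential, the angular integration reduces (2.12) to A_B(q) = −(1/q) ∫₀^∞ r V(r) sin(qr) dr with q = |k − k′|, and for a Yukawa potential V(r) = g e^{−μr}/r (an assumed model, with illustrative values of g and μ) the closed form is −g/(μ² + q²). A quick numerical check of this reduction:

```python
import numpy as np
from scipy.integrate import quad

g, mu = 2.0, 1.5          # Yukawa strength and inverse range (illustrative values)

def born_amplitude(q):
    # A_B(q) = -(1/q) * Int_0^inf r V(r) sin(qr) dr, with r V(r) = g exp(-mu r)
    integral, _ = quad(lambda r: g * np.exp(-mu * r) * np.sin(q * r), 0.0, np.inf)
    return -integral / q

for q in (0.3, 0.7, 2.0):
    print(q, born_amplitude(q), -g / (mu**2 + q**2))
```

The two computed columns agree to quadrature accuracy, confirming the radial reduction of (2.12) for this model.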
Once the scattering amplitude is found, one shows that the cross-section dσ, i.e., the probability of finding the particles moving outward far from the scattering center, with momentum k′, in the solid angle dΩ around the direction k′, per unit of time and per unit flux of the incoming particles with momentum k, is given by
(2.13)  dσ = |A(k, k′)|² dΩ.
To compute the amplitude, one expands ψ in partial waves:
(2.14)  ψ(k, x) = (1/kr) Σ_{ℓ=0}^{∞} (2ℓ+1) i^ℓ P_ℓ(cos θ) ψ_ℓ(k, r),   |x| = r,
where θ is the angle between x and k, cos θ = (k·x)/kr, and ψ_ℓ is the solution of the radial Schrödinger equation
(2.15)  [−d²/dr² + ℓ(ℓ+1)/r² + V(r)] ψ_ℓ(k, r) = E ψ_ℓ(k, r),
with the boundary condition
(2.16)  ψ_ℓ(k, 0) = 0.
For r → ∞, ψ_ℓ behaves as
(2.17)  ψ_ℓ(k, r) = e^{iδ_ℓ} sin(kr − ℓπ/2 + δ_ℓ) + o(1).
The quantity δ_ℓ, called the phase shift, is a real function of k for each ℓ. In terms of δ_ℓ, the scattering amplitude can be written
(2.18)  A(k, k′) = (1/k) Σ_{ℓ=0}^{∞} (2ℓ+1) e^{iδ_ℓ} sin δ_ℓ P_ℓ(cos θ),
where cos θ = (k·k′)/k², since k² = k′² = E, E being the energy of the incoming or outgoing particles. The partial-wave Hamiltonian for each ℓ is
(2.19)  H_ℓ = −d²/dr² + ℓ(ℓ+1)/r² + V(r).
In all that follows we shall assume that the potential satisfies
∫₀^∞ r |V(r)| dr < ∞,
and we shall use it extensively throughout this chapter. It can be easily shown that this condition implies the Rollnik condition (2.3).
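The partial-wave formulas (2.13) and (2.18) are easy to check numerically. With made-up phase shifts (not derived from any particular potential), integrating |A|² over angles must reproduce the partial-wave sum σ = (4π/k²) Σ (2ℓ+1) sin²δ_ℓ, and the forward amplitude must satisfy the optical theorem Im A(θ=0) = kσ/4π, a classical consequence of (2.18) and the orthogonality of the Legendre polynomials:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

k = 1.3
deltas = [0.8, 0.35, 0.1, 0.02]          # made-up phase shifts for l = 0, 1, 2, 3

def amplitude(costh):
    # Eq. (2.18): A = (1/k) sum_l (2l+1) e^{i delta_l} sin(delta_l) P_l(cos theta)
    return sum((2*l + 1) * np.exp(1j*d) * np.sin(d) * eval_legendre(l, costh)
               for l, d in enumerate(deltas)) / k

# Total cross-section from the partial-wave sum ...
sigma_sum = 4*np.pi/k**2 * sum((2*l + 1) * np.sin(d)**2 for l, d in enumerate(deltas))
# ... and from integrating d sigma = |A|^2 d Omega, eq. (2.13)
sigma_int, _ = quad(lambda c: 2*np.pi * abs(amplitude(c))**2, -1.0, 1.0)

print(sigma_sum, sigma_int, amplitude(1.0).imag, k*sigma_sum/(4*np.pi))
```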
Here, two natural inverse problems occur. The first one is at fixed energy. We perform scattering experiments at a given energy E and we get the scattering amplitude (2.18), where k = √E. Projecting A on the Legendre polynomials, we obtain, in principle, all the phase shifts δ_ℓ. One may then ask the following question: knowing all these numbers δ_ℓ, would it be possible to obtain the potential? This problem is of some interest in nuclear physics and has been completely solved. However, the solution is not in general unique and there are many ambiguities.
The second inverse problem is at fixed ℓ. We are dealing with (2.15) for a given ℓ. Knowing V(r), we can solve it and get δ_ℓ(k) for all values of k, as well as the energies of the possible bound states (the negative point spectrum of H_ℓ). Conversely, knowing δ_ℓ(k) for all k ∈ [0, ∞) and the negative spectrum E₁, …, E_n for this same ℓ, we would like to find the potential. This is the problem that we are going to study in this chapter. We shall see how to compute the potential from the scattering data {δ_ℓ(k) | k ∈ [0, ∞)} ∪ {E₁, E₂, …, E_n}. This inverse problem on the half line r ∈ [0, ∞), when extended to the full line x ∈ (−∞, ∞), has been instrumental in the solution of many partial differential equations of mathematical physics.
4.3. Scattering Theory for Partial Waves
We have to study the radial Schrödinger equation for a given ℓ = 0, 1, 2, …:
(3.1)  −(d²/dr²) ψ_ℓ(k, r) + [V(r) + ℓ(ℓ+1)/r²] ψ_ℓ(k, r) = E ψ_ℓ(k, r),   r ∈ [0, ∞),   E = k²,
(3.2)  ψ_ℓ(k, 0) = 0,
and find the asymptotic properties of the solution for large r, (2.17), as well as the properties of the phase shift δ_ℓ(k) as a function of the momentum k.
We shall begin first with some general remarks on the differential operator (the Hamiltonian)
(3.3)  H_ℓ = −d²/dr² + ℓ(ℓ+1)/r² + V(r),
together with the Dirichlet boundary condition given in (3.2). It can be shown that if the potential, assumed to be real, satisfies the condition
(3.4)  ∫₀^∞ r |V(r)| dr < ∞,
then H_ℓ is self-adjoint in L²(0, ∞), and its continuous spectrum covers the positive E-axis.
Then, depending on the sign of the potential V, it may or may not have a point spectrum on the negative E-axis. Indeed, assume first that the potential is positive everywhere (a repulsive potential in the terminology of physicists). Now, we know that the operator −d²/dr² acting in L²(0, ∞) with Dirichlet boundary condition at r = 0 is a positive operator. Therefore, if the potential is positive everywhere, the operator H_ℓ given by (3.3) with Dirichlet boundary condition at r = 0 is also positive for all ℓ ≥ 0. It follows that H_ℓ cannot have negative eigenvalues. In order to have negative eigenvalues, the total potential
(3.5)  V_tot ≡ V(r) + ℓ(ℓ+1)/r²
must be sufficiently negative somewhere. In fact, the number n_ℓ of negative eigenvalues of H_ℓ satisfies the Bargmann bound
(3.6)  n_ℓ ≤ [1/(2ℓ+1)] ∫₀^∞ r |V⁻(r)| dr,   V⁻(r) = V θ[−V],
where θ is the Heaviside function (V⁻ is the negative part of V) [1], [3].
We may remark in passing that, from (3.6), it is clear that, whatever the potential may be, the point spectrum is empty if ℓ is large enough, i.e., when the right-hand side of (3.6) becomes less than one. This is intuitively clear from (3.5).
A better bound when the potential has a large negative part, and therefore when there is a large point spectrum, is given by
(3.7)  2 n_ℓ ≤ (2/π) ∫₀^∞ |V⁻(r)|^{1/2} dr + 1 − √(1 + 4ℓ(ℓ+1)),
but one has to assume here that the negative part of the potential is an increasing function: V⁻′(r) ≥ 0. Here, one sees again that if ℓ is large enough, the point spectrum becomes empty.
Let us go back now to the positive part of the spectrum. Again, under the condition (3.4), one can show that the radial Schrödinger equation, (3.1) and (3.2), has a unique solution (up to a multiplicative constant factor) for every fixed k = √E ≥ 0. We call this solution, as normalized by (2.17) at infinity,
(3.8)  ψ_ℓ(k, r) = e^{iδ_ℓ(k)} sin(kr − ℓπ/2 + δ_ℓ(k)) + o(1),   r → ∞,
the physical solution, or the scattering solution, and δ_ℓ(k) is of course a real function of k. But this solution is defined by two boundary conditions, one at r = 0 and one at r = ∞.
Remember that we are dealing with a second-order differential equation. Therefore, a unique solution is defined by two boundary conditions: either by the value of the solution and its first derivative at a given point, or by the value of the solution at two different points. However, it is usually more convenient to define each solution of a linear differential equation by sufficient boundary conditions at one point and then try to relate it to another solution defined by different boundary conditions. As we shall see, this will provide a precise definition of the phase shift δ_ℓ(k) in terms of solutions of (3.1) and will enable us to study its properties as a function of k. In doing so, we shall in fact find many properties of the solutions of the Schrödinger equation as functions of k. These, in turn, will provide us with the key to the solution of the inverse problem.
It turns out that the case ℓ ≠ 0 is not much more complicated to study than the case ℓ = 0. Only the algebra is more complicated, but the essential points are very similar in all cases. We shall therefore study first the simple case ℓ = 0, called the S-wave by physicists.
4.3.1. The Regular Solution. For ℓ = 0, the radial equation becomes
(3.9)  −φ″(k, r) + V(r) φ(k, r) = k² φ(k, r),   φ(k, 0) = 0.
The regular solution φ(k, r) is defined by the boundary conditions at the origin
(3.11)  φ(0) = 0,   φ′(0) = 1.
One way to study (3.9) and (3.11) is to combine them into an integral equation of Volterra type. Consider now the integral equation
(3.12)  φ(k, r) = sin kr/k + ∫₀^r [sin k(r − t)/k] V(t) φ(k, t) dt
and its formal derivative
(3.13)  φ′(k, r) = cos kr + ∫₀^r cos k(r − t) V(t) φ(k, t) dt.
We would then get from (3.12) and (3.13) that φ(0) = 0 and φ′(0) = 1. If we differentiate (3.13) again formally, we would get the differential equation (3.9). It follows that, provided we can justify all the assumptions we made and all the formal differentiations we performed, the solution of (3.12) would give us the regular solution φ. We therefore solve (3.12) by iteration:
(3.14)  φ(k, r) = lim_{n→∞} φ^(n)(k, r),
(3.15a)  φ^(0)(k, r) = sin kr/k,
(3.15b)  φ^(n)(k, r) = sin kr/k + ∫₀^r [sin k(r − t)/k] V(t) φ^(n−1)(k, t) dt,
n = 1, 2, …. If this sequence of functions φ^(n) converges uniformly in an interval [0, R], for some fixed value of k, then the limit function φ exists and satisfies the integral equation for this value of k. Moreover, if, at each step, φ^(n)(0) = 0 and φ^(n)′(0) = 1, the same would be true for the limit function φ. To go from the integral equation to the differential equation, we must justify that φ^(n) is twice differentiable at each step. If this can be done, we have proved that φ is the unique solution of (3.9).
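The iteration (3.15) is easy to carry out numerically. The sketch below (an illustration, not part of the text) uses a constant potential V ≡ v₀ on [0, R], for which the regular solution is known in closed form, φ(k, r) = sin(κr)/κ with κ² = k² − v₀, and checks that the Picard iterates of (3.12) converge to it while preserving φ(0) = 0 and φ′(0) = 1:

```python
import numpy as np

v0, k, R, N = -0.5, 1.2, 3.0, 301
r = np.linspace(0.0, R, N)
h = r[1] - r[0]
V = np.full(N, v0)                      # constant "potential" on [0, R]

phi = np.sin(k * r) / k                 # phi^(0), eq. (3.15a)
for _ in range(30):                     # Picard iterates, eq. (3.15b)
    integrand = V * phi
    new = np.empty(N)
    for i in range(N):                  # trapezoid rule for the Volterra integral
        t = r[:i + 1]
        y = np.sin(k * (r[i] - t)) / k * integrand[:i + 1]
        new[i] = np.sin(k * r[i]) / k + np.sum(y[1:] + y[:-1]) * h / 2.0
    phi = new

kappa = np.sqrt(k * k - v0)
print(phi[-1], np.sin(kappa * R) / kappa)   # converged iterate vs exact solution
```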
Remark. Before moving to the proof of uniform convergence of our sequence φ^(n), we must note the following point: for all values of r and 0 ≤ t ≤ r in a finite interval [0, R], the functions sin kr/k and sin k(r − t)/k are holomorphic in the variable k in any compact domain D of the complex k-plane. Therefore, if, at each step of the iteration, φ^(n) is holomorphic in k in the same domain, and if the convergence is also uniform in D, then the limit function φ would also be holomorphic in k in D.
Assuming condition (3.4) on the potential, all of these uniform convergences and the validity of differentiations under the integral signs can be shown globally using the bounds, valid for all real finite r and all finite k,
(3.16)  |sin kr/k| < C [r/(1 + |k| r)] e^{|Im k| r}
and
(3.17)  |sin k(r − t)/k| < C [(r − t)/(1 + |k|(r − t))] e^{|Im k|(r − t)},   0 ≤ t ≤ r,
where C is some absolute constant.
The identity
(3.18)  ∫₀^r f(x₁) dx₁ ∫₀^{x₁} f(x₂) dx₂ ⋯ ∫₀^{x_{n−1}} f(x_n) dx_n = (1/n!) [∫₀^r f(x) dx]ⁿ
leads then to
(3.19)  |φ^(n)(k, r)| < C [r/(1 + |k| r)] e^{|Im k| r} [1 + C I(k, r) + (C²/2!) I²(k, r) + ⋯ + (Cⁿ/n!) Iⁿ(k, r)],
where I is given by
(3.20)  I(k, r) = ∫₀^r [t |V(t)|/(1 + |k| t)] dt ≤ ∫₀^r t |V(t)| dt ≤ ∫₀^∞ t |V(t)| dt < ∞.
This clearly shows all our assertions about uniform convergence and analyticity in k of our sequence φ^(n)(k, r). Also, from
I(k, r) ≤ ∫₀^ε t |V(t)| dt + (1/|k|) ∫_ε^r |V(t)| dt,   ε > 0 arbitrary,
we easily get
(3.21)  lim_{|k|→∞} I(k, r) = 0.
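The identity (3.18), which produces the 1/n! responsible for the convergence of the series in (3.19), can be checked numerically for any integrable f; below, f(x) = e^{−x} on [0, 2] and n = 4 (an arbitrary test case, not from the text):

```python
import math
import numpy as np

x = np.linspace(0.0, 2.0, 20001)
h = x[1] - x[0]
f = np.exp(-x)

def cumulative(g):
    # cumulative trapezoid: Int_0^{x_i} g(t) dt at every grid point x_i
    return np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2.0) * h))

n = 4
g = np.ones_like(x)
for _ in range(n):                 # g <- Int_0^x f(t) g(t) dt, applied n times
    g = cumulative(f * g)

F = cumulative(f)                  # F(x) = Int_0^x f
print(g[-1], F[-1]**n / math.factorial(n))
```

The n-fold ordered integral and [∫f]ⁿ/n! agree to discretization accuracy.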
This gives us also the asymptotic properties of our sequence φ^(n) and of φ. Because the differential equation is invariant under the change k → −k and the boundary conditions (3.11) are independent of k, we have, for any real or complex k,
φ(−k, r) = φ(k, r) = [φ(k*, r)]*,
where * means the complex conjugate. Once these results are put together, we end up with the following theorem.
Theorem 4.3.1.
(a) The regular solution φ(k, r), solution of (3.9) and (3.11), exists and is unique for all values of r ∈ R⁺ and all k if the potential satisfies (3.4). For each fixed r, φ is holomorphic in the finite k-plane.
(b) We have the symmetry and reality properties
(3.22)  φ(−k, r) = φ(k, r) = [φ(k*, r)]*.
(c) We have the bounds
(3.23)  |φ(k, r)| ≤ C [r/(1 + |k| r)] e^{|Im k| r} e^{C I(k, r)},
(3.24)  |φ(k, r) − sin kr/k| ≤ C I(k, r) [r/(1 + |k| r)] e^{C I(k, r)} e^{|Im k| r}.
(d) From this last bound, we see that the asymptotic behavior of φ for large |k| is given by
(3.25)  φ(k, r) = (sin kr/k) [1 + o(1)],   |k| → ∞.
This means that, for large E, the dominant part of φ is the free solution φ^(0) = sin kr/k. In fact, it is easily seen from (3.24) that, when k is real, we have
(3.26)  ∫_{−∞}^{∞} |φ(k, r) − sin kr/k| dk = 2 ∫₀^{∞} |φ(k, r) − sin kr/k| dk < ∞.
The function φ(k, r) − sin kr/k is thus integrable on the real axis; it is also, for each fixed r, an entire function of exponential type r in k (it grows at most like e^{r|Im k|} as |z| = |k| → ∞). Consider then its Fourier transform
φ̃(t, r) = ∫_{−∞}^{∞} [φ(k, r) − sin kr/k] e^{ikt} dk,
and close the contour by a large half-circle k = K e^{iθ}, 0 ≤ θ ≤ π, in the upper half-plane; the half-circle contributes
∫₀^{π} [φ(K e^{iθ}, r) − sin(K e^{iθ} r)/(K e^{iθ})] e^{i K e^{iθ} t} i K e^{iθ} dθ.
Now, if t > r, this last integral goes to zero as K → ∞ because of (3.24) and the damping factor exp(−Im k·t) = exp(−Kt sin θ). Since the integrand is entire, the integral over the closed contour vanishes, and therefore
(3.27)  φ̃(t, r) = ∫_{−∞}^{∞} [φ(k, r) − sin kr/k] e^{ikt} dk = 0,   t > r.
By the symmetry (3.22), φ̃(t, r) is even in t, so that it also vanishes for t < −r, and we can invert the Fourier transform:
φ(k, r) − sin kr/k = (1/2π) ∫_{−r}^{r} φ̃(t, r) e^{−ikt} dt = (1/π) ∫₀^{r} φ̃(t, r) cos kt dt
= (1/π) [φ̃(t, r) sin kt/k]₀^{r} − (1/π) ∫₀^{r} [∂φ̃(t, r)/∂t] (sin kt/k) dt,
and the boundary terms vanish. Writing
(3.28)  K(r, t) ≡ −(1/π) ∂φ̃(t, r)/∂t,
we obtain the integral representation
(3.29)  φ(k, r) = sin kr/k + ∫₀^{r} K(r, t) (sin kt/k) dt.
Multiplying (3.29) by sin kt and using the completeness of the sine functions (see §4.3.3), we can also write K directly as
(3.28′)  K(r, t) = (2/π) ∫₀^{∞} [φ(k, r) − sin kr/k] (sin kt/k) k² dk
= (1/iπ) ∫_{−∞}^{∞} [φ(k, r) − sin kr/k] e^{ikt} k dk.
The reason for introducing the factor (2/π)k² will become clear in §4.3.3. We can now close the contour again by a large half-circle in the upper k-plane, and we find that K(r, t) = 0 for all t > r. Inverting the Fourier sine transform (3.28′), we get, of course, (3.29). Both (3.28) and (3.28′) are useful, as we shall see later, for generalizing (3.29), and it is easily checked that K defined by (3.28′) is identical to (3.28).
From (3.28′), one also finds
(3.29′)  K(r, 0) = 0,   ∂K(r, t)/∂t |_{t=0} = 0.
The precise statement behind these manipulations is the Paley-Wiener theorem: an entire function f(z) of exponential type σ, square integrable on the real axis, admits the representation
f(z) = ∫_{−σ}^{σ} g(t) e^{izt} dt,
where
g(t) ∈ L²(−σ, σ).
Moreover, if f(z) is given by the above representation and g(t) does not vanish almost everywhere in any neighborhood of σ (or −σ), then f(z) is of order 1 and type σ (that is, not of exponential type less than σ).
What we have shown for φ(k, r) is, in fact, nothing more than the first half of the above theorem. The second part of the theorem guarantees that if the type of the function is σ, then the support of g cannot be smaller than (−σ, σ), at least at one end of the interval. This is indeed the case here for K at t = r. We should note here that φ − sin kr/k is analytic in k and goes to zero as k → ∞. Therefore, if it is L¹ at ∞, it is also L² there.
The integral representation (3.29) is the first of the three ingredients for solving the inverse problem. The second is the relation between the kernel K and the potential, and we shall establish it now.
So far, (3.29) is purely formal, in the sense that any exponential function of k with appropriate properties can be written in that form, without any relation to a second-order differential equation. We must now take into account that φ, given by the above representation, indeed satisfies the Schrödinger equation (3.9). Differentiating twice with respect to r under the integral sign and doing two integrations by parts (with respect to t), we find formally
(3.30)  ∂²K(r, t)/∂r² − ∂²K(r, t)/∂t² = V(r) K(r, t),   0 ≤ t ≤ r,
and
(3.31)  V(r) = 2 (d/dr) K(r, r) = 2 [∂K/∂r + ∂K/∂t]_{t=r}.
The kernel K thus satisfies (3.30) together with (3.29′) and
(3.32)  (d/dr) K(r, r) = (1/2) V(r),
i.e.,
(3.33)  K(r, r) = (1/2) ∫₀^r V(t) dt.
The Goursat problem (3.30)-(3.33) is equivalent to the integral equation
(3.34)  K(r, t) = (1/2) ∫_{(r−t)/2}^{(r+t)/2} V(s) ds + ∫_{(r−t)/2}^{(r+t)/2} ds ∫₀^{(r−t)/2} V(s + u) K(s + u, s − u) du,
which can also be used to prove the existence and uniqueness of K. Setting t = r, we get, as expected, (3.33), i.e., the integral version of (3.31). Setting t = 0 in (3.34), we check that K(r, 0) = 0. We do the same for (∂K/∂t)|_{t=0} = 0. Differentiating twice with respect to r and t, we find, as expected, (3.30).
To show the existence and uniqueness of the solution of (3.34), one can again use the iteration method, and everything goes through without any problem. One can then check all the assumptions made on K and its derivatives and get the bound
(3.35)  |K(r, t)| < (1/2) [∫_{(r−t)/2}^{(r+t)/2} |V(s)| ds] exp[∫₀^{(r+t)/2} u |V(u)| du].
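Equation (3.34) can be iterated numerically in the characteristic variables ξ = (r + t)/2, η = (r − t)/2, in which it reads L(ξ, η) = ½∫_η^ξ V ds + ∫_η^ξ ds ∫₀^η du V(s + u) L(s, u), with K(r, t) = L((r+t)/2, (r−t)/2). The sketch below (illustrative only, not from the text) does this for a constant potential V ≡ v₀ on [0, R], for which the exact regular solution is sin(κr)/κ with κ² = k² − v₀, and then verifies the representation (3.29) at r = R:

```python
import numpy as np

v0, R, k = 0.3, 2.0, 2.0
N = 801
x = np.linspace(0.0, R, N)                    # grid shared by xi and eta
h = x[1] - x[0]

# First term of (3.34): (1/2) Int_eta^xi v0 ds (exact for constant V); eta <= xi only.
base = np.tril(0.5 * v0 * (x[:, None] - x[None, :]))

L = base.copy()
for _ in range(40):                           # fixed-point iteration of (3.34)
    P = (v0 * L).cumsum(axis=0).cumsum(axis=1)            # 2-D prefix sums of V*L
    s = np.concatenate(([0.0], P[np.arange(N - 1), np.arange(1, N)]))
    L = np.tril(base + h * h * (P - s[None, :]))          # Riemann sum of the double integral
    L[:, 0] = base[:, 0]                                  # eta = 0: correction term vanishes

# Check (3.29) at r = R: phi = sin(kR)/k + Int_0^R K(R, t) sin(kt)/k dt.
j = np.arange((N - 1) // 2 + 1)               # eta = x[j], xi = R - x[j]  ->  t = R - 2 x[j]
t = (R - 2.0 * x[j])[::-1]
Kvals = L[N - 1 - j, j][::-1]
g = Kvals * np.sin(k * t) / k
phi_rep = np.sin(k * R) / k + np.sum((g[1:] + g[:-1]) * np.diff(t)) / 2.0
kappa = np.sqrt(k * k - v0)
print(phi_rep, np.sin(kappa * R) / kappa)
```

Note that K(r, r) = ½∫₀^r V holds exactly by construction, since the double integral in (3.34) vanishes at t = r.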
4.3.3. Completeness Relation. We are concerned here with the eigenfunction expansion of functions in the Hilbert space L²(0, ∞). It is known that self-adjoint operators in L² have sufficiently many eigenfunctions to form a
complete basis, either in the usual sense of a countable set of eigenfunctions,
or in the generalized sense as in Fourier exponential, sine, or cosine transforms.
Let us begin with the simple case where the potential is absent. We have just the differential operator −d²/dr² acting in L²(0, ∞) together with the boundary condition ψ(0) = 0. The generalized eigenfunctions are sin kr, k ∈ [0, ∞), and any L² function can be expanded in terms of these. More precisely, given a function f(r) ∈ L²(0, ∞), if we define its Fourier sine transform by
(3.36)  f̂(k) = ∫₀^∞ f(r) sin kr dr,
then
(3.37)  f(r) = (2/π) ∫₀^∞ f̂(k) sin kr dk
in the mean-square sense:
(3.38)  ∫₀^∞ |f(r) − (2/π) ∫₀^K f̂(k) sin kr dk|² dr → 0,   K → ∞,
and we have the Parseval equality
(3.39)  ∫₀^∞ |f|² dr = (2/π) ∫₀^∞ |f̂|² dk.
Putting (3.36) in the right-hand side of (3.37) and exchanging the order of integrations, one gets symbolically the completeness of the set {sin kr},
(3.40)  (2/π) ∫₀^∞ sin kr sin kt dk = δ(r − t),
where δ is the Dirac delta function. This relation is of course formal, but
can be given a precise meaning through the usual machinery of the theory of
distributions or integral transforms.
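The transform pair (3.36)-(3.39) is easily illustrated numerically. For f(r) = r e^{−r} (an arbitrary test function, not from the text) the sine transform is known in closed form, f̂(k) = 2k/(1 + k²)², so the direct transform, the inversion, and the Parseval equality can all be checked:

```python
import numpy as np
from scipy.integrate import quad

def f(r):
    return r * np.exp(-r)

def fhat(k):
    return 2.0 * k / (1.0 + k * k) ** 2    # closed-form sine transform of f

# (3.36): direct transform at one point
k0 = 1.7
fhat_num, _ = quad(lambda r: f(r) * np.sin(k0 * r), 0.0, np.inf)

# (3.37): the inverse transform recovers f
r0 = 0.9
f_back, _ = quad(lambda k: fhat(k) * np.sin(k * r0), 0.0, np.inf, limit=200)
f_back *= 2.0 / np.pi

# (3.39): Parseval equality
lhs, _ = quad(lambda r: f(r) ** 2, 0.0, np.inf)
rhs, _ = quad(lambda k: fhat(k) ** 2, 0.0, np.inf)
rhs *= 2.0 / np.pi

print(fhat_num, fhat(k0))
print(f_back, f(r0))
print(lhs, rhs)
```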
In order to make everything more precise, we can use the complete boundary conditions as given by (3.11),
(3.41)  φ(0) = 0,   φ′(0) = 1.
Then the eigenfunctions in the free case (V ≡ 0) are given by (sin kr/k), and if we define
(3.36′)  f̂(k) = ∫₀^∞ f(r) (sin kr/k) dr,
then
(3.37′)  f(r) = (2/π) ∫₀^∞ f̂(k) (sin kr/k) k² dk
and
(3.40′)  (2/π) ∫₀^∞ (sin kr/k)(sin kt/k) k² dk = δ(r − t).
In terms of the energy E = k², the last relation reads
∫₀^∞ (sin √E r/√E)(sin √E t/√E) (1/π) √E dE = δ(r − t),
and
ρ₀(E) = ∫₀^E (1/π) √E′ dE′
is often called the spectral measure (in our case the free spectral measure), and dρ₀/dE the spectral density. The Parseval equality then becomes
(3.41b)  ∫₀^∞ |f(r)|² dr = ∫₀^∞ |f̂(√E)|² dρ₀(E).
When the potential V is present, satisfying (3.4), the positive spectrum is still continuous, with generalized eigenfunctions φ(√E, r), and there may be, in addition, a finite number of bound states E₁ < E₂ < ⋯ < E_n < 0 with eigenfunctions φ_j(r) = φ(√E_j, r), normalized by constants C_j:
C_j ∫₀^∞ φ_j²(r) dr = 1.
It can then be shown that the spectral measure ρ(E) (cf. §1.6 in Chapter 1) is given by
(3.43a)  dρ(E) = (1/π) [√E / |F(√E)|²] dE,   E = k² ≥ 0,
(3.43b)  dρ(E)/dE = Σ_{j=1}^{n} C_j δ(E − E_j),   E < 0,
where F is the Jost function, to be defined in §4.3.5. The completeness relation now reads
(3.44)  ∫ φ(√E, r) φ(√E, t) dρ(E) = δ(r − t),
the Parseval equality becomes
∫₀^∞ |f(r)|² dr = ∫ |f̂(E)|² dρ(E),
and the transform pair is
(3.46)  f̂(E) = ∫₀^∞ f(r) φ(√E, r) dr,   E ≥ 0,   f̂(E_j) = ∫₀^∞ f(r) φ_j(r) dr,   j = 1, …, n,
f(r) = ∫ f̂(E) φ(√E, r) dρ(E).
For E, E′ > 0, the generalized orthogonality relation in the free case reads
(3.48a)  ∫₀^∞ (sin kr/k)(sin k′r/k′) dr = [π/(2√E √E′)] δ(√E − √E′) = δ(E′ − E)/(dρ₀/dE).
In the general case, this then becomes, again for E > 0, E′ > 0,
(3.48b)  ∫₀^∞ φ(√E, r) φ(√E′, r) dr = δ(E′ − E)/(dρ/dE),
while for the bound states
(3.48c)  ∫₀^∞ φ_j(r) φ_{j′}(r) dr = δ_{jj′}/C_j
and
(3.48d)  ∫₀^∞ φ(√E, r) φ_j(r) dr = 0,   E ≥ 0.
When we introduce the physical solution (3.74) later, we will see that (3.48b)
can be written similarly to (3.40).
4.3.4. The Jost Solution. The Jost solution f(k, r) of the Schrödinger equation (3.9) is defined by the boundary conditions at infinity:
(3.49)  lim_{r→∞} e^{−ikr} f(k, r) = 1,   lim_{r→∞} e^{−ikr} f′(k, r) = ik.
Again, we can try to combine the differential equation with these boundary conditions. The procedure is the same as for the regular solution φ. It is easily verified, formally, that we get
(3.50)  f(k, r) = e^{ikr} − ∫_r^∞ [sin k(r − t)/k] V(t) f(k, t) dt
and
(3.51)  f′(k, r) = ik e^{ikr} − ∫_r^∞ cos k(r − t) V(t) f(k, t) dt.
We again solve (3.50) by iteration, starting from
(3.52)  f^(0)(k, r) = e^{ikr}
and writing
(3.53)  f^(n)(k, r) = e^{ikr} − ∫_r^∞ [sin k(r − t)/k] V(t) f^(n−1)(k, t) dt.
Using
(3.54)  |e^{ikr}| = e^{−Im k · r}
and
(3.55)  |sin k(r − t)/k| < C [(t − r)/(1 + |k|(t − r))] e^{|Im k|(t − r)},   t ≥ r,
one obtains, for Im k ≥ 0,
(3.56)  |f^(n)(k, r)| ≤ C e^{−Im k · r} [1 + C J(k, r) + (C²/2!) J²(k, r) + ⋯ + (Cⁿ/n!) Jⁿ(k, r)],
where
(3.57a)  J(k, r) = ∫_r^∞ [t |V(t)|/(1 + |k| t)] dt ≤ ∫₀^∞ t |V(t)| dt < ∞
and
(3.57b)  J(k, r) ≤ (1/|k|) ∫_r^∞ |V(t)| dt → 0   as |k| → ∞.
Again, as for the regular solution φ, we see that, under condition (3.4) on the potential, our sequence of functions {f^(n)} converges uniformly for all finite r and all finite k in Im k ≥ 0. This uniform convergence entails that the limit function f is the unique solution of the integral equation. Obviously, one has the bound
(3.58)  |f(k, r)| ≤ C e^{−Im k · r} e^{C J(k, r)} ≤ C′ e^{−Im k · r},
and a similar bound for the derivative f′(k, r), where C′ is another absolute constant independent of r and k. It is then easily verified that f satisfies the boundary conditions (3.49). Moreover, using the above bound in the right-hand side of (3.50), we find the asymptotic behavior of f in the upper k-plane. For each fixed r,
(3.59)  f(k, r) = e^{ikr} [1 + o(1)],   |k| → ∞,   Im k ≥ 0.
By the same arguments as for φ, we also have the symmetry property
(3.60)  f(−k*, r) = [f(k, r)]*.
In the closed upper plane Im k ≥ 0, and for each fixed finite r, we have the asymptotic behaviors
(3.61)  f(k, r) = e^{ikr} [1 + o(1)],   |k| → ∞,
(3.62)  f′(k, r) = ik e^{ikr} [1 + o(1)],   |k| → ∞.
For real k ≠ 0, the two solutions f₊ = f(k, r) and f₋ = f(−k, r) are independent, since their Wronskian is
(3.63)  W[f₋, f₊] ≡ f₋ f′₊ − f′₋ f₊ = 2ik ≠ 0.
We can therefore write the regular solution as a combination of the two:
(3.64)  φ(k, r) = (1/2ik) [F(−k) f(k, r) − F(k) f(−k, r)],
where F(k) is given by the Wronskian
(3.65)  F(k) = W[f(k, r), φ(k, r)] ≡ f φ′ − f′ φ.
This definition of F(k) makes sense for all k in Im k ≥ 0. From the symmetry properties of φ and f, (3.22) and (3.60), it follows that
(3.66)  F(−k*) = [F(k)]*,   Im k ≥ 0.
On the real axis, we write
(3.67)  F(k) = |F(k)| e^{−iδ(k)},   k ≥ 0,
where δ(k), minus the phase of F, is defined modulo 2π. We will soon see how to define it in a precise way.
From the bounds for f and f′, it is easily verified that r f′(k, r) → 0 as r → 0. This, when used in (3.65), gives
(3.68)  F(k) = f(k, 0) = 1 + ∫₀^∞ (sin kt/k) V(t) f(k, t) dt.
Using (3.64), one can also express F in terms of the regular solution:
(3.69)  F(k) = 1 + ∫₀^∞ e^{ikr} V(r) φ(k, r) dr.
This integral representation is quite useful. If we use the bound (3.23) for φ in (3.69), we immediately find the asymptotic behavior of F(k):
(3.70)  F(k) = 1 + o(1),   |k| → ∞,   Im k ≥ 0.
Because of (3.70), we can choose the determination
(3.71)  δ(∞) = 0
and then find, by continuity, δ(k) for finite values of k ≥ 0. Because of (3.66), we also have
(3.72)  δ(−k) = −δ(k),   k ≥ 0.
Another property of F(k) is that it never vanishes on the real axis, except possibly at k = 0. Indeed, if F(k₀) = 0, k₀ > 0, then, by (3.66), F(−k₀) = 0. It then follows from (3.64) that φ(k₀, r) ≡ 0 for all r, and this contradicts the boundary condition φ′(k₀, 0) = 1, because φ and φ′ are both continuous functions of r.
4.3.6. Zeros of F(k) in Im k > 0. Suppose that F(k₀) = 0, Im k₀ > 0. This means, according to (3.65), that φ(k₀, r) and f(k₀, r) are not independent solutions. Therefore, φ(k₀, r) = A f(k₀, r), where A is a constant ≠ 0. Since, for r → ∞, the right-hand side behaves as e^{−Im k₀ · r}, the same is true for φ. It follows that φ(k₀, r) ∈ L²(0, ∞). Since φ satisfies the differential equation and the boundary conditions, it follows that E₀ = k₀² is an eigenvalue. However, we saw that the differential operator together with the boundary condition is self-adjoint. This means that E₀ = k₀² is real and negative and, therefore, that Re k₀ = 0, Im k₀ > 0. E₀ belongs to the point spectrum of the Hamiltonian, {E_j, j = 1, …, n}, and φ(k₀, r) is its eigenfunction.
Conversely, suppose that E_j = −γ_j² is one of the eigenvalues. Then one must have F(iγ_j) = 0. Indeed, if this were not the case, it would mean that φ_j = φ(iγ_j, r) and f_j = f(iγ_j, r) are independent of each other. By definition, φ_j is the eigenfunction corresponding to the eigenvalue E_j, because we saw that, for all finite values of E, the solution is unique. Therefore, φ_j is square integrable, which means that φ_j(∞) = 0. From the differential equation, it follows that φ″_j(∞) = 0. Now, since both φ_j and φ″_j are continuous and φ_j(∞) = φ″_j(∞) = 0, it follows that φ′_j(∞) = 0. If we now use these results in (3.65) for r → ∞, we find that F(iγ_j) = 0. There is therefore a one-to-one correspondence between the eigenvalues E_j = −γ_j² and the zeros of the Jost function on the positive imaginary axis. It can also be shown that all these zeros are simple, and that the spectrum is nondegenerate. We therefore have the following theorem.
Theorem 4.3.5.
(a) The Jost function F(k), defined by (3.65), or (3.68), or (3.69), is holomorphic in Im k > 0 and continuous in Im k ≥ 0.
(b) We have the symmetry property (3.66) and the asymptotic property (3.70).
(c) The phase of the Jost function on the real axis can be defined by continuity for all real k from (3.71).
(d) There is a one-to-one correspondence between the eigenvalues E_j = (iγ_j)² and the zeros k_j of the Jost function on the positive imaginary axis: k_j = iγ_j, γ_j > 0, j = 1, 2, …, n.
(e) There are no other zeros of F in Im k ≥ 0. In particular, F cannot vanish for real values of k ≠ 0.
Remark. It may happen that one of the zeros of the Jost function is at k = 0. This does not correspond to a true bound state with a square integrable eigenfunction, but to a resonance at zero energy.
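The correspondence between bound states and zeros of F on the imaginary axis can be seen explicitly for the s-wave square well, V(r) = −V₀ for r < 1 and V(r) = 0 beyond (a standard textbook model, used here only as an illustration). Outside the well the Jost solution is exactly e^{ikr}, and matching at r = 1 gives the Jost function in closed form, F(k) = e^{ik}[cos K − (ik/K) sin K] with K = √(k² + V₀). The sketch below locates the zero of F on the positive imaginary axis and checks that it coincides with the familiar bound-state condition K̃ cot K̃ = −γ, K̃ = √(V₀ − γ²); it also checks that F has no real zeros and tends to 1 at large k:

```python
import numpy as np
from scipy.optimize import brentq

V0 = 4.0                                  # well depth (radius fixed to 1); one bound state

def F(k):
    # Jost function of the square well: F(k) = f(k, 0), from matching at r = 1
    k = complex(k)
    K = np.sqrt(k * k + V0)               # momentum inside the well
    return np.exp(1j * k) * (np.cos(K) - 1j * k * np.sin(K) / K)

# Zero of F on the positive imaginary axis (F(i gamma) is real there):
gamma_F = brentq(lambda g: F(1j * g).real, 1e-6, np.sqrt(V0) - 1e-6)

# Standard bound-state condition for the same well:
def bs(g):
    Kt = np.sqrt(V0 - g * g)
    return Kt / np.tan(Kt) + g
gamma_bs = brentq(bs, 1e-6, np.sqrt(V0) - 1e-6)

print(gamma_F, gamma_bs, "E =", -gamma_F**2)
print(min(abs(F(kk)) for kk in np.linspace(0.0, 10.0, 1001)))  # no zeros on the real axis
print(abs(F(200.0)))                                           # F -> 1, eq. (3.70)
```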
4.3.7. The Physical Solution. We can now find the asymptotic behavior of the regular solution φ(k, r) for k real positive and r → ∞. Using the formula (3.64) together with (3.49) and (3.67), we find
(3.73)  φ(k, r) = [|F(k)|/k] sin(kr + δ(k)) + o(1),   r → ∞.
Comparing with the normalization (3.8) of the physical solution, we see that
(3.74)  ψ(k, r) = [k/F(k)] φ(k, r).
This means that the phase shift, defined for k real through the asymptotic form (3.8) of ψ, is identical to δ(k), which is minus the phase of the Jost function. In (3.74), although φ is holomorphic in all of the finite k-plane, F(k) is analytic only in Im k > 0. Therefore, in general, the relation makes sense only in Im k ≥ 0. In this half-plane, the poles of ψ (the zeros of F(k)) correspond to the bound states and vice versa.
Remark. All these properties show the importance of the Jost function F(k). From this single function, we can get the phase shift, given by minus the phase of F for k real positive (positive energies), and the eigenvalues (negative-energy bound states) given by its zeros on the positive imaginary axis.
It is easily shown that the physical solution satisfies the integral equation
(3.75)  ψ(k, r) = sin kr − (1/k) ∫₀^∞ sin(k r_<) e^{ik r_>} V(r′) ψ(k, r′) dr′,
where r_< = min(r, r′) and r_> = max(r, r′). Contrary to (3.12), this is now a Fredholm integral equation, and therefore we have the Fredholm alternative. The eigenfunctions ψ_j(r) are solutions of the homogeneous equation. It can be shown that the Jost function is the Fredholm determinant of the above integral equation. To be more precise, if we introduce a parameter λ in front of the integral operator in (3.75), and write it symbolically as ψ = ψ₀ + λKψ, we know that, if the kernel K is L², the solution of (3.75) is given in general by ψ = ψ₀ + N(λ)ψ₀/D(λ), where N is an integral operator acting on ψ₀ and D is a number. N and D are called Fredholm determinants (numerator and denominator, respectively) and are both entire functions of λ. If λ₀ is a zero of D, D(λ₀) = 0, then the homogeneous equation has solutions and 1/λ₀ is an eigenvalue of the kernel K. For (3.75) with λ in front of the integral, one can show that F(λ; k) = D(λ; k). It is therefore quite natural that the zeros of the Jost function should give the eigenvalues. We conclude this section by rewriting the completeness relation (3.44) in terms of the physical solution ψ.
From (3.75) it can be shown that, if k_j = iγ_j corresponds to the eigenvalue E_j = −γ_j², then ψ_j(r) is normalized at infinity by
lim_{r→∞} e^{γ_j r} ψ_j(r) = 1.
Defining C_j by
C_j ∫₀^∞ ψ_j²(r) dr = 1,
we get
(2/π) ∫₀^∞ ψ(k, r) ψ(k, t) dk + Σ_{j=1}^{n} C_j ψ_j(r) ψ_j(t) = δ(r − t),
which looks more like the familiar completeness relation (3.40) for Fourier transforms. Similarly, the generalized orthogonality (3.48b) now becomes like the relation for the sine transform:
(2/π) ∫₀^∞ ψ(k, r) ψ(k′, r) dr = δ(k′ − k).
4.3.8. The Levinson Theorem. We know that the Jost function F(k) is holomorphic in the upper half-plane Im k > 0 and continuous in Im k ≥ 0. Suppose we have n eigenvalues (bound states) corresponding to the zeros k_j = iγ_j, 0 < γ₁ < γ₂ < ⋯ < γ_n, and let us make a vertical cut joining k_n = iγ_n to the origin. In this half-plane with the cut, Log F(k) is also holomorphic, and Log F(∞) = 0.
Because the phase shift is minus Im Log F, we get, by following the closed contour shown on Figure 1, and remembering that the zeros are all simple,
[δ(+∞) − δ(+0)] + 2nπ + [δ(−0) − δ(−∞)] = 0.
Using (3.72), this gives
(3.76)  δ(+0) − δ(+∞) = nπ.
This is the Levinson theorem, which relates the number of bound states to δ(+0) − δ(+∞). Choosing the determination (3.71), we have
(3.77)  δ(0) = nπ.
Remark. It may happen that F(0) = 0, i.e., one of the zeros of F is just at the origin. It can be shown that it is also simple. In this case, we get
(3.78)  δ(0) = (n + 1/2) π,
n being the number of true bound states with negative energies and k = 0 corresponding to a resonance at zero energy.
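The Levinson theorem (3.76)-(3.77) can be verified numerically on the same square-well model used above (V = −V₀ for r < 1, an illustrative example, not from the text): with F(k) in closed form, δ(k) = −arg F(k) is made continuous by unwrapping along a k-grid and normalized by δ(∞) = 0 as in (3.71), and δ(0⁺)/π then counts the bound states. For a well of radius 1, one new s-wave bound state appears each time √V₀ crosses an odd multiple of π/2:

```python
import numpy as np

def levinson_count(V0):
    # delta(0+)/pi for the s-wave square well of depth V0, radius 1, using
    # F(k) = exp(ik) [cos K - (ik/K) sin K], K = sqrt(k^2 + V0),
    # and delta(k) = -arg F(k), normalized by delta(inf) = 0 as in (3.71).
    k = np.linspace(1e-4, 400.0, 200001)
    K = np.sqrt(k * k + V0)
    F = np.exp(1j * k) * (np.cos(K) - 1j * k * np.sin(K) / K)
    delta = -np.unwrap(np.angle(F))       # continuous branch of the phase shift
    delta -= delta[-1]                    # enforce delta -> 0 at large k
    return delta[0] / np.pi

for V0 in (1.0, 4.0, 25.0):               # 0, 1, and 2 bound states, respectively
    print(V0, levinson_count(V0))
```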
4.3.9. Some Integral Representations. We have already seen the integral representation (3.29) for the regular solution φ. It is a consequence of the Paley-Wiener theorem and its variants for entire functions. For other functions of interest (Jost solution, Jost function, phase shift, S-matrix, etc.), some of them analytic in the upper half-plane, and some defined in general only for real values of k, it is possible to obtain similar integral representations.
The first integral representation is for the Jost solution, and we will see that from it we can deduce all the other integral representations. From the analyticity of f(k, r) in the upper k-plane, and its asymptotic behavior (3.59), it is clear that
2 A(r, t) = [f (k, r) eikr ] eikt dk = 0 , t < r,
where the contour is made of the real axis and a large semicircle in the upper
plane. Because the integral over this large semicircle is itself zero for t < r, it
follows that
(3.79)
2 A(r, t) =
t < r.
Now, from (3.50), (3.55), and (3.57b), it is easily seen that, for k real, we
have, for large values of k,
C
|f (k, r) eikr | <
t|V (t)|dt.
|k| r
159
Downloaded 11/22/13 to 190.144.171.70. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
f (k, r) =
eikr
+
r
where A(r, t) L2 (r, ) in the variable t. This is, in essence, part of the
following general theorem.
Theorem 4.3.6 (Titchmarsh). A necessary and sufficient condition for F(x) ∈ L²(−∞, ∞) to be the limit as y → 0 of an analytic function F(z = x + iy) in y > 0 such that

∫_{−∞}^{∞} |F(x + iy)|² dx = O(e^{−2ay})

is that

∫_{−∞}^{∞} F(x) e^{−itx} dx = 0,  t < a.

Consider now the Jost function,

(3.81)  F(k) = f(k, 0) = 1 + ∫_0^∞ e^{ikr} V(r) φ(k, r) dr.

Using the representation (3.29) for φ, this becomes

F(k) = 1 + ∫_0^∞ e^{ikr} V(r) (sin kr / k) dr + ∫_0^∞ e^{ikr} V(r) [∫_0^r K(r, t) (sin kt / k) dt] dr.
Let us look now at the first integral of this formula. It can be written as

∫_0^∞ e^{ikr} V(r) (sin kr / k) dr = (1/2) ∫_0^∞ V(r) dr ∫_0^{2r} e^{ikt} dt = (1/2) ∫_0^∞ e^{ikt} [∫_{t/2}^∞ V(r) dr] dt.

Now the integrand satisfies

∫_0^∞ dt |∫_{t/2}^∞ V(r) dr| ≤ ∫_0^∞ dt ∫_{t/2}^∞ |V(r)| dr = ∫_0^∞ |V(r)| dr ∫_0^{2r} dt = 2 ∫_0^∞ r|V(r)| dr < ∞.

It follows that the first integral is indeed the Fourier integral of an L¹ function.
For the second integral, writing

∫_0^∞ e^{ikr} V(r) dr ∫_0^r K(r, t) (sin kt / k) dt = (1/2) ∫_0^∞ V(r) dr ∫_0^r K(r, t) dt ∫_{r−t}^{r+t} e^{iku} du,

we can also reach the same conclusion by now using the bound (3.35). In conclusion, under the condition (2.20), we have

(3.82)  F(k) = 1 + ∫_0^∞ A(t) e^{ikt} dt,  A(t) ∈ L¹(0, ∞).

Consider now the function

(3.83)  G(k) = Π_{j=1}^n [(k + iγj)/(k − iγj)] F(k),

where the product is over the zeros of F(k) on the imaginary axis in the upper half-plane. We have seen that these zeros correspond to the bound states.
The function G(k) is free of zeros in the upper half-plane, and

(3.84)  lim_{|k|→∞} G(k) = 1,  Im k ≥ 0.

We can therefore apply Theorem 4.3.7 to Log G(k), with Log G(∞) = 0. The net result is

Log G(k) = ∫_0^∞ g(t) e^{ikt} dt,

where g(t) ∈ L¹(0, ∞). The fact that g(t) vanishes for t < 0 is again the consequence of the analyticity of Log G(k) in the upper k-plane. Going back now to F(k), we find, for k in the upper plane Im k ≥ 0,

(3.85)  F(k) = Π_{j=1}^n [(k − iγj)/(k + iγj)] exp[∫_0^∞ g(t) e^{ikt} dt],  g(t) ∈ L¹(0, ∞),

or, equivalently,

(3.86)  Log F(k) = Σ_{j=1}^n Log[(k − iγj)/(k + iγj)] + ∫_0^∞ g(t) e^{ikt} dt.

For real k, the imaginary part of (3.86) expresses the phase shift in terms of the sine transform ∫_0^∞ g(t) sin kt dt. From the knowledge of δ(k) for all k ≥ 0 and the bound state energies, we can now calculate g(t) by inverting the above Fourier sine transform. Once g(t) is known, we then have F(k) from (3.85).
The second method is to use the Hilbert transforms (called dispersion relations by physicists). The main theorem to be used now is the following.

Theorem 4.3.8. Suppose f(x) ∈ L¹(−∞, ∞) with f(x) = 0 for x < 0, and let f̂(k) be its Fourier transform,

f̂(k) = ∫_0^∞ f(x) e^{ikx} dx = X(k) + iY(k).

Then X and Y are Hilbert transforms of each other:

X(k) = lim_{a→∞} (1/π) P ∫_{−a}^{a} [Y(k′)/(k′ − k)] dk′,

Y(k) = −lim_{a→∞} (1/π) P ∫_{−a}^{a} [X(k′)/(k′ − k)] dk′,

where the symbol P in front of the integrals means the principal value. These two relations are called Kramers–Kronig relations.
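The theorem can be illustrated numerically on an independent example (not the Jost function itself): f(k) = 1/(k + ia), a > 0, is analytic in Im k > 0 and vanishes at infinity, so its boundary values X(k) = k/(k² + a²) and Y(k) = −a/(k² + a²) must form a Kramers–Kronig pair. The parameters below are arbitrary choices:

```python
import numpy as np

# Kramers-Kronig check for f(k) = 1/(k + ia):
#   Y(k0) = -(1/pi) P Int X(k')/(k' - k0) dk'.

a, k0 = 1.0, 1.0
h, N = 1e-3, 400000

# symmetric midpoint grid about k0: the principal value is automatic,
# because the singular contributions at k0 +/- eps cancel pairwise
offs = (np.arange(N) + 0.5) * h
kp = np.concatenate((k0 - offs[::-1], k0 + offs))

X = kp / (kp**2 + a**2)
Y_num = -(1.0 / np.pi) * np.sum(X / (kp - k0)) * h
Y_exact = -a / (k0**2 + a**2)
print(Y_num, Y_exact)
```

The small residual discrepancy comes from truncating the principal-value integral at |k′ − k0| = Nh.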
We can apply this theorem directly to Log G(k), with G given by (3.83). The result is

Re Log G(k) = Log |F(k)| = −(1/π) P ∫_{−∞}^{∞} [δ̃(k′)/(k′ − k)] dk′,

where, according to (3.83) and (3.67), the phase δ̃(k) = −Im Log G(k) is given by

δ̃(k) = δ(k) − 2 Σ_j Arctg(γj/k).

In the above integral, we need the values of δ(k) for negative k, and these are provided by (3.72). Putting all the pieces together, we find

(3.87a)  |F(k)| = Π_j (1 + γj²/k²) exp[−(1/π) P ∫_{−∞}^{∞} δ(k′)/(k′ − k) dk′]

and

(3.87b)  F(k) = Π_j (1 + γj²/k²) exp[−(1/π) ∫_{−∞}^{∞} δ(k′)/(k′ − k) dk′].

In the first formula, k is real, whereas in the last formula, Im k > 0. Putting k = Re k + iε, and making ε → 0, we get (3.87a) and (3.67) in the limit, by using

lim_{ε→0} 1/(k′ − k − iε) = P [1/(k′ − k)] + iπ δ(k′ − k).
Another integral representation which will be useful later is the one for the S-matrix, defined for real values of k only:

(3.88)  S(k) = F(−k)/F(k) = e^{2iδ(k)},  k real.

We already have (3.81) and (3.82) for F(k). Consider now the function 1/F(k). In the Wiener–Levy theorem, we can choose G(z) = 1/z, and this function is analytic everywhere except at z = 0. Since F(k) does not vanish when k varies from −∞ to +∞ on the real axis, and F(±∞) = 1, we can apply the theorem and we get

(3.89)  1/F(k) = 1 + ∫_{−∞}^{∞} B(t) e^{ikt} dt,  B(t) ∈ L¹(−∞, ∞).

Therefore,

S(k) = F(−k)/F(k) = [1 + ∫_0^∞ A(t) e^{−ikt} dt] [1 + ∫_{−∞}^{∞} B(u) e^{iku} du]
     = 1 + ∫_0^∞ A(t) e^{−ikt} dt + ∫_{−∞}^{∞} B(t) e^{ikt} dt + [∫_0^∞ A(t) e^{−ikt} dt][∫_{−∞}^{∞} B(u) e^{iku} du].

The last term is the product of the Fourier transforms of two L¹ functions, and it can be written as the Fourier transform of the convolution of A and B:

∫_{−∞}^{∞} e^{ikx} [∫ A(t) B(x + t) dt] dx.

Indeed, the convolution h of two functions f, g ∈ L¹ is itself in L¹, and its Fourier transform H(k) is the product of the Fourier transforms of f and g: H(k) = F(k) G(k). Using this theorem and putting all the pieces together, we find

(3.90)  S(k) = e^{2iδ(k)} = 1 + ∫_{−∞}^{∞} s(t) e^{ikt} dt,  s(t) ∈ L¹(−∞, ∞).

Similarly,

(3.91)  1/[F(k) F(−k)] = 1/|F(k)|² = 1 + ∫_{−∞}^{∞} b(t) e^{ikt} dt,  b(t) ∈ L¹(−∞, ∞),

where we have used the fact that the function is real and even.

Remark. If there are no bound states, then 1/F(k) is also analytic in Im k > 0. It follows that in (3.89), B(t) = 0 for t < 0.
4.4. The Gelfand–Levitan Equation. The data of the inverse problem consist of the spectral measure

(4.1)  dρ(E),

which will be given explicitly below in (4.9). Consider now the function

(4.2)  Φ̃(r, t) = ∫ [φ(√E, t) − (sin √E t)/√E] φ(√E, r) dρ(E),

and we are going to show that, as for the kernel defined by (3.27), we have Φ̃(r, t) = 0 for all r > t. To show this, we use (3.64) and (3.43b) in the above integral. Taking into account that φ is an even function of k, we find

Φ̃(r, t) = (1/πi) ∫_{−∞}^{∞} φ(k, t) [f(k, r)/F(k)] k dk + Σ_j Cj φ(iγj, t) φ(iγj, r).

Now, if r > t, we can close the contour in the above integral by a large semicircle in the upper k-plane, and we know that, because of (3.59), the contribution of this large semicircle goes to zero as its radius goes to infinity. Therefore, the above integral on the real k-axis is equal to the sum of the residues of the (simple) poles due to the (simple) zeros of F(k), kj = iγj. On the other hand, we know that, at kj = iγj, φ(iγj, r) and f(iγj, r) are proportional to each other. Since φ is normalized by φ′(k, 0) = 1 for all k, we have

(4.3)  φ(iγj, r) = f(iγj, r)/f′(iγj, 0).

The result is

Φ̃(r, t) = Σ_{j=1}^n [2iγj/Ḟ(iγj) + Cj/f′(iγj, 0)] φ(iγj, t) f(iγj, r),

where Ḟ = dF/dk. It can be shown (see the appendix) that the sum inside the bracket vanishes for each j. Therefore,

(4.4)  Φ̃(r, t) = ∫ [φ(√E, t) − (sin √E t)/√E] φ(√E, r) dρ(E) = 0,  r > t.
Using now the representation (3.29) together with the estimate

(4.5)  φ(k, r) − (sin kr)/k = o(1/k),

we substitute (3.29) into (4.4). This gives

(4.6)  ∫ (sin √E r/√E)(sin √E t/√E) dσ(E) + ∫_0^r K(r, s) [δ(t − s) + ∫ (sin √E t/√E)(sin √E s/√E) dσ(E)] ds = 0,  t < r,

where dσ(E) = dρ(E) − dρ0(E), dρ0 being the free spectral measure (dρ0(E) = (√E/π) dE for E ≥ 0, and 0 for E < 0). Defining

(4.7)  G(r, t) = ∫ (sin √E t/√E)(sin √E r/√E) dσ(E),

we obtain

(4.8)  K(r, t) + G(r, t) + ∫_0^r K(r, s) G(s, t) ds = 0,  t < r.

This is the Gelfand–Levitan integral equation, which gives the full solution of the inverse problem: finding the potential from the scattering data (4.1). Given these data, we can calculate the Jost function by (3.87a). We then have dρ(E) by (3.43b). Using (3.41a), we obtain

(4.9)  dρ(E) = (1/π) [√E/|F(√E)|²] dE,  E = k² ≥ 0,
       dρ(E) = Σ_{j=1}^n Cj δ(E − Ej) dE,  E < 0,

from which we can calculate the kernel G by (4.7). Having G, we must solve the integral equation (4.8) for K, and then we get, finally,

(4.10)  V(r) = 2 (d/dr) K(r, r).

Remark. The Gelfand–Levitan integral equation is, for each fixed value of r, a Fredholm integral equation for K in the second variable t, which has the structure

K(t) + G(t) + ∫_0^r G(t, s) K(s) ds = 0.
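When the input kernel is separable (rank one), the Gelfand–Levitan equation can be solved in closed form, a fact used repeatedly below: for G(r, t) = c g(r) g(t), the solution is K(r, t) = −c g(r) g(t)/[1 + c ∫_0^r g²(s) ds]. The sketch verifies this against (4.8); g(r) = sin r and the constant c are purely illustrative choices, not data coming from an actual potential:

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

c = 0.8
g = np.sin

def K(r, t):
    # closed-form solution of the GL equation for the rank-one kernel
    s = np.linspace(0.0, r, 4001)
    return -c * g(r) * g(t) / (1.0 + c * trapezoid(g(s)**2, s))

def G(r, t):
    return c * g(r) * g(t)

# residual of K + G + Int_0^r K(r,s) G(s,t) ds at a sample point, t < r
r, t = 2.0, 1.3
s = np.linspace(0.0, r, 4001)
Ks = np.array([K(r, si) for si in s])
residual = K(r, t) + G(r, t) + trapezoid(Ks * G(s, t), s)

# the reconstructed potential, V(r) = 2 dK(r,r)/dr, by central differences
eps = 1e-5
V = (K(r + eps, r + eps) - K(r - eps, r - eps)) / eps
print(residual, V)
```

The same algebra is what makes the degenerate-kernel cases below (changing a normalization constant, or moving a zero of the Jost function) explicitly solvable.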
In the preceding sections, the Schrödinger equation was solved in terms of the free solutions, sine and exponential, and we obtained the most important ingredient for the solution of the inverse problem, namely the representation (3.29) in terms of the free solution φ^(0) = (sin kr)/k.
Suppose now we start from a given potential V1 satisfying (3.4), for which we assume that we can solve the Schrödinger equation completely, and for which, therefore, we know everything explicitly: the phase shift for all positive energies, the energies of all the bound states and the corresponding normalization constants, the regular solution φ1, the kernel K1 of (3.29), etc. We may then ask the question of whether it should be possible to start from V1, instead of starting from the free case V1 = 0, and determine the potential V from the spectral data (4.1). The advantage of starting from V1 is that this potential may sometimes be chosen close to V, or reproduce some characteristics of V.

The first step is to establish an integral representation similar to (3.29) for φ in terms of φ1 (instead of sin kr/k):

(4.11)  φ(√E, r) = φ1(√E, r) + ∫_0^r K(r, t) φ1(√E, t) dt.
For this purpose, we proceed along similar lines, and consider the integral

(4.12)  Ω(t, r) = ∫ [φ(√E, r) − φ1(√E, r)] φ1(√E, t) dρ1(E)
       = (1/πi) ∫_{−∞}^{∞} [φ(k, r) − φ1(k, r)] [f1(k, t)/F1(k)] k dk + Σ_{j=1}^{n1} Cj^(1) [φ(iγj^(1), r) − φ1(iγj^(1), r)] φ1(iγj^(1), t).

Now, in the last integral, if t > r, we can close the contour by a large semicircle in the upper half-plane, and we know that its contribution is zero. Therefore, the integral is given by the sum of the residues of the (simple) poles of the integrand due to the (simple) zeros of F1(k). Therefore, using (4.3) for φ1,

(4.13)  Ω(t, r) = Σ_j [φ(iγj^(1), r) − φ1(iγj^(1), r)] [2iγj^(1)/Ḟ1(iγj^(1)) + Cj^(1)/f1′(iγj^(1), 0)] f1(iγj^(1), t) = 0,  t > r,

since, as before, the bracket vanishes for each j.
If we invert now the integral transform (4.12) defining Ω(t, r), we indeed get (4.11), where K(r, t) ≡ Ω(t, r). After establishing (4.11), it is easy to establish the generalized Gelfand–Levitan integral equation by proceeding exactly in the same way we established (4.8). This time, we consider

(4.14)  Φ̃(r, t) = ∫ [φ(√E, t) − φ1(√E, t)] φ(√E, r) dρ(E),  r > t,

together with the estimate

(4.15)  φ(k, t) − φ1(k, t) = o(1/k).

Defining now, with dσ(E) = dρ(E) − dρ1(E),

(4.16)  G(r, t) = ∫ φ1(√E, r) φ1(√E, t) dσ(E),

we obtain

(4.17)  K(r, t) + G(r, t) + ∫_0^r K(r, s) G(s, t) ds = 0,  t < r.

The same analysis which led to (3.31) for the potential leads here to

(4.19)  ∂²K/∂r² − V(r) K = ∂²K/∂t² − V1(t) K

and

(4.20)  V(r) − V1(r) = 2 (d/dr) K(r, r).
An important special case is the one where the kernel G is degenerate. Suppose, for instance, that V and V1 have the same Jost function, and differ only by the normalization constant C of one bound state at k = iγ, so that G(r, t) = C φ1(iγ, r) φ1(iγ, t). If we define

(4.21)  R(r) = 1 + C ∫_0^r φ1²(iγ, s) ds,

then

(4.22)  V − V1 = −2 (d²/dr²) Log R(r)

and

(4.23)  K(r, t) = −C φ1(iγ, r) φ1(iγ, t)/R(r),

so that, by (4.11),

(4.24)  φ(iγ, r) = φ1(iγ, r)/R(r).
More generally, suppose the two Jost functions are related by

(4.25)  F(k) = Π_{j=1}^N [(k + i aj)/(k + i bj)] F1(k).

It is enough to consider a single factor,

(4.26)  F(k) = [(k + ia)/(k + ib)] F1(k),  a > 0, b > 0.

Assuming that there are no bound states, the kernel G of (4.16) becomes

(4.27)  G(r, t) = (2/π)(b² − a²) ∫_0^∞ φ1(k, r) φ1(k, t) [k²/((k² + a²) |F1(k)|²)] dk.

It is now easy to recognize that the above expression is, up to the factor (b² − a²), the resolvent kernel of the Schrödinger equation (3.9) with V1 and energy −a². Indeed, if we apply the differential operator D1 = −(d²/dr²) + V1 + a² to the integral and use the completeness relation for φ1(E, r), we obtain

(4.28)  D1 G(r, t) = (b² − a²) δ(r − t).

Moreover, G is symmetric in its arguments, and G(0, t) = 0, so G satisfies the Dirichlet boundary condition. It is easily checked that one has

(4.29)  G(r, s) = (b² − a²) φ1(ia, r<) f1(ia, r>)/F1(ia),  r< = min(r, s), r> = max(r, s),

where f1 is the Jost solution. The kernel G is now a separable kernel of rank one, and the Gelfand–Levitan equation is trivially solved, as was done above, and we can compute K(r, t) and the potential V explicitly in terms of φ1 and f1.

It can be shown that in the general case, where we have N poles and N zeros, the kernel G reduces to a sum of N terms analogous to (4.29), and again we can solve algebraically the integral equation and get V explicitly.
4.4.5. The Discrete Case. So far, we have assumed that the potential vanishes at infinity. This leads to the existence of the continuum in the energy spectrum, and therefore to scattering. Suppose now that the potential is infinite at infinity, V(∞) = +∞; for instance, V(r) ≈ λr^α, λ > 0, α > 0, as r → ∞. We have now what is called a confining potential, and whatever the boundary condition at r = 0 (Dirichlet, Neumann, or mixed), the spectrum is discrete and goes to +∞: E1 < E2 < ··· < En < ···. This case has attracted much interest because of its applications in particle physics (quark models, where the interaction between quarks is known to be confining, and where, at least for heavy quarks, the nonrelativistic Schrödinger equation is a good approximation), and other domains such as acoustics, etc. A similar, equivalent case is when we are dealing with the Schrödinger equation in a box, i.e., in a finite interval. Here also, if the potential has no strong singularities in the interval, it is well known that the spectrum is discrete whatever the boundary conditions at the ends of the interval are. And again the eigenvalues are nondegenerate and accumulate at infinity: En → ∞ as n → ∞. The inverse problem is now to find the potential from the energy spectrum and the appropriate normalization constants of the eigenfunctions. This problem is treated in Chapter 3.
4.4.6. Properties of the Potential. It is clear from the techniques used to establish the Gelfand–Levitan integral equation that Fourier analysis plays an essential role. Moreover, when the potential satisfies condition (2.20), most of the quantities of interest we have to use in order to get the potential from the scattering data are given by various Fourier integrals of L¹ functions, for instance, (3.81), (3.82), (3.85), (3.86), etc. Here we are going to show that it is possible to obtain precise properties of the potential from the properties of these Fourier transforms.

As we saw before, from the integral representations (3.81) and (3.82) for the Jost function it is possible to obtain all the other integral representations (3.85), (3.86), etc. Let us therefore consider the Jost function defined by (3.69) and (3.81):

(4.30)  F(k) = 1 + ∫_0^∞ e^{ikr} V(r) φ(k, r) dr,

(4.31)  F(k) = 1 + ∫_0^∞ e^{ikt} A(t) dt,  A(t) ∈ L¹(0, ∞).
Using the asymptotic behavior of φ(k, r) for large k, we can write (4.30) as

(4.32)  F(k) = [1 + ∫_0^∞ e^{ikr} V(r) (sin kr/k) dr] (1 + o(1)),  k → ∞,

since

(4.33)  ∫_0^∞ [r|V(r)|/(1 + |k|r)] dr → 0,  k → ∞.

Define now

(4.34)  W(r) = ∫_r^∞ V(t) dt.

Because of (2.20), W is integrable:

(4.35)  ∫_0^∞ dr ∫_r^∞ |V(t)| dt = ∫_0^∞ t|V(t)| dt < ∞.

Using now

(4.36)  e^{ikr} (sin kr)/k = ∫_0^r e^{2ikt} dt

in (4.32), and interchanging the orders of integration, we get

(4.37)  F(k) = 1 + ∫_0^∞ e^{2ikr} W(r) dr + ··· .

The similarity between this formula and (4.31) shows that, as far as the integrability properties are concerned, A(r) and W(r) are equivalent, and one can show quite rigorously that

(4.38)  A(r) ∈ L¹(0, ∞) ⟺ W(r) ∈ L¹(0, ∞).
Other relations of this kind can be obtained along the same lines. For instance, it is clear from (4.37) and (3.86) that one also has

(4.39)  g(r) ∈ L¹(0, ∞) ⟺ W(r) ∈ L¹(0, ∞).

The pointwise asymptotic behaviors are related by the Abelian theorems for Fourier transforms. For a cosine transform

(4.40)  ∫_0^∞ σ(x) cos kx dx,

the behavior σ(x) ≃ σ0 x^{−α}, 0 < α < 1, as x → +0, gives for the transform

(4.41)  σ0 Γ(1 − α) sin(πα/2) k^{α−1},  k → ∞,

and, in the same way, the behavior of σ(x) at x = ∞ governs the behavior of the transform as

(4.42)  k → 0.

In particular, for a potential which behaves near the origin like

(4.43)  V(r) ≃ V0 r^{−α},  r ≤ r0,  0 < α < 1,

one finds for the phase shift

(4.44)  δ(k) ≃ −V0 cos(πα/2) Γ(1 − α) (2k)^{α−1},  k → ∞.

Finally, if the potential has compact support, V(r) = 0 for r > R, it follows from the above representations that A(t) = 0 for t > 2R, so that the Jost function is an entire function of exponential type 2R. Conversely, it can also be shown easily that if the scattering data are such that the Jost function is of exponential type 2R, then V(r) = 0 for r > R.
4.5. Marchenko Equation

An equivalent method for solving the inverse problem is to use the Jost solution f(k, r) instead of the regular solution φ. Now, the Jost solution, for each fixed value of r, is an analytic function of k in Im k > 0 and is continuous and bounded on the real k-axis. Moreover, it has the asymptotic behavior shown in (3.61), and we have seen that we have the representation (3.80),

(5.1)  f(k, r) = e^{ikr} + ∫_r^∞ A(r, t) e^{ikt} dt.

The kernel A satisfies

(5.2)  ∂²A/∂r² − ∂²A/∂t² = V(r) A(r, t)

and

(5.3)  V(r) = −2 (d/dr) A(r, r).

Combining (5.1) with the Schrödinger equation, one also finds the integral equation

(5.4)  A(r, t) = (1/2) ∫_{(r+t)/2}^∞ V(s) ds + ∫_{(r+t)/2}^∞ ds ∫_0^{(t−r)/2} V(s − u) A(s − u, s + u) du,  t ≥ r.

One can directly check that this integral equation leads to (5.2) and (5.3). Iterating (5.4), we get

(5.5)  |A(r, t)| < (1/2) ∫_{(r+t)/2}^∞ |V(s)| ds · exp[∫_r^∞ u|V(u)| du].
r
d(E),
(
E, r) by (3.64). We get then two terms. Changing k into
0
k in the second integral, we get, remembering that |F (k)|2 = F (k) F (k),
1
i
F (k)
k2
f (k, r)
(k, t)dk.
k
F (k) F (k)
We now replace (k, t) again by (3.64) and use the denition of the S-matrix
(3.88), S(k) = F (k)/F (k), k real. The nal result is that (3.44) can be
written as
(5.6)
n
1
f (k, r) [f (k, t) S(k) f (k, t)] dk +
Cj j (r)j (t) = (r t),
2
j=1
176
Inverse Problems
Downloaded 11/22/13 to 190.144.171.70. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
where j (ij , r) are the bound states wave functions. Expressing this sum
in terms of fj (r) = f (ij , r) is easy. We have, since (k, 0) = 1 for all values
of k (remember (3.11)),
(5.7)
j (r) =
f (ij , r)
.
f (ij , 0)
(5.8)
Cj1 =
2j (r)dr =
0
1
2
(f (ij , 0))
fj2 (r) dr =
0
iF (ij )
,
2j f (ij , 0)
(5.9)
j=1
2ij
F (ij )f (ij , 0)
fj (r) fj (t) =
sj fj (r) fj (t).
We must notice here that fj (r), which has the precise asymptotic behavior
fj (r) = ej r + ,
(5.10)
r ,
is real because of (3.60), and therefore that sj > 0. The righthand side of (5.8)
gives us
(5.11)
fj2 dr = 1,
sj
0
If we now replace the Jost solutions f(−k, t) and f(k, t) inside the bracket by their integral representation (5.1), we get, for r < t,

(5.12)  (1/2π) ∫_{−∞}^{∞} f(k, r) [e^{−ikt} − S(k) e^{ikt}] dk + ∫_t^∞ A(t, u) {(1/2π) ∫_{−∞}^{∞} f(k, r) [e^{−iku} − S(k) e^{iku}] dk} du = 0.

With obvious notations, this is, for each fixed r (< t), a homogeneous integral equation of the form

(5.13)  Gr(t) + ∫_t^∞ A(t, u) Gr(u) du = 0,

where A(t, u) ∈ L²(t, ∞) in u. In other words, for each fixed r, Gr(t) for t > r satisfies a homogeneous Volterra integral equation with an L² kernel. Therefore, from the theory of Volterra integral equations, we infer that Gr(t) = 0; that is, given r, we have

(5.14)  (1/2π) ∫_{−∞}^{∞} f(k, r) [e^{−ikt} − S(k) e^{ikt}] dk = 0,  t > r.

We have been reasoning as though Gr(t) in (5.13) were an ordinary function, which is not the case. We are dealing here with a distribution, but with a little care the conclusion can be shown to hold. Since the starting point was the completeness relation (3.44), or (5.6), which has to be understood in the framework of the theory of distributions, likewise (5.14) has to be understood in the same sense: given an appropriate function F(t) which is C^∞ and whose support is outside [0, r], we have

∫_0^∞ F(t) Gr(t) dt = 0.

When there are bound states, we use

(5.15)  fj(t) = f(iγj, t) = e^{−γj t} + ∫_t^∞ A(t, u) e^{−γj u} du;

the sum over the bound states will be included in the definition of Gr(t), and we get again (5.13). The full result is then

(5.16)  (1/2π) ∫_{−∞}^{∞} f(k, r) [e^{−ikt} − S(k) e^{ikt}] dk + Σ_{j=1}^n sj fj(r) e^{−γj t} = 0,  r < t.

Replacing here f(k, r) by its representation (5.1), we finally obtain the Marchenko integral equation

(5.17)  A(r, t) + A0(r + t) + ∫_r^∞ A(r, u) A0(u + t) du = 0,  t ≥ r,

where

(5.18)  A0(t) = (1/2π) ∫_{−∞}^{∞} [1 − S(k)] e^{ikt} dk + Σ_{j=1}^n sj e^{−γj t}.

One can show that the scattering data {S(k), k ≥ 0; γj; sj} entering here and the spectral data entering the Gelfand–Levitan theory are completely equivalent. Another important point is that one can show the complete equivalence between the Gelfand–Levitan equation and the Marchenko equation.
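The simplest worked example is reflectionless data: S(k) ≡ 1 (so the Fourier part of the kernel vanishes) and one bound state at k = iκ with normalization constant s1. Then A0(t) = s1 e^{−κt}, the Marchenko equation is solved by a separable ansatz A(r, t) = a(r) e^{−κt}, and (5.3) gives the one-soliton (Bargmann) potential −2κ² sech²(κ(r − r0)). The numerical values below are arbitrary illustrative choices:

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

kappa, s1 = 1.5, 4.0                  # bound state and norming constant

def A(r, t):
    # closed-form solution of the Marchenko equation for A0(t) = s1 e^{-kt}
    a = -s1 * np.exp(-kappa * r) / (1.0 + s1 * np.exp(-2.0 * kappa * r)
                                    / (2.0 * kappa))
    return a * np.exp(-kappa * t)

# residual of A(r,t) + A0(r+t) + Int_r^inf A(r,u) A0(u+t) du at (r, t)
r, t = 0.4, 1.1
u = np.linspace(r, 40.0, 200001)
residual = (A(r, t) + s1 * np.exp(-kappa * (r + t))
            + trapezoid(A(r, u) * s1 * np.exp(-kappa * (u + t)), u))

# potential V(r) = -2 dA(r,r)/dr versus the one-soliton closed form
eps = 1e-5
V = -(A(r + eps, r + eps) - A(r - eps, r - eps)) / eps
r0 = np.log(s1 / (2.0 * kappa)) / (2.0 * kappa)
V_exact = -2.0 * kappa**2 / np.cosh(kappa * (r - r0))**2
print(residual, V, V_exact)
```

This reflectionless case is exactly the building block used by the KdV machinery of section 4.7, where each bound state of the associated Schrödinger problem becomes one soliton.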
4.6. Inverse Problem on the Line

The inverse problem on the line ℝ has become important because of its usefulness for solving explicitly many nonlinear partial differential equations of mathematical physics in one space and one time dimension. Both direct and inverse problems here have much in common with what we saw in previous sections with the Jost solution and the Marchenko integral equation. Thus, we will skip the details and give here only the essential points.

One important difference with the radial Schrödinger equation studied previously is that the two boundary points are both situated at infinity and are distinct from each other. This makes the Schrödinger equation similar to a system of two coupled equations on a half-line, so that the S-matrix is not a number but a 2-by-2 matrix. In the time-dependent description of scattering, one can send particles at t = −∞ toward the target (scattering region) either from x = −∞ (particles coming in from the left) or from x = +∞ (particles coming in from the right), and then one has to study what happens when t → +∞. In the time-independent description, given the time-independent Schrödinger equation

(6.1)  −ψ″ + V(x) ψ = E ψ,  −∞ < x < ∞,

we have two physical solutions, ψ−(x) and ψ+(x). ψ−(x) corresponds to incoming particles coming from x = −∞ at t = −∞. At t = +∞, it has the asymptotic behavior (E = k² > 0)

(6.2)  ψ−(k, x) ≅ tR(k) e^{ikx},  x → +∞;  ψ−(k, x) ≅ e^{ikx} + rL(k) e^{−ikx},  x → −∞.

tR(k) is the transmission coefficient to the right, rL(k) is the reflection coefficient to the left. In the second formula, e^{ikx} represents the continuous beam of incoming particles from the left, propagating to the right.

Likewise, we have the solution ψ+, which corresponds at t = −∞ to incoming particles from the right, propagating to the left: e^{−ikx}. Part of it is reflected back to the right at t = +∞, and another part goes through to the left and is found at t = +∞ at x = −∞. The asymptotic behavior of ψ+ is

(6.3)  ψ+(k, x) ≅ tL(k) e^{−ikx},  x → −∞;  ψ+(k, x) ≅ e^{−ikx} + rR(k) e^{ikx},  x → +∞.

We can therefore define the S-matrix

(6.4)  S(k) = [ s11(k)  s12(k) ; s21(k)  s22(k) ] = [ tR(k)  rL(k) ; rR(k)  tL(k) ].

It is unitary, and

(6.5)  tL(k) = tR(k),

i.e., the transmission coefficients to the left and to the right are equal. All of these will be shown below.

The solutions ψ+ and ψ− are the physical solutions and are similar to the physical solution of the radial case, (3.1) and (3.8). We can now define the Jost solutions of (6.1), f±(k, x), by precise asymptotic conditions at x = ±∞:

(6.6a)  f+(k, x) ≅ e^{ikx},  x → +∞,

(6.6b)  f−(k, x) ≅ e^{−ikx},  x → −∞.
In the radial case, we assumed that the potential satisfies (3.4). In the present case, since the origin x = 0 does not play any important role, it turns out that the condition which guarantees the existence of all these solutions, and therefore a good scattering theory, is

(6.7)  ∫_{−∞}^{∞} (1 + |x|) |V(x)| dx < ∞.

In other words, V is L¹, and also its first absolute moment is finite. As was previously shown for the radial case, we can now combine the differential equation (6.1) with the boundary conditions (6.6a) or (6.6b) to obtain the following integral equations:

(6.8a)  f+(k, x) = e^{ikx} − ∫_x^∞ [sin k(x − t)/k] V(t) f+(k, t) dt,

(6.8b)  f−(k, x) = e^{−ikx} + ∫_{−∞}^x [sin k(x − t)/k] V(t) f−(k, t) dt.

From these equations, we can then deduce all the properties of the solutions. We leave the details for the reader to check by following exactly what was done previously for the Jost solution, and summarize here the results in the following theorem.

Theorem 4.6.1. Under condition (6.7) the Jost solutions f+ and f− exist and are unique. f+ is defined and continuous in Im k ≥ 0 and holomorphic in Im k > 0. It there satisfies the bound

(6.9)  |f+(k, x) − e^{ikx}| < C e^{−x Im k}/(1 + |k|),

and similarly for f−:

(6.10)  |f−(k, x) − e^{−ikx}| < C e^{x Im k}/(1 + |k|).
From the above bounds, again using the Titchmarsh theorem (Theorem 4.3.6), we obtain, in complete agreement with the radial case, the integral representations

(6.11a)  f+(k, x) = e^{ikx} + ∫_x^∞ A+(x, y) e^{iky} dy,

(6.11b)  f−(k, x) = e^{−ikx} + ∫_{−∞}^x A−(x, y) e^{−iky} dy.

The kernels A± satisfy the wave equation

(6.12)  (∂²/∂x² − ∂²/∂y²) A±(x, y) = V(x) A±(x, y),

together with

(6.13)  A+(x, x) = (1/2) ∫_x^∞ V(t) dt,

(6.14)  A−(x, x) = (1/2) ∫_{−∞}^x V(t) dt,

and

(6.15)  A±(x, y) → 0,  y → ±∞.

In particular,

(6.16)  V(x) = −2 (d/dx) A+(x, x) = 2 (d/dx) A−(x, x).

For real k we also have the symmetry relations

(6.17)  f+(−k, x) = f+(k, x)*,

(6.18)  f−(−k, x) = f−(k, x)*,

which are obvious on (6.8a) and (6.8b) since we assume the potential V to be real.
From the definition of the physical solutions and their asymptotic behaviors at x = ±∞, we have

(6.19)  ψ−(k, x) = tR(k) f+(k, x) = f−(−k, x) + rL(k) f−(k, x)

and

(6.20)  ψ+(k, x) = tL(k) f−(k, x) = f+(−k, x) + rR(k) f+(k, x).

Computing Wronskians, W[u, v] = uv′ − u′v, of appropriate pairs of these solutions, one finds

(6.21)  2ik/s11(k) = W[f−(k, x), f+(k, x)],

(6.22)  2ik s12(k)/s11(k) = W[f+(k, x), f−(−k, x)],

(6.23)  2ik s21(k)/s11(k) = W[f+(−k, x), f−(k, x)].

From these relations, using the integral equations (6.8a) and (6.8b), it is now easy to obtain integral representations for the above functions, from which we can deduce the analytic and asymptotic properties of the transmission and reflection coefficients as functions of k, as in Chapter 3. Since the four functions f+(±k, x) and f−(±k, x) are not independent of each other (remember that a linear second-order differential equation has only two independent solutions, so that any of the first two solutions can be written as a linear combination of the second ones and vice versa), there should exist some relationships between the sij. They are very easily obtained from (6.21)–(6.23) and read

(6.24)  s22(k) = s11(k),

(6.25)  s11(k) s12(k)* + s21(k) s11(k)* = 0,

(6.26)  |s11|² + |s12|² = 1,

and

(6.27)  |s11|² + |s21|² = 1.

As was stated at the beginning of this chapter, the above relations imply the unitarity of the S-matrix.
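Unitarity can be checked numerically for any real potential. The sketch below (a hand-rolled RK4 integration; the square well and the value of k are arbitrary choices) integrates the Schrödinger equation leftward from a purely transmitted wave, reads off the incident and reflected amplitudes, and tests |t|² + |r|² = 1:

```python
import numpy as np

V0, a, k = 2.0, 1.0, 0.7              # well depth, half-width, wave number

def Vpot(x):
    return -V0 if abs(x) < a else 0.0

def deriv(x, y):                      # y = (f, f'), Schrodinger at E = k^2
    return np.array([y[1], (Vpot(x) - k**2) * y[0]])

def rk4_step(x, y, h):
    k1 = deriv(x, y)
    k2 = deriv(x + h/2, y + h/2 * k1)
    k3 = deriv(x + h/2, y + h/2 * k2)
    k4 = deriv(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# integrate leftward from the transmitted wave f = e^{ikx}, valid for x > a
x, h = 1.5, -1e-4
y = np.array([np.exp(1j*k*x), 1j*k*np.exp(1j*k*x)])
while x > -1.5:
    y = rk4_step(x, y, h)
    x += h

# on the left of the well, f = A e^{ikx} + B e^{-ikx}; t = 1/A, r = B/A
A = 0.5 * (y[0] + y[1]/(1j*k)) * np.exp(-1j*k*x)
B = 0.5 * (y[0] - y[1]/(1j*k)) * np.exp(1j*k*x)
t, r = 1.0/A, B/A
print(abs(t)**2 + abs(r)**2)
```

The underlying invariant is the Wronskian of f and its complex conjugate, which is exactly conserved for a real potential; the numerical deviation from 1 measures only the integration error.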
We now must study the bound states, if any. This can be done along similar lines as in the radial case. There the S-matrix was defined for each partial wave (each ℓ) by (3.88),

(6.28)  S(k) = F(−k)/F(k),  k real,

so that

(6.29)  S(−k) = [S(k)]* = [S(k)]⁻¹,  k real,

and

(6.30)  |S(k)| = 1,  k real.

When k → ∞, we have

(6.31)  s11(k) = 1 + O(1/|k|),  s12(k) = O(1/|k|),  s21(k) = O(1/|k|).

The function s11(k) for k real is the limiting value of an analytic function in the upper half-plane, with the exception of a finite number of simple poles at the points kj = iγj, j = 1, 2, …, n. The residues at these poles are given by

(6.32)  Res_{k=iγj} s11(k) = iμj,  μj real.

The S-matrix is unitary, which means that, for k real, we have (6.25) and (6.26).
Before going further, it is appropriate to make some remarks. First of all, because the S-matrix is unitary because of the conservation of probabilities, and satisfies condition (6.5) because of time-reversal invariance, it depends on three real functions of k, 0 ≤ k ≤ ∞. However, there is one more relation among these three functions. It is provided by the analyticity of s11(k). Because of (6.26), we have, for k real,

(6.33)  |s11(k)| = [1 − |s12(k)|²]^{1/2},

s11(k) being analytic in Im k > 0 and continuous on Im k = 0, except for simple poles at kj = iγj. Thus we have the dispersion relations (Hilbert relations) in Im k > 0,

(6.34)  s11(k) = Π_{j=1}^n [(k − iγj)/(k + iγj)] exp[(1/2πi) ∫_{−∞}^{∞} Log(1 − |s12(k′)|²)/(k′ − k) dk′],

and, of course, the corresponding boundary value

(6.35)  s11(k) = Π_{j=1}^n [(k − iγj)/(k + iγj)] (1 − |s12(k)|²)^{1/2} exp[(1/2πi) P ∫_{−∞}^{∞} Log(1 − |s12(k′)|²)/(k′ − k) dk′],  k real.

Finally, from the unitarity relations and the reality symmetries, we obtain

(6.36)  s21(k) = −s12(−k) s11(k)/s11(−k),

which is valid for k real and compatible with (6.31). As a conclusion, all the elements of the S-matrix can be determined from s12(k), i.e., from two real functions and, of course, the poles kj = iγj.
This remark is very important for the inverse problem. As we said at the beginning of this section, we have to deal with a problem which is similar to a two-channel problem. We must find the potential on [0, ∞) and on (−∞, 0]. To find these two potentials, we need something equivalent, i.e., two other real functions, and they are indeed the real and the imaginary parts of s12(k), together with the bound state energies Ej = −γj² and the corresponding residues μj.

We should mention before continuing that a careful analysis of the elements of the S-matrix at k = 0 turns out to be necessary in order to make the previous analysis completely rigorous, both for the direct and the inverse problem. And for this purpose, one has to assume that the potential decreases at infinity faster than shown in (6.7). The extra condition to be imposed on the potential is

∫_{−∞}^{∞} x² |V(x)| dx < ∞.

With this extra condition, it is then found that one can obtain equivalence relations similar to (4.41) and (4.42) of the radial case.
the radial case, we dene the kernels
(6.37)
(6.38)
1
+ (t) =
2
1
(t) =
2
s21 (k)
e2ikt dk
n
(1)
Mj ej t ,
j=1
n
(2)
Mj ej t ,
j=1
where
(6.39)
(1)
Mj
2
f+
(ij , x) dx
and
(6.40)
(2)
Mj
2
f
(ij , x) dx
Downloaded 11/22/13 to 190.144.171.70. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
186
Inverse Problems
(6.41)
Mj
(2)
Mj
= 2j .
A (x, y)
A (x, t) (x + y + t) dt + (x + y) = 0,
0
the rst one with + signs appropriate for y > 0, and the second with - signs for
y < 0. One can again show that each of these integral equations has a unique
solution. These two solutions give us the potential by
(6.43a)
A+ (x, 0) =
V (t) dt,
x
(6.43b)
A (x, 0) =
V (t) dt.
In conclusion, from the knowledge of both the function s12 (k) for real
values of k and the energies of the bound states, together with the normalizing
(1)
(2)
constants Mj and Mj , we obtain the potential in a unique way.
4.7. Nonlinear Evolution Equations

As we said in the introduction, the most important applications of the techniques of inverse problems in quantum scattering theory (as opposed to scattering of acoustic, elastic, and electromagnetic waves) turn out to be in the field of nonlinear evolution equations governing many interesting (classical) physical phenomena, a partial list of which was also given. Here, as an example, we briefly treat the Korteweg–de Vries equation (KdV) in one time and one spatial dimension, which governs the motion of water waves in a shallow channel. This equation turns out to be useful also in plasma physics. It was discovered in the last century (exactly 100 years ago) and is the archetype of evolution equations solvable by our inverse problem techniques. It reads

(7.1)  u_t − 6uu_x + u_xxx = 0,

where the lower index means the derivative and u(x, t) is the amplitude of the wave.
The KdV equation has many kinds of solutions (oscillatory waves, solitary waves, etc.). What is of interest to us is the solitary wave: a single wave traveling with some speed and keeping its shape. It is easily seen that (7.1) admits such a solution, given by

(7.2)  u(x, t) = −(C/2) / cosh²[(√C/2)(x − Ct − x0)].

The amplitude is proportional to the speed C, while the half-width Δ of the wave, defined by |u| = |u|max/2, is

(7.3)  Δ = (2/√C) cosh⁻¹ √2,

i.e., inversely proportional to the square root of C. This is also another general characteristic of nonlinear equations admitting solitary waves. Henceforth, x0 plays no important role, and we take it to be x0 = 0. What is important to remember is that the amplitude, the speed, and the shape (half-width) of a solitary wave are not independent of each other. There is one free parameter, which can be taken to be the speed C. This free parameter can be chosen at will: there are solitary waves corresponding to any C > 0.
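One can verify (7.2) directly, without any algebra, by finite differences; the grid, the speed C, and the step sizes below are arbitrary choices:

```python
import numpy as np

# Finite-difference check that the solitary wave (7.2) solves
# u_t - 6 u u_x + u_xxx = 0.

C, x0 = 4.0, 0.0                      # speed and initial position

def u(x, t):
    return -(C / 2.0) / np.cosh(0.5 * np.sqrt(C) * (x - C * t - x0))**2

x = np.linspace(-5.0, 5.0, 2001)
t0 = 0.3
h, ht = 1e-3, 1e-3                    # space and time steps

u_t = (u(x, t0 + ht) - u(x, t0 - ht)) / (2 * ht)
u_x = (u(x + h, t0) - u(x - h, t0)) / (2 * h)
u_xxx = (-u(x - 2*h, t0) + 2*u(x - h, t0) - 2*u(x + h, t0)
         + u(x + 2*h, t0)) / (2 * h**3)

residual = u_t - 6.0 * u(x, t0) * u_x + u_xxx
print(np.max(np.abs(residual)))
```

The residual is limited only by the O(h²) truncation error of the central differences.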
So far, we have been looking at (7.2) for t ≥ 0 and x ≥ x₀ (= 0), i.e., we
assume that somehow a solitary wave, governed by equation (7.1), is generated
at t = 0 at the point x = 0 and then moves on with the speed C to the right as
time goes on. We can of course look at it for all t and all x and put the creation
of the solitary wave in the remote past (t = -∞), at the point x = -∞, and
then let it evolve, moving to the right as time goes on. With this in mind, let
us look at the solution of (7.1) given by

(7.4)    u(x, t) = -12 [3 + 4 cosh(2x - 8t) + cosh(4x - 64t)] / [3 cosh(x - 28t) + cosh(3x - 36t)]²,

and study it as t → ±∞. It is easily seen that the wave then separates into two
solitons, each of the form (7.2), with speeds C₁ = 4 and C₂ = 16. The solitary
wave which has the higher speed, C₂, and therefore the larger amplitude, is
much farther to the left than the one with the speed C₁. If we imagine that, by
some mechanism in the remote past, t = -T (T very large), these waves have
been created at a distance ∆x = (C₂ - C₁)T from each other, the higher wave
being to the left of the smaller, then as time goes on we find that at t = 0 they
overlap completely. Then at t = +T, they separate again completely and the
higher one, which is faster, is found much farther to the right. However, the
distance between the two solitary waves is now ∆x = (C₂ - C₁)T + L, where
L = 2 log 3. This shift, which would not exist for linear equations, is due to
the interaction of the two solitary waves.
What comes out of this simple example, and is true in general, is that if
the solution u is made of several solitary waves of the form (7.2) in the remote
past, with different speeds, located far away at x → -∞, with locations and
separations in accordance with their speeds, then when t → +∞, they separate
again, keeping their shapes, but are shifted with respect to each other by
∆x_ij = (C_j - C_i)T + L_ij, where the extra shift L_ij, due to the interaction, is
a function of the speeds of the solitary waves involved.
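This asymptotic separation is easy to observe numerically. The sketch below (Python with NumPy assumed) evaluates the standard two-soliton solution, the one with initial profile u(x, 0) = -6/cosh²x, as given, e.g., in Drazin and Johnson [11], at a late time, and locates the two troughs, whose depths approach the one-soliton amplitudes C₁/2 = 2 and C₂/2 = 8:

```python
import numpy as np

def two_soliton(x, t):
    """Standard two-soliton solution of u_t - 6 u u_x + u_xxx = 0
    with u(x, 0) = -6 / cosh(x)**2 (speeds C1 = 4, C2 = 16)."""
    num = 3 + 4 * np.cosh(2 * x - 8 * t) + np.cosh(4 * x - 64 * t)
    den = (3 * np.cosh(x - 28 * t) + np.cosh(3 * x - 36 * t))**2
    return -12 * num / den

x = np.arange(0.0, 120.0, 0.01)
u = two_soliton(x, 5.0)        # late time: the two solitons are far apart

fast = u[x > 50].min()         # trough near x = 16 t = 80
slow = u[x <= 50].min()        # trough near x = 4 t = 20
print(fast, slow)              # close to -8 and -2
```

Tracking the trough positions over a range of times also exhibits the extra shift L discussed above, which is absent for linear equations.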
Now the question is, how can we solve the equation (7.1) in general? The
very ingenious way found by Kruskal et al. is to look at (7.1) as an initial value
problem, i.e., we are given, at the time t = 0,

(7.5)    u(x, t = 0) = u₀(x),

and we consider the Schrödinger equation with u₀(x) as the potential:

(7.6)    -ψ″ + u₀(x)ψ = Eψ,    E = k².

The potential u₀(x) is a nice potential, thus we can solve (7.6) and find the
scattering data {s12(k); κj, Mj(1), Mj(2), j = 1, 2, . . . , n}. Of course, solving the
Marchenko equations of the inverse problem with these scattering data, we get
back the potential u₀(x).
On the other hand, using u₀(x) as the initial value at t = 0 for (7.1), we
can find the solution u(x, t), at least for some finite interval of t around t = 0.
This solution u(x, t) is a (complicated) functional of u₀(x). Suppose now we use
u(x, t) as the potential in (7.6), t being considered merely as a parameter. We
then find the scattering data {s12(k, t); κj(t), Mj(1)(t), Mj(2)(t), j = 1, . . . , n}.
One may then ask for the relation between these scattering data and those
corresponding to u₀(x) = u(x, t = 0). It turns out, and this was the great
discovery of Kruskal et al., that if u(x, t) evolves according to (7.1), with
u(x, t = 0) = u₀(x), then

(7.7a)    κj(t) = κj(0) = κj,

(7.7b)    Mj(1)(t) = Mj(1)(0) exp(8κj³t),

(7.7c)    Mj(2)(t) = Mj(2)(0) exp(-8κj³t),

(7.7d)    s12(k, t) = s12(k, 0) exp(8ik³t).

These results are extremely simple in view of the fact that the functional
dependence of u(x, t) on u₀(x) is very complicated and cannot be expressed in
closed form.
The strategy for finding u(x, t), the solution of (7.1), from the initial data
u₀(x) is now clear. We begin by solving the Schrödinger equation (7.6) with
the potential u₀(x). Then we use the corresponding scattering data in (7.7a)-
(7.7d) to find the scattering data for t ≠ 0. We now use these in the inverse
problem of Section 4.6 with the Marchenko equations. These equations then
lead to the potential u(x, t), the solution of (7.1). As we see here, to go from
u₀(x) to u(x, t) we have two steps: first solve a second-order ordinary differential
equation and then a (Fredholm) integral equation. However, both are linear
equations and this is the real beauty of the method.
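For reflectionless scattering data (the pure soliton case), the second step can even be carried out in closed form: the Marchenko equation reduces to a finite n × n linear system, and the potential is given by the standard determinant formula u(x, t) = -2 (d²/dx²) log det(I + A(x, t)), where A is built from the κj and the norming constants evolved as in (7.7). The sketch below (Python with NumPy; the normalization convention for the norming constants is an assumption, not taken from the text) reconstructs a one-soliton field this way:

```python
import numpy as np

def kdv_solution(x, t, kappas, c0_sq):
    """u(x, t) = -2 d^2/dx^2 log det(I + A) for reflectionless data.

    kappas : bound-state parameters kappa_j (energies -kappa_j^2)
    c0_sq  : norming constants c_j(0)^2 at t = 0
    """
    kappas = np.asarray(kappas, dtype=float)
    # Time evolution of the norming constants, as in (7.7b):
    c_sq = np.asarray(c0_sq, dtype=float) * np.exp(8 * kappas**3 * t)

    def logdet(xv):
        K = kappas[:, None] + kappas[None, :]
        A = np.sqrt(np.outer(c_sq, c_sq)) / K * np.exp(-K * xv)
        return np.linalg.slogdet(np.eye(len(kappas)) + A)[1]

    h = 1e-4   # central second difference for d^2/dx^2
    return -2 * (logdet(x + h) - 2 * logdet(x) + logdet(x - h)) / h**2

# One bound state kappa = 1 with c(0)^2 = 2 gives the soliton
# u(x, t) = -2 / cosh(x - 4 t)**2, i.e. (7.2) with C = 4 and x0 = 0.
print(kdv_solution(1.2, 0.3, [1.0], [2.0]))   # about -2.0
```

With several κj the same routine produces multisoliton fields directly, with no time stepping of the nonlinear equation at all, which illustrates the linearity of the two steps.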
We did not give the proofs of the relations (7.7a)-(7.7d), but they are simple
and need only elementary algebra. Once these relations are established, one
can then show that there is a one-to-one correspondence between the bound
states of u₀(x), {-κj², j = 1, . . . , n}, and the solitary waves of the equation
(7.1) with the initial condition u₀(x). To each bound state with energy -κj²
there corresponds a solitary wave with the speed Cj = 4κj². At time t = 0, we
may have solitons interacting and also some transmission and reflection due to
s22(k) and s21(k). However, as time goes on, because of the very oscillatory
character of exp(8ik³t) in (7.7d), the effect of the reflection becomes more and
more negligible and we are left with superpositions of pure soliton solutions
moving to the right according to what we saw before, with some amount of
transmission to the left.
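This correspondence can be seen numerically on the standard example u₀(x) = -6/cosh²x, the initial profile producing the two solitons with speeds 4 and 16. A rough finite-difference discretization of (7.6) (a sketch in Python with NumPy; grid size and box length are illustrative choices) yields two negative eigenvalues, -4 and -1, i.e. κ = 2 and κ = 1, and hence soliton speeds 16 and 4:

```python
import numpy as np

# Discretize -psi'' + u0(x) psi = E psi on [-L, L] with Dirichlet ends.
L, n = 20.0, 1200
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
u0 = -6.0 / np.cosh(x)**2

# Tridiagonal second-difference Laplacian plus the potential.
H = (np.diag(np.full(n, 2.0 / dx**2) + u0)
     - np.diag(np.full(n - 1, 1.0 / dx**2), 1)
     - np.diag(np.full(n - 1, 1.0 / dx**2), -1))

E = np.linalg.eigvalsh(H)        # ascending eigenvalues
bound = E[:2]                    # the two lowest: the bound states
speeds = -4 * bound              # soliton speeds C_j = 4 kappa_j^2
print(bound, speeds)             # approximately [-4, -1] and [16, 4]
```

The remaining eigenvalues are positive box-discretized continuum states; only the negative ones contribute solitons.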
The inverse method technique shown above to solve the KdV equation
applies to many other nonlinear evolution equations. It allows us to reduce
the solution of many complicated nonlinear partial differential equations to
the solution of a linear differential equation (not necessarily of Schrödinger
type, but also coupled first-order equations, etc.) and then the solution of a
linear (Fredholm-type) integral equation, a great simplification indeed. We
refer the reader to the vast literature on the subject. For an elementary and
comprehensive account, we recommend the book of Drazin and Johnson [11].
At a bound state, the Jost function vanishes:

(A.1)    F(iκ) ≡ f(iκ, r = 0) = 0.

Differentiating the Schrödinger equation for the Jost solution f(k, r) with
respect to k and forming the Wronskian of f and ḟ = ∂f/∂k, one finds that
(d/dr)(f ḟ′ - f′ ḟ) = -2kf², and that this Wronskian vanishes at r = ∞ when

(A.2)    k = iκ.

Integrating from r = 0 to ∞ and using (A.1), we obtain

(A.3)    2iκ ∫₀^∞ f²(iκ, r) dr + Ḟ(iκ) f′(iκ, 0) = 0,

where the dot denotes differentiation with respect to k. Using (4.3), we finally
get, for the normalization constant C of the bound state wave function,

(A.5)    1/C = ∫₀^∞ φ²(iκ, r) dr = (1/[f′(iκ, 0)]²) ∫₀^∞ f²(iκ, r) dr = -Ḟ(iκ) / (2iκ f′(iκ, 0)),

which is the desired result. This relation applies to each bound state individually.
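As a consistency check, the relation can be verified numerically on an exactly solvable model. The sketch below (Python with NumPy and SciPy; the square well V(r) = -V₀ on 0 < r < a and all variable names are illustrative choices, not taken from the text) computes the Jost function in closed form, locates the bound state from F(iκ) = 0, and compares the two sides of (A.5):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

V0, a = 5.0, 1.0   # illustrative square well: V = -V0 on (0, a), 0 beyond

def F(k):
    """Jost function F(k) = f(k, 0) for the square well (k may be complex)."""
    q = np.sqrt(k**2 + V0 + 0j)
    return np.exp(1j * k * a) * (np.cos(q * a) - 1j * k / q * np.sin(q * a))

# Bound state: zero of F on the positive imaginary axis, F(i*kappa) = 0.
kappa = brentq(lambda kp: F(1j * kp).real, 0.5, 1.5)
q = np.sqrt(V0 - kappa**2)

def f(r):
    """Jost solution f(i*kappa, r); equals exp(-kappa*r) for r > a."""
    if r >= a:
        return np.exp(-kappa * r)
    return np.exp(-kappa * a) * (np.cos(q * (r - a))
                                 - kappa / q * np.sin(q * (r - a)))

# f'(i*kappa, 0), from the interior form of f.
fp0 = np.exp(-kappa * a) * (q * np.sin(q * a) - kappa * np.cos(q * a))

# Left side of (A.5): integral of phi^2 with phi = f / f'(i*kappa, 0).
int_f2 = quad(lambda r: f(r)**2, 0.0, a)[0] + np.exp(-2 * kappa * a) / (2 * kappa)
lhs = int_f2 / fp0**2

# Right side of (A.5): -Fdot(i*kappa) / (2 i kappa f'(i*kappa, 0)),
# with Fdot from a central difference in k along the real direction.
eps = 1e-6
Fdot = (F(1j * kappa + eps) - F(1j * kappa - eps)) / (2 * eps)
rhs = (-Fdot / (2j * kappa * fp0)).real

print(lhs, rhs)   # the two sides agree
```

The same check can be repeated for wells deep enough to support several bound states, applying the relation to each κ separately.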
Acknowledgments. These lectures, given at the University of Oulu in
June 1994, were written in part when the author was a guest of the Research
Institute for Theoretical Physics of the University of Helsinki. He thanks both
universities and Professors Lassi Päivärinta and Masud Chaichian for warm
hospitality and financial support.
References
1. Quantum Mechanics
[1] A. Galindo and P. Pascual, Quantum Mechanics, 2 volumes, Springer-Verlag,
Berlin, 1990. This is an excellent treatise on quantum mechanics, at the graduate
level, for theoretical and mathematical physicists. The physics of scattering
theory is well explained here, as well as in Newton's book listed below.
3. Inverse Problems
[7] K. Chadan and P. C. Sabatier, Inverse Problems in Quantum Scattering Theory,
Springer-Verlag, Berlin, 1989.
[8] V. A. Marchenko, Sturm-Liouville Operators and Applications, Birkhäuser,
Basel, 1986.
[9] R. G. Newton, Inverse Schrödinger Scattering in Three Dimensions, Springer-Verlag, New York, 1989. See also, by the same author, Factorization of
the S-matrix, J. Math. Phys., 31 (1990), pp. 2414-2424, and his review paper
in Inverse Problems in Mathematical Physics, L. Päivärinta and E. Somersalo,
eds., Springer-Verlag, Berlin, 1993.
4. Nonlinear Equations and Solitons
[10] M. J. Ablowitz and H. Segur, Solitons and the Inverse Scattering Transform,
SIAM, Philadelphia, 1981.
[11] P. G. Drazin and R. S. Johnson, Solitons: An Introduction, Cambridge University Press, Cambridge, 1993.
[12] A. C. Newell, Solitons in Mathematics and Physics, SIAM, Philadelphia, 1985.
5. Mathematics
[13] E. C. Titchmarsh, Theory of Fourier Integrals, 2nd ed., Oxford University Press,
Oxford, 1967.
[14] E. C. Titchmarsh, Eigenfunction Expansions, Part I, 2nd ed., Oxford University Press, Oxford,
1962.
[15] G. Hellwig, Differential Operators of Mathematical Physics, Addison-Wesley,
Reading, MA, 1967.
[16] R. P. Boas, Entire Functions, Academic Press, New York, 1954.
[17] M. Reed and B. Simon, Methods of Modern Mathematical Physics, Vols. I-IV,
Academic Press, New York, 1972-1979.
[18] F. Smithies, Integral Equations, Cambridge University Press, London, 1965.
[19] D. Colton and R. Kress, Integral Equation Methods in Scattering Theory, Wiley,
New York, 1983.
[20]
[21]
[22]
[23]
[24]