
INSTITUTE OF PHYSICS PUBLISHING INVERSE PROBLEMS

Inverse Problems 21 (2005) 1953–1973 doi:10.1088/0266-5611/21/6/010


A variational approach to an elastic inverse problem

B M Brown^1, M Jais^1 and I W Knowles^2

^1 School of Computer Science, Cardiff University, Cardiff CF24 3AA, UK
^2 Department of Mathematics, University of Alabama at Birmingham, Birmingham, AL 35294, USA

E-mail: m.jais@cs.cf.ac.uk

Received 14 April 2005, in final form 4 October 2005
Published 28 October 2005
Online at stacks.iop.org/IP/21/1953
Abstract
We present a variational approach to the seismic inverse problem of determining the coefficients $C$ and $\rho$ of the hyperbolic system of partial differential equations
$$\sum_{j,k,l} \frac{\partial}{\partial x_j}\Big(C_{i,j,k,l}(x)\,\frac{\partial}{\partial x_l}u_k(x,t)\Big) = \rho(x)\,\frac{\partial^2}{\partial t^2}u_i, \qquad 1 \le i \le n,$$
from traction and displacement data measured on the surface. A crucial point of our approach will be a transformation of the above system to an elliptic system of partial differential equations
$$-\sum_k \nabla\cdot\big(C_{i,k}\nabla \bar u_k(x,s)\big) + s^2\rho\,\bar u_i(x,s) = 0, \qquad 1 \le i \le n.$$
Thus, we transform the inverse problem for a hyperbolic system to an inverse problem for an elliptic system. We give a definition of the direct and inverse seismic problem, where we distinguish between the isotropic and anisotropic cases. Further, we develop the theoretical results that we need for a successful recovery procedure of the coefficients $C$ and $\rho$ in the isotropic case. Our approach consists of a minimization procedure based on a conjugate gradient descent algorithm. Finally, we present various numerical results that show the effectiveness of our approach.
1. Introduction
In every elastic media , the stress and the strain e are linked by the relation

i,j
=

k,l
C
i,j,k,l
e
k,l
, (1)
where C is the fourth-order elasticity tensor. Because of the symmetry of the strain and stress
tensors, the three-dimensional elasticity tensor has at most 36 independent entries and in the
case of perfect elasticity only 21 independent entries. If the properties of the elastic solid
vary with direction, then the elasticity tensor has indeed up to 36 or 21 independent entries:
0266-5611/05/061953+21$30.00 2005 IOP Publishing Ltd Printed in the UK 1953
1954 B M Brown et al
this is the so-called anisotropic case; while if the properties of the solid do not vary with the
direction, then the elasticity tensor is called isotropic and has only two independent entries:
the so-called Lam e parameters and . The elements of C are in general not continuous or
differentiable but bounded. In this work, we conne ourselves to the isotropic case where the
elements of the elasticity tensor satisfy
C
i,j,k,l
=
i,j

k,l
+ (
i,l

j,k
+
i,k

j,l
). (2)
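The isotropic tensor (2) is straightforward to assemble numerically; a minimal sketch (the function name and the NumPy route are ours, not the paper's), building $C_{i,j,k,l}$ from given Lamé parameters via Kronecker deltas:

```python
import numpy as np

def isotropic_tensor(lam, mu, n=3):
    """Assemble C_{ijkl} = lam*d_ij*d_kl + mu*(d_il*d_jk + d_ik*d_jl)."""
    d = np.eye(n)  # Kronecker delta
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('il,jk->ijkl', d, d)
                    + np.einsum('ik,jl->ijkl', d, d)))

C = isotropic_tensor(lam=2.0, mu=1.0)
# the symmetry C_{ijkl} = C_{klij}, i.e. (C_{i,k})^T = C_{k,i}, holds
assert np.allclose(C, C.transpose((2, 3, 0, 1)))
```

The diagonal entry $C_{1,1,1,1} = \lambda + 2\mu$ and the shear entry $C_{1,2,1,2} = \mu$ come out as expected.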
The seismic wave equations are defined by
$$\rho\,\frac{\partial^2 u_i}{\partial t^2} = \sum_{j,k,l} \partial_j\big(C_{i,j,k,l}\,\partial_l u_k\big), \qquad 1 \le i \le n, \qquad (3)$$
over $\Omega \times (0,\infty)$ with initial conditions for $u(x,0) \in L^2(0,T;H^1(\Omega))$ and $\partial_t u(x,0) \in L^2(\Omega \times (0,T))$ and either Dirichlet boundary conditions
$$u(x,t)|_{\partial\Omega} = \phi(x,t) \in L^2(0,T;H^{1/2}(\partial\Omega)), \qquad (4)$$
when we have measurements of the displacement on the boundary, or Neumann boundary conditions
$$\sum_{j,k,l} C_{i,j,k,l}\,\partial_j u_k\,\nu_l = \psi_i(x,t) \in L^2(0,T;H^{-1/2}(\partial\Omega)), \qquad 1 \le i \le n, \qquad (5)$$
when we have measurements of the traction on the boundary. The infinite time interval is a matter of convenience since a typical seismic event takes place over a fixed time period and thus the signal may eventually be taken to be zero. This is the direct problem for the seismic wave equation. An important condition for the unique solvability of the seismic wave equation in the isotropic case is the strong convexity condition
$$\mu > 0, \qquad 2\mu + \lambda > 0. \qquad (6)$$
This can be guaranteed if the associated Poisson ratio
$$\nu = \frac{\lambda}{2(\lambda + \mu)} \qquad (7)$$
takes values only in the interval [0, 0.5]. In this paper, we shall be concerned with the seismic inverse problem in the isotropic case, which consists of recovering the Lamé coefficients and the density from measurements of displacement–traction pairs on the boundary.
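The convexity condition (6) and the Poisson ratio (7) are easy to check for candidate parameters; a small sketch (helper names are ours):

```python
def poisson_ratio(lam, mu):
    """Poisson ratio nu = lam / (2 (lam + mu)) as in (7)."""
    return lam / (2.0 * (lam + mu))

def strongly_convex(lam, mu):
    """Strong convexity condition (6): mu > 0 and 2 mu + lam > 0."""
    return mu > 0 and 2.0 * mu + lam > 0

lam, mu = 2.0, 1.0
nu = poisson_ratio(lam, mu)          # = 1/3, inside [0, 0.5]
assert strongly_convex(lam, mu) and 0.0 <= nu <= 0.5
```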
2. Formulation of the problem

Let $\Omega$ be an open, simply connected subset of $\mathbb{R}^n$ with $C^{1,1}$ boundary. We will often consider sub-matrices $C_{i,k}$, $1 \le i,k \le n$, of the elasticity tensor $C$, defined by
$$(C_{i,k})_{j,l} = C_{i,j,k,l}, \qquad 1 \le j,l \le n.$$
In accordance with (2) and (6), we make the following assumptions on $C$ and $\rho$:
$$C \in L^\infty(\Omega)^{n\times n\times n\times n}, \qquad \rho \in L^\infty(\Omega), \qquad C_{i,k} = C^T_{k,i}, \qquad (8)$$
$$\langle CX, X\rangle_{\mathbb{R}^{n\times n}} \ge 0, \qquad \forall X \in \mathbb{R}^{n\times n}, \qquad (9)$$
$$C \ge \alpha > 0, \qquad (10)$$
$$\rho(x) \ge \beta > 0, \qquad x \in \Omega, \qquad (11)$$
where $\alpha$ and $\beta$ are constants in $\mathbb{R}$ and $\langle X, Y\rangle_{\mathbb{R}^{n\times n}} = \sum_{i,j} X_{i,j}Y_{i,j}$. Under these assumptions, the seismic direct problem has a unique solution and depends continuously on the given data
(see for example [16]). However, the seismic direct problem can only be solved if one knows the coefficients $C$ and $\rho$ in (3). The seismic inverse problem can be formulated as follows:

Problem 2.1 (the seismic inverse problem). Given $u(x,0)$, $\partial_t u(x,0)$ in $\Omega \times \{0\}$ and independent measurements of displacement–traction pairs $(\phi_m, \psi_m)$, $m = 1, 2, \ldots,$ on $\partial\Omega \times (0,T)$, recover the elasticity tensor $C$ and the density $\rho$ in $\Omega$.
Our approach to solving problem 2.1 is inspired by a method developed by Knowles for elliptic problems (see for example [10, 11]). Briefly, this consists of defining a suitable functional $F(E,\sigma)$ which has the property that $F(E,\sigma) \ge 0$ and $F(E,\sigma) = 0 \iff (E,\sigma) = (C,\rho)$. The numerical algorithm commences with an arbitrary guess for $(C,\rho)$, then a gradient descent method is applied until $F(E,\sigma) = 0$. Of course, there are many functionals that have this property but the one defined in [12] is inspired by much experience in solving elliptic problems and seems to have no spurious local minima. Since (3) is not an elliptic equation but a hyperbolic equation, we have to transform it first, in order to apply our method. Therefore, we apply a Laplace transformation
$$\bar u_i(x,s) = \int_0^\infty e^{-st}\, u_i(x,t)\,dt, \qquad 1 \le i \le n, \qquad (12)$$
to (3). The transformed equation is then given by
$$-\sum_k \nabla\cdot\big(C_{i,k}\nabla \bar u_k\big) + s^2\rho\,\bar u_i = 0, \qquad 1 \le i \le n. \qquad (13)$$
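The transform (12) is easy to apply to sampled time-domain data; a minimal numerical sketch (trapezoidal quadrature, our own choice), truncating the integral at the end of the recorded window since the seismic signal is taken to be zero afterwards:

```python
import numpy as np

def laplace_transform(u_samples, t, s):
    """Approximate int_0^T exp(-s t) u(t) dt by the trapezoidal rule
    on a uniform time grid t."""
    f = np.exp(-s * t) * u_samples
    dt = t[1] - t[0]
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

t = np.linspace(0.0, 50.0, 20001)
u = np.exp(-t)                        # toy signal with known transform 1/(s+1)
ubar = laplace_transform(u, t, s=2.0)
assert abs(ubar - 1.0 / 3.0) < 1e-4
```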
The boundary conditions (4) and (5) then have the form
$$\bar u(x,s)|_{\partial\Omega} = \bar\phi(x,s) \in H^{1/2}(\partial\Omega)^n \qquad (14)$$
in the Dirichlet case or
$$\big(\partial_\nu^C \bar u(x,s)\big)_i\big|_{\partial\Omega} := \Big\langle \sum_k C_{i,k}\nabla_x \bar u_k(x,s)\big|_{\partial\Omega},\, \nu\Big\rangle_{\mathbb{R}^n} = \bar\psi_i(x,s) \in H^{-1/2}(\partial\Omega), \qquad 1 \le i \le n, \qquad (15)$$
in the Neumann case.
Being independent of $t$, the tensor $C$ and the density $\rho$ have not changed under the transformation and we can restrict ourselves, in what follows, to the transformed equation. For this reason, we will write $u$, $\phi$ and $\psi$ instead of $\bar u$, $\bar\phi$ and $\bar\psi$, respectively. The boundary problem we want to work with is then given by
$$-\sum_k \nabla\cdot\big(C_{i,k}\nabla u_k\big) + s^2\rho\,u_i = 0, \qquad 1 \le i \le n, \qquad (16)$$
and either
$$u(x,s)|_{\partial\Omega} = \phi(x,s) \qquad (17)$$
in the Dirichlet case or
$$\big(\partial_\nu^C u(x,s)\big)_i\big|_{\partial\Omega} := \Big\langle \sum_k C_{i,k}\nabla_x u_k(x,s)\big|_{\partial\Omega},\, \nu\Big\rangle_{\mathbb{R}^n} = \psi_i(x,s), \qquad 1 \le i \le n, \qquad (18)$$
in the Neumann case. Equation (16) has the following properties:
(i) Since $C$ and $\rho$ do not depend on $s$, we do not have to solve (16) for all $s \in \mathbb{C}$ and we can restrict ourselves to one (or finitely many) real $s$.
(ii) All the occurring variables are real-valued.
(iii) The partial differential equation (16) is strongly elliptic, since $C$, $\rho$ and $s^2$ are all positive, and therefore the corresponding Dirichlet and Neumann problems have exactly one solution.

We make the following definition.

Definition 2.2. In the case of Dirichlet boundary conditions, we define the differential operator $A_{C,\rho} : H^1_0(\Omega)^n \to H^{-1}(\Omega)^n$ by
$$\big(A_{C,\rho}u\big)_i = -\sum_k \nabla\cdot\big(C_{i,k}\nabla u_k\big) + s^2\rho\,u_i, \qquad 1 \le i \le n. \qquad (19)$$
In the case of Neumann boundary conditions, we define the operator $\tilde A_{C,\rho} : H^1(\Omega)^n \to \tilde H^{-1}(\Omega)^n$ by
$$\big(\tilde A_{C,\rho}u\big)_i = -\sum_k \nabla\cdot\big(C_{i,k}\nabla u_k\big) + s^2\rho\,u_i, \qquad 1 \le i \le n. \qquad (20)$$
Since the system of partial differential equations (16) is strongly elliptic, there exists for each $\phi \in H^{1/2}(\partial\Omega)^n$ a unique solution $u(x)$ that satisfies (16) and the Dirichlet boundary condition (17). We can therefore define a Dirichlet–Neumann map $\Lambda_{C,\rho} : H^{1/2}(\partial\Omega)^n \to H^{-1/2}(\partial\Omega)^n$ by
$$\big(\Lambda_{C,\rho}(\phi)\big)_i = \Big\langle \sum_k C_{i,k}\nabla u_k\big|_{\partial\Omega},\, \nu\Big\rangle_{\mathbb{R}^n}, \qquad 1 \le i \le n, \qquad (21)$$
where $u$ satisfies (16) and $u|_{\partial\Omega} = \phi$. The seismic inverse problem can then be formulated as follows:
Problem 2.3. Given the Dirichlet–Neumann map $\Lambda_{C,\rho} : H^{1/2}(\partial\Omega)^n \to H^{-1/2}(\partial\Omega)^n$, determine $C$ and $\rho$ in $\Omega$.
Remark. Knowledge of the Dirichlet–Neumann map assumes that the waves are created by applying displacements $\phi_m$ on the surface. In our computations in section 3, we will assume that the waves are created by tractions $\psi_m$. Thus, $\psi_m$ is noise-free, but $\phi_m$ might contain noise.
In the isotropic case, we have the following uniqueness result due to Nakamura and Uhlmann (see [18, 21]).

Theorem 2.4 (uniqueness of the inverse problem). Let $n \ge 3$. Let $(\lambda_i, \mu_i, \rho_i) \in C^\infty(\bar\Omega) \times C^\infty(\bar\Omega) \times C^1(\bar\Omega)$, $i = 1, 2$, satisfy the strong convexity condition $\mu > 0$, $\lambda + 2\mu > 0$, $\rho > 0$. Assume $\Lambda_{(\lambda_1,\mu_1,\rho_1)} = \Lambda_{(\lambda_2,\mu_2,\rho_2)}$. Then, $(\lambda_1,\mu_1,\rho_1) = (\lambda_2,\mu_2,\rho_2)$ in $\Omega$.
Therefore, under the above assumptions, any solution of the inverse problem can be uniquely recovered from the Dirichlet–Neumann map. The smoothness assumptions on $\lambda$, $\mu$ and $\rho$ in the above theorem are of a technical nature, since the proof makes extensive use of pseudodifferential operators. We remark that this result no longer holds in the anisotropic case, where the entries of $C$ can vary with direction (see [15]). However, one of us (MJ) has recently shown [5] that the support of the coefficients may be recovered even in this case by an extension of Kirsch's factorization method (see [7]).

To solve the seismic inverse problem, we want to define a functional $G(E,\sigma)$, on some domain $D_G$, that has a unique global minimum for $(E,\sigma) = (C,\rho)$. We now discuss this concept and start by introducing some notation.
Notation 2.5 (subindices and superindices). We will use the subindices $i, j, k, l$ to denote components of vectors, the subindices $m, z, r$ to denote components of infinite or finite sequences, the superindices $E$ and $C$ to denote dependence on tensors and the superindices $\sigma$ and $\rho$ to denote dependence on density functions. For example, we denote the $k$th component of the solution $u$ of the boundary value problem
$$-\sum_k \nabla\cdot\big(C_{i,k}\nabla u_k\big) + s^2\rho\,u_i = 0, \qquad 1 \le i \le n, \qquad u|_{\partial\Omega} = g,$$
by $u^{C,\rho}_k$. The only exceptions to this rule will be our notation of the Dirichlet–Neumann map and of differential operators, where we keep the standard notation of $\Lambda_{C,\rho}$ and $A_{C,\rho}$ instead of $\Lambda^{C,\rho}$ and $A^{C,\rho}$.

Knowledge of the Dirichlet–Neumann map $\Lambda_{C,\rho} : H^{1/2}(\partial\Omega)^n \to H^{-1/2}(\partial\Omega)^n$ corresponds to knowledge of $\Lambda_{C,\rho}\phi_m$ for every $\phi_m \in H^{1/2}(\partial\Omega)^n$, where $\phi_m$, $m = 1, 2, \ldots,$ is a known basis of $H^{1/2}(\partial\Omega)^n$.
For each pair $(E,\sigma)$ we can define solutions $u^{E,\sigma}_m$, $m = 1, 2, \ldots,$ and $\bar u^{E,\sigma}_m$, $m = 1, 2, \ldots,$ where $u^{E,\sigma}_m$ satisfies (16) and the Dirichlet boundary condition
$$u^{E,\sigma}_m\big|_{\partial\Omega} = \phi_m, \qquad (22)$$
and $\bar u^{E,\sigma}_m$ satisfies (16) and the Neumann boundary condition
$$\partial_\nu^E\, \bar u^{E,\sigma}_m\big|_{\partial\Omega} = \Lambda_{C,\rho}\,\phi_m. \qquad (23)$$
With the help of the above definitions, we can now define a functional $G$ that will prove to have the desired properties we need for a successful descent procedure. We define the domain of the functional $G$ as follows:
$$D_G = \big\{(E,\sigma) \mid E \text{ satisfies (8)-(10)},\ (E - C)|_{\partial\Omega} = 0,\ E \in L^\infty(\Omega)^{n\times n\times n\times n} \text{ and } E \in H^{1/2+\epsilon}(\Omega)^{n\times n\times n\times n}$$
$$\text{in a neighbourhood of } \partial\Omega \text{ for some } \epsilon > 0,\ \sigma \text{ satisfies (11)}\big\}.$$
Remark. The domain $D_G$ implies that we know the tensor $C$ and thus the Lamé coefficients $\lambda$ and $\mu$ on the boundary. This is certainly a restriction; however, in most physical applications this is justified.
Definition 2.6 (the functional G). We define the functional $G(E,\sigma)$ on $D_G$ by
$$G(E,\sigma) = \sum_{m=1}^\infty \alpha_m \int_\Omega \sum_{i,k} E_{i,k}\,\nabla\big(u^{E,\sigma}_{m,k} - \bar u^{E,\sigma}_{m,k}\big)\cdot\nabla\big(u^{E,\sigma}_{m,i} - \bar u^{E,\sigma}_{m,i}\big) + \sum_i s^2\sigma\big(u^{E,\sigma}_{m,i} - \bar u^{E,\sigma}_{m,i}\big)^2\,dx, \qquad (24)$$
where all $\alpha_m \ge 0$, $m = 1, 2, \ldots,$ are chosen in a way such that the series converges, with at least one $\alpha_m > 0$.
Remark. In an implementation with real data, we have only finitely many Dirichlet–Neumann pairs $(\phi_m, \Lambda_{C,\rho}\phi_m)$, $1 \le m \le M$, and thus we can choose $\alpha_m = 1$ for $1 \le m \le M$.
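On a grid, one addend of the functional (24) is directly computable once $u_m$ and $\bar u_m$ are available; a sketch for $n = 2$ (array shapes, quadrature and helper names are our own assumptions):

```python
import numpy as np

def g_misfit(E, sigma, s, u, ubar, h):
    """Discrete version of one addend of G. E has shape (2,2,2,2); sigma is a
    (N,N) density sample; u, ubar are (2,N,N) vector fields on an h-grid."""
    d = u - ubar
    # grad[i][j] approximates the partial derivative d_j of (u_i - ubar_i)
    grad = [np.gradient(d[i], h) for i in range(2)]
    val = np.zeros_like(sigma)
    for i in range(2):
        for k in range(2):
            for j in range(2):
                for l in range(2):
                    # (C_{i,k})_{j,l} = C_{i,j,k,l} acting on the gradients
                    val += E[i, j, k, l] * grad[k][l] * grad[i][j]
        val += s**2 * sigma * d[i] ** 2
    return val.sum() * h**2              # crude quadrature of the integral

d_ = np.eye(2)                           # isotropic E with lam = 2, mu = 1
E = (2.0 * np.einsum('ij,kl->ijkl', d_, d_)
     + 1.0 * (np.einsum('il,jk->ijkl', d_, d_)
              + np.einsum('ik,jl->ijkl', d_, d_)))
N = 32
xx, yy = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N), indexing='ij')
u = np.stack([np.sin(np.pi * xx), 0 * yy])
# the misfit vanishes exactly when u = ubar and is positive otherwise
assert g_misfit(E, np.ones((N, N)), 1.0, u, u, 1.0 / (N - 1)) == 0.0
assert g_misfit(E, np.ones((N, N)), 1.0, u, 0 * u, 1.0 / (N - 1)) > 0.0
```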
In the following, we will write $u$ and $\bar u$ instead of $u^{E,\sigma}$ and $\bar u^{E,\sigma}$ when it is clear to which $E$ and $\sigma$ we are referring. An obvious property of $G$ is the following.

Theorem 2.7. $G(E,\sigma) \ge 0$ and, if uniqueness holds for the seismic inverse problem, we also have
$$G(E,\sigma) = 0 \iff (E,\sigma) = (C,\rho).$$
Proof. The fact that $G(E,\sigma) \ge 0$ follows from (9) and (11). If $G(E,\sigma) = 0$, then $u^{E,\sigma}_m = \bar u^{E,\sigma}_m$. Since $u^{E,\sigma}_m$ and $\bar u^{E,\sigma}_m$ satisfy the same strongly elliptic partial differential equation, they can only be equal if they satisfy the same boundary conditions. Therefore,
$$\Lambda_{E,\sigma}\phi_m\big|_{\partial\Omega} = \partial_\nu^{E,\sigma} u^{E,\sigma}_m\big|_{\partial\Omega} = \partial_\nu^{E,\sigma} \bar u^{E,\sigma}_m\big|_{\partial\Omega} = \Lambda_{C,\rho}\phi_m\big|_{\partial\Omega} \qquad \forall m,$$
which means that $\Lambda_{E,\sigma} = \Lambda_{C,\rho}$ and therefore by uniqueness $(E,\sigma) = (C,\rho)$.

Before we can prove more properties of the functional $G$, we need some intermediate results.
Lemma 2.8. If $H|_{\partial\Omega} = 0$, $(E,\sigma) \in D_G$, $(E+H,\sigma) \in D_G$ and $\sigma$ is fixed, the following holds:
$$\lim_{H\to 0} \big\|u^{E+H} - u^E\big\|_{H^1(\Omega)^n} = 0,$$
where $u^{E+H}$ and $u^E$ solve (16) for $C = E + H$ and $C = E$, respectively. Analogously, we have if $(E,\sigma) \in D_G$, $(E,\sigma+h) \in D_G$ and $E$ is fixed:
$$\lim_{h\to 0} \big\|u^{\sigma+h} - u^{\sigma}\big\|_{H^1(\Omega)^n} = 0,$$
where $u^{\sigma+h}$ and $u^{\sigma}$ solve (16) for $\rho = \sigma + h$ and $\rho = \sigma$, respectively.

Proof. The proof of lemma 2.8 is standard and therefore omitted.
We also need the Fréchet differentiability of the solution $u$ as a function of $E$ and as a function of $\sigma$.

Lemma 2.9. The Fréchet derivative of $u(E) \in H^1(\Omega)^n$ as a function of the tensor $E$ is given by
$$u'(E)H = A^{-1}_E\big(\nabla\cdot(H\nabla u^E)\big).$$
Here we have omitted the dependence on the density $\sigma$, since we do not allow it to vary. The Fréchet derivative of $u(\sigma)$ with respect to the density is given by
$$u'(\sigma)h = -A^{-1}_E\big(s^2 h\, u^{\sigma}\big).$$
Here we have omitted the dependence on the tensor $E$, since here we always use the same $E$.

Proof. The proof is again a standard proof and therefore omitted. The interested reader can find it in [4].
Another result that we shall need is the following.

Lemma 2.10. For any tensor $\Theta \in L^\infty(\Omega)^{n\times n\times n\times n}$ with $\Theta \in H^{1/2+\epsilon}(N)^{n\times n\times n\times n}$ in a neighbourhood $N$ of $\partial\Omega$ and $\Theta|_{\partial\Omega} = 0$, the following inequality holds:
$$\big\|\nabla\cdot\big(\Theta\nabla u^{E+H}\big)\big\|_{H^{-1}(\Omega)^n} \le K$$
for some real constant $K$ that does not depend on $H$, where $E, E+H \in D_G$ and $\|H\|_{L^\infty} < \delta$ for a fixed $\delta > 0$. (We are only interested in $H$ with small norm since we always demand that $H \to 0$ in the results to come.) Here $\nabla\cdot$ stands for the element-wise operation of $\nabla\cdot$.

Proof. We define a functional $F : H^1_0(\Omega)^n \to \mathbb{R}$ by
$$F(v) = -\int_\Omega \sum_{i,k} \Theta_{i,k}\nabla u^{E+H}_k \cdot \nabla v_i\,dx,$$
which satisfies
$$|F(v)| \le K\, \|v\|_{H^1_0(\Omega)^n},$$
where $K$ does not depend on $H$, because of lemma 2.8 and since $\|H\|_{L^\infty} < \delta$. Therefore, $F \in H^{-1}(\Omega)^n$. From this we can conclude
$$\|F\| = \big\|\nabla\cdot\big(\Theta\nabla u^{E+H}\big)\big\|_{H^{-1}(\Omega)^n} \le K.$$
This completes the proof.
We shall now calculate the Gâteaux derivative of the functional $G$. The formula for the Gâteaux derivative of $G(E,\sigma) = \sum_m \alpha_m\, g(E,\sigma,\phi_m)$, where
$$g(E,\sigma,\phi_m) = \int_\Omega \sum_{i,k} E_{i,k}\,\nabla\big(u^{E,\sigma}_{m,k} - \bar u^{E,\sigma}_{m,k}\big)\cdot\nabla\big(u^{E,\sigma}_{m,i} - \bar u^{E,\sigma}_{m,i}\big) + \sum_i s^2\sigma\big(u^{E,\sigma}_{m,i} - \bar u^{E,\sigma}_{m,i}\big)^2\,dx,$$
can obviously be obtained by computing the Gâteaux derivatives of $g(E,\sigma,\phi_m)$, $m = 1, 2, \ldots$. Since the dependence on $m$ affects only the boundary conditions of $u_m$ and $\bar u_m$, we can suppress the dependence on $m$ in the following and concentrate on a generic functional $g(E,\sigma)$ which will represent any functional $g(E,\sigma,\phi_m)$.

Definition 2.11. We define the generic functional $g(E,\sigma)$ by
$$g(E,\sigma) = \int_\Omega \sum_{i,k} E_{i,k}\,\nabla\big(u^{E,\sigma}_{k} - \bar u^{E,\sigma}_{k}\big)\cdot\nabla\big(u^{E,\sigma}_{i} - \bar u^{E,\sigma}_{i}\big) + \sum_i s^2\sigma\big(u^{E,\sigma}_{i} - \bar u^{E,\sigma}_{i}\big)^2\,dx. \qquad (25)$$
Therefore, $g(E,\sigma)$ represents an arbitrary addend of $G(E,\sigma)$.
The last five lemmas enable us to prove one of the main theoretical results needed in our approach to solving the seismic inverse problem.

Theorem 2.12 (Gâteaux derivative of G). For $(E,\sigma), (E+H,\sigma+h) \in D_G$, the Gâteaux derivative $G'(E,\sigma)(H,h)$ of the functional $G$ is given by
$$G'(E,\sigma)(H,h) = \sum_{m=1}^\infty \alpha_m \int_\Omega \sum_{i,k} H_{i,k}\big(\nabla u^{E,\sigma}_{m,k}\cdot\nabla u^{E,\sigma}_{m,i} - \nabla\bar u^{E,\sigma}_{m,k}\cdot\nabla\bar u^{E,\sigma}_{m,i}\big) + \sum_i s^2 h\Big(\big(u^{E,\sigma}_{m,i}\big)^2 - \big(\bar u^{E,\sigma}_{m,i}\big)^2\Big)\,dx. \qquad (26)$$
Proof. As we pointed out above, it is sufficient to prove this for the functional $g(E,\sigma)$. As it will be clear throughout this proof to which $(E,\sigma)$ we are referring, we omit the superscripts of $u$ and $\bar u$ in this proof. Now we take $r \in \mathbb{R}$ and differentiate the expression
$$-\sum_k \nabla\cdot\big((E_{i,k} + rH_{i,k})\nabla u^{E+rH,\sigma+rh}_k\big) + s^2(\sigma + rh)\,u^{E+rH,\sigma+rh}_i = 0, \qquad 1 \le i \le n,$$
with respect to $r$. Since we know from lemma 2.9 that $u$ and $\bar u$ are differentiable with respect to $r$, we can calculate $w_k = \frac{\partial u_k}{\partial r}\big|_{r=0}$ and $\bar w_k = \frac{\partial \bar u_k}{\partial r}\big|_{r=0}$. The functions $w$ and $\bar w$ then satisfy the following equation:
$$\sum_k \nabla\cdot\big(E_{i,k}\nabla w_k\big) = s^2\sigma w_i + s^2 h u_i - \sum_k \nabla\cdot\big(H_{i,k}\nabla u_k\big). \qquad (27)$$
From (22) we get
$$w_i|_{\partial\Omega} = 0, \qquad 1 \le i \le n, \qquad (28)$$
since $\phi$ does not depend on $r$, and from (23) we get
$$\Big\langle \sum_k E_{i,k}\nabla \bar w_k,\, \nu\Big\rangle_{\mathbb{R}^n}\Big|_{\partial\Omega} = 0, \qquad (29)$$
since again $\Lambda_{C,\rho}\phi$ does not depend on $r$ and also since $H|_{\partial\Omega} = 0$. Now we can calculate
$$g'(E,\sigma)(H,h) = \frac{\partial}{\partial r}\, g(E + rH, \sigma + rh)\Big|_{r=0}$$
$$= \int_\Omega \sum_{i,k} \big(H_{i,k}\nabla(u_k - \bar u_k)\cdot\nabla(u_i - \bar u_i)\big) + \sum_{i,k}\big(E_{i,k}\nabla(w_k - \bar w_k)\cdot\nabla(u_i - \bar u_i)\big)$$
$$+ \sum_{i,k}\big(E_{i,k}\nabla(u_k - \bar u_k)\cdot\nabla(w_i - \bar w_i)\big) + 2s^2\sigma\sum_i (u_i - \bar u_i)(w_i - \bar w_i) + s^2 h \sum_i (u_i - \bar u_i)^2\,dx.$$
Consider now the function $T$ defined by
$$T := \int_\Omega \sum_{i,k} \big(E_{i,k}\nabla(w_k - \bar w_k)\cdot\nabla(u_i - \bar u_i)\big)\,dx.$$
We get
$$T = \int_\Omega \sum_{i,k} \big(E_{i,k}\nabla w_k\cdot\nabla u_i - E_{i,k}\nabla w_k\cdot\nabla \bar u_i - E_{i,k}\nabla \bar w_k\cdot\nabla u_i + E_{i,k}\nabla \bar w_k\cdot\nabla \bar u_i\big)\,dx$$
$$= \oint_{\partial\Omega} \sum_{i,k} w_k\big\langle E^T_{i,k}\nabla u_i,\, \nu\big\rangle_{\mathbb{R}^n}\,dS - \int_\Omega \sum_{i,k} w_k\,\nabla\cdot\big(E^T_{i,k}\nabla u_i\big)\,dx$$
$$- \oint_{\partial\Omega} \sum_{i,k} w_k\big\langle E^T_{i,k}\nabla \bar u_i,\, \nu\big\rangle_{\mathbb{R}^n}\,dS + \int_\Omega \sum_{i,k} w_k\,\nabla\cdot\big(E^T_{i,k}\nabla \bar u_i\big)\,dx$$
$$- \oint_{\partial\Omega} \sum_{i,k} u_i\big\langle E_{i,k}\nabla \bar w_k,\, \nu\big\rangle_{\mathbb{R}^n}\,dS + \int_\Omega \sum_{i,k} u_i\,\nabla\cdot\big(E_{i,k}\nabla \bar w_k\big)\,dx$$
$$+ \oint_{\partial\Omega} \sum_{i,k} \bar u_i\big\langle E_{i,k}\nabla \bar w_k,\, \nu\big\rangle_{\mathbb{R}^n}\,dS - \int_\Omega \sum_{i,k} \bar u_i\,\nabla\cdot\big(E_{i,k}\nabla \bar w_k\big)\,dx.$$
Now we apply (28) and (29) to deduce that the boundary integrals are zero and substitute expressions (16) and (27), giving
$$T = -\int_\Omega \sum_k w_k\, s^2\sigma u_k\,dx + \int_\Omega \sum_k w_k\, s^2\sigma \bar u_k\,dx$$
$$+ \int_\Omega \sum_i u_i\big(s^2\sigma \bar w_i + s^2 h \bar u_i\big) - \sum_{i,k} u_i\,\nabla\cdot\big(H_{i,k}\nabla \bar u_k\big)\,dx$$
$$- \int_\Omega \sum_i \bar u_i\big(s^2\sigma \bar w_i + s^2 h \bar u_i\big) - \sum_{i,k} \bar u_i\,\nabla\cdot\big(H_{i,k}\nabla \bar u_k\big)\,dx.$$
We can do the same for
$$S := \int_\Omega \sum_{i,k}\big(E_{i,k}\nabla(u_k - \bar u_k)\cdot\nabla(w_i - \bar w_i)\big)\,dx,$$
giving
$$S = \int_\Omega \sum_{i,k}\big(E_{i,k}\nabla u_k\cdot\nabla w_i - E_{i,k}\nabla \bar u_k\cdot\nabla w_i - E_{i,k}\nabla u_k\cdot\nabla \bar w_i + E_{i,k}\nabla \bar u_k\cdot\nabla \bar w_i\big)\,dx$$
$$= \oint_{\partial\Omega}\sum_{i,k} w_i\big\langle E_{i,k}\nabla u_k,\, \nu\big\rangle_{\mathbb{R}^n}\,dS - \int_\Omega \sum_{i,k} w_i\,\nabla\cdot\big(E_{i,k}\nabla u_k\big)\,dx$$
$$- \oint_{\partial\Omega}\sum_{i,k} w_i\big\langle E_{i,k}\nabla \bar u_k,\, \nu\big\rangle_{\mathbb{R}^n}\,dS + \int_\Omega \sum_{i,k} w_i\,\nabla\cdot\big(E_{i,k}\nabla \bar u_k\big)\,dx$$
$$- \oint_{\partial\Omega}\sum_{i,k} u_k\big\langle E^T_{i,k}\nabla \bar w_i,\, \nu\big\rangle_{\mathbb{R}^n}\,dS + \int_\Omega \sum_{i,k} u_k\,\nabla\cdot\big(E^T_{i,k}\nabla \bar w_i\big)\,dx$$
$$+ \oint_{\partial\Omega}\sum_{i,k} \bar u_k\big\langle E^T_{i,k}\nabla \bar w_i,\, \nu\big\rangle_{\mathbb{R}^n}\,dS - \int_\Omega \sum_{i,k} \bar u_k\,\nabla\cdot\big(E^T_{i,k}\nabla \bar w_i\big)\,dx.$$
Again we apply (16) and (27)-(29) to get
$$S = -\int_\Omega \sum_i w_i\, s^2\sigma u_i\,dx + \int_\Omega \sum_i w_i\, s^2\sigma \bar u_i\,dx$$
$$+ \int_\Omega \sum_k u_k\big(s^2\sigma \bar w_k + s^2 h \bar u_k\big) - \sum_{i,k} u_k\,\nabla\cdot\big(H^T_{i,k}\nabla \bar u_i\big)\,dx$$
$$- \int_\Omega \sum_k \bar u_k\big(s^2\sigma \bar w_k + s^2 h \bar u_k\big) - \sum_{i,k} \bar u_k\,\nabla\cdot\big(H^T_{i,k}\nabla \bar u_i\big)\,dx.$$
If we now use these expressions for $T$ and $S$, we get
$$g'(E,\sigma)(H,h) = \int_\Omega \sum_{i,k}\big(H_{i,k}\nabla(u_k - \bar u_k)\cdot\nabla(u_i - \bar u_i)\big) - \sum_{i,k} u_i\,\nabla\cdot\big(H_{i,k}\nabla \bar u_k\big) + \sum_{i,k} \bar u_i\,\nabla\cdot\big(H_{i,k}\nabla \bar u_k\big)$$
$$- \sum_{i,k} u_k\,\nabla\cdot\big(H^T_{i,k}\nabla \bar u_i\big) + \sum_{i,k} \bar u_k\,\nabla\cdot\big(H^T_{i,k}\nabla \bar u_i\big)\,dx$$
$$+ \int_\Omega s^2\sigma \sum_i \underbrace{2(-w_i u_i + w_i \bar u_i + u_i \bar w_i - \bar u_i \bar w_i) + 2w_i u_i - 2w_i \bar u_i - 2u_i \bar w_i + 2\bar u_i \bar w_i}_{=0}\,dx$$
$$+ \int_\Omega \sum_i 2s^2 h\, u_i \bar u_i - 2s^2 h\, \bar u_i^2 + s^2 h\,(u_i - \bar u_i)^2\,dx.$$
As we can see in the previous expression, the second part of the last integral as well as the whole second line from the bottom vanishes. If we now make a further integration by parts and note that $H|_{\partial\Omega} = 0$, we get
$$g'(E,\sigma)(H,h) = \int_\Omega \sum_{i,k}\big(H_{i,k}\nabla(u_k - \bar u_k)\cdot\nabla(u_i - \bar u_i)\big) + \sum_{i,k} H_{i,k}\nabla \bar u_k\cdot\nabla u_i - \sum_{i,k} H_{i,k}\nabla \bar u_k\cdot\nabla \bar u_i$$
$$+ \sum_{i,k} H_{i,k}\nabla \bar u_k\cdot\nabla u_i - \sum_{i,k} H_{i,k}\nabla \bar u_k\cdot\nabla \bar u_i + \sum_i s^2 h\big(u_i^2 - \bar u_i^2\big)\,dx$$
$$= \int_\Omega \sum_{i,k} H_{i,k}\big(\nabla u_k\cdot\nabla u_i - \nabla \bar u_k\cdot\nabla \bar u_i\big) + \sum_i s^2 h\big(u_i^2 - \bar u_i^2\big)\,dx.$$
Therefore, the Gâteaux derivative of $G$ is given by (26).
Before we show that the Gâteaux derivative is also a Fréchet derivative, we calculate the second Gâteaux derivative.

Theorem 2.13 (second Gâteaux derivative of G). If $H|_{\partial\Omega} = 0$ and $L|_{\partial\Omega} = 0$, then the second Gâteaux derivative is given by
$$G''(E,\sigma)[(L,l),(H,h)] = \sum_{m=1}^\infty \alpha_m \int_\Omega 2\Big(\big\langle \tilde A^{-1}_{E,\sigma}\,\bar d_m(L,l),\, \bar d_m(H,h)\big\rangle_{\mathbb{R}^n} - \big\langle A^{-1}_{E,\sigma}\,d_m(L,l),\, d_m(H,h)\big\rangle_{\mathbb{R}^n}\Big)\,dx,$$
where
$$d_m(L,l)_i = \sum_k \nabla\cdot\big(L_{i,k}\nabla u^{E,\sigma}_{m,k}\big) - s^2 l\,u^{E,\sigma}_{m,i}, \qquad 1 \le i \le n,$$
and
$$\bar d_m(L,l)_i = \sum_k \nabla\cdot\big(L_{i,k}\nabla \bar u^{E,\sigma}_{m,k}\big) - s^2 l\,\bar u^{E,\sigma}_{m,i}, \qquad 1 \le i \le n.$$
$d_m(H,h)$ and $\bar d_m(H,h)$ are defined analogously.
Proof. We use the fact that
$$-A_{E,\sigma}\, u^{E,\sigma} + A_{E+L,\sigma+l}\, u^{E+L,\sigma+l} = 0$$
to get
$$\big(A_{E,\sigma}\big(u^{E+L,\sigma+l} - u^{E,\sigma}\big)\big)_i = \sum_k \nabla\cdot\big(L_{i,k}\nabla u^{E+L,\sigma+l}_k\big) - s^2 l\,u^{E+L,\sigma+l}_i, \qquad 1 \le i \le n, \qquad (30)$$
and conclude
$$\big(u^{E+L,\sigma+l} - u^{E,\sigma}\big)_i = \Big(A^{-1}_{E,\sigma}\big(\nabla\cdot\big(L\nabla u^{E+L,\sigma+l}\big) - s^2 l\,u^{E+L,\sigma+l}\big)\Big)_i, \qquad 1 \le i \le n. \qquad (31)$$
Again we will restrict ourselves to the generic functional $g(E,\sigma)$ being a representative of any addend of $G(E,\sigma)$. Then,
$$\Delta g' := g'(E+L,\sigma+l)(H,h) - g'(E,\sigma)(H,h)$$
$$= \int_\Omega \sum_{i,k} H_{i,k}\Big(\nabla u^{E+L,\sigma+l}_k\cdot\nabla u^{E+L,\sigma+l}_i - \nabla u^{E,\sigma}_k\cdot\nabla u^{E,\sigma}_i - \nabla \bar u^{E+L,\sigma+l}_k\cdot\nabla \bar u^{E+L,\sigma+l}_i + \nabla \bar u^{E,\sigma}_k\cdot\nabla \bar u^{E,\sigma}_i\Big)$$
$$+ \sum_i s^2 h\Big(\big(u^{E+L,\sigma+l}_i\big)^2 - \big(u^{E,\sigma}_i\big)^2 - \big(\bar u^{E+L,\sigma+l}_i\big)^2 + \big(\bar u^{E,\sigma}_i\big)^2\Big)\,dx$$
$$= \int_\Omega \sum_{i,k} H_{i,k}\Big(\nabla u^{E,\sigma}_k\cdot\nabla\big(u^{E+L,\sigma+l}_i - u^{E,\sigma}_i\big) + \nabla u^{E+L,\sigma+l}_k\cdot\nabla\big(u^{E+L,\sigma+l}_i - u^{E,\sigma}_i\big)$$
$$- \nabla \bar u^{E,\sigma}_k\cdot\nabla\big(\bar u^{E+L,\sigma+l}_i - \bar u^{E,\sigma}_i\big) - \nabla \bar u^{E+L,\sigma+l}_k\cdot\nabla\big(\bar u^{E+L,\sigma+l}_i - \bar u^{E,\sigma}_i\big)\Big)$$
$$+ \sum_i s^2 h\Big(\big(u^{E+L,\sigma+l}_i\big)^2 - \big(u^{E,\sigma}_i\big)^2 - \big(\bar u^{E+L,\sigma+l}_i\big)^2 + \big(\bar u^{E,\sigma}_i\big)^2\Big)\,dx,$$
since
$$\int_\Omega \sum_{i,k} H_{i,k}\nabla u^{E+L,\sigma+l}_k\cdot\nabla u^{E,\sigma}_i = \int_\Omega \sum_{i,k} H_{i,k}\nabla u^{E,\sigma}_k\cdot\nabla u^{E+L,\sigma+l}_i.$$
An integration by parts, (31) and the formula $a^2 - b^2 = (a-b)(a+b)$ give
$$\Delta g' = \int_\Omega -\sum_i \big(u^{E+L,\sigma+l}_i - u^{E,\sigma}_i\big)\Big(\sum_k \nabla\cdot\big(H_{i,k}\nabla\big(u^{E+L,\sigma+l}_k + u^{E,\sigma}_k\big)\big) - s^2 h\big(u^{E+L,\sigma+l}_i + u^{E,\sigma}_i\big)\Big)$$
$$+ \sum_i \big(\bar u^{E+L,\sigma+l}_i - \bar u^{E,\sigma}_i\big)\Big(\sum_k \nabla\cdot\big(H_{i,k}\nabla\big(\bar u^{E+L,\sigma+l}_k + \bar u^{E,\sigma}_k\big)\big) - s^2 h\big(\bar u^{E+L,\sigma+l}_i + \bar u^{E,\sigma}_i\big)\Big)\,dx.$$
Now we divide by $\|L\|_{L^\infty}$ and $\|l\|_{L^\infty}$ to get
$$\frac{\Delta g'}{\|L\|_{L^\infty}\|l\|_{L^\infty}} = \int_\Omega -\sum_i \frac{u^{E+L,\sigma+l}_i - u^{E,\sigma}_i}{\|L\|_{L^\infty}\|l\|_{L^\infty}}\Big(\sum_k \nabla\cdot\big(H_{i,k}\nabla\big(u^{E+L,\sigma+l}_k + u^{E,\sigma}_k\big)\big) - s^2 h\big(u^{E+L,\sigma+l}_i + u^{E,\sigma}_i\big)\Big)$$
$$+ \sum_i \frac{\bar u^{E+L,\sigma+l}_i - \bar u^{E,\sigma}_i}{\|L\|_{L^\infty}\|l\|_{L^\infty}}\Big(\sum_k \nabla\cdot\big(H_{i,k}\nabla\big(\bar u^{E+L,\sigma+l}_k + \bar u^{E,\sigma}_k\big)\big) - s^2 h\big(\bar u^{E+L,\sigma+l}_i + \bar u^{E,\sigma}_i\big)\Big)\,dx.$$
The result now follows from (30), (31) and lemmas 2.8, 2.10 and 2.9. This completes the proof.
Since we calculated a uniform limit in the above proof of the second Gâteaux derivative, we can conclude:

Corollary 2.14. The functional $G$ is Fréchet differentiable.

The form of the second Fréchet derivative of $G$ does not enable us to conclude that $G$ is convex (and $G$ is probably not convex). However, if we want to apply a steepest descent procedure to the functional $G$, we have to make sure that $G$ has no more than one local minimum, since otherwise we could get trapped throughout our minimization procedure in one of these minima and would end up with wrong results. We do not give a proof for a unique local minimum here; however, numerical experiments indicate that the functional $G$ is essentially convex, i.e. $G'(E,\sigma)(H,h) = 0$ for all $(H,h)$ implies $G(E,\sigma) = 0$. This assumption is also supported by the fact that a similar scalar functional for the EIT problem has this property (see [9]). Now we summarize our results. We have defined a functional $G$ which has a unique global minimum for exactly the tensor $C$ and the density function $\rho$ which we want to recover. We have further reason to believe that this is the only local minimum as well. Therefore, the functional $G$ satisfies all the conditions for a successful minimization procedure by steepest descent.
The only remaining theoretical question for our recovery procedure is whether the Dirichlet–Neumann maps $\Lambda_{E,\sigma}$ converge to $\Lambda_{C,\rho}$ as the functional $G$ tends to zero. Since the tensor $C$ and the density $\rho$ are uniquely determined by the Dirichlet–Neumann map, this is a very crucial condition on the functional $G$ (see our later discussion on an appropriate stopping criterion). However, the answer to this question is positive. To show this, we first define the appropriate operator norm on $L\big(H^{1/2}(\partial\Omega)^n, H^{-1/2}(\partial\Omega)^n\big)$, for which we want to show convergence.

Definition 2.15. We define the operator norm $\|\cdot\|_\Lambda$ on $L\big(H^{1/2}(\partial\Omega)^n, H^{-1/2}(\partial\Omega)^n\big)$ as
$$\|A\|_\Lambda = \sup_{\phi := \sum_m \alpha_m\phi_m,\ \|\phi\| \le 1} \|A\phi\|_{H^{-1/2}(\partial\Omega)^n}.$$
Theorem 2.16. Suppose we have a sequence $(E_r, \sigma_r)_{r\in\mathbb{N}}$ in $D_G$. If $G(E_r,\sigma_r) \to 0$ as $r \to \infty$, then also
$$\big\|\Lambda_{E_r,\sigma_r} - \Lambda_{C,\rho}\big\|_\Lambda \to 0.$$
Proof. We write
$$G(E_r,\sigma_r) = \sum_{m=1}^\infty \alpha_m \int_\Omega \sum_{i,k} E^r_{i,k}\,\nabla\big(u^{E_r,\sigma_r}_{m,k} - \bar u^{E_r,\sigma_r}_{m,k}\big)\cdot\nabla\big(u^{E_r,\sigma_r}_{m,i} - \bar u^{E_r,\sigma_r}_{m,i}\big) + \sum_i s^2\sigma_r\big(u^{E_r,\sigma_r}_{m,i} - \bar u^{E_r,\sigma_r}_{m,i}\big)^2\,dx$$
as
$$G(E_r,\sigma_r) = \sum_{m=1}^\infty \alpha_m \int_\Omega \Big\langle E_r\,\nabla\big(u^{E_r,\sigma_r}_m - \bar u^{E_r,\sigma_r}_m\big),\, \nabla\big(u^{E_r,\sigma_r}_m - \bar u^{E_r,\sigma_r}_m\big)\Big\rangle_{\mathbb{R}^{n\times n}} + \sum_i s^2\sigma_r\big(u^{E_r,\sigma_r}_{m,i} - \bar u^{E_r,\sigma_r}_{m,i}\big)^2\,dx,$$
where $\nabla$ operates element-wise; it is clear, since all $\sigma_r$ and $E_r$ are strictly larger than zero and $E$ is positive definite, that
$$\sum_{m=1}^\infty \alpha_m\Big(\big\|\nabla\big(u^{E_r,\sigma_r}_m - \bar u^{E_r,\sigma_r}_m\big)\big\|^2_{L^2(\Omega)^n} + \big\|u^{E_r,\sigma_r}_m - \bar u^{E_r,\sigma_r}_m\big\|^2_{(L^2)^n}\Big) \to 0 \quad\text{as } r\to\infty.$$
This means that
$$\sum_{m=1}^\infty \alpha_m \big\|u^{E_r,\sigma_r}_m - \bar u^{E_r,\sigma_r}_m\big\|^2_{H^1(\Omega)^n} \to 0 \quad\text{as } r\to\infty. \qquad (32)$$
Now we know from the trace theorem that there exists a unique, continuous and right-invertible trace operator $T$ from $H^1(\Omega)^n$ onto $H^{1/2}(\partial\Omega)^n$. Therefore, we get
$$\big\|T\big(u^{E_r,\sigma_r}_m - \bar u^{E_r,\sigma_r}_m\big)\big\|^2_{H^{1/2}(\partial\Omega)^n} \le C\,\big\|u^{E_r,\sigma_r}_m - \bar u^{E_r,\sigma_r}_m\big\|^2_{H^1(\Omega)^n} \qquad (33)$$
for some constant $C$, where
$$T\big(u^{E_r,\sigma_r}_m - \bar u^{E_r,\sigma_r}_m\big) = \big(I - \Lambda^{-1}_{E_r,\sigma_r}\Lambda_{C,\rho}\big)\phi_m, \qquad (34)$$
since by (23) we have
$$\big(\partial_\nu^{E_r,\sigma_r}\,\bar u^{E_r,\sigma_r}_m\big)_i = \Big\langle \sum_k E_{i,k}\nabla \bar u^{E_r,\sigma_r}_{m,k},\, \nu\Big\rangle_{\mathbb{R}^n} = \big(\Lambda_{C,\rho}(\phi_m)\big)_i, \qquad 1 \le i \le n.$$
Therefore, we can conclude that
$$\sum_{m=1}^\infty \alpha_m\big\|\big(I - \Lambda^{-1}_{E_r,\sigma_r}\Lambda_{C,\rho}\big)\phi_m\big\|^2_{H^{1/2}(\partial\Omega)^n} \to 0 \quad\text{as } r\to\infty.$$
Since the sequence $(E_r,\sigma_r)$ converges, it is bounded. Consequently, the $\big\|\Lambda_{E_r,\sigma_r}\big\|_\Lambda$ are uniformly bounded and from the identity
$$\Lambda_{E_r,\sigma_r} - \Lambda_{C,\rho} = \Lambda_{E_r,\sigma_r}\big(I - \Lambda^{-1}_{E_r,\sigma_r}\Lambda_{C,\rho}\big),$$
we get
$$\sum_{m=1}^\infty \alpha_m\big\|\big(\Lambda_{E_r,\sigma_r} - \Lambda_{C,\rho}\big)\phi_m\big\|^2_{H^{-1/2}(\partial\Omega)^n} \to 0.$$
The result now follows from the definition of the norm $\|\cdot\|_\Lambda$.
The last theorem ends the discussion of the theoretical results needed by our algorithm.
In the next section, we will discuss the numerical implementation of the algorithm.
3. The algorithm

To minimize the functional $G$ we apply a variant of the conjugate gradient method, the Polak–Ribière scheme, to the functional $G$. To do this we start with starting coefficients $(\lambda_0, \mu_0, \rho_0)$ and update them by the following procedure:
$$\big(h^\lambda_0, h^\mu_0, h^\rho_0\big) = -\big(g^\lambda_0, g^\mu_0, g^\rho_0\big) = -\nabla_N G(\lambda_0, \mu_0, \rho_0),$$
where $\nabla_N G$ is an appropriate gradient of $G$ (see definition 3.1). At $(\lambda_i, \mu_i, \rho_i)$ we apply a line search routine to minimize $G(\lambda, \mu, \rho)$, resulting in $G(\lambda_{i+1}, \mu_{i+1}, \rho_{i+1})$, and we set
$$\big(g^\lambda_{i+1}, g^\mu_{i+1}, g^\rho_{i+1}\big) = \nabla_N G(\lambda_{i+1}, \mu_{i+1}, \rho_{i+1})$$
and
$$\big(h^\lambda_{i+1}, h^\mu_{i+1}, h^\rho_{i+1}\big) = -\big(g^\lambda_{i+1}, g^\mu_{i+1}, g^\rho_{i+1}\big) + \gamma_i\big(h^\lambda_i, h^\mu_i, h^\rho_i\big),$$
where $\gamma_i$ is defined as
$$\gamma_i = \frac{\sum_{\kappa=\lambda,\mu,\rho}\big\langle g^\kappa_{i+1} - g^\kappa_i,\, g^\kappa_{i+1}\big\rangle_{H^1}}{\sum_{\kappa=\lambda,\mu,\rho}\big\langle g^\kappa_i,\, g^\kappa_i\big\rangle_{H^1}}.$$
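The scheme above can be sketched generically; the following toy illustration (our own, with an exact line search on a quadratic model rather than the paper's numerical line search on $G$) shows the Polak–Ribière updates $h_{i+1} = -g_{i+1} + \gamma_i h_i$:

```python
import numpy as np

def polak_ribiere_quadratic(A, b, x0, steps=20):
    """Polak-Ribiere CG applied to G(x) = 0.5 x^T A x - b^T x (gradient A x - b),
    with the exact line search available for a quadratic."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    h = -g                                  # h_0 = -g_0
    for _ in range(steps):
        t = -(h @ g) / (h @ (A @ h))        # exact minimizer along h
        x = x + t * h
        g_new = A @ x - b
        gamma = (g_new - g) @ g_new / (g @ g)
        h = -g_new + gamma * h              # Polak-Ribiere update
        g = g_new
        if np.linalg.norm(g) < 1e-12:
            break
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])      # SPD model Hessian
b = np.array([1.0, 1.0])
x = polak_ribiere_quadratic(A, b, np.zeros(2))
assert np.allclose(x, np.linalg.solve(A, b))
```

For a symmetric positive definite quadratic this reduces to ordinary conjugate gradients and terminates in at most $n$ steps.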
As we can see, the first thing we have to do, to apply this scheme, is to calculate the gradient of $G$. From the formula for the Gâteaux derivative, we see that the $L^2$-gradient of $G$ is given by the list
$$\nabla G = \big(\Theta_m, \tau_m\big),$$
where the tensor $\Theta_m$ is defined as
$$\Theta_{m,i,j,k,l} = \partial_j u^{E,\sigma}_{m,i}\,\partial_l u^{E,\sigma}_{m,k} - \partial_j \bar u^{E,\sigma}_{m,i}\,\partial_l \bar u^{E,\sigma}_{m,k}, \qquad 1 \le i,j,k,l \le n, \quad m = 1, 2, \ldots, \qquad (35)$$
and the function $\tau_m$ is given by
$$\tau_m = s^2\sum_i \Big(\big(u^{E,\sigma}_{m,i}\big)^2 - \big(\bar u^{E,\sigma}_{m,i}\big)^2\Big), \qquad m = 1, 2, \ldots. \qquad (36)$$
Remark. We are discussing lists here and not vectors, since the elements of the lists are not members of the same vector spaces. However, this simplifies the notation. With this list notation, we can write the Gâteaux derivative of $G$ as
$$G'(E,\sigma)(H,h) = \big\langle \nabla G,\, \tilde H\big\rangle,$$
where $\tilde H$ is the list given by
$$\tilde H = (H, h),$$
and by abuse of notation, the artificial inner product $\langle \nabla G, \tilde H\rangle$ is the sum over the appropriate inner products of the entries of $\nabla G$.
One of the major error sources in steepest descent methods is that the updated functional after a descent step does not continue to lie in the domain of the functional anymore. In our case, this presents a major problem. The update direction of the tensor $E$ must vanish on the boundary of $\Omega$, since the tensors $E \in D_G$ have to satisfy the condition $(E - C)|_{\partial\Omega} = 0$. However, the terms
$$\partial_j u^{E,\sigma}_{m,i}\,\partial_l u^{E,\sigma}_{m,k} - \partial_j \bar u^{E,\sigma}_{m,i}\,\partial_l \bar u^{E,\sigma}_{m,k}, \qquad 1 \le i,j,k,l \le n, \quad m = 1, 2, \ldots,$$
do not vanish on $\partial\Omega$ in general. We overcome this problem by using a Neuberger gradient (see [19]) for the update direction of the tensor $E$. In this situation, we give the following definition of the Neuberger gradient.
Definition 3.1. In our case, the definition of the Neuberger gradient $\Phi$ is given by
$$-\Delta\Phi_{m,i,j,k,l} + \Phi_{m,i,j,k,l} = \Theta_{m,i,j,k,l}, \qquad (37)$$
$$\Phi_{m,i,j,k,l}|_{\partial\Omega} = 0, \qquad 1 \le i,j,k,l \le n, \quad m = 1, 2, \ldots. \qquad (38)$$
We can easily see that the Neuberger gradient vanishes on the boundary. We also have to ensure that it is a properly defined gradient and descent direction. By an integration by parts it is easily verified that, if we omit the dependence on $\sigma$, the Neuberger gradient satisfies
$$G'(E)(H) = \langle \Phi, H\rangle_{H^1_0(\Omega)^{n\times n\times n\times n}}.$$

Remark. $\Phi$ is not only a good descent direction, since it solves equation (37), but it is also given by $\Phi_{m,i,j,k,l} = (-\Delta + I)^{-1}(\Theta_{m,i,j,k,l})$, and therefore it is a preconditioned version of $\Theta$. Since the entries of $\Phi$ belong to $H^1_0$, we expect that it is easier to recover smooth functions with the Neuberger gradient than with the $L^2$-gradient. However, it might be a slight disadvantage to use the Neuberger gradient to recover discontinuous functions. In the one-dimensional case of the inverse spectral problem for the Sturm–Liouville equation, this is certainly the case (see the paper by Brown et al [1]). We do not have enough experimental evidence in our case yet, but first experiments indicate that this might also be true in our case.
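The smoothing step (37)-(38) is a standard elliptic solve; a 1D finite-difference sketch (our own discretization) of $(-\Delta + I)\Phi = \Theta$ with zero boundary values:

```python
import numpy as np

def neuberger_gradient_1d(theta, h):
    """Solve (-phi'' + phi) = theta on the interior nodes of a uniform grid
    with phi = 0 at both ends, i.e. a 1D version of (37)-(38)."""
    n = len(theta) - 2                     # number of interior unknowns
    A = (np.diag((2.0 / h**2 + 1.0) * np.ones(n))
         + np.diag((-1.0 / h**2) * np.ones(n - 1), 1)
         + np.diag((-1.0 / h**2) * np.ones(n - 1), -1))
    phi = np.zeros_like(theta)
    phi[1:-1] = np.linalg.solve(A, theta[1:-1])
    return phi

x = np.linspace(0.0, 1.0, 201)
theta = np.ones_like(x)                    # an L2-gradient not vanishing on the boundary
phi = neuberger_gradient_1d(theta, x[1] - x[0])
# the smoothed direction vanishes at the ends, unlike theta itself
assert phi[0] == 0.0 and phi[-1] == 0.0 and phi.max() < 1.0
```

For this right-hand side the exact solution is $1 - \cosh(x - 1/2)/\cosh(1/2)$, so the preconditioned direction is both damped and boundary-compatible.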
Another condition that is crucial for the definition of the domain $D_G$ is that the tensor $E$ has to be positive definite and that the density satisfies $\sigma > 0$. Since we are interested in the recovery of the Lamé parameters $\lambda$ and $\mu$, the condition of positive definiteness is equivalent to $\mu > 0$ and $2\mu + \lambda > 0$ (see (6)). Therefore, we have to make sure that the updated values of $\lambda$ and $\mu$ satisfy this condition. This is one characteristic of the ill-posedness of our problem, since if $E$ is not positive definite anymore, we lose the strong ellipticity of our system of partial differential equations (16) with $(C,\rho)$ replaced by $(E,\sigma)$. As a consequence of this, we cannot guarantee anymore the existence and uniqueness of the solutions of (16), our numerical elliptic solver would become unstable and our whole recovery procedure would fail. We control this problem by cutting off the values of $\lambda$, $\mu$ and $\rho$ after each iteration, if they are below a certain cut-off value. This is often justified on physical grounds by the usual presence of earlier measurements of data, which allows one to establish a minimum for the Lamé parameters, and by what we know about the Poisson ratio, which takes its values in the interval (0.22, 0.35) (cf (7)). Assuming that the density has a positive lower bound is natural. We are thus getting a better condition for our algorithm and making it well-posed. The slight disadvantage of introducing a cut-off value is that our algorithm is not a real descent algorithm anymore, but an iterative algorithm, and we are not descending that fast anymore. However, this is a small price to pay, if we get a stable minimization procedure for it. In our case, we choose a cut-off value of 0.5 for $\lambda$, $\mu$ and $\rho$. The remaining condition specified in the definition of $D_G$ is that the Lamé parameters have to be elements of $H^{1/2+\epsilon} \cap L^\infty$. Since the Neuberger gradient is an element of $H^1(\Omega)$, the updated Lamé parameters must still be elements of $H^{1/2+\epsilon}$. That they are also elements of $L^\infty$ follows from a regularity estimate by Morrey (see [17] or [2, p 82]). The proof that $\Phi_{m,i,j,k,l}$ is an element of $L^\infty$, for $1 \le i,j,k,l \le n$, can be found in [12].
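The cut-off step amounts to a simple componentwise clipping of the iterates; a sketch (function name is ours, cut-off value 0.5 as in the text):

```python
import numpy as np

def enforce_bounds(lam, mu, rho, cut=0.5):
    """Clip the recovered fields from below so that mu > 0, 2 mu + lam > 0
    and rho > 0 keep holding after each descent step."""
    return (np.maximum(lam, cut), np.maximum(mu, cut), np.maximum(rho, cut))

lam, mu, rho = enforce_bounds(np.array([-1.0, 3.0]),
                              np.array([0.1, 2.0]),
                              np.array([0.4, 1.0]))
assert lam.min() >= 0.5 and mu.min() >= 0.5 and rho.min() >= 0.5
assert (mu > 0).all() and (2 * mu + lam > 0).all()
```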
3.1. The stopping criteria

An important issue is that of a stopping criterion for our algorithm. Since $G$ tends to zero, one might suggest to stop the algorithm if $G$ is small enough. This is not a very good criterion since we have no guarantee that if $G$ is below a certain value, then the recovered coefficients must be good approximations. Especially in the presence of noise, the minimal value of $G$ need not be zero any longer and therefore the above criterion would certainly fail. Another criterion would be to measure the norm of the $L^2$-gradient and, if it is small enough, to abort the algorithm since the functional $G$ has only one local minimum. However, we cannot be sure that the gradient does not have a small norm away from the local minimum. However, we know that if $G$ tends to zero, the Dirichlet–Neumann maps also converge (see theorem 2.16). Since the coefficients are uniquely determined by the Dirichlet–Neumann map, we can expect satisfactory results if the difference between the Dirichlet–Neumann map for the recovered coefficients and the Dirichlet–Neumann map for the true coefficients is sufficiently small. Therefore, we suggest the following stopping criteria:
(i) Check if the norm of the $L^2$-gradient is below a certain value.
(ii) If (i) is true, check whether the difference between the Dirichlet–Neumann maps is sufficiently small.
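The two-stage test above can be sketched as follows (the thresholds and function name are our own illustrative choices):

```python
def should_stop(grad_norm, dn_map_gap, tol_grad=1e-4, tol_dn=1e-3):
    """Stop only when the L2-gradient is small AND the Dirichlet-Neumann map
    for the recovered coefficients is close to the measured one."""
    return grad_norm < tol_grad and dn_map_gap < tol_dn

assert should_stop(1e-5, 1e-4)
assert not should_stop(1e-5, 1.0)   # a small gradient alone is not enough
assert not should_stop(1.0, 1e-4)
```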
3.2. The implementation
The given data consist of displacementtraction pairs (
m
,
m
), with
m
=
C,

m
, m =
1, 2, . . . , M, where M is a nite number. Therefore, we have to change the denition of the
functional G to
G(E, ρ) = Σ_{m=1}^{M} α_m ∫_Ω [ Σ_{i,k} E_{i,k} ∇(u_{m,k}^{E,ρ} − ũ_{m,k}^{E,ρ}) · ∇(u_{m,i}^{E,ρ} − ũ_{m,i}^{E,ρ}) + Σ_i s² ρ (u_{m,i}^{E,ρ} − ũ_{m,i}^{E,ρ})² ] dx.   (39)
Since we are working with a finite sum now, we can set α_m = 1, 1 ≤ m ≤ M. Although the proof for the uniqueness result 2.4 is not valid for n = 2, we do all our implementations for the two-dimensional case and choose Ω = [0, 1] × [0, 1]. This is justified by the facts that we do not know of any counterexample for the two-dimensional case, and that for the inverse EIT problem the uniqueness result for the function p,
∇·(p∇u) = 0,
is still valid for n = 2 (see [6]). However, if we deal with the scalar equation
∇·(p∇u) + qu = 0,
then one Dirichlet–Neumann map does not uniquely identify p and q simultaneously (see for example [3]). Another reason for implementing the method in two dimensions is that the computing resources for a three-dimensional implementation have not been available.
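On a finite-element grid, the integral in (39) reduces, for one displacement–traction pair and one Laplace parameter, to a weighted sum over elements. A minimal sketch in the scalar (single-component) case; the flattened per-element arrays and function name are our illustrative assumptions, not the paper's code:

```python
def misfit_G(E_elem, rho_elem, s, du, dgrad, areas):
    """Discrete analogue of the functional (39), scalar case.

    E_elem, rho_elem : per-element coefficient values
    du               : per-element values of the solution difference
    dgrad            : per-element gradients of that difference, as (gx, gy)
    areas            : element areas, acting as quadrature weights
    """
    total = 0.0
    for E, rho, d, (gx, gy), a in zip(E_elem, rho_elem, du, dgrad, areas):
        # coefficient-weighted gradient term plus the s^2 * rho mass term
        total += (E * (gx * gx + gy * gy) + s * s * rho * d * d) * a
    return total
```

The full vector-valued version simply adds the sums over the component indices i and k of (39).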
All our computations were done on a single PC with a 2.4 GHz processor and 1 GB DDR RAM. Needless to say, this is by no means optimal, and it would be desirable to do the computations on parallel processors, as there is natural parallelism in the algorithm. This was not possible in our case, since we did not have the resources to do so.
We create our data by choosing the Neumann data as polynomials
P = Σ_{r,s} α_{s,r} x^r y^s,
where the coefficients α_{s,r} are constants and r, s ≤ N, for some N ∈ ℕ. Since we are dealing with relatively nice Neumann data, it is sufficient to set M = 5 and N = 2. Thus, the functions we use as traction data are of the form
α_{0,1} x + α_{1,0} y + α_{1,1} xy + α_{0,2} x² + α_{2,0} y².
The Dirichlet data are obtained by solving equation (16). In two later examples (see figures 4 and 5), we will use noisy and time-dependent data to show that our proposed method is stable and that applying a Laplace transformation does not ruin the data.
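The quadratic traction data above can be generated, for instance, as follows; the coefficient values and helper name are illustrative, not those used in the paper:

```python
import random

def traction_polynomial(coeffs=None, seed=0):
    """Return a function (x, y) -> P(x, y) of the quadratic form used as
    Neumann (traction) data:
        a01*x + a10*y + a11*x*y + a02*x**2 + a20*y**2."""
    if coeffs is None:
        # illustrative random coefficients; the paper only says they are constants
        rng = random.Random(seed)
        coeffs = {k: rng.uniform(-1.0, 1.0)
                  for k in ("a01", "a10", "a11", "a02", "a20")}
    def P(x, y):
        return (coeffs["a01"] * x + coeffs["a10"] * y
                + coeffs["a11"] * x * y
                + coeffs["a02"] * x * x + coeffs["a20"] * y * y)
    return P
```

With M = 5 one would draw five such polynomials, one per displacement–traction pair.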
The most natural way of stabilizing our algorithm is to choose more than one Laplace parameter s in our transformation of the hyperbolic system (3) to the elliptic system (16). This way we get more Dirichlet–Neumann maps, one for each s, that all uniquely identify the Lamé parameters λ and μ and the density ρ. In our computations, we use between 1 and 12 Dirichlet–Neumann maps. It would probably be better to use even more than 12 Dirichlet–Neumann maps (we would like to try 20–40), but this is not feasible on a single PC (see also table 1, which gives the average time for one descent step). The choice of the correct Laplace parameters depends on the domain Ω and the expected values of the functions in equation (16). In our case, we choose the Laplace parameters s in such a way that 0.5 < s² ≤ 12. We divide Ω = [0, 1] × [0, 1] into a regular finite-element grid using 7200 triangles. The numerical derivatives arising from (39) are computed by Matlab's pdegrad function, which uses central differences. This proved to be sufficient for most of our implementations. However, if the given data contain a lot of noise (see figure 4), better differentiation techniques (see for example [13]) are probably necessary. All integrations are done by Simpson's quadrature rule. The line minimization in each descent step is done by Matlab's fminbnd function, which is similar to the function Brent in [20, chapter 10].
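Brent-style routines such as fminbnd combine golden-section search with parabolic interpolation; the golden-section half alone already conveys the idea of the line minimization used in each descent step. This is our own illustrative stand-in, not the paper's Matlab code:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search,
    the bracketing component of Brent-style minimizers like fminbnd."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                  # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                        # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

In the descent algorithm, f(t) would be the functional G evaluated along the current (conjugate) gradient direction with step length t.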
Now we present some numerical results for the recovered Lamé coefficients, λ and μ, and the density ρ. As we mentioned earlier, we always choose Ω = [0, 1] × [0, 1]. We present implementations for the case of smooth, continuous or just bounded functions. In our implementations, we vary the number Z of Dirichlet–Neumann maps that we use. We also distinguish between the cases λ = μ and λ ≠ μ. We start with an example for a non-smooth μ. We set ρ(x) ≡ 1 and
μ_1(x) = λ_1(x) = { 2.0, if |x_1| < 0.5 and |x_2| < 0.5,
                    0.5, otherwise.                          (40)
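Sampled on a grid, the piecewise-constant coefficient (40) is straightforward to evaluate; a small illustrative helper (not the paper's code):

```python
def mu1(x1, x2):
    """Piecewise-constant Lame coefficient from (40): 2.0 on the central
    square |x1| < 0.5, |x2| < 0.5, and 0.5 elsewhere."""
    return 2.0 if abs(x1) < 0.5 and abs(x2) < 0.5 else 0.5
```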
We can see from figure 1 that the recovery procedure works well and one Dirichlet–Neumann map is sufficient to recover μ. We also did implementations for functions with more smoothness. In the case of μ ∈ C^∞(Ω), we were able to recover μ with an L¹-error of just 0.0073.
Next we consider the case λ ≠ μ. Again we set ρ(x) ≡ 1 and use four Dirichlet–Neumann maps for our calculations.
μ_2(x) = { 1.5, if |x_1| < 0.4 and |x_2| < 0.4,
           0.5, otherwise,                                   (41)

λ_2(x) = { 1.6, if 0.25 < x_1 < 0.75 and 0.25 < x_2 < 0.75,
           1.6, if −0.75 < x_1 < −0.25 and −0.75 < x_2 < −0.25,
           0.5, otherwise.
Figure 1. True μ_1 and computed μ_1 (L¹-error = 0.0752).
Figure 2. True and computed μ_2 and λ_2 (L¹-errors = 0.1524 and 0.0868).
Although we get satisfactory results for λ and μ, the results indicate that it is easier to recover the Lamé parameter μ than the Lamé parameter λ. This is not an unexpected result, since we have seen that the coefficients of the elasticity tensor C are given by
C_{i,j,k,l} = λ δ_{i,j} δ_{k,l} + μ (δ_{i,l} δ_{j,k} + δ_{i,k} δ_{j,l}).
We see that μ appears more often than λ in the definition of C and that its influence on the elasticity tensor is bigger than that of λ. Therefore, one can expect that it is easier to recover μ.
Figure 3. True and computed μ_3, λ_3 and ρ_3 (L¹-errors = 0.0316, 0.1463 and 0.0458).
Now we consider an example where all three coefficients λ, μ and ρ are unknown.
μ_3(x) = e^{(x_1² − 1)(x_2² − 1)} + 0.5,                     (42)

λ_3(x) = { 2.0, if |x_1 + 0.5| < 0.25 and |x_2 − 0.5| < 0.25,
           1.0, if |x_1| < 0.25 and |x_2 + 0.5| < 0.25,
           0.5, otherwise,

ρ_3(x) = 1.5 − ‖x‖_∞.
We can see from the figures that we got quite satisfactory results for the Lamé coefficients, λ and μ, as well as for the density ρ. This is very encouraging, since it shows that we can recover all the unknown parameters in the seismic wave equation simultaneously,
Figure 4. True μ_4 and computed μ_4 with 10% noise (L¹-error = 0.2240).
from measurements on the boundary. It also shows that our method provides a decent recovery method in the isotropic case even for three unknown parameters. We also obtained satisfactory results for the sole recovery of ρ with known λ and μ.
Since real data are never noise-free, we want to consider an example with noise. Again we implement the function
μ_4(x) = λ_4(x) = { 2.0, if |x_1| < 0.5 and |x_2| < 0.5,
                    0.5, otherwise,
and assume that ρ(x) ≡ 1. We apply noise of 10% to our Dirichlet data, but no noise to the Neumann data. We can see from figure 4 that the algorithm remains stable under perturbation. The corresponding L¹-error is about three times higher than for the unperturbed function. Considering that we have used only one Dirichlet–Neumann map, this result is still acceptable. It is therefore desirable to try recovering perturbed Lamé coefficients with more Dirichlet–Neumann maps. One might also get better results by using a better differentiation method than central differences, since differentiation is itself an inverse problem and the error in the data has a large effect on the calculated differences. Examples of regularization methods for numerical differentiation can be found in [8, 14].
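Applying a given noise level to the Dirichlet data can be sketched as follows; the paper does not specify its noise model, so uniform multiplicative noise is our assumption, and the function name is illustrative:

```python
import random

def add_noise(values, level=0.10, seed=1):
    """Perturb each sample by a uniformly distributed relative error of
    at most `level` (e.g. level=0.10 for 10% noise)."""
    rng = random.Random(seed)
    return [v * (1.0 + level * rng.uniform(-1.0, 1.0)) for v in values]
```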
Finally, we want to consider an example with time-dependent data to show that a Laplace transformation does not ruin the above results. We try to recover the function
μ_5(x) = λ_5(x) = (x² − 1)(y² − 1) + 0.5
and assume that ρ(x) ≡ 1. For the time dependence, we apply a sine wave to our data. Thus, we use functions of the form
sin(t) (α_{0,1} x + α_{1,0} y + α_{1,1} xy + α_{0,2} x² + α_{2,0} y²)
as traction data and apply a finite Laplace transformation over the interval (0, 5). Apart from the effects of the Laplace transformation, we also apply a noise of 1% to the Dirichlet data. After 300 iterations, we obtained the following results. As we can see from figure 5, the Laplace transformation does not have any crucial effect on the recovery of the coefficients.
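Since the spatial polynomial factors out of the time integral, the finite Laplace transform of the sine-modulated data over (0, T) reduces to ∫₀^T e^{−st} sin t dt, which has the closed form (1 − e^{−sT}(cos T + s sin T))/(1 + s²). A numerical sketch (illustrative, not the paper's code) checked against that closed form:

```python
import math

def finite_laplace_sin(s, T=5.0, n=1000):
    """Finite Laplace transform of sin(t) over (0, T), evaluated with the
    composite trapezoidal rule."""
    h = T / n
    total = 0.5 * (math.sin(0.0) + math.exp(-s * T) * math.sin(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * math.sin(t)
    return h * total

def finite_laplace_sin_exact(s, T=5.0):
    """Closed form of the same integral, for verification."""
    return (1.0 - math.exp(-s * T) * (math.cos(T) + s * math.sin(T))) / (1.0 + s * s)
```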
Finally, we want to point out that although we can in theory use as many Dirichlet–Neumann maps as we want, we have to consider the much higher computing costs, as can be seen from table 1. The higher stability of our descent procedure comes with a correspondingly higher computing cost. The number of iterations also depends highly on the smoothness of the coefficients. For smooth coefficients, 200–300 iterations were
Figure 5. True μ_5 and computed μ_5 with 1% noise (L¹-error = 0.02). (This figure is in colour only in the electronic version.)
Table 1. Time for one descent step.

Number of Dirichlet–Neumann maps | Average time for one descent step (s)
 1                               | 365
 4                               | 966
 6                               | 1297
12                               | 2117
sufficient. For discontinuous coefficients, our method needs between 400 and 2000 iterations, depending also on how many Dirichlet–Neumann maps are used. Since there is natural parallelism in the algorithm, it seems desirable to run it on parallel machines to reduce the computing costs.
References
[1] Brown B M, Samko V S, Knowles I W and Marletta M 2003 Inverse spectral problem for the Sturm–Liouville equation Inverse Problems 19 235–52
[2] Giaquinta M 1993 Introduction to Regularity Theory for Nonlinear Elliptic Systems (Lectures in Mathematics, ETH Zürich) (Basel: Birkhäuser)
[3] Isakov V 1998 Inverse Problems for Partial Differential Equations (Applied Mathematical Sciences vol 127) (New York: Springer)
[4] Jais M 2003 A variational approach to the seismic inverse problem MPhil Dissertation UWC http://www.cs.cf.ac.uk/user/M.Jais
[5] Jais M 2005 Unique identifiability of the common support of coefficients of a second order anisotropic elliptic system by the Dirichlet–Neumann map J. Comput. Appl. Math. at press
[6] Astala K and Päivärinta L Calderón's inverse conductivity problem in the plane Ann. Math. at press
[7] Kirsch A 2003 The factorization method for a class of inverse elliptic problems Math. Nachr.
[8] Kirsch A 1996 An Introduction to the Mathematical Theory of Inverse Problems (Applied Mathematical Sciences vol 120) (New York: Springer)
[9] Knowles I 2004 An optimal current functional for electric impedance tomography Preprint
[10] Knowles I 1998 A variational algorithm for electrical impedance tomography Inverse Problems 14 1513–25
[11] Knowles I 1999 Uniqueness for an elliptic inverse problem SIAM J. Appl. Math. 59 1356–70 (electronic)
[12] Knowles I 2001 Parameter identification for elliptic problems J. Comput. Appl. Math. 131 175–94
[13] Knowles I, Le T and Yan A 2004 On the recovery of multiple flow parameters from transient head data J. Comput. Appl. Math. 169 1–15
[14] Knowles I and Wallace R 1995 A variational method for numerical differentiation Numer. Math. 70 91–110
[15] Kohn R V and Vogelius M 1984 Identification of an unknown conductivity by means of measurements at the boundary Inverse Problems: SIAM–AMS Proc. (New York, 1983) vol 14 (Providence, RI: American Mathematical Society) pp 113–23
[16] McLean W 2000 Strongly Elliptic Systems and Boundary Integral Equations (Cambridge: Cambridge University Press)
[17] Morrey C B 1966 Multiple Integrals in the Calculus of Variations (New York: Springer)
[18] Nakamura G and Uhlmann G 2003 Erratum: Global uniqueness for an inverse boundary value problem arising in elasticity Invent. Math. 152 205–7 (erratum to Invent. Math. 118 (1994) 457–74)
[19] Neuberger J W 1997 Sobolev Gradients and Differential Equations (Lecture Notes in Mathematics vol 1670) (Berlin: Springer)
[20] Press W H, Teukolsky S A, Vetterling W T and Flannery B P 1992 Numerical Recipes in FORTRAN 2nd edn (Cambridge: Cambridge University Press)
[21] Uhlmann G 1997 Inverse boundary value problems for elastic materials Inverse Problems in Geophysical Applications (Yosemite, CA, 1995) (Philadelphia, PA: SIAM) pp 12–26