
The Fast Solution of Boundary

Integral Equations
MATHEMATICAL AND ANALYTICAL
TECHNIQUES WITH APPLICATIONS
TO ENGINEERING
Series Editor
Alan Jeffrey

The importance of mathematics in the study of problems arising from the real world,
and the increasing success with which it has been used to model situations ranging
from the purely deterministic to the stochastic, in all areas of today's Physical
Sciences and Engineering, is well established. The progress in applicable mathematics
has been brought about by the extension and development of many important
analytical approaches and techniques, in areas both old and new, frequently aided
by the use of computers without which the solution of realistic problems in modern
Physical Sciences and Engineering would otherwise have been impossible. The
purpose of the series is to make available authoritative, up to date, and self-contained
accounts of some of the most important and useful of these analytical approaches
and techniques. Each volume in the series will provide a detailed introduction to a
specific subject area of current importance, and then will go beyond this by reviewing
recent contributions, thereby serving as a valuable reference source.

Series Titles:
THE FAST SOLUTION OF BOUNDARY INTEGRAL EQUATIONS
Sergej Rjasanow & Olaf Steinbach, ISBN 978-0-387-34041-8
THEORY OF STOCHASTIC DIFFERENTIAL EQUATIONS WITH JUMPS AND
APPLICATIONS
Rong Situ, ISBN 978-0-387-25083-0
METHODS FOR CONSTRUCTING EXACT SOLUTIONS OF PARTIAL
DIFFERENTIAL EQUATIONS
S.V. Meleshko, ISBN 978-0-387-25060-1
INVERSE PROBLEMS
Alexander G. Ramm, ISBN 978-0-387-23195-2
SINGULAR PERTURBATION THEORY
Robin S. Johnson, ISBN 978-0-387-23200-3
INVERSE PROBLEMS IN ELECTRIC CIRCUITS AND ELECTROMAGNETICS
N.V. Korovkin, ISBN 978-0-387-33524-7
The Fast Solution of Boundary
Integral Equations

Sergej Rjasanow
Universität des Saarlandes

Olaf Steinbach
Technische Universität Graz

Sergej Rjasanow
Fachrichtung 6.1 Mathematik
Universität des Saarlandes
Postfach 151150
D-66041 Saarbrücken
GERMANY

Olaf Steinbach
Institut für Numerische Mathematik
Technische Universität Graz
Steyrergasse 30
A-8010 Graz
AUSTRIA

Library of Congress Control Number: 2006927233

ISBN 978-0-387-34041-8 e-ISBN 978-0-387-34042-5


ISBN 0-387-34041-6 e-ISBN 0-387-34042-4

Printed on acid-free paper.

© 2007 Springer Science+Business Media, LLC


All rights reserved. This work may not be translated or copied in whole or in part without
the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring
Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews
or scholarly analysis. Use in connection with any form of information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar methodology now known
or hereafter developed is forbidden. The use in this publication of trade names, trademarks,
service marks and similar terms, even if they are not identified as such, is not to be
taken as an expression of opinion as to whether or not they are subject to proprietary
rights.

9 8 7 6 5 4 3 2 1

springer.com
Preface

Boundary Element Methods (BEM) play an important role in modern numerical computations in the applied and engineering sciences. Such algorithms are often more convenient than the traditional Finite Element Method (FEM), since the corresponding equations are formulated on the boundary, and, therefore, a significant reduction of dimensionality takes place. Especially when the physical description of the problem leads to an unbounded domain, traditional methods like FEM become unattractive.
A numerical procedure, called the Boundary Element Method (BEM), has been developed in the physics and engineering community since the 1950s. This method turns out to be a powerful tool for numerical studies of various physical phenomena. The most prominent examples of such phenomena are the potential equation (Laplace equation) in electromagnetism, gravitation theory, and in perfect fluids. A further example leading to the Laplace equation is the steady state heat flow. One of the most popular applications of the BEM is, however, the system of linear elastostatics, which can be considered in both bounded and unbounded domains. A simple model for a fluid flow, the Stokes system, can also be solved by the use of the BEM. The most important examples for the Helmholtz equation are acoustic scattering and sound radiation.
It has been known for a long time that boundary value problems for elliptic partial differential equations can be reformulated in terms of boundary integral equations. The trace of the solution on the boundary and its conormal derivative (Cauchy data) can be found by solving these equations numerically. The solution of the problem as well as its gradients or even higher order derivatives are then given by the application of Green's third formula (representation formula); this method based on Green's formula is called the direct BEM approach. Another possibility is to use the property that single or double layer potentials solve the partial differential equation exactly for any given density function. Thus, this function can be used in order to fulfill the boundary conditions. The density function obtained this way has, in general, no physical meaning. Therefore, these boundary element methods are called indirect.
When boundary integral equations are approximated and solved numerically, the study of stability and convergence is the most important issue. The most popular numerical methods are the Galerkin methods, which perfectly fit the variational formulation of the boundary integral equations. The theoretical study of the Galerkin methods is now complete and provides a powerful theoretical background for BEM. Traditionally, however, collocation methods were widely used, especially in the engineering community. These methods provide an easier practical implementation compared with the Galerkin methods. However, the stability and convergence theory for collocation methods is available only for two-dimensional problems. Furthermore, the error analysis of the collocation methods for three-dimensional problems, when assuming their stability, shows that the rate of convergence of the Galerkin methods is better, provided that the solution is smooth enough.
In any case, a numerical procedure applied to the boundary integral equation leads to a linear system of algebraic equations. The matrix of this system is in general dense, i.e. almost all its entries are different from zero, and, therefore, have to be stored in computer memory. It is clear that this is the main disadvantage of the BEM compared with the FEM, which leads to sparse matrices. This quadratic amount of computer memory sets very strong, unattractive bounds on the discretisation parameters and often forces the user to switch to out-of-core programming. However, so-called fast BEM have been developed in the last two decades. The original methods are the Fast Multipole Method and the Panel Clustering; another example is the use of wavelets. Furthermore, the Adaptive Cross Approximation (ACA) was introduced and successfully applied to many practical problems in recent years.
The purpose of this book is twofold. The first goal is to give an exact mathematical description of various mathematical formulations and numerical methods for boundary integral equations in the three-dimensional case in a uniform and possibly compact form. The second goal is a systematic numerical treatment of a variety of boundary value problems for the Laplace equation, for the linear elastostatics system, and for the Helmholtz equation. This study will illustrate both the convergence of the Galerkin methods corresponding to the theory and the fast realisation of BEM based on the ACA method. We restrict our numerical tests to some more or less artificial surface examples. The simplest one is the surface of the unit sphere. Furthermore, two TEAM examples (Testing Electromagnetic Analysis Methods) will be considered besides some other non-trivial surfaces.
This book is subdivided into four parts. Chapter 1 provides an overview of the direct and indirect reformulations of second order boundary value problems by using boundary integral equations, and it discusses the mapping properties of all boundary integral operators involved. From this, the unique solvability of the resulting boundary integral equations and the continuous dependence of the solution on the given boundary data can be deduced. Chapter
2 is concerned with boundary element methods, especially with the Galerkin method. The discrete version of the boundary integral equations from Chapter 1 and their variational formulations lead to systems of linear equations with different matrices. The entries of these matrices are explicitly derived for all integral operators involved. Chapter 3 describes the Adaptive Cross Approximation of dense matrices and provides, in addition to the theory, some first numerical examples. The largest part of the book, Chapter 4, contains the results of numerical experiments. First, the Laplace equation is considered, where we study Dirichlet, Neumann, and mixed boundary value problems as well as an inhomogeneous interface problem. Then, two mixed boundary value problems of linear elastostatics will be presented, and, finally, many examples for the Helmholtz equation are described. We consider again Dirichlet and Neumann, interior and exterior boundary value problems as well as multifrequency analysis. Many auxiliary results are collected in three appendices.
The chapters are relatively independent of one another. Necessary notations and formulas are not only cross-referenced to other chapters but usually repeated at the appropriate places.
In 2003, Prof. Alan Jeffrey approached us with the idea to write a book about fast solutions of boundary integral equations. It has been delightful to write this book, and we are also very thankful to him for providing the opportunity to get this book published.
We would like to thank our colleagues from the BEM community for many useful discussions and suggestions. We are grateful to our home institutions, the University of Saarland in Saarbrücken and the Technical University in Graz, for providing an excellent scientific environment and financial funding for our research.
We appreciate the help of Jürgen Rachor, who read the manuscript and made valuable comments and corrections. Furthermore, the authors would very much like to express their appreciation to Richard Grzibovski for his help in performing numerical tests.

Saarbrücken and Graz                                        Sergej Rjasanow
March 2007                                                  Olaf Steinbach
Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V

1 Boundary Integral Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 Laplace Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Interior Dirichlet Boundary Value Problem . . . . . . . . . . . 10
1.1.2 Interior Neumann Boundary Value Problem . . . . . . . . . . 13
1.1.3 Mixed Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . 17
1.1.4 Robin Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . 19
1.1.5 Exterior Dirichlet Boundary Value Problem . . . . . . . . . . . 21
1.1.6 Exterior Neumann Boundary Value Problem . . . . . . . . . . 22
1.1.7 Poisson Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.1.8 Interface Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.2 Lamé Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.2.1 Dirichlet Boundary Value Problem . . . . . . . . . . . . . . . . . . 35
1.2.2 Neumann Boundary Value Problem . . . . . . . . . . . . . . . . . . 36
1.2.3 Mixed Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . 37
1.3 Stokes System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.4 Helmholtz Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.4.1 Interior Dirichlet Boundary Value Problem . . . . . . . . . . . 49
1.4.2 Interior Neumann Boundary Value Problem . . . . . . . . . . 50
1.4.3 Exterior Dirichlet Boundary Value Problem . . . . . . . . . . . 52
1.4.4 Exterior Neumann Boundary Value Problem . . . . . . . . . . 54
1.5 Bibliographic Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

2 Boundary Element Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59


2.1 Boundary Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.2 Basis Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.3 Laplace Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.3.1 Interior Dirichlet Boundary Value Problem . . . . . . . . . . . 65
2.3.2 Interior Neumann Boundary Value Problem . . . . . . . . . . 72
2.3.3 Mixed Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . 77

2.3.4 Interface Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84


2.4 Lamé Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.5 Helmholtz Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
2.5.1 Interior Dirichlet Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 91
2.5.2 Interior Neumann Problem . . . . . . . . . . . . . . . . . . . . . . . . . 93
2.5.3 Exterior Dirichlet Problem . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.5.4 Exterior Neumann Problem . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.6 Bibliographic Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

3 Approximation of Boundary Element Matrices . . . . . . . . . . . . 101


3.1 Hierarchical Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.1.2 Hierarchical clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.2 Block Approximation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3.2.1 Analytic Form of Adaptive Cross Approximation . . . . . . 112
3.2.2 Algebraic Form of Adaptive Cross Approximation . . . . . 119
3.3 Bibliographic Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

4 Implementation and Numerical Examples . . . . . . . . . . . . . . . . . . 131


4.1 Geometry Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.1.1 Unit Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.1.2 TEAM Problem 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.1.3 TEAM Problem 24 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.1.4 Relay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.1.5 Exhaust manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.2 Laplace Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.2.1 Analytical solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.2.2 Discretisation, Approximation and Iterative Solution . . . 137
4.2.3 Generation of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.2.4 Interior Dirichlet Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.2.5 Interior Neumann Problem . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.2.6 Interior Mixed Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.2.7 Inhomogeneous Interface Problem . . . . . . . . . . . . . . . . . . . 160
4.3 Linear Elastostatics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.3.1 Generation of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.3.2 Relay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
4.3.3 Foam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.4 Helmholtz Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.4.1 Analytical Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4.4.2 Discretisation, Approximation and Iterative Solution . . . 169
4.4.3 Generation of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4.4.4 Interior Dirichlet Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4.4.5 Interior Neumann Problem . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.4.6 Exterior Dirichlet Problem . . . . . . . . . . . . . . . . . . . . . . . . . 191
4.4.7 Exterior Neumann Problem . . . . . . . . . . . . . . . . . . . . . . . . . 196

A Mathematical Foundations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199


A.1 Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
A.2 Fundamental Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
A.2.1 Laplace Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
A.2.2 Lamé System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
A.2.3 Stokes System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
A.2.4 Helmholtz Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
A.3 Mapping Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

B Numerical Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225


B.1 Variational Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
B.2 Approximation Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231

C Numerical Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239


C.1 Numerical Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
C.2 Analytic Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
C.2.1 Single Layer Potential for the Laplace operator . . . . . . . . 246
C.2.2 Double Layer Potential for the Laplace operator . . . . . . . 249
C.2.3 Linear Elasticity Single Layer Potential . . . . . . . . . . . . . . 252
C.3 Iterative Solution Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
C.3.1 Conjugate Gradient Method (CG) . . . . . . . . . . . . . . . . . . . 256
C.3.2 Generalised Minimal Residual Method (GMRES) . . . . . . 263

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
1
Boundary Integral Equations

The solutions of second order partial differential equations can be described by certain surface and volume potentials when a fundamental solution of the underlying partial differential equation is known. Although the existence of such a fundamental solution can be guaranteed for a wide class of partial differential operators, see for example [53], the explicit construction of fundamental solutions is a more difficult task in the general case. Hence, we consider here partial differential operators with constant coefficients only. In particular, we restrict ourselves to the Laplace operator, the Helmholtz operator, and the systems of linear elastostatics and of Stokes, which include the most important applications of boundary integral equations and boundary element methods.
When using either a representation formula stemming from Green's second formula or when considering indirect surface potential methods, one has to find unknown density functions from the given boundary conditions. This is done by applying the corresponding trace operators to the surface and volume potentials, yielding appropriate boundary integral equations to be solved. Depending on the given boundary conditions one can derive different formulations of first or second kind boundary integral equations. Although on the continuous level all boundary integral equations are equivalent to the original boundary value problem, and, therefore, to each other, they admit quite different properties when applying a numerical scheme to obtain an approximate solution.
In this chapter we give an overview of direct and indirect reformulations of second order boundary value problems by using boundary integral equations and discuss the mapping properties of all boundary integral operators involved. From this we can deduce the unique solvability of the resulting boundary integral equations and the continuous dependence of the solution on the given boundary data.

1.1 Laplace Equation


The simplest example for a second order partial differential equation is the Laplace equation for a scalar function $u : \mathbb{R}^3 \to \mathbb{R}$ satisfying
\[ -\Delta u(x) = -\sum_{i=1}^{3} \frac{\partial^2}{\partial x_i^2}\, u(x) = 0 \quad \text{for } x \in \Omega \subset \mathbb{R}^3. \tag{1.1} \]
This equation is used for the modelling of, for example, the stationary heat transfer, of electrostatic potentials, and of ideal fluids.
In (1.1), $\Omega \subset \mathbb{R}^3$ is a bounded, multiply or simply connected domain with a Lipschitz boundary $\Gamma = \partial\Omega$.
Multiplying the partial differential equation (1.1) with a test function $v$, integrating over $\Omega$, and applying integration by parts, this gives Green's first formula
\[ \int_\Omega (-\Delta u(y))\, v(y)\, dy = a(u,v) - \int_\Gamma \gamma_1^{int} u(y)\, \gamma_0^{int} v(y)\, ds_y \tag{1.2} \]
with the symmetric bilinear form
\[ a(u,v) = \int_\Omega \bigl(\nabla u(y), \nabla v(y)\bigr)\, dy, \]
with the interior trace operator
\[ \gamma_0^{int} v(y) = \lim_{\Omega \ni \tilde y \to y \in \Gamma} v(\tilde y), \]
and with the interior conormal derivative of $u$ on $\Gamma$,
\[ \gamma_1^{int} u(y) = \lim_{\Omega \ni \tilde y \to y \in \Gamma} \bigl(n(y), \nabla_{\tilde y} u(\tilde y)\bigr). \]
Here, $n(y)$ is the outer normal vector defined for almost all $y \in \Gamma$.
From Green's first formula (1.2) and by the use of the symmetry of the bilinear form $a(\cdot,\cdot)$, we deduce Green's second formula
\[ \int_\Omega (-\Delta v(y))\, u(y)\, dy + \int_\Gamma \gamma_1^{int} v(y)\, \gamma_0^{int} u(y)\, ds_y = \int_\Omega (-\Delta u(y))\, v(y)\, dy + \int_\Gamma \gamma_1^{int} u(y)\, \gamma_0^{int} v(y)\, ds_y. \tag{1.3} \]
Inserting $v = v_0 \equiv 1$, we then obtain the compatibility condition
\[ \int_\Omega (-\Delta u(y))\, dy + \int_\Gamma \gamma_1^{int} u(y)\, ds_y = 0. \tag{1.4} \]
Now, choosing in Green's second formula (1.3) as a test function $v$ a fundamental solution $u^* : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}$ satisfying
\[ \int_\Omega \bigl(-\Delta_y u^*(x,y)\bigr)\, u(y)\, dy = u(x) \quad \text{for } x \in \Omega, \tag{1.5} \]
the solution of the Laplace equation (1.1) is given by the representation formula
\[ u(x) = \int_\Gamma u^*(x,y)\, \gamma_1^{int} u(y)\, ds_y - \int_\Gamma \gamma_{1,y}^{int} u^*(x,y)\, \gamma_0^{int} u(y)\, ds_y \tag{1.6} \]
for $x \in \Omega$. The fundamental solution of the Laplace equation is
\[ u^*(x,y) = \frac{1}{4\pi}\, \frac{1}{|x-y|} \quad \text{for } x, y \in \mathbb{R}^3. \tag{1.7} \]
For a domain $\Omega$ with Lipschitz boundary $\Gamma = \partial\Omega$, the solution (1.6) of the partial differential equation (1.1) has to be understood in a weak or distributional sense. For this, appropriate Sobolev spaces $H^s(\Omega)$ and $H^s(\Gamma)$ have to be introduced; see Appendix A.1.
To derive suitable boundary integral equations from the representation formula (1.6), we first have to investigate the surface potentials in (1.6) as well as their interior trace and conormal derivative.
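A quick verification, added here for completeness and not part of the original argument, that (1.7) is indeed harmonic away from the singularity: with $r = |x-y|$,
\[ \frac{\partial}{\partial y_i}\frac{1}{r} = -\frac{y_i-x_i}{r^3}, \qquad \frac{\partial^2}{\partial y_i^2}\frac{1}{r} = -\frac{1}{r^3} + \frac{3(y_i-x_i)^2}{r^5}, \qquad \Delta_y \frac{1}{r} = -\frac{3}{r^3} + \frac{3 r^2}{r^5} = 0 \quad \text{for } y \ne x, \]
so $-\Delta_y u^*(x,y) = 0$ for $y \ne x$; the point evaluation in (1.5) is produced entirely by the singularity at $y = x$.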

Single Layer Potential

First we consider the single layer potential
\[ (\widetilde V w)(x) = \int_\Gamma u^*(x,y)\, w(y)\, ds_y = \frac{1}{4\pi} \int_\Gamma \frac{w(y)}{|x-y|}\, ds_y \quad \text{for } x \in \Omega, \]
which defines a continuous map from a given density function $w$ on the boundary $\Gamma$ to a harmonic function $\widetilde V w$ in the domain $\Omega$. In particular,
\[ \widetilde V : H^{-1/2}(\Gamma) \to H^1(\Omega) \]
is continuous and $\widetilde V w \in H^1(\Omega)$ is a weak solution of the Laplace equation (1.1) for any $w \in H^{-1/2}(\Gamma)$. Using the mapping property of the interior trace operator
\[ \gamma_0^{int} : H^1(\Omega) \to H^{1/2}(\Gamma), \]
we can define the corresponding boundary integral operator
\[ V = \gamma_0^{int} \widetilde V \]
with the following mapping properties, see for example [24, 71, 105].
Lemma 1.1. The single layer potential operator
\[ V = \gamma_0^{int} \widetilde V : H^{-1/2}(\Gamma) \to H^{1/2}(\Gamma) \]
is bounded with
\[ \|V w\|_{H^{1/2}(\Gamma)} \le c_2^V\, \|w\|_{H^{-1/2}(\Gamma)} \quad \text{for all } w \in H^{-1/2}(\Gamma), \]
and $H^{-1/2}(\Gamma)$-elliptic,
\[ \langle V w, w \rangle_\Gamma \ge c_1^V\, \|w\|^2_{H^{-1/2}(\Gamma)} \quad \text{for all } w \in H^{-1/2}(\Gamma). \]
Moreover, for $w \in L^\infty(\Gamma)$ there holds the representation
\[ (V w)(x) = \int_\Gamma u^*(x,y)\, w(y)\, ds_y = \frac{1}{4\pi} \int_\Gamma \frac{w(y)}{|x-y|}\, ds_y \quad \text{for } x \in \Gamma \]
as a weakly singular surface integral.

Double Layer Potential

Next we consider the double layer potential
\[ (W v)(x) = \int_\Gamma \gamma_{1,y}^{int} u^*(x,y)\, v(y)\, ds_y = \frac{1}{4\pi} \int_\Gamma \frac{(x-y, n(y))}{|x-y|^3}\, v(y)\, ds_y \]
for $x \in \Omega$, which again defines a continuous map from a given density function $v$ on the boundary $\Gamma$ to a harmonic function $W v$ in the domain $\Omega$. In particular,
\[ W : H^{1/2}(\Gamma) \to H^1(\Omega) \]
is continuous and $W v \in H^1(\Omega)$ is a weak solution of the Laplace equation (1.1) for any $v \in H^{1/2}(\Gamma)$. Applying the interior trace operator
\[ \gamma_0^{int} : H^1(\Omega) \to H^{1/2}(\Gamma), \]
this defines an associated boundary integral operator [24, 71, 105].

Lemma 1.2. The boundary integral operator
\[ \gamma_0^{int} W : H^{1/2}(\Gamma) \to H^{1/2}(\Gamma) \]
is bounded with
\[ \|\gamma_0^{int} W v\|_{H^{1/2}(\Gamma)} \le c_2^W\, \|v\|_{H^{1/2}(\Gamma)} \quad \text{for all } v \in H^{1/2}(\Gamma). \]
For $v \in H^{1/2}(\Gamma)$ there holds the representation
\[ \gamma_0^{int} (W v)(x) = \bigl[-1 + \sigma(x)\bigr]\, v(x) + (K v)(x) \quad \text{for } x \in \Gamma, \]
with the double layer potential operator
\[ (K v)(x) = \lim_{\varepsilon \to 0} \int_{y \in \Gamma : |y-x| \ge \varepsilon} \gamma_{1,y}^{int} u^*(x,y)\, v(y)\, ds_y = \lim_{\varepsilon \to 0} \frac{1}{4\pi} \int_{y \in \Gamma : |y-x| \ge \varepsilon} \frac{(x-y, n(y))}{|x-y|^3}\, v(y)\, ds_y \]
and
\[ \sigma(x) = \lim_{\varepsilon \to 0} \frac{1}{4\pi \varepsilon^2} \int_{y \in \Omega : |y-x| = \varepsilon} ds_y \quad \text{for } x \in \Gamma. \]
Moreover, for $v = v_0 \equiv 1$, we have
\[ \sigma(x)\, v_0(x) + (K v_0)(x) = 0 \quad \text{for } x \in \Gamma. \]

If $x$ is on a smooth part of the boundary $\Gamma = \partial\Omega$, then we obtain
\[ \sigma(x) = \frac{1}{2}. \]
Otherwise, if $x$ is on an edge or in a corner point of the boundary $\Gamma = \partial\Omega$, $\sigma(x)$ is related to the interior angle of $\Omega$ in $x$. However, without loss of generality, we assume $\sigma(x) = 1/2$ for almost all $x \in \Gamma$.
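Two identities that follow immediately from Lemma 1.2 and are used tacitly several times below (for example in Subsection 1.1.6) are worth recording explicitly:
\[ (K 1)(x) = -\sigma(x) = -\tfrac{1}{2} \quad \text{for almost all } x \in \Gamma, \qquad \text{hence} \qquad \Bigl(\tfrac{1}{2} I + K\Bigr) 1 = 0, \qquad \Bigl(\tfrac{1}{2} I - K\Bigr) 1 = 1. \]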
By applying the interior trace operator $\gamma_0^{int}$ to the representation formula (1.6),
\[ u(x) = (\widetilde V \gamma_1^{int} u)(x) - (W \gamma_0^{int} u)(x) \quad \text{for } x \in \Omega, \]
we obtain the boundary integral equation
\[ \gamma_0^{int} u(x) = (V \gamma_1^{int} u)(x) + \frac{1}{2}\, \gamma_0^{int} u(x) - (K \gamma_0^{int} u)(x) \tag{1.8} \]
for almost all $x \in \Gamma$. In particular, this is a weakly singular boundary integral equation,
\[ \int_\Gamma u^*(x,y)\, \gamma_1^{int} u(y)\, ds_y = \frac{1}{2}\, \gamma_0^{int} u(x) + \int_\Gamma \gamma_{1,y}^{int} u^*(x,y)\, \gamma_0^{int} u(y)\, ds_y \]
for $x \in \Gamma$, or,
\[ \frac{1}{4\pi} \int_\Gamma \frac{1}{|x-y|}\, \gamma_1^{int} u(y)\, ds_y = \frac{1}{2}\, \gamma_0^{int} u(x) + \frac{1}{4\pi} \int_\Gamma \frac{(x-y, n(y))}{|x-y|^3}\, \gamma_0^{int} u(y)\, ds_y. \]
Instead of the interior trace operator $\gamma_0^{int}$, we may also apply the interior conormal derivative operator $\gamma_1^{int}$ to the representation formula (1.6). To do so, we first need to investigate the interior conormal derivatives of the single and double layer potentials $\widetilde V w$ and $W v$, which are both harmonic in $\Omega$. Then,
\[ \gamma_1^{int} : H^1(\Omega, \Delta) \to H^{-1/2}(\Gamma), \]
where
\[ H^1(\Omega, \Delta) = \bigl\{ v \in H^1(\Omega) : \Delta v \in \widetilde H^{-1}(\Omega) \bigr\}. \]

Adjoint Double Layer Potential

Lemma 1.3. The boundary integral operator
\[ \gamma_1^{int} \widetilde V : H^{-1/2}(\Gamma) \to H^{-1/2}(\Gamma) \]
is bounded with
\[ \|\gamma_1^{int} \widetilde V w\|_{H^{-1/2}(\Gamma)} \le c_2^{\gamma_1 \widetilde V}\, \|w\|_{H^{-1/2}(\Gamma)} \quad \text{for all } w \in H^{-1/2}(\Gamma). \]
For $w \in H^{-1/2}(\Gamma)$ there holds the representation
\[ \gamma_1^{int} (\widetilde V w)(x) = \frac{1}{2}\, w(x) + (K' w)(x) \]
in the sense of $H^{-1/2}(\Gamma)$, with the adjoint double layer potential operator
\[ (K' w)(x) = \lim_{\varepsilon \to 0} \int_{y \in \Gamma : |y-x| \ge \varepsilon} \gamma_{1,x}^{int} u^*(x,y)\, w(y)\, ds_y = \lim_{\varepsilon \to 0} \frac{1}{4\pi} \int_{y \in \Gamma : |y-x| \ge \varepsilon} \frac{(y-x, n(x))}{|x-y|^3}\, w(y)\, ds_y. \]
In particular, we have
\[ \langle \gamma_1^{int} \widetilde V w, v \rangle_\Gamma = \frac{1}{2}\, \langle w, v \rangle_\Gamma + \langle K' w, v \rangle_\Gamma = \frac{1}{2}\, \langle w, v \rangle_\Gamma + \langle w, K v \rangle_\Gamma \]
for all $v \in H^{1/2}(\Gamma)$.

Hypersingular Integral Operator

In the same way as for the single layer potential $\widetilde V w$, we now consider the interior conormal derivative of the double layer potential $W v$.

Lemma 1.4. The operator
\[ D = -\gamma_1^{int} W : H^{1/2}(\Gamma) \to H^{-1/2}(\Gamma) \]
is bounded with
\[ \|D v\|_{H^{-1/2}(\Gamma)} \le c_2^D\, \|v\|_{H^{1/2}(\Gamma)} \quad \text{for all } v \in H^{1/2}(\Gamma), \]
and $H^{1/2}(\Gamma)$-semi-elliptic,
\[ \langle D v, v \rangle_\Gamma \ge c_1^D\, |v|^2_{H^{1/2}(\Gamma)} \quad \text{for all } v \in H^{1/2}(\Gamma). \]
In particular, for $v = v_0 \equiv 1$, we have
\[ (D v_0)(x) = 0 \quad \text{for } x \in \Gamma. \]
Moreover, for continuous functions $u, v \in H^{1/2}(\Gamma) \cap C(\Gamma)$ there holds the representation
\[ \langle D u, v \rangle_\Gamma = \frac{1}{4\pi} \int_\Gamma \int_\Gamma \frac{\bigl(\mathrm{curl}_\Gamma\, u(y), \mathrm{curl}_\Gamma\, v(x)\bigr)}{|x-y|}\, ds_y\, ds_x \tag{1.9} \]
where
\[ \mathrm{curl}_\Gamma\, u(x) = n(x) \times \nabla_x \tilde u(x) \quad \text{for } x \in \Gamma \]
is the surface curl operator and $\tilde u$ is some (locally defined) extension of $u$ into the neighbourhood of $\Gamma$.

The boundary integral operator $D = -\gamma_1^{int} W$ does not exhibit an explicit representation as a Cauchy singular surface integral; in particular,
\[ (D v)(x) = -\gamma_1^{int} (W v)(x) = -\lim_{\Omega \ni \tilde x \to x \in \Gamma} \bigl(n(x), \nabla_{\tilde x} (W v)(\tilde x)\bigr) = -\lim_{\varepsilon \to 0} \frac{1}{4\pi} \int_{y \in \Gamma : |y-x| \ge \varepsilon} \left[ 3\, \frac{(x-y, n(y))\,(x-y, n(x))}{|x-y|^5} - \frac{(n(x), n(y))}{|x-y|^3} \right] v(y)\, ds_y \]
does not exist. Therefore the boundary integral operator $D$ is called hypersingular operator and it requires some appropriate regularisation procedure. Since $u(x) = 1$ for $x \in \Omega$ is a solution of the Laplace equation $-\Delta u(x) = 0$, the representation formula (1.6) reads for this special choice
\[ -\int_\Gamma \gamma_{1,y}^{int} u^*(\tilde x, y)\, ds_y = 1 \quad \text{for } \tilde x \in \Omega. \]
Thus, we have
\[ \nabla_{\tilde x} \int_\Gamma \gamma_{1,y}^{int} u^*(\tilde x, y)\, ds_y = 0 \quad \text{for } \tilde x \in \Omega. \]
Then,
\[ (D v)(x) = -\gamma_1^{int} (W v)(x) = -\lim_{\Omega \ni \tilde x \to x} \bigl(n(x), \nabla_{\tilde x} (W v)(\tilde x)\bigr) = -\lim_{\Omega \ni \tilde x \to x} \Bigl(n(x), \nabla_{\tilde x} \int_\Gamma \gamma_{1,y}^{int} u^*(\tilde x, y)\, v(y)\, ds_y \Bigr) = -\lim_{\Omega \ni \tilde x \to x} \Bigl(n(x), \nabla_{\tilde x} \int_\Gamma \gamma_{1,y}^{int} u^*(\tilde x, y)\, \bigl[v(y) - v(x)\bigr]\, ds_y \Bigr) \]
exists as a Cauchy singular surface integral,
\[ (D v)(x) = -\frac{1}{4\pi} \int_\Gamma \left[ 3\, \frac{(x-y, n(y))\,(x-y, n(x))}{|x-y|^5} - \frac{(n(x), n(y))}{|x-y|^3} \right] \bigl[v(y) - v(x)\bigr]\, ds_y. \]
However, using integration by parts as in the derivation of formula (1.9) of Lemma 1.4, the induced bilinear form of the hypersingular boundary integral operator can be transformed to a weakly singular bilinear form including some surface curl operators.

Boundary Integral Equations

Applying now the interior conormal derivative operator $\gamma_1^{int}$ to the representation formula (1.6),
\[ u(x) = (\widetilde V \gamma_1^{int} u)(x) - (W \gamma_0^{int} u)(x) \quad \text{for } x \in \Omega, \]
this gives the boundary integral equation
\[ \gamma_1^{int} u(x) = \frac{1}{2}\, \gamma_1^{int} u(x) + (K' \gamma_1^{int} u)(x) + (D \gamma_0^{int} u)(x) \tag{1.10} \]
in the sense of $H^{-1/2}(\Gamma)$. In particular, this is a hypersingular boundary integral equation,
\[ -\gamma_{1,x}^{int} \int_\Gamma \gamma_{1,y}^{int} u^*(x,y)\, \gamma_0^{int} u(y)\, ds_y = \frac{1}{2}\, \gamma_1^{int} u(x) - \int_\Gamma \gamma_{1,x}^{int} u^*(x,y)\, \gamma_1^{int} u(y)\, ds_y, \]
or,
\[ -\frac{1}{4\pi} \int_\Gamma \left[ 3\, \frac{(y-x, n(y))\,(y-x, n(x))}{|x-y|^5} - \frac{(n(x), n(y))}{|x-y|^3} \right] \bigl[\gamma_0^{int} u(y) - \gamma_0^{int} u(x)\bigr]\, ds_y = \frac{1}{2}\, \gamma_1^{int} u(x) - \frac{1}{4\pi} \int_\Gamma \frac{(y-x, n(x))}{|x-y|^3}\, \gamma_1^{int} u(y)\, ds_y. \]
Combining (1.8) and (1.10), we can write both boundary integral equations by the use of the Calderón projector as
\[ \begin{pmatrix} \gamma_0^{int} u \\ \gamma_1^{int} u \end{pmatrix} = \begin{pmatrix} \frac{1}{2} I - K & V \\ D & \frac{1}{2} I + K' \end{pmatrix} \begin{pmatrix} \gamma_0^{int} u \\ \gamma_1^{int} u \end{pmatrix}. \tag{1.11} \]
From the boundary integral equations of the Calderón projector (1.11) one can derive some important properties of boundary integral operators, i.e.
\[ V K' = K V, \qquad V D = \Bigl(\frac{1}{2} I - K\Bigr) \Bigl(\frac{1}{2} I + K\Bigr). \]
Steklov–Poincaré Operator

Since the single layer potential $V$ is $H^{-1/2}(\Gamma)$-elliptic and therefore invertible, we obtain from the first equation in (1.11) the Dirichlet to Neumann map
\[ \gamma_1^{int} u(x) = V^{-1} \Bigl(\frac{1}{2} I + K\Bigr) \gamma_0^{int} u(x) = (S^{int} \gamma_0^{int} u)(x) \quad \text{for } x \in \Gamma \tag{1.12} \]
which defines the Steklov–Poincaré operator
\[ S^{int} : H^{1/2}(\Gamma) \to H^{-1/2}(\Gamma) \]
associated to the partial differential equation (1.1). Inserting the Dirichlet to Neumann map (1.12) into the second equation of the Calderón projector (1.11), this gives
\[ \gamma_1^{int} u(x) = \Bigl[ D + \Bigl(\frac{1}{2} I + K'\Bigr) V^{-1} \Bigl(\frac{1}{2} I + K\Bigr) \Bigr] \gamma_0^{int} u(x) = (S^{int} \gamma_0^{int} u)(x) \]
with the symmetric representation of the Steklov–Poincaré operator
\[ S^{int} = D + \Bigl(\frac{1}{2} I + K'\Bigr) V^{-1} \Bigl(\frac{1}{2} I + K\Bigr). \tag{1.13} \]
Note that also the representation (1.12) of the Steklov–Poincaré operator is symmetric. However, due to Lemma 1.1 and Lemma 1.4, we conclude from the symmetric representation (1.13)
\[ \langle S^{int} v, v \rangle_\Gamma = \langle D v, v \rangle_\Gamma + \Bigl\langle V^{-1} \Bigl(\frac{1}{2} I + K\Bigr) v, \Bigl(\frac{1}{2} I + K\Bigr) v \Bigr\rangle_\Gamma \ge \langle D v, v \rangle_\Gamma \ge c_1^D\, |v|^2_{H^{1/2}(\Gamma)} \]
for all $v \in H^{1/2}(\Gamma)$, and, therefore, $S^{int}$ is $H^{1/2}(\Gamma)$-semi-elliptic. In particular, for $v = v_0 \equiv 1$, we have $(S^{int} v_0)(x) = 0$ for $x \in \Gamma$.
The Steklov–Poincaré operator $S^{int}$ defines the Dirichlet to Neumann map, $\gamma_1^{int} u = S^{int} \gamma_0^{int} u$, which is a relation of the Cauchy data associated to a solution of the homogeneous partial differential equation. This map will be used to handle more general, e.g. nonlinear, boundary conditions. Moreover, Steklov–Poincaré operators play an important role in domain decomposition methods, e.g. when considering boundary value problems with piecewise constant coefficients, or when considering the coupling of finite and boundary element methods, see, for example, [60, 107].

1.1.1 Interior Dirichlet Boundary Value Problem

We first consider the interior Dirichlet boundary value problem for the Laplace equation, i.e.,
\[ -\Delta u(x) = 0 \quad \text{for } x \in \Omega, \qquad \gamma_0^{int} u(x) = g(x) \quad \text{for } x \in \Gamma. \tag{1.14} \]
Using the representation formula (1.6), the solution of the Dirichlet boundary value problem (1.14) is given by
\[ u(x) = \int_\Gamma u^*(x,y)\, t(y)\, ds_y - \int_\Gamma \gamma_{1,y}^{int} u^*(x,y)\, g(y)\, ds_y \quad \text{for } x \in \Omega, \]
where $t = \gamma_1^{int} u$ is the unknown conormal derivative of $u$ on $\Gamma$ which has to be determined from some appropriate boundary integral equations.

Direct Single Layer Potential Formulation

Using the first equation in the Calderón projector (1.11), we have to solve a first kind boundary integral equation to find $t \in H^{-1/2}(\Gamma)$, such that
\[ (V t)(x) = \frac{1}{2}\, g(x) + (K g)(x) \quad \text{for } x \in \Gamma, \tag{1.15} \]
or,
\[ \frac{1}{4\pi} \int_\Gamma \frac{t(y)}{|x-y|}\, ds_y = \frac{1}{2}\, g(x) + \frac{1}{4\pi} \int_\Gamma \frac{(x-y, n(y))}{|x-y|^3}\, g(y)\, ds_y. \]
Using duality arguments, the boundary integral equation
\[ V t = \frac{1}{2}\, g + K g \in H^{1/2}(\Gamma) \]
corresponds to
\[ 0 = \Bigl\| V t - \frac{1}{2}\, g - K g \Bigr\|_{H^{1/2}(\Gamma)} = \sup_{0 \ne w \in H^{-1/2}(\Gamma)} \frac{\langle V t - \frac{1}{2} g - K g, w \rangle_\Gamma}{\|w\|_{H^{-1/2}(\Gamma)}}. \]
Therefore, $t \in H^{-1/2}(\Gamma)$ is the solution of the variational problem
\[ \langle V t, w \rangle_\Gamma = \Bigl\langle \Bigl(\frac{1}{2} I + K\Bigr) g, w \Bigr\rangle_\Gamma \quad \text{for all } w \in H^{-1/2}(\Gamma), \tag{1.16} \]
or,
\[ \frac{1}{4\pi} \int_\Gamma w(x) \int_\Gamma \frac{t(y)}{|x-y|}\, ds_y\, ds_x = \frac{1}{2} \int_\Gamma w(x)\, g(x)\, ds_x + \frac{1}{4\pi} \int_\Gamma w(x) \int_\Gamma \frac{(x-y, n(y))}{|x-y|^3}\, g(y)\, ds_y\, ds_x. \]

Theorem 1.5. Let $g \in H^{1/2}(\Gamma)$ be given. Then there exists a unique solution $t \in H^{-1/2}(\Gamma)$ of the variational problem (1.16). Moreover,
\[ \|t\|_{H^{-1/2}(\Gamma)} \le \frac{1}{c_1^V} \bigl(1 + c_2^W\bigr)\, \|g\|_{H^{1/2}(\Gamma)}. \]
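Although the numerical treatment of (1.16) is the subject of Chapters 2 to 4, a minimal sketch may already indicate how the variational problem is used in practice. The following Python fragment is purely illustrative and rests on several simplifying assumptions that are not part of the text: piecewise constant densities, a one-point midpoint quadrature for all off-diagonal entries, a crude equal-area-disc estimate for the diagonal of the single layer matrix, and a very coarse octahedral surface; the numbers it produces are only qualitatively meaningful.

import numpy as np

# Illustrative closed surface: the eight faces of an octahedron inscribed in
# the unit sphere (outward oriented). Any closed triangular mesh would do.
vertices = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                     [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
triangles = np.array([[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
                      [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]])

corners = vertices[triangles]                      # (N, 3, 3) triangle corners
mid = corners.mean(axis=1)                         # element midpoints
raw_n = np.cross(corners[:, 1] - corners[:, 0], corners[:, 2] - corners[:, 0])
area = 0.5 * np.linalg.norm(raw_n, axis=1)         # element areas
normal = raw_n / np.linalg.norm(raw_n, axis=1, keepdims=True)  # outward unit normals
N = len(triangles)

# Galerkin matrices for piecewise constant trial and test functions:
# V[i, j] ~ <V psi_j, psi_i>, K[i, j] ~ <K psi_j, psi_i> (midpoint rule).
V = np.zeros((N, N))
K = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i == j:
            # crude self term: inner integral over the equal-area disc,
            # evaluated at its centre, times the element area
            R = np.sqrt(area[i] / np.pi)
            V[i, i] = 0.5 * area[i] * R
            continue                               # K self term vanishes on flat panels
        d = mid[i] - mid[j]
        r = np.linalg.norm(d)
        V[i, j] = area[i] * area[j] / (4.0 * np.pi * r)
        K[i, j] = area[i] * area[j] * np.dot(d, normal[j]) / (4.0 * np.pi * r**3)

# Interior Dirichlet problem for the harmonic function u(x) = x_1:
# Dirichlet datum g = x_1 on the surface, exact conormal derivative t = n_1.
g = mid[:, 0]
rhs = 0.5 * area * g + K @ g                       # discrete (1/2 M + K) g
t = np.linalg.solve(V, rhs)

print("approximate conormal derivative:", np.round(t, 3))
print("exact values n_1 per face      :", np.round(normal[:, 0], 3))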
Because the boundary integral equation (1.15) results from the representation formula (1.6), this approach is called direct.
Since both the single and the double layer potentials are harmonic in $\Omega$, the solution of the Dirichlet boundary value problem (1.14) can also be represented either by a single or by a double layer potential alone. Then the unknown density functions have, in general, no physical meaning. The resulting methods are called indirect and have a long history when solving boundary value problems for second order partial differential equations, see, e.g., [33].

Indirect Single Layer Potential Formulation


Let us consider the indirect single layer potential approach
\[ u(x) = (\widetilde V w)(x) = \frac{1}{4\pi} \int_\Gamma \frac{w(y)}{|x-y|}\, ds_y \quad \text{for } x \in \Omega, \]
where we have to find the unknown density function $w \in H^{-1/2}(\Gamma)$. Applying the interior trace operator $\gamma_0^{int}$, from the given Dirichlet boundary conditions we then obtain the first kind boundary integral equation
\[ (V w)(x) = \int_\Gamma u^*(x,y)\, w(y)\, ds_y = g(x) \quad \text{for } x \in \Gamma, \tag{1.17} \]
which is equivalent to the variational problem
\[ \langle V w, z \rangle_\Gamma = \langle g, z \rangle_\Gamma \quad \text{for all } z \in H^{-1/2}(\Gamma), \tag{1.18} \]
or,
\[ \frac{1}{4\pi} \int_\Gamma z(x) \int_\Gamma \frac{w(y)}{|x-y|}\, ds_y\, ds_x = \int_\Gamma z(x)\, g(x)\, ds_x. \]

Theorem 1.6. Let $g \in H^{1/2}(\Gamma)$ be given. Then there exists a unique solution $w \in H^{-1/2}(\Gamma)$ of the variational problem (1.18). Moreover,
\[ \|w\|_{H^{-1/2}(\Gamma)} \le \frac{1}{c_1^V}\, \|g\|_{H^{1/2}(\Gamma)}. \]

Note that both boundary integral equations (1.15) and (1.17) are of the same structure, while they differ in the definition of the right hand side. In fact, the boundary integral equation (1.15) of the direct approach involves the application of the double layer potential $K$ to the given Dirichlet datum $g$, while the right hand side of the boundary integral equation (1.17) of the indirect approach is just the given Dirichlet datum $g$ itself.

Indirect Double Layer Potential Formulation

Instead of the indirect single layer potential $u = \widetilde V w$, we now consider the indirect double layer potential approach
\[ u(x) = (W v)(x) = \frac{1}{4\pi} \int_\Gamma \frac{(x-y, n(y))}{|x-y|^3}\, v(y)\, ds_y \quad \text{for } x \in \Omega, \]
which leads, by applying the interior trace operator $\gamma_0^{int}$ and by the use of Lemma 1.2, to a second kind boundary integral equation to find $v \in H^{1/2}(\Gamma)$ such that
\[ \frac{1}{2}\, v(x) - (K v)(x) = -g(x) \quad \text{for } x \in \Gamma, \tag{1.19} \]
or,
\[ \frac{1}{2}\, v(x) - \frac{1}{4\pi} \int_\Gamma \frac{(x-y, n(y))}{|x-y|^3}\, v(y)\, ds_y = -g(x) \quad \text{for } x \in \Gamma. \]
Since this boundary integral equation is formulated in $H^{1/2}(\Gamma)$, the equivalent variational problem is to find $v \in H^{1/2}(\Gamma)$ such that
\[ \Bigl\langle \Bigl(\frac{1}{2} I - K\Bigr) v, w \Bigr\rangle_\Gamma = -\langle g, w \rangle_\Gamma \quad \text{for all } w \in H^{-1/2}(\Gamma). \]
The solution of the second kind boundary integral equation (1.19) is given by the Neumann series
\[ v(x) = -\sum_{\ell=0}^{\infty} \Bigl(\frac{1}{2} I + K\Bigr)^{\ell} g(x) \quad \text{for } x \in \Gamma. \tag{1.20} \]
The convergence of the Neumann series (1.20) and therefore the unique solvability of the boundary integral equation (1.19) can be established when using an appropriate norm in the Sobolev space $H^{1/2}(\Gamma)$, see [108].
Theorem 1.7. Let $g \in H^{1/2}(\Gamma)$ be given. Then there exists a unique solution $v \in H^{1/2}(\Gamma)$ of the boundary integral equation (1.19). Moreover,
\[ \|v\|_{V^{-1}} \le \frac{1}{1 - c_K}\, \|g\|_{V^{-1}}, \]
where $c_K < 1$ is the contraction rate,
\[ \Bigl\| \Bigl(\frac{1}{2} I + K\Bigr) z \Bigr\|_{V^{-1}} \le c_K\, \|z\|_{V^{-1}} \quad \text{for all } z \in H^{1/2}(\Gamma), \]
with respect to the norm induced by the inverse single layer potential,
\[ \|z\|^2_{V^{-1}} = \langle V^{-1} z, z \rangle_\Gamma \quad \text{for all } z \in H^{1/2}(\Gamma). \]

It seems to be a natural setting to consider the second kind boundary integral equation (1.19) in the trace space $H^{1/2}(\Gamma)$, where Theorem 1.7 ensures the unique solvability. However, for practical reasons, the boundary integral equation (1.19) is often considered in $L_2(\Gamma)$. While it is known that the shifted double layer potential operator
\[ \frac{1}{2} I - K : L_2(\Gamma) \to L_2(\Gamma) \]
is bounded, see [112], it is an open problem whether this operator is invertible in $L_2(\Gamma)$ or not for general Lipschitz boundaries $\Gamma = \partial\Omega$.

1.1.2 Interior Neumann Boundary Value Problem

For a simply connected domain $\Omega \subset \mathbb{R}^3$, we now consider the interior Neumann boundary value problem for the Laplace equation,
\[ -\Delta u(x) = 0 \quad \text{for } x \in \Omega, \qquad \gamma_1^{int} u(x) = g(x) \quad \text{for } x \in \Gamma. \tag{1.21} \]
From (1.4), we have to assume the solvability condition
\[ \int_\Gamma g(y)\, ds_y = 0. \tag{1.22} \]
Note that the solution of the Neumann boundary value problem (1.21) is only unique up to an additive constant.
Using the representation formula (1.6), a solution of the Neumann boundary value problem (1.21) is given by
\[ u(x) = \int_\Gamma u^*(x,y)\, g(y)\, ds_y - \int_\Gamma \gamma_{1,y}^{int} u^*(x,y)\, \gamma_0^{int} u(y)\, ds_y, \quad x \in \Omega. \]
Hence, we have to find the yet unknown Dirichlet datum $u = \gamma_0^{int} u$ on $\Gamma$.



Direct Double Layer Potential Formulation

Using the first equation in (1.11), we have to solve a second kind boundary integral equation to find $u = \gamma_0^{int} u \in H^{1/2}(\Gamma)$ such that
\[ \frac{1}{2}\, u(x) + (K u)(x) = (V g)(x) \quad \text{for } x \in \Gamma, \tag{1.23} \]
or,
\[ \frac{1}{2}\, u(x) + \frac{1}{4\pi} \int_\Gamma \frac{(x-y, n(y))}{|x-y|^3}\, u(y)\, ds_y = \frac{1}{4\pi} \int_\Gamma \frac{g(y)}{|x-y|}\, ds_y \quad \text{for } x \in \Gamma. \]
As for the second kind boundary integral equation (1.19) for the Dirichlet boundary value problem (1.14), a solution of the second kind boundary integral equation (1.23) is given by the Neumann series
\[ u(x) = \sum_{\ell=0}^{\infty} \Bigl(\frac{1}{2} I - K\Bigr)^{\ell} (V g)(x) \quad \text{for } x \in \Gamma. \tag{1.24} \]
Since the given Neumann datum $g \in H^{-1/2}(\Gamma)$ has to satisfy the solvability condition (1.22), and since $v_0 \equiv 1$ is the eigenfunction corresponding to the zero eigenvalue of $\frac{1}{2} I + K$, all members of the Neumann series (1.24), and, therefore, $\overline u$ are in the subspace $H^{1/2}_*(\Gamma) \subset H^{1/2}(\Gamma)$ defined as follows
\[ H^{1/2}_*(\Gamma) = \bigl\{ v \in H^{1/2}(\Gamma) : \langle V^{-1} v, 1 \rangle_\Gamma = 0 \bigr\}. \]
The general solution of the second kind boundary integral equation (1.23) is then given by $u = \overline u + c$ where $c \in \mathbb{R}$ is an arbitrary constant. To fix the constant, we may require the scaling condition
\[ \langle \overline u, 1 \rangle_\Gamma = \int_\Gamma \overline u(y)\, ds_y = \alpha, \tag{1.25} \]
where $\alpha \in \mathbb{R}$ can be arbitrary, but prescribed. This finally leads to a variational problem to find $\overline u \in H^{1/2}(\Gamma)$ such that
\[ \Bigl\langle \Bigl(\frac{1}{2} I + K\Bigr) \overline u, w \Bigr\rangle_\Gamma + \langle \overline u, 1 \rangle_\Gamma\, \langle w, 1 \rangle_\Gamma = \langle V g, w \rangle_\Gamma + \alpha\, \langle w, 1 \rangle_\Gamma \tag{1.26} \]
is satisfied for all $w \in H^{-1/2}(\Gamma)$. Note that the bilinear form of the extended variational problem (1.26) is regular due to the additional term $\langle \overline u, 1 \rangle_\Gamma\, \langle w, 1 \rangle_\Gamma$, which regularises the singular operator $\frac{1}{2} I + K$. Summarising the above, we obtain the following result:

Theorem 1.8. Let $g \in H^{-1/2}(\Gamma)$ and $\alpha \in \mathbb{R}$ be given. Then there exists a unique solution $\overline u \in H^{1/2}(\Gamma)$ of the extended variational problem (1.26) satisfying
\[ \|\overline u\|_{H^{1/2}(\Gamma)} \le c \bigl( \|g\|_{H^{-1/2}(\Gamma)} + |\alpha| \bigr). \]
If $g \in H^{-1/2}(\Gamma)$ satisfies the solvability condition (1.22), then $\overline u \in H^{1/2}(\Gamma)$ is the unique solution of the boundary integral equation (1.23) satisfying the scaling condition (1.25).

Direct Hypersingular Integral Operator Formulation


When using the second equation in (1.11), we have to solve a first kind boundary integral equation to find $u = \gamma_0^{int} u \in H^{1/2}(\Gamma)$ such that
\[ (D u)(x) = \frac{1}{2}\, g(x) - (K' g)(x) \quad \text{for } x \in \Gamma \tag{1.27} \]
is satisfied in a weak sense, in particular, in the sense of $H^{-1/2}(\Gamma)$. Since the hypersingular boundary integral operator $D$ has a nontrivial kernel, we have to consider the equation (1.27) in suitable subspaces. For this we define
\[ H^{1/2}_{**}(\Gamma) = \bigl\{ v \in H^{1/2}(\Gamma) : \langle v, 1 \rangle_\Gamma = 0 \bigr\}. \]
Then the variational problem of the boundary integral equation (1.27) reads to find $\overline u \in H^{1/2}_{**}(\Gamma)$ such that
\[ \langle D \overline u, v \rangle_\Gamma = \Bigl\langle \Bigl(\frac{1}{2} I - K'\Bigr) g, v \Bigr\rangle_\Gamma \tag{1.28} \]
is satisfied for all $v \in H^{1/2}_{**}(\Gamma)$. The general solution of the first kind boundary integral equation (1.27) is then given by $u = \overline u + c$ where $c \in \mathbb{R}$ is a constant which can be determined by the scaling condition (1.25) afterwards.
Instead of solving the variational problem (1.28) in the subspace $H^{1/2}_{**}(\Gamma)$ and finding the unique solution afterwards from the scaling condition (1.25), we can formulate an extended variational problem to find $\overline u \in H^{1/2}(\Gamma)$ such that
\[ \langle D \overline u, v \rangle_\Gamma + \langle \overline u, 1 \rangle_\Gamma\, \langle v, 1 \rangle_\Gamma = \Bigl\langle \Bigl(\frac{1}{2} I - K'\Bigr) g, v \Bigr\rangle_\Gamma + \alpha\, \langle v, 1 \rangle_\Gamma \tag{1.29} \]
is satisfied for all $v \in H^{1/2}(\Gamma)$.

Theorem 1.9. Let $g \in H^{-1/2}(\Gamma)$ and $\alpha \in \mathbb{R}$ be given. Then there exists a unique solution $\overline u \in H^{1/2}(\Gamma)$ of the extended variational problem (1.29) satisfying
\[ \|\overline u\|_{H^{1/2}(\Gamma)} \le c \bigl( \|g\|_{H^{-1/2}(\Gamma)} + |\alpha| \bigr). \]
If $g \in H^{-1/2}(\Gamma)$ satisfies the solvability condition (1.22), then $\overline u$ is the unique solution of the hypersingular boundary integral equation (1.27) satisfying the scaling condition (1.25).
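One can check directly (this short verification is not spelled out in the text) that the extended problem (1.29) builds the scaling condition in. Testing (1.29) with $v \equiv 1$ and using the self-adjointness of $D$, $D1 = 0$, $(\frac{1}{2} I - K)1 = 1$, and the solvability condition (1.22),
\[ \langle D \overline u, 1 \rangle_\Gamma + \langle \overline u, 1 \rangle_\Gamma\, \langle 1, 1 \rangle_\Gamma = \langle \overline u, D1 \rangle_\Gamma + \langle \overline u, 1 \rangle_\Gamma\, |\Gamma| = \Bigl\langle g, \Bigl(\tfrac{1}{2} I - K\Bigr) 1 \Bigr\rangle_\Gamma + \alpha\, |\Gamma| = \langle g, 1 \rangle_\Gamma + \alpha\, |\Gamma| = \alpha\, |\Gamma|, \]
so $\langle \overline u, 1 \rangle_\Gamma = \alpha$, which is exactly the scaling condition (1.25).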

Steklov–Poincaré Operator Formulation

Instead of the hypersingular boundary integral equation (1.27) we may also consider a Steklov–Poincaré operator equation to find $u = \gamma_0^{int} u \in H^{1/2}(\Gamma)$ such that
\[ (S^{int} u)(x) = g(x) \quad \text{for } x \in \Gamma, \tag{1.30} \]
where the Steklov–Poincaré operator $S^{int}$ is given either by the Dirichlet to Neumann map (1.12) or in the symmetric form (1.13). As for the hypersingular boundary integral equation (1.27), one can formulate an extended variational formulation to find $\overline u \in H^{1/2}(\Gamma)$ such that
\[ \langle S^{int} \overline u, v \rangle_\Gamma + \langle \overline u, 1 \rangle_\Gamma\, \langle v, 1 \rangle_\Gamma = \langle g, v \rangle_\Gamma + \alpha\, \langle v, 1 \rangle_\Gamma \tag{1.31} \]
is satisfied for all $v \in H^{1/2}(\Gamma)$, and where $\alpha \in \mathbb{R}$ is given by the scaling condition (1.25).

Theorem 1.10. Let $g \in H^{-1/2}(\Gamma)$ and $\alpha \in \mathbb{R}$ be given. Then there exists a unique solution $\overline u \in H^{1/2}(\Gamma)$ of the extended variational problem (1.31) satisfying
\[ \|\overline u\|_{H^{1/2}(\Gamma)} \le c \bigl( \|g\|_{H^{-1/2}(\Gamma)} + |\alpha| \bigr). \]
If $g \in H^{-1/2}(\Gamma)$ satisfies the solvability condition (1.22), then $\overline u$ is the unique solution of the Steklov–Poincaré operator equation (1.30) satisfying the scaling condition (1.25).

Indirect Single and Double Layer Potential Formulations

When using the indirect single layer potential ansatz $u = \widetilde V w$ in $\Omega$, the application of the interior conormal derivative operator $\gamma_1^{int}$ gives the second kind boundary integral equation
\[ \frac{1}{2}\, w(x) + (K' w)(x) = g(x) \quad \text{for } x \in \Gamma. \tag{1.32} \]
As for the second kind boundary integral equation (1.23), the solution of the boundary integral equation (1.32) is given by the Neumann series
\[ w(x) = \sum_{\ell=0}^{\infty} \Bigl(\frac{1}{2} I - K'\Bigr)^{\ell} g(x) \quad \text{for } x \in \Gamma. \tag{1.33} \]
The convergence of the series (1.33) follows as in Theorem 1.7 due to the contraction estimate, see [108],
\[ \Bigl\| \Bigl(\frac{1}{2} I - K'\Bigr) w \Bigr\|_{V} \le c_K\, \|w\|_{V} \quad \text{for all } w \in H^{-1/2}(\Gamma) : \langle w, 1 \rangle_\Gamma = 0, \]
with $c_K < 1$.
The indirect double layer potential approach $u = W v$ in $\Omega$ leads, finally, to the hypersingular boundary integral equation
\[ (D v)(x) = -g(x) \quad \text{for } x \in \Gamma, \]
which is of the same structure and hence can be handled like the hypersingular boundary integral equation (1.27); we skip the details.

1.1.3 Mixed Boundary Value Problem

In most applications we have to deal with boundary value problems with boundary conditions of mixed type, e.g. with Dirichlet or Neumann boundary conditions on different nonoverlapping parts $\Gamma_D$ and $\Gamma_N$ of the boundary $\Gamma = \overline{\Gamma}_D \cup \overline{\Gamma}_N$, respectively. Therefore, we now consider the mixed boundary value problem
\[ -\Delta u(x) = 0 \quad \text{for } x \in \Omega, \qquad \gamma_0^{int} u(x) = g(x) \quad \text{for } x \in \Gamma_D, \qquad \gamma_1^{int} u(x) = f(x) \quad \text{for } x \in \Gamma_N. \tag{1.34} \]
Note that for simplicity the domain $\Omega$ is supposed to be simply connected. The solution of the mixed boundary value problem (1.34) is then given by the representation formula
\[ u(x) = \int_{\Gamma_D} u^*(x,y)\, \gamma_1^{int} u(y)\, ds_y + \int_{\Gamma_N} u^*(x,y)\, f(y)\, ds_y - \int_{\Gamma_D} \gamma_{1,y}^{int} u^*(x,y)\, g(y)\, ds_y - \int_{\Gamma_N} \gamma_{1,y}^{int} u^*(x,y)\, \gamma_0^{int} u(y)\, ds_y \quad \text{for } x \in \Omega, \]
where we have to find the yet unknown Cauchy data $\gamma_0^{int} u$ on $\Gamma_N$ and $\gamma_1^{int} u$ on $\Gamma_D$. As we have seen in the two previous subsections on the Dirichlet and on the Neumann problem, there exist different approaches leading to different boundary integral equations to find the unknown Cauchy data. However, we consider here only two direct methods, which seem to be the most convenient approaches to solve mixed boundary value problems by boundary element methods. The definition of the Sobolev spaces $\widetilde H^{1/2}(\Gamma_N)$ and $\widetilde H^{-1/2}(\Gamma_D)$ can be seen in Appendix A.1.

Symmetric Formulation of Boundary Integral Equations

The symmetric formulation (cf. [103]) is based on the use of the first kind boundary integral equation (1.15) to find the unknown Neumann datum $\gamma_1^{int} u$ on the Dirichlet part $\Gamma_D$, while the hypersingular boundary integral equation (1.27) is used to find the unknown Dirichlet datum $\gamma_0^{int} u$ on the Neumann part $\Gamma_N$:
\[ (V \gamma_1^{int} u)(x) = \frac{1}{2}\, g(x) + (K \gamma_0^{int} u)(x) \quad \text{for } x \in \Gamma_D, \]
\[ (D \gamma_0^{int} u)(x) = \frac{1}{2}\, f(x) - (K' \gamma_1^{int} u)(x) \quad \text{for } x \in \Gamma_N. \]
Let $\widetilde g \in H^{1/2}(\Gamma)$ and $\widetilde f \in H^{-1/2}(\Gamma)$ be some arbitrary, but fixed extensions of the given boundary data $g \in H^{1/2}(\Gamma_D)$ and $f \in H^{-1/2}(\Gamma_N)$, respectively. Then, we have to find
\[ \widetilde u = \gamma_0^{int} u - \widetilde g \in \widetilde H^{1/2}(\Gamma_N), \qquad \widetilde t = \gamma_1^{int} u - \widetilde f \in \widetilde H^{-1/2}(\Gamma_D) \]
satisfying the system of boundary integral equations
\[ (V \widetilde t)(x) - (K \widetilde u)(x) = \frac{1}{2}\, g(x) + (K \widetilde g)(x) - (V \widetilde f)(x) \quad \text{for } x \in \Gamma_D, \]
\[ (K' \widetilde t)(x) + (D \widetilde u)(x) = \frac{1}{2}\, f(x) - (K' \widetilde f)(x) - (D \widetilde g)(x) \quad \text{for } x \in \Gamma_N. \]
The associated variational problem is to find
\[ (\widetilde t, \widetilde u) \in \widetilde H^{-1/2}(\Gamma_D) \times \widetilde H^{1/2}(\Gamma_N) \]
such that
\[ a(\widetilde t, \widetilde u; w, v) = F(w, v) \tag{1.35} \]
is satisfied for all $(w, v) \in \widetilde H^{-1/2}(\Gamma_D) \times \widetilde H^{1/2}(\Gamma_N)$ with
\[ a(\widetilde t, \widetilde u; w, v) = \langle V \widetilde t, w \rangle_{\Gamma_D} - \langle K \widetilde u, w \rangle_{\Gamma_D} + \langle K' \widetilde t, v \rangle_{\Gamma_N} + \langle D \widetilde u, v \rangle_{\Gamma_N}, \]
\[ F(w, v) = \Bigl\langle \frac{1}{2}\, g + K \widetilde g - V \widetilde f, w \Bigr\rangle_{\Gamma_D} + \Bigl\langle \frac{1}{2}\, f - K' \widetilde f - D \widetilde g, v \Bigr\rangle_{\Gamma_N}. \]
Since the bilinear form $a(\cdot, \cdot\,; \cdot, \cdot)$ is skew-symmetric, i.e.
\[ a(w, v; w, v) = \langle V w, w \rangle_{\Gamma_D} + \langle D v, v \rangle_{\Gamma_N}, \]
the unique solvability of the variational problem (1.35) follows from the mapping properties of the single layer potential $V$ and of the hypersingular integral operator $D$.

Theorem 1.11. Let $g \in H^{1/2}(\Gamma_D)$ and $f \in H^{-1/2}(\Gamma_N)$ be given. Then there exists a unique solution $(\widetilde t, \widetilde u) \in \widetilde H^{-1/2}(\Gamma_D) \times \widetilde H^{1/2}(\Gamma_N)$ of the variational problem (1.35) satisfying
\[ \|\widetilde t\|^2_{\widetilde H^{-1/2}(\Gamma_D)} + \|\widetilde u\|^2_{\widetilde H^{1/2}(\Gamma_N)} \le c \Bigl( \|g\|^2_{H^{1/2}(\Gamma_D)} + \|f\|^2_{H^{-1/2}(\Gamma_N)} \Bigr). \]
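For completeness, the step from the bilinear form $a(\cdot,\cdot\,;\cdot,\cdot)$ to $a(w,v;w,v) = \langle Vw,w\rangle_{\Gamma_D} + \langle Dv,v\rangle_{\Gamma_N}$ uses that the two off-diagonal terms cancel; a short justification (not spelled out in the text) is
\[ \langle K v, w \rangle_{\Gamma_D} = \langle K v, w \rangle_{\Gamma} = \langle v, K' w \rangle_{\Gamma} = \langle K' w, v \rangle_{\Gamma_N} \quad \text{for } w \in \widetilde H^{-1/2}(\Gamma_D),\ v \in \widetilde H^{1/2}(\Gamma_N), \]
since functions in $\widetilde H^{\pm 1/2}$ extend by zero to the whole boundary $\Gamma$; hence $-\langle K v, w \rangle_{\Gamma_D} + \langle K' w, v \rangle_{\Gamma_N} = 0$.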

Steklov–Poincaré Operator Formulation

Instead of using the weakly singular boundary integral equation (1.15) on $\Gamma_D$ and the hypersingular boundary integral equation (1.27) on $\Gamma_N$, we may also use the Dirichlet to Neumann map (1.12) to derive a second boundary integral approach to find the yet unknown Cauchy data. Then we have to solve an operator equation to find $\widetilde u \in \widetilde H^{1/2}(\Gamma_N)$ such that
\[ (S^{int} \widetilde u)(x) = f(x) - (S^{int} \widetilde g)(x) \quad \text{for } x \in \Gamma_N, \tag{1.36} \]
where the Steklov–Poincaré operator $S^{int} : H^{1/2}(\Gamma) \to H^{-1/2}(\Gamma)$ is either given by the representation (1.12) or by the symmetric version (1.13). Although both representations are equivalent in the continuous case, they exhibit different stability properties when applying some numerical approximation schemes.

Theorem 1.12. Let $g \in H^{1/2}(\Gamma_D)$ and $f \in H^{-1/2}(\Gamma_N)$ be given. Then there exists a unique solution $\widetilde u \in \widetilde H^{1/2}(\Gamma_N)$ of the Steklov–Poincaré operator equation (1.36) satisfying
\[ \|\widetilde u\|_{\widetilde H^{1/2}(\Gamma_N)} \le c \bigl( \|g\|_{H^{1/2}(\Gamma_D)} + \|f\|_{H^{-1/2}(\Gamma_N)} \bigr). \]

When the Dirichlet datum $\gamma_0^{int} u = \widetilde u + \widetilde g$ is known on the whole boundary $\Gamma$, we can find the complete Neumann datum $\gamma_1^{int} u$ by solving the corresponding Dirichlet boundary value problem afterwards.

1.1.4 Robin Boundary Value Problem

Besides standard Dirichlet or Neumann boundary conditions, also linear or nonlinear boundary conditions of Robin type have to be included, as for example in radiosity transfer problems.

Linear Robin Boundary Conditions

Hence we now consider the Robin boundary value problem
\[ -\Delta u(x) = 0 \quad \text{for } x \in \Omega, \qquad \gamma_1^{int} u(x) + \kappa(x)\, \gamma_0^{int} u(x) = g(x) \quad \text{for } x \in \Gamma, \tag{1.37} \]
where $\kappa \in L_\infty(\Gamma)$ is strictly positive with $\kappa(x) \ge \kappa_0 > 0$ for $x \in \Gamma$. Using the Dirichlet to Neumann map $\gamma_1^{int} u = S^{int} \gamma_0^{int} u$ on $\Gamma$ with the Steklov–Poincaré operator $S^{int} : H^{1/2}(\Gamma) \to H^{-1/2}(\Gamma)$ either defined by (1.12) or by (1.13), we can find the unknown Dirichlet datum $\gamma_0^{int} u \in H^{1/2}(\Gamma)$ by solving the boundary integral equation
\[ (S^{int} \gamma_0^{int} u)(x) + \kappa(x)\, \gamma_0^{int} u(x) = g(x) \quad \text{for } x \in \Gamma. \]
Since $\kappa$ is assumed to be strictly positive, the additive term regularises the $H^{1/2}(\Gamma)$-semi-elliptic Steklov–Poincaré operator $S^{int}$, yielding the unique solvability of the equivalent variational problem to find $u \in H^{1/2}(\Gamma)$ such that
\[ \langle S^{int} u, v \rangle_\Gamma + \langle \kappa\, u, v \rangle_\Gamma = \langle g, v \rangle_\Gamma \quad \text{for all } v \in H^{1/2}(\Gamma). \tag{1.38} \]

Theorem 1.13. Let $g \in H^{-1/2}(\Gamma)$ and $\kappa \in L_\infty(\Gamma)$ with $\kappa(x) \ge \kappa_0 > 0$ for $x \in \Gamma$ be given. Then there exists a unique solution of the variational problem (1.38). Moreover,
\[ \|u\|_{H^{1/2}(\Gamma)} \le c\, \|g\|_{H^{-1/2}(\Gamma)}. \]

When the complete Dirichlet datum $u = \gamma_0^{int} u \in H^{1/2}(\Gamma)$ is known, we can find the Neumann datum $\gamma_1^{int} u$ by solving the corresponding Dirichlet boundary value problem.
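The mechanism behind Theorem 1.13 can be sketched in one line (the constants are generic and the argument is only indicated here, not taken from the text): combining the semi-ellipticity of $S^{int}$ from Lemma 1.4 with $\kappa(x) \ge \kappa_0 > 0$,
\[ \langle S^{int} v, v \rangle_\Gamma + \langle \kappa\, v, v \rangle_\Gamma \ge c_1^D\, |v|^2_{H^{1/2}(\Gamma)} + \kappa_0\, \|v\|^2_{L_2(\Gamma)} \ge c\, \|v\|^2_{H^{1/2}(\Gamma)} \quad \text{for all } v \in H^{1/2}(\Gamma), \]
since the semi-norm plus the $L_2(\Gamma)$ norm is an equivalent norm in $H^{1/2}(\Gamma)$; unique solvability of (1.38) then follows from the Lax–Milgram lemma.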

Nonlinear Robin Boundary Conditions

Instead of linear Robin boundary conditions in the boundary value problem (1.37), we may also consider a boundary value problem with nonlinear Robin boundary conditions,
\[ -\Delta u(x) = 0 \quad \text{for } x \in \Omega, \qquad \gamma_1^{int} u(x) + f(\gamma_0^{int} u, x) = g(x) \quad \text{for } x \in \Gamma, \]
where $f(\cdot, \cdot)$ is nonlinear in the first argument, for example $f(u, x) = \bigl(u(x)\bigr)^m$ with $m \in \mathbb{N}$; typical choices are $m = 3$ or $m = 4$. Using again the Dirichlet to Neumann map $\gamma_1^{int} u = S^{int} \gamma_0^{int} u$ on $\Gamma$, we can find the unknown Dirichlet datum $u = \gamma_0^{int} u \in H^{1/2}(\Gamma)$ by solving the nonlinear boundary integral equation
\[ (S^{int} u)(x) + f(u, x) = g(x) \quad \text{for } x \in \Gamma. \]
The equivalent variational problem is to find $u \in H^{1/2}(\Gamma)$ such that
\[ \langle S^{int} u, v \rangle_\Gamma + \langle f(u, \cdot), v \rangle_\Gamma = \langle g, v \rangle_\Gamma \quad \text{for all } v \in H^{1/2}(\Gamma). \tag{1.39} \]
The unique solvability of the nonlinear variational problem (1.39) follows from appropriate assumptions on the nonlinear function $f$, see, e.g., [32, 95].

Theorem 1.14. Let $g \in H^{-1/2}(\Gamma)$ be given and let $f$ be strongly monotone satisfying
\[ \bigl\langle f(u, \cdot) - f(v, \cdot), u - v \bigr\rangle_\Gamma \ge c\, \|u - v\|^2_{L_2(\Gamma)} \quad \text{for all } u, v \in L_2(\Gamma). \]
Then there exists a unique solution $u \in H^{1/2}(\Gamma)$ of the nonlinear variational problem (1.39) satisfying
\[ \|u\|_{H^{1/2}(\Gamma)} \le c\, \|g\|_{H^{-1/2}(\Gamma)}. \]



1.1.5 Exterior Dirichlet Boundary Value Problem

One of the main advantages in using boundary element methods for the approximate solution of boundary value problems is their applicability to problems in exterior unbounded domains. As a first model problem we consider the exterior Dirichlet boundary value problem
\[ -\Delta u(x) = 0 \quad \text{for } x \in \Omega^e = \mathbb{R}^3 \setminus \overline\Omega, \qquad \gamma_0^{ext} u(x) = g(x) \quad \text{for } x \in \Gamma \tag{1.40} \]
with the radiation condition
\[ |u(x) - u_0| = \mathcal{O}\Bigl(\frac{1}{|x|}\Bigr) \quad \text{as } |x| \to \infty, \tag{1.41} \]
where $u_0 \in \mathbb{R}$ is given. We denote by
\[ \gamma_0^{ext} u(x) = \lim_{\Omega^e \ni \tilde x \to x \in \Gamma} u(\tilde x) \]
the exterior trace of $u$ on $\Gamma$ and by
\[ \gamma_1^{ext} u(x) = \lim_{\Omega^e \ni \tilde x \to x \in \Gamma} \bigl(n(x), \nabla u(\tilde x)\bigr) \]
the exterior conormal derivative of $u$ on $\Gamma$. Note that the outer normal vector $n(x)$ is still defined with respect to the interior domain $\Omega$.
For a fixed $y_0 \in \Omega$ and $R > 2\, \mathrm{diam}\, \Omega$, let $B_R(y_0)$ be a ball of radius $R$ with centre in $y_0$ and including $\Omega$. The solution of the boundary value problem (1.40) is then given by the representation formula, see (1.6), for $x \in B_R(y_0) \setminus \overline\Omega$
\[ u(x) = -\int_\Gamma u^*(x,y)\, \gamma_1^{ext} u(y)\, ds_y + \int_\Gamma \gamma_{1,y}^{ext} u^*(x,y)\, g(y)\, ds_y + \int_{\partial B_R(y_0)} u^*(x,y)\, \gamma_1^{int} u(y)\, ds_y - \int_{\partial B_R(y_0)} \gamma_{1,y}^{int} u^*(x,y)\, \gamma_0^{int} u(y)\, ds_y. \]
Taking the limit $R \to \infty$ and incorporating the radiation condition (1.41), this gives the representation formula in the exterior domain $\Omega^e$
\[ u(x) = u_0 - \int_\Gamma u^*(x,y)\, \gamma_1^{ext} u(y)\, ds_y + \int_\Gamma \gamma_{1,y}^{ext} u^*(x,y)\, g(y)\, ds_y \tag{1.42} \]
for $x \in \Omega^e$. To find the yet unknown Neumann datum $t = \gamma_1^{ext} u \in H^{-1/2}(\Gamma)$, we apply the exterior trace operator $\gamma_0^{ext}$ to obtain the boundary integral equation
\[ (V t)(x) = -\frac{1}{2}\, g(x) + (K g)(x) + u_0 \quad \text{for } x \in \Gamma. \tag{1.43} \]
As for the direct and the indirect approach for the interior Dirichlet boundary value problem, we can conclude the unique solvability of the first kind boundary integral equation (1.43) from Lemma 1.1. We then obtain the Dirichlet to Neumann map
\[ \gamma_1^{ext} u(x) = V^{-1} \Bigl(-\frac{1}{2} I + K\Bigr) \gamma_0^{ext} u(x) + (V^{-1} u_0)(x) \quad \text{for } x \in \Gamma \tag{1.44} \]
associated to the exterior Dirichlet boundary value problem (1.40).
Applying the exterior conormal derivative to the representation formula (1.42), and inserting the Dirichlet to Neumann map (1.44), this gives
\[ \gamma_1^{ext} u(x) = \Bigl(\frac{1}{2} I - K'\Bigr) \gamma_1^{ext} u(x) - (D \gamma_0^{ext} u)(x) = \Bigl(\frac{1}{2} I - K'\Bigr) \Bigl[ V^{-1} \Bigl(-\frac{1}{2} I + K\Bigr) \gamma_0^{ext} u(x) + (V^{-1} u_0)(x) \Bigr] - (D \gamma_0^{ext} u)(x) = -(S^{ext} \gamma_0^{ext} u)(x) + \Bigl(\frac{1}{2} I - K'\Bigr) (V^{-1} u_0)(x) \tag{1.45} \]
with the Steklov–Poincaré operator (cf. (1.13))
\[ S^{ext} = D + \Bigl(-\frac{1}{2} I + K'\Bigr) V^{-1} \Bigl(-\frac{1}{2} I + K\Bigr) : H^{1/2}(\Gamma) \to H^{-1/2}(\Gamma) \tag{1.46} \]
associated to the exterior boundary value problem (1.40).

1.1.6 Exterior Neumann Boundary Value Problem

Instead of the exterior Dirichlet boundary value problem (1.40), we now consider the exterior Neumann boundary value problem
\[ -\Delta u(x) = 0 \quad \text{for } x \in \Omega^e, \qquad \gamma_1^{ext} u(x) = g(x) \quad \text{for } x \in \Gamma \tag{1.47} \]
with the radiation condition (1.41)
\[ |u(x) - u_0| = \mathcal{O}\Bigl(\frac{1}{|x|}\Bigr) \quad \text{as } |x| \to \infty, \]
where $u_0 \in \mathbb{R}$ is given. Note that, due to the radiation condition, we have unique solvability of the exterior Neumann boundary value problem (1.47).
As for the exterior Dirichlet boundary value problem, the solution of the exterior Neumann boundary value problem is given by the representation formula (1.42),
\[ u(x) = u_0 - \int_\Gamma u^*(x,y)\, g(y)\, ds_y + \int_\Gamma \gamma_{1,y}^{ext} u^*(x,y)\, \gamma_0^{ext} u(y)\, ds_y \tag{1.48} \]
for $x \in \Omega^e$. To find the yet unknown Dirichlet datum $u = \gamma_0^{ext} u \in H^{1/2}(\Gamma)$, we apply the exterior trace operator $\gamma_0^{ext}$ to obtain the boundary integral equation
\[ \frac{1}{2}\, u(x) - (K u)(x) = u_0 - (V g)(x) \quad \text{for } x \in \Gamma. \tag{1.49} \]
As for the indirect double layer potential formulation for the interior Dirichlet boundary value problem, the solution of the boundary integral equation (1.49) is given by the Neumann series
\[ u(x) = u_0 - \sum_{\ell=0}^{\infty} \Bigl(\frac{1}{2} I + K\Bigr)^{\ell} (V g)(x) \quad \text{for } x \in \Gamma, \tag{1.50} \]
where we have used $(\frac{1}{2} I + K) u_0 = 0$. The convergence of the Neumann series (1.50) in $H^{1/2}(\Gamma)$ follows from Theorem 1.7. Note that the boundary integral equation (1.49), and, therefore, the exterior Neumann boundary value problem (1.47) with the radiation condition (1.41), is uniquely solvable for any given $g \in H^{-1/2}(\Gamma)$.
When applying the exterior conormal derivative $\gamma_1^{ext}$ to the representation formula (1.48), this gives the hypersingular boundary integral equation to find $u = \gamma_0^{ext} u \in H^{1/2}(\Gamma)$ satisfying
\[ (D u)(x) = -\frac{1}{2}\, g(x) - (K' g)(x) \quad \text{for } x \in \Gamma. \tag{1.51} \]
The boundary integral equation (1.51) is equivalent to the variational problem to find $u \in H^{1/2}(\Gamma)$ such that
\[ \langle D u, v \rangle_\Gamma = -\Bigl\langle g, \Bigl(\frac{1}{2} I + K\Bigr) v \Bigr\rangle_\Gamma \tag{1.52} \]
is satisfied for all $v \in H^{1/2}(\Gamma)$. Using the test function $v = v_0 \equiv 1$, this gives the trivial equality
\[ \langle D u, v_0 \rangle_\Gamma = \langle u, D v_0 \rangle_\Gamma = -\Bigl\langle g, \Bigl(\frac{1}{2} I + K\Bigr) v_0 \Bigr\rangle_\Gamma = 0. \]
This shows that the variational problem (1.52) has to be considered in a subspace of $H^{1/2}(\Gamma)$ which is orthogonal to constants. In particular, the solution of the variational problem (1.52) is only unique up to a constant. Since the hypersingular boundary integral operator $D : H^{1/2}(\Gamma) \to H^{-1/2}(\Gamma)$ is only $H^{1/2}(\Gamma)$-semi-elliptic, see Lemma 1.4, a suitable regularisation of the hypersingular boundary integral operator has to be introduced. As in (1.29), we obtain an extended variational problem to find $u \in H^{1/2}(\Gamma)$ such that
\[ \langle D u, v \rangle_\Gamma + \langle u, 1 \rangle_\Gamma\, \langle v, 1 \rangle_\Gamma = -\Bigl\langle g, \Bigl(\frac{1}{2} I + K\Bigr) v \Bigr\rangle_\Gamma \tag{1.53} \]
is satisfied for all $v \in H^{1/2}(\Gamma)$. The extended variational problem (1.53) is uniquely solvable yielding a solution $u \in H^{1/2}(\Gamma)$ satisfying the orthogonality

$\langle u, 1 \rangle_\Gamma = 0$. Since $u(x) = 1$ for $x \in \Omega^e$ is a solution of the Laplace equation $-\Delta u(x) = 0$ with the radiation condition (1.41) for $u_0 = 1$, the representation formula (1.48) reads
\[ u(x) = u_0 + \int_\Gamma \gamma_{1,y}^{ext} u^*(x,y)\, ds_y, \]
implying
\[ \int_\Gamma \gamma_{1,y}^{ext} u^*(x,y)\, ds_y = 0 \quad \text{for } x \in \Omega^e. \]
This shows that the scaling condition for the solution $u$ of the extended variational problem (1.53) can be chosen in an arbitrary way; the representation formula (1.48) describes the correct solution for any scaling parameter.

1.1.7 Poisson Problem

Instead of the homogeneous Laplace equation (1.1), we now consider an inhomogeneous Poisson equation with some given right hand side. The Dirichlet boundary value problem for the Poisson equation reads
\[ -\Delta u(x) = f(x) \quad \text{for } x \in \Omega, \qquad \gamma_0^{int} u(x) = g(x) \quad \text{for } x \in \Gamma. \tag{1.54} \]
From Green's second formula (1.3), we then obtain the representation formula
\[ u(x) = \int_\Gamma u^*(x,y)\, t(y)\, ds_y - \int_\Gamma \gamma_{1,y}^{int} u^*(x,y)\, g(y)\, ds_y + \int_\Omega u^*(x,y)\, f(y)\, dy \]
for $x \in \Omega$, where $t = \gamma_1^{int} u$ is the yet unknown Neumann datum. As for the interior Dirichlet boundary value problem (1.14), we have to solve a first kind boundary integral equation to find $t \in H^{-1/2}(\Gamma)$ such that
\[ (V t)(x) = \frac{1}{2}\, g(x) + (K g)(x) - (N_0 f)(x) \quad \text{for } x \in \Gamma, \tag{1.55} \]
where
\[ (N_0 f)(x) = \int_\Omega u^*(x,y)\, f(y)\, dy \quad \text{for } x \in \Gamma \]
is the Newton potential entering the right hand side. Hence, the unique solvability of the boundary integral equation (1.55) follows, as in Theorem 1.5, for the first kind boundary integral equation (1.15), which is associated to the Dirichlet boundary value problem (1.14).
The drawback in considering the boundary integral equation (1.55) is the evaluation of the Newton potential $N_0 f$. Besides a direct computation there exist several approaches leading to more efficient methods.

Particular Solution Approach

Let u_p be a particular solution of the Poisson equation in (1.54) satisfying

   -Δu_p(x) = f(x)   for x ∈ Ω.

Then, instead of (1.54), we consider a Dirichlet boundary value problem for the Laplace operator,

   -Δu_0(x) = 0 for x ∈ Ω,   γ_0^int u_0(x) = g(x) - γ_0^int u_p(x) for x ∈ Γ.

The solution u of (1.54) is then given by u = u_0 + u_p. The unknown Neumann datum t_0 = γ_1^int u_0 is the unique solution of the boundary integral equation

   (V t_0)(x) = (1/2) [g(x) - γ_0^int u_p(x)] + (K [g - γ_0^int u_p])(x)   for x ∈ Γ.

On the other hand, we have

   t_0 = γ_1^int u_0 = γ_1^int (u - u_p) = t - γ_1^int u_p .

Hence, we obtain

   (V t)(x) = (1/2) g(x) + (K g)(x) - (1/2) γ_0^int u_p(x) - (K γ_0^int u_p)(x) + (V γ_1^int u_p)(x)

for x ∈ Γ, and, therefore,

   (N_0 f)(x) = (1/2) γ_0^int u_p(x) + (K γ_0^int u_p)(x) - (V γ_1^int u_p)(x)   for x ∈ Γ.

Thus, we can evaluate the Newton potential N_0 f by the use of surface potentials whenever a particular solution u_p of the Poisson equation is known.
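As a simple illustration (not taken from the text): for a constant right hand side f(x) ≡ f_0, a particular solution is known in closed form,

   u_p(x) = -(f_0/6) |x|^2 ,   since   -Δu_p(x) = (f_0/6) Δ|x|^2 = f_0   in R^3,

so that in this case the Newton potential N_0 f can be expressed entirely through the surface potentials V and K applied to the traces of u_p.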

Integration by Parts

In several applications the given function f in (1.54) satisfies a certain homogeneous partial differential equation. For simplicity, we assume that

   -Δf(x) = 0   for x ∈ Ω.

Using

   u*(x, y) = (1/4π) 1/|x - y| = (1/8π) Δ_y |x - y| ,

we obtain from Green's second formula (1.3)

   ∫_Ω u*(x, y) f(y) dy = (1/8π) ∫_Ω f(y) Δ_y |x - y| dy
     = (1/8π) ∫_Γ γ_{1,y}^int |x - y| γ_0^int f(y) ds_y - (1/8π) ∫_Γ γ_{0,y}^int |x - y| γ_1^int f(y) ds_y .

1.1.8 Interface Problem

In addition to interior and exterior boundary value problems, we may also consider an interface problem, i.e.,

   -α_i Δu_i(x) = f(x) for x ∈ Ω,   -α_e Δu_e(x) = 0 for x ∈ Ω^ext,                    (1.56)

with transmission conditions describing the continuity of the potential and of the flux, respectively,

   γ_0^int u_i(x) = γ_0^ext u_e(x),   α_i γ_1^int u_i(x) = α_e γ_1^ext u_e(x)   for x ∈ Γ,                    (1.57)

and with the radiation condition for a given u_0 ∈ R,

   |u_e(x) - u_0| = O(1/|x|)   as |x| → ∞.                    (1.58)

The solution of the above interface problem is given by the representation formula

   u_i(x) = ∫_Γ u*(x, y) γ_1^int u_i(y) ds_y - ∫_Γ γ_{1,y}^int u*(x, y) γ_0^int u_i(y) ds_y + (1/α_i) ∫_Ω u*(x, y) f(y) dy

for x ∈ Ω, and

   u_e(x) = u_0 - ∫_Γ u*(x, y) γ_1^ext u_e(y) ds_y + ∫_Γ γ_{1,y}^ext u*(x, y) γ_0^ext u_e(y) ds_y

for x ∈ Ω^ext. To find the unknown Cauchy data γ_0^{int/ext} u and γ_1^{int/ext} u, which are linked via the transmission conditions (1.57), we have to solve appropriate boundary integral equations on the interface boundary Γ. Using the Dirichlet to Neumann map associated with the interior Dirichlet boundary value problem (1.54), in particular, solving the boundary integral equation (1.55),

   (V γ_1^int u_i)(x) = (1/2) γ_0^int u_i(x) + (K γ_0^int u_i)(x) - (1/α_i) (N_0 f)(x)   for x ∈ Γ,

we obtain

   γ_1^int u_i(x) = V^{-1} ((1/2) I + K) γ_0^int u_i(x) - (1/α_i) V^{-1} (N_0 f)(x)   for x ∈ Γ.

Let us assume that there is given a particular solution u_p satisfying

   -Δu_p(x) = f(x)   for x ∈ Ω.


Hence, we obtain

   (N_0 f)(x) = (1/2) γ_0^int u_p(x) + (K γ_0^int u_p)(x) - (V γ_1^int u_p)(x)   for x ∈ Γ,

and, therefore,

   γ_1^int u_i(x) = V^{-1} ((1/2) I + K) γ_0^int u_i(x) + (1/α_i) γ_1^int u_p(x) - (1/α_i) V^{-1} ((1/2) I + K) γ_0^int u_p(x)
                  = (S^int γ_0^int u_i)(x) + (1/α_i) γ_1^int u_p(x) - (1/α_i) (S^int γ_0^int u_p)(x)

for x ∈ Γ with the Steklov–Poincaré operator S^int. Correspondingly, the Dirichlet to Neumann map (1.45) associated with the exterior Dirichlet boundary value problem (1.40) gives

   γ_1^ext u_e(x) = -(S^ext γ_0^ext u_e)(x) + ((1/2) I - K')(V^{-1} u_0)(x)   for x ∈ Γ.

Inserting the transmission conditions (1.57),

   u := γ_0^ext u_e(x) = γ_0^int u_i(x),   α_i γ_1^int u_i(x) = α_e γ_1^ext u_e(x)   for x ∈ Γ,

we obtain a coupled Steklov–Poincaré operator equation to find u ∈ H^{1/2}(Γ) such that

   α_i (S^int u)(x) + α_e (S^ext u)(x) = (S^int γ_0^int u_p)(x) - γ_1^int u_p(x) + α_e ((1/2) I - K')(V^{-1} u_0)(x)

is satisfied for x ∈ Γ. This is equivalent to a variational problem to find u ∈ H^{1/2}(Γ) such that

   ⟨(α_i S^int + α_e S^ext) u, v⟩_Γ = ⟨S^int γ_0^int u_p - γ_1^int u_p + α_e ((1/2) I - K') V^{-1} u_0 , v⟩_Γ                    (1.59)

is satisfied for all v ∈ H^{1/2}(Γ). The unique solvability of (1.59) finally follows from the ellipticity estimates for the interior and exterior Steklov–Poincaré operators S^int and S^ext.

1.2 Lamé Equations

In linear isotropic elastostatics the displacement field u of an elastic body occupying some reference configuration Ω ⊂ R^3 satisfies the equilibrium equations

   -Σ_{j=1}^{3} ∂/∂x_j σ_{ij}(u, x) = 0   for x ∈ Ω, i = 1, 2, 3,                    (1.60)

where σ ∈ R^{3×3} denotes the stress tensor. For a homogeneous isotropic material, the linear stress–strain relation is given by Hooke's law

   σ_{ij}(u, x) = Eν/((1+ν)(1-2ν)) δ_{ij} Σ_{k=1}^{3} e_{kk}(u, x) + E/(1+ν) e_{ij}(u, x)

for i, j = 1, 2, 3. Here, E > 0 is the Young modulus, and ν ∈ (0, 1/2) denotes the Poisson ratio. The strain tensor e is defined as follows,

   e_{ij}(u, x) = (1/2) ( ∂/∂x_i u_j(x) + ∂/∂x_j u_i(x) )   for i, j = 1, 2, 3.

Inserting the strain and stress tensors, we obtain from (1.60) the Navier system

   -μ Δu(x) - (λ + μ) grad div u(x) = 0   for x ∈ Ω

with the Lamé constants

   λ = Eν/((1+ν)(1-2ν)),   μ = E/(2(1+ν)).

Multiplying the equilibrium equations (1.60) with some test function v_i, integrating over Ω, applying integration by parts, and taking the sum over i = 1, 2, 3, this gives the first Betti formula

   -∫_Ω Σ_{i,j=1}^{3} ∂/∂y_j σ_{ij}(u, y) v_i(y) dy = a(u, v) - ∫_Γ ⟨γ_1^int u(y), γ_0^int v(y)⟩ ds_y                    (1.61)

with the symmetric bilinear form

   a(u, v) = ∫_Ω Σ_{i,j=1}^{3} σ_{ij}(u, y) e_{ij}(v, y) dy
           = 2μ ∫_Ω Σ_{i,j=1}^{3} e_{ij}(u, y) e_{ij}(v, y) dy + λ ∫_Ω div u(y) div v(y) dy

and with the boundary stress operator

   (γ_1^int u)_i(y) = Σ_{j=1}^{3} σ_{ij}(u, y) n_j(y)   for y ∈ Γ,

and for i = 1, 2, 3, which can be written as


   (γ_1^int u)(y) = λ div u(y) n(y) + 2μ ∂/∂n(y) u(y) + μ n(y) × curl u(y)   for y ∈ Γ.

From (1.61) and using the symmetry of the bilinear form a(·, ·), we can deduce the second Betti formula

   -∫_Ω Σ_{i,j=1}^{3} ∂/∂y_j σ_{ij}(v, y) u_i(y) dy + ∫_Γ ⟨γ_1^int v(y), γ_0^int u(y)⟩ ds_y                    (1.62)
   = -∫_Ω Σ_{i,j=1}^{3} ∂/∂y_j σ_{ij}(u, y) v_i(y) dy + ∫_Γ ⟨γ_1^int u(y), γ_0^int v(y)⟩ ds_y .

Let

   R = span{ (1,0,0)^T, (0,1,0)^T, (0,0,1)^T, (-x_2, x_1, 0)^T, (0, -x_3, x_2)^T, (x_3, 0, -x_1)^T }                    (1.63)

be the space of the rigid body motions, which are solutions of the homogeneous Neumann boundary value problem

   -Σ_{j=1}^{3} ∂/∂x_j σ_{ij}(v, x) = 0 for x ∈ Ω,   (γ_1^int v)_i(x) = 0 for x ∈ Γ,

for i = 1, 2, 3, and v ∈ R. Then there holds

   -∫_Ω Σ_{i,j=1}^{3} ∂/∂y_j σ_{ij}(u, y) v_i(y) dy + ∫_Γ ⟨γ_1^int u(y), γ_0^int v(y)⟩ ds_y = 0

for all v ∈ R.
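The six fields spanning R in (1.63) can be generated programmatically; the following small helper (an illustration, not part of the text) evaluates the three translations and three infinitesimal rotations at a given point.

```python
import numpy as np

def rigid_body_motions(x):
    """Return the six rigid body motions of (1.63), evaluated at x in R^3."""
    x1, x2, x3 = x
    return np.array([
        [1.0, 0.0, 0.0],          # translations
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
        [-x2,  x1, 0.0],          # infinitesimal rotations
        [0.0, -x3,  x2],
        [ x3, 0.0, -x1],
    ])
```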


Choosing in (1.62) as a test function v a fundamental solution U*_ℓ(x, y) having the property

   -∫_Ω Σ_{i,j=1}^{3} ∂/∂y_j σ_{ij}(U*_ℓ(x, y), y) u_i(y) dy = u_ℓ(x),                    (1.64)

the displacement field u satisfying the equilibrium equations (1.60) is given by the Somigliana identity

   u_ℓ(x) = ∫_Γ ⟨U*_ℓ(x, y), γ_1^int u(y)⟩ ds_y - ∫_Γ ⟨γ_{1,y}^int U*_ℓ(x, y), γ_0^int u(y)⟩ ds_y                    (1.65)

for x ∈ Ω and ℓ = 1, 2, 3. The fundamental solution of linear elastostatics is given by the Kelvin tensor

   U*_{kℓ}(x, y) = (1/8π) (1/E) (1+ν)/(1-ν) [ (3 - 4ν) δ_{kℓ}/|x-y| + (x_k - y_k)(x_ℓ - y_ℓ)/|x-y|^3 ]                    (1.66)

for x, y ∈ R^3 and k, ℓ = 1, 2, 3. Note that the fundamental solution is defined even in the incompressible case ν = 1/2.
The mapping properties of all boundary potentials and the related boundary integral operators follow as in the case of the Laplace operator.
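The Kelvin tensor (1.66) is straightforward to evaluate pointwise. The following routine is only an illustrative sketch (names are not from the text) and returns U*(x, y) as a 3x3 matrix.

```python
import numpy as np

def kelvin_tensor(x, y, E, nu):
    """Evaluate the Kelvin fundamental solution U*(x, y) of (1.66) as a 3x3 matrix."""
    d = np.asarray(x, float) - np.asarray(y, float)
    r = np.linalg.norm(d)
    c = (1.0 + nu) / (8.0 * np.pi * E * (1.0 - nu))
    return c * ((3.0 - 4.0 * nu) * np.eye(3) / r + np.outer(d, d) / r**3)
```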

Single Layer Potential

The single layer potential of linear elastostatics is given as

   (Ṽ^Lamé w)_k(x) = (1/2E) (1+ν)/(1-ν) [ (3 - 4ν)(Ṽ w_k)(x) + Σ_{ℓ=1}^{3} (Ṽ_{kℓ} w_ℓ)(x) ],

where

   (Ṽ w_k)(x) = (1/4π) ∫_Γ w_k(y)/|x - y| ds_y

is the single layer potential of the Laplace operator, and

   (Ṽ_{kℓ} w_ℓ)(x) = (1/4π) ∫_Γ (x_k - y_k)(x_ℓ - y_ℓ)/|x - y|^3 w_ℓ(y) ds_y
                   = (1/4π) ∫_Γ w_ℓ(y)(x_k - y_k) ∂/∂y_ℓ (1/|x - y|) ds_y

for k, ℓ = 1, 2, 3. The single layer potential Ṽ^Lamé defines a continuous map from a given vector function w on the boundary Γ to a vector field Ṽ^Lamé w which is a solution of the homogeneous equilibrium equations (1.60). In particular,

   Ṽ^Lamé : [H^{-1/2}(Γ)]^3 → [H^1(Ω)]^3

is continuous. Using the mapping property of the interior trace operator

   γ_0^int : H^1(Ω) → H^{1/2}(Γ)

for (Ṽ^Lamé w)_ℓ, ℓ = 1, 2, 3, this defines a continuous boundary integral operator V^Lamé = γ_0^int Ṽ^Lamé.

Lemma 1.15. The single layer potential operator

   V^Lamé : [H^{-1/2}(Γ)]^3 → [H^{1/2}(Γ)]^3

is bounded with

   ‖V^Lamé w‖_{[H^{1/2}(Γ)]^3} ≤ c_2^V ‖w‖_{[H^{-1/2}(Γ)]^3}   for all w ∈ [H^{-1/2}(Γ)]^3
and, if ν ∈ (0, 1/2), [H^{-1/2}(Γ)]^3-elliptic,

   ⟨V^Lamé w, w⟩_Γ ≥ c_1^V ‖w‖^2_{[H^{-1/2}(Γ)]^3}   for all w ∈ [H^{-1/2}(Γ)]^3,

where the duality pairing ⟨·, ·⟩_Γ is now defined as follows,

   ⟨u, v⟩_Γ = ∫_Γ (u(y), v(y)) ds_y .

Moreover, for w ∈ [L_∞(Γ)]^3 there holds the representation

   (V^Lamé w)_k(x) = (1/2E) (1+ν)/(1-ν) [ (3 - 4ν)(V w_k)(x) + Σ_{ℓ=1}^{3} (V_{kℓ} w_ℓ)(x) ],

where

   (V w_k)(x) = (1/4π) ∫_Γ w_k(y)/|x - y| ds_y

is the single layer potential of the Laplace operator, and

   (V_{kℓ} w_ℓ)(x) = (1/4π) ∫_Γ w_ℓ(y)(x_k - y_k) ∂/∂y_ℓ (1/|x - y|) ds_y

for k, ℓ = 1, 2, 3, all defined as weakly singular surface integrals.

Note that the single layer potential V^Lamé of linear elastostatics can be written as

   (V^Lamé w)_k(x) =
     (1/2E) (1+ν)/(1-ν) [ (V w_k)(x) + Σ_{ℓ=1}^{3} (V_{kℓ} w_ℓ)(x) ] + (1/E) (1+ν)/(1-ν) (1 - 2ν)(V w_k)(x),

where the first part corresponds to the single layer potential V^Stokes of the Stokes problem (see Section 1.3). From V^Stokes n = 0, we then obtain (cf. [106])

   ⟨V^Lamé n, n⟩_Γ = (1/E) (1+ν)/(1-ν) (1 - 2ν) Σ_{k=1}^{3} ⟨V n_k, n_k⟩_Γ ,

showing that the ellipticity constant c_1^V behaves like O(1 - 2ν) for ν → 1/2. In particular, we have

   lim_{ν→1/2} c_1^V(ν) = 0.

Double Layer Potential

The double layer potential of linear elastostatics is

   (W̃^Lamé v)_ℓ(x) = ∫_Γ (γ_{1,y}^int U*_ℓ(x, y), v(y)) ds_y

for ℓ = 1, 2, 3. The double layer potential W̃^Lamé defines a continuous map from a given vector function v on the boundary Γ to a vector field W̃^Lamé v which is a solution of the homogeneous equilibrium equations (1.60). In particular,

   W̃^Lamé : [H^{1/2}(Γ)]^3 → [H^1(Ω)]^3

is continuous. Using the mapping property of the interior trace operator

   γ_0^int : H^1(Ω) → H^{1/2}(Γ)

applied to the components (W̃^Lamé v)_ℓ, ℓ = 1, 2, 3, this defines an associated boundary integral operator.

Lemma 1.16. The boundary integral operator

   γ_0^int W̃^Lamé : [H^{1/2}(Γ)]^3 → [H^{1/2}(Γ)]^3

is bounded with

   ‖γ_0^int W̃^Lamé v‖_{[H^{1/2}(Γ)]^3} ≤ c_2^W ‖v‖_{[H^{1/2}(Γ)]^3}   for all v ∈ [H^{1/2}(Γ)]^3.

For continuous v there holds the representation

   γ_0^int (W̃^Lamé v)(x) = -(1/2) v(x) + (K^Lamé v)(x)

for x ∈ Γ with the double layer potential operator

   (K^Lamé v)(x) = (K v)(x) - (V M(∂, n) v)(x) + E/(1+ν) (V^Lamé M(∂, n) v)(x),

where K and V are the double and single layer potential operators of the Laplace operator, and V^Lamé is the single layer potential operator of linear elasticity, respectively. In addition, we have used the matrix surface curl operator given by

   M_{ij}(∂_y, n(y)) = n_j(y) ∂/∂y_i - n_i(y) ∂/∂y_j                    (1.67)

for i, j = 1, 2, 3. Moreover, we have

   ((1/2) I + K^Lamé) v(x) = 0   for all v ∈ R,

where R is the space of the rigid body motions.
By applying the interior trace operator γ_0^int to the representation formula (1.65), we obtain the first boundary integral equation

   γ_0^int u(x) = (V^Lamé γ_1^int u)(x) + (1/2) γ_0^int u(x) - (K^Lamé γ_0^int u)(x)                    (1.68)

for x ∈ Γ. Instead of the interior trace operator γ_0^int, we may also apply the interior boundary stress operator γ_1^int to the representation formula (1.65). To do so, we first need to investigate the application of the boundary stress operator to the single and double layer potentials Ṽ^Lamé w and W̃^Lamé v, which are both solutions of the homogeneous equilibrium equations (1.60).

Adjoint Double Layer Potential

Lemma 1.17. The boundary integral operator

   γ_1^int Ṽ^Lamé : [H^{-1/2}(Γ)]^3 → [H^{-1/2}(Γ)]^3

is bounded with

   ‖γ_1^int Ṽ^Lamé w‖_{[H^{-1/2}(Γ)]^3} ≤ c_2^{γ_1 V} ‖w‖_{[H^{-1/2}(Γ)]^3}   for all w ∈ [H^{-1/2}(Γ)]^3.

For w ∈ [H^{-1/2}(Γ)]^3 there holds the representation

   (γ_1^int Ṽ^Lamé w)(x) = (1/2) w(x) + ((K^Lamé)' w)(x)

in the sense of [H^{-1/2}(Γ)]^3. In particular, for v ∈ [H^{1/2}(Γ)]^3 we have

   ⟨γ_1^int Ṽ^Lamé w, v⟩_Γ = (1/2) ⟨w, v⟩_Γ + ⟨w, K^Lamé v⟩_Γ .

Hypersingular Integral Operator

In the same way as for the single layer potential Ṽ^Lamé w, we now consider the application of the boundary stress operator γ_1^int to the double layer potential W̃^Lamé v.

Lemma 1.18. The boundary integral operator

   D^Lamé = -γ_1^int W̃^Lamé : [H^{1/2}(Γ)]^3 → [H^{-1/2}(Γ)]^3

is bounded with

   ‖D^Lamé v‖_{[H^{-1/2}(Γ)]^3} ≤ c_2^D ‖v‖_{[H^{1/2}(Γ)]^3}   for all v ∈ [H^{1/2}(Γ)]^3

and H^{1/2}_R(Γ)-elliptic,

   ⟨D^Lamé v, v⟩_Γ ≥ c_1^D ‖v‖^2_{[H^{1/2}(Γ)]^3}   for all v ∈ H^{1/2}_R(Γ),

where H^{1/2}_R(Γ) is the space of all vector functions which are orthogonal to the space R of rigid body motions. In particular, there holds

   (D^Lamé v)(x) = 0   for all v ∈ R.

Moreover, for continuous vector functions u, v ∈ [H^{1/2}(Γ) ∩ C(Γ)]^3, there holds the representation

   ⟨D^Lamé u, v⟩_Γ =
     (μ/4π) ∫_Γ ∫_Γ 1/|x-y| Σ_{k=1}^{3} ⟨ ∂u(y)/∂S_k(y), ∂v(x)/∂S_k(x) ⟩ ds_y ds_x
   + ∫_Γ ∫_Γ (M(∂_x, n(x)) v(x))^T [ 4μ^2 U*(x, y) - (μ/2π) I/|x-y| ] M(∂_y, n(y)) u(y) ds_y ds_x
   + (μ/4π) ∫_Γ ∫_Γ 1/|x-y| Σ_{i,j,k=1}^{3} M_{kj}(∂_x, n(x)) v_i(x) M_{ki}(∂_y, n(y)) u_j(y) ds_y ds_x

with the surface curl operator M(∂, n) as defined in (1.67) and

   ∂/∂S_1(x) = M_{32}(∂_x, n(x)),   ∂/∂S_2(x) = M_{13}(∂_x, n(x)),   ∂/∂S_3(x) = M_{21}(∂_x, n(x)).

Boundary Integral Equations

Applying the interior boundary stress operator γ_1^int to the Somigliana identity (1.65),

   u(x) = (Ṽ^Lamé γ_1^int u)(x) - (W̃^Lamé γ_0^int u)(x)   for x ∈ Ω,

this gives a second boundary integral equation

   γ_1^int u(x) = (1/2) γ_1^int u(x) + ((K^Lamé)' γ_1^int u)(x) + (D^Lamé γ_0^int u)(x)                    (1.69)

for x ∈ Γ. As in (1.11), we can write the boundary integral equations (1.68) and (1.69) by the use of the Calderón projector as

   [ γ_0^int u ]   [ (1/2) I - K^Lamé      V^Lamé               ] [ γ_0^int u ]
   [ γ_1^int u ] = [ D^Lamé                (1/2) I + (K^Lamé)'  ] [ γ_1^int u ] .                    (1.70)

Since the single layer potential V^Lamé is [H^{-1/2}(Γ)]^3-elliptic and therefore invertible, we obtain from the first equation in (1.70) the Dirichlet to Neumann map

   γ_1^int u(x) = (S^Lamé γ_0^int u)(x)   for x ∈ Γ                    (1.71)

with the Steklov–Poincaré operator

   S^Lamé = (V^Lamé)^{-1} ((1/2) I + K^Lamé)
          = D^Lamé + ((1/2) I + (K^Lamé)') (V^Lamé)^{-1} ((1/2) I + K^Lamé).

Note that it holds

   (S^Lamé γ_0^int v)(x) = 0   for all v ∈ R.

1.2.1 Dirichlet Boundary Value Problem

When considering the Dirichlet boundary value problem of linear elastostatics,

   -Σ_{j=1}^{3} ∂/∂x_j σ_{ij}(u, x) = 0 for x ∈ Ω, i = 1, 2, 3,   γ_0^int u(x) = g(x) for x ∈ Γ,

the displacement field u can be described by the Somigliana identity

   u_k(x) = ∫_Γ ⟨U*_k(x, y), t(y)⟩ ds_y - ∫_Γ ⟨γ_{1,y}^int U*_k(x, y), g(y)⟩ ds_y

for x ∈ Ω and k = 1, 2, 3, where the boundary stress t = γ_1^int u has to be determined from some appropriate boundary integral equation.
Using the first equation in the Calderón projector (1.70), we have to solve a first kind boundary integral equation to find t ∈ [H^{-1/2}(Γ)]^3 such that

   (V^Lamé t)(x) = (1/2) g(x) + (K^Lamé g)(x)   for x ∈ Γ.

This boundary integral equation corresponds to finding the solution t of the variational problem

   ⟨V^Lamé t, w⟩_Γ = ⟨((1/2) I + K^Lamé) g, w⟩_Γ                    (1.72)

in [H^{-1/2}(Γ)]^3 for all test functions w ∈ [H^{-1/2}(Γ)]^3. Since the single layer potential V^Lamé is [H^{-1/2}(Γ)]^3-elliptic, the unique solvability of the variational problem (1.72) follows due to the Lax–Milgram theorem.
1.2.2 Neumann Boundary Value Problem

For a simply connected domain Ω ⊂ R^3, we now consider the Neumann boundary value problem

   -Σ_{j=1}^{3} ∂/∂x_j σ_{ij}(u, x) = 0 for x ∈ Ω, i = 1, 2, 3,
   γ_1^int u(x) = g(x) for x ∈ Γ,                    (1.73)

where we have to assume the solvability conditions

   ∫_Γ ⟨g(y), γ_0^int v(y)⟩ ds_y = 0   for all v ∈ R.                    (1.74)

Note that the solution of the Neumann boundary value problem (1.73) is only unique up to the rigid body motions v ∈ R.
Using the Somigliana identity (1.65), a solution of the Neumann boundary value problem (1.73) is given by the representation formula

   u_ℓ(x) = ∫_Γ ⟨U*_ℓ(x, y), g(y)⟩ ds_y - ∫_Γ ⟨γ_{1,y}^int U*_ℓ(x, y), γ_0^int u(y)⟩ ds_y

for x ∈ Ω and ℓ = 1, 2, 3. Hence, we have to find the yet unknown Dirichlet datum u = γ_0^int u on Γ.
When using the second equation in the Calderón projector (1.70), we have to solve a first kind boundary integral equation to find u ∈ [H^{1/2}(Γ)]^3 such that

   (D^Lamé u)(x) = (1/2) g(x) - ((K^Lamé)' g)(x)   for x ∈ Γ                    (1.75)

is satisfied in a weak sense, in particular, in the sense of [H^{-1/2}(Γ)]^3. Since the hypersingular boundary integral operator D^Lamé has the nontrivial kernel of the rigid body motions, we have to consider the boundary integral equation (1.75) in suitable subspaces. To this end, we define

   H^{1/2}_R(Γ) = { u ∈ [H^{1/2}(Γ)]^3 : ⟨u, v⟩_Γ = 0 for all v ∈ R }.

Then the variational problem of the boundary integral equation (1.75) is to find u ∈ H^{1/2}_R(Γ) such that

   ⟨D^Lamé u, v⟩_Γ = ⟨((1/2) I - (K^Lamé)') g, v⟩_Γ                    (1.76)

is satisfied for all v ∈ H^{1/2}_R(Γ). The general solution of the hypersingular boundary integral equation (1.75) is then given by

   ũ(x) = u(x) + Σ_{k=1}^{6} c_k v_k(x),

where the vectors v_k, k = 1, ..., 6, build a basis of the space of rigid body motions (cf. (1.63)). To fix the constants c_k, we may require the scaling conditions

   ∫_Γ ⟨ũ(y), γ_0^int v_k(y)⟩ ds_y = α_k                    (1.77)

for k = 1, ..., 6, where the α_k ∈ R can be arbitrary, but prescribed.
Instead of solving the variational problem (1.76) in the subspace H^{1/2}_R(Γ) and finding the unique solution afterwards from the scaling conditions (1.77), one can formulate an extended variational problem to find ũ ∈ [H^{1/2}(Γ)]^3 such that

   ⟨D^Lamé ũ, v⟩_Γ + Σ_{k=1}^{6} ⟨ũ, v_k⟩_Γ ⟨v, v_k⟩_Γ = ⟨((1/2) I - (K^Lamé)') g, v⟩_Γ + Σ_{k=1}^{6} α_k ⟨v, v_k⟩_Γ                    (1.78)

is satisfied for all v ∈ [H^{1/2}(Γ)]^3. The extended variational problem (1.78) is uniquely solvable for any given g ∈ [H^{-1/2}(Γ)]^3. If g satisfies the solvability conditions (1.74), then ũ is the unique solution of the hypersingular boundary integral equation (1.75) satisfying the scaling conditions (1.77).

1.2.3 Mixed Boundary Value Problem

Let Ω ⊂ R^3 be simply connected. Then we consider the mixed boundary value problem

   -Σ_{j=1}^{3} ∂/∂x_j σ_{ij}(u, x) = 0   for x ∈ Ω,
   γ_0^int u_i(x) = g_i(x)   for x ∈ Γ_{D,i},                    (1.79)
   Σ_{j=1}^{3} σ_{ij}(u, x) n_j(x) = f_i(x)   for x ∈ Γ_{N,i},

and for i = 1, 2, 3. We assume that

   Γ = Γ̄_{N,i} ∪ Γ̄_{D,i},   Γ_{N,i} ∩ Γ_{D,i} = ∅,   meas Γ_{D,i} > 0

for i = 1, 2, 3 is satisfied.
Using the Somigliana identity (1.65), the solution of the mixed boundary value problem (1.79) is given by the representation formula

   u_ℓ(x) = Σ_{i=1}^{3} ∫_{Γ_{N,i}} f_i(y) U*_{ℓi}(x, y) ds_y - Σ_{i=1}^{3} ∫_{Γ_{D,i}} g_i(y) (γ_{1,y}^int U*_ℓ)_i(x, y) ds_y
          + Σ_{i=1}^{3} ∫_{Γ_{D,i}} (γ_1^int u)_i(y) U*_{ℓi}(x, y) ds_y - Σ_{i=1}^{3} ∫_{Γ_{N,i}} γ_0^int u_i(y) (γ_{1,y}^int U*_ℓ)_i(x, y) ds_y

for x ∈ Ω and for ℓ = 1, 2, 3. Hence, we have to find the yet unknown Cauchy data (γ_1^int u)_i on Γ_{D,i} and γ_0^int u_i on Γ_{N,i}.
The symmetric formulation of boundary integral equations is based on the use of the first kind boundary integral equation (1.68) for those components where the boundary displacement γ_0^int u_i = g_i is given, while the hypersingular boundary integral equation (1.69) is used when the boundary stress (γ_1^int u)_i = f_i is prescribed.
Let g̃_i ∈ H^{1/2}(Γ) and f̃_i ∈ H^{-1/2}(Γ) be some arbitrary but fixed extensions of the given boundary data g_i ∈ H^{1/2}(Γ_{D,i}) and f_i ∈ H^{-1/2}(Γ_{N,i}), respectively. Then, we have to find

   ũ_i = γ_0^int u_i - g̃_i ∈ H̃^{1/2}(Γ_{N,i}),   t̃_i = (γ_1^int u)_i - f̃_i ∈ H̃^{-1/2}(Γ_{D,i})

satisfying a system of boundary integral equations,

   (V^Lamé t̃)_i(x) - (K^Lamé ũ)_i(x) = (1/2) g_i(x) + (K^Lamé g̃)_i(x) - (V^Lamé f̃)_i(x)

for x ∈ Γ_{D,i}, and

   (D^Lamé ũ)_i(x) + ((K^Lamé)' t̃)_i(x) = (1/2) f_i(x) - ((K^Lamé)' f̃)_i(x) - (D^Lamé g̃)_i(x)

for x ∈ Γ_{N,i}, and for i = 1, 2, 3. The associated variational problem is to find

   (t̃, ũ) ∈ Π_{i=1}^{3} H̃^{-1/2}(Γ_{D,i}) × Π_{i=1}^{3} H̃^{1/2}(Γ_{N,i})

such that

   a(t̃, ũ; w, v) = F(w, v)                    (1.80)

is satisfied for all

   (w, v) ∈ Π_{i=1}^{3} H̃^{-1/2}(Γ_{D,i}) × Π_{i=1}^{3} H̃^{1/2}(Γ_{N,i})

with the bilinear form

   a(t̃, ũ; w, v) = Σ_{i=1}^{3} ⟨(V^Lamé t̃)_i, w_i⟩_{Γ_{D,i}} - Σ_{i=1}^{3} ⟨(K^Lamé ũ)_i, w_i⟩_{Γ_{D,i}}
                  + Σ_{i=1}^{3} ⟨t̃_i, (K^Lamé v)_i⟩_{Γ_{N,i}} + Σ_{i=1}^{3} ⟨(D^Lamé ũ)_i, v_i⟩_{Γ_{N,i}}

and with the linear form

   F(w, v) = Σ_{i=1}^{3} [ (1/2) ⟨g_i, w_i⟩_{Γ_{D,i}} + ⟨(K^Lamé g̃)_i, w_i⟩_{Γ_{D,i}} - ⟨(V^Lamé f̃)_i, w_i⟩_{Γ_{D,i}} ]
            + Σ_{i=1}^{3} [ (1/2) ⟨f_i, v_i⟩_{Γ_{N,i}} - ⟨f̃_i, (K^Lamé v)_i⟩_{Γ_{N,i}} - ⟨(D^Lamé g̃)_i, v_i⟩_{Γ_{N,i}} ].

Since the bilinear form a(·, ·; ·, ·) is skew-symmetric, the unique solvability of the variational problem (1.80) follows from the mapping properties of all boundary integral operators involved.
In the mixed boundary value problem (1.79), different boundary conditions in the Cartesian coordinate system are prescribed. In many practical applications, however, boundary conditions are given with respect to some different orthogonal coordinate system. As an example, we consider the mixed boundary value problem

   -Σ_{j=1}^{3} ∂/∂x_j σ_{ij}(u, x) = 0   for x ∈ Ω, i = 1, 2, 3,
   (γ_0^int u(x), n(x)) = g(x)   for x ∈ Γ,                    (1.81)
   γ_1^int u(x) - (γ_1^int u(x), n(x)) n(x) = 0   for x ∈ Γ.

An elastic body which is modelled by the mixed boundary value problem (1.81) can slide in tangential direction, while in the normal direction a displacement is given. Note that the boundary value problem (1.81) may arise when considering a linearisation of nonlinear contact (Signorini) boundary conditions.
Using the Dirichlet to Neumann map (1.71), it remains to find the boundary displacements γ_0^int u and the boundary stresses γ_1^int u satisfying

   γ_1^int u(x) = (S^Lamé γ_0^int u)(x)   for x ∈ Γ,

as well as the boundary conditions

   (γ_0^int u(x), n(x)) = g(x),   γ_1^int u(x) - (γ_1^int u(x), n(x)) n(x) = 0   for x ∈ Γ.
Using

   γ_0^int u(x) = g(x) n(x) + u_T(x),

we have to find a tangential displacement field

   u_T ∈ H^{1/2}_T(Γ) = { v ∈ [H^{1/2}(Γ)]^3 : (v(x), n(x)) = 0 for x ∈ Γ }

as a solution of the boundary integral equation

   (S^Lamé g n)(x) + (S^Lamé u_T)(x) - (γ_1^int u(x), n(x)) n(x) = 0   for x ∈ Γ.

For a test function v_T ∈ H^{1/2}_T(Γ), we then obtain the variational problem

   ⟨S^Lamé u_T, v_T⟩_Γ = -⟨S^Lamé g n, v_T⟩_Γ ,

which is uniquely solvable due to the mapping properties of the Steklov–Poincaré operator. Note that one may also consider mixed boundary value problems with sliding boundary conditions only on a part Γ_S ⊂ Γ, but standard Dirichlet or Neumann boundary conditions elsewhere. However, to ensure uniqueness, one needs to assume Dirichlet boundary conditions somewhere for each component.

1.3 Stokes System

The Stokes problem is to find a velocity field u and a pressure p such that

   -μ Δu(x) + ∇p(x) = 0,   div u(x) = 0   for x ∈ Ω ⊂ R^3                    (1.82)

is satisfied, where μ is the viscosity of the fluid. Note that the Stokes system (1.82) also arises in the limiting case ν → 1/2 when considering the Navier system

   -μ Δu(x) - (λ + μ) grad div u(x) = 0   for x ∈ Ω

for incompressible materials. Introducing the pressure

   p(x) = -(λ + μ) div u(x)   for x ∈ Ω,

we get

   -μ Δu(x) + ∇p(x) = 0   for x ∈ Ω,

as well as

   div u(x) = -1/(λ + μ) p(x) = -(2/E)(1 + ν)(1 - 2ν) p(x) = 0

in the incompressible case ν = 1/2.

Using integration by parts, we obtain from the second equation in (1.82) the compatibility condition

   0 = ∫_Ω div u(y) dy = ∫_Γ (u(y), n(y)) ds_y .                    (1.83)

Green's first formula for the Stokes system (1.82) reads

   a(u, v) = Σ_{i=1}^{3} ∫_Ω ( -μ Δu_i(y) + ∂/∂y_i p(y) ) v_i(y) dy                    (1.84)
             + ∫_Ω p(y) div v(y) dy + Σ_{i=1}^{3} ∫_Γ t_i(u(y), p(y)) v_i(y) ds_y

with the symmetric bilinear form

   a(u, v) = 2μ ∫_Ω Σ_{i,j=1}^{3} e_{ij}(u, y) e_{ij}(v, y) dy - μ ∫_Ω div u(y) div v(y) dy

and with the associated boundary stress

   t_i(u(y), p(y)) = -p(y) n_i(y) + 2μ Σ_{j=1}^{3} e_{ij}(u, y) n_j(y),   y ∈ Γ, i = 1, 2, 3.

From Green's first formula (1.84), we now derive Green's second formula, which reads for the solution (u, p) of (1.82) as

   Σ_{i=1}^{3} ∫_Ω ( -μ Δv_i(y) + ∂/∂y_i q(y) ) u_i(y) dy - ∫_Ω p(y) div v(y) dy
   = Σ_{i=1}^{3} ∫_Γ t_i(u(y), p(y)) v_i(y) ds_y - Σ_{i=1}^{3} ∫_Γ t_i(v(y), q(y)) u_i(y) ds_y .

Choosing as test functions a pair of fundamental solutions U*_ℓ(x, y) and q*_ℓ(x, y), i.e. satisfying

   Σ_{i=1}^{3} ∫_Ω ( -μ Δ_y U*_{ℓi}(x, y) + ∂/∂y_i q*_ℓ(x, y) ) u_i(y) dy = u_ℓ(x),   div_y U*_ℓ(x, y) = 0,

we obtain a representation formula for x ∈ Ω,

   u_ℓ(x) = Σ_{k=1}^{3} ∫_Γ t_k(u, p) U*_{ℓk}(x, y) ds_y - Σ_{k=1}^{3} ∫_Γ t_k(U*_ℓ(x, y), q*_ℓ) u_k(y) ds_y

for  = 1, 2, 3. The fundamental solution of the Stokes system is given by



1 1 k (xk yk )(x y )
Uk (x, y) = + (1.85)
8  |x y| |x y|3
for k,  = 1, 2, 3, and
1 y x
q (x, y) =
4 |x y|3
for  = 1, 2, 3. Note that the fundamental solution (1.85) coincides with the
Kelvin tensor (1.66) for
1 E
= , = .
2 3
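This coincidence is easy to confirm numerically. The following short check is only an illustration; it reuses the kelvin_tensor helper sketched after (1.66) and compares it with the Stokeslet (1.85) at an arbitrary pair of points.

```python
import numpy as np

def stokeslet(x, y, mu):
    """Evaluate the Stokes fundamental solution U*(x, y) of (1.85) as a 3x3 matrix."""
    d = np.asarray(x, float) - np.asarray(y, float)
    r = np.linalg.norm(d)
    return (np.eye(3) / r + np.outer(d, d) / r**3) / (8.0 * np.pi * mu)

x, y, mu = np.array([0.3, 0.1, 0.2]), np.array([1.0, -0.5, 0.4]), 2.0
E = 3.0 * mu                      # mu = E/3 at nu = 1/2
print(np.allclose(kelvin_tensor(x, y, E, nu=0.5), stokeslet(x, y, mu)))  # True
```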
Hence, we can define and analyse all the boundary integral operators and related boundary integral equations as for the system of linear elastostatics. The only exception is the Dirichlet boundary value problem of the Stokes system, which requires a special treatment of the associated single layer potential.
As in linear elastostatics, the single layer potential of the Stokes system is given by

   (Ṽ^Stokes w)_k(x) = (1/2μ) [ (Ṽ w_k)(x) + Σ_{ℓ=1}^{3} (Ṽ_{kℓ} w_ℓ)(x) ]

for k = 1, 2, 3. As before,

   Ṽ^Stokes : [H^{-1/2}(Γ)]^3 → [H^1(Ω)]^3

defines a continuous map. Combining this with the mapping properties of the interior trace operator

   γ_0^int : H^1(Ω) → H^{1/2}(Γ),

we can define the continuous boundary integral operator

   V^Stokes = γ_0^int Ṽ^Stokes : [H^{-1/2}(Γ)]^3 → [H^{1/2}(Γ)]^3

allowing the representation

   (V^Stokes w)_k(x) = (1/2μ) [ (V w_k)(x) + Σ_{ℓ=1}^{3} (V_{kℓ} w_ℓ)(x) ]   for x ∈ Γ,

and for k = 1, 2, 3, as a weakly singular surface integral; see also Lemma 1.15.
When considering the interior Dirichlet boundary value problem for the Stokes system

   -μ Δu(x) + ∇p(x) = 0   for x ∈ Ω,
   div u(x) = 0   for x ∈ Ω,                    (1.86)
   γ_0^int u(x) = g(x)   for x ∈ Γ,
and using (1.83), we first have to assume the solvability condition

   ∫_Γ (g(y), n(y)) ds_y = 0.                    (1.87)

On the other hand, it is obvious that the pressure p satisfying the first equation in (1.86) is only unique up to an additive constant. In particular, the homogeneous Dirichlet boundary value problem

   -μ Δu(x) + ∇p(x) = 0,   div u(x) = 0 for x ∈ Ω,   u(x) = 0 for x ∈ Γ

has the nontrivial pair of solutions u*(x) ≡ 0 and p*(x) ≡ 1 for x ∈ Ω.
The first kind boundary integral equation of the direct approach for the Dirichlet boundary value problem (1.86) is

   (V^Stokes t)(x) = (1/2) g(x) + (K^Stokes g)(x)   for x ∈ Γ.                    (1.88)

For the homogeneous Dirichlet boundary value problem with g = 0, we therefore obtain

   (V^Stokes t*)(x) = 0   for x ∈ Γ

with

   t*(u*(x), p*(x)) = -p*(x) n(x) = -n(x)   for x ∈ Γ.

Thus, t* = -n is an eigenfunction of the single layer potential V^Stokes yielding a zero eigenvalue. Therefore, we conclude that the boundary integral equation (1.88) is only solvable modulo t*, and we have to consider the boundary integral equation (1.88) in an appropriate factor space [90]. Hence, we define

   H^{-1/2}_*(Γ) = { w ∈ [H^{-1/2}(Γ)]^3 : ⟨w, n⟩_V := Σ_{k=1}^{3} ⟨V w_k, n_k⟩_Γ = 0 },

where

   V : H^{-1/2}(Γ) → H^{1/2}(Γ)

is the single layer potential of the Laplace operator. Considering the boundary integral equation (1.88) in H^{-1/2}_*(Γ), this can be rewritten as an extended variational problem to find t ∈ [H^{-1/2}(Γ)]^3 such that

   ⟨V^Stokes t, w⟩_Γ + ⟨t, n⟩_V ⟨w, n⟩_V = ⟨((1/2) I + K^Stokes) g, w⟩_Γ                    (1.89)

is satisfied for all w ∈ [H^{-1/2}(Γ)]^3. Note that there exists a unique solution t ∈ [H^{-1/2}(Γ)]^3 of the extended variational problem (1.89) for any given Dirichlet datum g ∈ [H^{1/2}(Γ)]^3. If g satisfies the solvability condition (1.87), we then obtain t ∈ H^{-1/2}_*(Γ).
1.4 Helmholtz Equation

Let U : R_+ × Ω → R be a scalar function which satisfies the wave equation

   ΔU(t, x) = (1/c^2) ∂^2/∂t^2 U(t, x)   for t > 0, x ∈ Ω.                    (1.90)

The equation (1.90) describes wave propagation in a homogeneous, isotropic, friction-free medium having the constant speed of sound c. The most important examples are acoustic scattering and sound radiation.
Time harmonic acoustic waves are of the form

   U(t, x) = Re( u(x) e^{-iωt} ),                    (1.91)

where i is the imaginary unit. In (1.91), u : Ω → C is a scalar, complex valued function and ω > 0 denotes the frequency. Inserting (1.91) into the wave equation (1.90), we obtain the reduced wave equation or the Helmholtz equation

   -Δu(x) - κ^2 u(x) = 0   for x ∈ Ω,                    (1.92)

where κ = ω/c > 0 is the wave number.
First we consider the Helmholtz equation (1.92) in a bounded domain Ω ⊂ R^3. Multiplying this equation with a test function v, integrating over Ω, and applying integration by parts, this gives Green's first formula

   ∫_Ω (-Δu(y) - κ^2 u(y)) v(y) dy = a(u, v) - ∫_Γ γ_1^int u(y) γ_0^int v(y) ds_y                    (1.93)

with the symmetric bilinear form

   a(u, v) = ∫_Ω ( ∇u(y), ∇v(y) ) dy - κ^2 ∫_Ω u(y) v(y) dy.

From Green's formula (1.93) and by the use of the symmetry of the bilinear form a(·, ·), we deduce Green's second formula,

   ∫_Ω (-Δu(y) - κ^2 u(y)) v(y) dy + ∫_Γ γ_1^int u(y) γ_0^int v(y) ds_y
   = ∫_Ω (-Δv(y) - κ^2 v(y)) u(y) dy + ∫_Γ γ_1^int v(y) γ_0^int u(y) ds_y .

Now, choosing as a test function v a fundamental solution u*_κ : R^3 × R^3 → C satisfying

   ∫_Ω (-Δ_y u*_κ(x, y) - κ^2 u*_κ(x, y)) u(y) dy = u(x)   for x ∈ Ω,                    (1.94)

the solution of the Helmholtz equation (1.92) is given by the representation formula

   u(x) = ∫_Γ u*_κ(x, y) γ_1^int u(y) ds_y - ∫_Γ γ_{1,y}^int u*_κ(x, y) γ_0^int u(y) ds_y                    (1.95)

for x ∈ Ω. The fundamental solution of the Helmholtz equation (1.92) is

   u*_κ(x, y) = (1/4π) e^{iκ|x-y|} / |x - y|   for x, y ∈ R^3.                    (1.96)
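For later numerical use it is convenient to have the kernel (1.96) available as a small routine; the following sketch (not from the text) evaluates it for a given wave number.

```python
import numpy as np

def helmholtz_fundamental(x, y, kappa):
    """Evaluate u*_kappa(x, y) = exp(i*kappa*|x-y|) / (4*pi*|x-y|) from (1.96)."""
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return np.exp(1j * kappa * r) / (4.0 * np.pi * r)
```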
As for the Laplace operator, we consider the single layer potential

   (Ṽ_κ w)(x) = ∫_Γ u*_κ(x, y) w(y) ds_y = (1/4π) ∫_Γ e^{iκ|x-y|}/|x - y| w(y) ds_y   for x ∈ Ω,

which defines a continuous map from a given density function w on the boundary Γ to a function Ṽ_κ w which satisfies the partial differential equation (1.92) in Ω. In particular,

   Ṽ_κ : H^{-1/2}(Γ) → H^1(Ω)

is continuous, and Ṽ_κ w ∈ H^1(Ω) is a weak solution of the Helmholtz equation (1.92) for any w ∈ H^{-1/2}(Γ). Using the mapping properties of the interior trace operators

   γ_0^int : H^1(Ω) → H^{1/2}(Γ)

and

   γ_1^int : H^1(Ω, Δ + κ^2) → H^{-1/2}(Γ),

we can define corresponding boundary integral operators, e.g. the single layer potential operator

   V_κ = γ_0^int Ṽ_κ : H^{-1/2}(Γ) → H^{1/2}(Γ),

as follows:

   (V_κ w)(x) = ∫_Γ u*_κ(x, y) w(y) ds_y = (1/4π) ∫_Γ e^{iκ|x-y|}/|x - y| w(y) ds_y   for x ∈ Γ.

Its conormal derivative is

   γ_1^int Ṽ_κ = (1/2) I + K'_κ : H^{-1/2}(Γ) → H^{-1/2}(Γ)

with the adjoint double layer potential operator

   (K'_κ w)(x) = lim_{ε→0} ∫_{y∈Γ : |y-x|≥ε} γ_{1,x}^int u*_κ(x, y) w(y) ds_y
               = lim_{ε→0} (1/4π) ∫_{y∈Γ : |y-x|≥ε} ( ∇_x (e^{iκ|x-y|}/|x - y|), n(x) ) w(y) ds_y .
Note that

   H^1(Ω, Δ + κ^2) = { v ∈ H^1(Ω) : Δv + κ^2 v ∈ H̃^{-1}(Ω) }.

Since the density functions of the boundary integral operators introduced above may be complex valued, we consider

   ⟨v, w⟩_Γ = ∫_Γ v(x) w(x) ds_x

as an appropriate duality pairing for v ∈ H^{1/2}(Γ) and w ∈ H^{-1/2}(Γ). Then the single layer potential operator is complex symmetric, i.e. the following property holds for w, z ∈ H^{-1/2}(Γ):

   ⟨V_κ w, z⟩_Γ = (1/4π) ∫_Γ ∫_Γ e^{iκ|x-y|}/|x-y| w(y) ds_y z(x) ds_x
                = (1/4π) ∫_Γ w(y) ∫_Γ e^{iκ|x-y|}/|x-y| z(x) ds_x ds_y = ⟨w, V_κ z⟩_Γ .

If Γ is a Lipschitz boundary, the operator

   V_κ - V_0 : H^{-1/2}(Γ) → H^{1/2}(Γ)

is compact. Since the single layer potential V_0 of the Laplace operator is H^{-1/2}(Γ)-elliptic (see Lemma 1.1), the single layer potential V_κ is coercive, i.e. with the compact operator C = V_0 - V_κ, Gårding's inequality

   ⟨(V_κ + C) w, w⟩_Γ = ⟨V_0 w, w⟩_Γ ≥ c_1^{V_0} ‖w‖^2_{H^{-1/2}(Γ)}                    (1.97)

is satisfied for all w ∈ H^{-1/2}(Γ).
Next we consider the double layer potential

   (W̃_κ v)(x) = ∫_Γ γ_{1,y}^int u*_κ(x, y) v(y) ds_y = (1/4π) ∫_Γ ( ∇_y (e^{iκ|x-y|}/|x-y|), n(y) ) v(y) ds_y

for x ∈ Ω, which again defines a continuous map from a given density function v on the boundary Γ to a function W̃_κ v satisfying the Helmholtz equation (1.92). In particular,

   W̃_κ : H^{1/2}(Γ) → H^1(Ω)

is continuous, and W̃_κ v ∈ H^1(Ω) is a weak solution of the Helmholtz equation (1.92) for any v ∈ H^{1/2}(Γ). Using the mapping properties of the interior trace operator

   γ_0^int : H^1(Ω) → H^{1/2}(Γ)
and

   γ_1^int : H^1(Ω, Δ + κ^2) → H^{-1/2}(Γ),

we can define corresponding boundary integral operators, i.e. the trace

   γ_0^int W̃_κ = -(1/2) I + K_κ

with the double layer potential operator

   (K_κ v)(x) = lim_{ε→0} (1/4π) ∫_{y∈Γ : |y-x|≥ε} ( ∇_y (e^{iκ|x-y|}/|x-y|), n(y) ) v(y) ds_y   for x ∈ Γ.

As for the single layer potential, we have

   ⟨K_κ v, w⟩_Γ = ⟨v, K'_κ w⟩_Γ

for all v ∈ H^{1/2}(Γ) and w ∈ H^{-1/2}(Γ).
The conormal derivative of the double layer potential defines the hypersingular boundary integral operator

   D_κ = -γ_1^int W̃_κ : H^{1/2}(Γ) → H^{-1/2}(Γ).

For a Lipschitz boundary Γ, the operator

   D_κ - D_0 : H^{1/2}(Γ) → H^{-1/2}(Γ)

is compact. Since the regularised hypersingular boundary integral operator D_0 + I of the Laplace operator is H^{1/2}(Γ)-elliptic, and since the embedding H^{1/2}(Γ) ↪ H^{-1/2}(Γ) is compact, the hypersingular boundary integral operator D_κ is coercive, i.e. with the compact operator C = D_0 - D_κ + I, Gårding's inequality

   ⟨(D_κ + C) v, v⟩_Γ = ⟨(D_0 + I) v, v⟩_Γ ≥ c_1^{D_0} ‖v‖^2_{H^{1/2}(Γ)}                    (1.98)

is satisfied for all v ∈ H^{1/2}(Γ).
As for the bilinear form of the hypersingular boundary integral operator for the Laplace equation (see (1.9)), there holds an analogous result for the Helmholtz equation, see [78]:

   ∫_Γ (D_κ u)(x) v(x) ds_x = (1/4π) ∫_Γ ∫_Γ e^{iκ|x-y|}/|x-y| ( curl_Γ u(y), curl_Γ v(x) ) ds_y ds_x
                            - (κ^2/4π) ∫_Γ ∫_Γ e^{iκ|x-y|}/|x-y| u(y) v(x) ( n(x), n(y) ) ds_y ds_x .                    (1.99)

In addition to the interior boundary value problem for the Helmholtz equation (1.92), we also consider the exterior boundary value problem
   -Δu(x) - κ^2 u(x) = 0   for x ∈ Ω^ext = R^3 \ Ω̄,                    (1.100)

where we have to add the Sommerfeld radiation condition

   | ( x/|x|, ∇u(x) ) - iκ u(x) | = O(1/|x|^2)   as |x| → ∞.                    (1.101)

For a fixed y_0 ∈ Ω and R > 2 diam Ω, let B_R(y_0) be a ball of radius R with centre y_0 and including Ω. Let u be a solution of the exterior boundary value problem for the Helmholtz equation (1.100) satisfying the radiation condition (1.101). Considering Green's first formula (1.93) with respect to the bounded domain Ω_R = B_R(y_0) \ Ω̄ and choosing v = ū as test function, we obtain

   ∫_{Ω_R} |∇u(y)|^2 dy - κ^2 ∫_{Ω_R} |u(y)|^2 dy = ∫_{∂Ω_R} γ_1^int u(y) γ_0^int ū(y) ds_y
   = ∫_{∂B_R(y_0)} γ_1^int u(y) γ_0^int ū(y) ds_y - ∫_Γ γ_1^ext u(y) γ_0^ext ū(y) ds_y ,

when taking into account the opposite direction of the normal vector n(x) for x ∈ Γ. Since the left hand side of the above equation is real, we conclude

   Im ∫_{∂B_R(y_0)} γ_1^int u(y) γ_0^int ū(y) ds_y = Im ∫_Γ γ_1^ext u(y) γ_0^ext ū(y) ds_y .

By the use of this property, the Sommerfeld radiation condition (1.101) implies

   0 = lim_{R→∞} ∫_{∂B_R(y_0)} | γ_1^int u(y) - iκ γ_0^int u(y) |^2 ds_y
     = lim_{R→∞} [ ∫_{∂B_R(y_0)} |γ_1^int u(y)|^2 ds_y + κ^2 ∫_{∂B_R(y_0)} |γ_0^int u(y)|^2 ds_y
                   - 2κ Im ∫_{∂B_R(y_0)} γ_1^int u(y) γ_0^int ū(y) ds_y ]
     = lim_{R→∞} [ ∫_{∂B_R(y_0)} |γ_1^int u(y)|^2 ds_y + κ^2 ∫_{∂B_R(y_0)} |γ_0^int u(y)|^2 ds_y ]
       - 2κ Im ∫_Γ γ_1^ext u(y) γ_0^ext ū(y) ds_y ,

and, therefore,

   2κ Im ∫_Γ γ_1^ext u(y) γ_0^ext ū(y) ds_y
   = lim_{R→∞} [ ∫_{∂B_R(y_0)} |γ_1^int u(y)|^2 ds_y + κ^2 ∫_{∂B_R(y_0)} |γ_0^int u(y)|^2 ds_y ] ≥ 0.

In particular, this gives

   lim_{R→∞} ∫_{∂B_R(y_0)} |u(y)|^2 ds_y = O(1),

and, therefore,

   |u(x)| = O(1/|x|)   as |x| → ∞.                    (1.102)

For the bounded domain Ω_R, we can apply the representation formula (1.95) to obtain

   u(x) = -∫_Γ u*_κ(x, y) γ_1^ext u(y) ds_y + ∫_Γ γ_{1,y}^ext u*_κ(x, y) γ_0^ext u(y) ds_y
          + ∫_{∂B_R(y_0)} u*_κ(x, y) γ_1^int u(y) ds_y - ∫_{∂B_R(y_0)} γ_{1,y}^int u*_κ(x, y) γ_0^int u(y) ds_y

for x ∈ Ω_R. Taking the limit R → ∞ and incorporating the radiation conditions (1.101) and (1.102), this gives the representation formula in the exterior domain Ω^ext, i.e. for x ∈ Ω^ext,

   u(x) = -∫_Γ u*_κ(x, y) γ_1^ext u(y) ds_y + ∫_Γ γ_{1,y}^ext u*_κ(x, y) γ_0^ext u(y) ds_y .                    (1.103)

1.4.1 Interior Dirichlet Boundary Value Problem

We first consider the interior Dirichlet boundary value problem for the Helmholtz equation, i.e.

   -Δu(x) - κ^2 u(x) = 0 for x ∈ Ω,   γ_0^int u(x) = g(x) for x ∈ Γ.                    (1.104)

Using the representation formula (1.95), the solution of the above Dirichlet boundary value problem is given by

   u(x) = ∫_Γ u*_κ(x, y) t(y) ds_y - ∫_Γ γ_{1,y}^int u*_κ(x, y) g(y) ds_y   for x ∈ Ω,

where t = γ_1^int u is the unknown conormal derivative of u on Γ which has to be determined from some appropriate boundary integral equation.
Applying the interior trace operator γ_0^int to the representation formula, this gives a boundary integral equation to find t ∈ H^{-1/2}(Γ) such that

   (V_κ t)(x) = (1/2) g(x) + (K_κ g)(x)   for x ∈ Γ.                    (1.105)

Note that t ∈ H^{-1/2}(Γ) is the solution of the variational problem

   ⟨V_κ t, w⟩_Γ = ⟨((1/2) I + K_κ) g, w⟩_Γ   for all w ∈ H^{-1/2}(Γ).                    (1.106)

When applying the interior normal derivative γ_1^int to the representation formula, this gives a second kind boundary integral equation to find t ∈ H^{-1/2}(Γ) such that

   (1/2) t(x) - (K'_κ t)(x) = (D_κ g)(x)   for x ∈ Γ.                    (1.107)

To investigate the unique solvability of the variational problem (1.106), and, therefore, of the boundary integral equation (1.105) as well as of the boundary integral equation (1.107), we first consider the Dirichlet eigenvalue problem for the Laplace operator,

   -Δu(x) = λ u(x) for x ∈ Ω,   γ_0^int u(x) = 0 for x ∈ Γ.                    (1.108)

Let λ ∈ R_+ be a certain eigenvalue, and let u_λ be the corresponding eigenfunction. Since the eigenvalue problem (1.108) can be seen as the Helmholtz equation with the wave number κ satisfying κ^2 = λ, we obtain for the conormal derivative t_λ = γ_1^int u_λ the boundary integral equations

   (V_κ t_λ)(x) = (1/2) γ_0^int u_λ(x) + (K_κ γ_0^int u_λ)(x) = 0   for x ∈ Γ

and

   (1/2) t_λ(x) - (K'_κ t_λ)(x) = (D_κ γ_0^int u_λ)(x) = 0   for x ∈ Γ.

Thus, the boundary integral operators V_κ and (1/2) I - K'_κ are singular, and, therefore, not invertible, if κ^2 = λ is an eigenvalue of the Dirichlet eigenvalue problem (1.108). On the other hand, if κ^2 is not an eigenvalue of the Dirichlet eigenvalue problem (1.108), the single layer potential V_κ is injective and hence, since V_κ is coercive, also invertible. This shows the unique solvability of the variational problem (1.106) and of the boundary integral equation (1.105) in this case. Note that also in this case the second kind boundary integral equation (1.107) is uniquely solvable.

1.4.2 Interior Neumann Boundary Value Problem

Next we consider the interior Neumann boundary value problem for the Helmholtz equation, i.e.

   -Δu(x) - κ^2 u(x) = 0 for x ∈ Ω,   γ_1^int u(x) = g(x) for x ∈ Γ.                    (1.109)

From the representation formula (1.95), we can obtain the solution of the above boundary value problem as

   u(x) = ∫_Γ u*_κ(x, y) g(y) ds_y - ∫_Γ γ_{1,y}^int u*_κ(x, y) γ_0^int u(y) ds_y   for x ∈ Ω.

Applying the interior trace operator γ_0^int to the above representation formula, this gives a first boundary integral equation to find u = γ_0^int u ∈ H^{1/2}(Γ) such that

   (1/2) u(x) + (K_κ u)(x) = (V_κ g)(x)   for x ∈ Γ.                    (1.110)

When applying the conormal derivative operator γ_1^int to the above representation formula, this gives a second boundary integral equation,

   (D_κ u)(x) = (1/2) g(x) - (K'_κ g)(x)   for x ∈ Γ.                    (1.111)

Hence, u ∈ H^{1/2}(Γ) is a solution of the variational problem

   ⟨D_κ u, v⟩_Γ = ⟨((1/2) I - K'_κ) g, v⟩_Γ   for all v ∈ H^{1/2}(Γ).                    (1.112)

To investigate the unique solvability of the variational problem (1.112), and, therefore, of the boundary integral equations (1.110) and (1.111), we now consider the Neumann eigenvalue problem for the Laplace operator,

   -Δu(x) = λ u(x) for x ∈ Ω,   γ_1^int u(x) = 0 for x ∈ Γ.                    (1.113)

Let λ ∈ R_+ be a certain eigenvalue, and let u_λ be the corresponding eigenfunction. Since the eigenvalue problem (1.113) can be seen as the Helmholtz equation with the wave number κ satisfying κ^2 = λ, we then obtain the boundary integral equations

   (D_κ u_λ)(x) = (1/2) γ_1^int u_λ(x) - (K'_κ γ_1^int u_λ)(x) = 0   for x ∈ Γ

and

   (1/2) u_λ(x) + (K_κ u_λ)(x) = (V_κ γ_1^int u_λ)(x) = 0   for x ∈ Γ.

Thus, the boundary integral operators D_κ and (1/2) I + K_κ are singular and, therefore, not invertible if κ^2 = λ is an eigenvalue of the Neumann eigenvalue problem (1.113). On the other hand, if κ^2 is not an eigenvalue of the Neumann eigenvalue problem (1.113), the hypersingular boundary integral operator D_κ is injective and coercive, and, therefore, invertible.
1.4.3 Exterior Dirichlet Boundary Value Problem

The exterior Dirichlet boundary value problem for the Helmholtz equation reads

   -Δu(x) - κ^2 u(x) = 0 for x ∈ Ω^ext,   γ_0^ext u(x) = g(x) for x ∈ Γ,                    (1.114)

where, in addition, we have to require the Sommerfeld radiation condition (1.101),

   | ( x/|x|, ∇u(x) ) - iκ u(x) | = O(1/|x|^2)   as |x| → ∞.

Note that the exterior Dirichlet boundary value problem is uniquely solvable due to the radiation condition. The solution of the above problem is given by the representation formula (1.103),

   u(x) = -∫_Γ u*_κ(x, y) γ_1^ext u(y) ds_y + ∫_Γ γ_{1,y}^ext u*_κ(x, y) g(y) ds_y   for x ∈ Ω^ext.

To find the yet unknown Neumann datum t = γ_1^ext u, we consider the boundary integral equation which results from the representation formula when applying the exterior trace operator γ_0^ext,

   (V_κ t)(x) = -(1/2) g(x) + (K_κ g)(x)   for x ∈ Γ.                    (1.115)

This boundary integral equation is equivalent to a variational problem to find t ∈ H^{-1/2}(Γ) such that

   ⟨V_κ t, w⟩_Γ = ⟨(-(1/2) I + K_κ) g, w⟩_Γ   for all w ∈ H^{-1/2}(Γ).                    (1.116)

Since the single layer potential V_κ of the exterior Dirichlet boundary value problem coincides with the single layer potential of the interior Dirichlet boundary value problem, V_κ is not invertible when κ^2 = λ is an eigenvalue of the Dirichlet eigenvalue problem (1.108). However, we have

   ⟨(-(1/2) I + K_κ) g, t_λ⟩_Γ = ⟨g, (-(1/2) I + K'_κ) t_λ⟩_Γ = 0,

and, therefore,

   (-(1/2) I + K_κ) g ∈ Im V_κ .

In fact, the variational problem (1.116) of the direct approach is solvable, but the solution is not unique. As for the Neumann problem (1.21) for the Laplace equation, we can use a stabilised variational formulation to obtain a unique solution t ∈ H^{-1/2}(Γ) satisfying some prescribed side condition, e.g.,

   ⟨V_0 t, t_λ⟩_Γ = 0,

where V_0 : H^{-1/2}(Γ) → H^{1/2}(Γ) is the single layer potential operator of the Laplace equation. Instead of the variational problem (1.116), we then have to find the function t ∈ H^{-1/2}(Γ) as the unique solution of the stabilised variational problem

   ⟨V_κ t, w⟩_Γ + ⟨V_0 t, t_λ⟩_Γ ⟨V_0 w, t_λ⟩_Γ = ⟨(-(1/2) I + K_κ) g, w⟩_Γ

for all w ∈ H^{-1/2}(Γ). Since this formulation requires the a priori knowledge of the eigensolution t_λ, this approach does not seem to be applicable in general.
If κ^2 is not an eigenvalue of the interior Dirichlet eigenvalue problem (1.108), then the unique solvability of the boundary integral equation (1.115) follows, since V_κ is coercive and injective.
Instead of a direct approach, we may also consider an indirect single layer potential approach,

   u(x) = (Ṽ_κ w)(x) = ∫_Γ u*_κ(x, y) w(y) ds_y   for x ∈ Ω^ext.

Then, applying the exterior trace operator, this leads to a boundary integral equation to find w ∈ H^{-1/2}(Γ) such that

   (V_κ w)(x) = g(x)   for x ∈ Γ.                    (1.117)

Again, we have unique solvability of the boundary integral equation (1.117) only for those wave numbers κ^2 which are not eigenvalues of the interior Dirichlet eigenvalue problem (1.108).
When using an indirect double layer potential approach,

   u(x) = (W̃_κ v)(x) = ∫_Γ γ_{1,y}^ext u*_κ(x, y) v(y) ds_y   for x ∈ Ω^ext,

this leads to a boundary integral equation to find v ∈ H^{1/2}(Γ) such that

   (1/2) v(x) + (K_κ v)(x) = g(x)   for x ∈ Γ.                    (1.118)

The boundary integral operator (1/2) I + K_κ is singular, and, therefore, not invertible when κ^2 is an eigenvalue of the interior Neumann boundary value problem (1.113). If κ^2 is not an eigenvalue of the interior Neumann eigenvalue problem (1.113), the unique solvability of the boundary integral equation (1.118) follows.
Although the exterior Dirichlet boundary value problem for the Helmholtz equation is uniquely solvable, the related boundary integral equations may not be solvable, in particular when κ^2 either coincides with an eigenvalue of the interior Dirichlet boundary value problem or with an eigenvalue of the interior Neumann boundary value problem. However, in any case at least one of the boundary integral equations (1.117) or (1.118) is uniquely solvable, since κ^2 can not be an eigenvalue of both the interior Dirichlet and the interior Neumann boundary value problem. Thus, we may combine both the indirect single and double layer potential formulations to derive a boundary integral equation which is uniquely solvable for arbitrary wave numbers. This leads to the well known Brakhage–Werner formulation (see [13])

   u(x) = (W̃_κ w)(x) + iη (Ṽ_κ w)(x)   for x ∈ Ω^ext,

which leads to the boundary integral equation

   (1/2) w(x) + (K_κ w)(x) + iη (V_κ w)(x) = g(x)   for x ∈ Γ.                    (1.119)

Here, η ∈ R is some real parameter. Note that this equation is usually considered in the L_2(Γ) sense. The numerical analysis to investigate the unique solvability of the combined boundary integral equation (1.119) is based on the coercivity of the underlying boundary integral operator, and, therefore, on some compactness argument. In general, this may require more regularity assumptions for the boundary surface Γ under consideration (cf. [23]). Instead of considering the boundary integral equation (1.119) in L_2(Γ), one may formulate some modified boundary integral equations to be considered in the energy spaces H^{1/2}(Γ) or H^{-1/2}(Γ), see [17, 18].
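After discretisation, the combined equation (1.119) becomes a dense complex linear system. The following sketch is only an illustration: the Galerkin matrices Mh (mass matrix), Kh and Vh, as well as the projected right hand side, are assumed to be available from a boundary element code; all names are placeholders, not part of the text.

```python
import numpy as np

def solve_brakhage_werner(Mh, Kh, Vh, g, eta):
    """Solve a discrete counterpart of (1.119): (1/2 M + K + i*eta*V) w = M g."""
    A = 0.5 * Mh + Kh + 1j * eta * Vh
    return np.linalg.solve(A, Mh @ g)
```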

1.4.4 Exterior Neumann Boundary Value Problem

Finally, we consider the exterior Neumann boundary value problem

   -Δu(x) - κ^2 u(x) = 0 for x ∈ Ω^ext,   γ_1^ext u(x) = g(x) for x ∈ Γ,                    (1.120)

where we have to require the Sommerfeld radiation condition (1.101),

   | ( x/|x|, ∇u(x) ) - iκ u(x) | = O(1/|x|^2)   as |x| → ∞.

Due to the radiation condition, the exterior Neumann boundary value problem is uniquely solvable. The solution of the above boundary value problem is given by the representation formula (1.103),

   u(x) = -∫_Γ u*_κ(x, y) g(y) ds_y + ∫_Γ γ_{1,y}^ext u*_κ(x, y) γ_0^ext u(y) ds_y   for x ∈ Ω^ext.

To find the yet unknown Dirichlet datum u = γ_0^ext u, we consider the boundary integral equation which results from the representation formula when applying the exterior conormal derivative γ_1^ext,

   (D_κ u)(x) = -(1/2) g(x) - (K'_κ g)(x)   for x ∈ Γ.                    (1.121)

This hypersingular boundary integral equation is equivalent to the variational problem to find u ∈ H^{1/2}(Γ) such that

   ⟨D_κ u, v⟩_Γ = -⟨((1/2) I + K'_κ) g, v⟩_Γ   for all v ∈ H^{1/2}(Γ).                    (1.122)

Since the hypersingular boundary integral operator D_κ of the exterior Neumann boundary value problem coincides with the operator which is related to the interior Neumann boundary value problem, D_κ is not invertible when κ^2 = λ is an eigenvalue of the interior Neumann eigenvalue problem (1.113). However, we have

   ⟨((1/2) I + K'_κ) g, u_λ⟩_Γ = ⟨g, ((1/2) I + K_κ) u_λ⟩_Γ = 0,

and, therefore,

   ((1/2) I + K'_κ) g ∈ Im D_κ .

In fact, the variational problem (1.122) of the direct approach is solvable, but the solution is not unique. Again, one can formulate a suitable stabilised variational problem; we skip the details.
If κ^2 is not an eigenvalue of the Neumann eigenvalue problem (1.113), then the unique solvability of the variational problem (1.122) follows, since D_κ is coercive and injective.
When applying the exterior trace operator γ_0^ext to the representation formula, this gives a second kind boundary integral equation to be solved,

   -(1/2) u(x) + (K_κ u)(x) = (V_κ g)(x)   for x ∈ Γ.                    (1.123)

If κ^2 = λ is an eigenvalue of the Dirichlet eigenvalue problem (1.108), the operator

   -(1/2) I + K_κ : H^{1/2}(Γ) → H^{1/2}(Γ)

is singular and, therefore, not invertible. Then, the adjoint operator

   -(1/2) I + K'_κ : H^{-1/2}(Γ) → H^{-1/2}(Γ)

is also not invertible.
As for the exterior Dirichlet boundary value problem, one may formulate a combined boundary integral equation in L_2(Γ), i.e., a linear combination of the boundary integral equations (1.121) and (1.123) gives (cf. [19])

   -(1/2) u(x) + (K_κ u)(x) + iη (D_κ u)(x) = (V_κ g)(x) - iη ( (1/2) g(x) + (K'_κ g)(x) )

for x ∈ Γ, which is uniquely solvable due to the coercivity of the underlying boundary integral operators when assuming sufficient smoothness of the boundary Γ.

1.5 Bibliographic Remarks

The history of using surface potentials to describe solutions of partial differential equations goes back to the middle of the 19th century. Already C. F. Gauß [33, 34] proposed to solve the Dirichlet boundary value problem for the Laplace equation in a sufficiently smoothly bounded domain by using an indirect double layer potential. To find the yet unknown density function, a second kind boundary integral equation has to be solved. C. Neumann [80] applied a series representation to construct this solution, and he showed the convergence, i.e. a contraction property, when the domain is convex. These results were then extended by several authors, see also the discussion in [108], where the solvability of second kind boundary integral equations was considered for domains with nonsmooth boundaries. This proof is based on different representations of the boundary integral operators which follow from the Calderón projection property. In particular, the symmetry of the double layer potential, with respect to an inner product induced by the single layer potential, was already observed for a simple model problem by J. Plemelj [88]. A different view on the historical development of those results was given recently in [25]. For a general review on the history of boundary integral and boundary element methods, see, for example, [22].
For a long time, direct and indirect boundary integral formulations have been a standard approach to describe solutions of partial differential equations in mathematical physics, see, for example, [29, 55, 59, 61, 62, 70, 72, 74, 91]. While second kind boundary integral equation methods [6] resulting from an indirect approach have a long tradition in both the analysis and numerical treatment of boundary value problems [81, 82], direct formulations and first kind boundary integral equation methods became more popular in the last decades. This is mainly due to the rigorous mathematical analysis of boundary integral formulations and related numerical approximation schemes, which is available for first kind equations in the setting of energy spaces. First results were obtained simultaneously by J. C. Nedelec and J. Planchard [79] and by G. C. Hsiao and W. L. Wendland [57]. More general results on the mapping properties of boundary integral operators in Sobolev spaces were later given by M. Costabel and W. L. Wendland [24, 26], see also the monograph [71] by W. McLean.
While for boundary value problems with pure Dirichlet or pure Neumann boundary conditions one may use either first or second kind boundary integral equations, the situation becomes more complicated when considering boundary value problems with mixed boundary conditions. Direct formulations, which are based on the weakly singular boundary integral equation only, then lead to systems combining boundary integral operators of both the first and the second kind, see, e.g., [56]. Today, the symmetric formulation of boundary integral equations [103] seems to be more popular, see also [109, 114].
Alternative representations of boundary integral operators are important for both analytical and numerical considerations. In particular, by using integration by parts, the bilinear form of the hypersingular boundary integral operator, which is the conormal derivative of the double layer potential, can be transformed into a linear combination of weakly singular forms, see [77] for the Laplace and for the Helmholtz operator. In fact, this also remains true for the system of linear elastostatics [47]. Moreover, also the double layer potential of linear elastostatics, which is defined as a Cauchy singular integral operator, can be written as a combination of weakly singular boundary integral operators [62].
The use of boundary integral equation methods to describe solutions of boundary value problems is essentially based on the knowledge of a fundamental solution of the underlying partial differential operator. In fact, a fundamental solution is a solution of the partial differential equation with a Dirac impulse as the right hand side. While the existence of such a fundamental solution can be ensured for a quite large class of partial differential operators, in particular for partial differential operators with constant coefficients [30, 53, 73], the explicit construction can be a complicated task in general, see for example [66, 85, 86]. For more general partial differential operators, i.e. with variable coefficients, the concept of a parametrix, also known as a Levi function [73], was introduced by D. Hilbert [52]. A Levi function is a solution of the partial differential equation where the right hand side is given by a Dirac impulse and some more regular remainder. For example, such an approach was used in [89] to model shells by using a boundary–domain integral method.
2
Boundary Element Methods

The numerical approximation of boundary integral equations leads, in general, to boundary element methods. Since already the formulation of boundary integral equations is not unique, the choice of an appropriate discretisation scheme gives even more variety. The most common approximation methods are the collocation scheme and the Galerkin method. In this chapter we first introduce boundary element spaces of piecewise constant and piecewise linear basis functions. Then we describe some discretisation methods for different boundary integral formulations, and we discuss the corresponding error estimates.

2.1 Boundary Elements

Let Γ = ∂Ω be the boundary of a Lipschitz domain Ω ⊂ R^3. For N ∈ N, we consider a sequence of boundary element meshes

   Γ_N = ⋃_{ℓ=1}^{N} τ̄_ℓ .                    (2.1)

In the most simple case, we assume that Γ is piecewise polyhedral and that each boundary element mesh (2.1) consists of N plane triangular boundary elements τ_ℓ with mid points x*_ℓ. Using the reference element

   τ = { ξ ∈ R^2 : 0 < ξ_1 < 1, 0 < ξ_2 < 1 - ξ_1 },

the boundary element τ_ℓ = χ_ℓ(τ) with nodes x_{ℓ_i} for i = 1, 2, 3 can be described via the parametrisation

   x(ξ) = χ_ℓ(ξ) = x_{ℓ_1} + ξ_1 (x_{ℓ_2} - x_{ℓ_1}) + ξ_2 (x_{ℓ_3} - x_{ℓ_1}) ∈ τ_ℓ   for ξ ∈ τ.

For the area Δ_ℓ of the boundary element τ_ℓ, we then obtain

   Δ_ℓ = ∫_{τ_ℓ} ds_x = ∫_τ √(EG - F^2) dξ = (1/2) √(EG - F^2),

where

   E = Σ_{i=1}^{3} ( ∂/∂ξ_1 x_i(ξ) )^2 = |x_{ℓ_2} - x_{ℓ_1}|^2,
   G = Σ_{i=1}^{3} ( ∂/∂ξ_2 x_i(ξ) )^2 = |x_{ℓ_3} - x_{ℓ_1}|^2,
   F = Σ_{i=1}^{3} ∂/∂ξ_1 x_i(ξ) ∂/∂ξ_2 x_i(ξ) = (x_{ℓ_2} - x_{ℓ_1}, x_{ℓ_3} - x_{ℓ_1}).

Using Δ_ℓ, we define the local mesh size of the boundary element τ_ℓ as

   h_ℓ = √Δ_ℓ   for ℓ = 1, ..., N,

implying the global mesh sizes

   h = h_max = max_{1≤ℓ≤N} h_ℓ,   h_min = min_{1≤ℓ≤N} h_ℓ.                    (2.2)
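For plane triangles the quantities E, G, F, and hence Δ_ℓ and h_ℓ, are easily computed from the vertex coordinates. The following helper is only an illustration of the formulas above (names are not from the text).

```python
import numpy as np

def element_area_and_mesh_size(x1, x2, x3):
    """Area Delta and local mesh size h = sqrt(Delta) of a plane triangle (x1, x2, x3)."""
    a = np.asarray(x2, float) - np.asarray(x1, float)
    b = np.asarray(x3, float) - np.asarray(x1, float)
    E, G, F = a @ a, b @ b, a @ b        # first fundamental form, as in the text
    area = 0.5 * np.sqrt(E * G - F**2)   # equals 0.5 * |a x b|
    return area, np.sqrt(area)
```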

The sequence of boundary element meshes (2.1) is called globally quasi-uniform if the mesh ratio

   h_max / h_min ≤ c_G

is uniformly bounded by a constant c_G which is independent of N ∈ N. Finally, we introduce the element diameter

   d_ℓ = sup_{x,y ∈ τ_ℓ} |x - y| .

We assume that all boundary elements τ_ℓ are uniformly shape regular, i.e. there exists a global constant c_B independent of N such that

   d_ℓ ≤ c_B h_ℓ   for all ℓ = 1, ..., N.

With

   J_ℓ = [ x_{ℓ_2,1} - x_{ℓ_1,1}   x_{ℓ_3,1} - x_{ℓ_1,1} ]
         [ x_{ℓ_2,2} - x_{ℓ_1,2}   x_{ℓ_3,2} - x_{ℓ_1,2} ]  ∈ R^{3×2}
         [ x_{ℓ_2,3} - x_{ℓ_1,3}   x_{ℓ_3,3} - x_{ℓ_1,3} ]

and using the parametrisation τ_ℓ = χ_ℓ(τ), a function v defined on τ_ℓ can be interpreted as a function ṽ with respect to the reference element τ,

   v(x) = v(x_{ℓ_1} + J_ℓ ξ) = ṽ(ξ)   for ξ ∈ τ, x = χ_ℓ(ξ) ∈ τ_ℓ.

Vice versa, a function ṽ defined in the parameter domain τ implies a function v on the boundary element τ_ℓ,

   ṽ(ξ) = v(x_{ℓ_1} + J_ℓ ξ) = v(x)   for ξ ∈ τ, x = χ_ℓ(ξ) ∈ τ_ℓ.

Hence, we can define boundary element basis functions on Γ by defining associated shape functions on the reference element τ.
2.2 Basis Functions

Piecewise Constant Basis Functions

The piecewise constant shape function

   ψ^0(ξ) = 1   for ξ ∈ τ

implies the piecewise constant basis functions on Γ,

   φ_ℓ(x) = { 1 for x ∈ τ_ℓ,  0 elsewhere }                    (2.3)

for ℓ = 1, ..., N, and, therefore, the global trial space

   S_h^0(Γ) = span{ φ_ℓ }_{ℓ=1}^{N},   dim S_h^0(Γ) = N.

Note that any w_h ∈ S_h^0(Γ) can be written as

   w_h = Σ_{ℓ=1}^{N} w_ℓ φ_ℓ ∈ S_h^0(Γ),   w_ℓ ∈ R for ℓ = 1, ..., N.

Moreover, a function w_h ∈ S_h^0(Γ) can be identified with the vector w ∈ R^N defined by the components w_ℓ for ℓ = 1, ..., N.
In what follows, we will consider the approximation property of the trial space S_h^0(Γ) ⊂ L_2(Γ). For this, we introduce the L_2 projection of a given function w ∈ L_2(Γ),

   Q_h w = Σ_{ℓ=1}^{N} w_ℓ φ_ℓ ∈ S_h^0(Γ),

which minimises the error w - Q_h w in the L_2(Γ)-norm,

   Q_h w = arg min_{w_h ∈ S_h^0(Γ)} ‖w - w_h‖^2_{L_2(Γ)} = arg min_{w_h ∈ S_h^0(Γ)} ∫_Γ |w(x) - w_h(x)|^2 ds_x .

Note that Q_h w is the unique solution of the variational problem

   ∫_Γ (Q_h w)(x) φ_k(x) ds_x = ∫_Γ w(x) φ_k(x) ds_x   for k = 1, ..., N,

or,

   Σ_{ℓ=1}^{N} w_ℓ ∫_Γ φ_ℓ(x) φ_k(x) ds_x = ∫_Γ w(x) φ_k(x) ds_x   for k = 1, ..., N.

Due to

   ∫_Γ φ_ℓ(x) φ_k(x) ds_x = { Δ_ℓ for k = ℓ,  0 for k ≠ ℓ },

we obtain

   w_ℓ = (1/Δ_ℓ) ∫_{τ_ℓ} w(x) ds_x   for ℓ = 1, ..., N.
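In a concrete implementation the integrals defining the coefficients w_ℓ are usually replaced by quadrature. The following sketch (illustrative names, not from the text) uses the one-point midpoint rule, for which w_ℓ reduces to the value of w at the element centroid.

```python
import numpy as np

def l2_projection_coefficients(triangles, w):
    """Approximate w_l = (1/Delta_l) * int_{tau_l} w ds by the one-point midpoint rule.

    `triangles` is a list of (x1, x2, x3) vertex triples; `w` is a callable on R^3.
    With this rule the area Delta_l cancels and w_l is just w at the centroid.
    """
    coeffs = []
    for x1, x2, x3 in triangles:
        centroid = (np.asarray(x1, float) + np.asarray(x2, float) + np.asarray(x3, float)) / 3.0
        coeffs.append(w(centroid))
    return np.array(coeffs)
```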



From this explicit representation of w_ℓ, one can prove the error estimate, see Appendix B.2,

   ‖w - Q_h w‖^2_{L_2(Γ)} ≤ c Σ_{ℓ=1}^{N} h_ℓ^{2s} |w|^2_{H^s(τ_ℓ)} ≤ c h^{2s} |w|^2_{H^s_{pw}(Γ)}                    (2.4)

for a sufficiently regular function w ∈ H^s_{pw}(Γ) and s ∈ (0, 1]. The semi-norm in (2.4) is defined as

   |w|^2_{H^s(τ_ℓ)} = ∫_{τ_ℓ} ∫_{τ_ℓ} |w(x) - w(y)|^2 / |x - y|^{2+2s} ds_x ds_y   for s ∈ (0, 1)

and

   |w|^2_{H^1(τ_ℓ)} = ∫_τ |∇_ξ w(χ_ℓ(ξ))|^2 dξ   for s = 1.

From the above variational formulation, we conclude the Galerkin orthogonality

   ∫_Γ ( w(x) - (Q_h w)(x) ) v_h(x) ds_x = 0   for all v_h ∈ S_h^0(Γ),

and, therefore, the trivial error estimate

   ‖w - Q_h w‖_{L_2(Γ)} ≤ ‖w‖_{L_2(Γ)} .

Using a duality argument, we further obtain

   ‖w - Q_h w‖_{H^σ(Γ)} ≤ c h^{s-σ} |w|_{H^s_{pw}(Γ)}

for σ ∈ [-1, 0] and s ∈ [0, 1].
Summarising the above, we obtain the following approximation property in S_h^0(Γ).

Theorem 2.1. Let w ∈ H^s_{pw}(Γ) for some s ∈ [0, 1]. Then there holds

   inf_{w_h ∈ S_h^0(Γ)} ‖w - w_h‖_{H^σ(Γ)} ≤ c h^{s-σ} |w|_{H^s_{pw}(Γ)}                    (2.5)

for all σ ∈ [-1, 0]. Moreover, the approximation property (2.5) remains valid for all σ ≤ s ≤ 1 with σ < 1/2.
Piecewise Linear Discontinuous Basis Functions

With respect to the reference element τ, we may also define local polynomial shape functions of higher order. In particular, we introduce the linear shape functions

   ψ_1^1(ξ) = 1 - ξ_1 - ξ_2,   ψ_2^1(ξ) = ξ_1,   ψ_3^1(ξ) = ξ_2   for ξ ∈ τ.                    (2.6)

These shape functions imply globally discontinuous piecewise linear basis functions

   φ_{ℓ,i}(x) = { ψ_i^1(ξ) for x = χ_ℓ(ξ) ∈ τ_ℓ,  0 elsewhere }

for ℓ = 1, ..., N, i = 1, 2, 3, and, therefore, the global trial space

   S_h^{1,1}(Γ) = span{ φ_{ℓ,1}, φ_{ℓ,2}, φ_{ℓ,3} }_{ℓ=1}^{N},   dim S_h^{1,1}(Γ) = 3N.

Any function w_h ∈ S_h^{1,1}(Γ) can be written as

   w_h = Σ_{ℓ=1}^{N} Σ_{i=1}^{3} w_{ℓ,i} φ_{ℓ,i} ∈ S_h^{1,1}(Γ) .

Moreover, a function w_h ∈ S_h^{1,1}(Γ) can be identified with the vector w ∈ R^{3N} which is defined by the coefficients w_{ℓ,i} for i = 1, 2, 3 and ℓ = 1, ..., N.
As for piecewise constant basis functions, we may also define the corresponding L_2 projection Q_h w ∈ S_h^{1,1}(Γ) ⊂ L_2(Γ),

   Q_h w = Σ_{ℓ=1}^{N} Σ_{i=1}^{3} w_{ℓ,i} φ_{ℓ,i} ∈ S_h^{1,1}(Γ),

as the unique solution of the variational problem

   ∫_Γ (Q_h w)(x) φ_{k,j}(x) ds_x = ∫_Γ w(x) φ_{k,j}(x) ds_x ,   j = 1, 2, 3 , k = 1, ..., N,

satisfying the error estimate

   ‖w - Q_h w‖^2_{L_2(Γ)} ≤ c Σ_{ℓ=1}^{N} h_ℓ^4 |w|^2_{H^2(τ_ℓ)} ≤ c h^4 |w|^2_{H^2_{pw}(Γ)}

when assuming w ∈ H^2_{pw}(Γ). Combining this with the trivial error estimate

   ‖w - Q_h w‖_{L_2(Γ)} ≤ ‖w‖_{L_2(Γ)} ,

and using an interpolation argument, the final error estimate

   ‖w - Q_h w‖_{L_2(Γ)} ≤ c h^s |w|_{H^s_{pw}(Γ)}

follows when assuming w ∈ H^s_{pw}(Γ) for some s ∈ [0, 2]. Using again a duality argument, we finally obtain

   ‖w - Q_h w‖_{H^σ(Γ)} ≤ c h^{s-σ} |w|_{H^s_{pw}(Γ)}                    (2.7)

for σ ∈ [-2, 0] and s ∈ [0, 2].
Summarising the above, we obtain the approximation property in S_h^{1,1}(Γ).

Theorem 2.2. Let w ∈ H^s_{pw}(Γ) for some s ∈ [0, 2]. Then there holds

   inf_{w_h ∈ S_h^{1,1}(Γ)} ‖w - w_h‖_{H^σ(Γ)} ≤ c h^{s-σ} |w|_{H^s_{pw}(Γ)}                    (2.8)

for all σ ∈ [-2, 0]. Moreover, the approximation property (2.8) remains valid for all σ ≤ s ≤ 2 with σ < 1/2.

Piecewise Linear Continuous Basis Functions

Up to now, we have considered only globally discontinuous basis functions which do not require any admissibility condition of the triangulation (2.1). But such a condition is needed to define globally continuous basis functions.
Let {x_j}_{j=1}^{M} be the set of all nodes of the triangulation (2.1). A boundary element mesh consisting of plane triangular elements is called admissible if the intersection of two neighbouring elements τ_ℓ and τ_k is just one common edge or one common node. Then I(j) is the index set of all boundary elements τ_ℓ containing the node x_j, while J(ℓ) is the three-dimensional index set of the nodes defining the triangular element τ_ℓ.
For j = 1, ..., M, one can define globally continuous piecewise linear basis functions φ_j^1 with

   φ_j^1(x) = { 1 for x = x_j,  0 for x = x_i ≠ x_j,  piecewise linear elsewhere }.

Note that the restrictions of φ_j^1 onto a boundary element τ_k for k ∈ I(j) can be represented by the linear shape functions ψ^1_{j_k},

   φ_j^1(x) = ψ^1_{j_k}(ξ)   for x = χ_k(ξ) ∈ τ_k .                    (2.9)

The basis functions φ_j^1 are used to define the trial space

   S_h^1(Γ) = span{ φ_j^1 }_{j=1}^{M},   dim S_h^1(Γ) = M.

The piecewise linear continuous L_2 projection Q_h w ∈ S_h^1(Γ) is then defined as the unique solution of the variational problem
2.3 Laplace Equation 65
 
Qh w(x)j (x)dsx = w(x)j (x)dsx for j = 1, . . . , M.

Due to Sh1 ( ) Sh1,1 ( ) we immediately nd the error estimate


w Qh wH ( ) c hs |w|Hpw
s ( )

when assuming w Hpws


( ), [2, 0], s [0, 2].
Dening Ph u Sh ( ) as the unique solution of the variational problem
1

Ph w, vh
H 1 ( ) = w, vh
for all vh Sh1 ( )
we can show the error estimate
w Ph wH ( ) c hs |w|Hpw
s ( )

when assuming w Hpw


s
( ) and (0, 1], s [1, 2].
Hence, we have the following result.
Theorem 2.3. Let v Hpw
s
( ) for some s [1, 2]. Then there holds
inf v vh H ( ) c hs |v|Hpw
s ( ) (2.10)
vh Sh
1 ( )

for all [2, 1]. Moreover, the approximation property (2.10) remains valid
for all s 2 with < 3/2.

2.3 Laplace Equation


2.3.1 Interior Dirichlet Boundary Value Problem
The solution of the interior Dirichlet boundary value problem (cf. (1.14))
u(x) = 0 for x , 0int u(x) = g(x) for x ,
is given by the representation formula (cf. (1.6))
 
u(x) = u (x, y)t(y)dsy 1,y
int u (x, y)g(y)ds
y for x ,

where the unknown Neumann datum t = 1int u H 1/2 ( ) is the unique


solution of the boundary integral equation (cf. (1.15))
 
1 1 1 1 (x y, n(y))
t(y)dsy = g(x) + g(y)dsy for x .
4 |x y| 2 4 |x y|3

Replacing t H 1/2 ( ) by a piecewise constant approximation



N
th = t  Sh0 ( ), (2.11)
=1

we have to nd the unknown coecient vector t RN from some appropriate


system of linear equations.
66 2 Boundary Element Methods

Collocation Method

Inserting (2.11) into the boundary integral equation (1.15), and choosing the
boundary element mid points xk as collocation nodes, we have to solve the
collocation equations
 
1 1 1 1 (xk y, n(y))
t h (y)dsy = g(x ) + g(y)dsy (2.12)
4 |xk y| 2 k
4 |xk y|3

for k = 1, . . . , N , or using the denition (2.3) of the piecewise constant basis


functions  ,


N  
1 1 1 1 (xk y, n(y))
t dsy = g(xk ) + g(y)dsy
4 |xk y| 2 4 |xk y|3
=1 

for k = 1, . . . , N . With

1 1
Vh [k, ] = dsy
4 |xk y|


for k,  = 1, . . . , N , and

1 1 (xk y, n(y))
fk = g(xk ) + g(y)dsy
2 4 |xk y|3

for k = 1, . . . , N , this results in a linear system of equations,

Vh t = f .

The stiness matrix Vh of the collocation method is in general nonsymmetric


and the stability of the collocation scheme (2.12) and therefore the invertibility
of the stiness matrix Vh is still an open problem when is the boundary
of a general Lipschitz domain R3 . When assuming the stability of the
collocation scheme (2.12), the quasi optimal error estimate, i.e., Ceas lemma,

t th H 1/2 ( ) c inf t wh H 1/2 ( )


wh Sh
0 ( )

follows. Combining this with the approximation property (2.5) for = 1/2,
we get the error estimate

t th H 1/2 ( ) c hs+1/2 |t|Hpw


s ( ) ,

when assuming t Hpw s


( ) for some s [0, 1]. Applying the AubinNitsche
trick (for < 1/2) and an inverse inequality argument (for (1/2, 0]),
we also obtain the error estimate
2.3 Laplace Equation 67

t th H ( ) c hs |t|Hpw
s ( ) , (2.13)

when assuming t Hpw s


( ) for some s [0, 1], and [1, 0]. Note that the
lower bound 1 is due to the collocation approach, independently of the
degree of the used polynomial basis functions.
Inserting the computed solution th into the representation formula (1.6),
this gives an approximate representation formula
 
(x) =
u u (x, y)th (y)dsy 1,yint u (x, y)g(y)ds
y

for x , describing an approximate solution of the Dirichlet boundary value


problem (1.14). Note that u is harmonic, satisfying the Laplace equation, but
the Dirichlet boundary conditions are satised only approximately. For an
arbitrary x , the error is given by
  
u(x) u(x) = u (x, y) t(y) th (y) dsy .

Using a duality argument, the error estimate

(x)| u (x, )H ( ) t th H ( )


|u(x) u

for some R follows. Combining this with the error estimate (2.13) for the
minimal possible value = 1, we obtain the pointwise error estimate

(x)| c hs+1 u (x, )H 1 ( ) |t|Hpw


|u(x) u s ( ) ,

when assuming t Hpw s


( ) for some s [0, 1]. Hence, if t is suciently
smooth, i.e. t Hpw
1
( ), we obtain as the optimal order of convergence for
s=1
|u(x) u(x)| c h2 u (x, )H 1 ( ) |t|Hpw
1 ( ) . (2.14)
Note that the error estimate (2.14) involves the position of the observation
point x . In particular, the error estimate (2.14) does not hold in the
limiting case x .

Galerkin Method

The boundary integral equation (cf. (1.15))


 
1 1 1 1 (x y, n(y))
t(y)dsy = g(x) + g(y)dsy for x
4 |x y| 2 4 |x y|3

is equivalent to the variational problem (1.16),


68 2 Boundary Element Methods
   1  
V t, w = I + K g, w for all w H 1/2 ( ),
2

and to the minimisation problem

F (t) = min F (w)


wH 1/2 ( )

with
1   1  
F (w) = V w, w I + K g, w .
2 2

Using a sequence of nite dimensional subspaces Sh0 ( ) spanned by piecewise


constant basis functions, associated approximate solutions


N
th = t  Sh0 ( )
=1

are obtained from the minimisation problem

F (th ) = min F (wh ).


wh Sh
0 ( )

The solution th Sh0 ( ) of the above minimisation problem is dened via the
Galerkin equations
   1  
V th , k = I + K g, k for k = 1, . . . , N. (2.15)
2

With (2.11) and by using the denition (2.3) of the piecewise constant basis
functions  , this is equivalent to


N  
1 1
t dsy dsx =
4 |x y|
=1 k 
  
1 1 (x y, n(y))
g(x)dsx + g(y)dsy dsx
2 4 |x y|3
k k

for k = 1, . . . , N . With
 
1 1
Vh [k, ] = dsy dsx
4 |x y|
k 

for k,  = 1, . . . , N , and
  
1 1 (x y, n(y))
fk = g(x)dsx + g(y)dsy dsx
2 4 |x y|3
k k

for k = 1, . . . , N , we nd the coecient vector t RN as the unique solution


of the linear system
2.3 Laplace Equation 69

Vh t = f . (2.16)
The Galerkin stiness matrix Vh is symmetric and positive denite. Therefore,
one may use a conjugate gradient scheme for an iterative solution of the linear
system (2.16). Since the spectral condition number of Vh behaves like O(h1 ),
i.e.,
max (Vh ) 1
2 (Vh ) = Vh 2 Vh1 2 = c ,
min (Vh ) h
an appropriate preconditioning is sometimes needed. Moreover, since the sti-
ness matrix Vh is dense, fast boundary element methods are required to con-
struct more ecient algorithms, see Chapter 3.
From the H 1/2 ( )ellipticity and the boundedness of the single layer
potential
V : H 1/2 ( ) H 1/2 ( ) ,
see Lemma 1.1, we conclude the unique solvability of the Galerkin variational
problem (2.15), or, correspondingly, of the linear system (2.16), as well as the
quasi optimal error estimate, i.e. Ceas lemma,

cV2
t th H 1/2 ( ) inf t wh H 1/2 ( ) .
cV1 wh Sh
0 ( )

Combining this with the approximation property (2.5) for = 1/2, we get
1
t th H 1/2 ( ) c hs+ 2 |t|Hpw
s ( ) ,

when assuming t Hpw s


( ) and s [0, 1]. Applying the AubinNitsche trick
(for < 1/2) and an inverse inequality argument (for (1/2, 0]), we
also obtain the error estimate

t th H ( ) c hs |t|Hpw
s ( ) , (2.17)

when assuming t Hpw s


( ) for some s [0, 1] and [2, 0].
Inserting the computed Galerkin solution th Sh0 ( ) into the representa-
tion formula (1.6), this gives an approximate representation formula
 
(x) =
u u (x, y)th (y)dsy 1,yint u (x, y)g(y)ds
y for x , (2.18)

describing an approximate solution of the Dirichlet boundary value problem


 is harmonic satisfying the Laplace equation, but the Dirich-
(1.14). Note that u
let boundary conditions are satised only approximately. For an arbitrary
x , the error is given by
  
u(x) u (x) = u (x, y) t(y) th (y) dsy .

70 2 Boundary Element Methods

Using a duality argument, the error estimate

(x)| u (x, )H ( ) t th H ( )


|u(x) u

for some R follows. Combining this with the error estimate (2.17) for the
minimal value = 2, we obtain the pointwise error estimate

(x)| c hs+2 u (x, )H 2 ( ) |t|Hpw


|u(x) u s ( ) ,

when assuming t Hpw s


( ) for some s [0, 1]. Hence, if t Hpw
1
( ) is
suciently smooth, we obtain the optimal order of convergence for s = 1,

(x)| c h3 u (x, )H 2 ( ) |t|Hpw


|u(x) u 1 ( ) . (2.19)

Note that the error estimate (2.19) involves the position of the observation
point x again. In particular, the error estimate (2.19) does not hold in
the limiting case x .
The computation of the right hand side f in the linear system (2.16)
requires the evaluation of the integrals
  
1 1 (x y, n(y))
fk = g(x)dsx + g(y)dsy dsx
2 4 |x y|3
k k

for k = 1, . . . , N . An approximation of the given Dirichlet datum g H 1/2 ( )


by a globally continuous and piecewise linear function


M
gh = gj j Sh1 ( )
j=1

can be obtained either by interpolation,


M
gh = g(xj ) j , (2.20)
j=1

or by the L2 projection,


M
gh = gj j ,
j=1

where the coecients gj , j = 1, . . . , M satisfy


M
gj j , i
L2 ( ) = g, i
L2 ( ) for i = 1, . . . , M . (2.21)
j=1

This leads to
2.3 Laplace Equation 71
  
1 
M M
1 (x y, n(y))
fk = gj j (x)dsx + gj j (y)dsy dsx
2 j=1 j=1
4 |x y|3
k k


M
1
= gj Mh [k, j] + Kh [k, j]
j=1
2

with the matrix entries


  
1 (x y, n(y))
Mh [k, j] = j (x)dsx , Kh [k, j] = j (y)dsy dsx
4 |x y|3
k k

for j = 1, . . . , M and k = 1, . . . , N . Instead of the linear system (2.16), we


then have to solve a linear system with a perturbed right hand side f, yielding
a perturbed solution vector  t, i.e., we have to solve the linear system
1 
Vh 
t = Mh + Kh g . (2.22)
2

For the perturbed boundary element solution 


th Sh0 ( ), the error estimate

t 
th H ( ) c1 t th H ( ) + c2 g gh H +1 ( )

follows with [2, 0], when the L2 projection (2.21) is used to dene
gh Sh1 ( ). Note that [1, 0] in the case of the interpolation (2.20).
Assuming t Hpw s
( ) and g Hpws+1
( ) for some s [0, 1], we then obtain
the error estimate
 
t 
th H ( ) hs c1 |t|Hpw
s ( ) + c2 |g| s+1
Hpw ( ) .

For the approximate representation formula


 
(x) =
u u (x, y)th (y)dsy 1,y
int u (x, y)g (y)ds
h y for x ,

we then obtain the optimal error estimate

|u(x) u
(x)| c(x, t, g) h3 , (2.23)

when using the L2 projection (2.21) and when assuming t Hpw 1


( ) and
g Hpw ( ). When using the interpolation (2.20) instead, the error estimate
2

|u(x) u
(x)| c(x, t, g) h2

follows.
72 2 Boundary Element Methods

2.3.2 Interior Neumann Boundary Value Problem

Let R3 be a simply connected domain. The solution of the interior


Neumann boundary value problem (cf. (1.21))

u(x) = 0 for x , 1int u(x) = g(x) for x ,

is given by the representation formula (cf. (1.6))


 
int u (x, y)u(y)ds
u(x) = u (x, y)g(y)dsy 1,y y for x ,

where the unknown Dirichlet datum u = 0int u H 1/2 ( ) is a solution of the


hypersingular boundary integral equation (cf. (1.27))
 
int u (x, y)u(y)ds = 1 g(x)
1int 1,y int u (x, y)g(y)ds
1,x
y y
2

for x . Since the hypersingular boundary integral operator D has a non


trivial kernel, we consider the extended variational problem (cf. (1.29)) to nd
u H 1/2 ( ) such that
       1    
Du , v + u , 1 v, 1 = I K  g, v + v, 1
2

is satised for all v H 1/2 ( ). Note that from the solvability condition (1.22),
we reproduce the scaling condition (1.25). Since the bilinear form of this vari-
ational problem is strictly positive, the variational problem is equivalent to
the minimisation problem

F (u ) = min F (v)
vH 1/2 ( )

with
1    2   1    
F (v) = Dv, v + v, 1 I K  g, v v, 1 .
2 2

Using a sequence of nite dimensional subspaces Sh1 ( ) H 1/2 ( ) spanned


by piecewise linear and continuous basis functions, an associated approximate
function

M
u,h = u,j j Sh1 ( ) (2.24)
j=1

is obtained from the minimisation problem

F (u,h ) = min F (vh ).


vh Sh
1 ( )
2.3 Laplace Equation 73

The solution u,h Sh1 ( ) of the above minimisation problem is then dened
via the Galerkin equations
     
Du,h , i + u,h , 1 i , 1 =

 1    
I K  g, i + i , 1 (2.25)
2

for i = 1, . . . , M . Using (2.24), this becomes


M       
u,j Dj , i + j , 1 i , 1 =

j=1
 1    
I K  g, i + i , 1
2

for i = 1, . . . , M . With
   
1 curl j (y) , curl i (x)
Dh [i, j] = Dj , i
= dsx dsy ,
4 |x y|


ai = i , 1
= i (x)dsx ,

 1  
fi = I K  g, i
2
  
1 int u (x, y)g(y)ds ds
= g(x)i (x)dsx i (x) 1,x y x
2

  
1 1 (y x, n(x))
= g(x)i (x)dsx i (x) g(y)dsy dsx
2 4 |x y|3

for i, j = 1, . . . , M , we nd the coecient vector u RM as the unique


solution of the linear system
 
Dh + a a u = f + a. (2.26)

The extended stiness matrix Dh +a a is symmetric and positive denite.


Therefore, one may use a conjugate gradient scheme for an iterative solution
of the linear system (2.26). However, due to the estimate for the spectral
condition number
1
2 (Dh + a a ) c ,
h
an appropriate preconditioning is sometimes needed.
74 2 Boundary Element Methods

Note, that instead of a direct evaluation of the hypersingular boundary


integral operator D, we apply integration by parts to obtain the representation
(1.9) in Lemma 1.4, where

curl i (x) = n(x) x


i (x) for x

is the surface curl operator, and i is some locally dened extension of i into
the neighbourhood of . Since i is linear on every boundary element k , and
dening the extension i to be constant along n(x), we obtain curl i to be
a piecewise constant vector function. Hence, we get

Dh [i, j] = (2.27)
    1   1
curl i|k , curl j| dsx dsy .
4 |x y|
k supp i  supp j k 

Thus, the entries of the stiness matrix Dh of the hypersingular boundary


integral operator D are linear combinations of the entries Vh [k, ] of the single
layer potential matrix Vh . Hence, we can write

Vh 0 0
Dh = T 0 Vh 0 T
0 0 Vh

with some sparse transformation matrix T RM 3N .


From the H 1/2 ( )ellipticity of the extended bilinear form, i.e.,

Dv, v
+ v, 1
2 cD
1 vH 1/2 ( )
2
for all v H 1/2 ( ),

we conclude the unique solvability of the variational problem (2.25), or corre-


spondingly, of the linear system (2.26). Furthermore, the quasi optimal error
estimate, i.e., Ceas lemma,

u u,h H 1/2 ( ) c inf u vh H 1/2 ( )


vh Sh
1 ( )

holds. Combining this with the approximation property (2.10) for = 1/2,
we get
u u,h H 1/2 ( ) c hs1/2 |u |Hpw
s ( ) , (2.28)
when assuming u Hpw s
( ) for some s [1/2, 2]. Applying the Aubin
Nitsche trick, we also obtain the error estimate

u u,h H ( ) c hs |u |Hpw
s ( ) ,

when assuming u Hpw s


( ) for some s [1/2, 2] and [1, 1/2].
Inserting the computed Galerkin solution u,h Sh1 ( ) into the represen-
tation formula (1.6), this gives an approximate representation formula
2.3 Laplace Equation 75
 
(x) =
u u (x, y)g(y)dsy int u (x, y)u (y)ds
1,y ,h y for x

describing an approximate solution of the Neumann boundary value problem


(1.21). For an arbitrary x , the error is given by
  
u(x) u
(x) = int u (x, y) u (y) u (y) ds .
1,y ,h y

Using a duality argument, the error estimate


 
 int 
|u(x) u(x)| 1,y u (x, ) u u,h H ( )
H ( )

for some R follows. Combining this with the error estimate (2.28) for the
minimal value = 1, we obtain the pointwise error estimate
 
 int 
|u(x) u(x)| c hs+1 1,y u (x, ) 1 |u |Hpw
s ( ) ,
H ( )

when assuming u Hpw


s
( ) for some s [1/2, 2]. Hence, if u Hpw
2
( ) is
suciently smooth, we obtain the optimal order of convergence for s = 2,

|u(x) u int u (x, ) 1 |u | 2


(x)| c h3 1,y H ( ) Hpw ( ) . (2.29)

Again, the error estimate (2.29) involves the position of the observation point
x , and, therefore, it is not valid in the limiting case x .
As in the boundary element method for the Dirichlet boundary value prob-
lem, we may also approximate the given Neumann datum g H 1/2 ( ) rst.
If gh Sh0 ( ) is dened by the L2 projection, i.e. if it is the unique solution
of the variational problem
 
gh (x)k (x) dsx = g(x)k (x) dsx for k = 1, . . . , N ,

then the error estimate

g gh H ( ) c hs |g|Hpw
s ( )

holds, when assuming g Hpw s


( ) for some s [0, 1] and [1, 0]. Hence,
if g is suciently smooth, i.e., g Hpw1
( ), we get the optimal error estimate

g gh H 1 ( ) c h2 |g|Hpw
1 ( ) . (2.30)

Then,
76 2 Boundary Element Methods
 1  
fi = I K  gh , i
2

N  
N  
1 int u (x, y)ds ds
= g i (x)dsx g i (x) 1,x y x
2
=1  =1 
  
1 
N N
1 (y x, n(x))
= g i (x)dsx g i (x) dsy dsx
2 4 |x y|3
=1  =1 


N
1
= g Mh [, i] Kh [, i] .
2
=1

Instead of the linear system (2.26), we now have to solve a linear system with
a perturbed right hand side f yielding a perturbed solution vector u  , i.e.,
we have to solve the linear system
  1 
Dh + a a u  = Mh Kh g + a . (2.31)
2
,h Sh1 ( ), the error estimate
For the associated boundary element solution u

u u
,h H 1/2 ( ) u u,h H 1/2 ( ) + c g gh H 1/2 ( )
 
c h3/2 |u |Hpw
2 ( ) + |g|H 1 ( )
pw
,

holds, when assuming u Hpw 2


( ) and g Hpw
1
( ). Applying the Aubin
Nitsche trick to obtain an error estimate in lower order Sobolev spaces, the
restriction due to the error estimate (2.30) has to be considered. Hence, we
obtain the error estimate

u u
h H ( ) c1 u uh H ( ) + c2 g gh H 1 ( )
 
c h2 |u|Hpw 2 ( ) + |g|H 1 ( )
pw
,

when assuming u Hpw2


( ), g Hpw1
( ), and 0. Therefore, the optimal
error estimate reads
 
,h L2 ( ) c h2 |u |Hpw
u u 2 ( ) + |g|H 1 ( )
pw
. (2.32)

For the approximate representation formula


 
int int u (x, y)
(x) =
u 0,y u (x, y)gh (y)dsy 1,y u,h (y)dsy (2.33)

for x , we then obtain the best possible error estimate

|u(x) u
(x)| c(x, g, u ) h2 , (2.34)

when assuming u Hpw


2
( ) and g Hpw
1
( ).
2.3 Laplace Equation 77

2.3.3 Mixed Boundary Value Problem

The solution of the mixed boundary value problem (cf. (1.34))

u(x) = 0 for x ,
0int u(x) = g(x) for x D ,
int u(x) = f (x)
1 for x N

is given by the representation formula


 
u(x) = u (x, y)1int u(y)dsy + u (x, y)f (y)dsy
D N
 
int u (x, y)g(y)ds
1,y int u (x, y) int u(y)ds
1,y
y 0 y
D N

for x , where we have to nd the yet unknown Cauchy data 0int u on N


and 1int u on D . Let g H 1/2 ( ) and f H 1/2 ( ) be some arbitrary,
but xed extensions of the given boundary data g H 1/2 (D ) and f
H 1/2 (N ), respectively.
The new Cauchy data

 1/2 (N )
 = 0int u g H
u

and
t = 1int u f H
  1/2 ( )

are the unique solutions of the variational problem (cf. (1.35))

a(  ; w, v) = F (w, v)
t, u

 1/2 (N ) and w H
for all v H  1/2 (D ) with the bilinear form
 
 1 1 
 ; w, v) =
a(t, u w(x) t(y)dsy dsx
4 |x y|
D D
 
1 (x y, n(y))
w(x) (y)dsy dsx
u
4 |x y|3
D N
 
1 (y x, n(x)) 
+ v(x) t(y)dsy dsx
4 |x y|3
N D
   
1 (y)
curl v(x) , curl u
+ dsy dsx
4 |x y|

and with the linear form


78 2 Boundary Element Methods
  
1 1 (x y, n(y))
F (w, v) = w(x)g(x)dsx + w(x) g(y)dsy dsx
2 4 |x y|3
D D
  
1 1  1
w(x) f (y)dsy dsx + v(x)f (x)dsx
4 |x y| 2
D N
 
1 (y x, n(x)) 
v(x) f (y)dsy dsx
4 |x y|
N
   
1 curl v(x) , curl g(y)
dsy dsx .
4 |x y|
N

To be able to dene approximate solutions of the above variational problem,


we rst dene suitable trial spaces,

ND
 1/2 (D ) = span 
Sh0 (D ) = Sh0 ( ) H ,
=1

MN
 1/2 (N ) = span j
Sh1 (N ) = Sh1 ( ) H .
j=1

The Galerkin formulation of the variational problem (1.35) is to nd



th Sh0 (D )
and
h Sh1 (N )
u
such that
a( h ; wh , vh ) = F (wh , vh )
th , u (2.35)
is satised for all wh and vh
Sh0 (D ) Sh1 (N ).
This formulation is equiva-
lent to a linear system of equations
    
Vh Kh 
t g

= (2.36)
Kh Dh 
u f
with the following blocks:
Vh RND ND , Kh RND MN , Dh RMN MN .
The matrix entries of these blocks are dened by
 
1 1
Vh [k, ] = dsy dsx ,
4 |x y|
k 
 
1 (x y, n(y))
Kh [k, j] = j (y)dsy dsx ,
4 |x y|3
k
 
1 (curl j (y), curl i (x))
Dh [i, j] = dsy dsx
4 |x y|

2.3 Laplace Equation 79

for all k,  = 1, . . . , ND and i, j = 1, . . . , MN . The components of the right


hand side, g RND and f RMN , are given by
  
1 1 (x y, n(y))
gk = g(x)dsx + g(y)dsy dsx
2 4 |x y|3
k
  k

1 1 
f (y)dsy dsx ,
4 |x y|
k
  
1 1 (y x, n(x)) 
fi = i (x)f (x)dsx i (x) f (y)dsy dsx
2 4 |x y|
N N
 
1 (curl g(y), curl i (x))
dsy dsx
4 |x y|
N

for all k = 1, . . . , ND and i = 1, . . . , MN .


Since the trial spaces Sh0 (D ) Sh0 ( ) and Sh1 (N ) Sh1 ( ) are sub-
spaces of the trial spaces already used for the Dirichlet and for the Neumann
boundary value problems, the blocks of the matrix in (2.36) are submatrices
of the stiness matrices already used in (2.22) and in (2.31), respectively. In
particular, the evaluation of the discrete hypersingular integral operator Dh
can be reduced to the evaluation of some linear combinations of the matrix
entries of the discrete single layer potential Vh .
Since the stiness matrix in (2.36) is positive denite but block skew
symmetric, we have to apply a generalised Krylov subspace method such as
the Generalised Minimal Residual Method (GMRES) (see Appendix C.3) to
solve (2.36) by an iterative method. Here we will describe two alternative
approaches to apply the conjugate gradient scheme to solve (2.36).
Since the discrete single layer potential Vh is symmetric and positive de-
nite and hence invertible, we can solve the rst equation in (2.36) to nd
 

t = Vh1 g + Kh u  .

Inserting this into the second equation in (2.36), this gives the Schur comple-
ment system
 = f Kh Vh1 g
Sh u (2.37)
with the symmetric and positive denite Schur complement matrix

Sh = Dh + Kh Vh1 Kh .

Therefore, we can apply a conjugate gradient scheme to solve (2.37), where


we eventually need a suitable preconditioning matrix for Sh . Note that the
matrix by vector multiplication with the Schur complement matrix Sh in-
volves one application of the inverse single layer potential matrix Vh . This
can be realised either by a direct inversion, if the dimension ND is small, or
80 2 Boundary Element Methods

by the application of an inner conjugate gradient scheme. Again, a suitable


preconditioning matrix is eventually needed, which is spectrally equivalent to
Vh .
Following [14], we can also apply a suitable transformation to (2.36) to ob-
tain a linear system with a symmetric, positive denite matrix. In particular,
the transformed matrix
  
Vh CV1 I 0 Vh Kh
=
Kh CV1 I Kh Dh
 
Vh CV1 Vh Vh (I Vh CV1 )Kh
Kh (I CV1 Vh ) Dh + Kh CV1 Kh

is symmetric and positive denite. Hence, instead of (2.36), we now solve the
transformed linear system
  
Vh CV1 Vh Vh (I Vh CV1 )Kh 
t
1 1
= (2.38)
Kh (I CV Vh ) Dh + Kh CV Kh 
u
  
Vh CV1 I 0 g
1
Kh CV I f

by a preconditioned conjugate gradient scheme. In the above, CV is a suitable


preconditioning matrix, which is spectrally equivalent to the discrete single
layer potential Vh , i.e.,

c1 (CV w, w) (Vh w, w) c2 (CV w, w) for all w RND .

To ensure that (2.38) is equivalent to (2.36), we have to require the invertibility


of
Vh CV1 I = (Vh CV )CV1 .
Due to

((Vh CV )w, w) (c1 1)(CV w, w) for all w RND ,

a sucient condition is c1 > 1, which ensures the positive deniteness of


Vh CV , and, therefore, its invertibility. A suitable preconditioning matrix
for (2.38) is  
V h CV 0
CM = ,
0 CS
where CS is a preconditioning matrix for the Schur complement Sh .
From the H  1/2 (D )H 1/2 (N )ellipticity of the underlying bilinear form
a(, ; , ), we conclude the unique solvability of the Galerkin variational prob-
lem (2.35), and, therefore, of the linear system (2.36). In particular, we obtain
the quasi optimal error estimate
2.3 Laplace Equation 81


tth 2H 1/2 ( ) + uuh 2H 1/2 ( )

c inf 
t wh 2H 1/2 ( ) + inf 
u vh 2H 1/2 ( )
wh Sh (D )
0 vh Sh
1 ( )
N

from Ceas lemma. Using the approximation property (2.5) for = 1/2 as
well as the approximation property (2.10) for = 1/2, this gives


t
th 2H 1/2 ( ) +  h 2H 1/2 ( ) c1 h2s1 +1 |
uu t|2Hpw
s1
( )
+ c2 h2s2 1 |
u|2Hpw
s2
( )
,

when assuming  t Hpw s1


( ) for some s1 [1/2, 1], and u Hpws2
( ) for
some s2 [1/2, 2]. Since, in general, those regularity estimates result from a
regularity estimate for the solution u H s () of the mixed boundary value
s1/2 s3/2
problem (1.34), we obtain 0int u Hpw ( ) and 1int u Hpw ( ) by
applying the trace theorems, and, therefore, s1 = s 3/2 and s2 = s 1/2.
Thus, if u H s () is the solution of the mixed boundary value problem
(1.34) for some s [1, 5/2], we then obtain the error estimate


t
th 2H 1/2 ( ) + 
uu
h 2H 1/2 ( ) c h2(s1) |u|2H s () .

As for the Dirichlet and for the Neumann boundary value problem, applying
the AubinNitsche trick (for [2, 1/2)) and an inverse inequality argu-
ment (for (1/2, 0]), we obtain the error estimate


t
th 2H ( ) + 
uu
h 2H +1 ( ) c h2(s)3 |u|2H s () , (2.39)

when assuming u H s () for some s [1, 5/2] and [2, 0].


Inserting the computed Galerkin solutions  th Sh0 (D ) and u h Sh1 (N )
into the representation formula (1.6), this gives an approximate representation
formula
     
 
(x) = u (x, y) th (y) + f (y) dsy 1,y
u int u (x, y) uh (y) + g(y) dsy

for x . The above formula describes an approximate solution of the mixed


boundary value problem (1.34). For an arbitrary x , the error is given by

u(x) u(x) =
     
u (x, y) t(y)  int u (x, y) u
th (y) dsy 1,y (y) u
h (y) dsy .
N D

Using a duality argument, the error estimate

|u(x) u
(x)|
 
 
u (x, )H 1 ( ) 
t
th H 1 ( ) + 1int u (x, ) 
uu
h H 2 ( )
H 2 ( )
82 2 Boundary Element Methods

for some 1 , 2 R follows. Combining this with the error estimate (2.39)
for the minimal values 1 = 2 and 2 = 1, we obtain the pointwise error
estimate
 
(x)| c h2s+1 u (x, )H 2 ( ) + 1int u (x, )H 1 ( ) |u|H s () ,
|u(x) u

when assuming u H s () for some s [1, 5/2]. In particular, for s = 5/2,


we obtain the optimal order of convergence,

(x)| c(x) h3 |u|H 5/2 () .


|u(x) u (2.40)

Note that the error estimate (2.40) is based on the exact use of the given
boundary data g H 1/2 (D ) and f H 1/2 (N ), and their extensions g
H 1/2 ( ) and f H 1/2 ( ).
Starting from an approximation uh Sh1 ( ) of the complete Dirichlet
datum 0int u,


M 
MN 
M
uh = u j j = u j j + h + gh ,
u j j = u
j=1 j=1 j=MN +1

we rst have to nd the coecients uj for j = MN + 1, . . . , M of the approx-


imate Dirichlet datum gh Sh1 ( ) H 1/2 (N ). This can be done, e.g., by
applying the L2 projection,


M  
uj j (x)i (x)dx = g(x)i (x)dsx for i = MN + 1, . . . , M.
j=MN +1 D D

In a similar way, we obtain an approximation fh Sh0 (N ) of the given


Neumann datum f H 1/2 (N ),


N  
t  (x)k (x)dx = f (x)k (x)dsx for k = ND + 1, . . . , N.
=ND +1 N N

Hence, we have to nd the remaining Cauchy data


th Sh0 (D ) h Sh1 (N )
and u

from the variational problem

a( h ; k , i ) = F(k , i )
th , u

for k = 1, . . . , ND and i = 1, . . . , MN , where the perturbed linear form is now


given by
2.3 Laplace Equation 83
  
1 1 (x y, n(y))
F(k , i ) = gh (x)dsx + gh (y)dsy dsx
2 4 |x y|3
k k
  
1 1 1
fh (y)dsy dsx + fh (x)i (x)dsx
4 |x y| 2
k N N
 
1 (y x, n(x))
i (x) fh (y)dsy dsx
4 |x y|3
N N
   
1 curl i (x) , curl gh (y)
dsy dsx .
4 |x y|
N

The above perturbed variational problem is now equivalent to a linear system


of equations
     
Vh Kh 
t Vh 1
2 M h + K h f

= 1
. (2.41)
 2 Mh Kh Dh
Kh Dh u g

Note that the right hand side of this system diers from the one in (2.36).
The blocks on the right have the following dimensions:

Vh RND (N ND ) , Mh RND (M MN ) , Kh RND (M MN )

and the following entries


 
1 1
Vh [k, ] = dsy dsx ,
4 |x y|
k 

Mh [k, j] = j (x)dsx ,
k
 
1 (x y, n(y))
Kh [k, j] = j (y)dsy dsx ,
4 |x y|3
k
   
1 curl j (y) , curl i (x)
Dh [i, j] = dsy dsx
4 |x y|

for

 = ND + 1, . . . , N , k = 1, . . . , ND , j = MN + 1, . . . , M , i = 1, . . . , MN .

Note that the matrices Vh , Mh , Kh , and Dh are also submatrices of the


stiness matrices already used in (2.16) and (2.26) to handle the Dirichlet
and Neumann boundary value problem, respectively.
The solution of the perturbed linear system (2.41) can be realised as for
the linear system (2.36). The error estimates for the resulting approximations
84 2 Boundary Element Methods

can be obtained as in the previous cases, however, the approximations of the


given boundary data have to be recognised accordingly. This can be done as
for the Dirichlet boundary value problem and as for the Neumann boundary
value problem. In particular, the error estimate (2.39) holds for [1, 0],
and instead of (2.40), we obtain only the pointwise error estimate

(x)| c(x) h2 |u|H 5/2 ()


|u(x) u (2.42)

for x , when assuming u H 5/2 ().

2.3.4 Interface Problem

We consider the interface problem (1.56)(1.58), i.e., the system of partial


dierential equations (1.56),

i ui (x) = f (x) for x , e ue (x) = 0 for x e ,

the transmission conditions (1.57),

0int ui (x) = 0ext ue (x), i 1int ui (x) = e 1ext ue (x) for x ,

and the radiation condition (1.58) with u0 = 0,



1
|ue (x)| = O as |x| .
|x|

Introducing u = 0int ui = 0ext ue H 1/2 ( ), we have to solve the resulting


variational problem (1.59),

(i S int + e S ext )u, v


= S int 0int up 1int up , v

for all v H 1/2 ( ), where up is a particular solution satisfying up = f in


.
Using a sequence of nite dimensional subspaces Sh1 ( ) H 1/2 ( )
spanned by piecewise linear and continuous basis functions, an associated
approximate solution
M
uh = uj j Sh1 ( )
j=1

can be found as the unique solution of the Galerkin equations

(i S int + e S ext )uh , i


= S int 0int up 1int up , i
(2.43)

for i = 1, . . . , M . This is equivalent to a system of linear equations,

Sh u = f ,
2.3 Laplace Equation 85

with Sh RM M and f RM with the entries

Sh [i, j] = (i S int + e S ext )j , i


,

fi = S int 0int up 1int up , i

for i, j = 1, . . . , M . Since the SteklovPoincare operators


1 
(S int u)(x) = V 1 I + K u(x)
2
1  1 
= D+ I + K  V 1 I + K u(x),
2 2
 1 
(S ext u)(x) = V 1 I + K u(x)
2
 1   1 
= D + I + K  V 1 I + K u(x)
2 2

do not allow a direct evaluation of both, the stiness matrix and the right hand
side, additional approximations are required. The application of the Steklov
Poincare operator S int related to the interior Dirichlet boundary value prob-
lem can be written as
1  1 
(S int u)(x) = D + I + K  V 1 I + K u(x)
2 2
1 
= (Du)(x) + I + K  ti (x) ,
2
where 1 
ti = V 1 I + K u H 1/2 ( )
2
is the unique solution of the variational problem
 1  
V ti , w
= I + K u, w for all w H 1/2 ( ).
2

Let ti,h Sh0 ( ) be the unique solution of the Galerkin variational problem
   1  
V ti,h , wh = I + K u, wh for all wh Sh0 ( ).
2

Then,
1 
(Sint u)(x) = (Du)(x) + I + K  ti,h (x)
2
denes an approximate SteklovPoincare operator associated to the interior
Dirichlet boundary value problem. In the same way, we dene an approximate
SteklovPoincare operator
86 2 Boundary Element Methods

1  
(Sext u)(x) = (Du)(x) + I + K  te,h (x) ,
2
which is associated to the exterior Dirichlet boundary value problem, and
where te,h Sh0 ( ) is the unique solution of the Galerkin equations
 1  
V te,h , wh
= I + K u, wh for all wh Sh0 ( ).
2

Now, instead of the variational problem (2.43), we consider the perturbed


problem
uh , i
= Sint up,h tp,h , i

(i Sint + e Sext ) (2.44)


for i = 1, . . . , M . In (2.44), tp,h and up,h
Sh0 ( ) Sh1 ( )
are suitable
approximations (L2 projections) of the Cauchy data of the particular solution
up , i.e.,
tp,h , k
L2 ( ) = 1int up , k
L2 ( )
for k = 1, . . . , N and
up,h , i
L2 ( ) = 0int up , i
L2 ( )
for i = 1, . . . , M . From (2.44), we then obtain the linear system
1   
1 1
i Dh + M + Kh V h Mh + Kh (2.45)
2 h 2
 1   1 
1
+e Dh + Mh + Kh Vh Mh + Kh =
u
2 2
1  1 
Dh + Mh + Kh Vh1 Mh + Kh up Mh tp ,
2 2
where
Vh RN N , Mh RN M , Kh RN M , Dh RM M
are the Galerkin stiness matrices, which have already been used for the
Dirichlet and for the Neumann boundary value problems. The entries of these
matrices are dened as
 
1 1
Vh [k, ] = dsy dsx ,
4 |x y|
k 

Mh [k, j] = j (x)dsx ,
k
 
1 (x y, n(y))
Kh [k, j] = j (y)dsy dsx ,
4 |x y|3
k
 
1 (curl j (y), curl i (x))
Dh [i, j] = dsy dsx
4 |x y|

2.4 Lame Equations 87

for k,  = 1, . . . , N and i, j = 1, . . . , M .
Instead of the linear system (2.45) we may also solve the equivalent coupled
system

i Vh 0 i ( 12 Mh + Kh ) ti

0 e Vh e ( 2 Mh + Kh ) te
1


1 1
i ( 2 Mh + Kh ) e ( 2 Mh + Kh ) (i + e )Dh 
u

( 12 Mh + Kh )up

= 0 , (2.46)
Dh up Mh tp

which is of the same structure as the linear system (2.36), i.e. block skew
symmetric but positive denite. Note that (2.45) is the Schur complement
system of (2.46).
As for the Neumann boundary value problem, we conclude the error esti-
mate

h H 1/2 ( ) c1
u u inf u vh H 1/2 ( )
vh Sh
1 ( )

+c2 inf S int u wh H 1/2 ( ) + c3 inf S ext u wh H 1/2 ( ) .


wh Sh
0 ( ) wh Sh
0 ( )

Hence, assuming u Hpw 2


( ) and S int/ext u Hpw 1
, ( ), we obtain the error
estimate
 
h H 1/2 ( ) c h3/2 uHpw
u u 2 ( ) + S
int u 1 ext u 1
Hpw ( ) + S Hpw ( ) ,

and by applying the AubinNitsche trick, we get

u u
h L2 ( ) c(u) h2 .

When the Dirichlet datum uh is known, one can compute the remaining Neu-
mann datum by solving both, the interior and exterior Dirichlet boundary
value problems. Since those boundary value problems are Dirichlet boundary
value problems with approximated boundary data, the corresponding error
estimates are still valid.

2.4 Lame Equations

For a simply connected domain R3 , we consider the mixed boundary


value problem (1.79)
88 2 Boundary Element Methods

3

ij (u, x) = 0 for x ,
j=1
xj

0int ui (x) = gi (x) for x D,i ,



3
(1int u)i (x) = ij (u, x)nj (x) = fi (x) for x N,i ,
j=1

for i = 1, 2, 3. Note that we assume

= N,i D,i , N,i D,i = , meas D,i > 0

for i = 1, 2, 3. To nd the yet unknown Cauchy data (1int u)i on D,i and
0int ui on N,i , we consider the variational problem (1.80), which is related to
the symmetric formulation of boundary integral equations. Hence, we have to
nd
ti = (1int u)i fi H
  1/2 (D,i )
and
 1/2 (N,i )
i = 0int ui gi H
u
such that
a(  ; w, v) = F (w, v)
t, u
 1/2 (D,i ) and vi H
is satised for all wi H  1/2 (N,i ) for i = 1, 2, 3. Note
that the bilinear form is given by
3 
  3 
 
a(  ; w, v) =
t, u (V Lame 
t )i , wi (K Lame u
 )i , wi
D,i D,i
i=1 i=1
3 
  3 
 
+ ti , (K Lame v )i + (DLame u
 )i , vi ,
N,i N,i
i=1 i=1

while the linear form is

F (w, v) =
3      
1 Lame Lame 
gi , wi + (K g )i , wi (V f )i , wi +
i=1
2 D,i D,i D,i

3      
1
fi , vi fi , (K Lame v )i (DLame g )i , vi .
i=1
2 N,i N,i N,i

As for the Laplace equation, we rst dene suitable trial spaces,



ND,i
 1/2 (D,i ) = span i
Sh0 (D,i ) = Sh0 ( ) H ,
=1

MN,i
 1/2 (N,i ) = span i
Sh1 (N,i ) = Sh1 ( ) H j
j=1
2.4 Lame Equations 89

for i = 1, 2, 3. The Galerkin formulation of the variational problem (1.80) is


to nd ti,h Sh0 (D,i ) and u
i,h Sh1 (N,i ) such that

a( h ; wh , v h ) = F (wh , v h )
th , u

is satised for all wi Sh0 (D,i ) and vi Sh1 (M,i ) for i = 1, 2, 3. This
formulation is equivalent to a linear system of equations

VhLame KhLame 
  t = g , (2.47)
KhLame DhLame 
u f

having the blocks

VhLame RND ND , KhLame RND MN , DhLame RMN MN ,

where

3 
3
ND = ND,i , MN = MN,i .
i=1 i=1

While the blocks in the linear system (2.47) recover only the unknown coe-
cients  i,j , an implementation based on the complete stiness matrices
ti, and u
may be advantageous. Let

N
M
Sh0 ( ) = span  , Sh1 ( ) = span j
=1 j=1

be the boundary element spaces spanned by piecewise constant and piece-


wise linear continuous basis functions, respectively. Note that both Sh0 ( )
and Sh1 ( ) are dened with respect to a boundary element mesh of the com-
plete surface . By Pi : RN RND,i and Qi : RM RMN,i , we denote some
nodal projection operators describing the imbedding wi = Pi w RND,i for
w RN with


ND,i

N
whi (x) = wi i (x) Sh0 (D,i ), wh (x) = w  (x) Sh0 ( )
=1 =1

as well as the imbedding v i = Qi v RMN,i for v RM with


MN,i

N
vhi (x) = vji ij (x) Sh1 (N,i ), vh (x) = vj j (x) Sh1 ( ).
j=1 j=1

From this we obtain the representations

VhLame = P VhLame P , KhLame = P KhLame Q , DhLame = QDhLame Q ,

where the stiness matrices VhLame , KhLame , and DhLame correspond to the
Galerkin discretisation of the associated boundary integral operators V Lame ,
90 2 Boundary Element Methods

K Lame and DLame with respect to the boundary element spaces [Sh0 ( )]3
and [Sh1 ( )]3 . In particular, for the discrete single layer potential Vh we have
the representation

VhLame = (2.48)

Vh 0 0 V11,h V21,h V13,h
1 1 1+
(3 4) 0 Vh 0 + V21,h V22,h V23,h
2E 1
0 0 Vh V31,h V32,h V33,h

with the matrix Vh RN N having the entries


 
1 1
Vh [k, ] = dsy dsx , (2.49)
4 |x y|
k 

and six further matrices Vij,h RN N dened by


 
1 (xi yi )(xj yj )
Vij,h [k, ] = dsy dsx (2.50)
4 |x y|3
k 
 
1 1
= (xi yi ) dsy dsx
4 yj |x y|
k 

for k,  = 1, . . . , N and i, j = 1, 2, 3. Note that Vh is just the Galerkin stiness


matrix of the single layer potential for the Laplace operator, while the matrix
entries Vij,h [, k] are similar to the Galerkin discretisation of the double layer
potential for the Laplace operator.
From Lemma 1.16, we nd the representation for the double layer potential
K Lame
  E  Lame 
(K Lame v)(x) = (Kv)(x) V M (, n)v (x) + V M (, n)v (x)
1+
for x , and, therefore, the matrix representation

KhLame = (2.51)

Kh 0 0 Vh 0 0
0 Kh 0 0 Vh 0 T + E VhLame T ,
1+
0 0 Kh 0 0 Vh
where Vh and Kh are the Galerkin matrices related to the single and double
layer potential of the Laplace operator. Furthermore, T is a transformation
matrix related to the matrix surface curl operator M (, n).
Using the representation of the bilinear form of the hypersingular bound-
ary integral operator DLame as given in Lemma 1.18, one can derive a similar
representation for the Galerkin matrix DhLame , which is based on the trans-
formation matrix T and on the Galerkin matrices related to the single layer
potential of both, the Laplace operator and the system of linear elastostatics.
2.5 Helmholtz Equation 91

2.5 Helmholtz Equation


2.5.1 Interior Dirichlet Problem

The solution of the interior Dirichlet boundary value problem (cf. (1.104)),

u(x) 2 u(x) = 0 for x , 0int u(x) = g(x) for x ,

is given by the representation formula (cf. (1.95))


 
int u (x, y)g(y)ds
u(x) = u (x, y)t(y)dsy 1,y y for x ,

where the unknown Neumann datum t = 1int u H 1/2 ( ) is the unique


solution of the boundary integral equation (cf. (1.105))
1
(V t)(x) = g(x) + (K g)(x) for x .
2
Note that for the unique solvability, we have to assume that 2 is not an
eigenvalue of the Dirichlet eigenvalue problem (1.108). Then, t H 1/2 ( ) is
the unique solution of the variational problem (cf. (1.106))
   1  
V t, w = I + K g, w for all w H 1/2 ( ).
2

Using a sequence of nite dimensional subspaces Sh0 ( ) spanned by piecewise


constant basis functions, associated approximate solutions


N
th = t  Sh0 ( )
=1

are obtained from the Galerkin equations


   1  
V th , k = I + K g, k for k = 1, . . . , N. (2.52)
2

Hence, we nd the coecient vector t CN as the unique solution of the


linear system
V,h t = f
with
 
1 e|xy|
V,h [k, ] = dsy dsx , (2.53)
4 |x y|
k 

for k,  = 1, . . . , N , and
92 2 Boundary Element Methods
  
1 1   (x y, n(y))
fk = g(x)dsx + 1 |x y| e |xy| g(y)dsy dsx
2 4 |x y|3
k k

for k = 1, . . . , N .
Since the single layer potential V : H 1/2 ( ) H 1/2 ( ) is coercive, i.e.
V satises (1.97), and since V is injective when 2 is not an eigenvalue of
the Dirichlet eigenvalue problem (1.108), we conclude the unique solvability
of the Galerkin variational problem (2.52), as well as the quasi optimal error
estimate, i.e. Ceas lemma,

t th H 1/2 ( ) c inf t wh H 1/2 ( ) .


wh Sh
0 ( )

Combining this with the approximation property (2.5) for = 1/2, we get
1
t th H 1/2 ( ) c hs+ 2 |t|Hpw
s ( ) ,

when assuming t Hpw s


( ) and s [0, 1]. Applying the AubinNitsche trick
(for < 1/2) and the inverse inequality argument (for (1/2, 0]), we
also obtain the error estimate

t th H ( ) c hs |t|Hpw
s ( ) , (2.54)

when assuming t Hpw s


( ) for some s [0, 1] and [2, 0].
Inserting the computed Galerkin solution th Sh0 ( ) into the representa-
tion formula (1.95), this gives an approximate representation formula
 
(x) =
u 0int u (x, y)th (y)dsy 1int u (x, y)g(y)dsy , (2.55)

for x , describing an approximate solution of the Dirichlet boundary value


problem (1.104). Note that u satises the Helmholtz equation, but the Dirich-
let boundary conditions are satised only approximately. For an arbitrary
x , the error is given by
  
u(x) u
(x) = u (x, y) t(y) th (y) dsy .

Using a duality argument, the error estimate

(x)| u (x, )H ( ) t th H ( )


|u(x) u

for some R follows. Combining this with the error estimate (2.54) for the
minimal value = 2, we obtain the pointwise error estimate

(x)| c hs+2 u (x, )H 2 ( ) |t|Hpw


|u(x) u s ( ) .
2.5 Helmholtz Equation 93

Hence, if t Hpw
1
( ) is suciently smooth, we obtain the optimal order of
convergence for s = 1,

(x)| c h3 u (x, )H 2 ( ) |t|Hpw


|u(x) u 1 ( ) . (2.56)

Again, the error estimate (2.56) involves the position of the observation point
x , and, therefore, it is not valid in the limiting case x .
As for the Dirichlet problem for the Laplace equation, the computation of
fk requires the evaluation of the integrals
  
1 1   (x y, n(y))
fk = g(x)dsx + 1 |x y| e |xy| g(y)dsy dsx .
2 4 |x y|3
k k

When using a piecewise linear approximation gh Sh1 ( ) of the given Dirichlet


datum g H 1/2 ( ), we nd a perturbed solution vector  t CN from the
linear system
1 
V,h 
t = Mh + K,h g (2.57)
2
with additional matrices dened by the entries

Mh [k, j] = j (x)dsx ,
k

and
 
1   (x y, n(y))
K,h [k, j] = 1 |x y| e |xy| j (y)dsy dsx (2.58)
4 |x y|3
k

for k = 1, . . . , N and j = 1, . . . , M . Then, the exact Galerkin solution th has


to be replaced by the perturbed solution  th to obtain an approximate solution
of the Dirichlet problem (1.104) for x ,
 
(x) =
u u (x, y)
th (y)dsy 1int u (x, y)gh (y)dsy .

Thus, we obtain the optimal error estimate

|u(x) u
(x)| c(x, t, g) h3 , (2.59)

when using a L2 projection to approximate the boundary conditions, and


when assuming t Hpw
1
( ) and g Hpw
2
( ).

2.5.2 Interior Neumann Problem

Next we consider the interior Neumann boundary value problem (1.109),


94 2 Boundary Element Methods

u(x) 2 u(x) = 0 for x , 1int u(x) = g(x) for x .


The solution is given by the representation formula for x (cf. (1.95))
 
int u (x, y) int u(y)ds .
u(x) = u (x, y)g(y)dsy 1,y 0 y

We assume that 2 is not an eigenvalue of the Neumann eigenvalue problem


(1.113). In this case, the unknown Dirichlet datum u = 0int u H 1/2 ( ) is
the unique solution of the boundary integral equation (1.111),
1
(D u)(x) = g(x) (K g)(x) for x ,
2
or of the equivalent variational problem (1.112),
   1  
D u, v = I K g, v for all v H 1/2 ( ) .
2

Using a sequence of nite dimensional subspaces Sh1 ( ) spanned by piecewise


linear continuous basis functions, associated approximate solutions

M
uh = uj j Sh1 ( )
j=1

are obtained from the Galerkin equations


   1  
D uh , i = I K g, i for i = 1, . . . , M. (2.60)
2

Hence, we nd the coecient vector u CM as the unique solution of the


linear system
D,h u = f (2.61)
with
D,h [i, j] = D j , i
(2.62)
  |xy|
1 e
= (curl j (y), curl i (x))dsy dsx
4 |x y|

 
2 e|xy|
j (y)i (x)(n(x), n(y))dsy dsx ,
4 |x y|

for i, j = 1, . . . , M , and

1
fi = g(x)i (x)dsx
2

 
1   (x y, n(y))
i (x) 1 |x y| e |xy| g(y)dsy dsx
4 |x y|3

2.5 Helmholtz Equation 95

for i = 1, . . . , M . Note that for the computation of the matrix entries D,h [i, j],
we can reuse the discrete single layer potential V,h for picewise constant basis
functions, but we also need to have the Galerkin discretisation with piecewise
linear continuous basis functions of the operator
 |xy|
e
(C u)(x) = (n(x), n(y)) u(y)dsy , (2.63)
|x y|

which is similar to the single layer potential operator.


Since the hypersingular integral operator

D : H 1/2 ( ) H 1/2 ( )

is coercive, i.e. D satises (1.98), and since D is injective when 2 is not


an eigenvalue of the Neumann eigenvalue problem (1.113), we conclude the
unique solvability of the Galerkin variational problem (2.60), as well as the
quasi optimal error estimate, i.e. Ceas lemma,

u uh H 1/2 ( ) c inf u vh H 1/2 ( ) .


vh Sh
1 ( )

Combining this with the approximation property (2.10) for = 1/2, we get
1
u uh H 1/2 ( ) c hs 2 uHpw
s ( ) ,

when assuming u Hpws


( ) and s [1, 2]. Applying the AubinNitsche trick
we also obtain the error estimate

u uh H ( ) c hs uHpw
s ( ) , (2.64)

when assuming u Hpws


( ) for some s [1, 2] and [1, 1/2].
Inserting the computed Galerkin solution uh Sh1 ( ) into the represen-
tation formula (1.95), this gives an approximate representation formula for
x ,  
(x) =
u u (x, y)g(y)dsy 1,yint u (x, y)u (y)ds ,
h y (2.65)

describing an approximate solution of the Neumann boundary value prob-


lem (1.109). Note that u satises the Helmholtz equation, but the Neumann
boundary conditions are satised only approximately. For an arbitrary x ,
the error is given by
  
u(x) u
(x) = int u (x, y) u (y) u(y) ds .
1,y h y

Using a duality argument, the error estimate


96 2 Boundary Element Methods

(x)| u (x, )H ( ) u uh H ( )


|u(x) u

for some R follows. Combining this with the error estimate (2.64) for the
minimal value = 1, we obtain the pointwise error estimate

(x)| c hs+1 u (x, )H 1 ( ) |u|Hpw


|u(x) u s ( ) .

Hence, if u Hpw
2
( ) is suciently smooth, we get the optimal order of
convergence for s = 2,

(x)| c h3 u (x, )H 1 ( ) |u|Hpw


|u(x) u 2 ( ) . (2.66)

Again, the error estimate (2.66) involves the position of the observation point
x , and, therefore, is not valid in the limiting case x .
When using a piecewise constant approximation gh Sh0 ( ) of the given
Neumann datum g H 1/2 ( ), we can compute a perturbed piecewise linear
approximation uh Sh1 ( ) from the Galerkin equations
   1  
D uh , i = I K gh , i for i = 1, . . . , M
2

or from the equivalent linear system


1 
 =
D,h u Mh K,h

g
2
with

Mh [i, ] = i (x)dsx = Mh [, i],

 
 1   (x y, n(y))
K,h [i, ] = i (x) 1 |x y| e |xy| dsy dsx .
4 |x y|3


An approximate solution of the interior Neumann boundary value problem is


then given for x ,
 
int u (x, y)
(x) =
u u (x, y)gh (y)dsy 1,y uh (y)dsy .

As for the perturbed linear system (2.31) for the Neumann boundary value
problem of the Laplace equation, we obtain the error estimate

|u(x) u
(x)| c(x, t, g) h2 , (2.67)

when using a L2 projection to approximate the boundary conditions, when


assuming g Hpw
1
( ) and u Hpw
2
( ).
2.5 Helmholtz Equation 97

2.5.3 Exterior Dirichlet Problem

The solution of the exterior Dirichlet boundary value problem (cf. (1.114))

u(x) 2 u(x) = 0 for x e , 0ext u(x) = g(x) for x ,

where, in addition, we have to require the Sommerfeld radiation condition


(1.101), is given by the representation formula for x e (cf. 1.103)
 
u(x) = u (x, y)t(y)dsy + 1,y ext u (x, y)g(y)ds .
y

Again we assume that 2 is not an eigenvalue of the Dirichlet eigenvalue


problem (1.108). The unknown Neumann datum t = 1ext H 1/2 ( ) is then
the unique solution of the boundary integral equation (cf. (1.115))
1
(V t)(x) = g(x) + (K g)(x) for x .
2
To compute an approximate solution of this boundary integral equation, and,
therefore, of the exterior Dirichlet problem, we can proceed as in the case of
the interior Dirichlet problem. In particular, when using a piecewise linear
approximation gh Sh1 ( ), we nd a perturbed piecewise constant approxi-
mation th Sh0 ( ) from the Galerkin equations
   1  
V 
th , k = I + K gh , k for k = 1, . . . , N .
2

Hence, we obtain the coecient vector  t CN as the unique solution of the


linear system
 1 
V,h 
t = Mh + K,h g,
2
and an approximate solution of the exterior Dirichlet problem for x ,
 
(x) = u (x, y)
u ext u (x, y)g (y)ds .
th (y)dsy + 1,y h y (2.68)

Moreover, as for the interior Dirichlet problem, there holds the optimal error
estimate
|u(x) u
(x)| c(x, t, g) h3 , (2.69)
when using a L2 projection to approximate the boundary conditions, and
when assuming t Hpw
1
( ) and g Hpw
2
( ).
98 2 Boundary Element Methods

2.5.4 Exterior Neumann Problem

The solution of the exterior Neumann boundary value problem (cf. (1.120))

u(x) 2 u(x) = 0 for x e , 1ext u(x) = g(x) for x ,

where, in addition, we have to require the Sommerfeld radiation condition


(1.101), is given by the representation formula for x c (cf. (1.95))
 
u(x) = u (x, y)g(y)dsy + 1,y ext u (x, y) ext u(y)ds .
0 y

Again, we assume that 2 is not eigenvalue of the Neumann eigenvalue problem


(1.113). The unknown Dirichlet datum u = 0ext u H 1/2 ( ) is then the
unique solution of the boundary integral equation (cf. (1.121))
1
(D u)(x) = g(x) (K g)(x) for x .
2
To compute an approximate solution of this boundary integral equation, and,
therefore, of the exterior Neumann problem, we can proceed as in the case of
the interior Neumann problem. In particular, when using a piecewise constant
approximation gh Sh0 ( ) of the given Neumann datum g, we nd a perturbed
piecewise linear approximation u h Sh1 ( ) from the Galerkin equations
   1  
h , i
D u = I K gh , i for i = 1, . . . , M.
2

Hence, we obtain the coecient vector u CM as the unique solution of the


linear system
 1 
 = Mh K,h
D,h u 
g,
2
and an approximate solution of the exterior Neumann problem for x ,
 
(x) = u (x, y)gh (y)dsy + 1,y
u ext u (x, y)
uh (y)dsy .

Moreover, we obtain the error estimate

|u(x) u
(x)| c(x, t, g) h2 , (2.70)

when using the L2 projection to approximate the boundary conditions, and


when assuming g Hpw
1
( ) and u Hpw
2
( ).
2.6 Bibliographic Remarks 99

2.6 Bibliographic Remarks


The numerical analysis of boundary element methods was introduced inde-
pendently by J.C. Nedelec and J. Planchard [79] and by G. C. Hsiao and
W. L. Wendland [57]. While the stability and error analysis of the Galerkin
boundary element methods follow as in the case of the nite element meth-
ods, the stability of the collocation boundary element methods for general
Lipschitz boundaries is still open, see [4, 5, 100, 101] for some special cases.
The AubinNitsche trick to obtain higher order error estimates for boundary
element methods was rst given in [58].
Since the implementation of boundary element methods often requires nu-
merical integration techniques, an appropriate numerical analysis is manda-
tory. Galerkin collocation schemes were rst discussed in [54, 68]. Further
investigations on the use of numerical integration schemes were made in
[45, 97, 98, 102]. In [76], the inuence on an additional boundary approxi-
mation was considered.
Further references on boundary element methods are, for example, [12, 15,
21, 40, 50, 104, 117] and [99, 105].
3
Approximation of
Boundary Element Matrices

When using boundary element methods for the numerical solution of boundary
value problems for three-dimensional second order partial dierential equa-
tions, one has to deal with two main diculties. First of all, almost all matri-
ces involved are dense, i.e. all their entries do not vanish in general, leading
to an asymptotically quadratic memory requirement for the whole procedure.
Thus, classical boundary element realisations are applicable only for a rather
moderate number N of boundary elements. Fortunately, all boundary element
matrices can be decomposed into a hierarchical system of blocks which can
be approximated by the use of low rank matrices. This approximation will be
the main content of this chapter.
The second diculty is the complicated form of the matrix entries to be
generated. The Galerkin method, for example, requires the evaluation of dou-
ble surface integrals for each matrix entry. This can not be done analytically
in general. Thus, combined semi-analytical computations will be used to gen-
erate the single entries of the matrices. A more detailed description of the
corresponding procedures is presented in Appendix C.

3.1 Hierarchical Matrices


The formal denition and description of hierarchical matrices as well as op-
erations involving those matrices can be found in [41, 42]. In this section we
give a more intuitive introduction to this topic.

3.1.1 Motivation

Let K : [0, 1] [0, 1] R be a given function of two scalar variables and let
A RN M be a given matrix having the entries

ak = K(xk , y ) , k = 1, . . . , N ,  = 1, . . . , M , (3.1)


102 3 Approximation of Boundary Element Matrices

with (xk , y ) [0, 1] [0, 1]. It is obvious, that the asymptotic memory re-
quirement for the dense matrix A is Mem(A) = O(N M ), and the asymptotic
number of arithmetical operations required for a matrix-vector multiplication
is Op(A s) = O(N M ) as N, M . This quadratic amount is too high for
modern computers, already for moderate values of N and M . However, if we
agree to store just an approximation A of the matrix A and to deal with the
product A s instead of the exact value A s, the situation may change. But
then it is necessary to control the error, i.e. to guarantee the error bound

A AF AF , (3.2)

for some prescribed accuracy , where AF denotes the Frobenius norm of
the matrix A,
 1/2

N 
M
AF = a2k . (3.3)
k=1 =1

Singular value decomposition

The best possible approximation of the matrix A RN M is given by its


partial singular value decomposition

r
A A = A(r) = i ui vi , (3.4)
i=1

where
i R+ , ui RN , vi RM , i = 1, . . . , r
are the biggest singular values and the corresponding singular vectors of the
matrix A. The rank r = r() is chosen corresponding to the condition


min(N,M )

min(N,M )
A A2F i2 2 i2 = 2 A2F . (3.5)
i=r+1 i=1

Unfortunately, the complete singular value decomposition of the matrix A re-


quires O(N 3 ) arithmetical operations when assuming N M , and, therefore,
it is too expensive for practical computations. However, the singular value
decomposition can be perfectly used for the illustration of the main ideas.
As an example, let us consider the following function on [0, 1] [0, 1],
1
K(x, y) = , (3.6)
+x+y
where > 0 is some real parameter. For small values of the parameter the
function K gets an articial singularity at the corner (0, 0) of the square
[0, 1] [0, 1].
3.1 Hierarchical Matrices 103

The domain [0, 1] [0, 1] is uniformly discretised using the nodes


  1 1
(xk , y ) = (k 1)hx , ( 1)hy , hx = , hy = (3.7)
N 1 M 1
for k = 1, . . . , N and  = 1, . . . , M . In Fig. 3.1, the logarithmic plots of the
singular values of the matrix (3.1) (i.e. the quantities log10 i , i = 1, . . . , N )
for N = M = 32 (left plot) and N = M = 1024 (right plot) are presented
for = 104 . It can easily be seen that only very few singular values are
needed to represent the matrix A in its singular value decomposition (3.4).
For N = 1024, almost all singular values are close to the computer zero, and,
therefore, this matrix can be well approximated by a low rank matrix A.

0 0

-5 -5

-10 -10

-15
-15

5 10 15 20 25 30 0 200 400 600 800 1000

Fig. 3.1. Singular values for N = 32 and N = 1024

The number of signicant singular values slowly increases with the di-
mension, as it can be seen in Fig. 3.2. Here the rst 32 singular values (lo-
garithmic plot) for N = M = 32, 128, 256 (left plot, from below) and for
N = M = 256, 512, 1024 (right plot, from below) are shown. The accuracy of

5 5

0 0

-5 -5

-10 -10

-15 -15

0 5 10 15 20 25 30 0 5 10 15 20 25 30

Fig. 3.2. First 32 singular values for N = 32, 64, 128, 256, 512, and N = 1024

the low rank approximation A(r) of the matrix A is illustrated in Fig. 3.3,
where the logarithmic plot of the function
A A(r)F
(r) =
AF
104 3 Approximation of Boundary Element Matrices

0 0

-5 -5

-10 -10

-15 -15

-20 -20
0 5 10 15 20 25 30 0 5 10 15 20 25 30

Fig. 3.3. Accuracy of the low lank approximation

for r = 1, . . . , 32 is depicted. The left plot in Fig. 3.3 corresponds again to the
dimensions N = 32, 64, and N = 128 (from below), while the right plot shows
the results for N = 256, 512, and N = 1024 (from below). Thus, the behaviour
of the singular values determines the quality of the low rank approximation
(3.4). The results shown do not really depend on the parameter . If becomes
smaller, the results are even better.
The situation changes if the singularity of the function K is more serious.
As a further example, let us consider the following function on [0, 1] [0, 1],
1
K(x, y) = , (3.8)
+ (x y)2
where > 0 is again some real parameter. For small values of the parameter
the function K gets an articial singularity along the diagonal {(x, x)}
of the square [0, 1] [0, 1].
In Fig. 3.4 (left plot), the rank r() for = 106 and N = M = 256 is
shown as a function of the parameter . The horizontal axis corresponds to
the values log2 (), while changes from 20 till 28 . Thus, the rank of the
matrix strongly depends on the parameter . However, if we separate the

70 70

60 60

50 50

40 40

30 30

20 20

10 10

0 2 4 6 8 0 2 4 6 8

Fig. 3.4. Rang of the matrix A

variables x and y, i.e. consider only the quarter [0, 0.5] [0.5, 1] of the square
[0, 1][0, 1], then the situation is much better. The right plot in Fig. 3.4 shows
the same curve for separated x and y, which is more or less constant now.
3.1 Hierarchical Matrices 105

The logarithmic plots of the singular values of the matrix A for = 101
(lower curve) and for = 108 (upper curve) are shown in Fig. 3.5. The left
plot in this gure corresponds to the whole square [0, 1][0, 1], while the right
plot shows the results for the separated variables x and y, i.e. if we consider
only the quarter [0, 0.5] [0.5, 1] of the square [0, 1] [0, 1]. Now the main

0 0

-5 -5

-10 -10

-15 -15

0 50 100 150 200 250 0 50 100 150 200 250

Fig. 3.5. Singular values for N = 256

idea of hierarchical methods is quite clear. If we decompose the whole matrix


A into four blocks corresponding to the domains

[0, 0.5] [0, 0.5], [0, 0.5] [0.5, 1], [0.5, 1] [0, 0.5], [0.5, 1] [0.5, 1] ,

we will be able to approximate two of these four blocks eciently. The two re-
maining, main diagonal blocks have the same structure as the original matrix,
but only half of the size and their rank will be smaller. In Fig. 3.6, the left
diagram corresponds to the whole matrix and its rank r() = 73 is obtained
for = 29 , = 106 and N = M = 256. The 2 2 block matrix together
with the ranks of the blocks is shown in the second diagram of Fig. 3.6. The

12 7
20 8 7 12 8
38 9 8 20
9 12 7 9
8
73 20 8
7 12
12 7
7 12 8
9 38 9 8 20
9 12 7
8 7 12

Fig. 3.6. Original matrix and its hierarchical decomposition in blocks

approximation of the separated blocks is now acceptable, and we continue to


decompose only the blocks on the main diagonal. The results can be seen in
the third and in the fourth diagram of Fig. 3.6. The memory requirements
for these four matrices are quite dierent: the rst matrix needs 146N words
of memory, the second 94N , the third 74N , and, nally, we will need 72N
words of memory for the last block matrix in Fig. 3.6. Thus, a hierarchical
106 3 Approximation of Boundary Element Matrices

decomposition into blocks and their separate approximation using a singular


value decomposition leads to a drastic reduction of memory requirements (the
latter decomposition requires less than 50%) even for this rather small matrix
having a diagonal singularity. Note that the rank of the blocks on the main
diagonal increases almost linear with the dimension: 12 20 38 73, while
the rank of the separated blocks has at most a logarithmic growth: 7 8 9.
Thus, a hierarchical approximation of large dense matrices arising from
some generating function having diagonal singularity consists of three steps:
Construction of clusters for variables x and y,
Finding of possible admissible blocks (i.e. blocks with separated x and y),
Low rank approximation of admissible blocks.
In the above example, the clusters were simply the sets of points xk which
belong to smaller and smaller intervals. Fortunately, the decomposition prob-
lem is only slightly more complicated for general, three-dimensional irregular
point sets. Also, the admissible blocks in the above example are very natural.
They are just blocks outside of the main diagonal. In the general case, we will
need some permutations of rows and columns of the matrix to construct ad-
missible blocks. Finally, the singular value decomposition approximation we
have used, is not applicable for more realistic examples. We will need more
ecient algorithms, namely the Adaptive Cross Approximation (ACA), to
approximate admissible blocks.

Degenerated approximation

In the above example, the approximation of the blocks for separated variables
x and y is based on the smoothness of the function K for x = y. However, if
the function K is degenerated, i.e. it is a nite sum of products of functions
depending only on x and y,

r
K(x, y) = pi (x)qi (y) , (3.9)
i=1

then the rank of the matrix A dened in (3.1) is equal to r independent of


its dimension. Thus for N, M  r, the matrix A is a low rank matrix. This
property is independent of the smoothness of the functions pi , qi in (3.9). The
low rank representation of the matrix A is then

r
A= ui vi ,
i=1

with

(ui )k = pi (xk ) , (vi ) = qi (y )


3.1 Hierarchical Matrices 107

for k = 1, . . . , N and  = 1, . . . , M . Note that this representation is not the


singular value decomposition (3.4).
Another possibility to obtain a low rank approximation of a matrix of the
form (3.1) is based on the smoothness. If the function K is smooth enough,
then we can use its Taylor series (cf. [44]) with respect to the variable x in
some xed point x ,
r
1 i K(x , y)
K(x, y) = i
(x x )i + Rr (x, y) ,
i=0
i ! x

to obtain a degenerated approximation



r
A A = ui vi , (3.10)
i=0

with
1 i K(x , y )
(ui )k = (xk x )i , (vi ) =
i! xi
for k = 1, . . . , N and  = 1, . . . , M . Again, (3.10) is not the partial singu-
lar value decomposition (3.4) of the matrix A. If the remainder term Rr is
uniformly bounded by the original function K satisfying
   
   
Rr (x, y) K(x, y)

for all x and y with some r = r(), then we can guarantee the accuracy of the
low rank matrix approximation
A AF AF (3.11)
for all dimensions N and M . The rank r+1 of the matrix A is also independent
of its dimension. Thus, for N M , the matrix A requires only Mem(A) =
O(N ) words of computer memory. However, an ecient construction of the
Taylor series for a given function in the three-dimensional case is practically
impossible. Thus, it is rather an illustration for the fact that there exist low
rank approximations of dense matrices, which are not based on the singular
value decomposition.
A further example of a low rank approximation of a given function is the
decomposition of the fundamental solution of the Laplace equation
1 1
u (x, y) = for x, y R3
4 |x y|
(cf. (1.7)) into spherical harmonics in some point x with |x x | < |y x |
 n  
1 1 |x x |
u (x, y) =
Pn (ex , ey ) ,
4 |y x | n=0 |y x |
x x y x
ex = , ey = ,
|x x | |y x |
108 3 Approximation of Boundary Element Matrices

where the Legendre polynomials are dened for |u| 1 as follows,


1 dn 2
P0 (u) = 1 , Pn (u) = (u 1)n , for n 1 . (3.12)
2n n! dun
Note that the Legendre polynomials allow the following separation of variables
  
n
Pn (ex , ey ) = Ynm (ex )Ynm (ey ) ,
m=n

where Ynm are the spherical harmonics. See [39, 93] for more details.

3.1.2 Hierarchical clustering

Large dense matrices arising from integral equations have no explicit structure
in general. As a rule, because of the singularity of the kernel function on the
diagonal, i.e. for x = y, these matrices are also not of low rank. However, it
is possible to nd a permutation, so that the matrix with permuted rows and
columns contains rather large blocks close to some low-rank matrices with
respect to the Frobenius norm (cf. (3.2)).

Cluster tree

To nd a suitable permutation, a cluster tree is constructed by a recursive


partitioning of some weighted, pairwise disjunct, characteristic points

(xk , gk ) , k = 1, . . . , N R3 R+ (3.13)

and

(y , q ) ,  = 1, . . . , M R3 R+ (3.14)

in order to separate the variables x and y. A large distance between two


characteristic points results in a large dierence of the respective column or
row numbers.
When dealing with boundary element matrices, the characteristic points
can be
the mid points xk of the triangle elements k with weights gk = k = |k |,
when using piecewise constant basis functions k (cf. (2.3)),
the nodes xk of the grid with weights gk = |supp k |, when using piecewise
linear continuous basis functions k (cf. (2.9)).
A group of weighted points is called cluster if the points are close to each
other with respect to the usual distance. A given cluster

Cl = (xk , gk ) , k = 1, . . . , n

with n > 1 can be separated in two sons using the following algorithm.
3.1 Hierarchical Matrices 109

Algorithm 3.1
1. Mass of the cluster

n
G= gk R+ ,
k=1

2. Centre of the cluster

1 
n
X= gk xk R3 ,
G
k=1

3. Covariance matrix of the cluster



n
C= gk (xk X) (xk X) R33 ,
k=1

4. Eigenvalues and eigenvectors

C vi = i vi , i = 1, 2, 3 , 1 2 3 0 ,

5. Separation
5.1 initialisation

Cl1 := , Cl2 := ,

5.2 for k = 1, . . . , n

if (xk X, v1 ) 0 then Cl1 := Cl1 (xk , gk )


else Cl2 := Cl2 (xk , gk ) .

The eigenvector v1 of the matrix C corresponds to the largest eigenvalue of


this matrix and shows(in the direction of the longest ) expanse of the cluster.
The separation plane x R3 : (x X, v1 ) = 0 goes through the centre
X of the cluster and is orthogonal to the eigenvector v1 . Thus, Algorithm 3.1
divides a given arbitrary cluster of weighted points into two more or less equal
sons. In Fig. 3.7, the rst two separation levels of a given, rather complicated,
surface are shown. The separation of a given cluster in two sons denes a
permutation of the points in the cluster. The points in the rst son will be
numbered rst and then the ones in the second son. Algorithm 3.1 will be
applied recursively to the sons, until they contain less or equal than some
prescribed (small and independent of N ) number nmin of points.

Cluster pairs

Next, cluster pairs which are geometrically well separated are identied. They
will be regarded as admissible cluster pairs, as e.g. the clusters in Fig. 3.8. An
110 3 Approximation of Boundary Element Matrices

0.05

-0.05

-0.1 0.1

0
0
0.1

0.2
0 2

0.05

-0.05

-0.1 0.1

0
0
0.1

0.2
0 2

Fig. 3.7. Clusters of the rst two levels


3.1 Hierarchical Matrices 111

0.05

-0.05

-0.1 0.1

0
0
0.1

0.2
0 2

Fig. 3.8. An admissible cluster pair

appropriate admissibility criterion is the following simple geometrical condi-


tion. A pair of clusters (Clx , Cly ) with nx > nmin and my > nmin elements
is admissible if
 
min diam(Clx ), diam(Cly ) dist(Clx , Cly ) , (3.15)

where 0 < < 1 is a given parameter. Although the criterion (3.15) is quite
simple, a rather large computational eort (quadratic with respect to the
number of elements in the clusters Clx and Cly ) is required for calculating
the exact values

diam(Clx ) = max |xk1 xk2 | ,


k1 ,k2
diam(Cly ) = max |y1 y2 | ,
1 ,2
dist(Clx , Cly ) = min |xk y | .
k,

In practice, one can use rougher, more restrictive, but easily computable
bounds

diam(Clx ) 2 max |X xk | ,
k
diam(Cly ) 2 max |Y y | ,

112 3 Approximation of Boundary Element Matrices

1 
dist(Clx , Cly ) |X Y | diam(Clx ) + diam(Cly ) ,
2
where X and Y are the already computed centres (cf. Algorithm 3.1) of the
clusters Clx and Cly , for the admissibility condition. If a cluster pair is not
admissible, but nx > nmin and my > nmin are satised, then there exist sons
of both clusters

Clx = Clx,1 Clx,2 , Cly = Cly,1 Cly,2 .

For simplicity, let us assume that the cluster Clx is bigger than Cly , i.e.

diam(Clx ) diam(Cly ).

In this case, we have to check the following two new pairs


   
Clx,1 , Cly , Clx,2 , Cly

for admissibility, and so on. This recursive procedure stops if nx nmin or


my nmin is satised. The corresponding block of the matrix is small, and it
will be computed exactly. The cluster trees for the variables x and y together
with the set of the admissible cluster pairs, as well as the set of the small
cluster pairs allow to split the matrix into a collection of blocks of various sizes.
The hierarchical block structure of the Galerkin matrix for the single layer
potential on the surface from Figs. 3.73.8 is shown in Fig. 3.9. The colours
of the blocks indicate the quality of the approximation. The light grey
colour corresponds to well approximated blocks, while the dark grey colour
indicates a less good approximation. The small blocks are computed exactly
and they are depicted in black. Thus, the remaining main problem is how
to approximate the blocks which correspond to the admissible cluster pairs,
without using the singular value decomposition. The corresponding procedures
will be described in the following section.

3.2 Block Approximation Methods


3.2.1 Analytic Form of Adaptive Cross Approximation

Let X, Y R3 be two nonempty domains, and let K : X Y R be a


given function. The following abstract Adaptive Cross Approximation algo-
rithm constructs a degenerated approximation of the function K using nodal
interpolation in some points
( ) ( )
x1 , x2 , . . . X , y1 , y2 , . . . Y ,

which will be determined during the realisation of the algorithm on an adap-


tive way.
3.2 Block Approximation Methods 113

Fig. 3.9. Matrix decomposition

Algorithm 3.2
1. Initialisation

R0 (x, y) = K(x, y) , S0 = 0 .

2. For i = 0, 1, 2, . . . compute
2.1. pivot element

(xi+1 , yi+1 ) = ArgMax|Ri (x, y)| ,

2.2. normalising constant


1
i+1 = (Ri (xi+1 , yi+1 )) ,

2.3. new functions

ui+1 (x) = i+1 Ri (x, yi+1 ) , vi+1 (y) = Ri (xi+1 , y) ,

2.4. new residual

Ri+1 (x, y) = Ri (x, y) ui+1 (x)vi+1 (y) ,

2.5. new approximation

Si+1 (x, y) = Si (x, y) + ui+1 (x)vi+1 (y) .


114 3 Approximation of Boundary Element Matrices

The stopping criterion for the above algorithm can be realised in Step 2.1.
corresponding to the condition

|Rr (x, y)| |K(x, y)| for (x, y) X Y . (3.16)

Algorithm 3.2 produces a sequence of approximations {Si } and an associated


sequence of residuals {Ri } possessing the approximation property

K(x, y) = Ri (x, y) + Si (x, y) , (x, y) X Y , i = 0, 1, . . . , r (3.17)

and the interpolation property

Rm (xi , y) = 0 , y Y , Rm (x, yi ) = 0 , x X

for i = 1, 2, . . . , m and for m = 1, 2, . . . , r. Furthermore, if the function K(x, y)


is harmonic for x = y then its approximations Si (x, y) are also harmonic for
all i. The residuals {Ri } accumulate zeros, and, therefore, the sequence of the
functions {Si } interpolates the given function K(x, y) in more and more points
corresponding to (3.17). If we are interested in computing an approximation
A for a matrix A RN M having the entries

ak = K(xk , y ) , k = 1, . . . , N ,  = 1, . . . , M (3.18)

for some points (xk , y ) X Y , then the approximation Sr (x, y) of the


function K(x, y) can be used to obtain

ak = Sr (xk , y ) ak .

Due to the stopping criterion (3.16), and due to the approximation property
(3.17), the following estimate obviously holds

A AF AF ,

where  F denotes the Frobenius norm of a matrix (cf. (3.3)).

Remark 3.3. Step 2.1. of Algorithm 3.2 should be discussed in more details.
It can be very dicult, if not impossible, to solve the maximum problem
formulated there. There are two possibilities to proceed. First, we can look
for the maximum only in a nite set of given points (xk , y ). In this case
only the original entries of the matrix A will be used for its approximation.
The algorithm will coincide with the algebraic fully pivoted ACA algorithm as
described in Subsection 3.2.2. The other possibility is to choose some articial
points and to look for the maximum there. These points can be the zeros
of the three-dimensional Chebyshev polynomials in corresponding bounding
boxes for the sets X, Y . In this case the ACA approximation will be similar
to the best possible polynomial interpolation.
3.2 Block Approximation Methods 115

Remark 3.4. The stopping criterion (3.16) can be applied only if the function
K(x, y) is smooth on (X, Y ). If this is not the case, but if the function K is
asymptotically smooth (cf. (3.19)), then we have to decompose the domains
X and Y into two systems of clusters and to approximate the function on
each admissible cluster pair (Clx , Cly ) separately using Algorithm 3.2. This
decomposition implies the corresponding decomposition of the matrix in a
hierarchical system of blocks.

Using the theory of polynomial multidimensional interpolation, the following


result was proven in [7].
Theorem 3.5. Let the function K(x, y) be asymptotically smooth with respect
to y, i.e. K(x, ) C (R3 \{x}) for all x R3 , satisfying

|y K(x, y)| cp |x y|gp , p = || (3.19)

for all multiindices N30 with a constant g < 0. Moreover, the matrix
A RN M with entries (3.18) is decomposed into blocks corresponding to the
admissibility condition

diam(Cly ) dist(Clx , Cly ) , < 1 .

Then the matrix A with M N can be approximated up to an arbitrary given


accuracy > 0 using a system of given points (xk , y ),

A AF AF ,

and

Op(A) = Op(A s) = Mem(A) = O(N 1+ ) for all > 0 .

In Theorem 3.5, Op(A) denotes the number of arithmetical operations re-


quired for the generation of the matrix A, Op(A s) is the asymptotic number
of arithmetical operations required for the matrix-vector multiplication with
the matrix A, and Mem(A) = r(M + N ) is the asymptotic memory require-
ment for the matrix A as N . Thus, Theorem 3.5 states that an almost
linear complexity is achieved for these important quantities.
Let us now consider matrices arising from collocation boundary element
methods (cf. Chapter 2). Let be a Lipschitz boundary, let

 : R,  = 1, . . . , M

be a given system of basis functions, and let


xk , k = 1, . . . , N

be a set of collocation points. If, for example, is the union of plane triangles
(cf. (2.1)),
116 3 Approximation of Boundary Element Matrices

'
N
=  ,
=1

then the most simple collocation method with piecewise constant basis func-
tions
.
1 , y  ,
 (y) =
0,y / 

can be used. In this case, the collocation points xk are the midpoints of the
triangles k . The corresponding collocation matrix A RN M with

ak = K(xk , y)  (y) dsy , k = 1, . . . , N ,  = 1, . . . , M (3.20)

for some kernel function K can be approximated using the following fully
pivoted ACA algorithm.
Algorithm 3.6
1. Initialisation

R0 (x, y) = K(x, y) , S0 = 0 .

2. For i = 0, 1, 2, . . . compute
2.1. pivot element
 
 
 

(ki+1 , i+1 ) = ArgMax  Ri (xk , y)  (y) dsy  ,

 

2.2. normalising constant


1

i+1 = Ri (xki+1 , y) i+1 (y) dsy ,

2.3. new functions



ui+1 (x) = i+1 Ri (x, y) i+1 (y) dsy , vi+1 (y) = Ri (xki+1 , y) ,

2.4. new residual

Ri+1 (x, y) = Ri (x, y) ui+1 (x)vi+1 (y) ,

2.5. new approximation

Si+1 (x, y) = Si (x, y) + ui+1 (x)vi+1 (y) .


3.2 Block Approximation Methods 117

Note that the approximation property (3.17) remains valid for Algorithm
3.6, while the interpolation property (3.18) remains valid only with respect to
the variable x,
Rm (xki , y) = 0, y , i = 1, 2, . . . , m, m = 1, 2, . . . , r . (3.21)
The interpolation property with respect to y changes to the orthogonality

Rm (x, y) i (y)dsy = 0, x , i = 1, . . . , m, m = 1, 2, . . . r. (3.22)

For the analysis of Algorithm 3.6, it is useful to introduce the following func-
tions

U (x) = K(x, y)  (y)dsy ,  = 1, . . . , M ,

having the property ak = U (xk ) , (cf. (3.20)). Using the properties (3.21)
and (3.22), we can conclude that the functions

U (x) = Sr (x, y)  (y)dsy (3.23)

coincide with U for  {1 , . . . , r }. Moreover, all other functions U , i.e.


/ {1 , . . . , r }, are interpolated by the functions (3.23) at points xk ,
for 
k {k1 , . . . , kr }. The approximation A of the collocation matrix A is then
given by the entries

ak ak = Sr (xk , y)  (y)dsy .

In [9], the interpolation theory of multidimensional Chebyshev polynomials


was used in order to prove Theorem 3.5 for collocation matrices.
A straightforward modication of Algorithm 3.6 leads to an algorithm for
the Galerkin matrix A RN M with elements
 
ak = K(x, y)  (y) k (x) dsy dsx (3.24)

for k = 1, . . . , N and  = 1, . . . , M . In (3.24), a system of basis functions


 : R,  = 1, . . . , M ,

may dier from the system of test functions


k : R, k = 1, . . . , N .

The Galerkin matrix A can be approximated using the following ACA algo-
rithm.
118 3 Approximation of Boundary Element Matrices

Algorithm 3.7
1. Initialisation

R0 (x, y) = K(x, y) , S0 = 0 .

2. For i = 0, 1, 2, . . . compute
2.1. pivot element
 
  
 

(ki+1 , i+1 ) = ArgMax  Ri (x, y)  (y) k (x) dsy dsx  ,
 

2.2. normalising constant


1
 
i+1 = Ri (x, y) i+1 (y) ki+1 (x) dsy dsx ,

2.3. new functions



ui+1 (x) = i+1 Ri (x, y) i+1 (y) dsy ,


vi+1 (y) = Ri (x, y) ki+1 (x) dsx ,

2.4. new residual

Ri+1 (x, y) = Ri (x, y) ui+1 (x)vi+1 (y) ,

2.5. new approximation

Si+1 (x, y) = Si (x, y) + ui+1 (x)vi+1 (y) .

The approximation property (3.17) remains valid for Algorithm 3.7, which,
instead of the interpolation property (3.18), possesses the following orthogo-
nalities for i = 1, . . . , m , m = 1, 2, . . . r:

Rm (x, y) i (y) dsy = 0 , x ,


Rm (x, y) ki (x) dsx = 0 , y .

It is practically impossible to compute the elements of the Galerkin matrices


corresponding to (3.24) analytically in a general setting. Even in the simplest
3.2 Block Approximation Methods 119

situation, e.g. for plane triangles k and by using piecewise constant basis func-
tions, some numerical integration is involved (cf. Chapter 4). If both integrals
in (3.24) are computed numerically, i.e.

ak ak = k,kx ,ky K(xk,kx , y,ky )  (y,ky )k (xk,kx ), (3.25)
kx ky

where k,kx , ,ky are the weights of the quadrature rule (including Jacobians)
and xk,kx , y,ky are the corresponding integration points, then not the exact
Galerkin matrix A, but its quadrature approximation A will be further ap-
proximated by ACA. The matrix A is, corresponding to (3.25), a nite sum
of matrices as dened in (3.18), multiplied by degenerated diagonal matrices.
Therefore, Theorem 3.5 remains valid for Galerkin matrices, if the relative
accuracy of the numerical integration (3.25) is higher than the approximation
of the matrix A by ACA.

3.2.2 Algebraic Form of Adaptive Cross Approximation

On the matrix level, all three algorithms formulated in Section 3.2.1 can be
written in the fully pivoted ACA form.

Fully pivoted ACA algorithm

Let A RN M be a given matrix.


Algorithm 3.8
1. Initialisation

R0 = A , S0 = 0 .

2. For i = 0, 1, 2, . . . compute
2.1. pivot element

(ki+1 , i+1 ) = ArgMax |(Ri )k | ,

2.2. normalising constant


 1
i+1 = (Ri )ki+1 i+1 ,

2.3. new vectors

ui+1 = i+1 Ri ei+1 , vi+1 = Ri eki+1 ,

2.4. new residual



Ri+1 = Ri ui+1 vi+1 ,
120 3 Approximation of Boundary Element Matrices

2.5. new approximation



Si+1 = Si + ui+1 vi+1 .

In Algorithm 3.8, ej denotes the jth column of the identity matrix I. The
whole residual matrix Ri is inspected in Step 2.1 of Algorithm 3.8 for its
maximal entry. Thus, its Frobenius norm can easily be computed in this step,
and the appropriate stopping criterion for a given > 0 at step r would be

Rr F AF .

Note that the crosses built from the column-row pairs with the indices ki , i
for i = 1, . . . , r will be computed exactly
 
Sm = aki ,l , l = 1, ..., M ,
k ,l
  i
Sm = ak,li , k = 1, ..., N
k,li

for i = 1, . . . , m , m = 1, . . . , r, while all other elements are approximated.


The number of operations required to generate the approximation A = Sr is
O(r2 N M ). The memory requirement for Algorithm 3.8 is O(N M ), since the
whole matrix A is assumed to be given at the beginning. Thus, Algorithm 3.8
is much faster than a singular value decomposition, but still rather expensive
for large matrices.
The eciency of Algorithm 3.8 will now be illustrated using the follow-
ing examples. First, we consider the matrix A, generated as in (3.1), for the
function
1
K(x, y) = , = 102
+x+y
(cf. (3.6)) on the uniform grid (3.7) for N = M = 32. In Table 3.1 the results of
the application of Algorithm 3.8 are presented. The plot of the initial residual
R0 , i.e. of the function K on the grid, is shown in Fig. 3.10, while the next
three Figs. 3.113.13 show the residual Rk for k = 3, 6, and k = 9. The
three-dimensional plots of the residuals Rk (x, y) are presented on the left,
while the corresponding matrices are depicted on the right. Note that the
exactly computed crosses are shown in black, while the remaining grey scales
are adapted to the actual values of the residuals, and, therefore, are dierent
for all pictures. This example illustrates the behaviour of the fully pivoted
ACA algorithm very clearly. The generation function (3.6) has only a weak
singularity at the corner of the computational domain. This singularity is
not important for the low rank approximation and is completely removed after
the rst two iterations. Further iterations quickly reduce the relative error of
the approximation.
However, if the singularity is on the diagonal as for the function
1
K(x, y) = , = 102 (3.26)
+ (x y)2
3.2 Block Approximation Methods 121

Table 3.1. Fully Pivoted ACA algorithm for the function (3.6)

Step Pivot row Pivot column Pivot value Relative error


1 1 1 1.00 10 +2
3.43 101
2 2 2 7.91 10+0 1.62 101
3 6 6 1.10 10+0 3.66 102
4 28 28 2.25 101 2.26 103
5 3 3 6.10 102 8.40 104
6 13 13 9.87 103 2.28 105
7 4 4 3.91 104 8.85 106
8 20 20 1.02 104 2.69 107
9 9 9 6.32 106 3.30 108
10 32 32 1.97 106 1.13 109

0 8 16 24 32
0 0

8 8

100 16 16
75
30
50
25
0 20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.10. Initial residual for the function (3.6)

0 8 16 24 32
0 0

8 8

16 16
0.2
0.1 30
0
20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.11. Residual R3 for the function (3.6)


122 3 Approximation of Boundary Element Matrices
0 8 16 24 32
0 0

8 8

0.0004 16 16
0.0002
30
0
-0.0002 20
24 24

10
10
20

30 32 32
0 8 16 24 32

Fig. 3.12. Residual R6 for the function (3.6)

0 8 16 24 32
0 0

8 8

16 16

24 24

32 32
0 8 16 24 32

Fig. 3.13. Residual R9 for the function (3.6)

(cf. (3.8)), then, as we have already seen, the situation changes. The results of
the computations are presented in Table 3.2 and in the four Figs. 3.143.17.
The convergence of the ACA algorithm is now slow, the crosses chosen can
not approximate the main diagonal, because they are too small there. This
illustrates once again the necessity of the hierarchical clustering.
The next function we consider,

K(x, y) = sin6 ((2x + y)) , (3.27)

is degenerated corresponding to the denition (3.9), having the exact low rank
r = 7. This function is obviously innitely smooth. But it is oscillating and
the convergence of Algorithm 3.8 is slow again. The convergence is, of course,
better than for the singular function (3.26), but not really sucient. However,
after exactly 7 iterations, the error is equal to computer zero. It means that
Algorithm 3.8 has correctly detected the low rank of the function (3.27). The
numerical results can be seen in Table 3.3 and in the four Figs. 3.183.21,
3.2 Block Approximation Methods 123

Table 3.2. Fully Pivoted ACA algorithm for the function (3.8)

Step Pivot row Pivot column Pivot value Relative error


1 32 32 1.00 10 +2
9.35 101
2 1 1 9.99 10+1 8.68 101
3 17 17 9.69 10+1 7.02 101
4 9 9 9.63 10+1 5.65 101
5 25 25 9.53 10+1 4.00 101
6 5 5 7.32 10+1 3.45 101
7 21 21 7.32 10+1 2.78 101
8 13 13 7.31 10+1 1.92 101
9 29 29 6.24 10+1 1.12 101
10 3 3 2.53 10+1 1.02 101

0 8 16 24 32
0 0

8 8

100 16 16
75
30
50
25
0 20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.14. Initial residual for the function (3.8)

0 8 16 24 32
0 0

8 8

16 16
75
50 30
25
0 20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.15. Residual R3 for the function (3.8)


124 3 Approximation of Boundary Element Matrices
0 8 16 24 32
0 0

8 8

16 16
60
40 30
20
0
20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.16. Residual R6 for the function (3.8)

0 8 16 24 32
0 0

8 8

16 16
20
10 30

0
20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.17. Residual R9 for the function (3.8)

where the initial residual and three sequential residuals (for k = 2, 4, and
k = 6) are shown.

Table 3.3. Fully Pivoted ACA algorithm for the function (3.27)

Step Pivot row Pivot column Pivot value Relative error


1 32 19 1.00 10+0 8.44 101
2 24 3 1.00 10+0 6.51 101
3 28 27 9.69 101 4.81 101
4 20 11 9.68 101 2.16 101
5 30 23 3.05 101 1.30 101
6 22 7 2.88 101 6.19 102
7 26 31 1.17 101 2.91 1015
3.2 Block Approximation Methods 125
0 8 16 24 32
0 0

8 8

1 16 16
0.75
30
0.5
0.25
0 20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.18. Initial residual for the function (3.27)

0 8 16 24 32
0 0

8 8

16 16

0.5 30

0
20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.19. Residual R2 for the function (3.27)

0 8 16 24 32
0 0

8 8

16 16
0.2
30
0

-0.2 20 24 24

10
10
20
32 32
30 0 8 16 24 32

Fig. 3.20. Residual R4 for the function (3.27)


126 3 Approximation of Boundary Element Matrices
0 8 16 24 32
0 0

8 8

16 16
0.1
0.05
30
0
-0.05
-0.1 20 24 24

10
10
20
32 32
30
0 8 16 24 32

Fig. 3.21. Residual R6 for the function (3.27)

Partially pivoted ACA algorithm

If the matrix A has not yet been generated, but if there is a possibility of
generating its entries ak individually, then the following partially pivoted
ACA algorithm can be used for the approximation:
Algorithm 3.9
1. Initialisation

S0 := 0 , I := , J := , c := 0 RN , r := 0 RM .

2. Restart with the next not yet generated row


If

#I = N or #J = M

then STOP else

ki+1 := min {k : k
/ I} , CrossT ype := Row ,

3. Generate cross
3.1 Type of the cross
If

CrossT ype == Row

then
3.1.1 Generate row, Update control vector

a := A eki+1 , I := I {ki+1 } , r := r + |a| ,

3.1.2 Test
If |a| = 0 then GOTO 2. (zero row)
3.2 Block Approximation Methods 127

3.1.3 Row of the residual and the pivot column


i
rv := a (um )ki+1 vm ,
m=1
i+1 := ArgMax |(rv ) | ,

3.1.4 Test
If |rv | = 0 then GOTO 4. (linear depending row)
3.1.5 Normalising constant

i+1 := (rv )1
i+1 ,

3.1.6 Generate column, Update control vector

b := A ei+1 , J := J {i+1 } , c := c + |b| ,

3.1.7 Column of the residual and the pivot row


i
ru := b (vm )i+1 um ,
m=1
ki+2 := ArgMax |(ru )k | ,

3.1.8 New vectors

ui+1 := ru , vi+1 := i+1 rv ,

else
3.2.1 Generate column, Update control vector

b := Aei+1 , J := J {i+1 } , c := c + |b| ,

3.2.2 Test
If |b| = 0 then GOTO 2. (zero column)
3.2.3 Column of the residual and the pivot row


i
ru := b (vm )i+1 um ,
m=1
ki+1 := ArgMax |(ru )k | ,

3.2.4 Test
If |ru | = 0 then GOTO 4. (linear depending column)
3.2.5 Normalising constant

i+1 := (ru )1
ki+1 ,
128 3 Approximation of Boundary Element Matrices

3.2.6 Generate row, Update control vector

a := A eki+1 , I := I {ki+1 } , r := r + |a| ,

3.2.7 Row of the residual and the pivot column


i
rv := a (um )ki+1 vm ,
m=1
i+2 := ArgMax |(rv ) | ,

3.2.8 New vectors

ui+1 := i+1 ru , vi+1 := rv ,

3.3 New approximation



Si+1 := Si + ui+1 vi+1 ,

3.4 Frobenius norm of the approximation


i
Si+1 2F = Si 2F +2 u
i+1 um vm vi+1 + ui+1 F vi+1 F .
2 2

m=1

3.5 Test
If ui+1 F vi+1 F < Si+1 F
then GOTO 4
else i := i + 1, GOTO 3
4. Check control vectors
If i
/ I and ci = 0 then

i := i + 1 , ki+1 = i , CrossT ype = Row , GOTO 3

or
If j
/ J and rj = 0 then

i := i + 1 , i+1 = j , CrossT ype = Column , GOTO 3

else STOP
Algorithm 3.9 starts to compute an approximation for the matrix A by gen-
erating its rst row. Then, the rst column will be chosen automatically. If
a cross is successfully computed, the next row index is prescribed, and the
procedure repeats. If a zero row is generated, then it is not possible to nd
the column, and the algorithm restarts in Step 2. Since the matrix A will not
be generated completely, we can use the norm of its approximant Si to dene
a stopping criterion. This norm can be computed recursively as it is described
in Step 3.4. However, since the whole matrix A will not be generated while
3.3 Bibliographic Remarks 129

using the partially pivoted ACA algorithm, it is necessary to check the control
vectors c and r before stopping the algorithm. Note that these vectors contain
the sums of absolute values of all elements generated. If, for example, there
is some index i / I with ci = 0 then the row i has not yet contributed to
the matrix. It can happen that this row contains relevant information, and,
therefore, we have to restart the algorithm in Step 3. The same argumen-
tation is valid for the columns. The only dierence is, that the crosses after
this restart will be generated on the dierent way: rst prescribed column and
then automatically chosen row. Thus, Algorithm 3.9 can be used not only for
dense matrices but also for reducible, and even for sparse matrices containing
only few non-zero entries.
Algorithm 3.9 requires only O(r2 (N + M )) arithmetical operations and
its memory requirement is O(r(N + M )). Thus, this algorithm is perfect for
large matrices. All approximations of boundary element matrices of the next
chapter will be generated with the help of Algorithm 3.9.

3.3 Bibliographic Remarks

The history of asymptotically optimal approximations of dense matrices is


now about 20 years old. It starts with the paper [93] by V. Rokhlin. The
boundary value problem for a partial dierential equation was transformed
to a Fredholm boundary integral equation of the second kind. The Nystrom
method was used for the discretisation, leading to a dense large system of
linear equations. This system was solved iteratively using the generalised con-
jugate residual algorithm. This algorithm requires matrix-vector multiplica-
tions, which were realised in a fast manner, leading to optimal costs of order
O(N ), or O(N log(N )) for the whole procedure.
Then, the method, which was called Fast Multipole Method, was developed
in the papers [20, 37, 38] for large-scale particle simulations in problems of
plasma physics, uid dynamics, molecular dynamics, and celestial mechanics.
The method was signicantly improved in [39]. Later, the Fast Multipole
Method was successfully applied to a variety of problems. In [39, 84], for
example, we can nd its application to the Laplace equation. In [83], the
authors apply the Fast Multipole Method to the system of linear elastostatics
discretised by the use of a Galerkin Boundary Element Method. Many papers
on the Fast Multipole Method are devoted to the Helmholtz equation, see, for
example, [2, 3], where the problem of acoustic scattering was considered, and
[94].
The next method, introduced in [44], is called Panel clustering. This
method was also applied to the potential problems in [43, 44, 46] and for
the system of linear elastostatics in [51].
A further possibility to solve boundary integral equations on an asymp-
totically optimal way is based on the use of Wavelets. This research starts
with the papers [1, 11], where the dense matrices arising from discretisation
130 3 Approximation of Boundary Element Matrices

of integral operators were transformed into a sparse form using orthogonal


or bi-orthogonal systems of compactly supported wavelets. The cost of the
matrix-vector multiplication was reduced from the straightforward O(N 2 )
number of operations to O(N log(N )) or even O(N ). In two papers [27, 28],
the authors study the stability and the convergence of the wavelet method
for pseudodierential operator equations as well as their fast solution based
on the matrix approximation. A dierent wavelet technique was applied in
[115, 116], leading in [67] to an optimal algorithm with O(N ) complexity. For
recent results, see also [48, 49].
In [36], the authors consider an algebraic approach for the approximation
of dense matrices based on the use of some their original entries.
The Adaptive Cross Approximation method was introduced in [7] for
Nystrom type matrices and in [9] for collocation matrices arising from bound-
ary integral equations. This method uses a hierarchical decomposition (cf.
[41, 42]) of the matrix in a system of blocks. There are several applications of
this method to dierent problems. The ACA was applied to potential prob-
lems in [9, 87]. In [8, 10], the ACA was used for the approximation of matrices
arising from the radiation heat transfer equation. The applications of the ACA
to electromagnetic problems can be found in [16, 63, 64, 65]. In [16] and in
[31], a comparison of the ACA method with the Fast Multipole Method was
given. In [110, 111, 113, 119], the boundary integral equations arising from
the Helmholtz equation were solved using the ACA method.
Finally, we refer to [69], where an algebraic multigrid preconditioners were
constructed for the boundary integral formulations of potential problems ap-
proximated by using the Adaptive Cross Approximation algorithm.
4
Implementation
and Numerical Examples

4.1 Geometry Description

In this section we describe some surfaces which will be used for numerical
examples in the following sections. We show the geometry of these surfaces
and give the number of elements and nodes. Furthermore, the corresponding
cluster structures will be shown.

4.1.1 Unit Sphere

The most simple smooth surface = for R3 is the surface of the unit
sphere,

= x R3 : |x| = 1 . (4.1)

As an appropriate discretisation of , we consider the icosahedron that is


uniformly triangulated before being projected onto the circumscribed unit
sphere. On this way we obtain a sequence {N } of almost uniform meshes
on the unit sphere, which are shown in Figs. 4.14.2 for dierent numbers
of boundary elements N . This sequence allows to study the convergence of
boundary element methods for dierent examples. In Fig. 4.3 the clusters of
the levels 1 and 2 obtained with Alg. 3.1 for N = 1280 are presented. In Fig.
4.4 a typical admissible cluster pair is shown.

4.1.2 TEAM Problem 10

Now we consider the TEAM problem 10 (cf. [75]). TEAM is an acronym for
Testing Electromagnetic Analysis Methods, which is a community that creates
benchmark problems to test nite element analysis software. An exciting coil
is set between two steel channels and a thin steel plate is inserted between the
channels. Thus, the domian consists of four disconnected parts. The coarsest
132 4 Implementation and Numerical Examples
1 1
0.5 0.5
0 0
-0.5 -0.5

-1 -1
1
1 1

0.5
0.5

0
0

-0.5
-0.5
-1
-1 -1
-0.5
-0.5 0
0 0.5
0.5 1

Fig. 4.1. Discretisation of the unit sphere for N = 80 and N = 320

1 1
0.5 0.5
0 0
-0.5 -0.5
-1
1 -1
1
1 1

0.5 0.5

0 0

-0.5 -0.5

-1 -1
-1 -1
-0.5 -0.5
0 0
0.5 0.5
1 1

Fig. 4.2. Discretisation of the unit sphere for N = 1280 and N = 5120

mesh of this model contains N = 4928 elements. We perform two uniform


mesh renements in order to get meshes with N = 19712 and N = 78848
elements, respectively. The initial mesh for N = 4928 is shown in Fig. 4.5.
The speciality of this model is an extremely thin chink (less then 0.2% of the
model size) between the steel plate and the channels. Another speciality of it
is the very ne discretisation close to the edges of the channels. In Fig. 4.6
the clusters of the levels 1 and 2 for N = 4928 are shown.
4.1 Geometry Description 133

-1
-0.5
0
0.5
1
1

0.5

-0.5

-1
-1
-0.5
0
0.5
1
-1
-0.5
0
0.5
1
1

0.5

-0.5

-1
-1
-0.5
0
0.5
1

Fig. 4.3. Clusters of the level 1 and 2 for N = 1280


134 4 Implementation and Numerical Examples

1
0.5
0
-0.5
-1
1

0.5

-0.5

-1
1
0.5
0
-0.5
-1
Fig. 4.4. An admissible cluster pair for N = 1280

4.1.3 TEAM Problem 24

The test rig consists of a rotor and a stator (cf. [92]). The stator poles are tted
with coils as shown in Fig. 4.7. The rotor is locked at 22 with respect to the
stator, providing only a small overlap between the poles. The coarsest mesh of
this model is shown in Fig. 4.7 and contains N = 4640 elements. We perform
two uniform mesh renements in order to get meshes with N = 18560 and
N = 74240 elements, respectively. This model consists of four independent
parts. In Fig. 4.8 the clusters of the levels 1 and 2 for N = 4928 are shown.

4.1.4 Relay

The simply connected domain shown in Fig. 4.9 will be considered as model
for a relay. The speciality of this domain is the small air gap between the
kernel and the armature. Its surface contains N = 4944 elements. We perform
two uniform mesh renements in order to get meshes with N = 19776 and
N = 79104 elements, respectively. In Fig. 4.10 the corresponding clusters can
be seen.
4.2 Laplace Equation 135

50

100
0

-50
0
-100
100
-50
0
50 -100

100

Fig. 4.5. TEAM problem 10 for N = 4928

4.1.5 Exhaust manifold

A simplied model of an exhaust manifold is shown in Fig. 4.11. Its surface


contains N = 2264 elements. We perform two uniform mesh renements in
order to get meshes with N = 9056 and N = 36224 elements, respectively.
The clusters of the level 1 and 2 are presented in Fig. 4.12.

4.2 Laplace Equation


In this section we consider some numerical examples for the Laplace equation

u(x) = 0 (4.2)

where u is an analytically given harmonic function.

4.2.1 Analytical solutions

Particular solutions of the Laplace equation (4.2) are, for example,


136 4 Implementation and Numerical Examples

50

100
0

-50
0
-100
100
-50
0
50 -100

100

50

100
0

-50
0
-100
100
-50
0
50 -100

100

Fig. 4.6. Clusters of the level 1 and 2, TEAM problem 10 for N = 4928
4.2 Laplace Equation 137

40
20
100
0
-20 50
-40
-100
100 0
-50
0 -50
50
-100
100

Fig. 4.7. TEAM problem 24 for N = 4640

 
k1 ,k2 ,k3 (x) = exp k1 x1 + k2 x2 + k3 x3 , k12 + k22 + k32 = 0 ,
 
0,k2 ,k3 (x) = (a + b x1 ) exp k2 x2 + k3 x3 , k22 + k32 = 0 , (4.3)
0,0,0 (x) = (a1 + b1 x1 )(a2 + b2 x2 )(a3 + b3 x3 ) .

Here, k1 , k2 , and k3 are arbitrary complex numbers satisfying the correspond-


ing conditions. Thus, dierent products of real valued linear, exponential,
trigonometric and hyperbolic functions can be chosen for numerical tests, if
we consider interior boundary value problems in a three-dimensional, open,
and bounded domain R3 . Furthermore, the fundamental solution of the
Laplace equation (cf. (1.7))
1 1
u (x, y) = , (4.4)
4 |x y|
can be considered as a particular solution of the Laplace equation for both,
interior (x , y e = R3 \ ), and exterior (x e , y ) boundary
value problems.

4.2.2 Discretisation, Approximation and Iterative Solution

We solve the interior Dirichlet, Neumann and mixed boundary value problems,
as well as an interface problem using a Galerkin boundary element method
138 4 Implementation and Numerical Examples

40
20
100
0
-20 50
-40
-100
100 0
-50
0 -50
50
-100
100

40
20
100
0
-20 50
-40
-100
100 0
-50
0 -50
50
-100
100

Fig. 4.8. Clusters of the level 1 and 2, TEAM problem 24 for N = 4640

(cf. Section 2). Piecewise linear basis functions  will be used for the approx-
imation of the Dirichlet datum 0int u and piecewise constant basis functions
k for the approximation of the Neumann datum 1int u. We will use the L2
projection for the approximation of the given part of the Cauchy data. The
boundary element matrices Vh , Kh and Dh are generated in approximative
form using the partially pivoted ACA algorithm with a variable relative ac-
curacy 1 . The resulting systems of linear equations are solved using some
variants of the Conjugate Gradient method (CGM) with or without precon-
ditioning up to a relative accuracy 2 = 108 .
4.2 Laplace Equation 139

-5

10

7.5

2.5

0
0
5

10

15

Fig. 4.9. Relay for N = 4944

4.2.3 Generation of Matrices

The most important matrices to be generated while using the Galerkin bound-
ary element method are the single layer potential matrix Vh and the double
layer potential matrix Kh , having the entries
 
1 1
Vh [k, ] = dsy dsx for k,  = 1, . . . , N , (4.5)
4 |x y|
k 

and
 
1 (x y, n(y))
Kh [k, j] = j (y) dsy dsx (4.6)
4 |x y|3
k

for k = 1, . . . , N and j = 1, . . . , M (cf. Section 2.3). The analytical evaluation


of these integrals seems to be impossible in general. Thus, some numerical
quadrature rules have to be involved. These quadrature formulae produce
some additional numerical errors in the whole procedure. However, it is pos-
sible to compute the inner integrals of the entries (4.5)(4.6), namely the
integrals

1 1
S(, x) = dsy (4.7)
4 |x y|

and
140 4 Implementation and Numerical Examples

-5

10

7.5

2.5

0
0
5

10

15
5

-5

10

7.5

2.5

0
0
5

10

15

Fig. 4.10. Clusters of the level 1 and 2, Relay for N = 4944


4.2 Laplace Equation 141

0.05

-0.05

-0.1 0.1

0
0
0.1

0.2
0 2

Fig. 4.11. Exhaust manifold for N = 2264


1 (x y, n(y))
Di (, x) = ,i (y) dsy for i = 1, 2, 3 . (4.8)
4 |x y|3

Here, R3 is a plane triangle having the nodes x1 , x2 , x3 , and ,i is the


piecewise linear function (2.6) which corresponds to the node xi , i.e.

,i (xj ) = ij , j = 1, 2, 3 .

The explicit form of these functions can be seen in Appendix C.2. By the use
of the functions (4.7)(4.8), the matrix entries (4.5)(4.6) can be rewritten as
follows:
 
1 
Vh [k, ] = S( , x) dsx + S(k , x) dsx (4.9)
2
k 

for k,  = 1, . . . , N and
 
Kh [k, j] = Di : xi ( )=xj (, x) dsx (4.10)
I(j) k

for k = 1, . . . , N , j = 1, . . . , M . Note that we have used the symmetrisation


for the entries of the single layer potential in (4.9). In (4.10), the summation
takes place over all triangles containing the node xj . The index i {1, 2, 3}
of the function Di to be integrated over k , is chosen in such a way, that
the node i of the triangle is xj . We are not going to compute the latter
integrals in (4.9)(4.10) in a closed form. Thus, some numerical integration
142 4 Implementation and Numerical Examples

0.05

-0.05

-0.1 0.1

0
0
0.1

0.2
0 2

0.05

-0.05

-0.1 0.1

0
0
0.1

0.2
0 2

Fig. 4.12. Clusters of the level 1 and 2, Exhaust manifold for N = 2264
4.2 Laplace Equation 143

has to be involved. If we denote by NG the number of integration points, by


m the weights of the quadrature, and by x,m the integration points within
the triangle , then the exact entries (4.9)(4.10) can be approximated by

1 
Ng  
Vh [k, ] m S( , xk ,m ) + S(k , x ,m ) (4.11)
2 m=1

for k,  = 1, . . . , N and

 
Ng
Kh [k, j] m Di : xi ( )=xj (, xk ,m ) (4.12)
I(j) m=1

for k = 1, . . . , N and j = 1, . . . , M . For our numerical tests, we have used a


7-point quadrature rule, see Appendix C.1 for more details.

4.2.4 Interior Dirichlet Problem

Here we solve the Laplace equation (4.2) together with the boundary condition
0int u(x) = g(x) for x , where is a given surface. The variational problem
(1.16)
   1  
V t, w = I + K g, w for all w H 1/2 ( )
2

is discretised, which leads to a system of linear equations (2.16)


1 
Vh 
t = Mh + Kh g .
2
Since the matrix Vh is symmetric and positive denite, the classical Conjugate
Gradient method (CGM) is used as solver.

Unit sphere

The analytical solution is taken in the form (4.3). For x = (x1 , x2 , x3 )


we consider the harmonic function

u(x) = Re 0,2,2 = (1 + x1 ) exp(2 x2 ) cos(2 x3 ) (4.13)

as a test solution of the Laplace equation (4.2). The results of the computa-
tions are shown in Tables 4.1 and 4.2. The number of boundary elements is
listed in the rst column of these tables. The second column contains the num-
ber of nodes, while in the third column of Table 4.1, the prescribed accuracy
for the ACA algorithm for the approximation of both matrices Kh RN M
and Vh RN N is given. The fourth column of this table shows the memory
requirements in MByte for the approximate double layer potential matrix Kh .
The quality of this approximation in percentage of the original matrix is listed
144 4 Implementation and Numerical Examples

Table 4.1. ACA approximation of the Galerkin matrices Kh and Vh

N M 1 MByte(Kh ) % MByte(Vh ) %
2
80 42 1.0 10 0.03 97.8 0.02 48.7
320 162 1.0 103 0.26 65.6 0.21 27.2
1280 642 1.0 104 2.45 39.1 1.94 15.5
5120 2562 1.0 105 20.05 20.0 15.72 7.9
20480 10242 1.0 106 149.19 9.3 115.83 3.6
81920 40962 1.0 107 1085.0 4.2 837.50 1.6

Fig. 4.13. Partitioning of the BEM matrices for N = 5120 and M = 2562

in the next column. The corresponding values for the single layer potential
matrix Vh can be seen in the columns six and seven. The partitioning of the
matrix for N = 5120 as well as the quality of the approximation of single
blocks is shown in Fig. 4.13. The left diagram in Fig. 4.13 shows the sym-
metric single layer potential matrix Vh , while the rectangular double layer
potential matrix Kh is depicted in the right diagram. The legend indicates
the percentage of memory needed for the ACA approximation of the blocks
compared to the full memory. Further numerical results are shown in Table
4.2. The third column in Table 4.2 shows the number of Conjugate Gradient
iterations needed to reach the prescribed accuracy 2 . The relative L2 error
for the Neumann datum
1int u th L2 ( )
Error1 = (4.14)
1int uL2 ( )
is given in the fourth column. The next column represents the rate of con-
vergence for the Neumann datum, i.e. the quotient between the errors in two
4.2 Laplace Equation 145

Table 4.2. Accuracy of the Galerkin method, Dirichlet problem

N M Iter Error1 CF1 Error2 CF2


1 0
80 42 22 9.34 10 7.29 10
320 162 32 5.06 101 1.85 3.29 101 22.16
1280 642 45 2.23 101 2.27 3.53 102 9.32
5120 2562 56 1.04 101 2.14 3.54 103 9.97
20480 10242 72 5.11 102 2.03 4.11 104 8.61
81920 40962 94 2.53 102 2.02 4.30 105 9.56

consecutive lines of column four. Finally, the last two columns show the ab-
solute error (cf. (2.19)) in a prescribed inner point x ,

Error2 = |u(x ) u(x )| , x = (0.250685, 0.417808, 0.584932) , (4.15)

for the value u(x ) obtained using an approximate representation formula


(2.18). Table 4.2 obviously shows a linear convergence O(N 1/2 ) = O(h) of
the Galerkin boundary element method for the Neumann datum in the L2
norm. It should be noted that this theoretically guaranteed convergence order
can already be observed when approximating the matrices Kh and Vh with
much less accuracy as it was used to obtain the results in Table 4.1. However,
this high accuracy is necessary in order to be able to observe the third order
(or even better) pointwise convergence rate within the domain presented
in the last two columns of Table 4.2. Especially for N = 81920, a very high
accuracy of 1 = 1.0 107 of the ACA approximation is necessary.
In Figs. 4.14-4.15, the given Dirichlet datum and computed Neumann da-
tum for N = 5120 boundary elements and M = 2562 nodes are presented.
The numerical curve obtained when using an approximate representation
formula in comparison with the curve of the exact values (4.13) along the line

0.3 0.6
x(t) = 0.5 + t 1.0 , 0 t 1 (4.16)
0.7 1.4

inside of the domain is shown in Fig. 4.16 for N = 80 (left plot) and
for N = 320 (right plot). The values of the numerical solution u and of the
analytical solution u have been computed in 512 points uniformly placed on
the line (4.16). The thick dashed line represents in these gures the course
of the analytical solution (4.13), while the thin solid line shows the course of
the numerical solution u. The values of the variable x1 along the line (4.16)
are used for the axis of abscissas. The next Fig. 4.17 shows these curves for
N = 1280 (left plot) and on the zoomed interval [0.2, 0.3] (right plot) with
respect to the variable x1 in order to see the dierence between them. It is
almost impossible to see any optical dierence between the numerical and
analytical curves for higher values of N . Note that the point x in (4.15) is
146 4 Implementation and Numerical Examples

5.8149E02

0.5

0 1.4856E02

-1 -0.5
-0.5
-1
0 -1
-0.5
0.5 0
0.5
1 1
2.8438E02

Fig. 4.14. Given Dirichlet datum for the unit sphere, N = 5120

-1 3.6811E03
-0.5
0
0.5
1
1

0.5

7.7897E02
0

-0.5

-1
-1
-0.5
0
0.5
1 2.1232E03

Fig. 4.15. Computed Neumann datum for the unit sphere, N = 5120
4.2 Laplace Equation 147

0
0
-5 -2.5

-10 -5

-15 -7.5

-10
-20
-12.5
-25
-15
-0.3 -0.2 -0.1 0 0.1 0.2 0.3 -0.3 -0.2 -0.1 0 0.1 0.2 0.3

Fig. 4.16. Numerical and analytical curves for N = 80 and N = 320, Dirichlet
problem

0 -10
-2.5
-11
-5
-12
-7.5
-13
-10

-12.5 -14

-15 -15
-0.3 -0.2 -0.1 0 0.1 0.2 0.3 0.2 0.22 0.24 0.26 0.28 0.3

Fig. 4.17. Numerical and analytical curves for N = 1280, Dirichlet problem

chosen close to the minimum of the function u along the line, where the error
seems to reach its maximum.

TEAM Problem 10

The analytical solution is now taken in the form (4.4) with y = (0, 60, 50) .
The results of the computations are shown in Tables 4.3 and 4.4. The third

Table 4.3. ACA approximation of the Galerkin matrices Kh and Vh , Dirichlet


problem

N M 1 MByte(Kh ) % MByte(Vh ) %
2
4928 2470 1.0 10 14.03 15.11 7.46 4.0
19712 9862 1.0 103 131.85 8.9 65.22 2.2
78848 39430 1.0 104 1190.00 5.0 604.56 1.3

column shows the number of iterations required by the Conjugate Gradient


method with diagonal preconditioning

D = diag | | ,  = 1, . . . , N . (4.17)
148 4 Implementation and Numerical Examples

Table 4.4. Accuracy of the Galerkin method, Dirichlet problem

N M Iter Error1 CF1 Error2 CF2


1 5
4928 2470 91 6.02 10 2.55 10
19712 9862 183 1.99 101 3.02 4.69 106 5.44
78848 39430 248 1.13 101 1.76 5.22 107 8.98

The last two columns of Table 4.4 show the absolute error (cf. (2.19)) in a
prescribed inner point x ,

Error2 = |u(x ) u(x )| , x = (0.0, 90.0, 49.7943) , (4.18)

for the value u(x ) obtained using the approximate representation formula
(2.18). All other entries in these tables have the same meaning as those dis-
played in Tables 4.1-4.2. In Figs. 4.184.19 the given Dirichlet datum and the
computed Neumann datum for N = 4928 boundary elements and M = 2470
nodes are presented. The numerical curve obtained when using the approxi-

8.4120E03
-100
-50
0
50
100

4.3880E03
50
0

0 -100
-10

-50
0

100
3.6407E04

Fig. 4.18. Given Dirichlet datum for the TEAM problem 10

mate representation formula in comparison with the curve of the exact values
(4.4) along the line
4.2 Laplace Equation 149

4.6686E04
-100
-50
0
50
100

3.7515E05
50
0

0 -100
-10

-50
0

100
3.9183E04

Fig. 4.19. Computed Neumann datum for the TEAM problem 10


0.0 0.0
x(t) = 90.0 + t 0.0 , 0 t 1 (4.19)
49.99 99.98

inside of the domain is shown in Fig. 4.20 for N = 4928. The values of the
numerical solution u and of the analytical solution u have been computed in
512 points uniformly placed on the line (4.19). The thick dashed line represents
the course of the analytical solution (4.4), while the thin solid line shows the
course of the numerical solution u. The values of the variable x3 along the
line (4.19) are used for the axis of abscissas. The right plot in this gure
shows a zoomed picture on the interval [40, 49.99] with respect to the variable
x3 . Note that the end of the line (4.19) is very close to the boundary of the
domain, which lies at x3 = 50. Thus, the loss of accuracy of the numerical
representation formula close to the boundary can be clearly seen. The courses
of the numerical solutions obtained for N = 19712 (left plot) and N = 78848
(right plot) are shown in Fig. 4.21. They do not distinguish optically from the
course of the analytical solution on the whole interval. Thus, we show only
the zoomed pictures.

4.2.5 Interior Neumann Problem

We consider the interior Neumann boundary value problem with the boundary
condition 1int u(x) = g(x) for x . The variational problem (1.29)
150 4 Implementation and Numerical Examples

0.0025 0.002675

0.00265

0.002 0.002625

0.0026
0.0015 0.002575

0.00255
0.001
0.002525

-40 -20 0 20 40 40 42 44 46 48 50

Fig. 4.20. Numerical and analytical curves for N = 4928, Dirichlet problem

0.00266
0.00264
0.00264
0.00262
0.00262
0.0026 0.0026

0.00258 0.00258

0.00256 0.00256

0.00254 0.00254

0.00252 0.00252
40 42 44 46 48 50 40 42 44 46 48 50

Fig. 4.21. Numerical and analytical curves for N = 19712 and N = 78848

       1    
Du, v + u, 1 v, 1 = I K  g, v + v, 1
2

1/2
for all v H ( ), is discretised and leads to a system of linear equations
(cf. (2.31))
  1 
Dh + a a u  = Mh Kh g + a ,
2
where the vector a RM contains the integrals of the piecewise linear basis
functions  over the surface ,
1
a = |supp  | ,  = 1, . . . , M .
3
The symmetric and positive denite system is then solved using a Conjugate
Gradient method up to the relative accuracy 2 = 108 .

Unit Sphere

We consider again the harmonic function (4.13) as the exact solution. The
results for the ACA approximation of the matrix Dh RM M are presented
in Table 4.5. The corresponding results for the matrix Kh are identical to
those already presented in Table 4.1. Note that in this example, the Galerkin
matrix with piecewise linear basis functions for the hypersingular operator is
generated according to (1.9). The partitioning of the matrices Kh and Dh for
4.2 Laplace Equation 151

Table 4.5. ACA approximation of the Galerkin matrix Dh , Neumann problem

N M 1 MByte(Dh ) %
2
80 42 1.0 10 0.01 51.2
320 162 1.0 103 0.10 48.1
1280 642 1.0 104 1.02 32.3
5120 2562 1.0 105 8.67 17.3
20480 10242 1.0 106 64.75 8.09
81920 40962 1.0 107 446.13 3.49

N = 5120 and M = 2562 as well as the quality of the approximation of the


single blocks are shown in Fig. 4.22. The left diagram in Fig. 4.22 shows the

Fig. 4.22. Partitioning of the BEM matrices for N = 5120 and M = 2562.

rectangular double layer potential matrix KhT RM N , while the symmetric


hypersingular matrix Dh is depicted in the right diagram. The legend indicates
the percentage of memory needed for the ACA approximation of the blocks
compared to the full memory. The accuracy obtained for the whole numerical

Table 4.6. Accuracy of the Galerkin method, Neumann problem

N M Iter Error1 CF1 Error2 CF2


1 1
80 42 10 7.63 10 9.37 10
320 162 14 2.72 101 2.81 1.95 100
1280 642 17 6.02 102 4.52 4.27 101 4.56
5120 2562 25 1.37 102 4.39 1.01 101 4.22
20480 10242 35 3.28 103 4.18 2.49 102 4.08
81920 40962 51 8.02 104 4.09 6.13 103 4.06

procedure is presented in Table 4.6. The numbers in this table have the same
meaning as in Table 4.2. The third column shows the number of iterations
required by the Conjugate Gradient method without preconditioning. Note

that the convergence of the Galerkin method for the unknown Dirichlet datum
in the L2 norm

\[
\mathrm{Error1} = \frac{\|\gamma_0^{\mathrm{int}} u - u_h\|_{L_2(\Gamma)}}
{\|\gamma_0^{\mathrm{int}} u\|_{L_2(\Gamma)}} \tag{4.20}
\]

is now quadratic corresponding to the error estimate (2.32). Also in the inner
point x* (cf. (4.15)) we now observe quadratic convergence (7th column), as
predicted in (2.34), instead of the cubic order obtained for the Dirichlet
problem (cf. Table 4.2). This fact is clearly illustrated in Figs. 4.23–4.24, where
the convergence of the boundary element method can be seen visually. The
results obtained for N = 80 are plotted in Fig. 4.23 (left plot). The numerical


Fig. 4.23. Numerical and analytical curves for N = 80 and N = 320, Neumann
problem

curve in Fig. 4.23 (right plot) is noticeably better than the previous one. However,
its quality is not as high as that of the corresponding curve obtained by solving
the Dirichlet problem (cf. Fig. 4.16). The next Fig. 4.24 shows the same curves
for N = 1280 and N = 5120. Here, we do not need to zoom the pictures in
order to see the difference between the numerical and the analytical curves.


Fig. 4.24. Numerical and analytical curves for N = 1280 and N = 5120, Neumann
problem

Exhaust Manifold

The analytical solution is taken in the form (4.4) with y = (0, 0, 0.06) . The
results of the computations are reported in Tables 4.7 and 4.8. The third

Table 4.7. ACA approximation of the Galerkin matrices K_h and V_h, Neumann problem

N      M      ε₁          MByte(K_h)   %     MByte(V_h)   %
2264   1134   1.0 · 10⁻³   6.99        35.7   3.94        10.1
9056   4530   1.0 · 10⁻⁴   62.27       19.9   34.99       5.6
36224  18114  1.0 · 10⁻⁵   500.66      10.0   282.44      2.8

Table 4.8. Accuracy of the Galerkin method, Neumann problem

N      M      Iter  Error1        CF1   Error2        CF2
2264   1134   72    2.12 · 10⁻²   –     2.69 · 10⁻³   –
9056   4530   110   4.95 · 10⁻³   4.3   5.36 · 10⁻⁴   5.0
36224  18114  163   1.13 · 10⁻³   4.4   1.07 · 10⁻⁴   5.0

column shows the number of iterations required by the Conjugate Gradient


method with diagonal preconditioning,

\[
D = \operatorname{diag}\bigl(|\mathrm{supp}\,\varphi_\ell|\bigr)\,, \qquad \ell = 1,\dots,M\,. \tag{4.21}
\]
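A minimal sketch, under the assumption that the support areas have already been accumulated from the mesh, of how such a diagonal (Jacobi-type) preconditioner can be supplied to the Conjugate Gradient method; the names A, b and supp_area are placeholders.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Diagonal preconditioning as in (4.21): D = diag(|supp phi_l|).
def jacobi_preconditioned_cg(A, b, supp_area, tol=1e-8):
    d = np.asarray(supp_area, dtype=float)
    M = LinearOperator(A.shape, matvec=lambda v: v / d)   # action of D^{-1}
    x, info = cg(A, b, M=M, rtol=tol)    # SciPy >= 1.12; use tol= on older versions
    if info != 0:
        raise RuntimeError("CG did not converge")
    return x
```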

The last two columns of Table 4.8 show the absolute error (cf. (2.34)) in a
prescribed inner point x* ∈ Ω,
\[
\mathrm{Error2} = |u(x^*) - \tilde u(x^*)|\,, \qquad x^* = (0.0112524,\ 0.1,\ 0.05)^\top\,, \tag{4.22}
\]
for the value \tilde u(x^*) obtained using an approximate representation formula
(2.33). All other entries in these tables have the usual meaning. The quadratic
convergence of the Dirichlet datum in the L2 norm as well as the quadratic
convergence (or even slightly better) in the inner point x can be observed
again.
In Figs. 4.25–4.26 the computed Dirichlet datum and the given Neumann
datum for N = 2264 boundary elements and M = 1134 nodes are presented.
The numerical curve obtained when using an approximate representation
formula in comparison with the curve of the exact values (4.4) along the line
\[
x(t) = \begin{pmatrix} -0.05 \\ 0.1 \\ 0.05 \end{pmatrix}
+ t \begin{pmatrix} 0.2 \\ 0.0 \\ 0.0 \end{pmatrix}\,, \qquad 0 \le t \le 1\,, \tag{4.23}
\]

Fig. 4.25. Computed Dirichlet datum, exhaust manifold


Fig. 4.26. Given Neumann datum, exhaust manifold



inside of the domain Ω is shown in Fig. 4.27 for N = 2264 and N = 9056,
while Fig. 4.28 shows the results obtained for N = 36224 (left plot). The right
plot in this figure presents the same curve on a zoomed interval [−0.05, 0.05].
The values of the numerical solution ũ and of the analytical solution u have
been computed in 512 points uniformly placed on the line (4.23). The thick
dashed line represents the course of the analytical solution (4.4), while the
thin solid line shows the course of the numerical solution ũ. The values of the
variable x₁ along the line (4.23) are used for the axis of abscissas. Again,
a quite high accuracy of the Galerkin BEM can be observed.


Fig. 4.27. Numerical and analytical curves for N = 4944 and N = 19776


Fig. 4.28. Numerical and analytical curves for N = 36224

4.2.6 Interior Mixed Problem


Here, we consider the interior mixed boundary value problem (cf. (1.34))
\[
\begin{aligned}
-\Delta u(x) &= 0 && \text{for } x \in \Omega\,,\\
\gamma_0^{\mathrm{int}} u(x) &= g(x) && \text{for } x \in \Gamma_D\,,\\
\gamma_1^{\mathrm{int}} u(x) &= f(x) && \text{for } x \in \Gamma_N\,,
\end{aligned} \tag{4.24}
\]
where the function g is the interior trace of the exact solution on the boundary
Γ, while the function f denotes its interior conormal derivative on Γ. After

discretisation, the variational problem (1.35) leads to the skew-symmetric
system of linear equations (cf. (2.36))
\[
\begin{pmatrix} V_h & -K_h \\ K_h^\top & D_h \end{pmatrix}
\begin{pmatrix} \tilde t \\ \tilde u \end{pmatrix}
=
\begin{pmatrix} -V_h & \tfrac{1}{2} M_h + K_h \\[2pt]
\tfrac{1}{2} M_h^\top - K_h^\top & -D_h \end{pmatrix}
\begin{pmatrix} \tilde f \\ \tilde g \end{pmatrix}.
\]

The matrix of the single layer potential Vh and the matrix of the double
layer potential Kh are generated in an approximative form using the partially
pivoted ACA algorithm 3.9 with increasing accuracy. The system of linear
equations (2.41) is then solved using the Conjugate Gradient method for the
Schur complement system as in (2.37) up to the relative accuracy ε₂ = 10⁻⁸.
Note that this realisation requires an additional solution of a linear system
with the single layer potential matrix in each iteration step. This system is
solved again using a Conjugate Gradient method up to the relative accuracy
ε₂ = 10⁻⁸. The matrix of the hypersingular operator is not generated explic-
itly. Its multiplication with a vector is realised using the matrix of the single
layer potential as it is described in (2.27).
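The sketch below illustrates this nested iteration under the sign conventions of the block system reconstructed above. It is my own illustration, not the book's implementation: V_h, K_h, the right-hand side blocks and the callback apply_D (realising D_h via the single layer matrix, cf. (2.27)) are assumed placeholders.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Schur-complement solve for the mixed problem (sketch, assumed inputs):
#   V_h     : (N, N) single layer potential matrix
#   K_h     : (N, M) double layer potential matrix
#   apply_D : callable v -> D_h v (applied via the single layer matrix)
#   rhs_t, rhs_u : right-hand side blocks of the linear system
def solve_mixed_schur(V_h, K_h, apply_D, rhs_t, rhs_u, tol=1e-8):
    N, M = K_h.shape

    def solve_V(b):
        # inner CG solve with the single layer potential matrix in every step
        x, info = cg(V_h, b, rtol=tol)      # SciPy >= 1.12; use tol= on older versions
        assert info == 0
        return x

    # Schur complement S = D_h + K_h^T V_h^{-1} K_h acting on the Dirichlet unknowns
    def schur_matvec(u):
        return apply_D(u) + K_h.T @ solve_V(K_h @ u)

    S = LinearOperator((M, M), matvec=schur_matvec)
    b = rhs_u - K_h.T @ solve_V(rhs_t)      # condensed right-hand side
    u, info = cg(S, b, rtol=tol)
    assert info == 0
    t = solve_V(rhs_t + K_h @ u)            # recover the Neumann unknowns
    return t, u
```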

Unit Sphere

In the first example, we prescribe the Dirichlet datum on the upper part of
the unit sphere, Γ_D = {x ∈ Γ : x₃ ≥ 0}, and the Neumann datum on the lower
part, Γ_N = {x ∈ Γ : x₃ < 0}. The analytical solution is taken in the form
(4.13) and the numerical results are presented in Tables 4.9–4.10. Of course,

Table 4.9. Accuracy of the Galerkin method on the boundary, mixed problem

N      M      Iter1  Iter2   Error1,D      CF_D   Error1,N      CF_N
80     42     8      18-19   5.97 · 10⁻¹   –      8.78 · 10⁻¹   –
320    162    11     25-27   2.33 · 10⁻¹   2.56   4.54 · 10⁻¹   1.93
1280   642    16     36-38   5.00 · 10⁻²   4.66   2.11 · 10⁻¹   2.15
5120   2562   24     49-53   1.14 · 10⁻²   4.39   1.02 · 10⁻¹   2.07
20480  10242  36     64-75   2.73 · 10⁻³   4.18   5.07 · 10⁻²   2.01

the results for the ACA approximation of the matrices Vh and Kh are the
same as for the Dirichlet problem (cf. Table 4.1). The accuracy obtained for
the mixed boundary value problem is presented in Table 4.9. The numbers in
this table have the following meaning: The third column in Table 4.9 shows
the number of Conjugate Gradient iterations without preconditioning needed
to reach the prescribed accuracy ε₂ for the Schur complement, while the fourth
column indicates the numbers of Conjugate Gradient iterations without pre-
conditioning needed in each iteration step for the system with the single layer
potential matrix. Thus, the total number of iterations when solving a mixed

boundary value problem is much higher than for solving a pure Dirichlet or
Neumann boundary value problem. The error for the Dirichlet datum and the
convergence factor are shown in columns 5 and 6, while the corresponding
error for the Neumann datum can be seen in columns 7 and 8. Note that
the convergence of the Galerkin method for the unknown Dirichlet datum in
the L2 norm (4.20) is quadratic (6th column), while the convergence of the
Neumann datum (4.14) is linear (8th column). Corresponding to Table 4.1,
the matrices K_h and V_h together with some additional memory will require
more than 2 GByte of memory for N = 81920. Thus, we are not able to store
both these matrices on a regular workstation. The error in the inner point x*

Table 4.10. Accuracy of the Galerkin method in the inner point x , mixed problem

N      M      Error2        CF2
80     42     5.82 · 10⁰    –
320    162    4.94 · 10⁻¹   11.77
1280   642    1.43 · 10⁻¹   3.46
5120   2562   3.68 · 10⁻²   3.88
20480  10242  9.31 · 10⁻³   3.95

(cf. (4.15)) can be observed in Table 4.10. Here, quadratic convergence can be
observed, at least asymptotically.

TEAM Problem 10

Here, we consider the mixed boundary value problem for the Laplace equation
in the domain presented in Fig. 4.5. The Dirichlet part of the boundary is
dened by D = {x : x3 = 50}. Thus, Dirichlet boundary conditions
are given only on the top of the coil. The analytical solution is taken in
the form (4.4) with y = (0.0, 60.0, 50.0)
/ and the numerical results are
presented in Table 4.11. The meaning of the values presented in this table is

Table 4.11. Accuracy of the Galerkin method on the boundary, TEAM problem
10
N      M      Iter1  Iter2     Error1,D      CF_D   Error1,N      CF_N
4928   2470   1061   90-95     3.04 · 10⁻²   –      6.04 · 10⁻¹   –
19712  9862   1732   118-123   7.60 · 10⁻³   4.00   2.03 · 10⁻¹   2.98

the same as in Table 4.9. The third column in Table 4.11 shows the number of
Conjugate Gradient iterations with diagonal preconditioning (4.21) needed to
reach the prescribed accuracy ε₂ for the Schur complement, while the fourth

column indicates the number of Conjugate Gradient iterations with diagonal


preconditioning (4.17) needed in each iteration step for the system with the
single layer potential matrix. Note that the number of iterations is rather
high for this geometrically very complicated example. Thus, a more effective
preconditioning is required. The course of the numerical solution inside the
domain along the line (4.19) is very similar to that presented in Figs.
4.20–4.21. The Cauchy data can be seen in Figs. 4.18–4.19.

TEAM Problem 24
Here, we consider the mixed boundary value problem for the Laplace equation
in the domain presented in Fig. 4.7. The Dirichlet part of the boundary is
defined by Γ_D = {x ∈ Γ : x₃ ≥ 0}. Thus, the Dirichlet boundary condition is
given on the upper part of the symmetric surface Γ. The analytical solution
is taken in the form (4.4) with y = (0.0, 80.0, 20.0)^⊤ ∉ Ω, and the numerical
results are presented in Table 4.12. The meaning of the values presented in this

Table 4.12. Accuracy of the Galerkin method on the boundary, TEAM problem
24
N      M      Iter1  Iter2    Error1,D      CF_D   Error1,N      CF_N
4640   2320   48     72-78    2.56 · 10⁻²   –      3.08 · 10⁻¹   –
18560  9280   71     93-102   4.67 · 10⁻³   5.48   1.48 · 10⁻¹   2.08

table is the same as in Table 4.11. We have used the same preconditioning as
in the previous example. The numbers of iterations reported in the third and
fourth columns of the table are now much smaller. In Figs. 4.29–4.30 the
Dirichlet datum and the Neumann datum for N = 4944 boundary elements
and M = 2474 nodes are presented. The numerical curve obtained when
using an approximate representation formula in comparison with the curve of
the exact values (4.4) along the line

\[
x(t) = \begin{pmatrix} 0.0 \\ -100.0 \\ 0 \end{pmatrix}
+ t \begin{pmatrix} 0.0 \\ 40.0 \\ 0 \end{pmatrix}\,, \qquad 0 \le t \le 1\,, \tag{4.25}
\]
inside of the domain Ω is shown in Fig. 4.31 for N = 4640. The values of the
numerical solution ũ and of the analytical solution u have been computed in
512 points uniformly placed on the line (4.25). The thick dashed line represents
the course of the analytical solution (4.4), while the thin solid line shows the
course of the numerical solution ũ. The values of the variable x₂ along the
line (4.25) are used for the axis of abscissas. The course of the numerical
approximation cannot be visually distinguished from the exact solution in the
left plot of Fig. 4.31. Thus, we show the zoom of these curves on the interval
[−84, −76] (right plot).


Fig. 4.29. Computed Dirichlet datum for the TEAM problem 24


Fig. 4.30. Computed Neumann datum for the TEAM problem 24




Fig. 4.31. Numerical and analytical curves for N = 4640, Mixed problem

4.2.7 Inhomogeneous Interface Problem

The purpose of this subsection is twofold. The first goal is to illustrate the
numerical solution of the Poisson equation by the use of a particular solution
(cf. Subsection 1.1.7). The second goal is to illustrate the numerical solution
for an interface problem by Boundary Element Methods (cf. Subsection 1.1.8).
We consider the following interface problem
\[
-\alpha_i \Delta u_i(x) = f_i(x) \quad \text{for } x \in \Omega\,, \qquad
-\alpha_e \Delta u_e(x) = 0 \quad \text{for } x \in \Omega_e\,, \tag{4.26}
\]

with transmission conditions describing the continuity of the potential and of
the flux, respectively,
\[
\gamma_0^{\mathrm{int}} u_i(x) = \gamma_0^{\mathrm{ext}} u_e(x)\,, \qquad
\alpha_i \gamma_1^{\mathrm{int}} u_i(x) = \alpha_e \gamma_1^{\mathrm{ext}} u_e(x)
\quad \text{for } x \in \Gamma\,, \tag{4.27}
\]

and with the radiation condition
\[
u_e(x) = \mathcal{O}\Bigl(\frac{1}{|x|}\Bigr) \quad \text{as } |x| \to \infty\,.
\]

If a particular solution u_i^p of the interior Poisson equation is known,
\[
-\alpha_i \Delta u_i^p(x) = f_i(x) \quad \text{for } x \in \Omega\,,
\]
then the above interface problem can be reformulated as follows (cf. Subsection
1.1.8). Introduce a new unknown function ũ_i by
\[
u_i = \tilde u_i + u_i^p
\]
and rewrite the interface problem in terms of the functions ũ_i and u_e,
\[
-\alpha_i \Delta \tilde u_i(x) = 0 \quad \text{for } x \in \Omega\,, \qquad
-\alpha_e \Delta u_e(x) = 0 \quad \text{for } x \in \Omega_e\,,
\]
with new transmission conditions
\[
\gamma_0^{\mathrm{int}} \tilde u_i(x) = \gamma_0^{\mathrm{ext}} u_e(x) - \gamma_0^{\mathrm{int}} u_i^p(x)\,, \qquad
\alpha_i \gamma_1^{\mathrm{int}} \tilde u_i(x) = \alpha_e \gamma_1^{\mathrm{ext}} u_e(x) - \alpha_i \gamma_1^{\mathrm{int}} u_i^p(x)
\quad \text{for } x \in \Gamma\,.
\]


Then, by the use of the interior and exterior Steklov–Poincaré operators S^int
and S^ext (cf. (1.13), (1.46)),
\[
\gamma_1^{\mathrm{int}} \tilde u_i = S^{\mathrm{int}} \gamma_0^{\mathrm{int}} \tilde u_i\,, \qquad
\gamma_1^{\mathrm{ext}} u_e = -S^{\mathrm{ext}} \gamma_0^{\mathrm{ext}} u_e\,,
\]
we rewrite the interface problem as (cf. (1.59))
\[
\bigl(\alpha_i S^{\mathrm{int}} + \alpha_e S^{\mathrm{ext}}\bigr)\, \gamma_0^{\mathrm{int}} \tilde u_i
= -\alpha_i\, \gamma_1^{\mathrm{int}} u_i^p - \alpha_e\, S^{\mathrm{ext}} \gamma_0^{\mathrm{int}} u_i^p\,. \tag{4.28}
\]
Once the Dirichlet datum γ₀^int ũ_i is found, we solve the interior Dirichlet
boundary value problem for the Neumann datum γ₁^int ũ_i. The Cauchy data
for the unknown functions u_i and u_e are then obtained via
\[
\gamma_0^{\mathrm{int}} u_i = \gamma_0^{\mathrm{ext}} u_e
= \gamma_0^{\mathrm{int}} \tilde u_i + \gamma_0^{\mathrm{int}} u_i^p\,, \qquad
\gamma_1^{\mathrm{int}} u_i = \frac{\alpha_e}{\alpha_i}\,\gamma_1^{\mathrm{ext}} u_e
= \gamma_1^{\mathrm{int}} \tilde u_i + \gamma_1^{\mathrm{int}} u_i^p\,.
\]

Unit Sphere
Let Γ be the surface of the unit sphere (4.1). The constants α_i, α_e and the
right hand side f_i in (4.26) are
\[
\alpha_i = \alpha_e = 1\,, \qquad f_i(x) = 1 \quad \text{for } x \in \Omega\,.
\]
The exact solution of this simple model problem is
\[
u_i(x) = \frac{3 - |x|^2}{6}\,, \quad x \in \Omega\,, \qquad
u_e(x) = \frac{1}{3\,|x|}\,, \quad x \in \Omega_e\,. \tag{4.29}
\]
Consider the function
\[
u_i^p(x) = -\frac{1}{2}\,x_1^2 \quad \text{for } x \in \Omega
\]
as a particular solution of the Poisson equation.
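A short symbolic sanity check (my own addition, assuming α_i = α_e = 1 and f_i = 1 as above) confirms that u_i^p is indeed a particular solution and that the exact solution (4.29) satisfies the transmission conditions (4.27) on |x| = 1.

```python
import sympy as sp

x1, x2, x3, r = sp.symbols('x1 x2 x3 r', positive=True)

u_p = -x1**2 / 2                                     # particular solution
print(-sum(sp.diff(u_p, v, 2) for v in (x1, x2, x3)))  # -Laplace(u_p) -> 1 = f_i

u_i = (3 - r**2) / 6                                 # interior solution, r = |x|
u_e = 1 / (3 * r)                                    # exterior solution
print(sp.simplify(u_i.subs(r, 1) - u_e.subs(r, 1)))                               # trace jump -> 0
print(sp.simplify(sp.diff(u_i, r).subs(r, 1) - sp.diff(u_e, r).subs(r, 1)))       # flux jump -> 0
```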
The Galerkin method with piecewise linear basis functions φ_ℓ for the
Dirichlet data γ₀^int u_i = γ₀^ext u_e and γ₀^int ũ_i, and with piecewise constant ba-
sis functions ψ_k for the Neumann data γ₁^int u_i = γ₁^ext u_e and γ₁^int ũ_i, will be
used. The matrix of the single layer potential V_h and the matrix of the double
layer potential K_h are generated in an approximative form using the par-
tially pivoted ACA algorithm 3.9 with increasing accuracy ε₁. The resulting
system of linear equations (cf. (2.45)) is then solved using the Conjugate Gra-
dient method without preconditioning up to the relative accuracy ε₂ = 10⁻⁸.
The accuracy obtained for the analytical solution (4.29) is presented in Table
4.13. The numbers in this table have the following meaning. The third column
in Table 4.13 shows the number of Conjugate Gradient iterations needed to
reach the prescribed accuracy ε₂ for the linear system (2.45). The error for
the Dirichlet datum and the convergence factor are shown in columns 4 and 5,
while the corresponding error for the Neumann datum can be seen in columns
6 and 7. Note that the convergence of the Galerkin method for the unknown
Dirichlet datum in the L2 norm (4.20) is quadratic (5th column), while the
convergence of the Neumann datum (4.14) is linear (7th column).

Table 4.13. Accuracy of the Galerkin method on the boundary, interface problem

N      M      Iter  Error1,D      CF_D   Error1,N      CF_N
80     42     8     8.47 · 10⁻²   –      1.65 · 10⁻¹   –
320    162    12    2.22 · 10⁻²   3.82   7.99 · 10⁻²   2.07
1280   642    17    5.59 · 10⁻³   3.97   3.88 · 10⁻²   2.06
5120   2562   22    1.40 · 10⁻³   3.99   1.92 · 10⁻²   2.02
20480  10242  33    3.50 · 10⁻⁴   4.00   9.55 · 10⁻³   2.01

4.3 Linear Elastostatics

In this section we consider two numerical examples for the mixed boundary
value problem of linear elastostatics (cf. (1.79))
\[
\begin{aligned}
\sum_{j=1}^{3} \frac{\partial}{\partial x_j}\,\sigma_{ij}(u, x) &= 0 && \text{for } x \in \Omega\,, \ i = 1, 2, 3,\\
\gamma_0^{\mathrm{int}} u(x) &= g(x) && \text{for } x \in \Gamma_D\,,\\
\gamma_1^{\mathrm{int}} u(x) &= f(x) && \text{for } x \in \Gamma_N\,,
\end{aligned} \tag{4.30}
\]
where u is the displacement field of an elastic body initially occupying some
bounded open domain Ω ⊂ R³ with boundary Γ = Γ_D ∪ Γ_N.

4.3.1 Generation of Matrices

The most important matrices to be generated while using the Galerkin bound-
ary element method for the mixed boundary value problem (4.30) are the
single layer potential matrix V_h^{Lamé} and the double layer potential matrix
K_h^{Lamé} (cf. 2.4), having the representations (2.48) and (2.51), respectively.
Thus, in addition to the single and double layer potential matrices (V_h and
K_h) for the Laplace operator, six additional dense matrices V_{ij,h} ∈ R^{N×N} for
1 ≤ i ≤ j ≤ 3 have to be generated corresponding to (cf. (2.50))
\[
V_{ij,h}[k,\ell] = \frac{1}{4\pi} \int_{\tau_k} \int_{\tau_\ell}
\frac{(x_i - y_i)(x_j - y_j)}{|x - y|^3}\, ds_y\, ds_x\,.
\]

Using the abbreviation (cf. Appendix C.2.3)
\[
S_{ij}(\tau_\ell, x) = \frac{1}{4\pi} \int_{\tau_\ell}
\frac{(x_i - y_i)(x_j - y_j)}{|x - y|^3}\, ds_y\,,
\]
the above entries can be written in a symmetrised form
\[
V_{ij,h}[k,\ell] = \frac{1}{2} \Bigl( \int_{\tau_k} S_{ij}(\tau_\ell, x)\, ds_x
+ \int_{\tau_\ell} S_{ij}(\tau_k, x)\, ds_x \Bigr)\,.
\]

The explicit form of the functions Sij can be seen in Appendix C.2.3. The
remaining integrals in the above symmetric form of the matrix entries Vij,h
can be computed numerically using a 7-point quadrature rule, see Appendix
C.1.
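A sketch of such a 7-point rule is given below; it uses the standard degree-5 quadrature in barycentric coordinates (weights normalised to sum to one) and is only an illustration of the technique — the exact rule of Appendix C.1 may differ in its constants.

```python
import numpy as np

# Standard 7-point degree-5 rule on the reference triangle (barycentric form).
_BARY = np.array([
    [1/3,        1/3,        1/3       ],
    [0.05971587, 0.47014206, 0.47014206],
    [0.47014206, 0.05971587, 0.47014206],
    [0.47014206, 0.47014206, 0.05971587],
    [0.79742699, 0.10128651, 0.10128651],
    [0.10128651, 0.79742699, 0.10128651],
    [0.10128651, 0.10128651, 0.79742699],
])
_W = np.array([0.225,
               0.13239415, 0.13239415, 0.13239415,
               0.12593918, 0.12593918, 0.12593918])

def quad7_triangle(f, v0, v1, v2):
    """Approximate the surface integral of f over the flat triangle (v0, v1, v2)."""
    area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))
    pts = _BARY @ np.vstack((v0, v1, v2))          # quadrature points in R^3
    return area * sum(w * f(p) for w, p in zip(_W, pts))
```

With this helper, the outer integrals of S_ij over a triangle τ_k are approximated by passing x ↦ S_ij(τ_ℓ, x) as the integrand f.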

4.3.2 Relay

The geometry of the domain Ω is shown in Fig. 4.32. The bottom of the relay
is chosen to be the Dirichlet part Γ_D of the boundary Γ, and the boundary
condition is homogeneous, i.e.
\[
\gamma_0^{\mathrm{int}} u(x) = 0 \quad \text{for } x \in \Gamma_D = \{x \in \Gamma : x_3 = 0\}\,.
\]
The remaining part of the boundary Γ is then considered as the Neumann
boundary, where only on the top of the domain inhomogeneous boundary
conditions are formulated,
\[
\gamma_1^{\mathrm{int}} u(x) =
\begin{cases}
0\,, & x \in \Gamma : x_3 < 10\,,\\
1\,, & x \in \Gamma : x_3 = 10\,.
\end{cases}
\]

We choose the Young modulus E = 114 000 and the Poisson ratio ν = 0.24,
which correspond to the values of steel. The original domain is shown in Fig.
4.32 for N = 4944. The matrix of the single layer potential V_h for the Laplace
operator (cf. (2.49)), six matrices V_{ij,h} of the single layer potential for the Lamé
operator (cf. (2.50)), and the matrix of the double layer potential K_h (cf.
(2.51)) for the Laplace operator are generated in an approximative form using
the partially pivoted ACA algorithm 3.9 with increasing accuracy. The system
of linear equations is then solved using the Conjugate Gradient method for
the Schur complement of the system (cf. (2.47)) up to the relative accuracy
ε₂ = 10⁻⁸. Note that this realisation requires an additional solution of a
linear system with the single layer potential matrix in each iteration step. This
system is solved again using a Conjugate Gradient method up to the relative
accuracy ε₂ = 10⁻⁸. The matrix of the hypersingular operator is not generated
explicitly. Its multiplication with a vector is realised using the matrix of the
single layer potential as it is described in Section 2.4.
The results of the approximation are presented in Tables 4.14–4.16. The
number of boundary elements is listed in the first column of these tables. The
second column contains the number of nodes, while the prescribed accuracy
for the ACA algorithm for the approximation of all matrices K_h ∈ R^{N×M}
and V_h, V_{kℓ,h} ∈ R^{N×N}, k, ℓ = 1, 2, 3, is given in the third column. The pairs of
further columns of these tables show the memory requirements in MByte and
the percentage of memory compared to the original matrix. The deformed

Fig. 4.32. Relay for N = 4944

Table 4.14. ACA approximation of the Galerkin matrices Vh and Kh

N      M      ε₁          V_h      %      K_h      %
4944   2474   1.0 · 10⁻⁴   37.24   20.0   52.96   56.8
19776  9890   1.0 · 10⁻⁵   258.65  8.7    326.45  10.9

Table 4.15. ACA approximation of the Galerkin matrices V11,h , V12,h and V13,h

N      M      ε₁          V11,h    %      V12,h    %      V13,h    %
4944   2474   1.0 · 10⁻⁴   46.03   24.7   46.75   25.1   45.74   24.5
19776  9890   1.0 · 10⁻⁵   435.61  14.6   433.91  14.5   402.00  13.5

Table 4.16. ACA approximation of the Galerkin matrices V22,h , V23,h and V33,h

N      M      ε₁          V22,h    %      V23,h    %      V33,h    %
4944   2474   1.0 · 10⁻⁴   47.21   25.3   46.78   25.1   45.75   24.5
19776  9890   1.0 · 10⁻⁵   463.43  15.5   415.98  13.9   421.66  14.1

Fig. 4.33. Deformation of the relay for N = 4944

Table 4.17. Number of iterations, Relay problem

N M Iter1 Iter2
4944 2474 286 26-28
19776 9890 368 25-29

domain can be seen from the same point of view in Fig. 4.33. In this figure,
the real deformation is amplified by a factor of 10. In Table 4.17, the number
of iterations required by the Conjugate Gradient method is shown. The third
column of this table shows the number of iterations for the Schur comple-
ment equation (2.47), while the fourth column shows the number of iterations
required for the iterative solution of the linear system for the single layer po-
tential in each iteration step. The required accuracy was ε₂ = 10⁻⁸ for both
systems.

4.3.3 Foam

The geometry of the domain, which is a model for a metal foam, is shown
in Fig. 4.34. The special feature of this domain is its multiple connectivity and
its rather small volume compared to its surface. There is only one discretisation
of the domain with N = 28952 surface elements. The bottom and the top of
the foam are chosen to be the Dirichlet part Γ_D of the boundary Γ, and the
boundary condition is homogeneous on the bottom, i.e.
\[
\gamma_0^{\mathrm{int}} u(x) = 0 \quad \text{for } x \in \Gamma : x_3 = 0\,,
\]
while a prescribed constant displacement is posed on the top, i.e.
\[
\gamma_0^{\mathrm{int}} u(x) = (0, 0, 0.1)^\top \quad \text{for } x \in \Gamma : x_3 = 15\,.
\]
The remaining part of the boundary Γ is then considered as the Neumann
boundary, where homogeneous boundary conditions are formulated:
\[
\gamma_1^{\mathrm{int}} u(x) = 0 \quad \text{for } x \in \Gamma : 0 < x_3 < 15\,.
\]

We choose the Young modulus E = 114 000 and the Poisson ratio ν = 0.24,
which correspond to the values of steel. The original domain is shown in Fig.
4.34 for N = 28952. The matrix of the single layer potential V_h for the Laplace
operator (cf. (2.49)), six matrices V_{ij,h} of the single layer potential for the Lamé
operator (cf. (2.50)), and the matrix of the double layer potential K_h (cf.
(2.51)) for the Laplace operator are generated in an approximative form us-
ing the partially pivoted ACA algorithm 3.9. The system of linear equations
is then solved using a Conjugate Gradient method for the Schur complement
system (cf. (2.47)) up to the relative accuracy ε₂ = 10⁻⁸. Note that this real-
isation requires an additional solution of a linear system with the single layer
potential matrix in each iteration step. This system is solved again using a
Conjugate Gradient method up to the relative accuracy ε₂ = 10⁻⁸. The ma-
trix of the hypersingular operator is not generated explicitly. Its multiplication
with a vector is realised using the matrix of the single layer potential as it is
described in Section 2.4.
The results of the approximation are presented in Tables 4.18–4.20. The
number of boundary elements is listed in the first column of these tables.

Fig. 4.34. Foam for N = 28952

The second column contains the number of nodes, while in the third column
the prescribed accuracy for the ACA algorithm for the approximation of all
matrices K_h ∈ R^{N×M} and V_h, V_{kℓ,h} ∈ R^{N×N}, k, ℓ = 1, 2, 3, is given. The
pairs of further columns of these tables show the memory requirements in
MByte and the percentage of memory compared to the original matrix. The

Table 4.18. ACA approximation of the Galerkin matrices Vh and Kh

N      M      ε₁          V_h      %     K_h      %
28952  14152  1.0 · 10⁻⁴   260.66  4.1   496.58  15.9

deformed domain can be seen from the same point of view in Fig. 4.35. In
this figure, the real deformation is amplified by a factor of 100. The number of
iterations required by the Conjugate Gradient method is shown in Table 4.21.
In this table, the third column shows the number of iterations for the Schur
complement equation (2.47), while the number of iterations required for the

Table 4.19. ACA approximation of the Galerkin matrices V11,h , V12,h and V13,h

N      M      ε₁          V11,h    %     V12,h    %     V13,h    %
28952  14152  1.0 · 10⁻⁴   398.36  6.2   417.94  6.5   418.55  6.5

Table 4.20. ACA approximation of the Galerkin matrices V22,h , V23,h and V33,h

N      M      ε₁          V22,h    %     V23,h    %     V33,h    %
28952  14152  1.0 · 10⁻⁴   402.36  6.3   415.22  6.5   398.93  6.2


Fig. 4.35. Deformation of the foam for N = 28952

iterative solution of the linear system for the single layer potential in each
iteration step can be seen in the fourth column. The required accuracy was
ε₂ = 10⁻⁸ for both systems.

4.4 Helmholtz Equation


In this section we consider some numerical examples for the Helmholtz equa-
tion

Table 4.21. The number of iterations, Foam problem

N M Iter1 Iter2
28952 14152 253 19-21

\[
-\Delta u(x) - \kappa^2 u(x) = 0\,, \tag{4.31}
\]

where u is an analytically given function.

4.4.1 Analytical Solutions

Particular solutions of the Helmholtz equation (4.31) are, for example,
\[
\begin{aligned}
\varphi_{k_1,k_2,k_3}(x) &= \exp\bigl(k_1 x_1 + k_2 x_2 + k_3 x_3\bigr)\,, & k_1^2 + k_2^2 + k_3^2 &= -\kappa^2\,,\\
\varphi_{0,k_2,k_3}(x) &= (a + b\,x_1)\, \exp\bigl(k_2 x_2 + k_3 x_3\bigr)\,, & k_2^2 + k_3^2 &= -\kappa^2\,,\\
\varphi_{0,0,0}(x) &= (a_1 + b_1 x_1)(a_2 + b_2 x_2)\, \exp\bigl(i\kappa\, x_3\bigr)\,.
\end{aligned} \tag{4.32}
\]

Here k1 , k2 and k3 are arbitrary complex numbers satisfying the corresponding


conditions. Thus, different products of linear, exponential, trigonometric, and
hyperbolic functions can be chosen for numerical tests, if we consider interior
boundary value problems in a three-dimensional open bounded domain
R³. Furthermore, the fundamental solution
\[
u^*_\kappa(x, y) = \frac{1}{4\pi}\,\frac{e^{\,i\kappa |x-y|}}{|x - y|} \tag{4.33}
\]
can be considered as a particular solution of the Helmholtz equation (4.31)
for both interior (x ∈ Ω, y ∈ Ω_e = R³ \ Ω̄) and exterior (x ∈ Ω_e, y ∈ Ω)
boundary value problems.
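A quick numerical check (my own illustration, using the sign convention of the reconstruction above) that (4.33) satisfies (4.31) away from the source point: the residual of a finite-difference Laplacian should be of the size of the discretisation error only.

```python
import numpy as np

kappa = 2.0
y = np.array([1.1, 0.0, 0.0])           # source point outside the evaluation region

def u_star(x):
    r = np.linalg.norm(x - y)
    return np.exp(1j * kappa * r) / (4.0 * np.pi * r)

def laplacian_fd(f, x, h=1e-4):
    """Second-order central finite-difference Laplacian."""
    val = -6.0 * f(x)
    for d in range(3):
        e = np.zeros(3); e[d] = h
        val += f(x + e) + f(x - e)
    return val / h**2

x = np.array([0.3, -0.2, 0.5])
residual = -laplacian_fd(u_star, x) - kappa**2 * u_star(x)
print(abs(residual))                     # small: finite-difference error only
```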

4.4.2 Discretisation, Approximation and Iterative Solution

We solve the interior and exterior Dirichlet and Neumann boundary value
problems using a Galerkin boundary element method (cf. Section 2). Piecewise
linear basis functions φ_ℓ will be used for the approximation of the Dirichlet
datum γ₀^int u and piecewise constant basis functions ψ_k for the approximation
of the Neumann datum γ₁^int u. We will use the L² projection for the approxi-
mation of the given part of the Cauchy data. The boundary element matrices
V_κ,h, K_κ,h, and C_κ,h are generated in an approximative form using the complex
valued version of the partially pivoted ACA algorithm 3.9 with a variable rel-
ative accuracy ε₁. The resulting systems of linear equations are solved using
the GMRES method with or without preconditioning up to a relative accuracy
ε₂ = 10⁻⁸.

4.4.3 Generation of Matrices

The most important matrices to be generated while using the Galerkin boun-
dary element method for boundary value problems for the Helmholtz equation
(4.31) are the single layer potential matrix V_κ,h (cf. (2.53)),
\[
V_{\kappa,h}[k,\ell] = \frac{1}{4\pi} \int_{\tau_k} \int_{\tau_\ell}
\frac{e^{\,i\kappa|x-y|}}{|x-y|}\, ds_y\, ds_x\,,
\]
and the double layer potential matrix K_κ,h (cf. (2.58)),
\[
K_{\kappa,h}[k,j] = \frac{1}{4\pi} \int_{\tau_k} \int_{\Gamma}
\bigl(1 - i\kappa|x-y|\bigr)\, e^{\,i\kappa|x-y|}\,
\frac{(x-y,\, n(y))}{|x-y|^3}\, \varphi_j(y)\, ds_y\, ds_x\,.
\]

Furthermore, when solving the Neumann boundary value problem for the
Helmholtz equation, the matrix of the hypersingular operator (cf. (2.62)),
\[
\begin{aligned}
D_{\kappa,h}[i,j] ={}& \frac{1}{4\pi} \int_{\Gamma} \int_{\Gamma}
\frac{e^{\,i\kappa|x-y|}}{|x-y|}\,
\bigl(\operatorname{curl}_\Gamma \varphi_j(y),\, \operatorname{curl}_\Gamma \varphi_i(x)\bigr)\, ds_y\, ds_x\\
&- \frac{\kappa^2}{4\pi} \int_{\Gamma} \int_{\Gamma}
\frac{e^{\,i\kappa|x-y|}}{|x-y|}\,
\varphi_j(y)\, \varphi_i(x)\, \bigl(n(x), n(y)\bigr)\, ds_y\, ds_x\,,
\end{aligned}
\]

has to be involved. The first part of this formula corresponds for κ = 0 to
the hypersingular operator for the Laplace equation, and, therefore, can be
handled in the same way, i.e. these entries are some linear combinations of the
entries of the matrix of the single layer potential V_κ,h. It remains to generate
an additional matrix C_κ,h, having the entries
\[
C_{\kappa,h}[i,j] = \int_{\Gamma} \int_{\Gamma}
\frac{e^{\,i\kappa|x-y|}}{|x-y|}\,
\varphi_j(y)\, \varphi_i(x)\, \bigl(n(x), n(y)\bigr)\, ds_y\, ds_x\,. \tag{4.34}
\]

To generate the entries of the single layer potential matrix V_κ,h numerically,
we rewrite (2.53) as follows:
\[
V_{\kappa,h}[k,\ell] = V_{0,h}[k,\ell] + \frac{1}{4\pi} \int_{\tau_k} \int_{\tau_\ell}
\frac{e^{\,i\kappa|x-y|} - 1}{|x-y|}\, ds_y\, ds_x\,.
\]
In the above, the entries V_{0,h}[k,ℓ] are the entries of the single layer potential
matrix of the Laplace operator, and, therefore, can be computed as it was
discussed in Section 4.2.3. The remaining double integral has no singularity
for x → y, and can be computed numerically, using the 7-point quadrature
rule (cf. Appendix C.1) for each triangle.
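A small sketch (my own, not the book's routine, and tied to the sign convention assumed above) of evaluating this regularised kernel stably: for r → 0 the kernel tends to iκ, so a short Taylor expansion avoids cancellation at very small distances.

```python
import numpy as np

def regularised_kernel(r, kappa, r_small=1e-8):
    """Evaluate (exp(i*kappa*r) - 1) / r for an array of distances r >= 0."""
    r = np.asarray(r, dtype=float)
    z = 1j * kappa * r
    out = np.empty(r.shape, dtype=complex)
    small = r < r_small
    # Taylor expansion: (e^z - 1)/r = i*kappa * (1 + z/2 + z^2/6 + ...)
    out[small] = 1j * kappa * (1.0 + z[small] / 2.0 + z[small]**2 / 6.0)
    out[~small] = (np.exp(z[~small]) - 1.0) / r[~small]
    return out
```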

For the double layer potential matrix, the same idea leads to the following
decomposition:
\[
K_{\kappa,h}[k,j] = K_{0,h}[k,j] + \frac{1}{4\pi} \int_{\tau_k} \int_{\Gamma}
\Bigl( \bigl(1 - i\kappa|x-y|\bigr)\, e^{\,i\kappa|x-y|} - 1 \Bigr)\,
\frac{(x-y,\, n(y))}{|x-y|^3}\, \varphi_j(y)\, ds_y\, ds_x\,.
\]

Again, the first part of this decomposition belongs to the double layer potential
matrix of the Laplace operator, while the second part has no singularity for
x y. Thus, the 7-point quadrature rule can be applied again. However, the
numerical integration with respect to the variable y has to be done over all
triangles in the support of the basis function j for each integration point
with respect to the variable x. Therefore, the generation of the matrix entries
for the double layer potential matrix for the Helmholtz equation is by far more
complicated than for the Laplace equation.
However, the most complicated numerical procedure is required when gene-
rating the entries of the matrix C_κ,h corresponding to (4.34). Using the same
decomposition as for the previous matrices, we get
\[
C_{\kappa,h}[i,j] = C_{0,h}[i,j] + \int_{\Gamma} \int_{\Gamma}
\frac{e^{\,i\kappa|x-y|} - 1}{|x-y|}\,
\varphi_j(y)\, \varphi_i(x)\, \bigl(n(x), n(y)\bigr)\, ds_y\, ds_x\,.
\]

In the above, the second summand has no singularity for x → y, and the 7-
point quadrature rule can be applied again. Note that in this case, the quadra-
ture rule has to be applied to every triangle in the support of the function
φ_i, and, for each of its integration points, to every triangle in the support of
the function φ_j. Furthermore, a symmetrisation is necessary in order to keep
the symmetry of the matrix C_κ,h. Fortunately, the first summand C_{0,h}[i,j]
does not require any additional analytical work. Since the normal vectors
n(x) and n(y) are constant within the single triangles in the supports of the
functions φ_i and φ_j, these integrals can be computed using the symmetrised
combination of the analytical integration and of the 7-point quadrature rule.
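To make the triangle-pair structure of this double quadrature concrete, here is a sketch for the smooth remainder contribution of one pair of flat triangles, reusing quad7_triangle and regularised_kernel from the earlier sketches; the symmetrisation step mentioned above is deliberately omitted, and all arguments are assumed placeholders.

```python
import numpy as np

def c_remainder_pair(tri_x, tri_y, n_x, n_y, phi_i, phi_j, kappa):
    """Contribution of the triangle pair (tri_x, tri_y) to the smooth part of C_{kappa,h}[i,j].

    tri_x, tri_y : (3, 3) arrays of triangle vertices
    n_x, n_y     : constant unit normals of the two triangles
    phi_i, phi_j : callables evaluating the linear basis functions at a point
    """
    nx_dot_ny = float(np.dot(n_x, n_y))

    def outer(x):
        def inner(y):
            r = np.linalg.norm(x - y)
            return regularised_kernel(np.array([r]), kappa)[0] * phi_j(y)
        return phi_i(x) * nx_dot_ny * quad7_triangle(inner, *tri_y)

    return quad7_triangle(outer, *tri_x)
```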

4.4.4 Interior Dirichlet Problem

Here we solve the Helmholtz equation (4.31) together with the Dirichlet
boundary condition γ₀^int u(x) = g(x) for x ∈ Γ, where Γ is a given surface.
The variational problem (1.106),
\[
\langle V_\kappa t, w \rangle_\Gamma = \langle (\tfrac{1}{2} I + K_\kappa)\, g, w \rangle_\Gamma
\quad \text{for all } w \in H^{-1/2}(\Gamma)\,,
\]
is discretised and leads to a system of linear equations
\[
V_{\kappa,h}\, \tilde t = \bigl(\tfrac{1}{2} M_h + K_{\kappa,h}\bigr)\, \tilde g\,. \tag{4.35}
\]

The matrix V_κ,h of this system is symmetric. This property can be used in
order to save computer memory while generating the matrix. However, the
matrix is not selfadjoint, and, therefore, the Conjugate Gradient method can-
not be used. Thus, for an iterative solution of the system (4.35), the complex
GMRES method will be used instead.
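A minimal sketch of this solve, assuming the assembled complex matrix V_kh and the right-hand side vector are available as placeholders (this is not the book's code):

```python
import numpy as np
from scipy.sparse.linalg import gmres

def solve_dirichlet_helmholtz(V_kh, rhs, tol=1e-8):
    """Solve (4.35): V_kh is complex symmetric, rhs = (1/2 M_h + K_{kappa,h}) g_tilde."""
    t, info = gmres(V_kh, rhs, rtol=tol, restart=200)   # SciPy >= 1.12; use tol= on older versions
    if info != 0:
        raise RuntimeError(f"GMRES did not converge (info={info})")
    return t
```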

Unit Sphere

The analytical solution is taken in the form (4.32). For x = (x₁, x₂, x₃)^⊤
we consider the function
\[
u(x) = \varphi_{0,\,2\sqrt{3},\,4i}(x) = 4\,x_1\, \exp\bigl(2\sqrt{3}\, x_2\bigr)\, \exp\bigl(4 i\, x_3\bigr)\,, \tag{4.36}
\]
which satisfies the Helmholtz equation (4.31) for κ = 2. The results of the
computations for this rather moderate wave number are shown in Tables 4.22
and 4.23. The number of boundary elements is listed in the first column of
and 4.23. The number of boundary elements is listed in the rst column of

Table 4.22. ACA approximation of the Galerkin matrices K,h and V,h

N      M      ε₁          MByte(K_κ,h)   %      MByte(V_κ,h)   %
80     42     1.0 · 10⁻²   0.05          99.9    0.05          50.4
320    162    1.0 · 10⁻³   0.66          84.0    0.55          35.0
1280   642    1.0 · 10⁻⁴   6.08          48.5    5.02          20.1
5120   2562   1.0 · 10⁻⁵   49.05         24.5    39.46         9.86
20480  10242  1.0 · 10⁻⁶   357.90        11.2    280.60        4.47

these tables. The second column contains the number of nodes, while in the
third column of Table 4.22 the prescribed accuracy for the ACA algorithm for
the approximation of both matrices K_κ,h ∈ C^{N×M} and V_κ,h ∈ C^{N×N} is given.
In this table, the fourth column shows the memory requirements in MByte
for the approximate double layer potential matrix K_κ,h. The quality of this
approximation in percentage of the original matrix is listed in the next column,
whereas the corresponding values for the single layer potential matrix V_κ,h can
be seen in the columns six and seven. The third column in Table 4.23 shows
the number of GMRES iterations needed to reach the prescribed accuracy ε₂,
while the relative L² error for the Neumann datum,
\[
\mathrm{Error1} = \frac{\|\gamma_1^{\mathrm{int}} u - t_h\|_{L_2(\Gamma)}}{\|\gamma_1^{\mathrm{int}} u\|_{L_2(\Gamma)}}\,,
\]
is given in the fourth column. The next column represents the rate of conver-
gence for the Neumann datum, i.e. the quotient of the errors in two consecutive
lines of column four. Finally, the last two columns show the absolute error (cf.
(2.56)) in a prescribed inner point x* ∈ Ω,

Table 4.23. Accuracy of the Galerkin method, Dirichlet problem

N      M      Iter  Error1        CF1   Error2        CF2
80     42     21    6.98 · 10⁻¹   –     3.12 · 10⁰    –
320    162    29    3.13 · 10⁻¹   2.23  8.68 · 10⁻²   36.00
1280   642    37    1.45 · 10⁻¹   2.16  7.42 · 10⁻³   11.70
5120   2562   46    7.03 · 10⁻²   2.06  7.37 · 10⁻⁴   10.06
20480  10242  57    3.48 · 10⁻²   2.02  7.74 · 10⁻⁵   9.52

\[
\mathrm{Error2} = |u(x^*) - \tilde u(x^*)|\,, \qquad x^* = (0.28591,\ 0.476517,\ 0.667123)^\top\,, \tag{4.37}
\]
for the value \tilde u(x^*) obtained using the approximate representation formula
(2.55). Table 4.23 obviously shows a linear convergence O(N^{-1/2}) = O(h) of
the Galerkin boundary element method for the Neumann datum in the L²
norm. It should be noted that this theoretically guaranteed convergence order
can already be observed when approximating the matrices K_κ,h and V_κ,h with
much lower accuracy than was used to obtain the results in Table 4.22. However,
this high accuracy is necessary in order to be able to observe the third order
(or even better) pointwise convergence rate within the domain Ω presented
in the last two columns of Table 4.23.
In Figs. 4.36–4.37, the given Dirichlet datum (real and imaginary parts)
for N = 1280 boundary elements is presented. The computed Neumann
datum is presented in Figs. 4.38 (real part) and 4.39 (imaginary part). The
numerical curves obtained when using an approximate representation formula
in comparison with the curve of the exact values (4.36) along the line
\[
x(t) = \begin{pmatrix} 0.3 \\ 0.5 \\ 0.7 \end{pmatrix}
+ t \begin{pmatrix} -0.6 \\ -1.0 \\ -1.4 \end{pmatrix}\,, \qquad 0 \le t \le 1\,, \tag{4.38}
\]
inside of the domain Ω are shown in Fig. 4.40 for N = 80 and in Fig. 4.41
for N = 320. The values of the numerical solution ũ and of the analytical
solution u have been computed in 512 points uniformly placed on the line
(4.38). In these figures, the thick dashed line represents the course of the
analytical solution (4.36), while the thin solid line shows the course of the
numerical solution ũ. The values of the variable x₂ along the line (4.38)
are used for the axis of abscissas. The left plots in these figures correspond
to the real parts of the solutions, while the imaginary parts are shown on the
right. The next Fig. 4.42 shows these curves for N = 1280, but on the zoomed
interval [0.3, 0.5] with respect to the variable x₂ in order to see the very small
difference between them. It is almost impossible to see any visual difference
between the numerical and analytical curves for higher values of N. Note that
the point x* in (4.37) is chosen close to the maximum of the function Im u
along the line where the error seems to reach its maximum.
Thus, for the moderate value of the wave number κ = 2, the quality of the
numerical results on the unit sphere is almost the same as for the Laplace


Fig. 4.36. Given Dirichlet datum (real part) for the unit sphere, N = 1280


Fig. 4.37. Given Dirichlet datum (imaginary part) for the unit sphere, N = 1280


Fig. 4.38. Computed Neumann datum (real part) for the unit sphere, N = 1280


Fig. 4.39. Computed Neumann datum (imaginary part) for the unit sphere, N =
1280


Fig. 4.40. Numerical and analytical curves for N = 80, Dirichlet problem


Fig. 4.41. Numerical and analytical curves for N = 320, Dirichlet problem


Fig. 4.42. Numerical and analytical curves for N = 1280, Dirichlet problem

equation. The ACA approximation is good, the number of GMRES iterations
is low without any preconditioning and grows in accordance with the theory,
and, finally, the theoretical linear convergence order of the Neumann datum
on the surface as well as the cubic convergence order in the inner points of
the domain are perfectly illustrated.

Unit Sphere. Multifrequency Analysis

Since the Helmholtz equation provides an additional parameter, the wave
number κ, it is especially interesting and important to study the behaviour of
our numerical methods with respect to this parameter. The quality of the ma-
trix approximation, the number of iterations needed to solve the correspond-
ing linear systems, and, of course, the accuracy of the whole procedure are of
special interest. We will now solve the Helmholtz equation for a fixed discreti-
sation of the surface Γ but for a sequence of wave numbers κ ∈ [κ_min, κ_max].
If Im κ ≠ 0, then the interior Dirichlet boundary value problem is uniquely
solvable. The situation is different for Im κ = 0. In this case the uniqueness
holds only if κ² is not an eigenvalue of the Laplace operator subjected to
homogeneous Dirichlet boundary conditions,
\[
-\Delta u(x) = \lambda\, u(x) \quad \text{for } x \in \Omega\,, \qquad
\gamma_0^{\mathrm{int}} u(x) = 0 \quad \text{for } x \in \Gamma
\]
(cf. Section 1.4). For general Ω, the eigenvalues of the Laplace operator are
not known and it can happen that one or even several of them belong to the
interval [κ_min, κ_max]. In this case some difficulties will occur when solving
the discrete problem. Now we are going to illustrate the situation. The exact
eigenfunctions and eigenvalues on the unit ball are analytically known and
can be represented in spherical coordinates
\[
x = \varrho \begin{pmatrix} \cos\varphi \sin\vartheta \\ \sin\varphi \sin\vartheta \\ \cos\vartheta \end{pmatrix}\,,
\qquad 0 \le \varrho < 1\,, \quad 0 \le \varphi < 2\pi\,, \quad 0 \le \vartheta \le \pi\,,
\]
as follows:
\[
u_{k,n,m}(\varrho, \varphi, \vartheta)
= \frac{J_{n+1/2}(\mu_{n,m}\,\varrho)}{\sqrt{\varrho}}\,
P_{n,|k|}(\cos\vartheta)\, e^{\,i k \varphi}\,, \tag{4.39}
\]
with
\[
m \in \mathbb{N}\,, \quad n \in \mathbb{N}_0\,, \quad |k| \le n\,.
\]
In (4.39), μ_{n,m} are the zeros of the Bessel functions J_{n+1/2}. P_{n,k} are the
associated Legendre polynomials
\[
P_{n,k}(u) = (-1)^k \bigl(1 - u^2\bigr)^{k/2}\, \frac{d^k}{du^k}\, P_n(u)\,,
\]
defined for
\[
|u| \le 1\,, \quad k = 0, \dots, n\,, \quad n \in \mathbb{N}_0\,.
\]
The Legendre polynomials P_n are given in (3.12). The corresponding eigen-
values are
\[
\lambda_{n,m} = \mu_{n,m}^2\,.
\]
For n = k = 0 and m ∈ N, the eigenvalues and the eigenfunctions are of an
especially simple form. In this case we use
\[
J_{1/2}(z) = \sqrt{\frac{2}{\pi z}}\, \sin z
\]
and obtain
\[
u_{0,0,m}(\varrho, \varphi, \vartheta) = c_m\, \frac{\sin(\mu_{0,m}\,\varrho)}{\varrho}\,, \qquad m \in \mathbb{N}\,. \tag{4.40}
\]
Thus, the corresponding critical values of κ are
\[
\kappa = \mu_{0,m} = m\,\pi\,, \qquad m \in \mathbb{N}\,.
\]
In particular, κ = π is a critical value.
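These critical wave numbers can also be obtained numerically as the zeros of the spherical Bessel function j₀(z) = sin(z)/z; the short sketch below (my own illustration) brackets and refines the first few zeros and reproduces κ = mπ.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

def critical_wave_numbers(count=4):
    """First critical wave numbers on the unit ball: zeros of j_0(z) = sin(z)/z."""
    zeros, a = [], 0.5
    f = lambda z: spherical_jn(0, z)
    while len(zeros) < count:
        b = a + 0.5
        if f(a) * f(b) < 0:                 # sign change brackets a zero
            zeros.append(brentq(f, a, b))
        a = b
    return np.array(zeros)

print(critical_wave_numbers())              # approx. [3.1416, 6.2832, 9.4248, 12.566]
```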
We solve the Dirichlet boundary value problem for the Helmholtz equation
(4.31) having the analytical solution (4.33) with y = (1.1, 0.0, 0.0)^⊤ ∉ Ω. We
will use 17 uniformly distributed values of κ on the interval [3.1, 3.2]. The
discretisation of the boundary Γ will be a polyhedron having N = 320 boundary
elements (cf. Fig. 4.1). The following figures illustrate the results: In Fig. 4.43,
the L² error of the Neumann datum is shown as a function of κ in the left plot.
The right plot shows the number of GMRES iterations needed to reach the
relative accuracy ε₂ = 10⁻⁸ of the numerical solution of the linear system (2.57).
Thus, a significant jump of the


Fig. 4.43. Multifrequency computation for N = 320, Dirichlet problem

accuracy is displayed for the value κ = 3.1750. Also, a significant increase of
the number of iterations can be seen close to the critical value κ = 3.1750.
The quality of the ACA approximation of the matrices K_κ,h and V_κ,h is more
or less the same for all values of the parameter κ on this rather small interval.
Thus we can deduce that the value of the parameter κ² = 3.1750² is close to
an eigenvalue of the Dirichlet boundary value problem for the Laplace equa-
tion in the polyhedron Ω_h with N = 320 elements. This value is remarkably
close to the first eigenvalue π² of the continuous problem. However, the dis-
crete values of the parameter κ will never meet the correct eigenvalue exactly
and, probably, the closeness to the eigenvalue will not be detected. In this
situation quite wrong results can be obtained. We illustrate this fact in the
next three figures, where the real parts (left plots) and the imaginary parts
(right plots) of the analytical solution (thick dashed lines) and of the numer-
ical solution (thin solid lines) are presented for three values of the parameter
κ = 3.15, 3.175, 3.2. We can see that the numerical solution differs signifi-
cantly from the analytical one for κ = 3.175, while the numerical results for
κ = 3.15 and κ = 3.2 are quite good for this rather rough discretisation.

Fig. 4.44. Numerical and analytical curves for κ = 3.15 and N = 320, Dirichlet problem

Fig. 4.45. Numerical and analytical curves for κ = 3.175 and N = 320, Dirichlet problem

Fig. 4.46. Numerical and analytical curves for κ = 3.2 and N = 320, Dirichlet problem

In the next example, we solve the Dirichlet boundary value problem on the
polyhedron Ω_h with N = 1280 elements for 65 values of the wave number κ
uniformly distributed on the interval [0, 16]. Thus, the first value corresponds
to the Laplace operator.
In Figs. 4.47–4.48 we show how the ACA approximation quality of the
matrices K_κ,h and V_κ,h depends on the wave number. The left plots in these
figures present the memory requirements in MByte, while the right plots show
the same result in percentage compared to the full memory for ε₁ = 10⁻⁴.
The linear dependence of the memory requirements on the wave number is

clearly indicated by these numerical tests. Also this example shows the loss


Fig. 4.47. Approximation of the double layer potential matrix K,h for N = 1280


Fig. 4.48. Approximation of the single layer potential matrix V,h for N = 1280

of the accuracy close to the critical values of the parameter κ. In Fig. 4.49, we
present again the L² norm of the error for the Neumann datum (left plot) and
the number of GMRES iterations (right plot) as functions of the wave number
κ. The left plot in Fig. 4.49 clearly shows a total loss of accuracy close to the


Fig. 4.49. Multifrequency computation for N = 1280, Dirichlet problem

wave number κ = 13.0. If we plot the analytical and the numerical values

of the solution of the boundary value problem for three subsequent points
12.75, 13.0 and 13.25 of the parameter κ, we can see this loss of accuracy
visually. The results are presented in Figs. 4.50–4.52. It is remarkable that

Fig. 4.50. Numerical and analytical curves for κ = 12.75 and N = 1280, Dirichlet problem

Fig. 4.51. Numerical and analytical curves for κ = 13.0 and N = 1280, Dirichlet problem

Fig. 4.52. Numerical and analytical curves for κ = 13.25 and N = 1280, Dirichlet problem

only one critical value (close to 4π) of the wave number κ was detected on

the interval [0, 16]. This fact is due to the rather big step of 0.25 with respect
to κ, which was used in the above example.

Exhaust Manifold

The analytical solution is taken in the form (4.33) with y = (0, 0, 0.06)^⊤ and
κ = 80, which is moderate compared to the rather small dimension of the
domain Ω (cf. Fig. 4.11). The results of the computations are shown in Tables
4.24 and 4.25. The third column shows the number of iterations required

Table 4.24. ACA approximation of the Galerkin matrices K,h and V,h

N      M      ε₁          MByte(K_κ,h)   %      MByte(V_κ,h)   %
2264   1134   1.0 · 10⁻³   16.62         42.4    11.82         15.1
9056   4530   1.0 · 10⁻⁴   137.86        22.0    97.08         7.8
36224  18114  1.0 · 10⁻⁵   1046.80       10.5    696.65        3.5

Table 4.25. Accuracy of the Galerkin method, Dirichlet problem

N      M      Iter  Error1        CF1   Error2        CF2
2264   1134   177   3.10 · 10⁻¹   –     8.88 · 10⁻³   –
9056   4530   208   1.40 · 10⁻¹   2.2   1.08 · 10⁻³   8.2
36224  18114  244   5.83 · 10⁻²   2.4   9.27 · 10⁻⁵   11.7

by the GMRES method without preconditioning. The fourth column displays
the L² error of the Neumann datum, while the next column shows its linear
convergence. The last pair of columns of Table 4.25 shows the absolute error
(cf. (2.56)) in a prescribed inner point x* ∈ Ω,
\[
\mathrm{Error2} = |u(x^*) - \tilde u(x^*)|\,, \qquad x^* = (0.145303,\ 0.1,\ 0.05)^\top\,, \tag{4.41}
\]
for the value \tilde u(x^*) obtained using an approximate representation formula
(2.55). Finally, the last column of this table indicates the cubic (or even better)
convergence of this quantity.
In Figs. 4.53–4.54, the real part and the imaginary part of the given
Dirichlet datum are presented. The computed Neumann datum is presented
in Figs. 4.55–4.56, where Fig. 4.55 corresponds to the real part of the Neumann
datum, while the imaginary part is shown in Fig. 4.56. The numerical curve
obtained when using an approximate representation formula in comparison
with the curve of the exact values (4.33) along the line


Fig. 4.53. Given Dirichlet datum (real part) for the exhaust manifold


Fig. 4.54. Given Dirichlet datum (imaginary part) for the exhaust manifold


Fig. 4.55. Computed Neumann datum (real part) for the exhaust manifold


Fig. 4.56. Computed Neumann datum (imaginary part) for the exhaust manifold

\[
x(t) = \begin{pmatrix} -0.05 \\ 0.1 \\ 0.05 \end{pmatrix}
+ t \begin{pmatrix} 0.2 \\ 0.0 \\ 0.0 \end{pmatrix}\,, \qquad 0 \le t \le 1\,, \tag{4.42}
\]
inside of the domain Ω is shown in Figs. 4.57–4.58 for N = 2264 and, corre-
spondingly, for N = 9056. The values of the numerical solution ũ and of the
analytical solution u have been computed in 512 points uniformly placed on
the line (4.42). The thick dashed line represents the course of the analytical
solution (4.33), while the thin solid line shows the course of the numerical so-
lution ũ. The values of the variable x₁ along the line (4.42) are used for the
axis of abscissas. Note that the numerical solution for N = 9056 perfectly


Fig. 4.57. Numerical and analytical curves for the exhaust manifold for N = 2264


Fig. 4.58. Numerical and analytical curves for the exhaust manifold for N = 9056

coincides with the analytical curves.

4.4.5 Interior Neumann Problem


We consider the interior Neumann boundary value problem for the Helmholtz
equation with the boundary condition γ₁^int u(x) = g(x) for x ∈ Γ. The varia-
tional problem (1.109),
\[
\langle D_\kappa u, v \rangle_\Gamma = \langle (\tfrac{1}{2} I - K'_\kappa)\, g, v \rangle_\Gamma
\quad \text{for all } v \in H^{1/2}(\Gamma)\,,
\]
is discretised and leads to a system of linear equations (cf. (2.61))
\[
D_{\kappa,h}\, \tilde u = \bigl(\tfrac{1}{2} M_h^\top - K_{\kappa,h}^\top\bigr)\, \tilde g\,.
\]
The symmetric but complex valued system is then solved using the GMRES
method up to the relative accuracy ε₂ = 10⁻⁸.

Unit Sphere

We consider again the function (4.36) as the exact solution. The
results for the ACA approximation of the matrix C_κ,h ∈ C^{M×M} are presented
in Table 4.26. The corresponding results for the matrices K_κ,h, needed for the
computation of the right hand side of the above system, and V_κ,h, which will be
used for the multiplication with the matrix D_κ,h, are identical to those already
presented in Table 4.22. Note that in this example the complex valued Galerkin

Table 4.26. ACA approximation of the Galerkin matrix C,h , Neumann problem

N      M      ε₁          MByte(C_κ,h)   %
80     42     1.0 · 10⁻²   0.01          51.2
320    162    1.0 · 10⁻³   0.20          49.1
1280   642    1.0 · 10⁻⁴   2.15          34.2
5120   2562   1.0 · 10⁻⁵   17.53         17.5
20480  10242  1.0 · 10⁻⁶   126.29        7.89

matrix C_κ,h with piecewise linear basis functions is generated. This is a rather
time consuming procedure. Thus, a quite good approximation of this matrix is
especially important when using the ACA algorithm. The accuracy obtained

Table 4.27. Accuracy of the Galerkin method, Neumann problem

N      M      Iter  Error1        CF1   Error2        CF2
80     42     10    4.94 · 10⁻¹   –     2.15 · 10⁰    –
320    162    17    1.25 · 10⁻¹   3.95  4.72 · 10⁻¹   4.67
1280   642    25    2.75 · 10⁻²   4.55  1.09 · 10⁻¹   4.32
5120   2562   37    6.41 · 10⁻³   4.29  2.60 · 10⁻²   4.19
20480  10242  46    1.55 · 10⁻³   4.14  6.41 · 10⁻³   4.05

for the whole numerical procedure is presented in Table 4.27. The numbers in
this table have the usual meaning. The third column shows the number of it-
erations required by the GMRES method without preconditioning. Note that
the convergence of the Galerkin method for the unknown Dirichlet datum in
the L² norm,
\[
\mathrm{Error1} = \frac{\|\gamma_0^{\mathrm{int}} u - u_h\|_{L_2(\Gamma)}}{\|\gamma_0^{\mathrm{int}} u\|_{L_2(\Gamma)}}\,,
\]
is now quadratic, corresponding to the estimate (2.67). In the inner point x*,
we now observe quadratic convergence (7th column), as predicted in (2.67),
instead of the cubic order obtained for the Dirichlet problem (cf. Table 4.23).
This fact is clearly illustrated in Figs. 4.59–4.61, where the convergence of the
boundary element method can be seen visually. The results obtained for
N = 80 are plotted in Fig. 4.59, where the left plot shows the course of the
real part of the solution, while the imaginary part is presented on the right.
The numerical curves in Fig. 4.60 are noticeably better than the previous ones.
However, their quality is not as high as that of the corresponding curves obtained
while solving the Dirichlet problem (cf. Fig. 4.41).


Fig. 4.59. Numerical and analytical curves for N = 80, Neumann problem


Fig. 4.60. Numerical and analytical curves for N = 320, Neumann problem

Exhaust Manifold
The analytical solution is taken in the form (4.33) with y = (0, 0, 0.06)^⊤ and
κ = 10, which is rather small compared with the dimension of the domain Ω (cf.


Fig. 4.61. Numerical and analytical curves for N = 1280, Neumann problem

Fig. 4.11). We will consider bigger values of the wave number when studying
the multifrequency behaviour of the problem. This small value of κ is well
suited to demonstrate the convergence properties of the Galerkin BEM for a
regular value of the wave number. The results of the computations are shown
in Tables 4.28 and 4.29. The quality of the ACA approximation of the ma-

Table 4.28. ACA approximation of the Galerkin matrices K,h and V,h

N      M      ε₁          MB(K_κ,h)   %      MB(V_κ,h)   %     MB(C_κ,h)   %
2264   1134   1.0 · 10⁻³   13.81      35.3    8.29       10.6   4.54       23.1
9056   4530   1.0 · 10⁻⁴   124.86     20.0    72.14      5.8    40.33      12.9
36224  18114  1.0 · 10⁻⁵   998.19     9.9     580.32     2.9    354.61     7.1

trix C_κ,h can be seen in columns eight and nine of Table 4.28. The third

Table 4.29. Accuracy of the Galerkin method, Neumann problem

N      M      Iter  Error1        CF1   Error2        CF2
2264   1134   177   2.26 · 10⁻²   –     1.40 · 10⁻³   –
9056   4530   201   5.20 · 10⁻³   4.3   2.92 · 10⁻⁴   4.8
36224  18114  244   1.21 · 10⁻³   4.3   4.90 · 10⁻⁵   5.9

column shows the number of iterations required by the GMRES method with
diagonal preconditioning (4.21). The fourth column displays the L² error of
the computed Dirichlet datum, and the next column shows its quadratic con-
vergence. Column six of Table 4.29 displays the absolute error (cf. (2.67)) in
a prescribed inner point x* ∈ Ω,
\[
\mathrm{Error2} = |u(x^*) - \tilde u(x^*)|\,, \qquad x^* = (0.0740705,\ 0.1,\ 0.05)^\top\,, \tag{4.43}
\]


for the value \tilde u(x^*) obtained using an approximate representation formula
(2.65). Finally, the last column of this table indicates the quadratic (or even
better) convergence of this quantity.
The Cauchy data obtained when solving the Neumann boundary value
problem are visually the same as for the Dirichlet boundary value problem
and can be seen in Figs. 4.53–4.54. The numerical curve obtained when using
an approximate representation formula in comparison with the curve of the
exact values (4.33) along the line (4.42) inside of the domain Ω is shown
in Figs. 4.62–4.63 for N = 2264 and, correspondingly, for N = 9056. The
values of the numerical solution ũ and of the analytical solution u have been
computed in 512 points uniformly placed on the line (4.42). The thick dashed
line represents the course of the analytical solution (4.33), while the thin solid
line shows the course of the numerical solution ũ. The values of the variable
x₁ along the line (4.42) are used for the axis of abscissas. Note that


Fig. 4.62. Numerical and analytical curves for the Neumann Problem, N = 2264


Fig. 4.63. Numerical and analytical curves for the Neumann Problem, N = 9056

the numerical solution for N = 9056 perfectly coincides with the analytical
curves.

Exhaust Manifold. Multifrequency Analysis


Here we solve the Neumann boundary value problem for the Helmholtz equa-
tion on the surface depicted in Fig. 4.11 for 17 uniformly distributed values
of the wave number κ on the interval [16, 24]. In Fig. 4.64, we present the
L² norm of the error for the computed Dirichlet datum (left plot) and the
number of GMRES iterations (right plot) as functions of the wave number κ.
The left plot in Fig. 4.64 clearly shows a total loss of accuracy close to the


Fig. 4.64. Multifrequency computation for N = 2264, Neumann problem

wave numbers κ = 17.5 and κ = 20.5. If we plot the analytical and the numer-
ical values of the solution of the boundary value problem for three subsequent
points 17.0, 17.5, and 18.0 of the parameter κ, we can see this loss of accuracy
visually. The results are presented in Figs. 4.65–4.67. Thus, the numerical

Fig. 4.65. Numerical and analytical curves for κ = 17.0 and N = 2264, Neumann problem

solution is quite wrong for κ = 17.5, while it is acceptable for κ = 17.0 and
κ = 18.0.
The picture is similar if we consider the numerical and the analytical
solutions for κ = 20.0, 20.5, and κ = 21.0.
Note that the quality of the approximation of the boundary element ma-
trices K_κ,h, V_κ,h, and C_κ,h is more or less constant for all values of the wave
number κ on the whole interval [16, 24].
Fig. 4.66. Numerical and analytical curves for κ = 17.5 and N = 2264, Neumann problem

Fig. 4.67. Numerical and analytical curves for κ = 18.0 and N = 2264, Neumann problem

4.4.6 Exterior Dirichlet Problem

Here, we solve the Helmholtz equation (4.31) in Ω_e = R³ \ Ω̄ together with
the boundary condition γ₀^ext u(x) = g(x) for x ∈ Γ, where Γ = ∂Ω is the
boundary of the domain Ω. The variational problem (1.116),
\[
\langle V_\kappa t, w \rangle_\Gamma = \langle (-\tfrac{1}{2} I + K_\kappa)\, g, w \rangle_\Gamma
\quad \text{for all } w \in H^{-1/2}(\Gamma)\,,
\]
is discretised and leads to a system of linear equations (cf. (2.57))
\[
V_{\kappa,h}\, \tilde t = \bigl(-\tfrac{1}{2} M_h + K_{\kappa,h}\bigr)\, \tilde g\,.
\]
The matrix V_κ,h of this system is identical with the corresponding matrix of
the interior boundary value problem. Thus, in this subsection, we will choose
different values of the wave number compared with those used in Subsection
4.4.4.

Unit Sphere

The analytical solution is taken in the form (4.33) for y = (0.9, 0, 0)^⊤ ∈ Ω,
i.e. close to the boundary Γ of the domain Ω. The results of the computations

Table 4.30. ACA approximation of the Galerkin matrices K_{κ,h} and V_{κ,h}

     N       M       ε₁        MByte(K_{κ,h})      %     MByte(V_{κ,h})      %
    80      42    1.0·10⁻²          0.05         100.0         0.05        50.6
   320     162    1.0·10⁻³          0.75          94.5         0.64        40.8
  1280     642    1.0·10⁻⁴          6.88          54.9         5.66        22.7
  5120    2562    1.0·10⁻⁵         53.85          26.9        42.91        10.7
 20480   10242    1.0·10⁻⁶        379.09          11.8       302.35         4.72

for the wave number κ = 4, which is still moderate, are shown in Tables 4.30
and 4.31. The number of boundary elements is listed in the first column of
these tables. The second column contains the number of nodes, while in the
third column of Table 4.30 the prescribed accuracy ε₁ of the ACA algorithm
for the approximation of both matrices K_{κ,h} ∈ C^{N×M} and V_{κ,h} ∈ C^{N×N} is
given. The difference in the ACA approximation of these matrices for κ = 4
(Table 4.30) and for κ = 2 (Table 4.22) can be clearly seen. In Table 4.31, the

Table 4.31. Accuracy of the Galerkin method, Dirichlet problem

     N       M    Iter     Error1       CF1      Error2       CF2
    80      42     28    9.43·10⁻¹       —      1.88·10⁻¹       —
   320     162     41    6.95·10⁻¹     1.36     4.39·10⁻²     4.28
  1280     642     52    3.68·10⁻¹     1.89     7.48·10⁻³     5.87
  5120    2562     63    1.64·10⁻¹     2.24     8.04·10⁻⁴     9.30
 20480   10242     75    7.79·10⁻²     2.11     6.89·10⁻⁵    11.68

third column shows the number of GMRES iterations without preconditioning
needed to reach the prescribed accuracy ε₂ = 10⁻⁸. The relative L2 error of
the Neumann datum,

\[ \mathrm{Error1} = \frac{\|\gamma_1^{\mathrm{int}} u - t_h\|_{L_2(\Gamma)}}{\|\gamma_1^{\mathrm{int}} u\|_{L_2(\Gamma)}}, \]

is given in the fourth column. The next column represents the rate of convergence
of the Neumann datum, i.e. the quotient of the errors in two consecutive
lines of column four. We can see that linear convergence is observed
asymptotically. Finally, the last two columns show the absolute error (cf. (2.69))
at the point x* ∈ Ω_e,

\[ \mathrm{Error2} = |u(x^*) - \widetilde{u}(x^*)|, \qquad x^* = (1.1, 0, 0)^\top, \tag{4.44} \]

for the value ũ(x*) obtained using the approximate representation formula
(2.68). Again, a rather high accuracy of the ACA approximation is necessary in
order to be able to observe the third-order (asymptotically even better) pointwise
convergence rate within the domain Ω_e, shown in the last two columns of Table
4.31.
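The convergence factors CF1 and CF2 reported in Table 4.31 can be reproduced directly from the tabulated errors; the short Python sketch below (not part of the book) recomputes them as quotients of consecutive errors.

```python
# Recomputing the convergence factors of Table 4.31 (values copied from the
# table): each factor is the quotient of the errors in two consecutive rows.
# Halving h (quadrupling N) should roughly halve Error1 (linear rate) and
# divide Error2 by about eight (cubic rate).
N      = [80, 320, 1280, 5120, 20480]
error1 = [9.43e-1, 6.95e-1, 3.68e-1, 1.64e-1, 7.79e-2]    # relative L2 error of t_h
error2 = [1.88e-1, 4.39e-2, 7.48e-3, 8.04e-4, 6.89e-5]    # pointwise error at x*

for k in range(1, len(N)):
    print(f"N = {N[k]:6d}   CF1 = {error1[k-1]/error1[k]:5.2f}"
          f"   CF2 = {error2[k-1]/error2[k]:6.2f}")
```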
In Figs. 4.68–4.69, the given Dirichlet datum (real and imaginary parts)
for N = 320 boundary elements is presented. The computed Neumann datum


Fig. 4.68. Given Dirichlet datum (real part) for the unit sphere, N = 320

is shown in Figs. 4.70 (real part) and 4.71 (imaginary part). The numerical
curves obtained when using the approximate representation formula, in comparison
with the curve of the exact values (4.33), along the line

\[ x(t) = \begin{pmatrix} 1.1 \\ 0.0 \\ -4.0 \end{pmatrix} + t \begin{pmatrix} 0.0 \\ 0.0 \\ 8.0 \end{pmatrix}, \qquad 0 \le t \le 1, \tag{4.45} \]

inside the domain Ω_e are shown in Fig. 4.72 for N = 80 and in Fig. 4.73 for
N = 320. The values of the numerical solution ũ and of the analytical solution
u have been computed at 512 points uniformly placed on the line (4.45). In
these figures, the thick dashed line represents the course of the analytical
solution (4.36), while the thin solid line shows the course of the numerical
solution ũ. The values of the variable x3 along the line (4.45) are used for
the axis of abscissas. The left plots in these figures correspond to the real
parts of the solutions, while the imaginary parts are shown on the right. The
next Fig. 4.74 shows these curves for N = 1280, but on the zoomed interval

Fig. 4.69. Given Dirichlet datum (imaginary part) for the unit sphere, N = 320


Fig. 4.70. Computed Neumann datum (real part) for the unit sphere, N = 320

Fig. 4.71. Computed Neumann datum (imaginary part) for the unit sphere, N = 320


Fig. 4.72. Numerical and analytical curves for N = 80, Dirichlet problem


Fig. 4.73. Numerical and analytical curves for N = 320, Dirichlet problem

[−1, 1] with respect to the variable x3, in order to see the very small difference
between them. It is almost impossible to see any visible difference between


Fig. 4.74. Numerical and analytical curves for N = 1280, Dirichlet problem

the numerical and the analytical curves for higher values of N. Note that the
point x* in (4.44) is chosen close to the maximum of the function Re u along
the line, where the error seems to reach its maximum.
Thus, for the moderate value of the wave number κ = 4, the quality of
the numerical results on the unit sphere is excellent. The ACA approximation
is good, the number of GMRES iterations is low without any preconditioning
and grows in accordance with the theory, and, finally, the theoretical linear
convergence order of the Neumann datum on the surface as well as the cubic
convergence order at interior points of the domain Ω_e is clearly illustrated.
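For reference, the following Python sketch (not from the book) evaluates the analytical solution along the line (4.45) at the 512 points used for the curves above, assuming that (4.33) is the outgoing point-source solution exp(iκ|x − y*|)/(4π|x − y*|) with y* = (0.9, 0, 0)ᵀ and κ = 4; the corresponding numerical values would come from the approximate representation formula (2.68).

```python
# Evaluating the analytical solution along the line (4.45), as used for the
# curves in Figs. 4.72-4.74.  The point-source form of the exact solution and
# the source point y* = (0.9, 0, 0) are assumptions based on (4.33).
import numpy as np

kappa = 4.0
y_star = np.array([0.9, 0.0, 0.0])

t = np.linspace(0.0, 1.0, 512)                       # 512 points on the line
x = np.array([1.1, 0.0, -4.0]) + np.outer(t, [0.0, 0.0, 8.0])

r = np.linalg.norm(x - y_star, axis=1)
u = np.exp(1j * kappa * r) / (4.0 * np.pi * r)       # analytical solution

x3 = x[:, 2]                                         # abscissa used in the figures
print(x3[:3], u.real[:3], u.imag[:3])
```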

4.4.7 Exterior Neumann Problem

We consider the exterior Neumann boundary value problem for the Helmholtz
equation in Ω_e = R³ \ Ω̄ with the boundary condition γ₁^ext u(x) = g(x) for
x ∈ Γ. The variational problem (1.122),

\[ \langle D_\kappa u, v \rangle_\Gamma = -\langle (\tfrac{1}{2} I + K'_\kappa)\, g, v \rangle_\Gamma \quad \text{for all } v \in H^{1/2}(\Gamma), \]

is discretised and leads to a system of linear equations (cf. (2.61)),

\[ D_{\kappa,h}\, \widetilde{u} = -\big( \tfrac{1}{2} M_h + K_{\kappa,h} \big)^{\!\top} g. \]

This symmetric, but complex valued, system is then solved using the GMRES
method up to the relative accuracy ε₂ = 10⁻⁸.

Unit Sphere

We consider again the analytical solution in the form (4.33) for an interior
point y* = (0.9, 0, 0)ᵀ and κ = 4 as the exact solution. The results for the

Table 4.32. ACA approximation of the Galerkin matrix C_{κ,h}, Neumann problem

     N       M       ε₁        MByte(C_{κ,h})      %
    80      42    1.0·10⁻²          0.01          51.2
   320     162    1.0·10⁻³          0.20          50.3
  1280     642    1.0·10⁻⁴          2.41          38.3
  5120    2562    1.0·10⁻⁵         19.17          19.1
 20480   10242    1.0·10⁻⁶        135.47           8.46

Table 4.33. Accuracy of the Galerkin method, Neumann problem

     N       M    Iter     Error1       CF1      Error2       CF2
    80      42     16    6.78·10⁻¹       —      3.24·10⁻²       —
   320     162     23    1.91·10⁻¹     3.55     4.99·10⁻³     6.49
  1280     642     31    5.81·10⁻²     3.29     1.24·10⁻³     4.02
  5120    2562     44    1.42·10⁻²     4.09     2.90·10⁻⁴     4.28
 20480   10242     62    3.27·10⁻³     4.34     7.16·10⁻⁵     4.04

ACA approximation of the matrices K_{κ,h} and V_{κ,h} are the same as for the
exterior Dirichlet boundary value problem and can be seen in Table 4.30. The
approximation results for the matrix C_{κ,h} are shown in Table 4.32. In Table
4.33, the accuracy obtained for the whole numerical procedure is presented,
and the numbers in this table have the usual meaning. The third column shows
the number of iterations required by the GMRES method without preconditioning.
Note that the convergence of the Galerkin method for the unknown
Dirichlet datum in the L2 norm,

\[ \mathrm{Error1} = \frac{\|\gamma_0^{\mathrm{int}} u - u_h\|_{L_2(\Gamma)}}{\|\gamma_0^{\mathrm{int}} u\|_{L_2(\Gamma)}}, \]

is close to quadratic. At the interior point x* = (1.1, 0.0, 1.0978)ᵀ ∈ Ω_e (close to
a local minimum of the exact solution), we now observe quadratic convergence
(seventh column), as predicted in (2.70), instead of the cubic order obtained
for the Dirichlet problem (cf. Table 4.31). This fact is clearly illustrated in
Figs. 4.75–4.77, where the convergence of the boundary element method can
be seen visually. The results obtained for N = 80 are plotted in Fig. 4.75,
where the left plot shows the course of the real part of the solution, while the
imaginary part is presented on the right. These curves are rather rough
approximations of the exact solution. The numerical curves in Fig. 4.76 are
noticeably better than the previous ones. However, their quality is not as high
as that of the corresponding curves obtained when solving the Dirichlet problem
(cf. Fig. 4.73). The numerical curves for N = 1280 are acceptable.

Fig. 4.75. Numerical and analytical curves for N = 80, exterior Neumann problem


Fig. 4.76. Numerical and analytical curves for N = 320, exterior Neumann problem


Fig. 4.77. Numerical and analytical curves for N = 1280, exterior Neumann problem
A
Mathematical Foundations

A.1 Function Spaces


Let α = (α₁, …, α_d) ∈ N₀^d be a multi-index with |α| = α₁ + … + α_d and
α! = α₁! ⋯ α_d!, d = 2, 3. Moreover, for x ∈ R^d we define x^α = x₁^{α₁} ⋯ x_d^{α_d} as
well as the partial derivatives

\[ D^\alpha u(x) = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}\, u(x_1, \ldots, x_d). \]

For an open bounded domain Ω ⊂ R^d and k ∈ N₀, we denote by C^k(Ω) the
space of k times continuously differentiable functions equipped with the norm

\[ \|u\|_{C^k(\Omega)} = \sup_{|\alpha| \le k}\ \sup_{x \in \Omega} |D^\alpha u(x)|; \]

C^∞(Ω) is defined accordingly. The support of a given function is the closed
set

\[ \operatorname{supp} u = \overline{\{\, x \in \Omega : u(x) \neq 0 \,\}}. \]

Then, C₀^∞(Ω) is the space of infinitely differentiable functions with compact
support,

\[ C_0^\infty(\Omega) = \{\, u \in C^\infty(\Omega) : \operatorname{supp} u \subset \Omega \,\}. \]

For k ∈ N₀ and κ ∈ (0, 1], we define the space C^{k,κ}(Ω) of Hölder continuous
functions equipped with the norm

\[ \|u\|_{C^{k,\kappa}(\Omega)} = \|u\|_{C^k(\Omega)} + \sum_{|\alpha| = k}\ \sup_{x, y \in \Omega} \frac{|D^\alpha u(x) - D^\alpha u(y)|}{|x - y|^{\kappa}}. \]

In particular, for k = 0 and κ = 1, we obtain the space C^{0,1}(Ω) of Lipschitz
continuous functions.

The boundary of an open and bounded set R3 is given as

= = (R3 \).

We assume that the boundary can be represented by a certain decomposi-


tion
'p
= i , (A.1)
i=1

where each boundary segment i is described via a local parametrisation,


( )
i = x R3 : x = i () for Ti R2 , (A.2)

with respect to some open parameter domain Ti . A domain is said to be a


Lipschitz domain, when all functions i in (A.2) are Lipschitz continuous for
any arbitrary decomposition (A.1).
For 1 p < , we dene Lp () as the space of all measurable functions
u on with uLp () < , where  Lp () denotes the norm:
1/p

uLp () = |u(x)|p dx .

In fact, two functions u, v Lp () are identied, if they dier only on a set K


with Lebesgue zero measure (K) = 0. L () is the space of all measurable
functions which are bounded almost everywhere,

uL () = ess sup |u(x)| = inf sup |u(x)|.


x K,(K)=0 x\K

Note that for u Lp () and v Lq () with adjoint parameters p, q satisfying


1 1
+ = 1,
p q
the Hölder inequality holds:

|u(x)v(x)| dx uLp () vLq () .

Dening the duality pairing



u, v
= u(x)v(x)dx ,

we then obtain
| u, v
| 1 1
vLq () = sup for 1 p < , + = 1.
0=uLp () uLp () p q

In particular, for p = 2, the Hilbert space L2 () is the function space of


square integrable functions. Finally, Lloc
1 () is the space of locally integrable
functions.
A function u Lloc
1 () is said to have a generalised partial derivative


v= u Lloc
1 () ,
xi
if it satises the equality
 

v(x)(x)dx = u(x) (x)dx for all C0 (). (A.3)
xi

In the same way, we may dene the generalised derivative D u Lloc


1 () by
 
D u(x)(x)dx = (1)|| u(x)D (x)dx for all C0 ().

Then, for k N0 and 1 p < ,


1/p

vWpk () = D vpLp ()
||k

denes a norm, while for p = we set

k () = max D vL () .
vW

||k

Now we are able to dene the Sobolev spaces Wpk () as the closure of C ()
with respect to the Sobolev norms as introduced above,
W k ()
Wpk () = C () p .

In particular, for any v Wpk (), there exists a sequence {j }jN C ()


such that
lim v j Wpk () = 0.
j

In the same way, we may also dene the Sobolev spaces


W k ()
W p () = C0 ()
k p .

Up to now, the above Sobolev spaces are dened only for k N0 . However,
the denition of the Sobolev norms  Wpk () , and, therefore, of the Sobolev
spaces, can be generalised for arbitrary s R. For s > 0 with s = k + ,
k N0 , (0, 1), we dene the SobolevSlobodeckii norm
 1/p
vWps () = vpW k () + |v|pW s ()
p p

with the semi-norm


   |D v(x) D v(y)|p
|v|pW s () = dxdy.
p |x y|d+p
||=k

For s < 0 and 1 < p < , the Sobolev space Wps () is the dual space of

W qs (), where 1/p + 1/q = 1. The corresponding norm is given by


| f, v
|
f Wps () = sup .
vWqs ()
vW qs ()


Accordingly, W ps () is the dual space of Wqs ().
Next we will collect some properties of the Sobolev spaces needed later.
Theorem A.1 (Sobolev imbedding theorem). Let Ω ⊂ R³ be a bounded
domain with Lipschitz boundary Γ = ∂Ω, and let s ≥ 3 for p = 1 and
s > 3/p for p > 1. Then every function v ∈ W_p^s(Ω) is continuous, v ∈ C(Ω),
satisfying

\[ \|v\|_{L_\infty(\Omega)} \le c\, \|v\|_{W_p^s(\Omega)} \quad \text{for all } v \in W_p^s(\Omega). \]

In particular, we are interested in the Sobolev spaces W2s (), i.e. for p = 2.
For s = 1, the norm in W21 () is given by
 1/2
vW21 () = v2L2 () + v2L2 () .

Now, we will derive equivalent norms in W21 (). A norm  W21 (),f is called
equivalent to the norm  W21 () , if there are some positive constants c1 and
c2 such that

c1 vW21 () vW21 (),f c2 vW21 () for all v W21 ().

Let f : W21 () R be a bounded linear functional satisfying

|f (v)| cf vW21 () for all v W21 ().

Let c R be an arbitrary constant. If we can always conclude c = 0 from


f (c) = 0, then
 1/2
vW21 (),f = |f (v)|2 + v2L2 ()

denes an equivalent norm in W21 (). Examples are the norms


2

v2 1 = v(x)dx + v2L ()
W2 (), 2



and 2

v2W 1 (), = v(x)dsx + v2L2 () .
2

As a rst consequence, the Sobolev norms  L2 () and  W21 () are



equivalent norms in W pk (). Secondly, there holds the Poincare inequality
2
  
 2
|v(x)|2 dx cP v(x)dx + v(x) dx for all v W21 ().

Theorem A.2 (Bramble–Hilbert lemma). Let f : W₂^{p+1}(Ω) → R for
p = 0, 1 be a bounded linear functional satisfying

\[ |f(v)| \le c_f\, \|v\|_{W_2^{p+1}(\Omega)} \quad \text{for all } v \in W_2^{p+1}(\Omega). \]

Let

\[ P_0(\Omega) = \{\, q(x) = q_0 : x \in \mathbb{R}^d \,\} \]

be the space of constant polynomials and let

\[ P_1(\Omega) = \{\, q(x) = q_0 + q_1 x_1 + \ldots + q_d x_d : x \in \mathbb{R}^d \,\} \]

be the space of linear polynomials defined on Ω. If f(q) = 0 is satisfied for all
q ∈ P_p(Ω), then it follows that

\[ |f(v)| \le c_p\, c_f\, |v|_{W_2^{p+1}(\Omega)} \quad \text{for all } v \in W_2^{p+1}(\Omega). \]

Recall that the denition of the Sobolev spaces W2s () is based on the gen-
eralised derivatives (cf. (A.3)) in the sense of Lloc
1 (). In what follows, we
will give a second denition of Sobolev spaces, which is based on derivatives
of distributions. A distribution T D () is a complex continuous linear
functional with respect to D() = C0 (). For v Lloc
1 (), the equality

Tv () = v(x)(x)dx for D()

denes a regular distribution Tv D (). The most famous distribution,


which is not regular, is the Dirac distribution satisfying

0 () = (0) , for all D() .

The equality

(D Tv )() = (1)|| Tv (D ) for all D()



denes the derivative D Tv D () of a distribution Tv D ().


Next, we consider the Schwartz space S(Rd ) of smooth fast decreasing
functions and its dual space S  (Rd ) of tempered distributions. For S(Rd ),
we dene the Fourier transform F : S(Rd ) S(Rd ) as

0
() = (F )() = (2)d/2 e (x,) (x)dx for Rd ,
Rd

as well as the inverse Fourier transform



1 d/2
(x) = (F 0
)(x) = (2) 0
e (x,) ()d for x Rd .
Rd

Note that for S(Rd ), we have

D (F)() = (i)|| F(x )(), (F)() = (i)|| F(D )().

For a distribution T S  (Rd ), the Fourier transformation T0 S  (Rd ) is


dened via
T0() = T ()0 for all S(Rd ).
For s R and v S(Rd ) we dene the Bessel potential operator J s : S(Rd )
S(Rd ) as

(J v)(x) = (1 + ||2 )s/2 v0()e (x,) d for x Rd .
s

Rd

The application of the Fourier transform yields

(FJ s v)() = (1 + ||2 )s/2 (Fv)() .

Thus, J s acts similar to a dierential operator of order s. For a distribution


T S  (Rd ), we can dene J s T S  (Rd ) via

(J s T )() = T (J s ) for all S(Rd ).

Now we are in a position to dene the Sobolev space H s (Rd ) as a space of all
distributions v S  (Rd ) with J s v L2 (Rd ), and with the inner product

u, v
H s (Rd ) = J s u, J s v
L2 (Rd ) ,

and the induced norm



v2H s (Rd ) = J s v2L2 (Rd ) = (1 + ||2 )s |0
v ()|2 d.
Rd

It turns out that H s (Rd ) = W2s (Rd ) for all s R. For a bounded domain
Rd , the Sobolev space H s () is dened by restriction
( )
H s () = v = v| : v H s (Rd )

with the norm


vH s () = inf 
v H s (Rd ) .
H s (Rd ) : v
v | =v

Moreover, we introduce the Sobolev spaces

 s () = C ()H s (Rd ) ,
H H0s () = C0 ()
H s ()
0

and state the following result:


Theorem A.3. Let Ω ⊂ R³ be a Lipschitz domain and s ≥ 0. Then
H̃^s(Ω) ⊆ H₀^s(Ω). In particular, there holds

\[ \widetilde{H}^s(\Omega) = H_0^s(\Omega) \quad \text{for } s \neq \tfrac{1}{2}, \tfrac{3}{2}, \tfrac{5}{2}, \ldots\, . \]

Moreover,

\[ \widetilde{H}^s(\Omega) = \big( H^{-s}(\Omega) \big)', \qquad H^s(\Omega) = \big( \widetilde{H}^{-s}(\Omega) \big)' \quad \text{for all } s \in \mathbb{R}. \]

Finally, we comment on the equivalence of the Sobolev spaces W2s () and


H s (), where we have to impose additional restrictions on the bounded do-
main . In particular, let us assume that there is given a bounded linear
extension operator
E : W2s () W2s (Rd ).
Note that this condition is satised, if an uniform cone condition holds for .
In fact, if is a Lipschitz domain, we conclude H s () = W2s () for all s > 0.
To dene Sobolev spaces on the boundary = of a bounded domain
R3 we start with an arbitrary overlapping parametrisation

'
J
( )
= i , i = x R3 : x = i (), Ti R2 .
i=1

We consider a partition of unity subordinated to the above decomposition,


i.e. nonnegative functions i C0 (R3 ) satisfying


J
i (x) = 1 for x , i (x) = 0 for x \i .
i=1

Any function v on can then be written as


J 
J
v(x) = i (x)v(x) = vi (x) for x ,
i=1 i=1

with

vi (x) = i (x)v(x) for x i .


Inserting the local parametrisation, this gives

vi (x) = i (x)v(x) = i (i ())v(i ()) = vi () for Ti R2 .

For the above dened functions vi (), for Ti R2 , and s 0, we now con-
sider the Sobolev space H s (Ti ) associated with the norms  vi H s (Ti ) . Hence,
we can dene the Sobolev spaces H s ( ) equipped with the norm
 J 1/2

vHs ( ) = 
vi 2H s (Ti ) .
i=1

Note that derivatives D vi () of the order || k require the existence of


derivatives D i () due to the chain rule. Hence, we assume i C k1,1 (Ti ).
Therefore, if is a Lipschitz domain, we can dene the Sobolev spaces H s ( )
on the boundary only for |s| 1.
Note that the denition of the above Sobolev norm  Hs ( ) depends on
the parametrisation chosen, but all of the norms are equivalent. In particular,
for s = 0 1/2

vL2 ( ) = |v(x)|2 dsx

is equivalent to vH0 ( ) . For s (0, 1), an equivalent norm is the Sobolev


Slobodeckii norm
1/2
 
|v(x) v(y)| 2
v2H s ( ) = v2L2 ( ) + dsx dsy .
|x y|2+2s

Moreover,
2 1/2
  
|v(x) v(y)| 2

vH 1/2 ( ), = v(x)dsx + dsx dsy
|x y|3

denes an equivalent norm in H 1/2 ( ).


 
For s < 0, Sobolev spaces H s ( ) = H s ( ) are dened by duality,

w, v

wH s ( ) = sup ,
0=vH s ( ) vH s ( )

with respect to the duality pairing



w, v
= w(x)v(x)dsx .


Next, we consider open boundary parts 0 = . For s 0, we dene


( )
H s (0 ) = v = v|0 : v H s ( ) ,

and
vH s (0 ) = inf 
v H s ( ) .
H s ( ):
v v|0 =v

Correspondingly,
( )
 s (0 ) =
H v = v|0 : v H s ( ), supp v 0 .

If s < 0, then by duality


 s   
 (0 )  ,
H s (0 ) = H  s (0 ) = H s (0 )  .
H

Finally, we consider a piecewise smooth boundary

'
J
= i, i j = for i = j ,
i=1

and dene for s 0 the Sobolev spaces


( )
s
Hpw ( ) = v L2 ( ) : v|i H s (i ), i = 1, . . . , J ,

equipped with the norm


 J 1/2

vHpw
s ( ) = v|i 2H s (i ) .
i=1

For s < 0, the corresponding Sobolev spaces are given as a product space


J
s
Hpw ( ) =  s (i ) ,
H
i=1

with the norm



J
wHpw
s ( ) = w|i H
s ( ) .
i=1

Note that for all w Hpw


s
( ) and s < 0, we conclude w H s ( ), satisfying

wH s ( ) wHpw
s ( ) .

At the end of this subsection, we state some relations between the Sobolev
spaces H s () in the domain and H s ( ) on the boundary = .

Theorem A.4 (Trace Theorem). Let Ω ⊂ R³ be a bounded domain with
boundary Γ ∈ C^{k−1,1}. The trace operator

\[ \gamma_0^{\mathrm{int}} : H^s(\Omega) \to H^{s-1/2}(\Gamma) \]

is continuous for 1/2 < s ≤ k, i.e.

\[ \|\gamma_0^{\mathrm{int}} v\|_{H^{s-1/2}(\Gamma)} \le c_T\, \|v\|_{H^s(\Omega)} \quad \text{for all } v \in H^s(\Omega). \]

Theorem A.5 (Inverse Trace Theorem). Let Ω ⊂ R³ be a bounded domain
with boundary Γ ∈ C^{k−1,1}. For 1/2 < s ≤ k, there exists a continuous
extension operator

\[ \mathcal{E} : H^{s-1/2}(\Gamma) \to H^s(\Omega), \]

i.e.

\[ \|\mathcal{E} v\|_{H^s(\Omega)} \le c_{IT}\, \|v\|_{H^{s-1/2}(\Gamma)} \quad \text{for all } v \in H^{s-1/2}(\Gamma), \]

satisfying γ₀^int E v = v.

A.2 Fundamental Solutions


A.2.1 Laplace Equation

The fundamental solution (1.7) of the Laplace equation can be found from
the relation (1.5), which can be written as a partial differential equation in
the distributional sense,

\[ -\Delta_y u^*(x, y) = \delta_0(y - x) \quad \text{for } x, y \in \mathbb{R}^3, \]

where δ₀ denotes the Dirac distribution. Since the Laplace operator is invariant
with respect to translations and rotations, we may use the transformation
z = y − x to find v satisfying

\[ -\Delta_z v(z) = \delta_0(z) \quad \text{for } z \in \mathbb{R}^3. \]

The application of the Fourier transform together with the transformation
rules for derivatives gives

\[ |\xi|^2\, \widehat{v}(\xi) = \frac{1}{(2\pi)^{3/2}}, \]

and, therefore,

\[ \widehat{v}(\xi) = \frac{1}{(2\pi)^{3/2}}\, \frac{1}{|\xi|^2} \in S'(\mathbb{R}^3). \]

The Fourier transform of tempered distributions of the above form is
well studied and can be found, for example, in [35, Chapter 2]. Thus, in
d-dimensional space, it holds for λ ≠ −d, −d − 2, …

\[ \mathcal{F}\big( |\xi|^{\lambda} \big)(z) = 2^{\lambda + d/2}\, \frac{\Gamma\big( \frac{\lambda + d}{2} \big)}{\Gamma\big( -\frac{\lambda}{2} \big)}\, |z|^{-\lambda - d}. \]

For d = 3 and λ = −2, the above formula leads to

\[ \mathcal{F}\big( |\xi|^{-2} \big)(z) = \sqrt{\frac{\pi}{2}}\, \frac{1}{|z|}, \]

where the values Γ(1/2) = √π and Γ(1) = 1 of the Gamma function have been
used. Therefore, with z = y − x, the fundamental solution of the Laplace
operator is

\[ u^*(x, y) = \frac{1}{4\pi}\, \frac{1}{|x - y|}. \]
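A quick numerical plausibility check (not part of the book's presentation) is to apply a finite-difference Laplacian to u*(x, ·) at a point y ≠ x; the result should be close to zero, reflecting the harmonicity of the fundamental solution away from the singularity.

```python
# Plausibility check (not from the book): away from the singularity, a
# finite-difference Laplacian applied to u*(x, .) is close to zero, since
# u*(x, y) = 1 / (4 pi |x - y|) is harmonic for y != x.
import numpy as np

def u_star(x, y):
    return 1.0 / (4.0 * np.pi * np.linalg.norm(x - y))

x = np.array([0.0, 0.0, 0.0])
y = np.array([0.7, -0.3, 0.5])
h = 1e-3

laplacian = 0.0
for k in range(3):
    e = np.zeros(3); e[k] = h
    laplacian += (u_star(x, y + e) - 2.0 * u_star(x, y) + u_star(x, y - e)) / h**2

print("discrete Laplacian at y:", laplacian)   # of the size of the O(h^2) discretisation error
```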

A.2.2 Lamé System

The fundamental solution of linear elastostatics, given by the Kelvin tensor


(1.66), can be found from (1.64), which can be written as a system (for  =
1, 2, 3) of partial dierential equations in the distributional sense:

U  (x, y) ( + )grad div U  (x, y) = 0 (y x)e for x, y R3 .

Dening
+
U  (x, y) = w (x, y) grad div w (x, y) for x, y R3 ,
+ 2
the solution of the inhomogeneous system of linear elastostatics is equivalent
to a system of scalar BiLaplace equations

2 w (x, y) = 0 (y x)e for x, y R3 .

In particular, for  = 1, we set w,2 (x, y) = w,3 (x, y) = 0, and it remains to


solve
2 w,1 (x, y) = 0 (y x) for x, y R3 .
Using the transformation z = y x, we have to nd v satisfying

2 v(z) = 0 (z) for z R3

or
(z) = 0 (z), v(z) = (z) .
From this, we rst get
1 1 1
(z) = .
4 |z|

To determine v, we rewrite the Laplace equation in spherical coordinates as



1 2 1 1 1
r v(r) = for r > 0 .
r2 r r 4 r

Thus, we obtain the general solution



1 1 1 a
v(r) = r+ +b for r > 0 , a, b R .
4 2 r

Choosing a = b = 0, we, therefore, get


1 1
v(z) = |z|.
8
From this, we nd

+ 2
U1,1 (z) = v(z) v(z) ,
+ 2 z12

+ 2
U2,1 (z) = v(z) ,
+ 2 z1 z2

+
U3,1 (z) = v(z) ,
+ 2 z1 z3
and, hence,

1 + 3 1 1 + z12
U1,1 (z) = + ,
8 ( + 2) |z| 8 ( + 2) |z|3

1 + z1 z2
U2,1 (z) = ,
8 ( + 2) |z|3

1 + z1 z3
U3,1 (z) = .
8 ( + 2) |z|3

Doing the same computations for ℓ = 2, 3, and inserting the Lamé constants,
this gives the Kelvin tensor as the fundamental solution of linear elastostatics,

\[ U^*_{k\ell}(x, y) = \frac{1}{8\pi}\, \frac{1 + \nu}{E\, (1 - \nu)} \left[ (3 - 4\nu)\, \frac{\delta_{k\ell}}{|x - y|} + \frac{(y_k - x_k)(y_\ell - x_\ell)}{|x - y|^3} \right] \]

for k, ℓ = 1, 2, 3.
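The Kelvin tensor is straightforward to evaluate numerically; the following sketch (not the book's code, with arbitrarily chosen material parameters and points) computes U*(x, y) for a given Young modulus E and Poisson ratio ν and checks its symmetry in the indices k and ℓ.

```python
# Sketch (not the book's code): evaluating the Kelvin tensor for given Young
# modulus E and Poisson ratio nu; the material parameters and the points are
# arbitrary example values.
import numpy as np

def kelvin_tensor(x, y, E, nu):
    d = y - x
    r = np.linalg.norm(d)
    c = (1.0 + nu) / (8.0 * np.pi * E * (1.0 - nu))
    return c * ((3.0 - 4.0 * nu) * np.eye(3) / r + np.outer(d, d) / r**3)

U = kelvin_tensor(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.5, -0.2]),
                  E=2.1e11, nu=0.3)
print("symmetric in k and l:", np.allclose(U, U.T))
print(U)
```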

A.2.3 Stokes System

To nd the fundamental solution for the Stokes system, we have to solve the
following system of partial dierential equations

U  (x, y) + q (x, y) = 0 (y x)e , div U  (x, y) = 0 for x, y R3 ,

for  = 1, 2, 3, and

U 4 (x, y) + q4 (x, y) = 0, div U 4 (x, y) = 0 (y x) for x, y R3 .

As in the case of linear elastostatics, we set

U  (x, y) = w (x, y) grad div w (x, y), q (x, y) =  div w (x, y)

for x, y R3 and for  = 1, 2, 3. This implies divU  (x, y) = 0, and

2 w (x, y) = 0 (y x)e .

For  = 1, we nd
1 1 1
w1,1 (x, y) =
 4 |x y|
and
1 1
w1,1 (x, y) = |x y|, w1,2 (x, y) = w1,3 (x, y) = 0 .
 8
Using
1 1 1 1 y1 x1
div w1 (x, y) = |x y| = ,
 8 y1  8 |x y|

we obtain


U11 (x, y) = w1,1 (x, y) div w1 (x, y) =
y1
1 1 1 1 1 y1 x1
= =
 4 |x y|  8 y1 |x y|

1 1 1 (x1 y1 )2
= + ,
 8 |x y| |x y|3


U12 (x, y) = div w1 (x, y) =
y2
1 1 y1 x1 1 1 (x1 y1 )(x2 y2 )
= = ,
 8 y2 |x y|  8 |x y|3


U13 (x, y) = div w1 (x, y) =
y3
1 1 y1 x1 1 1 (x1 y1 )(x3 y3 )
= = ,
 8 y3 |x y|  8 |x y|3

and

1 1 1 x1 y1
q1 (x, y) = divw1 (x, y) = = .
4 y1 |x y| 4 |x y|3
Doing the same computations for ℓ = 2, 3, we obtain the fundamental solution
of the Stokes system as

\[ U^*_{k\ell}(x, y) = \frac{1}{8\pi\mu} \left[ \frac{\delta_{k\ell}}{|x - y|} + \frac{(x_k - y_k)(x_\ell - y_\ell)}{|x - y|^3} \right], \qquad q^*_\ell(x, y) = \frac{1}{4\pi}\, \frac{x_\ell - y_\ell}{|x - y|^3} \]

for k, ℓ = 1, 2, 3.
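As a sketch (not from the book), the Stokeslet and the associated pressure can be evaluated as follows; a central finite-difference check confirms that each column of U*(x, ·) is divergence-free away from the singularity. The viscosity μ is an arbitrary example value.

```python
# Sketch (not the book's code): the Stokeslet U* and pressure q* for an
# arbitrary viscosity mu, together with a central finite-difference check that
# every column of U*(x, .) is divergence-free away from the singularity.
import numpy as np

mu = 1.0

def stokeslet(x, y):
    d = x - y
    r = np.linalg.norm(d)
    return (np.eye(3) / r + np.outer(d, d) / r**3) / (8.0 * np.pi * mu)

def pressure(x, y):
    d = x - y
    return d / (4.0 * np.pi * np.linalg.norm(d)**3)

x = np.array([0.0, 0.0, 0.0])
y = np.array([0.6, -0.4, 0.8])
h = 1e-4

for l in range(3):                       # divergence (w.r.t. y) of the l-th column
    div = 0.0
    for k in range(3):
        e = np.zeros(3); e[k] = h
        div += (stokeslet(x, y + e)[k, l] - stokeslet(x, y - e)[k, l]) / (2.0 * h)
    print(f"column {l}: divergence ~ {div:.2e}")

print("pressure q*(x, y):", pressure(x, y))
```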
To nd the fundamental solution U 4 (x, y) and q4 (x, y), we have to solve
the system

U 4 (x, y) + q4 (x, y) = 0, div U 4 (x, y) = 0 (y x) for x, y R3

in the distributional sense. Setting

U 4 (x, y) = (x, y) for x, y R3 ,

we get
(x, y) = 0 (y x) for x, y R3
and therefore
1 1
(x, y) = .
4 |x y|
Hence,

1 xk yk
U4k (x, y) = (x, y) = , k = 1, 2, 3 .
yk 4 |x y|3

In addition, taking the divergence of the rst equation, this gives

q4 (x, y) = div U 4 (x, y) = 0 (y x) ,

implying
q4 (x, y) =  0 (y x) .

A.2.4 Helmholtz Equation

The fundamental solution u*_κ(x, y) satisfies the relation (1.94),

\[ \int_\Omega \big[ -\Delta_y u^*_\kappa(x, y) - \kappa^2 u^*_\kappa(x, y) \big]\, u(y)\, dy = u(x) \quad \text{for } x \in \Omega, \]

which can be written as a partial differential equation in the distributional
sense,

\[ -\Delta_y u^*_\kappa(x, y) - \kappa^2 u^*_\kappa(x, y) = \delta_0(y - x) \quad \text{for } x, y \in \mathbb{R}^3. \]

Setting z = y − x, we have to find v as a solution of

\[ -\Delta v(z) - \kappa^2 v(z) = \delta_0(z) \quad \text{for } z \in \mathbb{R}^3. \]

Using spherical coordinates as well as v(z) = v(r) with r = |z|, this is
equivalent to

\[ -\frac{1}{r^2}\, \frac{\partial}{\partial r} \Big( r^2\, \frac{\partial}{\partial r} v(r) \Big) - \kappa^2 v(r) = 0 \quad \text{for } r > 0. \]

With the substitutions

\[ v(r) = \frac{w(r)}{r}, \qquad \frac{\partial}{\partial r} v(r) = \frac{\partial}{\partial r}\, \frac{w(r)}{r} = \frac{1}{r}\, \frac{\partial}{\partial r} w(r) - \frac{1}{r^2}\, w(r), \]

we get

\[ \frac{\partial^2}{\partial r^2} w(r) + \kappa^2 w(r) = 0 \quad \text{for } r > 0, \]

with the general solution

\[ w(r) = A\, e^{i\kappa r} + B\, e^{-i\kappa r} \quad \text{for } r > 0. \]

Choosing A = 1/(4π) and B = 0, this gives the fundamental solution for the
Helmholtz equation,

\[ u^*_\kappa(x, y) = \frac{1}{4\pi}\, \frac{e^{i\kappa |x - y|}}{|x - y|} \quad \text{for } x, y \in \mathbb{R}^3. \]
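A simple finite-difference check (not part of the book) confirms that, away from the singularity, this function satisfies −Δ_y u*_κ − κ² u*_κ = 0 up to the discretisation error.

```python
# Finite-difference check (not from the book): away from the singularity the
# fundamental solution satisfies -Delta_y u* - kappa^2 u* = 0 up to the O(h^2)
# discretisation error.
import numpy as np

kappa = 4.0

def u_star(x, y):
    r = np.linalg.norm(x - y)
    return np.exp(1j * kappa * r) / (4.0 * np.pi * r)

x = np.array([0.0, 0.0, 0.0])
y = np.array([0.8, 0.3, -0.5])
h = 1e-3

laplacian = 0.0 + 0.0j
for k in range(3):
    e = np.zeros(3); e[k] = h
    laplacian += (u_star(x, y + e) - 2.0 * u_star(x, y) + u_star(x, y - e)) / h**2

print("|Helmholtz residual|:", abs(-laplacian - kappa**2 * u_star(x, y)))
```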

A.3 Mapping Properties

In this section, we will summarise the mapping properties of all boundary


integral operators used. For simplicity, we will restrict ourselves to the case of
the Laplace equation. For a more detailed presentation, we refer, for example,
to [24, 71, 105].
The Newton potential

u(x) = (N f )(x) = u (x, y)f (y)dy for x

is a generalised solution of the Poisson equation

u(x) = f (x) for x .

 1 (), we have N
In particular, for a given f H  f H 1 () satisfying

 f H 1 () c f  1
uH 1 () = N  1 ().
for all f H
H ()

Taking the interior trace, we obtain



 f )(x) =
(N f )(x) = 0int (N u (x, y)f (y)dy for x .

Combining the mapping properties of the Newton potential


 :H
N  1 () H 1 ()

with those of the interior trace operator

0int : H 1 () H 1/2 ( ) ,

this gives
 : H
N = 0int N  1 () H 1/2 ( ) .

With this we can derive corresponding mapping properties of the single layer
potential 
(V w)(x) = u (x, y)w(y)dsy for x R3 \ .

For f H 1 (), we have, by exchanging the order of integration,


 
V w, f
= f (x) u (x, y)w(y)dsy dx

 
= w(y) u (x, y)f (x)dx dsy = w, N f
.

Hence, due to N f H 1/2 ( ) for f H 1 (), we nd w H 1/2 ( ) and



V w H (). In fact, the single layer potential
1

V : H 1/2 ( ) H 1 ()

is a bounded operator, and, combining this with the mapping property of the
interior trace operator

0int : H 1 () H 1/2 ( ) ,

this gives
V = 0int V : H 1/2 ( ) H 1/2 ( ).
Moreover, for x we have
 
 

(V w)(x) = u (x, y)w(y)dsy = x u (x, y) w(y)dsy = 0,


i.e. V w is a generalised solution of the Laplace equation for any w H 1/2 ( ).


The above considerations are essentially based on the symmetry of the
fundamental solution, i.e.

u (x, y) = u (y, x) .

Therefore, these considerations are valid only for selfadjoint partial dier-
ential operators. For more general partial dierential operators, one has to
incorporate also all the volume and surface potentials which are dened by
the fundamental solution of the formally adjoint partial dierential operator,
see, for example, [71].
It remains to describe an explicit representation of the single layer potential operator
(V w)(x) = 0int (V w)(x)
for x . For w L ( ), we obtain

(V w)(x) = 0int (V w)(x) = u (x, y)w(y)dsy

as a weakly singular boundary integral (cf. Lemma 1.1). For > 0, we consider
 and x with |
x x x| < . Then,
 
  
 
 
lim  u ( x, y)w(y)dsy u (x, y)w(y)dsy 
x 
x 
 y :|yx|> 
 
   
  
 
lim  u ( x, y) u (x, y) w(y)dsy 
x 
x 
y :|yx|> 
 
  
 
 
+ lim  u (x, y)w(y)dsy 
x 
x 
y :|yx|< 
 
  
 
 
= lim  u (
x, y)w(y)dsy 
x 
x 
y :|yx|< 
    
   
sup w(y) lim u (
x, y)dsy
y :|yx|< x
x
y :|yx|<

1 1
wL ( ) lim dsy
4 x
x |
x y|
y :|yx|<

1 1
= wL ( ) dsy c wL ( ) ,
4 |x y|
y :|xy|<

where we used polar coordinates to obtain the last inequality. Taking the
limit 0, this gives the denition of the single layer potential as a weakly
singular integral. In the same way we get for the exterior trace
0ext (V w)(x) = (V w)(x) for x .
Since V w H 1 () is a generalised solution of the Laplace equation for any
w H 1/2 ( ), we can compute the associated conormal derivative of V w as
1int V w H 1/2 ( ). For v H 1 (), and using Greens rst formula as well
as interpreting the surface integral as weakly singular integral, we obtain
   
1int (V w)(x)0int v(x)dsx = (V w)(x), v(x) dx

   
= u (x, y)w(y)dsy , v(x) dx

   
= lim u (x, y)w(y)dsy , v(x) dx
0
y :|yx|
   
= w(y) lim x u (x, y), v(x) dx dsy .
0
x:|xy|

Again, Greens rst formula gives


   
x u (x, y), x v(x) dx = int u (x, y) int v(x)ds
1,x 0 x

x:|xy| x :|xy|

+ int u (x, y) int v(x)ds ,
1,x 0 x

x:|xy|=

and, therefore,
  
1int (V w)(x)0int v(x)dsx = w(y) lim 1,x u int
(x, y)0int v(x)dsx dsy
0
x :|xy|
 
int
+ w(y) lim 1,x u (x, y)0int v(x)dsx dsy
0
x:|xy|=
 
int
= lim 1,x u (x, y)w(y)dsy 0int v(x)dsx
0
y :|yx|
 
int
+ w(y) lim 1,x u (x, y)v(x)dsx dsy
0
x:|xy|=
 
= (K  w)(x)0int v(x)dsx + w(y)(y)v(y)dsy .


In the above, the adjoint double layer potential



 int u (x, y)w(y)ds
(K w)(x) = lim 1,x y for x
0
y :|yx|

has been introduced. Furthermore, we have used



lim int u (x, y)v(x)ds = (y)v(y) ,
1,x x
0
x:|xy|=

with

int
(y) = lim 1,x u (x, y)dsx
0
x:|xy|=
 
1 (y x, n(x)) 1 1
= lim dsx = lim dsx ,
0 4 |x y|3 0 4 2
x:|xy|= x:|xy|=

which is related to the interior angle of in y . In particular, if is


smooth in y , we nd (y) = 1/2 (cf. Lemma 1.3).
Summarising the above, for w H 1/2 ( ), we have

1int V w = (x)w(x) + (K  w)(x) for x


in the sense of H 1/2 ( ). Correspondingly, the application of the exterior
conormal derivative gives
1ext V w = ((x) 1)w(x) + (K  w)(x) for x ,
and, therefore, we obtain the jump relation of the adjoint double layer poten-
tial,

[1 V w] = 1ext (V w)(x) 1int (V w)(x) = w(x) for x .


Next, we consider the double layer potential

(W v)(x) = int u (x, y)v(y)ds
1,y for x R3 \ .
y

 1 (), by exchanging the order of integration, we have,


For f H
 
W v, f
= f (x) 1,yint u (x, y)v(y)ds dx
y

 
= v(y) int
1,y u (x, y)f (x)dx dsy


=  f )(y)ds
v(y) int (N y
 f
.
= v, 1int N
1,y


From this, we nd W v H 1 () for any v H 1/2 ( ). Moreover, W v H 1 ()


is a generalised solution of the Laplace equation for any v H 1/2 ( ). The
application of the interior trace operator

0int : H 1 () H 1/2 ( )

gives
0int W : H 1/2 ( ) H 1/2 ( ).
It turns out (cf. Lemma 1.2) that

0int (W v)(x) = (1 + (x))v(x) + (Kv)(x) for x ,

with the double layer potential



int
(Kv)(x) = lim (K v)(x) = lim 1,y u (x, y)v(y)dsy for x .
0 0
y :|yx|

This representation follows, when considering x  and x satisfying


|
x x| < , from
 
x) (K v)(x) = 1,y
(W v)( int u ( int u (x, y)v(y)ds
x, y)v(y)dsy 1,y y
y :|yx|
  
= int u (
1,y int u (x, y) v(y)ds
x, y) 1,y y

y :|yx|
  
+ int u (
1,y x, y) v(y) v(x) dsy
y :|yx|

+v(x) int u (
1,y x, y)dsy .
y :|yx|

Note that for all > 0 we have


  
lim int u (
1,y int u (x, y) v(y)ds = 0.
x, y) 1,y y
x
x
y :|yx|

In addition,
  
lim int u (
1,y x, y) v(y) v(x) dsy = 0.
0
y :|yx|

Moreover, for x B (x) = {z : |z x| < }, we have


  
int int int u (
1,y u (x, y)dsy = 1,y u ( x, y)dsy 1,y x, y)dsy .
y :|yx| B (x) y:|yx|=

Taking into account that



int u (
1,y x, y)dsy = 1  B (x)
for x
B (x)

as well as the direction of the normal vector along y : |y x| = , we


nally obtain the above representation for 0int W v.
Correspondingly, the application of the exterior trace operator gives

0ext (W v)(x) = (x)v(x) + (Kv)(x) for x ,

and, therefore, the jump relation of the double layer potential reads

[0 W v] = 0ext (W v)(x) 0int (W v)(x) = v(x) for x .

Since the double layer potential W v H 1 () is a solution of the Laplace


equation for any v H 1/2 ( ), the application of the conormal derivative 1int
denes the bounded operator

D = 1int W : H 1/2 ( ) H 1/2 ( ).

Note that for x , we have


 
(Dv)(x) = 1int (W v)(x) = lim x (W v)(
x), n(x)
xx


1 (n(x), n(y)) (y x, n(y))(y x, n(x))
= lim 3 v(y)dsy ,
4 0 |x y| 3 |x y|5
y :|yx|

which does not exist. In what follows we will sketch the computations to obtain
the alternative representation (1.9) (cf. Lemma 1.4). For the derivatives of the
double layer potential for x we rst have


(W v)(
x) = 1int u (
x, y)v(y)dsy
xi xi

 
1 1 
= n(y), y v(y)dsy
4  xi |
x y|

 
1 1 
= n(y), y v(y)dsy
4 yi |x y|


1 1
= n(y), curly ei y v(y)dsy
4 |
x y|


1 1
= v(y) curl,y ei y dsy .
4 |
x y|


Now, using integration by parts, i.e.


 
 
curl (y)(y)dsy = (y), curl (y) dsy ,

we obtain

1 1
x) =
(W v)( curl v(y), ei y dsy

xi 4 |
x y|


1 1
= ei , curl v(y) y dsy ,
4 |
x y|

and, therefore,

1 1
x (W v)(
x) = curl v(y) y dsy
4 |
x y|


1 1
= curl v(y) x dsy .
4 |
x y|

Hence,

1 1
x), n(x)) =
(x (W v)( curl v(y) x , n(x) dsy
4 |
x y|


1 1
= curl v(y), n(x) x dsy .
4 |
x y|

 x, we obtain
Taking the limes x

1 1
(Dv)(x) = curl v(y), n(x) x dsy
4 |x y|


1 1
= curl v(y), curl,x dsy ,
4 |x y|

and, therefore,
 
1 1
Dv, u
= u(x) curl v(y), curl,x dsy dsx
4 |x y|

    1
1
= curl,x u(x)curl,y v(y) dsx dsy .
4 |x y|

With
    
curl,x u(x)curl,y v(y) = n(x), x u(x)curl,y v(y)
 
= n(x), x u(x) curl,y v(y)
 
= n(x) x u(x), curl,y v(y)
 
= curl,x u(x), curl,y v(y) ,

we nally obtain the alternative representation (1.9),


 
1 (curl u(x), curl v(y))
Dv, u
= dsx dsy .
4 |x y|

Since the ellipticity of boundary integral operators is essential to derive


results on the unique solvability of boundary integral equations, we will sketch
the proof of the ellipticity of the single layer potential V , cf. Lemma 1.1.
The single layer potential

u(x) = (V w)(x) = u (x, y)w(y)dsy for x R3 \

is a solution of the interior Dirichlet boundary value problem

u(x) = 0 for x , 0int u(x) = 0int (V w)(x) = (V w)(x) for x .

By applying Greens rst formula (cf. (1.2)), we obtain


 
a (u, u) = |u(x)|2 dx = 1int u(x)0int u(x)dsx


= 1int (V w)(x)0int (V w)(x)dsx

  
1
= w(x) + (K  w)(x) (V w)(x)dsx .
2

From Greens rst formula (1.2), we also nd the estimate

c1 1int u2H 1/2 ( ) a (u, u).

Since u = V w is also a solution of the exterior Dirichlet boundary value


problem,

u(x) = 0 for x e , 0ext u(x) = 0ext (V w)(x) = (V w)(x) for x ,

satisfying the radiation condition



1
|u(x)| = O as |x| ,
|x|

we also nd the relations



a e (u, u) = 1ext u(x)0ext u(x)dsx

  
1
= w(x) + (K  w)(x) (V w)(x)dsx ,
2

and
c2 0ext u2H 1/2 ( ) a e (u, u).
Hence, we obtain

V w, w
= w(x)(V w)(x)dsx

  
1
= w(x) + (K  w)(x) (V w)(x)dsx
2

  
1
w(x) + (K  w)(x) (V w)(x)dsx
2

= a (u, u) + a e (u, u)
 
min{c1 , c2 } 1int u2H 1/2 ( ) + 1ext u2H 1/2 ( ) .

Using
1  1 2
 
w2H 1/2 ( ) =  w + K  w w + K  w  1/2
2 2 H ( )

= 1int u 1ext u2H 1/2 ( )


 
2 1int u2H 1/2 ( ) + 1ext u2H 1/2 ( ) ,

we nally obtain the ellipticity estimate

V w, w
cV1 w2H 1/2 ( ) for all w H 1/2 ( ).

Note that the ellipticity estimate of the hypersingular boundary integral oper-
ator D (cf. Lemma 1.4) follows almost in the same manner when considering
the double layer potential u = W v.
If = is the boundary of a Lipschitz domain R3 , we then can
extend the mapping properties of all boundary integral operators, see [24]. In
particular, the boundary integral operators
\[ \begin{aligned} V &: H^{-1/2+s}(\Gamma) \to H^{1/2+s}(\Gamma), \\ K &: H^{1/2+s}(\Gamma) \to H^{1/2+s}(\Gamma), \\ K' &: H^{-1/2+s}(\Gamma) \to H^{-1/2+s}(\Gamma), \\ D &: H^{1/2+s}(\Gamma) \to H^{-1/2+s}(\Gamma) \end{aligned} \]

are bounded for all s ∈ [−1/2, 1/2].


B
Numerical Analysis

B.1 Variational Methods


Let X be a Hilbert space with an inner product ,
X , which induces a norm
*
vX = v, v
X for all v X .

Let X  be the dual space of X with respect to the duality pairing

,
: X  X R .

Then,
f, v

f X  sup for all f X  .


0=vX vX
Let A : X X  be a bounded linear operator with

AvX  cA
2 vX for all v X, (B.1)

which is symmetric, i.e.

Au, v
= Av, u
for all u, v X.

For a given f X  , we consider an operator equation to nd u X such that

Au = f (B.2)

is satised in X  . Then,
Au f, v

0 = Au f X  = sup ,
0=vX vX

and, therefore, u X is a solution of the variational problem

Au, v
= f, v
for all v X. (B.3)

For a given operator A : X X  , we dene the functional


1
F (v) = Av, v
f, v
.
2
If the operator A is assumed to be positive semielliptic, i.e.

Av, v
0 for all v X ,

then the solution of the variational problem (B.3) is equivalent to the min-
imisation problem
F (u) = min F (v) . (B.4)
vX

Essential for the analysis of variational problems is the Riesz representation


theorem [118], i.e. any bounded linear functional f X  is of the form

f, v
= u, v
X for all v X. (B.5)

The element u X is hereby uniquely determined and satises

uX = f X  .

The Riesz representation theorem (see (B.5)) denes a linear and bounded
operator J : X  X with

Jf, v
X = f, v
for all v X, Jf X = f X  .

Now, instead of the operator equation (B.2), Au = f in X  , we consider the


equivalent operator equation

JAu = Jf in X. (B.6)

Since A : X X  is a bounded operator satisfying (B.1), we obtain

JAvX = AvX  cA
2 vX for all v X,

i.e. the operator JA : X X is bounded with

JAXX cA
2 .

An operator A : X X  is called Xelliptic, if

Av, v
cA
1 vX
2
for all v X . (B.7)

Then, if A is Xelliptic, the estimate

JAv, v
X = Av, v
cA
1 vX
2
for all v X

follows, i.e. the operator JA : X X is also Xelliptic. Therefore, instead of


(B.6), we consider the x point equation

u = u J(Au f ) = T u + Jf , (B.8)

with
T = I JA : X X
for some positive  R+ . For

cA
0 <  < 2  1 2
cA
2

the operator T : X → X defines a contraction in X with ∥T∥_{X→X} < 1. Thus,
the unique solvability of the fixed point equation (B.8) follows from Banach's
fixed point theorem. Hence, if A : X → X′ is bounded and X-elliptic, we
conclude the unique solvability of the operator equation (B.2), and, therefore,
of the equivalent variational problem (B.3) and of the minimisation problem
(B.4). This result is just the well-known Lax–Milgram lemma. Moreover, for
the solution u X of the operator equation Au = f , we obtain from the
ellipticity estimate (B.7)

1 uX Au, u
= f, u
f X  uX ,
cA 2

and, therefore,
1
uX f X  .
cA
1

Now we consider a sequence {XM }M N of conformal trial spaces



M
XM = span  X,
=1

which is assumed to be dense in X. In particular, we assume the approximation


property
lim inf v vM X = 0 for all v X. (B.9)
M vM XM

To dene an approximate solution of the operator equation (B.2), or of the


equivalent minimisation problem (B.4), we consider the nite dimensional
minimisation problem

F (uM ) = min F (vM ) ,


vM XM

with
1
F (vM ) = AvM , vM
f, vM

2
1  
M M M
= v vj A , j
v f, 
= F(v) .
2 j=1
=1 =1

From the necessary condition



d 
F (v) = 0 for k = 1, . . . , M ,
dvk
we then obtain

M
v A , k
f, k
= 0 for k = 1, . . . , M .
=1

Thus, uM XM is the solution of the Galerkin variational problem


M
u A , k
= f, k
for k = 1, . . . , M, (B.10)
=1

or, in the equivalent formulation,

AuM , vM
= f, vM
for all vM XM . (B.11)

The Galerkin variational problem (B.10) is obviously equivalent to a linear


system
AM u = f ,
with
AM [k, ] = A , k
, fk = f, k

for k,  = 1, . . . , M . Note that


M 
M 
M 
M
(AM u, v) = A[k, ]u vk = A , k
u vk = AuM , vM

k=1 =1 k=1 =1

for all u, v RM , i.e. uM , vM XM . Therefore,

(AM v, v) = AvM , vM
cA
1 vM X
2

for all v RM , i.e. vM XM X. The Galerkin stiness matrix AM is


hence positive denite, and, therefore, invertible. In particular, the Galerkin
variational problem (B.11) is uniquely solvable. Moreover, from

1 uM X AuM , uM
= f, uM
f X  uM X ,
cA 2

we conclude the stability estimate


1
uM X f X  .
cA
1

From (B.3), we obtain

Au, vM
= f, vM
for all vM XM X.

Subtracting from this the Galerkin variational problem (B.11), this gives the
Galerkin orthogonality

A(u uM ), vM
= 0 for all vM XM . (B.12)

Using the ellipticity estimate (B.7), the Galerkin orthogonality (B.12), and
the boundedness of A, we then obtain

1 u uM X A(u uM ), u uM

cA 2

= A(u uM ), u vM
+ A(u uM ), vM uM

= A(u uM ), u vM

A(u uM )X  u vM X
cA
2 u uM X u vM X ,

and, therefore,

cA
u uM X 2
u vM X for all vM XM .
cA
1

This results in Céa's lemma, i.e.

\[ \| u - u_M \|_X \le \frac{c_2^A}{c_1^A}\, \inf_{v_M \in X_M} \| u - v_M \|_X. \tag{B.13} \]

Hence, we obtain convergence u_M → u for M → ∞ from the approximation
property (B.9).
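To make the abstract framework concrete, the following Python sketch (not from the book) applies the Galerkin method to the simple model operator A = −d²/dx² on X = H₀¹(0, 1) with piecewise linear hat functions; the assembled system A_M u = f is tridiagonal, and the error decreases under mesh refinement, in accordance with Céa's lemma.

```python
# A minimal one-dimensional instance of the Galerkin framework (not from the
# book): A = -d^2/dx^2 on X = H^1_0(0,1), trial space spanned by piecewise
# linear hat functions on a uniform mesh.  The stiffness matrix
# A_M[k,l] = <A phi_l, phi_k> is tridiagonal, and the error decreases under
# mesh refinement, as Cea's lemma predicts.
import numpy as np

def galerkin_error(M):
    h = 1.0 / (M + 1)
    x = np.linspace(h, 1.0 - h, M)                  # interior nodes
    A = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h
    # data f(x) = pi^2 sin(pi x), exact solution u(x) = sin(pi x);
    # the load <f, phi_k> is approximated by h * f(x_k)
    f = np.pi**2 * np.sin(np.pi * x) * h
    u = np.linalg.solve(A, f)                       # Galerkin coefficients
    return np.max(np.abs(u - np.sin(np.pi * x)))    # error at the nodes

for M in [10, 20, 40, 80]:
    print(M, galerkin_error(M))                     # decreases under refinement
```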
In many applications, e.g. for direct boundary integral formulations, the
right hand side in (B.2) is given as f = Bg, where B : Y X  is some
bounded operator satisfying

BgX  cB
2 gY for all g Y.

Then the Galerkin formulation (B.11) requires the evaluation of

fk = Bg, k
for all k = 1, . . . , M ,

which may be complicated for general given g Y . Hence, introducing an


approximation
N
gN = gj j YN Y ,
j=1

we may compute the perturbed vector values


N 
N
fk = BgN , k
= gj Bj , k
= BN [k, j]gj ,
j=1 j=1

leading to the linear system



 = BN g .
AM u

 RM corresponds to the unique solution u


Note that the solution vector u M
of the perturbed variational problem

A
uM , vM
= BgN , vM
for all vM XM .

Now, instead of the Galerkin orthogonality (B.12), we obtain

A(u
uM ), vM
= A(uM 
uM ), vM
= B(ggN ), vM
for all vM XM .

From the ellipticity estimate (B.7), we then conclude

1 uM u
cA M 2X A(uM u
M ), uM u
M

= B(g gN ), uM u
M
cB
2 g gN Y uM u
M X ,

and, therefore,
cB
uM u
M X 2
g gN Y .
cA
1
Hence, we nd from the triangle inequality the error estimate

u u
M X u uM X + uM uM X
B
c
u uM X + 2A g gN Y .
c1
Besides an approximation of the given right hand side f , we also have to con-
sider an approximation of the given operator A, e.g., when applying numerical
integration schemes. Hence, instead of the Galerkin formulation (B.11), we
have to solve a perturbed variational problem to nd the solution u M XM
satisfying
uM , vM
= f, vM
for all vM XM .
A (B.14)
 : X X  is a bounded linear operator satisfying
We assume that A
 X  cA 
Av 2 vX for all v X.

To ensure the unique solvability of the Galerkin formulation (B.14), we have


 i.e.
to assume the XM ellipticity of A,

 M , vM
cA vM 2
Av 1 X for all vM XM . (B.15)
M , dened by
The perturbed stiness matrix A
M [k, ] = A
A   , k

for k,  = 1, . . . , M , is then positive denite, and, therefore, invertible. Sub-


tracting the perturbed variational formulation (B.14) from the Galerkin vari-
ational problem (B.11), this gives the orthogonality

uM , vM
= 0 for all vM XM .
AuM A

 we nd
From this, and by using the XM ellipticity of A,
  M u
1 uM u
cA M 2X A(u M ), uM u
M

 A)uM , uM u
= (A M

(A A)uM X  uM uM X ,

and, therefore,
1  M X  .
uM u
M X 
(A A)u
cA
1
Hence, applying the triangle inequality, we nally obtain the error estimate

u u
M X u uM X + uM u M X
1  M X 
u uM X +  (A A)u
cA
1
1   X  + (A A)(u


u uM X +  (A A)u uM )X 
cA
 
1

cA + c A
1  X .
1 + 2  2 u uM X +  (A A)u
cA
1 c A
1

B.2 Approximation Properties


In this section, we will prove the approximation properties of piecewise poly-
nomial basis functions dened in Chapter 2.
Let = be a piecewise smooth Lipschitz boundary which is repre-
sented by a nonoverlapping decomposition

'
J
= i , i j = for i = j,
i=1

where each boundary segment i is described via a local parametrisation


(A.2), ( )
i = x R3 : x = i () for Ti R2 ,
with respect to some parameter domain Ti .
For a sequence of boundary element discretisations (cf. Chapter 2),

'
N
= ,
=1

we further assume that for each boundary element  there exists exactly one
boundary segment i with  i . Hence, there also exists an element qi such
that  = i (qi ). For the area  of the boundary element  , we then obtain
  *
 = dsx = EG F 2 d ,
 qi

with
3 2 3 2 3

E= xi () , G = xi () , F = xi () xi ().
i=1
1 i=1
2 i=1
1 2

We assume that the parametrisation is uniformly bounded, i.e.


*
c1 EG F 2 c2 for all Ti , i = 1, . . . , J.

Hence, we obtain
c1 area qi  c2 area qi .
Then,
  *
v2L2 ( ) = |v(x)|2 dsx = |v(i ())|2 EG F 2 d
 qi
 
c 
c2 |v(i ())| d 2
2
|v(i ())|2 d .
c1 area qi
qi qi

Now, using a parametrisation qi = i ( ) with respect to the parameter do-


main (cf. Chapter 2), this gives
 
|v(i ())|2 d = |v(i (i ()))|2 |det i | d ,
qi

and with  
1
area qi = d = |det i | d = |det i | ,
2
qi

we nally obtain

v2L2 ( ) 2c  
v 2L2 ( ) , v () = v(i (i ())) .

Note that this result would follow directly when considering a parametrisation
of the boundary element  with respect to the reference element . However,
the above approach is needed when considering higher order Sobolev spaces,
for example
 
|v|2H m ( ) = |D v(i ())|2 d, m N.
||=m
qi

For the parametrisation qi = i ( ) and for || = m, we now obtain the norm
equivalence inequalities,
 
1
(area qi )1m |D v ()|2 d |D v(i ())|2 d ,
cm
qi

and  
|D v(i ())|2 d cm (area qi )1m |D v ()|2 d .
qi

Hence, we have

c1 1m
 |
v |2H m ( ) |v|2H m ( ) c2 1m
 |
v |2H m ( ) for m N .

Let Qh w Sh0 ( ) be the L2 projection satisfying the variational problem


N  
w  (x)k (x)dsx = w(x)k (x)dsx for k = 1, . . . , N.
=1

Due to the denition of the piecewise constant basis functions k , we obtain


the Galerkin orthogonality
  
w(x) Qh w(x) dsx = 0 for  = 1, . . . , N


as well as 
1
w = w(y)dsy for  = 1, . . . , N.



For the local error, we rst nd

w Qh w2L2 ( ) 2c  w
 Q w
 2L2 ( ) ,

 () = Qh w(i ()).
where Q w
For an arbitrary but xed L2 ( ), we dene the linear functional
  
 ) =
f (w  () Q w
w  () ()d

satisfying

 )| w
|f (w  Q w
 L2 ( ) L2 ( )

2 w
 L2 ( ) L2 ( ) 2 w
 H 1 ( ) L2 ( ) ,

since the L2 projection Q : L2 ( ) L2 ( ) is bounded. If q is given as a


constant function, we obviously have Q q = q, and, therefore,

f (q) = 0 for all q P0 ( ).

The BrambleHilbert lemma then implies

 )| c |w
|f (w  |H 1 ( ) L2 ( ) .

In particular for = w  Q w L2 ( ), we then obtain


  2
w Q w
 2L2 ( ) = w () Q w
 () d = |f (w )|

c |w
 |H 1 ( ) L2 ( ) = c |w
 |H 1 ( ) w
 Q w
 L2 ( ) ,

and, therefore,
 Q w
w  L2 ( ) c |w
 |H 1 ( ) .
Altogether, we nally obtain

w Qh w2L2 ( ) = 2 w
 Q w
 2L2 ( )


c  |w
 |2H 1 ( ) c h2 |w|2H 1 ( ) ,

as well as

N
w Qh w2L2 ( ) c h2 |w|2H 1 ( )
=1

and
w Qh wL2 ( ) c h |w|Hpw
1 ( ) ,

when assuming w Hpw 1


( ). This is the error estimate (2.4) for s = 1.
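A one-dimensional illustration (not from the book) of this first-order estimate: the L2 projection onto piecewise constants is the elementwise mean value, and for a smooth function the L2 error is observed to behave like O(h).

```python
# One-dimensional illustration (not from the book): the L2 projection onto
# piecewise constants is the elementwise mean value, and for a smooth function
# the L2 error behaves like O(h), i.e. it is roughly halved under refinement.
import numpy as np
from scipy.integrate import quad

w = np.sin                                          # a smooth function on (0, 1)

def l2_error(N):
    h = 1.0 / N
    err2 = 0.0
    for l in range(N):
        a, b = l * h, (l + 1) * h
        mean = quad(w, a, b)[0] / h                 # elementwise mean value Q_h w
        err2 += quad(lambda s: (w(s) - mean)**2, a, b)[0]
    return np.sqrt(err2)

for N in [8, 16, 32, 64]:
    print(N, l2_error(N))
```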
To obtain the error estimate (2.4) for s (0, 1), we consider x  , where
we nd
 
1 1
w(x) Qh w(x) = w(x) w(y)dsy = (w(x) w(y))dsy
 
 

for the error, and, therefore, by applying the CauchySchwarz inequality,


 2
  
 1 
|w(x) Qh w(x)| = 
2  (w(x) w(y))dsy 
  

 2
 
1  w(x) w(y) 
= 2 |x y| dsy 
1+s
  |x y|1+s 

 
1 (w(x) w(y))2
dsy |x y|2+2s dsy
2 |x y|2+2s
 

d2+2s (w(x) w(y)) 2

dsy ,
 |x y|2+2s


where d is the element diameter (cf. Chapter 2). Integrating over  , and
since the boundary element  is shape regular, i.e. d cB h , this gives with
 = h2
  
(w(x) w(y))2
(w(x) Qh w(x))2 dsx c2+2s h 2s
dsy dsx .
B 
|x y|2+2s
  

Taking the sum over all boundary elements  , we obtain


 
N  
(w(x) w(y))2
(w(x) Qh w(x))2 dsx c2+2s h2s dsy dsx ,
B 
|x y|2+2s
=1  

which is the error estimate (2.4) for s (0, 1).


Hence, we have
w Qh wL2 ( ) c hs |w|Hpw
s ( ) ,

when assuming w Hpw


s
( ) for some s [0, 1]. For [1, 0), we then nd
by duality,
w Qh w, v

w Qh wH ( ) = sup
0=vH ( ) vH ( )
w Qh w, v Qh v

= sup
0=vH ( ) vH ( )
w Qh wL2 ( ) v Qh vL2 ( )
= sup
0=vH ( ) vH ( )

c h w Qh wL2 ( ) c hs |w|Hpw
s ( ) ,

and, therefore, the approximation property (2.5).


It remains to derive the approximation property (2.8) for piecewise linear
but globally discontinuous basis functions. For this, we dene the L2 projec-
tion Qh w Sh1,1 ( ) satisfying the variational problem


N 
3  
w,i ,i (x)k,j (x)dsx = w(x)k,j (x)dsx
=1 i=1

for k = 1, . . . , N, j = 1, 2, 3. As for piecewise constant basis functions, we then


obtain

w Qh w2L2 ( ) 2c  w
 Q w
 L2 ( )
for the local error. The BrambleHilbert lemma now implies

w  L2 ( ) c |w
 Q w  |H 2 ( ) .

Hence, we have

w Qh w2L2 ( ) c  |w
 |2H 2 ( ) 
c 2 |w|2H 2 ( ) = 
c h4 |w|2H 2 ( ) ,

and, therefore,

N
w Qh w2L2 ( ) 
c h4 |w|2H 2 ( )
=1

as well as
w Qh wL2 ( ) c h2 |w|Hpw
2 ( ) ,

when assuming w Hpw


2
( ). Since

Qh : L2 ( ) L2 ( )

is bounded, we obtain, by applying an interpolation argument, the error esti-


mate
w Qh wL2 ( ) c hs |w|Hpw(
s
)
,
when assuming w Hpw
s
( ) for some s [0, 2]. By duality, we then nd

w Qh wH ( ) c hs |w|Hpw
s ( )

for [2, 0] and s [0, 2], which is the approximation property (2.8).
Finally, we consider the piecewise linear, globally continuous L2 projection
Qh w Sh1 ( ) as the unique solution of the variational problem
 
Qh w(x)j (x)dsx = w(x)j (x)dsx for j = 1, . . . , M.

Due to Sh1 ( ) Sh1,1 ( ), we immediately nd the approximation property


(2.10) for [2, 0] and s [0, 2]. To derive (2.10) for (0, 1], we introduce
the H 1 projection of a given function u H 1 ( ),


M
Ph u = uj j Sh1 ( ),
j=1

which minimises the error u Ph u in the H 1 ( )norm:

Ph u = arg min u vh H 1 ( ) .
vh Sh
1 ( )

Thus, Ph u Sh1 ( ) is the unique solution of the variational problem

Ph u, vh
H 1 ( ) = u, vh
H 1 ( ) for all vh Sh1 ( ).

As for the L2 projection, we nd the error estimate

u Ph uH 1 ( ) c h |u|Hpw
2 ( ) ,

when assuming u Hpw


2
( ). Since

Ph : H 1 ( ) H 1 ( )

is bounded, we also conclude the trivial estimate

u Ph uH 1 ( ) uH 1 ( ) .

Then, using an interpolation argument, we also obtain the error estimate

u Ph uH 1 ( ) c hs1 |u|H s ( ) ,

when assuming u Hpw


s
( ) for some s [1, 2]. Then, for [0, 1), we get
by duality,

u Ph u, v
H 1 ( )
u Ph uH ( ) = sup
0=vH 2 ( ) vH 2 ( )

u Ph u, v Ph v
H 1 ( )
= sup
0=vH 2 ( ) vH 2 ( )

u Ph uH 1 ( ) v Ph vH 1 ( )
sup
0=vH 2 ( ) vH 2 ( )

c h1 u Ph uH 1 ( ) c hs |u|H s ( ) ,

when assuming u H s ( ) for some s [1, 2]. This is the approximation


property (2.10) for (0, 1] and s [1, 2].
C
Numerical Algorithms

C.1 Numerical Integration


For the boundary = of a Lipschitz domain R3 , we consider a
sequence of boundary element meshes (2.1),

'
N
N = .
=1

We assume that all boundary elements  are plane triangles. Using the ref-
erence element
( )
= R2 : 0 < 1 < 1 , 0 < 2 < 1 1 ,

the boundary element  =  ( ) with nodes xi for i = 1, 2, 3 can be described


via the parametrisation

x() =  () = x1 + 1 (x2 x1 ) + 2 (x3 x1 ) = x1 + J 

for . Using 
 = dsx ,


we obtain for the integral of a given function f on  ,


 
I = f (x)dsx = 2 f () d, f () = f ( ()).


Hence, it is sucient to consider numerical integration schemes



1 
M
i f (i ) f () d
2 i=1


with respect to the reference element , which are exact for polynomials of
a certain order. From this, we may nd 3M parameters (i,1 , i,2 , i ) for i =
1, . . . , M . Let ( )
Pk = span 11 22 1 +2 k
be the space of polynomials of an order less or equal k. The dimensions of
Pk and the minimal number M of integration points are given in Table C.1.
The last line of this table shows the number P = 3M of free parameters.
These parameters (i,1 , i,2 , i ) for i = 1, . . . , M are solutions of the nonlinear

Table C.1. Dimensions of Pk and required number of integration nodes

k 0 1 2 3 4 5
dimPk 1 3 6 10 15 21
M 1 3 4 7
P 3 9 12 21

equations

1
M
11 22 d = 1 2
i i,1 i,2 for 1 , 2 : 1 + 2 k . (C.1)
2 i=1

For k = 1 and M = 1, the equations (C.1) read



1 1
1 d = = 1 ,
2 2

1 1
1 d = = 1 1,1 ,
6 2


1 1
2 d = = 1 1,2 ,
6 2

and, therefore, we obtain


1 1
1 = 1, 1,1 = , 1,2 = .
3 3
The resulting formula is the midpoint quadrature rule

f (x) dsx  f (x ) ,


which integrates linear functions exactly. For k = 2 and M = 3, we have from


(C.1)

1 1 
1 d = = 1 + 2 + 3 ,
2 2

1 1 
1 d = = 1 1,1 + 2 2,1 + 3 3,1 ,
6 2


1 1 
2 d = = 1 1,2 + 2 2,2 + 3 3,2 ,
6 2

1 1 
12 d = 2
= 1 1,1 2
+ 2 2,1 2
+ 3 3,1 ,
12 2


1 1 
22 d = 2
= 1 1,2 2
+ 2 2,2 2
+ 3 3,2 ,
12 2

1 1 
1 2 d = = 1 1,1 1,2 + 2 2,1 2,2 + 3 3,1 3,2 .
24 2

To nd a solution of the above system of nonlinear equations, we add some ad-


ditional constraints with respect to the geometrical setting. Due to symmetry,
we rst set
1
1 = 2 = 3 = .
3
Introducing some real parameter s R, we may dene the integration nodes
on the straight lines connecting the corner nodes with the midpoint,

s 1 1 s 2 0 s 1
1 = , 2 = + , 3 = + .
3 1 0 3 1 1 3 2

Note that with this choice, the second and third equations are satised. For
the remaining three equations, we obtain

4s2 8s + 3 = 0 ,

with the solutions


1
s1/2 = 1 .
2
In particular, for s1 = 1/2, the integration nodes and the corresponding
weights are
1 1 2 1 1 2 1
1 = , , 2 = , , 3 = , , 1 = 2 = 3 = ,
6 6 3 6 6 3 3
while for s2 = 3/2 the solution is
1 1  1 1  1
1 = , , 2 = 0, , 3 = ,0 , 1 = 2 = 3 = .
2 2 2 2 3
Note that both integration rules are exact for quadratic polynomials.
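This exactness is easy to verify numerically; the following Python sketch (not the book's code) checks the one-point rule and the first three-point rule against the exact values ∫_τ ξ₁^a ξ₂^b dξ = a! b! / (a + b + 2)!.

```python
# Verification (not the book's code) of the derived rules on the reference
# triangle: the exact value of the integral of xi1^a * xi2^b is a! b! / (a+b+2)!.
from math import factorial

def quad_rule(points, weights, f):
    return 0.5 * sum(w * f(*p) for p, w in zip(points, weights))

def exact(a, b):
    return factorial(a) * factorial(b) / factorial(a + b + 2)

midpoint = ([(1/3, 1/3)], [1.0])                                       # degree 1
three_pt = ([(1/6, 1/6), (2/3, 1/6), (1/6, 2/3)], [1/3, 1/3, 1/3])     # degree 2

for name, (pts, wts), degree in [("one-point rule", midpoint, 1),
                                 ("three-point rule", three_pt, 2)]:
    ok = all(abs(quad_rule(pts, wts, lambda x, y: x**a * y**b) - exact(a, b)) < 1e-14
             for a in range(degree + 1) for b in range(degree + 1 - a))
    print(name, "exact up to degree", degree, ":", ok)
```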
For k = 3 and M = 4, the nonlinear equations (C.1) read

1
4
11 22 d = 1 2
i i,1 i,2 for 1 , 2 : 1 + 2 3 .
2 i=1

Hence, we have to nd 12 parameters (i,1 , i,2 , i ) satisfying 10 nonlinear


equations. As in the previous case, we may introduce additional constraints
to construct particular solutions of the above nonlinear system. Due to sym-
metry, we dene the integration nodes as

1 1 s 1 1 s 2 0 s 1
1 = , 2 = , 3 = + , 4 = + ,
3 1 3 1 0 3 1 1 3 2

and the associated integration weights as

1 = , 2 = 3 = 4 .

From the equation for 1 = 2 = 0, we then nd


1
1 = , 2 = 3 = 4 = (1 ).
3
Note that both equations, for 1 , 2 : 1 + 2 = 1, are then satised. It
turns out that it is sucient to consider only two additional equations, the
remaining equations are then satised automatically. For 1 = 2 and 2 = 0,
it follows
  1
(1 ) s2 2s + 1 = ,
4
while for 1 = 3 and 2 = 0, we have
  17
(1 ) 8 18s + 12s2 2s3 = .
10
Thus, there are two equations for the two remaining unknowns and s. We
obtain
1 1 17 1
1 = = ,
4 s2 2s + 1 10 8 18s + 12s2 2s3
and, therefore,
(s 1)2 (5s 3) = 0 ,
with the solutions
3
s1/2 = 1, . s3 =
5
For s = 3/5, we then get the integration weights
9 25
1 = , 2 = 3 = 4 =
16 48
and the integration points

1 1 1 1
1 = , 2 = ,
3 1 5 1

1 3 1 1
3 = , 4 = .
5 1 5 3

However, the rst weight 1 is negative, and, therefore, the above quadrature
can not be used for practical computations.
For k = 5 and M = 7, the nonlinear equations (C.1) read

1
7
11 22 d = 1 2
i i,1 i,2 for all 1 , 2 : 1 + 2 5.
2 i=1

Hence, we have to nd 21 parameters (i,1 , i,2 , i ) satisfying 21 nonlinear


equations. As in the previous cases, we introduce additional constraints to
construct a solution of the above nonlinear equations. Due to symmetry, we
dene the integration nodes as

1 1 s 1 1 s 2 0 s 1
1 = , 2 = , 3 = + , 4 = +
3 1 3 1 0 3 1 1 3 2

and

1 1 t 1 1 1 t 1 1 0 t 2
5 = + , 6 = + , 7 = + ,
2 0 6 2 2 1 6 1 2 1 6 1
as well as the associated integration weights as

1 = , 2 = 3 = 4 , 5 = 6 = 7 .

From the equation for 1 = 2 = 0, we easily nd

1 = 1 32 35 .

With this, the equations for 1 , 2 : 1 + 2 = 1 are satised automatically.


Moreover, all the remaining equations can be reduced to the four equations
for the four parameters 2 , 5 , s, and t:

122 (s 1)2 + 35 (t 1)2 = 1 ,


1202 (s 1) (4 s) + 155 (t 1)2 (t + 5) = 34 ,
2

2402 (s 1)2 (3s2 10s + 13) + 155 (t 1)2 (3t2 + 2t + 19) = 176 ,
33602 (s 1)2 (8 11s + 6s2 s3 ) + 1055 (13 t + 3t2 + t3 ) = 1184 .

From the rst two equations, we nd

1 35 (t 1)2 10s 6
2 = , 5 = .
12(s 1)2 15(t 1)2 (2s + t 3)

Inserting this into the third equation, this gives

s(9 5t) + 3(t 1) = 0 ,

and, therefore,
t1
s = 3 .
5t 9
The fourth equation is nally equivalent to

7t2 18t + 3 = 0 .

Thus,

\[ t_{1/2} = \frac{9 \pm 2\sqrt{15}}{7}. \]

Hence, we have

\[ t = \frac{9 - 2\sqrt{15}}{7}, \qquad s = \frac{6 - \sqrt{15}}{7}. \]

For the integration weights, we find

\[ \omega_5 = \frac{155 + \sqrt{15}}{1200}, \qquad \omega_2 = \frac{155 - \sqrt{15}}{1200}, \qquad \omega_1 = \frac{9}{40}, \]

while for the integration nodes, we finally obtain

\[ \xi_2 = \frac{1}{21} \begin{pmatrix} 6 - \sqrt{15} \\ 6 - \sqrt{15} \end{pmatrix}, \quad \xi_3 = \frac{1}{21} \begin{pmatrix} 9 + 2\sqrt{15} \\ 6 - \sqrt{15} \end{pmatrix}, \quad \xi_4 = \frac{1}{21} \begin{pmatrix} 6 - \sqrt{15} \\ 9 + 2\sqrt{15} \end{pmatrix} \]

and

\[ \xi_5 = \frac{1}{21} \begin{pmatrix} 6 + \sqrt{15} \\ 6 + \sqrt{15} \end{pmatrix}, \quad \xi_6 = \frac{1}{21} \begin{pmatrix} 6 + \sqrt{15} \\ 9 - 2\sqrt{15} \end{pmatrix}, \quad \xi_7 = \frac{1}{21} \begin{pmatrix} 9 - 2\sqrt{15} \\ 6 + \sqrt{15} \end{pmatrix}. \]
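The derived seven-point rule can be checked numerically: the sketch below (not the book's code) verifies that it integrates all monomials of total degree up to five exactly over the reference triangle.

```python
# Verification (not the book's code): the seven-point rule integrates every
# monomial xi1^a * xi2^b with a + b <= 5 exactly over the reference triangle,
# where the exact value is a! b! / (a + b + 2)!.
from math import factorial, sqrt

s15 = sqrt(15.0)
points = [(1/3, 1/3),
          ((6 - s15)/21, (6 - s15)/21), ((9 + 2*s15)/21, (6 - s15)/21),
          ((6 - s15)/21, (9 + 2*s15)/21),
          ((6 + s15)/21, (6 + s15)/21), ((6 + s15)/21, (9 - 2*s15)/21),
          ((9 - 2*s15)/21, (6 + s15)/21)]
weights = [9/40] + 3 * [(155 - s15)/1200] + 3 * [(155 + s15)/1200]

max_err = 0.0
for a in range(6):
    for b in range(6 - a):
        q = 0.5 * sum(w * x**a * y**b for (x, y), w in zip(points, weights))
        max_err = max(max_err, abs(q - factorial(a) * factorial(b) / factorial(a + b + 2)))
print("maximal error over all monomials of degree <= 5:", max_err)
```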

C.2 Analytic Integration


For the boundary = of a Lipschitz domain R3 , we consider a
sequence of boundary element meshes (2.1),

'
N
N = .
=1

We assume that all boundary elements τℓ are plane triangles. In order not to
overload the subsequent formulae, we consider a plane triangle τ ⊂ R³ given
via its three corner points x1, x2, and x3 (cf. Fig. C.1), having in mind that
this triangle is one of the boundary elements τℓ, ℓ = 1, …, N.
We first define a suitable local coordinate system connected to the boundary
element with the origin in x1 and having the basis vectors
[Figure: boundary element $\tau$ with corner points $x^1$, $x^2$, $x^3$, the local directions $r_1$, $r_2$, the normal vector $n$, and the foot point $\bar x$ of the height through $x^1$.]
Fig. C.1. Boundary element
\[
r_1\,, \quad r_2\,, \quad n\,.
\]
The unit vector $r_2$ directed from $x^2$ to $x^3$ is defined as follows:
\[
r_2 = \frac{1}{\tilde t}\,(x^3 - x^2)\,, \qquad \tilde t = |x^3 - x^2|\,.
\]
Here, $\tilde t > 0$ denotes the length of the side $x^2x^3$ of the triangle $\tau$. This is one of the characteristic quantities we will need for the analytical integration.
To find the unit vector $r_1$, which is directed along the line through the origin in $x^1$ and perpendicular to $r_2$, we first determine the intersection point $\bar x$,
\[
\bar x = x^1 + \tilde s\, r_1 = x^2 + \bar t\, r_2\,,
\]
where $\tilde s > 0$, the height of the triangle $\tau$, will be the second characteristic quantity. The parameter $\bar t$ can be easily computed as
\[
\bar t = (x^1 - x^2, r_2)\,.
\]
Then,
\[
r_1 = \frac{1}{\tilde s}\,(\bar x - x^1)\,, \qquad \tilde s = |\bar x - x^1|\,.
\]
Finally, the corresponding normal vector is defined by
\[
n = r_1 \times r_2\,.
\]
Two further characteristic quantities we will need for the analytical integration are, up to sign, the tangents of the angles between the height $x^1\bar x$ and the sides $x^1x^2$ and $x^1x^3$, respectively. These quantities can be computed as
\[
\tau_1 = -\frac{\bar t}{\tilde s}\,, \qquad \tau_2 = \frac{\tilde t - \bar t}{\tilde s}\,.
\]
Thus, an appropriate parametrisation of the boundary element $\tau$ would be
\[
\tau = \big\{\, y = y(s,t) = x^1 + s\, r_1 + t\, r_2 \;:\; 0 < s < \tilde s\,,\ \tau_1 s < t < \tau_2 s \,\big\}\,.
\]
Any point $x \in \mathbb{R}^3$ can now be represented in the new coordinate system as
\[
x = x^1 + s_x\, r_1 + t_x\, r_2 + u_x\, n
\]
with
\[
s_x = (x - x^1, r_1)\,, \qquad t_x = (x - x^1, r_2)\,, \qquad u_x = (x - x^1, n)\,.
\]
From this, we find the distance between any arbitrary point $x \in \mathbb{R}^3$ and a point $y \in \tau$ as follows:
\[
|x - y|^2 = (s - s_x)^2 + (t - t_x)^2 + u_x^2\,.
\]
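The geometric quantities introduced above are straightforward to compute. The following Python sketch (ours) assembles the local frame $(r_1, r_2, n)$, the characteristic quantities $\tilde t$, $\bar t$, $\tilde s$, $\tau_1$, $\tau_2$, and the local coordinates $(s_x, t_x, u_x)$ of an arbitrary point; the sign convention for $\tau_1$ follows the reconstruction given above.

```python
import numpy as np

def local_frame(x1, x2, x3):
    """Local coordinate system of a plane triangle with corners x1, x2, x3."""
    t_side = np.linalg.norm(x3 - x2)          # side length |x3 - x2|
    r2 = (x3 - x2) / t_side
    t_foot = np.dot(x1 - x2, r2)              # foot-point parameter along r2
    x_bar = x2 + t_foot * r2                  # foot of the height through x1
    s_height = np.linalg.norm(x_bar - x1)     # height of the triangle
    r1 = (x_bar - x1) / s_height
    n = np.cross(r1, r2)                      # unit normal
    tau1 = -t_foot / s_height                 # signed slopes of the two sides
    tau2 = (t_side - t_foot) / s_height
    return r1, r2, n, s_height, tau1, tau2

def local_coords(x, x1, r1, r2, n):
    """Coordinates (s_x, t_x, u_x) of an arbitrary point x in the local frame."""
    d = x - x1
    return np.dot(d, r1), np.dot(d, r2), np.dot(d, n)

# Example: a triangle in the plane z = 0 and an observation point above it
x1, x2, x3 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0.3, 0.8, 0.])
r1, r2, n, s_h, tau1, tau2 = local_frame(x1, x2, x3)
sx, tx, ux = local_coords(np.array([0.2, 0.1, 0.5]), x1, r1, r2, n)
```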

C.2.1 Single Layer Potential for the Laplace operator

For $x \in \mathbb{R}^3$, we first consider the following function:
\begin{align*}
S(\tau, x) &= \frac{1}{4\pi} \int_{\tau} \frac{1}{|x-y|}\, ds_y
= \frac{1}{4\pi} \int_0^{\tilde s} \int_{\tau_1 s}^{\tau_2 s}
\frac{1}{\sqrt{(s-s_x)^2 + (t-t_x)^2 + u_x^2}}\; dt\, ds \\
&= \frac{1}{4\pi} \int_0^{\tilde s}
\Big[ \log\Big( t - t_x + \sqrt{(s-s_x)^2 + (t-t_x)^2 + u_x^2} \Big) \Big]_{\tau_1 s}^{\tau_2 s}\, ds \\
&= \frac{1}{4\pi} \Big( F(\tilde s, \tau_2) - F(0, \tau_2) - F(\tilde s, \tau_1) + F(0, \tau_1) \Big)\,.
\end{align*}
Hence, it is sufficient to compute the integral
\begin{align*}
F(s, \tau) &= \int \log\Big( \tau s - t_x + \sqrt{(s-s_x)^2 + (\tau s - t_x)^2 + u_x^2} \Big)\, ds \\
&= \int \log\Big( \tau s - t_x + \sqrt{(1+\tau^2)(s-p)^2 + q^2} \Big)\, ds\,,
\end{align*}
with parameters
\[
p = \frac{s_x + \tau\, t_x}{1+\tau^2}\,, \qquad q^2 = u_x^2 + \frac{(t_x - \tau\, s_x)^2}{1+\tau^2}\,.
\]
Integration by parts gives
 * 
F (s, ) = (s sx ) log s tx + (1 + 2 )(s p)2 + q 2 s
 *
(tx sx ) (1 + 2 )(s p)2 + q 2 + (1 + 2 )(s p)(p s ) q 2
* ds .
(1 + )2 (s p)2 + q 2 + (s tx ) (1 + 2 )(s p)2 + q 2

With the substitution


q ds q
s = p+ sinh u, = cosh u ,
1 + 2 du 1 + 2
we obtain for the remaining integral
 *
(tx sx ) (1 + 2 )(s p)2 + q 2 + (1 + 2 )(s p)(p sx ) q 2
I= * ds
(1 + )2 (s p)2 + q 2 + (s tx ) (1 + 2 )(s p)2 + q 2


1 + 2 (tx sx ) cosh u + (tx sx ) sinh u 1 + 2 q
=q du.
q(1 + 2 ) cosh u + 1 + 2 q sinh u + (sx tx )

The second substitution


u 2v 1 + v2 du 2
v = tanh , sinh u = , cosh u = , =
2 1 v2 1 v2 dv 1 v2
gives for the last integral

1 1 + 2 (tx sx )(1 + v 2 ) + 2(tx sx )v 1 + 2 q(1 v 2 )
2q dv,
1 v2 q(1 + 2 )(1 + v 2 ) + 2 1 + 2 qv + (sx tx )(1 v 2 )

and, therefore,

1 B2 v 2 + B1 v + B0
I = 2q dv ,
1 v 2 A2 v 2 + A1 v + A0
with parameters
*
B2 = 1 + 2 (tx sx + q),
B1 = 2(tx sx ),
*
B0 = 1 + 2 (tx sx q),
A2 = (1 + 2 )q (sx tx ),
*
A1 = 2 1 + 2 q,
A0 = (1 + 2 )q + (sx tx ).

Due to
A21 4A0 A2 = 4(1 + 2 )u2x 0 ,
we decompose

1 B2 v 2 + B1 v + B0 C1 C2 C3 v + C4
= + +
1 v A2 v + A1 v + A0
2 2 1 v 1 + v A2 v + A1 v + A0
2

to obtain

1 B0 + B1 + B2
C1 = ,
2 A0 + A1 + A2
1 B0 B1 + B2
C2 = ,
2 A0 A1 + A2
C3 = A2 (C1 C2 ),

C4 = B0 (C1 + C2 )A0 ,
and, therefore,

tx sx 1 + 2 u2x
C1 = C 2 = , C3 = 0, C4 = .
2q 1 + 2 q
Hence,

tx sx  1 1  * 1
I = + 2 1 + ux
2 2
dv .
1 + 2 1 v 1 + v A2 v 2 + A1 v + A0
Integrating the first part and resubstituting, this gives
   
1 1  1 + v
dv = log |v + 1| log |v 1| = log  
1+v v1 1 v

1 + 2 (s p)
= 2 arctanh v = u = arcsinh
q
 5 
1 + 2 (s p) (1 + 2 )(s p)2
= log + +1
q q2
* * 
= log 1 + 2 (s p) + (1 + 2 )(s p)2 + q 2 log q .

For the remaining integral, we obtain


* 
1 2A2 v + A1
2 1 + 2 u2x 2
dv = 2ux arctan .
A2 v + A1 v + A0 2 1 + 2 ux
From $\sinh u = 2v/(1-v^2)$, we find
* *
1 + sinh2 u 1 (1 + 2 )(s p)2 + q 2 q
v = = ,
sinh u 1 + 2 (s p)
and, therefore,
*  
2A2 v + A1 A2 (1 + 2 )(s p)2 + q 2 + (1 + 2 )(s p) A2 q
=
2 1 + 2 ux (1 + 2 )(s p)
 *  
x tx
q s
1+2 (1 + 2 )(s p)2 + q 2 + s tx q q
= .
(s p)ux

Collecting all terms together and ignoring constant parts, we finally obtain
 * 
F (s, ) = (s sx ) log s tx + (s sx )2 + (s tx )2 + u2x s

sx tx * * 
+ log 1 + 2 (s p) + (1 + 2 )(s p)2 + q 2
1 + 2
 *  
x tx
q s
1+2 (1 + 2 )(s p)2 + q 2 + s tx q q
+2ux arctan .
(s p)ux
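Closed-form expressions such as $F(s,\tau)$ are conveniently validated against a direct numerical evaluation of $S(\tau,x)$ in the local coordinates. The sketch below (ours; it does not implement the analytic formula, it only integrates the kernel numerically with scipy, which is assumed to be available) uses the parametrisation $0 < s < \tilde s$, $\tau_1 s < t < \tau_2 s$ introduced before.

```python
import numpy as np
from scipy.integrate import dblquad

def single_layer_numeric(s_tilde, tau1, tau2, sx, tx, ux):
    """Numerical value of S(tau, x) = 1/(4 pi) * int_tau 1/|x-y| ds_y
    in local coordinates, for cross-checking analytic formulas."""
    kernel = lambda t, s: 1.0 / np.sqrt((s - sx)**2 + (t - tx)**2 + ux**2)
    val, _ = dblquad(kernel,
                     0.0, s_tilde,           # outer variable s
                     lambda s: tau1 * s,     # inner lower limit t = tau1 * s
                     lambda s: tau2 * s)     # inner upper limit t = tau2 * s
    return val / (4.0 * np.pi)

# Example: unit right triangle (s_tilde = 1, tau1 = 0, tau2 = 1), point off the plane
print(single_layer_numeric(1.0, 0.0, 1.0, 0.3, 0.2, 0.5))
```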

C.2.2 Double Layer Potential for the Laplace operator

For piecewise linear basis functions we have to compute the local contributions
of the integrals

1 (x y, n(y))
Di (, x) = ,i (y)dsy
4 |x y|3

s 2 s
1 s s ux
=   dtds
4 s (t t )2 + (s s )2 + u2 3/2
0 1 s x x x

s 6 7 2 s
ux s s 1 t tx
= * ds
4 s (s sx )2 + u2x (t tx )2 + (s sx )2 + u2x
0 1 s

1  
= F (s , 2 ) F (0, 2 ) F (s , 1 ) + F (0, 1 ) .
4s
Hence, it is sufficient to compute the integral

(s s)(s tx )
F (s, ) = ux  * ds
(s sx )2 + u2x (1 + 2 )(s p)2 + q 2
  
(s sx ) (sx tx ) (s sx ) + (s sx )(sx tx ) + u2x
= ux  * ds
(s sx )2 + u2x (1 + 2 )(s p)2 + q 2

1
ux * ds
(1 + )(s p)2 + q 2
2
  
(s sx ) (sx tx ) (s sx ) + (s sx )(sx tx ) + u2x
= ux  * ds
(s sx )2 + u2x (1 + 2 )(s p)2 + q 2
* * 
ux log 1 + 2 (s p) + (1 + 2 )(s p)2 + q 2 ,
1 + 2
with parameters p and q already used for the single layer potential:
tx + sx (tx sx )2
p= , q 2 = u2x + .
1 + 2 1 + 2

With the substitution


q ds q
s = p+ sinh u, = cosh u ,
1 + 2 du 1 + 2
we obtain for the remaining integral
  
(s sx ) (sx tx ) (s sx ) + (s sx )(sx tx ) + u2x
I = ux  * ds =
(s sx )2 + u2x (1 + 2 )(s p)2 + q 2
 q s +t 2s  sinh u+ 1 (1 + 2 )q 2 +(s s )(t s )
x x 1+2 x x x
ux   du .
q 2 sinh2 u + 2 1 + 2 q(p sx ) sinh u + (1 + 2 ) (p sx )2 + u2x

The second substitution


u 2v 1 + v2 du 2
v = tanh , sinh u = , cosh u = , =
2 1 v2 1 v2 dv 1 v2
gives for the last integral
 2q s +t 2s v+ 1 (1+2 )q 2+(s s )(t s )(1v 2 )
x x 1+2 x x x
2ux   dv,
4q 2 v 2 +4 1+2 q(psx )v(1v 2 ) + (1+2 ) (p sx )2 +u2x (1v 2 )2

and, therefore,

C 2 v 2 + C 1 v C2
I = 2ux dv ,
a1 v 4 a2 v 3 + a3 v 2 + a2 v + a1
with parameters
1  
C2 = (s sx )(tx sx ) (1 + 2 )q 2 ,
1+ 2
 
C1 = 2q s + tx 2sx ,
 
a1 = (1 + 2 ) u2x + (p sx )2 = u2x + 2 q 2 > 0 ,
* 4q(tx sx )
a2 = 4q 1 + 2 (p ss ) = ,
1 + 2
a3 = 4q 2 2a1 ,

or 
2ux C2 v 2 + C1 v C2
I = dv ,
u2x + 2 q 2 v4 av 3 + bv 2 + av + 1
with
4q(tx sx ) 4q 2
a =  , b = 2.
1 + 2 u2x + 2 q 2 u2x + 2 q 2
For the decomposition

C 2 v 2 + C 1 v C2 D1 v + E 1 D2 v + E2
= 2 + 2 ,
v4 av 3 + bv 2 + av + 1 v + A1 v + B1 v + A2 v + B2
we find the coefficients

2 1 + 2 q tx sx
A1/2 = 2 q ,
ux + 2 q 2 1 + 2
2
1 + 2 tx sx
B1/2 = 2 q ,
ux + 2 q 2 1 + 2
as well as
1 2 
D1 = ux + 2 q 2 ,
2
1 2 
D2 = ux + 2 q 2 ,
2
1   
E1 = s sx + q tx sx + (1 + 2 )q ,
2 1+ 2

1   
E2 = s sx q tx sx (1 + 2 )q .
2 1 + 2
Hence, we have
 
2ux D1 v + E1 D2 v + E2
I = 2 dv + dv ,
ux + 2 q 2 v 2 + A1 v + B1 v 2 + A2 v + B2
and it remains to integrate

2ux D1/2 v + E1/2
I1/2 = 2 dv
ux + 2 q 2 v 2 + A1/2 v + B1/2

2ux 1  
= 2 2 2
D1/2 log v 2 + A1/2 v + B1/2
ux + q 2

 1  1
+ E1/2 A1/2 B1/2 dv
2 v 2 + A1/2 v + B1/2
1  
= ux log v 2 + A1/2 v + B1/2
2 
2ux  1  1
+ 2 2 2
E1/2 A1/2 B1/2 2
dv .
ux + q 2 v + A1/2 v + B1/2

Due to
2
1 (1 + 2 )u2x tx sx
G21/2 = B1/2 A21/2 =  2 q ,
4 u2x + 2 q 2 1 + 2

we find
1 + 2 |ux | tx sx
G1/2 = q > 0.
u2x + 2 q 2 1 + 2

Hence, we obtain

 1  1
E1/2 A1/2 D1/2 dv
2 v 2 + A1/2 v + B1/2

 1  1
= E1/2 A1/2 D1/2 1 dv
2 (v + 2 A1/2 )2 + G21/2

1  1  2v + A1/2
= E1/2 A1/2 D1/2 arctan
G1/2 2 2G1/2

u2x + 2 q 2 2v + A1/2
= (s sx ) arctan .
2|ux | 2G1/2

Therefore,

1   ux 2v + A1/2
I1/2 = ux log v 2 + A1/2 v + B1/2 (s sx ) arctan .
2 |ux | 2G1/2

Recall that
* *
1 + sinh2 u 1 (1 + 2 )(s p)2 + q 2 q
v = = .
sinh u 1 + 2 (s p)
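In the same spirit, the double layer contribution $D_i(\tau,x)$ can be cross-checked by direct numerical integration in the local coordinates. The following sketch (ours, again based on scipy) does this for the particular linear shape function $(\tilde s - s)/\tilde s$ appearing in the integrand above.

```python
import numpy as np
from scipy.integrate import dblquad

def double_layer_numeric(s_tilde, tau1, tau2, sx, tx, ux):
    """Numerical value of 1/(4 pi) * int_tau (x - y, n(y)) / |x - y|^3 * psi(y) ds_y
    for the linear shape function psi = (s_tilde - s) / s_tilde."""
    def integrand(t, s):
        dist2 = (s - sx)**2 + (t - tx)**2 + ux**2
        return (s_tilde - s) / s_tilde * ux / dist2**1.5
    val, _ = dblquad(integrand, 0.0, s_tilde,
                     lambda s: tau1 * s, lambda s: tau2 * s)
    return val / (4.0 * np.pi)
```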

C.2.3 Linear Elasticity Single Layer Potential

By the use of Lemma 1.15, it is sufficient to describe the computation of



1 (xi yi )(xj yj )
Sij (, x) = dsy .
4 |x y|3

With the local parametrisations

y = x1 + s r1 + t r2 , x = x1 + sx r1 + tx r2 + ux n ,

the kernel function can be written as


(xi yi )(xj yj ) aij
=  1/2 +
|x y|3 (s sx ) + (t tx )2 + u2x
2

 
bij (s sx ) cij ux (t tx ) dij (s sx )2 eij (s sx )ux + fij u2x
 3/2 +  3/2
(s sx )2 + (t tx )2 + u2x (s sx )2 + (t tx )2 + u2x

with the parameters



aij = r2,i r2,j ,


bij = r1,i r2,j + r2,i r1,j ,
cij = r2,i nj + r2,j ni ,
dij = r1,i r1,j aij ,
eij = r1,i nj + r1,j ni ,
fij = ni nj aij .

Hence, we have to compute


1 2 3
Sij (, x) = Sij (, x) + Sij (, x) + Sij (, x)

with
s 2 s
1 1 1
Sij (, x) = aij
4  1/2 dt ds,
0 1 s
(s sx )2 + (t tx )2 + u2x

s 2 s  
2 1 bij (s sx ) cij ux (t tx )
Sij (, x) =   dt ds,
4 (s s )2 + (t t )2 + u2 3/2
0 s x x x
1

s 2 s
3 1 dij (s sx )2 eij (s sx )ux + fij u2x
Sij (, x) =
4  3/2 dt ds.
0 1 s
(s sx )2 + (t tx )2 + u2x

Note that $S_{ij}^1(\tau, x)$ corresponds to the single layer potential function for the Laplace equation.
For the second entry we obtain
s 2 s  
2 1 bij (s sx ) cij ux (t tx )
Sij (, x) =
4  3/2
0 s
(s sx )2 + (t tx )2 + u2x
1

s 6 72 s
1 bij (s sx ) cij ux
=  1/2 ds
4 (s sx )2 + (t tx )2 + u2x
0 1 s
1  2 
= Fij (s , 2 ) Fij2 (s , 1 ) Fij2 (0, 2 ) + Fij2 (0, 1 ) ,
4
with the integral

bij (s sx ) cij ux
Fij2 (s, ) =  1/2 ds
(s sx )2 + (s tx )2 + u2x

b (s sx ) cij ux
= * ij ds
(1 + 2 )(s p)2 + q 2
 
sp bij (p sx ) cij ux
= bij * ds + * ds
(1 + )(s p)2 + q 2
2 (1 + 2 )(s p)2 + q 2
bij  1/2
= (1 + 2 )(s p)2 + q 2
1 + 2
bij (p sx ) cij ux * * 
+ log 1 + 2 (s p) + (1 + 2 )(s p)2 + q 2 .
1 + 2
For the third integral, we get
s 2 s
3 1 dij (s sx )2 eij (s sx )ux + fij u2x
Sij (, x) =
4  3/2 dt ds
0 1 s
(s sx )2 + (t tx )2 + u2x

s 6 7 2 s
t tx dij (s sx )2 eij (s sx )ux + fij u2x
= * ds
(s sx )2 + u2x (s sx )2 + (t tx )2 + u2x
0 1 s

1  3 
= Fij (s , 2 ) Fij3 (s , 1 ) Fij3 (0, 2 ) + Fij3 (0, 1 )
4
with

s tx dij (s sx )2 eij (s sx )ux + fij u2x
Fij3 (s, ) = * ds
(s sx ) + ux
2 2
(s sx )2 + (s tx )2 + u2x
    
s tx dij (s sx )2 + u2x eij (s sx )ux + fij dij u2x
= * ds
(s sx )2 + u2x (s sx )2 + (s tx )2 + u2x

s tx
= dij * ds
(1 + 2 )(s p)2 + q 2

(s sx )ux s tx
eij * ds
(s sx ) + ux (1 + 2 )(s p)2 + q 2
2 2


  u2x s tx
+ fij dij * ds
(s sx )2 + u2x (1 + 2 )(s p)2 + q 2
 
= dij F 3,1 (s, ) eij F 3,2 (s, ) + fij dij F 3,3 (s, )

and

s t
F 3,1
(s, ) = * ds
(1 + 2 )(s p)2 + q 2
 
sp p t
= * ds + * ds
(1 + 2 )(s p)2 + q 2 (1 + 2 )(s p)2 + q 2
*
= (1 + 2 )(s p)2 + q 2
1 + 2
p t * * 
+ log 1 + 2 (s p) + (1 + 2 )(s p)2 + q 2 .
1 + 2
The remaining integrals can be computed in the same way as for the double layer potential of the Laplace operator. In particular, up to the sign and the choice $s = \tilde s$, the integral

(s sx )ux s tx
3,2
F (s, ) = * ds
(s sx )2 + u2x (1 + 2 )(s p)2 + q 2

coincides with the integral of the double layer potential function of the Laplace
operator.
Furthermore, using the transformations as for the Laplace operator, we
have

u2x s tx
F 3,3 (s, ) = * ds
(s sx )2 + u2x (1 + 2 )(s p)2 + q 2

q sinh u + 1 + 2 (p tx )
= u2x du
q 2 sinh2 u + 2 1 + 2 q(p sx ) sinh u + (1 + 2 )[u2x + (p sx )2 ]

q2v+ 1+2 (ptx )(1v 2 )
= 2u2x   dv
4q 2 v 2+4 1+2 q(psx )v(1v 2 )+(1+2 ) u2x +(p sx )2 (1v 2 )2

C2 v 2 + C1 v C 2
= 2u2x dv ,
a 1 v a 2 v 3 + a 3 v 2 + a 2 v + a1
4

with the parameters


*
C2 = 1 + 2 (p tx ), C1 = 2q ,

and
4q(tx sx )
a1 = u2x + 2 q 2 , a2 = , a3 = 4q 2 2a1 .
1 + 2
Hence, we get

2u2 C2 v 2 + C1 v C2
F 3,3
(s, ) = 2 x 2 2 dv
ux + q (v 2 + A1 v + B1 )(v 2 + A2 v + B2 )

with
2
2 1 + 2 q tx sx 1 + 2 tx sx
A1/2 = 2 q , B 1/2 = q .
ux + 2 q 2 1 + 2 u2x + 2 q 2 1 + 2

For the decomposition



C2 v 2 + C1 v C2 D1 v + E1 D2 v + E2
= 2 + 2 ,
(v 2 + A1 v + B1 )(v 2 + A2 v + B2 ) v + A1 v + B1 v + A2 v + B2
we find the coefficients
(tx sx ) + (1 + 2 )q (tx sx ) (1 + 2 )q
D1 = D2 = 0, E1 = , E2 = .
2 1+ 2 2 1 + 2
Therefore,
 
3,3 2 2 E1 E2
F (s, ) = 2  2 2 dv + dv ,
 + q v 2 + A1 v + B1 v 2 + A2 v + B2

and it remains to integrate



u2x (tx sx ) (1 + 2 )q 1
I1/2 = 2 2 2
dv
ux + q 1 + 2 v2 + A1/2 v + B1/2

u2 (tx sx ) (1 + 2 )q 1
= 2 x2 2 1 dv
ux + q 1 + 2 (v + 2 A1/2 )
2 + G21/2

1 u2x (tx sx ) (1 + 2 )q 2v + A1/2


= 2 2 2
arctan ,
G1/2 ux + q 1+ 2 2G1/2

with
1 + 2 | | tx sx
G1/2 = q > 0.
u2x + 2 q 2 1 + 2
Finally,
2v + A1/2
I1/2 = |ux | arctan .
2G1/2

C.3 Iterative Solution Methods


In this section we consider the iterative solution of a linear system
\[
A x = f\,, \qquad A \in \mathbb{R}^{N \times N}\,, \quad x, f \in \mathbb{R}^N\,. \tag{C.2}
\]
Note that the dimension $N$ of the system is usually connected to the discretisation parameter $h$ (cf. (2.2)), and, therefore, we have a sequence of linear systems to be solved for $N \to \infty$. We first consider symmetric and positive definite systems and later general systems with regular matrices.

C.3.1 Conjugate Gradient Method (CG)

For a symmetric and positive definite matrix $A \in \mathbb{R}^{N \times N}$, we may define an inner product
\[
(u, v)_A = (A u, v) \qquad \text{for all } u, v \in \mathbb{R}^N\,.
\]
A system of vectors
\[
P = \big\{ p^k \big\}_{k=0}^{N-1}\,, \qquad p^k \neq 0\,, \quad k = 0, \ldots, N-1\,, \tag{C.3}
\]
is called conjugate, or $A$-orthogonal, if there holds
\[
(A p^\ell, p^k) = 0 \qquad \text{for } k \neq \ell\,.
\]

Note that $(A p^k, p^k) > 0$ since $A$ is assumed to be positive definite. The vector system $P$ forms an $A$-orthogonal basis of the vector space $\mathbb{R}^N$. Hence, the solution vector $x \in \mathbb{R}^N$ of the linear system (C.2) can be written as
\[
x = x^0 - \sum_{\ell=0}^{N-1} \alpha_\ell\, p^\ell
\]
with an arbitrary given vector $x^0 \in \mathbb{R}^N$. To find the yet unknown coefficients $\alpha_\ell$, we consider the linear system
\[
A x = A x^0 - \sum_{\ell=0}^{N-1} \alpha_\ell\, A p^\ell = f\,.
\]
Taking the Euclidean inner product with $p^k$, this gives
\[
\sum_{\ell=0}^{N-1} \alpha_\ell\, (A p^\ell, p^k) = (A x^0 - f, p^k) \qquad \text{for } k = 0, \ldots, N-1\,,
\]
and, therefore, by using the $A$-orthogonality of the system $P$,
\[
\alpha_k = \frac{(A x^0 - f, p^k)}{(A p^k, p^k)} \qquad \text{for } k = 0, \ldots, N-1\,. \tag{C.4}
\]

Thus, if a system $P$ of $A$-orthogonal vectors $p^k$ is given, we can compute the coefficients $\alpha_k$ from (C.4), and, therefore, the solution $x$ of the linear system (C.2). With it, this method can be seen as a direct solution algorithm. To define an iterative process, we introduce approximate solutions as
\[
x^{k+1} = x^0 - \sum_{\ell=0}^{k} \alpha_\ell\, p^\ell = x^k - \alpha_k\, p^k \qquad \text{for } k = 0, \ldots, N-1\,.
\]
For the computation of the coefficient $\alpha_k$, we obtain from (C.4)
\[
\alpha_k = \frac{(A x^0 - f, p^k)}{(A p^k, p^k)}
= \frac{1}{(A p^k, p^k)} \Big( A x^0 - f - \sum_{\ell=0}^{k-1} \alpha_\ell\, A p^\ell,\; p^k \Big)
= \frac{(A x^k - f, p^k)}{(A p^k, p^k)}\,,
\]
when using the $A$-orthogonality of the system (C.3). Therefore,
\[
\alpha_k = \frac{(r^k, p^k)}{(A p^k, p^k)} \qquad \text{for } k = 0, \ldots, N-1\,, \tag{C.5}
\]
where
\[
r^k = A x^k - f
\]
is the residual vector induced by the approximate solution $x^k$. Note that the following recursion holds:
\[
r^{k+1} = A x^{k+1} - f = A(x^k - \alpha_k p^k) - f = r^k - \alpha_k\, A p^k
\]
for $k = 0, \ldots, N-2$. Moreover, we have
\[
(r^{k+1}, p^k) = (r^k - \alpha_k A p^k, p^k) = (r^k, p^k) - \frac{(r^k, p^k)}{(A p^k, p^k)}\,(A p^k, p^k) = 0
\]
for $k = 0, \ldots, N-2$. From
\[
(r^{k+1}, p^\ell) = (r^k, p^\ell) - \alpha_k\,(A p^k, p^\ell)\,,
\]
it follows by induction that
\[
(r^{k+1}, p^\ell) = 0 \qquad \text{for } \ell = 0, \ldots, k\,, \quad k = 0, \ldots, N-2\,. \tag{C.6}
\]
It remains to construct an $A$-orthogonal vector system $P$ via the orthogonalisation method of Gram--Schmidt. Let $W$ be any system of linearly independent vectors $w^k$. Setting $p^0 = w^0$, we can compute $A$-orthogonal vectors $p^k$ for $k = 0, \ldots, N-2$ as follows:
\[
p^{k+1} = w^{k+1} - \sum_{\ell=0}^{k} \beta_{k\ell}\, p^\ell\,, \qquad
\beta_{k\ell} = \frac{(A w^{k+1}, p^\ell)}{(A p^\ell, p^\ell)} \quad \text{for } \ell = 0, \ldots, k\,. \tag{C.7}
\]
From (C.7) and (C.6), we further conclude
\[
(r^{k+1}, w^\ell) = (r^{k+1}, p^\ell) + \sum_{j=0}^{\ell-1} \beta_{\ell-1,j}\,(r^{k+1}, p^j) = 0 \tag{C.8}
\]
for $\ell = 0, \ldots, k$. In particular, if the residual vector $r^{k+1}$ does not vanish, the vectors
\[
\{w^0, w^1, \ldots, w^k, r^{k+1}\}
\]
are linearly independent and we can choose $w^{k+1} = r^{k+1}$. If $r^{k+1} = 0$, then the system (C.2) is solved. In general, this will not happen and we can choose
\[
w^k = r^k \qquad \text{for } k = 0, \ldots, N-1\,.
\]
From (C.8), we then conclude
\[
(r^{k+1}, r^\ell) = 0 \qquad \text{for all } \ell = 0, \ldots, k\,.
\]
For the numerator of the coefficient $\alpha_k$ defined in (C.5), we obtain
\[
(r^k, p^k) = (r^k, r^k) - \sum_{\ell=0}^{k-1} \beta_{k-1,\ell}\,(r^k, p^\ell) = (r^k, r^k) =: \varrho_k\,,
\]
by using the recursion (C.7) and the orthogonality relation (C.6). From $\varrho_\ell > 0$, it follows $\alpha_\ell > 0$ for $\ell = 0, \ldots, k$, and, therefore, we can write
\[
A p^\ell = \frac{1}{\alpha_\ell}\,\big( r^\ell - r^{\ell+1} \big)\,.
\]
For the numerator of the coefficients $\beta_{k\ell}$ in (C.7), we then obtain
\[
(A w^{k+1}, p^\ell) = (r^{k+1}, A p^\ell) = \frac{1}{\alpha_\ell}\,\big( r^{k+1}, r^\ell - r^{\ell+1} \big) = 0
\]
for $\ell = 0, \ldots, k-1$, and
\[
(A w^{k+1}, p^k) = -\frac{1}{\alpha_k}\,\big( r^{k+1}, r^{k+1} \big) = -\frac{\varrho_{k+1}}{\alpha_k}
\]
for $\ell = k$. Hence, we have $\beta_{k\ell} = 0$ for $\ell = 0, \ldots, k-1$ and
\[
\beta_{kk} = -\beta_k = -\frac{\varrho_{k+1}}{\alpha_k\,(A p^k, p^k)}\,.
\]
Using $r^{k+1} = r^k - \alpha_k A p^k$, we finally obtain
\[
\alpha_k\,(A p^k, p^k) = (r^k - r^{k+1}, p^k) = (r^k, p^k) = \varrho_k\,,
\]
and, therefore,
\[
p^{k+1} = r^{k+1} + \beta_k\, p^k\,, \qquad \beta_k = \frac{\varrho_{k+1}}{\varrho_k}\,.
\]
Hence, we end up with the conjugate gradient method as summarised in Algorithm C.1.
Algorithm C.1
1. Compute for an arbitrary given initial solution $x^0 \in \mathbb{R}^N$
\[
r^0 = A x^0 - f\,, \qquad p^0 = r^0\,, \qquad \varrho_0 = (r^0, r^0)\,.
\]
2. Iterate for $k = 0, \ldots, N-2$:
\[
s^k = A p^k\,, \qquad \sigma_k = (s^k, p^k)\,, \qquad \alpha_k = \frac{\varrho_k}{\sigma_k}\,, \qquad x^{k+1} = x^k - \alpha_k\, p^k
\]
and compute the new residual
\[
r^{k+1} = r^k - \alpha_k\, s^k\,, \qquad \varrho_{k+1} = (r^{k+1}, r^{k+1})\,.
\]
Stop, if
\[
\varrho_{k+1} \leq \varepsilon^2\, \varrho_0
\]
is satisfied with some prescribed accuracy $\varepsilon$. Otherwise compute
\[
\beta_k = \frac{\varrho_{k+1}}{\varrho_k}\,, \qquad p^{k+1} = r^{k+1} + \beta_k\, p^k\,.
\]
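A compact Python realisation of Algorithm C.1 might look as follows (our sketch; the variable names mirror the algorithm, and the residual convention $r = Ax - f$ with the update $x \leftarrow x - \alpha p$ is kept).

```python
import numpy as np

def cg(A, f, x0, eps=1e-8, max_iter=None):
    """Conjugate gradient method in the notation of Algorithm C.1."""
    N = f.shape[0]
    max_iter = N if max_iter is None else max_iter
    x = x0.copy()
    r = A @ x - f
    p = r.copy()
    rho = np.dot(r, r)
    rho0 = rho
    for k in range(max_iter):
        s = A @ p
        alpha = rho / np.dot(s, p)
        x -= alpha * p
        r -= alpha * s
        rho_new = np.dot(r, r)
        if rho_new <= eps**2 * rho0:      # stopping criterion of Algorithm C.1
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

# Small test with a symmetric and positive definite matrix
M = np.random.rand(50, 50)
A = M @ M.T + 50 * np.eye(50)
f = np.random.rand(50)
print(np.linalg.norm(A @ cg(A, f, np.zeros(50)) - f))
```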

For the error of the computed approximate solution $x^{k+1}$, we obtain
\[
x^{k+1} - x = \sum_{\ell=k+1}^{N-1} \alpha_\ell\, p^\ell\,,
\]
and using the $A$-orthogonality of the vector system $P$, we further conclude
\[
\| x^{k+1} - x \|_A^2 = \big( A(x^{k+1}-x),\, x^{k+1}-x \big) = \sum_{\ell=k+1}^{N-1} \alpha_\ell^2\, (A p^\ell, p^\ell)\,.
\]
For an arbitrary given vector
\[
u^{k+1} = x^0 - \sum_{\ell=0}^{k} \gamma_\ell\, p^\ell\,, \tag{C.9}
\]
we obtain in the same way
\[
\| u^{k+1} - x \|_A^2 = \sum_{\ell=0}^{k} (\alpha_\ell - \gamma_\ell)^2\, (A p^\ell, p^\ell) + \sum_{\ell=k+1}^{N-1} \alpha_\ell^2\, (A p^\ell, p^\ell)\,.
\]
Hence, we find the approximate solution $x^{k+1}$ as the solution of the minimisation problem
\[
\| x^{k+1} - x \|_A = \min_{u^{k+1}} \| u^{k+1} - x \|_A\,,
\]
where the minimum is taken over all vectors $u^{k+1}$ of the form (C.9). From the recursions
\[
p^0 = r^0\,, \qquad p^{k+1} = r^{k+1} + \beta_k\, p^k\,, \qquad r^{k+1} = r^k - \alpha_k\, A p^k\,,
\]
we find representations
\[
p^\ell = \varphi_\ell(A)\, r^0
\]
with matrix polynomials $\varphi_\ell(A)$ of degree $\ell$. Hence, we conclude with $e^0 = x^0 - x$
\[
u^{k+1} - x = x^0 - x - \sum_{\ell=0}^{k} \gamma_\ell\, p^\ell
= e^0 - \sum_{\ell=0}^{k} \gamma_\ell\, \varphi_\ell(A)\, r^0
= e^0 - \sum_{\ell=0}^{k} \gamma_\ell\, \varphi_\ell(A)\, A\, e^0\,,
\]
and, therefore,
\[
u^{k+1} - x = p_{k+1}(A)\, e^0
\]
with some matrix polynomial $p_{k+1}(A)$ of degree $k+1$ having the property $p_{k+1}(0) = 1$. The polynomial $p_{k+1}(A)$ obtained for the vector $x^{k+1}$ is, therefore, the solution of the minimisation problem
\[
\| x^{k+1} - x \|_A = \min_{p_{k+1}(A)} \big\| p_{k+1}(A)\, e^0 \big\|_A\,, \tag{C.10}
\]
where the minimum is taken over all polynomials $p_{k+1}(A)$ having the property $p_{k+1}(0) = 1$. The space
\[
S_k(A, r^0) = \operatorname{span}\big\{ p^0, p^1, \ldots, p^k \big\} = \operatorname{span}\big\{ r^0, A r^0, \ldots, A^k r^0 \big\}
\]
is called a Krylov space of the matrix $A$ induced by the residual vector $r^0$.


From the minimisation problem (C.10), we further conclude the estimate
\[
\| e^{k+1} \|_A \leq \min_{p_{k+1}(A)} \max_{\lambda} \big| p_{k+1}(\lambda) \big|\; \| e^0 \|_A\,,
\]
where the minimum is again taken over all polynomials $p_{k+1}(A)$ having the property $p_{k+1}(0) = 1$, and the maximum over the spectrum of the matrix $A$, i.e. $\lambda \in [\lambda_{\min}(A), \lambda_{\max}(A)]$. The above min--max problem is solved by the scaled Tschebyscheff polynomials $T_{k+1}(\lambda)$, and we find
\[
\min_{p_{k+1}(A)} \max_{\lambda} \big| p_{k+1}(\lambda) \big| = \max_{\lambda} \big| T_{k+1}(\lambda) \big| = \frac{2\,q^{k+1}}{1 + q^{2(k+1)}}\,,
\]
with
\[
q = \frac{\sqrt{\lambda_{\max}(A)} + \sqrt{\lambda_{\min}(A)}}{\sqrt{\lambda_{\max}(A)} - \sqrt{\lambda_{\min}(A)}}
= \frac{\sqrt{\kappa_2(A)} + 1}{\sqrt{\kappa_2(A)} - 1}\,,
\]
where
\[
\kappa_2(A) = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}
\]
is the spectral condition number of the symmetric and positive definite matrix $A$. Using the Rayleigh quotient
\[
\lambda_{\min}(A) = \min_{x \neq 0} \frac{(A x, x)}{(x, x)}\,, \qquad \lambda_{\max}(A) = \max_{x \neq 0} \frac{(A x, x)}{(x, x)}\,,
\]
we find from the spectral equivalence inequalities
\[
c_1^A\,(x, x) \leq (A x, x) \leq c_2^A\,(x, x) \qquad \text{for all } x \in \mathbb{R}^N
\]
an upper bound for the spectral condition number,
\[
\kappa_2(A) \leq \frac{c_2^A}{c_1^A}\,.
\]
Since the spectral condition number of the boundary element stiffness matrices may depend on mesh parameters such as the mesh size $h$ or the mesh ratio $h_{\max}/h_{\min}$, an appropriate preconditioning is mandatory in many cases.
Hence, we assume that there is a symmetric and positive definite matrix $C_A \in \mathbb{R}^{N \times N}$ which can be factorised as $C_A = C_A^{1/2} C_A^{1/2}$, where $C_A^{1/2}$ is again symmetric and positive definite. Instead of the linear system (C.2), we now consider the transformed system
\[
\widetilde A\,\widetilde x = C_A^{-1/2} A\, C_A^{-1/2}\, C_A^{1/2} x = C_A^{-1/2} f = \widetilde f\,,
\]
where the transformed system matrix $\widetilde A = C_A^{-1/2} A\, C_A^{-1/2}$ is again symmetric and positive definite. For the solution of the linear system $\widetilde A\,\widetilde x = \widetilde f$, we can apply Algorithm C.1 to obtain, by substituting the transformations, the preconditioned conjugate gradient method as described in Algorithm C.2.
Algorithm C.2
1. Compute for an arbitrary given initial solution $x^0 \in \mathbb{R}^N$
\[
r^0 = A x^0 - f\,, \qquad v^0 = C_A^{-1} r^0\,, \qquad p^0 = v^0\,, \qquad \varrho_0 = (v^0, r^0)\,.
\]
2. Iterate for $k = 0, \ldots, N-2$:
\[
s^k = A p^k\,, \qquad \sigma_k = (s^k, p^k)\,, \qquad \alpha_k = \frac{\varrho_k}{\sigma_k}\,, \qquad x^{k+1} = x^k - \alpha_k\, p^k
\]
and compute the new residual
\[
r^{k+1} = r^k - \alpha_k\, s^k\,, \qquad v^{k+1} = C_A^{-1} r^{k+1}\,, \qquad \varrho_{k+1} = (v^{k+1}, r^{k+1})\,.
\]
Stop, if
\[
\varrho_{k+1} \leq \varepsilon^2\, \varrho_0
\]
is satisfied with some prescribed accuracy $\varepsilon$. Otherwise compute
\[
\beta_k = \frac{\varrho_{k+1}}{\varrho_k}\,, \qquad p^{k+1} = v^{k+1} + \beta_k\, p^k\,.
\]
For the preconditioned conjugate gradient scheme, we then obtain the error estimate
\[
\| e^{k+1} \|_A \leq \frac{2\,q^{k+1}}{1 + q^{2(k+1)}}\, \| e^0 \|_A\,,
\]
where
\[
q = \frac{\sqrt{\kappa_2(\widetilde A)} + 1}{\sqrt{\kappa_2(\widetilde A)} - 1}\,, \qquad
\kappa_2(\widetilde A) = \frac{\lambda_{\max}(\widetilde A)}{\lambda_{\min}(\widetilde A)} \leq \frac{c_2^A}{c_1^A}\,,
\]
and $c_1^A$, $c_2^A$ are the positive constants from the spectral equivalence inequalities
\[
c_1^A\,(\widetilde x, \widetilde x) \leq (\widetilde A\,\widetilde x, \widetilde x) \leq c_2^A\,(\widetilde x, \widetilde x) \qquad \text{for all } \widetilde x \in \mathbb{R}^N\,.
\]
Inserting $\widetilde x = C_A^{1/2} x$, this is equivalent to the spectral equivalence inequalities
\[
c_1^A\,(C_A x, x) \leq (A x, x) \leq c_2^A\,(C_A x, x) \qquad \text{for all } x \in \mathbb{R}^N\,. \tag{C.11}
\]
Hence, the quite often challenging problem is to find a preconditioning matrix $C_A$ satisfying the spectral equivalence inequalities (C.11) and allowing a simple and efficient application of the inverse matrix $C_A^{-1}$ as needed in Algorithm C.2.
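A corresponding sketch of the preconditioned variant, Algorithm C.2, differs from the plain conjugate gradient loop only by the application of $C_A^{-1}$, which is passed here as a callable; the simple Jacobi (diagonal) preconditioner used in the example is for illustration only and is not a recommendation from the book.

```python
import numpy as np

def pcg(A, f, x0, prec, eps=1e-8, max_iter=None):
    """Preconditioned conjugate gradient method (Algorithm C.2).
    prec(r) should return (an approximation of) C_A^{-1} r."""
    N = f.shape[0]
    max_iter = N if max_iter is None else max_iter
    x = x0.copy()
    r = A @ x - f
    v = prec(r)
    p = v.copy()
    rho = np.dot(v, r)
    rho0 = rho
    for k in range(max_iter):
        s = A @ p
        alpha = rho / np.dot(s, p)
        x -= alpha * p
        r -= alpha * s
        v = prec(r)
        rho_new = np.dot(v, r)
        if rho_new <= eps**2 * rho0:
            break
        p = v + (rho_new / rho) * p
        rho = rho_new
    return x

# Example: Jacobi preconditioner C_A = diag(A)
M = np.random.rand(80, 80)
A = M @ M.T + 80 * np.eye(80)
f = np.random.rand(80)
x = pcg(A, f, np.zeros(80), prec=lambda r: r / np.diag(A))
```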

C.3.2 Generalised Minimal Residual Method (GMRES)

For a symmetric and positive definite matrix $A$, we have used the Krylov space
\[
S_k(A, r^0) = \operatorname{span}\big\{ r^0, A r^0, \ldots, A^k r^0 \big\}
\]
to construct an $A$-orthogonal vector system $P$ (cf. (C.3)). Formally, such a vector system can be defined for any arbitrary matrix $A \in \mathbb{R}^{N \times N}$. However, a nonsymmetric and possibly indefinite matrix $A$ does not induce an inner product. Instead of an $A$-orthogonal vector system, we therefore define an orthonormal vector system
\[
V = \big\{ v^k \big\}_{k=0}^{N-1}
\]
satisfying
\[
(v^k, v^\ell) = \begin{cases} 1 & \text{for } k = \ell\,, \\ 0 & \text{for } k \neq \ell\,, \end{cases}
\]
using the method of Arnoldi as described in Algorithm C.3.
Algorithm C.3
1. Compute for an arbitrary given initial solution $x^0 \in \mathbb{R}^N$
\[
r^0 = A x^0 - f\,, \qquad v^0 = \frac{r^0}{\| r^0 \|_2}\,.
\]
2. Iterate for $k = 0, \ldots, N-1$:
\[
\widetilde v^{\,k+1} = A v^k - \sum_{\ell=0}^{k} \beta_{k\ell}\, v^\ell\,, \qquad \beta_{k\ell} = (A v^k, v^\ell)\,.
\]
Stop, if $\| \widetilde v^{\,k+1} \|_2 = 0$ is satisfied. Otherwise, compute
\[
v^{k+1} = \frac{\widetilde v^{\,k+1}}{\| \widetilde v^{\,k+1} \|_2}\,.
\]
Note that the method of Arnoldi (Algorithm C.3) may fail if $\| \widetilde v^{\,k+1} \|_2 = 0$ is satisfied. We will comment on this breakdown situation later.
Using the orthonormal basis vectors from the system $V$, we may define an approximate solution of the linear system $A x = f$ as
\[
x^{k+1} = x^0 - \sum_{\ell=0}^{k} \gamma_\ell\, v^\ell\,,
\]
where we have to find the yet unknown coefficients $\gamma_\ell$. To this end, we may require to minimise the residual $r^{k+1} = A x^{k+1} - f$ with respect to the Euclidean vector norm,
\[
\| r^{k+1} \|_2 = \| A x^{k+1} - f \|_2 = \Big\| A x^0 - f - \sum_{\ell=0}^{k} \gamma_\ell\, A v^\ell \Big\|_2 \;\to\; \min\,,
\]
using the parameters $\gamma_0, \ldots, \gamma_k$. From the method of Arnoldi (Algorithm C.3), we obtain
\[
A v^\ell = \widetilde v^{\,\ell+1} + \sum_{j=0}^{\ell} \beta_{\ell j}\, v^j = \sum_{j=0}^{\ell+1} \beta_{\ell j}\, v^j\,, \qquad \beta_{\ell,\ell+1} = \| \widetilde v^{\,\ell+1} \|_2\,.
\]
Hence, we have
\[
r^{k+1} = r^0 - \sum_{\ell=0}^{k} \gamma_\ell \sum_{j=0}^{\ell+1} \beta_{\ell j}\, v^j = r^0 - V_{k+1}\, H_k\, \gamma
\]

with the matrix
\[
V_{k+1} = \big[ v^0, v^1, \ldots, v^{k+1} \big] \in \mathbb{R}^{N \times (k+2)}\,,
\]
whose columns are orthonormal, the coefficient vector $\gamma = (\gamma_0, \ldots, \gamma_k)^\top \in \mathbb{R}^{k+1}$, and an upper Hessenberg matrix $H_k \in \mathbb{R}^{(k+2) \times (k+1)}$ defined by
\[
H_k[j, \ell] = \begin{cases} \beta_{\ell j} & \text{for } j \leq \ell+1\,, \\ 0 & \text{for } j > \ell+1\,. \end{cases}
\]
Moreover, we can write
\[
r^0 = \| r^0 \|_2\, v^0 = \| r^0 \|_2\, V_{k+1}\, e^0\,,
\]
where the notation $e^0 = (1, 0, \ldots, 0)^\top \in \mathbb{R}^{k+2}$ has been used. Since the columns of $V_{k+1}$ are orthonormal, we deduce
\[
\| r^{k+1} \|_2 = \big\| V_{k+1} \big( \| r^0 \|_2\, e^0 - H_k\, \gamma \big) \big\|_2
= \big\|\, \| r^0 \|_2\, e^0 - H_k\, \gamma \,\big\|_2
= \big\|\, \| r^0 \|_2\, Q_k e^0 - Q_k H_k\, \gamma \,\big\|_2\,,
\]
where $Q_k \in \mathbb{R}^{(k+2) \times (k+2)}$ is an orthogonal matrix such that $R_k = Q_k H_k \in \mathbb{R}^{(k+2) \times (k+1)}$ is an upper triangular matrix. Then, we obtain
\begin{align*}
\| r^{k+1} \|_2^2
&= \big\|\, \| r^0 \|_2\, Q_k e^0 - R_k\, \gamma \,\big\|_2^2
= \sum_{\ell=0}^{k+1} \Big( \| r^0 \|_2\, \big( Q_k e^0 \big)_\ell - \big( R_k \gamma \big)_\ell \Big)^2 \\
&= \sum_{\ell=0}^{k} \Big( \| r^0 \|_2\, \big( Q_k e^0 \big)_\ell - \big( R_k \gamma \big)_\ell \Big)^2
+ \| r^0 \|_2^2\, \big( Q_k e^0 \big)_{k+1}^2
= \| r^0 \|_2^2\, \big( Q_k e^0 \big)_{k+1}^2\,,
\end{align*}
if the coefficient vector $\gamma \in \mathbb{R}^{k+1}$ is determined from the first $k+1$ rows of the upper triangular linear system
\[
R_k\, \gamma = \| r^0 \|_2\, Q_k e^0\,.
\]
It remains to find an orthogonal matrix $Q_k \in \mathbb{R}^{(k+2) \times (k+2)}$ transforming the upper Hessenberg matrix
\[
H_k = \begin{pmatrix}
\beta_{0,0} & \beta_{1,0} & \cdots & \beta_{k,0} \\
\beta_{0,1} & \beta_{1,1} & \cdots & \beta_{k,1} \\
0 & \beta_{1,2} & \ddots & \vdots \\
\vdots & & \ddots & \beta_{k,k} \\
0 & \cdots & 0 & \beta_{k,k+1}
\end{pmatrix} \in \mathbb{R}^{(k+2) \times (k+1)}
\]
into an upper triangular matrix
\[
R_k = Q_k H_k = \begin{pmatrix}
r_{0,0} & r_{0,1} & \cdots & r_{0,k} \\
0 & r_{1,1} & \cdots & r_{1,k} \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & r_{k,k} \\
0 & \cdots & \cdots & 0
\end{pmatrix} \in \mathbb{R}^{(k+2) \times (k+1)}\,.
\]

This can be done by the use of Givens rotations. Let us first consider the column vector
\[
h^j = \big( \tilde\beta_{j,0}, \ldots, \tilde\beta_{j,j-1}, \tilde\beta_{j,j}, \beta_{j,j+1}, 0, \ldots, 0 \big)^\top \in \mathbb{R}^{k+2}\,,
\]
where a tilde marks entries which have already been modified by the previous rotations $G_0, \ldots, G_{j-1}$, and where we have to find an orthogonal matrix $G_j$ such that
\[
G_j h^j = \big( \tilde\beta_{j,0}, \ldots, \tilde\beta_{j,j-1}, \hat\beta_{j,j}, 0, 0, \ldots, 0 \big)^\top
\]
is satisfied. For this, it is sufficient to consider the orthogonal matrix $\widetilde G_j \in \mathbb{R}^{2 \times 2}$ such that
\[
\widetilde G_j \begin{pmatrix} \tilde\beta_{j,j} \\ \beta_{j,j+1} \end{pmatrix} = \begin{pmatrix} \hat\beta_{j,j} \\ 0 \end{pmatrix}
\]
is fulfilled. The orthogonal matrix $\widetilde G_j \in \mathbb{R}^{2 \times 2}$ allows the general representation
\[
\widetilde G_j = \begin{pmatrix} a_j & b_j \\ -b_j & a_j \end{pmatrix}\,, \qquad a_j^2 + b_j^2 = 1\,,
\]
where the coefficients $a_j$ and $b_j$ can be found from the condition
\[
-b_j\, \tilde\beta_{j,j} + a_j\, \beta_{j,j+1} = 0
\]
as
\[
a_j = \frac{\tilde\beta_{j,j}}{\sqrt{\tilde\beta_{j,j}^{\,2} + \beta_{j,j+1}^2}}\,, \qquad
b_j = \frac{\beta_{j,j+1}}{\sqrt{\tilde\beta_{j,j}^{\,2} + \beta_{j,j+1}^2}}\,,
\]
and, therefore, when assuming $\beta_{j,j+1} > 0$,
\[
\hat\beta_{j,j} = a_j\, \tilde\beta_{j,j} + b_j\, \beta_{j,j+1} = \sqrt{\tilde\beta_{j,j}^{\,2} + \beta_{j,j+1}^2} > 0\,. \tag{C.12}
\]

For $j = 0, \ldots, k$ the resulting orthogonal matrices $G_j$ are of the form
\[
G_j = \begin{pmatrix} I_j & & \\ & \widetilde G_j & \\ & & I_{k-j} \end{pmatrix} \in \mathbb{R}^{(k+2) \times (k+2)}
\]
with $G_j[j,j] = G_j[j+1,j+1] = a_j$, $G_j[j,j+1] = -G_j[j+1,j] = b_j$, and identity blocks elsewhere. Their recursive application eliminates the subdiagonal entries of $H_k$ column by column: $G_0$ annihilates the entry $\beta_{0,1}$, $G_1$ then annihilates the (already transformed) entry in position $(2,1)$, and so on, until $G_k$ removes $\beta_{k,k+1}$ and leaves the upper triangular matrix $R_k$ introduced above. Hence, we have constructed the orthogonal matrix
\[
Q_k = G_k\, G_{k-1} \cdots G_1\, G_0 \in \mathbb{R}^{(k+2) \times (k+2)}\,,
\]
which fulfils
\[
Q_k e^0 = G_k \cdots G_0 \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
= G_k \cdots G_1 \begin{pmatrix} a_0 \\ -b_0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
= \begin{pmatrix} a_0 \\ a_1(-b_0) \\ a_2(-b_0)(-b_1) \\ \vdots \\ a_k(-b_0)\cdots(-b_{k-1}) \\ (-b_0)\cdots(-b_k) \end{pmatrix} \in \mathbb{R}^{k+2}\,.
\]

From this, we find
\[
\varrho_{k+1} := \| r^{k+1} \|_2 = \| r^0 \|_2\, \big| \big( Q_k e^0 \big)_{k+1} \big| = \| r^0 \|_2 \prod_{j=0}^{k} b_j\,.
\]
With the definition of
\[
b_j = \frac{\beta_{j,j+1}}{\sqrt{\beta_{j,j+1}^2 + \tilde\beta_{j,j}^{\,2}}}\,,
\]
we conclude $b_j < 1$ as long as the transformed diagonal entry $\tilde\beta_{j,j}$ does not vanish. Hence, the residual is monotonically decreasing. In the case of the breakdown situation in the method of Arnoldi (Algorithm C.3), i.e. $\| \widetilde v^{\,k+1} \|_2 = 0$, we find $b_k = 0$, and, therefore, $\varrho_{k+1} = \| r^{k+1} \|_2 = 0$. In particular, $x^{k+1} = x$ is the solution of the linear system $A x = f$.
Summarising the above, we obtain the Generalised Minimal Residual method (GMRES) as described in Algorithm C.4, see [96].
Algorithm C.4
1. Compute for an arbitrary given initial solution $x^0 \in \mathbb{R}^N$
\[
r^0 = A x^0 - f\,, \qquad \varrho_0 = \| r^0 \|_2\,, \qquad v^0 = \frac{1}{\varrho_0}\, r^0\,, \qquad p_0 = \varrho_0\,.
\]
2. Iterate for $k = 0, \ldots, N-2$:
\[
w^k = A v^k\,, \qquad \widetilde v^{\,k+1} = w^k - \sum_{\ell=0}^{k} \beta_{k\ell}\, v^\ell\,, \qquad \beta_{k\ell} = (w^k, v^\ell)\,, \qquad \beta_{k,k+1} = \| \widetilde v^{\,k+1} \|_2\,.
\]
Go to 3. if $\beta_{k,k+1} = 0$ is satisfied. Otherwise compute
\[
v^{k+1} = \frac{1}{\beta_{k,k+1}}\, \widetilde v^{\,k+1}\,.
\]
For $\ell = 0, \ldots, k-1$ compute
\[
\beta_{k\ell} = a_\ell\, \beta_{k\ell} + b_\ell\, \beta_{k,\ell+1}\,, \qquad
\beta_{k,\ell+1} = -b_\ell\, \beta_{k\ell} + a_\ell\, \beta_{k,\ell+1}
\]
and
\[
a_k = \frac{\beta_{kk}}{\sqrt{\beta_{kk}^2 + \beta_{k,k+1}^2}}\,, \qquad
b_k = \frac{\beta_{k,k+1}}{\sqrt{\beta_{kk}^2 + \beta_{k,k+1}^2}}\,, \qquad
\beta_{kk} = \sqrt{\beta_{kk}^2 + \beta_{k,k+1}^2}\,,
\]
as well as
\[
p_{k+1} = -b_k\, p_k\,, \qquad p_k = a_k\, p_k\,, \qquad \varrho_{k+1} = |p_{k+1}|\,.
\]
Stop, if $\varrho_{k+1} < \varepsilon\, \varrho_0$ is satisfied with some prescribed accuracy $\varepsilon$.
3. Compute the approximate solution, i.e. for $\ell = k, k-1, \ldots, 0$
\[
\gamma_\ell = \frac{1}{\beta_{\ell\ell}} \Big( p_\ell - \sum_{j=\ell+1}^{k} \beta_{j\ell}\, \gamma_j \Big)
\]
and
\[
x^{k+1} = x^0 - \sum_{\ell=0}^{k} \gamma_\ell\, v^\ell\,.
\]
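For illustration, a compact (unpreconditioned, non-restarted) Python version of this Givens-based GMRES could read as follows; this is our sketch, not code from the book, it stores the full Hessenberg matrix instead of overwriting the coefficients column by column, and it uses the modified Gram--Schmidt variant of the Arnoldi step.

```python
import numpy as np

def gmres(A, f, x0, eps=1e-8, max_iter=None):
    """GMRES in the spirit of Algorithm C.4: Arnoldi + Givens rotations,
    residual convention r = A x - f, update x = x0 - sum_l gamma_l v^l."""
    N = f.shape[0]
    m = N if max_iter is None else max_iter
    r0 = A @ x0 - f
    rho0 = np.linalg.norm(r0)
    V = [r0 / rho0]
    H = np.zeros((m + 1, m))            # Hessenberg matrix, column k = step k
    cs, sn = np.zeros(m), np.zeros(m)   # Givens coefficients a_k, b_k
    p = np.zeros(m + 1); p[0] = rho0    # right-hand side ||r0||_2 * Q_k e^0
    k_last = m - 1
    for k in range(m):
        w = A @ V[k]                    # Arnoldi step (modified Gram-Schmidt)
        for l in range(k + 1):
            H[l, k] = np.dot(w, V[l])
            w = w - H[l, k] * V[l]
        H[k + 1, k] = np.linalg.norm(w)
        breakdown = H[k + 1, k] == 0.0
        if not breakdown:
            V.append(w / H[k + 1, k])
        for l in range(k):              # apply previous rotations to new column
            tmp = cs[l] * H[l, k] + sn[l] * H[l + 1, k]
            H[l + 1, k] = -sn[l] * H[l, k] + cs[l] * H[l + 1, k]
            H[l, k] = tmp
        d = np.hypot(H[k, k], H[k + 1, k])
        cs[k], sn[k] = H[k, k] / d, H[k + 1, k] / d
        H[k, k], H[k + 1, k] = d, 0.0   # new rotation eliminates H[k+1, k]
        p[k + 1] = -sn[k] * p[k]
        p[k] = cs[k] * p[k]
        k_last = k
        if abs(p[k + 1]) < eps * rho0 or breakdown:
            break
    gamma = np.zeros(k_last + 1)        # back substitution for gamma
    for l in range(k_last, -1, -1):
        gamma[l] = (p[l] - H[l, l + 1:k_last + 1] @ gamma[l + 1:]) / H[l, l]
    return x0 - sum(g * v for g, v in zip(gamma, V))

# Quick test with a nonsymmetric but well-conditioned matrix
B = np.eye(60) + 0.1 * np.random.rand(60, 60)
g = np.random.rand(60)
print(np.linalg.norm(B @ gmres(B, g, np.zeros(60)) - g))
```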
References

1. B. Alpert, G. Beylkin, R. Coifman, and V. Rokhlin. Wavelet-like bases for


the fast solution of second-kind integral equations. SIAM J. Sci. Comput.,
14(1):159184, 1993.
2. S. Amini and A. T. J. Prot. Analysis of a diagonal form of the fast multipole
algorithm for scattering theory. BIT, 39(4):585602, 1999.
3. S. Amini and A. T. J. Prot. Multi-level fast multipole Galerkin method for
the boundary integral solution of the exterior Helmholtz equation. In Current
trends in scientic computing (Xian, 2002), volume 329 of Contemp. Math.,
pages 1319. Amer. Math. Soc., Providence, RI, 2003.
4. D. N. Arnold and W. L. Wendland. On the asymptotic convergence of collo-
cation methods. Math. Comp., 41:349381, 1983.
5. D. N. Arnold and W. L. Wendland. The convergence of spline collocation for
strongly elliptic equations on curves. Numer. Math., 47:317341, 1985.
6. K. E. Atkinson. The Numerical Solution of Integral Equations of the Second
Kind. Cambridge University Press, 1997.
7. M. Bebendorf. Approximation of boundary element matrices. Numer. Math.,
86(4):565589, 2000.
8. M. Bebendorf and S. Rjasanow. Matrix compression for the radiation heat
transfer in exhaust pipes. In W.L. Wendland, A.-M. Sandig, and W. Schiehlen,
editors, Multield Problems. State of the Art, pages 183192. Springer, 2000.
9. M. Bebendorf and S. Rjasanow. Adaptive lowrank approximation of colloca-
tion matrices. Computing, 70(1):124, 2003.
10. M. Bebendorf and S. Rjasanow. Numerical simulation of exhaust systems in
car industry - Ecient calculation of radiation heat transfer. In W. Jager and
H.-J. Krebs, editors, Mathematics. Key Technology for the Future, pages 5562.
Springer, 2003.
11. G. Beylkin, R. Coifman, and V. Rokhlin. Fast wavelet transforms and numer-
ical algorithms. I. Comm. Pure Appl. Math., 44(2):141183, 1991.
12. M. Bonnet. Boundary Integral Equation Methods for Solids and Fluids. John
Wiley & Sons, Chichester, 1999.
13. H. Brakhage and P. Werner. Uber das Dirichletsche Aussenraumproblem fur
die Helmholtzsche Schwingungsgleichung. Arch. Math., 16:325329, 1965.
14. J. H. Bramble and J. E. Pasciak. A preconditioning technique for indenite sys-
tems resulting from mixed approximations of elliptic problems. Math. Comp.,
50:117, 1988.

15. C. A. Brebbia, J. C. F. Telles, and L. C. Wrobel. Boundary Element Techniques:


Theory and Applications in Engineering. Springer, Berlin, 1984.
16. A. Buchau, S. Kurz, O. Rain, V. Rischmuller, S. Rjasanow, and W. M. Rucker.
Comparison between dierent approaches for fast and ecient 3D BEM com-
putations. IEEE Transaction on Magnetics, 39(2):11071110, 2003.
17. A. Bua, R. Hiptmair, T. von Petersdor, and C. Schwab. Boundary element
methods for Maxwell transmission problems in Lipschitz domains. Numer.
Math, 95(3):459485, 2003.
18. A. Bua and S. Sauter. On the acoustic single layer potential: Stabilisation
and Fourier analysis. SIAM, J. Sci. Comp., 28(5):19741999, 2006.
19. A. J. Burton and G. F. Miller. The application of integral equation methods to
the numerical solution of some exterior boundary-value problems. Proc. Roy.
Soc. London. Ser. A, 323:201210, 1971.
20. J. Carrier, L. Greengard, and V. Rokhlin. A fast adaptive multipole algorithm
for particle simulations. SIAM J. Sci. Statist. Comput., 9(4):669686, 1988.
21. G. Chen and J. Zhou. Boundary Element Methods. Academic Press, New York,
1992.
22. A. H.-D. Cheng and D. T. Cheng. Heritage and early history of the boundary
element method. Engrg. Anal. Boundary Elements, 29:268302, 2005.
23. D. Colton and R. Kress. Inverse acoustic and electromagnetic scattering theory.
Springer, Berlin, 1992.
24. M. Costabel. Boundary integral operators on Lipschitz domains: Elementary
results. SIAM J. Math. Anal., 19:613626, 1988.
25. M. Costabel. Some historical remarks on the positivity of boundary integral op-
erators. In M. Schanz and O. Steinbach, editors, Boundary Element Analysis:
Mathematical Aspects and Applications. Lecture Notes is Applied and Compu-
tational Mechanics, vol. 29, Springer, Heidelberg, pp. 127, 2007.
26. M. Costabel and W. L. Wendland. Strong ellipticity of boundary integral
operators. Crelles J. Reine Angew. Math., 372:3463, 1986.
27. W. Dahmen, S. Prossdorf, and R. Schneider. Wavelet approximation methods
for pseudodierential equations. II. Matrix compression and fast solution. Adv.
Comput. Math., 1(3-4):259335, 1993.
28. W. Dahmen, S. Prossdorf, and R. Schneider. Wavelet approximation meth-
ods for pseudodierential equations. I. Stability and convergence. Math. Z.,
215(4):583620, 1994.
29. R. Dautray and J. L. Lions. Mathematical Analysis and Numerical Methods for
Science and Technology. Volume 4: Integral Equations and Numerical Methods.
Springer, Berlin, 1990.
30. A. Douglis and L. Nirenberg. Interior estimates for elliptic systems of partial
dierential equations. Commun. Pure Appl. Math., 8:503538, 1955.
31. H. Forster, T. Schre, R. Dittrich, W. Scholz, and J. Fidler. Fast Boundary
Methods for Magnetostatic Interactions in Micromagnetics. IEEE Transaction
on Magnetics, 39(5):25132515, 2003.
32. M. Ganesh and O. Steinbach. Nonlinear boundary integral equations for har-
monic problems. J. Int. Equations Appl., 11:437459, 1999.
33. C. F. Gau. Allgemeine Theorie des Erdmagnetismus., volume 5 of Werke.
Dieterich, Gottingen, 1838, 1867.
34. C. F. Gau. Atlas des Erdmagnetismus, volume 5 of Werke. Dieterich,
Gottingen, 1840, 1867.

35. I. M. Gelfand and G. E. Shilov. Generalized functions. Vol. I: Properties and


operations. Translated by Eugene Saletan. Academic Press, New York, 1964.
36. S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin. A theory of
pseudoskeleton approximations. Linear Algebra Appl., 261:121, 1997.
37. L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J.
Comput. Phys., 73(2):325348, 1987.
38. L. Greengard and V. Rokhlin. The rapid evaluation of potential elds in
three dimensions. In Vortex methods (Los Angeles, CA, 1987), pages 121141.
Springer, Berlin, 1988.
39. L. Greengard and V. Rokhlin. A new version of the fast multipole method
for the Laplace equation in three dimensions. In Acta numerica, 1997, pages
229269. Cambridge Univ. Press, Cambridge, 1997.
40. W. Hackbusch, editor. Boundary Elements: Implementation and Analysis
of Advanced Algorithms, volume 54 of Notes on Numerical Fluid Mechanics,
Vieweg, Braunschweig, 1996.
41. W. Hackbusch. A sparse matrix arithmetic based on H-matrices. I. Introduc-
tion to H-matrices. Computing, 62(2):89108, 1999.
42. W. Hackbusch and B. N. Khoromskij. A sparse Hmatrix arithmetic. II. App-
lication to multidimensional problems. Computing, 64:2147, 2000.
43. W. Hackbusch, C. Lage, and S. A. Sauter. On the ecient realization of
sparse matrix techniques for integral equations with focus on panel clustering,
cubature and software design aspects. In Boundary element topics (Stuttgart,
1995), pages 5175. Springer, Berlin, 1997.
44. W. Hackbusch and Z. P. Nowak. On the fast matrix multiplication in the
boundary element method by panel clustering. Numer. Math., 54(4):463491,
1989.
45. W. Hackbusch and S. A. Sauter. On the ecient use of the Galerkin method
to solve Fredholm integral equations. Appl. Math., 38:301322, 1993.
46. W. Hackbusch and S. A. Sauter. On the ecient use of the Galerkin method
to solve Fredholm integral equations. Appl. Math., 38(4-5):301322, 1993. Pro-
ceedings of ISNA 92International Symposium on Numerical Analysis, Part
I (Prague, 1992).
47. H. Han. The boundary integrodierential equations of threedimensional Neu-
mann problem in linear elasticity. Numer. Math., 68:269281, 1994.
48. H. Harbrecht and R. Schneider. Biorthogonal wavelet bases for the boundary
element method. Math. Nachr., 269/270:167188, 2004.
49. H. Harbrecht and R. Schneider. Wavelet Galerkin schemes for boundary in-
tegral equations implementation and quadrature. SIAM J. Sci. Comput.,
27:13471370, 2006.
50. F. Hartmann. Introduction to Boundary Elements. Springer, Berlin, 1989.
51. K. Hayami and S. A. Sauter. A panel clustering method for 3-D elastostatics
using spherical harmonics. In Integral methods in science and engineering
(Houghton, MI, 1998), volume 418 of Chapman & Hall/CRC Res. Notes Math.,
pages 179184. Chapman & Hall/CRC, Boca Raton, FL, 2000.
52. D. Hilbert. Grundzuge einer allgemeinen Theorie linearer Integralgleichungen.
Teubner, Leipzig, 1912.
53. L. Hormander. The Analysis of Linear Partial Dierential Operators I.
Springer, Berlin, 1983.
54. G. C. Hsiao, P. Kopp, and W. L. Wendland. A Galerkin collocation method
for some integral equations of the rst kind. Computing, 25:89130, 1980.

55. G. C. Hsiao and R. MacCamy. Solution of boundary value problems by integral


equations of the rst kind. SIAM Rev., 15:687705, 1973.
56. G. C. Hsiao, E. P. Stephan, and W. L. Wendland. On the integral equation
method for the plane mixed boundary value problem of the Laplacian. Math.
Meth. Appl. Sci., 1:265321, 1979.
57. G. C. Hsiao and W. L. Wendland. A nite element method for some integral
equations of the rst kind. J. Math. Anal. Appl., 58:449481, 1977.
58. G. C. Hsiao and W. L. Wendland. The AubinNitsche lemma for integral
equations. J. Int. Equat., 3:299315, 1981.
59. M. A. Jaswon and G. T. Symm. Integral Equation Methods in Potential Theory
and Elastostatics. Academic Press, London, 1977.
60. B. N. Khoromskij and G. Wittum. Numerical Solution of Elliptic Dierential
Equations by Reduction to the Interface. Lecture Notes in Computational
Science and Engineering, 36. Springer, Berlin, 2004.
61. R. Kress. Linear Integral Equations. Springer, Heidelberg, 1999.
62. V. D. Kupradze. Threedimensional problems of the mathematical theory of
elasticity and thermoelasticity. NorthHolland, Amsterdam, 1979.
63. S. Kurz, O. Rain, V. Rischmuller, and S. Rjasanow. Periodic and Anti-Periodic
Symmetries in the Boundary Element Method. In Proceedings of 10th inter-
national IGTE Symposium, Graz, Austria, pages 375380, 2002.
64. S. Kurz, O. Rain, and S. Rjasanow. The Adaptive Cross Approximation Tech-
nique for the 3D Boundary Element Method. IEEE Transaction on Magnetics,
38(2):421424, 2002.
65. S. Kurz, O. Rain, and S. Rjasanow. Application of the Adaptive Cross Ap-
proximation Technique for the Coupled BE-FE Solution of Electromagnetic
Problems. Comput. Mech., 32(46):423429, 2003.
66. P. K. Kythe. Fundamental Solutions for Dierential Operators and Applica-
tions. Birkhauser, Boston, 1996.
67. C. Lage and C. Schwab. Wavelet Galerkin algorithms for boundary integral
equations. SIAM J. Sci. Comput., 20(6):21952222 (electronic), 1999.
68. U. Lamp, T. Schleicher, E. P. Stephan, and W. L. Wendland. Galerkin collo-
cation for an improved boundary element method for a plane mixed boundary
value problem. Computing, 33:269296, 1984.
69. U. Langer and D. Pusch. Data-sparse algebraic multigrid methods for large
scale boundary element equations. Appl. Numer. Math., 54(3-4):406424, 2005.
70. V. G. Mazya. Boundary integral equations. In V. G. Mazya and S. M. Nikolskii,
editors, Analysis IV, volume 27 of Encyclopaedia of Mathematical Sciences,
pages 127233. Springer, Heidelberg, 1991.
71. W. McLean. Strongly Elliptic Systems and Boundary Integral Equations. Cam-
bridge University Press, 2000.
72. S. G. Michlin. Integral Equations. Pergamon Press, London, 1957.
73. C. M. Miranda. Partial Dierential Equations of Elliptic Type. Springer,
Berlin, 1970.
74. N. I. Muskhelishvili. Singular Integral Equations. Noordho, Groningen, 1953.
75. T. Nakata, N. Takahashi, and K. Fujiwara. Summary of results for benchmark
problem 10 (steel plates around a coil). COMPEL, 11:335344, 1992.
76. J. C. Nedelec. Curved nite element methods for the solution of singular
integral equations on surfaces in R3 . Comp. Meth. Appl. Mech. Engrg., 8:61
80, 1976.

77. J. C. Nedelec. Integral equations with non integrable kernels. Int. Eq. Operator
Th., 5:562572, 1982.
78. J. C. Nedelec. Acoustic and Electromagnetic Equations. Springer, New York,
2001.
79. J. C Nedelec and J. Planchard. Une methode variationelle delements nis
pour la resolution numerique dun probleme exterieur dans R3 . R.A.I.R.O.,
7:105129, 1973.
80. C. Neumann. Untersuchungen uber das Logarithmische und Newtonsche Po-
tential. Teubner, Leipzig, 1877.
81. E. J. Nystrom. Uber die praktische Auosung von linearen Integralgleichungen
mit Anwendungen auf Randwertaufgaben der Potentialtheorie. Soc. Sci. Fenn.
Comment. Phys. Math., 4:152, 1928.
82. E. J. Nystrom. Uber die praktische Auosung von linearen Integralgleichungen
mit Anwendungen auf Randwertaufgaben. Acta Math., 54:185204, 1930.
83. G. Of, O. Steinbach, and W. L. Wendland. Applications of a fast multipole
Galerkin boundary element method in linear elastostatics. Comput. Vis. Sci.,
8(3-4):201209, 2005.
84. G. Of, O. Steinbach, and W. L. Wendland. The fast multipole method for the
symmetric boundary integral formulation. IMA J. Numer. Anal., 26(2):272
296, 2006.
85. N. Ortner. Fundamentallosungen und Existenz von schwachen Losungen lin-
earer partieller Dierentialgleichungen mit konstanten Koezienten. Ann.
Acad. Sci. Fenn. Ser. A I Math., 4:330, 1979.
86. N. Ortner and P. Wagner. A survey on explicit representation formulae for
fundamental solutions of linear partial dierential operators. Acta Appl. Math.,
47:101124, 1997.
87. J. Ostrowski, Z. Andjelic, M. Bebendorf, B. Cranganu-Cretu, and J. Smajic.
Fast BEM-Solution of Laplace Problems with H-Matrices and ACA. IEEE
Trans. on Magnetics, 42(4):627630, 2006.
88. J. Plemelj. Potentialtheoretische Untersuchungen. Teubner, Leipzig, 1911.
89. A. Pomp. The BoundaryDomain Integral Method for Elliptic Systems. With
an Application to Shells, volume 1683 of Lecture Notes in Mathematics.
Springer, Berlin, 1998.
90. B. Reidinger and O. Steinbach. A symmetric boundary element method for
the Stokes problem in multiple connected domains. Math. Methods Appl. Sci.,
26:7793, 2003.
91. F. J. Rizzo. An integral equation approach to boundary value problems of
classical elastostatics. Quart. Appl. Math., 25:8395, 1967.
92. D. Rodger, N. Allen, H. C. Lai, and P. J. Leonard. Calculation of transient 3D eddy currents in nonlinear media – verification using a rotational test rig. IEEE Transaction on Magnetics, 30(5(2)):29882991, 1994.
93. V. Rokhlin. Rapid solution of integral equations of classical potential theory.
J. Comput. Phys., 60(2):187207, 1985.
94. V. Rokhlin and M. A. Stalzer. Scalability of the fast multipole method for
the Helmholtz equation. In Proceedings of the Eighth SIAM Conference on
Parallel Processing for Scientic Computing (Minneapolis, MN, 1997), page 8
pp. (electronic), Philadelphia, PA, 1997. SIAM.
95. K Ruotsalainen and W. L. Wendland. On the boundary element method for
some nonlinear boundary value problems. Numer. Math., 53:299314, 1988.

96. Y. Saad and M. H. Schultz. GMRES: a generalized minimal residual algo-


rithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput.,
7(3):856869, 1986.
97. S. A. Sauter and A. Krapp. On the eect of numerical integration in the
Galerkin boundary element method. Numer. Math., 74:337360, 1996.
98. S. A. Sauter and C. Lage. Transformation of hypersingular integrals and black
box cubature. Math. Comp., 70:223250, 2001.
99. S. A. Sauter and C. Schwab. Randelementmethoden. Analyse, Numerik und
Implementierung schneller Algorithmen. B. G. Teubner, Stuttgart, Leipzig,
Wiesbaden, 2004.
100. G. Schmidt. On spline collocation methods for boundary integral equations in
the plane. Math. Meth. Appl. Sci., 7:7489, 1985.
101. G. Schmidt. Spline collocation for singular integrodierential equations over
(0,1). Numer. Math., 50:337352, 1987.
102. C. Schwab and W. L. Wendland. On numerical cubatures of singular surface
integrals in boundary element methods. Numer. Math., 62:343369, 1992.
103. S. Sirtori. General stress analysis method by means of integral equations and
boundary elements. Meccanica, 14:210218, 1979.
104. I. H. Sloan. Error analysis of boundary integral methods. Acta Numerica,
92:287339, 1992.
105. O. Steinbach. Numerische Naherungsverfahren fur elliptische Randwertprob-
leme. Finite Elemente und Randelemente. B. G. Teubner, Stuttgart, Leipzig,
Wiesbaden, 2003.
106. O. Steinbach. A robust boundary element method for nearly incompressible
elasticity. Numer. Math., 95:553562, 2003.
107. O. Steinbach. Stability Estimates for Hybrid Coupled Domain Decomposition
Methods. Springer Lecture Notes in Mathematics, 1809. Springer, Heidelberg,
2003.
108. O. Steinbach and W. L. Wendland. On C. Neumanns method for second order
elliptic systems in domains with nonsmooth boundaries. J. Math. Anal. Appl.,
262:733748, 2001.
109. E. P. Stephan. A boundary integral equations for mixed boundary value prob-
lems in R3 . Math. Nachr., 131:167199, 1987.
110. M. Stolper. Computing and compression of the boundary element matrices for
the Helmholtz equation. J. Numer. Math., 12(1):5575, 2005.
111. M. Stolper and S. Rjasanow. A compression method for the Helmholtz equa-
tion. In Numerical mathematics and advanced applications, pages 786795.
Springer, Berlin, 2004.
112. G. Verchota. Layer potentials and regularity for the Dirichlet problem for
Laplaces equation in Lipschitz domains. J. Funct. Anal., 59:572611, 1984.
113. O. von Estor, S. Rjasanow, M. Stolper, and O. Zaleski. Two ecient methods
for a multifrequency solution of the Helmholtz equation. Comput. Vis. Sci.,
8(24):159167, 2005.
114. T. von Petersdor. Boundary integral equations for mixed Dirichlet, Neumann
and transmission problems. Math. Meth. Appl. Sci., 11:185213, 1989.
115. T. von Petersdor and C. Schwab. Wavelet approximations for rst kind
boundary integral equations on polygons. Numer. Math., 74(4):479516, 1996.
116. T. von Petersdor, C. Schwab, and R. Schneider. Multiwavelets for second-kind
integral equations. SIAM J. Numer. Anal., 34(6):22122227, 1997.

117. W. L. Wendland, editor. Boundary Element Topics, Heidelberg, 1997. Springer.


118. K. Yosida. Functional Analysis. Springer, Berlin, Heidelberg, New York, 1980.
119. K. Zhao, M. N. Vouvakis, and J.-F. Lee. The Adaptive Cross Approxima-
tion Algorithm for Accelerated Method of Moments Computations of EMC
Problems. IEEE Transactions on electromagnetic compatibility, 47(4):763773,
2005.
Index

H 1 projection, 236 Calderon projector, 9, 34


L2 projection, 61, 63, 70, 75, 82, 169 Cauchy data, 17, 19, 26, 77, 88, 138, 169
CauchySchwarz inequality, 234
Adaptive Cross Approximation Ceas lemma, 66, 69, 74, 81, 95, 229
fully pivoted, 119 Characteristic points, 108
partially pivoted, 126 Cluster, 108
Admissibility condition, 112 Cluster tree, 108
Admissible cluster pairs, 109 Collocation Method, 66
Analytical solution, 143, 147, 153 Conormal derivative
Approximate representation formula, exterior, 21
67, 69, 71, 74, 76, 81, 92, 145, 153, interior, 2, 155
158, 173, 182, 189 Contraction, 227
Approximation property, 62, 64, 114, Convergence
117, 229, 231 cubic, 145, 176, 182
Area, 59 linear, 145, 176, 192
Arnoldi method, 263 quadratic, 152, 153, 187, 189, 197
AubinNitsche trick, 66, 69, 74, 76, 81,
87, 92, 95
Dirac distribution, 208
Banachs x point theorem, 227 Dirichlet datum, 12, 13, 19, 20, 70, 72,
Bessel function, 177 98, 138, 169
Bilinear form, 2, 14, 38, 41, 77 Displacement eld, 27
Boundary element, 59 Distribution, 203
Boundary element mesh, 59, 64, 239 Duality, 206, 235
Boundary value problem Duality argument, 62, 64
exterior Dirichlet, 21, 52, 191 Duality pairing, 31, 200, 206, 225
exterior Neumann, 22, 54, 196
homogeneous Neumann, 29 Eigenfunction, 14, 43, 177
interior Dirichlet, 10, 35, 42, 49, 65, Eigenvalue, 43, 109, 177
143 Eigenvector, 109
interior Neumann, 13, 36, 50, 72, 149 Element diameter, 60
mixed, 17, 37, 77, 87, 155, 162 Equation
nonlinear Robin, 20 BiLaplace, 209
Robin, 19 x point, 226

Helmholtz, 44, 168 Interface problem, 26, 84, 160


Laplace, 2, 10, 13, 135 Interpolation, 70, 112
operator, 225 Interpolation property, 114, 117
Poisson, 24, 160 Inverse inequality argument, 66, 69, 81,
wave, 44 92
Equilibrium equations, 27 Iterative solution, 256
Error estimate, 62, 67
Exhaust manifold, 135, 153, 182, 187 Krylov space, 261

Foam, 166 Lame constants, 28, 210


Formula LaxMilgram lemma, 227
Bettis rst, 28 Lebesgue measure, 200
Bettis second, 29 Legendre polynomials, 108, 177
Greens rst, 2, 41, 44 Lipschitz boundary, 2, 13, 115, 231
Greens second, 2, 41, 44 Lipschitz domain, 59, 66, 200, 239
Fourier transform, 204, 208 Local coordinate system, 244
Frobenius norm, 102, 120 Low rank approximation, 103
Function
Matrix
k times continuously dierentiable,
approximation, 102
199
dense, 102
degenerated, 106
Hessenberg, 265
Holder continuous, 199
hierarchical, 101
harmonic, 3, 4
low rank, 106
innite times continuously dieren-
orthogonal, 265
tiable, 199
sparse, 129
Lipschitz continuous, 199
stiness, 228
piecewise constant, 233
symmetric and positive denite, 256
with compact support, 199 transformation, 74
Fundamental solution triangular, 265
Helmholtz equation, 45, 169, 212 Mesh ratio, 60
Laplace equation, 3, 107, 137, 208 Mesh size
linear elastostatics, 29, 209, 210 global, 60
Stokes system, 42, 210 local, 60
Midpoint quadrature rule, 240
Galerkin method, 67, 137, 169 Minimisation problem, 68, 72, 226
Galerkin orthogonality, 62, 229 Multiindex, 115, 199
Galerkin solution, 69
Galerkin variational problem, 228 Navier system, 28, 40
Generalised derivative, 201 Neumann datum, 14, 19, 20, 24, 25, 65,
Givens rotations, 265 75, 91, 98, 138, 169
Global trial space, 61, 63 Neumann series, 12, 14, 16, 23
GramSchmidt orthogonalisation, 258 Norm
Gardings inequality, 46, 47 equivalent, 202
semi-, 62, 202
Holder inequality, 200 Sobolev, 201, 206
Hierarchical block structure, 112 SobolevSlobodeckii, 201, 206
Hookes law, 28 Numerical integration, 239

Inner product, 204, 225, 256 Operator



Xelliptic, 226 Singular vector, 102


adjoint double layer potential, 6, 33, Singularity, 102, 104
45 Solvability condition, 13, 36, 43
Bessel potential, 204 Somigliana identity, 29, 35
boundary stress, 28 Sommerfeld radiation condition, 48, 52,
bounded, 225 54
double layer potential, 5, 32, 47 Space
hypersingular, 7, 15, 33, 47 conformal trial, 227
matrix surface curl, 32, 90 dual, 202, 225
semielliptic, 226 Hilbert, 201, 225
single layer potential, 4, 30, 45 Schwartz, 204
SteklovPoincare, 9, 16, 19, 22, 27, Sobolev, 3, 17, 201, 204, 206
35, 85, 161 Spectral condition number, 69, 73, 262
surface curl, 7, 74 Spherical harmonics, 107
trace, 208 Stability, 66
Optimal order of convergence, 67, 70, Stiness matrix, 66, 69
75, 76 Stopping criterion, 114, 120
Orthogonality, 23, 117, 118, 259 Stressstrain relation, 28
Support, 199
Parametrisation, 239 System
Particular solution, 25, 26, 84, 135, 160, linear, 228, 256
169 linear elastostatics, 27
Partition of unity, 205 Stokes, 40
Poincare inequality, 203
Poisson ratio, 28, 163, 166 Taylor series, 107
Potential TEAM
adjoint double layer, 6 problem 10, 131, 147, 157
double layer, 4, 32, 46 problem 24, 134, 158
Newton, 24
Tempered distribution, 204
single layer, 3, 30, 42, 45
Tensor
Preconditioning, 147, 153, 262
Kelvin, 29, 209, 210
Pressure, 40
Strain, 28
Radiation condition, 21, 22, 26, 84, 160 Stress, 28
Raleigh quotient, 261 Trace
Reference element, 59 exterior, 21
Relay, 134, 163 interior, 2, 155
Representation formula, 3, 17, 21, 22, Transmission conditions, 26, 84, 160
24, 26, 36, 41, 45, 49, 52, 54, 65, Tschebysche polynomials, 261
72, 77, 91, 94, 98
Riesz representation theorem, 226 Uniform cone condition, 205
Rigid body motions, 29 Unit sphere, 131, 143, 150, 156

Scaling condition, 14, 37 Variational problem, 11, 1416, 20, 23,


Schur complement, 79, 87, 156 27, 35, 36, 40, 43, 61, 75, 77, 82,
Shape function 84, 225, 233, 237
linear, 63, 64 Velocity eld, 40
piecewise constant, 61 Viscosity, 40
Singular value, 102
Singular value decomposition, 102 Young modulus, 28, 163, 166