Greg Fasshauer
Fall 2010
1 Introduction
2 kd-Trees
[Figure: nine labeled points in the unit square [0,1]^2 together with the associated kd-tree.]
However, it turns out that it is easier to deal with the compact support if
we compute the distance matrix corresponding to the quantity u = (1 - εr)_+,
since otherwise those entries of the distance matrix that are zero
(because the mutual distance between two identical points is zero) would
be lost in the sparse representation of the matrix.
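The point can be illustrated with a small Python sketch (a hypothetical 1-D example, not part of the original MATLAB code): storing raw distances r in a sparse matrix silently drops the r = 0 entry produced by coincident points, while every point inside the support has u = 1 - εr > 0, so nothing is lost.

```python
import numpy as np
from scipy import sparse

# Hypothetical 1-D example: the first data site coincides with the center.
dsites = np.array([0.3, 0.5])
ctrs = np.array([0.3])
ep = 2.0                                   # support radius 1/ep = 0.5

r = np.abs(dsites[:, None] - ctrs[None, :])   # plain distance matrix
u = np.maximum(1 - ep * r, 0)                 # u = (1 - ep*r)_+

# Sparsifying r loses the zero-distance entry; sparsifying u keeps
# one nonzero entry per point inside the support.
print(sparse.csr_matrix(r).nnz)  # 1: the r = 0 entry vanished
print(sparse.csr_matrix(u).nnz)  # 2: both in-support entries kept
```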
Program (DistanceMatrixCSRBF.m)
1 function DM = DistanceMatrixCSRBF(dsites,ctrs,ep)
2 N = size(dsites,1); M = size(ctrs,1);
% For each ctr/dsite, find the dsites/ctrs
% in its support along with u-distance u=1-ep*r
3 supp = 1/ep; nzmax = 25*N; DM = spalloc(N,M,nzmax);
4 if M > N % faster if more centers than data sites
5 T = kd_buildtree(ctrs,0);
6 for i = 1:N
7 [idx,dist,pts]=kd_rangequery_ball(T,dsites(i,:),supp);
8 DM(i,idx) = 1-ep*dist;
9 end
10 else
11 T = kd_buildtree(dsites,0);
12 for j = 1:M
13 [idx,dist,pts]=kd_rangequery_ball(T,ctrs(j,:),supp);
14 DM(idx,j) = 1-ep*dist;
15 end
16 end
17 clear T
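For readers more comfortable with Python, the same idea can be sketched with SciPy's kd-tree (function name and details are my own; this is not the MATLAB library code): build the tree on the larger point set and range-query it once per point of the smaller set.

```python
import numpy as np
from scipy import sparse
from scipy.spatial import cKDTree

def distance_matrix_csrbf(dsites, ctrs, ep):
    """Sparse matrix of u = 1 - ep*r for all pairs with r <= 1/ep.

    Mirrors the MATLAB program: kd-tree on the larger point set,
    one ball query per point of the smaller set.
    """
    N, M = len(dsites), len(ctrs)
    supp = 1.0 / ep
    DM = sparse.lil_matrix((N, M))
    if M > N:   # faster if more centers than data sites: row by row
        tree = cKDTree(ctrs)
        for i, x in enumerate(dsites):
            for j in tree.query_ball_point(x, supp):
                DM[i, j] = 1 - ep * np.linalg.norm(ctrs[j] - x)
    else:       # more data sites than centers: column by column
        tree = cKDTree(dsites)
        for j, x in enumerate(ctrs):
            for i in tree.query_ball_point(x, supp):
                DM[i, j] = 1 - ep * np.linalg.norm(dsites[i] - x)
    return DM.tocsr()
```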
Remark
The MATLAB code DistanceMatrixCSRBF.m contains two similar
blocks that are used depending on whether we have more centers
than data sites or vice versa.
Example
If there are more data sites than centers (cf. lines 11-15), then we
build a kd-tree for the data sites and
find for each center x_j those data sites within the support of
the basis function centered at x_j,
i.e., we construct the (sparse) matrix column by column.
In the other case (cf. lines 5-9) we
start with a tree for the centers and
build the matrix row by row.
This is accomplished by determining for each data site x_i all
centers whose associated basis function covers x_i.
fasshauer@iit.edu MATH 590 Chapter 12 17
Assembly of the Sparse Interpolation Matrix
Remark
As mentioned above, kd_buildtree is provided by the kd-tree
library and kd_rangequery_ball is a modified version of the
library code kd_rangequery (which finds points inside a
rectangular box surrounding the query point instead of inside a
hyper-sphere as needed for RBFs).
The call in line 5 (respectively 11) of DistanceMatrixCSRBF.m
generates the kd-tree of all the centers (data sites), and with the
call to kd_rangequery_ball in line 7 (respectively 13) we find
all centers (data sites) that lie within an isotropic L2-distance supp
of the ith data site (jth center).
The actual distances are returned in the vector dist and the
indices into the list of all data sites are provided in idx.
The distances for these points only are stored in the matrix DM.
Remark
For maximum efficiency (in order to avoid dynamic memory
allocation) it is important to have a good estimate of the number of
nonzero entries in the matrix for the allocation statement in line 3.
The reason for coding DistanceMatrixCSRBF.m in two different
ways is that this allows us to speed up the program when
dealing with non-square (evaluation) matrices (for example in the
context of MLS approximation, cf. Chapter 24).
Program (DistanceMatrixCSRBFA.m)
1 function DM = DistanceMatrixCSRBFA(dsites,ctrs,ep)
2 N = size(dsites,1); M = size(ctrs,1);
3 supp = 1/ep; nzmax = 25*N;
4 rowidx = zeros(1,nzmax); colidx = zeros(1,nzmax);
5 validx = zeros(1,nzmax); istart = 1; iend = 0;
6 if M > N % faster if more centers than data sites
7 T = kd_buildtree(ctrs,0);
8 for i = 1:N
9 [idx,dist,pts]=kd_rangequery_ball(T,dsites(i,:),supp);
10 newentries = length(idx);
11 iend = iend + newentries;
12 rowidx(istart:iend) = repmat(i,1,newentries);
13 colidx(istart:iend) = idx;
14 validx(istart:iend) = 1-ep*dist;
15 istart = istart + newentries;
16 end
17 else [ similar code ] end
29 idx = find(rowidx);
30 DM = sparse(rowidx(idx),colidx(idx),validx(idx),N,M);
31 clear T
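The triplet-array idea of this second program (collect row indices, column indices, and values, then make a single sparse-assembly call at the end) can be sketched in SciPy as follows; the function name is my own, and only the tree-on-centers branch is shown for brevity.

```python
import numpy as np
from scipy import sparse
from scipy.spatial import cKDTree

def distance_matrix_csrbf_triplets(dsites, ctrs, ep):
    """COO (triplet) assembly of the sparse u = 1 - ep*r matrix,
    mirroring the rowidx/colidx/validx arrays of the MATLAB program."""
    supp = 1.0 / ep
    tree = cKDTree(ctrs)
    rows, cols, vals = [], [], []
    for i, x in enumerate(dsites):
        idx = tree.query_ball_point(x, supp)
        dist = np.linalg.norm(ctrs[idx] - x, axis=1)
        rows.extend([i] * len(idx))          # row index repeated per hit
        cols.extend(idx)                     # column indices of the hits
        vals.extend(1 - ep * dist)           # u-values
    return sparse.coo_matrix((vals, (rows, cols)),
                             shape=(len(dsites), len(ctrs))).tocsr()
```

Collecting triplets avoids the repeated insertions into an already-allocated sparse matrix that make the first version slow for large problems.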
Remark
Note the use of the sparse matrix of ones spones when the basis
function is evaluated in terms of u = 1 - εr (e.g.,
r.^4.*(5*spones(r)-4*r) for Wendland's function). Had we used
5-4*r instead, then a full matrix would have been generated (with
many additional and unwanted ones, since subtracting a sparse
matrix from the scalar 5 fills in all the zero entries).
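In SciPy terms the same trick can be sketched as follows, with `U.sign()` playing the role of MATLAB's `spones` (the polynomial u^4 (5 - 4u) is Wendland's function written in terms of u = 1 - r):

```python
import numpy as np
from scipy import sparse

# Sparse matrix of u-values (u = 1 - ep*r); zeros mean "outside the support".
U = sparse.csr_matrix(np.array([[1.0, 0.0],
                                [0.6, 0.0]]))

ones = U.sign()                                       # analogue of spones(U)
Phi = U.power(4).multiply(5 * ones - 4 * U).tocsr()   # u^4 (5 - 4u), stays sparse

# A dense "5 - 4*U" would instead put the value 5 into every zero entry.
```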
Remark
In order to speed up the solution of the (symmetric positive
definite) sparse linear system we could use the preconditioned
conjugate gradient algorithm (pcg in MATLAB) instead of the basic
backslash \ (or matrix left division mldivide) operation, i.e., we
could replace line 17 of RBFInterpolation2D by
17 c = pcg(IM,rhs); Pf = EM * c;
Note, however, that the backslash \ operator also employs
state-of-the-art direct sparse solvers by first applying a minimum
degree preordering.
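The same choice exists in SciPy, sketched below on a small stand-in system (the matrix here is hypothetical; the real IM and rhs would come from the interpolation code):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg, spsolve

# A small symmetric positive definite sparse system standing in for IM*c = rhs.
IM = sparse.csr_matrix(np.array([[4.0, 1.0, 0.0],
                                 [1.0, 3.0, 1.0],
                                 [0.0, 1.0, 2.0]]))
rhs = np.array([1.0, 2.0, 3.0])

c_direct = spsolve(IM, rhs)   # direct sparse solve, analogue of backslash
c_cg, info = cg(IM, rhs)      # conjugate gradients, analogue of pcg
assert info == 0              # 0 signals convergence
```

As in MATLAB, the direct solver is hard to beat for moderate sizes; the iterative solver pays off when the system is large and a good preconditioner is available.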
Example
We use Wendland's compactly supported function φ_{3,1}(r) = (1 - r)_+^4 (4r + 1).
Remark
The rate listed in the table is the exponent of the observed
RMS-convergence rate O(h^rate).
It is computed using the formula

    rate_k = ln(e_{k-1}/e_k) / ln(h_{k-1}/h_k),   k = 2, 3, ...,   (1)
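Since the grids in these experiments roughly halve the fill distance h at every level, formula (1) reduces to dividing by ln 2; a short sketch (my own helper, under that halving assumption):

```python
import numpy as np

def rms_rates(errors):
    """rate_k = ln(e_{k-1}/e_k) / ln(h_{k-1}/h_k) for a sequence of
    RMS errors on grids whose fill distance h halves at every level."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(2.0)

# The first two RMS errors of the Wendland table reproduce its rate column:
print(rms_rates([1.562729e-1, 2.807706e-2]))  # ~[2.4766]
```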
Remark
The % nonzero column indicates the sparsity of the interpolation
matrices,
and the time is measured in seconds.
Errors are computed on an evaluation grid of 40 × 40 equally
spaced points in [0, 1]^2.
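The RMS error on such a grid can be computed as below; the test function and interpolant values here are hypothetical stand-ins for the quantities produced by the interpolation code.

```python
import numpy as np

# Evaluation grid of 40 x 40 equally spaced points in [0,1]^2.
g = np.linspace(0.0, 1.0, 40)
X, Y = np.meshgrid(g, g)

exact = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)  # hypothetical test function
Pf = exact + 1e-3                                      # stand-in interpolant values

rms = np.sqrt(np.mean((Pf - exact) ** 2))              # RMS error over 1600 points
```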
Remark
We can observe nice convergence for the first few iterations, but once
an RMS-error of approximately 5 × 10^{-3} is reached, there is not much
further improvement.
This behavior is not yet fully understood.
However, it is similar to what happens in the approximate
approximation method of Maz'ya (see, e.g.,
[Maz'ya and Schmidt (2001), Maz'ya and Schmidt (2007)] and our
discussion in Chapter 26).
Example
Now we use the non-stationary approach to interpolation,
i.e., we use basis functions without adjusting their support size:
ε = 0.7 is kept fixed for all experiments.
We have convergence, although it is not obvious what the rate
might be.
However, the matrices become increasingly dense, the computation
requires lots of system memory, and therefore the non-stationary
approach is very inefficient.
       N     RMS-error        rate      time (s)
       9     1.562729e-001              0.03
      25     2.807706e-002    2.4766    0.04
      81     4.853006e-003    2.5324    0.12
     289     2.006041e-004    4.5965    0.45
    1089     1.288000e-005    3.9611    2.75
    4225     1.382497e-006    3.2198    47.92
Remark
The time comparison between the entries in the two tables above is
not straightforward, since we used the (dense) code
RBFInterpolation2D for the non-stationary experiment:
there is no sparseness to be exploited, and the
kd-trees would actually introduce additional overhead.
Example
For comparison purposes we repeat the experiments with the
oscillatory basic function

    φ(r) = φ_2(r) = (1 - r)_+^6 (3 + 18r + 3r^2 - 192r^3),
       N     RMS-error        rate      time (s)
       9     1.655969e-001              0.03
      25     3.097850e-002    2.4183    0.06
      81     4.612941e-003    2.7475    0.20
     289     1.305297e-004    5.1432    0.72
    1089     4.780575e-006    4.7711    4.06
    4225     2.687479e-007    4.1529    55.09
Remark
While the performance of the oscillatory functions in the
stationary experiment is even more disappointing than that of
Wendland's functions, the situation is reversed in the
non-stationary case.
In fact, the errors obtained with the oscillatory basis functions are
almost as good as those achieved with optimally scaled
Gaussians (cf. Chapter 2).
In order to overcome the problems due to the trade-off principle
that are apparent in both the stationary and non-stationary
approach to interpolation with compactly supported radial
functions we will later consider using a multilevel stationary
scheme (see Chapter 32).