dkeren@cs.haifa.ac.il
Abstract
We define a generalized distance function on an unoriented 3D point set and describe how it may be used to
reconstruct a surface approximating these points. This distance function is shown to be a Mahalanobis distance
in a higher-dimensional embedding space of the points, and the resulting reconstruction algorithm a natural
extension of the classical Radial Basis Function (RBF) approach. Experimental results show the superiority of our
reconstruction algorithm to RBF and other methods in a variety of practical scenarios.
Keywords: surface reconstruction, distance measure, point clouds
ACM CCS: I.3.2 [Computer Graphics]: Graphics Systems (C.2.1, C.2.4, C.3); I.3.5 [Computer Graphics]: Curve,
surface, solid, and object representations
…to assign these extra off-surface points. Furthermore, the surface may not be closed, or it may have one-dimensional parts protruding from it, in which case it will not have a clear-cut 'inside' or 'outside'.

In the case that normal information is available at the samples, this can be used to generate inside/outside points close to the surface, or provide information on the gradient of an indicator function, namely a function that has the value 1 inside the surface and 0 outside. Kazhdan et al. [KBH06] show how computing such an indicator function reduces to solving a Poisson equation. Once an indicator function is obtained, extracting the level set d = 1/2 yields the desired surface. Alliez et al. [ACSTD07] show how to estimate and use unoriented normal information from point sets.

In this paper, we describe a novel construction of an unsigned distance function from sample data (without normal information). This function obtains very small values on the data, which increase smoothly and monotonically with distance from the data. Thus, almost no spurious components will be present in the reconstruction. The price we pay for this almost ideal function is twofold: higher computational complexity in its construction, and extra difficulty in extracting the surface than when using a signed distance function, because no precise constant value can be considered the representative value of the surface. The common Marching Cubes algorithm is not so useful, and we must resort to more sophisticated surface extraction techniques.

2. Radial Basis Functions

We start by reviewing the implicit function approach based on so-called radial basis functions (RBF). Consider the problem of fitting a surface to a set of N points X = {x_i}_{i=1}^N ⊆ R^3, given no additional information on the surface. A popular approach is to find some approximation to a signed distance function from the surface, and then define the surface as its zero set [CBC*01]. The main (and only) assumption is that the function obtains zero values close to the data. The distance function d is usually represented as a linear combination of RBFs

    d(x) = \sum_{j=1}^{M} \alpha_j \varphi(\|x - c_j\|),

The α_j's can be found by solving a linear system derived from its desired values at the data points (usually zero).

One inherent difficulty in this approach is that the zero constraints result in a homogeneous linear system, so the trivial solution α_j = 0 will result. The simplest way to avoid this is to provide off-surface points with non-zero values, but this is undesirable, because these non-zero values are somewhat arbitrary, and can have an adverse effect on the solution.

To show how a trivial solution may be avoided without off-surface points, we present a different interpretation of the RBF approach, which will provide a natural non-trivial solution, and also eventually lead to our generalized distance function. Assume first that we have RBF centres at all data points. Define a map Φ: R^3 → R^N by

    \Phi(x) = (\varphi(\|x - x_1\|), \ldots, \varphi(\|x - x_N\|)).

This maps the N input points X ⊆ R^3 to N-dimensional points Φ(X) = {Φ(x_i)}_{i=1}^N ⊆ R^N. Because it consists of N points, the set Φ(X) will always lie on an (N − 1)-dimensional hyperplane H in R^N, which is the simplest possible surface imaginable in this space. Thus, we may assume that the underlying surface we seek is the set of all points in R^3 which are mapped to H by Φ, namely, all x such that

    d(x) = \sum_{j=1}^{N} \alpha_j \varphi(\|x - x_j\|) + \alpha_{N+1} = 0    (1)

for an appropriate coefficient vector α = (α_1, ..., α_{N+1})^T. This same assumption is made in the traditional RBF approach. It is easy to see that this vector belongs to the right nullspace of the following N × (N + 1) matrix A:

    A_{i,j} = \begin{cases} \varphi(\|x_i - x_j\|), & j \le N, \\ 1, & j = N + 1. \end{cases}

The dimension of this nullspace is at least one because A has more columns than rows. It will be larger than one only if the basis functions are degenerate in some way (e.g. if two data points coincide). Thus, taking α to be a null vector of A results in Equation (1) defining a non-trivial RBF interpolant d from which we can extract the surface as its zero set. It is also straightforward to see that d(x) is actually proportional to the signed distance of the point Φ(x) ∈ R^N from the hyperplane H. Note that this distance is measured in R^N, as opposed to R^3, so it is not the true Euclidean distance from the underlying surface in R^3, but an approximation.
… coincide with the data set, and (2) the surface is modelled as a hyperplane H of dimension M − l in R^M. H may be defined as the intersection of l hyperplanes of dimension M − 1, each containing the N points of Φ(X). This dimension reduction can be viewed as obtaining a tighter fit to X. Thus, we need to …
It is possible to slightly modify the straightforward definition of D as a simple Euclidean distance in R^M to be a weighted …

… where α_k are the columns of U, and ⟨·, ·⟩ denotes the inner product. Because Σ = B^T B, the eigenvectors of Σ are identical to the right singular vectors of B, and the squares of the singular values of B are equal to the eigenvalues of Σ.

As in the previous section, we may choose to model the point cloud as the subspace of R^M of dimension M − l. Similarly to PCA, this is the subspace formed by the M − l eigenvectors of Σ having the largest eigenvalues. Thus, the l eigenvectors of Σ having the smallest eigenvalues form an orthogonal basis of the complementary subspace V^⊥, and we call it the approximate nullspace of Σ. All this is equivalent to using the rank-l approximation to Σ^{-1} minimizing the Frobenius norm. By the Young–Eckart theorem [GVL96], this approximation is obtained by eliminating (i.e. setting to zero) the M − l smallest eigenvalues of Σ^{-1} (which are the largest eigenvalues of Σ). Thus, D(x) can be approximated as

    D(x) = \left( \sum_{k=M-l+1}^{M} \sigma_k^{-1} \, \langle \Phi(x), \alpha_k \rangle^2 \right)^{1/2}.    (5)

We call this distance function the 'MaD distance function' (MaD is short for Mahalanobis Distance), and remind the reader again that the traditional RBF method is the special case l = 1 (with the slight caveat that D(x) will be the absolute value of the RBF function). Note also that the simple Euclidean distance function defined in the previous section is obtained when all σ_k in (5) are simply taken to be ones.
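The following sketch assembles the MaD of Equation (5) for a toy point cloud. Since the covariance matrix defined in (3) is not reproduced in this excerpt, the sketch simply uses Σ = B^T B with row i of B equal to Φ(x_i), consistent with the identity quoted above; the Gaussian kernel, the choice of centres and the value of l are placeholders rather than the paper's settings.

```python
import numpy as np

def build_mad(points, centres, phi=lambda r: np.exp(-r**2), l=3):
    """Minimal sketch of the MaD distance of Equation (5).

    Assumes Sigma = B^T B, where row i of B is
    Phi(x_i) = (phi(|x_i - c_1|), ..., phi(|x_i - c_M|))."""
    X, C = np.asarray(points, float), np.asarray(centres, float)

    def Phi(x):
        r = np.linalg.norm(np.atleast_2d(x)[:, None, :] - C[None, :, :], axis=-1)
        return phi(r)                       # shape (k, M)

    B = Phi(X)                              # N x M embedding of the data
    Sigma = B.T @ B                         # M x M "covariance" matrix
    sigma, alpha = np.linalg.eigh(Sigma)    # ascending eigenvalues, orthonormal eigenvectors
    # The l eigenvectors with the smallest eigenvalues span the approximate nullspace;
    # the tiny floor guards against numerically zero eigenvalues.
    sig_small = np.maximum(sigma[:l], 1e-12)
    A_small = alpha[:, :l]

    def mad(x):
        proj = Phi(x) @ A_small             # <Phi(x), alpha_k> for the l smallest eigenvalues
        return np.sqrt((proj**2 / sig_small).sum(axis=-1))
    return mad

# Toy usage: data on a unit sphere, centres a random subset of the data.
rng = np.random.default_rng(1)
P = rng.normal(size=(200, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)
D = build_mad(P, centres=P[::4], l=3)
print(D(P).mean(), D(np.array([[0.0, 0.0, 0.0]])))   # MaD on the data vs. off the surface
```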
The observant reader will note the similarity between our method and kernel PCA methods [SGS05, SSM98, WCS05]. In kernel methods, each point in the data is mapped into a high-dimensional space, called the feature space. By choosing the mapping correctly, various relations might be found within the data in the feature space that would be hard to find otherwise. For example, PCA can be used on the feature space to gather information about linear relations in the high-dimensional space, which are non-linear relations in the ambient data space. Our approach can be viewed as mapping the data into a 'feature space', where the features are some form of closeness ('affinity') to each of the given data points. There is also a connection between our approach and Support Vector Machines (SVM) [SC08], where linear classifiers are sought in a high-dimensional feature space.

There is also an alternative way to derive the MaD function as the weighted average of relevant positive functions, where larger weight is given to those functions obtaining small values on the data x_i. See the Appendix for details.
5. The Reconstruction Algorithm

Having covered the basic theory behind our method, we are now in a position to describe our reconstruction algorithm. It consists of two main steps: First, the MaD is sampled on a hierarchical grid covering the volume of interest. Secondly, the local minimum surface of the function is extracted. The first operation is quite straightforward, albeit having relatively high complexity. The latter operation is not as straightforward, and requires some sophistication. In the sequel, we describe the details of each of these steps.
5.1. MaD calculation

The algorithm involves first choosing the number of centres, M, and the dimension of the approximate nullspace, l. Then the M × M covariance matrix Σ, as defined in (3), is constructed. This costs O(M^2 N), where N is the size of the input data set. Next, the l eigenvectors of Σ with smallest eigenvalues must be computed. This costs O(lM^2). Computing the MaD over the sample grid involves evaluating (5) at each grid point. The complexity of this is O(lM^2 G), where G is the number of grid points. A typical number for G would be 512^3 ≈ 1.3 × 10^8. However, it is impractical and quite unnecessary to sample the entire grid at full resolution. Because high accuracy is required only near the surface, we employed the following refinement technique. Initially, the MaD is sampled on a coarse square grid, forming a voxel space. Voxels whose centres have function values which fall under a certain threshold are then further sampled on a finer grid. This can be continued recursively on an octree-type structure. Moreover, the sampling at the higher levels may make do with just a few eigenvectors [a small value of k in (5)]. Extra eigenvectors can be added while refining the grid, making the computation even more efficient.
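A minimal sketch of the coarse-to-fine sampling just described: cells whose centre value exceeds a threshold are discarded, the rest are split octree-style and re-evaluated. The threshold, the grid sizes and the `mad` callable (any function like the one sketched earlier) are assumptions, not values from the paper.

```python
import numpy as np

def refine_cells(mad, lo, hi, depth, max_depth, threshold, out):
    """Recursively subdivide the cell [lo, hi] (octree-style), keeping only
    cells whose centre has a small MaD value, i.e. cells near the surface."""
    centre = (lo + hi) / 2.0
    value = float(mad(centre[None, :]))
    if value > threshold:          # far from the surface: stop refining this cell
        return
    if depth == max_depth:         # fine enough: record the cell centre and its value
        out.append((centre, value))
        return
    half = (hi - lo) / 2.0
    for corner in np.ndindex(2, 2, 2):            # split into 8 child cells
        child_lo = lo + np.array(corner) * half
        refine_cells(mad, child_lo, child_lo + half, depth + 1,
                     max_depth, threshold, out)

def sample_mad_hierarchically(mad, bbox_min, bbox_max, coarse=8,
                              max_depth=3, threshold=2.0):
    """Coarse grid of coarse^3 cells, each refined up to max_depth times."""
    lo, hi = np.asarray(bbox_min, float), np.asarray(bbox_max, float)
    cell = (hi - lo) / coarse
    out = []
    for idx in np.ndindex(coarse, coarse, coarse):
        cell_lo = lo + np.array(idx) * cell
        refine_cells(mad, cell_lo, cell_lo + cell, 0, max_depth, threshold, out)
    return out

# Usage with the build_mad sketch above:
# samples = sample_mad_hierarchically(D, [-1.5] * 3, [1.5] * 3)
```

The recursion mirrors the text: accuracy (and therefore sampling density) is spent only where the MaD is already small.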
5.2. The watershed transform

The most difficult part of the algorithm is extracting the surface. In 2D, we can imagine the MaD as an elevation map of mountains and ridges. The curve we seek is a collection of points for which the MaD is small, so it must lie in the ridges of the map. As a start, we would like to detect these ridges in the map. In image processing, this is done using a technique known as the watershed transform. It is used to segment an image by intensity into different valleys. Intuitively, if a drop of water falls on a mountain it will slide down away from the watershed, or drainage divide, and not cross any other such line. If we apply the watershed transform to the negative of the MaD, the mountains become valleys and watersheds become ridges, so the watershed transform of the negative of the MaD will reveal the set of ridges. We outline here the mathematical definitions, and then adapt them to the 3D case. See [RM00] for a more detailed treatment. Let f be an image. Define the topographical distance between two points p and q in the image as

    T_f(p, q) = \inf_{\gamma} \int_{\gamma} \|\nabla f(\gamma(s))\| \, ds,

where the infimum is over all paths γ from p to q. Now let m_i be a minimum of f. Then the catchment basin of m_i, CB(m_i), or watershed segment, is the set of all points which are topographically closer to m_i plus the absolute height of …
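To make the catchment-basin idea concrete, here is a small pure-NumPy stand-in that labels each pixel of a 2D map by the local minimum it reaches through steepest descent over its 8-neighbourhood. This discrete flooding is a simplification for illustration and is not the watershed algorithm of [RM00]; the toy 'MaD' (small on a circle) is likewise an assumption.

```python
import numpy as np

def catchment_basins(f):
    """Label each pixel of a 2D array f by the local minimum reached by
    repeatedly stepping to the lowest of its 8 neighbours (a crude stand-in
    for topographical distance, not the algorithm of [RM00])."""
    h, w = f.shape
    padded = np.pad(f, 1, constant_values=np.inf)
    # For every pixel, index (0..8) of the lowest value in its 3x3 neighbourhood.
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)], axis=-1)
    best = stack.argmin(axis=-1)
    dy, dx = best // 3 - 1, best % 3 - 1          # offsets to the steepest-descent neighbour
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for _ in range(h * w):                        # descend until every pixel settles
        ny, nx = ys + dy[ys, xs], xs + dx[ys, xs]
        if np.array_equal(ny, ys) and np.array_equal(nx, xs):
            break
        ys, xs = ny, nx
    return ys * w + xs                            # basin label = flat index of the minimum

# Toy usage: a "MaD" that is small on a circle; flood its negative, as in the text.
y, x = np.mgrid[0:64, 0:64] / 63.0
toy_mad = np.abs(np.hypot(x - 0.5, y - 0.5) - 0.3)
labels = catchment_basins(-toy_mad)
boundary = (np.diff(labels, axis=0) != 0)[:, :-1] | (np.diff(labels, axis=1) != 0)[:-1, :]
print(np.unique(labels).size, boundary.sum())     # basin count and ridge-pixel count
```

The boundary between the interior and exterior basins of the negated map traces the circle where the toy 'MaD' is small; spurious divides between the exterior basins also appear, which is one reason a subsequent segment-selection step (mentioned in Section 6.11) is needed.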
6.11. Watershed versus Min-Cut

An alternative method of extracting a surface from an unsigned distance function was proposed by Hornung and Kobbelt [HK06]. This is based on the minimal cut of a regular grid graph using a discrete distance function, and may be adapted to use our MaD distance function instead. The main advantage of the min-cut method over the watershed method is that segment selection, a difficult phase in the watershed method, is avoided. However, in certain situations, especially where the object is 'thin', the cut may pass through the thin part and skip part of the point set entirely. This is illustrated in Figure 17, where the continuous line represents the desired curve, and the dashed line represents the cut that the min-cut algorithm might generate. In contrast, the watershed transform of the map will result in two interior segments, which will be easily classified as such in the next phase of the algorithm.

7. Conclusion

We have presented a method to compute a distance function derived from a given point data set, based on embedding the data in a higher-dimensional space. In this space, the point set, and the underlying surface it is sampled from, may be modelled as a linear subspace, making all subsequent processing very easy and intuitive. The resulting MaD distance function is just a Mahalanobis (weighted) distance from this subspace. The more conventional distance function, resulting from approximating the surface as a zero-set of a single linear combination of RBF functions, is shown to be a very special case of the MaD distance.

Embedding data in a higher-dimensional space using so-called kernels, to enable easier processing, is a common technique in machine learning. It remains to be seen whether other machine-learning techniques may be borrowed and applied to geometry processing.
The MaD method has proven to be quite versatile and robust, justifying the extra computational overhead compared to more traditional methods such as RBF. However, a number of issues require further research. First and foremost, the surface extraction procedure is still somewhat cumbersome, and could benefit from some simplification. Secondly, the computational complexity of the method should be reduced in order for it to be more practical. We are confident this can be done with some further optimization. Perhaps this can be combined with the surface extraction procedure.
Appendix: A Different Interpretation of the MaD Function

We present here an alternative derivation of the MaD function.

Given a set of data points {x_i}_{i=1}^N and basis functions {φ_j}_{j=1}^M, we seek a positive distance function related to the subspace spanned by these basis functions (denoted by Φ), ideally having a very small absolute value on the data points. The following function seems to have these properties:

    D = \frac{\int_{\Phi} f^2 \exp\left(-\sum_{i=1}^{N} f^2(x_i)\right) df}{\int_{\Phi} \exp\left(-\sum_{i=1}^{N} f^2(x_i)\right) df}.

This is because D is the weighted average of the squares of all functions in Φ, such that the weight decreases exponentially in the average value of the function on the data points.

Now we show that D is precisely our MaD distance function, by integrating over Φ through its coefficient space:

    D = c \int_{\Phi} f^2 \exp\left(-\sum_{i=1}^{N} f^2(x_i)\right) df
      = c \int_{\mathbb{R}^M} \Big(\sum_{j=1}^{M} a_j \phi_j\Big)^{2} \exp\left(-\sum_{i=1}^{N} \Big(\sum_{j=1}^{M} a_j \phi_j(x_i)\Big)^{2}\right) da
      = c \int_{\mathbb{R}^M} (a^T \phi)^2 \exp(-a^T \Sigma a)\, da
      = \phi^T \Sigma^{-1} \phi,

where c^{-1} = \int_{\Phi} \exp\left(-\sum_{i=1}^{N} f^2(x_i)\right) df, φ denotes the vector (φ_1(x), ..., φ_M(x))^T, and the last equality is by application of the standard Gaussian integral for a vector a and matrix Σ:

    \int_{\mathbb{R}^M} a_i a_j \exp(-a^T \Sigma a)\, da = c^{-1} \, (\Sigma^{-1})_{ij}.
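As a quick numerical sanity check of this derivation, the weighted average defining D can be estimated by Monte Carlo over the coefficient space: the weight exp(-a^T Σ a) is an unnormalized zero-mean Gaussian with covariance Σ^{-1}/2, so sampling a from that Gaussian and averaging f(x)^2 should be proportional to φ(x)^T Σ^{-1} φ(x) at every x (the constant factor comes from the Gaussian second moment and is absorbed into the normalization). The toy 1D data and Gaussian basis functions below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup (illustrative): 1D sample points and Gaussian basis functions phi_j.
data = np.array([-1.0, -0.3, 0.4, 1.2])
centres = np.array([-1.0, 0.0, 1.0])
phi = lambda x: np.exp(-(np.atleast_1d(x)[:, None] - centres[None, :])**2)  # rows phi(x)

B = phi(data)            # N x M, row i is (phi_1(x_i), ..., phi_M(x_i))
Sigma = B.T @ B          # then sum_i f(x_i)^2 = a^T Sigma a for f = sum_j a_j phi_j

# The weight exp(-a^T Sigma a) is a zero-mean Gaussian with covariance Sigma^{-1}/2,
# so sample coefficient vectors a from it directly.
a = rng.multivariate_normal(np.zeros(len(centres)), np.linalg.inv(Sigma) / 2,
                            size=200_000)

xs = np.array([-0.8, 0.0, 0.25, 0.9])                           # evaluation points
V = phi(xs)
closed = np.einsum('km,mn,kn->k', V, np.linalg.inv(Sigma), V)   # phi^T Sigma^{-1} phi
mc = ((a @ V.T) ** 2).mean(axis=0)                               # weighted average of f(x)^2
print(mc / closed)       # roughly the same constant at every x: D is proportional to phi^T Sigma^{-1} phi
```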
References

… on Computational Geometry (Minneapolis, MN, USA, 1998), pp. 39–48.

[ACK01] AMENTA N., CHOI S., KOLLURI R. K.: The power crust, unions of balls, and the medial axis transform. Computational Geometry 19, 2–3 (2001), 127–153.

[ACSTD07] ALLIEZ P., COHEN-STEINER D., TONG Y., DESBRUN M.: Voronoi-based variational reconstruction of unoriented point sets. In Proceedings of the Symposium on Geometry Processing (Barcelona, Spain, 2007), Eurographics Association, pp. 39–48.

[Boi84] BOISSONNAT J.-D.: Geometric structures for three-dimensional shape representation. ACM Transactions on Graphics 3, 4 (1984), 266–286.

[CBC*01] CARR J. C., BEATSON R. K., CHERRIE J. B., MITCHELL T. J., FRIGHT W. R., MCCALLUM B. C., EVANS T. R.: Reconstruction and representation of 3D objects with radial basis functions. In Proceedings of SIGGRAPH '01 (Los Angeles, CA, USA, 2001), ACM, pp. 67–76.

[DEJ00] DE MAESSCHALCK R., JOUAN-RIMBAUD D., MASSART D. L.: The Mahalanobis distance. Chemometrics and Intelligent Laboratory Systems 50, 1 (2000), 1–18.

[DoC76] DO CARMO M. P.: Differential Geometry of Curves and Surfaces. Prentice-Hall, Upper Saddle River, NJ, 1976, Chap. 3.

[DTS02] DINH H. Q., TURK G., SLABAUGH G.: Reconstructing surfaces by volumetric regularization using radial basis functions. IEEE Transactions on Pattern Analysis and Machine Intelligence 24, 10 (2002), 1358–1371.

[GG07] GUENNEBAUD G., GROSS M.: Algebraic point set surfaces. In Proceedings of SIGGRAPH '07 (San Diego, CA, USA, 2007), ACM, p. 23.

[GGG08] GUENNEBAUD G., GERMANN M., GROSS M.: Dynamic sampling and rendering of algebraic point set surfaces. Computer Graphics Forum 27, 2 (2008), 653–662.

[GVL96] GOLUB G. H., VAN LOAN C. F.: Matrix Computations (3rd edition). Johns Hopkins University Press, Baltimore, MD, 1996.

[HDD*92] HOPPE H., DEROSE T., DUCHAMP T., MCDONALD J., STUETZLE W.: Surface reconstruction from unorganized points. In Proceedings of SIGGRAPH '92 (Chicago, IL, USA, 1992), ACM, pp. 71–78.

[KBH06] KAZHDAN M., BOLITHO M., HOPPE H.: Poisson surface reconstruction. In Proceedings of the Symposium on Geometry Processing (Cagliari, Sardinia, Italy, 2006), pp. 61–70.

[KSO04] KOLLURI R., SHEWCHUK J. R., O'BRIEN J. F.: Spectral surface reconstruction from noisy point clouds. In Proceedings of the Symposium on Geometry Processing (Nice, France, 2004), ACM, pp. 11–21.

[LSW09] LUO C., SAFA I., WANG Y.: Approximating gradients for meshes and point clouds via diffusion metric. Computer Graphics Forum 28, 5 (2009), 1497–1508.

[ML] MeshLab. http://meshlab.sourceforge.net/

[OGG09] OZTIRELI A. C., GUENNEBAUD G., GROSS M.: Feature preserving point set surfaces based on nonlinear kernel regression. Computer Graphics Forum 28, 2 (2009), 493–501.

[RM00] ROERDINK J., MEIJSTER A.: The watershed transform: Definitions, algorithms and parallelization strategies. Fundamenta Informaticae 41, 1–2 (2000), 187–228.

[SAAY06] SAMOZINO M., ALEXA M., ALLIEZ P., YVINEC M.: Reconstruction with Voronoi-centered radial basis functions. In Proceedings of the Symposium on Geometry Processing (Cagliari, Sardinia, Italy, 2006), Eurographics Association, pp. 51–60.

[SC08] STEINWART I., CHRISTMANN A.: Support Vector Machines. Springer-Verlag, New York, 2008.

[SGS05] SCHÖLKOPF B., GIESEN J., SPALINGER S.: Kernel methods for implicit surface modeling. In Advances in Neural Information Processing Systems 17, L. K. Saul, Y. Weiss, L. Bottou (Eds.). MIT Press, Cambridge, MA (2005), pp. 1193–1200.

[SSM98] SCHÖLKOPF B., SMOLA A., MÜLLER K.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation 10, 5 (1998), 1299–1319.

[TG08] TOLEDO S., GOTSMAN C.: On the computation of the nullspace of sparse rectangular matrices. SIAM Journal on Matrix Analysis and Applications 30, 2 (2008), 445–463.

[WCS05] WALDER C., CHAPELLE O., SCHÖLKOPF B.: Implicit surface modelling as an eigenvalue problem. In Proceedings of the International Conference on Machine Learning (Bonn, Germany, 2005), ACM, pp. 936–939.