
Precision Engineering: Journal of the International Societies for Precision Engineering and Nanotechnology 26 (2002) 105–121

Optimizing discrete point sample patterns and measurement data analysis on internal cylindrical surfaces with systematic form deviations
K. D. Summerhays a,e,*, R. P. Henke b,e, J. M. Baldwin c, R. M. Cassou a, C. W. Brown d

a University of San Francisco, Harney Science Center, Room 409, 2130 Fulton Street, San Francisco, CA 94117-1080, USA
b Micro Vu, Inc., 7909 Conde Lane, Windsor, CA 95492, USA
c Sandia National Laboratories, P.O. Box 969, MS9133, Livermore, CA 94551-0969, USA
d Honeywell Federal Manufacturing & Technologies, D/A11, MC74, P.O. Box 419159, Kansas City, MO 64141-6159, USA
e MetroSage LLC, 26896 Shake Ridge Road, Volcano, CA 95689, USA

Received 9 January 2001; received in revised form 15 June 2001; accepted 20 September 2001

* Corresponding author. Tel.: +1-415-422-6142; fax: +1-415-422-5800. E-mail address: summerhays@usfca.edu (K.D. Summerhays).

Abstract

Selection of an appropriate sampling strategy and of the correct fitting algorithm are two of the key issues in the practice of modern coordinate metrology. A recent report described the development of new techniques for modeling the form errors of machined part features and illustrated their use in identifying and understanding the relation of form errors to machining process variables. In this report, these mathematical tools are further developed and applied for the determination of optimum, reasonably-sized probing patterns for measurement under time and economic constraints. The focus of the work reported here is on full internal cylindrical surfaces. The technique is, however, of general utility and can be employed with any nominal feature geometry. Its application does, in many cases, produce significant improvements in the uncertainty of derived geometric dimensioning and tolerancing parameters with modest or no increase in measurement time. In contrast to most work on these problems, extensive use is made of process-dependent information in selection of sampling protocol and data analysis method. Since the method begins with an understanding of the form errors introduced into the part by the particular manufacturing regime, at least partial benefit can be seen even when the sample pattern optimization is employed in conjunction with commercial, off-the-shelf CMM control and data analysis software. © 2001 Elsevier Science Inc. All rights reserved.
Keywords: Coordinate measuring machine; Sampling plan; Fitting algorithm; Cylinder; Form error; Principal component analysis; Chebyshev polynomial; Fourier series; Eigen shapes; Least squares; Minimum zone; Maximum inscribed cylinder

1. Introduction

The use of coordinate measuring machines (CMMs) and other discrete point sampling devices in dimensional metrology raises questions regarding the proper interpretation of data which technically constrains a surface only at the points measured. In the absence of additional information, point sample devices can make statements only about the sample dataset rather than about the part. Reconciling the intrinsic limitations of such inspection techniques with dimensioning and tolerancing standards, which often make the implicit assumption that surfaces can be fully characterized, is a major concern of the metrology community. This issue is at the root of the so-called methods divergence problem which has plagued dimensional metrology practitioners in recent years [1]. The current work leaves aside the issue of the appropriate evaluation criterion implied by a particular dimensional specification, an issue which has seen significant recent progress through standardization activities [2]. It addresses two additional but important issues:

• Choice of an appropriate sampling strategy. This is often based on the intuition of the individual practitioner and applied with little or no demonstrable assurance of its adequacy to satisfy any specific metrological objective. Particularly, the characteristics of the surface directly attributable to the manufacturing method typically are not taken into account.

• Choices of appropriate fitting and data analysis algorithms. These usually are left to the author of the CMM software and frequently are made without due


consideration of the limitations imposed by the data sample. It is important to understand that these issues, although related to the question of the theoretically correct evaluation criterion, are distinct concerns. For example, it is quite clear that the correct value according to the standard [2] for the form of a nominally cylindrical feature is given by the distance between the pair of perfect, minimally-separated cylinders that just enclose the real surface. It is less clear that a minimum-zone fit to a point sample dataset taken on the feature will produce the best (in any sense) estimate of that correct value in a specific instance. In addition to these fundamental issues, economic considerations argue for more efficient and reliable dimensional measurements. This has led to a search for sampling strategies and data analysis methods that maximize the information available from discrete point samples of limited size. Finally, one should be aware of the limitations of most currently-used CMM fitting and data reduction algorithms, which typically assume that all deviations from the fitted model geometry follow a Gaussian distribution and are randomly distributed over the feature surface, conditions that manifestly are not valid approximations for a great many machining techniques [3].

A recent work presented a set of new mathematical techniques for the identification and evaluation of systematic departures of machined features from ideal form [4]. Here, those methods are extended and applied to the specific problems of sampling plan and substitute geometry algorithm selection. Specifically, this paper develops and demonstrates the performance of a technique for generating optimized sampling patterns of finite size, based on knowledge of the character of the form error in a manufactured surface.

2. Historical

The methods divergence issue was brought forcefully to the attention of the measurement community in 1988 [5]. The basic issues at the heart of the problem have been known to workers in dimensional metrology for much longer. A clear statement of the nature and origins of the general problem was presented by Weckenmann et al. [6]. The recent beginning of the mathematical codification of geometric dimensioning and tolerancing (GD&T) principles [2] should help to focus work to resolve these issues. It is important to understand, as was pointed out by the creators of that document, that the definitions therein presume complete knowledge of the surface(s) in question and thus they do not directly address the problems of optimum extraction of GD&T parameters from realizable point sample populations nor of design of efficient sampling strategies.

2.1. Choice of the feature model

In commercial CMM software the model chosen for fitting to the measured data is, perhaps without exception, the ideal feature geometry. Some workers have recognized that it may be advantageous to fit to the measured point data a model that attempts to represent at least a portion of the deviation from ideal form. Perhaps the most universally appreciated possibility has been to model radial deviations with a periodic function. Goto and Iizuka, as long ago as 1975, used Fourier series and orthogonal polynomials to represent form errors in cylindrical features [7]. Kurfess et al. [8,9] proposed, more generally, to fit a series of models, beginning with the ideal geometry and proceeding to models incorporating increasingly complicated deviations, using a maximum likelihood test as the indicator of adequate description of the data. The sequence of models would be dictated by expert knowledge of the manufacturing process. In a similar vein, Abackerli et al. have discussed the value of modeling the form error of the real feature by fitting a Fourier series or some other appropriate function to the residuals of the fitted ideal feature [10]. Butler and Cox have illustrated the approach with a specific example [11]. They also introduce the idea of an underestimation factor, an additive or multiplicative factor to account for the fact that a finite sample of the surface under examination is likely to produce an optimistic estimate of the form error. Dowling and coworkers [12] also suggested the merits of fitting to a model that attempts to account for systematic deviations from the ideal shape.

2.2. Fitting the feature model to the data

The matter of best choice of fitting criterion and algorithm was discussed at least as early as 1977 [13]. The literature of this aspect of the problem is far too extensive to exhaustively review here. It is sufficient to say that one of the main thrusts of the work in this area has been comparison of the least squares fit of the nominal shape (then, and still, the most commonly-available fit with commercial CMM software systems) with various limiting fits, especially the minimum zone for evaluation of form error. Much of the early published work employed the criterion of producing the smallest value for the form error from a relatively sparse sampling of the feature surface, ignoring both the low probability that a sparse sample will include the functional extremes of the surface and the relatively robust character of the least squares fit against outliers (as compared to a limiting fit to a finite sample). The more appropriate (and more difficult to evaluate) criterion is that alluded to above: closeness to the value obtained by applying the theoretically correct fitting procedure to the completely characterized surface. Work through the early part of this decade has been reviewed by Feng and Hopp [14]. One more recent treatment of this issue is that of Choi and Kurfess [15].


2.3. Sampling plans

The literature on this topic is, once again, voluminous, the concern for striking a reasonable balance between sample fidelity and economy going back at least to the early 1980s [16]. The need for a good sampling strategy and for established methods for objectively choosing that strategy has been clearly enunciated by Weckenmann et al. [6]. Much of the work described in the literature has been directed at estimation of required sample size or density; there have been comparatively few reports of work to determine advantageous locations for sampling. Kanada and Tsukada considered the choice of optimum sampling interval for measurement of cylindrical form [17], treating the machined surface as a stationary random process and using the correlation function to determine appropriate sampling intervals in the axial (straightness) and circumferential (roundness) directions. The state of conventional practice at the beginning of the 1990s was summarized by Caskey et al. [18], who categorized potential sampling patterns as equidistant, random, stratified and restricted stratified. They stated a preference for stratified sampling in the belief that it will exhibit superior robustness in the face of waviness. Experiments with synthesized surfaces containing periodic as well as random errors led them to conclude that, regardless of the fitting criterion, sample densities considerably greater than those used in common inspection practice are required to accurately recover the form error. Coy [19] has described the use of the autocorrelation function to predict the errors arising in uniform sampling of a surface possessing both periodic and random deviations from perfect form.

Most recently, several groups have attempted to develop optimum sampling strategies based on actual or presumed knowledge of surface characteristics. Mestre and Abou-Kandil [20,21] have used Bayesian methods to model the surface. From a relatively sparse dataset and a 1D autocorrelation function taken over a small region of the surface, they compute an expectation and variance for any location and thus determine a set of boundary surfaces that contain the actual surface with a known probability. Additional measurement points can be taken in areas where the estimated surface approaches the minimum-separation bounding surfaces, thereby refining the estimate of the surface. Woo, Liang, et al. [22–25] compared traditional sampling patterns, e.g., uniform, with strategies based on mathematical sequences to predict that the latter allow approximately quadratic reduction in the required sampling density for a given level of accuracy. The Hammersley and Halton–Zaremba sequences gave roughly equivalent results and superior performance to a uniform sampling pattern for Gaussian surfaces intended to approximate the roughness and flatness characteristics of various machined surfaces. Choi et al. [26] have examined the issue of uncertainty of form error estimates derived from minimum zone fits. They showed that the distribution of the form error is important in

establishing the uncertainty, thereby indicating that a priori determination of the appropriate sampling density is likely to be problematic. Two complementary works on sampling plans for measurement of circular features are those of Phillips et al. [27], who considered the influence of errors arising from the measurement apparatus, and of Capello and Semeraro [28], who studied the effect of periodic form errors. An adaptive sampling methodology for coordinate measurement has been described, although it requires probing capabilities not commonly found on the average CMM [29].

3. Experimental

3.1. Description of the test artifact

The set of machined artifacts used as the source of data for this work has been described in detail elsewhere [30]. Briefly, they were produced from cast aluminum plate machined to nominal one inch (25.4 mm) thickness. Each contains 50 full internal cylindrical features, through, blind and counterbored, of various sizes and depths, produced by a variety of machining techniques. All holes were started with a center-drill operation, followed by drilling to nominal or near-nominal size (designated C/D). Some of the holes were further finished by reaming (C/D/R), boring/reaming (C/D/B/R), plunge end milling (C/D/P), or peripheral milling (C/D/M). Nominal hole sizes ranged from 1/16 to 1 in. (1.59 to 25.4 mm) diameter. Nominal depths were between 1/4 in. (6.35 mm) and the full thickness of the plate. For the through holes, depths were controlled by first milling pockets, where necessary, of appropriate depth into the back face of the artifact prior to hole production. A series of 30 artifacts was produced on a K&T Orion 2200 horizontal machining center with a Gemini-D System 65 controller using, as far as possible, the same machining cutters throughout. (A few were replaced due to breakage at various points in the production cycle.)

3.2. Measurement of the test artifact

Measurements of the inside cylindrical features on all artifacts were taken with a Moore Special Tool Company Model M48V measuring machine, the basic performance capabilities of which have been described in detail elsewhere [31]. The details of the measurement procedure, along with a detailed error budget for the measurement, are also presented in a separate report [30]. For each hole, the data were gathered in 37 vertical scans at equiangular intervals around the circumference of the feature. For through holes, data were taken over the complete vertical extent of the feature. For blind and counterbored holes, data taking was terminated approximately 0.5 mm above the nominal bottom of the feature. The number of discrete points taken per feature was in the range of 10^3 to 2 × 10^4, depending on


the feature size. The worst case uncertainty of the radial distance measurement was estimated to be about 0.6 μm.

4. The extended zone model

4.1. Description of the model

The mathematical and conceptual basis for the new model, which has been designated the Extended Zone (EZN) model, was presented in an earlier paper [4]. Herein we review the essential features of the model and describe its extension to optimized sample pattern and algorithm selection.

The concept is as follows. Since the focus is on the importance of systematic form deviations, the treatment begins, for any manufacturing regime or process, with an assessment of the form deviations endemic to that regime. Ideally, this is based on sampling, densely and accurately, a significantly-sized set of surfaces. The requisite size and time interval of the sample and, indeed, the applicability of the suggested approach depend on the degree and persistence of stability exhibited by the manufacturing regime under study. An attempt is made to reasonably model the systematic form deviations by treating each surface in this set as a linear combination of a set of basis functions. These basis functions may be smoothly-varying analytical forms, for example polynomials and Fourier series, or they may involve eigenfunctions (what we call Eigen shapes) obtained from principal component analysis of the dense measurement results. Alternatively, the assessment could be the result of expert experience or intuition, or might be derived using a sufficiently accurate model of the machining process.

Once the model basis set has been established, locations can be selected, for any chosen number of sampling points, representing an optimized sparse sample pattern which is expected to yield a minimum variance estimate of this model surface over its entire domain. When an individual part is measured using this optimized sparse sample pattern, the sparsely-sampled data is used to determine the coefficients in the linear combination model. This establishes the basic fitted surface. It is then recognized that uncertainties about that fit must be considered which, since they are not accounted for by the basis set model, are treated as random. This is done using the residuals to the fit, specifically the standard deviation, scaled to create a narrow band of uncertainty about the fitted surface. The fitted surface, embellished by the uncertainty zone, is considered in just the same way as a dense dataset (that is, one sufficiently dense to be regarded for practical purposes as a complete characterization) in, for example, a minimum zone method. This envelope is treated as the dense dataset and from it the boundaries of the substitute geometry are established. In the present case, the distance between two minimally-separated, concentric cylinders just contacting these boundaries is taken to be the form error. A similar set of cylinders, suitably constrained according to any datum references, gives the profile error. The maximum inscribed cylinder defines the substitute geometry for purposes of determining size, orientation and primary datum control.

4.2. Mathematical foundation

4.2.1. Selection of the basis set

In the case of a nominally cylindrical feature, the actual surface is expressed as a perfect cylinder plus a small correction to the radius. This correction is given by an expression

Δr = a_1 f_1(θ, z) + a_2 f_2(θ, z) + a_3 f_3(θ, z) + …

where (r, θ, z) are the cylindrical coordinates of a point on the surface, the f_n(θ, z) are members of the basis set and the actual shape is determined by the values of the coefficients a_n. The choice of basis set is, in a sense, arbitrary. Any set of functions that spans the space of possible shapes is a candidate. The form errors of real machined cylinders can be well represented by a set of relatively few functions.
Two types of basis functions have been investigated: 1) analytical sets with angular dependencies modeled as a Fourier series and axial dependencies as Chebyshev polynomials of the second kind, and 2) Eigen shape sets (eigenvectors) derived by principal component analysis (PCA) from the dense measurement data. Each of these offers specific advantages. The analytical set is able to explicitly model many of the traditional deviations using intuitively reasonable basis functions. Radial lobing may, for example, be modeled with a Fourier series of appropriate order. Taper can be represented by a polynomial of degree one. Barrel, hourglass, banana, bellmouth and spiral twist may be modeled by higher-order functions or combinations of functions. This type of basis set is thus well-suited to experiential selection of the important terms. It is conceivable that, given sufficient experience and adequate intuition, a workable analytical basis set could be chosen for a specific manufacturing regime with relatively little up-front investment in process characterization.

Eigen shapes are sometimes preferable to analytical forms because they are specific to the manufacturing process under study and because they can accommodate form deviations which are not smoothly varying, whereas analytical basis set models come upon difficulty if asked to explain surfaces with sharp changes or discontinuities, unless one is willing to include terms of very high order. Eigen shapes are directly derivable from the measured datasets and represent an optimally efficient means of capturing the variance in the original data, conveniently filtering out underlying redundancy. Each eigenvector has an associated eigenvalue which indicates the extent to which that eigenvector serves to account for the variance found in the original data matrix. Eigen shapes, coming as they do directly from measurement data, demand substantial investment in process characterization. Later in this report, results will be presented which elaborate the relative merits of these two classes of basis sets in the specific applications of sampling pattern and fitting algorithm selection. A rigorous extension of the EZN model to optimized sample pattern selection will now be presented.
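To make the two classes of basis sets concrete, the sketch below builds a Fourier/Chebyshev basis matrix and extracts Eigen shapes by PCA. It is our illustration rather than the authors' code: the function names are invented, the "nlm" term designation anticipates the nomenclature defined in Section 5, and the dense data are assumed to be stored as one row of radial deviations per part over a common grid of (theta, zeta) locations.

    import numpy as np

    def chebyshev_u(m, x):
        """Chebyshev polynomial of the second kind, U_m(x), by recurrence."""
        if m == 0:
            return np.ones_like(x)
        u_prev, u = np.ones_like(x), 2.0 * x
        for _ in range(m - 1):
            u_prev, u = u, 2.0 * x * u - u_prev
        return u

    def basis_matrix(theta, zeta, terms):
        """One column per 'nlm' term: n = Fourier sine order, l = Fourier
        cosine order, m = Chebyshev degree.  A term uses the sine factor
        if n > 0 and the cosine factor otherwise; n = l = 0 gives the
        purely axial terms such as 001 (taper)."""
        cols = [(np.sin(n * theta) if n > 0 else np.cos(l * theta))
                * chebyshev_u(m, zeta) for n, l, m in terms]
        return np.column_stack(cols)

    def eigen_shapes(D, k):
        """First k Eigen shapes from dense data D (one row of radial
        deviations per measured part), plus the fraction of the total
        variance each shape accounts for (from its eigenvalue)."""
        Dc = D - D.mean(axis=0)               # remove the mean surface
        _, s, Vt = np.linalg.svd(Dc, full_matrices=False)
        return Vt[:k], (s**2 / np.sum(s**2))[:k]

In this encoding, the 13-term analytical model of the first example in Section 5 would combine the five ideal-cylinder terms with designations such as (0,0,1) for 001 and (7,0,1) for 701.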


4.2.2. Determination of the EZN surface

The reader is referred to [4] for the details of the mathematical development on which the following sections build. Let the densely measured surface be represented by a column vector, d, the set of values of the radial error over the surface. An estimate of the dense surface can be expressed as

d_e = B a,  (1)

where B is a matrix containing the set of basis shapes and a is a column vector of weights. Then the LSQ estimate of a for the dense dataset is given by

a = (B^t B)^{-1} B^t d.  (2)

A similar expression can be developed for a sparse subset of the surface data. To choose a specific sparse set, a selection matrix W is defined to be a diagonal matrix which has 1s in the diagonal positions corresponding to the points on the surface to be used in the sparse data subset and 0s elsewhere. Then the LSQ sum to be minimized is

L^2 = (d_e − d)^t W (d_e − d).  (3)

If L^2 is minimized with respect to a, the LSQ solution is

a = (B^t W B)^{-1} B^t W d.  (4)

Consider the effect of W. Through the non-zero elements along its diagonal it selects rows of B and elements of d and d_e. Another way of expressing the same selection is to define a submatrix, B_s, which has only the selected rows of B. This is the basis set of the sparse dataset. Likewise the column vector s will be the measured sparse dataset and s_e its estimate given by

s_e = B_s a.  (5)

Using this notation equation 4 can be rewritten

a = (B_s^t B_s)^{-1} B_s^t s.  (6)

Equations 4 and 6 could be considered to express essentially the same information about the solution for the parameter set a. The difference between them, however, is that equation 6 states the results one would obtain from a dataset s measured only at the sparse locations, and equation 4 is the expression to be used if one selects a subset of d to be used as a sparse set for testing purposes. This distinction will become clearer when equation 4 is used as part of the derivation of an optimal sparse set. Finally the expression for the EZN surface derived from the sparse set is written

d_e = B (B^t W B)^{-1} B^t W d,  (7)

derived from equations 1 and 4, or

d_e = B (B_s^t B_s)^{-1} B_s^t s,  (8)

derived from equations 1 and 6.

4.2.3. Optimized sparse sample set selection

The optimal set of sampling locations is determined by a technique known as V or variance optimization [32]. The goal is to minimize the rms error of the predicted EZN surface with respect to the spatial distribution of a fixed number of sampling points in the sparse sample, selected from the dense sample set. Equation 7 gives the expression for the EZN surface, d_e, that results from the measurement dataset d as sampled according to the weight matrix W. The problem is to find the optimal locations of 1s along the diagonal of W, subject to the constraint that the total number of 1s along the diagonal is fixed.

First it is necessary to derive an expression for the rms error over the surface. Equation 7 can be used to propagate the errors in the measured set, d, into the estimated set, d_e. The differential of equation 7 relates the effect of an error Δd in d on d_e:

Δd_e = B (B^t W B)^{-1} B^t W Δd.  (9)

The covariance matrix of d_e is given by the expected value of the outer product of Δd_e with its transpose,

C(d_e) = ⟨Δd_e Δd_e^t⟩ = B (B^t W B)^{-1} B^t W ⟨Δd Δd^t⟩ W B (B^t W B)^{-1} B^t,

where the expected value is an average taken over an ensemble of many measurement sets with random errors Δd in d. Since it is assumed that the measurements at all points over the sample are made in the same way, having a standard deviation of σ with no correlation between the assumed random errors at different points on the surface, the expected value of Δd Δd^t is σ^2 1, where 1 is the identity matrix. Also, since W W = W, the expression for the covariance of d_e reduces to

C(d_e) = σ^2 B (B^t W B)^{-1} B^t.  (10)

The diagonal elements of C(d_e) are the expected variances of the measurement errors as propagated to the estimated EZN surface. With n being the number of points in the dense sample set, the expected average variance over the EZN surface is (1/n) tr(C(d_e)), where tr signifies the trace of the matrix. That is,

Var(d_e) = (σ^2/n) tr(B (B^t W B)^{-1} B^t).  (11)
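Read concretely, equations 7, 10 and 11 require only the rows of B selected by W. The following numpy sketch is ours, not the authors' implementation: the names are illustrative, c stands in for the empirically calibrated extrapolation factor of Section 4.1, and the form estimate keeps the fitted axis fixed, whereas a full minimum zone evaluation would re-optimize it.

    import numpy as np

    def ezn_estimate(B, d, idx):
        """Equations 7/8: EZN surface over the dense grid from a sparse
        sample.  B: (n, k) basis matrix over the dense pattern; d: measured
        radial deviations; idx: locations of the 1s on the diagonal of W."""
        Bs = B[idx]                                    # W merely selects rows of B
        a = np.linalg.solve(Bs.T @ Bs, Bs.T @ d[idx])  # equation 6
        return B @ a                                   # equation 1

    def ezn_variance(B, idx, sigma):
        """Equation 11, without forming the full n x n covariance matrix."""
        n = B.shape[0]
        Bs = B[idx]
        M = np.linalg.solve(Bs.T @ Bs, B.T)            # (B^t W B)^{-1} B^t
        return sigma**2 / n * np.einsum('ij,ji->', B, M)

    def ezn_form_error(B, d, idx, c):
        """Form estimate in the spirit of Section 4.1: the span of the
        fitted surface widened by an uncertainty band scaled (by the
        calibrated factor c) from the sparse-fit residual deviation."""
        de = ezn_estimate(B, d, idx)
        s = np.std(d[idx] - de[idx])
        return (de.max() - de.min()) + 2.0 * c * s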

The variance of equation 11 is to be minimized with respect to the placement of a fixed number of 1s along the diagonal of W. The algorithm used for achieving this minimum is described in Appendix A.

It may be helpful to illustrate the outcome of this process with a specific, idealized example. Fig. 1 presents a series of basis shapes, in this case analytical functions, that have been chosen to model the deviations endemic to a particular machining process. The first 5 shapes represent, respectively, variation in size (0th-order Chebyshev polynomial), position (two 1st-order Fourier terms) and orientation (two 1st-order Chebyshev, 1st-order Fourier terms) and are just the terms required to completely constrain the ideal geometry model.


Fig. 1. Simple analytical function basis set, and its optimized sampling pattern.

If these 5 terms are believed to be sufficient to represent the feature geometry, the extreme ends of the feature will be the most advantageous locations to measure, in a 2-level pattern typical of what many inspectors intuitively choose in practice. In this example, though, it has been concluded that inclusion of two additional terms (2nd- and 3rd-order Fourier terms) is useful to model the feature. Each of these seven terms is shown to an exaggerated radial scale in the upper portion of Fig. 1. Introduction of the two extra terms creates 3 additional axial locations at which the feature size may attain an extreme value, and dictates the placement of additional sampling points at those levels. This is borne out by the optimized sampling pattern produced by the algorithm just described and shown in the lower half of Fig. 1, where each circle represents the location of a potential sampling point. The size of each circle is indicative of its importance in reducing Var(d_e), and the 100 most influential candidates are shown.

4.3. Application of the EZN method

The application of the EZN method to the establishment of sparse sampling patterns is summarized in Fig. 2. It begins with creation of a dense, regular 2D point sampling pattern for the part feature of interest. This pattern is taken

to cover the domain of the feature. The dense pattern also serves as the full set of potential sampling locations from which all sparse subsets are subsequently selected. The 2D dense pattern is used to guide the measurement of one or more instances of the feature. Ideally, the densely measured part(s) will span the range of variability for the production process. The resulting dense 3D datasets are analyzed according to one or more algorithms (left-hand branch of Fig. 2), yielding values for the GD&T parameters of interest. These values are taken to be correct in that the dense datasets approximate full surface characterization. The sampling density required to adequately represent the full, true surface depends on the frequency and magnitude of the surface irregularities and the extent to which the dimensions and response of the probing device tend to filter this data. The dense 3D datasets also serve to determine the systematic nature of the form deviations endemic to the manufacturing process under study (right-hand branch of Fig. 2). They may be analyzed in a variety of ways which, in this study, were: (1) fitting to analytical forms comprised of linear combinations of Fourier series and Chebyshev polynomial terms, and (2) PCA. Either or both of these methods can be used to yield one or more appropriate basis sets believed to reasonably span the systematic feature surface variability. These basis sets allow linear modeling of the

feature surfaces. The part surface model(s) thus produced, in conjunction with the correct results based on the dense 3D sets, serve in estimating the extrapolation factors employed in the extended zone analysis. The part surface model(s) embodied in the basis set(s) also underlie the selection, from the 2D dense pattern, of sparse sampling patterns to be evaluated for routine use. The number of sample points for these patterns is also a variable at this stage of the process. These sparse sampling patterns are used to subsample the dense 3D datasets and the resulting sparse 3D datasets are processed by the various analysis algorithms. For the EZN method, the previously determined extrapolation factors are applied. Finally, the GD&T parameter values derived from the sparse sets are compared to the correct values determined from the dense samples. On the basis of these comparisons the user may then select the most appropriate sampling strategy and analysis algorithm for subsequent part measurements. Generally these determinations will be based upon the user's sense of the accuracy required for the particular application, the economics of point gathering and the availability of specific analysis methods in the particular CMM software being used.

Fig. 2. Flowchart of the analysis method.

4.4. Manual sample patterns

It is necessary to compare the performance of optimized sparse sample sets with results obtained using sampling patterns that might be employed conventionally in CMM data acquisition. It is common practice among CMM operators to take data points in a pattern more or less evenly distributed around the circumference of a hole at several axial levels. Conventional practice was approximated with patterns in which the requisite number of points were extracted from the dense set at locations evenly spaced around the circumference of the cylinder and at either two (top and bottom) or three (top, middle and bottom) axial locations. Point locations at successive axial levels were rotated by 360/(# points) degrees. These patterns represent only slight idealization of common CMM sampling practice.

4.5. Figure of merit: the composite error

It is desirable to have some fairly concise criterion for comparing performances of sampling pattern/fitting combinations. For the sake of having a single figure of merit, a composite error was constructed from all possible errors to have, simply, a dimension of length. The form component, e_f, of the composite error is the difference in the form error derived from the dense dataset and that derived from a particular sparse sampling. The location component, e_p, is the distance between the dense set and sparse set cylinder axes, taken at the center of mass of the dense set. The size component, e_s, is the difference between the cylinder radii derived from the dense and sparse sets. In order to maintain consistent dimensionality, the orientation component, e_o, is defined as the sine of the angle between the axes of the dense set and sparse set cylinders multiplied by the length of the cylinder; in conventional GD&T parlance, the parallelism of the two cylinders. The composite error over the set of cylinders is formed by finding the largest (absolute value) of the individual components for each of the holes in the set, then taking the rms average of these over the entire set of holes, i.e.,

e_comp = { Σ [max(e_f, e_p, e_o, e_s)]^2 / n }^{1/2},

where the sum runs over the n holes in the set.
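A direct implementation of this figure of merit is short; the sketch below is ours, with the four component values assumed to be computed per hole as defined above (each already expressed as a length) and the orientation component formed from the two fitted axis directions.

    import numpy as np

    def orientation_error(axis_dense, axis_sparse, length):
        """e_o: sine of the angle between the two cylinder axes times the
        cylinder length (the parallelism of the two cylinders)."""
        u = axis_dense / np.linalg.norm(axis_dense)
        v = axis_sparse / np.linalg.norm(axis_sparse)
        return np.linalg.norm(np.cross(u, v)) * length  # |u x v| = sin(angle)

    def composite_error(per_hole_components):
        """e_comp: rms over the set of holes of the largest (in absolute
        value) of the components (e_f, e_p, e_o, e_s) for each hole."""
        worst = [max(abs(e) for e in hole) for hole in per_hole_components]
        return float(np.sqrt(np.mean(np.square(worst))))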

5. Results

The performance of the two basis set models described above and of the optimized sampling patterns derived from them will be illustrated with three examples which were selected to illustrate distinct behaviors found throughout the complete suite of 50 features. First, consider a hole produced by the sequence C/D/R. This particular feature is a through hole of 3.175 mm nominal radius by 12.7 mm depth. A typical hole shape is shown in Fig. 3a. In each section of the figure, the residuals to a LSQ perfect cylinder are plotted. Visually, the predominant error shape for this hole throughout the series of 30 parts is a seven-lobed pattern that persists from top to bottom of the feature and which is accompanied by a slight spiral twist. The phase


Fig. 3. Typical form errors: a) Feature #14 (C/D/R), b) Feature #13 (C/D), c) Feature #1a (C/D/M).

angle of this lobing pattern is uncorrelated between holes. Also evident are taper and some amount of bellmouth at both the tool entry and exit ends of the hole. Fig. 4a shows the first eight terms of the optimized-order Fourier/Chebyshev basis set for this feature, over and above the five terms required to define the LSQ perfect cylinder. These are presented in order of decreasing importance, going from left to right and top to bottom. The first term accounts for taper and, in the nomenclature used throughout this section, is designated 001 (nlm, where n = order of the Fourier sine factor, l = order of the Fourier cosine factor and m = degree of the Chebyshev factor). This is followed in importance by the two complementary 7th-order Fourier/1st-order Chebyshev terms (701 and 071), which take some further account of taper but, more importantly, begin to account for the lobing pattern. Inclusion of both these terms is appropriate because of the observed lack of angular correlation of the radial pattern. Most of the remaining terms (702, 072, 073 and 703) combine further explanation of the radial lobing with higher-order axial distortions (in essence an attempt to account for the spiral character of the lobing pattern, mentioned earlier). A purely 2nd-order Chebyshev term (002) also accounts for some of the axial variation, especially the bellmouth at the bottom of the hole. This model is able to account for almost 54% of the variance from perfect form of the entire suite of 30 features.

The upper-left frame of Fig. 5 shows the first 100 points in the optimized sampling pattern derived from this model. This pattern, as all of those that follow, is plotted with the point distribution normalized to a feature height of 25.4 mm. Since we have chosen a model with 13 basis functions, that number of points is required as a minimum to fully define the model. These are shown as the large, dark filled circles. Successive points, which may be added to further account for variation, are shown as open circles whose radii decrease in proportion to the significance of their contributions. The points are distributed fairly evenly about four axial levels of the feature, with the top and bottom levels being most influential, in keeping with the observation of taper as the predominant, rotationally symmetric form error.

The performance of this model with various fitting algorithms and varying numbers of sample points is shown in the remaining three frames of Fig. 5. Each frame represents one fitting method from the set of LSQ, MI and MZ. (The details of the methods used to compute each of these fits are presented in Appendices B and C.) In each case four curves are plotted, representing direct application of the chosen

fitting algorithm to two- and three-level manual sampling patterns (Dir-M2 and Dir-M3), direct application of the chosen fitting algorithm to the optimized sampling pattern (Dir-Opt) and application of the chosen fitting algorithm to the full EZN model (Ext-Opt), all as functions of the number of sampling points. Looking first to the LSQ algorithm, all four curves show general improvement in the composite error as the number of sampling points increases. Here, as in all the examples to follow, the performance of the manual sampling patterns deteriorates markedly as the number of points decreases into the range typically used in routine CMM operation. The plots for the Dir-M2 and Dir-M3 cases also exhibit cyclic variation due to interaction of the pattern spacing with the lobing pattern and are indicative of a behavior that might be expected if the automatic probing pattern generation feature of typical off-the-shelf CMM software is applied to features with dominant periodic form error. Beyond the absence of this instability, there is no particular advantage to the Dir-Opt case and only slight improvement for the full, Ext-Opt treatment. Similar observations hold for the MZ algorithm. For the MI fit, there is even less distinction between the methods, when considered in the context of the global trend. This is, again, not surprising given the predominance of taper in this feature. The MI cylinder is determined largely by points at the very bottom of the feature, so any pattern that has a significant fraction of its points so placed would be expected to perform fairly well. The Dir-M2 pattern becomes markedly less satisfactory in comparison to the others at low numbers of sampling points, and the oscillatory behavior of the error for the manual patterns largely disappears.

Turning attention now to Fig. 4b and looking at the top eight functions from the basis set for the Eigen shape model, it is instructive to make a comparison with the data just examined. Not being bound to the shapes dictated by a particular set of mathematical functions, but rather being driven by the information content of the data, the Eigen shape basis set is able to account for more complicated form deviations with relatively few basis terms. The first indication of this is in the most significant shape, which at a single stroke begins to account for taper and the seven-lobe pattern. The next two shapes offer additional explanation of the lobing pattern, explicitly incorporating its spiral character. Information about the slight bellmouth errors is evident in various of the basis shapes. The optimized sampling pattern derived from the Eigen shape model, Fig. 6, is noticeably less regular than the pattern for the analytical function model, again indicating the ability of this model to accommodate itself to form deviations of irregular character. The overall performance of the model is better, accounting for over 78% of the total variance across the suite of test parts. In all three of the composite error plots, the curves for the manual patterns are, of course, identical to those in Fig. 5. For the LSQ and MZ fits, the full EZN treatment is markedly superior, maintaining good performance down to even very sparse sample


Fig. 4. Feature #14 model basis sets: a) optimized order Fourier/Chebyshev terms (001, 071, 701, 702, 072, 002, 073, 703), b) Eigen shapes. Terms are in order of decreasing significance left-to-right and top-to-bottom.

sets. Also worth remarking is the relatively good performance of the Dir-Opt case which, although it deteriorates somewhat as sample size shrinks, still performs well in comparison to the traditional patterns. This is important, because the Dir-Opt case represents just the situation that would pertain if one were to apply the optimized sampling pattern to the fitting algorithms ordinarily found in today's commercial CMM software. For the MI fit, there is no great distinction among the results of various sampling pattern/model combinations, with the exception of the Dir-Opt case, which gives clearly superior results. This general lack of distinction among the methods is, once again, understandable in terms of the dominance of taper as a contributor to the total form error and the fact that all the patterns tested put a high percentage of their heavily-weighted points near the ends of the feature.

Fig. 5. Feature #14, optimized sampling pattern, and composite errors for Fourier/Chebyshev model.

Fig. 6. Feature #14, optimized sampling pattern, and composite errors for Eigen shape model.

The second example presents an instance of a more challenging modeling problem. This series of features was created by center drilling, followed by finishing with a parabolic-flute drill (C/D). This particular feature is a through hole of 3.175 mm nominal radius by 12.7 mm depth. The general shape of the errors is shown for selected instances in Fig. 3b. The predominant error is a one-lobed spiral. That is, the cross-section at any axial level is close to circular, but the center of the cross-section follows a spiral path. As in the previous example, the angular orientation of the lobing pattern varies from one instance of the feature to another. The analytical model has particular difficulty in accounting for this type of error, and attempts to do so with a sequence of terms in relatively high Chebyshev orders, Fig. 7a. The Eigen shape model, on the other hand (Fig. 7b), makes a relatively easy task of this, explicitly accounting for the most obvious spiral character with the first two shapes. (Two are necessary here, again, because of the lack of phase correlation of the lobing.) The Eigen shape model then is able to use succeeding basis set members to account for some higher-order spiral lobing that is not readily discernible by eye in the raw data.

The corresponding optimized sampling patterns and composite error information are presented in Figs. 8 and 9. Again, models with 13 basis functions were chosen for the comparison. As in the previous example, the pattern for the analytical basis set model is noticeably more regular in structure than the corresponding pattern derived from the Eigen shape model. The points derived from the analytical basis set are stratified at five axial levels, corresponding to the heights at which the model functions can achieve maxima or minima. The Eigen shape model imposes no such limitation and this is reflected in the less regular axial disposition of the point pattern. Both models do fairly well, though, in explaining the deviation from cylindrical form, accounting for about 87% (Fourier/Chebyshev) and 97% (Eigen shape) of the variance. In the case of the LSQ fits, the full EZN model (Ext-Opt) is markedly superior with either choice of basis set. The results for the fit applied directly to the optimized pattern data (Dir-Opt), while not quite as good, are still significantly better than those for the manual patterns. For the limiting fits, the distinction is generally less dramatic, but the performance of the EZN model is again worthy of note, particularly with regard to its stability versus the number of points sampled and its continued good performance to very low numbers of points. With the MI fit and for both basis models, there is a small but progressive improvement in composite error at all but the lowest numbers of sample points, in the order Dir-M2, Dir-M3, Dir-Opt, Ext-Opt. This is consistent with the spiral form error, which gives rise to a comparatively even distribution over the surface of the probability of finding a defining point for the fit and thus favors increasingly sophisticated sampling strategies. The Dir-M3 case breaks this order at very low numbers of sample points, performing much less well than might be expected on the basis of intuition.

One further example completes the suite of illustrative cases. Fig. 3c shows a feature produced by C/D/M. This particular feature is a counterbore of 6.35 mm nominal radius by 3.175 mm depth and, once more, models of 13 basis functions are presented. The predominant error type is taper; however, the unique aspect of this feature, when contrasted to the earlier examples, is the occurrence of an angularly-correlated wrinkle in the surface in the vicinity of the 270° point. Inspection of the CNC program used to machine the artifacts confirms that the wrinkle


Fig. 7. Feature #13 model basis sets: a) optimized order Fourier/Chebyshev terms (012, 102, 103, 013, 014, 104, 002, 302), b) Eigen shapes. Terms are in order of decreasing significance left-to-right and top-to-bottom.

corresponds to the tool entry/exit point for the milling pass. It would be expected, and reference to the basis shape plots of Fig. 10a confirms, that the Fourier/Chebyshev model, consisting as it does of functions with strong rotational symmetry, is not particularly effective in capturing this shape. The first (001) term accounts for the taper component. This is followed by a series of Fourier term pairs which comprise a relatively unsuccessful attempt to model the angular components of the variation. This is to be contrasted with the Eigen shape model (Fig. 10b), wherein the first term visibly captures not only the taper but also a portion of the phase-correlated radial error. The second term largely provides further accounting of the wrinkle. This comparison is strengthened by examination of the corresponding optimized sampling patterns in Figs. 11 and 12. The points derived from the Fourier/Chebyshev model are disposed to capture the axial variation, but with little sensitivity to angular variability. The pattern for the Eigen shape model, on the other hand, picks up readily on the entry/exit wrinkle, placing several highly-weighted points in the vicinity of the 270° location.

The optimized patterns, regardless of how applied, lead to uniformly better composite errors for the Eigen shape model than for the Fourier/Chebyshev model. In neither case are the composite errors for the optimized sampling patterns markedly lower than those for the manual patterns (except for very low numbers of sample points); they are sometimes slightly worse. The advantage to the optimized patterns lies, in this instance, in the much better stability of the composite error plots as a function of the number of sample points. Performance of the inscribed fit is similar to that exhibited in the first example and, likely for the same reason: taper as the dominant error. It is particularly interesting to note that, in all the examples, the Ext-Opt case performs as well, in terms of the composite error, when used with the LSQ fit as it does when used with the theoretically-to-be-preferred (per ANSI Y14.5.1 [2]) MZ fit. The analogous favorable comparison holds almost as well for the Dir-Opt case, again indicating the potential value of the optimized sampling patterns when used with commercial, off-the-shelf CMM software and serving as an illustration of a point made early in this report; namely, that adherence to the protocol that is correct in principle is a separate issue from realization of the best estimate of the corresponding GD&T parameter.

Fig. 8. Feature #13, optimized sampling pattern, and composite errors for Fourier/Chebyshev model.

6. Discussion

The EZN method has been applied to a wide variety of full cylindrical surfaces, produced by many machining techniques, in small-lot quantities. The parts were densely sampled and the data processed by a variety of new and established algorithms to yield feature (primary datum) location, orientation, size and form parameters. The success of the new methods for sampling and analysis has been gauged by comparing the parameter values from the dense sample sets with their counterparts obtained from optimized and manual sparse sampling of the same parts and treatment of the sparse data by both conventional and new algorithms. It has been found that, compared to uniform sampling patterns and conventional analysis algorithms, the optimized sampling patterns and new analysis methods can significantly reduce rms composite errors, especially for quite small numbers of sample points.

While functions composed of Fourier/Chebyshev terms can provide an effective means of representing form errors for cylinders produced by many manufacturing methods, Eigen shapes are generally more efficient in modeling these errors, especially when the surfaces have notable discontinuities or are particularly complex, as is often the case for features produced by several successive machining operations. Optimized sampling patterns based on either Fourier/Chebyshev or Eigen shape models can be efficient means of sampling, provided that the basis set selected adequately represents the range of form errors encountered. These methods identify locations where systematic form deviations have a high probability of being at or near their extremes. Consequently, even the conventionally-used models of LSQ and MZ perfect cylinders yield results of better accuracy when EZN optimized point patterns are used rather than conventional sampling patterns.

Optimized sampling patterns generally work best for the LSQ fitting method, when compared to typical manual sampling patterns.

Fig. 9. Feature #13, optimized sampling pattern, and composite errors for Eigen shape model.


Fig. 10. Feature #1a model basis sets: a) optimized order Fourier/Chebyshev terms (001, 200, 020, 040, 400, 300, 030, 002), b) Eigen shapes. Terms are in order of decreasing significance left-to-right and top-to-bottom.

They also work reasonably well for MZ. Improvements in accuracy over manual sampling patterns for MI fits are generally not dramatic, although substantial improvements were often observed in the stability of the composite error and in robustness with small numbers of sample points. Optimized sampling patterns employed directly in any of

the analysis methods typically do not provide as much improvement as do the same patterns processed by the EZN version of the same analysis method. This is especially true at very low numbers of sample points. Even in circumstances where marked general improvements in composite error are not noted, optimized patterns can greatly improve stability and reduce the level of sensitivity of the error to the number of sample points taken. This is particularly true for cylinders with high-order lobing characteristics such as are found for C/D/R and C/D/B/R processes.

Fig. 11. Feature #1a, optimized sampling pattern, and composite errors for Fourier/Chebyshev model.

Fig. 12. Feature #1a, optimized sampling pattern, and composite errors for Eigen shape model.

Finally, it should be said that this work provides a straightforward illustration of the general applicability of the approach to any nominal feature shape of metrological interest. All that is necessary for extension to new feature types is formulation of appropriate basis set model(s). Future reports will deal with the extension of these techniques to other feature geometries.

Ultimately, one might envision a fully automated CMM software system in which the CAD definition of a part serves, among other purposes, as a basis for the selection of a dense sampling pattern which is then used to measure a

relatively small number of parts. As the part data becomes available, PCA methods determine the level of systematic versus random character to the surfaces and the nature of the systematic variation as embodied in the derived Eigen shapes. The relative importance of these surface shapes is indicated by the magnitude of the corresponding eigenvalue from the analysis. This information is sufficient to suggest the preferred algorithm for data reduction and the attendant sample pattern optimization method. Moreover, the dense sample sets allow directly for calibration of the extrapolation factors used in the extended zone method. Based on these dense sets we also have available an indication of the level of accuracy that can be anticipated when a sparse subset of this dense sampling is employed. The confidence level desired by the user, together with the response time required of the measuring system, will then determine the number of points to be taken, and the sample pattern optimization scheme will yield the desired sparse set distribution pattern for use on production parts. Occasionally a part might be sampled more densely and the data gathered used to verify that the process variability continues to be accurately represented in the database or to indicate refinements in the sparse sampling patterns.

Acknowledgments

The authors acknowledge the following organizations for financial support of the work reported here: the Consortium for Advanced Manufacturing - International under contract C-92-QAP-01 to the University of San Francisco, Sandia National Laboratories under DOE contract DE-AC04-94AL85000, and Honeywell, Federal Manufacturing and Technologies, under DOE contract DE-AC04-01AL66850. The assistance of Mr. Robert Pilkey, who provided portions of the measurement data, and of Mr. Vince De Sapio, who aided with some of the data analysis for this report, is gratefully acknowledged.

Appendix A. Finding the minimum variance EZN surface

We require an expression for the sensitivity of Var(d_e) to the addition of a single sampling point to the set already in place. Mathematically this is the partial derivative of Var(d_e) with respect to an increase in weight at the ith location along the diagonal of W. If we define W_i to be an n × n matrix of all zeros except for a 1 at element W_ii, the expression for this derivative is

∂Var(d_e)/∂W_ii = −(σ^2/n) tr(B (B^t W B)^{-1} B^t W_i B (B^t W B)^{-1} B^t).  (A1)

This is not computationally intensive because the term B (B^t W B)^{-1} B^t can be computed once and then used for all of


the positions along the diagonal of W. Moreover, the one 1 in the diagonal of Wi just calls for the dot product of a column (or row, its symmetrical) of the matrix B (Bt W B) 1 Bt with itself, times - 2. The derivative is always negative, as it should be. Adding weight can only make the error smaller. In deriving equation A1 the matrix identity M 1/ v M 1 M/ v M 1 is used, where M is an invertible matrix and v is any variable it is dependent on. We can now proceed to nd the minimum value of Var(de). 1. Assume there are n points represented by d and k basis functions, so a is a k dimensional vector, d an n dimensional vector, and B an n x k dimensional matrix. Initially let there be a sampling subset of k points, just enough to determine the elements of a. Let the k points be statistically distributed uniformly over the n data points. This would not be possible with an actual dataset but is possible mathematically by giving all of the diagonal elements of W a value of k/n rather than either 0 or 1. For example, if we have chosen to model the surface using six Fourier and polynomial terms, we need a minimum of 11 points (ve for the ideal cylinder and one for each of the additional terms) to fully constrain the model. If we have a dense set of 1,000 points we give each point location a beginning weight of 11/1,000 and we get a corresponding value of Var(de) . It should be noted that if individual diagonal elements of W have values 1, the equality, WW W, no longer holds. Under this condition, equation 11 only approximates Var(de) . It is expedient, however, to employ equation A1 under these conditions to determine the most benecial placement of succeeding points. When choice of the most advantageous point order is complete, the appropriate diagonal elements of W are again assigned values of unity and equation 11 gives the correct value of Var(de) . This expedient is justied based on the observation, below, that more complicated procedures, designed to seek a global minimum for Var(de) , yield only a moderately improved result. 2. Next with this uniform distribution determine which of the n data points would provide the greatest benet in reducing Var(de) by using equation A1 to determine which point produces the most negative value of Var(de) / Wii. Put a 1 at corresponding diagonal element of W and take the weight necessary for this point from the others by making the remaining points have a weight of (k-1)/(n-1). In our specic example, each of the 999 points remaining will be given a weight of 10/999. 3. Continue in the spirit of step 2, k-1 times, nding the next best point to provide a weight of 1 by evaluating Var(de) / Wii at all of the points which do not have 1s and choosing the most negative value. Each time take the weight necessary for this point from the remaining points with weight less than 1. So, for example, when j points have been found there will be j locations along the diagonal of W which have values of 1 and all of the others will have values of (k-j)/(n-j). When the kth point is nally assigned a value

When the kth point is finally assigned a value of 1, all of the weight of the k assumed points will have been transferred to locations with a weight of 1, and all of the other points will have weights of 0. Those points having a weight of 1 constitute the optimal minimum set of sampling points necessary to just determine the k elements of a.

4. Continue adding points as desired by searching all of the points with a weight of 0 to find which of them is most suitable for adding to the set. Each time, search for the most negative value of ∂Var(d_e)/∂W_ii.

The procedure outlined is not guaranteed to provide the absolute minimum value of Var(d_e), as point exchanges are not allowed. It is, however, fast and reasonably effective. Other procedures have also been tried which admit the possibility of moving one of the weights of 1 to a location having a 0. This refinement was found to be of marginal value and also had the very undesirable effect of producing sets of points which were not supersets of those having fewer points, but which had changes in the locations of their earlier members. This has practical disadvantages. One has to choose the number of points to be used in measuring a part before any of the data have been taken; if one then wants to measure an additional point for more accuracy, the already measured points might not all be in the right locations. Another practical disadvantage is that separate point location files would have to be produced and maintained for each of the sets of different numbers of points. In contrast, the above algorithm produces one file, and the user chooses the desired number of sampling points from the earliest entries in the file, adding others in order as subsequently desired.
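The ordering procedure above lends itself to a compact implementation. The following is a minimal sketch, assuming NumPy, a dense candidate design matrix B of full column rank, and the fractional-weight bookkeeping just described; the function name select_points and its interface are illustrative rather than taken from the paper.

```python
import numpy as np

def select_points(B, n_select):
    """Greedily order sampling points by repeatedly choosing the candidate
    whose promotion to unit weight most reduces Var(d_e), per equation A1."""
    n, k = B.shape
    if not (k <= n_select <= n):
        raise ValueError("need k <= n_select <= n")
    w = np.full(n, k / n)           # start with uniform fractional weights
    free = np.ones(n, dtype=bool)   # candidates not yet assigned weight 1
    chosen = []
    for j in range(1, n_select + 1):
        # M = B (B^t W B)^{-1} B^t, computed once per sweep of the diagonal;
        # assumes the weighted normal matrix stays invertible
        M = B @ np.linalg.solve(B.T @ (w[:, None] * B), B.T)
        # equation A1: dVar/dW_ii = -2 * (column i of M) . (column i of M)
        scores = -2.0 * np.einsum('ij,ij->j', M, M)
        scores[~free] = np.inf      # never re-pick an assigned point
        i = int(np.argmin(scores))  # most negative derivative wins
        chosen.append(i)
        w[i] = 1.0
        free[i] = False
        # redistribute the remaining fractional weight (steps 2 and 3);
        # after the first k picks all free points drop to weight 0 (step 4)
        if free.any():
            w[free] = max(k - j, 0) / (n - j)
    return chosen
```

Called as, say, order = select_points(B, 40), the first m entries of the returned list give the optimized m-point pattern for any m between k and 40, reproducing the one-file nesting property noted above.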

Appendix B. Least squares cylinder estimation

From the previous report [4] the Fourier/Chebyshev representation of the nonideal cylinder can be expressed as

$$r(z,\theta) = R + \sum_{i=0}^{\mathrm{ord}_z} T_i\!\left(\frac{2z - z_{\min} - z_{\max}}{z_{\max} - z_{\min}}\right)\left[\sum_{j=0}^{\mathrm{ord}_\theta} a_{ij}\cos(j\theta) + \sum_{j=1}^{\mathrm{ord}_\theta} b_{ij}\sin(j\theta)\right] \tag{B1}$$

where r is the analytic model of the actual curved surface. R is a predetermined constant nominal radius. R may be taken to be the nominal radius from a print, the best fit radius from the fit of an ideal cylinder to the data, or perhaps the average of the radii of best fit cylinders to a set of nominally identical cylinders. The only real requirement is (R - r)/R << 1. T_i( ) is the Chebyshev polynomial of the second kind, of order i. Finally, a_ij and b_ij are the coefficients which represent the curved actual cylinder, including its form error.

Five of the terms in equation B1 are of particular interest and use. They are the terms which can be used to represent small changes in the size, location, and orientation of the cylinder. Although there are higher order terms involved in small changes in a perfect cylinder, if the changes are much smaller than R the higher order terms can be considered to be negligible. If these five functions are represented by the vector F_c(z, θ) and their coefficients by a_c, then in a polar system the ideal cylinder can be represented by

$$r(z,\theta) = R + F_c(z,\theta)^{t}\,a_c. \tag{B2}$$

Take F_c to be

$$F_c(z,\theta) = \bigl(1,\ \cos\theta,\ \sin\theta,\ T_1(\xi)\cos\theta,\ T_1(\xi)\sin\theta\bigr)^{t} \tag{B3}$$

where

$$\xi = \frac{2(z - z_{\min})}{z_{\max} - z_{\min}} - 1$$

is the vertical position in the cylinder normalized to [-1, 1] to be used as an argument for T. The discrete parametric representation of the model ideal cylinder is given by the model surface vector

$$d_e = B_c\,a_c. \tag{B4}$$

The LSQ solution for the fit of an ideal cylinder to the dense dataset was shown to be [4]

$$a_c = (B_c^{t}B_c)^{-1}B_c^{t}\,d. \tag{B5}$$

Similarly, the LSQ fit to the sparse dataset selected by the weight matrix W, as developed from equation 4 in the main body of this paper, is

$$a_c = (B_c^{t}WB_c)^{-1}B_c^{t}W\,d. \tag{B6}$$

The vector a_c in each of these cases is the vector a_c in equation B2 which gives the continuous analytic representation of the surface.
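For concreteness, the following is a minimal sketch of equations B3, B5, and B6, assuming NumPy, arrays z and theta of point locations, and a data vector d of radial deviations from the nominal radius R (the deviation convention, like all names here, is our assumption rather than the paper's).

```python
import numpy as np

def basis_Fc(z, theta, z_min, z_max):
    """Rows of B_c: the five functions of equation B3 evaluated per point.
    The paper's Chebyshev polynomials of the second kind give
    U_1(xi) = 2*xi; the factor of 2 is simply absorbed into the
    corresponding coefficients."""
    xi = 2.0 * (z - z_min) / (z_max - z_min) - 1.0
    u1 = 2.0 * xi
    return np.column_stack([np.ones_like(z),
                            np.cos(theta), np.sin(theta),
                            u1 * np.cos(theta), u1 * np.sin(theta)])

def fit_cylinder(z, theta, d, w=None):
    """LSQ coefficients a_c: equation B5 (w=None) or the weighted
    equation B6 with W = diag(w)."""
    Bc = basis_Fc(z, theta, z.min(), z.max())
    if w is None:
        w = np.ones_like(d)
    return np.linalg.solve(Bc.T @ (w[:, None] * Bc), Bc.T @ (w * d))
```

Passing w as a 0/1 vector reproduces the sparse-subset fit of equation B6, with the weight matrix selecting the sampled points.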

Appendix C. Maximum inscribed and minimum zone cylinder representations

It is necessary to have expressions for various limiting fits. Equation B5 presents the LSQ solution for the best fit of the model cylinder, d_e, to the densely measured surface d. Equation B6 is the corresponding result for a sparse subset of the dense set. It is now necessary to find minimax solutions using the same parametric representation. Using the linearized approximation for the ideal cylinder given in equation B4 (a very good approximation in the present case, since each element of a_c is very small compared to R for all but the crudest of manufactured parts), this becomes a linear programming problem.

First consider the MIC. The objective function in this case is the first element of a_c, which is the coefficient of the function 1 (see equation B3) and is to be maximized. The set of constraints is

$$d - d_e = d - B_c a_c \ge 0 \tag{C1}$$

for the dense set and

$$W(d - d_e) = W(d - B_c a_c) \ge 0 \tag{C2}$$

for the sparse set. That is, each element of B_c a_c - d for the dense set, and of W(B_c a_c - d) for the sparse set, must be ≤ 0. These constraints require that all of the model data points must lie at a radius less than, or equal to, the measured ones. Thus the problem is to maximize the first element of a_c by varying all of the elements of a_c, subject to the constraints given either by equation C1 or equation C2. Any linear programming method, such as the well known simplex method, could be employed. The gradient projection algorithm [33] was used here. The first approximation cylinder is taken to be the cylinder with an axis determined from the LSQ solution and a radius equal to the distance of the smallest radius point from this LSQ axis.

It should be noted that linear programming problems have unique solutions. This would imply that the MIC solution would be unique. This is only guaranteed in the linearized approximation; in the case of the inscribed cylinder, the real solution is not unique. For example, assume that the cylinder axis does not thread the measured point set but instead can lie outside of it. In this case the radius is free to grow to infinity and the axis can lie in many places and have many orientations. For real cylinders with included angles greater than 180° this is not of practical concern if the sampling points are well distributed and if a good first approximation is used. The LSQ solution is an example of a good first approximation.
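Although the paper uses gradient projection [33], any LP solver can handle the linearized MIC problem. The following is a minimal sketch of the objective and constraints C1, assuming SciPy's linprog and the basis_Fc helper sketched in Appendix B; the function name and the deviation convention for d are our assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def mic_cylinder(z, theta, d):
    """Maximize the size term a_c[0] subject to B_c a_c <= d (equation C1):
    every model point must lie at or inside the measured radius."""
    Bc = basis_Fc(z, theta, z.min(), z.max())   # helper from Appendix B sketch
    c = np.zeros(Bc.shape[1])
    c[0] = -1.0                                 # linprog minimizes, so negate
    res = linprog(c, A_ub=Bc, b_ub=d,
                  bounds=[(None, None)] * Bc.shape[1])
    if not res.success:
        raise RuntimeError(res.message)
    return res.x                                # the five MIC coefficients a_c
```

Consistent with the uniqueness caveat above, this LP becomes unbounded when the axis can escape the point set; well-distributed sampling keeps it bounded.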

The solution for the MZ cylinder requires the inclusion of one additional parameter in the parameter set. This parameter is the thickness of the MZ, the cylindricity, which we will call t. The objective function to be minimized in this case is, of course, t itself. The model to be used is a cylinder given by equation B4, with the minimum and maximum zone boundaries at radii of -t/2 and +t/2 from this surface, respectively. The set of constraints is doubled in number because we now have surfaces on both sides of the point set. The constraints are

$$d - d_e - \tfrac{t}{2}\mathbf{1} = d - B_c a_c - \tfrac{t}{2}\mathbf{1} \le 0, \qquad d - d_e + \tfrac{t}{2}\mathbf{1} = d - B_c a_c + \tfrac{t}{2}\mathbf{1} \ge 0 \tag{C3}$$

for the dense set and


$$W\bigl(d - d_e - \tfrac{t}{2}\mathbf{1}\bigr) = W\bigl(d - B_c a_c - \tfrac{t}{2}\mathbf{1}\bigr) \le 0, \qquad W\bigl(d - d_e + \tfrac{t}{2}\mathbf{1}\bigr) = W\bigl(d - B_c a_c + \tfrac{t}{2}\mathbf{1}\bigr) \ge 0 \tag{C4}$$

for the sparse set. The vector 1 is a vector with the dimensionality of d having all of its elements equal to 1. The problem is to minimize t by varying t and all of the elements of a_c, subject to the constraints given either by equation C3 or equation C4. The starting approximation cylinder again has the LSQ solution axis, and t is taken as twice the largest distance of a data point from the LSQ cylinder.

The specific implementations of these algorithms used in the present work were verified by processing all of the approximately 1,500 datasets analyzed in this work with an alternative, published [34,35] system for computing the limiting fits. In well over 99% of the instances, the two fitting programs gave results identical to within machine roundoff. In the few remaining instances, discrepancies were minor.
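A corresponding sketch of the MZ formulation, again assuming SciPy's linprog and the basis_Fc helper in place of the paper's gradient projection implementation. The stacked constraint matrix encodes the dense-set constraints of equation C3; applying the rows of W would give the sparse-set form C4.

```python
import numpy as np
from scipy.optimize import linprog

def mz_cylinder(z, theta, d):
    """Minimize the zone width t over (a_c, t) subject to
    |d - B_c a_c| <= t/2 at every point; returns (a_c, t)."""
    Bc = basis_Fc(z, theta, z.min(), z.max())   # helper from Appendix B sketch
    n, k = Bc.shape
    half = -0.5 * np.ones((n, 1))               # coefficient of t in each row
    # d - Bc a_c - (t/2)1 <= 0   and   Bc a_c - d - (t/2)1 <= 0
    A_ub = np.block([[-Bc, half], [Bc, half]])
    b_ub = np.concatenate([-d, d])
    c = np.zeros(k + 1)
    c[-1] = 1.0                                 # objective: t itself
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * k + [(0, None)])
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[:k], res.x[-1]                 # coefficients a_c, cylindricity t
```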

References
[1] Brown CW. Dimensional inspection techniques for sample-point measurement technology. Prec Eng 1992;14(2):110–11.
[2] ASME. Mathematical Definition of Dimensioning and Tolerancing Principles. ASME Y14.5.1. New York: American Society of Mechanical Engineers, 1994.
[3] Stout KJ, Davis EJ, Sullivan PJ. Atlas of Machined Surfaces. London, UK: Chapman and Hall, 1990.
[4] Henke RP, Summerhays KD, Baldwin JM, Cassou RM, Brown CW. Methodologies for evaluation of systematic geometric deviations in machined parts and their relationships to process variables. Prec Eng 1999;23(4):273–92.
[5] Government and Industry Data Exchange Program. Alert No. X1A-88-01. August 22, 1988.
[6] Weckenmann A, Eitzert H, Garmer M, Weber H. Functionality-oriented evaluation and sampling strategy in coordinate metrology. Prec Eng 1995;17(4):244–52.
[7] Goto M, Iizuka K. A method for evaluating form errors of cylindrical parts. J Japan Soc Prec Eng 1975;41(5).
[8] Kurfess TR, Banks DL. Statistical verification of conformance to geometric tolerance. Computer-Aided Design 1995;27(5):353–61.
[9] Kurfess TR, Banks DL, Wolfson LJ. A multivariate statistical approach to metrology. ASME J Manufact Sci Eng 1996;118:652–7.
[10] Abakerli AJ, Butler BP, Cox MG. Optimum probing strategies: determining and using underestimation factors. Berlin: Physikalisch-Technische Bundesanstalt, 1997. Workshop on Traceability of Coordinate Measuring Machines.
[11] Butler BP, Cox MG. Optimum probing strategies: analysis of the GKN Technology sphere data. Berlin: Physikalisch-Technische Bundesanstalt, 1997. Workshop on Traceability of Coordinate Measuring Machines.
[12] Dowling MM, Griffin PM, Tsui K-L, Zhou C. Statistical issues in geometric feature inspection using coordinate measuring machines. Technometrics 1997;39(1):3–24.
[13] Goto M, Iizuka K. An analysis of the relationship between minimum zone deviation and least squares deviation in circularity and cylindricity. Proc Int Conf on Production Eng. New Delhi, 1977. p. 61–70.
[14] Feng SC, Hopp TH. A review of current geometric tolerancing theories and inspection data analysis algorithms. NISTIR 4509. National Institute of Standards and Technology, Gaithersburg, MD, 1991.
[15] Choi W, Kurfess TR. Uncertainty of extreme fit evaluation for three-dimensional measurement data analysis. Computer-Aided Design 1998;30(7):549–57.
[16] Tsukada T, Sasajima K. An optimum sampling interval for digitizing surface asperity profiles. Wear 1982;83:119–28.
[17] Kanada T, Tsukada T. Sampling space in discrete measurements of cylindrical form. Bull Japan Soc Prec Eng 1986;20(3):165–70.
[18] Caskey G, Hari Y, Hocken R, Machireddy R, Raja J, Wilson R, Zhang G, Chen K, Yang J. Sampling techniques for coordinate measuring machines. Proc 1992 NSF Design Manuf Sys Conf, Atlanta, GA, 1992. p. 983–8.
[19] Coy J. Sampling error for co-ordinate measurement. Proc 28th Int MATADOR Conf. Manchester, UK: Dept. of Mech. Eng., U. Manchester, 1990. p. 481–9.
[20] Mestre M, Abou-Kandil H. Linear prediction of signals applied to dimensional metrology of industrial surfaces. Measurement 1993;11:119–34.
[21] Mestre M, Abou-Kandil H. Measurement of errors of form of industrial surfaces: prediction and optimization. Prec Eng 1994;16(4):268–75.
[22] Woo TC, Liang R. Dimensional measurement of surfaces and their sampling. Computer-Aided Design 1993;25(4):233–9.
[23] Woo TC, Liang R, Hsieh CC, Lee NK. Efficient sampling for surface measurements. J Manufact Sys 1995;14(5):345–54.
[24] Liang R, Woo TC, Hsieh CC. Accuracy and time in surface measurement, part 1: mathematical foundations. J Manufact Sci Eng 1998;120:141–9.
[25] Liang R, Woo TC, Hsieh CC. Accuracy and time in surface measurement, part 2: optimal sampling sequence. J Manufact Sci Eng 1998;120:150–5.
[26] Choi W, Kurfess TR, Cagan J. Sampling uncertainty in coordinate measurement data analysis. Prec Eng 1998;22(3):153–63.
[27] Phillips SD, Borchardt B, Estler WT, Buttress J. The estimation of measurement uncertainty of small circular features measured by coordinate measuring machines. Prec Eng 1998;22(2):87–97.
[28] Capello E, Semeraro Q. The effect of sampling in circular substitute geometries evaluation. Int J Mach Tools Manuf 1999;39:55–85.
[29] Edgeworth R, Wilhelm RG. Adaptive sampling for coordinate metrology. Prec Eng 1999;23:144–54.
[30] Baldwin JM, Pilkey RD, Cassou RM, Summerhays KD, Henke RP. Modification of the Sandia National Laboratories/California Advanced Coordinate Measuring Machine for High Speed Scanning. SAND97-8245. Sandia National Laboratories, Livermore, CA, 1997.
[31] Pilkey RD, Klevgard PA. Advanced Coordinate Measuring Machine at Sandia National Laboratories/California. SAND93-8208. Sandia National Laboratories, Livermore, CA, 1993.
[32] Atkinson AC, Donev AN. Optimum Experimental Designs. Oxford: Clarendon Press, 1992.
[33] Rosen JB. The gradient projection method for non-linear programming. I. Linear constraints. SIAM J 1960;8:181–217.
[34] Carr K, Ferreira P. Verification of form tolerances Part I: Basic issues, flatness, and straightness. Prec Eng 1995;17:131–43.
[35] Carr K, Ferreira P. Verification of form tolerances Part II: Cylindricity and straightness of a median line. Prec Eng 1995;17:144–56.
