
REVIEW OF SCIENTIFIC INSTRUMENTS 76, 125108 (2005)

Machine vision for high-precision volume measurement applied to levitated containerless material processing
R. C. Bradshaw and D. P. Schmidt
Department of Mechanical and Industrial Engineering, University of Massachusetts,
Amherst, Massachusetts 01003

J. R. Rogers
NASA Marshall Space Flight Center, Huntsville, Alabama 35812

K. F. Kelton
Department of Physics, Washington University, St. Louis, Missouri 63130

R. W. Hyersa)
Department of Mechanical and Industrial Engineering, University of Massachusetts,
Amherst, Massachusetts 01003

a)Author to whom correspondence should be addressed; electronic mail: hyers@ecs.umass.edu

(Received 4 April 2005; accepted 31 October 2005; published online 29 December 2005)
By combining the best practices in optical dilatometry with numerical methods, a high-speed and
high-precision technique has been developed to measure the volume of levitated, containerlessly
processed samples with subpixel resolution. Containerless processing provides the ability to study
highly reactive materials without the possibility of contamination affecting thermophysical
properties. Levitation is a common technique used to isolate a sample as it is being processed.
Noncontact optical measurement of thermophysical properties is very important as traditional
measuring methods cannot be used. Modern, digitally recorded images require advanced numerical
routines to recover the subpixel locations of sample edges and, in turn, produce high-precision
measurements. © 2005 American Institute of Physics. [DOI: 10.1063/1.2140490]

I. INTRODUCTION

Noncontact, or containerless, processing provides the


ability to study highly reactive materials without the possibility of contamination affecting thermophysical properties.
Levitation is a common technique used to isolate a sample as
it is being processed. However, this isolation prevents the use
of traditional property measurement techniques. To overcome this problem, optical methods have been devised to
measure properties by analyzing images of a sample as it is
being processed. Properties such as viscosity and surface
tension,1 as well as density,2,3 are examples of properties that
can be measured with this type of method. Density, in particular, is extremely important, as it is used in the measurement of other
properties and is itself commercially important.
Noncontact optical measuring techniques were first used
to measure density in the early 1960s.2,3 The first applications of noncontact measurements used electromagnetic levitation (EML) to isolate samples being processed, at which point they were photographed on 35 mm film. These photographs were enlarged, and the image analysis was performed by hand. Sample images were sliced into many trapezoids, from which a numerical integration was performed to calculate the sample's volume. At times only two percent of
all images were of high enough quality to measure.3,4 Later
methods have made use of analog video to capture images of
samples during processing.5 These video images were subsequently digitized and analyzed with computer software.


Early software techniques measured volume using thresholding methods or convolution masks to determine sample
edges to within a resolution of about one pixel.5,6
Typically, image analysis consists of capturing gray
scale images of a sample as it is being processed. Modern
techniques use software to analyze and measure the sample
in each image. For density measurements, volume is measured from 2D silhouettes of the samples. Accurate and repeatable edge detection of the samples in the image is crucial
for volume measurements.
The video process replaces edge discontinuities that can
be seen by the eye with steep, but continuous, changes in
pixel intensities. This phenomenon effectively smears the
edge of a sample. However, through the use of advanced
numerical methods, sub-pixel edge determination is possible
by examining the intensity values of pixels surrounding the
edge of a sample.
Recent methods make use of digital video cameras that
capture images directly to a hard drive for storage. These
images are in turn analyzed using subpixel edge detection
techniques.7-9 Current and recently used subpixel techniques
can be broken down into two types: a gradient-based search,
and a half-height technique. The gradient-based searching
schemes perform numerical calculations of gradient values
across the image until a maximum is found; this maximum in
the absolute value of the gradient is taken to be the edge. The
gradient-based searches are either radial or Cartesian.6,7,9 A
radial search is generally started at an approximate centroid



location and then moves outward along a predefined number


of vectors, while a Cartesian search moves across the image
in horizontal and/or vertical directions.
Half-height based searches differ from gradient-based
searches in that the position of the edge is taken as the location where the edge intensity value is half the difference
between the background and the sample's gray scale values.
To find the edge location, a function is fit to the values that
are located at the edge, represented by a sharp change in
profile, and then solved for the location of the half-height
value.8
The techniques and algorithms presented here put forth a
method that builds on the best practices of Samwer et al.8
and Rhim et al.7 By blending their techniques with advanced
numerical methods, high precision and fast measurement
speed are accomplished. This application uses the concept of
radial edge searching7 with the idea of fitting a cubic polynomial to the edge values.8 Once the edges have been found
in this manner, the radial function used to represent the edge
values is allowed to rotate about the center of the sample to
align with the best possible symmetry axis.8 In addition to
these combined techniques, a detailed method for optimized
polynomial edge fitting is presented.

II. SAMPLE PROCESSING AND IMAGE ACQUISITION

The containerless processing technique employed is


electrostatic levitation (ESL). ESL provides a stably levitated
spherical sample of 2-3 mm in diameter from which volume
measurements may be taken for each frame of video that is
recorded. This is different from some electromagnetic levitation EML measurements where many frames must be averaged to make a single measurement, due to large-amplitude
oscillations of the sample surface.5,6,9 The ESL system used
for sample processing is located in the Electrostatic Levitation Laboratory at NASA Marshall Space Flight Center (MSFC), Huntsville, AL.10
The video and optical hardware comprises a Redlake Motion Pro 10000 camera, an Infinity K2 telecentric lens, and a filter array. The filter array is used between the camera assembly and the processing chamber to filter out extraneous light sources such as the positioning and heating lasers. For example, when an 810 nm fiber-coupled laser diode array was used for heating, the filters chosen were one cyan subtractive filter, one hot mirror, and one extended hot mirror, all from Edmunds Optics (part numbers NT52-538, NT43-452, and NT46-388, respectively). The filters are used to block reflected laser light that could either damage the camera or show up as bright spots on the surface of the samples. A telecentric lens is used so that magnification changes due to changes in working distance, as a result of sample motion, are minimized. Temperature measurements are taken using a single-color IGA-100 pyrometer (Impac GmbH, now Mikron Infrared, Inc.).
Video is recorded at a rate of 25 frames/s and stored
digitally on a computer for post-processing analysis. Video
of the samples consists of 8-bit gray scale images with 512 × 512 resolution. A temperature sampling rate of approximately 16 Hz is typically used for these measurements, although other sampling rates are available at the MSFC ESL facility.

FIG. 1. (Color online) Intensity profile and gradient plots for a vertical slice taken through the center of a sample image. These plots depict how the sample pixels are delineated from the background pixels.
III. IMAGE ANALYSIS

The goal of this technique is to measure the volume of


samples as they are being processed by analyzing images of
the samples. The calculations performed by the program can
be broken down into three steps. The first step is to determine a candidate area in which a detailed subpixel search can
be conducted. This is done by detecting the edges of the
sample in a coarse manner, within approximately one pixel.
The second step is to use the position of the coarse edge
values to perform a subpixel edge detection of the edges.
This is important, as the sample edge is typically smeared
during the video process. The third step is to fit a sixth-order
Legendre polynomial to the subpixel edge points. This polynomial is used to represent the radius of the sample as a
function of rotation angle in a polar coordinate system. This representation of the sample's shape is then used to calculate the volume of the sample from its 2D silhouette in the image.
A. Coarse edge detection

Coarse edge detection is performed by thresholding the


image. Threshold values are determined with a technique similar to that put forth by Lin et al.,11 who used the integer intensity
value that is halfway between the average of the background
and sample intensity values. This technique calculates this
value by examining a vertical slice through the image. The
averages of the background and sample are determined by
first segregating the slice through the image into background
and sample pixels. The segregation is based on a centered
difference first derivative search along the slice for the largest positive and negative gradient values.12 These values are
taken as approximate edges; any pixels located between
these edge values are taken to be sample pixels and all pixel
locations outside, above or below, are considered to be background pixels. In this manner, a threshold value is determined for each image that is analyzed. Figure 1 shows plots of both the intensity profile and the first derivative along a slice through an image. It should also be noted that during this step the fractional half-value between the background and sample is calculated and stored for future subpixel edge detection.

FIG. 2. Horizontally detected edge points using single-pixel-resolution threshold edge detection. Notice that the top and bottom locations contain fewer edge values. This results from the top and bottom areas being resolved by only a few rows, as well as the analysis program requiring a minimum distance between edge values.

FIG. 3. Candidate area that has been masked out using the horizontal coarse edge detected points. This is the area within which the subpixel edge detection will take place.
Once an appropriate threshold value is determined, horizontal edge detection is performed for each pixel row of the
image. Starting at both sides of the image, the edges of the
sample are sought by checking the pixel intensity values in
each row moving inward until pixel values that are equal to
or less than the threshold value are found. Two horizontal
edge points are detected for each row that contains the
sample silhouette. Figure 2 shows a plot of the horizontally
detected edge points. Notice the top and bottom edge areas
have fewer points detected. This is a result of the top and
bottom of the sample being resolved by only a few rows;
also a filtering algorithm used by the software to prevent
extraneous pixels from being identified requires a minimum
distance between horizontal points for them to be detected.
After completing the horizontal coarse edge detection, a
21 × 21 pixel array is masked out around each coarse edge
point to form a candidate area within which to perform the
subpixel edge detection. This candidate area can be seen in
Fig. 3 as a ring of pixels surrounding the sample silhouette in
the image.
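As a concrete sketch of this coarse step, the Python/NumPy code below derives a per-image threshold from a vertical slice and then scans each row inward from both sides. The function and variable names are illustrative rather than taken from the original program, the minimum-distance filtering mentioned above is omitted, and a backlit image is assumed so that the sample silhouette is darker than the background.

```python
import numpy as np

def coarse_edges(image):
    """Coarse (roughly one-pixel) edge detection by per-image thresholding.

    Sketch of the approach in Sec. III A; names are illustrative only.
    """
    height, width = image.shape
    col = image[:, width // 2].astype(float)     # vertical slice through the image

    grad = np.gradient(col)                      # centered-difference first derivative
    i1, i2 = int(np.argmax(grad)), int(np.argmin(grad))
    top, bottom = min(i1, i2), max(i1, i2)       # approximate sample extent on the slice

    sample_mean = col[top:bottom + 1].mean()
    background_mean = np.concatenate((col[:top], col[bottom + 1:])).mean()
    threshold = 0.5 * (sample_mean + background_mean)

    # Horizontal coarse detection: scan each row inward from both sides
    # until a pixel at or below the threshold (a sample pixel) is found.
    edge_points = []
    for y in range(height):
        hits = np.nonzero(image[y].astype(float) <= threshold)[0]
        if hits.size >= 2:
            edge_points.append((hits[0], y))     # left edge of this row
            edge_points.append((hits[-1], y))    # right edge of this row
    return threshold, np.array(edge_points)
```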

B. Subpixel edge detection

The subpixel edge detection process is carried out as a


radial search that is similar to that of Rhim et al.7 Rhim's
technique performs a search for the edge along 400 individual vectors by starting at the approximate centroid of the
sample and moving outward along each vector. The technique presented here also performs a radial search starting at
the centroid of the sample. However, a distinct difference is
that this technique searches along almost three times as many
vectors on average. An explanation of how the number of

search vectors is determined and how that quantity varies


will be given later.
The radial subpixel edge detection begins by determining an initial centroid from which to start the search.
This is achieved by calculating the location of the centroid
from the coarsely detected edge points in the following
expressions:

X = \frac{\sum_{i=1}^{n} x_i A_i}{\sum_{i=1}^{n} A_i} ,   (1)

Y = \frac{\sum_{i=1}^{n} y_i A_i}{\sum_{i=1}^{n} A_i} .   (2)

Using the initial centroid location, all of the coarse edge


points are converted into a polar coordinate scheme using the
centroid as the origin. After the coordinate transformation, a
sixth-order Legendre polynomial is fitted to the coarse points
using a least-squares technique.12 The form of the analytical
expression for the sample's radius can be seen in the following equation, where the radius R of the sample is expressed as a function
R(\theta) = \sum_{i=0}^{6} a_i P_i(\cos\theta),   (3)

of the rotation angle \theta; P_i(\cos\theta) is the ith-order Legendre polynomial and a_i its respective coefficient. All Legendre polynomial fits and search directions are done under
the assumption that a rotation angle of zero is located at the
12 o'clock position of the sample and positive rotation is in
the clockwise direction.
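For reference, the least-squares fit of the sixth-order series in Eq. (3) can be posed as a linear system whose design-matrix columns are P_i(cos θ) evaluated at the edge-point angles. The sketch below uses NumPy's Legendre utilities; the helper names are illustrative rather than those of the original code.

```python
import numpy as np
from numpy.polynomial import legendre

def fit_legendre_radius(r, theta, order=6):
    """Least-squares fit of R(theta) = sum_i a_i P_i(cos theta), i = 0..order.

    r, theta: polar coordinates of edge points about the chosen origin,
    with theta = 0 at the 12 o'clock position (sketch only).
    """
    x = np.cos(theta)
    A = legendre.legvander(x, order)          # column i holds P_i(cos theta)
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs                              # a_0 ... a_6

def eval_legendre_radius(coeffs, theta):
    """Evaluate the fitted radius at angles theta."""
    return legendre.legval(np.cos(theta), coeffs)
```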
Before searching for the edge, the number of search directions is first determined by approximating the circumference of the sample by using the a0 coefficient of the initial
least-squares fit of the Legendre polynomial. The a0 coefficient calculation provides a zero-order approximation of the
sample's radius. The number of search directions is taken as
the integer value of the circumference of the sample, calculated with the a0 coefficient. In this manner the number of
search directions will vary with sample size, but the relative
spacing between vectors will stay roughly the same.


FIG. 4. (Color online) Depiction of three vectors used to fit the cubic polynomial that represents the edge along the middle vector. Note that the center vector is the direction of the cubic edge interpolation and the two surrounding vectors provide additional points that are used to describe the edge pixel values as well as smooth out image noise.

FIG. 5. (Color online) Example of a cubic polynomial fit through edge pixels from three vectors. Also included in this plot are the left and right brackets required to perform the closed bisection root-finding procedure used to find the subpixel edge location.

The angle between each vector is calculated by dividing 2π radians by the number of search directions. Each search direction
is equally spaced by this amount.
The subpixel edge location is searched for along each
vector by incrementing by one pixel7 length along a vector
that starts at the centroid and points along each search direction. Each time the vector is incremented, the location of the
vector head is calculated and converted to Cartesian coordinates. From this, the pixel location of the vector head is
determined. The vector is incremented until the program determines that the head has landed in a pixel that is both in the
subpixel search area and is either equal to or greater than the
threshold value. Once this pixel criterion has been met, this
pixel and three previously tested pixels along this search
direction are stored for a total of four pixels for each search
direction.
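The choice of the number of search directions and the one-pixel march along each vector might look like the sketch below, where `a0` is the zero-order coefficient of the initial Legendre fit and `candidate_mask` and `threshold` come from the coarse step; all names are illustrative assumptions, not the original implementation.

```python
import numpy as np

def radial_edge_samples(image, centroid, a0, candidate_mask, threshold):
    """March outward along equally spaced vectors and keep, for each
    direction, the last four (radius, intensity) samples leading up to
    the first candidate-area pixel at or above the threshold.
    Sketch only; not the original implementation.
    """
    cx, cy = centroid
    n_dirs = int(2.0 * np.pi * a0)               # integer circumference in pixels
    dtheta = 2.0 * np.pi / n_dirs                # equal spacing between vectors
    results = []
    for k in range(n_dirs):
        theta = k * dtheta                       # zero at 12 o'clock, clockwise positive
        dx, dy = np.sin(theta), -np.cos(theta)   # image y axis points downward
        history, r = [], 0.0
        while True:
            r += 1.0                             # increment by one pixel length
            px, py = int(round(cx + r * dx)), int(round(cy + r * dy))
            if not (0 <= px < image.shape[1] and 0 <= py < image.shape[0]):
                break                            # guard: ran off the image
            history.append((r, float(image[py, px])))
            if candidate_mask[py, px] and image[py, px] >= threshold:
                break                            # background-side edge pixel reached
        results.append((theta, history[-4:]))    # this pixel plus the three before it
    return results
```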
C. Cubic edge interpolation

After selection of fitting pixels for the edge, the next step
is to use these pixels to fit cubic polynomials. Each cubic
polynomial is used to represent the edge profile along its
specific search vector and is solved for the fractional value of
half the difference between the average of the background
and sample pixel intensity values. Noise in the image, which
presents as variability in pixel intensities from one search
vector to another, is reduced by including the pixel values
from two vectors that surround the direction selected for cubic polynomial fitting. Three vectors for each search direction help smooth noise by using 12 pixel values and locations, which in turn provide more information about the
influence of surrounding edge profiles as well. Figure 4 provides a graphical example of which vectors are used in the
fitting of a cubic polynomial along a search direction. Note

that there are three search vectors, two surrounding the


middle vector, which is the direction to be fit along. Figure 5
depicts a sample cubic polynomial fit through 12 edge pixels.
As indicated before, the subpixel location along a vector
is taken as the fractional value halfway between the sample
and background pixel intensities. This location is solved for
using a closed bisection root-finding technique.13 Also depicted in Fig. 5 are the brackets used to solve the polynomial
as well as the location of the subpixel edge.
Once all of the edge values have been detected, a centroid is recalculated by averaging the vertical and horizontal
components of the edge locations. Because the radial function, used to represent the edge points, is critically sensitive
to the centroid location, the radial subpixel edge detection is
performed a second time. This step is also done by Rhim
et al.7
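A minimal sketch of the cubic interpolation and half-height root finding follows, using NumPy's polynomial fit and SciPy's bisection; `half_value` stands for the stored fractional half-height from the coarse step, the 12 samples are assumed to be pooled from the three neighboring vectors, and the names are illustrative.

```python
import numpy as np
from scipy.optimize import bisect

def subpixel_edge_radius(radii, intensities, half_value):
    """Fit a cubic to the pooled (radius, intensity) samples and solve for
    the radius at which the intensity equals the half-height value.
    Sketch only; assumes the profile crosses half_value within the bracket.
    """
    radii = np.asarray(radii, dtype=float)
    intensities = np.asarray(intensities, dtype=float)

    coeffs = np.polyfit(radii, intensities, deg=3)   # cubic least-squares fit
    profile = np.poly1d(coeffs)

    # Closed bisection between the innermost and outermost sampled radii.
    return bisect(lambda r: profile(r) - half_value, radii.min(), radii.max())
```

In practice the bracket endpoints must straddle the half-height value, as depicted by the left and right brackets in Fig. 5.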
D. Measuring volume

As indicated before, a Legendre polynomial is fit to the


detected edge points so as to describe the edge as a function
of rotation angle. Once fit, this Legendre polynomial is integrated using the following relation to calculate volume:
\mathrm{Volume} = \frac{2\pi}{3} \int_{0}^{\pi} R^{3}(\theta) \sin\theta \, d\theta .   (4)
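As an illustration, the integral in Eq. (4) can be evaluated directly from the fitted coefficients of Eq. (3); the sketch below uses a fixed-order Gaussian quadrature, which is an assumed choice rather than the quadrature scheme of the original program.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import fixed_quad

def volume_from_legendre(coeffs):
    """Evaluate Volume = (2*pi/3) * integral_0^pi R(theta)^3 sin(theta) d(theta)
    for R(theta) given by the sixth-order Legendre series of Eq. (3). Sketch only.
    """
    def integrand(theta):
        r = legendre.legval(np.cos(theta), coeffs)
        return r ** 3 * np.sin(theta)

    integral, _ = fixed_quad(integrand, 0.0, np.pi, n=50)
    return 2.0 * np.pi / 3.0 * integral
```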

However, before integration, the polynomial needs to be optimized. The fit of the polynomial is optimized in two distinct ways. The first is to locate the optimal location of the fit
origin within the detected edge points. Legendre polynomials
have the ability to describe symmetrically deformed spheroids and the origin of the polynomial fit may not necessarily
be located at the centroid of the edge points. The second
optimization deals with determining at what angle the polynomial fit should be done. Theoretically, in ESL a sample
will levitate with the long axis oriented vertically. Typically
the Legendre fit is done with the assumption that the rotation
angle of zero is oriented along a vertical axis of symmetry.


FIG. 6. (Color online) Golden Ratio search directions. Direction one is to optimize the declination angle, while directions two and three are for the perpendicular and parallel centroid optimizations, respectively.

FIG. 7. (Color online) Nonoptimized centroid and declination angle plot.

However, a sample may have a slightly different symmetry


axis, so a coordinate transformation must be executed to
align the axis of the fit with the axis of symmetry of the
sample. For purposes of discussion, this coordinate system
rotation from the vertical axis will be termed the declination
angle.
While the origin of the Legendre fit may not be at the
centroid, the origin will be along the axis of symmetry. As a
result of this, it is important to determine the appropriate
declination angle so that the best origin location may be
sought along the true axis of symmetry.
Optimization of the fit is achieved by minimizing the L2
norm of the fit with respect to the origin location and declination angle.14 The equation for the L2 norm can be seen
below:
L_2 = \left[ \sum \left( R - R_{\mathrm{fit}} \right)^{2} \right]^{1/2} .   (5)

To achieve this L2 minimization, three Golden Ratio searches are performed on each polynomial fit.12 The first search is used to determine the optimal declination angle. This is done by evaluating the L2 norm at each test declination angle and iterating until a minimum L2 norm is converged upon. The second and third searches are used first to align the origin with the symmetry axis and then to search along this axis for the best origin location. Both the second and third searches also evaluate the L2 norm at each test location. The second searches in a direction perpendicular to the symmetry axis, and the third searches parallel to (along) the symmetry axis for the location of the minimum L2 norm. Figure 6 shows an example of the three search directions imposed on a sample Legendre shape. Each of these optimization steps is performed once. It might appear that iterating these steps in sequence would increase the quality of the fit, but testing has shown that negligible improvement is gained by iterating the Golden Ratio searches. Once the optimal declination angle and origin have been determined, the volume of the sample is calculated using the relation shown in Eq. (4) above.

The importance of optimizing the centroid and declination angle is shown in Figs. 7 and 8. Both of these plots depict the average radius of the edge values subtracted from the radius of the individual edge values, Eq. (6), as well as from the radius of the polynomial fit at the same rotation angles as the edge values, Eq. (7). The purpose of these plots is to illustrate how closely

\Delta_{\mathrm{point}} = R_{\mathrm{point}} - R_{\mathrm{avg}} ,   (6)

\Delta_{\mathrm{fit}} = R_{\mathrm{fit}} - R_{\mathrm{avg}}   (7)

the polynomial fit represents the detected edges. If the polynomial is correctly representing the edges, then the two curves should lie on top of each other. Figure 7 depicts these values when neither the origin nor the declination angle has been optimized for a sample image. It can clearly be seen in this plot that the values are out of phase and do not even have the same amplitude. In contrast, Fig. 8 depicts the same plot, for the same sample image, except that the origin and the declination angle of the fit have been optimized. Note how much better the fit represents the edge values.

FIG. 8. (Color online) Optimized centroid and declination angle plot.
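The Golden Ratio (golden-section) searches described above can be sketched as follows; this minimal example minimizes the L2 norm of Eq. (5) over a trial declination angle within an assumed ±10° window, reusing the Legendre fit helpers sketched after Eq. (3). The window width and all names are illustrative, not taken from the original program.

```python
import numpy as np

def golden_section_min(f, a, b, tol=1e-4):
    """Classic golden-section search for the minimizer of f on [a, b]."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0            # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                            # keep [a, d]; old c becomes new d
            c = b - invphi * (b - a)
        else:
            a, c = c, d                            # keep [c, b]; old d becomes new c
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

def l2_norm(radii, angles, phi, fit, evaluate):
    """L2 norm of Eq. (5) for a trial declination angle phi (radians).
    `fit` and `evaluate` are the Legendre helpers sketched in Sec. III B.
    """
    coeffs = fit(radii, angles - phi)              # refit in the rotated frame
    residual = radii - evaluate(coeffs, angles - phi)
    return np.sqrt(np.sum(residual ** 2))

# Example usage (assumed +/- 10 degree search window):
# best_phi = golden_section_min(
#     lambda phi: l2_norm(r, theta, phi, fit_legendre_radius, eval_legendre_radius),
#     -np.radians(10), np.radians(10))
```

The same one-dimensional search can be reused for the perpendicular and parallel origin optimizations by parametrizing the trial origin along each search direction.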
IV. CALIBRATION

Using the previously discussed steps and techniques, the


program measures the volume of the sample in cubic pixels.
For the volume measurement to be converted to a density or
thermal expansion measurement, it must first be converted to
a real volume such as m³. This is achieved via a calibration factor that is multiplied by the pixel³ volume to get an m³
volume. The calibration factor is created by analyzing video
of a high-precision calibration sphere with known diameter.
The calibration sphere used for this step is a 2.5 mm grade 3
tungsten carbide machined sphere purchased from Industrial
Tectonics Inc. (ITI) that has a diameter tolerance of ±750 nm
and a sphericity tolerance of 75 nm.15 Approximately 30 s of
video is taken of this sphere and then analyzed. The ratio of
the known volume of the sphere to the average measured
volume is the calibration factor.
Temperature is correlated to each volume measurement
by time synchronizing the end of each video file with the
pyrometry or temperature file. As the temperature sampling
is slightly slower (16 Hz) than the video sampling (25 Hz), the temperature for each frame is linearly interpolated
from the two closest temperatures in time to each video
frame. Each temperature sample is time-stamped to millisecond accuracy.
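Both the calibration factor and the temperature synchronization reduce to a few lines; the sketch below uses NumPy's linear interpolation and assumes frame and pyrometer timestamps that have already been referenced to the end of the video file, with all names being illustrative.

```python
import numpy as np

def calibration_factor(known_volume_m3, measured_volumes_px3):
    """Ratio of the known sphere volume to the average measured volume (m^3 per pixel^3)."""
    return known_volume_m3 / np.mean(measured_volumes_px3)

def temperature_per_frame(frame_times_s, pyro_times_s, pyro_temps_c):
    """Linearly interpolate the ~16 Hz pyrometer record onto the 25 frames/s video."""
    return np.interp(frame_times_s, pyro_times_s, pyro_temps_c)

# Example: convert per-frame volumes in cubic pixels to cubic meters.
# factor = calibration_factor(v_sphere_m3, sphere_volumes_px3)
# sample_volumes_m3 = factor * np.asarray(sample_volumes_px3)
```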
V. VERIFICATION

The calibration sphere measurements used to create the


calibration factors can be used to quantify how precisely the
program can conduct measurements. In ESL, samples tend to
levitate with their long axis oriented vertically. Also, the
samples often spin. If a sample deviates from spherical and is
spinning in front of the camera, the program will be presented with varying silhouettes, which in turn will produce
slightly different volumes; hence, the standard error of a measurement will be a function of the sphericity of the sample. By
assuming that the calibration sphere may be out of round by
the 75 nm tolerance and will produce silhouettes that will
range from spherical to slightly ellipsoidal, a predicted standard error of 0.00305% can be calculated from the following
expression:
S_{yx} = \frac{V_{\mathrm{spherical}} - V_{\mathrm{spheroid}}}{V_{\mathrm{spherical}}} \times 100 .   (8)

Figure 9 shows a plot of volume versus frame for an analyzed video file of the calibration sphere. This plot shows the volume measurement made for each frame of a calibration video, approximately 30 s long. The standard error for the volume measurements, normalized by the average volume, is 0.0265%, or a precision of 265 parts per million (ppm). This is an order of magnitude larger than the predicted standard error for the sphere, indicating that a grade 3 sphere is sufficient for calibration.

FIG. 9. (Color online) Calibration sphere analysis plot of volume vs frame number. The standard error, normalized by the average volume, is 0.0265%.

To establish the absolute accuracy of this technique, six grade 3 tungsten carbide spheres purchased from Industrial Tectonics Inc.,15 four with diameters of 2.2 mm and two with diameters of 2.0 mm, were measured independently. For calibration, one of the 2.2 mm spheres was NIST certified to grade 3 standards. After measuring the volume of the remaining five spheres, the largest error between measured and published volume was approximately 0.098%. This particular deviation was from one of the 2.2 mm spheres. As previously cited, the basic diameter tolerance for grade 3 spheres, published on Industrial Tectonics Inc.'s website,15 is ±750 nm. This corresponds to a maximum difference in volume between two different 2.2 mm grade 3 spheres of approximately 0.21% (±0.105%). Our technique can distinguish between individual grade 3 spheres, and the accuracy of the technique is better than the published tolerance of these spheres.

Error as a result of loading and unloading samples was also checked with the use of calibration spheres. During the course of a measurement campaign, calibration spheres are loaded several times a day to ensure that the instrument is calibrated as close to measurement times as possible. Figure 10 shows two plots of volume versus frame for the same calibration sphere from two different calibration videos taken 2.5 h apart. The difference in the volume measurement in pixels³ is approximately 0.0092% (92 ppm), which is two orders of magnitude smaller than the accuracy of the measurement technique. This indicates that the dominant source of error is from the technique itself, and not from the loading and unloading of samples.

FIG. 10. (Color online) Calibration sphere volume plot from morning and afternoon calibration videos.

FIG. 11. (Color online) Density vs temperature plot for liquid Zr62Cu20Al10Ni8.
VI. APPLICATION

After validation, the software and measuring technique were used to measure the density of Zr62Cu20Al10Ni8. Figure 11 shows a plot of density versus temperature for Zr62Cu20Al10Ni8. Notice the two bands of data points of different widths. The wide band of data is for solid density values, while the narrow band is for liquid density values. Also note that the narrow band of data extends below the freezing temperature of approximately 910 °C. This particular sample undercooled by approximately 190 °C.
The different widths of scatter result from the
difference in symmetry between the solid and liquid samples.
The analysis assumes that the silhouette in an image is of an
axisymmetric sample. This assumption is valid when the
sample is in the liquid state as surface tension pulls the
sample into an axisymmetric shape. However, this sample
was not a machined sphere, and due to shrinkage upon solidification the solid state is not quite axisymmetric, hence
producing a larger variation in the measured values.
The standard errors for the measurement of the liquid and solid phases in Fig. 11 are approximately 0.038% (380 ppm) and 0.430% (4300 ppm), respectively. The measurement precision of the liquid is of the same order of magnitude as that of the calibration sphere. Linear fits to both the solid and liquid phases of Zr62Cu20Al10Ni8 are provided in Table I, as well as the standard error, normalized by the average of the solid and liquid densities. Also included with these linear fits are 95% confidence intervals for the fit parameters. These intervals show how much deviation is required in the fit parameters to encompass 95% of the values.

FIG. 12. (Color online) Intensity profile plots from sample calibration sphere and 316 SS sphere images. Notice the jagged nature of both profiles as well as the change in intensity values from the top and bottom background areas in the 316 SS profile.
It should be noted that this is a density measurement of
this particular alloy. No quaternary phase diagram yet exists to indicate whether the melting observed in the density plot is congruent or incongruent.

VII. DISCUSSION

By analyzing the calibration sphere data, it has been


shown that the program can measure the volume of a high-precision sphere with a precision of approximately 265 ppm.
The determination of the absolute accuracy of this technique
is limited by the absolute accuracy of the manufacturing of
the calibration spheres, which is currently ±0.105% for grade 3 spheres. It is also not clear which image attributes affect the quality of measurements, and to what extent. An
example of this can be seen in Fig. 12, which contains a plot
of intensity profiles from both a calibration image and an
image of a 316 SS sample. Notice the background values of
the 316 SS differ dramatically from the bottom to the top of
the image. While it appears that the uniformity of the background intensity impacts the quality of measurements, it is not clear how it affects the average measured value or whether effects due to uniformity may be coupled with something else, such as focus or contrast.

TABLE I. Linear fits to the solid and liquid phases of Zr62Cu20Al10Ni8. The standard error percentages are normalized by the average density for the respective phase. Temperature is in °C. The data used to create the liquid fit were taken only while the sample was cooling, so as to remove any heating effects on the measurement. 95% confidence intervals have also been included as ±x.xx on each fit parameter.

Composition: Zr62Cu20Al10Ni8
  Solid (kg/m³): ρ_s = A − B(T − 25), with A = 6565 ± 5.44 and B = 0.25 ± 0.01; Std. Err. = 0.43%
  Liquid (kg/m³): ρ_l = A − B(T − 782), with A = 6599.1 ± 0.85 and B = 0.3640 ± 0.0005; Std. Err. = 0.038%
The largest source of error for this type of containerless
measuring technique is mass loss due to evaporation, which
causes a shift in the absolute value of the measurement. The
mass loss for the measurement of Zr62Cu20Al10Ni8 shown in
Fig. 11 was 0.14%. While this is a small amount, it is an
order of magnitude larger than the standard error of the measurement of the liquid phase. A shift such as this would
clearly be visible with the resolution of the program. One
way to reduce this measurement error is to use the post-processing mass of the sample. The majority of the mass loss
during processing should occur at the maximum temperature
of the thermal cycle. By using the post-test mass and the
volume measurements from after the peak temperature, the
absolute error should be minimized.
In summary, a noncontact method that blends the best
traits from previous optical measuring techniques7,8 has successfully been employed to measure the volume of liquid
samples with a precision level on the order of 370 ppm or
0.037% standard error. This research has also shown promise for producing measurements with even better precision in the future. Future work will include identifying proper metrics to ensure consistent image quality that will provide the best possible measurements as well as improve accuracy. Work is currently being done to create a backlighting system that provides a smoother, more uniform background while providing sufficient contrast for measurements at very high temperature (2000 °C).

ACKNOWLEDGMENTS

This work was supported in part by NASA Office of


Biological and Physical Research through Grants No.
NAG8-1682 and No. NNM04AA016, NASA MSFC Center
Director's Discretionary Fund, and NASA Graduate Student
Researchers Program Fellowship NGT8-52948. The ESL
work was performed at the NASA MSFC Electrostatic Levitation Facility. We would also like to thank Mary Warren for
her help in processing the data that produced the plot in
Fig. 11.
1 Lord Rayleigh, Proc. R. Soc. London 29, 71 (1879); H. Lamb, Proc. London Math. Soc. 13, 51 (1881); Won-Kyu Rhim et al., Rev. Sci. Instrum. 70, 2796 (1999).
2 A. E. El-Mehairy and R. G. Ward, Trans. Metall. Soc. AIME 227, 1226 (1963).
3 S. Y. Shiraishi and R. G. Ward, Can. Metall. Q. 3, 117 (1964).
4 T. Saito, Y. Shiraishi, and Y. Sakuma, Trans. Iron Steel Inst. Jpn. 9, 118 (1969).
5 L. M. Racz and I. Egry, Rev. Sci. Instrum. 66, 4254 (1995).
6 E. Gorges et al., Int. J. Theor. Phys. 17, 1163 (1996).
7 S. K. Chung, D. B. Thiessen, and W.-K. Rhim, Rev. Sci. Instrum. 67, 3175 (1996).
8 B. Damaschke et al., Rev. Sci. Instrum. 69, 2110 (1998).
9 J. Brillo and I. Egry, Int. J. Thermophys. 24, 1155 (2003).
10 J. R. Rogers, R. W. Hyers, T. Rathz, L. Savage, and M. B. Robinson, American Institute of Physics Space Technology and Applications International Forum, Albuquerque, NM, 2001, pp. 332-336.
11 S.-Y. Lin, K. McKeigue, and C. Maldarelli, Am. Inst. Chem. Eng. Symp. Ser. 36, 1785 (1990).
12 S. C. Chapra and R. P. Canale, Numerical Methods for Engineers with Software and Programming Applications, 4th ed. (McGraw-Hill, Boston, 2002).
13 G. L. Bradley and K. J. Smith, Calculus, 1st ed. (Prentice Hall, Englewood Cliffs, NJ, 1995).
14 C. F. Gerald and P. O. Wheatley, Applied Numerical Analysis, 6th ed. (Addison-Wesley, Reading, MA, 1999).
15 www.itiball.com/ballgradecharts.htm
