ASM International
Materials Park, OH 44073-0002
www.asminternational.org
Copyright 2000
by
ASM International
All rights reserved
No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without the written permission of the copyright
owner.
First printing, December 2000
Great care is taken in the compilation and production of this Volume, but it should be made clear that NO
WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, ARE GIVEN IN CONNECTION
WITH THIS PUBLICATION. Although this information is believed to be accurate by ASM, ASM cannot
guarantee that favorable results will be obtained from the use of this publication alone. This publication is
intended for use by persons having technical skill, at their sole discretion and risk. Since the conditions of product
or material use are outside of ASM's control, ASM assumes no liability or obligation in connection with any use
of this information. No claim of any kind, whether as to products or information in this publication, and whether
or not based on negligence, shall be greater in amount than the purchase price of this product or publication in
respect of which damages are claimed. THE REMEDY HEREBY PROVIDED SHALL BE THE EXCLUSIVE
AND SOLE REMEDY OF BUYER, AND IN NO EVENT SHALL EITHER PARTY BE LIABLE FOR
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES WHETHER OR NOT CAUSED BY OR RESULTING FROM THE NEGLIGENCE OF SUCH PARTY. As with any material, evaluation of the material under
end-use conditions prior to specification is essential. Therefore, specific testing under actual conditions is
recommended.
Nothing contained in this book shall be construed as a grant of any right of manufacture, sale, use, or
reproduction, in connection with any method, process, apparatus, product, composition, or system, whether or not
covered by letters patent, copyright, or trademark, and nothing contained in this book shall be construed as a
defense against any alleged infringement of letters patent, copyright, or trademark, or as a defense against liability
for such infringement.
Comments, criticisms, and suggestions are invited, and should be forwarded to ASM International.
ASM International staff who worked on this project included E.J. Kubel, Jr., Technical Editor; Bonnie Sanders,
Manager, Production; Nancy Hrivnak, Copy Editor; Kathy Dragolich, Production Supervisor; and Scott Henry,
Assistant Director, Reference Publications.
Library of Congress Cataloging-in-Publication Data
Practical guide to image analysis.
p. cm.
Includes bibliographical references and index.
1. Metallography. 2. Image analysis. I. ASM International.
TN690.P6448 2000
669.95dc21
00-059347
ISBN 0-87170-688-1
SAN: 204-7586
ASM International
Materials Park, OH 44073-0002
www.asminternational.org
Printed in the United States of America
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
CHAPTER 1: Image Analysis: Historical Perspective . . . . . . . . . 1
Don Laferty, Objective Imaging Ltd.
Video Microscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Beginnings: 1960s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Growth: 1970s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Maturity: 1980s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Desktop Imaging: 1990s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Truly Digital: 2000 and Beyond . . . . . . . . . . . . . . . . . . . . . . . 12
CHAPTER 2: Introduction to Stereological Principles . . . . . . . 15
George F. Vander Voort, Buehler Ltd.
Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Specimen Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Volume Fraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Number per Unit Area . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Intersections and Interceptions per Unit Length . . . . . . . . . . 23
Grain-Structure Measurements . . . . . . . . . . . . . . . . . . . . . 23
Inclusion Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Measurement Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Image Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Preface
Man has been using objects made from metals for more than 3000 years: objects ranging from domestic utensils, artwork, and jewelry, to weapons made of brass alloys, silver, and gold. The alloys used for these objects were developed from empirical knowledge accumulated over centuries of trial and error. Prior to the late 1800s, engineers had no concept of the relationship between a material's properties and its structure. In most human endeavors, empirical observations are used to create things, and the scientific principles that govern how the materials behave lag far behind. Also, once the scientific concepts are understood, practicing metallurgists often have been slow to understand how to apply the theory to advance their industries.
The origins of the art of metallography date back to Sorby's work in 1863. While his metallographic work was ignored for 20 years, the procedures he developed for revealing the microstructures of metals directly led to some of today's well-established relationships between structure and properties. During the past 140 years, metallography has transformed from an art into a science. Concurrent with the advances in specimen preparation techniques has been the development of methodologies to better evaluate microstructural features quantitatively.
This book, as its title suggests, is intended to serve as a practical
guide for applying image analysis procedures to evaluate microstructural
features. Chapters 1 and 2 present, respectively, an historical overview of how quantitative image analysis and today's television/computer-based analysis systems evolved, and an introduction to the science of stereology. The third chapter provides details of how metallographic
specimens should be properly prepared for image analysis. Chapters 4
through 7 consider the principles of image analysis, what types of
measurements can be made, the characteristics of particle dispersions, and
methods for analysis and interpretation of the results. Chapter 8 illustrates
how macro programs are developed to perform several specific image
analysis applications. Chapter 9 illustrates the use of color metallography
for image analysis problems.
This book considers most of the aspects that are required to apply image
analysis to materials problems. The book should be useful to engineers, scientists, and technicians who need to extract quantitative information from material systems. The principles discussed can be applied to typical quality control problems and standards, as well as to problems that may be encountered in research and development projects. In many image
CHAPTER 1
Image Analysis:
Historical Perspective
Don Laferty
Objective Imaging Ltd.
QUANTITATIVE MICROSCOPY, the ability to rapidly quantify microstructural features, is the result of developments that occurred over a
period of more than 100 years, beginning in the mid-1800s. The roots of
quantitative microscopy lie in the two logical questions from scientists
after the first microscopes were invented: how large is a particular feature
and how much of a particular constituent is present?
P.P. Anosov first used a metallurgical microscope in 1841 to reveal the
structure of a Damascus knife (Ref 1). Natural curiosity most likely
spurred a further question: what are the volume quantities of each
constituent? This interest in determining how to relate observations made
using a microscope from a two-dimensional field of view to three
dimensions is known as stereology. The first quantitative stereological
relationship developed using microscopy is attributed to A. Delesse (Ref
2). From his work is derived the equivalency of area fraction (AA) and
volume fraction (VV), or AA = VV.
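In modern terms, Delesse's relation means that the volume fraction of a constituent can be estimated simply by counting the fraction of pixels it occupies in a segmented (binary) image of a random plane section. A minimal Python sketch; the 4 x 4 field below is invented for illustration:

```python
# Delesse's relation AA = VV: on a random plane section, the area
# fraction of a constituent is an unbiased estimate of its volume fraction.

def area_fraction(binary_image):
    """binary_image: list of pixel rows; 1 (or True) marks the constituent."""
    total = sum(len(row) for row in binary_image)
    occupied = sum(sum(1 for px in row if px) for row in binary_image)
    return occupied / total

# Hypothetical 4 x 4 field containing four second-phase pixels:
field = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
print(area_fraction(field))  # 0.25, i.e., an estimated volume fraction of 25%
```

In practice the segmentation step (deciding which pixels belong to the constituent) is where most of the difficulty lies; the counting itself is trivial.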
Many of the early studies of metallography (the study of the structure
of metals and alloys) are attributed to Sorby. He traced the images of
rocks onto paper using projected light. After cutting out and weighing the pieces of paper representing each phase, he estimated the volume fraction of the phases.
Lineal analysis, the relationship between lineal fraction (LL) and
volume fraction, or LL = VV, was demonstrated by Rosiwal in 1898 (Ref
3). Sauveur conducted one of the first studies to correlate chemical
composition with structure in 1896 (Ref 4). From this work, the
relationship between the carbon content of plain carbon steel and the
volume fraction of the various constituents was discovered. Later, the
relationship between volume fraction and the fraction of points of a test grid falling in the constituent of interest was demonstrated by Thompson (Ref 5) and Glagolev (Ref 6).
Video Microscopy
Television, or TV, as we know it today, evolved from the early work of
Philo Taylor Farnsworth in the 1920s. There were several commercial
demonstrations in the late 1920s and early 1930s (Ref 7), but the
technology was applied to building radar systems for the military during
World War II and was not commercialized until after the war.
One early video technique used the "flying spot." The output of a
cathode ray tube was used as the source of illumination; this bright spot
was rastered (scanned) across a specimen. A detector tube was used to
analyze the output signal. Systems such as this were used to evaluate
blood cells (Ref 8) and to assess nonmetallic inclusions in steel (Ref 9).
As the technology advanced, ordinary television cameras were used to
convert the output signal of the microscope into an electronic signal and
a corresponding image on a video tube. These early systems were analog
devices.
The advancement and continually increasing sophistication of television and computer systems have allowed the development of powerful
image analysis (IA) systems, which have largely supplanted manual
measurement methods. Today, measurements and calculations that previously required many hours to perform can be made in seconds and even
microseconds. In reality, an IA system is only a simple point counter.
However, operating a point counter in conjunction with a computer allows
the use of highly sophisticated analysis algorithms to rapidly perform
many different types of measurements. While the same measurements
could be made manually, measurement time would be prohibitive.
The practice of IA has seen many changes in the nearly 40 years since
the development of the first television-based image analyzers in the
1960s. From limited hardware systems first used for quantitative metallographic characterizations to modern, highly flexible image processing
software applications, IA has found a home in an enormous range of
industrial and biomedical applications. The popularity of digital imaging
in various forms is still growing. The explosion of affordable computer
technologies during the 1990s coupled with recent trends in digital-image
acquisition devices places a renewed interest in how digital images are
created, managed, processed, and analyzed. There now is an unprecedented, growing audience involved in the digital imaging world on a
daily basis. Who would have imagined, in the early days of IA, when an imaging system cost many tens, if not hundreds, of thousands of dollars, the powerful image processing software that would be available today for a few hundred dollars at the corner computer store?
Yet, beneath all these new technological developments, underlying
common elements that are particular to the flavor of imaging referred to as "scientific image analysis" have changed little since their very beginnings. Photographers say that good pictures are not taken but instead are carefully composed and considered; IA similarly relies on intelligent decisions regarding how a given subject (the specimen) should be prepared for study and what illumination and optical configurations provide the most meaningful information.
In acquiring the image, the analyst who attends to details such as
appropriate video levels and shading correction ensures reliable and
repeatable results, which build confidence. The myriad digital image
processing methods can be powerful allies when applied to both simple
and complex imaging tasks. The real benefits of image processing,
however, only come when the practitioner has the understanding and
experience to choose the appropriate tools, and, perhaps more importantly, knows the boundaries inside which tool use can be trusted.
The goal of IA is to distill a manageable set of meaningful quantitative descriptions of the specimen (or better, a set of specimens). In practice, successful quantification depends on an understanding of the nature of these measurements so that, when the proper
parameters are selected, accuracy, precision, and repeatability, as well as
the efficiency of the whole process, are maximized.
Set against the current, renewed emphasis on digital imaging is a
history of TV-based IA that spans nearly four decades. Through various
generations of systems, techniques for specimen preparation, image
acquisition, processing, measurement, and analysis have evolved from the
first systems of the early 1960s into the advanced general purpose systems
of the 1980s, finally arriving at the very broad spectrum of imaging
options available into the 21st century. A survey of the methods used in
practice today shows that for microstructure evaluations, many of the
actual image processing and analysis techniques used now do not differ all
that much from those used decades ago.
Beginnings: 1960s
The technology advancement of the industrial age placed increasing
importance on characterizing materials microstructures. This need has
driven the development of efficient practical methods for the manual
measurement of count, volume fraction, and size information for various
microstructures. From these early attempts evolved the science of
stereology, where a mathematical framework was developed that allowed
systematic, manual measurements using various overlay grids. Marked by
the founding of the International Society for Stereology in 1961, these
efforts continued to promote the development and use of this science for
accurate and efficient manual measurements of two-dimensional and
three-dimensional structures in specimens. Despite the considerable labor savings that stereological principles provided, microstructure characterization using manual point and intercept counting is a time-consuming, tiring process. In many cases, tedious hours are spent achieving the desired levels of statistical confidence. This provided the background for a technological innovation that would alleviate part of a quantitative metallurgist's workload.
Image analysis as we know it today, in particular that associated with materials and microscopy, saw two major developments in the early 1960s: TV-based image analyzers and mathematical morphology. The
first commercial IA system in the world was the Quantimet A from Metals
Research in 1963, with the very first system off the production line being
sold to British Steel in Sheffield, UK (Ref 10). Metals Research was
established in 1957 by Cambridge University graduate Dr. Michael Cole
and was based above Percival's Coach Company, beside the Champion of the Thames pub on King Street, Cambridge, UK. The person inspiring the design of the Quantimet was Dr. Colin Fisher, who joined the company in 1962, and, for its design, Metals Research received numerous awards, including the Queen's Award to Industry on six occasions. The
QTM notation has been applied to IA systems because the Quantimet A
was referred to as a quantitative television microscope (QTM). While this
system served primarily as a densitometer, it was the beginning of the age
of automation.
These early IA systems were purely hardware-based systems. The
Quantimet B was a complete system for analyzing phase percentage of
microstructures and included a purpose-built video camera and specialized hardware to measure and display image information (Fig. 1). While
these early systems had relatively limited application mainly geared
toward phase percentage analysis and counting, they also achieved
extremely high performance.
It was necessary to continuously gather information from the live video
signal, because there was no large-scale memory to hold the image
information for a period longer than that of the video frame rate.
Typically, only a few lines of video were being stored at one time. In one
regard, these systems were very simple to use. For example, to gage the
area percentage result using the original Quantimet A, the investigator
needed to simply read the value from the continuously updated analog
meter. Compared with the tedium of manual point counting using grid
overlays, the immediate results produced by this new QTM gave a hint of
the promise of applying television technology to microstructure characterization.
The first system capable of storing a full black and white image was the
Bausch and Lomb QMS introduced in 1968 (Ref 11). Using a light pen,
the operator could measure properties of individual objects, now referred
to as feature specific properties, for the first time.
The second major foundation of IA in these early days was mathematical morphology, developed primarily by the French mathematicians J. Serra and G. Matheron at the Ecole des Mines de Paris (Ref 12). The
mathematical framework for morphological image processing was introduced by applying topology and set theory to problems in earth and
materials sciences. In mathematical morphology, the image is treated in a
numerical format as a set of valued points, and basic set transformations
such as the union and intersection are performed. This results in concepts
such as the erosion and dilation operations, which are, in one form or
another, some of the most heavily used processing operations in applied
IA even today.
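As an illustrative sketch (not from the text), the two operations can be written directly from their set definitions, here with a 3 x 3 square structuring element on a small binary image; out-of-bounds neighbors are simply skipped at the image border:

```python
def dilate(img):
    """Binary dilation, 3x3 square structuring element: a pixel is set
    if any pixel in its 3x3 neighborhood is set."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def erode(img):
    """Binary erosion: a pixel survives only if every in-bounds pixel
    of its 3x3 neighborhood is set."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

blob = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
# Erosion shrinks the 3x3 blob to its single center pixel;
# dilation grows it to cover the whole 5x5 field.
print(sum(map(sum, erode(blob))))   # 1
print(sum(map(sum, dilate(blob))))  # 25
```

Composing the two gives the classic derived operations: opening (erosion followed by dilation) removes features smaller than the structuring element, while closing (dilation followed by erosion) fills comparably small gaps.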
Growth: 1970s
By the 1970s, the field of IA was prepared for rapid growth into a wide
range of applications. The micro-Videomat system was introduced by
Carl Zeiss (Ref 13), and the Millipore MC particle-measurement system
was being marketed in America and Europe (Ref 14). The first IA system
to use mathematical morphology was the Leitz texture analysis system
(TAS), introduced in 1974. Also, a new field-specific system named the Histotrak image analyzer was introduced by the British firm Ealing-Beck
(Ref 15).
In the meantime, Metals Research had become IMANCO (for Image
Analyzing Computers), and its Quantimet 720 system offered a great deal
more flexibility than the original systems of the 1960s (Fig. 2). Still
hardware based, this second generation of systems offered many new and
useful features. The Q720 used internal digital-signal processing hardware, a built-in binary morphological image processor with selection of
structuring element and size via dials on the front panel, and advanced
feature analysis with the size and shape of individual objects measured
and reported on-screen in real time. The system was also flexible, due to
programmability implemented via a logic matrix configured using sets of
twisted pairs of wire. Other impressive innovations included a light pen
for direct editing of the image on the video monitor and automated control
of microscope stage and focus. Other systems offered in the day, such as the TAS and the pattern analysis system (PAS) (Bausch and Lomb, USA),
had many similar processing and measurement capabilities.
The performance of early hardware-based systems, using analog tube-video cameras, was very high, even by the standards of today.
Maturity: 1980s
The heyday of hardware-based IA arrived in the 1980s, while at the
same time a new paradigm of personal computer-based (PC-based)
imaging began to emerge. Increasing power and declining cost of
computers fueled both developments. In the case of systems that
continued to use dedicated image processing hardware, computers now
contained integrated microprocessors and built-in memory for flexible
image and results storage. Systems appearing in the 1980s combined
these features with vast new options for programmability, giving rise to
increasingly sophisticated applications and penetration of scientific IA
into many research and routine environments. Many of these systems still
are used today. Though generally slower than their purely hardware-based
predecessors, systems of the 1980s made up for slowness by being
significantly easier to use and more flexible.
Many systems of the early to mid-1980s provided a richer implementation of morphological image processing facilities than was possible in
their purely hard-wired predecessors. These operations were performed
primarily on binary images due to the memory and speed available at
those times. For instance, Cambridge Instruments' Q900 system provided high, nearly megapixel resolution imaging and allowed for a wide range of morphological operations such as erosion, dilation, opening, closing, a host of skeletonization processes, and a full complement of Boolean
operations (Fig. 4). The Q970 system included true-color, high-resolution
image acquisition using a motorized color filter wheel (Fig. 5), a
technique still used today in some high-end digital cameras. In these and
other systems were embodied a range of image acquisition, processing,
and measurement capabilities, which, when coupled with the flexibility offered by general-purpose microcomputers, became commonplace.
References
1. P.P. Anosov, Collected Works, Akad. Nauk SSSR, 1954
2. A. Delesse, Procede Mechanique Pour Determiner la Composition
des Roches, Ann. Mines (IV), Vol 13, 1848, p 379
3. A. Rosiwal, On Geometric Rock Analysis. A Simple Surface Measurement to Determine the Quantitative Content of the Mineral
Constituents of a Stony Aggregate, Verhandl. K.K. Geol. Reich., 1898,
p 143
4. A. Sauveur, The Microstructure of Steel and the Current Theories of
Hardening, TAIME, 1896, p 863
5. E. Thompson, Quantitative Microscopic Analysis, J. Geol., Vol 27,
1930, p 276
6. A.A. Glagolev, Mineralog. Mater., 1931, p 10
7. D.E. Fisher and J.F. Marshall, Tube: the Invention of Television,
Counterpoint, 1996
8. W.E. Tolles, Methods of Automatic Quantification of Micro-Autoradiographs, Lab. Invest., Vol 8, 1959, p 1889
9. R.A. Bloom, H. Walz, and J.G. Koenig, An Electronic ScannerComputer for Determining the Non-Metallic Inclusion Content of
Steels, JISI, 1964, p 107
CHAPTER 2
Introduction to
Stereological Principles
George F. Vander Voort
Buehler Ltd.
THE FUNDAMENTAL RELATIONSHIPS for stereology, the foundation of quantitative metallography, have been known for some time, but implementation of these concepts has been limited when performed
manually due to the tremendous effort required. Further, while humans
are quite good at pattern recognition (as in the identification of complex
structures), they are less capable of accurate, repetitive counting. Many
years ago, George Moore (Ref 1) and members of ASTM Committee E-4
on Metallography conducted a simple counting experiment, asking about 400 persons to count the number of times the letter "e" appeared in a paragraph without striking out the letters as they counted. The correct answer was obtained by only 3.8% of the group, and results were not Gaussian. Only 4.3% had higher values, while 92% had lower values, some much lower. The standard deviation was 12.28. This experiment revealed a basic problem with manual ratings: if a familiar subject (as in Moore's experiment) results in only one out of 26 persons obtaining a correct count, what level of counting accuracy can be expected with a less familiar subject, such as microstructural features?
By comparison, image analyzers are quite good at counting but not as
competent at recognizing features of interest. Fortunately, there has been
tremendous progress in the development of powerful, user-friendly image
analyzers since the 1980s.
Chart methods have been used for many years to evaluate microstructures, chiefly for conformance to specifications.
Currently, true quantitative procedures are replacing chart methods for
such purposes, and they are used increasingly in quality control and
research studies. Examples of the applications of stereological measurements were reviewed by Underwood (Ref 2).
Symbol   Units      Description                                                                  Common name
PP       ...        Point fraction (number of point elements per total number of test points)   Point count
PL       mm⁻¹       Number of point intersections per unit length of test line                  ...
LL       mm/mm      Sum of lengths of linear intercepts per unit length of test line            Lineal fraction
AA       mm²/mm²    Sum of areas of intercepted features per unit test area                     Areal fraction
SV       mm²/mm³    Surface area of features per unit test volume                               ...
VV       mm³/mm³    Volume of features per unit test volume                                     Volume fraction
NL       mm⁻¹       Number of interceptions of features per unit length of test line            Lineal density
PA       mm⁻²       Number of point elements per unit test area                                 ...
LA       mm/mm²     Length of lineal elements per unit test area                                Perimeter (total)
NA       mm⁻²       Number of features per unit test area                                       Areal density
PV       mm⁻³       Number of point elements per unit test volume                               ...
LV       mm/mm³     Length of lineal elements per unit test volume                              ...
NV       mm⁻³       Number of features per unit test volume                                     Volumetric density
Note: Fractional parameters are expressed per unit length, area, or volume. Source: Ref 3
Sampling
Sampling of the material is an important consideration, because
measurement results must be representative of the material. Ideally,
random sampling would be best, but this can rarely be performed, except
for small parts like fasteners where a specific number of fasteners can be
drawn from a production lot at random. It generally is impossible to select
specimens at random from the bulk mass of a large component such as a
forging or casting, so the part is produced with additional material added
to the part, which provides material for test specimens.
For a casting, it may be possible to trepan (machine a cylinder of
material from a section) sections at locations that will be machined
anyway later in the production process. Another approach used is to cast
a separate, small chunk of material of a specified size (called a keel
block) along with the production castings, which provides material for
test specimens. However, material from the keel block may produce
results markedly different than those obtained from the casting if there is
a large difference in size and solidification and cooling rates between
casting and keel block.
After obtaining specimens, there still is a sampling problem, particularly in wrought (hot worked) material, such as rolled, extruded, or forged
material. Microstructural measurements made on a plane parallel to the
deformation axis, for example, will often be quite different from those
taken on a plane perpendicular to the deformation axis, especially for
features such as nonmetallic inclusions. In such cases, the practice is to
compare results on similarly oriented planes. It generally is too time
consuming to measure the microstructural feature of interest on the three
primary planes in a flat product such as plate or sheet, so that the true
three-dimensional nature of the structure cannot be determined except,
perhaps, in research studies.
Specimen Preparation
In the vast majority of work, the measurement part of the task is simple,
and 90% or more of the difficulty is in preparing the specimens properly
so that the true structure can be observed. Measurement of inclusions is
done on as-polished specimens because etching brings out extraneous
details that may obscure the detection of inclusions. Measurement of
graphite in cast iron also is performed on as-polished specimens. It is
possible, however, that shrinkage cavities often present in castings may
interfere with detection of the graphite, because shrinkage cavities and
graphite have overlapping gray scales. When the specimen must be etched
to see the constituent of interest, it is best to etch the specimen so that only
the constituent of interest is revealed. Selective etchants are best.
Preparation of specimens today is easier than ever before with the
introduction of automated sample-preparation equipment; specimens so
prepared have better flatness than manually prepared specimens. This is
especially important if the edge must be examined and measurements
performed. The preparation sequence must establish the true structure,
free of any artifacts. Automated equipment can produce a much greater
number of properly prepared specimens per day than the best manual
operator. A more detailed description on specimen preparation is in
Chapter 3.
Volume Fraction
It is well known that the amount of a second phase or constituent in a
two-phase alloy can have a significant influence on its properties and
behavior. Consequently, determination of the amount of the second phase
is an important measurement. The amount of a second phase is defined as
the volume of the second phase per unit volume, or volume fraction.
There is no simple experimental technique to measure the volume of a
second phase or constituent per unit volume of specimen. The closest
approach might be to use an acid digestion method, where a cube of metal
is weighed and then partially dissolved in an appropriate electrolyte that
dissolves the matrix but not the phase of interest. The residue is cleaned,
dried, and weighed. The remains of the cube (after cleaning and drying)
are weighed, and weight loss is calculated. The weight of the undissolved
second phase is divided by the weight loss to get an estimate of the
volume fraction of the second phase, with the densities of the matrix and
second phase known. This is a tedious method, not applicable to all
situations and subject to interferences. Three experimental approaches for
estimating the volume fraction have been developed using microscopy
methods: the area fraction, the lineal fraction, and the point fraction
methods.
The volume fraction was first estimated by areal (relating to area)
analysis by A. Delesse, a French geologist, in 1848. He showed that the
area fraction was an unbiased estimate of the volume fraction. Several
procedures have been used on real structures. One is to trace the second
phase or constituent with a planimeter and determine the area of each
particle. These areas are summed and divided by the field area to obtain
the area fraction, AA.
Another approach is to weigh a photograph and then cut out the
second-phase particles and weigh them. Then the two weights are used to
calculate the area fraction, as the weight fraction of the micrograph should
be equivalent to the area fraction. Both of these techniques are only
possible with a coarse second phase. A third approach is the so-called
occupied squares method. A clear plastic grid containing 500 small
square boxes is superimposed over a micrograph or live image. The
operator then counts the number of grid boxes that are completely filled,
34 filled, 12 filled, and 14 filled by the second phase or constituent. These
data are used to calculate the area covered by the second phase, which
then is divided by the image area to obtain the area fraction. All three
methods give a precise measurement of the area fraction of one field, but an enormous amount of effort must be expended per field. However, it is well
recognized that the field-to-field variability in volume fraction has a larger
influence on the precision of the volume fraction estimate than the error
in rating a specific field, regardless of the procedure used. So, it is not necessary to measure any one field with great precision. The point fraction, PP, is given by:

PP = P/PT   (Eq 1)
where P is the number of grid points lying inside the feature of interest,
plus one-half the number of grid points lying on particle boundaries, and
P_T is the total number of grid points. Studies show that the point
fraction is equivalent to the lineal fraction, L_L, and the area fraction, A_A,
and all three are unbiased estimates of the volume fraction, V_V, of the
second-phase particles:

P_P = L_L = A_A = V_V (Eq 2)
Point counting is much faster than lineal or areal analysis and is the
preferred manual method. Point counting is always performed on the
minor phase, where VV < 0.5. The amount of the major (matrix) phase can
be determined by the difference.
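The point-count calculation can be sketched in a few lines of Python; the grid counts below are hypothetical, chosen only to illustrate Eq 1:

```python
def point_fraction(points_inside, points_on_boundary, total_points):
    """Point fraction P_P (Eq 1): grid points falling inside the phase
    count fully; points falling on a particle boundary count one-half."""
    hits = points_inside + 0.5 * points_on_boundary
    return hits / total_points

# Hypothetical single-field count with a 100-point grid:
# 12 points inside the second phase, 4 on particle boundaries.
pp = point_fraction(12, 4, 100)  # P_P = 0.14, an unbiased estimate of V_V
```

In practice, the point fraction would be averaged over many fields, as discussed below.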
The fields measured should be selected at locations over the entire
polished surface and not confined to a small portion of the specimen
surface. The field measurements should be averaged, and the standard
deviation can be used to assess the relative accuracy of the measurement,
as described in ASTM E 562.
Fig. 1 (second-phase particles)

The average cross-sectional area of the particles, Ā, can be estimated from
the volume fraction and the number of particles per unit test area, N_A:

Ā = V_V/N_A (Eq 3)
Grain-Structure Measurements
For single-phase grain structures, it is usually easier to count the
grain-boundary intersections with a line of known length, especially for
circular test lines. This is the basis of the Heyn intercept grain size
procedure described in ASTM E 112. For most work, a circular test grid
composed of three concentric circles with a total line length of 500 mm
is preferred. Grain size is defined by the mean lineal intercept length, l:
l = 1/P_L = 1/N_L (Eq 4)

where P_L is the number of grain-boundary intersections per unit length of
test line and N_L is the number of grains intercepted per unit length of
test line (for a single-phase grain structure, P_L = N_L).
The grain-boundary surface area per unit volume, S_V, is directly
related to P_L:

S_V = 2P_L (Eq 5)

and the grain-boundary length per unit area, L_A, is:

L_A = (π/2)P_L (Eq 6)

For deformed structures, the degree of grain elongation can be assessed
from intersection counts made with test lines perpendicular (⊥) and
parallel (∥) to the deformation axis. The degree of orientation, Ω, is:

Ω = (P_L⊥ - P_L∥)/(P_L⊥ + 0.571P_L∥) (Eq 7)
Measured values for a cold-rolled steel at several reductions in thickness
are shown below:

Condition        P_L⊥(a)    P_L∥(a)    P_L⊥/P_L∥    Ω, %
As-received      114.06     98.86       1.15          8.9
12% reduction    126.04     75.97       1.66         29.6
30% reduction    167.71     60.6        2.77         52.9
70% reduction    349.4      34.58      10.1          85.3
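Equation 7 can be checked against the cold-rolled steel data above; a minimal sketch:

```python
def degree_of_orientation(pl_perp, pl_par):
    """Degree of orientation, in percent, per Eq 7:
    Omega = (PL_perp - PL_par) / (PL_perp + 0.571 * PL_par) * 100"""
    return 100.0 * (pl_perp - pl_par) / (pl_perp + 0.571 * pl_par)

omega_12 = degree_of_orientation(126.04, 75.97)  # ~29.6% (12% reduction)
omega_30 = degree_of_orientation(167.71, 60.6)   # ~52.9% (30% reduction)
```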
The mean random spacing, σ_r, of lamellar structures, such as pearlite,
is the reciprocal of N_L, and the mean true spacing, σ_t, is half of the
mean random spacing:

σ_t = σ_r/2 (Eq 8)

σ_r = 1/N_L (Eq 9)

The mean free path, λ, the mean edge-to-edge distance between
second-phase particles along the test line, is:

λ = (1 - V_V)/N_L (Eq 10)
For the structure illustrated in Fig. 1, the volume fraction of the particles
was estimated as 0.147. Therefore, λ is 66.1 μm, or 0.066 mm.
The mean lineal intercept distance, l, for these particles is determined by:

l = V_V/N_L (Eq 11)
For this example, l is 11.4 μm, or 0.0114 mm. This value is smaller than
the caliper diameter of the particles because the test lines intercept the
particles at random, not only at the maximum dimension. The calculated
mean lineal intercept length for a circle with a 15 μm diameter is 11.78
μm. Again, stereological field measurements can be used to determine a
characteristic dimension of individual features without performing individual particle measurements.
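The Fig. 1 example can be reproduced numerically. N_L is not quoted directly in the text, so it is back-calculated here from the reported intercept length; a sketch:

```python
def mean_free_path(vv, nl):
    # Eq 10: lambda = (1 - V_V) / N_L
    return (1.0 - vv) / nl

def mean_intercept(vv, nl):
    # Eq 11: l = V_V / N_L
    return vv / nl

vv = 0.147                      # volume fraction from the Fig. 1 example
nl = vv / 11.4                  # interceptions per micrometer, back-calculated
lam = mean_free_path(vv, nl)    # ~66.1 micrometers
l_bar = mean_intercept(vv, nl)  # 11.4 micrometers
```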
Grain Size. Perhaps the most common quantitative microstructural
measurement is that of the grain size of metals, alloys, and ceramic
materials. The ASTM grain size number, G, is defined by:

n = 2^(G-1) (Eq 12)

where n is the number of grains per square inch at 100×. The metric grain
size number, G_M, is defined by:

m = 8(2^G_M) (Eq 13)

where m is the number of grains per square millimeter at 1×. The metric grain
size number, G_M, is slightly lower than the ASTM grain
size number, G, for the same structure:

G = G_M + 0.046 (Eq 14)

This very small difference usually can be ignored (unless the value is near
a specification limit).
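Assuming the standard definitions behind Eq 12 and 13 (n grains per square inch at 100×, m grains per square millimeter at 1×), the 0.046 offset of Eq 14 can be derived numerically; a sketch:

```python
import math

def grains_per_in2_at_100x(G):
    # Eq 12: n = 2**(G - 1)
    return 2.0 ** (G - 1.0)

def grains_per_mm2_at_1x(G_M):
    # Eq 13 (metric): m = 8 * 2**G_M
    return 8.0 * 2.0 ** G_M

def eq14_offset():
    # Equate the two definitions: n per in^2 at 100x corresponds to
    # n * 100**2 / 645.16 grains per mm^2 at 1x (1 in^2 = 645.16 mm^2),
    # then solve 2**(G-1) * 100**2/645.16 = 8 * 2**G_M for G - G_M.
    return 4.0 - math.log2(100.0 ** 2 / 645.16)

offset = eq14_offset()  # ~0.046
```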
Planimetric Method. The oldest procedure for measuring the grain size
of metals is the planimetric method introduced by Zay Jeffries in 1916
based upon earlier work by Albert Sauveur. A circle of known size
(generally 79.8 mm diameter, or 5000 mm2 area) is drawn on a
micrograph or used as a template on a projection screen. The number of
grains completely within the circle, n1, and the number of grains
intersecting the circle, n2, are counted. For accurate counts, the grains
must be marked off as they are counted, which makes this method slow.
The number of grains per square millimeter at 1×, N_A, is determined by:

N_A = f(n1 + n2/2) (Eq 15)

where f is the magnification squared divided by 5000 (the circle area). The
average grain area, A, in square millimeters, is:
A = 1/N_A (Eq 16)

The mean grain diameter, d, is the square root of the average grain area:

d = 1/(N_A)^(1/2) (Eq 17)
The ASTM grain size, G, can be found by using the tables in ASTM E 112
or by the following equation:
G = 3.322(log N_A) - 2.95 (Eq 18)
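Equations 15 and 18 can be combined into a short calculation. The grain counts below are hypothetical; the last line reproduces the N_A value reported for the 200× image in Fig. 2:

```python
import math

def jeffries_na(n_inside, n_intercepted, magnification, circle_area_mm2=5000.0):
    # Eq 15: N_A = f * (n1 + n2/2), with f = M**2 / (test circle area)
    f = magnification ** 2 / circle_area_mm2
    return f * (n_inside + n_intercepted / 2.0)

def astm_g_from_na(na_per_mm2):
    # Eq 18: G = 3.322 * log10(N_A) - 2.95
    return 3.322 * math.log10(na_per_mm2) - 2.95

na = jeffries_na(50, 20, 100)  # hypothetical counts -> 120 grains/mm^2
g = astm_g_from_na(2407.3)     # ~8.28, the Fig. 2 value at 200x
```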
the test line, n2, however, is slightly different. In this method, grains will
intercept the four corners of the square or rectangle. Statistically, each
corner grain is shared among four such contiguous test patterns, so the
four corner grains are not counted individually but together are weighted
as 1. Count all of the other grains intercepting the test square or
rectangle (of known size). Equation 15 is modified as follows:

N_A = f(n1 + n2/2 + 1) (Eq 19)
where n1 is still the number of grains completely within the test figure
(square or rectangular grid), n2 is the number of grains intercepting the
sides of the square or rectangle (but not the four corners), the 1 accounts
for the corner grain interceptions, and f is the magnification squared
divided by the area of the square or rectangle grid.
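A sketch of the modified count of Eq 19, with hypothetical counts for a 100× image and a grid of 5000 mm² area:

```python
def alternate_na(n_inside, n_edge, magnification, grid_area_mm2):
    # Eq 19: N_A = f * (n1 + n2/2 + 1); the +1 is the combined weight of
    # the four corner grains (each shared by four contiguous grids).
    f = magnification ** 2 / grid_area_mm2
    return f * (n_inside + n_edge / 2.0 + 1.0)

na = alternate_na(50, 20, 100, 5000.0)  # 2 * 61 = 122 grains/mm^2 at 1x
```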
Fig. 2
The ferrite grain size of a carbon sheet steel (shown at 500×, 2% nital
etch) was measured by the Jeffries planimetric method with images at 200×,
500×, and 1000× (79.8 mm diameter test circle). This produced N_A values
(using Eq 15) of 2407.3, 2674.2, and 3299 grains per mm² (ASTM G values of
8.28, 8.43, and 8.73, respectively) for the 200×, 500×, and 1000× images,
respectively. The planimetric method was also performed on these three
images using the full rectangular image field and the alternate grain
counting method. This produced N_A values of 2400.4, 2506.6, and 2420.2
grains per mm² (ASTM G values of 8.28, 8.34, and 8.29, respectively). This
experiment shows that the standard planimetric method is influenced by the
number of grains counted (n1 was 263, 39, and 10 for the 200×, 500×, and
1000× images, respectively). In practice, more than one field should be
evaluated due to the potential for field-to-field variability.
G = -6.6439(log l) - 3.288 (Eq 20)

where l is the mean lineal intercept length in millimeters.
Fig. 3
The ferrite grain size of the specimen analyzed using the Jeffries
method in Fig. 2 (shown at 200× magnification, 2% nital etch) was
measured by the intercept method with a single test circle (79.8 mm
diameter) at 200×, 500×, and 1000× magnifications. This yielded mean lineal
intercept lengths of 17.95, 17.56, and 17.45 μm (for the 200×, 500×, and
1000× images, respectively), corresponding to ASTM G values of 8.2, 8.37,
and 8.39, respectively. These are in reasonably good agreement. In
practice, more than one field should be evaluated due to the
field-to-field variability of specimens.
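Assuming the standard ASTM E 112 relationship between grain size number and mean lineal intercept (in millimeters), the Fig. 3 values can be checked numerically:

```python
import math

def astm_g_from_intercept(l_mm):
    # E 112 form of Eq 20: G = -6.6439 * log10(mean intercept, mm) - 3.288
    return -6.6439 * math.log10(l_mm) - 3.288

g_500x = astm_g_from_intercept(0.01756)   # 17.56 um -> ~8.37
g_1000x = astm_g_from_intercept(0.01745)  # 17.45 um -> ~8.39
```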
For a two-phase structure, the mean lineal intercept length of the grains
of a given phase, l, can be determined from its volume fraction, V_V, and
the number of its grains, N, intercepted by the test line:

l = (V_V)(L/M)/N (Eq 21)

where L is the line length and M is the magnification. The ASTM grain
size number can be determined from the tables in ASTM E 112 or by use
of Eq 20. The method is illustrated in Fig. 4.
Fig. 4
Inclusion Content
Assessment of inclusion type and content commonly is performed on
high-quality steels. Production evaluations use comparison chart methods
such as those described in ASTM E 45, SAE J422a, ISO 4967, and the
German standard SEP 1570 (DIN 50602). In these chart methods, the
inclusion pictures are defined by type and graded by severity (amount).
Either qualitative procedures (worst rating of each type observed) or
quantitative procedures (all fields in a given area rated) are used. Only the
Measurement Statistics
It is necessary to make stereological measurements on a number of
fields and average the results. Measurements on a single field may not be
representative of bulk material conditions, because few (if any) materials
are sufficiently homogeneous. Calculation of the standard deviation of
field measurements provides a good indication of measurement variability. Calculation of the standard deviation can be done quite simply with an
inexpensive pocket calculator.
A further refinement of statistical analysis is calculation of the 95% CL
based on the standard deviation, s, of the field measurements. The 95%
CL is calculated from the expression:
95% CL = ts/N^(1/2) (Eq 22)

where t is the Student's t value (approximately 2 for 95% confidence when
many fields are measured) and N is the number of fields measured. The
relative accuracy, %RA, of the measurement is obtained by dividing the
95% CL by the mean, X̄, and expressing the result as a percentage:

%RA = (95% CL/X̄)100 (Eq 23)

The number of fields needed to obtain a desired relative accuracy can be
estimated from:

N = [200s/(%RA X̄)]^2 (Eq 24)
Image Analysis
The measurements described in this brief review, and other measurements not discussed, can be made by use of automatic image analyzers.
These devices rely primarily on the gray level of the image on the
television monitor to detect the desired features. In some instances,
complex image editing can be used to aid separation. Some structures,
however, cannot be separated completely, which requires the use of
semiautomatic digital tracing devices to improve measurement speed.
Conclusions
Many of the simple stereological counting measurements and simple
relationships based on these parameters have been reviewed. More
complex measurements are discussed in Chapters 5 to 8. The measurements described are easy to learn and use. Their application enables the
metallographer to discuss microstructures in a more quantitative manner
and reveals relationships between the structure and properties of the
material.
CHAPTER
Specimen Preparation
for Image Analysis
George F. Vander Voort
Buehler Ltd.
Sampling
The specimen or specimens being prepared must be representative of
the material to be examined. Random sampling, as advocated by
statisticians, rarely can be performed by metallographers. An exception is
fastener testing where a production lot can be randomly sampled.
However, a large forging or casting, for example, cannot be sampled
randomly because the component might be rendered useless commercially. Instead, systematically selected test locations are widely used,
based on sampling convenience. Many material specifications dictate the
sampling procedure. In failure studies, specimens usually are removed to
study the origin of failure, examine highly stressed areas or secondary
cracks, and so forth. This, of course, is not random sampling. It is rare to
Sectioning
Bulk samples for sectioning may be removed from larger pieces or parts
using methods such as core drilling, band and hack sawing, flame cutting,
and so forth. When these techniques must be used, the microstructure will
be heavily altered in the area of the cut. It is necessary to resection the
piece in the laboratory using an abrasive-wheel cutoff system to establish
the location of the desired plane of polish. In the case of relatively brittle
materials, sectioning may be accomplished by fracturing the specimen at
the desired location.
Abrasive-Wheel Cutting. By far the most widely used sectioning
devices in metallographic laboratories are abrasive cut-off machines (Fig.
1). All abrasive-wheel sectioning should be done wet; direct an ample
flow of water containing a water-soluble oil additive for corrosion
protection into the cut. Wet cutting produces a smooth surface finish and,
most importantly, guards against excessive surface damage caused by
overheating. Abrasive wheels should be selected according to the recommendations of the manufacturer. In general, the bond strength of the
material that holds the abrasive together in the wheel must be decreased
with increasing hardness of the workpiece to be cut, so the bond material
can break down and release old dulled abrasive and introduce new sharp
abrasive to the cut. If the bond strength is too high, burning results, which
severely damages the underlying microstructure. The use of proper bond
strength eliminates the production of burnt surfaces. Bonding material
may be a polymeric resin, a rubber-based compound, or a mixture of the
two. In general, rubber offers the lowest-bond-strength wheels used to cut
the most difficult materials. Such cuts are characterized by an odor that
can become rather strong. In such cases, there should be provisions to
properly exhaust and ventilate the saw area. Specimens must be fixtured
securely during cutting, and cutting pressure should be applied carefully
to prevent wheel breakage. Some materials, such as commercial purity
(CP) titanium (Fig. 2), are more prone to sectioning damage than many
other materials.
Precision Saws. Precision saws (Fig. 3) commonly are used in
metallographic preparation and may be used to section materials intended
for IA. As the name implies, this type of saw is designed to make very
precise cuts. They are smaller in size than the typical laboratory abrasive
cut-off saw and use much smaller blades, typically from 76 to 203 mm (3 to
8 in.) in diameter. These blades are most commonly of the nonconsumable
type, made of copper-base alloys and having diamond or cubic boron
nitride abrasive bonded to the periphery of the blade. Consumable blades
incorporate alumina or silicon carbide abrasives with a rubber bond and
only work on a machine that operates at speeds higher than 1500 rpm.
These blades are much thinner than abrasive cutting wheels. The load
applied during cutting is much less than that used for abrasive cutting,
and, therefore, much less heat is generated during cutting, and depth of
damage is very shallow.
While small section-size pieces that would normally be sectioned with
an abrasive cutter can be cut with a precision saw, cutting time is
appreciably greater, but the depth of damage is much less. These saws are
widely used to section sintered carbides, ceramic materials, thermally
sprayed coatings, printed circuit boards, and electronic components.
Fig. 2
Fig. 3
Specimen Mounting
The primary purpose of mounting metallographic specimens is to
provide convenience in handling specimens of difficult shapes or sizes
during the subsequent steps of metallographic preparation and examination. A secondary purpose is to protect and preserve outer edges or surface
defects during metallographic preparation. Care must be exercised when
selecting the mounting method so that it is in no way injurious to the
microstructure of the specimen. Most likely sources of injurious effects
are mechanical deformation and heat.
Clamp Mounting. Clamps offer a quick, convenient method to mount
metallographic cross sections in the form of thin sheets, where several
specimens can be clamped in sandwich form. Edge retention is excellent
when done properly, and there is no problem with seepage of fluids from
crevices between specimens. The outer clamp edges should be beveled to
minimize damage to polishing cloths. Improper use of clamps leaves gaps
between specimens, allowing fluids and abrasives to become entrapped
and seep out, obscuring edges. Ways to minimize this problem include
proper tightening of clamps, using plastic spacers between specimens,
and coating specimen surfaces with epoxy before tightening. A disadvantage of clamps is the difficulty encountered in placing specimen information on the clamp for identification purposes.
Compression Mounting. The most common mounting method uses
pressure and heat to encapsulate the specimen within a thermosetting or
thermoplastic mounting material. Common thermosetting resins include
phenolic (Bakelite), diallyl phthalate, and epoxy, while methyl methacrylate is the most commonly used thermoplastic mounting resin. Both
thermosetting and thermoplastic materials require heat and pressure
during the molding cycle. After curing, mounts made of thermosetting
materials may be ejected from the mold at the maximum molding
temperature, while mounts made of thermoplastic resins must be cooled
to ambient under pressure. However, cooling thermosetting resins under
pressure to at least a temperature of 55 °C (130 °F) before ejection
reduces shrinkage gap formation. A thermosetting resin mount should
never be water cooled after hot ejection from the molding temperature.
This causes the metal to pull away from the resin, producing shrinkage
gaps that promote poor edge retention (see Fig. 4). Thermosetting epoxy
resins provide the best edge retention of these resins and are less affected
by hot etchants than phenolic resins. Mounting presses vary from simple
laboratory jacks with a heater and mold assembly to fully automated
devices, as shown in Fig. 5. Compression mounting resins have the
advantage that a fair amount of information can be scribed on the
backside with a vibratory pencil-engraving device for specimen identification.
Fig. 4
Poor edge retention due to shrinkage gap between metal specimen and the resin mount caused by water cooling
a hot-ejected thermosetting resin mount. Specimen is carburized AISI 8620 alloy steel, etched using 2% nital.
Fig. 5
Fig. 6
Visibility problem caused by plating the specimen surface with a
compatible metal (electroless nickel in this case) to help edge retention.
It is difficult to discern the free edge of (a) a plated nitrided AISI 1215
steel specimen, due to poor image contrast between the nickel plate and
the nitrided layer. By comparison, (b) the unplated specimen reveals good
image contrast between specimen and thermosetting epoxy resin mount, which
allows clear distinction of the nitrided layer. Etchant is 2% nital.
Fig. 7
Etching stains emanating from gaps between the specimen and resin
mount. Specimen is M2 high-speed steel etched with Vilella's reagent.
Fig. 8

Fig. 9
Examples of perfect edge retention of two different materials in Epomet
(Buehler Ltd., Lake Bluff, IL) thermosetting epoxy mounts. (a) Ion-nitrided
H13 tool steel specimen etched with 2% nital. (b) Coated carbide tool
specimen etched with Murakami's reagent
Fig. 10
• Properly mounted specimens yield better edge retention than unmounted
specimens; rounding is difficult, if not impossible, to prevent at a free
edge. Hot compression mounts yield better edge preservation than castable
resins.
• Electrolytic or electroless plating of the surface of interest provides
excellent edge retention. If the compression mount is cooled too quickly
after polymerization, the plating may be pulled away from the specimen,
leaving a gap. When this happens, the plating is ineffective for edge
retention.
• Thermoplastic compression mounting materials are less effective than
thermosetting resins. The best thermosetting resin is the epoxy-based
resin containing a hard filler material.
• Never hot eject a thermosetting resin after polymerization and cool it
quickly to ambient (e.g., by water cooling), because a gap will form
between specimen and mount due to the differences in thermal contraction
rates. Automated mounting presses cool the mounted specimen to near
ambient under pressure, greatly minimizing gap formation due to shrinkage.
• Automated grinding/polishing equipment produces flatter specimens than
manual, or hand, preparation.
• In automated grinder/polisher use, central-pressure mode provides better
flatness than individual-pressure mode (both modes defined later in this
chapter).
• Orient the position of the smaller-diameter specimen holder so its
periphery slightly overlaps the periphery of the larger-diameter platen as
it rotates.
• Use pressure-sensitive-adhesive-backed silicon carbide (SiC) grinding
paper (if SiC is used) and pressure-sensitive-adhesive-backed polishing
cloths rather than stretched cloths.
• Use hard, napless surfaces for rough polishing until the final polishing
step(s). Use a low-nap to medium-nap cloth for the final step, and keep it
brief.
• Rigid grinding disks produce excellent flatness and edge retention and
should be used when possible.
Grinding
Grinding should commence with the finest grit size that will establish an
initially flat surface and remove the effects of sectioning within a few
minutes. An abrasive grit size of 180 or 240 is coarse enough to use on
specimen surfaces sectioned using an abrasive cut-off wheel. Rough
surfaces, such as those produced using a hacksaw or bandsaw, usually
require abrasive grit sizes in the range of 60 to 180. The abrasive used
for each succeeding grinding operation should be one or two grit sizes
Fig. 11
Fig. 12
Polishing
Polishing is the final step (or steps) used to produce a deformation-free
surface, which is flat, scratch-free, and mirrorlike in appearance. Such a
surface is necessary for subsequent qualitative and quantitative metallographic interpretation. The polishing technique used should not introduce
extraneous structures such as disturbed metal (Fig. 13), pitting (Fig. 14),
dragging out of graphite and inclusions, comet tailing (Fig. 15), and
staining (Fig. 16). Relief (height differences between different constituents, or between holes and constituents) (Fig. 17 and 18) must be
minimized.
Polishing usually consists of rough, intermediate, and final stages.
Rough polishing traditionally is done using 6 or 3 μm diamond abrasive
charged onto napless or low-nap cloths. For hard materials such as
through-hardened steels, ceramics, and cemented carbides, an additional
rough polishing step may be required. For such materials, initial rough
polishing may be followed by polishing with 1 μm diamond on a napless,
low-nap, or medium-nap cloth. A compatible lubricant should be used
sparingly to prevent overheating and/or surface deformation. Intermediate
polishing should be performed thoroughly to keep final polishing to a
minimum. Final polishing usually consists of a single step but could
involve two steps, such as polishing using 0.3 μm and 0.05 μm alumina,
or a final polishing step using alumina or colloidal silica followed by
vibratory polishing, using either of these two abrasives.
(a)
Fig. 13
(b)
Examples of residual sectioning/grinding damage in polished specimens. (a)
Waspaloy etched with Fry's reagent. (b) Commercially pure titanium etched
with Kroll's reagent. Differential interference-contrast (DIC)
illumination
Fig. 14
Fig. 15
Fig. 16
Fig. 17
Examples of relief (in this case, height differences between different
constituents) at hypereutectic silicon particles in Al-19.85% Si aluminum
alloy. (a) Excessive relief. (b) Minimum relief. Etchant is 0.5% HF
(hydrofluoric acid).
Fig. 18
Relief (in this case, height differences between constituents and holes)
in microstructure of a braze. (a) Excessive relief. (b) Low relief.
Etchant is glyceregia.
Fig. 19
Vibratory polisher for final polishing. Its use produces image-analysis
and publication-quality specimens.
Specimens should be cleaned thoroughly between steps, because coarse
abrasive can carry over to a finer abrasive stage and produce problems.
Automatic Polishing. Mechanical polishing can be automated to a
high degree using a wide variety of devices ranging from relatively
simple systems (Fig. 20) to rather sophisticated, minicomputer-controlled
or microprocessor-controlled devices (Fig. 21). Units also vary in
capacity from a single specimen to a half-dozen or more at a time. These
systems can be used for all grinding and polishing steps and enable an
Fig. 20
Fig. 21
Table 1 Traditional method used to prepare most metal and alloy metallographic
specimens

Polishing surface   Abrasive (type/grit size)   Load, N (lb)   Speed, rpm   Direction(a)    Time, min
Waterproof paper    SiC, 120 grit               27 (6)         240–300      Complementary   1–2
Waterproof paper    SiC, 240 grit               27 (6)         240–300      Complementary   1–2
Waterproof paper    SiC, 320 grit               27 (6)         240–300      Complementary   1–2
Waterproof paper    SiC, 400 grit               27 (6)         240–300      Complementary   1–2
Waterproof paper    SiC, 600 grit               27 (6)         240–300      Complementary   1–2
Canvas              Diamond, 6 μm               27 (6)         120–150      Complementary   …
…                   Diamond, 1 μm               27 (6)         120–150      Complementary   …
Microcloth pad      Alumina, 0.3 μm             27 (6)         120–150      Complementary   …
Microcloth pad      Alumina, 0.05 μm            27 (6)         120–150      Complementary   …
Historically, most laboratories have prepared specimens by grinding with
water-cooled SiC papers through a series of grits, then rough polishing
with one or more sizes of diamond abrasive, followed by fine polishing
with one or more alumina suspensions of different particle size. This
procedure will be called the traditional method and is described in Table 1.
This procedure is used for manual preparation as well as machine
preparation, but the force applied to a specimen in manual preparation
cannot be controlled as accurately and as consistently as with a machine.
Complementary motion means that the specimen holder is
rotated in the same direction as the platen and does not apply to manual
preparation. Some machines can be set so that the specimen holder rotates
in the direction opposite to that of the platen, called contra. This
provides a more aggressive action but was not part of the traditional
approach. This action is similar to the manual polishing procedure of
running the specimen in a circular path around the wheel in a direction
opposite to that of the platen rotation. The steps of the traditional method
are not rigid, as other polishing cloths may be substituted and one or more
of the polishing steps might be omitted. Times and pressures can be
varied, as well, to suit the needs of the work or the material being
prepared. This is the art side of metallography.
Contemporary Methods. During the 1990s, new concepts and new
preparation materials have been introduced that have enabled metallographers to shorten the process while producing better, more consistent
results. Much of the effort focused on reducing or eliminating the use of
silicon carbide paper in the five grinding steps. In all cases, an initial
grinding step must be used, but there is a wide range of materials that can
be substituted for SiC paper. If a central-force automated device is used,
the first step must remove the sectioning damage on each specimen and
bring all of the specimens in the holder to a common plane perpendicular
to the axis of the specimen-holder drive system. This first step is often
called planar grinding, and SiC paper can be used, although more than
one sheet may be needed. Alternatives to SiC paper include the following:
• Alumina paper
• Alumina grinding stone
• Metal-bonded or resin-bonded diamond discs
• Wire mesh discs with metal-bonded diamond
• Stainless steel mesh cloths (diamond is applied during use)
• Rigid grinding discs (RGD) (diamond is applied during use)
• Lapping platens (diamond is applied and becomes embedded in the surface
during use)
Table 2 Contemporary four-step preparation method

Polishing surface   Abrasive/grit size              Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs    SiC(a)                          27 (6)         240–300/comp           Until plane
Napless cloth       Diamond/9 μm                    27 (6)         120–150/comp           …
Napless cloth       Diamond/3 μm                    27 (6)         120–150/comp           …
…                   Colloidal silica or sol-gel
                    alumina suspension/0.05 μm      27 (6)         120–150/contra         …

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel
is rotating. (a) Water cooled
Table 3 Four-step contemporary practice applicable to nearly all steels

Polishing surface     Abrasive/grit size              Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs      SiC(a)                          27 (6)         240–300/comp           Until plane
Rigid grinding disc   Diamond suspension/9 μm         27 (6)         120–150/comp           5
Napless cloth         Diamond/3 μm                    27 (6)         120–150/comp           …
…                     Colloidal silica or sol-gel
                      alumina suspension/0.05 μm      27 (6)         120–150/contra         …

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel
is rotating. (a) Water cooled
Table 4 Four-step practice used to prepare sintered carbide metallographic specimens

Polishing surface   Abrasive/grit size              Load, N (lb)   Speed, rpm/direction   Time, min
…                   Diamond suspension/30 μm        22 (5)         240–300/contra         …
…                   Diamond suspension/9 μm         27 (6)         240–300/contra         …
Napless cloth       Diamond/3 μm                    27 (6)         120–150/contra         …
Napless cloth       Colloidal silica or sol-gel
                    alumina suspension/0.05 μm      27 (6)         120–150/contra         …
Table 5 Four-step practice used to prepare aluminum-alloy metallographic specimens

Polishing surface   Abrasive/grit size              Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs    SiC(a)/240 or 320               22 (5)         240–300/comp           Until plane
Napless cloth       Diamond/9 μm                    40 (9)         120–150/comp           …
Napless cloth       Diamond/3 μm                    36 (8)         120–150/comp           …
…                   Colloidal silica or sol-gel
                    alumina suspension/0.05 μm      31 (7)         120–150/contra         …

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel
is rotating. (a) Water cooled
Table 6 Three-step practice used to prepare titanium and Ti-alloy metallographic specimens

Polishing surface   Abrasive/grit size              Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs    SiC(a)/320                      27 (6)         240–300/comp           Until plane
Napless cloth       Diamond/9 μm                    27 (6)         120–150/contra         10
Medium-nap cloth    Colloidal silica plus attack
                    polish(b)/0.05 μm               27 (6)         120–150/contra         10

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel
is rotating. (a) Water cooled. (b) Attack polish is five parts colloidal silica plus one part hydrogen peroxide, 30% concentration. Use
with caution.
Rigid grinding discs generally are restricted to materials of moderate
hardness or greater (175 HV, for example), although some softer materials
can be prepared using them. This disc can also be used for the planar
grinding step. An
example of such a practice applicable to nearly all steels (results are
marginal for solution annealed austenitic stainless steels) is given in Table
3. The first step of planar grinding could also be performed using the rigid
grinding disc and 30 μm diamond. Rigid grinding discs contain no
abrasive; they must be charged during use. Suspensions are the easiest
way to do this. Polycrystalline diamond suspensions are favored over
monocrystalline synthetic diamond suspensions for most metals and
alloys due to their higher cutting rate.
As examples of tailoring these types of procedures to other metals,
alloys, and materials, the following three methods are shown in Tables 4
to 6 for sintered carbides (these methods also work for ceramics),
aluminum, and titanium alloys. Because sintered carbides and ceramics
are cut with a precision saw that produces very little deformation and an
excellent surface finish, a coarser grit diamond abrasive is not needed for
planar grinding (Table 4). Pressure-sensitive-adhesive-backed silk cloths
are excellent for sintered carbides. Nylon is also quite popular.
A four-step practice for aluminum alloys is presented in Table 5. While
MgO was the preferred final polishing abrasive for aluminum and its
alloys, it is a difficult abrasive to use and is not available in very fine sizes,
and colloidal silica has replaced magnesia. This procedure retains all of
the intermetallic precipitates observed in aluminum and its alloys and
minimizes relief. Synthetic napless cloths may also be used for the final
step with colloidal silica, and they will introduce less relief than a low-nap
or medium-nap cloth but may not remove fine polishing scratches as well.
For very pure aluminum alloys, this procedure could be followed by
vibratory polishing to improve the surface finish, as these are quite
difficult to prepare totally free of fine polishing scratches.
The contemporary practice for titanium and its alloys (Table 6)
demonstrates the use of an attack-polishing agent added to the final
polishing abrasive to obtain the best results, especially for commercially
pure titanium, a rather difficult metal to prepare free of deformation for
color etching, heat tinting, and/or polarized light examination of the grain
structure. Attack-polishing solutions added to the abrasive slurry or
suspension must be treated with great care to avoid burns. (Caution: use
good, safe laboratory practices and wear protective gloves.) This
three-step practice could be modified to four steps by adding a 3 μm or 1
μm diamond step.
There are a number of attack polishing agents for use on titanium. The
simplest is a mixture of 10 mL of 30% concentration hydrogen peroxide
(caution: avoid skin contact) and 50 mL colloidal silica. Some metallographers add either a small amount of Kroll's reagent to this mixture or a
few milliliters of nitric and hydrofluoric acids; these latter additions may
cause the suspension to gel. In general, these acid additions do little to
Etching
Metallographic etching encompasses all processes used to reveal
particular structural characteristics of a metal that are not evident in the
as-polished condition. Examination of a properly polished specimen
before etching may reveal structural aspects such as porosity, cracks,
graphite, intermetallic precipitates, nitrides, and nonmetallic inclusions.
Certain constituents are best measured using image analysis without
etching, because etching reveals unwanted detail, making detection
difficult or impossible. Classic examples of analyzing unetched specimens
are the measurement of inclusions in steel and graphite in cast iron,
although many intermetallic precipitates and nitrides also can be measured effectively in the as-polished condition. Grain size also can be
revealed adequately in the as-polished condition using polarized light in
certain nonferrous alloys having noncubic crystallographic structures,
such as beryllium, hafnium, magnesium, titanium, uranium, and zirconium. Figure 22 shows the microstructure of beryllium viewed in
cross-polarized light, which produces grain coloration rather than a flat
etched appearance where only the grain boundaries are dark. This image
could be used in color image analysis but would not be useful for image
analysis using a black and white system.
Etching Procedures. Microscopical examination usually is limited to
a maximum magnification of 1000×, the approximate useful limit of the
light microscope, unless oil-immersion objectives are used. Many image
analysis systems use relay lenses that yield higher screen magnifications,
which may make detection of fine structures easier. However, resolution
is not raised above the general limit of about 0.3 μm for the light
microscope. Microscopical examination of a properly prepared specimen
clearly reveals structural characteristics such as grain size, segregation,
and the shape, size, and distribution of the phases and inclusions that are
present. The microstructure also reveals prior mechanical and thermal
treatments that the metal has received. Microstructural features are
Fig. 22
Fig. 23 Examples of conditions that obscure the true microstructure. (a) Improper drying of the specimen. (b) Water stains emanating from shrinkage gaps between a 6061-T6 aluminum alloy specimen and the phenolic resin mount. Both specimens viewed using differential interference-contrast (DIC) illumination.
Fig. 24 Examples of different behavior of etchants on the same low-carbon steel sheet. (a) 2% nital etch reveals ferrite grain boundaries and cementite. (b) 4% picral etch reveals cementite aggregates and no ferrite grain boundaries. (c) Tint etching with Beraha's solution colors all grains according to their crystallographic orientation. All specimens viewed using bright-field illumination. For color version of Fig. 24(c), see endsheets of book.
Fig. 25 Examples of selective etching of the ferrite-cementite-iron phosphide ternary eutectic in gray cast iron. (a) Picral/nital etch reveals the eutectic surrounded by pearlite. (b) Boiling alkaline sodium picrate etch colors only the cementite phase. (c) Boiling Murakami's reagent etch darkly colors the iron phosphide and lightly colors cementite after prolonged etching. All specimens viewed using bright-field illumination.
iron phosphide darkly and lightly colors the cementite after prolonged
etching. The ferrite could be colored preferentially using Klemm I
reagent.
Selective etching has been commonly applied to stainless steels to
detect, identify, and measure δ-ferrite, ferrite in dual-phase grades, and
σ-phase. Figure 26 shows examples of the use of a number of popular
etchants to reveal the microstructure of 7Mo Plus (Carpenter Technology
Corporation, Reading, PA) (UNS S32950), a dual-phase stainless steel, in
the hot-rolled and annealed condition. Figure 26(a) shows a well-delineated structure when the specimen was immersed in ethanolic 15%
HCl for 30 min. All of the phase boundaries are clearly revealed, but there
is no discrimination between ferrite and austenite, and twin boundaries in
the austenite are not revealed. Glyceregia, a popular etchant for stainless
steels, is not suitable for this grade because it appears to be rather
orientation-sensitive (Fig. 26b). Many electrolytic etchants are used to
etch stainless steels, but only a few have selective characteristics. Of the
four shown in Fig. 26 (c to f), only aqueous 60% nitric acid produces any
gray level discrimination, which is weak, between the phases. However,
all nicely reveal the phase boundaries. Two electrolytic reagents are
commonly used to color ferrite in dual-phase grades and δ-ferrite in
martensitic grades (Fig. 26 g, h). Of these, aqueous 20% sodium
hydroxide (Fig. 26g) usually gives more uniform coloring of the ferrite.
Murakami's and Groesbeck's reagents also are used for this purpose. Tint
etchants developed by Beraha nicely color the ferrite phase, as illustrated
in Fig. 26(i).
Selective etching techniques have been more thoroughly developed for
use on iron-base alloys than other alloy systems but are not limited to
iron-base alloys. For example, selective etching of β-phase in α-β copper
alloys is a popular subject. Figure 27 illustrates coloring of β-phase in
naval brass (UNS C46400) using Klemm I reagent. Selective etching has
long been used to identify intermetallic phases in aluminum alloys; the
method was used for many years before the development of energy-dispersive spectroscopy. It still is useful for image analysis work. Figure
28 shows selective coloration of θ-phase, CuAl2, in the Al-33% Cu
eutectic alloy. Figure 29 illustrates the structure of a simple sintered
tungsten carbide (WC-Co) cutting tool. In the as-polished condition (Fig.
29a), the cobalt binder is faintly visible against the more grayish tungsten
carbide grains, and a few particles of graphite are visible. Light relief
polishing brings out the outlines of the cobalt binder phase, but this image
is not particularly useful for image analysis (Fig. 29b). Etching in a
solution of hydrochloric acid saturated with ferric chloride (Fig. 29c)
attacks the cobalt and provides good uniform contrast for measurement of
the cobalt binder phase. A subsequent etch using Murakamis reagent at
room temperature reveals the edges of the tungsten carbide grains, which
is useful to evaluate grain size (Fig. 29d).
Fig. 26 Examples of selective etching to identify different phases in the hot-rolled, annealed 7Mo Plus duplex stainless steel microstructure. Chemical etchants used were (a) immersion in 15% HCl in ethanol/30 min and (b) glyceregia/2 min. Electrolytic etchants used were (c) 60% HNO3/1 V direct current (dc)/20 s, platinum cathode; (d) 10% oxalic acid/6 V dc/75 s; (e) 10% CrO3/6 V dc/30 s; and (f) 2% H2SO4/5 V dc/30 s. Selective electrolytic etchants used were (g) 20% NaOH/Pt cathode/4 V dc/10 s and (h) 10 N KOH/Pt/3 V dc/4 s. (i) Tint etch. 200×. See text for description of microstructures. For color version of Fig. 26(i), see endsheets of book.
Fig. 27 Selective etching of naval brass with Klemm I reagent reveals the β-phase (dark constituent) in the α-β copper alloy. (a) Transverse section. (b) Longitudinal section
Fig. 28
Fig. 29 Selective etching of sintered tungsten carbide-cobalt (WC-Co) cutting tool material. (a) Some graphite particles are visible in the as-polished condition. (b) Light relief polishing outlines the cobalt binder phase. (c) Hydrochloric acid saturated with ferric chloride solution etch darkens the cobalt phase. (d) Subsequent Murakami's reagent etch reveals edges of WC grains. Viewed using bright-field illumination
Fig. 30
Fig. 31
Conclusions
Preparation of metallographic specimens is based on scientific principles that are easily understood. Sectioning creates damage that must be
removed by the grinding and polishing steps if the true structure is to be
examined. Each sectioning process produces a certain amount of damage,
thermal and/or mechanical. Consequently, select a procedure that produces the least possible damage. Grinding also causes damage, with the
depth of damage decreasing with decreasing abrasive size. Materials
respond differently to the same size abrasive, so it is not possible to
generalize on metal removal depth. Removal rates also decrease with
decreasing abrasive size. With experience, good, reproducible procedures
can be established by each laboratory for the materials being prepared.
Automation in specimen preparation offers much more than a reduction in
labor. Specimens prepared using automated devices consistently have
much better flatness, edge retention, relief control, and freedom from
artifacts such as scratches, pull out, smearing, and comet tailing.
Some image analysis work is performed on as-polished specimens, but
many applications require some etching technique to reveal the microstructural constituent of interest. Selective etching techniques are best.
These may involve immersion tint etchants, electrolytic etching, potentiostatic etching, or techniques such as heat tinting or vapor deposition. In
each case, the goal is to reveal only the constituent of interest with strong
References
1. J.R. Pickens and J. Gurland, Metallographic Characterization of Fracture Surface Profiles on Sectioning Planes, Proc. Fourth International Congress for Stereology, National Bureau of Standards Special Publication 431, U.S. Government Printing Office, Washington, D.C., 1976, p 269–272
2. A.M. Gokhale, W.J. Drury, and S. Mishra, Recent Developments in Quantitative Fractography, Fractography of Modern Engineering Materials: Composites and Metals, Second Volume, STP 1203, ASTM, 1993, p 3–22
3. A.M. Gokhale, Unbiased Estimation of Curve Length in 3-D Using Vertical Slices, J. Microsc., Vol 159 (Part 2), August 1990, p 133–141
4. A.M. Gokhale and W.J. Drury, Efficient Vertical Sections: The Trisector, Metall. Mater. Trans. A, Vol 25, 1994, p 919–928
5. B.R. Morris, A.M. Gokhale, and G.F. Vander Voort, Grain Size Estimation in Anisotropic Materials, Metall. Mater. Trans. A, Vol 29, Jan 1998, p 237–244
6. Metallography and Microstructures, Vol 9, Metals Handbook, 9th ed., American Society for Metals, 1985
7. G.F. Vander Voort, Metallography: Principles and Practice, ASM International, 1999
8. L.E. Samuels, Metallographic Polishing by Mechanical Methods, 3rd ed., American Society for Metals, 1982
9. G. Petzow and V. Carle, Metallographic Etching, 2nd ed., ASM International, 1999
10. G.F. Vander Voort, Grain Size Measurement, Practical Applications of Quantitative Metallography, STP 839, ASTM, 1984, p 85–131
11. G.F. Vander Voort, Wetting Agents in Metallography, Mater. Charact., Vol 35 (No. 2), Sept 1995, p 135–137
12. A. Skidmore and L. Dillinger, Etching Techniques for Quantimet Evaluation, Microstruct., Vol 2, Aug/Sept 1971, p 23–24
13. G.F. Vander Voort, Etching Techniques for Image Analysis, Microstruct. Sci., Vol 9, Elsevier North-Holland, NY, 1981, p 135–154
14. G.F. Vander Voort, Phase Identification by Selective Etching, Appl. Metallography, Van Nostrand Reinhold Co., NY, 1986, p 1–19
15. E.E. Stansbury, Potentiostatic Etching, Appl. Metallography, Van Nostrand Reinhold Co., NY, 1986, p 21–39
16. E. Beraha and B. Shpigler, Color Metallography, American Society for Metals, 1977
17. G.F. Vander Voort, Tint Etching, Metal Progress, Vol 127, March
CHAPTER
4
Principles of
Image Analysis
James C. Grande
General Electric Research and Development Center
Image Considerations
An image in its simplest form is a three-dimensional array of numbers
representing the spatial coordinates (x and y, or horizontal and vertical)
and intensity of a visualized object (Fig. 2). The number array is the
fundamental form by which mathematical calculations are performed to
enhance an image or to make quantitative measurements of features contained in an image. In the digital world, the image is composed of small,
usually square (to avoid directional bias) picture elements called pixels.
The gray level, or intensity, of each pixel relates to the number of light photons striking the detector within a camera. Images typically range in size
from arrays of 256 × 256 pixels to those as large as 4096 × 4096 pixels
using specialized imaging devices. A myriad of cameras with wide-ranging resolutions and sensitivities is available today.

Fig. 1 Image analysis process steps. Each step has a decision point before the next step can be achieved.
Fig. 2

In the mid to late 1980s, 512 × 512 pixel arrays were the standard. Older
systems typically had 64 (2⁶) gray levels, whereas at the time of this
publication, all commercial systems offer at least 256 (2⁸) gray levels,
although there are systems having 4096 (2¹²) and 65,536 (2¹⁶) gray
levels. These are often referred to as 6-bit, 8-bit, 12-bit, and 16-bit cameras,
respectively.
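The gray-level counts quoted above are simply powers of two of the camera bit depth; a quick check (the helper name is illustrative, not from the original):

```python
def gray_levels(bits: int) -> int:
    """Number of distinct gray levels a camera with the given bit depth encodes."""
    return 2 ** bits

# 6-, 8-, 12-, and 16-bit cameras as cited in the text
print([gray_levels(b) for b in (6, 8, 12, 16)])  # [64, 256, 4096, 65536]
```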
The process of converting an analog signal to a digital one has some
limitations that must be considered during image quantification. For
example, pixels that straddle the edge of a feature of interest can affect the
accuracy and precision of each measurement because an image is
composed of square pixels having discrete intensity levels. Whether a
pixel resides inside or outside a feature edge can be quite arbitrary and
dependent on positioning of the feature within the pixel array. In addition,
the pixels along the feature edge effectively contain an intermediate
intensity value that results from averaging adjacent pixels. Such considerations suggest a desire to minimize pixel size and increase the number
of gray levels in a system, particularly if features of interest are very
small relative to the entire image, at the most reasonable equipment cost.
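The edge-pixel effect described above can be demonstrated numerically: if a circle is rasterized by counting the pixel centers that fall inside it, the relative error of the measured area shrinks as the pixel size shrinks. A self-contained sketch, not from the original (function and variable names are invented for illustration):

```python
import math

def rasterized_area(radius: float, pixel: float) -> float:
    """Approximate a circle's area by counting pixels whose centers fall inside
    it, mimicking how a digitizer assigns edge pixels to a feature."""
    n = int(math.ceil(2 * radius / pixel)) + 2   # grid comfortably covering the circle
    half = n * pixel / 2.0
    count = 0
    for i in range(n):
        for j in range(n):
            x = -half + (i + 0.5) * pixel        # pixel-center coordinates
            y = -half + (j + 0.5) * pixel
            if x * x + y * y <= radius * radius:
                count += 1
    return count * pixel * pixel

true_area = math.pi * 10.0 ** 2
coarse = abs(rasterized_area(10.0, 2.0) - true_area) / true_area   # large pixels
fine = abs(rasterized_area(10.0, 0.25) - true_area) / true_area    # small pixels
print(coarse, fine)  # the finer grid gives the smaller relative area error
```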
Resolution versus Magnification. Two of the more confusing aspects
of a digital image are the concepts of resolution and magnification.
Resolution can be defined as the smallest feature that can be resolved. For
example, the theoretical limit at which it is no longer possible to
distinguish two distinct adjacent lines using light as the imaging method
is at a separation distance of about 0.3 μm. Magnification, on the other
hand, is the ratio of an object dimension in an image to the actual size of
the object. Determining that ratio sometimes can be problematic, especially when the actual dimension is not known.
The displayed dimension of pixels is determined by the true magnification of the imaging setup. However, the displayed pixel dimension can
vary considerably with display media, such as on a monitor or hard-copy
(paper) print out. This is because a typical screen resolution is 72 dots per
inch (dpi), and unless the digitized image pixel resolution is exactly the
same, the displayed image might be smaller or larger than the observed
size due to the scaling of the visualizing software. For example, if an
image is digitized into a computer having a 1024 × 1024 pixel array, the
dpi could be virtually any number, depending on the imaging program
used. If that same 1024 × 1024 image is converted to 150 dpi and viewed
on a standard monitor, it would appear to be twice as large as expected
due to the 72 dpi monitor resolution limit.
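The dpi arithmetic in this example is easy to reproduce; a minimal sketch (the helper name is illustrative):

```python
def displayed_inches(pixels: int, dpi: float) -> float:
    """Physical size at which a row of pixels renders on a device of the given dpi."""
    return pixels / dpi

# A 1024-pixel-wide image tagged at 150 dpi renders about 6.8 in. wide on
# paper, but a 72 dpi monitor draws the same 1024 pixels roughly twice as large.
on_paper = displayed_inches(1024, 150)   # ~6.83 in.
on_screen = displayed_inches(1024, 72)   # ~14.2 in.
print(round(on_screen / on_paper, 2))    # ~2.08, the "twice as large" in the text
```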
The necessary printer resolution for a given image depends on the
number of gray levels desired, the resolution of the image, and the
specific print engine used. Typically, printers require a 4 × 4 dot array for
each pixel if 16 shades of gray are needed. An improvement in output dpi
by a factor of 1.5 to 2 is possible with many printers by optimizing the
raster, which is a scanning pattern of parallel lines that form the display
number of pixels (Fig. 4). If the user is doing more than just determining
whether or not a feature exists, the relative accuracy of a system is the
limiting factor in making any physical property measurements or correlating a microstructure.
When small features exist within an array of larger features, increasing
the magnification to improve resolving power forces the user to systematically account for edge effects and significantly increases the need for a
larger number of fields to cover the same area that a lower magnification
can cover. Again, the tradeoff has to be balanced with the accuracy
needed, the system cost, and the speed desired for the application. If a
high level of shape characterization is needed, a greater number of pixels
may be needed to resolve subtle shape variations.
Fig. 3 Small features magnified over 25 times showing the differences in the size and number density of pixels within features when comparing a 760 × 560 pixel camera and a 1024 × 1024 pixel camera
Fig. 4
Image Acquisition
Image acquisition devices include light microscopes, electron microscopes (e.g., scanning electron, transmission electron, and Auger), laser
Users generally turn to the use of dc power supplies, which isolate power
from house current to minimize subtle voltage irregularities. Also, some
systems contain feedback loops that continuously monitor the amount of
light emanating from the light source and adjust the voltage to compensate for intensity fluctuations. Another way of achieving consistent
intensities is to create a sample that can be used as a standard when setting
up the system. This can be done by measuring either the actual intensity
or feature size of a specified area on the sample.
Image Processing
Under ideal conditions, a digitized image can be directly binarized
(converted to black and white) and measured to obtain desired features.
However, insufficient contrast, artifacts, and/or distortions very often
prevent straightforward feature analysis. Image processing can be used in
this situation to compensate for the plethora of image deficiencies,
enabling fast and accurate analysis of features of interest.
Gray-level image processing often is used to enhance features in an
image either for visualization purposes or for subsequent quantification.
The rapid increase of algorithms over the years offers many ways to
enhance images, and many of these algorithms can be used in real time
with the advent of low-cost/high-performance computers.
Shading Correction. Image defects that are caused by uneven illumination or artifacts in the imaging path must be taken into account during
image processing. Shading correction is used when a large portion of an
image is darker or lighter than the rest of the image due to, for example,
bulb misalignment or by the use of poor optics in the system. The relative
differences between features of interest and the background are usually
the same, but features in one area of the image have a different gray-level
range than the same type of feature in another portion of the image. The
main methods of shading correction use a background reference image,
either actual or artificial, and polynomial fitting of nearest-neighbor
pixels.
A featureless reference image requires the acquisition of an image using
the same lighting conditions but without the features of interest. The
reference image is then subtracted or divided (depending on light
response) from the shaded image to level the background. If a reference
image cannot be obtained, it is sometimes possible to create a pseudoreference image by using rank-order processing (which is discussed later)
to diminish the features and blend them into the background (Fig. 5).
Polynomial fitting also can be used to create a pseudo-background image,
but it is difficult to generate if the features are neither distinct nor
somewhat evenly distributed.
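The subtraction variant of background-reference correction can be sketched on a toy gray-level array (pure Python; the values and helper name are illustrative, not from the original):

```python
def correct_shading(image, reference):
    """Subtract a featureless background reference image, then shift the result
    so gray values stay non-negative."""
    diff = [[p - r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(image, reference)]
    lowest = min(min(row) for row in diff)
    return [[v - lowest for v in row] for row in diff]

# Background brightens from left to right; the same dark feature (40 gray
# levels below ambient) sits at different raw levels in different columns.
reference = [[100, 120, 140, 160],
             [100, 120, 140, 160]]
image = [[100, 80, 140, 120],
         [100, 120, 100, 160]]
flat = correct_shading(image, reference)
print(flat)  # all three feature pixels now share the same corrected level
```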
Fig. 5 Rank-order processing used to create a pseudoreference image. (a) Image without any features in the light path showing dust particles and shading of dark regions to light regions going from the upper left to the lower right. (b) Same image after shading correction. (c) Image of particles without shading correction. (d) Same image after shading correction showing uniform illumination across the entire image
Fig. 6 Reflected bright-field image of an oxide coating before and after use of a gamma curve transformation that translates pixels with lower intensities to higher intensities while keeping the original lighter pixels near the same levels
Fig. 7
O Sharpening an image
O Eliminating noise
O Smoothing edges
O Finding edges
O Accentuating subtle features
Fig. 8 Reflected-light image of an aluminum-silicon alloy before and after gray-level histogram equalization, which significantly improves contrast of the subtle smaller silicon particles by uniformly distributing intensities
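A compact sketch of the histogram equalization named in the caption, assuming 8-bit gray levels (the classic cumulative-histogram remapping; illustrative code, not from the original):

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization: remap each gray value through the
    normalized cumulative histogram of the image."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)   # first nonzero cumulative count
    n = len(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A low-contrast image crowded into gray levels 100-103 spreads across 0-255
print(equalize([100, 100, 101, 102, 103, 103, 103, 103]))
```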
Fig. 9
Fig. 10 Examples of neighborhood kernel processing using various processes. (a) Original reflected-light image of a titanium alloy. Image using (b) gradient filter, (c) median filter, (d) Sobel operator, (e) top-hat processing, (f) gray-level opening
Fig. 11 Defect shown with different image enhancements. (a) High-resolution image from a transmission electron microscope of a silicon carbide defect in silicon showing the alignment of atoms. (b) Power spectrum after application of a fast Fourier transform (FFT) showing dark peaks that result from the higher-frequency periodic silicon structure. (c) Defect after masking the periodic peaks and performing an inverse FFT
Feature Discrimination
Thresholding. As previously described, an image that has 256 gray
values needs to be processed in such a way as to allow quantification by
reducing the available gray values in an image to only the features of
interest. The process in which 256 gray values are reduced to 2 gray
values (black and white, or 0 and 1) is called thresholding. It is
accomplished by selecting the gray-level range of the features of interest.
Pixels within the selected gray-level range are assigned as foreground, or
detected features, and everything else as background, or undetected
features. In other terms, thresholding simply converts the image to a
series of 0s and 1s, which represent undetected and detected features,
respectively. Whether white features represent foreground or vice versa
varies with image analysis systems, but it does not affect the analysis in
any way and usually is a matter of the programmer's preference.
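A minimal sketch of the 0s-and-1s conversion just described (pure Python; the gray-level range and values are illustrative):

```python
def threshold(image, lo, hi):
    """Reduce a gray-level image to 0s and 1s: pixels inside [lo, hi] become
    detected features (1); everything else becomes background (0)."""
    return [[1 if lo <= p <= hi else 0 for p in row] for row in image]

image = [[12, 200, 210],
         [205, 30, 198]]
# Select the bright features in the 190-255 gray-level range
print(threshold(image, 190, 255))  # [[0, 1, 1], [1, 0, 1]]
```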
The segmentation process usually yields three types of images depending on the system: a black and white image, a bit-plane image, and a
feature-boundary representation (Fig. 12). The difference between the
methods is analogous to a drawing program versus a painting program. A
drawing program creates images using lines and/or polygons to represent
features and uses much less space. It also can quickly redraw, scale, and
change an image comprising multiple features. By comparison, a painting
program processes images one pixel at a time and allows the user to
change the color of individual pixels because each image comprises
various pixel arrangements.
The replicated black and white image is more memory intensive
because, generally, it creates another image of the same size and
gray-level depth after processing and thresholding, and requires the same
amount of computer storage as the original image. A bit-plane image is a
binary image, usually having a color that represents the features of
interest. It often is easier to track binary image processing steps during
image processing development using the bit-plane method. Feature-boundary representation is more efficient when determining feature
perimeter and shape. There is no inherent advantage to any methodology
because the final measurements are similar and the range of processing
algorithms and possible feature measurements remain competitive.
Segmentation. Basically, there are three ways that a user indicates to
an image analysis system the appropriate threshold for segmentation
using gray level:
O Enter the gray-level values that represent the desired range.
O Select both width (gray-level range) and location (gray-level values) by
moving a slider along a gray-level spectrum bar (Fig. 13). This is
known as the interactive method. Interactive selection usually affects
Fig. 12
There are more issues to consider when thresholding color images for
features of interest. Most systems use red, green, and blue (RGB)
channels to establish a color for each pixel in an image. It is difficult to
determine the appropriate combination of red, green, and blue signals to
distinguish features. Some systems allow the user to point at a series of
points in a color image and automatically calculate the RGB values,
which are used to threshold the entire image. A better methodology than
RGB color space for many applications is to view a color image in hue,
intensity, and saturation (HIS) space. The advantage of this method is that
color information (hue and saturation) is separated from brightness
(intensity). Hue essentially is the color a user observes, while the
saturation is the relative strength of the color. For example, translating
dark green to an HIS perspective would use dark as the level of
saturation (generally ranges as a value between 0 and 100%) and green as
the hue observed.

Fig. 13
Fig. 14 Thresholding gray levels in an image by selecting the gray-level peaks that are characteristic of the features of interest

While saturation describes the relative strength of color,
intensity is associated with the brightness of the color. Intensity is
analogous to thresholding of gray values in black and white space. Hue,
intensity, and saturation space also is described as hue, lightness, and
saturation (HLS) space, where L quantifies the dark-light aspect of
colored light (see Chapter 9, Color Image Processing).
Nonuniform Segmentation. Selecting the threshold range of gray
levels to segment foreground features sometimes results in overdetecting
some features and underdetecting others. This is due not only to varying
brightness across an image but often also to the gradual change of
gray levels while scanning across a feature. Delineation enhancement is
a useful gray-level enhancement tool in this situation (Fig. 15). This
algorithm processes the pixels that surround features by transforming
their gradual change in gray level to a much steeper curve. In this way, as
features initially fall within the selected gray-level range, the apparent
size of the feature will not change much as a wider band of gray levels is
selected to segment all features.
There are other gray-level image processing tools that can be used to
delineate edges prior to segmentation and to improve contrast in certain
regions of an image, and their applicability to a specific application can
be determined by experimenting with them.
Watershed Segmentation. Watershed transformations are iterative
processes performed on images that have space-filling features, such as
grains. The enhancement usually starts with the basic eroded point or the
last point that exists in a feature during successive erosions, often referred
to as the ultimate eroded point. Erosion/dilation is the removal and/or
addition of pixels to the boundary of features based on neighborhood
relationships. The basic eroded point is dilated until the edge of the
dilating feature touches another dilating feature, leaving a line of
separation (watershed line) between touching features.

Fig. 15 Delineation filter enhances feature edges by sharpening the transition of gray values considerably, providing more leeway when thresholding. (a) Magnified original gray-level image of particles showing gradual transition of gray levels along the feature edges. (b) The same image after using a delineation filter
Another much faster approach is to create a Euclidean distance map
(EDM), which assigns successively brighter gray levels to each dilation
iteration in a binary image (Ref 2). The advantage of this approach is that
the periphery of each feature grows until impeded by the growth front of
another feature. Although watershed segmentation is a powerful tool, it is
fraught with application subtleties when applied to a wide range of
images. The reader is encouraged to refer to Ref 2 and 3 to gain a better
understanding of the proper use and optimization of this algorithm and for
a detailed discussion on the use of watershed segmentation in different
applications.
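In miniature, an EDM simply assigns each foreground pixel its distance to the nearest background pixel; the brute-force sketch below is illustrative only (real systems use fast two-pass transforms, per Ref 2, and the helper names are invented):

```python
import math

def edm(binary):
    """Brute-force Euclidean distance map: each foreground (1) pixel receives
    the distance to the nearest background (0) pixel. Small grids only."""
    rows, cols = len(binary), len(binary[0])
    background = [(i, j) for i in range(rows) for j in range(cols)
                  if binary[i][j] == 0]
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if binary[i][j]:
                out[i][j] = min(math.hypot(i - bi, j - bj)
                                for bi, bj in background)
    return out

blob = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
d = edm(blob)
print(d[2][2])  # the center pixel is farthest from the background
```

Watershed splitting then treats the peaks of this brightness map as markers whose fronts grow until they meet.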
Texture Segmentation. Many images contain texture, such as lamellar
structures, and features of widely varying size, which may or may not be
the features of interest. There are several gray-level algorithms that are
particularly well suited to images containing texture because of the
inherent frequency or spatial relationships between structures. These
operators usually transform gradually varying features (low frequency) or
highly varying features (high frequency) into an image with significantly
less texture.
Algorithms such as Laplacian, Variance, Roberts, Hurst, and Frei and
Chen operators often are used either alone or in combination with other
processing algorithms to delineate structures based on differing textures.
Methodology to characterize banding and orientation microstructures of
metals and alloys is covered in ASTM E 1268 (Ref 4).
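As an illustration of how a Variance-type operator separates smooth from textured regions, a simplified sliding-window version (hypothetical helper; border pixels are simply left at zero here):

```python
def variance_filter(image, radius=1):
    """Replace each interior pixel with the variance of its (2r+1) x (2r+1)
    neighborhood; high output marks busy (textured) regions, low output
    marks smooth ones."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(radius, rows - radius):
        for j in range(radius, cols - radius):
            window = [image[i + di][j + dj]
                      for di in range(-radius, radius + 1)
                      for dj in range(-radius, radius + 1)]
            mean = sum(window) / len(window)
            out[i][j] = sum((w - mean) ** 2 for w in window) / len(window)
    return out

# A smooth region (all 100s) next to a lamellar-like alternating region
image = [[100, 100, 100, 0, 200, 0],
         [100, 100, 100, 200, 0, 200],
         [100, 100, 100, 0, 200, 0]]
v = variance_filter(image)
print(v[1][1], v[1][4])  # 0.0 in the smooth region, large in the textured one
```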
Pattern-matching algorithms are powerful processing tools used to
discriminate features of interest in an image. Usually, they require prior
knowledge of the general shape of the features contained in the image.
For instance, if there are cylindrical fibers orientated in various ways
within a two-dimensional section of a composite, a set of boundaries can
be generated that correspond to the angles at which a cylinder might occur
in three-dimensional space. The resulting boundaries are matched to the
actual fibers that exist in the section, and the resulting angles are
calculated based on the matched patterns (Fig. 16). Generally, patternmatching algorithms are used when required measurements cannot be
directly made or calculated from the shape of a binary feature of interest.
Fig. 16

O AND
O OR
O Exclusive OR (XOR)
O NOT
These basic four often are combined in various ways to obtain a desired
result, as illustrated in Fig. 17.
A simple way to represent Boolean logic is by using a truth table, which
shows the criteria that must be fulfilled to be included in the output image.
When comparing two images, the AND Boolean operation requires that
the corresponding pixels from both images be ON (1 = ON, 0 = OFF).
Such a truth table would look like this:
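The table itself appears in a figure in the original; its content, and the pixelwise behavior of the two-input combinations, can be reproduced in a few lines (illustrative sketch, names invented):

```python
# Pixelwise Boolean operations on binary images (1 = ON, 0 = OFF)
def AND(x, y): return x & y
def OR(x, y): return x | y
def XOR(x, y): return x ^ y

def combine(a, b, op):
    """Apply a two-input Boolean operation pixel by pixel to two binary images."""
    return [[op(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# AND truth table: the output pixel is ON only where both input pixels are ON
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", AND(x, y))

a = [[1, 1, 0, 0]]
b = [[1, 0, 1, 0]]
print(combine(a, b, AND))  # [[1, 0, 0, 0]]
print(combine(a, b, XOR))  # [[0, 1, 1, 0]]
```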
Fig. 17
compared between images (Fig. 18). The resultant image contains the
entire feature instead of just the parts of a feature that are affected by the
Boolean comparison. Feature-based logic uses artificial features, such as
geometric shapes, and real features, such as grain boundaries, to ascertain
information about features of interest.
There is a plethora of uses for Boolean operators on binary images, alone and in combination with gray-scale images. Examples include coating
thickness measurements, stereological measurements, contiguity of
phases, and location detection of features.
Morphological Binary Processing. Beyond combining images in
unique ways to achieve a useful result, there also are algorithms that alter
individual pixels of features within binary images. There are hundreds of
specialized algorithms that might help particular applications and merit
further experimentation (Ref 2, 3). Several of the most popular algorithms
are mentioned below.
Hole filling is a common tool that removes internal holes within
features. For example, one technique completely fills enclosed regions of
features (Fig. 19a, b) using feature labeling. The image is inverted, and labeling identifies those features of the inverted image that do not touch the image edge; these are the holes, and they are combined with the original image using the Boolean OR operator to produce the original features with their holes filled in. There is no limit on how large or tortuous a shape may be; the only requirement for hole filling is that the hole be completely contained within a feature.
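A minimal sketch of this idea, assuming numpy is available: background connected to the image edge is flood-filled (equivalent to labeling the features of the inverted image that touch the edge); any background pixel never reached is a hole and is ORed back into the original.

```python
import numpy as np
from collections import deque

def fill_holes(binary):
    """Fill enclosed holes: flood-fill the OFF background from the image
    border; background pixels never reached are holes, which are then
    combined with the features by a Boolean OR."""
    h, w = binary.shape
    reached = np.zeros((h, w), dtype=bool)
    queue = deque()
    # seed the flood fill with every OFF pixel on the image edge
    for r in range(h):
        for c in range(w):
            if (r in (0, h - 1) or c in (0, w - 1)) and not binary[r, c]:
                reached[r, c] = True
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not binary[rr, cc] and not reached[rr, cc]:
                reached[rr, cc] = True
                queue.append((rr, cc))
    return binary | ~reached   # OR the holes back into the original

ring = np.zeros((5, 5), dtype=bool)
ring[1:4, 1:4] = True    # a square ring of feature pixels...
ring[2, 2] = False       # ...enclosing a one-pixel hole
filled = fill_holes(ring)
```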
A variation of this is morphological-based hole filling. In this technique,
the holes are treated as features in the inverted image and processed in the
desired way before inverting the image back. For example, if only holes
of a certain size are to be filled, the image is simply inverted, features
below the desired size are eliminated, and then the image is inverted back (Fig. 19a, c, d). It also is possible to fill holes based on other shape criteria.
Fig. 18
Erosion and Dilation. Common operations that use neighborhood
relationships between pixels include erosion and dilation. These operations simply remove or add pixels to the periphery (both externally and
internally, if it exists) of a feature based on the shape and location of
neighborhood pixels. Erosion often is used to remove extraneous pixels,
which may result when overdetection during thresholding occurs, because
some noise has the same gray-level range as the features of interest. When
used in combination with dilation (referred to as opening), it is possible
to separate touching particles. Dilation often is used to connect features
by first dilating the features followed by erosion to return the features to
their approximate original size and shape (referred to as closing).
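Erosion, dilation, opening, and closing can be sketched with plain numpy shifts over a 3 × 3 square neighborhood (an assumption; commercial systems offer other structuring elements):

```python
import numpy as np

def _neighborhood(img, combine):
    """Apply combine() across all 3x3-shifted copies of a zero-padded image."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = None
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            v = p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
            out = v.copy() if out is None else combine(out, v)
    return out

def erode(img):
    """A pixel stays ON only if its entire 3x3 neighborhood is ON."""
    return _neighborhood(img.astype(bool), np.logical_and)

def dilate(img):
    """A pixel turns ON if any pixel of its 3x3 neighborhood is ON."""
    return _neighborhood(img.astype(bool), np.logical_or)

def opening(img):   # erosion then dilation: strips noise, splits thin bridges
    return dilate(erode(img))

def closing(img):   # dilation then erosion: bridges narrow gaps
    return erode(dilate(img))

img = np.zeros((7, 7), dtype=bool)
img[1, 1] = True        # single-pixel noise
img[3:6, 3:6] = True    # a 3x3 particle
opened = opening(img)   # noise removed, particle restored

gap = np.zeros((5, 7), dtype=bool)
gap[1:4, 1:3] = True
gap[1:4, 4:6] = True    # two blocks separated by a one-pixel gap
closed = closing(gap)   # gap bridged
```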
Fig. 19
Fig. 20
Fig. 21
Fig. 22
Further Considerations
The binary operations described in this chapter are only a partial list of
the most frequently used operations and can be combined in useful ways
to produce an image that lends itself to straightforward quantification of
features of interest. Today, image analysis systems incorporate many
processing tools to perform automated, or at least fast, feature analysis.
Creativity is the final tool that must be used to take full advantage of the
power of image analysis. The user must determine if the time spent in
developing a set of processing steps to achieve computerized analysis is
justified for the application.
Fig. 23
Fig. 24
For example, if you have a complicated image that has minimal contrast but somewhat obvious features to the
human eye and only a couple of images to quantify, then manual
measurements or tracing of the features might be adequate. However, the benefit of automated image analysis is that subtle feature characterizations can sometimes yield answers that the user might never have guessed from a cursory inspection of the microstructure.
References
1. E. Pirard, V. Lebrun, and J.-F. Nivart, Optimal Acquisition of Video Images in Reflected Light Microscopy, Microsc. Anal., Issue 37, 1999, p 19–21
2. J.C. Russ, The Image Processing Handbook, 2nd ed., CRC Press, 1994
3. L. Wojnar, Image Analysis: Applications in Materials Engineering, CRC Press, 1998
4. Standard Practice for Assessing the Degree of Banding or Orientation of Microstructures, E 1268-94, Annual Book of ASTM Standards, ASTM, 1999
CHAPTER 5
Measurements
John J. Friel
Princeton Gamma Tech
Contrast Mechanisms
The principles of how to acquire, segment, and calibrate images have
been discussed in Chapter 4. However, one concept that must be
considered before making measurements is the choice of signal and
contrast mechanism. The contrast mechanism selected carries the information to be quantified, but it is the signal used for acquisition that
actually carries one or more contrast mechanisms. It is useful, therefore,
to distinguish between the contrast-bearing signals suitable for digitization and the contrast mechanism itself that will be used to enhance and quantify. In routine metallography, bright-field reflected-light microscopy is the usual signal, but it may carry many varied contrast types depending on the specimen, its preparation, and etching. The mode of operation of the microscope also affects the selection of contrast mechanisms. A list of some signals and contrast mechanisms is given in Table 1.

Table 1
Signal                     Contrast mechanism
Reflected light            Topographic; crystallographic; composition; true color; interference colors
Transmitted light          True color; interference colors and figures; biological/petrographic structure
Secondary electrons        Topographic; voltage; magnetic types 1 and 3
Backscattered electrons    Atomic number; topographic (trajectory); crystallographic (electron channeling patterns, electron backscattered patterns); magnetic type 2; biological structure (stained)
X-rays                     Composition
Absorbed electrons         Atomic number; charge (EBIC); crystallographic; magnetic type 2
Transmitted electrons      Mass thickness; crystallographic (electron diffraction)
Cathodoluminescence        Composition; electron state
Direct Measurements
Field Measurements
Field measurements usually are collected over a specified number of
fields, determined either by statistical considerations of precision or by
compliance with a standard procedure. Standard procedures, or norms, are
published by national and international standards organizations to conform to agreed-upon levels of precision. Field measurements also are the
output of comparison chart methods.
Statistical measures of precision, such as the 95% confidence interval (CI) or percent relative accuracy (%RA), are determined on a field-to-field basis. Typical field measurements include:
O Field number
O Field area
O Number of features
O Number of features excluded
O Area of features
O Area of features filled
O Area fraction
O Number of intercepts
O NL
O NA (number of features divided by the total area of the field)
Low         High        Area, %
…           106         10.22
107         161         80.05
162         255         9.73
The measured area fraction is equivalent to the volume fraction, VV, for a field of features that do not have preferred orientation.
The average area of features can be calculated from feature-specific
measurements, but it also is possible to derive it from field measurements
as follows:
Ā = AA / NA (Eq 1)

Fig. 1
Fig. 2

L̄ = LL / NL (Eq 2)

Fig. 3
Fig. 4
by the selected parameter. For example, large features are not given any
more weight than small ones in a number histogram of area, but in an
area-weighted histogram, the dividing line between coarse and fine is
more easily observed. Using more than one operator to agree on the
selection point improves the precision of the analysis by reducing the
variance, but any inherent bias still remains. Determination of duplex
grain size described in the section Grain Size is an example of this
situation.
Feature orientation in an image constitutes a field measurement even
though it could be determined by measuring the orientation of each
feature and calculating the mean for the field. This is easily done using a
computer, but there is a risk that there might not be enough pixels to
sufficiently define the shape of small features. For this reason, orientation
measurements are less precise. Moreover, if all features are weighted
equally regardless of size, small ill-defined features will add significant
error to the results, and the measurement may not be truly representative.
Because orientation of features relates to material properties, measurements of many fields taken from different samples are more representative
of the material than measurements summed from individual features. This situation agrees nicely with the metallographic principle "do more, less well"; that is, measurements taken from more samples and more fields give
a better representative result than a lot of measurements on one field. A
count of intercepts, NL, made in two or more directions on the specimen
can be used either manually or automatically to derive a measure of
preferred orientation. The directions, for example, might be perpendicular
and parallel to the rolling direction in a wrought metal. The term
orientation as used here refers to an alignment of features recognizable in
a microscope or micrograph. It does not refer to crystallographic
orientation, as might be ascertained using diffraction methods.
ASTM E 1268 (Ref 3) describes a procedure for measuring and
reporting banding in metals. The procedure calls for measuring NL
perpendicular and parallel to the observed banding and calculating an
anisotropy index, AI, or a degree of orientation, Ω12, as follows (Ref 4):

AI = NL⊥ / NL∥ (Eq 3)

and

Ω12 = (NL⊥ − NL∥) / (NL⊥ + 0.571 NL∥) (Eq 4)

The mean free path between features follows from the area fraction and the intercept count:

λ = (1 − AA) / NL (Eq 5)
In the case of space filling grains, the area fraction equals one, and,
therefore, the mean free path is zero. However, for features that do not
occupy 100% of the image, mean free path gives a measure of the
distance between features on a field-by-field basis.
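These banding and spacing parameters reduce to one-line functions. A sketch, using AI = NL⊥/NL∥ and Ω12 = (NL⊥ − NL∥)/(NL⊥ + 0.571 NL∥) per ASTM E 1268 and Ref 4, and mean free path λ = (1 − AA)/NL (function names are illustrative):

```python
def anisotropy_index(nl_perp, nl_par):
    """AI = NL(perpendicular) / NL(parallel), per ASTM E 1268."""
    return nl_perp / nl_par

def degree_of_orientation(nl_perp, nl_par):
    """Omega12 = (NL_perp - NL_par) / (NL_perp + 0.571 * NL_par).
    Zero for an isotropic structure (equal counts in both directions)."""
    return (nl_perp - nl_par) / (nl_perp + 0.571 * nl_par)

def mean_free_path(area_fraction, nl):
    """lambda = (1 - AA) / NL: mean edge-to-edge distance between features."""
    return (1.0 - area_fraction) / nl
```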
Surface Area. There is at least one way to approximate the surface
area from two images using stereoscopy. If images are acquired from two
different points of view, and the angle between them is known, the height
at any point, Z, can be calculated on the basis of displacement from the
optic axis as follows:
Z = P / [2M sin(θ/2)] (Eq 6)

where P is the parallax (the displacement of the point between the two views), M is the magnification, and θ is the tilt angle between the images.
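The stereoscopic height relation Z = P/(2M sin(θ/2)) is a one-liner; a sketch with the tilt angle given in degrees (P and M in consistent units):

```python
import math

def stereo_height(parallax, magnification, tilt_deg):
    """Z = P / (2 M sin(theta / 2)): height of a point from the parallax P
    measured between two views separated by tilt angle theta."""
    theta = math.radians(tilt_deg)
    return parallax / (2.0 * magnification * math.sin(theta / 2.0))
```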
the scale represents the fractal dimension of the surface. Fractal dimension is discussed further in the section Derived Measurements.
Direct measurement of surface area using methods such as profilometry
or the scanning probe microscopy techniques of scanning tunneling
microscopy (STM) and atomic force microscopy (AFM), will not be
considered here because they are not image analysis techniques.
Feature-Specific Measurements
Feature-specific measurements logically imply the use of an AIA
system. In the past, so-called semiautomatic systems were used in which
the operator traced the outline of features on a digitizing tablet. This type
of analysis is time consuming and is only useful to measure a limited
number of features. However, it does have the advantage of requiring the
operator to confirm the entry of each feature into the data set.
Although specimen preparation and image collection have been discussed previously, it should be emphasized again that automatic image
analysis is meaningful only when the image(s) accurately reflects(s) the
properties of the features to be measured. The feature finding program
ordinarily detects features using pseudocolor. As features are found, their
position, area, and pixel intensity are recorded and stored. Other primitive
measures of features can be made by determining Feret diameters
(directed diameters, or DD) at some discrete number of angles. These
include measures of length, such as longest dimension, breadth, and
diameter. Theoretically, shape determination becomes more accurate with
increasing use of Feret diameters. However, in practice, resolution,
threshold setting, and image processing are more likely to be limiting
factors than is the number of Feret diameters. A list of some feature-specific primitive measurements follows:
O Position x and y
O Area
O Area filled
O Directed diameters (including maximum, minimum, and average)
O Perimeter
O Inscribed x and y (including maximum and minimum)
O Tangent count
O Intercept count
O Hole count
O Feature number
O Feature angle
the threshold setting. For any given pixel resolution, the smaller the
feature, the less precise is its area measurement. This problem is even
greater for shape measurements, as described in the section Derived
Measurements.
If a microstructure contains features of significantly different sizes, it
may be necessary to perform the analysis at two different magnifications.
However, there is a magnification effect in which more features are
detected at higher magnification, which may cause bias. Underwood (Ref
7) states in a discussion of magnification effect that the investigator sees
more at higher magnifications. Thus, more grains or particles are counted
at higher magnification, so values of NA are greater. The same is true for
lamellae, but spacings become smaller as more lamellae are counted.
Other factors that can influence area measurement include threshold
setting, which can affect the precision of area measurement, and specimen
preparation and image processing, which can affect both the precision and
bias of area measurement.
Length. Feature-specific descriptor functions such as maximum, minimum, and average are readily available with a computer, and are used to
define longest dimension (max DD), breadth (min DD), and average
diameter. Average diameter as used here refers to the average directed
diameter of each feature, rather than the average over all of the features.
Length measurements of individual features are not readily accommodated using manual methods, but they can be done. For example, the
mean lineal intercept distance, L, can be determined by averaging the
chord lengths measured on each feature.
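The chord-averaging idea can be sketched by measuring runs of ON pixels along each row of a binary image (a sketch; the result is in pixel units, and calibration to physical units is left to the caller):

```python
import numpy as np

def mean_lineal_intercept(img):
    """Average chord (run) length of ON pixels along image rows, in pixels."""
    chords = []
    for row in np.asarray(img, dtype=bool):
        run = 0
        for px in row:
            if px:
                run += 1
            elif run:
                chords.append(run)
                run = 0
        if run:                 # a run touching the right edge
            chords.append(run)
    return sum(chords) / len(chords)

img = [[1, 1, 0, 1],
       [0, 0, 0, 0],
       [1, 1, 1, 1]]
mli = mean_lineal_intercept(img)   # chords of length 2, 1, 4 -> mean 7/3
```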
As with area measures, the precision of length measurements is limited
by pixel resolution, the number of directed diameters constructed by the
computer, and threshold setting. Microstructures containing large and
small features may have to be analyzed at two different magnifications, as
with area measurements.
Bias in area and length measurements is influenced by threshold setting
and microscope and image analyzer calibration. Calibration should be
performed at a magnification as close to that used for the analysis as
possible, and, for SEMs, x and y should be calibrated separately (Ref 8).
Perimeter. Measurement of perimeter length requires special consideration because representation of the outer edge of features in a digital
image consists of steps between adjacent pixels, which either are square
or some other polygonal shape. The greater the pixel resolution, the closer
will be the approximation to the true length of a curving perimeter.
Because the computer knows the coordinates of every pixel, an approximation of the perimeter can be made by calculating the length of the
diagonal line between the centers of each of the outer pixels and summing
them. However, this approach typically still underestimates the true
perimeter of most features. Therefore, AIA systems often use various
adjustments to the diagonal distance to minimize bias.
L = (π/8) n d (Eq 7)

and

L = (π/8) [(n0 + n90) + (1/√2)(n45 + n135)] d (Eq 8)

where n is the total number of intersections of the curve with the grid, n0, n45, n90, and n135 are the intersection counts with the lines at the respective angles, and d is the line (pixel) spacing.
Figure 5 shows two grids having the same arbitrary curve of unknown
length superimposed. The grid lines in Fig. 5(a) are equally spaced;
therefore, those at 45° and 135° do not necessarily coincide with the
diagonals of the squares. In Fig. 5(b), the squares outlined by black lines
represent pixels in a digital image, and the 45° and 135° lines in blue are
constructed along the pixel diagonals. Equation 7 applies to the intersection count from Fig. 5(a), and Eq 8 applies to Fig. 5(b).
For example, in Fig. 5(a), the number of intersections of the red curve with the grid equals 56. Therefore, L = 22.0 in accordance with Eq 7. By comparison, in Fig. 5(b), n0 + n90 = 31 and n45 + n135 = 36, where the first count refers to the number of intersections with the black square gridlines and the second to the intersections with the blue diagonal lines.
Fig. 5 Grids used to measure Crofton perimeter. (a) Equally spaced grid lines. (b) Rectilinear lines and diagonal lines. Please see endsheets of book for color versions.
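A sketch of a Crofton-style perimeter estimate from a binary image, assuming numpy: crossings are counted along rows, columns, and both pixel diagonals, the diagonal counts are weighted by 1/√2, and the total is scaled by π/8. For a digitized disk the estimate lands within a few percent of the true 2πr.

```python
import numpy as np

def crofton_perimeter(img):
    """Perimeter estimate: count phase crossings along rows (0 deg),
    columns (90 deg), and both pixel diagonals (45/135 deg), weight the
    diagonal counts by 1/sqrt(2), and scale by pi/8."""
    img = img.astype(bool)
    n0 = np.count_nonzero(img[:, 1:] != img[:, :-1])     # row crossings
    n90 = np.count_nonzero(img[1:, :] != img[:-1, :])    # column crossings
    n45 = np.count_nonzero(img[1:, 1:] != img[:-1, :-1])
    n135 = np.count_nonzero(img[1:, :-1] != img[:-1, 1:])
    return (np.pi / 8) * ((n0 + n90) + (n45 + n135) / np.sqrt(2))

y, x = np.ogrid[-32:32, -32:32]
disk = (x * x + y * y) <= 20 * 20   # digitized disk of radius 20 pixels
est = crofton_perimeter(disk)       # compare with 2 * pi * 20
```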
Fig. 6
Derived Measurements
Field Measurements
Stereological Parameters. Stereology is a body of knowledge for
characterizing three-dimensional features from their two-dimensional
representations in planar sections. A detailed review of stereological
relationships can be found in Chapter 2, Introduction to Stereological
Principles, and in Ref 4 and 10. The notation uses subscripts to denote
a ratio. For example, NA refers to the number of features divided by the
total area of the field. A feature could be a phase in a microstructure, a
particle in a dispersion, or any other identifiable part of an image. Volume
fraction, VV, is a quantity derived from the measured area fraction, AA,
although in this case, the relationship is one of identity, VV = AA. There
are various stereological parameters that are not directly measured but
that correlate well with material properties. For example, the volume
fraction of a dispersed phase may correlate with mechanical properties,
and the length per area, LA, of grain boundaries exposed to a corrosive
medium may correlate with corrosion resistance.
The easiest measurements to make are those that involve counting
rather than measuring. For example, if a test grid of lines or points is used,
the number of lines that intercept a feature of interest or the number of
points that lie on the feature are counted and reported as NL or point
count, PP. ASTM E 562 describes procedures for manual point counting
and provides a table showing the expected precision depending on the
number of points counted, the number of fields, and the volume fraction
of the features (Ref 1). Automatic image analysis systems consider all
pixels in the image, and it is left to the operator to tell the computer which
pixels should be assigned to a particular phase by using pseudocolor. It
also is easy to count the number of points of interest that intersect lines
in a grid, PL. If the objects of interest are discrete features, such as
particles, then the number of times the features intercept the test lines
gives NL. For space filling grains, PL = NL, and for particles, PL = 2NL.
In an AIA system, the length of the test line is the entire raster; that is, the
total length of lines comprising the image in calibrated units of the
microstructure. Similarly, it is possible to count the number per area, NA,
but this has the added difficulty of having to rigorously keep track of each
feature or grain counted to avoid duplication.
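Point counting reduces to sampling the detected phase on a regular lattice of test points; a sketch assuming numpy (the grid spacing is an arbitrary choice for illustration):

```python
import numpy as np

def point_count(img, spacing=8):
    """P_P: fraction of regularly spaced test points falling on the phase."""
    pts = np.asarray(img, dtype=bool)[::spacing, ::spacing]
    return pts.sum() / pts.size

img = np.zeros((16, 16), dtype=bool)
img[:, :8] = True                 # the phase fills the left half of the field
pp = point_count(img, spacing=8)  # 4 test points, 2 on the phase -> 0.5
```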
All of the parameters above that involve counting are directly measurable, and several other useful parameters can be derived from these
measurements. For example, the surface area per volume, SV, can be
calculated as follows:
SV = 2PL (Eq 9)

the length of linear features per unit area as:

LA = (π/2) PL (Eq 10)

and the average feature area from field counts as:

Ā = AA / NA (Eq 11)
grains in a given area of the specimen, such as the number of grains per
square inch at 100× magnification. For more on grain size measurement,
see Chapter 2, Introduction to Stereological Principles; Chapter 7,
Analysis and Interpretation; Chapter 8, Applications, and Ref 11.
Realizing the importance of accurate grain size measurement, ASTM
Committee E 4 on Metallography took on the task of standardizing grain
size measurement. ASTM E 112 is the current standard for measuring
grain size and calculating an ASTM G value (Ref 12). The relationship
between G and the number of grains per square inch at a magnification of
100×, n, follows:
n = 2^(G−1) (Eq 12)
However, G is generally calculated from various easily measured stereological parameters, such as NL and NA. ASTM E 1382 (Ref 13) describes
the procedures for measuring G using automatic or semiautomatic image
analysis, and gives two equations:
G = (6.643856 log NL) − 3.288 (Eq 13)

and

G = (3.321928 log NA) − 2.954 (Eq 14)

where NA is in mm⁻².
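These grain-size relations can be checked numerically with small helpers (function names are illustrative; the NA form, G = 3.321928 log10 NA − 2.954 with NA in mm⁻², is the standard ASTM E 112/E 1382 relation, and NL is taken in mm⁻¹):

```python
import math

def grains_per_in2_at_100x(g):
    """n = 2**(G - 1): grains per square inch at 100x magnification."""
    return 2 ** (g - 1)

def g_from_nl(nl_per_mm):
    """ASTM G from the lineal intercept count N_L in mm^-1."""
    return 6.643856 * math.log10(nl_per_mm) - 3.288

def g_from_na(na_per_mm2):
    """ASTM G from grains per unit area N_A in mm^-2."""
    return 3.321928 * math.log10(na_per_mm2) - 2.954
```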
The procedures prescribed in ASTM E 112 assume an approximately
log-normal grain size distribution. There are other conditions in which
grain size needs to be measured and reported differently, such as a
situation in which a few large grains are present in a finer-grained matrix.
This is reported as the largest grain observed in a sample, expressed as
ALA (as large as) grain size. The procedure for making this measurement
is described in ASTM E 930.
Duplex grain size is an example of features distinguished by their size,
shape, and position discussed previously. ASTM E 1181 describes various
duplex conditions, such as bimodal distributions, wide-range conditions,
necklace conditions, and ALA. Figure 7 shows an image containing
bimodal duplex grain size in an Inconel Alloy 718 (UNS N07718)
nickel-base superalloy. Simply counting grains and measuring their
average grain size (AGS) yields 1004 grains having an ASTM G value of
9.2. However, such an analysis completely mischaracterizes the sample
because the grain distribution is bimodal.
Figure 8 shows an area-weighted histogram of the microstructure in
Fig. 7, which suggests a division in the distribution at an average diameter
of approximately 50 μm (Ref 14). The number percent and area percent
histograms are superimposed, and the area-weighted plot indicates the
bimodal nature of the distribution. The number percent of the coarse
grains is only 2%, but the area percent is 32%. Repeating the analysis for
grains with a diameter greater than 50 μm yields 22 grains having a G
value of 4.9. The balance of the microstructure consists of 982 grains
having a G value of 9.8. The report on grain size, as specified by ASTM E 1181 on Duplex Grain Size, is given as: Duplex, Bimodal, 68% AGS ASTM No. 10, 32% AGS ASTM No. 5.
Fig. 7
Fig. 8
The fractal dimension of a surface, such as a fracture surface, can be
derived from measurements of microstructural features. Although fractal
dimension is not a common measure of roughness, it can be calculated
from measurements such as the length of a trace or the area of a surface.
The use of fractal measurements in image analysis is described by Russ
(Ref 15) and Underwood (Ref 16).
The concept of fractals involves a change in some dimension as a
function of scale. A profilometer provides a measure of roughness, but the
scale is fixed by the size of the tip. A microscope capable of various
magnifications provides a suitable way to change the scale. An SEM has
a large range of magnification over which it can be operated, which makes
it an ideal instrument to measure length or area as a function of scale. Two
linear measurements that can be made are length of a vertical profile of a
rough surface and length of the outline of features in serial sections. These
and other measurements for describing rough surfaces are extensively
reviewed by Underwood and Banerji (Ref 6).
If you can measure or estimate the surface area, using, for example,
stereoscopy discussed above, then the fractal dimension can be calculated. Such an analysis on a fracture surface is described by Friel and
Pande (Ref 17). Figure 9 shows a pair of images of the alloy described in
Ref 17 taken at two different tilt angles using an SEM. From stereopairs
Fig. 9
Fig. 10
Fractal plot of fracture surface area versus scanning electron microscope magnification
Fig. 11
Images of clustered, ordered, and random features with their tessellated counterparts
Fig. 12 Tessellation cells constructed from features in a microstructure. (a) Scanning electron microscope photomicrograph of porosity in TiO2. (b) Tessellation cells constructed based on pores
Descriptor               Definition
Area (A)                 Pixel count
Perimeter (P)            …
Breadth (B)              …
…                        √(4A/π) (diameter, if circular)
Form factor              4πA/P² (perimeter-sensitive, always ≤ 1)
Circularity              …
…                        A / projected length (x or y)
Roughness                P / πD (πD = perimeter of circle)
Volume of a sphere       0.75225 √(A³)
Fiber length             0.25 (P + √(P² − 16A))
Fiber width              0.25 (P − √(P² − 16A))
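Several of these descriptors are simple functions of area and perimeter; a sketch (for a 10 × 3 rectangle, with A = 30 and P = 26, the fiber formulas recover the length and width exactly, and a circle has form factor 1):

```python
import math

def form_factor(area, perimeter):
    """4 * pi * A / P**2: equals 1 for a circle, drops for rough outlines."""
    return 4.0 * math.pi * area / perimeter ** 2

def fiber_length(area, perimeter):
    """0.25 * (P + sqrt(P**2 - 16A)): length of a ribbon-like feature."""
    return 0.25 * (perimeter + math.sqrt(perimeter ** 2 - 16.0 * area))

def fiber_width(area, perimeter):
    """0.25 * (P - sqrt(P**2 - 16A)): width of a ribbon-like feature."""
    return 0.25 * (perimeter - math.sqrt(perimeter ** 2 - 16.0 * area))
```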
Fig. 13
Because the image consists only of space filling grains, the measurement
of area fraction and mean free path are not meaningful. However, in a real
digital image, the grain boundaries comprise a finite number of pixels.
Therefore, the measured area fraction is less than one, and the calculated
mean free path is greater than zero. Even when image processing is used
to thin the boundaries, their fraction is still not zero, and the reported
results should consider this. Tables 3 and 4 show typical field and feature
measurements, respectively, made on a binary version of the image in
Fig. 13.
Standard Methods
Throughout this chapter, standard procedures have been cited where
appropriate. These procedures are the result of consensus by a committee
of experts representing various interested groups, and as such, they are a
good place to start. A list of various relevant standards from ASTM is
given in Tables 5 and 6. International Standards Organization (ISO) and
Table 3
Measurement                 Average
Field area, μm²             365,730.8
Total features              1,538
Total intercepts            15,517
Area of features, μm²       252,340.9
…                           4.2
…                           63.7
…                           164.1
…                           14.7
…                           1.5
Table 4
Feature                 Average      Median      Minimum     Maximum
Area                    164.07       103.74      5.79        2,316.15
Perimeter               48.12        41.45       12.72       353.23
x Feret                 14.58        13.52       3.00        66.08
y Feret                 14.67        12.01       3.00        75.09
…                       13.42        12.01       3.00        60.07
Longest dimension       8.33         16.06       5.40        87.08
Breadth                 11.78        10.51       3.00        60.25
Average diam            15.43        13.62       5.15        73.97
Convex perimeter        48.47        42.80       16.17       232.37
…                       54.30        12.84       11.49       4.48
Form factor             0.75         0.77        0.18        1.23
Circularity             2.06         1.93        1.28        12.57
Mean x intercept        9.18         8.26        1.93        36.83
Mean y intercept        9.22         8.23        1.75        36.67
Roughness               0.97         0.97        0.79        1.52
Volume of sphere        2,228.30     794.87      47.18       83,851.82
Aspect ratio            1.53         1.43        1.00        8.80
Table 5
ASTM No.      Subject
Measurement standards
E 562         Volume fraction by systematic manual point count
E 1122        …
E 1382        Grain size by image analysis
E 1181        Duplex grain size
E 1245        Inclusions by stereology
E 1268        Degree of banding
B 487         Coating thickness
Material standards
C 856         Microscopy of concrete
D 629         Microscopy of textiles
D 686         Microscopy of paper
D 1030        Microscopy of paper
D 2798        Microscopy of coal
D 3849        …

Table 6 Microscopy-related standards
ASTM No.      Subject
E 3           Specimen preparation
E 766         …
E 883         Reflected-light microscopy
E 986         …
E 1351        Preparation of replicas
E 1558        Electropolishing
F 728         …
References
1. Standard Test Method for Determining Volume Fraction by Systematic Manual Point Count, E 562, Annual Book of ASTM Standards,
Vol 03.01, ASTM, 1999, p 507
2. E.E. Underwood, Quantitative Metallography, Metallography and
CHAPTER 6
Characterization of Particle Dispersion
Mahmoud T. Shehata
Materials Technology Laboratory/CANMET
The results obtained for a particle dispersion by all five techniques are
compared in each case with results for ordered, random, and clustered
dispersions. In addition, the techniques are evaluated, and their usefulness
and limitations are discussed.
Fig. 1
σ = √[(1/n) Σi (NAi − N̄A)²] (Eq 1)

where NAi is the observed number of inclusions per unit area in the ith location (field of view) and N̄A is the average number of particles per unit
area viewed on the sample. Maximum homogeneity is characterized by a
minimum standard deviation; thus, the degree of homogeneity increases
with a decreasing value of standard deviation. To compare relative
homogeneity of samples with different number densities, the standard
deviation must be normalized by the value of the mean, which is the
coefficient of variation, V, defined as V = σ/N̄A.
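The coefficient of variation is a two-line computation on the per-field counts; a sketch (function name illustrative, population standard deviation assumed):

```python
import math

def coefficient_of_variation(na_counts):
    """V = sigma / mean of the per-field counts N_Ai (population sigma)."""
    n = len(na_counts)
    mean = sum(na_counts) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in na_counts) / n)
    return sigma / mean
```

A perfectly homogeneous set of fields gives V = 0; larger V means a less homogeneous dispersion.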
Fig. 2
0.96    0.24    0.06    0.015
0.23    0.34    0.51    0.66
0.23    0.38    0.62    0.81
0.31    0.49    0.69    0.84
E(s) = 1 / (2√NA) (Eq 2)

E(s²) = 1 / (πNA) (Eq 3)

σ²(s) = (4 − π) / (4πNA) (Eq 4)

Dispersion                 Q value        R value
Random dispersion          1              1
Ordered dispersion         2 > Q > 1      0 < R < 1
Clustered dispersion       0 < Q < 1      > 1
For example, the comparison shown in Fig. 3 indicates that the observed
distribution is composed of clusters superimposed on random dispersion
(Q 0.8 and R 2.83).
The nearest-neighbor spacing technique can be very useful in describing the observed dispersion as being ordered, random, clustered, or
Fig. 3
Fig. 4
Fig. 5
Frequency versus spacing distribution results of dilation and counting procedure for point patterns in Fig. 1. (a) Random. (b) Ordered.
(c) Clustered
higher spacings. For the clustered dispersion, on the other hand, the
frequency spacing distribution has a large peak at small spacings and a
small peak (or peaks) at very large spacings. The first peak corresponds
to spacings of particles inside the clusters, whereas the second peak(s)
correspond(s) to spacing among clusters.
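For a random dispersion of density NA, the expected nearest-neighbor spacing is E(s) = 1/(2√NA), a standard Poisson-process result. A brute-force sketch of the Q comparison (names illustrative; a perfectly ordered square lattice gives Q = 2, the ordered limit):

```python
import math

def q_value(points, area):
    """Q = observed mean nearest-neighbor spacing divided by the random
    expectation E(s) = 1 / (2 * sqrt(NA))."""
    n = len(points)
    nearest = [min(math.hypot(xi - xj, yi - yj)
                   for j, (xj, yj) in enumerate(points) if j != i)
               for i, (xi, yi) in enumerate(points)]
    mean_s = sum(nearest) / n
    expected = 1.0 / (2.0 * math.sqrt(n / area))
    return mean_s / expected

lattice = [(float(a), float(b)) for a in range(10) for b in range(10)]
q = q_value(lattice, 100.0)   # perfectly ordered lattice -> Q = 2.0
```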
The dilation and counting technique is very useful to characterize
particle dispersion and clustering. It is applied to reveal both the presence
and degree of clustering in several aluminum-silicon alloy sheet materials
where dispersion and clustering are critical to sheet formability (Ref 5).
Note that in this technique, the particle-to-particle spacing is the clear
(edge-to-edge) spacing between particles and not center-to-center spacing. Both spacings are very similar in the case of very small particles and
very large spacings as, for example, inclusions in a steel sample.
However, they are significantly different in the case of larger particles and smaller spacings, as, for example, second-phase particles in a metal-matrix composite (Fig. 6).
The edge-to-edge spacing obtained using the dilation and counting
technique has more relevance to modeling properties because the technique looks at clear spacing and takes into account particle morphologies
(long stringer versus round particles). Particle morphology is ignored
when center-to-center spacing is considered. A limitation of the dilation
and counting technique is that it does not provide local volume fraction
values, which are useful in modeling fracture. Such information can only
be provided using tessellation techniques described below.
Fig. 6
Fig. 7
Fig. 8
Fig. 9
Dirichlet network for a computer-generated random dispersion corresponding to the steel sample in Fig. 8
constructed for the steel sample and the random dispersion is shown in
Fig. 10. Note that the results of the area distribution of the cells
constructed for a randomly generated point dispersion follow a Poisson distribution (Ref 6). Therefore, comparisons can be made directly with the Poisson distribution without generating random dispersions.
An important advantage of the Dirichlet tessellation method is that it
can be used to yield parameters that relate more directly to fracture
properties, namely local volume fraction of particles, which is equivalent
to the local area fraction. The local area fraction takes into account two
important parameters that relate directly to the fracture process. The size
of the particle (inclusion or void) and the size of the Dirichlet cell give an
indication of near-neighbor distances, which relate to crack propagation.
The area fraction for each particle is calculated as the ratio of the area of
the particle to the area of the cell around it. For this reason, the areas of
the particles are recorded together with the coordinates of the centroids,
and the computer program was expanded to obtain the local area fraction
for the inclusion dispersion.
In addition, random dispersions of local area fractions are also
generated (Ref 6), and the local area fraction distributions for both the
random dispersion and the steel sample are shown in Fig. 11. Note that the
local area fraction distributions follow a log-normal distribution (approximately a straight line on the logarithmic probability plot in Fig. 11).
Fig. 10
The
standard deviation and coefficient of variation, as represented by the slope
of the line (Fig. 11), are higher for the steel sample than for the random
case and are a measure of inhomogeneity. Another advantage of the
Dirichlet tessellation method is that one can identify clustered regions and
the degree of clustering from local area fraction values of the Dirichlet
regions for the whole inclusion dispersion. An example is shown in Fig.
12, where clustered regions are identified by comparing the local area
fraction for each Dirichlet region with the average local area fraction for
the particle dispersion shown in Fig. 8.
All the limitations of the Dirichlet tessellation technique stem from the
fact that the network is based on particle centroids and, therefore, is not
affected by either particle size or shape. This usually is acceptable in the
case where particles are round and very small compared with spacings. A
problem arises when particles are as large as or larger than the free
spacing between them. For example, if a small particle lies close to a large particle whose diameter exceeds the free spacing between them, then the cell boundary based on centroids will cut across the large particle. Therefore,
for a metal-matrix composite like that shown in Fig. 6, cell boundaries
must be constructed at the mid-edge-to-edge spacing rather than midway between centroids. This is achieved using tessellation by dilation.
Fig. 11
Fig. 12
Fig. 13
(matrix); that is, the background is successively eroded, but not the last
pixel, until it is eroded to a line (skeleton). For this reason, the technique
also is called background erosion technique, background thinning technique, and background skeletonization technique. For simplicity, here it is
referred to as the dilation technique. It should be noted that the cell boundaries using the dilation technique are constructed at the mid-edge-to-edge spacing between a particle and all near neighbors of that particle. Therefore, it does take into account particle size and shape, as discussed above, and this is one of the principal advantages of the dilation
technique. Figure 11 in Chapter 5, Measurements, shows an example of
a tessellation network produced using this technique corresponding to
clustered, ordered, and random point patterns.
It should be noted that the boundary constructed between any two particles is not the normal bisector line, as in Dirichlet tessellation, but rather a number of line segments approximating the normal bisector. The reason is that
the dilation follows a particular square, hexagonal, or octagonal grid.
Therefore, boundary lines can only be constructed at particular angles
depending on the grid used. For a square grid, the boundary lines can be
constructed only at angles every 45° (0°, 45°, 90°, and 135°), and, therefore,
the normal bisector (that can take any angle) is approximated by segments
of lines at those particular angles, as shown in Fig. 13. This can result in
polygons with irregular shapes. The shape irregularity decreases by using
a hexagonal or an octagonal grid, where line segments can be constructed
every 30° and 22.5°, respectively. However, in any case, the area of the
polygons constructed using the dilation technique is very similar to that
constructed by the Dirichlet tessellation technique. This also results in a
local area fraction (area of the particle divided by the area of the cell
around it) that is very similar. In addition, the situation improves
significantly when the particles are larger and spacings are smaller, as in
the case of metal matrix composites, for example (Fig. 6). In this case, the
dilation technique becomes the most appropriate technique to measure
local area fractions in the particle dispersion.
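The background-thinning idea described above can be illustrated with a minimal grid model: particle labels are grown simultaneously into the background until the growth fronts meet, so that each background pixel joins the label that reaches it first and the boundaries fall at the mid-edge-to-edge spacing. The grid, labels, and 4-connected growth rule below are assumptions made for illustration only.

```python
# Minimal sketch of tessellation by dilation on a square grid: particle
# pixels (labeled > 0) are dilated simultaneously into the background (0)
# until the fronts meet; each background pixel takes the label that
# reaches it first, so cell boundaries form along the mid-edge-to-edge
# spacing between particles.
from collections import deque

def dilation_tessellation(labels):
    """labels: 2-D list, 0 = background, >0 = particle label."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    queue = deque((r, c) for r in range(h) for c in range(w) if out[r][c])
    while queue:                       # breadth-first dilation, 4-connected
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and out[nr][nc] == 0:
                out[nr][nc] = out[r][c]
                queue.append((nr, nc))
    return out

# Two single-pixel "particles"; the cell boundary falls midway between them.
grid = [[0] * 7 for _ in range(3)]
grid[1][1], grid[1][5] = 1, 2
cells = dilation_tessellation(grid)
print(cells[1])     # row through both particles
```

Because the growth follows a fixed square grid, the resulting boundary is a chain of grid-aligned segments approximating the normal bisector, exactly the effect discussed in the text.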
Fig. 14
One of the limitations of the dilation technique is the edge effect, where
polygons around particles at the edge of the field cannot be constructed
based on the particles present in neighboring fields. However, this is
overcome by eliminating in each field the measurements for those
particles lying at the edge of the field. This is achieved by making the
measuring frame smaller than the image frame, as shown in Fig. 14. In
this case, only particles that are inside the measuring frame are measured.
The measuring frame can then be chosen so that only the particles that
have the correct cells constructed around them are contained in the
measuring frame. In some cases, this can significantly reduce the
measuring frame compared with the image frame. Then it becomes a
matter of measuring more fields to cover the sample. This is accomplished
rapidly by automatic image analysis systems.
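The measuring-frame rule above reduces to a simple containment test. The sketch below is a hedged illustration (bounding boxes and frame coordinates are assumed representations, not the chapter's data structures).

```python
# Hedged sketch of the measuring-frame idea: only particles whose bounding
# boxes lie entirely inside a measuring frame (smaller than the image
# frame) are kept, so every kept particle has a complete cell around it.
def inside_measuring_frame(particles, frame):
    """particles: list of bounding boxes (xmin, ymin, xmax, ymax);
    frame: (xmin, ymin, xmax, ymax) of the measuring frame."""
    fx0, fy0, fx1, fy1 = frame
    return [p for p in particles
            if p[0] >= fx0 and p[1] >= fy0 and p[2] <= fx1 and p[3] <= fy1]

# Image frame 0..100; measuring frame shrunk by a 10-unit guard band.
particles = [(5, 5, 12, 12), (40, 40, 50, 50), (88, 60, 95, 70)]
kept = inside_measuring_frame(particles, (10, 10, 90, 90))
print(len(kept))   # only the central particle survives
```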
Conclusions
Characterization of particle dispersion by means of image analysis is
described using five different techniques. The number density technique
depends on field size, and the field area should be as small as the area of the cluster to sense particle clusters using this technique. The nearest-neighbor spacing distribution technique is used to characterize particle
dispersions as ordered, random, and clustered, and provides some
quantification. However, it is not very sensitive because it considers only
the nearest neighbor. The dilation counting technique considers near
neighbors and is very useful to characterize particle dispersion. However,
it does not provide local area fraction measurements, which only can be
obtained using the tessellation technique. Dirichlet tessellation is the most
comprehensive technique to characterize particle dispersion. It is based on
construction of a Dirichlet network at mid-distances between particle
centroids. It provides local area fraction measurements and is used to
identify clustered regions. Because it is based on particle centroids, its
limitations appear when larger particles and smaller spacings are considered at the same time.
References
1. P. Poruks, D. Wilkinson, and J.D. Embury, Role of Particle Distribution on Damage Leading to Ductile Fracture, Microstructural Science, Vol 22, ASM International, 1998, p 329–336
2. M.J. Worswick, A.K. Pilkey, C.I.A. Thompson, D.J. Lloyd, and G. Burger, Percolation Damage Prediction Based on Measured Second Phase Particle Distributions, Microstructural Science, Vol 26, ASM International, 1998, p 507–514
CHAPTER 7
Analysis and Interpretation
Leszek Wojnar
Cracow University of Technology
Microstructure-Property Relationships
Before discussing microstructure-property relationships, it is necessary
to develop appropriate theoretical models, which allow selection of
suitable quantities that are useful for further analysis. A modern approach
to materials engineering takes into account many different materials
properties, including:
• Mechanical properties: these include yield and flow stress, ultimate
strength, hardness, and fracture toughness
Fig. 1
of the matrix. The tensile and yield strengths of the ductile iron
considered here (the volume fraction of the graphite is 11%) should be
approximately 89% of the corresponding strength of silicon-ferrite, which
is confirmed (Fig. 3) over a wide range of temperatures.
Both phases in question have entirely different ductilities, which also
can be explained using the model presented in Fig. 1. Assume the fracture
process is controlled by the energy absorbed in the material during its
deformation. In nodular cast iron, the graphite nodules are very weakly
bonded with the metallic matrix and, therefore, function almost like pores. These pores break the metallic matrix into elements, the sizes
of which are defined by the distance between neighboring nodules, as
illustrated in Fig. 2 (Ref 2). The distance is equal to the mean free path
between graphite nodules (mean free path is described in Chapter 2,
Introduction to Stereological Principles).
Based on the hypothesis mentioned above and theoretical considerations lying outside the scope of this text, the lower and upper limits for
the relationship between the mean free path and fracture toughness
(measured by the J-integral) are established. Almost all the experimental data available in the literature at the time of the study fit nicely between these theoretically calculated bounds, as illustrated in Fig. 4.
Fig. 2
To summarize, the properties analyzed are related to different microstructural quantities. The tensile strength is sensitive to changes in the
graphite volume fraction, but there is no correlation with the interparticle
distance. On the other hand, in the case of fracture toughness, the opposite
is observed. The model presented allows a better understanding of the
processes analyzed and quick choice of appropriate parameters to be
measured. It is important to note that a model of the material being
analyzed is helpful to explain the properties of the material, as it quickly
shows which parameters are relevant and which are not. To obtain a
quantitative description of the structure of a material, even a very
simplistic model forms a better basis for solving problems than a no-model approach.
Fig. 3
Fig. 4
initial particle. Similar to the convex hull is the bounding rectangle (Fig.
5k), suitable for shape characterization. Figure 5(l) illustrates the number
of holes. Other characteristics also are available, for example, number of
Euler points (suitable for convexity/concavity quantification and deviation moments). These are the most common measurements, and most
image analysis specific software can be used to obtain their values.
Quantification of some of these measurements is very difficult, if not
impossible, without the use of a computer.
One of the most important tasks during microstructure quantification is
to develop appropriate links between the classical stereological parameters (which usually have a well-elaborated theoretical background) with
parameters offered by image analysis. Specific problems arise when
taking into account the digital (discontinuous) nature of computerized
images. Examples of application difficulties are measurements along
curvilinear test lines (used in the method of vertical sections) and errors
in perimeter evaluation.
Any microstructure should be described in a qualitative way prior to
quantitative characterization because the latter simply expresses the
former in numbers. Otherwise, large sets of meaningless values are
Fig. 5 Basic measures of a single particle: (a) initial particle, (b) area, (c) perimeter, (d) and (e) Feret diameters, (f) maximum width, (g) intercept, (h) coordinates of the center of gravity, (i) coordinates of the first point, (j) convex hull, (k) bounding rectangle, and (l) number of holes
Fig. 6
black phase. Note that there is no difference in size (even the size
distribution is identical), shape, and arrangement (in both cases the
arrangement is statistically uniform) between the images in Fig. 6(a) and
6(b). The structure in Fig. 6(c) is obtained by changing only the size of the
objects, without altering their amount, shape, and arrangement. Changing
the arrangement of structure in Fig. 6(c) results in a structure shown in
Fig. 6(d), which differs from Fig. 6(a) in size and arrangement of the
objects. Keeping the amount, size, and arrangement of the black phase in
Fig. 6(a) and changing only the shape yields results shown in Fig. 6(e).
Finally, the structure shown in Fig. 6(f) differs from that in Fig. 6(a) in all
the characteristics; that is, amount, size, shape, and arrangement.
The rationale for classification into the four basic characteristics presented above
is not proven theoretically. However, in all cases known to the authors,
characterization of the structure can be performed within the framework
of amount, size, shape, and arrangement. The most important advantage
of this simplification is that structural constituents usually require only
four characteristics versus a host of parameters (such as those listed at the
beginning of this Chapter) supplied by classical stereological methods,
and even more parameters offered by contemporary image analysis. The
following discussion provides some guidelines both on how to choose the
optimal parameter subsets to quantify a structure and how to interpret the
results obtained.
quantify the amount of microstructural constituents even if phase distribution over the test material volume is highly inhomogeneous.
To avoid misinterpretation of the term amount, consider the following
discussion. Any material has a 3-D structure (even if available only in the
form of extremely thin layers or fibers), which is described by 2-D or
one-dimensional (1-D) geometrical models. Therefore, the term amount is
used only to characterize 3-D microstructural constituents. Two-dimensional features (e.g., grain boundaries) and 1-D, or linear, features (e.g., dislocation lines) are characterized by means of their densities, SV and LV, respectively.
Size of Structural Constituents. Although there is a clear intuitive
understanding of what the term size means, measurement of size is not
straightforward. The usual way to measure the size of 3-D objects is by
measuring their volume. However, other measurements, such as total
surface area, mean section area, largest dimension, mean intercept length,
and diameter (for a sphere) also can be used. Thus, in the case of size,
choosing a proper parameter is not that easy. Fortunately, there are
guidelines for selecting the best size characteristics, which are summarized below.
The measurement used to quantify size must be adequate for the process
model under consideration. For example, in the case of nodular cast iron,
the amount of graphite is approximately stable and (assuming a constant
amount of the graphite) its fracture toughness is linearly proportional to
the mean graphite nodule diameter, a proper size parameter in this case
(Ref 2). Application of the mean nodule volume for this analysis results
in highly nonlinear relationships, which can easily lead to false conclusions. By comparison, the grain growth process in steel at high temperatures is controlled by the presence of small precipitates at the grain
boundaries. In this case, characterizing grain size using the surface area of
grain boundaries per unit volume, SV, is better than characterizing grain
size using any linear dimension of the grain.
Size measurements should be sensitive to changes in the microstructure
that can affect the properties studied. For example, especially in the case
of recrystallized materials, two entirely different materials could have the
same mean values of grain size, but grain size distribution could differ
significantly. Thus, even if the grain volume theoretically is the best
measure of grain size, it should not be used in this case because it is
difficult to evaluate the volumetric distribution of grains (this requires
serial sectioning and subsequent 3-D reconstruction). It is recommended
in such a case to analyze section areas or intercept lengths because
distributions of these variables are easily obtained and compared. Further,
a safer approach is to analyze the intercept length distribution because the
distribution of grain section areas is much more sensitive to errors in grain
boundary detection (Fig. 7). The structure in Fig. 7 consists of grains
having partially lost grain boundary lines, or segments, which are
Fig. 7
f1 = a/b (Eq 1)
where a and b are the length and width of the minimum bounding
rectangle, or a is the maximum Feret diameter, while b is the Feret
diameter measured perpendicular to it. These two sets of values used to
determine elongation can produce slightly different values of the shape
factor due to the digital nature of computer images.
Aspect ratio reaches a minimum value of 1 for an ideal circle or square
and has higher values for elongated shapes. Unfortunately, elongation is not useful to assess irregularity, because all the particles can have an f1 value very close to 1 despite having profoundly different
shapes. This is illustrated in Fig. 8(b). Circularity, one of the most popular shape factors, offers a good solution to this situation:
f2 = L^2/(4πA) (Eq 2)
where L is the perimeter and A is the surface area of the analyzed particle
(Fig. 9). The f2 shape factor is very sensitive to any irregularity of the
shape of circular objects. It has a minimum value of 1 for a circle and
higher values for all other shapes. However, it is much less sensitive to
elongation.
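The two shape factors above are easy to compute once a particle's Feret diameters, perimeter, and area are available. The sketch below is illustrative; the polygon used to stand in for a near-circular particle is an assumption.

```python
# Sketch of the two shape factors from Eq 1 and 2: aspect ratio
# f1 = a/b (maximum Feret diameter over the perpendicular Feret
# diameter) and circularity f2 = L^2 / (4*pi*A). For a circle both
# tend toward their minimum value of 1.
import math

def aspect_ratio(a, b):
    """Eq 1: a = maximum Feret diameter, b = perpendicular Feret."""
    return a / b

def circularity(perimeter, area):
    """Eq 2: f2 = L^2 / (4*pi*A); 1 for a circle, larger otherwise."""
    return perimeter ** 2 / (4.0 * math.pi * area)

# A regular 64-gon as a stand-in for a near-circular particle, radius 1
n, r = 64, 1.0
perimeter = n * 2 * r * math.sin(math.pi / n)
area = 0.5 * n * r * r * math.sin(2 * math.pi / n)
print(round(circularity(perimeter, area), 4))   # close to 1.0
print(aspect_ratio(4.0, 2.0))                   # elongated particle: 2.0
```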
Fig. 8
f3 = d2/d1 (Eq 3)
Fig. 9
Fig. 10
Arrangement of Microstructural Constituents and its Quantification. Arrangement quantification is discussed on the basis of examples
that later are summarized to get a more general understanding of the
problem it presents. The first question to answer when dealing with
arrangement is how do you quantify inhomogeneity? Note that in any
two-phase material observed at high magnification, there are distinct regions occupied only by one of the phases. At increasingly higher magnification, a point is reached where in most fields of view, only a single phase
is observed; the second phase is outside the field of view. Generally, single-phase materials can be treated as homogeneous, although they differ in their degree of homogeneity; on a microscale, however, any two-phase material is highly inhomogeneous. (This intuitive meaning of homogeneity often is used in
everyday life, especially in the kitchen, where many dishes are prepared
using constituents that lead to something homogeneous.)
The difficulty in quantifying homogeneity is somewhat similar to the
problem of shape characterization. In both cases, there is no clear
definition of the quantified characteristics and no clearly defined measurements, and, yet, both characteristics are crucial to define material
properties. The most commonly applied solution in quantification of
homogeneity is dividing the microstructure into smaller parts, called cells, specified by their size. It is assumed that some global characteristics of microstructural features (e.g., particle volume fraction) should be
approximately stable over randomly selected cells. If a large variation in
the value of a selected characteristic is observed during the movement
from one cell to another, the material is judged as inhomogeneous at the
scale defined by the chosen cell size.
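The cell-based assessment just described can be sketched as follows: split a binary image into equal cells, compute the phase area fraction in each, and use the scatter of those fractions as an inhomogeneity measure. The cell size, the tiny test image, and the use of the coefficient of variation as the scatter measure are assumptions for illustration.

```python
# Minimal sketch of homogeneity assessment by cells: split a binary image
# into equal cells, compute the phase area fraction in each cell, and use
# the coefficient of variation of those fractions as an inhomogeneity
# measure at the scale set by the cell size.
def cell_fractions(image, cell):
    """image: 2-D list of 0/1 pixels; cell: cell edge length in pixels."""
    h, w = len(image), len(image[0])
    fractions = []
    for r0 in range(0, h, cell):
        for c0 in range(0, w, cell):
            pixels = [image[r][c]
                      for r in range(r0, min(r0 + cell, h))
                      for c in range(c0, min(c0 + cell, w))]
            fractions.append(sum(pixels) / len(pixels))
    return fractions

def coefficient_of_variation(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return (var ** 0.5) / mean

# All phase pixels crowded into one corner: strongly inhomogeneous
image = [[1 if r < 2 and c < 2 else 0 for c in range(4)] for r in range(4)]
print(coefficient_of_variation(cell_fractions(image, 2)))   # CV = 2.0
```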
Quantitative characterization of other aspects of arrangement requires
similar individual, context-oriented analysis. For example, to determine
whether particles tend to concentrate on grain boundaries, detect the
boundaries and compare the amounts of particles lying on and away from
the grain boundaries.
To summarize, it is impossible, even from the theoretical point of view,
to fully characterize in a quantitative way all aspects of arrangement of
microstructural constituents. Partial solutions applicable to a limited
number of cases can be prepared on the basis of a thorough analysis of the
process history of the material being analyzed. Quantification of arrangement of microstructural constituents is one of the most important
characteristics of the material microstructure even though it is difficult to
perform.
Advanced techniques for arrangement quantification are beyond the
scope of this text, and, therefore, only the most general characteristics of
this problem are outlined. Chapter 5, Measurements, and Chapter 6,
Characterization of Particle Dispersion, describe other methods for
quantification in simple problems related to distribution, such as orientation of structural constituents.
Fig. 11
Fig. 12
affect the results. Errors include improper specimen sampling (the first
possible source of bias), polishing artifacts, over-etching, staining,
specimen corrosion, errors induced by the optical system, and the effects
of magnification and inhomogeneity.
Improper polishing is the main source of bias in images. Soft materials
and materials containing both very soft and very hard constituents are
especially difficult to prepare and are sensitive to smearing and relief (see
Fig. 13 and also Chapter 3, Specimen Preparation for Image Analysis).
Ferritic gray cast iron is such a material; graphite is very soft, ferrite is
soft and easily deformed, and the iron-phosphorous eutectic is hard and
brittle, characteristics which tend to produce relief, even in the early
stages of grinding on papers (again, see Chapter 3, Specimen Preparation
for Image Analysis, for further information). Such polishing artifacts
cannot be corrected during further polishing. Also, plastic deformation
Fig. 13
Fig. 14
recognizes at least 256 gray levels, and its sensitivity to different gray
levels is stable, as opposed to nonlinear in the case of humans. Therefore,
an image analysis system registers all these subtle variations, which
affects the results of the analysis. New-generation optics in contemporary
microscopes compensate for the errors mentioned, leading to perfect
images (Fig. 16).
While sources of bias are difficult to quantify objectively, the following
guidelines help to minimize bias:
• Follow specimen preparation guidelines carefully.
• Thoroughly clean the specimen after each grinding and polishing step (even a single particle of abrasive material can damage the polished section).
• Use automatic polishing equipment if possible.
• Use selective etching, if possible, to reveal microstructural features of interest.
• Use a modern microscope with the best optics.
Proper interpretation of image analysis results requires consideration of
the potential effect of magnification at which a microstructure is observed.
Magnification influences variation in the values measured for each
analyzed image; at high magnification, the microstructure of a material is
likely to be inhomogeneous. Figure 17 shows images of microstructures
taken at random positions from a randomly oriented section of a
two-phase structure. Graphical results of volume fraction measurements
carried out on each field show a considerable scatter. However, the same
material, if studied under sufficiently low magnification (between 100× and 200×), yields nearly constant values of the volume fraction of the
Fig. 15
Fig. 16
Fig. 17
d = 1.22λ/(2NA) (Eq 4)
modern microscopes is 22 mm (0.9 in.) (20 mm, or 0.8 in., for earlier
models).
Next, it is necessary to compute the physical resolution of a CCD
image. A typical low-cost camera containing a 13 mm (0.5 in.) CCD element with an image size of 640 × 480 pixels yields 15.875 μm per pixel. Comparing this value with the theoretical cell size in Table 2
suggests that the simplest camera is sufficient if no additional lenses are
placed between the camera and the objective (this is the most common
case). Many metallographic microscopes allow an additional zoom up to 2.5×, which, if applied, leads to a smaller theoretical cell size than that
offered by the camera discussed above, even if it is ideally inscribed into
the field of view.
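The resolution arithmetic behind these statements follows directly from Eq 4. The sketch below reproduces the kind of values tabulated in Table 2; the wavelength of 0.55 μm (green light) is an assumption consistent with the tabulated numbers, not stated explicitly here.

```python
# Sketch of the resolution arithmetic (Eq 4): objective resolution
# d = 1.22*lambda/(2*NA), theoretical cell size = d times the objective
# magnification, and the number of such cells across a 22 mm field stop.
# A wavelength of 0.55 um (green light) is assumed.
WAVELENGTH_UM = 0.55

def objective_resolution_um(na):
    return 1.22 * WAVELENGTH_UM / (2.0 * na)          # Eq 4

def theoretical_cell_size_um(na, magnification):
    return objective_resolution_um(na) * magnification

def cells_per_22_mm(na, magnification):
    return 22000.0 / theoretical_cell_size_um(na, magnification)

# A 10x objective with NA = 0.25, as in Table 2
print(round(objective_resolution_um(0.25), 2))        # 1.34 um
print(round(theoretical_cell_size_um(0.25, 10), 1))   # 13.4 um
print(round(cells_per_22_mm(0.25, 10)))               # about 1640 cells
```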
For light microscopy, using a camera having a resolution higher than 1024 × 1024 pixels generally results only in a greater amount of data and longer analysis time, without providing any additional information!
Figure 18 illustrates the effects of camera resolution. An over-sampled
image (for example, a very high-resolution camera was used) provides a
smooth image, but nothing more is observed than that observed using an
optimally grabbed image (Table 2). Reducing resolution loses some data;
however, major microstructural features still are detected using half the
optimal resolution.
Image processing can begin as soon as the image is placed in the
computer memory. Note that almost all image processing procedures lose
a part of initial information. This occurs because the largest amount of
information always is in the initial image, even if its quality is poor (this
reinforces why the image quality is so important). Also, some information
is lost at every step of image processing, even if the image looks
satisfactory to the human eye. Thus, it is strongly recommended that the
number of processing steps be minimized.
Two tricks can be used to improve the image quality without significant
(if any) loss of information. In the case of poor illumination, the image
grabbed by a camera can be very noisy, but grabbing a series of frames
and subsequently averaging them produces much better image quality
(this solution is sometimes built into the software for image acquisition).
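The frame-averaging trick can be sketched directly: grab several frames of the same field and average them pixel by pixel, so that random camera noise cancels while the underlying image does not. The frame data below are hypothetical gray values.

```python
# Sketch of noise reduction by frame averaging: several grabs of the same
# field are averaged pixel by pixel, which suppresses random camera noise
# (roughly by the square root of the number of frames averaged).
def average_frames(frames):
    """frames: list of equally sized 2-D lists of gray values."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(w)]
            for r in range(h)]

# Three noisy grabs of a field whose true gray level is 100 everywhere
frames = [[[98, 103], [101, 97]],
          [[102, 99], [100, 104]],
          [[100, 98], [99, 99]]]
print(average_frames(frames))   # each pixel pulled back toward 100
```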
Table 2

Objective    Numerical    Objective         Theoretical       Number of cells
             aperture     resolution, μm    cell size, μm     per 22 mm
4×           0.10         3.36              13.4              1642
10×          0.25         1.34              13.4              1642
20×          0.40         0.84              16.8              1310
40×          0.65         0.52              20.6              1068
60×          0.95         0.35              21.2              1038
60×, oil     1.40         0.24              14.4              1528
100×, oil    1.40         0.24              24.0              917
In the case of a noisy image, it often is helpful to digitize the image using
a resolution two times higher than necessary and, after appropriate
filtering, decrease the resolution to obtain a clean image. This technique
is used, for example, in scanning printed images, which removes the
printer raster. In general, image quality can be significantly improved, or,
in other words, some information can be recovered by knowing exactly
how the image was distorted prior to digitization.
There is no perfect procedure to detect microstructural features.
Depending on the particular properties of the image, as well as the
algorithms applied, some grain boundaries or particles will be lost and
some nonexistent grain boundaries or particles will be detected. This is
illustrated in the example of polystyrene foam, shown in Fig. 19. There is
no specific solution to this problem, as each operator will draw the grain
boundaries in a different manner, and the resulting number of grains will
show some scatter.
Figure 20 compares results of the number of grains in an austenitic steel
measured manually and automatically, which shows very good agreement
(correlation coefficient >0.99). This indicates that automatic measurements, although not perfect, can yield fully applicable results. However,
any segmentation method should be thoroughly tested on numerous
images prior to its final application in analysis.
Despite some errors that occur in practically every analyzed image,
automated methods have several advantages over manual methods:
• They are at least one order of magnitude faster than manual methods; thus, they follow the well-known rule of stereology, "do more, less well," mentioned elsewhere in this book.
• They are completely repeatable; that is, repeated analysis of any image always yields the same results.
• They require almost no training or experience to execute once the proper procedures have been established.
Nearly all measurements are executed on binary images, and binarization
of microstructures can be the source of numerous errors in analysis.
Therefore, binarization, or thresholding, is one of the most important
operations in image analysis. Image analysis software packages offer a
variety of thresholding procedures, which are carefully detailed in the
accompanying operation manuals.
The simplest method of binarization, called the flicker method, is
based on a comparison of the live and binary images displayed alternately. However, this method is not very precise, even if the same
person operates the apparatus. A much better, more objective method is
based on the analysis of profiles and histograms (Fig. 21).
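One widely used automatic, histogram-based threshold choice is Otsu's method, shown below purely as an illustration of the histogram approach; the chapter describes histogram analysis generally and does not prescribe this specific algorithm. The 8-level histogram is invented test data.

```python
# Illustration of an automatic, histogram-based threshold (Otsu's method,
# an assumption here, not necessarily the chapter's procedure): pick the
# gray level that maximizes the between-class variance of the two pixel
# classes produced by the threshold.
def otsu_threshold(histogram):
    """histogram: pixel counts per gray level; returns the level t such
    that pixels <= t form the dark class."""
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t, h in enumerate(histogram):
        w0 += h                       # dark-class pixel count
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * h
        m0 = sum0 / w0                # dark-class mean gray level
        m1 = (total_sum - sum0) / (total - w0)   # bright-class mean
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal 8-level histogram: dark phase around level 1, bright around 6
hist = [5, 20, 5, 0, 0, 4, 18, 6]
print(otsu_threshold(hist))   # level separating the two modes
```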
Fig. 19
Fig. 20
Fig. 21 Effect of different methods of choosing the binary threshold on binarization results. (a) Initial image; (b) profile with threshold level indicated by an arrow and corresponding binary image; (c) histogram with threshold level indicated by an arrow and corresponding binary image
Fig. 22
For an unbiased particle count, all particles totally included within the
guard frame and touching or crossing its right and bottom edges are
considered, while all particles touching or crossing the left and upper
frame edge, as well as the upper right-hand or lower left-hand corners, are
removed. This particle selection method (gray particles in Fig. 23b) may
seem artificial, but it ensures that size distribution is not distorted. Note
that when using a series of guard frames touching each other, the selection
rules ensure that every particle is counted only once.
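The selection rule above amounts to rejecting every particle that touches or crosses the left or upper frame edge and accepting all others. The sketch below uses bounding boxes as a simplified stand-in for full particle outlines; coordinates and frame are hypothetical.

```python
# Hedged sketch of the unbiased counting rule for a guard frame: reject
# particles touching or crossing the left or upper frame edge; keep those
# fully inside or touching the right/bottom edges. Across a series of
# adjacent frames this counts every particle exactly once.
def count_unbiased(particles, frame):
    """particles: bounding boxes (xmin, ymin, xmax, ymax); y grows down.
    frame: (xmin, ymin, xmax, ymax) of the guard frame."""
    fx0, fy0, _, _ = frame
    return sum(1 for x0, y0, x1, y1 in particles
               if x0 > fx0 and y0 > fy0)   # reject left/top contact

frame = (10, 10, 100, 100)
particles = [(20, 20, 30, 30),    # fully inside: counted
             (95, 40, 105, 50),   # crosses right edge: counted
             (5, 40, 15, 50),     # crosses left edge: rejected
             (40, 10, 50, 20)]    # touches top edge: rejected
print(count_unbiased(particles, frame))   # 2
```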
Other methods of particle selection for further analysis are given in Ref
4. However, the general idea is always the same, and the results obtained
are equivalent. If only the number of particles (features) are of interest,
this is easily determined using the Jeffries planimetric method, described
in Chapter 2, Introduction to Stereological Principles, and Chapter 5,
Measurements. It is not recommended to count particles in images after
removing particles crossed by the image edge (even if recommended in
standard procedures) because this always introduces a systematic error.
Moreover, the error is impossible to estimate a priori because it is a function of the number, size, and shape of the analyzed features.
Area Fraction. The classical stereological approach to measure area
fraction is based on setting a grid of test points over the image and
counting the points hitting the phase constituent of interest. With image
analysis, the digital image is a collection of points, so it is unnecessary to
overlay a grid of test points, and it is sufficient to simply count all the
points belonging to the phase under consideration. This counting method
is used to evaluate surface area. The statistical significance of such
analysis is different from that performed using classical stereological
methods. In other words, the old rules for estimating the necessary
number of test points are no longer valid, and other rules for statistical
evaluation of the scatter of results (for example, based on the standard
deviation) should be applied.
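The pixel-counting approach reduces to a single ratio. The sketch below is a minimal illustration on an invented binary image.

```python
# Sketch of area fraction by pixel counting: in a digital image every
# pixel is a test point, so the area fraction of a phase is simply the
# number of phase pixels divided by the total number of pixels.
def area_fraction(binary_image):
    """binary_image: 2-D list with 1 for the phase of interest, else 0."""
    phase = sum(sum(row) for row in binary_image)
    total = sum(len(row) for row in binary_image)
    return phase / total

image = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 1]]
print(area_fraction(image))   # 4 phase pixels out of 12
```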
Fig. 23
Fig. 24
measurements) analysis of very small objects. Changing the magnification can minimize bias for the latter source. Also, correction procedures
can be applied to the results of measurements carried out on small objects.
As a result, the main source of bias in digital measurements remains
improper detection. This reinforces the necessity of the highest possible
quality at all the stages of metallographic procedure (specimen preparation, microscopic inspection, and image processing), as each stage can
introduce unacceptably large errors to the final results of analysis.
In practice, it is relatively difficult to characterize measurement errors
quantitatively. One possible solution is to use model materials or images whose structural characteristics are known exactly, in order to determine the error experimentally. Comparing apparent and predicted results allows an
approximation of the measure of accuracy of applied methods. It also is
Table 3

Type of measurement   Characteristics                             Suggested action
Counting objects      Incorrect detection or erroneous            Apply proper correction procedures
                      counting of particles crossed by            (described in this Chapter)
                      the image edge
Distances             Incorrect detection                         Avoid measurement of objects having
                                                                  length smaller than 10 pixels

Fig. 25
Fig. 26
Feature analyzed                           Example of structure                                  Suggested secondary criterion
Grains filling the space                   Austenitic steels
Dispersed particles, 10% area fraction     Small dispersed particles
Mixture of two constituents                Ferritic-pearlitic steels
Inclusions                                 Pores in sintered materials, nonmetallic inclusions
x̄ = (1/n) Σ xᵢ, summed over i = 1 to n (Eq 5)

s = √[Σ (xᵢ − x̄)² / (n − 1)] (Eq 6)

CL = t(α, n−1) · s/√n (Eq 7)
where t(α, n−1) is the value of the t-statistic (Student's) for the significance level α and n − 1 degrees of freedom. Exact values of the t-statistic can be found in statistical tables, or are easily accessible in any software for statistical evaluation, including the most popular spreadsheets.
Estimating the confidence level for α = 0.05, for example, means that when repeating the measurements, 95% (1 − α) of the results will be greater than x̄ − CL and smaller than x̄ + CL. Another useful conclusion
is that if the number of measurements is large (>30), then approximately
99% of the results should not exceed the confidence level defined as
follows:
CL0.99 = 3s
(Eq 8)
To summarize, confidence level is a very convenient statistical characteristic for interpretation of results, and always is proportional to the standard deviation. Therefore, evaluation of the standard deviation is, in practice, essential. A related measure is the coefficient of variation:

CV = s/x̄ (Eq 9)
Graphite contents

Sample   Mean   Deviation   CV
A        14.1   0.52        0.037
B        14.1   1.93        0.14
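The statistics used above (Eq 5, 6, and 9) can be sketched in a few lines; the snippet reproduces the CV values for samples A and B from their reported means and deviations.

```python
# Sketch of the statistics in Eq 5, 6, and 9: the mean, the standard
# deviation with n - 1 in the denominator, and the coefficient of
# variation CV = s / xbar.
import math

def mean(xs):
    return sum(xs) / len(xs)                              # Eq 5

def std_dev(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))  # Eq 6

def coefficient_of_variation(m, s):
    return s / m                                          # Eq 9

# Reproduce the CV column for samples A and B above
print(round(coefficient_of_variation(14.1, 0.52), 3))   # sample A: 0.037
print(round(coefficient_of_variation(14.1, 1.93), 2))   # sample B: 0.14
```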
Both samples have the same mean graphite content, but the deviation in
content (individual results) for sample B is four times higher than for
sample A. Moreover, the scatter of results for sample B is so large that the
99% confidence level is approximately equal to half of the measured
value. One interpretation of results is that the graphite content in sample
B is inhomogeneous. Another is that the measurements for sample B were
performed with different precision. If these interpretations cannot be
Data Interpretation
Interpretation of the numbers quantifying a microstructure can be carried out from two perspectives: a purely mathematical one, which concentrates on statistical significance, and a materials-science-oriented one, which correlates microstructural descriptors with materials properties. In the former approach, the key question is whether or not the obtained estimates of a given parameter are precise enough to verify theoretical models for materials properties and processes from the microstructure. It is important to keep in mind the stochastic (random) nature of the geometry of microstructural features with respect to diversity in their size and shape and the profound degree of disorder in their spatial distribution. This randomness means that the description of microstructural elements requires the application of mathematical statistics and, in turn, the concepts of populations, distribution functions, estimators, and confidence intervals. In this context, the population of grains shown in Fig. 27 for a polycrystalline nickel-manganese alloy is characterized by the following numbers: v̄ = 25 μm³ and s = 5 μm³, where v̄ is the mean value of grain volume and s is the standard deviation.
Microstructural elements cannot be characterized by single numbers of an unconditional character, due to the diversity in the numbers describing the size, shape, and location of individual elements. The numbers derived from measurements are, in fact, estimates of preselected parameters, not true values. More advanced statistical analysis shows that, for a 95% confidence level, the mean volume of the grains is estimated as E(V) = 25 ± 2 μm³. In a series of experiments carried out on the same set of specimens, it is likely that different values of the same parameter will be obtained, which underlines the need for analysis of the statistical significance of differences in the values of microstructural descriptors. Various statistical tests for proving statistical significance are available. These tests, not discussed in the present text, can be found in a number of books on mathematical statistics.
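As one concrete example of such a significance test, a two-sample comparison of means can be sketched as follows. The grain-volume data are hypothetical, and the Welch form of the t statistic is one common choice, not the specific test the authors had in mind.

```python
import math
import statistics

def welch_t(xs, ys):
    """Welch's t statistic for the difference of two sample means; compare
    its magnitude against a critical value from a t table."""
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    diff = statistics.mean(xs) - statistics.mean(ys)
    return diff / math.sqrt(vx / len(xs) + vy / len(ys))

# hypothetical mean grain volumes (um^3) measured on two specimens
a = [24.1, 25.3, 26.0, 24.8, 25.2, 24.6]
b = [27.9, 28.4, 27.2, 28.8, 27.5, 28.1]
t_stat = welch_t(a, b)
# |t_stat| far exceeds the ~2.6 critical value (alpha = 0.05, ~10 df),
# so the difference in mean grain volume is statistically significant
```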
An example of data interpretation based on using microstructural descriptors to understand materials properties is illustrated in Fig. 28, which shows the relationships between porosity and density of a ceramic material (Fig. 28a) and between hardness and the mean intercept length for a polycrystalline metal (Fig. 28b). The relationship between density and porosity can be used to predict the true density of a sintered compound, while the relationship between hardness and the mean intercept length can be used to explain how material hardness can be controlled by means of controlling grain size.
Parameters used to model the properties of a material almost never can be measured directly from images of its microstructure, because the information is from a two-dimensional (2-D) view rather than from three-dimensional (3-D) space.
Fig. 27
Fig. 28 Experimental data illustrating the relationships between material properties and microstructures. (a) Plot of ceramic density against volume fraction of pores. Courtesy of P. Marchlewski. (b) Plot of Brinell hardness of an austenitic stainless steel against the inverse square root of the mean intercept length, (l̄)^(−1/2), μm^(−1/2). Courtesy of J.J. Bucki
Fig. 29
Binary image of grains in austenitic steel ready for digital measurements. All artifacts are removed, and the continuous network of
grain boundaries is only one pixel wide.
Fig. 30
Grain size area (in pixels) distribution obtained from binary image in
Fig. 29
Fig. 31
Number-weighted (black bars) and area-weighted (gray bars) distributions of grain size of an annealed austenitic steel. The area-weighted distribution seems to be more informative.
In conclusion, note that the scope and depth of interpretation of the data
describing the microstructural features of a material also depends on the
purpose of quantification, including:
O Quality control
O Modeling materials properties
O Quantitative description of a microstructure
Each application of quantitative descriptions differs in the required degree
of interpretation. In the case of quality control, attention is focused on
reproducibility of the data; no physical meaning of the numbers obtained
from image analysis is needed. Various procedures can be used, and the
basic requirement is to follow systematically the one selected. The major
limitation of this approach is that the results obtained in one study cannot
be compared with the results obtained in other investigations. On the
other hand, no special requirements are imposed on the image analysis
technique except the condition of reproducibility of measurements.
In the case of measurements carried out to derive numbers for modeling
materials properties, the freedom in selecting microstructural parameters
is restricted. In this situation, a given physical and mathematical model
explaining the properties of the material is required in addition to the
reproducibility requirement.
the types of plots in Fig. 33. The distribution functions can be used to
determine the degree to which they agree with theoretical functions, and
changes in distribution functions can be interpreted in terms of a shift in
the mean value and possible increase in their width. In the case of grain
size, the shift is an indication of grain growth. If the shape of the
normalized distribution function does not change (compare Fig. 33a and
b), the shift could be interpreted as normal grain growth. By comparison,
changes in the shape of normalized distribution functions indicate
abnormal grain growth (compare Fig. 33a and c).
Due to the randomness of the size of grain sections, the relatively large
grain sections that occupy a significant fraction of the image account for
a small fraction of the total population; two large sections in Fig. 32(c)
account for approximately 15% of the image area. Weighted plots of
relevant values make it easier to visualize the effect of such grains
appearing with low frequency. The area-weighted plot of A, or A · f(A), is more sensitive to the presence of large grain sections (Fig. 34).
The results of grain section area measurements frequently are converted into circle-equivalent grain section diameters. Both the mean value of grain section area, Ā, and the equivalent diameter, d̄, can be used as a measure of grain size. However, it should be pointed out that these two parameters do not account for the 3-D character of grains and can be used only in comparative studies.
Mean intercept length is a more rigorous stereological measure of grain
size. In the case of an isotropic (uniform properties in all directions)
structure, it can be measured using a system of parallel test lines on
random sections of the material. Anisotropic (different properties in
different axial directions) structures require the use of vertical sections
and a system of test cycloids, which provide only a mean value, and not
a distribution of intercept length.
Fig. 32 Three typical microstructures of a single-phase polycrystalline material, differing in grain size (ascending order in images a, b, and c) and in grain-size homogeneity (grains in image c are more inhomogeneous than those in a and b).
Fig. 33
Fig. 35). Mean values of shape factors can be obtained for grain sections
and appropriate distribution functions. As a rule of thumb, 10% or less
variation of the mean value of shape factors can be viewed as insignificant.
Two-Phase Materials. Figure 36 shows examples of two-phase microstructures, which basically are described in terms of the volume fractions of the phases, (V_V)α and (V_V)β. Measurements of volume fractions usually are easily carried out using image analyzers, and the precision of such measurements is estimated using model reference images. In practical applications, the major source of error in estimating volume fraction is biased selection of images for analysis. Images must be randomly sampled in terms of sectioning of the material and positioning of the observation field. Other sources of error, such as overetching, have been discussed previously.
Fig. 34
Fig. 35
Estimates of volume fraction are used directly to interpret the physical and mechanical properties of several materials, for example, density, conductivity, and hardness. Some properties also are influenced by the degree of phase dispersion (see also Chapter 6, Characterization of Particle Dispersion). Under the condition of constant volume fractions in particulate systems, the dispersion of phases can be described by the size of individual particles (the concept of particle size cannot be applied to the interpenetrating structures of some materials). Another approach to the description of dispersion can be based on measurements of the specific surface area, S_V, of interphase boundaries. This parameter defines the area of interphase boundaries per unit volume.
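For isotropic structures, the standard stereological relation S_V = 2 · P_L (boundary intersections per unit length of test line) makes this measurement straightforward on a binary image; each pixel row can serve as a test line. A minimal sketch, with a toy image and an assumed pixel size:

```python
def surface_density(binary, pixel_size_mm):
    """Estimate S_V = 2 * P_L for an isotropic structure by counting
    intersections of horizontal test lines with phase boundaries."""
    crossings = 0
    total_length = 0.0
    for row in binary:
        for left, right in zip(row, row[1:]):
            if left != right:        # 0/1 transition = boundary intersection
                crossings += 1
        total_length += (len(row) - 1) * pixel_size_mm
    p_l = crossings / total_length   # intersections per unit test-line length
    return 2.0 * p_l

# toy binary image (1 = second phase); assumed 1 um (0.001 mm) pixels
phase = [
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
]
s_v = surface_density(phase, pixel_size_mm=0.001)  # result in mm^-1
```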
Values of S_V for interphase boundaries can be obtained relatively easily with the help of image analyzers for isotropic structures. In this case, one can apply a grid of parallel test lines. On the other hand,
(a)
(b)
Fig. 36
Typical two-phase microstructures of a particle-modified polycrystalline matrix (the analyzed phase, graphite, is black). (a) Particle volume, 10.5% of material volume; surface-to-volume ratio, 18.6 mm⁻¹; number of particles per unit volume computed from stereological equations, 8480 mm⁻³. (b) Particle volume, 12% of material volume; surface-to-volume ratio, 10.8 mm⁻¹; number of particles per unit volume computed from stereological equations, 1300 mm⁻³
Fig. 37
Fig. 38
Fig. 39
sections. On the other hand, small scatter of the points indicates that the
total projected length is a well-chosen parameter for analysis of the
fracture process of HSLA steels.
Effect of Boron Addition on the Microstructure of Sintered Stainless Steel. Sintering of metal powders allows both exotic and traditional (e.g., tool steel) materials to be obtained with properties impossible to achieve via other processing routes. One of the problems arising during sintering of stainless steels is the presence of porosity in the final product. This problem can be overcome by liquid-phase sintering through the addition of boron. This requires determining the optimum boron content, which is discussed subsequently based on part of a larger project devoted to this problem (Ref 18).
A typical example of a microstructure of a material produced via
liquid-phase sintering is shown in Fig. 40(a), which contains both pores
and a eutectic phase. From a theoretical point of view, it is important to
quantify the development of the eutectic network, and it is evaluated by
counting the number of closed eutectic loops per unit test area. Measurement results of detected loops (denoted in Fig. 40b as white regions) are
shown in Fig. 41. The number of closed loops increases rapidly with increasing boron content. Therefore, this unusual parameter seems to be a good choice for interpreting the process of network formation. Note that no eutectic phase is detected for boron additions smaller than 0.4%.
Liquid-phase sintering causes a decrease in porosity, as shown in Fig. 42.
The amount of pores is easily quantified using simple area fraction
measurements. Based on these relatively simple measurements, it is
clearly visible that a boron addition greater than 0.6% does not result in
further decrease in porosity. This observation was also confirmed by
means of classical density measurements (Ref 18).
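The area-fraction measurement used here is simple pixel counting; a minimal sketch with a toy binary image (pore pixels set to 1) could look like this:

```python
def area_fraction(binary):
    """A_A = detected pixels / total pixels; by stereology, A_A estimates
    the volume fraction V_V."""
    detected = sum(sum(row) for row in binary)
    total = sum(len(row) for row in binary)
    return detected / total

# toy binary image of a sintered section: 1 = pore pixel, 0 = matrix
pores = [
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
porosity = area_fraction(pores)  # 3 of 12 pixels detected
```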
(a)
Fig. 40
(b)
Fig. 41
Fig. 42
Conclusions
The preceding analyses and examples show that there is a specific relation between classical stereology and image analysis. Stereological methods (which have a much longer history than image analysis) rest on a well-established theoretical background built on developments in, for instance, geometrical probability, statistics, and integral geometry. Thus, stereological methods constitute the basis of any quantitative interpretation of the results. On the other hand, image analysis allows automation and better repeatability and reproducibility of the quantification of microstructures. The main goal in interpretation of data from image analysis is to skillfully adapt the rules of classical stereology to these new tools. The methods of interpretation presented in this Chapter should help resolve the most frequent, basic issues encountered in laboratory practice.
References
1. L. Wojnar and W. Dziadur, Fracture of Ferritic Ductile Iron at Elevated Temperatures, Proc. 6th European Conference on Fracture, 1986, p 1941–1954
2. L. Wojnar, Effect of Graphite on Fracture Toughness of Nodular Cast Iron, Ph.D. dissertation, Cracow University of Technology, 1985 (in Polish)
3. E.E. Underwood, Quantitative Stereology, Addison-Wesley, 1970, p 1–274
4. L. Wojnar, Image Analysis: Applications in Materials Engineering, CRC Press, 1998, p 1–245
5. J. Ryś, Stereology of Materials, Fotobit-Design, 1995, p 1–323 (in Polish)
6. K.J. Kurzydłowski and B. Ralph, The Quantitative Description of the Microstructure of Materials, CRC Press, 1995, p 1–418
7. A.J. Baddeley, H.J.G. Gundersen, and L.M. Cruz-Orive, J. Microsc., 142, 259, 1986
8. J. Bystrzycki, W. Przetakiewicz, and K.J. Kurzydłowski, Acta Metall. Mater., 41, 2639, 1993
9. J.C. Russ, The Image Processing Handbook, 2nd ed., CRC Press, 1995, p 1–674
10. J. Chrapoński, Analysis of Stereological Methods Applicability for Grain Size Evaluation in Polycrystalline Materials, Ph.D. dissertation, Silesian University of Technology, Katowice, 1998 (in Polish)
11. J.R. Taylor, An Introduction to Error Analysis, Oxford University Press, 1982, p 1–297
12. K.J. Kurzydłowski and J.J. Bucki, Scr. Metall., 27, 117, 1992
13. Standard Test Methods for Characterizing Duplex Grain Sizes, E 1181-87, Annual Book of ASTM Standards, ASTM, 1987
14. G. Żebrowski, Application of the Modified Hall-Petch Relationship for Analysis of the Flow Stress in Austenitic-Pearlitic Steels, Ph.D. dissertation, Warsaw University of Technology, Warsaw, 1998 (in Polish)
15. Standard Practice for Determining Inclusion Content of Steel and Other Metals by Automatic Image Analysis, E 1245-89, Annual Book of ASTM Standards, ASTM, 1989
16. Standard Practice for Preparing and Evaluating Specimens for Automatic Inclusion Assessment of Steel, E 768-80, Annual Book of ASTM Standards, ASTM, 1985
17. A. Zaczyk, W. Dziadur, S. Rudnik, and L. Wojnar, Effect of Non-Metallic Inclusions Stereology on Fracture Toughness of HSLA Steel, Acta Stereol., 5/2, 1986, p 325–330
18. R. Chrząszcz, Application of Image Analysis in the Inspection of the Sintering Process, master's thesis, Cracow University of Technology, Cracow, 1994 (in Polish)
CHAPTER
8
Applications
Dennis W. Hetzner
The Timken Co.
While a macro can perform a very large number of different instructions, the basic operations performed in any IA procedure generally include all or some of the following:
1. SYSTEM SETUP: Information that needs to be defined only once is
included at the start of the macro. This may include information
regarding the type of input to be processed, definition of the
properties, limits of things being measured, and system magnification.
2. ACQUIRE GRAY IMAGE
3. SHADING CORRECTION
4. GRAY IMAGE PROCESSING
5. IMAGE SEGMENTATION: Creates a binary image
6. BINARY IMAGE PROCESSING: Image amendment or editing
7. MEASUREMENTS
8. WRITE MEASUREMENTS TO DATABASE
9. FINAL ANALYSIS: Statistical analysis, histograms, off-line processing of data, and further analysis
For repetitive measurements, a for or while statement is inserted after the preliminary setup, and a continue or next instruction follows the measurement instructions. The examples that follow briefly outline which operations are used to solve specific IA problems and include photomicrographs showing the IA operations and calculations of final results. In several cases, the macros used to perform the IA tasks are described generically, using standard computer-programming terminology and stereological terms. Terminology specifically related to a particular brand of IA system is avoided.
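The nine steps above can be sketched as a generic pipeline. The function arguments below are placeholders standing in for vendor-specific instructions, not a real IA system API:

```python
def run_macro(fields, acquire, correct, segment, amend, measure):
    """Schematic rendering of the macro steps; the function arguments are
    placeholders for vendor-specific instructions, not a real IA API."""
    # 1. SYSTEM SETUP would happen once, before the loop
    database = []                           # 8. measurement database
    for field in fields:                    # repetitive FOR loop
        gray = correct(acquire(field))      # 2-3. acquire + shading correction
        binary = amend(segment(gray))       # 4-6. segment to binary and edit
        database.append(measure(binary))    # 7. measurements
    # 9. FINAL ANALYSIS: here, simply the mean of the per-field measurements
    return sum(database) / len(database) if database else 0.0

# toy usage: identity processing; "measurement" = fraction of detected pixels
fields = [[[0, 1], [1, 1]], [[0, 0], [0, 1]]]
mean_fraction = run_macro(
    fields,
    acquire=lambda f: f,
    correct=lambda g: g,
    segment=lambda g: g,
    amend=lambda b: b,
    measure=lambda b: sum(map(sum, b)) / 4.0,
)
```

Real macros differ mainly in which concrete operations fill each slot; the setup-once, loop-per-field shape stays the same.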
Gray Images
The quality and integrity of the specimens that are to be evaluated are
probably the most critical factors in IA. No IA system can compensate for
poorly prepared specimens. While IA systems of today can perform gray
image transformations at very rapid rates relative to systems manufactured in the 1970s and 1980s, gray image processing should be used as a
last resort for metallographic specimens. Different etches and filters (other
than the standard green filter used in metallography) should be evaluated
prior to using gray image transformations. To obtain meaningful results,
the best possible procedures for polishing and etching the specimens to be
observed must be used. Factors such as inclusion pullout, comet tails, or
poor or variable contrast caused by improper etching cannot be eliminated
by the IA system. There is no substitute for properly prepared metallographic specimens.
Fig. 1
Fig. 2 Analysis of ideal gray level square in Fig. 1: (a) histogram; (b) cumulative gray level histogram
Fig. 3
particular, consider the pixel with a gray level of 120 (the center pixel in the 3 × 3 matrix shown). Applying a filter of the form

a b c
d e f
g h i

to the matrix of pixels

240 255 270
110 120 130
 19  20  21

with a mean (low-pass) kernel, in which each weight is 1/9, replaces the center pixel by (240 + 255 + 270 + 110 + 120 + 130 + 19 + 20 + 21)/9 = 1185/9 ≈ 132.
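The kernel operation just described is a 3 × 3 convolution. A minimal pure-Python sketch (border pixels simply copied unchanged), applied to the matrix from the text as it is arranged above:

```python
def convolve3x3(img, kernel, scale=1.0):
    """Apply a 3x3 kernel at each interior pixel; border pixels are copied."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += kernel[dy + 1][dx + 1] * img[y + dy][x + dx]
            out[y][x] = acc * scale
    return out

# the 3 x 3 neighborhood from the text and a mean (low-pass) kernel of
# ones scaled by 1/9 (see Table 1)
img = [[240, 255, 270],
       [110, 120, 130],
       [19, 20, 21]]
mean_kernel = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
smoothed = convolve3x3(img, mean_kernel, scale=1 / 9)
# smoothed[1][1] is 1185/9, about 132
```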
Fig. 4 Scanner response: (a) electron-beam machined circles; (b) AM 350 precipitation-hardenable stainless steel (UNS S35000)
the indentation where a contrast change occurs, the image is bright. Thus,
the Sobel filter reveals the edges of the indentation, but not the remainder
of the image. Table 1 lists several common filters and corresponding
kernels; these are just a few of the possible kernels that can be
constructed. For example, at least five different 3 × 3 kernels can be
Fig. 5 Vickers microindentation in an ingot-iron specimen. (a) Regular image. (b) Sobel transformation of image
Table 1 Common filters and corresponding kernels

Operator: Low-pass, mean
Kernel:
1 1 1
1 1 1   (× 1/9)
1 1 1
Comments: Smoothing, noise reduction

Operator: Median
Kernel: rank-order filter over the 3 × 3 neighborhood
a b c
d e f
g h i
Comments: Noise reduction

Operator: Gaussian
Kernel:
1 2 1
2 4 2
1 2 1
Comments: Smoothing

Operator: Laplacian
Kernel:
 0  1  0
 1 −4  1
 0  1  0
Comments: Edge enhancement

Operator: Gradient
Kernel:
0 1 0
1 0 1
0 1 0
Comments: Edge enhancement

Operator: Smooth
Kernel:
0 1 0
1 0 1   (× 1/4)
0 1 0
Comments: Smoothing, noise reduction
Image Measurements
Consider again only the square consisting of pixels with a gray level of 125 in Fig. 1. If all the pixels with a gray level of less than 200 were counted, only the pixels within the square would be used for measurements. A binary image of this field of view can be formed by letting the detected pixels be white and the remaining pixels be black. Assume that each pixel is 2 μm long and 2 μm high. Thus, the perimeter of the square is 5 + 5 + 5 + 5 = 20 pixels, and the actual perimeter of the square is 20 pixels × 2 μm, or 40 μm. Similarly, the area of the square is 5 × 5, or 25, square pixels, and the actual area is 25 pixels × 4 μm² per pixel, which equals 100 μm².
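The pixel-to-physical-unit conversion above is simple enough to write down directly; a sketch using the 5 × 5 square and 2 μm pixels from the text:

```python
def physical_area(pixel_count, pixel_size_um):
    """Each pixel contributes pixel_size^2 to the area (um^2)."""
    return pixel_count * pixel_size_um ** 2

def physical_perimeter(edge_pixel_count, pixel_size_um):
    """Each perimeter pixel step contributes one pixel length (um)."""
    return edge_pixel_count * pixel_size_um

area_um2 = physical_area(25, 2.0)            # 25 square pixels -> 100 um^2
perimeter_um = physical_perimeter(20, 2.0)   # 20 pixel steps -> 40 um
```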
This is the simplest problem that could be encountered. Because most
images contain more than one object or feature and a range of gray levels,
different parameters describing the image can be measured. Two types of
generic measurements can be made for all IA problems: field measurements and feature-specific measurements (see also Chapter 5, Measurements, and Chapter 7, Analysis and Interpretation). Field measurements refer to bulk properties of all features or portions of features
contained within a measurement frame. Feature-specific measurements
refer to properties associated with individual objects within a particular
measurement frame.
There are several different methods of defining the measurement frame.
The simplest approach is to measure everything that is observed on the
TV monitor. The measured features are represented by the white dots in
Fig. 6(a). The use of this type of measuring frame is well suited for field
measurements but has some limitations when feature-specific measurements are required. When making feature-specific measurements, it is
very important to understand how the selection of the measurement frame
can affect the accuracy of measurements. Even if the features being
measured do not vary greatly in size or shape, measuring everything
within the frame border will bias the results because any feature on the
frame border is truncated in size, as shown in Fig. 6 (a). Similarly, if only
the objects totally inside the image frame are measured, long objects that
extend beyond the boundaries of the measurement frame are disregarded,
as shown in Fig. 6(b). All objects can be properly measured if the
measuring frame is sized to be somewhat smaller than the input frame, as
shown in Fig. 6(c). However, in situations where contiguous fields of
view are analyzed, use of the frame in Fig. 6(c) may result in some
features being measured two times. A possible solution in this situation is
to measure objects extending beyond the top and right side of the frame
(Fig. 6d) or to measure objects extending beyond the bottom and left side
of the frame (Fig. 6e). These types of measuring frames are well suited for
analysis using a motorized stage on the microscope.
Parameter and definition: Area; Perimeter; Vertical projection; Horizontal projection
Fig. 6
Measurement frames used for image analysis. (a) All features displayed
on the monitor are measured. (b) Features only within the frame are
measured. (c) Features inside and touching the frame are measured. (d) Features
inside and touching the right and top sides of the frame are measured. (e) Features
inside and touching the left and bottom sides of the frame are measured. (f)
Features only inside the circular frame are measured. (g) Features inside and
touching the circular frame are measured.
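The frame rule of Fig. 6(d) can be sketched as a bounding-box test. The helper name and coordinate convention below (y = 0 at the top of the frame) are illustrative assumptions, not taken from any particular IA system:

```python
def measure_this_field(bbox, frame_h):
    """Fig. 6(d)-style rule: skip features touching the bottom or left frame
    edge; they are measured when the adjacent field of view is analyzed,
    so no feature is counted twice in contiguous fields."""
    x_min, y_min, x_max, y_max = bbox
    touches_left = x_min <= 0
    touches_bottom = y_max >= frame_h - 1
    return not (touches_left or touches_bottom)

# bounding boxes as (x_min, y_min, x_max, y_max) in a 100 x 100 pixel frame
inside = (10, 10, 30, 30)      # fully inside: measured
top_right = (80, 0, 99, 20)    # touches top and right edges: still measured
bottom = (40, 80, 60, 99)      # touches bottom edge: left to the next field
```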
Parameter and definition:
Area
Perimeter
Feret 0: maximum feature diameter in the direction of the x-axis; Feret 0 = X max − X min
Feret 90: maximum feature diameter in the direction of the y-axis; Feret 90 = Y max − Y min
Length
Width
X max: maximum distance along the x-axis where pixels for the feature are detected
X min: minimum distance along the x-axis where pixels for the feature are detected
Fig. 7
y = a0 + a1x + a2x² (Eq 2)

where x corresponds to the gray level and y is the gray level frequency value.
The first derivative of this curve is y′ = a1 + 2a2x. Setting y′ = 0 yields the relative minimum point of the curve. The relative minimum is the proper threshold setting. This type of analysis can be performed by exporting the gray level distribution to a spreadsheet. Similarly, the relative minimum can be visually approximated by carefully observing the gray level histogram.
When a line scan from this specimen is compared to that from the electron-beam machined circles in Fig. 4(a), the phase to be detected varies in gray level, in addition to beam overshoot and undershoot. Despite considerable scatter in the gray level histogram (Fig. 8a), it can be represented by a second-degree polynomial (Fig. 8b). Using this analysis, the threshold setting for measuring the area fraction of delta ferrite in AM 350 is 137. No image editing or image amendment was used for the
Fig. 8
Fig. 9
Fig. 10 Annealed AISI type 1018 carbon steel etched with picral: (a) gray level histogram; (b) second-degree least-squares fit
Fig. 11 Steps in threshold setting for AISI type 1018 carbon steel: (a) microstructure revealed using picral etch; (b) pearlite detected; (c) holes filled
ACQUIRE TV INPUT
APPLY SHADING CORRECTION
IMAGE SEGMENTATION: based on previously described methods
BINARY IMAGE PROCESSING - FILL HOLES
MEASUREMENTS: field pearlite area percent
STORE MEASUREMENT IN DATABASE
Fig. 12
In fact, this very simple program may not operate properly on every system. The early computer control systems used on IA systems were quite slow compared with the high-speed processor-based systems of today. Furthermore, from the early stages of automation through today's systems, hardware components such as the automatic stage, automatic focus, and TV camera have always responded at a much slower rate than the computer can send instructions to them. It often is necessary to artificially slow down the IA system to achieve proper performance.
For example, consider what happens as the computer instructs the stage
to move to the next position. The stage rapidly responds and some
vibration occurs when the stage stops moving. Even before the stage has
started to move, the computer is instructing the microscope to focus or
move to the proper focusing position. As this instruction is being carried
out, the computer is further instructing the system to acquire an image.
Image processing begins at this point, and soon the stage is again moving.
Several problems can develop as the program runs. Once the stage
changes position, no instructions should be processed until any vibrations
created by the stage motion have subsided. At this point, it may be
necessary for the TV camera to have one or two seconds of exposure to the
next field of view to allow the automatic gain control (if included in the
camera system) to set up the proper white level for the input signal. In
addition, automatic focusing should not be attempted until any possible
vibrations from the stage movement have subsided and the camera has
stabilized. After a little time passes, the auto-focus instruction can be
initiated and the image can be captured. This may be accomplished by
replacing instruction 2 in the macro by the following group of instructions:
2a. FOR timer = 1; timer < 10; timer = timer + 1
2b. TVINPUT
2c. WAIT (0.1 sec)
is used to analyze CDA brass (Fig. 13a), the minimum gray level between the two phases is not well defined (Fig. 13b). However, illuminating the same field of view with red light yields a much sharper distinction between the constituents (Fig. 13c). To appreciate the difference between the two sources of illumination, the reader should try the following experiment. Using a 32× objective, first focus the microscope using a green filter. Then, replace the green filter with a red filter and refocus the microscope. Notice that there is a difference in the focal plane (z position) of the stage of approximately 8 μm.
Fig. 13 Thresholding brass: (a) microstructure of cold-drawn and annealed brass etched with Klemm's reagent; (b) gray level histogram using green filter; (c) gray level histogram using red filter
Image Amendment
In the previous discussion on thresholding annealed 1018 carbon steel,
pearlite detection is easily achieved due to the good contrast between
pearlite and ferrite obtained using a picral etch. A solution of 2 to 4% nital
etchant also can be used to reveal the microstructure of annealed 1018
carbon steel. In this case, ferrite grain boundaries are revealed together with the major constituents, ferrite and pearlite (Fig. 14a). Therefore, when the pearlite is properly detected, some ferrite grain boundaries also are
Fig. 14 Steps in threshold setting for AISI type 1018 carbon steel: (a) microstructure revealed using 2% nital etch; (b) image detected; (c) holes filled, followed by a square-kernel morphological open of 2
Table 3

Name | Instruction
Erode: n |
Dilate: n |
Outline |
Open: n | Erode: n, then Dilate: n
Close: n | Dilate: n, then Erode: n
Ultimate erosion |
Ultimate dilation |
Skeleton normal |
Skeleton invert, or skeleton by influence zones (SKIZ) | Expands binary image but maintains boundaries between objects
Fill holes |
Binary editing instructions: Accept; Reject; Line (draw a line); Cut (separate feature); Cover; Erase
(Eq 3)
(Eq 4)
or

G = 2.886 ln(NL) − 3.2888 = 2.886 ln(40.08) − 3.2888 = 7.36 (Eq 5)
Fig. 16 Ingot-iron image reconstruction: (a) microstructure of ingot iron etched with Marshall's reagent; (b) grain boundary detection and image inversion; (c) holes filled; (d) image inverted; (e) close 2, octagon performed; (f) image inverted; (g) binary exoskeleton performed; (h) grain boundary detection and three circles; (i) grain boundaries and circle intercepts; (j) grain areas within a rectangular image frame are measured; (k) number of grains within a circular image frame are counted
Having obtained the grain boundary image, the area of each grain can
easily be measured by inverting the binary image. This process makes the
boundaries black and the grains white. The area of each grain can then be
measured as a feature-specific property. For this type of analysis, selection
of the proper measuring frame is important. Only grains that are
Fig. 16 (continued) (h) grain boundary detection and three circles; (i) grain boundaries and circle intercepts; (j) grain areas within a rectangular image frame are measured; (k) number of grains within a circular image frame are counted
(Eq 6)
or

G = 16.99 − 1.443 ln(Ā) = 16.99 − 1.443 ln(537.87) = 7.92 (Eq 7)
Similarly, the number of grains within a known image frame can be used to obtain the stereological parameter NA, the mean number of features intercepted divided by the total test area. In this analysis, a circular image frame with a radius of 215 pixels (165.44 μm) is used. The area of this frame is 85,989 μm² (or 0.0860 mm²). When this type of analysis is performed using manual methods, the grains completely within the circular measuring frame are counted as one grain each. The grains that are only partially within the frame are counted as one-half of a grain each. The number of grains to use for NA is the sum of all the grains inside the frame plus one-half the number of grains partially within it. When using automated IA procedures, only the grains that are completely within the measuring frame are counted. NA is determined by the number of grains within the measuring frame and the actual sum of the areas of these grains. Because the IA system can measure the actual area of the grains, this method produces more accurate results than counting grains that are partially within the measuring frame as one-half grain each.
The average number of grains within the test frame is 122.6, and the standard deviation is 14.4. The average area of the grains within the measuring frame for the 10 fields of view is 73,859 μm², and the standard deviation is 3,762 μm² (Fig. 16k). Therefore, NA = 1659 mm⁻². The grain size is calculated from:

G = 1.443 ln(NA) − 2.954 = 1.443 ln(1659) − 2.954 = 7.74 (Eq 8)

In this particular example, the average area of the grains and grain boundaries is 77,424 μm². This is approximately 4.8% greater than the area of the grains alone. If this value were used for the measured grain area, the calculated grain size would be 7.68. Thus, the grain boundary area can account for approximately 0.06 of a grain size number.
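The three grain size relations used in this example (Eq 5, 7, and 8) are easy to wrap as functions; the constants below are taken directly from the worked equations in this chapter, so the functions reproduce the values 7.36, 7.92, and 7.74:

```python
import math

def g_from_nl(nl_per_mm):
    """Grain size number from mean intercept count N_L, 1/mm (Eq 5)."""
    return 2.886 * math.log(nl_per_mm) - 3.2888

def g_from_mean_area(area_um2):
    """Grain size number from mean grain section area, um^2 (Eq 7)."""
    return 16.99 - 1.443 * math.log(area_um2)

def g_from_na(na_per_mm2):
    """Grain size number from grains per unit area N_A, 1/mm^2 (Eq 8)."""
    return 1.443 * math.log(na_per_mm2) - 2.954

g_intercept = g_from_nl(40.08)       # about 7.36, as in Eq 5
g_area = g_from_mean_area(537.87)    # about 7.92, as in Eq 7
g_count = g_from_na(1659)            # about 7.74, as in Eq 8
```

The three estimates agree to within about half a grain size number, as the text discusses.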
To ensure the same fields of view are used to measure grain size using
each of the three methods, a set of 10 images is first stored on the
computer hard drive. After storage, the images are recalled to proceed
with the grain size analysis.
Macro Development. Before developing the macros used to solve
these different problems, the different types of variables used are
considered. Four types of variables commonly encountered in IA macro
routines, and for most computer languages, are:
O Integer
O Real
O Character
O Boolean
Integer and real variables are used in situations where numbers are
required to describe a certain property. Examples of integer variables in
IA applications are an image number, field of view number, indexing
variable controlling a loop, and a line in a database. Variables such as the
system calibration constant or the area, length, and perimeter of an object
are examples of real variables. Real variables often are referred to as
floating-point variables, which means there is a decimal point associated
with the variables. Character-type variables include letters of the alphabet
and other symbols used to describe the properties of something. Character
often is abbreviated as Char. A group of characters placed together in
a specific order is referred to as a string. There are only two Boolean
variables, true and false. Boolean variables are used in macros to help
control how instructions are performed as the macro is running. Some
examples of the different types of variables are:
O INTEGER (a = 1, field number = 33, X pixel = 7, and number of Ferets = 8)
O REAL (perimeter = 376.2048, area percent = 23.20876, and NL = 14.8239)
O CHAR (a, b, x, %, and ?)
O BOOLEAN (x < 12, y = 17, and z = 125.87)
O STRING ("At first, this may seem confusing.")
The following examples briefly discuss how these variables may be used
in a macro routine. Again, each system may handle these types of
variables in a slightly different manner. For a more complete discussion of
variable types, the reader should refer to a basic computer textbook.
MACRO STORE IMAGES
1. SYSTEM SETUP
a. Delete previous images and graphics.
b. Load the shading corrector.
c. Set up the path for storing the images:
C:\ASM_BOOK\GRAINS.
2. Define String Variables:
a. s1 = "grain"
b. s2 = ".TIF"
FOR loop = 1; loop <= 10; loop = loop + 1
3. TVINPUT 1
4. Apply shading correction; the resulting image is 2
5. s = s1 + string(loop) + s2
6. Write s
7. Save Image 2, s
8. Move to next field of view
END FOR
The macro STORE IMAGES operates in the following manner:
1. Delete from the system any images and graphics from previous
applications. Load shading corrector for later use. Set path to instruct
the computer where to store the images that will be used for further
analysis. In this example, images will be stored on the C-drive in
folder ASM_BOOK\GRAINS.
2. Define two string variables (their use is explained later). Create a loop
instruction; the loop control variable, or loop, varies from 1 through
10.
3. Once in the loop, acquire a live TV image numbered 1.
4. Apply the shading corrector to the live image (the resulting image is
identified as number 2).
5. Define a new string variable s, which is the sum of three other string
variables.
6. The computer is instructed to write the variable s. The first time through
the loop, loop = 1; thus s = grain1.TIF. The second time through the
loop, s has the value grain2.TIF; the third time through the loop, s =
grain3.TIF, and so forth.
7. Rename image 2 as s and store to the computer C-drive in the
appropriate folder. Referring to the setup instructions, the first image
stored is C:\ASM_BOOK\GRAINS\grain1.TIF, the second image is
stored as C:\ASM_BOOK\GRAINS\grain2.TIF, and so forth.
8. Start the process again.
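The filename-building loop at the heart of this macro can be sketched in general-purpose Python rather than the IA system's macro language. The function name and folder below are illustrative, and the acquisition, shading-correction, and stage calls are only indicated as comments because they are system specific:

```python
import os

def store_images(folder="C:/ASM_BOOK/GRAINS", fields=10):
    """Mimic MACRO STORE IMAGES: build grain1.TIF ... grain10.TIF names."""
    s1, s2 = "grain", ".TIF"           # the two string variables
    names = []
    for loop in range(1, fields + 1):  # loop = 1 .. 10
        s = s1 + str(loop) + s2        # s = s1 + string(loop) + s2
        names.append(os.path.join(folder, s))
        # Here the real macro would acquire image 1, shade-correct it
        # into image 2, save image 2 under the name s, and move the
        # stage to the next field of view.
    return names

names = store_images()
print(names[0])  # path of the first stored image
```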
Stored images can later be recalled for grain size analysis. Because the
grain size is determined using three different methods, grain boundaries
are reconstructed and the reconstructed grain images are saved to disk for
additional analysis. The macro used to reconstruct the grains is:
MACRO RECONSTRUCT GRAINS
1. SYSTEM SETUP
a. Delete previous images and graphics.
b. Load the shading corrector.
c. Set up the path for storing the images:
C:\ASM_BOOK\GRAINS.
2. Define String Variables:
a. s1 = "grain"
b. s2 = ".TIF"
c. b1 = "bin_gr"
d. b2 = ".img"
FOR loop = 1; loop <= 10; loop = loop + 1
s = s1 + string(loop) + s2
bb = b1 + string(loop) + b2
write s, bb
3. LOAD IMAGE s, #1 (Fig. 16a)
4. IMAGE PROCESSING
b. SEGMENT IMAGE (detect grains and invert Darker than 200)
#1, #2
c. BINARY FILL HOLES #2, #3
d. INVERT IMAGE #3, #4
e. BINARY CLOSE, 2 pixels octagonally, #4, #5
f. INVERT IMAGE #5, #6
g. BINARY EXOSKELETONIZE IMAGE #6, #7
h. DISPLAY IMAGE #7 (white grains, black grain boundaries)
5. SAVE IMAGE #7, bb (the grains)
END FOR
System setup for this macro is similar to that for MACRO STORE
IMAGES. Two additional defined string variables, b1 and b2 are used to
store the reconstructed images, while s1 and s2 are used to retrieve the
previously stored gray images of the grain boundaries. A loop is created
to do ten different fields of view using the same variables from the
previous macro. Step 3 loads the previously stored images from the
computer; the image to be loaded is defined as s, and the new image is
referred to as #1. The terminology used for image processing is: image
process, input, output. For any image processing step, the first number or
name is the input image, and the second number or name is the output
image. The new image is segmented so the grains are detected (instruction
4b). Next, a binary hole filling operation (instruction 4c) helps remove
undetected regions within the grains. The image is then inverted so that
boundaries are an operational part of the image (instruction 4d). The
octagonal binary close by 2 operator first dilates the grain boundaries by
two pixels. This joins any pixels that are within four pixels of each other.
Then the image is eroded by two pixels restoring most of the grain
boundaries to their original thickness (instruction 4e). Another image
inversion (instruction 4f) makes the grains the portion of the image that
can be processed. A binary exoskeletonize operation thins the grain
boundaries that were enlarged as a result of the earlier operations
(instruction 4g). The reconstructed binary image is saved to the computer
drive, and the image number is defined by the value of the variable bb.
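The system's binary operators can be imitated with plain NumPy. The sketch below is a simplified stand-in, not the system's implementation: it uses a square rather than octagonal structuring element and omits the exoskeletonize step, but it shows how a close by 2 pixels (instruction 4e) bridges small gaps between boundary segments:

```python
import numpy as np

def dilate(img, r):
    """Binary dilation by r pixels with a square structuring element."""
    p = np.pad(img, r)
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= p[r + dy : r + dy + img.shape[0],
                     r + dx : r + dx + img.shape[1]]
    return out

def erode(img, r):
    # erosion is dilation of the complement
    return 1 - dilate(1 - img, r)

def close(img, r):
    # close = dilate then erode; joins pixels within 2*r of each other
    return erode(dilate(img, r), r)

# Two boundary segments separated by a 3-pixel gap
row = np.zeros((1, 12), dtype=np.uint8)
row[0, :4] = 1
row[0, 7:] = 1
joined = close(row, 2)  # the 3-pixel gap is bridged
print(joined[0])
```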
Feature-Specific Distributions
Nonmetallic inclusion distribution in steel is a good example of
using IA to make feature-specific measurements. This experiment involves the analysis of an AISI type 8620 (UNS G86200) low-alloy steel
specimen from a commercial-quality, air-melted heat. The material is
calcium treated to modify inclusion morphology (shape) and to lower the
sulfur and oxygen content of the steel. The specimen is from a
longitudinal plane from the center of a 150 mm (6 in.) diameter bar. The
weight percent of the inclusion-forming elements is 0.020S, 0.022Al, 27
ppm Ca, 9 ppm O, and 0.85Mn.
ASTM E 1245 provides a good set of guidelines to determine the
volume fraction of inclusions (second-phase constituents) in a metal using
automatic IA (Ref 10). For this particular alloy steel, there are very few
oxide inclusions relative to the number of manganese-calcium sulfides
because the oxygen content of the steel is only 9 ppm. Furthermore, the
majority of the oxide inclusions are encapsulated by sulfides due to
calcium treating. Therefore, in accordance with E 1245, all types of
inclusions are treated the same. Procedures to distinguish between the
various types of inclusions have been described elsewhere (Ref 11).
This analysis is performed using a 32× objective lens, and the
calibration constant of the system is 0.77 μm/pixel. A 620 × 480 pixel
image frame corresponds to a measuring frame area of 181,059.8 μm².
The analysis is performed on 300 fields of view, which corresponds to a
total observed area of 54.3 mm² (0.08 in.²). Length (maximum Feret),
thickness (minimum Feret), area, perimeter, anisotropy (length divided by
thickness), and roundness (R = P²/4πA) are measured for each inclusion.
These types of measurements are referred to as feature specific because
properties associated with each individual constituent in the image frame
are measured. In contrast, field measurements represent the entire sum of
a particular parameter for an entire field of view.

Table 4 Feature-specific inclusion measurement data for the AISI 8620 alloy steel

Parameter   Area, μm²       Perimeter, μm   Length, μm   Anisotropy, length/thickness   Roundness, P²/4πA
Mean        30.76           22.68           9.66         2.65                           1.67
Stdev       82.44           28.82           13.07        2.01                           1.31
Sum X       115,089         84,862          36,132       9,929                          6,241
Sum X²      28,964,783      5,031,112       988,181      41,525                         16,842
Min         2.37            6.75            2.76         1.13                           1.03
Max         3647.12         439.73          212.44       23.48                          25.99
P, perimeter; A, area
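Two of these derived, feature-specific parameters can be written as one-line functions; a minimal sketch (the circle check at the end is a hypothetical sanity test, since a perfect circle has roundness 1):

```python
import math

def anisotropy(length, thickness):
    """Anisotropy = length / thickness (max Feret / min Feret)."""
    return length / thickness

def roundness(perimeter, area):
    """Roundness R = P^2 / (4*pi*A); equals 1.0 for a perfect circle."""
    return perimeter ** 2 / (4.0 * math.pi * area)

# A circle of radius 5 (hypothetical feature) should give roundness 1
r = 5.0
print(roundness(2 * math.pi * r, math.pi * r ** 2))
```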
The specimen is oriented on the microscope stage so the longitudinal
plane of deformation (i.e., the rolling direction) is coincident with the
y-axis of the stage (Fig. 17). Shading correction is applied after acquiring
an image, and binary operations are used following image segmentation.
Any holes within the inclusions are filled (Fig. 17b), and a vertical close
operation of 3 pixels is used to join inclusions separated from each other
by a distance of approximately 4.6 μm in the deformation direction (Fig.
17c). A vertical open of 1 pixel is used to eliminate from the population
any inclusions less than 1.5 μm long (Fig. 17d), and a square binary close
of 2 pixels is used to join any inclusions within 3 μm of each other in any
direction (Fig. 17e). Results of measurements for each field of view are
written to a database, and statistical parameters, such as sums of
parameters and sums of the squares of the parameters, are tabulated (see
Table 4).
The inclusion length histogram (Fig. 18) is skewed to the right; that is,
relative to the mode of the distribution, there are many more size classes of
long inclusions than of short inclusions (see data in Table 5). A plot of the
log of inclusion length as a function of cumulative probability (Fig. 19)
suggests that the inclusion length has a log-normal distribution. The
statistical parameters of the log-normal distribution are calculated using
the length values listed in Table 4. The mean value of the inclusion length
is given by:
L̄ = (Σ Li)/n, summed over i = 1 to n (Eq 9)
Fig. 17 AISI type 8620 alloy steel bar (UNS G86200): (a) inclusions; (b) holes filled; (c) vertical close of 3 pixels; (d) vertical open of 1 pixel; (e) square close of 2 pixels
The standard deviation of the lengths is:

s = {[n Σ Li² − (Σ Li)²] / [n(n − 1)]}^1/2 (Eq 10)

The parameters of the log-normal distribution follow from L̄ and s:

α = ln[L̄² / (L̄² + s²)^1/2] (Eq 11)

β = {ln[(s/L̄)² + 1]}^1/2 (Eq 12)

The median and the 84% point of the distribution are then:

L50% = exp(α) (Eq 13)

L84% = exp(α + β) (Eq 14)

Fig. 18 Inclusion-length (Feret max) histogram for AISI type 8620 alloy steel containing 0.029% sulfur
Table 5 Inclusion length distribution data of an AISI 8620 alloy steel containing 0.029% sulfur

Length, μm   Frequency   Cumulative frequency   Cumulative probability, %
1            0           0                      0.00
2            0           0                      0.00
3            465         465                    12.43
4            781         1246                   33.30
5            547         1793                   47.92
6            370         2163                   57.80
7            225         2388                   63.82
8            227         2615                   69.88
9            118         2733                   73.04
10           74          2807                   75.01
11           119         2926                   78.19
12           68          2994                   80.01
13           63          3057                   81.69
14           65          3122                   83.43
15           51          3173                   84.79
16           37          3210                   85.78
17           43          3253                   86.93
18           37          3290                   87.92
19           28          3318                   88.67
20           19          3337                   89.18
25           16          3452                   92.25
30           11          3534                   94.44
40           ...         3625                   96.87
45           ...         3654                   97.65
50           ...         3672                   98.13
60           ...         3691                   98.64
70           ...         3711                   99.17
80           ...         3722                   99.47
90           ...         3725                   99.55
100          ...         3731                   99.71
Fig. 19 Log of inclusion length as a function of cumulative probability
and

L99.9% = exp(α + 3.09β) (Eq 15)

For the inclusion length data:

Mean = 9.6584 μm
Sum L² = 988,181
α = 1.7473
β = 1.0203
L50% = 5.74 μm
L84% = 15.92 μm
L99.9% = 134.30 μm
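The log-normal fit is easy to verify numerically. The sketch below uses the method-of-moments relations from the mean and standard deviation of the length data; small differences in the last decimal place relative to the values listed above are rounding effects:

```python
import math

def lognormal_params(mean, stdev):
    """Method-of-moments fit of a log-normal distribution."""
    beta = math.sqrt(math.log((stdev / mean) ** 2 + 1.0))
    alpha = math.log(mean) - beta ** 2 / 2.0
    return alpha, beta

mean, stdev = 9.6584, 13.07           # inclusion length, um
alpha, beta = lognormal_params(mean, stdev)
l50 = math.exp(alpha)                 # median length
l84 = math.exp(alpha + beta)          # 84% point
l999 = math.exp(alpha + 3.09 * beta)  # 99.9% point
print(round(alpha, 4), round(beta, 4))
print(round(l50, 2), round(l84, 2), round(l999, 1))
```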
The equivalent circular diameter of each feature is calculated from its measured area:

D = (4A/π)^1/2 (Eq 16)

For the diameter data:

Mean = 23.5632 μm
α = 3.0547
β = 0.4581
L50% = 21.22 μm
L84% = 33.54 μm
L99.9% = 87.39 μm
Table 6 Diameter distribution data

Diameter, μm   Frequency   Cumulative probability, %
1              ...         0.00
2              ...         0.00
3              ...         0.00
4              ...         0.05
5              ...         0.05
6              ...         0.05
7              ...         0.05
8              12          0.68
9              34          2.44
10             54          5.25
11             65          8.63
12             70          12.27
13             80          16.42
14             87          20.95
15             85          25.36
16             83          29.68
17             92          34.46
18             83          38.77
19             70          42.41
20             61          45.58
21             70          49.22
22             70          52.86
23             76          56.81
24             56          59.72
25             66          63.15
26             59          66.22
27             45          68.56
28             61          71.73
29             42          73.91
30             29          76.04
35             25          84.25
40             15          90.49
45             11          94.23
50             ...         96.83
55             ...         98.49
60             ...         98.96
65             ...         99.43
70             ...         99.74
75             ...         99.90
80             ...         99.90
85             ...         99.95
90             ...         99.95
95             ...         99.95
100            ...         99.95
Table 7 Feature-specific measurement data

Parameter   Area, μm²       Perimeter, μm   Feret min, μm   Feret max, μm   Anisotropy, length/thickness   Roundness, P²/4πA
Mean        537.87          90.02           20.18           33.06           1.64                           1.43
Stdev       560.16          47.74           9.89            17.54           0.36                           0.20
Sum X       1,034,315       173,105         38,812          63,568          3,156.8                        2,754.3
Sum X²      1,159,395,781   19,963,446      971,502         2,692,816       5,429.1                        4,023.4
Min         7.11            9.86            2.33            3.84            1.09                           1.09
Max         5660.59         370.76          65.45           133.82          3.50                           2.62
P, perimeter; A, area
Fig. 20

Fig. 21 Carbide distribution in AISI M42 high-speed tool steel: (a) microstructure revealed by Marble's reagent etch (4% nital) in optical micrograph; (b) secondary electron image of etched specimen; (c) backscattered electron image of unetched specimen; (d) digitized backscattered electron image (four grays)
The number of feature interceptions per unit length of test line is:

NL = Ni / Lt (Eq 17)

The degree of banding can be characterized by comparing counts made perpendicular (⊥) and parallel (∥) to the deformation direction:

Ω12 = (NL⊥ − NL∥) / (NL⊥ + 0.571 NL∥) (Eq 18)

The mean center-to-center spacing of the bands is:

SB = 1 / NL⊥ (Eq 19)

λ, or mean free path (mean edge-to-edge spacing of the bands), is given by:

λ = (1 − VV) / NL⊥ (Eq 20)
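A numeric sketch of these banding measures (the interception counts and volume fraction below are hypothetical, chosen only to exercise the formulas):

```python
def banding_parameters(nl_perp, nl_par, vv):
    """Banding measures in the style of the equations above.

    nl_perp: interceptions per mm of test line perpendicular to the bands
    nl_par:  interceptions per mm of test line parallel to the bands
    vv:      volume fraction of the banded constituent
    """
    omega12 = (nl_perp - nl_par) / (nl_perp + 0.571 * nl_par)  # Eq 18
    sb = 1.0 / nl_perp                # center-to-center spacing, Eq 19
    lam = (1.0 - vv) / nl_perp        # mean free path, Eq 20
    return omega12, sb, lam

# Hypothetical field results: 40/mm perpendicular, 10/mm parallel, VV = 0.25
omega12, sb, lam = banding_parameters(40.0, 10.0, 0.25)
print(omega12, sb, lam)
```

A fully random (non-banded) structure gives nl_perp equal to nl_par and therefore Ω12 of zero.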
The stereological parameters and their image analysis equivalents include:

Parameter   Image analysis equivalent
PP          Area fraction
LL          Area fraction
LA          Area fraction / calibration constant (calconst)
NL⊥         HP / frame area
NL∥         VP / frame area
NA          Number of features / frame area

Fig. 22
For a single object, the anisotropy index is:

AI = F90 / F0 (Eq 21)
For a field of view containing n objects, the anisotropy index for all the
objects is:
AI = (Σ F90,i) / (Σ F0,i), with both sums taken over i = 1 to n (Eq 22)
However, for simple geometric shapes such as this, the anisotropy index
can be represented by the corresponding field parameters:
AI = HP / VP (Eq 23)
Because one horizontal scan line contains M pixels, the true length of a
horizontal scan line follows (note that braces, { }, are used to indicate the
units of each quantity):

L1 = M {pixels} × cc {μm/pixel} (Eq 25)

For the N scan lines of the frame, the total test line length is:

L = M × N × cc {μm} (Eq 26)

and the total number of interceptions is:

Y = HP / cc {pixels} (Eq 27)

Fig. 23
For this field of view, the number of intercepts per unit length is the total
number of intercepts divided by the total length of the test line:
nL = Y / L = (HP / cc) / (M × N × cc) (Eq 28)

or

nL = HP / (M × N × cc²) (Eq 29)

Because M × N is the total number of pixels in the frame and cc² is the
true area of each pixel:

nL⊥ = HP / frame area {μm⁻¹} (Eq 30)

Similarly, for the vertical projections:

nL∥ = VP / frame area {μm⁻¹} (Eq 31)
Consequently, by making two rapid field measurements and using the true
area of the image frame, the stereological parameters described in Eq 17
through 20 are easily calculated using an IA system. Other stereological
xend = N
Draw Vector: xstart, ystart, xend, yend, color = 15
END FOR
3. Merge the graphics into the Image Plane.
4. Save Image to Hard Drive: C:\ASM_BOOK\Vert_lines.tif.
This is a very simple macro to construct. For all operations in the loop,
ystart equals 0 and yend equals 480. The first time through the loop, xstart
is N and xend is N. The draw vector function draws a line from the pixel
having screen coordinates P1 = (xstart, ystart) to the pixel having
coordinates P2 = (xend, yend). Numerically, the coordinates are P1 = (1, 0)
and P2 = (1, 480). The second time through the loop, N = 3 and the new
vector is drawn through points P1 = (3, 0) and P2 = (3, 480). Repeating the
process eventually results in a grid of 320 vertical lines. If the particular
IA system being used does not have these graphic functions, a measuring
frame having a width of one pixel can be created. A similar grid of lines
is constructed by detecting everything within this frame and moving it
along.
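The same construction can be sketched with NumPy standing in for the system's graphics calls (the array takes the place of the image plane; the function name is ours):

```python
import numpy as np

def vertical_line_grid(width=640, height=480, step=2, color=15):
    """Draw vertical lines at x = 1, 3, 5, ... as in the macro."""
    img = np.zeros((height, width), dtype=np.uint8)
    for x in range(1, width, step):   # N = 1, 3, 5, ...
        img[:, x] = color             # Draw Vector (x, 0) -> (x, height)
    return img

grid = vertical_line_grid()
print((grid == 15).all(axis=0).sum())   # 320 complete vertical lines
```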
Carpet Fibers. Manufacturers of carpet fibers routinely use IA
techniques to check quality control (Ref 18). Carpet fibers have a trilobal
shape in a cross-sectional, or transverse, view (Fig. 25a). One measure of
carpet-fiber quality is the modification ratio, or MOD ratio (related to
carpet-fiber wear properties), wherein two circles are placed on a fiber
using either manual or automated procedures. The larger circle is sized so
it circumscribes the fiber, while the smaller circle is inscribed within the
fiber (Fig. 25b), and the MOD ratio is the area of the large circle divided
by the area of the small circle:
MOD ratio = area of large circle / area of small circle
The MOD ratio decreases with decreasing sharpness of the lobes (Fig.
25c).
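Because both areas scale with the square of the radius, the MOD ratio reduces to the squared ratio of the two circle radii; a trivial sketch (the radii below are hypothetical):

```python
import math

def mod_ratio(r_circumscribed, r_inscribed):
    """MOD ratio = area of circumscribed circle / area of inscribed circle."""
    big = math.pi * r_circumscribed ** 2
    small = math.pi * r_inscribed ** 2
    return big / small   # equivalently (R/r)**2

print(mod_ratio(3.0, 1.5))   # a fairly sharp-lobed fiber
print(mod_ratio(2.0, 2.0))   # a round fiber: ratio 1
```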
Large-Area Mapping (LAM). In most IA applications, looking at one
field of view of a particular specimen is like being in a submarine;
microstructural details are observed through a 100× porthole, but the big
picture is missing. In more scientific terms, high resolution is required to
detect, record, and evaluate microstructural details (Ref 19), but lower
magnification is required to present and retain the big picture, that is, the
context of the details on the macroscopic scale.
An ideal solution would be a megamicroscope to present a complete
picture of the sample at a low magnification, such as 100. Large-area
mapping, frequently referred to as image tiling, is a step in this direction.
The specimen is completely analyzed at high magnification, storing the
Fig. 24 Coating thickness determination: (a) plasma-sprayed coating; (b) graphically created vertical measuring lines; (c) detected image and holes filled; (d) vertical close of 10 pixels; (e) coating thickness formed by Boolean operator image (d) <AND> image (b)
Fig. 25 Carpet fiber analysis: (a) bundle of carpet fibers; (b) trilobal analysis, circles inscribed in and circumscribed around fibers; (c) trilobal analysis, modification ratios of two fibers
images in the memory of the computer. The images then are displayed at
a reduced resolution to show the big picture, while regions of particular
interest can be analyzed at the higher resolution. The following examples
illustrate the use of this procedure.
Detection of Aluminosilicate Inclusions. During the process of continuous
casting of steel, the metal flowing out of the weir is washed constantly
by blowing inert argon gas through it. Gas bubbles physically draw
nonmetallic material with them towards the surface where the slag can be
separated away from steel. Optimizing this process brings certain cost
advantages. Blowing too much gas is expensive, but decreasing the gas
flow too much results in lower steel quality (that is, higher inclusion
content). Proper process control minimizes gas use and maintains product
quality.
Aluminosilicate inclusions appear as finely dispersed clusters, visible
only at 100× or higher magnification. Typical frequency is 1 cluster/cm²,
which means that only one field in about two hundred checked will
possibly be of interest.
The usual method to check material quality is to ultrasonically test the
product after hot working, which is very costly and time consuming.
Thus, this situation provides an excellent opportunity to apply LAM. A
large number of fields of view are scanned automatically, and field
analysis results are digitized and put into a composite picture to show the
microcharacteristics of the sample at a macroscale (Fig. 26a). After
completing the scan, the analyst can use the composite picture as a sort
of a map and position the sample to analyze points of interest (Fig. 26b
and c).
Detecting and Evaluating Cracks. A similar analysis can be applied to
long, thin objects, such as a crack, where the features are too long to be
detected completely as one single object at a magnification required to see
the thickness satisfactorily. The image shown in Fig. 27 illustrates a
typical case, and the solution to the problem is discussed subsequently.
The crack in the image is approximately 5 mm (0.2 in.) long with an
average thickness of approximately 50 μm, so a macro-objective must be
used to see the complete crack. However, a macro-objective reduces the
thickness of the crack to a few pixels or can, in some cases, make the
crack disappear, causing artifacts.
The problem is solved using a scanning stage and tiling neighboring
images into one complete image. The positioning accuracy of the
motorized stage generally is satisfactory; however, if accuracy is a
concern or if the stage is not motorized, then it is possible to let the
computer decide on the optimal tiling arrangement, as shown in the
sequence of images in Fig. 28.
The three images show a traverse over a steel sample containing
hydrogen-induced cracks. The tile positions (rectangular inserts) are
adjusted to get the optimal overlap, the quality of which is indicated in the
Fig. 26 Detection of aluminosilicate inclusions in steel using large-area mapping. (a) Composite picture of 2000 fields of view at 100× from a specimen with dimensions 4 × 2 cm (1.5 × 0.75 in.). Arrows point to areas having easily discernable inclusion clusters along with polishing artifacts, mostly scratches. (b) and (c) Inclusion clusters indicated by arrows. Source: Ref 19
Fig. 27

Fig. 28
oval inserts. In most cases, the automated scanning stage is used to move
the sample from one field to another and to ensure that a statistically
significant number of measurements, either in terms of images analyzed
or in terms of objects of interest is observed and evaluated. Automated
stage control is used in the preceding two cases to do large-area mapping
to achieve the required resolution while retaining the big picture.
References
1. G.L. Sturgess and D.W. Braggins, Performance Criteria for Image Analysis Equipment, The Microscope, 1972, p 275-286
2. H.P. Hougardy, Measurement of the Performance of Quantitative Image Analysing Instruments, Prakt. Metallogr., 1975, p 624-635
3. Segment Functions, Threshold Automatic, Zeiss KS 400 Imaging System User's Guide, Vol 2, Munich, Germany, 1997, p 4517
4. D.W. Hetzner, Analytical Threshold Settings for Image Analysis, Metallographic Techniques and the Characterization of Composites, Stainless Steels, and Other Engineering Materials, Vol 22, Microstructural Science, ASM International, 1995, p 157
5. G.L. Kehl, The Principles of Metallographic Laboratory Practice, McGraw-Hill, 1949, p 87
6. J. Serra, Image Analysis and Mathematical Morphology, Academic Press, London, 1982
7. J.J. Friel, E.B. Prestridge, and F. Glazer, Grain Boundary Reconstruction for Grain Sizing, STP 1094, MiCon 90: Advances in Video Technology for Microstructural Control, G.F. Vander Voort, Ed., ASTM, 1990, p 170-184
8. Standard Test Methods for Determining Average Grain Size Using Semiautomatic and Automatic Image Analysis, E 1382, Annual Book of ASTM Standards, ASTM, 1997
9. Standard Test Method for Determining Average Grain Size, E 112, Annual Book of ASTM Standards, ASTM, 1996
10. Standard Practice for Determining the Inclusion or Second-Phase Constituent Content of Metals by Automatic Image Analysis, E 1245, Annual Book of ASTM Standards, ASTM, 1995
11. D.W. Hetzner, Quantitative Image Analysis Methodology for High-Sulfur Calcium-Treated Steels, STP 1094, MiCon 90: Advances in Video Technology for Microstructural Control, G.F. Vander Voort, Ed., ASTM, 1990
12. D.W. Hetzner and B.A. Pint, Sulfur Content, Inclusion Chemistry, and Inclusion Size Distribution in Calcium Treated 4140 Steel, Inclusions and Their Influence on Mechanical Behavior, R. Rungta, Ed., ASM International, 1988, p 35
13. Standard Test Method for Determining the Inclusion Content of Steel, E 45, Annual Book of ASTM Standards, ASTM, 1997
14. Standard Practice for Obtaining JK Inclusion Ratings Using Automatic Image Analysis, E 1122, Annual Book of ASTM Standards, ASTM, 1997
15. F. Schucker, Grain Size, Quantitative Microscopy, R.T. DeHoff and F.N. Rhines, Ed., McGraw-Hill, 1968, p 201-265
16. D.W. Hetzner and J.A. Norris, Effect of Austenitizing Temperature on the Carbide Distributions in M42 Tool Steel, Image Analysis and Metallography, Vol 17, Microstructural Science, ASM International, 1989, p 91
17. Standard Practice for Assessing the Degree of Banding or Orientation of Microstructures, E 1268, Annual Book of ASTM Standards, ASTM, 1997
18. J.J. Friel, Princeton Gamma-Tech, personal communication
19. V. Smolej, Carl Zeiss Vision, personal communication
CHAPTER
Modeling Color
The RGB model is based on three primary colors (red, green, and
blue) that can be combined in various proportions to produce different
colors. When dealing with illuminating light (e.g., a computer screen or
some other source of illumination), the RGB primary colors are called
additive because, when combined in equal portions, they produce white.
Fig. 1
Color Spaces
As shown in Fig. 2, there are several complementary ways to look at
and interpret color space, including:
O Red-green-blue (RGB) and cyan-magenta-yellow (CMY) space
O Lab space
O Hue-lightness-saturation (HLS) space
O Munsell space
Fig. 2 Color space models. (a) Red-green-blue (RGB) and cyan-magenta-yellow (CMY). (b) Lab space. (c) Hue-lightness-saturation (HLS) space. (d) Munsell space. Please see endsheets of book for color versions.
The RGB space model is the color system on which all TV cameras, and
thus color digitization and color processing systems, are based. It can be
viewed as a cube with three independent directions (red, green, and blue)
spanning the available space. The CMY system of coordinates is a
straightforward complement of RGB space.
Lab Space Model. The body diagonal of the RGB cube contains the grays
(in other words, colorless information) about how light or dark the signal
is. The Lab coordinate system is obtained by turning the body diagonal of
the RGB cube into a vertical position. The Lab color system is one of the
standard systems in colorimetry (i.e., the science of measuring colors).
Here L, or lightness coordinate, quantifies the dark-light aspect of the
colored light. The two other coordinate axes, a and b, are orthogonal to
the L axis. They are based on the opponent-colors approach. In fact, a
color cannot be red and green at the same time, or yellow and blue at the
same time, but can very well be, for example, both yellow and red, as in
oranges, or red and blue, as in purples. Thus, redness or greenness can be
expressed as a single number a, which is positive for red and negative for
green colors. Similarly the yellowness/blueness is expressed by the
coordinate b.
The YIQ and YUV systems used in television broadcasting are defined
in a similar manner. The Y component, called luminance, stands for the
color-independent component of the signal (what one would see on a
black and white TV screen) and is identical to the lightness of the Lab
system. The two remaining coordinates define the specific color. The
following set of equations defines, for instance, the RGB-YIQ conversion:
Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R - 0.274 G - 0.322 B
Q = 0.211 R - 0.523 G + 0.312 B
Note that Y is a weighted sum of the three intensities, with green standing
for 58.7% of the final value. Note also, that I is roughly the difference
between the red and the sum of green and blue. In other words, the I
coordinate spans colors between red and cyan. Q is analogously defined
as the difference between combination of red and blue (i.e., magenta) and
green. If all three signals are equal, that is, R = G = B = L, then the
matrix will provide the following values: Y = L, I = 0, and Q = 0. This
corresponds to a black and white image of intensity L, as expected. A
similar matrix is used to transform from RGB to YUV. Here the U and V
axes span roughly the green-magenta and blue-yellow directions.
As mentioned above, the RGB-to-YIQ signal conversion is useful for
transmission: the I and Q signals can be transmitted at lower resolutions
without compromising the signal quality, eventually improving the use of
the broadcast channel or, alternatively, allowing for enhanced Y signal
quality. When the signal is received, it can be transformed back into the
RGB space, for instance, to display it on the TV monitor.
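The matrix above is easy to verify numerically; the sketch below codes it directly as given in the text and checks the gray-signal case:

```python
def rgb_to_yiq(r, g, b):
    """RGB -> YIQ using the coefficient matrix given in the text."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

# An equal-intensity (gray) signal maps to Y = L, I = 0, Q = 0
print(rgb_to_yiq(0.5, 0.5, 0.5))
```

Python's standard colorsys module implements the same transform (with slightly different coefficient rounding) as colorsys.rgb_to_yiq.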
HLS Space Model. Both RGB and YIQ/YUV coordinate systems are
hardware based. RGB comes from the way camera sensors and display
phosphors work, while YIQ/YUV stems from broadcast considerations.
Both are intricately connected: there is no RGB camera without broadcasting and vice versa.
Two aspects of human vision come up short in this context. The first one
is the hue, or color tone. For example, a specific blue color might be used
to color-stain a sample for image analysis. The results from staining may
be stronger or weaker, but it is still the same color hue. The second aspect,
which is, in a sense, complementary to the color hue, is color saturation,
that is, how brilliant or pure a given color is as opposed to washed-out,
weak, or grayish. In this case, we would be looking for a specific color
hue without any specific expectations regarding the color saturation and
lightness.
HLS color space, which takes into account these two aspects of human
vision, can be viewed as a double cone, hexagonal or circular, as shown
in Fig. 2. The axis of the cone is the gray scale progression from black to
white. The distance from the central axis is the saturation, and the
direction or the angle is the hue.
The transformation from RGB to HLS space and back again is
straightforward and easy with digital images. Thus, a lot of interactive
image analysis is done in HLS space, given that HLS more closely matches
human perception.
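Python's standard colorsys module provides exactly this round trip (note that colorsys also calls the model HLS, with all components in the 0 to 1 range; hue is a fraction of a full turn rather than an angle in degrees):

```python
import colorsys

# A saturated red: hue 0, lightness 0.5, saturation 1.0
h, l, s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)
print(h, l, s)

# A washed-out (desaturated) red keeps the same hue, lower saturation
h2, l2, s2 = colorsys.rgb_to_hls(0.7, 0.3, 0.3)
print(h2, s2)

# The transformation is invertible
print(colorsys.hls_to_rgb(h, l, s))
```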
Munsell Space. This system is mentioned for the sake of completeness. It consists of a set of standard colored samples, which can be used
for visual comparison of colors, with wide use in the textile industry,
printing, and forensics. The colors of the standard samples have been
chosen in an equidistant fashion; that is, a close (enough) match in the
Munsell set of samples can be found for every imaginable color (see Fig.
2). Thus, the Munsell system is based on how the human eye, and not how
some electronic component, sees colors. Note that the Munsell system has
more red than green samples: the human eye resolves reds far better than
greens.
Color Images
It has been mentioned elsewhere in this volume that black and white
(B/W) images are digitized at 8 bit resolution; that is, they are digitized
at 256 gray levels per image pixel. In case of a color image, essentially
three detectors are used: one for each of the red, green, and blue primary
colors. Thus, a color image is a triplet of gray-level images.
If every color component consists of 2⁸ (256) gray levels, then the
number of possible colors equals 2²⁴, or about 16 million. In
reality, the number is much smaller, such as in the color image in Fig. 3,
where the number of colors is approximately 249. Note that these 249
colors should not be confused with 256 gray levels of a typical B/W
image; they contain information that would not be available were it not
for the color.
Fig. 3 Color image and its red-green-blue (RGB) components. (a) Original image. (b) Red channel. (c) Green channel. (d) Blue channel. The original color image is a test image used to check for color blindness; not seeing the number 63 is one manifestation of color blindness. Please see endsheets of book for color versions.
Fig. 4 Conversion between red-green-blue (RGB) and hue-lightness-saturation (HLS) color space. (a) Original image. (b) Red. (c) Green. (d) Blue. (e) Hue. (f) Lightness. (g) Saturation. Please see endsheets of book for color versions.
Fig. 5 Color image processing. (a) Original image. (b) After image processing. (c) Detail before image processing. (d) Detail after image processing. Please see endsheets of book for color versions.
Color Discrimination
Discrimination defines the actual objects in the image that need to be
evaluated. The output of this process is always a binary image where the
uninteresting part of the image is made black and the objects of interest
are set to white. In the case of gray images, the most common approach
is to set two threshold levels, which define the interval of gray levels of
interest. In many instances, these two levels are determined automatically.
However, there still are instances where the levels must be set
interactively, a difficult task for some users, especially if implementation of the
function has not been thought out very well.
For a color image, the number of levels to be determined grows to six,
which is unmanageable by any standard if done the wrong way.
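One common simplification is to discriminate in HLS space, where a color of interest is mostly captured by a hue interval plus a minimum saturation. A toy sketch (the threshold values and pixel list are hypothetical):

```python
import colorsys

def discriminate(pixels, hue_lo, hue_hi, min_sat):
    """Return a binary mask: 1 where the pixel's hue falls in
    [hue_lo, hue_hi] and its saturation reaches min_sat."""
    mask = []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        mask.append(1 if hue_lo <= h <= hue_hi and s >= min_sat else 0)
    return mask

pixels = [(0.9, 0.1, 0.1),    # saturated red
          (0.5, 0.45, 0.45),  # washed-out red
          (0.1, 0.1, 0.9)]    # saturated blue
print(discriminate(pixels, 0.0, 0.1, 0.5))  # -> [1, 0, 0]
```

Only the saturated red pixel passes: the washed-out red fails the saturation test and the blue pixel falls outside the hue interval.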
Fig. 6 Interactive color discrimination. (a) Original image; the color to be discriminated is circled by the user. (b) Points with the indicated range of colors are marked green. The user now adds another region. (c) Interactive discrimination is complete. Please see endsheets of book for color version.
Fig. 7
Color Measurement
Thus far in the discussion, color has been used only to detect the actual
phase or part of the sample to be analyzed. Color also can be measured
and quantified by itself, in a manner similar to densitometric measurements used to determine gray levels in gray images. For color-densitometric measurement, the original color image (Fig. 8a) and the image of
regions of objects (the binary image of spots in Fig. 8b) are required. Data
extracted (one set for every spot) include the x and y position, RGB
averages, and RGB standard deviations.
Given the RGB values, it is possible to calculate the values for
alternative color models, for instance, CMY or Lab. Note that to be able
to compare measurements, the experimental conditions, such as the color
temperature of the illumination, should be kept constant.
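The CMY conversion is the simplest such recalculation, since CMY is the subtractive complement of RGB; a short sketch:

```python
def rgb_to_cmy(r, g, b):
    """Convert 8-bit RGB values to CMY fractions (0-1).

    CMY is the subtractive complement of RGB. Conversion to Lab
    is also possible but additionally requires a reference white
    point, which is why the illumination conditions must be held
    constant for measurements to remain comparable.
    """
    return (1 - r / 255, 1 - g / 255, 1 - b / 255)

# Magenta has no green component: full M, no C or Y.
cmy = rgb_to_cmy(255, 0, 255)
```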
Fig. 8
Fig. 9
Quantitative analysis of color sample to determine volume percent of phases. (a) Original sample. (b) Selection
of brown phase using interactive discrimination; brown phase is selected by circling a typical part of the phase.
(c) Results of color discrimination in hue-lightness-saturation (HLS) space; phase is black, background is white. (d) Two other
phases are detected and color coded to allow easy comparison. Please see endsheets of book for color versions.
Figure 9(c) shows the result of color discrimination in HLS space (phase is black, background is white). The other two phases also are detected (steps omitted for the sake of clarity) and color-coded to allow easy comparison (Fig. 9d).
Having the phases separated makes it easy to generate quantitative data,
which, in the case discussed previously, leads to the following results:

Phase      Vol%
Red        8.80
Green      3.59
Blue      19.98
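With each phase separated into its own binary image, the volume percent follows directly from the area fraction, since by stereology the area fraction on a random plane section estimates the volume fraction (V_V = A_A, the Delesse principle). A minimal sketch:

```python
import numpy as np

def volume_percent(mask):
    """Volume percent of a phase from its discriminated binary image.

    On a random plane section the area fraction of a phase is an
    unbiased estimate of its volume fraction (V_V = A_A), so the
    percentage of object (True) pixels gives vol% directly.
    """
    return 100.0 * np.count_nonzero(mask) / mask.size

# Hypothetical mask in which the phase covers 20 of 100 pixels.
mask = np.zeros((10, 10), dtype=bool)
mask[:2, :] = True
```

Applying this to each of the three phase masks of Fig. 9 would yield a table of vol% values like the one above.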
Conclusions
Image analysis is a tool that converts information in an image into
numerical form. From an image containing possibly several megabytes of
information, image analysis generates a data file of a few kilobytes of
relevant information. Image analysis is concerned with the following
stepwise scheme:
Step                 Result of step       Information content, kB
Image acquisition    Color image          1000
Image processing     Color/gray image     500
Discrimination       Binary image         80
Image amendment      Binary image         40
Measurement          Data file
Acknowledgment
The section "Electronic Recording of Color Images" was written in collaboration with Marcus Capellaro of Carl Zeiss Vision.
Index
A
Aberration
chromatic ................................................ 220
spherical .................................................. 220
Abrams three-circle method in manual
analysis (ASTM E 112) .................... 225
Abrasive grinding ....................................... 36
Abrasive grit sizes ............................... 46–47
Abrasives, for polishing .............................. 56
Abrasive-wheel cutting ...... 37–38(F), 39(F)
Absorbed electrons, contrast
mechanisms associated with ......... 102(T)
Accuracy, measurements in applied
methods ............................................... 181
Acid digestion method ............................... 19
Acrylic resins, for mounting ............... 41–42
Additive color theory ............................... 258
Advanced statistical analysis .......... 186, 187
Agglomeration ..................................... 52–53
ALA (as large as) grain size .................... 117
measurement procedure
(ASTM E 930) ............................... 117
Alloys, metallographic specimen
preparation, contemporary practice .. 58(T)
Alloy steels, mounting of ....................... 41(F)
Alpha-alumina slurries .............................. 52
Alpha-beta copper alloys, etching .. 66, 69(F)
Alpha grains ................................... 29–31(F)
Alumina
as abrasive used during polishing ........... 50
acidic suspensions .................................... 53
calcined abrasives ............................. 52–53
inclusions ...................................... 217–218
Alumina slurries, for final polishing ......... 52
Aluminosilicate inclusions, detection by
large-area mapping ................ 253, 254(F)
Aluminum
anodizing .................................................. 69
etching ...................................................... 62
grinding ..................................................... 48
Aluminum alloys
anodizing ....................................... 69, 71(F)
etching .......................... 62, 63(F), 66, 70(F)
grinding and embedded SiC
particles ........................................ 48(F)
metallographic specimen preparation,
contemporary practice .......... 59(T), 60
polishing of ............................................... 60
relief .............................................. 49, 52(F)
Aluminum-silicon alloys
dilation and counting technique ............. 136
image processing ........................... 84, 85(F)
Amount of microstructure
constituents .................. 153(F), 154155
Analog-camera technology .............. 262, 263
advantages .............................................. 262
disadvantages .......................................... 262
Analog devices .............................................. 2
Analog meter ................................................. 5
Analog tube-video cameras ..................... 6–7
Anisotropic structures .............................. 192
Anisotropy, morphological ........................ 197
Anisotropy index (AI) ..... 108, 245, 246247
determined by using measurement
frame ............................................... 214
Annealing twins .................................... 26, 63
Anodizing .......................................... 6870(F)
metals affected by ................... 69, 70, 71(F)
Anosov, P.P. ................................................... 1
Apple-class computers ................................. 9
Applications ............................ 203–255(F,T)
Area ................................. 110, 123(T), 126(T)
Area
average ...................................................... 22
definition, as feature-specific
parameter ................................... 213(T)
Area equivalent diameter ......... 101, 123(T),
126(T)
Area filled .................................................. 110
Area fraction ... 1, 20, 22, 115, 126, 150, 178
average ............................................... 105(F)
determined by using measurement
frame ............................................... 214
for space filling grains ........................... 109
manual measurement by point counting
(ASTM E 562) ............................... 104
standard notation for ........................... 16(T)
method ...................................................... 19
Areal density, standard notation for ...... 16(T)
Area of features ........... 104, 110111, 126(T)
filled ........................................................ 104
Area-weighted distributions ...... 190(F), 192,
194(F)
Arithmetic processing of images .............. 86
Arrangement of microstructural
constituents .................. 153(F), 160161
Aspect ratio ... 101, 122–124(T), 126(T), 157,
158(F), 213
ASTM Committee E-4 on
Metallography .............................. 15, 27
ASTM grain size .......................... 27, 29, 225
equation .................................................... 26
number ................................. 23, 26, 30, 233
scale .......................................................... 26
ASTM standards ................................. 127(T)
Atomic force microscopy (AFM) ............ 110
Attack-polishing agents .................. 59(T), 60
Auger electron microscope .................. 80–81
Austenitic stainless steel
grain size distributions ...................... 190(F)
relationship between hardness and
mean intercept length ....... 187, 188(F),
189(F)
Auto-delineating processor ...................... 210
Autodelineation filters ................................ 11
Autofocusing ............................................... 32
Automated grain boundary
reconstruction procedures ....... 224–225
Automated IA procedures of inclusion
analysis (ASTM E 1122) .................. 241
Automated sample-preparation
equipment ............................................ 18
Automated stage movement ...................... 32
Automatic focus .............................. 219–220
Automatic gain control ............................ 219
Automatic image analysis (AIA) ... 101, 103,
104, 109, 110
adjusts diagonal distance to minimize
bias .................................................. 111
combined selection criteria .................... 123
computer-based, combining primitive
descriptors ....................................... 124
measuring frame choice ......................... 143
specimen quality requirements ......... 167(F)
vs. stereological methods ............. 200–201
Automatic image analyzers ................. 17, 33
Automatic stage ........................................ 219
Average area fraction .............................. 105
Average area of features ... 105–106, 126(T)
Average diameter ........... 111, 123(T), 126(T)
Average grain size (AGS) ........................ 117
Average length of features ................. 126(T)
Averaging filter ........................................... 86
B
Background erosion
technique ............................... 141–143(F)
Background skeletonization
technique ............................... 141–143(F)
Background thinning
technique ............................. 141–143(F)
Backscattered electrons ..................... 81, 104
contrast mechanisms
associated with .................. 102(T), 105
Backscattered electron signal .................. 104
Bakelite ........................................................ 40
Banding in metals, procedure for
measuring and reporting
(ASTM E 1268) ................................. 108
Banding measurement in a
microstructure, methodology (ASTM E
1268) ................................... 244246(F,T)
Band sawing ................................................ 37
Bausch and Lomb QMS .............................. 5
Beryllium
grain size revealed by etching ................. 61
microstructure ................................ 61, 62(F)
Beta-phase, in copper alloys ........... 66, 69(F)
Bias ........................................................ 27, 30
of digital measurements,
characterization of ..................... 181(T)
from magnification effect ....................... 111
from segmenting error ........................... 103
guidelines to minimize ........................... 168
in area and length measurements ........... 111
in image selection .................................. 194
introduced by image processing and
digital measurements ...... 171–183(F,T)
introduced by specimen preparation
and image acquisition ....... 165–171(F)
quantification of ..................... 180–182(F,T)
Bimodal distributions, grain sizes ........... 117
Bimodal duplex grain size ....................... 117
distribution .............................................. 190
Binarization ................. 171, 174175, 176(F)
Binary A, grain boundary image .............. 225
Binary B, circles on grain boundaries ...... 225
Binary exoskeletonize operation ............. 231
Binary image function A <AND>B ....... 225,
226(F)
Binary image processing
Boolean logic ................................. 92–95(F)
convex hull .................................. 97–100(F)
dilation ............................... 96–97(F), 98(F)
erosion .............................. 96–97(F), 98(F)
hole filling ..................................... 95–96(F)
morphological binary processing ............. 95
pruning ........................................ 97–100(F)
skeleton by influence zones
(SKIZ) .................................. 97–100(F)
skeletonization ............................. 97–100(F)
Binary logic ............................................... 160
Binary morphological processing
methods ................................................ 11
Bioscan ......................................................... 10
Bismuth, grinding ........................................ 48
Bitmap ......................................................... 80
Bit-plane method ........................................ 88
Black, gray level of zero assigned ............ 205
Boolean AND operator ............................ 233
Boolean logic .................................. 92–95(F)
feature-based ................................. 94–95(F)
operations ................................ 93, 94(F), 95
pixel-based .................................... 93–95(F)
truth table ................................................. 93
Boolean operations ....................................... 8
Boolean variables ..................................... 229
Border kill .................................. 176–177(F)
Boron, addition effect on
sintered stainless steel
microstructure ...................... 199–200(F)
Bottom-poured heats .................................. 18
Bounding rectangle ............................. 152(F)
Brass
beta-phase volume fraction,
thresholding of ............................... 214
thresholding of ......................... 220–221(F)
Braze, relief ...................................... 49, 52(F)
Breadth .................................... 123(T), 126(T)
min DD ................................................... 111
Bright field illumination ........... 64(F), 65(F),
70(F), 81
Bright-field reflected-light
microscopy ......................................... 102
British steel ................................................... 4
Brown phase, selection in determining
phase volume content ....... 269–270, 271(F)
Buehler Ltd. ................................................ 11
C
Cadmium, grinding ..................................... 48
Calcination process .................................... 52
Calconst ............................................. 225, 247
Calibration, of image analyzer ................. 111
Calibration constant (calconst) ...... 225, 247
Cambridge Instruments/Olympus Q10
system ......................................... 9, 10(F)
Cambridge Instruments
Q900 system ...................................... 8, 9(F)
Q970 system ...................................... 8, 9(F)
Carbides, in steel, x-ray
intensity maps ........................ 114, 115(F)
Carbon black, electron microscopy of,
material standard
(ASTM D 3849) ............................ 127(T)
Carbon content ............................................. 1
Carbon steel
grain size estimation using planimetric
method ......................................... 28(F)
microstructure consisting of ferrite
grains ................................. 125(F), 126
D
Dark-field illumination .............................. 81
Database flush command ......................... 220
Data interpretation ..................... 186–191(F)
examples .................................... 191–201(F)
dc power supplies ....................................... 82
Deformation, effect on grain shape,
sampling to study ................................. 37
Degree of banding measurement
(ASTM E 1268) ........................... 127(T)
Degree of freedom .................................... 184
Degree of orientation (Ω12) ............ 108, 245
DeHoff, R.T. ................................................ 33
Delesse, A. ......................................... 1, 19, 20
Delineation enhancement ...................... 91(F)
Delineation filter .................................... 91(F)
Delta-ferrite ............................... 66, 67(F), 69
in stainless steel, thresholding of area
fraction ............................... 214–215(F)
E
Ealing-Beck ................................................... 6
Ecole des Mines de Paris ............................. 5
Edge effects .................................. 142(F), 143
Edge of features, sharpening of ............... 207
Edge preservation ........................... 42–46(F)
Edge retention .......................... 42–46(F), 72
by automatic polishing device ................. 56
guidelines .......................................... 45–46
Edge rounding ................................. 41(F), 42
Electroless nickel plating ...................... 42(F)
Electroless plating ...................................... 46
Electrolytic plating ..................................... 46
Electronic components, precision saws
for sectioning ........................................ 39
Electron microscopes ................ 8182, 87(F)
Electropolishing ................................. 53–54
standard (ASTM E 1558) ................. 127(T)
vs. electrolytic etching ............................. 68
Ellipse, ideal
gray level
analyzed as histograms ............. 206, 207(F)
defined .......................................... 205–206
Elongation ............................ 157, 158(F), 159
Embedding .................................. 48(F), 49(F)
Energy-dispersive spectroscopy ................ 66
Epoxy resins ........... 40, 41–42, 43, 45(F), 46
for mounting ........................... 41–42(F), 43
Equivalent diameter ................................. 192
Equivalent grain diameters ..................... 241
cumulative distribution for
ingot iron ................................... 243(F)
for ingot iron ..................................... 242(T)
histogram for ingot iron .................... 243(F)
Erode (binary image amendment
operator) ....................................... 223(T)
Eroded point ....................................... 91–92
Erosion ......................... 96–97(F), 98(F), 134
Erosion/dilation, definition ................ 91–92
Erosion operation ................................ 5, 7, 8
Estimates, weighted, of properties ........... 189
Estimating basic characteristics ..... 183–186
Estimators ................................................. 186
Etchant developments (ASTM E 407) ..... 62
Etchants ..................................... 18, 36, 45(F)
Barker's reagent ........................... 69, 71(F)
Beraha's solution ............................... 64(F)
Beraha tint etchant ............. 66, 67(F), 68(F)
Etchants (continued)
chromic acid .................................. 66, 67(F)
development of ......................................... 62
electrolytic reagents ...................... 66, 67(F)
Fry's reagent ..................................... 50(F)
glyceregia ...................................... 66, 67(F)
Groesbeck's reagents .............................. 66
hydrofluoric acid ...................... 48(F), 52(F)
immersion tint ......................... 63, 64(F), 72
Klemm I reagent ..................... 64, 66, 69(F)
Kroll's reagent .................................. 50(F)
Murakami's reagent ......... 45(F), 65(F), 66,
70(F)
nital .. 41(F), 42(F), 44(F), 45(F), 63, 64(F),
65(F)
nitric acid ...................................... 66, 67(F)
oxalic acid ..................................... 66, 67(F)
picral ....................... 45(F), 63, 64(F), 65(F)
potassium hydroxide .......... 66, 67(F), 68(F)
selective ............................ 50(F), 63–68(F)
selective, defined ...................................... 63
sodium hydroxide .............. 66, 67(F), 68(F)
sodium picrate ..................................... 65(F)
tint ..................... 63, 66, 67(F), 68(F), 70(F)
Vilella's reagent ................................ 43(F)
Weck's reagent, modified ................. 39(F)
Etches ......................................................... 204
Etching .................. 18, 37, 61–72(F), 72–73
aluminum alloys .................................. 52(F)
and bias in specimen preparation .. 166–167
as bias source ......................................... 195
deep, and image quality ............ 167, 168(F)
depth ......................................................... 62
drying problems that obscure true
microstructure ........................ 62, 63(F)
electrolytic ................. 67(F), 68–70(F), 72
of grain boundaries .................................. 62
heat tinting .................................... 71, 72(F)
immersion tint ............................... 63, 64(F)
interference layer method ........................ 71
iron-base alloys ........................................ 66
metals affected by ....... 62, 66, 67(F), 68(F),
69(F), 69–70(F)
necessity for imaging ............................. 167
potentiostatic ...................................... 63, 72
prior-austenite grain boundary ......... 62–63
procedures .................................... 61–63(F)
and residual sectioning/grinding
damage during polishing .................. 50
selective ............ 50(F), 63–68(F), 167, 168
stains .............................................. 42, 43(F)
time involved ............................................ 62
tint ...................................... 64(F), 66, 70(F)
Etching stains ......................................... 43(F)
F
Farnsworth, Philo Taylor ............................ 2
Fast Fourier transform (FFT)
algorithm ................................ 86–87(F)
inverse .................................................. 87(F)
Feature angle ............................................. 110
Feature finding program ......................... 110
Feature measurement report ............. 126(T)
Feature number ........................................ 110
Feature positioning effect ..................... 79(F)
Features
number excluded .................................... 104
number of ............................................... 104
total .................................................... 126(T)
Feature-specific
descriptors ................................. 122, 123(F)
distributions .......................... 234–255(F,T)
measurements ................. 211, 224–234(F)
parameters, primary .......................... 213(T)
properties .................................................... 5
Feret diameters ...... 110, 112, 121–124, 151,
152(F), 213(F)
horizontal and vertical .............. 151, 152(F)
maximum ................................... 151, 152(F)
user-oriented .............................. 151, 152(F)
Feret 90, definition, as feature-specific
parameter ....................................... 213(T)
Feret 0, definition, as feature-specific
parameter ....................................... 213(T)
Ferrite ............................................... 66, 67(F)
Fiber length ................................. 123(T), 124
Fiber-reinforced composite, mechanical
properties ............................... 146, 147(F)
Fibers ......................................................... 123
Fiber width .................................. 123(T), 124
Field area ..................................... 104, 126(T)
Field horizontal projection
measurement, number of pixels used to
create ................................................... 247
Field measurement report .................. 126(T)
Field measurements .......... 211, 224–234(F)
Field NA ................................................ 126(T)
Field NL ................................................ 126(T)
Field number ....................... 104, 218, 219(F)
Field size .................................................... 132
Fields of view, optimum
number of .......................... 182–183(F,T)
Fill holes ....................................... 216, 217(F)
image amendment procedure ............ 223(T)
Fill holes command .................................. 220
G
Gamma-alumina slurries ........................... 52
Gamma curve transformation ............. 84(F)
Gaussian filter ............................................. 86
Gaussian gray processing
function ................................. 209210(T)
Glagolev, A.A. ........................................... 12
Glass fiber composite, pattern
matching .................................... 92, 93(F)
Gloves, use of .............................................. 60
Glyceregia, as etchant ............................ 52(F)
G measurement (grain size) procedure,
using automatic or semiautomatic image
analysis (ASTM E 1382) ................... 117
Gradient filter ........................................ 87(F)
Gradient gray processing
function ................................ 209–210(T)
Grain area, average .............................. 26, 27
Grain boundaries, removal of ................. 223
Grain diameter, average ...................... 26, 27
H
Hacksawing ................................................. 37
Hafnium, grain size revealed
by etching................................................61
Hall-Petch relationship ............................ 193
Halo error ............................................. 210
Hardness, volume fraction and specific
surface area effects ........................ 196(F)
Heat tinting ...................................... 71, 72(F)
metals affected by ......................... 71, 72(F)
Heyn, Emil .................................................. 29
I
IBM-class computers ................................... 9
IBM-compatible computers ....................... 10
Illumination
uneven ....................................... 166(F), 167
uniform, from high-quality optical
microscope ........................ 168, 169(F)
Image acquisition
bias introduction ........................ 165171(F)
devices ................................................ 8082
Image amendment .................... 222224(F,T)
Image analysis
area, actual image ......................... 75–76(F)
binary image processing ............ 92–100(F)
feature discrimination ....... 88–92(F), 93(F)
feature positioning effect ..................... 79(F)
image acquisition .............................. 80–82
image compression ................................... 80
image storage ........................... 80
macro program ............................... 203, 204
macro system, bits of gray used ....... 8, 205
measurement issues ...................... 78–80(F)
principles ................................... 75–100(F)
processing of image ..................... 82–87(F)
process steps .................................. 75, 76(F)
resolution vs. magnification .............. 77–78
routine ..................................................... 203
size range of images ........................ 76–77
stepwise scheme ..................................... 270
system pixel arrays ....................... 76(F), 77
Image analysis software ........................... 156
Image analysis specialists ............................ 8
Image analysis (IA) systems ........................ 2
Image analysis technique,
reproducibility of measurements ....... 191
Image analyzers .......................................... 32
for two-phase materials ........... 194–197(F)
TV-based, development of ......................... 4
Image editing ............................................ 224
procedures for ......................................... 224
Image-input devices, conditions for ........ 262
Image measurements .............. 211–213(F,T)
Image memory, on-board ........................... 10
Image number ........................................... 231
Image processing .............................. 110, 120
arithmetic processing ............................... 86
frequency domain
transformation ...................... 86–87(F)
gray-level .................................................. 82
image acquisition ................................... 172
initial image information ....................... 172
neighborhood kernel processing .. 84–86(F),
87(F)
pixel point operations ...... 83–84(F), 85(F)
polynomial fitting ..................................... 82
rank-order processing .................... 82, 83(F)
shading correction ............................ 82–83
Image process input ................................. 231
Image process output ............................... 231
Image-Pro software package ..................... 12
Image segmentation
(thresholding) ..................... 214–221(F)
Image smoothing ...................................... 210
Image tiling ........................ 241, 250–255(F)
IMANCO (for Image Analyzing
Computers) ....................................... 6(F)
Q 360 system ............................................. 7
Inclusion content ................................ 31–32
assessment by chart methods (ASTM E
45) ............................................. 31–32
Inclusion length, mean value of ............... 235
Inclusion length (Feret max)
histogram .............................. 235, 237(F)
Inclusion lengths, standard
deviation of ......................................... 237
Inclusion pullout ....................................... 204
Inclusion rating by automatic image
analysis (ASTM E 1122) ............. 127(T)
Inclusion ratings ......................................... 18
Inclusions ................................................... 120
in steel ...................................................... 61
length measurement ................................. 32
nonmetallic ......................................... 17, 61
role in void or crack initiation ............... 129
by stereology (ASTM E 1245) ......... 127(T)
Individual pressure mode .......................... 46
Ingot-iron image reconstruction ............ 225,
226–227(F)
Ingot iron
equivalent grain diameter ................. 242(T)
histogram ....................................... 243(F)
cumulative distribution .................. 243(F)
statistical parameters from
feature-specific grain data ......... 242(T)
Inhomogeneity
degree of ................................................. 131
quantification of ..................................... 161
Input image ............................................... 231
Inscribed x and y ...................................... 110
Integer variables ............................. 228–229
Integral parameters ................................. 150
Intensity ............................ 90, 91, 114115(F)
definition ................................................... 91
Interactive selection method ......... 88–91(F)
Intercept count ............................. 4, 110, 225
Interceptions per unit length .................... 23
Intercept length, average ............................ 26
Intercept method ............................. 29, 30(F)
Intercepts .................................... 106–107(F)
total .................................................... 126(T)
database ................................ 232, 233–234
Interference layer method
(Pepperhoff) ......................................... 71
Interlamellar spacing ................................. 24
Intermetallic precipitates ........................... 61
International Society for Stereology
(ISS) ....................................................... 4
standard system of notation ................ 16(T)
J
Jeffries, Zay ..................................... 27, 28(F)
Jeffries planimetric method ............ 156, 178
Jernkontoret (JK) rating production
using image analysis
(ASTM E 1122) ................................... 32
J-integral ................................................... 148
Joint Photography Experts Group
(JPEG) compression algorithm ........ 80
K
Keel block .................................................... 17
Kernel ................................... 208, 209(T), 210
Kontron and Clemex
Technologies Inc. ................................ 11
L
Lab color space model ....... 258, 259(F), 260
Lab color system ...................................... 260
Lab coordinate system ............................. 260
Lamellar structures ................... 146–147(F)
Laplacian algorithm ................................... 92
Laplacian filter ........................................... 86
Laplacian gray processing
function ................................. 209210(T)
Lapping ........................................................ 49
Laps .............................................................. 49
Large-area mapping (LAM) ..... 250–255(F)
automated stage control used ................. 255
Laser scanning ................................... 80–81
Lead
grinding ..................................................... 48
grinding with embedded diamond
abrasive particles ......................... 49(F)
Least-square curve fitting ......................... 86
Leco Corporation ....................................... 11
Leica Microsystems Q550 MW ........... 12(F)
Leica Q570 system ................................. 11(F)
M
Macintosh (Mac)/Apple computers ....... 10
Macro ............................................. 203–204
construction information ........................ 218
development ................................. 228–231
Macrocode steps ....................................... 266
MACRO GRAIN_INTERCEPTS
(macro for intercept
counting) .................................... 232–234
MACRO INCLUSION
MEASUREMENT .................... 239–241
Macro program ........................................ 203
controlling automatic stage movement
and focusing of microscope ........... 216
MACRO RECONSTRUCT
GRAINS ..................................... 230–231
Macro routine ................................... 229–231
MACRO STORE IMAGES .... 229–230, 231
MACRO VERTICAL LINES ......... 249–250
Magnesium, grain size revealed by
etching .................................................. 61
Magnification ....................... 77–78, 79(F), 80
and bias in digital measurements .......... 181
definition ................................................... 77
empty ...................................................... 171
influence on values of analyzed
images ................................ 168–171(F)
optimum value ........................ 182–183(F,T)
verification of correct amount of ........... 171
Magnification effect .................................. 111
Manganese sulfide inclusions .......... 217–218
Manual methods of inclusion analysis
(ASTM E 45) .................... 103, 240–241
Manual point counting measurement
(ASTM E 562) ............................. 127(T)
procedures ............................................... 116
Material Safety Data Sheets (MSDSs) ..... 56
Mathematical morphology ...................... 5, 6
development of ........................................... 4
Matheron, G. ................................................. 5
Max horizontal chord ......................... 126(T)
Maximum Feret diameter ....................... 158
Maximum intercept length ...................... 151
Maximum particle width ............ 151, 152(F)
Mean .................................................. 125, 131
Mean center-to-center planar
spacing ...................................... 21(F), 25
Mean center-to-center spacing of
bands .................................................. 245
Mean chord length, determined by using
measurement frame ............................ 214
Mean diameter .......................................... 124
Mean edge-to-edge spacing of bands ..... 245
Mean free path .... 25, 32, 108–109, 126, 245
of graphite, on fracture toughness of
ferritic ductile iron ............ 148, 149(F)
Mean grain size number calculation
(ASTM E 1382) ........ 227–228, 233–234
Mean graphite nodule diameter ............. 155
Mean intercept length ........ 123(T), 192, 193
Mean lineal intercept (L3) .............. 106–107
Mean lineal intercept distance (L) ... 25, 111
Mean lineal intercept length ............... 23, 29
of α-grains ................................................. 30
Mean number of interceptions of
features divided by the total test area
(NA) .................................................... 228
N
National Television System Committee
(NTSC) color-encoding scheme ...... 261
Naval brass, etching ........................ 66, 69(F)
Nearest-neighbor spacing distribution .. 130,
131(F), 132–134(F), 143
Near-neighbor distances .......................... 137
Near-neighbor spacing ............................. 130
Neighborhood kernel processing . . 84–86(F),
87(F), 97(F)
Network at mid-edge/edge-particle
spacing ............................................... 130
Next instruction ........................................ 204
Nickel, etching ............................................. 62
Nickel-base superalloys, bimodal duplex
grain size ............................... 117, 118(F)
Nickel-manganese alloys, 3-D
reconstruction of microstructure ....... 164,
165(F)
O
Occupied squares method ..................... 19
Offset threshold error .............................. 210
On-board image memory .......................... 10
On-board processors ............................ 11–12
Opening ................................. 96–97(F), 98(F)
Open operator ..................................... 223(T)
Opponent-colors approach ...................... 260
Optical microscope, illumination system
of ......................................................... 205
Optical scanner ........................................... 81
Optic axis ................................................... 109
Optimas software ....................................... 10
Orientation .................................... 2324, 108
degree of .......................................... 24, 245
Orthogonal sections, of
microstructures ...................... 164, 165(F)
Outline, image amendment
procedure ....................................... 223(T)
Output image ............................................ 231
Oxide coatings ........................................ 84(F)
P
Paper, microscopy of, material
standard (ASTM D 686) ............. 127(T)
Paper, microscopy of, material
standard (ASTM D 1030) ........... 127(T)
Parallax angle ........................................... 109
Parallax distance ...................................... 109
Parallel distribution of phases ................ 146
Particle
basic measures of ...................... 151, 152(F)
convex ..................................................... 151
Particle dispersions
characterization of ..................... 129–144(F)
clustered .................................... 129, 130(F)
definition ................................................. 129
ordered ....................................... 129, 130(F)
random ....................................... 129, 130(F)
techniques used to characterize ............. 130
Particle perimeter ....................... 151, 152(F)
Particles, per unit intercept length ............ 179
Particle surface area ................................ 151
Particle-to-particle spacing ..................... 136
Pattern analysis system (PAS) (Bausch
and Lomb, USA) .................................. 6
Pattern-matching algorithms ......... 92, 93(F)
PCI frame grabber ..................................... 12
Pearlite, in steel, thresholding
of ............................... 215–218(F), 219(F)
Percent of relative accuracy
(% RA) .............................................. 102
Perimeter ........ 110, 111–113(F), 123, 123(T),
126(T), 152
definition, as feature-specific
parameter ................................... 213(T)
total, standard notation for .................. 16(T)
Personal computers (PC) ............................. 9
Phase Alternation Line (PAL)
color-encoding scheme ..................... 261
Phase percentage .......................................... 4
Phase volume content, determination
example ......................... 269–270, 271(F)
Phenolic resins ............... 40, 43, 44(F), 63(F)
Physical properties ................................... 146
Pitting corrosion .............................. 49, 50(F)
Pixel diagonals .......................................... 112
Pixels ............................................... 76(F), 205
definition ................................................... 76
dimension ................................................. 77
perimeter ................................................. 206
Planimeter ................................................... 19
Planimetric method ......................... 27, 28(F)
Plastic deformation, as bias
source ..................................... 166–167(F)
Point count, standard notation for ......... 16(T)
Point counting ............ 2, 4, 20–22(F), 29–30,
31(F), 116
to manually measure area ...................... 104
Point count method (ASTM
E 562) ........................................ 20–22(F)
Point fraction ............................... 20, 22, 151
method ...................................................... 19
Poisson's distribution .......... 132–133(F), 139
Polarized light ............................................. 54
to reveal grain size in nonferrous
alloys ................................................. 61
Polisher, vibratory ................................... 53(F)
Polishing ...... 36, 37, 43–45(F), 46, 49–56(F),
72
abrasives ................................................... 56
automatic ....................................... 55–56(F)
chemical .............................................. 53–54
cloths ......................................................... 56
cloths, pressure-sensitive-adhesive-backed .... 46
electrolytic .......................................... 53–54
final .............................................. 50, 52, 58
improper, as source of bias
in images ................................... 166(F)
intermediate .............................................. 50
mechanical ................................................ 53
pressure applied ........................................ 54
relief .............................................. 66, 70(F)
rough ................................................... 49–50
specimen movement during ..................... 54
stages ........................................................ 49
vibratory ............................................. 50, 60
Polishing safety issues (ASTM E 2014) ... 56
Polymerization ............................................ 46
Polystyrene foam, resolution of images
and errors introduced ............ 173, 174(F)
Populations ........................................ 186–187
Pores .................................................. 120, 123
Porosity, boron addition effect on
sintered stainless steel ........... 199–200(F)
Position ............................................... 113–114
Position x and y ........................................ 110
Potentiostat .................................................. 63
Power spectrum display ....................... 87(F)
Precipitates, intermetallic ........................... 61
Precision and bias statement (ASTM E
112) ..................................................... 234
Precision saws .................................. 38–39(F)
Preparation of replicas (ASTM
E 1351) .......................................... 127(T)
Preprocessing ............................ 171, 172–173
Q
Q720 (IMANCO) ..................................... 7(F)
QTM. See Quantitative television
microscope.
Quality control
and data interpretation ........................... 191
inclusion content of HSLA steels .......... 197
Quantimet A ....................................... 4, 5, 12
Quantimet B ........................................ 4, 5(F)
Quantimet 720 (Q720) system ............... 6(F)
Quantitative description of the
microstructure .................................. 145
Quantitative microscopy .............................. 1
definition ..................................................... 1
Quantitative television microscope
(QTM) ........................................... 4, 5(F)
Quasi-homogeneous distribution ............ 132
Queen's Award to Industry .......................... 4
R
Radar systems ............................................... 2
Raman spectroscopy ................................. 114
Random access memory (RAM) ...... 81, 220
Random distributions ................. 132–133(F)
Random sampling .............. 35, 162–163, 164
Random sectioning ................................... 163
Rank-order filter ........................................ 86
Raster ...................................... 77–78, 81, 116
definition ............................................. 77–78
Rastering ........................................... 205, 247
Raster lines ................................................ 107
Real variables ................................... 228–229
Red filter, for thresholding brass ......... 221(F)
Red, green, and blue (RGB) channels,
to establish color for pixels ................. 90
Red, green, and blue (RGB) color
space model . . 90, 257–258, 259(F), 262,
263, 264(F)
S
Sampling .............. 17–18, 35–37, 162–165(F)
of 1-D and 2-D objects .......................... 162
random .................................... 162–163, 164
systematic .................................. 162–165(F)
of 3-D objects ......................................... 162
Sampling procedures for inclusion
studies (ASTM E 45) ......................... 37
Sampling procedures for inclusion
studies (ASTM E 1122) ...................... 37
Sampling procedures for inclusion
studies (ASTM E 1245) ..................... 37
Sat Image (saturation image) ..... 266–267(F)
Saturation .............................................. 90–91
definition ....................................................... 90
T
Tagged image file format (TIFF)
compression algorithm ....................... 80
Tangent count ........................................... 110
Tantalum, anodizing ................................... 69
TAS ................................................................. 6
Television, development of ........................... 2
Temperature, effect on tensile properties
of Si-ferrite and ductile iron ............. 148,
149(F)
Template .................................................... 104
Tensile strength, maximum ..................... 146
Tessellation ................................... 120–122(F)
by conditional dilation ...................... 141(F)
by dilation technique ........ 140, 141–143(F)
Tessellation network ................................. 142
U
Ultimate dilation, image amendment
procedure ....................................... 223(T)
Ultimate eroded point ................................ 91
Ultimate erosion, image amendment
procedure ....................................... 223(T)
Ultimate tensile strength
of ferritic ductile iron .. 147–148(F), 149(F)
of silicon ferrite ............ 147–148(F), 149(F)
Ultrasonic cleaning ..................................... 54
Unbiased characterization ....................... 162
Units per pixel ..................................... 126(T)
V
Vacuum evaporator .................................... 71
Vanadium, anodizing .................................. 69
Vapor deposition ......................................... 72
Variance ..................................................... 184
Variance algorithm ..................................... 92
Vertical projection ............... 213, 246247(F)
Vertical sectioning method, and random
sampling ............................................. 163
Video frame rate ...................................... 4–5
Video microscopy ..................................... 2–3
Visible light, wavelength range for
human observers ................................. 257
Void coalescence ....................................... 129
Voids ........................................................... 129
Volume fraction .... 1–2, 4, 19–22(F), 30, 217
of beta-phase in CDA brass,
thresholding of ............................... 214
compared to area fraction ...... 104–105, 115
definition ................................................... 19
as descriptor or integral parameter ........ 150
and dilation and counting technique ...... 115
estimated ................................................. 154
estimates to interpret physical and
mechanical properties ..................... 195
of inclusions ............................................. 32
of inclusions, guidelines for
determination (ASTM E 1245) ...... 234
magnification effect on analyzed
images ................................ 168–171(F)
necessary to predict mechanical
properties ........................................ 146
needed for calculating mean free path .... 25
of pores in sintered stainless steel,
boron addition effect ......... 199–200(F)
and random sampling ............................. 163
range of value ......................................... 157
related by fracture toughness of HSLA
steels .................................. 197–199(F)
standard notation for ........................... 16(T)
stereological definition of ........................ 32
Volume of prolate spheroid ................ 123(T)
Volume of sphere ................... 123(T), 126(T)
Volumetric density, standard
notation for ...................................... 16(T)
W
Waspaloy, residual sectioning/grinding
damage from polishing ..............49, 50(F)
Watershed transformations ....................... 11
Water stains, eliminated in specimen
preparation .......................................... 210
Wavelength-sensitive detectors (RGB) ... 257
Weighted distributions ........................ 190(F)
Weight fraction ................................... 19, 217
Weight loss .................................................. 19
While loop ................................................. 240
While statement ........................................ 204
White, gray level assigned ........................ 205
White light ................................................. 257
Width ................................................. 122–123
definition, as feature-specific
parameter ................................... 213(T)
Worst-field report ............................. 102–103
X
x Feret ................................................... 126(T)
Xmax, definition, as feature-specific
parameter ....................................... 213(T)
Xmin, definition, as feature-specific
parameter ....................................... 213(T)
X-ray mapping .......................................... 114
X-rays, contrast mechanisms associated
with ................................................ 102(T)
Y
y Feret ................................................... 126(T)
Yield strength
of ferritic ductile iron .. 147–148(F), 149(F)
of silicon ferrite ............ 147–148(F), 149(F)
YIQ/YUV coordinate system .................. 260
YIQ system ................................................ 260
YUV system ............................................... 260
Z
Zeiss, Carl ..................................................... 6
Zeiss and Clemex ....................................... 12
Zirconium
anodizing .................................................. 69
grain size revealed by etching ................. 61
Zirconium alloys, attack polishing
solution added to slurry ....................... 58