

Practical Guide to Image Analysis

ASM International
Materials Park, OH 44073-0002
www.asminternational.org


Copyright 2000
by
ASM International
All rights reserved
No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without the written permission of the copyright
owner.
First printing, December 2000

Great care is taken in the compilation and production of this Volume, but it should be made clear that NO
WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, ARE GIVEN IN CONNECTION
WITH THIS PUBLICATION. Although this information is believed to be accurate by ASM, ASM cannot
guarantee that favorable results will be obtained from the use of this publication alone. This publication is
intended for use by persons having technical skill, at their sole discretion and risk. Since the conditions of product
or material use are outside of ASM's control, ASM assumes no liability or obligation in connection with any use
of this information. No claim of any kind, whether as to products or information in this publication, and whether
or not based on negligence, shall be greater in amount than the purchase price of this product or publication in
respect of which damages are claimed. THE REMEDY HEREBY PROVIDED SHALL BE THE EXCLUSIVE
AND SOLE REMEDY OF BUYER, AND IN NO EVENT SHALL EITHER PARTY BE LIABLE FOR
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES WHETHER OR NOT CAUSED BY OR RESULTING FROM THE NEGLIGENCE OF SUCH PARTY. As with any material, evaluation of the material under
end-use conditions prior to specification is essential. Therefore, specific testing under actual conditions is
recommended.
Nothing contained in this book shall be construed as a grant of any right of manufacture, sale, use, or
reproduction, in connection with any method, process, apparatus, product, composition, or system, whether or not
covered by letters patent, copyright, or trademark, and nothing contained in this book shall be construed as a
defense against any alleged infringement of letters patent, copyright, or trademark, or as a defense against liability
for such infringement.
Comments, criticisms, and suggestions are invited, and should be forwarded to ASM International.
ASM International staff who worked on this project included E.J. Kubel, Jr., Technical Editor; Bonnie Sanders,
Manager, Production; Nancy Hrivnak, Copy Editor; Kathy Dragolich, Production Supervisor; and Scott Henry,
Assistant Director, Reference Publications.
Library of Congress Cataloging-in-Publication Data
Practical guide to image analysis.
p. cm.
Includes bibliographical references and index.
1. Metallography. 2. Image analysis. I. ASM International.
TN690.P6448 2000
669.95dc21
00-059347
ISBN 0-87170-688-1
SAN: 204-7586
ASM International
Materials Park, OH 44073-0002
www.asminternational.org
Printed in the United States of America


About the Authors


John J. Friel is technical director at Princeton Gamma-Tech (Princeton,
NJ). He received his undergraduate education at the University of
Pennsylvania, his M.S. degree from Temple University, and his Ph.D.
from the University of Pennsylvania. He did postdoctoral work at Lehigh
University and worked at Homer Research Lab, Bethlehem Steel Corp.
before joining PGT. In addition to his work on x-ray microanalysis for
PGT, John serves as an adjunct Professor of Ceramics at Rutgers
University and is a member of the International Centre for Diffraction
Data (ICDD). John is the author of over 50 technical publications on
subjects in materials science, x-ray microanalysis, and image analysis,
and also authored a book entitled X-Ray and Image Analysis in Electron
Microscopy. He is past-president of the Microbeam Analysis Society and
chairman of ASTM Subcommittee E04.11 on X-Ray and Electron
Metallography.
James C. Grande is leader, Light Microscopy and Image Analysis at GE
Corporate Research and Development Center (Schenectady, NY). He
received his B.S. degree in mechanical engineering from Northeastern
University in 1980 and began working in the microscopy lab at the R&D
Center as a metallographer during his undergraduate work. Jim has
authored several articles, presented several talks on metallography and
image-analysis techniques, and has taught training courses on stereology
and image analysis. He has more than 20 years of experience in a research
environment using image analysis to characterize many different materials in a variety of applications.
Dennis Hetzner is a research specialist at the Timken Co. (Canton, OH).
Dennis received his B.S. and M.S. degrees in metallurgical engineering
from Illinois Institute of Technology and his Ph.D. in Metallurgical
Engineering from University of Tennessee. He specializes in quantitative
metallography and has conducted research in the areas of powder metal processing, high-temperature mechanical property testing, rolling contact
fatigue, and laser glazing of bearings. He has presented several papers and
tutorial lectures regarding the use of image analysis to solve problems
related to quantification of materials microstructural features, and he
teaches courses on quantitative image analysis. Dennis is chairman of
ASTM Subcommittee E04.05 on Microindentation Hardness Testing, and
he is a member of ASM International, the International Metallographic
Society (IMS), and ASTM.
Krzysztof Kurzydłowski is head, Dept. of Materials Science and
Engineering, Warsaw University of Technology (Warsaw, Poland). He
received his undergraduate (1978) and Ph.D. (1981) degrees from
Warsaw University of Technology and his D.Sc. degree (1990) from Silesian University of Technology. His research interests include quantification of materials microstructures, materials modeling, design of
polycrystalline materials and composites, environmental effects on materials properties, and prediction of in-service materials degradation. Kris
has authored four books/monographs and authored or coauthored more
than 50 technical publications. He is a member of International Society
for Stereology, Materials Research Society, European Materials Research
Society, and American Society of Mechanical Engineers, and he is a
Fellow of the Institute of Materials.
Don Laferty is director, Research and Development at Objective Imaging
Ltd. (Cambridge, England). Don received his B.A. degree in physics and
philosophy from Denison University in 1988. He has been involved with
optics and digital imaging for more than 13 years. His early interests in
real-time optical pattern recognition systems evolved into digital image
analysis and microscopy upon joining Cambridge Instruments Inc. (now
part of Leica Microsystems) in 1989. While at Cambridge Instruments, he
was active in applying techniques based on mathematical morphology to
scene segmentation problems in all areas of microscopy and furthering
the practical use of image processing and analysis in cytogenetics,
pathology, metallurgy, materials, and other microscopy-related fields.
Don currently is involved with high-performance hardware and software
solutions for automated microscope-based image analysis.
Mahmoud T. Shehata has been a research scientist at Materials Technology Laboratory/CANMET (Ottawa, Ontario, Canada) since 1978, conducting research in microstructural characterization of engineering materials for industrial clients. He received his B.S. degree in metallurgical engineering from University of Cairo (Egypt) and his Ph.D. degree in materials
science from McMaster University (Ontario, Canada). Mahmoud has
used metallographic analysis throughout his research career mostly in the
area of microstructure/property relationships of engineering materials,
particularly in the area of effects of nonmetallic inclusions on steel
properties. He is the author of more than 50 technical papers and more
than 100 reports, and he is a member of several technical organizations
including ASM International, International Metallographic Society, and
Canadian Institute of Mining, Metallurgy, and Petroleum.
Vito Smolej is research engineer at Carl Zeiss Vision (Munich, Germany). He received his undergraduate degree in technical physics from
University of Ljubljana (Slovenia) in 1971, his master's degree in
biophysics from University of Zagreb (Croatia) in 1977, and his Ph.D. in
solid state chemistry from University of Ljubljana in 1977. He did
research and post-doctoral work at the Dept. of Applied Mathematics of
the Josef Stefan Institute in Ljubljana and taught programming technologies and advanced-programming languages at University of Maribor
(Kranj, Slovenia). After spending a postdoctoral year at the Max Planck
Institute for Materials Science in Stuttgart (Germany) in 1982, Vito joined Carl Zeiss Canada, where he was involved with software-based image-analysis systems produced by Kontron Elektronik. In 1988, he moved to
Kontron Elektronik Bild Analyse (which became Carl Zeiss Vision in
1997) in Munich, where he is involved in software development and
systems design. He is author or coauthor of more than 40 technical
publications.
George F. Vander Voort is director, Research and Technology at Buehler
Ltd. (Lake Bluff, IL). He received his B.S. degree in metallurgical
engineering from Drexel University in 1967 and his M.S. degree in
Metallurgy and Materials Science from Lehigh University in 1974. He
has 29 years experience in the specialty steel industry with Bethlehem
Steel and Carpenter Technology Corp. George is author of more than 150
publications including the book Metallography: Principles and Practice,
as well as the ASM International video course Principles of Metallography. George has been active with ASTM since 1979 as a member of
committees E-4 on Metallography and E-28 on Mechanical Testing. He is
a member of several technical organizations including ASM International,
International Metallographic Society, ASTM, International Society for
Stereology, Microscopy Society of America, and State Microscopy
Society of Illinois.
Leszek Wojnar is associate professor, Institute of Materials Science,
Cracow University of Technology (Cracow, Poland). He studied at
Cracow University of Technology and Academy of Mining and Metallurgy and graduated in 1979, and he received his Ph.D. degree from
Cracow University of Technology in 1985. His research interests include
the application of computer technology in materials science, including
image analysis, stereology, materials engineering, and software development to assess weldability. Leszek has authored three books and more
than 50 technical publications. His work Principles of Quantitative
Fractography (1990) was the first such complete monograph in Poland
and earned him his D.Sc. degree. He is a member of International Society
for Stereology, Polish Society for Materials Engineering, and Polish
Society for Stereology.


Contents

Preface ... ix

CHAPTER 1: Image Analysis: Historical Perspective ... 1
Don Laferty, Objective Imaging Ltd.
  Video Microscopy ... 2
  Beginnings: 1960s ... 4
  Growth: 1970s ... 6
  Maturity: 1980s ... 8
  Desktop Imaging: 1990s ... 10
  Truly Digital: 2000 and Beyond ... 12

CHAPTER 2: Introduction to Stereological Principles ... 15
George F. Vander Voort, Buehler Ltd.
  Sampling ... 17
  Specimen Preparation ... 18
  Volume Fraction ... 19
  Number per Unit Area ... 22
  Intersections and Interceptions per Unit Length ... 23
  Grain-Structure Measurements ... 23
  Inclusion Content ... 31
  Measurement Statistics ... 32
  Image Analysis ... 33
  Conclusions ... 33

CHAPTER 3: Specimen Preparation for Image Analysis ... 35
George F. Vander Voort, Buehler Ltd.
  Sampling ... 35
  Sectioning ... 37
  Specimen Mounting ... 40
  Grinding ... 46
  Polishing ... 49
  Examples of Preparation Procedures ... 56
  Etching ... 61
  Conclusions ... 72

CHAPTER 4: Principles of Image Analysis ... 75
James C. Grande, General Electric Research and Development Center
  Image Considerations ... 75
  Image Storage and Compression ... 80
  Image Acquisition ... 80
  Image Processing ... 82
  Feature Discrimination ... 88
  Binary Image Processing ... 92
  Further Considerations ... 99

CHAPTER 5: Measurements ... 101
John J. Friel, Princeton Gamma-Tech
  Contrast Mechanisms ... 101
  Direct Measurements ... 102
    Field Measurements ... 102
    Feature-Specific Measurements ... 110
  Derived Measurements ... 115
    Field Measurements ... 115
    Feature-Specific Derived Measurements ... 122
  Standard Methods ... 126

CHAPTER 6: Characterization of Particle Dispersion ... 129
Mahmoud T. Shehata, Materials Technology Laboratory/CANMET
  Number Density Variation Technique ... 131
  Nearest-Neighbor Spacing Distribution ... 132
  Dilation and Counting Technique ... 134
  Dirichlet Tessellation Technique ... 137
  Tessellation by Dilation Technique ... 141
  Conclusions ... 143

CHAPTER 7: Analysis and Interpretation ... 145
Leszek Wojnar, Cracow University of Technology
Krzysztof J. Kurzydłowski, Warsaw University of Technology
  Microstructure-Property Relationships ... 145
  Essential Characteristics for Microstructure Description ... 150
  Parameters and Their Evaluation ... 154
  Sampling Strategy and Its Effect on Results ... 162
  Bias Introduced by Specimen Preparation and Image Acquisition ... 165
  Bias Introduced by Image Processing and Digital Measurements ... 171
  Estimating Basic Characteristics ... 183
  Data Interpretation ... 186
  Data Interpretation Examples ... 191
  Conclusions ... 200

CHAPTER 8: Applications ... 203
Dennis W. Hetzner, The Timken Co.
  Gray Images ... 204
  Image Measurements ... 211
  Image Segmentation (Thresholding) ... 214
  Image Amendment ... 222
  Field and Feature-Specific Measurements ... 224
  Feature-Specific Distributions ... 234

CHAPTER 9: Color Image Processing ... 257
Vito Smolej, Carl Zeiss Vision
  Modeling Color ... 257
  Color Spaces ... 258
  Electronic Recording of Color Images ... 261
  Color Images ... 263
  Color Image Processing ... 265
  RGB-HLS Model Conversion ... 265
  Color Processing and Enhancement ... 266
  Color Discrimination ... 267
  Color Measurement ... 269
  Quantitative Example: Determining Phase Volume Content ... 269
  Conclusions ... 270

Index ... 273

Preface
Man has been using objects made from metals for more than 3000 years, objects ranging from domestic utensils, artwork, and jewelry to weapons made of brass alloys, silver, and gold. The alloys used for these objects were developed from empirical knowledge accumulated over centuries of trial and error. Prior to the late 1800s, engineers had no concept of the relationship between a material's properties and its
structure. In most human endeavors, empirical observations are used to
create things, and the scientific principles that govern how the materials
behave lag far behind. Also, once the scientific concepts are understood,
practicing metallurgists often have been slow to understand how to apply
the theory to advance their industries.
The origins of the art of metallography date back to Sorby's work in
1863. While his metallographic work was ignored for 20 years, the
procedures he developed for revealing the microstructures of metals
directly led to some of today's well-established relationships between
structure and properties. During the past 140 years, metallography has
transformed from an art into a science. Concurrent with the advances in
specimen preparation techniques has been the development of methodologies to better evaluate microstructural features quantitatively.
This book, as its title suggests, is intended to serve as a practical
guide for applying image analysis procedures to evaluate microstructural
features. Chapters 1 and 2 present an historical overview of how
quantitative image analysis developed and the evolution of todays
television computer-based analysis systems, and the science of stereology,
respectively. The third chapter provides details of how metallographic
specimens should be properly prepared for image analysis. Chapters 4
through 7 consider the principles of image analysis, what types of
measurements can be made, the characteristics of particle dispersions, and
methods for analysis and interpretation of the results. Chapter 8 illustrates
how macro programs are developed to perform several specific image
analysis applications. Chapter 9 illustrates the use of color metallography
for image analysis problems.
This book considers most of the aspects that are required to apply image
analysis to materials problems. The book should be useful to engineers,
scientists, and technicians that need to extract quantitative information
from material systems. The principles discussed can be applied to typical
quality control problems and standards, as well as to problems that may
be encountered in research and development projects. In many image analysis problems, statistical evaluation of the data is required. This book attempts to provide simple solutions for each problem presented; however, when necessary, a more rigorous analysis is included. Hopefully,
readers will find all aspects of the book to be useful as their skill levels
increase.
The authors represent a very diverse group of individuals, and each has
been involved in some aspect of image analysis for 20 or more years. As
indicated in their biographies, each brings a unique contribution to this
book. Several are active members of ASTM Committee E04 on Metallography, and most are involved in professional societies dealing with
testing, metallography, stereology, and materials.
I enjoyed writing the Applications chapter and got a firsthand appreciation of the technical breadth and quality of the information contained
in this book from having the opportunity to review each chapter. I would
like to thank Ed Kubel of ASM, who did an excellent job of technically editing all chapters.
This book should be an excellent addition to the technical literature and
assist investigators at all levels of training and expertise in using image
analysis.
Dennis W. Hetzner
June 2000

CHAPTER 1

Image Analysis: Historical Perspective
Don Laferty
Objective Imaging Ltd.

QUANTITATIVE MICROSCOPY, the ability to rapidly quantify microstructural features, is the result of developments that occurred over a
period of more than 100 years, beginning in the mid-1800s. The roots of
quantitative microscopy lie in the two logical questions from scientists
after the first microscopes were invented: how large is a particular feature
and how much of a particular constituent is present?
P.P. Anosov first used a metallurgical microscope in 1841 to reveal the
structure of a Damascus knife (Ref 1). Natural curiosity most likely
spurred a further question: what are the volume quantities of each
constituent? This interest in determining how to relate observations made
using a microscope from a two-dimensional field of view to three
dimensions is known as stereology. The first quantitative stereological
relationship developed using microscopy is attributed to A. Delesse (Ref
2). From his work is derived the equivalency of area fraction (AA) and
volume fraction (VV), or AA = VV.
Many of the early studies of metallography (the study of the structure
of metals and alloys) are attributed to Sorby. He traced the images of
rocks onto paper using projected light. After cutting out the regions of each phase present and weighing the pieces of paper, he
estimated the volume fraction of the phases.
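In modern digital terms, Delesse's principle amounts to counting pixels: the area fraction of a segmented image estimates the volume fraction. The following minimal Python sketch (an illustration added here, not a procedure from this chapter; the function name and test image are hypothetical) makes this concrete.

import numpy as np

def area_fraction(binary_img):
    # A_A: area of the phase of interest divided by the total test area.
    # For a segmented digital image this is simply the fraction of
    # pixels belonging to the phase; by Delesse, A_A estimates V_V.
    return float(np.count_nonzero(binary_img)) / binary_img.size

# Hypothetical example: a 512 x 512 field with a known 25% second phase
rng = np.random.default_rng(0)
phase = rng.random((512, 512)) < 0.25
print(area_fraction(phase))  # approximately 0.25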
Lineal analysis, the relationship between lineal fraction (LL) and
volume fraction, or LL = VV, was demonstrated by Rosiwal in 1898 (Ref
3). Sauveur conducted one of the first studies to correlate chemical
composition with structure in 1896 (Ref 4). From this work, the
relationship between the carbon content of plain carbon steel and the
volume fraction of the various constituents was discovered. Later, the
relationship between volume fraction and points in a test grid was

established both by Thompson (Ref 5) and Glagolev (Ref 6) in 1930 and 1931, respectively, establishing the relationship PP = VV, where PP is the
point count.
From these first experiments has evolved the now well-known relationship:
PP = LL = AA = VV
Initially, the procedures developed to perform stereological measurements were based on laborious, time-consuming manual measurements.
Of all these manual procedures, point counting is probably the most
important. From a metallographer's perspective, point counting is the
easiest way to manually estimate the volume fraction of a specific
constituent. Regarding image analysis, point counting will be shown to be
equally important.
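As a rough sketch of how the point count translates to software (an illustration, not a routine from this book; the function name and grid spacing are assumptions), a systematic grid of test points can be overlaid on a segmented image and the hit fraction reported as PP:

import numpy as np

def point_count(binary_img, step=16):
    # Overlay a systematic grid of test points, spaced 'step' pixels
    # apart, and count the fraction falling on the constituent of
    # interest. By PP = VV, this fraction estimates the volume fraction
    # while examining far fewer points than the full image.
    grid = binary_img[step // 2::step, step // 2::step]
    return float(np.count_nonzero(grid)) / grid.size

Counting a sparse grid rather than every pixel mirrors the manual procedure; the sparser the grid, the faster the estimate but the larger its statistical uncertainty.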

Video Microscopy
Television, or TV, as we know it today, evolved from the early work of
Philo Taylor Farnsworth in the 1920s. There were several commercial
demonstrations in the late 1920s and early 1930s (Ref 7), but the
technology was applied to building radar systems for the military during
World War II and was not commercialized until after the war.
One early video technique used the "flying spot." The output of a
cathode ray tube was used as the source of illumination; this bright spot
was rastered (scanned) across a specimen. A detector tube was used to
analyze the output signal. Systems such as this were used to evaluate
blood cells (Ref 8) and to assess nonmetallic inclusions in steel (Ref 9).
As the technology advanced, ordinary television cameras were used to
convert the output signal of the microscope into an electronic signal and
a corresponding image on a video tube. These early systems were analog
devices.
The advancement and continually increasing sophistication of television and computer systems have allowed the development of powerful
image analysis (IA) systems, which have largely supplanted manual
measurement methods. Today, measurements and calculations that previously required many hours to perform can be made in seconds and even
microseconds. In reality, an IA system is only a simple point counter.
However, operating a point counter in conjunction with a computer allows
the use of highly sophisticated analysis algorithms to rapidly perform
many different types of measurements. While the same measurements
could be made manually, measurement time would be prohibitive.
The practice of IA has seen many changes in the nearly 40 years since
the development of the first television-based image analyzers in the


1960s. From limited hardware systems first used for quantitative metallographic characterizations to modern, highly flexible image processing
software applications, IA has found a home in an enormous range of
industrial and biomedical applications. The popularity of digital imaging
in various forms is still growing. The explosion of affordable computer
technologies during the 1990s coupled with recent trends in digital-image
acquisition devices places a renewed interest in how digital images are
created, managed, processed, and analyzed. There now is an unprecedented, growing audience involved in the digital imaging world on a
daily basis. Who would have imagined in the early days of IA, when an
imaging system cost many tens, if not hundreds, of thousands of dollars, that such powerful image processing software would be
available today for a few hundred dollars at the corner computer store?
Yet, beneath all these new technological developments, underlying
common elements that are particular to the flavor of imaging referred to
as "scientific image analysis" have changed little since their very
beginnings. Photographers say that good pictures are not taken but
instead are carefully composed and considered; IA similarly relies on
intelligent decisions regarding how a given subject (the specimen)
should be prepared for study and what illumination and optical configurations provide the most meaningful information.
In acquiring the image, the analyst who attends to details such as
appropriate video levels and shading correction ensures reliable and
repeatable results, which build confidence. The myriad digital image
processing methods can be powerful allies when applied to both simple
and complex imaging tasks. The real benefits of image processing,
however, only come when the practitioner has the understanding and
experience to choose the appropriate tools, and, perhaps more importantly, knows the boundaries inside which tool use can be trusted.
The goal of IA is to distill a manageable set of
meaningful quantitative descriptions from the specimen (or better, a set of
specimens). In practice, successful quantification depends on an understanding of the nature of these measurements so that, when the proper
parameters are selected, accuracy, precision, and repeatability, as well as
the efficiency of the whole process, are maximized.
Set against the current, renewed emphasis on digital imaging is a
history of TV-based IA that spans nearly four decades. Through various
generations of systems, techniques for specimen preparation, image
acquisition, processing, measurement, and analysis have evolved from the
first systems of the early 1960s into the advanced general purpose systems
of the 1980s, finally arriving at the very broad spectrum of imaging
options available into the 21st century. A survey of the methods used in
practice today shows that for microstructure evaluations, many of the
actual image processing and analysis techniques used now do not differ all
that much from those used decades ago.


Beginnings: 1960s
The technology advancement of the industrial age placed increasing
importance on characterizing materials microstructures. This need has
driven the development of efficient practical methods for the manual
measurement of count, volume fraction, and size information for various
microstructures. From these early attempts evolved the science of
stereology, where a mathematical framework was developed that allowed
systematic, manual measurements using various overlay grids. Marked by
the founding of the International Society for Stereology in 1961, these
efforts continued to promote the development and use of this science for
accurate and efficient manual measurements of two-dimensional and
three-dimensional structures in specimens. Despite the considerable labor savings stereological principles provided, microstructure characterization using manual point and intercept counting is a time-consuming, tiring process. In many cases, tedious hours are spent achieving the
desired levels of statistical confidence. This provided the background for
a technological innovation that would alleviate part of a quantitative
metallurgist's workload.
Image analysis as we know it today, in particular that associated with materials and microscopy, saw in the early 1960s two major developments: TV-based image analyzers and mathematical morphology. The
first commercial IA system in the world was the Quantimet A from Metals
Research in 1963, with the very first system off the production line being
sold to British Steel in Sheffield, UK (Ref 10). Metals Research was
established in 1957 by Cambridge University graduate Dr. Michael Cole
and was based above Percival's Coach Company, beside the Champion of
the Thames pub on King Street, Cambridge, UK. The person inspiring the
design of the Quantimet was Dr. Colin Fisher, who joined the company in
1962, and, for its design, Metals Research received numerous awards, including the Queen's Award to Industry on six occasions. The
QTM notation has been applied to IA systems because the Quantimet A
was referred to as a quantitative television microscope (QTM). While this
system served primarily as a densitometer, it was the beginning of the age
of automation.
These early IA systems were purely hardware-based systems. The
Quantimet B was a complete system for analyzing phase percentage of
microstructures and included a purpose-built video camera and specialized hardware to measure and display image information (Fig. 1). While
these early systems had relatively limited application, mainly geared toward phase-percentage analysis and counting, they achieved
extremely high performance.
It was necessary to continuously gather information from the live video
signal, because there was no large-scale memory to hold the image


information for a period longer than that of the video frame rate.
Typically, only a few lines of video were being stored at one time. In one
regard, these systems were very simple to use. For example, to gage the
area percentage result using the original Quantimet A, the investigator
needed to simply read the value from the continuously updated analog
meter. Compared with the tedium of manual point counting using grid
overlays, the immediate results produced by this new QTM gave a hint of
the promise of applying television technology to microstructure characterization.
The first system capable of storing a full black and white image was the
Bausch and Lomb QMS introduced in 1968 (Ref 11). Using a light pen,
the operator could measure properties of individual objects, now referred
to as feature-specific properties, for the first time.
The second major foundation of IA in these early days was mathematical morphology, developed primarily by French mathematicians J. Serra
and G. Matheron at the Ecole des Mines de Paris (Ref 12). The
mathematical framework for morphological image processing was introduced by applying topology and set theory to problems in earth and
materials sciences. In mathematical morphology, the image is treated in a
numerical format as a set of valued points, and basic set transformations
such as the union and intersection are performed. This results in concepts
such as the erosion and dilation operations, which are, in one form or
another, some of the most heavily used processing operations in applied
IA even today.
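To illustrate (a minimal sketch, not code from the systems described here), erosion and dilation can be expressed in a few lines using SciPy's ndimage module; the structuring element and test image below are arbitrary choices for the example:

import numpy as np
from scipy import ndimage

# 3 x 3 square structuring element: the "probe" set of mathematical
# morphology.
se = np.ones((3, 3), dtype=bool)

# Arbitrary binary test image; True pixels belong to features.
img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True
img[0, 0] = True  # an isolated single-pixel feature

eroded = ndimage.binary_erosion(img, structure=se)    # shrinks features
dilated = ndimage.binary_dilation(img, structure=se)  # grows features

# Opening (erosion followed by dilation) removes features smaller than
# the structuring element, such as the isolated pixel above, while
# approximately restoring the larger square.
opened = ndimage.binary_dilation(
    ndimage.binary_erosion(img, structure=se), structure=se)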

Fig. 1 Quantitative television microscope, the Quantimet B (Metals Research, Cambridge, U.K.)


Growth: 1970s
By the 1970s, the field of IA was prepared for rapid growth into a wide
range of applications. The micro-Videomat system was introduced by
Carl Zeiss (Ref 13), and the Millipore MC particle-measurement system
was being marketed in America and Europe (Ref 14). The first IA system
to use mathematical morphology was the Leitz texture analysis system
(TAS), introduced in 1974. Also, a new field-specific system named the
Histotrak image analyzer was introduced by the British firm Ealing-Beck
(Ref 15).
In the meantime, Metals Research had become IMANCO (for Image
Analyzing Computers), and its Quantimet 720 system offered a great deal
more flexibility than the original systems of the 1960s (Fig. 2). Still
hardware-based, this second generation of systems offered many new and
useful features. The Q720 used internal digital-signal processing hardware, a built-in binary morphological image processor with selection of
structuring element and size via dials on the front panel, and advanced
feature analysis with the size and shape of individual objects measured
and reported on-screen in real time. The system was also flexible, due to
programmability implemented via a logic matrix configured using sets of
twisted pairs of wire. Other impressive innovations included a light pen
for direct editing of the image on the video monitor and automated control
of microscope stage and focus. Other systems offered in the day, such as
the TAS and the pattern analysis system (PAS) (Bausch and Lomb, USA),
had many similar processing and measurement capabilities.
The performance of early hardware-based systems was very high, even
by the standards of today. Using analog tube-video cameras, high-resolution images of 896 × 704 pixels were achieved with around 10


frames/s display rates. That these specialized systems of the 1970s
performed image acquisition, thresholding, binary image morphological
processing such as erosion and dilation, and feature measurement, and
provided continuously displayed results for each video frame, many times a second, is impressive. The primary issues for these systems were
their accuracy and reliability. In the best systems, care was taken to ensure
that the geometry of the video input was homogeneous and that the
system could accurately measure the possible range of shapes, sizes,
orientations, and number of features within the image without encountering systematic problems.

Fig. 2 Q720 (IMANCO, Cambridge, U.K.) image analyzer with digital image processing hardware
The 1970s also saw the introduction of general-purpose computers
coupled to IA systems. Dedicated systems were connected to general-purpose minicomputers so results could be more conveniently stored for
later review (Fig. 3). Although the computer became an integral part of
the overall system, image processing and analysis still were performed by
the specialized hardware of the image analyzer. The introduction of the
general-purpose computer into the system actually slowed down the
process. After all, the IA systems of the day were processing nearly
megapixel images very fast. For instance, the IMANCO Q360 achieved
an analysis throughput of 20 fields per second, which included acquiring
the image, analyzing it, and moving the specimen (Ref 16). This is a
remarkable rate even for the technology of today, and the general-purpose
computers of the 1970s could not even come close to this performance on
their own.

Fig. 3 Q720 (IMANCO, Cambridge, U.K.) with minicomputer for flexible results handling


Each development during the 1970s led to more widespread use of IA


in a variety of fields. From the introductory systems of the early 1960s
that were applied primarily to metallurgical examinations, the 1970s saw
image analyzers put to good use in cytology, botany, and cytogenetics, as
well as in general materials and inspection applications. The price of these
systems, measured typically in the hundreds of thousands of dollars, coupled with their specialized operation and complexity, led to the rise of
dedicated IA specialists, who required an in-depth understanding of all
aspects of IA. Video tubes required regular calibration and maintenance,
and shading effects complicated image acquisition. The various quirks
associated with early computing systems required the appropriate level of
respect. Also, IA specialists needed to learn how to combine all this
knowledge to coax these systems into producing appropriate results for
unique applications.

Maturity: 1980s
The heyday of hardware-based IA arrived in the 1980s, while at the
same time a new paradigm of personal computer-based (PC-based)
imaging began to emerge. Increasing power and declining cost of
computers fueled both developments. In the case of systems that
continued to use dedicated image processing hardware, the systems now contained integrated microprocessors and built-in memory for flexible
image and results storage. Systems appearing in the 1980s combined
these features with vast new options for programmability, giving rise to
increasingly sophisticated applications and penetration of scientific IA
into many research and routine environments. Many of these systems still
are used today. Though generally slower than their purely hardware-based
predecessors, systems of the 1980s made up for the lost speed by being
significantly easier to use and more flexible.
Many systems of the early to mid-1980s provided a richer implementation of morphological image processing facilities than was possible in
their purely hard-wired predecessors. These operations were performed
primarily on binary images due to the memory and speed available at
those times. For instance, Cambridge Instruments' Q900 system provided high, nearly megapixel-resolution imaging and allowed for a wide range of morphological operations such as erosion, dilation, opening, closing, a host of skeletonization processes, and a full complement of Boolean
operations (Fig. 4). The Q970 system included true-color, high-resolution
image acquisition using a motorized color filter wheel (Fig. 5), a
technique still used today in some high-end digital cameras. In these and
other systems were embodied a range of image acquisition, processing,
and measurement capabilities, which, when coupled with the flexibility
offered by general-purpose microcomputers, became commonplace, tried-and-true tools accepted today for their practicality in solving a broad range of IA applications.

Fig. 4 Q900 (Cambridge Instruments) with integrated microcomputer
A revolutionary event during the 1980s was the introduction of the
personal computer (PC). With both Apple-class and IBM-class computers, the costs of computing reached new lows, and it was not long before
the first imaging systems relying heavily on PC technology were made
available (Fig. 6). The first purely software-only approaches to image
processing on the PC were hopelessly slow compared with their hardware-powered predecessors, but they did offer a low-cost alternative
suitable for experimenting with image processing and analysis techniques
without the need for expensive hardware. However, it still was necessary
to acquire the image, so a variety of image-acquisition devices (the frame-grabber boards that fit into the PC's open architecture) became popular (Ref 17). Many of these early devices included on-board image memory with special processors to improve performance for often-used facilities, such as look-up table transformations (LUTs) and convolutions.

Fig. 5 Q970 (Cambridge Instruments) with high-resolution true-color acquisition

Fig. 6 Early PC-based system, the Q10 (Cambridge Instruments/Olympus)
In some respects, the architecture of the Apple computers of the 1980s,
such as the Macintosh (Mac), was more suitable for dealing with the
large amounts of memory required for image processing applications, as
is evident in the popularity of the early Macs used for desktop publishing.
With the introduction of the Microsoft Windows operating system for
IBM-compatible computers, the availability of imaging software for both
major PC platforms grew. For example, Optimas software (Bioscan, now
part of Media Cybernetics, U.S.) was one of the first IA software packages
introduced for the new Windows operating system.

Desktop Imaging: 1990s


The development of large hardware-based systems peaked in the 1980s,
followed in the 1990s by the rise of the PC as an acceptable platform for
most traditional IA applications due to the dramatic improvement in PC
performance. Development shifted from specialized, purpose-built IA
hardware to supporting various frame grabber cards, developing efficient
software-based algorithms for image processing and analysis, and creating user interfaces designed for ease of use, with sufficient power for a
variety of applications. Personal computer performance increased as PC
prices dropped, and increasingly sophisticated software development


tools resulted in many new software-based imaging products. The focus


on software development and reliance on off-the-shelf computers and
components in the 1990s gave rise to many dozens of new imaging companies, compared with the previous two decades, when only a few
companies had the resources required to design and manufacture high-end
imaging systems. At the same time, imaging was finding its way into
dedicated application-specific turnkey systems, tailored to specific tasks
in both industrial and biomedical areas.
In the early 1990s, specialized image processors for morphology
provided a new level of performance allowing practical, routine use of
gray-scale as well as binary morphological processing methods. Some of
the major U.S. suppliers of metallographic consumables, such as Buehler
Ltd., Leco Corp., and Struers Inc., introduced field-specific machines and
systems, which performed limited but specialized measurements. For
more generalized material analysis, systems such as Leica's Q570 (Fig. 7) and similar systems manufactured by Kontron and Clemex Technologies Inc. (Canada), which had high-speed processors for gray-scale image amendment, watershed transformations, and morphological reconstruction, were used to solve a wide variety of challenging image processing problems. These new capabilities, together with tried-and-true
binary transforms and early gray-processing developments, such as
autodelineation and convolution filters, offered the image analyst a broad
range of tools having sufficient performance to allow experimentation and
careful selection for the desired effect.
Fig. 7 Q570 (Leica) with high-speed gray-scale morphological processors

By the end of the decade, PC technology had advanced so rapidly that even the need for specific on-board processors was relegated to specialized real-time machine-vision applications. For traditional IA work, the only specialized hardware required was a PCI frame grabber, which quickly
transferred image data directly into computer memory or onto the display.
Computer central processing units (CPUs) now were sufficiently fast to
handle most of the intensive pixel-processing jobs on their own.

Truly Digital: 2000 and Beyond


Today, the IA landscape consists of a wide range of options. Basic
imaging software libraries allow users to program their own customized
solutions. General-purpose IA software packages, such as Image-Pro
(Media Cybernetics, U.S.), support a range of acquisition and automation
options and provide macro capabilities. Fully configured systems offered by
Zeiss and Clemex, similar to the earlier generations of IA systems, still
are available for use in imaging applications in all aspects of microscopy.
From a historical perspective, the current direct descendant in the lineage
that began some 38 years ago with the first Quantimet A is Leica Microsystems' Q550MW (Fig. 8), a system that targets application-specific tasks in the analysis of material microstructures, not at all unlike
its original ancestor.
Fig. 8 Q550MW (Leica Microsystems) materials workstation

What will the future bring in the area of imaging technology? A striking recent addition is the availability of fully digital cameras for use in both scientific and consumer applications. Unlike analog cameras or traditional photomicrography, the digital camera relies heavily on the use of a


computer and software, and so this development too owes a debt to fast,
inexpensive PC technology. Now, traditional image-documentation tasks
are taking on a digital flavor, with the benefit that additional information
regarding the specimen can be archived easily into an image database and
electronically mailed to colleagues and clients using the latest major
development in computing, networks, and the Internet. As this digital
transition occurs, the need to understand the practical issues that arise
when dealing with these images, particularly when processing them for
quantitative information, becomes more important than ever.
Wherever the developments of the future lead, high-quality results that
can be trusted and used with confidence always will depend on an
in-depth understanding of the following fundamental aspects of scientific
image analysis:
• Use the best procedures possible to prepare the specimens for analysis. No amount of image enhancement can correct problems created by poorly prepared specimens.
• Take care in acquiring high-quality images.
• Apply only the image processing necessary to reveal what counts.
• Keep the final goals in mind when deciding what and how to measure.

References
1. P.P. Anosov, Collected Works, Akad. Nauk SSSR, 1954
2. A. Delesse, Procédé mécanique pour déterminer la composition des roches, Ann. Mines (IV), Vol 13, 1848, p 379
3. A. Rosiwal, On Geometric Rock Analysis: A Simple Surface Measurement to Determine the Quantitative Content of the Mineral Constituents of a Stony Aggregate, Verhandl. K.K. Geol. Reich., 1898, p 143
4. A. Sauveur, The Microstructure of Steel and the Current Theories of Hardening, Trans. AIME, 1896, p 863
5. E. Thompson, Quantitative Microscopic Analysis, J. Geol., Vol 27, 1930, p 276
6. A.A. Glagolev, Mineralog. Mater., 1931, p 10
7. D.E. Fisher and J.F. Marshall, Tube: The Invention of Television, Counterpoint, 1996
8. W.E. Tolles, Methods of Automatic Quantification of Micro-Autoradiographs, Lab. Invest., Vol 8, 1959, p 1889
9. R.A. Bloom, H. Walz, and J.G. Koenig, An Electronic Scanner-Computer for Determining the Non-Metallic Inclusion Content of Steels, JISI, 1964, p 107
10. Leica Microsystems Customer History, http://www.leica-microsystems.com/
11. Bausch & Lomb advertisements, 1969
12. J. Serra, Image Analysis and Mathematical Morphology, Academic Press, 1982
13. The Microscope, Vol 18, 3rd quarter, July 1970, p xiii
14. The Microscope, Vol 19, 2nd quarter, April 1971, p xvii
15. The Microscope, Vol 23, 2nd quarter, April 1975, p vii
16. The Microscope, Vol 20, 1st quarter, January 1972, back cover
17. S. Inoué, Video Microscopy, Plenum Press, 1986

CHAPTER 2

Introduction to Stereological Principles
George F. Vander Voort
Buehler Ltd.

THE FUNDAMENTAL RELATIONSHIPS for stereology, the foundation of quantitative metallography, have been known for some time,
but implementation of these concepts has been limited when performed
manually due to the tremendous effort required. Further, while humans
are quite good at pattern recognition (as in the identification of complex
structures), they are less capable of accurate, repetitive counting. Many
years ago, George Moore (Ref 1) and members of ASTM Committee E-4
on Metallography conducted a simple counting experiment asking about
400 persons to count the number of times the letter "e" appeared in a
paragraph without striking out the letters as they counted. The correct
answer was obtained by only 3.8% of the group, and results were not
Gaussian. Only 4.3% had higher values, while 92% had lower values,
some much lower. The standard deviation was 12.28. This experiment
revealed a basic problem with manual ratings: if a familiar subject (as in
Moore's experiment) results in only one out of 26 persons obtaining a
correct count, what level of counting accuracy can be expected with a less
familiar subject, such as microstructural features?
By comparison, image analyzers are quite good at counting but not as
competent at recognizing features of interest. Fortunately, there has been
tremendous progress in the development of powerful, user-friendly image
analyzers since the 1980s.
Chart methods have been used for many years to rate microstructures, chiefly for conformance to specifications.
Currently, true quantitative procedures are replacing chart methods for
such purposes, and they are used increasingly in quality control and
research studies. Examples of the applications of stereological measurements were reviewed by Underwood (Ref 2).

Basically, two types of measurements of microstructures are made. The first group includes measurements of depth, such as depth of decarburization, depth of surface hardening, thickness of coatings and platings,
and so forth. These measurements are made at a specific location (the
surface) and may be subject to considerable variation. To obtain reproducible data, surface conditions must be measured at a number of
positions on a given specimen and on several specimens if the material
being sampled is rather large. Standard metrology methods, which can be
automated, are used. Metrology methods are also used for individual
feature analysis of particle size and shape measurement.
The second group of measurements belongs to the field referred to as
stereology. This is the body of measurements that describe relationships
between measurements made on the two-dimensional plane of polish and
the characteristics of the three-dimensional microstructural features
sampled. To facilitate communications, the International Society for
Stereology (ISS) proposed a standard system of notation; Table 1 (Ref 3) lists the most commonly used notations. Notations
have not been standardized for many of the more recently developed
procedures.
These measurements can be made manually with the aid of templates outlining a fixed field area, systems of straight or curved lines of known length, or a number of systematically spaced points. The simple counting measurements, PP, PL, NL, PA, and NA, are most important and are easily made. These measurements are useful by themselves and can be used to derive other important relationships, and they can be made using semiautomatic tracing tablets or automatic image analyzers.

Table 1 Standard notation recommended by the International Society for Stereology

Symbol  Units      Description                                                              Common name
P       ...        Number of point elements or test points                                  ...
PP      ...        Point fraction (number of point elements per total number of test points)  Point count
L       mm         Length of linear elements or test-line length                            ...
PL      mm⁻¹       Number of point intersections per unit length of test line               ...
LL      mm/mm      Sum of linear intercept lengths divided by total test-line length        Lineal fraction
A       mm²        Planar area of intercepted features or test area                         ...
S       mm²        Surface area or interface area, generally reserved for curved surfaces   ...
V       mm³        Volume of three-dimensional structural elements or test volume           ...
AA      mm²/mm²    Sum of areas of intercepted features divided by total test area          Areal fraction
SV      mm²/mm³    Surface or interface area divided by total test volume (surface-to-volume ratio)  ...
VV      mm³/mm³    Sum of volumes of structural features divided by total test volume       Volume fraction
N       ...        Number of features                                                       ...
NL      mm⁻¹       Number of interceptions of features divided by total test-line length    Lineal density
PA      mm⁻²       Number of point features divided by total test area                      ...
LA      mm/mm²     Sum of lengths of linear features divided by total test area             Perimeter (total)
NA      mm⁻²       Number of interceptions of features divided by total test area           Areal density
PV      mm⁻³       Number of points per test volume                                         ...
LV      mm/mm³     Length of features per test volume                                       ...
NV      mm⁻³       Number of features per test volume                                       Volumetric density
L̄       mm         Mean linear interception distance, LL/NL                                 ...
Ā       mm²        Mean area intercept, AA/NA                                               ...
S̄       mm²        Mean particle surface area, SV/NV                                        ...
V̄       mm³        Mean particle volume, VV/NV                                              ...

Note: Fractional parameters are expressed per unit length, area, or volume. Source: Ref 3
This Chapter describes the basic rules of stereology with emphasis on
how these procedures are applied manually. Other Chapters describe how
these ideas can be implemented using image analysis (IA). Image analysis
users should understand these principles clearly before using them. When
developing a new measurement routine, it is good practice to compare IA
data with data developed manually. It is easy to make a mistake in setting
up a measurement routine, and the user needs a check against such
occurrences.

Sampling
Sampling of the material is an important consideration, because
measurement results must be representative of the material. Ideally,
random sampling would be best, but this can rarely be performed, except
for small parts like fasteners where a specific number of fasteners can be
drawn from a production lot at random. It generally is impossible to select
specimens at random from the bulk mass of a large component such as a
forging or casting, so the part is produced with additional material from
which test specimens can be taken.
For a casting, it may be possible to trepan (machine a cylinder of
material from a section) sections at locations that will be machined
anyway later in the production process. Another approach used is to cast
a separate, small chunk of material of a specified size (called a keel
block) along with the production castings, which provides material for
test specimens. However, material from the keel block may produce
results markedly different than those obtained from the casting if there is
a large difference in size and solidification and cooling rates between
casting and keel block.
After obtaining specimens, there still is a sampling problem, particularly in wrought (hot worked) material, such as rolled, extruded, or forged
material. Microstructural measurements made on a plane parallel to the
deformation axis, for example, will often be quite different from those
taken on a plane perpendicular to the deformation axis, especially for
features such as nonmetallic inclusions. In such cases, the practice is to
compare results on similarly oriented planes. It generally is too time
consuming to measure the microstructural feature of interest on the three
primary planes in a flat product such as plate or sheet, so that the true
three-dimensional nature of the structure cannot be determined except,
perhaps, in research studies.

The sampling plan also must specify the number of specimens to be
tested. In practice, the number of specimens chosen is a compromise
between minimizing testing cost and the desire to perform adequate
testing to characterize the lot. Excessive testing is rare. Inadequate
sampling is more likely due to physical constraints of some components
and a desire to control testing costs. In the case of inclusion ratings, a
testing plan was established years ago by the chart method as described
in ASTM E 45.
The procedure calls for sampling billets at locations representing the top
and bottom or top, middle, and bottom of the first, middle, and last ingots
on a heat. The plane of polish is longitudinal (parallel to the hot-working
axis) at the midthickness location. This yields either six or nine specimens, providing an examination surface area of 160 mm² per specimen,
or a total of 960 and 1440 mm², respectively. This small area establishes
the inclusion content and is the basis for a decision as to the quality (and
salability) of a heat of steel, which could weigh from 50 to 300 tons. For
bottom-poured heats, there is no first, middle, and last ingot, and
continuous casting eliminates ingots, so alternative sampling plans are
required. In the writer's work with inclusion testing, characterization is
improved by using at least 18 or 27 specimens per heat from the surface,
midradius, and center locations at each billet location (top and bottom or
top, middle, and bottom of the first, middle, and last top-poured ingots;
and three ingots at random from a bottom-poured heat).

Specimen Preparation
In the vast majority of work, the measurement part of the task is simple,
and 90% or more of the difficulty is in preparing the specimens properly
so that the true structure can be observed. Measurement of inclusions is
done on as-polished specimens because etching brings out extraneous
details that may obscure the detection of inclusions. Measurement of
graphite in cast iron also is performed on as-polished specimens. It is
possible, however, that shrinkage cavities often present in castings may
interfere with detection of the graphite, because shrinkage cavities and
graphite have overlapping gray scales. When the specimen must be etched
to see the constituent of interest, it is best to etch the specimen so that only
the constituent of interest is revealed. Selective etchants are best.
Preparation of specimens today is easier than ever before with the
introduction of automated sample-preparation equipment; specimens so
prepared have better flatness than manually prepared specimens. This is
especially important if the edge must be examined and measurements
performed. The preparation sequence must establish the true structure,
free of any artifacts. Automated equipment can produce a much greater
number of properly prepared specimens per day than the best manual
operator. A more detailed description on specimen preparation is in
Chapter 3.

Volume Fraction
It is well known that the amount of a second phase or constituent in a
two-phase alloy can have a significant influence on its properties and
behavior. Consequently, determination of the amount of the second phase
is an important measurement. The amount of a second phase is defined as
the volume of the second phase per unit volume, or volume fraction.
There is no simple experimental technique to measure the volume of a
second phase or constituent per unit volume of specimen. The closest
approach might be to use an acid digestion method, where a cube of metal
is weighed and then partially dissolved in an appropriate electrolyte that
dissolves the matrix but not the phase of interest. The residue is cleaned,
dried, and weighed. The remains of the cube (after cleaning and drying)
are weighed, and weight loss is calculated. The weight of the undissolved
second phase is divided by the weight loss to get an estimate of the
volume fraction of the second phase, with the densities of the matrix and
second phase known. This is a tedious method, not applicable to all
situations and subject to interferences. Three experimental approaches for
estimating the volume fraction have been developed using microscopy
methods: the area fraction, the lineal fraction, and the point fraction
methods.
The volume fraction was first estimated by areal (relating to area)
analysis by A. Delesse, a French geologist, in 1848. He showed that the
area fraction was an unbiased estimate of the volume fraction. Several
procedures have been used on real structures. One is to trace the second
phase or constituent with a planimeter and determine the area of each
particle. These areas are summed and divided by the field area to obtain
the area fraction, AA.
Another approach is to weigh a photograph and then cut out the
second-phase particles and weigh them. Then the two weights are used to
calculate the area fraction, as the weight fraction of the micrograph should
be equivalent to the area fraction. Both of these techniques are only
possible with a coarse second phase. A third approach is the so-called
occupied squares method. A clear plastic grid containing 500 small
square boxes is superimposed over a micrograph or live image. The
operator then counts the number of grid boxes that are completely filled,
¾ filled, ½ filled, and ¼ filled by the second phase or constituent. These
data are used to calculate the area covered by the second phase, which
then is divided by the image area to obtain the area fraction. All three
methods give a precise measurement of the area fraction of one field. An
enormous amount of effort must be expended per field. However, it is well
recognized that the field-to-field variability in volume fraction has a larger
influence on the precision of the volume fraction estimate than the error
in rating a specific field, regardless of the procedure used. So, it is not
wise to spend a great deal of effort to obtain a very precise measurement
on one or only a few fields.
Delesse also stated that the volume fraction could be determined by a
lineal analysis approach, but he did not develop such a method. This was
done in 1898 by A. Rosiwal, a German geologist, who demonstrated that
a sum of the lengths of line segments within the phase of interest divided
by the total length, LL, would provide a valid estimate of the volume
fraction with less effort than areal analysis.
However, studies show that a third method, the point count, is a more
efficient method than lineal analysis; that is, it yields the best precision
with minimal effort (Ref 4). The point count method is described in
ASTM E 562 and is widely used to estimate volume fractions of
microstructural constituents. To perform this test, a clear plastic grid with
a number of systematically spaced points is placed on a micrograph or a
projection screen, or inserted as an eyepiece reticle (crosses primarily are
used, where the point is the intersection of the arms, typically
consisting of 9, 16, 25, 49, 64, or 100 points). The number of points lying
on the phase or constituent of interest is counted and divided by the total
number of grid points. Points lying on a boundary are counted as
half-points. This procedure is repeated on a number of fields selected
without bias; that is, without looking at the image.
The point fraction, PP, is given by:

PP = P / PT   (Eq 1)

where P is the number of grid points lying inside the feature of interest,
α, plus one-half the number of grid points lying on particle boundaries,
and PT is the total number of grid points. Studies show that the point
fraction is equivalent to the lineal fraction, LL, and the area fraction, AA,
and all three are unbiased estimates of the volume fraction, VV, of the
second-phase particles:

PP = LL = AA = VV   (Eq 2)

Point counting is much faster than lineal or areal analysis and is the
preferred manual method. Point counting is always performed on the
minor phase, where VV < 0.5. The amount of the major (matrix) phase can
be determined by the difference.
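Because the point count reduces to simple tallies, Eq 1 and 2 are easy to script. The following Python sketch is illustrative only (the function name and the field data are assumptions, not from the text); it applies the half-point rule for grid points that fall on boundaries:

```python
def point_fraction(points_inside, points_on_boundary, total_grid_points):
    """Point fraction PP per Eq 1; grid points on a boundary count as half."""
    p = points_inside + 0.5 * points_on_boundary
    return p / total_grid_points

# One field of the Fig. 1 model: 6 interior hits, 2 boundary hits, 45 grid points
pp = point_fraction(6, 2, 45)
print(f"PP = {pp:.4f}")  # 0.1556, about 15.5%; an unbiased estimate of VV (Eq 2)
```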
The fields measured should be selected at locations over the entire
polished surface and not confined to a small portion of the specimen
surface. The field measurements should be averaged, and the standard
deviation can be used to assess the relative accuracy of the measurement,
as described in ASTM E 562.

In general, the number of points on the grid should be increased as the
volume fraction of the feature of interest decreases. One study (Ref 4)
suggested that the optimum number of grid test points is 3/VV. Therefore,
for volume fractions of 0.5 (50%) and 0.01 (1%), the optimum numbers
of grid points are 6 and 300, respectively. If the structure is heterogeneous, measurement precision is improved by using a low-point-density
grid and increasing the number of fields measured. The field-to-field
variability in the volume fraction has a greater influence on the measurement than the precision in measuring a specific field. Therefore, it is better
to assess a greater number of fields with a low-point-density grid than to
assess a small number of fields using a high-point-density grid, where the
total number of points is constant. In manual measurements, the saying
"do more, less well" refers to this problem.
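The 3/VV guideline of Ref 4 can be applied directly when planning a measurement. A minimal sketch, assuming only that a rough VV estimate is available beforehand:

```python
def optimum_grid_points(vv_estimate):
    """Optimum number of grid test points per field, about 3/VV (Ref 4)."""
    return 3.0 / vv_estimate

for vv in (0.5, 0.01):
    print(f"VV = {vv}: about {optimum_grid_points(vv):.0f} grid points per field")
# Prints 6 points for VV = 0.5 and 300 points for VV = 0.01, matching the text.
```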
To illustrate the point-counting procedure, Fig. 1 shows a 1000× image
of a microstructural model consisting of fourteen circular particles, 15 µm
in diameter, within a field area of 16,830 µm², or 0.0168 mm². The total
area of the circular particles is 2474.0 µm², which is an area fraction of
0.147, or 14.7%. This areal measurement is a very accurate estimate of the
volume fraction for such a geometrically simple microstructure and will
be considered to be the true value. To demonstrate the use of point
counting to estimate the volume fraction, a grid pattern was drawn over
this field, producing 45 intersection points. Six of these intersections are
completely within the particles and two lie on particle interfaces. The
number of hits is, therefore, 6 plus ½ × 2, or 7. Thus, PP (7 divided by
45) is 0.155, or 15.5%, which agrees well with the calculated area
fraction (5.4% greater).

Fig. 1 Point-counting method for estimating minor-phase volume fraction using a microstructural model containing identical-sized circular particles

For an actual microstructure, the time
required to point count one field is far less than the time to do an areal
analysis on that field. In practice, a number of fields would be point
counted, and the average value would be a good estimate of the volume
fraction acquired in a small fraction of the time required to do areal
analysis on an adequate number of fields. An areal fraction measurement
can only be done easily when the feature of interest is large in size and of
simple shape. Point counting is the simplest and most efficient technique
to use to assess the volume fraction. The area fraction, AA, and the point
fraction, PP, are unbiased estimates of the volume fraction, VV, as long as
the sectioning plane intersects the structural features at random.
The lineal fraction, LL, can also be determined for the microstructural
model shown in Fig. 1. The length of the horizontal and vertical line
segments within the circular particles was measured and found to be
278.2 µm. The total test line length is 1743 µm. Consequently, the lineal
fraction (278.2 divided by 1743) is 0.16, or 16%. This is a slightly higher
estimate, about 8.8% greater than that obtained by areal analysis. Again,
if a number of fields were measured, the average would be a good
estimate of the volume fraction. Lineal analysis becomes more tedious as
the structural features become smaller. For coarse structures, however, it
is rather simple to perform. Lineal analysis is commonly performed when
a structural gradient must be measured; that is, a change in second-phase
concentration as a function of distance from a surface or an interface.

Number per Unit Area


The count of the number of particles within a given measurement area,
NA, is a useful microstructural parameter and is used in other calculations.
Referring again to Fig. 1, there are 14 particles in the measurement area
(16,830 µm², or 0.0168 mm²). Therefore, the number of particles per unit
area, NA, is 0.0008 per µm², or 831.8 per mm². The average cross-sectional
area of the particles can be calculated by dividing the volume
fraction, VV, by NA:

Ā = VV / NA   (Eq 3)

This yields an average area, Ā, of 176.725 µm², which agrees extremely
well with the calculated area of a 15 µm diameter circular particle of
176.71 µm². The above example illustrates the calculation of the average
area of particles in a two-phase microstructure using stereological field
measurements rather than individual particle measurements.
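This field-measurement arithmetic can be sketched directly from the Fig. 1 numbers (the function name below is an assumption for illustration):

```python
def mean_particle_area(volume_fraction, particles_per_area):
    """Average cross-sectional area, A-bar = VV/NA (Eq 3)."""
    return volume_fraction / particles_per_area

field_area_um2 = 16830.0                 # Fig. 1 field area, in square micrometers
na = 14 / field_area_um2                 # NA, about 8.3e-4 per square micrometer
a_bar = mean_particle_area(0.147, na)    # about 176.7 square micrometers
print(f"NA = {na:.2e} per um^2, mean area = {a_bar:.1f} um^2")
```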

Intersections and Interceptions per Unit Length

Counting of the number of intersections of a line of known length with
particle boundaries or grain boundaries, PL, or the number of interceptions
of particles or grains by a line of known length, NL, provides two
very useful microstructural parameters. For space-filling grain structures
(single phase), PL = NL, while for two-phase structures, PL = 2NL (this
may differ by one count in actual cases).

Grain-Structure Measurements
For single-phase grain structures, it is usually easier to count the
grain-boundary intersections with a line of known length, especially for
circular test lines. This is the basis of the Heyn intercept grain size
procedure described in ASTM E 112. For most work, a circular test grid
composed of three concentric circles with a total line length of 500 mm
is preferred. Grain size is defined by the mean lineal intercept length, l:
l = 1/PL = 1/NL   (Eq 4)

This equation must be modified, as described later, for two-phase
structures. The value l can be used to calculate the ASTM grain size
number. Grain size determination is discussed subsequently in more
detail.
PL measurements can be used to define the surface area per unit volume,
SV, or the length per unit area, LA, of grain boundaries:

SV = 2PL   (Eq 5)

and

LA = (π/2)PL   (Eq 6)

For single-phase structures, PL and NL are equal, and either measurement
can be used. For two-phase structures, it is best to measure PL to
determine the phase-boundary surface area per unit volume, or phase-boundary
length per unit area.
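A minimal sketch of Eq 4 to 6, assuming PL has already been counted in intersections per millimeter (the numeric value below is hypothetical):

```python
import math

def mean_lineal_intercept(pl):
    """Mean lineal intercept, l = 1/PL (= 1/NL for single-phase grains), Eq 4."""
    return 1.0 / pl

def surface_area_per_volume(pl):
    """Grain-boundary surface area per unit volume, SV = 2PL (Eq 5)."""
    return 2.0 * pl

def boundary_length_per_area(pl):
    """Grain-boundary length per unit area, LA = (pi/2)PL (Eq 6)."""
    return (math.pi / 2.0) * pl

pl = 50.0  # hypothetical count: 50 intersections per mm of test line
print(mean_lineal_intercept(pl), surface_area_per_volume(pl), boundary_length_per_area(pl))
# l = 0.02 mm (20 um), SV = 100 mm^2/mm^3, LA = ~78.5 mm/mm^2
```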
Partially Oriented Structures. Many deformation processes, particularly those that are not followed by recrystallization, produce partially
oriented microstructures. It is not uncommon to see partially oriented
grain structures in metals that have been cold deformed. The presence and
definition of the orientation (Ref 3) can only be detected and defined by

examination of specimens on the principal planes. Certain microstructures
have a high degree of preferred directionality on the plane of polish or
within the sample volume. A structure is completely oriented if all of its
elements are parallel. Partially oriented systems are those with features
having both random and oriented elements. Once the orientation axis has
been defined, then a plane parallel to the orientation axis can be used for
measurements. PL measurements are used to assess the degree of
orientation of lines or surfaces.
Several approaches can be used to assess the degree of orientation of a
microstructure. For single-phase grain structures, a simple procedure is to
make PL measurements parallel and perpendicular to the deformation axis
on a longitudinally oriented specimen, as the orientation axis is usually
the longitudinal direction. The degree of grain elongation is the ratio of
perpendicular to parallel PL values; that is, PL⊥/PL∥. Another very useful
procedure is to calculate the degree of orientation, Ω, using these PL
values:

Ω = (PL⊥ − PL∥) / (PL⊥ + 0.571PL∥)   (Eq 7)

To illustrate these measurements, consider a section of low-carbon steel
sheet, cold rolled to reductions in thickness of 12, 30, and 70%. PL⊥ and
PL∥ measurements were made using a grid with parallel straight test lines
on a longitudinal section from each of four specimens (one specimen for
each of the three reductions, plus one specimen of as-received material).
The results are given in Table 2. As shown, cold working produces an
increased orientation of the grains in the longitudinal direction.

Table 2 Degrees of grain orientation for four samples of low-carbon steel sheet

Sample                        PL⊥(a)    PL∥(a)    PL⊥/PL∥    Ω, %
As-received                   114.06    98.86      1.15       8.9
Cold rolled, 12% reduction    126.04    75.97      1.66      29.6
Cold rolled, 30% reduction    167.71    60.6       2.77      52.9
Cold rolled, 70% reduction    349.4     34.58     10.1       85.3

(a) Number of grain-boundary intersections per millimeter
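As a check on Eq 7, the Table 2 values can be recomputed in a few lines (the data are copied from the table; the function name is an illustrative assumption):

```python
def degree_of_orientation(pl_perp, pl_par):
    """Degree of orientation per Eq 7, returned as a percentage."""
    return 100.0 * (pl_perp - pl_par) / (pl_perp + 0.571 * pl_par)

for label, perp, par in [("As-received", 114.06, 98.86),
                         ("70% reduction", 349.4, 34.58)]:
    print(f"{label}: PL ratio = {perp / par:.2f}, "
          f"omega = {degree_of_orientation(perp, par):.1f}%")
# Reproduces the Table 2 values of 8.9% and 85.3%.
```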
Spacing. The spacing between second-phase particles or constituents
is a very structure-sensitive parameter influencing strength, toughness,
and ductile fracture behavior. NL measurements are more easily used to
study the spacing of two-phase structures than PL measurements. Perhaps
the commonest spacing measurement is that of the interlamellar spacing
of eutectoid (such as pearlite) or eutectic structures (Ref 5). The true
interlamellar spacing, σt, is difficult to measure, but the mean random
spacing, σr, is readily assessable and is directly related to the mean true
spacing:

σt = σr / 2   (Eq 8)

The mean random spacing is determined by placing a test grid consisting
of one or more concentric circles on the pearlite lamellae in an unbiased
manner. The number of interceptions of the carbide lamellae with the test
line(s) is counted and divided by the true length of the test line to obtain
NL. The reciprocal of NL is the mean random spacing:

σr = 1 / NL   (Eq 9)

The mean true spacing, σt, is ½σr. To make accurate measurements, the
lamellae must be clearly resolved; therefore, use of transmission electron
microscope (TEM) replicas is quite common.
NL measurements also are used to measure the interparticle spacing in
a two-phase alloy, such as the spacing between carbides or intermetallic
precipitates. The mean center-to-center planar spacing between particles
over 360°, σ, is the reciprocal of NL. For the second-phase particles in the
idealized two-phase structure shown in Fig. 1, a count of the number of
particles intercepted by the horizontal and vertical test lines yields 22.5
interceptions. The total line length is 1743 µm; therefore, NL is 0.0129 per
µm, or 12.9 per mm, and σ is 77.5 µm, or 0.0775 mm.
The mean edge-to-edge distance between such particles over 360°,
known as the mean free path, λ, is determined in like manner but requires
knowledge of the volume fraction of the particles. The mean free path is
calculated from:

λ = (1 − VV) / NL   (Eq 10)

For the structure illustrated in Fig. 1, the volume fraction of the particles
was estimated as 0.147. Therefore, λ is 66.1 µm, or 0.066 mm.
The mean lineal intercept distance, l, for these particles is determined by:

l = VV / NL   (Eq 11)

For this example, l is 11.4 µm, or 0.0114 mm. This value is smaller than
the caliper diameter of the particles because the test lines intercept the
particles at random, not only at the maximum dimension. The calculated
mean lineal intercept length for a circle with a 15 µm diameter is 11.78
µm. Again, stereological field measurements can be used to determine a
characteristic dimension of individual features without performing individual
particle measurements.
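The Fig. 1 spacing example (Eq 9 to 11) can be reproduced with a short sketch; the counts and lengths below are the values quoted in the text, while the function name is an assumption:

```python
def particle_spacings(n_intercepts, line_length_um, volume_fraction):
    """Return (sigma, mean free path, mean lineal intercept) per Eq 9 to 11."""
    nl = n_intercepts / line_length_um              # interceptions per micrometer
    sigma = 1.0 / nl                                # mean center-to-center spacing
    mean_free_path = (1.0 - volume_fraction) / nl   # Eq 10
    mean_intercept = volume_fraction / nl           # Eq 11
    return sigma, mean_free_path, mean_intercept

sigma, mfp, l = particle_spacings(22.5, 1743.0, 0.147)
print(f"sigma = {sigma:.1f} um, lambda = {mfp:.1f} um, l = {l:.1f} um")
# About 77.5, 66.1, and 11.4 um, as in the Fig. 1 example.
```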
Grain Size. Perhaps the most common quantitative microstructural
measurement is that of the grain size of metals, alloys, and ceramic

materials. Numerous procedures have been developed to estimate grain
size; these procedures are summarized in detail in ASTM E 112 and
described in Ref 6 to 9. Several types of grain sizes can be measured:
ferrite grain size, austenite grain size, and prior-austenite grain size. Each
type presents particular problems associated with revealing these boundaries so that an accurate rating can be obtained (Ref 6, 9). While this
relates specifically to steels, ferrite grain boundaries are identical (geometrically) to grain boundaries in any alloy that does not exhibit
annealing twins, while austenite grain boundaries are identical (geometrically) to
grain boundaries in any alloy that exhibits annealing twins. Therefore,
charts depicting ferrite grains in steel can be used to rate grain size in
metals such as aluminum, chromium, and titanium, while charts depicting
austenite grains can be used to rate grain size in metals such as copper,
brass, cobalt, and nickel.
A variety of parameters are used to measure grain size:

O Average grain diameter, d
O Average grain area, Ā
O Number of grains per unit area, NA
O Average intercept length, L
O Number of grains intercepted by a line of fixed length, N
O Number of grains per unit volume, NV
O Average grain volume, V̄

These parameters can be related to the ASTM grain size number, G.


The ASTM grain-size scale was established using the English system of
units, but no difficulty is introduced using metric measurements, which
are more common. The ASTM grain size equation is:

n = 2^(G−1)   (Eq 12)

where n is the number of grains per square inch at 100×. Multiplying n
by 15.5 yields the number of grains per square millimeter, NA, at 1×. The
metric grain size number, GM, which is used by the International Standards
Organization (ISO) and many other countries, is based upon the number
of grains per mm², m, at 1×, and uses the following formula:

m = 8(2^GM)   (Eq 13)

The metric grain size number, GM, is slightly lower than the ASTM grain
size number, G, for the same structure:

G = GM + 0.046   (Eq 14)

This very small difference usually can be ignored (unless the value is near
a specification limit).
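A sketch of the grain size conversions in Eq 12 to 14 (the example G value is arbitrary, and the function names are illustrative assumptions):

```python
def grains_per_sq_inch_at_100x(g):
    """n = 2**(G - 1), Eq 12."""
    return 2.0 ** (g - 1.0)

def grains_per_sq_mm_at_1x(g):
    """NA = 15.5n, per the conversion given in the text."""
    return 15.5 * grains_per_sq_inch_at_100x(g)

def metric_grain_size(g_astm):
    """GM = G - 0.046, rearranged from Eq 14."""
    return g_astm - 0.046

g = 8.0  # example ASTM grain size number
print(grains_per_sq_inch_at_100x(g), grains_per_sq_mm_at_1x(g), metric_grain_size(g))
# 128 grains/in.^2 at 100x, 1984 grains/mm^2 at 1x, GM = 7.954
```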

Planimetric Method. The oldest procedure for measuring the grain size
of metals is the planimetric method introduced by Zay Jeffries in 1916
based upon earlier work by Albert Sauveur. A circle of known size
(generally 79.8 mm diameter, or 5000 mm² area) is drawn on a
micrograph or used as a template on a projection screen. The number of
grains completely within the circle, n1, and the number of grains
intersecting the circle, n2, are counted. For accurate counts, the grains
must be marked off as they are counted, which makes this method slow.
The number of grains per square millimeter at 1×, NA, is determined by:

NA = f (n1 + n2/2)   (Eq 15)

where f is the magnification squared divided by 5000 (the circle area). The
average grain area, Ā, in square millimeters, is:

Ā = 1 / NA   (Eq 16)

and the average grain diameter, d, in millimeters, is:

d = (Ā)^(1/2) = 1 / (NA)^(1/2)   (Eq 17)

The ASTM grain size, G, can be found by using the tables in ASTM E 112
or by the following equation:

G = [3.322 (log NA)] − 2.95   (Eq 18)
Figure 2 illustrates the planimetric method. Expressing grain size in terms


of d is being discouraged by ASTM Committee E-4 on Metallography
because the calculation implies that grain cross sections are square in
shape, which they are not.
In theory, the test line will, on average, bisect grains intercepting a
straight line. If the test line is curved, however, bias is introduced (Ref
10). This bias decreases as the number of grains within the circle
increases. If only a few grains are within the circle, the error is large, for
example, a 10% error if only 10 grains are within the circle. ASTM E 112
recommends adjusting the magnification so that about 50 grains are
within the test circle. Under this condition, the error is reduced to about
2% (Ref 10). This degree of error is not too excessive. If the magnification
is decreased or the circle is enlarged to encompass more grains, for
example, 100 or more, obtaining an accurate count of the grains inside the
test circle becomes very difficult.
There is a simple alternative to this problem, and one that is amenable
to image analysis. If the test pattern is a square or rectangle, rather than
a circle, bias can be easily eliminated. Counting of the grains intersecting
the test line, n2, however, is slightly different. In this method, grains will
intercept the four corners of the square or rectangle. Statistically, the
portions intercepting the four corners would be in parts of four such
contiguous test patterns. So, when counting n2, the grains intercepting the
four corners are not counted but are weighted as 1. Count all of the other
grains intercepting the test square or rectangle (of known size). Equation
15 is modified as follows:
NA = f (n1 + n2/2 + 1)   (Eq 19)

where n1 is still the number of grains completely within the test figure
(square or rectangular grid), n2 is the number of grains intercepting the
sides of the square or rectangle, but not the four corners, 1 accounts for
the corner grain interceptions, and f is the magnification squared divided
by the area of the square or rectangle grid.
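Equations 15, 18, and 19 combine into a short planimetric helper. This is an illustrative sketch; the n2 count below is an assumed round value chosen to approximate the Fig. 2 result at 200×, while n1 = 263 is taken from the caption:

```python
import math

def na_planimetric(n_inside, n_intercepted, magnification, area_mm2=5000.0):
    """NA per Eq 15 (Jeffries circle of 5000 mm^2 by default)."""
    f = magnification ** 2 / area_mm2
    return f * (n_inside + n_intercepted / 2.0)

def na_rectangle(n_inside, n_side_intercepted, magnification, area_mm2):
    """NA per Eq 19: corner grains excluded from n2 and replaced by the +1 term."""
    f = magnification ** 2 / area_mm2
    return f * (n_inside + n_side_intercepted / 2.0 + 1.0)

def astm_g_from_na(na):
    """G = 3.322 log10(NA) - 2.95 (Eq 18)."""
    return 3.322 * math.log10(na) - 2.95

na = na_planimetric(263, 76, 200)  # n2 = 76 is an assumed count
print(f"NA = {na:.0f} per mm^2, G = {astm_g_from_na(na):.2f}")  # ~2408, G ~ 8.28
```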

Fig. 2 The ferrite grain size of a carbon sheet steel (shown at 500×, 2% nital etch) was measured by the planimetric method with images at 200×, 500×, and 1000× using the Jeffries planimetric method (79.8 mm diameter test circle). This produced NA values (using Eq 15) of 2407.3, 2674.2, and 3299 grains per mm² (ASTM G values of 8.28, 8.43, and 8.73, respectively) for the 200×, 500×, and 1000× images, respectively. The planimetric method was also performed on these three images using the full rectangular image field and the alternate grain counting method. This produced NA values of 2400.4, 2506.6, and 2420.2 grains per mm² (ASTM G values of 8.28, 8.34, and 8.29, respectively). This experiment shows that the standard planimetric method is influenced by the number of grains counted (n1 was 263, 39, and 10 for the 200×, 500×, and 1000× images, respectively). In practice, more than one field should be evaluated due to the potential for field-to-field variability.

Intercept Method. The intercept method, developed by Emil Heyn in
1904, is faster than the planimetric method because the micrograph or
template does not require marking to obtain an accurate count. ASTM E
112 recommends use of a template consisting of three concentric circles
with a total line length of 500 mm (template available from ASTM). The
template is placed over the grain structure without bias, and the number
of grain-boundary intersections, P, or the number of grains intercepted, N,
is counted. Dividing P or N by the true line length, L, gives PL or NL,
which are identical for a single-phase grain structure. It is usually easier
to count grain-boundary intersections for single-phase structures. If a
grain boundary is tangent to the line, it is counted as ½ of an intersection.
If a triple-point line junction is intersected, it is counted as 1½ or 2. The
latter is preferred because the small diameter of the inner circle introduces
a slight bias to the measurement that is offset by weighting a triple-line
intersection as 2 hits.
The mean lineal intercept length, l, determined as shown in Eq 4, is a
measure of ASTM grain size. The value l is smaller than the maximum
grain diameter because the test lines do not intersect each grain at their
maximum breadth. The ASTM grain size, G, can be determined by use of
the tables in ASTM E 112 or can be calculated from:
G = [−6.644 (log l)] − 3.288   (Eq 20)

where l is in millimeters. Figure 3 illustrates the intercept method for a
single-phase alloy.
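A sketch of the intercept calculation, combining Eq 4 (in the form l = L/(M·N)) with Eq 20; the interception count is hypothetical but chosen near the Fig. 3 example:

```python
import math

def astm_g_from_intercepts(n_intercepts, line_length_mm, magnification):
    """Mean lineal intercept l = L/(M*N) (from Eq 4), then G per Eq 20 (l in mm)."""
    l_mm = line_length_mm / (magnification * n_intercepts)
    g = -6.644 * math.log10(l_mm) - 3.288
    return l_mm, g

l_mm, g = astm_g_from_intercepts(57, 500.0, 500)  # hypothetical count, 500 mm grid
print(f"l = {l_mm * 1000:.2f} um, G = {g:.2f}")   # ~17.5 um, G ~ 8.4
```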
Nonequiaxed Grains. Ideally, nonequiaxed grain structures should
be measured on the three principal planes: longitudinal, planar, and
transverse. However, in practice, measurements on any two of the three
are adequate. For such structures, the intercept method is preferred, but
the test grid should consist of a number of straight, parallel test lines
(rather than circles) of known length oriented as described subsequently.
Because the ends of the straight lines generally end within grains, these
interceptions are counted as half-hits. Three mutually perpendicular
orientations are evaluated using grain-interception counts:
O NLl: parallel to the grain elongation, longitudinal or planar surface
O NLt: perpendicular to the grain elongation (through-thickness direction), longitudinal or transverse surface
O NLp: perpendicular to the grain elongation (across width), planar or transverse surface
The average NL value is obtained from the cube root of the product of
the three directional NL values. G is determined by reference to the tables
in ASTM E 112 or by use of Eq 20 (l is the reciprocal of NL; see Eq 4).
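The directional averaging reduces to a geometric mean; a minimal sketch with hypothetical counts:

```python
def average_nl(nl_parallel, nl_thickness, nl_width):
    """Average NL: cube root of the product of the three directional values."""
    return (nl_parallel * nl_thickness * nl_width) ** (1.0 / 3.0)

nl_bar = average_nl(40.0, 75.0, 60.0)  # hypothetical per-mm interception counts
l_mm = 1.0 / nl_bar                    # l is the reciprocal of NL (see Eq 4)
print(f"average NL = {nl_bar:.1f} per mm, l = {l_mm * 1000:.1f} um")
```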
Two-Phase Grain Structures. The grain size of a particular phase in
a two-phase structure requires determination of the volume fraction of the
phase of interest, by point counting, for example. The minor, or second,
phase is point-counted and the volume fraction of the major, or matrix,
phase is determined by the difference.

Fig. 3 The ferrite grain size of the specimen analyzed using the Jeffries method in Fig. 2 (shown at 200× magnification, 2% nital etch) was measured by the intercept method with a single test circle (79.8 mm diameter) at 200×, 500×, and 1000× magnifications. This yielded mean lineal intercept lengths of 17.95, 17.56, and 17.45 µm (for the 200×, 500×, and 1000× images, respectively), corresponding to ASTM G values of 8.2, 8.37, and 8.39, respectively. These are in reasonably good agreement. In practice, more than one field should be evaluated due to the field-to-field variability of specimens.
Next, a circular test grid is applied to the microstructure without bias
and the number of grains of the α phase of interest intercepted by the test
line, Nα, is counted. The mean lineal intercept length of the α grains, lα,
is determined by:

lα = (VV)(L/M) / Nα   (Eq 21)

where L is the line length and M is the magnification. The ASTM grain
size number can be determined from the tables in ASTM E 112 or by use
of Eq 20. The method is illustrated in Fig. 4.
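Equation 21 in code form, checked against the Fig. 4 numbers (the test-line length is the circumference of the 79.8 mm circle, π × 79.8 ≈ 250.7 mm; the function name is an illustrative assumption):

```python
import math

def alpha_mean_intercept_mm(vv_alpha, line_length_mm, magnification, n_alpha):
    """Mean lineal intercept of the alpha grains per Eq 21: (VV)(L/M)/N."""
    return vv_alpha * (line_length_mm / magnification) / n_alpha

# Fig. 4 values: VV = 0.452, circle circumference = pi * 79.8 mm, 500x, 27 grains
l_mm = alpha_mean_intercept_mm(0.452, math.pi * 79.8, 500, 27)
print(f"l_alpha = {l_mm * 1000:.1f} um")  # ~8.4 um, as quoted in Fig. 4
```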

Fig. 4 Determination of α-phase grain size in the two-phase microstructure of heat treated Ti-8Al-1Mo-1V (1010 °C, or 1850 °F/air cooled (AC)/593 °C, or 1100 °F, 8 h/AC) etched using Kroll's reagent. The volume fraction of alpha grains is determined by point counting (α in the α-β eutectoid constituent is not included) to be 0.452. The number of proeutectoid α grains, Nα, intercepted by a 79.8 mm diameter test circle at 500× is 27. From Eq 21, the mean lineal intercept length in the α-phase is 8.4 µm (ASTM G = 10.5).

Inclusion Content
Assessment of inclusion type and content commonly is performed on
high-quality steels. Production evaluations use comparison chart methods
such as those described in ASTM E 45, SAE J422a, ISO 4967, and the
German standard SEP 1570 (DIN 50602). In these chart methods, the
inclusion pictures are defined by type and graded by severity (amount).
Either qualitative procedures (worst rating of each type observed) or
quantitative procedures (all fields in a given area rated) are used. Only the
Japanese standard JIS-G-0555 uses actual volume fraction measurements
for the rating of inclusion content, although the statistical significance of
the data is questionable due to the limited number of counts required.
Manual measurement of the volume fraction of inclusions requires
considerable effort to obtain acceptable measurement accuracy due to the
rather low volume fractions usually encountered (Ref 11). When the
volume fraction is below 0.02, or 2%, which is the case for inclusions
(even in free-machining steels), acceptable relative accuracies (Eq 22)
cannot be obtained by manual point counting without a vast amount of
counting time (Ref 11). Consequently, image analyzers are extensively
used to overcome this problem. Image analyzers separate the oxide and
sulfide inclusions on the basis of their gray-level differences. By using
automated stage movement and autofocusing, enough field measurements
can be made in a relatively short time to obtain reasonable statistical
precision. Image analysis also is used to measure the length of inclusions
and to determine stringer lengths.
Two image analysis-based standards have been developed: ASTM E
1122 (Ref 12) and E 1245 (Ref 13–16). E 1122 produces Jernkontoret
(JK) ratings using image analysis, which overcome most of the weaknesses of manual JK ratings. E 1245 is a stereological approach defining,
for oxides and sulfides, the volume fraction (VV), number per unit area
(NA), average length, average area, and the mean free path (spacing in the
through-thickness direction). These statistical data are easily incorporated
into a database, and mean values and standard deviations can be
developed. This allows comparison of data from different tests using
statistical methods to determine if the differences between the measurements are valid at a particular confidence limit (CL).

Measurement Statistics
It is necessary to make stereological measurements on a number of
fields and average the results. Measurements on a single field may not be
representative of bulk material conditions, because few (if any) materials
are sufficiently homogeneous. Calculation of the standard deviation of
field measurements provides a good indication of measurement variability. Calculation of the standard deviation can be done quite simply with an
inexpensive pocket calculator.
A further refinement of statistical analysis is calculation of the 95% CL
based on the standard deviation, s, of the field measurements. The 95%
CL is calculated from the expression:

95% CL = t s / N^(1/2)   (Eq 22)

where t is the Student's t value that varies with N, the number of
measurements. Many users standardize on a single value of t (2) for
calculations, irrespective of N. The measurement value is expressed as the
average, X̄, ± the 95% CL value. This means that if the test were
conducted 100 times, the average value would fall within X̄ ± the 95%
CL in 95 of those tests. Next, it is possible to calculate the relative
accuracy (% RA) of the measurement by:

% RA = (95% CL / X̄) × 100   (Eq 23)

Usually, a 10% relative accuracy is considered to be adequate. DeHoff
(Ref 17) developed a simple formula to determine how many fields, N,
must be measured to obtain a specific desired degree of relative accuracy
at the 95% CL:

N = [200 s / (% RA X̄)]²   (Eq 24)
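The statistics of Eq 22 to 24 are equally simple to script. In this sketch the field data are hypothetical and t is fixed at 2, as the text notes many users do:

```python
import statistics

def field_statistics(values, t=2.0, target_ra_percent=10.0):
    """95% CL (Eq 22), % relative accuracy (Eq 23), and the number of fields
    needed to reach the target relative accuracy (Eq 24)."""
    n = len(values)
    mean = statistics.mean(values)
    s = statistics.stdev(values)            # sample standard deviation
    cl95 = t * s / n ** 0.5                 # Eq 22, with t fixed at 2
    ra = 100.0 * cl95 / mean                # Eq 23
    n_required = (200.0 * s / (target_ra_percent * mean)) ** 2  # Eq 24
    return mean, cl95, ra, n_required

vv_fields = [0.14, 0.16, 0.15, 0.13, 0.17, 0.15]  # hypothetical per-field VV data
mean, cl95, ra, n_req = field_statistics(vv_fields)
print(f"VV = {mean:.3f} +/- {cl95:.3f} (%RA = {ra:.1f}%); "
      f"~{n_req:.0f} fields for 10% RA")
```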

Image Analysis
The measurements described in this brief review, and other measurements not discussed, can be made by use of automatic image analyzers.
These devices rely primarily on the gray level of the image on the
television monitor to detect the desired features. In some instances,
complex image editing can be used to aid separation. Some structures,
however, cannot be separated completely, which requires the use of
semiautomatic digital tracing devices to improve measurement speed.

Conclusions
Many of the simple stereological counting measurements and simple
relationships based on these parameters have been reviewed. More
complex measurements are discussed in Chapters 5 to 8. The measurements described are easy to learn and use. Their application enables the
metallographer to discuss microstructures in a more quantitative manner
and reveals relationships between the structure and properties of the
material.


References
1. G.A. Moore, "Is Quantitative Metallography Quantitative?," Application of Modern Metallographic Techniques, STP 480, ASTM, 1970, p 3–48
2. E.E. Underwood, "Applications of Quantitative Metallography," Mechanical Testing, Vol 8, Metals Handbook, 8th ed., American Society for Metals, 1973, p 37–47
3. E.E. Underwood, Quantitative Stereology, Addison-Wesley, 1970
4. J.E. Hilliard and J.W. Cahn, "An Evaluation of Procedures in Quantitative Metallography for Volume-Fraction Analysis," Trans. AIME, Vol 221, April 1961, p 344–352
5. G.F. Vander Voort and A. Roósz, "Measurement of the Interlamellar Spacing of Pearlite," Metallography, Vol 17, Feb 1984, p 1–17
6. H. Abrams, "Grain Size Measurements by the Intercept Method," Metallography, Vol 4, 1971, p 59–78
7. G.F. Vander Voort, "Grain Size Measurement," Practical Applications of Quantitative Metallography, STP 839, ASTM, 1984, p 85–131
8. G.F. Vander Voort, "Examination of Some Grain Size Measurement Problems," Metallography: Past, Present and Future, STP 1165, ASTM, 1993, p 266–294
9. G.F. Vander Voort, Metallography: Principles and Practice, ASM International, 1999
10. S.A. Saltykov, Stereometric Metallography, 2nd ed., Metallurgizdat, Moscow, 1958
11. G.F. Vander Voort, "Inclusion Measurement," Metallography as a Quality Control Tool, Plenum Press, New York, 1980, p 1–88
12. G.F. Vander Voort and J.F. Golden, "Automating the JK Inclusion Analysis," Microstructural Science, Vol 10, Elsevier North-Holland, New York, 1982, p 277–290
13. G.F. Vander Voort, "Measurement of Extremely Low Inclusion Contents by Image Analysis," Effect of Steel Manufacturing Processes on the Quality of Bearing Steels, STP 987, ASTM, 1988, p 226–249
14. G.F. Vander Voort, "Characterization of Inclusions in a Laboratory Heat of AISI 303 Stainless Steel," Inclusions and Their Influence on Materials Behavior, ASM International, 1988, p 49–64
15. G.F. Vander Voort, "Computer-Aided Microstructural Analysis of Specialty Steels," Mater. Charact., Vol 27 (No. 4), Dec 1991, p 241–260
16. G.F. Vander Voort, "Inclusion Ratings: Past, Present and Future," Bearing Steels Into the 21st Century, STP 1327, ASTM, 1998, p 13–26
17. R.T. DeHoff, "Quantitative Metallography," Techniques of Metals Research, Vol II, Part 1, Interscience, New York, 1968, p 221–253

CHAPTER 3

Specimen Preparation
for Image Analysis

George F. Vander Voort
Buehler Ltd.

SPECIMEN PREPARATION is an extremely important precursor to
image analysis work. In fact, more than 90% of the problems associated
with image analysis work center on preparation. Once a well prepared
specimen is obtained and the phase or constituent of interest is revealed
selectively with adequate contrast, the actual image-analysis (IA) measurement is generally quite simple. Experience has demonstrated that
getting the required image quality to the microscope is by far the biggest
problem. Despite this, many treat the specimen preparation stage as a
trivial exercise. However, the quality of the data is primarily a function of
specimen preparation. This can be compared to the classic computer
adage, "garbage in, garbage out."

Sampling
The specimen or specimens being prepared must be representative of
the material to be examined. Random sampling, as advocated by
statisticians, rarely can be performed by metallographers. An exception is
fastener testing where a production lot can be randomly sampled.
However, a large forging or casting, for example, cannot be sampled
randomly because the component might be rendered useless commercially. Instead, systematically selected test locations are widely used,
based on sampling convenience. Many material specifications dictate the
sampling procedure. In failure studies, specimens usually are removed to
study the origin of failure, examine highly stressed areas or secondary
cracks, and so forth. This, of course, is not random sampling. It is rare to
encounter excessive sampling, because testing costs usually are closely
controlled. Inadequate sampling is more likely to occur.
In the vast majority of cases, a specimen must be removed from a larger
mass and then prepared for examination. This requires application of one
or more sectioning methods. For example, in a manufacturing facility, a
piece may be cut from incoming metal barstock using a power hacksaw
or an abrasive cutter used without a coolant. This sample is sent to the
laboratory where it must be cut smaller to obtain a size more convenient
for preparation. All sectioning processes produce damage; some methods,
such as flame cutting and dry abrasive cutting, produce extreme amounts
of damage. Traditional laboratory sectioning procedures using abrasive
cut-off saws introduce a minor amount of damage that varies with the
material being cut and the thermal and mechanical history of the material.
Generally, it is unwise to use the sample face from the original cut made
in the shop as the starting point for metallographic preparation because
the depth of damage at this location can be quite extensive. This damage
must be removed if the true structure is to be examined. However, the
preparation sequence must be carefully planned and performed because
abrasive grinding and polishing steps also produce damage (depth of
damage decreases with decreasing abrasive size), and preparation-induced artifacts will be interpreted as structural elements.
The preparation method should be as simple as possible, yield consistent,
high-quality results in a minimum of time and at minimum cost, and be
reproducible. The prepared specimen should have the following characteristics
so that the true structure can be revealed, segmented, and measured:
O Deformation induced by sectioning, grinding, and polishing must be
removed or be shallow enough to be removed by the etchant.
O Coarse grinding scratches must be removed, although very fine
polishing scratches often do not interfere with image segmentation.
O Pullout, pitting, cracking of hard particles, smear, and so forth must be
avoided.
O Relief (i.e., excessive surface height variations between structural
features of different hardness) must be minimized.
O The surface must be flat, particularly at edges (if they are of interest).
O Coated or plated surfaces must be kept flat to be able to precisely
measure width.
O Specimens must be cleaned adequately between preparation steps, after
preparation, and after etching (avoid staining).
O The etchant chosen must be selective in its action (that is, it must reveal
only the phase or constituent of interest, or at least produce strong
contrast or color differences between two or more phases present),
produce crisp, clear phase or grain boundaries, and produce strong
contrast.

Many metallographic image analysis studies require more than one
specimen. A classic case is evaluation of the inclusion content of steel.
One specimen is not representative of the entire lot of steel, so sampling
becomes important. ASTM standards E 45, E 1122, and E 1245 give
advice on sampling procedures for inclusion studies.
To study grain size, it is common to use a single specimen from a lot.
This may or may not be adequate, depending on the nature of the lot.
Good engineering judgment should dictate sampling. In many cases, a
product specification may rigorously define the procedure. Because grain
structure is not always equiaxed, it can be misleading to select only a
plane oriented perpendicular to the deformation axis (transverse plane)
for such a study. If grains are elongated due to processing, the transverse
plane usually shows that the grains are equiaxed in shape and smaller in
diameter than the true grain size. To study the effect of deformation on the
grain shape of wrought metals, a minimum of two sections is required:
one perpendicular to, and the other parallel to, the direction of deformation.
Techniques used to study anisotropic structures in metals incorporate
unique vertical sampling procedures, such as the trisector method (Ref 1–5).
Preparation of metallographic specimens (Ref 6–8) generally requires
five major operations: (1) sectioning, (2) mounting (optional), (3)
grinding, (4) polishing, and (5) etching (optional).

Sectioning
Bulk samples for sectioning may be removed from larger pieces or parts
using methods such as core drilling, band and hack sawing, flame cutting,
and so forth. When these techniques must be used, the microstructure will
be heavily altered in the area of the cut. It is necessary to resection the
piece in the laboratory using an abrasive-wheel cutoff system to establish
the location of the desired plane of polish. In the case of relatively brittle
materials, sectioning may be accomplished by fracturing the specimen at
the desired location.
Abrasive-Wheel Cutting. By far the most widely used sectioning
devices in metallographic laboratories are abrasive cut-off machines (Fig.
1). All abrasive-wheel sectioning should be done wet; direct an ample
flow of water containing a water-soluble oil additive for corrosion
protection into the cut. Wet cutting produces a smooth surface finish and,
most importantly, guards against excessive surface damage caused by
overheating. Abrasive wheels should be selected according to the recommendations of the manufacturer. In general, the bond strength of the
material that holds the abrasive together in the wheel must be decreased
with increasing hardness of the workpiece to be cut, so the bond material
can break down and release old dulled abrasive and introduce new sharp
abrasive to the cut. If the bond strength is too high, burning results, which
severely damages the underlying microstructure. The use of proper bond
strength eliminates the production of burnt surfaces. Bonding material
may be a polymeric resin, a rubber-based compound, or a mixture of the
two. In general, rubber offers the lowest-bond-strength wheels used to cut
the most difficult materials. Such cuts are characterized by an odor that
can become rather strong. In such cases, there should be provisions to
properly exhaust and ventilate the saw area. Specimens must be fixtured
securely during cutting, and cutting pressure should be applied carefully
to prevent wheel breakage. Some materials, such as commercial purity
(CP) titanium (Fig. 2), are more prone to sectioning damage than many
other materials.
Precision Saws. Precision saws (Fig. 3) commonly are used in
metallographic preparation and may be used to section materials intended
for IA. As the name implies, this type of saw is designed to make very
precise cuts. They are smaller in size than the typical laboratory abrasive
cut-off saw and use much smaller blades, typically from 76 to 203 mm (3 to
8 in.) in diameter. These blades are most commonly of the nonconsumable
type, made of copper-base alloys and having diamond or cubic boron
nitride abrasive bonded to the periphery of the blade. Consumable blades
incorporate alumina or silicon carbide abrasives with a rubber bond and
only work on a machine that operates at speeds higher than 1500 rpm.
These blades are much thinner than abrasive cutting wheels. The load
applied during cutting is much less than that used for abrasive cutting,

Fig. 1 Abrasive cut-off machine used to section a specimen for metallographic preparation

and, therefore, much less heat is generated during cutting, and depth of
damage is very shallow.
While small section-size pieces that would normally be sectioned with
an abrasive cutter can be cut with a precision saw, cutting time is
appreciably greater, but the depth of damage is much less. These saws are
widely used to section sintered carbides, ceramic materials, thermally sprayed coatings, printed circuit boards, and electronic components.

Fig. 2 Damage to a commercially pure titanium metallographic specimen resulting from sectioning using an abrasive cut-off wheel. The specimen was etched using modified Weck's reagent.

Fig. 3 A precision saw used for precise sectioning of metallographic specimens

Specimen Mounting
The primary purpose of mounting metallographic specimens is to
provide convenience in handling specimens of difficult shapes or sizes
during the subsequent steps of metallographic preparation and examination. A secondary purpose is to protect and preserve outer edges or surface
defects during metallographic preparation. Care must be exercised when
selecting the mounting method so that it is in no way injurious to the
microstructure of the specimen. Most likely sources of injurious effects
are mechanical deformation and heat.
Clamp Mounting. Clamps offer a quick, convenient method to mount
metallographic cross sections in the form of thin sheets, where several
specimens can be clamped in sandwich form. Edge retention is excellent
when done properly, and there is no problem with seepage of fluids from
crevices between specimens. The outer clamp edges should be beveled to
minimize damage to polishing cloths. Improper use of clamps leaves gaps
between specimens, allowing fluids and abrasives to become entrapped
and seep out, obscuring edges. Ways to minimize this problem include
proper tightening of clamps, using plastic spacers between specimens,
and coating specimen surfaces with epoxy before tightening. A disadvantage of clamps is the difficulty encountered in placing specimen information on the clamp for identification purposes.
Compression Mounting. The most common mounting method uses
pressure and heat to encapsulate the specimen within a thermosetting or
thermoplastic mounting material. Common thermosetting resins include
phenolic (Bakelite), diallyl phthalate, and epoxy, while methyl methacrylate is the most commonly used thermoplastic mounting resin. Both
thermosetting and thermoplastic materials require heat and pressure
during the molding cycle. After curing, mounts made of thermosetting
materials may be ejected from the mold at the maximum molding
temperature, while mounts made of thermoplastic resins must be cooled
to ambient under pressure. However, cooling thermosetting resins under
pressure to at least a temperature of 55 °C (130 °F) before ejection
reduces shrinkage gap formation. A thermosetting resin mount should
never be water cooled after hot ejection from the molding temperature.
This causes the metal to pull away from the resin, producing shrinkage
gaps that promote poor edge retention (see Fig. 4). Thermosetting epoxy
resins provide the best edge retention of these resins and are less affected
by hot etchants than phenolic resins. Mounting presses vary from simple
laboratory jacks with a heater and mold assembly to fully automated
devices, as shown in Fig. 5. Compression mounting resins have the
advantage that a fair amount of information can be scribed on the
backside with a vibratory pencil-engraving device for specimen identification.

Castable Resins for Mounting. Cold mounting materials require
neither pressure nor external heat and are recommended for mounting
heat-sensitive and/or pressure-sensitive specimens. Acrylic resins are the
most widely used castable resins due to their low cost and fast curing
time. However, shrinkage is somewhat of a problem. Epoxy resins,
although more expensive than acrylics, commonly are used because
epoxy physically adheres to specimens and can be drawn into cracks and
pores, particularly if a vacuum impregnation chamber is used. Therefore,
epoxies are very suitable for mounting fragile or friable specimens and
corrosion or oxidation specimens. Dyes or fluorescent agents are added to
some epoxies to study porous specimens such as thermal spray coated
specimens. Most epoxies are cured at room temperature, with curing
times varying from 2 to 20 h. Some can be cured in less time at slightly

Fig. 4 Poor edge retention due to shrinkage gap between metal specimen and the resin mount caused by water cooling a hot-ejected thermosetting resin mount. Specimen is carburized AISI 8620 alloy steel, etched using 2% nital.

Fig. 5 Automated mounting press used to encapsulate a metallographic specimen in a resin mount

elevated temperatures; the higher temperature must not adversely affect
the specimen. Castable resins are not as convenient as compression
mounts for scribing identification information on the mount.
Edge Preservation. Edge preservation is a long-standing metallographic problem, the solution of which has resulted in the development
and promotion of many "tricks" (most pertaining to mounting, but some
to grinding and polishing). These methods include the use of backup
material in the mount, the application of coatings to the surfaces before
mounting, and the addition of a filler material to the mount. Plating of a
compatible metal on the surface to be protected (electroless nickel is
widely used) generally is considered to be the most effective procedure.
However, image contrast at an interface between a specimen and the
electroless nickel may be inadequate in certain cases. Figure 6 shows the surfaces of two specimens of AISI type 1215 free-machining steel (UNS G12150) that were salt bath nitrided. Both specimens (one plated with
electroless nickel) are mounted in Epomet (Buehler Ltd., Lake Bluff, IL)
thermosetting epoxy resin. For the plated specimen, it is hard to discern
where the nitrided layer stops, because of poor image contrast between
the nickel and nitrided surface (Fig. 6a). The problem does not exist for
the unplated specimen (Fig. 6b).

Fig. 6 Visibility problem caused by plating the specimen surface with a compatible metal (electroless nickel in this case) to help edge retention. It is difficult to discern the free edge of (a) a plated, nitrided AISI 1215 steel specimen, due to poor image contrast between the nickel plate and the nitrided layer. By comparison, (b) the unplated specimen reveals good image contrast between specimen and thermosetting epoxy resin mount, which allows clear distinction of the nitrided layer. Etchant is 2% nital.

Edge-preservation problems have been reduced with advancements in
equipment. For example, mounting presses now can cool the specimen to
near ambient temperature under pressure, producing much tighter mounts.
Gaps that form between specimen and resin are a major contributor to
edge rounding, as shown in Fig. 4. Staining at shrinkage gaps also may be
a problem, as shown in Fig. 7. Semiautomatic and automatic grinding/polishing equipment increases surface flatness and edge retention over
that obtained using manual (hand) preparation. However, to obtain the
best results, the position of the specimen holder relative to the platen must
be adjusted so the outer edge of the specimen holder rotates out over the
edge of the surface on the platen during grinding and polishing. Use of
harder, woven and nonwoven napless surfaces for polishing using
diamond abrasives maintains flatness better than softer cloths, such as
canvas, billiard, and felt. Final polishing using low-nap cloths for short
times introduces very little rounding compared with the use of higher nap,
softer cloths.
These procedures produce better edge retention with all thermosetting
and thermoplastic mounting materials. Nevertheless, there are still differences between polymeric materials used for mounting. Thermosetting
resins provide better edge retention than thermoplastic resins. Of the
thermosetting resins, diallyl phthalate provides little improvement over
the much less expensive phenolic compounds. By far, the best results are
obtained with epoxy-base thermosetting resins that contain a filler
material. For comparison, Fig. 8 shows micrographs of the nitrided 1215
steel specimen mounted in a phenolic resin (Fig. 8a) and in methyl
methacrylate (Fig. 8b) at 1000×. These specimens were prepared in the same specimen holder as those shown in Fig. 6, but neither displays acceptable edge retention at 1000×. Figure 9 shows examples of perfect
edge retention, as also illustrated in Fig. 6. These are three markedly
different materials all mounted in the thermosetting epoxy resin.
Very fine aluminum oxide spheres have been added to epoxy mounts to
help maintain edge retention. However, this really is not a satisfactory
solution because the particles are extremely hard (approximately 2000 HV, or Vickers hardness), and their grinding/polishing characteristics are incompatible with the softer metals placed inside the mount. Soft ceramic shot (approximately 775 HV) offers grinding/polishing characteristics more compatible with metallic specimens placed in the mount. Figure 10 shows an example of edge retention using Flat-Edge Filler (Buehler, Ltd., Lake Bluff, IL) soft ceramic shot in an epoxy mount.

Fig. 7 Etching stains emanating from gaps between the specimen and resin mount. Specimen is M2 high-speed steel etched with Vilella's reagent.

Fig. 8 These nitrided 1215 specimens were prepared in the same holder as those specimens shown in Fig. 6 but did not exhibit acceptable edge retention due to the choice of mounting compound. Both thermosetting and thermoplastic mounting resins can result in poor edge retention if proper polishing techniques are not used, as seen in (a) thermosetting phenolic mount and (b) thermoplastic methyl methacrylate resin mount. Specimens were etched with 2% nital.

Fig. 9 Examples of perfect edge retention of two different materials in Epomet (Buehler Ltd., Lake Bluff, IL) thermosetting epoxy mounts. (a) Ion-nitrided H13 tool steel specimen etched with 2% nital. (b) Coated carbide tool specimen etched with Murakami's reagent

Fig. 10 H13 annealed tool steel specimen, etched with 4% picral. Use of soft ceramic shot helps maintain edge retention.
In summary, to obtain the best possible edge retention, use the
following guidelines, some of which are more critical than others:


O Properly mounted specimens yield better edge retention than unmounted specimens; rounding is difficult, if not impossible, to prevent
at a free edge. Hot compression mounts yield better edge preservation
than castable resins.
O Electrolytic or electroless plating of the surface of interest provides
excellent edge retention. If the compression mount is cooled too
quickly after polymerization, the plating may be pulled away from the
specimen, leaving a gap. When this happens, the plating is ineffective
for edge retention.
O Thermoplastic compression mounting materials are less effective than
thermosetting resins. The best thermosetting resin is the epoxy-based
resin containing a hard filler material.
O Never hot eject a thermosetting resin after polymerization and cool it
quickly to ambient (e.g., by water cooling), because a gap will form
between specimen and mount due to the differences in thermal
contraction rates. Automated mounting presses cool the mounted
specimen to near ambient under pressure, greatly minimizing gap
formation due to shrinkage.
O Automated grinding/polishing equipment produces flatter specimens
than manual, or hand, preparation.
O In automated grinder/polisher use, central-force mode provides better flatness than individual-force mode (both modes are described later in this chapter).
O Orient the position of the smaller diameter specimen holder so its
periphery slightly overlaps the periphery of the larger diameter platen
as it rotates.
O Use pressure-sensitive-adhesive-backed silicon carbide (SiC) grinding
paper (if SiC is used) and pressure-sensitive-adhesive-backed polishing
cloths rather than stretched cloths.
O Use hard, napless surfaces for rough polishing until the final polishing
step(s). Use a low-nap to medium-nap cloth for the final step, and keep
it brief.
O Rigid grinding disks produce excellent flatness and edge retention and
should be used when possible.

Grinding
Grinding should commence with the finest grit size that will establish an
initially flat surface and remove the effects of sectioning within a few
minutes. An abrasive grit size of 180 or 240 is coarse enough to use on
specimen surfaces sectioned using an abrasive cut-off wheel. Rough
surfaces, such as those produced using a hacksaw and bandsaw, usually
require abrasive grit sizes in the range of 60 to 180 grit. The abrasive used
for each succeeding grinding operation should be one or two grit sizes smaller than that used in the preceding operation. A satisfactory fine grinding sequence might involve SiC papers having grit sizes of 240, 320, 400, and 600 (ANSI/CAMI scale). This technique is known as the traditional approach.
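
Because the progression rule is mechanical (each paper one or two grit sizes finer than the last), a grinding sequence is easy to check in software. The following minimal Python sketch is purely illustrative and not from this guide; the grit list and function name are invented:

    # Hypothetical sketch: validate a SiC grinding sequence against the
    # "one or two grit sizes finer per step" guideline given above.
    ANSI_GRITS = [60, 80, 120, 180, 240, 320, 400, 600]  # common ANSI/CAMI sizes

    def valid_progression(sequence):
        """Return True if each step is one or two grit sizes finer than the last."""
        for coarse, fine in zip(sequence, sequence[1:]):
            step = ANSI_GRITS.index(fine) - ANSI_GRITS.index(coarse)
            if step not in (1, 2):
                return False
        return True

    print(valid_progression([240, 320, 400, 600]))  # True: the traditional sequence
    print(valid_progression([120, 400]))            # False: skips too many sizes
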
As in abrasive-wheel sectioning, all grinding should be done wet using
water, provided that water has no adverse effects on any constituents of
the microstructure. Wet grinding minimizes loading of the abrasive with
metal removed from the specimen being prepared and minimizes specimen heating.
Each grinding step, while producing damage itself, must remove the
damage from the previous step. Depth of damage decreases with decreasing abrasive size, but so does the metal removal rate. For a given abrasive size,
the depth of damage introduced is greater for soft materials than for hard
materials.
There are a number of options available to circumvent the use of SiC
paper. One option, used mainly with semiautomatic and automatic
systems, is to grind a number of specimens placed in a holder simultaneously using a conventional grinding stone generally made of coarse grit
alumina to remove cutting damage. This step, often called planar
grinding, has the second goal of making all of the specimen surfaces
coplanar. This requires a special purpose machine, because the stone must
rotate at a high speed (1500 rpm) to cut effectively. The stone must be
dressed regularly with a diamond tool to maintain flatness, and embedding of alumina abrasive in specimens can be a problem. Silicon carbide
and alumina abrasive papers, usually of 120-, 180-, or 240-grit size, have
been used for planar grinding and are very effective.
Other materials have been used both for the planar grinding stage and
to replace SiC paper after planar grinding. For very hard materials such
as ceramics and sintered carbides, two or more metal-bonded or resin-bonded diamond disks having grit sizes from about 70 to 9 µm can be
used. An alternative type of disk has diamond particles suspended in a
resin applied in small blobs, or spots, to a disk surface. These are
available with diamond sizes from 120 to 6 µm. Another type of disk,
available in several diamond sizes, uses diamond attached to the edges of
a perforated, screenlike metal disk. Another approach uses a stainless
steel woven mesh cloth on a platen charged with coarse diamond,
usually in slurry form, for planar grinding. After obtaining a planar
surface, there are several single-step procedures available that avoid the
need to use finer SiC papers, including the use of platens, woven polyester or silk PSA cloths, and rigid grinding disks. A coarse diamond size (most commonly 9 µm) is used with each of these.
Grinding Media. Grinding abrasives commonly used in the preparation of metallographic specimens are SiC, aluminum oxide, or alumina
(Al2O3), emery (Al2O3-Fe3O4), composite ceramics, and diamond. Emery paper is rarely used today in metallography due to its low cutting efficiency. SiC is more readily available than alumina as waterproof paper, although alumina papers do have a better cutting rate than SiC for some
metals (Ref 8). These abrasives are generally bonded to paper, polymeric,
or cloth-backing materials of various weights in the form of sheets, disks,
and belts of various sizes. Grinding wheels consisting of abrasives
embedded in a bonding material see limited use. Abrasives also may be
used in powder form by charging the grinding surfaces with loose
abrasive particles or with abrasive in a premixed slurry or suspension.
When grinding soft metals, such as lead, tin, cadmium, bismuth, and
aluminum, SiC particles, particularly with the finer grit-size papers,
embed readily in the metal specimen as shown in Fig. 11. Embedding of
diamond abrasive also is a problem with these soft metals, mainly with
slurries when napless cloths are used (Fig. 12).
Grinding Equipment. Although rarely used in industry, stationary
grinding paper that is supplied in strips or rolls still is used in some
introductory instruction to metallographic techniques. The operator holds the specimen against the far end of the paper strip and manually draws it toward himself or herself. Grinding in one direction usually keeps surfaces flatter than grinding in both directions. While this can be
done dry for certain delicate materials, water is usually added to keep the
specimen surface cool and to carry away the swarf.
Most labs have belt grinders, which mainly are used to remove burrs
from sectioning, to round edges that need not be preserved for examination, to flatten cut surfaces to be macroetched, and to remove sectioning
damage. Generally only very coarse abrasive papers (60 to 240 grit) are
used. Most grinding work is done on a rotating wheel; that is, a
motor-driven platen on which the SiC paper is attached.

Fig. 11 Silicon-carbide particles from grinding paper embedded in a soft 6061-T6 aluminum alloy weldment. Etchant was 0.5% HF (hydrofluoric acid).

Fig. 12 Fine (6 µm) diamond abrasive particles embedded in soft lead specimen

Lapping is an abrasive technique in which the abrasive particles roll freely on the surface of a carrier disk commonly made of cast iron or plastic. During the lapping process, the disk is charged with small amounts of a
hard abrasive such as diamond or silicon carbide. Some platens, referred to
as laps, are charged with diamond slurries. Initially the diamond particles
roll over the lap surface (just as with other grinding surfaces), but soon they become embedded in the lap and cut the specimen surface, producing chips. Lapping disks can produce a flatter specimen than that produced by grinding, but lapping does not remove metal the way grinding does and, therefore, is not commonly used in metallographic preparation.

Polishing
Polishing is the final step (or steps) used to produce a deformation-free
surface, which is flat, scratch-free, and mirrorlike in appearance. Such a
surface is necessary for subsequent qualitative and quantitative metallographic interpretation. The polishing technique used should not introduce
extraneous structures such as disturbed metal (Fig. 13), pitting (Fig. 14),
dragging out of graphite and inclusions, comet tailing (Fig. 15), and
staining (Fig. 16). Relief (height differences between different constituents, or between holes and constituents) (Fig. 17 and 18) must be
minimized.
Polishing usually consists of rough, intermediate, and final stages.
Rough polishing traditionally is done using 6 or 3 µm diamond abrasive
charged onto napless or low-nap cloths. For hard materials such as
through-hardened steels, ceramics, and cemented carbides, an additional rough polishing step may be required. For such materials, initial rough
polishing may be followed by polishing with 1 µm diamond on a napless,
low-nap, or medium-nap cloth. A compatible lubricant should be used
sparingly to prevent overheating and/or surface deformation. Intermediate
polishing should be performed thoroughly to keep final polishing to a
minimum. Final polishing usually consists of a single step but could
involve two steps, such as polishing using 0.3 µm and 0.05 µm alumina,
or a final polishing step using alumina or colloidal silica followed by
vibratory polishing, using either of these two abrasives.

Fig. 13 Examples of residual sectioning/grinding damage in polished specimens. (a) Waspaloy etched with Fry's reagent. (b) Commercially pure titanium etched with Kroll's reagent. Differential interference-contrast (DIC) illumination

Fig. 14 Polishing pits in as-polished cold drawn Cu-20% Zn specimen

Fig. 15 Comet tailing at hard nitride precipitates in AISI H13 tool steel. Differential interference-contrast illumination emphasizes topographical detail.

Fig. 16 Staining from polishing solution on as-polished Ti-6Al-2Sn-4Zr-2Mo titanium alloy

For inclusion analysis, a fine (1 µm) diamond abrasive may be adequate as the last preparation step. Traditionally, aqueous fine alumina slurries have been used for final polishing using medium-nap cloths. Alpha-alumina (0.3 µm) and gamma-alumina (0.05 µm) slurries (or suspensions) are popular for final polishing, either in sequence or singly. Alumina
abrasives made by the sol-gel process produce better surface finishes than
alumina abrasives made by the traditional calcination process. Calcined
alumina abrasives always have some degree of agglomeration, regardless of the efforts to keep them from agglomerating, while sol-gel alumina is free of this problem. Basic colloidal silica suspensions (around 9.5 pH) and acidic alumina suspensions (3 to 4 pH) are very good final polishing abrasives, particularly for difficult-to-prepare materials. Vibratory polishers (Fig. 19) often are used for final polishing, particularly with more difficult-to-prepare materials, for image analysis studies, or for publication-quality work.

Fig. 17 Examples of relief (in this case, height differences between different constituents) at hypereutectic silicon particles in Al-19.85% Si aluminum alloy. (a) Excessive relief. (b) Minimum relief. Etchant is 0.5% HF (hydrofluoric acid).

Fig. 18 Relief (in this case, height differences between constituents and holes) in microstructure of a braze. (a) Excessive relief. (b) Low relief. Etchant is glyceregia.
Mechanical Polishing. The term mechanical polishing frequently is
used to describe the various polishing procedures involving the use of fine
abrasives on cloth. The cloth may be attached to a rotating wheel or a
vibratory polisher bowl. Cloths either are stretched over the wheel and
held in place using an adjustable clamp on the platen periphery or held in
place using a pressure-sensitive adhesive bonded to the back of the cloth.
Cutting is less effective if a stretched cloth moves under the applied
pressure during polishing. Stretched cloths can rip if used on an
automated polishing head, especially when preparing unmounted specimens. In mechanical polishing, the specimens are held by hand, held
mechanically in a fixture, or merely confined within the polishing area.
Electrolytic Polishing. Electrolytic polishing, or electropolishing, is
rarely used to prepare specimens for image analysis work, because
electropolished surfaces tend to be wavy rather than flat, so stage movement and focus control over any reasonably sized area are difficult.
Electropolishing tends to round edges associated with external surfaces,
cracks, and pores. Also, in two-phase alloys, one phase polishes at a
different rate than another, leading to excessive relief, and in some cases,
one phase may be attacked preferentially. Chemical polishing has the same problems and restrictions. Consequently, electrolytic polishing is not recommended, except possibly as a very brief step at the end of a
mechanical polishing cycle to remove minor damage that persists. Use of
electropolishing should be limited to polishing single-phase structures
where maximum polarized light response is required.

Fig. 19 Vibratory polisher for final polishing. Its use produces image-analysis and publication-quality specimens.
Manual Preparation. Hand-preparation techniques still follow the
basic practice established many years ago, aside from the use of improved
grinding surfaces, polishing cloths, and abrasives.
Specimen Movement during Grinding. For grinding, hold the specimen
rigidly against the rotating SiC paper and slowly move from the center to
the edge of the wheel. Rinse after each step and examine to ensure that
scratches are uniform and that grinding removed the previous cut or
ground surface. After grinding on the first SiC paper (often 120 grit),
rotate the specimen 45° to 90° and abrade as before on the next finer paper.
Examine the specimen periodically to determine if the current abrasive
paper removed the scratch marks from the previous step. Repeat the
procedure through all SiC abrasive size papers in the particular grinding
process. In some cases, it may be necessary to use more than one sheet of
paper of a given size before moving to the next finer paper. This is a
common situation for the first step and sometimes for the finer papers.
Specimen Movement during Polishing. For polishing, hold the specimen with one or both hands and move it around the wheel in a circular pattern, in a direction counter to the rotation of the polishing wheel, which
usually is counterclockwise. In addition, continuously move the specimen
back and forth between the center and the edge of the wheel, thereby
ensuring even distribution of the abrasive and uniform wear of the
polishing cloth. (Some metallographers use a small wrist rotation while
moving the specimen from the center to the edge of one side of the
wheel.) The main reason to rotate the specimen is to prevent formation of
comet tails, polishing artifacts that results from directional polishing of
materials containing hard inclusions or precipitates (Fig. 15).
Polishing Pressure. In general, firm hand pressure is applied to the
specimen. The correct amount of applied pressure must be determined by
experience.
Washing and Drying. The specimen is washed and swabbed in warm
running water, rinsed with ethanol, and dried in a stream of warm air.
Excessively hot water may cause pitting of some materials. Scrubbing
with cotton soaked with an aqueous soap solution followed by rinsing
with water also is commonly used. Alcohol usually can be used to wash
the specimen when the abrasive carrier is not soluble in water or if the
specimen cannot tolerate water. Ultrasonic cleaning may be required if the
specimen is porous or cracked.
Cleanness. Precautions for maintaining cleanness must be strictly
observed. It usually is advisable to separate grinding operations from
polishing operations, especially in a large, high-volume laboratory, because coarse abrasive can carry over to a finer abrasive stage and
produce problems.
Automatic Polishing. Mechanical polishing can be automated to a
high degree using a wide variety of devices ranging from relatively
simple systems (Fig. 20) to rather sophisticated, minicomputer-controlled
or microprocessor-controlled devices (Fig. 21). Units also vary in
capacity from a single specimen to a half-dozen or more at a time. These
systems can be used for all grinding and polishing steps and enable an operator to prepare a large number of specimens per day with a higher degree of quality than that with hand polishing and at reduced consumable costs. Automatic polishing devices produce the best surface flatness and edge retention.

Fig. 20 Simple automated mechanical polishing system

Fig. 21 Sophisticated automatic polishing system
Two approaches for handling specimens are central force and individual
force. Central force uses a specimen holder with each specimen held in
place rigidly. The holder is pressed downward against the preparation surface, with the force applied uniformly across the holder. This method yields the best edge
retention and specimen flatness. Individual force uses a holder that holds
specimens loosely in place. Force is applied to each specimen by means
of a piston (thus the term individual force). This method provides
convenience in examining individual specimens during the preparation
cycle, without the problem of regaining planarity for all specimens in the
holder on the next step. Also, if the etch results are deemed inadequate,
the specimen is simply put back in the holder and the last step is repeated. The
drawback to this method is that slight rocking of the specimen may occur,
especially if the specimen height is too great, which slightly reduces edge
retention.
Polishing Cloths. The requirements of a good polishing cloth include
the ability to hold an abrasive, long life, absence of any foreign material
that may cause scratches, and absence of any processing chemical (such
as dye or sizing) that may react with the specimen. Many cloths of
different fabrics, woven or nonwoven, with a wide variety of naps, or
napless, are available for metallographic polishing. Napless and low-nap
cloths are recommended for rough polishing using diamond-abrasive
compounds. Low-nap, medium-nap, and occasionally high-nap cloths are
used for final polishing, but this step should be brief to minimize relief.
Polishing Abrasives. Polishing usually involves the use of one or
more of the following abrasives: diamond, aluminum oxide (Al2O3),
magnesium oxide (MgO), and silicon dioxide (SiO2). For certain materials, cerium oxide, chromium oxide, or iron oxide may be used. With the
exception of diamond, these abrasives normally are used in a distilled-water suspension, but if the metal to be polished is not compatible with
water, other solvents such as ethylene glycol, alcohol, kerosene, or
glycerol may be required. All flammable materials must be handled with
care to avoid accidents. See ASTM E 2014 and related textbooks,
Material Safety Data Sheets (MSDSs), and so forth for guidance on safety
issues. Diamond abrasive should be extended only with the carrier
recommended by the manufacturer.

Examples of Preparation Procedures


The Traditional Method. Over the past 40 years, a general procedure
has been developed that is quite successful for preparing most metals and
alloys. The method is based on grinding using silicon carbide waterproof papers through a series of grits, then rough polishing with one or more sizes of diamond abrasive, followed by fine polishing with one or more alumina suspensions of different particle size. This procedure will be called the traditional method and is described in Table 1.

Table 1 Traditional method used to prepare most metal and alloy metallographic specimens

Polishing surface         Abrasive                      Size       Load, N (lb)   Speed, rpm   Direction(a)    Time, min
Waterproof paper          SiC (water cooled)            120 grit   27 (6)         240-300      Complementary   Until plane
Waterproof paper          SiC (water cooled)            240 grit   27 (6)         240-300      Complementary   1-2
Waterproof paper          SiC (water cooled)            320 grit   27 (6)         240-300      Complementary   1-2
Waterproof paper          SiC (water cooled)            400 grit   27 (6)         240-300      Complementary   1-2
Waterproof paper          SiC (water cooled)            600 grit   27 (6)         240-300      Complementary   1-2
Canvas                    Diamond paste with extender   6 µm       27 (6)         120-150      Complementary   ...
Billiard or felt cloth    Diamond paste with extender   1 µm       27 (6)         120-150      Complementary   ...
Microcloth pad            Aqueous α-alumina slurry      0.3 µm     27 (6)         120-150      Complementary   ...
Microcloth pad            Aqueous γ-alumina slurry      0.05 µm    27 (6)         120-150      Complementary   ...

(a) Complementary, in the same direction in which the wheel is rotating.
This procedure is used for manual preparation as well as with a machine, but the force applied to a specimen cannot be controlled as accurately and consistently in manual preparation as it can with a machine.
a machine. Complementary motion means that the specimen holder is
rotated in the same direction as the platen and does not apply to manual
preparation. Some machines can be set so that the specimen holder rotates
in the direction opposite to that of the platen, called contra. This
provides a more aggressive action but was not part of the traditional
approach. This action is similar to the manual polishing procedure of
running the specimen in a circular path around the wheel in a direction
opposite to that of the platen rotation. The steps of the traditional method
are not rigid, as other polishing cloths may be substituted and one or more
of the polishing steps might be omitted. Times and pressures can be
varied, as well, to suit the needs of the work or the material being
prepared. This is the art side of metallography.
Contemporary Methods. During the 1990s, new concepts and new
preparation materials have been introduced that have enabled metallographers to shorten the process while producing better, more consistent
results. Much of the effort focused on reducing or eliminating the use of
silicon carbide paper in the five grinding steps. In all cases, an initial
grinding step must be used, but there is a wide range of materials that can
be substituted for SiC paper. If a central-force automated device is used,
the first step must remove the sectioning damage on each specimen and
bring all of the specimens in the holder to a common plane perpendicular
to the axis of the specimen-holder drive system. This first step is often
called planar grinding, and SiC paper can be used, although more than
one sheet may be needed. Alternatives to SiC paper include the following:

O Alumina paper
O Alumina grinding stone
O Metal-bonded or resin-bonded diamond discs
O Wire mesh discs with metal-bonded diamond
O Stainless steel mesh cloths (diamond is applied during use)
O Rigid grinding discs (RGD) (diamond is applied during use)
O Lapping platens (diamond is applied and becomes embedded in the surface during use)

This huge range of products to choose from makes it difficult to determine what to use, because each of these products has advantages and disadvantages, and this is only the first step.
One or more steps using diamond abrasives on napless surfaces usually
follow planar grinding. Pressure-sensitive-adhesive-backed silk, nylon, or
polyester cloths are widely used. These give good cutting rates, maintain
flatness, and avoid relief. Silk cloths provide the best flatness and
excellent surface finishes for the diamond size used. Synthetic chemotextiles are excellent for retaining second phase particles and inclusions.
Diamond suspensions are most popular for use with automated polishers
because they can be added easily during polishing, although it is still best
to charge the cloth initially with diamond paste of the same size to get
polishing started quickly. Final polishing can be performed using a very
fine diamond size, such as 0.1 µm diamond, depending on the material,
needs, and personal preferences. Otherwise, final polishing is performed
using colloidal silica or alumina slurries with low-nap to medium-nap
cloths. For some materials, such as titanium and zirconium alloys, an
attack polishing solution is added to the slurry to enhance deformation
and scratch removal and improve polarized light response. Contra rotation
is preferred as the slurry stays on the cloth better, although this will not
work if the head rotates at a high rpm. Examples of generic and specific
contemporary preparation practices are given in Tables 2 to 6. The
starting abrasive size depends on the degree of the cutting damage and the
material. Never start with a coarser abrasive than necessary to remove the
cutting damage and achieve planar conditions in a reasonable time. The generic four-step procedure in Table 2 can be extended to five steps for difficult-to-prepare materials by adding a 1 µm diamond step on a napless cloth for 2 to 3 min as step 4.

Table 2 Generic four-step contemporary practice used to prepare many metal and alloy metallographic specimens

Polishing surface          Abrasive/grit size                                       Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs           SiC(a)/120, 180, or 240 grit                             27 (6)         240-300/comp           Until plane
Napless cloth              Diamond/9 µm                                             27 (6)         120-150/comp           ...
Napless cloth              Diamond/3 µm                                             27 (6)         120-150/comp           ...
Low- or medium-nap cloth   Colloidal silica or sol-gel alumina suspension/0.05 µm   27 (6)         120-150/contra         ...

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel is rotating. (a) Water cooled
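
Preparation practices such as Table 2 are also convenient to keep as structured data in a laboratory's records, so variants can be generated consistently. A minimal Python sketch, illustrative only (the data layout and helper name are invented; the step values follow Table 2):

    # Hypothetical sketch: the generic four-step practice of Table 2 as data,
    # plus a helper that inserts the optional 1 um diamond step as step 4.
    GENERIC_PRACTICE = [
        {"surface": "waterproof disc", "abrasive": "SiC, 120-240 grit",
         "load_N": 27, "rpm": (240, 300), "direction": "comp", "time": "until plane"},
        {"surface": "napless cloth", "abrasive": "9 um diamond",
         "load_N": 27, "rpm": (120, 150), "direction": "comp", "time": None},
        {"surface": "napless cloth", "abrasive": "3 um diamond",
         "load_N": 27, "rpm": (120, 150), "direction": "comp", "time": None},
        {"surface": "low- or medium-nap cloth",
         "abrasive": "0.05 um colloidal silica or sol-gel alumina",
         "load_N": 27, "rpm": (120, 150), "direction": "contra", "time": None},
    ]

    def extend_to_five_steps(practice):
        """Insert the 1 um diamond step (napless cloth, 2-3 min) as step 4."""
        extra = {"surface": "napless cloth", "abrasive": "1 um diamond",
                 "load_N": 27, "rpm": (120, 150), "direction": "comp", "time": "2-3 min"}
        return practice[:3] + [extra] + practice[3:]

    for i, step in enumerate(extend_to_five_steps(GENERIC_PRACTICE), 1):
        print(i, step["surface"], "-", step["abrasive"])
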
Similar procedures can be developed using rigid grinding discs, which
generally are restricted for use with materials above a certain hardness
(175 HV, for example), although some softer materials can be prepared
using them. This disc can also be used for the planar grinding step. An
example of such a practice applicable to nearly all steels (results are
marginal for solution annealed austenitic stainless steels) is given in Table
3. The first step of planar grinding could also be performed using the rigid
grinding disc and 30 µm diamond. Rigid grinding discs contain no
abrasive; they must be charged during use. Suspensions are the easiest
way to do this. Polycrystalline diamond suspensions are favored over
monocrystalline synthetic diamond suspensions for most metals and
alloys due to their higher cutting rate.

Table 3 Four-step contemporary practice used to prepare steel metallographic specimens using a rigid grinding disc

Polishing surface          Abrasive/grit size                                       Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs           SiC(a)/120, 180, or 240 grit                             27 (6)         240-300/comp           Until plane
Rigid grinding disc        Diamond suspension/9 µm                                  27 (6)         120-150/comp           5
Napless cloth              Diamond/3 µm                                             27 (6)         120-150/comp           ...
Low- or medium-nap cloth   Colloidal silica or sol-gel alumina suspension/0.05 µm   27 (6)         120-150/contra         ...

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel is rotating. (a) Water cooled
As examples of tailoring these types of procedures to other metals,
alloys, and materials, the following three methods are shown in Tables 4
to 6 for sintered carbides (these methods also work for ceramics),
aluminum, and titanium alloys. Because sintered carbides and ceramics
are cut with a precision saw that produces very little deformation and an
excellent surface finish, a coarser grit diamond abrasive is not needed for
planar grinding (Table 4). Pressure-sensitive-adhesive-backed silk cloths
are excellent for sintered carbides. Nylon is also quite popular.

Table 4 Four-step practice used to prepare sintered-carbide metallographic specimens using two rigid grinding disc steps

Polishing surface     Abrasive/grit size                                       Load, N (lb)   Speed, rpm/direction   Time, min
Rigid grinding disc   Diamond suspension/30 µm                                 22 (5)         240-300/contra         ...
Rigid grinding disc   Diamond suspension/9 µm                                  27 (6)         240-300/contra         ...
Napless cloth         Diamond/3 µm                                             27 (6)         120-150/contra         ...
Napless cloth         Colloidal silica or sol-gel alumina suspension/0.05 µm   27 (6)         120-150/contra         ...

Contra, opposite to the direction in which the wheel is rotating
A four-step practice for aluminum alloys is presented in Table 5. While
MgO was the preferred final polishing abrasive for aluminum and its
alloys, it is a difficult abrasive to use and is not available in very fine sizes,
and colloidal silica has replaced magnesia. This procedure retains all of
the intermetallic precipitates observed in aluminum and its alloys and
minimizes relief. Synthetic napless cloths may also be used for the final
step with colloidal silica, and they will introduce less relief than a low-nap
or medium-nap cloth but may not remove fine polishing scratches as well.
For very pure aluminum alloys, this procedure could be followed by
vibratory polishing to improve the surface finish, as these are quite
difficult to prepare totally free of fine polishing scratches.

Table 5 Four-step practice used to prepare aluminum alloy metallographic specimens

Polishing surface          Abrasive/grit size                                       Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs           SiC(a)/240 or 320 grit                                   22 (5)         240-300/comp           Until plane
Napless cloth              Diamond/9 µm                                             40 (9)         120-150/comp           ...
Napless cloth              Diamond/3 µm                                             36 (8)         120-150/comp           ...
Low- or medium-nap cloth   Colloidal silica or sol-gel alumina suspension/0.05 µm   31 (7)         120-150/contra         ...

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel is rotating. (a) Water cooled
The contemporary practice for titanium and its alloys (Table 6)
demonstrates the use of an attack-polishing agent added to the final
polishing abrasive to obtain the best results, especially for commercially
pure titanium, a rather difficult metal to prepare free of deformation for
color etching, heat tinting, and/or polarized light examination of the grain
structure. Attack-polishing solutions added to the abrasive slurry or
suspension must be treated with great care to avoid burns. (Caution: use
good, safe laboratory practices and wear protective gloves.) This
three-step practice could be modified to four steps by adding a 3 µm or 1 µm diamond step.

Table 6 Three-step practice used to prepare titanium and Ti-alloy metallographic specimens

Polishing surface        Abrasive/grit size                               Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof paper discs   SiC(a)/320 grit                                  27 (6)         240-300/comp           Until plane
Napless cloth            Diamond/9 µm                                     27 (6)         120-150/contra         10
Medium-nap cloth         Colloidal silica plus attack polish(b)/0.05 µm   27 (6)         120-150/contra         10

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel is rotating. (a) Water cooled. (b) Attack polish is five parts colloidal silica plus one part hydrogen peroxide, 30% concentration. Use with caution.
There are a number of attack polishing agents for use on titanium. The
simplest is a mixture of 10 mL, 30% concentration hydrogen peroxide
(caution: avoid skin contact) and 50 mL colloidal silica. Some metallographers add either a small amount of Kroll's reagent to this mixture or a few milliliters of nitric and hydrofluoric acids; these latter additions may
cause the suspension to gel. In general, these acid additions do little to improve the action of the hydrogen peroxide (the safer 3% concentration is not effective).
It is impossible to describe in this book all methods that can be used to
prepare all materials, but the above examples illustrate the approach to
use. The approach can be modified to suit other materials. Material-preparation methods can be found in many sources, such as Ref 6 to 8.
Some ASTM standards also provide material-preparation guidelines, such
as ASTM E 3 (general preparation suggestions), E 768 (guidelines to
prepare steel specimens for inclusion analysis), and E 1920 (guidelines to
prepare thermally sprayed metallic specimens).

Etching
Metallographic etching encompasses all processes used to reveal
particular structural characteristics of a metal that are not evident in the
as-polished condition. Examination of a properly polished specimen
before etching may reveal structural aspects such as porosity, cracks,
graphite, intermetallic precipitates, nitrides, and nonmetallic inclusions.
Certain constituents are best measured using image analysis without
etching, because etching reveals unwanted detail, making detection
difficult or impossible. Classic examples of analyzing unetched specimens
are the measurement of inclusions in steel and graphite in cast iron,
although many intermetallic precipitates and nitrides also can be measured effectively in the as-polished condition. Grain size also can be
revealed adequately in the as-polished condition using polarized light in
certain nonferrous alloys having noncubic crystallographic structures,
such as beryllium, hafnium, magnesium, titanium, uranium, and zirconium. Figure 22 shows the microstructure of beryllium viewed in
cross-polarized light, which produces grain coloration rather than a flat
etched appearance where only the grain boundaries are dark. This image
could be used in color image analysis but would not be useful for image
analysis using a black and white system.
Etching Procedures. Microscopical examination usually is limited to
a maximum magnification of 1000×, the approximate useful limit of the light microscope, unless oil-immersion objectives are used. Many image analysis systems use relay lenses that yield higher screen magnifications, which may make detection of fine structures easier. However, resolution is not raised above the general limit of about 0.3 µm for the light
microscope. Microscopical examination of a properly prepared specimen
clearly reveals structural characteristics such as grain size, segregation,
and the shape, size, and distribution of the phases and inclusions that are
present. The microstructure also reveals prior mechanical and thermal
treatments that the metal has received. Microstructural features are measured either according to established image analysis procedures (ASTM standards, for example) or internally developed procedures.
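
As a rough cross-check on that figure (an aside, not from the original text): the Rayleigh criterion gives d = 0.61λ/NA, so for green light (λ ≈ 0.55 µm, a typical assumed value) and a high-quality dry objective (NA ≈ 0.95), d ≈ 0.61 × 0.55/0.95 ≈ 0.35 µm, consistent with the approximately 0.3 µm limit quoted above.
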
Etching is carried out by means of immersion, swabbing, or electrolysis, using a suitable chemical solution that basically produces
selective corrosion. Swabbing is preferred for metals and alloys that form
a tenacious oxide surface layer when exposed to the atmosphere, such as
stainless steels, aluminum, nickel, niobium, and titanium. It is best to use
surgical grade cotton that will not scratch the polished surface. Etch time
varies with etchant strength and can only be determined by experience. In
general, for examination at high magnification, the etch depth should be
shallow, while for examination at low magnification, a deeper etch yields better image contrast. Some etchants produce selective results; that is, only one phase is attacked or colored. For information on the vast number of etchants that have been developed, see Ref 6 to 8, 9, and ASTM E 407.
After achieving the desired degree of etching, rinse the specimen under
running water, displace the water from the specimen surface with alcohol
(ethanol is safer to use than methanol), and dry the specimen under hot
air. Drying can be challenging if there are cracks, pores, or other holes in
the specimen, or shrinkage gaps between specimen and mount. Figure 23 shows two examples of drying problems that obscure the true microstructure.

Fig. 23 Examples of conditions that obscure the true microstructure. (a) Improper drying of the specimen. (b) Water stains emanating from shrinkage gaps between 6061-T6 aluminum alloy and phenolic resin mount. Both specimens viewed using differential interference-contrast (DIC) illumination.
Etchants that reveal grain boundaries are very important for successful
determination of the grain size. Grain boundary etchants are given in Ref 6 to 9. Problems associated with grain boundary etching, particularly
prior-austenite grain boundary etching, are given in Ref 7, 10, and 11.

Fig. 22 Crossed-polarized light (Ahrens polarizer/Polaroid filter analyzer/Berek prism pre-polarizer). This light colorizes the grains of beryllium microstructure, a phenomenon that is useful for color image analysis but harder to utilize in black and white image analysis. For color version of Fig. 22, see endsheets of book.

Measurement of grain size in austenitic, or face-centered cubic (fcc), metals that exhibit annealing twins is a commonly encountered problem.
Etchants that will reveal grain boundaries but not twin boundaries are
reviewed in Ref 7.
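
Once an etchant reveals the grain boundaries without the twins, grain size is commonly estimated from intercept counts along test lines, in the spirit of ASTM E 112. The following minimal Python sketch is illustrative only (the function, the one-pixel-wide boundary assumption, and the toy image are invented, not taken from this chapter):

    import numpy as np

    # Hypothetical sketch: mean lineal intercept from a binary boundary image
    # (boundary pixels = 1, one pixel wide), with twin boundaries already
    # suppressed by the etchant. Horizontal image rows serve as test lines.
    def mean_lineal_intercept(boundaries, um_per_px, n_lines=10):
        h, w = boundaries.shape
        rows = np.linspace(0, h - 1, n_lines).astype(int)
        hits = sum(int(np.count_nonzero(boundaries[r])) for r in rows)
        total_line_length_um = n_lines * w * um_per_px
        return total_line_length_um / max(hits, 1)  # mean intercept, in um

    # Toy check: vertical boundaries every 50 px at 0.5 um/px -> 25 um grains
    img = np.zeros((500, 500), dtype=np.uint8)
    img[:, ::50] = 1
    print(mean_lineal_intercept(img, um_per_px=0.5))  # prints 25.0
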
Selective Etching. Image analysis work is facilitated if the etchant
selected improves the contrast between the feature of interest and
everything else. Only a small number of the thousands of etchants that
have been developed over the years are selective in nature. Although the
selection of the best etchant and its proper use are very critical phases of
the image analysis process, only a few publications have addressed this
problem (Ref 12 to 14). Selective etchants, that is, etchants that preferentially attack or color a specific phase, are listed in Ref 6 to 9, 13, and 14. Stansbury (Ref 15) describes how potentiostatic
etching works and lists many preferential potentiostatic-etching methods.
The potentiostat offers the ultimate in control over the etching process and
is an outstanding tool for this purpose. Many tint etchants function
selectively in that they color either the anodic or cathodic constituent in
a microstructure. Tint etchants are listed and illustrated in several
publications (Ref 6 to 8, 14, and 16 to 21).
A classic example of the different behavior of etchants is shown in Fig.
24 where low-carbon sheet steel has been etched using the standard nital
and picral etchants and a color tint etch. Etching with 2% nital reveals the
ferrite grain boundaries and cementite (Fig. 24a). Note that many of the
ferrite grain boundaries are missing or very faint, a problem that degrades the accuracy of grain size ratings. Etching with 4% picral reveals the cementite aggregates (this cannot be called pearlite, because it is too nonlamellar in appearance and some of the cementite exists as simple grain boundary film) but no ferrite grain boundaries. If the
interest is knowing the amount and nature of the cementite (which can
influence formability), then the picral etch is far superior to the nital etch,
because picral reveals only the cementite. Tint etching using Beraha's solution (Klemm I etchant also can be used) colors the grains according
to their crystallographic orientation (Fig. 24c). This image can now be used quite effectively to provide accurate grain size measurements using a color image analyzer, because all grains are colored.

Fig. 24 Examples of different behavior of etchants on the same low-carbon steel sheet. (a) 2% nital etch reveals ferrite grain boundaries and cementite. (b) 4% picral etch reveals cementite aggregates and no ferrite grain boundaries. (c) Tint etching with Beraha's solution colors all grains according to their crystallographic orientation. All specimens are viewed using bright field illumination. For color version of Fig. 24(c), see endsheets of book.
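
As a sketch of how such a tint-etched color image might be exploited (illustrative only; the file name and the crude hue binning are invented, and a production system would need noise cleanup and boundary handling):

    import numpy as np
    from skimage import io, color, measure

    # Hypothetical sketch: group tint-etched grains by hue so each grain
    # becomes a measurable connected region. "tint_etched.png" is a placeholder.
    rgb = io.imread("tint_etched.png")[..., :3] / 255.0
    hsv = color.rgb2hsv(rgb)

    # Quantize hue into a few bins; grains of like orientation share a bin.
    bins = np.digitize(hsv[..., 0], np.linspace(0, 1, 8))
    labels = measure.label(bins, connectivity=1)  # same-bin neighbors join

    # Report equivalent circle diameters (in pixels) of the larger regions.
    for region in measure.regionprops(labels):
        if region.area > 500:  # ignore tiny specks
            print(region.label, round(region.equivalent_diameter, 1))
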
Figure 25 shows a somewhat more complex example of selective
etching. The micrographs show the ferrite-cementite-iron phosphide
ternary eutectic in gray iron. Etching sequentially with picral and nital
reveals the eutectic surrounded by pearlite (Fig. 25a). Etching with
boiling alkaline sodium picrate (Fig. 25b) colors only the cementite
phase, including that in the surrounding pearlite (a higher magnification
is required to see the very finely spaced cementite that is more lightly
colored). Etching with boiling Murakami's reagent (Fig. 25c) colors the iron phosphide darkly and lightly colors the cementite after prolonged etching. The ferrite could be colored preferentially using Klemm I reagent.

Fig. 25 Examples of selective etching of ferrite-cementite-iron phosphide ternary eutectic in gray cast iron. (a) Picral/nital etch reveals the eutectic surrounded by pearlite. (b) Boiling alkaline sodium-picrate etch colors only the cementite phase. (c) Boiling Murakami's reagent etch darkly colors the iron phosphide and lightly colors cementite after prolonged etching. All specimens are viewed using bright field illumination.
Selective etching has been commonly applied to stainless steels to detect, identify, and measure δ-ferrite, ferrite in dual-phase grades, and σ-phase. Figure 26 shows examples of the use of a number of popular etchants to reveal the microstructure of 7Mo Plus (Carpenter Technology Corporation, Reading, PA) (UNS S32950), a dual-phase stainless steel, in the hot-rolled and annealed condition. Figure 26(a) shows a well-delineated structure when the specimen was immersed in ethanolic 15%
HCl for 30 min. All of the phase boundaries are clearly revealed, but there
is no discrimination between ferrite and austenite, and twin boundaries in
the austenite are not revealed. Glyceregia, a popular etchant for stainless
steels, is not suitable for this grade because it appears to be rather
orientation-sensitive (Fig. 26b). Many electrolytic etchants are used to
etch stainless steels, but only a few have selective characteristics. Of the
four shown in Fig. 26 (c to f), only aqueous 60% nitric acid produces any
gray level discrimination, which is weak, between the phases. However,
all nicely reveal the phase boundaries. Two electrolytic reagents are
commonly used to color ferrite in dual-phase grades and δ-ferrite in martensitic grades (Fig. 26g, h). Of these, aqueous 20% sodium hydroxide (Fig. 26g) usually gives more uniform coloring of the ferrite. Murakami's and Groesbeck's reagents also are used for this purpose. Tint
etchants developed by Beraha nicely color the ferrite phase, as illustrated
in Fig. 26(i).
Selective etching techniques have been more thoroughly developed for use on iron-base alloys than for other alloy systems, but they are not limited to iron-base alloys. For example, selective etching of β-phase in α-β copper alloys is a popular subject. Figure 27 illustrates coloring of β-phase in naval brass (UNS C46400) using Klemm I reagent. Selective etching has long been used to identify intermetallic phases in aluminum alloys; the method was used for many years before the development of energy-dispersive spectroscopy. It still is useful for image analysis work. Figure 28 shows selective coloration of θ-phase, CuAl2, in the Al-33% Cu eutectic alloy. Figure 29 illustrates the structure of a simple sintered
tungsten carbide (WC-Co) cutting tool. In the as-polished condition (Fig.
29a), the cobalt binder is faintly visible against the more grayish tungsten
carbide grains, and a few particles of graphite are visible. Light relief
polishing brings out the outlines of the cobalt binder phase, but this image
is not particularly useful for image analysis (Fig. 29b). Etching in a
solution of hydrochloric acid saturated with ferric chloride (Fig. 29c)
attacks the cobalt and provides good uniform contrast for measurement of
the cobalt binder phase. A subsequent etch using Murakami's reagent at
room temperature reveals the edges of the tungsten carbide grains, which
is useful to evaluate grain size (Fig. 29d).
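
The kind of measurement this enables can be sketched in a few lines of Python (illustrative only; the file name is a placeholder, and Otsu's method is one reasonable automatic threshold choice, not a prescription from this chapter):

    from skimage import io, filters

    # Hypothetical sketch: after the FeCl3/HCl etch darkens the cobalt binder,
    # estimate its area (and hence volume) fraction by gray-level thresholding.
    gray = io.imread("wc_co_etched.png", as_gray=True)

    t = filters.threshold_otsu(gray)   # automatic gray-level threshold
    binder = gray < t                  # etched cobalt appears dark

    print(f"Cobalt binder area fraction: {binder.mean():.3f}")
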

Fig. 26 Examples of selective etching to identify different phases in hot-rolled, annealed 7Mo Plus duplex stainless steel microstructure. Chemical etchants used were (a) immersion in 15% HCl in ethanol/30 min and (b) glyceregia/2 min. Electrolytic etchants used were (c) 60% HNO3/1 V direct current (dc)/20 s, platinum cathode; (d) 10% oxalic acid/6 V dc/75 s; (e) 10% CrO3/6 V dc/30 s; and (f) 2% H2SO4/5 V dc/30 s. Selective electrolytic etchants used were (g) 20% NaOH/Pt cathode/4 V dc/10 s and (h) 10 N KOH/Pt/3 V dc/4 s. (i) Tint etch. 200×. See text for description of microstructures. For color version of Fig. 26(i), see endsheets of book.

Electrolytic Etching and Anodizing. The procedure for electrolytic etching basically is the same as that used for electropolishing, except that
voltage and current densities are considerably lower. The specimen is
made the anode, and some relatively insoluble but conductive material
such as stainless steel, graphite, or platinum is used for the cathode.
Direct-current electrolysis is used for most electrolytic etching, and for
small specimens (13 by 13 mm, or 0.5 by 0.5 in., surface to be etched),
one or two standard 1.5 V direct current (dc) flashlight batteries provide
an adequate power source, although the current level may be inadequate
for some work. Electrolytic etching is commonly used with stainless
steels, either to reveal grain boundaries without twin boundaries or to color δ-ferrite (Fig. 26), σ-phases, and χ-phases. Anodizing is a term
applied to electrolytic etchants that develop grain coloration when viewed
with crossed polarized light, as in the case of aluminum, niobium,
tantalum, titanium, tungsten, uranium, vanadium, and zirconium (Ref 7).
Figure 30 shows the grain structure of 5754 aluminum alloy sheet (UNS
A95754) revealed by anodizing using Barker's reagent and viewed using
crossed-polarized light. Again, color image analysis now makes this
image useful for grain size measurements.

Fig. 27 Selective etching of naval brass with Klemm I reagent reveals the β-phase (dark constituent) in the α-β copper alloy. (a) Transverse section. (b) Longitudinal section

Fig. 28 Selective tint etching of Al-33%Cu eutectic alloy. The θ phase is revealed. For color version of Fig. 28, see endsheets of book.

Fig. 29 Selective etching of sintered tungsten carbide-cobalt (WC-Co) cutting tool material. (a) Some graphite particles are visible in the as-polished condition. (b) Light relief polishing outlines cobalt binder phase. (c) Hydrochloric acid saturated with ferric chloride solution etch darkens the cobalt phase. (d) Subsequent Murakami's reagent etch reveals edges of WC grains. Viewed using bright field illumination

Heat Tinting. Although not commonly used, heat tinting (Ref 7) is an excellent method to obtain color contrast between constituents or grains.
An unmounted polished specimen is placed face up in an air-fired furnace
and held at a set temperature as an oxide film grows on the surface.
Interference effects, as in tint etching, create coloration for film thicknesses within a certain range, approximately 20 to 500 nm. The observed
color is a function of the film thickness. Thermal exposure must be such
that it does not alter the microstructure. The correct temperature must be
determined by the trial-and-error approach, but the procedure is reproducible and reliable. Figure 31 shows the grain structure of commercially
pure (CP) titanium revealed by heat tinting.
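
For context (an aside, not from the original text), the coloration follows the familiar thin-film interference relation: for a film of refractive index n and thickness t viewed near normal incidence, reflected wavelengths are reinforced approximately when 2nt = (m + 1/2)λ, with m = 0, 1, 2, ...; the exact condition depends on the phase shifts at the film and metal interfaces. Taking n ≈ 2 as an assumed value for an oxide film, the first-order condition for green light (λ ≈ 550 nm) falls near t ≈ 70 nm, squarely within the 20 to 500 nm range cited above.
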
Interference Layer Method. The interference layer method (Ref 7),
introduced by Pepperhoff in 1960, is another procedure used to form a
film over the microstructure that generates color by interference effects. In
this method, a suitable material is deposited on the polished specimen
face by vapor deposition to produce a low-absorption, dielectric film
having a high refractive index at a thickness within the range for
interference. Very small differences in the natural reflectivity between
constituents and the matrix can be dramatically enhanced by this method.
Suitable materials for the production of evaporation layers are summarized in Ref 22 and 23. The technique is universally applicable but
requires the use of a vacuum evaporator. Its main limitation is difficulty
in obtaining a uniformly coated large surface area for measurement.

Fig. 30 Grain coloration of a heat-treated (340 °C, or 645 °F, 2 h) 5754 aluminum alloy sheet (longitudinal plane) obtained by anodizing using Barker's reagent (30 V direct current, 2 min). Viewed using crossed polarized light. For color version, see endsheets of book.


Fig. 31 Grain coloration of commercially pure titanium obtained by heat tinting, viewed using crossed polarized light. For color version of figure, see endsheets of book.

Conclusions
Preparation of metallographic specimens is based on scientific principles that are easily understood. Sectioning creates damage that must be
removed by the grinding and polishing steps if the true structure is to be
examined. Each sectioning process produces a certain amount of damage,
thermal and/or mechanical. Consequently, select a procedure that produces the least possible damage. Grinding also causes damage, with the
depth of damage decreasing with decreasing abrasive size. Materials
respond differently to the same size abrasive, so it is not possible to
generalize on metal removal depth. Removal rates also decrease with
decreasing abrasive size. With experience, good, reproducible procedures
can be established by each laboratory for the materials being prepared.
Automation in specimen preparation offers much more than a reduction in
labor. Specimens prepared using automated devices consistently have
much better flatness, edge retention, relief control, and freedom from
artifacts such as scratches, pull out, smearing, and comet tailing.
Some image analysis work is performed on as-polished specimens, but
many applications require some etching technique to reveal the microstructural constituent of interest. Selective etching techniques are best.
These may involve immersion tint etchants, electrolytic etching, potentiostatic etching, or techniques such as heat tinting or vapor deposition. In
each case, the goal is to reveal only the constituent of interest with strong


contrast. If this is done, image analysis measurement procedures are vastly simplified and the data are more precise and reproducible.

References
1. J.R. Pickens and J. Gurland, Metallographic Characterization of Fracture Surface Profiles on Sectioning Planes, Proc. Fourth International Congress for Stereology, National Bureau of Standards Special Publication 431, U.S. Government Printing Office, Washington, D.C., 1976, p 269–272
2. A.M. Gokhale, W.J. Drury, and S. Mishra, Recent Developments in Quantitative Fractography, Fractography of Modern Engineering Materials: Composites and Metals, Second Volume, STP 1203, ASTM, 1993, p 3–22
3. A.M. Gokhale, Unbiased Estimation of Curve Length in 3-D Using Vertical Slices, J. Microsc., Vol 159 (Part 2), August 1990, p 133–141
4. A.M. Gokhale and W.J. Drury, Efficient Vertical Sections: The Trisector, Metall. Mater. Trans. A, Vol 25, 1994, p 919–928
5. B.R. Morris, A.M. Gokhale, and G.F. Vander Voort, Grain Size Estimation in Anisotropic Materials, Metall. Mater. Trans. A, Vol 29, Jan 1998, p 237–244
6. Metallography and Microstructures, Vol 9, Metals Handbook, 9th ed.,
American Society for Metals, 1985
7. G.F. Vander Voort, Metallography: Principles and Practice, ASM
International, 1999
8. L.E. Samuels, Metallographic Polishing by Mechanical Methods, 3rd
ed., American Society for Metals, 1982
9. G. Petzow and V. Carle, Metallographic Etching, 2nd ed., ASM
International, 1999
10. G.F. Vander Voort, Grain Size Measurement, Practical Applications of Quantitative Metallography, STP 839, ASTM, 1984, p 85–131
11. G.F. Vander Voort, Wetting Agents in Metallography, Mater. Charact., Vol 35 (No. 2), Sept 1995, p 135–137
12. A. Skidmore and L. Dillinger, Etching Techniques for Quantimet Evaluation, Microstruct., Vol 2, Aug/Sept 1971, p 23–24
13. G.F. Vander Voort, Etching Techniques for Image Analysis, Microstruct. Sci., Vol 9, Elsevier North-Holland, NY, 1981, p 135–154
14. G.F. Vander Voort, Phase Identification by Selective Etching, Appl. Metallography, Van Nostrand Reinhold Co., NY, 1986, p 1–19
15. E.E. Stansbury, Potentiostatic Etching, Appl. Metallography, Van Nostrand Reinhold Co., NY, 1986, p 21–39
16. E. Beraha and B. Shpigler, Color Metallography, American Society
for Metals, 1977
17. G.F. Vander Voort, Tint Etching, Metal Progress, Vol 127, March 1985, p 31–33, 36–38, 41
18. E. Weck and E. Leistner, Metallographic Instructions for Colour Etching by Immersion, Part I: Klemm Colour Etching, Vol 77, Deutscher Verlag für Schweisstechnik GmbH, 1982
19. E. Weck and E. Leistner, Metallographic Instructions for Colour Etching by Immersion, Part II: Beraha Colour Etchants and Their Different Variants, Vol 77/II, Deutscher Verlag für Schweisstechnik GmbH, 1983
20. E. Weck and E. Leistner, Metallographic Instructions for Colour Etching by Immersion, Part III: Non-Ferrous Metals, Cemented Carbides and Ferrous Metals, Nickel-Base and Cobalt-Base Alloys, Vol 77/III, Deutscher Verlag für Schweisstechnik, 1986
21. P. Skočovský, Colour Contrast in Metallographic Microscopy, Slovmetal, Žilina, 1993
22. H.E. Bühler and H.P. Hougardy, Atlas of Interference Layer Metallography, Deutsche Gesellschaft für Metallkunde, 1980
23. H.E. Bühler and I. Aydin, Applications of the Interference Layer Method, Appl. Metallography, Van Nostrand Reinhold Co., NY, 1986, p 41–51

CHAPTER

4
Principles of
Image Analysis

James C. Grande
General Electric Research and Development Center

THE PROCESS by which a visualized scene is analyzed comprises specific steps that lead the user to either an enhanced image or data that can be used for further interpretation. A decision is required at each step before the next step can be taken, as shown in Fig. 1. In addition, many different algorithms can be used at each step to achieve desired effects and/or measurements.
To illustrate the decision-making process, consider the following
hypothetical situation. Visualize a polished section of nodular gray cast
iron (which is best imaged using reflected bright-field illumination). After digitizing, the image is enhanced to delineate the edges
more clearly. Then the threshold (gray-level range) of the graphite in the
metal matrix is set, and the image is transformed into binary form. Next,
some binary image processing is performed to eliminate the graphite
flakes so the graphite nodules can be segmented as the features of interest.
Finally, image analysis software measures area fraction and size distribution of nodules, providing data that can be used to compare against the
specifications of the material being analyzed.
This chapter discusses the practice of image processing for analysis and
explores issues and concerns of which the user should be aware.

Image Considerations
An image in its simplest form is a three-dimensional array of numbers
representing the spatial coordinates (x and y, or horizontal and vertical)
and intensity of a visualized object (Fig. 2). The number array is the
fundamental form by which mathematical calculations are performed to


enhance an image or to make quantitative measurements of features contained in an image. In the digital world, the image is composed of small,
usually square (to avoid directional bias) picture elements called pixels.
The gray level, or intensity, of each pixel relates to the number of light photons striking the detector within a camera. Images typically range in size
from arrays of 256 × 256 pixels to those as large as 4096 × 4096 pixels using specialized imaging devices. A myriad of cameras having wide-ranging resolutions and sensitivities is available today. In the

Fig. 1 Image analysis process steps. Each step has a decision point before the next step can be achieved.
Fig. 2 Actual image area with corresponding magnified view. The individual pixels are arranged in x, y coordinate space with gray level, or intensity, associated with each one.


mid to late 1980s, 512 × 512 pixel arrays were the standard. Older systems typically had 64 (2⁶) gray levels, whereas at the time of this publication, all commercial systems offer at least 256 (2⁸) gray levels, although there are systems having 4096 (2¹²) and 65,536 (2¹⁶) gray levels. These often are referred to as 6-bit, 8-bit, 12-bit, and 16-bit cameras, respectively.
The process of converting an analog signal to a digital one has some
limitations that must be considered during image quantification. For
example, pixels that straddle the edge of a feature of interest can affect the
accuracy and precision of each measurement because an image is
composed of square pixels having discrete intensity levels. Whether a
pixel resides inside or outside a feature edge can be quite arbitrary and
dependent on positioning of the feature within the pixel array. In addition,
the pixels along the feature edge effectively contain an intermediate
intensity value that results from averaging adjacent pixels. Such considerations suggest a desire to minimize pixel size and increase the number of gray levels in a system (particularly if features of interest are very small relative to the entire image) at the most reasonable equipment cost.
Resolution versus Magnification. Two of the more confusing aspects
of a digital image are the concepts of resolution and magnification.
Resolution can be defined as the smallest feature that can be resolved. For
example, the theoretical limit at which it is no longer possible to
distinguish two distinct adjacent lines using light as the imaging method
is at a separation distance of about 0.3 μm. Magnification, on the other
hand, is the ratio of an object dimension in an image to the actual size of
the object. Determining that ratio sometimes can be problematic, especially when the actual dimension is not known.
The displayed dimension of pixels is determined by the true magnification of the imaging setup. However, the displayed pixel dimension can
vary considerably with display media, such as on a monitor or hard-copy
(paper) print out. This is because a typical screen resolution is 72 dots per
inch (dpi), and unless the digitized image pixel resolution is exactly the
same, the displayed image might be smaller or larger than the observed
size due to the scaling of the visualizing software. For example, if an
image is digitized into a computer having a 1024 × 1024 pixel array, the
dpi could be virtually any number, depending on the imaging program
used. If that same 1024 × 1024 image is converted to 150 dpi and viewed
on a standard monitor, it would appear to be twice as large as expected
due to the 72 dpi monitor resolution limit.
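As an illustration of this scaling effect, the following short Python sketch (the 1024 pixel width and dpi values are the ones from the example above) computes the intended and displayed sizes:

# Display-scaling arithmetic from the example above; values are illustrative.
pixels = 1024
image_dpi = 150                      # resolution recorded in the image file
screen_dpi = 72                      # typical monitor resolution
intended_in = pixels / image_dpi     # ~6.8 in. intended print size
displayed_in = pixels / screen_dpi   # ~14.2 in. as drawn on the monitor
print(displayed_in / intended_in)    # ~2.1, i.e., about twice as large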
The necessary printer resolution for a given image depends on the
number of gray levels desired, the resolution of the image, and the
specific print engine used. Typically, printers require a 4 × 4 dot array for
each pixel if 16 shades of gray are needed. An improvement in output dpi
by a factor of 1.5 to 2 is possible with many printers by optimizing the
raster, which is a scanning pattern of parallel lines that form the display


of an image projected on a printing head of some design. For example, a 300 dpi image having 64 gray levels requires a 600 dpi printer for correct
reproduction. While these effects are consistent and can be accounted for,
they still are issues that require careful attention because accurate
depiction of size and shape can be dramatically affected due to incorrect
interpretation of the size of the pixel array used.
It is possible to get around these effects by including a scale marker or
resolution (e.g., μm/pixel) on all images. Then, accurate depiction of the
true size of features in the image is achieved both on monitor display and
on paper printout regardless of the enlargement. The actual size of a
stored image is nearly meaningless unless the dimensional pixel size (or
image size) is known because the final magnification is strictly dependent
on the image resolution and output device used.
Measurement Issues. Another issue with pixel arrays is determining
what is adequate for a given application. The decision influences the
sampling necessary to achieve adequate statistical relevance and the
necessary resolving power to obtain accurate measurements. For example, if it is possible to resolve the features of interest using the same
microscope setup and two cameras having differing resolutions, the
camera having the lowest resolution should be used because it will cover
a much greater area of the sample.
To illustrate this, consider that in a system using a 16× objective and a 1024 × 1024 pixel camera, each pixel is 0.3 μm square. Measuring 10 fields to provide sufficient sampling statistics provides a total area of 0.94 mm² (0.001 in.²). Using the same objective but switching to a 760 × 574 pixel camera, the pixel size is 0.66 μm square. To measure the same total area of 0.94 mm², it would require measuring only 5 fields. This could save
substantial time if the analysis is complicated and slow, or if there are
hundreds or thousands of samples to measure. However, this example
assumes that it is possible to sufficiently resolve features of interest using
either camera and the same optical setup, which often is not the case. One
of the key points to consider is whether or not the features of interest can
be sufficiently resolved.
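The sampling arithmetic above can be checked in a few lines of Python; this is only a sketch, using the pixel pitches and array sizes quoted in the text:

def field_area_mm2(nx, ny, pixel_um):
    # Area imaged by one field of an nx x ny pixel array, in mm^2
    return nx * ny * (pixel_um ** 2) / 1e6

a1 = field_area_mm2(1024, 1024, 0.3)   # ~0.094 mm^2 per field
a2 = field_area_mm2(760, 574, 0.66)    # ~0.190 mm^2 per field
print(round(10 * a1, 2), round(5 * a2, 2))   # both about 0.94-0.95 mm^2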
Using a microscope, it is possible to envision a situation where camera
resolution is not a concern because, if there are small features, magnification can easily be increased to accurately quantify, for instance, feature
size and shape. However, while this logic is accurate, in reality there is
much to be gained by maximizing the resolution of a given system,
considering hardware and financial constraints.
In general, the more pixels you can pack into a feature, the more
precise is the boundary detection when measuring the feature (Fig. 3). As
mentioned previously, the tradeoff of increasing magnification to resolve
small features is a greater sampling requirement. Due to the misalignment
of square pixels with the actual edge of a feature, significant inaccuracies
can occur when trying to quantify the shape of a feature with only a small


number of pixels (Fig. 4). If the user is doing more than just determining
whether or not a feature exists, the relative accuracy of a system is the
limiting factor in making any physical property measurements or correlating a microstructure.
When small features exist within an array of larger features, increasing
the magnification to improve resolving power forces the user to systematically account for edge effects and significantly increases the need for a
larger number of fields to cover the same area that a lower magnification
can cover. Again, the tradeoff has to be balanced with the accuracy
needed, the system cost, and the speed desired for the application. If a
high level of shape characterization is needed, a greater number of pixels
may be needed to resolve subtle shape variations.

Fig. 3 Small features magnified over 25 times showing the differences in the size and number density of pixels within features when comparing a 760 × 560 pixel camera and a 1024 × 1024 pixel camera
Fig. 4 Three scenarios of the effects of a minute change in position of a circular feature within the pixel array and the inherent errors in size that can result


One way to determine the acceptable magnification is to begin with a much higher magnification and perform the measurements needed, then repeat the same measurement using successively lower magnifications. An analysis routine can be set up after determining the lowest acceptable magnification for the camera resolution used.

Image Storage and Compression


Many systems store images onto a permanent medium (e.g., floppy,
hard, and optical disks) using proprietary algorithms, which usually
compress images to some degree. There also are standardized compression algorithms, for example, those of the Joint Photographic Experts Group (JPEG) and the tagged image file format (TIFF). The proliferation of
proprietary algorithms makes it cumbersome for users of imaging systems
to share images, but many systems offer the option to export images into
standard formats. Care must be exercised when storing images in standard
formats because considerable loss of information can occur during the
image compression process.
For instance, JPEG images are compressed by combining contiguous
segments of like gray/color levels in an image. A 512 × 512 × 24-bit
image having average color detail compresses to 30 kb when saved using
a mid-range level of compression but shrinks to 10 kb when the same
image without any features is compressed. The same image occupies 770
kb when stored in bitmap or TIFF form without any compression. In
addition, repeated JPEG compression of an image by opening and saving
an image results in increasing information loss, even with identical
settings. Therefore, it generally is recommended that very limited
compression (no less than half the original size) be used for images that
are for analysis as opposed to images that are for archival and visualization purposes only. The errors associated with compression depend on the
type of image being compressed and the size and gray-level range of the
features to be quantified. If compression is necessary, it is recommended
that image measurements are compared before and after compression to
determine the inaccuracies introduced (if any) for a particular application.
In general, avoid compression when measuring a large array of small
features in an image. Compression is much less of an issue when
measuring large features (e.g., coatings or layers on a substrate) that
contain thousands of pixels.
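The before-and-after comparison recommended here is easy to automate. The following sketch uses the Pillow and NumPy libraries; the file name "specimen.tif" and the gray-level threshold are hypothetical stand-ins, and the area-fraction measurement is only one simple example of a check:

import numpy as np
from PIL import Image

original = np.asarray(Image.open("specimen.tif").convert("L"))
Image.fromarray(original).save("specimen.jpg", quality=75)   # mid-range compression
compressed = np.asarray(Image.open("specimen.jpg").convert("L"))

threshold = 128   # illustrative gray-level threshold for the dark features
frac_orig = (original < threshold).mean()
frac_jpeg = (compressed < threshold).mean()
print(f"area fraction: {frac_orig:.4f} (TIFF) vs {frac_jpeg:.4f} (JPEG)")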

Image Acquisition
Image acquisition devices include light microscopes, electron microscopes (e.g., scanning electron, transmission electron, and Auger), laser


scanning, and other systems that translate a visualized scene into an analog or digital form. The critical factor when determining whether
useful information can be gleaned from an image is whether there is
sufficient contrast between the features of interest and the background.
The acquisition device presents its own set of constraints, which must be
considered during the image processing phase of analysis. For instance,
images produced using a transmission electron microscope (TEM)
typically are difficult to analyze because the contrast mechanism uses
transition of feature gray levels as the raster scans the sample. However,
back-scattered electrons can be used to improve contrast due to the
different atomic numbers from different phases contained in a sample
examined on a flat surface with no topographic features. Alternatively,
elemental signal information might also be used to distinguish features of
interest in an appropriately equipped scanning electron microscope
(SEM) based on chemical composition of features. When using a light
microscope to image samples, dark-field illumination sometimes is used
to illuminate features that do not ordinarily reflect most of the light to the objective, as usually occurs under bright-field illumination.
Images are electronically converted from an analog signal to a digital
array by various means and transferred into computer random access
memory (RAM) for further processing. Earlier imaging sensors were
mainly of the vacuum tube type, designed for specific applications, such
as low-light sensitivity and stability. The main limitations of these sensors
were nonlinear light response and geometric distortion. The bulk of
todays sensors are solid-state devices, which have nearly zero geometric
distortion and linear light response and are very stable over time.
Frame-acquisition electronics (often referred to as a frame grabber), the complementary part to the imaging sensor, converts the signal from the camera into a digital array. The frame grabber selected must match the
camera being used. Clock speed, signal voltage, input signals, and
computer interface must be considered when matching the frame grabber
to the camera. Some cameras have the digitizing hardware built in and
only require the appropriate cable to transfer the data to the computer.
An optical scanner is another imaging device that can produce low-cost,
very high-resolution images with minimal distortion. The device, however, requires an intermediate imaging step to produce a print or negative
that subsequently can be scanned into a computer.
Illumination uniformity and inherent fluctuations that can occur with a
camera are critical during the acquisition process. Setting up camera gain,
offset, and other variables can be critical in attaining consistent results
(Ref 1). Any system requires that two basic questions be answered:
• Do the size and shape of features change with position within the camera?
• Is the feature gray-level range the same over time?


Users generally turn to the use of dc power supplies, which isolate power
from house current to minimize subtle voltage irregularities. Also, some
systems contain feedback loops that continuously monitor the amount of
light emanating from the light source and adjust the voltage to compensate for intensity fluctuations. Another way of achieving consistent
intensities is to create a sample that can be used as a standard when setting
up the system. This can be done by measuring either the actual intensity
or feature size of a specified area on the sample.

Image Processing
Under ideal conditions, a digitized image can be directly binarized
(converted to black and white) and measured to obtain desired features.
However, insufficient contrast, artifacts, and/or distortions very often
prevent straightforward feature analysis. Image processing can be used in
this situation to compensate for the plethora of image deficiencies,
enabling fast and accurate analysis of features of interest.
Gray-level image processing often is used to enhance features in an
image either for visualization purposes or for subsequent quantification.
The number of available algorithms has increased rapidly over the years, offering many ways to enhance images, and with the advent of low-cost, high-performance computers, many of these algorithms can be used in real time.
Shading Correction. Image defects that are caused by uneven illumination or artifacts in the imaging path must be taken into account during
image processing. Shading correction is used when a large portion of an
image is darker or lighter than the rest of the image due to, for example, bulb misalignment or the use of poor optics in the system. The relative
differences between features of interest and the background are usually
the same, but features in one area of the image have a different gray-level
range than the same type of feature in another portion of the image. The
main methods of shading correction use a background reference image,
either actual or artificial, and polynomial fitting of nearest-neighbor
pixels.
A featureless reference image requires the acquisition of an image using
the same lighting conditions but without the features of interest. The
reference image is then subtracted or divided (depending on light
response) from the shaded image to level the background. If a reference
image cannot be obtained, it is sometimes possible to create a pseudoreference image by using rank-order processing (which is discussed later)
to diminish the features and blend them into the background (Fig. 5).
Polynomial fitting also can be used to create a pseudo-background image,
but it is difficult to generate if the features are neither distinct nor
somewhat evenly distributed.
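As a sketch of the pseudoreference approach (assuming SciPy's rank-order filtering; the kernel size is illustrative and would be tuned to the feature size), a heavily median-filtered copy of the image can stand in for the missing reference:

import numpy as np
from scipy.ndimage import median_filter

def correct_shading(image):
    # Blend features into the background with a large median (rank-order) filter
    background = median_filter(image.astype(float), size=51)
    corrected = image / np.maximum(background, 1e-6)   # divide out the shading
    # Rescale the result to the usual 0-255 gray-level range
    corrected = 255 * (corrected - corrected.min()) / np.ptp(corrected)
    return corrected.astype(np.uint8)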


Each shading correction methodology has its own advantages and limitations, which usually depend on the type of image and illumination used. Commercial systems usually use one shading correction method, optimized for that particular system; the choice also may depend on how easily a reference image can be obtained or on the degree of variation in the image.
Pixel point operations are a class of image enhancements that do not
alter the relationship of pixels to their neighbors. This class of algorithms
uses a type of transfer function to translate original gray levels into new
gray levels, usually called a look-up table, or LUT. For instance, a

Fig. 5 Rank-order processing used to create a pseudoreference image. (a) Image without any features in the light path showing dust particles and shading of dark regions to light regions going from the upper left to the lower right. (b) Same image after shading correction. (c) Image of particles without shading correction. (d) Same image after shading correction showing uniform illumination across the entire image


pseudocolor LUT enhancement simply correlates a color with a gray


value and assigns a range of colors to the entire gray-level range in an
image. This technique can be very useful to delineate subtle features. For
example, it is nearly impossible to distinguish features having a difference
of, say, five gray levels. However, it is possible to delineate subtle features
by assigning different colors to different gray-level ranges because the
human eye can distinguish different hues much better than it can different
gray levels.
Another useful enhancement effect uses a transfer function that changes
the relationship between the input gray level and the output or displayed
gray level from a linear one to another that enhances the desired image
features (Fig. 6). This often is referred to as the gamma curve for the
displayed image and has many useful effects, especially when viewing
very bright objects with very dark features, such as thermal barrier
coatings.
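A minimal sketch of such a transfer function in Python (the gamma value is illustrative) builds a 256-entry LUT and indexes it with the image:

import numpy as np

def apply_gamma(image, gamma=0.5):
    # gamma < 1 brightens dark pixels while leaving light pixels nearly unchanged
    levels = np.arange(256) / 255.0
    lut = (255 * levels ** gamma).astype(np.uint8)   # the look-up table
    return lut[image]   # image must be an 8-bit (uint8) array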
An image can be displayed as a histogram by summing up all the pixels
in uniform ranges of gray levels and plotting the number of pixels versus

Fig. 6 Reflected bright-field image of an oxide coating before and after use of a gamma curve transformation that translates pixels with lower intensities to higher intensities while keeping the original lighter pixels near the same levels
Fig. 7 Example of a gray-level histogram generated from an image


gray level (Fig. 7). An algorithm is used to transform the histogram, uniformly distributing intermediate brightness values evenly throughout the full gray-level range (usually 0–255), a technique called histogram
equalization. The effect is that an individual pixel has the same relative
brightness but has a shifted gray level from its original value. The shift in
gray-level gradients often provides improved contrast of previously subtle
features, as shown in Fig. 8.
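Histogram equalization amounts to mapping each gray level through the cumulative distribution of the image; a minimal Python sketch (most imaging libraries provide an equivalent routine):

import numpy as np

def equalize(image):
    # image is an 8-bit gray-level array
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()          # cumulative distribution function
    lut = (255 * cdf).astype(np.uint8)        # map each level by its CDF
    return lut[image]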
Neighborhood kernel processing is a class of operations that translates individual pixels based on surrounding pixels. The concept of using
a kernel or two-dimensional array of numeric operators provides a wide
range of image enhancements including:
• Sharpening an image
• Eliminating noise
• Smoothing edges
• Finding edges
• Accentuating subtle features

Fig. 8 Reflected-light image of an aluminum-silicon alloy before and after gray-level histogram equalization, which significantly improves contrast of the subtle smaller silicon particles by uniformly distributing intensities


These algorithms should be used carefully because the effect on an individual pixel depends on its neighbors. The output image after
processing can vary considerably from image to image when making
quantitative measurements. Numerous mathematical formulas, derivatives, and least-square curve fitting also can be used to provide various
enhancements.
Neighborhood kernel processing includes rank-order, Gaussian, Laplacian, and averaging filters. An example of a rank-order filter is the Median
filter, which determines the median, or 50%, value of a set of gray values
in the selected kernel and replaces the central value with the median
value. An algorithm translates the selected kernel over to the next pixel
and applies the same process (Fig. 9). A variety of operators with the
resulting image transformation are illustrated in Fig. 10. Reference 2
describes many kernel filters in much greater detail together with example
images.
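As a sketch of kernel processing (assuming SciPy; the stand-in image and the kernel are illustrative), the following applies a 3 × 3 median filter and a simple Laplacian-based sharpening kernel:

import numpy as np
from scipy.ndimage import convolve, median_filter

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)  # stand-in image

denoised = median_filter(image, size=3)   # 3 x 3 rank-order (median) filter

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])        # classic sharpening kernel
sharpened = convolve(image.astype(float), sharpen)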
Arithmetic Processing of Images. Image processing that uses more
than one image and combines them in some mathematical way is useful
to accentuate subtle differences between images and to observe spatial
dependencies. For example, adding images is used to increase the
brightness in an image, averaging images is used to reduce noise, and
subtracting images is used to correct for background shading (see the
section "Shading Correction") and to highlight subtle and not-so-subtle
differences. There are other math manipulations that are used occasionally, but effectiveness can vary widely due to the extreme values that can
result when multiplying or dividing gray values from two images.
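For example, frame averaging reduces random noise by roughly the square root of the number of frames; a short sketch with synthetic frames standing in for repeated camera acquisitions:

import numpy as np

rng = np.random.default_rng(0)
truth = np.full((32, 32), 128.0)                 # ideal noise-free image
frames = [truth + rng.normal(0, 20, truth.shape) for _ in range(16)]
mean_frame = np.mean(frames, axis=0)             # noise std drops ~4x for 16 frames
print(np.std(frames[0] - truth), np.std(mean_frame - truth))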
Frequency domain transformation is another image enhancement,
which is particularly useful to distinguish patterns, remove very fine
texture, and determine repeating periodic structures. The most popular

Fig. 9 Schematic showing how kernel processing works by moving kernel arrays of various sizes over an image and using a formula to transform the central pixel accordingly. In the example shown, a median filter is used.


transform is the Fourier transform, which uses the fast Fourier transform (FFT) algorithm to quickly calculate the power spectrum and complex values in frequency space. Usually, the power spectrum display is used to determine periodic features or preferred orientations, which assists in determining the alignment in an electron microscope and in identifying fine periodic structures (Fig. 11). A more extensive description of transforms
can be found in Ref 2.
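A minimal power-spectrum sketch in Python (NumPy's FFT; the log scaling is for display only):

import numpy as np

def power_spectrum(image):
    # Center-shifted, log-scaled power spectrum of a 2-D image
    f = np.fft.fftshift(np.fft.fft2(image))   # move zero frequency to the center
    return np.log1p(np.abs(f) ** 2)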

Fig. 10 Examples of neighborhood kernel processing using various processes. (a) Original reflected-light image of a titanium alloy. Image using (b) gradient filter, (c) median filter, (d) Sobel operator, (e) top-hat processing, (f) gray-level opening

Fig. 11 Defect shown with different image enhancements. (a) High-resolution image from a transmission electron microscope of silicon carbide defect in silicon showing the alignment of atoms. (b) Power spectrum after application of fast Fourier transform (FFT) showing dark peaks that result from the higher-frequency periodic silicon structure. (c) Defect after masking the periodic peaks and performing an inverse FFT


Feature Discrimination
Thresholding. As previously described, an image that has 256 gray
values needs to be processed in such a way as to allow quantification by
reducing the available gray values in an image to only the features of
interest. The process in which 256 gray values are reduced to 2 gray
values (black and white, or 0 and 1) is called thresholding. It is
accomplished by selecting the gray-level range of the features of interest.
Pixels within the selected gray-level range are assigned as foreground, or
detected features, and everything else as background, or undetected
features. In other terms, thresholding simply converts the image to a
series of 0s and 1s, which represent undetected and detected features,
respectively. Whether white features represent foreground or vice versa
varies with image analysis systems, but it does not affect the analysis in
any way and usually is a matter of the programmer's preference.
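In code, thresholding is a single comparison; a sketch (NumPy, with an illustrative gray-level range):

import numpy as np

def threshold(image, lo, hi):
    # Pixels inside [lo, hi] become 1 (detected); all others become 0
    return ((image >= lo) & (image <= hi)).astype(np.uint8)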
The segmentation process usually yields three types of images depending on the system: a black and white image, a bit-plane image, and a
feature-boundary representation (Fig. 12). The difference between the
methods is analogous to a drawing program versus a painting program. A
drawing program creates images using lines and/or polygons to represent
features and uses much less space. It also can quickly redraw, scale, and
change an image comprising multiple features. By comparison, a painting
program processes images one pixel at a time and allows the user to
change the color of individual pixels because each image comprises
various pixel arrangements.
The replicated black and white image is more memory intensive
because, generally, it creates another image of the same size and
gray-level depth after processing and thresholding, and requires the same
amount of computer storage as the original image. A bit-plane image is a
binary image, usually having a color that represents the features of
interest. It often is easier to track binary image processing steps during
image processing development using the bit-plane method. Feature-boundary representation is more efficient when determining feature
perimeter and shape. There is no inherent advantage to any methodology
because the final measurements are similar and the range of processing
algorithms and possible feature measurements remain competitive.
Segmentation. Basically, there are three ways that a user indicates to
an image analysis system the appropriate threshold for segmentation
using gray level:
• Enter the gray-level values that represent the desired range.
• Select both width (gray-level range) and location (gray-level values) by moving a slider along a gray-level spectrum bar (Fig. 13). This is known as the interactive method. Interactive selection usually affects the size of a colored overlay bit plane that is superimposed on the gray-level image, which allows setting the gray-level range to agree with the user's assessment of the correct feature boundaries.
• Determine if there are any peaks that correspond to many pixels within a specific gray-level range using a gray-level histogram (Fig. 14).
Interactive selection and histogram characteristic-peaks thresholding
methods are used frequently, sometimes together, depending on the
particular type of image being viewed. Automatic thresholding often uses
the histogram peaks method to determine where to set the gray-level
ranges for image segmentation. However, when using automatic thresholding, the user must be careful because changing overall brightness or
artifacts, or varying amounts of foreground features, can change the
location and the relative size of the peaks. Some advanced algorithms can
overcome these variations.
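One widely used automatic, histogram-based threshold is Otsu's method, which picks the level that best separates the two dominant histogram modes; the sketch below uses scikit-image and is offered as one representative algorithm, not the specific method any particular commercial system uses:

from skimage.filters import threshold_otsu

def auto_binarize(image):
    t = threshold_otsu(image)          # level separating the histogram modes
    return (image > t).astype("uint8")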

Fig. 12 Images showing the three main transformations from a gray-level image to a thresholded image. (a) Original gray-level image. (b) Black and white image. (c) Binary image using a colored bit plane. (d) Detected feature boundaries


There are more issues to consider when thresholding color images for
features of interest. Most systems use red, green, and blue (RGB)
channels to establish a color for each pixel in an image. It is difficult to
determine the appropriate combination of red, green, and blue signals to
distinguish features. Some systems allow the user to point at a series of
points in a color image and automatically calculate the RGB values,
which are used to threshold the entire image. A better methodology than
RGB color space for many applications is to view a color image in hue,
intensity, and saturation (HIS) space. The advantage of this method is that
color information (hue and saturation) is separated from brightness
(intensity). Hue essentially is the color a user observes, while the
saturation is the relative strength of the color. For example, translating "dark green" to an HIS perspective would use "dark" as the level of saturation (generally expressed as a value between 0 and 100%) and "green" as

Fig. 13 Interactive method of selecting gray levels with graphic slider
Fig. 14 Thresholding gray levels in an image by selecting the gray-level peaks that are characteristic of the features of interest


the hue observed. While saturation describes the relative strength of color,
intensity is associated with the brightness of the color. Intensity is
analogous to thresholding of gray values in black and white space. Hue,
intensity, and saturation space also is described as hue, lightness, and
saturation (HLS) space, where L quantifies the dark-light aspect of colored light (see Chapter 9, "Color Image Processing").
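Python's standard colorsys module converts between RGB and HLS on 0-to-1 scaled values, one pixel at a time; a sketch with an illustrative dark green pixel:

import colorsys

r, g, b = 0.0, 0.4, 0.1                    # an illustrative dark green
h, l, s = colorsys.rgb_to_hls(r, g, b)     # hue, lightness, saturation
print(f"hue={h:.2f} lightness={l:.2f} saturation={s:.2f}")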
Nonuniform Segmentation. Selecting the threshold range of gray
levels to segment foreground features sometimes results in overdetecting
some features and underdetecting others. This is due not only to varying brightness across an image, but often also to the gradual change of gray levels while scanning across a feature. Delineation enhancement is
a useful gray-level enhancement tool in this situation (Fig. 15). This
algorithm processes the pixels that surround features by transforming
their gradual change in gray level to a much steeper curve. In this way, as
features initially fall within the selected gray-level range, the apparent
size of the feature will not change much as a wider band of gray levels is
selected to segment all features.
There are other gray-level image processing tools that can be used to
delineate edges prior to segmentation and to improve contrast in certain
regions of an image, and their applicability to a specific application can
be determined by experimenting with them.
Watershed Segmentation. Watershed transformations are iterative
processes performed on images that have space-filling features, such as
grains. The enhancement usually starts with the basic eroded point or the
last point that exists in a feature during successive erosions, often referred
to as the ultimate eroded point. Erosion/dilation is the removal and/or
addition of pixels to the boundary of features based on neighborhood

Fig. 15 Delineation filter enhances feature edges by sharpening the transition of gray values considerably, providing more leeway when thresholding. (a) Magnified original gray-level image of particles showing gradual transition of gray levels along the feature edges. (b) The same image after using a delineation filter


relationships. The basic eroded point is dilated until the edge of the
dilating feature touches another dilating feature, leaving a line of
separation (watershed line) between touching features.
Another much faster approach is to create a Euclidean distance map
(EDM), which assigns successively brighter gray levels to each dilation
iteration in a binary image (Ref 2). The advantage of this approach is that
the periphery of each feature grows until impeded by the growth front of
another feature. Although watershed segmentation is a powerful tool, it is
fraught with application subtleties when applied to a wide range of
images. The reader is encouraged to refer to Ref 2 and 3 to gain a better
understanding of the proper use and optimization of this algorithm and for
a detailed discussion on the use of watershed segmentation in different
applications.
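A hedged sketch of EDM-based watershed separation using SciPy and scikit-image (the minimum peak spacing is illustrative and, as cautioned above, would need tuning for each application):

import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_touching(binary):
    # Euclidean distance map: brighter toward the center of each feature
    edm = ndimage.distance_transform_edt(binary)
    coords = peak_local_max(edm, min_distance=5, labels=binary.astype(int))
    seeds = np.zeros(binary.shape, dtype=bool)
    seeds[tuple(coords.T)] = True
    markers, _ = ndimage.label(seeds)
    # Flood from the seeds; watershed lines separate touching features
    return watershed(-edm, markers, mask=binary)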
Texture Segmentation. Many images contain texture, such as lamellar
structures, and features of widely varying size, which may or may not be
the features of interest. There are several gray-level algorithms that are
particularly well suited to images containing texture because of the
inherent frequency or spatial relationships between structures. These
operators usually transform gradually varying features (low frequency) or
highly varying features (high frequency) into an image with significantly
less texture.
Algorithms such as Laplacian, Variance, Roberts, Hurst, and Frei and
Chen operators often are used either alone or in combination with other
processing algorithms to delineate structures based on differing textures.
Methodology to characterize banding and orientation microstructures of
metals and alloys is covered in ASTM E 1268 (Ref 4).
Pattern-matching algorithms are powerful processing tools used to
discriminate features of interest in an image. Usually, they require prior
knowledge of the general shape of the features contained in the image.
For instance, if there are cylindrical fibers oriented in various ways
within a two-dimensional section of a composite, a set of boundaries can
be generated that correspond to the angles at which a cylinder might occur
in three-dimensional space. The resulting boundaries are matched to the
actual fibers that exist in the section, and the resulting angles are
calculated based on the matched patterns (Fig. 16). Generally, patternmatching algorithms are used when required measurements cannot be
directly made or calculated from the shape of a binary feature of interest.

Binary Image Processing


Boolean Logic. Binary representation of images allows simple analysis
of features of interest while disregarding background information. There
are many algorithms that operate on binary images to correct for
imperfect segmentation. The use of Boolean logic is a powerful tool


Fig. 16 Pattern matching used for reconstructing glass fibers in a composite. (a) Bright-field image of a glass fiber composite with several broken fibers. (b) Computer-generated image after pattern matching, which reconstructs the fibers enabling the quantification of the degree of fiber breakage after processing

that compares two images on a pixel-by-pixel basis and then generates an output image containing the result of the Boolean combination. Four basic Boolean operations are:

• AND
• OR
• Exclusive OR (XOR)
• NOT

These basic four often are combined in various ways to obtain a desired
result, as illustrated in Fig. 17.
A simple way to represent Boolean logic is by using a truth table, which
shows the criteria that must be fulfilled to be included in the output image.
When comparing two images, the AND Boolean operation requires that the corresponding pixels from both images be ON (1 = ON, 0 = OFF). Such a truth table would look like this:

Image A  Image B  A AND B
0        0        0
0        1        0
1        0        0
1        1        1
If a pixel is ON in one image and OFF in another, the resulting pixel will be OFF after the AND Boolean operator is applied. The OR operator requires only that one or the other corresponding pixel from either image be ON to yield a pixel which is ON. The XOR operator produces an ON pixel as long as the corresponding pixels are different; that is, one is ON
and one is OFF. If both pixels are ON or both are OFF, then the resulting
output will be an OFF value. The NOT operator is simply the inverse of
an image, but when used in combination with other Boolean operators,
can yield interesting and useful results.
Some other truth tables are shown below:

Image A  Image B  A OR B  A XOR B  NOT A
0        0        0       0        1
0        1        1       1        1
1        0        1       1        0
1        1        1       0        0
An important use of Boolean operations is combining multiple criteria, including spatial relationships, multiphase relationships with various
materials, brightness differences, and size or morphology within a set of
images. It is important that the order and grouping of the particular
operation be maintained when designating a particular sequence of
Boolean operations.
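With images held as Boolean arrays, the four operations map directly onto NumPy's element-wise operators; a one-row sketch:

import numpy as np

A = np.array([[1, 1, 0, 0]], dtype=bool)
B = np.array([[1, 0, 1, 0]], dtype=bool)

print((A & B).astype(int))   # AND -> [[1 0 0 0]]
print((A | B).astype(int))   # OR  -> [[1 1 1 0]]
print((A ^ B).astype(int))   # XOR -> [[0 1 1 0]]
print((~A).astype(int))      # NOT -> [[0 0 1 1]]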
Feature-based Boolean logic is an extension of pixel-based Boolean
logic in that individual features, rather than individual pixels, are

Fig. 17 Examples of Boolean operators using two images


compared between images (Fig. 18). The resultant image contains the
entire feature instead of just the parts of a feature that are affected by the
Boolean comparison. Feature-based logic uses artificial features, such as
geometric shapes, and real features, such as grain boundaries, to ascertain
information about features of interest.
There is a plethora of uses for Boolean operators on binary images and also in combination with gray-scale images. Examples include coating
thickness measurements, stereological measurements, contiguity of
phases, and location detection of features.
Morphological Binary Processing. Beyond combining images in
unique ways to achieve a useful result, there also are algorithms that alter
individual pixels of features within binary images. There are hundreds of
specialized algorithms that might help particular applications and merit
further experimentation (Ref 2, 3). Several of the most popular algorithms
are mentioned below.
Hole filling is a common tool that removes internal holes within
features. For example, one technique completely fills enclosed regions of
features (Fig. 19a, b) using feature labeling. This identifies only those
features that do not touch the image edge, and these are combined with
the original image using the Boolean OR operator to reconstruct the
original inverted binary image with the holes filled in. There is no limit on
how large or tortuous a shape is. The only requirement for hole filling is
that the hole is completely contained within a feature.
A variation of this is morphological-based hole filling. In this technique,
the holes are treated as features in the inverted image and processed in the
desired way before inverting the image back. For example, if only holes
of a certain size are to be filled, the image is simply inverted, features
below the desired size are eliminated, and then the image is inverted back

Fig. 18 Feature-based Boolean logic operates on entire features when determining whether a feature is ON or OFF. This example shows the result when using the AND Boolean operator with image A and image B from Fig. 17. An image B outline is shown for illustrative purposes.


(Fig. 19a, c, d). It also is possible to fill holes based on other shape
criteria.
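Both variants take only a few lines with SciPy; a sketch on a synthetic feature (the 50-pixel size cutoff is illustrative):

import numpy as np
from scipy import ndimage

binary = np.zeros((16, 16), dtype=bool)
binary[4:12, 4:12] = True        # a square feature ...
binary[7:9, 7:9] = False         # ... with a small internal hole

filled = ndimage.binary_fill_holes(binary)    # fills every enclosed hole

# Size-selective variant: treat holes as features of the inverted image
holes = ~binary
labels, n = ndimage.label(holes)
sizes = ndimage.sum(holes, labels, index=np.arange(1, n + 1))
small = np.isin(labels, np.flatnonzero(sizes < 50) + 1)   # holes under 50 pixels
selective = binary | small       # fill only the small holes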
Erosion and Dilation. Common operations that use neighborhood
relationships between pixels include erosion and dilation. These operations simply remove or add pixels to the periphery (both externally and
internally, if it exists) of a feature based on the shape and location of
neighborhood pixels. Erosion often is used to remove extraneous pixels,
which may result when overdetection during thresholding occurs, because
some noise has the same gray-level range as the features of interest. When
used in combination with dilation (referred to as opening), it is possible
to separate touching particles. Dilation often is used to connect features
by first dilating the features followed by erosion to return the features to
their approximate original size and shape (referred to as closing).
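A sketch of both compound operations using SciPy's binary morphology (the stand-in image is synthetic, and the structuring element and iteration count are left at their defaults for illustration):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
binary = rng.random((64, 64)) > 0.6        # stand-in noisy binary image

opened = ndimage.binary_opening(binary)    # erosion then dilation: cleans/separates
closed = ndimage.binary_closing(binary)    # dilation then erosion: bridges gaps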

Fig. 19 Effects of different hole-filling methods. (a) Transmitted-light image containing an array of glass particles with some interstitial clear regions within the particles. (b) Same image after the application of the hole-filling algorithm with dark gray regions showing the filled regions. Identified areas 1, 2, and 3 show erroneously filled regions due to the arrangement of particles. (c) Inverted or negative of the first image, which treats the original interstitial holes as individual features. (d) Image after removing features below a certain size and inverting the image to its original binary order with only interstitial holes filled


Most image analysis systems allow the option of using several neighborhood-kernel patterns (Fig. 20) and also allow selection of the
number of iterations used. However, great care must be exercised when
using these algorithms because the feature shape (especially for small
features) can be significantly different from the original feature shape.
Parameter selection can dramatically affect features in the resulting image
because if too many iterations are used relative to the size of the feature,
it can take on the shape of the neighborhood pattern used (Fig. 21).
However, some very useful results can be achieved when using the right
erosion/dilation kernel shape. For instance, using a vertical shape closing
in a binary image of a surface can remove edges that fold over themselves
(Fig. 22), which allows determination of the roughness of an interface.
Skeletonization, SKIZ, Pruning, and Convex Hull. A specialized use
of erosion that prevents the separation of features while eroding away
pixels is called skeletonization, or thinning. This operation is useful when
thinning thick, uneven feature boundaries. Caution is advised when using
this algorithm on very thick boundaries because the resulting skeleton can
change dramatically depending on the existence of just a few pixels on an
edge or within a feature.
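A sketch of thinning with scikit-image (the rectangular feature is synthetic):

import numpy as np
from skimage.morphology import skeletonize

blob = np.zeros((32, 32), dtype=bool)
blob[8:24, 12:20] = True          # a thick rectangular feature
skeleton = skeletonize(blob)      # reduced to a one-pixel-wide midline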
Skeleton by influence zones (SKIZ), a variation of skeletonization,
operates by simultaneously growing all features in an image (or eroding

Fig. 20 Examples of the effects of erosion on a feature using kernels of various shapes and the associated shape of a single pixel after dilation using the same kernel


the background) to the extent possible given the zones of influence of growing features (Fig. 23). This is analogous to nearest-neighbor determinations because drawing a line segment from the edge of one feature to
the edge of an adjacent feature results in a midpoint, which is the zone of
influence. The result of a SKIZ operation often replicates what an
arrangement of grain boundaries looks like. Additionally, it is possible to
measure the resulting zone size to quantify spatial clustering or statistics
on the overall separation between features.

Fig. 21 Particle with elongated features showing the effect of using a number of octagonal-opening (erosion followed by a dilation) iterations

Fig. 22 Use of a vertical shape closing in a binary image. (a) Reflected-light image of a coating having a tortuous interface. (b) Binary image of coating. (c) Binary image after hole filling and removal of small unconnected features. (d) Equally spaced vertical lines overlaid on the binary image. (e) Result after a Boolean AND of the lines and the binary image. (f) Image after vertical closing of 35 cycles, which closes off overlapping features of the interface for measuring roughness. (g) Binary image showing the lines before (dark gray) and the line segments filled in after closing (black). (h) Vertical lines overlaid on the original lightened gray-level image


Occasionally, unconnected boundaries remain after the skeletonization operation and can be removed by using a pruning algorithm that
eliminates features having endpoints. The convex-hull operation can be
used to fill concavities and smooth very jagged skeletons or feature
peripheries. Basically, a convex-hull operation selectively dilates concave
feature edges until they become convex (Fig. 24).

Further Considerations
The binary operations described in this chapter are only a partial list of
the most frequently used operations and can be combined in useful ways
to produce an image that lends itself to straightforward quantification of
features of interest. Today, image analysis systems incorporate many processing tools to perform automated, or at least fast, feature analysis.
Creativity is the final tool that must be used to take full advantage of the
power of image analysis. The user must determine if the time spent in
developing a set of processing steps to achieve computerized analysis is
justified for the application. For example, if you have a complicated

Fig. 23 Effects of the skeleton by influence zones (SKIZ) process. (a) Scanning electron microscope image of a superalloy. (b) Binary image of gamma prime particles overlaid on the original gray-level image with light gray particles touching the image boundary. (c) After application of SKIZ showing zones of influence. (d) Zones with original binary particles overlaid


Fig. 24 Images showing the use of various binary operations on a grain structure. (a) Original bright-field image of grain structure. (b) Binary image after removal of small disconnected features. (c) Binary image after skeletonization with many short arms extending from grain boundaries. (d) Binary image after pruning. (e) Binary image after pruning and after 3 iterations of convex hull to smooth boundaries. (f) Image showing grain structure after skeletonization of the convex-hulled image

image that has minimal contrast but somewhat obvious features to the
human eye and only a couple of images to quantify, then manual
measurements or tracing of the features might be adequate. However, the
benefit of automated image analysis is that sometimes-subtle feature
characterizations can yield answers that the user might never have
guessed based on cursory inspections of the microstructure.

References
1. E. Pirard, V. Lebrun, and J.-F. Nivart, Optimal Acquisition of Video Images in Reflected Light Microscopy, Microsc. Anal., Issue 37, 1999, p 19–21
2. J.C. Russ, The Image Processing Handbook, 2nd ed., CRC Press,
1994
3. L. Wojnar, Image Analysis, Applications in Materials Engineering,
CRC Press, 1998
4. Standard Practice for Assessing the Degree of Banding or Orientation of Microstructures, E 1268-94, Annual Book of ASTM Standards, ASTM, 1999


CHAPTER

5
Measurements
John J. Friel
Princeton Gamma Tech

THE ESSENCE of image analysis is making a measurement or series of measurements that quantify some aspect of the image of a microstructure. A microstructure often is the link between process and properties in
materials science, and the extent to which the aspects of the microstructure can be quantified establishes the strength of the link. Image analysis
measurements can be made manually, automatically, or even by comparing a series of images that already have been measured, as in comparison
chart methods. Manual measurement usually involves counting features,
such as points, intersections, and intercepts, but it also involves measuring
length. Area and volume, on the other hand, usually are derived values.
Generally, reference to the term image analysis is synonymous with
automatic image analysis (AIA), in which a computer makes measurements on a digital image. In this case, measurements of size, such as area,
longest dimension, and diameter, for example, are direct measurements.
Other measurements can be derived from these primitive measures,
including measures of shape, such as circularity, aspect ratio, and area
equivalent diameter. All of these measurements are easily calculated by
the computer for every feature in the image(s).

Contrast Mechanisms
The principles of how to acquire, segment, and calibrate images have
been discussed in Chapter 4. However, one concept that must be
considered before making measurements is the choice of signal and
contrast mechanism. The contrast mechanism selected carries the information to be quantified, but it is the signal used for acquisition that
actually carries one or more contrast mechanisms. It is useful, therefore,
to distinguish between the contrast-bearing signals suitable for digitization


Table 1 Contrast mechanisms with associated imaging signals

Signal                     Contrast mechanism
Reflected light            Topographic; crystallographic; composition; true color; interference colors
Transmitted light          True color; interference colors and figures; biological/petrographic structure
Secondary electrons        Topographic; voltage; magnetic types 1 and 3
Backscattered electrons    Atomic number; topographic (trajectory); crystallographic (electron channeling patterns, electron backscattered patterns); magnetic type 2; biological structure (stained)
X-rays                     Composition
Absorbed electrons         Atomic number; charge (EBIC); crystallographic; magnetic type 2
Transmitted electrons      Mass thickness; crystallographic (electron diffraction)
Cathodoluminescence        Composition; electron state

EBIC, electron beam induced current

and the contrast mechanism itself that will be used to enhance and
quantify. In routine metallography, bright-field reflected-light microscopy
is the usual signal, but it may carry many varied contrast types depending
on the specimen, its preparation, and etching. The mode of operation of
the microscope also affects the selection of contrast mechanisms. A list of
some signals and contrast mechanisms is given in Table 1.

Direct Measurements
Field Measurements
Field measurements usually are collected over a specified number of
fields, determined either by statistical considerations of precision or by
compliance with a standard procedure. Standard procedures, or norms, are
published by national and international standards organizations to conform to agreed-upon levels of precision. Field measurements also are the
output of comparison chart methods.
Statistical measures of precision, such as 95% confidence interval (CI)
or percent relative accuracy (%RA), are determined on a field-to-field basis rather than among individual features. Some standard procedures
require a worst-field report, and often are influenced by agreements
between producer and purchaser. For example, the purchaser may specify
that the amount of a certain category of features, such as nonmetallic
inclusions in steel, cannot exceed a specified limit. While such an
approach does not reduce the number of fields to be measured, it does
reduce the amount of information that needs to be reported. Reports for
every field and feature are easily generated using automatic image
analysis, but the amount of useful information may be no greater than that
contained in the worst field report.
When making any measurements on a field of features, it always is
assumed that the image has been thresholded properly. In general,
precision is limited by the number of pixels available to represent the
contrast. Any error in segmenting the image to reveal and define the
features will result in a bias in the analysis. Automatic image analysis
usually results in a more precise and reproducible analysis because so
many more features can be counted. However, there are times when
manual image analysis is less biased because human perception can better
identify the features. This effect is particularly true when features cannot
be uniquely thresholded but are easily perceived. An example of this
situation is the measurement and quantification of lamellar pearlite in
steel if the magnification and contrast are sufficiently high. While the
computer would count each lamella as a feature, a human observer would
just perceive the pearlite phase.
Before starting to make measurements, the analyst must decide the
number of fields to measure and at what magnification. This step assumes
adequate sampling of the material. This is an area in which standard
procedures can be helpful. Such standards may be published by standards
writing organizations or they may be in-house procedures developed
to make meaningful and reproducible measurements on a particular
material.
Count. One of the easiest measurements for a computer to perform on
a field is to count the features. This may not be the easiest manual
measurement, however, especially if the number of features is large. A
similar measurement of great usefulness is a count of the number of
intercepts. To make this measurement, the computer or the operator
counts the number of times a series of selected test lines intercept the
features. (A more detailed description of how to make this measurement
is given below in the section Intercept Measurements.) Once the
number of intercepts is counted and the total length of the test line is
known, the stereological parameter of lineal density, NL, can be calculated, and from it are derived many other stereological parameters. Some
of these are discussed more fully in the Field Measurements section
under Derived Measurements. A list of some primitive field measurements follows:

• Field number
• Field area
• Number of features
• Number of features excluded
• Area of features
• Area of features filled
• Area fraction
• Number of intercepts
• NL
• NA (number of features divided by the total area of the field)

The length of the test line in a manual analysis is measured on a template superimposed on a photomicrograph or by use of a reticule in the
microscope, each corrected for magnification. An AIA system uses the
total length of the scan.
Area fraction is another easy measurement for an AIA system. For this
measurement, the computer simply scans the number of pixels ascribed to
a certain type of feature (phase) and divides by the total number of pixels
in the image. This operation is most easily understood by visualizing the
gray-level histogram. If the signal and contrast mechanism are suitably
selected, the peaks in the histogram correspond to phases in the
microstructure. With threshold customarily set in pseudocolor, the area
fraction of any phase is merely the sum of all pixels within a selected
region of the histogram divided by the sum of pixels in the entire
histogram. The example shown in Fig. 1(a) consists of a gray-scale image
of a multiphase ceramic varistor acquired using the backscattered electron
signal in a scanning electron microscope (SEM). The image was
thresholded in pseudocolor as shown in Fig. 1(b). Pseudocolor means
false color, and the term is used to distinguish it from the actual or true
color that is seen in the microscope or color photomicrograph. Pseudocolor customarily is used to depict those intensity levels of the microstructure that have been assigned by the thresholding operation to a
particular component (phase) within the specimen. After thresholds have
been set on the image, measurements can be made on each segment
separately.
Figure 2 shows the histogram that the computer used to automatically
set the thresholds, and the resulting area fractions of the phases are shown:

Low     High    Area, %
0       106     10.22
107     161     80.05
162     255     9.73
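In code, this bookkeeping amounts to summing histogram bins. The following is a minimal sketch, not taken from any particular AIA package; it assumes an 8-bit gray-scale image held in a NumPy array, and the function name and threshold bands are illustrative:

```python
import numpy as np

def area_fractions(image, bands):
    """Area fraction of each thresholded phase, read off the gray-level
    histogram (assumes 8-bit gray levels, 0-255)."""
    hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(), minlength=256)
    total = hist.sum()
    return [hist[lo:hi + 1].sum() / total for lo, hi in bands]

# Hypothetical usage with the three ranges reported for Fig. 2:
# area_fractions(img, [(0, 106), (107, 161), (162, 255)])
# -> approximately [0.1022, 0.8005, 0.0973]
```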

Manually, area fraction usually is measured by point counting. The method used and precision that can be expected are described in ASTM
E 562 (Ref 1). From a stereological standpoint, the area fraction, AA, is
equivalent to the volume fraction, VV, for a field of features that do not
have preferred orientation.
The average area of features can be calculated from feature-specific
measurements, but it also is possible to derive it from field measurements
as follows:
\bar{A} = \frac{A_A}{N_A}  (Eq 1)

where \bar{A} is the average area of features in many fields, AA is the average area fraction, and NA is the average number of features per unit area. The advantage to determining average area in this manner compared with measuring it feature-by-feature is merely that it is easier just to make two measurements per field.

Fig. 1  Multiphase ceramic material in (a) gray scale and (b) pseudocolor. Please see endsheets of book for color version.

Fig. 2  Histogram of the image in Fig. 1
Intercept Measurements. The difference between the concepts of
intercepts and intersections must be clearly understood to evaluate the
requirements of a specified procedure. When either a manual or computer-generated test line encounters a feature, the boundary between background and feature edge is considered an intersection. As the test line
continues, that part of the line superimposed upon the feature constitutes
an intercept, and the step from the feature back to the background is
another intersection. Therefore, there are two intersections per intercept,
except in the case of space filling features, where the number of
intersections equals the number of intercepts.
Figure 3 shows a microstructure consisting of space filling grains with
three test lines to illustrate counting intercepts and intersections. The lines
are horizontal, which is appropriate for a random microstructure. If the
features in the image have a preferred orientation, then the test lines
should be random or aligned to measure a specific type of feature. The
number of grains intercepted by each line totals 21, as does the number
of intersections. The total length of the test lines, L, in this case equals
1290 μm, and the lineal density, NL (the number of intercepts of features divided by the total test line length), equals 0.016 per μm. Actual measurements
would be performed using more lines on more fields.
It also is possible to do more than count intercepts or intersections. The
length of the test line on each feature can be measured in calibrated units
of the microstructure. If the total length of the lines (linear intercept
lengths, or Li) passing through features, i, is known, it is possible to
calculate the length fraction, LL, by dividing Li by the total length of the
test lines. Moreover, it is possible to calculate the mean lineal intercept,
L3, from the field measurement, NL, by the expression:
\bar{L}_3 = \frac{L_L}{N_L}  (Eq 2)

The mean lineal intercept usually is noted by the symbol L3 to indicate that it is an estimate of the intercept in three dimensions.
Figure 4 shows a binary image of porosity in a ceramic with a test line
drawn to illustrate an LL measurement. The total length of the test line is 100 μm, and the total length of the white portions, Li, is 42 μm. Therefore, LL = 0.42. Using an image analysis system to measure the same image yields similar results; a computed value of NL = 0.036 is obtained by counting the intersections of raster lines with features. From Eq 2, L3 = 11.67 μm. Computer measurement of the mean intercept length on each feature directly produces a mean lineal intercept value of 10.22 μm. The difference is attributable to the use of only one test line to illustrate the LL measurement. A more detailed review of intercept measurements and
many other measurements can be found in Ref 2.
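This raster-line bookkeeping is easy to automate. The following minimal sketch assumes a binary image such as Fig. 4 is available as a NumPy array with feature pixels set to 1; the function name is illustrative, and every image row is treated as a test line:

```python
import numpy as np

def intercept_measurements(binary, units_per_pixel=1.0):
    """Estimate the length fraction LL, the lineal density NL, and the
    mean lineal intercept L3 = LL/NL (Eq 2), treating every raster row
    of a binary image (features = 1) as a test line."""
    b = np.asarray(binary).astype(int)
    LL = b.mean()  # feature line length divided by total test-line length
    # Each 0 -> 1 step along a row begins one intercept; runs that start
    # at the left image edge are counted separately.
    intercepts = (np.diff(b, axis=1) == 1).sum() + b[:, 0].sum()
    NL = intercepts / (b.size * units_per_pixel)
    L3 = LL / NL if NL > 0 else float("nan")
    return LL, NL, L3
```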
Duplex Feature Size. Special consideration is needed in dealing with
a microstructure consisting of features distinguished only by their size or
shape, where all features are of the same material. In this case, features
cannot be discriminated on the basis of gray level. In such an instance, the
computer finds and measures all the features after binarization of the
image and selects only those that meet the selected criteria. The analysis
may have to be performed several times to report data on each type of
feature separately.
The criterion needed to distinguish features can be straightforward, as
in distinguishing fibers from pores, or the dividing line can be more
subjective. For instance, the analyst might wonder what value to use to
distinguish coarse from fine particles. A histogram describing the distribution of some size parameter, such as area or diameter, can be helpful.
Fig. 3  Test lines used to measure intercepts on microstructure image

Fig. 4  Image showing length measurements

However, even better than a number histogram is one that is weighted by the selected parameter. For example, large features are not given any
more weight than small ones in a number histogram of area, but in an
area-weighted histogram, the dividing line between coarse and fine is
more easily observed. Using more than one operator to agree on the
selection point improves the precision of the analysis by reducing the
variance, but any inherent bias still remains. Determination of duplex
grain size described in the section Grain Size is an example of this
situation.
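The number-weighted and area-weighted views of the same data can be generated side by side. A minimal sketch, assuming the per-feature areas have already been measured into a sequence; names are illustrative:

```python
import numpy as np

def weighted_histograms(areas, bins=20):
    """Number-weighted vs. area-weighted histograms of feature area.
    In the number histogram every feature counts once; in the area
    histogram each feature is weighted by its own area, so a coarse
    population stands out even when it is numerically rare."""
    areas = np.asarray(areas, dtype=float)
    number_counts, edges = np.histogram(areas, bins=bins)
    area_counts, _ = np.histogram(areas, bins=edges, weights=areas)
    return (100.0 * number_counts / number_counts.sum(),
            100.0 * area_counts / area_counts.sum(),
            edges)
```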
Feature orientation in an image constitutes a field measurement even
though it could be determined by measuring the orientation of each
feature and calculating the mean for the field. This is easily done using a
computer, but there is a risk that there might not be enough pixels to
sufficiently define the shape of small features. For this reason, orientation
measurements are less precise. Moreover, if all features are weighted
equally regardless of size, small ill-defined features will add significant
error to the results, and the measurement may not be truly representative.
Because orientation of features relates to material properties, measurements of many fields taken from different samples are more representative
of the material than measurements summed from individual features. This
situation agrees nicely with the metallographic principle, "do more less well"; that is, measurements taken from more samples and more fields give
a better representative result than a lot of measurements on one field. A
count of intercepts, NL, made in two or more directions on the specimen
can be used either manually or automatically to derive a measure of
preferred orientation. The directions, for example, might be perpendicular
and parallel to the rolling direction in a wrought metal. The term
orientation as used here refers to an alignment of features recognizable in
a microscope or micrograph. It does not refer to crystallographic
orientation, as might be ascertained using diffraction methods.
ASTM E 1268 (Ref 3) describes a procedure for measuring and
reporting banding in metals. The procedure calls for measuring NL
perpendicular and parallel to the observed banding and calculating an
anisotropy index, AI, or a degree of orientation, Ω12, as follows (Ref 4):

AI = \frac{N_{L\perp}}{N_{L\parallel}}  (Eq 3)

and

\Omega_{12} = \frac{N_{L\perp} - N_{L\parallel}}{N_{L\perp} + 0.571\,N_{L\parallel}}  (Eq 4)

Mean Free Path. Microstructural features have a tendency to cluster, which is an aspect of a microstructure that is particularly difficult to quantify. While possible approaches to this problem are discussed in the section "Derived Measurements," considered here is an easily measured field descriptor, mean free path, λ, which can be calculated from NL and yields a single value per field:

\lambda = \frac{1 - A_A}{N_L}  (Eq 5)

In the case of space filling grains, the area fraction equals one, and,
therefore, the mean free path is zero. However, for features that do not
occupy 100% of the image, mean free path gives a measure of the
distance between features on a field-by-field basis.
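Equation 5 reduces to one line of code per field. A minimal sketch with illustrative names, assuming AA and NL are expressed in consistent units:

```python
def mean_free_path(area_fraction, NL):
    """Mean free path (Eq 5): average edge-to-edge spacing between
    features along a test line, from the area fraction AA and the
    lineal density NL (intercepts per unit test-line length)."""
    return (1.0 - area_fraction) / NL

# e.g., a field with AA = 0.12 and NL = 0.016 per um gives
# mean_free_path(0.12, 0.016) -> 55 um between features, on average.
```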
Surface Area. There is at least one way to approximate the surface
area from two images using stereoscopy. If images are acquired from two
different points of view, and the angle between them is known, the height
at any point, Z, can be calculated on the basis of displacement from the
optic axis as follows:
Z = \frac{P}{2M\,\sin(\alpha/2)}  (Eq 6)

where M is magnification, P is parallax distance, and α is the parallax angle. The derivation of this relationship can be found in Goldstein et al. (Ref 5)
and other texts on scanning electron microscopy (SEM). The technique is
particularly well suited to imaging using SEM because the stage is easily
tilted to provide the parallax angle, and the depth of focus is so large.
The height of any point can be calculated from manual measurements
on photomicrographs, but automatic image analysis (AIA) makes it
possible to compute an entire matrix of points over the surface. Assuming
that corresponding points can be identified in each pair of images, their
displacement in the direction perpendicular to the tilt axis can be
measured, and their height above a reference surface can be calculated
using Eq 6.
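A minimal sketch of this computation, assuming corresponding points have already been matched on a regular grid and their parallax measured; the function names, the grid-based input, and the two-triangle tiling of each grid cell are illustrative choices, not a prescribed implementation:

```python
import numpy as np

def heights_from_parallax(parallax, magnification, tilt_deg):
    """Height Z at each matched point from Eq 6: Z = P / (2 M sin(a/2))."""
    a = np.radians(tilt_deg)
    return np.asarray(parallax) / (2.0 * magnification * np.sin(a / 2.0))

def surface_area(z, spacing):
    """Approximate surface area of a height grid z (calibrated units,
    spacing = grid step) by summing the areas of the two triangles
    that tile each grid cell."""
    total = 0.0
    rows, cols = z.shape
    for i in range(rows - 1):
        for j in range(cols - 1):
            p00 = np.array([0.0, 0.0, z[i, j]])
            p10 = np.array([spacing, 0.0, z[i, j + 1]])
            p01 = np.array([0.0, spacing, z[i + 1, j]])
            p11 = np.array([spacing, spacing, z[i + 1, j + 1]])
            # area of a 3-D triangle = half the cross-product magnitude
            total += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            total += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    return total
```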
The reference surface is the locus of points representing zero displacement between the left and the right images. Because the absolute
magnitude of the reference height can be altered by shifting one entire
image relative to the other, it is possible to define the surface of interest
in relation to an arbitrarily specified surface. With a matrix of coordinates
in x, y, and z, it then is possible to calculate the area of each of the finite
number of planar rectangles or triangles defined by the points. The sum of
these planes approximates the true surface area. For a description of this
and other methods for measuring surface roughness as applied to fracture
surfaces, see Underwood and Banerji (Ref 6). If the scale of the
measurement can be varied, by magnification, for example, then the
logarithm of measured surface area when plotted against the logarithm of the scale represents the fractal dimension of the surface. Fractal dimension is discussed further in the section "Derived Measurements."
Direct measurement of surface area using methods such as profilometry
or the scanning probe microscopy techniques of scanning tunneling
microscopy (STM) and atomic force microscopy (AFM), will not be
considered here because they are not image analysis techniques.

Feature-Specific Measurements
Feature-specific measurements logically imply the use of an AIA
system. In the past, so-called semiautomatic systems were used in which
the operator traced the outline of features on a digitizing tablet. This type
of analysis is time consuming and is only useful to measure a limited
number of features. However, it does have the advantage of requiring the
operator to confirm the entry of each feature into the data set.
Although specimen preparation and image collection have been discussed previously, it should be emphasized again that automatic image
analysis is meaningful only when the image(s) accurately reflect(s) the
properties of the features to be measured. The feature finding program
ordinarily detects features using pseudocolor. As features are found, their
position, area, and pixel intensity are recorded and stored. Other primitive
measures of features can be made by determining Feret diameters
(directed diameters, or DD) at some discrete number of angles. These
include measures of length, such as longest dimension, breadth, and
diameter. Theoretically, shape determination becomes more accurate with
increasing use of Feret diameters. However, in practice, resolution,
threshold setting, and image processing are more likely to be limiting
factors than is the number of Feret diameters. A list of some feature-specific primitive measurements follows:
• Position x and y
• Area
• Area filled
• Directed diameters (including maximum, minimum, and average)
• Perimeter
• Inscribed x and y (including maximum and minimum)
• Tangent count
• Intercept count
• Hole count
• Feature number
• Feature angle

The area of a feature is one of the easiest measurements for a computer to make because it is merely the sum of the pixels selected by
the threshold setting. For any given pixel resolution, the smaller the
feature, the less precise is its area measurement. This problem is even
greater for shape measurements, as described in the section Derived
Measurements.
If a microstructure contains features of significantly different sizes, it
may be necessary to perform the analysis at two different magnifications.
However, there is a magnification effect in which more features are
detected at higher magnification, which may cause bias. Underwood (Ref
7) states in a discussion of magnification effect that the investigator sees
more at higher magnifications. Thus, more grains or particles are counted
at higher magnification, so values of NA are greater. The same is true for
lamellae, but spacings become smaller as more lamellae are counted.
Other factors that can influence area measurement include threshold
setting, which can affect the precision of area measurement, and specimen
preparation and image processing, which can affect both the precision and
bias of area measurement.
Length. Feature-specific descriptor functions such as maximum, minimum, and average are readily available with a computer, and are used to
define longest dimension (max DD), breadth (min DD), and average
diameter. Average diameter as used here refers to the average directed
diameter of each feature, rather than the average over all of the features.
Length measurements of individual features are not readily accommodated using manual methods, but they can be done. For example, the
mean lineal intercept distance, L, can be determined by averaging the
chord lengths measured on each feature.
As with area measures, the precision of length measurements is limited
by pixel resolution, the number of directed diameters constructed by the
computer, and threshold setting. Microstructures containing large and
small features may have to be analyzed at two different magnifications, as
with area measurements.
Bias in area and length measurements is influenced by threshold setting
and microscope and image analyzer calibration. Calibration should be
performed at a magnification as close to that used for the analysis as
possible, and, for SEMs, x and y should be calibrated separately (Ref 8).
Perimeter. Measurement of perimeter length requires special consideration because representation of the outer edge of features in a digital
image consists of steps between adjacent pixels, which either are square
or some other polygonal shape. The greater the pixel resolution, the closer
will be the approximation to the true length of a curving perimeter.
Because the computer knows the coordinates of every pixel, an approximation of the perimeter can be made by calculating the length of the
diagonal line between the centers of each of the outer pixels and summing
them. However, this approach typically still underestimates the true
perimeter of most features. Therefore, AIA systems often use various
adjustments to the diagonal distance to minimize bias.

Along with the measured approximation of the true perimeter, a convex
perimeter can be constructed from the Feret diameters. These directed
diameters form a polygon with sides touching the feature because they are
constructed at some number of angles around the feature. The number of
sides to the polygon depends on the number of directed diameters
constructed by the software. From the polygon, a convex perimeter, or
taut string perimeter, approximates the perimeter that would be formed if
a rubber band were stretched around the feature. The perimeter of a nearly
circular feature can be computed from its diameter, a descriptor that is
easier to measure precisely.
Another more complicated approach to perimeter measurement is the
Crofton perimeter (Ref 9). In this method, the derivation of which is
beyond the scope of this chapter, the length, L, of a curved line, such as
the perimeter of an irregular feature, is estimated from the number of
intersections it makes with a set of straight lines of known spacing, given
by the expression:
L = \frac{1}{2} \cdot \frac{\pi}{4}\,nr  (Eq 7)

In the above equation, L is the length of a curved line such as the perimeter of an object, n is the number of intersections with a series of sets of parallel lines at 45° to each other, and r is the spacing of the lines in calibrated units.
The parallel lines at 45° and 135° have different spacings depending on whether they are drawn manually so they are spaced equally with the 0° and 90° lines, or whether the computer uses pixel diagonals. When an image analysis system uses pixel diagonals, the 45° and 135° line spacing must be corrected by a factor of 1/\sqrt{2}, as in the expression:

L = \frac{1}{2} \cdot \frac{\pi}{4} \left[ r\,(n_0 + n_{90}) + \frac{r}{\sqrt{2}}\,(n_{45} + n_{135}) \right]  (Eq 8)

Figure 5 shows two grids having the same arbitrary curve of unknown
length superimposed. The grid lines in Fig. 5(a) are equally spaced;
therefore, those at 45° and 135° do not necessarily coincide with the diagonals of the squares. In Fig. 5(b), the squares outlined by black lines represent pixels in a digital image, and the 45° and 135° lines in blue are constructed along the pixel diagonals. Equation 7 applies to the intersection count from Fig. 5(a), and Eq 8 applies to Fig. 5(b).
For example, in Fig. 5(a), the number of intersections of the red curve with the grid equals 56. Therefore, L = 22.0 in accordance with Eq 7. By comparison, in Fig. 5(b), n0 + n90 = 31 and n45 + n135 = 36, where the first count refers to intersections with the black square gridlines and the second to intersections with the blue diagonal lines. Therefore, according to Eq 8, L = 22.2 compared with a measured length of 21.9, all in units of the grid.
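A minimal sketch of Eq 8, using the worked intersection counts from Fig. 5(b); names are illustrative:

```python
import math

def crofton_length(n_rect, n_diag, spacing=1.0):
    """Crofton estimate of a curve's length (Eq 8): n_rect intersections
    with the 0/90-degree gridlines of spacing r, and n_diag with the
    45/135-degree lines along pixel diagonals, whose effective spacing
    is r / sqrt(2)."""
    return 0.5 * (math.pi / 4.0) * (
        spacing * n_rect + (spacing / math.sqrt(2.0)) * n_diag)

# Worked numbers from Fig. 5(b): crofton_length(31, 36) -> about 22.2
```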
Position. While the computer knows the position of every pixel, the
position of features needs to be defined. The centroid of a feature is an
obvious choice for position, but there are shapes in which the centroid lies
outside of the feature, such as crescent moon shapes and some shapes
having reentrant angles. Different image analysis systems approach the
problem differently, but one way to check how a particular analyzer
defines the position is to have the computer put a mark or label on the feature.

Fig. 5  Grids used to measure Crofton perimeter. (a) Equally spaced grid lines. (b) Rectilinear lines and diagonal lines. Please see endsheets of book for color versions.

Regardless of the way position is defined, it is a measurement
best suited to computer-based methods.
In this discussion, position refers to absolute position in image
coordinates. However, there are situations in which position relative to
other features is important, such as nearest-neighbor relationships and clustering of features, which are described in the section "Derived Measurements." Relative position concepts are most easily reported as
field measurements. However, if nearest neighbors can be measured
directly either manually or automatically, a distribution of measurements
can be reported.
An example of a problem that requires relative position information is
distinguishing locked from liberated minerals in the mining industry.
When the desired feature (ore mineral) is surrounded completely by
minerals having no economic value, it is said to be locked. In this
problem, it would be useful to measure the extent of contact between the
minerals of interest and each of the surrounding minerals. There are other
industrial problems that would benefit from data on relative position, but
the measurement is difficult to make automatically without sufficient
intelligence in a program specifically designed for this purpose.
Intensity. Although gray-level intensities are used in many image
processing operations, the intensity at each pixel most often is used in
image analysis only to threshold the image. Once the image is segmented
into planes or pseudocolors, the original intensity information can be lost.
A more complicated situation is that of a true color image. Here the
information contained in the separate red, green, and blue intensities is
necessary to segment the image based on shades of color (see Chapter 9,
Color-Image Processing).
Images constructed from spectroscopic techniques, such as x-ray
mapping and microinfrared or Raman spectroscopy, also use intensity
information. The intensity is necessary to display those regions of the
image corresponding to some chemical information and for the computer
to correlate or discriminate regions based on spectroscopic data. For
example, Fig. 6 shows precipitated carbides in a steel after heat treatment
for several months. The upper two images are x-ray maps of iron and
silicon, which show areas in which iron seems to be depleted and regions
in which silicon seems to be enriched, corresponding to carbide phases.
The computer can be instructed to find areas rich in carbon from the
intensity in a carbon map (not shown), and then the intensity of other
elements can be used to interpret and compare composition among
carbide grains. The lower two images, constructed by the computer to
show iron and silicon intensity in just the carbides, show that there are
two distinct phases, iron carbide (Fe3C) and silicon carbide (SiC). Other
maps of manganese and chromium showed that these elements substituted
for iron in the metal carbide, or M3C, while silicon formed a separate
phase.

Fig. 6  X-ray intensity maps of carbides in steel

Derived Measurements
Field Measurements
Stereological Parameters. Stereology is a body of knowledge for
characterizing three-dimensional features from their two-dimensional
representations in planar sections. A detailed review of stereological
relationships can be found in Chapter 2, Introduction to Stereological
Principles, and in Ref 4 and 10. The notation uses subscripts to denote
a ratio. For example, NA refers to the number of features divided by the
total area of the field. A feature could be a phase in a microstructure, a
particle in a dispersion, or any other identifiable part of an image. Volume
fraction, VV, is a quantity derived from the measured area fraction, AA,
although in this case, the relationship is one of identity, VV = AA. There
are various stereological parameters that are not directly measured but
that correlate well with material properties. For example, the volume
fraction of a dispersed phase may correlate with mechanical properties, and the length per area, LA, of grain boundaries exposed to a corrosive
medium may correlate with corrosion resistance.
The easiest measurements to make are those that involve counting
rather than measuring. For example, if a test grid of lines or points is used,
the number of lines that intercept a feature of interest or the number of
points that lie on the feature are counted and reported as NL or point
count, PP. ASTM E 562 describes procedures for manual point counting
and provides a table showing the expected precision depending on the
number of points counted, the number of fields, and the volume fraction
of the features (Ref 1). Automatic image analysis systems consider all
pixels in the image, and it is left to the operator to tell the computer which
pixels should be assigned to a particular phase by using pseudocolor. It
also is easy to count the number of points of interest that intersect lines
in a grid, PL. If the objects of interest are discrete features, such as
particles, then the number of times the features intercept the test lines
gives NL. For space filling grains, PL = NL, and for particles, PL = 2NL.
In an AIA system, the length of the test line is the entire raster; that is, the
total length of lines comprising the image in calibrated units of the
microstructure. Similarly, it is possible to count the number per area, NA,
but this has the added difficulty of having to rigorously keep track of each
feature or grain counted to avoid duplication.
All of the parameters above that involve counting are directly measurable, and several other useful parameters can be derived from these
measurements. For example, the surface area per volume, SV, can be
calculated as follows:
S_V = 2P_L  (Eq 9)

where SV refers to the total surface area of features divided by their volume, not the volume of the specimen. The length of lines intercepting features divided by the area of the section is defined as:

L_A = \frac{\pi}{2} P_L  (Eq 10)

Another useful relationship defines the average area of a particular phase counted over many fields as:

\bar{A} = \frac{A_A}{N_A}  (Eq 11)

Today it is more common to use an AIA system to calculate the average area directly from the area measurement of each feature.
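The counting-based relationships in Eq 9 to 11 translate directly to code. A minimal sketch with illustrative names, assuming PL, AA, and NA are measured in consistent units:

```python
import math

def stereology_from_counts(PL, AA=None, NA=None):
    """Derived stereological parameters from counting measurements:
    SV = 2*PL (Eq 9), LA = (pi/2)*PL (Eq 10), and, when the area
    fraction AA and number density NA are both supplied, the average
    feature area AA/NA (Eq 11)."""
    out = {"SV": 2.0 * PL, "LA": (math.pi / 2.0) * PL}
    if AA is not None and NA is not None:
        out["mean_area"] = AA / NA
    return out
```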
Grain Size. A field measurement that frequently correlates with
material properties is grain size. The concept is based on the number of grains in a given area of the specimen, such as the number of grains per square inch at 100× magnification. For more on grain size measurement,
see Chapter 2, Introduction to Stereological Principles; Chapter 7,
Analysis and Interpretation; Chapter 8, Applications, and Ref 11.
Realizing the importance of accurate grain size measurement, ASTM
Committee E 4 on Metallography took on the task of standardizing grain
size measurement. ASTM E 112 is the current standard for measuring
grain size and calculating an ASTM G value (Ref 12). The relationship
between G and the number of grains per square inch at a magnification of
100, n, follows:
n = 2^{G-1}  (Eq 12)

However, G is generally calculated from various easily measured stereological parameters, such as NL and NA. ASTM E 1382 (Ref 13) describes
the procedures for measuring G using automatic or semiautomatic image
analysis, and gives two equations:
G = (6.643856\,\log_{10} N_L) - 3.288  (Eq 13)

where NL is in mm⁻¹, and:

G = (3.321928\,\log_{10} N_A) - 2.954  (Eq 14)

where NA is in mm⁻².
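The grain-size conversions are direct to code. A minimal sketch of Eq 12 to 14 with illustrative function names; the constants are those quoted above:

```python
import math

def G_from_NL(NL_per_mm):
    """ASTM grain size number from lineal density NL in mm^-1 (Eq 13)."""
    return 6.643856 * math.log10(NL_per_mm) - 3.288

def G_from_NA(NA_per_mm2):
    """ASTM grain size number from areal density NA in mm^-2 (Eq 14)."""
    return 3.321928 * math.log10(NA_per_mm2) - 2.954

def grains_per_sq_inch_at_100x(G):
    """Eq 12: number of grains per square inch at 100x, n = 2^(G - 1)."""
    return 2.0 ** (G - 1.0)
```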
The procedures prescribed in ASTM E 112 assume an approximately
log-normal grain size distribution. There are other conditions in which
grain size needs to be measured and reported differently, such as a
situation in which a few large grains are present in a finer-grained matrix.
This is reported as the largest grain observed in a sample, expressed as
ALA (as large as) grain size. The procedure for making this measurement
is described in ASTM E 930.
Duplex grain size is an example of features distinguished by their size,
shape, and position discussed previously. ASTM E 1181 describes various
duplex conditions, such as bimodal distributions, wide-range conditions,
necklace conditions, and ALA. Figure 7 shows an image containing
bimodal duplex grain size in an Inconel Alloy 718 (UNS N07718)
nickel-base superalloy. Simply counting grains and measuring their
average grain size (AGS) yields 1004 grains having an ASTM G value of
9.2. However, such an analysis completely mischaracterizes the sample
because the grain distribution is bimodal.
Figure 8 shows an area-weighted histogram of the microstructure in
Fig. 7, which suggests a division in the distribution at an average diameter
of approximately 50 μm (Ref 14). The number percent and area percent
histograms are superimposed, and the area-weighted plot indicates the
bimodal nature of the distribution. The number percent of the coarse

JOBNAME: PGIAspec 2 PAGE: 18 SESS: 54 OUTPUT: Thu Oct 26 14:55:36 2000


118 / Practical Guide to Image Analysis

grains is only 2%, but the area percent is 32%. Repeating the analysis for
grains with a diameter greater than 50 m yields 22 grains having a G
value of 4.9. The balance of the microstructure consists of 982 grains
having a G value of 9.8. The report on grain size, as specified by ASTM E 1181 on Duplex Grain Size, is given as: Duplex, Bimodal, 68% AGS ASTM No. 10, 32% AGS ASTM No. 5.

Fig. 7  Binary image of duplex grain structure in Inconel 718 nickel-base superalloy

Fig. 8  Grain-size histograms of structure in Fig. 7. Source: Ref 14
The fractal dimension of a surface, such as a fracture surface, can be
derived from measurements of microstructural features. Although fractal
dimension is not a common measure of roughness, it can be calculated
from measurements such as the length of a trace or the area of a surface.
The use of fractal measurements in image analysis is described by Russ
(Ref 15) and Underwood (Ref 16).
The concept of fractals involves a change in some dimension as a
function of scale. A profilometer provides a measure of roughness, but the
scale is fixed by the size of the tip. A microscope capable of various
magnifications provides a suitable way to change the scale. An SEM has
a large range of magnification over which it can be operated, which makes
it an ideal instrument to measure length or area as a function of scale. Two
linear measurements that can be made are length of a vertical profile of a
rough surface and length of the outline of features in serial sections. These
and other measurements for describing rough surfaces are extensively
reviewed by Underwood and Banerji (Ref 6).
If you can measure or estimate the surface area, using, for example,
stereoscopy discussed above, then the fractal dimension can be calculated. Such an analysis on a fracture surface is described by Friel and
Pande (Ref 17). Figure 9 shows a pair of images of the alloy described in
Ref 17 taken at two different tilt angles using an SEM. From stereopairs such as this, the true surface area can be calculated at different microscope magnifications consisting of seven orders of magnitude in scale. Figure 10 shows a plot of log area versus log scale. The slope of the line fitted to this plot is equal to the fractal dimension, determined directly from the two-dimensional surface.

Fig. 9  Stereopair of a titanium alloy fracture surface
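Reading the fractal dimension off such a plot is a straight-line fit in log-log coordinates. A minimal sketch, assuming paired scale and area measurements such as those plotted in Fig. 10; names are illustrative:

```python
import numpy as np

def fractal_dimension(scales, areas):
    """Slope of log(measured area) vs. log(scale), i.e., the fractal
    dimension read from a plot like Fig. 10. scales and areas are
    sequences of matched measurements at different magnifications."""
    slope, _ = np.polyfit(np.log10(scales), np.log10(areas), 1)
    return slope
```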
Clustering. One of the more difficult aspects of a microstructure to
quantitatively characterize is the degree to which specific features tend to
cluster together. Pores and inclusions are two types of features that tend
to cluster, often affecting materials properties. Although total porosity or
inclusion content usually is not too difficult to measure, provided the
specimen is prepared adequately, the spatial distribution of these features
is much harder to quantify.
One method to assess clustering uses the area of a cluster and the
number of features in the cluster. This can be accomplished by dilating the
features in the cluster until they fuse and then measuring the area of the
agglomerate. Alternatively, a new image can be constructed in which
intensity is based on the number of features clustered in regions of the
original image.
A particularly powerful approach to the problem makes use of Dirichlet
tessellations. Dirichlet (Ref 18) actually did not use the term tessellation,
which simply means mosaic. However, in this context, they are cells
constructed by a computer by means of expanding regions outward from
features until they meet those of their nearest neighbors. Figure 11 shows
clustered, ordered, and random features with corresponding tessellation
cells below each feature. Every point within each cell is closer to the
feature that generated it than it is to any other feature, a point Dirichlet
was first to prove mathematically. The area of the cells is roughly
proportional to the nearest-neighbor distance.
Once an image of cells is constructed for the three distribution types, as
shown in Fig. 11, the entire capability of feature analysis is available to
characterize the cells. For example, cell breadth is a measure of the first nearest-neighbor distance, and the longest dimension of a cell is a measure of the second nearest-neighbor distance. Horizontal and vertical Feret diameters provide information about orientation. It is possible not only to make measurements on each field, but also on every cell in the field, and to report distributions.

Fig. 10  Fractal plot of fracture surface area versus scanning electron microscope magnification
In the simplest case, the computer constructs a perpendicular bisector
between the centroid of each feature and its neighbors. These lines are
terminated when they meet another, and the result is an image consisting
of polygons centered on each feature. Using this simple approach, it is
conceivable that a large feature close to another small one would actually
extend beyond its cell. That is, the cell centered on the large feature is
smaller than the feature itself. This situation occurs because the midpoint
of the centroid-to-centroid line may lie within the large particle.
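For these simple centroid-based cells, standard computational-geometry tools can stand in for a dedicated image analyzer. The sketch below is an illustration using SciPy's Voronoi construction (an assumed dependency), not the edge-based SKIZ approach described next; the cell areas then serve as the clustering measure:

```python
import numpy as np
from scipy.spatial import Voronoi

def cell_areas(centroids):
    """Areas of the Dirichlet (Voronoi) cells generated by feature
    centroids; unusually small cells flag locally clustered features.
    Cells that extend to infinity (features on the hull) are skipped."""
    vor = Voronoi(np.asarray(centroids, dtype=float))
    areas = {}
    for point_idx, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or not region:  # unbounded cell: skip
            continue
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        # shoelace formula for the polygon area
        areas[point_idx] = 0.5 * abs(np.dot(x, np.roll(y, 1)) -
                                     np.dot(y, np.roll(x, 1)))
    return areas
```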
A more sophisticated approach suitable for image analysis is to grow
the regions outward from the edge of the features, instead of their
centroid, using a regions of influence (SKIZ) operator (see Chapter 4,
Principles of Image Analysis). This approach requires a more powerful
computer, but it yields more meaningful results. An example of tessellation cells constructed from features in a microstructure is shown in Fig.
12. The cells are constructed from the edge of the pores; therefore, the cell
boundaries are not always straight lines. The small cells correspond to
regions of the microstructure in which porosity is clustered.

Fig. 11  Images of clustered, ordered, and random features with their tessellated counterparts

Another measurement that can be calculated from an image of tessellations is local area fraction. This measure, described by Spitzig
(Ref 19), and discussed in greater detail in Chapter 6, Characterization
of Particle Dispersion, is defined as the area of each feature divided by
the area of its cell. The measurement is made on an image consisting of
features of interest and their tessellation cells. To define this measure for
the computer, the features are treated as holes within the cells, and local
area fraction is defined as (area filled minus area) divided by area filled,
where area filled refers to the area of the cells with the features (holes)
filled in. Local area fraction typically is reported as a field measurement
by averaging over all the cells within a field.
For a review of various methods to quantify clustering, refer to Vander
Voort (Ref 20), and also Chapter 6, Characterization of Particle
Dispersion, for a discussion on characterizing particle dispersions.

Feature-Specific Derived Measurements


The number of feature-specific derived measurements is unlimited in
the sense that it is possible to define any combination of primitive
measurements to form a new feature descriptor. However, many such
descriptors have become widely used and now are standard in computer-based image analyzer software. Table 2 lists some common feature-specific descriptors.
Shape. Feature-specific derived measurements are most useful as
measures of shape. Shape is more easily measured on each feature than
inferred from field measurements. A simple, common derived shape
measurement is aspect ratio. To the computer, this is the ratio of
maximum Feret diameter to Feret diameter perpendicular to it, sometimes

(a)

Fig. 12

(b)

Tessellation cells constructed from features in a microstructure. (a) Scanning electron microscope
photomicrograph of porosity in TiO2. (b) Tessellation cells constructed based on pores

JOBNAME: PGIAspec 2 PAGE: 23 SESS: 54 OUTPUT: Thu Oct 26 14:55:36 2000


Measurements / 123

called width. Other common feature descriptors, such as circularity and


form factor, purport to characterize the similarity of a feature to a circle.
However, different descriptors are sensitive to different primitive measurements. For example, while both circularity and form factor quantify
the similarity in outline of a feature to a circle, circularity is most sensitive
to the aspect ratio, while form factor is sensitive to variations in perimeter
curvature. Roughness is another such measurement that is even more
perimeter-sensitive than is form factor.
Combined Selection Criteria. It often is useful to combine two or
more descriptors when imposing selection criteria to determine which
features should be included in the analysis. In other words, if more than
one population exists in the image, each must be distinguished and
analyzed separately. For example, it might be necessary to analyze
small-area circular features, such as pores, or large-area high-aspect ratio
features, such as fibers. Generally, selection criteria such as these can be
applied separately or in combination (related by Boolean logic) in most
AIA systems. Although the computer may allow for combining numerous
criteria, experience shows that two selection criteria usually are sufficient
to describe the vast majority of useful features. It may be necessary to use
statistical methods to distinguish the populations if appropriate selection
criteria are not obvious. One such method is stepwise regression, in which
variables (size or shape parameters) are input into the regression one at a
time rather than regressing all at the same time. An advantage of this
technique is that the investigator can observe the effect of each variable
and, being familiar with the sample, can select the most significant ones.
A review of methods for distinguishing populations can be found in
Ref 21.
Table 2  Common feature-specific descriptors

Descriptor                        Definition
Area (A)                          Pixel count
Perimeter (P)                     Diagonal pixel center to center
Longest dimension (L)             Directed diameter maximum
Breadth (B)                       Directed diameter minimum
Average diameter (D)              Directed diameter average
Aspect ratio (AR)                 Maximum directed diameter / perpendicular directed diameter
Area equivalent diameter (AD)     √(4A/π) (diameter, if circular)
Form factor (FF)                  4πA/P² (perimeter-sensitive, always ≤ 1)
Circularity                       πL²/4A (longest-dimension-sensitive, always ≥ 1)
Mean intercept length             A / projected length (x or y)
Roughness                         P/πD (πD = perimeter of circle of diameter D)
Volume of a sphere                0.75225 A^(3/2) (sphere rotated about D)
Volume of a prolate spheroid      8A²/3πL (ellipse rotated about L)
Fiber length                      0.25 (P + √(P² − 16A))
Fiber width                       0.25 (P − √(P² − 16A))

Indirect Measurements. Beyond using multiple selection criteria to screen features, it also is possible to use a combination of descriptors to make a measurement that cannot be made directly: the average thickness of curved fibers whose length is much greater than their thickness, for
example. While it may be possible to write an intelligent program to
follow the fibers and measure their thickness at intervals along the length,
there is an easier way.
Because the length is much greater than the width, only two sides
account for nearly the entire perimeter; the ends of the fibers are
insignificant. Therefore, the perimeter divided by two gives the length of
one side. The area is a direct measurement, so the area divided by the
length of one side gives the average width, while ignoring variations in
thickness. This is an example of using easily taken measurements to
calculate a measurement that is not so easily made. Furthermore, it does
not involve Ferets, which can be misleading on curving features. The
derived descriptors fiber length and fiber width in Table 2 are applicable
to this situation.
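A minimal sketch of this indirect width measurement alongside the closed-form fiber descriptors from Table 2; names are illustrative:

```python
import math

def fiber_dimensions(area, perimeter):
    """Approximate length and mean width of a long, thin (possibly
    curved) fiber from its area and perimeter. The quick estimate
    ignores the fiber ends: length ~ P/2 (two long sides dominate the
    perimeter), width ~ A/length. The closed forms are the Table 2
    fiber descriptors (rectangle of the same A and P)."""
    length = perimeter / 2.0
    width = area / length
    root = math.sqrt(max(perimeter**2 - 16.0 * area, 0.0))
    exact_length = 0.25 * (perimeter + root)
    exact_width = 0.25 * (perimeter - root)
    return length, width, exact_length, exact_width
```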
The ability of the user to define his or her own descriptors can be useful
in ways other than making indirect measurements as previously described. It is possible that two or more primitive descriptors can be
combined into a user-defined parameter that relates to material process or
properties. In this case, a computer-based AIA system is virtually a
necessity because the computer can make the tens of thousands of
measurements while the investigator sorts through the data to select those
that show significant correlation with properties.
Distributions of Measurements. One of the greatest advantages of an
automatic image analysis system over manual methods is the capability to
measure numerous parameters on every feature in a microstructure. With
all of these data stored, it is tempting to report the complete distribution
of many feature descriptors. It is beyond the scope of this chapter on
measurements to delve into the statistics of various types of distributions.
However, it should be reemphasized that field measurements taken on
many fields often better characterize a material than do lists of feature
measurements; in other words, "do more less well"!
Reporting feature-specific measurements in a list or a plot must be done
carefully to assure that the report captures the essential characteristics of
the distribution. In particle-size analysis, for example, a cumulative
distribution commonly is used. However, if the complete distribution
becomes unwieldy, three parameters can be used instead: mean diameter,
standard deviation, and number of particles per unit volume (Ref 22).
Plots of shape distributions are more difficult to interpret than those based
on size. Shape measurements, such as aspect ratio and circularity, even
when reported numerically, do not necessarily correspond intuitively to
the physical processes that produced the features. Moreover, the same
data may appear to be different when plotted using a different shape
measure or on a different type of plot, such as a logarithmic graph.

In other cases, such as measurements of duplex grain size described
above, a weighted distribution is appropriate. A typical distribution of
grain size is log normal, but there are grain-size distributions that are
significantly different. A few large particles among hundreds or more
small ones can have a significant effect on material properties, yet they
may be lost in a histogram consisting of a count of features versus size.
If the computer software calculates a median value, it is informative to
compare the value with the mean value to look for a distribution that is
skewed. A more complete treatment of statistics in image analysis and
types of distributions are given in Chapter 7, Analysis and Interpretation and Chapter 8, Applications. Nevertheless, before data can be
analyzed statistically, the correct measurements must be made on an
image acquired using a suitable contrast mechanism.
The decision about what to report and how to illustrate the data requires
care on the part of the operator. However, after these parameters are
established, the capability of an AIA system to locate and measure
thousands of features rapidly in many instances provides a savings in time
and an increase in reproducibility over manual methods. Comparison
chart methods may be fast, adequately precise, and free of bias for some
measurements, but for others, it might be necessary to determine the
distribution of feature-specific parameters to characterize the microstructure.
The following example illustrates a routine image analysis. Figure 13
shows a microstructure consisting of ferrite in a plain carbon steel.

Fig. 13  Microstructure of plain carbon steel consisting of ferrite grains
Because the image consists only of space filling grains, the measurement
of area fraction and mean free path are not meaningful. However, in a real
digital image, the grain boundaries comprise a finite number of pixels.
Therefore, the measured area fraction is less than one, and the calculated
mean free path is greater than zero. Even when image processing is used
to thin the boundaries, their fraction is still not zero, and the reported
results should consider this. Tables 3 and 4 show typical field and feature
measurements, respectively, made on a binary version of the image in
Fig. 13.

Table 3  Field measurement report

Measurement                          Average
Field area, μm²                      365,730.8
Total features                       1,538
Total intercepts                     15,517
Area of features, μm²                252,340.9
Field NA, mm⁻²                       4.2
Field NL, mm⁻¹                       63.7
Average area of features, μm²        164.1
Average length of features, μm       14.7
Units per pixel, μm                  1.5

NA, number of features divided by area; NL, lineal density

Table 4  Feature measurement report

                                 Measurement, μm
Feature                  Average     Median     Minimum     Maximum
Area                     164.07      103.74     5.79        2,316.15
Perimeter                48.12       41.45      12.72       353.23
x Feret                  14.58       13.52      3.00        66.08
y Feret                  14.67       12.01      3.00        75.09
Max horizontal chord     13.42       12.01      3.00        60.07
Longest dimension        8.33        16.06      5.40        87.08
Breadth                  11.78       10.51      3.00        60.25
Average diam             15.43       13.62      5.15        73.97
Convex perimeter         48.47       42.80      16.17       232.37
Area equivalent diam     54.30       12.84      11.49       4.48
Form factor              0.75        0.77       0.18        1.23
Circularity              2.06        1.93       1.28        12.57
Mean x intercept         9.18        8.26       1.93        36.83
Mean y intercept         9.22        8.23       1.75        36.67
Roughness                0.97        0.97       0.79        1.52
Volume of sphere         2,228.30    794.87     47.18       83,851.82
Aspect ratio             1.53        1.43       1.00        8.80

Standard Methods

Throughout this chapter, standard procedures have been cited where appropriate. These procedures are the result of consensus by a committee of experts representing various interested groups, and as such, they are a good place to start. A list of various relevant standards from ASTM is given in Tables 5 and 6. International Standards Organization (ISO) and other national standards agencies develop standards that relate to image analysis, but ASTM standards for materials have been developed over a period of 100 years, and often form the basis for standards published by other agencies. Some of the standards listed in Tables 5 and 6 deal directly with how to make measurements of, for instance, grain size and inclusions. Others, such as microscope magnification calibration, indirectly affect measurements. Still others cover the subject of microscopy of various materials and specimen preparation. Although these do not prescribe specific measurements, they are essential as the first step to valid image analysis.

Table 5  ASTM measurement- and materials-related standards

ASTM No.    Subject

Measurement standards
E 562       Manual point counting
E 1122      Inclusion rating by AIA
E 1382      Grain size by AIA
E 1181      Duplex grain size
E 1245      Inclusions by stereology
E 1268      Degree of banding
B 487       Coating thickness

Material standards
C 856       Microscopy of concrete
D 629       Microscopy of textiles
D 686       Microscopy of paper
D 1030      Microscopy of paper
D 2798      Microscopy of coal
D 3849      Electron microscopy of carbon black

AIA, automatic image analysis

Table 6  Microscopy-related standards

ASTM No.    Subject
E 3         Specimen preparation
E 766       SEM magnification calibration
E 883       Reflected-light microscopy
E 986       SEM beam size characterization
E 1351      Preparation of replicas
E 1558      Electropolishing
F 728       Microscope setup for line width

SEM, scanning electron microscope

References
1. Standard Test Method for Determining Volume Fraction by Systematic Manual Point Count, E 562, Annual Book of ASTM Standards,
Vol 03.01, ASTM, 1999, p 507
2. E.E. Underwood, Quantitative Metallography, Metallography and Microstructures, Vol 9, ASM Handbook, ASM International, 1985, p 123
3. Standard Practice for Assessing the Degree of Banding or Orientation of Microstructures, E 1268, Annual Book of ASTM Standards,
Vol 03.01, ASTM, 1999, p 780
4. E.E. Underwood, Quantitative Stereology, Addison-Wesley, 1970
5. J.I. Goldstein, D.E. Newbury, P. Echlin, D.C. Joy, A.D. Romig, C.E.
Lyman, C. Fiori, and E. Lifshin, Scanning Electron Microscopy and
X-ray Microanalysis, Plenum, 1992, p 264
6. E.E. Underwood and K. Banerji, Quantitative Fractography, Fractography, Vol 12, ASM Handbook, ASM International, 1987, p 193
7. E.E. Underwood, Practical Solutions to Stereological Problems,
Practical Applications of Quantitative Metallography, STP 839,
ASTM, 1984, p 160
8. Standard Practice for Calibrating the Magnification of a Scanning
Electron Microscope, E 766, Annual Book of ASTM Standards, Vol
03.01, ASTM, 1999, p 614
9. M.P. do Carmo, Differential Geometry of Curves and Surfaces, Prentice-Hall,
1976, p 41
10. Stereology and Quantitative Metallography, STP 504, ASTM, 1972
11. G.F. Vander Voort, Grain Size Measurement, Practical Applications
of Quantitative Metallography, STP 839, ASTM, 1984, p 85
12. Standard Test Methods for Determining Average Grain Size, E 112,
Annual Book of ASTM Standards, Vol 03.01, ASTM, 1999, p 229
13. Standard Test Methods for Determining Average Grain Size Using
Semiautomatic and Automatic Image Analysis, E 1382, Annual Book
of ASTM Standards, Vol 03.01, ASTM, 1999, p 855
14. G.F. Vander Voort and J.J. Friel, Image Analysis Measurements of
Duplex Grain Structures, Mater. Charact., 1992, p 293
15. J.C. Russ, Practical Stereology, Plenum, 1986, p 124
16. E.E. Underwood, Treatment of Reversed Sigmoidal Curves for
Fractal Analysis, Advances in Video Technology for Microstructural
Control, STP 1094, ASTM, 1991, p 354
17. J.J. Friel and C.S. Pande, J. Mater. Res., Vol 8, 1993, p 100
18. G.L. Dirichlet, Über die Reduction der positiven quadratischen Formen mit drei unbestimmten ganzen Zahlen, J. für die reine und angewandte Mathematik, Vol 40, 1850, p 28
19. W.A. Spitzig, Metallography, Vol 18, 1985, p 235
20. G.F. Vander Voort, Evaluation of Clustering of Second-Phase Particles, Advances in Video Technology for Microstructural Control,
STP 1094, ASTM, 1991, p 242
21. J.C. Russ, Computer-Assisted Microscopy, Plenum, 1990, p 272
22. E.E. Underwood, Particle-Size Distribution, Quantitative Microscopy,
McGraw-Hill, 1968, p 149

JOBNAME: PGIAspec 2 PAGE: 1 SESS: 24 OUTPUT: Thu Oct 26 15:09:28 2000

CHAPTER 6

Characterization of Particle Dispersion

Mahmoud T. Shehata
Materials Technology Laboratory/CANMET

IMAGE ANALYSIS is used to quantitatively characterize particle dispersion to determine whether, and to what extent, particle dispersion is homogeneous or inhomogeneous. Particle dispersion, among other factors such as volume fraction, size distribution, and shape, has a pronounced effect on many mechanical properties, such as fracture toughness (Ref 1–4) and sheet metal formability (Ref 5). Particle dispersion in this case refers to the dispersion of nonmetallic inclusions, precipitates, and second-phase particles in an alloy or a metal-matrix composite material. In all cases, the arrangement, size, and spacing of particles in space determine material properties; that is, it is the local volume fraction of the particles rather than the average value that determines properties (Ref 6). Examples of ordered, random, and clustered dispersions are shown in Fig. 1. A clustered dispersion of inclusions in a steel results in a lower fracture toughness than an ordered dispersion does (Ref 3). This can be explained in terms of the significant role of inclusions in void and/or crack initiation (Ref 7). It is the distribution of these voids or microcracks that determines the relative ease of void coalescence and the accumulation of damage up to a critical local level that causes failure. Therefore, to model the mechanical properties of a steel containing a dispersion of inclusions (or any material containing second-phase particles), it is necessary to develop techniques that quantitatively describe the characteristics of the dispersion in terms of local volume fractions and local number densities rather than overall values. For mineral dressing, on the other hand, it is important to know the local volume fractions of inclusions in the host grains rather than average values to assess a mineral dressing operation (Ref 8).


Five techniques used to characterize the dispersion of particles are described in this chapter. The examples that illustrate the techniques mainly are for inclusions in steels, but the techniques are equally applicable to any material containing second-phase particles. The degree of clustering in the dispersion is quantified for each technique. The techniques are presented in order of complexity:

1. A technique based on number density distribution measurements, sometimes referred to as the sparse sampling technique or grid/quadrant counting (Ref 9)
2. A technique based on nearest-neighbor spacing distribution measurements (Ref 10–12)
3. A technique called dilation and counting, which involves successive dilations and counting of the number of merging particles after each dilation step until they are all merged
4. A technique based on the construction of a Dirichlet network at mid-distance between particle centroids (Ref 13–15)
5. A technique that leads to the construction of a network (very similar to a Dirichlet network) at the mid-edge-to-edge particle spacing rather than the centroid spacing. The procedure is based on the conditional dilation utility of the image analyzer, which essentially is a continued particle dilation, but a boundary is set up where the dilating particles meet. Sometimes the conditional dilation is called inverse thinning, where the inverse binary image (background) is thinned down to a line.

Techniques 3, 4, and 5 are based on near-neighbor spacings rather than the nearest neighbor alone.

The results obtained for a particle dispersion by all five techniques are compared in each case with results for ordered, random, and clustered dispersions. In addition, the techniques are evaluated, and their usefulness and limitations are discussed.

Fig. 1  Point patterns showing different point dispersions. (a) Ordered. (b) Random. (c) Clustered


Number Density Variation Technique


The procedure involves counting the number of particles per unit area, N_A, in successive locations, or fields. For example, the number of inclusions is counted in various locations on a polished section of a steel sample (Fig. 2). This procedure is easily carried out using an automatic image analyzer equipped with an automated stage, so that measurements on successive locations of a large sample are made automatically. A quantitative measure of the degree of inhomogeneity of the dispersion is the standard deviation, \sigma, defined as:

\sigma = \left[ \frac{1}{n} \sum_{i} \left( N_{A_i} - \bar{N}_A \right)^2 \right]^{1/2}    (Eq 1)

where N_{A_i} is the observed number of inclusions per unit area in the ith location (field of view), n is the number of fields, and \bar{N}_A is the average number of particles per unit area viewed on the sample. Maximum homogeneity is characterized by a minimum standard deviation; thus, the degree of homogeneity increases with a decreasing value of standard deviation. To compare the relative homogeneity of samples with different number densities, the standard deviation must be normalized by the value of the mean, giving the coefficient of variation, V, defined as V = \sigma / \bar{N}_A.

Fig. 2  Nonmetallic inclusions in steel. Inclusion dispersion is a combination of random and clustered dispersions.


It should be noted that the number density technique has an important limitation: the measured variation may depend on the area of the measuring field relative to the area, or size, of the clusters. For example, if the field area is so large that it contains a number of clusters (as in Fig. 2), it can be anticipated that the standard deviation will be very small. In fact, such a large field size does not allow the variation in number density from clustered regions to nonclustered regions to be detected, because both regions are contained in the measured field. In other words, a large measuring field implies a quasi-homogeneous distribution. The effect of field size is shown in the following table, where the coefficient of variation of number density measurements is given as a function of the measuring field for three different samples (Ref 6).
          Field area, mm²
Sample    0.96      0.24      0.06      0.015
1         0.23      0.34      0.51      0.66
2         0.23      0.38      0.62      0.81
3         0.31      0.49      0.69      0.84

Note that the coefficient of variation increases as the area of the measuring field decreases. This suggests that the field area should be about as small as the area of a typical cluster if the number density variation technique is to sense clusters. For example, the field area used for the inclusion dispersion in Fig. 2 should not be larger than the size of a typical cluster, approximately 150 by 100 μm.
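The field-size dependence is easy to explore numerically. The following sketch is a minimal illustration, assuming NumPy is available and that particle centroids have already been extracted as x-y coordinates; the function name and arguments are hypothetical, not part of the original procedure. It divides a square sample into square fields, counts particles per field, and returns the coefficient of variation V based on Eq 1.

    import numpy as np

    def coefficient_of_variation(centroids, sample_size, field_size):
        """Grid/quadrant counting: count particles per square field and
        return V = (standard deviation) / (mean) of the per-field counts."""
        xy = np.asarray(centroids, dtype=float)
        n_fields = int(sample_size // field_size)
        # histogram2d bins every centroid into its field
        counts, _, _ = np.histogram2d(
            xy[:, 0], xy[:, 1],
            bins=n_fields,
            range=[[0, n_fields * field_size]] * 2)
        return counts.std() / counts.mean()

Calling the function with successively smaller field_size values reproduces the trend in the table above: V rises as the field shrinks toward the size of a typical cluster.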

Nearest-Neighbor Spacing Distribution


Nearest-neighbor spacing is easily measured using many automatic image analyzers. This is achieved by determining the x and y coordinates of all particle centroids in the dispersion and calculating the distances between all pairs of particles. In a dispersion of N particles, the total number of paired spacings is N(N − 1)/2. However, only one spacing for each particle is the smallest spacing; therefore, there are N nearest-neighbor spacings in a dispersion of N particles. A frequency distribution of these N spacings can be determined readily using an image analyzer; Fig. 3 shows an example, again for the inclusions in the steel sample of Fig. 2.
To characterize particle dispersion using this technique, comparisons usually are made with random distributions. A random distribution is assumed to be a Poisson distribution, shown in Fig. 3. In this case, the probability distribution function of the nearest-neighbor spacing, \Delta, is given by:

F(\Delta) = 2\pi \Delta N_A \exp\left( -\pi N_A \Delta^2 \right)    (Eq 2)


The expected mean, E(\Delta), and expected variance, E(s^2), are:

E(\Delta) = \frac{1}{2\sqrt{N_A}}    (Eq 3)

E(s^2) = \frac{4 - \pi}{4\pi N_A}    (Eq 4)

It is possible to characterize the dispersion by comparing the mean nearest-neighbor spacing and its variance with the expected values obtained for a random distribution (Ref 12). Based on the ratio Q of the observed to the expected (for random) mean nearest-neighbor spacing and the ratio R of the observed to the expected variance of the nearest-neighbor spacing, the following comparisons can be made:
Distribution                          Q value        R value
Random dispersion                     1              1
Ordered dispersion                    2 > Q > 1      0 < R < 1
Clustered dispersion                  0 < Q < 1      0 < R < 1
Clusters in a random background       Q < 1          R > 1

For example, the comparison shown in Fig. 3 indicates that the observed distribution is composed of clusters superimposed on a random dispersion (Q = 0.8 and R = 2.83).
The nearest-neighbor spacing technique can be very useful in describing the observed dispersion as being ordered, random, clustered, or

Fig. 3  Comparison of measured nearest-neighbor spacing distribution of inclusions in a steel sample and a Poisson distribution


composed of clusters superimposed on a random dispersion. However, it is not a very sensitive technique for quantifying the extent of clustering, because it is based on the nearest-neighbor spacing only. In addition, the technique does not provide any description of clusters as to their size and spacing; to obtain this information, all near neighbors should be considered, so that the surroundings of each point are taken into account. The simplest technique that achieves this is the dilation and counting technique described in the following section.
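The Q and R ratios are straightforward to compute from centroid coordinates. The following is a minimal sketch, assuming NumPy is available; the function name and its field_area argument are hypothetical. It finds each particle's nearest-neighbor spacing by brute force and compares the observed mean and variance with the Poisson expectations of Eq 3 and 4.

    import numpy as np

    def q_r_ratios(centroids, field_area):
        """Return Q (mean ratio) and R (variance ratio) of observed
        nearest-neighbor spacings versus a random (Poisson) dispersion."""
        xy = np.asarray(centroids, dtype=float)
        n = len(xy)
        # all pairwise distances; O(N^2) memory is fine for typical fields
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        nn = d.min(axis=1)                  # the N nearest-neighbor spacings
        na = n / field_area                 # number density N_A
        expected_mean = 1.0 / (2.0 * np.sqrt(na))           # Eq 3
        expected_var = (4.0 - np.pi) / (4.0 * np.pi * na)   # Eq 4
        return nn.mean() / expected_mean, nn.var(ddof=1) / expected_var

For a dispersion like the steel sample discussed above, such a calculation would flag clusters in a random background through a Q below 1 combined with an R well above 1.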

Dilation and Counting Technique


Most digital image analysis systems have the capability to erode and dilate features detected in an image. Because the detected image is a binary image, erosion or dilation simply removes or adds, respectively, one pixel all around the detected feature. The dilation capability, together with the capability of programming the image analysis system, is used to characterize particle dispersion in a procedure called the dilation and counting technique. When any two particles are successively dilated, the particles start to merge and overlap when the amount of dilation is equal to, or greater than, one half the clear spacing between the two particles. When particles overlap, they appear in the binary image as one larger particle and are counted as one larger feature.
The procedure involves successive dilation and counting of the number of features after each dilation step until all the features are joined and become one feature. The procedure is shown schematically in Fig. 4 for a random dispersion of points. Results are plotted as the loss in the number of counted features for each increment of dilation, or cycle. This corresponds exactly to a frequency-versus-spacing distribution for the particle dispersion. Figure 5 shows the results of the procedure for the ordered, random, and clustered dispersions in Fig. 1.
For the ordered dispersion, a peaked distribution is centered around a spacing D = 1/\sqrt{N_A}. In fact, for a perfectly ordered hexagonal grid of points, the distribution is a delta function at exactly the D spacing. For the random dispersion shown in Fig. 1, the frequency spacing distribution is roughly constant up to a value of 2D, where it starts to decrease rapidly at

Fig. 4  Schematic of dilation procedure for a random point pattern after (a) 5, (b) 10, and (c) 15 dilation cycles


Fig. 5  Frequency versus spacing distribution results of dilation and counting procedure for point patterns in Fig. 1. (a) Random. (b) Ordered. (c) Clustered


higher spacings. For the clustered dispersion, on the other hand, the frequency spacing distribution has a large peak at small spacings and a small peak (or peaks) at very large spacings. The first peak corresponds to spacings of particles inside the clusters, whereas the second peak(s) correspond(s) to spacings among clusters.
The dilation and counting technique is very useful for characterizing particle dispersion and clustering. It has been applied to reveal both the presence and degree of clustering in several aluminum-silicon alloy sheet materials in which dispersion and clustering are critical to sheet formability (Ref 5). Note that in this technique, the particle-to-particle spacing is the clear (edge-to-edge) spacing between particles and not the center-to-center spacing. The two spacings are very similar in the case of very small particles and very large spacings, as, for example, for inclusions in a steel sample. However, they are significantly different in the case of larger particles and smaller spacings, as, for example, for second-phase particles in a metal-matrix composite (Fig. 6).
The edge-to-edge spacing obtained using the dilation and counting technique has more relevance to modeling properties, because the technique looks at clear spacing and takes into account particle morphology (long stringers versus round particles). Particle morphology is ignored when center-to-center spacing is considered. A limitation of the dilation and counting technique is that it does not provide local volume fraction values, which are useful in modeling fracture. Such information can only be provided using the tessellation techniques described below.
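On a digital image, the whole procedure reduces to a loop of one-pixel dilations and connected-component counts. A minimal sketch follows, assuming SciPy is available and that binary is a two-dimensional boolean array of detected particles; the function name is hypothetical.

    import numpy as np
    from scipy import ndimage

    def dilation_and_counting(binary):
        """Dilate one pixel per cycle and record the feature count after
        each cycle until everything merges into a single feature."""
        img = np.asarray(binary, dtype=bool)
        _, n = ndimage.label(img)
        counts = [n]
        while n > 1:
            img = ndimage.binary_dilation(img)   # add one pixel all around
            _, n = ndimage.label(img)
            counts.append(n)
        # loss of features per cycle; cycle k corresponds to an
        # edge-to-edge spacing of roughly 2k pixels
        return [counts[k - 1] - counts[k] for k in range(1, len(counts))]

Plotting the returned losses against cycle number gives frequency-versus-spacing distributions of the kind shown in Fig. 5.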

Fig. 6  Microstructure of a metal-matrix composite containing larger particles and smaller spacings than a microstructure containing inclusions


Dirichlet Tessellation Technique


The Dirichlet tessellation is a geometrical construction of a network of polygons around the points in a point dispersion. The result of the tessellation process is the division of the plane into convex polygons, each surrounding a single point. Each polygon is uniquely defined by the positions of all near neighbors around the point. The polygons are constructed from the normal bisectors of all the lines joining the particular point to all neighboring points, as shown in Fig. 7 (Ref 9). Note that each polygon (called a Dirichlet cell) encloses the region of the matrix that is closer to its point than to any other point in the point dispersion. The cells are used to obtain information about the surroundings of each point in the dispersion, because all near neighbors are associated with the construction of the boundaries of a given cell. Near-neighbor distances, the number of near neighbors about each point, and the area and shape of the cells can be used to characterize the dispersion. In addition, comparisons can be made with a Dirichlet network constructed for a random dispersion of points.
To construct the Dirichlet network for a particle dispersion, the coordinates of the centroid and the area of each particle are recorded using an automatic image analysis system. A computer program, developed at Metals Technology Laboratories/CANMET (Natural Resources Canada), constructs the normal bisectors and assigns a unique area (Dirichlet cell) to each particle in the dispersion. The computer program takes care of edge effects (where cells around particles at the edge of the field are

Fig. 7  Geometrical construction of Dirichlet cells for a point array. Source: Ref 9


constructed) by considering particles in the field as well as particles in the neighboring fields.
Figure 8 shows the Dirichlet network for the inclusion dispersion in a steel sample. For comparison, a Dirichlet network is constructed for a random dispersion of points whose x and y coordinates are obtained from a random number generator. Figure 9 shows the Dirichlet network for the random dispersion, which has the same number of points per unit area as the steel sample. The area distribution of the Dirichlet cells

Fig. 8  Dirichlet network for inclusion dispersion in the steel sample in Fig. 2

Fig. 9  Dirichlet network for a computer-generated random dispersion corresponding to the steel sample in Fig. 8


constructed for the steel sample and the random dispersion is shown in Fig. 10. Note that the areas of the cells constructed for a randomly generated point dispersion follow a Poisson distribution (Ref 6). Therefore, comparisons can be made directly with the Poisson distribution without generating random dispersions.
An important advantage of the Dirichlet tessellation method is that it can be used to yield parameters that relate more directly to fracture properties, namely the local volume fraction of particles, which is equivalent to the local area fraction. The local area fraction takes into account two important parameters that relate directly to the fracture process: the size of the particle (inclusion or void), and the size of the Dirichlet cell, which gives an indication of near-neighbor distances and thus relates to crack propagation. The area fraction for each particle is calculated as the ratio of the area of the particle to the area of the cell around it. For this reason, the areas of the particles are recorded together with the coordinates of the centroids, and the computer program was expanded to obtain the local area fraction for the inclusion dispersion.
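The CANMET program itself is not reproduced here, but the same quantity can be estimated with standard open-source tools. The sketch below is a minimal illustration assuming SciPy is available and that centroids and particle areas come from a prior measurement step; the names are hypothetical. It builds a Voronoi (Dirichlet) network and computes the local area fraction of each particle whose cell is fully bounded; simply skipping unbounded cells is a crude stand-in for the edge correction described above.

    import numpy as np
    from scipy.spatial import Voronoi

    def local_area_fractions(centroids, particle_areas):
        """Local area fraction = particle area / area of its Dirichlet cell."""
        vor = Voronoi(np.asarray(centroids, dtype=float))
        fractions = {}
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if not region or -1 in region:   # open cell at the field edge
                continue
            poly = vor.vertices[region]
            # shoelace formula for the polygon (cell) area
            x, y = poly[:, 0], poly[:, 1]
            cell = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
            fractions[i] = particle_areas[i] / cell
        return fractions

Comparing the distribution of these fractions with that of a randomly generated dispersion of equal density reproduces the kind of comparison shown in Fig. 11.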
In addition, local area fractions are also generated for random dispersions (Ref 6), and the local area fraction distributions for both the random dispersion and the steel sample are shown in Fig. 11. Note that the local area fraction distributions follow a log-normal distribution (approxi-

Fig. 10  Area distribution for Dirichlet networks for inclusion dispersion and corresponding random dispersion for the steel sample in Fig. 2


mately a straight line on the logarithmic probability plot in Fig. 11). The standard deviation and coefficient of variation, as represented by the slope of the line (Fig. 11), are higher for the steel sample than for the random case and are a measure of inhomogeneity. Another advantage of the Dirichlet tessellation method is that clustered regions, and the degree of clustering, can be identified from the local area fraction values of the Dirichlet regions for the whole inclusion dispersion. An example is shown in Fig. 12, where clustered regions are identified by comparing the local area fraction of each Dirichlet region with the average local area fraction for the particle dispersion shown in Fig. 8.
All the limitations of the Dirichlet tessellation technique stem from the fact that the network is based on particle centroids and, therefore, is not affected by either particle size or shape. This usually is acceptable where particles are round and very small compared with their spacings. A problem arises when particles are as large as, or larger than, the free spacing between them. For example, if a small particle lies close to a large particle whose size exceeds the free spacing between them, the cell boundary based on centroids will cut across the large particle. Therefore, for a metal-matrix composite like that shown in Fig. 6, cell boundaries must be constructed at the mid-edge-to-edge spacing rather than at the mid-spacing between centroids. This is achieved using tessellation by dilation.

Fig. 11  Local area fraction for inclusion dispersion and corresponding random dispersion for the steel sample in Fig. 2


Tessellation by Dilation Technique


Most image analyzers have a conditional dilation facility, which essentially is a continued particle dilation in which a boundary line is set up where the dilating particles meet. The result of this conditional dilation is the construction of a network of polygons very similar to the Dirichlet network, as shown in Fig. 13. The conditional dilation described above also can be described as a conditional erosion of the background

Fig. 12  Dirichlet network for the inclusion dispersion of the steel sample, as in Fig. 8, identifying clustered regions having local area fraction higher than the average local area fraction

Fig. 13  Tessellation by conditional dilation using a square grid for inclusion dispersion


(matrix); that is, the background is successively eroded, without ever removing its last pixels, until it is reduced to a line (skeleton). For this reason, the technique also is called the background erosion, background thinning, or background skeletonization technique. For simplicity, it is referred to here as the dilation technique. It should be noted that the cell boundaries produced by the dilation technique are constructed at the mid-edge-to-edge spacing between a particle and all near neighbors of that particle. Therefore, it does take into account particle size and shape, as discussed above, and this is one of the main advantages of the dilation technique. Figure 11 in Chapter 5, Measurements, shows an example of a tessellation network produced using this technique for clustered, ordered, and random point patterns.
It is arguable that the boundary constructed between any two particles is not the normal bisector line, as in the Dirichlet tessellation, but rather a number of line segments approximating the normal bisector, because the dilation follows a particular square, hexagonal, or octagonal grid. Therefore, boundary lines can be constructed only at particular angles, depending on the grid used. For a square grid, the boundary lines can be constructed only at angles in steps of 45° (0°, 45°, 90°, and 135°), and, therefore, the normal bisector (which can take any angle) is approximated by segments of lines at those particular angles, as shown in Fig. 13. This can result in polygons with irregular shapes. The shape irregularity decreases when a hexagonal or an octagonal grid is used, where line segments can be constructed every 30° or 22.5°, respectively. In any case, however, the area of the polygons constructed using the dilation technique is very similar to that constructed by the Dirichlet tessellation technique, and the resulting local area fraction (area of the particle divided by the area of the cell around it) is also very similar. In addition, the situation improves significantly when the particles are larger and the spacings smaller, as in the case of metal-matrix composites, for example (Fig. 6). In this case, the dilation technique becomes the most appropriate technique for measuring local area fractions in the particle dispersion.
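Conditional dilation (also known as the skeleton by influence zones, or SKIZ) can be approximated with common open-source tools. The sketch below assumes SciPy and scikit-image are available and that binary is a boolean particle image; the function name is hypothetical. It grows every labeled particle outward at equal speed by flooding the distance transform of the background, so the growing fronts meet at the mid-edge-to-edge boundaries.

    import numpy as np
    from scipy import ndimage
    from skimage.segmentation import watershed

    def tessellation_by_dilation(binary):
        """Grow each particle until fronts meet; returns a label image in
        which each cell is the influence zone of one particle."""
        binary = np.asarray(binary, dtype=bool)
        markers, _ = ndimage.label(binary)
        # distance from every background pixel to the nearest particle edge
        distance = ndimage.distance_transform_edt(~binary)
        # flooding the distance map from the particle markers places the
        # boundary lines at the mid-edge-to-edge spacing (the SKIZ)
        return watershed(distance, markers)

Dividing each particle's pixel count by the pixel count of its cell then gives the local area fraction directly, with particle size and shape taken into account.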

Fig. 14  Use of a measuring frame smaller than the image frame to overcome edge effects


One of the limitations of the dilation technique is the edge effect: polygons around particles at the edge of the field cannot be constructed correctly, because the particles present in neighboring fields are not taken into account. However, this is overcome by eliminating, in each field, the measurements for those particles lying at the edge of the field. This is achieved by making the measuring frame smaller than the image frame, as shown in Fig. 14; only particles inside the measuring frame are measured. The measuring frame can then be chosen so that only particles that have correct cells constructed around them are contained in it. In some cases, this can reduce the measuring frame significantly compared with the image frame; it then becomes a matter of measuring more fields to cover the sample, which is accomplished rapidly by automatic image analysis systems.

Conclusions
Characterization of particle dispersion by means of image analysis is described using five different techniques. The number density technique depends on field size; the field area should be as small as the area of a typical cluster if this technique is to sense particle clusters. The nearest-neighbor spacing distribution technique is used to characterize particle dispersions as ordered, random, or clustered, and provides some quantification. However, it is not very sensitive, because it considers only the nearest neighbor. The dilation and counting technique considers near neighbors and is very useful for characterizing particle dispersion. However, it does not provide local area fraction measurements, which can only be obtained using the tessellation techniques. Dirichlet tessellation is the most comprehensive technique for characterizing particle dispersion. It is based on the construction of a Dirichlet network at mid-distances between particle centroids. It provides local area fraction measurements and is used to identify clustered regions. Because it is based on particle centroids, its limitations appear when larger particles and smaller spacings are considered at the same time.

References
1. P. Poruks, D. Wilkinson, and J.D. Embury, Role of Particle Distribution on Damage Leading to Ductile Fracture, Microstructural Science, Vol 22, ASM International, 1998, p 329–336
2. M.J. Worswick, A.K. Pilkey, C.I.A. Thompson, D.J. Lloyd, and G. Burger, Percolation Damage Prediction Based on Measured Second Phase Particle Distributions, Microstructural Science, Vol 26, ASM International, 1998, p 507–514


3. J.D. Embury and G. Burger, The Influence of Microstructure on Toughness, Proc. of the 7th International Conf. on the Strength of Metals and Alloys, Vol 2, Pergamon Press, 1985, p 1893–1915
4. W.A. Spitzig, J.F. Kelly, and O. Richmond, Quantitative Characterisation of Second-Phase Populations, Metallography, Vol 18, 1985, p 235–261
5. A.K. Pilkey, J.P. Fowler, M.J. Worswick, G. Burger, and D.J. Lloyd, Characterizing Particle Distributions in Model Aluminum Alloy Systems, Microstructural Science, Vol 26, ASM International, 1998, p 491–496
6. M.T. Shehata and J.D. Boyd, Measurements of Spatial Distribution of Inclusions, Proc. of the World Materials Congress Symp. on Inclusions and Their Influence on Materials Behavior, ASM International, 1988, p 123–131
7. L.M. Brown and J.D. Embury, The Initiation and Growth of Voids at Second Phase Particles, Proc. of the 3rd International Conf. on the Strength of Metals and Alloys, Institute of Metals, 1973, p 164–169
8. W. Petruk, The Capabilities of the Microprobe Kontron Image Analysis System: Application to Mineral Beneficiation, Scanning Microsc., Vol 2, 1988, p 1247–1256
9. P.J. Diggle, Chapters 4 and 5 in Statistical Analysis of Spatial Point Patterns, Academic Press, 1983
10. E.E. Underwood, Chapter 4 in Quantitative Stereology, Addison-Wesley, 1970
11. S. DeVos, The Use of Nearest Neighbour Methods, Tijdschrift voor Econ. en Soc. Geografie, Vol 64, 1973, p 307
12. H. Schwartz and H.E. Exner, The Characterization of the Arrangement of Feature Centroids in Planes and Volumes, J. Microsc., Vol 129, 1983, p 155
13. P.J. Wray, O. Richmond, and H.L. Morrison, Use of Dirichlet Tessellation for Characterising and Modeling Non-Regular Dispersions of Second-Phase Particles, Metallography, Vol 16, 1983, p 39
14. B.N. Boots, The Arrangement of Cells in Random Networks, Metallography, Vol 15, 1982, p 53
15. I.K. Crain, The Monte-Carlo Generation of Random Polygons, Comput. Geosciences, Vol 4, 1978, p 131


CHAPTER 7

Analysis and Interpretation

Leszek Wojnar
Cracow University of Technology

Krzysztof J. Kurzydłowski
Warsaw University of Technology

MODERN MATERIALS science is centered on the relationships that link materials properties to their microstructures. These relationships, initially formulated in a qualitative way, have gradually changed into quantitative specifications, which are much more helpful in the design and selection of materials for a given application. These quantitative relationships, in turn, require quantitative descriptors of the relevant microstructural features of the materials.
The term quantitative description of the microstructure in the present context is understood as a set of numbers that define the density, size, shape, and spatial arrangement of the microstructural features. These numbers are based on the theory of geometrical probability and can be obtained from measurements carried out on the microstructural image. This Chapter discusses how to interpret these numbers.

Microstructure-Property Relationships
Before discussing microstructure-property relationships, it is necessary to develop appropriate theoretical models, which allow selection of suitable quantities for further analysis. A modern approach to materials engineering takes into account many different materials properties, including:
• Mechanical properties: these include yield and flow stress, ultimate strength, hardness, and fracture toughness


• Physical properties: these include electrical, magnetic, and thermal properties
• Chemical properties: these include corrosion resistance

Various physical and mathematical models are used to link each set of properties with relevant microstructural features. The assumed randomness of the microstructure implies that the models predict relationships that concern basically mean values (and eventually other statistical moments) of the measured parameters. Most relationships currently used in materials science are relatively simple and fall into one of three categories:

• Linear: Y = aX + b
• Square root: Y = a X^{1/2} + b
• Inverse square root: Y = b + a/X^{1/2}

In each of these, Y stands for a given materials property (such as hardness, thermal conductivity, or corrosion resistance) and X is an appropriate microstructure descriptor (such as volume fraction of the strengthening particles, density of dislocations, or mean grain size). More precise discussion of the models of microstructure-property relationships is beyond the scope of this Chapter. However, the following example illustrates how such a model can be linked with the desired material microstructural characteristics and properties.
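Because each of the three forms is linear in a transformed variable, fitting one to measured data is a one-line least-squares problem. The sketch below assumes NumPy and uses hypothetical grain-size/yield-stress data invented purely for illustration; it fits the inverse square root form Y = b + a/X^{1/2}.

    import numpy as np

    # hypothetical data: mean grain size X (um) and yield stress Y (MPa)
    X = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
    Y = np.array([420.0, 340.0, 290.0, 250.0, 225.0])

    # Y = b + a/sqrt(X) is linear in 1/sqrt(X), so ordinary least squares
    # on the transformed variable recovers a and b
    a, b = np.polyfit(1.0 / np.sqrt(X), Y, 1)
    print(f"Y = {b:.0f} + {a:.0f}/sqrt(X)")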
One of the earliest structure-property relationship models (Fig. 1) concerns two extreme cases of phases distributed in parallel and in series (denoted by A and B, respectively, in Fig. 1) and is related to the mechanical properties of the material.
The case of parallel distribution is almost perfectly realized in practice when loading a fiber-reinforced composite in a direction parallel to the fiber orientation. Maximum tensile strength is proportional to the content of each phase under the condition that the stiffness of both phases is similar. The same model applied to real fiber-reinforced composites yields lower values of maximum tensile strength (dashed line in Fig. 1). The explanation for this decrease in strength is that the matrix usually is less stiff than the fibers and is not heavily loaded. The fibers in the composite can withstand a tensile stress approximately equal to their strength. By comparison, the matrix is designed to provide sufficient stiffness and fracture toughness to the composite as a whole. As a result, composite properties fall between the properties of either the fiber or matrix alone. To predict these properties, it is necessary first to establish the volume fraction (amount) of the constituent phases (fibers and matrix).
In the case of series distribution, a lamellar structure, for example, mechanical properties are determined by the weakest segment, and there is no dependency on the volume fraction of the constituent phases. In practice,


the concept of a lamellar structure can be applied to improve the properties of large structures; alternating thin layers of ductile and strong material significantly improve the total ductility without weakening the construction.
Most cases deal with intermediate types of microstructures. Two distinct, representative examples are isolated precipitates embedded in a matrix of the major phase (denoted by C in Fig. 1) and a continuous network of one phase along the boundaries of the major phase (denoted by D in Fig. 1). A combination of these simple geometrical forms can produce a large variety of structures with a correspondingly wide variety of properties. For example, a continuous network can lead to a tensile strength very similar to that of materials having a lamellar structure.
Consider using this model to explain the tensile and fracture properties of ferritic ductile iron. Silicon-ferrite, used for transformer cores, serves as a reference material because its chemical composition is very similar to that of the matrix of the ferritic ductile iron (Ref 1). During tensile loading, ductile iron can be considered as a fiber-type material (Fig. 2), so its ultimate and yield strengths should be proportional to the volume fraction

Fig. 1  Model of relationship between the structure of a two-phase material and its mechanical properties


of the matrix. The tensile and yield strengths of the ductile iron considered here (the volume fraction of the graphite is 11%) should be approximately 89% of the corresponding strengths of silicon-ferrite, which is confirmed (Fig. 3) over a wide range of temperatures.
The two phases in question have entirely different ductilities, which also can be explained using the model presented in Fig. 1. Assume the fracture process is controlled by the energy absorbed in the material during its deformation. In nodular cast iron, the graphite nodules are very weakly bonded to the metallic matrix and, therefore, behave almost like pores. These pores break the metallic matrix into elements whose sizes are defined by the distance between neighboring nodules, as illustrated in Fig. 2 (Ref 2). This distance is equal to the mean free path between graphite nodules (mean free path is described in Chapter 2, Introduction to Stereological Principles).
Based on the hypothesis mentioned above and theoretical considerations lying outside the scope of this text, lower and upper limits for the relationship between the mean free path and fracture toughness (measured by the J-integral) have been established. Almost all the experimental data available in the literature at the time of the study fit nicely between these theoretically calculated bounds, as illustrated in Fig. 4.

Fig. 2  Model of ductile iron subjected to tensile stress


To summarize, the properties analyzed are related to different microstructural quantities. The tensile strength is sensitive to changes in the graphite volume fraction, but shows no correlation with the interparticle distance. In the case of fracture toughness, the opposite is observed. The model presented allows a better understanding of the processes analyzed and a quick choice of the appropriate parameters to be measured. It is important to note that a model of the material being analyzed is helpful in explaining the properties of the material, as it quickly shows which parameters are relevant and which are not. To obtain a quantitative description of the structure of a material, even a very simplistic model forms a better basis for solving problems than a no-model

Fig. 3  Effect of temperature on tensile properties of Si-ferrite and ductile iron. Source: Ref 2

Fig. 4  Effect of graphite mean free path on the fracture toughness of ferritic ductile iron


approach offers. More realistic models can be built up based on the results of further investigations.

Essential Characteristics for Microstructure Description


Describing a microstructure quantitatively leads to precise answers to qualitative questions; that is, which material of several options is better, or which material is adequate for use in a given application. The best way to answer these questions is to describe the microstructure in terms of appropriate numbers.
Generally, in microstructural analysis, no two microstructures are identical, but sufficiently similar microstructures represent practically identical material properties. The question is: what is sufficiently similar? This requires the development of appropriate microstructural measurements, or descriptors, that meet the following minimum requirements: they must be sensitive to changes in the microstructural features responsible for a given property, they should be capable of describing the representative piece of material in general terms, and they should be precisely defined and as simple as possible for practical application and interpretation.
As illustrated previously, microstructure-property relationship models and a set of several quantities (a single one usually is insufficient) are necessary to describe a microstructure. Precise description of single features (grains, for example) produces large data sets, which can be useful for advanced theoretical considerations but not for routine quality control or process optimization.
A large set of stereological measures has been developed that satisfies the above requirements. Taking into account only the most basic set of so-called integral parameters that describe geometrical properties of a system of microstructural elements (e.g., second-phase particles) yields ten descriptors (Ref 3):
• V_V: volume fraction, the total volume of the features analyzed per unit volume of material
• S_V: specific surface area, the total surface area of the features analyzed per unit volume of material
• L_V: specific length, the total length of the lineal features analyzed per unit volume of material
• N_V: numerical density, the mean number of features analyzed per unit volume of material
• A_A: area fraction, the total area of intercepted features per unit test area of a specimen
• L_A: the total length of the lineal features analyzed per unit test area of a specimen


• N_A: surface density, the mean number of interceptions of features per unit test area of a specimen
• L_L: lineal fraction, the total length of lineal intercepts analyzed per unit length of a test line
• N_L: lineal density, the number of interceptions of features per unit length of a test line
• P_P: point fraction, the mean number of point elements or test points in areal features per test point

V, S, A, L, N, and P are volume, surface area, planar area, length, number of features, and number of points, respectively. The subscripts unequivocally denote the reference space. Image analysis systems offer their own sets of parameters, often different from the set used in classical stereology.
Most image analysis programs allow for individual quantification of all specific two-dimensional (2-D) figures (cross sections of three-dimensional, or 3-D, microstructural features) visible in the image under consideration. Figure 5 schematically illustrates the basic quantities that can be measured, or rather, computed (Ref 4).
Figure 5(a) shows the initial form of a single particle. This is next converted into binary (black and white) form (Fig. 5b), which is useful for subsequent measurements. The most natural measurement, easily and quickly evaluated using computerized tools, is the particle surface area, which is measured by simply counting the pixels forming the particle in the binary image. Another frequently used measure in quantitative analysis is the particle perimeter (Fig. 5c). Unfortunately, the digital, discontinuous nature of computer images implies large errors in the evaluation of the perimeter, which should be taken into account by the user. Fast, accurate measurements are the so-called Feret diameters, which characterize the outer dimension of the particle; the horizontal and vertical Feret diameters are shown in Fig. 5(d). Somewhat more difficult to quantify is a user-oriented Feret diameter (Fig. 5e), which yields a set of two numbers describing length and orientation angle. Figure 5(f) indicates the maximum particle width, which is obtained by doubling the maximum value of the so-called distance function; in a distance function, any pixel has a value proportional to its distance from the particle edge. Among the more advanced measurements is the very important maximum Feret diameter (Fig. 5g). In the case of a convex particle, this characteristic is identical to the maximum intercept length. This measurement also is characterized by two numbers describing length and orientation angle. For analysis of spatial distribution and inhomogeneity, the coordinates of the center of gravity (Fig. 5h) can be very useful. The coordinates of the first point (Fig. 5i) are similarly useful; the first point is selected as the left-most pixel in the top row of particle pixels. A characteristic of a concave particle is obtained from the convex hull (Fig. 5j). Note that all the measures presented can be applied to the convex hull as well as to the


initial particle. Similar to the convex hull is the bounding rectangle (Fig. 5k), suitable for shape characterization. Figure 5(l) illustrates the number of holes. Other characteristics also are available, for example, the Euler number (suitable for convexity/concavity quantification) and deviation moments. These are the most common measurements, and most image analysis software can be used to obtain their values. Quantification of some of these measurements is very difficult, if not impossible, without the use of a computer.
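Most of the measures in Fig. 5 map directly onto the region properties exposed by common open-source packages. The sketch below is a minimal illustration, assuming SciPy and a recent scikit-image are available; the synthetic rectangular particle is invented purely for the example.

    import numpy as np
    from scipy import ndimage
    from skimage.measure import regionprops

    # synthetic binary image containing one rectangular particle
    img = np.zeros((64, 64), dtype=bool)
    img[20:40, 15:45] = True

    labels, _ = ndimage.label(img)
    p = regionprops(labels)[0]
    print(p.area)                # pixel count (Fig. 5b)
    print(p.perimeter)           # digital perimeter, error-prone (Fig. 5c)
    print(p.feret_diameter_max)  # maximum Feret diameter (Fig. 5g)
    print(p.centroid)            # center of gravity (Fig. 5h)
    print(p.bbox)                # bounding rectangle (Fig. 5k)
    print(p.euler_number)        # objects minus holes (Fig. 5l)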
One of the most important tasks during microstructure quantification is to develop appropriate links between the classical stereological parameters (which usually have a well-elaborated theoretical background) and the parameters offered by image analysis. Specific problems arise from the digital (discontinuous) nature of computerized images; examples of such application difficulties are measurements along curvilinear test lines (used in the method of vertical sections) and errors in perimeter evaluation.
Any microstructure should be described in a qualitative way prior to quantitative characterization, because the latter simply expresses the former in numbers. Otherwise, large sets of meaningless values are

Fig. 5  Basic measures of a single particle: (a) initial particle, (b) area, (c) perimeter, (d) and (e) Feret diameters, (f) maximum width, (g) intercept, (h) coordinates of the center of gravity, (i) coordinates of the first point, (j) convex hull, (k) bounding rectangle, and (l) number of holes


generated that are difficult to interpret. Every material structure consists of basic elements, called structural elements, such as fibers in a composite, chemical compounds, and small precipitates of phases. In practice, four characteristics should be described to fully characterize a microstructure (Fig. 6):
• Amount of all the structural constituents, with the following characteristics evaluated separately for each constituent
• Size (for example, of particles or their colonies)
• Shape (for example, of individual particles)
• Spatial distribution or arrangement (form) of particles over the material volume
Figure 6 shows a model of a microstructure and its modifications. Figure 6(b) shows the structure resulting from doubling the amount of

Fig. 6  Schematic of microstructures containing four basic characteristics of a microstructure: amount, size, shape, and arrangement


black phase. Note that there is no difference in size (even the size distribution is identical), shape, or arrangement (in both cases the arrangement is statistically uniform) between the images in Fig. 6(a) and (b). The structure in Fig. 6(c) is obtained by changing only the size of the objects, without altering their amount, shape, or arrangement. Changing the arrangement of the structure in Fig. 6(c) results in the structure shown in Fig. 6(d), which differs from Fig. 6(a) in the size and arrangement of the objects. Keeping the amount, size, and arrangement of the black phase in Fig. 6(a) and changing only the shape yields the result shown in Fig. 6(e). Finally, the structure shown in Fig. 6(f) differs from that in Fig. 6(a) in all the characteristics; that is, amount, size, shape, and arrangement.
The rationale for classification into the four basic characteristics presented above is not proven theoretically. However, in all cases known to the authors, characterization of the structure can be performed within the framework of amount, size, shape, and arrangement. The most important advantage of this simplification is that structural constituents usually require only four characteristics versus the host of parameters (such as those listed at the beginning of this Chapter) supplied by classical stereological methods, and the even more numerous parameters offered by contemporary image analysis. The following discussion provides some guidelines both on how to choose optimal parameter subsets to quantify a structure and on how to interpret the results obtained.

Parameters and Their Evaluation


Amount of Microstructural Constituents. The amount of three-dimensional elements, such as particles, can be effectively and precisely expressed in terms of volume fraction, which is easily computed from either binary (black and white) images or images having gray levels or colors unequivocally coupled with the structural features under consideration. Simply counting the number of pixels corresponding to the given color or gray level and dividing this number by the total number of pixels in the image yields the estimated volume fraction (see also Chapter 2, Introduction to Stereological Principles, and Chapter 5, Measurements). In contrast to the classical stereological approach, application of image-analysis tools does not require any additional grid of points or test lines.
Volume fraction, even if evaluated using computerized tools, is a stochastic (probabilistic) measurement. Proper sampling of the images and subsequent computation of the confidence level (CL) provide a complete characterization of the amount of second-phase particles. Note that this is the only characteristic of the microstructure that can be evaluated in a simple and complete manner regardless of any geometrical details of the structural elements. In other words, there is no obstacle to correctly


quantify the amount of microstructural constituents even if the phase distribution over the test material volume is highly inhomogeneous.
To avoid misinterpretation of the term amount, consider the following. Any material has a 3-D structure (even if available only in the form of extremely thin layers or fibers), which is described by 2-D or one-dimensional (1-D) geometrical models. Therefore, the term amount is used only to characterize 3-D microstructural constituents. Two-dimensional features (e.g., grain boundaries) and 1-D, or linear, features (e.g., dislocation lines) are characterized by means of their densities, S_V and L_V, respectively.
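As a concrete illustration of pixel counting, the sketch below (a minimal example assuming NumPy; the function name is hypothetical) estimates the volume fraction from a set of binary field images and attaches a simple 95% confidence interval based on the field-to-field scatter.

    import numpy as np

    def volume_fraction(fields):
        """Estimate V_V by pixel counting over several binary field images
        and return (mean, 95% confidence half-width)."""
        # per-field area fractions: phase pixels / total pixels
        vv = np.array([np.asarray(f, dtype=bool).mean() for f in fields])
        half_width = 1.96 * vv.std(ddof=1) / np.sqrt(len(vv))
        return vv.mean(), half_width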
Size of Structural Constituents. Although there is a clear intuitive understanding of what the term size means, measurement of size is not straightforward. The usual way to measure the size of 3-D objects is to measure their volume. However, other measurements, such as total surface area, mean section area, largest dimension, mean intercept length, and diameter (for a sphere) also can be used. Thus, in the case of size, choosing a proper parameter is not easy. Fortunately, there are guidelines for selecting the best size characteristic, which are summarized below.
The measurement used to quantify size must be adequate for the process model under consideration. For example, in the case of nodular cast iron, the amount of graphite is approximately stable, and (assuming a constant amount of graphite) its fracture toughness is linearly proportional to the mean graphite nodule diameter, which is a proper size parameter in this case (Ref 2). Application of the mean nodule volume in this analysis results in highly nonlinear relationships, which can easily lead to false conclusions. By comparison, the grain growth process in steel at high temperatures is controlled by the presence of small precipitates at the grain boundaries. In this case, characterizing grain size by the surface area of grain boundaries per unit volume, S_V, is better than characterizing grain size by any linear dimension of the grain.
Size measurements also should be sensitive to those changes in the microstructure that can affect the properties studied. For example, especially in the case of recrystallized materials, two entirely different materials could have the same mean value of grain size while their grain size distributions differ significantly. Thus, even if grain volume theoretically is the best measure of grain size, it should not be used in this case, because it is difficult to evaluate the volumetric distribution of grains (this requires serial sectioning and subsequent 3-D reconstruction). It is recommended in such a case to analyze section areas or intercept lengths, because distributions of these variables are easily obtained and compared. Further, the safer approach is to analyze the intercept length distribution, because the distribution of grain section areas is much more sensitive to errors in grain boundary detection (Fig. 7). The structure in Fig. 7 consists of grains having partially lost grain boundary lines, or segments, which are


denoted by gray circles. In principle (compare the Jeffries method described in Chapter 5, Measurements), there are 24 grains visible; however, due to the six lost segments, only 18 grains (24 − 6 = 18) are detected. Consequently, the relative error of the grain number evaluation is equal to 6/24, or 25%. Using intercepts instead, the relative error in their number (lost intercepts are marked as dashed lines) is equal to 5/98, or 5.1%. So the relative error in intercept counts is, in this case, approximately five times smaller. The difference can be smaller or greater for other structures. However, this does not change the general rule that the intercept number is less sensitive than the number of grains to lost boundary segments. This solution is a compromise, because the distribution of intercept lengths cannot be directly related to the distribution of areas of the grains analyzed. Nevertheless, it is sufficient for comparative purposes.
Image analysis software allows unbiased results to be obtained for selected measurements of size. For example, parameters related to individual particles should be measured only on particles totally included in the field of view. Measurements on particles touched by the image edge are incorrect, because they take into consideration only a part instead of the whole particle. Particles cut by the edge of the image can easily be removed. However, while this is an acceptable solution in the case of small particles, it can introduce large errors in the case of very large particles, because the probability of being cut by the image edge is proportional to the size of the particle section (this problem is analyzed in more detail later). Quantification of the size of particles or their colonies is relatively easy and efficient following the method described previously.
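Removing edge-cut particles before per-particle measurement is a one-liner in common packages. A minimal sketch, assuming SciPy and scikit-image are available (the function name is hypothetical):

    import numpy as np
    from scipy import ndimage
    from skimage.measure import regionprops
    from skimage.segmentation import clear_border

    def interior_particle_areas(binary):
        """Discard particles touching the image edge, then measure the
        rest, guarding against the bias described in the text."""
        interior = clear_border(np.asarray(binary, dtype=bool))
        labels, _ = ndimage.label(interior)
        return [p.area for p in regionprops(labels)]

For fields dominated by large particles, a measuring frame smaller than the image frame (as discussed in the previous chapter) is preferable to simple removal, because large sections are the most likely to be cut.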

Fig. 7  Illustration of the effect of lost grain boundary lines (denoted by circles) on the results of grain size quantification. The lost grain boundary segments significantly increase the mean section area (by 25%), whereas the mean intercept length (intercepts enlarged due to the absence of some grain boundaries are plotted using broken lines) is increased approximately 5%.


Shape Quantification and Classification of Particles. The main difficulty in shape quantification is the lack of a precise, universal definition of the term. Intuitively, it seems that any object can be described in terms of its shape and size. Shape can be interpreted as the property of an object that is not connected with its size, but shape often is very difficult to separate from size. For example, small buildings generally have a different shape than big buildings; the same observation is valid for cars, plants, and animals. Another difficulty is that a single number usually cannot describe shape. In contrast, amount and size are easily confined to a numerical description: volume fraction (describing amount) always falls in the range from 0 to 1 (0–100%), and the size of any microstructural constituent never exceeds the size of the test specimen. Because shape cannot be described with absolute precision, parameters called shape factors, which are sensitive to changes in shape, are defined instead. Shape factors should have the following common properties to correctly quantify a microstructure:
• Dimensionlessness, which keeps their values unaltered in the case of particles of the same shape but different size
• Quantitative descriptiveness, which shows how far a given shape deviates from a model, or theoretically ideal, shape (shape factors can measure, for example, circularity, elongation, compactness, concavity, or convexity)
• Sensitivity to the particular shape changes that occur in the process under consideration
It is impossible to define a universal shape factor applicable in all types of microstructure analysis. A sphere is a good reference shape for discussing the properties of shape factors, because it is the simplest geometrical model for 3-D objects. In images of test specimens containing spheres, the cross sections or projections of the spheres are circles. Therefore, numerous shape factors used in image analysis determine how much a given shape differs from a circle. The shapes observed in practice can deviate significantly from a circle, but finding a reference circle is intuitively easy, as illustrated by the series of sketches shown in Fig. 8. Elongation, irregularity, and composition are illustrated in Fig. 8(a), (b), and (c), respectively, and are discussed in more detail below.
Elongation, also known as aspect ratio, commonly is used to describe the shapes of particles after plastic deformation and can effectively be measured using the following shape factor (Fig. 9) (Ref 3, 5):

f_1 = \frac{a}{b}    (Eq 1)


where a and b are the length and width of the minimum bounding
rectangle, or a is the maximum Feret diameter, while b is the Feret
diameter measured perpendicular to it. These two sets of values used to
determine elongation can produce slightly different values of the shape
factor due to the digital nature of computer images.
Aspect ratio reaches a minimum value of 1 for an ideal circle or square
and has higher values for elongated shapes. Unfortunately, elongation is
not useful to assess irregularity, where all particles have an f1 value very
close to 1, despite the fact that the particles have profoundly different
shapes. This is illustrated in Fig. 8(b). Circularity, one of the most popular shape factors, offers a good solution to this situation:
$f_2 = \frac{L^2}{4\pi A}$ (Eq 2)

where L is the perimeter and A is the surface area of the analyzed particle
(Fig. 9). The f2 shape factor is very sensitive to any irregularity of the
shape of circular objects. It has a minimum value of 1 for a circle and
higher values for all other shapes. However, it is much less sensitive to
elongation.

Fig. 8

Three families of shapes originating from a circle: ellipses of various


elongation (top), shapes having various edge irregularity (middle), and
a combination of the two (bottom)


Quantitative characterization of the composition of different shapes in the structure is more complex. It can be treated as a mixture of elongation
and irregularity. A practical example of such a situation is the change in
the form of graphite precipitates during transition from nodular to flake
cast iron. This transition cannot be described effectively using any one of the shape factors presented previously. Very often in such circumstances, the weighted average of a few shape factors is applied. While it is possible to successfully manipulate the weights to obtain satisfactory results (i.e., obtain good correlation between the shape factor and properties), the result cannot be interpreted directly. So it is better, though also more difficult, to construct a new shape factor that has the capability to detect the necessary changes in shape.
When analyzing the shapes shown schematically in Fig. 8(c), it is noticeable that all these particles have approximately the same surface area, but when moving across the sequence from right to left, systematically larger objects can be drawn inside the particles. This observation leads to the definition of a new shape factor (Fig. 9) (Ref 4):
$f_3 = \frac{d_2}{d_1}$ (Eq 3)

where $d_1$ and $d_2$ are the diameters of the maximum inscribed and circumscribed circles, respectively. This shape factor works very well in
the case of complex deviations from ideal circularity. It is sensitive to
changes both in particle elongation and irregularity. However, keep in
mind that it is not possible to uniquely describe any shape by a single
number. Therefore, even the best shape factor available can only quantify
elongation or irregularity of a particle, which might be quite insufficient

Fig. 9  Basic measurements used to evaluate shape factors


for correct particle recognition and/or classification. Usually more than one parameter must be applied for the purpose of classification.
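To make the three factors concrete, the following minimal sketch computes f1, f2, and f3 for each particle in a binary image. It is an illustration under stated assumptions, not the authors' implementation: a and b are approximated by the fitted-ellipse axes, the inscribed-circle diameter comes from a distance transform, and the circumscribed-circle diameter is approximated by the maximum Feret diameter.

```python
# Hedged sketch of Eq 1-3 using numpy, scipy, and scikit-image.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import label, regionprops

def shape_factors(binary_image):
    """Return a list of (f1, f2, f3) tuples, one per particle."""
    factors = []
    for region in regionprops(label(binary_image)):
        # f1 (Eq 1): elongation, approximated by the fitted-ellipse axes
        # (degenerate, line-like regions would need extra guarding)
        f1 = region.major_axis_length / region.minor_axis_length
        # f2 (Eq 2): circularity L^2/(4*pi*A); 1 for a circle
        f2 = region.perimeter ** 2 / (4.0 * np.pi * region.area)
        # f3 (Eq 3): circumscribed/inscribed circle diameter ratio
        d1 = 2.0 * distance_transform_edt(region.image).max()  # inscribed
        d2 = region.feret_diameter_max     # ~circumscribed (approximation)
        factors.append((f1, f2, d2 / d1))
    return factors
```

All three factors come out at approximately 1 for a digitized circle and grow as the shape departs from it, in line with the discussion above.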
Classification of any object usually is successfully performed by
applying the rules of fuzzy logic. In binary logic, a particle can be regular
or irregular, whereas in fuzzy logic, the same particle can be, for example,
regular, irregular, somewhat regular, or nearly regular. Application of
fuzzy logic has some advantages, which are briefly outlined in this
section.
Use of fuzzy logic allows quantification of particles that do not fit a
selected assumed template, whereas binary logic allows only determination of whether a particle fits or does not fit the template. In addition, it
is easy to apply weighted averages or even more sophisticated functions
of various quantities for classification needs. Moreover, in the case of
fuzzy logic, the results always lie in the 0–100% range. So there is no difficulty in interpreting results; this is relatively close to the way a human would classify the results.
Figure 10 shows how application of the fuzzy logic works on a
collection of graphite particle shapes observed in various grades of cast
iron. Each particle in this figure is accompanied by two numbers: a value
of f3 shape factor and a circularity rating, computed using fuzzy logic and
expressed as a percentage. The four particles at the right side of Fig. 10
are recognized as fully circular, particles in the middle are rated as
partially circular, whereas graphite flakes (shown on the left side of Fig.
10) are judged as not circular. Classification with the help of fuzzy logic works somewhat like neural networks; classification rules are derived from test sets of data, but interpretation of the applied rules is not necessary for correct classification.

Fig. 10  Application of fuzzy logic to classify graphite particles. Upper numbers denote the value of the shape factor, and lower numbers (percent) indicate how well the particle fits as a circular shape.
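A fuzzy rating like the one in Fig. 10 can be produced with a simple membership function. The sketch below is purely illustrative, assuming a trapezoidal membership on the f3 shape factor with hypothetical cutoffs f3_lo and f3_hi; the text does not give the actual rule base used for the figure.

```python
# Hedged sketch: fuzzy "circularity" membership from the f3 shape factor.
def circularity_rating(f3, f3_lo=1.2, f3_hi=3.0):
    """Rate a particle as circular, in percent (100 = fully circular).

    f3_lo and f3_hi are illustrative cutoffs, not values from the text.
    """
    if f3 <= f3_lo:
        return 100.0          # e.g., graphite nodules: fully circular
    if f3 >= f3_hi:
        return 0.0            # e.g., graphite flakes: not circular
    return 100.0 * (f3_hi - f3) / (f3_hi - f3_lo)  # partially circular
```

Unlike a binary fits/does-not-fit rule, intermediate shapes receive intermediate ratings, which is exactly the advantage of fuzzy classification described above.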


Arrangement of Microstructural Constituents and Its Quantification. Arrangement quantification is discussed on the basis of examples
that later are summarized to get a more general understanding of the
problem it presents. The first question to answer when dealing with
arrangement is how do you quantify inhomogeneity? Note that in any
two-phase material observed at high magnification, there are distinct regions occupied only by one of the phases. At increasingly higher magnification, a point is reached where in most fields of view, only a single phase
is observed; the second phase is outside the field of view. Generally, single-phase materials can be treated as homogeneous, although they differ in degree of homogeneity; on a microscale, however, any two-phase material is highly inhomogeneous. (This intuitive meaning of homogeneity often is used in everyday life, especially in the kitchen, where many dishes are prepared by blending constituents into something homogeneous.)
The difficulty in quantifying homogeneity is somewhat similar to the
problem of shape characterization. In both cases, there is no clear
definition of the quantified characteristics and no clearly defined measurements, and, yet, both characteristics are crucial to define material
properties. The most commonly applied solution in quantification of homogeneity is dividing the microstructure into smaller parts, called cells, specified by their size. It is assumed that some global characteristics of microstructural features (e.g., particle volume fraction) should be
approximately stable over randomly selected cells. If a large variation in
the value of a selected characteristic is observed during the movement
from one cell to another, the material is judged as inhomogeneous at the
scale defined by the chosen cell size.
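As a concrete illustration, the sketch below divides a binary image into square cells and returns the per-cell area fractions; a large cell-to-cell spread flags inhomogeneity at that scale. It is a minimal sketch assuming a boolean phase image; the cell size is a free parameter, not a value prescribed in the text.

```python
# Hedged sketch of cell-based homogeneity testing.
import numpy as np

def cell_fractions(binary_image, cell=128):
    """Area fraction of the phase in each full cell x cell tile."""
    h, w = binary_image.shape
    fractions = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            fractions.append(binary_image[i:i + cell, j:j + cell].mean())
    return np.array(fractions)

# fr = cell_fractions(img); a large fr.std(ddof=1) relative to fr.mean()
# suggests inhomogeneity at the scale set by the chosen cell size.
```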
Quantitative characterization of other aspects of arrangement requires
similar individual, context-oriented analysis. For example, to determine
whether particles tend to concentrate on grain boundaries, detect the
boundaries and compare the amounts of particles lying on and away from
the grain boundaries.
To summarize, it is impossible, even from the theoretical point of view,
to fully characterize in a quantitative way all aspects of arrangement of
microstructural constituents. Partial solutions applicable to a limited
number of cases can be prepared on the basis of a thorough analysis of the
process history of the material being analyzed. The arrangement of microstructural constituents is one of the most important characteristics of a material's microstructure, even though its quantification is difficult to perform.
Advanced techniques for arrangement quantification are beyond the
scope of this text, and, therefore, only the most general characteristics of
this problem are outlined. Chapter 5, "Measurements," and Chapter 6, "Characterization of Particle Dispersion," describe other methods for quantification in simple problems related to distribution, such as orientation of structural constituents.


Sampling Strategy and Its Effect on Results


This book focuses on applications of image analysis related to materials
science; that is, analysis of images that reveal microstructural features of
solids. Image analysis can be carried out on images of any origin. The
basic requirement for these images within this context is that they
properly delineate size, shape, and spatial distribution of the microstructural elements of interest (e.g., second-phase particles). However, this
condition itself is insufficient to quantitatively describe a microstructure;
the images must be representative of the microstructure being studied.
The basic condition for mathematical interpretation of a microstructural
image is that it must provide a basis for unbiased characterization of the
material. The term "unbiased" means that any element of microstructure has the same probability of being recorded in the image, regardless of its size,
shape, and position. Detailed procedures used to ensure that this condition
is fulfilled are described in a number of texts on geometrical probability
and stereology. Although these procedures may vary depending on the
dimensionality of the microstructural feature studied, a general rule says
that images of microstructure should be taken randomly.
Sample selection and preparation are covered in detail in Chapter 3, "Specimen Preparation for Image Analysis," but it is worth mentioning important issues and concerns again here. Random selection of images
usually is realized via random selection of the observation field of the
microscope. However, stereological considerations show that randomly
selected fields are sufficient for unbiased sampling of 1-D and 2-D features (such as dislocations and grain boundaries, respectively), but cannot be used for 3-D elements. Such elements are revealed in images with a frequency influenced by the size of the elements. As discussed below, proper sampling of 3-D objects requires at least a pair of images.
Note that although images of microstructures are central to the process
of microstructural analysis, the process must not be reduced to image
processing of such images. Despite the progress made in computer-aided
image processing, successful interpretation of the results obtained via
image analysis requires careful design of the experimental procedure,
which starts with preparation of the specimen.
Typical Experimental Procedures. Characterization of the microstructure of a material usually proceeds along some general lines defined
by the steps described in Table 1 (Ref 6) and in Chapter 3, "Specimen Preparation for Image Analysis." Sample selection (random versus
systematic) and preparation and positioning of the microscope stage
(selection of observation fields) are of special interest and can be carried
out according to two basic schemes: random sampling of specimens and


images, and systematic sampling over a specified volume. The decision of which sampling scheme should be used always is in the hands of the person performing microstructure characterization. The following comments are offered to help make this decision in a thoughtful way.
Random sampling in most cases results in procedures that are less
labor and time consuming. This sampling method is sufficient for
statistical inference and estimation of averaged microstructural descriptors, such as grain size and volume fraction. Random sampling is
recommended for homogeneous materials.
"Random" is used here in terms of the probability of revealing a given microstructural element regardless of its size, shape, and location. It should be pointed out that under no circumstances can the criterion of random sampling automatically be reduced to random sampling of images. In fact, such a simplification is possible only in the case of simple parameters (for example, measurements of volume fraction), and even then additional restrictions are imposed.
The requirement of random sampling is an important, frequently
underestimated factor in the process of the microstructural characterization of specimens. Its importance sometimes is not properly addressed in
metallographic practice, which is based on the assumption that the
structure of a material that shows a random distribution of microstructural
elements can be sampled less carefully, and in the extreme case of a
totally random microstructure, any sampling is acceptable. However, the
microstructure of a material usually shows a considerable degree of order,
and specimens taken from the piece always should be selected randomly
or systematically. It should be pointed out that truly random sampling is
difficult and time consuming, as it requires random orientation of the
cross sections for microstructure observations. However, random sectioning is not required when using the vertical sectioning method (Ref 6, 7).
Systematic sampling provides more information than random sampling about a structure being studied. However, it usually is more labor intensive and time consuming. Such a procedure is necessary in the case of nonuniform material (characterized by a microstructural gradient) or if a 3-D reconstruction of the studied artifact (microstructural element) is desired.
Table 1  Steps of experimental procedure

Step | Action | Example
1 | Set the purpose of the study. | Explanation of particle strengthening
2 | Define the population of the relevant microstructural features. | Second-phase particles
3 | Define the parameters to be measured and the precision required. | Particle volume fraction and density
4 | Design the imaging technique. | Light microscopy on polished and etched sections
5 | Prepare samples for observations (random choice versus systematic). | Division of artifacts into sections, random or systematic sectioning, polishing, etching
6 | Select observation fields (random versus scanning). | Positioning the microscope stage
7 | Perform image analysis. | Detection of particles, binarization
8 | Interpret results. | Computing particle volume fraction and density

Current methods of imaging a microstructure are based on observations
carried out on a cross section (or simply, section) that is geometrically
defined by its position and orientation with respect to some reference system, usually, but not necessarily, a system of Cartesian coordinates (x- and y-axes). Microstructural studies require examination of a series of sections, which usually can be categorized as follows:
• A set of individual sections of random position and orientation
• A set of two (nearly parallel) sections of random position and orientation
• A set of three orthogonal (perpendicular) sections of random position (usually oriented along characteristic directions of an artifact geometry)
• A set of parallel sections (serial sections)
• A set of sections parallel to a preselected direction (vertical sections)
Random sampling of microstructural images requires that sections be
both randomly placed and oriented.
Individual sections are the most frequently used in metallographic
practice. Such sections usually are made either parallel or perpendicular
to some characteristic direction related to the geometry of an artifact of
interest. Unfortunately, this simple way of revealing the microstructure
generally is insufficient to estimate density and volume of 3-D objects, because such sections do not meet the criterion of random orientation. Individual sections provide unbiased images of the structures
only in the case of isotropic (equal properties in all directions) materials.
(Note that truly isotropic microstructures are rarely observed in engineering materials.)
Individual sections provide sufficient information to estimate the
density of 1-D and 2-D features of a microstructure (e.g., surface area of
grain boundaries in a unit volume). They also are used to estimate the
density of volumetric elements, such as second-phase particles. For a
statistically unbiased estimation of the spatial density of particles, the disector method is used, which is based on the concept of double sections (Ref 6), that is, simultaneous analysis of two coupled sections (mentioned above). A more advanced analysis is based on larger sets of parallel sections: the method of serial sectioning allows 3-D reconstruction of the
features of interest. An example of 3-D reconstruction of the shape of twin
grains in a face-centered cubic (fcc) alloy is shown in Fig. 11 (for more
details see Ref 8). A set of three orthogonal sections provides some
remedy in the case of anisotropic (different properties in different
directions) microstructures, especially if the specific directions of microstructure anisotropy are known (Fig. 12).


Fig. 11  Images of grains in a fully recrystallized nickel-manganese alloy. Three-dimensional geometry of each grain, obtained via serial sectioning of bulk material, is depicted as a stack of two-dimensional sections. Courtesy J. Bystrzycki

Fig. 12  Schematic of the concept of orthogonal sections. Perpendicular sections of a two-phase material reveal different two-dimensional microstructural images.

Bias Introduced by Specimen Preparation and Image Acquisition


Each step of the process for metallographic examination potentially can
introduce bias in characterizing the microstructure of the specimen.
Therefore, it is very important to avoid simple errors that can strongly


affect the results. Errors include improper specimen sampling (the first
possible source of bias), polishing artifacts, over-etching, staining,
specimen corrosion, errors induced by the optical system, and the effects
of magnification and inhomogeneity.
Improper polishing is the main source of bias in images. Soft materials
and materials containing both very soft and very hard constituents are
especially difficult to prepare and are sensitive to smearing and relief (see
Fig. 13 and also Chapter 3, "Specimen Preparation for Image Analysis"). Ferritic gray cast iron is such a material; graphite is very soft, ferrite is soft and easily deformed, and the iron-phosphorus eutectic is hard and brittle, characteristics that tend to produce relief even in the early stages of grinding on papers (again, see Chapter 3, "Specimen Preparation for Image Analysis," for further information). Such polishing artifacts
cannot be corrected during further polishing. Also, plastic deformation

Fig. 13  Examples of errors in specimen preparation and image acquisition


introduced in the early stages of specimen preparation can affect the etching results, even if the geometry of the polished section exhibits
perfect quality (see bottom part of Fig. 13). Sintered powder materials
containing pores also are difficult to prepare, resulting in polishing
artifacts, such as scratches and poor edge retention at pore edges (see top
part of Fig. 13), which are difficult to detect. Specimen quality for
automatic image analysis must be considerably better than that required
for classical metallographic investigation.
Proper etching is the next important consideration for producing
high-quality images. Etching usually is necessary to reveal the features of
interest; only a limited number of features can be observed without
etching, such as nonmetallic inclusions, pores in sintered powder products, and graphite in cast products. The degree of etching should be such
that a clear visualization of the feature of interest is provided. For
example, an image for grain-size analysis should contain a clear,
continuous network of grain boundaries, as shown in Fig. 14. Deep
etching can produce unwanted side effects, such as dissolving grain
boundaries and corrosive stains (Fig. 15). Selective etching is particularly
useful to observe features of interest (see Chapter 3, "Specimen Preparation for Image Analysis").
Errors also can be introduced by the optical system. Uneven illumination, a common problem with earlier microscopes, results in the outer part
of the field of view being darker than the center of the image (bottom part
of Fig. 13). Corrective procedures, called shade correction, are given in
Ref 4 and 9. This irregularity in illumination often is accompanied by
nonuniform contrast and image sharpness. These errors usually are so
subtle that they are not recognized by the human eye. Humans recognize
only approximately 40 gray levels, while digital equipment typically

Fig. 14  Example of an image of quality acceptable for grain-size evaluation


recognizes at least 256 gray levels, and its sensitivity to different gray
levels is stable, as opposed to nonlinear in the case of humans. Therefore,
an image analysis system registers all these subtle variations, which
affects the results of the analysis. New-generation optics in contemporary
microscopes compensate for the errors mentioned, leading to perfect
images (Fig. 16).
While sources of bias are difficult to quantify objectively, the following
guidelines help to minimize bias:
• Follow specimen preparation guidelines carefully.
• Thoroughly clean the specimen after each grinding and polishing step (even a single particle of abrasive material can damage the polished section).
• Use automatic polishing equipment if possible.
• Use selective etching, if possible, to reveal microstructural features of interest.
• Use a modern microscope with the best optics.
Proper interpretation of image analysis results requires consideration of
the potential effect of magnification at which a microstructure is observed.
Magnification influences variation in the values measured for each
analyzed image; at high magnification, the microstructure of a material is
likely to be inhomogeneous. Figure 17 shows images of microstructures
taken at random positions from a randomly oriented section of a
two-phase structure. Graphical results of volume fraction measurements
carried out on each field show a considerable scatter. However, the same
material, if studied under sufficiently low magnification (between 100× and 200×), yields nearly constant values of the volume fraction of the

Fig. 15  Corrosive stains diminish image quality.


second-phase particles (low standard deviation). It is not possible to provide universal guidelines on what magnification should be used. On
the basis of the results presented in Fig. 17, the following conclusions can
be drawn:
• Magnification of 100× seems to be too low; the network-type arrangement of pores is not visible, although it is visible at all the higher magnifications.
• Magnification of 400× seems to be too high; there is a very large scatter in the results.
• Magnifications of 200× and 300× yield similar results, with slightly greater standard deviation at 300×.
On the basis of this analysis, magnification of 200× seems to be the optimum one.
In the case of homogeneous materials, differences between values of a
given microstructural parameter (e.g., volume fraction of second-phase
particles) are due to chance variations. The amplitude of the variations in
the measured values for a given image decreases with the increasing size
of the field. Mean values for the image converge to the mean value for the
population for a large number of images viewed. In contrast, an inhomogeneous material shows a systematic dependence of its microstructural
descriptors on the position of the image taken.
Another important consequence of image analysis carried out on images
at different magnifications is a systematic trend in the measured size of
some of the microstructural elements. For example, porosity volume
fraction measurements of the porous structure in Fig. 17 increase with

Fig. 16  Illustration of uniform illumination from a high-quality optical microscope


Fig. 17  Systematic changes in parameters measured at different magnification


increasing magnification. Simultaneously, a dramatic increase in the standard deviation is observed (more about the theory of this effect can be
found in textbooks on fractals).
A good way to verify whether the magnification used is correct is to
compare results with values of the same parameter estimated from other
sources. For example, volume fraction of nonmetallic inclusions or
graphite can be quite precisely estimated on the basis of chemical
composition. Similarly, porosity can be estimated from density measurements. If the measurements at a given magnification agree with the
expected values from other sources, it is probable that measurements of other microstructural characteristics taken at this magnification also will be correct.
True values are not absolutely necessary if the analysis is used primarily
for comparative purposes, in which case, the same conditions of observation, including magnification, should be used for the entire series of
experiments. Results obtained using a less than optimal magnification will
be much better than results obtained from observations at a different
magnification for each sample.

Bias Introduced by Image Processing and Digital Measurements


Image processing and digital measurements can add significant bias to
results. Sources of such bias are briefly discussed here and focus on:

• Effect of the resolution of digital equipment
• Role of initial processing, or preprocessing
• Segmentation errors
• Incorrect binarization (thresholding)
• Particles crossed by the image boundary (frame)
• Typical problems of digital measurements

Resolution. Theoretical resolution, d, of an optical microscope is given by the expression:

$d = \frac{1.22\,\lambda}{2\,\mathrm{NA}}$ (Eq 4)

where $\lambda$ is the wavelength of light (approximately 0.55 μm) and NA is the
numerical aperture of the objective. Theoretical resolutions for typical
objectives are given in Table 2. Consequently, computing the theoretical
size of a single cell of a charge coupled device (CCD) used to digitize
images is straightforward, assuming the size of a single pixel corresponds
to the microscope resolution (smaller pixels produce so-called empty
magnification). The theoretical number of cells on the diameter is
obtained by taking into account that the diameter of the field of view in


modern microscopes is 22 mm (0.9 in.) (20 mm, or 0.8 in., for earlier
models).
Next, it is necessary to compute the physical resolution of a CCD
image. A typical low-cost camera containing a 13 mm (0.5 in.) CCD element with an image size of 640 × 480 pixels yields 15.875 μm per pixel. Comparing this value with the theoretical cell size in Table 2 suggests that the simplest camera is sufficient if no additional lenses are placed between the camera and the objective (this is the most common case). Many metallographic microscopes allow an additional zoom up to 2.5×, which, if applied, leads to a smaller theoretical cell size than that
offered by the camera discussed above, even if it is ideally inscribed into
the field of view.
For light microscopy, using a camera having a resolution higher than
1024 × 1024 pixels generally results only in a greater amount of data and longer analysis time, without providing any additional information! Figure 18 illustrates the effects of camera resolution. An over-sampled image (for example, one acquired with a very high-resolution camera) is smooth, but reveals nothing more than an optimally grabbed image does (Table 2). Reducing resolution loses some data; however, major microstructural features still are detected using half the optimal resolution.
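The arithmetic behind Eq 4 and Table 2 is easy to reproduce. The following minimal sketch computes the theoretical resolution, the matching CCD cell size referred to the objective magnification, and the number of such cells across the 22 mm field of view; the objective list simply echoes values from Table 2.

```python
# Hedged sketch of Eq 4 and the Table 2 columns.
WAVELENGTH_UM = 0.55  # wavelength of light, um

def resolution_um(na):
    """Theoretical resolution d = 1.22*lambda/(2*NA), Eq 4."""
    return 1.22 * WAVELENGTH_UM / (2.0 * na)

for mag, na in [(10, 0.25), (20, 0.40), (40, 0.65), (60, 0.95), (100, 1.40)]:
    d = resolution_um(na)
    cell = d * mag            # theoretical CCD cell size, um
    n = 22_000 / cell         # number of cells across a 22 mm field of view
    print(f"{mag:4d}x  NA={na:.2f}  d={d:.2f} um  cell={cell:.1f} um  n={n:.0f}")
```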
Image processing can begin as soon as the image is placed in the
computer memory. Note that almost all image processing procedures lose
a part of initial information. This occurs because the largest amount of
information always is in the initial image, even if its quality is poor (this
reinforces why the image quality is so important). Also, some information
is lost at every step of image processing, even if the image looks
satisfactory to the human eye. Thus, it is strongly recommended that the
number of processing steps be minimized.
Two tricks can be used to improve the image quality without significant
(if any) loss of information. In the case of poor illumination, the image
grabbed by a camera can be very noisy, but grabbing a series of frames
and subsequently averaging them produces much better image quality
(this solution is sometimes built into the software for image acquisition).

Table 2  Objective resolution and theoretical size of the CCD cell

Objective magnification | Numerical aperture | Objective resolution, μm | Theoretical cell size, μm | Number of cells per 22 mm
4× | 0.10 | 3.36 | 13.4 | 1642
10× | 0.25 | 1.34 | 13.4 | 1642
20× | 0.40 | 0.84 | 16.8 | 1310
40× | 0.65 | 0.52 | 20.6 | 1068
60× | 0.95 | 0.35 | 21.2 | 1038
60×, oil | 1.40 | 0.24 | 14.4 | 1528
100×, oil | 1.40 | 0.24 | 24.0 | 917

CCD, charge coupled device


In the case of a noisy image, it often is helpful to digitize the image using
a resolution two times higher than necessary and, after appropriate
filtering, decrease the resolution to obtain a clean image. This technique
is used, for example, in scanning printed images, which removes the
printer raster. In general, image quality can be significantly improved, or,
in other words, some information can be recovered by knowing exactly
how the image was distorted prior to digitization.
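Both tricks are easy to sketch in a few lines. The fragment below is a minimal illustration, assuming frames arrive as equal-sized arrays: the first function averages a series of grabs of the same field; the second median-filters an image digitized at twice the needed resolution and then halves it.

```python
# Hedged sketch of the two noise-reduction tricks described above.
import numpy as np
from scipy.ndimage import median_filter

def average_frames(frames):
    """Average repeated grabs of one field to suppress camera noise."""
    return np.mean(np.stack(frames), axis=0)

def filter_and_downsample(oversampled):
    """Filter an image grabbed at 2x resolution, then halve it."""
    clean = median_filter(oversampled, size=3)   # remove isolated noise
    return clean[::2, ::2]                       # keep every second pixel
```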
There is no perfect procedure to detect microstructural features.
Depending on the particular properties of the image, as well as the
algorithms applied, some grain boundaries or particles will be lost and
some nonexistent grain boundaries or particles will be detected. This is
illustrated in the example of polystyrene foam, shown in Fig. 19. There is
no specific solution to this problem, as each operator will draw the grain
boundaries in a different manner, and the resulting number of grains will
show some scatter.
Figure 20 compares results of the number of grains in an austenitic steel
measured manually and automatically, which shows very good agreement

Fig. 18  Illustration of the effect of digital resolution on details visible in an image. (a) Image 200% oversampled; (b) optimum resolution; (c) resolution lowered 50%; (d) resolution lowered 25%


(correlation coefficient >0.99). This indicates that automatic measurements, although not perfect, can yield fully applicable results. However,
any segmentation method should be thoroughly tested on numerous
images prior to its final application in analysis.
Despite some errors that occur in practically every analyzed image,
automated methods have several advantages over manual methods:
• They are at least one order of magnitude faster than manual methods; thus, they follow the well-known rule of stereology, "do more less well," mentioned elsewhere in this book.
• They are completely repeatable; that is, repeated analysis of any image always yields the same results.
• They require almost no training or experience to execute once the proper procedures have been established.
Nearly all measurements are executed on binary images, and binarization
of microstructures can be the source of numerous errors in analysis.
Therefore, binarization, or thresholding, is one of the most important
operations in image analysis. Image analysis software packages offer a
variety of thresholding procedures, which are carefully detailed in the
accompanying operation manuals.
The simplest method of binarization, called the "flicker" method, is based on a comparison of the live and binary images displayed interchangeably. However, this method is not very precise, even if the same
person operates the apparatus. A much better, more objective method is
based on the analysis of profiles and histograms (Fig. 21).

Fig. 19  Errors produced during grain boundary detection procedures. (a) Initial image; (b) detected boundaries. Some boundary lines are lost; some extra boundary lines are detected.


Theoretically, the optimum threshold value is the mean value of the gray levels for the background and relevant feature. This practice
compensates for possible errors produced by a gradual (not sudden)
change in gray levels in the vicinity of a given microstructural feature
(e.g., the pores in Fig. 21). This leads to good results in simple
microstructures with two or three microstructure constituents of significantly different gray levels. In other cases, this method can lead to errors,
such as overestimated porosity, which is shown in Fig. 21(b).
Another solution, often used in algorithms for automatic thresholding,
is based on the analysis of gray-scale histograms. The pores and
background are expected to produce their own peaks in the histogram.
Therefore, the optimum threshold level is assumed to be located at the minimum point of the histogram, as is the case in Fig. 21(c). Usually,
the best choice of binarization method varies from image to image. It is
good practice to check the threshold level using profiles and histograms
as additional sources of quantitative information about each image under
consideration.
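A minimal sketch of the histogram-based approach follows; it assumes an 8-bit gray-scale image and smooths the histogram first so that noise does not create spurious local minima. Real software offers more robust variants, so treat this only as an illustration of the idea.

```python
# Hedged sketch of thresholding at the histogram minimum between peaks.
import numpy as np
from scipy.ndimage import uniform_filter1d

def histogram_minimum_threshold(gray, peak_gap=30):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    smooth = uniform_filter1d(hist.astype(float), size=9)
    p1 = int(np.argmax(smooth))                 # strongest peak
    rest = smooth.copy()
    rest[max(0, p1 - peak_gap):p1 + peak_gap] = 0
    p2 = int(np.argmax(rest))                   # second peak
    lo, hi = sorted((p1, p2))
    return lo + int(np.argmin(smooth[lo:hi + 1]))  # valley between peaks

# binary = gray > histogram_minimum_threshold(gray)
```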
Counting Objects. A common answer to the question of what can be expressed in numbers via image analysis is "number of objects." In fact,
counting objects is the most natural type of measurement and constitutes
a very important group of measurements in computer-aided image
analysis.
While counting objects in a binary image appears to be a very simple task, serious errors can easily be introduced if the counting is not performed with care.
The errors mainly are related to particles that are crossed by the edge of
the image. Assume that all the particles in an image, including those
crossed by the image edge, are counted. The same particles are partially
visible in the next field of view (adjacent to the one just
analyzed) and, therefore, are counted twice. Such a situation is quite

Fig. 20  Comparison of results of counting grains using manual and fully automatic methods


common in image analysis, and a method is needed to remedy this systematic error.
Most image-analysis programs allow automatic removal of the particles crossed by the edge of the image in a transformation called "border kill." However, removing some particles instead of counting them twice results in underestimating the number of particles, because some are not counted at all.

Fig. 21  Effect of different methods of choosing the binary threshold on binarization results. (a) Initial image; (b) profile with threshold level indicated by an arrow and corresponding binary image; (c) histogram with threshold level indicated by an arrow and corresponding binary image


Also, simple removal of particles crossed by the image edge affects
particle distribution. This is important because, in some cases, distribution
of some particle attribute (e.g., size) is of more interest than the exact
number of particles.
Figure 22 illustrates this situation. Figure 22(a) consists of ten large and
ten small black circles, uniformly distributed over the image and placed
completely inside the image (no circle crosses the image edge). The
population consists of 50% large and 50% small circles. Taking a slightly smaller image (the square outlined by dashed lines in Fig. 22b) and
performing the border kill operation removes all the gray circles. The
remaining black circles consist of eight small and only four large ones.
Therefore, only 33% of the circles are large in the new image. Repeating
this procedure for even smaller images (see Fig. 22c and d) yields 29%
and 25% large circles, respectively. This simple example clearly demonstrates that the probability of removal of a particle is approximately
proportional to its size. Therefore, for correct particle counting, the
so-called guard frame should be applied.
To understand the application of the guard frame, consider the analysis
of the set of particles shown in Fig. 23(a). A guard frame (the dashed
rectangular area) is added to the image, the size of which must meet
certain criteria. It should be placed far enough from the image edges to
avoid hitting particles cut by the image edge (black particles in Fig. 23a).

Fig. 22  Effect of border-kill operation on distribution of detected particles


For an unbiased particle count, all particles totally included within the
guard frame and touching or crossing its right and bottom edges are
considered, while all particles touching or crossing the left and upper
frame edge, as well as the upper right-hand or lower left-hand corners, are
removed. This particle selection method (gray particles in Fig. 23b) may
seem artificial, but it ensures that size distribution is not distorted. Note
that when using a series of guard frames touching each other, the selection
rules ensure that every particle is counted only once.
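The counting rule above reduces to a simple test on each particle's bounding box: count the particle only when the top-left corner of its bounding box lies strictly inside the guard frame. The sketch below is a minimal illustration of that rule, assuming a binary image and a frame given in pixel coordinates; it is not taken from any particular image-analysis package.

```python
# Hedged sketch of unbiased counting with a guard frame.
import numpy as np
from scipy.ndimage import find_objects, label

def count_in_guard_frame(binary_image, top, left, bottom, right):
    """Count particles per the guard-frame rule described above."""
    labeled, _ = label(binary_image)
    count = 0
    for sl in find_objects(labeled):
        r0, c0 = sl[0].start, sl[1].start   # top-left of bounding box
        # counted: inside the frame or touching its right/bottom edges;
        # excluded: touching the left or top edge of the frame
        if top < r0 <= bottom and left < c0 <= right:
            count += 1
    return count
```

Particles crossing the right or bottom edge still start inside the frame and so are counted once; particles touching the top or left edge are left for the neighboring frame, so a tiling of such frames counts every particle exactly once.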
Other methods of particle selection for further analysis are given in Ref
4. However, the general idea is always the same, and the results obtained
are equivalent. If only the number of particles (features) is of interest, it is easily determined using the Jeffries planimetric method, described in Chapter 2, "Introduction to Stereological Principles," and Chapter 5, "Measurements." It is not recommended to count particles in images after removing particles crossed by the image edge (even if recommended in standard procedures), because this always introduces a systematic error. Moreover, the error is impossible to estimate a priori, because it is a function of the number, size, and shape of the analyzed features.
Area Fraction. The classical stereological approach to measure area
fraction is based on setting a grid of test points over the image and
counting the points hitting the phase constituent of interest. With image
analysis, the digital image is a collection of points, so it is unnecessary to
overlay a grid of test points, and it is sufficient to simply count all the
points belonging to the phase under consideration. This counting method
is used to evaluate surface area. The statistical significance of such
analysis is different from that performed using classical stereological
methods. In other words, the old rules for estimating the necessary
number of test points are no longer valid, and other rules for statistical
evaluation of the scatter of results (for example, based on the standard
deviation) should be applied.
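In a digital image, every pixel acts as a test point, so the measurement collapses to a one-line count. A minimal sketch, assuming a boolean phase mask:

```python
# Hedged sketch: area fraction by direct pixel counting.
import numpy as np

def area_fraction(phase_mask):
    """Fraction of image pixels belonging to the phase of interest."""
    return np.count_nonzero(phase_mask) / phase_mask.size
```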

Fig. 23  Concept of a guard frame. Schematic of (a) initial image and guard frame and (b) selection based on particles crossing the edge of the frame


Evaluation of Mean Number of Particles per Unit Intercept Length. Similar to the case discussed above, these measurements are
performed in classical stereology based on a set of test lines and counting
the number of particles hit by the lines. Image analysis, on the other hand,
allows counting the number of specific pixel configurations. One possible
configuration is a pair of points lying along a horizontal line. Counting the
number of such pairs, in which the left point belongs to the matrix and the
right one to the particle, yields the number of particles hit by the system
of horizontal test lines built from all the points of the image. The total
length of the test lines is equal to the number of rows of pixels in the
image multiplied by the horizontal dimension of the image.
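The pixel-pair configuration described above maps directly onto an array operation. A minimal sketch, assuming a boolean image (True = particle) and a calibration in μm per pixel:

```python
# Hedged sketch: particles hit per unit length of horizontal test lines.
import numpy as np

def particles_per_unit_length(binary_image, um_per_pixel):
    img = binary_image.astype(bool)
    # pairs whose left pixel is matrix and right pixel is particle
    hits = np.count_nonzero(~img[:, :-1] & img[:, 1:])
    rows, cols = img.shape
    total_length_um = rows * cols * um_per_pixel  # all rows as test lines
    return hits / total_length_um
```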
Typical Problems of Digital Measurements. Digital measurements are based on analysis of the spatial distribution of pixels within an image. Such measurements usually are very precise, especially compared with manual measurements, and appropriate measuring procedures are built into the image
analysis software. However, estimating the length of a feature perimeter
is problematic, so avoid this measurement if possible, or check the
accuracy of measurements using shapes of known perimeters, such as
differently sized circles and squares.
Connectivity, a property of binary images that allows a computer to
determine whether or not the analyzed features are connected, also
presents a problem, which is solved using procedures that depend on the
adopted grid of pixels. There is no problem in the case of a hexagonal
grid; if two points touch each other, they form a single unit (i.e., they are connected). Unfortunately, only a limited number of image analysis
devices use a hexagonal grid. Instead, most follow the graphics adapters in computers, which use a square grid of pixels. In this case, the number of closest
neighbors of any pixel can be defined. For a pixel with four closest
neighbors (see the four gray pixels surrounding the black one in the upper
left part of Fig. 24), four-connected analysis is performed, while for a
pixel with eight closest neighbors (see the upper right corner of Fig. 24),
eight-connected analysis is performed.
The decision on connectivity can seriously affect the results. For
example, the two squares (lower left part of Fig. 24) form two objects if
treated as four-connected space, but form a single figure if treated as
eight-connected space. The curve (lower right part of Fig. 24) produces an
even more dramatic effect. The feature is considered as a curve in
eight-connected space, whereas it is considered as ten separate particles in
four-connected space.
In some cases, an eight-connected analysis results in isolated features
being erroneously treated as one object. Such an error is especially
important when measurements in images are started with particles
(grains) separated by only a line one pixel wide. On the other hand,
operations associated with, for example, particle growth or hole filling,
usually return better results if performed using procedures based on eight


closest-neighbor pixels. The decision about connectivity has to be made on a case-by-case basis.
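The two neighborhoods are easy to compare directly. The sketch below labels the two corner-touching squares of Fig. 24 with both structuring elements; a cross-shaped structure gives four-connected behavior, and a full 3 × 3 block gives eight-connected behavior.

```python
# Hedged sketch: 4- versus 8-connected labeling of the same image.
import numpy as np
from scipy.ndimage import label

FOUR = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]])           # four closest neighbors
EIGHT = np.ones((3, 3), dtype=int)     # eight closest neighbors

# two squares touching only at a corner, as in Fig. 24
img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 1]])
print(label(img, structure=FOUR)[1])   # 2 objects (four-connected)
print(label(img, structure=EIGHT)[1])  # 1 object (eight-connected)
```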
Quantification of Bias. Sources of bias mentioned previously are
discussed from a qualitative standpoint. Some theoretical formulae for
quantification of the bias of a given method have been developed in
classical stereology. These formulae also can be used to predict the
required number of measurements (number of fields of view, total
intercept length, or number of test points). Unfortunately, all these
considerations provide estimates of the bias for a particular test method
based on abstract, ideal conditions of their application. This allows
prediction of how the bias will change if, for example, the number of test
points is doubled. However, these methods usually are not helpful for predicting bias under real experimental conditions. Consequently, errors estimated by means of statistical analysis of results
(generally based on evaluation of the standard deviation) usually are
greater than estimates based on theoretical, stereological considerations.
In the case of image analysis, it is not necessary to evaluate a bias of the
measurement strategy used because all of the information stored in the
analyzed image is taken into consideration. Digital measurements introduce some bias, but it is relatively easily estimated (see Table 3). Digital
measurements also are both accurate and inaccurate. They are accurate (or
precise) with respect to the applied algorithms, which are expected to give
identical measurement results for the same image regardless of which
software and computer are used. However, measurement results can be far
from the correct value, as demonstrated in Fig. 25. The bias of the data in
Fig. 25 is surprisingly high for circles smaller than ten pixels in diameter.
Note, however, that every single pixel accounts for 5% of the area in the
case of a circle having a diameter of five pixels.
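The experiment behind Fig. 25 can be repeated with a few lines of code: rasterize circles of known diameter and compare the pixel-count area with the analytical value. A minimal sketch (the grid size is arbitrary):

```python
# Hedged sketch: relative area error of digitized circles (cf. Fig. 25).
import numpy as np

def digital_area_error(diameter_px, grid=512):
    r = diameter_px / 2.0
    y, x = np.ogrid[:grid, :grid]
    mask = (x - grid / 2) ** 2 + (y - grid / 2) ** 2 <= r ** 2
    measured = np.count_nonzero(mask)   # area by pixel counting
    true = np.pi * r ** 2               # analytical area
    return (measured - true) / true

for d in (5, 10, 20, 50):
    print(d, f"{digital_area_error(d):+.1%}")  # error shrinks with size
```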
Figure 25 and Table 3 show that there are two main sources of bias in
digital measurements: incorrect detection and (in the case of length

Fig. 24  Illustration of the concept of four-connected and eight-connected analysis


measurements) analysis of very small objects. Changing the magnification can minimize bias for the latter source. Also, correction procedures
can be applied to the results of measurements carried out on small objects.
As a result, the main source of bias in digital measurements remains
improper detection. This reinforces the necessity of the highest possible
quality at all the stages of metallographic procedure (specimen preparation, microscopic inspection, and image processing), as each stage can
introduce unacceptably large errors to the final results of analysis.
In practice, it is relatively difficult to characterize measurement errors
quantitatively. One possible solution is to use model materials or images with exactly known structural characteristics to determine the error experimentally. Comparing apparent and predicted results allows an
approximation of the measure of accuracy of applied methods. It also is

Table 3  Characterization of the bias of digital measurements

Type of measurement | Characteristics | Most probable source of the bias | Suggested action
Counting objects | Provides exact results only if the particles crossed by the image edge are correctly analyzed | Incorrect detection or erroneous counting of particles crossed by the image edge | Apply proper correction procedures (described in this Chapter)
Distances | Accuracy to a single pixel | Incorrect detection of very small objects | Avoid measurement of objects having length smaller than 10 pixels
Length (including curved lines) | Can be inaccurate, especially in the case of lines having small radius of curvature | Incorrect detection of objects having edges of very small radius of curvature | Avoid these measurements, especially if the radius of curvature is smaller than 5 pixels
Area | Can be measured precisely; improper binarization can introduce large errors, especially in the case of small precipitates | Incorrect detection | If possible, avoid particles smaller than 10 pixels in diameter and test how variation in threshold level affects the results

Fig. 25  Relative error of digital measurements of the area and perimeter of circles of different diameters


possible to use computer models of the experimental procedure in question (Ref 10).
The problem of bias in image analysis might have less importance in
practical applications than in the context of deriving correct values for
every object measured. This is especially true if image analysis is used in
quality control. For example, consider a large set of samples, M, and a
selection criterion, C, based on digital measurements. Then, using C (of
unknown bias), divide all the samples into two groups, M1 and M2; that
is, suitable or not suitable, respectively, for a given application. If other
tests of, for example, mechanical properties, show that all the materials
from subgroup M1 are correctly accepted and all the materials from
subgroup M2 are correctly rejected, criterion C is relatively safe to use for
classification purposes, whatever the error of its estimation is. While this
conclusion may seem surprising, most decisions based on digital measurements belong to the yes/no space, which does not allow for more or
less accurate judgment. Any decision is either correct or wrong. From this
perspective, the main requirement of using digital methods is to obtain the
highest repeatability and reproducibility of results, which is almost
always achieved at a satisfactory level.
Optimum Number of Fields of View and Magnification. With any image processing procedure, it is necessary to determine the required number of fields of view and the necessary magnification to assure a positive outcome of the investigation. In the case of routine, standardized
procedures for grain size assessment and evaluation of nonmetallic
inclusion content, simply follow the recommendations of a given standard.
The approach is not as straightforward for nonstandardized cases. The
starting point is schematically illustrated in Fig. 26. First, decide what
magnification is the most appropriate. Increasing magnification increases

Fig. 26  Effect of magnification and number of fields of view on the scatter of results


the scatter of results, as demonstrated previously, so magnification should be low enough to avoid unnecessary scatter but high enough to detect the existing variation in the structure. A good practice is to choose magnification on the basis of the mean size of the feature to be detected. Different criteria should be applied depending on the features to be analyzed. Some guidelines, based on the authors' experience, are presented in Table 4.
The next decision is the number of fields of view to use. Variance of
results decreases with increasing number of fields of view, reaching an
asymptotic value (different for each magnification) at a very high number
of fields. This value should be determined experimentally, followed by a
number of fields of view selected that assures a stable value of the
variance. The criterion of stability has to be selected individually, but approximately 5–10% above the expected asymptotic value is an acceptable compromise. The magnification and number of fields of view selected should be kept constant during the whole series of analyses.
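One way to automate the stopping decision is to watch the running variance as fields accumulate. The sketch below is only an illustration of that idea; the 10% stability band and the minimum of 10 fields are hypothetical choices, not values prescribed in the text.

```python
# Hedged sketch: decide whether enough fields of view have been measured.
import numpy as np

def enough_fields(values, band=0.10, min_fields=10):
    """True when the variance has stabilized to within the given band."""
    v = np.asarray(values, dtype=float)
    if v.size < min_fields:
        return False
    var_all = v.var(ddof=1)                  # variance from all fields
    var_half = v[: v.size // 2].var(ddof=1)  # variance from the first half
    return abs(var_all - var_half) <= band * var_all
```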

Estimating Basic Characteristics


Measurements are performed to obtain statistical characteristics of the
microstructure of a material. Analysis of the errors of estimation is an
independent, well-developed discipline (Ref 11). The discussion here is
limited only to the most simple, basic concepts, sufficient for preliminary
analysis of experimental data. All the relationships listed subsequently are
valid for the data complying with normal distribution. Two reasons for
such a choice are:
• This type of distribution is very common for the results of measurements carried out on material microstructures. In some cases, normal behavior also is observed for logarithms of measured values. In computer-aided measurements, the change from X to log X is automated.
• The statistical properties of a normal distribution are well described and relatively simple to evaluate.
Table 4  Guidelines for adequate magnification choice for digital measurements

Feature analyzed | Example of structure | Proposed criterion for magnification choice | Suggested secondary criterion
Grains filling the space | Austenitic steels | Size of field 200 × the mean grain section area | 100–300 grains in a single field of view
Dispersed particles, 10% area fraction | Graphite in cast iron | Mean diameter or length of precipitates equal to 5–10% of the diagonal of the field of view | 100 precipitates visible in the field of view
Small dispersed particles | Carbides in tool steels | Mean area of precipitates equal to at least 25 pixels | At least 50 prior-austenite grains visible (in order to preserve spatial distribution)
Mixture of two constituents | Ferritic-pearlitic steels | A mean from magnifications optimal for both constituents | Magnification optimal for analysis of the constituent more important for subsequent analysis
Inclusions | Pores in sintered materials, nonmetallic inclusions | Possibility to analyze the shape of the feature of mean size | Preservation of the spatial distribution of features analyzed


The most basic operation is an estimation of the mean value, $\bar{x}$, of a given parameter, defined by:
$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ (Eq 5)

where $x_i$ is the ith individual result and n is the number of measurements. Mean value characterizes the population studied, even if an element characterized by such a value does not exist (this occurs, for example, if there are two groups of values, very low and very high; the mean value in this case falls between the two). This should be taken into account when the features analyzed, such as grains, consist of two families of different size. However, the analysis of such cases is beyond the scope of this Chapter.
The most commonly used measure of the scatter of analyzed values is
standard deviation, s, defined as the square root of the variance:
$s = \sqrt{\dfrac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$ (Eq 6)

Standard deviation often is used for evaluation of the confidence level:

$\mathrm{CL} = t_{\alpha,n-1}\,s$ (Eq 7)

where $t_{\alpha,n-1}$ is the value of the t-statistic (Student) for significance level $\alpha$ and $n-1$ degrees of freedom. Exact values of t-statistics can be found in statistical tables, or are easily accessible in any software for statistical evaluation, including the most popular spreadsheets.
Estimating the confidence level for $\alpha = 0.05$, for example, means that when repeating the measurements, 95% $(1-\alpha)$ of the results will be greater than $\bar{x} - \mathrm{CL}$ and smaller than $\bar{x} + \mathrm{CL}$. Another useful conclusion is that if the number of measurements is large (>30), then approximately 99% of the results should not exceed the confidence level defined as follows:
$\mathrm{CL}_{0.99} = 3s$ (Eq 8)

To summarize, confidence level is a very convenient statistical characteristic for interpretation of results, and always is proportional to the
standard deviation. Therefore, evaluation of the standard deviation is, in


most cases, sufficient for comparative analysis. In addition, Eq 6 refers to the so-called corrected standard deviation. This means that the expression
takes into account the number of observations, and the values obtained
can be compared even if the numbers of measurements are not identical.
(Nevertheless, using the same number of measurements is strongly
recommended.)
Practically all microstructural elements (e.g., grains, pores, particles)
have different sizes. This property can be characterized by means of
distribution functions. For comparative purposes and based on the
requirements of preliminary analysis, it is possible to apply much simpler
methods, namely evaluation of the coefficient of variation, CV:
$\mathrm{CV} = \frac{s}{\bar{x}}$ (Eq 9)

The interpretation of CV is simple: the greater the value of CV is, the more inhomogeneous the results are. Note that CV should be used for
parameters complying with normal distribution only. An example of such
a parameter is the amount of a given structural feature, such as porosity
or graphite content in cast iron. Parameters describing size, such as grain
size and particle diameter, frequently are described by means of lognormal distribution. Consequently, logarithms of the values have a normal
distribution, which allows for application of the parameters defined
above.
Example. Consider two pieces of cast iron (samples A and B) from
which graphite content is to be determined. Both are prepared and tested
in the same way (e.g., specimen preparation, magnification, image
processing), and graphite content measured in the first 15 fields of view,
yielding the basic statistical measures summarized below:
Sample | Graphite contents | Mean | Deviation | CV
A | 14.0, 14.3, 14.8, 13.7, 13.0, 14.5, 14.2, 14.0, 14.0, 13.9, 14.6, 13.3, 14.0, 14.9, 14.3 | 14.1 | 0.52 | 0.037
B | 10.8, 14.3, 17.0, 13.1, 11.2, 14.5, 16.8, 15.9, 12.2, 14.6, 13.6, 15.9, 15.7, 12.6, 13.3 | 14.1 | 1.93 | 0.14

CV, coefficient of variation

Both samples have the same mean graphite content, but the deviation in
content (individual results) for sample B is four times higher than for
sample A. Moreover, the scatter of results for sample B is so large that the
99% confidence level is approximately equal to half of the measured
value. One interpretation of results is that the graphite content in sample
B is inhomogeneous. Another is that the measurements for sample B were
performed with different precision. If these interpretations cannot be rejected a priori, it is wise to repeat the analysis or increase the number of measurements.
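The CV comparison in this example can be checked with a few lines of Python (a sketch only; the data are copied from the table above):

from statistics import mean, stdev

sample_a = [14.0, 14.3, 14.8, 13.7, 13.0, 14.5, 14.2, 14.0,
            14.0, 13.9, 14.6, 13.3, 14.0, 14.9, 14.3]
sample_b = [10.8, 14.3, 17.0, 13.1, 11.2, 14.5, 16.8, 15.9,
            12.2, 14.6, 13.6, 15.9, 15.7, 12.6, 13.3]

for name, data in (("A", sample_a), ("B", sample_b)):
    cv = stdev(data) / mean(data)            # Eq 9
    print(f"Sample {name}: CV = {cv:.3f}")   # A -> 0.037, B -> 0.137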
Advanced Statistical Analysis. The statistical parameters described
above refer to a very limited subset of possible methods of interpretation.
Following are some other questions that can be answered by means of
statistical analysis. Appropriate methods are described in detail in the
literature (see, for example, Ref 10). The tasks of advanced statistical methods
include:
O Comparison of mean values or deviations, which can help to decide
whether or not the results obtained for two different materials are
significantly different
O Comparison of distributions, which can be used to judge the extent of
agreement with a given distribution function
O Correlation analysis, which is suitable to evaluate the relationship between two or more quantities (this analysis is often used to verify theoretical models of materials properties)
O Verification of statistical hypotheses, which helps to eliminate some
parameters that are not important from the point of view of a given
analysis (e.g., checking if small changes in nonmetallic inclusion
content have any effect on the grain size of steel)
Statistical interpretation can be complex. Fortunately, in the case of
routine control, appropriate methods of statistical evaluation are included
in the standardized procedures that describe the methodology of specimen
preparation, testing procedure, and analysis of the results. Interpretation
of nonstandard measurements could require an experienced statistician.

Data Interpretation
Interpretation of the numbers quantifying a microstructure can be carried out from two perspectives: a purely mathematical one, which concentrates on statistical significance, and a materials-science-oriented one, which correlates microstructural descriptors with materials properties. In
the case of the former approach, the key question is whether or not the
obtained estimates of a given parameter are precise enough to verify
theoretical models for materials properties and processes from the
microstructure. It is important to keep in mind the stochastic (random)
nature of the geometry of microstructural features: the diversity in their size and shape, and the profound degree of disorder in their
spatial distribution. This randomness means that the description of
microstructural elements requires the application of mathematical statistics and, in turn, concepts of populations, distribution functions, estimators, and intervals of confidence. In this context, the population of grains


shown in Fig. 27 for a polycrystalline nickel-manganese alloy is characterized by the following numbers: $\bar{v} = 25\ \mu\text{m}^3$ and $s_v = 5\ \mu\text{m}^3$, where $\bar{v}$ is the mean value of grain volume and $s_v$ is the standard deviation.
Microstructural elements cannot be characterized by single, absolute numbers, due to the diversity in the numbers describing the size, shape, and location of individual elements. The numbers derived from
measurements are, in fact, estimates of preselected parameters, not true
values. More advanced statistical analysis shows that for a 95% confidence level, the mean volume of grains is estimated as $E(V) = 25 \pm 2\ \mu\text{m}^3$. In a series of experiments carried out on the same set of specimens,
it is likely that different values of the same parameter will be obtained,
which underlines the need for analysis of statistical significance of
differences in the values of microstructural descriptors. Various statistical
tests for proving statistical significance are available. These tests, not
discussed in the present text, can again be found in a number of books on
mathematical statistics.
An example of data interpretation based on relating microstructural descriptors to materials properties is illustrated in Fig. 28,
which shows the relationships between porosity and density of a ceramic
material (Fig. 28a) and between hardness and the mean intercept length
for a polycrystalline metal (Fig. 28b). The relationship between density
and porosity can be used to predict the true density of a sintered compact, while the relationship between hardness and the mean
intercept length can be used to explain how material hardness can be
controlled by means of controlling grain size.
Parameters used to model the properties of a material almost never can
be measured directly from images of its microstructures because information is from a two-dimensional (2-D) view rather than from three-dimensional (3-D) space.

Fig. 27 Distribution of the volume of grains in an austenitic steel as an example of data for microstructure analysis

Therefore, the parameters measured using image analysis methods need to be properly interpreted before they can be
used to verify materials models. The necessary links between parameters
measured on an image and their true 3-D descriptors have been developed
in stereology (a comprehensive treatment of stereology for materials science can be found in, for example, Ref 3 and 6).
Grain size is another example of a frequently estimated microstructural parameter, relevant for modeling the properties of polygrained aggregates and polycrystalline materials. The mechanical properties of
polycrystals (such as yield and flow stress, fatigue limit, and plasticity)
are linear functions of grain size. A number of other parameters (discussed
in the preceding Chapters) can be used to describe relevant microstructural descriptors. Quantitative interpretation of the numbers obtained via
image analysis may require numerical models of the microstructures in
question. Figure 29 shows an image of traces of grain boundaries
emerging on the cross section of a polycrystal. Determining the surface
area of an individual polygon, A, is a simple measurement that can be
performed using image analysis tools. The results of measurements of A
for the image in Fig. 29 are given in Fig. 30.
The relationships between properties and microstructures of a material
usually link mean values of the respective parameters (e.g., the mean
values of density and porosity in Fig. 28a). More precisely, the term mean
value is understood as a numerical, or true, mean value, obtained by
averaging over all elements of a given population, such as particles in
two-phase material. The true mean value of a given parameter can be

properly estimated only if any element of the population studied is sampled with the same probability.

Fig. 28 Experimental data illustrating the relationships between material properties and microstructures. (a) Plot of ceramic density against volume fraction of pores. Courtesy P. Marchlewski. (b) Plot of Brinell hardness of an austenitic stainless steel against the inverse of the square root of the mean intercept length, or $(\bar{l}\,)^{-1/2}$, $\mu\text{m}^{-1/2}$. Courtesy J.J. Bucki
Unfortunately, microstructural elements appear on images of microstructures with a probability that depends on their size. As a result, large elements in the microstructure are generally overrepresented in the image. If this situation is not taken into consideration or corrected (using appropriate stereological methods), image analysis will yield weighted estimates of a given property. Weighted estimates still can be used to model materials properties if properly handled. For example, the volume-weighted mean particle volume can be a more meaningful parameter for
some material properties than a numeric mean volume (see, for example,
Ref 12).

Fig. 29 Binary image of grains in austenitic steel ready for digital measurements. All artifacts are removed, and the continuous network of grain boundaries is only one pixel wide.

Fig. 30 Grain size area (in pixels) distribution obtained from the binary image in Fig. 29

The issue of weighted distributions arises in the interpretation of distribution data. For example, consider the analysis of grain size
distribution shown in Fig. 31. The simplest, most logical measurement
appears to be grain section area, yielding a number-weighted distribution
(black bars in the plot). Number-weighted means that the grain sections are counted in proportion to their number; in other words, the distribution illustrates how many grain sections of a given size there are in the image. In fact, the black bars in
Fig. 31 indicate what the probability is that a single grain section fits into
a given size class interval.
Another concept of data presentation is possible, called area-weighted
distribution, which is at least equally valuable to, if not more valuable
than, the preceding analysis. Assume that the probability of hitting any
particle visible in the image with a single, randomly thrown point is
directly proportional to the area of this particle. Similarly, numerous
materials properties (e.g., resistance to plastic deformation and electrical
resistance) are related to the particle volume rather than to the number of
particles. Thus, a distribution in which the weighting parameter is area instead of number seems to provide a better link to material properties. Note that the area-weighted distribution (gray bars in Fig. 31)
is shifted toward larger grain sections compared with the number-weighted distribution. This phenomenon is even more pronounced in the
case of a bimodal grain size distribution. The two maxima produced by
grains of both sizes usually are hardly visible in the number-weighted
distribution but very clear in the area-weighted distribution (Ref 13).
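A minimal Python sketch, using hypothetical section areas, shows how the same measurements yield the two weightings:

import numpy as np

# Hypothetical grain section areas (in pixels) measured on one image
areas = np.array([120, 85, 400, 95, 1500, 110, 90, 2200, 130, 105])

bins = np.logspace(np.log10(areas.min()), np.log10(areas.max()), 6)

number_weighted, _ = np.histogram(areas, bins=bins)              # counts per size class
area_weighted, _ = np.histogram(areas, bins=bins, weights=areas)
area_weighted = area_weighted / areas.sum()                      # fraction of total area

print(number_weighted)   # dominated by the many small sections
print(area_weighted)     # shifted toward the few large sections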

Fig. 31 Number-weighted (black bars) and area-weighted (gray bars) distributions of grain size of an annealed austenitic steel. The area-weighted distribution seems to be more informative.


In conclusion, note that the scope and depth of interpretation of the data
describing the microstructural features of a material also depend on the
purpose of quantification, including:
O Quality control
O Modeling materials properties
O Quantitative description of a microstructure
Each application of quantitative descriptions differs in the required degree
of interpretation. In the case of quality control, attention is focused on
reproducibility of the data; no physical meaning of the numbers obtained
from image analysis is needed. Various procedures can be used, and the
basic requirement is to follow systematically the one selected. The major
limitation of this approach is that the results obtained in one study cannot
be compared with the results obtained in other investigations. On the
other hand, no special requirements are imposed on the image analysis
technique except the condition of reproducibility of measurements.
In the case of measurements carried out to derive numbers for modeling
materials properties, the freedom in selecting microstructural parameters
is restricted. In this situation, a given physical and mathematical model
explaining the properties of the material is required in addition to the
reproducibility requirement.

Data Interpretation Examples


Grain Size. Grains in polycrystals form an aggregate of multifaceted polyhedra, which differ in size and shape. Images of polycrystalline
materials reveal sections of grains in the form of polygons, the section
size of which can be described by surface area, A. Figure 32 shows three
examples of images of grain sections in polycrystals, and the results of the
measurements of grain section area for each image are given in Fig. 33.
Such results can be either presented in the form of histograms of
measured values or normalized by dividing measured values by the
average grain section area. Use of a frequency histogram to visualize the
data directly provides information about the most frequently observed
grain section area. The normalized plot is more useful to study the relative
spread in the size of grains. Figure 32(c) contains the highest diversity in
grain section areas.
Frequently, results of size measurements (in this case size of grain
section area) also are plotted on a logarithmic scale; a logarithmic plot of
the distribution function of grain section area, f(A), from the three
structures in Fig. 32 is shown at the bottom of Fig. 33. The experimental
data distribution function for one of the structures is approximately a
normal distribution. There are benefits to presenting measured values in


the types of plots in Fig. 33. The distribution functions can be used to
determine the degree to which they agree with theoretical functions, and
changes in distribution functions can be interpreted in terms of a shift in
the mean value and possible increase in their width. In the case of grain
size, the shift is an indication of grain growth. If the shape of the
normalized distribution function does not change (compare Fig. 33a and
b), the shift could be interpreted as normal grain growth. By comparison,
changes in the shape of normalized distribution functions indicate
abnormal grain growth (compare Fig. 33a and c).
Due to the randomness of the size of grain sections, the relatively large
grain sections that occupy a significant fraction of the image account for
a small fraction of the total population; two large sections in Fig. 32(c)
account for approximately 15% of the image area. Weighted plots of
relevant values make it easier to visualize the effect of such grains
appearing with low frequency. The area-weighted plot of A, or $A \cdot f(A)$, is more sensitive to the presence of large grain sections (Fig. 34).
The results of grain section area measurements frequently are converted into equivalent circle diameters. Both the mean value of grain section area, $\bar{A}$, and the mean equivalent diameter, $\bar{d}$, can be used as measures of grain size. However, it should be pointed out that these two parameters do not account for the 3-D character of grains and can be used only in comparative studies.
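The conversion to an equivalent circle diameter is elementary; a minimal sketch, assuming the section area is already calibrated in μm²:

import math

def equivalent_circle_diameter(area_um2):
    """Diameter of the circle having the same area as the grain section."""
    return math.sqrt(4.0 * area_um2 / math.pi)

print(equivalent_circle_diameter(100.0))   # about 11.3 um for a 100 um^2 section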
Mean intercept length is a more rigorous stereological measure of grain
size. In the case of an isotropic (uniform properties in all directions)
structure, it can be measured using a system of parallel test lines on
random sections of the material. Anisotropic (different properties in
different axial directions) structures require the use of vertical sections
and a system of test cycloids, which provide only a mean value, and not
a distribution of intercept length.

Fig. 32 Three typical microstructures of a single-phase polycrystalline material, differing in grain size (grain size in images a, b, and c is in ascending order) and in grain-size homogeneity (grains in image c are more inhomogeneous compared with those in a and b).


Values of mean intercept length measurements are used to directly quantify differences in the grain size of polycrystals. The mean intercept
length also is used to estimate the density of grain boundaries (grain
boundary surface area per unit volume) from the stereological relationship $S_V = 2/\bar{l}$. Both $S_V$ and $\bar{l}$ can be used to study the effect of grain size on the mechanical properties of polycrystals. The Hall-Petch relationship predicts that the flow stress of polycrystals is a linear function of $(S_V)^{1/2}$ or $(\bar{l}\,)^{-1/2}$.
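As an illustration of how these quantities work together, the following sketch (with hypothetical intercept and flow stress data) derives S_V from the mean intercept length and fits the Hall-Petch constants:

import numpy as np

# Hypothetical mean intercept lengths (um) and measured flow stresses (MPa)
l_bar = np.array([45.0, 28.0, 15.0, 8.0])
stress = np.array([180.0, 205.0, 245.0, 300.0])

s_v = 2.0 / l_bar                      # grain-boundary density, S_V = 2 / l-bar (1/um)

# Hall-Petch: stress = sigma_0 + k * l_bar**(-1/2); fit k and sigma_0
k, sigma0 = np.polyfit(l_bar ** -0.5, stress, 1)
print(f"sigma_0 = {sigma0:.1f} MPa, k = {k:.1f} MPa*um^0.5")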
Differences in the values of $S_V$ and $\bar{l}$ for a series of polycrystals can be interpreted in terms of differences in grain size only under the assumption that the shape of the grains remains constant; for a constant volume of grains, $S_V$ increases (and $\bar{l}$ correspondingly decreases) as the grains become elongated. Thus,
interpretation of grain size data generally requires examination of the
shape of the grains (circularity shape factor, for example, as shown in Fig. 35).
Fig. 33 Histograms of grain section areas for microstructures in Fig. 32. (a) Plotted in linear form; (b) normalized by the mean area; (c) in log scale


Mean values of shape factors can be obtained for grain sections
and appropriate distribution functions. As a rule of thumb, 10% or less
variation of the mean value of shape factors can be viewed as insignificant.
Two-Phase Materials. Figure 36 shows examples of two-phase microstructures, which basically are described in terms of the volume fractions of the phases, $(V_V)_\alpha$ and $(V_V)_\beta$. Measurements of volume fractions usually are easily carried out using image analyzers, and the precision of such measurements is estimated using model reference images. In
practical applications, the major source of error in estimating volume fraction is biased selection of images for analysis. Images must be randomly sampled in terms of sectioning of the material and positioning the observation field.

Fig. 34 Histogram of the area-weighted distribution of grain sections for microstructures in Fig. 32. The plots are slightly distorted to better illustrate the differences.

Fig. 35 Distribution of the shape factor of grains of the microstructure in Fig. 29, computed as the reciprocal of the shape factor from Eq 2


Some other sources of error, such as overetching, have been discussed previously.
Estimates of volume fraction are directly used to interpret the physical
and mechanical properties of several materials; for example, density,
conductivity, and hardness. Some properties also are influenced by the
degree of phase dispersion (see also Chapter 6, Characterization of
Particle Dispersion). Under the condition of constant volume fractions in
particulate systems, the dispersion of phases can be described by the size of individual particles (the concept of particle size cannot be applied to the interpenetrating structures of some materials). Another approach to the description of dispersion can be based on measurements of the specific surface area, $S_V$, of interphase boundaries. This parameter defines the area of interphase boundaries per unit volume of material.
Values of $S_V$ for interphase boundaries can be obtained relatively easily with the help of image analyzers for isotropic structures. In this case, one can apply a grid of parallel test lines.

Fig. 36 Typical two-phase microstructures of a particle-modified polycrystalline matrix (the analyzed phase, graphite, is black). (a) Particle volume, 10.5% of material volume; surface-to-volume ratio, 18.6 mm$^{-1}$; number of particles per unit volume computed from stereological equations, 8480 mm$^{-3}$. (b) Particle volume, 12% of material volume; surface-to-volume ratio, 10.8 mm$^{-1}$; number of particles per unit volume computed from stereological equations, 1300 mm$^{-3}$


On the other hand, measurements for anisotropic structures require the use of vertical sections and a set of cycloids. Figure 37 shows a plot of measured values of $S_V$ and $V_V$ against material hardness for a series of specimens of duplex steel. The hardness is influenced by both the volume fraction of austenite and the amount of austenite-ferrite boundaries.
Modern image analyzers also provide a large number of other parameters to characterize images of two-phase materials including:
O Number of particle sections per image area, NA
O Number of particle sections per length of test lines superimposed on
the image, NL
O Particle section area, A, and equivalent diameter, $d_2$
O Particle section perimeter, P
O Shape factors
O Distances between particle sections
In terms of mathematical statistics, each of these parameters is considered
a well-defined random variable that can be used to characterize microstructural images. Unfortunately, these parameters do not have a simple
interpretation in terms of a description of a microstructure and model of
its properties. This is due to the fact that they refer to sections of particles
and not the particles themselves. Although deconvoluting geometry of
3-D objects from their sections is possible, it currently is a complex
exercise and is generally not recommended in simple applications.

Fig. 37 Effect of two microstructural parameters, volume fraction, $V_V$, and specific surface, $S_V$, on hardness. Source: Ref 14


The parameters listed above can be used for comparative studies of the microstructure of two-phase materials. Such studies yield acceptable
results under the assumption that the shape of particles in the studied
series of specimens is constant.
For particles of constant shape, the number of particle sections per unit
area, NA, can be used to rank the materials in terms of density of particles
per unit volume. For the images in Fig. 36, the higher particle density
occurs in Fig. 36(a).
Measurements of NA and NL are used to estimate the density of particles, NV, provided the shape of the particles is known. This is achieved using either stereological relationships, developed for a number of simple particle shapes, or computer models of particle geometry. Values of NV computed under the assumption of spherical particles in Fig. 36 are listed in the figure caption. On the other hand, the validity of the assumption about the
shape of particles can be tested by measurements of the shape factors for
the particle sections.
Modern image analyzers also can describe more complex characteristics of two-phase microstructures, such as:
O Contiguity of phases
O Morphological anisotropy
O Randomness in spatial distribution
Readers interested in using such descriptions are referred to papers
and textbooks on quantitative stereology.
Effect of Nonmetallic Inclusions on Fracture Toughness of High-Strength, Low-Alloy Steel. Mechanical properties of high-strength,
low-alloy steel (HSLA), especially fracture characteristics, are highly
influenced by nonmetallic inclusions. Consequently, testing the inclusion
content is important in quality control of these materials and therefore is
standardized (Ref 15, 16).
Numerous models of ductile crack initiation predict that the crack opening displacement, $\delta_c$, is affected by the interparticle distance.
Obtaining an unbiased estimation of this parameter is very difficult, but,
if the size distribution of inclusions is assumed to be invariant, the mean
distance between them should be inversely proportional to the volume
fraction. Consequently, the fracture toughness should also be inversely
proportional to the volume fraction of inclusions. This conclusion has been experimentally verified (Ref 17), and the results are shown in Fig. 38. The plot
in Fig. 38 shows a large scatter of the results, which is interpreted to be
a result of the majority of inclusions in HSLA steels being plastic sulfides.
These inclusions are highly deformed during plastic working and,
consequently, can introduce noticeable variation in the interparticle
distance in spite of stable volume fraction and size.


The effect of the total projected length of inclusions, $L_A$, a parameter relatively easy to estimate and also sensitive to the deformation of inclusions, on the fracture toughness of HSLA steels is shown in Fig. 39 (Ref 17). The test
points tend to concentrate along two separate curves corresponding to
longitudinal and transverse sections. This is interpreted two ways:
O Plastic deformation influences not only the inclusions, but also the
metallic matrix, and material anisotropy affects the fracture toughness.
O Inclusions tested on transverse sections appear significantly smaller
than those on longitudinal sections, which may result in underestimation of the projected length on transverse sections.
This hypothesis is supported by closer analysis of Fig. 39. Test points
from transverse sections fall below the points obtained from longitudinal sections.
Fig. 38 Effect of the volume fraction of inclusions on crack-opening displacement. Source: Ref 17

Fig. 39 Relationship between the total projected length of inclusions, $L_A$, and crack-opening displacement, $\delta_c$. Source: Ref 17


On the other hand, the small scatter of the points indicates that the
total projected length is a well-chosen parameter for analysis of the
fracture process of HSLA steels.
Effect of Boron Addition on the Microstructure of Sintered Stainless Steel. Sintering metal powders allows producing exotic and traditional (e.g., tool steels) materials having properties impossible to obtain
via other processing routes. One of the problems arising during sintering
of stainless steels is the presence of porosity in the final product. This
problem can be overcome by liquid-phase sintering by means of the
addition of boron. This requires determining the optimum boron content,
which is discussed subsequently based on part of a larger project devoted
to this problem (Ref 18).
A typical example of a microstructure of a material produced via
liquid-phase sintering is shown in Fig. 40(a), which contains both pores
and a eutectic phase. From a theoretical point of view, it is important to
quantify the development of the eutectic network, and it is evaluated by
counting the number of closed eutectic loops per unit test area. Measurement results of detected loops (denoted in Fig. 40b as white regions) are
shown in Fig. 41. The number of closed loops increases rapidly with increasing boron content. Therefore, this unusual parameter seems to be
a good choice for interpretation of the process of network formation. Note
that no eutectic phase is detected for boron additions smaller than 0.4%.
Liquid-phase sintering causes a decrease in porosity, as shown in Fig. 42.
The amount of pores is easily quantified using simple area fraction
measurements. Based on these relatively simple measurements, it is
clearly visible that a boron addition greater than 0.6% does not result in
further decrease in porosity. This observation was also confirmed by
means of classical density measurements (Ref 18).
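Counting closed loops of this kind amounts to labeling the connected regions enclosed by the eutectic network; a sketch of the idea with a synthetic binary image and an assumed calibration (not the data of Ref 18):

import numpy as np
from scipy.ndimage import label

PIXEL_MM = 0.001                                  # assumed calibration, mm per pixel

# Synthetic stand-in for Fig. 40(b): True inside regions enclosed by eutectic loops
enclosed = np.zeros((200, 200), dtype=bool)
enclosed[20:60, 30:70] = True
enclosed[100:140, 120:180] = True

labeled, num_loops = label(enclosed)              # each enclosed region = one closed loop
field_area_mm2 = enclosed.size * PIXEL_MM ** 2    # test area of the field of view
print(num_loops / field_area_mm2)                 # closed loops per unit test area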

Fig. 40 Example of sintered stainless steel microstructure: (a) pores and eutectic phase clearly visible; (b) detected pores and eutectic (black) and regions surrounded by closed loops of the eutectic (white). Source: Ref 18


Fig. 41 Effect of boron addition on the number of closed loops of eutectic phase in sintered stainless steel. White bars denote sintering at 1240 °C (2265 °F) and black bars denote sintering at 1180 °C (2155 °F). Source: Ref 18

Fig. 42 Effect of boron addition on the volume fraction of pores. White bars denote sintering at 1240 °C (2265 °F) and black bars denote sintering at 1180 °C (2155 °F). Source: Ref 18

Conclusions
The preceding analyses and examples show that there is a specific
relation between classical stereology and image analysis. Stereological
methods (which have a much longer history than image analysis) have an


appropriate, well-established theoretical background based on the developments of, for instance, geometrical probability, statistics, and integral
geometry. Thus, stereological methods constitute the basis of any quantitative interpretation of the results. On the other hand, image analysis
allows automation, better repeatability, and reproducibility of the quantification of microstructures. The main goal in interpretation of data from
image analysis is to skillfully adapt the rules of classical stereology to
these new tools. The methods of interpretation presented in this Chapter
should help resolve the most frequent, basic issues encountered in
laboratory practice.

References
1. L. Wojnar and W. Dziadur, Fracture of Ferritic Ductile Iron at Elevated Temperatures, Proc. 6th European Conference on Fracture, 1986, p 1941–1954
2. L. Wojnar, Effect of Graphite on Fracture Toughness of Nodular Cast Iron, Ph.D. dissertation, Cracow University of Technology, 1985 (in Polish)
3. E.E. Underwood, Quantitative Stereology, Addison-Wesley, 1970, p 1–274
4. L. Wojnar, Image Analysis: Applications in Materials Engineering, CRC Press, 1998, p 1–245
5. J. Rys, Stereology of Materials, Fotobit-Design, 1995, p 1–323 (in Polish)
6. K.J. Kurzydłowski and B. Ralph, The Quantitative Description of the Microstructure of Materials, CRC Press, 1995, p 1–418
7. A.J. Baddeley, H.J.G. Gundersen, and L.M. Cruz-Orive, J. Microsc., 142, 259, 1986
8. J. Bystrzycki, W. Przetakiewicz, and K.J. Kurzydłowski, Acta Metall. Mater., 41, 2639, 1993
9. J.C. Russ, The Image Processing Handbook, 2nd ed., CRC Press, 1995, p 1–674
10. J. Chrapoński, Analysis of Stereological Methods Applicability for Grain Size Evaluation in Polycrystalline Materials, Ph.D. dissertation, Silesian University of Technology, Katowice, 1998 (in Polish)
11. J.R. Taylor, An Introduction to Error Analysis, Oxford University Press, 1982, p 1–297
12. K.J. Kurzydłowski and J.J. Bucki, Scr. Metall., 27, 117, 1992
13. Standard Test Methods for Characterizing Duplex Grain Sizes, E 1181-87, Annual Book of ASTM Standards, ASTM, 1987
14. G. Żebrowski, Application of the Modified Hall-Petch Relationship for Analysis of the Flow Stress in Austenitic-Pearlitic Steels, Ph.D. dissertation, Warsaw University of Technology, Warsaw, 1998 (in Polish)
15. Standard Practice for Determining Inclusion Content of Steel and Other Metals by Automatic Image Analysis, E 1245-89, Annual Book of ASTM Standards, ASTM, 1989
16. Standard Practice for Preparing and Evaluating Specimens for Automatic Inclusion Assessment of Steel, E 768-80, Annual Book of ASTM Standards, ASTM, 1985
17. A. Zaczyk, W. Dziadur, S. Rudnik, and L. Wojnar, Effect of Non-Metallic Inclusions Stereology on Fracture Toughness of HSLA Steel, Acta Stereol., 1986, 5/2, p 325–330
18. R. Chrząszcz, Application of Image Analysis in the Inspection of the Sintering Process, master's thesis, Cracow University of Technology, Cracow, 1994 (in Polish)

CHAPTER 8

Applications

Dennis W. Hetzner
The Timken Co.

THE PRECEDING CHAPTERS of this book provide a fundamental understanding of stereological principles, how to best select and prepare
specimens for image analysis (IA), basic steps of the IA process, what
parameters can be measured, and how to interpret the results obtained
from these measurements. In this Chapter, several examples of how to
solve practical problems by IA are considered. The problems and
solutions presented in this Chapter are intended for persons well versed in
metallography but who are new to the field of IA. Some basic mathematical and statistical analysis is required to understand how the measurements are performed and how the results are presented. It is further
assumed that the readers are not well versed in computer programming.
Therefore, detailed explanations of how the macros are constructed and
what each line of programming actually does are included with each
example.
The statistical variations of properties measured using IA have already
been discussed. Because of these statistical variations within a single
specimen or group of specimens, it is best to perform measurements on a
large number of fields of view to accurately determine the average values
of the properties being measured and what variations can occur in these
properties. For these reasons, multiple fields of view are evaluated in most
instances. Because the same set of measurements is performed on each
field of view, it is most useful to have the IA system computer repetitively
perform these tasks. Depending on the manufacturer of the system, this
instruction set may be called an IA program, routine, or macro program.
For the remainder of this discussion, the term macro will be used to refer
to a set of instructions that are performed by an IA system. Because many
fields of view are generally analyzed by a macro, it is most helpful to have
an automatic stage and focusing system that can be controlled by the
computer.


While a macro can perform a very large number of different instructions, the basic operations performed in any IA procedure generally
include all or some of the following operations:
1. SYSTEM SETUP: Information that needs to be defined only once is
included at the start of the macro. This may include information
regarding the type of input to be processed, definition of the
properties, limits of the features being measured, and system magnification.
2. ACQUIRE GRAY IMAGE
3. SHADING CORRECTION
4. GRAY IMAGE PROCESSING
5. IMAGE SEGMENTATION: Creates a binary image
6. BINARY IMAGE PROCESSING: Image amendment or editing
7. MEASUREMENTS
8. WRITE MEASUREMENTS TO DATABASE
9. FINAL ANALYSIS: Statistical analysis, histograms, off-line processing of data, and further analysis
For repetitive measurements, a for or while statement is inserted after the preliminary setup, and a continue or next instruction follows the measurement instructions. The examples that follow briefly
outline what operations are used to solve the specific IA problems and
include photomicrographs showing the IA operations and calculations of
final results. In several cases, the macros used to perform the IA problem
are generically described using standard computer programming terminology and stereological terms. Terminology specifically related to a
particular brand of IA system is avoided.

Gray Images
The quality and integrity of the specimens that are to be evaluated are
probably the most critical factors in IA. No IA system can compensate for
poorly prepared specimens. While IA systems of today can perform gray
image transformations at very rapid rates relative to systems manufactured in the 1970s and 1980s, gray image processing should be used as a
last resort for metallographic specimens. Different etches and filters (other
than the standard green filter used in metallography) should be evaluated
prior to using gray image transformations. To obtain meaningful results,
the best possible procedures for polishing and etching the specimens to be
observed must be used. Factors such as inclusion pullout, comet tails, or
poor or variable contrast caused by improper etching cannot be eliminated
by the IA system. There is no substitute for properly prepared metallo-

graphic specimens (for more information, see also Chapter 3, "Specimen Preparation for Image Analysis," Chapter 4, "Principles of Image Analysis," and Chapter 7, "Analysis and Interpretation").
One of the most common hardware variables that can affect system
performance is the illumination system of an optical microscope. Proper
alignment of the microscope lamp and correct adjustment are of paramount importance for optimal performance of the IA system. The
alignment, resolution, and response of the TV scanner used to convert the
optical image into an electrical signal can have a large influence on the
performance of the IA system. After properly aligning the microscope,
some minor variations in the gray level of a blank image might be
observed. Because neither the scanner tube nor the lenses in the
microscope are perfect, some slight variations in the gray image can
occur. The shading corrector observes each pixel in a blank field of view
and changes its gray level to produce a uniform white background. This
correction factor is then applied to every image to correct for minor
problems in the system. If a large degree of correction is required, this
may indicate that the microscope is seriously misaligned, or the scanner
tube has deteriorated from use. Once these concerns have been properly
rationalized, the next major variable to consider is how to select the gray
level of the objects to be characterized.
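The shading correction just described can be expressed compactly; a minimal sketch, assuming 8-bit images held as NumPy arrays:

import numpy as np

def shading_correction(image, blank_field):
    """Flat-field correction: scale each pixel by the blank-field response
    so that an empty field of view becomes uniformly white (255)."""
    blank = np.maximum(blank_field.astype(np.float64), 1.0)  # avoid divide-by-zero
    corrected = image.astype(np.float64) * (255.0 / blank)
    return np.clip(corrected, 0, 255).astype(np.uint8)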
To understand how to properly select the gray level of the features to be
measured, consider how a television (TV) scanner converts an optical
image into an electrical signal. The image on the monitor consists of
many scanned lines. The TV camera scans (rasters) the image from left to
right and from top to bottom of the screen. The image consists of a large
number of small square elements called pixels. The gray levels of the
pixels on the monitor for an image of a typical microstructure range from
dark to bright. Historically, black has been assigned a gray level of zero,
and white has been assigned the highest values of possible gray levels.
Earlier systems used six bits of gray, so white had a gray level of $2^6 - 1 = 63$, while today, most IA systems use eight bits of gray, so white has a gray level of 255; that is, $2^8 - 1$.
A schematic of a portion of a monitor is illustrated in Fig. 1 where each
small square represents a pixel. For this example, the square and the
ellipse are assumed to have a uniform gray level, say 125, and the gray
level of the background is white (255). As an ideal beam scans across the upper portion of Fig. 1, the background pixels are counted as gray
levels of 255. When the beam enters the square, a gray level value of 125
is recorded for each pixel contained within the square. Thus, the gray
level histogram for this portion of the image contains 25 pixels of gray
level 125, and the remaining pixels have a gray level of 255 (Fig. 2a). A
cumulative gray level distribution for this case is shown in Fig. 2(b).
For the ideal ellipse, the gray level is, by definition, 125, but consider
how the IA system sees the ellipse, or more specifically, consider a pixel


on the perimeter of the ellipse. No pixel on the perimeter of the ellipse completely contains all white or all gray. Thus, pixels on the perimeter contain both material that is white (255) and material with a gray level of 125, and so the perimeter pixels have a gray level that can vary between
125 and 255. The histogram and cumulative distributions for the ellipse
are shown in Fig. 3.
What would happen to the square if it were slightly shifted or rotated
from the position shown in Fig. 1? In reality, a square aligned this perfectly with the pixel grid does not exist. Any object viewed in an IA system will, at best, behave like the ideal ellipse.

Fig. 1 Uniform gray level square and ellipse

Fig. 2 Analysis of ideal gray level square in Fig. 1: (a) histogram; (b) cumulative gray level histogram


In the previous examples, some under-detection might occur as the rastered beam moves from a bright area to a dark area. Conversely, as the
beam moves from the dark area to a bright area, some over-detection
might occur. This phenomenon can even occur for objects with very sharp
edges and uniform gray levels, such as the electron-beam machined
circles in Fig. 4(a). For etched specimens that usually do not have
extremely sharp edges, there is a transition zone between the matrix and
the etched constituent. Thus, in the direction of scanning, determining the
true edge or boundary of the object to be measured may not be
straightforward.
Sharpening of the edges of features can be handled several ways using an IA system, such as using a 3 × 3 matrix filter comprising elements a, b, c, . . . i (Table 1). For example, consider an image with a boundary that contains a light gray region surrounding a much darker object.

Fig. 3 Analysis of ideal gray level ellipse in Fig. 1: (a) histogram; (b) cumulative gray level histogram

In particular, consider the pixel with a gray level of 120 (the center pixel in the 3 × 3 matrix shown). Applying a filter of the form

a b c
d e f
g h i

to the matrix of pixels

240 110  19
255 120  20
270 130  21

yields a new mean pixel value, e, which is defined by:

e = (240 + 110 + 19 + 255 + 120 + 20 + 270 + 130 + 21) / 9 = 1185 / 9 ≈ 131.7

(Eq 1)

The matrix containing elements a, b, c, . . . i is called a kernel. Applying the kernel to every pixel in the original gray image yields a transformed image (see also Chapter 4, "Principles of Image Analysis"). For example,
consider the Vickers hardness microindentation in Fig. 5(a). When an
edge detection filter (such as the Sobel) is applied to the image, most of
the transformed image appears dark (Fig. 5b). However, at the edges of the indentation where a contrast change occurs, the image is bright. Thus, the Sobel filter reveals the edges of the indentation, but not the remainder of the image.

Fig. 4 Scanner response: (a) electron-beam machined circles; (b) AM 350 precipitation-hardenable stainless steel (UNS S35000)

Table 1 lists several common filters and corresponding
kernels; these are just a few of the possible kernels that can be
constructed. For example, at least five different 3 × 3 kernels can be

Fig. 5 Vickers microindentation in ingot-iron specimen. (a) Regular image. (b) Sobel transformation of image

Table 1 Selected common gray processing functions

Operator         Kernel                            Comments
Low-pass, mean   1 1 1 / 1 1 1 / 1 1 1 (x 1/9)     Noise reduction
Median           a b c / d e f / g h i             Smoothing; e is replaced by the median value pixel of a, b, c . . . i
Gaussian         1 2 1 / 2 4 2 / 1 2 1             Smoothing
Laplacian        0 -1 0 / -1 4 -1 / 0 -1 0         Edge enhancement
Gradient         0 -1 0 / -1 0 1 / 0 1 0           Edge enhancement
Smooth           0 1 0 / 1 0 1 / 0 1 0 (x 1/4)     Smoothing

defined for the Gaussian, Laplacian, gradient, and smooth operators. It is possible to apply kernels other than 3 × 3 to gray images, such as 5 × 5 and 7 × 7 kernels. In general, the more elements in the kernel, the greater the tendency of the operator to smooth over features in the image.
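The kernel operation itself is an ordinary discrete convolution; a short sketch applying the low-pass (mean) kernel of Table 1 to the neighborhood of Eq 1 (SciPy is used here only for convenience):

import numpy as np
from scipy.ndimage import convolve

# The 3 x 3 pixel neighborhood from Eq 1
pixels = np.array([[240, 110, 19],
                   [255, 120, 20],
                   [270, 130, 21]], dtype=float)

# Low-pass (mean) kernel from Table 1: all ones, scaled by 1/9
kernel = np.ones((3, 3)) / 9.0

filtered = convolve(pixels, kernel, mode='nearest')
print(filtered[1, 1])   # center pixel -> 1185/9, about 131.7 (Eq 1)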
The image created by the transformation also can be added to or subtracted from the original image for additional processing or segmentation. Some types of filters are used to smooth an image, but image
smoothing generally is not used in materials science applications because
properly defining the boundaries between various constituents is a
necessity in those applications. These types of gray image processing
were very slow in earlier IA systems, but are performed at very high rates
in computer systems of today.
Other IA systems handle the problems associated with edge detection by means of hardware rather than software. As previously described, there is a finite rise time of the video signal as it
previously described, there is a finite rise time of the video signal as it
moves from dark to light areas because of the limited resolution of any
scanner. The only threshold setting that results in determining a feature
boundary is the midpoint of the video signal on each side of the boundary.
An auto-delineating processor evaluates points on the boundaries between
different constituents, and on each side of these boundaries, and squares
the transition between gray levels to produce an ideal waveform.
Squaring the waveform eliminates the offset threshold error and the
halo error encountered when detecting different constituents using
simple detectors.
Of paramount importance for any type of IA problem is the absolute
requirement of a metallographic specimen of the highest possible quality
with respect to preparation and etching. Specimen preparation procedures
must eliminate artifacts, such as scratches, comet tails, and water stains,
and the etchant selected should create sharp edges and the greatest
amount of contrast between constituents. No amount of gray level
processing or other types of image correction can make up for poorly
prepared specimens (see Chapter 3, "Specimen Preparation for Image Analysis," and Chapter 7, "Analysis and Interpretation," for more detail on specimen selection, preparation, and etching).


Image Measurements
Consider again only the square consisting of pixels with a gray level of 125 in Fig. 1. If all the pixels with a gray level of less than 200 were counted, only the pixels within the square would be used for measurements. A binary image of this field of view can be formed by letting the detected pixels be white and the remaining pixels be black. Assume that each pixel is 2 μm long and 2 μm high. Thus, the perimeter of the square is 5 + 5 + 5 + 5 pixels, and the actual perimeter of the square is 20 pixels times 2 μm, or 40 μm. Similarly, the area of the square is 5 × 5, or 25, square pixels, and the actual area is 25 pixels × 4 μm² per pixel, which equals 100 μm².
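The same bookkeeping is easy to reproduce; a minimal sketch of the thresholding and area calculation, assuming the 2 μm pixel calibration above:

import numpy as np

PIXEL_SIZE_UM = 2.0                        # assumed calibration: 2 um per pixel

image = np.full((12, 12), 255, dtype=np.uint8)
image[3:8, 3:8] = 125                      # the 5 x 5 pixel square of gray level 125

binary = image < 200                       # thresholding: detected pixels become True

area_px = binary.sum()                     # 25 pixels
area_um2 = area_px * PIXEL_SIZE_UM ** 2    # 25 x 4 um^2 = 100 um^2
print(area_px, area_um2)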
This is the simplest problem that could be encountered. Because most
images contain more than one object or feature and a range of gray levels,
different parameters describing the image can be measured. Two types of generic measurements can be made for all IA problems: field measurements and feature-specific measurements (see also Chapter 5, "Measurements," and Chapter 7, "Analysis and Interpretation"). Field measurements refer to bulk properties of all features or portions of features
contained within a measurement frame. Feature-specific measurements
refer to properties associated with individual objects within a particular
measurement frame.
There are several different methods of defining the measurement frame.
The simplest approach is to measure everything that is observed on the
TV monitor. The measured features are represented by the white dots in
Fig. 6(a). The use of this type of measuring frame is well suited for field
measurements but has some limitations when feature-specific measurements are required. When making feature-specific measurements, it is
very important to understand how the selection of the measurement frame
can affect the accuracy of measurements. Even if the features being
measured do not vary greatly in size or shape, measuring everything
within the frame border will bias the results because any feature on the
frame border is truncated in size, as shown in Fig. 6 (a). Similarly, if only
the objects totally inside the image frame are measured, long objects that
extend beyond the boundaries of the measurement frame are disregarded,
as shown in Fig. 6(b). All objects can be properly measured if the
measuring frame is sized to be somewhat smaller than the input frame, as
shown in Fig. 6(c). However, in situations where contiguous fields of
view are analyzed, use of the frame in Fig. 6(c) may result in some
features being measured two times. A possible solution in this situation is
to measure objects extending beyond the top and right side of the frame
(Fig. 6d) or to measure objects extending beyond the bottom and left side
of the frame (Fig. 6e). These types of measuring frames are well suited for
analysis using a motorized stage on the microscope.


If these types of measuring frame options are not acceptable, it may be necessary to perform two separate sets of measurements on the same
specimen. The first set of measurements should be made at high
magnification only on small features and a second set of measurements
made at lower magnification only on large features. Combining the data
sets provides a true feature analysis. Most manufacturers of IA systems
offer circular measurement frames (Fig. 6f, g), which are not used very
often in materials science applications.
The simplest primary field measurements to make are:
Field measurement        Definition
Area                     Total number of detected pixels in the live frame
Perimeter                Total perimeter of detected pixels
Vertical projection      Total length of the detected vertical chords
Horizontal projection    Total length of the detected horizontal chords

These measurements are made on every feature, or portion of a feature, and represent a bulk analysis of what is contained within the measurement frame. Other measurements can be derived from these primary parameters, including:
O Area fraction: field area divided by measurement frame area
O Mean chord length: field area divided by horizontal projection
O Anisotropy index: vertical projection divided by horizontal projection

Fig. 6 Measurement frames used for image analysis. (a) All features displayed on the monitor are measured. (b) Features only within the frame are measured. (c) Features inside and touching the frame are measured. (d) Features inside and touching the right and top sides of the frame are measured. (e) Features inside and touching the left and bottom sides of the frame are measured. (f) Features only inside the circular frame are measured. (g) Features inside and touching the circular frame are measured.


Information regarding each specific object in the microstructure often is required, and the measurements are referred to as feature-specific
parameters. Some primary feature-specific measurements include area,
A; perimeter, P; horizontal projection, HP; vertical projection, VP; Feret
diameter; length, L; and width (Table 2 and Fig. 7). In addition,
numerous parameters can be derived from the basic feature-specific
parameters, including:
O Aspect ratio: length divided by width
O Roundness: $P^2/(4\pi A)$
O Circularity: $4\pi A/P^2$
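These shape factors are simple functions of the primary measurements; a short sketch (the function names are illustrative, not a vendor API):

import math

def circularity(area, perimeter):
    """4*pi*A / P^2: equals 1.0 for a perfect circle, less for irregular shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

def roundness(area, perimeter):
    """P^2 / (4*pi*A): the reciprocal of circularity, 1.0 for a circle."""
    return perimeter ** 2 / (4.0 * math.pi * area)

r = 10.0                                                  # a circle of radius 10 um
print(circularity(math.pi * r ** 2, 2.0 * math.pi * r))   # -> 1.0
print(roundness(math.pi * r ** 2, 2.0 * math.pi * r))     # -> 1.0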
Table 2 Primary feature-specific parameters

Parameter   Definition
Area        Total number of detected pixels in the feature
Perimeter   Total length of detected pixels on the feature boundary
Feret 0     Maximum feature diameter in the direction of the x-axis; Feret 0 = Xmax - Xmin
Feret 90    Maximum feature diameter in the direction of the y-axis; Feret 90 = Ymax - Ymin
Length      Maximum of all selected Feret diameters
Width       Dimension perpendicular to length
Xmax        Maximum distance along the x-axis where pixels for the feature are detected
Xmin        Minimum distance along the x-axis where pixels for the feature are detected

Fig. 7 Examples of feature-specific measurements: (a) area (shaded) of an object containing a hole; (b) perimeter (object outer boundary and hole circumference); (c) Feret minimum and maximum; (d) horizontal projection (heavy line on left side of object between top and bottom points)


Image Segmentation (Thresholding)


Thresholding Delta Ferrite Area Fraction in Stainless Steel. Errors
concerning the proper threshold settings for IA have been addressed by
ASTM Committee E 4 on Metallography in round-robin tests measuring
the amount of delta ferrite in stainless steel specimens, such as AM 350
(UNS S35000) shown in Fig. 4(b), and the volume fraction of beta-phase
in CDA (cold-drawn and annealed) brass. To properly measure the area
fraction of delta ferrite, the gray image must be segmented to distinguish
between delta ferrite and the martensitic matrix. The most common
method used to set the detection threshold is called the flicker technique
(Ref 1, 2). This procedure involves switching back and forth between the
gray image and the detected image. The correct threshold setting is
established by determining the gray level value where the detected phase
and the gray image appear to be identical. (The major disadvantage of this
technique is operator bias, and poor reproducibility between different
observers can lead to measurement errors.)
The proper threshold setting obtained using the flicker technique
generally occurs at a gray level near the minimum value of the gray level
histogram between two constituents. Some commercial software selects
the threshold setting based on the lowest gray level frequency value
between different peaks in the gray image histogram (Ref 3). The gray
level histogram or the cumulative gray level histogram can be used to
mathematically determine the proper threshold (Ref 4). The region in the
vicinity of the relative minimum of the gray level histogram can be
represented by a second-degree polynomial:
$y = a_0 + a_1 x + a_2 x^2$

(Eq 2)

where x corresponds to the gray level and y is the gray level frequency
value.
The first derivative of this curve is $y' = a_1 + 2a_2 x$. Setting $y'$ to 0 yields the relative minimum point of the curve, at $x = -a_1/2a_2$. The relative minimum is the proper threshold setting. This type of analysis can be
performed by exporting the gray level distribution to a spreadsheet.
Similarly, the relative minimum can be visually approximated by carefully observing the gray level histogram.
When a line scan from this specimen is compared to that from the
electron-beam machined circles in Fig. 4(a), the phase to be detected
varies in gray level in addition to beam overshoot and undershoot. Despite
a wide amount of scatter in the gray level histogram (Fig. 8a), it can be
represented by a second-degree polynomial (Fig. 8b). Using this analysis,
the threshold setting to use to measure the area fraction of delta ferrite in
AM 350 is 137. No image editing or image amendment was used for the


analysis. As indicated in Fig. 9, there is excellent agreement between the round-robin testing and the methodology presented here.
Thresholding Pearlite in Annealed AISI Type 1018 Steel. Methods
of threshold settings similar to those used in the ASTM round-robin test
program can be performed on specimens having a well-documented
composition and processing history. The methodology is demonstrated
using full-annealed 1018 steel (UNS G10180). When etched with picral,
the microstructure of annealed 1018 steel consists of ferrite (a light
etching constituent) and pearlite (a darker etching constituent). Weight
percentages of the primary alloying elements in the specimen are 0.20C,
0.82Mn, 0.008S, 0.038Al, 0.28Si, and 18 ppm O. Pearlite consists of
alternate plates of ferrite and cementite (Fe3C), which is clearly revealed
at high magnification. At low magnification, pearlite appears to be a dark
etching constituent.
It is best to use a moderately low magnification (e.g., 100×) to properly assess the area fraction of pearlite; a 16× objective lens was used for this experiment.
Fig. 8 AM 350 stainless steel gray levels: (a) histogram; (b) second-degree polynomial curve

Fig. 9 Comparison of delta ferrite measurement. Squares, ASTM round-robin test; diamonds, six-bit gray; triangles, eight-bit gray


Because pearlite is dark when etched and ferrite remains relatively bright, the darker pixels in the gray level histogram correspond
to the pearlite (Fig. 10a and 11a). Using the previously described analysis,
the threshold setting used to measure the area fraction of pearlite is
determined as 176 (Fig. 10b). That is, the pearlite contains all pixels
darker than 176 and ferrite contains the brighter pixels (Fig. 11b). The
binary image operator fill holes is used to correct the holes
observed in some pearlite colonies (Fig. 11c). Multiple fields of view
should be observed to obtain a good estimate of the area fraction of
pearlite. In this experiment, 100 fields of view were evaluated to
determine the average pearlite area fraction. Automatic stage movement
and focusing of the microscope was controlled by a macro program.

Fig. 10 Annealed AISI type 1018 carbon steel etched with picral: (a) gray histogram; (b) second-degree least-squares fit


The mean area percentage of pearlite is 24.97%, and the standard deviation is 1.31%. Minimum and maximum values of the pearlite area percentage are 21.97% and 28.25%, respectively (Fig. 12). These
data can be used to calculate the 95% confidence level (CL) and
percentage of relative accuracy (%RA) of the measurements. Theoretically, the weight fraction of pearlite should be 24.0% for an iron-carbon
alloy containing 0.20% C by weight. Because the densities of ferrite and
Fe3C are nearly equal, the volume fraction of pearlite would be the same
as the weight fraction. Considering that there is a small amount of MnS
and Al2O3 in this material, and holes that actually may have been ferrite

(a)

(b)

Fig. 11

(c)

Steps in threshold setting AISI type 1018 carbon steel: (a) microstructure revealed using picral etch; (b) pearlite
detected; (c) holes lled

218 / Practical Guide to Image Analysis

inside pearlite colonies were filled, a measured volume percentage of


24.9% is a very good approximation to the true amount of pearlite in this
specimen.
Macro Development. A discussion about how to develop a macro to perform these measurements follows. Referring to the introductory remarks about the instruction set of a basic macro program at the beginning of the Chapter, the information used to construct the macro is:

1. SYSTEM SETUP
   a. Clear any previously used images, graphics, and databases.
   b. Magnification or calibration constant
   c. Image frame to be used (Fig. 6a)
   d. Field measurements to be made, area percentage
   e. Load shading corrector image
   f. Define parameters to control the stage and auto-focus.
   g. Define number of fields to be evaluated (Field Number = 100).
   WHILE (LOOP): control movement of stage while Field Number < 100.
2. ACQUIRE TV INPUT
3. APPLY SHADING CORRECTION
4. IMAGE SEGMENTATION: based on previously described methods
5. BINARY IMAGE PROCESSING - FILL HOLES
6. MEASUREMENTS: field pearlite area percent
7. STORE MEASUREMENT IN DATABASE
   Move stage to next position.
   END WHILE: continue measuring until 100 fields have been analyzed.
8. INSPECT DATABASE LIST
   a. Statistical analysis
   b. Histogram analysis (Fig. 12)
This represents a very simple macro that can easily be constructed to perform the analysis. Generalized statements using standard computer and IA terminology are used to illustrate the basic instructions to perform the analysis. Unfortunately, each manufacturer of IA systems has its own jargon or phrases to describe these measurement functions, so it is not possible to create a program that has a 1:1 correlation with each system in use. (This does not seem to be unusual in the domain of computer programming; programs written in allegedly generic languages such as PASCAL and FORTRAN may or may not properly compile on different computer systems.)

Fig. 12 Distribution of pearlite area fraction in annealed AISI type 1018 carbon steel; mean = 24.97%, standard deviation = 1.31%

In fact, this very simple program may not properly operate on every
system. The early computer control systems used on IA systems were
quite slow when compared with the high-speed processor-based systems
of today. Furthermore, from the early stages of automation through today's systems, hardware components, such as the automatic
stage, automatic focus, and TV camera, have always responded at a much
slower rate than the computer can send instructions to them. It often is
necessary to artificially slow down the IA system to achieve proper
performance.
For example, consider what happens as the computer instructs the stage
to move to the next position. The stage rapidly responds and some
vibration occurs when the stage stops moving. Even before the stage has
started to move, the computer is instructing the microscope to focus or
move to the proper focusing position. As this instruction is being carried
out, the computer is further instructing the system to acquire an image.
Image processing begins at this point, and soon the stage is again moving.
Several problems can develop as the program runs. Once the stage
changes position, no instructions should be processed until any vibrations
created by the stage motion have subsided. At this point, it may be
necessary for the TV camera to have one or two seconds exposure to the
next field of view to allow the automatic gain control (if included in the
camera system) to set up the proper white level for the input signal. In
addition, automatic focusing should not be attempted until any possible
vibrations from the stage movement have subsided and the camera has
stabilized. After a little time passes, the auto-focus instruction can be
initiated and the image can be captured. This may be accomplished by
replacing instruction 2 in the macro by the following group of instructions:
2a. FOR timer = 1; timer <= 10; timer = timer + 1
2b. TVINPUT
2c. Wait (0.1 s)

2d. Display image on the monitor
2e. END FOR
2f. Auto-focus
2g. TVINPUT
These instructions will delay the IA system approximately 1.5 to 2
seconds after the stage moves to its new position. As the system acquires
a new image ten times and displays the image, vibrations from the stage
movement will subside, and the camera will become adjusted to the gray
level of the new field of view. The microscope then will auto-focus and
grab a good image for additional processing. The exact placement of this
instruction set within the macro may not be the same for every system.
However, the idea of slowing down the system remains the same.
Regardless of how the system operates, the instruction set should be
placed after the stage-move instruction and ahead of the focus and
image-acquire instruction.
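As an illustration of this settle-then-acquire pattern, here is a hedged Python sketch; the camera and stage objects and their methods are hypothetical stand-ins for whatever control interface a given IA system exposes:

import time

def acquire_settled_image(camera, stage, position):
    stage.move_to(position)          # hypothetical stage-control call
    for _ in range(10):              # roughly 1.5 to 2 s at video rate
        camera.grab_frame()          # grabbed and displayed, then discarded
        time.sleep(0.1)              # explicit delay between grabs
    camera.autofocus()               # hypothetical auto-focus call
    return camera.grab_frame()       # the frame that is actually analyzed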
Slowing down the execution of a macro program to obtain the correct
analysis can involve commands other than (1) move stage, (2) wait (1 or
2 s), and (3) TV input, which ensures that vibrations caused by the stage
movement have stopped. For example, in the run mode, the macro may
not pause to update graphic or display windows; this optimizes operational speed but can cause problems. Commands such as graphic flush
and update can be used to assure all graphic and image processing has
been properly completed. The use of similar commands for database
processing, such as database flush, ensures that all data in the RAM
(random access memory) of the computer is correctly written to the
appropriate databases.
In the preceding example, an image display instruction inserted after
segmentation and the fill holes command ensures that the proper image
processing has been completed prior to moving to the next instruction.
Similarly, a command such as update database or flush database ensures
that all database operations have been completed prior to starting the next
instruction in the macro.
Thresholding Brass. Historically, a green filter was used in most metallographic applications for black and white and gray image processing because film used for photomicroscopy (such as CPO, or contrast process ortho) is well suited for exposure to green light but not sensitive to red light. Furthermore, use of light of just one wavelength minimizes the effects of chromatic aberration and spherical aberration. Because green light was used, earlier achromatic lenses were spherically corrected only in the green-yellow part of the visible spectrum (Ref 5). Coated lenses of today are far superior in optical quality to those of earlier times.
In IA applications, the metallographer is not limited to just one
wavelength of useful radiation. For example, red light is superior to green
light in some applications in thresholding brass alloys. When green light

Applications / 221

is used to analyze CDA brass (Fig. 13a), the minimum gray level between the two phases is not well defined (Fig. 13b). However, illuminating the same field of view with red light yields a much sharper distinction between the constituents (Fig. 13c). To appreciate the difference between the two sources of illumination, the reader should try the following experiment. Using a 32× objective, first focus the microscope using a green filter. Then, replace the green filter with a red filter and refocus the microscope. Notice that there is a difference in the focal plane (z position) of the stage of approximately 8 μm.

Fig. 13 Thresholding brass: (a) microstructure of cold-drawn and annealed brass etched with Klemm's reagent; (b) gray level histogram using green filter; (c) gray level histogram using red filter

Image Amendment
In the previous discussion on thresholding annealed 1018 carbon steel,
pearlite detection is easily achieved due to the good contrast between
pearlite and ferrite obtained using a picral etch. A solution of 2 to 4% nital
etchant also can be used to reveal the microstructure of annealed 1018
carbon steel. In this case, ferrite grain boundaries are revealed together
with major constituents ferrite and pearlite (Fig. 14a). Therefore, when the pearlite is properly detected, some ferrite grain boundaries also are detected (Fig. 14b). To completely measure the pearlite, the grain boundaries must be removed. They can be removed using automated binary image amendment procedures or using manual image editing. Automated methodology is preferred because manual procedures are slow and subject to operator bias. Two simple binary image amendment operators are dilate and erode. Dilation involves adding pixels to, and erosion removing pixels from, a binary object. The operators open and close are combinations of erosion and dilation (Table 3). Other binary image operations can be combined to handle other types of problems.

In this particular case of pearlite image amendment for the nital-etched specimen, eroding (shrinking) the binary image by two pixels removes the grain boundaries. Restoring the pearlite colonies to their original size requires dilating (expanding) by two pixels. Thus, the instruction open 2 yields a binary image to properly assess the pearlite colony area fraction (Fig. 14c). This is a relatively simple example of applying image amendment to solve the problem.

Fig. 14 Steps in threshold setting AISI type 1018 carbon steel: (a) microstructure revealed using 2% nital etch; (b) image detected; (c) holes filled, followed by a square open 2 morphology
Both the magnitude and directionality of an amendment can be
controlled using various directional operators (Fig. 15) referred to as
structural elements (Ref 6). Operators shown in Fig. 15(a) through (e) are
used when amendment in specific directions is required. Figures 15(f) and
(g) show operators used to amend an image in square and octagonal
patterns, respectively.
These examples show the use of field parameters to quantify a microstructural feature of interest. Other possible useful measurements might be the quantity of nonmartensitic constituents in quenched and tempered steel products, the amount of porosity in castings or powder-metal components, the porosity level in plasma-sprayed coatings, the amount of a second phase present, and nearest-neighbor distances between constituents.

Table 3 Image amendment procedures (binary processing)

Name                 Instruction
Erode: n             Shrinks the binary image by n pixels
Dilate: n            Expands the binary image by n pixels
Outline              Only pixels on the perimeter remain
Open: n              Erode: n followed by Dilate: n
Close: n             Dilate: n followed by Erode: n
Ultimate erosion     Minimum set of pixels produced by performing a series of erosions; every set of points that would disappear after the next erosion remains unaltered
Ultimate dilation    Dilation without joining regions together
Skeleton normal      Shrinks binary image to the last connected points
Skeleton invert, or skeleton by influence zones (SKIZ)   Expands binary image but maintains boundaries between objects
Fill holes           Holes within regions are filled

Fig. 15 Structural elements, directional operators used to amend image: (a) horizontal line; (b) diagonal line at 45°; (c) vertical line; (d) diagonal line at 135°; (e) cross; (f) square; (g) octagon

Field and Feature-Specific Measurements

Grain size can be measured using field and feature-specific parameters. To illustrate, consider an ingot-iron specimen deformed 50% by cold rolling, then heated to a temperature of 700 °C (1290 °F) for 2 h and cooled to room temperature. After metallographic preparation, the specimen is etched using Marshall's reagent to reveal the grain boundaries. Most of the grain boundaries are clearly revealed; however, some are lightly etched and not clearly delineated (Fig. 16a). Image editing is the best method to restore the missing grain boundaries. A major disadvantage is that it is a manual procedure and very time consuming. Earlier IA systems used a light pen to highlight or modify the image on the monitor; today, the computer mouse is used for editing. Typical image editing procedures are listed below:
Function   Instruction
Accept     Analyze selected features
Reject     Disregard selected features
Line       Draw a line
Cut        Separate feature
Cover      Add to the detected image
Erase      Delete part of the detected feature

In specific situations where most of the grain boundaries are complete, automated procedures can be used to restore missing portions of grain boundaries. When using automated grain boundary reconstruction procedures, it is important to pay careful attention to the resulting images to
avoid creating erroneous grain boundaries and/or improperly rejoining
existing grain boundaries. Some degree of continuous operator inspection
is required. Another factor to consider is the thickness of the grain
boundaries in the digital image. Earlier investigators have shown that the
actual grain boundary area in a digital image may be as high as 19% prior
to thinning procedures (Ref 7).
In this example, nearly all of the grain boundaries are properly etched.
Image processing was performed using the following simplified procedure. Thresholding is performed to detect the dark etching grain boundaries, and then the image is inverted. At this stage of image processing,
the grains appear white, and the grain boundaries are black, as shown in
Fig. 16(b). The holes in the grains are filled (Fig. 16c), and the image is
reinverted (Fig. 16d). An octagonal close operator of 2 pixels is used to
join nearby incomplete grain boundaries (Fig. 16e). The image is
reinverted so that the grains are the operative feature; that is, white or
detected features (Fig. 16f). Then, a binary exoskeleton operator is used
to thin the grain boundaries and form a reconstructed set of grain
boundaries (Fig. 16g).
The first method of grain size measurement is based on the three-circle
method of intercept counting in accordance with ASTM E 1382 (Ref 8).
The procedure is similar to the Abrams three-circle method used in
manual analysis in accordance with ASTM E 112 (Ref 9). A macro routine
is written to draw three circles in the graphics plane. The radii of the circles are 220, 150, and 75 pixels, respectively. The circles are then
stored in the computer memory as a binary image. The image of the
circles is placed on the grain boundaries that have been outlined (Fig.
16h). The grain boundary image is called binary A, and the circles are
binary B. The binary image function A <AND> B results in an image
containing only intercepts of the circles and the grain boundaries (Fig.
16i), which is the intercept count for the image. In this example, the total
length of the circumferences of the three circles is 2796 pixels. The
calibration constant used in conjunction with the 32× objective lens is 0.770 μm/pixel. Therefore, the length of the intercept circles is 2151.5 μm (2.152 mm, or 0.08 in.). Ten fields of view were used in the evaluation. The mean number of intercepts was 88.4; therefore, the number of intercepts per unit length, NL, is:

$$N_L = \frac{88.4}{2.152} = 40.08~\mathrm{mm}^{-1}$$ (Eq 3)

The ASTM grain size is calculated from:

$$G = 6.643856\,\log(N_L) - 3.288$$ (Eq 4)

or

$$G = 2.886\,\ln(N_L) - 3.2888 = 2.886\,\ln(40.08) - 3.2888 = 7.36$$ (Eq 5)

Fig. 16 Ingot-iron image reconstruction: (a) microstructure of ingot iron etched with Marshall's reagent; (b) grain boundary detection and image inversion; (c) holes filled; (d) image inverted; (e) close 2, octagon performed; (f) image inverted; (g) binary exoskeleton performed; (h) grain boundary detection and three circles; (i) grain boundaries and circle intercepts; (j) grain areas within a rectangular image frame are measured; (k) number of grains within a circular image frame are counted

Having obtained the grain boundary image, the area of each grain can
easily be measured by inverting the binary image. This process makes the
boundaries black and the grains white. The area of each grain can then be
measured as a feature-specific property. For this type of analysis, selection
of the proper measuring frame is important. Only grains that are
completely within the measuring frame should be analyzed; otherwise,


the grains that extend beyond the measuring frame may be truncated. This
would decrease the average grain area and, thus, increase the calculated
grain size number (Fig. 16j). In this particular case, for ten fields of view,
1923 grains contained in the rectangular image frame were measured. The
mean grain area is 537.87 μm², and the standard deviation is 560.16 μm².
The mean grain size number is then calculated based on the equations
from ASTM E 1382:
$$G = -3.3223\,\log(\bar{A}) - 2.9555$$ (Eq 6)

where the mean grain area Ā is expressed in mm², or, with Ā in μm²:

$$G = 16.99 - 1.443\,\ln(\bar{A}) = 16.99 - 1.443\,\ln(537.87) = 7.92$$ (Eq 7)

Similarly, the number of grains within a known image frame can be used to obtain the stereological parameter NA, the mean number of interceptions of features divided by the total test area. In this analysis, a circular image frame with a radius of 215 pixels (165.44 μm) is used. The area of this frame is 85,989 μm² (or 0.0860 mm²). When this type of analysis is performed using manual methods, the grains completely within the circular measuring frame are counted as one grain, and the grains that are only partially within the frame are counted as one-half of a grain. The number of grains to use for NA is the sum of all the grains inside the frame plus the sum of the grains counted as one-half. When using automated IA procedures, only the grains that are completely within the measuring frame are counted. NA is determined from the number of grains within the measuring frame and the actual sum of the areas of these grains. Because the IA system can measure the actual area of the grains, this method produces more accurate results than counting grains that are partially within the measuring frame as one-half grain.

The average number of grains within the test frame is 122.6, and the standard deviation is 14.4. The average total area of these grains within the measuring frame for the 10 fields of view is 73,859 μm², and the standard deviation is 3,762 μm² (Fig. 16k). Therefore, NA = 1659 grains/mm². The grain size is calculated from:

$$G = 1.443\,\ln(N_A) - 2.954 = 1.443\,\ln(1659) - 2.954 = 7.74$$ (Eq 8)

In this particular example, the average area of the grains plus grain boundaries is 77,424 μm², approximately 4.8% more than the area of the grains alone. If this value were used for the measured grain area, the calculated grain size would be 7.68. Thus, the grain boundary area could account for approximately 0.06 of a grain size number.
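The three E 1382 relations used in this section are easy to collect into a short script for checking the worked values. A minimal Python sketch using the numbers quoted in the text:

import math

def g_from_intercepts(n_l):           # n_l in intercepts/mm (Eq 4)
    return 6.643856 * math.log10(n_l) - 3.288

def g_from_mean_area(a_um2):          # mean grain area in square microns (Eq 7)
    return 16.99 - 1.443 * math.log(a_um2)

def g_from_n_a(n_a_mm2):              # grains per square mm (Eq 8)
    return 1.443 * math.log(n_a_mm2) - 2.954

print(g_from_intercepts(40.08))       # ~7.36
print(g_from_mean_area(537.87))       # ~7.92
print(g_from_n_a(1659))               # ~7.74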
To ensure the same fields of view are used to measure grain size using
each of the three methods, a set of 10 images is first stored on the
computer hard drive. After storage, the images are recalled to proceed
with the grain size analysis.
Macro Development. Before developing the macros used to solve
these different problems, the different types of variables used are
considered. Four types of variables commonly encountered in IA macro
routines, and for most computer languages, are:
• Integer
• Real
• Character
• Boolean
Integer and real variables are used in situations where numbers are
required to describe a certain property. Examples of integer variables in
IA applications are an image number, field of view number, indexing
variable controlling a loop, and a line in a database. Variables such as the
system calibration constant or the area, length, and perimeter of an object
are examples of real variables. Real variables often are referred to as
floating-point variables, which means there is a decimal point associated
with the variables. Character-type variables include letters of the alphabet
and other symbols used to describe the properties of something. Character
often is abbreviated as Char. A group of characters placed together in
a specific order is referred to as a string. There are only two Boolean
variables, true and false. Boolean variables are used in macros to help
control how instructions are performed as the macro is running. Some
examples of the different types of variables are:
O INTEGER (a 1, field number 33, X pixel 7, and number of
Ferets 8)
O REAL (perimeter 376.2048, area percent 23.20876, and NL
14.8239)
O CHAR (a, b, x, %, and ?)
O BOOLEAN (x <12, y 17, and z 125.87)
O STRING (At first, this may seem confusing.)
The following examples briefly discuss how these variables may be used
in a macro routine. Again, each system may handle these types of
variables in a slightly different manner. For a more complete discussion of
variable types, the reader should refer to a basic computer textbook.
MACRO STORE IMAGES
1. SYSTEM SETUP
   a. Delete previous images and graphics.
   b. Load the shading corrector.
   c. Set up the path for storing the images: C:\ASM_BOOK\GRAINS.
2. Define String Variables:
   a. s1 = "grain"
   b. s2 = ".TIF"
   FOR loop = 1; loop <= 10; loop = loop + 1
3. TVINPUT 1
4. Apply shading correction; resulting image is 2.
5. s = s1 + string(loop) + s2
6. Write s
7. Save Image 2, s
8. Move to next field of view
END FOR
The macro STORE IMAGES operates in the following manner:
1. Delete from the system any images and graphics from previous applications. Load the shading corrector for later use. Set the path to instruct the computer where to store the images that will be used for further analysis. In this example, images will be stored on the C-drive in folder ASM_BOOK\GRAINS.
2. Define two string variables (their use is explained later). Create a loop instruction; the loop control variable, loop, varies from 1 through 10.
3. Once in the loop, acquire a live TV image numbered 1.
4. Apply the shading corrector to the live image (the resulting image is identified as number 2).
5. Define a new string variable s, which is the concatenation of three other string variables.
6. The computer is instructed to write the variable s. The first time through the loop, loop = 1; thus s = grain1.TIF. The second time through the loop, s has the value grain2.TIF; the third time through the loop, s = grain3.TIF, and so forth.
7. Rename image 2 as s and store it on the computer C-drive in the appropriate folder. Referring to the setup instructions, the first image is stored as C:\ASM_BOOK\GRAINS\grain1.TIF, the second is stored as C:\ASM_BOOK\GRAINS\grain2.TIF, and so forth.
8. Start the process again.
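In a general-purpose language, the same filename-building loop collapses to a few lines. A Python sketch; save_image is a hypothetical stand-in for the IA system's store instruction:

import os

for loop in range(1, 11):                        # fields of view 1 through 10
    s = "grain" + str(loop) + ".TIF"             # grain1.TIF ... grain10.TIF
    path = os.path.join(r"C:\ASM_BOOK\GRAINS", s)
    # save_image(image2, path)                   # hypothetical IA-system call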

Stored images can later be recalled for grain size analysis. Because the
grain size is determined using three different methods, grain boundaries
are reconstructed and the reconstructed grain images are saved to disk for
additional analysis. The macro used to reconstruct the grains is:
MACRO RECONSTRUCT GRAINS
1. SYSTEM SETUP
   a. Delete previous images and graphics.
   b. Load the shading corrector.
   c. Set up the path for storing the images: C:\ASM_BOOK\GRAINS.
2. Define String Variables:
   a. s1 = "grain"
   b. s2 = ".TIF"
   c. b1 = "bin_gr"
   d. b2 = ".img"
   FOR loop = 1; loop <= 10; loop = loop + 1
   s = s1 + string(loop) + s2
   bb = b1 + string(loop) + b2
   write s, bb
3. LOAD IMAGE s, #1 (Fig. 16a)
4. IMAGE PROCESSING
   b. SEGMENT IMAGE (detect grains and invert; darker than 200) #1, #2
   c. BINARY FILL HOLES #2, #3
   d. INVERT IMAGE #3, #4
   e. BINARY CLOSE, 2 pixels octagonally, #4, #5
   f. INVERT IMAGE #5, #6
   g. BINARY EXOSKELETONIZE IMAGE #6, #7
   h. DISPLAY IMAGE #7 (white grains, black grain boundaries)
5. SAVE IMAGE #7, bb (the grains)
END FOR
System setup for this macro is similar to that for MACRO STORE IMAGES. Two additional defined string variables, b1 and b2, are used to store the reconstructed images, while s1 and s2 are used to retrieve the previously stored gray images of the grain boundaries. A loop is created to do ten different fields of view using the same variables from the previous macro. Step 3 loads the previously stored images from the computer; the image to be loaded is defined as s, and the new image is referred to as #1. The terminology used for image processing is image process, input, output. For any image processing step, the first number or name is the input image, and the second number or name is the output image. The new image is segmented so the grains are detected (instruction 4b). Next, a binary hole filling operation (instruction 4c) helps remove undetected regions within the grains. The image is then inverted so that boundaries are an operational part of the image (instruction 4d). The octagonal binary close by 2 operator first dilates the grain boundaries by two pixels. This joins any pixels that are within four pixels of each other. Then the image is eroded by two pixels, restoring most of the grain boundaries to their original thickness (instruction 4e). Another image inversion (instruction 4f) makes the grains the portion of the image that can be processed. A binary exoskeletonize operation thins the grain boundaries that were enlarged as a result of the earlier operations (instruction 4g). The reconstructed binary image is saved to the computer drive, and the image name is defined by the value of the variable bb.
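For readers who want to prototype the same chain outside a dedicated IA system, the operations map onto scipy and scikit-image functions. The sketch below is an approximation, not the system's own operators: the disk(2) footprint stands in for the octagonal structural element, and skeletonizing the boundary network approximates the binary exoskeleton step; the gray image and the 200 threshold follow the macro above.

import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import binary_closing, disk, skeletonize

def reconstruct_grains(gray):
    grains = gray > 200                        # segment: boundaries etch dark
    grains = ndi.binary_fill_holes(grains)     # 4c: remove undetected spots
    bounds = ~grains                           # 4d: boundaries become operative
    bounds = binary_closing(bounds, disk(2))   # 4e: close by ~2 px (octagon-like)
    grains = ~bounds                           # 4f: back to grains
    skel_bounds = skeletonize(~grains)         # 4g: thin the boundary network
    return ~skel_bounds                        # white grains, thin black bounds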

Three-Circle Method. The three circles used to measure grain size by means of intercept counting can be created several ways. A television monitor comprising 640 pixels in the x direction and 480 pixels in the y direction is used in these experiments, and the circle radii are arbitrarily chosen to be 220, 150, and 75 pixels. The circles are first constructed in the graphics plane using a macro command of the form: Graphic Circles (CenterX, CenterY, Radius, Color). The center of the monitor is selected as the center of each circle. Thus, CenterX = 320 and CenterY = 240. Circles are drawn as color 15, or white. After merging the circles into the image plane, they are stored on the computer hard drive as 3_circles.tif. If the particular IA system being used does not support graphic operations, it still is possible to create circles for this measuring methodology. The procedure to create a circle 200 pixels in diameter is:
1. Set up a circular detection frame 200 pixels in diameter.
2. Detect everything inside the frame.
3. Name the detected image DIA_200.
4. Change the diameter of the circular image frame to 198 or 196 pixels
in diameter.
5. Detect everything within this frame.
6. Name the resulting image DIA_198.
The Boolean operation DIA_200 <AND> (NOT DIA_198) results in a circle having an outside diameter of 200 pixels. Smaller diameter circles are similarly constructed and saved for future use. The next step is to develop a macro for intercept counting.
MACRO GRAIN_INTERCEPTS
1. SYSTEM SETUP
   a. Previously used variables are created.
   b. Calibration factor for 32× objective lens; Calconst = 0.7695 μm/pixel
   c. FIELD FEATURES to be measured are defined as COUNT.
   d. DRAW measured field features in color.
   e. LOAD IMAGE 3_circles.tif, circles.
   f. b1 = "bin_gr"
   g. b2 = ".img"
   FOR loop = 1; loop <= 10; loop = loop + 1
   bb = b1 + string(loop) + b2
2. LOAD IMAGE bb, 1
3. IMAGE PROCESSING
   h. INVERT IMAGE 1, grain_bounds
   i. BOOLEAN grain_bounds <AND> circles, COUNTS
4. IMAGE DISPLAY COUNTS
5. MEASURE FIELD PARAMETERS, COUNTS, INTERCEPTS
6. DRAW MEASURED OBJECTS IN COLOR
END FOR
This macro sets up a group of parameters similar to those used in the
previous examples. However, now the pixels in the image are given a
calibrated value by instruction 1b. The macro defines the field parameters
to be measured. Only the total number of counts per field has to be
determined for intercept counting. An additional instruction to display the
measured field objects (intercepts) in color is included. The image of the
three circles from the previous macro is loaded into memory and named
circles. Looping instructions and image loading are done in the same
manner as in other examples. The image of the grains is inverted to reveal
the grain boundaries (instruction 3h). The Boolean AND operator is used
to determine which pixels are contained in the set comprising the grain
boundaries and the circles (instruction 3i). (After this operation, a dilation
of 2 or 3 pixels helps to show the results more clearly.) The intercepts are
then counted as a field parameter (instruction 5) and stored in the database
INTERCEPTS; results are displayed on the monitor in color (instruction
6). The calibration constant is used to convert intercept counts into
number of intercepts per unit length, NL.
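The A <AND> B operation is equally simple to prototype: draw the circle outlines into a Boolean plane, intersect it with the grain-boundary image, and count the connected blobs. The frame size and radii below follow the text; everything else is an assumption of this sketch.

import numpy as np
from scipy import ndimage as ndi

def circle_outlines(shape=(480, 640), radii=(220, 150, 75)):
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(yy - shape[0] // 2, xx - shape[1] // 2)
    plane = np.zeros(shape, bool)
    for radius in radii:
        plane |= np.abs(r - radius) < 0.5        # ~1 px wide ring
    return plane

def intercept_count(grain_bounds, circles):
    hits = grain_bounds & circles                # Boolean A <AND> B
    _, n = ndi.label(hits)                       # each blob is one intercept
    return n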
The macro used to measure the area of the grains is similar to the
intercept macro. SETUP defines feature-specific parameters instead of
measuring field parameters. Feature-specific properties measured for each
grain include area, perimeter, Feret maximum, Feret minimum, anisotropy, and roundness. Only grains completely within the 620 × 440 pixel
imaging frame are measured (Fig. 16j). Measured properties are written
into a database after each of ten fields of view, and a histogram of
frequency versus grain area is constructed. Statistical data, such as mean
grain area and standard deviation of the grain area, are obtained from the
histogram, and this information is used to calculate the ASTM grain
number (Eq 7).
The number of grains within a frame of known area is determined using a similar macro. The field parameter count is the basic measurement, using a circular image frame with a radius of 215 pixels; the calibration constant is used to calculate the actual frame area. The number of grains in each field of view divided by the frame area yields the number of grains per unit area in each field of view (Fig. 16k). The mean number of grains per unit area is used to calculate the ASTM grain number as described by Eq 8.
Grain size determined in accordance with ASTM E 1382 yields slightly
different results for each of the three measurement methods described in
the standard even though the exact same images were used for each
analysis. Differences stem from two sources: each method calculates the
grain number in a different way, and the image frames used in each

234 / Practical Guide to Image Analysis

procedure are not identical. These differences would decrease by using a


much larger number of fields of view to calculate the grain size. This
suggests why experimental results obtained by several observers or
several laboratories often are quite different. Differences in the final
results are noted even though the procedures described in the standard are
properly followed. However, the differences fall within the results
reported in the precision and bias statement in ASTM E 112.
In this example, the entire grain boundary network is drawn to clearly
illustrate the three procedures. There is an advantage to using the
three-circle methodology compared with the area procedures, which is
particularly important in situations where a large amount of grain
boundary reconstruction and image processing is required to create
images suitable for further analysis. Only the intercepts between the
circles and grain boundaries need to be counted in the three-circle
technique. Therefore, the mouse can be used to just make tick marks
where the grain boundaries and the circles intercept. The A <AND> B
Boolean IA procedure is then used to highlight the intersection of the
circles and the tick marks. This procedure can be accomplished very
quickly compared with manually reconstructing all the grain boundaries.

Feature-Specific Distributions
Nonmetallic inclusion distribution in steel is a good example of
using IA to make feature-specific measurements. This experiment involves the analysis of an AISI type 8620 (UNS G86200) low-alloy steel
specimen from a commercial-quality, air-melted heat. The material is
calcium treated to modify inclusion morphology (shape) and to lower the
sulfur and oxygen content of the steel. The specimen is from a
longitudinal plane from the center of a 150 mm (6 in.) diameter bar. The
weight percent of the inclusion-forming elements is 0.020S, 0.022Al, 27
ppm Ca, 9 ppm O, and 0.85Mn.
ASTM E 1245 provides a good set of guidelines to determine the
volume fraction of inclusions (second-phase constituents) in a metal using
automatic IA (Ref 10). For this particular alloy steel, there are very few
oxide inclusions relative to the number of manganese-calcium sulfides
because the oxygen content of the steel is only 9 ppm. Furthermore, the
majority of the oxide inclusions are encapsulated by sulfides due to
calcium treating. Therefore, in accordance with E 1245, all types of
inclusions are treated the same. Procedures to distinguish between the
various types of inclusions have been described elsewhere (Ref 11).
This analysis is performed using a 32× objective lens, and the calibration constant of the system is 0.77 μm/pixel. A 620 × 480 pixel image frame corresponds to a measuring frame area of 181,059.8 μm². The analysis is performed on 300 fields of view, which corresponds to a total observed area of 54.3 mm² (0.084 in.²). Length (maximum Feret), thickness (minimum Feret), area, perimeter, anisotropy (length divided by thickness), and roundness (R = P²/4πA) are measured for each inclusion. These types of measurements are referred to as feature specific because properties associated with each individual constituent in the image frame are measured. In contrast, field measurements represent the entire sum of a particular parameter for an entire field of view.

Table 4 Statistical parameters from feature-specific inclusion data for an AISI 8620 alloy steel containing 0.029% sulfur

Parameter   Area, μm²     Perimeter, μm   Length, μm   Anisotropy, length/thickness   Roundness, P²/4πA
Mean        30.76         22.68           9.66         2.65                           1.67
Stdev       82.44         28.82           13.07        2.01                           1.31
Sum X       115,089       84,862          36,132       9,929                          6,241
Sum X²      28,964,783    5,031,112       988,181      41,525                         16,842
Min         2.37          6.75            2.76         1.13                           1.03
Max         3647.12       439.73          212.44       23.48                          25.99
P, perimeter; A, area
The specimen is oriented on the microscope stage so the longitudinal
plane of deformation (i.e., the rolling direction) is coincident with the
y-axis of the stage (Fig. 17). Shading correction is applied after acquiring
an image, and binary operations are used following image segmentation.
Any holes within the inclusions are filled (Fig. 17b), and a vertical close
operation of 3 pixels is used to join inclusions separated from each other
by a distance of approximately 4.6 μm in the deformation direction (Fig. 17c). A vertical open of 1 pixel is used to eliminate from the population any inclusions less than 1.5 μm long (Fig. 17d), and a square binary close of 2 pixels is used to join any inclusions within 3 μm of each other in any
direction (Fig. 17e). Results of measurements for each field of view are
written to a database, and statistical parameters, such as sums of
parameters and sums of the squares of the parameters, are tabulated (see
Table 4).
The inclusion length histogram (Fig. 18) is skewed to the right. That is, there are many more class sizes of long inclusions than short inclusions relative to the mode of the distribution (see data in Table 5). A plot of the log of inclusion length as a function of cumulative probability (Fig. 19) suggests that the inclusion length has a log-normal distribution. The statistical parameters of the log-normal distribution are calculated using the length values listed in Table 4. The mean value of the inclusion length is given by:

$$\bar{L} = \frac{1}{n}\sum_{i=1}^{n} L_i$$ (Eq 9)

where Li is the length of the ith inclusion and n is the total number of inclusions measured. Similarly, letting Li² be the square of the length of the ith inclusion, the standard deviation of inclusion lengths is:

$$s = \sqrt{\frac{n \sum_{i=1}^{n} L_i^2 - \left(\sum_{i=1}^{n} L_i\right)^2}{n(n-1)}}$$ (Eq 10)

Fig. 17 AISI type 8620 alloy steel bar (UNS G86200): (a) inclusions; (b) holes filled; (c) vertical close of three pixels; (d) vertical open of 1 pixel; (e) square close of 2 pixels

Using these parameters, the mean of the log-normal distribution, μ, can be calculated by:

$$\mu = \ln\left[\frac{\bar{L}}{\sqrt{(s/\bar{L})^2 + 1}}\right]$$ (Eq 11)

and the standard deviation of the log-normal distribution can be obtained from:

$$\sigma = \sqrt{\ln\left[(s/\bar{L})^2 + 1\right]}$$ (Eq 12)

The median value, L50%, of the cumulative distribution is calculated from:

$$L_{50\%} = \exp(\mu)$$ (Eq 13)

and, similarly, the values corresponding to probabilities of L84% and L99.9% are:

$$L_{84\%} = \exp(\mu + \sigma)$$ (Eq 14)

and

$$L_{99.9\%} = \exp(\mu + 3.09\,\sigma)$$ (Eq 15)

Fig. 18 Inclusion-length (Feret max) histogram for AISI type 8620 alloy steel containing 0.029% sulfur

Table 5 Inclusion length distribution data of an AISI 8620 alloy steel containing 0.029% sulfur

Length, μm   Frequency   Cumulative frequency   Cumulative probability, %
1            0           0                      0.00
2            0           0                      0.00
3            465         465                    12.43
4            781         1246                   33.30
5            547         1793                   47.92
6            370         2163                   57.80
7            225         2388                   63.82
8            227         2615                   69.88
9            118         2733                   73.04
10           74          2807                   75.01
11           119         2926                   78.19
12           68          2994                   80.01
13           63          3057                   81.69
14           65          3122                   83.43
15           51          3173                   84.79
16           37          3210                   85.78
17           43          3253                   86.93
18           37          3290                   87.92
19           28          3318                   88.67
20           19          3337                   89.18
25           16          3452                   92.25
30           11          3534                   94.44
40           -           3625                   96.87
45           -           3654                   97.65
50           -           3672                   98.13
60           -           3691                   98.64
70           -           3711                   99.17
80           -           3722                   99.47
90           -           3725                   99.55
100          -           3731                   99.71

Fig. 19 Cumulative distribution of inclusion length in AISI type 8620 alloy steel containing 0.029% sulfur

After calculating the log-normal parameters, it is possible to determine what number or percentage of inclusions per unit area are likely to exceed some particular length (Ref 12). For the 8620 inclusion data:

Number = 3741
Sum lengths = 36,132
Sum L² = 988,181
Mean = 9.6584 μm
Standard deviation = 13.0733 μm
μ = 1.7473
σ = 1.0203
L50% = 5.74 μm
L84% = 15.92 μm
L99.9% = 134.30 μm
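These values can be reproduced directly from the tabulated sums. A minimal Python sketch of Eq 9 through 15:

import math

n, sum_L, sum_L2 = 3741, 36132.0, 988181.0
mean = sum_L / n                                           # Eq 9
std = math.sqrt((n * sum_L2 - sum_L**2) / (n * (n - 1)))   # Eq 10
cv2 = (std / mean) ** 2
mu = math.log(mean / math.sqrt(cv2 + 1))                   # Eq 11 -> 1.7473
sigma = math.sqrt(math.log(cv2 + 1))                       # Eq 12 -> 1.0203
print(math.exp(mu))                                        # L50%   ~ 5.74 um
print(math.exp(mu + sigma))                                # L84%   ~ 15.92 um
print(math.exp(mu + 3.09 * sigma))                         # L99.9% ~ 134 um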

A brief description of the macro used for the analysis is:

MACRO INCLUSION MEASUREMENT
1. SETUP
   a. Load calibration factors.
   b. Load shading corrector.
   c. Define feature properties to be measured.
   d. Define color set to display measured features.
   e. Measurement frame definition
   f. Define stage scanning parameters (15 X fields, 20 Y fields).
   g. Focus stage at corner positions.
WHILE (STAGE)
2. MOVE STAGE (x, y, z)
3. TVINPUT 1
4. SHADING CORRECTION 1, 0, correct
5. IMAGE SEGMENTATION correct, detected, 175
6. IMAGE AMENDMENT
   a. BINARY FILL HOLES detected, filled
   b. CLOSE VERTICAL by 3, filled, v_join
   c. OPEN VERTICAL by 1, v_join, no_small
   d. CLOSE SQUARE by 2, no_small, result
7. IMAGE DISPLAY, result
8. MEASURE FEATURE PROPERTIES, result, inclusion
9. COLOR DISPLAY MEASURED INCLUSIONS
END WHILE
10. DISPLAY DATALIST, inclusion
11. HISTOGRAM, inclusion, Feretmax

The inclusion macro contains many of the setup instructions discussed previously, as well as stage movement information (instruction 1f) and
information regarding the determination of the focusing plane at each of
the four corner fields of view to be analyzed (instruction 1g). With this
type of instruction set, the IA system calculates the position of the focus
plane for each field of view. As the stage moves from field to field, the
focus control automatically moves to the calculated focus position. By
moving in this manner, an auto-focus instruction does not have to be
executed after every stage movement. This instruction set is only one
method of stage control and may not be available on all IA systems.
Again, the stage and focus instructions in the macro reinforce the
importance of proper specimen preparation. Implicit in determining the
focus position (z-coordinate) for each field of view is the assumption that
the specimen is perfectly flat. Any deviation from a flat specimen results
in poor focusing as the system scans the predefined fields of view.
Because 300 fields of view were examined to create the data set, the
stage must automatically be moved to each field of view. This is
accomplished by using a while loop. The first instruction in the loop tells
the system to move to the appropriate x, y, and z positions (instruction 2).
This is followed by shading correction and image segmentation prior to
image amendment. The specimen is placed on the stage so the direction
of deformation is parallel to the y-axis of the stage (the vertical direction
on the monitor). After detecting inclusions, undetected regions within the
inclusions are filled using a binary hole-filling operation (instruction 6a).
A vertical close of 3 pixels is used to join inclusions that are within approximately 4.6 μm of each other in the direction of hot working (instruction 6b). Because each pixel is 0.77 μm long, a dilation of 3 pixels extends the top and bottom of each inclusion by 3 × 0.77 μm = 2.3 μm. Thus, the tops and bottoms of adjacent inclusions are joined together if they are less than two times this distance apart. Next, a vertical open of 1 pixel (instruction 6c) removes inclusions less than approximately 1.5 μm in height. The last image amendment is a square close of 2 pixels (instruction 6d), which combines inclusions that are within approximately 3 μm of each
other in any direction. Instruction 7 displays the results of image
amendment processing. The inclusions are then measured, and the
predefined properties are written to database inclusion for further analysis
(instruction 8). Then, a color display of the measured inclusions appears
(instruction 9).
The system continues in this manner until 300 fields of view have been
analyzed, after which the database is opened for inspection and the
histogram of the maximum Feret diameters is plotted. Further analysis
(described in Eq 9 through 15) is possible using statistical information stored in
the database.
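The directional amendment in instructions 6(a) through 6(d) can be approximated outside the IA system with scikit-image, using rectangular footprints in place of the vertical and square structural elements; imitating a close or open "by n pixels" with a footprint 2n + 1 pixels tall is an assumption of this sketch rather than the system's exact behavior.

import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import binary_closing, binary_opening

def amend_inclusions(detected):
    filled = ndi.binary_fill_holes(detected)              # 6a: fill holes
    v_join = binary_closing(filled, np.ones((7, 1)))      # 6b: vertical close by 3
    no_small = binary_opening(v_join, np.ones((3, 1)))    # 6c: vertical open by 1
    return binary_closing(no_small, np.ones((5, 5)))      # 6d: square close by 2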
The binary image amendment used in this example shows how to join
inclusions that are relatively close to each other to create a long inclusion.
It is not the same degree of amendment required to perform an inclusion

analysis using manual methods in accordance with ASTM E 45 or using automated IA procedures described in ASTM E 1122 (Ref 13, 14). These procedures require that type B oxide inclusions within 15 μm of the inclusion centerlines be joined together, and that type C inclusion stringers that are within 40 μm be counted as one long inclusion.
Many IA systems have customized software to perform this type of
analysis. The problem is handled using a procedure referred to as image
tiling, or creating a large montage comprising many images obtained at
higher magnification [see the section Large-Area Mapping (LAM) later
in this Chapter]. After creating the composite image of the entire
specimen, inclusions from all the fields of view are joined together using
image amendment in accordance with the previously mentioned parameters, and an inclusion rating for the entire specimen is determined. The
details of creating these types of images and their evaluation are beyond
the scope of this Chapter.
Grain Size Distributions. Refer to the areas of the grains in the ingot-iron specimen previously discussed. While the shapes of the individual grains are very complex, they can be considered approximately to be spheres (Ref 15). Let the equivalent spherical diameter corresponding to a grain of area A be represented by:

$$D = \sqrt{\frac{4A}{\pi}}$$ (Eq 16)
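Converting measured grain areas to equivalent diameters is a single vectorized step. A minimal numpy sketch, where areas is a hypothetical array of measured grain areas in μm²:

import numpy as np

areas = np.array([537.9, 120.4, 2010.0])    # hypothetical areas, square microns
d_eq = np.sqrt(4.0 * areas / np.pi)         # Eq 16: equivalent spherical diameter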

The distribution of equivalent grain diameter (converted from the areas of the previously measured 1923 grains) appears to be log-normal (see Table 6 and Fig. 20a and b). The analysis of the equivalent grain diameter distribution is identical to that used for the inclusion length distribution, using Eq 9 through 15 and the data in Table 7, and yields the following results:

Number = 1923
Sum Deq = 45,312
Sum Deq² = 1,316,928
Mean = 23.5632 μm
Standard deviation = 11.3874 μm
μ = 3.0547
σ = 0.4581
L50% = 21.22 μm
L84% = 33.54 μm
L99.9% = 87.39 μm

High-Speed Steel. Carbide distribution in high-speed tool steel (M 42, UNS T11342) can be analyzed using feature-specific measurements. Most of the carbides in these materials generally are much smaller than inclusions in alloy steels. To properly resolve the carbides optically, the specimens must be examined at a magnification of 1000× in the etched condition (Fig. 21a). Because these alloys often contain retained austenite
(a light-etching constituent), detection of only the carbides is difficult.
Both small and large carbides are resolved at the higher magnifications of
a scanning electron microscope (SEM). However, in the secondary mode
of operation, other features, such as grain boundaries, are observed in an

etched specimen (Fig. 21b); these are eliminated using an unetched
specimen and backscattered images (Fig. 21c). Digitizing the SEM image
(Fig. 21d) further enhances the distinction between the carbides and the
matrix. A feature-specific analysis identical to that used for inclusions in
alloy steels is possible using these images (Ref 16).

Table 6 Equivalent grain diameter data for ingot iron

Diameter, μm   Frequency   Cumulative probability, %
1              0           0.00
2              0           0.00
3              0           0.00
4              1           0.05
5              0           0.05
6              0           0.05
7              0           0.05
8              12          0.68
9              34          2.44
10             54          5.25
11             65          8.63
12             70          12.27
13             80          16.42
14             87          20.95
15             85          25.36
16             83          29.68
17             92          34.46
18             83          38.77
19             70          42.41
20             61          45.58
21             70          49.22
22             70          52.86
23             76          56.81
24             56          59.72
25             66          63.15
26             59          66.22
27             45          68.56
28             61          71.73
29             42          73.91
30             29          76.04
35             25          84.25
40             15          90.49
45             11          94.23
50             -           96.83
55             -           98.49
60             -           98.96
65             -           99.43
70             -           99.74
75             -           99.90
80             -           99.90
85             -           99.95
90             -           99.95
95             -           99.95
100            -           99.95

Table 7 Statistical parameters from feature-specific grain data for ingot iron

Parameter   Area, μm²       Perimeter, μm   Feret min, μm   Feret max, μm   Anisotropy, length/thickness   Roundness, P²/4πA
Mean        537.87          90.02           20.18           33.06           1.64                           1.43
Stdev       560.16          47.74           9.89            17.54           0.36                           0.20
Sum X       1,034,315       173,105         38,812          63,568          3156.8                         2754.3
Sum X²      1,159,395,781   19,963,446      971,502         2,692,816       5429.1                         4023.4
Min         7.11            9.86            2.33            3.84            1.09                           1.09
Max         5660.59         370.76          65.45           133.82          3.50                           2.62
P, perimeter; A, area

Fig. 20 Ingot-iron equivalent grain diameters, D = √(4A/π): (a) equivalent grain diameter histogram; (b) cumulative distribution of equivalent grain diameters

Stereological Parameters. Examples discussed previously deal with parameters that are directly measured by the IA system, including length,
area, area fraction, and the number of features within a certain field of
view. The following discussion will address standard stereological parameters listed in Table 8, such as number per unit area and number per
unit length. While basic stereological parameters are not measured using
normal IA functions, they can be derived from the standard parameters
measured by most IA systems. A good way to understand the methodologies of obtaining these parameters is to consider how to measure
banding in a microstructure in accordance with ASTM E 1268 (Ref 17).
ASTM E 1268 contains examples of banding in plate steel, high-speed
tool steel, alloy steel, and stainless steel. The photomicrographs were
analyzed by several different observers to determine average values of the
number of intercepts and intersections parallel to and perpendicular to the
axis of deformation. For the remainder of this discussion, only the number

of feature intercepts is used to calculate various stereological parameters, which can be used to quantitatively define microstructural banding.

Fig. 21 Carbide distribution in AISI M 42 high-speed tool steel: (a) microstructure revealed by Marble's reagent etch (4% nital) in optical micrograph; (b) secondary electron image of etched specimen; (c) backscattered electron image of unetched specimen; (d) digitized backscattered electron image (four grays). Scale bar: 20 μm

Letting N̄L∥ and N̄L⊥ represent the average number of intercepts per unit length parallel to and perpendicular to the longitudinal axis of deformation, respectively, the following parameters are defined. The anisotropy index, AI, is given by:

$$AI = \frac{\bar{N}_{L\perp}}{\bar{N}_{L\parallel}}$$ (Eq 17)

The degree of orientation, Ω12, is given by:

$$\Omega_{12} = \frac{\bar{N}_{L\perp} - \bar{N}_{L\parallel}}{\bar{N}_{L\perp} + 0.571\,\bar{N}_{L\parallel}}$$ (Eq 18)

SB, the mean center-to-center spacing of the bands, is given by:

$$S_B = \frac{1}{\bar{N}_{L\perp}}$$ (Eq 19)

and λ, the mean free path (mean edge-to-edge spacing of the bands), is given by:

$$\lambda = \frac{1 - V_V}{\bar{N}_{L\perp}}$$ (Eq 20)

Quantitative results obtained using the procedures described in E 1268 are superior to assessments of banding made using comparative charts. However, major disadvantages of the procedure are that it is slow and labor intensive, and results obtained by different persons may show large differences. The following discussion describes techniques to perform banding analysis using automated IA procedures, including the use of image amendment to have the automated procedures agree with the results obtained using manual measurements.

Table 8 Relationship of standard stereological parameters to image analysis parameters

Stereological parameter   Definition   Image analysis equivalent
N     Number of features   Number
P     Number of point elements or test points   Pixel
PP    Point fraction, or point count   Area fraction
L     Length of linear elements or test line length   Area / calibration constant (calconst)
LL    Sum of linear intercept lengths divided by total test line length, lineal fraction   Area fraction
LA    Length per unit area, or perimeter   Area fraction / calibration constant
A     Planar area of intercepted features or test area   Area
NL⊥   Number of intercepts per unit length perpendicular to the deformation direction   HP / frame area
NL∥   Number of intercepts per unit length parallel to the deformation direction   VP / frame area
NA    Number of feature interceptions per unit area   (HP / cc) / frame area

Fig. 22 Stereological parameters derived from feature measurements: (a) horizontal intercepts form horizontal projection, HP, and F90; (b) vertical intercepts form vertical projection, VP, and F0
To assess microstructural banding using an automated IA system, it is
first necessary to develop mathematical relationships between the parameters these systems usually measure and primary stereological parameters.
Then the parameters are plugged into the equations cited in E 1268 to
assess the degree of banding.
Mathematical Relationships. Automated IA systems are programmed
to make field and feature measurements but generally do not provide
basic stereological parameters. However, in many instances, stereological
parameters or very close approximations can be derived from field and
feature measurements.
For example, consider the off-axis, solid ellipse shown in Fig. 22(a). An
outline of the left side of the ellipse is formed as horizontal lines first
intercept the object. The feature-specific vertical height of the ellipse is
defined as F90. Because the outline is created from horizontal intercepts,
the horizontal projection, HP, of the ellipse is the same as F90. Similarly,
downward-moving vertical lines starting at the top right of the ellipse
form another outline from the vertical intercepts, called vertical projection, VP (Fig. 22b). The specific width of this outline is F0.
The anisotropy index for this particular object is:

$$AI = \frac{F90}{F0}$$ (Eq 21)

For a field of view containing n objects, the anisotropy index for all the objects is:

$$AI = \frac{\sum_{i=1}^{n} F90_i}{\sum_{i=1}^{n} F0_i}$$ (Eq 22)

However, for simple geometric shapes such as this, the anisotropy index can be represented by the corresponding field parameters:

$$AI = \frac{HP}{VP}$$ (Eq 23)

The primary advantage obtained using field measurements is that IA systems make field measurements much more rapidly than they do feature-specific measurements.

It is possible to use the field parameters HP and VP to obtain the number of intercepts per unit length (Fig. 23). For most IA systems, the image is formed by rastering a beam from left to right and top to bottom on a specimen. The image on the TV monitor is composed of a number of square pixels. The image has M pixels in the x direction and N pixels in the y, or vertical, direction; M and N are dimensionless quantities. One horizontal line contains M pixels, and the entire image frame is N pixels high. Thus, the total number of pixels used to create a field horizontal projection measurement is M × N.

Fig. 23 Schematic view of monitor having M pixels along the horizontal axis and N pixels along the vertical axis

Each pixel in a properly calibrated IA system has a certain true length at a particular magnification. Thus, a calibration constant, or calconst, relates the size of the pixels on the TV monitor to the actual image scanned in the microscope:

$$cc = calconst \quad \{\text{length/pixel}\}$$ (Eq 24)

Because one horizontal scan line contains M pixels, the true length of a horizontal scan line follows (note that braces, { }, are used in the equations in this discussion to indicate the units of the particular terms being discussed):

$$l = M \times cc \quad \{\text{pixels}\}\{\mu\text{m/pixel}\}$$ (Eq 25)

The total length for the entire frame is:

$$L = N \times (M \times calconst) \quad \{\mu\text{m}\}$$ (Eq 26)

The horizontal projection of the elliptical object is HP {μm}. The object is Y pixels, or intercepts, high; thus:

$$Y = \frac{HP}{cc} \quad \{\text{pixels}\}$$ (Eq 27)

For this field of view, the number of intercepts per unit length is the total number of intercepts divided by the total length of the test line:

$$n_L = \frac{Y}{L} = \frac{HP/cc}{M \times N \times cc}$$ (Eq 28)

or

$$n_L = \frac{HP}{M \times N \times cc^2}$$ (Eq 29)

Because M × N is the total number of pixels in the frame and cc² is the true area of each pixel:

$$n_{L\perp} = \frac{HP}{\text{Frame Area}} \quad \{\mu\text{m}^{-1}\}$$ (Eq 30)

For best IA measurement resolution and accuracy, the specimen should be positioned with its deformation axis parallel to the y-axis on the TV monitor. Therefore, Eq 30 represents the number of intercepts per unit length perpendicular to the deformation axis. Equation 31 shows a similar relationship for vertical intercepts:

$$n_{L\parallel} = \frac{VP}{\text{Frame Area}} \quad \{\mu\text{m}^{-1}\}$$ (Eq 31)

Consequently, by making two rapid field measurements and using the true area of the image frame, the stereological parameters described in Eq 17 through 20 are easily calculated using an IA system. Other stereological parameters can be derived using similar procedures and different IA field and feature-specific measurements.
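Putting Eq 17 through 20 together with Eq 30 and 31 gives a complete banding calculation from two field measurements plus the volume fraction. A minimal Python sketch; the inputs (HP, VP, frame area, and VV) are hypothetical values a system would supply:

def banding_parameters(hp_um, vp_um, frame_area_um2, vv):
    n_perp = hp_um / frame_area_um2      # Eq 30: perpendicular intercepts per um
    n_par = vp_um / frame_area_um2       # Eq 31: parallel intercepts per um
    ai = n_perp / n_par                  # Eq 17: anisotropy index
    omega12 = (n_perp - n_par) / (n_perp + 0.571 * n_par)   # Eq 18
    s_b = 1.0 / n_perp                   # Eq 19: band spacing, um
    lam = (1.0 - vv) / n_perp            # Eq 20: mean free path, um
    return ai, omega12, s_b, lam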
Coating Thickness. The thickness of a coating can be measured by creating a series of lines perpendicular to the coating. For example, consider a plasma-sprayed coating on a substrate with its axis parallel to the x-axis of the monitor (Fig. 24a). A vertical grid of test lines is created using the IA system graphics functions or by moving a thin measuring frame across the specimen (Fig. 24b). In this example, a binary fill operation is used after image segmentation (Fig. 24c). A vertical binary close of 3 pixels is used to smooth the outer surface of the coating, and a square binary open of 10 pixels is used to eliminate any unwanted features, such as detected objects in the substrate (Fig. 24d). Following this, the Boolean operation coating <AND> vertical grid results in a series of vertical lines, each of which is a measure of the coating thickness at that particular x position on the monitor (Fig. 24e). In IA terminology, the Feret 90 measurement of each line is the coating thickness. Coating data are statistically analyzed after obtaining measurements from ten fields of view:

• Average thickness = 384.02 μm
• STDEV = 30.34 μm
• Number of measurements = 3100
• Minimum thickness = 308.42 μm
• Maximum thickness = 458.10 μm

Coating thickness is measured quite rapidly using this type of macro. In this example, 10 fields of view totaling 3100 coating thickness measurements are performed in approximately one minute. Details of the macro used for the measurements are not presented here because instructions are very similar to some of the previous macros. However, following is a general version of the macro to create a series of vertical lines for a 640 × 480 pixel (x-axis and y-axis, respectively) monitor:
MACRO VERTICAL LINES
1. SETUP
   a. Clear old images and graphics.
   b. Define variables.
      xstart = 0
      ystart = 0
      xend = 0
      yend = 480
2. For N = 1; N < 640; N = N + 2
      xstart = N
      xend = N
      Draw Vector: xstart, ystart, xend, yend, color = 15
   END FOR
3. Merge the graphics into the Image Plane.
4. Save Image to Hard Drive: C:\ASM_BOOK\Vert_lines.tif
This is a very simple macro to construct. For all operations in the loop, ystart equals 0 and yend equals 480. The first time through the loop, xstart is N and xend is N. The draw vector function draws a line from the pixel having screen coordinates P1 = (xstart, ystart) to the pixel having coordinates P2 = (xend, yend). Numerically, the coordinates are P1 = (1, 0) and P2 = (1, 480). The second time through the loop, N = 3 and the new vector is drawn through points P1 = (3, 0) and P2 = (3, 480). Repeating the process eventually results in a grid of 320 vertical lines. If the particular IA system being used does not have these graphic functions, a measuring frame having a width of one pixel can be created. A similar grid of lines is constructed by detecting everything within this frame and moving it along.
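For comparison, the same grid can be produced outside a dedicated IA macro language by writing the pixels directly; the following NumPy sketch (a hypothetical translation of the macro above, not vendor code) reproduces the loop:

import numpy as np

def vertical_grid(width=640, height=480, spacing=2, value=15):
    """Draw one-pixel-wide vertical lines at x = 1, 3, 5, ..., giving the
    same 320-line grid as the VERTICAL LINES macro above."""
    grid = np.zeros((height, width), dtype=np.uint8)
    for x in range(1, width, spacing):   # For N = 1; N < 640; N = N + 2
        grid[:, x] = value               # Draw Vector (x, 0) to (x, 480)
    return grid

# The coating-thickness lines of Fig. 24(e) then follow from a Boolean AND:
# thickness_lines = coating_binary & (vertical_grid() > 0)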
Carpet Fibers. Manufacturers of carpet fibers routinely use IA techniques for quality control (Ref 18). Carpet fibers have a trilobal
shape in a cross-sectional, or transverse, view (Fig. 25a). One measure of
carpet-fiber quality is the modification ratio, or MOD ratio (related to
carpet-fiber wear properties), wherein two circles are placed on a fiber
using either manual or automated procedures. The larger circle is sized so
it circumscribes the fiber, while the smaller circle is inscribed within the
fiber (Fig. 25b), and the MOD ratio is the diameter of the large circle divided by the diameter of the small circle:

MOD ratio = Diameter of circumscribed circle / Diameter of inscribed circle

The MOD ratio decreases with decreasing sharpness of the lobes (Fig.
25c).
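As an illustration only (Ref 18 does not specify the algorithm), the two circles can be approximated from a binary mask of one fiber cross section using SciPy: the inscribed radius as the peak of the distance transform, and the circumscribed radius as the farthest fiber pixel from that center, the circles being taken as concentric:

import numpy as np
from scipy import ndimage

def mod_ratio(fiber_mask):
    """Approximate MOD ratio of one fiber; fiber_mask is True inside the fiber.

    This is a sketch: the circumscribed circle is taken concentric with the
    inscribed one rather than as a true minimum enclosing circle.
    """
    dist = ndimage.distance_transform_edt(fiber_mask)
    r_in = dist.max()                            # inscribed-circle radius
    cy, cx = np.unravel_index(np.argmax(dist), dist.shape)
    ys, xs = np.nonzero(fiber_mask)
    r_out = np.hypot(ys - cy, xs - cx).max()     # circumscribed-circle radius
    return r_out / r_in                          # equals the diameter ratio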
Large-Area Mapping (LAM). In most IA applications, looking at one
field of view of a particular specimen is like being in a submarine;
microstructural details are observed through a 100× porthole, but the big
picture is missing. In more scientific terms, high resolution is required to
detect, record, and evaluate microstructural details (Ref 19), but lower
magnification is required to present and retain the big picture, that is, the
context of the details on the macroscopic scale.
An ideal solution would be a megamicroscope to present a complete
picture of the sample at a low magnification, such as 100×. Large-area
mapping, frequently referred to as image tiling, is a step in this direction.
The specimen is completely analyzed at high magnification, storing the images in the memory of the computer. The images then are displayed at a reduced resolution to show the big picture, while regions of particular interest can be analyzed at the higher resolution. The following examples illustrate the use of this procedure.

Fig. 24 Coating thickness determination: (a) plasma-sprayed coating; (b) graphically created vertical measuring lines; (c) detected image and holes filled; (d) vertical close of 10 pixels; (e) coating thickness formed by Boolean operator image (d) <AND> image (b)

Fig. 25 Carpet fiber analysis: (a) bundle of carpet fibers; (b) trilobal analysis, circles inscribed in and circumscribed around fibers; (c) trilobal analysis, modification ratios of two fibers
Detection of Aluminosilicate Inclusions. During the process of continuous
casting of steel, the metal flowing out of the weir is washed constantly
by blowing inert argon gas through it. Gas bubbles physically draw
nonmetallic material with them towards the surface where the slag can be
separated away from steel. Optimizing this process brings certain cost
advantages. Blowing too much gas is expensive, but decreasing the gas
flow too much results in lower steel quality (that is, higher inclusion
content). Proper process control minimizes gas use and maintains product
quality.
Aluminosilicate inclusions appear as finely dispersed clusters, visible only at 100× or higher magnification. Typical frequency is 1 cluster/cm²,
which means that only one field in about two hundred checked will
possibly be of interest.
The usual method to check material quality is to ultrasonically test the
product after hot working, which is very costly and time consuming.
Thus, this situation provides an excellent opportunity to apply LAM. A
large number of fields of view are scanned automatically, and field
analysis results are digitized and put into a composite picture to show the
microcharacteristics of the sample at a macroscale (Fig. 26a). After
completing the scan, the analyst can use the composite picture as a sort of map and position the sample to analyze points of interest (Fig. 26b
and c).
Detecting and Evaluating Cracks. A similar analysis can be applied to
long, thin objects, such as a crack, where the features are too long to be
detected completely as one single object at the magnification required to see
the thickness satisfactorily. The image shown in Fig. 27 illustrates a
typical case, and the solution to the problem is discussed subsequently.
The crack in the image is approximately 5 mm (0.2 in.) long with an
average thickness of approximately 50 µm, so a macro-objective must be
used to see the complete crack. However, a macro-objective reduces the
thickness of the crack to a few pixels or can, in some cases, make the
crack disappear, causing artifacts.
The problem is solved using a scanning stage and tiling neighboring
images into one complete image. The positioning accuracy of the
motorized stage generally is satisfactory; however, if accuracy is a
concern or if the stage is not motorized, then it is possible to let the
computer decide on the optimal tiling arrangement, as shown in the
sequence of images in Fig. 28.
The three images show a traverse over a steel sample containing
hydrogen-induced cracks. The tile positions (rectangular inserts) are
adjusted to get the optimal overlap, the quality of which is indicated in the

254 / Practical Guide to Image Analysis

(a)

(b)

(c)

Fig. 26

Detection of aluminosilicate inclusions in steel using large-area mapping. (a) Composite picture of 2000 elds
of view at 100 from a specimen with dimensions 4 2 cm (1.5 0.75 in.). Arrows point to areas having
easily discernable inclusion clusters along with polishing artifacts, mostly scratches. (b) and (c) Inclusion clusters indicated
by arrows. Source: Ref 19

Fig. 27

Large-area map of a long, thin crack. Source: Ref 19

Applications / 255

Fig. 28

Large-area map of hydrogen-induced crack in a steel specimen. Source: Ref 19

oval inserts. In most cases, the automated scanning stage is used to move
the sample from one field to another and to ensure that a statistically
significant number of measurements, either in terms of images analyzed
or in terms of objects of interest is observed and evaluated. Automated
stage control is used in the preceding two cases to do large-area mapping
to achieve the required resolution while retaining the big picture.
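One way to let the computer pick the tile positions, sketched here with NumPy FFTs (an illustration of the general idea, not the algorithm of Ref 19), is to cross-correlate two overlapping gray-scale tiles and take the correlation peak as the best relative shift:

import numpy as np

def best_shift(tile_a, tile_b):
    """Return (dy, dx) so that shifting tile_b by (dy, dx) best matches
    tile_a; both tiles must be gray-scale arrays of the same shape."""
    a = tile_a - tile_a.mean()
    b = tile_b - tile_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:          # map wrap-around indices to
        dy -= a.shape[0]              # signed shifts
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx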

References
1. G.L. Sturgess and D.W. Braggins, Performance Criteria for Image Analysis Equipment, The Microscope, 1972, p 275–286
2. H.P. Hougardy, Measurement of the Performance of Quantitative Image Analysing Instruments, Prakt. Metallogr., 1975, p 624–635
3. Segment Functions, Threshold Automatic, Zeiss KS 400 Imaging System User's Guide, Vol 2, Munich, Germany, 1997, p 4517
4. D.W. Hetzner, Analytical Threshold Settings for Image Analysis, Metallographic Techniques and the Characterization of Composites, Stainless Steels, and Other Engineering Materials, Vol 22, Microstructural Science, ASM International, 1995, p 157
5. G.L. Kehl, The Principles of Metallographic Laboratory Practice, McGraw-Hill, 1949, p 87
6. J. Serra, Image Analysis and Mathematical Morphology, Academic Press, London, 1982
7. J.J. Friel, E.B. Prestridge, and F. Glazer, Grain Boundary Reconstruction for Grain Sizing, STP 1094, MiCon 90: Advances in Video Technology for Microstructural Control, G.F. Vander Voort, Ed., ASTM, 1990, p 170–184
8. Standard Test Methods for Determining Average Grain Size Using Semiautomatic and Automatic Image Analysis, E 1382, Annual Book of ASTM Standards, ASTM, 1997
9. Standard Test Method for Determining Average Grain Size, E 112, Annual Book of ASTM Standards, ASTM, 1996
10. Standard Practice for Determining the Inclusion or Second-Phase Constituent Content of Metals by Automatic Image Analysis, E 1245, Annual Book of ASTM Standards, ASTM, 1995
11. D.W. Hetzner, Quantitative Image Analysis Methodology for High-Sulfur Calcium-Treated Steels, STP 1094, MiCon 90: Advances in Video Technology for Microstructural Control, G.F. Vander Voort, Ed., ASTM, 1990
12. D.W. Hetzner and B.A. Pint, Sulfur Content, Inclusion Chemistry, and Inclusion Size Distribution in Calcium Treated 4140 Steel, Inclusions and Their Influence on Mechanical Behavior, R. Rungta, Ed., ASM International, 1988, p 35
13. Standard Test Method for Determining the Inclusion Content of Steel, E 45, Annual Book of ASTM Standards, ASTM, 1997
14. Standard Practice for Obtaining JK Inclusion Ratings Using Automatic Image Analysis, E 1122, Annual Book of ASTM Standards, ASTM, 1997
15. F. Schucker, Grain Size, Quantitative Microscopy, R.T. DeHoff and F.N. Rhines, Ed., McGraw-Hill, 1968, p 201–265
16. D.W. Hetzner and J.A. Norris, Effect of Austenitizing Temperature on the Carbide Distributions in M42 Tool Steel, Image Analysis and Metallography, Vol 17, Microstructural Science, ASM International, 1989, p 91
17. Standard Practice for Assessing the Degree of Banding or Orientation of Microstructures, E 1268, Annual Book of ASTM Standards, ASTM, 1997
18. J.J. Friel, Princeton Gamma-Tech, personal communication
19. V. Smolej, Carl Zeiss Vision, personal communication


CHAPTER

Color Image Processing


Vito Smolej
Carl Zeiss Vision

COLOR IS the result of the interaction of illuminating light with the


object or scene under observation. It is a very important aspect of human
vision and, as such, of our ability to communicate with our environment.
To appreciate this, compare the 40 kHz bandwidth, which, to our ears, is
sufficient for the best-quality sound, with the standard TV bandwidth of
6 MHz that, although 150 times wider, still leaves much to be desired.
How we see colors, along with the color models and color spaces used to model human vision, is discussed first, followed by a brief
discussion on the subject of electronic recording of images. Image
analysis is the point of interest, so the ensuing discussion follows the
common processing path from a recorded color image, through color
image processing and color discrimination, to color measurement.
Color is our perception of different wavelengths of light. Light visible
to humans ranges in wavelength from 380 nm for violet light to 760 nm
for red light. A mixture of all the spectral colors produces white light; sunlight, for instance, is a close approximation of white light.
It generally is accepted that, in terms of spectral response, the human
retina contains three wavelength-sensitive detectors. Although their spectral response is rather complex, a red, green, and blue (RGB) approximation fits the real world of seeing reasonably well.

Modeling Color
The RGB model is based on three primary colors (red, green, and blue) that can be combined in various proportions to produce different
colors. When dealing with illuminating light (e.g., a computer screen or
some other source of illumination), the RGB primary colors are called
additive because, when combined in equal portions, they produce white.


Fig. 1 Color wheels. (a) Additive. (b) Subtractive. Please see endsheets of book for color versions.

Primary colors are digitized as integers, which, in the case of 8-bit systems, gives a value ranging from 0 to 255. The colors produced by combining
the three primaries are a result of the relative strength (value) of each
primary (Fig. 1). For example, pure red has a red value of 255, a green
value of 0, and a blue value of 0. Yellow has a red value of 255, a green
value of 255, and a blue value of 0. An absence of the three primary colors
results in black; when all three have values of 255, they produce white.
Additive color theory explains what we see in terms of colors when we
look at an illuminating object such as a light bulb, a TV, or a computer
monitor. In real life, however, most of the light entering our eyes is
reflected off different objects and surfaces. The cyan, magenta, yellow,
and black (CMYK) model, a subtractive color model, is based on light
being absorbed and reflected by paint and ink and is used, for instance, in
the printing industry. The primary colors (cyan, magenta, and yellow) are mixed to produce the other colors. When all three are combined, they produce black. Because impurities in the ink make it difficult to produce a true black, a fourth color, black (the K), is added when printing.

Color Spaces
As shown in Fig. 2, there are several complementary ways to look at
and interpret color space, including:
• RGB and CMY
• Lab
• Hue-lightness-saturation (HLS)
• Munsell

Fig. 2 Color space models. (a) Red-green-blue (RGB) and cyan-magenta-yellow (CMY). (b) Lab space. (c) Hue-lightness-saturation (HLS) space. (d) Munsell space. Please see endsheets of book for color versions.

The RGB space model is the color system on which all TV cameras, and thus color digitization and color processing systems, are based. It can be viewed as a cube with three independent directions (red, green, and blue) spanning the available space. The CMY system of coordinates is a straightforward complement of RGB space.


Lab Space Model. The body diagonal of the RGB cube contains grays
(in other words, colorless information) about how light or dark the signal
is. The Lab coordinate system is obtained by turning the body diagonal of
the RGB cube into a vertical position. The Lab color system is one of the
standard systems in colorimetry (i.e., the science of measuring colors).
Here L, the lightness coordinate, quantifies the dark-light aspect of the
colored light. The two other coordinate axes, a and b, are orthogonal to
the L axis. They are based on the opponent-colors approach. In fact, a
color cannot be red and green at the same time, or yellow and blue at the
same time, but can very well be, for example, both yellow and red, as in
oranges, or red and blue, as in purples. Thus, redness or greenness can be
expressed as a single number a, which is positive for red and negative for
green colors. Similarly the yellowness/blueness is expressed by the
coordinate b.
The YIQ and YUV systems used in television broadcasting are defined
in a similar manner. The Y component, called luminance, stands for the
color-independent component of the signal (what one would see on a
black and white TV screen) and is identical to the lightness of the Lab
system. The two remaining coordinates define the specific color. The
following set of equations defines, for instance, the RGB-YIQ conversion:
Y = 0.299R + 0.587G + 0.114B
I = 0.596R - 0.274G - 0.322B
Q = 0.211R - 0.523G + 0.312B
Note that Y is a weighted sum of the three intensities, with green standing
for 58.7% of the final value. Note also, that I is roughly the difference
between the red and the sum of green and blue. In other words, the I
coordinate spans colors between red and cyan. Q is analogously defined
as the difference between combination of red and blue (i.e., magenta) and
green. If all three signals are equal, that is, R = G = B = L, then the matrix will provide the following values: Y = L, I = 0, and Q = 0. This
corresponds to a black and white image of intensity L, as expected. A
similar matrix is used to transform from RGB to YUV. Here the U and V
axes span roughly the green-magenta and blue-yellow directions.
As mentioned above, the RGB-to-YIQ signal conversion is useful for
transmission: the I and Q signals can be transmitted at lower resolutions
without compromising the signal quality, eventually improving the use of
the broadcast channel or, alternatively, allowing for enhanced Y signal
quality. When the signal is received, it can be transformed back into the
RGB space, for instance, to display it on the TV monitor.
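Applying the matrix directly makes these relationships easy to verify; this NumPy sketch (illustrative code, not from the text) converts RGB values to YIQ and back:

import numpy as np

# RGB -> YIQ matrix taken from the equations above
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """rgb: (..., 3) array of R, G, B values; returns Y, I, Q in the last axis."""
    return rgb @ RGB_TO_YIQ.T

def yiq_to_rgb(yiq):
    """Inverse transform, e.g., to redisplay a received signal in RGB."""
    return yiq @ np.linalg.inv(RGB_TO_YIQ).T

# A gray pixel R = G = B = 128 gives Y = 128 and I = Q = 0 (up to rounding)
print(rgb_to_yiq(np.array([128.0, 128.0, 128.0])))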
HLS Space Model. Both RGB and YIQ/YUV coordinate systems are
hardware based. RGB comes from the way camera sensors and display
phosphors work, while YIQ/YUV stems from broadcast considerations.
Both are intricately connected; there is no RGB camera without broadcasting and vice versa.


Two aspects of human vision come up short in this context. The first one
is the hue, or color tone. For example, a specific blue color might be used
to color-stain a sample for image analysis. The results from staining may
be stronger or weaker, but it is still the same color hue. The second aspect,
which is, in a sense, complementary to the color hue, is color saturation,
that is, how brilliant or pure a given color is as opposed to washed-out,
weak, or grayish. In this case, we would be looking for a specific color
hue without any specific expectations regarding the color saturation and
lightness.
HLS color space, which takes into account these two aspects of human
vision, can be viewed as a double cone, hexagonal or circular, as shown
in Fig. 2. The axis of the cone is the gray scale progression from black to
white. The distance from the central axis is the saturation, and the
direction or the angle is the hue.
The transformation from RGB to HLS space and back again is
straightforward and easy with digital images. Thus, a lot of interactive image analysis is done in HLS space, given that HLS tries to match human perception.
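In Python, for example, the standard library already provides the round trip; the sketch below (values scaled to the 0 to 1 range that colorsys expects) converts one pixel to HLS and back:

import colorsys

r, g, b = 200 / 255, 120 / 255, 40 / 255     # an orange-ish pixel
h, l, s = colorsys.rgb_to_hls(r, g, b)        # hue, lightness, saturation
print(f"hue={h:.3f} lightness={l:.3f} saturation={s:.3f}")

# ...threshold or enhance in HLS space, then convert back for display:
r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)     # recovers r, g, b (to rounding)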
Munsell Space. This system is mentioned for the sake of completeness. It consists of a set of standard colored samples, which can be used
for visual comparison of colors, with wide use in the textile industry,
printing, and forensics. The colors of the standard samples have been
chosen in an equidistant fashion; that is, a close (enough) match in the
Munsell set of samples can be found for every imaginable color (see Fig.
2). Thus, the Munsell system is based on how the human eye, and not how
some electronic component, sees colors. Note that the Munsell system has
more red than green samples: the human eye resolves reds far better than
greens.

Electronic Recording of Color Images


Until recently, the common way of producing a digital image was to
take a shot using a TV camera and then digitize the analogue signal with
a frame-grabber card. The color-encoding schemes (both National Television System Committee, or NTSC, used in the United States, and Phase
Alternation Line, or PAL, used in Europe) were developed as a compatible add-on to existing monochrome television broadcast without any
provision for the necessary increase in the bandwidth. The result is that
the color is given even less lateral resolution than the brightness
information, a limitation that can be overlooked in the case of a TV signal
but not in the case of image processing. It is because of this that digital
imaging is fast replacing stationary cameras, and it is only a question of
time before photographic film will also be replaced by electronic media.


Image-input devices must satisfy several, usually conflicting, conditions:

• Spatial resolution: There should be no compromise in terms of spatial resolution; all component images should have identical x and y resolution.
• Time resolution: Live means seeing changes in the scene at approximately 30 frames per second. Note that even if the sample does not change with time, it is desirable to see right away how the sample looks when moving to a new position or adjusting the focus on the microscope.
• Veracity and dynamics of the color signal: The image should show the same thing that the human eye sees, no more and no less. Additionally, there should be some leeway when enhancing the image taken without the need to compromise.
Competing analog and digital technologies each have their own advantages and disadvantages. Analog-camera advantages include:

• Well-defined signal standards, both for timing (for instance, PAL in Europe and NTSC in the United States) and for signal transfer (Luminance/Chroma color model, or Y/C, or Red-Green-Blue color model, or RGB)
• Simple connection to frame grabber/monitor/printer
• Real-time online image (NTSC: 30 frames per second) on any video monitor
• Operation without PC possible
• Inexpensive technology
Analog-camera disadvantages include:

• Limited resolution of 640 × 525 pixels (NTSC) or 768 × 625 (PAL). Maximum resolution for color is possible only with a three-chip camera.
• Limited dynamic range of 8 bit, which equals approximately 48 dB
• Interlaced signal that causes fringing of horizontal edges with moving objects
• Fixed integration time of 40 ms per frame
• Frame grabber required for connection to the PC, which produces additional costs
Digital-camera advantages include:

• High signal quality/dynamic range: The signal normally is digitized within the camera itself and also is transmitted digitally. This means that there is no loss due to modulation/demodulation of the TV carrier signal, as in the case of an analog signal.
• High geometrical precision: Identity exists between the image detail on the target and pixel content in the digitized image.
• No limitations due to TV standards: This means that higher spatial resolutions and longer exposure times are possible. Also, there is no need for a frame grabber, although some sort of hardware interface, for instance SCSI, still is required.
Digital-camera disadvantages include:

• No live image at 25 or 30 frames per second and standard image size: Either the spatial resolution has to be reduced to achieve time resolution, or fewer frames per second are available to keep the image size intact.
• Hardware interface is not standardized, so solutions based on parallel port, SCSI, RS422/644, IEEE 1394, and proprietary hardware compete for market share.
• Thick and unwieldy connectors
• Expensive technology, although this could change once the economy of scale is reached
To summarize, analog cameras are preferred in instances where a fast,
live image is required without a need for high resolution. However, they
cannot match digital cameras when the expectations in terms of integration time, spatial resolution, or signal dynamics exceed the video
standard.

Color Images
It has been mentioned elsewhere in this volume that black and white
(B/W) images are digitized at 8-bit resolution; that is, they are digitized at 256 gray levels per image pixel. In the case of a color image, essentially
three detectors are used: one for each of the red, green, and blue primary
colors. Thus, a color image is a triplet of gray-level images.
If every color component consists of 2⁸ (256) gray levels, then the number of possible colors equals 2²⁴ (16 million). In reality, the number is much smaller, such as in the color image in Fig. 3, where the number of colors is approximately 249. Note that these 249 colors should not be confused with 256 gray levels of a typical B/W image; they contain information that would not be available were it not for the color.

Fig. 3 Color image and its red-green-blue (RGB) components. (a) Original image. (b) Red channel. (c) Green channel. (d) Blue channel. The original color image is a test image used to check for color blindness; not seeing the number 63 is one manifestation of color blindness. Please see endsheets of book for color versions.


Color Image Processing


The intention of color image processing is to enhance the useful part of the image, to eliminate the information that is not of interest, or both.

RGB-HLS Model Conversion


Figure 4 shows the results of conversions between different models. The
example shows how useful the RGB-HLS transformation eventually can
be; different hues in the original color (i.e., 24 bit) image are converted
into separate gray levels in the hue (i.e., 8 bit) image. The same applies
to the saturation image, as well as to any of the gray images obtained from
the original.
This is useful for discrimination purposes because the problem of color
discrimination can be reduced to two steps: (1) a conversion into the
appropriate color model, and (2) a discrimination step on a gray image.
The same reasoning can be applied to the original RGB image itself; for
example, the red component is sufficient to obtain the binary image of
numbers in Fig. 3.

Fig. 4 Conversion between red-green-blue (RGB) and hue-lightness-saturation (HLS) color space. (a) Original image. (b) Red. (c) Green. (d) Blue. (e) Hue. (f) Lightness. (g) Saturation. Please see endsheets of book for color versions.


Color Processing and Enhancement


Figure 5(a) shows a CuPb2 particle (image obtained at 1000× in dark field, using a digital camera at 1300 × 1000 pixel resolution). The color processing and enhancement procedure stretches the saturation to make colors richer and more distinct. Additionally, it also stretches the dark colors by changing the lightness image to its own square root to bring out details. The Macrocode steps are:
1. ConvertToHLS Input, 2
2. SatImage = 2[1,3]
3. LitImage = 2[1,1]
4. Normalize SatImage, SatImage
5. Binnot LitImage, LitImage
6. Multiply LitImage, LitImage, LitImage, 255
7. Binnot LitImage, LitImage
8. ConvertToHLS 2, Output
The original color image is available in the image Input. In step 1, it is
converted from RGB space into the HLS space and stored in image 2. In
steps 2 and 3, the two internal assignations for the saturation and lightness
images (2[1,3] and 2[1,1]) are given meaningful names (SatImage and
LitImage, respectively), to make later commands easier to write and to
read. The saturation image (SatImage) is dealt with first. In step 4, its
contrast is stretched to enhance the saturations, which are present in the
original image. Steps 5, 6, and 7 deal with the lightness image (LitImage).
First, a negative of the lightness image is calculated in step 5. This way,
the dark parts of the lightness image become light and vice versa. The
Multiply command in step 6 can be written as follows:
LitImage LitImage LitImage/255

The result of the operation is the improvement of contrast in bright areas


of the current LitImage, which is a negative of the original. Thus, the
contrast of the dark areas of the original is improved. To get the new
version of the lightness image, step 7 inverts the changed LitImage back
to dark-is-dark and bright-is-bright mode; however, it has an improved
contrast in dark parts. Step 8 takes the image 2 with its new saturation and
lightness images (and unchanged hue image) and converts it back into the
RGB representation, which gets stored into the image Output.
Note that there is not much regard for how real the colors are in the final
image. The intention is to just lighten up the image; that is, bring out
details in the darker parts of the image, which is accomplished.
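The same sequence is easy to replicate outside the Macrocode environment; this NumPy sketch (an illustrative rewrite of steps 4 through 7, assuming 8-bit saturation and lightness planes) performs the identical arithmetic:

import numpy as np

def enhance(sat, lit):
    """sat, lit: uint8 saturation and lightness planes of an HLS image."""
    s = sat.astype(np.float64)
    # Step 4, Normalize: stretch saturation over the full 0 to 255 range
    s = (s - s.min()) / max(s.max() - s.min(), 1) * 255
    l = lit.astype(np.float64)
    l = 255 - l                      # step 5, Binnot: negative of lightness
    l = l * l / 255                  # step 6, Multiply: L = L x L / 255
    l = 255 - l                      # step 7, Binnot: back to dark-is-dark
    return s.astype(np.uint8), l.astype(np.uint8)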

Fig. 5 Color image processing. (a) Original image. (b) After image processing. (c) Detail before image processing. (d) Detail after image processing. Please see endsheets of book for color versions.

Color Discrimination
Discrimination defines the actual objects in the image that need to be
evaluated. The output of this process is always a binary image where the
uninteresting part of the image is made black and the objects of interest
are set to white. In the case of gray images, the most common approach
is to set two threshold levels, which define the interval of gray levels of
interest. In a lot of instances, these two levels are determined automatically. However, there still are instances where the levels must be set
interactively, a difficult task for some, especially if implementation of the
function has not been thought out very well.
For a color image, the number of levels to be determined grows to six,
which is unmanageable by any standard if done the wrong way.

Fig. 6 Interactive color discrimination. (a) Original image; the color to be discriminated is circled by the user. (b) Points with the indicated range of colors are marked green. The user now adds another region. (c) Interactive discrimination is complete. Please see endsheets of book for color version.

Experience shows that the best approach is point-and-tell, which allows


the user to circle out the colors of interest and refine the selection if
necessary by adding or removing falsely identified parts of the image
(Fig. 6). An example of a user interface for color discrimination is shown
in Fig. 7.
Evidently, it is possible in the case mentioned above to adjust the six levels individually as well. However, this method is scarcely used because the interactive method is much simpler.
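For completeness, the six-level variant is shown below as a NumPy sketch (the threshold values are hypothetical, chosen only to make the example concrete): a pixel is detected when each of its R, G, and B values falls inside its own interval.

import numpy as np

def rgb_threshold(rgb, r_lim=(180, 255), g_lim=(60, 140), b_lim=(0, 80)):
    """rgb: (H, W, 3) uint8 image. Returns a binary image (True = object)
    defined by six levels: a low and a high threshold per color channel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return ((r >= r_lim[0]) & (r <= r_lim[1]) &
            (g >= g_lim[0]) & (g <= g_lim[1]) &
            (b >= b_lim[0]) & (b <= b_lim[1]))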

Fig. 7 Example of a user interface for color discrimination. The user interface offers all the quantitative data that might be of interest. Please see endsheets of book for color version.

Color Measurement
Thus far in the discussion, color has been used only to detect the actual
phase or part of the sample to be analyzed. Color also can be measured
and quantified by itself, in a manner similar to densitometric measurements used to determine gray levels in gray images. For color-densitometric measurement, the original color image (Fig. 8a) and the image of
regions of objects (the binary image of spots in Fig. 8b) are required. Data
extracted (one set for every spot) include the x and y position, RGB
averages, and RGB standard deviations.
Given the RGB values, it is possible to calculate the values for
alternative color models, for instance, CMY or Lab. Note that to be able
to compare measurements, the experimental conditions, such as the color
temperature of the illumination, should be kept constant.
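A possible implementation of such a measurement, sketched here with SciPy's labeled-region utilities (illustrative code; the record layout is an assumption), extracts exactly those data, one set for every spot:

import numpy as np
from scipy import ndimage

def spot_colors(rgb, spots):
    """rgb: (H, W, 3) color image; spots: binary image of the spot regions.
    Returns one record per spot: (x, y, RGB means, RGB standard deviations)."""
    labels, n = ndimage.label(spots)            # number the spots 1..n
    index = np.arange(1, n + 1)
    centers = ndimage.center_of_mass(spots, labels, index)
    records = []
    for i, (cy, cx) in enumerate(centers):
        means = [ndimage.mean(rgb[..., c], labels, i + 1) for c in range(3)]
        stds = [ndimage.standard_deviation(rgb[..., c], labels, i + 1)
                for c in range(3)]
        records.append((cx, cy, means, stds))
    return records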

Quantitative Example: Determining Phase Volume Content


Figure 9 illustrates the procedure used to determine phase volume
content. The microstructure of the original sample is shown in Fig. 9(a).
Using interactive discrimination by show and tell, the brown phase is
selected by circling a typical part of the phase (Fig. 9b). Fig. 9(c) shows

Fig. 8 Color densitometry. Spot positions and their average colors (together with the standard deviations) can be extracted from the two images. (a) Color image to be evaluated. (b) Binary image of spots. Please see endsheets of book for color version.

Fig. 9 Quantitative analysis of color sample to determine volume percent of phases. (a) Original sample. (b) Selection of brown phase using interactive discrimination; brown phase is selected by circling a typical part of the phase. (c) Results of color discrimination in hue-lightness-saturation (HLS) space; phase is black, background is white. (d) Two other phases are detected and color coded to allow easy comparison. Please see endsheets of book for color versions.


the result of color discrimination in HLS space (phase is black, background is white). The other two phases also are detected (steps omitted for
the sake of clarity) and color-coded to allow easy comparison (Fig. 9d).
Having the phases separated makes it easy to generate quantitative data,
which, in the case discussed previously, leads to the following results:
Phase     Vol%
Red       8.80
Green     3.59
Blue     19.98

Size distribution analysis (not shown here) also can be performed on these phases, as well as other studies, such as those used to determine whether certain phases have a particular relationship with one another.
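Because the area fraction of a phase estimates its volume fraction, each Vol% value above reduces to pixel counting once the phase has been separated into its own binary image; a minimal sketch (with hypothetical mask variables) is:

import numpy as np

def volume_percent(phase_mask):
    """Vol% of one phase, estimated as its area fraction in the field:
    detected pixels divided by all pixels in the frame, times 100."""
    return 100.0 * np.count_nonzero(phase_mask) / phase_mask.size

# For the three color-coded phases of Fig. 9(d), for example:
# for name, mask in (("Red", red), ("Green", green), ("Blue", blue)):
#     print(name, round(volume_percent(mask), 2))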

Conclusions
Image analysis is a tool that converts information in an image into
numerical form. From an image containing possibly several megabytes of
information, image analysis generates a data file of a few kilobytes of
relevant information. Image analysis is concerned with the following
stepwise scheme:
Step                       Result of step      Information content, kB
Image acquisition          Color image         1000
Color image processing     Color/gray image    500
Discrimination             Binary image        80
Binary image processing    Binary image        40
Measurement                Data file           a few

Acknowledgment
The section "Electronic Recording of Color Images" was written in collaboration with Marcus Capellaro of Carl Zeiss Vision.

JOBNAME: PGIAspec 2 PAGE: 1 SESS: 5 OUTPUT: Thu Oct 26 15:57:10 2000

Index
A
Aberration
chromatic ................................................ 220
spherical .................................................. 220
Abrams three-circle method in manual
analysis (ASTM E 112) .................... 225
Abrasive grinding ....................................... 36
Abrasive grit sizes ................................ 4647
Abrasives, for polishing .............................. 56
Abrasive-wheel cutting ........ 3738(F), 39(F)
Absorbed electrons, contrast
mechanisms associated with ......... 102(T)
Accuracy, measurements in applied
methods ............................................... 181
Acid digestion method ............................... 19
Acrylic resins, for mounting ................. 4142
Additive color theory ............................... 258
Advanced statistical analysis .......... 186, 187
Agglomeration ....................................... 5253
ALA (as large as) grain size .................... 117
measurement procedure
(ASTM E 930) ............................... 117
Alloys, metallographic specimen
preparation contemporary practice .. 58(T)
Alloy steels, mounting of ....................... 41(F)
Alpha-alumina slurries .............................. 52
Alpha-beta copper alloys, etching .. 66, 69(F)
Alpha grains ..................................... 2931(F)
Alumina
as abrasive used during polishing ........... 50
acidic suspensions .................................... 53
calcined abrasives ............................... 5253
inclusions ........................................ 217218
Alumina slurries, for final polishing ......... 52
Aluminosilicate inclusions, detection by
large-area mapping ................ 253, 254(F)
Aluminum
anodizing .................................................. 69
etching ...................................................... 62
grinding ..................................................... 48
Aluminum alloys
anodizing ....................................... 69, 71(F)
etching .......................... 62, 63(F), 66, 70(F)
grinding and embedded SiC
particles ........................................ 48(F)
metallographic specimen preparation,
contemporary practice .......... 59(T), 60

polishing of ............................................... 60
relief .............................................. 49, 52(F)
Aluminum-silicon alloys
dilation and counting technique ............. 136
image processing ........................... 84, 85(F)
Amount of microstructure
constituents .................. 153(F), 154155
Analog-camera technology .............. 262, 263
advantages .............................................. 262
disadvantages .......................................... 262
Analog devices .............................................. 2
Analog meter ................................................. 5
Analog tube-video cameras ..................... 67
Anisotropic structures .............................. 192
Anisotropy, morphological ........................ 197
Anisotropy index (AI) ..... 108, 245, 246247
determined by using measurement
frame ............................................... 214
Annealing twins .................................... 26, 63
Anodizing .......................................... 6870(F)
metals affected by ................... 69, 70, 71(F)
Anosov, P.P. ................................................... 1
Apple-class computers ................................. 9
Applications .............................. 203255(F,T)
Area ................................. 110, 123(T), 126(T)
Area
average ...................................................... 22
definition, as feature-specific
parameter ................................... 213(T)
Area equivalent diameter ......... 101, 123(T),
126(T)
Area filled .................................................. 110
Area fraction ... 1, 20, 22, 115, 126, 150, 178
average ............................................... 105(F)
determined by using measurement
frame ............................................... 214
for space filling grains ........................... 109
manual measurement by point counting
(ASTM E 562) ............................... 104
standard notation for ........................... 16(T)
method ...................................................... 19
Areal density, standard notation for ...... 16(T)
Area of features ........... 104, 110111, 126(T)
filled ........................................................ 104
Area-weighted distributions ...... 190(F), 192,
194(F)
Arithmetic processing of images .............. 86

JOBNAME: PGIAspec 2 PAGE: 2 SESS: 7 OUTPUT: Thu Oct 26 15:57:10 2000


274 / Index

Arrangement of microstructural
constituents .................. 153(F), 160161
Aspect ratio ... 101, 122124(T), 126(T), 157,
158(F), 213
ASTM Committee E-4 on
Metallography .............................. 15, 27
ASTM grain size .......................... 27, 29, 225
equation .................................................... 26
number ................................. 23, 26, 30, 233
scale .......................................................... 26
ASTM standards ................................. 127(T)
Atomic force microscopy (AFM) ............ 110
Attack-polishing agents .................. 59(T), 60
Auger electron microscope .................. 8081
Austenitic stainless steel
grain size distributions ...................... 190(F)
relationship between hardness and
mean intercept length ....... 187, 188(F),
189(F)
Auto-delineating processor ...................... 210
Autodelineation filters ................................ 11
Autofocusing ............................................... 32
Automated grain boundary
reconstruction procedures ....... 224225
Automated IA procedures of inclusion
analysis (ASTM E 1122) .................. 241
Automated sample-preparation
equipment ............................................ 18
Automated stage movement ...................... 32
Automatic focus ................................ 219220
Automatic gain control ............................ 219
Automatic image analysis (AIA) ... 101, 103,
104, 109, 110
adjusts diagonal distance to minimize
bias .................................................. 111
combined selection criteria .................... 123
computer-based, combining primitive
descriptors ....................................... 124
measuring frame choice ......................... 143
specimen quality requirements ......... 167(F)
vs. stereological methods ............... 200201
Automatic image analyzers ................. 17, 33
Automatic stage ........................................ 219
Average area fraction .............................. 105
Average area of features .... 105106, 126(T)
Average diameter ........... 111, 123(T), 126(T)
Average grain size (AGS) ........................ 117
Average length of features ................. 126(T)
Averaging filter ........................................... 86

B
Background erosion
technique.................................141143(F)
Background skeletonization
technique ............................... 141143(F)

Background thinning
technique ............................. 141143(F)
Backscattered electrons ..................... 81, 104
contrast mechanisms
associated with .................. 102(T), 105
Backscattered electron signal .................. 104
Bakelite ........................................................ 40
Banding in metals, procedure for
measuring and reporting
(ASTM E 1268) ................................. 108
Banding measurement in a
microstructure, methodology (ASTM E
1268) ................................... 244246(F,T)
Band sawing ................................................ 37
Bausch and Lomb QMS .............................. 5
Beryllium
grain size revealed by etching ................. 61
microstructure ................................ 61, 62(F)
Beta-phase, in copper alloys ........... 66, 69(F)
Bias ........................................................ 27, 30
of digital measurements,
characterization of ..................... 181(T)
from magnification effect ....................... 111
from segmenting error ........................... 103
guidelines to minimize ........................... 168
in area and length measurements ........... 111
in image selection .................................. 194
introduced by image processing and
digital measurements ...... 171183(F,T)
introduced by specimen preparation
and image acquisition ....... 165171(F)
quantification of ..................... 180182(F,T)
Bimodal distributions, grain sizes ........... 117
Bimodal duplex grain size ....................... 117
distribution .............................................. 190
Binarization ................. 171, 174175, 176(F)
Binary A, grain boundary image .............. 225
Binary B, circles on grain boundaries ...... 225
Binary exoskeletonize operation ............. 231
Binary image function A <AND>B ....... 225,
226(F)
Binary image processing
Boolean logic ................................. 9295(F)
convex hull .................................. 97100(F)
dilation ................................ 9697(F), 98(F)
erosion .............................. 9697(F), 98(F)
hole filling ..................................... 9596(F)
morphological binary processing ............. 95
pruning ........................................ 97100(F)
skeleton by influence zones
(SKIZ) .................................. 97100(F)
skeletonization ............................. 97100(F)
Binary logic ............................................... 160
Binary morphological processing
methods ................................................ 11
Bioscan ......................................................... 10
Bismuth, grinding ........................................ 48

JOBNAME: PGIAspec 2 PAGE: 3 SESS: 6 OUTPUT: Thu Oct 26 15:57:10 2000


Index / 275

Bitmap ......................................................... 80
Bit-plane method ........................................ 88
Black, gray level of zero assigned ............ 205
Boolean AND operator ............................ 233
Boolean logic .................................... 9295(F)
feature-based .................................. 9495(F)
operations ................................ 93, 94(F), 95
pixel-based ..................................... 9395(F)
truth table ................................................. 93
Boolean operations ....................................... 8
Boolean variables ..................................... 229
Border kill .................................... 176177(F)
Boron, addition effect on
sintered stainless steel
microstructure ........................ 199200(F)
Bottom-poured heats .................................. 18
Bounding rectangle ............................. 152(F)
Brass
beta-phase volume fraction,
thresholding of ............................... 214
thresholding of ........................... 220221(F)
Braze, relief ...................................... 49, 52(F)
Breadth .................................... 123(T), 126(T)
min DD ................................................... 111
Bright field illumination ........... 64(F), 65(F),
70(F), 81
Bright-field reflected-light
microscopy ......................................... 102
British steel ................................................... 4
Brown phase, selection in determining
phase volume content . . . 269270, 271(F)
Buehler Ltd. ................................................ 11

C
Cadmium, grinding ..................................... 48
Calcination process .................................... 52
Calconst ............................................. 225, 247
Calibration, of image analyzer ................. 111
Calibration constant (calconst) ...... 225, 247
Cambridge Instruments/Olympus Q10
system ......................................... 9, 10(F)
Cambridge Instruments
Q900 system ...................................... 8, 9(F)
Q970 system ...................................... 8, 9(F)
Carbides, in steel, x-ray
intensity maps ........................ 114, 115(F)
Carbon black, electron microscopy of,
material standard
(ASTM D 3849) ............................ 127(T)
Carbon content ............................................. 1
Carbon steel
grain size estimation using planimetric
method ......................................... 28(F)
microstructure consisting of ferrite
grains ................................. 125(F), 126

Carpet fiber quality control ....... 250, 252(F)


Cartesian coordinate systems ................. 164
Castable resins ................................ 4142, 46
Cast iron
bias introduced by improper polishing .. 166
graphite content
estimated by statistical
analysis .............................. 185186
measured in .......................................... 61
graphite particle classification by fuzzy
logic ........................................... 160(F)
Cathodoluminescence, contrast
mechanisms associated with ......... 102(T)
Cell breadth ...................................... 120121
Cells ............................................................ 161
Cemented carbides, polishing .............. 4950
Central-pressure mode .............................. 46
Central processing units (CPUs) .............. 12
Centroid of a feature ............................... 113
Centroid spacing ....................................... 130
Ceramic materials, precision saws for
sectioning .............................................. 39
Ceramics
polishing ............................................. 4950
relationships between porosity and
density ............................... 187, 188(F)
Ceramic shot material .......................... 45(F)
Ceramic varistor, multiphase ................... 104
Character, abbreviation for ....................... 229
Character-type variables ......................... 229
Charge coupled device (CCD) ... 171, 172(T)
Chart method, testing plan for inclusion
ratings (ASTM E 45) ........................... 18
Chart methods for rating
microstructures ................................... 15
Chemical properties ................................. 146
Chen operator ............................................. 92
Circles ........................................................ 233
electron-beam machined
gray levels of .................... 207, 208(F)
Circularity ..... 101, 123124(T), 126(T), 158,
159(F), 160(F), 213
ideal ........................................................ 159
rating ....................................................... 160
Close operator ..................................... 223(T)
Closing ................................... 9697(F), 98(F)
Clustering ..................... 114, 120122(F), 140
degree of ................................................. 130
of features ............................................... 108
sensed by number density variation
technique ......................................... 132
Clusters ...................................................... 133
Coal, microscopy of, material standard
(ASTM D 2798) ............................ 127(T)
Coating thickness
measurement (ASTM B 487) ........... 127(T)
measurement by IA system ........... 249250,
251(F)

JOBNAME: PGIAspec 2 PAGE: 4 SESS: 6 OUTPUT: Thu Oct 26 15:57:10 2000


276 / Index

Cobalt binder ................................... 66, 70(F)


Coefficient of variation
(CV) ........................... 131, 132, 140, 185
Cold drawing, copper-zinc alloys ......... 50(F)
Cold working, effect on degrees of grain
orientation ........................................ 24(T)
Cole, Michael ................................................ 4
Colloidal silica ............................................. 60
as abrasive used during polishing ........... 50
Color
additive ...................................... 257258(F)
definition ................................................. 257
densitometric measurement ...... 269, 271(F)
modeling of ............................... 257258(F)
opponent ................................................. 260
primary, digitized values of ................... 258
processing and enhancement, procedure
for ....................................... 266267(F)
subtractive .......................................... 258(F)
Color blindness, color image for
checking ......................................... 264(F)
Color densitometry ..................... 269, 271(F)
Color discrimination ................... 267269(F)
HLS space, phase volume content
determination ........... 269270, 271(F)
point-and-tell approach ..................... 268(F)
problem of .............................................. 265
user interface example .............. 268269(F)
Color-encoding schemes .......................... 261
limitations ............................................... 261
Color filter wheel, motorized .............. 8, 9(F)
Color image analysis .......... 61, 62(F), 6465
Color image processing ............... 257271(F)
Color images ................................ 263265(F)
electronic recording of ................... 261263
processing .......................... 265, 266267(F)
red-green-blue (RGB) components ....... 263,
264(F)
Colorimetry, definition .............................. 260
Color signal dynamics ............................. 262
Color signal veracity ................................ 262
Color saturation ....................................... 261
Color spaces ................................. 258261(F)
Color wheels ......................................... 258(F)
Comet tailing .............. 49, 51(F), 54, 72, 204
eliminated in specimen preparation ....... 210
Comparison chart methods ..................... 125
Composition ................................. 157, 158(F)
Compression algorithms, standardized ..... 80
Compression of images .............................. 80
Computer-aided image processing ......... 162
Concrete microscopy of material
standard (ASTM C 856) ............. 127(T)
Confidence intervals ........................ 186, 187
Confidence level (CL) ...................... 154, 184
Confidence level (95%) .................... 102, 217
Confidence limit (CL) ................................ 32

Connectivity ................................. 179180(F)


Contiguity of phases ................................. 197
Continue instruction ................................ 204
Continuous casting ..................................... 18
Contrast, variable ...................................... 204
Contrast mechanisms .................. 101102(T)
with associated imaging signals ....... 102(T)
Contrast process ortho (CPO) ................ 220
Convex hull ............... 97100(F), 151152(F)
Convexity/concavity quantification ........ 152
Convex perimeter ................................ 126(T)
Convolution filters ...................................... 11
Convolutions ............................................... 10
Copper alloys, etching ..................... 66, 69(F)
Copper-zinc alloys, pitting corrosion
from polishing ........................... 49, 50(F)
Core drilling ................................................ 37
Correlation analysis ................................. 186
Count
of features ...................................... 103, 233
of number of intercepts .......................... 103
of objects ................................... 175178(F)
Counts per field ................................ 232233
Crack-opening displacement, of HSLA
steels ...................................... 197199(F)
Cracks, detection and evaluation by
large-area mapping ................ 253255(F)
Crofton perimeter ....................... 112, 113(F)
Crossed-polarized light ....... 69, 71(F), 72(F)
Cross-polarized light ....................... 61, 62(F)
Cumulative gray level histogram ..... 206(F),
214
Cyan, magenta, and yellow (CYM)
color space model .............. 258, 259(F)
Cyan, magenta, yellow, and black
(CMYK) model ................................. 258

D
Dark-field illumination .............................. 81
Database flush command ......................... 220
Data interpretation ...................... 186191(F)
examples .................................... 191201(F)
dc power supplies ....................................... 82
Deformation, effect on grain shape,
sampling to study ................................. 37
Degree of banding measurement
(ASTM E 1268) ........................... 127(T)
Degree of freedom .................................... 184
Degree of orientation () ......... 12, 108, 245
DeHoff, R.T. ................................................ 33
Delesse, A. ......................................... 1, 19, 20
Delineation enhancement ...................... 91(F)
Delineation filter .................................... 91(F)
Delta-ferrite ............................... 66, 67(F), 69
in stainless steel, thresholding of area
fraction ............................... 214215(F)

JOBNAME: PGIAspec 2 PAGE: 5 SESS: 7 OUTPUT: Thu Oct 26 15:57:10 2000


Index / 277

Densitometer, QTM system as ..................... 4


Depth measurements .................................. 16
Desktop imaging, in 1990s .............. 1012(F)
Deviation moments ................................... 152
Diallyl phthalate ................................... 40, 43
Diameter of grain areas, equivalent
spherical .............................................. 241
Differential interference-contract (DIC)
illumination .............. 50(F), 51(F), 63(F)
Digital-camera technology .............. 8, 1213
advantages ...................................... 262263
disadvantages .......................................... 263
Digital-image acquisition devices ............... 3
Digital imaging ............................................. 3
Digital measurements
bias introduction ..................... 180182(F,T)
problems of ..................................... 179180
Digital-signal processing hardward ........... 6
Dilate (binary image amendment
operator) ....................................... 223(T)
Dilation ............. 96–97(F), 98(F), 134–136(F)
Dilation and counting technique ........... 130,
134–136(F), 143
Dilation cycles .............................. 134–136(F)
Dilation operation ................................ 5, 7, 8
Dilation technique ....................... 141–143(F)
Directed diameters (DD) .......................... 110
Directionality ......................................... 23–24
Directional operators used to amend
image ............................................. 224(F)
Dirichlet cell ......................................... 137(F)
size of .................................................... 139
Dirichlet network ............................. 130, 143
for a particle dispersion ......................... 137
Dirichlet tessellations ........ 120–122(F), 143
Dirichlet tessellation
technique ...................... 137–140(F), 142
Discrimination of colors ............. 267–269(F)
Disector method ........................................ 164
Distance function ...................................... 151
Distribution functions ...................... 184–187
for grain section area ........ 191–193(F), 194
Distributions ...................................... 184–187
area-weighted ............... 190(F), 192, 194(F)
bimodal grain size .................................. 190
log-normal .............................................. 237
nonmetallic inclusion ............. 234–241(F,T)
normal ...... 183–184, 185, 191–192, 193(F)
number-weighted ............................... 190(F)
Disturbed metal, introduction of .... 49, 50(F)
Dots per inch (dpi) ..................................... 77
Dragging out, of graphite
and inclusions ............................ 49, 51(F)
Draw-vector function ............................... 250
Dry abrasive cutting .................................. 36
Ductile iron, mechanical
properties ............................... 147–148(F)
Duplex feature size ........................... 107–108
Duplex grain size ..... 108, 117, 118–119, 125
Duplex grain size measurement (ASTM
E 1181) .......................................... 127(T)
Duplex steels, microstructural parameters
effect on hardness .......................... 196(F)
Dyes .............................................................. 41

E
Ealing-Beck ................................................... 6
Ecole des Mines de Paris ............................. 5
Edge effects .................................. 142(F), 143
Edge of features, sharpening of ............... 207
Edge preservation ............................ 42–46(F)
Edge retention ........................... 42–46(F), 72
by automatic polishing device ................. 56
guidelines ............................................ 45–46
Edge rounding ................................. 41(F), 42
Electroless nickel plating ...................... 42(F)
Electroless plating ...................................... 46
Electrolytic plating ..................................... 46
Electronic components, precision saws
for sectioning ........................................ 39
Electron microscopes ................ 81–82, 87(F)
Electropolishing .................................... 53–54
standard (ASTM E 1558) ................. 127(T)
vs. electrolytic etching ............................. 68
Ellipse, ideal
gray level
analyzed as histograms ............. 206, 207(F)
defined ............................................ 205–206
Elongation ............................ 157, 158(F), 159
Embedding .................................. 48(F), 49(F)
Energy-dispersive spectroscopy ................ 66
Epoxy resins ............ 40, 41–42, 43, 45(F), 46
for mounting ............................ 41–42(F), 43
Equivalent diameter ................................. 192
Equivalent grain diameters ..................... 241
cumulative distribution for
ingot iron ................................... 243(F)
for ingot iron ..................................... 242(T)
histogram for ingot iron .................... 243(F)
Erode (binary image amendment
operator) ....................................... 223(T)
Eroded point ......................................... 91–92
Erosion .......................... 96–97(F), 98(F), 134
Erosion/dilation, definition ................... 91–92
Erosion operation ................................ 5, 7, 8
Estimates, weighted, of properties ........... 189
Estimating basic characteristics ..... 183–186
Estimators ................................................. 186
Etchant developments (ASTM E 407) ..... 62
Etchants ..................................... 18, 36, 45(F)
Barker's reagent ............................ 69, 71(F)
Beraha's solution ................................. 64(F)
Beraha tint etchant ............. 66, 67(F), 68(F)
chromic acid .................................. 66, 67(F)
development of ......................................... 62
electrolytic reagents ...................... 66, 67(F)
Fry's reagent .................................... 50(F)
glyceregia ...................................... 66, 67(F)
Groesbeck's reagents ............................... 66
hydrofluoric acid ...................... 48(F), 52(F)
immersion tint ......................... 63, 64(F), 72
Klemm I reagent ..................... 64, 66, 69(F)
Kroll's reagent ................................... 50(F)
Murakami's reagent .......... 45(F), 65(F), 66,
70(F)
nital .. 41(F), 42(F), 44(F), 45(F), 63, 64(F),
65(F)
nitric acid ...................................... 66, 67(F)
oxalic acid ..................................... 66, 67(F)
picral ....................... 45(F), 63, 64(F), 65(F)
potassium hydroxide .......... 66, 67(F), 68(F)
selective .............................. 50(F), 63–68(F)
selective, defined ...................................... 63
sodium hydroxide .............. 66, 67(F), 68(F)
sodium picrate ..................................... 65(F)
tint ..................... 63, 66, 67(F), 68(F), 70(F)
Vilella's reagent .................................. 43(F)
Weck's reagent, modified ................... 39(F)
Etches ......................................................... 204
Etching ..................... 18, 37, 61–72(F), 72–73
aluminum alloys .................................. 52(F)
and bias in specimen preparation .. 166–167
as bias source ......................................... 195
deep, and image quality ............ 167, 168(F)
depth ......................................................... 62
drying problems that obscure true
microstructure ........................ 62, 63(F)
electrolytic .................... 67(F), 68–70(F), 72
of grain boundaries .................................. 62
heat tinting .................................... 71, 72(F)
immersion tint ............................... 63, 64(F)
interference layer method ........................ 71
iron-base alloys ........................................ 66
metals affected by ....... 62, 66, 67(F), 68(F),
69(F), 69–70(F)
necessity for imaging ............................. 167
potentiostatic ...................................... 63, 72
prior-austenite grain boundary ........... 62–63
procedures ...................................... 61–63(F)
and residual sectioning/grinding
damage during polishing .................. 50
selective ............. 50(F), 63–68(F), 167, 168
stains .............................................. 42, 43(F)
time involved ............................................ 62
tint ...................................... 64(F), 66, 70(F)
Etching stains ......................................... 43(F)
Euclidean distance map (EDM) ............... 92
Euler points, number of ............................ 152
Expected mean .......................................... 133
Expected variance .................................... 133
Experimental procedure, steps of ...... 163(T)

F
Farnsworth, Philo Taylor ............................ 2
Fast Fourier transform (FFT)
algorithm .................................. 86–87(F)
inverse .................................................. 87(F)
Feature angle ............................................. 110
Feature finding program ......................... 110
Feature measurement report ............. 126(T)
Feature number ........................................ 110
Feature positioning effect ..................... 79(F)
Features
number excluded .................................... 104
number of ............................................... 104
total .................................................... 126(T)
Feature-specific
descriptors ................................. 122, 123(F)
distributions ............................ 234–255(F,T)
measurements .................... 211, 224–234(F)
parameters, primary .......................... 213(T)
properties .................................................... 5
Feret diameters ....... 110, 112, 121–124, 151,
152(F), 213(F)
horizontal and vertical .............. 151, 152(F)
maximum ................................... 151, 152(F)
user-oriented .............................. 151, 152(F)
Feret 90, definition, as feature-specific
parameter ....................................... 213(T)
Feret 0, definition, as feature-specific
parameter ....................................... 213(T)
Ferrite ............................................... 66, 67(F)
Fiber length ................................. 123(T), 124
Fiber-reinforced composite, mechanical
properties ............................... 146, 147(F)
Fibers ......................................................... 123
Fiber width .................................. 123(T), 124
Field area ..................................... 104, 126(T)
Field horizontal projection
measurement, number of pixels used to
create ................................................... 247
Field measurement report .................. 126(T)
Field measurements ............ 211, 224–234(F)
Field NA ................................................ 126(T)
Field NL ................................................ 126(T)
Field number ....................... 104, 218, 219(F)
Field size .................................................... 132
Fields of view, optimum
number of ........................... 182–183(F,T)
Fill holes ....................................... 216, 217(F)
image amendment procedure ............ 223(T)
Fill holes command .................................. 220
Filters ......................................................... 204
Fisher, Colin .................................................. 4
Flame cutting ........................................ 36, 37
Flat-edge filler ........................................ 45(F)
Flicker method ................................. 174, 214
Flow stress ................................................. 193
Fluorescent agents ...................................... 41
Flush database command ........................ 220
Flying spot ..................................................... 2
Focal plane ................................................ 221
Focus control ............................................. 240
Focus position (z-coordinate) for each
field of view ....................................... 240
Form factor (FF) .................... 123(T), 126(T)
For statement ............................................ 204
Fourier transform ......................... 86–87(F)
Fractal dimension ..................................... 120
of the surface ................. 109, 119–120(F)
Fractals ...................................................... 171
Fracture toughness
data interpretation example ..... 197–199(F)
of ferritic ductile iron ............... 148, 149(F)
of fiber-reinforced composites ............... 146
of silicon-ferrite ...................................... 148
Frame-acquisition electronics ................... 81
Frame grabber ............................................ 81
Free-machining steel
mounting ........................................ 43, 44(F)
and use of plating ................................ 42(F)
Frequency-versus-spacing
distribution .................. 134, 135(F), 136
Frei operator ............................................... 92
Frequency domain
transformation ......................... 8687(F)
Fuzzy logic ............................................ 160(F)

G
Gamma-alumina slurries ........................... 52
Gamma curve transformation ............. 84(F)
Gaussian filter ............................................. 86
Gaussian gray processing
function ................................. 209–210(T)
Glagolev, A.A. ....................................... 1–2
Glass fiber composite, pattern
matching .................................... 92, 93(F)
Gloves, use of .............................................. 60
Glyceregia, as etchant ............................ 52(F)
G measurement (grain size) procedure,
using automatic or semiautomatic image
analysis (ASTM E 1382) ................... 117
Gradient filter ........................................ 87(F)
Gradient gray processing
function ................................. 209–210(T)
Grain area, average .............................. 26, 27
Grain boundaries, removal of ................. 223
Grain diameter, average ...................... 26, 27
Grain elongation, degree of ....................... 24
Grain orientation, degrees of ............... 24(T)
Grains intercepted by a line of fixed
length, number of ................................ 26
Grain size .... 23, 25–26, 61, 116–119(F), 224
austenite .................................................... 26
data interpretation examples ..... 191–194(F)
ferrite ........................................................ 26
and mechanical properties of
polycrystals ........................ 188–189(F)
as parameter for modeling properties of
polygrained aggregates and
polycrystals ........................ 188–189(F)
prior-austenite ....................................... 26
and shape ............................... 193–194(F)
of sintered tungsten-carbide-cobalt ..... 66,
70(F)
Grain size by automatic image analysis
(ASTM E 1382) ........................... 127(T)
Grain size determination,
α-phase ...................................... 29–31(F)
Grain size distributions ........................... 241
Grain size equation .................................... 26
Grain size estimation procedure
(ASTM E 112) .................. 26, 27, 29, 30
Grain size features conditions (ASTM
E 1181) ............................... 117, 118–119
Grain size measurement standard
(ASTM E 112) ................................... 117
Grain size number ........................... 226, 227
Grain size quantification ............ 155–156(F)
Grain size scale ........................................... 26
Grains per unit area, number of ............... 26
Grains per unit volume, number of .......... 26
Grain-structure measurements ... 23–31(F,T)
Grain structures
nonequiaxed .............................................. 29
two-phase ....................................... 29–31(F)
Grain volume, average ............................... 26
Graphic circles (macro command) ......... 232
Graphic flush command .......................... 220
Graphic slider ........................................ 90(F)
Graphite
change in precipitate form ..................... 159
in cast iron ................................................ 61
in cast products, observation of ............. 167
in nodular cast iron ................................ 155
in sintered tungsten-carbide-cobalt ......... 66,
70(F)
mean free path on fracture toughness
of ferritic ductile iron ....... 148, 149(F)
particles classified by fuzzy logic
application ................................. 160(F)
Gray cast iron, ferritic, bias introduced
by improper polishing ........................ 166
Gray/color levels ......................................... 80
Gray image processing ............................ 210
Gray images .............................. 204–210(F,T)
Gray iron, etching, selective ................. 65(F)
Gray level
frequency value ...................................... 214
histogram ............ 104, 206(F), 214, 215(F),
216(F)
equalization ............................... 84, 85(F)
image processing ...................................... 82
opening ................................................ 87(F)
range slider .......................................... 90(F)
for segmentation ............................ 88–91(F)
Gray levels ... 76(F), 77, 81, 83, 84, 91–92(F)
ideal square analysis ................. 205, 206(F)
number recognized
by digital equipment .................. 167–168
by the human eye ....................... 167–168
selection of ............................................. 205
sensitivity of humans vs. digital
equipment ....................................... 168
uniform ellipse .......................... 205, 206(F)
uniform square .......................... 205, 206(F)
Gray scale .............................. 11, 104, 105(F)
Gray-scale image amendment ................... 11
Gray-scale morphological processors .. 11(F)
Green filter, for thresholding
brass ....................................... 220–221(F)
Grid/quadrant counting ........................... 130
Grid test points, optimum number of ........ 21
Grinding ............ 36, 37, 43–45, 46–49(F), 72
abrasives ................................................... 47
by automatic polishing system ................ 55
embedding ................................ 48(F), 49(F)
equipment ........................................... 48–49
media .................................................. 47–48
metals, soft ............................................... 48
planar ..................................... 47, 57–58, 60
rigid ........................................................... 60
specimen movement during ..................... 54
wet ............................................................ 47
wheels ....................................................... 48
with rigid discs .................................... 59(T)
Guard frame ................................ 177–178(F)
G value, grain size ..................................... 117

H
Hacksawing ................................................. 37
Hafnium, grain size revealed
by etching ............................................... 61
Hall-Petch relationship ............................ 193
Halo error ............................................. 210
Hardness, volume fraction and specific
surface area effects ........................ 196(F)
Heat tinting ...................................... 71, 72(F)
metals affected by ......................... 71, 72(F)
Heyn, Emil .................................................. 29
Heyn intercept grain size procedure
(ASTM E 112) ................... 23, 29, 30(F)
High-speed (M2) tool steel
carbide distribution analysis,
feature-specific ............... 241–244(F,T)
mounting and etching .......................... 43(F)
High-strength low-alloy steels, data
interpretation example ........... 197–199(F)
HIS space .............................................. 90, 91
Histogram characteristic-peaks
thresholding method ............... 89–91(F)
Histogram equalization .................. 84, 85(F)
Histograms
area-weighted ................................. 107–108
describing distribution of some size
parameter ........................................ 107
for equivalent grain diameters .......... 243(F)
for inclusion lengths (Feret max) ......... 235,
237(F)
grain size in structure, area-weighted ... 117,
118(F)
gray-level ................................................ 104
gray-scale .......................... 174–175, 176(F)
showing area fraction phases .... 104, 105(F)
Histotrak image analyzer ............................ 6
Hole count .................................... 110, 152(F)
Hole filling ........................................ 95–96(F)
Holes, number of .......................... 110, 152(F)
Homogeneity
maximum ................................................ 131
quantification of ..................................... 161
Horizontal projection ..... 213(F), 246–247(F)
Hue ............................................................. 261
definition ................................................... 90
intensity, and saturation (HIS) space ...... 90,
91
Hue-lightness-saturation (HLS) color
space model ... 91, 258, 259(F), 260–261
transformation from RGB model ............ 91,
265(F)
Hue-lightness-saturation (HLS) space
model, color discrimination
used to determine phase
volume content .............. 269–270, 271(F)
Hurst algorithm .......................................... 92
Hydrochloric acid
in ethanol ....................................... 66, 67(F)
with ferric chloride ....................... 66, 70(F)
Hydrofluoric acid ....................................... 60
as etchant .................................. 48(F), 52(F)

I
IBM-class computers ................................... 9
IBM-compatible computers ....................... 10
Illumination
uneven ....................................... 166(F), 167
uniform, from high-quality optical
microscope ........................ 168, 169(F)
Image acquisition
bias introduction ........................ 165–171(F)
devices ................................................ 80–82
Image amendment .................... 222–224(F,T)
Image analysis
area, actual image .......................... 75–76(F)
binary image processing ............. 92–100(F)
feature discrimination ........ 88–92(F), 93(F)
feature positioning effect ..................... 79(F)
image acquisition ............................... 80–82
image compression ................................... 80
image storage ........................................... 80
macro program ............................... 203, 204
macro system, bits of gray used ....... 8, 205
measurement issues ....................... 78–80(F)
principles ..................................... 75–100(F)
processing of image ...................... 82–87(F)
process steps .................................. 75, 76(F)
resolution vs. magnification ............... 77–78
routine ..................................................... 203
size range of images .......................... 76–77
stepwise scheme ..................................... 270
system pixel arrays ....................... 76(F), 77
Image analysis software ........................... 156
Image analysis specialists ............................ 8
Image analysis (IA) systems ........................ 2
Image analysis technique,
reproducibility of measurements ....... 191
Image analyzers .......................................... 32
for two-phase materials ............. 194–197(F)
TV-based, development of ......................... 4
Image editing ............................................ 224
procedures for ......................................... 224
Image-input devices, conditions for ........ 262
Image measurements ................ 211–213(F,T)
Image memory, on-board ........................... 10
Image number ........................................... 231
Image processing .............................. 110, 120
arithmetic processing ............................... 86
frequency domain
transformation .......................... 86–87(F)
gray-level .................................................. 82
image acquisition ................................... 172
initial image information ....................... 172
neighborhood kernel processing .. 84–86(F),
87(F)
pixel point operations ........ 83–84(F), 85(F)
polynomial fitting ..................................... 82
rank-order processing .................... 82, 83(F)
shading correction ............................ 82–83
Image process input ................................. 231
Image process output ............................... 231
Image-Pro software package ..................... 12
Image segmentation
(thresholding) ......................... 214–221(F)
Image smoothing ...................................... 210
Image tiling .......................... 241, 250–255(F)
IMANCO (for Image Analyzing
Computers) ....................................... 6(F)
Q 360 system ............................................. 7
Inclusion content .................................. 31–32
assessment by chart methods (ASTM E
45) ............................................... 31–32
Inclusion length, mean value of ............... 235
Inclusion length (Feret max)
histogram .............................. 235, 237(F)
Inclusion lengths, standard
deviation of ......................................... 237
Inclusion pullout ....................................... 204
Inclusion rating by automatic image
analysis (ASTM E 1122) ............. 127(T)
Inclusion ratings ......................................... 18
Inclusions ................................................... 120
in steel ...................................................... 61
length measurement ................................. 32
nonmetallic ......................................... 17, 61
role in void or crack initiation ............... 129
by stereology (ASTM E 1245) ......... 127(T)
Individual pressure mode .......................... 46
Ingot-iron image reconstruction ............ 225,
226–227(F)
Ingot iron
equivalent grain diameter ................. 242(T)
histogram ....................................... 243(F)
cumulative distribution .................. 243(F)
statistical parameters from
feature-specific grain data ......... 242(T)
Inhomogeneity
degree of ................................................. 131
quantification of ..................................... 161
Input image ............................................... 231
Inscribed x and y ...................................... 110
Integer variables ............................... 228–229
Integral parameters ................................. 150
Intensity .......................... 90, 91, 114–115(F)
definition ................................................... 91
Interactive selection method .......... 88–91(F)
Intercept count ............................. 4, 110, 225
Interceptions per unit length .................... 23
Intercept length, average ............................ 26
Intercept method ............................. 29, 30(F)
Intercepts .................................... 106–107(F)
total .................................................... 126(T)
database ................................ 232, 233–234
Interference layer method
(Pepperhoff) ......................................... 71
Interlamellar spacing ................................. 24
Intermetallic precipitates ........................... 61
International Society for Stereology
(ISS) ....................................................... 4
standard system of notation ................ 16(T)
International Standards Organization
(ISO) ...................................... 26, 126–127
Internet, for electronic mailing .................. 13
Intersection ................................................ 106
Intersections per unit length ..................... 23
Inverse thinning ........................................ 130
Iron, x-ray intensity maps of carbides ..... 114,
115(F)
Iron-base alloys, etching ............................ 66
Iron-carbide, in x-ray intensity maps ..... 114,
115(F)
Irregularity .......................... 157, 158(F), 159

J
Jeffries, Zay ..................................... 27, 28(F)
Jeffries planimetric method ............ 156, 178
Jernkontoret (JK) rating production
using image analysis
(ASTM E 1122) ................................... 32
J-integral ................................................... 148
Joint Photographic Experts Group
(JPEG) compression algorithm ........ 80

K
Keel block .................................................... 17
Kernel ................................... 208, 209(T), 210
Kontron and Clemex
Technologies Inc. ................................ 11

L
Lab color space model ....... 258, 259(F), 260
Lab color system ...................................... 260
Lab coordinate system ............................. 260
Lamellar structures ..................... 146–147(F)
Laplacian algorithm ................................... 92
Laplacian filter ........................................... 86
Laplacian gray processing
function ................................. 209–210(T)
Lapping ........................................................ 49
Laps .............................................................. 49
Large-area mapping (LAM) ...... 250–255(F)
automated stage control used ................. 255
Laser scanning .................................... 80–81
Lead
grinding ..................................................... 48
grinding with embedded diamond
abrasive particles ......................... 49(F)
Least-square curve fitting ......................... 86
Leco Corporation ....................................... 11
Leica Microsystems Q550 MW ........... 12(F)
Leica Q570 system ................................. 11(F)
Leitz texture analysis system (TAS) ........... 6
Length ........................................................ 111
definition, as feature-specific
parameter ................................... 213(T)
Length fraction (LL) ................................. 106
Length of linear features analyzed per
unit test area ..................................... 150
Length per unit area .................................. 23
Light microscopes ............................... 80–81
Lightness, definition .................................... 91
Lightness coordinate ................................ 260
Light pen ................................................... 5, 6
Lineal analysis .............................................. 1
Lineal density (NL) .......... 103, 104, 106, 151
standard notation for ........................... 16(T)
Lineal fraction ......................... 1, 20, 22, 151
standard notation for ........................... 16(T)
method ...................................................... 19
Linear intercept lengths (Li) ................... 106
Liquid-phase sintering, and boron
content of stainless steel ....... 199–200(F)
Lit Image (lightness image) ...... 266–267(F)
Local area fraction ............ 122, 139, 140(F),
141(F), 142, 143
Local number densities ............................ 129
Local volume fractions ............................ 129
Log area vs. log scale fractal plot ..... 120(F)
Log-normal distribution ......... 125, 139–140,
185, 237
mean of (μ) ............................................ 237
standard deviation of .............................. 237
Longest dimension (max DD) ... 111, 123(T),
126(T)
Look-up table (LUT) ........................... 83–84
transformations ......................................... 10
Lost grain boundary lines, or
segments ................................ 155–156(F)
Low-carbon steel
etching ........................................... 63, 64(F)
sheet, degrees of grain orientation ..... 24(T)
Low-pass, mean gray processing
function ......................................... 209(T)
Lubricants, to prevent overheating
during polishing ................................... 50
Luminance ................................................. 260
Luminance/Chroma color model ............ 262

M
Macintosh (Mac)/Apple computers ......... 10
Macro ................................................. 203–204
construction information ........................ 218
development ................................... 228–231
Macrocode steps ....................................... 266
MACRO GRAIN_INTERCEPTS
(macro for intercept
counting) .................................... 232–234
MACRO INCLUSION
MEASUREMENT .................... 239–241
Macro program ........................................ 203
controlling automatic stage movement
and focusing of microscope ........... 216
MACRO RECONSTRUCT
GRAINS ..................................... 230–231
Macro routine ................................... 229–231
MACRO STORE IMAGES .... 229–230, 231
MACRO VERTICAL LINES ......... 249–250
Magnesium, grain size revealed by
etching .................................................. 61
Magnification ..................... 77–78, 79(F), 80
and bias in digital measurements .......... 181
definition ................................................... 77
empty ...................................................... 171
influence on values of analyzed
images ................................ 168–171(F)
optimum value ...................... 182–183(F,T)
verification of correct amount of ........... 171
Magnification effect .................................. 111
Manganese sulfide inclusions ......... 217–218
Manual methods of inclusion analysis
(ASTM E 45) ..................... 103, 240–241
Manual point counting measurement
(ASTM E 562) ............................. 127(T)
procedures ............................................... 116
Material Safety Data Sheets (MSDSs) ..... 56
Mathematical morphology ...................... 5, 6
development of ........................................... 4
Matheron, G. ................................................. 5
Max horizontal chord ......................... 126(T)
Maximum Feret diameter ....................... 158
Maximum intercept length ...................... 151
Maximum particle width ............ 151, 152(F)
Mean .................................................. 125, 131
Mean center-to-center planar
spacing ...................................... 21(F), 25
Mean center-to-center spacing of
bands .................................................. 245
Mean chord length, determined by using
measurement frame ............................ 214
Mean diameter .......................................... 124
Mean edge-to-edge spacing of bands ..... 245
Mean free path .... 25, 32, 108–109, 126, 245
of graphite, on fracture toughness of
ferritic ductile iron ............ 148, 149(F)
Mean grain size number calculation
(ASTM E 1382) ........ 227–228, 233–234
Mean graphite nodule diameter ............. 155
Mean intercept length ........ 123(T), 192, 193
Mean lineal intercept (L3) .............. 106–107
Mean lineal intercept distance (L) ... 25, 111
Mean lineal intercept length ............... 23, 29
of α-grains ................................................ 30
Mean number of interceptions of
features divided by the total test area
(NA) .................................................... 228
Mean random spacing ......................... 24–25
Mean true spacing ............................... 24–25
Mean value ..... 184, 186, 187, 188–189, 192,
194
Mean x intercept ................................. 126(T)
Mean y intercept ................................. 126(T)
Measurement frames ............................... 143
circular .............................................. 214(F)
methods of defining ................... 211–212(F)
selection of ..................................... 226–227
Measurements ........................... 101–128(F,T)
derived ..................................... 115–127(F,T)
derived, field ............................ 115–122(F)
direct .......................................... 102–115(F)
distributions of ......................... 124–126(F)
feature orientation .................................. 108
feature-specific ......................... 110–115(F)
feature-specific, derived ....... 122–126(F,T)
feature-specific, primitive ....................... 110
field ............................................ 102–110(F)
imaging signals ......................... 101–102(T)
indirect .......................................... 123–124
intercept ........................... 103, 106–107(F)
mean free path .............................. 108–109
stereological parameters .............. 115–116
surface area ................................... 109–110
Mechanical properties ............................. 145
Media Cybernetics ................................... 10
Median ....................................................... 125
Median filter ............................... 86(F), 87(F)
Median gray processing function ...... 209(T)
Median pixel value (e) ............................. 208
Median value L50% for cumulative
distribution (inclusion lengths) ....... 237
Megapixel images ......................................... 7
Metal, metallographic specimen
preparation, contemporary practice .. 58(T)
Metallographic etching. See Etching.
Metallographic principle ......................... 108
Metallography ............................................... 1
Metal-matrix composites ......................... 142
microstructure, dilation and counting
technique .................................... 136(F)
Metals Research ................................... 4, 6
Metals Technology
Laboratories/CANMET ................... 137
Methyl methacrylate, for mounting ... 40, 43,
44(F)
Metric grain size number .......................... 26
Metrology methods ..................................... 16
Microcracks ............................................... 129
Microinfrared spectroscopy .................... 114
Microscope setup for line width
(ASTM F 728) .............................. 127(T)
Microsoft Windows operating system ...... 10
Microstructural banding, measurement
of (ASTM E 1268) ............ 244–246(F,T)
Microstructural element .................. 163–164
Microstructural gradient ................. 163–164
Microstructure
basic characteristics .................. 153–154(F)
characterization of banding and
orientation (ASTM E 1268) ............. 92
characterizations of .............................. 4, 92
constituents of .......................... 153–161(F)
descriptors ............................................... 150
essential characteristics for
description ......................... 150–154(F)
property relationships ............. 145–150(F)
quantitative description of ..................... 145
systematic sampling ............................... 164
Micro-Videomat system ............................... 6
Millipore MC particle-measurement
system ..................................................... 6
Mineral dressing ............................. 129–130
Minicomputers, general-purpose ........... 7(F)
Modification ratio (MOD ratio) ............ 250,
252(F)
Moore, George ............................................ 15
Morphological binary processing ............. 95
Morphological image processing ................ 5
Morphological reconstruction methods ... 11
Mounting .............................. 37, 40–46(F)
castable resins for ............................. 41–42
clamp ........................................................ 40
compression ................................... 40, 41(F)
edge preservation ......................... 42–46(F)
presses ........................................... 40, 41(F)
purposes .................................................... 40
Munsell color space model ........ 258, 259(F),
261

N
National Television System Committee
(NTSC) color-encoding scheme ...... 261
Naval brass, etching ........................ 66, 69(F)
Nearest-neighbor spacing distribution .. 130,
131(F), 132–134(F), 143
Near-neighbor distances .......................... 137
Near-neighbor spacing ............................. 130
Neighborhood kernel processing .. 84–86(F),
87(F), 97(F)
Network at mid-edge/edge-particle
spacing ............................................... 130
Next instruction ........................................ 204
Nickel, etching ............................................. 62
Nickel-base superalloys, bimodal duplex
grain size ............................... 117, 118(F)
Nickel-manganese alloys, 3-D
reconstruction of microstructure ....... 164,
165(F)
Nickel-manganese steel, grain volume
distribution ............................. 186–187(F)
Niobium
anodizing .................................................. 69
etching ...................................................... 62
Nitric acid .................................................... 60
as etchant ....................................... 66, 67(F)
Nitrides ........................................................ 61
Nitriding ................................ 43, 44(F), 45(F)
Nodular cast iron, size measurement ...... 155
Nonequiaxed grains .................................... 29
Nonmetallic inclusion
distribution ........................ 234–241(F,T)
Nonmetallic inclusions
data interpretation example ..... 197–199(F)
observation of ......................................... 167
Normal distributions ............. 183–184, 185,
191–192, 193(F)
Notation, stereological ........................... 16(T)
Number array ..................................... 75–76
Number density technique ...................... 143
Number density variation
technique ............................. 131–132(F)
Number of features .................................. 104
divided by the total area of the field
(NA) ................................................ 104
excluded .................................................. 104
Number of interceptions of features per
total test area (NA) ..................... 105–106
Number of intercepts (NL) .............. 104, 108
parallel to longitudinal axis of
deformation .................................... 245
perpendicular to longitudinal axis of
deformation ..................................... 245
per unit length (NL) ................................ 225
perpendicular to the deformation axis ... 248
Number of particles per unit volume .... 124
Number per unit area (NA) ........... 21(F), 22
stereological definition ............................. 32
Numerical density ..................................... 150

O
Occupied squares method ..................... 19
Offset threshold error .............................. 210
On-board image memory .......................... 10
On-board processors .......................... 11–12
Opening ............................. 96–97(F), 98(F)
Open operator ..................................... 223(T)
Opponent-colors approach ...................... 260
Optical microscope, illumination system
of ......................................................... 205
Optical scanner ........................................... 81
Optic axis ................................................... 109
Optimas software ....................................... 10
Orientation ............................. 23–24, 108
degree of .......................................... 24, 245
Orthogonal sections, of
microstructures ...................... 164, 165(F)
Outline, image amendment
procedure ....................................... 223(T)
Output image ............................................ 231
Oxide coatings ........................................ 84(F)

P
Paper, microscopy of, material
standard (ASTM D 686) ............. 127(T)
Paper, microscopy of, material
standard (ASTM D 1030) ........... 127(T)
Parallax angle ........................................... 109
Parallax distance ...................................... 109
Parallel distribution of phases ................ 146
Particle
basic measures of ...................... 151, 152(F)
convex ..................................................... 151
Particle dispersions
characterization of .................. 129–144(F)
clustered .................................... 129, 130(F)
definition ................................................. 129
ordered ....................................... 129, 130(F)
random ....................................... 129, 130(F)
techniques used to characterize ............. 130
Particle perimeter ....................... 151, 152(F)
Particles, per unit intercept length ............ 179
Particle surface area ................................ 151
Particle-to-particle spacing ..................... 136
Pattern analysis system (PAS) (Bausch
and Lomb, USA) .................................. 6
Pattern-matching algorithms ......... 92, 93(F)
PCI frame grabber ..................................... 12
Pearlite, in steel, thresholding
of .............................. 215–218(F), 219(F)
Percent of relative accuracy
(% RA) .............................................. 102
Perimeter ....... 110, 111–113(F), 123, 123(T),
126(T), 152
definition, as feature-specific
parameter ................................... 213(T)
total, standard notation for .................. 16(T)
Personal computers (PC) ............................. 9
Phase Alternation Line (PAL)
color-encoding scheme ..................... 261
Phase percentage .......................................... 4
Phase volume content, determination
example ......................... 269–270, 271(F)
Phenolic resins ............... 40, 43, 44(F), 63(F)
Physical properties ................................... 146
Pitting corrosion .............................. 49, 50(F)
Pixel diagonals .......................................... 112
Pixels ............................................... 76(F), 205
definition ................................................... 76
dimension ................................................. 77
perimeter ................................................. 206
Planimeter ................................................... 19
Planimetric method ......................... 27, 28(F)
Plastic deformation, as bias
source ..................................... 166–167(F)
Point count, standard notation for ......... 16(T)
Point counting ............ 2, 4, 20–22(F), 29–30,
31(F), 116
to manually measure area ...................... 104
Point count method (ASTM
E 562) ........................................ 20–22(F)
Point fraction ............................... 20, 22, 151
method ...................................................... 19
Poisson's distribution ......... 132–133(F), 139
Polarized light ............................................. 54
to reveal grain size in nonferrous
alloys ................................................. 61
Polisher, vibratory ................................... 53(F)
Polishing ...... 36, 37, 43–45(F), 46, 49–56(F),
72
abrasives ................................................... 56
automatic ....................................... 55–56(F)
chemical .............................................. 53–54
cloths ......................................................... 56
cloths, pressure-sensitive-adhesive-backed ... 46
electrolytic .......................................... 53–54
final .............................................. 50, 52, 58
improper, as source of bias
in images ................................... 166(F)
intermediate .............................................. 50
mechanical ................................................ 53
pressure applied ........................................ 54
relief .............................................. 66, 70(F)
rough ................................................... 49–50
specimen movement during ..................... 54
stages ........................................................ 49
vibratory ............................................. 50, 60
Polishing safety issues (ASTM E 2014) ... 56
Polymerization ............................................ 46
Polystyrene foam, resolution of images
and errors introduced ............ 173, 174(F)
Populations ...................................... 186–187
Pores .................................................. 120, 123
Porosity, boron addition effect on
sintered stainless steel ........... 199–200(F)
Position ........................................... 113–114
Position x and y ........................................ 110
Potentiostat .................................................. 63
Power spectrum display ....................... 87(F)
Precipitates, intermetallic ........................... 61
Precision and bias statement (ASTM E
112) ..................................................... 234
Precision saws .............................. 38–39(F)
Preparation of replicas (ASTM
E 1351) .......................................... 127(T)
Preprocessing ........................ 171, 172–173
Printed circuit boards, precision saws
for sectioning ........................................ 39
Probabilities of cumulative distribution
(inclusion lengths) ............................. 237
Profiles ............................ 174–175, 176(F)
Profilometer ............................................... 119
Profilometry ............................................... 110
Pruning ....................................... 97–100(F)
Pseudobackground image .......................... 82
Pseudocolor .................. 104, 105(F), 110, 114
to distinguish which pixels should be
assigned to a particular phase ........ 116
Pseudocolor LUT enhancement .......... 83–84
Pseudoreference image ................... 82, 83(F)
Psi (ψ)-phases ........................................... 69

Q
Q720 (IMANCO) ..................................... 7(F)
QTM. See Quantitative television
microscope.
Quality control
and data interpretation ........................... 191
inclusion content of HSLA steels .......... 197
Quantimet A ....................................... 4, 5, 12
Quantimet B ........................................ 4, 5(F)
Quantimet 720 (Q720) system ............... 6(F)
Quantitative description of the
microstructure .................................. 145
Quantitative microscopy .............................. 1
definition ..................................................... 1
Quantitative television microscope
(QTM) ........................................... 4, 5(F)
Quasi-homogeneous distribution ............ 132
Queen's Award to Industry .......................... 4

R
Radar systems ............................................... 2
Raman spectroscopy ................................. 114
Random access memory (RAM) ...... 81, 220
Random distributions ................ 132–133(F)
Random sampling ............ 35, 162–163, 164
Random sectioning ................................... 163
Rank-order filter ........................................ 86
Raster ................................ 77–78, 81, 116
definition ........................................... 77–78
Rastering ........................................... 205, 247
Raster lines ................................................ 107
Real variables ................................. 228–229
Red filter, for thresholding brass ......... 221(F)
Red, green, and blue (RGB) channels,
to establish color for pixels ................. 90
Red, green, and blue (RGB) color
space model ...... 90, 257–258, 259(F), 262,
263, 264(F)
conversion to HLS model ................. 265(F)
Reference circle ........................... 157, 158(F)
Reference image ......................................... 82
Reference space ........................................ 151
Reference surface ..................................... 109
Reference systems ..................................... 164
Reflected light, contrast mechanisms
associated with .............................. 102(T)
Reflected-light microscopy (ASTM E
883) ................................................ 127(T)
Regions of influence (SKIZ) operator ..... 121
Relative accuracy, desired degree at 95%
CL ......................................................... 33
Relative accuracy (%RA) of
measurement ....................................... 33
Relative minimum point of gray level
frequency value ................................. 214
Relief .......... 49, 52(F), 53, 58, 60, 72, 166(F)
Removal rates ............................................. 72
Resolution .............................. 77–78, 110
camera ................................................ 78, 80
definition ................................................... 77
of digital equipment ............. 171–175(F,T)
objective, and theoretical size of CCD
cell ..................................... 171, 172(T)
theoretical, of an optical microscope .... 171
Reticule ...................................................... 104
RGB averages .............................. 269, 271(F)
RGB-HLS model conversion .............. 265(F)
RGB primary colors ...................... 257–258
RGB standard deviations ........... 269, 271(F)
RGB-to-YIQ signal conversion,
television broadcasting ..................... 260
Rise time of video signal ......................... 210
Roberts algorithm ...................................... 92
Rosiwal, A. .............................................. 1, 20
Roughness ............................... 123(T), 126(T)
Roundness ................................................. 213
Round-robin testing, delta ferrite
measurement .......................... 214–215(F)

S
Sampling .............. 17–18, 35–37, 162–165(F)
of 1-D and 2-D objects .......................... 162
random .................................... 162–163, 164
systematic .................................. 162–165(F)
of 3-D objects ......................................... 162
Sampling procedures for inclusion
studies (ASTM E 45) ......................... 37
Sampling procedures for inclusion
studies (ASTM E 1122) ...................... 37
Sampling procedures for inclusion
studies (ASTM E 1245) ..................... 37
Sat Image (saturation image) .... 266–267(F)
Saturation ............................................ 90–91
definition ....................................................... 90
Saturation image ...................................... 265
Sauveur, Albert ....................................... 1, 27
Saws, precision .............................. 38–39(F)
Scanning, of printed images, and
resolution ............................................ 173
Scanning electron microscope .......... 80–81,
99(F), 104
Scanning electron microscope beam
size characterization
(ASTM E 986) ............................. 127(T)
Scanning electron microscope
magnification calibration (ASTM E
766) ................................................ 127(T)
Scanning electron microscopy ................ 109
Scanning probe microscopy .................... 110
Scanning tunneling microscopy (STM) .. 110
Scientific image analysis .............................. 3
Scratches, eliminated in specimen
preparation .......................................... 210
Secondary electrons, contrast
mechanisms associated with ......... 102(T)
Second-degree least-squares
fit function .................................... 216(F)
Second-phase particles ............................. 164
Sectioning ........................... 36, 37–39(F)
damage created by ................................... 72
grinding to remove burrs from ................ 48
Segmentation .................................. 88–91(F)
and bias introduction ..... 171, 172–174(F),
175(F)
nonuniform .......................................... 91(F)
texture ....................................................... 92
watershed ........................................ 91–92
Selection criterion (C) ............................ 182
Semiautomatic digital tracing devices ..... 33
Semiautomatic systems ............................ 110
Semiautomatic tracing tablets .................. 17
Serial sectioning method ............ 164, 165(F)
Series distribution of phases .......... 146–147
Serra, J. ......................................................... 5
Setup of system ......................................... 204
information used to construct macro ......... 218
Shade correction ....................................... 167
Shading corrector ..................................... 205
Shape ............................................ 122–123
Shape factors ......... 157, 158, 159(F), 160(F),
196, 197
definition ................................................. 159
of grain sections ................................ 194(F)
Shape of microstructural
constituents .............. 153(F), 157–160(F)
Shrinkage cavities ....................................... 18
Sigma-phase .......................................... 66, 69
Significance level ....................................... 184
Silica, basic colloidal suspensions .............. 53
Silicon, x-ray intensity maps of
carbides .................................. 114, 115(F)
Silicon carbide
image processing ................................. 87(F)
in x-ray intensity maps .............. 114, 115(F)
Silicon carbide grinding paper
alternatives used for grinding ............ 57–58
pressure-sensitive adhesive backed ......... 46
Silicon-ferrite, mechanical
properties ............................... 147–148(F)
Sintered carbides
metallographic specimen preparation,
contemporary practice .......... 59(T), 60
precision saws for sectioning ................... 39
Sintered powder materials, and bias in
specimen preparation ............ 166(F), 167
Sintered powder products, observation
of ......................................................... 167
Sintered stainless steel, boron addition
effect on microstructure ........ 199–200(F)
Sintered tungsten-carbide-cobalt,
etching ....................................... 66, 70(F)
Size of microstructural
constituents .............. 153(F), 155–156(F)
Skeleton by influence zones
(SKIZ) ..................................... 97–100(F)
image amendment procedure ............ 223(T)
Skeleton invert, image amendment
procedure ....................................... 223(T)
Skeletonization ........................... 8, 97–100(F)
Skeleton normal, image amendment
procedure ....................................... 223(T)
SKIZ. See Skeleton by influence zones.
Smearing .................................................... 166
Smooth gray processing
function ................................. 209–210(T)
Sobel edge detection filter .......... 208–209(F)
Sobel operator ........................................ 87(F)
Sodium hydroxide ................ 66, 67(F), 68(F)
Soft ceramic shot ................................... 45(F)
Software, image analysis specific ............. 152
Software packages, for image analysis ...... 10
Sol-gel alumina ..................................... 52–53
Sol-gel process ............................................. 52
Solid-state devices ....................................... 81
Sorby .............................................................. 1
Space filling grains .......... 106, 109, 116, 126
Spacing .................................................. 24–25
Sparse sampling technique ...................... 130
Spatial distribution
of microstructural constituents ......... 153(F),
160–161
randomness in ......................................... 197
Spatial resolution ...................................... 262
Specialized real-time machine-vision .. 11–12
Specific length ........................................... 150
Specific surface area ................................ 150
Specimen preparation ................................ 18
bias introduction ........................ 165–171(F)
for image analysis ...................... 35–74(F,T)
for metallography (ASTM E 3) .............. 61,
127(T)
generic contemporary methods ........... 58(T)
manual .......................................... 51, 54–55
traditional method ......................... 56–57(T)
Spreadsheets .............................................. 184
Stability, criterion of ................................. 183
Stage control ............................................. 240
Staining ............................................. 49, 51(F)
as bias source .................................... 166(F)
from etching .............................. 167, 168(F)
Stainless steels
boron addition effect on microstructure
after sintering ..................... 199200(F)
delta ferrite measurement, thresholding
of ........................................ 214–215(F)
dual phase duplex ......................... 66, 67(F)
electrolytic etching ................................... 69
etching ...................................................... 62
etching, selective ........................... 66, 67(F)
scanner response, electron-beam
machined circles ........................ 208(F)
Standard deviation ................. 124, 132, 140,
184–185, 186, 187
corrected ................................................. 185
definition ................................................. 131
of field measurements .............................. 32
volume fraction measurements and
magnifications used ........... 168–171(F)
Steel
annealed, thresholding of
pearlite .................. 215–218(F), 219(F)
area distribution for Dirichlet cell
constructed and random
dispersion ........................... 138–139(F)
Dirichlet network for a computer-generated random dispersion ... 138(F)
Dirichlet network for inclusion
dispersion ...................... 138(F), 141(F)
inclusion measured in .............................. 61
metallographic specimen preparation
contemporary practice ................. 59(T)
nearest-neighbor spacing
distribution ................................. 133(F)
nonmetallic inclusions, number density
variation technique for
evaluation ................................... 131(F)
nonmetallic inclusion distribution,
feature-specific
measurements ................. 234–241(F,T)
particle dispersions ................................. 129
threshold setting, steps in .......... 222–223(F)
x-ray intensity maps of carbides ........... 114,
115(F)
Steel specimen preparation guidelines
for inclusion analysis (ASTM
E 768) ................................................... 61
Stepwise regression .................................. 123
Stereological definition for oxides and
sulfides (ASTM E 1245) ..................... 32
Stereological methods, vs. automated
image analysis ............................ 200–201
Stereological parameters
calculation of .................................. 248–249
derived from feature
measurements ............................ 246(F)
derived from standard parameters
measured by most IA
systems ........................... 244–246(F,T)
mathematical relationships with
parameters measured ......... 246–249(F)
relationship to image analysis
parameters ................................. 245(T)
Stereology .................................................. 1, 2
definition ............................................. 1, 115
principles of ................................... 15–34(F)
rule of ..................................................... 174
Stiffness, of fiber-reinforced composites ... 146
String ......................................................... 229
Stringers, length measurement ................... 32
String variables ................................. 229–231
Structural elements .................................. 153
Struers Inc. .................................................. 11
Student's t value ................................... 32–33
Superalloy, binary image processing ..... 99(F)
Surface area ...................................... 109–110
Surface area per unit volume ................... 23
Surface area per volume ......................... 116
Surface density ......................................... 151
Swabbing ..................................................... 62
Swarf ............................................................ 48
Systematic sampling .................... 162–165(F)

T
Tagged image file format (TIFF)
compression algorithm ....................... 80
Tangent count ........................................... 110
Tantalum, anodizing ................................... 69
TAS ................................................................. 6
Television, development of ........................... 2
Temperature, effect on tensile properties
of Si-ferrite and ductile iron ............. 148,
149(F)
Template .................................................... 104
Tensile strength, maximum ..................... 146
Tessellation ................................... 120–122(F)
by conditional dilation ...................... 141(F)
by dilation technique ........ 140, 141–143(F)
Tessellation network ................................. 142
Tessellation technique ...................... 136, 143
Textiles, microscopy of material
standard (ASTM D 629) ............. 127(T)
Test circle ..................................................... 27
Test lines ................... 27–28, 29, 30, 103, 179
curvilinear ............................................... 152
length measurement ....................... 104, 106
used to measure intercepts ................ 106(F)
Thermal barrier coatings .......................... 84
Thermally sprayed coatings, precision
saws for sectioning ............................... 39
Thermally sprayed metallic specimen
preparation guidelines for
metallography (ASTM E 1920) ........ 61
Thermoplastic mounting resins ......... 40, 43,
44(F), 46
Thermosetting mounting resins .... 40, 41(F),
43, 44(F), 45(F), 46
Theta-phase, in copper alloys ......... 66, 70(F)
Thinning (skeletonization) ............ 97–100(F)
Thompson, E. ............................................ 12
Three-circle method ......................... 232–234
Three-circle method of intercept
counting (ASTM E 1382) ................ 225
3-D reconstructions ..................... 164, 165(F)
Thresholding ... 88, 89(F), 91(F), 104–105(F),
171, 174–176(F)
automatic .................................................. 89
of beta-phase in CDA brass ................... 214
brass ........................................... 220–221(F)
color images ............................................. 90
definition ................................................... 88
of delta ferrite area fraction in stainless
steel .................................... 214–215(F)
of delta ferrite in stainless
steel .................................... 214–215(F)
of pearlite in annealed steel ...... 215–218(F)
steps in ....................................... 222–223(F)
to detect dark etching grain
boundaries ....................................... 225
Threshold setting .............................. 110–111
Through-hardened steels, polishing .... 49–50
Tilt axis ...................................................... 109
Time resolution ......................................... 262
Tin, grinding ................................................ 48
Titania, tessellation cells constructed
from features in a microstructure ..... 121,
122(F)
Titanium, anodizing .................................... 69
Titanium
commercially pure
residual sectioning/grinding damage
from polishing ....................... 49, 50(F)
heat tinting ................................ 71, 72(F)
polishing of ........................................... 60
sectioning damage ..................... 38, 39(F)
etching ...................................................... 62
grain size revealed by etching ................. 61
metallographic specimen preparation,
contemporary practice .......... 59(T), 60
Titanium alloys
β-phase grain size determination ........ 31(F)
attack polishing solution added to
slurry ................................................. 58
image processing ................................. 87(F)
metallographic specimen preparation,
contemporary practice .......... 59(T), 60
staining from polishing solution ... 49, 51(F)
stereopair of fracture surface ............ 119(F)
Tool steels
carbide distribution analysis,
feature-specific ............... 241–244(F,T)
comet tailing from polishing ........ 49, 51(F)
mounting .............................................. 45(F)
Top-hat processing ................................. 87(F)
Total projected length of inclusions ....... 198
Transformed image .................................. 208
Transition zone, between matrix and
etched constituent ............................... 207
Transmission electron microscope
(TEM) ........................... 25, 80–81, 87(F)
Transmitted electrons, contrast
mechanisms associated with ......... 102(T)
Transmitted light, contrast mechanisms
associated with .............................. 102(T)
Trepanning .................................................. 17
Trilobal analysis .......................... 250, 252(F)
Trisector method ........................................ 37
True line length .......................................... 29
True surface area ..................................... 109
Truth tables ................................................. 93
t-statistics (Student) .................................. 184
Tungsten, anodizing .................................... 69
Tungsten carbide-cobalt, etching ... 66, 70(F)
Turnkey systems ......................................... 11
TV scanner, alignment, resolution, and
response .............................................. 205
Two-phase materials, data
interpretation .......................... 194–197(F)

U
Ultimate dilation, image amendment
procedure ....................................... 223(T)
Ultimate eroded point ................................ 91
Ultimate erosion, image amendment
procedure ....................................... 223(T)
Ultimate tensile strength
of ferritic ductile iron .. 147–148(F), 149(F)
of silicon ferrite ............ 147–148(F), 149(F)
Ultrasonic cleaning ..................................... 54
Unbiased characterization ....................... 162
Units per pixel ..................................... 126(T)
Update command ..................................... 220
Update database ....................................... 220
Uranium
anodizing .................................................. 69
grain size revealed by etching ................. 61

V
Vacuum evaporator .................................... 71
Vanadium, anodizing .................................. 69
Vapor deposition ......................................... 72
Variance ..................................................... 184
Variance algorithm ..................................... 92
Vertical projection ............... 213, 246–247(F)
Vertical sectioning method, and random
sampling ............................................. 163
Video frame rate ...................................... 4–5
Video microscopy ..................................... 2–3
Visible light, wavelength range for
human observers ................................. 257
Void coalescence ....................................... 129
Voids ........................................................... 129
Volume fraction .... 1–2, 4, 19–22(F), 30, 217
of beta-phase in CDA brass,
thresholding of ............................... 214
compared to area fraction ...... 104–105, 115
definition ................................................... 19
as descriptor or integral parameter ........ 150
and dilation and counting technique ...... 115
estimated ................................................. 154
estimates to interpret physical and
mechanical properties ..................... 195
of inclusions ............................................. 32
of inclusions, guidelines for
determination (ASTM E 1245) ...... 234
magnification effect on analyzed
images ................................ 168–171(F)
necessary to predict mechanical
properties ........................................ 146
needed for calculating mean free path .... 25
of pores in sintered stainless steel,
boron addition effect ......... 199–200(F)
and random sampling ............................. 163
range of value ......................................... 157
related by fracture toughness of HSLA
steels .................................. 197–199(F)
standard notation for ........................... 16(T)
stereological definition of ........................ 32
Volume of prolate spheroid ................ 123(T)
Volume of sphere ................... 123(T), 126(T)
Volumetric density, standard
notation for ...................................... 16(T)

W
Waspaloy, residual sectioning/grinding
damage from polishing .............. 49, 50(F)
Watershed transformations ....................... 11
Water stains, eliminated in specimen
preparation .......................................... 210
Wavelength-sensitive detectors (RGB) ... 257
Weighted distributions ........................ 190(F)
Weight fraction ................................... 19, 217
Weight loss .................................................. 19
While loop ................................................. 240
While statement ........................................ 204
White, gray level assigned ........................ 205
White light ................................................. 257
Width ................................................. 122–123
definition, as feature-specific
parameter ................................... 213(T)
Worst-field report ............................. 102–103

X
x Feret ................................................... 126(T)
Xmax, definition, as feature-specific
parameter ....................................... 213(T)
Xmin, definition, as feature-specific
parameter ....................................... 213(T)
X-ray mapping .......................................... 114
X-rays, contrast mechanisms associated
with ................................................ 102(T)

Y
y Feret ................................................... 126(T)
Yield strength
of ferritic ductile iron .. 147–148(F), 149(F)
of silicon ferrite ............ 147–148(F), 149(F)
YIQ/YUV coordinate system .................. 260
YIQ system ................................................ 260
YUV system ............................................... 260

Z
Zeiss, Carl ..................................................... 6
Zeiss and Clemex ....................................... 12
Zirconium
anodizing .................................................. 69
grain size revealed by etching ................. 61
Zirconium alloys, attack polishing
solution added to slurry ....................... 58
