Eastern Economy Edition
TAPAN P. BAGCHI
Professor, Industrial and Management Engineering
Indian Institute of Technology, Kanpur
© 1993 by Prentice-Hall of India Private Limited, New Delhi. All rights reserved.
No part of this book may be reproduced in any form, by mimeograph or any other
means, without permission in writing from the publishers.
ISBN: 0-87692-808-4
The export rights of this book are vested solely with the publisher.
Preface ix
3. Design of Experiments 41-60
3-1 Testing Factors One-at-a-Time is Unscientific 41
vi TAGUCHI METHODS EXPLAINED PRACTICAL STEPS TO ROBUST DESIGN
Glossary 197-202
References 203-204
Index 205-209
Preface
Taguchi methods are the most recent additions to the toolkit of design, process,
and manufacturing engineers, and Quality Assurance (QA) experts. In contrast to
Statistical Process Control (SPC), which attempts to control the factors that
adversely affect the quality of production, Taguchi methods focus on design: the
development of superior performance designs (of both products and manufacturing
processes) to deliver quality.
Taguchi methods lead to excellence in the selection and setting of product/
process design parameters and their tolerances. In the past decade, engineers have
applied these methods in over 500 automotive, electronics, information technology,
and process industries worldwide. These applications have reduced cracks in
castings, increased the life of drill bits, produced VLSI with fewer defects, speeded
up the response time of UNIX V, and even guided human resource management
systems design.
Taguchi methods systematically reveal the complex cause-effect relationships
between design parameters and performance. These in turn lead to building quality
performance into processes and products before actual production begins.
Taguchi methods have rapidly attained prominence because wherever they
have been applied, they have led to major reductions in product/process development
lead time. They have also helped in rapidly improving the manufacturability of
complex products and in the deployment of engineering expertise within an enterprise.
The first objective of Taguchi methods, which are empirical, is to reduce
the variability in quality. A key premise of Taguchi methods is that society incurs
a loss any time a product whose performance is not on target gets shipped to a
customer. This loss is measurable by the loss function, a quantity dependent on the
deviation of the product's performance from its target performance. Loss functions
are directly usable in determining manufacturing tolerance limits.
Delivering a robust design is the second objective of Taguchi methods. Often
there are factors present in the environment on which the user of a product has little
or no control. The robust design procedure adjusts the design features of the product
such that the performance of the product remains unaffected by these factors. For
a process, the robust design procedure optimizes the process parameters such that
the quality of the product that the process delivers, stays on target, and is unaffected
by factors beyond control. Robust design minimizes variability (and thus the lifetime
cost of the product), while retaining the performance of the product on target.
Statistically designed experiments using orthogonal arrays and signal-to-noise
(S/N) ratios constitute the core of the robust design procedure.
This text provides the practising engineer an overview of the state-of-the-art
in Taguchi methods: the methods for engineering superior and lasting performance
into products and processes.
Chapters 1-3 introduce the reader to the basic ideas in the engineering of
quality, and the needed tools in probability and statistics. Chapter 4 presents the
additive cause-effect model, the foundation of the Taguchi methodology for design
optimization. Chapter 5 defines the signal-to-noise ratio, the key performance metric
that measures the robustness of a design. Chapter 6 describes the use of orthogonal
arrays (OAs), the experimental framework in which empirical studies to determine
the dependency of performance on design and environmental factors can be efficiently
done. Chapter 7 illustrates the use of these methods in reducing the sensitivity of
a manufacturing process to uncontrolled environmental factors. Chapter 8 provides
the guidelines for the selection of appropriate orthogonal arrays (OAs) for real-life
robust design problems. A case study in Chapter 9 shows how one optimizes a
product design. Chapter 10 presents a constrained optimization approach which
would be of assistance when the design parameter effects interact. Chapter 11
shows how Taguchi loss functions can be used in setting tolerances for
manufacturing. Chapter 12 places Taguchi methods in the general framework of
Total Quality Management (TQM) in an enterprise.
Throughout the text, examples and exercises have been provided to enable
the reader to gain a better grasp of the ideas presented. Besides, the fairly large
number of References should stimulate the student to delve deeper into the subject.
I am indebted to Jim Templeton, my doctoral guide and Professor; from him
I had the privilege of imbibing much of my knowledge in applied probability. I
am also grateful to Birendra Sahay and Manjit Kalra, whose enormous confidence
in me led to the writing of this book. I wish to thank Mita Bagchi, my wife, and
Damayanti Singh, Rajesh Bhaduri and Ranjan Bhaduri whose comments and
suggestions have been of considerable assistance in the preparation of the manuscript.
The financial assistance provided by the Continuing Education Centre, Indian
Institute of Technology, Kanpur to partially compensate for the preparation of the
manuscript is gratefully acknowledged. Finally, this book could not have been
completed without the professionalism and dedication demonstrated by the
Publishers, Prentice-Hall of India, both during the editorial and production stages.
Any comments and suggestions for improving the contents would be warmly
appreciated.
Tapan P. Bagchi
What Are Taguchi Methods?
target. Taguchi has argued that any deviation from target performance results in
a loss to society. He has redefined the term quality to be "the losses a product
imparts to society from the time it is shipped".
2. Product and process design requires a systematic development, progressing
stepwise through system design, parametric design, and finally, tolerance design.
Taguchi methods provide an efficient, experimentation-based framework to
achieve this.
A large sum of money spent on appraisal can help screen out defective
products, preventing them from getting to customers. This is inspection-based
QA. As of today, most Very Large Scale Integration (VLSI) chips have to be
produced this way. It should be clear, however, that QA based on appraisal
is reactive and not preventive: it takes action after production. If resources
can be directed to prevention instead, one increases the likelihood of preventing
defects and quality problems from developing. With preventive action, prevention
costs for an enterprise may rise, but failure costs are often greatly reduced [9].
Reduction of defective production directly cuts down in-house scraps and
rejects. This also reduces returns from customers and their dissatisfaction with
the product. Also, the producer projects a quality image, which often gives a
marketing edge. It may be possible, of course, to go overboard with quality if we
disregard real requirements. The ISO 9000 Standards document [10] as well as
QFD [34] also emphasize the value of establishing the customer's real needs first.
Business economists suggest that the target for quality should be set at a level at
which the profit contribution of the product is most favourable (Fig. 1.1).
[Fig. 1.1 Market value, manufacturing cost, and the resulting profit
contribution of a product, plotted against increasing precision of quality.]
In his writings Taguchi has stated that delivering a high quality product
at low cost involves engineering, economics, use of statistical methods, and an
appropriate management approach emphasizing continuous improvement. To
this end Taguchi has proposed a powerful preventive procedure that he calls
robust design. This procedure optimizes product and process designs such that
the final performance is on target and it has minimum variability about this target.
One major outcome of off-target performance, be it with ill-fitting shoes,
defective keyboards, or a low yielding chemical process, is the increase in the
lifetime cost of the product or process (see Table 1.1). We may classify this total
cost as the cost that the product/process imposes on society (the producer, the
consumer, and others who may not even be its direct users) as follows:
* F.M. Gryna (1977): Quality Costs: User vs. Manufacturer, Quality Progress, June,
pp. 10-13.
** Includes repairs (part, material and labour), contract labour, defective product produced
and lost production.
Generally, the producer incurs the R&D and manufacturing costs and then
passes these on to the consumer. In addition, the consumer incurs the operating
cost as he uses the product, especially when performance deviates from target.
The knowledge emerging from Taguchi's work affirms that high quality
means lower operating cost and vice versa. Loss functions provide a means to
quantify this statement.
The robust design method, the key QA procedure put forth by Taguchi,
is a systematic method for keeping the producer's costs low while delivering the
highest quality to the consumer. Concerning the manufacturing process, the focus
of robust design is to identify process setting regions that are least sensitive to
inherent process variation. As will be shown later, this eventually helps improve
the quality of what is produced by minimizing the effect of the causes of
variation without necessarily eliminating the causes.
[Fig. 1.2 Distribution of colour density for Sony-U.S.A. and Sony-Japan
television sets, graded D, B, A, B, D over the range m-5 to m+5.]
examining the total loss a product causes because of its functional variation from
this ideal quality and any harmful side effect the product causes. The primary goal
of robust design is to evaluate these losses and effects, and determine (a) process
conditions that would assure the product made is initially on target, and (b)
characteristics of a product, which would make its performance robust (insensitive)
to environmental and other factors not always in control at the site of use so that
performance remains on target during the product's lifetime of use.
To enforce these notions Taguchi (re)defined the quality of a product to be
"the loss imparted to society from the time the product is shipped". Experts feel
this loss should also include societal loss during manufacturing [6].
The loss caused to a customer ranges from mere inconvenience to monetary
loss and physical harm. If Y is the performance characteristic measured on a
continuous scale and the ideal or target performance level is τ, then, according
to Taguchi, the loss caused, L(Y), can be effectively modelled by a quadratic
function (Fig. 1.3)

L(Y) = k(Y − τ)²
Note here that the loss function relates quality to a monetary loss, not to a gut
feeling or other mere emotional reactions. As will be shown later, the quadratic
loss function provides the necessary information (through signal-to-noise ratios)
to achieve effective quality improvement. Loss functions also show why it is not
good enough for products to be within specification limits. Parts and components
that must fit together to function are best made at their nominal (or the midpoint
specification) dimensions rather than merely within their respective specification
tolerances [11].
When performance varies, one determines the average loss to customers
by statistically averaging the quadratic loss. The average loss is proportional
to the mean squared error of Y about its target value τ, found as follows: If one
[Fig. 1.3 The relationship between quality loss and performance deviation
from target. (Loss plotted against the performance characteristic.)]
average loss = k[(μ − τ)² + σ²]

where μ = Σyᵢ/n and σ² = Σ(yᵢ − μ)²/(n − 1). Thus the average loss, caused by
variability, has two components:
1. The average performance (μ) being different from the target τ contributes
the loss k(μ − τ)².
2. Loss kσ² results from the performance {yᵢ} of the individual items being
different from their own average μ.
Thus the fundamental measure of variability is the mean squared error of Y
(about the target τ), and not the variance σ² alone. Interestingly, it may be noted
that ideal performance requires perfection in both accuracy (implying that μ be
equal to τ) as well as precision (implying that σ² be zero).
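The two-component decomposition above is easy to check numerically. The sketch below uses made-up data, k, and τ; note that the identity is exact when σ² is computed with divisor n, whereas the sample estimate quoted in the text uses the unbiased divisor (n − 1).

```python
# Numerical check of: average loss = k[(mu - tau)^2 + sigma^2]
# (the data, k, and tau below are illustrative assumptions, not from the text)

def average_loss(ys, k, tau):
    """Average of the quadratic losses k(y - tau)^2 over the items."""
    return sum(k * (y - tau) ** 2 for y in ys) / len(ys)

def decomposed_loss(ys, k, tau):
    """Same quantity via the (mu - tau)^2 + sigma^2 decomposition."""
    n = len(ys)
    mu = sum(ys) / n
    sigma2 = sum((y - mu) ** 2 for y in ys) / n  # divisor n makes the identity exact
    return k * ((mu - tau) ** 2 + sigma2)

ys = [9.8, 10.4, 10.1, 9.6, 10.3]   # measured performances (hypothetical)
k, tau = 2.0, 10.0                  # loss coefficient and target (hypothetical)
print(average_loss(ys, k, tau))     # equals decomposed_loss(ys, k, tau)
```

Both routes give the same average loss, confirming that off-target mean and item-to-item variability contribute separately.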
A high quality product performs near the target performance value consistently
throughout the life span of the product.
environmental and other factors that the customer would perhaps not be able to
or wish to control.
TABLE 1.2
FACTORS AFFECTING PRODUCT AND PROCESS PERFORMANCE
An efficient tool for locating and identifying the potential factors that
may affect product or process performance is the Ishikawa Cause-Effect diagram
(Fig. 1.4).
[Fig. 1.4 Cause and effect diagram for potential causes leading to cracks
during contact lens grinding. The main branches are Operator, Machine,
Material, and Method.]
where s is the Laplace variable. From this transfer function, the filter cutoff
frequency ωc and the galvanometer full-scale deflection D may be found
respectively as

ωc = [(R₂ + Rg)(Rs + R₃) + RsR₃] / [2π(R₂ + Rg)R₃RsC]

D = VsG / {Scn[(R₂ + Rg)(Rs + R₃) + RsRg]}
The design parameters (DPs) that the designer is free to specify are R₂, R₃, and C.
Another design example, from chemical engineering, illustrates a similar
functional process model, again a mathematical relationship between the design
parameters and performance. Many chemical processes apply mechanical
agitation to promote contacting of gases with liquids to encourage reaction. Based
on reaction engineering principles, the relationship between the utilization of
the reacting gas and the two key controllable process variables may be given by
Utilization (%) = K (mixing HP/1000 gal)^A (superficial velocity)^B
As will be illustrated through a case study in Chapter 9, such mathematical
models can be as useful as physical prototypes in achieving a robust design.
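Such a functional model can be exercised directly in software. The sketch below evaluates the power-law utilization model; the constants K, A, and B are made-up placeholders, since the text does not supply their values.

```python
# Sketch of the gas-utilization power law. K, A, and B are made-up
# placeholder constants; the text does not supply their values.

def utilization(mixing_hp, velocity, K=8.0, A=0.4, B=0.3):
    """Utilization (%) = K * (mixing HP/1000 gal)^A * (superficial velocity)^B."""
    return K * mixing_hp ** A * velocity ** B

# With a power law, doubling the mixing power scales utilization by 2^A,
# independently of the velocity setting:
u1 = utilization(5.0, 0.2)
u2 = utilization(10.0, 0.2)
print(round(u2 / u1, 4))   # 2**0.4, about 1.3195
```

This multiplicative structure is one reason log-transformed responses often behave additively, which suits the additive cause-effect model of Chapter 4.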
Traditionally, product and process design receive maximum attention
during functional design. Most engineering disciplines expound the translation of
scientific concepts to their applications so that the designer is able to develop the
functional design. Refinements to this initial design by trial and error may be
attempted on the shop floor, combined possibly with limited field testing of the
prototype. True optimization of the design, however, is rarely thus achieved or
attempted [12].
The Taguchi philosophy sharply contrasts with this traditional approach to
design. Taguchi has contended that, besides building function into a product, its
design should engineer quality also. In his own words: "quality is a virtue of design."
harmful side effects are designed to be small. During its design, one takes
countermeasures to assure this objective. The use of Taguchi methods makes it
possible that measures may be taken at the product design stage itself to achieve
(a) a manufacturing process that delivers products on target and (b) a product that
has robust performance and continues to perform near its target performance.
As already stated, the performance of a robust product is minimally affected
by environmental conditions in the field, or by the extent of use (aging) or due to
item-to-item variation during manufacturing.
Besides, robust product design aims at the selection of parts, materials,
components, and nominal operating conditions so that the product will be
producible at minimum cost.
The three steps involved in robust design are:
1. Planning the statistical experiments is the first step and includes
identification of the products main function(s), what side effects the product may
have, and factor(s) constituting failure. This planning step spells out the quality
characteristic Y to be observed, the control factors {θ₁, θ₂, θ₃}, the observable
noise factors {w₁, w₂, w₃}, and the levels at which they will be set during the
various test runs (experiments). It also states which orthogonal design will be
employed (see Fig. 1.5) to conduct the statistical experiments and how the observed
data {y₁, y₂, y₃, ...} will be analyzed.
[Fig. 1.5 Layout of a robust design experiment. An inner orthogonal array
(the design matrix) holds the control factor settings {θ₁, θ₂, θ₃}, each factor
at three levels; an outer orthogonal array (the noise matrix) holds the
observable noise factor settings {w₁, w₂, w₃}, each factor at two levels. Each
run yields observed performance values y₁, y₂, ..., y₃₆ and a computed
performance statistic Z(θ).]
          Vehicle Type
          V1   V2   V3   V4
     T1   M1   M2   M3   M4
     T2   M2   M3   M4   M1
     T3   M3   M4   M1   M2
     T4   M4   M1   M2   M3

(The cell at row T2, column V3, for instance, is the experiment run by driving
vehicle V3 fitted with tyres made of material M4 on terrain T2.)
Fig. 1.6 The Latin square design. (Sixteen experiments can evaluate the effect of
tyre material on wear when four different vehicle types and four different
terrains are involved.)
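The Latin square of Fig. 1.6 follows a simple cyclic pattern, which the sketch below reproduces: material M[(r + c) mod 4] goes to terrain row r and vehicle column c, so each material appears exactly once per row and per column.

```python
# Building the 4x4 Latin square of Fig. 1.6 cyclically: each tyre
# material appears exactly once in every row (terrain) and every
# column (vehicle type).

def latin_square(n):
    return [[(r + c) % n + 1 for c in range(n)] for r in range(n)]

square = latin_square(4)
for r, row in enumerate(square, start=1):
    print("T%d:" % r, ["M%d" % m for m in row])

# e.g. terrain T2 (row index 1) with vehicle V3 (column index 2)
# gets material M4, as annotated in the figure:
print(square[1][2])   # 4
```

Sixteen runs thus cover all terrain-vehicle pairs while balancing materials, instead of the 64 runs a full factorial would need.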
EXERCISES
1. Choose a product that you use at your desk. Show how you, as a customer,
expect this product to perform.
2. Which aspects of the performance of the product you have selected are
quantifiable? Which aspects are qualitative? Which aspects of performance relate
respectively to the primary function, operability, long term performance, and
maintainability of the product?
3. Identify the major attributes (the design parameters) of this product fixed by its
designer. For each of these attributes, identify the choices of the designer (e.g.,
different materials, finish, source of power, and weight). Also identify the noise
factors (the factors in the environment in which you will use the product) on
which you, the user of the product, have little control.
4. Reflecting on your experience with the actual performance of this product,
enumerate the experiments that you would have liked to conduct with a prototype
of this product, to help find the optimum choices for the key design parameters.
(Taguchi methods provide, reliably and economically, the techniques for testing
several such factors together, besides showing their individual effects.)
Handling Uncertainty

m = (1/n) Σᵢ₌₁ⁿ xᵢ     (2.1.1)
where n is the sample size and {xᵢ, i = 1, 2, ..., n} are the n individual
measurements obtained from the sample. If the population average is μ, and the size of the
population is N, then one defines the population standard deviation, σ, by

σ = √[Σ(xᵢ − μ)²/N]     (2.1.2)

s = √[Σ(xᵢ − m)²/(n − 1)]     (2.1.3)

where m is again the sample average calculated from the n sample measurements
x₁, x₂, x₃, ..., xₙ by Eq. (2.1.1).
The average and the standard deviation together constitute the most
common (though not the complete set of) parameters for describing the
statistical character of a sample or a population which possesses some features
that vary from one item to another.
Since real populations may often be too large to study by examining every
member of the population, we may never know the true value of μ or σ. Instead,
one bases many practical statistical methods on the study of samples, using xbar
and s, which estimate μ and σ respectively.
EXAMPLE 2.1: Estimation of sample mean (xbar) and variance (s²). In order
to produce an estimate of the total number of words in a book, someone selected
randomly 25 lines from randomly opened pages in that book, and the words
appearing in each of these lines were counted as shown in Table 2.1. Determine
xbar and s².
TABLE 2.1
xBAR AND s² FROM WORD COUNTS PRODUCED BY RANDOM SAMPLING

xᵢ     (m − xᵢ)     (m − xᵢ)²
6      3.08     9.4864
16    -6.92    47.8864
18    -8.92    79.5664
5      4.08    16.6464
12    -2.92     8.5264
8      1.08     1.1664
11    -1.92     3.6864
10    -0.92     0.8464
9      0.08     0.0064
17    -7.92    62.7264
11    -1.92     3.6864
2      7.08    50.1264
8      1.08     1.1664
12    -2.92     8.5264
5      4.08    16.6464
13    -3.92    15.3664
9      0.08     0.0064
4      5.08    25.8064
4      5.08    25.8064
2      7.08    50.1264
5      4.08    16.6464
6      3.08     9.4864
16    -6.92    47.8864
14    -4.92    24.2064
4      5.08    25.8064
Total 227      0.00   551.8399
The solution uses Eqs. (2.1.1) and (2.1.3). The calculated sample average, xbar, is
9.08. The sum of squares of deviations, when divided by (25 − 1), produces the
estimate of s² as 22.9933.
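Example 2.1 can be re-computed in a few lines from the raw counts in the first column of Table 2.1:

```python
# Re-computing xbar and s^2 of Example 2.1 from the word counts
# listed in the first column of Table 2.1.

counts = [6, 16, 18, 5, 12, 8, 11, 10, 9, 17, 11, 2, 8,
          12, 5, 13, 9, 4, 4, 2, 5, 6, 16, 14, 4]

n = len(counts)                            # 25 sampled lines
xbar = sum(counts) / n                     # Eq. (2.1.1)
ss = sum((x - xbar) ** 2 for x in counts)  # sum of squared deviations
s2 = ss / (n - 1)                          # square of Eq. (2.1.3)

print(xbar)          # 9.08
print(round(s2, 4))  # 22.9933
```

The computed sum of squares agrees with the table's total to four decimal places.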
f(x) = (1/(σ√(2π))) exp{−[(x − μ)/σ]²/2}     (2.2.1)
[Fig. 2.1 The normal probability distribution: (a) probability density function
of the N[μ, σ] distribution; (b) three normal probability distributions
with mean equal to 45 and standard deviations of 2, 4, and 6.5,
respectively; and (c) three normal probability distributions with equal
standard deviations and means equal to 10, 23, and 36, respectively.]
m ~ N[μ, σ/√n]

where μ is the population mean, σ the population standard deviation, and n the
size of the sample. m is an unbiased estimator of μ, and the sample variance
s² is an unbiased estimator of the population variance σ². One
calls an estimator unbiased if its expected value is equal to the parameter one is
estimating. The expected value of a random variable is the long term average value
of the variable that would be approached if one observed the value {xᵢ} of the
variable X a large number of times.
Admittedly, in working with an estimate such as m (or s), one must have
some sense of correctness: How far is the real parameter μ (or σ) from its estimated
value? Statistics provides the answer here as a confidence interval and a related
probability statement.
One does not know the true value of μ. But, since m ~ N[μ, σ/√n], one
knows from the property of the normal distribution that there is a 0.95
probability that μ lies in the interval
may be given for estimates of other population parameters also. The spread
(±1.96(σ/√n)) of the estimate produced is called the margin of sampling error; here
it is within about 2 standard deviations on either side of the sample mean m.
Generally, the larger the sample size n, the narrower the margin of error.
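The shrinkage of the margin with sample size is easy to tabulate; the σ = 4.0 below is an arbitrary illustrative value:

```python
# The margin of sampling error 1.96*(sigma/sqrt(n)) narrows as the
# sample grows; sigma = 4.0 here is an arbitrary illustrative value.
import math

sigma = 4.0
for n in (25, 100, 400):
    margin = 1.96 * sigma / math.sqrt(n)
    print(n, round(margin, 3))
# 25 1.568
# 100 0.784
# 400 0.392
```

Note the square-root law: quadrupling the sample size only halves the margin of error.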
t = (m − μ)/(s/√n)     (2.2.3)
EXAMPLE 2.2: Using the data of Example 2.1, establish a 95% confidence
interval for μ, the average count of words per line.
Solution: Since one does not know the standard deviation σ, it must be estimated
by s. Since s² = 22.9933, one obtains s = 4.795. The 95% confidence interval for
μ using t_{n−1, 0.025} is given by

Prob [m − t_{n−1, 0.025} (s/√n) < μ < m + t_{n−1, 0.025} (s/√n)] = 0.95

Since n = 25, m (= xbar from Example 2.1) = 9.08, and t_{24, 0.025} (from Appendix A)
= 2.064, one obtains

7.1006 < μ < 11.0594

as the 95% confidence interval for μ, the average count of words per line.
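The interval of Example 2.2 can be verified directly, taking the tabled t-value as given:

```python
# Re-computing the 95% confidence interval of Example 2.2.
import math

n, m = 25, 9.08               # sample size and mean from Example 2.1
s = math.sqrt(22.9933)        # sample standard deviation, about 4.795
t = 2.064                     # t_{24, 0.025} from the t-table

half_width = t * s / math.sqrt(n)
print(round(m - half_width, 4), round(m + half_width, 4))   # 7.1006 11.0594
```

The computed endpoints reproduce the interval quoted in the example.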
One finds the expected value E[X] (also known as the average) of a random
variable X, if X is discrete with probability density function p(x), as

E[X] = Σ x p(x),   summed over {x: p(x) > 0}
If X is continuous, then

E[X] = ∫ x p(x) dx
This provides us a way of finding the expected value of the sum of several
random variables.
Two random variables X and Y are said to be independent if the knowledge
of the value of one does not change the distribution of the other. Many random
variables in real life are independent of each other while many others are not.
Mathematically,
The variance of a random variable X shows the extent of dispersion in the values
of X, measured about the average E[X]. The variance is

Var [X] = E[(X − E[X])²]     (2.3.6)
Notice that variance is the expected value of the square of the deviation of X from
its average E[X]. For normally distributed random variables, the square root of
variance, known as the standard deviation, forms an important parameter
describing the variability in the distribution of these random variables.
If a and b are constants, then
If there exist two random variables X and Y, then one expresses their influencing
of each other's values by their covariance. If X and Y are independent, then

Cov [X, Y] = 0

Note, however, that Cov [X, Y] = 0 does not imply that X and Y are independent.
If X₁, X₂, ..., Xₙ are independent, then

Var [X₁ + X₂ + ... + Xₙ] = Σᵢ₌₁ⁿ Var [Xᵢ]     (2.3.9)

This formula helps us in finding the variance of the sum of independent random
variables.
Equation (2.3.9) leads to another useful result. Suppose X₁, X₂, X₃, ..., Xₙ
are independent and identically distributed random variables with identical
variance σ². Then the variance of the sample mean Xbar (= ΣXᵢ/n) may be found
as follows:

Var [Xbar] = Var [(1/n) Σᵢ₌₁ⁿ Xᵢ] = (1/n²) Σᵢ₌₁ⁿ Var [Xᵢ] = σ²/n
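The σ²/n behaviour of the sample mean can be demonstrated by simulation; the uniform(0, 1) population below (variance 1/12) is an arbitrary illustrative choice:

```python
# Monte-Carlo check that Var[Xbar] = sigma^2 / n; the uniform(0, 1)
# population (variance 1/12) is an arbitrary illustrative choice.
import random

random.seed(1)
n, reps = 16, 20000
pop_var = 1.0 / 12.0

means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]
grand = sum(means) / reps
var_of_mean = sum((m - grand) ** 2 for m in means) / (reps - 1)

print(var_of_mean)   # close to pop_var / n = 1/192
```

Averaging n = 16 observations cuts the variance of the estimate by a factor of 16, as Eq. (2.3.9) predicts.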
The pooled variance is useful, for instance, in obtaining the confidence interval
for the difference between μ₁ and μ₂, the two respective population means.
In statistical tests the probabilities of committing these two errors, α and β,
are labelled as

α = Prob [Type I error in testing] = Prob [reject H₀ | H₀ is true]

β = Prob [Type II error in testing] = Prob [do not reject H₀ | H₀ is false]

respectively. The probability of Type I error, α, is called the significance level of
a statistical test. It is common to limit this probability to 5%, which is equivalent
to being wrong in one of every twenty applications of the test by drawing samples.
The Z-statistic may be used in testing whether the population mean of data
values {x₁, x₂, x₃, ...} is equal to some speculated quantity μ, if the characteristic
X of the population is distributed normally. Also, as mentioned in Section 2.2,
even if one does not know σ, one may still perform a test for the population
mean, using the t-statistic instead of Z; t is then defined as

t = (Xbar − μ)/(s/√n)

where s, the calculated sample standard deviation, replaces σ, Xbar being the
sample mean given by Eq. (2.1.1).
In Section 2.8 we shall introduce another important statistic known as
the F-statistic, which is highly useful in testing the equality of variances using
observed data and also in reaching robust designs.
Based on the above data, can one deduce that the difference in the average forecasts
(Xbar₁ − Xbar₂) by the two groups is statistically significant? In order to test this,
we propose the following hypotheses:

H₀: μ_brokers = μ_investors
H₁: μ_brokers ≠ μ_investors

The null hypothesis H₀ states that there is no difference between the brokers' and
the investors' forecasts. H₀ may be re-phrased as the statement "The parameter
(μ_brokers − μ_investors) is zero." The test procedure will try to determine if the
observed difference (Xbar₁ − Xbar₂) is statistically significant or insignificant,
if we assume H₀ to be true.
The acceptability of H₀ (or that the two population means μ_brokers and μ_investors
are equal) can be tested by examining the difference between the respective
estimates, the sample means Xbar₁ and Xbar₂. This test (using Xbar₁ and Xbar₂) is
elementary, since by the central limit theorem, both Xbar₁ and Xbar₂ are distributed
normally when sample sizes (n₁ and n₂) are reasonably large. Thus
In this test one makes use of a standard result in statistics. If samples are
independent, then the difference between the sample means is also distributed normally,
with an expected mean equal to (μ₁ − μ₂) and a standard deviation equal
to the square root of the sum of the two respective variances of Xbar₁ and Xbar₂.
Therefore, since (Xbar₁ − Xbar₂) is distributed normally, one may now check to see if the
observed difference (Xbar₁ − Xbar₂) is a high probability or a low probability
event. Further, since the normal distribution governs the probabilities and the
needed standard deviations σ₁ and σ₂ are known, one uses here a Z-test. In the
Z-test one calculates a Z-statistic as

Z = [(Xbar₁ − Xbar₂) − (μ₁ − μ₂)] / √(σ₁²/n₁ + σ₂²/n₂)
Using the data from the Dow Jones example, if we hypothesize that μ_brokers =
μ_investors, then we find
it may be verified from a Z-table that the observed 350 point forecast difference
is more than three standard deviations away from zero (see Appendix A at the
end of the book). A reference to the Z-table also points out that the Z-value
calculated above (16.2) has only a < 0.01 probability of occurrence. It is improbable
that such a high Z value occurred only by chance and therefore it must be
concluded that the forecasts made by the two groups are different.
If the samples are small and one assumes that both samples are random
samples from normal distributions with the same but unknown standard deviation,
then the t-statistic should be used to assess the difference between two means.
The statistic used will be
t = (Xbar₁ − Xbar₂) / √[sp²(1/n₁ + 1/n₂)]

where sp² is a pooled estimate (see Eq. (2.3.11)) of the variance, determined as

sp² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²] / (n₁ + n₂ − 2)     (2.5.3)
larger than the 1.98 cutoff for a two-tailed 5% t-test with (41 + 75 − 2) or 114
degrees of freedom (see Appendix A). Again, therefore, we cannot accept the
speculation H₀ that the two forecasts are not different.
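The pooled t-procedure is short enough to sketch end to end; the two small samples below are illustrative stand-ins, not the forecast data of the example:

```python
# A pooled two-sample t-statistic built around Eq. (2.5.3); the two
# small samples below are illustrative, not the Dow Jones data.
import math

def pooled_t(x, y):
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    s1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)   # sample variances
    s2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)   # pooled variance
    return (m1 - m2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

x = [10.1, 9.8, 10.4, 10.0, 9.9]
y = [9.2, 9.5, 9.1, 9.4]
print(round(pooled_t(x, y), 3))   # 5.225
```

The result would be compared against the t-table cutoff with (n₁ + n₂ − 2) degrees of freedom, exactly as done for the forecast data above.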
One need not restrict tests of hypothesis to speculations about averages only.
One may also verify, for example, whether a set of sample data is distributed
normally (with certain mean and variance). The statistic used here is called the
chi-square statistic.
The chi-square distribution (Fig. 2.3), with m degrees of freedom, is the
distribution of the sum of m independent, squared standard normal variables,

Z₁² + Z₂² + Z₃² + ... + Zₘ²
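This defining property can be checked by simulation: summing m squared standard normals yields draws whose average is near m, the mean of the chi-square distribution.

```python
# Simulating chi-square with m = 5 degrees of freedom as the sum of
# five squared standard normals; its sample mean should be near m.
import random

random.seed(7)
m, reps = 5, 20000
draws = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(m))
         for _ in range(reps)]
mean = sum(draws) / reps
print(mean)   # close to 5, since E[chi-square with m d.f.] = m
```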
influences and variabilities. The factor of interest here may be quantifiable (i.e.,
measurable in numbers) or merely observable as a condition (morning shift vs.
afternoon shift, or steel vs. copper, etc.).
There is a question here of sufficiency of data. Obviously, a single secretary
typing only business letters cannot confirm whether wordprocessor A is easier
to use than wordprocessor B. In order to establish that A is easier or harder to
use (than B), one would be well-advised to test several secretaries on several
different machines, working on various assignments.
In hypothesis testing, when the response (the output of the process one is
studying) is observed or the performance measurements are taken, one must
ensure that data collected would allow the investigator to compare, for instance,
machine-to-machine, person-to-person, assignment-to-assignment and word-
processor-to-wordprocessor variations in performance. These factors can affect
the secretary's experience, beyond any effect that changing the wordprocessor
alone may produce.
In making such a comparison, one must also decide what summaries
(statistics) would be calculated from the observed data, and what tests should be
applied on these statistics. Fisher [2] in 1926 established that the analysis of
variance (ANOVA) procedure provides one of the best procedures to conduct such
comparisons.
Why is it necessary that several different machines, assignments, and
secretaries be involved in such tests? Perhaps one feels that such elaboration adds
needlessly to the complexity of the study and is wasteful of time and
resources. If the same person is always going to use the same machine and type
only business memos, one may perhaps get away with doing only the convenient
investigation. However, after the investigator has made his recommendation, the
typing assignments would perhaps differ, some requiring text work, numbers,
columns, and tables, or even flow charts. It would then be desirable to use a method
of comparison that is valid under less restrictive and more realistic conditions.
Further, as we shall see later, some influencing factors might be beyond the
investigator's control. Here one needs randomizing, a procedure that attempts to
average out the influence of the uncontrolled factors on all observations.
The scientific approach of evaluating or comparing the effects of various
factors uses statistically designed experiments: a systematic procedure of drawing
observations after setting the factors at certain desired levels, and then analyzing
the observed data using the ANOVA procedure.
quality, amount of sunlight, seed quality, moisture, etc. To reach a valid conclusion
in this investigation, therefore, one would have to neutralize these influences by
randomizing the plant growth trials with respect to these factors.
If this randomizing is without any plan or logic, it is possible that by the
luck of the draw, most plain water-fed plants would end up growing, for instance,
in shade. In order to avoid this, some deliberate balancing would have to be
planned. If 16 plants are to be grown, eight would be given plain water, while the
other eight would be given Miragro. However, randomizing would decide which
plants would receive Miragro, and which plain water, regardless of where one
plants them.
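Such a balanced-but-random assignment is easy to script. The sketch below (an illustration, not part of the text) picks at random which 8 of the 16 planting positions receive Miragro, so that location effects such as shade or soil differences are averaged out across the two treatment groups.

```python
import random

random.seed(42)  # fixed seed only so this illustration is reproducible

positions = list(range(1, 17))                      # 16 planting positions
miragro = sorted(random.sample(positions, 8))       # 8 positions picked at random get Miragro
plain = [p for p in positions if p not in miragro]  # the remaining 8 get plain water

print("Miragro positions    :", miragro)
print("Plain-water positions:", plain)
```

The design stays balanced (eight plants per treatment) while the randomization, not the investigator, decides which plant gets which treatment.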
Suppose that one obtains the following height measurements after 12 weeks
of planting, beginning with 16 equally healthy seedlings:
Treatment
Miragro    Plain Water
  26           25
  28           27
  30           29
  33           30
  22           21
  24           23
  26           25
  27           24
The calculated means and variances under the two treatments immediately show
that the mean heights of the plants grown under the two treatments differ. Also,
note the considerable difference in height from plant to plant under each of
the two treatments. This suggests that one cannot be certain that the fertilizer
treatment caused the difference in means, and not chance (chance here includes
all the factors the investigator did not or could not control).
The observed difference between the two sample means, 27 and 25.5 inches,
under the two treatments could be either because of a true difference influenced
by these two treatments, or the large variance of a single distribution of plant
heights under various influences. Therefore, to probe the hypothesis H0 further,
we begin by allowing that the effects of plain water and Miragro may be unequal and
set up two simple cause-effect models:

Height with plain water: Y = μ_1 + e
Height with Miragro:     Y = μ_2 + e

In these models the parameter μ_i is the expected effect on height (caused by
water or by Miragro), and e is the unexplained deviation or error, a random
(chance-influenced) variable representing the influence of all other uncontrolled
factors (sunshine, moisture, soil condition, etc.).
σ² = (variance across plants with Miragro + variance across plants with plain water)/2          (2.7.1)
   = (10.25 + 8)/2
   = 9.125
The reader should verify Eq. (2.7.1) using Eq. (2.5.3). With this common
variance of individual plant heights known, we can, given the sample size n, next
estimate the variance of sample averages. This will equal (σ²/n) (see Eq. (2.3.10)).
In the present example, the averages 27 and 25.5 are sample averages, each with
sample size 8. Therefore, the variance of the sample averages is

9.125/8 = 1.140625          (2.7.2)
Since two sample means (27 and 25.5) were estimated with their average being
26.25, one could directly calculate the variance of sample means, using the
definition of variance (Eq. 2.3.6), as
A calculated ratio (of variance estimates) near 1.0, as one would expect, would
suggest that all the plant growth data came from a single large population,
suggesting that application of Miragro fertilizer made no difference. If, on the
other hand, the ratio (F) is much larger than 1.0, this suggests that the variance of
sample means, directly calculated using the observed sample means obtained at
different treatments, is large. This may also suggest that the sample means
obtained under different treatment conditions (here Miragro feeding and plain
water) differ considerably from each other, or are too large to be explained away
by the sampling variation from one plant to the next.
In the Miragro fertilizer example, F = 0.986, which is not convincing
evidence that adding Miragro makes a difference.
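These numbers are easy to reproduce. The sketch below (an illustration, not part of the text) recomputes the F-ratio for the plant-height data: the numerator is the variance of the two sample means calculated directly, and the denominator is σ²/n from Eq. (2.7.2). Following the text, the individual plant-height variances divide by n, while the numerator divides by the k − 1 = 1 dof of the sample means.

```python
miragro = [26, 28, 30, 33, 22, 24, 26, 27]   # heights from the table above
water = [25, 27, 29, 30, 21, 23, 25, 24]
n = 8

def var_n(xs):
    # variance with divisor n, matching the values 10.25 and 8 quoted in the text
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

sigma2 = (var_n(miragro) + var_n(water)) / 2   # 9.125, Eq. (2.7.1)
denom = sigma2 / n                             # 1.140625, Eq. (2.7.2)

means = [sum(miragro) / n, sum(water) / n]     # 27.0 and 25.5
grand = sum(means) / len(means)                # 26.25
# directly calculated variance of the sample means, with k - 1 = 1 dof
numer = sum((m - grand) ** 2 for m in means) / (len(means) - 1)

F = numer / denom
print(round(F, 3))  # 0.986
```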
The general character of the F-statistic is as follows: The distribution of the
F-statistic depends on two factors: the number of distinct treatments (k), and the
number of observations (n) in each treatment. In the above example, k is 2 and
n is 8. The numerator of the F-statistic is the directly calculated variance of the
sample means, determined from the k sample means obtained under k different
treatments. The numerator has (k - 1) degrees of freedom, one dof being used in
estimating the average of sample means for the variance of sample means to be
calculated. The denominator, the estimate of the variance of sample means for a
sample size n based on the averaged variance of individual observations, uses a
total of k x n observations. The calculation of the mean of k sample variances (in
the k treatments) from these kn observations, however, requires that one first calculate
k sample means (one each in the k treatments used). Thus, the denominator of F
will have k(n - 1) degrees of freedom.
The two degrees of freedom (that of the numerator and the denominator)
determine the exact distribution of the F-statistic. Various F-distributions appear
on Fig. 2.4. One should remember that an F value near 1.0 indicates that the
effects of the treatments do not differ. On the other hand, if the F-statistic is
significantly larger, it would suggest that the mean treatment effects vary
significantly from each other.
Fig. 2.4 The F-distribution: (a) with critical values F_0.975 and F_0.025; and
(b) three F-distributions with different degrees of freedom.
Ybar = (Y_1 + Y_2 + ... + Y_kn)/(kn)

If one now calculates the total sum of squares of the deviation of each observation
Y_j from Ybar, one obtains

Total sum of squares = Σ_{j=1}^{kn} (Y_j - Ybar)²          (2.9.1)
Now, for treatment i, which consists of the n observations {Y_j, j = ((i - 1)n + 1),
((i - 1)n + 2), ((i - 1)n + 3), ..., in}, the term (Y_j - Ybar)² in treatment i may be
expanded as follows:

(Y_j - Ybar)² = [(Y_j - Ybar_i) + (Ybar_i - Ybar)]²
             = (Y_j - Ybar_i)² + (Ybar_i - Ybar)² + 2(Y_j - Ybar_i)(Ybar_i - Ybar)

Now

Σ_{j=(i-1)n+1}^{in} (Y_j - Ybar_i)(Ybar_i - Ybar) = (Ybar_i - Ybar) Σ_{j=(i-1)n+1}^{in} (Y_j - Ybar_i) = 0

since the deviations of the n observations in treatment i from their own average
Ybar_i sum to zero. Summing the expansion over all the kn observations therefore yields

Σ_{j=1}^{kn} (Y_j - Ybar)² = Σ_{i=1}^{k} Σ_{j in treatment i} (Y_j - Ybar_i)² + n Σ_{i=1}^{k} (Ybar_i - Ybar)²          (2.9.2)
The first term on the right-hand side of Eq. (2.9.2) is the sum of squares of
deviations of the individual observations {Y_j, j = 1, 2, ..., kn} about the respective
treatment means {Ybar_i, i = 1, 2, 3, ..., k}. The second term is the sum of the
squares of the difference of each Ybar_i (at treatment level i) from the grand
average, Ybar.
Recall that the total sum of squares is a measure of variation among the
individual observations {Y_j}. The above decomposition shows that this total
variation is the sum of (a) how much each observation varies about the mean
of each treatment and (b) how much the average value of Y varies from one
treatment to the next. This important result can also be expressed as
Total variation in observations = variation within treatments
+ variation between treatments (2.9.3)
The purpose of decomposing the observation-to-observation variation in an
experiment as mentioned above is to clarify that the effect observed, Y, varies for
the following reasons:
1. Each (controlled) treatment may have a different effect on Y.
2. For any given treatment, there are other uncontrolled factors that also
affect Y and cause it to vary about its expected value.
The uncontrolled factors lead to the within-treatment variability in observed data.
If there is no difference in the effect attributable to the different treatments, the total
variation observed would only equal the within-treatment variability. If there is a
treatment-to-treatment difference, the total variation observed and then quantified
by the total sum of squares will be significantly larger than the within-treatment
sum of squares. This, as we shall see in Section 3.4, may be detected by the F-test.
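The decomposition in Eq. (2.9.3) can be verified numerically on any balanced data set. The sketch below (an illustration, not part of the text) uses the plant-height data from the Miragro example to confirm that the total sum of squares equals the within-treatment plus the between-treatment sums of squares.

```python
groups = [
    [26, 28, 30, 33, 22, 24, 26, 27],  # Miragro
    [25, 27, 29, 30, 21, 23, 25, 24],  # plain water
]

all_obs = [y for g in groups for y in g]
grand = sum(all_obs) / len(all_obs)

# total variation of every observation about the grand average
total_ss = sum((y - grand) ** 2 for y in all_obs)
# variation of each observation about its own treatment mean
within_ss = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)
# variation of the treatment means about the grand average
between_ss = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)

print(total_ss, within_ss, between_ss)
assert abs(total_ss - (within_ss + between_ss)) < 1e-9
```

For this data set the split is 155 = 146 (within) + 9 (between), illustrating Eq. (2.9.3).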
EXERCISES
1. An additional set of 25 lines was randomly picked from the book used in
Example 2.1, with the data summarized as follows:
Word Count of 25 Randomly Selected Lines
9 7 11 5 13
14 13 8 5 2
11 12 4 2 13
8 14 7 13 9
14 2 13 7 10
Combine the above data with the data of Table 2.1 and confirm that the new 95%
confidence interval for μ using the 50 random observations will be the narrower interval

7.8131 < μ < 10.3069

[Hint: Use t(0.025, 49) from Appendix A.]
2. Conduct an F-test to accept or refute the hypothesis that the variances of the
two sets of data presented in Table 2.1 and Exercise 1 above are equal. If there
are 435 pages in the book in question, give estimates for the total word count
for this book and the variance of this count.
Design of Experiments
3.1 TESTING FACTORS ONE-AT-A-TIME IS UNSCIENTIFIC
Disputes over why quality is lacking or why a factory can't produce acceptable
goods often last for months and even years. Even experts sometimes don't seem
to agree on the remedy: a switchover of material, loading methods, operator
skills, tools, or QA practices. For want of irrefutable evidence, the blame may
subsequently fall on manufacturing, R&D, the design office, suppliers, and even
the customer.
This chapter elaborates the F-test, a highly precise data analysis method
that ranks among the best known methods for empirically exploring what factors
influence which features. Establishing the existence of cause-effect relationships
scientifically is pivotal in resolving disputes and questions such as those cited
above, and in guiding later decisions and actions. As we shall see, the F-test plays
a key role in identifying design features that have significant influence on
performance and robustness.
In the study of physical processes aimed at predicting the course of these
processes, one often explores cause-effect relationships using regression analysis.
Strictly speaking, however, regression should be attempted only after one has
established the presence of a cause-effect relationship, and the variables involved
are measurable. When one has not already established the cause-effect relationship,
or when the variables are functional or all influenced by a third factor, regression
or correlation studies can be misleading. Further, regression is decidedly not useful
when the independent factors are attributive (e.g., steel vs. plastic). By contrast,
precise and reliable insight into any cause-effect relationships existing in such
cases can be obtained from statistically designed experiments.
Design is defined as the selection of parameters and specification of features
that would help the creation of a product or process with a pre-defined, expected
performance. When complete, a design improves our capability to fulfill needs
through the creation of physical or informational structures, including products,
machines, organizations, and even software. Except in the most trivial cases, however,
the designer faces the joint optimization of all design features, keeping in view
objective aspects that may include functionality, manufacturability, maintainability,
serviceability, and reliability. Often this cannot be done in one step because
design as a process involves a continual interplay between the characteristics the
design should deliver and how this is to be achieved. Producing a robust design,
in particular, is a complex task. As mentioned in Chapter 1, robust design aims at
finding parameter settings which would ensure that performance is on target,
while simultaneously minimizing the influence of any adverse factors (the noise) that the
product user may be unable to control economically or eliminate. Robust design
optimization experiments are of this type), the data are obtained in a form that
makes the prediction of the output for some specified settings of the input variables
easy. Furthermore, OAs greatly simplify the estimation of individual factor effects
even when several factors are varied simultaneously.
The study of interaction is clearly one area in which statistical experiments
continue to be the only procedure known to us. An illustration of the significance
of interaction effects is provided by the lithograph printing example [5] in
Table 3.1.
TABLE 3.1
LITHOGRAPH PRINTING EXPERIMENTAL DATA

Experiment    Exposure Time    Development Time    Yield (%)
    1             Low               Low               40
    2             High              Low               75
    3             Low               High              75
    4             High              High              40
The table above shows the typical observed effects of exposure and development
times on yield (per cent of prints in acceptable range) in lithography. Note
the large fall in yield when one sets both exposure and development times high.
Such an effect (an interaction between exposure time and development time) could
be at most suspected, but not established, by varying only one of these factors at a
time. If the study involves more factors, interactions would be untraceable in one-
factor-at-a-time experiments.
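A quick calculation on Table 3.1 shows why. The sketch below (an illustration, not part of the text) computes the main effects and the interaction from the four runs using the usual +1/-1 level coding: each main effect is zero, yet the exposure × development interaction is large, which is exactly what a one-factor-at-a-time test would miss.

```python
# Runs from Table 3.1: (exposure, development, yield); -1 = Low, +1 = High
runs = [(-1, -1, 40), (1, -1, 75), (-1, 1, 75), (1, 1, 40)]

def effect(sign):
    # average yield where the coded sign is +1 minus average where it is -1
    hi = [r[2] for r in runs if sign(r) == 1]
    lo = [r[2] for r in runs if sign(r) == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

exposure = effect(lambda r: r[0])           # main effect of exposure time
development = effect(lambda r: r[1])        # main effect of development time
interaction = effect(lambda r: r[0] * r[1]) # exposure x development interaction

print(exposure, development, interaction)  # 0.0 0.0 -35.0
```

Varying only one factor at a time would report "no effect" for each factor and never expose the -35 interaction.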
Statistical experiments consist of several well-planned individual experiments
conducted together. The setting up of a statistical experiment (also known
as designing it) involves several steps such as the following:
1. Selection of responses (performance characteristics of interest) that will
be observed
2. Identification of the factors (the independent or influencing conditions)
to be studied
3. Selection of the different treatments (or levels) at which these factors will be set in
the different individual experiments
4. Consideration of blocks (the observable noise factors that may influence
the experiments as a source of error or variability).
In the lithography example above, yield % is the response, and exposure time
and development time are the process design or influencing factors. Each of these
factors has two possible treatment levels (high and low) at which the
lithographer would set them as needed. Non-uniformity of processing temperature
and that of the concentration of chemicals would constitute the noise factors here.
Before the investigator plans statistical experiments, he must clearly know
the objective of conducting the experiments. Clarity in this objective is of
enormous value. For example, when one states the experiment's objective as Select
the optimal values for resistance R1 and inductance L2 in the design of a power
conditioner unit to minimize sensitivity to input voltage and frequency variations,
it has the required clarity. The domain in which the results of a set of designed
experiments are applicable is called the influence space. It is important that one
makes this influence space sufficiently wide by selecting well-spread factor settings,
without concern for off-quality production during the conduct of the
experimental investigation. During such experimentation, the investigator should
uncover the range of the input variable over which performance improves, as well as
the range of input settings over which performance deteriorates. Only then can
appropriate countermeasures be identified and devised.
The elements of this domain on which the experiments are conducted are
called experimental units. The experimental units are the objects, prototypes,
mathematical models, or materials to which the investigator applies the different
experimental conditions and then observes the response.
In statistical experimentation, one distributes the experimental units randomly
in the backdrop of noise factors to represent correctly the character of the overall
influence space. This minimizes the chances of any biasing effect caused by the
uncontrolled factors. For example, in testing the productive utility of fertilizers,
one takes care to distribute the planting of seedlings so that the effects of sun/shade,
soil differences, depth of tilling, planting, etc. average out. These are the factors
that the investigator does not control during the trials.
In design optimization experiments as proposed by Taguchi, the investigator
changes the settings of the parameters under study from trial to trial in a systematic
manner. Special matrices, called OAs (Fig. 1.5), guide these settings. OAs are
matrices that specify the exact combinations of factor treatments with which one
conducts the different experimental trials. It is common to symbolically represent
or code the distinct levels of each design or noise parameter by (1, 0, -1), or
(1, 2, 3), etc. (Fig. 1.5) to distinguish the different combinations of parameter
settings from each other. The foremost reason for using OAs rather than other
possible arrangements in robust design experiments is that OAs allow rapid estimation
of the individual factor (also known as main) effects, without the fear of distortion
of results by the effect of other factors.
In design optimization, one uses OAs to the maximum extent possible to
achieve efficiency and economy. Orthogonal arrays also simplify the simultaneous
study of the effects of several parameters.
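The balance property that makes OAs work can be checked mechanically. The sketch below is illustrative; the L4 array shown is the standard two-level orthogonal array, not necessarily the one in the text's Fig. 1.5. It verifies that for every pair of columns, each combination of levels appears equally often, which is what lets main effects be estimated without distortion.

```python
from itertools import combinations
from collections import Counter

# Standard L4 (2^3) orthogonal array, levels coded 1 and 2
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

balanced = True
for c1, c2 in combinations(range(3), 2):
    pairs = Counter((row[c1], row[c2]) for row in L4)
    # Orthogonality: all four level combinations (1,1), (1,2), (2,1), (2,2)
    # must occur equally often in every pair of columns
    if len(pairs) != 4 or any(count != 1 for count in pairs.values()):
        balanced = False

print("all column pairs balanced:", balanced)
```

The same check scales to larger arrays such as L8 or L9; if any pair of columns fails it, the array is not orthogonal and factor effects would be confounded.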
observations {Y_ij} to the influence (μ_i) of the control factor; Eq. (3.2.1) also links
{Y_ij} to {e_ij}, the effect of the uncontrolled factors.
The statistical analysis of the results of the one-factor investigation depends
strongly on three assumptions: linearity, additivity, and separability of the effects of
the control factor and the uncontrolled factors. Only under these assumptions
would the simple model of Eq. (3.2.1) be a valid description of the factor-response
relationship.
The one-factor statistical investigation can be useful in situations such as
the following: A bridge designer may speculate that the material chosen (steel,
Alloy X, or Alloy Y) to fabricate a structural beam has an influence on the beam's
deflection characteristics under a standard load, independent of other factors.
Subsequently, the deflections observed of prototype beams built using these materials
may be used to (a) establish whether material choice has any influence on deflection,
and (b) identify the material with the maximum or minimum flexibility.
As mentioned earlier, for the conclusions to be valid, it is critical, even in
this apparently straightforward investigation, that one randomizes the runs and
their replications with respect to the uncontrolled factors.
When one uses k treatment levels (say k different materials to construct the
beam), it is common to summarize the observed data in a table such as Table 3.2,
which shows that the investigator obtained n_i replicated observations at Treatment
i, and a total of n_1 + n_2 + n_3 + ... + n_k (= N) observations.

TABLE 3.2
OBSERVATIONS AND SAMPLE MEANS FOR A ONE-FACTOR EXPERIMENT

Treatment    Observations                Sample Mean
    i        Y_i1, Y_i2, ..., Y_in_i     Ybar_i = (1/n_i) Σ_{j=1}^{n_i} Y_ij
The observation averages Ybar_1, Ybar_2, etc. shown under Sample Mean in
Table 3.2 estimate the treatment effects μ_i, i = 1, 2, 3, ..., k. To determine now
whether the treatment effects are unequal, one would statistically compare the two
following sources that may cause the {Ybar_i} averages to differ from each other
(see Eq. 2.9.3):
1. Within-factor (also called within-treatment) variability.
2. Between-factor (also called between-treatment) variability.
If between-treatment variability is (statistically speaking) larger than what one
expects from the variation that occurs within a typical treatment when one replicates
observations, one would question whether the effects μ_i, i = 1, 2, 3, ..., k, are all
the same. Perhaps the reader can see that the approach here parallels the ideas
that led to the illustration of ANOVA in Section 2.7.
One key measure of variability in a set of observations is how far a single
observation deviates from its expected average. For a group of observations, one
determines variability collectively by summing up the squares of the differences
of the individual observations from the average. One calls this sum the sum of
squares of deviations, or more explicitly the error sum of squares. This quantity
measures the experimental error (resulting from the influence of uncontrolled
factors) in replicating or repeating observations (n_i times) when treatment i is
held constant.
One computes the experimental error, which reflects the variability caused
by all factors not in control or not deliberately set, as the sum of squares of the
deviation of individual observations from their respective expected averages.
Thus, if the observations resulting from replicating the experiment at treatment
i are {Y_ij, j = 1, 2, 3, ..., n_i}, and their average is Ybar_i, then the experimental
error accumulated by replicated runs at treatment level i is

Σ_{j=1}^{n_i} (Y_ij - Ybar_i)²
The average variability among the observations is called the mean sum of squares
or mean square error. The mean square error at treatment i is

Σ_{j=1}^{n_i} (Y_ij - Ybar_i)² / (n_i - 1)

In the above, the quantity (n_i - 1) is called the degrees of freedom (or dof) of the
mean sum of squares at treatment i. The dof acknowledges that, of the n_i observations
obtained, if one calculates a statistic (Ybar_i) using these data values, then this
statistic (Ybar_i) and (n_i - 1) observations together can determine the value of the
one remaining (i.e., the n_i-th) observation.
TABLE 3.3
BEAM DEFLECTION TEST RESULTS

Material    Observed Deflections          Sample Mean    Sum of Squares
Steel       82 86 79 83 85 84 86 87           84               48
Alloy X     74 82 78 75 76 77                 77               40
Alloy Y     79 79 77 78 82 79                 79               14
What can one infer from these test results? Note the following salient aspects of
these observations:
1. The number of observations (i.e., replications done by building several
beams and measuring their deflection under standard load) for the three different
materials (i.e., the treatments) is unequal. This is an important fact about one-
factor investigations in general. In these investigations it is not necessary that an
equal number of observations be obtained for each treatment.
2. It is not readily apparent from the average deflections {Ybar_i} calculated
for each material type that under standard load a material-to-material difference
in flexibility (manifested by deflection) exists.
3. One cannot yet comment on the Sum of Squares values (a measure of
variability from the respective expected average) in the table, for these contain
contributions from an unequal number of data points (replications).
With the help of these observations, an important data summary (statistic)
can be calculated. If one pools all the three Sums of Squares and then averages
them by dividing by the total dof for this pooled sum (as done to define the
estimated pooled variance in Eq. (2.5.3)), one produces the overall observation-
to-observation (or error) variability, known as the within-treatment variability
(see Section 2.6). This variability equals

Mean SS_error = (1/(N - k)) Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij - Ybar_i)²

Mean SS_error, often called the Mean Sum of Squares, as already mentioned,
reflects the typical observation-to-observation variability when any particular
treatment is held constant and replicate observations are made. In the beam deflection
example, replications are made by fabricating several identical beams using the
same material and observing their respective deflections under the standard load.
Note that the averaging to get Mean SS_error uses all N observations and it spans
across each of the k treatments used.
The other variability, the one that is closer to the objective of the one-factor
experimental investigation, manifests the impact on the observations caused by
different treatments. One calculates this variability as the between-treatment sum
of squares. One determines the between-treatment sum of squares by setting a
reference average value equal to the grand average of all observations, Ybar,

Ybar = (1/N) Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij

Recall that we have used a total of k treatments here and n_i represents the number
of observations obtained at treatment level i. A total of N observations were
originally obtained. One finds the between-treatment sum of squares as

SS_treatment = n_1(Ybar_1 - Ybar)² + n_2(Ybar_2 - Ybar)² + ... + n_k(Ybar_k - Ybar)²

Since one uses up one dof of the k treatments in calculating the grand average Ybar,
one calculates the mean between-treatment sum of squares statistic as

Mean SS_treatment = SS_treatment / (k - 1)
experimental data, namely, the between- and the within-treatment variability. The
purpose of ANOVA, which one performs with the mean sums of squares, is to
separate and then compare such variabilities. Also, as we will see later, ANOVA
applies not only to one-factor investigations, but also to multiple factor studies.
This is a considerable capability for a test because variabilities may be caused by
one or several independently influencing factors, and by their interaction.
Recall first that if one squares the deviation of each observed data value {Y_ij}
from the grand average and then sums these squares to a total, one ends up with
the result (due to Fisher [13]) derived in Section 2.9:

Total Sum of Squares = Sum of Squares due to error + Sum of Squares due to treatment

The quantity

Σ_{i=1}^{k} (X_i - Xbar)² / σ²

represents the sum of the squares of k standard normal variates and therefore has a chi-square
distribution, Xbar being Σ X_i / k. As mentioned in Section 2.5, the chi-square
distribution is also a standard probability distribution like the normal distribution.
The chi-square distribution has only one parameter, its dof. With the squares of
deviations of k observations {X_i, i = 1, 2, 3, ..., k} from their mean Xbar summed,
the chi-square variable written above will have (k - 1) degrees of freedom.
In the one-factor investigation, if the mean effects μ_1, μ_2, μ_3, ..., μ_k
due to the k different treatments are all equal, then the total N observations taken
in the experiments would all belong to the same normal population with
variance σ², because due to randomization the influence of the uncontrolled
factors may be assumed to be identical in all the observations. This suggests that
the quantity (a random variable)

Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij - Ybar)² / σ²

will also have a chi-square distribution, with (N - 1) dof. (In this context, the
reader may review the material in Section 2.9.)
The quantity Error Sum of Squares (or the sum of squares of deviations or
errors caused by the uncontrolled factors) is computed as

SS_error = Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij - Ybar_i)²
TABLE 3.4
THE ANOVA TABLE

Source       Sum of Squares                                        dof      Mean Sum of Squares
Treatment    SS_treatment = Σ_{i=1}^{k} n_i(Ybar_i - Ybar)²        k - 1    SS_treatment/(k - 1)
Error        SS_error = Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij - Ybar_i)²  N - k    SS_error/(N - k)

For manual calculation, these sums of squares can be put in a more convenient form:

Total Sum of Squares = Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij² - [Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij]²/N          (3.3.1)

SS_treatment = Σ_{i=1}^{k} [Σ_{j=1}^{n_i} Y_ij]²/n_i - [Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij]²/N          (3.3.2)
Looking at the imposing appearance of Eqs. (3.3.1) and (3.3.2), one might wonder
if this is any simplification! However, the truth is that both these final relations
(the Total Sum of Squares and SS_treatment) involve only the squares of observed
values {Y_ij} and the squares of certain sums of these observations, both being
easier to calculate manually. Thus, the use of Eqs. (3.3.1) and (3.3.2) avoids
calculating the N individual deviations {(Y_ij - Ybar_i)} and their squares
directly, by hand. The following example uses this modified procedure to evaluate
SS_treatment.
EXAMPLE 3.3: Sum of squares for beam deflection. Returning to the beam
deflection problem, one finds that k = 3, n_1 = 8, n_2 = n_3 = 6, and thus N = 20. Also,

Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij = 82 + 86 + ... + 82 + 79 = 1608

Σ_{j} Y_2j = 74 + 82 + ... + 77 = 462

Σ_{j} Y_3j = 79 + 79 + ... + 79 = 474

Therefore,
TABLE 3.5
ANOVA FOR THE BEAM DEFLECTION EXAMPLE

Source       Sum of Squares    dof    Mean Sum of Squares
Treatment        184.8           2            92.4
Error            102.0          17             6.0

Notice that the mean sum of squares of deviations (92.4) caused by changing
materials to construct the beams is considerably larger than the average or mean
variability that occurs because of experimental error (6.0) from measurements
repeated with the same material.
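The entries of Table 3.5 follow directly from the shortcut formulas (3.3.1) and (3.3.2). The sketch below (an illustration, not part of the text) recomputes them for the beam deflection data.

```python
groups = {
    "Steel":   [82, 86, 79, 83, 85, 84, 86, 87],
    "Alloy X": [74, 82, 78, 75, 76, 77],
    "Alloy Y": [79, 79, 77, 78, 82, 79],
}

k = len(groups)                                    # 3 treatments
N = sum(len(g) for g in groups.values())           # 20 observations
grand_sum = sum(sum(g) for g in groups.values())   # 1608

# Eq. (3.3.1): total SS from squares of observations and the grand sum
total_ss = sum(y * y for g in groups.values() for y in g) - grand_sum ** 2 / N
# Eq. (3.3.2): between-treatment SS from the per-treatment sums
treat_ss = sum(sum(g) ** 2 / len(g) for g in groups.values()) - grand_sum ** 2 / N
error_ss = total_ss - treat_ss

mean_treat = treat_ss / (k - 1)   # 92.4
mean_error = error_ss / (N - k)   # 6.0
print(round(treat_ss, 1), round(error_ss, 1), round(mean_treat, 1), round(mean_error, 1))
```

The run reproduces SS_treatment = 184.8 and SS_error = 102.0, hence the mean squares 92.4 and 6.0 quoted in the text.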
One may be tempted to conclude at this point that the effect due to materials
is significant (over background noise). The proper analysis procedure to apply here,
however, is the F-Test, described in Section 3.4.
μ_1 = μ_2 = μ_3 = ... = μ_k

against the alternative hypothesis (H1) that at least one effect {μ_i} is different.
The F-test is so precise that it is able to detect even if only one μ_i is different
from the overall average effect μ.
Recall that we estimated the experimental error (the average observation-
to-observation variability at a given treatment level, or the within-treatment
variability) by the mean square error (Mean SS_error). Mean SS_error reflects the
influence of the uncontrolled factors and thus estimates σ² (the variance of the
errors {e_ij} defined in Section 3.2). This is true irrespective of whether changing the treatments
has an effect on response Y or not.
The between-treatment mean sum of squares also would be an estimate of
(because it would not be different from) the experimental error variance σ², provided
all treatment effects (μ_1, μ_2, μ_3, ..., μ_k) are the same. However, if these effects
are different, the between-treatment mean sum of squares would be affected by
this difference also (these are the differences among μ_1, μ_2, μ_3, ..., μ_k) and,
therefore, be generally much larger than σ².
If the treatment effects μ_1, μ_2, μ_3, etc. are each different from the average
overall effect of treatments (i.e., μ), then it can be shown that

Expected Mean SS_treatment = σ² + Σ_{i=1}^{k} n_i(μ_i - μ)² / (k - 1)
We mentioned in the section above that both SS_treatment/σ² and SS_error/σ² are chi-
square variables. The fact that the ratio of two chi-square random variables is a
random variable that has the F-distribution (Section 2.8) enables us to determine if
the effect on Y varies with treatment. Thus the F-test answers the question: Based
on the experimental observations, would it be reasonable to assume that

μ_1 = μ_2 = μ_3 = ... = μ_k

or is it that at least one treatment effect is different? One might guess that since the
ratio of variances is the basis of the F-test, this test may enable one to compare the
variability caused by treatment effects to the noise or within-treatment variability.
The special way in which one sets up the one-factor statistical experiment causes the
noise variability to occur only because of the error in replicating an experiment when
a given factor treatment (e.g., the material used in the beam) is held unchanged.
We are not aware of any other arrangement of experimenting with a factor
and observing the response to evaluate the effect of the factor that exceeds the
precision of the F-test either in theory or in practice. The F-test procedure
transforms observed data suitably to calculate the experimentally realized value of
a (standard) F-variable, called the test's F-statistic. If this value turns out to be a
rare F-value (one that will be observed or realized only rarely, i.e., with low
probability), then one concludes that the initial hypothesis that the treatment effects are
all equal is not tenable.
Mathematically speaking, if μ_1 = μ_2 = ... = μ_k (= μ), then the quantity

Mean SS_treatment / Mean SS_error

has the F-distribution with (k - 1) and (N - k) degrees of freedom. If

F_calc < F(α; k - 1, N - k)

then there is not enough evidence in the observed data to reject the hypothesis that
the treatment effects are all equal; here F_calc is the observed F-ratio,
Mean SS_treatment/Mean SS_error. One may also work with the
probability of obtaining a realization from the F(k - 1, N - k) distribution that is
at least as large as the observed F-ratio. If this probability is < α, the test suggests
that one should accept the alternative hypothesis H1 that the effect of at least one
treatment is not equal to that of the other treatments. Otherwise, one accepts the
null hypothesis H0 that there is no effect because of treatment changes, i.e.,

μ_1 = μ_2 = μ_3 = ... = μ_k = μ
EXAMPLE 3.3: The F-test for the beam deflection data. In the beam design
problem, the calculated F-ratio is F_calc = 92.4/6.0 = 15.4. The critical value of
F with α = 0.05 is F(0.05; 2, 17) = 3.59 (Appendix A). The critical value with
α = 0.01 is 6.11. This strongly suggests that one should reject the hypothesis
that μ₁ = μ₂ = μ₃, i.e., the hypothesis that the material with which the beam is fabricated does
not affect deflection under the standard load.
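The decision rule of this example is easy to script. A minimal Python sketch follows; the mean squares and the critical F-values are those quoted above (from Appendix A), and the helper name `f_test_decision` is illustrative rather than from the text.

```python
# Sketch of the F-test decision for the beam deflection data (Example 3.3).
# Mean squares and critical F-values are taken from the text; in general the
# critical values come from an F-table (Appendix A) or a statistics library.

def f_test_decision(mean_ss_treatment, mean_ss_error, f_critical):
    """Return (F_calc, reject?) for the one-factor ANOVA F-test."""
    f_calc = mean_ss_treatment / mean_ss_error
    return f_calc, f_calc > f_critical

# Beam example: k = 3 materials, so dof = (k - 1, N - k) = (2, 17)
f_calc, reject_05 = f_test_decision(92.4, 6.0, 3.59)   # F(0.05; 2, 17) = 3.59
_,      reject_01 = f_test_decision(92.4, 6.0, 6.11)   # F(0.01; 2, 17) = 6.11

print(round(f_calc, 1), reject_05, reject_01)   # 15.4 True True
```

Since F_calc exceeds even the α = 0.01 critical value, the equal-effects hypothesis is rejected, exactly as argued in the example.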
It should be noted that the one- or single-factor ANOVA model assumes that the
errors are all independent and normally distributed with an identical variance
(σ²), because the uncontrolled factors influence each treatment
group identically. One can check whether the residuals {e_ij} are distributed as N(0, σ)
by calculating and examining these residuals. The residuals are the differences
between the observations Y_ij and their group average Ȳᵢ at the respective treatment
setting i, written as {e_ij = Y_ij − Ȳᵢ}. No regularity or pattern should appear in a plot of the
residuals. Rather, if the influence of the uncontrolled factors is
uniform and proper randomization has occurred, the residuals should display a
random scatter about zero.
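The residual check described above can be sketched as follows; the three treatment groups and their readings are hypothetical, not the book's beam data.

```python
# Compute residuals e_ij = Y_ij - Ybar_i for a one-factor experiment.
# The observations below are hypothetical, grouped by treatment (material).

groups = {
    "steel":     [2.1, 2.4, 1.9, 2.2],
    "aluminium": [3.0, 3.3, 2.8, 3.1],
    "composite": [1.5, 1.8, 1.4, 1.7],
}

residuals = {}
for treatment, ys in groups.items():
    ybar = sum(ys) / len(ys)                    # group average Ybar_i
    residuals[treatment] = [y - ybar for y in ys]

# Within each treatment group the residuals sum to (essentially) zero;
# any systematic pattern in a scatter plot of these values would signal
# a violation of the ANOVA assumptions.
for treatment, es in residuals.items():
    print(treatment, [round(e, 3) for e in es])
```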
TABLE 3.6
AN EXPERIMENTAL DESIGN TO INVESTIGATE FIVE MAIN FACTOR EFFECTS AND
TWO 2-FACTOR INTERACTIONS

Factor assigned:    A    C   A×C    B    D   B×C    E
L8 column:          1    2    3     4    5    6     7

Experiment                                              Observation
    1               1    1    1     1    1    1     1       y1
    2               1    1    1     2    2    2     2       y2
    3               1    2    2     1    1    2     2       y3
    4               1    2    2     2    2    1     1       y4
    5               2    1    2     1    2    1     2       y5
    6               2    1    2     2    1    2     1       y6
    7               2    2    1     1    2    2     1       y7
    8               2    2    1     2    1    1     2       y8
Similarly, one defines Abar2, Bbar1, Bbar2, Cbar1, etc. One may evaluate the main
factor effects, or the main-effect dependencies, as follows:

    Average Effect_A = Abar2 − Abar1
    Average Effect_B = Bbar2 − Bbar1
    Average Effect_C = Cbar2 − Cbar1
    Average Effect_D = Dbar2 − Dbar1
    Average Effect_E = Ebar2 − Ebar1
The two-factor interactions are calculated as follows: Let

    A1C1 = sum of observations with factor A set at 1 and factor C set at 1
         = y1 + y2
    A1C2 = sum of observations with factor A set at 1 and factor C set at 2
         = y3 + y4
    A2C1 = sum of observations with factor A set at 2 and factor C set at 1
         = y5 + y6
    A2C2 = sum of observations with factor A set at 2 and factor C set at 2
         = y7 + y8

Then, using these cell totals,

    Interaction_A×C = [(A1C1 + A2C2) − (A1C2 + A2C1)]/4

Each bracketed sum contains four observations, so dividing by 4 compares the
average response when A and C are at like levels with the average response
when they are at unlike levels.
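These averaging formulas can be sketched in Python. The level columns for A and C come from Table 3.6; the observations are hypothetical, and the interaction is computed from the four cell totals (dividing by 4, since each bracketed sum contains four observations).

```python
# Main effect of A and the A x C interaction for the design of Table 3.6.
# Levels of A (column 1) and C (column 2) per experiment, from the table:
A = [1, 1, 1, 1, 2, 2, 2, 2]
C = [1, 1, 2, 2, 1, 1, 2, 2]
y = [11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0]   # hypothetical data

def level_mean(levels, obs, lev):
    """Average of the observations taken with the factor at level `lev`."""
    vals = [o for l, o in zip(levels, obs) if l == lev]
    return sum(vals) / len(vals)

effect_A = level_mean(A, y, 2) - level_mean(A, y, 1)   # Abar2 - Abar1

# Cell totals A_iC_j, each the sum of two observations:
cell = {}
for a, c, o in zip(A, C, y):
    cell[(a, c)] = cell.get((a, c), 0.0) + o

interaction_AC = (cell[(1, 1)] + cell[(2, 2)]
                  - cell[(1, 2)] - cell[(2, 1)]) / 4

print(effect_A, interaction_AC)   # 4.0 0.0
```

The hypothetical data here are purely additive, so the interaction estimate comes out exactly zero, as it should.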
To carry out ANOVA of the observations, the sums of squares of certain
deviations are required. One determines these sums of squares as follows: Let

    T = Σ (all observations) = y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8

The correction factor (CF) is defined as

    CF = T²/N

where N = total number of observations obtained. The Total Sum of Squares (S_T) is

    S_T = y1² + y2² + … + yN² − CF

The Factor Sums of Squares are

    S_A = [A1]²/N_A1 + [A2]²/N_A2 − CF
    S_B = [B1]²/N_B1 + [B2]²/N_B2 − CF
    S_C = [C1]²/N_C1 + [C2]²/N_C2 − CF
    S_D = [D1]²/N_D1 + [D2]²/N_D2 − CF
    S_E = [E1]²/N_E1 + [E2]²/N_E2 − CF

where A1 denotes the total of the observations taken with factor A at level 1,
N_A1 the number of such observations, and so on.
Hence,

    S_A×C = Σᵢ Σⱼ [AᵢCⱼ]²/N_AiCj − CF − S_A − S_C

where N_AiCj = number of observations with factor A set at level i and factor C set
at level j. Substituting the appropriate quantities, we obtain S_A×C; similarly, one
obtains S_B×C. The error sum of squares is then

    S_e = S_T − S_A − S_B − S_C − S_D − S_E − S_A×C − S_B×C
We then determine the respective dof. The total dof f_T is given by

    f_T = total number of observations − 1 = N − 1

The other dof are as follows:

    f_A = (number of distinct levels of A) − 1 = 2 − 1 = 1

and similarly f_B = f_C = f_D = f_E = 2 − 1 = 1. For the interactions,

    f_A×C = f_A × f_C = 1 × 1 = 1
    f_B×C = f_B × f_C = 1 × 1 = 1

The dof for error (reflecting the influence of the uncontrolled factors on the
response) may be found as

    f_e = f_T − (f_A + f_B + f_C + f_D + f_E + f_A×C + f_B×C)

This provides Mean SS_error = S_e/f_e
for substitution into the formula for the F-statistic given above.
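The CF and sum-of-squares formulas can be checked numerically. The sketch below uses hypothetical observations on the eight-run design; for a two-level factor, S_A should also equal (A2 − A1)²/N, which serves as a cross-check.

```python
# Correction factor and factor sum of squares for a two-level factor
# in an 8-run experiment (hypothetical observations).
y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
A_levels = [1, 1, 1, 1, 2, 2, 2, 2]          # column 1 of Table 3.6

N = len(y)
T = sum(y)
CF = T * T / N                               # CF = T^2 / N

A1 = sum(o for l, o in zip(A_levels, y) if l == 1)   # total at level 1
A2 = sum(o for l, o in zip(A_levels, y) if l == 2)   # total at level 2
S_A = A1**2 / 4 + A2**2 / 4 - CF             # [A1]^2/N_A1 + [A2]^2/N_A2 - CF

S_T = sum(o * o for o in y) - CF             # total sum of squares

print(CF, S_A, S_T)   # 162.0 32.0 42.0
```

Note that S_A = 32.0 agrees with the two-level shortcut (A2 − A1)²/N = (26 − 10)²/8.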
3.6 SUMMARY
A key objective in Taguchi methods is to uncover how the various design parameters
and environmental factors affect the ultimate performance of the product or
process being designed. Performance may be affected not only individually by the
design parameters and by some factors in the environment, but also by possible
interactions among the design factors and between the design and environmental
factors.
3. Three different nozzle designs are available for assembling fire extinguishers.
Five test runs made using each nozzle type, with discharge velocities measured under
identical inlet conditions, produced the observations shown in Table E 3.2. Confirm that,
at the significance level α = 0.05, the performance difference among the nozzle
designs cannot be ignored.
TABLE E 3.2
RESULTS OF NOZZLE DISCHARGE TESTS
What factors might affect the above observations? Describe a scheme for
randomizing the experimental trials.
The Foundation of Taguchi Methods:
The Additive Cause-Effect Model
4.1 WHAT IS ADDITIVITY?
An experienced plant engineer is hardly surprised at finding that product or
process performance Y depends on several different influencing parameters P, Q,
R, S, etc. These dependencies, in general, can be quite complicated. As a result,
the empirical studies to determine them can become large and even difficult to
run. Fortunately, as pointed out by Taguchi, in many practical situations these
studies can be restricted to the main-effect dependencies (Section 3.5). In these
cases the dependencies are additive and can be satisfactorily represented by what
one calls the additive (or main factor) cause-effect model. The additive model
has the form

    y = μ + p_i + q_j + r_k + s_l + e        (4.1.1)

where μ is the mean value of y in the region of experiment, p_i, q_j, etc. are the
individual or main effects of the influencing factors P, Q, etc., and e is an error
term.
The term main effect designates the effect on the response y that one can
trace to a single process or design parameter (DP), such as P. In an additive model
such as the one given by Eq. (4.1.1), one assumes that interaction effects are
absent. In this model, p_i represents the portion of the deviation of y (or the effect
on y) caused by setting the factor P at treatment P_i; q_j represents that due to the factor Q at
Q_j; r_k that due to setting R at R_k; and so on. The term e represents the
combined errors resulting from the additive approximation (i.e., the omission of
interactions) and the limited repeatability of an experiment run with experimental
factor P set at P_i, Q at Q_j, R at R_k, and S at S_l. Repeated experiments usually show
some variability, which reflects the influence of factors the investigator does not
control.
The additivity assumption also implies that the individual effects of the
factors P, Q, R, etc. on performance Y are separable. Under this assumption the
effect of each factor can be linear, quadratic, or of higher order, but the additive
model assumes that there exist no cross product effects (interactions) among the
individual factors. (Recall the instance of interaction of effects seen between
exposure time and development time in the lithography example, Table 3.1.)
If we assume that the respective effects (α and β) of two influencing factors
A and B on the response variable Y are additive, we are then effectively saying
that the model

    Y_ij (= μ_ij + e_ij) = μ + α_i + β_j + e_ij        (4.1.2)

represents the total effect of the factors A and B on Y. Note again that this
representation assumes that there is no interaction between factors A and B, i.e.,
the effect of factor A does not depend on the level of factor B and vice versa.
Interactions make the effects of the individual factors non-additive. If at any
time μ_ij is different from (μ + α_i + β_j), where α_i and β_j are the individual (or the
main) effects of the respective factors, then one says that the additivity (or
separability) of main factor effects does not hold, and the effects interact. The
chemical process model shown below provides an example of an interaction
between two process factors.
For this process, the effect on the response variable Utilization (%) is multiplicative
rather than additive. Here, the effect of mixing HP/1000 g depends on the level
of the second process factor, superficial velocity, and vice versa. This effect may
be modelled by

    μ_ij = μ α_i β_j        (4.1.3)
Sometimes one is able to convert the multiplicative (or some other non-additive)
model into an additive model by mathematically transforming the response Y
into log [Y], or 1/Y, or √Y, etc. Such a conversion greatly helps in planning and
running multi-factor experiments using OAs. (We shall see in the next section
that OAs impart much efficiency and economy to statistical experiments.) The
presence of additivity also simplifies the analysis of experimental data. The
transformation that converts the above chemical process model (which
involves the interaction of the factors mixing HP per 1000 g and superficial velocity)
into an additive one is the taking of logarithms on both sides. This gives

    log μ_ij = log μ + log α_i + log β_j

The model equation (4.1.3) then becomes additive and, writing each logarithmic
term with a prime, is written equivalently as

    μ′_ij = μ′ + α′_i + β′_j        (4.1.4)

To remind the reader: because the interaction terms are absent in it, one often
calls the additive model the main effects model.
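A small numerical check illustrates why the logarithm works. The 2×2 table below is hypothetical, built as μ·α_i·β_j: its raw interaction contrast is non-zero, but vanishes after taking logs.

```python
import math

# Hypothetical multiplicative response: mu_ij = mu * a_i * b_j
mu, a, b = 1.0, (2.0, 4.0), (3.0, 5.0)
table = [[mu * ai * bj for bj in b] for ai in a]   # [[6, 10], [12, 20]]

def interaction_contrast(t):
    # (cells where the two factor levels "agree") minus (cells where they
    # differ); this is zero for a purely additive two-factor table.
    return (t[0][0] + t[1][1]) - (t[0][1] + t[1][0])

raw = interaction_contrast(table)
logged = interaction_contrast([[math.log(v) for v in row] for row in table])

print(raw, logged)   # raw contrast is non-zero, log contrast ~ 0
```

After the log transform the table is exactly additive, so an orthogonal-array analysis of the transformed response needs no interaction columns.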
treatment levels. If the respective effects of these factors are additive, then
performance Y may be modelled by
    Y = μ + p_i + q_j + r_k + s_l + e        (4.2.1)
Since this model contains no interaction terms, it is an additive or main factor
model. Now, since each of the four factors (P, Q, R, and S) may be set at three
distinct treatment levels, there will be 3⁴, or 81, ways of combining these different
treatments. It may then appear that to investigate the effects of the four factors,
one has to run each one of these 81 experiments. We now show that if
additivity of main effects is present, then only a small subset (shown in Table 4.1)
of the possible 81 experiments needs to be run to evaluate the effect of the four
design factors. This subset is called the orthogonal matrix experiment.
TABLE 4.1
AN ORTHOGONAL MATRIX EXPERIMENT AND ITS RESULTS

              The Orthogonal
            Matrix of Treatments
Experiment   P    Q    R    S      By Additivity Assumption, y_i
    1        P1   Q1   R1   S1     y1 = μ + p1 + q1 + r1 + s1 + e1
    2        P1   Q2   R2   S2     y2 = μ + p1 + q2 + r2 + s2 + e2
    3        P1   Q3   R3   S3     y3 = μ + p1 + q3 + r3 + s3 + e3
    4        P2   Q1   R2   S3     y4 = μ + p2 + q1 + r2 + s3 + e4
    5        P2   Q2   R3   S1     y5 = μ + p2 + q2 + r3 + s1 + e5
    6        P2   Q3   R1   S2     y6 = μ + p2 + q3 + r1 + s2 + e6
    7        P3   Q1   R3   S2     y7 = μ + p3 + q1 + r3 + s2 + e7
    8        P3   Q2   R1   S3     y8 = μ + p3 + q2 + r1 + s3 + e8
    9        P3   Q3   R2   S1     y9 = μ + p3 + q3 + r2 + s1 + e9
    p1 + p2 + p3 = 0        (4.2.2)

Similarly,

    q1 + q2 + q3 = 0        (4.2.3)
    r1 + r2 + r3 = 0        (4.2.4)
    s1 + s2 + s3 = 0        (4.2.5)
Therefore, to find the effect of setting P at P3 on Y, one simply computes an
arithmetic average of certain {y_i}, as follows: First, note what happens when one
adds the three observations (y7, y8, and y9), in which the P treatment equals P3,
and then averages them.
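This cancellation can be verified directly. The sketch below builds noise-free observations from a hypothetical additive model on the matrix of Table 4.1, with each set of effects summing to zero as in Eqs. (4.2.2)-(4.2.5), and confirms that the average of y7, y8, y9 equals μ + p3 exactly.

```python
# The orthogonal matrix of Table 4.1 (levels of P, Q, R, S per run):
matrix = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

mu = 10.0
# Hypothetical main effects; each set sums to zero, per Eqs. (4.2.2)-(4.2.5).
p = (-1.0, 0.0, 1.0)
q = (0.5, -0.5, 0.0)
r = (2.0, -1.0, -1.0)
s = (0.0, 3.0, -3.0)

# Noise-free additive observations y = mu + p_i + q_j + r_k + s_l
y = [mu + p[i-1] + q[j-1] + r[k-1] + s[l-1] for i, j, k, l in matrix]

# Runs 7, 8, 9 hold P at P3; because each of Q, R, S appears there once at
# every level, averaging cancels the q, r, s effects entirely:
avg_P3 = sum(y[6:9]) / 3
print(avg_P3, mu + p[2])   # both are 11.0
```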
independent of each other and will not be random variables with zero mean
and (σ_e)² variance.
As an alternative to the orthogonal matrix for planning experimental
studies, one may consider the full factorial statistical design.
book). (In Chapters 6 and 8 the method for selecting the right OA for a given
problem is discussed.)
Recall that the effect of some factor A on the response y is the average
change in the response it produces when the setting of factor A goes from its
low level (symbolically represented by −1) to its high level (+1). Suppose
now that the factors A, B, and C produced the responses {y_i} in experiments
run with the different treatments of A, B, and C as shown in Table 4.2. For
each experiment (represented by a row in the table) the symbols +1 and −1
show the particular coded combination of treatments used in that experiment.
TABLE 4.2
THE L8 ORTHOGONAL ARRAY

Experiment    A    B    C    Response
    1        −1   −1   −1      y1
    2        −1   −1   +1      y2
    3        −1   +1   −1      y3
    4        −1   +1   +1      y4
    5        +1   −1   −1      y5
    6        +1   −1   +1      y6
    7        +1   +1   −1      y7
    8        +1   +1   +1      y8

    Effect of A = (y5 + y6 + y7 + y8)/4 − (y1 + y2 + y3 + y4)/4
hand. As one may observe, the layout of the response table (Table 4.3) is quite
straightforward. For most OAs such a table may be constructed [15].
Note that the response table shown (Table 4.3) includes a Random Order
column. This column reminds the investigator that he must randomize
the experimental trials to minimize any biases in results that may develop if the
trials are run in some systematic order, such as Trial 1 to Trial 8 in sequence [11].
Such biases are due to uncontrolled factors. For example, the ambient temperature
may rise as the experiments are run, or Operator X may run the first few experiments,
followed by Operator Y running the later ones.
TABLE 4.3
THE RESPONSE TABLE FOR A THREE-FACTOR EXPERIMENT

                                     A         B         C
Trial   Random   Observed        +1    −1   +1    −1  +1    −1
        Order    Response
  1                 y1           Each observation y_i is copied
  2                 y2           under the +1 or −1 column that
  :                 :            matches the level of each factor
  8                 y8           in that trial.
Total   (sum of observations in columns above goes here)
No. of data values   8
Table 4.4 shows the hand calculations done on a response table. The
calculations shown are for a process optimization investigation conducted with
three process design factors F, S, and T, and a response called Yield. The table
shows the treatments used. The bottom row of the response table shows main
effects calculated.
Another well-known calculation method is due to Yates [11].
TABLE 4.4
A COMPLETED RESPONSE TABLE

Trial   Random Order   Observed Yield     F      S      T
  1          4              164           A     60     70
  2          1              166           A     60     82
  3          8              161           A     80     70
  4          5              160           A     80     82
  5          6              184           B     60     70
  6          3              187           B     60     82
  7          2              179           B     80     70
  8          7              182           B     80     82

Total                      1383        A: 651  60: 701  70: 688
                                       B: 732  80: 682  82: 695
No. of data values            8
Average                   172.9      A: 162.8  60: 175.3  70: 172.0
                                     B: 183.0  80: 170.5  82: 173.8
Estimated main effect                   20.2      −4.8       1.8
with factor T set at 82. These settings would maximize yield. The projected maximum
yield is, simply,

    ybar_max = ybar + (Fbar2 − ybar) + (Sbar1 − ybar) + (Tbar2 − ybar)
             = 172.9 + (183.0 − 172.9) + (175.3 − 172.9) + (173.8 − 172.9)
             = 186.3
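The projection can be reproduced from the eight yields of Table 4.4; the per-trial level assignment used below is inferred from the level totals (651, 732, 701, 682, 688, 695) given in that table.

```python
# Yields and factor levels per trial, following the layout of Table 4.4.
yields = [164, 166, 161, 160, 184, 187, 179, 182]
F = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']   # levels of factor F
S = [60, 60, 80, 80, 60, 60, 80, 80]           # levels of factor S (rpm)
T = [70, 82, 70, 82, 70, 82, 70, 82]           # levels of factor T

def level_avg(levels, lev):
    """Average yield over the trials run at the given factor level."""
    vals = [y for l, y in zip(levels, yields) if l == lev]
    return sum(vals) / len(vals)

ybar = sum(yields) / len(yields)               # grand mean (172.9 rounded)
projected = (ybar + (level_avg(F, 'B') - ybar)
                  + (level_avg(S, 60) - ybar)
                  + (level_avg(T, 82) - ybar))

print(round(ybar, 1), round(projected, 2))   # 172.9 186.25
```

The unrounded projection is 186.25; the text's 186.3 comes from carrying the averages to one decimal place.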
When one has obtained the projected optimum performance as above, one should
run the verification experiment. This is to be done by setting factor F at B, factor
S at 60 rpm and factor T at 82, to confirm that the actual yield is indeed close to
this projection. (As mentioned in Section 4.2, the verification experiment alone
puts the additivity assumption to test, and hence the acceptability of the main
factor model as the basis for performance optimization.)
EXAMPLE 4.1: Ina Seito Company's Tile Manufacturing Experiment [4]. In
the late 1950s, the Ina Seito Tile Company of Japan faced the problem of high
variability in the dimensions of the ceramic floor tiles it produced. Such
variability made many of the tiles unacceptable and reduced process yield. Analysis
of rejected tiles showed that tiles in the centre of the pile fired inside the kiln
experienced lower temperature than those near the periphery. Brainstorming by
Ina employees led to the listing of many process factors whose effects, they felt,
should be investigated.
Seven factors (designated A, B, C, D, E, F and G), each of which could be
set at two distinct levels in practice, were identified. The investigator assumed
initially that the effect of each of these factors was independent of the presence of
other factors (i.e., there were no interactions) and that the effects were additive.
Since the study involved seven factors, each of which could be set at two possible
treatments, the investigator selected an L8 orthogonal array (shown in Table 4.5) to
guide the statistical experiments. (In Chapter 6 we shall describe how one makes
this choice.) All seven factors in these experiments concerned the apportionment
of materials or the tile making recipe. Table 4.5 shows the results of the eight
orthogonal experiments run.
TABLE 4.5
RESULTS OF RUNNING THE TILE-MAKING EXPERIMENTS
                 Orthogonal Array Columns              No. of Tiles
Experiment    1    2    3    4    5    6    7          Found Defective
    1         A1   B1   C1   D1   E1   F1   G1            16/100
    2         A1   B1   C1   D2   E2   F2   G2            17/100
    3         A1   B2   C2   D1   E1   F2   G2            12/100
    4         A1   B2   C2   D2   E2   F1   G1             6/100
    5         A2   B1   C2   D1   E2   F1   G2             6/100
    6         A2   B1   C2   D2   E1   F2   G1            68/100
    7         A2   B2   C1   D1   E2   F2   G1            42/100
    8         A2   B2   C1   D2   E1   F1   G2            26/100
TABLE 4.6
SUMMARY OF ESTIMATED FACTOR EFFECTS

Factor Level    Total Defective Tiles    Average Defective (%)
    A1                   51                    12.75
    A2                  142                    35.50
    B1                  107                    26.75
    B2                   86                    21.50
    C1                  101                    25.25
    C2                   92                    23.00
    D1                   76                    19.00
    D2                  117                    29.25
    E1                  122                    30.50
    E2                   71                    17.75
    F1                   54                    13.50
    F2                  139                    34.75
    G1                  132                    33.00
    G2                   61                    15.25
indeed minimized the tile size variability. The actual results confirmed this [4].
Figure 4.2 shows the factor effects graphically.
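The level sums of Table 4.6 follow mechanically from Table 4.5, and the preferred (fewer-defectives) level of each factor can be read off the same totals. A sketch:

```python
# Defectives per 100 tiles for the eight runs of Table 4.5, and the
# factor levels (1 or 2) used in each run for factors A..G.
defects = [16, 17, 12, 6, 6, 68, 42, 26]
levels = [  # A  B  C  D  E  F  G
    (1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 2, 2, 2, 2),
    (1, 2, 2, 1, 1, 2, 2),
    (1, 2, 2, 2, 2, 1, 1),
    (2, 1, 2, 1, 2, 1, 2),
    (2, 1, 2, 2, 1, 2, 1),
    (2, 2, 1, 1, 2, 2, 1),
    (2, 2, 1, 2, 1, 1, 2),
]
factors = "ABCDEFG"

totals = {}   # e.g. totals["A1"] = 51, matching Table 4.6
for f, name in enumerate(factors):
    for lev in (1, 2):
        totals[f"{name}{lev}"] = sum(
            d for row, d in zip(levels, defects) if row[f] == lev)

# Preferred level of each factor = the one with fewer total defectives
best = {name: min((1, 2), key=lambda l: totals[f"{name}{l}"])
        for name in factors}
print(totals["A1"], totals["F2"], best)
```

The computed totals reproduce Table 4.6 exactly, and the preferred settings come out as A1, B2, C2, D1, E2, F1, G2.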
that includes all combinations of the treatments of the different factors. However,
full factorial experiments involve the largest number of individual trials for a given
number of factors and treatments. Since the publication of Fisher's work, many
statisticians have proposed special experimental designs (combinations of factors
and treatments) to study factor effects with fewer trials [3, 17, 18].
Each such special design has a rational relationship to the purpose of
experimentation, the needs of the investigator, and the physical limitations of the
experiments. All such designs begin with the statement of the investigator's
objective and the identification of the factors that have the greatest potential
influence upon response. Some common statistical designs are:
Completely randomized
Orthogonal
Factorial
Blocked factorial
Randomized factorial
Randomized block
Balanced incomplete block
Latin square, etc.
In contrast to such formal designs, some statisticians feel that parametric
design (or robust design) using OAs and the analysis proposed by Taguchi do not
form a formal statistical methodology [10, 16, 17]. They note specifically that
orthogonal arrays can overlook particular effects (e.g., a confounded interaction)
in exchange for general effects. The Taguchi school responds here by saying that
for the sake of a substantial reduction in experimental effort, one may initially
overlook interactions and run a verification experiment to assess later whether
such an omission was reasonable.
Provided no serious non-additive effects or interactions are present in the
relationship between performance and the design parameters, many studies
completed since the publication of Taguchi methods suggest these methods can be
quite useful in practice [5, 7, 19]. Advocates of Taguchi methods suggest that
even if the methods lack statistical sophistication, if one runs the experiments using
OAs and then verifies the conclusions by a verification experiment, the outcome
can be quite effective, useful, and efficient in leading to the rapid empirical
optimization of designs [5, 6, 7, 14].
However, interactions play a central role in seeking out the robust design.
The novel idea behind parameter design (page 13) is to minimize the effect of the
variation in the noise factors by choosing the settings of the design factors judiciously
to exploit the interactions between design and noise factors [32], rather than by
reaching for high precision and expensive parts, components and materials and
manufacturing control schemes.
In Section 4.6, we showed that Taguchi methods may estimate the factor
effects from the simple averaging of certain observations. Nonetheless, Taguchi
methods may also use ANOVA when appropriate to determine if the effect of a
particular factor on the response or its variability is significant. In particular,
F-tests on S/N ratios are common in robust design studies. In such studies one uses
ANOVA to compare the relative magnitudes of certain sums of squares, as one
does in classical statistical experiments.
The two examples we now give illustrate the application of ANOVA in
Taguchi methods. The first example shows how one determines the dof in a
design optimization problem. This count is a key parameter that guides the selection
of the appropriate OA on which the statistical experiments are to be based. The
second example illustrates the ANOVA steps in a multi-factor design optimization
investigation.
The notion of dof is an important one in statistical analysis. The number of
independent aspects associated with an experimental design (or a factor, or a
sum of squares) is called its dof. A statistical experiment with nine rows in the
matrix has nine dof. The proper counting of the dof is essential in the correct
analysis of the results obtained in statistical experiments.
In any ANOVA table, one computes the mean sum of squares for each
factor by dividing the factor's sum of squares by its dof. One computes the
experimental error variance, which equals the error mean square (Section 3.3), by
dividing the error sum of squares (Sections 3.3 and 3.5) by the dof for error.
Since each factor in the experiment discussed in Section 4.2 has three
treatments, and the overall effect of each factor must satisfy Eqs. (4.2.2)-(4.2.5),
each factor has only two dof. Generally speaking, the dof associated
with a factor is one less than the number of distinct treatment levels at which
the investigator sets that factor during experimentation. One finds the dof for
error as follows:

    dof for error = (total no. of trials × number of repetitions) − 1
                    − total dof for factors and interactions
TABLE 4.7
THE PROCESS FACTORS AND THE AVAILABLE TREATMENTS

TABLE 4.8
ANOVA FOR VARIABILITY (S/N RATIO)

Process Factor                dof    Sum of Squares (SS)    Mean SS    F-Value
A  DBTL additive               1          11.15              11.15      11.42*
C  Isocyanate temperature      1           6.55               6.55       6.71
E  Polyol temperature          1           7.80               7.80       7.99
G  Shot time                   1          11.84              11.84      12.12*
Error                          3           2.93               0.98
Total                          7          40.27
The F-Value column in Table 4.8 suggests that two process parameters, A
and G, have a significant influence on variability. Consequently, A and G should be
set at their lower-intensity levels. This completes the first
basic step of Taguchi's two-step optimization procedure (see Section 6.8).
The second ANOVA focussed on the mean hardness produced by the
experimental trials. Table 4.9 displays the ANOVA data for the signal (mean
hardness).
TABLE 4.9
ANOVA FO R MEAN HARDNESS
Process Sum of
Factor dof Squares (SS) Mean SS F -Value
4.8 SUMMARY
Contrary to what many engineers and scientists believe and practice, one cannot
obtain reliable and reproducible results from empirical investigations by changing
the variables one-at-a-time and observing the effect while holding the other factors
constant. The one-factor-at-a-time study misses interaction effects completely.
One runs a statistically designed experiment all-factors-at-once. Yet, because
of the soundness of the ANOVA theory behind it, such experimentation produces
highly reliable and reproducible results. In statistical experiments one varies
several influencing factors together from trial to trial in a pre-planned, systematic
fashion. The special design, or structure, or plan used in a statistical experiment
adjusts the factor settings in the different trials such that maximum information
can be generated from a minimum number of trials.
In empirical optimization, ANOVA (combined with the F-test) identifies
which influencing factors have the largest impact on (a) the average level of
performance, and (b) the variability of the response variable. ANOVA also identifies
factors that do not influence either the performance or its variability. Statistically
designed experiments help in efficiently separating the 'trivial many' design
parameters or process variables from the 'vital few' that the designer should set
optimally to make the design robust.
Taguchi's robust design procedure makes particular and extensive use of
additive (or main effect) models and OAs rather than the classical full-factorial
designs, a valuable shortcut.
EXERCISES
1. Using Eqs. (2.3.9) and (2.3.7), prove that the average error (e7 + e8 + e9)/3 in
Section 4.2 will have the variance (1/3)(σ_e)².
2. A study involved 32 experiments with nine control factors to help optimize
the routing process referred to in Exercise 3.1 (see [14]). Table E4.1 shows the factor
settings used and the resulting average router life observed in each experiment.
By summing and averaging appropriate observations, estimate the main
factor effect for each control factor and identify the optimum setting for each
factor (ignoring any interaction effects) to maximize router bit life.
TABLE E4.1
RESULTS OF EXPERIMENTS CONDUCTED TO STUDY ROUTER LIFE
3. Use the F-table in Appendix A to verify that all process factors (C, D, E,
and F) shown in Table 4.9 significantly affect mean cushion hardness. What dof
for the F-statistic would be used here? How confident are you of the assertion
that all factors affect hardness?
Optimization Using Signal-to-Noise
Ratios
5.1 SELECTING FACTORS FOR TAGUCHI EXPERIMENTS
With product and process features rapidly growing, it is nearly impossible today
to design a soundly performing product using only the first principles of science.
These principles often help the designer in the selection of the DPs to create a
product with the desired performance. However, the designer rarely has control
over factors beyond the DPs, such as voltage fluctuation, raw material variations
during manufacturing, load variations in service, corrosion, etc. Known as noise,
these factors often have a large effect on performance. To produce a quality design
the designer must be aware of these effects also, besides the first principles of
science.
When a mathematical model expressing performance as a function of the
different DPs is available, one is often able to optimize the settings of these
parameters. However, such optimization quickly becomes unwieldy and even
impossible when the model must also include the environmental and other sources
of disturbance (Table 1.4). In particular, a robust design (for which the output must
stay at or very near target performance) cannot be reached when the designer has
only limited knowledge of the effect of the factors outside his control.
According to Taguchi, variability, which sums up the effect of all factors not
in the designer's control, is the primary obstacle to achieving robust performance.
A rise in a vehicle's fuel consumption, or an undesirable fluctuation in the thickness
of sheets rolled by a rolling mill, is typical rather than exceptional. Many
factors that the user of the product/process does not control may cause performance
to vary. Thus, quality design cannot be complete when the designer succeeds in
reaching only the functional design (Section 1.7). Quality design, Taguchi suggested,
should include performance variability reduction and hence aim at robustness.
Taguchi's methods may not be statistically pure [6, 17]. However, the
engineering insight Taguchi has shown is perhaps rare. His methods enable designs
to achieve (a) minimum dispersion in performance about target, (b) minimum
sensitivity to variations transmitted from components, and (c) minimum sensitivity
to environmental noise [16]. Taguchi aimed at making the design robust first,
followed by an adjustment to put performance at the desired target. The task begins
by recognizing that the different factors influencing performance belong to two
distinct categories: Design parameters and Noise factors.
Design parameters (DPs) are the distinct and intrinsic features of the process
or the product that influence and determine its performance. The designer selects
the nominal settings of these parameters such that the resulting performance is on
target. These settings also define the design specification for the product or process
in question.
Noise factors are those factors that are either too hard or uneconomical to
control, even though these may cause unwanted variation in performance.
Table 1.2 summarizes some typical noise factors commonly encountered.
In order to achieve on-target performance with minimum variability, the designer
should find ways to minimize the disturbing influence of the hard-to-control factors
among these. Taguchi proposed that whenever one does not completely know the
effect due to the different factors, one should empirically identify the optimum
settings of the DPs by doing certain special experiments. The best settings, Taguchi
showed, may be discovered by systematically varying the DPs in experiments.
One conducts these experiments directly on the prototype product or process to seek
out values for the parameters that minimize its sensitivity to the uncontrolled factors
by judiciously exploiting any DP-noise interactions present [32]. Taguchi suggested
that this should be done after the functional design is complete (Section 1.7).
The plan that guides these experiments is statistical in nature, the experimental
design being fractional factorial rather than full factorial (Section 4.2). The first
step in these experiments involves developing the appropriate design parameter
matrix (the control OA, Fig. 5.1). This array shows the test settings for each DP.

Fig. 5.1 The parametric experiment plan.
one computes the performance statistic values {μ_i, σ_i², S/N_i}. The analysis of the
statistics obtained from the different experiments then follows. This analysis
predicts the optimal settings of the design parameters and the improved level of the
design's performance that would result from these settings. The final step in
optimization involves an empirical verification that the optimum design parameter
settings thus identified would actually deliver a performance close to the projected
improved performance.
The control and noise arrays play a key role in ensuring that one runs only
the necessary experimental trials and nothing more. As Fig. 5.1 shows, the
columns of the control array represent the different parameters changed in
experimentation. The rows represent the different combinations of the settings of
these parameters used in the particular experimental trials. As shown, for each
combination of design parameter (control factor) setting i (e.g., 3 2 3 ... 2 ) and
noise factor setting j (e.g., 1 1 2 . . . ) , one obtains an observed performance Ztj.
Later, one summarizes the different {Z;)} into performance statistics (//,, a 2), and
a special metric known as the S/N ratio.
One may need several iterations of such experiments to exploit any DP-noise
interactions to identify precisely the DP setting at which the effect of noise factors
will be sufficiently small. This final setting identifies the robust design (Section 1.9).
variations to the above loss function form are available. If the performance
characteristic y happens to be such that the smaller it is the better (as with pollution
generated per MW of electricity produced), then one best expresses the loss by
the expression

    L(y) = k y²

On the other hand, if the performance characteristic is such that the larger it is
the better, as with the bonding strength (y) of adhesives formulated, then

    L(y) = k (1/y²)

In general, loss functions can be asymmetric. (An illustration of asymmetric loss
functions appears in Example 11.2.)
Market research with customer experiences may help quantify the true loss
function, which can then become the basis for design optimization [5, 16]. Instead
of directly using the loss function, however, Taguchi recommended several
special forms to which experimental data on product/process performance should
be transformed before optimization. Taguchi called these special forms S/N ratios.
The rationale for this switch-over (to S/N ratios), instead of working directly with
the quality characteristic measurements, is given in the following:
The S/N ratio is a concurrent statistic, a special kind of data summary. A
concurrent statistic is able to look at two characteristics of a distribution and roll
these characteristics into a single number, or figure of merit. An example can
illustrate this well.
The objective of robust design is specific: robust design seeks optimum
settings of DPs to achieve a particular target performance value under most noise
conditions. Suppose that in a set of statistical experiments one finds the average
quality characteristic to be μ and the standard deviation (caused by the noise
factors) to be σ. Let the desired performance be μ₀. Then one must make an
adjustment in the design to get performance on target by adjusting the value of
a control factor, multiplying it by the factor (μ₀/μ). However, this also
affects the standard deviation, which becomes (μ₀/μ)σ (using Eq. (2.3.7)). Since
delivering on target performance is the goal, the loss after one has adjusted the
process is now due to the variability remaining from the new standard deviation
(of performance) only (see Section 1.4). This equals
Loss after adjustment = k(μ₀/μ)²σ² = kμ₀²σ²/μ²
= constant/(μ²/σ²)
The factor (μ²/σ²) reflects the ratio of the average performance μ² (which is the
signal) to σ² (the variance in performance), the noise.
Maximizing μ²/σ², or the S/N ratio, therefore becomes equivalent to minimizing
the loss after adjustment. Additivity of design parameter effects is a primary
requirement that permits use of the economical orthogonal statistical experiments
in design optimization. For improving additivity (see [14], p. 297) one often
takes the logarithm of (μ²/σ²) and expresses the S/N ratio in decibels, as
S/N = 10 log10 (μ²/σ²)
It takes about half a decibel gain to obtain a 10% improvement in (μ/σ). The
range of values of μ²/σ² is (0, ∞), while the range of values of S/N is (-∞, +∞).
The maximization of the S/N ratio by a suitable selection of the DPs makes the
design robust, a major goal of quality engineering.
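The decibel form above is a one-line computation. A minimal sketch, with μ and σ as defined in the derivation and the numbers assumed for illustration:

```python
import math

def sn_ratio_db(mu, sigma):
    """S/N ratio in decibels: 10 * log10(mu**2 / sigma**2)."""
    return 10 * math.log10(mu ** 2 / sigma ** 2)

# For mu = 10 and sigma = 1, mu**2/sigma**2 = 100, so the S/N ratio is 20 dB.
```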
Let y₁, y₂, . . . , yₙ represent multiple values of a performance characteristic
Y observed in the parameter experiments. Then the following respective S/N
ratios (denoted by S/N(θ)) become the most appropriate choices in guiding the
optimization of design parameter settings for the cases stated [5].
If the nominal value for a characteristic Y is the best for the customer, then
the designer should maximize the S/N ratio
S/N(θ) = 10 log10 (ȳ²/s²) (5.2.2)
where
s² = Σᵢ₌₁ⁿ (yᵢ - ȳ)²/(n - 1)
In the above procedure, one repeats observations (under the diverse settings of the
noise factors in the noise OA) n times at each selected combination of DP settings
in the control OA. The idea that the nominal response is the best implies that if
all observations {yᵢ} were exactly at the average (i.e., at ȳ) and thus the variability
in {yᵢ} was nil (i.e., s² was zero), the design would be the best. If being on target
T is the best, then one should maximize
S/N(θ) = 10 log10 (T²/s²) (5.2.3)
where
s² = Σᵢ₌₁ⁿ (yᵢ - T)²/(n - 1)
setting. (As pointed out in Section 5.1, frequently a designer is unable to adjust
performance to target without also affecting variability.)
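Equations (5.2.2) and (5.2.3) can be computed from the n replicated observations as follows. This is an illustrative sketch; the observation values in the test are made up:

```python
import math

def sn_nominal_best(ys):
    """Nominal-the-best S/N (Eq. 5.2.2): 10*log10(ybar**2 / s**2)."""
    n = len(ys)
    ybar = sum(ys) / n
    s2 = sum((y - ybar) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(ybar ** 2 / s2)

def sn_target_best(ys, target):
    """Target-the-best S/N (Eq. 5.2.3): 10*log10(T**2 / s**2), with s**2
    measured about the target T rather than the sample average."""
    n = len(ys)
    s2 = sum((y - target) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(target ** 2 / s2)
```

The only difference between the two forms is the reference point used when summing the squared deviations.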
The additivity of the DP effects also becomes maximum when one uses the
most appropriate S/N ratio. Without the presence of additivity one may have to
conduct more experiments in order to consider the effects of DP-DP interaction
and to achieve good predictability of performance at the optimized parameter
settings.
One is usually able to select the most appropriate S/N ratio from among
several candidate S/N ratios by experimenting with special OAs [5]. When one does
ANOVA for the S/N ratio, one reserves, by convention, a few dof for estimating
the error variance. A smaller error variance of the S/N ratio (compared to the mean
square for the control factor effects) signifies that the additivity of the chosen S/N
ratio is better (or interaction effects are minimal); hence the chosen S/N ratio is
more suitable [5].
Parameter optimization using the S/N ratio with DP-DP interaction minimized
often becomes a simple two-step procedure [8]. First, one maximizes the S/N ratio
without being concerned about the mean performance. This results in robustness in
respect to the uncontrolled variables. Next, one adjusts the mean performance by
using an adjustment (control) factor to bring the mean on target. (In Section 6.8 we
show that control factors having little or no effect on S/N ratios but a high influence
on mean performance serve best as adjustment parameters.) Equations (5.2.1)-(5.2.6)
show the different useful S/N ratios.
TABLE 5.1
WELDING SETTING OPTIMIZATION TESTS
Minimization of voids is obviously the smaller the better case (Eq. 5.2.4). The
investigator here obtained four replicates at each welding setting to help find the
required S/N statistic. (Note that replication lets the investigator simulate the
effect of the uncontrolled noise factors on performance. All S/N ratios shown in
this example have a negative sign. One may think of this as the case when there
OPTIMIZATION USING SIGNAL-TO-NOISE RATIOS 85
is more noise than signal, which is true as one may verify by computing the
row averages (Σⱼ yᵢⱼ/4) and the standard deviations.)
One may now find the best welding setting; at this setting the S/N ratio is
maximum. The S/N ratios tabulated suggest that Welding Setting 4 (S/N ratio =
-38.6) is the setting expected to produce the smallest volume of welding voids.
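The selection just described can be mimicked in a few lines. The void-volume replicates below are hypothetical, not the figures from the table in the text; the point is only the mechanics of computing the smaller-the-better S/N (Eq. 5.2.4) per setting and picking the setting with the maximum S/N ratio:

```python
import math

def sn_smaller_better(ys):
    """Smaller-the-better S/N (Eq. 5.2.4): -10*log10(mean of y**2)."""
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

# Hypothetical void-volume replicates for four welding settings.
settings = {
    1: [95.0, 110.0, 102.0, 98.0],
    2: [120.0, 118.0, 125.0, 122.0],
    3: [90.0, 105.0, 99.0, 96.0],
    4: [80.0, 86.0, 84.0, 82.0],
}

# The best setting is the one with the maximum (least negative) S/N ratio.
best = max(settings, key=lambda s: sn_smaller_better(settings[s]))
```

With these assumed data, setting 4 (the smallest void volumes) wins, mirroring the conclusion drawn in the text.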
concentration of the reactants and the products resulting from the experiment is
far more beneficial than the yield of only the desired product.
2. As far as possible, the measured quality characteristic should be a
continuous variable.
3. The quality characteristic should be monotonic, rising (or falling)
monotonically with respect to the control factors. This may be difficult to judge
before the experiments are actually run. However, the lack of monotonicity makes
the study of interactions, and hence the conduct of a larger number of experiments,
critically important, a difficult task when seven or eight control factors are
involved.
4. The quality characteristics selected should be easy to measure and
complete: they should cover all (performance) dimensions of interest.
If the effort to eliminate interactions still fails, optimization can be achieved
by expanding the experimental design to explicitly include interactions. In
Section 8.4 and Chapter 10 we discuss the methods that apply in such cases.
The S/N ratio that best suits a particular problem may be determined by
performing ANOVA of the S/N ratios. The most appropriate S/N ratio will result
in the smallest relative error mean square or the smallest ratio of error mean
square and the mean square for the factor effects. (This results from the lower
level of interactions or improved additivity of main factor effects. See [5], p. 208.)
Besides measuring the quality or performance characteristic while the
experiments are being run, one should also measure productivity and/or cost
factors in order to eventually achieve economic trade-offs in design decisions.
an OA, the investigator changes values (settings) of the variables under study only
as specified in that OA. For instance, the row entries in the array in Table 4.1
(e.g., P2 Q2 R3 S1 in Experiment 5) rigidly indicate how these settings should be
changed on an experiment-to-experiment basis. Subsequently, the orthogonal
structure of the array makes it possible for the main effect of each variable to
separate mathematically from the main effect of the other variables.
Once the investigator has run a complete set of orthogonally designed
experiments and analyzed the data obtained, he runs the confirmation or verification
experiment. The confirmation runs aim at seeking verification that
1. the assumptions made in setting up the original product/process
performance model, especially additivity and the absence of DP-DP interaction
effects, are valid and reasonable; and
2. when one sets the parameters at their optimum values as suggested by
the analysis of experimental results, one actually achieves the predicted target
performance.
Suh shows that if one minimizes information during design, one minimizes also the
variability due to noise or the uncontrolled factors, as attempted in Taguchi's
robust design. The undesirability of the effect of noise (on performance) emphasized
in both the Taguchi methods and the axiomatic approach is noteworthy.
5.7 SUMMARY
The objective of robust design is specific: by judiciously exploiting DP-noise
interactions it seeks optimum settings of DPs to achieve a pre-specified target
performance under most noise conditions. Experiments aimed at reaching the robust
design obtain measurements of S/N ratios to discover the effect control and noise
factors have on performance, in order to predict and optimize the product's
performance and robustness. The S/N ratio is a concurrent statistic. It is able to
look at two characteristics (here the deviation of performance from the target and
its variability) of a distribution and roll these characteristics into a single number,
or figure of merit.
EXERCISES
An investigator used the L8 orthogonal design (Appendix B) to conduct replicated
experiments involving four parameters (adhesive type, conductor material, curing
time, and integrated circuit post coating) to maximize bonding of mounted ICs on
a metallized glass substrate [15].
The parameter settings and observations were as in Table E5.1.
TABLE E5.1
IC BONDING TEST RESULTS
Average
strength 1.96 0.73 5.44 0.52 - 0.25 0.82 8.71
Then, in mathematical terminology, one calls Y₁ a contrast. One calls the two
contrasts Y₁ and Y₂ orthogonal if the inner product of the vectors corresponding to
the weights {w₁ⱼ} and {w₂ⱼ} is zero. Thus, Y₁ and Y₂ are orthogonal if
w₁₁w₂₁ + w₁₂w₂₂ + w₁₃w₂₃ + . . . + w₁₉w₂₉ = 0
We have no direct use of contrasts anywhere in this book. One should only remember
that one does not set up the columns in an OA arbitrarily. Any two OA columns
are mutually orthogonal.
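The orthogonality condition is just a zero inner product of the weight vectors, as a quick check illustrates. The L4 columns below are standard ±1 columns chosen for illustration, not taken from a table in the text:

```python
def are_orthogonal(w1, w2):
    """Two contrasts are orthogonal when the inner product of their
    weight vectors is zero."""
    return sum(a * b for a, b in zip(w1, w2)) == 0

# Columns 1 and 2 of an L4 array in +1/-1 coding.
col1 = [1, 1, -1, -1]
col2 = [1, -1, 1, -1]
```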
Sometimes OAs can be particularly useful. Suppose that one has used
OAs to design and guide the running of experiments and the additive model
(Section 4.1) is a valid representation of the cause-effect relationship of the
process under study. Then, the simple averaging of certain observations
obtained can estimate the main effect of the individual factors under study
(Section 4.2). Use of OAs to plan matrix experiments also ensures that if the errors
in each experiment are independent and have zero mean and equal variance,
then the estimated factor effects are mutually uncorrelated. This improves the
predictive value of the cause-effect model to predict the response for treatment
combinations not directly observed experimentally. However, one may gain these
benefits of using the OAs only if one does all the experiments specified by the
orthogonal matrix.
Number of two-level factors     Standard OA
2-3                             L4
4-7                             L8
8-11                            L12
12-15                           L16
Number of three-level factors
2-4                             L9
5-7                             L27
Rather than constructing an OA anew for every design optimization problem one
faces, in practice one uses one of the many standard OAs provided in statistical
texts. Appendix B contains the commonly used standard OAs. As we shall see later,
each such standard array applies to and is most appropriate for investigating
certain specific factor effects.
To conserve resources, generally the investigator attempts to employ the
smallest size OA meeting the purpose at hand. However, to test the validity of
the additivity assumption (Section 4.1), sometimes one uses a larger OA,
which allows the evaluation of between-factor interactions, in addition to the
main effects.
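The practice of picking the smallest standard OA that accommodates the factor count can be sketched as a lookup. The column capacities used below (3 for L4, 7 for L8, 11 for L12, 15 for L16, 4 for L9, 13 for L27) are the standard ones; the function itself is a hypothetical helper, not something from the text:

```python
def smallest_standard_oa(n_two_level=0, n_three_level=0):
    """Return the smallest common standard OA whose column count covers the
    factors; handles two-level-only or three-level-only cases (a rough guide)."""
    if n_three_level == 0:
        for name, columns in (("L4", 3), ("L8", 7), ("L12", 11), ("L16", 15)):
            if n_two_level <= columns:
                return name
    elif n_two_level == 0:
        for name, columns in (("L9", 4), ("L27", 13)):
            if n_three_level <= columns:
                return name
    return "mixed-level or larger array needed (e.g., L18)"
```

Mixed two- and three-level problems, or problems needing interaction columns, fall outside this simple rule and call for the larger arrays or the assignment rules discussed later.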
[Table garbled in extraction: it recommends a standard OA (L8, L9, L16, or L18)
for each combination of the numbers of two-level and three-level factors.]
can find OAs useful for advanced level experiments. Figure 6.2 shows a small
subset of the rules to be used in advanced robust design studies.
Fig. 6.3 A full factorial (2⁴) design with four factors (A, B, C,
and D), each with two treatments.
linear and quadratic terms. A non-linear effect may sometimes be useful in
fine-tuning and improving the initial design [5].
6. In the initial stages of optimization, one may limit the investigation to
the study of main effects. Later on, it is possible to run larger orthogonally
designed experiments to study interaction effects also, if necessary.
An engineer may be motivated to seek the improved settings of DPs for two
reasons: He may seek settings that will improve some performance characteristic
(the response) to some optimum value. Alternatively, he may seek to find a less
expensive and alternative design, material, or method, which will provide equivalent
performance. Orthogonal arrays often provide an efficient, empirical approach to
achieve both these goals.
Practising Japanese engineers have integrated the use of statistical
experiments into their studies effectively and extensively [34]. Even as far back
as 1976, for example, the Nippon Denso Company, a world class manufacturer of
electrical parts, conducted over 2700 experiments using OAs [5].
In optimization studies, one puts noise into two broad categories: Factors
external to a product such as ambient temperature, humidity, dust, vibration, and
human variations in using the product, or in operating the process, etc., comprise
the external sources of noise. Internal sources of noise, on the other hand, are
factors that cause manufacturing imperfections and product deterioration.
Noise factors that one can observe to be at distinct levels (e.g. humid vs. dry
weather when one is testing a vehicle for fuel economy) are included in the noise
OA. If possible, each noise factor should be studied at several rather than only two
distinct levels, to improve the detection and exploitation of DP-noise interactions.
The noise OA is called the outer array of orthogonal experiments (see Fig. 5.1).
To the maximum extent possible, the outer array should include the distinctly
observable yet not-to-be-controlled noise factors that might influence the design's
performance in the field. However, not all noise factors can be thus included in the
outer array of a parameter design experiment. One may have physical limitations
or lack the exact knowledge of these factors.
factors that might influence process results. A flowchart (Fig. 6.4) adds structure to
the thought process, thus avoiding possible omission of potentially significant factors.
Most engineers probably would appreciate how quickly the factors listed in
Fig. 6.4 could be identified by flowcharting the casting problem. Note that the
objective in flowcharting is not to solve the problem yet. That is the job
of the experimental investigation that will follow.
The Cause-Effect (also known as the Ishikawa) Diagram is perhaps the
most comprehensive tool that enables one to systematically speculate about, record,
and clarify the potential causes (factors) that might lead to performance deviation
or poor quality.
The development of the cause-effect diagram begins with the statement of the
basic effect of interest. It then progresses to a systematic listing of causes that may
produce this effect (Fig. 6.5). Ishikawa [20] has personally given an excellent
account of how one should develop cause-effect diagrams.
Also called the fish-bone diagram because of its appearance, the cause-effect
diagram has a cause side written on the left-hand side (of the diagram) made up
of a spine or tree trunk and its branches, and an effect side on the right. One writes
the effect (cracked castings in Fig. 6.5) directly on the diagram. The tree trunk
shows the causes leading to this effect with primary, secondary, and possibly tertiary
causes of the effect branching off the main trunk of the cause tree. One adds to this
diagram any factor or cause that might possibly affect the response. This is how
one gradually develops the diagram.
Sometimes one begins cause-effect diagrams by thinking about the broad
categories of causes: materials, machinery and equipment, operating methods,
USE OF ORTHOGONAL ARRAYS 97
Some design factors are continuous variables such as curing time or % ratio,
while others are discrete or attributive, as with Alloy X vs. steel, or Shift A vs.
Shift B in a plant. For continuous factors, one should set the levels sufficiently
far apart, at suitably high and low values, so that the effect, if present, has a good
chance to show up in the observations. In the later rounds of optimization experiments,
one sets the continuous variables typically at three different levels so as to reveal
any non-linear effects on response. (A special statistical procedure known as
the Response Surface Method [RSM] allows one to explore the shape of the
response surface when several factors may influence the response simultaneously.
Taguchi did not use RSM in his presentation of the robust design methodology,
to keep the approach simple. We review the utility of RSM in robust design in
Chapter 10.)
At what specific treatment levels should one test the individual design factors?
Experts say that many engineers tend to select levels not too far from some
conventional level for fear of producing off-specification material. However, the
production of inferior quality products at the investigative stage often reveals much
about the sensitivity of the process and provides important informative data that
might be of critical value in optimization. Experts say that good experiments do not
always make good products, but good experiments will provide important
information.
Setting the levels apart can also help identify non-linear effects when present.
Further, well-separated settings may reduce the need for repeating experiments
when a high degree of background (uncontrolled) noise is present.
The goal of Taguchi experiments is to evaluate and then optimize robustness
to assure that performance stays close to target. This requires our ability to predict
the effect on performance at different factor settings to be consistent. In order to
rapidly and reliably reach this predictability, Taguchi proposed additivity as a key
requirement. Recall from Section 4.2 that additivity of effects, if present, requires
that one study only the main effects, thus reducing the total cost and number of
experiments needed.
There is no way to foresee additivity in every case, though the proper choice
of the S/N ratio (Section 5.3) may help. Matrix experiments using OAs followed
by a verification experiment are a must if one wishes to assure predictability of
performance at different combinations of parameter settings.
Decidedly, interaction among design factors makes the challenge of
optimization by experimentation tougher. If an interaction is present, then one must
include it in the input-response or cause-effect model (e.g., one should expand
model of Eq. (4.1.1) by adding the appropriate interaction term) to improve the
predictability of the model. Also, when an interaction is present, the verification
experiment (Section 4.3) would show a poor fit for the main-factors-only additive
model, requiring further study. Then, one should include DP-DP interactions in
the optimization studies. Taguchi suggests, however, that the investigator should
strive to keep the cause-effect model main-factors-only, to minimize experimentation,
and use an S/N ratio that displays additivity of main factor effects.
In spite of Taguchi's urging us to minimize/avoid DP-DP interactions, it
should be noted that interactions play a central role in seeking out the robust
design. The novel idea behind parameter design, as Nair [32] points out, is to
minimize the effect of the variation in the noise factors by choosing the settings
of the DPs judiciously to exploit the interactions between design and noise factors.
If such interaction is minimal or absent, one will have to reach for high quality
and expensive parts and components to achieve robustness.
here is the residual effect caused by all other factors (interaction, environmental,
etc.) ignored in the study. One assumes that e has a mean of zero and a variance
that includes DP-noise interactions.
If the assumption of additivity is valid, the predicted performance at the
optimized combination of parameter settings will be close to what one will observe
by actually running an experiment with the factors set at their respective
(empirically found) optimum levels.
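The additive prediction at the optimum combination can be spelled out numerically. The grand average and factor-level means below are assumed values, inserted only to show the arithmetic:

```python
# Additive-model prediction: grand average plus the deviation contributed by
# each chosen factor level, assuming no DP-DP interactions.
overall_mean = 50.0                                      # assumed grand average
level_averages = {"A2": 54.0, "B1": 47.0, "C3": 53.0}    # assumed level means

predicted = overall_mean + sum(m - overall_mean for m in level_averages.values())
```

The verification experiment then checks this predicted value against an actual run at the same settings; a large discrepancy signals that the additivity assumption has failed.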
In rare, complex situations, one may require special transformations of
observed data as shown in Section 3.5 [5, 17, 21]. These transformations may help
one to achieve additivity of DP effects so that one may still use an orthogonal
experimental plan to conduct and complete the investigation.
Step 5. Confirm that the new (optimum) settings truly improve the
performance statistic.
It should be noted that even though the outer array provides efficiency [14,
p. 92], one is advised to replace the outer array application in Step 3 above by
Monte Carlo simulation [5, 16] when noise does not have a symmetric distribution
and the prototype functional model is a mathematical model (see Sections 1.8
and 10.5).
its design, and in the manner and the environment in which one uses the product.
How the product responds (functions) is its performance. For a process, the
influencing factors are the different process parameter settings, the environmental
factors, and the inputs to the process. The response here is the quality of the
product delivered by the process.
Taguchi categorizes systems as static or dynamic [5]. What
distinguishes these two categories is the nature of the target (performance) one
is seeking. For a static system, this target is fixed. The designer may wish to
maximize it, minimize it, or take it to some fixed value desired by the system's
final user.
For a dynamic system, the target is a function of the setting of a signal
factor, which the user adjusts dynamically during the use of the system to obtain
a desired performance from the system. An example of a dynamic system is the
steering mechanism of an automobile. The signal factor here is the rotation of the
steering wheel, the target at some instant of the vehicle's use being a desired
turning radius. A servo system is another example of a dynamic system.
The Taguchi strategy for optimizing a design may be given as follows [5]:
One begins by visualizing that all influencing factors belong to one of the
following four categories:
1. Signal factors (M). These are the factors selectively set by the product
user or the process operator to attain the target system performance. Signal factors
have a special property: a change in their setting influences the average of the
system's response, but not its variability. As already mentioned, the angular position
of an automobile steering wheel is a signal factor, the turning radius of the vehicle
being the vehicle's performance. One selects signal factors based on the engineering
knowledge of the system under design. For easy operation, it is desirable that the
product performance be very sensitive to the signal factor(s).
2. Control factors (z). These are the design features or parameters that the
designer sets at set points. In general, control factors may influence both the
average and the variability of response. The objective of product design is to
determine judiciously the levels for these parameters so that one achieves the
best possible performance. A multitude of objectives may determine this best
performance, such as maximum stability and robustness of the product while
keeping the cost minimum. (Robustness here implies insensitivity of performance
to noise factors.)
3. Scaling or Levelling factors (R). Scaling factors are a subclass of control
factors that the designer can adjust easily during product/process design to achieve
a desired functional relationship between the signal factor (M) and the response
variable for a dynamic system. For a static system, scaling factors can help adjust
the system's average performance to some desired, fixed target value. The gearing
ratio in the steering mechanism of a vehicle is an example of R, for one can adjust
it during design to achieve the desired sensitivity of turning radius (the response
variable) to a change in steering angle (a signal factor).
4. Noise factors (x). These include all uncontrollable factors. Generally only
the statistical influence (average value, variance, distribution, etc.) of noise factors
can be known rather than their specific values. Variations in materials or instability
in the manufacturing process are examples of noise. Sometimes it is possible
to include some of the noise factors in the noise orthogonal or outer array (see
Fig. 1.5). The inclusion of noise factors in the optimization experiments allows one
to investigate z-x interactions in order to reach a final design that will be robust as
far as these noise factors are concerned.
Given the above four categories of influencing factors, one may represent the
system response, y, by
y = f(x, M, z, R)
Fig. 6.6 Effect of controllable factors on average pull-off force and S/N ratio.
the mean (performance), we select the factor that has the smallest effect on the
S/N ratio. Such a factor can act as the adjustment (or scaling) parameter (R).
To arrive at the optimized design, set the levels of the remaining factors at the
nominal levels they had before the optimization experiments were conducted.
Finally, set the level of the signal (or adjustment or scaling) factor R such that the
mean response y is on target. Chapter 7 presents the actual application of these
steps to a real design problem.
Taguchi empirically showed that in many situations this two-step process
leads effectively and efficiently to optimum DP levels (treatments). At these
parameter levels the standard deviation of response (performance variability) is
minimum while mean response is on target.
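The two-step procedure can be sketched with assumed S/N and mean summaries (factor and level names here are hypothetical):

```python
# Assumed per-level summaries from the matrix experiments.
sn_by_level = {
    "A": {"A1": 18.0, "A2": 22.0},
    "B": {"B1": 20.5, "B2": 20.4},  # nearly flat in S/N: adjustment candidate
}
mean_by_level = {
    "A": {"A1": 9.8, "A2": 10.1},
    "B": {"B1": 8.0, "B2": 12.0},   # strong effect on the mean
}

# Step 1: for each factor, pick the level that maximizes the S/N ratio.
settings = {f: max(levels, key=levels.get) for f, levels in sn_by_level.items()}

# Step 2: re-set the adjustment factor (B, flat in S/N) to the level whose
# mean response is nearest the target.
target = 11.0
settings["B"] = min(mean_by_level["B"],
                    key=lambda lv: abs(mean_by_level["B"][lv] - target))
```

Because B barely moves the S/N ratio, re-setting it in Step 2 brings the mean toward the target without sacrificing the robustness won in Step 1.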
6.10 SUMMARY
Orthogonal arrays are special and efficient arrangements of factor settings that
guide the planning of design optimization experiments to achieve robust process/
product performance.
The initial round of orthogonal experiments aims at arriving at an additive
cause-effect model, which shows how the system's performance depends on
the different DPs. Next, one optimizes the design by closely examining the effect
of each of the design factors on (a) the mean performance, and (b) the S/N ratio,
a metric showing the robustness of performance. One calculates the
S/N ratio from the actual observed performance measurements. One obtains the
optimum design by adjusting the settings of the design factors by a two-step process.
To achieve optimization, one first sets the control factors that show a high
degree of influence on the S/N ratio at values that correspond to the highest S/N
values. This makes the design robust with respect to the noise factors. The objective of this
step is to judiciously exploit any DP-noise interactions to minimize the adverse
effect of noise on performance.
Next, one further examines those control factors that have little influence on the
S/N ratio to see which one among them has the maximum influence on
the mean performance. One then adjusts this factor, called the scaling factor, such
that the system's response (performance) after this adjustment is exactly on target.
The remaining factors may be left at their nominal levels. These influence neither
the S/N ratio nor the mean performance.
Agriculturists have used statistically designed experiments to improve farm
yield, grade fertilizers, check seed performance, etc. for over 50 years. Industry
also uses factorial experiments regularly [9,11,12,18,22]. However, most of these
classical applications of statistical experimentation aim at optimizing only the mean
value of the response variable. Parameter optimization experiments using OAs as
professed by Taguchi aim additionally at reducing the variability of response caused
by noise, the factors not in the designer's or the product user's control.
EXERCISES
1. In an automotive parts manufacturing operation, a certain part was to be assembled
by gluing an elastomeric sleeve onto a nylon tube. The objective was to maximize
the pull-off force of the assembly by appropriately manipulating the four key
mechanical assembly factors: the interference between the sleeve and the tube,
sleeve wall thickness, insertion depth, and per cent adhesive dip of the sleeve. The
factors not to be controlled in the routine assembly operation were drying time,
temperature, and humidity.
The different factors and their respective levels available for experimentation
are shown in Table E6.1. Suggest appropriate inner and outer arrays for conducting
optimization experiments.
TABLE E6.1
DESIGN AND NOISE FACTOR LEVELS
Factor Levels
Interference Low Medium High
Sleeve wall thickness Thin Medium Thick
Insertion depth Shallow Medium Deep
Per cent adhesive dip Low Medium High
Drying time 24 hr. 120 hr.
Temperature 72°F 150°F
Relative humidity 25% 75%
4. Check your findings with Fig. 6.6 and confirm that the following settings
maximize the assembly's pull-off force performance while minimizing the variability
tracked by Eq. (5.2.5):
Mechanical Optimum
Assembly Factor Setting
Interference Medium
Sleeve wall thickness Medium
Insertion depth Deep
Per cent adhesive dip Low
Case Study 1: Process Optimization
Optical Filter Manufacture
7.1 THE PROCESS FOR MANUFACTURING OPTICAL FILTERS
Published literature reports over 400 successful industrial applications of Taguchi
methods in the western industries since 1985 [5, 6, 7, 14, 19, 23], excluding those
from Japan [1, 4, 8]. One typical application of the Taguchi method to optimize a
manufacturing process is given in [14]. This chapter presents the background of
this application, the procedure used, and the reported improvement in robustness
thereby achieved.
Optical filters are devices that transmit only a narrow band of visual wave
lengths, suppressing other wavelengths. The manufacture of optical filters consists
of coating a quartz substrate (the underlying base layer) with thin crystallized layers
of titanium dioxide and silicon dioxide. A filter's index of refraction and its index
of absorption are the two key characteristics that determine how well the filter
functions, i.e., how well it separates lights of certain wavelengths. A major problem
faced by the manufacturers of optical filters is the high variability of the filter's
refractive index, caused mainly by the variability in the thickness of the coating
layer. Even if this example is somewhat remote from processes the reader might be
interested in or is familiar with, the reasons for discussing it here are three-fold:
First, this is a typical real problem, affected by the nuances of many factors that
evade the manufacturer's control and whose effects are not readily apparent. Second,
this study uses almost all the steps of robust design described in Chapter 6; hence
it serves well to illustrate those steps. Third, the background of this problem is
well-documented in [14], and therefore, those seeking more details may refer to it.
The process design (or control) parameters in the manufacture of optical
filters include the method of cleaning the quartz substrates before coating, the
temperature at which one holds the substrates, coating vapour nozzle position, etc.
By brainstorming with the manufacturing technicians, the investigators were able
to identify eight such parameters (Table 7.1). Typical robust design experiments, the
reader should note, include five to ten control parameters that one studies together.
TABLE 7.1
EXISTING AND EXPERIMENTAL SETTINGS FOR PROCESS PARAMETERS

Control Parameter           Existing      Setting 1     Setting 2
A Rotation method           Oscillating   Continuous    Oscillating
B Wafer code                668G4         668G4         678D4
C Deposition temperature    1215°C        1210°C        1220°C
D Deposition time           Low           High          Low
E Arsenic flow rate         57%           55%           59%
F HCl etch temperature      1200°C        1180°C        1215°C
G HCl flow rate             12%           10%           14%
H Nozzle position           4             2             6
In this example, the investigators observed that the noise factors
that were either expensive or impossible to control included ambient temperature
and humidity, drifts occurring in the settings of control parameters, and variation
in raw materials. The principal sources of noise the investigators could identify
included uneven temperature, uneven Ti/Si dioxide vapour concentration, and uneven
vapour composition profiles in the chamber in which the substrates were kept for
coating. Other factors could be substrate location effects, variation in deposit
thickness across the face of the substrate, etc. The investigators chose to obtain
multiple measurements of the performance characteristic at several points on
each experimentally produced filter to assess the effect of these principal
uncontrolled (noise) factors.
CASE STUDY 1: PROCESS OPTIMIZATION  OPTICAL FILTER MANUFACTURE
The investigators assigned the factors (A, B, C, etc.) to the OA's columns. Table 7.2 shows
the final assignment. The entries (1 and -1) in the array designate the two coded
settings for each factor that would constitute the different individual experiments.
Notice that the investigators left some array columns unassigned.
TABLE 7.2
THE L16 ORTHOGONAL ARRAY

Factors assigned: A B C D E F G H
Column:        1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
Experiment
 1             1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
 2             1  1  1  1  1  1  1 -1 -1 -1 -1 -1 -1 -1 -1
 3             1  1  1 -1 -1 -1 -1  1  1  1  1 -1 -1 -1 -1
 4             1  1  1 -1 -1 -1 -1 -1 -1 -1 -1  1  1  1  1
 5             1 -1 -1  1  1 -1 -1  1  1 -1 -1  1  1 -1 -1
 6             1 -1 -1  1  1 -1 -1 -1 -1  1  1 -1 -1  1  1
 7             1 -1 -1 -1 -1  1  1  1  1 -1 -1 -1 -1  1  1
 8             1 -1 -1 -1 -1  1  1 -1 -1  1  1  1  1 -1 -1
 9            -1  1 -1  1 -1  1 -1  1 -1  1 -1  1 -1  1 -1
10            -1  1 -1  1 -1  1 -1 -1  1 -1  1 -1  1 -1  1
11            -1  1 -1 -1  1 -1  1  1 -1  1 -1 -1  1 -1  1
12            -1  1 -1 -1  1 -1  1 -1  1 -1  1  1 -1  1 -1
13            -1 -1  1  1 -1 -1  1  1 -1 -1  1  1 -1 -1  1
14            -1 -1  1  1 -1 -1  1 -1  1  1 -1 -1  1  1 -1
15            -1 -1  1 -1  1  1 -1  1 -1 -1  1 -1  1  1 -1
16            -1 -1  1 -1  1  1 -1 -1  1  1 -1  1 -1 -1  1
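A two-level L16 array such as Table 7.2 can be generated from four independent basis columns, each of the 15 columns being the elementwise product of a non-empty subset of the basis. The sketch below uses the generic product construction (not a routine from the book) and verifies the defining orthogonality property:

```python
# Sketch: generate the 15 columns of a two-level L16 array (entries +1/-1)
# from four basis columns, then check pairwise orthogonality.
from itertools import product

def l16():
    rows = []
    for b in product([1, -1], repeat=4):        # the 16 runs
        row = []
        for mask in range(1, 16):               # 15 non-empty subsets of the basis
            v = 1
            for k in range(4):
                if mask & (1 << k):
                    v *= b[k]
            row.append(v)
        rows.append(row)
    return rows

A = l16()
# orthogonality: every pair of columns has zero dot product
for i in range(15):
    for j in range(i + 1, 15):
        assert sum(A[r][i] * A[r][j] for r in range(16)) == 0
print(len(A), len(A[0]))
```

The construction reproduces the run order of Table 7.2; for instance, the generated third run matches the table's Experiment 3.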
TABLE 7.3
MEAN COATING THICKNESS AND log10 OF VARIANCE BY RUN

Experiment   ybar      log10 (s²)
 1           14.821    -0.4425
 2           14.888    -1.1989
 3           14.037    -1.4307
 4           13.880    -0.6505
 5           14.165    -1.4230
 6           13.860    -0.4969
 7           14.757    -0.3267
 8           14.921    -0.6270
 9           13.972    -0.3467
10           14.032    -0.8563
11           14.843    -0.4369
12           14.415    -0.3131
13           14.878    -0.6154
14           14.932    -0.2292
15           13.907    -0.1190
16           13.914    -0.8625
showed the variability in the resulting thickness. One calculated ybar_i and (s²)_i
for each run i from its 70 thickness measurements using the usual formulas

    ybar_i = (1/70) Σ_{j=1}^{70} y_ij
    (s²)_i = (1/69) Σ_{j=1}^{70} (y_ij - ybar_i)²

The estimated effect of factor A on log10 (s²) is then the difference between the
average of log10 (s²)_i over the runs with A at Setting 2 (runs 9 to 16) and the
average over the runs with A at Setting 1 (runs 1 to 8):

    Effect of A = Σ_{i=9}^{16} log10 (s²)_i /8 - Σ_{i=1}^{8} log10 (s²)_i /8
                = -0.4724 - (-0.8245)   (from Table 7.3)
                = 0.3521
One similarly estimated the effects of the other control factors (B to H) on
log10 (s²). Table 7.4 shows these estimates.
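The arithmetic behind these factor-effect estimates can be reproduced directly from the Table 7.3 data. A minimal sketch, assuming (per Table 7.2) that runs 1 to 8 held factor A at Setting 1 and runs 9 to 16 at Setting 2:

```python
# log10(s^2) for the 16 runs of Table 7.3, in run order
log_s2 = [-0.4425, -1.1989, -1.4307, -0.6505, -1.4230, -0.4969, -0.3267,
          -0.6270, -0.3467, -0.8563, -0.4369, -0.3131, -0.6154, -0.2292,
          -0.1190, -0.8625]

# runs 1-8 had factor A (rotation) at Setting 1, runs 9-16 at Setting 2
avg_set1 = sum(log_s2[:8]) / 8
avg_set2 = sum(log_s2[8:]) / 8
effect_A = avg_set2 - avg_set1
print(round(avg_set1, 4), round(avg_set2, 4), round(effect_A, 4))
```

Running this reproduces the values quoted in the text: -0.8245, -0.4724, and 0.3521.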
TABLE 7.4
AVERAGE VALUE OF log10 (s2) OBSERVED AT EACH FACTOR SETTING
The entries in the Difference column in Table 7.4 show the effect of the
corresponding factors on log10 (s²), the variability in thickness. One notices that
control factors A (rotation method) and H (nozzle position) have greater absolute
effect than the six other factors. To minimize variability, therefore, one should
set the factors A and H as follows:

Rotation method at Setting 1, or continuous.
Nozzle position at Setting 2, or 6.

If additivity of effects existed, these settings would presumably lead to a reduction
in the process log10 (s²).
As mentioned in Section 6.8, some control factors affect only the mean
performance, and affect the variability of performance under noise minimally or
not at all. One may identify such factors from the experimental results by
focussing primarily on the mean response of each run rather than on its variability.
One may use these factors to fine-tune the manufacturing process to get its output
on target without increasing variability.
One could estimate the average values of ybar (epitaxial thickness) resulting
from a given setting of a control factor from the experimental results summarized
under ybar_i in Table 7.3. As for log10 (s²) in Section 7.4, these estimates indicate
the effect of each control factor on average thickness. The calculations are similar
to those used in finding factor effects on log10 (s²). Table 7.5 shows the results.
TABLE 7.5
AVERAGE EPITAXIAL THICKNESS AT THE TWO SETTINGS OF EACH FACTOR

Control Parameter           Setting 1   Setting 2   Difference
A Rotation method           14.4161     14.3616     -0.0545
B Wafer code                14.3610     14.4167      0.0556
C Deposition temperature    14.4435     14.3342     -0.1094
D Deposition time           14.8069     13.9709     -0.8359
E Arsenic flow rate         14.4225     14.3552     -0.0674
F HCl etch temperature      14.3589     14.4189      0.0600
G HCl flow rate             14.4376     14.3401     -0.0975
H Nozzle position           14.3180     14.4597      0.1417
One notices from Table 7.5 that factor D (deposition time) has the largest
effect on average thickness (ybar), the other factors having relatively little influence.
From the data in Table 7.4 one finds that deposition time has a relatively small
effect on variability, so changing deposition time to adjust the average thickness
produced by the manufacturing process would not add variability. In other words,
deposition time has little interaction with the noise factors; hence it would serve
effectively as the thickness adjustment or scaling parameter (Section 6.8). Adjusting
the deposition time to make ybar = 14.5 μm, assuming that its effect on ybar (the
average thickness) is linear, accomplishes Step 2 of Section 6.9.
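This scaling step can be sketched numerically. Assuming, as the text does, that the effect of deposition time on ybar is linear between its two settings (coded here as -1 and +1, an illustrative convention not taken from the book):

```python
# Sketch: use deposition time (factor D) as the adjustment factor, assuming
# a linear effect between its two settings (a simplifying assumption).
mean_set1 = 14.8069   # average thickness at D Setting 1 (Table 7.5)
mean_set2 = 13.9709   # average thickness at D Setting 2
target = 14.5         # desired mean epitaxial thickness, micrometres

# interpolate in coded units: x = -1 at Setting 1, x = +1 at Setting 2
mid = (mean_set1 + mean_set2) / 2          # thickness at the coded midpoint
half_effect = (mean_set2 - mean_set1) / 2  # change per coded unit
x = (target - mid) / half_effect
print(round(x, 3))   # coded deposition-time level that hits the target
```

The resulting coded level lies between Setting 1 and the midpoint, i.e., the deposition time is nudged toward its longer (Setting 1) value to raise the mean to 14.5 μm.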
Thus one has optimized the filter manufacturing process. Figures 7.1 and 7.2
illustrate the factor effects graphically.

Fig. 7.1 Effect of factors on epitaxial thickness of crystals (x-axis: factors and treatments).

Fig. 7.2 Effect of factors on log10 (s²) (x-axis: factors and treatments).
Selecting Orthogonal Arrays and
Linear Graphs
8.1 SIZING UP THE DESIGN OPTIMIZATION PROBLEM
Before one attempts to select an OA to guide the design optimization experiments,
one must answer the following critical questions:
1. How many factors are to be studied?
2. How many treatment levels are possible for each factor?
3. What specific 2-factor interactions are to be investigated?
4. Would one encounter any particular difficulty during the runs (e.g.
some factors may not permit frequent treatment changes)?
Except in unusual circumstances, the investigator will be able to locate a
standard OA fitting his needs. If a standard array does not satisfy the investigation's
objectives, the investigator should refer to an advanced text on Taguchi methods,
such as references [4, 5, or 8].
The first step in selecting the correct standard OA involves counting the
total degrees of freedom (dof) present in the study. This count fixes the minimum
number of experiments that must be run to study the factors involved.
In counting the total dof, the investigator commits 1 dof to the overall mean
of the response under study. This begins the dof count at 1.
The number of dof associated with each factor under study equals one less
than the number of treatment levels available for that factor. Following this, the
investigator considers the 2-factor interactions of interest.
One determines the total dof in the study as follows: If n_A and n_B represent
the number of treatments available for two factors A and B respectively, (n_A x n_B)
would equal the total combinations of treatments. Then

1 = dof used by the overall mean
n_A - 1 = dof for A
n_B - 1 = dof for B
(n_A - 1) x (n_B - 1) = dof required to study the A x B 2-factor interaction
An example will illustrate this procedure. If a design study involves one 2-level
factor (A), four 3-level factors (B, C, D, E), and one wishes also to investigate the
A x D interaction, then the dof would be as follows:

Source of dof           Required dof
Overall mean            1
A                       2 - 1 = 1
B, C, D, E              4(3 - 1) = 8
A x D interaction       (2 - 1) x (3 - 1) = 2
Hence,
total dof = 1 + 1 + 8 + 2 = 12
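The counting rule can be captured in a small helper (a hypothetical function, not one from the book), which reproduces the tally above:

```python
# Sketch: count the total degrees of freedom for a study, per Section 8.1.
def total_dof(factor_levels, interactions=()):
    """factor_levels: dict mapping factor name -> number of treatment levels.
    interactions: pairs of factor names whose 2-factor interaction is studied."""
    dof = 1  # overall mean
    dof += sum(n - 1 for n in factor_levels.values())
    dof += sum((factor_levels[a] - 1) * (factor_levels[b] - 1)
               for a, b in interactions)
    return dof

# the example above: one 2-level factor A, four 3-level factors B..E,
# plus the A x D interaction
levels = {'A': 2, 'B': 3, 'C': 3, 'D': 3, 'E': 3}
print(total_dof(levels, interactions=[('A', 'D')]))  # prints 12
```

The chosen OA must supply at least this many runs, which is how the dof count narrows the search among the standard arrays.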
array columns are also available. Taguchi suggested, for instance, that factors that
are rather difficult to change from experiment to experiment should be assigned
to the columns toward the left in the array.
It is desirable that one randomly orders (i.e., randomizes) the sequencing of
the individual experiments to be run, to minimize any biasing effect of the
uncontrolled factors. Such bias may develop, for instance, from cyclic variation of
ambient temperature over 24 hours, or a shift change in the plant, or a switch in
the batch of raw material used.
Again, we suggest that the reader refer to an advanced text on
Taguchi methods [4, 5, or 8] in order to find arrays appropriate for sophisticated
design optimization problems. The L18 array is the most commonly used array
because it can study up to seven 3-level factors and one 2-level factor.
Inclusion of 2-factor (A x B type) and higher order interactions in optimization
studies leads to a slightly different procedure to identify the appropriate
standard array. This procedure involves a technique called linear graphs.
This graph shows that columns 3, 5, and 6 of the array L8 correspond to the interactions
between columns (1, 2), (1, 4), and (2, 4) of L8. One can use Column 7 to evaluate
only a main effect, and not an interaction. Table 8.1 shows these column
assignments.
TABLE 8.1
L8 COLUMN ASSIGNMENTS TO STUDY THREE
2-FACTOR INTERACTIONS

Column    Assignment of Factor or Interaction
1         A
2         B
3         A x B
4         C
5         A x C
6         B x C
7         D
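The column relationships claimed by the linear graph can be checked numerically: in +1/-1 coding, an interaction column is the elementwise product of its two parent columns. A sketch (column numbering per the standard L8; not code from the book):

```python
# Sketch: build L8 from basis columns 1, 2, 4; columns 3, 5, 6 carry the
# (1,2), (1,4), (2,4) products, and column 7 the three-way product.
from itertools import product

runs = list(product([1, -1], repeat=3))        # 8 runs of the basis columns
col = {1: [r[0] for r in runs],
       2: [r[1] for r in runs],
       4: [r[2] for r in runs]}
col[3] = [a * b for a, b in zip(col[1], col[2])]
col[5] = [a * b for a, b in zip(col[1], col[4])]
col[6] = [a * b for a, b in zip(col[2], col[4])]
col[7] = [a * b for a, b in zip(col[3], col[4])]

# every pair of distinct columns is orthogonal (zero dot product)
for i in col:
    for j in col:
        if i < j:
            assert sum(x * y for x, y in zip(col[i], col[j])) == 0
print("orthogonal")
```

Because the interaction columns are themselves orthogonal to the main-effect columns, assigning a factor to an interaction column (as Table 8.1 deliberately avoids) would confound that factor with the interaction.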
columns, but the lines have a different topology. Column 3 represents the interaction
between Columns 1 and 2; Column 6, the interaction between Columns 1 and 7; and
Column 5, the interaction between Columns 1 and 4. No other interactions or main
effects can be studied with this arrangement without a modification of this linear
graph. However, suppose the objective is to study four 2-level factors A, B, C, and
D and estimate their main effects as well as the interactions A x B, B x C, and B x D.
Then the column assignments shown in Table 8.2 would accomplish that goal.
TABLE 8.2
AN ALTERNATIVE COLUMN ASSIGNMENT FOR L8
TO STUDY THREE 2-FACTOR INTERACTIONS

OA Column    Assignment of Factor or Interaction
1            B
2            A
3            A x B
4            C
5            B x C
6            B x D
7            D
With only modest effort it is usually possible to locate a standard OA and the
applicable linear graph in many design optimization studies. We emphasize again
that the selection of the OA and the linear graph must both be correct. This alone can
lead to the correct experiments to study the main and interaction effects of interest.
Since five 2-level factors are involved, a reference to Fig. 6.1 helps in the
preliminary selection of the L8 array as the experimental OA. A reference to
Appendix B, however, suggests that one available standard linear graph for the
L8 OA would lead to the evaluation of four main factors and three 2-factor
interactions (Fig. 8.3(a)).
Fig. 8.3 Modification of a standard linear graph: (a) original graph; and
(b) removing an interaction to create two main-effect columns (6 and 7).
One may use Modification Rule 1 above to remove one interaction from
Fig. 8.3(a) and create the modified linear graph shown in Fig. 8.3(b). This
modification frees up the interaction Column 6 between Columns 1 and 7 (the
interaction being of no interest to the designer) and produces two free columns, 6
and 7, which may be assigned to factors D and E respectively.
Some additional methods for modifying linear graphs to create special
OAs to suit special real needs are also available [6, 8].
Even though Taguchi himself and later several others [5, 14, 17] have
urged the use of suitable mathematical transformations or other means (Section 4.1)
to minimize the effect of interactions on design optimization experiments, in some
real-life experiments it is simply not possible to ignore DP-DP interactions.
Fortunately, most interactions, when present, tend to be between two factors. Three-factor
and higher order interactions are infrequent and usually small in comparison with
2-factor interactions [18, 24].
Interaction between two dichotomous (2-level) factors is not difficult to
estimate, provided the interaction column in the pertinent OA is unused (i.e.,
unassigned to another factor). We illustrate the procedure with two examples.
EXAMPLE 8.2: The columns of the standard L8 array have been assigned to
four 2-level factors A, B, C, D, and their interactions A x B, B x C, and B x D,
according to Table 8.2. The eight experiments run have resulted respectively in
the observations y1, y2, y3, y4, y5, y6, y7, and y8.
Recall from Table 8.2 that we assigned the A x B interaction to Column 3.
Keeping in view the structure of the L8 array (specifically the entries in Column 3,
L8 in Appendix B), one may construct the two-way table shown below. This
allows one to estimate the A x B interaction. The 2-level (Hi/Lo) settings for
factors A and B are arranged such that one obtains the following average
responses from the particular combinations of the factor levels present in the
eight experiments that produced {y_i, i = 1, 2, ..., 8}:

                 B Hi            B Lo
A Hi        (y1 + y2)/2     (y3 + y4)/2
A Lo        (y7 + y8)/2     (y5 + y6)/2
                  Temperature
                  T1      T2
Pressure   P1      9       5
           P2      8       7
The estimated main effect (on microvoids) caused by temperature going from T1
to T2 equals
(5 + 7)/2 - (9 + 8)/2 = 6.0 - 8.5 = -2.5
The main effect on voids caused by pressure going from P1 to P2 equals
(8 + 7)/2 - (9 + 5)/2 = 7.5 - 7.0 = 0.5
The (temperature x pressure) interaction is the difference between differences.
The within-row difference between differences equals
(9 - 5) - (8 - 7) = 3
This, one should note, is identical to the within-column difference between
differences, which is
(9 - 8) - (5 - 7) = 3
Therefore, the estimated interaction effect is 3, and not zero. This estimate
suggests that an interaction possibly exists between the two factors and it is
perhaps larger in magnitude than the main effects! An ANOVA (not shown
here) done with replicated observations (to yield a nonzero error dof) suggested
that all three effects (the main effects of temperature and pressure, and the
interaction effect between temperature and pressure) are significant.
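The difference-between-differences arithmetic above can be sketched directly from the 2 x 2 table:

```python
# Reproduce the main effects and interaction from the 2x2 table above.
# Voids observed at (P1,T1)=9, (P1,T2)=5, (P2,T1)=8, (P2,T2)=7.
y = {('P1', 'T1'): 9, ('P1', 'T2'): 5, ('P2', 'T1'): 8, ('P2', 'T2'): 7}

temp_effect = (y['P1', 'T2'] + y['P2', 'T2']) / 2 - (y['P1', 'T1'] + y['P2', 'T1']) / 2
pres_effect = (y['P2', 'T1'] + y['P2', 'T2']) / 2 - (y['P1', 'T1'] + y['P1', 'T2']) / 2
# interaction = difference between the within-row differences
interaction = (y['P1', 'T1'] - y['P1', 'T2']) - (y['P2', 'T1'] - y['P2', 'T2'])

print(temp_effect, pres_effect, interaction)  # prints -2.5 0.5 3
```

Note how the interaction estimate (3) exceeds both main effects in magnitude, which is what makes an additive model untenable here.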
The mere indication of the presence of interaction, however, is not enough.
One needs here to develop the underlying prediction model that will relate (within
the influence space) the levels of temperature and pressure to the level of resulting
voids. This may be done as follows:
Since one finds only the two main effects and the (temperature x pressure)
interaction to be present, one may assume that the total effect (the average volume
of microvoids) is given by the model

Y = C + Ct δt + Cp δp + Ctp (δt x δp)

where the subscripts t and p represent temperature and pressure. There being four
constants (C, Ct, Cp, and Ctp) in this model, it is not difficult to estimate them from
the four observations already obtained. In the model above, δt denotes a change
in temperature from T1, and δp a change in pressure from P1. Accordingly,
substituting the appropriate terms in the expression for Y above, we get

9 = C + Ct x 0 + Cp x 0 + Ctp x 0 x 0
5 = C + Ct (T2 - T1) + Cp x 0 + Ctp (T2 - T1) x 0
8 = C + Ct x 0 + Cp (P2 - P1) + Ctp x 0 x (P2 - P1)
7 = C + Ct (T2 - T1) + Cp (P2 - P1) + Ctp (T2 - T1)(P2 - P1)
This gives

C = 9,  Ct = -4/(T2 - T1),  Cp = -1/(P2 - P1),  Ctp = 3/[(T2 - T1)(P2 - P1)]

These coefficients lead us to a prediction model (albeit a crude one), valid within
the influence space, as

Y = 9 - 4 δt/(T2 - T1) - δp/(P2 - P1) + 3 δt δp/[(T2 - T1)(P2 - P1)]
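A sketch that recovers the four constants and checks the model against the four observations. Normalizing the intervals (T2 - T1) and (P2 - P1) to 1 is a hypothetical simplification for illustration; the coefficients then equal -4, -1, and 3 directly:

```python
# Sketch: solve for C, Ct, Cp, Ctp from the four observations, with the
# temperature and pressure intervals normalized to 1 (illustrative choice).
dt = dp = 1.0
y11, y21, y12, y22 = 9, 5, 8, 7   # voids at (T1,P1), (T2,P1), (T1,P2), (T2,P2)

C = y11
Ct = (y21 - y11) / dt                           # -4
Cp = (y12 - y11) / dp                           # -1
Ctp = (y22 - C - Ct * dt - Cp * dp) / (dt * dp)  # 3

def predict(delta_t, delta_p):
    return C + Ct * delta_t + Cp * delta_p + Ctp * delta_t * delta_p

# the model reproduces all four observations exactly
print([predict(a, b) for a, b in [(0, 0), (dt, 0), (0, dp), (dt, dp)]])
```

With four constants fitted to four observations the model interpolates the data exactly; it carries no error dof, which is why the text calls it a crude model valid only within the influence space.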
8.5 SUMMARY
A design optimization study begins with the identification of the factor effects
to be investigated, the performance response to be optimized, and the applicable
S/N ratio. Once the task has been sized up, one determines the degrees of
freedom in the study. The dof count leads to the identification of an OA suitable
for guiding the statistical experiments. To the maximum extent possible, the
investigator should attempt to use a standard OA in his investigations, based on the
guidelines provided in Figs. 6.1 and 6.2.
If certain 2-factor interactions have to be included in the study, the investigator
should examine the linear graphs (such as those shown in Appendix B)
associated with the standard OA selected. Generally speaking, it is possible to
decide on array column assignments to main and 2-factor interaction effects with
the help of the standard linear graphs. Occasionally it may be necessary to modify
an existing linear graph to complete the assignment of columns and then conduct
experiments to estimate effects that are unusual but are of particular interest.
EXERCISE
1. Refer to Tables 7.2 and 7.3, and the appropriate linear graph for the L16 OA
in Appendix B, to produce the estimated 2-factor interaction effects on crystal
thickness within each of the following pairs of factors:
(a) Rotation method (A) and wafer code (B).
(b) Rotation method (A) and arsenic flow rate (E).
(c) Deposition temperature (C) and HCl etch temperature (F).
9
Case Study 2: Product Optimization
Passive Network Filter Design
9.1 THE PASSIVE NETWORK FILTER
Chapter 7 presented an illustration of applying Taguchi's two-step procedure
(Section 6.9) to optimize the control parameters of a manufacturing process. This
chapter explores a more complex design problem, one requiring the simultaneous
optimization of two performance features. Filippone [23] applied Taguchi's
design optimization method to this problem. Suh [13] provides a discussion of
Filippone's results. The present problem highlights the following points:
1. The problem involves two performance characteristics (rather than a single
one), both of which the designer must simultaneously optimize.
2. This example uses a mathematical model rather than a physical prototype
of the product to conduct the experiments.
3. The source of noise here is the uncertainty in the quality of the components
to be used in fabricating the product. This uncertainty is a factor beyond the designer's
control. The designer would attempt to deliver a design that provides satisfactory
performance in spite of the presence of this noise. The designer would thus seek
a rather unique goal in robust design, one given particular emphasis by Taguchi
in his writings [4]: it makes it possible to use less expensive components in the
design in order to reduce manufacturing cost, without compromising the
performance of the product.
4. This example illustrates the complex character of some real design
problems. The choices available to the designer are neither very distinct nor clear.
In fact, as we shall see, the 2-step optimization procedure appears to fall somewhat
short of the final goal. However, Taguchi's overall philosophy does clarify the
nature of the decisions facing the designer in such problems.
[Figure: displacement transducer (source), passive filter, galvanometer]
Fig. 9.1 A passive filter interfacing strain gauge output (Vs) with recording
instrument output (V0).
The present discussion confines itself to the design of the passive filter, which
should perform two distinct functions: First, it must re-create the original mechanical
deflection pattern with minimum effect on the output. This implies that the filter must
effectively filter out the carrier frequency from the output. Second, the filter must
attenuate the (filtered) output signal to a proper scale.
Typical instrumentation system design projects would consider and evaluate
several alternative configurations for the components. For instance, two different
types of displacement transducers (or a strain gauge bridge) might be considered.
Also, one might employ alternative filter circuit designs consisting of resistors and
capacitors, and different types of galvanometers might be used. The designer
facing the present task focussed specifically on the optimum design of the filter's
network circuitry. The network consisted of a capacitor C and two resistors R2 and
R3, as shown in Fig. 9.1.
One may summarize the challenge facing the designer as follows: Since the
parts and components to fabricate the filter and the complete instrumentation system
are to be of industrial rather than precision quality, when one actually assembles
the system and puts it to work, these parts and components may not always have
the exact characteristics specified by the designer. Generally, the price of such
parts and components goes up with the precision of their manufacture. Therefore,
the designer must treat the uncertainty in the part and component characteristics
as a source of noise (aspects of a design or system not in the designer's control;
see Section 5.1), and still design a filter with satisfactory performance.
In the present case, the designer anticipated that each purchased component
(Rs, Vs, Rg, and Gsen [the galvanometer's sensitivity]) would vary 0.15% from its
specified nominal (catalogue) value (Table 9.1). Also, the store-purchased parts
used in fabricating the filter (resistors R2 and R3, and the capacitor C) could
similarly vary from their marked values (Table 9.4). The traditional design of
electrical devices uses sensitivity analysis to aid design decisions in such cases,
if the system is amenable to direct analysis [5, 19]. Filippone [23] demonstrated the
use of the Taguchi method in such situations. We recall again that the motivation
here was to permit the eventual use, wherever possible, of inexpensive components
in fabricating the complete instrumentation system.
TABLE 9.1
CHARACTERISTIC VALUES AND SUPPLIER TOLERANCES FOR MARKET-
PURCHASED COMPONENTS

V0/Vs = Rg R3 / {(R2 + Rg)(Rs + R3) + R3 Rs + (R2 + Rg) R3 Rs C s'}     (9.3.1)

where s' is the Laplace variable. From this transfer function one finds the filter
cutoff frequency ωc and the galvanometer full-scale deflection D respectively as

ωc = [(R2 + Rg)(Rs + R3) + R3 Rs] / [2π (R2 + Rg) R3 Rs C]     (9.3.2)
D = |Vs| Rg R3 / {Gsen [(R2 + Rg)(Rs + R3) + R3 Rs]}     (9.3.3)
The DPs that the designer is free to specify are R2, R3, and C. Therefore, one must
first determine which DPs among these have the most influence on the two FRs.
Next, one must find the values of those DPs that will minimize variability in
performances FR1 and FR2.
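As a numerical check, the cutoff-frequency expression of Eq. (9.3.2) can be evaluated at the nominal component values of inner-array Experiment 1 (Table 9.3) together with the nominal Rs and Rg of Table 9.4. The function below is an illustrative sketch, not code from the study; the value it returns is close to the roughly 8 Hz mean that Table 9.6 later reports for Experiment 1:

```python
# Sketch: evaluate Eq. (9.3.2) at nominal values for inner-array Experiment 1
# (R3 = 20 ohm, R2 = 0.01 ohm, C = 1400 uF; Rs = 120 ohm, Rg = 98 ohm).
from math import pi

def cutoff_hz(R2, R3, C, Rs=120.0, Rg=98.0):
    num = (R2 + Rg) * (Rs + R3) + R3 * Rs
    den = 2 * pi * (R2 + Rg) * R3 * Rs * C
    return num / den

wc = cutoff_hz(R2=0.01, R3=20.0, C=1400e-6)
print(round(wc, 2))
```

The small discrepancy from the tabulated mean is expected: the tabulated value averages the response over the 27 noise combinations of the outer array rather than evaluating it once at nominal.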
There are two FRs (ωc and D) to be optimized here. It would indeed be
fortunate if we found two independent DPs, each influencing only one functional
requirement (FR1 or FR2). We could then adjust each FR independently to its own
target value. (Suh [13] treats this desirability as an axiom for ideal design and
presents a formalism towards achieving it.) In the present case the designer used
OA experiments as suggested by Taguchi.
Recall that DPs having little interaction with the noise factors (these DPs
have large S/N ratios, Section 5.2) and a linear relationship with the output response
work best as adjustment factors. Therefore, one has to seek here, if possible,
two independently adjustable DPs: one to adjust the cutoff frequency ωc, and the
other to adjust the maximum light-beam deflection D.
The designer should use the DPs not used as adjustment factors to
maximize the S/N ratio (Section 6.3), to help make the design robust. He should
attempt to find DP levels that make the system's response least sensitive to noise,
by statistically experimenting with different levels of the control factors. Before
concluding the task, the designer must verify noise sensitivity and linear dependence
over the entire range of possible DP values.
Based on engineering considerations, the designer selected three nominal levels
(treatments) for each of the three DPs (R2, R3, and C). Table 9.2 shows these levels.

TABLE 9.2
TREATMENT LEVELS FOR DESIGN PARAMETERS

                    Treatment
              1         2          3
R3 (Ω)       20      50,000    100,000
R2 (Ω)      0.01       265        525
C (μF)     1,400       815        231
The broad range of values considered here was intentional and in line with the
Taguchi philosophy (Section 6.5). The designer used two separate OAs in this
problem. He selected the first OA (the inner array specifying the combinations of
the different control factor settings) based on the following considerations: If one
had to test all possible combinations of the treatment levels shown in Table 9.2, one
would have to run 3³, or a total of 27, full factorial statistical experiments. If one
assumed additivity of effects (Section 4.2), the task involved running a much
smaller number of experiments. Since the investigation involved three factors at
three treatments each, with no interactions assumed, one found the total dof
(Section 8.1) as 1 + 3 x (3 - 1) = 7, so the nine experiments of the L9 array suffice.
TABLE 9.3
COMBINATIONS OF CONTROL FACTOR TREATMENTS IN L9

Experiment    R3 (Ω)     R2 (Ω)    C (μF)
1                 20       0.01     1,400
2             50,000        265       815
3            100,000        525       231
4                 20        265       231
5             50,000        525     1,400
6            100,000       0.01       815
7                 20        525       815
8             50,000       0.01       231
9            100,000        265     1,400
The designer defined a second OA, the outer array (or the noise array of
Fig. 5.1), for each row in the L9 OA of Table 9.3, to measure the variation in
output response that occurred due to the anticipated uncontrolled variation caused
by the industrial-quality tolerances of system components and parts (Table 9.1).
The application of the outer array thus would simulate the noise due to the
imprecision of the commercial parts used.
The reader should note that this study did not undertake any physical
experimentation. The designer employed the mathematical models given by
Eqs. (9.3.2) and (9.3.3) to simulate the experimental trials.
A total of seven sources of noise existed in the instrumentation system using
the filter network. Table 9.4 shows the three noise-inflicted levels for each of the
seven sources (the components/parts in the system). Note that the inexactness of
the values of the parts (R2, R3, and C) used in fabricating the filter and that of the
system components (Rs, Rg, Gsen, and Vs) would affect their nominal values.
Therefore, each inner array experiment in Table 9.3 was combined with the outer
array to yield a set of noise-affected observations {y1, y2, y3, ..., y27}. This produced
a mean response value m and an S/N ratio value for each of the two output
responses (ωc and D). One calculated the applicable nominal-is-best S/N ratio
using the equations

Est (μ²) = (Sm - V)/n
Est (σ²) = V = [Σ y² - (Σ y)²/n]/(n - 1)

so that

S/N = 10 log10 [Est (μ²)/Est (σ²)] = 10 log10 [(Sm - V)/(n V)]     (9.3.4)

where Sm = (Σ y)²/n.
TABLE 9.4
LEVELS FOR NOISE FACTORS

                            Level
Component          1           2           3
R3 (Ω)          R3 - 5%        R3        R3 + 5%
R2 (Ω)          R2 - 5%        R2        R2 + 5%
C (μF)          C - 5%         C         C + 5%
Rs (Ω)          119.82        120        120.18
Rg (Ω)          97.853         98        98.147
Gsen (nV/in)    656.594      657.58      658.566
Vs (V)          0.014978      0.015      0.015023
Table 9.5 shows the effect of the outer array on Experiment 1 of the inner
array (Table 9.3). The variation in output (the D and ωc columns in Table 9.5) shows
the sensitivity of the two output responses to noise.
Again, an exhaustive test of every combination of noise possible (i.e., a full
factorial design) would require 3⁷ = 2,187 experiments, an expensive undertaking.
The use of the outer OA here minimized the number of combinations necessary to
investigate, while maintaining a representative sample [5]. Since there are seven
factors with three levels each, the L9 OA described earlier would not suffice.
Instead, one used an L27 array, which can accommodate up to 13 factors at three
levels each.
As mentioned above, an appropriate outer array would simulate the tolerance
noise (an aspect beyond the designer's control) for each inner array experiment.
Table 9.5 shows the L27 outer array for the first experiment of Table 9.3. The
investigator calculated the two output responses ωc and D for each of the
27 combinations of noise levels, using Eqs. (9.3.2) and (9.3.3) respectively. The
experimental design used the last seven columns of the OA. For each experiment
shown in Table 9.3, one also calculated the S/N values for D and ωc. For example,
for R3 = 20 Ω, R2 = 0.01 Ω, and C = 1400 μF, one used Eq. (9.3.4) and the last
two columns of Table 9.5. The next section shows these calculations.
TABLE 9.5
27 COMBINATIONS OF NOISE FACTORS TESTED FOR EXPERIMENT 1 VALUES
FROM TABLE 9.3, WITH OUTPUT RESPONSES (D AND ωc) CALCULATED

Similarly, one calculated the sample mean values for ωc and D by summing
the outputs and dividing by the number of data points in Table 9.5. One repeated
the procedure for each of the nine experiments shown in Table 9.3 (see Fig. 5.1).
Table 9.6 shows the S/N ratio and mean output response m for output responses
ωc and D.
averaging this sum over the number of dof (= number of treatments minus one) for
this DP. The necessary calculations are as follows:
Suppose that one observes the response Y to be y1, y2, and y3 when one sets
the DP R3 at level R3 and replicates the observations three times. During replication
all noise factors remain active. Similarly, suppose that by replication one observes
y4, y5, and y6 when the DP R3 is set at level (R3 - 5%), and y7, y8, and y9 when R3
is set at level (R3 + 5%). The effect of the DP R3 on the output Y is then built from
the following three variations: 1. the total variation in output while R3 is at level
R3, which is [(y1 - m) + (y2 - m) + (y3 - m)]; 2. the total variation while R3 is at
level (R3 - 5%); and 3. the total variation while R3 is at level (R3 + 5%). Here,
m is the grand average, Σ yi /9.
The absolute main effect of R3 on the output Y is found by squaring the total
variation caused at each level of R3, dividing each squared total variation by its
respective dof, and then summing the results. Thus, the absolute effect of R3 on
the output response (evaluated as the mean sum of squares or MSS) is
where

Sm = (Σ yi)²/n
V = (Σ yi² - Sm)/(n - 1)
TABLE 9.6
S/N RATIO AND m CALCULATED FOR EXPERIMENT 1 IN TABLE 9.3

              D            ωc
n            27            27
Σ yi        85.212      216.201
Σ yi²      269.1657    1735.579
Sm         268.9290    1731.217
V          0.009102     0.167770
S/N        30.39093    25.82230
m           3.156       8.007444
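The S/N and m entries of Table 9.6 can be reproduced from the tabulated sums alone. A minimal sketch for output response D of Experiment 1:

```python
# Reproduce the nominal-is-best S/N ratio and mean for Experiment 1
# (response D) from the sums listed in Table 9.6, via Eq. (9.3.4).
from math import log10

n, sum_y, sum_y2 = 27, 85.212, 269.1657

Sm = sum_y ** 2 / n                      # about 268.929
V = (sum_y2 - Sm) / (n - 1)              # sample variance
sn = 10 * log10((Sm - V) / (n * V))      # Eq. (9.3.4)
m = sum_y / n

print(round(Sm, 4), round(V, 6), round(sn, 2), round(m, 3))
```

The computed values agree with the Table 9.6 column for D (S/N about 30.39 dB, m = 3.156 in) to within rounding of the printed sums.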
The sample mean m shown above estimates the magnitude of the output response
for Experiment 1 in Table 9.3. The S/N value reflects the sensitivity of this
output's response to noise (the higher the S/N ratio, the lower the noise sensitivity).
Table 9.7 shows the completed calculations of the S/N ratios and the mean
performances (ωc and D) for each of the nine experiments in Table 9.3.
TABLE 9.7
S/N AND MEAN OUTPUT RESPONSE FOR EACH CONTROL FACTOR
COMBINATION GIVEN IN TABLE 9.3

                     ωc                    D
Experiment   S/N (dB)   m (Hz)     S/N (dB)   m (in)
From Tables 9.3 and 9.7 one may estimate the relative influence that each of the DPs (R2, R3, and C) had on both the S/N ratio and the mean value for each output response (ωc and D). For example, one may find the effect of control factor R3 on the output S/N ratio by taking the average S/N ratio at each level of R3 (i.e., 20 kΩ, 50 kΩ, and 100 kΩ). Similarly, one may calculate the effects of the other two DPs.
Likewise, one may estimate the relative influence of each DP on the mean value by looking at the average value of the sample mean m at the three treatment values of that DP. DPs showing a constant S/N ratio response over the entire range of levels while showing a linear relationship to the mean output response would best serve as the adjustment factors (Section 6.8).
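This selection rule can be sketched in code. In the hypothetical helper below (all names and data are ours, not taken from the case study), a factor qualifies as an adjustment-factor candidate when its per-level S/N ratios are nearly constant while its per-level mean responses vary monotonically:

```python
def adjustment_factor_candidates(level_sn, level_mean, sn_tol=1.0):
    """Return factors whose S/N is flat across levels (range <= sn_tol)
    but whose mean response is monotonic, ranked by mean-response swing."""
    candidates = []
    for factor, sn_values in level_sn.items():
        sn_range = max(sn_values) - min(sn_values)
        means = level_mean[factor]
        monotonic = means == sorted(means) or means == sorted(means, reverse=True)
        if sn_range <= sn_tol and monotonic:
            candidates.append((factor, max(means) - min(means)))
    return sorted(candidates, key=lambda t: -t[1])

# hypothetical per-level averages (3 levels per factor)
sn = {"R2": [25.1, 23.0, 26.4], "R3": [27.0, 24.1, 22.3], "C": [25.2, 25.5, 25.0]}
mu = {"R2": [6.1, 6.3, 6.2], "R3": [5.0, 6.8, 8.9], "C": [4.9, 6.8, 8.8]}
print(adjustment_factor_candidates(sn, mu))
```

With these made-up numbers only C qualifies: its S/N range is 0.5 dB while its mean swings by about 3.9, which mirrors the kind of conclusion reached for the filter.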
These estimates are obtained from Table 9.7. Thus, if one denotes the entries in the second column of Table 9.7 by SN1, SN2, SN3, ..., SN9, then, using the information in the OA in Table 9.3 and a relationship similar to Eq. (3.3.2), one obtains

S_R3 = 1/2 [(SN1 - msn) + (SN4 - msn) + (SN7 - msn)]²
     + 1/2 [(SN2 - msn) + (SN5 - msn) + (SN8 - msn)]²
     + 1/2 [(SN3 - msn) + (SN6 - msn) + (SN9 - msn)]² - CF          (9.5.1)

where msn is the sample mean of the calculated S/N values in column 2 of Table 9.7 and CF is a correction factor. One defines CF as

CF = (1/9) [Σ(i=1..9) (SNi - msn)]²
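Eq. (9.5.1) can be sketched as follows. The nine S/N values below are hypothetical, and the level groups reflect an L9 column in which experiments (1, 4, 7), (2, 5, 8), and (3, 6, 9) sit at levels 1, 2, and 3:

```python
def factor_ss(sn, level_groups, dof=2):
    """Eq. (9.5.1): square the total of the centred S/N values at each level,
    divide by the dof, sum over levels, and subtract the correction factor CF."""
    msn = sum(sn) / len(sn)
    ss = sum(sum(sn[i] - msn for i in grp) ** 2 / dof for grp in level_groups)
    cf = sum(x - msn for x in sn) ** 2 / len(sn)  # centred deviations sum to ~0
    return ss - cf

# hypothetical S/N values (dB) for the nine experiments of Table 9.3
sn_values = [25.8, 24.9, 25.2, 25.4, 24.1, 23.8, 25.3, 23.9, 24.0]
groups = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]   # 0-based indices of the level groups
print(round(factor_ss(sn_values, groups), 3))
```

A large value of this sum of squares flags the factor as a strong influence on the S/N ratio.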
One repeats these calculations for each DP with both the S/N ratio and the mean response of each output (ωc and D).
Table 9.3 shows that the value of 20 Ω for R3 was used during experiments 1, 4, and 7, which yielded S/N ratios of 25.821, 25.378, and 25.309 dB respectively, for the cutoff frequency output ωc. Table 9.7 shows these S/N ratios. The mean (25.503 dB) of these three S/N ratio values is the estimated measure of the sensitivity of the average ωc output to (the tolerance-caused) noise at R3 = 20 Ω, as shown in Table 9.8. Similarly, one calculates the mean value for ωc when R3 is 20 Ω by averaging the appropriate data points in Table 9.7. This yields an average of 20.684 Hz for ωc, as shown in Table 9.9. We have to repeat this procedure for all three control factors (DPs R2, R3, and C) and for both output responses ωc and D. The results will be as shown in Tables 9.8-9.11.
TABLE 9.8
ANOVA FOR THE S/N RATIO OF ωc

TABLE 9.9
ANOVA FOR THE MEAN OF ωc

TABLE 9.10
ANOVA FOR THE S/N RATIO OF D

Factor      Level Means              Sum of
            1       2       3        Squares

TABLE 9.11
ANOVA FOR THE MEAN OF D
Given the results in Tables 9.8-9.11, one should be able to select the principal control factors (i.e., DPs) to adjust the mean performance of the passive filter. Recall that adjustment factors must be insensitive to noise (i.e., they should have large and constant S/N ratios over a broad range of treatment values), and they must have a linear relationship to the output response (FR). (Suh [13] suggests that, additionally, the selection of a control factor as a DP, an adjustment factor in Taguchi's terminology, should be such that the independence of the FRs pertaining to their respective DPs is maintained.)
Based on the above considerations, one may choose as the adjustment factor for ωc the DP that has a clear, monotonic effect on the mean value of the cutoff frequency ωc while contributing a minimum to the mean response of D (Table 9.10). The DP that best fits this requirement is the capacitor C.
Figures 9.2 and 9.3 graphically illustrate the data shown in Tables 9.8-9.11. These figures easily convey how all three parameters R2, R3, and C affect the mean values and S/N ratios of the two output responses.
Figures 9.2 and 9.3 show that C would serve satisfactorily as an adjustment factor for ωc, since its S/N ratio is constant over the broad range of its treatment levels, and since ωc shows the desirable dependency on C. The mean output response for ωc does not vary exactly linearly with respect to C, but it does have a monotonic and almost linear relationship. For these reasons the designer selected C to be the adjustment factor for the cutoff frequency ωc.
The results in Table 9.9 show that the contribution to the variation (indicated by the sum of squares) of the cutoff frequency response ωc is significantly greater for R3 than it is for R2. The contribution of R2 to the variation (again suggested by the sum of squares) in the mean response of D is greater than the contribution of R3, as indicated in Table 9.11. This suggests that R2 is coupled to a lesser degree to the output ωc than is R3, while contributing more toward the variation in the deflection output D. The effect of R2 on mean D is also more prominent (Fig. 9.3). It would appear therefore that R2 would serve slightly better than R3 as the adjustment factor for the deflection response D of this filter network. (However, it should be apparent that the preference for R2 here is only modest.)
[Figs. 9.2 and 9.3 (plots not reproduced): mean output response and S/N ratio of ωc and D plotted against the treatment levels of the design parameters R3 (ohms), R2 (ohms), and C (µF).]
From the sum-of-squares columns of Tables 9.8 and 9.10 (as also from Figs. 9.2 and 9.3) the DP that contributes the most to variation in the S/N ratio can be identified. Tables 9.8 and 9.10 show that DP R3 has a significant influence on the S/N ratio of both D and ωc. To achieve robustness, the tolerance of the DP R3 must be reduced so as to increase the S/N ratio to some desirable maximum.
6. The capacitor C was chosen to be the adjustment factor for ωc based on its relatively flat effect on the S/N ratio of ωc but its marked monotonic effect on mean ωc (Fig. 9.2).
7. Since the effect of R2 on mean D was monotonic and also more prominent than that of R3, the designer chose R2 to adjust D. The choice here was not clear-cut, because R2 also showed considerable effect on the S/N ratio of D.
8. Since the effect of R3 on the S/N ratio of D was considerable (R3 contributed the highest sum of squares in the S/N data), the designer recommended that one should tighten the tolerance of R3 (shown in Table 9.4) to improve the filter's robustness.
Since the basic technology of the device being designed contained factor interactions that one could not eliminate, Filippone [23] adopted a trial-and-error approach to produce an acceptable solution for the design problem above. Perhaps the reader can appreciate the complex nature of the optimization attempted here. Many real design problems present similar difficulty on the way to quality design, in particular, on the way to robustness. We must point out again the basic difficulty one faced above. The design decisions did not become trivial when one used the main-factor model in order to apply OAs. Still, the insight into the inherent nature of the problem one gained by applying the inner and outer OAs was decidedly valuable. Based on conventional methods it would not be possible to reach the improvement Filippone achieved. Even a method using the analysis of Taylor-series-type sensitivities of Eqs. (9.3.2) and (9.3.3) directly would be less useful (see [14], p. 203).
As is true for the passive filter design problem, one sometimes finds that technological constraints would prevent the 2-step optimization procedure from being rapidly and perhaps wholly effective. The difficulties develop because of the DP-DP interaction effects present, and because a clear separation between the parameter sets d and a may not exist. For the passive filter, an L8 OA with factor R3 assigned to column 1, C assigned to column 2, and R2 assigned to column 4, with the three factors set at two levels each, uncovers the 2-factor interactions easily. Table 9.12
TABLE 9.12
RESULTS OF L8 EXPERIMENTS FOR THE PASSIVE FILTER

              Column 1    Column 2    Column 4
Experiment    R3 (Ω)      C (µF)      R2 (Ω)      ωc      S/Nωc      D      S/ND
displays the results of a set of L8 experiments. The relevant linear graph for L8 (Appendix B) was used here to find which columns would contain the 2-factor interactions. Figures 9.4-9.7 show the estimated interactions, suggesting that one would be advised not to ignore the interactions R3C, R3R2, and CR2. (We suggest that the reader verify these estimates using the method shown in Section 8.4.)
[Figs. 9.4-9.7 (interaction plots, not reproduced): response traces at paired treatment levels of R3 (20 to 100k), C (1400 µF to 231 µF), and R2 (525).]
The additive or main-factors-only model, therefore, would be inappropriate to apply here, and one should search for robustness (minimal sensitivity of on-target performance to noise) by invoking methods beyond Taguchi's 2-step approach.
Chapter 10 describes one such approach.
A Direct Method to Achieve Robust Design
A major simplification of the robust design problem was achieved by Taguchi by his invocation of the additive (main-factor-only) model. This chapter explores situations in which factor interactions are significant, in which multiple performance characteristics are involved, or in which the noise-caused standard deviation σ(x) and the mean µ(x) are related in complex ways. Here a constrained optimization approach is presented, which is empirical in character, as Taguchi's two-step method is. This approach, due to Bagchi and Kumar [33], limits the search for robustness to a feasible region in which all target performance requirements are met.
mean square error due to noise. Taguchi [5] had originally suggested that the above problem be solved as an unconstrained 2-step optimization problem, using a scaling or adjustment factor. In this procedure one design factor θ1 is postulated to be an adjustment factor. The response characteristic then splits up into (the product of) two parts: g(θ1)h(x, θ′), where h(x, θ′) is independent of θ1. The optimization procedure then involves the following steps:

Step 1. Choose θ′ such that it minimizes the mean square error due to noise in h(x, θ′).

Step 2. Choose θ1 such that µ(θ) = µ0.
Phadke [5] suggests that the adjustment factor θ1 should be identified by experimentally examining the effect of all design factors on the S/N ratio and the mean µ. Any factor that has no effect on the S/N ratio, but a significant effect on µ, can be used as the adjustment factor. However, as noted in several real-life design situations, this identification is not always clear cut, especially when the relationship between σ and µ is complex [8], or when the effects interact [15]. Resorting to trial and error is also then ineffective. Such difficulties have been highlighted in the recent literature (refer [18], p. 19.21). This chapter describes an alternative method, also empirical as Taguchi's 2-step optimization method is, that judiciously exploits DP-noise interactions to reduce variability and make such designs robust.
This alternative method, applicable to design problems in which all performance characteristics and DPs are quantitative, imposes no additional assumptions. The method is particularly effective when multiple performance characteristics (for instance, cutoff frequency ωc and deflection D in the passive filter design problem in Chapter 9) must each be brought to their respective targets and also made robust. This new method also cuts down the number of independent DPs in inner array experiments.
[Eq. (10.3.1), only partially recovered in the source: an expression involving (R2 + ...) and (... + R3).]          (10.3.1)
Following Taguchi, Filippone states that the DPs that the designer is supposed to specify are R2, R3, and C, and he sets up his inner array accordingly. However, due to the presence of the two constraints given by Eqs. (10.3.1) and (10.3.2), only one of these three DPs is truly determined at the discretion of the designer; the other two are subsequently determined by the simultaneous solution of the above two constraints expressed as equations. Suppose that one selects R3 as the independent DP, the values of C and R2 being determined by Eqs. (10.3.1) and (10.3.2). A feasible design here would then be the one that satisfies Eqs. (10.3.1) and (10.3.2),
for these two equations (constraints) ensure that the performances delivered by such a design would remain on target.
To achieve robustness, one must next determine the value(s) of R3 that would maximize (a) the robustness of the cutoff frequency, and (b) that of the deflection. In general, of course, the optimum value of R3 that maximizes S/Nωc may not coincide with the value of R3 that maximizes S/ND.
Notice that with this improved procedure, the collective influence of all DPs on the robustness of the design is maximized, the search for the robust design being restricted to the points on the contour (or subset) of the total design space on which target performance is assured. This eliminates the chance of carrying out an unconstrained maximization of the S/N ratio(s), only to then find that the adjustment parameters cannot restore performance to target(s) (see [23, p. 135]). The constrained procedure also achieves the central objective of robust design: whenever adjustment is possible, we should minimize the quality loss after adjustment [5, p. 105]. In the present approach adjustment is a fait accompli, as one restricts the search for optimum DPs to the contour on which performance equals target performance.
Let us first illustrate these points using the passive filter example. Given ωc (= 6.84 Hz), D (= 3.00 in.), and a value of R3, the two remaining DPs, namely R2 and C, may be obtained as shown in Eqs. (10.3.3) and (10.3.4). The combination of R3, R2, and C thus obtained (with D = 3 in. and ωc = 6.84 Hz) is a feasible design (though not yet robust or optimum), for it satisfies the target ωc and D requirements given by Eqs. (10.3.1) and (10.3.2). The next step is to search for a feasible design (here a value of R3) which maximizes the relevant S/N ratios (or minimizes variability). Equations (10.3.3) and (10.3.4) together define the contour on which this search has to be conducted. Consideration of non-linear constraints in optimization problems is well known. The use of Lagrange multipliers, when the analytical form of the objective function is known, constitutes a standard optimization procedure [26]. In the present situation, however, the exact mathematical dependency of the S/N ratios on the DPs is unknown; hence a (constrained) search would be an acceptable practical approach.
The design that maximizes robustness with respect to ωc may not coincide with the design that enjoys maximum robustness with respect to deflection D. One may here seek Pareto optimality, as shown in Fig. 10.1 [26], and by search generate an acceptable set of designs between the two designs giving either max(S/Nωc) or max(S/ND).
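The Pareto-optimal set can be extracted mechanically: a design is retained when no other design is at least as good on both S/N objectives and strictly better on one. Below is a sketch using hypothetical (design, S/Nωc, S/ND) triples:

```python
def pareto_front(designs):
    """Keep the non-dominated designs when both objectives are to be maximized."""
    front = []
    for name, s1, s2 in designs:
        dominated = any(t1 >= s1 and t2 >= s2 and (t1 > s1 or t2 > s2)
                        for _, t1, t2 in designs)
        if not dominated:
            front.append(name)
    return front

# hypothetical candidates: (label, S/N for cutoff frequency, S/N for deflection)
designs = [("A", 30.1, 22.0), ("B", 29.0, 25.5), ("C", 27.5, 27.0), ("D", 28.0, 24.0)]
print(pareto_front(designs))
```

Here design D is dominated by B; A, B, and C form the trade-off set between max(S/Nωc) and max(S/ND).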
Fig. 10.1 Pareto optimality between two optimization objectives.
Figure 10.2 (plot not reproduced; abscissa: R3 in ohms) shows the dependence of S/Nωc on R3, reflecting the strength of the interaction demanded by Eq. (10.3.1) between R3 (the DP) and the various sources of noise (Table 9.4). Figure 10.2 also shows the dependence of S/ND on R3. The point-to-point fluctuations reflect the effect of the random trials in the Monte Carlo runs. In general, these fluctuations would reduce if larger samples (for instance, 500 instead of the 100 trials used here) were obtained at each candidate R3 value.
Finding the optimum R3 (and, therefore, the corresponding R2 and C values) would amount to finding where S/Nωc (or S/ND) is maximum. In a well-behaved case such as the present one, visual inspection provides sufficiently good estimates for the optimum R3. In problems involving multiple DPs, one could use empirical optimization methods such as the Response Surface Methodology (RSM).
Figure 10.2 indicates that a choice of R3 in the range

100 Ω < R3 < 350 Ω

would be sufficient to assure robustness of ωc. It also indicates that S/Nωc is relatively flat in this range. From the figure it can also be seen that R3 should be in the vicinity of 300 Ω in order that S/ND is maximum. Figure 10.3 shows the Monte Carlo estimates of the variance of ωc and the variance of D obtained as functions of R3. The final choice of R3 would depend on resolving the multiple (two) objective problem: maximize the robustness of the cutoff frequency ωc; also maximize the robustness of deflection D. Several standard methods are available
Fig. 10.3 Monte Carlo estimates of Var ωc and Var D as functions of R3 (plot not reproduced).
[An equation, garbled in the source, expresses the dependent DPs in terms of C; it involves the quantity 4πωcCR2 and a term V.]
The resulting values of S/Nωc and S/ND as functions of the (alternative) independent DP (now C) are plotted in Fig. 10.4, confirming that a value of C in the vicinity of 400-450 µF indeed provides maximum robustness for both ωc and D. One would appreciate that such confirmation is possible only when each S/N function has a global maximum.
[Fig. 10.4 (plot not reproduced): Effect of C on S/Nωc and S/ND; abscissa: C (farads); designs away from the S/N peaks are annotated "These designs are off-target".]
One key practical advantage of being able to optimize the design by selecting any one of the three components (R2, R3, and C) of the filter as the independent DP is that this might facilitate fabricating the filter using particular components available at standard values in the market.
A regression model is a quantitative relationship (one that predicts the response Y, given X) that one develops using empirical data, provided indeed there already is a cause-effect relationship between X and Y.
The simplest form of regression models is known as the simple linear regression model. The word simple here means that there is a single independent variable (say, X), but the word linear does not have the interpretation that might seem self-evident. Specifically, it does not (necessarily) mean that the relationship between the dependent variable (say, Y) and the single independent variable will be portrayed graphically as a straight line. Rather, it means that the regression equation is linear in the parameters. For example, the equation

Y = β0 + β1X + ε          (10.7.1)

is a linear regression equation. The equation

Y = β0 + β1X + β11X² + ε          (10.7.2)

is also a linear regression equation: even though Eq. (10.7.2) contains the quadratic term X², the equation is still linear in the parameters (β0, β1, β11).
If all the {(xi, yi)} points are close to the regression line plotted on the (X, Y) plane, the linear relationship between Y and X is strong. For a regression model to be reliable, i.e., a model using which one can confidently predict Y given X, such a strong relationship must exist.
The parameters β0 and β1 will generally be unknown. A major goal of regression analysis is to estimate these (regression) parameters. The most common method for estimating the parameters in a regression equation is the method of least squares. For advanced-level regression applications, special methods are available [24].
Yi = β0 + β1Xi + εi

Therefore,

εi = Yi - β0 - β1Xi

In estimating the parameters β0 and β1, the sum of the squares of all errors {εi} is minimized. Now,

Σ εi² = Σ (Yi - β0 - β1Xi)²          (10.7.3)

Using calculus, one should differentiate the right-hand side of Eq. (10.7.3) first with respect to β0 and then with respect to β1. This gives two equations in β0 and β1, and their solution yields the estimates β̂0 and β̂1. The results, with n observed pairs of data (Xi, Yi), are

β̂1 = [Σ XiYi - (Σ Xi)(Σ Yi)/n] / [Σ Xi² - (Σ Xi)²/n]

and β̂0 = Ȳ - β̂1 X̄.
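These formulas translate directly into code. The sketch below (the data and names are ours) computes β̂1 from the sums and β̂0 from β̂0 = Ȳ - β̂1X̄:

```python
def simple_least_squares(x, y):
    """Least-squares estimates: b1 from the sums formula, b0 = Ybar - b1*Xbar."""
    n = len(x)
    sxy = sum(a * b for a, b in zip(x, y))   # sum of Xi*Yi
    sx, sy = sum(x), sum(y)
    sxx = sum(a * a for a in x)              # sum of Xi^2
    b1 = (sxy - sx * sy / n) / (sxx - sx ** 2 / n)
    b0 = sy / n - b1 * sx / n
    return b0, b1

# hypothetical data lying exactly on Y = 2 + 3X
b0, b1 = simple_least_squares([1.0, 2.0, 3.0, 4.0], [5.0, 8.0, 11.0, 14.0])
print(b0, b1)
```

Because the data lie exactly on a line, the estimates recover the intercept 2 and slope 3.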
Y = [Y1, Y2, ..., Yn]′,  X = [1 X11 X12; 1 X21 X22; ...; 1 Xn1 Xn2],  β = [β0, β1, β2]′,  ε = [ε1, ε2, ..., εn]′

where Xij denotes the ith observation on the jth regression variable Xj. Therefore, Eq. (10.8.2) may be rewritten as
Y1 = β0 + β1X11 + β2X12 + ε1
Y2 = β0 + β1X21 + β2X22 + ε2
...
Yn = β0 + β1Xn1 + β2Xn2 + εn
As in simple linear regression involving one independent variable, the method of least squares stipulates that one should select {β0, β1, β2} such that the selection minimizes the sum of the squares of the errors Σ εi². For the model in Eq. (10.8.1) we have

Σ εi² = Σ (Yi - β0 - β1Xi1 - β2Xi2)²          (10.8.3)

Differentiating the right-hand side of Eq. (10.8.3) with respect to β0, β1, and β2 (separately) and equating the results to 0 gives three equations in the three unknown parameters β0, β1, and β2. These equations are called the normal equations, written in matrix notation as

X′X β̂ = X′Y          (10.8.4)

The solution of this equation is

β̂ = (X′X)⁻¹ X′Y          (10.8.5)

The use of spreadsheet software with matrix inversion capability makes the computing of {β̂} from the observations {Yi, Xij} relatively straightforward. The example below, which uses the empirical data shown in Table 10.1, illustrates the computation steps.
TABLE 10.1
OBSERVED VALUES OF Y, X1, AND X2 (p = 2)

Y       X1      X2
30.1 8 22
32.2 9 19
34.3 11 19
35.4 12 22
34.4 10 18
30.0 8 22
31.4 9 19
32.3 10 18
34.0 12 22
33.1 11 19
33.2 11 19
34.3 12 22
31.1 9 19
30.0 8 22
32.3 10 18
34.4 12 22
30.3 8 22
31.6 9 19
33.3 11 19
32.0 10 18
The data shown in Table 10.1 are records of observed response values Yi for the indicated values of the two independent variables X1 and X2. Based on prior knowledge of how X1 and X2 influence Y, the investigator speculates that this dependency relationship may be modelled by

Y = β0 + β1X1 + β2X2 + ε          (10.8.6)
One may use Eq. (10.8.5) directly to estimate the three model parameters β0, β1, and β2. A method that reduces the computing effort centres the data first, by subtracting from each variable the value of the mean (Ȳ, X̄1, or X̄2) of that variable. With centring, one would need to invert only a 2 × 2 matrix rather than a 3 × 3 matrix in the present example. The resultant equation that can predict the response Y given values X1 and X2 becomes

Y - Ȳ = β1(X1 - X̄1) + β2(X2 - X̄2)          (10.8.7)

Rearranging Eq. (10.8.7), we get

Y = β0 + β1X1 + β2X2          (10.8.8)

where β0 = Ȳ - β1X̄1 - β2X̄2. Using the data of Table 10.1, the centred matrices begin

X = [-2 2; -1 -1; 1 -1; 2 2; ...],  Y = [-2.385; -0.285; 1.815; 2.915; ...]

Therefore,

X′X = [40 0; 0 56],  X′Y = [43.0; -5.2]

This gives

β̂ = (X′X)⁻¹ X′Y = [43.0/40; -5.2/56] = [1.075; -0.093]

The data given in Table 10.1 produce Ȳ = 32.485, X̄1 = 10, and X̄2 = 20. Therefore,

Y = 32.485 + 1.075(X1 - 10) - 0.093(X2 - 20), i.e., Y = 23.595 + 1.075X1 - 0.093X2

becomes the regression model that links the independent variables X1 and X2 to the response variable Y.
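This fit can be verified with a short script that repeats the centring computation on the data of Table 10.1 (plain Python; a matrix library would serve equally well, but centring makes X′X diagonal here):

```python
data = [  # (Y, X1, X2) from Table 10.1
    (30.1, 8, 22), (32.2, 9, 19), (34.3, 11, 19), (35.4, 12, 22), (34.4, 10, 18),
    (30.0, 8, 22), (31.4, 9, 19), (32.3, 10, 18), (34.0, 12, 22), (33.1, 11, 19),
    (33.2, 11, 19), (34.3, 12, 22), (31.1, 9, 19), (30.0, 8, 22), (32.3, 10, 18),
    (34.4, 12, 22), (30.3, 8, 22), (31.6, 9, 19), (33.3, 11, 19), (32.0, 10, 18),
]
n = len(data)
ybar = sum(r[0] for r in data) / n
x1bar = sum(r[1] for r in data) / n
x2bar = sum(r[2] for r in data) / n

# centred sums give the 2x2 normal equations [s11 s12; s12 s22][b1; b2] = [s1y; s2y]
s11 = sum((r[1] - x1bar) ** 2 for r in data)
s22 = sum((r[2] - x2bar) ** 2 for r in data)
s12 = sum((r[1] - x1bar) * (r[2] - x2bar) for r in data)
s1y = sum((r[1] - x1bar) * (r[0] - ybar) for r in data)
s2y = sum((r[2] - x2bar) * (r[0] - ybar) for r in data)

det = s11 * s22 - s12 ** 2          # 2x2 solve (here s12 = 0, so it decouples)
b1 = (s22 * s1y - s12 * s2y) / det
b2 = (s11 * s2y - s12 * s1y) / det
b0 = ybar - b1 * x1bar - b2 * x2bar
print(round(b0, 3), round(b1, 3), round(b2, 3))
```

With unrounded coefficients the intercept comes out as 23.592; the 23.595 quoted above follows from using the rounded β2 = -0.093.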
To summarize, regression analysis provides a way to develop a mathematical model from empirical data collected about the input factors and the response. When there is reason to believe that a cause-effect relationship between the response variable and certain independent variables exists, the regression model links the response variable quantitatively to the values of the independent variables, facilitating the prediction of the value of the response, given values of the independent
the variance depends upon the mean, to permit adjustment of the mean to target (see [14], p. 291) after one has maximized the S/N ratio, do not necessarily make up for disadvantages 1 and 2.
Alternatively, if a procedure is adopted in which the designer restricts his
search for a robust design among possible alternative designs, in each of which
performance on-target is guaranteed, then disadvantage 3 disappears. Further, if
appropriate experimental designs and mechanistic or regression models are employed,
disadvantages 1 and 2 also disappear. We elaborate this as follows:
Kackar [6] showed that when a specific target τ0 is the best value for the performance Y of the system being designed, one has the following choices to reach a robust design: First, if the expected performance E(Y) and the variance of Y are functionally independent of each other, one may achieve the robust design by reducing the variance, and then adjusting E(Y) to τ0 by using one or more adjustment parameters. Alternatively, if the variance of Y and E(Y) depend on each other linearly and one can adjust E(Y) to τ0 independent of the coefficient of variation (√Var(Y)/E(Y)), one should attempt to reduce the coefficient of variation. The literature also proposes certain models under which, when an adjustment parameter exists, one may use appropriate PerMIAs [17, 21] to reach robustness.
Let us convert all observed performance values Y to Z, where Z = Y - τ0. Therefore, under the influence of noise, we have

E(Z) = E(Y) - τ0

Now, if we restrict the search for the optimum (robust) design to those design parameter combinations at which performance (without the disturbing effect of noise) is exactly τ0, then the difference between observed Y and τ0 would be due only to noise. Therefore, for these designs, E(Z) = 0. Hence, the task of searching for the optimum design converts to that of seeking out the design, within the restricted search space, for which E(Z²) under the influence of noise is minimum.
Note that this restricted search procedure avoids the second step (that of adjusting the bias to zero) necessary in Taguchi's 2-step optimization procedure. The 2-step procedure seems to work very well provided one is successful in identifying the adjustment factors, i.e., DPs that mainly affect the level (location) of performance, but not its dispersion. Issues pertaining to the difficulties posed by any dependency of the performance variance (dispersion) upon the mean become non-issues in the restricted search method, because we are always considering only those designs which, in the absence of noise, would give performance at τ0. If some particular noise factor causes undue variability in performance beyond what may be tolerated, the designer should consider controlling it (i.e., treat it as a DP) to achieve improved robustness.
One may now summarize the constrained design approach. At the outset we assume that the designer knows what the different DPs are and the performance(s) he is attempting to optimize. We assume that the best performance desired for characteristic Yi is identified by a target value τi. The objective is to choose DP values to reduce the sensitivity of performance to the hard-to-control parameters, termed noise. The designer should proceed as follows:

Step 1. For each performance characteristic Yi to be made robust while also being set equal to some target value τi, establish a quantitative model relating all the DPs {DP1, DP2, DP3, ...} to the performance Yi, as in

Yi = f(DP1, DP2, DP3, ...)
Step 2. Write the quantitative model obtained in Step 1 as a constraint:

f(DP1, DP2, DP3, ...) = τi

If n different performances (Y1, Y2, Y3, ..., Yn) are being simultaneously targeted (to τ1, τ2, τ3, etc.), then one establishes here a set of n constraints given by equations such as

f(DP1, DP2, DP3, ...) = τ1
g(DP1, DP2, DP3, ...) = τ2
h(DP1, DP2, DP3, ...) = τ3

and so on.
Step 3. Solve the equations formed in Step 2 to obtain certain dependent DPs in terms of the truly independent DPs. Clearly, if there are m DPs and n constraints with m > n, one has the choice of treating (m - n) DPs as truly independent (these can be used as the robustness-seeking variables in Step 5 below) while the others are dependent (their values are restricted so that the desired performance targets are achieved).

Step 4. Construct a special inner array using only the truly independent DPs. Also set up the appropriate outer array, or a Monte Carlo experimental set-up, or the physical arrangements for repeated observations of performance under the influence of real noise. Conduct experiments at each setting of the inner array rows to observe the performance(s) (Y1, Y2, Y3, ...) under the influence of noise.

Step 5. Apply search, RSM, or some other technique to find the combination of the independent DPs that minimizes the empirically estimated variance (of each performance Y1, Y2, Y3, etc.) under the influence of noise. At this point we are exploiting only DP-noise interactions to improve robustness. If a unique set of DP values does not optimize all performance characteristics, develop the Pareto-optimal set of candidate designs.
The quantitative models in Step 1 may be either mechanistic (based on physical laws relating the response Yi to DP1, DP2, ...) or regression models developed using the Box-Behnken experimental design schemes [11], or some other similar scheme that can lead to at least a second-order regression model including significant 2-factor interactions. When the DPs are discrete or attributive rather than continuous, the method above may be modified to an enumerative search to identify the optimum (robust) design.
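Steps 1-5 can be illustrated with a toy sketch, under stated assumptions: the mechanistic model Y = DP1·DP2 and the target τ = 12 are made up (they are not the filter equations), so the single constraint fixes DP2 = τ/DP1 (Steps 1-3); a grid over the remaining independent DP stands in for the inner array (Step 4), and Monte Carlo noise supplies Step 5:

```python
import random

random.seed(1)
TAU = 12.0      # target performance (Step 2: the constraint is DP1 * DP2 = TAU)
SIGMA = 0.05    # amplitude of the additive noise on each DP

def noisy_variance(dp1, trials=2000):
    """Step 5: Monte Carlo variance of Y at the feasible design (dp1, TAU/dp1)."""
    dp2 = TAU / dp1                           # Step 3: dependent DP fixed by constraint
    ys = []
    for _ in range(trials):
        e1 = random.uniform(-SIGMA, SIGMA)    # noise realizations on each DP
        e2 = random.uniform(-SIGMA, SIGMA)
        ys.append((dp1 + e1) * (dp2 + e2))    # Step 1 model: Y = DP1 * DP2
    mean = sum(ys) / trials
    return sum((y - mean) ** 2 for y in ys) / (trials - 1)

grid = [1.0, 2.0, 3.0, 6.0, 12.0]             # Step 4: candidate independent DP values
best = min(grid, key=noisy_variance)          # the most robust on-target design
print(best)
```

Every grid point is on target by construction; the search merely picks the least noise-sensitive one (here DP1 = 3, close to the analytic optimum √τ ≈ 3.46, since Var(Y) is roughly proportional to DP1² + DP2² under this additive noise).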
We first apply this constrained approach to the passive filter problem, relating it to the steps listed in Section 10.9. We then apply the approach to two other problems: one a process design problem solved earlier by Barker [25] and by Tribus and Szonyi [16], and another discussed by Box [17].
The primary objective of Step 1 in Section 10.9 is to establish mathematical relationship(s) relating the performance characteristic(s) of interest to the DPs. When mechanistic models are not available from the functional design of the product or process [5], one may establish the relationship empirically. In the case of the passive filter, the application of Kirchhoff's laws allows us to derive these relationships (Eqs. (10.3.1) and (10.3.2)). Therefore, for the passive filter problem one may go directly to Step 2.
Equations (10.3.1) and (10.3.2) show the results of Step 2. These two equations help constrain the total design space, consisting of all possible values of R3, R2, and C, to the feasible set of solutions for which on-target performance (ωc = 6.84 Hz and D = 3 in.) would be a fait accompli.
Step 3 is realized by writing Eqs. (10.3.3) and (10.3.4). Perhaps one can see that the filter design problem now has only one DP, viz. R3, left as the independent DP: once we select a value of R3, the meaningful (feasible) values of R2 and C become known (fixed) by Eqs. (10.3.3) and (10.3.4).
Since only one independent DP value now remains to be determined so as to optimize the design, one need not really worry about the inner array of Step 4 in this example. Step 5 is accomplished by performing Monte Carlo simulations using noise conditions consistent with Table 9.4. As shown in Fig. 10.2, the optimum value of R3 is near 300 Ω. The corresponding (optimum) values of R2 and C may now be determined from Eqs. (10.3.3) and (10.3.4). Figure 10.5 (a comparison of outer array and Monte Carlo simulations of noise effects, plotted against R3 in ohms; not reproduced) confirms that when every noise factor has a symmetric distribution (in the present example, normal with the nominal value as the average and a third of the percentage tolerance as the standard deviation), outer array experiments and Monte Carlo trials produce comparable results.
We shall next apply the direct optimization to a second design optimization example, one initially tackled by Barker [25] and later by Tribus and Szonyi [16]. This problem required optimizing the strength of castings produced by a screw moulding process. The six process factors involved are listed in Table 10.2. The optimum settings were to be found so as to yield minimum variance in the strength of the castings produced.
TABLE 10.2
WORKING RANGES AND LOW COST TOLERANCES OF DPs
The target performance (strength) required was 160. Barker's solution, in which he centred the variables and then applied Taguchi's 2-step optimization, produced a design (coded as 2, 3, 3, 2, 3, 2) with a mean strength of 161.55 and a variance of 785. The Tribus-Szonyi solution used Monte Carlo simulation to simulate the influence of the uncontrolled noise factors. This solution, represented by the code (2, 3, 3, 2.5, 3, 2), predicted a mean strength of 154.7 and a variance of 683.
To apply the direct or constrained optimization method we first convert the Tribus-Szonyi process model for strength (developed by them using multiple regression; see [16], Eq. (2)) into the constraint

160 = Strength = 111.67 - 3.43(X1 - 2) - 3.68[3(X1 - 2)² - 2]
      + 8.33(X2 - 2) + 9.52(X3 - 2) + 4.74(X4 - 2)
      - 2.85[3(X4 - 2)² - 2] + 5.00(X5 - 2)
      - 3.42[3(X6 - 2)² - 2]
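The on-target property of this constraint can be checked mechanically. In the sketch below (function names are ours), the strength model is coded as printed, and X2 is solved for in closed form, since X2 enters the model only linearly, through the 8.33(X2 - 2) term; any design completed this way predicts a strength of exactly 160:

```python
def strength(x1, x2, x3, x4, x5, x6):
    """Tribus-Szonyi regression model for casting strength, as printed above."""
    return (111.67 - 3.43 * (x1 - 2) - 3.68 * (3 * (x1 - 2) ** 2 - 2)
            + 8.33 * (x2 - 2) + 9.52 * (x3 - 2) + 4.74 * (x4 - 2)
            - 2.85 * (3 * (x4 - 2) ** 2 - 2) + 5.00 * (x5 - 2)
            - 3.42 * (3 * (x6 - 2) ** 2 - 2))

def x2_on_target(x1, x3, x4, x5, x6, target=160.0):
    """Solve the on-target constraint for X2 (it appears only linearly)."""
    rest = strength(x1, 2.0, x3, x4, x5, x6)   # setting X2 = 2 zeroes the X2 term
    return 2.0 + (target - rest) / 8.33

# completing a design with (X1, X3, X4, X5, X6) = (2, 3.5, 2.5, 2.5, 2)
x2 = x2_on_target(2.0, 3.5, 2.5, 2.5, 2.0)
print(round(x2, 2), round(strength(2.0, x2, 3.5, 2.5, 2.5, 2.0), 2))
```

The recovered X2 ≈ 3.37 matches the second coordinate of the search solution (2, 3.37, 3.5, 2.5, 2.5, 2) quoted below, and the completed design sits exactly on the target strength of 160.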
Next, we randomly generate different feasible designs and compute their Monte Carlo estimates of the variance of strength (using Eq. (3) of [16]). A part of the simulation results is shown in Fig. 10.6. A 15-minute search for the optimum values of X1, X2, X3, X4, and X5 with an 80286-based microcomputer running at 12 MHz produced a best design, one with on-target strength (because the search was restricted only to those designs that would yield a strength of 160). This design is representable by the code (2, 3.37, 3.5, 2.5, 2.5, 2) and has a variance of 703.
158 TAGUCHI METHODS EXPLAINED PRACTICAL STEPS TO ROBUST DESIGN
Fig. 10.6 Sample results of random search for a robust design with on-target
casting strength.
The design might further improve if random search was pursued further or if
one applied the RSM constrained by the equation
X2 = 2 + (1/8.33)[160 − {111.67 − 3.43(X1 − 2)
     − 3.68[3(X1 − 2)² − 2] + 9.52(X3 − 2) + 4.74(X4 − 2)
     − 2.85[3(X4 − 2)² − 2] + 5.00(X5 − 2)
     − 3.42[3(X6 − 2)² − 2]}]                    (10.10.1)
When the boundary conditions on the working range of the variables were relaxed
somewhat (a consideration based on engineering knowledge) and RSM was
applied, a significantly more robust design resulted. This design, representable by
code (1.83, 2.63, 4.47, 2.67, 2.03, 1.93), possesses a projected variance of 454
while maintaining the target strength at 160. The RSM objective here was to
minimize moulded part strength variance (see Eq. (3), [16]) given by
σ² = 193.74 + 47.56(X1 − 2) + 28.11[3(X1 − 2)² − 2]
     + 15.20(X2 − 2) + 2.95[3(X2 − 2)² − 2]
     + 19.82(X5 − 2) + 2.49[3(X5 − 2)² − 2]
     + 9.58(X6 − 2) + 29.32[3(X6 − 2)² − 2]
If one uses the total loss (Section 1.3) inflicted by a design as a measure, then
it appears that the constrained approach can yield as much robustness as (if not
more than) the conventional 2-step design optimization method produces.
A DIRECT METHOD TO ACHIEVE ROBUST DESIGN 159
What is significant here is that the constrained approach assures zero pre-designed
bias (off-target performance) even with a multiplicity of target performance
characteristics.
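The constrained random search described above can be sketched in Python. The two regression models below are transcribed from the strength and variance equations reproduced in the text (Eqs. (2) and (3) of [16]); the working ranges of the factors (Table 10.2 is not legible in this copy) are assumed here to be 1 to 4 for every factor, so treat this as an illustrative sketch rather than the authors' exact computation.

```python
import random

# Tribus-Szonyi regression models for the screw-moulding process,
# with factors coded about 2 as in the text.
def strength(x):
    X1, X2, X3, X4, X5, X6 = x
    return (111.67 - 3.43*(X1 - 2) - 3.68*(3*(X1 - 2)**2 - 2)
            + 8.33*(X2 - 2) + 9.52*(X3 - 2) + 4.74*(X4 - 2)
            - 2.85*(3*(X4 - 2)**2 - 2) + 5.00*(X5 - 2)
            - 3.42*(3*(X6 - 2)**2 - 2))

def variance(x):
    X1, X2, X3, X4, X5, X6 = x
    return (193.74 + 47.56*(X1 - 2) + 28.11*(3*(X1 - 2)**2 - 2)
            + 15.20*(X2 - 2) + 2.95*(3*(X2 - 2)**2 - 2)
            + 19.82*(X5 - 2) + 2.49*(3*(X5 - 2)**2 - 2)
            + 9.58*(X6 - 2) + 29.32*(3*(X6 - 2)**2 - 2))

def solve_x2(X1, X3, X4, X5, X6, target=160.0):
    # X2 enters the strength model only linearly, so the constraint
    # strength = target can be solved for X2 in closed form (Eq. 10.10.1).
    rest = strength((X1, 2.0, X3, X4, X5, X6))  # at X2 = 2 the X2 term vanishes
    return 2.0 + (target - rest) / 8.33

def random_search(trials=20000, lo=1.0, hi=4.0, seed=1):
    # Search only among on-target designs and keep the lowest-variance one.
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        X1, X3, X4, X5, X6 = (rng.uniform(lo, hi) for _ in range(5))
        X2 = solve_x2(X1, X3, X4, X5, X6)
        if not (lo <= X2 <= hi):
            continue                     # constraint unreachable in range
        x = (X1, X2, X3, X4, X5, X6)
        if best is None or variance(x) < variance(best):
            best = x
    return best
```

Because X2 is solved from the constraint, every candidate the search evaluates already delivers the target strength of 160 exactly; the search then simply minimizes the variance model over that constrained set.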
Recall now the central focus of robust design as stated by Taguchi [4],
Phadke ([5], p. 281), Box [17], and many others. As against the 2-step optimization
method (Section 6.9), in the constrained optimization approach one looks for
robustness (the minima of the noise-caused variance) only at points in the
design space at which the different performance characteristics are each on their
respective nominal-the-best targets. Therefore, one loses little by not working
with the S/N ratios in the constrained optimization procedure.
We may give yet another example of applying the constrained design
optimization approach. Box [17] lists a class of design problems seeking robustness
about an operating target value T in which the noise-caused standard deviation
σ(x) and the mean μ(x) are linked in certain special ways. He showed that, because
of this special linkage, the DP set x cannot be separated into subsets x₁ (the
control parameters influencing only dispersion) and x₂ (the vector of adjustment
parameters) by use of the S/N ratio 10 log₁₀(μ²/σ²). In one illustration with
two DPs x₁ and x₂ in which the dispersion P(x₁) equals [σ(x₁, x₂)/μ²(x₁, x₂)]², Box
suggests that one should first select x₁ so as to minimize P(x₁) and then iteratively
vary x₂ to adjust μ(x₁, x₂) to the desired target T (T = 10 in Box's illustration). In
his paper Box provided a plot (Fig. 1a in [17], reproduced in Fig. 10.7) of the
contour lines of σ(x₁, x₂) and μ(x₁, x₂). Availability of these contours makes the
application of the constrained optimization approach to this problem straight-
forward. One searches for the optimum design only on the contour on which
μ(x₁, x₂) = 10 (see Fig. 10.7). On this contour one would look for the (x₁, x₂) point
at which σ(x₁, x₂) is minimum. With the contours being as shown, the design
that meets these criteria is the point marked Q on the plot, coincident with Box's
solution. We remark again that if mechanistic or mathematical models for μ(x) and
σ(x) are not available, one should obtain these empirically with the help of
Fig. 10.7 Contour lines of σ(x₁, x₂) and μ(x₁, x₂) showing
the robust design's location at Q.
performance) to the problem only. The approach works particularly well with
multiple performance characteristics: each characteristic to be made robust is
assured here to be on its respective target. By contrast, application of Taguchi's
2-step method requires fine-tuning adjustments to reach targets [5, 13, 14, 23],
frequently cumbersome with multiple performance characteristics. Further, since
the constrained approach uses models containing higher-order terms and
interactions, it explicitly considers all significant interaction and higher-order terms.
This is the third advantage of the constrained optimization approach as presented
here. The fourth advantage, which results from the preferred use of Monte Carlo
experiments (replacing the use of outer OAs) to study the S/N ratios, is the removal
of the chance of distortion of the variance and biasing of the mean when the
noise factors do not have a symmetric distribution. Such biasing and distortion
may occur when outer OAs are used to simulate noise (as recommended by Taguchi)
in the presence of asymmetric distributions (for example, see [16]).
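The distortion just described can be illustrated with a small sketch that is not from the text: a two-level noise surrogate, which (like a two-level outer array) uses only the mean and standard deviation of the noise, misses the skewness of an asymmetric noise distribution that a full Monte Carlo sample captures. The cubic response and the lognormal noise below are assumptions chosen purely for illustration.

```python
import random, math

# An odd nonlinear response exposes the skewness of the noise.
def response(e):
    return e**3

# Asymmetric noise: a lognormal variate shifted to have zero mean.
MU = math.exp(0.5**2 / 2)            # mean of lognormal(0, 0.5)
def draw_noise(rng):
    return rng.lognormvariate(0.0, 0.5) - MU

rng = random.Random(42)
samples = [draw_noise(rng) for _ in range(200000)]
sigma = math.sqrt(sum(e*e for e in samples) / len(samples))

# Two-level surrogate: evaluate the response only at noise = -sigma, +sigma.
two_level = (response(-sigma) + response(+sigma)) / 2  # 0 for an odd response

# Monte Carlo: average the response over the true skewed distribution.
monte_carlo = sum(response(e) for e in samples) / len(samples)
```

The two-level estimate of the mean response is exactly zero, while the Monte Carlo estimate recovers the positive contribution of the noise distribution's long right tail, which is the bias the text warns about.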
We have described an unconventional yet effective method to empirically
Fig. 10.8 Pareto-optimal robust designs are sought on the contour on which
performances / and g are both on target.
Loss Functions and Manufacturing
Tolerances
11.1 LOSS TO SOCIETY IS MORE THAN DEFECTIVE GOODS
This chapter presents the loss function, a quantitative statement developed by Taguchi
of how off-target production economically affects consumers and manufacturers.
Loss functions provide the justification, missing in conventional QA, for why a
manufacturer should minimize the variability in the performance of a product or a
process. Loss functions also guide the setting of manufacturing tolerances,
the allocation of part tolerances between interacting work centres in a factory,
and that between suppliers and the consumer. As we show in this chapter, loss functions
can play a major role in minimizing the burden that off-target production
and services place on society.
Loss functions communicate an important reality that Taguchi was the first
to articulate: producing goods that merely meet specifications is not enough for an
enterprise (Section 1.4). Any product that fails to perform on target inflicts a loss
on society. This loss may take the form of an inconvenience, a material loss, a
production stoppage, a repair, an adjustment cost, or the complete scrapping of the product.
Ill-fitting shoes, a train that leaves late, a reactor hatch that does not completely
close, a chemical reaction with low yield, or a gun barrel short of the required
finish: all these inflict such losses on society.
When a product fails to deliver its required characteristics or performs
below the expected standard, it must be re-machined, adjusted, re-processed, or, if
all these actions fail, discarded. When sold, if it fails to function as the customer
expects, the customer must have it adjusted or re-stitched, return it to the store, or
throw it into the waste basket. In addition, with off-target performance the user
often incurs an extra cost due to the countermeasures he must apply to compensate
for the off-target performance.
According to Taguchi, the mere fact that a product meets specifications,
or the traditional use of specifications to communicate user requirements, is often
not enough. Performance is ideal, best, or most desirable when it is exactly on target.
Being off target has real adverse consequences, which are very serious in certain
situations. For instance, even if the components making up a complete system are
individually within specifications, if each of them just meets its tolerances, many
trivial deviations can stack up, leading to catastrophic system failures.
Taguchi showed that whenever product or process performance deviates
from the target, the loss occurring to society may be quantified. If L(y) represents
the loss caused by a small performance deviation (y − m) from the target
performance m, then using a Taylor series expansion it is possible to write

L(y) = L[m + (y − m)] = L(m) + L′(m)(y − m)/1! + L″(m)(y − m)²/2! + ...

Since performance exactly on target causes no loss, L(m) = 0, and since L is
minimum at m, L′(m) = 0. The leading remaining term, L(y) = k(y − m)² with
k = L″(m)/2, then
defines the loss function for ill-fitting shirts near the true neck size (or target
characteristic) m.
A word needs to be said about tolerances at this point. Unlike how designers
commonly mark them on engineering drawings, tolerances need not always be
symmetric. A shirt neck that is 1.5 cm too tight very likely needs an adjustment,
whereas one that is 1.5 cm too wide does not. Thus, loss functions need not always
be symmetric.
What use is the loss function? One major use of loss functions is in determining
manufacturing tolerances. Taguchi used cost arguments to show that in most cases
manufacturing tolerances should be not equal to, but tighter (narrower) than, customer
tolerances.
Observe that this tolerance is tighter than the customer's tolerance, because it is
cheaper to adjust the collar size while one is manufacturing the shirt (this costs
$2.50 per shirt in the factory) than to have it re-stitched by the customer's local
tailor (such re-stitching costs $10.00 per adjustment).
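Taguchi's tolerance rule can be sketched in a few lines of Python: if the customer's loss reaches A₀ dollars at the customer tolerance δ₀, the loss function is L(y) = k(y − m)² with k = A₀/δ₀², and a cheaper factory adjustment costing A dollars pays for itself at the tighter tolerance δ = δ₀√(A/A₀). The dollar figures below are the shirt-collar numbers from the text; δ₀ = 1.0 cm is an assumed value for illustration, since the customer tolerance itself is not stated on this page.

```python
import math

# Manufacturing tolerance implied by the quadratic loss function:
# the factory should adjust once its own fix (A dollars) is cheaper
# than the customer's loss at that deviation, i.e. at delta0*sqrt(A/A0).
def manufacturing_tolerance(delta0, A0, A):
    return delta0 * math.sqrt(A / A0)

# Shirt-collar figures from the text: $10.00 re-stitch at the customer's
# tailor, $2.50 adjustment in the factory; delta0 = 1.0 cm is assumed.
tol = manufacturing_tolerance(1.0, 10.00, 2.50)  # half the customer tolerance
```

With a 4:1 cost ratio the factory tolerance comes out at exactly half the customer tolerance, which is the qualitative point the text makes: the cheaper the in-factory fix relative to the customer's remedy, the tighter the manufacturing tolerance should be.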
One can see that if one is interested in the broader objective of minimizing
the total cost society incurs because of ill-fitting collars, the above procedure for
LOSS FUNCTIONS AND MANUFACTURING TOLERANCES 165
setting the manufacturer's tolerance helps achieve this minimum. Such a procedure
for setting manufacturing tolerances equitably balances the costs society incurs in
restoring off-target product performance to target performance.
However, manufacturing tolerances need not always be tighter. What their
magnitude should be depends on the relative adjustment costs of the
manufacturer and the customer.
One may use similar procedures to set engineering tolerances across
departments for dimensions, hardness (and other similar characteristics), impurity
in raw materials, magnetic strength, vibration control, balancing of vehicle wheels
and tires, etc. to keep unit costs minimum.
Therefore, L(y) = 3.125(y − 120)² describes society's losses. Hence, when the
manufacturer delivers a cable just within the edge of the customer specification
limits (120 ± 10 mV), he effectively inflicts a loss of 3.125(10)², or $312.50, on
society (this time the customer). Recall that the loss function is the quantitative
statement of the loss society incurs whenever performance deviates from target.
Thus the cable being just within the customer specifications does not mean zero
loss to the customer, as can be seen from Fig. 11.1. Reducing the manufacturing
tolerance to ±4 mV would save society $(312.50 − 50.00), or $262.50, per cable
sold.
On the other hand, one cannot justify reducing the manufacturing tolerance
below ±4 mV, because then the manufacturer's adjustment
cost ($50.00/adjustment) would exceed the consumer's new loss (found
again from the loss function curve) at the edge of the new limits. Above the
±4 mV manufacturing tolerance, the consumer's loss becomes higher than the
manufacturer's cost. Hence, the manufacturing tolerance should be ±4 mV.
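The cable arithmetic can be checked in a few lines; everything here follows directly from the figures quoted in the text.

```python
import math

# Customer loss of $312.50 at the edge of the ±10 mV specification
# gives k = 312.5/10^2 = 3.125 $/mV^2; a $50 factory adjustment cost
# then implies a manufacturing tolerance of sqrt(50/3.125) = 4 mV.
k = 312.5 / 10**2                       # $/mV^2
loss_at_spec_edge = k * 10**2           # $312.50 at y = 130 mV
mfg_tolerance = math.sqrt(50.0 / k)     # 4.0 mV
saving = loss_at_spec_edge - k * 4**2   # $262.50 saved per cable
```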
Note again the fundamental distinction between the traditional perception of
customer specifications and the notion based on loss functions put forth by Taguchi.
In the traditional viewpoint, meeting specifications implies that everything is
OK. By contrast, in the loss function viewpoint, society incurs a loss of $312.50
even if the sold cable falls just within the edge of the 120 ± 10 mV specification
limit (see Fig. 11.1).
Fig. 11.1 Contrast between the conventional view and Taguchis view of
societys losses when performance deviates from the target.
Since one has two different (asymmetric) tolerances here (δ₁ and δ₂), one will also
have two different loss functions: one for loose shirts and the other for tight shirts.
Let the cost of re-stitching, delay, travel, etc. to get a loose shirt adjusted
together be D₁, and that for a tight shirt be D₂. For simplicity one may
assume D₁ = D₂ = $4.00 per adjustment. Hence the two sides of the loss function
L(y) involve the constants k₁ = 4.00/(δ₁)² and k₂ = 4.00/(δ₂)² (see Fig. 11.2).
Figure 11.2 shows that k₁ and k₂ define the two sides of the asymmetric loss
function. If market research establishes that δ₁ = 0.5 cm and δ₂ = 1.0 cm, then
the loss function will be

L(y) = 16(y − m)²,  y < m
     = 4(y − m)²,   y > m          (11.2.1)
Note here that a customer with neck size (mᵢ + δ₂) is free to buy the shirt of neck
size marked mᵢ₊₁ (the next higher size). Here the difference between mᵢ₊₁ (the size
of the next higher shirt) and (mᵢ₊₁ − δ₁) (the customer's actual neck size)
causes his loss. Figure 11.3 shows the appearance the loss function L(y) takes here.
At the customer neck size (mᵢ + δ₂) [which also equals (mᵢ₊₁ − δ₁)], the loss with
the tight shirt is 16(δ₂)² and the loss with the loose shirt is 4(δ₁)².
The transition relationship between δ₁ and δ₂ may be found by noting that
these two losses should be equal at the transition size (mᵢ + δ₂). Thus,
if one produces shirts at 1-cm intervals, say 39, 40, 41, 42, then the losses for the
customer who has the neck size 40.33 cm would be equal regardless of whether he
buys a shirt of 40 cm size or one of 41 cm size. This evaluation of δ₁ and δ₂
provides a way to determine at what actual neck size customers would move up
to the next higher stamped neck size (see Fig. 11.3).
Step 2: Determination of the manufacturing size interval. Before one proceeds,
one needs to make the rationality assumption here. This assumption
implies that every customer will (a) choose to reduce his inconvenience to a minimum,
and (b) have the purchased shirt's collar adjusted when the size deviation exceeds
the δ₁ or δ₂ tolerance.
Fig. 11.3 The shape of the loss functions determines the transition point r.
If customer neck sizes are distributed uniformly between, say, 40 cm and
41 cm, then with a 1-cm size interval the average loss per shirt purchased is

16 ∫[40.00, 40.33] (y − 40)² dy + 4 ∫[40.33, 41.00] (41 − y)² dy = 16(0.33)³/3 + 4(0.67)³/3
                                                                 = $0.593
In the above calculation one assumed shirts to be available at sizes 40.00 cm and
41.00 cm. Note that for every shirt not at the exact customer neck size y, one
assumes a loss equal to the quantity given by the loss function equation (11.2.1).
With the availability of shirts reduced to 2-cm size intervals (e.g. 40, 42, 44),
the average loss per shirt purchased (using the new δ₁ and δ₂) changes to

(1/2) 16 ∫[40.00, 40.67] (y − 40)² dy + (1/2) 4 ∫[40.67, 42.00] (42 − y)² dy = $2.37

The multiplier (1/2) comes from the density of customers at the different neck sizes,
spread now over 2-cm intervals.
Suppose now that the manufacturer's extra cost of stitching and selling one
extra neck size is $1.80/shirt. (This extra cost consists of the additional inventory
and distribution cost per extra size.) Therefore, the loss to society per shirt sold at
1-cm intervals would equal (0.593 + 1.80) or $2.393, and the loss with 2-cm
intervals would equal $2.37.
By trial and error with other size intervals, it may be shown that the ideal
manufacturing interval is 1.80 cm. Note that this interval (1.80 cm) exceeds the customer
tolerance (δ₁ + δ₂); comfort guides decisions about customer tolerance.
Therefore, some shirts purchased by customers will now need re-tailoring. But, at
the 1.80-cm size interval, society's total cost will be minimum.
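The averaging above generalizes to any size interval w. A minimal sketch, using the loss constants 16 and 4 of Eq. (11.2.1) and the equal-loss transition point, which splits the interval into δ₁ = w/3 on the 16-side and δ₂ = 2w/3 on the 4-side:

```python
# Average customer loss per shirt for a manufacturing size interval w (cm),
# with neck sizes uniformly distributed over the interval. The transition
# point makes 16*delta1^2 = 4*delta2^2, so delta1 = w/3 and delta2 = 2w/3.
def average_loss(w):
    d1, d2 = w / 3.0, 2.0 * w / 3.0
    # integrate 16*u^2 over [0, d1] and 4*u^2 over [0, d2], then average
    # over the interval width w
    return (16 * d1**3 / 3 + 4 * d2**3 / 3) / w
```

Evaluating this reproduces the two figures worked out in the text: average_loss(1.0) is about $0.593 and average_loss(2.0) about $2.37, and since the loss grows as w², the balance against the per-size stocking cost determines the ideal interval.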
Fig. 11.4 Glass pane size variations.
where k = A₀/δ₀² specifies the loss function. Now if the store incurs a loss equal
to A in replacing a glass that is returned and re-cut or replaced, then the store's
cutting tolerance may be found as follows:

L(y) = [A₀/δ₀²](y − m)² = A
L̄ = (1/n)[L(y₁) + L(y₂) + ... + L(yₙ)]
  = (k/n)[(y₁ − τ)² + (y₂ − τ)² + ... + (yₙ − τ)²]          (11.3.1)
2. Loss kσ². This results from the performance values {yᵢ} of the individual items
being different from their own average μ.
Thus the fundamental measure of variability is the mean squared error of Y
(measured from the target τ), and not the variance σ² alone. Therefore, the reader
should note that ideal performance requires perfection in both accuracy (implying
that μ be equal to τ) and precision (implying that σ² be zero).
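The split of the mean squared error into the accuracy part (μ − τ)² and the precision part σ² is an exact algebraic identity, and can be verified numerically; the sample values below are hypothetical.

```python
# Average loss per item, computed directly from Eq. (11.3.1) ...
def average_loss(ys, tau, k=1.0):
    n = len(ys)
    return k * sum((y - tau)**2 for y in ys) / n

# ... and via the bias^2 + variance decomposition of the mean squared error.
def bias_variance_form(ys, tau, k=1.0):
    n = len(ys)
    mu = sum(ys) / n
    var = sum((y - mu)**2 for y in ys) / n   # population variance about mu
    return k * ((mu - tau)**2 + var)

ys = [9.8, 10.1, 10.4, 9.9, 10.3]            # hypothetical measurements
```

Both routes give the same number for any data set, which is why on-target accuracy and small variance must both be achieved to minimize the loss.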
11.4 SUMMARY
EXERCISES
A manufacturer of quartz watches uses inspection to screen defective watches
before they are shipped. Watches rejected by the inspector are reset by the factory.
If a sold watch turns out to perform outside the warranted ±5 s/month tolerance,
the customer is entitled to a replacement. However, a replacement costs the customer
$25.00/watch net in postage and inconvenience.
1. Determine the loss function.
2. If the cost of setting a watch is $2.00 at the factory, verify that the inspector
should use the tolerance limit ±1.41 s/month. Discuss why he should not use the
±5 s/month limit.
3. If the mass-produced watches submitted for inspection show an average deviation
of +10 s/month from perfect performance, with a standard deviation of 5 s/month,
and if the factory produces 10,000 watches per month, estimate the total monthly
loss caused to society if the factory ships the watches uninspected.
4. Suppose that the manufacturer uses a different production method that reduces
the average deviation to 0 s/month while retaining the 5 s/month standard deviation.
If the manufacturer now ships the watches uninspected, verify that the reduction in
society's total loss will be $1,000,000/month.
5. If the performance of the watches produced has the normal distribution, estimate
the number of watches produced per month under the new method (yielding 0 s/month
average and 5 s/month standard deviation) that will fail to meet the ±1.41 s/month
tolerance limit. How many of these watches will exceed the ±5 s/month customer
tolerance?
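A quick numerical check of Exercises 1 to 4 can be sketched as follows, reading the warranty limit as ±5 s/month and the customer's replacement cost as A₀ = $25.

```python
import math

# Exercise 1: loss function constant k = A0/delta0^2 = 25/5^2 = 1 $/(s/month)^2.
k = 25.0 / 5**2

# Exercise 2: with a $2.00 factory reset, the inspector's tolerance is
# sqrt(2.00/k) ~ 1.41 s/month.
factory_tol = math.sqrt(2.00 / k)

# Exercises 3 and 4: expected loss per watch is k*(mu^2 + sigma^2);
# 10,000 watches/month shipped uninspected.
loss_old = k * (10**2 + 5**2) * 10000   # mean deviation +10, sd 5
loss_new = k * (0**2 + 5**2) * 10000    # mean deviation 0, sd 5
saving = loss_old - loss_new            # monthly reduction in society's loss
```

This confirms the figures the exercises ask the reader to verify: a ±1.41 s/month inspection limit and a $1,000,000/month reduction in society's total loss.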
Total Quality Management and
Taguchi Methods
This concluding chapter sums up the motivations, opportunities, and methods in
today's renewed push toward quality, termed Total Quality Management (TQM)
by industry and the Quality Loop in the ISO 9000 quality standards system [10].
The chapter then locates Taguchi methods within the overall framework of TQM.
One finds certain distinct attitudes, principles, and methods in TQM that are
fundamentally different from how one has managed quality traditionally. The
distinctions are four-fold. First, TQM demands top management commitment to the
quality of the products and services the organization offers. Secondly, it requires
a high sensitivity to what the customer demands. Thirdly, TQM needs the use of
systematic and superior methods, which usually include statistical methods, to
solve quality problems. Finally, it requires company-wide participation: the
integration of activities in design, development, production, procurement, and
customer service groups, as these relate to quality. Here one would resolve
problems not only with methods and technologies, but also by giving people a
chance to contribute. The operators on the line probably know more about what
can go wrong with production than anybody else.
TQM provides some methods for translating these new realities into
opportunities and business results. TQM is the companywide integration of quality
development efforts, quality maintenance efforts, and continuous quality
improvement efforts (see Table 12.1, summarized from [30]). The essence of TQM
is reflected in the contemporary quality management standards and norms of
excellence [10, 27]. TQM draws together all activities pertinent to quality. It directly
engages marketing, R&D, engineering, production, and after sales service for the
common goal of satisfying the customer [9].
TABLE 12.1
HOW TQM CONTRIBUTES TO QUALITY, EFFICIENCY AND RESPONSIVENESS
QUALITY: Management Commitment, Leadership by Strategic Planning,
Customer Focus, Total Participation, Systematic Analysis
EFFICIENCY: Management Support, Motivation, Internal Customers,
Resource Utilization, Quantitative Analysis
RESPONSIVENESS:
Fig. 12.1 Xbar and range control charts to achieve statistical control of
manufacturing defects.
have since been much studied and used and their utility documented and illustrated
[9, 22].
Since materials and parts form a key input to the effective performance
of many products, in the late 1930s experts also devised methods to guide acceptance/
rejection decisions for large shipments of goods: truckloads of
raw materials, steel plates, electronic components, supplies, artillery shells, etc.
Generally, one could not inspect each item in these shipments economically; sometimes
the required test would even destroy the item. The methods created here,
which are statistical in nature, came to be termed sampling plans.
Thousands of factories use control charts and sampling plans as their mainstay
in QA even today [9, 28]. One should note, however, that these two methods aim
primarily at appraisal (of what one produces or buys) and on-line adjustment (of
the process parameters) to assure quality. Embodied in SPC, these two QA methods
provide some, but only limited, prevention of quality problems. Prevention, we have
since learned, is done best through robust design.
Designing products and processes with the explicit objective of preventing
quality problems is relatively new as a QA procedure. This approach takes aim at
the roots of variability, the primary cause of poor product or process performance.
As explained in this book, this approach requires systematic experimentation with
design and process parameters to uncover how sensitive the quality characteristics
are to the parameters the plant operator controls, and to those uncontrolled, regarded
as noise. Such experiments may involve the designers, R&D, manufacturing
As shown by the Quality Loop [10], TQM helps enhance the saleability of
products and services. It balances quality levels against the costs of maintaining
them. The product meets customer wants in both (a) satisfactory performance and
(b) price. The second gain (b) results from the optimization of efforts to deliver
quality at the lowest justifiable cost. This contributes to the company's growth
and long-term survival.
TQM increases producibility. Quality experience (acquired from product
performance feedback from the field) guides design and manufacturing engineers.
This feedback enables them to systematically achieve what the customer needs
and to produce it repeatedly at an acceptable cost. Also, the relationship between
product design standards and the quality capabilities of the plant becomes consciously
designed and established.
TQM increases productivity. A positive and commanding control of quality
results, rather than after-the-fact reaction to deviations and re-work of failures.
design. Based on the feedback from the pioneering applications and on field data,
these methods appear to have the potential to provide industry the largest pay-offs
among all known QA methods. Though new in the domain of QA, these methods
use sound and well-established statistical theory to design and develop high-
performance products and reliable processes that cost less to use and
operate over their lifetime. Few understand Taguchi methods yet. However, what
the pioneers have achieved has already spurred others working in the chemical, electrical
and electronic, mechanical, and metallurgical engineering industries [7, 19].
Starting about 1984, AT&T has used Taguchi methods in product/process
development. The fabrication of the WE32100 microprocessor, the WE4000
microcomputer, the WE 256K chip, and other VLSI products are among the
examples here [5, 7, 14, 19, 29]. Taguchi methods helped cut the response time of
UNIX V by a factor of 3. Developers have also used Taguchi methods to create effective
personnel appraisal systems.
TABLE 12.2
INDUSTRY EXAMPLES OF TAGUCHI APPLICATIONS OUTSIDE JAPAN
counterproductive. One has to get out of this rut, because one-factor-at-a-time studies
have no scientific basis. In certain aspects of process troubleshooting and design
optimization there is no substitute for sound quantitative model building, and statistics
provides the only valid tools for establishing reliable cause-effect relationships
empirically.
A refresher course in the elements of statistics and the design of experiments
for its design and manufacturing engineers and R&D scientists is where an enterprise
should start [7]. Merely rudimentary knowledge of statistics is decidedly not enough
to effectively complete a Taguchi-type project. If it lacks in-house expertise, the
company should hire an external expert who can answer technical questions and
guide the planning and conduct of the first project.
A programme to initiate the use of Taguchi methods in an enterprise consists
of the following preparatory steps in training and orientation [7]:
A simple explanation of quality loss.
TOTAL QUALITY MANAGEMENT AND TAGUCHI METHODS 181
7. For each factor, selecting the level that maximizes the S/N ratio and hence
reduces variability and optimizes the design/process.
8. Using the factors (possibly more than one) that do not increase the
variability (i.e., have a flat S/N response) to adjust the mean performance to the
desired target.
9. Performing the confirmation/verification experiment using the factor levels
selected in Steps 7 and 8.
10. If the results appear satisfactory, one stops here. Otherwise, one may pick
different factors or treatment levels and redo Steps 1-9. In certain cases the study
might require the use of RSM to improve robustness.
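Steps 7 and 8 above can be sketched numerically. The sketch below is illustrative only: the observations and factor levels are hypothetical, and the nominal-the-best ratio 10 log₁₀(ȳ²/s²) is used for the S/N computation.

```python
import math

# Nominal-the-best S/N ratio for a set of replicated observations.
def sn_nominal_best(ys):
    n = len(ys)
    ybar = sum(ys) / n
    s2 = sum((y - ybar)**2 for y in ys) / (n - 1)   # sample variance
    return 10 * math.log10(ybar**2 / s2)

# Hypothetical data: observations grouped by the level of one factor.
levels = {1: [9.1, 10.8, 10.2], 2: [10.1, 9.9, 10.0]}
sn = {lvl: sn_nominal_best(ys) for lvl, ys in levels.items()}

# Step 7: pick the level with the largest S/N ratio.
best_level = max(sn, key=sn.get)
```

Here both levels give roughly the same mean, but level 2 scatters far less, so its S/N ratio is much larger and it would be selected; a factor whose levels all gave similar S/N ratios would instead be reserved for Step 8, adjusting the mean onto target.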
TABLE 12.3
THE WHEN, HOW, AND WHY OF TAGUCHI PROJECTS
Taguchi methods consciously push quality back to the design stage [7]. A
robust product design, be it an electric motor or a software product, minimizes defects
caused by design, materials, production, and the uncontrolled factors present in the
field. A robust process design cuts the ongoing effort in process controls (Fig. 12.2)
required to minimize manufacturing imperfections [14].
Fig. 12.2 An improved (robust) process design cuts a companys
on-going efforts in process control.
Appendix A: Standard Normal, t, Chi-square,
and F-Tables
TABLE A1
CUMULATIVE DISTRIBUTION OF THE STANDARD NORMAL RANDOM VARIABLE z
z    .00   .01   .02   .03   .04   .05   .06   .07   .08   .09
.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
.7 .7580 .7611 .7642 .7673 .7703 .7734 .7764 .7794 .7823 .7852
.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9278 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9430 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9648 .9656 .9664 .9671 .9678 .9686 .9693 .9700 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9762 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9874 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940. .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9987 .9987 .9988 .9988 .9989 .9989 .9989 .9990 .9990
TABLE A2
CRITICAL VALUES OF THE STUDENT t-DISTRIBUTION
TABLE A3
CRITICAL VALUES OF THE χ² DISTRIBUTION
rf *0
CM f^>
%
CM f0
m
CM
CN
CN m
CN
CN rt H rn
8
CN fO
(N fO
On cn CO CM vn 00 0CN
0m vri r~~
2.34
2.31
On
^ T VO n X Tf m n ih cn w
rs Tf CM
A
CM CN #
fn **)
% (N a (N K H rt
cm v SO )0 i-H m in w i N r- 3 Tf V w 00 ^ NO
cn
CM rj m rj
oo ^ r- VO q V) % in%h- cn ^ cn ^ ^ CN PN CN
CN Tf CM Tf CM Tf rs pn (N PO cn m cn pn cn pn CN t+> cn m cn pn CM
vO ,f
00 l/l
\o C^ osO0\ m a
m
ON
^1 VO
m 9\
Tf *>
OO pn
cn T
in r->
cn n
CM w4
m
VO
m <1
r- so
A *0 a
c4 Tf CM Tf CM Tf CM Pn CM pn CN pn CM pn CM fO CM pn CN cn CM m CM pn
p*v O
00
9v CM in pn
SO On
in
00
in ??
vO cn PM O cn
in
m
r n
r- * *}
CM ?pn
n
3 % 4 A *
cn CM Tf CM Tf cn ^r CM cn CM pn CM cn CM P<> CN pn CM pn CM cn
cn
on r-
in
00
f- or-w vO
Os On
in
in
in h
o i
m
00 >n CM l-H
Tf
in
r- n
CN Tf T T
CM Tf
cn ^ CM CN * t CM cn CM fn CM 55 CM t*i CN pn CM pn CM pn
g > 8
CN n m v
oo # ON <N
r-
Tf
r- N
r- *
vO iH
'O
cn 5
vO Os
O
VO
r-
in 9
in vo
in r>
**
Ov VO
cn i/) m
Tf CN
cm CM Tf CM CM CM Tf CM cn CN pn (N pn CM cn
O M
<N m CN VOOl
ON so
VO
On
in
00 3 00#
r^
r-4 <N
Tf h
r- H
CO
vO ?
r5 in
1 4
a
$
I/)
O
m
A
9
Tf CN ^
CN CM Tf CM Tf CM
r-
CN
CM
CM
8 s
CM cn
no h
cn v vO 00
< f*
) 'P
O#
0N r-
o h
vO r*
ON >
cn
ON
o
On
r-
00
fn Tf r-
00* n
CM
00 m
CM nA #
cn t/5 CO in i^ cn in cn Tf m CN CM CN Tf CN ^r CM Tf CM Tf
ON PM ON in Tf v O' *s O 00 m w n fs
in <1 5 m CNt CM M 2 8 GJ j O 00 o oq
cn V cn in cn IS> cn m m IA
%
cn in
cn in m i/>
A A
cn ia m Tf
A
Tf in
00t '
in t*i
r- n r-
NO
O
vO X S $ tj: m
on
^ m Tf s A
<
X) X
cn h
in
m
(N CM
cn 4
m
Tf On Tj- Ov Tf
rj-
90
*
T f 06 Tf 00 rf Tf X Tf 00
A
Tf X Tf X
a
Tf
<S in NO r- 00 On
22
20
CM
TAGUCHI METHODS EXPLAINED PRACTICAL STEPS TO ROBUST DESIGN
'O \fi
r- ps COfs
c- H 00 ON cn 00
g
- H - H CM 1H
V o#
- h ^
> 9\
r*
r- % * 3
- (N ocn fS NO <-HI/)
9
t- M < ?H s CS 1
H
CS
On pM
r-- o V
r*O
* t^- r- fM oo cn
Tt PO VM
O C
(N Os r-
! PS C s|
o rs ~ r4 iM f-H
* *
"O
.ohi r00
-J P
h-O O fO
oo m r- 9\
ps CM On CM 00
cn <N (N TJ
*
cs
S
O
- rl - ^ -J H
ao>
d rJ
o
r- oo ^ O
oo PS
po tn
in rM in
cn <
N
m O
cn 00
CN^vH
2 - ri -J H -H
*o
JO 4 'O
oo 3rr o..o-A C
^M
t 00
T3
*or
00
oo OO s C^) VO
cn in
rn c
m
WJ
.5 &
'w' g
O ps
r 3 ^n Ifi
fO
c k, O
oon \ r- V) T
*
A!
O ^
-I r4
00 TP in in
rf CM
4 O Ov
vH
#
*0
C S
O s N
8 x4>* om OO
NN'O rj-
ONX I/) cm O
NOn
4 *n*
<
inN ON 0\O
5? N
- ri ~ H 2^j 1-H
W X! h N
a t!
t}-
CS r4 5
On OO
nPS
NO Tt cn
VO+ r^
in Tn
^f-
in C-S 9
W t^.\
- H
XJ3 3
Z C OXcd) ocj P rJ
o rrr-* oo oo
so CM
VO oo r- r-
O -O ri ri H <s
8
cs ri 9 s m v-', oe
O
B3 VO O v
T-. oc g S8 S 00 in
00 in
r-a oO
V> VO ^
von
cs r* r4 ps + 3 S
IB
b r-4 ps
as
TT HW 1 s & co po
* OS
<o oOn O
r-N CM or- C\ O
< Q l j CS PS (N H r4 H r- 9 vo
w .b -g *N
j CM oCS r- OO VO in in O
CO a
CS f*>
r4 CS fS
o 00 o00
9 r-O vo
4 -H
r *n o
r-
< O 73 o ^ ^ f4
H w -G ^ 13 o
-* O Tj- TI
CS <N OO OO cn
CN a s On OO 00 OO O
oo ON
3 r i p<5 (N f^5 r- fM
< s *Z fN PO T-^ fH 9
3 -3
> 3 O O w* vO
J G X ) CS N CM^ Tf PO
fS 8 CM
On
r-
oo in cn cm
oo oo
< as <N PO r4 CM <*> CM 9
oo cn
fS|
U
HH
H g ON Cr*-S>
^ ocn 00 H
CM N
r-
o
r-*
ONa
CM
ON o
Os
ON
oo
OO w*
2
10
ri po rJ po CM
CM
# #
OO ^
i CM
u II
a I
OO <oonw rf VO NO
CO m f^>
cn
Ot
00
ON
VO
On
in
ON TT w*
ON 1/)
r-i po r-i po CM f*> fS
CM N
*
%
9
r4
I a v ^) mo
r- Tf IT o
CS O in
o
cn
O o o 3
XJ -H cs po (N PO fS PO
CM rsi CM CM CS cs ri
.5 || m) r*h
o a NO V *. so ON PO
Tf \c ON
CMt Ov
M Tt <Nf O
8 S
ri po H m rj po CM CM CM
w
ri
<
CS r-i ri
2.80 2.64
3.94
2.78 2.62
5 O
4.22 3.90
o S ? m VO
CM
cn
CM
CS n
w>
fS PO
(N
r4
a
CM
CS cs
CN CM CS ri po
9Z'P
NO 00 VO ON 00 ps
fH o 5? cn cn co PO
Cn| "T
(N
CM
A
CM CM rj ri po
O
TJ w
*4
u. co NO
O t^ - o f2 on ge
ON ^ On O in CM Q ae
cn 't cn CM Tf
r-*
CM
r-
CM
VO VO NO 'O h'
CM CM CS CS PO
oG
(N (N
Tf# A OO tN 00 CM
9 cn g S O 8 On
Vi cn / cn l/v cn i/) m
cn
cn cn cn
On
ri
b
00 rr t^
7.88
,^J-
4.26
7.82
X! <N CM l>
cn
O o> VO in
O %
h-
Tr
ON 00 00 oo S 3
cO S cn cn cn cn cn vc
PJ
H S CO
1000
200
400
n 8
50
<N rJ CM 8
Appendix B: Selected Orthogonal Arrays
and Their Linear Graphs
TABLE B1
L4 (2³) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3
1 1 1 1
2 1 2 2
3 2 1 2
4 2 2 1
1 o--3--o 2
Fig. B1
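To make the notation concrete, here is a small sketch (ours, not from the book) of how three 2-level factors might be mapped onto the columns of the L4 array; the factor names and level settings are invented for illustration.

```python
# Hypothetical example: assign three 2-level factors to the L4 (2^3) array.
# Factor names and level settings below are invented, not from the text.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

# column index -> (factor name, setting at level 1, setting at level 2)
factors = {
    0: ("temperature_C", 150, 200),
    1: ("pressure_bar", 1.0, 1.5),
    2: ("time_min", 30, 45),
}

for run, row in enumerate(L4, start=1):
    # row[c] is 1 or 2, so it indexes directly into the (name, lv1, lv2) tuple
    settings = {factors[c][0]: factors[c][lv] for c, lv in enumerate(row)}
    print(f"Experiment {run}: {settings}")
```

Each row prescribes one complete set of factor settings, so four runs exercise all three factors in a balanced way: each column holds each level exactly twice.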
TABLE B2
L8 (2⁷) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4 5 6 7
1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
3 1 2 2 1 1 2 2
4 1 2 2 2 2 1 1
5 2 1 2 1 2 1 2
6 2 1 2 2 1 2 1
7 2 2 1 1 2 2 1
8 2 2 1 2 1 1 2
(a) (b)
Fig. B2
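A quick way to see what "orthogonal" means here is to check, for every pair of columns of L8, that each of the four level combinations occurs equally often. The check below is our sketch, not a procedure from the book.

```python
from itertools import combinations

# The L8 (2^7) array from Table B2.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

def is_orthogonal(array):
    """True if every pair of columns shows each level pair equally often."""
    n_rows, n_cols = len(array), len(array[0])
    for i, j in combinations(range(n_cols), 2):
        counts = {}
        for row in array:
            counts[(row[i], row[j])] = counts.get((row[i], row[j]), 0) + 1
        # 2-level columns give 4 combinations, each expected n_rows/4 times
        if set(counts.values()) != {n_rows // 4}:
            return False
    return True

print(is_orthogonal(L8))  # True
```

Because of this balance, the effect estimated from any one column is not biased by whichever factor is assigned to any other column.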
TABLE B3
L9 (3⁴) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4
1 1 1 1 1
2 1 2 2 2
3 1 3 3 3
4 2 1 2 3
5 2 2 3 1
6 2 3 1 2
7 3 1 3 2
8 3 2 1 3
9 3 3 2 1
1 o--(3, 4)--o 2
Fig. B3
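As a sketch of how an L9 experiment is analysed (our illustration; the response values below are invented, not data from the book), the main effect of each factor is estimated by averaging the response over the three runs at each level:

```python
# The L9 (3^4) array from Table B3; y holds hypothetical responses.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]
y = [8.2, 7.9, 7.5, 9.1, 8.8, 9.0, 9.9, 10.2, 9.6]  # invented responses

def level_means(col):
    """Mean response at each level (1-3) of the factor in 0-based column col."""
    return {
        lv: sum(y[i] for i, row in enumerate(L9) if row[col] == lv) / 3
        for lv in (1, 2, 3)
    }

for col in range(4):
    print(f"Column {col + 1} level means:", level_means(col))
```

Because the array is balanced, each average is based on exactly three runs, and the level means for one column are not biased by the settings of the other three.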
TABLE B4
L12 (2¹¹) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4 5 6 7 8 9 10 11
1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 2 2 2 2 2 2
3 1 1 2 2 2 1 1 1 2 2 2
4 1 2 1 2 2 1 2 2 1 1 2
5 1 2 2 1 2 2 1 2 1 2 1
6 1 2 2 2 1 2 2 1 2 1 1
7 2 1 2 2 1 1 2 2 1 2 1
8 2 1 2 1 2 2 2 1 1 1 2
9 2 1 1 2 2 2 1 2 2 1 1
10 2 2 2 1 1 1 1 2 2 1 2
11 2 2 1 2 1 2 1 1 1 2 2
12 2 2 1 1 2 1 2 1 2 2 1
Interaction between any two columns is confounded partially with the remaining nine columns.
Do not use this array if you are aiming to estimate interactions.
Fig. B4
TABLE B5
L16 (2¹⁵) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2
3 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
4 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1
5 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
6 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1
7 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1
8 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2
9 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2
10 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1
11 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1
12 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2
13 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1
14 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2
15 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2
16 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1
Fig. B5
TABLE B6
L16 (4⁵) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4 5
1 1 1 1 1 1
2 1 2 2 2 2
3 1 3 3 3 3
4 1 4 4 4 4
5 2 1 2 3 4
6 2 2 1 4 3
7 2 3 4 1 2
8 2 4 3 2 1
9 3 1 3 4 2
10 3 2 4 3 1
11 3 3 1 2 4
12 3 4 2 1 3
13 4 1 4 2 3
14 4 2 3 1 4
15 4 3 2 4 1
16 4 4 1 3 2
To estimate the interaction between columns 1 and 2, keep all other columns unassigned.
Fig. B6
TABLE B7
L18 (2¹ × 3⁷) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4 5 6 7 8
1 1 1 1 1 1 1 1 1
2 1 1 2 2 2 2 2 2
3 1 1 3 3 3 3 3 3
4 1 2 1 1 2 2 3 3
5 1 2 2 2 3 3 1 1
6 1 2 3 3 1 1 2 2
7 1 3 1 2 1 3 2 3
8 1 3 2 3 2 1 3 1
9 1 3 3 1 3 2 1 2
10 2 1 1 3 3 2 2 1
11 2 1 2 1 1 3 3 2
12 2 1 3 2 2 1 1 3
13 2 2 1 2 3 1 3 2
14 2 2 2 3 1 2 1 3
15 2 2 3 1 2 3 2 1
16 2 3 1 3 2 3 1 2
17 2 3 2 1 3 1 2 3
18 2 3 3 2 1 2 3 1
Interaction between columns 1 and 2 can be estimated without sacrificing any column.
Columns 1 and 2 can be combined to form a 6-level column. Interactions between any other
pair of columns are confounded partially with the remaining columns.
(a) (b)
Fig. B7
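The 6-level trick mentioned in the note to Table B7 can be sketched as follows (our illustration, not code from the book): each (2-level, 3-level) pair from columns 1 and 2 is recoded as a single level from 1 to 6.

```python
from collections import Counter

# Columns 1 and 2 of the L18 array (Table B7), listed run by run.
cols_1_2 = [
    (1, 1), (1, 1), (1, 1), (1, 2), (1, 2), (1, 2),
    (1, 3), (1, 3), (1, 3), (2, 1), (2, 1), (2, 1),
    (2, 2), (2, 2), (2, 2), (2, 3), (2, 3), (2, 3),
]

# Recode each (a, b) pair, a in {1,2} and b in {1,2,3}, as one level 1..6.
six_level = [3 * (a - 1) + b for a, b in cols_1_2]

# Balance is preserved: each of the 6 levels occurs in exactly 3 of 18 runs.
print(sorted(Counter(six_level).items()))
# [(1, 3), (2, 3), (3, 3), (4, 3), (5, 3), (6, 3)]
```

This is what allows a single 6-level factor to be studied in L18 without disturbing the remaining 3-level columns.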
TABLE B8
L25 (5⁶) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4 5 6
1 1 1 1 1 1 1
2 1 2 2 2 2 2
3 1 3 3 3 3 3
4 1 4 4 4 4 4
5 1 5 5 5 5 5
6 2 1 2 3 4 5
7 2 2 3 4 5 1
8 2 3 4 5 1 2
9 2 4 5 1 2 3
10 2 5 1 2 3 4
11 3 1 3 5 2 4
12 3 2 4 1 3 5
13 3 3 5 2 4 1
14 3 4 1 3 5 2
15 3 5 2 4 1 3
16 4 1 4 2 5 3
17 4 2 5 3 1 4
18 4 3 1 4 2 5
19 4 4 2 5 3 1
20 4 5 3 1 4 2
21 5 1 5 4 3 2
22 5 2 1 5 4 3
23 5 3 2 1 5 4
24 5 4 3 2 1 5
25 5 5 4 3 2 1
To estimate the interaction between columns 1 and 2, keep all other columns unassigned.
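Rather than typing out all 25 rows, an array matching Table B8 can be generated arithmetically. The construction below (mod-5 linear combinations, shifted to the 1-5 levels used above) is a standard one and is our sketch, not a derivation from the book.

```python
# Generate the L25 (5^6) array: columns are a, b, b+a, b+2a, b+3a, b+4a
# (mod 5), with a and b running over 0..4, then shifted to levels 1..5.
def build_L25():
    rows = []
    for a in range(5):
        for b in range(5):
            row = [a, b] + [(b + k * a) % 5 for k in range(1, 5)]
            rows.append([v + 1 for v in row])
    return rows

L25 = build_L25()

# Spot-check a few rows against Table B8.
print(L25[0])   # [1, 1, 1, 1, 1, 1]   (experiment 1)
print(L25[6])   # [2, 2, 3, 4, 5, 1]   (experiment 7)
print(L25[24])  # [5, 5, 4, 3, 2, 1]   (experiment 25)
```

Every column contains each of the five levels exactly five times, which is the balance property the array's analysis relies on.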
Glossary

f(x) = [1/(σ√(2π))] exp{−[(x − μ)/σ]²/2}   (the normal probability density)
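As a numerical companion to the density above, the sketch below (ours, not from the glossary) evaluates f(x) and confirms by crude trapezoidal integration that the area under the curve is 1:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """f(x) = [1 / (sigma * sqrt(2*pi))] * exp(-[(x - mu) / sigma]**2 / 2)"""
    z = (x - mu) / sigma
    return math.exp(-z * z / 2.0) / (sigma * math.sqrt(2.0 * math.pi))

# Trapezoidal rule over +/- 8 standard deviations; the tails beyond that
# contribute a negligible amount.
n, lo, hi = 16000, -8.0, 8.0
h = (hi - lo) / n
area = h * (sum(normal_pdf(lo + i * h) for i in range(1, n))
            + (normal_pdf(lo) + normal_pdf(hi)) / 2.0)
print(round(area, 6))  # 1.0
```

The peak value f(μ) = 1/(σ√(2π)) ≈ 0.3989/σ, and about 68% of the area lies within one σ of μ.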
[1] Taguchi, G. and Don Clausing (1990): Robust quality, Harvard Business
Review, January-February, pp. 65-75.
[2] Fisher, R.A. (1925): Statistical Methods for Research Workers, Oliver &
Boyd, Edinburgh.
[3] Rao, C.R. (1947): Factorial experiments derivable from combinatorial
arrangements of arrays, J. Roy. Stat. Soc., Suppl., vol. 9, pp. 128-39.
[4] Taguchi, G. (1986): Introduction to Quality Engineering, Asian Productivity
Organization, Tokyo.
[5] Phadke, Madhav, S. (1989): Quality Engineering and Robust Design,
Prentice Hall, Englewood Cliffs, New Jersey.
[6] Kackar, R.N. (1985): Off-line quality control, parameter design and the
Taguchi method, Journal of Quality Technology, vol. 17, pp. 176-209.
[7] Bendell, Tony (1989): Taguchi Methods: Proceedings of the 1988 European
Conference, Elsevier Applied Science, New York.
[8] Taguchi, G. and Yu-In Wu (1979): Introduction to Off-Line Quality Control,
Central Japan Quality Control Association, Nagoya.
[9] Juran, J.M. and Gryna, F.M. (1988): Juran's Quality Control Handbook,
4th ed., McGraw-Hill, New York.
[10] ISO 9000 International Standard (1987): International Standards Organization,
Geneva.
[11] John, Peter, W.M. (1990): Statistical Methods in Engineering and Quality
Assurance, Wiley Interscience, New York.
[12] Caulcutt, R. (1990): Putting process changes to factorial test, Process
Engineering, July, vol. 71, pp. 46-47.
[13] Suh, Nam, P. (1990): The Principles of Design, Oxford University Press,
New York.
[14] Dehnad, K. (1989): Quality Control, Robust Design, and the Taguchi Method,
Wadsworth, CA.
[15] Lochner, R.H. and Matar, J.E. (1990): Designing for Quality, ASQC Press,
Milwaukee, WI.
[16] Tribus, M. and Szonyi, G. (1989): An alternative view of the Taguchi
approach, Quality Progress, May, vol. 22, pp. 46-52.
[17] Box, G.E.P. (1988): Signal-to-noise ratios, performance criteria, and
transformations, Technometrics, February, vol. 30, pp. 1-31.
Main effects (main factor effects), 16, 55, 61, 90, 92, 93, 142
Manufacturing
    cost, 5
    size intervals, 168
    tolerances, 164, 165
Mathematical models, 123, 147
Mazda, 81
Mean
    treatment, 54
    Sum of Squares (Mean SStot), 39, 48, 54
    Sum of Squares of deviations (Error Sum of Squares), 47
MIL-STD-105D, 178
Monte Carlo simulation, 100, 144, 155, 156, 159
Multiple objective optimization, 143
Multiple regression, 150
Nair, Vijayan N., 99
New York Stock Exchange, 30
Nippon Denso Company, 94
Noise factor array (noise OA), 81, 198
Noise factors, 79, 101, 198
Nominal-the-best criterion, 83
Non-linear effects, 98
Normal distribution, 21, 22
Off-line quality control, 2, 199
One-factor designed experiment, 44
One-factor-at-a-time studies, 179
On-line quality control, 2, 199
On-target performance, 4
Operating cost, 5
Optical filter manufacturing, 107
Optimization, 41, 70, 77, 79, 84, 107, 123
Orthogonal
    Arrays (OA), 42, 44, 72, 86, 90, 114, 199
    matrix experiments, 63
Orthogonally designed experiments, 9
Outer array (noise OA), 95
Parameter design, 12, 14, 15, 199
Parametric
    experiment plan, 83
    optimization experiments, 109
Pareto optimality, 144, 155
Partial factorial designs, 62
Performance
    process, 10
    product, 10
PerMIAs, 136, 153
Phadke, M.S., 140, 159
Population, 18, 199
Prediction model, 121, 122
Prevention
    by quality design, 11
    cost, 3
Probability, 18
Process
    control, 182, 183
    design, 178
    performance, 176
Product
    design, 178
    performance, 10, 176
    producibility of, 177
    saleability of, 176
Productivity, 177
QFD (Quality Function Deployment), 4, 173, 199
Quadratic loss function, 81
Quality
    definition of, 1, 6, 173
    engineering, 2, 199
    in design, 79
    loop, 176
    management methods, 174
Random
    factors, 178
    sample, 18, 22
    variable, 21, 22
Randomization of experiments, 34, 45, 200
Variability
    between-factor, 40, 47
    between-treatment, 50
    in quality, 8, 104, 175
    of performance, 70, 79, 202
    within-factor, 40, 47
    within-treatment, 48, 50
Variance
    definition of, 19, 26
    effects, 70
    pooled, 27
Variation
    between treatments, 40
    within treatment, 40
Verification experiments, 65, 98
VLSI manufacture, 4
Yates procedure, 67
Z-statistic, 29
Z-test, 31
Divided into 12 easy to read chapters, the
book distills the methods and experience of
those in industry who introduced and then
embraced Taguchi Methods as a regular
part of their own product/process innovation
effort. It ends by linking Taguchi Methods
with TQM (Total Quality Management), and
by providing an improved process design
to upgrade product/process engineering
capability within an organization.
Prentice-Hall of India Private Limited
New Delhi